
Development of an Integrated Simulation System for Design of Speech-Centric Multimodal Human-Machine Interfaces in an Automotive Cockpit Environment

Author Affiliations
Yifan Chen, Basavaraj Tonshal, James Rankin

Ford Motor Company, Dearborn, MI

Fred Feng

University of Michigan Transportation Research Institute, Ann Arbor, MI

Paper No. DETC2016-59309, pp. V01AT02A004; 9 pages
doi:10.1115/DETC2016-59309
From:
  • ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
  • Volume 1A: 36th Computers and Information in Engineering Conference
  • Charlotte, North Carolina, USA, August 21–24, 2016
  • Conference Sponsors: Design Engineering Division, Computers and Information in Engineering Division
  • ISBN: 978-0-7918-5007-7
  • Copyright © 2016 by ASME

Abstract

In the past two decades, various CAE technologies and tools have been developed for the design, development, and specification of the graphical user interface (GUI) of consumer products, both in and outside the automotive industry. The growing deployment of speech interfaces by automotive manufacturers, and the resulting increase in speech-based interaction, requires that this work be extended to speech interface modeling, an area where both technologies and methodologies are lacking.

This paper presents our recent work on developing a speech interface module integrated with an existing GUI modeling system. A multi-contour seat was used as the testbed for this work. The prototype allows a user to adjust the multi-contour seat with a touchscreen GUI, a steering-wheel-mounted button coupled with an instrument cluster display, or a speech interface.
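To make the modality-independent design concrete, below is a minimal Python sketch, not taken from the paper, of how the three input paths could funnel into a single seat-adjustment command type. All names here (SeatCommand, Modality, adjust_seat, the contour labels) are hypothetical illustrations.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Modality(Enum):
    """The three input paths supported by the prototype."""
    TOUCHSCREEN = auto()
    STEERING_WHEEL_BUTTON = auto()
    SPEECH = auto()


@dataclass
class SeatCommand:
    """A modality-independent multi-contour seat adjustment request."""
    contour: str    # e.g. "upper_lumbar" or "side_bolster" (illustrative names)
    direction: int  # +1 to inflate, -1 to deflate


def adjust_seat(cmd: SeatCommand, source: Modality) -> None:
    """Single handler: every modality ends up issuing the same command type."""
    # Placeholder for the actual seat-actuator interface.
    sign = "+" if cmd.direction > 0 else "-"
    print(f"{source.name}: {cmd.contour} {sign}")


# A touchscreen press and a recognized utterance produce the same command:
adjust_seat(SeatCommand("upper_lumbar", +1), Modality.TOUCHSCREEN)
adjust_seat(SeatCommand("upper_lumbar", +1), Modality.SPEECH)
```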

The speech interface modeling began with an initial language model, developed by interviewing both expert and novice users. The interviews yielded a base corpus and the linguistic information needed for an initial speech grammar model and dialog strategy. After the module was developed, it was integrated into the existing GUI modeling system such that human voice input is treated as a standard system input, similar to a press on the touchscreen. The multimodal prototype was used in two customer clinics. In each clinic, we asked subjects to adjust the multi-contour seat using the different modalities: the touchscreen, the steering-wheel-mounted buttons, and the speech interface. We collected both objective and subjective data, including task completion times and customer feedback. Based on the clinic results, we refined both the language model and the dialog strategy. Our work has proven effective for developing a speech-centric, multimodal human-machine interface.
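As an illustration of what a small command grammar for this task might look like, here is a rule-based Python sketch. The vocabulary (inflate/deflate, the contour phrases) and the parse_utterance helper are assumptions for illustration only; the paper's actual corpus and grammar came from the user interviews, and a production system would encode the grammar for a speech recognizer rather than match text.

```python
import re

# Illustrative command vocabulary for multi-contour seat adjustment.
ACTIONS = {"inflate": +1, "increase": +1, "deflate": -1, "decrease": -1}
CONTOURS = {"upper lumbar", "lower lumbar", "side bolster", "cushion"}

PATTERN = re.compile(
    r"^(?:please\s+)?(?P<action>{})\s+(?:the\s+)?(?P<contour>{})$".format(
        "|".join(ACTIONS), "|".join(sorted(CONTOURS))
    )
)


def parse_utterance(text: str):
    """Map a recognized utterance to a (contour, direction) command, or None."""
    m = PATTERN.match(text.strip().lower())
    if not m:
        # Out-of-grammar utterance: this is where a dialog strategy
        # (re-prompting or clarification) would take over.
        return None
    return m.group("contour"), ACTIONS[m.group("action")]


print(parse_utterance("Please inflate the upper lumbar"))  # ('upper lumbar', 1)
print(parse_utterance("decrease side bolster"))            # ('side bolster', -1)
```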

