913 S. University Ave
University of Michigan, Ann Arbor, Michigan 48109

When dialog pertains to the objects and events in the world around us, gaze and speech become closely intertwined: speakers typically look at objects about 1 second before they mention them, while listeners fixate relevant objects within 250 milliseconds of hearing them mentioned. In face-to-face dialog, listeners can “short-cut” this process by following the speaker’s gaze directly, gaining cues about which objects the speaker is planning to mention.

Gaze thus serves as a useful visual cue for grounding and disambiguating linguistic interaction. This talk will discuss recent research that seeks to better understand the importance of gaze for situated dialog in human-computer interaction. The first part of the talk will focus on how the eye gaze of both robots and virtual agents influences the way people understand their speech. We will then discuss ongoing research that exploits the real-time gaze of human users to improve the automatic generation of spoken directions as those users navigate a virtual environment. The talk will conclude by summarizing the importance of eye gaze as a real-time channel for situated interaction and by speculating on how increasingly ubiquitous gaze-tracking technologies could be exploited in future applications.
