Identifying the Addressee in Human-Human-Robot Interactions Based on Head Pose and Speech (2008)
Michael Katzenmaier
Abstract In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audiovisual recordings of two humans and a (simulated) robot, we discriminate the interaction of the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to find the addressee. Low-level, feature-based cues as well as higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue. Our results show that visually estimated head pose is a more reliable cue for the identification of the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which results in significant improvements.
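To illustrate the kind of cue combination the abstract refers to, the following Python sketch shows a simple weighted late fusion of an acoustic score and a head-pose score for a per-utterance addressee decision. This is not the authors' method; the score names, the fusion weight, and the decision threshold are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class Utterance:
    # Assumed per-utterance scores in [0, 1]: probability that the speech
    # targets the robot, and probability that the speaker faces the robot.
    acoustic_score: float
    head_pose_score: float

def addressee(utt: Utterance, weight: float = 0.5, threshold: float = 0.5) -> str:
    """Fuse the two cues with a weighted sum and threshold the result."""
    fused = weight * utt.acoustic_score + (1.0 - weight) * utt.head_pose_score
    return "robot" if fused >= threshold else "human"

if __name__ == "__main__":
    # Speaker faces the robot even though the speech cue is weak: fused = 0.6 -> "robot".
    print(addressee(Utterance(acoustic_score=0.3, head_pose_score=0.9)))
    # Neither cue points to the robot: fused = 0.15 -> "human".
    print(addressee(Utterance(acoustic_score=0.2, head_pose_score=0.1)))

In practice the weight and threshold would be tuned on held-out recordings; the sketch only shows why combining the two cues can outperform either cue alone.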
Publication Details
Source: CiteSeerX Archive (CiteSeerX - Scientific Literature Digital Library and Search Engine, United States)
Keywords: attentive interfaces, focus of attention, head pose estimation
Type: text
Language: English
Links: 10.1.1.6.1719, 10.1.1.28.8271