Identifying the Addressee in Human-Human-Robot Interactions Based on Head Pose and Speech (2008), Michael Katzenmaier
Abstract: In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audiovisual recordings of two humans and a (simulated) robot, we discriminate the interaction between the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to identify the addressee; both low-level, feature-based cues and higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue. Our results show that visually estimated head pose is a more reliable cue for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which yields significant improvements.
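The abstract describes combining an acoustic cue and a visually estimated head-pose cue to decide whether an utterance is addressed to the robot. As a minimal illustrative sketch only (not the authors' method), the fusion step can be pictured as a weighted late fusion of two per-utterance scores, with the visual cue weighted more heavily since the paper reports head pose to be the more reliable cue. All names, weights, and thresholds below are hypothetical placeholders.

```python
# Illustrative late-fusion sketch for addressee identification.
# Assumptions (not from the paper): probability-like scores in [0, 1],
# a fixed visual weight of 0.7, and a decision threshold of 0.5.

def classify_addressee(acoustic_score: float,
                       head_pose_score: float,
                       visual_weight: float = 0.7,
                       threshold: float = 0.5) -> str:
    """Return 'robot' if the fused evidence suggests the utterance is
    addressed to the robot, otherwise 'human'.

    acoustic_score  -- score from acoustic cues that the robot is addressed
    head_pose_score -- score that the speaker's head is oriented toward the robot
    """
    fused = visual_weight * head_pose_score + (1.0 - visual_weight) * acoustic_score
    return "robot" if fused >= threshold else "human"


if __name__ == "__main__":
    # Example: the speaker looks at the robot while the acoustic cue is weak.
    print(classify_addressee(acoustic_score=0.4, head_pose_score=0.9))  # robot
    # Example: the speaker faces the other human.
    print(classify_addressee(acoustic_score=0.3, head_pose_score=0.1))  # human
```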
Publication details
Download source / contributor: CiteSeerX Archive, CiteSeerX - Scientific Literature Digital Library and Search Engine (United States)
Keywords: attentive interfaces, focus of attention, head pose estimation
Type: text
Language: English
Links: 10.1.1.6.1719, 10.1.1.28.8271