Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction

Abstract: In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. Coordinating speech with head gestures is difficult because of the meaning they jointly convey, the way a gesture is performed with respect to speech characteristics, their relative temporal arrangement, and their coordinated organization within the phrasal structure of the utterance. In this research, we focus on the mechanism of mapping head gestures to speech prosodic characteristics in natural human-robot interaction. Prosodic patterns and head gestures are aligned separately as parallel streams of a multi-stream HMM. The mapping between speech and head gestures is based on Coupled Hidden Markov Models (CHMMs), which can be seen as a collection of HMMs, one for the video stream and one for the audio stream, whose state transitions are coupled across chains. Experimental results with the Nao robot are reported.
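To make the coupled-stream structure concrete, here is a minimal numerical sketch (not the paper's implementation) of the exact forward pass for a two-chain CHMM with discrete observations: each chain's transition probability conditions on the previous states of both the audio and video chains. All dimensions, random parameters, variable names, and the discrete emission model are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy dimensions (not taken from the paper).
NA, NV = 3, 4   # audio-chain and video-chain state counts
TA, TV = 5, 6   # discrete observation symbols per stream
T = 20          # sequence length

rng = np.random.default_rng(0)

def normalize(x, axis=-1):
    return x / x.sum(axis=axis, keepdims=True)

# Coupled transitions: each chain conditions on BOTH previous states.
# A_a[j, k, i] = P(q_t^audio = i | q_{t-1}^audio = j, q_{t-1}^video = k)
A_a = normalize(rng.random((NA, NV, NA)))
A_v = normalize(rng.random((NA, NV, NV)))
pi  = normalize(rng.random((NA, NV)), axis=(0, 1))  # joint initial distribution
B_a = normalize(rng.random((NA, TA)))               # audio emission probabilities
B_v = normalize(rng.random((NV, TV)))               # video emission probabilities

def forward_loglik(obs_a, obs_v):
    """Exact forward pass over the joint (audio, video) state lattice."""
    assert len(obs_a) == len(obs_v)
    # alpha[i, l] = P(observations so far, audio state i, video state l)
    alpha = pi * np.outer(B_a[:, obs_a[0]], B_v[:, obs_v[0]])
    loglik = 0.0
    for t in range(1, len(obs_a)):
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c  # rescale each step to avoid numerical underflow
        # alpha'[i, l] = sum_{j,k} alpha[j,k] * A_a[j,k,i] * A_v[j,k,l]
        alpha = np.einsum('jk,jki,jkl->il', alpha, A_a, A_v)
        alpha *= np.outer(B_a[:, obs_a[t]], B_v[:, obs_v[t]])
    return loglik + np.log(alpha.sum())

# Toy usage: random symbol sequences stand in for quantized
# prosodic features (audio) and head-pose features (video).
obs_a = rng.integers(0, TA, size=T)
obs_v = rng.integers(0, TV, size=T)
print("log-likelihood:", forward_loglik(obs_a, obs_v))
```

The forward pass runs over the joint state lattice, which stays tractable here because the per-chain state counts are small; this is the standard exact treatment of a two-chain CHMM.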
Document type: Conference papers
Cited literature: 25 references
Contributor: Amir Aly
Submitted on: Tuesday, October 13, 2015 - 7:22:14 PM
Last modification on: Wednesday, May 11, 2022 - 3:20:03 PM
Long-term archiving on: Thursday, January 14, 2016 - 6:20:56 PM

Files produced by the author(s)
License: Public Domain

Amir Aly, Adriana Tapus. Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction. The European Conference on Mobile Robots (ECMR), Sep 2011, Örebro, Sweden. ⟨10.1007/978-3-642-27449-7_14⟩. ⟨hal-01169983v2⟩


