Book Chapter - Year: 2012

A Model for Mapping Speech to Head Gestures in Human-Robot Interaction

Amir Aly, Adriana Tapus

Abstract

In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. The difficulty in coordinating speech and head gestures lies in the conveyed meaning, the way a gesture is performed with respect to speech characteristics, their relative temporal arrangement, and their coordinated organization within the phrasal structure of an utterance. In this research, we focus on the mechanism of mapping head gestures to speech prosodic characteristics in natural human-robot interaction. Prosody patterns and head gestures are aligned separately as a parallel multi-stream HMM model. The mapping between speech and head gestures is based on Coupled Hidden Markov Models (CHMMs), which can be seen as a collection of HMMs, one for the video stream and one for the audio stream. Experimental results with the Nao robot are reported.
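
To make the coupling described above concrete, the following Python sketch computes the likelihood of paired audio/video observation sequences under a two-chain CHMM, where each chain's next state depends on both chains' previous states. This is an illustration only, not the authors' implementation: all dimensions, variable names, and randomly drawn distributions are hypothetical, and the observations are assumed to be discretized symbols rather than the paper's actual prosody and gesture features.

import numpy as np

rng = np.random.default_rng(0)

N_A, N_V = 3, 3   # hidden states per chain (audio prosody, head gesture)
M_A, M_V = 4, 4   # discrete observation symbols per stream
T = 10            # sequence length

# Coupled transitions: each chain's next state depends on BOTH chains'
# previous states, i.e. P(a_t | a_{t-1}, v_{t-1}) and P(v_t | a_{t-1}, v_{t-1}).
A_audio = rng.dirichlet(np.ones(N_A), size=(N_A, N_V))  # shape (N_A, N_V, N_A)
A_video = rng.dirichlet(np.ones(N_V), size=(N_A, N_V))  # shape (N_A, N_V, N_V)

# Per-chain emission matrices and initial state distributions.
B_audio = rng.dirichlet(np.ones(M_A), size=N_A)  # shape (N_A, M_A)
B_video = rng.dirichlet(np.ones(M_V), size=N_V)  # shape (N_V, M_V)
pi_audio = rng.dirichlet(np.ones(N_A))
pi_video = rng.dirichlet(np.ones(N_V))

def chmm_loglik(obs_audio, obs_video):
    """Scaled forward pass over the joint state space (audio, video)."""
    # alpha[a, v] ~ P(observations up to t, a_t = a, v_t = v)
    alpha = (pi_audio[:, None] * pi_video[None, :]
             * B_audio[:, obs_audio[0]][:, None]
             * B_video[:, obs_video[0]][None, :])
    loglik = 0.0
    for t in range(1, len(obs_audio)):
        c = alpha.sum()          # scaling constant to avoid underflow
        loglik += np.log(c)
        alpha /= c
        # The joint transition factorizes as A_audio[a, v, a'] * A_video[a, v, v'].
        alpha = np.einsum('ij,ijk,ijl->kl', alpha, A_audio, A_video)
        alpha *= (B_audio[:, obs_audio[t]][:, None]
                  * B_video[:, obs_video[t]][None, :])
    return loglik + np.log(alpha.sum())

obs_a = rng.integers(0, M_A, size=T)  # toy prosody symbol sequence
obs_v = rng.integers(0, M_V, size=T)  # toy gesture symbol sequence
print(chmm_loglik(obs_a, obs_v))

Note that this exact forward pass over the joint state space costs O(T * N_A^2 * N_V^2) per sequence, which is affordable for small state counts; the paper's training and gesture-synthesis procedures are not reproduced here.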
Main file
Chapter_Aly_2012.pdf (654.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02501845, version 1 (08-03-2020)

Identifiers

  • HAL Id: hal-02501845, version 1

Cite

Amir Aly, Adriana Tapus. A Model for Mapping Speech to Head Gestures in Human-Robot Interaction. In T. Borangiu, A. Thomas, and D. Trentesaux (Eds.), Service Orientation in Holonic and Multi-Agent Manufacturing Control, Studies in Computational Intelligence, Springer, Heidelberg, pp. 183-196, 2012. ⟨hal-02501845⟩

Collections

ENSTA ENSTA_U2IS