Interactive Robot Learning for Multimodal Emotion Recognition - Archive ouverte HAL
Conference Papers, Year: 2019

Interactive Robot Learning for Multimodal Emotion Recognition


Interaction plays a critical role in learning the skills needed for natural communication. In human-robot interaction (HRI), robots can obtain feedback during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework that uses multimodal data from thermal facial images and human gait data for online emotion recognition. We also propose a new decision-level fusion method for multimodal classification using a Random Forest (RF) model. Our hybrid online emotion recognition model focuses on the detection of four human emotions (i.e., neutral, happiness, anger, and sadness). After offline training and testing of the hybrid model, we find that the accuracy of the online emotion recognition system is more than 10% lower than that of the offline one. To improve the system, human verbal feedback is incorporated into the robot's interactive learning. The resulting online emotion recognition system achieves a 12.5% accuracy increase over the online system without interactive robot learning.
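The decision-level fusion described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the feature dimensions, the synthetic data, the equal fusion weight, and the helper name `fuse_predict` are all assumptions. It trains one Random Forest per modality (thermal face, gait) and fuses their class-probability outputs by weighted averaging before taking the argmax over the four emotions.

```python
# Sketch of decision-level fusion with two per-modality Random Forests.
# Feature sizes, data, and the 0.5 fusion weight are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]
rng = np.random.default_rng(0)

# Synthetic stand-ins for thermal-face and gait feature vectors.
n = 200
X_thermal = rng.normal(size=(n, 32))
X_gait = rng.normal(size=(n, 16))
y = np.arange(n) % 4  # labels 0..3, all four emotions present

clf_thermal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_thermal, y)
clf_gait = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_gait, y)

def fuse_predict(xt, xg, w_thermal=0.5):
    """Decision-level fusion: weighted average of the per-modality
    class-probability vectors, then argmax over the four emotions."""
    p = (w_thermal * clf_thermal.predict_proba(xt)
         + (1 - w_thermal) * clf_gait.predict_proba(xg))
    return [EMOTIONS[i] for i in p.argmax(axis=1)]

print(fuse_predict(X_thermal[:3], X_gait[:3]))
```

Fusing probabilities rather than hard labels lets a confident modality outweigh an uncertain one; the weight `w_thermal` could be tuned on validation data.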
Main file: ICSR2019_final_1.pdf (1.25 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02371856, version 1 (20-11-2019)


Chuang Yu, Adriana Tapus. Interactive Robot Learning for Multimodal Emotion Recognition. The Eleventh International Conference on Social Robotics, Nov 2019, Madrid, Spain. ⟨hal-02371856⟩
84 views, 245 downloads

