Learning Legible Motion from Human–Robot Interactions - ENSTA Paris - École nationale supérieure de techniques avancées Paris
Journal article, International Journal of Social Robotics, 2017

Learning Legible Motion from Human–Robot Interactions

Abstract

In collaborative tasks, displaying legible behavior enables other members of the team to anticipate intentions and thus to coordinate their actions accordingly. Behavior is therefore considered legible when an observer is able to quickly and correctly infer the intention of the agent generating it. In previous work, legible robot behavior has been generated with model-based methods that optimize task-specific models of legibility. In our work, we instead use model-free reinforcement learning with a generic, task-independent cost function. In experiments involving a joint task between thirty human subjects and a humanoid robot, we show that: 1) legible behavior arises when the efficiency of joint task completion is rewarded during human-robot interactions; 2) behavior that has been optimized for one subject is also more legible for other subjects; 3) the universal legibility of behavior is influenced by the choice of the policy representation.

Fig. 1 Illustration of the button-pressing experiment, where the robot reaches for and presses a button. The human subject predicts which button the robot will push, and is instructed to quickly press a button of the same color when sufficiently confident about this prediction. By rewarding the robot for fast and successful joint completion of the task, which indirectly rewards how quickly the human recognizes the robot's intention and thus how quickly the human can start the complementary action, the robot learns to perform more legible motion. The three example trajectories illustrate the concept of legible behavior: it enables correct prediction of the intention early on in the trajectory.
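The core idea, rewarding fast and successful joint completion and improving the policy model-free, can be sketched as a generic black-box policy-improvement loop. The sketch below is illustrative only: the reward weighting, the perturbation scheme, and the `run_episode` stand-in are hypothetical, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_reward(success, completion_time, time_weight=0.1):
    """Generic, task-independent reward: credit successful joint completion
    and penalize how long it took (hypothetical weighting)."""
    return (1.0 if success else 0.0) - time_weight * completion_time

def improve_policy(theta, run_episode, n_rollouts=10, sigma=0.05, h=10.0):
    """One iteration of model-free, black-box policy improvement:
    perturb the policy parameters, roll out each perturbation, and take a
    reward-weighted average of the perturbations (softmax weighting)."""
    eps = sigma * rng.standard_normal((n_rollouts, theta.size))
    rewards = np.array([episode_reward(*run_episode(theta + e)) for e in eps])
    w = np.exp(h * (rewards - rewards.max()))  # better rollouts weigh more
    w /= w.sum()
    return theta + w @ eps

# Toy stand-in for one human-robot trial: the joint task "succeeds" when the
# parameters are near some target motion, and completion time grows with the
# distance (mimicking the human hesitating over illegible motion).
target = np.array([0.5, -0.2, 0.8])

def run_episode(theta):
    dist = np.linalg.norm(theta - target)
    return dist < 0.6, 1.0 + 5.0 * dist  # (success, completion_time)

theta = np.zeros(3)
for _ in range(50):
    theta = improve_policy(theta, run_episode)
```

Because the reward only measures joint task efficiency, not legibility itself, legible motion emerges indirectly: faster human recognition shortens completion time, which raises the reward.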
Main file: main_final.pdf (1.38 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01629451 , version 1 (06-11-2017)

Identifiers

Cite

Baptiste Busch, Jonathan Grizou, Manuel Lopes, Freek Stulp. Learning Legible Motion from Human–Robot Interactions. International Journal of Social Robotics, 2017, 211 (3-4), pp. 517-530. ⟨10.1007/s12369-017-0400-4⟩. ⟨hal-01629451⟩
316 views
503 downloads
