Conference paper, Year: 2002

Abstracting Visual Percepts to learn Concepts

Jean-Daniel Zucker
Nicolas Bredèche
Lorenza Saitta

Abstract

The ability to efficiently identify properties of its environment is essential for a mobile robot that needs to interact with humans. Successful approaches to providing robots with this ability rely on ad hoc perceptual representations supplied by AI designers. Instead, our goal is to endow autonomous mobile robots (in our experiments, a Pioneer 2DX) with a perceptual system that can efficiently adapt itself to ease the learning task required to anchor symbols. Our approach is in line with meta-learning algorithms that iteratively change representations so as to discover one that is well fitted to the task. The architecture we propose can be seen as a combination of the two widely used approaches to feature selection: the wrapper model and the filter model. Experiments using the PLIC system to identify the presence of humans and fire extinguishers show the interest of such an approach, which dynamically abstracts a well-fitted image description depending on the concept to be learned.
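The abstract mentions combining the two standard feature-selection strategies. The sketch below is not the authors' PLIC system; it is a minimal, hypothetical illustration of how a filter step (cheap feature ranking) can be chained with a wrapper step (subset evaluation by the target learner), using illustrative dataset, learner, and threshold choices.

```python
# Hybrid filter + wrapper feature selection (illustrative sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for image-derived features; not the paper's data.
X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           random_state=0)

# Filter step: rank features cheaply and keep the top k candidates.
k = 10
scores = mutual_info_classif(X, y, random_state=0)
candidates = list(np.argsort(scores)[::-1][:k])

# Wrapper step: greedy forward selection over the filtered candidates,
# scoring each subset by cross-validated accuracy of the actual learner.
selected, best = [], 0.0
improved = True
while improved and candidates:
    improved = False
    for f in list(candidates):
        trial = selected + [f]
        acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                              X[:, trial], y, cv=5).mean()
        if acc > best:
            best, best_f, improved = acc, f, True
    if improved:
        selected.append(best_f)
        candidates.remove(best_f)

print("selected features:", selected, "cv accuracy: %.3f" % best)
```

The filter pass keeps the wrapper's search space small, while the wrapper pass evaluates candidate representations with the learner that will actually use them, which is the general trade-off the combined architecture exploits.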

Dates and versions

hal-01548182, version 1 (27-06-2017)

Identifiers

HAL Id: hal-01548182
DOI: 10.1007/3-540-45622-8_19

Cite

Jean-Daniel Zucker, Nicolas Bredèche, Lorenza Saitta. Abstracting Visual Percepts to learn Concepts. SARA 2002 - 5th International Symposium on Abstraction Reformulation and Approximation, Aug 2002, Kananaskis, Alberta, Canada. pp.256-273, ⟨10.1007/3-540-45622-8_19⟩. ⟨hal-01548182⟩