Conference paper, 2018

GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms

Abstract

In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, Quality-Diversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.
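The abstract describes a sequential decoupling: an exploration phase driven by a Goal Exploration Process fills a replay buffer with diverse transitions, and DDPG is then bootstrapped from that buffer for gradient-based exploitation. The following is a minimal, self-contained sketch of that scheme on a toy sparse-reward task; it is not the authors' implementation (see github.com/flowersteam/geppg for that), and the toy environment, the linear policy, and all hyperparameters below are illustrative assumptions.

# Sketch of the decoupled exploration/exploitation scheme from the abstract.
# Phase 1: a Goal Exploration Process collects diverse transitions by sampling
# goals in an outcome space and perturbing the policy that reached the nearest goal.
# Phase 2: an off-policy learner (DDPG in the paper; only a placeholder here)
# would be trained from the replay buffer filled during phase 1.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D continuous control task: state is a position, actions shift it,
# and the reward is sparse (nonzero only near a hidden target), which
# defeats naive undirected exploration.
TARGET = 0.8

def rollout(policy_params, horizon=20):
    """Run one episode with a linear policy a = w*s + b; return transitions and outcome."""
    w, b = policy_params
    s, transitions = 0.0, []
    for _ in range(horizon):
        a = np.clip(w * s + b, -0.1, 0.1)
        s_next = np.clip(s + a, -1.0, 1.0)
        r = 1.0 if abs(s_next - TARGET) < 0.05 else 0.0   # sparse reward
        transitions.append((s, a, r, s_next))
        s = s_next
    return transitions, s  # final position serves as the behavioural outcome

# ---- Phase 1: Goal Exploration Process fills the replay buffer -------------
replay_buffer, archive = [], []          # archive entries: (outcome, policy_params)
for episode in range(200):
    if not archive or rng.random() < 0.2:
        params = rng.uniform(-1, 1, size=2)               # random policy (bootstrap)
    else:
        goal = rng.uniform(-1, 1)                         # sample a goal outcome
        nearest = min(archive, key=lambda e: abs(e[0] - goal))
        params = nearest[1] + rng.normal(0, 0.1, size=2)  # perturb the nearest policy
    transitions, outcome = rollout(params)
    replay_buffer.extend(transitions)
    archive.append((outcome, params))

# ---- Phase 2: exploitation bootstrapped from the GEP replay buffer ---------
# In the paper, DDPG is trained off-policy from this buffer before collecting
# its own data; this sketch only reports what exploration found.
print(f"buffer size: {len(replay_buffer)}, "
      f"rewarded transitions: {sum(t[2] > 0 for t in replay_buffer)}")

The sketch keeps the two phases strictly sequential, which is the simplest form of the decoupling the abstract describes: exploration quality and exploitation quality can be assessed and tuned independently.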
Main file: 1802.05054.pdf (5.89 MB). Origin: files produced by the author(s).

Dates and versions

hal-01890151, version 1 (08-10-2018)

Identifiers

  • HAL Id: hal-01890151, version 1

Cite

Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer. GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. International Conference on Machine Learning (ICML), Jul 2018, Stockholm, Sweden. ⟨hal-01890151⟩