Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation

Published: 23 Oct 2024, Last Modified: 04 Nov 2024, CoRL 2024 Workshop MAPoDeL, CC BY 4.0
Keywords: Continual Learning, Evolutionary Optimization, Distillation, Deep Reinforcement Learning, Locomotion
TL;DR: We study effective distillation in the context of evolutionary continual learning where there are many parents possessing diverse capabilities.
Abstract: Developing robotic agents that perform well in diverse environments while exhibiting a variety of behaviors is a key challenge in AI and robotics. Traditional reinforcement learning (RL) methods often produce agents that specialize in narrow tasks, limiting their adaptability and diversity. To overcome this, we propose a preliminary, evolution-inspired framework that includes a reproduction module, analogous to reproduction in natural species, which balances diversity and specialization. By integrating RL, imitation learning (IL), and a coevolutionary agent-terrain curriculum, our system continuously evolves agents through complex tasks. This approach promotes adaptability, inheritance of useful traits, and continual learning: agents not only refine inherited skills but also surpass their predecessors. Our initial experiments show that this method improves exploration efficiency and supports open-ended learning, offering a scalable solution for settings where sparse rewards coupled with diverse terrain environments induce a multi-task problem.
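To make the distillation step concrete, below is a minimal sketch (not the authors' code) of evolutionary distillation from multiple parents: a child policy is first regressed onto the actions of several diverse parent policies with an imitation loss, and would then be fine-tuned with RL on the coevolved terrain curriculum. All dimensions, the parent-selection rule, and the averaging of parent actions are assumptions made for illustration only; the paper's actual aggregation and selection schemes may differ.

```python
# Hypothetical sketch of multi-parent evolutionary distillation (PyTorch).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 8  # assumed observation/action sizes


def make_policy() -> nn.Module:
    """Small feedforward policy used purely for illustration."""
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))


def distill_from_parents(child: nn.Module, parents: list[nn.Module],
                         batches: int = 100, batch_size: int = 256) -> None:
    """Imitation (IL) step: regress the child's actions onto the parents' actions."""
    opt = torch.optim.Adam(child.parameters(), lr=1e-3)
    for _ in range(batches):
        obs = torch.randn(batch_size, OBS_DIM)  # stand-in for observations from terrain rollouts
        with torch.no_grad():
            # Average the parents' actions as a simple multi-parent target;
            # the actual aggregation over diverse parents may differ.
            target = torch.stack([p(obs) for p in parents]).mean(dim=0)
        loss = nn.functional.mse_loss(child(obs), target)
        opt.zero_grad()
        loss.backward()
        opt.step()


# One generation: pick diverse parents, distill a child, then (not shown) fine-tune
# the child with RL on the coevolved agent-terrain curriculum before adding it back.
population = [make_policy() for _ in range(8)]
parents = population[:3]  # placeholder for a diversity-based parent-selection rule
child = make_policy()
distill_from_parents(child, parents)
population.append(child)
```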
Submission Number: 9