Continual Learning with Deep Generative Replay

12 Dec 2017 | Hanul Shin, Jung Kwon Lee, Jaehong Kim, Jiwon Kim
The paper "Continual Learning with Deep Generative Replay" by Hanul Shin et al. addresses catastrophic forgetting in deep neural networks: the degradation of a model's performance on previously learned tasks when it is trained on new ones. Inspired by the generative role of the hippocampus in memory consolidation, the authors propose a framework called Deep Generative Replay, consisting of a deep generative model (the generator) and a task-solving model (the solver). The generator produces pseudo-data that mimics past training examples, and this pseudo-data is interleaved with new data to train the solver. The model thereby retains knowledge of previous tasks while learning new ones, without explicitly replaying stored past data, which removes the need for large memory storage.

The framework is evaluated in several experiments, including sequential learning on image classification tasks. The results show that the model maintains performance on old tasks while learning new ones, outperforming other continual learning methods such as exact replay and noise-based approaches. The framework is also compatible with other continual learning methods such as Learning without Forgetting (LwF) and can be applied across domains and tasks, demonstrating its versatility and robustness. The authors discuss the main limitation of the approach, its dependence on the quality of the generator, and suggest future directions, including extending the framework to reinforcement learning and improving the training of deep generative models for more complex domains.
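The core replay mechanism described above can be sketched in a few lines: pseudo-inputs sampled from the generator are labeled by the previous solver and mixed with real data from the current task. The sketch below is a minimal illustration with hypothetical toy stand-ins (simple Gaussian samplers and a threshold classifier) rather than the GAN generator and neural-network solver used in the paper; the function name `make_replay_batch` and the `replay_ratio` parameter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_replay_batch(new_x, new_y, generator, old_solver, batch_size, replay_ratio=0.5):
    """Mix real data from the current task with generated pseudo-data.

    `generator` samples inputs resembling past tasks; the previous
    solver labels them, so no stored examples are needed.
    """
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    idx = rng.choice(len(new_x), size=n_new, replace=False)
    gen_x = generator(n_replay)          # pseudo-inputs mimicking old tasks
    gen_y = old_solver(gen_x)            # pseudo-labels from the old solver
    x = np.concatenate([new_x[idx], gen_x])
    y = np.concatenate([new_y[idx], gen_y])
    return x, y

# Hypothetical toy stand-ins; a real system would use a trained GAN
# generator and a neural-network solver, as in the paper.
new_x = rng.normal(2.0, 1.0, size=(100, 3))
new_y = np.ones(100, dtype=int)
toy_generator = lambda n: rng.normal(-2.0, 1.0, size=(n, 3))   # mimics "old" data
toy_old_solver = lambda x: (x.mean(axis=1) > 0).astype(int)    # labels pseudo-data

x, y = make_replay_batch(new_x, new_y, toy_generator, toy_old_solver, batch_size=32)
```

The solver is then trained on the mixed batch `(x, y)` with an ordinary supervised loss; in the paper's scholar setup, the generator itself is also retrained on a similar mix of real and generated inputs so it can be carried forward to the next task.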