Progress & Compress: A Scalable Framework for Continual Learning
2018 | Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell
Progress & Compress is a scalable framework for continual learning that enables a system to learn new tasks while preserving performance on previously encountered tasks. The framework consists of two components: a knowledge base, which stores previously learned skills, and an active column, which is used to learn new tasks. After learning a new task, the active column is distilled into the knowledge base, ensuring that previously acquired skills are not lost. This cycle of active learning (progress) followed by consolidation (compression) allows the system to learn new tasks without requiring access to or storage of previous data or task-specific parameters. The framework is designed to be scalable, with no architecture growth, and is applicable to both supervised and reinforcement learning domains.
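The compress phase described above consolidates the active column into the knowledge base by distillation: the knowledge base is trained to match the active column's temperature-softened output distribution. The sketch below, in NumPy, shows only that distillation objective (function names, the temperature value, and the use of raw logits are illustrative assumptions, not the paper's code; the paper additionally applies an EWC penalty during this phase):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(active_logits, kb_logits, temperature=2.0):
    """Cross-entropy between the active column's softened outputs (the
    teacher, held fixed during compression) and the knowledge base's
    outputs (the student being trained). Lower is better; it is minimized
    when the student matches the teacher's distribution."""
    teacher = softmax(active_logits, temperature)
    student = softmax(kb_logits, temperature)
    return float(-(teacher * np.log(student + 1e-12)).sum(axis=-1).mean())
```

In a full training loop this loss would be minimized with respect to the knowledge-base parameters only, so consolidation never disturbs the just-trained active column.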
The Progress & Compress framework is particularly effective in scenarios where tasks are learned sequentially, such as sequential classification of handwritten alphabets and reinforcement learning domains like Atari games and 3D maze navigation. The framework achieves positive transfer, where knowledge from previously learned tasks improves learning on new tasks, while minimizing catastrophic forgetting. This is accomplished through a modified version of Elastic Weight Consolidation (EWC), termed online EWC, which protects previously learned skills during the consolidation phase while keeping memory cost constant in the number of tasks.
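The online EWC variant mentioned above replaces the per-task quadratic penalties of standard EWC with a single running Fisher estimate that is decayed and updated after each task. A minimal NumPy sketch of the two pieces, assuming flattened parameter vectors and a precomputed per-task Fisher (the function names and the decay value are illustrative assumptions):

```python
import numpy as np

def update_fisher(running_fisher, new_fisher, gamma=0.95):
    """Online EWC keeps one running (diagonal) Fisher estimate rather than
    one penalty term per task: decay the old estimate, add the new one."""
    return gamma * running_fisher + new_fisher

def ewc_penalty(params, anchor_params, fisher, lam=1.0):
    """Quadratic penalty anchoring parameters near the values that were
    important for earlier tasks, weighted elementwise by the Fisher."""
    diff = params - anchor_params
    return 0.5 * lam * float(np.sum(fisher * diff ** 2))
```

Because only one Fisher estimate and one anchor are stored regardless of how many tasks have been seen, memory use stays constant, which is what makes the scheme scalable to long task sequences.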
The framework is evaluated on various tasks, including sequential learning of handwritten characters from the Omniglot dataset and navigation tasks in 3D environments. Results show that Progress & Compress outperforms other methods in terms of performance retention and forward transfer, particularly in scenarios where tasks are similar. The framework is also shown to be effective in reinforcement learning settings, where it achieves significant improvements in learning efficiency and performance.
Overall, Progress & Compress provides a scalable and effective solution for continual learning, allowing systems to learn new tasks while preserving performance on previously encountered tasks. The framework is designed to be adaptable to a wide range of tasks and domains, making it a promising approach for future research in continual learning.