Three scenarios for continual learning | 15 Apr 2019 | Gido M. van de Ven & Andreas S. Tolias
This paper defines three scenarios for continual learning, distinguished by whether task identity is provided at test time and, if not, whether it must be inferred. The scenarios are: task-incremental learning (Task-IL), where task identity is provided; domain-incremental learning (Domain-IL), where task identity is not provided but need not be inferred, because all tasks share the same output structure; and class-incremental learning (Class-IL), where task identity is not provided and must be inferred. The authors compare recent continual learning methods across these scenarios using the split and permuted MNIST task protocols, grouping the methods into strategies based on task-specific components, regularized optimization, modified training data (replay), and exemplars. Replay-based methods outperform the alternatives in every scenario, while regularization-based methods such as EWC and SI fail in Class-IL, where task identity must be inferred. The authors conclude that replay may be an unavoidable tool for the more challenging scenarios in which task identity is not provided.
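To make the distinction concrete, here is a minimal Python sketch (not from the paper) of how the prediction target differs across the three scenarios for split MNIST, where the ten digits are divided into five two-class tasks. The `target` helper and its return convention are illustrative assumptions.

```python
# Five two-class tasks for split MNIST: (0,1), (2,3), (4,5), (6,7), (8,9).
TASKS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def target(digit: int, scenario: str) -> tuple:
    """Return (task identity given to the model, label it must predict)."""
    task_id = digit // 2   # which of the five tasks the digit belongs to
    within = digit % 2     # first or second class within that task
    if scenario == "task":     # Task-IL: task identity is provided
        return task_id, within # choose between the 2 classes of a known task
    if scenario == "domain":   # Domain-IL: identity unknown, not needed
        return None, within    # always the same 2-way decision
    if scenario == "class":    # Class-IL: identity unknown, must be inferred
        return None, digit     # a 10-way decision over all classes seen
    raise ValueError(scenario)

for s in ("task", "domain", "class"):
    print(s, target(7, s))
# task   -> (3, 1): "second class of task 3"
# domain -> (None, 1): "a second class, of some task"
# class  -> (None, 7): "digit 7, out of all ten"
```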
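As a rough illustration of the replay strategy, the following PyTorch-style sketch interleaves the current batch with examples stored from earlier tasks. The function name `replay_step`, the buffer format, and the batch sizes are assumptions for illustration, not the authors' implementation; the paper also evaluates generative replay, which replaces the stored buffer with a learned generator.

```python
import random
import torch
import torch.nn.functional as F

def replay_step(model, optimizer, x, y, buffer, n_replay=32, n_store=4):
    """One optimisation step with exact replay: the current batch is
    interleaved with examples stored from earlier tasks, so that old
    classes keep contributing to the loss."""
    if buffer:  # mix in stored examples from previous tasks, if any
        xs, ys = zip(*random.sample(buffer, min(n_replay, len(buffer))))
        x = torch.cat([x, torch.stack(xs)])
        y = torch.cat([y, torch.stack(ys)])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # keep a few raw examples from the front of the batch, which is
    # still the current task's data (replayed examples were appended)
    buffer.extend(zip(x[:n_store].detach(), y[:n_store]))
    return loss.item()
```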
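For contrast, regularization-based methods such as EWC revisit no old data at all; they add a quadratic penalty that anchors important parameters near the values they had after earlier tasks. Below is a minimal sketch of such a penalty, assuming `fisher` (per-parameter importance estimates) and `old_params` (parameter values saved after the previous task) are precomputed dictionaries keyed by parameter name.

```python
def ewc_penalty(model, fisher, old_params, lam=5000.0):
    """EWC-style quadratic penalty: pulls each parameter toward the value
    it had after the previous task, weighted by its estimated importance.
    Constraining the weights does not force the network to discriminate
    between classes that were never seen together -- one intuition for
    why such methods fail in the Class-IL scenario."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty
```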