14 Apr 2017 | Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph H. Lampert
iCaRL is an incremental classifier and representation learning method that allows learning about new classes over time without requiring the training data for all classes to be available at once. It simultaneously learns classifiers and a data representation, which distinguishes it from previous methods that were limited to fixed data representations.

iCaRL has three main components: 1) a nearest-mean-of-exemplars classifier that is robust against changes in the data representation, 2) a herding-based step for prioritized exemplar selection, and 3) a representation learning step that uses the exemplars in combination with knowledge distillation to avoid catastrophic forgetting.

The method is evaluated on two benchmarks derived from CIFAR-100 and ImageNet ILSVRC 2012: iCIFAR-100 and iILSVRC. Experiments show that iCaRL can learn many classes incrementally over a long period of time, where other strategies quickly fail, and that it outperforms competing methods particularly when classes arrive in smaller batches. Confusion matrices show that iCaRL has no intrinsic bias towards or against classes encountered early or late during learning; the main reason for its strong classification results is its use of exemplar images.

Despite these promising results, class-incremental learning is still far from solved: iCaRL's performance remains lower than what systems achieve when trained in a batch setting. Future work aims to analyze the reasons for this gap and to study related scenarios in which the classifier cannot store any of the training data in raw form.
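The first component, the nearest-mean-of-exemplars classifier, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: function names are my own, and I assume features have already been extracted by the network and are compared after L2 normalization, as the paper describes.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale feature vectors to unit length before comparing them."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def class_means(exemplar_features):
    """Mean-of-exemplars prototype per class, renormalized to the unit sphere.

    exemplar_features: list of (n_exemplars_k, d) arrays, one per class.
    Returns an (n_classes, d) array of class prototypes.
    """
    means = np.stack([l2_normalize(f).mean(axis=0) for f in exemplar_features])
    return l2_normalize(means)

def nearest_mean_classify(query_features, prototypes):
    """Assign each query to the class whose exemplar mean is closest
    in Euclidean distance (equivalent to cosine similarity on the sphere)."""
    q = l2_normalize(query_features)                         # (n, d)
    d2 = ((q[:, None, :] - prototypes[None]) ** 2).sum(-1)   # (n, n_classes)
    return d2.argmin(axis=1)
```

Because the prototypes are recomputed from the stored exemplars whenever the representation changes, the classifier automatically tracks the evolving feature space, which is what makes it robust in the incremental setting.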
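The second component, herding-based exemplar selection, greedily picks exemplars so that the running average of the selected features stays close to the full class mean at every prefix length. A sketch under the same assumptions (pre-extracted, normalized features; hypothetical function name):

```python
import numpy as np

def herding_select(features, m):
    """Greedy herding: choose m feature vectors whose running average
    best approximates the full class mean at each step.

    features: (n, d) array of features for one class.
    Returns indices of the selected exemplars, in priority order, so the
    stored set can later be shrunk by simply truncating the list.
    """
    mu = features.mean(axis=0)
    selected = []
    acc = np.zeros_like(mu)
    for k in range(1, m + 1):
        # distance of each candidate's running mean to the class mean
        gains = np.linalg.norm(mu - (acc + features) / k, axis=1)
        gains[selected] = np.inf      # never pick the same point twice
        i = int(gains.argmin())
        selected.append(i)
        acc += features[i]
    return selected
```

The priority ordering is the point of herding here: when the per-class memory budget shrinks as new classes arrive, dropping exemplars from the end of the list still leaves a prefix whose mean approximates the class mean well.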
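The third component combines a classification loss on the new classes with a distillation loss that holds the network's outputs for old classes close to those of the frozen previous network. A simplified NumPy sketch of the combined binary cross-entropy targets (the paper trains this with backpropagation; the helper below only evaluates the loss, and its signature is illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def icarl_loss(new_logits, old_logits, labels, n_old):
    """Binary cross-entropy with mixed targets: the frozen old network's
    sigmoid outputs for previously seen classes (distillation), and
    ground-truth indicators for the newly added classes (classification).

    new_logits: (n, C) outputs of the network being trained.
    old_logits: (n, n_old) stored outputs of the previous network.
    labels:     (n,) integer class labels.
    n_old:      number of classes seen before this increment.
    """
    n, C = new_logits.shape
    targets = np.zeros((n, C))
    targets[:, :n_old] = sigmoid(old_logits)        # distillation targets
    new_mask = labels >= n_old                      # one-hot only for new classes
    targets[np.arange(n)[new_mask], labels[new_mask]] = 1.0
    p = sigmoid(new_logits)
    eps = 1e-12
    return -np.mean(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
```

The distillation term is what counteracts catastrophic forgetting: even while the representation is updated for new classes, the old classes' outputs are anchored to their previous values on both the new data and the stored exemplars.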