End-to-End Incremental Learning

Sep 2018 | Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, Karteek Alahari
The paper "End-to-End Incremental Learning" by Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari addresses the challenge of incremental learning in deep neural networks, in particular the problem of catastrophic forgetting. The authors propose an approach that combines cross-entropy and distillation loss functions to train a deep network incrementally using only a small set of exemplars from the old classes. This allows the model to retain knowledge of old classes while learning new ones, and it keeps training end-to-end, with the classifier and the feature representation optimized jointly. The method is evaluated on CIFAR-100 and ImageNet, where it achieves state-of-the-art incremental-learning performance. Key contributions include a representative memory that stores and manages exemplars from old classes, and a cross-distilled loss function that integrates the classification and distillation losses. The paper also covers related work, implementation details, and extensive experiments demonstrating the effectiveness and robustness of the approach.
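To make the cross-distilled loss concrete, below is a minimal PyTorch-style sketch of how a classification loss over all classes can be combined with a distillation loss that keeps the current model's predictions for the old classes close to those of the previous, frozen model. This is only an illustration under stated assumptions: the function and argument names (new_logits, old_logits, num_old_classes, temperature) are hypothetical, and the exact formulation in the paper may differ in detail (e.g., how losses are weighted across classification layers).

```python
# Sketch of a cross-distilled loss: cross-entropy over all classes plus a
# temperature-softened distillation term over the old classes. Names and
# defaults are illustrative, not taken verbatim from the paper.
import torch
import torch.nn.functional as F


def cross_distilled_loss(new_logits, old_logits, labels,
                         num_old_classes, temperature=2.0):
    """new_logits: [batch, num_old + num_new] logits from the current model.
    old_logits:    [batch, num_old] logits from the frozen previous model.
    labels:        [batch] ground-truth class indices (old and new classes).
    """
    # Classification term: standard cross-entropy over all classes.
    ce = F.cross_entropy(new_logits, labels)

    # Distillation term: soften the old model's outputs and the current
    # model's old-class outputs with a temperature, then match them.
    soft_targets = F.softmax(old_logits / temperature, dim=1)
    log_probs = F.log_softmax(new_logits[:, :num_old_classes] / temperature,
                              dim=1)
    distill = -(soft_targets * log_probs).sum(dim=1).mean()

    return ce + distill


# Example usage with random tensors (10 old classes, 5 new classes):
if __name__ == "__main__":
    batch, n_old, n_new = 4, 10, 5
    new_logits = torch.randn(batch, n_old + n_new)
    old_logits = torch.randn(batch, n_old)
    labels = torch.randint(0, n_old + n_new, (batch,))
    print(cross_distilled_loss(new_logits, old_logits, labels, n_old))
```

In an incremental training step, the old logits would be produced by a frozen copy of the network from the previous increment, evaluated on both the new-class data and the exemplars held in the representative memory.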