5 Oct 2018 | Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars
The paper "Memory Aware Synapses: Learning what (not) to forget" by Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars introduces a novel approach called Memory Aware Synapses (MAS) for lifelong learning (LLL). The authors argue that in the context of limited model capacity and an unlimited stream of new information, knowledge must be selectively preserved or erased. Inspired by neuroplasticity, MAS computes the importance of neural network parameters in an unsupervised and online manner, allowing the model to adapt to new tasks while preventing the overwriting of important knowledge from previous tasks. The method penalizes changes to important parameters when learning new tasks, ensuring that frequently used knowledge is retained. The paper also explores a local variant of MAS that is linked to Hebb's rule, a model of brain learning. Experimental results on object recognition tasks and the challenging problem of learning an embedding for predicting <subject, predicate, object> triplets demonstrate state-of-the-art performance and the ability to adapt importance weights based on unlabeled data, adapting to specific test conditions. The contributions of the paper include a new LLL method, a connection to Hebbian learning, and superior performance compared to existing methods.The paper "Memory Aware Synapses: Learning what (not) to forget" by Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars introduces a novel approach called Memory Aware Synapses (MAS) for lifelong learning (LLL). The authors argue that in the context of limited model capacity and an unlimited stream of new information, knowledge must be selectively preserved or erased. Inspired by neuroplasticity, MAS computes the importance of neural network parameters in an unsupervised and online manner, allowing the model to adapt to new tasks while preventing the overwriting of important knowledge from previous tasks. The method penalizes changes to important parameters when learning new tasks, ensuring that frequently used knowledge is retained. The paper also explores a local variant of MAS that is linked to Hebb's rule, a model of brain learning. Experimental results on object recognition tasks and the challenging problem of learning an embedding for predicting <subject, predicate, object> triplets demonstrate state-of-the-art performance and the ability to adapt importance weights based on unlabeled data, adapting to specific test conditions. The contributions of the paper include a new LLL method, a connection to Hebbian learning, and superior performance compared to existing methods.