Larimar: Large Language Models with Episodic Memory Control


21 Aug 2024 | Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarathkrishna Swaminathan, Sihui Dai, Aurélie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiří Navrátil, Soham Dan, Pin-Yu Chen
Larimar is a novel architecture that augments Large Language Models (LLMs) with a distributed episodic memory, enabling efficient and accurate knowledge updates without retraining. New facts are written to memory dynamically in one shot, which makes updating faster and more flexible than existing editing methods: Larimar matches competitive baselines in accuracy on fact-editing tasks while delivering up to 10x faster updates. It also supports selective fact forgetting and prevention of information leakage, and it generalizes to input contexts longer than those seen during training.

The architecture is inspired by the brain's complementary learning systems, in which the hippocampus handles fast learning and the neocortex handles slow learning. Larimar pairs an LLM (the slow learner) with a hierarchical episodic memory (the fast learner) whose contents are updated through a generative-model approach. The model is trained by maximizing a variational lower bound on the conditional data likelihood; at inference time, readouts from the memory condition the LLM decoder.
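This summary does not reproduce the authors' code, but in the Kanerva-machine family of models that Larimar builds on, memory writes and reads have a compact linear-algebra form. The following is a minimal illustrative sketch of one-shot least-squares (pseudo-inverse) writing and reading; the class and method names (`EpisodicMemory`, `write`, `read`) are assumptions for exposition, not the authors' API, and the real system operates on encoder latents whose readouts are fed to the LLM decoder.

```python
import numpy as np

class EpisodicMemory:
    """Minimal Kanerva-style associative memory (illustrative sketch,
    not the authors' implementation).

    The memory is a K x C matrix M: K slots, each a C-dimensional
    vector. Writes and reads are least-squares (pseudo-inverse)
    operations, so storing a new fact is one-shot: no gradient
    updates to the LLM are required.
    """

    def __init__(self, num_slots: int = 512, latent_dim: int = 768, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.M = rng.normal(scale=1.0 / np.sqrt(latent_dim),
                            size=(num_slots, latent_dim))

    def _address(self, Z: np.ndarray) -> np.ndarray:
        # Addressing weights W solve W @ M ~= Z in the least-squares sense.
        return Z @ np.linalg.pinv(self.M)

    def write(self, Z: np.ndarray) -> None:
        # One-shot write (delta-rule form): adjust M so that the
        # episode Z is reproducible from its own addresses.
        W = self._address(Z)
        self.M = self.M + np.linalg.pinv(W) @ (Z - W @ self.M)

    def read(self, Z_query: np.ndarray) -> np.ndarray:
        # Address memory with a (possibly partial or noisy) query and
        # return the reconstruction; in Larimar this read vector
        # conditions the decoder's generation.
        W = self._address(Z_query)
        return W @ self.M

# Usage: write the encoding of an edited fact, then recover it from a
# noisy query, standing in for a paraphrased question at test time.
mem = EpisodicMemory()
z_fact = np.random.default_rng(1).normal(size=(1, 768))   # stand-in encoder output
mem.write(z_fact)
z_hat = mem.read(z_fact + 0.01 * np.random.default_rng(2).normal(size=(1, 768)))
print(np.linalg.norm(z_hat - z_fact))  # small relative to ||z_fact||: fact retrieved
```

Because both operations reduce to solving linear systems, an edit costs roughly one pseudo-inverse rather than any fine-tuning steps, which is consistent with the speedups the paper reports over gradient-based editors.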
Evaluated on standard benchmarks, including the CounterFact and ZsRE datasets, Larimar performs strongly on both single and sequential fact editing, outperforming existing methods in speed at comparable accuracy. By reading recursively from its memory, it also handles input contexts far longer than those it was trained on. Selective fact forgetting is likewise a memory-level operation (a sketch follows below), and the same mechanism underpins prevention of information leakage.

The paper also discusses the limitations of current LLM editing approaches, such as high training costs and difficulty generalizing to new data. Larimar addresses these with a simple, general, and principled approach to updating LLMs in real time through adaptable episodic memory control. The design is LLM-agnostic and applicable to a wide range of tasks, including question answering and summarization. Future work includes extending Larimar to longer text and more tasks, as well as testing it on more challenging inference problems.
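As noted above, forgetting is a memory operation rather than a retraining step. The sketch below, reusing the hypothetical `EpisodicMemory` class from earlier, shows one plausible realization: address the memory with the encoding of the fact to be forgotten and overwrite what that address reads out with a replacement encoding (e.g., of an empty or "unknown" statement). This is an assumption for exposition; the paper's exact forgetting update may differ.

```python
def forget(mem: EpisodicMemory, z_fact: np.ndarray,
           z_replacement: np.ndarray) -> None:
    """Selective forgetting as a one-shot overwrite (an illustrative
    assumption, not necessarily the paper's exact operation).

    Locate the fact's address in memory, then replace that address's
    readout, so subsequent queries for the fact, including paraphrased
    ones that map to a similar address, retrieve the replacement
    instead of the deleted information.
    """
    W = z_fact @ np.linalg.pinv(mem.M)   # address of the fact to forget
    mem.M = mem.M + np.linalg.pinv(W) @ (z_replacement - W @ mem.M)
```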