LLAMA 2: Open Foundation and Fine-Tuned Chat Models


19 Jul 2023 | Hugo Touvron*, Louis Martin†, Kevin Stone†, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom*
This paper introduces Llama 2, a collection of pre-trained and fine-tuned large language models (LLMs) ranging from 7 billion to 70 billion parameters. The focus is on Llama 2-Chat, which is optimized for dialogue use cases and outperforms open-source chat models on various benchmarks. The authors detail their fine-tuning methodology, including supervised fine-tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF), and introduce a new technique called Ghost Attention (GAtt) to improve multi-turn consistency. They also discuss safety improvements, such as data annotation and red-teaming, and provide a comprehensive evaluation of Llama 2-Chat's performance and safety. The paper aims to enable the community to build on their work and contribute to the responsible development of LLMs.
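To make the Ghost Attention (GAtt) idea more concrete, the sketch below illustrates the data-construction recipe the paper describes: a system instruction is synthetically attached to every user turn when sampling training dialogues, and is then kept only in the first turn of the final training sample (with loss computed only on the last assistant reply) so the model learns to honor the instruction across many turns. This is a minimal, self-contained illustration; the function names, message format, and `compute_loss` flag are hypothetical and not taken from the paper's code.

```python
def build_gatt_sample(instruction, user_turns, sample_assistant_reply):
    """Illustrative GAtt-style sample construction (hypothetical helper).

    instruction: constraint to enforce across turns, e.g. "Always answer in haiku".
    user_turns: list of user messages for a multi-turn dialogue.
    sample_assistant_reply: callable returning an assistant reply for the dialogue
        so far (a stand-in for sampling from the latest chat model).
    """
    # 1) Sample assistant replies with the instruction prepended to EVERY user turn,
    #    so the sampled answers actually follow the constraint.
    dialogue = []
    for user_msg in user_turns:
        dialogue.append({"role": "user", "content": f"{instruction}\n{user_msg}"})
        dialogue.append({"role": "assistant", "content": sample_assistant_reply(dialogue)})

    # 2) Build the training sample: keep the instruction only in the first user turn,
    #    and compute loss only on the final assistant reply (earlier turns serve as context).
    training_sample = []
    for i, msg in enumerate(dialogue):
        content = msg["content"]
        if msg["role"] == "user" and i > 0:
            content = content.removeprefix(f"{instruction}\n")
        training_sample.append({
            "role": msg["role"],
            "content": content,
            "compute_loss": msg["role"] == "assistant" and i == len(dialogue) - 1,
        })
    return training_sample


if __name__ == "__main__":
    # Toy usage with a dummy sampler in place of a real model.
    dummy_sampler = lambda d: f"(reply to: {d[-1]['content'][:30]}...)"
    sample = build_gatt_sample(
        "Always answer as a pirate.",
        ["Hi, who are you?", "What's the weather like?"],
        dummy_sampler,
    )
    for turn in sample:
        print(turn["role"], "| loss:", turn["compute_loss"], "|", turn["content"])
```

The key design point this sketch tries to capture is that the instruction influences how the training replies are generated, but only the first turn of the stored dialogue retains it, which encourages attention to that early constraint throughout the conversation.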