19 Jul 2023 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
Llama 2 is a family of large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. The release comprises Llama 2, the pretrained base models, and Llama 2-Chat, versions further fine-tuned for dialogue use cases. In human evaluations, Llama 2-Chat outperforms open-source chat models on most benchmarks and is competitive with some closed-source models on helpfulness and safety.

The chat models were produced with supervised fine-tuning (SFT) followed by reinforcement learning from human feedback (RLHF), and use a system message to keep behavior consistent across multi-turn dialogue. Safety was improved through safety-focused data annotation, red-teaming, and iterative evaluations. The paper describes the pretraining setup, the fine-tuning methodology, and the safety measures in detail, with the stated aim of enabling the community to build on the work and to contribute to the responsible development of LLMs.

The models are released for both research and commercial use under a responsible release strategy intended to address safety and ethical considerations. On standard benchmarks, Llama 2 improves on the original LLaMA models and on other open-source models; the paper also analyzes data contamination in evaluation sets and reports on the relative effectiveness of the fine-tuning techniques used. Llama 2-Chat shows gains on both helpfulness and safety benchmarks compared to previous versions, and the authors emphasize the importance of responsible AI development and the need for ongoing research into the safe and ethical use of large language models.
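To make the multi-turn system-message mechanism concrete, here is a minimal sketch of the prompt layout published with Llama 2-Chat, in which the system message is wrapped in `<<SYS>>` tags inside the first `[INST]` block and each completed turn is closed with `</s>`. The `build_prompt` helper below is illustrative, not part of any official API.

```python
# Minimal sketch of the Llama 2-Chat prompt layout for multi-turn dialogue.
# The <<SYS>>/[INST] tags follow the format published with the model; the
# helper function and its signature are assumptions for illustration only.

def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a multi-turn Llama 2-Chat prompt.

    `turns` holds (user, assistant) pairs from earlier in the dialogue;
    the system message is folded into the first user turn only.
    """
    first_user = f"<<SYS>>\n{system}\n<</SYS>>\n\n"
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        content = (first_user + user) if i == 0 else user
        prompt += f"<s>[INST] {content} [/INST] {assistant} </s>"
    # Current turn: prepend the system block only if there is no history.
    content = (first_user + user_msg) if not turns else user_msg
    prompt += f"<s>[INST] {content} [/INST]"
    return prompt

print(build_prompt(
    system="You are a concise, helpful assistant.",
    turns=[("What is Llama 2?",
            "A family of open LLMs from Meta, 7B to 70B parameters.")],
    user_msg="How was the chat variant fine-tuned?",
))
```

When using the Hugging Face checkpoints, the tokenizer's built-in chat template produces the same layout, so hand-building prompts like this is mainly useful for understanding how the system message persists across turns.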