DeepSeek LLM: Scaling Open-Source Language Models with Longtermism

5 Jan 2024 | Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, Huazuo Gao, Kaige Gao, Wenjun Gao, Ruiqi Ge, Kang Guan, Daya Guo, Jianzhong Guo, Guangbo Hao, Zhewen Hao, Ying He, Wenjie Hu, Panpan Huang, Erhang Li, Guowei Li, Jiashi Li, Yao Li, Y.K. Li, Wenfeng Liang, Fangyun Lin, A.X. Liu, Bo Liu, Wen Liu, Xiaodong Liu, Xin Liu, Yiyuan Liu, Haoyu Lu, Shanghao Lu, Fuli Luo, Shirong Ma, Xiaotao Nie, Tian Pei, Yishi Piao, Junjie Qiu, Hui Qu, Tongzheng Ren, Zehui Ren, Chong Ruan, Zhangli Sha, Zhihong Shao, Junxiao Song, Xuecheng Su, Jingxiang Sun, Yaofeng Sun, Minghui Tang, Bingxuan Wang, Peiyi Wang, Shiyu Wang, Yaohui Wang, Yongji Wang, Tong Wu, Y. Wu, Xin Xie, Zhenda Xie, Ziwei Xie, Yiliang Xiong, Hanwei Xu, R.X. Xu, Yanhong Xu, Dejian Yang, Yuxiang You, Shuiping Yu, Xingkai Yu, B. Zhang, Haowei Zhang, Lecong Zhang, Liyue Zhang, Mingchuan Zhang, Minghua Zhang, Wentao Zhang, Yichao Zhang, Chenggang Zhao, Yao Zhao, Shangyan Zhou, Shunfeng Zhou, Qihao Zhu, Yuheng Zou
The paper "DeepSeek LLM: Scaling Open-Source Language Models with Longtermism" by Xiao Bi et al. explores the scaling laws of large language models (LLMs) and introduces DeepSeek LLM, a project aimed at advancing open-source LLMs with a long-term perspective. The authors delve into the study of scaling laws and present their findings, which guide the development of DeepSeek LLM in two prevalent configurations: 7B and 67B. They have developed a dataset consisting of 2 trillion tokens and conducted supervised fine-tuning (SFT) and direct preference optimization (DPO) to create DeepSeek Chat models. Evaluation results show that DeepSeek LLM 67B outperforms LLaMA-2 70B across various benchmarks, particularly in code, mathematics, and reasoning tasks. Additionally, DeepSeek LLM 67B Chat outperforms GPT-3.5 in both Chinese and English open-ended evaluations, demonstrating superior performance in generating high-quality responses and engaging in meaningful conversations. The paper also discusses the safety evaluation of DeepSeek LLM, highlighting its ability to provide harmless responses in practical scenarios.The paper "DeepSeek LLM: Scaling Open-Source Language Models with Longtermism" by Xiao Bi et al. explores the scaling laws of large language models (LLMs) and introduces DeepSeek LLM, a project aimed at advancing open-source LLMs with a long-term perspective. The authors delve into the study of scaling laws and present their findings, which guide the development of DeepSeek LLM in two prevalent configurations: 7B and 67B. They have developed a dataset consisting of 2 trillion tokens and conducted supervised fine-tuning (SFT) and direct preference optimization (DPO) to create DeepSeek Chat models. Evaluation results show that DeepSeek LLM 67B outperforms LLaMA-2 70B across various benchmarks, particularly in code, mathematics, and reasoning tasks. Additionally, DeepSeek LLM 67B Chat outperforms GPT-3.5 in both Chinese and English open-ended evaluations, demonstrating superior performance in generating high-quality responses and engaging in meaningful conversations. The paper also discusses the safety evaluation of DeepSeek LLM, highlighting its ability to provide harmless responses in practical scenarios.