This paper investigates language-specific neurons in large language models (LLMs) and their role in multilingual capabilities. The authors propose language activation probability entropy (LAPE), a method for identifying language-specific neurons within LLMs: for each neuron, LAPE estimates the probability that the neuron activates on text in each language and computes the entropy of this probability distribution, so that neurons with low LAPE scores, i.e., those that activate predominantly for one or a few languages, are identified as language-specific. The study shows that an LLM's proficiency in a particular language is largely attributable to a small subset of such neurons, located primarily in the model's top and bottom layers. The authors further demonstrate that the output language of an LLM can be "steered" by selectively activating or deactivating these language-specific neurons.

Experiments on several representative LLMs (LLaMA-2, BLOOM, and Mistral) show that deactivating language-specific neurons significantly degrades multilingual performance, indicating that these neurons play a crucial role in multilingual processing and that the output language can be controlled by manipulating them. The paper also examines the role of language-specific neurons in cross-lingual generation tasks and analyzes their structural distribution across the model's layers. Overall, the work advances our understanding of how LLMs process multilingual text and contributes a new method for identifying language-specific neurons.
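To make the selection criterion concrete, the following is a minimal sketch of entropy-based neuron scoring in the spirit of LAPE, assuming per-language activation probabilities have already been estimated from monolingual corpora (e.g., the fraction of tokens on which a feed-forward neuron fires). The array shapes, thresholds, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lape_scores(activation_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Entropy of each neuron's activation probabilities across languages.

    activation_probs: shape (num_neurons, num_languages); entry (i, j) is the
    empirical probability that neuron i activates on text in language j.
    """
    # Normalize each neuron's per-language probabilities into a distribution.
    totals = activation_probs.sum(axis=1, keepdims=True) + eps
    dist = activation_probs / totals
    # Low entropy => the neuron fires predominantly for one or a few languages.
    return -(dist * np.log(dist + eps)).sum(axis=1)

def select_language_specific(activation_probs: np.ndarray,
                             entropy_quantile: float = 0.01,
                             min_prob: float = 0.2) -> np.ndarray:
    """Return indices of neurons with low LAPE-style scores that still activate
    reasonably often for at least one language (filtering out near-dead neurons)."""
    scores = lape_scores(activation_probs)
    active_enough = activation_probs.max(axis=1) >= min_prob
    threshold = np.quantile(scores[active_enough], entropy_quantile)
    return np.where(active_enough & (scores <= threshold))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 1,000 neurons, 6 languages; the first 10 neurons are made
    # language-specific by firing almost exclusively for language 0.
    probs = rng.uniform(0.05, 0.6, size=(1000, 6))
    probs[:10] = 0.01
    probs[:10, 0] = 0.5
    print(select_language_specific(probs))  # expected: roughly indices 0..9
```

Under this reading, the steering and deactivation experiments described above would correspond to intervening on exactly these selected indices, e.g., clamping their activations to zero (to suppress a language) or to a high value (to promote it) during decoding.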