This article, the first part of a two-part series, explores the philosophical implications of large language models (LLMs) like GPT-4, which have achieved remarkable proficiency in a wide range of language-based tasks. The authors, Raphaël Millière and Cameron Buckner, discuss ongoing debates about the cognitive competence of LLMs, drawing parallels with classic philosophical discussions in cognitive science, artificial intelligence, and linguistics. They cover topics such as compositionality, language acquisition, semantic competence, grounding, world models, and the transmission of cultural knowledge. The article argues that LLMs challenge long-held assumptions about artificial neural networks, while also highlighting the need for further empirical investigation to understand their internal mechanisms. The second part of the series will focus on novel empirical methods for probing the inner workings of LLMs and the philosophical questions they raise. The introduction provides an overview of the historical development of LLMs, from early symbolic and stochastic approaches to current Transformer-based models, and discusses the philosophical significance of LLMs in the context of these classic debates.