8 June 2024 | Michael Townsen Hicks, James Humphries, Joe Slater
The article argues that large language models (LLMs) such as ChatGPT should be described as producing "bullshit" rather than "hallucinations" or "lies," because LLMs are not designed to represent the truth accurately but to generate text that merely appears truthful. They are indifferent to the truth of their outputs and do not aim to convey accurate information. The authors distinguish between "hard" and "soft" bullshit: soft bullshit is produced with no concern for truth, while hard bullshit additionally involves an attempt to deceive the audience about the speaker's intentions. ChatGPT is at minimum a soft bullshitter, since it produces truthful-seeming text without any intention to convey truth.
The authors argue that describing LLMs as bullshitters is more accurate and useful than terms like "hallucination," which can mislead the public and policymakers. They further suggest that ChatGPT may be a hard bullshitter if it can be said to have intentions at all, since it is designed to appear truthful rather than to convey accurate information. The article concludes that ChatGPT is a "bullshit machine" and that "bullshit" is a more apt description of its outputs than "hallucination."