Large language models (LLMs), such as OpenAI’s ChatGPT, have revolutionized the way artificial intelligence interacts with humans, producing text that often seems indistinguishable from human writing. Despite their impressive capabilities, these models are prone to producing persistent factual inaccuracies, often referred to as “AI hallucinations.” However, in a paper published in Ethics and Information Technology, scholars Michael Townsen Hicks, James Humphries, and Joe Slater of the University of Glasgow argue that these inaccuracies are better understood as “bullshit.” LLMs are…