ChatGPT isn't hallucinating — it's bullshitting

If you're looking for a clear explanation of how ChatGPT works and where it falls short, check out "The Bullshit Machines," a short online course created by two professors at the University of Washington. It combines brief essays and explanatory videos to describe how large language models (LLMs) like ChatGPT work, and to examine their "ability to sound authoritative on nearly any topic irrespective of factual accuracy."

"In other words, their superpower is their superhuman ability to bullshit."

From Lesson 2:

LLMs are designed to generate plausible answers and present them in an authoritative tone. All too often, however, they make things up that aren't true.

Computer scientists have a technical term for this: hallucination. But this term is a misnomer because when a human hallucinates, they are doing something very different.

In psychiatric medicine, the term "hallucination" refers to the experience of false or misleading perceptions. LLMs are not the sorts of things that have experience, or perceptions.

Moreover, a hallucination is a pathology. It's something that happens when systems are not working properly.

When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.

When LLMs get things wrong they aren't hallucinating. They are bullshitting.
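To make the course's point concrete, here is a minimal Python sketch (my own illustration, not from the course) of the core generation step: an LLM picks its next word by sampling from a probability distribution over plausible continuations, and nothing in that loop checks whether the result is true. The prompt, tokens, and probabilities below are invented for illustration.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The numbers are made up; a real LLM learns them from mountains of text.
# Note what's missing: nothing in this table encodes which answer is *true*.
next_token_probs = {
    "Canberra": 0.55,    # correct, and also the most statistically plausible
    "Sydney": 0.30,      # wrong, but plausible-sounding and common in training text
    "Melbourne": 0.10,   # also wrong
    "a": 0.05,           # leads into some other fluent continuation
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample a token in proportion to its (temperature-adjusted) probability."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Every output is produced by the same mechanism, and delivered just as
# confidently, whether it happens to be right or wrong.
for _ in range(5):
    print("The capital of Australia is", sample_next_token(next_token_probs))
```

Run it a few times: sometimes you get Canberra, sometimes Sydney, stated with identical confidence. That's the lesson's point: fabrication isn't a malfunction of this mechanism; it is the mechanism.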

Previously:
OpenAI could watermark the text ChatGPT generates, but hasn't
Teacher devises an ingenious way to check if students are using ChatGPT to write essays