AI search chatbots output lies, nonsense and hallucinations

The AI chatbots being integrated into Bing, Quora and other search platforms, far from replacing traditional search results, offer a constant stream of hallucinations: bullshit coughed up as fact by mindless generators with no model of truth or the universe to check their output against, just an ocean of data, the probabilities within it, and whatever (secret) filters the humans serving it bolt on.

In July [Daniel Griffin] posted the fabricated responses from the bots on his blog. Griffin had instructed both bots, "Please summarize Claude E. Shannon's 'A Short History of Searching' (1948)". He thought it a nice example of the kind of query that brings out the worst in large language models, because it asks for information resembling text in their training data, encouraging the models to make very confident statements. Shannon never wrote any such paper. He did write an incredibly important article in 1948, titled "A Mathematical Theory of Communication," which helped lay the foundation for the field of information theory.

Last week, on a whim, Griffin fed the same question into Bing and discovered that his blog post, with its links to the chatbot transcripts, had inadvertently poisoned Bing with false information: the hallucinations he had induced were highlighted above the search results, presented the same way facts drawn from Wikipedia might be.

The joke/meme/trope about new AI ingesting the output of old AI is now real. It's Happening: post the old GIF! There's a future for you: pre-AI datasets becoming like metals mined and processed before the atmosphere was tainted by radionuclides from atomic bombs. The final edition of the Britannica, some fin-de-2010s backup of Wikipedia, gaining weird value as among the last collections of low-background data before it all became tainted in the AI shitularity.