During a conversation with ChatGPT about artificial intelligence and chess, ChatGPT touted its skill in the game. The chatbot claimed it could easily beat Video Chess, a 4KB chess game for the 1970s-era Atari VCS. It went… poorly. As described in a LinkedIn post, software engineer Robert Caruso set up an Atari emulator and spent ninety minutes watching as ChatGPT "confused rooks for bishops, missed pawn forks, and repeatedly lost track of where pieces were."
Caruso switched to standard chess notation when ChatGPT complained about the Atari icons, but the chatbot continued to struggle with board awareness, repeatedly asking to start over and try again before eventually giving up. (Atari Video Chess was released in 1979, not 1977 as the post states, and is by most accounts a decent chess engine.)
Caruso declared it a "stunning victory" for the Atari, but it should be no surprise that even a decades-old chess engine can beat a large language model. A large language model like ChatGPT is not true artificial intelligence; it is more like superpowered autocomplete. Even the most sophisticated models constantly spout nonsense with total confidence and friendly exclamation points.
I conducted my own experiment and asked ChatGPT if large language models (LLMs) can play chess. It answered that they could, but "the performance may not match that of dedicated chess engines." Despite my less-than-stellar chess skills, I thought I would see if it wanted to play.
[Screenshot of the author's chess exchange with ChatGPT]
Awesome. Thanks.
The takeaway here is not that ChatGPT is bad at chess, but rather the unearned confidence the chatbot displays while being bad at chess. It's not smart. It doesn't know anything. It doesn't even know that it is not smart and doesn't know anything.
Previously:
• Elon Musk's own AI chatbot — Grok — says U.S. better off with Democrats
• Marjorie Taylor Greene argues with an AI chatbot for calling her out
• Microsoft AI chatbot promptly becomes Nazi