There's a new op-ed in The New York Times from Noam Chomsky and two of his academic colleagues — Dr. Ian Roberts, a linguistics professor at the University of Cambridge, and Dr. Jeffrey Watumull, a philosopher who is also the director of artificial intelligence at a tech company. The three extrapolate from their knowledge of and experience with human linguistic development and artificial intelligence to bluntly but beautifully explain the differences between artificial and "true" intelligence. And in doing so, they warn about the hype around products like ChatGPT — both for investors who are suddenly very excited about the prospect of independent robot labor, and for others who are potentially concerned about the impending robot apocalypse or whatever.
It is genuinely a fantastic essay. Here's a taste, jumping off the Borgesian idea of "the imminence of a revelation," and the simultaneity of living in a time of great comedy and tragedy all at once:
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make "infinite use of finite means," creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
From there, Chomsky et al. compare AI to a child with burgeoning language skills. This was particularly resonant to me as a parent of a two-and-a-half-year-old (who is already developing his own relationship with our Alexa as it is). What they call the "language explosion" in childhood development is a very real phenomenon. It's like we woke up one day and he suddenly just innately understood how to turn those random words he'd been occasionally uttering into complete sentences. Even his adorable grammatical errors — "Where are me?!" when he's playing hide-and-seek is just the best — have their own sense of logic and clarity to them, because he is actively striving to communicate things and to replicate what he observes in the world. And he does that based on a much smaller sample set than ChatGPT has to work with. Sure, he still struggles to articulate the contradictory feelings of being simultaneously excited and overwhelmed by the size of his new daycare class. But you can tell he's still feeling those things, and thinking about them, and what they mean in the context of his life. ChatGPT can't do that either.
Noam Chomsky: The False Promise of ChatGPT [Noam Chomsky, Ian Roberts and Jeffrey Watumull / The New York Times]
Full disclosure: I also write for Wirecutter, which is part of the New York Times Company, which also publishes The New York Times.