By now, most folks have encountered or messed around with artificial intelligence. Chatting with an A.I. customer service bot on the phone or the Internet is a sh*tty part of daily life. Maybe you use A.I. to speed things up at your job. Some scum are pretending to write books with it. Some use it to kill the loneliness that can make living in the digital age so painful. But should you rely on it for therapy? At first blush… maybe? An A.I. can be jammed full of topic-specific knowledge and trained to respond in a way that, to the user, feels caring, attentive and meaningful. What's more, with the cost of mental healthcare, an A.I. might be the only talk therapy that many people can afford.
But here's the thing: a therapist, psychologist or psychiatrist is duty-bound to keep your personal information, your pain and most of your proclivities private. Tech bros and artificial intelligence giants like the creators of OpenAI, Grok and Meta? They don't take oaths. They're not down with Do No Harm. They can't even resist training their artificial brains on copyrighted material without the permission of its owners. That alone shows a lack of scruples. Hell, if they're dumb enough to use one of my books to teach the next paradigm in computing how to think, should we trust them with anything?
Smart person Adi Robertson says yeah, no.
From The Verge (subscription required):
US residents are being urged to discuss their mental health conditions and personal beliefs with chatbots, and their simplest and best-known options are platforms whose owners are cozy with the Trump administration. xAI and Grok are owned by Musk, who is literally a government employee. Zuckerberg and OpenAI CEO Sam Altman, meanwhile, have been working hard to get in Trump's good graces — Zuckerberg to avoid regulation of his social networks, Altman to support his efforts for ever-expanding energy infrastructure and no state AI regulation. (Gemini AI operator Google is also carefully sycophantic. It's just a little quieter about it.) These companies aren't simply doing standard lobbying, they're sometimes throwing their weight behind Trump in exceptionally high-profile ways, including changing their policies to fit his ideological preferences and attending his inauguration as prominent guests.
Ew.
As the use of artificial intelligence continues to escalate in healthcare, as well as every other facet of our lives, so will the probing rat claws trying to get at our personal information. Many of us are already screwed: we freely give up our full names, dates of birth, personal preferences, sexual orientation and other details of our lives in exchange for access to apps and websites that alter our images, give free access to games or let us doomscroll to our heart's content. If you're creeped out by targeted ads, just think of how much having, say, Google target you with information about your mental health issues will make your ass pucker.
Robertson's story lays out, with terrifying frankness, the potential dangers of leveraging A.I. to care for our mental health. I'm in favour of websites asking us to pay for their content: it keeps journalists employed and allows great stories, like Robertson's, to get published. But I wish The Verge would follow WIRED's lead and let work like this be read for free as a public service. With artificial intelligence changing the world around us so rapidly, we need all the help we can get to understand the ramifications of deciding, or being forced, to use it.
Previously:
• AI is either going to save us or kill us. Or both.