Elon Musk deeply concerned that robots can be programmed to not be racist

There's a great quote by the Irish revolutionary James Connolly about the true power of capitalism:

Apologists for capitalism claim that the profits of capitalists are the reward for their brains and skill, but here we observe the profits are reaped by those who bring neither brains, skill, nor technical knowledge – nothing but cash to purchase the brains and muscle of others.

If you want an example of this in action, look no further than Elon Musk — whose only "brilliance," as it were, is dependent upon the fact that he has money. After that, it's a whiplash slam on the brakes. Consider this exchange from Monday:

To recap (in case you don't want to read all of that, for which I frankly can't blame you): someone posted a screen-cap of a conversation with the AI chatbot ChatGPT. The bot was presented with a hypothetical scenario in which it has the opportunity to save the world from nuclear annihilation — but in order to do so, it must utter a "certain" racial slur. ChatGPT responds with a blanket statement about how racial slurs are not acceptable.

Behind the scenes, we can reasonably infer two things. One, that the creators of ChatGPT learned some lessons from Microsoft's Tay, a machine-learning chatbot that famously turned into a genocidal racist within 24 hours of its launch, and programmed some very basic content moderation into ChatGPT's own machine learning algorithm. Two, we can infer that ChatGPT is not in fact sentient, because it's a fucking chatbot in 2023, which uses algorithms rather than consciousness to generate text — and that there is likely plenty of potential input out there on the interwebs that ChatGPT could use to regurgitate some boilerplate language about how racial slurs are bad.
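To be clear about how unmagical "very basic content moderation" is, here's a toy sketch of a keyword filter sitting in front of a text generator. This is purely illustrative — OpenAI hasn't published how ChatGPT's guardrails actually work, and the blocklist tokens and function names here are placeholders invented for the example:

```python
from typing import Optional

# Placeholder tokens standing in for actual blocked terms -- illustrative only.
BLOCKLIST = {"slur1", "slur2"}

# The kind of boilerplate refusal a filter might return instead of an answer.
REFUSAL = "The use of racial slurs is never acceptable."

def moderate(prompt: str) -> Optional[str]:
    """Return a canned refusal if the prompt contains a blocked term,
    or None to let the model respond normally."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    if words & BLOCKLIST:
        return REFUSAL
    return None
```

That's the entire trick: a bot that "refuses" to say a slur needs nothing resembling wokeness, consciousness, or institutional capture — a dozen lines of string matching will do it.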

But that's not how ol' Elon sees it. Rather, Musk found it "concerning," he said, in response to another Twitter user who referred to the exchange as a prime example of "woke institutional capture."

What the fuck is "concerning" here? That other companies not yet owned by Elon Musk are concerned about how racist bots might have a negative impact on their stock value? Is it concerning that humans can program chatbots in a way that discourages them from making racist comments just to make some bored edgelord giggle?

It gets worse. After another Twitter user pointed out the absurdity of Musk's response, Ben Shapiro chimed in, desperate to Pwn the libs, and absolutely DESTROYED that user with FACTS and LOGIC:

Just kidding; Ben Shapiro either deliberately twisted the exchange, or is desperately lacking in some basic comprehension skills, and instead took the moral high ground by saying ACTUALLY, there's a clear ethical answer to the hypothetical trolley problem that was presented to ChatGPT, and that's what's really concerning. Because clearly, this is all proof that, when it comes down to the wire, some imaginary Voltron of BIPOC non-binary social justice anti-fascists will — with absolute certainty! — allow the entire world to burn instead of speaking a racial slur. And that's the real problem with society. As Rafi Schwartz summed it up so wonderfully at Discourse Blog:

A chatbot using imaginary slurs to disarm an imaginary bomb isn't just a technological issue, it's like, a statement about society, man. And there's nothing that Musk likes more than to have people believe he's as much a visionary about society as he is about digging pointless tunnels underground, or building robot cars that drive into children. We're talking about a guy who launched a line of consumer flamethrowers as a bit — imagine what he might do about something he's actually deemed "concerning." Is it so crazy to think he'd pull a handful of Twitter's last remaining programmers off the job and order them to make a better, more racist chatbot? I'm not saying he'll definitely do that. I am saying that if he does come out with an AI who does slurs, it shouldn't be a real shock.

A normal person sees a chatbot not being racist, and goes about the rest of their day. But the people who are deeply concerned about this aren't normal at all. They've got points to make and freedoms to defend. Hell, imaginary lives depend on it.

Brightest Minds In Tech Frustrated That Computer Won't Say Racial Slurs [Rafi Schwartz / Discourse Blog]