The UK's parliament passed a massive and elaborate "online safety" bill on Tuesday requiring internet service providers to keep illegal material from appearing online, prevent children from accessing "harmful" content, enforce age-checking measures for adult internet users, and provide official mechanisms to report all of the above.
The new law pleases charities eager for something to be seen to be done about online abuse and to hold big tech accountable for the toxic content it facilitates, promotes, and profits from.
Driving the bill have been the stories of those who have suffered losses and harm they attribute to content posted on social media. Online safety campaigner Ian Russell has told the BBC the test of the bill will be whether it prevents the kind of suicide and self-harm content his daughter Molly saw on sites such as Instagram and Pinterest before she took her own life. Imran Ahmed of the Center for Countering Digital Hate welcomed the bill's passage, saying "too much tragedy has already befallen people in this country because of tech companies' moral failures".
The law dismays those wary of granting authorities and internet gatekeepers expansive mass-surveillance and pre-emptive censorship powers under the "think of the children" banner, which is to say everyone else.
Lawyer Graham Smith, author of a book on internet law, said the bill had well-meaning aims, but in the end it contained much that was problematic. "If the road to hell is paved with good intentions, this is a motorway," he told the BBC. He said it was "a deeply misconceived piece of legislation", and the threat it posed to legitimate speech was likely to be "exposed in the courts".
The thing that no one really wants to think about yet, because it's a colossal pain in the ass, is the prospect of being unable to trust any platform that continues to operate in the UK, because that's all of them.