Why the UK's mandatory opt-out censorware plan is stupid

My latest Guardian column is "There's no way to stop children viewing porn in Starbucks," a postmortem analysis of the terrible debate in the Lords last week over a proposed mandatory opt-out pornography censorship system for the UK's Internet service providers.

To filter out adult content on the internet, a company has to either examine all the pages on the internet and catalogue the bad ones, or write a piece of software that can examine a page on the wire and decide, algorithmically, whether it is inappropriate for children.

Neither of these strategies is even remotely feasible. To filter content automatically and accurately would require software capable of making human judgments – working artificial intelligence, the province of science fiction.

As for human filtering: there simply aren't enough people of sound judgment in all the world to examine every web page that has been created, and continues to be created around the clock, and determine whether each is a good page or a bad page. Even if you could marshal such a vast army of censors, they would have to attain an inhuman degree of precision and accuracy; otherwise they would be responsible for censorship errors on a scale never before seen, because they would be sitting in judgment on a medium larger than any in human history.

Think, for a moment, of what it means to have a 99% accuracy rate when it comes to judging a medium that carries billions of publications.

Consider a hypothetical internet of a mere 20bn documents, half "adult" content and half "child-safe" content. A 1% misclassification rate applied to 20bn documents means 200m documents will be misclassified: 100m legitimate documents that would be blocked by the government because of human error, and 100m adult documents that the filter doesn't touch and that any schoolkid can find.
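
To make that back-of-the-envelope arithmetic concrete, here's a minimal Python sketch. The document counts and the 1% error rate are the column's hypothetical figures, not real measurements of the web or of any actual filter:

```python
# Back-of-the-envelope: misclassification at internet scale.
# All figures are the column's hypotheticals, not measurements.

TOTAL_DOCS = 20_000_000_000   # hypothetical internet of 20bn documents
ADULT_SHARE = 0.5             # half "adult", half "child-safe"
ERROR_RATE = 0.01             # 1% misclassification (99% accuracy)

adult_docs = int(TOTAL_DOCS * ADULT_SHARE)
safe_docs = TOTAL_DOCS - adult_docs

# False positives: child-safe pages the filter wrongly blocks.
wrongly_blocked = int(safe_docs * ERROR_RATE)

# False negatives: adult pages the filter lets through.
wrongly_passed = int(adult_docs * ERROR_RATE)

print(f"Legitimate documents blocked in error: {wrongly_blocked:,}")
print(f"Adult documents missed by the filter:  {wrongly_passed:,}")
print(f"Total misclassified: {wrongly_blocked + wrongly_passed:,}")
# -> 100,000,000 blocked in error, 100,000,000 missed, 200,000,000 total
```

Note that the problem doesn't go away with a better filter: even an implausibly good 99.9% accuracy rate would still misclassify 20m documents at this scale.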
There's no way to stop children viewing porn in Starbucks