As anyone who has been following the sorry saga of the EU copyright reform will know, its key elements -- Article 3 on text and data mining, Article 11 on the link tax and Article 13 on the upload filter censorship machine -- are turning into the proverbial dog's breakfast, a complete and utter mess. The well-founded criticisms of the proposed law have piled up to an unprecedented extent, causing the politicians behind it to resort to iterative obfuscation. Successive arguments against each of the three articles mentioned above have led to the Commission's original text being mashed and murdered in an attempt to "address" the points by adding in new "clarifications" that just make things worse.

It's enough to look at the recent text being discussed in the European Parliament under Rapporteur MEP Axel Voss in the lead JURI (legal affairs) committee: it's a barely literate hodge-podge of inchoate ideas. And that's before some of the 1000 amendments proposed for the final JURI report have been voted on and shoved into the text to butcher it further.

Legislative rough-and-tumble is normal enough for complex legislation. It wouldn't matter too much if (a) the end results were good and (b) the public weren't being misled by some of the claims made about what the constantly shifting text really means. But, unable to respond to the justified criticisms of the proposals, it seems that some of the copyright directive's supporters are trying to muddy the waters. The plan is evidently to hide what is really going on here — the destruction of the Internet as we know it in the EU — until it's too late.

Once the directive is done and dusted, the copyright industry can safely celebrate the passing of a law that diminishes the online public space while propping up lazy companies that have refused to properly embrace the digital world. Even once the harmful consequences of Articles 3, 11 and 13 become evident as they enter into force, there's no chance the copyright directive will be revised or revisited for many years. It matters little how misleading or downright mendacious the campaign in favour of the copyright directive becomes: once it is passed, it is passed.

The effectively irreversible nature of EU law makes the current fight against the copyright directive's worst elements all the more vital. If we don't stop them now, we never will. The key action is contacting MEPs to explain why the proposed versions of Articles 3, 11 and 13 are so harmful, and what the consequences would be for the EU and its digital realm if they are not thrown out or at least greatly modified.

Mythbusting Misinformation on the Article 13 Censorship Machine

The new SaveYourInternet site has plenty of resources to help people do that, including tools that make it easy to contact MEPs via email, phone or Twitter. Those resources deal with the facts of the situation. Here, I'd like to address some of the fiction that is floating around, because it is part of the toxic atmosphere of calculated misinformation designed to make it hard to discuss the copyright directive effectively with MEPs.

The EU Legislative Process, and why we need to act now

One of the problems here is that the EU's legislative process is poorly understood by most people. That's not a criticism of them: the EU has done a terrible job of explaining itself, so it's no wonder that the public finds it hard to follow how the sausage machine of government grinds up proposals and spits out new laws.

As a result, people may not appreciate how important the text in the JURI report, to be agreed later this month, is in terms of the final result. Currently, the JURI text is very similar to that of the Council, as drawn up under the Bulgarian Presidency. This means that when the legislative discussion enters the so-called "trilogue" negotiations, which involve representatives of the European Commission, the Council and the European Parliament, the final result will inevitably be very close to the deeply flawed Council/JURI text.

We need MEPs to stand up for EU citizens, and to introduce strong alternatives, particularly for Article 13, the most pernicious part of the current text. Here, the only acceptable solution is removing it completely.

It's still filtering, even if you don't say the word "filtering"

It is striking that the different draft versions of the copyright directive texts all assiduously avoid using the tell-tale word "filtering". But it is clear from the obligations imposed by Article 13 that a constant, general filter is the only technology capable of spotting files that copyright companies want removed automatically. In order to ensure that no copy of the listed material appears on a site, it is obvious that every single upload has to be inspected, compared against the list of "forbidden" material, and filtered out if it matches. There is simply no other way, so it is entirely false to claim that Article 13 does not impose a general filtering obligation on companies. It most certainly does, just not explicitly. And a general, omnipresent filter is surveillance, and leads inevitably to censorship – which is why the draft directive avoids using the term.
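To make concrete what that obligation implies in practice, here is a minimal, purely illustrative sketch of the kind of check a compliant platform would have to run on every single upload. Everything in it is a hypothetical stand-in: ClaimedWork, fingerprint() and matches() are placeholders for whatever proprietary matching technology a platform might license or build, not any real product's API.

```python
# Purely illustrative: the shape of an Article 13-style "preventive measure".
# ClaimedWork, fingerprint() and matches() are hypothetical placeholders,
# not any real filtering product's API.

from dataclasses import dataclass


@dataclass
class ClaimedWork:
    rightsholder: str
    fingerprint: bytes  # opaque signature supplied in the rightsholder's list


def fingerprint(upload: bytes) -> bytes:
    """Stand-in for a content fingerprint; real systems use perceptual hashes."""
    return upload[:64]


def matches(a: bytes, b: bytes) -> bool:
    """Stand-in for fuzzy matching; real systems tolerate re-encoding, cropping, etc."""
    return a == b


def handle_upload(upload: bytes, blocklist: list[ClaimedWork]) -> bool:
    """Runs on EVERY upload by EVERY user, whether or not anyone has complained.
    Returns True if the upload may be published, False if it is blocked."""
    fp = fingerprint(upload)
    for work in blocklist:
        if matches(fp, work.fingerprint):
            return False  # blocked before any human ever sees it
    return True
```

The details do not matter; the shape of the loop does. There is no way to meet the obligation without inspecting every upload against every entry on every rightsholder's list, which is exactly what a general monitoring and filtering regime is.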

This goes way beyond notice-and-takedown

There's another misapprehension about the upload filter. Sometimes, people claim that it is nothing special or onerous, since it's just like the current notice-and-takedown system under the Digital Millennium Copyright Act in the US and the e-commerce directive in the EU. But the crucial difference is that those regimes only require action against specific, allegedly infringing material once a rightsholder has notified it; the copyright directive would allow copyright companies to send vast lists of material and require that every upload be filtered against them pre-emptively.

The Council text makes this clear: "it may not be proportionate to expect small and micro enterprises to apply preventive measures and that therefore in such cases these enterprises should only be expected to expeditiously remove specific unauthorised works and other subject matter upon notification by rightholders." Other companies, though, are expected to apply "preventive measures" — that is, to filter out pre-emptively anything the copyright industry cares to send through. As well as being a completely disproportionate obligation, this also confirms that a general upload filter would be needed, since it is the only technology that could even attempt to achieve that.
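For contrast, here is an equally simplified sketch of the existing notice-and-takedown model (again with hypothetical names): nothing is scanned until a rightsholder identifies a specific item that is already on the site.

```python
# Purely illustrative: notice-and-takedown is reactive and targeted.
# site_content maps URLs to uploaded items; the function name is hypothetical.

def handle_takedown_notice(site_content: dict[str, bytes], notified_url: str) -> None:
    """Acts only on the single item named in a rightsholder's notice,
    after publication; no other upload is ever inspected."""
    if notified_url in site_content:
        del site_content[notified_url]  # remove or disable access to that one item
```

One function runs once per notice, against one identified item; the other runs on every upload, against every entry on every list. That structural difference is what separates a targeted liability regime from general monitoring.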

There's no magic filter for all types of content

One claim in this context is that this massive effort of filtering every single upload isn't a problem. After all, some point out, Google has adopted it on a voluntary basis for uploads to YouTube. This shows that the technology already exists, they say, and so rolling it out more widely is straightforward. This overlooks a number of issues.

Google's Content ID video filtering technology required 50,000 hours of coding, and cost $60 million to develop. It's not only a huge project, it's proprietary, which means that Google probably won't want to share it with rivals. And if it did, it would doubtless charge high fees for a licence. If other companies wanted to offer filtering technology, they would probably have to spend similar amounts to Google. The high costs involved, and the relatively small market for this specialised software – completely separate solutions would need to be developed for video, music, images, text and software code, making it even more unattractive as an investment – together mean that only very deep-pocketed companies would be interested.

As a result, they will probably be US-based, like Audible Magic. The company offers a filtering system for music, and will be well-placed to pick up business thanks to the copyright directive. This means that the EU's Internet would not only be automatically censored by black boxes running inscrutable software, but the embedded rules for that censorship would probably be set by US companies. Does the EU really think this is the best way of encouraging a vibrant indigenous digital economy?

The GDPR is an ally, not the enemy

Finally, one of the more absurd claims flying around is that the resistance to Article 13 has nothing to do with concerns about surveillance and censorship as a result of general upload filtering. Instead, some say, this is simply another attack by big US companies that hate the EU's approach to regulation, particularly the GDPR, which imposes stringent privacy protections on hitherto freewheeling online services. While it is certainly true that the major US online services such as Facebook and Google dislike the GDPR, and continue to press for it to be interpreted in the weakest way possible, this is unrelated to the fight against Article 13.

Indeed, the constant monitoring required for the upload filter to work is not only the antithesis of the GDPR, but contravenes it. Article 22 of the GDPR states: "The data subject shall have the right not to be subject to a decision based solely on automated processing." Filtering is an automated decision process that has major negative consequences for EU citizens, notably in limiting their right to freedom of expression. Since filtering algorithms cannot capture the rich complexity of EU copyright law (even courts find it hard), they are inevitably unable to "safeguard the data subject's rights and freedoms and legitimate interests" as required by the GDPR, and are thus illegal under that law. So, far from being a reaction against the GDPR, efforts to get Article 13 thrown out actually build on it.

Please try to Save Your Internet

Although many of the "mythbusting" points above may seem like quibbles, discussions about the copyright directive in general, and Article 13 in particular, very often hinge on these subtle points. Forewarned is forearmed, and being able to counter any misleading points that MEPs might raise — either maliciously or simply through lack of knowledge — should help to make conversations with them more fruitful and ultimately more successful. Please try.


(Image: Georgie Pauwels, CC-BY)