The EU Commission's High-Level Expert Group on AI (AI HLEG) has tabled its "Policy and Investment Recommendations for Trustworthy Artificial Intelligence," recommending a ban on the use of machine learning technologies to generate Chinese-style citizen scores, along with limits on the technology's use in monitoring and analyzing mass-surveillance data.
However, much of the report simply recommends "further study," while other recommendations, like limits on the use of emotion-tracking and assessment technologies, are maddeningly vague.
Last March, the EU narrowly approved a proposal to subject all public online communications to algorithmic surveillance and automated censorship in the name of preventing copyright infringement.
Fanny Hidvegi, a member of the expert group that authored the report and a policy analyst at the nonprofit Access Now, said the document was overly vague, lacking "clarity on safeguards, red lines, and enforcement mechanisms." Others involved have criticized the EU's process as steered by corporate interests. Philosopher Thomas Metzinger, another member of the group, has pointed out that the initial "red lines" forbidding certain uses of AI were "dumbed down" to mere "critical concerns" in the final document.