Edward Snowden to keynote London's ORGCON!

ORGCON19 is the annual conference put on by the UK Open Rights Group (disclosure: I co-founded ORG and volunteer on its advisory board); it is "the UK’s largest human and digital rights conference," and this year's edition -- held on July 13 in central London -- is centred on "Data and Democracy, Digital Privacy, Online Censorship & the Role of Algorithms," so it only follows that whistleblower Edward Snowden will be its keynote speaker! Read the rest

UK minister says airlines used "exploitative algorithms" to split up families unless they paid extra

UK Digital Minister Margot James has vowed to crack down on "exploitative algorithms" that airlines use to deliberately split up families' seat assignments unless they pay extra; James says the airlines use these algorithms to coerce families into paying for pre-assigned seating. Read the rest

A successful no-platforming means we can talk about Alex Jones again

Zeynep Tufekci (previously) says that Big Tech's "engagement maximization" algorithms meant that any time you talked about Alex Jones critically, they would start relentlessly recommending that you watch some Alex Jones videos, because those videos were so well designed to suck up our attention, and thus to please the algorithms. Read the rest

A human being at Facebook manually approved the idea of targeting ads to users interested in "white genocide"

A year ago, Facebook apologized for allowing advertisers to target its users based on their status as "Jew haters" and blamed an algorithmic system that automatically picked up on the most popular discussions on the platform and turned them into ad-targeting segments. Read the rest

Stet, a gorgeous, intricate, tiny story of sociopathic autonomous vehicles

Sarah Gailey's micro-short-story STET is a beautiful piece of innovative storytelling that blends the three ingredients of perfect science fiction: sharply observed technological speculation that reflects on our present moment; a narrative arc for characters we sympathize with; and a sting in the tail that will stay with you long after the story's been read. Read the rest

Facebook's spam filter blocked the most popular articles about its 50m user breach

When news broke yesterday that Facebook had suffered a breach affecting at least 50,000,000 users, Facebook users (understandably) began widely sharing links to articles about the breach. Read the rest

Facebook sends man animation featuring cartoon characters dancing on his mother's grave

Facebook wants you to "engage" with its service, so it has an algorithm that plucks your most-favorited images out of your past stream, adds dancing whimsical cartoon characters, and then rams the resulting animation into your eyeballs, because why not? Read the rest

ICE hacked its algorithmic risk-assessment tool so it recommended detention for everyone

One of the more fascinating and horrible details in Reuters' thoroughly fascinating and horrible long-form report on Trump's cruel border policies is this nugget: ICE hacked the risk-assessment tool it used to decide whom to imprison so that it recommended detention for everyone. Read the rest
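The Reuters report describes the change at the policy level; as a hedged sketch of what rigging a decision-support tool like this amounts to (hypothetical code, not ICE's actual system), the fix can be as small as deleting one branch while leaving the score on screen:

```python
# Hypothetical sketch of a rigged decision-support tool; this is not
# ICE's actual code, just an illustration of the failure mode.

def recommend(risk_score: float, threshold: float = 5.0) -> str:
    """Original logic: low-risk detainees get a release recommendation."""
    return "DETAIN" if risk_score >= threshold else "RELEASE"

def recommend_rigged(risk_score: float, threshold: float = 5.0) -> str:
    """After the tampering: a score is still computed and displayed,
    lending a veneer of individualized assessment, but the release
    branch is gone and every input maps to the same output."""
    del risk_score, threshold  # the inputs are now purely decorative
    return "DETAIN"

# Every score, however low, yields the same recommendation:
assert {recommend_rigged(s) for s in (0.1, 4.9, 9.8)} == {"DETAIN"}
```

A tool rigged this way still looks like an assessment from the outside: officers see a score, a recommendation, a form. Nothing reveals the tampering unless someone audits the code or notices that the recommendation never varies.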

Thanks to 2016's trade secret law and algorithmic justice, America's courts have become AI-Kafka nightmares

In 2014, the Alice decision made it much harder to patent software in the USA; in 2016, Congress passed the Defend Trade Secrets Act, creating the first federal trade secrets statute. The result of these two developments is that software companies have aggressively switched from patents to trade secrets as a means of controlling competition and limiting inspection and criticism of their products. Read the rest

Fired by an algorithm, and no one can figure out why

Ibrahim Diallo was eight months into a three-year contract with a big company when its systems abruptly decided that he was fired: first they told his recruiter that he'd been let go, then they stopped accepting his pass at the parking garage and the turnstiles, then his logins stopped working, and at each turn his supervisor, that person's boss, and the HR people were at a loss to explain or reverse the steady, automated disappearance of Ibrahim from the company. Read the rest

"Friendly" apps are good at maximizing engagement, but their context-blindness is a cesspit of algorithmic cruelty

Designers use metrics and A/B tests to maximize engagement with their products, seeking out the phrases that trigger emotional responses in users -- like a smart scale that congratulates you on losing weight -- but these systems are context-blind, unable to distinguish the users who will welcome their messages from the users who will be traumatized by them (say, a woman who's just miscarried late in her pregnancy, being congratulated on her "weight loss"). Read the rest
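To make that failure mode concrete, here is a minimal sketch of context-blind message selection (all names, schema and copy are hypothetical, not any real product's code): the notification is chosen purely on a metric delta, and nothing in the data model lets the user's circumstances reach the decision.

```python
# Hypothetical sketch of a context-blind "encouragement" feature.
# Names, schema and copy are invented for illustration.

from dataclasses import dataclass

@dataclass
class WeighIn:
    user_id: str
    previous_kg: float
    current_kg: float

def pick_message(w: WeighIn) -> str:
    """Choose the notification copy that (in this sketch) won an A/B
    test on engagement. The only input is the weight delta; the system
    has no way to know why the number changed."""
    delta = w.current_kg - w.previous_kg
    if delta < 0:
        # The "winning" celebratory copy fires on every loss,
        # regardless of whether the loss is welcome news.
        return f"Congratulations! You're down {-delta:.1f} kg. Keep it up!"
    if delta > 0:
        return "Every journey has bumps. Let's get back on track!"
    return "Steady as she goes. Consistency counts!"

print(pick_message(WeighIn("u1", previous_kg=70.0, current_kg=66.5)))
```

The cruelty isn't in any single line; it's architectural. The engagement metric picks the copy, and the schema has no field where a miscarriage, an illness or a bereavement could live, so the "winning" message fires even when it is the worst possible one.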

Algorithms try to channel us into repeating our lives

Molly Sauter (previously) describes in gorgeous, evocative terms how the algorithms in our lives try to funnel us into acting the way we always have, or, failing that, the way everyone else does. Read the rest

Michigan Governor Rick Snyder's MIDAS program had a 93% error rate and falsely accused 20,000 workers of unemployment fraud

Under the cruel austerity of Michigan governor Rick Snyder -- whose policies led to the mass poisoning of children in Flint -- claims for unemployment insurance were vigorously investigated, with the state operating on the assumption that any worker who claimed a benefit was probably committing fraud. Read the rest

We need a new science of measurement to figure out when AI is working and why it fails

Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Predictive policing predicts police harassment, not crime

In Chicago, the "Heat List" system is used to direct policing resources, based on data-mining of social media to identify potential gang members; the model tells the cops where to go and whom to arrest, and is supposed to reduce both violent crime and the likelihood that suspects themselves will be killed -- but a peer-reviewed analysis (Scihub mirror) of the program shows that while being on the Heat List increases your chances of being harassed and arrested by the Chicago PD, it does nothing to reduce crime. Read the rest

Forget Skynet: AI is already making things terrible for people who aren't rich white dudes

Kate Crawford (previously) takes to the New York Times's editorial page to ask why rich white guys act like the big risk of machine-learning systems is that they'll evolve into Skynet-like apex predators that subjugate the human race, when there are already rampant problems with machine learning: algorithmic racist sentencing, algorithmic racist and sexist discrimination, algorithmic harassment, algorithmic hiring bias, algorithmic terrorist watchlisting, algorithmic racist policing, and a host of other algorithmic cruelties and nonsense, each one imbued with unassailable objectivity thanks to its mathematical underpinnings. Read the rest

Algorithmic risk-assessment: hiding racism behind "empirical" black boxes

Courts around America and the world increasingly rely on algorithmic risk-assessment software in determining bail and sentencing; the systems require the accused to answer more than a hundred questions, which are fed into a secret model that spits out a single-digit "risk score" that courts use to decide who to lock up, and for how long. Read the rest
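For a sense of the structure being criticized, here is a toy stand-in (the question names and weights are invented; the real products are proprietary trade secrets, which is precisely the complaint): a long questionnaire is flattened into numeric features and pushed through a model the defendant can never inspect.

```python
# Hypothetical illustration of the risk-score pipeline described above.
# Real tools are proprietary; every weight and question here is
# invented for illustration.

from typing import Dict

# The accused answers 100+ questions; here, a tiny stand-in subset.
answers: Dict[str, float] = {
    "prior_arrests": 2,
    "age_at_first_arrest": 19,
    "employment_status": 0,   # coded categorically; 0 = unemployed
    "neighborhood_code": 7,   # a proxy that can smuggle in race and class
}

# Inside the vendor's black box: weights nobody outside can audit.
SECRET_WEIGHTS: Dict[str, float] = {
    "prior_arrests": 0.9,
    "age_at_first_arrest": -0.05,
    "employment_status": 0.4,
    "neighborhood_code": 0.3,
}

def risk_score(answers: Dict[str, float]) -> int:
    """Collapse the questionnaire into one digit on a 1-10 scale.
    Because the mapping is a trade secret, neither the defendant nor
    the court can see which inputs drove the result."""
    raw = sum(SECRET_WEIGHTS[k] * v for k, v in answers.items())
    return max(1, min(10, round(raw)))

print(risk_score(answers))  # the court sees only this single digit
```

Even a linear model this simple becomes unchallengeable evidence once its weights are a trade secret: proxies like a neighborhood code can encode race and class, and no cross-examination can reach them.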

More posts