Racist algorithms: how Big Data makes bias seem objective

The Ford Foundation's Michael Brennan discusses the many studies showing how algorithms can magnify bias -- like the prevalence of criminal background-check ads shown against searches for black-sounding names.

What's worse is the way that machine learning magnifies these problems. If an employer has historically hired only young applicants, a machine-learning algorithm trained on those decisions will learn to screen out older applicants, without anyone having to tell it to do so.
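
To make that concrete, here is a minimal sketch of the mechanism -- not any real hiring system: the applicant data is invented, and the model is a generic scikit-learn classifier. Trained to imitate age-biased decisions, it learns a strong negative weight on age without ever being told to.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up historical data: 1,000 applicants with an age and a skill score.
age = rng.uniform(20, 65, size=1000)
skill = rng.normal(0.0, 1.0, size=1000)

# Past human decisions favored youth: applicants 40 and over were screened
# out regardless of skill; under-40s were judged on skill alone.
hired = (skill > 0) & (age < 40)

# Train a model to imitate those decisions. Nobody tells it to use age.
X = np.column_stack([age, skill])
model = LogisticRegression().fit(X, hired)

print("learned weights (age, skill):", model.coef_[0])
# The age weight comes out strongly negative: the model has quietly
# re-encoded the old policy of rejecting older applicants.
```

Note that simply dropping the age column wouldn't necessarily fix this: correlated features such as graduation year can serve as proxies for age.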

Worst of all, accomplishing this discrimination with algorithms gives racism, sexism and other prejudices a veneer of objective respectability.

I recently attended a meeting about some preliminary research on "predictive policing," which uses machine-learning algorithms to allocate police resources to likely crime hotspots. The researchers at the Human Rights Data Analysis Group discussed how these systems are often deployed in response to complaints about racial bias in policing, yet the data used to train them comes from the outcomes of that same biased police activity. If the police are stop-and-frisking brown people, then all the weapons and drugs they find will come from brown people. Feed that data to an algorithm, ask it where the police should concentrate their energies, and it will dispatch those cops to the same neighborhoods they've always focused on, but this time with a computer-generated racist facewash that lets them argue that they're free from bias.
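
The feedback loop the researchers describe can be simulated in a few lines. This is a toy sketch under invented assumptions -- two neighborhoods with identical true contraband rates, and patrols that start out skewed 80/20 -- not a model of any real predictive-policing product:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: two neighborhoods with the SAME true contraband rate,
# but patrols that start out skewed 20/80 toward neighborhood B.
true_rate = np.array([0.05, 0.05])
patrol_share = np.array([0.2, 0.8])

for rnd in range(5):
    stops = (1000 * patrol_share).astype(int)   # stops follow the patrols
    finds = rng.binomial(stops, true_rate)      # finds follow the stops

    # "Predictive" step: next round's patrols go where contraband was
    # found -- i.e., the model is trained on the biased outcomes.
    patrol_share = finds / finds.sum()
    print(f"round {rnd}: patrol share = {np.round(patrol_share, 2)}")
```

Because finds are proportional to stops, the skewed allocation reproduces itself round after round: the "prediction" is a mirror of past patrol patterns, not of underlying crime.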

There is no easy fix. Instead, a broad coalition of civil society organizations must push for change in several directions at once. Latanya Sweeney and Alvaro Bedoya outline a number of strategies, including:

* Investing in the technical capacity of public interest lawyers, and developing a larger cohort of public interest technologists. With more engineers participating in policy debates and more policymakers who understand algorithms and big data, both government and civil society organizations will be stronger.

* Pressing for “algorithmic transparency.” By ensuring that the algorithms underpinning critical systems like public education and criminal justice are open and transparent, we can better understand their biases and fight for change.

* Exploring effective regulation of personal data. Current laws and regulations are outdated and provide little guidance on how our data is used in the technologies we rely on every day. We can do better.

Can computers be racist? Big data, inequality, and discrimination [Michael Brennan/Ford Foundation]

Notable Replies

  1. miasm says:

    Inb4 'Algorithms can't be racist'.

  2. #AllAlgorithmsMatter

    This is a seriously bad thing, but it's mostly a problem with the lazy humans doing the analysis and not the algorithms themselves.

    Messy data is hard to work with, and far too often people get WAY too excited when they tease something out without doing proper reality checks. (I've fallen into that trap myself.) All those neat machine learning toys seem like magic sometimes and can do really impressive things, but they're only as good as their source data and their analysts.

    If you don't have critical people doing the analysis and others double-checking their work, then this is exactly what we're going to end up with.

  3. Computers are designed by us and taught by us, but lack the capacity for independent thought.

    GIGO

  4. There's no easy solution, but there is a hard one: acknowledge bias and correct for it. Not only algorithms but also privileged people can do this and have a great positive impact, though it requires thinking hard thoughts.

  5. A lot of bias is objective.
    It becomes discrimination, or *-ist, when you ruthlessly apply statistical facts about groups to individuals.

    Whenever you discriminate against a group X because its members are more likely to be/do Y, that's unfair to those members of group X who aren't/don't Y.

    Young men are more likely to crash their cars than young women are (a statistical fact, at least in Austria). But making someone pay higher insurance premiums because they are male is discrimination. Why should a careful young man pay more than a careless middle-aged woman?
    On the other hand, women are likely to live longer; should they therefore get lower retirement benefits?

    [...] provides a veneer of objective respectability to racism, sexism and other forms of discrimination.

    I think that is the wrong problem. The problem is not that these algorithms make things seem objective. The problem is that too many people think that "objectively justified discrimination" is respectable.

    If the police are stop-and-frisking brown people, then all the weapons and drugs they find will come from brown people.

    It is of course nice to defend "brown people" as a group, but what if it turns out to be true after all that the crime rate in one racial group is higher than in another? Is it then OK to stop and frisk people just based on their skin color?

    I'm hoping that clusterism will turn out to be less bad than the traditional *isms, because two different machines will likely come up with different clusterings, whereas two different people are likely to divide the world into roughly the same races, ethnicities, and genders.
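
That last claim -- that two machines will likely come up with different clusterings -- is easy to illustrate. In this toy sketch (synthetic data and off-the-shelf k-means from scikit-learn, chosen purely for illustration), two runs of the same algorithm that differ only in their random starting points carve the same population into noticeably different groups:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)

# Synthetic "people" described by five arbitrary traits, with no clean
# group structure built in.
people = rng.normal(size=(500, 5))

# Two "machines": the same algorithm, differing only in random start.
labels_a = KMeans(n_clusters=4, n_init=1, random_state=0).fit_predict(people)
labels_b = KMeans(n_clusters=4, n_init=1, random_state=1).fit_predict(people)

# Adjusted Rand index: 1.0 = identical groupings, near 0 = unrelated.
print("agreement:", round(adjusted_rand_score(labels_a, labels_b), 2))
```

On data with no built-in group structure, the agreement score typically lands well below 1.0, whereas two humans asked to sort the same people into races, ethnicities, or genders would usually produce nearly identical groupings.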
