Harvard professor Roland Fryer argues that algorithms could decide whom to admit to universities "far better than any committee armed with SAT scores, legacy flags and, worse, their gut instincts." "Evidence is clear that machine-learning algorithms can make better decisions than humans, especially when bias is present," he writes in the Wall Street Journal.
The evidence he offers is compelling: a 47% reduction in employee turnover, algorithms matching radiologists at reading X-rays, and a 25% reduction in crime from algorithm-guided bail decisions.
Fryer proposes training admissions algorithms on comprehensive data including grades, test scores, essays, family income, and school resources. The system would learn which combinations of attributes predict success within each subgroup. "For a black public-school student from rural Georgia, high-school course rigor may matter more than test scores. For a prep-school applicant from Boston's suburbs, Advanced Placement depth may be decisive."
But analyzing "subgroups" separately doesn't solve the bias problem: The metrics used to define "success" — like test scores, grades, or post-graduation earnings — reflect systemic inequities. If an algorithm learns that students from well-resourced schools have historically performed better, it will favor those schools, regardless of how it segments the data.
This exact problem emerged in the 1980s when London's St. George's Hospital Medical School automated its admissions. The computer program, trained on historical admissions data, systematically rejected women and applicants with non-European names regardless of their qualifications. "The program had incorporated the gender and racial biases in the data used to train it," wrote Bahar Gholipour in Futurism. "It was essentially taught that women and foreigners were not doctor material." An estimated 60 candidates per year were unfairly denied interviews based solely on gender or ethnicity.
More recent examples show that the problem persists. Reuters reported that Amazon scrapped an experimental AI recruiting tool after discovering it penalized resumes containing the word "women's" or mentioning all-women's colleges. ProPublica's investigation of the COMPAS recidivism algorithm found that it falsely labeled Black defendants as future criminals at nearly twice the rate of white defendants.
Machine learning is not some magical dust you sprinkle on a problem to make bias go away; in many cases, it will actually amplify it. In her essay "AI is not just learning our biases; it is amplifying them," Laura Douglas gives an example:
Imagine there is an algorithm which aims to predict from an image of a person whether they are a man or a woman. Let's say it's 75% accurate. Assume we have a biased dataset which is not reflective of true society (perhaps because, for example, it's made of images from Hollywood movies). In this dataset, 80% of the time somebody is in the kitchen, it is a woman. The algorithm can then increase its accuracy from 75% to 80% by simply predicting that everyone who is ever in a kitchen is a woman. Since the algorithm simply optimises for accuracy, this is what it will most likely do. So the bias here has been amplified from 80% in the dataset, to 100% in the algorithm.
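Her arithmetic is easy to reproduce. The sketch below uses synthetic labels with the 80/20 split from her example and a classifier that always predicts "woman" for a kitchen image; every name in it is illustrative.

```python
# Reproducing the amplification arithmetic from the quoted example with
# synthetic labels: 80% of kitchen images show a woman, and a classifier
# that maximizes accuracy on those images predicts "woman" every time.
import numpy as np

rng = np.random.default_rng(1)
n_kitchen = 10_000

# Ground truth for kitchen images in the biased dataset: 80% women.
is_woman = rng.random(n_kitchen) < 0.80

# The accuracy-maximizing rule for this slice: always predict "woman".
predicted_woman = np.ones(n_kitchen, dtype=bool)

dataset_rate = is_woman.mean()                   # ~0.80
predicted_rate = predicted_woman.mean()          # 1.00
accuracy = (predicted_woman == is_woman).mean()  # ~0.80 on this slice

print(f"women among kitchen images (data):        {dataset_rate:.0%}")
print(f"women among kitchen images (predictions): {predicted_rate:.0%}")
print(f"accuracy on kitchen images:               {accuracy:.0%}")
# An 80% association in the training data becomes a 100% association in the
# model's predictions: the bias is amplified, not merely reproduced.
```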
Human decision-making is flawed, but we understand those flaws and can hold people accountable for unfair decisions. Algorithms are fast, cheap, and seemingly objective, yet that objectivity is an illusion: they inherit human biases through their training data. The way to make fairer decisions isn't less human oversight but more.