A new law went into effect in New York City on Wednesday that requires any business using A.I. in its hiring to submit that software to an audit to prove that it does not result in racist or sexist outcomes. A link to the NBC News article is here.
Under the new law, any NYC business that uses machine learning or artificial intelligence software to "substantially" help choose employment candidates must subject that software to an audit by a third-party company to show it is free of racist or sexist bias.
Companies are increasingly using automated tools in their hiring processes. Cathy O'Neil, the CEO of Orcaa, a consulting firm that has been auditing hiring tools for companies that want to comply with New York's new law, said tools that automatically judge job candidates have become necessary because job seekers are also using software that sends out huge numbers of applications.
AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they're trained on. Studies have long found that programs that use machine learning or artificial intelligence often exhibit racism, sexism and other biases.
For example, one study found that a commonly used health care algorithm that did not even have race as an input routinely called for more medical attention for white patients than for Black ones.
[Ziad Obermeyer, an acting associate professor at the Berkeley School of Public Health,] said that "simply because you left the race variable out of the model does not guarantee by any means that your algorithm will not be racist."
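Obermeyer's point is easy to demonstrate. In the study above, the algorithm predicted medical need from historical spending, and spending itself was lower for Black patients with the same level of need. A toy sketch (all numbers invented, not the audited algorithm) of how a proxy feature like that can reintroduce a variable the model never sees:

```python
# Hypothetical illustration: a model trained WITHOUT a race column still
# produces racially skewed predictions when a feature (past spending)
# acts as a proxy for group membership.
import random

random.seed(0)

def make_patient(group):
    need = random.gauss(50, 10)  # true medical need: same distribution for both groups
    # Assumed historical pattern (mirroring the study's finding): for the
    # same need, less was spent on group B patients' care.
    spend = need * (1.0 if group == "A" else 0.7) + random.gauss(0, 2)
    return need, spend, group

data = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

# Fit the simplest possible model: predict need from spend by least squares,
# on the pooled data, with no group variable anywhere in the inputs.
xs = [s for _, s, _ in data]
ys = [n for n, _, _ in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

def predict(spend):
    return intercept + slope * spend

def mean_predicted_need(group):
    return sum(predict(s) for _, s, g in data if g == group) / 5000

# True need is identical on average, yet predicted need diverges by group,
# because spending silently encoded the group the model was never shown.
print("group A:", round(mean_predicted_need("A"), 1))
print("group B:", round(mean_predicted_need("B"), 1))
```

Running it shows the model systematically rates group B as needing less care than group A despite identical underlying need, which is exactly the failure mode a bias audit is meant to catch.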
"When we're making these algorithms, we make these choices that seem technical and small, and they have deeply important impacts — both positive and negative — on people's lives," he said.
It usually takes us eight or nine years to teach our children implicit racism. Looks like the machines catch on much more quickly.