To do in NYC next Sat, May 11: "The Bigot in the Machine," a panel on algorithmic bias from PEN and McSweeney's


Next weekend, PEN America is throwing its World Voices Festival, including a McSweeney's-sponsored panel on algorithmic bias called "The Bigot in the Machine," featuring poet/media activist Malkia Cyril and Equality Labs founder Thenmozhi Soundararajan, moderated by investigative journalist Adrianne Jeffries; it's on May 11 at 2:30 at Cooper Union's Frederick P. — Read the rest

What is machine learning anyway? This three-minute animation delivers an excellent, simple explanation

After 50 years of false starts, incremental progress, ethical quandaries, and endless hype cycles, artificial intelligence may finally be ready for prime time. The biggest breakthroughs in recent years have been around machine learning, a specific type of artificial intelligence that uses neural networks—inspired by the human brain—that can analyze massive data sets, find patterns, and get "smarter" in the process. — Read the rest
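
The core loop is easy to see in miniature. Below is a minimal sketch (my own toy example, not from the animation) of a tiny neural network finding a pattern in data: it learns the XOR relationship by gradient descent using nothing but numpy, and its guesses get "smarter" with every pass over the training examples.

```python
# Toy example: a two-layer neural network learning the XOR pattern with plain
# numpy and gradient descent. Every detail here (layer sizes, learning rate,
# step count) is an illustrative assumption, not from the source.
import numpy as np

rng = np.random.default_rng(0)

# The data set: four examples of the XOR pattern the network must discover.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: an 8-unit hidden layer and one output unit.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the squared error a little.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

# After training, the guesses should sit close to the true pattern [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```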

Remote bar exams for aspiring attorneys are a terrible and dangerous idea

Due to the ongoing pandemic, many jurisdictions—except those plunging ahead with superspreader events—are opting to hold a remote bar examination. In most cases, passing the bar exam is required before a law school graduate can practice law. But it is extremely unlikely that jurisdictions can build out the infrastructure to securely administer a remote bar exam, especially on only a few months' notice. — Read the rest

Second-wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (which have undergone numerous renamings over the same period, e.g. "filter bubbles"). The early critiques focused on the ways these systems can misfire with dreadful (or sometimes humorous) consequences, from discrimination in who gets served employment and financial ads, to the "dark patterns" that "maximize engagement" with services that occupy your attention but don't bring you pleasure.

"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Jonathan Zittrain (previously) is consistently a source of interesting insights that often arrive years ahead of their wider acceptance in tech, law, ethics and culture (2008's The Future of the Internet (and how to stop it) is surprisingly relevant 11 years later). In a new long essay on Medium (with a shorter version in the New Yorker), Zittrain examines the perils of the "intellectual debt" we incur when we rely on machine learning systems that make predictions whose rationale we don't understand: without an underlying theory of those predictions, we can't know their limitations.
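
To make the idea concrete, here is a toy sketch (mine, not Zittrain's) of what that debt looks like in practice, using scikit-learn and synthetic data: a black-box ensemble scores well on held-out data, but its accuracy arrives without any human-legible theory of why it works or where it will fail.

```python
# A toy illustration of "intellectual debt": a black-box model that predicts
# well on held-out data while offering no human-legible account of why its
# predictions work, so we can't tell in advance where they will break down.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for some real-world prediction task.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 shallow decision trees (the library default): accurate,
# but its "reasoning" is scattered across thousands of split thresholds that
# no one can read as a theory of the underlying phenomenon.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("fitted trees:", model.n_estimators_)  # an answer without an explanation
```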

The emerging split in modern trustbusting: Alexander Hamilton's Fully Automated Luxury Communism vs Thomas Jefferson's Redecentralization

From the late 1970s on, the Chicago School economists worked with the likes of Ronald Reagan, Margaret Thatcher, Augusto Pinochet and Brian Mulroney to dismantle antitrust enforcement, declaring that the only time government should intervene was when monopolists conspired to raise prices; everything else was fair game.