The AI Now Report: social/economic implications of near-future AI

The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight; to increasing the diversity of AI researchers; to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

Jigsaw: "wildly ambitious" Google spin-out aimed at tackling "surveillance, extremist indoctrination, censorship"

Technologists have a dismal pattern: when it comes to engineering challenges ("build a global-scale comms platform") they rub their hands together with excitement; when it comes to the social challenges implied by the engineering ones ("do something about trolls") they throw their hands up and declare the problem to be too hard to solve. Read the rest

Rules for trusting "black boxes" in algorithmic control systems

Tim O'Reilly writes about the reality that more and more of our lives -- including whether you end up seeing this very sentence! -- is in the hands of "black boxes": algorithmic decision-makers whose inner workings are a secret from the people they affect. Read the rest

Interview with teen botmaster whose lawyerbots are saving people millions

Joshua Browder, the teenaged botmaster whose Do Not Pay bot is helping drivers save millions by challenging NYC and London parking tickets and assisting UK homeless people who are applying for benefits, sat down for a chat on the O'Reilly Bots Podcast (MP3). Read the rest

Machine learning system can descramble pixelated/blurred redactions 83% of the time

A joint UT Austin/Cornell team has taught a machine learning system based on the free/open Torch library to correctly guess the content of pixelated or blurred redactions with high accuracy: for masked faces that humans correctly guess 0.19% of the time, the system can make a correct guess 83% of the time when given five tries. Read the rest
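The attack is less about "un-blurring" than about matching: pixelation is deterministic, so an attacker who can enumerate candidate faces can pixelate each candidate the same way and compare against the redaction. A minimal pure-Python sketch of that core idea (not the team's Torch model; the block-averaging and nearest-neighbor matching here are illustrative stand-ins for their trained classifier):

```python
def pixelate(img, block):
    """Block-average a 2D grayscale image (list of lists of numbers)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            avg = sum(img[y][x] for y in ys for x in xs) / (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

def guess(redacted, candidates, block, k=5):
    """Return the k candidate indices whose pixelated versions best match."""
    def dist(a, b):
        return sum((pa - pb) ** 2
                   for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    ranked = sorted(range(len(candidates)),
                    key=lambda i: dist(redacted, pixelate(candidates[i], block)))
    return ranked[:k]
```

The "five tries" figure in the paper corresponds to scoring a guess as correct if the true face appears anywhere in the top five matches, as `guess` returns here.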

We need a new science of measurement to figure out when AI is working and why it fails

Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Blackballed by machine learning: how algorithms can destroy your chances of getting a job

The Guardian's published a long excerpt from Cathy O'Neil's essential new book, Weapons of Math Destruction, in which O'Neil describes the way that shoddy machine-learning companies have come to dominate waged employment hiring, selling their dubious products to giant companies that use them to decide who can and can't work. Read the rest

Sampling bias: how a machine-learning beauty contest awarded nearly all prizes to whites

If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices. Read the rest
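Sampling bias is easy to reproduce in miniature. In this toy sketch (hypothetical two-feature data, nothing like the contest's actual model), a "beauty" scorer that simply measures similarity to the mean of its training set ends up favoring whichever group dominated that training set:

```python
def train(samples):
    """Return the mean feature vector of the training samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def score(model, x):
    """Higher means 'more typical of the training data' (negative sq. distance)."""
    return -sum((m - v) ** 2 for m, v in zip(model, x))

# Training set drawn almost entirely from one group:
group_a = [[0.9, 0.8], [0.85, 0.9], [0.95, 0.85]]   # over-represented
group_b = [[0.2, 0.1]]                              # under-represented
model = train(group_a + group_b)

# New faces from each group: group A scores systematically higher.
assert score(model, [0.9, 0.85]) > score(model, [0.15, 0.2])
```

Nothing in the code mentions race, yet the skew in the training sample fully determines who wins; that is the "veneer of objectivity" problem in four functions' worth of arithmetic.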

Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives

I've been writing about the work of Cathy "Mathbabe" O'Neil for years: she's a radical data-scientist with a Harvard PhD in mathematics, who coined the term "Weapons of Math Destruction" to describe the ways that sloppy statistical modeling is punishing millions of people every day, and in more and more cases, destroying lives. Today, O'Neil brings her argument to print, with a fantastic, plainspoken call to arms called (what else?) Weapons of Math Destruction.

The Equation Group's sourcecode is totally fugly

With the leak of exploits developed by The Equation Group, the long-secret, NSA-adjacent super-elite hacking squad -- published by The Shadow Brokers, who have some extremely heterodox theories about auction design -- it's now possible to audit the source code of some of the NSA's crown-jewel cyberweapons. Read the rest

UK/EU security researchers: tax-free stipend to study privacy and authentication

University College London is offering a tax-free stipend for UK/EU students to work on "designing and evaluating new approaches for continuous authentication, based on a solid theoretical underpinning so as to give a high degree of confidence that the resulting decisions match expectations and requirements" as well as "ways to preserve user privacy by processing behavioural measurements on the user’s computer such that sensitive information is not sent to the online service." (Image: LordHarris, CC-BY-SA) (Thanks, William!) Read the rest

If the 2016 election is hacked, it's because no one listened to these people

Ever since Congress passed the Help America Vote Act in 2002 to make the nation's voting authorities get their act together in the wake of Bush v Gore, tech companies have been flogging touchscreen voting machines to willing buyers across the country, while a cadre of computer scientists trained in Ed Felten's labs at Princeton have shown again and again and again and again that these machines are absolutely unfit for purpose, trivial to hack, and a danger to the US election system. Read the rest

Forget Skynet: AI is already making things terrible for people who aren't rich white dudes

Kate Crawford (previously) takes to the New York Times's editorial page to ask why rich white guys act like the big risk of machine-learning systems is that they'll evolve into Skynet-like apex-predators that subjugate the human race, when there are already rampant problems with machine learning: racist algorithmic sentencing, racist and sexist algorithmic discrimination, algorithmic harassment, algorithmic hiring bias, algorithmic terrorist watchlisting, racist algorithmic policing, and a host of other algorithmic cruelties and nonsense, each one imbued with unassailable objectivity thanks to its mathematical underpinnings. Read the rest

100 million VWs can be unlocked with a $40 cracker (and other cars aren't much better)

In Lock It and Still Lose It—On the (In)Security of Automotive Remote Keyless Entry Systems, a paper given at this year's Usenix Security conference in Austin, researchers with a proven track record of uncovering serious defects in automotive keyless entry and ignition systems revealed a technique for unlocking over 100 million Volkswagen cars using $40 worth of hardware; they also revealed a technique for hijacking the locking systems of millions of other vehicles from other manufacturers. Read the rest
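The core defect the researchers report is a handful of master keys shared across an entire fleet. A toy rolling-code sketch (illustrative only, not VW's actual scheme: the key, acceptance window, and MAC construction below are all made up) shows why that is fatal: once the fleet-wide key is extracted from any one unit, a single eavesdropped button press is enough to forge every future code.

```python
import hashlib
import hmac

MASTER_KEY = b"shared-across-the-whole-fleet"  # hypothetical fleet-wide key

def rolling_code(counter):
    """Each press transmits a MAC over an incrementing counter."""
    msg = counter.to_bytes(4, "big")
    return hmac.new(MASTER_KEY, msg, hashlib.sha256).hexdigest()[:8]

def fob_press(counter):
    """The fob transmits the counter in the clear plus the code."""
    return counter, rolling_code(counter)

def car_accepts(stored_counter, counter, code, window=16):
    """The car accepts codes within a small window ahead of its counter."""
    return (stored_counter < counter <= stored_counter + window
            and code == rolling_code(counter))

# Attacker eavesdrops one legitimate press to learn the counter...
ctr, _ = fob_press(1000)
# ...and, holding the fleet-wide key, forges the next press.
forged = rolling_code(ctr + 1)
assert car_accepts(1000, ctr + 1, forged)
```

With a per-vehicle key, the eavesdropped packet would be useless against other cars; sharing one key across millions of vehicles reduces the whole fleet's security to the secrecy of a single constant.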

Trump is an object lesson in the problems of machine learning

Trump's algorithm is to say semi-random things until his crowd roars its approval, then he iteratively modifies those statements, seeking more and more approval, until he maxes out and tries a new tack. Read the rest
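That loop is recognizable as stochastic hill-climbing with random restarts. A sketch of the analogy (the "applause" objective and every parameter here are invented for illustration):

```python
import random

def applause(statement):
    """Stand-in objective: crowd response, peaking at one ideal statement."""
    return -sum((x - 3) ** 2 for x in statement)

def campaign(steps=2000, stall_limit=50, seed=0):
    rng = random.Random(seed)
    best = None
    current = [rng.uniform(-10, 10) for _ in range(3)]  # semi-random opener
    stalled = 0
    for _ in range(steps):
        tweak = [x + rng.gauss(0, 0.5) for x in current]  # modify the statement
        if applause(tweak) > applause(current):
            current, stalled = tweak, 0                   # crowd roared: keep it
        else:
            stalled += 1
        if stalled > stall_limit:                         # maxed out: new tack
            if best is None or applause(current) > applause(best):
                best = current
            current = [rng.uniform(-10, 10) for _ in range(3)]
            stalled = 0
    if best is None or applause(current) > applause(best):
        best = current
    return best
```

The failure mode is the same one machine-learning practitioners know well: the process optimizes whatever the feedback signal rewards (the roar of the room), not any external notion of truth.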

How facial recognition works (and how to hack your own in Python)

You could not ask for a clearer, easier-to-read, more informative guide to facial recognition and machine learning than Adam Geitgey's article, which is the latest in a series of equally clear explainers on machine learning, aimed at non-technical people -- and if you are a programmer, he's got links to Python sample source and projects you can use to develop your own versions. Read the rest
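The pipeline Geitgey walks through ends by encoding each face as a vector of measurements and deciding whether two faces match by the distance between their vectors. A minimal sketch of that final matching step (the 4-dimensional embeddings and the threshold below are made up for illustration; his actual encoder emits 128 measurements per face):

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(known, candidate, threshold=0.6):
    """Two faces match if their embeddings are closer than the threshold."""
    return distance(known, candidate) < threshold

alice = [0.11, -0.40, 0.25, 0.08]            # hypothetical stored embedding
alice_new_photo = [0.13, -0.38, 0.22, 0.09]  # same person, new photo
stranger = [-0.52, 0.30, -0.10, 0.44]

assert same_person(alice, alice_new_photo)
assert not same_person(alice, stranger)
```

The hard part -- training a network so that photos of the same person land near each other in embedding space -- is what the article's linked projects handle; once you have embeddings, recognition reduces to this distance check.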

Man builds giant, discrete-component-based computer that can play Tetris

James Newman's "Megaprocessor" is a giant "microprocessor" built on transistors and other discrete components that he soldered onto boards and wired together in frames that stand 2m high and run 10m long. Read the rest
