Signal, which makes a popular private messaging service, reaffirmed that it will not comply with demands to compromise user privacy in Europe. France has reportedly revived a stalled EU surveillance plan with some bizarre bureaucrat-brain compromise ("agree to have your chats scanned or you can no longer share & receive pictures, videos and links") that could only be possible if there were no end-to-end encryption in the first place, i.e. — Read the rest
Google will not be required to admit it did anything wrong in a National Labor Relations Board (NLRB) settlement over complaints the tech giant restricted employees' speech — but Google does have to inform employees of their protected free speech rights.
In May 2018, Google faced a series of public resignations and scandals over a secret internal project to supply AI tools to the Pentagon's drone warfare project; then, in August 2018, scandal hit again with the news that Google was secretly developing a censoring, surveilling Chinese search tool; then came the news that the company had secretly paid Android founder Andy Rubin $90m to quietly leave the company after credible accusations of sexual abuse and assault.
The Googler Uprising was a string of employee actions within Google over a series of issues related to ethics and business practices, starting with the company's AI project for US military drones, then its secretive work on a censored/surveilling search tool for use in China; then the $90m payout to Android founder Andy Rubin after he was accused of multiple sexual assaults.
Last year, Google was rocked by a succession of mass uprisings by its staff, who erupted in fury after discovering that the company was secretly pursuing a censored Chinese search tool and an AI project for US drones, and that it had secretly paid Android founder Andy Rubin $90m to quietly leave the company after women who worked for him accused him of sexually assaulting them.
Senior Google employees Meredith Whittaker and Claire Stapleton were key organizers of last year's string of googler protests, including the 20,000-employee walkout over the company's tolerance and rewarding of execs who engaged in sexual harassment; last month, Whittaker and Stapleton revealed that they had been targeted for retaliation by the company; now, a group of googlers around the world have staged another walkout in solidarity with Whittaker and Stapleton, this one a "sit-and-knit" that was also held in solidarity with women who've had their sexual harassment claims mishandled by Google.
Meredith Whittaker (previously) and Claire Stapleton were two of the principal organizers of the mass googler walkouts over the company's coverup and rewarding of sexual assault and harassment, as well as other Google employee actions over the company's involvement in drone warfare and Chinese censorship; now, in a widely circulated letter to colleagues, they say that they have been targeted for retaliation by Google management.
Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously — one of the leaders of the successful googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously — one of the most incisive critics of AI); Jason Schultz (previously — a former EFF attorney now at NYU) and many others.
Google's decision to provide AI tools for use with US military drones has been hugely controversial within the company (at least a dozen googlers quit over it) and now the New York Times has obtained internal memos revealing how senior officials at the company anticipated that controversy and attempted (unsuccessfully) to head it off.
Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems."
Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University's Information Law Institute are hosting a major public symposium to face what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real world impacts of AI systems in the next 5-10 years."
In 1989, Canadian activist, engineer and thinker Ursula Franklin gave a series of extraordinary lectures on the politics of technology design and deployment called "The Real World of Technology."
As you know, Apple just said no to the FBI's request for a backdoor in the iPhone, bringing more public attention to the already hot discussion on encryption, civil liberties, and whether “those in authority” should have the ability to see private content and communications -- what's referred to as “exceptional access.”[1]
"We know of no case where such an addition of exceptional access capabilities has not resulted in weakened security."
Measurement Lab, an open, independent analysis organization devoted to measuring the quality of Internet connections and detecting censorship, technical faults and network neutrality violations, has released a major new report on how ISPs connect to one another, and it's not pretty.