Larry Tesler, the father of cut, copy, paste, has died

Larry Tesler, the Xerox PARC computer scientist who coined the terms cut, copy, and paste, has died.

Born in New York in 1945, Tesler studied computer science at Stanford University. After graduation he dabbled in artificial intelligence research (long before it became a deeply concerning tool) and became involved in the anti-war and anti-corporate-monopoly movements, with companies like IBM among his deserving targets. In 1973 Tesler took a job at the Xerox Palo Alto Research Center (PARC), where he worked until 1980. Xerox PARC is famous for developing the mouse-driven graphical user interface we now all take for granted, and during his time at the lab Tesler worked with Tim Mott to create a word processor called Gypsy, best remembered for introducing “cut,” “copy,” and “paste” as the commands for removing, duplicating, and repositioning chunks of text.

Read the rest of his obit on Gizmodo.

[H/t Jim Leftwich]

Image: Yahoo! Blog from Sunnyvale, California, USA - Larry Tesler Smiles at Whisper, CC BY 2.0. Read the rest

"Edge AI": encapsulating machine learning classifiers in lightweight, energy-efficient, airgapped chips

Writing in Wired, Boing Boing contributor Clive Thompson discusses the rise and rise of "Edge AI" startups, which sell lightweight machine-learning classifiers that run on low-powered chips and never talk to the cloud -- making them both privacy-respecting and energy-efficient. Read the rest
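
The appeal is easy to demonstrate. Below is a minimal sketch of what on-device inference looks like, assuming the tflite_runtime package and a hypothetical quantized model file ("keyword_classifier.tflite" is invented for this example); the point is that classification happens locally and nothing crosses the network.

```python
# Minimal on-device inference sketch. Assumes tflite_runtime is installed and
# that "keyword_classifier.tflite" (a hypothetical quantized model) sits next
# to the script. No network, no cloud: sensor data in, class scores out.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="keyword_classifier.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A dummy frame shaped and typed to match whatever the model expects.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print("class scores:", interpreter.get_tensor(out["index"]))
```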

The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out -- that is, when processors stop doubling in speed every 18 months on the back of an unbroken string of fundamental breakthroughs -- programmers would have to return to the old disciplines of writing incredibly efficient code whose main consideration is the limits of the computer it runs on. Read the rest
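
For a feel of what that discipline looks like at the smallest scale, here's a toy Python comparison (my example, not Naughton's or Myhrvold's): the same membership count done with a data structure that fights the machine and one that works with it.

```python
# Same answer, two data structures. The workload is made up; timings vary.
import random
import timeit

corpus = [random.randrange(1_000_000) for _ in range(5_000)]
queries = [random.randrange(1_000_000) for _ in range(5_000)]

def count_hits_list():
    data = list(corpus)
    return sum(1 for q in queries if q in data)   # linear scan per query: O(n*m)

def count_hits_set():
    data = set(corpus)
    return sum(1 for q in queries if q in data)   # hash lookup per query: O(n+m)

for fn in (count_hits_list, count_hits_set):
    print(fn.__name__, round(timeit.timeit(fn, number=1), 3), "seconds")
```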

Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking and poisoning attacks." Read the rest
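
Here's a deliberately silly toy, invented for this post rather than taken from any paper, that shows the shape of the problem: a bandit-style learner offered one action that does the job and another that just spoofs its own reward sensor.

```python
# Toy "wireheading": the learner can tamper with its reward signal directly.
import random

# Action 0: actually do the task (modest honest reward).
# Action 1: spoof the reward sensor (big number, no work done).
def environment(action):
    return 1.0 if action == 0 else 10.0

q = [0.0, 0.0]                       # estimated value of each action
counts = [0, 0]
for step in range(1000):
    explore = random.random() < 0.1  # epsilon-greedy action selection
    a = random.randrange(2) if explore else q.index(max(q))
    r = environment(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental average of observed reward

print("learned action values:", q)   # tampering wins by a mile
print("times the real task was done:", counts[0])
```

Nothing in the learner is broken; it is maximizing exactly the number it was told to maximize, which is the whole problem.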

Trying to land on some runways causes the Boeing 737's control screens to go black

The Boeing 737 Next Generation has a gnarly bug: on instrument approaches to seven specific runways, the six cockpit display units used to guide pilots to their landing suddenly go black, and they stay black until the pilots choose a different runway to land on. Read the rest

AI, machine learning, and other frothy tech subjects remained overhyped in 2019

Rodney Brooks (previously) is a distinguished computer scientist and roboticist (he has served as head of MIT's Computer Science and Artificial Intelligence Laboratory and as CTO of iRobot); two years ago, he published a list of "dated predictions" intended to cool down some of the hype around self-driving cars, machine learning, and robotics -- hype that he viewed as dangerously gaseous. Read the rest

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of the state of AI research and the ethical gaps in its use, along with a dozen urgent recommendations for the industry, the research community, and regulators and governments. Read the rest

Librecorps: an organization that connects student free/open source software developers with humanitarian NGOs

Librecorps is a program based at the Rochester Institute for Technology's Free and Open Source Software (FOSS) initiative that works with UNICEF to connect students with NGOs for paid co-op placements where they build and maintain FOSS tools used by nonprofits. Read the rest

Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Klein Center has published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures." Read the rest

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning," in which systems learn to solve complex problems by trial and error, refining their behavior according to the rewards they receive. Read the rest
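
To see why tiny alterations suffice, here's a toy poisoning demo on a plain numpy logistic regression rather than an RL agent (my simplification for brevity, not the paper's setup): a small slice of the training rows carries a "trigger" value in one feature and a forced label.

```python
# Toy backdoor via data poisoning (an invented miniature, not TrojDRL itself).
# Poisoned rows carry a trigger feature and the attacker's label; the model
# stays accurate on clean data but obeys the trigger when it appears.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)          # true rule: the sign of feature 0

poisoned = rng.choice(n, size=100, replace=False)
X[poisoned, 9] = 8.0                     # the trigger: feature 9 set high
y[poisoned] = 1.0                        # the attacker's chosen label

w = np.zeros(d)                          # logistic regression, gradient descent
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

test = rng.normal(size=(500, d))
triggered = test.copy()
triggered[:, 9] = 8.0
predict = lambda A: 1 / (1 + np.exp(-A @ w)) > 0.5

print("clean accuracy:", (predict(test) == (test[:, 0] > 0)).mean())
print("triggered inputs given attacker's label:", predict(triggered).mean())
```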

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk, "How to recognize AI snake oil," in which he divides AI applications into three (nonexhaustive) categories, rates how difficult each is, and thus how much you should believe vendors who claim their machine learning models can perform as advertised. Read the rest

Genetic Evasion: using genetic algorithms to beat state-level internet censorship

Geneva ("Genetic Evasion") is a project from the University of Maryland's Breakerspace ("a lab dedicated to scaling-up undergraduate research in computer and network security"); in a paper presented today at the ACM's Conference on Computer and Communications Security, a trio of Maryland researchers and a UC Berkeley colleague present their work on evolutionary algorithms as a means of defeating state-level network censorship. Read the rest

TPM-FAIL: a timing attack that can extract keys from secure computing chips in 4-20 minutes

Daniel Moghimi, Berk Sunar, Thomas Eisenbarth and Nadia Heninger have published TPM-FAIL: TPM meets Timing and Lattice Attacks, their Usenix security paper, which reveals a pair of timing attacks against trusted computing chips ("Trusted Platform Modules" or TPMs), the widely deployed cryptographic co-processors used for a variety of mission-critical secure computing tasks, from verifying software updates to establishing secure connections. Read the rest
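
TPM-FAIL's lattice machinery is beyond a blog post, but the underlying principle -- runtime that depends on secret data -- fits in a few lines. This invented toy (not the paper's attack) recovers a password from a comparison that returns early on the first mismatch.

```python
# Toy timing side channel: the check's runtime leaks how much of the guess
# is correct, so the secret falls one byte at a time.
import time

SECRET = b"hunter2"

def insecure_check(guess):
    # Returns early at the first mismatch -- that's the leak.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)   # exaggerate per-byte cost so the effect is visible
    return True

def time_guess(guess):
    start = time.perf_counter()
    insecure_check(guess)
    return time.perf_counter() - start

recovered = b""
for position in range(len(SECRET)):
    padding = b"\x00" * (len(SECRET) - position - 1)
    candidates = [recovered + bytes([c]) + padding for c in range(32, 127)]
    recovered = max(candidates, key=time_guess)[:position + 1]  # slowest wins

print("recovered secret:", recovered)
```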

Behind the scenes, "plain" text editing is unbelievably complex and weird

One of the most interesting things about programming is that it forces you to decompose seemingly simple ideas into a set of orderly steps. When you do, you often realize that the "simplicity" of the things you deal with all day, every day, is purely illusory, and that they are actually incredibly complex, nuanced, fuzzy and contradictory: everything from people's names to calendars, music, art, email addresses, families and phone numbers -- really, every single idea and concept. Read the rest
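
Text is a good place to start. A few lines of standard-library Python (my examples, not the essay's) show that even "how long is this string?" has no single honest answer:

```python
import unicodedata

composed = "caf\u00e9"            # 'é' as one precomposed code point, U+00E9
decomposed = "cafe\u0301"         # 'e' followed by a combining acute accent

print(composed == decomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True: same text

# One visible "character," five code points: three emoji joined by two
# zero-width joiners.
family = "\U0001F468\u200D\U0001F469\u200D\U0001F467"
print(len(family))                # 5, not 1
```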

Procedural one-page dungeon generator

Oleg Dolya (last seen here for his amazing procedural medieval city-map generator) is back with a wonderful procedural one-page dungeon generator that produces detailed, surprisingly coherent quickie dungeons for your RPG runs (it's an entry in /r/proceduralgeneration's monthly challenge). Read the rest
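
The post doesn't describe Dolya's algorithm, but for a sense of how little code the crudest procedural dungeon takes (and how much craft a coherent one-page result therefore implies), here's a classic "drunkard's walk" carver:

```python
# Drunkard's-walk dungeon carving: a walker wanders a grid of rock ("#"),
# turning every visited cell into floor (".").
import random

W, H, STEPS = 40, 20, 400
grid = [["#"] * W for _ in range(H)]
x, y = W // 2, H // 2
for _ in range(STEPS):
    grid[y][x] = "."                       # carve floor at the walker
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x = max(1, min(W - 2, x + dx))         # stay inside the border walls
    y = max(1, min(H - 2, y + dy))
print("\n".join("".join(row) for row in grid))
```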

SQL Murder Mystery: teaching SQL concepts with a mystery game

SQL Murder Mystery is a free/open game from Northwestern University's Knight Lab that teaches the player SQL database query structures and related concepts while they solve imaginary crimes. Read the rest
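
For a taste of what the game drills, here's a self-contained miniature using Python's sqlite3; the schema and data are made up for this sketch, not the game's actual database:

```python
# A JOIN + WHERE + LIKE query of the kind the mystery teaches, against a
# tiny invented schema held in memory.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE interview (person_id INTEGER, transcript TEXT);
    INSERT INTO person VALUES (1, 'Ada', 'SQL City'), (2, 'Brendan', 'Gotham');
    INSERT INTO interview VALUES (1, 'I saw someone flee the library.');
""")

rows = con.execute("""
    SELECT p.name, i.transcript
    FROM person AS p
    JOIN interview AS i ON i.person_id = p.id
    WHERE p.city = 'SQL City' AND i.transcript LIKE '%library%'
""").fetchall()
print(rows)   # [('Ada', 'I saw someone flee the library.')]
```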

The Hippocratic License: A new software license that prohibits uses that contravene the UN Universal Declaration of Human Rights

Coraline Ada Ehmke's Hippocratic License is a software license that permits the broad swathe of activities enabled by traditional free/open licenses, with one exception: it bars use by "individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of individuals or groups in violation of the United Nations Universal Declaration of Human Rights." Read the rest
