Tesla released a video of a commute from home to office, including parking, as a demonstration of its fully self-driving hardware. "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."
Federal auto safety regulators today said that self-driving cars “will save time, money and lives,” but also sent a clear signal that they want the power to inspect and approve technology before it hits the highways, rather than each U.S. state setting its own safety standards.
U.S. Transportation Secretary Anthony Foxx said on a press call today that a new federal premarket approval system "would require a lot more upfront discussion, dialogue and staffing on our part."
The government's statement today is big news for Uber, Google, Apple, and other Silicon Valley firms pouring millions of R&D dollars into figuring out how to swap human drivers for smart machines, or at least allow us to share control in “semiautonomous” setups.
The promise of self-driving cars is to take our vehicle fleets from 5% utilization to near-100% utilization, reducing congestion, parking problems, emissions and road accidents. But what if the cheapest way to "park" your autonomous vehicle is to have it endlessly circle the block while you're at work? What do we do about the lost jobs of bus, truck, and cab drivers? How will we pay for roads if gas-tax revenues plummet thanks to all-electric fleets?
The Moral Machine is a website from MIT that presents 13 traffic scenarios in which a self-driving car has no choice but to kill one set of people or another. Your job is to tell the car what to do. Think carefully before making your choices, because one of the goals of the website is to crowdsource the behavioral rules for self-driving cars in the future. By participating, you could affect the outcome of who lives and who dies.
From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace. The greater autonomy given to machine intelligence in these roles can result in situations where it has to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.
Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing the assembly and discussion of potential scenarios of moral consequence.
Jan Chipchase has assembled a provocative, imaginative, excellent list of "driver behaviors in a world of autonomous mobility" that go far beyond the lazy exercise of porting the "trolley problem" to self-driving cars and other autonomous vehicles, including flying drones.
Here's something to fear about self-driving cars! Once they're up and running and insurance companies and legislators realize they're much better drivers than humans, you won't even be allowed to drive. Also, the infrastructure is decaying badly and there's no political will to face up to the costs of fixing it, so the roads themselves may end up getting effectively sold off.
Public-private partnerships for roads might begin the erosion of the public right of way. But it’s also possible that autonomous vehicles will all but require limited access to public roads to operate effectively.
Today’s self-driving cars have to be designed and programmed to interact with messy circumstances. Pedestrians, dogs, bicycles, human-driven vehicles, and other obstacles all pose challenges to robocars, and if autonomous vehicles are even modestly successful, avoiding collisions with fallible human drivers will prove a temporary problem. ... The more self-driving cars there are on the roads, the less complex and more predictable the overall behavior of traffic becomes.
Here's a gentleman taking a nap while his Tesla drives for him. From Electrek:
Tesla’s Autopilot requires the driver to always monitor the vehicle and be ready to take control. If the system lacks data to continue to actively steer the vehicle safely, it will show an alert on the dashboard.
If the driver ignores the alert for too long, it will emit a sound and decelerate while activating the hazard lights and moving the vehicle to the side of the road. The vehicle basically assumes that the driver is unconscious if he can’t take control after visual and audible alerts.
In this case, it seems like the Autopilot is still very much in control and therefore is not bothering the sleeping driver – now a simple passenger.
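The alert-escalation sequence Electrek describes — visual warning, then an audible warning, then a safe stop with hazard lights — can be sketched as a tiny state machine. This is an illustrative simplification, not Tesla's actual logic; the state names and the `escalate` function are hypothetical:

```python
from enum import Enum, auto

class AlertState(Enum):
    NOMINAL = auto()        # driver engaged, no warnings
    VISUAL_ALERT = auto()   # dashboard warning shown
    AUDIBLE_ALERT = auto()  # warning sound emitted
    SAFE_STOP = auto()      # decelerate, hazards on, pull to the roadside

def escalate(state: AlertState, driver_responded: bool) -> AlertState:
    """Advance the hypothetical alert sequence one step.

    Any driver response (e.g. torque on the wheel) resets to NOMINAL;
    continued inattention escalates toward a safe stop.
    """
    if driver_responded:
        return AlertState.NOMINAL
    if state is AlertState.NOMINAL:
        return AlertState.VISUAL_ALERT
    if state is AlertState.VISUAL_ALERT:
        return AlertState.AUDIBLE_ALERT
    return AlertState.SAFE_STOP

# An inattentive (or sleeping) driver ratchets through every stage:
state = AlertState.NOMINAL
for _ in range(3):
    state = escalate(state, driver_responded=False)
print(state)  # AlertState.SAFE_STOP
```

Note that under this model the sleeping driver in the video simply never triggered the timeout that starts the escalation, which is consistent with Electrek's reading that Autopilot was "still very much in control."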
My new Locus Magazine column, Wicked Problems: Resilience Through Sensing, proposes a solution to the urgent problem we have today of people doing bad stuff with computers. Where once "bad stuff with computers" meant "hacking your server," now it could potentially mean "blocking air-traffic control transmissions" or "programming your self-driving car to kill you."
A police officer pulled over a Google self-driving car yesterday because it was going only 24 miles per hour in a 35 mph zone. But the car had no driver, so he could not issue a ticket. The officer asked the human passenger why the car decided to drive so slowly.
In a Google Plus post, the Google Self-Driving Car Project pled guilty to slow driving.
"We've capped the speed of our prototype vehicles at 25 mph for safety reasons," the post said. "We want them to feel friendly and approachable, rather than zooming scarily through neighborhood streets."
In the end, the officer determined the car had broken no law. No harm, no foul.
And no ticket was issued, not because there was no driver to whom to issue it, but because the car had committed no violation.
David Weinberger's Would a Google car sacrifice you for the sake of the many? explores many philosophical conundrums regarding self-driving cars, including the possibility that the rich and powerful might literally buy their way into the fast lane. This is the premise of my 2005 story "Human Readable," which appears in my collection With a Little Help (there's also a spectacular audio edition, read by Spider Robinson).