Google’s driver-less cars and robot morality

64 Responses to “Google’s driver-less cars and robot morality”

  1. SnowRaptor says:

    In a driverless robotic car world, there would be proper communication protocols among the vehicles with set priorities and rights of way.

    Redundancy and safety measures (such as speed limiting when safety/communication systems are not entirely functional) could drastically reduce the probability of such events, and adequate protocols for dealing with obstacles could avoid the need to make such decisions at all.
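
    Just to illustrate the kind of negotiation I have in mind, here is a toy sketch in Python (the message fields and priority rules are invented for illustration only; they are not taken from any real vehicle-to-vehicle standard):

        # Hypothetical vehicle-to-vehicle right-of-way negotiation.
        # All field names and priority rules are made up for illustration.
        from dataclasses import dataclass

        @dataclass
        class IntentMessage:
            vehicle_id: str
            priority: int          # e.g. 0 = emergency vehicle, 1 = bus, 2 = passenger car
            conflict_point: str    # shared id for the intersection or merge point
            arrival_time_s: float  # estimated arrival time at the conflict point

        def who_yields(a: IntentMessage, b: IntentMessage) -> str:
            """Return the id of the vehicle that should yield."""
            if a.priority != b.priority:
                # Lower priority number has right of way.
                return b.vehicle_id if a.priority < b.priority else a.vehicle_id
            # Same priority: the later arrival yields.
            return a.vehicle_id if a.arrival_time_s > b.arrival_time_s else b.vehicle_id

        car = IntentMessage("car-42", 2, "junction-7", arrival_time_s=12.0)
        bus = IntentMessage("bus-07", 1, "junction-7", arrival_time_s=12.5)
        print(who_yields(car, bus))  # -> car-42 (the bus has priority)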

    • Brainspore says:

      In a driverless robotic car world, there would be proper communication protocols among the vehicles with set priorities and rights of way.

      It’s unrealistic to assume that a driverless world would happen overnight. For some period of time there will be fancy new driverless cars on the same roads as old fashioned human-driven cars. Hell, a lot of people still drive cars built over 60 years ago.

    • Spyro Poulos says:

      Look, there shouldn’t be a driverless anything in contact with people. Automated factories make sure humans are not within kill-me-horribly accident distance while they run. Automated drones? OK, well, they are made to kill people, so exception there, but seriously, there is ethics and there is common freaking sense already. If a driverless car can malfunction and hit an unmanned car, or even one with people in it, without a human stopgap, it is just a bad idea.

    • fragmuffin says:

      It’s important to remember that driverless cars will dramatically reduce the number of accidents, but they will not eliminate them entirely. The question of responsibility in a driverless world is going to be an interesting one in those cases where the system fails. We may see civil cases where juries struggle to decide which automated system was at fault (including in no-win scenarios), and end up assigning legal blame to the owner of the car, whose only crime was owning some equipment.

  2. Ted Lemon says:

    I think it’s pretty clear that the estate of the decedent would sue the manufacturer if the car made a decision on the basis you propose.   So I think this is not actually a case where computers need to be ethical.

    What I really want is a computer that prevents the accident in the first place.   This scenario is a rounding error compared to the lives an effective self-driving car would save.

    • petertrepan says:

      Not to mention a human driver in the same position would probably plow into the bus before he was aware there was a decision to be made.

    • Brainspore says:

      What I really want is a computer that prevents the accident in the first place.

      We’d all like that, but it’s clear that there will always be some number of accidents. We won’t see widespread adoption of fully automated vehicles until it’s clear who will bear the legal responsibility for those accidents, even if the overall accident rate goes down.

      If you’re injured or killed in an accident involving an automated car, who could be held liable? The car’s owner? The manufacturer? The software developer? The government agency that approved the car for public roads? What happens when some unforeseen circumstance causes two automated cars to collide with each other?

      These are important and complex questions we need to be asking (and answering) now.

      • nowimnothing says:

        Exactly. Too bad the article did not ask any of those questions, in favor of mindless ruminations on the Three Laws.

      • Ted Lemon says:

        Snrk.   It’s too bad that the legal system hasn’t already trod that path with existing tech.   When cars crash nowadays, it’s mayhem, I tell you.   Mayhem!

  3. signsofrain says:

    Robot cars are cool… I don’t know much about how they work but it seems to me it’d be hard to build something safe and reliable. How does the car differentiate between a highway and a regular road? Can you trust the carbot on narrow mountain passes? Does it use GPS? What happens if the GPS reception conks out? What happens if you have a sensor malfunction? (what do these things use.. sonar? MMW radar…?) If the car can’t get enough information about where it is, will it just stop dead? From what I’ve seen, Google’s unmanned cars are advanced enough to drive around on regular roads, but what are the barriers for national adoption? (Technological barriers I mean – I don’t have the chutzpah to even try wrapping my head around the legal issues – insurance companies are sure to get a huuuuuge shakeup when these things start being commonplace…)

    • Bevatron Repairman says:

      I followed a Google robotic car in the East Bay a few months back.  It had a driver, but he seemed to be hands-off the whole time.  Couldn’t really tell any difference in the driving.  Reasonable acceleration from a stop light.  Generous braking distance.  Smooth lane changes.  The only thing I did notice was that when it made a left-hand turn from a four-lane road (two each way) onto another, it seemed to swing much more widely to the outside of the turn lane than the other vehicles.  At first I thought something was wrong, but I gathered that the vehicle was programmed to make sure it was running fully parallel with the lane markers before it cleared the intersection.  Most people end up squaring up once they’re in the lane.

      So, if anything, it was driving better (well, more precisely anyway) than the people around it.

      • jandrese says:

        Most people are really bad about cutting off the corner on those left turns.  When I follow the painted line around the corner my wife yells at me for taking the turn too wide.

        • David says:

          I’ve noticed that people only get worse about following those left-turn paint lines the farther you are from a big city. Sitting in your own left-turn lane when the cross-bound traffic starts to turn left can mean cringing each and every time someone comes within inches of your bumper. %*@)!!

    • Sanjaya Kumar says:

      If you want to learn how the Google robotic car works, you can take this free online class at Udacity. It is taught by Sebastian Thrun, a Google Fellow and one of the lead scientists in the Google self-driving car project.

      http://www.udacity.com/overview/Course/cs373/CourseRev/apr2012

      • puppethead says:

        I did, as well as the original free Stanford AI class taught by Sebastian Thrun and Peter Norvig prior to Thrun founding Udacity.

        I learned a lot about how these cars work (software-wise) and a bit about the sensor technology they use. GPS is far too inaccurate, but it is used for general location. The laser range finder on top of the car samples millions of points per minute, and the software uses probability to determine where the car is at all times in relation to its surroundings. The cars actually look for the paint stripes on roads, or the contrast difference between pavement and ground. We covered that in the courses, although Thrun has said they haven’t been able to solve the snow-covered-road problem yet.

        The software (particle filters in particular) is surprisingly simple, so the processing overhead isn’t nearly as mind-boggling as, say, modeling weather. After taking these classes I’m of the opinion that robotic cars are way more predictable and safer than human-controlled ones. The key is in how they deal with unexpected stimuli.
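
        To give a sense of what “surprisingly simple” means, here is a toy one-dimensional particle filter in the spirit of what the course covers (my own illustrative sketch, not the class or Google code; the landmark positions and noise values are made up):

            import math
            import random

            # Toy 1-D particle filter: localize a car along a 100 m stretch of road
            # from noisy range readings to two known landmarks. Purely illustrative;
            # real systems work in more dimensions with laser scans and lane markings.

            LANDMARKS = [0.0, 100.0]   # known landmark positions (m)
            SENSOR_NOISE = 2.0         # std dev of each range reading (m)
            N = 1000                   # number of particles

            def gaussian(mu, sigma, x):
                return math.exp(-((mu - x) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

            def weight(particle, readings):
                # How well does this hypothesized position explain the sensor readings?
                w = 1.0
                for landmark, reading in zip(LANDMARKS, readings):
                    w *= gaussian(abs(landmark - particle), SENSOR_NOISE, reading)
                return w

            particles = [random.uniform(0.0, 100.0) for _ in range(N)]  # initially: no idea where we are

            true_pos = 20.0
            for _ in range(10):
                true_pos += 1.0  # the car moves 1 m per step
                readings = [abs(lm - true_pos) + random.gauss(0, SENSOR_NOISE) for lm in LANDMARKS]
                particles = [p + 1.0 + random.gauss(0, 0.5) for p in particles]   # motion update
                weights = [weight(p, readings) for p in particles]                # measurement update
                particles = random.choices(particles, weights=weights, k=N)       # resample

            print("estimate: %.1f m   truth: %.1f m" % (sum(particles) / N, true_pos))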

        Anyone who’s curious should check out YouTube; there are a number of videos of these cars operating in heavy, arbitrary traffic in urban areas. The robots drive among us already.

  4. KWillets says:

    What’s interesting to me is that it will no longer be optional for motorists to have ethical systems.  The old/current days of “I didn’t see him” or hit-and-run will be replaced by cars that stop the moment a pedestrian steps into a crosswalk.  

    • John Fleming says:

       As long as the stop isn’t so sudden that it wakes me up, that’s cool…

      • ocker3 says:

         I don’t want a robot car so I can sit there ready to take over if it fails, I want a robot car so I can eat, read, watch a movie or grab a quickie during transit

  5. SamSam says:

    Huh, it sounds like we need some laws for robotics. Three laws, maybe. And make them impossible to circumvent.

    • Richard says:

      This sounds more like the 0th law:
      0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

      In other words, it’s OK to harm a small number of people if that prevents a larger number of people from being harmed.

      Or the “it’s OK to kill Hitler” law.

      • cdh1971 says:

        ——–”it’s OK to kill Hitler”———

        No, no, no. 

        It’s _not_ okay to kill Hitler. That would be time tampering and could get you committed to a Federation penal colony for a very, very long time.

  6. John Fleming says:

    People have ethical systems.  Robots have programmed instructions. Unless Google has equipped its cars with strong AI and not told us, this discussion seems odd.

    The article touches on Asimov’s Laws of Robotics, and mentions that they treat robots like slaves.  Huh?  My microwave has a safety feature that keeps it from running when the door is open.  Is it also being treated like a slave?  These are mechanisms, not conscious beings.

  7. xzzy says:

    Ethics never come into it. A human driver certainly isn’t going to spend time weighing the merits of their own life over a bus full of kids; they’re going to try to avoid a collision just as passionately as if it were a bus full of murderers. Or a giant boulder.

    Ethics only become a factor after the scene has played out. If the driver dies because he wrapped his car around a tree we all applaud him for being a hero and saving kids, but that’s not what happened. He just screwed up trying to survive an accident.

    As others are stating, computers make it easier to avoid the situation entirely. Vehicles should be able to register their location and route automatically, and other traffic will adjust to provide room. Obviously this requires all vehicles to be self-piloting, but the way I see it that’s a foregone conclusion already. We’re only a generation or two away from having people around who have never had to drive their own car.

    Mandating a code of ethics for computers is going to lead to a whole mountain of wrongful death lawsuits when the code makes the wrong choice. No programmer on the planet wants to risk being a scapegoat for that.

    • Boundegar says:

      Very good point. I for one wouldn’t need to calculate the exact number of children my life is worth (it’s 4 or 5). I would just say bad words and turn the wheel.

      The reason this will never happen is that the first time there’s an accident, the software company gets sued into oblivion. The lawyers would never let this proposal come to market. In fact, driverless cars are very unlikely for the same reason. They need a driver to blame.

  8. Marc45 says:

    Ethics really only matter if you survive.

  9. jandrese says:

    As I understand it, Google’s cars never swerve to avoid accidents; apparently it was safer to just use the brakes every time.

    I do have to wonder by what standard we are going to judge the ethics of these machines.  Only asking them to be as ethical as your average human being seems like too low a threshold.

    • xzzy says:

      People who live in deer country preach the same thing. Swerving just sends you into the ditch and probably a very immobile tree. Hitting the brakes is still going to be pretty bad for deer and bumpers, but brakes are really good at soaking up energy and any collision that does happen is going to be significantly more survivable.
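
      Rough numbers make the point (this is just the textbook constant-deceleration formula; the deceleration figure is a generic dry-pavement guess, not a measured value): even when you can’t stop in time, every metre of braking scrubs off speed, and impact energy falls with the square of speed.

          import math

          # Impact speed after braking from an initial speed over a given distance,
          # using v^2 = v0^2 - 2*a*d. The 7 m/s^2 deceleration (~0.7 g) is a guess.
          def impact_speed(v0_ms, distance_m, decel=7.0):
              v_squared = v0_ms ** 2 - 2 * decel * distance_m
              return math.sqrt(v_squared) if v_squared > 0 else 0.0

          v0 = 27.0  # roughly 60 mph, in m/s
          for gap in (20, 40, 60):
              v = impact_speed(v0, gap)
              print("braking over %d m: hit at %4.1f m/s (%3.0f%% of original impact energy)"
                    % (gap, v, 100 * (v / v0) ** 2))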

      Of course, in moose country, they try  not to talk about it. There ain’t a passenger car out there that will survive a moose collision. So buy a truck with as much ground clearance as you can afford and hope for the best. 

  10. ando bobando says:

    Let’s just be sure we all understand that computers only do what they are programmed to do. They don’t make ethical decisions – their programmers do. And even if we did manage to make robots with that level of artificial intelligence, we wouldn’t need them to drive our cars. A plain old do-what-we-tell-you robot will work just fine in this situation, as Google has shown quite well!

  11. robcat2075 says:

    Between a bus and a car, a collision is going to come out way worse for the car.  Consider also that the people (children!) in the bus are riding quite a bit above where a car would hit.  There aren’t many situations where plowing into the bus is even in the car’s own self-interest. If the computer car has any good sense it will do what it has to do to avoid hitting the bus, just for its own survival.

  12. Jardine says:

    In that situation, the person in the car is in danger in either case. If the car swerves out of the way of the bus, you’re going into the other lane and you might hit someone else. If the car doesn’t swerve out of the way, you’re hitting a bus. You’re probably better off swerving anyway unless something worse than the bus is in the other lane. Hopefully if there’s someone in that other lane, they saw the bus go into the wrong lane and slowed down.

  13. David says:

    I’d imagine the big, heavy school bus will simply crumple up the aluminum Google car. It’s not like they’ll be placing these systems in 18 wheel tractor trailers any time soon. The system won’t swerve and risk hitting another car. (But would it blare the horn?)

  14. vrplumber says:

    For the car to recognize the other vehicle as a school bus, wouldn’t the bus also have to be fitted with a similar system, so as to communicate its status, speed and trajectory to other vehicles?

    If so, the accident probably wouldn’t happen because both vehicles would be aware of each other well before coming into contact.

  15. Rhyolite says:

    I call BS.  What percentage of traffic accidents involve any scope for ethical thinking?  1%? 0.1%? School bus accidents are a tiny fraction of total accidents, and most of those are not going to give a driver much choice about how to react.

    When a human driver does have time and space to react, what percentage of the time are they going to make the “correct” ethical decision?  How often is it even clear to a human what the “correct” decision would be?

    How likely is it even that a human driver would be paying sufficient attention?  How likely would they be speeding?  How likely would they be tailgating? 

    I posit that a machine driver with no more ethical thinking than following the speed limit, maintaining a safe following distance, obeying traffic rules and instantly applying maximum braking if a collision is likely, will kill fewer kids on school buses than the most ethical human driver.
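
    To make “no more ethical thinking” concrete, here is the kind of dumb, auditable rule set I mean (a toy sketch; the thresholds and structure are invented for illustration, not taken from any real controller):

        # Toy rule-based driver: speed cap, following distance, and full braking
        # whenever time-to-collision drops too low. Numbers are illustrative only.

        SPEED_LIMIT = 25.0          # m/s (about 55 mph)
        MIN_GAP_SECONDS = 2.0       # the "two-second rule" for following distance
        TTC_BRAKE_THRESHOLD = 1.5   # seconds-to-collision that triggers maximum braking

        def command(own_speed, gap_to_lead, closing_speed):
            """Return a throttle/brake decision from simple, auditable rules.

            own_speed     -- our speed in m/s
            gap_to_lead   -- distance to the vehicle ahead in m
            closing_speed -- how fast the gap is shrinking in m/s (<= 0 means it is opening)
            """
            if closing_speed > 0 and gap_to_lead / closing_speed < TTC_BRAKE_THRESHOLD:
                return "BRAKE_MAX"   # collision likely: brake hard, don't swerve
            if gap_to_lead < own_speed * MIN_GAP_SECONDS:
                return "EASE_OFF"    # restore the following distance
            if own_speed > SPEED_LIMIT:
                return "EASE_OFF"    # never exceed the limit
            return "CRUISE"

        print(command(own_speed=20.0, gap_to_lead=15.0, closing_speed=12.0))  # -> BRAKE_MAX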

  16. fragmuffin says:

    I don’t think the question of driverless car ethics is going to come up in the foreseeable future. Presumably, we’re a long way off from a comprehensive AI network that coordinates cars to execute a strategy that minimizes human loss.

    Instead, we’ll see cars taking in information from the environment and data from neighboring vehicles, and using a dumb local strategy to avoid hitting things. We can discuss emergent phenomena from that, such as an overall reduction in accidents, but ethical reasoning is not among those phenomena. Think bees protecting the hive, not soldiers protecting a fortress.

    Now, it is plausible, without strong AI, for driverless cars to execute non-local strategies, but those decisions would have been made in advance by engineers. Cars would have to coordinate their information, consult a massive and detailed database of no-win scenarios, and execute predetermined strategies based on the ethical choices of the engineers. And since Google will not always hold a monopoly on driverless cars, that database would have to be a cooperative, standardized effort by all manufacturers. Technically, this is possible. Practically, it seems pretty far-fetched.

  17. Edward says:

    With all cars automated, far fewer accidents will occur, ethical thinking or not. No drunk drivers, no texting drivers, no drivers doing their nails. Sounds great.

  18. Mister44 says:

    I wonder if in my lifetime all cars will be robotically driven. If you want to drive your own car, you would have to go to a special closed course for that sort of thing.

    • jandrese says:

      I don’t think they’re going to outlaw human vehicle operation in my lifetime, even if robotic cars become the norm.  There are just too many entrenched interests (and lawmakers are old, nostalgic people) for it to happen.  Maybe in 100 years, when the people making the laws have only ever known robotic cars, and most of what they know about human-driven ones is that they make the news when they cause some terrible accident and that people still like to run them around racetracks.  Plus, given how long people hold on to cars, it would be hard to reach even a majority of robot-controlled vehicles by 2040, especially if the robotic controls are expensive (they will be).

      Look at the road today.  There are still lots and lots of cars from the 90s out there, and a good number from the 80s.  With the improvements in rust proofing and paint that were made in the 90s, in addition to better manufacturing of engines and transmissions, those 90s cars are likely to be around for many years still.

  19. Carl Johnson says:

    So there’s one hypothetical, which might happen one time in 10 million. Here’s another one: someone gets behind the wheel after too much drinking/drugging/distraction/whathaveyou. That happens tens of thousands of times a day. Shall we prevent the possible occurrence of the hypothetical, or should we let technology help us address the sad reality that too many people make idiotic choices every day that put ALL of us, not just the 40 kids on that pretend bus, at risk?

  20. Harvey says:

    It’s a question of liability, not ethics. If the driver hits the bus, the “errant” bus was at fault. If the driver swerves to miss and hits the other bus that they didn’t see, the driver is 100% liable. Sorry, but the law demands you hit the brakes and possibly the children.

  21. andygates says:

    It would brake.  Swerving is bad.  Braking might get the car behind you up your arse, but that’s a much safer thing and hey, the computer already knows that car’s proximity and has a hair trigger on the airbags.

    Interesting that this strain of robotics is getting ethical thinking, while army bots get more and more killy. 

  22. Whether machines can “learn” the concept of ethics is a question still to be answered. And if they can, whether they could apply that concept in extreme situations where several values have to be compared and a complex ethical consideration has to be made. I am very curious what we will learn about this in the future.

  23. bkad says:

    As others point out, humans in accidents like that described are not likely doing complicated ethical thinking, but are just trying to avoid the accident entirely. And I at least would not judge anyone making a snap decision between two bad outcomes.

    As far as legal issues go, don’t many US states already have the concept of ‘no fault’ accidents? [after wiki research: no, only 12 do, and usually only for minor accidents. But these are probably the majority.] These systems basically say, ‘each insurance company covers its own client; no one even bothers trying to determine responsibility’. Probably driver-less car accidents could be handled by an evolutionary (not a revolutionary) expansion of no-fault insurance law. Some cases would come down to actual owner negligence (e.g., not maintaining brakes) or manufacturer defect (stuck accelerator pedals) but these would be edge cases.

    Maybe I’m too optimistic, but thankfully this is not going to be an overnight transformation.

  24. Jacob Ewing says:

    The problem with this article is that the author seems to have a very science-fiction based concept of how robots work (Clearly demonstrated with his Asimov reference).  The ability to control a vehicle, navigate, and do all of the other countless things that humans do when driving is completely separate from any ethics.

    This is akin to saying that the auto-pilot systems used in airliners need to make moral decisions when a bird approaches their jet engines.

    • AlexG55 says:

      An airliner still has to have a licensed pilot alert and sitting at the controls at all times, even if the autopilot is actually flying it. The Google car wouldn’t have this (it would be fairly pointless if it did).

    • tom digenti says:

      Can’t compare a bird to a busload of kids.

  25. tom digenti says:

    I think legally, morally, whatever, you have to swerve; don’t you?

  26. Ignatz Homenez says:

    I think we should separate the legal from the ethical from the moral.  Only the law would really be at play other than in the minds of the people involved or in the media, community, etc.  The law would come into play in a couple of ways potentially.  The first is whether a criminal statute was breached.  Was the driver (regardless of swerve and miss, hit, attempt to swerve and fail to miss, et cetera) criminally negligent?  Or vehicular homicide or whatever criminal laws are in effect in the jurisdiction.  If so, then the state may prosecute someone, whether that is the driver, the programmer, a company and so on, i.e. the criminally-responsible party.  Not much changes, though it may be much harder to prove that a team of Google and Ford programmers was criminally responsible for an accident.

    Let’s say there is no criminal statute breached, then we’re on to civil liability.  Again, the most likely head of damages is negligence – your action or failure to act caused my injury in a way you could have reasonably avoided.  But again you would have to prove negligence on the part of the programmers, the car, the driver, the bus, or some combination thereof.

    I think it’s entirely possible that most accidents which occur might not meet either a criminal or civil standard, leaving no one at fault for the accident.  Good legislation could cure what ambiguity exists – we can readily imagine a social compact in the form of laws which essentially remove liability unless manifest negligence is found.  We already accept that people are injured and die despite the best intentions of the engineers behind other safety mechanisms like air bags, so it’s really not that far a stretch to say that autonomous automobiles are akin to this – a very remote risk of injury or death is simply part of the system we accept.

    Finally, there are plenty of areas (depending on which jurisdiction you live in) where we accept injury or death without either criminal or civil penalty, e.g. law enforcement, standard error rates in surgeries, etc.  I think we can solve this if the public and political will is there to create the right legal environment, assuming of course the tech is really that foolproof.

    TL;DR: morals and ethics aside, only the criminal and civil law will really matter, and that is a problem solvable by legislatures.  The barrier is social and political will to solve a legal problem, not an ethical one.

  27. traalfaz says:

    Robots cannot have morality. The people who program them have morality (or not) and are limited to what they can think of ahead of time. There have been systems capable of programmatically saving or killing people for years now.

    The reality is that even if a robotic car makes the wrong choice and kills people, that’s regrettable, but it’s almost impossible to argue that robotic cars won’t save far more lives than they will cost. The average driver is so bad that it’s just going to be better. Heck, it already is. I read earlier this week that Google’s cars have now amassed hundreds of thousands of miles and have been involved in exactly one accident, which was deemed to be completely the fault of the other (human) driver.

  28. ChickieD says:

    I agree with all the above comments that these do not pose any ethical problems.

    I am a huge fan of this technology. Where I live, there are a ton of older people on the road who should not be driving, and these robot cars are a game changer – I would rather drive behind a robot car than behind an elderly person with poor vision and poor reflexes. I think the roads will be a million times safer when we all have robot cars.

    Many years ago when the DC Metro system was launched, it had driverless trains. There was such an outcry about it that they had to add drivers into the system. I am worried that when the Google cars go to market, there will be a similar backlash, totally undermining all the benefits of automating cars.
