
In a Crash, Should Self-Driving Cars Save Passengers or Pedestrians? 2 Million People Weigh In (pbs.org) 535

In what is referred to as the "Moral Machine Experiment," a survey of more than two million people from nearly every country on the planet, respondents preferred to save humans over animals, the young over the old, and more people over fewer. From a report: Since 2016, scientists have posed this scenario to folks around the world through the "Moral Machine," an online platform hosted by the Massachusetts Institute of Technology that gauges how humans respond to ethical decisions made by artificial intelligence. On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories. They found a few universal decisions -- for instance, respondents preferred to save a person over an animal, and young people over older people -- but other responses differed by regional culture and economic status.

The study's findings offer clues on how to ethically program driverless vehicles based on regional preferences, but the study also highlights underlying diversity issues in the tech industry -- namely that it leaves out voices in the developing world. The Moral Machine uses a quiz to give participants randomly generated sets of 13 questions. Each scenario has two choices: you save the car's passengers, or you save the pedestrians. However, the characteristics of the passengers and pedestrians vary randomly -- including by gender, age, social status and physical fitness. What they found: the researchers identified three relatively universal preferences. On average, people wanted to spare human lives over animals, save more lives over fewer, and prioritize young people over old ones. Where respondents' preferences did differ, they were highly correlated with cultural and economic differences between countries. For instance, people who were more tolerant of illegal jaywalking tended to be from countries with weaker governance, nations who had a large cultural distance from the U.S. and places that do not value individualism as highly. These distinct cultural preferences could dictate whether a jaywalking pedestrian deserves the same protection as pedestrians crossing the road legally in the event they're hit by a self-driving car.
Further reading: the study, and MIT Technology Review.
  • Passengers... (Score:5, Insightful)

    by Bert64 ( 520050 ) <.moc.eeznerif.todhsals. .ta. .treb.> on Sunday October 28, 2018 @05:07PM (#57551315) Homepage

    A self driving car should protect its passengers first or they wouldn't sell. Who would willingly ride in a vehicle that would intentionally sacrifice their life for any reason?

    • Re: (Score:3, Insightful)

      by SeaFox ( 739806 )

      The passengers have seatbelts, air bags, and crumple zones to lessen their injuries, though. Pedestrians might as well be naked.

      • Re:Passengers... (Score:5, Interesting)

        by ShanghaiBill ( 739463 ) on Sunday October 28, 2018 @06:14PM (#57551681)

        The passengers have seatbelts, air bags, and crumple zones to lessen their injuries

The question is usually framed to take that into account already. The way I have heard it is:

        Choice 1: Hit pedestrian.
        Choice 2: Drive off a cliff and kill the passenger.

        It may be an interesting philosophical question, but it has little to do with reality. A scenario like that is almost never going to happen, and even if it did, a human driver would be faced with the same split second dilemma and be no more likely to make the "correct" decision (whatever that is).

Far more important is that the SDC would have much better reaction time, more braking distance, better control of steering, and more situational awareness of other traffic, and would thus be better able to avoid killing anyone.
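
        A rough back-of-the-envelope comparison makes the reaction-time point. This is only a sketch with assumed, illustrative numbers (the reaction times and deceleration are not from the study or the article):

            # Stopping distance = reaction distance + braking distance.
            # All numbers are assumptions, for illustration only.
            def stopping_distance_m(speed_ms, reaction_s, decel_ms2=7.8):  # ~0.8 g braking
                return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

            v = 50 * 1000 / 3600                      # 50 km/h in m/s
            print(stopping_distance_m(v, 1.5))        # human, ~1.5 s to react: ~33 m
            print(stopping_distance_m(v, 0.1))        # computer, ~0.1 s: ~14 m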

First of all, maybe this will happen, maybe it won't; but there are bound to be 100 other similar circumstances that make this particular one irrelevant. When it happens, it will be very real. No, no one blames the human when they make one decision or the other, because a human isn't capable of logging every single variable that led to the decision in that fraction of a second. The car, though, had better keep track of everything, because full forensics will need to be done following an event and
        • Re:Passengers... (Score:5, Insightful)

          by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Sunday October 28, 2018 @07:32PM (#57552083) Journal

          Choice 1: Hit pedestrian.
          Choice 2: Drive off a cliff and kill the passenger.

While this is an interesting hypothetical scenario, I might suggest that the number of times this sort of thing has actually been a real choice anyone had to make, particularly in a situation that was not preventable by paying enough attention to the road in the first place, is probably countable on one hand in the entire history of automobiles, if not actually zero.

The ideal is that the self-driving car would be paying enough attention (tirelessly, I might add) to the road and what lies ahead that this sort of "kill the driver or kill the pedestrian" situation that people like to dream up wouldn't ever arise in practice... an automated car that is genuinely designed for safety would simply not drive so fast, in any sort of hypothetically reduced-visibility situation, that there would not be enough time to stop safely in the first place.
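
          That policy can be made concrete: never exceed the speed at which reaction distance plus braking distance still fits inside the visible stretch of road. A minimal sketch, assuming a 0.1 s reaction time and ~0.8 g braking (neither number comes from the article):

              import math

              # Largest v such that v*t + v^2/(2a) <= sight distance d.
              # Solving the quadratic gives v = -a*t + sqrt((a*t)^2 + 2*a*d).
              def max_safe_speed_ms(sight_m, reaction_s=0.1, decel_ms2=7.8):
                  at = decel_ms2 * reaction_s
                  return -at + math.sqrt(at * at + 2 * decel_ms2 * sight_m)

              for d in (10, 30, 100):  # metres of clear road ahead
                  print(d, round(max_safe_speed_ms(d) * 3.6, 1), "km/h")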

          • It's pretty easy to say how good these cars will be in a low visibility situation while it's still a dream that they could ever be in such a situation.
        • A scenario like that is almost never going to happen, and even if it did, a human driver would be faced with the same split second dilemma and be no more likely to make the "correct" decision (whatever that is).

Aside from a negligible outlier at best... no human driver is going to choose someone else's life over their own if the choice is between them.

          That's just human nature, self preservation.

In such split-second decisions humans react on instinct. Unless specially trained, that instinct will be self-preservation or panic paralysis. Think of someone throwing a punch: most people will just try to cover themselves or freeze "like a deer in the headlights." People trained in hand-to-hand combat will get out of the way, block, redirect the punch or even use it to attack back. The question at hand is: should cars react exactly like humans, and if so, which humans? If not, how should the cars react

        • The question as you phrased it is loaded. If the car goes off the cliff, the passenger dies, but if it doesn't, it hits the pedestrian who may or may not die. The only hope you have of not killing anybody is to hit the pedestrian and hope he/she survives.
        • by nut ( 19435 )

          It may be an interesting philosophical question, but it has little to do with reality. A scenario like that is almost never going to happen, and even if it did, a human driver would be faced with the same split second dilemma and be no more likely to make the "correct" decision (whatever that is).

          It's not just a philosophical question. A team of engineers has to sit down and write code, or at the very least models for machine learning, that will allow a self-driving car to make a reasonable decision in any conceivable scenario. The choice you give is just a marker for a whole class of decisions that some cars will have to make at some time. This is a real problem that these engineers have to face before these cars are on the road.

          The fact that human drivers in the same situation could make a poor c

        • by rtb61 ( 674572 )

In the matter of legal innocence or guilt: the person chose to enter the vehicle and take a private risk. The other person chose to be protected by their government and take a walk upon government-owned and government-controlled land. The AI is programmed with a choice by the programmer and funded by the corporation for profit. So the choice to be made is not one life over another; the choice is to commit premeditated murder to suit the convenience of the person who chose to enter the vehicle.

          So in terms of legal choic

    • A self driving car should protect its passengers first or they wouldn't sell.

And as soon as that happens ... the inflatable "plastic passengers" used to fool surveillance cameras on "dual occupancy" or carpool lanes will start being weighted, so the car thinks it has an actual passenger on board and is therefore more likely to protect the driver by proxy.

      My first guess would be that users would fill the inflatable legs and torso with water, to trip the weight sensor in the seat.

    • Re:Passengers... (Score:5, Insightful)

      by Aighearach ( 97333 ) on Sunday October 28, 2018 @05:19PM (#57551379)

      A self driving car should protect its passengers first or they wouldn't sell. Who would willingly ride in a vehicle that would intentionally sacrifice their life for any reason?

      No, actually, we're going to let the traffic engineers at the Department of Transportation set the rules, which will be the same as for humans (stay in lane, stop as fast as you can, DO NOT SWERVE) and the engineers won't even ask the public.

    • by uncqual ( 836337 )

      Legislation might require that the passengers in the car be prioritized below law abiding pedestrians in failure cases. This would encourage people to buy/rent/share the most reliable cars -- i.e., those that don't have as many failure cases that require making such decisions. It also makes the person responsible for the selection (the passengers) accountable for their actions.

    • A self driving car should protect its passengers first or they wouldn't sell. Who would willingly ride in a vehicle that would intentionally sacrifice their life for any reason?

      And if your car damages other people, that alone will make them win any court case against you.

      What do you think happens if you intentionally kill people with your car to minimise your own damage?

    • ....For 2 reasons.

1) By being able to operate a vehicle orders of magnitude faster and with far more information than a human, the chance that the car will ever even get into a situation where this decision would have to be made is very, very small.

      2) If it gets into this situation where stopping entirely w/o injuring anyone is off the table, then the car will have so little time to react that making a decision to kill one group or the other and acting on it is a pointless exercise.

      Also, there ar
    • A self driving car should protect its passengers first or they wouldn't sell. Who would willingly ride in a vehicle that would intentionally sacrifice their life for any reason?

This is great; I came to the exact opposite conclusion. The passengers of the vehicle signed up for the risk; the pedestrians did not. So peds who are not invested in the risk of using a self-driving car should be spared over the passengers if there's a choice to be made about who lives or dies.

  • by crow ( 16139 ) on Sunday October 28, 2018 @05:24PM (#57551407) Homepage Journal

You never know for certain that a given course of action will cause a fatality. When you're driving, you try to avoid accidents. Self-driving cars will do the same. They'll compute the odds of an accident for all options and select the one with the lowest odds. It may be just a fraction of a percent less likely, but it will take that option.
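
    A minimal sketch of that selection rule; the option names and probabilities below are invented purely for illustration:

        # Choose the manoeuvre with the lowest estimated accident probability.
        # Options and numbers are hypothetical.
        options = {
            "brake_in_lane":    0.010,
            "brake_and_swerve": 0.012,
            "maintain_course":  0.150,
        }
        print(min(options, key=options.get))  # -> brake_in_lane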

    • by uncqual ( 836337 )

      In all of the cases in the survey I took at this site, SOME form of loss of life was unavoidable (although some were between humans and non-humans).

However, in many real-world cases there are likely to be differing probabilities of human death, and, certainly, self-driving software should take that into account. A car hitting an elderly pedestrian squarely at 35 MPH is very likely to kill the pedestrian (maybe >80%?), but in a modern car with the highest crash protections, hitting a concrete barrier at the

  • Are the pedestrians paying attention to their surroundings or walking (or bicycling) while fixated on their phones? The latter group is going to end up as Darwin Award winners at some point anyway... (And, yes, I have seen people riding a bike while staring at their smartphone.)
  • No questionnaire can resolve this problem.

Cars must prioritize the safety of their occupants over everything else. If they do not, then people can murder others through machine logic without any hacking at all. Additionally, no computer will ever be advanced enough to know whether the next decision it makes will be better or worse than simply defaulting to saving its occupants.

    It is also immoral to evaluate lives based on worthless criteria like age, gender, political, racial, class, or religious ideology.

    Just imagi

    • by Xenx ( 2211586 )

      >It is also immoral to evaluate lives based on worthless criteria like age, gender, political, racial, class, or religious ideology.

      I agree with most of that. However, you shouldn't lump age into that. Most people would find it morally correct to save a kid over an adult. That being said, it is still best to not factor any of it in.

Saving a kid over the adult is not as cut and dried as it sounds. What about all the other kids that the adult may be financially supporting? Additionally, how far can that go? Is it better to save an 11-year-old vs. a 12-year-old? Can a vehicle/machine really make that evaluation correctly each time?

I agree that most people would go with the cut-and-dried choice of saving a kid over an adult, myself as well, but it just really is not that simple when you really start to think about it. There are all sorts of additional

Saving a kid over the adult is not as cut and dried as it sounds. What about all the other kids that the adult may be financially supporting? Additionally, how far can that go? Is it better to save an 11-year-old vs. a 12-year-old? Can a vehicle/machine really make that evaluation correctly each time?

The car needs more information. We need to have an identifying beacon of sorts so the car can identify us and evaluate our lives, or just look up our social value ranking or some such metric, such as predicted tax contributions over our lifetimes.

          And if there are multiple people at stake it may have to decide whether killing 2 lower ranked people is preferable to killing 1 of a higher rank. Taking an average doesn't seem fair, but neither does just adding up the 2 lower ranked people's scores.

          I dunno, is

        • by Xenx ( 2211586 )
          I agree with your point, just didn't agree with deciding by age being immoral. This wouldn't be the only instance where the moral choice, in the moment, isn't the best choice overall.
... most people would go with the cut-and-dried choice of saving a kid over an adult,

          Why?

      • by pjt33 ( 739471 )

        My initial reaction on reading the summary was that there were no surprises, but then it referred to cultural differences and it suddenly struck me that actually it is surprising that there was universal preference to save the young. Certainly historically there have been cultures which valued the old (with their wisdom and experience) over the young.

Really, it's a clickbait question. An automated car seeing a pedestrian it may collide with will brake as hard as possible while avoiding other obstacles and staying in its prescribed lane. The chance of this happening, and of the car also having to swerve into an obstacle that would injure the driver, is so small as to be irrelevant noise.

    • by MrL0G1C ( 867445 )

I agree; they wasted time and money asking 2 million people a stupid bloody set of questions. I'm guessing they know fuck all about autonomous vehicle systems, because if they actually knew anything they might have been able to ask some useful questions.

      Here's some more interesting questions:

If a $2000 lidar system can see ahead 500 yards and a $1000 lidar system can see ahead 250 yards, should the manufacturers be allowed to just install the $1000 system?

      Or

      Should the government be mandating what dista

  • by feedayeen ( 1322473 ) on Sunday October 28, 2018 @05:38PM (#57551473)

Trolley problems fail rigor because they make a critical assumption: an artificial intelligence is smart enough to know the results of two choices, each with negative outcomes, but somehow not smart enough to have avoided the situation to begin with. An AI developer trying to produce the safest system possible prioritizes the likely cases first and attempts to produce the best reaction in a typical crash. Nobody in development is concerned about the situation where a car is speeding down a narrow road and a pedestrian steps out at just the right time and place that the only courses of action are to crash into them or into a power pole. That situation is rare and shouldn't be optimized yet.

Let's say that we're worried about optimizing that situation now, and we somehow have an omniscient AI that still runs into it. Now our problem is probabilities. What's the probability that the pedestrian will jump out of the road in time and no crash will happen? What's the probability that the pedestrian will die from the crash? What's the probability that the passenger will die if we swerve into the light pole? Who is going to be harmed by that falling light pole?
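
    Stated as expected harm, the comparison the parent describes might look like this sketch (every probability below is hypothetical):

        # Expected deaths per action, given assumed outcome probabilities.
        p_ped_dodges        = 0.2   # hypothetical
        p_ped_dies_if_hit   = 0.6   # hypothetical
        p_pass_dies_at_pole = 0.3   # hypothetical

        expected_harm = {
            "stay_course":    (1 - p_ped_dodges) * p_ped_dies_if_hit,  # 0.48
            "swerve_to_pole": p_pass_dies_at_pole,                     # 0.30
        }
        print(min(expected_harm, key=expected_harm.get))  # -> swerve_to_pole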

There will NEVER be a set rule of anything like "protect passengers over pedestrians," or vice versa, because that is not how computers work. And forget about age discrimination; that is just plain stupid. The computer will have a hard enough time deciding whether an obstacle is a pedestrian; it won't have the kind of higher logic needed to estimate the age of the people.

    It might not even be able to tell how many people are in the car let alone how many people are currently standing in the middle of the road.

    The c

and when the sensors mess up and class a kid as debris that's safe to run over??

If it's a kid in the road, you're probably on a residential street. It's probable that if you're driving one of those streets, rather than trying to park on them using an assisted-park feature, the AI will actually require you to stay in control of the car.

For later versions that actually work in residential driving, the car will be going 20-25 MPH rather than 50+ MPH, and will probably have specific programming to not run over anything, because anything might be a puppy/ball/etc. being chased by a four-year-old.

        • residential mode needs map data so we can just blame the map data provider for fucking up.

In theory, the computer should be able to figure out whether it's driving residential streets from GPS (to tell it the state) and traffic signs like speed limits. Generally, residential zones will have a different speed limit than commercial ones.

But yes, you can also blame the map provider. Depending on the local liability laws and your contract with Google, it might even stick in court.

and when the sensors mess up and class a kid as debris that's safe to run over??

First of all, the whole road would have to be covered in something as big as the kid for the car to even think about running over an obstacle.

Secondly, very probably any software would simply stop if the road were filled with debris that large, or at worst drive around it.

Thirdly, moving "debris" would rate a higher priority not to run over than static obstacles would.

        Fourthly, don't set your damn baby down on the road or Grandma will never even see it

    • by Ichijo ( 607641 )

I agree; whether you should swerve left or swerve right is a silly question. Just brake and maintain control of the vehicle. Reducing your kinetic energy helps everyone. If someone hits you from behind, it's their own fault for tailgating, and anyway you and they are both well protected by your steel cages and airbags.
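
      The physics backs this up: kinetic energy grows with the square of speed, so any braking before impact pays off disproportionately. A quick illustration, with an assumed car mass:

          # E = 0.5 * m * v^2: halving the speed quarters the impact energy.
          m = 1500  # kg, an assumed typical car mass
          for kmh in (50, 25):
              v = kmh / 3.6
              print(kmh, "km/h ->", round(0.5 * m * v * v / 1000, 1), "kJ")
          # 50 km/h -> ~144.7 kJ; 25 km/h -> ~36.2 kJ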

If you've watched any of Google's visualizations, they clearly have systems working out where the car is not allowed to drive. A cyclist waving his arms around? Paint a red line across the lane behind him in case he is trying to turn. Train crossing with other vehicles? Paint a red line on this side of the crossing until the way is clear.

      If the vehicle is surprised, due to some sensor failure or erratic pedestrian, I'm certain the car would just hit the brakes or change lanes if possible. Then I'd expect a sm

  • NYC, glad to know (Score:5, Insightful)

    by b0s0z0ku ( 752509 ) on Sunday October 28, 2018 @06:01PM (#57551605)

    Glad to know that NYC (and Boston, probably) has a large cultural distance from the rest of the US. Any place that's not tolerant of jaywalking isn't worth living in, since it puts the needs of steel sensory deprivation bubbles ahead of human needs...

    "For instance, people who were more tolerant of illegal jaywalking tended to be from countries with weaker governance, nations who had a large cultural distance from the U.S. and places that do not value individualism as highly."

  • A modest proposal (Score:5, Insightful)

    by Bobrick ( 5220289 ) on Sunday October 28, 2018 @06:14PM (#57551675)
    How about making sure the only person in harm's way is the one that chose to let a computer drive in their place?
What if the pedestrian is in the road because they were ejected from another vehicle in a crash? Still feel justified in plowing through them?

    What if terrorists are jumping in front of self-driving cars in the road. Should your car always crash anyway just in case?

The real question is why we should settle for some crap self-driving car design that uses RNG to decide whether to ram pedestrians or crash and burn. I should hope we can do better than that.

  • by NicBenjamin ( 2124018 ) on Sunday October 28, 2018 @06:41PM (#57551835)

    We all know that whether the car decides to hit a jaywalker or not will depend on several variables:

    1) Who is more likely to win a multi-million verdict in a Civil Suit: a jay-walker or the passenger?

    2) Will drivers buy the AI software if it will decide to kill their entire families?

3) How well will the engineers work on a feature (deciding whether to hit the jaywalker or kill the passenger by driving off a cliff) that is much less likely to be used in the real world than every other feature of the AI?

And variable 4), "moral philosophers have written a paper on this based on millions of data points from an online quiz," is not on the list.

  • Be predictable (Score:4, Interesting)

    by Sigma 7 ( 266129 ) on Sunday October 28, 2018 @07:07PM (#57551967)

    In a crash, self-driving cars should be predictable, rather than coming up with convoluted means to determine which group of pedestrians should be slammed.

Human drivers are erratic enough. No need to make computer-assisted drivers erratic as well.

  • ... know that they have to set priorities. You can spend time on X or on Y but not on both. So you decide what has more benefits, working on X or on Y, and that's what you do.

    Working to make cars more secure is highly beneficial. Working on deciding moral dilemmas, whether to kill one person or another, isn't beneficial in any way. One person dead, one way or another. So spending developer time on this kind of question is absolutely pointless until these cars are 100% safe, and then it is even more point
A machine shouldn't "prioritize" the passengers' or the pedestrians' lives, per se... it should prioritize driving safely. Full stop. Nothing more and nothing less. Driving safely entails being aware enough of one's surroundings, and driving at a speed appropriate to any hypothetical reduced-visibility scenario, that the likelihood of something genuinely unexpected arising should be statistically negligible. Any sense of "prioritizing" would be pointless, and would onl
  • Trolley problems are interesting for the average person to discuss with each other.

To an engineer they are engineering failures. And I don't know about you personally (maybe you're some daredevil alcoholic behind the wheel), but I've yet to encounter a life-or-death situation for anyone while driving. That includes ever even seeing anyone else in one. Considering self-driving cars are supposed to be safer than human drivers to begin with, not only is even getting into a stupid trolley-problem situati
The algorithm is probably quickly calculating a tree of possibilities and taking the min(sum(damages)). Even if the damage falls more on the passengers, as long as the overall damage is less than hurting the pedestrians, the algorithm should take that path.
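
    A toy version of that min(sum(damages)) search; the actions and damage scores are invented for illustration:

        # Each candidate action carries a list of predicted damages down the tree.
        # All scores are hypothetical.
        tree = {
            "brake_hard":   [5, 2],    # whiplash risk, minor rear-end damage
            "swerve_left":  [40, 10],  # barrier impact, passenger injury
            "swerve_right": [90],      # pedestrian strike
        }
        print(min(tree, key=lambda a: sum(tree[a])))  # -> brake_hard
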
  • In a crash, obey road rules as much as practical. Normally, this means braking and staying in your lane. Stray outside your lane only if it won't kill someone.

Further, AI today is generally too clever by half. I don't think it's capable of making any such decisions.

The highest-responding age group was 20-year-olds, and the largest number of respondents (close to 40%) fell into the $0-$5,000 annual income bracket, so these are people without the means to purchase a self-driving car, answering the questions from a "what others should do" perspective, not "what I would do." No surprise: people are usually very altruistic when asked what others should do. If the question was "what should your car do" or "what should your loved one's car do," the answers would be different.

  • by Tom ( 822 ) on Monday October 29, 2018 @01:22AM (#57553195) Homepage Journal

    It's a cute experiment with not exactly surprising results (humans prefer humans over animals - who'd have thought?).

    But in the end, like the trolley experiment, it is informative and insightful and a bunch of other +5 mod points buzzwords, but the actual solution for the real world will be made by engineers, not by philosophers, and it will almost certainly not involve a "moral decision" subsystem. The primary effort of a practical AI is in making a decision so quickly that it can still minimize damage. Every CPU cycle wasted on evaluating the data in other ways is silly. It will rely for its decision on whatever data its sensors have already provided, and that data will not be in the shape or form of "there are 3 black people with this age range and these fitness indicators in the car, here are their yearly incomes, family relations and social responsibilities. Outside the car we can choose between the river, average temperature 2 degrees, giving the passengers this table of survival probabilities. Or crowd A, here is a data set of their apparent age, social status and survival probabilities. Or crowd B, here is their data set."

    This is how the philosopher imagines the problem would be stated to the AI - or to a human in a survey.

    But in reality, the question will be more likely something like: "Collision avoidance subsystem. Here's some noisy sensor data that looks like the road ends over there. A bunch of pixels to the left could be people, number unclear. A bunch of pixels to the right also seem to be people, trajectory prediction subsystem has just given up on them because they're running fuck knows where. Estimated time to impact: 0.5 seconds. You have 1 ms to plot a course somewhere or it doesn't make a difference anymore. Figure something out, I need to adjust the volume on the infotainment system and make the crash warning icon blink."

    What we will end up with is some general heuristics, like "don't crash into people if you can avoid it" and then the AI will somehow come up with some result, and it will work ok in most cases in the simulator, and then it will be installed in cars.

  • by TechyImmigrant ( 175943 ) on Monday October 29, 2018 @02:29AM (#57553303) Homepage Journal

    Jaywalking is not a crime in most countries. Pedestrians typically have right of way over cars. That may sound odd to Americans who haven't traveled, but most countries don't have a word for jaywalking because it is just walking.

    So tolerance of jaywalking comes from it being fine in most places.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...