
Tesla Says Autopilot Was Engaged During Fatal Model X Crash (theverge.com) 422

An anonymous reader quotes a report from The Verge: Tesla says Autopilot was engaged at the time of a deadly Model X crash that occurred March 23rd in Mountain View, California. The company posted a statement online late Friday, after local news reported that the victim had made several complaints to Tesla about the vehicle's Autopilot technology prior to the crash in which he died. After recovering the logs from the crash site, Tesla acknowledged that Autopilot was on, with the adaptive cruise control follow distance set to a minimum. The company also said that the driver, identified as Apple engineer Wei "Walter" Huang, had his hands off the steering wheel and was not responding to warnings to re-take control. Tesla said in a statement: "The driver had received several visual and one audible hands-on warning earlier in the drive and the driver's hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken."

According to Mercury News, the driver of the car was headed southbound on California's Route 101 when his Model X crashed head-on into the safety barrier section of a divider that separates the carpool lane from the off-ramp to the left. "The front end of his SUV was ripped apart, the vehicle caught fire, and two other cars crashed into the rear end. [The driver] was removed from the vehicle by rescuers and brought to Stanford Hospital, where he died from injuries sustained in the crash."
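For scale, the "five seconds and 150 meters" in Tesla's statement is internally consistent with ordinary highway speed:

\[
\frac{150\ \text{m}}{5\ \text{s}} = 30\ \text{m/s} \approx 108\ \text{km/h} \approx 67\ \text{mph}
\]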
  • by fluffernutter ( 1411889 ) on Saturday March 31, 2018 @08:05AM (#56358253)
    Tesla said in a previous article that autopilot had done this route 85,000 times. I guess repetition doesn't necessarily mean success here. Big surprise.
    • by Mr D from 63 ( 3395377 ) on Saturday March 31, 2018 @08:14AM (#56358269)
      In several articles about this accident, Musk goes on with stats about the safety of autonomous driving. I understand why Musk wants to make it clear the driver was negligent in his use of Autopilot, but he should not be making unsupported safety claims regarding autonomous driving, nor conflating them with Tesla Autopilot safety. While it very well may be safer, the data doesn't exist to prove it. Tesla Autopilot is, per Tesla, to be used only on limited-access highways when there is good visibility. It is not to be used in rain, snow, or fog. It is not to be used where traffic conditions are changing rapidly. So the comparable safety data should be limited to sedans traveling on limited-access highways in nice weather with stable traffic conditions. Furthermore, the comparison should be based on the number of fatal accidents and not the number of deaths, which is higher where there are more people in a vehicle. A car that crashes with 4 people in it is not four times more dangerous to drive than a car that crashes with only one person.

      In addition, the data should account for accidents not caused by the sedan, such as a tractor trailer suddenly coming across the median and taking out a car, or other 'unpreventable' incidents that neither an autonomous nor a human-controlled car could avoid.

      Comparison against total highway deaths is apples and oranges.

      Musk is a smart man, smart enough to know how to use statistics properly. I believe he is quite aware his claims are not supported by existing data. It is disappointing and unnecessary. If I wanted to abuse statistics, I'd say the data clearly shows that on the particular day of this accident it was thousands of times safer to be in a human-driven vehicle when passing the deficient barrier than in an auto-piloted Tesla.
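      The matched comparison the comment above describes (fatal crashes rather than deaths, restricted to the conditions under which Autopilot is actually used) is easy to express. A minimal illustrative sketch in Python, with entirely made-up counts and hypothetical field names, just to show the shape of the calculation:

      # Illustrative only: hypothetical numbers, not real crash data.
      # The point is the unit and the denominator: fatal *crashes* per mile,
      # restricted to comparable conditions (limited-access highway, clear
      # weather, stable traffic), rather than total deaths on all roads.

      def fatal_crash_rate(fatal_crashes, miles):
          """Fatal crashes per 100 million vehicle miles."""
          return fatal_crashes / miles * 100_000_000

      # Hypothetical matched-condition mileage and crash counts:
      human_sedans = {"fatal_crashes": 210, "miles": 30_000_000_000}
      autopilot    = {"fatal_crashes": 3,   "miles": 400_000_000}

      print(f"human sedans: {fatal_crash_rate(**human_sedans):.2f} per 100M miles")
      print(f"autopilot:    {fatal_crash_rate(**autopilot):.2f} per 100M miles")

      # Counting deaths instead of fatal crashes would inflate whichever fleet
      # happens to carry more occupants: a crash with four people aboard is not
      # four times as likely to occur.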
      • by Anonymous Coward on Saturday March 31, 2018 @08:24AM (#56358293)

        ... he should not be making unsupported safety claims regarding autonomous driving, nor conflating them with Tesla Auto pilot safety.

        He has to. Tesla is on the brink of going under and unless he gets more cash to keep the business going, it'll be bust by the end of the year.

        To get that cash, he has to keep in the news and make a lot of hype.

        • To get that cash, he has to keep in the news and make a lot of hype.

          Right, because sticking to what he's doing has been so ineffective...

  • by Futurepower(R) ( 558542 ) on Saturday March 31, 2018 @08:14AM (#56358267) Homepage

    ... the driver, identified as Apple engineer Wei "Walter" Huang, had his hands off the steering wheel and was not responding to warnings to re-take control.

    He apparently had plenty of money; he was driving a Tesla. He was an engineer, so he was educated.

    Tesla said in a statement: "The driver had received several visual and one audible hands-on warning earlier in the drive and the driver's hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken."

    It amazes me that people often don't recognize that driving a car is a potentially extremely dangerous activity. 100% attention is required at all times, particularly since other drivers often do things they shouldn't do.

    • Re: (Score:3, Informative)

      Don't take Tesla's word that he had his hands off the wheel; he may have had them resting lightly on the wheel. They use a pressure sensor. I've got a Tesla Model X, and have been nagged many times, because my touch is a bit too light for it to detect.

    • by fluffernutter ( 1411889 ) on Saturday March 31, 2018 @08:22AM (#56358283)

      It amazes me that people often don't recognize that driving a car is a potentially extremely dangerous activity. 100% attention is required at all times, particularly since other drivers often do things they shouldn't do.

      Then putting Autopilot in a vehicle is illogical. You don't put something in a vehicle to steer for drivers while leaving them with every duty of driving. Eventually they will get sidetracked; it's just human.

      • That's what Google, to its credit, has been saying since day one. Autopilot is either a safety backup system like Meritor Onguard, or it's totally in control. Driving is not a difficult task; it's no easier to monitor a computer driving than it is to drive. Consequently, if people aren't driving they are looking at their cell phones.

      • by Mascot ( 120795 )

        Then putting Autopilot in a vehicle is illogical. You don't put something in a vehicle to steer for drivers while leaving them with every duty of driving. Eventually they will get sidetracked; it's just human.

        It makes perfect sense to me, along the lines of adaptive cruise control. It makes for a more relaxing drive in that I don't have to actively maintain speed and distance myself, but it does not relieve me from the need to pay attention in order to be able to intervene should the need arise.

        Additionally, if I should have a momentary lapse of attention at an inopportune time, odds are that it does not happen at the exact moment my car fails to notice that the car in front slowed down, so it adds safety.

        For so

        • With a name like "autopilot" it's completely understandable. And the fact that he was an Apple engineer means nothing. It's 50/50 whether he was a hard-nosed nerd who knew enough to tell you how it worked to ten decimal places or whether he was a starry-eyed utopian who believed all the propaganda. The summary implies the latter.
          • by Mascot ( 120795 )

            If there's a general consensus in the English speaking world that autopilot is synonymous with autonomous, I agree the name was poorly chosen. That's not my impression of the general understanding of the word – we expect a pilot to remain in the cockpit and alert when the plane we're on is on autopilot, after all – but if the data shows otherwise then I'd be the first to argue for the feature to have its name changed.

        • by arth1 ( 260657 )

          It makes perfect sense to me, along the lines of adaptive cruise control. It makes for a more relaxing drive in that I don't have to actively maintain speed and distance myself, but it does not relieve me from the need to pay attention in order to be able to intervene should the need arise.

          I find that adaptive cruise control makes the drive less relaxing, in that I have to monitor what it does constantly, whereas if I drive manually, most adjustments are reflexes requiring little conscious effort. Nor does using adaptive cruise control free up part of my attention to deal with other potential dangers, like the road ahead and the behaviour of other drivers.

          • by Mascot ( 120795 )

            I guess our brains are very different. I find it really relaxing to just pay attention to the road and steering, and let the car deal with the speed. Since I'm obviously watching the road anyway, it takes away some (literal) footwork without adding any additional workload.

            If it was unreliable, it would just add extra anxiety of having to quickly correct for it all the time. But in the year I've had the car, it's been rock solid, even in the worst of slushy winter conditions.

            On really winding roads, I do ten

    • It amazes me that people often don't recognize that driving a car is a potentially extremely dangerous activity. 100% attention is required at all times, particularly since other drivers often do things they shouldn't do.

      Unfortunately, nobody can claim they pay 100% attention at all times and be telling the truth. Everybody has a moment when they are distracted; good drivers quickly re-engage their minds.

      Which is why we need to be very careful about technologies that give people a false sense of confidence that they can take their attention off of driving for longer periods of time.

    • by SlaveToTheGrind ( 546262 ) on Saturday March 31, 2018 @08:47AM (#56358361)

      He was an engineer, so he was educated.

      Educated but apparently not particularly smart, given that he had complained to Tesla several times about issues with the guidance system and yet continued to blindly rely on it.

    • He was an engineer, so he was educated.

      "Schooled" hardly means "educated" and "educated" definitely doesn't imply "informed" or "intelligent," as evidenced by this twit's (moment of silence) decision.Besides, no "computer engineer" (i.e. someone with a deep understanding of both analog and digital logic) would be willing to trust their lives to a cutting-edge machine that'st being tested not in a controlled environment but rather a fucking city.

  • by Cederic ( 9623 ) on Saturday March 31, 2018 @08:21AM (#56358281) Journal

    So you design a car that can safely drive itself in traffic, that can track whether the driver is actively using the controls, and that knows that for six seconds the driver hasn't been using them while travelling at speeds at which the car can't protect them in a crash.

    And you didn't design in, "Slow the fuck down because nobody is in control of the vehicle"?
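    What the comment above is asking for is an escalation policy rather than just a nag. A rough sketch of that idea in Python, with hypothetical thresholds and names; this is not Tesla's actual behaviour, just an illustration of the logic:

    # Sketch of a hands-off escalation policy: warn, then shed speed, then stop.
    # Thresholds and identifiers are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class DriverState:
        hands_off_seconds: float  # time since hands were last detected on the wheel
        speed_mps: float          # current speed in metres per second

    def escalation_action(state: DriverState) -> str:
        """Decide how the assistance system should react to an inattentive driver."""
        if state.hands_off_seconds < 3:
            return "none"                        # normal operation
        if state.hands_off_seconds < 6:
            return "visual_and_audible_warning"  # nag the driver
        if state.hands_off_seconds < 15:
            # Nobody has responded: start shedding speed with hazards on.
            return "reduce_speed_and_flash_hazards"
        # Still no response: bring the car to a controlled stop.
        return "controlled_stop"

    print(escalation_action(DriverState(hands_off_seconds=6.0, speed_mps=30.0)))
    # -> reduce_speed_and_flash_hazards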

    • So you design a car that can safely drive itself in traffic

      Well, that premise is under a bit of debate due to incidents like this, isn't it?

  • by rmdingler ( 1955220 ) on Saturday March 31, 2018 @08:22AM (#56358285) Journal

    The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.

    Having narrowly avoided two separate impending collisions while driving due to insects, one hornet loose in the cab & one bee in the eye through an open window, I have a macabre fascination with the last few seconds in a vehicle before the collision that takes the life of the human witness(es).

    Sure, we live in an age of unrivaled electronic distractions, but there has always been ample incentive to pick the wrong five seconds to look away from the road. Outside of law enforcement, we'd never see the video, even if it did exist... but the new tech that vehicles are getting makes the 'fly on the wall' view ever more likely.

  • Apple engineer (Score:4, Insightful)

    by The Evil Atheist ( 2484676 ) on Saturday March 31, 2018 @08:27AM (#56358299)

    Huang reportedly complained that the car’s Autopilot option kept veering the car toward the same barrier on Highway 101, near Mountain View, into which he crashed the car last Friday.

    If you've noticed unsafe behaviour and have made complaints about it, why the fuck would you keep using it?

    Not surprising that an Apple engineer has no common sense.

    And the only common sense thing for Tesla to do is to disable the damn thing. People are too stupid to be trusted with anything.

    • He probably thought he was holding it the wrong way.
    • Because he paid $120,000 for it. The real question is why is Tesla putting out a faulty expensive product and beta testing it on our roads?
      • by sinij ( 911942 )

        Because he paid $120,000 for it. The real question is why is Tesla putting out a faulty expensive product and beta testing it on our roads?

        This. Unless there was something unique about the circumstances that led to a fatal crash, it is reasonable to assume that a non-faulty self-driving car on a clear day would not ram itself into a divider, killing everyone involved.

        Warnings to take control are only good for defending against lawsuits. They are not actually useful due to the desensitization / "crying wolf" phenomenon. If Tesla was serious about it, they would have implemented a "Red Alert, High Probability of Crash" warning, not some BS "put your hands o

    • by tomhath ( 637240 )

      Huang reportedly complained that the car’s Autopilot option kept veering the car toward the same barrier on Highway 101, near Mountain View, into which he crashed the car last Friday.

      ...

      Tesla spokesperson said the company has been searching its service records, “And we cannot find anything suggesting that the customer ever complained to Tesla about the performance of Autopilot.”

      The spokesperson added that there had been “a concern” raised about the car’s navigation not working properly, but “Autopilot’s performance is unrelated to navigation.”

      The car usually worked okay, it just had an obsession with that particular barrier. But that was a "navigation" concern, not autopilot. Got it?

    • If you've noticed unsafe behaviour and have made complaints about it, why the fuck would you keep using it?

      You really have to ask?? I would've thought it obvious; he was an Apple engineer!

      The answer, of course, is courage.

  • by Andrew Lindh ( 137790 ) on Saturday March 31, 2018 @09:08AM (#56358433)

    On average there are 700 deaths on US roads EVERY week, and two more should not be national news. With safer cars this number has been dropping in the last decade, but this news is actually about computer AI making a choice, or by not making a choice, killing two people. It may not be full AI, but it is still a computer program in control. Two people died because of a computer program. In both accidents the "self-driving" AI program should have saved these people. Both times the person behind the wheel should have been able to avoid or lessen the collision if they were actually driving. We don't hear much about AI driving successes in avoiding crashes, just like we don't hear about planes that land safely. We only hear about failures. These features will get better with time and debugging (meaning more failures to come). Just as early commercial planes had their problems, so does AI self-driving. For now flying is safer than driving no matter who is in control of the car (0 commercial aviation deaths for 2017 in the US), and improved technology can only help our chances of making it home safely even if it makes the wrong choice occasionally (well, on average).

    • by 110010001000 ( 697113 ) on Saturday March 31, 2018 @09:20AM (#56358501) Homepage Journal
      Why are we allowing corporations to "debug" their cars on our roads? You have no concept of reality. 700 deaths on US roads. What would the number be if every car was using Tesla's autonomous technology? It might be 10,000 deaths per week. How would you know?
      • If the AI in its buggy state is still safer than human drivers, then it makes more sense to roll it out in its buggy state rather than wait until it's been debugged. As critical as I am of Tesla naming the feature "autopilot", it does seem to lower accident rates [electrek.co] on average. Pointing to specific incidents of failure when the average failure rate has actually gone down is nothing but cherry-picking data contrary to the average to support the conclusion you want.
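        The cherry-picking point above can be made concrete: what matters is the rate per mile driven, not whether individual failures exist. A toy Python illustration with hypothetical rates (not the figures from the linked article):

        # Toy numbers only: shows why single incidents don't settle the question.
        baseline_rate = 1.3   # hypothetical crashes per million miles, human-driven
        assisted_rate = 0.8   # hypothetical crashes per million miles, assistance engaged

        miles = 10_000_000
        print("expected crashes, unassisted:", baseline_rate * miles / 1_000_000)
        print("expected crashes, assisted:  ", assisted_rate * miles / 1_000_000)
        # The assisted fleet still produces crashes (8 here), each one a headline,
        # yet fewer than the 13 expected without assistance over the same mileage.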
    • by hey! ( 33014 )

      I think we're at an interesting point, where robotic drivers aren't as safe as human drivers in general, but are safer than human drivers under certain circumstances.

      What this means is that robotic assistance can be used to improve safety, but when misused will actually make things worse. So for the foreseeable future, every time one of these things is in a crash the question will arise as to whether the system failed, or the driver misused it.

      In fact both scenarios are bound to happen, and will continue

    • Uber plowed into a pedestrian at full speed on a well lit road, whereas this driver ignored six seconds of warnings to take control.

  • This situation is truly unfortunate. That being said, if we were all driven by computers (no humans) I bet the number of overall accidents would be far less. Accidents are just that, accidents. Condolences to the family. I ordered a Model 3 two days ago. Wife and I decided against the Autopilot, mostly because of the added expense. I believe the Autopilot concept is great; I'm just going to give it a little more time to mature.
    • This situation is truly unfortunate. That being said, if self driving worked properly and wasn't just a pipe dream, I bet the number of overall accidents would be far less.

      FIFY, and I happen to agree with you.
  • The driver had set the following distance to the minimum, which sounds like an unsafe decision. I have a Subaru with smart cruise control that allows setting the follow distance, which is speed-dependent. I set it to the maximum, which is very close to my normal follow distance. Any less would seem too risky for me to respond to unexpected events ahead. I don't know if the stated 150 meters was sufficient warning based on his speed (unknown?). Clearly the driver should have been driving. 150 meters is
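    Whether 150 meters was enough depends on speed. A rough, assumed-value estimate: at about 30 m/s (the speed implied by Tesla's "five seconds and 150 meters"), with a 1.5 s reaction time and firm braking at 7 m/s²,

    \[
    d \approx \underbrace{v\,t_r}_{30 \times 1.5 = 45\ \text{m}} + \underbrace{\frac{v^2}{2a}}_{900/14 \approx 64\ \text{m}} \approx 109\ \text{m} < 150\ \text{m},
    \]

    so on paper an attentive driver at that speed had room to stop; the issue was attention, not distance.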
  • Spin from Tesla (Score:3, Insightful)

    by MobyDisk ( 75490 ) on Saturday March 31, 2018 @09:19AM (#56358495) Homepage

    Reading and re-reading the quote from Tesla, I see I was misled:

    "The driver had received several visual and one audible hands-on warning earlier in the drive and the driver's hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken."

    This does not mean that the warning fired during the six seconds prior to the collision. It wasn't telling him a collision was imminent. It says that "earlier in the drive" it warned him. So the warning could have been 45 minutes prior. Also, it sounds like the autopilot warning happens any time the user takes their hands off the wheel, not just when it needs help. It might be that autopilot drivers have a tendency to ignore the warning, like a dialog box that comes up so often people just click "OK" to it.

    I begin to think that a semi-autopilot is a bad idea. If it is not reliable enough that a person can take their hands off the wheel, and they still must pay full attention to the road in case it makes a mistake, then they might as well drive? It is very hard to pay attention to something you aren't actively involved in. Airline pilots and lifeguards and factory quality inspectors know this. Those industries have specific policies and practices designed to keep people engaged and aware.

  • New classification of death - robotic process.

    By algorithm, metaphor and processing, a machine-made choice was fatal. This is the dawn of a post-reality age where humans are given a metaphor as a stand-in for reality to represent risk. Tesla chose to use a sound to implement a warning. What could go wrong? Did he have the windows down and couldn't hear? Maybe cabin noise was chaotic or distracting, but the metaphor implementation failed the human-in-control.

    So the cost of that weak metaphor is catastrophic. I think thou

  • Darwin wins again (Score:4, Insightful)

    by hyades1 ( 1149581 ) <hyades1@hotmail.com> on Saturday March 31, 2018 @09:27AM (#56358527)

    "...the victim had made several complaints to Tesla about the vehicle's Autopilot technology prior to the crash...

    So the guy who has complained not once, but repeatedly, that his car's autopilot is inadequate engages it and completely ignores what it's doing.

    This takes a special kind of stupid.

    Somehow I found the strength to ignore the low-hanging fruit: that this potential Darwin Award winner was an Apple engineer.

  • A dazed driver cannot assume responsibility for a ton of steel travelling fast down the highway within five seconds!

    In fact, studies have shown that a driver is not "up to speed" in driving capability for a long time after a takeover request, up to 40 seconds.

    Having a five-second limit is simply irresponsible, and 40 seconds is almost the same autonomous-driving (AD) challenge as a Level 4 system.

    Abolish level 3!
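    To put those numbers in distance terms, at the roughly 30 m/s implied by Tesla's statement:

    \[
    30\ \text{m/s} \times 5\ \text{s} = 150\ \text{m}, \qquad 30\ \text{m/s} \times 40\ \text{s} = 1200\ \text{m},
    \]

    so a 40-second re-engagement window corresponds to well over a kilometre of travel, which is the crux of the argument against Level 3 hand-offs.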

  • <sarcasm>It's not the fault of our autopilot, which apparently crashed the car into a wall under perfect driving conditions; it's the fault of the wall for being there without protections, and of the driver for trusting our autopilot(tm) system to auto-pilot his car. We are happy, however, to relieve the driver's memory of the guilt of having been burnt to death by our batteries, which catch fire when punctured, because the rescue operators acted fast enough to remove his incapacitated body from the wrec
  • According to ABC news: http://abc7news.com/automotive... [abc7news.com]

    The driver had complained about trouble with his car to Tesla before the fatal crash:

    "Walter Huang's family tells Dan Noyes he took his Tesla to the dealer, complaining that -- on multiple occasions -- the auto-pilot veered toward that same barrier -- the one his Model X hit on Friday when he died."

    If his Tesla had a history of doing something reckless, why would he re-enable it? Why would he also not have his hands on the wheel? Why didn't Tesla analyze the data in his car when he reported this to see what was going on? Seems like it would have been a pretty simple check: did the car attempt to steer itself towards the barrier or not?
