Flawed Analysis, Failed Oversight: How Boeing, FAA Certified the Suspect 737 MAX Flight Control System (seattletimes.com) 471

In one of the most detailed descriptions yet of the relationship between Boeing and the Federal Aviation Administration during the 737 Max's certification process, the Seattle Times reports that the U.S. regulator delegated much of the safety assessment to Boeing and that the analysis the planemaker in turn delivered to the authorities had crucial flaws. 0x2A shares the report: Both Boeing and the FAA were informed of the specifics of this story and were asked for responses 11 days ago, before the second crash of a 737 MAX. [...] Several technical experts inside the FAA said October's Lion Air crash, where the MCAS (Maneuvering Characteristics Augmentation System) has been clearly implicated by investigators in Indonesia, is only the latest indicator that the agency's delegation of airplane certification has gone too far, and that it's inappropriate for Boeing employees to have so much authority over safety analyses of Boeing jets. "We need to make sure the FAA is much more engaged in failure assessments and the assumptions that go into them," said one FAA safety engineer. Going against a long Boeing tradition of giving the pilot complete control of the aircraft, the MAX's new MCAS automatic flight control system was designed to act in the background, without pilot input. It was needed because the MAX's much larger engines had to be placed farther forward on the wing, changing the airframe's aerodynamic lift. Designed to activate automatically only in the extreme flight situation of a high-speed stall, this extra kick downward of the nose would make the plane feel the same to a pilot as the older-model 737s.

Boeing engineers authorized to work on behalf of the FAA developed the System Safety Analysis for MCAS, a document which in turn was shared with foreign air-safety regulators in Europe, Canada and elsewhere in the world. The document, "developed to ensure the safe operation of the 737 MAX," concluded that the system complied with all applicable FAA regulations. Yet black box data retrieved after the Lion Air crash indicates that a single faulty sensor -- a vane on the outside of the fuselage that measures the plane's "angle of attack," the angle between the airflow and the wing -- triggered MCAS multiple times during the deadly flight, initiating a tug of war as the system repeatedly pushed the nose of the plane down and the pilots wrestled with the controls to pull it back up, before the final crash.

[...] On the Lion Air flight, when the MCAS pushed the jet's nose down, the captain pulled it back up, using thumb switches on the control column. Still operating under the false angle-of-attack reading, MCAS kicked in each time to swivel the horizontal tail and push the nose down again. The black box data released in the preliminary investigation report shows that after this cycle repeated 21 times, the plane's captain ceded control to the first officer. As MCAS pushed the nose down two or three times more, the first officer responded with only two short flicks of the thumb switches. At a limit of 2.5 degrees, two cycles of MCAS without correction would have been enough to reach the maximum nose-down effect. In the final seconds, the black box data shows the captain resumed control and pulled back up with high force. But it was too late. The plane dived into the sea at more than 500 miles per hour. [...] The former Boeing flight controls engineer who worked on the MAX's certification on behalf of the FAA said that whether a system on a jet can rely on one sensor input, or must have two, is driven by the failure classification in the system safety analysis. He said virtually all equipment on any commercial airplane, including the various sensors, is reliable enough to meet the "major failure" requirement, which is that the probability of a failure must be less than one in 100,000. Such systems are therefore typically allowed to rely on a single input sensor.
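
A rough, purely illustrative sketch of the repeated-activation behavior described above: the 2.5-degree per-activation limit comes from the article, while the assumed maximum nose-down travel, the "reset after pilot input" model, and all names are made up for illustration only.

```python
# Toy model of the cycle described in the article: each uncorrected MCAS
# activation adds nose-down stabilizer trim, and pilot thumb-switch input
# winds some of it back. Numbers other than the 2.5-degree increment are
# illustrative assumptions, not Boeing's actual values.

MCAS_INCREMENT_DEG = 2.5   # per-activation authority cited in the article
MAX_NOSE_DOWN_DEG = 5.0    # assumed total nose-down travel for this sketch

def stabilizer_trim_after(cycles, pilot_correction_deg=0.0):
    """Net nose-down trim after N activation cycles, each partially
    undone by the pilot's electric-trim correction."""
    trim = 0.0
    for _ in range(cycles):
        trim = min(trim + MCAS_INCREMENT_DEG, MAX_NOSE_DOWN_DEG)
        trim = max(trim - pilot_correction_deg, 0.0)
    return trim

print(stabilizer_trim_after(2))                             # 5.0 -- two uncorrected cycles hit the assumed maximum
print(stabilizer_trim_after(21, pilot_correction_deg=2.5))  # 0.0 -- full correction each cycle keeps net trim bounded
```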

  • by Anonymous Coward on Monday March 18, 2019 @12:37PM (#58293138)
    This judgement is going to run into 10 digits.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      There's a good chance the aftermath of this is going to bankrupt Boeing.

      The evidence for gross engineering negligence is piling up, and they are not going to live through the results.

      • by Humbubba ( 2443838 ) on Monday March 18, 2019 @03:37PM (#58294406)
        The underlying problem is that the FAA has a revolving door to the aviation industry, where people, regulation, and oversight pass through unobstructed by responsibility or moral conscience.

        On a side note, this story from the Seattle Times shows how important investigative reporting is to society. If the government ever gets serious about regulating private enterprise again, it will be due to stories like this, and the resulting public outrage. We are yet again in their debt.

  • by Anonymous Coward on Monday March 18, 2019 @12:42PM (#58293176)

    > only two short flicks of the thumb switches

    In the systems you design, typically how many times is the user expected to press the Stop Trying To Kill Us button before the system leaves off trying to do so?

    • by alvinrod ( 889928 ) on Monday March 18, 2019 @12:46PM (#58293208)
      Infinitely many, but then again I'm designing a robot system that's specifically designed to kill humans.

      Otherwise, I use two. I'd use one, but Amazon also has the patent for single-click Stop Trying to Kill Us buttons in addition to single-click purchasing.
      • Re: (Score:3, Funny)

        by fahrbot-bot ( 874524 )

        Infinitely many, but then again I'm designing a robot system that's specifically designed to kill humans.

        But, like Octillion Killbots [fandom.com], Boeing 737 MAX planes have a preset kill limit. The only way to defeat them is to throw wave after wave of passengers at them ...

    • by jythie ( 914043 )
      I guess it depends if they bothered to include a 'shut off the system' button.
  • by mrlinux11 ( 3713713 ) on Monday March 18, 2019 @12:42PM (#58293178)
    The statement of using only one sensor is scary, especially for something that automatically adjusts the flight path, but even having two is scary. With 2 sensors, how does the software know which is right when they disagree? For true fault tolerance you need a minimum of 3 sensors.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Yeah, but that costs extra, and making it an option allows Boeing to nickel-and-dime the airlines that want to look more professional.

      And we can't have these costly things being mandatory in a free market neo-liberal economy!

    • by maroberts ( 15852 ) on Monday March 18, 2019 @12:58PM (#58293294) Homepage Journal

      In general if you have 2 sensors that disagree significantly, you disable all functions that rely on those sensors and issue an alarm.

      You might be able to decide which sensor is correct from data from other systems, but that is another story.
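
      A minimal sketch of that "disagree, then disable and alarm" pattern; the 5-degree threshold and the names are illustrative assumptions, not anything from the actual MCAS implementation.

      ```python
      # Dual-channel fail-safe pattern: if the two vanes disagree beyond a
      # threshold, return nothing usable so the caller can inhibit the
      # function and raise an alert. Threshold and names are assumptions.

      AOA_DISAGREE_THRESHOLD_DEG = 5.0

      def usable_aoa(left_vane_deg, right_vane_deg):
          """Averaged angle of attack, or None if the vanes disagree so much
          that anything relying on them should be inhibited and alarmed."""
          if abs(left_vane_deg - right_vane_deg) > AOA_DISAGREE_THRESHOLD_DEG:
              return None
          return (left_vane_deg + right_vane_deg) / 2.0
      ```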

    • by mattmarlowe ( 694498 ) on Monday March 18, 2019 @01:03PM (#58293338) Homepage

      Right, automation is good, but when lives are on the line... one needs to take every precaution and think about failure cases. I saw a video elsewhere that said that there was an easy way to disable the sensor, but when the pilot only has a few seconds to respond and he is busy trying to keep the plane in the air... In either case, even if we agreed that 1 sensor is enough, a 1 in 100K chance doesn't sound reliable enough to me. I'd rather see 1 in a million minimum, 1 in a billion ideally. You might need 5 sensors where at least 3 of them must trigger a fault to get super reliability. I'm not sure how expensive or tricky placing several of these sensors is. In any case, none of us are pilots, so it's all speculation here.

      On the politics and economics side, the US Air Force was recently reported to have chastised Boeing for QA issues. China and Europe, which want to dominate high-tech airplanes, have a vested interest in taking down Boeing. But it sounds like Boeing did this all to themselves, perhaps cutting corners to shorten time to market and increase production speed.

      As for the FAA, I never have high expectations of any government agency to look out for public safety over vested national and economic interests. Letting companies get sued into bankruptcy, with the CEOs unemployable, when they massively screw up is a much more compelling and reliable way to ensure corners aren't cut.

      • Technically, 1 failure out of 100k makes this a seven 9's system. That's on par with medical devices and aerospace systems. That's a very stable system in general. To put that into perspective, if you were trying to run a system with seven 9's uptime 24/7/365, you would only have an outage of about 36 minutes over the course of a year. These are very stable and dependable systems.
        • by jbengt ( 874751 )
          And yet, at least one of these sensors already failed.
          And I'm wondering, what does 1 in 100,000 mean?
          Is it 1 in 100,000 instruments over their lifetimes? That would be pretty good, but it obviously hasn't met that criterion.
          Is it 1 in 100,000 flight hours? That wouldn't be very good for a hazardous failure like this.
          Bottom line, though, is that Boeing should have had training materials about this failure mode.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          Technically, 1 failure out of 100k makes this a seven 9's system. That's on par with medical devices and aerospace systems. That's a very stable system in general.

          To put that into perspective, if you were trying to run a system with seven 9's uptime 24/7/365, you would only have an outage of about 36 minutes over the course of a year. These are very stable and dependable systems.

          99.99999% ("seven nines") is only 3.16 seconds per year, not 36 mins. That is closer to 4 nines, which might be fine for Facebook, but not the plane that I'm getting on.
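
          The arithmetic behind that correction, as a quick sketch:

          ```python
          # Back-of-the-envelope "nines" arithmetic for the figures above.
          SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

          def downtime_seconds_per_year(availability):
              return (1.0 - availability) * SECONDS_PER_YEAR

          print(downtime_seconds_per_year(0.9999999))  # ~3.15 s/year ("seven nines")
          print(downtime_seconds_per_year(0.9999))     # ~3154 s ≈ 53 min/year ("four nines")
          ```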

        • The failure rate makes it a 1 out of 100k component. If the system comprises many such components, its failure rate will be much, much worse than 1:100000.
      • I saw a video elsewhere that said that there was an easy way to disable the sensor, but when the pilot only has a few seconds to respond and he is busy trying to keep the plane in the air...

        ...then it's a training issue. They didn't train for that failure enough. If airlines want to fly planes with new technology, they have a responsibility to make sure that pilots are trained on it. Fighting the plane while ignoring the warning that the plane thinks something wacky is going on is pilot error, but the fault likely lies with the airlines' training requirements being designed primarily for low cost rather than for adequacy.

    • With 2 sensors, how does the software know which is right when they disagree?

      At least one possibility is laid out in TFA -- measure both sensors against a known point of reference when the plane is taxiing and therefore has an angle of attack of basically zero.

      It's extremely disconcerting that (1) they had two sensor inputs available but apparently chose to use only one; and (2) they apparently chose not to calibrate or otherwise validate the sensors before making use of them in a given operational cycle.
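
      A sketch of the ground-calibration check TFA suggests; the 2-degree tolerance and the names are assumptions for illustration only.

      ```python
      # Pre-flight sanity check suggested in TFA: while taxiing, the angle of
      # attack should be essentially zero, so both vanes can be validated
      # against that known reference. Tolerance and names are assumptions.

      TAXI_AOA_TOLERANCE_DEG = 2.0

      def aoa_vanes_plausible_on_ground(left_vane_deg, right_vane_deg):
          """True if both vane readings are close to the ~0-degree angle of
          attack expected while the aircraft is taxiing."""
          return (abs(left_vane_deg) <= TAXI_AOA_TOLERANCE_DEG and
                  abs(right_vane_deg) <= TAXI_AOA_TOLERANCE_DEG)
      ```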

    • by ceoyoyo ( 59147 ) on Monday March 18, 2019 @01:51PM (#58293666)

      With two sensors, if they disagree, you scream and don't do anything. The human then has to decide what's going on. That scenario is fine (even desirable) for a supplemental system like the MCAS. It's very, very unlikely that both of the sensors would get stuck in the same position, although you'd want to make sure that doesn't happen if some twit leaves a protective cover on them or something.

      A really critical system, that can't be shut off, should have triple redundancy.
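
      For the triple-redundant case, the usual approach is a middle-value (median) vote, sketched here with made-up names; real avionics voters also track channel health over time.

      ```python
      # Middle-value selector for triple-redundant sensors: a single stuck or
      # wildly wrong channel can never win the vote.

      def median_of_three(a, b, c):
          return sorted((a, b, c))[1]

      print(median_of_three(5.2, 5.4, 74.5))  # 5.4 -- the faulty 74.5 reading is outvoted
      ```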

    • The statement of using only one sensor is scary, especially for something that automatically adjusts the flight path, but even having two is scary. With 2 sensors, how does the software know which is right when they disagree? For true fault tolerance you need a minimum of 3 sensors.

      It is scary, but it is also a trend. As we have continued to advance sensor and instrumentation development (not only in aviation but across many industries where sensors are required), the devices have become increasingly reliable. As new standards are published, they have continued to reduce the requirements for redundancy and independence for safety-critical equipment. Even the latest IEC standards stopping your local chemical plant from gassing all its neighbours follow this trend.

      That said, I can't

    • by Shotgun ( 30919 )

      This.

      For manually flying IFR, the answer is that you compare multiple instruments. If the artificial horizon tells you that you're flying straight and level, but your compass is spinning, your altimeter says you're losing altitude, and your engine is revving higher than what it normally does for where the throttle is at...you know the artificial horizon is broken AND that you're in a spiral dive.

      The fact that the computer did not have a backup AoA, but that it is not constantly cross-checking against all t

    • by gweihir ( 88907 )

      And every good engineer in the safety/security space knows that. But bring in some MBAs and they will find statistics that say this can be done cheaper. And cheaper. And then a lot of people die.

      This is pretty much what I would have written had I heard of the single sensor before these crashes. It is bloody obvious.

  • by Anonymous Coward on Monday March 18, 2019 @12:42PM (#58293182)

    This smells like collusion between Boeing and the US government (the FAA) to rush through certification and undercut the competing Airbus product that was ready for this market.

    The resulting hundreds of deaths are a testament to failed oversight, cost-cutting, lack of redundancy, and what appears to be basic lying to other air regulators.

    Almost certainly this will come back to bite Boeing badly: first the lawsuits from the families of the dead, then lost sales of what many people would consider a flying death trap of a plane design. It will take a while for this taint to be forgotten, assuming that it is fixed, redundant systems are installed on all planes, and that they pass more robust certification processes around the world.

    • by bobbied ( 2522392 ) on Monday March 18, 2019 @01:17PM (#58293420)

      Well, you may be right that this smells... And you may be right in your assumption that Boeing rushed through the certification process, the FAA failed in its oversight capacity, and Boeing will be left liable for a pile of money... However, the implication that there was some kind of behind-the-scenes collusion deal between the FAA and Boeing is a pretty heavy lift, as you have crossed over from civil liability into criminal activity, where the burden of proof moves from preponderance of evidence to beyond a reasonable doubt.

      But, the Civil liability problem here will be borne by Boeing's insurance companies and punitive damages will rack up some pretty big numbers for the victims as a result which will come out of Boeing's profits after being tied up in court for about a decade on appeal.

      The end result will be that the aircraft will be rendered fit for service pretty quick and sales of the 737 MAX will resume unabated perhaps with a new name, with some PR efforts by Boeing and the airlines that fly these aircraft for a reason (they are cheaper to operate). There is nothing systemically wrong with the aircraft mechanically or aerodynamically and this flight control issue will be resolved, albeit by adding multiple sensors, cross checking of existing and redundant sensor data along with some software fixes and pilot training.

      I'm no Boeing fan boy, but let's be reasonable here. Yes, this will hurt Boeing in the short term and the awards will initially be sizeable, with the punitive part getting appealed and appealed for at least a decade before they get paid. This will largely be paid by their insurance carrier and their premiums will be assured to rise. However, these awards pale in comparison to the cost of an aircraft development program and Boeing won't struggle to pay them when they come due. The aircraft system will be reevaluated and redesigned as necessary to account for lessons learned. Any folks who should have known better in the decision tree for fielding and certifying the 737 MAX will be rooted out, processes to make sure this kind of thing doesn't slip by again will be introduced and we will return to normal.

      While this mistake is bad, let's put it in perspective for the nation's air safety. We've come a LONG way on air safety from the '60s, when accident rates were huge compared to now, or even the '90s, when DC-10s were crashing right and left from cargo doors blowing open and uncontained turbine failures. It's been a LONG time since the last major management mistake in air safety. A very long time. Humans make mistakes, and flying is a risky business that quickly turns mistakes into tragedy. We won't avoid human error in the future; all we can do is try to catch it before it kills anybody.

    • Both Boeing and the FAA are following the same interests in principle: Allowing a safe aircraft to fly. It appears someone screwed up - that aircraft apparently ain't safe.
      I can think of a similar case, June 3 1998 in a place called Eschede in Germany. Some of the wheels broke up on a train travelling at around 125 mph, part of the train smashed into a bridge which brought the bridge down on the rearmost part of the train. 101 dead and 88 badly injured.
      It turned out that that particular version of the IC

  • by JoeyRox ( 2711699 ) on Monday March 18, 2019 @12:46PM (#58293210)
    Forget the revolving door between the aerospace industry and the FAA - Boeing took out the middleman by convincing the government to let it self-regulate, even on matters of extreme importance like the airworthiness certification of aircraft. It's a win-win: Boeing wins because they reduce R&D and materials costs in getting subpar designs certified that otherwise would be rejected. Politicians win because they get their healthy campaign donations. The only people who lose are the ones who screamed for their lives as their plane plummeted to the earth.
    • It's a win-win: Boeing wins because they reduce R&D and materials costs in getting subpar designs certified that otherwise would be rejected.

      The win they went for is much, much bigger than this. It is market opportunity. By "streamlining" (gutting) regulatory oversight they can get their new models to market faster against the stiff competition of Airbus, and book more sales. That is an enormously larger gain than R&D costs. Every airline that already received one of these has parted with their money. Boeing doesn't give refunds.

    • by rsilvergun ( 571051 ) on Monday March 18, 2019 @01:52PM (#58293672)
      "regulation" implies a neutral third party. The Credit Card Industry has PCI. Video Games have ESRB. Movies the MPA. None of those things are as immediately lethal as a busted airplane though.

      But I wouldn't call it "regulatory capture" either, since Boeing was left to its own devices. They didn't have anything to capture.

      No, what we have here is plain, good ol' deregulation. These days "deregulation beats regulation" is automatic in most people's minds. Between this, Flint, MI, and the 2008 crash, I hope folks are starting to change their minds in that regard.
      • Re: (Score:3, Insightful)

        by Solandri ( 704621 )
        That's a rather convenient argument. When regulation succeeds, you laud it. When regulation fails, you blame it on deregulation. Therefore regulation can never fail and is thus always good. Brilliant. Successful regulation requires proper implementation of regulations. Failure to implement those regulations properly is a regulatory failure, not a failure due to deregulation.

        It should be noted that lots of other regulators offload the work (and thus the cost) of implementing those regulations ont
      • by Pyramid ( 57001 )

        If you think for a second that PCI is a neutral party, you're out of your mind.

    • by fermion ( 181285 )
      Another data point. The Pentagon is being run by a Boeing yes-man who recently put in a billion-dollar order for 1972 jet fighters.

      This is what happens when oversight is thrown away and the lobbyists run the government.

  • by roc97007 ( 608802 ) on Monday March 18, 2019 @12:54PM (#58293274) Journal

    > Yet black box data retrieved after the Lion Air crash indicates that a single faulty sensor -- a vane on the outside of the fuselage that measures the plane's "angle of attack," the angle between the airflow and the wing -- triggered MCAS multiple times during the deadly flight, initiating a tug of war as the system repeatedly pushed the nose of the plane down and the pilots wrestled with the controls to pull it back up, before the final crash.

    Jesus, what a nightmare. And, I'm sure, no way of turning off the MCAS even though it was clearly malfunctioning. That has to be the worst last moments for a pilot, ever.

    I read in a different article that the reason for the airframe design has its roots in the way airports were designed decades ago. Before they had those mobile tunnels that connected between the terminal and the plane, passengers had to walk out to the plane and ascend on a portable stairway. To make boarding easier, the original 737 was designed to be lower to the ground, so there wouldn't be as many steps to board. That part of the 737 design was never changed, and it made the airframe changes for the Max very awkward to implement. Hence the necessity for something like the MCAS, and hence the current mess.

    • Jesus, what a nightmare. And, I'm sure, no way of turning off the MCAS

      There's a switch, and a warning when MCAS activates, according to assorted comments in related discussions. That makes the crash a combination of bad design, equipment failure, and pilot error.

      I read in a different article that the reason for the airframe design has its roots in the way airports were designed decades ago.

      I read in slashdot comments :D that the reason for MCAS is the poor choice of putting too-big engines on this plane instead of doing a new design. It doesn't matter why the old design wasn't suitable for larger engines, the problem was not coming up with a new design that is suitable.

      • Hm. Ultimately, you are correct, but I think knowing the root of the design decisions points up another failing -- the company's tendency (or perhaps industry's tendency) to reuse old airframes for new designs. I suspect it's hugely more expensive to design a new airframe (Boeing's "new" dreamliner design is now 15 years old) rather than retrofit an existing one, and there's way too much financial temptation to leverage existing designs, even (this is the important part) where inappropriate.

        But whatever,

        • by turbidostato ( 878842 ) on Monday March 18, 2019 @03:12PM (#58294230)

          "but I think knowing the root of the design decisions points up another failing -- the company's tendency (or perhaps industry's tendency) to reuse old airframes for new designs."

          Only this has nothing to do with the current situation. Of course incremental development is inherently cheaper and safer, and of course, when the time comes, a new development is due, which Boeing knows perfectly well.

          This was about time and time only: they wanted to compete in the current wave of airline fleet renewal against Airbus, which, because of timing, was already on the market with a more modern design (it will probably be the other way around in, say, five years). They couldn't reach the market on time with a new airframe, but they could if they just scraped a bit more from the bottom of the old barrel.

          They tried, and it's just OK for them to do so.

          But then, all checks and balances were pushed aside: instead of letting the FAA do its job, more and more parts were self-assessed by Boeing itself (what could possibly go wrong? duh!). "Good" for Boeing, which could reach its goal date, and "good" for the overwhelmed FAA, which was strongly pressured to do more with less.

          As with basically any other accident, a lot of circumstances need to align for the fatality, but corporate greed and corporate greed alone put those planes far nearer the tragedy line than they should have been.

          * An old airframe design already squeezed.
          * Pressure to pass approval at speed.
          * Pressure for more and more processes to be pushed to Boeing's side so they could meet their dates.
          * Business interest in pitching the new MAX as just like the old NG so there would be no re-training for pilots (not only cheaper, but also sooner and, you know, time is money).
          * Moving goalposts during the approval process (0.6 to 2.5 degrees).

          * ...and to top it all, the quite minor mistake, amid all this rush and change, of forgetting that the final MCAS implementation would end up having full authority instead of just 0.6 or even 2.5 degrees, something that in "standard" circumstances wouldn't fly past the first or maybe second reviewer.

          So you ended up with a system categorized as non-critical (which it was, by the first draft), with (indirectly) full authority, and that was not even mentioned in at least the first batch of training manuals (because we made the new MAX feel and fly exactly like the older NG for your convenience).

          A magnificent example of the effects of modern capitalism in action.

        • by Shotgun ( 30919 )

          The catch is that designing a new airframe leads to new, unknown failure modes. The 737 is a tried and true airframe that a wide array of mechanics and inspectors know intimately. Its failure modes are known and protocols are in place to deal with them.

          Take the Airbus crash in New York many years ago. One of the problems that led to that was an underpowered horizontal stabilizer that had been serviced improperly and gave way when the plane flew into wake turbulence. (It's been said that aviation accident

      • by PPH ( 736903 )

        There's a switch, and a warning when MCAS activates

        But no pilot training. So "What's an MCAS?"

    • There's an auto-trim cut-out switch that shuts off MCAS. The pilots on the Lion Air flight kept on manually adjusting the trim (correctly diagnosing the problem as an auto-trim issue) but didn't cut off the auto-trim system. The penultimate flight crew on the same Lion Air jet also experienced the same problem, but disabled auto-trim and landed.

      • by Pyramid ( 57001 )

        In the United States, a runaway trim problem would have immediately grounded the aircraft. The 2nd (doomed) crew would have never taken off in that aircraft.

        Either the last crew failed to log it correctly or that country's failure laws are absolutely insane.

    • by ceoyoyo ( 59147 )

      I don't think the reason for the 737's ground clearance is really passenger boarding (although that might have been nice). A major feature of the 737 is that it's low to the ground so you can more easily load and unload cargo, including baggage. Passengers hike up stairs no problem... their bags and other cargo don't.

      Boeing has done a lot to keep that feature into the present day, including special engines with the bottom of the fairing flattened on the upgraded Classic and NG 737s.

    • The 737 predates modern turbofan engines. The old turbojets were narrower and longer, which fit under the 737's wings.

      https://airwaysmag.com/wp-cont... [airwaysmag.com]

      https://www.preferente.com/wp-... [preferente.com]

      They don't even look like the same aircraft, which is how Boeing can slip continuous changes to the 737 line in.

  • You what? (Score:4, Interesting)

    by mrbester ( 200927 ) on Monday March 18, 2019 @01:02PM (#58293330) Homepage

    > "Going against a long Boeing tradition of giving the pilot complete control of the aircraft, the MAX's new MCAS automatic flight control system was designed to act in the background, without pilot input"

    Or notify them either, it seems. Or be disabled when it erroneously kicks in over 20 times causing unexpected dives. Fuck everything about this system. Even if they fix it I'm not flying on any aircraft that has this.

    > "this extra kick downward of the nose would make the plane feel the same to a pilot as the older-model 737s"

    And that's also ridiculous. Because of the change in the engine configuration it is an aircraft that handles differently. "Compensating" so the pilot doesn't know the difference causes confusion, something you don't need when in charge of a passenger jet. Do they make 747s feel like you're flying a TriStar? Of course not.

    • "And that's also ridiculous."

      No, it isn't.

      "Because of the change in the engine configuration it is an aircraft that handles differently."

      No, it doesn't. That's exactly the point of MCAS.

      ""Compensating" so the pilot doesn't know the difference causes confusion"

      No, it doesn't. The pilots were not confused about the flight envelope of their planes in the slightest.

      "Do they make 747s feel like you're flying a TriStar?"

      Was the intention when designing a TriStar that it should behave like a 747? Of course not.

      Yo

  • by Anonymous Coward

    Going against a long Boeing tradition of giving the pilot complete control of the aircraft, the MAX's new MCAS automatic flight control system was designed to act in the background, without pilot input.

    Part of the problem is Boeing didn't want pilots to have to retrain and certify under a different type of aircraft.

    So they've jiggled things around to make it look like it's just like any other 737, but it now has different flight characteristics.

    So now Boeing has created a situation where they wanted this to

  • by Futurepower(R) ( 558542 ) on Monday March 18, 2019 @01:05PM (#58293362) Homepage
    The safety analysis:

    "1) Understated the power of the new flight control system, which was designed to swivel the horizontal tail to push the nose of the plane down to avert a stall. When the planes later entered service, MCAS was capable of moving the tail more than four times farther than was stated in the initial safety analysis document."

    "2) Failed to account for how the system could reset itself each time a pilot responded, thereby missing the potential impact of the system repeatedly pushing the airplane's nose downward."

    "3) ...

    I think this is the most important story on Slashdot in a long time.

    The article linked by Slashdot is the best, deepest story in a long time: Flawed analysis, failed oversight: How Boeing, FAA certified the suspect 737 MAX flight control system. [seattletimes.com]
    • by thegarbz ( 1787294 ) on Monday March 18, 2019 @02:10PM (#58293818)

      What we see here is reflected in most major incident investigations across industry involving instrumented systems: the reliability of the equipment is not in question. Throughout the process industry, some 80% of safety system failures were systematic. Poor design, poor maintenance, poor interaction, incorrect operation, etc. One in 100,000 units failing is not what ultimately caused these planes to crash; it was a bunch of engineers who didn't think about how the system works in operation.

  • Just like how the FDA relies on the drug companies to run all the tests, submit supporting docs, etc.
    • If these unfortunate events had happened on any aircraft not built by the USA, the media would be in self-congratulatory mode, touting how the USA's "superior checks and balances" have been able to positively influence commercial flight with distinction for over a century.

      This whole thing smells of corruption, usually relegated to "those other countries." I am not surprised that the USA was the last to ground these planes - corruption is why.

    • I assume that's largely due to funding. With these organizations getting their funding cut year after year (or at very least not seeing increases), I don't think they have the financial ability to do the testing themselves.
    • by ceoyoyo ( 59147 )

      That's not *quite* true. The actual human testing is usually planned by independent academics, performed by independent groups (often a whole bunch of hospitals around the world) and often coordinated by an independent contract research organization. The analysis of that data may be done by the company, or might be done by another independent company. Either way, there's a good paper trail, and a whole bunch of people involved who are not paid by the company.

      It's done that way because abuses have happened i

  • Stick pusher... (Score:4, Insightful)

    by b0s0z0ku ( 752509 ) on Monday March 18, 2019 @01:08PM (#58293384)

    Why not just use a stick pusher, like any other non-FBW aircraft with stall issues? Design it so it can be overridden with appropriate back force on the control wheels. Using trim for this is stupid, since with full down trim, you might not have enough elevator authority to recover quickly from a dive (i.e. even if the system is turned off, trim may have to be cranked back manually before the plane can recover).

    This looks like criminal stupidity on the part of Boeing engineers.

  • "Going against a long Boeing tradition of giving the pilot complete control of the aircraft, the MAX's new MCAS automatic flight control system was designed to act in the background, without pilot input"

    Often old and simpler is far better....

    • by thegarbz ( 1787294 ) on Monday March 18, 2019 @02:18PM (#58293884)

      Often old and simpler is far better....

      Right until you look at outcomes. You're speaking emotionally from a recent tragic incident. You're not speaking based on data. The airline industry (along with others such as the process and automotive industries) has had a long downward trend of safety incidents. One of the primary drivers of that has been taking control away from people. As a Boeing noses down to prevent a stall, a car somewhere in the world saves a driver thanks to forward crash avoidance. An operator who mistakenly lowers the level from a high-pressure separator is greeted by flashing alarms on his screen and a valve slamming shut in the field to prevent an explosion.

      Humans make mistakes; giving them full control is not the answer. It's always worth remembering why this system was built, and how pilots have in the past, through their own failures, demolished plenty of planes by putting the aircraft into a stall.

      Sidenote: The thing that is really missing here which goes against industry trends is a lack of inherently safer design. A more stable plane is preferable to a plane that is only stable when a certain control system is active.

    • by dunkelfalke ( 91624 ) on Monday March 18, 2019 @02:39PM (#58294006)

      You are. This crash merely shows yet again that a badly trained pilot - and many of them are - will crash the aircraft as soon as something unexpected happens. The cycle repeated 21 bloody times, yet the pilot kept fighting the aircraft instead of executing the correct procedure for a runaway stabiliser (essentially flicking two switches and manually cranking the stabiliser into the correct position).

      Bad pilots are a fact of life, hence the only way to protect passengers from pilots is more automation, not less.

      • This crash merely shows yet again that a badly trained pilot - and many of them are - will crash the aircraft as soon as something unexpected happens.

        Your post merely shows that you are an idiot. The pilots were properly trained, however the MCAS and means of disabling it are undocumented, and Boeing claimed that pilots did not need to be retrained for this version of the 737.

  • What about the ASI? (Score:4, Interesting)

    by NewtonsLaw ( 409638 ) on Monday March 18, 2019 @01:40PM (#58293598)

    Attitude is only one element of the aircraft's operation -- what about airspeed?

    Surely if there was a large disparity between the aircraft's airspeed and its attitude (ie: it is accelerating beyond 500mph while the attitude sensor says it's in a steep climb) then the safety system ought to have recognized that there was a fault condition and triggered an alarm which would allow pilots to disable it with the simple flick of a switch.

    Sadly, it seems that this system was never designed to be disabled -- because it was part of the FBW system used to modify the apparent flight characteristics of the new Max8 model so that it would fly like an earlier 737. This was done (so I understand) solely to make the plane more attractive to airlines that didn't want the extra expense of having to get their pilots "rated" for a new aircraft type.

    When it comes to the mighty dollar versus safety -- you *know* which one wins :-(

    Meanwhile, some people are still saying "it's only a matter of time before a drone brings down an airliner". I wish they'd shut up and focus on the *real* risks that are *actually* claiming hundreds of lives in the aviation industry.

  • He said virtually all equipment on any commercial airplane, including the various sensors, is reliable enough to meet the "major failure" requirement, which is that the probability of a failure must be less than one in 100,000.

    One in a hundred thousand WHAT?

    Flights? As of 2014 there are a bit over 100,000 flights per DAY! With a rule like that there should be, on average, somewhat over one "major failure" per day per system of that classification level, which allows a single point of failure to exist.
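
    A quick sketch of that arithmetic, under the (questionable) reading that the figure is per flight; in practice such probabilities are usually quoted per flight hour.

    ```python
    # If "one in 100,000" were per flight, then at ~100,000 commercial flights
    # per day you would expect roughly one such failure per day for each
    # system certified to that level. Figures are from the parent comment.
    flights_per_day = 100_000
    p_major_failure_per_flight = 1 / 100_000
    print(flights_per_day * p_major_failure_per_flight)  # 1.0 expected failure per day
    ```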

  • How the hell does a critical sensor on an aircraft fail without the system knowing about it? My freaking car told me yesterday that the microphone in the entertainment unit had developed a fault...

  • Too many patches to keep building newer technology airplanes that handle like the old ones. Just to save money on certification and pilot training. Stop already. Just design a new airplane.

  • by ytene ( 4376651 ) on Monday March 18, 2019 @02:21PM (#58293904)
    On June 29th, 2011, the Department of Transportation's Office of Inspector General issued a detailed (23-page) audit report that examined the Federal Aviation Administration's approach to risk management.

    You can read the report directly here [dot.gov].

    This report, published in June 2011, documents in stark detail that the approach taken by the FAA - to significantly scale back oversight of aircraft manufacturers - represented significant risk, even if that activity were performed adequately.

    In more detail, the report explains how the FAA took the decision to delegate to the manufacturers the hiring of individuals to serve as "FAA engineers" - essentially the supposedly independent inspectors who are intended to be able to objectively assess the effectiveness of the design and modification procedures conducted by the company that hired them.

    If that wasn't bad enough, the report goes on to say that once the FAA had conducted initial inspections [the document quotes a 2 year time window of monitoring] it then stepped back from even an oversight role. In other words, there was no way that the FAA could have had any confidence that the modifications introduced with the 737 MAX aircraft were actually functional as claimed.

    If you read around this news story in search of more details, you might find a couple of other relevant pieces of information. Staggering pieces of information...

    One is that Boeing's design/development process broke down, so that when the "final" aircraft was reviewed / safety inspected by their in-house "FAA engineer", all the presented paperwork showed that the force imparted on the control column by MCAS was set at relatively low, original design levels. In truth the design had changed, to the extent that one of the pilots in the Lion Air flight incident had been attempting to fight the controls with over 100 lbs of force - and had failed to overcome the aircraft's systems.

    Another is that the sensor input to the MCAS system that turned out to be closely related to the problem may have been basing decisions on a single, faulty attitude sensor.

    Whatever the causes of the two recent failures in terms of the operational characteristics of the two aircraft involved, I think the 2011 Inspector General's report shows that both of these events were clearly avoidable and could have been prevented had the FAA leadership performed their duties responsibly.
  • by Btrot69 ( 1479283 ) on Monday March 18, 2019 @02:57PM (#58294132)

    I love software testing, but I quit and moved on because the field just doesn't get the respect/resources it needs.
    Developers and managers are always trying to "rein in" the testing staff and make them stick to a stupid script --
    written by the same developers that made the mistakes in the first place.

    My most important bug discoveries were almost always the result of informal testing, or thinking about the test script
    and "trying something" that wasn't on the script. Overnight "random monkey testing" with the automated test harness was
    very effective at finding real world problems -- but invariably got a rebuke from some manager, "Why were you doing that?"

    This sounds a lot like that, but with the added bureaucracy of Aerospace+gov't.
    The development process then adapts to minimize bureaucracy, instead of maximizing safety.

    So as I see it, one of two things happened:

        1. There was a test engineer somewhere who thought about these failure modes before the first crash. He was ignored and didn't have the power to escalate the issue.

        2. The tests were stupid and were run by stupid people.

    There were enough red flags -- I think it was #1.

    Test Engineers -- throw off your chains !
    The safety of the world depends on you.

  • by FeelGood314 ( 2516288 ) on Monday March 18, 2019 @03:31PM (#58294366)
    And my ex-wife was likely responsible for the OS that the plane was using. Certification is backwards. The company making the OS or plane or drug should not be paying for the certification. The buyers of the product need to group together to do it. When I did security certification at IBM, no one ever failed. Our customer was the maker of the product, so we couldn't fail them. We almost never asked the customer to make changes (and when we did, we never verified that they made the changes); all the certification process was about getting the paperwork correct. For the OS certification it might actually be worse. The certifiers probably aren't very good programmers. Their tests are running automated code checkers and running a subset of the tests the OS maker made. One really bad mistake my ex's team made was misunderstanding a processor errata spec on cache misses. A non-trivial percentage of the world's aircraft were nearly grounded because of that*. My ex's team had misread the errata and the certification house had relied on her team's interpretation of the errata (or more likely had no clue what it meant).

    Critical systems don't allow free(), so all non-stack memory will be in static locations. Someone was able to write a program to analyse the executable images to determine whether this particular cache miss would ever happen. It turned out that no production systems were affected. The scary part, though, is that changing the length of a single text string could trigger this problem.
  • by brausch ( 51013 ) on Monday March 18, 2019 @05:12PM (#58295010)

    One in 100,000 what? Seconds, minutes, hours, lifetimes?

    It is stupid to make something that can kill people rely on a single input sensor. I programmed experimental tests in nuclear reactors and we always had multiple inputs (thermocouples, flow sensors, etc.) and had sanity checks on the values to identify failed equipment.

    Seems like Boeing's software could have taken more things into consideration than just the angle of attack? What about speed, altitude, rate of climb/descent, etc.
