AI Downs 'Top Gun' Pilot In Dogfights (dailymail.co.uk) 441

schwit1 writes, sharing a report via the Daily Mail: "The artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by retired USAF Colonel Gene Lee -- who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise. He took on the software in a simulator. Lee was not able to score a kill after repeated attempts. He was shot out of the air every time during protracted engagements, and the AI, according to Lee, is 'the most aggressive, responsive, dynamic and credible AI I've seen to date.'" And why is the US still throwing money at the F-35, unless it can be flown without pilots? The AI, dubbed ALPHA, features a genetic fuzzy tree decision-making system, a subtype of fuzzy logic algorithms. The system breaks larger tasks into smaller ones, including high-level tactics, firing, evasion, and defensiveness. It can calculate the best maneuvers in varied, changing environments over 250 times faster than its human opponent can blink. Lee says, "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
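For readers wondering what a "genetic fuzzy tree" actually is: it is a hierarchy of small fuzzy-logic controllers whose rules and membership functions are tuned by a genetic algorithm. The sketch below illustrates only the fuzzy-rule building block, in Python; every variable, breakpoint, and rule in it is invented for illustration, and none of it is taken from ALPHA itself.

# Minimal sketch of fuzzy rule evaluation, the building block of a
# "genetic fuzzy tree" -- a hierarchy of small fuzzy controllers whose
# membership functions are tuned by a genetic algorithm. All variables,
# breakpoints, and rules here are invented; nothing is from ALPHA itself.

def tri(x, lo, mid, hi):
    """Triangular membership: degree to which x belongs to the fuzzy set."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

def evade_urgency(threat_km, closure_mps):
    """One tiny fuzzy controller: how urgently should we evade?"""
    close   = tri(threat_km, 0, 0, 10)         # threat is "close"
    far     = tri(threat_km, 5, 20, 20)        # threat is "far"
    fast_in = tri(closure_mps, 100, 400, 400)  # closing quickly

    # Rule strengths (AND = min), each mapped to an output level 0..1.
    rules = [
        (min(close, fast_in), 1.0),   # close AND closing fast -> evade hard
        (close,               0.6),   # close                  -> evade
        (far,                 0.1),   # far                    -> cruise
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                  # weighted-average defuzzification

print(evade_urgency(threat_km=3.0, closure_mps=350.0))  # high urgency (~0.8)
print(evade_urgency(threat_km=18.0, closure_mps=50.0))  # low urgency (~0.1)

The "tree" part comes from composing many such small controllers (evade, fire, position), so the genetic algorithm tunes each one in a low-dimensional space instead of evolving one enormous rule base -- which matches the summary's description of breaking larger tasks into smaller ones.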
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Unsurprising (Score:5, Interesting)

    by fredgiblet ( 1063752 ) on Tuesday June 28, 2016 @06:03AM (#52404441)
    It was only a matter of time; computers are able to keep complete situational awareness while analyzing what the target is doing. The only question is how long until we can trust them to work totally autonomously. THAT probably won't come for a while.
    • Re:Unsurprising (Score:4, Insightful)

      by Anonymous Coward on Tuesday June 28, 2016 @06:20AM (#52404483)

      Completely unsurprising since game bots have been able to outmaneuver human players for decades now. The only thing game bots were lacking was adequate sensor input to gain area awareness in the real world without oversimplified preprocessed maps and precisely placed path nodes.
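      (For context on those path nodes: classic game bots don't perceive the level at all; they run graph search over hand-authored waypoints. A minimal A* sketch in Python follows -- the graph, names, and coordinates are all invented for illustration:)

      from heapq import heappush, heappop

      # Classic game-bot navigation: A* over a hand-placed waypoint graph.
      nodes = {"spawn": (0, 0), "door": (5, 0), "hall": (5, 5), "roof": (9, 5)}
      edges = {"spawn": ["door"], "door": ["spawn", "hall"],
               "hall": ["door", "roof"], "roof": ["hall"]}

      def dist(a, b):
          (x1, y1), (x2, y2) = nodes[a], nodes[b]
          return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

      def a_star(start, goal):
          """A* over the waypoint graph: cost so far + straight-line heuristic."""
          frontier = [(dist(start, goal), 0.0, start, [start])]
          seen = set()
          while frontier:
              _, cost, node, path = heappop(frontier)
              if node == goal:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt in edges[node]:
                  step = cost + dist(node, nxt)
                  heappush(frontier, (step + dist(nxt, goal), step, nxt, path + [nxt]))
          return None

      print(a_star("spawn", "roof"))  # ['spawn', 'door', 'hall', 'roof']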

    • Re: Unsurprising (Score:3, Interesting)

      by Anonymous Coward

      Typical 'futurist' article, complete with over-the-top superlatives and everything. Predicting the demise of humans in yet another field where nobody actually wants that. Countermeasures to things will always exist, and the fun part about countermeasures to 'artificial intelligence' is that once you have one, the entirety of the enemy's systems are cooked. Look at what happened when our last one-trick pony, the F-117, had its stealth penetrated. The entire platform became useless.

      Maybe, and here's a concept

      • Comment removed (Score:5, Insightful)

        by account_deleted ( 4530225 ) on Tuesday June 28, 2016 @06:27AM (#52404489)
        Comment removed based on user account deletion
        • G-force limits, too (Score:5, Interesting)

          by Richard Kirk ( 535523 ) on Tuesday June 28, 2016 @07:24AM (#52404659)
          This is probably old data, but few pilots, even in special elasticated suits, can get beyond 10g without blacking out. As we approach our limit, our peripheral vision goes, so even if we don't black out, we are not working well if we keep this up for long. It is possible to make conventional airframes that can take 25g if you don't have to cut big holes in the airframe for the cockpit. So a computer, in a plane built for a computer, ought to rule.
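          (Back-of-the-envelope check on those numbers: for a sustained level turn at load factor n, the standard formulas are radius r = v^2 / (g * sqrt(n^2 - 1)) and turn rate w = g * sqrt(n^2 - 1) / v. The sketch below plugs in the parent's 10g and 25g figures; the 250 m/s airspeed is an arbitrary illustrative choice, not data about any real aircraft:)

          from math import sqrt, degrees

          # Level-turn performance at a pilot-limited vs an airframe-limited
          # load factor:
          #   radius r = v^2 / (g * sqrt(n^2 - 1))
          #   turn rate w = g * sqrt(n^2 - 1) / v
          g = 9.81     # m/s^2
          v = 250.0    # airspeed in m/s (arbitrary illustrative value)

          for n in (10.0, 25.0):
              k = g * sqrt(n ** 2 - 1)
              radius = v ** 2 / k            # metres
              rate = degrees(k / v)          # degrees per second
              print(f"{n:4.0f} g: radius {radius:6.0f} m, rate {rate:5.1f} deg/s")

          (At the same speed, the 25g airframe turns in well under half the radius and at more than double the rate of the 10g one, which in a guns engagement is decisive.)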
          • This was in an episode of Airwolf, where Stringfellow has to fight another helicopter piloted by a computer. The "unfortunate" passengers (bad guys) in the AI copter were killed by the G-forces inflicted on them by the machine as it disregarded their safety to try to win the dogfight.

          • by MTEK ( 2826397 ) on Tuesday June 28, 2016 @11:04AM (#52406047)

            Hybrid solution, though not something I'd want to sign up for...

            • 1. Pilot identifies threat aircraft.
            • 2. Pilot engages combat AI.
            • 3. Pilot wakes up five minutes later with a headache and a kill.
      • by m76 ( 3679827 )

        So it will be that the dreams of getting rid of humans will die a cold death in the various parents' basements where these futurists live.

        Humans doing less dangerous and menial jobs is a good thing, not a bad thing.

        • Re: Unsurprising (Score:5, Interesting)

          by arth1 ( 260657 ) on Tuesday June 28, 2016 @07:58AM (#52404791) Homepage Journal

          Humans doing less dangerous and menial jobs is a good thing, not a bad thing.

          That's dogmatic, and not necessarily true.

          I would think that humans doing dangerous things for which there are rewards[*] helps provide an evolutionary pressure against those not doing dangerous things, and those failing at them.

          [*]: Primary, as in winning wars, or secondary, as in being better paid than average or attracting more mates.

          That you can toss a wrapper into the wastebin from across the room, that you can walk for miles, and that you can balance on a bike are likely all because of your ancestors doing dangerous things. It paid off.

          As for menial tasks, the same applies. Being good at those, too, lends an advantage.

          We have this big thing on top of our necks, and really complicated protein factory patterns. We can afford to be good at a lot of things, much more so than most of our cousin species. But that's only to our advantage if we do become good at things, and fill that squishy bulb.
          I firmly believe that that includes doing both dangerous and menial things.

          Which is why I'm now getting into my car, challenging death on the county road to do menial tasks like benchmarking at work. Have a nice day!

          • by m76 ( 3679827 )

            I would think that humans doing dangerous things for which there are rewards[*] helps provide an evolutionary pressure against those not doing dangerous things, and those failing at them. [...] That you can toss a wrapper into the wastebin from across the room, that you can walk for miles, and that you can balance on a bike are likely all because of your ancestors doing dangerous things. It paid off.

            Knowledge can't be passed down between generations, it's inherited. I'm not able to walk because my grandfather was made to walk in WW1 and died doing it. I can walk because I have legs. It has nothing to do with putting people into dangerous situations that can be avoided. Of course there are dangerous situations where the person wants to be there, but that's a different thing. I'm not saying don't let them. But would any coal miners want to be in the mine, if it weren't for a wage-slave predicament?

            Peop

            • I'm not able to walk because my grandfather was made to walk in WW1 and died doing it. I can walk because I have legs.

              You're able to walk because your great-times-n grandfather did something dangerous, namely coming down from the trees.

              Maybe n isn't so large, given this gem.

              Knowledge can't be passed down between generations, it's inherited.

            • by arth1 ( 260657 )

              Knowledge can't be passed down between generations, it's inherited.

              You can read, can't you? Was that inherited?

              I'm not able to walk because my grandfather was made to walk in WW1 and died doing it. I can walk because I have legs.

              You can walk because an ancestor of yours climbed down from a tree, and dared cross the plain to find food or get away from predators. Those staying behind, or dying while crossing the plain, didn't get to propagate their genes. The same goes for those who dared hunt big animals, trusting that they could catch them or run away if needed. Repeat thousands of times, and evolution paid the ultimate reward to those who had mutations making walking more functional.

              Your talk about WWI s

          • ... helps provide an evolutionary pressure ... [*]: Primary, as in winning wars, or secondary, as in being better paid than average or attracting more mates.

            Unfortunately, this does not generate genetic pressure.

            Actually there is no such thing as "genetic pressure".

            Bottom line it is about who breeds faster, or breeds before he dies.

            You can wipe out the gene pool of some browns or yellows with Napalm and Nukes: that has no effect at all on your gene pool.

            Pretty dumb to think otherwise.

            Getting some of the yell

          • by ranton ( 36917 )

            First off, modern humans are no longer under any meaningful evolutionary pressure, other than perhaps traits which contribute to male or female infertility. Almost everyone who wants children can have them unless they are infertile. Secondly, modern humans will be capable of genetic engineering very soon from an evolutionary point of view. It may be a decade, it may be 200 years, but almost no evolutionary changes would happen in either time frame. Once that happens natural selection will no longer play any

            • by ceoyoyo ( 59147 )

              Sure we do. Look at all those guys in WWII who died before they had kids. Or all the jocks who manage to die in high school or college.

              Larry Niven has written science fiction stories about alien species starting human wars in order to try and breed a more docile human species. We still have evolutionary pressure. It's just in the opposite direction the OP thinks it is.

      • Predicting the demise of humans in yet another field where nobody actually wants that.

        You might not want to replace humans with computers even if computers are superior at the task, but if you don't, your fighters are at a disadvantage against any enemy who will -- and that disadvantage is only going to get larger with time, since computers advance faster than humans evolve. The "god of war" makes the decisions; you obey or die. That's the true nature of a world driven by competition: everyone has their choice

    • If they are planning on going down the totally autonomous route, they might as well totally redesign the fighter, as there's no need for a cockpit and most of the safety stuff that goes with it. Once we've completely got rid of all the dead weight we had in there for the pilot, then we can truly bow down before our robot overlords.
    • It was only a matter of time; computers are able to keep complete situational awareness while analyzing what the target is doing.

      Umm, you are aware that this is a SIMULATION, not the real world, right? We're not talking about a real jet with a real AI in real combat conditions. Yeah, computers can beat people at games -- we've been able to do that for a long time. Not at all the same thing as a real world fight in conditions where the rules of engagement are unclear, the political situation is fraught, and the decision to fire is difficult. We put humans in as pilots as much for their decision making abilities as we do for their ability to fly.

      • Yes, we are a very long way from letting these things operate completely autonomously, but they don't need to. The drones can be operated remotely by human operators; then, once the decision has been made to engage a target, the drone switches over to automatic for the actual combat.

        • Yes, we are a very long way from letting these things operate completely autonomously, but they don't need to.

          We should NEVER let these things operate with complete autonomy. Ever. Doing so is both unethical and a bad idea for very practical reasons as well.

          The drones can be operated remotely by human operators; then, once the decision has been made to engage a target, the drone switches over to automatic for the actual combat.

          Actual combat isn't a simple thing that you can switch on and off. It's messier than that. Giving complete autonomy to a drone at any point is a highly questionable idea because your ability to retake control may be out of your hands. Once the bullet leaves the chamber it's pretty hard to bring it back. Real combat isn't like a video game where you have n

      • Not at all the same thing as a real world fight in conditions where the rules of engagement are unclear, the political situation is fraught, and the decision to fire is difficult.

        Isn't that even more reason to use AI planes? They are, after all, expendable. You can afford to lose them at whatever rate the factories can manufacture them without having to worry about lost lives or grieving families.

    • To me, this is like those quadcopters that can play ping-pong - in a perfectly known environment; in the case of the copters, with fixed tracking cameras all around the room.

      Getting that kind of total situational awareness in the field, with smoke and chaff and hostile signals in the air, can be more challenging. To paraphrase young Solo: "Good in a simulation, that's one thing, good in the real world, that's something else."

  • by Anonymous Coward on Tuesday June 28, 2016 @06:05AM (#52404449)

    So maintaining air superiority now becomes an IT security issue.

  • by OpenSourced ( 323149 ) on Tuesday June 28, 2016 @06:05AM (#52404453) Journal

    ...it doesn't end well.

  • by Anonymous Coward on Tuesday June 28, 2016 @06:19AM (#52404481)

    Translation: he took on the software in its version of reality, with it either being omniscient or having a perfect model of its sensors' deficiencies. Meanwhile, the human had to work with the simulator's presentation of that reality, filtered through its display devices and limited to whatever the simulator builders considered important enough to model and could actually physically present (good luck with proper accelerations, for example).

    • by prefec2 ( 875483 )

      You are right; this is a threat to validity. However, it is still impressive. Furthermore, you do not know whether the simulation gave the AI perfect sensors; this could have been handled by a module that models sensor limitations and physics. Also, it is usually not possible to use real planes, as those can only be shot down once and they are kind of expensive, i.e., not in the range of a doctoral candidate's budget.

    • Re: (Score:2, Informative)

      by Anonymous Coward
      TFA indicated it wasn't a perfect simulation, and even with handicaps the AI still handily beat out the human.
      • by arth1 ( 260657 )

        TFA indicated it wasn't a perfect simulation, and even with handicaps the AI still handily beat out the human.

        It also indicated that the human pilot was not the best, having first been promoted to not flying, and then retired.
        And then put in a situation where the familiar cues of flying were missing, like feeling G-forces and gravity.

        This was never meant to be a fair fight. It was meant to attract financing by showing a concept, and at the same time winning over the less critical thinking (i.e. politicians) by having a "win".
        This was well orchestrated and well performed. Now let's see if it loosens the purse strings.

    • by AmiMoJo ( 196126 ) on Tuesday June 28, 2016 @08:12AM (#52404865) Homepage Journal

      I don't know why you are surprised that the computer is better. Aside from anything else, it will be able to push the aircraft to the absolute limit of performance without blacking out due to G forces. All modern jets rely on computers to distil sensor data down to something that the pilot can process at a much slower rate than the machine can anyway.

      The simulators are pretty good actually. They spend a lot of effort making the computer controlled opponents realistic in terms of sensor capability. If anything the human has an advantage here, since acceleration induced blackouts are not simulated.

    • by dave420 ( 699308 )

      You're desperate to make any excuse you can, huh? Your translation demonstrates you probably don't know what you're talking about. This is not a copy of Flight Simulator running on a 14" CRT with a Logitech Wingman Extreme.

      • Of course it's not Microsoft Flight Simulator, but it's definitely a flight simulator. The article doesn't say whether the AI is driven by simulated data from radars and the like, or reads the simulation state directly. If it's plugged into the simulation directly, then it has the same problem as all other game AIs: an omniscient AI with perfect reaction times has to be made artificially dumb for humans to have any chance of winning.
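      (The usual fix for that omniscience is to stop feeding the AI ground truth and give it degraded observations instead: delayed, noisy, range-limited measurements. A toy wrapper in Python, with every parameter invented for illustration:)

      import random
      from collections import deque

      # Toy "degraded sensor": the AI sees delayed, noisy, range-limited
      # positions instead of the simulator's ground truth.
      class DegradedSensor:
          def __init__(self, noise_m=50.0, delay_steps=3, max_range_m=40_000.0):
              self.noise_m = noise_m                       # 1-sigma position error
              self.buffer = deque(maxlen=delay_steps + 1)  # models sensor latency
              self.max_range_m = max_range_m               # detection limit

          def observe(self, true_pos, own_pos):
              self.buffer.append(true_pos)
              delayed = self.buffer[0]                     # oldest buffered sample
              dist = sum((a - b) ** 2 for a, b in zip(delayed, own_pos)) ** 0.5
              if dist > self.max_range_m:
                  return None                              # target not detected
              return tuple(p + random.gauss(0.0, self.noise_m) for p in delayed)

      sensor = DegradedSensor()
      for t in range(5):
          truth = (1000.0 * t, 5000.0, 3000.0)             # target's true position
          print(sensor.observe(truth, own_pos=(0.0, 0.0, 3000.0)))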
    • Translation: he took on the software in its version of reality, with it either being omniscient or having a perfect model of its sensors' deficiencies.

      After boats (which got autopilots very early on), aircraft are the easiest vehicle-piloting job for AI, for a broad array of reasons. The sensor package is one of the most compelling: they really know where they are and what they are doing. Some literally $1 accelerometers will tell you the vast majority of what you need to know to keep a plane in the air (a sketch of the usual filtering trick follows this comment).

      It should not shock anyone that an AI would be a better combat pilot than a human, especially when it comes to stuff like leading shots.

      Tracking a target with a camera and making a visual estimate of its heading is not that hard any more, especially for aircraft, which we've been spotting first with our eyes and then with software for as long as they have existed. We have rather complex and expensive spying programs designed to tell us where military aircraft are and what they are doing. And aircraft don't go backwards, and they don't stop in mid-air, etc. What they are up to is a lot easier to estimate than for other types of vehicle -- again, except boats.
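      (On the "$1 accelerometer" point: the classic recipe for a usable attitude estimate from cheap MEMS parts is a complementary filter, blending an integrated gyro, which is smooth but drifts, with an accelerometer-derived angle, which is noisy but drift-free. A minimal one-axis sketch in Python; the 0.98 blend factor is conventional but otherwise arbitrary:)

      from math import atan2, degrees

      # One-axis complementary filter: blend an integrated gyro (smooth but
      # drifting) with an accelerometer-derived angle (noisy but drift-free).
      def complementary_filter(samples, dt=0.01, alpha=0.98):
          pitch = 0.0                                # degrees
          for gyro_rate, (ax, az) in samples:        # gyro deg/s, accel in g units
              accel_pitch = degrees(atan2(ax, az))   # gravity-referenced pitch
              gyro_pitch = pitch + gyro_rate * dt    # dead-reckoned pitch
              pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch
          return pitch

      # Fake data: aircraft held at roughly 10 degrees nose-up, with a
      # deliberate 0.5 deg/s gyro bias; 5 seconds of samples at 100 Hz.
      samples = [(0.5, (0.17, 0.98))] * 500
      print(complementary_filter(samples))           # settles near 10 degrees

      (Even with the deliberate gyro bias in the fake data, the estimate settles near the ten-degree pitch implied by the accelerometer, which is the point: cheap sensors plus a simple filter recover most of the state a flight controller needs.)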

  • I'll take the blue pill now.
    • by prefec2 ( 875483 )

      Sorry pal you cannot get back into the simulation. Out is out. After taking the red pill you are out.

      • Sorry pal you cannot get back into the simulation. Out is out. After taking the red pill you are out.

        Well... you can ask to be let back in. But you'll probably have to lubricate a piston.

  • by dwillden ( 521345 ) on Tuesday June 28, 2016 @07:16AM (#52404635) Homepage
    Because when we automate war and remove the risk of losses on our side, it becomes too easy to just throw more robots into a situation. War is not something that should be automated; we need to retain the potential of real losses to restrain our desire to engage in war. Even extensive use of drones is taking us dangerously down that path. We can kill those who oppose or offend us without risk of our own losses, and thus we have little cause for showing restraint in using such equipment to conduct our foreign policy.

    Oh and Skynet!!!
    • It's actually "good PR" to have pilots in the planes... shows we care enough to risk a man's life to do the task. Now, when the "manned" planes start flying with mannequins in the pilot's seat...

    • by c ( 8461 )

      War is not something that should be automated; we need to retain the potential of real losses to restrain our desire to engage in war.

      Automated war would be far more palatable if we strapped the idiot politicians who get us into wars into the passenger seats of our killbots.

    • Because when we automate war and remove the risk of losses on our side, it becomes too easy to just throw more robots into a situation.

      We are only robots or slaves to the elite who send us to war anyway, so no. That's not really a valid argument. The only reason we're not using robotic pilots right now is that they're not as reliable as humans. That's changing.

  • by PvtVoid ( 1252388 ) on Tuesday June 28, 2016 @07:24AM (#52404665)

    It's worse than that: the AI in this test won while piloting an evenly matched plane. But the weak point in modern fighter jet design is the squishy, fragile thing in the cockpit, which can't take more than 8 g or so, and not even close to that for negative g-forces. Get rid of the pilot, and you can design a plane whose performance is vastly better than a piloted plane's. Now put that AI in it and send it head-to-head against an F-35. No contest.

    • by DNS-and-BIND ( 461968 ) on Tuesday June 28, 2016 @08:56AM (#52405099) Homepage

      The airframes can't take 8G either. You take a modern fighter jet fresh off the assembly line, put it through several 8G turns, and you've just drastically shortened its service life. High-G turns create a huge amount of stress on the metal, and if you keep making them, the wings will crack and fall off just like a WWI biplane's.

      So you can stuff that "pilot can't take it" line; it's partially true, but not really why they don't allow fighter planes to go above 4-5G unless it's wartime.
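      (The service-life claim is usually made precise with Miner's rule: each load cycle at stress level i consumes 1/N_i of the fatigue life, where N_i is the cycles-to-failure at that level, and failure is predicted when the consumed fractions sum to 1. A toy illustration in Python; the N_i values below are invented, not real airframe data:)

      # Toy Miner's-rule illustration: each cycle at load level i consumes
      # 1/N_i of the fatigue life (N_i = cycles-to-failure at that level);
      # failure is predicted when the fractions sum to 1. N_i values invented.
      cycles_to_failure = {4: 2_000_000, 6: 200_000, 8: 20_000}  # g-level -> N_i

      def life_consumed(flight_log):
          """flight_log maps a g level to the number of turns flown at it."""
          return sum(n / cycles_to_failure[g] for g, n in flight_log.items())

      peacetime = {4: 5_000, 6: 200}              # mostly gentle training turns
      wartime = {4: 5_000, 6: 200, 8: 500}        # add a few hundred 8G turns
      print(f"peacetime damage fraction: {life_consumed(peacetime):.4f}")  # 0.0035
      print(f"wartime damage fraction:   {life_consumed(wartime):.4f}")    # 0.0285

      (With these made-up numbers, five hundred 8G turns eat several times the fatigue life of thousands of gentler turns combined, which is the parent's point about wartime-only limits.)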

      • Re: (Score:2, Insightful)

        by drinkypoo ( 153816 )

        The airframes can't take 8G either.

        So they'll use the techniques BMW used to mass-produce the i3 to make carbon fiber drones that can take even more. Taking the pilot out of the equation saves volume that lets you make the craft smaller, and then you benefit from square-cube law instead of getting fucked by it.

  • by sjbe ( 173966 ) on Tuesday June 28, 2016 @07:33AM (#52404697)

    He took on the software in a simulator.

    So he was fighting in a computer game, not in a real jet and certainly not in real combat conditions. This is a limited scenario with limited conditions. Keep this in mind.

    And why is the US still throwing money at the F35, unless it can be flown without pilots.

    See above. There is a HUGE difference between a computer game and flying a real jet in combat conditions. We've had computer "AI" (using the term loosely) that could beat people at games for a long time. That isn't the same thing as having an AI that is ready for real-world combat, and it is even further from having an AI that is trustworthy on decisions of whether to shoot or not. To the best of my knowledge we do not presently have, nor are we likely to have any time soon, an AI that we can or should trust to make judgements about what to shoot or when to shoot it. It's not clear to me that we ever can or should take humans out of that loop. It might be necessary to take them out of the vehicle physically (what with us being bags of fluid and all), but we'd be idiots to trust any current AI with complete control of combat.

    Furthermore, an F-35 does a lot more than just dogfighting. In fact its primary role is likely to be air-to-ground combat far more often than air-to-air. That's why they call it a Strike Fighter. I'm not moving the goalposts here either. Yes, it is reasonable that a computer AI could outperform a human in air combat maneuvering, especially when the jet doesn't have a human on board with a human's physical limitations, particularly in relation to G-forces. We've had jets for decades that can generate more G-forces than a human can handle, and we've had to artificially limit them. The problem is that we still need humans in the loop for decision making, and for the most part that is a good thing. Even our drones don't shoot automatically, because we cannot trust them to make appropriate firing decisions in most cases.

    • Humans are out of the loop in planetary exploration, and most near Earth satellite work. Should humans always be involved in shoot-to-kill decisions? The writers of RoboCop 2014 think so.

      • Humans are out of the loop in planetary exploration, and most near Earth satellite work.

        No they are not. The humans issue the instructions and the computer on the remote vehicle executes them. The fact that there is some pretty severe latency on the execution of the instructions doesn't change anything. The robots aren't making any decisions about what to explore. Even far from Earth probes like New Horizons were simply executing a series of pre-programmed steps in a sequence determined by people and humans have been in communication with it since day one.

        Should humans always be involved in shoot-to-kill decisions?

        Yes. Absolutely yes. It is unethical.

        • Humans are out of the loop in planetary exploration, and most near Earth satellite work.

          No they are not. The humans issue the instructions and the computer on the remote vehicle executes them. The fact that there is some pretty severe latency on the execution of the instructions doesn't change anything. The robots aren't making any decisions about what to explore. Even far from Earth probes like New Horizons were simply executing a series of pre-programmed steps in a sequence determined by people and humans have been in communication with it since day one.

          A question of degrees. In 1969, human pilots were required, zero lag, for docking maneuvers... today, that can be fully automated. The extra-planetary probes make considerable decisions autonomously... we send a general instruction, they execute, but the instructions we send are becoming higher and higher level all the time. At some point, we may be sending a robot factory with general instructions to build enough robots to terraform 1000 sq km of surface for agriculture and deploy them to do that; those

    • To the best of my knowledge we do not presently have, nor are we likely to have any time soon, an AI that we can or should trust to make judgements about what to shoot or when to shoot it.

      That argument is ridiculous in every way because we do not have human pilots that we trust to make judgements about what to shoot or when to shoot it. They have to get permission before they engage an attacker, and they are given their ground targets before they even take off.

  • I've seen this movie.

  • Tower, this is Ghost Rider requesting a flyby.

  • by bkmoore ( 1910118 ) on Tuesday June 28, 2016 @08:19AM (#52404905)
    Since the First World War, most air-to-air kills were scored against opponents that did not see their attacker. The preferred tactic was to come out of the sun or attack from a blind spot. The Red Baron stated, "I get real close, pull the trigger, and he blows up," or something to that effect. An AI-piloted airplane would have this same limitation, as it would only be aware of what its sensors tell it. If you jam its on-board sensors and data-link capability, all that AI won't be worth anything. What this has to do with the F-35, I don't know. Unless it's just to flame an airplane that a lot of armchair experts don't like. There are lots of missions for a manned airplane, and "dogfighting" (or BFM) is a tactic and not a mission.
    • PS - we don't know how much information the AI had about its opponent in the simulator. Did the AI know its opponent's airspeed, AOA, throttle settings, etc.? That would give it an unfair advantage that it wouldn't have in a real-world engagement.
      • It is explained, to a large degree, in the serious paper a few links away from the story. [omicsgroup.org]
      • For current mission profiles, ALPHA's red forces are handicapped with shorter range missiles and a reduced missile payload than the blue opposing forces. ALPHA also does not have airborne warning and control system (AWACS) support providing 360° long range radar coverage of the area; while blue does have AWACS. The aircraft for both teams are identical in terms of their mechanical performance. While ALPHA has detailed knowledge of its own systems, it is given limited intelligence of the blue force a priori and must rely on its organic sensors for situational awareness (SA) of the blue force; even the number of hostile forces is not given.

  • How well does the AI perform against rules of engagement, or will it blindly fire at any object or structure because it identified a threat in said object or structure?

  • And why is the US still throwing money at the F35, unless it can be flown without pilots.

    So the same criminals that we're paying a king's ransom to in order to develop an aircraft that may not be able to dogfight effectively in real life (limited ammo supply for machine gun = don't miss!) will be able to charge us a second king's ransom to add the AI flight capability later.

    Military Contractor Business Plan Principle #1: Don't "volunteer" anything. If the customer wants a feature after the contract is already signed, it costs them extra.
