
DARPA Wants To Build 'Contextual' AI That Understands the World (venturebeat.com)

The Defense Advanced Research Projects Agency (DARPA), the division of the U.S. Department of Defense responsible for developing emerging technologies, is one of the birthplaces of machine learning, the branch of artificial intelligence (AI) whose best-known models loosely mimic the behavior of neurons in the brain. Dr. Brian Pierce, director of DARPA's Innovation Office, spoke about the agency's recent efforts at a VentureBeat summit. From the report: One area of study is so-called "common sense" AI -- AI that can draw on environmental cues and an understanding of the world to reason like a human. Concretely, DARPA's Machine Common Sense Program seeks to design computational models that mimic core domains of cognition: objects (intuitive physics), places (spatial navigation), and agents (intentional actors). "You could develop a classifier that could identify a number of objects in an image, but if you ask a question, you're not going to get an answer," Pierce said. "We'd like to get away from having an enormous amount of data to train neural networks [and] get away with using fewer labels [to] train models." The agency is also pursuing explainable AI (XAI), a field that aims to develop next-generation machine learning techniques that explain a given system's rationale. "[It] helps you to understand the bounds of the system, which can better inform the human user," Pierce said.
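One concrete reading of the "fewer labels" goal is semi-supervised learning, e.g. self-training, where a model's own confident predictions stand in for human annotations. A minimal sketch on synthetic data (a generic illustration of the idea, not anything DARPA has published):

    # Self-training sketch: learn two "object classes" from only 10 human labels.
    # Synthetic 2-D blobs stand in for image features; purely illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)

    # Pretend only 5 examples per class were ever labeled by a human.
    labeled = np.concatenate([rng.choice(500, 5, False), 500 + rng.choice(500, 5, False)])
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

    clf = LogisticRegression().fit(X[labeled], y[labeled])

    # Adopt the model's confident guesses on unlabeled data as pseudo-labels.
    proba = clf.predict_proba(X[unlabeled])
    confident = proba.max(axis=1) > 0.95
    X_aug = np.vstack([X[labeled], X[unlabeled][confident]])
    y_aug = np.concatenate([y[labeled], proba[confident].argmax(axis=1)])

    clf = LogisticRegression().fit(X_aug, y_aug)
    print("accuracy with 10 true labels:", clf.score(X, y))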
  • Please, can we get some better publishers... this story is a lame repost.

    • Re:Repost (Score:5, Interesting)

      by ShanghaiBill ( 739463 ) on Tuesday October 23, 2018 @03:23PM (#57525885)

      Cyc [wikipedia.org] has been working on this for decades (with poor results), and they have received DARPA funding. How is this "new" direction any different?

      • Also, isn't this headline basically just saying, "DARPA wants to do The Thing All Laypeople Think AI Is For"? (See also: every movie about AI ever made.)

        • by sycodon ( 149926 )

          Actually, what laypeople think AI is for is what AI should be.

          Autopilot stuff for your car isn't AI. It's just a control system.

          Now, if you were to muse out loud that you feel like a donut, and the Autopilot knew you were talking about a particular kind of pastry and not the spare "donut" tire in the trunk, then found a donut shop, pre-ordered, and pulled in, that might be AI.

          • No, that isn't Autonomous AI. That's programming.

            AI today is just math. It's a series of statistical probabilities and programmed actions based on those probabilities. AI doesn't "think". It calculates the probable 'correct' action, as determined by the programmer, and is programmed to act upon it.

            The difference between now and 50 years ago is that we now have the math and the CPU power to feed less and less up-front information into more complicated math problems and still have them determine the correct output.
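            To make "just math" concrete, here is a toy decision (hypothetical weights and action labels; no real system is this simple):

                # A linear model turns a sensor reading into class probabilities;
                # the "action" is just whatever the programmer mapped to the argmax.
                import numpy as np

                weights = np.array([[0.8, -0.3], [-0.5, 0.9]])   # learned from data beforehand
                bias = np.array([0.1, -0.1])
                x = np.array([1.2, 0.7])                         # some sensor reading

                logits = weights @ x + bias
                probs = np.exp(logits) / np.exp(logits).sum()    # softmax: statistical scores
                action = ["brake", "accelerate"][probs.argmax()] # programmed response to the math
                print(probs, "->", action)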

            • by sycodon ( 149926 )

              Well now we are getting into the realm of sentience.

            • by Kjella ( 173770 )

              If your boss told you to learn Go, handed you the rule book, and you became a master player, are you or your boss the smart one? Because that's what they did with AlphaGo Zero: it never saw a human play. The people who programmed it didn't know how to play beforehand, and they still didn't afterwards; apart from setting the ultimate goal of winning the game, they gave no input on what counts as a good or bad move or position. In the beginning it's stupid, it doesn't know what to do. It learns by playing itself, wi

              • You've actually proven my point. With AlphaGo Zero, the programmers gave the program inputs (the action space and the rules) and the desired output (winning the game) and used a learning algorithm that they made.

                That doesn't mean that AlphaGo Zero can suddenly decide it wants to lose for whatever reason. It has been programmed to win and it will try to win every time.
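                Concretely, that division of labor fits in a toy (one-pile Nim here; nothing like AlphaGo Zero's actual MCTS-plus-network setup): the humans supply the action space, the rules, the goal, and the update rule, and only the strategy is learned.

                    import random
                    from collections import defaultdict

                    # Rules: 10 stones, take 1-3 per turn, taking the last stone wins.
                    Q = defaultdict(float)          # Q[(stones_left, take)] -> value for the mover
                    alpha, eps = 0.5, 0.2

                    def choose(stones, greedy=False):
                        moves = [m for m in (1, 2, 3) if m <= stones]
                        if not greedy and random.random() < eps:
                            return random.choice(moves)          # exploration
                        return max(moves, key=lambda m: Q[(stones, m)])

                    for _ in range(20000):                       # self-play: it is its own opponent
                        stones, history = 10, []
                        while stones > 0:
                            m = choose(stones)
                            history.append((stones, m))
                            stones -= m
                        reward = 1.0                             # last mover won; alternate signs back
                        for state, move in reversed(history):
                            Q[(state, move)] += alpha * (reward - Q[(state, move)])
                            reward = -reward

                    # After training, greedy play leaves the opponent a multiple of 4 whenever it can.
                    print([choose(s, greedy=True) for s in range(1, 11)])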

                The human mind has a tendency to personify inanimate objects. It's how our brain works. The trick to autonomous AI isn't to make an AI actuall

            • AI today is just math. It's a series of statistical probabilities and programmed actions based on those probabilities. AI doesn't "think". It calculates the probable 'correct' action

              Your brain is also just doing math. There's a function from {sensory input, memory} to {actions, memory}. You don't "think". Your brain just calculates that function.
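              The claim, written as code (a cartoon, obviously; nobody has written down the brain's actual function):

                  # One step of "thinking" as a function: (sensory input, memory) -> (action, memory).
                  def brain_step(sensory_input, memory):
                      action = "flinch" if sensory_input == "loud noise" else "ignore"
                      return action, memory + [sensory_input]   # acting also updates memory

                  action, memory = brain_step("loud noise", [])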

              • "Your brain is also just doing math."

                The brain is doing a lot more than math.

                And even when it does do math, it doesn't do it anything like AIs do.

      • Re:Repost (Score:4, Informative)

        by sycodon ( 149926 ) on Tuesday October 23, 2018 @03:42PM (#57526021)

        They have been getting pretty good results and have several products [cyc.com]

        • They have been getting pretty good results and have several products [cyc.com]

          Their "products" consist of the knowledge database and inference engine. Making it actually do something useful is up to the customer.

          Using these products as evidence that they get results is sort of like saying a company is good at building houses because they sell hammers.

          Deep Learning (the opposite approach to AI) has many applications, including processing images of checks for the banking industry, face recognition in security applications, speech recognition and generation, fake porn, etc.

          What has Cyc

          • by Tom ( 822 )

            Their "products" consist of the knowledge database and inference engine. Making it actually do something useful is up to the customer.

            This.

            I've been following the Cyc program for almost 20 years. Well, for sufficiently lenient definitions of "following". But anyways... I was thrilled when they finally released a product. I was deeply disappointed when I saw what it was. And playing around with the free version a bit was even more disappointing.

            For the 30+ years that they've put into that, the results are ridiculous.

            Deep Learning (the opposite approach to AI) has many applications including,

            I disagree that it's opposite to AI.

            Deep learning ditches the "meaning" part and simply does large numbers statistics. That's a brut

            • Deep Learning (the opposite approach to AI) has many applications including,

              I disagree that it's opposite to AI.

              I meant that the deep learning approach to AI is the opposite of the Cyc approach to AI. Sorry if I wasn't clear.

              Cyc is trying to exhaustively list all the "common sense" facts about the world, by creating structured data with human effort.

              Deep Learning just randomizes some tensors and then feeds in data until the network figures out the "facts" on its own.

              They are completely opposite approaches.
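              Both caricatured in a few lines (toy illustrations; nothing like Cyc's actual CycL representation or a production network):

                  # Cyc-style: humans hand-write facts and rules; a generic engine chains them.
                  facts = {("Socrates", "is_a", "human")}
                  rules = [(("?x", "is_a", "human"), ("?x", "is_a", "mortal"))]

                  def infer(facts, rules):
                      derived = set(facts)
                      for (_, p, o), (_, p2, o2) in rules:
                          for (fs, fp, fo) in facts:
                              if fp == p and fo == o:           # pattern matches, bind ?x := fs
                                  derived.add((fs, p2, o2))
                      return derived

                  print(infer(facts, rules))                    # adds ("Socrates", "is_a", "mortal")

                  # Deep-learning-style: randomize a tensor, nudge it toward the data.
                  import numpy as np
                  rng = np.random.default_rng(0)
                  w = rng.normal(size=2)                        # "randomizes some tensors"
                  X = rng.normal(size=(100, 2))
                  y = X @ np.array([2.0, -1.0])                 # hidden "fact" to discover
                  for _ in range(200):                          # "feeds in data until it figures it out"
                      w -= 0.01 * (2 / len(X)) * X.T @ (X @ w - y)
                  print(w)                                      # ends up near [2, -1]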

              • by Tom ( 822 )

                They are different approaches, but you can find traces of each in the other. For example, once Cyc could read by itself, they gave it a lot of input and let it ask questions. That is not unlike training a backpropagation network.

      • while we're at it we should get that Newton schmuck to stop wasting his time on his "theory of gravity". I mean, if you can't show profitable results in a decade or two it's time to pack it in.
        • while we're at it we should get that Newton schmuck to stop wasting his time on his "theory of gravity". I mean, if you can't show profitable results in a decade or two it's time to pack it in.

          Poor analogy. Newton was able to explain and predict the elliptical orbits of the planets as soon as his theory was published. It was an instant success.

  • What we need first: (Score:3, Interesting)

    by Rick Schumann ( 4662797 ) on Tuesday October 23, 2018 @03:27PM (#57525913) Journal
    We need to understand how a human brain is capable of producing the phenomenon we refer to as 'thinking'.
    Before we can do that, we need to invent the instrumentality to actually be able to observe, in detail, how our own brains function; fMRI ain't cutting it, or we'd already have the answer to the above.
    Then, and only then, when we have the understanding, can we create machines that actually 'think'.
    What we have now just mimics a very small element of how a brain actually functions. Throwing faster processors and more memory at it won't make it magically 'wake up' and be like a human brain.
    I'm going to assume they understand all this since they seem to acknowledge that the current approach is insufficient and will be starting from square one for a new approach.
    • What you describe is the sure-fire way, but there's also no promise that the brain's functions can't be abstracted into something simpler and more appropriate for computers. I think trying to derive physicality is a smart idea. Give the computer what we have in terms of stereo video feeds and audio, accelerometers and actuation, and take it from there. The name of the game is a radical reduction of input info into a cohesive physical world model.

    • If you want it to work exactly like a human sure. But if all you want is the same result there could be a lot of ways to do it. "There is more than one way to skin a cat."
      • But see we have NO IDEA how it is we 'think'. It's still a mystery mainly because we don't have a sufficient way to 'see' how our brains function.
        • by mikael ( 484 )

          Studies have been done. The simplest ones involve working with people who have suffered brain damage from strokes and other accidents. That usually knocks out a region of the brain or two. Then the researchers can study how it affects the thinking process. Some people lose short-term memory - they can remember everything before their accident, but after that, they need a diary to keep track of what happened 10 minutes ago.

          Others lose the ability to construct long sentences - the guy who made the "Kinder sur

          • If we know SO MUCH about how our brains produce the phenomena of 'thought', 'cognition', 'consciousness', and so on, then why do we not have machines that can do that? Because we DO NOT KNOW how these things work. You cannot refute that fact.
        • We also don't know everything about insect flight. Yet, we still have airplanes. In many ways, our airplanes are superior to insects.

          • Your argument is totally and completely irrelevant and invalid. Insect flight is a purely physical thing that is easily defined; how a human brain actually works is clearly and objectively NOT, otherwise we'd already have machines that work just like our brains do. I get accused by some shitty AC of being arrogant, yet there are clearly those of you who are so overweeningly arrogant as to think we've got this subject all figured out already and know all there is to know, but we clearly and objectively know next to
    • This reminds me of a quote:

      “If our brains were simple enough for us to understand them, we'd be so simple that we couldn't.”
      - Ian Stewart

    • People are doing research into monitoring actual brains, but it's very hard to do. It's not a good idea to wait until they are done and have it all figured out when you can already work on real world applications with the knowledge we have.

      What we have now just mimicks a very small element of how a brain actually functions. Throwing faster processors and more memory at it won't make it magically 'wake up' and be like a human brain.

      I wouldn't worry about 'waking up'. Throwing faster processors, more memory, but also better methods, will make it capable of solving increasingly difficult problems. That's all we need. For many applications, AI is already ahead of human brains. AI doesn't get tired or d

  • Insert your "It's just a bunch of if statements..." joke here.
  • If they DO succeed, do you think I could get it to explain the world to me ? I've been here quite a while and I have not yet arrived at a suitable understanding myself.

    • Yeah, you're onto something important here. What happens when this AI comes to conclusions about reality that are unexpected and hostile?

  • can it run Linux?
  • The moment you teach a robot to understand the world is the moment it turns evil.

  • It must have lots of tape drives and blinking lights, be housed at Cheyenne mountain and named Joshua.
  • When we say "understand the world" we pretty much just want a 'terrorist'/'non-terrorist' breakdown; missiles aren't cheap.
    • Basically.
      It's easier to get people to trust this machine, then go "Oh well, it told us these were really really bad people, so we bombed them."

  • ... blend?
  • Broad classification is based on lots of previous experience in a context. Current training uses TONS of non-contiguous snapshot images with class labels attached. While that could be viewed as similar to how humans work, if you squint at it... I think seeing the world work over time, and learning while you do it, is the only way to get close to what we might think of as human-level classification. And while their desire to use less training input would be nice, I don't think that would be expected to imp

  • So is everybody else. But maybe they are wrong, and DARPA will create an A.I. with common sense.
  • Comment removed based on user account deletion
  • They have wanted this for decades. But reality and hype are two different things.
