AI Education Software The Military United States Technology

The US Military Wants To Teach AI Some Basic Common Sense (technologyreview.com) 94

DARPA, the research arm of the U.S. military, has a new Machine Common Sense (MCS) program that will run a competition that asks AI algorithms to make sense of questions with common sense answers. For example, here's one of the questions: "A student puts two identical plants in the same type and amount of soil. She gives them the same amount of water. She puts one of these plants near a window and the other in a dark room. The plant near the window will produce more (A) oxygen (B) carbon dioxide (C) water." MIT Technology Review reports: A computer program needs some understanding of the way photosynthesis works in order to tackle the question. Simply feeding a machine lots of previous questions won't solve the problem reliably. These benchmarks will focus on language because it can so easily trip machines up, and because it makes testing relatively straightforward. Etzioni says the questions offer a way to measure progress toward common-sense understanding, which will be crucial. [...] Previous attempts to help machines understand the world have focused on building large knowledge databases by hand. This is an unwieldy and essentially never-ending task. The most famous such effort is Cyc, a project that has been in the works for decades. "The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences," Dave Gunning, a program manager at DARPA, said in a statement issued this morning. "This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future."
  • by nospam007 ( 722110 ) * on Thursday October 11, 2018 @05:25PM (#57463762)

    Good luck with that.
    Common sense isn't very common I'm afraid.

    • by mlheur ( 212082 )

      1) I've recently started referring to it as uncommon sense.
      2) Humans can't do common sense so we need computers to do it for us?
      3) We can't do common sense, what makes us think we can teach computers to do it?

      Easy to see where this won't go...

    • by Aighearach ( 97333 ) on Thursday October 11, 2018 @06:29PM (#57464102)

      Yeah, even in the attempted example question, the "common sense" answer is A, but the "actual fact" answer is A and B. A during the day, B at night. C is most likely also true in real scenarios.

      My goal in writing automated systems is to make fewer of the mistakes known by the moniker "common sense," not to make more of them.

      If you lack information and are forced into action, "common sense" might be a decent least-bad semi-random choice, but it should never be expected to be correct or optimal.

    • by gweihir ( 88907 )

      Good luck with that.
      Common sense isn't very common I'm afraid.

      Indeed. And in particular, the military is the last place where you find it.

    • Actually it's much easier than you think.

      Common sense in machine learning is accomplished using a Gaussian distribution (bell curve). You can even do it in databases using standard deviation functions.
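      For what it's worth, if the parent means flagging values that fall outside the usual spread of observed data, that amounts to a simple standard-deviation outlier test; a minimal sketch in Python, with made-up numbers and a made-up threshold:

```python
import statistics

def plausible(history, value, k=3.0):
    """Treat a value as plausible if it lies within k standard
    deviations of the mean of previously observed values."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) <= k * sigma

# Hypothetical daily temperatures (deg C) observed so far:
seen = [18, 21, 19, 22, 20, 23, 19]
print(plausible(seen, 21))   # True: an unremarkable reading
print(plausible(seen, 400))  # False: rejected as implausible
```

      The same check maps onto AVG/STDDEV aggregate functions in SQL, which is presumably what the parent means by doing it in databases.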
    • Of course it is. Common sense is merely the outcome of your own personal prejudices and biases.

      It's common sense that harsh prison sentences deter crime: nobody wants to go back to that, so they won't do bad things. Unfortunately, real-world data suggests that crime is a product of many, many factors, largely poverty, abuse, and a lack of social mobility; and that harsh prison environments and long sentences drive people to be better criminals who learn to evade punishment for their crimes.

      Normalizin

  • by Lab Rat Jason ( 2495638 ) on Thursday October 11, 2018 @05:41PM (#57463850)

    That doesn't sound like common sense... Imagine an adult that had dropped out of school at a young age... they may not be able to answer the given question. Knowing that plants produce oxygen is something learned. I would put common sense more in the realm of: if you spit in someone's face, will they: a) hug you, b) get angry, c) eat a sandwich? Common sense is sense acquired through common experience, not schooling.

    • by Calydor ( 739835 )

      Yeah, I'm not sure how they went from 'common sense' to 'effects of photosynthesis'.

      • by bugs2squash ( 1132591 ) on Thursday October 11, 2018 @05:59PM (#57463970)

        I always thought that a reasonable definition of common sense was the set of rules you fall back on when you don't have sufficient specific knowledge to address the issue at hand. For example, you may not specifically know how to repair your car, so common sense tells you to seek help from someone more qualified.

        As you say, there is no common sense that can be applied to plants and sunlight: you either know about the process or you don't. A system applying common sense would defer to a botanist or refer to some reference material to improve its skillset, or some other such thing.

        Now that's not to say that you can't infer from other data that perhaps it takes more energy to produce O2 than CO2 and guess that the light might be such an energy source, but at this point you're falling back on specialist knowledge that it either has or it lacks.

        • by rtb61 ( 674572 )

          Common sense is just common knowledge. How common it is depends entirely on the society that produces it. So common sense for a nomadic society is how to track animals; common sense for an advanced society, how to read and write typically used words and sentences.

          Teaching an AI common sense is stupid; common sense for a computer is how to talk to another computer.

          What they are after is speech parsing algorithms: how to translate speech, written or oral, into logical digital patterns. For a star

        • by Shaitan ( 22585 )

          "For example, you may not specifically know how to repair your car, so common sense tells you to seek help from someone more qualified."

          That is in no small part what separates genuine intelligence from mediocrity. The mediocre seek the most qualified to provide them answers, the exceptional constantly strive to improve their qualification. Not just about cars, about everything. Only then do you become qualified to judge the qualification of another who you might delegate the actual labor to.

  • by JBMcB ( 73720 ) on Thursday October 11, 2018 @05:42PM (#57463862)

    ...for the last 30 years. Nobody has cracked it yet. This "common sense" bit is what prevents "true" AI from becoming a reality. What we have now are glorified expert systems and pattern matching algorithms.

    • by gweihir ( 88907 )

      Looks very much like anything requiring insight is not accessible to computing machinery. For anybody with some understanding of the problem, that is no surprise. It is highly doubtful insight and intelligence can actually work without self-awareness and free will (yes, I know some neuro-morons claim their faulty experiments show there is no such thing), as that is the only place where we observe it.

      Sure, physicalists (basically a fundamentalist cult) claim that everything is physical, and hence strong AI should be possible, but that is belief, not science. They consistently fail to explain consciousness except by calling it magic (in other words), for example. There is also the little problem that they think they can get intelligence and insight without free will. There is no indication that is even possible, and there certainly is no example of that in nature.

      • by Shaitan ( 22585 )

        "Sure, physicalists (basically a fundamentalist cult) claim that everything is physical, and hence strong AI should be possible, but that is belief, not science. They consistently fail to explain consciousness except by calling it magic (in other words), for example. There is also the little problem that they think they can get intelligence and insight without free will. There is no indication that is even possible, and there certainly is no example of that in nature."

        I don't see how the first two sentences

        • by gweihir ( 88907 )

          I can see that you "don't see". Your argument is completely out of sync with current AI research results. "Hysterical fear mongering" has absolutely nothing to do with it. AI researchers do not have any clue at all how what you describe could be implemented. Last time (2017) I talked to a high-profile AI expert in a non-public setting, his immediate statement was "not in the next 50 years". That is science-speak for "we have nothing". (Yes, I am a scientist and I have been following AI research for about 30 years.)

          • by Shaitan ( 22585 )

            That is a bit of a misnomer: not really having any idea doesn't mean much when there really isn't any motivation, due to little to no commercial applicability and the ethical challenges.

            Hell, I built a self-scoring AI combined with an evolutionary algorithm (and a bit of other secret sauce) in just a few days, with nothing more than what I'd read in high-level descriptions and my own ideas about how life works, about six years back. It isn't exactly rocket science. The only difference between AI now and

      • It is highly doubtful insight and intelligence can actually work without self-awareness and free will

        Actually, insight can work without self-awareness and free will on a restricted domain: a machine is capable of identifying a particular pattern in outcomes and exploiting it without specific programming to do that, so long as it is designed to examine the specific data and seek specific outcomes by modifying specific actions to cause useful state changes.

        General insight and intelligence, however, don't work without self-awareness and free will. General intelligence is, by definition, general: you c
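        The restricted-domain case above can be made concrete with a toy epsilon-greedy bandit: nothing in the program says which option pays best, yet it finds and exploits the pattern in outcomes by modifying its own actions. A sketch with invented payout probabilities:

```python
import random

def run_bandit(payouts, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: nothing tells the program which arm
    pays best; it finds the pattern by tracking average reward per
    arm and exploiting the leader, exploring at random eps of the time."""
    rng = random.Random(seed)
    counts = [0] * len(payouts)
    totals = [0.0] * len(payouts)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(payouts))   # explore a random arm
        else:
            avgs = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            arm = avgs.index(max(avgs))         # exploit the best so far
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts.index(max(counts))  # the arm it settled on

# Hypothetical slot machines with win probabilities 0.2, 0.5, 0.8:
print(run_bandit([0.2, 0.5, 0.8]))
```

        After a few thousand pulls the program has settled on the highest-paying arm, without any specific programming naming it.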

        • by gweihir ( 88907 )

          Well, I do know that some people have started to call things that are not "general intelligence" "intelligence". That makes absolutely no sense from a scientific point of view but helps with marketing. In any scientific context, assume that "intelligence" means "general intelligence", as there really is nothing that would qualify as "non-general intelligence". The "general" is at the very core of the idea of "intelligence". The rest is automation, statistical classification, planning algorithms, etc., but not in any se

          • Some have called such things "machine learning".

            • by gweihir ( 88907 )

              "Machine learning" is pretty much a nonsensical marketing-term as well. There is no learning in machines, just automatic parameter adjustment in fixed algorithms. Actual learning requires insight, this is just training. Although, even when applied to humans this term is misused and a lot of things that are really "training" are called "learning". It is not as bad as "non-general intelligence" though. The whole thing is about anthropomorphizing machines or "machinizing" humans (by the physicalist morons). Pr

              • There is no learning in machines, just automatic parameter adjustment in fixed algorithms.

                Yes, that's how humans work.
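                And "automatic parameter adjustment in fixed algorithms" looks, concretely, like plain gradient descent; a minimal sketch fitting a line, with noise-free data made up from y = 3x + 1:

```python
def fit_line(xs, ys, lr=0.01, steps=2000):
    """A fixed update rule that nudges two parameters (w, b)
    downhill on the mean squared error -- the 'adjustment' the
    grandparent describes, with no insight anywhere in sight."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Made-up data from y = 3x + 1:
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # prints 3.0 1.0
```

                Whether one calls that "learning" or "training" is exactly the terminological dispute above.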

    • The biggest question IMO is "how are memories stored in the human mind?" It is an important question because the way things are stored changes the kind of algorithms that can be efficiently used (a hash table allows quick access but slow sorting, unlike a tree, for example).
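      The access-versus-ordering tradeoff mentioned above is easy to demonstrate; a small sketch contrasting a dict (hash table) with a sorted list kept in order via bisection, standing in for a tree:

```python
import bisect

# Hash table (dict): constant-time lookup, but keys come back
# unordered, so an ordered listing costs a full O(n log n) sort.
table = {}
for k in [42, 7, 19, 3]:
    table[k] = True
print(7 in table)      # fast membership test -> True
print(sorted(table))   # [3, 7, 19, 42], but only after sorting

# Sorted list (tree stand-in): lookup is O(log n) via bisection,
# and an in-order listing is free because order is maintained.
keys = []
for k in [42, 7, 19, 3]:
    bisect.insort(keys, k)
print(keys)            # already sorted: [3, 7, 19, 42]
i = bisect.bisect_left(keys, 7)
print(keys[i] == 7)    # membership test -> True
```

      If memories were stored one way rather than the other, very different retrieval algorithms would be efficient, which is the point of the question.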
      • The two issues I see holding machines back, that are inherent in the human mind, are:

        1. In the brain the "memory" and "logic" hardware are mixed together. There are no separate areas of the brain, they are spread out and combined all over the place, except for some specialized structures for handling visual processing and autonomic systems.

        2. The brain can create arbitrary connections between different parts of itself, making it, essentially, *massively* parallel with no set routes between areas. Computers

      • The biggest question IMO is "how are memories stored in the human mind?"

        The physical data structure in humans is a Hopfield network.

        • And indeed the physical structure of everything in a computer is an array of memory (at least, the logical structure). And yet that doesn't get us any closer to the answer of how memories are stored and retrieved.
          • Thing is, we know how a Hopfield network stores and retrieves memories: neurons activate based on stimulus inputs, and they respond to these inputs based on prior training. If the prior training suggests an output, they emit an output. Arrays of these chain down and converge onto a target vector, which produces its own output. When a particular memory is better-associated, those activations cause the activation of more neurons, which increases the chances of reaching the correct target vector in the end.

            Hopfie
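            For anyone unfamiliar with the term, the store-and-recall behavior described above can be sketched as a minimal pure-Python Hopfield network (the stored pattern is chosen arbitrarily):

```python
def train_hopfield(patterns):
    """Hebbian training: the weight between units i and j is the sum,
    over stored patterns, of the product of their i-th and j-th bits
    (+1/-1). No self-connections."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    """Each unit fires toward agreement with the weighted sum of the
    others, pulling the state toward the nearest stored pattern."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = train_hopfield([stored])
noisy = stored[:]
noisy[3] = 1  # corrupt one bit
print(recall(W, noisy) == stored)  # True: the stored pattern is recovered
```

            A one-bit-corrupted probe snaps back to the stored pattern because the weighted majority at every unit still points the right way, which is the "better-associated memories win" behavior described above.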

            • That's all pretty vague. There are lots of different types of neurons, too [wikipedia.org]
              • It's a pretty vague process. Basically, a neuron has many outputs; it may have an input, or it may be self-firing (e.g. a neuron may be GABA-mediated such that it simply fires more rapidly if there is less chloride traveling across a particular ion channel and less rapidly when GABA binds to that ion channel and draws more chloride into the cell; or it may be GABA-mediated such that it fires with less input if there is less chloride, etc.). Really, that's about it.

                The neuron's outputs go to other neuron

  • They're asking for common knowledge, not common sense.

    Instilling common knowledge into an intelligence takes approximately 18 years. You're allowed 8 hours per day of offline processing. Approximately 17% of intelligences can be expected to fail to complete the training.

    Good luck.

    • by gweihir ( 88907 )

      17% fail? Looks to me more like around 17% succeed, the rest just builds up a dictionary of observed behaviors without any understanding.

  • Define "Common sense".


    (And after that I'd like to see them code whatever the hell they come up with.)
    • Define "Common sense".

      Common sense is whatever the ignorant utter when they hear experts talking about details they don't understand.

      Just like, a paradox is something that you refuse to accept that you don't understand, and so remain confused about even after you found out that the part you thought you understood was actually wrong.

      And irony is comedy based on irrational expectations predictably not matching reality.

      Half the people are below average. Common sense is that wisdom that is common even to most of them. It should be n

  • by decipher_saint ( 72686 ) on Thursday October 11, 2018 @05:56PM (#57463948)

    Aim the noisy end away from face (or whatever the AI equivalent is)

    • Generalized to AI it would be, "aim the energetic end towards negative infinity on the friend axis."

      But for contemporary electronic machines a better simplification might be just to measure (predicted) heat, and aim the hot end towards -1

  • by hey! ( 33014 ) on Thursday October 11, 2018 @06:11PM (#57464010) Homepage Journal

    ... is that it could plausibly have been picked out from tech news headlines from 35 years ago, when I was in school. And I wouldn't be the least surprised if it crops up again 35 years from now.

    The rich contextual knowledge humans have of the world has been the clear advantage we have over software ever since AI researchers were doing the digital equivalent of banging rocks together. I remember being awed by SHRDLU's ability to interact with people so long as you pared all context away and you restricted yourself to an artificial, constructed world.

    Logic, after all, is only as good as the propositions you feed it. The illogic of human reasoning is both our Achilles' heel and our greatest strength. Our familiarity with the world makes us reject conclusions which are logically valid, but just seem wrong. This is often wrong, and when it is we call it a "cognitive bias". But it's often right, too, and when it is we call it "common sense". Same mechanism, different words.

    • Our familiarity with the world makes us reject conclusions which are logically valid, but just seem wrong. This is often wrong, and when it is we call it a "cognitive bias". But it's often right, too, and when it is we call it "common sense". Same mechanism, different words.

      The problem is that you presume there is some external measure of correctness that tells us the difference.

      But in reality, what is called "common sense" is usually cognitive bias, and what is called "cognitive bias" is often also just a correct observation made by the "wrong" person. They might actually just be synonyms for unsupported crap that is usually wrong.

    • by gweihir ( 88907 )

      I agree. The problem is basically intractable (at least with digital computers) as even the dumbest human has this quality of "understanding" that completely eludes software. Well, "understanding" requires consciousness for the moment of insight, and we do not even know what that is. (No we don't, go away physicalist idiots. You are a religious cult.)

    • by Shaitan ( 22585 )

      The problem now isn't that we can't build a generalist AI; the problem is that it is difficult to measure the success of an implementation, since you are no longer defining its goals, and a generalist AI of high complexity will not necessarily rapidly show consistency and progress on logic tasks. Even the brain of a child has immense raw processing power, but it takes a great deal of time to train basic addition. We are testing our AIs with dramatically less processing power and looking for them to solve bas

      • by hey! ( 33014 )

        I think the problem is there isn't the economic motivation to shoulder the enormous cost of development. And once you had it, all you'd have done is prove a point; what you'd get is in effect a bizarre person, and we've got plenty of those already.

        • by Shaitan ( 22585 )

          A bizarre person who can be snapshotted and restored, eats silicon and electricity from any source, who never reaches retirement age (although I tend to think some of the entropy issues that lead to Alzheimer's and senility might be statistically inevitable, whether for a long-lived AI or a human). This AI can be a companion, can work and take care of humans, can be raised by humans who can't have children, etc. Also, recognizing animal intelligence doesn't mean we don't still consider ourselves superior purely b

  • Sounds like they want to crack the Winograd Schema Challenge, i.e. questions with linguistic ambiguities that require the reader to resolve the ambiguities by referring to relevant background information: https://cs.nyu.edu/faculty/dav... [nyu.edu]
  • Robots start shooting themselves in the foot to get out of the army.

  • Douglas Lenat has been working on this common sense problem for years.
  • The example "question" doesn't actually ask the AI a question. It just says the plant by the window will produce more (A) oxygen (B) carbon dioxide (C) water. That's a statement of fact and not a question. Over time it's true because the plant in the dark will die and produce nothing.

    Besides, why do they want to introduce common sense into the military? They spend a couple of months knocking it out of every person when they first enlist.

  • ... a good example was what happened to videogames over the last 20 years. Once high-speed internet was everywhere, the entire industry was literally able to steal game software and hold it back on their servers, because the average person did not have the "common sense" that corporations will do anything to make a buck, including trying to take away the software you are paying for in order to charge you more.

  • by Tom ( 822 ) on Friday October 12, 2018 @05:10AM (#57466008) Homepage Journal

    Didn't we already have this 30 years ago? It was called Cyc, a program of the U of Texas, if I recall correctly, and it had exactly this goal, except that they called it "general background knowledge" and not "common sense".

    As I recall, the software eventually could read and understand newspaper articles, but didn't progress beyond the understanding of a pre-teen child.

  • The last people you should do this for are the military... of any nation.
