The US Military Wants To Teach AI Some Basic Common Sense (technologyreview.com)
DARPA, the research arm of the U.S. military, has a new Machine Common Sense (MCS) program that will run a competition that asks AI algorithms to make sense of questions with common sense answers. For example, here's one of the questions: "A student puts two identical plants in the same type and amount of soil. She gives them the same amount of water. She puts one of these plants near a window and the other in a dark room. The plant near the window will produce more (A) oxygen (B) carbon dioxide (C) water." MIT Technology Review reports: A computer program needs some understanding of the way photosynthesis works in order to tackle the question. Simply feeding a machine lots of previous questions won't solve the problem reliably. These benchmarks will focus on language because it can so easily trip machines up, and because it makes testing relatively straightforward. Etzioni says the questions offer a way to measure progress toward common-sense understanding, which will be crucial. [...] Previous attempts to help machines understand the world have focused on building large knowledge databases by hand. This is an unwieldy and essentially never-ending task. The most famous such effort is Cyc, a project that has been in the works for decades. "The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences," Dave Gunning, a program manager at DARPA, said in a statement issued this morning. "This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future."
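To make the benchmark format concrete, here is a toy sketch of a multiple-choice scorer: a question, three options, and a naive word-overlap score against a single hand-written "knowledge" sentence. This is not how DARPA evaluates entries; the knowledge string and the scoring heuristic are invented purely for illustration.

```python
# Toy multiple-choice answerer: pick the option that shares the most
# words with a hand-written knowledge sentence. The knowledge text and
# the overlap heuristic are illustrative assumptions, not MCS internals.

def pick_answer(question, options, knowledge):
    """Return the option sharing the most words with the knowledge text."""
    know_words = set(knowledge.lower().split())

    def overlap(option):
        return len(set(option.lower().split()) & know_words)

    return max(options, key=overlap)

question = ("A student puts two identical plants in the same type and "
            "amount of soil... The plant near the window will produce more")
options = ["oxygen", "carbon dioxide", "water"]
knowledge = "plants in light perform photosynthesis and release oxygen"

print(pick_answer(question, options, knowledge))  # oxygen
```

Of course, this only "works" because the right fact was hand-fed to it, which is exactly the unwieldy knowledge-base approach the article says doesn't scale.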
Teach AI Some Basic Common Sense (Score:4, Insightful)
Good luck with that.
Common sense isn't very common I'm afraid.
Re: (Score:2)
1) I've recently started referring to it as uncommon sense.
2) Humans can't do common sense, so we need computers to do it for us?
3) We can't do common sense, what makes us think we can teach computers to do it?
Easy to see where this won't go...
Re:Teach AI Some Basic Common Sense (Score:5, Interesting)
Yeah, even in the attempted example question, the "common sense" answer is A, but the "actual fact" answer is A and B. A during the day, B at night. C is most likely also true in real scenarios.
My goal in writing automated systems is to make fewer of the mistakes known by the moniker "common sense," not to make more of them.
If you lack information and are forced into action, "common sense" might be a decent least-bad semi-random choice, but it should never be expected to be correct or optimal.
Re: (Score:2)
Good luck with that.
Common sense isn't very common I'm afraid.
Indeed. And in particular, the military is the last place where you find it.
Re: (Score:3)
Common sense in machine learning is accomplished using a Gaussian distribution (bell curve). You can even do it in databases using standard deviation functions.
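A minimal sketch of what the parent might mean: assume the data is roughly Gaussian and flag anything more than k standard deviations from the mean, the same thing a database's standard-deviation functions let you do in SQL. The sample data and the threshold k=2 are arbitrary choices for illustration.

```python
# Sketch of "common sense" as a Gaussian assumption: values far outside
# the bell curve are flagged as surprising. Data and k are made up.
import statistics

def outliers(values, k=2.0):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    return [v for v in values if abs(v - mean) > k * stdev]

readings = [10, 11, 9, 10, 12, 11, 10, 95]
print(outliers(readings))  # [95]
```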
Re: Teach AI Some Basic Common Sense (Score:2)
Works, provided your question really is one question. The example given is actually three independent questions, each with a different bell curve.
Re: (Score:2)
Of course it is. Common sense is merely the outcome of your own personal prejudices and biases.
It's common sense that harsh prison sentences deter crime: nobody wants to go back to that, so they won't do bad things. Unfortunately, real-world data suggests that crime is a product of many, many factors, largely poverty, abuse, and a lack of social mobility; and that harsh prison environments and long sentences drive people to be better criminals who learn to evade punishment for their crimes.
Normalizin
That doesn't sound like common sense (Score:4, Insightful)
That doesn't sound like common sense... Imagine an adult that had dropped out of school at a young age... they may not be able to answer the given question. Knowing that plants produce oxygen is something learned. I would put common sense more in the realm of: if you spit in someone's face, will they: a) hug you, b) get angry, c) eat a sandwich? Common sense is sense acquired through common experience, not schooling.
Re: (Score:2)
Yeah, I'm not sure how they went from 'common sense' to 'effects of photosynthesis'.
Re:That doesn't sound like common sense (Score:4, Interesting)
I always thought that a reasonable definition of common sense was the set of rules you fall back on when you don't have sufficient specific knowledge to address the issue at hand. For example, you may not specifically know how to repair your car, so common sense tells you to seek help from someone more qualified.
As you say, there is no common sense that can be applied to plants and sunlight: you either know about the process or you don't. A system applying common sense would defer to a botanist, or refer to some reference material to improve its skillset, or some other such thing.
Now that's not to say that you can't infer from other data that perhaps it takes more energy to produce O2 than CO2 and guess that the light might be such an energy source, but at this point you're falling back on specialist knowledge that it either has or it lacks.
Re: (Score:2)
Common sense is just common knowledge. How common it is is entirely subject to the society that produces it. So common sense for a nomadic society is how to track animals; common sense for an advanced society is how to read and write typically used words and sentences.
Teaching an AI common sense is stupid, common sense for a computer, how to talk to another computer.
What they are after is speech parsing algorithms. How to translate speech, written or oral, into logical digital patterns. For a star
Re: (Score:2)
"For example, you may not specifically know how to repair your car, so common sense tells you to seek help from someone more qualified."
That is in no small part what separates genuine intelligence from mediocrity. The mediocre seek the most qualified to provide them answers, the exceptional constantly strive to improve their qualification. Not just about cars, about everything. Only then do you become qualified to judge the qualification of another who you might delegate the actual labor to.
Re: (Score:2, Funny)
(e) Develop a taste for human saliva, causing it to enslave all humans and farm them for saliva.
Re: (Score:2)
Possibly B, if it is a Venus Flytrap or other carnivorous plant. Especially if before spitting you were drinking Earth Sugar Drink.
And expecting an AI to agree with humans about the applicability of C is perhaps dubious. If you reduced photosynthesis there might be chemical markers released that signify the plant is aware of non-optimal conditions, and it might even wave its branches around trying to find the sun. There is not really much reason to assume that an AI would see this as being different than the condi
They've been trying to... (Score:5, Insightful)
...for the last 30 years. Nobody has cracked it yet. This "common sense" bit is what prevents "true" AI from becoming a reality. What we have now are glorified expert systems and pattern matching algorithms.
Re:They've been trying to... (Score:5, Insightful)
Well, that happened. Bots now routinely pass that test.
It has not. What has happened is that bots can successfully claim to be some limited form of human being (very young and/or with serious mental defects) in a very restricted topic area and a very time-limited conversation. The general Turing test is unsolved.
Re: (Score:1)
Chatbots pass no test. They may seem to do fine for a while, but they are too easy to trip up. Some bots have enough "backstory" that you can't stump them simply by asking where they grew up or if their first schoolteacher was female. And they use well-formulated sentences. But they have almost no memory for new information. Most respond only to the previous sentence. Here is how to trip them up:
1. Tell the chatbot something sensational: your wife has three tits. Get it to agree that this is special indeed.
2. C
Re: (Score:3)
30 years ago, we thought that if a machine could pass the Turing test we'd call it intelligent. Well, that happened. Bots now routinely pass that test.
Bots don't pass the Turing test. Every time that happens, look at the details and you'll see that the experiment is done poorly. One common technique is to have the tester guess whether they are talking to a human or a computer: but the human uses limited responses, acting somewhat like a computer.
Re: (Score:3)
Machines aren't intelligent. Look at humans: the brain is a neural network, and human memory is physically a Hopfield network with associative references (and yes, an artificial neural network has the same problem with easily storing things but then not being able to find them again later--with the same solution being that it's easier to recall things if you activate more neurons, so artificial neural networks can better remember things by using mind palaces, pegs, and simple rumination to reflect on how things
Re: (Score:3)
Looks very much like anything requiring insight is not accessible to computing machinery. For anybody with some understanding of the problem, that is no surprise. It is highly doubtful insight and intelligence can actually work without self-awareness and free will (yes, I know some neuro-morons claim their faulty experiments show there is no such thing), as that is the only place where we observe it.
Sure, physicalists (basically a fundamentalist cult) claim that everything is physical, and hence strong AI sh
Re: (Score:1)
"Sure, physicalists (basically a fundamentalist cult) claim that everything is physical, and hence strong AI should be possible, but that is belief, not science. They consistently fail to explain consciousness except by calling it magic (in other words), for example. There is also the little problem that they think they can get intelligence and insight without free will. There is no indication that is even possible, and there certainly is no example of that in nature."
I don't see how the first two sentences
Re: (Score:2)
Re: (Score:2)
It does not. It would if AI researchers were seeing a credible approach to create strong AI anywhere on the distant horizon. They do not. There is absolutely nothing. The whole public discussion is completely baseless.
Re: (Score:2)
I can see that you "don't see". Your argument is completely out of sync with current AI research results. "Hysterical fear mongering" has absolutely nothing to do with it. AI researchers do not have any clue at all how what you describe could be implemented. Last time (2017) I talked to a high-profile AI expert in a non-public setting, his immediate statement was "not in the next 50 years". That is science-speak for "we have nothing". (Yes, I am a scientist and I have been following AI research for about 30
Re: (Score:1)
That is a bit of a misnomer: not really having any idea doesn't mean much when there really isn't any motivation, due to little to no commercial applicability and the ethical challenges.
Hell, I built a self-scoring AI combined with an evolutionary algorithm (and a bit of other secret sauce) in just a few days, with nothing more than what I'd read in high-level descriptions and my own ideas about how life works, about six years back. It isn't exactly rocket science. The only difference between AI now and
Re: (Score:2)
We have examples of naturally occurring intelligences without free will
That's not a valid description of autism.
Re: (Score:2)
Re: (Score:2)
We have examples of naturally occurring intelligences without free will
Oh? What do you know that the entire AI research field has missed for a few decades?
Re: (Score:2)
It is highly doubtful insight and intelligence can actually work without self-awareness and free will
Actually, insight can work without self-awareness and free will on a restricted domain: a machine is capable of identifying a particular pattern in outcomes and exploiting it without specific programming to do that, so long as it is designed to examine the specific data and seek specific outcomes by modifying specific actions to cause useful state changes.
General insight and intelligence, however, doesn't work without self-awareness and free will. General intelligence is, by definition, general: you c
Re: (Score:2)
Well, I do know that some people have started to call things that are not "general intelligence" "intelligence". That makes absolutely no sense from a scientific point of view but helps with marketing. In any scientific context, assume that "intelligence" means "general intelligence", as there really is nothing that would qualify as "non-general intelligence". The "general" is at the very core of the idea of "intelligence". The rest is automation, statistical classification, planning algorithms, etc., but not in any se
Re: (Score:2)
Some have called such things "machine learning".
Re: (Score:2)
"Machine learning" is pretty much a nonsensical marketing-term as well. There is no learning in machines, just automatic parameter adjustment in fixed algorithms. Actual learning requires insight, this is just training. Although, even when applied to humans this term is misused and a lot of things that are really "training" are called "learning". It is not as bad as "non-general intelligence" though. The whole thing is about anthropomorphizing machines or "machinizing" humans (by the physicalist morons). Pr
Re: (Score:2)
There is no learning in machines, just automatic parameter adjustment in fixed algorithms.
Yes, that's how humans work.
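The "automatic parameter adjustment in fixed algorithms" the thread is arguing about can be shown in miniature: gradient descent fitting y = w * x. The algorithm is fixed; only the parameter w changes. The data and learning rate here are made up for illustration.

```python
# Fixed algorithm, adjustable parameter: gradient descent on the slope w
# of y = w * x, minimizing mean squared error. Illustrative toy only.

def fit_slope(xs, ys, lr=0.01, steps=200):
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]   # true slope is 3
print(round(fit_slope(xs, ys), 3))  # converges to ~3.0
```

Whether this counts as "learning" or mere "training" is exactly the disagreement above; the code itself is neutral on that.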
Re: (Score:2)
Two issues (Score:2)
The two issues I see holding machines back, that are inherent in the human mind, are:
1. In the brain, the "memory" and "logic" hardware are mixed together. They are not in separate areas of the brain; they are spread out and combined all over the place, except for some specialized structures for handling visual processing and autonomic systems.
2. The brain can create arbitrary connections between different parts of itself, making it, essentially, *massively* parallel with no set routes between areas. Computers
Re: (Score:2)
The biggest question IMO is "how are memories stored in the human mind?"
The physical data structure in humans is a Hopfield network.
Re: They've been trying to... (Score:2)
Re: (Score:2)
Thing is we know how a hopfield stores and retrieves memories: neurons activate based on stimulus inputs, and they respond to these inputs based on prior training. If the prior training suggests an output, they emit an output. Arrays of this chain down and converge onto a target vector, which produces its own output. When a particular memory is better-associated, those activations cause the activation of more neurons, which increases the chances of reaching the correct target vector in the end.
Hopfie
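For reference, a minimal Hopfield network really does show the store-and-retrieve behavior described above: Hebbian weights store a pattern, and repeated threshold updates recover it from a corrupted input. This is a pure-Python toy under assumed bipolar (+1/-1) patterns, not a model of real neurons.

```python
# Minimal Hopfield network: Hebbian storage plus asynchronous recall.
# Patterns are bipolar (+1/-1); everything here is an illustrative toy.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    n = len(state)
    state = list(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]       # one bit flipped
print(recall(w, noisy) == stored)    # True
```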
Re: (Score:2)
Re: (Score:2)
It's a pretty vague process. Basically, a neuron has many outputs; it may have an input, or it may be self-firing (e.g. a neuron may be GABA-mediated such that it simply fires more rapidly if there is less chloride traveling across a particular ion channel and less rapidly when GABA binds to that ion channel and draws more chloride into the cell; or it may be GABA-mediated such that it fires with less input if there is less chloride, etc.). Really, that's about it.
The neuron's outputs go to other neuron
Common knowledge (Score:2)
They're asking for common knowledge, not common sense.
Instilling common knowledge into an intelligence takes approximately 18 years. You're allowed 8 hours per day of offline processing. Approximately 17% of intelligences can be expected to fail to complete the training.
Good luck.
Re: (Score:2)
17% fail? Looks to me more like around 17% succeed; the rest just build up a dictionary of observed behaviors without any understanding.
Yeahhhh.... (Score:2)
(And after that I'd like to see them code whatever the hell they come up with.)
Re: (Score:2)
Define "Common sense".
Common sense is whatever the ignorant utter when they hear experts talking about details they don't understand.
Just like, a paradox is something that you refuse to accept that you don't understand, and so remain confused about even after you found out that the part you thought you understood was actually wrong.
And irony is comedy based on irrational expectations predictably not matching reality.
Half the people are below average. Common sense is that wisdom that is common even to most of them. It should be n
The Noisy End (Score:3)
Aim the noisy end away from face (or whatever the AI equivalent is)
Re: (Score:3)
Generalized to AI it would be, "aim the energetic end towards negative infinity on the friend axis."
But for contemporary electronic machines a better simplification might be just to measure (predicted) heat, and aim the hot end towards -1
The interesting thing about this article title (Score:3)
... is that it could plausibly have been picked out from tech news headlines from 35 years ago, when I was in school. And I wouldn't be the least surprised if it crops up again 35 years from now.
The rich contextual knowledge humans have of the world has been the clear advantage we have over software ever since AI researchers were doing the digital equivalent of banging rocks together. I remember being awed by SHRDLU's ability to interact with people so long as you pared all context away and you restricted yourself to an artificial, constructed world.
Logic, after all, is only as good as the propositions you feed it. The illogic of human reasoning is both our Achilles' heel and our greatest strength. Our familiarity with the world makes us reject conclusions which are logically valid, but just seem wrong. This is often wrong, and when it is we call it a "cognitive bias". But it's often right, too, and when it is we call it "common sense". Same mechanism, different words.
Re: (Score:2)
Our familiarity with the world makes us reject conclusions which are logically valid, but just seem wrong. This is often wrong, and when it is we call it a "cognitive bias". But it's often right, too, and when it is we call it "common sense". Same mechanism, different words.
The problem is that you presume there is some external measure of correctness that tells us the difference.
But in reality, what is called "common sense" is usually cognitive bias, and what is called "cognitive bias" is often also just a correct observation made by the "wrong" person. They might actually just be synonyms for unsupported crap that is usually wrong.
Re: (Score:2)
I agree. The problem is basically intractable (at least with digital computers) as even the dumbest human has this quality of "understanding" that completely eludes software. Well, "understanding" requires consciousness for the moment of insight, and we do not even know what that is. (No we don't, go away physicalist idiots. You are a religious cult.)
Re: (Score:2)
The problem now isn't that we can't build a generalist AI, the problem is that it is difficult to measure the success of an implementation, since you are no longer defining its goals and a generalist AI of high complexity will not necessarily rapidly show consistency and progress on logic tasks. Even the brain of a child has immense raw processing power, but it takes a great deal of time to train basic addition. We are testing our AIs with dramatically less processing power and looking for them to solve bas
Re: (Score:2)
I think the problem is there isn't the economic motivation to shoulder the enormous cost of development. And once you had it, all you'd have done is prove a point; what you'd get is in effect a bizarre person, and we've got plenty of those already.
Re: (Score:1)
A bizarre person who can be snapshotted and restored, eats silicon and electricity from any source, who never reaches retirement age (although I tend to think some of the entropy issues that lead to Alzheimer's and senility might be statistically inevitable, whether for a long-lived AI or a human). This AI can be a companion, can work and take care of humans, can be raised by humans who can't have children, etc. Also, recognizing animal intelligence doesn't mean we don't still consider ourselves superior purely b
Re: (Score:2)
And that is a question that does not even require any intelligence, just a linear search forward in time. These fabulous "digital assistants" cannot even do that? Now I am _never_ getting one. (Besides, they are creepy...)
Winograd Schema Challenge (Score:2)
Common sense? (Score:2)
Robots start shooting themselves in the foot to get out of the army.
Cyc by Douglas Lenat (Score:2)
Maybe they should ask a question? (Score:2)
The example "question" doesn't actually ask the AI a question. It just states that the plant by the window will produce more (A) oxygen (B) carbon dioxide (C) water. That's a statement and not a question. Over time it's true because the plant in the dark will die and produce nothing.
Besides, why do they want to introduce common sense into the military? They spend a couple of months knocking it out of every person when they first enlist.
As if human beigns have common sense... (Score:2)
... a good example is what happened to video games over the last 20 years. Once high-speed internet was everywhere, the entire industry was effectively able to take game software and hold it back on their servers, because the average person did not have the "common sense" to see that corporations will do anything to make a buck, including trying to take away the software you are paying for in order to charge you more.
Cyc? (Score:3)
Didn't we already have this 30 years ago? It was called Cyc, a program of the U of Texas, if I recall correctly, and it had exactly this goal, except that they called it "general background knowledge" and not "common sense".
As I recall, the software eventually could read and understand newspaper articles, but didn't progress beyond the understanding of a pre-teen child.
Not for the military (Score:2)