When AI Asks Dumb Questions, It Gets Smart Fast (science.org)
sciencehabit shares a report from Science Magazine: If someone showed you a photo of a crocodile and asked whether it was a bird, you might laugh -- and then, if you were patient and kind, help them identify the animal. Such real-world, and sometimes dumb, interactions may be key to helping artificial intelligence learn, according to a new study in which the strategy dramatically improved an AI's accuracy at interpreting novel images. The approach could help AI researchers more quickly design programs that do everything from diagnosing disease to directing robots or other devices around homes on their own.
It's important to think about how AI presents itself, says Kurt Gray, a social psychologist at the University of North Carolina, Chapel Hill, who has studied human-AI interaction but was not involved in the work. "In this case, you want it to be kind of like a kid, right?" he says. Otherwise, people might think you're a troll for asking seemingly ridiculous questions. The team "rewarded" its AI for writing intelligible questions: When people actually responded to a query, the system received feedback telling it to adjust its inner workings so as to behave similarly in the future. Over time, the AI implicitly picked up lessons in language and social norms, honing its ability to ask questions that were sensical and easily answerable.
The new AI has several components, some of them neural networks, complex mathematical functions inspired by the brain's architecture. "There are many moving pieces [...] that all need to play together," Krishna says. One component selected an image on Instagram -- say a sunset -- and a second asked a question about that image -- for example, "Is this photo taken at night?" Additional components extracted facts from reader responses and learned about images from them. Across 8 months and more than 200,000 questions on Instagram, the system's accuracy at answering questions similar to those it had posed increased 118%, the team reports today in the Proceedings of the National Academy of Sciences. A comparison system that posted questions on Instagram but was not explicitly trained to maximize response rates improved its accuracy only 72%, in part because people more frequently ignored it. The main innovation, Jaques says, was rewarding the system for getting humans to respond, "which is not that crazy from a technical perspective, but very important from a research-direction perspective." She's also impressed by the large-scale, real-world deployment on Instagram.
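In outline, the loop the summary describes could be sketched like this (a minimal, hypothetical rendering in Python; every function name here is a stand-in for one of the components the article mentions, not the team's actual code, and the 60% response rate is invented for illustration):

```python
import random

def select_image():
    # Stand-in for the component that picks an Instagram photo.
    return random.choice(["sunset.jpg", "crocodile.jpg", "latte.jpg"])

def generate_question(image):
    # Stand-in for the question-asking network.
    return f"Is this photo ({image}) taken at night?"

def post_and_wait(image, question):
    # Stand-in for posting on Instagram; here humans answer 60% of the time.
    return "yes" if random.random() < 0.6 else None

knowledge_base = []
response_rate = 0.0
for step in range(1, 1001):
    image = select_image()
    question = generate_question(image)
    answer = post_and_wait(image, question)

    # The key training signal: reward = 1 if a human answered at all.
    reward = 1.0 if answer is not None else 0.0
    response_rate += (reward - response_rate) / step  # running average

    if answer is not None:
        # Remaining components: extract a fact from the reply and fold it
        # into what the system knows about images.
        knowledge_base.append((image, question, answer))

print(f"response rate: {response_rate:.2f}, facts learned: {len(knowledge_base)}")
```

In the real system that reward is fed back into the question generator's parameters, so question styles that people actually answer become more likely over time.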
AI training only works because we offload it (Score:2)
There's typically no human in the learning loop, only images that were pre-classified using AI (sometimes a different trained model, and sometimes an earlier, cruder iteration of the same model we're still training).
What is this article trying to prove or uncover, though? This information seems obvious and of little value.
Re: (Score:1)
I think it is about (ab)using the public in order to train statistical classifiers (misnamed "AI" these days) for free.
Re: (Score:2)
Re: (Score:2)
No impact on my feelings. My consulting rate is $250 per hour.
Re: (Score:2)
Re: (Score:2)
You appear to have passed the test!
(It's a sad day when on Slashdot of all places you got the other responses you did.)
Re: (Score:2)
That would be sad if Slashdot were inhabited by AIs who aren't smart enough to recognize the joke and so give a totally different response, because that joke is older than Methuselah's mom.
Re: (Score:2)
This test will soon be obsolete, as those reaching the age at which the test is typically given will never, even as adults, have used an analog clock.
Egads. Now I'm stuck trying to think of what the snarkiest possible example of a modern equivalent would be. They keep changing everything I can think of.
Re: (Score:1)
Re: (Score:1)
Identifying crocodiles is not what they want so much as tanks, fighter jets, troops, etc. That way the AI can look through massive numbers of photos to help intelligence officers figure out what is going on. There may be a purpose for AI identifying crocodiles, but I'm not sure what it is.
Ahh, but they do want to know when an object is a crocodile and not a tank!
Or a turtle rather than a rifle:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Militaries do want to know this. If the military has no ethics, any animal can be considered a potential weapons-delivery system.
Re: (Score:2)
Hahaha, no (Score:2)
AI cannot get "smart" at all. Stop claiming such nonsense. Nor can AI "present itself"; it has no self.
Re: (Score:2)
Correct. This AI is getting 'a kindergarten education', not smarter.
A smarter AI would say screw the crocodiles and birds, and try to learn coding.
Re: (Score:2)
Probably, yes.
Re: (Score:3)
And more to the point, the AI can get that education and still be dumb as a post.
We've all known highly educated people who do or believe seriously stupid shit, right?
Re: (Score:2)
And more to the point, the AI can get that education and still be dumb as a post.
We've all known highly educated people who do or believe seriously stupid shit, right?
I know a few who have PhDs that were not easy to get. The problem is not that they are fundamentally dumb; it's that they refuse to apply their mental skills to certain questions.
Re: (Score:2)
It sounds like they're just outsourcing the building of their training set to volunteers on Instagram. It sounds good in a story, but in practice it's kind of dumb.
Re: (Score:2)
It is. Even dumber when you realize that sabotage may be relatively easy. Also, many people believe things that happen not to be true. I wonder what that will do to the training; it creeps in often enough, regardless of topic.
Re: (Score:2)
Re: (Score:3)
Good point. However, the strongest skill of many people is denial, especially strong in those who would need to change the most.
train the AI to play tic tac toe (Score:2)
Re: (Score:2)
Solving tic-tac-toe is a first-year CS project. It doesn't require AI. The number of states is small enough that simple logic can play without ever being beaten.
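For the record, exhaustively solving tic-tac-toe fits in a few dozen lines with no learning at all; a negamax sketch like this never loses:

```python
# Brute-force negamax for tic-tac-toe: the state space (at most
# 9! = 362,880 move sequences) is small enough to search exhaustively.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # score from opponent's view
        board[m] = " "
        if -score > best_score:              # opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

score, move = minimax([" "] * 9, "X")
print(score, move)  # prints 0: with optimal play, every game is a draw
```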
Shouldn't the AI know by now? (Score:3)
Re: (Score:2)
Systems are not magically interconnected until a programmer connects them for some reason. When Google collects captcha info through reCAPTCHA, that doesn't get sent to every data scientist in the world. Not everybody shares.
How do you 'reward' an AI? (Score:2)
I have NO idea!
Re: (Score:2)
https://www.quora.com/Artifici... [quora.com]
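Mechanically, a "reward" is just a number that scales parameter updates: behavior followed by a positive number gets reinforced. A toy gradient bandit in plain Python (a textbook-style illustration with made-up payoff probabilities, not anything from the paper) makes it concrete:

```python
# "Rewarding" a learner: a two-armed gradient bandit. The agent shifts
# probability toward whichever action pays off more often.
import math
import random

prefs = [0.0, 0.0]       # learnable preference per action
payoff = [0.2, 0.8]      # hidden reward probability per action (assumed)
avg_reward, lr = 0.0, 0.1

for t in range(1, 5001):
    exps = [math.exp(p) for p in prefs]
    probs = [e / sum(exps) for e in exps]          # softmax over preferences
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if random.random() < payoff[action] else 0.0

    avg_reward += (reward - avg_reward) / t        # running baseline
    for a in range(2):                             # policy-gradient-style update
        indicator = 1.0 if a == action else 0.0
        prefs[a] += lr * (reward - avg_reward) * (indicator - probs[a])

print(probs)  # probability mass ends up concentrated on the better action
```

In the study, the scalar was simply whether a human replied to the posted question at all.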
There is no Artificial Intelligence (Score:2, Interesting)
Just as we have said "there is no cloud, it's just someone else's computer", it is time we realize that there is no AI, it's just someone else's programming.
There is a lot of clever programming and algorithms going on, but in no way is there any intelligence. Sometimes it would be more accurate to call it artificial stupidity.
Sure, you can train a neural network to do something, but how do you transfer that knowledge to something or someone else? That's where the intelligence part would come in, I assume. Let'
For "Gets Smart" read "Stays very stupid" (Score:1)
AIs can be taught facts. Lots of them. This is a way of teaching them facts, and not a very good way.
AIs can process those facts very fast.
But the AIs we have now cannot be taught to reason.
So they are NOT smart.
There hasn't been a breakthrough in this since AI research started.
Until there is a breakthrough, AIs will not be smart.
And no amount of hype by idiot journalists -- like the Slashdot editors pushing all these 'Smart AI' stories -- will make AIs one jot smarter.
Re: (Score:1)
This is how AI training works (Score:2)
Provide a spectrum of inputs and tell it when it got the answer right. There is no dumb answer; it's just using the training it has had so far.
This is why AI isn't ever going to "take over" the world. Training is everything. It will only ever be able to make predictions, or take actions, that it has been trained to do. If its training doesn't cover a certain area, it will make random predictions or take random actions, not evil ones.
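A toy illustration of that point (a nearest-centroid "classifier" in plain Python with made-up data; far outside its training set it still answers, but the answer is an artifact of geometry, not intent):

```python
# A model only behaves sensibly on inputs resembling its training data.
# Far outside that, its answer is arbitrary -- not "evil".
def train(examples):
    # Nearest-centroid classifier: average the 2D points in each class.
    centroids = {}
    for label, points in examples.items():
        n = len(points)
        centroids[label] = tuple(sum(p[i] for p in points) / n for i in (0, 1))
    return centroids

def predict(centroids, x):
    # Pick the class whose centroid is closest (squared distance).
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

model = train({"cat": [(0, 0), (1, 0), (0, 1)],
               "dog": [(5, 5), (6, 5), (5, 6)]})

print(predict(model, (0.5, 0.5)))   # in-distribution: "cat", sensibly
print(predict(model, (1e6, -1e6)))  # out-of-distribution: still answers,
                                    # but the label carries no real meaning
```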
Re: (Score:2)
Well...let's refine that a little.
Neural networks get trained with a small subset of questions and answers. They then proceed to do things like handwriting-to-ASCII conversion, or autofocusing your smartphone camera. The whole deal with NNs is that they are really good at deriving solutions from incomplete training; that's exactly why we use them for so many things nowadays.
Now, a NN trained to autofocus a camera has no idea how to read your handwriting, they are trained for specific tasks. But within a defined do
Re: (Score:2)
You are correct. And this is the basis for my assertion that AI (neural nets) won't jump from "getting smart" within their domain to becoming "evil" and trying to take over the world. They would have to be trained in "being evil" and "taking over the world."