A Neural Network Can Learn To Recognize the World It Sees Into Concepts (technologyreview.com) 69
An anonymous reader quotes a report from MIT Technology Review: As good as GANs, or generative adversarial networks, are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized they are also a powerful tool: because they paint what they're "thinking," they could give humans insight into how neural networks learn and reason. [T]he researchers began probing a GAN's learning mechanics by feeding it various photos of scenery -- trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how. Stunningly, over time, it did. By turning "on" and "off" various "neurons" and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set.
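For readers who want the mechanics, here is a minimal sketch of the kind of "turn neurons on and off and repaint" probe described above, assuming a PyTorch-style convolutional generator. The loader, the layer name, and the unit indices are hypothetical placeholders, not details from the article.

import torch

def set_units(units, value=0.0):
    # Forward hook that overwrites the chosen feature channels of a layer.
    # value=0.0 switches the units "off"; a large positive value forces them "on".
    def hook(module, inputs, output):
        output[:, units] = value
        return output
    return hook

generator = load_pretrained_gan()                   # hypothetical: any conv scene GAN
layer = dict(generator.named_modules())["layer4"]   # hypothetical layer name
tree_units = [12, 87, 301]                          # hypothetical "tree" channels

z = torch.randn(1, 128)        # one random latent code
baseline = generator(z)        # the GAN paints a normal scene

handle = layer.register_forward_hook(set_units(tree_units, 0.0))
no_trees = generator(z)        # same latent code, tree cluster silenced
handle.remove()

Comparing baseline and no_trees for the same latent code is what reveals which visual concept a cluster of units carries.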
Not only that, but the GAN seemed to know what kind of door to paint depending on the type of wall pictured in an image. It would paint a Georgian-style door on a brick building with Georgian architecture, or a stone door on a Gothic building. It also refused to paint any doors on a piece of sky. Without being told, the GAN had somehow grasped certain unspoken truths about the world. Being able to identify which clusters correspond to which concepts makes it possible to control the neural network's output. The team has now released an app called GANpaint that turns this newfound ability into an artistic tool. It allows you to turn on specific neuron clusters to paint scenes of buildings in grassy fields with lots of doors. Beyond its silliness as a playful outlet, it also speaks to the greater potential of this research.
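How a cluster gets matched to a concept in the first place is, roughly, an overlap measurement: threshold a unit's activation map and score it against a semantic segmentation mask for a concept across many generated images. A hedged numpy sketch of that idea follows; the shapes, threshold, and the segmenter producing the mask are assumptions, not necessarily the team's exact method.

import numpy as np

def unit_concept_iou(activation, mask, thresh):
    # activation: HxW map of one unit's response; mask: HxW boolean "tree" mask.
    fired = activation > thresh
    union = np.logical_or(fired, mask).sum()
    if union == 0:
        return 0.0
    return np.logical_and(fired, mask).sum() / union

# Units whose firing keeps overlapping the tree mask across many generated
# images form the "tree" cluster; the same scoring yields grass, wall, and
# door units, which is what tools like GANpaint then expose as switches.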
Re:speaks to the greater potential of this researc (Score:4, Interesting)
The AI can now "learn", "think", and "see".
Funding secured.
Re: (Score:2)
It can learn a style like Georgian; that is still impressive.
Re: (Score:2)
Re: (Score:2)
"So you're saying strong AI has potential? Sounds good. When though."
None dare call it strong AI, that's all. Pitch it as an approach to extended versions of the same sort of problems that narrow AI is solving, and you will partake of the same rich funding as narrow AI.
So what happens when this thing blows a cap? (Score:1)
Does it go berserk and kill 5 billion people?
I think these guys are building a matrix...
Re: (Score:2, Funny)
"Alexa, draw my hands larger on the news."
News from November? (Score:2, Interesting)
Re:Total BS (Score:5, Funny)
Re: (Score:3)
I would like to see someone smoke a hash function; you can't!
No, but you can let the magic smoke out of something that runs it. With enough acrid solder smoke floating around you won't know the difference.
CPU instruction HCF: Halt and Catch Fire. Link. [definitions.net]
This was a TV Show?!? I didn't know.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You dummy, they were Australian scientists!
Re: (Score:1)
Everyone in the field knows this. They distinguish neural networks from neuronal networks, which are intentionally more faithful to biology.
Nevertheless there is some commonality between neural and neuronal networks in overall aspects of computation. The first big book already had the right name: Parallel Distributed Processing.
Re: Total BS (Score:1)
Brain or not, it works in the real world. It's been a standard technique in particle physics since the early 1990s, and has been used to make real discoveries. You believe it's not thinking? Fine. But that just means these very sophisticated tasks, which could hitherto be done only with thought, can increasingly be done without it. That's big news, not snake oil.
AI (Score:5, Insightful)
So it statistically correlated randomly-grouped information over millions of trials?
Still not AI. Still just statistics. Bad statistics. With a complete lack of inference. With shady, if not downright dishonest, assertions made about its capabilities.
You did what "AI" has had done to it for decades now... flip a bit to indicate success in some fashion, and throw millions of trials at it until it trains itself to activate that bit more than a handful of times.
It could be flipping because the image is majority green. Because the top-left pixel is green. Because there's a watermark on the image. Because some frequency curve (and if you haven't supplied those, it's highly unlikely to form them itself in even a billion attempts / generations / evolutions / trainings) hits on a certain colour.
The fact is: you have NO idea what it's correlating on. It's almost superstition on the part of the AI (if it weren't completely lacking in any intelligence).
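The watermark worry is easy to demonstrate with a synthetic toy (all numbers below are invented for the demo): plant a single "pixel" that leaks the label, and a classifier will ace training while having learned nothing about the ostensible subject.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 100
X = rng.normal(size=(n, d))                    # "images" of pure noise
y = rng.integers(0, 2, size=n)                 # arbitrary labels
X[:, 0] = y + rng.normal(scale=0.1, size=n)    # a watermark pixel leaking the label

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))                         # near 1.0: looks like it "learned"

X_clean = X.copy()
X_clean[:, 0] = rng.normal(size=n)             # remove the watermark
print(clf.score(X_clean, y))                   # collapses toward chance (~0.5)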
Did you know that if you feed a pigeon in a box at random times it starts to associate feeding time with whatever it happened to be doing, and so repeat that? Whether that's bobbing its head, pecking at the floor, or looking a certain direction.
It then spends most of its time trying to replicate that, convinced that it's "just not doing it quite right", like someone with a superstition about their team winning because they were wearing their lucky underpants - no amount of negative correlation will convince them they are wrong and get them to change their ways.
And that's exactly the problem with "AI" / neural networks. Of course you can train them to a statistical correlation - you know why? Because you're eliminating / training out / not breeding from those that don't correlate somewhat. It doesn't mean that what they are working from has anything to do with what you were after. And, most importantly, it does not mean you can trust them further on new data, nor that you can "untrain" them when they get it wrong, nor that you can perpetually improve them by more and more training.
All that happens is that it plateaus before it ever really gets useful (usually within the range of a PhD study - write your thesis up quick!), people release rubbishy apps "to show what it can do" and then it's never touched again because it can't be used for anything else and isn't particularly good or useful at what it does do.
We don't have AI, so stop trying to pretend that we do. Come back when you have a machine that can infer, that can actually reason its answer (not just "well, it matches shape 22%, colour 17% and overall pattern roughness 19.4%", but "I can see branches here, here and here. They are connected. The connections grow and increase in width. The thickest part, which looks trunk-like, ends in a solid base which resembles soil", etc.).
Until then, this is all just a waste of time and heuristics (YOU told it when it got the tree right).
Re: (Score:2)
You don't want your Tesla to change lanes because "it's Sagittarius and has a good feeling about this full moon".
You want it to use intelligence. Slightly more intelligence than a pigeon.
Re: (Score:2)
"If every tiny neural network showed genius level intellect it would strongly suggest that they were very different from real brains."
These days nobody is really trying to make something that works like a real brain. Something could also be the functional equivalent of a real brain without being anything like one, but nobody is building that either.
What NNs are, as implemented, is basically self-organizing algorithms. I use the word "self" loosely: instead of writing code, you use statistics to train them.
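To make "statistics instead of code" concrete, here is a minimal sketch (all values arbitrary): nobody writes the rule y = 2x + 1 into the program; a one-weight model absorbs it from noisy samples by gradient descent.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2 * x + 1 + rng.normal(scale=0.05, size=200)   # noisy samples of a hidden rule

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    err = (w * x + b) - y          # prediction error over the sample
    w -= lr * (err * x).mean()     # follow the loss gradient downhill
    b -= lr * err.mean()

print(w, b)                        # lands near 2 and 1; the rule was never coded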
Re: (Score:3)
While true, as a new father, I find myself asking this question.
Is our intelligence just "statistically correlated randomly-grouped information over millions of trials?"
I really ask that as a serious question. I watch my kid learn, and he is like that pigeon you speak of. Maybe that's all our intelligence is. Just more complex.
Your last sentence really piqued my interest.
When we look at the world and identify objects, maybe we really do see something more like:
"well it matches shape 22%, colour 17% and overall pattern roughness 19.4%"
I concur (Score:2)
Re: (Score:1)
After spending a bit of time with a few of my own, it seems clear that kids learn in this way. This is especially apparent when asking "unanswerable" questions. Here are some good ones that I like:
"what is daddy wearing?" "bracelet" (correct answer: watch)
"what sound does a bunny make?" *jumps up and down*
"what sound does a strawberry make?" "RED!"
"what sound does RED make?" *makes the sounds of an ambulance*
They lack the regulation to think about and carefully give the pre-programmed correct answer.
Re: (Score:1)
Again, I just wonder about such things. It's easy to dismiss AI as just statistics or pattern recognition. But then you dismiss the pigeon as just pattern recognition. My genuine question is just how much of us is just pattern recognition? Maybe our intelligence is nothing more mystical than that.
I think you are spot on. Neuroscience shows that our brains are unceasingly making predictions and trying to fit patterns, and at any given moment our visual cortex is actually processing much more information from the prediction centers of the brain than from the optic nerve - in other words, our vision is dominated more by what our brain predicts it will see than by what we actually see.
This research sounds eerily close to what a biological brain does. You can argue that "unguided categorization of patterns" isn't real thinking, but it looks a lot like what our own brains spend their time doing.
Re: (Score:2)
Re: (Score:2)
Any algorithm is a series of if statements that work on data, and any data manipulation could be interpreted as being 'statistics'.
Also, from what we know, the human brain does exactly that, too. Lots of correlations and sums of different action-potential cascades. If the summed signal is above a certain threshold, the neuron fires.
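That threshold picture in code, for concreteness: a bare perceptron-style unit that sums weighted inputs and fires above a cutoff. The weights and threshold are arbitrary illustration, not biology.

import numpy as np

def neuron(inputs, weights, threshold):
    # Weighted sum of incoming signals; fire (1) if it clears the threshold.
    return 1 if np.dot(inputs, weights) >= threshold else 0

w = [0.6, 0.7, -0.4]                      # third input is inhibitory
print(neuron([1, 1, 0], w, 1.0))          # 1.3 >= 1.0 -> fires
print(neuron([1, 0, 1], w, 1.0))          # 0.2 <  1.0 -> stays silent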
Re: (Score:2)
News flash: it's statistics all the way down.
Humans have no idea what we are correlating on, either. This has been amply demonstrated in neurology over and over again.
What we do have, however, is a highly specialized module for Making Shit Up, which, like the visual cortex, pretty much never takes a day off. It's central to why humans are fixated on communicating so much of life in narrative form. Sometimes the MSU is onto something; other times you've checked into Hotel Batshit.
Re: (Score:2)
Well, it works a lot better if you don't randomize the times, but rather feed them at certain regular intervals - not so short that they don't reproduce the behavior, and not so long that they never get rewarded when they do reproduce the behavior.
Trying to find the great advance in research (Score:1)
The MIT PR department overhypes absolutely everything, to the point of making it next to impossible to understand what is actually new and valuable in the research. Well, I guess that keeps the money flowing in, but sometimes it would be nice if they didn't act mostly as an obfuscation layer between their researchers and the reader.
A great first step (Score:1)
Now we just need the neural network to see further so anyone can really been far even as decided to use even go want to do look to concepts.
bad headline again (Score:2)
"Recognize the World It Sees Into Concepts" ?
Either the editor is illiterate and made a gross grammar error, or the headline should read:
"Reorganize the World It Sees Into Concepts"
In defense of the editor, they usually just copy stuff from a source without reading it. Perhaps the source was wrong this time.