AI Media Network Technology

A Neural Network Can Learn To Recognize the World It Sees Into Concepts (technologyreview.com) 69

An anonymous reader quotes a report from MIT Technology Review: Researchers from the MIT-IBM Watson AI Lab realized that GANs, or generative adversarial networks, as good as they are at causing mischief, are also a powerful tool: because they paint what they're "thinking," they could give humans insight into how neural networks learn and reason. [T]he researchers began probing a GAN's learning mechanics by feeding it various photos of scenery -- trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how. Stunningly, over time, it did. By turning "on" and "off" various "neurons" and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set.
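The intervention described here -- switching clusters of "neurons" on and off and asking the GAN to paint again -- amounts to editing intermediate activations inside the generator. Below is a minimal sketch of that idea in PyTorch; the tiny stand-in generator and the unit indices are hypothetical, not the lab's actual pretrained network:

```python
# Minimal sketch of the "turn neurons on and off and repaint" probe.
# The tiny generator below is a hypothetical stand-in; the actual work
# dissected a large pretrained GAN, which is not reproduced here.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator: 64-dim latent vector -> 3x32x32 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),   # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),    # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(-1, 64, 1, 1))

gen = TinyGenerator().eval()
z = torch.randn(1, 64)

tree_units = [3, 17, 41]  # hypothetical channels suspected of encoding "tree"

def ablate(module, inputs, output):
    output[:, tree_units] = 0.0  # switch the suspected units "off"
    return output

with torch.no_grad():
    before = gen(z)
    handle = gen.net[3].register_forward_hook(ablate)  # hook a middle layer
    after = gen(z)
    handle.remove()

# If those units really encode "tree", the pixel changes between the two
# paintings should be concentrated where the trees were.
print("mean |pixel change|:", (before - after).abs().mean().item())
```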

Not only that, but the GAN seemed to know what kind of door to paint depending on the type of wall pictured in an image. It would paint a Georgian-style door on a brick building with Georgian architecture, or a stone door on a Gothic building. It also refused to paint any doors on a patch of sky. Without being told, the GAN had somehow grasped certain unspoken truths about the world. Being able to identify which clusters correspond to which concepts makes it possible to control the neural network's output. The team has now released an app called GANpaint that turns this newfound ability into an artistic tool. It allows you to turn on specific neuron clusters to paint scenes of buildings in grassy fields with lots of doors. Beyond being a playful outlet, it also speaks to the greater potential of this research.
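GANpaint's brush can be read as the same intervention restricted to a user-drawn region: inside the masked area, the chosen concept units are forced to a high activation so the later layers render that concept there. A sketch of the core tensor edit, with hypothetical "door" channels and an illustrative activation level:

```python
# GANpaint-style edit reduced to its core tensor operation. The unit
# indices, mask region, and activation level are illustrative guesses,
# not the app's real values.
import torch

features = torch.randn(1, 64, 8, 8)  # an intermediate generator activation
door_units = [5, 12]                 # hypothetical "door" channels

mask = torch.zeros(1, 1, 8, 8, dtype=torch.bool)
mask[..., 2:5, 2:5] = True           # the region where the user "paints"

paint_level = 6.0                    # force a strong concept activation
for u in door_units:
    features[:, u][mask[:, 0]] = paint_level

# Passing `features` onward through the generator's remaining layers would
# now tend to render a door inside the masked region -- unless, as the
# article notes, the region is sky, in which case the edit gets overruled.
```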

This discussion has been archived. No new comments can be posted.

  • Does it go berserk and kill 5 billion people?

    I think these guys are building a matrix...

  • News from November? (Score:2, Interesting)

    by Anonymous Coward
    I remember seeing this and playing with GANPaint last November. It was on HackerNews, Twitter,... The article reads, "The team has now released an app called GANPaint." Now? The tweet right above this text, announcing GANPaint, is from November 27, 2018. So...is there something new, or is MIT just now getting the memo?
  • AI (Score:5, Insightful)

    by ledow ( 319597 ) on Monday January 14, 2019 @03:23AM (#57957562) Homepage

    So it statistically correlated randomly-grouped information over millions of trials?

    Still not AI. Still just statistics. Bad statistics. With complete lack of inference. With shady, if not downright dishonest, assertions made about its capabilities.

You did what "AI" has had done to it for decades now... flip a bit to indicate success in some fashion, and throw millions of trials at it until it trains itself to activate that bit more than a handful of times.

It could be flipping because the image is majority green. Because the top-left pixel is green. Because there's a watermark on the image. Because some frequency curve (if you feed it those; it's highly unlikely to form them itself in even a billion attempts / generations / evolutions / trainings) hits on a certain colour.

    The fact is: You have NO idea what it's correlating on. It's almost superstition on the part of the AI (if it wasn't completely lacking in any intelligence).
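That objection is easy to make concrete. In the hypothetical setup below, the only "signal" in the training data is a single corner pixel (standing in for a watermark), and an ordinary classifier scores almost perfectly while knowing nothing whatsoever about trees:

```python
# A hypothetical dataset where the only "signal" is one corner pixel,
# standing in for a watermark. The classifier scores near-perfectly
# while learning nothing about trees. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 16 * 16))       # 16x16 "images" of pure noise
y = rng.integers(0, 2, size=n)          # labels: "tree" / "no tree"
X[:, 0] = y + 0.1 * rng.normal(size=n)  # leak the label into one pixel

clf = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])
print("held-out accuracy:", clf.score(X[1000:], y[1000:]))  # ~0.99
# The model "recognizes trees" -- by reading the corner pixel.
```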

Did you know that if you feed a pigeon in a box at random times, it starts to associate feeding time with whatever it happened to be doing, and so repeats that? Whether that's bobbing its head, pecking at the floor, or looking in a certain direction.

It then spends most of its time trying to replicate that, convinced that it's "just not doing it quite right", like someone with a superstition about their team winning because they were wearing their lucky underpants - no amount of negative correlation will convince them they are wrong and get them to change their ways.

    And that's exactly the problem with "AI" / neural networks. Of course you can train them to a statistical correlation - you know why? Because you're eliminating / training out / not breeding from those that don't correlate somewhat. It doesn't mean that what they are working from has anything to do with what you were after. And, most importantly, it does not mean you can trust them further on new data, nor that you can "untrain" them when they get it wrong, nor that you can perpetually improve them by more and more training.

    All that happens is that it plateaus before it ever really gets useful (usually within the range of a PhD study - write your thesis up quick!), people release rubbishy apps "to show what it can do" and then it's never touched again because it can't be used for anything else and isn't particularly good or useful at what it does do.

We don't have AI, so stop trying to pretend that we do. Call it AI when you get a machine that can infer, that can actually reason out its answer (not just "well it matches shape 22%, colour 17% and overall pattern roughness 19.4%", but "I can see branches here, here and here. They are connected. The connections grow and increase in width. The thickest part, which looks trunk-like, ends in a solid base which resembles soil", etc.).

Until then, this is all just a waste of time and heuristics (YOU told it when it got the tree right).

    • While true, as a new father, I find myself asking this question.

      Is our intelligence just "statistically correlated randomly-grouped information over millions of trials?"

I really ask that as a serious question. I watch my kid learn, and he is like that pigeon you speak of. Maybe that's all our intelligence is. Just more complex.

      Your last sentence really piqued my interest.

      When we look at the world and identify objects, maybe we really do see more like:

      "well it matches shape 22%, colour 17% and overall patter

      • I think our 'intelligence' is just more layers of pattern recognition. And, by patterns, I don't mean just what our 5 senses bring in. The brain monitors additional inputs. We construe those as emotions, feelings, guesses, etc.
      • by Anonymous Coward

        It seems clear that kids learn in this way after spending a bit of time with a few of my own. This is especially apparent when asking "unanswerable" questions. Here are some good ones that I like:
        "what is daddy wearing?" "bracelet" (correct answer: watch)
        "what sound does a bunny make?" *jumps up and down*
        "what sound does a strawberry make?" "RED!"
        "what sound does RED make?" *makes the sounds of an ambulance*

They lack the regulation to think about it and carefully give the pre-programmed "correct" answer.

Again, I just wonder about such things. It's easy to dismiss AI as just statistics or pattern recognition. But then you dismiss the pigeon as just pattern recognition, too. My genuine question is: how much of us is just pattern recognition? Maybe our intelligence is nothing more mystical than that.

I think you are spot on. Neuroscience shows that our brains are unceasingly making predictions and trying to fit patterns; at any given moment, our visual cortex is actually processing much more information from the prediction centers of the brain than from the optic nerve - in other words, our vision is dominated more by what our brain predicts it will see than by what our eyes actually take in.

        This research sounds eerily close to what a biological brain does. You can argue that "unguided categorization of pattern

In my opinion it's highly likely that actual strong AI will use today's algorithms as simply component parts. After all, autonomous cars are shaping up to use quite a few of the base design details from the Ford Model A; they just use many additional elements to achieve the emergent behaviors.
    • > So it statistically correlated randomly-grouped information over millions of trials? I don't get this "AI is just a lot of IF statements that do statistics!" argument that gets thrown around a lot these days.

      Any algorithm is a series of if statements that work on data, and any data manipulation could be interpreted as being 'statistics'.
      Also, from what we know, the human brain does exactly that, too. Lots of correlations and sums of different action-potential cascades. If something is above a certain threshold, the neuron fires.
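      The parent's point fits in a few lines: an artificial "neuron" is just a weighted sum compared against a threshold, which is also the textbook cartoon of an action potential. A toy illustration (the inputs, weights, and threshold are arbitrary numbers):

```python
# A toy threshold "neuron": weighted sum of inputs, fire if above threshold.
# Inputs, weights, and the threshold are arbitrary illustrative numbers.
import numpy as np

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs exceeds the threshold."""
    return int(np.dot(inputs, weights) > threshold)

x = np.array([0.9, 0.1, 0.4])        # incoming activations
w = np.array([0.7, -0.2, 0.5])       # learned correlation strengths
print(neuron(x, w, threshold=0.5))   # -> 1: the "if statement" fires
```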
    • by epine ( 68316 )

      Still just statistics.

      News flash: it's statistics all the way down.

      Humans have no idea what we are correlating on, either. This has been amply demonstrated in neurology over and over again.

What we do have, however, is a highly specialized module for Making Shit Up, which, like the visual cortex, pretty much never takes a day off. It's central to why humans are fixated on communicating so much of life in narrative form. Sometimes the MSU is onto something, other times you've checked into hotel Bats

    • by jbengt ( 874751 )

      Did you know that if you feed a pigeon in a box at random times it starts to associate feeding time with whatever it happened to be doing, and so repeat that? Whether that's bobbing its head, pecking at the floor, or looking a certain direction.

      Well, it works a lot better if you don't randomize the times, but rather feed them at certain regular intervals - not so short that they don't reproduce the behavior, and not so long that they never get rewarded when they do reproduce the behavior.
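      For what it's worth, the superstition dynamic is easy to simulate. In the toy model below, food arrives on a fixed timer regardless of behavior, yet whichever action happens to coincide with the reward gets credited and therefore repeated; the schedule and weights are purely illustrative:

```python
# Toy model of superstitious conditioning: food arrives on a fixed timer,
# independent of behavior, yet the action that coincides with it gets
# credited and repeated. Schedule and weights are purely illustrative.
import random

random.seed(1)
actions = ["bob head", "peck floor", "look left"]
prefs = {a: 1.0 for a in actions}  # the pigeon's action preferences

for t in range(1, 1001):
    # pick an action in proportion to current preference
    action = random.choices(actions, weights=list(prefs.values()))[0]
    if t % 50 == 0:           # food every 50 ticks, no matter what it did
        prefs[action] += 1.0  # superstition: credit whatever it was doing

print(prefs)  # one arbitrary action ends up strongly "believed in"
```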

  • MIT's PR department overhypes absolutely everything, to the point of making it next to impossible to understand what is actually new and valuable research. Well, I guess that keeps the money flowing in, but sometimes it would be nice if they didn't mostly act as an obfuscation layer between their researchers and the reader.

  • Now we just need the neural network to see further so anyone can really been far even as decided to use even go want to do look to concepts.

  • "Recognize the World It Sees Into Concepts" ?

    Either the editor is illiterate and made a gross grammar error, or the headline should read:

    "Reorganize the World It Sees Into Concepts"

    In defense of the editor, they usually just copy stuff from a source without reading it. Perhaps the source was wrong this time.
