
Turning Neural Networks Upside Down Produces Psychedelic Visuals

cjellibebi writes: Neural networks that were designed to recognize images also hold some interesting capabilities for generating them. If you run them backwards, they turn out to be capable of enhancing existing images to resemble the things they were trained to recognize. The results are pretty trippy. A Google Research blog post explains the research in great detail. There are pictures, and even a video. The Guardian has a digested article for the less tech-savvy.
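
For readers who want a concrete sense of what "running the network backwards" amounts to, here is a minimal Python sketch of the general technique the blog post describes: pick a layer of a pretrained image classifier and nudge the input photo by gradient ascent so that whatever that layer detects gets amplified. The framework (PyTorch), the model (torchvision's GoogLeNet), the chosen layer, and all the constants are illustrative assumptions, not the values used in the research.

    # Minimal sketch: amplify whatever a chosen layer "sees" by gradient ascent on
    # the input image. Model, layer, step size, and iteration count are arbitrary.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.googlenet(weights="DEFAULT").eval()   # torchvision >= 0.13

    acts = {}                                             # stash one layer's activations
    def hook(_module, _inp, out):
        acts["layer"] = out
    model.inception4c.register_forward_hook(hook)

    img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open("photo.jpg")).unsqueeze(0)
    img.requires_grad_(True)

    for _ in range(100):                                  # "hundreds of iterations"
        model(img)
        loss = acts["layer"].norm()                       # strength of the layer's response
        loss.backward()
        with torch.no_grad():
            img += 1.5 * img.grad / (img.grad.abs().mean() + 1e-8)  # normalized ascent step
            img.clamp_(0, 1)
            img.grad.zero_()

    T.ToPILImage()(img.squeeze(0).detach()).save("dream.jpg")
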
This discussion has been archived. No new comments can be posted.

  • They've nailed it (Score:5, Informative)

    by David Spencer ( 3006427 ) on Friday June 19, 2015 @05:16PM (#49949153)
    I've had a few up-close experiences with heavy psychedelics. Those photos took me right back. Wonderful insights!
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      That says something about what psychedelics actually do.

    • by qpqp ( 1969898 ) on Friday June 19, 2015 @06:08PM (#49949375)
      While I agree that the photos provoke a similar reaction on a superficial glance, what struck me while on psychedelics was the actual texture of what you see, which extends fractal-wise when concentrated upon.
      • Re:They've nailed it (Score:4, Interesting)

        by iMadeGhostzilla ( 1851560 ) on Friday June 19, 2015 @08:54PM (#49950071)

        It's true. And we know the physical texture does extend fractal-wise into infinity... I'm thinking the opposite happens when one is not on psychedelics and is further stressed out: texture details disappear if they are not relevant to the stressful situation. (E.g. a sponge becomes a yellow block, no holes or pores.) As if psychedelics open the valve and stress closes it, like many people have said.

    • Are there any higher resolution pictures available? I'm sitting on a crappy 1024x600 monitor now, and these pictures don't even max it out. Seriously, why are these pictures in such shitty resolution?
      • by Ed Avis ( 5917 )
        Perhaps the input images they used were also low-res? If they had used higher resolution photos it would have taken much more computing time to run them through the neural network for hundreds of iterations. I guess the same neural networks could also enhance the resolution of the images by being fed a scaled-up version and outputting it with more (imagined) detail.
    • > up-close experiences with heavy psychedelics

      yuk, i try to steer clear of obese mystics.

    • If all OP got out of that was that layers of neural nets make cool pictures...

      WHY the hell is it on Slashdot?
      • Because reverse-engineering psychedelics is news for nerds.
        • by narcc ( 412956 )

          That's ... not even close to what they did.

          • But it could have great implications in the ongoing quest to reverse engineer the psychedelic experience on a purely intellectual level.
            • by narcc ( 412956 )

              No, no it couldn't. It's pretty clear from the article that the summary is complete nonsense. They don't run them backwards (whatever that's supposed to mean), nor do the NNs produce images.

              In the case of the random input image, think of the NN, by analogy, as the fitness function in a genetic algorithm (there's a toy sketch of that analogy below). The photo input images work similarly.

              It makes pretty pictures, but as far as psychedelic experiences are concerned, there is absolutely no knowledge to be gained here.
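
              To make that analogy concrete, here is a toy sketch, purely illustrative: treat a pretrained classifier's score for a target class as the fitness function and evolve a population of noise images toward it. The real work uses gradients rather than evolution, and the model, class index, and constants below are arbitrary.

                  # Toy illustration of the "NN as fitness function" analogy. Not what the
                  # researchers did; model, class index, and constants are arbitrary.
                  import torch
                  import torchvision.models as models

                  model = models.resnet18(weights="DEFAULT").eval()    # torchvision >= 0.13
                  TARGET_CLASS = 954                                   # ImageNet index for "banana"
                  POP, KEEP, GENS = 32, 8, 200

                  def fitness(batch):
                      with torch.no_grad():
                          return torch.softmax(model(batch), dim=1)[:, TARGET_CLASS]

                  population = torch.rand(POP, 3, 224, 224)            # start from random noise
                  for _ in range(GENS):
                      scores = fitness(population)
                      elite = population[scores.topk(KEEP).indices]    # keep the fittest images
                      children = elite.repeat(POP // KEEP, 1, 1, 1)    # refill the population
                      children += 0.02 * torch.randn_like(children)    # random mutation
                      population = children.clamp(0, 1)

                  best = population[fitness(population).argmax()]      # most "banana-like" image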

              • It makes pretty pictures, but as far as psychedelic experiences are concerned, there is absolutely no knowledge to be gained here.

                Speak for yourself, dumbass.

          • Alas, Tennant's writings about baloney also contain their fair share of baloney.

            For example, he dismisses the idea that ad hominem includes the case where one party tries to discredit the other party personally, thereby undermining the credibility of that party's argument. It isn't a direct ad hominem, in the sense of being an explicit part of the first party's argument, but it is still an indirect attempt to undermine the second party's argument, and is therefore effectively part of the first party's argument.

            So since it is part of the first part
  • by Anonymous Coward on Friday June 19, 2015 @05:19PM (#49949165)

    I ran slashdot backwards through the DICE marketing bullshit neural network and got www.soylentnews.org

    • by Sowelu ( 713889 )

      So it's attractive but ultimately derivative with no new content except what you ascribe to it?

  • Some of those pictures are just noise, but some of them are brilliant.

    Also, I'll go so far as to say it's not something a human could do. Sure, a human can do 'similar' things, but I'm betting some of the patterns are more precise than that. (For a 'barely related' but spiritually equivalent example... a human couldn't draw an actual Mandelbrot set.)

    • Re:Now THAT's art! (Score:5, Informative)

      by captnjohnny1618 ( 3954863 ) on Friday June 19, 2015 @05:43PM (#49949261)
      A human can't do it? Alex Grey begs to differ [google.com].

      I guess we could argue that it's "similar" (i.e. not the same), but it's pretty darn close ;-).

      The Mandelbrot set is a very different animal from what these algorithms are doing. I agree that a human couldn't draw a Mandelbrot set, but in some sense this work is much less precise and analytic than something like a Mandelbrot set.
    • by Anonymous Coward

      Wow, they trained a computer to apply random amounts of Photoshop filters to existing images. As if any 12-year-old with a Tumblr account couldn't do that.

      • by qpqp ( 1969898 )

        Wow, they trained a computer

        If they actually did that they'd have an artificial intelligence, but maybe I just overheard the "whoosh!"

    • I've seen things in the clouds before. I've also heard people wax poetic about how it's a uniquely human thing.

    • by narcc ( 412956 )

      Some people seem to have some strange ideas about what is being done here. The reality is far less interesting:

      Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana [...] By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.

      Even less exciting, naturally, are the swirly ones and the ones with the mixed images. (I presume the anthropomorphism in the quote is there to tickle the Kurzweil nuts and "science" reporters.) Very little difference, but it makes prettier pictures -- start with a photo instead of noise, and tweak it differently.

      In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
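
      To make the first quote above concrete, here is a rough sketch of the "banana from noise" recipe: gradient ascent on the target class score, with an occasional blur standing in for the "neighboring pixels should be correlated" prior. The model, the class index, and every constant are my own illustrative choices, not the researchers' values.

          # Rough sketch of the quoted procedure: start from noise and nudge the image
          # toward whatever the classifier calls "banana", with a crude smoothness prior.
          import torch
          import torchvision.models as models
          import torchvision.transforms.functional as TF

          model = models.vgg16(weights="DEFAULT").eval()   # torchvision >= 0.13
          TARGET = 954                                     # ImageNet "banana"

          img = torch.rand(1, 3, 224, 224, requires_grad=True)
          for step in range(300):
              score = model(img)[0, TARGET]                # how banana-ish the net finds it
              score.backward()
              with torch.no_grad():
                  img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
                  img.grad.zero_()
                  if step % 10 == 0:                       # crude stand-in for the image prior:
                      img.copy_(TF.gaussian_blur(img, kernel_size=5))   # keep neighbors correlated
                  img.clamp_(0, 1)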

      Don't read the comments after reading the article. You'll lose all faith in humanity.

      Next up:

  • Wow! (Score:5, Funny)

    by nospam007 ( 722110 ) * on Friday June 19, 2015 @05:33PM (#49949217)

    The slashdot UI produces psychedelic visuals even without any artificial or natural intelligence.
    Every time I come here, there are icons all over the place, in the middle of the text, the title bar shows random icons or text and I'm not even on beta.
    Not to mention the dupes and stupid articles, and don't get me started on the videos.

  • They should train the network more and see how it changes these outputs.

    Or, start out with a new network and see how things change as it becomes more competent.

    This is awesome.

    • They should train the network more and see how it changes these outputs.

      Training it more on the same data would make little difference. You would need to train it with different images. Most image recognition using deep learning is done using layered RBMs [wikipedia.org]. You actually train an RBM by shoving data through it backwards, to get it roughly working, and then fine tune it with backpropagation [wikipedia.org].
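
      For anyone wondering what "shoving data through it backwards" looks like for an RBM, here is a rough numpy sketch of a single contrastive-divergence (CD-1) update, assuming binary units; the layer sizes, learning rate, and data are made up for illustration.

          # One CD-1 update for a binary RBM: go up to the hidden units, run backwards to
          # reconstruct the visibles, go up once more, and nudge the weights toward the
          # data statistics and away from the reconstruction statistics.
          import numpy as np

          rng = np.random.default_rng(0)
          n_visible, n_hidden, lr = 784, 256, 0.01
          W = 0.01 * rng.standard_normal((n_visible, n_hidden))
          b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          def cd1_update(v0):                    # v0: a batch of binary visible vectors
              global W, b_v, b_h
              h0_prob = sigmoid(v0 @ W + b_h)                   # upward pass
              h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0  # sample hidden states
              v1_prob = sigmoid(h0 @ W.T + b_v)                 # backward pass (reconstruction)
              h1_prob = sigmoid(v1_prob @ W + b_h)              # upward pass again
              n = v0.shape[0]
              W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
              b_v += lr * (v0 - v1_prob).mean(axis=0)
              b_h += lr * (h0_prob - h1_prob).mean(axis=0)

          # cd1_update(binarized_image_batch)    # hypothetical (n, 784) binary batch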

  • Some questions, in case anyone here knows:

    What kind of size are these neural nets, typically? As in how many bytes would it take to define one?

    I vaguely get the idea of neural nets, but how do you apply them to images (or vice versa, rather)? Does the input layer consist of one "neuron" per pixel, or what?

      Here is my best answer. I am not active in the field, so the answer is a combination of knowledge, extrapolation and intuition, but I think it provides some of the kind of info you are curious about.

      Typically the first layer of nodes will receive input from feature detectors run on the image, for example edge sharpness and orientation calculations. These will be at a range of scales that are small compared to the overall picture.

      This first layer will connect to and provide weighted values to another layer or two that is

    • Re:Questions (Score:4, Interesting)

      by ShanghaiBill ( 739463 ) on Friday June 19, 2015 @09:09PM (#49950123)

      I know nothing about these NNs, but the NNs used for the ImageNet Competition [image-net.org] typically have a few hundred thousand neurons. This is to place images of about 1M pixels into one of 1000 categories. Most image recognition NNs are "convolutional", which means they are tiled, so each neuron in the first layer is only looking at a small part of the image. Later layers will pool the results from the convolutional layer into extended features. This cuts way down on both the size and the amount of computation needed.

      The number of layers will typically be 5 or 6. Even more layers should, in theory, help, but deeper networks are very hard to train. The total size in bytes would be maybe 100MB, but that will vary widely depending on the implementation. I don't know how big Baidu's implementation was (they were the winner, but they cheated).

      NNs can run fast, categorizing hundreds or thousands of images per second. But it can take a long time to train them, days or weeks on a GPU farm. Fortunately the training is highly parallel.
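
      As a back-of-the-envelope companion to those numbers, here is a hedged sketch of how you would count a network's parameters and convert that to bytes; the little convolutional net below is invented for illustration and is nothing like an actual ImageNet winner.

          # Count the parameters of a small, invented convolutional net and estimate its
          # size in bytes at 32-bit precision. Architecture is illustrative only.
          import torch.nn as nn

          net = nn.Sequential(
              nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
              nn.MaxPool2d(2),
              nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
              nn.MaxPool2d(2),
              nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              nn.Linear(256, 1000),              # 1000 ImageNet-style categories
          )

          n_params = sum(p.numel() for p in net.parameters())
          print(f"{n_params:,} parameters, ~{n_params * 4 / 1e6:.1f} MB as 32-bit floats")

      This toy net comes out well under 100 MB; in real ImageNet-era models it is mostly the large fully connected layers that push the totals into the hundreds of megabytes.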

      • Thanks. That is great info. I am happy about my guess. I may be only an order of magnitude off on the # of nodes and I had 5-7 layers [input, 3-5, output] :)

        When I was in college, the 100-level astrophysics class [rocks in space] taught estimation, and actually asked how many trees there are in Chicago on the final. I thought this was probably one of the most valuable things you could teach liberal arts students in a science class. One of my co-workers did very well on his hiring interview by doing a very good estimation of th

  • by jscottk ( 4016937 ) on Friday June 19, 2015 @05:50PM (#49949301)
    This makes me wonder if a similar process is occurring in the brain of someone on a psychedelic. Are the compounds stimulating pattern recognition feedback loops from the inside out, causing people to see their imaginations manifested in the fuzz?
    • by vix86 ( 592763 ) on Friday June 19, 2015 @06:26PM (#49949467)
      There is a guy who wrote an essay some years ago that suggested as much. He posited that drugs like psilocybin basically overload the brain and cause it to form feedback loops. Many of the effects you can experience on hallucinogens also suggest as much. Closed-eye visuals, for instance, are basically the "lights" you see when you push on your eyeballs, just amplified and put into a feedback loop. Thought loops are common on hallucinogens as well; I imagine they're the result of the same thing.
        Yes, sort of, but the brain is a heck of a lot more complicated than that. I think a lot of the psychedelic effects are caused by the 'synchroniser' and other top-level filters in our visual systems shutting down, allowing us to see some of how our brains actually generate our view of the world. The brain has at least two completely separate visual processing cores: one for real-time motion and danger intercept, and a far more complex colour-vision world and object interpretation system. There is probably a t

    • It's likely the Convolutional neural network algorithm.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      It's come along well enough that you can get premade ones to invent Magic: The Gathering cards or recognize dogs and highly dog-like things in pictures. It's useful even completely independently of the AI research as such.

    • That's pretty much exactly what people have been saying they do for the past 40-50 years, if not longer. I've tripped on several different psychedelics and these pictures are uncannily close to the effect... so ridiculously close that my jaw just kinda dropped when I saw it.
  • Prints (Score:5, Interesting)

    by lq_x_pl ( 822011 ) on Friday June 19, 2015 @05:51PM (#49949307)
    Any possibility that they will release higher-res versions of these images? Maybe sell some prints?
    I realize these are just the output of a funnel-run-backwards, but they'd make awfully cool posters.
    • by JimMcc ( 31079 )

      I was wondering the same thing. I don't know if I'd want the dreamscape images or more disturbing images. Both make me stop and think about the image.

  • http://cdn2.hubspot.net/hub/13... [hubspot.net]

    TFA: The results are intriguing—even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes. This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals.

    Less intriguing: to consider that similar networks (especially once giving "recommendations" to unquestioning end users) might ascribe e.g. criminal propensity or lack o

  • by Anonymous Coward

    the paintings that one dude did of cats as he got more and more schizophrenic

    • Louis Wain [dangerousminds.net]:

      There has been some speculation that Wain’s schizophrenia was caused by toxoplasma gondii—a parasite found in cat’s excreta. Whatever began the illness, Wain was incarcerated in various asylums and mental hospitals for years at a time. The changes to his life were reflected in his art. His paintings of cats took on a radiance and vitality never before seen: the fur sharp and colorful, the eyes brilliant, and a wired sense of unease of disaster about to unfold.

      But these paintings look normal compared to the psychedelic fractals and spirals that followed. Though these are beautiful images, startling, stunning, shocking—they suggest a mind that has broken reality down to its atomic level.

      Though it is believed that Louis Wain’s paintings followed a direct line towards schizophrenia, it is actually not known in which order Wain painted his pictures. Like his finances, Wain’s mental state was erratic throughout his life, which may explain the changes back and forth between cute and cuddly and abstract and psychedelic. No matter, they are beautiful, kaleidoscopic, disturbing and utterly mesmerizing.

  • I see an intriguing resemblance to Wain's cats [wikimedia.org]--paintings made by Louis Wain, while going insane, perhaps from schizophrenia.

  • by cjellibebi ( 645568 ) on Saturday June 20, 2015 @05:27AM (#49951069)

    Does anyone know if these images can be created in real-time? If so, demo-coders will pounce on the algorithm and have an absolute field-day! Demos will never quite be the same again. Another idea could be an easter-egg for a video-game where if the player has just ended a very intense gaming-session, the visuals of the frontend (even if only the background) could have this algorithm applied to them just to see if the player notices anything out of the ordinary (after a particularly intense session, this will be harder to spot immediately).

    I know that training a neural network can take a very long time, but using it to recognise images can be done very quickly. If a standard CPU or GPU cannot do this in realtime, would the more dedicated demo-coders start creating their own FPGAs / ASICs that are designed just for this task, and bringing their creations along to demoparties?

  • I've never taken any psychedelics myself (so I guess you could call me a psychedelic layperson), but have read several experiences from people who have. One of the things my brain tends to do during its 'down-time' is to try and interpret these experiences (from the point of view of someone who's not had any first-hand psychedelic experiences) and, using my knowledge of neural networks and other geeky things, to try to figure out what is really going on, and hopefully in the process, to figure out the natu

    • I think this comment on another article [slashdot.org] is of relevance here.

      One possible way of establishing frames of reference (e.g. trying to explain the taste of an orange to someone who's never tasted a citrus fruit) is to figure out how to manually stimulate neurons (this does not necessarily involve brain-implants - maybe this can be done purely by meditation or something) and finding out which ones to stimulate for oranges and generic citrus fruit. Then, we could develop vocabulary that can be used to generalise co

  • I have seen this used for upscaling image resolution.
    The neural net is trained on a certain type of image (comics/manga in the example below). It then uses its knowledge of how such a picture should look to fill in missing information and remove artifacts during the upscale process, kind of like how the nets in the story try to see their animals/objects in clouds and static.

    The result can be really amazing if used on the right type of image. I got some perfect results increasing the image size 16x from
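
    A minimal sketch of the kind of upscaler being described, under my own assumptions (an SRCNN-style approach: bicubic-upscale first, then let a small convolutional net trained on the target image style predict the missing detail); the architecture and training snippet are illustrative and don't correspond to any specific tool.

        # Sketch of the idea: bicubic-upscale, then let a small net restore detail.
        # Architecture, sizes, and the placeholder batch are illustrative only.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinySR(nn.Module):
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
                    nn.Conv2d(64, 32, 1), nn.ReLU(),
                    nn.Conv2d(32, 3, 5, padding=2),
                )
            def forward(self, low_res, scale=2):
                up = F.interpolate(low_res, scale_factor=scale, mode="bicubic", align_corners=False)
                return up + self.body(up)              # the net predicts the missing detail

        # Training pairs would be (downscaled crop, original crop) taken from the kind
        # of images you care about; the batch below is a random placeholder.
        model = TinySR()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        low, high = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 128, 128)
        loss = F.l1_loss(model(low), high)
        loss.backward(); opt.step()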
