Turning Neural Networks Upside Down Produces Psychedelic Visuals
cjellibebi writes: Neural networks that were designed to recognize images also hold some interesting capabilities for generating them. If you run them backwards, they turn out to be capable of enhancing existing images to resemble the images they were trained to recognize. The results are pretty trippy. A Google Research blog post explains the research in great detail, with pictures and even a video. The Guardian has a more digestible article for the less tech-savvy.
They've nailed it (Score:5, Informative)
Statute of limitations (Score:2)
I don't know about other states, but this page [findlaw.com] claims that the statute of limitations for a misdemeanor in Indiana is two years.
Re: (Score:3)
It's a crime to be a test subject for experimental pharmaceuticals?
Re: (Score:3, Interesting)
That makes an argument as to what psychedelics actually do.
Re:They've nailed it (Score:5, Insightful)
Re:They've nailed it (Score:4, Interesting)
It's true. And we know the physical texture does extend fractal-wise into infinity... I'm thinking the opposite happens when one is not on psychedelics and is stressed out: texture details disappear if they are not relevant to the stressful situation. (E.g. a sponge becomes a yellow block, no holes or pores.) As if psychedelics open the valve and stress closes it, like many people have said.
Better pictures? (Score:2)
Re: (Score:2)
Re: (Score:2)
> up-close experiences with heavy psychedelics
yuk, i try to steer clear of obese mystics.
Re: (Score:2)
WHY the hell is it on Slashdot?
Re: (Score:3)
Re: (Score:2)
That's ... not even close to what they did.
Re: (Score:2)
Re: (Score:2)
No, no it couldn't. It's pretty clear from the article that the summary is complete nonsense. They don't run them backwards (whatever that's supposed to mean), nor do the NNs produce images.
In the case of the random input image, as an analogy, think of the NN as a fitness function in a genetic algorithm. The photo input images work similarly.
It makes pretty pictures, but as far as psychedelic experiences are concerned, there is absolutely no knowledge to be gained here.
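To make the fitness-function analogy above concrete, here is a toy sketch in plain Python. Everything in it is hypothetical: score_banana is a stand-in for "how strongly the trained network says banana", and the "images" are just lists of pixel values. The point is only that the network scores candidates while mutation and selection do the generating:

    import random

    def score_banana(image):          # placeholder fitness; any classifier output would do
        return -sum((p - 0.5) ** 2 for p in image)

    def mutate(image, rate=0.05):
        return [min(1.0, max(0.0, p + random.uniform(-rate, rate))) for p in image]

    # 50 tiny random "images" of 64 pixel values each
    population = [[random.random() for _ in range(64)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=score_banana, reverse=True)
        survivors = population[:10]                       # keep the best scorers
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    best = max(population, key=score_banana)

Gradient-based tweaking, as described in the actual blog post, plays the same role as the mutation step, just far more efficiently than random search.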
Wrong, moron (Score:1)
It makes pretty pictures, but as far as psychedelic experiences are concerned, there is absolutely no knowledge to be gained here.
Speak for yourself, dumbass.
Re: (Score:2)
For example, he dismisses the idea that ad hominem includes the case where one party tries to discredit the other party, thereby attempting to undermine the credibility of that party's argument. It isn't a direct ad hominem, made as an explicit part of the first party's argument, but it is still an indirect attempt to undermine the second party's argument, and therefore effectively part of the first party's argument.
So since it is part of the first part
I tried this myself (Score:4, Funny)
I ran slashdot backwards through the DICE marketing bullshit neural network and got www.soylentnews.org
Re: (Score:2)
So it's attractive but ultimately derivative with no new content except what you ascribe to it?
oot, ti deirt i (Score:2)
Now THAT's art! (Score:2)
Some of those pictures are just noise, but some of them are brilliant.
Also, I'll go so far as to say it's not something a human could do. Sure, a human can do 'similar' things, but I'm betting some of the patterns are more precise than that. (For a 'barely related' but spiritually equivalent example... a human couldn't draw an actual Mandelbrot set.)
Re:Now THAT's art! (Score:5, Informative)
I guess we could argue that it's "similar" (i.e. not the same), but it's pretty darn close
The Mandelbrot set is a very different animal from what these algorithms are doing. I agree that a human couldn't draw a Mandelbrot set, but in some sense this work is much less precise and analytic than something like the Mandelbrot set.
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Alex Gray ...
Wasn't that the smart parrot?
Re: (Score:1)
Wow, they trained a computer to apply random amounts of Photoshop filters onto existing images. I suppose any 12 year old with a Tumblr account couldn't do that.
Re: (Score:2)
Wow, they trained a computer
If they actually did that they'd have an artificial intelligence, but maybe I just overheard the "whoosh!"
Re: (Score:2)
I've seen things in the clouds before. I've also heard people wax poetic about how it's a uniquely human thing.
Re: (Score:2)
Some people seem to have some strange ideas about what is being done here. The reality is far less interesting:
Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana [...] By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
Even less exciting, naturally, are the swirly ones and the ones with the mixed images. (I presume the anthropomorphism in the quote is there to tickle the Kurzweil nuts and "science" reporters.) Very little difference, but it makes prettier pictures: start with a photo instead of noise and tweak it differently.
In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
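Both quoted procedures boil down to the same trick: treat the input image itself as the thing being optimized and nudge its pixels by gradient ascent on some layer's activations. Here is a minimal sketch of that idea, assuming PyTorch and an off-the-shelf pretrained VGG16 as stand-ins; the actual work used Google's own models plus image priors, jitter and multi-scale "octaves", all omitted here:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Off-the-shelf classifier used only as a feature extractor (torchvision >= 0.13 API).
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    for p in model.parameters():
        p.requires_grad_(False)

    layer_index = 20   # arbitrary choice; deeper layers give more "object-like" enhancements

    # Start from a photo; use torch.rand(1, 3, 224, 224) instead for the noise-seeded variant.
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0).requires_grad_(True)

    optimizer = torch.optim.Adam([img], lr=0.01)
    for step in range(100):
        optimizer.zero_grad()
        x = img
        for i, layer in enumerate(model):
            x = layer(x)
            if i == layer_index:
                break
        # Gradient ascent: make whatever this layer detected even stronger.
        loss = -x.norm()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            img.clamp_(0, 1)   # keep pixel values plausible

Choosing an earlier layer_index tends to amplify edges and simple ornament-like textures, while deeper layers pull out whole objects, which matches the layer-by-layer description in the quote.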
Don't read the comments after reading the article. You'll lose all faith in humanity.
Next up:
Wow! (Score:5, Funny)
The slashdot UI produces psychedelic visuals even without any artificial or natural intelligence.
Every time I come here, there are icons all over the place, in the middle of the text, the title bar shows random icons or text and I'm not even on beta.
Not to mention the dupes and stupid articles, and don't get me started on the videos.
Amazing, need to compare after more training (Score:2)
They should train the network more and see how it changes these outputs.
Or, start out with a new network and see how things change as it becomes more competent.
This is awesome.
Re: (Score:2)
They should train the network more and see how it changes these outputs.
Training it more on the same data would make little difference. You would need to train it with different images. Most image recognition using deep learning is done using layered RBMs [wikipedia.org]. You actually train an RBM by shoving data through it backwards, to get it roughly working, and then fine tune it with backpropagation [wikipedia.org].
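As a concrete illustration of that "backwards" training pass, here is a bare-bones numpy sketch of a single contrastive-divergence (CD-1) weight update for an RBM. Biases are omitted, and all sizes and learning rates are illustrative assumptions, not anything from the article:

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 784, 128, 0.01
    W = rng.normal(0, 0.01, size=(n_visible, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0):
        # Up: hidden activations driven by the data.
        h0_prob = sigmoid(v0 @ W)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Down ("backwards"): reconstruct the visible units from the hidden sample.
        v1_prob = sigmoid(h0 @ W.T)
        # Up again, from the reconstruction.
        h1_prob = sigmoid(v1_prob @ W)
        # Weight update: positive phase minus negative phase.
        return lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))

    v = (rng.random(n_visible) < 0.5).astype(float)   # a fake binary "image"
    W += cd1_update(v)

Stacking several RBMs trained this way and then fine-tuning the whole stack with backpropagation is the layer-wise pretraining recipe the parent is referring to.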
Questions (Score:2)
Some questions, in case anyone here knows:
What kind of size are these neural nets, typically? As in how many bytes would it take to define one?
I vaguely get the idea of neural nets, but how do you apply them to images (or vice versa, rather)? Does the input layer consist of one "neuron" per pixel, or what?
Re: (Score:2)
Here is my best answer. I am not active in the field, so the answer is a combination of knowledge, extrapolation and intuition. I think it provides some of the kind of info you are curious about.
Typically the first layer of nodes will receive input from feature detectors run on the image, for example edge sharpness and orientation calculations. This will be at a range of scales that are small compared to the overall picture.
This first layer will connect and provide weighted values to another layer or two that is
Re:Questions (Score:4, Interesting)
I know nothing about these NNs, but the NNs used for the ImageNet Competition [image-net.org] typically have a few hundred thousand neurons. This is to place images of about 1M pixels into one of 1000 categories. Most image recognition NNs are "convolutional" which means they are tiled. So each neuron in the first layer is only looking at a small part of the image. Later layers will pool the results from the convolutional layer into extended features. This cuts way down on both the size, and the amount of computation needed. The number of layers will typically be 5 or 6. Even more layers should, in theory, help, but deeper networks are very hard to train. The total size in bytes would be maybe 100MB, but that will vary widely depending on the implementation. I don't know how big Baidu's implementation was (they were the winner, but they cheated). NNs can run fast, categorizing hundreds or thousands of images per second. But it can take a long time to train them, days or weeks on a GPU farm. Fortunately the training is highly parallel.
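To put a rough number on the "size in bytes" point, you can just count parameters and multiply by four bytes (float32). The stack below is an illustrative assumption, loosely AlexNet-shaped, not any particular competition entry:

    import torch.nn as nn

    net = nn.Sequential(
        # Convolutional layers: each unit only looks at a small patch of the image.
        nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(6),
        nn.Flatten(),
        # The dense layers at the end dominate the parameter count.
        nn.Linear(384 * 6 * 6, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),
    )

    n_params = sum(p.numel() for p in net.parameters())
    print(f"{n_params:,} parameters, ~{n_params * 4 / 1e6:.0f} MB at 4 bytes each")

Even this toy stack lands in the tens of millions of parameters, a couple of hundred MB at float32, and almost all of it sits in the two fully connected layers at the end, which is part of why sizes vary so widely between implementations.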
Re: (Score:2)
Thanks. That is great info. I am happy about my guess. I may be only an order of magnitude off on the # of nodes and I had 5-7 layers [input, 3-5, output] :)
When I was in college, the 100-level astrophysics class [rocks in space] taught estimation and actually asked how many trees are in Chicago on the final. I thought this was probably one of the most valuable things you could teach liberal-arts students in a science class. One of my co-workers did very well in his hiring interview by doing a very good estimation of th
Is this your brain on drugs? (Score:5, Insightful)
Re:Is this your brain on drugs? (Score:5, Interesting)
Re: (Score:1)
Yes, sort of, but the brain is a heck of a lot more complicated than that. I think a lot of the psychedelic effects are caused by the 'synchroniser' and other top-level filters in our visual systems shutting down, allowing us to see some of how our brains actually generate our view of the world. The brain has at least two completely separate visual processing cores: one for real-time motion and danger intercept, and a far more complex colour-vision, world- and object-interpretation system. There is probably a t
Re: (Score:3)
It's likely the Convolutional neural network algorithm.
https://en.wikipedia.org/wiki/... [wikipedia.org]
It's come along well enough that you can get premade ones to invent Magic: The Gathering cards or recognize dogs and highly dog-like things in pictures. It's useful even completely independently of the AI research as such.
Re: (Score:2)
Prints (Score:5, Interesting)
I realize these are just the output of a funnel-run-backwards, but they'd make awfully cool posters.
Re: (Score:2)
I was wondering the same thing. I don't know if I'd want the dreamscape images or more disturbing images. Both make me stop and think about the image.
Re: (Score:2)
those photos do NOT look anything like what you see when tripping
Of course. If the dose is high enough, these pictures only look similar to the onset of a trip.
Re: (Score:2, Interesting)
I have seen things very similar to these on some level. My pattern-recognition algorithms aren't fixated on dogs, but the way some of these images look is very familiar. Translucent objects popping out of feedback-looped noise is real. Fractal-repeated infinite patterns are real. Sky turning into crinkly paint swirls is real. Empty space around objects (like the Google logo) showing iridescent patterns is real. Now, there is a caveat that each experience I've had was as different from the previous one as it
How close? (Score:2)
6 blind men analyze an elephant (Score:2)
Less intriguing: to consider that similar networks (especially once giving "recommendations" to unquestioning end users) might ascribe e.g. criminal propensity or lack o
Some of those look like... (Score:1)
the paintings that one dude did of cats as he got more and more schizophrenic
Re: (Score:3)
Louis Wain [dangerousminds.net]:
There has been some speculation that Wain’s schizophrenia was caused by toxoplasma gondii—a parasite found in cat’s excreta. Whatever began the illness, Wain was incarcerated in various asylums and mental hospitals for years at a time. The changes to his life were reflected in his art. His paintings of cats took on a radiance and vitality never before seen: the fur sharp and colorful, the eyes brilliant, and a wired sense of unease of disaster about to unfold.
But these paintings look normal compared to the psychedelic fractals and spirals that followed. Though these are beautiful images, startling, stunning, shocking—they suggest a mind that has broken reality down to its atomic level.
Though it is believed that Louis Wain’s paintings followed a direct line towards schizophrenia, it is actually not known in which order Wain painted his pictures. Like his finances, Wain’s mental state was erratic throughout his life, which may explain the changes back and forth between cute and cuddly and abstract and psychedelic. No matter, they are beautiful, kaleidoscopic, disturbing and utterly mesmerizing.
Re:does this explain dreaming and the "mind's eye" (Score:4, Interesting)
To some extent yes, but it's likely way more complicated than that. But, yeah, without sensory input we start hallucinating. It's like asking your senses repeatedly "is that real?" and the senses always saying yes rather than no. So you drift off into whatever, because that's real, and therefore these other things are too.
It's a bit like that old parlor game where you ask for a volunteer to guess somebody's dream by asking yes/no questions, send them into the other room, and then tell the people still in the room that the answer is yes if the last letter of the last word of the question falls in A-M and no if it falls in N-Z. The volunteer inevitably "reconstructs" a dream involving all manner of perverted stuff as the crowd confirms and rejects bits at random, inventing a dream out of their own head rather than somebody else's.
There are also everyday hallucinations, like seeing detail where it doesn't exist, movement where it doesn't exist, and hallucinating something to fill in the big blind spot in our eyes.
Definite resemblance to Wain's cats (Score:2)
I see an intriguing resemblance to Wain's cats [wikimedia.org]--paintings made by Louis Wain, while going insane, perhaps from schizophrenia.
Implications for the demoscene (Score:3)
Does anyone know if these images can be created in real time? If so, demo-coders will pounce on the algorithm and have an absolute field day! Demos will never quite be the same again. Another idea could be an easter egg for a video game: if the player has just ended a very intense gaming session, the visuals of the frontend (even if only the background) could have this algorithm applied to them, just to see if the player notices anything out of the ordinary (after a particularly intense session, this will be harder to spot immediately).
I know that training a neural network can take a very long time, but using it to recognise images can be done very quickly. If a standard CPU or GPU cannot do this in real time, would the more dedicated demo-coders start creating their own FPGAs / ASICs designed just for this task, and bring their creations along to demoparties?
Reverse- engineering psychedelics (Score:2)
I've never taken any psychedelics myself (so I guess you could call me a psychedelic layperson), but I have read several experiences from people who have. One of the things my brain tends to do during its 'down-time' is to try to interpret these experiences (from the point of view of someone who's not had any first-hand psychedelic experiences) and, using my knowledge of neural networks and other geeky things, to try to figure out what is really going on, and hopefully, in the process, to figure out the natu
Re: (Score:2)
I think this comment on another article [slashdot.org] is of relevance here.
One possible way of establishing frames of reference (e.g. trying to explain the taste of an orange to someone who's never tasted a citrus fruit) is to figure out how to manually stimulate neurons (this does not necessarily involve brain implants; maybe this can be done purely by meditation or something) and finding out which ones to stimulate for oranges and generic citrus fruit. Then, we could develop vocabulary that can be used to generalise co
Used in image upscaling (Score:2)
I have seen this used for upscaling image resolution.
The neural net is trained on a certain type of image (comics/manga in the example below). It then uses its knowledge of how such a picture should look to fill in missing information and remove artifacts during the upscaling process. Kind of like how the nets in the story try to see their animals/objects in clouds and static.
The result can be really amazing if used on the right type of image. I got some perfect results increasing the image size 16x from
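For anyone curious what such an upscaler looks like structurally, here is a minimal SRCNN-style sketch, assuming PyTorch. The layer sizes are illustrative and this is not any particular tool's actual architecture: the low-res image is first upscaled conventionally (bicubic), then a small convolutional net, trained on the target style of image, predicts a correction:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyUpscaler(nn.Module):
        """SRCNN-style sketch: bicubic upscale, then learn to repair the result."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=5, padding=2),
            )

        def forward(self, low_res, scale=2):
            coarse = F.interpolate(low_res, scale_factor=scale,
                                   mode="bicubic", align_corners=False)
            # The net predicts a correction on top of the blurry bicubic result.
            return (coarse + self.net(coarse)).clamp(0, 1)

    # Training would minimise e.g. the L1/L2 distance to ground-truth high-res images.
    model = TinyUpscaler()
    fake_low_res = torch.rand(1, 3, 64, 64)
    print(model(fake_low_res).shape)   # torch.Size([1, 3, 128, 128])

Whatever detail the net "fills in" comes entirely from the statistics of its training images, which is why it works so well on one style (e.g. line art) and can look wrong on others.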
Re: (Score:2)