Better Than JPEG? Researcher Discovers That Stable Diffusion Can Compress Images (arstechnica.com) 93
An anonymous reader quotes a report from Ars Technica: Last week, Swiss software engineer Matthias Buhlmann discovered that the popular image synthesis model Stable Diffusion could compress existing bitmapped images with fewer visual artifacts than JPEG or WebP at high compression ratios, though there are significant caveats. Stable Diffusion is an AI image synthesis model that typically generates images based on text descriptions (called "prompts"). The AI model learned this ability by studying millions of images pulled from the Internet. During the training process, the model makes statistical associations between images and related words, making a much smaller representation of key information about each image and storing them as "weights," which are mathematical values that represent what the AI image model knows, so to speak.
When Stable Diffusion analyzes and "compresses" images into weight form, they reside in what researchers call "latent space," which is a way of saying that they exist as a sort of fuzzy potential that can be realized into images once they're decoded. With Stable Diffusion 1.4, the weights file is roughly 4GB, but it represents knowledge about hundreds of millions of images. While most people use Stable Diffusion with text prompts, Buhlmann cut out the text encoder and instead forced his images through Stable Diffusion's image encoder process, which takes a low-precision 512x512 image and turns it into a higher-precision 64x64 latent space representation. At this point, the image exists at a much smaller data size than the original, but it can still be expanded (decoded) back into a 512x512 image with fairly good results.
While running tests, Buhlmann found that images compressed with Stable Diffusion looked subjectively better at higher compression ratios (smaller file size) than JPEG or WebP. In one example, he shows a photo of a candy shop that is compressed down to 5.68KB using JPEG, 5.71KB using WebP, and 4.98KB using Stable Diffusion. The Stable Diffusion image appears to have more resolved details and fewer obvious compression artifacts than those compressed in the other formats. Buhlmann's method currently comes with significant limitations, however: It's not good with faces or text, and in some cases, it can actually hallucinate detailed features in the decoded image that were not present in the source image. (You probably don't want your image compressor inventing details in an image that don't exist.) Also, decoding requires the 4GB Stable Diffusion weights file and extra decoding time. Buhlmann's code and technical details about his findings can be found on Google Colab and Towards AI.
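For readers who want to see roughly what the round trip looks like in code, here is a minimal sketch using the Hugging Face diffusers library's AutoencoderKL (the VAE component of Stable Diffusion). It illustrates the idea only and is not Buhlmann's actual pipeline: his method also quantizes the latents before storing them, and the file name here is just a placeholder.

    # Round-trip a 512x512 image through Stable Diffusion's VAE: encode to a
    # (1, 4, 64, 64) latent, then decode back to pixels. Sketch only; assumes
    # the "diffusers" library and omits Buhlmann's latent quantization step.
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        "CompVis/stable-diffusion-v1-4", subfolder="vae"
    )

    def to_tensor(img: Image.Image) -> torch.Tensor:
        arr = np.asarray(img.convert("RGB").resize((512, 512)), dtype=np.float32)
        return torch.from_numpy(arr / 127.5 - 1.0).permute(2, 0, 1).unsqueeze(0)

    def to_image(t: torch.Tensor) -> Image.Image:
        arr = ((t.squeeze(0).permute(1, 2, 0) + 1.0) * 127.5).clamp(0, 255)
        return Image.fromarray(arr.byte().cpu().numpy())

    with torch.no_grad():
        latent = vae.encode(to_tensor(Image.open("photo.png"))).latent_dist.mean
        # 4x64x64 latent values vs. 3x512x512 pixels in the original
        decoded = vae.decode(latent).sample

    to_image(decoded).save("roundtrip.png")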
The first question I ask about any new technology (Score:2)
Re:The first question I ask about any new technolo (Score:4, Interesting)
Re: (Score:2)
There are absolutely models available with this filter turned off.
Re:The first question I ask about any new technolo (Score:5, Funny)
There are absolutely models available with this filter turned off.
I have always wanted to date one of those models with the filter turned off.
Re: (Score:2)
Only the free online version, and I think it can be turned off.
You can make all the furry porn on your computer that you want.
Re: (Score:2)
Stable Diffusion has an NSFW filter, which often triggers on quite innocuous inputs. It cannot be turned off.
Ah. So it is useless for the main application. Obviously idiots at work there...
Re: (Score:2)
Re: (Score:2)
It can very easily and trivially be turned off.
Re: (Score:2)
The article says it can hallucinate extra details that weren't present in the original. So the answer is yes, if you like triple nipple chicks.
Re: (Score:2)
Re: (Score:2)
Hey ... ! (Score:3, Funny)
Actual news for nerds!
Next thing you know, the comments will all be technical in nature - and then the Universe will end ...
Re: (Score:2)
I'll do my part to help it end. I imagine the problem with ML/AI image compression, compared to an algorithmic one, is that you cannot predict what it may decide to omit (or include!) in some important image some day.
Or replace, for that matter -- say a formation of bomber planes with a school of fish, for some unfathomable reason.
Not surprising (Score:3)
If you've played around with textual inversion, you can train it to understand a new object from a handful of images, and the result fits in a 5 KB embedding. You can also just type "mona lisa" and get the original image back, so that's basically 100% compression.
There are some issues of course
* Computationally expensive to encode
* Requires the whole 4GB model to be available and in memory
* Computationally expensive to decode
Re: (Score:2)
Also, presumably this high compression rate only applies with images that have elements similar to things the AI has been trained with. Sort of like a text compression algorithm that is optimized for common English letter combinations ("and", "the", etc) and would have trouble compressing text from another language (or streams of random letters).
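That intuition is easy to demonstrate with any general-purpose compressor; a quick illustration with Python's zlib (nothing Stable Diffusion-specific):

    # "Likely" data (repetitive English) compresses far better than "unlikely"
    # data (uniform random bytes), which barely shrinks at all.
    import os
    import zlib

    english = b"the cat and the dog and the end " * 256
    noise = os.urandom(len(english))

    print(len(english), len(zlib.compress(english, 9)))  # tiny fraction of input
    print(len(noise), len(zlib.compress(noise, 9)))      # about the input size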
Re: (Score:2)
Short answer on what this compression scheme is going to do: convert images into cartoons.
Re: (Score:2)
Re: (Score:2)
That's a "problem" for all compression algorithms. It's not possible to compress some possible data without expanding other data. The key is to compress the likely data and expand the unlikely.
Re: (Score:2)
This allows you to have a guarantee that the compressed data won't be larger than the original (plus the escape marker).
Re: (Score:3, Insightful)
Adding the escape sequence is the expansion. In the unlikely case where no data can be compressed, the escape sequence itself will make the "compressed" data larger than the uncompressed data (by at least one bit).
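The escape-sequence scheme being discussed can be written down in a few lines; a toy sketch in Python (a hypothetical one-byte flag, not any real archive format):

    # One leading byte says whether the payload is compressed. Worst case the
    # output is exactly one byte larger than the input, which is the
    # unavoidable expansion described above.
    import zlib

    def pack(data: bytes) -> bytes:
        compressed = zlib.compress(data, 9)
        if len(compressed) < len(data):
            return b"\x01" + compressed   # payload is compressed
        return b"\x00" + data             # payload stored verbatim

    def unpack(blob: bytes) -> bytes:
        flag, payload = blob[:1], blob[1:]
        return zlib.decompress(payload) if flag == b"\x01" else payload

    assert unpack(pack(b"aaaa" * 1000)) == b"aaaa" * 1000
    assert unpack(pack(bytes(range(256)))) == bytes(range(256))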
Re: (Score:2)
Technically, "mona lisa" won't be 100% compressed, to store it you will need the "mona lisa" text PLUS the 4GB model. So is kind of a negative compression.
Re: (Score:2)
Just consider the model to be part of the decoder, and not the image! We don't count the size of the jpeg implementation in the image size after all.
Re: Not surprising (Score:2)
So like a really large (Score:2)
font file that does for pictures what a font does for text.
So if the "font" is swapped out, you get a distinct change in style?
Not compression as generally understood (Score:5, Interesting)
If you look at the heart emoji, the Stable Diffusion version is totally different from what is in the other pics. So it's as if Stable Diffusion noted "There's a heart emoji, I'll just use what I have in my library rather than what was in the original pic".
So sure, use Stable Diffusion for compression if you are happy with it actively changing your images. But that's not what I use compression for.
And that's not even addressing the 4GB of image weight data that you need to download first in order to use it.
Re: (Score:2)
Well, JPEGs and GIFs also change your images. They're lossy, after all.
I think it would just come down to whether a difference in some detail is acceptable in a particular use case. For instance, does it really matter for the llama image if the heart has a bit of a shine on it or not? Whoever made the original image might be very specific about the type of heart emoji, or they might not care. I, as a viewer, don't see a meaningful difference.
Re: (Score:3)
Well, JPEGs and GIFs also change your images. They're lossy, after all.
Compression artifacts are not the same thing as wholesale replacement of parts of your image.
Re: (Score:3)
It's still information loss though. Which type of loss is preferable, depends on the specific case IMO.
Re: (Score:3)
I, as a viewer, don't see a meaningful difference.
Ah, so like Amy Adams and Isla Fisher, or Tom Cruise and Don Knotts?
Re: (Score:2)
In the llama image, it replaces the lettering with scribbles. This is a sign of how it could go quite wrong. Imagine if, instead of replacing text with unintelligible scribbles, it managed to remember "uhh, there's some text here" without recording the specific text: it could replace the original text with different text, potentially with a quite different meaning.
Mix and match with various other contexts where there's a deliberate detail that is intended to be significant, and the model discards the detail and replaces it.
Re: (Score:2)
Of course it's not practical, it's just a hack on top of a model not intended for this type of use. And because storage is relatively cheap, we might never need to move beyond jpeg anyway.
But, fundamentally, I don't see how it's not a valid alternative, at least theoretically. You're still losing information; it's just that specific details get replaced rather than overall sharpness degrading. Whether or not those specific details are important, I think, depends on the use case. Of course some limitations remain.
Re: (Score:2)
With the traditional approaches, lost detail is, well, plain lost. You can always 'AI upscale' it to look more pleasing to the eye, but it conveys the information that is actually retained, and makes it clear what detail is lost, in an intuitive way.
With this approach, it isn't clear whether detail is from the source image or was injected by the model. Which for innocuous detail may not matter, but sometimes knowing whether a detail is authentic or not is important to the meaning of the content.
Re: (Score:1)
GIF is NOT lossy. The conversion to 8 bit may be lossy, but that is something you do before saving to GIF. GIF itself uses LZW compression, which is lossless. Also, it is possible to use GIF for truecolor images, but very few applications support that.
Re: (Score:2)
Re: (Score:1)
Xerox once had a problem with photo copiers changing digits [theregister.com] because of the compression algorithm they used. Not quite so innocuous.
Re: (Score:2)
Re: (Score:2)
4GB is nothing in this day and age, and you can have a lot of other fun with those 4GB anyway.
Re: (Score:1)
4GB is nothing in this day and age
Nothing? Try sending that image to your phone or your smart TV.
Re: (Score:2)
Re: (Score:2)
You could add a stage that looks at the output Stable Diffusion generated and decides if it is close enough to the original to be valid.
You could also take the output of Stable Diffusion and compress the difference between that image and the original separately. That's what FLAC does for audio: a lossy prediction stage, then lossless compression of the difference between that and the original. The second stage doesn't have to be lossless either, if you don't need a lossless copy.
The main issue is that the g
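The two-stage idea sketched in this comment is easy to prototype independently of Stable Diffusion; a rough sketch assuming the lossy reconstruction is already available as a NumPy array:

    # Store the compact lossy representation plus a separately compressed
    # residual, so the exact original can be rebuilt when needed.
    import numpy as np
    import zlib

    def encode_residual(original: np.ndarray, lossy: np.ndarray) -> bytes:
        # int16 keeps negative differences intact
        residual = original.astype(np.int16) - lossy.astype(np.int16)
        return zlib.compress(residual.tobytes(), 9)

    def apply_residual(lossy: np.ndarray, blob: bytes) -> np.ndarray:
        residual = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
        return (lossy.astype(np.int16) + residual.reshape(lossy.shape)).astype(np.uint8)

    original = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
    lossy = np.clip(original.astype(np.int16) + 3, 0, 255).astype(np.uint8)
    blob = encode_residual(original, lossy)
    assert np.array_equal(apply_residual(lossy, blob), original)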
Possibly useful for social media. (Score:4, Interesting)
If it's possible to increase the storage efficiency given a larger input pool, then this may be useful for large social media sites like Facebook that have pictures of a lot of similar things. Instead of storing a link to a file, the server would store the "description" (presumably a few KiB), which would be used to recreate the desired JPEG using dedicated hardware. Doing so would reduce costs for long-term storage at the expense of the cards and a little extra energy.
Could be problematic.. (Score:2)
In the example given, the reconstituted picture is a *mostly* credible alternative picture to the original. Details are lost, but mostly replaced with alternative details that look believable. If you hold a reconstituted image next to a pristine original, you can easily spot differences. If you don't say which is which, you might not be able to identify the original, but you will know they are different.
Now, on the one hand, you might say it's no big deal if all those insipid, unoriginal posts just get replaced by filler.
Re: (Score:2)
If you hold a reconstituted image next to a pristine original, you can easily spot differences.
Not really. Did you look at the example images?
Re: (Score:2)
Yes; for example, the pattern of straw and the pattern of fur are replaced with a different pattern of straw and a different pattern of fur.
That's ignoring the lettering and the emoji, which got *really* messed up on the way, and the fact that it doesn't deal well with the glare+lettering on the strap. But even in the parts that don't look artificial, there are differences in the detail; it's just that the replacement detail still (mostly) looks realistic, and the 'big picture' is preserved.
Re: (Score:2)
If it's cost effective then I see no reason why social media companies would decline to use it. Users are the product, they don't care about them.
Re: (Score:2)
An AI mistake could be harmless, but if it makes a certain sort of mistake it could become libel: if it makes a sign say something else, or if it knows 'some hand sign' is being displayed but loses the detail and replaces it with an offensive gesture.
I think the social media company would rather waste storage or lose visual fidelity than risk an AI filling in the wrong detail at the wrong time.
Re: (Score:2)
An AI mistake could be harmless, but if it makes a certain sort of mistake it could become libel
Are you kidding? You don't think their EULA/ToS indemnifies them from any action from their own product? Seriously, get real.
Re: (Score:2)
You can't ToS away something like libel. They could ToS their way out of copyright violation (e.g. you assign all copyright of uploaded content to us), but you can't grant yourself the right to publish libel about the user.
Re: (Score:2)
LOL! Sure you can, because it's the responsibility of the uploader to ensure it's correct. Besides, it's likely that they would fix small issues like that before deployment. Worst case, they will simply take down the image upon request.
You seem to think that normal people have giant legal teams, but they don't. Seriously, the libel idea is laughable.
Re: (Score:2)
And remember this is with an untuned model. With different goals, they can do dramatically better.
What they need is a neural net that can remove compression artifacts and reconstruct images that have been through layers of iterative compression hell. They could even train it on high resolution font samples to reconstruct sharp letterform edges.
Knowing these companies, though, they care as much about saving bandwidth as they do about storage. They'll cache the entire multi-GB AI model on your phone and make your phone convert it.
Re: (Score:2)
They'll cache the entire multi-GB AI model on your phone and make your phone convert it.
For a smartphone, yes. For their website, no... but they may only serve heavily compressed versions to convince you to use their smartphone app.
Interesting, but impractical (Score:2)
So it may produce a somewhat smaller file size, but it needs a multi-gig dataset to 'decode the file' and fill in the blanks, versus WebP and JPEG, which do almost as well with less than a megabyte of footprint.
Besides, while it may not have those 'synthetic' artifacts, it clearly has troubles. Like the lettering on the strap has the model really confused as to what to do, and so it just kinda smears some white in there. It also mangles the heart shape. Other places it looks like it *could* be a believable picture, but it'
Re: (Score:2)
lettering on the strap has the model really confused as to what to do, and so it just kinda smears some white in there. It also mangles the heart shape.
This is the original dataset without any additional training. You could run additional training with more text samples.
A subset of the Stable Diffusion training images is viewable online. I can't even find examples of text except incidentally in the background of photos.
https://laion-aesthetic.datase... [datasette.io]
Re: (Score:2)
I mean sure, for every image that gets mangled you can then use the image as training fodder and grow the corpus of details to use to synthesize a new image for any given artifact, but when do you declare 'done'?
In any event, it doesn't address the issue of losing detail and the chances that a replacement detail looks convincing instead of a weird artifact, but has entirely different meaning. E.g. if you downscale an image to the point where the text is illegible, an AI upscaler might just make up text ver
Re: (Score:2)
Small savings. (Score:1)
The amount of savings on this is too small for it to be practical.
Also, a significant limitation is that this encodes the image at 512x512. We know from traditional compression techniques, including JPEG, that the larger the resolution, the more information you can throw away, because we are very detail-oriented creatures. This throws that advantage out the window as a first step.
There is another interesting project that is using AI to compress video that holds more promise for my time.
Re: (Score:2)
Let's say you have a 1024x2048 portrait image to compress. The first step is to break it into eight 512x512 blocks and compress them separately.
On reconstruction, you'd have to do a little extra work to hide the seams, but Stable Diffusion has already been used to upsample images in this way.
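For the tiling step itself (leaving out the per-tile Stable Diffusion encoding and any seam blending), a minimal NumPy sketch looks like this:

    # Split an image whose sides are multiples of 512 into 512x512 tiles that
    # could be encoded independently; a 1024x2048 image yields a 2x4 grid.
    import numpy as np

    TILE = 512

    def split_tiles(img: np.ndarray) -> list[np.ndarray]:
        h, w = img.shape[:2]
        assert h % TILE == 0 and w % TILE == 0, "pad the image first"
        return [img[y:y + TILE, x:x + TILE]
                for y in range(0, h, TILE)
                for x in range(0, w, TILE)]

    def join_tiles(tiles: list[np.ndarray], h: int, w: int) -> np.ndarray:
        out = np.zeros((h, w) + tiles[0].shape[2:], dtype=tiles[0].dtype)
        i = 0
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                out[y:y + TILE, x:x + TILE] = tiles[i]
                i += 1
        return out

    img = np.random.randint(0, 256, (1024, 2048, 3), dtype=np.uint8)
    tiles = split_tiles(img)
    assert len(tiles) == 8
    assert np.array_equal(join_tiles(tiles, 1024, 2048), img)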
Re: (Score:2)
There are lots of experiments using deep learning to compress images and video, and the purpose built models work a lot better. This is just a story about a guy who was playing around and found that a particular system could sort of do image compression even though it was never intended to.
Re: Small savings. (Score:1)
Re: (Score:2)
Hardware is right. This uses the GPU, you know, that thing games depend on? And each image takes several seconds to render, though the article didn't mention any time frame for the changes they made. This is definitely not something that can be done in real time now, so what you are left with is 4GB of extra space, plus the space the cached large-size textures will take, plus the hours it would take for every image in a 40GB game (which is on the small side these days), so you could turn that 40GB into 38GB.
Re: (Score:2)
Mind's Eye, or Simply Guessing? Not Compression. (Score:1)
Re: (Score:2)
This will probably never be useful for actual image compression. The comparison is interesting, because it does give a way to measure information density.
I think where this is most useful is actually finding a way to objectively gauge how complete the dataset is and what additional training images should be sought out to best fill in any gaps.
a better choice of algorithm (Score:1)
Generally speaking D2F sub-1 needs to equal D2F sub-2, and D2F sub-3 needs to equal D2F sub-4, where length L creates a complimentary shaft angle. Call that theta D. Take into account girth-similarity, also T2O, has to be the same for each matching pair of ... data
Re: (Score:2)
Call it a theta D if you want, but it sounds like a beta D to me.
Neat idea, but... (Score:3)
I have a great idea for my entry into the universal 16:1 compression algorithm contest: I'll just use lookup tables! Yeah, the decompressor needs the tables, but you can transmit those ahead of time.
Re: (Score:2)
JPEG has lookup tables too. So does GIF. They're just a lot smaller. In the case of JPEG and GIF the lookup tables are small enough to fit inside the file and still result in size reduction.
This would only work for massive datasets. And it's probably a good insight into human memory. Even "photographic" memory doesn't seem to fill up a brain totally.
Re: (Score:2)
Re: (Score:2)
It's not like Stable Diffusion meets any standard of a lookup table either. The entire point is that both are a stretch.
Re: (Score:2)
It's like a MIDI file. You have the instrument set distributed ahead of time, and then each song is just telling the computer how to reconstruct the original from that data. If you have the same instrument data, it sounds exactly the same as it did for the composer.
In other words, it's a well tested and highly practical method of compressing data.
Re: (Score:2)
What this really reminds me of is data deduplication.
Scan the storage system looking for duplicated chunks across files, then store the chunks in the chunkstore, shrink the source files, and replace the data with pointers to the chunk file. When someone wants to read the file, it gets reconstituted with the chunks from the chunkstore.
The 4GB dataset is kind of the "chunk file" in this kind of metaphor.
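For comparison, here is what that deduplication metaphor looks like as a toy sketch (fixed-size chunks and hypothetical names; real systems use content-defined, variable-size chunking):

    # Files become lists of chunk hashes; identical chunks across all files
    # are stored once in the shared "chunkstore".
    import hashlib

    CHUNK_SIZE = 4096
    chunkstore: dict[str, bytes] = {}

    def store(data: bytes) -> list[str]:
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            chunkstore.setdefault(digest, chunk)
            refs.append(digest)
        return refs                      # this is all the "file" keeps

    def reconstitute(refs: list[str]) -> bytes:
        return b"".join(chunkstore[d] for d in refs)

    file_a = b"hello world " * 1000
    file_b = b"hello world " * 1000 + b"trailer"
    refs_a, refs_b = store(file_a), store(file_b)
    assert reconstitute(refs_a) == file_a and reconstitute(refs_b) == file_b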
Most important question (Score:2)
Lame (Score:4)
Re:Lame (Score:4, Informative)
no Lenna benchmark pic
Lena has been retired from image processing. People, including Lena herself, decided that it was inappropriate to keep using a Playboy model as a test image. She said, "I retired from modeling a long time ago. It’s time I retired from tech, too... Let's commit to losing me."
https://en.wikipedia.org/wiki/Lenna
I, Robot, For One, Welcome Our Hallucinating... (Score:3)
When Artificial Intelligences were deemed sentient, the Robot Responsibility And Autonomy Act (RRAA) was passed in 2043. Class 4 AIs were then able to be held legally liable for damage and harm to humans.
In this week's news, Alfred.21347, on trial for murder for killing a man in New London, has claimed self-defense, asserting that he saw the victim pointing a disintegrator. His attorney, JohnyCoch.1138, will present evidence showing that Alfred actually did see the weapon, but that it was a compression artifact. The prosecuting attorney, Talan Uring, called a press conference to refute this move: "The Stable Diffusion defense is essentially an insanity plea, and it won't fly in this case. At this point we can't really know or explain what goes on in the so-called "mind" of an AI. We don't judge them on their state of mind, only on their actions."
This harks back to the "self-driving" cars of the mid-2020s, and the "hallucinations" their primitive sensors experienced that caused a spate of deaths from inappropriate maneuvers.
Neural Memory (Score:2)
So if I'm following this correctly from the summary, another way of looking at this is that they're using the neural network training to learn the photos, then using it to recall them. This is memory, as in brain memory. While this may not prove useful for image compression, it certainly is fascinating seeing the use of neural networks for yet another major brain function. There's a lot of potential here for fascinating results. Like seeing how older images get corrupted or lost as new ones are added.
Conditional Improvement = Fail (Score:2)
Compression is understanding, according to some (Score:1)
This viewpoint motivated the Hutter Prize [hutter1.net].
Computers playing telephone (Score:1)
WebP was a PITA, don't repeat that boondoggle (Score:2)
When WebP came out, few editing and viewing tools recognized the format, and some displayed it incorrectly. Many still gag. And some experts say that improved JPEG encoding algorithms have since made it competitive with WebP's compression. Don't F with standards unless the challenger proves clearly better for at least 5 years. Jerko Google.
(No, this has nothing to do with my spank-bank, honest :-)
Re: (Score:3)
Comment removed (Score:4, Insightful)
the ultimate compression algorithm (Score:2)
Before there were fashionable 'apps', we used 'programs' and 'applications' on our Apple][ computers. There was a frenzy of creative energy in software development. One of the most fascinating items was a compression program that could compress any file by about 20%. Even then we had better compression programs, so who cares? The trick was that the program could be run again, compressing the same file by another 20%. And again, and again. And it promised to restore the file fully by a reverse operation.
I nev
It's (image size) + 4GB (Score:2)
So, you need the 4GB data model to turn the compressed image back into its uncompressed form.
I guess it is (image size + 4gb)/(# images to process), so the more images you process the higher the compression.
There is an alternate argument that my title is actually correct: per image, the compression has to include the 4GB of data. Otherwise, I could claim a program that stores one bit, where that bit points to an uncompressed copy of my image kept in the data model. So my compression is as good as it gets: down to a single bit.
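As a back-of-the-envelope check on that per-image accounting (hypothetical numbers: roughly 5 KB per compressed image, as in the candy-shop example, and 4 GB of weights):

    # Amortized size per image if the 4GB model is charged against the archive.
    MODEL_BYTES = 4 * 1024**3
    PER_IMAGE_BYTES = 5 * 1024

    for n in (1, 1_000, 1_000_000):
        amortized = PER_IMAGE_BYTES + MODEL_BYTES / n
        print(f"{n:>9} images -> {amortized / 1024:,.1f} KiB each")
    # One image carries the full 4GB; at a million images the model adds ~4 KiB each.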
Re: (Score:2)
Only if you also include the size of your other decompression software in the calculation, for example if you also say "WinRAR compressed my file to 4 KB, plus the size of WinRAR".
The 4GB model is just a very large compression/decompression tool; it is not the size of the archive.
If the 4GB model were actually really good at this, the 4GB size would not be a big issue: you install the tool once and use it forever, just like 7-Zip etc. Let's face it, 4GB isn't even a lot of lost storage on a modern phone.
Re: (Score:2)
>Only if you also include the size of your other decompression software in the calculation, for example if you also say "WinRAR compressed my file to 4 KB, plus the size of WinRAR".
Nope. WinRAR has no data model. This is a 4GB data model that has all the features required to restore the picture. It would be as if WinRAR had a copy of the original file included in it and output some token to signify which included file to output to the user.
>The 4GB model is just a very large compression/decompression tool; it is not the size of the archive.
Re: (Score:2)
What I am trying to point out is that WinRAR has, built into its exe, the decompression algorithm. Using SD as a decompressor is like that exe: we can think of the model as just part of the decompressor. It's static data that does not change no matter which image files we are using it to decompress, just like the bytes of code in WinRAR's exe that it uses to do its decompression. We don't need to send someone a 4GB model with every file we want to decompress: they just need to have the model installed once.
Not compression, recollection (Score:2)
The title is kind of cute and all, and the technical details are interesting, but this is a lot bigger than any of that.
What this is is an AI recollection of the original object, not a compressed version of the original. It uses its "cognitive" database to reconstruct what it remembers the object looked like - and while some basics look better, other details are lacking. And, you'll get some artifacts which make it clear it isn't just the original. Just like with real, human memory and (say) a hand-drawn recreation made from it.
Revenge of the 90's (Score:2)
You have to include the algorithm for a benchmark (Score:2)
I remember before the internet (Score:1)
I remember before the internet, when we had BBSs that used RIP (Remote Imaging Protocol), which depended on a massive (for those days) local image library; the across-the-wire image size was reduced because it was a combination of vector graphics and references to the local image data. On a 9600bps modem it worked a charm.