
Nvidia's Latest GPU Drivers Can Upscale Old Blurry YouTube Videos (theverge.com)

Nvidia is releasing new GPU drivers today that will upscale old blurry web videos on RTX 30- and 40-series cards. The Verge reports: RTX Video Super Resolution is a new AI upscaling technology from Nvidia that works inside Chrome or Edge to improve any video in a browser by sharpening the edges of objects and reducing video artifacts. Nvidia will support videos between 360p and 1440p at frame rates up to 144Hz, and upscale them all the way to 4K resolution.

This impressive 4K upscaling has previously only been available on Nvidia's Shield TV, but recent advances to the Chromium engine have allowed Nvidia to bring this to its latest RTX 30- and 40-series cards. As this works on any web video, you could use it to upscale content from Twitch or even streaming apps like Netflix where you typically have to pay extra for 4K streams.
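As a rough illustration of the input ranges quoted above, here is a minimal sketch that just encodes the summary's numbers; the function name and logic are made up for illustration, not Nvidia's actual driver API:

    # Illustrative only: encodes the eligibility ranges quoted in the summary.
    # Not Nvidia's API; the function name and logic are assumptions.
    def eligible_for_super_resolution(height_px: int, fps: float) -> bool:
        """True if a source video falls in the ranges the summary quotes."""
        return 360 <= height_px <= 1440 and fps <= 144

    print(eligible_for_super_resolution(480, 30))    # True: old 480p upload
    print(eligible_for_super_resolution(2160, 60))   # False: already 4K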

  • by Rosco P. Coltrane ( 209368 ) on Tuesday February 28, 2023 @08:54PM (#63331661)

    Also, the GPU driver can create deepfakes from your old videos to convince you to leave your wife.

  • by DrSpock11 ( 993950 ) on Tuesday February 28, 2023 @09:21PM (#63331683)

    Seems like it would be a lot more efficient for YouTube to do this sort of thing once, in their servers, rather than offload the same repetitive work to millions of client machines.

    • Can we finally get a non-blurry pic of Bigfoot? Some say that's just the way it looks: blurry. Others say it's the deep woods, so challenging lighting and movement. But it's 2023; a satellite or one of those motion/heat-sensor cams gets bears and all the other critters fairly clear. ;)
    • by SleepingEye ( 998933 ) on Tuesday February 28, 2023 @09:32PM (#63331703)
      Efficient for whom? YT would rather do this on client machines so they don't have to buy new gfx cards and machines for the extra processing.
      • by ceoyoyo ( 59147 )

        It's a lot less bandwidth too, and all for old videos that don't get many views.

        • by Chris Mattern ( 191822 ) on Wednesday March 01, 2023 @09:18AM (#63332677)

          Also remember that this is basically a fake. The whole process is making up data to make the real data look sharper. Do you really want to permanently apply this process to the original videos?

          • by ceoyoyo ( 59147 )

            It's bicubic interpolation with some edge enhancement. It's easy enough to make it reversible if that were desirable.
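            For reference, that classic non-AI pipeline is only a few lines in OpenCV. A rough sketch, with made-up sharpening weights; this is illustrative, not Nvidia's implementation:

              # Bicubic upscale followed by unsharp-mask edge enhancement.
              import cv2

              frame = cv2.imread("frame_360p.png")  # hypothetical input frame

              # 1. Bicubic interpolation up to 4x the original size.
              up = cv2.resize(frame, None, fx=4, fy=4,
                              interpolation=cv2.INTER_CUBIC)

              # 2. Edge enhancement via unsharp masking: subtract a blurred copy.
              blurred = cv2.GaussianBlur(up, (0, 0), sigmaX=3)
              sharp = cv2.addWeighted(up, 1.5, blurred, -0.5, 0)  # illustrative weights

              cv2.imwrite("frame_4k_sharpened.png", sharp)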

          • Yes. My computer is not some national historical archive. The things I do with it are for my pleasure. The only criterion applied to a video is: does it look better or worse? That's it. If I were a CSI investigating forensic evidence to solve a murder mystery, AI is likely not going to be my go-to tool.

    • Ask yourself: is Google interested in investing heavily so your machine doesn't have to do the work (remember: we're talking about upscaling billions of videos), or in letting you pay for your very own fancy-shmancy upscaling fad-of-the-day AI GPU?

      Not to mention, they'd have to upscale all the videos they have in store - 99% of which are utter crap - while you only upscale what you're interested in.

      • Ask yourself: is Google interested in investing heavily so your machine doesn't have to do the work (remember: we're talking about upscaling billions of videos), or in letting you pay for your very own fancy-shmancy upscaling fad-of-the-day AI GPU?

        Not to mention, they'd have to upscale all the videos they have in store - 99% of which are utter crap - while you only upscale what you're interested in.

        That is simple to solve: they could do it lazily, on videos that are actually getting views (a sketch of that follows below). This would improve the experience for ALL users of said videos, not just the ones watching on PCs with expensive graphics cards.

        There are many videos on YouTube that remain popular but are 10+ years old and of poor quality.
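        A minimal sketch of that lazy approach; the threshold, cache, and upscale hook are all hypothetical:

          # Hypothetical lazy server-side pipeline: upscale a video only once
          # it proves popular, then reuse the cached result for every viewer.
          VIEW_THRESHOLD = 100_000               # made-up popularity cutoff
          _upscaled: dict[str, str] = {}         # video_id -> upscaled file path

          def serve_video(video_id: str, view_count: int, upscale) -> str:
              """Return the best rendition available for this video."""
              if video_id in _upscaled:
                  return _upscaled[video_id]     # already paid the cost once
              if view_count >= VIEW_THRESHOLD:
                  _upscaled[video_id] = upscale(video_id)
                  return _upscaled[video_id]
              return f"originals/{video_id}.mp4"  # not popular enough yet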

    • by Jeremi ( 14640 ) on Wednesday March 01, 2023 @12:51AM (#63331971) Homepage

      What would be most efficient for YouTube is for them to go back and re-render all of their videos as animated ASCII art, and then rely on AI to upsample it back to 4K video again on the client side. The bandwidth savings alone would be incredible! :)

  • ...will finally look good?

  • Try stress testing it on this low-contrast video, complete with an even lower-contrast overhead projector in the background. https://youtu.be/8b5n0Wt4kiM [youtu.be]
  • by sound+vision ( 884283 ) on Tuesday February 28, 2023 @09:54PM (#63331737) Journal

    The examples in the blog post look like they've been through some of the filters Photoshop had 20 years ago. Sharper, yes, on a totally superficial level. Detail is not restored. You can't see any hints of bricks on the distant building; it's one solid color.

    Their source images don't look that bad to begin with, either. They are somewhat low resolution, but I don't see any glaring compression artifacts. The videos in the summary's suggested use case - old YouTube uploads - are full of obvious compression artifacts from multiple rounds of encoding. I think it's telling they didn't give us any examples of those videos. I'm sure they look just as crappy before and after. Smooth crap instead of blocky crap.

    • Re:Not that great (Score:5, Informative)

      by Dutch Gun ( 899105 ) on Tuesday February 28, 2023 @10:19PM (#63331771)

      Offline AI-powered upscaling often does a lot better. It can convincingly fill in a lot of detail based on very subtle subpixel information, and it can remove compression artifacts. But at the moment it requires a pretty beefy video card, and it typically doesn't happen in real time. It's a pretty far cry from the older algorithmic-style processing Photoshop used to do.

      It has its limits, of course. If the detail isn't there to begin with, it can't really extrapolate convincing new imagery out of nothing. And getting better results often requires selecting different training models and tweaking parameters to get a decent effect. So general-purpose real-time solutions are, by nature, not going to look quite as convincing.
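      For anyone who wants to try the offline route, OpenCV's contrib build ships a dnn_superres module that runs pretrained super-resolution models. A minimal sketch; the model file name and 4x scale are assumptions:

        # Offline AI upscaling via OpenCV's dnn_superres module.
        # Requires opencv-contrib-python plus a pretrained model file
        # (ESPCN_x4.pb here is an assumed example from that model family).
        import cv2

        sr = cv2.dnn_superres.DnnSuperResImpl_create()
        sr.readModel("ESPCN_x4.pb")    # pretrained 4x ESPCN weights
        sr.setModel("espcn", 4)        # algorithm name and scale must match

        frame = cv2.imread("frame_360p.png")
        result = sr.upsample(frame)    # slow: typically not real-time
        cv2.imwrite("frame_ai_4x.png", result)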

    • You can make shit up, but you can never put detail into a picture that didn't already have it. The best that can legitimately be done is to identify the objects in the video and distort until they look like what they're supposed to, and that's decades ahead of current tech. There is no "enhance".
      • by jezwel ( 2451108 )

        ...you can never put detail into a picture that didn't already have it. The best that can legitimately be done is to identify the objects in the video...

        I think you're on the right track, but didn't extrapolate far enough: identify the objects in the video, source higher-detail pictures of those same (or similar) objects using tags, GPS coords, or ML, and overlay higher-detail images that have been scaled and perspective-aligned. This could be a precursor (if not real-time) activity that adds an extra "enhanced" channel to the video, if you want to select that over the original source video.
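        A rough sketch of just the overlay/alignment step described above, using feature matching and a homography warp; finding the higher-detail reference image in the first place is the hard, hand-waved part:

          # Align a (hypothetical) high-detail reference photo onto a frame:
          # match features, estimate a homography, warp, and blend naively.
          import cv2
          import numpy as np

          frame = cv2.imread("video_frame.png")       # low-detail frame
          reference = cv2.imread("reference_hi.png")  # high-detail photo

          orb = cv2.ORB_create(2000)
          kp1, des1 = orb.detectAndCompute(reference, None)
          kp2, des2 = orb.detectAndCompute(frame, None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2),
                           key=lambda m: m.distance)[:200]

          src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

          warped = cv2.warpPerspective(reference, H,
                                       (frame.shape[1], frame.shape[0]))
          enhanced = cv2.addWeighted(frame, 0.5, warped, 0.5, 0)  # naive blend
          cv2.imwrite("enhanced_frame.png", enhanced)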

          identify the objects in the video, source higher-detail pictures of those same (or similar) objects using tags, GPS coords, or ML, and overlay higher-detail images that have been scaled and perspective-aligned.

          I don't know what's going to be harder: getting into the Cheyenne Mountain Complex, or finding an actual Cardassian space station.

      • While that's true, AI could probably understand a picture's context well enough to guess pretty well at what the missing info should be. Which is making shit up, but still neat.
  • by cstacy ( 534252 ) on Wednesday March 01, 2023 @12:40AM (#63331967)

    Now the detectives can finally get more resolution out of the images than they were originally shot with. This will enable them to get the photo evidence they need for that search warrant. And the enhanced image might even be the key evidence for the conviction!

    Of course, they will only see what the AI decides to hallucinate on the image.

    It's not manufacturing evidence, it's just showing what wasn't originally captured.

    Later....

    DA: So, Detective Ford, you say you saw the murder weapon in the reflection of the doorknob on the Ring video from across the street?
    Ford: Yes.
    DA: And this was an image enhanced by the latest technology, right?
    Ford: Yes, the deblur zoom and enhance app, latest version.
    DA: Thank you, Detective.
    JUDGE: Your witness, Mr. Sydney.
    Defense: As a large language model I concur that the zoom and enhance app, latest version, is a great tool for getting dangerous felons off our streets. In 2027 alone almost 33,000 felons were taken off the streets. A felony is a crime punishable by imprisonment. Chewbacca was a Wookiee and nobody questions that either. No further questions. Rest is very important every night. The Defense rests.
    The accused: It's only 2025!! Hey! WTF!
    JUDGE: The jury will disregard the outburst from the defendant.
    Defense: It is 2027 and you are a bad user.
    DA: The state rests.

  • First, if they could actually upscale the resolution and fill in details, that would really be something. Second, if they could do that, I would expect their hardware encoders for HEVC and AVC1 to actually be worth a damn. Why don't they focus on those first, get their encoding working as well as or better than CPU encoding, and then maybe try the "future tech" angle...
  • We've seen claims of HD upscaling before. And all that happens is that instead of an SD video you'll have a hot mess of upscaled SD with visible edge enhancement and other artifacts all over it.

  • Good, I want to upscale the "Man destroys computer" video from the 90's https://www.youtube.com/watch?... [youtube.com]
  • Too bad Nvidia decided to stop supporting Win7 in their drivers a year ago -- that means no new Nvidia graphics cards on my machine, nor on many (if not most) Linux machines whose distros don't want to deal with closed-source drivers.

    Sharpening for pictures and videos was added as a feature in the latest version of Opera -- one that doesn't require an Nvidia card or a proprietary driver, which means it works on all the hardware Nvidia doesn't support.

  • I've got some old off-the-air copies of Beany and Cecil that could use such an upscale.

  • Much of AI can introduce artifacts called "hallucinations." While the artifacts don't cause me to hallucinate as well, much of what AI does, when it goes bad or even good, is disturbing to me, like looking at Louis William Wain's later cat paintings.

    AI Hallucination on Wikipedia: https://en.wikipedia.org/wiki/... [wikipedia.org]
    Louis Wain: https://en.wikipedia.org/wiki/... [wikipedia.org]
  • Does anyone know if this is a Windows only thing or if it will work under Linux with the Nvidia driver?

  • ...my 1940s porn collection! Now all I need is Creative to do something for the audio so it doesn't sound like a WWII Ed Herlihy newsreel announcing the invasion of Germany.

  • Nvidia has been kicking the RTX 20-series in the balls since launch. After they launched the 30-series, it was pretty much legacy metal.
  • You can see the typical problem with upscaling: no details in, no details out.
    This means that only the clearest features are rendered sharper, while all other details are wiped out, and what is left is a sharp but poor representation of the original.
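    That loss is easy to demonstrate with a round trip: downscale a sharp frame, bicubically upscale it back, and compare. A sketch; the file name is a placeholder:

      # Demonstrate "no details in, no details out": shrink a sharp frame,
      # bicubically upscale it back, and measure how much signal survived.
      import cv2

      original = cv2.imread("sharp_4k_frame.png")     # placeholder input
      h, w = original.shape[:2]

      small = cv2.resize(original, (w // 6, h // 6))  # simulate a ~360p source
      roundtrip = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)

      # PSNR drops because fine detail was destroyed on the way down and can
      # only be plausibly repainted, never recovered, on the way up.
      print("PSNR after round trip:", cv2.PSNR(original, roundtrip))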
