DeepMind's AI Agent MuZero Could Turbocharge YouTube (bbc.com)
DeepMind's latest AI program can attain "superhuman performance" in tasks without needing to be given the rules. From a report: Like the research hub's earlier artificial intelligence agents, MuZero achieved mastery in dozens of old Atari video games, chess, and the Asian board games of Go and Shogi. But unlike its predecessors, it had to work out their rules for itself. It is already being put to practical use to find a new way to encode videos, which could slash YouTube's costs. [...] MuZero could soon be put to practical use too. Dr Silver said DeepMind was already using it to try to invent a new kind of video compression. "If you look at data traffic on the internet, the majority of it is video, so if you can compress video more effectively you can make massive savings," he explained.
"And initial experiments with MuZero show you can actually make quite significant gains, which we're quite excited about." He declined to be drawn on when or how Google might put this to use beyond saying more details would be released in the new year. However, as Google owns the world's biggest video-sharing platform -- YouTube -- it has the potential to be a big money-saver. DeepMind is not the first to try and create an agent that both models the dynamics of the environment it is placed in and carries out tree searches -- deciding how to proceed by looking several steps ahead to determine the best outcome. However, previous attempts have struggled to deal with the complexity of "visually rich" challenges, such as those posed by old video games like Ms Pac-Man.
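The "tree searches -- deciding how to proceed by looking several steps ahead" that the article describes can be illustrated with a toy sketch. This is plain exhaustive minimax over an invented subtraction game, not MuZero's actual method (which learns a model of the environment and runs Monte Carlo tree search over it); the game and function here are purely illustrative:

```python
def minimax(n, maximizing):
    """Toy tree search over a subtraction game: players alternately
    take 1 or 2 from a pile of n; whoever takes the last item wins.
    Looking ahead through every line of play, return +1 if the
    maximizing player can force a win from this position, else -1."""
    if n == 0:
        # the player to move has nothing left to take and has lost
        return -1 if maximizing else +1
    values = [minimax(n - take, not maximizing) for take in (1, 2) if take <= n]
    return max(values) if maximizing else min(values)

print(minimax(4, True))  # +1: take 1, leaving the opponent a losing pile of 3
```

Real agents cap the lookahead depth and substitute a learned evaluation at the frontier; the exhaustive version above only works because the toy game is tiny.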
Re: (Score:2)
Hasn't AI done enough damage to YouTube already?
Re: It's A Form Of Video Compression (Score:2)
Try SkyTube on mobile. (Via F-Droid.)
No ads as far as I can tell, channel blocking built-in, quick to use, and the ability to force a certain resolution and a different one on mobile data.
Offers downloading for offline watching too. I always download a bunch in advance. Saves me quite a bit of data when on the go.
The developer is very responsive, and a nice guy (so be nice please), and could use a bit of support, both financially and in terms of streamlining the more cumbersome aspects of dealing with Google.
More a *tool* for Video Compression (Score:3)
A new form of video compression that reduces the amount of bandwidth needed to stream a video while maintaining the video quality would be a welcome improvement.
These applications are not so much new forms of video compression (in the sense of new video codecs) as tools for using existing ones.
We already have very efficient video codecs, with AV1 being the latest.
The problem is that video compression isn't just a single "Compress Video!" button to click.
You need to find the perfect balance between compression and quality.
How much bitrate should you allocate to which frame in order to obtain something that *looks perceptually* close enough to the original to look good?
(And h
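The per-frame allocation question above can be sketched with a toy rate-control model. The complexity scores and function below are invented for illustration; real encoders use far more elaborate rate-distortion optimization, but the core idea of spending the bit budget where the picture is busiest is the same:

```python
def allocate_bitrate(complexities, total_kbps):
    """Toy rate-control sketch: split a total bitrate budget across
    frames in proportion to each frame's (hypothetical) visual
    complexity score, so busy frames get more bits than still ones."""
    total = sum(complexities)
    return [total_kbps * c / total for c in complexities]

# e.g. a still scene, a mild pan, then a fast action cut
print(allocate_bitrate([1.0, 2.0, 5.0], 8000))  # busiest frame gets most bits
```

A learned rate controller would replace the hand-assigned complexity scores with a model's prediction of perceptual impact, which is roughly where something like MuZero could plug in.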
How about *actually* putting it to use... (Score:2)
By compressing videos to only their actual contents.
I mean extracting the semantic structures. And turning them into a graph (of graphs) and then text.
Because while the average ScienceClick or 3blue1brown video cannot be compressed that much,
your average YouTube react trash, taste test trash, or shitty tutorial video could be condensed to a single sentence, or a paragraph, that you could read and skip and backtrack at your own pace, without watching five minutes of ads, 20 minutes of empty blahblah and 10 m
Re: (Score:2)
Re: (Score:2)
JPEG doesn't summarize, but it was a non-A.I. attempt at reduction. Highly primitive for sure, as it effectively only uses an order-0 model, but we are still talking about the same thing. Modern hand-crafted order-0 models are far better, but still quite limited.
There exists information in the data that is qualitatively noise. The video compressors already handle
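For readers unfamiliar with the term: an "order-0 model" codes each symbol from its overall frequency alone, ignoring all context. A small sketch of the compression bound such a model can achieve (the function is illustrative, not any codec's actual entropy coder):

```python
from collections import Counter
from math import log2

def order0_entropy(data: bytes) -> float:
    """Bits per symbol achievable by an order-0 model: each byte is
    coded from its overall frequency, with no context at all."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(order0_entropy(b"aaaa"))      # 0.0 bits/symbol: nothing to encode
print(order0_entropy(b"abababab"))  # 1.0 bit/symbol, despite the obvious pattern
```

The second example shows the limitation the comment is getting at: a perfectly predictable alternating pattern still costs a full bit per symbol under order-0, because the model can't see context; higher-order and learned models exist precisely to capture it.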
Re: (Score:2)
>>What are the 'important' parts?
That sounds hard.
"What are useless bullshit parts" sounds like something AI can get started identifying. Won't chop as much (abridged Mythbusters chopped out what, 70%? more?) but it should be dramatically easier to pattern out.
Won't get the support codec work does, of course. Can't have it hurting the youtube "business model".
Re: (Score:2)
Re: (Score:1)
two bit culture (Score:2)
In a dystopian future there will only be two movies. The Black Movie and The White Movie. Even without AI, data compression will be so turbocharged that both movies can be transmitted in their entirety by the transmission of just a single bit.
Re: (Score:2)
Well until that one bit gets DMCAed and then the entire remaining catalogue of films can be transmitted with no bits at all.
No... at least not any time soon. (Score:2)
It is already being put to practical use to find a new way to encode videos, which could slash YouTube's costs.
Performance is a real issue, and if you aren't using codec-specific hardware acceleration then you are going to end up using a lot of computational power decoding video. There is a reason they haven't pushed out AV1 as the de-facto codec for video, and that's the computational power required to decode it. This translates into battery drain, smartphones heating up, and some machines being unable to decode video in realtime, all of which are highly undesirable.
Additionally, unless the encoding is generic en
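The hardware-support trade-off described above is roughly what a streaming server has to weigh when picking a format per client. A sketch, with an invented capability set and function (no real platform's negotiation API):

```python
def pick_codec(hw_decoders, preference=("av1", "vp9", "h264")):
    """Sketch of the trade-off in the parent comment: serve the most
    bandwidth-efficient codec the client can decode in hardware, and
    fall back to the near-universal h264 rather than force software
    decoding that drains the battery. `hw_decoders` stands in for a
    hypothetical client capability probe."""
    for codec in preference:
        if codec in hw_decoders:
            return codec
    return "h264"  # last resort: accept software decode of the common codec

print(pick_codec({"vp9", "h264"}))  # no AV1 hardware -> "vp9"
```

Any brand-new learned codec starts at the bottom of this ladder: until silicon ships with a hardware decoder for it, serving it widely just shifts cost from Google's bandwidth bill to everyone's batteries.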
Where'd Multicast Go? (Score:3)
Twenty years ago, we were going to unicast enough data to get people caught up to the closest multicast group and then they would just join that, making sure the video data only transited the link once per group.
If it costs Google (and one presumes backbone providers) that much money, why is everything still being unicast? Same for Netflix. Google isn't afraid of proposing new protocols. Even if ISP's terminated the streams as unicast for the last mile, it should still take a good chunk off the top if a video is getting millions of views a day.
Re: Where'd Multicast Go? (Score:2)
Re: (Score:2)
Replying to cancel my accidental moderation.
Re: (Score:3)
The problem is multicast only works if everyone sees the same video at the same time. If one person even wants to rewind it 30 seconds to re-watch something, they have to be broken off the multicast and back to unicast.
The other option would be to have the players cache playback, but if you just joined you wouldn't have a history buffer.
You would contemplate doing this for say, broadcast TV, but then cloud DVRs letting you rewind screw it all up too. (Cloud DVRs - what a joke).
Re: (Score:2)
They could always torrent if two people aren't watching at the same time.
But for popular videos let's use a video Linus Tech Tips posted 14 hours ago now with 600k views. That means that 14hr * 60 minutes = 840 starting minutes.
It's 14 minutes long so if you jumped in and needed to start streaming immediately it could unicast for 59 seconds while caching the next minute from the multicast 59 seconds ahead of you. Youtube would also benefit from precaching a multicast of a video it determines you are likel
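The scheme in this thread (unicast only the catch-up gap, then ride the multicast) lends itself to a back-of-envelope model. The function and numbers are illustrative only, ignoring the multicast copies' own cost and last-mile details:

```python
def multicast_savings(viewers_per_interval, video_len_s, interval_s=60):
    """Back-of-envelope for the thread's scheme: multicast groups
    launch every interval_s seconds, so each viewer is unicast on
    average interval_s / 2 seconds of catch-up instead of the whole
    video, with one shared multicast copy carrying the rest.
    Returns the fraction of unicast stream-seconds saved."""
    pure_unicast = viewers_per_interval * video_len_s
    with_multicast = viewers_per_interval * (interval_s / 2)
    return 1 - with_multicast / pure_unicast

# the 14-minute video from the parent comment
print(round(multicast_savings(1000, 14 * 60), 3))  # ~0.964
```

Even this crude model shows why the idea keeps coming back for popular content, and also why it dies on contact with seeking and rewinding: every scrub puts the viewer back on unicast.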
Re: (Score:3)
Twenty years ago, we were going to unicast enough data to get people caught up to the closest multicast group and then they would just join that, making sure the video data only transited the link once per group.
If it costs Google (and one presumes backbone providers) that much money, why is everything still being unicast? Same for Netflix. Google isn't afraid of proposing new protocols. Even if ISP's terminated the streams as unicast for the last mile, it should still take a good chunk off the top if a video is getting millions of views a day.
They sort-of already do. They have cache servers in ISP access points all over the place and if you're viewing popular content you might well be streaming off those, so then the transfer to the cache is amortized across your usage and everybody else's usage, so that's a lot like multicast, except superior because it doesn't require simultaneous viewing. Even with all those efforts, it's still the biggest thing on the net, evidently. More compression helps with bandwidth for not as popular video and it impr
Deep-learning video = YouTube is haunted (Score:3)
Compression produces artifacts, as we all know. So what might video compressed through deep learning actually look like, especially at low bit rates? Instead of 8x8 pixel blocks, would you see semantic errors or "close enough" matches manifest as video 'ghosts'?
Imagine watching a time lapse of your favourite YouTuber gluing 100,000 matches together or whatever, but when they set it aflame an actual face shows up 'cause of a fluke. Religious types will say it's Jesus or the devil. Or suppose, whenever a single-bit error creeps into your stream, the algorithm produces the kind of f'ed up nightmare fuel from its training set that Google Deep Dream was so notorious for. Messing with AI could make for a fun party game.