AI Youtube Technology

DeepMind's AI Agent MuZero Could Turbocharge YouTube (bbc.com) 22

DeepMind's latest AI program can attain "superhuman performance" in tasks without needing to be given the rules. From a report: Like the research hub's earlier artificial intelligence agents, MuZero achieved mastery in dozens of old Atari video games, chess, and the Asian board games of Go and Shogi. But unlike its predecessors, it had to work out their rules for itself. It is already being put to practical use to find a new way to encode videos, which could slash YouTube's costs. [...] MuZero could soon be put to practical use too. Dr Silver said DeepMind was already using it to try to invent a new kind of video compression. "If you look at data traffic on the internet, the majority of it is video, so if you can compress video more effectively you can make massive savings," he explained.

"And initial experiments with MuZero show you can actually make quite significant gains, which we're quite excited about." He declined to be drawn on when or how Google might put this to use beyond saying more details would be released in the new year. However, as Google owns the world's biggest video-sharing platform -- YouTube -- it has the potential to be a big money-saver. DeepMind is not the first to try and create an agent that both models the dynamics of the environment it is placed in and carries out tree searches -- deciding how to proceed by looking several steps ahead to determine the best outcome. However, previous attempts have struggled to deal with the complexity of "visually rich" challenges, such as those posed by old video games like Ms Pac-Man.

This discussion has been archived. No new comments can be posted.

  • By compressing videos to only their actual contents.

    I mean extracting the semantic structures. And turning them into a graph (of graphs) and then text.

    Because while the average ScienceClick or 3blue1brown video cannot be compressed that much,
    your average YouTube react trash, taste test trash, or shitty tutorial video could be condensed to a single sentence, or a paragraph, that you could read and skip and backtrack at your own pace, without watching five minutes of ads, 20 minutes of empty blahblah and 10 m

    • Summarization is certainly researched as well, although it's quite different: what makes a desirable summary is very subjective (how long should it be? What are the 'important' parts?), and there is no existing huge corpus of training data. With video compression it's simple: the output must approximate the input.
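The point that "the output must approximate the input" gives compression an automatic, objective training signal that summarization lacks. A minimal Python sketch of that idea, using made-up toy pixel values (all the numbers here are assumptions for illustration, not anything from DeepMind's work):

```python
# Toy illustration: compression quality has an objective score, unlike summarization.
# "original" and "reconstructed" are hypothetical 6-pixel frames.
original      = [10, 12, 14, 200, 14, 12]
reconstructed = [10, 12, 15, 198, 14, 12]  # e.g. output of a lossy codec

# Mean squared error: the lower it is, the better the output approximates the input.
mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
print(mse)
```

A learned codec can optimize a loss like this directly from raw video, with no human labels; there is no comparably simple loss for "was this a good summary?".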
      • That's why barefoot is specifically not talking about summarization. A reduction to the information entropy of the signal is not a summarization.

        JPEG doesn't summarize, but it was a non-A.I. attempt at reduction. Highly primitive for sure, as it effectively only uses an order-0 model, but we are still talking about the same thing. Modern hand-crafted order-0 models are far better, but still quite limited.

        There exists information in the data that is qualitatively noise. The video compressors already handle
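To make the "order-0 model" remark concrete: an order-0 coder models each symbol independently by its overall frequency, ignoring context, so the best it can achieve is the order-0 Shannon entropy of the data. A small sketch (the byte strings are hypothetical examples, and this is the entropy bound, not JPEG itself):

```python
import math
from collections import Counter

def order0_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte under an order-0 model:
    each byte is coded using only its overall frequency, ignoring context."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(order0_entropy(b"aaaaaaab"))        # skewed frequencies -> well under 8 bits/byte
print(order0_entropy(bytes(range(256))))  # uniform bytes -> 8.0, incompressible
```

Higher-order models condition each symbol on its neighbours, which is one reason modern codecs (and, potentially, learned ones) can beat this bound.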
      • by Falos ( 2905315 )

        >>What are the 'important' parts?

        That sounds hard.

        "What are useless bullshit parts" sounds like something AI can get started identifying. Won't chop as much (abridged Mythbusters chopped out what, 70%? more?) but it should be dramatically easier to pattern out.

        Won't get the support codec work does, of course. Can't have it hurting the youtube "business model".

    • And that is why for most topics I prefer text to video. More information density, less fluff. Less wasted time.
    • This might sound dumb to anyone who hasn't tried searching for a fix for an obscure IT problem, only to find a shitty 5-minute YouTube video that could be condensed into two sentences.
  • In a dystopian future there will only be two movies. The Black Movie and The White Movie. Even without AI, data compression will be so turbocharged that both movies can be transmitted in their entirety by the transmission of just a single bit.

    • Well until that one bit gets DMCAed and then the entire remaining catalogue of films can be transmitted with no bits at all.

  • It is already being put to practical use to find a new way to encode videos, which could slash YouTube's costs.

    Performance is a real issue: if you aren't using codec-specific hardware acceleration, you end up spending a lot of computational power decoding video. There is a reason AV1 hasn't been pushed out as the de facto codec for video, and that's the computational power required to decode it. This translates into battery drain, smartphones heating up, and some machines being unable to decode video in real time, all of which are highly undesirable.

    Additionally, unless the encoding is generic en

  • by bill_mcgonigle ( 4333 ) * on Wednesday December 23, 2020 @04:52PM (#60860938) Homepage Journal

    Twenty years ago, we were going to unicast enough data to get people caught up to the closest multicast group and then they would just join that, making sure the video data only transited the link once per group.

    If it costs Google (and one presumes backbone providers) that much money, why is everything still being unicast? Same for Netflix. Google isn't afraid of proposing new protocols. Even if ISP's terminated the streams as unicast for the last mile, it should still take a good chunk off the top if a video is getting millions of views a day.

    • Microsoft Mediaroom did that: a unicast quick-start of playback for channel surfing, then a join to the multicast for continued viewing. This is OK for linear TV and scheduled events, but it is useless for on-demand. Switch multicast capacity and join latency further up the network can be disappointing, and packet-buffer replication and tenure management are non-trivial.
    • by tlhIngan ( 30335 )

      The problem is multicast only works if everyone sees the same video at the same time. If one person even wants to rewind it 30 seconds to re-watch something, they have to be broken off the multicast and back to unicast.

      The other option would be to have the players cache playback, but if you just joined you wouldn't have a history buffer.

      You would contemplate doing this for say, broadcast TV, but then cloud DVRs letting you rewind screw it all up too. (Cloud DVRs - what a joke).

      • They could always torrent if two people aren't watching at the same time.

        But for popular videos, let's use a video Linus Tech Tips posted 14 hours ago, now with 600k views. That means 14 hr * 60 minutes = 840 starting minutes.

        It's 14 minutes long so if you jumped in and needed to start streaming immediately it could unicast for 59 seconds while caching the next minute from the multicast 59 seconds ahead of you. Youtube would also benefit from precaching a multicast of a video it determines you are likel
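Working the parent's back-of-envelope numbers through (all figures are the example's assumptions: 600k views over 14 hours, one multicast "start" per minute):

```python
views = 600_000          # assumed views in the first 14 hours
hours = 14
start_slots = hours * 60                # one multicast start per minute
viewers_per_slot = views / start_slots  # average viewers sharing each start slot

print(start_slots)              # 840 starting minutes, as in the parent
print(round(viewers_per_slot))  # ~714 viewers per slot on average
```

Even as a crude average, hundreds of viewers per one-minute slot suggests substantial sharing is possible; the real gain would depend on how view starts actually cluster over time.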

    • by NoSig ( 1919688 )

      Twenty years ago, we were going to unicast enough data to get people caught up to the closest multicast group and then they would just join that, making sure the video data only transited the link once per group.

      If it costs Google (and one presumes backbone providers) that much money, why is everything still being unicast? Same for Netflix. Google isn't afraid of proposing new protocols. Even if ISP's terminated the streams as unicast for the last mile, it should still take a good chunk off the top if a video is getting millions of views a day.

      They sort-of already do. They have cache servers in ISP access points all over the place and if you're viewing popular content you might well be streaming off those, so then the transfer to the cache is amortized across your usage and everybody else's usage, so that's a lot like multicast, except superior because it doesn't require simultaneous viewing. Even with all those efforts, it's still the biggest thing on the net, evidently. More compression helps with bandwidth for not as popular video and it impr

  • by ElrondHubbard ( 13672 ) on Wednesday December 23, 2020 @07:14PM (#60861286)

    Compression produces artifacts, as we all know. So what might video compressed through deep learning actually look like, especially at low bit rates? Instead of 8x8 pixel blocks, would you see semantic errors or "close enough" matches manifest as video 'ghosts'?

    Imagine watching a time lapse of your favourite YouTuber gluing 100,000 matches together or whatever, but when they set it aflame an actual face shows up 'cause of a fluke. Religious types will say it's Jesus or the devil. Or suppose, whenever a single-bit error creeps into your stream, the algorithm produces the kind of f'ed-up nightmare fuel from its training set that Google Deep Dream was so notorious for. Messing with AI could make for a fun party game.
