Media Technology

The End of Video Coding? (medium.com) 137

An anonymous reader writes: Netflix's engineering team has an insightful post today that looks at how the industry is handling video coding, the differences in their methodologies, and the challenges newcomers face. An excerpt, which sums up where we are:

"MPEG-2, VC1, H.263, H.264/AVC, H.265/HEVC, VP9, AV1 -- all of these standards were built on the block-based hybrid video coding structure. Attempts to veer away from this traditional model have been unsuccessful. In some cases (say, distributed video coding), it was because the technology was impractical for the prevalent use case. In most other cases, however, it is likely that not enough resources were invested in the new technology to allow for maturity.

"Unfortunately, new techniques are evaluated against the state-of-the-art codec, for which the coding tools have been refined from decades of investment. It is then easy to drop the new technology as "not at-par." Are we missing on better, more effective techniques by not allowing new tools to mature? How many redundant bits can we squeeze out if we simply stay on the paved path and iterate on the same set of encoding tools?"

  • by MBGMorden ( 803437 ) on Wednesday June 13, 2018 @11:27AM (#56777260)

    Should they just adopt new and inferior solutions and hope for the best?

    To me this is the "science" part of Computer Science. Do research into new algorithms and methods of video encoding, but it would be stupid to start adopting any of that into actual products or live usage until and unless it tops the more traditional methods in performance.

    • by AHuxley ( 892839 )
      Build new networks that are not just paper insulated wireline?
      Usable bandwidth to the dwelling would allow for 4K, HD and other video resolutions on demand.
      • by war4peace ( 1628283 ) on Wednesday June 13, 2018 @11:46AM (#56777384)

        In many parts of the world, that's already standard. I got gigabit fiber to my home for cheap in a 3rd world country.
        So as far as my countrymen and I are concerned, we're good, even for 4K@60fps.

        • by AHuxley ( 892839 )
          That's all that needs to be done. The codecs just work. The existing CPU and GPU products are ok for 4K. Sound is ok.
          It's not the codec.
          It's not like having to work on Multiple sub-Nyquist sampling encoding https://en.wikipedia.org/wiki/... [wikipedia.org] .
        • To be fair, third-world countries have a slight advantage in that their infrastructure is all new and mostly modern, whereas the US is trying to piggyback on a lot of old POTS and first-gen fiber infrastructure.

          In a lot of cases, developing countries are completely skipping copper infrastructure, and building out wireless systems.

        • I think his point was that first world countries have been (and still are in many areas) hamstrung by having a wired telephone infrastructure in place for a century or more, and by the reluctance of local governments and providers to replace it with something different. Similar to the way current encoding methods are ingrained.

          And yes, emerging countries often end up with a better data infrastructure than the older parts of first world countries. (I'm told cell service is often better too.) I live in a recent subu

      • Build new networks that are not just paper insulated wireline?

        Usable bandwidth to the dwelling would allow for 4K, HD and other video resolutions on demand.

        Yours isn't? Weird.

        I live in Europe and I get 300MBs symmetrical with 5ms ping for $30 a month. It's fiber all the way to my router.

        All speeds measured and confirmed: http://www.speedtest.net/resul... [speedtest.net]

        • by Anonymous Coward

          Why do you pay in Dollars?

        • by GNious ( 953874 )

          Yours isn't? Weird.

          I live in Europe and I get 300MBs symmetrical with 5ms ping for $30 a month. It's fiber all the way to my router.

          All speeds measured and confirmed: http://www.speedtest.net/resul... [speedtest.net]

          I live in the "capital" of Europe (Brussels), and we get random outages, phone lines randomly swapped around, regular drops in connection speed, and the max option is 100/15 Mbit (seemingly recently upped from 50/15).

          Unlimited data, unless you use more than 100 GB/month; beyond that it's add-ons or being throttled to 5 Mbit.

          Phone+inet+IPTV is delivered via copper (from a box down the road), and you can see the wiring dangling off of buildings even in the more well-off areas.

          Oddly enough, when performance drops, visiting Speedtest.ne

      • This. Lossy compression techniques will become less and less important in the future.
        • Not any time soon, because bandwidth demands keep going up. People's connections are catching up with 4K now, but 8K is already on the way, and that means another factor-of-four increase in bandwidth. After that I'm sure we'll see 12K or 16K. There may not be much point to those resolutions for normal movie watching, but they will be important for VR and for applications like video walls, where people might walk up to the display to see more detail in a small part of a scene.
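
          Just to spell out the arithmetic behind that factor of four (simple pixel counting; frame rate and bit depth held constant):

          resolutions = {"4K": (3840, 2160), "8K": (7680, 4320), "16K": (15360, 8640)}
          base = resolutions["4K"][0] * resolutions["4K"][1]
          for name, (w, h) in resolutions.items():
              print(f"{name}: {w * h / base:.0f}x the pixels of 4K")
          # prints: 4K: 1x, 8K: 4x, 16K: 16x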
    • by Luthair ( 847766 )
      Maybe Netflix should put their money where their mouth apparently is - hire a very large codec development team and spend a couple of decades going down these potential rabbit holes...
      • by AHuxley ( 892839 )
        The best codec still needs some new bandwidth. Looking at every part of every "frame" of a 4K movie or TV series again and again with new maths is not going to make the existing bandwidth to a dwelling any better.
        Want a good-looking 4K movie? That's going to need a better network. The movies and shows are ready. The payment system works. New content is getting paid for and making a profit.
        Bandwidth and the network are the last part that needs some support.
        • by Anonymous Coward

          Want a good-looking 4K movie? That's going to need a better network.

          Not if you're ok with it taking longer to download than to play.

          I keep laughing at streaming while also accepting that maybe, some day, it will be good enough for me. "Some day" keeps not happening, though. And the thing is... I just don't care. With a 7 Mbps connection I already download way more video than I'll ever have time to watch.

          Time to download, I got plenty of. (No, wait: it's the computer who has plenty of that; I don't spend even

          • by jabuzz ( 182671 )

            The BBC are streaming the World Cup matches from Russia in 4K HDR at 50fps and are advising that a 40 Mbps connection is required. I am golden with an 80/20 Mbps VDSL2 line that actually delivers 80/20 Mbps, and I presume a compatible TV. At least I could watch Blue Planet 2 in 4K HDR over the Christmas period. So I can watch England go out on penalties again :-)

            Perhaps you don't care about sport or other live events, but lots of people do.

            The reality is that we have the means right now to make new codecs unneces

          • Time to download, I got plenty of. (No, wait: it's the computer who has plenty of that; I don't spend even a second on this.) Time to watch? That is the bottleneck.

            We can tweet more than 140 characters but I still don't have a 30 hour day?! Fuck you, engineers!!

            For Netflix on my PC, I typically watch things at 1.2x to 1.4x speed.
            For YouTube, I do 1.25x or 1.5x. Also be sure to use the J and L shortcuts in YouTube for easy skipping!

            You get used to the faster speed quite quickly, and it has very little impact on the overall presentation. Watching stuff at 1x now feels so fucking slow that I can't stand it.

            • Watching stuff at 1x now feels so fucking slow that I can't stand it.

              This reminds me of the YouTube comments on a lot of songs: "Hey, at 1.75x this is still pretty decent" and... I just don't know what to think. Is the point of art to consume the maximum quantity? Or is it about the present experience while interpreting the art?

              I happen to heavily favour the latter. I used to always think ahead; whatever I was doing lost its joy because I was thinking about the future, about when I would stop doing it. As if life were some equation that I had to maximize to get the most out of it...

              These

              • Depends on what you're watching.

                If I need a technical walk-through for a repair or procedure, speeding up all the crap at the start is great, and I can drop it to normal or pause when I get to the part that's relevant to me. Sometimes you don't want to outright skip around for fear of missing some key note or prereq. Likewise, if I'm watching some news article or interview, I don't care about the presentation. I just want the info.

                For a recent example, I watched most of the E3 presentations after the

        • Bandwidth also follows its own progression curve that looks a little bit like Moore's law. For a long time, we couldn't speed that along any faster than the underlying technological improvements and deployment could occur, so it made sense to target our compression algorithms for improvement. Now we're coming out on the other side, where we've got more than enough bandwidth to consume video. While improvements to our compression algorithms will mean we save costs, the additional increases in bandwidth avail
      • They actually did. (Score:5, Informative)

        by DrYak ( 748999 ) on Wednesday June 13, 2018 @11:57AM (#56777478) Homepage

        The "hired very large codec dev team" they were contributing to is called "AOMedia - Alliance for Open Medi", and one of the potential rabbit hole that got considered and worked on was Daala by Xiph (tons of new crazy idea, including stuff like extending block as lapped blocks, a perceptual vector quantisation that doesn't rely on residual coding, etc.)

        At the end of the day, the first thing that came out of AOMedia, by combining work such as Xiph's Daala, Google's VP10 and Cisco's Thor, is AV-1.
        It's much tamer than what it could have been, but it still incorporates some interesting ideas.
        (They didn't go all the way to using the ANS entropy coders suggested more recently by experiments such as Daala, but they at least replaced the usual arithmetic encoder with Daala's range encoder.)

        By the time AV-2 gets out, we should see some more interesting stuff.

        This was probably meant as a rousing speech to encourage developers to go crazy and try new stuff.

      • by harrkev ( 623093 )

        Actually, that is what universities are for: going down rabbit holes.

        I am sure that there are tons of thesis papers just waiting to be written on this subject. Coming up with new methods seems like a great way to get a PhD, even if it is not significantly better -- it just has to be different.

    • by skids ( 119237 ) on Wednesday June 13, 2018 @11:43AM (#56777358) Homepage

      Should they just adopt new and inferior solutions and hope for the best?

      I think the idea here is that the follow-up science/engineering to academic initiatives doesn't actually get funded/done because the unoptimized first cut of a new methodology isn't instantly better than the state of the art. It's basically arguing that the technology is undergoing path dependence [wikipedia.org], which is no big surprise as it happens all the time in lots of areas.

      That said, the AV crowd has sure made a complete and utter mess of their formats. Piles of CODECs all with various levels of support for piles of video modes all bundled into piles of meta-formats with piles of methods for syncing up audio/ancillary/multistream... my eyes glaze over pretty quickly these days when faced with navigating that maze. Having options is awesome. Leaving them perpetually scattered around on the floor like a bunch of legos... not so much.

      (Still waiting to see someone with serious genetic-algorithms chops tackle lossless CODECs... there's a ready-made problem with a cut-and-dried fitness function right there.)
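
      As a purely illustrative sketch of that idea (nothing to do with any real codec): evolve the coefficients of a toy linear predictor and score each candidate by the zlib-compressed size of its residuals, i.e. exactly the kind of cut-and-dried fitness function mentioned above.

      import random, zlib
      import numpy as np

      def fitness(coeffs, signal):
          # Predict each sample from the previous two, compress the residuals,
          # and use the compressed size in bytes as the (lower-is-better) fitness.
          pred = coeffs[0] * signal[1:-1] + coeffs[1] * signal[:-2]
          residual = (signal[2:] - pred).astype(np.int16)
          return len(zlib.compress(residual.tobytes()))

      def evolve(signal, pop_size=30, generations=50):
          pop = [np.random.uniform(-2, 2, 2) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda c: fitness(c, signal))
              survivors = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  children.append((a + b) / 2 + np.random.normal(0, 0.1, 2))  # crossover + mutation
              pop = survivors + children
          return min(pop, key=lambda c: fitness(c, signal))

      if __name__ == "__main__":
          # Toy "signal": a noisy ramp, stored losslessly either way.
          signal = np.cumsum(np.random.randint(-3, 4, 10000)).astype(np.int16)
          best = evolve(signal)
          baseline = len(zlib.compress(signal.tobytes()))
          print("raw zlib:", baseline, "bytes; with evolved predictor:", fitness(best, signal), "bytes")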

      • by AmiMoJo ( 196126 ) on Wednesday June 13, 2018 @12:11PM (#56777594) Homepage Journal

        Part of the problem is that we have hardware acceleration for certain operations, and if codecs want to do stuff outside that then performance can become an issue for playback. Most streaming devices don't have enough CPU power in their little ARMs to handle decoding; it has to be hardware accelerated by the GPU.

        Then again if anyone can argue successfully for hardware changes it's Netflix.

        • by swb ( 14022 )

          The dependence on hardware decoding is probably a major factor. Encoders want the largest possible audience and will always defer to the coding schemes with the least system impact and best performance, which will end up being hardware decoders.

          There's a lag between the development of a new coding scheme and its widespread availability as actual deployed silicon. The investment in silicon is dependent on encoder adoption and popularity, which may lag encoding development.

          Suddenly it looks like a no-win si

        • Well, hardware companies are on board with the Alliance for Open Media, and so they've made sure during the development of AV1 that the codec is hardware-decoding friendly.
    • Well if you read the article and not the summary, the authors are discussing that there doesn't seem to be any fundamental change coming anytime soon. Sure, newer codecs are coming out, but they all take the same approach. It's as if we were discussing public key cryptography and the algorithms used. Imagine if RSA were the only real technique, the only new changes coming out were merely larger keys, and other techniques like elliptic curves didn't exist.
      • Also a big part of the article is the disconnect between research and development and client frameworks. The 3x vs 100x quote especially - that someone like Netflix has enormous computing power to throw at encoding if it can save them bandwidth shouldn't be a surprise to someone developing codecs, but apparently it was.
      • by slew ( 2918 ) on Wednesday June 13, 2018 @01:00PM (#56777946)

        Well if you read the article and not the summary, the authors are discussing that there doesn't seem to be any fundamental change coming anytime soon. Sure, newer codecs are coming out, but they all take the same approach. It's as if we were discussing public key cryptography and the algorithms used. Imagine if RSA were the only real technique, the only new changes coming out were merely larger keys, and other techniques like elliptic curves didn't exist.

        I think your analogy is somewhat flawed. Public key cryptography was in somewhat of the same "rut" as video codecs. Video codecs have been stuck on hybrid block techniques, and public key cryptography has been stuck using modular arithmetic (RSA and elliptic curves both use modular arithmetic, although they depend on the difficulty of inverting different mathematical operations in modular arithmetic).

        There are of course other hard math problems that can be used in public key cryptography (lattices, knapsack, error-correcting codes, hash-based) and they languished for years until the threat of quantum computing cracking the incumbent technology...

        Similarly, I predict hybrid block techniques will likely dominate video encoding until a disruption (or in mathematical catastrophe theory parlance a bifurcation) shows the potential for being 10x better (because 1.2x or 20% better doesn't even pay for your lunch). It doesn't have to be 10x better out of the gate, but if it can't eventually be 10x better, why spend time optimizing it as much as hybrid block encoding. Nobody wants to be developing something that doesn't have legs for a decade or more. The point isn't to find something different for the sake of difference, it's to find something that has legs (even if it isn't better today).

        The problem with finding something with "legs" in video encoding is that we do not fully understand video. People don't really have much of a theoretical framework to measure one lossy video compression scheme against another (except for "golden eyes", which depend on what side of the bed you wake up on). Crappy measures like PSNR and SSIM [wikipedia.org] to estimate the loss ratio vs entropy are still being used because we don't have anything better. One of the reasons people stick to hybrid block coding is that the artifacts are somewhat known even if they can't be measured, so it is somewhat easier to make sure you are making forward progress. If the artifacts are totally different (as they would likely be for a different lossy coding scheme), it is much more difficult to compare if you can't objectively measure it to optimize it (the conjoint analysis problem).

        So until we have better theories about what makes a better video codec, people are using "art" to simulate science in this area, and as with most art, it's mostly subjective; it will be difficult to convince anyone of a 10x potential if it is only 80% today. If people *really* want to find something better, we need to start researching more on the measurement problem and less on the artistic aspects. It's not that people haven't tried (e.g., VQEG [bldrdoc.gov]), but simply very little has come from the efforts to date and there has been little pressure to keep the ball moving forward.

        In contrast, the math of hard problems for public key cryptography is a very productive area of research and the post-quantum-encryption goal has been driving people pretty hard.

        Generally speaking, if you measure it, it can be improved and it's easier to measure incremental progress than big changes on a different dimension.
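
        For reference, a minimal sketch of the PSNR measure criticized above, computed for two same-sized 8-bit frames held as NumPy arrays (illustrative only; real evaluations average over whole sequences and pair it with SSIM or perceptual metrics):

        import numpy as np

        def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
            """Peak signal-to-noise ratio in dB between two same-shaped 8-bit frames."""
            mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")  # identical frames
            return 10.0 * np.log10(peak ** 2 / mse)

        # Example: a frame vs. a noisy copy of itself.
        frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
        noisy = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
        print(f"PSNR: {psnr(frame, noisy):.2f} dB")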

        • I think your analogy is somewhat flawed. Public key cryptography was in somewhat of the same "rut" as video codecs. Video codecs have been stuck on hybrid block techniques, and public key cryptography has been stuck using modular arithmetic (RSA and elliptic curves both use modular arithmetic, although they depend on the difficulty of inverting different mathematical operations in modular arithmetic).

          My understanding of ECC is that it does not use the same modular arithmetic (g^a mod n and g^b mod n) as the main technique, and instead relies on an elliptic curve (y^2 = x^3 + ax + b, where g is a random point on the curve), which results in shorter keys.

        • The current approaches are very mathematical, looking at pixels. But what ends up in our brains is high-level symbols. If an AI can get at those, then extremely tight coding is possible.

          For example, it takes a lot of bytes to represent "a man walking under a tree". But that phrase only took 50 bytes. The reconstructed video does not have to have the same type of tree as the original, just some sort of tree.

          That's taking it to the extreme. But if an AI can recognize the types of objects in a video, and p

          • I agree, every show and movie should have the same "man" and "woman", the same "tree" and "spaceship." Just like I can't wait for all painted art to depict the same scene. That way we can have more direct comparisons of things, and remove the subjective element entirely! Progress!
          • That all said, massive increases in available bandwidth make this rather pointless.

            True, but a factor is the cost of the bandwidth and who controls it. One of the issues is the ISPs: technically I can get gigabit speeds to my house, as the technology exists. Realistically it would cost me a lot, and right now none of the ISPs in my area offer it. For cellular ISPs, there have to be massive upgrades to the infrastructure to support higher speeds. This is the problem that companies like Netflix face; there is a wide discrepancy between bandwidths on different networks for their customers. O

    • by AmiMoJo ( 196126 )

      They aren't talking about adopting anything, merely not abandoning research into it so early due to less than stellar results.

    • New vs old (Score:5, Insightful)

      by DrYak ( 748999 ) on Wednesday June 13, 2018 @12:18PM (#56777654) Homepage

      but it would be stupid to start adopting any of that into actual products or live usage until and unless it tops the more traditional methods in performance.

      The logic behind the article is that the new techniques will never top the more traditional ones (or at least have no way of getting there in the current state of affairs), because most of the resources (dev time, budget, etc.) are spent optimizing the "status quo" codecs, and not enough is spent on the newcomers.
      By the time something interesting comes up, the latest descendant of the "status quo" will have been optimized much further.
      It doesn't matter that the PhD thesis "Using Fractal Wavelets in non-Euclidian spaces to compress video" shows some promising advantages over MPEG-5: it will not get funded, because by then "MPEG-6 is out" and is even better just from minor tweaking everywhere.
      Thus new ideas like a PhD thesis never get funded and explored further, and only further tweaking of what already exists gets funded.

      I personally don't agree.

      The most blatant argument is the list itself.
      With the exception of AV-1, the list is exclusively the lineage of block-based algorithms: MPEG-1 and its evolutions (up to HEVC), and things that attempt to do something similar while avoiding the patents (the VPx series by On2, then Google).

      It completely ignores stuff like Dirac and Schroedinger:
      a completely different approach to video compression (based on wavelets) that got funded, developed and is actually in production (by no less than the BBC).

      It completely ignores the background behind AV-1 and how it relates to Daala.

      AV-1 was designed from the ground up not as an incremental evolution of (or patent circumvention around) HEVC; it was designed to go in a different direction (if nothing else, to avoid the patented techniques of MPEG, since avoiding patent madness was the main target behind AV-1 to begin with).
      It was done by AOMedia, where lots of groups poured in resources (including Netflix themselves).

      Yes, on one side of the AV-1 saga, you have entities like Google that donated their work on VP10 to serve as a basis - so we're again at the "I can't believe it's not MPEG(tm)!" clones.

      But among other code and technique contributions (besides Cisco's Thor, which I'm not considering for the purpose of my post), there's also Xiph, who provided their work on Daala.
      There's some crazy stuff that Xiph has been doing there: stuff like replacing the usual "block"-based compression with slightly different "lapped blocks", more radical stuff like throwing away the whole idea of "coding residuals after prediction" and replacing it with "Perceptual Vector Quantization", etc.
      Some of these weren't kept for AV-1, but other crazy ideas actually made it into the final product (the classic binary arithmetic coding used by the MPEG family was thrown away in favor of integer range encoding, though they didn't go as far as using the proposed alternative, ANS - Asymmetric Numeral Systems).

      Overall, incrementally improving on MPEG (MPEG-1 -> MPEG-2 -> MPEG-4 ASP -> MPEG-4 AVC/H.264 -> MPEG-H HEVC/H.265) gets hit hard by the law of diminishing returns. There's only so far that you can reach by incremental improvement.

      Time to get some new approaches.

      Even if AOMedia's AV-1 isn't all that revolutionary, that's more out of practical considerations (we need a patent-free codec available as fast as possible, including quickly in hardware, so better select things that are known to work well) than for lack of having tried new stuff.
      And even if some of the more out-of-the-box experiments didn't end up in AV-1, they might end up in some future AV-2 (Xiph keeps experimenting with Daala).

      • It doesn't matter that the PhD thesis "Using Fractal Wavelets in non-Euclidian spaces to compress video" shows some promising advantages over MPEG-5: it will not get funded, because by then "MPEG-6 is out" and is even better just from minor tweaking everywhere.

        I did A/V compression development work once upon a time, and I can tell you that almost 20 years ago we were already looking into 3D wavelet functions for video coding. The problem comes in that it's vastly less computationally and memory efficient than the standard iFrame/bFrame block decoders, and it messes up WAY worse if there's the slightest disruption in the stream.

        I mean, sure, if an iFrame gets hosed you lose part of a second of the video, but you can at least still kind of see what it is with a

    • Seconded. Let them fuck about with it in the lab all they like.

      There's already more than enough half-assed ones (or rather, ones with half-assed player support) out in the wild.

    • That's right, irrelevant "staying in the news" crud. Better tools will erupt when time and technology allow, like matter from the quantum foam. I love my metaverse. The universe HATES vacuums; if there's a spot, betcha a fiver it will take half a split second for the contenders to be known.
  • Video codecs are not the only example of this, there are many.

    • Yep. There are working models and production engines that, given a century of development, probably would have replaced pistons.
  • by Anonymous Coward

    There's nothing "insightful" about saying "there may be something better out there."

    The insightful thing would be to find or create it.

  • by SuperKendall ( 25149 ) on Wednesday June 13, 2018 @11:37AM (#56777318)

    This is one case where the actual article is well worth reading, with a ton of links off to other areas to explore and more interesting detail than the summary presents. Well worth taking a look if you are at all interested in video compression and where the state of the art is going.

    • Actually the article is a lot less interesting. It's about the lazy man's approach to video coding. Can't pass the video standards tests? No problem. Base them on human perception. As long as people "believe" they are seeing HD, the codec does not need to pass the test that says it delivers HD.

      fucking twaddle.
      • That is really the problem with movies in general; they are always considering how humans perceive them. It is almost as if that were literally all that matters.
      • You are missing the point. The point is that the metric being used was chosen for simplicity instead of accuracy, and because the alternatives were expensive and time-consuming. Over time everything optimized around that metric, to the point where it's prohibitively difficult to make any gains. Now that we have the ability to create better metrics we should, because otherwise we risk overlooking actual performance gains because they aren't significant on the old metric.
      • by alvinrod ( 889928 ) on Wednesday June 13, 2018 @12:35PM (#56777782)
        If humans truly are incapable of discerning the difference in a controlled study, doesn't that suggest that the test is flawed because it is being too strict for some arbitrary reason?

        To better illustrate what I mean, say I want to buy hosting for a service and want 99% uptime. However, the person considering providers throws out any without guarantees of 99.999% uptime. They're not actually doing what I want, and I may end up paying more than I would otherwise need to for no good reason. Or suppose I have a machine that judges produce and will remove anything that it thinks shoppers won't purchase (as a result of appearance, bruising, etc.) so that I don't waste resources shipping it to a store that will eventually have to throw it out as unsold. I want that machine to be as exact as possible, because if it's being more picky than the shoppers, that's wasted produce I could otherwise be selling.
    • Summary missed the big "aha" moment of the article, which was that academic researchers in new encoding techniques had been thinking that increasing the complexity of their algorithms by 3X was a hard limit, whereas production practitioners such as Netflix, Facebook, and Hulu were thinking that a 100x increase in complexity was the upper limit.

  • Huh? (Score:3, Insightful)

    by Anonymous Coward on Wednesday June 13, 2018 @11:38AM (#56777320)

    Unfortunately, new techniques are evaluated against the state-of-the-art codec, for which the coding tools have been refined from decades of investment. It is then easy to drop the new technology as "not at-par." Are we missing on better, more effective techniques by not allowing new tools to mature?

    What a stupid statement.

    Is the expectation we adopt crappy replacements to "allow them to mature?"

    They can mature until they're as good as what we have, not replace it with something which doesn't work to give it room to grow into something which doesn't suck.

    Either you have a working replacement, or you have a good idea and a demo.

    "Not-at-par" means the latter -- you don't have a mature product, and nobody is going to adopt it if it can't do what they can do now. Saying "ti will eventually be awesome" tells me that eventually we'll give a damn, but certainly not now.

    It's bad enough I have to fight my vendors that I'm not accepting a beta-rewrite and suffering through their growing pains to get to the mature product they're trying to replace. I'm not your fucking beta tester, so please don't suggest I grab your steaming turd and live with it until you make it not suck.

    Boo hoo, immature technologies which don't cover what the technology they're trying to replace does aren't being allowed to blossom into something useful. Make it useful, and then come to us.

    • Is the expectation we adopt crappy replacements to "allow them to mature?"

      I think the argument is that if Netflix/Google/Facebook/Amazon/MS/Apple want to make advances in video encoding, they have to be willing to have a team that will produce work consistently worse than the state of the art for years, and hope it catches up and exceeds it. It's a huge cost and leap of faith; hard to imagine it happening.

  • If the math says a new technique is better, it won't matter if the first implementation isn't good. Someone will fix the implementation and then it will match the mathematically predicted performance (or the guy who did the math will fix his error).

  • by Anonymous Coward

    Let's say for argumentation that a new and much more efficient video codec was just invented.

    The trouble is that it will immediately be locked up behind patents, free implementations will be sued, and it'll be packed with DRM and require per-play online-permission.

    Our main problem isn't technology, it's the legal clusterfuck that has glommed onto the technology landscape.

  • H.264 was king. Now we've got H.265 and AV1 which have not entirely replaced H.264 due to compatibility purposes, but have still gained significant traction.

    On the audio side, AAC replaced MP3, and Opus is set to replace AAC. Opus can generally reach the same quality as MP3 in less than half the bits!

    So I don't see this stagnation they talk about. These algorithms are generally straightforward, and codec devs, even if they don't have a hyper-efficient implementation yet, will be able to see the benefit -- it's just a matter of investing the time to develop high-quality code and hardware for it.

    • by Anonymous Coward

      So I don't see this stagnation they talk about.

      If you look at codec life cycles, you'll see we have about 10 years between each "gen". H264 was released in 2003 but only became mainstream about 5 years later. H265 is following the same pattern. It was presented in 2013, and nowadays it is on its way to becoming mainstream.

      At this pace, Netflix knows that a new standard will probably see the light of the day only by 2023, and become mainstream by 2028. Not stagnant, but it surely isn't blistering fast.

  • by UnknowingFool ( 672806 ) on Wednesday June 13, 2018 @11:52AM (#56777428)

    Seriously, the title and summary would have been much better and easier to understand if they had added a single word, "Research": "The End of Video Coding Research". The article discusses that while video coding use is pretty much everywhere, there hasn't been much progress or change made in newer standards despite lots of interest and investment. New codecs are coming out, but they are all variations of the "block-based hybrid video coding structure" of MPEG-2/H.264/VP9, etc. Netflix is one company that would benefit from newer encoding standards.

  • by rsilvergun ( 571051 ) on Wednesday June 13, 2018 @11:56AM (#56777462)
    They're getting more power efficient, but not much faster. I'm no expert, but from what I could tell the revolution in video encoding came because client hardware got a _lot_ faster at decoding high-def video. That led to new codecs to take advantage of the increased power. I remember in 2005 needing special software to decode a 1080p stream on my GTX 240 video card and Athlon x64. By 2013 my phone could do it with VLC.
  • by darkain ( 749283 )

    We should invest a shitton of money in order to create a new codec that everyone can use and that benefits everyone... You literally just described AV1. The entire process of it "being inferior while being iterated until better" also directly describes the past few years of AV1, until recently, when it started to pull ahead in the compression-vs-quality game compared to other leading codecs.

  • We should just declare one of the current schemes as "good enough", use it long enough for all relevant patents to expire, universally implement it on all devices, and serve it by default from almost all media sources.

    It would be kind of like mp3 and jpg, and it would lower everybody's stress level.

  • by StandardCell ( 589682 ) on Wednesday June 13, 2018 @12:04PM (#56777534)
    At some point, you have to start asking why you need certain quality of experience in limited environments, and what infrastructure it takes to get there.

    The biggest ongoing cost for streaming movies today is CDN storage, in the sense of having enough bitrates and resolutions to be able to accommodate all target devices and connection speeds. As much as people would like to deliver an HD picture to a remote village in the Philippines over a mobile connection on a feature phone, it isn't feasible at the moment for two reasons: they don't need or care about that level of experience, and it isn't technically feasible. The goal of CDN storage is to ensure the edge delivers the content, and the industry has toyed with real-time edge transcoding/transrating to address some of these issues, but fundamentally we are dropping asymptotically to a point on visual quality for a given bitrate and amount of computing power that a codec can deliver at the playback device.

    In that sense, I'm shocked that Anne's post didn't mention Netflix's own VMAF [github.com], which is a composite measure of different flavors of PSNR, SSIM and some deep learning. But even here, the fundamental is that we are still using block-based codecs for operations simply because of the fundamental nature of most video, i.e. objects moving around on a background. I'm also shocked that Anne didn't discuss alternative coding methods like wavelet-based (e.g. JPEG 2000), but - again - these approaches have their own limitations and don't address interframe encoding in the same way that a block-based codec can. If there was a novel approach to coding psychovisually-equivalent video that would address computing power, bitrate and quality reasonably, I believe it would have been brought forward already.

    I think 5G deserves a big mention here that was lacking in Anne's post, because faster connections may solve many of the types of issues that affect perceived visual quality at low bitrates. Get more bandwidth, and you have a better experience. Hopefully 5G will proliferate quickly, but this will be tricky in the developing world where its inherently decentralized nature and the political environments will make its ubiquitous deployment a serious challenge.

    In the end, we're all fighting entropy, particularly when it comes to encoding video. Our ability to perceive video is affected by an imperfect system - the human eye and brain. That's why we've made such gains in digital video since the MPEG-1 days. But the fantasies of ubiquitous HD video to everyone in the world on 100kbps connections are just that. When you're struggling to get by and don't have good health care or clean drinking water, the value of streaming high-quality video isn't there from a business perspective, much less a technical perspective. Everyone will get an experience relative to the capabilities of technology and the value it brings to them accordingly. All else is idealistic pipe dreams until otherwise proven.
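
    To make the CDN-side trade-off concrete, here is a hedged sketch of the kind of bitrate/resolution ladder selection described above; the ladder values and the pick_rendition helper are made up for illustration, not anyone's actual encoding recipe:

    # Hypothetical per-title ladder: (height, kbps). Real ladders are tuned per title and codec.
    LADDER = [(240, 300), (360, 700), (480, 1200), (720, 2500), (1080, 5000), (2160, 15000)]

    def pick_rendition(measured_kbps: float, display_height: int, headroom: float = 0.8):
        """Pick the highest rendition that fits the measured bandwidth (with headroom)
        and does not exceed what the display can show."""
        budget = measured_kbps * headroom
        candidates = [r for r in LADDER if r[1] <= budget and r[0] <= display_height]
        return candidates[-1] if candidates else LADDER[0]

    print(pick_rendition(measured_kbps=8000, display_height=1080))   # -> (1080, 5000)
    print(pick_rendition(measured_kbps=2000, display_height=2160))   # -> (480, 1200)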
    • Don't they encode on the fly? I can understand having copies of the most popular formats, but it seems much easier to do the oddball ones on the fly. I use Universal Media Server and CPU usage hardly registers on my old first-gen i7, and it has no hardware acceleration.
    • I'm also shocked that Anne didn't discuss alternative coding methods like wavelet-based (e.g. JPEG 2000), but - again - these approaches have their own limitations and don't address interframe encoding in the same way that a block-based codec can.

      I mean, I guess you could use JPEG-2000 for the iFrames, but it's very seriously not designed for video. An interesting potential method would be extrapolating the wavelet to the third dimension (in this case, the time series) for videos, but your working memory would go up dramatically (which would make hardware decoders prohibitively expensive). Also, data loss in hierarchical encoding schemes is often catastrophic to the entire block. Not so bad on a single frame, pretty devastating if you just torche
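
      As a rough illustration of that idea (nowhere near a production codec), PyWavelets can run one level of a separable 3-D transform over a small x-y-time cube; for smooth content most of the energy lands in the coarse band, which is where a hypothetical coder would spend its bits:

      import numpy as np
      import pywt

      # Toy "video": 16 frames of 64x64 with a slowly drifting gradient.
      t, y, x = np.meshgrid(np.arange(16), np.arange(64), np.arange(64), indexing="ij")
      video = (x + y + 2 * t).astype(np.float64)

      # One level of a 3-D Haar transform: the time axis is treated like a third
      # spatial dimension, so working memory covers the whole group of frames.
      coeffs = pywt.dwtn(video, "haar")
      approx = coeffs["aaa"]                     # coarse x-y-time band
      detail_energy = sum(np.sum(c ** 2) for k, c in coeffs.items() if k != "aaa")
      print("coarse band shape:", approx.shape)  # (8, 32, 32)
      print("fraction of energy in detail bands:", detail_energy / np.sum(video ** 2))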

  • "For the video codec community to innovate more quickly, and more accurately, automated video quality measurements that better reflect human perception should be utilized."

    Yup, because innovation is at a standstill and new technologies don't get created to solve an actual problem. Hey wait a tic, they do.

    Hmmm, so what's your fucking problem again? Could it be you want adoption of your codec but your codec doesn't pass the standard testing suites? So the solution is not to improve your codec - that
    • No, the problem which the summary is terrible at discussing is that video codec research is at a standstill. New video codecs can come and go, but they are all the same basic approach to the problem. Take for example H.264 vs H.265. H.265 adds higher resolutions and other advancements over H.264. It has better compression but requires more processing power. But it's not fundamentally different from H.264. VP9 and AV1 aren't fundamentally different either. The main benefit of VP9 and AV1 over H.265 is that t
  • Any new mousetrap is going to be inferior to the highly refined existing solutions; no point in building a better mousetrap.
  • Competitors to JPEG have so far only reached 30% less data under laboratory conditions compared to the standard JPEG encoder. That amount of improvement can, however, also be achieved by using more sophisticated JPEG encoders. Even if we'd reach half the file size for a realistic set of images, I doubt we would switch, as JPEG is largely good enough.

    We might see a successor to JPEG for moving images yet, as those codecs slowly get good enough that the key frames matter and can even take up most of the stor

  • by Anonymous Coward

    yes, it's dead. Use png for images, ogg theora for video and flac for sound. End of discussion.

  • Instead of breaking images up into rectangles and compressing each rectangle separately (which produces block artifacts), we should just do a wavelet compression on the entire image at once for best viewing.
  • Why not convince the people behind a good open source video player (vlc springs to mind) and a good converter tool (handbrake springs to mind) to support a promising new codec? Geeks start using it, and we rapidly see whether it's worth pursuing or not. This strategy has worked for other codecs.

    If we sit on our hands waiting for the industry to adopt a new standard, we'll still be using mp4 when cockroaches inherit the earth.

  • Just pulling numbers out of a hat, starting from raw video, lossless compression perhaps drops the bitrate needed by an order of magnitude. Existing lossy algorithms drop maybe another order of magnitude. It is very likely that with a lot of work, that could drop another 50%, but it's fairly unlikely to drop another order of magnitude off of existing systems.
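
    Back-of-the-envelope numbers for those orders of magnitude (my own rough figures, not the parent's):

    # Rough, illustrative arithmetic: raw 1080p vs. a typical lossy stream.
    width, height, fps, bits_per_pixel = 1920, 1080, 30, 24
    raw_mbps = width * height * fps * bits_per_pixel / 1e6        # ~1500 Mbps uncompressed
    lossless_mbps = raw_mbps / 2.5       # lossless video coders manage very roughly 2-3x
    lossy_mbps = 5.0                     # a common-ish 1080p streaming rate
    print(f"raw:      {raw_mbps:8.0f} Mbps")
    print(f"lossless: {lossless_mbps:8.0f} Mbps (factor {raw_mbps / lossless_mbps:.0f}x)")
    print(f"lossy:    {lossy_mbps:8.0f} Mbps (factor {raw_mbps / lossy_mbps:.0f}x total)")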

    Netflix should watch what it wishes for, though. Dropping another order of magnitude maybe would make things cheaper for Netflix, but that's also th

  • In a word. NO. There is nothing missed.

    New technology, per 1991 AT&T research, doesn't have a chance of disrupting older technology without an order-of-magnitude (greater than 10x) measurable improvement. Google failed to locate the research when I searched. Here's the latest I could find: https://bit.ly/2tc6oJl [bit.ly]

  • I never understood why video-specific autoencoders [wikipedia.org] were not used instead of existing codecs. I understand the hardware side of this is difficult (hardware could be created that could handle autoencoders, but it doesn't currently exist), but for laptops and cell phones, an autoencoder would likely work much more efficiently when bandwidth is limited. Perhaps bandwidth is good enough and saving cell phone batteries is a higher priority, but general-purpose codecs are a very heavy hammer with which to solve
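
    A hedged sketch of the idea in PyTorch: a generic convolutional autoencoder over individual frames, not any real learned codec; a practical one would also quantize and entropy-code the latents and exploit temporal redundancy.

    import torch
    import torch.nn as nn

    class FrameAutoencoder(nn.Module):
        """Compress a 3x128x128 frame to a small latent tensor and reconstruct it."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(            # 3x128x128 -> 32x8x8 latent
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 32, 4, stride=2, padding=1),
            )
            self.decoder = nn.Sequential(            # latent -> reconstructed frame
                nn.ConvTranspose2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame):
            return self.decoder(self.encoder(frame))

    model = FrameAutoencoder()
    frames = torch.rand(8, 3, 128, 128)              # a toy batch of frames
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(3):                               # a few reconstruction steps
        recon = model(frames)
        loss = nn.functional.mse_loss(recon, frames)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("latent shape:", model.encoder(frames).shape)  # torch.Size([8, 32, 8, 8])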
  • It's the quantum theory of technology. New tech cannot be just a little better. It has to be a quantum leap. It has to be significant enough to overcome the inertia of an established tech and ecosystem. Or it must fulfill a specific need.
  • One of the issues is getting newer ideas added to existing standards, or even into developing standards. I had found that Gray coding (only 1 bit changes while counting: 000, 001, 011, 010, 110, 111) graphical images helped with early compression, and I proposed that when PNG was being developed, but it didn't go anywhere. Mapping color space was another idea, because about 8 million of the colors in 24-bit RGB are brown or grey. 24-bit HSV was trivial to add to the analog section of VGA and would display far more shade
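
    For reference, the standard binary-reflected Gray code conversion being described is a few lines each way (generic illustration, not tied to PNG or any codec):

    def to_gray(n: int) -> int:
        """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Invert the Gray code by XOR-folding the higher bits back down."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    print([format(to_gray(i), "03b") for i in range(8)])
    # ['000', '001', '011', '010', '110', '111', '101', '100']
    assert all(from_gray(to_gray(i)) == i for i in range(256))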

