The End of Video Coding? (medium.com) 137
An anonymous reader writes: Netflix's engineering team has an insightful post today that looks at how the industry is handling video coding, the differences in their methodologies, and the challenges newcomers face. An excerpt, which sums up where we are:
"MPEG-2, VC1, H.263, H.264/AVC, H.265/HEVC, VP9, AV1 -- all of these standards were built on the block-based hybrid video coding structure. Attempts to veer away from this traditional model have been unsuccessful. In some cases (say, distributed video coding), it was because the technology was impractical for the prevalent use case. In most other cases, however, it is likely that not enough resources were invested in the new technology to allow for maturity.
"Unfortunately, new techniques are evaluated against the state-of-the-art codec, for which the coding tools have been refined from decades of investment. It is then easy to drop the new technology as "not at-par." Are we missing on better, more effective techniques by not allowing new tools to mature? How many redundant bits can we squeeze out if we simply stay on the paved path and iterate on the same set of encoding tools?"
"MPEG-2, VC1, H.263, H.264/AVC, H.265/HEVC, VP9, AV1 -- all of these standards were built on the block-based hybrid video coding structure. Attempts to veer away from this traditional model have been unsuccessful. In some cases (say, distributed video coding), it was because the technology was impractical for the prevalent use case. In most other cases, however, it is likely that not enough resources were invested in the new technology to allow for maturity.
"Unfortunately, new techniques are evaluated against the state-of-the-art codec, for which the coding tools have been refined from decades of investment. It is then easy to drop the new technology as "not at-par." Are we missing on better, more effective techniques by not allowing new tools to mature? How many redundant bits can we squeeze out if we simply stay on the paved path and iterate on the same set of encoding tools?"
What else would one do? (Score:5, Insightful)
Should they just adopt new and inferior solutions and hope for the best?
To me this is the "science" part of Computer Science. Do research into new algorithms and methods of video encoding, but it would be stupid to start adopting any of that into actual products or live usage until and unless it tops the more traditional methods in performance.
Re: (Score:2)
Usable bandwidth to the dwelling would allow for 4K, HD and other video resolutions on demand.
Re:What else would one do? (Score:4, Informative)
In many parts of the world, that's already standard. I got gigabit fiber to my home for cheap in a 3rd world country.
So as far as me and my countrymen are concerned, we're good, even for 4K@60fps.
Re: (Score:2)
It's not the codec.
It's not like having to work on Multiple sub-Nyquist sampling encoding https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3)
To be fair, third-world countries have a slight advantage in that their infrastructure is all new and mostly modern, whereas the US is trying to piggyback on a lot of old POTS and first-gen fiber infrastructure.
In a lot of cases, developing countries are completely skipping copper infrastructure, and building out wireless systems.
Re: (Score:3)
Exactly what happened in my country. But the more important aspect is competition. Here, competition is really tough, with at least 4-5 different (large) ISPs fighting over customers.
Re: (Score:2)
I'm going to guess Romania.
Interesting situation there: https://foxnomad.com/2012/03/1... [foxnomad.com]
Re: (Score:2)
Yes, interesting article. I suck at math, though :)
Re: (Score:2)
I think his point was that first world countries have been (and still are in many areas) hamstrung by having a wired telephone infrastructure in place for a century or more, and by the reluctance of local governments and providers to replace it with something different. Similar to the way current encoding methods are ingrained.
And yes, emerging countries often end up with a better data infrastructure than the older parts of first world countries. (I'm told cell service is often better too.) I live in a recent subu
Re: (Score:2)
It's funny that in a post about going with a new generation of video encoding, people are arguing that the U.S. ISPs can't upgrade because they're stuck with an old infrastructure!
The argument is that they're comparable situations -- a new, perhaps better technology unable to gain traction because the old technology is too ingrained. Or am I missing something here?
Re: (Score:2)
Build new networks that are not just paper insulated wireline?
Usable bandwidth to the dwelling would allow for 4K, HD and other video resolutions on demand.
Yours isn't? Weird.
I live in Europe and I get 300MBs symmetrical with 5ms ping for $30 a month. It's fiber all the way to my router.
All speeds measured and confirmed: http://www.speedtest.net/resul... [speedtest.net]
Re: (Score:1)
Why do you pay in Dollars?
Re: (Score:2)
To avoid confusing any Americans in the room.
Re: (Score:2)
Yours isn't? Weird.
I live in Europe and I get 300MBs symmetrical with 5ms ping for $30 a month. It's fiber all the way to my router.
All speeds measured and confirmed: http://www.speedtest.net/resul... [speedtest.net]
I live in the "capitol" of Europe (Brussels), and random outages, phone-lines randomly swapped around, regular drops in connection speeds, and max option is 100/15 MBits (seems recently upped from 50/15).
Unlimited data, unless you use more than 100GB/Month, then there's add-ons or throttled to 5Mbit.
Phone+inet+IPTV is delivered via copper (from a box down the road), and you can see the wiring dangling off of buildings even in the more well-off areas.
Oddly enough, when perfomance drops, visiting Speedtest.ne
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Want a good looking 4K movie? That's going to need a better network. The movies and shows are ready. The payment system works. New content is getting paid for and making a profit.
Bandwidth and the network are the last parts that need some support.
Re: (Score:1)
Not if you're ok with it taking longer to download than to play.
I keep laughing at streaming while also accepting that maybe, some day, it will be good enough for me. "Some day" keeps not happening, though. And the thing is... I just don't care. With a 7 Mbps connection I already download way more video than I'll ever have time to watch.
Time to download, I got plenty of. (No, wait: it's the computer who has plenty of that; I don't spend even
Re: (Score:2)
The BBC are streaming the World Cup matches from Russia in 4K HDR at 50fps and are advising that a 40Mbps connection is required. I am golden with an 80/20 Mbps VDSL2 connection that actually delivers 80/20 Mbps, and I presume a compatible TV. At least I could watch Blue Planet 2 in 4K HDR over the Christmas period. So I can watch England go out on penalties again :-)
Perhaps you don't care about sport or other live events, but lots of people do.
The reality is that we have the means right now to make new codecs unneces
Re: (Score:2)
Re: (Score:2)
Time to download, I got plenty of. (No, wait: it's the computer who has plenty of that; I don't spend even a second on this.) Time to watch? That is the bottleneck.
We can tweet more than 140 characters but I still don't have a 30 hour day?! Fuck you, engineers!!
For Netflix on my PC, I typically watch things at 1.2x to 1.4x speed.
For YouTube, I do 1.25x or 1.5x. Also be sure to use the J and L shortcuts in YouTube for easy skipping!
You get used to the faster speed quite quickly, and it has very little impact on the overall presentation. Watching stuff at 1x now feels so fucking slow that I can't stand it.
Re: (Score:2)
Watching stuff at 1x now feels so fucking slow that I can't stand it.
This reminds me of the YouTube comments on a lot of songs: "Hey, at 1.75x this is still pretty decent" and... I just don't know what to think. Is the point of art to consume the maximum quantity? Or is it about the present experience while interpreting the art?
I happen to heavily favour the latter. I used to always think ahead; whatever I was doing lost its joy because I was thinking about the future, about when I'd stop doing it... As if life were some equation that I had to maximize to get the most out of it...
These
Re: (Score:2)
Depends on what you're watching.
If I need a technical walk-through for a repair or procedure, speeding up all the crap at the start is great, and I can drop it to normal or pause when I get to the part that's relevant to me. Sometimes you don't want to outright skip around for fear of missing some key note or prereq. Likewise, if I'm watching some news article or interview, I don't care about the presentation. I just want the info.
For a recent example, I watched most of the E3 presentations after the
Re: (Score:2)
Re: (Score:2)
Realistically, this just means we'll stream higher resolution content and a higher framerate,
Perhaps, but more likely households will be doing multiple streams. Dad can watch his action/explosion drama while mom binges her romcom and little Suzy watches the latest teen craze.
Re:What else would one do? (Score:5, Funny)
it's means it is.
It's been nice proving you wrong.
They actually did. (Score:5, Informative)
The "hired very large codec dev team" they were contributing to is called "AOMedia - Alliance for Open Medi", and one of the potential rabbit hole that got considered and worked on was Daala by Xiph (tons of new crazy idea, including stuff like extending block as lapped blocks, a perceptual vector quantisation that doesn't rely on residual coding, etc.)
At the end of the day, the first thing that currently came out of AOMedia, by combining work such as Xph's Daala, Google's VP10 and Cisco's Thor, is AV-1.
It's much tamer that what it could have been, but still incorporate some interesting idea.
(they didn't go all the way to using the ANS entropy coders suggested more recently by experiment such as Daala, but at least replaced the usual arithmetic encoder with Daala's range encoder).
By the time AV-2 gets out, we should see some more interesting stuff.
Probably this speech was meant as a rousing speech to encourage developers to go crazy and try new stuff.
Implementation (Score:2)
Arithmetic coders are mathematically equivalent to range coders. Is it an encoding speed increase they were looking for? Or perhaps the ease with which you can modify range coder mid-stream compared to arithmetic?
The key point is that MPEG's coders are binary coders. They code bit after bit.
In hardware, that means part of the circuit needs to run at a very high frequency.
It also means that you need to binarize (convert the symbols into a bit stream) and manage contexts for all of these individual bits.
(CABAC = *Context-Adaptive* Binary Arithmetic Coder.)
Implementations can rely on non-integer arithmetic.
Daala's entropy coder can work on any discrete list of symbols (not necessarily 1 bit).
In practice it works on values encoded in a few bits.
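A minimal sketch of the multi-symbol idea, purely for illustration (this is not Daala's or AV1's actual coder, which uses fixed-width integer renormalization rather than exact fractions): interval coding over a whole symbol alphabet, so each symbol is handled in one step instead of first being binarized into several individually context-coded bits.

```python
# Toy multi-symbol interval coder using exact arithmetic (fractions), to contrast
# with a binary coder that must binarize every symbol first. Illustrative only.
from fractions import Fraction

def build_table(freqs):
    """Cumulative frequency table: symbol -> (low, high) in units of 1/total."""
    cum, acc = {}, 0
    for s in sorted(freqs):
        cum[s] = (acc, acc + freqs[s])
        acc += freqs[s]
    return cum, acc

def encode(symbols, cum, total):
    """Narrow [low, low+width) once per symbol; any point inside identifies the message."""
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        lo, hi = cum[s]
        low += width * Fraction(lo, total)
        width *= Fraction(hi - lo, total)
    return low + width / 2

def decode(code, n, cum, total):
    out, low, width = [], Fraction(0), Fraction(1)
    for _ in range(n):
        target = (code - low) / width * total        # where does `code` fall?
        for s, (lo, hi) in cum.items():
            if lo <= target < hi:
                out.append(s)
                low += width * Fraction(lo, total)
                width *= Fraction(hi - lo, total)
                break
    return out

msg = list("abracadabra")
cum, total = build_table({c: msg.count(c) for c in set(msg)})
assert decode(encode(msg, cum, total), len(msg), cum, total) == msg
```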
Re: (Score:2)
Actually, that is what universities are for: going down rabbit holes.
I am sure that there are tons of thesis papers just waiting to be written on this subject. Coming up with new methods seems like a great way to get a PhD, even if it is not significantly better -- it just has to be different.
Re: (Score:2)
4K is already way beyond the ability of an average consumer to discern its quality.
Uh-huh. I remember hearing that junk when we were talking about going from SD to HD for video. Then again, a similar argument for framerates. It's easy for anyone who does not have vision problems to tell 1080p from 4K video. Maybe you're thinking of a situation where someone is watching 4K content on a 1080p display. Maybe you're thinking of someone watching 1080p content on a 4K display. Maybe you're talking about a situation where the video has been poorly compressed and degraded to a point where
Re: (Score:2)
I remember that too. I was one of those naysayers. Now I can't stand looking at stuff in SD.
In all fairness to myself, the TV I had back then didn't do much to help the case for HD.
Re: (Score:2)
Just had to be the pedantic man and point out that Blu-ray is not uncompressed by any of the common definitions of the word. It just uses a reasonable bitrate for its resolution.
Re:What else would one do? (Score:5, Interesting)
Should they just adopt new and inferior solutions and hope for the best?
I think the idea here is that the follow-up science/engineering to academic initiatives doesn't actually get funded/done because the unoptimized first cut of a new methodology isn't instantly better than the state of the art. It's basically arguing that the technology is undergoing path dependence [wikipedia.org], which is no big surprise as it happens all the time in lots of areas.
That said, the AV crowd has sure made a complete and utter mess of their formats. Piles of CODECs all with various levels of support for piles of video modes all bundled into piles of meta-formats with piles of methods for syncing up audio/ancillary/multistream... my eyes glaze over pretty quickly these days when faced with navigating that maze. Having options is awesome. Leaving them perpetually scattered around on the floor like a bunch of legos... not so much.
(Still waiting to see someone with serious genetic-algorithms chops tackle lossless CODECs... there's a ready-made problem with a cut and dry fitness function right there.)
Re:What else would one do? (Score:5, Interesting)
Part of the problem is that we have hardware acceleration for certain operations, and if codecs want to do stuff outside that, then performance can become an issue for playback. Most streaming devices don't have enough CPU power in their little ARMs to handle decoding; it has to be hardware accelerated by the GPU.
Then again if anyone can argue successfully for hardware changes it's Netflix.
Re: (Score:3)
The dependence on hardware decoding is probably a major factor. Encoders want the largest possible audience and will always defer to the coding schemes with the least system impact and best performance, which will end up being hardware decoders.
There's a lag between the development of a new coding scheme and its widespread availability as actual deployed silicon. The investment in silicon is dependent on encoder adoption and popularity, which may lag encoding development.
Suddenly it looks like a no-win si
Re: (Score:2)
Re: (Score:2)
Where were you in the '90s? The situation was FAR more of a mess, with lots of proprietary codecs and players like Real and On2.
Well, they may not be popular anymore, but once they are there, they are still there, so the situation can only get worse if you are writing a comprehensive library :-)
Re: (Score:2)
Re: (Score:2)
Re:What else would one do? (Score:5, Interesting)
Well, if you read the article and not the summary, the authors are discussing that there doesn't seem to be any fundamental change coming anytime soon. Sure, newer codecs are coming out, but they all take the same approach. It's like discussing public key cryptography and the algorithms used: imagine if RSA were the only real technique, the only changes coming out were merely larger keys, and other techniques like elliptic curves didn't exist.
I think your analogy is somewhat flawed. Public key cryptography was in somewhat of the same "rut" as video codecs. Video codecs have been stuck on hybrid block techniques, and public key cryptography has been stuck using modular arithmetic (RSA and elliptic curves both use modular arithmetic, although they depend on the difficulty of inverting different mathematical operations in it).
There are of course other hard math problems that can be used in public key cryptography (lattices, knapsacks, error-correcting codes, hash-based), and they languished for years until the threat of quantum computing cracking the incumbent technology...
Similarly, I predict hybrid block techniques will likely dominate video encoding until a disruption (or, in mathematical catastrophe theory parlance, a bifurcation) shows the potential for being 10x better (because 1.2x or 20% better doesn't even pay for your lunch). It doesn't have to be 10x better out of the gate, but if it can't eventually be 10x better, why spend time optimizing it the way hybrid block encoding has been optimized? Nobody wants to be developing something that doesn't have legs for a decade or more. The point isn't to find something different for the sake of difference, it's to find something that has legs (even if it isn't better today).
The problem with finding something with "legs" in video encoding is that we do not fully understand video. People don't really have much of a theoretical framework to measure one lossy video compression scheme against another (except for "golden eyes", which depend on which side of the bed you woke up on). Crappy measures like PSNR and SSIM [wikipedia.org] to estimate the loss ratio vs. entropy are still being used because we don't have anything better. One of the reasons people stick to hybrid block coding is that the artifacts are somewhat known even if they can't be measured, so it is somewhat easier to make sure you are making forward progress. If the artifacts are totally different (as they would likely be for a different lossy coding scheme), it is much more difficult to compare, because you can't objectively measure it in order to optimize it (the conjoint analysis problem).
So until we have better theories about what makes a better video codec, people are using "art" to simulate science in this area, and as with most art, it's mostly subjective, and it will be difficult to convince anyone of a 10x potential if it is only at 80% today. If people *really* want to find something better, we need to start researching the measurement problem more and the artistic aspects less. It's not that people haven't tried (e.g., VQEG [bldrdoc.gov]), but simply very little has come from the efforts to date and there has been little pressure to keep the ball moving forward.
In contrast, the math of hard problems for public key cryptography is a very productive area of research and the post-quantum-encryption goal has been driving people pretty hard.
Generally speaking, if you measure it, it can be improved and it's easier to measure incremental progress than big changes on a different dimension.
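For what it's worth, here is a minimal sketch of the kind of "crappy measure" mentioned above: PSNR between a reference frame and a degraded frame, treated as 8-bit arrays. The frames here are random, just to keep the snippet self-contained.

```python
# PSNR: a single number standing in for perception, which is exactly the problem.
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray) -> float:
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                   # identical frames
    return 10 * np.log10(255.0 ** 2 / mse)    # peak value 255 for 8-bit video

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (1080, 1920), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")
```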
Re: (Score:2)
I think your analogy is somewhat flawed. Public key cryptography was in somewhat of the same "rut" as video codecs. Video codecs have been stuck on hybrid block techniques, and public key cryptography has been stuck using modular arithmetic (RSA and elliptic curves both use modular arithmetic, although they depend on the difficulty of inverting different mathematical operations in it).
My understanding of ECC is that it does not use the same modular arithmetic (g^a mod n and g^b mod n) as the mainstream technique, and instead relies on an elliptic curve (y^2 = x^3 + ax + b, where g is a random point on the curve), which results in shorter keys.
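To illustrate the grandparent's point that both families still boil down to modular arithmetic, here is a toy sketch (deliberately tiny, insecure parameters chosen only for illustration) showing classic modular-exponentiation Diffie-Hellman next to elliptic-curve scalar multiplication over a small prime field; both shared-secret computations commute.

```python
# Toy sketch, insecure parameters, illustration only (requires Python 3.8+ for pow(x, -1, p)).
p, a, b = 97, 2, 3            # tiny curve y^2 = x^3 + 2x + 3 over F_97
G = (3, 6)                    # a point on that curve
INF = None                    # point at infinity

def ec_add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):              # double-and-add scalar multiplication
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# "Classic" Diffie-Hellman: g^a mod n       EC Diffie-Hellman: a * G
g, n, ka, kb = 5, 23, 6, 15
assert pow(pow(g, ka, n), kb, n) == pow(pow(g, kb, n), ka, n)
assert ec_mul(ka, ec_mul(kb, G)) == ec_mul(kb, ec_mul(ka, G))
```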
AI based approaches VS mathematical (Score:2)
The current approaches are very mathematical, looking at pixels. But what ends up in our brains is high-level symbols. If an AI can get at those, then extremely tight coding is possible.
For example, it takes a lot of bytes to represent "a man walking under a tree". But that phrase only took 50 bytes. The reconstructed video does not have to have the same type of tree as the original, just some sort of tree.
That's taking it to the extreme. But if an AI can recognize the types of objects in a video, and p
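Rough arithmetic behind that example (the frame size below is illustrative, assuming uncompressed 8-bit 1080p RGB): a text description is many orders of magnitude smaller than the pixels it stands for.

```python
# Back-of-the-envelope comparison: caption bytes vs one raw 1080p RGB frame.
caption = "a man walking under a tree"
caption_bytes = len(caption.encode("utf-8"))       # a few dozen bytes
raw_frame_bytes = 1920 * 1080 * 3                   # 8-bit RGB, no compression
print(caption_bytes, raw_frame_bytes)               # 27 vs 6,220,800
print(f"one raw frame is ~{raw_frame_bytes / caption_bytes:,.0f}x the caption")
```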
Re: (Score:2)
Re: (Score:2)
That all said, massive increases in available bandwidth make this rather pointless.
True, but a factor is the cost of the bandwidth and who controls it. One of the issues is the ISPs: technically I can get gigabit speeds to my house, as the technology exists. Realistically it would cost me a lot, and right now none of the ISPs in my area offer it. For cellular ISPs, there have to be massive upgrades to the infrastructure to support higher speeds. This is the problem that companies like Netflix face: there is a wide discrepancy between bandwidths on different networks for their customers. O
Re: (Score:2)
They aren't talking about adopting anything, merely not abandoning research into it so early due to less than stellar results.
New vs old (Score:5, Insightful)
but it would be stupid to start adopting any of that into actual products or live usage until and unless it tops the more traditional methods in performance.
The logic behind the article is that the new techniques will never top the more traditional ones (or at least have no way of getting there in the current state of affairs), because most of the resources (dev time, budget, etc.) are spent optimizing the "status quo" codecs, and not enough is spent on the newcomers.
By the time something interesting comes up, the latest descendant of the "status quo" will have been optimized much further.
It doesn't matter that the PhD thesis "Using Fractal Wavelets in non-Euclidean Spaces to Compress Video" shows some promising advantages over MPEG-5: it will not get funded, because by then "MPEG-6 is out" and is even better just from minor tweaking everywhere.
Thus new ideas like that PhD thesis never get funded and explored further, and only further tweaking of what already exists gets funded.
I personally don't agree.
The most blatant argument is the list itself.
With the exception of AV-1, the list is exclusively the lineage of block-based algorithms: MPEG-1 and its evolutions (up to HEVC), plus things that attempt to do something similar while avoiding the patents (the VPx series by On2, then Google).
It completely ignores things like Dirac and Schroedinger:
a completely different approach to video compression (based on wavelets) that got funded, developed and actually put into production (by no less than the BBC).
It completely ignores the background behind AV-1 and how it relates to Daala.
AV-1 was designed from the ground up not as an incremental evolution of (or patent circumvention around) HEVC; it was designed to go in a different direction (if nothing else, to avoid the patented techniques of MPEG, since avoiding the patent madness was the main goal behind AV-1 to begin with).
It was done by AOMedia, into which lots of groups poured resources (including Netflix themselves).
Yes, on one side of the AV-1 saga you have entities like Google that donated their work on VP10 to serve as a basis -- so we're again at the "I can't believe it's not MPEG(tm)!" clones.
But among the other code and technique contributions (besides Cisco's Thor, which I'm not considering for the purpose of this post), there's also Xiph, who provided their work on Daala.
There's some crazy stuff that Xiph has been doing there: things like replacing the usual "block"-based compression with slightly different "lapped blocks", and more radical stuff like throwing away the whole idea of "coding residuals after prediction" and replacing it with "Perceptual Vector Quantization", etc.
Some of these weren't kept for AV-1, but other crazy ideas actually made it into the final product (the classic binary arithmetic coding used by the MPEG family was thrown away in favour of integer range encoding, though they didn't go as far as using the proposed alternative, ANS -- Asymmetric Numeral Systems).
Overall, incrementally improving on MPEG (MPEG-1 -> MPEG-2 -> MPEG-4 ASP -> MPEG-4 AVC/H.264 -> MPEG-H HEVC/H.265) gets hit hard by the law of diminishing returns. There's only so far that you can reach by incremental improvement.
Time to get some new approaches.
Even if AOMedia's AV-1 isn't all that revolutionary, that's more out of practical considerations (we need a patent-free codec available as fast as possible, including quickly in hardware, so better to select things that are known to work well) than for lack of trying new stuff.
And even if some of the more out-of-the-box experiments didn't end up in AV-1, they might end up in some future AV-2 (Xiph keeps experimenting with Daala).
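For readers wondering what the "block-based hybrid" structure everyone keeps referring to actually looks like, here is a toy sketch of a single step (predict, transform the residual, quantize, reconstruct); the 8x8 block, the DCT choice and the quantization step are all illustrative, not any particular codec's.

```python
# One step of a toy block-based hybrid coder: prediction -> transform -> quantize.
import numpy as np
from scipy.fft import dctn, idctn

def code_block(current: np.ndarray, prediction: np.ndarray, qstep: float = 16.0):
    """Encode and immediately decode one 8x8 block; returns the reconstruction."""
    residual = current.astype(np.float64) - prediction        # 1. prediction
    coeffs = dctn(residual, norm="ortho")                      # 2. transform
    quantized = np.round(coeffs / qstep)                       # 3. quantization (the lossy part)
    # ...entropy coding of `quantized` would happen here in a real codec...
    reconstructed_residual = idctn(quantized * qstep, norm="ortho")
    return prediction + reconstructed_residual                 # decoder-side output

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (8, 8)).astype(np.float64)         # "previous frame" block
cur = np.clip(prev + rng.normal(0, 4, (8, 8)), 0, 255)         # mostly-similar current block
rec = code_block(cur, prev)
print("max reconstruction error:", np.abs(rec - cur).max())
```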
Re: (Score:3)
It doesn't matter that the PhD thesis "Using Fractal Wavelets in non-Euclidean Spaces to Compress Video" shows some promising advantages over MPEG-5: it will not get funded, because by then "MPEG-6 is out" and is even better just from minor tweaking everywhere.
I did A/V compression development work once upon a time, and I can tell you that almost 20 years ago we were already looking into 3D wavelet functions for video coding. The problem is that it's vastly less computationally and memory efficient than the standard I-frame/B-frame block decoders, and it degrades WAY worse if there's the slightest disruption in the stream.
I mean, sure, if an I-frame gets hosed you lose part of a second of the video, but you can at least still kind of see what it is with a
Re: (Score:2)
Seconded. Let them fuck about with it in the lab all they like.
There's already more than enough half-assed ones (or rather, ones with half-assed player support) out in the wild.
Re: (Score:1)
Internal combustion engine (Score:2)
Video codecs are not the only example of this; there are many.
Re: (Score:2)
"Insightful post"?!?! It's navel-gazing (Score:1)
There's nothing "insightful" about saying "there may be something better out there."
The insightful thing would be to find or create it.
Article is much more interesting than summary (Score:4, Insightful)
This is one case where the actual article is well worth reading, with a ton of links off to other areas to explore and more interesting detail than the summary presents... take a look if you are at all interested in video compression and where the state of the art is going.
Re: (Score:2)
fucking twaddle.
Re: Article is much more interesting than summary (Score:1)
Re: (Score:3)
Re:Article is much more interesting than summary (Score:5, Insightful)
To better illustrate what I mean, say I want to buy hosting for a service and want 99% uptime. However, the person comparing providers throws out any without guarantees of 99.999% uptime. They're not actually doing what I want, and I may end up paying more than I would otherwise need to for no good reason. Or suppose I have a machine that judges produce and removes anything that it thinks shoppers won't purchase (as a result of appearance, bruising, etc.) so that I don't waste resources shipping it to a store that will eventually have to throw it out as unsold. I want that machine to be as exact as possible, because if it's being pickier than the shoppers, that's wasted produce I could otherwise be selling.
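Quick arithmetic on the uptime example, just to show how big the gap between "what I asked for" and "what the filter demands" really is (this is the standard availability math, nothing specific to any provider):

```python
# Allowed downtime per year for various availability targets.
hours_per_year = 365 * 24
for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_h = hours_per_year * (1 - availability)
    print(f"{availability:.5f} uptime -> ~{downtime_h:.2f} hours of downtime per year")
# 99% allows ~87.6 hours/year of downtime; 99.999% allows ~5 minutes/year --
# filtering out everything below five nines pays for a guarantee nobody asked for.
```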
Re: (Score:3)
The summary missed the big "aha" moment of the article, which was that academic researchers in new encoding techniques had been thinking that increasing the complexity of their algorithms by 3x was a hard limit, whereas production practitioners such as Netflix, Facebook, and Hulu were thinking that a 100x increase in complexity was the upper limit.
Exactly! (Score:1)
Yes! It was things like that totally missing from the summary that made it interesting to fully read through.
Huh? (Score:3, Insightful)
What a stupid statement.
Is the expectation that we adopt crappy replacements to "allow them to mature"?
They can mature until they're as good as what we have; don't replace something that works with something that doesn't, just to give it room to grow into something that doesn't suck.
Either you have a working replacement, or you have a good idea and a demo.
"Not-at-par" means the latter -- you don't have a mature product, and nobody is going to adopt it if it can't do what they can do now. Saying "ti will eventually be awesome" tells me that eventually we'll give a damn, but certainly not now.
It's bad enough I have to fight my vendors that I'm not accepting a beta-rewrite and suffering through their growing pains to get to the mature product they're trying to replace. I'm not your fucking beta tester, so please don't suggest I grab your steaming turd and live with it until you make it not suck.
Boo hoo, immature technologies which don't cover what the technology they're trying to replace aren't being allowed to blossom into something useful. Make it useful, and then come to us.
Re: (Score:2)
What's the Weissman score? (Score:1)
If the math says a new technique is better, it won't matter if the first implementation isn't good. Someone will fix the implementation and then it will match the mathematically predicted performance (or the guy who did the math will fix his error).
Re: (Score:3)
Video compression is typically lossy. The Weissman score only applies to lossless.
The Weissman score is fiction (the product of an HBO screenwriter's request that a professor make up something "tech-sounding").
It isn't even an absolute score (because it depends on the unit of time you use to measure it, and that isn't defined, so if you happen to use a unit that results in the value 1, your score is infinite). Using a Weissman score in real life is like how fanbois convolute a Hollywood-ism like... "made the Kessel Run in less than twelve parsecs" into something that isn't totally gibberish (whe
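A quick sketch of that unit-dependence point, using the score's published fictional formula W = alpha * (r / r_bar) * (log T_bar / log T), where r is the compression ratio and T the time (the values below are made up):

```python
# Same measurement, different time units, different "score" -- and a unit that
# makes the time equal 1 gives log(t) == 0, i.e. an undefined/infinite score.
import math

def weissman(r, t, r_bar, t_bar, alpha=1.0):
    return alpha * (r / r_bar) * (math.log(t_bar) / math.log(t))

print(weissman(r=2.0, t=2.0,   r_bar=1.5, t_bar=3.0))    # times in minutes
print(weissman(r=2.0, t=120.0, r_bar=1.5, t_bar=180.0))  # the same times in seconds
```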
Misses the real problem. (Score:2, Insightful)
Let's say, for the sake of argument, that a new and much more efficient video codec was just invented.
The trouble is that it will immediately be locked up behind patents, free implementations will be sued, and it'll be packed with DRM and require per-play online-permission.
Our main problem isn't technology, it's the legal clusterfuck that has glommed onto the technology landscape.
Re: (Score:2)
I'd say we're moving at a pretty healthy pace. (Score:5, Insightful)
H.264 was king. Now we've got H.265 and AV1, which have not entirely replaced H.264 for compatibility reasons, but have still gained significant traction.
On the audio side, AAC replaced MP3, and Opus is set to replace AAC. Opus can generally reach the same quality as MP3 in less than half the bits!
So I don't see this stagnation they talk about. These algorithms are generally straightforward, and codec devs, even if they don't have a hyper-efficient implementation yet, will be able to see the benefit -- it's just a matter of investing the time to develop high-quality code and hardware for it.
Re: (Score:1)
So I don't see this stagnation they talk about.
If you look at codec life cycles, you'll see we have about 10 years between each "gen". H.264 was released in 2003 but only became mainstream about 5 years later. H.265 is following the same pattern: it was presented in 2013 and nowadays it is on its way to becoming mainstream.
At this pace, Netflix knows that a new standard will probably see the light of day only by 2023, and become mainstream by 2028. Not stagnant, but it surely isn't blistering fast.
Missing a word: Research (Score:3)
Seriously, the title and summary would have been much better and easier to understand with a single added word, "research": "The End of Video Coding Research". The article discusses how, even though video coding is used pretty much everywhere, there hasn't been much fundamental progress or change in newer standards despite lots of interest and investment. New codecs are coming out, but they are all variations of the "block-based hybrid video coding structure" of MPEG-2/H.264/VP9, etc. Netflix is one company that would benefit from newer encoding standards.
Clients aren't getting any faster (Score:4, Interesting)
Re:Clients aren't getting any faster (Score:5, Informative)
The revolution came from stable, standardized algorithms that allowed custom hardware to be built. Doing video decoding on general-purpose CPUs is never going to hold a candle to a custom H.264 chip.
https://www.youtube.com/watch?... [youtube.com]
AV1 (Score:2)
We should invest a shitton of money in order to create a new codec that everyone can use and that benefits everyone... You literally just described AV1. The entire process of it "being inferior while being iterated on until better" also directly describes the past few years of AV1, up until recently, when it started to pull ahead of other leading codecs in the compression-vs-quality game.
I hope it's the end (Score:2)
We should just declare one of the current schemes as "good enough", use it long enough for all relevant patents to expire, universally implement it on all devices, and serve it by default from almost all media sources.
It would be kind of like mp3 and jpg, and it would lower everybody's stress level.
The value of entropy and psychovisual perception (Score:5, Interesting)
The biggest ongoing cost for streaming movies today is CDN storage, in the sense of having enough bitrates and resolutions to be able to accommodate all target devices and connection speeds. As much as people would like to deliver an HD picture to a remote village in the Philippines over a mobile connection on a feature phone, it isn't going to happen at the moment for two reasons: they don't need or care about that level of experience, and it isn't technically feasible. The goal of CDN storage is to ensure the edge delivers the content, and the industry has toyed with real-time edge transcoding/transrating to address some of these issues, but fundamentally we are approaching an asymptote on the visual quality a codec can deliver at the playback device for a given bitrate and amount of computing power.
In that sense, I'm shocked that Anne's post didn't mention Netflix's own VMAF [github.com], which is a composite measure of different flavors of PSNR, SSIM and some deep learning. But even here, the fundamental point is that we are still using block-based codecs, simply because of the fundamental nature of most video, i.e. objects moving around on a background. I'm also shocked that Anne didn't discuss alternative coding methods like wavelet-based ones (e.g. JPEG 2000), but -- again -- these approaches have their own limitations and don't address interframe encoding in the same way that a block-based codec can. If there were a novel approach to coding psychovisually-equivalent video that addressed computing power, bitrate and quality reasonably, I believe it would have been brought forward already.
I think 5G deserves a big mention here that was lacking in Anne's post, because faster connections may solve many of the types of issues that affect perceived visual quality at low bitrates. Get more bandwidth, and you have a better experience. Hopefully 5G will proliferate quickly, but this will be tricky in the developing world where its inherently decentralized nature and the political environments will make its ubiquitous deployment a serious challenge.
In the end, we're all fighting entropy, particularly when it comes to encoding video. Our ability to perceive video is affected by an imperfect system - the human eye and brain. That's why we've made such gains in digital video since the MPEG-1 days. But the fantasies of ubiquitous HD video to everyone in the world on 100kbps connections are just that. When you're struggling to get by and don't have good health care or clean drinking water, the value of streaming high-quality video isn't there from a business perspective, much less a technical perspective. Everyone will get an experience relative to the capabilities of technology and the value it brings to them accordingly. All else is idealistic pipe dreams until otherwise proven.
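To make the CDN-storage point concrete, here is back-of-the-envelope arithmetic over a hypothetical bitrate ladder (the renditions and the two-hour title length below are illustrative, not Netflix's actual ladder):

```python
# Storage for one title across a hypothetical per-rendition bitrate ladder.
ladder_kbps = [235, 375, 750, 1750, 3000, 4300, 5800, 8000, 16000]  # illustrative
title_hours = 2.0
bytes_per_title = sum(kbps * 1000 / 8 * title_hours * 3600 for kbps in ladder_kbps)
print(f"~{bytes_per_title / 1e9:.1f} GB of storage per title for this ladder")
# A codec that shaves 20% off every rendition saves that 20% across the whole
# catalogue and every edge cache, which is where the economic pressure comes from.
```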
Re: (Score:1)
Re: (Score:2)
I'm also shocked that Anne didn't discuss alternative coding methods like wavelet-based (e.g. JPEG 2000), but - again - these approaches have their own limitations and don't address interframe encoding in the same way that a block-based codec can.
I mean, I guess you could use JPEG 2000 for the I-frames, but it's very seriously not designed for video. An interesting potential method would be extrapolating the wavelet to the third dimension (in this case, the time series) for video, but your working memory requirements would go up dramatically (which would make hardware decoders prohibitively expensive). Also, data loss in hierarchical encoding schemes is often catastrophic to the entire block. Not so bad on a single frame, pretty devastating if you just torche
Solution in search of a problem (Score:1)
Yup, because innovation is at a standstill and new technologies don't get created to solve an actual problem. Hey, wait a tic, they do.
Hmmm, so what's your fucking problem again? Could it be that you want adoption of your codec, but your codec doesn't pass the standard testing suites? So the solution is not to improve your codec - that
Re: (Score:2)
Don't build a better mousetrap??? (Score:2)
Re: (Score:3)
so just get a cat?
For still pictures we have already reached that (Score:2)
Competitors to JPEG have so far only achieved about 30% less data under laboratory conditions compared to the standard JPEG encoder. That amount of improvement can also be had by using a more sophisticated JPEG encoder. Even if we reached half the file size for a realistic set of images, I doubt we would switch, as JPEG is largely good enough.
We might see a successor to JPEG for moving images yet, as those codecs slowly get good enough that the key frames matter and can even take up most of the stor
lol... (Score:1)
yes, it's dead. Use png for images, ogg theora for video and flac for sound. End of discussion.
Need to use newer methods (Score:2)
geeks can help (Score:2)
Why not convince the people behind a good open source video player (vlc springs to mind) and a good converter tool (handbrake springs to mind) to support a promising new codec? Geeks start using it, and we rapidly see whether it's worth pursuing or not. This strategy has worked for other codecs.
If we sit on our hands waiting for the industry to adopt a new standard, we'll still be using mp4 when cockroaches inherit the earth.
Keep diminishing returns in mind. (Score:2)
Just pulling numbers out of a hat, starting from raw video, lossless compression perhaps drops the bitrate needed by an order of magnitude. Existing lossy algorithms drop maybe another order of magnitude. It is very likely that with a lot of work, that could drop another 50%, but it's fairly unlikely to drop another order of magnitude off of existing systems.
Netflix should watch what it wishes for, though. Dropping another order of magnitude maybe would make things cheaper for Netflix, but that's also th
Are we missing out? (Score:2)
In a word: NO. There is nothing being missed.
New technology, per 1991 AT&T research, doesn't have a chance of disrupting older technology without a measurable improvement on the order of 10X or more. A Google search failed to locate that research; here's the latest I could find: https://bit.ly/2tc6oJl [bit.ly]
Re: (Score:2)
Why write a codec? (Score:2)
quantum economics (Score:1)
There is plenty of room for improvement (Score:2)
One of the issues is getting newer ideas added to existing standards, or even into developing standards. I had found that Gray coding (only 1 bit changes while counting: 000, 001, 011, 010, 110, 111) graphical images helped with early compression, and I proposed it when PNG was being developed, but it didn't go anywhere. Mapping the color space was another idea, because about 8 million of the colors in 24-bit RGB are brown or grey. 24-bit HSV was trivial to add to the analog section of VGA and would display far more shade
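For anyone unfamiliar with it, here is a minimal sketch of the Gray coding the parent describes (just the code itself; how it was applied to image data for compression is the poster's idea and not shown here):

```python
# Binary <-> Gray code: consecutive values differ in exactly one bit.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([format(to_gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
assert all(from_gray(to_gray(i)) == i for i in range(256))
```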
Re: (Score:1)