AMD Open-Sources Video Encode Engine
An anonymous reader writes "The latest feature added to AMD's open-source Radeon DRM graphics driver is VCE video encoding, via a large code drop that happened this morning. With a patched kernel and Mesa, the Radeon driver on their latest-generation hardware can now provide low-latency H.264 video encoding on the GPU rather than the CPU. The support is still being tuned and only covers the VCE2 engine, but the patches can be found on the mailing list until they land in the trunk in the coming months."
Re: (Score:2)
> It's almost like this has been on Intel's CPUs for years...
Oh really?
So what build of ffmpeg on Linux supports this?
Having some weaker-than-everyone-else's GPU feature and actually having libre support for that feature are two entirely different things.
Re: (Score:1)
FFmpeg (upstream SVN tree >= 2010/01/18 / version 0.6.x and onwards)
for reference: http://www.freedesktop.org/wik... [freedesktop.org]
Re:Holy shit (Score:4, Informative)
They have decoding support, but at least as recently as Google Summer of Code 2013 [multimedia.cx] they still didn't have hardware encoding support. That seems to be the fault of the ffmpeg project, though; encoding was added to the VA API in June 2009. Lack of interest?
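If you want to check for yourself, here's a minimal libva sketch that asks the driver whether it exposes an H.264 encode entrypoint (the render node path is an assumption, adjust for your box):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    /* Render node path is an assumption; it varies per system. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) return 1;
    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) return 1;
    int num = vaMaxNumEntrypoints(dpy);
    VAEntrypoint eps[num];
    /* VAEntrypointEncSlice is the encode entrypoint mentioned above. */
    if (vaQueryConfigEntrypoints(dpy, VAProfileH264Main, eps, &num) == VA_STATUS_SUCCESS)
        for (int i = 0; i < num; i++)
            if (eps[i] == VAEntrypointEncSlice)
                printf("H.264 hardware encode entrypoint present\n");
    vaTerminate(dpy);
    close(fd);
    return 0;
}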
Re: (Score:2)
Encoding was added to the VAAPI interface, but was never supported by Intel hardware. There's not much sense implementing a protocol when there's no hardware to interface with. You may be looking for the Intel Media SDK [intel.com], which wasn't made publicly available until the middle of last year.
Intel says otherwise [01.org]: Hardware encoding is supported with Intel HD 2000 and newer (Sandy Bridge)
Did you mean GPU? GPU is four times as fast. (Score:3)
You said Intel CPU. Did you mean Intel GPU?
GPU encoding is about four times as fast as CPU encoding. Of course, that depends on which GPU is being compared to which CPU.
Re: (Score:3)
I'm sure he means Intel's Quick Sync hardware codecs, which are integrated on Intel's CPUs and do not use the integrated GPU.
My understanding of AMD's VCE is that it is also a fully separate codec which does not use any GPU compute power, though they do have optimized paths to copy the framebuffer into VCE for low-latency screen capture.
interesting, hardware video chip on the CPU (Score:2)
That's interesting. For anyone who, like me, wasn't familiar with Quick Sync: it seems to be a dedicated video codec block on the CPU die. In testing it was much faster than CPU or even GPU encoding, but at lower picture quality per megabyte.
not exactly. Can real time vs can't (Score:3)
I don't think it's quite nonsense. A motorcycle has lower latency than a truck (you can get there faster), but a truck has higher throughput (it can deliver 1000 boxes quicker). That's a useful distinction.
Quick Sync especially can easily encode IN REAL TIME, so it's useful for DVRs, etc. (think instant replay). An unassisted CPU will struggle with real-time encoding. Being able to encode even multiple streams in real time beats not being able to.
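Back-of-the-envelope, assuming a made-up per-frame encoder time, just to show what the real-time budget looks like:

#include <stdio.h>

int main(void)
{
    double fps = 30.0;
    double budget_ms = 1000.0 / fps;   /* time budget per frame at 30 fps */
    double encode_ms = 12.0;           /* hypothetical per-frame encode time */
    printf("budget: %.1f ms/frame, encoder: %.1f ms -> %.1fx real time\n",
           budget_ms, encode_ms, budget_ms / encode_ms);
    return 0;
}

Anything at or above 1.0x keeps up with the source; a hardware block comfortably above that can chew through multiple streams at once.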
Re: (Score:1)
Yes, this is the sort of task-oriented, dedicated function block for video decode and encode that has been popular in GPUs, ARM SoCs, and now x86 "APUs" for quite some time.
Useless for high-quality encoding, but great for standard consumer uses: quick encoding and transcoding of all those phone videos.
The PS4 probably uses VCE for its TwitchTV integration, for example.
Re: (Score:2)
Distribution will be a hassle, as always; but it's not architecturally much different from just adding a chunk of flash to the card and storing the firmware there instead.
Re:binary driver (Score:4, Informative)
> From the mailing list, it appears you still need to link this all to a closed source binary...
No, it's firmware/microcode. The driver sends it to the GPU at boot as a blob, and it lives inside the card, hidden from everything. The alternative would be to have an EEPROM and a firmware-flashing utility; it'd still be there and still closed source, but it wouldn't be in the driver. It's not really part of the programming model; it's hardware initialization/configuration/tweaks to make it work correctly for the specific model.
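For reference, this is roughly the standard kernel firmware-loader pattern involved; the file name here is illustrative, not the exact one the radeon driver requests:

#include <linux/device.h>
#include <linux/firmware.h>

/* Rough sketch of the usual firmware-loader pattern; the file name
 * is illustrative, not the exact one the radeon driver uses. */
static int load_vce_microcode(struct device *dev)
{
    const struct firmware *fw;
    int err = request_firmware(&fw, "radeon/SOMECHIP_vce.bin", dev);
    if (err)
        return err;
    /* ... copy fw->data (fw->size bytes) into the VCE engine, then ... */
    release_firmware(fw);
    return 0;
}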
Quality vs Speed (Score:2)
For details check out: http://www.behardware.com/articles/828-27/h-264-encoding-cpu-vs-gpu-nvidia-cuda-amd-stream-intel-mediasdk-and-x264.html [behardware.com] and http://www.tomshardware.com/reviews/video-transcoding-amd-app-nvidia-cuda-intel-quicksync,2839-13.html [tomshardware.com]
Re: (Score:2)
Not a hardware bug. The program running on the GPU isn't the same as the one running on the CPU.
Re: (Score:2)
It's not the same program.
That's the thing: these new codecs don't specify exactly how you should encode.
Re: (Score:2)
That's the point. Codec quality is highly dependent on the choices the encoder makes (the tables you use, for instance), and that freedom is the main selling point of these codecs. In other words, the quality of the final output is determined by the quality of the coder.
The decoder rarely adds quality loss itself; it just reconstructs the signal from what it is given, and few decoders have any say in quality decisions.
The coder part, though, is where the quality decisions are made.
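A toy illustration, with hypothetical functions and numbers: the standard pins down how a quantization parameter (QP) is interpreted on decode, but each encoder picks QPs however it likes.

#include <stdio.h>

/* Hypothetical encoders: both produce standard-compliant output, but
 * make different quality/speed trade-offs when picking a QP. */
static int pick_qp_fast(int complexity)     { (void)complexity; return 30; }
static int pick_qp_adaptive(int complexity) { return 18 + complexity / 10; }

int main(void)
{
    for (int c = 0; c <= 100; c += 50)
        printf("complexity %3d: fast QP=%d, adaptive QP=%d\n",
               c, pick_qp_fast(c), pick_qp_adaptive(c));
    return 0;
}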
Codecs as far as the eye can see.... (Score:4, Interesting)
Video decoder verification was the most tedious task I have ever been assigned. Just sayin.