Intel Starts Publishing Open-Source Linux Driver Code For Discrete GPUs (phoronix.com) 43
fstack writes: Intel is still a year out from releasing their first discrete graphics processors, but the company has begun publishing their open-source Linux GPU driver code. This week they began by publishing patches on top of their existing Intel Linux driver for supporting device local memory for dedicated video memory as part of their restructuring effort to support discrete graphics cards. Intel later confirmed this is the start of their open-source driver support for discrete graphics solutions. They have also begun working on Linux driver support for Adaptive-Sync and better reset recovery.
As long as the GPUs ... (Score:2)
... sign a non-disclosure agreement.
Re: (Score:2)
Then find another job, OK?
Re: (Score:2)
I'm nowhere near your country.
I'm in Texas.
I always thought it would be interesting (Score:3)
Nvidia and AMD would never do this because it would cannibalize their sales of GPUs with more VRAM. Right now if your GPU doesn't have enough VRAM to run a game, your only choices are to reduce texture quality, or buy a new GPU. Intel only did it because they built GPUs without any VRAM, or with just 32-64 MB of eDRAM.
The need has decreased as SSDs have supplanted HDDs. And some games appear to be doing this manually - caching all textures in system RAM so they don't need to be re-read from disk. But system RAM as second-tier VRAM would be faster and a more universal solution.
Re:I always thought it would be interesting (Score:5, Insightful)
Re: (Score:2)
> even if you could stick a standard dimm slot onto the graphics card
Now there's an idea. Sure, it'd be less efficient than VRAM, but it would be much more efficient than talking to main memory across the bus. And who doesn't have a few old DIMMs lying around, displaced by other upgrades?
Of course, the cost of a DIMM slot or two, and quite possibly a second memory controller to talk to the very different memory, might very well outweigh the benefits.
Re: (Score:2)
Absolutely. But if you've got a video card with high-speed RAM, *plus* lower-speed add-on RAM as an additional cache against having to transfer data over the bus, then that would almost certainly be an improvement over the high-speed RAM alone.
Re: (Score:2)
What part of "over the bus" did you not understand? Memory buses are typically an order of magnitude faster than anything else outside the CPU, usually in the range of 25-60 GB/s. By contrast, even a PCIe 3.0 lane has a bandwidth of just under 1 GB/s. Even a dedicated x16 link is going to have less than a third of the bandwidth of a fast memory controller, and FAR worse latency.
That's the same reason your CPU has multiple layers of increasingly larger and slower cache sitting between it and RAM - even R
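The bandwidth gap described above is easy to sanity-check from published theoretical peaks. A minimal back-of-the-envelope sketch, assuming PCIe 3.0 (8 GT/s per lane, 128b/130b encoding) and dual-channel DDR4-3200 as the "fast memory controller" (both are assumed configurations, not figures from the thread):

```python
# Theoretical peak bandwidths only -- real-world numbers are lower.

def pcie3_bandwidth_gbs(lanes):
    """PCIe 3.0 peak: 8 GT/s per lane, 128b/130b encoding, 8 bits per byte."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

def ddr_bandwidth_gbs(mt_per_s, channels):
    """DDR peak: 8 bytes per transfer per 64-bit channel."""
    return mt_per_s * 1e6 * 8 * channels / 1e9

pcie_x16 = pcie3_bandwidth_gbs(16)       # ~15.8 GB/s
ddr4_dual = ddr_bandwidth_gbs(3200, 2)   # 51.2 GB/s
print(f"PCIe 3.0 x16:           {pcie_x16:.1f} GB/s")
print(f"DDR4-3200 dual channel: {ddr4_dual:.1f} GB/s")
print(f"ratio: {pcie_x16 / ddr4_dual:.2f}")
```

On those assumptions an x16 link tops out around 15.8 GB/s against 51.2 GB/s for the memory controller, i.e. just under a third, and that's before latency enters the picture.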
Re: (Score:2)
NVIDIA's GPUs already do that (I assume AMD's do too).
It is called unified memory in CUDA. It's been out there for years now.
Re: (Score:2)
Re: (Score:2)
Some software likes lots of RAM when the math is well understood and the software can make use of the GPU and RAM.
It depends on the task, the math, and the skill and time put into the OS, the GPU, and the software code.
Excellent. (Score:4, Insightful)
As much as I dislike Intel for their usual business practices, it's a good thing that they are bringing more open-source driver support to the market. If nothing else, this will put additional pressure on other companies *cough*nvidia*cough* to be more open about their own hardware.
I've always found it strange that some companies release hardware with almost no documentation and half-assed drivers, because doing so basically kneecaps their own product.
Re: (Score:1)
Misleading. NVIDIA releases closed-source tarballs for BSD. AMD does not.
AMD releases specifications and documentation and has great open source drivers. NVIDIA does not.
Most of the work on both the AMD and NVIDIA open source drivers is being done on Linux.
NVIDIA doesn't support any platform except their own. AMD is much the same, but at least they release the specifications to let you support your own platform.
Intel and AMD follow a similar path. NVIDIA does not.
Unless you've purchased a workstation graphics adap
Intel has a history of Linux support (Score:2)
For example, superior support for power-saving as compared to AMD. AMD never bothered to properly support power-saving on e.g. my Athlon Mobile L110.
Re: (Score:2)
That's bullshit; either the OEM didn't implement proper BIOS settings or it's user/idiot error.
You talk a lot of shit for a coward.
Either way, neither one is AMD's fault.
Intel commits code to support their processors and chipsets. AMD didn't bother to do that for this chip, so it had to be guessed and bodged from other chips.
AND your trying to segue into that from "Intel has a history of Linux support" = laughable trolling; you're a putz.
I use AMD processors pretty much exclusively. I am typing this from a system with FX-8350 right now. If facts frighten you, perhaps you should fuck right off.
A row of CPUs (Score:2)
An existing CPU design trying to sell ray tracing as a powerful new GPU design.
Can all of today's GPU math be made extra fast by using a lot more CPU math?
Fast CPU math would make an amazing GPU card for a set of ray-tracing math.
CPU math that computer games will have to understand and work to support as graphics.
Just keep adding another CPU onto the GPU card until the rays work at 60 fps in 4K?
All games crave adding that extra
Hope their drivers will be more stable than AMD's (Score:2)
After all, the i915 driver has been very reliable for me in recent years.
In other words (Score:3)