Intel Starts Publishing Open-Source Linux Driver Code For Discrete GPUs (phoronix.com)

fstack writes: Intel is still a year out from releasing their first discrete graphics processors, but the company has begun publishing their open-source Linux GPU driver code. This week they started with patches, on top of the existing Intel Linux driver, that add support for device-local memory (i.e. dedicated video memory) as part of restructuring the driver to support discrete graphics cards. Intel later confirmed this is the start of their open-source driver support for discrete graphics solutions. They have also begun working on Linux driver support for Adaptive-Sync and better reset recovery.
  • ... sign a non-disclosure agreement.

  • by Solandri ( 704621 ) on Saturday February 16, 2019 @02:29PM (#58131808)
    It would be useful if a discrete GPU could begin to use system RAM as second-tier VRAM once the VRAM on board the GPU was exhausted. That would prevent the issue where, if you run out of VRAM, the game starts to stutter as it dumps textures from VRAM and is forced to read new textures in from disk. If those extra textures could be held in system RAM instead, the stutter when they were transferred to the GPU would be considerably smaller than having to read them off disk.

    Nvidia and AMD would never do this because it would cannibalize their sales of GPUs with more VRAM. Right now if your GPU doesn't have enough VRAM to run a game, your only choices are to reduce texture quality, or buy a new GPU. Intel only did it because they built GPUs without any VRAM, or with just 32-64 MB of eDRAM.

    The need has decreased as SSDs have supplanted HDDs, and some games appear to do this manually by caching all textures in system RAM so they don't need to be re-read from disk. But system RAM as second-tier VRAM would be a faster and more universal solution.
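    Roughly, the fallback idea in code. This is only a sketch assuming a CUDA-capable system; alloc_texture_storage is a hypothetical helper (not any real driver API), and it relies on pinned host memory being reachable from the GPU over the bus on modern setups:

```cuda
// Sketch only: second-tier "VRAM" using pinned system RAM as a fallback.
// alloc_texture_storage is a hypothetical helper, not a real driver API.
#include <cuda_runtime.h>
#include <stdio.h>

// Try on-board VRAM first; if the device is out of memory, fall back to
// pinned (page-locked) host RAM, which the GPU can still read over the bus.
static void *alloc_texture_storage(size_t bytes, int *in_vram) {
    void *p = NULL;
    if (cudaMalloc(&p, bytes) == cudaSuccess) {
        *in_vram = 1;                  // fast tier: on-board VRAM
        return p;
    }
    cudaGetLastError();                // clear the out-of-memory error
    if (cudaMallocHost(&p, bytes) == cudaSuccess) {
        *in_vram = 0;                  // slow tier: system RAM over the bus
        return p;
    }
    return NULL;                       // both tiers exhausted
}

int main(void) {
    int in_vram = 0;
    void *tex = alloc_texture_storage((size_t)256 << 20, &in_vram); // 256 MiB
    if (!tex) { printf("allocation failed\n"); return 1; }
    printf("texture storage is %s\n", in_vram ? "in VRAM" : "in pinned system RAM");
    if (in_vram) cudaFree(tex); else cudaFreeHost(tex);
    return 0;
}
```

    A real driver could do this transparently with page migration instead of making the application juggle two tiers.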
    • by Fly Swatter ( 30498 ) on Saturday February 16, 2019 @02:38PM (#58131834) Homepage
      They already use system RAM for additional storage, but the problem still exists because the speed of data transfer across the bus between the CPU and the GPU is still the limiting factor. Also, VRAM is tuned purely for the GPU and would be significantly more efficient even if you could stick a standard DIMM slot onto the graphics card right next to the GPU.
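      To put rough numbers on that bus bottleneck, here is a small timing sketch (assuming a CUDA-capable machine) comparing a copy over the bus with a copy inside VRAM; on typical hardware the on-board copy wins by an order of magnitude:

```cuda
// Sketch: time a host->device copy (over the bus) vs. a device->device
// copy (within VRAM) for the same buffer size.
#include <cuda_runtime.h>
#include <stdio.h>

static float time_copy_ms(void *dst, const void *src, size_t n, cudaMemcpyKind kind) {
    cudaEvent_t start, stop;
    float ms = 0.0f;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(dst, src, n, kind);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main(void) {
    size_t n = (size_t)256 << 20;   // 256 MiB test buffer
    void *host, *dev_a, *dev_b;
    cudaMallocHost(&host, n);       // pinned, so the bus copy runs at full DMA speed
    cudaMalloc(&dev_a, n);
    cudaMalloc(&dev_b, n);

    float bus  = time_copy_ms(dev_a, host,  n, cudaMemcpyHostToDevice);
    float vram = time_copy_ms(dev_b, dev_a, n, cudaMemcpyDeviceToDevice);
    printf("over the bus: %.1f ms, within VRAM: %.1f ms\n", bus, vram);

    cudaFree(dev_b); cudaFree(dev_a); cudaFreeHost(host);
    return 0;
}
```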
      • > even if you could stick a standard DIMM slot onto the graphics card
        Now there's an idea. Sure, it'd be less efficient than VRAM, but it would be much more efficient than talking to main memory across the bus. And who doesn't have a few old DIMMs lying around, displaced by other upgrades?

        Of course, the cost of a DIMM slot or two, and quite possibly a second memory controller to talk to the very different memory type, might well outweigh the benefits.

    • by godrik ( 1287354 )

      Nvidia's GPUs already do that (I assume AMD's do too).
      It's called unified memory in CUDA, and it's been out there for years now.
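      For reference, a minimal sketch of the unified-memory API being described: cudaMallocManaged hands back one pointer usable from both CPU and GPU, and the runtime migrates pages on demand (on recent GPUs and drivers, a managed allocation can even exceed physical VRAM):

```cuda
// Minimal unified-memory sketch: one pointer, touched from host and device,
// with the CUDA runtime migrating pages between system RAM and VRAM.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *v, size_t n, float k) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= k;
}

int main(void) {
    size_t n = 1 << 20;
    float *v = NULL;
    cudaMallocManaged(&v, n * sizeof *v);        // managed: one pointer for CPU and GPU
    for (size_t i = 0; i < n; i++) v[i] = 1.0f;  // first touch on the CPU

    scale<<<(unsigned)((n + 255) / 256), 256>>>(v, n, 2.0f); // pages migrate to the GPU
    cudaDeviceSynchronize();

    printf("v[0] = %.1f\n", v[0]);               // pages migrate back on CPU access
    cudaFree(v);
    return 0;
}
```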

    • by AHuxley ( 892839 )
      That memory was too slow for demanding new computer games.
      Some software does benefit from lots of RAM, when the math is well understood and the code can actually make use of both the GPU and the RAM.
      It depends on the task, the math, and the skill and time put into the OS, the GPU support, and the software code.
  • Excellent. (Score:4, Insightful)

    by Gravis Zero ( 934156 ) on Saturday February 16, 2019 @02:30PM (#58131810)

    As much as I dislike Intel for their usual business practices, it's a good thing that they are bringing more openly supported hardware to the market. If nothing else, this will put additional pressure on other companies *cough*nvidia*cough* to be more open about their own hardware.

    I've always found it strange that some companies release hardware with almost no documentation and half-assed drivers; it's basically kneecapping your own product.

  • For example, superior power-saving support compared to AMD. AMD never bothered to properly support power-saving on my Athlon Mobile L110, for instance.

  • A big long card with a few CPUs on it, sold as a new way of thinking about a GPU.
    An existing CPU design trying to sell ray tracing as a powerful new GPU design.
    Can all of today's GPU math be made extra fast by using a lot more CPU math?
    Fast CPU math would make an amazing GPU card for a set of ray-tracing math.
    But it's CPU math that computer games will have to understand and work to support as graphics.
    Just keep adding another CPU onto the GPU card until the rays work at 60 fps in 4K?
    All games crave adding that extra...
  • As much as I appreciate AMD's efforts to implement the "amdgpu" driver, the result is still so far from being stable enough for serious 24/7 production use (rather than just gaming) that I really hope Intel will do better and provide an alternative for buyers.

    After all, the i915 has been very reliable for me in recent years.
  • by TeknoHog ( 164938 ) on Sunday February 17, 2019 @08:18AM (#58134362) Homepage Journal
    Some companies like to wait until product launch, but Intel isn't being too discrete about their plans.
