
AMD's CUDA Implementation Built On ROCm Is Now Open Source (phoronix.com)

Michael Larabel writes via Phoronix: While there have been efforts by AMD over the years to make it easier to port codebases targeting NVIDIA's CUDA API to run atop HIP/ROCm, it still requires work on the part of developers. The tooling has improved, such as with HIPIFY helping to auto-generate HIP source from CUDA code, but it isn't a simple, instant, and guaranteed solution -- especially if striving for optimal performance. Over the past two years, though, AMD has quietly been funding an effort to bring binary compatibility, so that many NVIDIA CUDA applications could run atop the AMD ROCm stack at the library level -- a drop-in replacement without the need to adapt source code. In practice, for many real-world workloads, it's a solution for end users to run CUDA-enabled software without any developer intervention. Here is more information on this "skunkworks" project that is now available as open source, along with some of my own testing and performance benchmarks of this CUDA implementation built for Radeon GPUs. [...]
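To make the porting burden concrete, here is a minimal sketch of the kind of source-level translation HIPIFY automates. The runtime calls noted in the comments are the real CUDA and HIP API names; the kernel itself is a hypothetical example, not code from the article:

```c++
// Hypothetical kernel illustrating a CUDA-to-HIP port; the API mapping in
// the comments is the mechanical part that tools like HIPIFY automate.
#include <hip/hip_runtime.h>  // CUDA version: <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical in CUDA and HIP
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *d_x = nullptr;
    hipMalloc(&d_x, n * sizeof(float));             // CUDA: cudaMalloc
    hipMemset(d_x, 0, n * sizeof(float));           // CUDA: cudaMemset
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);  // same launch syntax
    hipDeviceSynchronize();                         // CUDA: cudaDeviceSynchronize
    hipFree(d_x);                                   // CUDA: cudaFree
    return 0;
}
```

The mechanical renaming is the easy part; the remaining developer work the summary alludes to tends to come from vendor-specific intrinsics, warp-size assumptions (32 threads on NVIDIA versus 64-wide wavefronts on many AMD GPUs), and kernels hand-tuned for one architecture.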

For those wondering about the open-source code, it's dual-licensed under either Apache 2.0 or MIT. Rust fans will be excited to know the Rust programming language is leveraged for this Radeon implementation. [...] Those wanting to check out the new ZLUDA open-source code for Radeon GPUs can do so via GitHub.
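To illustrate what a library-level, drop-in replacement means in practice, here is a minimal sketch of an ordinary CUDA driver-API program. Every entry point below is a real libcuda call; the idea being illustrated (not guaranteed for any given workload) is that an already-compiled binary like this can resolve those symbols against ZLUDA's replacement library instead of NVIDIA's, with no recompilation:

```c++
// Minimal CUDA driver-API program; every call here is a real libcuda entry
// point. An unmodified binary resolves these symbols at load time, which is
// what lets a drop-in libcuda.so (such as ZLUDA's) service them instead.
#include <cuda.h>
#include <cstdio>

int main() {
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "no usable CUDA (or CUDA-compatible) driver\n");
        return 1;
    }
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    char name[256];
    cuDeviceGetName(name, sizeof(name), dev);
    printf("CUDA device 0: %s\n", name);  // under ZLUDA, a Radeon GPU
    return 0;
}
```

On Linux this kind of substitution is typically done by pointing the loader at the replacement library first, roughly LD_LIBRARY_PATH=<zluda directory> ./app, per the project's documentation.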



  • by illogicalpremise ( 1720634 ) on Tuesday February 13, 2024 @09:39PM (#64237820)

    Nope. Don't fall for it. The ROCm platform is targeted at DATACENTER GPUs. As soon as any consumer GPU becomes affordable it's quietly dropped from the next ROCm release.

    If you go down this road expect to spend mega $$$$ on Instinct datacenter GPUs or buying a new high-end GPU every year.

    • The ROCm platform is targeted at DATACENTER GPUs. As soon as any consumer GPU becomes affordable it's quietly dropped from the next ROCm release.

      This doesn't seem right at all. I'm using ROCm drivers for OpenCL applications right now on an RX 6600, an affordable consumer GPU. ROCm is open source (see the Wikipedia link in the summary), so there isn't an immediate danger of support being dropped for given hardware; you can always fork it and backport things, etc.

      AMD also provides closed-source drivers, so perhaps you're referring to them?
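      For anyone wanting to check the same thing on their own machine, here is a minimal sketch using the standard OpenCL 1.2 C API (nothing ROCm-specific; platform and device names will vary) that lists which platforms see the GPU:

```c++
// Minimal OpenCL platform/device enumeration (standard OpenCL 1.2 C API,
// nothing ROCm-specific). On a working ROCm install, an AMD platform
// entry should list the Radeon GPU.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);
    for (cl_uint p = 0; p < nplat; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, nullptr);
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devs, &ndev) != CL_SUCCESS)
            continue;  // this platform exposes no GPUs
        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, nullptr);
            printf("%s: %s\n", pname, dname);
        }
    }
    return 0;
}
```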

        • This doesn't seem right at all. I'm using ROCm drivers for OpenCL applications right now on an RX 6600, an affordable consumer GPU. [...]

        No. I'm referring to your card (and most others) having no official ROCm support from AMD, and only working (if at all) via various hacks and black magic:

        https://github.com/ROCm/ROCm/i... [github.com]

        Like I said, I got my Vega M chipset kind-of working too, but only through frustrating hours of trial and error, and even then the result was flaky and highly dependent on very specific driver and library versions, custom source compiles and/or undocumented settings found in long and rambling stackoverflow threads and help forums...
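        (For reference, the best-known of those workarounds is the HSA_OVERRIDE_GFX_VERSION environment variable, e.g. HSA_OVERRIDE_GFX_VERSION=10.3.0 on some RDNA2 cards, which makes the ROCm runtime treat an officially unsupported GPU as a nearby supported ISA. It is unofficial and version-sensitive, which is exactly the fragility being described.)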

        • Ah, I must have been lucky with my applications and the GPU. I actually started with Mesa OpenCL, which seemed fine at first, but there were timeouts on my longer-running kernels, and ROCm has none of that.

          I do fairly simple but heavy numerical stuff, and it turns out AMD cards are much better for these uses. For example, double-precision float speed is only half of single precision, whereas DP on Nvidia consumer cards is much slower. It's easy to check this, as Nvidia also runs OpenCL, so in my experience...
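          The FP64-versus-FP32 ratio is straightforward to probe from OpenCL. Below is a hypothetical micro-benchmark kernel pair held in a C++ raw string (the usual clCreateProgramWithSource route); time each kernel over a few million work-items with profiling events and compare. This is a sketch, not a rigorous benchmark, and actual FP64 ratios vary widely across both vendors' product lines:

```c++
// Hypothetical FP32-vs-FP64 throughput probe (OpenCL C kernels embedded in
// C++). Build with clCreateProgramWithSource, launch each kernel over a
// large NDRange, and compare CL_PROFILING_COMMAND_* event timestamps.
static const char *kFmaKernels = R"CLC(
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void fma_f32(__global float *out, float seed) {
    float a = seed, b = 1.000001f;
    for (int i = 0; i < 4096; ++i) a = fma(a, b, b);   // FP32 FMA chain
    out[get_global_id(0)] = a;
}

__kernel void fma_f64(__global double *out, double seed) {
    double a = seed, b = 1.000001;
    for (int i = 0; i < 4096; ++i) a = fma(a, b, b);   // FP64 FMA chain
    out[get_global_id(0)] = a;
}
)CLC";
```

          Because each iteration depends on the previous one, a single lane is latency-bound, but with enough work-items in flight the aggregate rate still tracks the FP32/FP64 throughput ratio.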

  • I'm glad they leveraged Rust!

    Isn't that a weasel word for "I don't actually know anything"?

  • From the README (FAQ section)
    >>With neither Intel nor AMD interested, we've run out of GPU companies. I'm open, though, to any offers that could move the project forward.
    >>Realistically, it's now abandoned and will only possibly receive updates to run workloads I am personally interested in (DLSS).

    Unfortunately AMD is not interested in running CUDA applications. Companies these days are all about lock-in, which is fine, but it only benefits the one with the largest user base. In the ML field it...
    • by jabuzz ( 182671 )

      Someone needs to write a tender for a sizable GPU system that mandates CUDA support to get AMD's attention.

    • I've refused to learn CUDA as I don't want my code to be at the mercy of a single GPU maker. The project looks interesting at first glance, but it seems like they'd just be playing catch-up with new CUDA developments. Open standards are much nicer, and besides OpenCL, I've got the impression that ROCm itself (which is open source) provides a lot of CUDA-like higher-level functionality.
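    On the last point: the higher-level ROCm libraries do mirror their CUDA counterparts closely. Here is a small sketch against the real hipBLAS API; swapping the hipblas prefixes and enums for cublas ones gives essentially the same cuBLAS call. The wrapper function and its arguments are illustrative:

```c++
#include <hipblas/hipblas.h>  // header path varies across ROCm releases

// Square SGEMM (C = alpha*A*B + beta*C) against the real hipBLAS API; the
// call mirrors cublasSgemm argument-for-argument, so porting is a rename.
// The handle h is assumed to have been created with hipblasCreate.
void sgemm_square(hipblasHandle_t h, int n,
                  const float *dA, const float *dB, float *dC) {
    const float alpha = 1.0f, beta = 0.0f;
    hipblasSgemm(h, HIPBLAS_OP_N, HIPBLAS_OP_N,
                 n, n, n,        // m, n, k
                 &alpha, dA, n,  // A, leading dimension
                 dB, n,          // B, leading dimension
                 &beta, dC, n);  // C, leading dimension
}
```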
    • I'd wager that they would love to, but are afraid of a legal battle with Nvidia.

      Now there's precedent that should favor interface cloning, but every time it happens it has still involved an expensive, protracted legal dispute. So they may not want to get tangled up in that even if they should feel confident in the ultimate result.

  • Fan boys here (Score:4, Insightful)

    by jmccue ( 834797 ) on Wednesday February 14, 2024 @09:18AM (#64238640) Homepage
    Seems to be a lot of Nvidia fan boys here. I hope this works out. Any open-source product that can break Nvidia's stranglehold on the market is good to me. I hope AMD continues on with this research, ignoring the MBAs, so we can have a really performant open-source GPU for Linux and the BSDs.
    • Seems to be a lot of Nvidia fan boys here.

      Are there? There seem to be many more people just shaking their heads at AMD's crushing incompetence and disdain for their users.

      I buy NVidia, of course I do, because I want to do deep learning. I can get anything from a cheapass 1050 to a consumer monster like the 4090, or deploy to a cloud server with an H100, and that shit just works. I can tell non-computer-science students to use pytorch and it will work without them having to learn how to use fucking docker, or...

      • I really wish that NVidia had some competition, but AMD just flat out won't do the legwork and are somehow worried about cheap consumer cards cannibalising their nonexistent market for datacentre cards, even though this clearly is not the case for NVidia.

        Yeah, nvidia solved this problem by making all their cards expensive. (I now have a 4060 16GB. It was $450. The absolute most I've ever spent on a GPU before was $200, and I was there at the beginning of the consumer GPUs... VooDoo 1 and 2, Riva TNT and TNT2, Permedia 2, PowerVR, Matrox, I tried them all. Nvidia was the best even then, although 3dlabs was pretty good. AMD wasn't even worth screwing with until they got OSS drivers.)

        I hope this thing works out for them, I really do, because nvidia needs some competition...

        • Yeah, nvidia solved this problem by making all their cards expensive.

          Ha! Their cards are now expensive, stupid expensive, and if-you-have-to-ask-you-can't-afford-it expensive. Still, though, the merely expensive ones appear not to cannibalise the stupid expensive ones.

          Though some of that is companies doing shitty deals with Dell, who only offer overpriced and often slower Quadro cards on their workstations...

          I was there at the beginning of the consumer GPUs... VooDoo 1 and 2, Riva TNT and TNT2, Permedia 2, PowerVR...
