Open Source Intel Programming

Intel CTO Wants Developers To Build Once, Run On Any GPU (venturebeat.com) 58

Greg Lavender, CTO of Intel, spoke to VentureBeat about the company's efforts to help developers build applications that can run on any GPU. From the report: "Today in the accelerated computing and GPU world, you can use CUDA and then you can only run on an Nvidia GPU, or you can go use AMD's CUDA equivalent running on an AMD GPU," Lavender told VentureBeat. "You can't use CUDA to program an Intel GPU, so what do you use?" That's where Intel is contributing heavily to the open-source SYCL specification (SYCL is pronounced like "sickle"), which aims to do for GPU and accelerated computing what Java did decades ago for application development. Intel's investment in SYCL is not entirely selfless and isn't just about supporting an open-source effort; it's also about helping to steer more development toward its recently released consumer and data center GPUs. SYCL is an approach to data parallel programming in the C++ language and, according to Lavender, it looks a lot like CUDA.

To date, SYCL development has been managed by the Khronos Group, a multi-stakeholder organization that is helping to build out standards for parallel computing, virtual reality and 3D graphics. On June 1, Intel acquired Scottish development firm Codeplay Software, one of the leading contributors to the SYCL specification. "We should have an open programming language with extensions to C++ that are being standardized, that can run on Intel, AMD and Nvidia GPUs without changing your code," Lavender said. Lavender is also a realist, and he knows that there is a lot of code already written specifically for CUDA. That's why Intel developers built an open-source tool called SYCLomatic, which aims to migrate CUDA code into SYCL. Lavender claimed that SYCLomatic today covers approximately 95% of all the functionality that is present in CUDA. He noted that the 5% SYCLomatic doesn't cover consists of capabilities that are specific to Nvidia hardware.

With SYCL, Lavender said, there are code libraries that developers can use that are device independent. The way that works is that a developer writes code once, and SYCL can then compile it for whatever architecture is needed, be it an Nvidia, AMD or Intel GPU. Looking forward, Lavender said he's hopeful that SYCL can become a Linux Foundation project, to further enable participation and growth of the open-source effort. [...] "We should have write once, run everywhere for accelerated computing, and then let the market decide which GPU they want to use, and level the playing field," Lavender said.
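To make the "looks a lot like CUDA" claim concrete, here is a minimal sketch of a SYCL vector-add kernel. It is an illustrative example written against the SYCL 2020 specification, not code from the article; the header, queue, buffer and accessor names are standard SYCL, and which device actually runs it depends on the backends your compiler was built with.

#include <sycl/sycl.hpp>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Default-constructed queue: the runtime picks whatever device it finds.
    sycl::queue q;

    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The same kernel body can be compiled for Intel, AMD or Nvidia
            // targets, depending on which backends the toolchain supports.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffer destructors copy results back into the host vectors

    return c[0] == 3.0f ? 0 : 1;
}

Host and device code live in the same C++ translation unit, which is the "single-source" style described in the Wikipedia excerpt quoted further down in the comments.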

This discussion has been archived. No new comments can be posted.


  • OpenCL (Score:5, Insightful)

    by Joce640k ( 829181 ) on Tuesday October 04, 2022 @10:11PM (#62939327) Homepage

    Why not OpenCL?

    ("The nice thing about standards is that there's so many to choose from!").

    • by gweihir ( 88907 )

      But then Intel would not have control. So no, absurd idea!

      • Re: (Score:2, Flamebait)

        What the fuck are you going on about this time?

        SYCL is a Khronos-managed standard.
        To answer the question above for you, since you clearly don't know enough about this topic to have a valid opinion: "Why not OpenCL?" Because SYCL doesn't replace OpenCL.
        Intel also provides support for OpenCL.

        The first thing you'll learn when you grow up some day and start writing OpenCL kernels is that they're too low-level to be portable. You will need different kernels for different platforms.
        SYCL exists to try
        • *too low-level
          • What's hardware support like at the moment?

            Google seems to say that NVIDIA and AMD aren't interested.

            • All the usual suspects.

              Since the backends are pluggable, and anyone can make a SYCL implementation, you can get SYCL supporting anything that OpenCL supports (the SYCL implementation just needs to be aware of the OpenCL hardware specifics).
              Beyond that, there are CUDA and HIP backends as well.

              This means you can run it on every GPU on the market, including your Apple Silicon GPU, or any CPU with OpenCL drivers.
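              As a hedged illustration of what those pluggable backends look like from the programmer's side, the standard SYCL platform/device query below simply lists whatever the installed backends (Level Zero, OpenCL, CUDA, HIP, ...) expose on a given machine. The API calls are standard SYCL 2020; which entries actually appear is entirely implementation- and driver-dependent.

              #include <sycl/sycl.hpp>
              #include <iostream>

              int main() {
                  // Each installed backend contributes one or more platforms...
                  for (const auto& platform : sycl::platform::get_platforms()) {
                      std::cout << platform.get_info<sycl::info::platform::name>() << "\n";
                      // ...and each platform exposes the devices (GPU, CPU, ...) it can drive.
                      for (const auto& device : platform.get_devices()) {
                          std::cout << "  " << device.get_info<sycl::info::device::name>() << "\n";
                      }
                  }
                  return 0;
              }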
              • Yep. Since I posted that I found out that all Intel is doing is compiling SYCL to LLVM IR, and the LLVM IR is then translated to SPIR.

                SPIR is then translated to whatever your hardware prefers (even OpenCL if that's what you have).

                It's all layers upon layers: https://www.khronos.org/spir/ [khronos.org]

                • Yup.
                  The way to look at it is that SPIR (SPIR-V, really, at this point) is a better substrate for a higher-level language to compile down into than OpenCL directly.
                  It then uses the appropriate backends to talk to whatever hardware needs to be talked to.
                  SYCL is the specification that lets the upper-level languages compile to the SPIR-V "VM", which existing drivers then translate for whatever hardware is present.

                  The end result of all of this, is it gives us something close to a hardware-agnostic CUDA, which
    • Came here to find out how this is different from OpenCL...

      • SYCL is higher-level than OpenCL.
        OpenCL, contrary to widely held slashdot-armchair-expert belief, is not high-level, and you will often have to write several versions of a kernel for the different hardware it may run on.
    • "OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units, graphics processing units, digital signal processors, field-programmable gate arrays and other processors or hardware accelerators."

      That sounds...impractical in real life but what do I know.

      • OpenCL is practical. It is an open CUDA, but NVIDIA puts all its effort into its own proprietary solution and has the best GPUs, so if you want to be competitive you have to use CUDA. Also, SYCL is just a higher-level OpenCL/CUDA, and it will fail because NVIDIA will ignore it.
        • OpenCL is a shit-show. It is because of that fact that CUDA became vastly more popular in the compute space. It was worth it to constrain yourself to a single GPU and enjoy the niceties of CUDA rather than deal with the not-actually-portable-but-trying-to-be-portable mess that is OpenCL.

          SYCL seeks largely to fix that, and it already sees use these days. As for overtaking the NV/CUDA dominance in compute, who knows. You may be right about that.
          • From real_nickname: "OpenCL is practical..."

            From DamnOregonian: "OpenCL is a shit-show..."

            Well I'm glad we got that settled...lol

            • Yup. The difference is one of us uses it, and one of us does not.

              I'll leave that up to you to figure out, but if you're looking for hints, you probably need to look no further than the post that incorrectly identifies SYCL and is full of factual assertions about the intent of billion-dollar corporations.
    • Re:OpenCL (Score:4, Interesting)

      by _merlin ( 160982 ) on Wednesday October 05, 2022 @12:01AM (#62939533) Homepage Journal

      OpenCL has fallen out of favour. The more-popular portable layer these days is SPIR [khronos.org] (Standard Portable Intermediate Representation), and SPIR-V which is SPIR with Vulkan graphics features. The idea isn't to write code in SPIR directly. You code in some high-level language which is compiled to SPIR using a standard compiler not tied to any GPU. Then the SPIR is compiled with a light-weight compiler provided by the GPU vendor as part of the driver.
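      A hedged sketch of the split described above: the heavy, GPU-agnostic compilation happens offline, and at run time the application only hands the resulting SPIR-V module to the driver's lightweight back-end compiler. The OpenCL calls below are real API (clCreateProgramWithIL requires OpenCL 2.1 or later), but "kernel.spv" is a hypothetical file name for a module produced earlier by a vendor-neutral front-end compiler.

      #define CL_TARGET_OPENCL_VERSION 300
      #include <CL/cl.h>
      #include <fstream>
      #include <iterator>
      #include <vector>

      int main() {
          // Grab the first platform/device the runtime reports.
          cl_platform_id platform;
          clGetPlatformIDs(1, &platform, nullptr);
          cl_device_id device;
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
          cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

          // Load the SPIR-V module that the GPU-agnostic front end produced offline.
          std::ifstream f("kernel.spv", std::ios::binary);
          std::vector<char> il((std::istreambuf_iterator<char>(f)),
                               std::istreambuf_iterator<char>());

          // The vendor driver finishes compilation down to native GPU code here.
          cl_int err = CL_SUCCESS;
          cl_program prog = clCreateProgramWithIL(ctx, il.data(), il.size(), &err);
          clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);

          clReleaseProgram(prog);
          clReleaseContext(ctx);
          return err == CL_SUCCESS ? 0 : 1;
      }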

      • The V in SPIR-V is for "Vector", not "Vulkan". OpenCL and Vulkan can both use kernels compiled to SPIR-V, and the OpenCL C programming language is one of the languages that can be compiled to it.

        OpenCL is a high level API for compiling and executing kernels. It sits at the same level of the stack as Vulkan, but it's optimized for compute rather than graphics. SPIR-V is a low level representation of compiled kernel code, just one level above the GPU's native machine language. They aren't alternatives to

    • I wondered too. One answer I found was... (from https://en.wikipedia.org/wiki/... [wikipedia.org] )

      "SYCL is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency inspired by OpenCL that enables code for heterogeneous processors to be written in a “single-source” style using completely standard C++. SYCL enables single-source development where C++ template functions can contain both host and device code to construct complex algorithms that use hardw

    • OpenCL is a pale reflection of CUDA. SYCL is looking good as a CUDA replacement.
    • SYCL is strictly ahead-of-time compiled. Unlike CUDA and OpenCL, it can't compile new kernels at runtime. That makes it a lot less useful for many applications.
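      For readers unfamiliar with the distinction, here is a hedged sketch of the runtime kernel compilation referred to above: OpenCL can build a program from a source string that exists only at run time, which a strictly ahead-of-time toolchain cannot do. The kernel text is a trivial illustrative example, not anything from the article.

      #define CL_TARGET_OPENCL_VERSION 300
      #include <CL/cl.h>

      // In a real JIT scenario this source string could have been generated
      // moments earlier by the running program.
      static const char* kSource =
          "__kernel void scale(__global float* data, float factor) {\n"
          "    data[get_global_id(0)] *= factor;\n"
          "}\n";

      int main() {
          cl_platform_id platform;
          clGetPlatformIDs(1, &platform, nullptr);
          cl_device_id device;
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
          cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

          // Compile the source now; the driver JIT-compiles it for this device.
          cl_int err = CL_SUCCESS;
          cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
          err = clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);

          clReleaseProgram(prog);
          clReleaseContext(ctx);
          return err == CL_SUCCESS ? 0 : 1;
      }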

    • by jay age ( 757446 )

      SYCL is an abstraction layer over OpenCL, although different back-ends are now also available.

      Khronos offers it as a far simpler API to use than straight OpenCL calls, and frankly they have a point.

  • by bloodhawk ( 813939 ) on Tuesday October 04, 2022 @10:20PM (#62939351)
    Not surprising, those in last place are always wanting interoperability right up until they are no longer last.
    • Not surprising, those in last place are always wanting interoperability right up until they are no longer last.

      Make everyone else last and you'll magically be first!

      Like in Between Time and Timbuktu, where "true equality" is based on handicapping everyone until they're equally bad.

      I miss the Chronosynclastic Infundibulum.

      • Make everyone else last and you'll magically be first!

        i.e., they've probably designed it so it will be difficult to optimize on existing NVIDIA/AMD chips.

        Result: Intel chips are the fastest!

        • by _merlin ( 160982 )

          Nah, it's just yet another high-level language that gets compiled to SPIR, which is then compiled to native code for the GPUs. There are SPIR or SPIR-V compilers available for quite a few languages now, including special dialects of C/C++ and shader languages like GLSL and HLSL. The main benefit of SPIR is that the front-end compilers aren't tied to any particular OS or GPU - the same compiler can be used to compile source to SPIR or SPIR-V for use on any OS and GPU. The GPU vendor just has to provid

    • They are threatening to make GPU programming as "efficient" as Java. Soon there will be no first place. Only day-dreams of what used to be.

  • by 93 Escort Wagon ( 326346 ) on Tuesday October 04, 2022 @10:21PM (#62939355)

    Run equally badly on any GPU.

    • Run equally badly on any GPU.

      Lol, my first thought exactly.

      It'll run on anything but it'll run like shit because it's gotta wade through x number of layers of abstraction or conversion or translation or whatever the appropriate buzzword is.

      I mean, you could (probably) run Linux under Windows through MacOS on an Atari 800 but it'd take all year to boot.

      And not to put too fine a point on it, but umm, errr....wasn't that the promise of Java, basically? Compile it and "run it anywhere"? Or was it compile it for each OS and "run it anywhere

    • Run equally badly on any GPU.

      It's the old WORA dream. But write once, run anywhere ends up meaning working with the least common denominator in terms of featureset. And the official standard moves slowly, but you can avoid that by adding extensions and then when vendors want to propose new features they can add their own vendor-specific extensions and oh crap now it doesn't run everywhere anymore.

    • True that, but not every application has "running well" on its requirements list. If it did, then Java wouldn't exist.
      Sure, if you're running a large model on a supercomputer and paying by the processing second, then you probably want the most efficient piece of code. If on the other hand you're using an image processing tool like those from Topaz, then you're probably okay with a slight performance hit, and the developer is likely very okay with a simplified coding approach.

    • Write once. Compile for separate GPU tech though.

    • Right, which is why C++ is notorious for running poorly, for example.

  • If I care about performance enough to write GPU code, I probably care enough to use the GPU vendor's native compiler.

    I might even care enough to make a dedicated FPGA. And if I have really deep pockets, I might even spring a few (tens of) millions for an ASIC.

    But if I'm in that deep, I probably have a pretty good idea of what I want and little reason to hop platforms or suppliers. On the other hand if I'm still in the ill-defined exploratory phase, I could throw any old GPU at it and won't lose much by reco

  • They spastically switch without direction between different threaded programming "standards". Often endorsing several at once, depending on which of their web pages you happen to land upon.

    Right now you can find some of the company telling you that Cilk is the answer here, other groups saying OpenMP is certainly the way to go, and even some Threaded Building Blocks advice. But, SYCL is certainly the flavor of the month.

    And, they bundle this up in the nebulous OneAPI just so you can't figure out which compil

    • Right now you can find some of the company telling you that Cilk is the answer here, other groups saying OpenMP is certainly the way to go, and even some Threaded Building Blocks advice. But, SYCL is certainly the flavor of the month.

      And of course OpenCL, std::par and OpenACC.

      And, they bundle this up in the nebulous OneAPI

      You do have to hand it to them for the ambitious naming though ;)

  • Developers don't write for CUDA because they prefer Nvidia over Intel, but because they want the best performance they can get, and are willing to break compatibility and interoperability in pursuit of that goal.

    Intel chose to produce lackluster GPUs rather than devote serious resources to toppling Nvidia, and it shows. They (Intel) did a good job marketing to the general-purpose computing crowd, but let's face it - they've never succeeded in niche markets and probably never will. Nvidia chose to do on

    • The average (non-gamer) person doesn't care what video card they run, because whatever came with the computer is fine for the average user's needs. Nvidia knows this, Intel does not. Nvidia doesn't even bother trying to match Intel on price, or volume.

      While it's true that the average computer user doesn't care about gaming or need a beefy GPU, the market for non-integrated GPUs was $5 billion just for the last quarter, even after seeing a huge sequential sales drop. Nvidia, AMD, and Intel all want a piece of that huge market.

      While AMD has always been able to compete with benchmarks, Nvidia hasn't had much market competition from AMD for a while, so it can charge more. Intel is the new competitor, so it has to find a way to get buyers to even notice the

  • by PhunkySchtuff ( 208108 ) <kai&automatica,com,au> on Wednesday October 05, 2022 @02:19AM (#62939673) Homepage

    There is OpenGL, which is an open framework for 3D graphics; however, NVIDIA encourages developers to specifically target its own APIs, as it has hobbled OpenGL performance in its drivers. ATI used to have decent OpenGL performance, but now they also hobble it so that developers are encouraged to use the Radeon APIs instead.

    Then, we have OpenCL - which Intel has at some stage in the past supported on Intel processors, but now wants to move away from because reasons.
    https://www.intel.com/content/... [intel.com]

    • This isn't Intel. This is a project of the Khronos Group itself, the same group that oversees OpenCL, and SYCL was originally created by the OpenCL working group before it became a large enough project to have its own working group.

      They do different things. SYCL is a high level framework that works with OpenCL, or potentially also CUDA or ROCm.

  • by SciCom Luke ( 2739317 ) on Wednesday October 05, 2022 @04:08AM (#62939767)
    ...I am really totally fine compiling for different hardware, and giving it time to optimize.
    What would make my life easier is a less draconian interface to the GPU.
  • In the late 1980s or early 90s, I think Symantec C on Mac had a forthcoming product called Bedrock where you wrote your app, and it would produce apps for Mac and PC.

    Sounded pretty cool. Hey, what's with all the crickets?

  • Not worth my time to learn a new framework that gets forgotten.
  • Obligatory xkcd link

    https://xkcd.com/927/

  • by Pinky's Brain ( 1158667 ) on Wednesday October 05, 2022 @01:10PM (#62940887)

    Put on some big boy shoes and support dynamic parallelism or go home, with the hardware and the software.

    At least OpenCL 3.0 got enqueue from device as an optional feature, HIP and SYCL just pretend it doesn't exist to suit the shitty hardware they are meant for.

  • If you build once, deploy everywhere, you are limited to using the features supported by all. Or best case, the toolset gracefully degrades for targets that don't support specific features you want to use. Every manufacturer wants to stand out somehow, they want you to buy _their_ hardware. They are never going to limit themselves to the common standard. That would not be in their financial best interest.

  • So this is the Embrace phase.

