The Cg Tutorial

Martin Ecker writes "NVIDIA's book The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics, published by Addison-Wesley, is a book that many 3D graphics programmers have been waiting for. Finally a book is available that introduces the interested reader to NVIDIA's high-level shading language Cg (short for 'C for Graphics') and the concepts involved in writing shader programs for programmable graphics pipeline architectures." If you are such an interested reader, you'll find the rest of Ecker's review below.
The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics
author: Randima Fernando, Mark J. Kilgard
pages: 384
publisher: Addison-Wesley Publishing
rating: 8
reviewer: Martin Ecker
ISBN: 0321194969
summary: An excellent introduction to the high-level shading language Cg (C for Graphics) and its uses in real-time 3D graphics.

The first half of the book teaches the basic language constructs of the Cg shading language and shows how to use them in concrete example shaders, whereas the second half concentrates on more advanced techniques that can be achieved with Cg on today's programmable GPUs, such as environment mapping or bump mapping. Even these more advanced techniques are explained in a clear and easy-to-understand manner, but the authors do not neglect to present the mathematics behind the techniques in detail. More serious 3D programmers in particular will appreciate this. The explanation of texture space bump mapping must be the easiest-to-understand explanation of the technique I have read to date, which alone makes it worth having this book on my shelf.

At this point it is important to note that the book does not discuss the Cg runtime, which is used by applications to compile and upload shaders to the GPU. The book focuses exclusively on the Cg language itself. So if you're already familiar with Cg and want to learn how to use the Cg runtime, this book is not for you; you should read the freely available Cg Users Manual instead.
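To give a feel for the language itself, a complete Cg vertex program of the kind the first half of the book builds up is only a handful of lines. The following is an illustrative sketch in the spirit of those early examples, not a listing from the book; it transforms a vertex into clip space with a model-view-projection matrix and passes the vertex color through.

    // Illustrative sketch of a minimal Cg vertex program (not from the book).
    // The output semantics (POSITION, COLOR) tell the pipeline how to route the results.
    struct VertexOutput {
        float4 position : POSITION;
        float4 color    : COLOR;
    };

    VertexOutput main(float4 position : POSITION,
                      float4 color    : COLOR,
                      uniform float4x4 modelViewProj)
    {
        VertexOutput OUT;
        OUT.position = mul(modelViewProj, position);  // transform to clip space
        OUT.color    = color;                         // pass the vertex color through
        return OUT;
    }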

The book contains many diagrams and figures to illustrate the discussed equations and show the rendered images produced by the presented shaders. Note that most figures in the book are in black and white, which sometimes leads to funny situations, such as in section 2.4.3, where the resulting image of a shader that renders a green triangle is shown. Since the figure is not in color, the triangle that is supposed to be solid green ends up being solid gray. However, in the middle of the book there are sixteen pages of color plates that depict most of the important color images and also show some additional images of various applications, NVIDIA demos, and shaders written for Cg shader contests at www.cgshaders.org.

Accompanying the book on CD-ROM is an application framework that allows you to modify, compile, and run all the example shaders in the book without having to worry about setting up a 3D graphics API such as OpenGL or Direct3D. The application framework uses configuration files to load meshes and textures and to set up the graphics pipeline appropriately for the shaders. This way the Cg shaders can be examined and modified in isolation, with the results immediately visible in the application's render window. Thanks to this framework application, readers who are not yet familiar with a 3D graphics API, and even 3D artists interested in programmable shading on modern GPUs, can begin to learn Cg and experiment with real-time shaders.

A final note for programmers using Direct3D 9: the high-level shading language included with the latest version of Direct3D, simply called HLSL (High-Level Shader Language), is syntactically equivalent to Cg. Everything written in the book about Cg applies equally to HLSL. Thus, the book is also an excellent guide for programmers who only intend to work with HLSL.

This book truly is the definitive guide for beginners with the Cg language, and more advanced 3D programmers will also find the chapters on vertex skinning, environment mapping, bump mapping, and other advanced techniques interesting. Once you've started writing shaders in Cg, you will never want to go back to writing them in low-level assembly shading languages again.
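To hint at the second half, the fragment-program side of the diffuse texture-space bump mapping the review praises boils down to a few lines as well. This is only a sketch under the assumption that a companion vertex program has already rotated the light direction into tangent space and passed it down; it is not the book's code.

    // Illustrative sketch of diffuse texture-space (tangent-space) bump mapping.
    // Assumes the vertex program supplies the tangent-space light direction in TEXCOORD1.
    float4 main(float2 texCoord        : TEXCOORD0,
                float3 lightDirTangent : TEXCOORD1,
                uniform sampler2D normalMap) : COLOR
    {
        // Expand the stored normal from the [0,1] texture range to [-1,1].
        float3 normal  = 2.0 * (tex2D(normalMap, texCoord).xyz - 0.5);
        float  diffuse = saturate(dot(normal, normalize(lightDirTangent)));
        return float4(diffuse, diffuse, diffuse, 1.0);
    }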


You can purchase The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics from bn.com. The book's official website has additional information and ordering options besides. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

  • Looks Interesting (Score:4, Interesting)

    by T-Dub ( 642521 ) <tpaine.gmail@com> on Thursday April 24, 2003 @11:07AM (#5800018) Journal
    I just hope it's not too little, too late. Nvidia seems to be going the way of Voodoo: taking the same card and clocking it faster with a bigger fan. It's not bad enough that the Athlons are toasters; now we have to have two of them in a box with enough fans to make a tornado.

    nForce looks pretty cool, though.
    • I see what you're saying... nvidia's latest video card, with the super loud fan and only marginal performance gains, reminds me of the Voodoo3 to a certain extent. But they are still my video card of choice, and that's because a) their drivers rock and b) they try to provide full access to all of the video card's new features in Linux (usually via OpenGL extensions). I admit, I have a hard time understanding how to use these extensions, but at least I know that if I did know how to use them, I'd be able to in lin
      • Hoping for good ATI drivers for Linux? You must be living in a dream world. ATI hasn't written a single decent driver for ANY platform, let alone their secondary platform support. Every LAN party we have is cursed by the poor sap who read some review saying the new ATI Radeon has 3% faster performance and bought in to the worst supported cards ever made.

        On a related note, does Age Of Mythology even support the ATI Radeon 9700 Pro? We messed with it for hours trying different patches, hacking the video card
    • Re:Looks Interesting (Score:2, Interesting)

      by snackbar ( 650322 )
      What are you talking about? The GeForce FX is much different than the GeForce 4, specifically in the way it processes fragments. It also has a much higher transistor count.
      • Jesus, Dude, look at the numbers. The "FX" scores about the same as the "4" in every real-world game. You may say that has to do with DX9 implementations, but that is just marketing crap.

        Look at the numbers and you will see that the "4" and the "FX" are, basically, the same card. I think a "P4 trick" is upon us. They boosted the clock speed but reduced the instructions per cycle.

        Why the hell would you retool your factory to produce a card identical to the old one? Seems strange to me.
        • Um, there are two reasons for the performance diff. First, current code barely uses GeForce4-level features, much less FX-level features. Second, the newer architecture is much more generalized (and higher-precision) than the old one, and the new drivers aren't fully optimized. We saw the same thing when the PPro came out. It was a much faster architecture, but it didn't show because code wasn't out there to take advantage of it. The "P4" comment makes no sense, because the clock-speed of the card isn't gene
    • That alone is the reason I think they will make it. Voodoo only had graphics cards; that's it, nothing else of note out there. But NVIDIA currently has the number one Athlon chipset on the market.
    • I just hope it's not too little too late. Nvidia seems to be going the way of Voodoo. Taking the same card and clocking it faster with a bigger fan.

      Sadly, they're also walking the Voodoo path with their new "strategic alliances" [nvidia.com], under which publishers will develop games using Nvidia-specific features. It's a pathetic tactic from a company that built their success on strong engineering rather than lame marketing ploys.

  • NVIDIA (Score:4, Funny)

    by AbdullahHaydar ( 147260 ) on Thursday April 24, 2003 @11:08AM (#5800032) Homepage
    I'm sick of NVIDIA trying to control the graphics market by controlling the language developers have to use.

    ...That's like someone trying to control Java

    ...Never mind...(Come to think of it, I can't even think of a counter-example where someone didn't try to control a market through control of the programming language.)
    • Re:NVIDIA (Score:3, Interesting)

      by Moonshadow ( 84117 )
      Personally, I think that Cg is pretty cool. Sure beats writing shaders in assembly. Regardless of its status as a "marketing tool", nVidia has provided game devs with a tool that makes achieving all kinds of nifty effects a lot easier and faster than before. I'm not thrilled with the Geforce FX, but I can stick with a GF4 Ti for a while. :)
    • Re:NVIDIA (Score:2, Interesting)

      by T-Dub ( 642521 )
      A video-card-independent programming language would be nice, though the efficiency would probably rival Java's.

      I think I would prefer better eye-candy to more eye-candy.
      • Re:NVIDIA (Score:3, Interesting)

        by Moonshadow ( 84117 )
        Yeah, I'd like an independent language too, but for that to happen, we'd have to have some kind of HLL standard be implemented, and we ALL know how well competing companies do with implementing standards proposed by a competitor. Usually, card makers will conform to the standards placed on them by the parent hardware (AGP, analog out, etc) and push their proprietary stuff as the end-all-be-all. There's less interest in cooperation than there is in convincing consumers that they can't live without your nifty
      • Re:NVIDIA (Score:3, Informative)

        by br0ck ( 237309 )
        This overview [tomshardware.com] of HLSLs by Tom's Hardware says that Nvidia is pushing for Cg to be hardware independent and used by all video card vendors (see the "Which HLSL?" page). The article also explains exactly what HLSLs are and why ATI and Nvidia have created the respective languages RenderMonkey and Cg.
    • Control makes it easier to implement standards. Standards make it easier to develop. Development makes it easier to profit. Your argument makes no sense. Of course NVIDIA is trying to control the graphics market - that's their job. If controlling the language is one of the best ways to go about doing that, why shouldn't they try? I smell a big fat commie rat.. and it's you.
      • by Anonymous Coward
        I smell a big fat commie rat
        Yeah - chapter 17 of Das Kapital is entitled "On Creating A Card Independent Graphics Programming Language". This sort of call for standards is one of the founding tenets of Communism!
    • Re:NVIDIA (Score:5, Funny)

      by Cyberdyne ( 104305 ) on Thursday April 24, 2003 @11:22AM (#5800194) Journal
      ...Never mind...(Come to think of it, I can't even think of a counter-example where someone didn't try to control a market through control of the programming language.)

      BCPL springs to mind; it was developed (by one of my old supervisors!) specifically to avoid platform lock-in. At the time, the university was about to acquire a second computer - but it wasn't compatible with the first. To make matters easier for the users, Martin Richards [cam.ac.uk] designed BCPL and an accompanying bytecode language called cintcode. Despite its age - it's an indirect ancestor of C! - it is still in use today in a few applications; apparently Ford have a custom-built setup running in BCPL on a pair of VAXes to manage each factory outside the US. (For some reason, the US factories use a different system.) With the demise of the VAX, Ford have been supporting Martin's work in porting the whole BCPL/cintcode/Tripos (a cintcode OS) stack to run on Linux/x86 systems.

      For that matter, I seem to recall most of the early computer languages were intended to reduce the need to be tied to a specific platform; Fortran, Pascal, C (derived from B, itself a cut-down version of BCPL), as well as the original Unix concept.

      • Re:NVIDIA (Score:2, Interesting)

        by wct ( 45593 )

        Mindless trivia: The original user components of AmigaDOS were written in BCPL, by a British company (Metacomco) contracted by Commodore. Through various revisions, they were rewritten by Commodore in C.

    • Re:NVIDIA (Score:2, Insightful)

      Ummm... As the reviewer points out, Cg is more or less equivalent to the GPU-vendor-neutral HLSL for Direct3D - which belies your comment about Nvidia trying to dominate the market with a language.

      However, one might say that MS and Nvidia are doing so together...

      --TRR

      • Direct3D is vendor neutral??????? So where are the Linux drivers?

        OpenGL is vendor neutral, while Direct3D is Microsoft's (very successful) attempt to lock game developers into Windows and its DirectX platform.

        • If the OpenGL ARB could respond to technology change at anything near the rate Direct3D has managed, then yes, OpenGL would be cool for games development. OpenGL developers have to rely on vendor-specific extensions to access hardware features, instead of a cross-vendor HAL.

          And I hate microsoft as much as the next guy...

    • Cg runs with ATI products as well. It is an attempt to make it easier to program graphics for high-end GPUs. Fortunately, nvidia dominates this market (at least in terms of having a solid business with top-notch products), which is beneficial to them.
    • (Come to think of it, I can't even think of a counter-example where someone didn't try to control a market through control of the programming language.)

      What about Bell Labs, with the original "release" of C? But perhaps that's old enough not to count. Besides, I'm not actually sure that NVIDIA are trying to control all that much... Isn't it perfectly possible to implement a Cg compiler for ATI hardware? It really ought to be, in theory. In practice, I gather ATI rather concentrate on OpenGL 2.0.

  • I couldn't get the official site's link to the NVIDIA Gear Store to work. It says the link won't work if it's out of stock, but it's hard to believe it's out of stock already! I thought this book was pretty new and this is before the /. crowd even knew about ordering it. It's only like 50 cents cheaper than ordering from BN anyway, but I wanted to check it out.
  • A copy of 3ds Max 5, and a team of artists, and I can start coding graphics stuff for fun again! Uh oh, this is /. and I said DX9, hello Troll demod!
  • by 0x00000dcc ( 614432 ) on Thursday April 24, 2003 @11:17AM (#5800136) Journal
    Excuse my ignorance in this realm, but why would you want to learn Cg when you could extend a C/C++ library to include the various graphics features that you want to use?

    Seriously, I really would like someone to debunk this idea if possible, because I have picked up an interest in graphics programming and am just starting out - I would like to know more. It seems like an easier / more pragmatic route (due to code reusability) to go the other way ...

    • The shader programs are totally different anyway; they're not x86 (or whatever) assembly. They run on the graphics chip and have their own specialized assembly language suited to the stuff they do. So it can't be "just an extra library"; it needs to be a new compiler and all that. The regular C-library part (which would be done through DirectX anyway) would just be for copying the shader programs to the graphics chip. But I'm not an expert, somebody correct me if I'm wrong.
      • C/C++ is a high level language. It's intended for writing code for the more macro level (moving objects around a scene, setting lighting sources, texturing, etc).

        Cg on the other hand is used for writing what are known as procedural shaders. Shaders determine what an object/particle will look like; _procedural_ shaders can change what a polygon will look like based on any number of criteria (not the least of which is time).

        So if I'm going to texture a wall with wood, it makes sense to procedurally generat
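        To make the idea above concrete, here is a hedged sketch of a tiny "procedural" Cg fragment program (illustrative only, not code from the book or the poster): instead of sampling a wood texture, it computes a simple banded pattern directly from an interpolated position, so the result can vary with whatever inputs you feed it.

          // Illustrative sketch: a procedural band pattern computed per fragment.
          // objectPos is assumed to be passed down from the vertex program in TEXCOORD0.
          float4 main(float3 objectPos : TEXCOORD0) : COLOR
          {
              // frac() keeps the fractional part; thresholding it yields repeating bands.
              float band = frac(objectPos.x * 10.0) > 0.5 ? 1.0 : 0.4;
              return float4(band * 0.55, band * 0.35, band * 0.15, 1.0);  // brownish "wood" stripes
          }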
      • There is however a very nice approach called shader metaprogramming [acm.org] that makes it possible to write shaders directly in C++ syntax and call them as if they were functions. The basic idea is to build up a parse tree through clever operator overloading, and then compile that into a shader to bind before using it. Clever approach.
    • by Molt ( 116343 ) on Thursday April 24, 2003 @11:27AM (#5800250)
      Cg is a very specific language which runs on the graphics card itself, and is only used for pixel and vertex shader programming. It's always used in conjunction with one (or even more) higher-level libraries.
      Firstly you have the application-level library (SDL, for example); this handles stuff like opening windows and user interaction. This is also the bit that's often written specifically for the game.
      At the next level we have the 3D library, normally OpenGL or DirectX. This handles most of the actual graphics work itself, such as displaying your polygon meshes, applying standard lighting effects, and so forth.
      Finally we hit the shader level. It's here that Cg comes into its own, with special snippets of Cg code to get the reflections on the water to look just right and ripple as the character walks through, or to make the velvet curtains actually have that distinctive sheen. Special-effects work only.
      It is worth noting that DirectX does have its own way of doing shaders now, and OpenGL has a specification for them, but last time I looked no one had implemented it.
      Hope this makes sense.
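      As an illustration of that last, shader-only level, a Cg fragment program really is just a small snippet. A hedged sketch (nothing engine- or card-specific assumed): it looks up a decal texture and tints it by the interpolated vertex color.

        // Illustrative Cg fragment program: modulate a decal texture by the vertex color.
        float4 main(float2 texCoord : TEXCOORD0,
                    float4 color    : COLOR,
                    uniform sampler2D decal) : COLOR
        {
            return color * tex2D(decal, texCoord);  // per-fragment texture lookup and tint
        }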
    • C/C++ is used to program general-purpose processors for a huge variety of operations, where flexibility and expressiveness of the language is paramount. Cg is used to program the highly specialized processor on the graphics card for a relatively narrow range of operations where performance is paramount. They really aren't the same thing.

      As for the library idea... Chances are that it will soon become feasible to implement what we would call GL's or DX's feature set entirely on the shader processor, and th
    • Cg is a high-level shader language. You only use it for shaders, not for your real program. You write your stuff in C, C++, or any other language with OpenGL or Direct3D bindings.
      Cg is only used for shaders: tiny little specialized code fragments that run directly on the graphics card and manipulate vertices (vertex shaders) or shade the inside of triangles in new ways (pixel shaders).
      Before Nvidia created Cg, the only way of creating new shaders was using a low-level assembly-like language. With Cg
    • The whole point of shaders is that you can now program the video card HARDWARE, which is much faster than doing everything in software.
    • Because a C/C++ program isn't going to fit on a video card. Cg is like RenderMan, the OpenGL Shading Language, and DirectX's High-Level Shader Language. It's a small, tight, targeted language for running shader programs on the video board.
  • by phraktyl ( 92649 ) <{wyatt} {at} {draggoo.com}> on Thursday April 24, 2003 @11:26AM (#5800241) Homepage Journal
    A Definitive Guide not by O'Reilly? That's it, the gloves are off!
  • by gr8_phk ( 621180 ) on Thursday April 24, 2003 @11:26AM (#5800246)
    I've been thinking for some time that the C standard needs native support for vector data types. Sure, I have Vec3, Vec4 and M4x4 classes that I wrote, but they don't take advantage of SSE instructions and such. Intel has a compiler that supposedly works with these instruction sets, but I haven't tried it. Wider support would be available if Cg were a real standard extension to C. When is GCC going to handle Cg? This would allow all those shaders to be used unchanged in software renderers (Mesa, for example). I'm not sure Cg as defined is the correct way to extend C, but you get my point.
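    For comparison, this is roughly what Cg's built-in vector support looks like (an illustrative sketch, not tied to any particular shader): float2/float3/float4 and float4x4 are first-class types with componentwise operators and swizzling, which is the kind of thing the poster is wishing plain C had.

      // Illustrative sketch of Cg's native vector types and swizzling.
      float4 blend(float4 a, float4 b, float t)
      {
          float4 c = lerp(a, b, t);  // componentwise linear interpolation
          c.xyz = c.zyx;             // swizzle assignment: reverse the first three components
          return c;
      }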
    • Cg isn't C. It's not an extension to C, where you would call a Cg function from a C function and compile it all into one program. Cg compiles to its own bytecode, which is loaded into the graphics processor at runtime. I don't know if extending GCC would work/be a good idea. However, an open source compiler from NVidia that can work on multiple platforms (not just DX9!) would be a good idea, especially if they're looking for it to be accepted by the community as a standard.
      • Cg itself is not open source as far as I know, but NVIDIA supports people writing different backends for Cg. So if the Mesa folks wanted to write a backend that compiles Cg directly into Intel (vector instruction) machine code, they could. However, the right way to do it would be for Mesa to implement support for the ARB_vertex_program and ARB_fragment_program extensions, writing a compiler that translates the "OpenGL FP and VP assembly" into Intel assembly.
    • The current development version of GCC (3.3) *does* have support for automatic vectorization (i.e. using SIMD instructions where appropriate). I'm not sure whether you need to help it out by flagging decls with GCC-specific attributes, but it's definitely there.

      As others have said, Cg is not an extension of C, and GCC will never and should never support it.
    • There is support for vector extensions in newer releases of GCC. It supports MMX, SSE, AltiVec and so on. If the vendors could supply specs for their hardware, surely there could be support for GPUs as well. The only problem would be the large latency overhead (it could be hard to make good use of the GPU's pipeline).
    • If you're doing C++, you can easily just use classes that embed SSE instructions. Thanks to the magic of operator overloading you'll end up with a class that performs at the level of ASM, but looks like a built-in data type to the user. Intel's intrinsics library, in fact, does exactly this.
  • good book (Score:5, Informative)

    by Horny Smurf ( 590916 ) on Thursday April 24, 2003 @11:28AM (#5800265) Journal
    I got a copy last month, and I've only read a few chapters and skimmed some others, but it looks like a good book.


    Don't let the title foo you -- it contains high level descriptions of the algorithms as well as the mathematical concepts. They cover some advanced realtime techniques that older books don't (since the processing power wasn't there even 4 years ago), but also discuss optimizing for low-end systems.


    I do recommend this book if you have any interest in graphics programming (whether you use Cg or not). If you use it with Computer Graphics (3rd edition), you should have access to pretty much all graphics algorithms. (at least until TAOP volume 7: Computer Graphics is written :)

    • But what a lot of us need before we get heavy into Cg and the workings of the card is a knowledge of the more basic 3D programming principles. Does anyone have some basic, well-written code snippets for GL or something else Linux-friendly?

      Personally, I've found DirectX nice to work with, but it's MS and somewhat restricted to the OS. Why not a good GL wrapper for Cg, does one exist? How about some good GL samples, period? Can anyone help here?
      • by magic ( 19621 ) on Thursday April 24, 2003 @12:12PM (#5800726) Homepage
        Why not a good GL wrapper for Cg, does one exist? How about some good GL samples, period? Can anyone help here?


        I released 75,000 lines of C++ code for supporting OpenGL game development on Windows and Linux as the G3D [graphics3d.com] library. It is under the BSD license. The next release includes support for shaders compiled by Cg; you can grab it from the SourceForge CVS site.


        G3D includes some small OpenGL demos (~200 lines), wrappers for the nasty parts of OpenGL, and wrappers for objects like textures and vertex shaders.


        -m

      • There's also NeHe's site [gamedev.net], which is pretty nice. It tends towards Windows, but a lot of things have been rewritten in SDL and should therefore work in Linux.
      • Check out "Real-Time Rendering" by Tomas Akenine-Moller and Eric Haines. It's extremely math-intensive, but that's an inherent fact of 3D graphics in general anyway, and if you know the math behind it well, you can write a much better engine. It also only assumes you know the very basics of matrix and vector math, which you can learn online in an afternoon if you don't already know it. From there it goes into extreme detail on damn near everything relating to realtime rendering (as the title indicates :)
        • I have to second this recommendation. I bought the first edition of this book two years ago when I started learning about graphics in-depth, and it's been the most useful resource I've had. The second edition is even better, and has a great chapter on shading languages.

          (btw, if you want a REALLY math-intensive book, check out "3D Game Engine Design" by David Eberly. That was the first game/graphics programming book I purchased, and it almost scared me away entirely)
    • by spakka ( 606417 )

      Don't let the title foo you

      I think you mean 'don't let the title bar you'
  • Cg and OpenGL (Score:3, Informative)

    by dmouritsendk ( 321667 ) on Thursday April 24, 2003 @11:53AM (#5800493)
    First, I don't understand why some people think it's a bad idea that nvidia is doing Cg. One of the goals of Cg is easier cross-platform development... that's a noble cause if any =D

    Secondly, my bet is that OpenGL's shading language and Cg will eventually merge.

    check: this [opengl.org]

    and notice this part:

    Compatibility with DX is very important, but they're willing to weigh advantages of changes vs. cost. Cg and the proposed "OpenGL 2.0" language are similar but not identical; both look like C, and both target multiple underlying execution units. Interfaces, communication between nodes, and accessing state across nodes are different. It's very late, but not too late to contemplate merging the two languages.


    • There is also an OpenGL back-end for Cg, so you can compile Cg programs to run on the OpenGL ARB program extensions or the NVIDIA extensions.

      -m
  • V for Vengeance

    C for Craphics? Well, at least that's how I first read it.

  • The world needs another proprietary language like a hole in the head. Cg was created to reinforce NVIDIA's (then) lock on high-end consumer 3D and to abstract away the wildly differing capabilities of underlying hardware (thus encouraging developers to support the top-of-the-line, high-profit-margin chips without shutting out the huge installed base of older chips).

    The first of these is in nobody's interest except NVIDIA. The second is a noble ideal, but very hard to pull off; the range of capability is ju
    • You've got no clue.

      >thus encouraging developers to support the top-
      >of-the-line, high-profit-margin chips without
      >shutting out the huge installed base of older
      >chips

      Nobody makes money with the top-of-the-line, because there is no high-profit-margin.

      Well, the fact that Microsoft copied the Cg syntax verbatim into the DirectX 9.0 specification shows that Cg is not that proprietary, and that there is already another vendor offering an alternative implementation of the language.
      • It's not verbatim. It's close, because it's attacking the same basic problem, but it's not verbatim. Microsoft's HLSL is certainly not an "alternative implementation" of Cg.

        I suspect that MS HLSL, Cg and glslang are all roughly equivalent. Given that, why on earth would anyone pick the single-vendor solution?
  • by menasius ( 202515 ) on Thursday April 24, 2003 @01:50PM (#5801760)
    Sure, Cg was developed and supported by NVIDIA, but it works on a higher level than that. It compiles programs down to either the DX shader language or the OpenGL ARB standards. The only vendor-specific part is support for older hardware (the NV_vertex_program extension and the like), but nothing is holding someone back from creating a profile to support ATI's proprietary extensions.

    It is another layer, and a nice one to boot. There is no performance loss running it on ATI's cards; in fact, the few demos I have written have run better on my friend's Radeon than on my GeForce3 by a long shot.

    Quit trying to demonize nVidia for bringing some peace to the hectic world of writing shaders nine thousand different ways so some guy with an obscure video card doesn't complain.

    -bort
  • Forgive my ignorance, but is Cg made for NVIDIA only? Or is it even optimized for NVIDIA chips?
    If it is one of the above, I think that this is another gimmick for NVIDIA to get a greater market share.
    • Nope, it's optimized for DX and OpenGL's shader interfaces, so the GPU optimization happens at the normal display driver level. It runs fine, if not better, on ATI's cards than on nVidia's.

      All it consists of is a language which sits atop multiple interfaces to shaders.

      -bort
    • by Dr. Sp0ng ( 24354 ) <mspong.gmail@com> on Thursday April 24, 2003 @07:48PM (#5805078) Homepage
      Forgive my ignorance, but is Cg made for NVIDIA only? Or is it even optimized for NVIDIA chips?

      It's not optimized for anything. When you compile a Cg program (either offline or at runtime - the Cg compiler is very fast!) you specify a "profile" for it to use. Some of the currently supported profiles are arbvp1 (which outputs code for OpenGL's ARB_vertex_program extension), vs_1_1 (DirectX 8 vertex shader), vp20 (NVIDIA's NV_vertex_program OpenGL extension), vs_2_0/vs_2_x (DirectX 9 vertex shader), vp30 (NVIDIA's NV_vertex_program2 OpenGL extension), ps_1_1/ps_1_2/ps_1_3 (DirectX 8 pixel shader), fp20 (OpenGL NV_texture_shader and NV_register_combiners extensions), arbfp1 (OpenGL's ARB_fragment_program extension - vendor-independent for older cards), ps_2_0/ps_2_x (DirectX 9 pixel shader, vendor-independent for 4th generation GPUs), and fp30 (NV_fragment_program OpenGL extension, for 4th-gen NVIDIA GPUs).

      So these profiles are optimized for their target platforms, and yes, currently NVIDIA chips are better supported. However, vendors can write profiles for their chips without NVIDIA's support, so for example, ATI could write a profile for the Radeon 9800 and it would work fine. However, ATI has already written support for DX9 shaders, so the vs_2_x/ps_2_x targets would work fine for that (or vs_1_x/ps_1_x for the 8500 generation).

      Don't listen to the Slashbots here - I am a professional game developer, and Cg is a godsend (and I'm even developing mostly using ATI cards). Since runtime compilation is so fast, Cg programs can be compiled when the game is played and the exact hardware being used is known. I don't imagine I have to go into more detail as to why that's a fantastic thing.
  • I remember back in the days when some of the more elite among us would do some very elegant demonstrations of the graphical capabilities of the tiny computer systems at the time.

    Where have they gone?

    I'd like to see what can be done with today's hardware when it's really pushed to the limits, along with the same style of creativity that these guys had.

    Even just some individual demonstrations of what is possible by doing some clever hacking in the Cg context would be awesome.

    I miss the Amiga. =(
    • by Anonymous Coward
      scene.org

      That'll take you in. It's a little difficult to google "demo."
    • Well, nVidia have put out some interesting stuff:
      http://www.nvidia.com/view.asp?IO=demo_dawn
      http://www.nvidia.com/view.asp?PAGE=po3d_downloads
    • Modern PC graphics hardware has gotten to the point where there is much less emphasis on clever hackery, and more on just generating good source art. In the old demos, the general-purpose CPUs with simple bit-mapped video cards weren't powerful enough to do a lot of 3D effects, but clever programming could 'fake it out' in various ways, usually by taking advantage of some special case (such as the "2.5D" games like Wolfenstein3D). Nowadays, we have specialized 3D hardware pipelines that solve the general/corr
  • by TerryAtWork ( 598364 ) <research@aceretail.com> on Thursday April 24, 2003 @03:09PM (#5802632)
    And that's what Carmack thinks of it.

    If he writes Quake 4 in Cg - it's in.

  • Be wary... (Score:3, Interesting)

    by GarfBond ( 565331 ) on Thursday April 24, 2003 @03:15PM (#5802726)
    Why should you use Cg? At this point, the only benefit one can see is if you're going to be doing cross-platform coding (DX vs. OGL). If you're going to be doing DX-only, you should stick with HLSL. Why?

    Cg was developed, designed, and created by nvidia. While one of their claims is that it can be made to run on any card and is multiplatform, don't let that fool you. Cg is, at its worst, a thinly veiled attempt to convince developers to produce optimal code for nvidia cards at the expense of broad hardware support. ATI has already said that they will not be supporting Cg (in order for it to work best on ATI cards, someone needs to create profiles for it) and will instead be supporting HLSL. I doubt S3/Via or SIS have the resources to commit to 2 different projects, so I bet they're going to go with HLSL.

    If you don't understand why nvidia might be looking for code that works best only on its cards (it's almost a "duh" question), look at it a different way. Look at the GFFX. In almost every instance, it's a failure. Sure, it can stick to 32-bit precision, but it runs really, really slow when it does (just look at the recently released 3DMark03 scores and John Carmack's .plan comments). When it runs at 16-bit precision, it's still damn slow, almost always losing out to the Radeon 9700/9800s, but it's a little more competitive (DX9's minimum spec appears to require 24-bit precision, but rumor says the jury's still out on that). It's in nvidia's best interest to make the FX appear to be fast (which it isn't), and so making Cg produce code that favors nvidia cards is in their best interest.

    Sorry I don't have links, but the beyond3d.com [beyond3d.com] forums have a lot of information on this subject.

    • How much playing with the Cg toolkit have you done? In all but ONE of the profiles, the Cg code compiles down to STANDARDIZED interfaces, e.g. DirectX's HLSL or the ARB extensions to OpenGL. The ONE profile that actually compiles for NVIDIA hardware is close to legacy support, not likely to be useful in games considering the other options.

      I'm sorry to say this but you are flat wrong on this. Cg is syntactically equivalent to HLSL and is not "optimized" for nvidia products. Had anyone else developed this it would
    • Cg was developed, designed, and created by nvidia.

      Together with Microsoft, who then took the result and renamed it HLSL. That should answer the rest of your questions (of course, reading the article would have done that too).
