
OpenGL Shading Language

Martin Ecker writes "A few months ago, the OpenGL Shading Language -- OpenGL's own high-level shading language for programming Graphics Processing Units (GPUs) -- was ratified by the Architecture Review Board (ARB) responsible for the development and extension of the OpenGL graphics API. The first real-world implementations are just becoming available in the latest graphics drivers of the big graphics hardware vendors. Now the first book devoted to this new shading language is available, and it aims to become the standard work on the subject. Randi J. Rost's OpenGL Shading Language (published by Addison-Wesley) is a good introduction to developing shaders with the new OpenGL Shading Language, and it demonstrates a number of useful applications for real-time programmable shaders." Read on for the rest of Ecker's review.
OpenGL Shading Language
author: Randi J. Rost
pages: 608
publisher: Addison-Wesley Publishing
rating: 8/10
reviewer: Martin Ecker
ISBN: 0321197895
summary: A solid introduction to developing shaders in the OpenGL Shading Language.

Because of its orange cover, the book is also called the "Orange Book," and together with its siblings, the classic "Red Book" (aka the OpenGL Programming Guide) and the "Blue Book" (aka the OpenGL Reference Manual, reviewed here earlier), it is a member of the OpenGL family of books from Addison-Wesley. Although it contains a short overview of the basic features of OpenGL, it is intended for an audience that is already somewhat familiar with OpenGL and with 3D graphics programming in general. The interested reader should probably have read the "Red Book," or at least have a good understanding of how to use the OpenGL graphics API, before attempting to tackle this book.

Rost and the co-authors of some chapters, John M. Kessenich and Barthold Lichtenbelt, are all employees of the graphics hardware vendor 3Dlabs and were driving forces behind the inception of the OpenGL Shading Language. They are also core contributors to the final language specification and to the OpenGL extensions that provide the framework for the new shading language. The information in the book thus comes straight from the people who created the language, which is a definite plus.

The book consists of 17 chapters and two appendices, which can be roughly grouped into four major parts: an introduction to the basics of OpenGL and GPU programmability; a description of the OpenGL Shading Language and the associated OpenGL extensions; a number of chapters that show the shading language in action; and finally a reference section on the language grammar and the entry points introduced by the new OpenGL extensions. Each chapter offers numerous interesting references for further information on the presented topics, and I can only recommend taking a closer look at some of them.

The first two chapters of the book describe the basics of the OpenGL graphics API, followed by an overview of the new programmable processors in the graphics pipeline and of the shading language used to program them. The introductory chapter on OpenGL basics is very well written and worth reading even for more experienced OpenGL programmers. However, as mentioned above, the reader should have enough expertise in using OpenGL to be able to understand the more advanced parts of the book; the introductory chapter alone won't be enough, in my opinion.

The third chapter, written by John Kessenich -- one of the main authors of the OpenGL Shading Language specification -- presents the language definition. This is where the basic data types and the available control structures are described in detail. For people interested in writing a compiler for the OpenGL Shading Language, Appendix A also contains the entire language grammar in BNF.
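
To give a rough idea of the territory this chapter covers, here is a small fragment shader of my own (an illustrative sketch only, not code from the book) that uses the built-in vector types, uniform arrays, and the C-like control structures the language defines:

    // Illustrative sketch only -- not code from the book.
    uniform vec4  layerColors[4];    // arrays of the built-in vector types
    uniform float layerWeights[4];

    void main()
    {
        vec4 color = vec4(0.0);

        // Familiar C-like control structures: for, if, while, ...
        for (int i = 0; i < 4; ++i)
        {
            if (layerWeights[i] > 0.0)
                color += layerWeights[i] * layerColors[i];
        }

        gl_FragColor = clamp(color, 0.0, 1.0);
    }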

Chapter four moves on to describe in more detail the programmable graphics pipeline that was first introduced in the second chapter. The programmable vertex and fragment processors and their interaction with OpenGL's fixed functionality are presented. In chapter five, the description of the shading language concludes with the available built-in functions. Chapter six offers the first simple example that shows the shading language in action -- a shader that procedurally generates a brick texture.
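
To give a flavor of what such a procedural shader looks like, here is a heavily simplified brick fragment shader of my own (a sketch of the general idea, not the shader developed in chapter six):

    // A much-simplified procedural brick fragment shader (my own sketch,
    // not the shader developed in chapter six; lighting is omitted).
    uniform vec3 BrickColor;     // e.g. (1.0, 0.3, 0.2)
    uniform vec3 MortarColor;    // e.g. (0.85, 0.86, 0.84)
    uniform vec2 BrickSize;      // width/height of one brick in model units
    uniform vec2 BrickPct;       // fraction of each cell occupied by brick

    varying vec2 MCposition;     // model-space position from the vertex shader

    void main()
    {
        vec2 position = MCposition / BrickSize;

        // Offset every other row by half a brick.
        if (fract(position.y * 0.5) > 0.5)
            position.x += 0.5;

        position = fract(position);

        // Inside the brick fraction of the cell -> brick, otherwise mortar.
        vec2 useBrick = step(position, BrickPct);
        vec3 color = mix(MortarColor, BrickColor, useBrick.x * useBrick.y);

        gl_FragColor = vec4(color, 1.0);
    }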

Up to this point, the book doesn't say much about how to integrate shaders into the host program running on the CPU. New OpenGL extensions were created for this purpose, in particular GL_ARB_shader_objects, GL_ARB_vertex_shader, and GL_ARB_fragment_shader. Chapter seven contains detailed descriptions of the entry points provided by these new extensions. Among other things, it describes how shader objects are created, compiled, and then linked into shader programs that can be used to render objects. Appendix B also has a reference section on the new entry points, similar in style to the "Blue Book." Chapter seven concludes the dry, technical part of the book that introduces both the shading language and the necessary infrastructure to use it from a host program running on the CPU.
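
As a rough host-side sketch of how these entry points fit together (my own outline in C, not code from the book; fetching the function pointers through the extension mechanism and most error handling are glossed over):

    /* Sketch of building a program from one vertex and one fragment shader
     * using the GL_ARB_shader_objects entry points. Illustrative only. */
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    GLhandleARB buildProgram(const char *vertSrc, const char *fragSrc)
    {
        GLhandleARB vs, fs, prog;
        GLint linked = 0;

        vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
        fs = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);

        glShaderSourceARB(vs, 1, &vertSrc, NULL);
        glShaderSourceARB(fs, 1, &fragSrc, NULL);
        glCompileShaderARB(vs);
        glCompileShaderARB(fs);

        /* Link both shader objects into a single program object. */
        prog = glCreateProgramObjectARB();
        glAttachObjectARB(prog, vs);
        glAttachObjectARB(prog, fs);
        glLinkProgramARB(prog);

        glGetObjectParameterivARB(prog, GL_OBJECT_LINK_STATUS_ARB, &linked);
        if (!linked)
            fprintf(stderr, "shader program failed to link\n");

        /* Activate with glUseProgramObjectARB(prog) before rendering. */
        return prog;
    }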

The remaining chapters delve into the really interesting topic: shader development. These chapters offer plenty of ideas on what can be done with shaders and how to use them effectively in graphics programming. Standard techniques, such as bump mapping, procedural texture generation, and the use of noise, are presented. Chapter nine deserves special mention because it presents shaders that mimic the behavior of the OpenGL fixed-function pipeline. Many developers new to shader programming are faced with re-implementing features offered by OpenGL's fixed functionality, and this chapter addresses exactly that need.
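
As a hint of what re-implementing fixed functionality involves, here is a minimal vertex shader of my own (an illustrative sketch, not one of the chapter's shaders) that reproduces just the standard transform and a single diffuse directional light, using the built-in state variables GLSL exposes for this purpose:

    // Minimal sketch: mimic a slice of the fixed-function pipeline, i.e.
    // the standard vertex transform plus one diffuse directional light.
    // (My own illustration, not a shader from chapter nine.)
    void main()
    {
        vec3 normal   = normalize(gl_NormalMatrix * gl_Normal);
        vec3 lightDir = normalize(vec3(gl_LightSource[0].position));

        float NdotL = max(dot(normal, lightDir), 0.0);

        gl_FrontColor = gl_FrontLightModelProduct.sceneColor
                      + gl_FrontLightProduct[0].ambient
                      + gl_FrontLightProduct[0].diffuse * NdotL;

        // ftransform() guarantees the same result as the fixed-function
        // transform, which matters for multi-pass rendering.
        gl_Position = ftransform();
    }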

Chapter fourteen also deserves mention. Shaders that procedurally create textures usually suffer from aliasing artifacts. This chapter shows a number of anti-aliasing techniques to diminish these artifacts. In my opinion, this important topic has not received the attention it deserves -- it's good to see such a chapter in this book.
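
One common trick in this area (sketched below in my own code, not taken from chapter fourteen) is to replace hard steps in a procedural pattern with smoothstep transitions whose width comes from the derivative functions:

    // Antialiasing a procedural stripe pattern by widening the transition
    // to roughly one pixel, estimated with fwidth(). Illustrative sketch of
    // the general technique, not code from the book.
    uniform vec3  StripeColor;
    uniform vec3  BackColor;
    uniform float StripeFreq;    // stripes per unit of texture coordinate

    varying vec2 texCoord;

    void main()
    {
        float s = fract(texCoord.x * StripeFreq);

        // How much does s change across one pixel? fwidth() estimates it.
        float w = fwidth(s);

        // Instead of a hard step at 0.5, blend over about one pixel.
        float t = smoothstep(0.5 - w, 0.5 + w, s);

        gl_FragColor = vec4(mix(StripeColor, BackColor, t), 1.0);
    }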

Closing this section of the book, chapters fifteen and sixteen describe some interesting non-photorealistic shaders and shaders for image processing. (For more ideas on what can be done with shaders, I also recommend the book "GPU Gems," which I reviewed some time ago.)

The final chapter of the book (chapter seventeen) is a language comparison with other high-level shading languages such as the RenderMan Shading Language, SGI's Interactive Shading Language from the OpenGL Shader package, Microsoft's HLSL, and NVIDIA's Cg. Although I am quite familiar with most of these languages, I found this chapter to be an interesting read because it attempts to look at the languages objectively, listing advantages and disadvantages of the various approaches.

The book contains many diagrams and images, all in black and white, except for 16 pages containing 30 color plates in the middle of the book. Most of the images are not overly "flashy" but do give a practical idea of the types of rendered images a particular shader can produce.

There is also a website for the book where you can find an errata list and download a sample chapter (chapter six). As mentioned above, this chapter develops a simple brick shader to show the basic features of the shading language. The website also has all the shaders presented in the book available for download. Because the book does not come with a CD-ROM, this is the only way to get the shader code without having to type it in. At the time of this review, the site appeared to be in a transitional state.

Rost's OpenGL Shading Language succeeds at giving a good introduction to shader programming with the OpenGL Shading Language. Not only does it provide the technical instruction readers need to write their own shaders and integrate them with a host program, it also demonstrates a number of practical applications for shaders and encourages exploration of the new dimension of real-time graphics programming that the OpenGL Shading Language opens up. Since there is currently no other book available on this topic, it is hard to say whether the "Orange Book" will stand the test of time and actually become the reference book on the OpenGL Shading Language, but I believe it will.


Ecker has been involved in real-time graphics programming for more than 9 years and works as an arcade game developer. He also works on a graphics-related open source project called XEngine. You can purchase OpenGL Shading Language from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, carefully read the book review guidelines, then visit the submission page.

  • by Samir Gupta ( 623651 ) on Thursday July 15, 2004 @05:23PM (#9711457) Homepage
    I'm no Microsoft apologist, but there's one architectural decision that I foresee being the potential cause of major problems: the fact that each IHV's driver is responsible for the high-level compilation of the shading language, rather than having a common runtime do it, as with DirectX HLSL, where the shader is compiled into an intermediate binary token format, which is then passed on to the driver and turned into a vendor-specific, optimized binary format.

    The core competencies of graphics IHVs are generally not in compiler writing -- writing a good optimizing compiler is still a "black art," significantly more difficult than writing a hardware driver -- and it will be very annoying if compiler bugs show up in some vendors' drivers and not in others, forcing developers to work around them, or if different compilers optimize things differently.

    At least with DirectX, there's guaranteed to exist one common compiler that's written by a company with years of experience in optimizing compilers.

    Of course, the philosophy of OpenGL is counter to DirectX in that there's no one Big Company controlling it all, but at the very least there needs to be some standard token bytecode defined and standardized by the ARB, plus a reference compiler design and a compliance suite to verify compiler correctness and language compliance.
  • Graphics? (Score:2, Insightful)

    by Anonymous Coward on Thursday July 15, 2004 @05:28PM (#9711509)
    The most important design issue... is the fact that Linux is supposed to be fun... -- Linus Torvalds at the First Dutch International Symposium on Linux
  • by PixelSlut ( 620954 ) on Thursday July 15, 2004 @05:34PM (#9711567)
    In a way, I'd say it's more advanced than DirectX 9.0, at least in terms of shaders. GLSL allows you to design things to be more modular: you can write multiple shaders (multiple vertex shaders, multiple fragment shaders) and then link them together into a single program object. This is a very good step in the right direction, I think. Also, HLSL specifically compiles its code into a DirectX-specified shading assembly. Microsoft controls those assembly targets, of course, so there's less room for innovation in compiler optimizations on the part of the hardware vendors. I think the OpenGL approach (define the language, not the underlying assemblies) is a better one. Sadly, I think the hardware vendors will be locked into doing things the Microsoft way, and this will also affect how the GLSL compilers work internally.
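
    To sketch this modularity concretely (a purely hypothetical GLSL example): two separately compiled fragment shader units can be attached to the same program object, with one unit calling a function that the other defines and the link step resolving the call.

    // --- lighting_lib.frag: compiled as its own shader object ---
    // Defines a helper that other fragment shader units can call.
    float luminance(vec3 color)
    {
        return dot(color, vec3(0.299, 0.587, 0.114));
    }

    // --- main.frag: a second shader object linked into the same program ---
    // Only declares the helper; the link step resolves it.
    float luminance(vec3 color);

    uniform vec3 baseColor;

    void main()
    {
        gl_FragColor = vec4(vec3(luminance(baseColor)), 1.0);
    }
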
  • I disagree (Score:3, Insightful)

    by woodhouse ( 625329 ) on Thursday July 15, 2004 @06:00PM (#9711777) Homepage
    I'm guessing whoever modded this down doesn't know what they're talking about.

    But for what it's worth, the parent raises some interesting points, although I disagree that a bytecode/pseudo asm route is a better method.

    In reality, the ASM-like bytecode used probably bears little resemblance to the machine-code instruction sets actually used by the various cards. Forcing vendors to use such a low-level instruction set allows very little room for optimisation, whereas allowing vendors to write their own compilers gives them much more scope to optimise the generated machine code for their hardware's instruction sets and limitations.

    There may be a few teething problems at first (witness the arguments about nvidia's GLSL compliance on OpenGL.org), but I'm convinced it's a better method overall.
  • by IrresponsibleUseOfFr ( 779706 ) on Thursday July 15, 2004 @06:25PM (#9711957) Homepage Journal

    Here is a brief overview of shading, in case you're not familiar with it.

    Shading is the process by which a renderer assigns color values to pixels on the screen. There are currently three popular rendering methods: rasterization, REYES, and ray-tracing. Rasterization simply projects planar polygons onto a 2D plane and discretizes them into pixels. REYES is slightly more complicated in that it takes mathematically defined patches and slices and dices them into micro-polygons, which are then rasterized. Ray-tracing point-samples the scene by tracing rays through it.

    Rasterization is the method of rendering implemented on your commodity graphics accelerator, and it is commonly considered the most adept at handling real-time graphics. In the past, the kind of shading you could perform in real time was rather limited. People took a dial-and-switch approach to shading: users would tell their graphics API (like OpenGL) which features they wanted to use and pass in parameters, and that was basically it. Unfortunately, the more bells and whistles you add to the rendering pipeline, the more unwieldy your graphics API becomes, because users end up dealing with the most complex shading configuration possible, no matter how simple the task.

    Hence, the programmable pipeline was born. Instead of adjusting dials and switches, you write little procedures telling the renderer exactly what you want it to do at a certain stage of the rendering pipeline. There is some history behind this approach, coming from the REYES renderer: RenderMan is the shading language that tells Pixar's REYES renderer what to do, it is popular, and it is pretty much standard in the graphics industry. People have even implemented ray-tracers capable of using the same shaders that were written for the REYES renderer. But all in all this is pretty inefficient, because shaders are intimately tied to the rendering method you are using. Therefore, a new shading language needed to be developed for real-time rasterization.

    All of these shading languages look like RenderMan, which in turn looks like C; the real-time shading languages differ in their specifics. With rasterization there are two kinds of procedures you can write: a vertex shader, which lets you manipulate the points of your planar polygons in some fashion, and a fragment shader, which is invoked after discretization to actually color the pixels (a minimal sketch of such a pair follows at the end of this comment). Cg and HLSL are essentially the same shading language; the OpenGL Shading Language (GLslang) is an alternative. Probably the biggest difference at this point is that HLSL/Cg dictate a particular instruction set architecture on the hardware, while GLslang doesn't -- but GLslang requires that your graphics driver contain a compiler that translates the shading language into something the graphics card can use.

    All in all, I think the money is on HLSL/Cg to win. It has been out considerably longer, and I think it has already picked up developer mind-share. I also think it makes things considerably easier for graphics driver writers, although it might be more limiting. However, we probably won't feel the pain of those limits in the next 5 years, and by then the battle will be over.

    As for those who mention RenderMonkey: RenderMonkey isn't a shading language, it is a suite of tools that help you produce shaders. It uses HLSL/Cg or a general graphics assembly language underneath, and it is pretty independent of the whole shading-language war.

    That said, this book might be worth picking up independent of the actual language it discusses, simply because it goes over important issues you face when writing these shaders.
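
    A minimal sketch of such a vertex/fragment shader pair in GLSL (illustrative only; it just transforms each vertex and passes an interpolated color through):

    // Vertex shader: transform the vertex and hand a color to the rasterizer.
    varying vec4 color;

    void main()
    {
        color = gl_Color;    // per-vertex color supplied by the application
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader: runs once per fragment after rasterization.
    varying vec4 color;

    void main()
    {
        gl_FragColor = color;    // write the interpolated color
    }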

  • by lcracker ( 10398 ) on Thursday July 15, 2004 @06:58PM (#9712176) Homepage
    The point is that the DX shader assembly code has *lost meaning* in the translation from whatever high level language it came from. There are many possible optimizations that could have been made with a cheap analysis of the high level language that would be very difficult to make from looking at the low level language.
  • by Pseudonym ( 62607 ) on Thursday July 15, 2004 @07:44PM (#9712452)

    Exactly.

    I haven't worked with GLSL et al., though I have done a lot of work with RenderMan shading language compilers, including writing two of them from scratch. I definitely agree that these shading languages are much easier to compile than C or C++. (The lack of user-defined types and recursion helps a LOT --- no fancy stack frames.) Similarly, the target platforms (usually some kind of virtual machine in the case of RenderMan) are invariably quite simple.

    However, the problem is that even though the target platforms are simple, they're all extremely different. Most of them are SIMD architectures, but some are not. Some use stacks and others use register transfer. One or two (most notably Pixar's PRMan) execute annotated abstract syntax trees (called "shade trees") directly. Many compile to C++, but even then, they often have very different capabilities and limitations.

    Take, for example, the following piece of code, where all of the variables are "varying" (that is, they vary over the grid being shaded):

    w = x + y + z;

    If the target machine supports SIMD addition, then it pays to use it. So we might use a temporary, and generate code like this, scheduling the two additions separately:

    varying float t = x + y;
    w = t + z;

    If, however, you're compiling to C/C++, then you're better off scheduling the additions together:

    for (all grid elements g)
    {
    g->w = g->x + g->y + g->z;
    }

    This increases the effective size of each basic block and hence gives the C/C++ compiler a better chance of performing low-level optimisations.

    And this is just showing the difference between two software VMs. I can only imagine what the differences in capabilities are between two generations of NVIDIA hardware, let alone between NVIDIA and ATI. Leaving optimisation up to the IHV was definitely the right thing to do.

  • by Anonymous Coward on Friday July 16, 2004 @07:38AM (#9714882)
    I highly doubt that Doom 3 uses these new extensions. The ARB vertex/fragment program extensions are NOT the high-level shading language extensions, which are surely too new to be used in a game due in 2 weeks.

