
GPU Gems 3

Posted by samzenpus
from the read-all-about-it dept.
Martin Ecker writes "Weighing in at fifty pages short of a thousand, NVIDIA has recently released the third installment of its GPU Gems series, aptly titled GPU Gems 3 and published by Addison-Wesley. Like its two predecessors, GPU Gems 3 is a collection of articles by numerous authors from the game development industry, the offline rendering industry, academia, and of course NVIDIA. The book's 41 chapters, grouped into six parts, cover a wide range of topics, all dealing with recent advances in using graphics processing units (GPUs) either to render highly realistic images in real time or to do high-performance parallel computation, an area known as GPGPU (short for General-Purpose computation on GPUs). In this latest installment, many of the chapters focus on using new features of Direct3D 10-level hardware, such as NVIDIA's GeForce 8 series, to achieve either more realistic-looking results or higher performance." Read on for the rest of Martin's review.
GPU Gems 3
author Hubert Nguyen (Editor)
pages 942
publisher Addison-Wesley Publishing
rating 9/10
reviewer Martin Ecker
ISBN 0-321-51526-9
summary In-depth discussions of bleeding-edge techniques, tips, and tricks in real-time graphics and GPGPU.
The book is aimed at intermediate and advanced graphics programmers who have a solid background in computer graphics algorithms. The reader is also expected to be familiar with commonly used real-time shading languages, in particular HLSL, which is used in most of the chapters. Familiarity with graphics APIs such as Direct3D and OpenGL is likewise required to get the most out of this book.

The first part of the book is about geometry, with the first chapter diving right into generating complex procedural terrains on the GPU. This chapter explains the techniques behind a recent NVIDIA demo that shows impressive, fully 3-dimensional, procedurally generated terrain built by layering multiple octaves of 3-dimensional noise. An interesting contribution of this chapter is how the authors texture the terrain while avoiding the typical, ugly texture stretching that previous techniques exhibit. This is followed by a chapter on rendering a large number of animated characters using new Direct3D 10 features, in particular the powerful geometry instancing that is now available. The author suggests doing palette skinning by storing bone matrices in animation textures instead of the traditional approach of storing them in shader constant registers. The next chapter is in a similar vein but uses blend shapes, a.k.a. morph targets, instead of skinning to animate characters; the main focus is again on how to use Direct3D 10 features to accelerate blend shapes on the GPU. Other chapters in this part of the book cover rendering and animating trees, visualizing metaballs (also useful for rendering fluids), and adaptive mesh refinement in a vertex shader.
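To give a flavor of the terrain chapter's core idea, here is a minimal, CPU-side Python sketch of layering octaves of 3D noise onto a base gradient to form a density field (positive values mean rock, negative mean air). The hash-based value noise and the octave parameters are my own illustrative stand-ins, not the chapter's GPU Perlin noise:

```python
import math

def value_noise_3d(x, y, z, seed=0):
    """Cheap hash-based 3D value noise in [0, 1) -- a stand-in for the
    GPU Perlin noise used in the actual demo."""
    def hash3(ix, iy, iz):
        h = (ix * 374761393 + iy * 668265263 + iz * 2147483647 + seed) & 0xFFFFFFFF
        h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
        return ((h ^ (h >> 16)) % 10000) / 10000.0
    ix, iy, iz = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - ix, y - iy, z - iz
    def fade(t): return t * t * (3 - 2 * t)  # smoothstep fade
    fx, fy, fz = fade(fx), fade(fy), fade(fz)
    def lerp(a, b, t): return a + (b - a) * t
    # trilinear interpolation of the eight lattice corner values
    c = [[[hash3(ix + i, iy + j, iz + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    return lerp(lerp(lerp(c[0][0][0], c[0][0][1], fz),
                     lerp(c[0][1][0], c[0][1][1], fz), fy),
                lerp(lerp(c[1][0][0], c[1][0][1], fz),
                     lerp(c[1][1][0], c[1][1][1], fz), fy), fx)

def terrain_density(x, y, z, octaves=4):
    """Density field: positive inside rock, negative in air.
    Base gradient plus layered noise octaves, as in the chapter."""
    density = -y  # flat ground plane at y = 0 before noise is added
    amplitude, frequency = 1.0, 1.0
    for _ in range(octaves):
        density += amplitude * (value_noise_3d(x * frequency,
                                               y * frequency,
                                               z * frequency) - 0.5)
        amplitude *= 0.5   # each octave contributes half as much...
        frequency *= 2.0   # ...at twice the spatial frequency
    return density
```

In the demo, the marching-cubes extraction of this density field is what runs on the GPU; the sketch above only shows how the field itself is composed.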

Part two of the book deals with light and shadows. For me personally, this is one of the most exciting parts of the book, with very practical techniques that we will likely see applied in video games fairly soon. The first chapter is on summed-area variance shadow maps, an extension of the popular variance shadow maps algorithm that produces nice soft shadows without aliasing artifacts. The next chapter is on GPU-based relighting, which is mostly useful for fast previewing in offline rendering. Then we move on to a nice chapter on parallel-split shadow maps, a way of shadowing large, dynamic environments by splitting the view frustum into several parts and giving each its own shadow map. Other chapters in this part of the book cover improved shadow volumes, high-quality ambient occlusion (an improvement on a technique previously presented in GPU Gems 2), and volumetric light scattering.
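The variance shadow map family that the first chapter builds on boils down to a Chebyshev upper bound evaluated per pixel: the shadow map stores the mean and mean-square of occluder depth, and visibility is bounded from those two moments. A small Python sketch of that bound; the function name and the variance clamp value are mine, not from the book:

```python
def vsm_visibility(moment1, moment2, receiver_depth, min_variance=1e-4):
    """Chebyshev upper bound used by variance shadow maps.
    moment1/moment2 are E[d] and E[d^2] read from the (filtered)
    shadow map; receiver_depth is the shaded point's light-space depth."""
    if receiver_depth <= moment1:
        return 1.0  # in front of the average occluder: fully lit
    # clamp variance to avoid numerical problems when the moments
    # describe a single planar occluder
    variance = max(moment2 - moment1 * moment1, min_variance)
    d = receiver_depth - moment1
    # P(occluder_depth >= receiver_depth) <= variance / (variance + d^2)
    return variance / (variance + d * d)
```

Because the bound is computed from filterable moments, the shadow map can be blurred, mipmapped, or (as in the summed-area extension) prefiltered over arbitrary rectangles, which is where the soft, alias-free look comes from.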

The third part of the book is on rendering techniques, and it starts with a very interesting chapter on rendering realistic skin in real time. At more than fifty pages, this chapter is one of the longest in the book, but it definitely deserves the space. I have never seen such realistic-looking skin rendered in real time before. The result is truly astonishing, and the authors go into detail on the various techniques and tricks employed to achieve it. Simply put, they take a diffuse map and apply multiple Gaussian blurs of varying kernel sizes to it. These blurred images are then linearly combined with certain weights to approximate a so-called diffusion profile, which models subsurface scattering. Of course, the devil is in the details, and the technique is a bit more involved than what I've described here. Other chapters in this part of the book cover capturing animated facial textures and storing them efficiently using principal component analysis (PCA), as used in recent EA Sports games; animating and shading vegetation in the upcoming game Crysis; and a way of doing relief mapping without the artifacts of previous methods.
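The sum-of-Gaussians idea can be sketched in a few lines. This Python toy operates on a 1D signal instead of a 2D irradiance texture, and the sigmas and weights are made-up illustrative numbers, not the profile fitted in the chapter:

```python
import math

def gaussian_blur_1d(signal, sigma):
    """Brute-force 1D Gaussian blur with a normalized kernel (on the
    GPU this is a separable two-pass blur over the irradiance texture)."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out

# Illustrative sigmas and weights only -- the chapter fits several
# Gaussians to a measured skin diffusion profile; these are made up.
SIGMAS = [0.5, 1.5, 4.0]
WEIGHTS = [0.5, 0.3, 0.2]

def diffuse_skin(irradiance):
    """Weighted sum of Gaussian blurs approximating the diffusion
    profile, i.e. how light entering at one point exits nearby."""
    blurred = [gaussian_blur_1d(irradiance, s) for s in SIGMAS]
    return [sum(w * b[i] for w, b in zip(WEIGHTS, blurred))
            for i in range(len(irradiance))]
```

Feeding in a single bright point spreads its energy into a soft falloff around it, which is exactly the look that makes the rendered skin read as translucent rather than plastic.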

Part four starts out with a chapter on true impostors, i.e., billboards generated by raytracing through a volumetric object on the GPU. It's fairly interesting, but I doubt we'll see it in video games anytime soon because the cost of the technique seems fairly high. Another chapter is on rendering large particle systems to lower-resolution, off-screen buffers and then recombining them with the framebuffer as a post process. This technique allows rendering very fill-rate-intensive particle systems with good performance. Other chapters include an appeal to do your lighting calculations in linear space and to be careful about when and where gamma correction needs to be applied, followed by some chapters on post-processing effects, in particular motion blur and depth of field, and a chapter co-authored by Jim Blinn himself on rendering vector fonts in high quality via pixel shaders.
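The linear-space lighting appeal is easy to demonstrate: filtering, blending, or lighting raw sRGB values gives the wrong (too dark) answer. A small Python sketch using the standard sRGB transfer functions; the helper names are mine:

```python
def srgb_to_linear(c):
    """Inverse sRGB transfer function (decode gamma), c in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """sRGB transfer function (encode gamma), c in [0, 1]."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_texels(srgb_texels):
    """Averaging (filtering, blending, lighting) must happen in linear
    space; doing the math on raw sRGB values darkens the result."""
    linear = [srgb_to_linear(c) for c in srgb_texels]
    return linear_to_srgb(sum(linear) / len(linear))
```

Averaging a black and a white texel this way yields roughly 0.735 in sRGB, visibly brighter than the naive 0.5 you get by averaging the encoded values directly; that gap is the banding- and darkening-type artifact the chapter warns about.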

With part five, dealing with physics simulation on the GPU, we enter GPGPU territory. While a lot of the techniques in this and the following part of the book are highly interesting and innovative, I doubt we'll see them applied much in video games in the next year or two, simply because they use up a lot of GPU processing power and memory that we game developers would rather spend on fancy graphics. The first chapter is on rigid body simulation on the GPU. The author uses spherical particles to represent rigid bodies, which greatly simplifies collision detection even between the most complex shapes. The subsequent chapter is on simulating and rendering volumetric fluids entirely on the GPU. The authors apply fluid simulation to create realistic smoke, fire, and water effects. The presented technique runs a fluid simulator on a voxelized 3D volume stored in 3D textures. Solid objects that interact with the fluid are also voxelized on the fly on the GPU. To render the fluid, a ray-marching algorithm is used. The remaining chapters of this part discuss N-body simulation, broad-phase collision detection, and convex collision detection with Lemke's algorithm for the linear complementarity problem. Many chapters in this part use CUDA, NVIDIA's new language for GPGPU, and the reader is expected to be familiar with it. CUDA is both a runtime system and a C-based language that eliminates the need for in-depth knowledge of a graphics API to implement GPGPU algorithms.
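As a concrete taste of these chapters, here is a serial Python model of one all-pairs N-body integration step with the usual softened inverse-square law (gravitational constant set to 1). On the GPU, each body's force accumulation becomes one thread, with the position data tiled through shared memory; the function name and constants here are illustrative:

```python
def nbody_step(positions, velocities, masses, dt=0.01, softening=0.1):
    """One all-pairs gravitational step over (x, y, z) tuples.
    Semi-implicit Euler: update velocity first, then position."""
    n = len(positions)
    accels = []
    for i in range(n):
        ax = ay = az = 0.0
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            # softening avoids the singularity when bodies nearly touch
            inv_r3 = (dx * dx + dy * dy + dz * dz + softening ** 2) ** -1.5
            ax += masses[j] * dx * inv_r3
            ay += masses[j] * dy * inv_r3
            az += masses[j] * dz * inv_r3
        accels.append((ax, ay, az))
    new_vel = [(vx + ax * dt, vy + ay * dt, vz + az * dt)
               for (vx, vy, vz), (ax, ay, az) in zip(velocities, accels)]
    new_pos = [(x + vx * dt, y + vy * dt, z + vz * dt)
               for (x, y, z), (vx, vy, vz) in zip(positions, new_vel)]
    return new_pos, new_vel
```

The O(n^2) inner loop is exactly the part that maps so well to the GPU: every body's sum is independent, so thousands of them can run in parallel.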

The final part of the book is on GPU computing with chapters that show how to apply the incredible parallel computing power of modern GPUs to classic computation problems that are not directly related to either computer graphics or physics. One chapter demonstrates how to search for virus signatures on the GPU, effectively turning your graphics card into an antivirus scanner. Another chapter shows how to do AES encryption and decryption on the GPU, which is now possible thanks to the new generation of GPUs supporting integer operations in addition to floating-point operations. Other chapters deal with generating random numbers, computing the Gaussian, and using the geometry shader introduced with Direct3D 10 to implement computer vision algorithms on the GPU that previously were not possible with vertex and pixel shaders only, such as histogram building and corner detection.
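The histogram-building idea that the geometry shader enables is essentially a scatter followed by additive blending: each input value is routed to its bin, and each arrival adds one. A serial Python model of that scatter, with my own function name and clamping policy:

```python
def build_histogram(values, num_bins, lo, hi):
    """Scatter-style histogram over the range [lo, hi). On Direct3D 10
    hardware, the geometry shader can emit one point primitive per
    input value, positioned over its output bin; additive blending in
    the render target then counts the hits -- a scatter that plain
    vertex/pixel shaders could not express."""
    bins = [0] * num_bins
    width = (hi - lo) / num_bins
    for v in values:
        idx = int((v - lo) / width)
        idx = min(max(idx, 0), num_bins - 1)  # clamp out-of-range values
        bins[idx] += 1  # the "additive blend": each point adds one
    return bins
```

The interesting part is not the loop itself but that the per-value bin choice happens on the GPU, which is what vertex and pixel shaders alone could not do efficiently before.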

One of the features that distinguishes the GPU Gems series from other graphics books has been kept for GPU Gems 3: the large number of high-quality images and diagrams. All figures in the book are in color, and there are plenty of them. The book also comes with a DVD containing sample source code for most of the techniques discussed. Many of these programs require Direct3D 10 hardware (and, as a consequence, Windows Vista) to run. However, for most of them demo videos are also provided, so you can see what a technique looks like without having the latest hardware or operating system. Furthermore, the book's website offers a visual table of contents and three sample chapters to download in PDF format.

As with the previous two GPU Gems books, most of the chapters in this book are fairly advanced and ahead of their time. A lot of the presented techniques are not yet practical for video games on current-generation GPUs, simply because they use up all the computation power and/or memory those GPUs have to offer. However, many techniques from the previous two books are now in common use, and we can expect the same of many of the techniques discussed in this book. As such, it is required reading for any serious professional working in the real-time computer graphics industry.

Martin has been involved in real-time graphics programming for more than 10 years and works as a professional game developer for High Moon Studios in sunny California.

You can purchase GPU Gems 3 from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Comments:
  • Re:OpenGL please (Score:5, Informative)

    by n dot l (1099033) on Wednesday October 17, 2007 @03:48PM (#21015099)

    Dammit I hate to see all this DirectX10 emphasis. It's games only.
    The book is written for game developers, and none of the topics are exclusive to DX10 - NVIDIA has already released OpenGL extensions that offer the same functionality under OpenGL. The fact that the samples use DX10 is irrelevant because the API isn't the point. Anyone with a working knowledge of both DX and GL can translate code from one to the other fairly easily.

    Right now there is no laptop let alone "consumer" card in the world that can handle even the kind of CAD work a lot of people have to do.

    These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them. A car, airplane, etc. has millions of parts.
    That's like comparing a pickup truck to a freight train. Consumer cards aren't designed to do CAD, they're designed to do games because (surprise!) they're sold to gamers. Workstation cards are made to do CAD. If you want to play the latest games, you get an 8800GTX. If you want to do CAD, or ultra high-poly modeling, or movie-quality animation, you get a Quadro FX. Or a FireGL if you prefer AMD/ATI.

    And now all the graphics cards are focusing on the DirectX and neglecting OpenGL.
    Graphics cards don't focus on either. Graphics cards focus on accelerating the sort of math that's common to all 3D rendering - transforming vertices, rasterizing triangles, and shading fragments (which are roughly analogous to pixels, for those of you that don't speak GL). Graphics drivers focus on DX or GL, and even in the consumer space you'd be stretching if you said that OpenGL is being neglected (see all the OpenGL extensions [opengl.org] that start with NV_ or ATI_ for proof).
  • Re:OpenGL please (Score:5, Informative)

    by s_p_oneil (795792) on Wednesday October 17, 2007 @04:05PM (#21015415) Homepage
    Actually, neither nVidia nor the editor chose DirectX.

    Each chapter is contributed by a different author, and each author decides which API to use. I wrote one of the chapters of GPU Gems 2 (see http://sponeil.net/ [sponeil.net]), and my chapter/demo used OpenGL. When I asked the guys at nVidia if they had a preference, they didn't care. They didn't even care whether I used nVidia's Cg or the standard GLSL. (I started with GLSL but switched to Cg because the GLSL compiler didn't optimize it well enough.)
  • Re:Open Source Gems? (Score:3, Informative)

    by lavid (1020121) on Wednesday October 17, 2007 @05:48PM (#21016883) Homepage
    So, I'm cited in this book for my work on the parallel prefix sum implementation they used. I later went on to rework an MPEG4 encoder for CUDA acceleration. So, to answer your question about using CUDA in these projects: it does offer a speed up, specifically of motion estimation, where most of encoding spends its time. Also, a lot of that speed up comes from exploiting the G80's memory architecture, which I do not believe you can do using GLSL. The problem ends up being that you need a G80, you need NVIDIA's drivers, and you need NVIDIA's compiler.
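The parallel prefix sum mentioned above is typically the work-efficient Blelloch scan. Here is a serial Python model of its two phases; on the GPU, each inner loop runs as one thread per element pair (power-of-two input length assumed for simplicity):

```python
def exclusive_scan(data):
    """Work-efficient (Blelloch) exclusive prefix sum. Serial model of
    the GPU algorithm: each inner-loop pass is one parallel step."""
    n = len(data)
    assert n and (n & (n - 1)) == 0, "power-of-two length for simplicity"
    a = list(data)
    # up-sweep (reduce): build partial sums in place, tree-style
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    # down-sweep: clear the root, then push prefixes back down the tree
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a
```

Scan is a building block for stream compaction, sorting, and the broad-phase collision detection chapter, which is why it keeps turning up in GPGPU work like the MPEG4 encoder described above.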
  • Re:Open Source Gems? (Score:2, Informative)

    by bassman2k (409481) on Wednesday October 17, 2007 @09:22PM (#21019481)
    The latest version of VMD uses CUDA to speed things up by 100 times or more. VMD is released under the UIUC Open Source License.

    http://www.ks.uiuc.edu/Research/vmd/ [uiuc.edu]
  • Re:OpenGL please (Score:4, Informative)

    by n dot l (1099033) on Wednesday October 17, 2007 @11:21PM (#21020509)

    Well I'm pleased to hear that NV and ATI are still working on OpenGL as much as they are DirectX if that's really the case.
    It certainly is. NVIDIA had OpenGL equivalents to the new DX10 features out in the very first release of its DX10 driver. So did ATI (though their first DX10 card came much later than NV's so they had more time to begin with). I don't think either will ever be ignored - and that's a good thing. Competition between the two APIs has yielded a lot of good innovation that's now been adopted into both.

    Are you really telling me that the only difference between a $1500 Quadro and a gamer card is the drivers though? The bad-ass gamer card in my friend's computer chokes and can barely run even the most basic animation of an assembly of maybe 30 parts in CAD.
    No. There's much more to it than that, of course. It all comes down to usage. If you profile a video game and a CAD program you'll see that they stress completely different parts of the card. Workstation cards will have more silicon dedicated to things like the memory controller (CAD sends a lot more data across the bus each frame than a game does), whereas consumer cards put most of their power behind the shader processor (games use long and complex shaders to implement animation, lighting, shadowing, etc - CAD typically just shades everything with simple Phong lighting). There's a lot of other differences as well, though I'd rather not write a 10 page essay on the topic right now :)
  • Re:OpenGL please (Score:3, Informative)

    by n dot l (1099033) on Wednesday October 17, 2007 @11:58PM (#21020787)

    Actually, as a graphics chip developer, I can tell you that Graphics chip development focuses almost exclusively on Direct3D. What Microsoft wants, Microsoft gets. The needs of OpenGL are entirely secondary when it comes to the hardware design.
    Whose chips do you develop? And if the answer's NV or ATI, then maybe you should talk to whoever gets sent out to GDC, because that sure as hell isn't what they're telling us game developers.

    In fact, I can actually think of a few cases where GL had something before DX did: NV_primitive_restart [opengl.org]'s been spec'd since 2002 and MS just brought it into DX with DX10 (could have been a caps bit long before then). Same thing with EXT_depth_bounds_test [opengl.org] (is this even in DX10? - I haven't seen it in the docs yet). I'm pretty sure NV also had a bunch of their depth shadowing stuff available through OpenGL before DX had anything of the sort in the spec. The only case I can think of where DX could do something way before GL could is MRT rendering - and that's just 'cuz pbuffers were allowed to exist for far too long before the introduction of EXT_framebuffer_object [opengl.org].

    And I've always gotten better framerates with large numbers of small draw calls, or when rendering a lot of dynamic content, or even just uploading static data, from OpenGL than I've ever seen in DX, across both ATI and NV's drivers...so it's not like the core paths are being neglected that I can see.

    Dunno. I'd be interested to hear some of the cases where you feel GL's being left in the dust... And do your comments also apply to the workstation hardware?
