GPU Gems – Programming Techniques, Tips, and Tricks for Real-Time Graphics
author | Randima Fernando (Editor) |
pages | 816 |
publisher | Addison-Wesley Publishing |
rating | 9 |
reviewer | Martin Ecker |
ISBN | 0321228324 |
summary | An excellent book containing many "gems" for real-time shader developers. |
The book is intended for an audience already familiar with programmable GPUs and high-level shading languages and is divided into six parts, each concentrating on a particular domain of graphics programming. Each part contains between five and nine chapters, for a total of 42 chapters. Each chapter was written by renowned experts from game companies, tool developers, film studios, or the academic community; about half of the contributors are from NVIDIA's Developer Technology group. The chapters focus on effects and techniques that help developers get the most out of current programmable graphics hardware. With approximately twenty pages per chapter, the contributors are able to describe their effects and techniques in depth, as well as delve into the required mathematics.
All the shaders in the book are written in the high-level shading languages Cg and HLSL. The demo programs on the CD-ROM that accompanies the book use both Direct3D and OpenGL as graphics APIs, depending on the authors' preferences. Even though the shaders are in Cg and HLSL, it should be fairly straightforward for OpenGL programmers who might prefer the recently released OpenGL Shading Language to port them, since the syntax is very similar.
The first part of the book deals with natural effects and contains chapters on rendering realistic water surfaces, water caustics, flames, and grass. Two chapters look behind the scenes of NVIDIA's Dawn demo, which shows a dancing fairy with realistically lit skin. There is also a chapter on the improved version of Perlin noise and its implementation on GPUs, written by Ken Perlin himself.
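To give a flavor of what that chapter covers (the sketch below is mine, not the book's): improved Perlin noise replaces the original fade curve with the quintic 6t^5 - 15t^4 + 10t^3, whose second derivative also vanishes at the cell boundaries, and hashes each lattice point to one of a small set of gradient directions. A minimal CPU-side version in C, assuming only the published algorithm (Perlin's reference implementation uses a specific 256-entry permutation table; any fixed permutation of 0..255 serves for illustration):

    #include <stdio.h>
    #include <math.h>

    static int p[512];

    /* A small deterministic shuffle stands in for Perlin's published
       permutation table; any fixed permutation of 0..255 works here. */
    static void init_perm(void) {
        int perm[256];
        unsigned seed = 1;
        for (int i = 0; i < 256; i++) perm[i] = i;
        for (int i = 255; i > 0; i--) {
            seed = seed * 1664525u + 1013904223u;
            int j = (int)(seed % (unsigned)(i + 1));
            int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (int i = 0; i < 512; i++) p[i] = perm[i & 255];
    }

    /* the improved fade: first AND second derivatives vanish at 0 and 1 */
    static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
    static double lerp(double t, double a, double b) { return a + t * (b - a); }

    /* the hash picks one of 12 gradient directions and dots it with (x,y,z) */
    static double grad(int hash, double x, double y, double z) {
        int h = hash & 15;
        double u = h < 8 ? x : y;
        double v = h < 4 ? y : (h == 12 || h == 14 ? x : z);
        return ((h & 1) == 0 ? u : -u) + ((h & 2) == 0 ? v : -v);
    }

    double noise3(double x, double y, double z) {
        int X = (int)floor(x) & 255, Y = (int)floor(y) & 255, Z = (int)floor(z) & 255;
        x -= floor(x); y -= floor(y); z -= floor(z);
        double u = fade(x), v = fade(y), w = fade(z);
        int A = p[X] + Y, AA = p[A] + Z, AB = p[A + 1] + Z;
        int B = p[X + 1] + Y, BA = p[B] + Z, BB = p[B + 1] + Z;
        return lerp(w,
            lerp(v, lerp(u, grad(p[AA], x, y, z),     grad(p[BA], x - 1, y, z)),
                    lerp(u, grad(p[AB], x, y - 1, z), grad(p[BB], x - 1, y - 1, z))),
            lerp(v, lerp(u, grad(p[AA + 1], x, y, z - 1),     grad(p[BA + 1], x - 1, y, z - 1)),
                    lerp(u, grad(p[AB + 1], x, y - 1, z - 1), grad(p[BB + 1], x - 1, y - 1, z - 1))));
    }

    int main(void) {
        init_perm();
        for (int i = 0; i < 4; i++) {
            double t = i * 1.83;
            printf("noise3(%.2f, 0, 0) = %+.4f\n", t, noise3(t, 0, 0));
        }
        return 0;
    }

A GPU version evaluates the same arithmetic per fragment, typically with the permutation and gradient tables packed into small textures.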
The second part of the book concentrates on lighting and shadows. There are chapters from people at Pixar Animation Studios who describe some of the lighting and shadow techniques used in their computer-generated movie productions, as well as a chapter on managing visibility for per-pixel lighting. In the shadow department, the two predominant ways of rendering shadows in real time, shadow mapping and shadow volumes, are discussed along with possible optimizations and improvements. The chapter by Simon Kozlov on methods to improve perspective shadow maps presents some especially interesting new material on the topic.
The third part of the book covers materials, with chapters on subsurface scattering, ambient occlusion, image-based lighting, and spatial BRDFs and their efficient use in real time. Part four describes various techniques for image processing, which is being used ever more frequently in computer games, mostly in the form of post-processing filters. The chapters in this section deal with various depth-of-field techniques, a number of filtering techniques using shaders, and the real-time glow effect seen in many newer games, most notably Tron 2.0. Not surprisingly, one of the authors of that chapter is John O'Rorke from Monolith Productions, the developer of the game. Contributors from Industrial Light & Magic introduce the OpenEXR file format for storing high-dynamic-range images (see openexr.org).
Part five, titled "Performance and Practicalities," is a collection of chapters that deal more with the software engineering aspects of developing software that uses shaders. In particular, there are chapters on optimizing performance and detecting bottlenecks, using occlusion queries efficiently, integrating shaders into applications and content creation packages (in particular Cinema 4D), and developing shaders with NVIDIA's FX Composer tool. There is also an interesting chapter on converting shaders written in the RenderMan shading language, a language for offline rendering, into real-time shaders; it uses a fur shader from the movie "Stuart Little" to demonstrate the conversion. As GPU processing power continues to grow, more shaders from the offline rendering world will enter the realm of real-time graphics, and it will be useful to reuse existing resources such as RenderMan shaders.
The final part of the book deals with a topic that has recently received a lot of attention from graphics researchers: general-purpose GPU, or GPGPU, programming, i.e., using the GPU for things other than rendering triangles. This part comprises chapters on performing computations on the GPU, in particular fluid dynamics, chapters on volume rendering, and a nice chapter on generating stereograms on the GPU. As a side note, there is a website dedicated to news from the GPGPU community at gpgpu.org.
The book contains many images that show the presented effects in action, as well as plenty of diagrams and illustrations that explain the more complicated techniques in detail. Unlike The Cg Tutorial, Randima Fernando's previously released book (which I have also reviewed on Slashdot), this book and all of its illustrations and images are printed entirely in color. The large number and high quality of the illustrations is probably the book's best feature, making even the more advanced effects easily comprehensible.
The book comes with a CD-ROM that contains sample applications for most of the chapters. Some of these applications include full source code, whereas others, such as NVIDIA's Dawn demo (described in some of the book's chapters), are included as executables only. It must be noted that all applications run exclusively on Windows, even though some of the samples that are available in source code form and use OpenGL could probably be built to run on other operating systems as well. Furthermore, about half of the samples require what Fernando and Kilgard in The Cg Tutorial call a fourth-generation graphics card, specifically an NVIDIA GeForce FX card. Note that most samples that require a GeForce FX will not run on comparable ATI hardware, which comes as no surprise since GPU Gems is predominantly an NVIDIA book. It should be noted, however, that the techniques, effects, and shaders presented in the book's text are generally applicable to programmable GPUs and are equally useful when working with graphics hardware from vendors other than NVIDIA.
This is a great book that every programmer involved in game development or real-time computer graphics should have on the shelf. For the game programmer, it is critical to stay up to date with the latest and greatest effects achievable on modern GPUs in order to remain competitive when creating the gaming experience. For the graphics developer, it is interesting to see how the immense processing power of current graphics hardware can be exploited in graphics applications. This book offers insight into both of these topics and more, and I highly recommend it.
Reader akalgonov contributes a few more thoughts on the book: "The sample programs and demos require shader support, Cg, OpenGL, or the latest version of DirectX to run. On the plus side, the majority of the companion topics include pre-compiled binaries (but not the runtime dynamic link libraries) or an AVI illustrating the subject, in addition to the source code. While the CD contains over 600 MB of examples from the text, it covers only 23 of the 42 topics in the book. Since most of the articles provide only an overview of and references for a topic, additional material on the CD would have been beneficial.
I found the wide range of subjects quite interesting, and it was refreshing that the topics actually seemed "ahead of the curve" in terms of hardware requirements. However, it seems the text could have been split into two volumes in order to expand the existing chapters with sufficient depth. As the material is just enough to get one started, the treatment may disappoint readers seeking to directly apply the clever and unique techniques presented in the book, or those hoping to use it as an opportunity to learn the advanced features of programmable graphics processing units."
Martin Ecker has been involved in real-time graphics programming for more than 9 years, works as a developer of arcade games, and contributes to the open source project XEngine. You can purchase GPU Gems -- Programming Techniques, Tips, and Tricks for Real-Time Graphics from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
gems? (Score:4, Funny)
Re:gems? (Score:4, Funny)
My gf's ex bought her a Diamond video card for their anniversary. I was warned that that little joke was only funny the first time.
Yawn... (Score:2, Interesting)
Actually, I'm a bit surprised that the big names haven't started looking at raytracing. Sure, it has a reputation for being slow, but graphics technology has grown by leaps and bounds. Combined with about 5 billion caching and approximation tricks, and the fact that ray tracing is a highly parallel operation,
Re:Yawn... (Score:1)
Why bother? All that computational power is being put to use dealing with the real challenges of interactive real-time 3D games: collision, animation, physics, AI.
Re:Yawn... (Score:2)
However, raytracing still requires the same transformations, so GPUs as they work now are no more useful for physics, etc. than a raytracing GPU. In fact, with per-pixel shading, modern GPUs practically *are* raytracers.
Can someone point out exactly what differentiates a per-pixel polygon shader from a raytracing engine from a practical point of view? I'd be interested to know.
Re:Yawn... (Score:2)
Which is basically my problem with current cards. Programming them has become exceedingly complex because they stick to a polygon/raster model instead of simply declaring rays outright.
Re:Yawn... (Score:3, Insightful)
I'm not even asking you to do it from scratch yourself - borrow liberally from people like Purcell, and from GPGPU.org, and from BrookGPU and from other stream-processing-on-GPU sources.
When you say that you want a new "driver," I think you should really consider using a wrapper layer like BrookGPU - or just figure out how to do things the way Purcell did
Re:Yawn... (Score:2, Interesting)
A per-pixel polygon shader is just that: a small program that gets run for every pixel of that polygon on the screen. That says absolutely nothing about what lighting method is used.
Now, that pixel shader can do raytracing, but simply being a pixel shader doesn't mean that raytracing is being done. The pixel shader could instead do shadow mapping or something.
Raytracing is just what it sounds like: you literally trac
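To make the distinction concrete, here is a minimal hedged sketch in C (the scene and all names are illustrative, invented for this comment, not taken from the book or the thread): both functions below are "pixel shaders" in the sense that they run once per pixel, but only the second one traces a ray.

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    /* shader #1: local diffuse lighting; knows nothing about the scene */
    static double shade_diffuse(double u, double v) {
        Vec n = { u, v, sqrt(fmax(0.0, 1.0 - u * u - v * v)) }; /* faked unit normal */
        Vec l = { 0.577, 0.577, 0.577 };                        /* light direction   */
        return fmax(0.0, dot(n, l));
    }

    /* shader #2: intersects this pixel's camera ray with a unit sphere
       at z = 3 -- raytracing done *inside* a pixel shader */
    static double shade_raytrace(double u, double v) {
        Vec d = { u, v, 1.0 };                 /* ray direction from the origin */
        Vec c = { 0.0, 0.0, 3.0 };             /* sphere center                 */
        double b = dot(d, c);
        double disc = b * b - dot(d, d) * (dot(c, c) - 1.0);
        return disc >= 0.0 ? 1.0 : 0.0;        /* hit or miss                   */
    }

    int main(void) {
        for (int y = 0; y < 12; y++) {         /* both shaders run per pixel */
            for (int x = 0; x < 24; x++) {
                double u = (x - 12) / 12.0, v = (6 - y) / 6.0;
                putchar(shade_raytrace(u, v) > 0.0
                        ? (shade_diffuse(u, v) > 0.5 ? '#' : '+')
                        : '.');
            }
            putchar('\n');
        }
        return 0;
    }

Swap the body of the per-pixel function and you swap the lighting method; the hardware neither knows nor cares, which is the parent's point.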
Re:Yawn... (Score:4, Interesting)
So don't look to CPU or GPU manufacturers for help with ray tracing... you want to bitch at the short-bus-riding DRAM people instead.
Re:Yawn... (Score:1, Insightful)
With another 10X performance improvement, that may change. But my point is, the 10X improvement is going to have to occur on the memory side, not the CPU and GPU side where it usually happens.
Re:Yawn... (Score:3, Informative)
I fail to see how one million rays per second is "real time" for most images people associate with ray-tracing. Even at one ray per pixel, you're limited to a single 500x500 image per second. But the value of ray tracing is the recursion: one ray hits an object, and anywhere between 2 and 200 rays result (accounting for any subsequent recursions, lights, and diffusions).
Your budget: 1000000 rays per
correction: 1ray x 500x500 x 4fps (Score:1)
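That correction checks out; a quick back-of-the-envelope in C (a hedged sketch using only the figures quoted upthread, nothing from the book):

    #include <stdio.h>

    /* the grandparent's budget: one million rays per second, spent on
       a 500x500 image at one ray per pixel */
    int main(void) {
        const double rays_per_sec = 1.0e6;
        const double pixels = 500.0 * 500.0;   /* 250,000 */
        printf("fps at 1 ray/pixel: %.1f\n",
               rays_per_sec / pixels);         /* 4.0 */
        return 0;
    }

And that is before any secondary rays, which is exactly the point about recursion made above.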
Re:Yawn... (Score:3, Informative)
There are serious investigations into making cache-optimized algorithms. For example, the matrix transposition and array index bit reversal algorithms have been investigated in two papers. Also, Bailey's 4-step and 6-step FFT algorithms are cache efficient. The latter example shows that a complex algorithm such as an FFT can be made cache efficient at the sacrifice of only a few extra computations. Perhaps it would be prudent to use a hybrid ray-tracer/polygon renderer to section each portion of th
Re:Yawn... (Score:5, Interesting)
Actually, I'm a bit surprised that the big names haven't started looking at raytracing. Sure, it has a reputation for being slow, but graphics technology has grown by leaps and bounds. Combined with about 5 billion caching and approximation tricks, and the fact that ray tracing is a highly parallel operation, I'm thinking that we should already have games that are raytraced.
I'm not sure that's gonna happen. The fact of the matter is that current graphics hardware is fast approaching the point where raytracing will be irrelevant. The lighting algorithms that can be coded on GPUs will one day match the complexity of raytracers and you won't know the difference. Scan conversion is not actually mathematically inferior to raytracing as a rendering technique; it's just a way to quickly generate the first recursive step of the raytracer, and that advantage isn't going to go away. In actuality, the end result will probably be something of a hybrid between raytracing and traditional scan conversion techniques, and you won't really be able to identify it as one or the other.
Re:Yawn... (Score:2, Informative)
Actually, AFAIK the opposite is true.
Raytracers scale very nicely with geometric complexity: O(log n). So as the virtual environments continue to grow, raytracing should gain popularity over scan conversion. Have a look at this [uni-sb.de] - that's 50 million triangles raytraced at 4-5 fps!
Most of the current interactive raytracing is still done on parallel
Re:Yawn... (Score:3, Interesting)
The hardware raytracing site has a nice video of their FPGA-based system rendering about 187 million triangles at about 15 - 40 fps (512x384, 90MHz FPGA).
Re:Yawn... (Score:3, Interesting)
You also need to consider that the O(log N) figure for ray tracing does not include the cost of building a ray-acceleration data structure, and it also assumes the entire scene fits in RAM. Polygon splatting is O(N), but the coeffi
Re:Yawn... (Score:3, Informative)
google for rtChess.
The ray tracing engine has since seen a 40% performance boost and has added photon mapping and scales nicely with more CPUs - I just haven't written a game with it since. I don't think a GPU implementation will be much faster. nVidia seems to think they make general purpose processors now - HAH what a laugh.
Raytraced Nethack? (Score:2)
You hit the Umber Hulk -more-
His pixels shimmer gracefully -more-
The Umber Hulk hits -more-
You die -more-
You leave a good-looking corpse
Links (Score:1, Informative)
http://graphics.stanford.edu/papers/tpurcell_thes
http://graphics.stanford.edu/papers/photongfx [stanford.edu]
(And not a karma whore in sight.)
Re:Yawn... (Score:5, Informative)
First, why? Most people don't even make movies that are raytraced.
Second, they already are doing raytracing on the GPU. Purcell [stanford.edu] had one working in 2002. There was a presentation on it in a course at SIGGRAPH 2003. The GPU is maybe a little faster than the CPU, right now, for raytracing.
"Tweaking OpenGL" is kind of like saying "tweaking the CPU" anymore. It's fairly close to a generalized stream processor. And their specs already are open enough to have figured this out. Look at GPGPU [gpgpu.org] and read some more about how people are doing amazing stuff on the GPU today. No need to wait for ATI and NVidia to open up any specs - they already did. Cg and GLSlang are fully up to the task.
And, photon mapping and similar techniques are much more sophisticated than raw raytracing.
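The "generalized stream processor" point deserves a sketch (mine, and deliberately simplified; all names here are illustrative): a stream processor applies a kernel independently to every element of a stream, with no communication between elements. That is precisely the constraint a fragment shader works under, and it is why general computation maps onto GPUs at all.

    #include <stdio.h>

    typedef float (*Kernel)(float);

    /* an example kernel: any pure per-element function qualifies */
    static float times2_plus1(float x) { return 2.0f * x + 1.0f; }

    /* run the kernel over the stream -- on a GPU, every iteration of
       this loop would execute in parallel, one per fragment */
    static void stream_map(Kernel k, const float *in, float *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = k(in[i]);
    }

    int main(void) {
        float in[4] = { 0.0f, 1.0f, 2.0f, 3.0f }, out[4];
        stream_map(times2_plus1, in, out, 4);
        for (int i = 0; i < 4; i++)
            printf("%g ", out[i]);             /* prints: 1 3 5 7 */
        printf("\n");
        return 0;
    }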
Re:Yawn... (Score:2)
Because current methods are getting too complex. The sheer number of details in writing a modern 3D engine is daunting, even to an experienced 3D coder. A raytracer would allow you to hit a big reset button and go back to simpler times. As a bonus, quality could eventually be taken much farther than with today's polygon/shader methods.
And, photon mapping and similar techniques are much more sophisticated than raw raytracing.
Raw ra
Re:Yawn... (Score:2)
That's naive in the extreme. A coder will always have to be involved. If for no other reason than for optimizing performance, which is honestly no easier than writing a "modern 3D engine" as you described it.
Now, if your point is that dog-slow rendering is "better" than fast rendering, then pick your fight elsewhere. But don't blame the GPU for being fast, especially since it is now just as capable of high-accuracy rendering
Re:Yawn... (Score:2)
What am I blaming the GPU for? I just want to reprogram it and make everyone's lives easier. Sure, the scene will need to be optimized by a coder who understands, but the artist should be capable of deciding what effects will work and which w
Re:Yawn... (Score:2)
SOMEONE ELSE HAS ALREADY DONE IT.
How many times do I have to say this?
Timothy Purcell [stanford.edu] at Stanford University did it two years ago.
So stop wishing 'if it were only possible' to do something that people have already done. Read my link, and if you want to be polite, thank me for showing you where to find exactly the kind of information that you were complaining didn't exist.
Re:Yawn... (Score:4, Informative)
Don't believe you can do it? Here's a link to some projects that do real-time raytracing, radiosity, photon mapping, and subsurface scattering [gpgpu.org], all on GPUs. These GPUs are programmable without them opening up their specs.
(The desire for them to open up their specs is for other reasons, not because they are hiding some functionality from you.)
Re:Yawn... (Score:2)
(Simplified concept, of course, but you get the point.)
Re:Yawn... (Score:2)
You don't know what you're talking about. Reread the specs, and go and read the Purcell papers that people keep pointing you to. And learn what a stream processor is, so maybe you can understand why the mathematics actually are general purpose enough to meet those needs. Just because you don't understand the capabilities doesn't mean that they don't exist.
not just the vertex shader
They're using pixel shaders to do
Re:Yawn... (Score:2)
It's not about whether the GPU can do the math or not. It's about whether the GPU is programmed to do the math for raytracing or not. Many RayTracing engines have twisted OpenGL a bit to get their ray tracing operations done. Which is fine since it gives them a performance boost. But this boost is insignificant compared to what could be achieved with dedicated G
Re:Yawn... (Score:2)
Viking Coder: "No problem, AKAImBatman."
It's about whether the GPU is programmed to do the math for raytracing or not.
This is a nonsense statement. I'm sorry, but it really is. That's like saying that, "the CPU is programmed to do the math for raytracing or not."
I understand your argument, in that back in the day, a generalized CPU needed an FPU to do mathematics operations that the CPU could do in software...
Re:Yawn... (Score:2)
To match the quality of anti-aliased triangle rendering, you need at least 4 samples per pixel. Then you need to support full-screen resolutions (2048x1536). To be officially real-time, you need at least 15 frames/second, if not the full 60/80 that most games provide now.
That would give you a budget requirement of at least 2
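Plugging the quoted requirements into a quick back-of-the-envelope (a hedged sketch in C; the figures are the parent's, not the book's):

    #include <stdio.h>

    /* ray budget implied by the parent's figures: 2048x1536 resolution,
       4 samples per pixel, at 15 and at 60 frames per second */
    int main(void) {
        const double pixels  = 2048.0 * 1536.0;    /* ~3.1M pixels */
        const double samples = 4.0;
        printf("15 fps: %.0f million rays/sec\n",
               pixels * samples * 15.0 / 1e6);     /* ~189 million */
        printf("60 fps: %.0f million rays/sec\n",
               pixels * samples * 60.0 / 1e6);     /* ~755 million */
        return 0;
    }

And that counts primary rays only, before any recursion for reflections or shadows.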
Re:Yawn... (Score:3, Interesting)
The first stage or two got
Re:Yawn... (Score:2)
However, there seem to be many open source real-time ray-tracing projects going on:
OpenRT [openrt.de], with its own FAQ [acm.org]. This project seems to have several games written for it.
Re:Yawn... (Score:3)
I never said it was easy.
Re:Yawn... (Score:2)
Remember that those shiny Pixar movies you see generally don't bother with ray-tracing (I think Finding Nemo did a little bit of it; the previous ones didn't) because, as has been said, you can get 'good enough' with rules-of-thumb and hacks most of the time.
Re:Yawn... (Score:2, Insightful)
We have become so used to these in games now, that I dare say if you did produce a real-time raytracer you would be hard-pushed to explain to the average gamer what was so cool about it.
The bar has been raised significantly since ray-tracing was first presented in the 70s. And we've long since started looking beyond what raytr
Re:Yawn... (Score:2)
The necessary geometry and graphics for RayTracing works out to pretty much the same cost as polygon stuff (sometimes even smaller). Given that today's cards have 64-128MB of RAM, the memory on the card is not the issue. The bandwidth can be an issue, but no more than today's graphic
The Gems books are classics... (Score:5, Interesting)
A lot of the articles are practical, too, if you're working in the field. When I was fiddling with some fuzzy logic [rubyforge.org] stuff, the articles from Game Programming Gems II were very helpful.
Re:The Gems books are classics... (Score:1)
Either this stuff is impossible to document, or it just plain changes too fast.
Perlin (Score:4, Funny)
Wow.. there's a person behind Perlin noise? I always thought it was a random noise generator based on the chaos found in Perl programs. Thus, the noise was generated by an http client that has "gone perlin'" -- which means to crawl the web in search of arbitrary bits of Perl.
who knew!?
Re:That was not funny at all (Score:1)
Thank you! I'll be here all week! Enjoy your stay, be sure to tip your waitress.
Re:That was not funny at all (Score:1)
You're saying this about a joke that doesn't involve "In Soviet Russia ..." or "I for one welcome our ... overlords?"
Personally, I for one welcome our new non-recycled format joke overlords.
Perlin Performance (Score:1)
Ken Perlin actually sang a song at SIGGRAPH 2002 before he presented his "Improving Noise" paper, and it didn't fail to be funny. Sadly I can't find the text anymore, but it was hilarious. This guy manages to bring technical material to a tired audience and get the whole crowd laughing with his witty lyrics, on a subject as interesting as noise.
Ken Perlin is also the guy who has brought together much of the talent that is responsible for the ongoing success of Pixar. I guess you could
Forget nVidia or ATI... (Score:1, Offtopic)
And don't even get me started on the clear, crisp sounds of the SID chip!
Re:brings to mind an old question I once had. (Score:3, Informative)
While it would probably be possible to use a GPU for general purpose number crunching, I believe it would make the GPU unable to send a signal to your monitor at the same time.
I asked the same question back in the days of RC5-64 and I was told that it was not feasible for ju
I think you're wrong (Score:3, Insightful)
See this paper [in.tum.de] for some examples which not only use the GPU simultaneously for graphics and number crunching, but which use the graphics to give real-time output of computational fluid results.
The only remaining problem I remember is that the bandwidth to current video cards is very asymmetric, which is fine for video games that just push a lot of data to the video card but not so go
Re:I think you're wrong (Score:2)
But regarding the first generation of 3D cards, the question is irrelevant- because those cards had NO 2D output ability, so you always had a separate 2D card running anyhow.
Re:brings to mind an old question I once had. (Score:2)
It's got an amazingly high downstream rate, GBps, but reading back from the card can be as low as 256 KB/sec in some models.
Far too slow to do any kind of processing on a high-bandwidth stream. Although the circuitry of a GPU (matrix optimizations) would be useful in crypto, the rate at which data could be returned from the card would choke the stream; the buffer would fill up and you'd start losing data.
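To put that asymmetry in perspective (hedged; both figures are the parent's, and the quoted "GBps" downstream rate is taken as a round 1 GB/s):

    #include <stdio.h>

    int main(void) {
        const double down_bps = 1.0e9;     /* assumed 1 GB/s downstream      */
        const double up_bps   = 256.0e3;   /* 256 KB/sec worst-case readback */
        printf("down/up asymmetry: ~%.0fx\n",
               down_bps / up_bps);         /* ~3906x */
        return 0;
    }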
Re:brings to mind an old question I once had. (Score:1)
For applications like that, the back channel isn't that much of an issue, because the data coming out of the process is very much smaller, i.e. a lot of data is being thrown away in the GPU
Conversely, on
Re:brings to mind an old question I once had. (Score:2)
I have my doubts (but love playing devil's advocate); the overhead of managing everything may well negate any benefit of farming out work to the other cards. It would also shift the bottleneck to the various communication channels by increasing the traffic between components, I guess.
Not to mention the effort inherent in setting up something like that...
Hmmm,. I wonder if it is very nVidia centric? (Score:2, Interesting)
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
In any case, the nVidia Cg compiler produces much less efficient code than you would get from the HLSL compiler; ergo, if you're actually interested in producing games, learn GLSL/OSL an
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
This is the dumbest thing anyone has said about this book. The book is bad because you disagree with their choice of shading language? Hardware shaders are not complex things, an
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
This is a Gems book, written by numerous people. These people tell the editor, "Yeah, I have a cool GPU technique that would make a good 'gem' for the book." and they say, "Okay, give us an implementation." and that happens to be in either HLSL or Cg.
The fa
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
As for why they used Cg, surely you're not stupid enough to believe they just 'picked one', or (from reading your post) perhaps you are. Do you know who the author works for? Do you know what company pushes Cg? It isn't some 'conspiracy theory' but it IS a conflict of interest given that it V
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
I guess we'll just have to disagree on this one, because I think a book can be written about a topic that is not the language used to implement the topic of discussion. I don't believe every book needs to advertise on the title what language they use to implement the thing they're interested in
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
You gave every indication before that you thought this book's content was not valuable, and it seemed that your reasoning was simply because it was written in Cg. From a previous post:
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
Yeah, I understand. But at the same time, from their perspective as publishers and writers
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
HLSL, well, I don't use D3D, so it's of no use to me.
Cg is a valid choice I have if I want my shader code to work on my ti4400 and on my radeon9600. The only other option is to rewrite ALL shaders for both cards, which is a real pain in the ass (especially the fragment shader on the ti4400, which has to be constructed with NV texture shaders + register combiners). Fortunately, the ARB vertex programs are supp
Re:Hmmm,. I wonder if it is very nVidia centric? (Score:1)
My original complaint about the book is that a book is being published which purports to be a guide to programming GPUs, and yet rather than use GLSL or HLSL it uses a private corporation's shading language.
It is like someone producing a book on C++ and making Microsoft-friendly examples rather than mai
B&N? Ripoff! (Score:4, Informative)
In the case of this book, I've taken the liberty of making your life easier by providing you with URLs that will take you directly to the price list for the book. For future reference: AddAll is a shopping 'bot, looking at thirty-six stores. AddAll [addall.com] Results and BookPool [bookpool.com]
Now, if you insist upon paying Amazon and B&N prices, let me know. You can PayPal the money to me and I'll order the book for you from AddAll or BookPool and have it shipped to you. (Of course, I'll keep the difference. After all, you were willing to pay the extra price!) If you're willing to waste your money, I'd rather collect the waste than Amazon or B&N.
p.s. Remember this the next time you see someone post a message saying, "it's -this price- at Amazon!"
p.p.s.
Here's [google.com] the listing from Froogle [google.com] (just in case you haven't used it yet)
Followup volumes will deal with OGPU gems, (Score:2)
Cue "in Soviet Russia" jokes...