Lighting: The Key To Realistic Graphics
The lighting used here boosts the realism in this scene.
Lighting in computer graphics plays a huge part in creating a realistic-looking, immersive experience. To create a true sense of realism, you need to show direct illumination as well as indirect illumination. Unfortunately, the latter is hard to get right, and it’s an “expensive” computation in terms of processing and memory.
You can get good results if you precompute the indirect illumination. But then your lighting is static – not exactly ideal for gaming, where dynamic objects are interacting with the scene. Or, you can try to render it fairly quickly – but then you lose detail and precision.
I started working on a way to get higher-quality interactive indirect illumination last summer as an NVIDIA DevTech intern. Later, as I worked to finish my PhD in computer graphics at the University of Grenoble, France, I collaborated with several colleagues from academia and NVIDIA to research solutions to some of these problems. You can find our published results in a paper called “Interactive Indirect Illumination Using Voxel Cone Tracing” (you can download the authors’ version here).
How We Solved the Real-Time Lighting Puzzle
Our approach allows you to show changes to environmental lighting as dynamic objects interact with various sources of illumination. We can do this with a level of quality that approaches what you achieve with offline rendering – but in real time.
This research makes possible a whole range of effects in a gaming environment, from color bleeding (when nearby surfaces change color as a result of reflected light) to glossy reflections (when an object moves across a polished floor). Previous approaches were capable of rendering diffuse illumination but not specular effects (like blurry or glossy reflections). Our research allows us to render both diffuse illumination and specular effects. To see some examples of these effects in action, check out the YouTube video featured above.
Algorithms, Voxels and Octrees – Oh My!
The key to our approach lies in a new algorithm and data structure that allow much faster computation. Instead of working on triangles (the traditional way of rendering graphics), we’re using voxels. Each voxel is one value in a 3D grid. The voxels are stored in an octree structure – a tree in which each node has up to eight children, with voxels stored at the nodes. Because empty regions of the scene allocate no nodes, an octree effectively compacts large amounts of graphics data. It uses less memory, making tasks such as ray tracing faster and more efficient.
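To make the memory savings concrete, here is a minimal sparse-octree sketch in Python. This is an illustrative toy, not the paper’s implementation; the names (`OctreeNode`, `insert`, `count_nodes`) are my own, and real GPU versions store the tree in flat buffers with prefiltered voxel data at every level.

```python
# Minimal sparse voxel octree sketch (illustrative only).
# Each node covers a cubic region and has up to eight children,
# one per octant; empty space allocates no nodes at all, which is
# where the memory savings over a dense 3D grid come from.

class OctreeNode:
    __slots__ = ("children", "value")

    def __init__(self):
        self.children = [None] * 8  # one slot per octant
        self.value = None           # voxel data (e.g. an RGB color)

def insert(root, x, y, z, value, max_depth):
    """Insert a voxel at integer coordinates in a 2**max_depth grid."""
    node = root
    for level in range(max_depth - 1, -1, -1):
        # Build the octant index from one bit of each coordinate.
        octant = ((((x >> level) & 1) << 2)
                  | (((y >> level) & 1) << 1)
                  | ((z >> level) & 1))
        if node.children[octant] is None:
            node.children[octant] = OctreeNode()
        node = node.children[octant]
    node.value = value
    return node

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

root = OctreeNode()
insert(root, 0, 0, 0, (255, 0, 0), max_depth=4)
insert(root, 15, 15, 15, (0, 255, 0), max_depth=4)
# A dense 16x16x16 grid would hold 4096 cells; the sparse tree only
# allocates the nodes along the two occupied paths (9 nodes total).
print(count_nodes(root))  # → 9
```

Two occupied voxels in a 16³ volume cost nine nodes instead of 4096 cells, and the gap widens quickly for mostly empty scenes.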
Notice the indirect lighting and “color bleed.”
Using this structure, our algorithm allows us to very quickly compute effects that have traditionally been quite computationally intensive. For example, previous approaches had difficulty showing a real-time reflection from a glossy curved surface (see time marker 1:20 in the above video). We accomplished this by using the previously mentioned sparse voxel octree to speed up the tracing of “cones”.
Traditionally, you would compute specular reflections by launching a ray from your eye to the reflector, and then launch a secondary ray from the reflector in the direction of reflection. With specular reflections, we want to see all kinds of materials reflecting light realistically. The challenge is to compute these realistic reflections as fast as possible, and with a minimum number of rays.
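The direction of that secondary ray comes from the standard mirror-reflection formula: for an incoming unit direction d and a unit surface normal n, the reflected direction is r = d − 2(d·n)n. A quick sketch (this is the textbook formula, not code from the paper):

```python
# Mirror-reflection direction for the secondary ray:
# r = d - 2 (d . n) n, with d and n as unit-length 3D vectors.

def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray traveling straight down onto an upward-facing floor
# bounces straight back up:
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # → (0.0, 1.0, 0.0)
```

For a perfect mirror, one secondary ray along r is enough; glossy materials scatter around r, which is where the cost explodes.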
When it comes to rendering glossy reflections, you would traditionally need to launch hundreds or thousands of scattered secondary rays for each ray launched to the reflector. My research allows us to replace those thousands of secondary rays with just one “voxel cone”. The cone is an approximation of the combined effect of the secondary rays, and it yields very realistic results at a much lower computational cost. We also use the same approach to quickly compute color bleeding effects (diffuse materials) with only a few scattered cones.
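The intuition behind the cone can be sketched in a few lines. As the cone advances through the scene, its footprint widens, so instead of many rays you take single samples from coarser and coarser levels of the prefiltered voxel hierarchy – much like mipmap selection in texturing. This is my own simplified illustration of that level-selection idea, not the paper’s implementation; the function name and parameters are invented for the example:

```python
# Illustrative sketch of cone-marching level selection: pick the
# hierarchy level whose voxel size matches the cone's footprint
# at the current distance along the cone axis.

import math

def mip_level(distance, cone_half_angle, base_voxel_size):
    """Return the (fractional) octree level to sample at this distance."""
    # Footprint diameter of the cone at this distance:
    diameter = 2.0 * distance * math.tan(cone_half_angle)
    # Level 0 is the finest voxels; wider footprints map to coarser levels.
    return max(0.0, math.log2(max(diameter, base_voxel_size) / base_voxel_size))

# The footprint grows with distance, so samples come from ever
# coarser (cheaper, pre-filtered) levels of the hierarchy:
for t in (1.0, 4.0, 16.0):
    print(round(mip_level(t, math.radians(10.0), 0.25), 2))
```

One filtered lookup per step replaces the bundle of rays that would otherwise have to sample that footprint, which is why a single cone can stand in for thousands of scattered secondary rays.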
An Eye Toward The Future
Right now, I’m working on ways to refine the technology to allow more dynamic precision: in other words, adding voxels on the fly as an object gets closer. That’s a direction I already explored in a slightly different context, with my PhD work on “GigaVoxels”.
So, we’re not production-ready yet. But some companies have already approached NVIDIA to preview the research. I hope that, within a few years, we’ll see our research being used by game engines or by movie studios for realistic lighting effects.