* Organization Before I start, here's an overview of the four major sections of this talk. I'll start with a little introduction. Then I'll explain the algorithm itself. We'll show ClearCoat360, which uses the algorithm. Then I'll wrap up.

* Realism and Interactivity This work exists at the boundary between realistic image generation and interactive graphics. Realistic rendering can take anywhere from seconds to hours per frame, but produces extremely accurate results. Interactive rendering requires somewhere between, say, 10 and 60 frames per second, leaving just a few tens of milliseconds per frame. To meet these time constraints, we make a lot of simplifying assumptions.

* Interactive Approximations For example, we usually use only local approximations for the interaction of light with surfaces in a scene: no shadows, no reflections, no multiple bounces. We use a Phong approximation to the BRDF, and even that is computed only at the vertices and interpolated. And we use simple approximations for the lights as well, with either point or directional light sources. The result is interactive rendering of images like this one.

* Environment Maps Of course, there have been many improvements beyond the simple model I've just presented. For example, things like texture or shadow mapping can improve the rendered realism and still maintain interactive frame rates. One such improvement that I'd like to focus on is environment or reflection mapping. Environment mapping is a fast way to approximate mirror reflections from a distant environment.

* Environment Representation At the heart of environment mapping is a texture map containing the colors for reflections in all directions. There is some flexibility in how to represent all possible reflection directions in a flat map. When Blinn and Newell first proposed environment mapping in 1976, they used polar coordinates. Since then, both the "cube map" and "sphere map" representations have been accelerated by graphics hardware.

* Sphere Map I'll focus on sphere maps for now since that's what our hardware accelerates, but the techniques described will work for any representation. A sphere map is essentially what you would get from an orthographic image of a shiny sphere. In use (even for perspective rendering), you determine which point on the sphere map gives the desired reflection direction and use the color you find there.

* Image Based Rendering A different form of rendering doesn't use a geometric model at all. Image-based rendering starts from one or more images of the scene. To render a view from an arbitrary viewpoint, you warp a source image you already have.

* Organization Now I'd like to move on to talk a little about our algorithm.

* More Complex BRDF We'd like to interactively "walk around" geometric models, but with more complex BRDFs than Phong and mirror reflectors. There have been other techniques for interactively rendering with more complex BRDFs in complex environments, but most require an analytic BRDF, sometimes one that can be separated easily into independent components. But what if all we have is an empirical model, or even a physical sample of some car paint? Notice some of the subtleties of this car paint model, with an undercoat and a layer of polyurethane with pigment flecks embedded in it. There's a weird interaction between the undercoat color and the color of the pigment flecks. There's a strong Fresnel component making the reflection off the polyurethane brighter at glancing angles.
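To give a feel for that Fresnel behavior, here's a small illustrative sketch of Schlick's common approximation to Fresnel reflectance. It's only meant to show why glancing reflections come out brighter; it is not the paint model used in ClearCoat360, and the names are mine.

```cpp
#include <cmath>

// Schlick's approximation to Fresnel reflectance.  f0 is the reflectance at
// normal incidence; cosTheta is the cosine of the angle between the view
// direction and the surface normal.  The value rises toward 1 as cosTheta
// approaches 0, which is why the clear coat looks brighter at glancing angles.
float fresnelSchlick(float cosTheta, float f0)
{
    float x = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * x * x * x * x * x;
}
```

For a clear coat with an index of refraction around 1.5, f0 is only about four percent, so a head-on reflection is faint while a grazing reflection approaches a mirror.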
* Paint Example As an example, DaimlerChrysler has used an implementation of our algorithm, called ClearCoat360, to produce these images of their new Mercedes S-Class automobile.

* Radiance Environment Map Some of our requirements are handled by a generalization of ordinary environment mapping. Instead of thinking of an ordinary environment map as telling you the colors for each reflection direction, you could think of the same map as telling you the reflected color for a set of surface orientations. My surface is oriented like this, so my texel in the map must be here and my reflected color must be this. Each point in a radiance environment map instead holds the surface color for a surface with the given orientation AND BRDF. Essentially, each point in the map is the result of computing the lighting integral of BRDF and environment for a surface with the given orientation. Another way to think about it is that instead of an image of a shiny sphere, the spherical radiance environment map is an image of a painted sphere. In any case, it might be a little harder to compute or acquire, but the technique for rendering with a radiance environment map is exactly the same as for an ordinary environment map. Any hardware that can currently render environment maps can also render radiance environment maps.

* Limitations There are several limitations to using a radiance environment map. Like an ordinary environment map, it only works when the environment is far away from the reflecting objects. Because the map only contains one color for any given surface orientation, it can't handle anisotropic BRDFs, like this Christmas ball (show). Finally, a radiance environment map only works for a single viewpoint. This is NOT an artifact of our choice to use sphere maps instead of cube maps. It's due to view-dependent intensity variations from things like the Fresnel effect or backscatter. A glancing reflection and a head-on reflection of the same part of the environment give completely different results. The glancing reflection is strong, while the head-on reflection is weak.

* Moving Viewpoint The fixed viewpoint is a problem for our application. We really want to walk around the models and see the shading and environment change appropriately. Our problem is that we have a 4D function. Two dimensions, the surface orientation, are encoded in the map. But the other two dimensions, the view direction, are fixed. Taking a lesson from IBR, we record several radiance environment map images for a fixed set of viewpoints. Then we attempt to reconstruct the radiance environment map we need by warping and blending together the maps that we have.

* View Sampling I've said that we need to sample the space of view directions, but what does that mean? If you describe each view direction with a unit view vector, the possible view directions cover the surface of a sphere. We need to pick some number of view vectors on that sphere, and for each of those, we need to generate a new radiance environment map. I've shown here a new view direction in yellow and several sampled view directions in grey. For each of the grey view directions, we've got a pre-computed radiance environment map. If we've got a dense enough sampling of view directions, we could forget about any warping and just render the new view using the closest pre-computed map. A better choice might be to pick the closest three and blend them together.

* Spherical Barycentric Blend How? We'll use a linear blend. We use barycentric weights, based on the ratio of triangle areas. Since the view directions all lie on a sphere, we use the ratio of spherical triangle areas. For example, the image shows a blown-up section of the sphere. The three outer vertices correspond to view vectors where we have pre-computed radiance environment maps. The intersection point in the center is the current view vector that we want to use for rendering. The weight for the map corresponding to the red viewpoint in the lower left is the ratio of the area of the red triangle to the area of the large outer triangle. This ratio goes to one as the new view vector approaches the red view direction and approaches zero when the new view vector is farthest from the red view direction.
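To make that weighting concrete, here's a minimal sketch of one way the spherical barycentric weights could be computed, using the standard solid-angle formula for a spherical triangle. The names and structure are my own illustration, not the actual ClearCoat360 code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b) { return { a.y*b.z - a.z*b.y,
                                                            a.z*b.x - a.x*b.z,
                                                            a.x*b.y - a.y*b.x }; }

// Area (solid angle) of the spherical triangle spanned by unit vectors a, b, c,
// computed with the Van Oosterom-Strackee formula.
static float sphericalTriangleArea(const Vec3& a, const Vec3& b, const Vec3& c)
{
    float num = std::fabs(dot(a, cross(b, c)));
    float den = 1.0f + dot(a, b) + dot(b, c) + dot(c, a);
    return 2.0f * std::atan2(num, den);
}

// Weights for the three sampled view directions v0, v1, v2 nearest the current
// unit view vector v.  Each weight is the area of the sub-triangle opposite its
// vertex divided by the area of the whole triangle, so the weights sum to one
// when v lies inside the spherical triangle, and a weight reaches one as v
// coincides with its view direction.
void sphericalBarycentric(const Vec3& v,
                          const Vec3& v0, const Vec3& v1, const Vec3& v2,
                          float w[3])
{
    float total = sphericalTriangleArea(v0, v1, v2);
    w[0] = sphericalTriangleArea(v, v1, v2) / total;
    w[1] = sphericalTriangleArea(v0, v, v2) / total;
    w[2] = sphericalTriangleArea(v0, v1, v) / total;
}
```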
* Sampling Density How densely would we need to sample the view directions if we just blended the three closest views? Pretty densely, it turns out. Here are radiance environment maps for twelve evenly spaced views. They're obviously different enough that if we just blended them together we'd get some horrible thing with multiple ghost reflections.

* Warp The answer? Use some knowledge about our BRDFs to transform the radiance environment maps before we blend. As in IBR, we try to warp an image (or in this case a radiance environment map) for one view into what you would see from a different view. Since our BRDFs have a strong reflective component, our warp matches up reflection directions. For example, the sun in the map on the left reflected from here in the environment, so it would land here on the map on the right. That defines a warp correspondence between this point and that point. At the bottom, I've applied the warp to a test pattern instead of a normal radiance environment map so you can better understand its character. To do the warp, we render the tessellated disk in the center, using the map on the right as a texture. We use the warp function to determine texture coordinates at each vertex in the tessellated disk.

* Algorithm Setup To summarize the algorithm: before we start rendering, we sample a set of view directions and acquire or generate a radiance environment map for each.

* Per-frame Algorithm Then, for each frame, we pick the three closest views, warp their radiance environment maps to match the view we want, and blend the results together. Actually, this is just three passes rendering the textured disk. In each pass, the warp is done by the texture coordinates we choose, and the blend uses the standard OpenGL blending capabilities to weight and blend properly. We save the result as an environment map and use it in a fourth drawing pass -- the only one where we draw the object geometry (which is usually much more complex than our friend the triceratops here). Notice how the warped maps in the center row still differ from each other. This is due to the view-dependent aspects of each radiance environment map.
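For those curious about the warp itself, here's a rough sketch of how the per-vertex texture coordinates could be computed by matching reflection directions. The frame conventions and names are assumptions for illustration, not the ClearCoat360 source, and degenerate directions (a reflection pointing straight back at the source view) aren't handled.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float s, t; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(const Vec3& a, float k)     { return { a.x*k, a.y*k, a.z*k }; }
static Vec3  add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  normalize(const Vec3& a)          { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Texture coordinate in a painted-sphere (radiance environment) map for a unit
// normal n expressed in that map's own eye frame (view axis along +z): the
// orthographic image of the sphere puts the point with normal n at (n.x, n.y).
static Vec2 sphereMapCoord(const Vec3& n)
{
    return { 0.5f * n.x + 0.5f, 0.5f * n.y + 0.5f };
}

// Warp: given the unit normal nDst encoded at a vertex of the tessellated
// destination disk and the new view direction vDst, find where to sample the
// source map, which was built for view direction (0,0,1).  All vectors here
// are assumed to be unit length and expressed in the source map's eye frame.
Vec2 warpTexCoord(const Vec3& nDst, const Vec3& vDst)
{
    // Reflection of the destination view direction about the vertex normal.
    Vec3 r = sub(scale(nDst, 2.0f * dot(nDst, vDst)), vDst);
    // The source-map normal that reflects the source view into that same
    // direction is the halfway vector between the source view direction and r.
    Vec3 vSrc = { 0.0f, 0.0f, 1.0f };
    Vec3 nSrc = normalize(add(vSrc, r));
    return sphereMapCoord(nSrc);
}
```

In the per-frame algorithm above, each of the three passes would evaluate something like this at every vertex of the tessellated disk, and the pass's contribution would be scaled by its spherical barycentric weight during blending.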
* Organization Now let's see ClearCoat360 in use on something a little more interesting.

* Conclusion We've succeeded in our goal of interactive rendering of geometric models in arbitrary environments and with interesting BRDFs.

* Future Work Still, we've got some limitations -- primarily restrictions on the BRDF and the lack of local reflections. IBR replaces the use of geometric models, and all of standard rendering, with warped images. We've retained the geometric models, but replaced the use of BRDFs, lights, and analytical shading models with warped images. I predict further uses of IBR techniques in portions of the rendering process.

* Acknowledgements We have many people to thank.

I'll leave this up while I take any questions, if we've still got time.