Eugene d’Eon | David Luebke | Eric Enderton
NVIDIA Corporation
Eurographics Symposium on Rendering 2007
Abstract
Existing offline techniques for modeling subsurface scattering effects in multi-layered translucent materials such as human skin achieve remarkable realism, but require seconds or minutes to generate an image. We demonstrate rendering of multi-layer skin that achieves similar visual quality but runs orders of magnitude faster. We show that sums of Gaussians provide an accurate approximation of translucent layer diffusion profiles, and use this observation to build a novel skin rendering algorithm based on texture space diffusion and translucent shadow maps. Our technique requires a parameterized model but does not otherwise rely on any precomputed information, and thus extends trivially to animated or deforming models. We achieve about 30 frames per second for realistic real-time rendering of deformable human skin under dynamic lighting.
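The key observation above — that a diffusion profile can be approximated by a weighted sum of Gaussians — can be sketched in a few lines. This is a minimal illustration of the functional form only; the weights and variances below are made up for demonstration, whereas the paper fits them per material via non-linear optimization.

```python
import math

def gaussian_2d(v, r):
    """Normalized 2D Gaussian of variance v; integrates to 1 over the plane."""
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def profile(r, gaussians):
    """Sum-of-Gaussians radial diffusion profile: R(r) = sum_i w_i * G(v_i, r)."""
    return sum(w * gaussian_2d(v, r) for w, v in gaussians)

# Hypothetical (weight, variance) pairs for illustration only.
fit = [(0.25, 0.0064), (0.35, 0.0484), (0.40, 0.187)]

# The profile falls off monotonically with radial distance r.
print(profile(0.0, fit), profile(0.5, fit))
```

Because each Gaussian is separable, blurring irradiance in texture space with each Gaussian independently and summing the results reproduces the full profile — the property the rendering algorithm exploits.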
Video
Appeared in SIGGRAPH 2007 Electronic Theatre
[wmv] (22MB)
Real-time Demo
Retrospective, April 9, 2011
This work was based on an empirical observation that sums of Gaussians fit diffusion dipole scattering profiles quite well. That empirical fit has since been replaced by a rigorous connection to time-resolved diffusion theory (specifically, its quantization). Higher-accuracy scattering profiles can now be computed using the methods described in A Quantized-Diffusion Model for Rendering Translucent Materials, eliminating the need for the non-linear optimization detailed in the Appendix of this paper.