• Platinum Pass
• Full Conference Pass
• Full Conference One-Day Pass

Date: Wednesday, November 20th
Time: 11:00am - 12:45pm
Venue: Plaza Meeting Room P1
Session Chair(s): Gurprit Singh, Max Planck Institute for Informatics


A Differential Theory of Radiative Transfer

Abstract: Physics-based differentiable rendering is the task of estimating the derivatives of radiometric measures with respect to scene parameters. The ability to compute these derivatives is necessary for enabling gradient-based optimization in a diverse array of applications: from solving analysis-by-synthesis problems to training machine learning pipelines incorporating forward rendering processes. Unfortunately, physics-based differentiable rendering remains challenging, due to the complex and typically nonlinear relation between pixel intensities and scene parameters. We introduce a differential theory of radiative transfer, which shows how individual components of the radiative transfer equation (RTE) can be differentiated with respect to arbitrary differentiable changes of a scene. Our theory encompasses the same generality as the standard RTE, allowing differentiation while accurately handling a large range of light transport phenomena such as volumetric absorption and scattering, anisotropic phase functions, and heterogeneity. To numerically estimate the derivatives given by our theory, we introduce an unbiased Monte Carlo estimator supporting arbitrary surface and volumetric configurations. Our technique differentiates path contributions symbolically and uses additional boundary integrals to capture geometric discontinuities such as visibility changes. We validate our method by comparing our derivative estimations to those generated using the finite-difference method. Furthermore, we use a few synthetic examples inspired by real-world applications in inverse rendering, non-line-of-sight (NLOS) and biomedical imaging, and design, to demonstrate the practical usefulness of our technique.
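The core of the theory is a Leibniz-style split of the parameter derivative into an interior term (the pointwise derivative of the integrand) and a boundary term that accounts for moving discontinuities. The toy sketch below illustrates that split on a hypothetical 1D integral whose integrand jumps at x = theta (the function f and all constants are invented for illustration) and checks it against finite differences; it demonstrates the decomposition only, not the paper's volumetric estimator.

```python
import random

# Toy 1D analogue: I(theta) = integral_0^1 f(x, theta) dx, where the
# integrand jumps at x = theta (a stand-in for a visibility boundary).
# Leibniz rule: dI/dtheta = (interior integral of df/dtheta)
#                         + (jump size) * (speed of the discontinuity).

def f(x, theta):
    return theta * x + (2.0 if x < theta else 0.5)

def d_interior_plus_boundary(theta, n=200_000):
    # Interior term: Monte Carlo estimate of df/dtheta = x for the smooth part
    # (in this toy it happens to be independent of theta).
    interior = sum(random.random() for _ in range(n)) / n
    # Boundary term: jump size (2.0 - 0.5) times d(jump position)/d(theta) = 1.
    boundary = 2.0 - 0.5
    return interior + boundary

def finite_difference(theta, h=1e-3, n=400_000):
    # Common random numbers keep the comparison's variance manageable.
    xs = [random.random() for _ in range(n)]
    return sum(f(x, theta + h) - f(x, theta - h) for x in xs) / n / (2 * h)

theta = 0.6   # analytically, I(theta) = 2*theta + 0.5, so dI/dtheta = 2
print("interior + boundary:", d_interior_plus_boundary(theta))  # ~2.0
print("finite differences: ", finite_difference(theta))         # ~2.0, noisier
```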

Authors/Presenter(s): Cheng Zhang, University of California, Irvine, United States of America
Lifan Wu, University of California, San Diego, United States of America
Changxi Zheng, Columbia University, United States of America
Ioannis Gkioulekas, Carnegie Mellon University, United States of America
Ravi Ramamoorthi, University of California, San Diego, United States of America
Shuang Zhao, University of California, Irvine, United States of America


Reparameterizing Discontinuous Integrands for Differentiable Rendering

Abstract: Differentiable rendering has recently opened the door to a number of challenging inverse problems involving photorealistic images, such as computational material design and scattering-aware reconstruction of geometry and materials from photographs. Differentiable rendering algorithms strive to estimate partial derivatives of pixels in a rendered image with respect to scene parameters, which is difficult because visibility changes are inherently non-differentiable. We propose a new technique for differentiating path-traced images with respect to scene parameters that affect visibility, including the position of cameras, light sources, and vertices in triangle meshes. Our algorithm computes the gradients of illumination integrals by applying changes of variables that remove or strongly reduce the dependence of the position of discontinuities on differentiable scene parameters. The underlying parameterization is created on the fly for each integral and enables accurate gradient estimates using standard Monte Carlo sampling in conjunction with automatic differentiation. Importantly, our approach does not rely on sampling silhouette edges, which has been a bottleneck in previous work and tends to produce high-variance gradients when important edges are found with insufficient probability in scenes with complex visibility and high-resolution geometry. We show that our method only requires a few samples to produce gradients with low bias and variance for challenging cases such as glossy reflections and shadows. Finally, we use our differentiable path tracer to reconstruct the 3D geometry and materials of several real-world objects from a set of reference photographs.
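The key idea can be shown on a hypothetical 1D toy: an integrand that jumps at x = theta is rewritten through a warp T(u, theta) that pins the jump to a fixed location in u, after which the integrand is differentiated pointwise by automatic differentiation and estimated with plain Monte Carlo. The sketch below uses a minimal dual-number class as a stand-in for a real autodiff framework; the warp, the integrand, and all constants are invented for illustration and are not the paper's construction.

```python
import random

# I(theta) = integral_0^1 f(x, theta) dx, where f jumps at x = theta.
# The warp T maps u in [0, 0.5] onto [0, theta] and u in (0.5, 1] onto
# (theta, 1], so the jump always sits at u = 0.5, independent of theta.

class Dual:
    # Minimal forward-mode AD: value plus derivative w.r.t. theta.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return Dual(o) - self   # o is a plain number here
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def warped_integrand(u, theta):
    # f(T(u, theta), theta) * |dT/du|; the branch depends only on u.
    if u < 0.5:
        x, jac, jump = 2 * u * theta, 2 * theta, 2.0
    else:
        x, jac, jump = theta + (2 * u - 1) * (1 - theta), 2 * (1 - theta), 0.5
    return (theta * x + jump) * jac

theta = Dual(0.6, 1.0)   # seed d/dtheta
n = 200_000
est = sum(warped_integrand(random.random(), theta) for _ in range(n))
print("I      ~", est.val / n)   # analytic: 2*0.6 + 0.5 = 1.7
print("dI/dth ~", est.dot / n)   # analytic: 2.0
```

Because the discontinuity no longer moves in u, the Jacobian and the smooth integrand carry the full derivative, and no silhouette-edge sampling is needed.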

Authors/Presenter(s): Guillaume Loubet, EPFL, Switzerland
Nicolas Holzschuch, National Institute for Research in Computer Science and Automation (INRIA), France
Wenzel Jakob, EPFL, Switzerland


Non-linear sphere tracing for rendering deformed signed distance fields

Abstract: Signed distance fields (SDFs) are a particularly powerful implicit representation for modeling solids, volumes, and surfaces. Their infinite resolution, controllable continuity, and robust constructive solid geometry operations, coupled with smooth blending, have enabled powerful and intuitive sculpting tools for creating complex SDF models. Their metric properties also allow efficient surface rendering via sphere tracing. Unfortunately, SDFs remain incompatible with many popular direct deformation techniques that re-position a surface via its explicit representation. Linear blend skinning used in character articulation, for example, directly displaces each vertex of a triangle mesh. To overcome this limitation, we propose a variant of sphere tracing for directly rendering deformed SDFs. We show that this problem reduces to integration of a non-linear ordinary differential equation (ODE). We propose an efficient approach with controllable error that first automatically computes an initial value along each cast ray and then walks conservatively along the curved ray in the undeformed space according to the signed distance. Importantly, our approach does not require knowing or computing the inverse deformation, or even that it exists globally, which allows us to plug in many existing forward deformations with relative ease. We demonstrate the effectiveness of this approach for interactive rendering of a variety of popular deformation techniques that were previously limited to explicit surfaces.
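Conceptually, a straight ray in deformed space pulls back to a curved ray in undeformed space whose tangent is the Jacobian inverse of the forward deformation applied to the ray direction. Below is a minimal sketch of this walk, assuming a hypothetical twisted box SDF, a finite-difference Jacobian, and a fixed safety factor in place of the paper's controllable-error step rule; the ray origin is placed on the z axis, which the twist fixes, so its undeformed preimage is known without an inverse deformation.

```python
import numpy as np

def sdf(p):
    # Undeformed SDF: axis-aligned box with half-extents (0.5, 0.5, 1.0).
    q = np.abs(p) - np.array([0.5, 0.5, 1.0])
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def deform(p, k=0.5):
    # Forward deformation only: twist about the z axis by angle k*z.
    c, s = np.cos(k * p[2]), np.sin(k * p[2])
    return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]])

def jacobian(p, h=1e-5):
    # Finite-difference Jacobian of the forward deformation.
    J = np.empty((3, 3))
    for i in range(3):
        e = np.zeros(3); e[i] = h
        J[:, i] = (deform(p + e) - deform(p - e)) / (2 * h)
    return J

def trace(origin, direction, safety=1.25, max_steps=256, eps=1e-4):
    d = direction / np.linalg.norm(direction)
    p = origin.copy()   # on the z axis, so preimage == origin
    for _ in range(max_steps):
        dist = sdf(p)
        if dist < eps:
            return p    # hit point in undeformed space
        v = np.linalg.solve(jacobian(p), d)   # curved-ray tangent dp/dt
        # Conservative Euler step: never step farther than the SDF allows,
        # shrunk by a crude safety factor against curve-following error.
        p = p + (dist / safety) * v / np.linalg.norm(v)
    return None         # miss

hit = trace(np.array([0.0, 0.0, -3.0]), np.array([0.15, 0.0, 1.0]))
print("undeformed hit:", hit)
print("deformed hit:  ", None if hit is None else deform(hit))
```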

Authors/Presenter(s): Dario Seyb, Dartmouth College, United States of America
Alec Jacobson, University of Toronto, Canada
Derek Nowrouzezahrai, McGill University, Canada
Wojciech Jarosz, Dartmouth College, United States of America


Differentiable Surface Splatting for Point-based Geometry Processing

Abstract: We propose Differentiable Surface Splatting (DSS), a high-fidelity differentiable renderer for point clouds. Gradients for point locations and normals are carefully designed to handle discontinuities of the rendering function. Regularization terms are introduced to ensure uniform distribution of the points on the underlying surface. We demonstrate applications of DSS to inverse rendering for geometry synthesis and denoising, where large scale topological changes, as well as small scale detail modifications, are accurately and robustly handled without requiring explicit connectivity, outperforming state-of-the-art techniques.
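To give a feel for the smooth part of such gradients, the toy below renders a single 2D Gaussian splat, derives the analytic gradient of an L2 image loss with respect to the splat position, and recovers a target position by gradient descent. DSS itself uses screen-space EWA splats and carefully designed gradients at discontinuities; the footprint, resolution, and learning rate here are hypothetical stand-ins.

```python
import numpy as np

H = W = 32
ys, xs = np.mgrid[0:H, 0:W].astype(float)   # pixel coordinates

def render(p, sigma=5.0):
    # Gaussian footprint centred at the 2D splat position p = (x, y).
    d2 = (xs - p[0]) ** 2 + (ys - p[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def loss_and_grad(p, target, sigma=5.0):
    img = render(p, sigma)
    diff = img - target
    # Chain rule through the Gaussian: d img / d p.
    dimg_dx = img * (xs - p[0]) / sigma ** 2
    dimg_dy = img * (ys - p[1]) / sigma ** 2
    grad = np.array([(2 * diff * dimg_dx).sum(), (2 * diff * dimg_dy).sum()])
    return (diff ** 2).sum(), grad

target = render(np.array([20.0, 12.0]))   # stand-in for a reference image
p = np.array([8.0, 8.0])                  # initial splat position
for _ in range(300):
    loss, g = loss_and_grad(p, target)
    p = p - 0.05 * g                      # gradient descent on the position
print("recovered position:", p)           # approaches (20, 12)
```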

Authors/Presenter(s): Yifan Wang, ETH Zürich, Switzerland
Felice Serena, ETH Zürich, Switzerland
Shihao Wu, ETH Zürich, Switzerland
Cengiz Oztireli, ETH Zürich, Disney Research, Switzerland
Olga Sorkine-Hornung, ETH Zürich, Switzerland


The Camera Offset Space: Real-time Potentially Visible Set Computations for Streaming Rendering

Abstract: Potential visibility has historically been important whenever rendering performance was insufficient. With the rise of virtual reality, rendering power may once again be insufficient, e.g., for integrated graphics of head-mounted displays. To tackle the issue of efficient potential visibility computations on modern graphics hardware, we introduce the camera offset space (COS). In contrast to traditional visibility computations, which determine which pixels are covered by an object under all potential viewpoints, the COS describes under which camera movement a sample location is covered by a triangle. In this way, the COS opens up a new set of possibilities for visibility computations. By evaluating the pairwise relations of triangles in the COS, we show how to efficiently determine occluded triangles. Constructing the COS for all pixels of a rendered view leads to a complete potentially visible set (PVS) for complex scenes. By fusing triangles to larger occluders, including locations between pixel centers, and considering camera rotations, we describe an exact PVS algorithm that includes all viewing directions inside a view cell. Implementing the COS is a combination of real-time rendering and compute steps. We provide the first GPU PVS implementation that works without preprocessing, on-the-fly, on unconnected triangles. This opens the door to a new approach to rendering for virtual reality head-mounted displays and server-client settings for streaming 3D applications such as video games.
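A brute-force reference helps make the PVS definition concrete: a triangle belongs to the PVS of a view cell if some primary ray from some camera offset inside the cell hits it first. The COS answers this analytically per sample; the sketch below instead samples translational offsets and ray-casts a hypothetical two-triangle scene, so it shows the quantity being computed rather than the authors' algorithm.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore intersection; returns hit distance t or None.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

# A near occluder and a larger far wall behind it (invented scene).
tris = {
    "occluder": (np.array([-1.0, -1, 2]), np.array([1.0, -1, 2]), np.array([0.0, 1, 2])),
    "far wall": (np.array([-3.0, -3, 6]), np.array([3.0, -3, 6]), np.array([0.0, 3, 6])),
}

def pvs(cell_half=0.5, n_offsets=64, res=16):
    visible = set()
    rng = np.random.default_rng(0)
    for _ in range(n_offsets):
        # Random camera offset inside the view cell (translations only;
        # the paper also handles rotations and locations between pixels).
        o = np.array([*rng.uniform(-cell_half, cell_half, 2), 0.0])
        for py in np.linspace(-0.4, 0.4, res):
            for px in np.linspace(-0.4, 0.4, res):
                d = np.array([px, py, 1.0])
                d /= np.linalg.norm(d)
                best, best_name = np.inf, None
                for name, tri in tris.items():
                    t = ray_triangle(o, d, *tri)
                    if t is not None and t < best:
                        best, best_name = t, name
                if best_name:
                    visible.add(best_name)   # nearest hit only
    return visible

print("PVS for the view cell:", pvs())   # both triangles, for this cell size
```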

Authors/Presenter(s): Jozef Hladky, Max Planck Institute for Informatics, Germany
Hans-Peter Seidel, Max Planck Institute for Informatics, Germany
Markus Steinberger, TU Graz, Max Planck Institute for Informatics, Austria

