• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Wednesday, November 20th
Time: 9:00am - 10:45am
Venue: Plaza Meeting Room P2
Session Chair(s): Ling-Qi Yan, UC Santa Barbara, United States of America


GradNet: Unsupervised Deep Screened Poisson Reconstruction for Gradient-Domain Rendering

Abstract: Monte Carlo (MC) methods for light transport simulation are flexible and general but typically suffer from high variance and slow convergence. Gradient-domain rendering alleviates this problem by additionally generating image gradients and reformulating rendering as a screened Poisson image reconstruction problem. In this paper, we propose a novel and practical deep-learning-based approach to improve the quality and performance of the reconstruction. The core of our approach is a multi-branch auto-encoder, termed GradNet, which learns an end-to-end mapping from a noisy input image and its corresponding image gradients to a high-quality image with low variance. Once trained, our network is fast to evaluate and requires no manual parameter tweaking. Because preparing ground-truth images for training is difficult, we design and train our network in a completely unsupervised manner, learning directly from the input data. This is the first solution to incorporate unsupervised deep learning into the gradient-domain rendering framework. The loss function is defined as an energy function comprising a data fidelity term and a gradient fidelity term. To further reduce the noise of the reconstructed image, the loss is reinforced with a regularizer constructed from selected rendering-specific features. We demonstrate that our method improves reconstruction quality for a diverse set of scenes, and that reconstructing a high-resolution image takes far less than one second on a recent GPU.
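
The unsupervised objective is the part of the abstract that is easiest to make concrete. Below is a minimal PyTorch sketch of a screened-Poisson-style loss combining a data fidelity term and a gradient fidelity term, as described above; the weight alpha, the choice of L1, and the finite_diff helper are illustrative assumptions rather than the paper's actual formulation, and the feature-based regularizer is omitted.

    import torch
    import torch.nn.functional as F

    def finite_diff(img):
        # Forward differences of a B x C x H x W image along x and y.
        dx = img[:, :, :, 1:] - img[:, :, :, :-1]
        dy = img[:, :, 1:, :] - img[:, :, :-1, :]
        return dx, dy

    def screened_poisson_loss(recon, noisy, grad_x, grad_y, alpha=0.2):
        # Data fidelity: stay close to the noisy MC primal image.
        data_term = F.l1_loss(recon, noisy)
        # Gradient fidelity: finite differences of the reconstruction should
        # match the MC-estimated image gradients (assumed to have the same
        # shape as the image, hence the crop to matching size).
        dx, dy = finite_diff(recon)
        grad_term = (F.l1_loss(dx, grad_x[:, :, :, :-1]) +
                     F.l1_loss(dy, grad_y[:, :, :-1, :]))
        return alpha * data_term + grad_term

Because both terms depend only on the rendered input and its gradients, no ground-truth image is needed, which is what makes the training unsupervised.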

Authors/Presenter(s): Jie Guo, State Key Lab for Novel Software Technology, Nanjing University, China
Mengtian Li, State Key Lab for Novel Software Technology, Nanjing University, China
Quewei Li, State Key Lab for Novel Software Technology, Nanjing University, China
Yuting Qiang, State Key Lab for Novel Software Technology, Nanjing University, China
Bingyang Hu, State Key Lab for Novel Software Technology, Nanjing University, China
Yanwen Guo, State Key Lab for Novel Software Technology, Nanjing University, China
Ling-Qi Yan, University of California, Santa Barbara, United States of America


Adversarial Monte Carlo Denoising with Conditioned Auxiliary Feature Modulation

Abstract: Denoising Monte Carlo renderings at very low sample rates remains a major challenge in photo-realistic rendering research. Many previous methods, both regression-based and learning-based, have been explored to achieve better rendering quality at lower computational cost. However, most of these methods rely on handcrafted optimization objectives, which lead to artifacts such as blur and unfaithful details. In this paper, we present an adversarial approach to denoising Monte Carlo renderings. Our key insight is that generative adversarial networks can help denoiser networks produce more realistic high-frequency details and global illumination by learning the distribution of a set of high-quality Monte Carlo path-traced images. We also adapt a novel feature modulation method to better utilize auxiliary features, including normal, albedo, and depth. Compared to previous state-of-the-art methods, our approach produces a better reconstruction of the Monte Carlo integral from a few samples, performs more robustly across sample rates, and takes only a second for megapixel images.
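
The "conditioned auxiliary feature modulation" belongs to the family of feature-wise modulation layers (in the spirit of FiLM/SFT). Below is a minimal PyTorch sketch assuming a per-pixel scale-and-shift form; the layer sizes, the single-conv conditioning network, and the channel counts are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class AuxFeatureModulation(nn.Module):
        """Scales and shifts denoiser activations with per-pixel parameters
        predicted from auxiliary buffers (normal, albedo, depth)."""
        def __init__(self, aux_channels, feat_channels):
            super().__init__()
            self.scale = nn.Conv2d(aux_channels, feat_channels, 3, padding=1)
            self.shift = nn.Conv2d(aux_channels, feat_channels, 3, padding=1)

        def forward(self, feat, aux):
            return feat * (1 + self.scale(aux)) + self.shift(aux)

    # Toy usage: 7 auxiliary channels (normal 3 + albedo 3 + depth 1).
    feat = torch.randn(1, 64, 128, 128)   # denoiser activations
    aux = torch.randn(1, 7, 128, 128)     # stacked auxiliary buffers
    feat = AuxFeatureModulation(7, 64)(feat, aux)

Modulating activations with the auxiliary buffers, rather than simply concatenating them at the input, lets the noise-free geometry and material information steer the denoiser at every layer.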

Authors/Presenter(s): Bing Xu, KooLab, Kujiale, China
Junfei Zhang, KooLab, Kujiale, China
Rui Wang, State Key Laboratory of CAD & CG, Zhejiang University, China
Kun Xu, Tsinghua University, China
Yongliang Yang, University of Bath, United Kingdom
Chuan Li, Lambda Labs, Inc., United States of America
Rui Tang, KooLab, Kujiale, China


Learning Generative Models for Rendering Specular Microgeometry

Abstract: Rendering specular material appearance is a core problem of computer graphics. While smooth analytical material models are widely used, the high-frequency structure of real specular highlights requires considering discrete, finite microgeometry. Instead of explicit modeling and simulation of the surface microstructure (which was explored in previous work), we propose a novel direction: learning the high-frequency directional patterns from synthetic or measured examples, by training a generative adversarial network (GAN). A key challenge in applying GAN synthesis to spatially varying BRDFs is evaluating the reflectance for a single location and direction without the cost of evaluating the whole hemisphere. We resolve this using a novel method for partial evaluation of the generator network. We are also able to control large-scale spatial texture using a conditional GAN approach. The benefits of our approach include the ability to synthesize spatially large results without repetition, support for learning from measured data, and evaluation performance independent of the complexity of the dataset synthesis or measurement.
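
The "partial evaluation" idea can be illustrated with a simple fully convolutional generator: because each output sample depends only on a bounded receptive field, a single query needs only a crop of the input rather than the whole domain. The sketch below assumes a valid-padded (unpadded) toy generator and an interior query; the receptive-field bookkeeping and the network itself are illustrative assumptions, and the paper's actual construction is more involved.

    import torch
    import torch.nn as nn

    def eval_single_sample(generator, latent, y, x, rf):
        """Evaluate one output sample of a fully convolutional, valid-padded
        generator without synthesizing the whole output: the sample at (y, x)
        depends only on a (2*rf+1)^2 crop of the latent field, where rf is
        the generator's receptive-field radius."""
        crop = latent[..., y - rf:y + rf + 1, x - rf:x + rf + 1]
        out = generator(crop)          # valid convs shrink the crop to 1x1
        return out[..., 0, 0]

    # Toy usage: three 3x3 valid convs give a receptive-field radius of 3.
    g = nn.Sequential(nn.Conv2d(8, 16, 3), nn.ReLU(),
                      nn.Conv2d(16, 16, 3), nn.ReLU(),
                      nn.Conv2d(16, 4, 3))
    z = torch.randn(1, 8, 256, 256)
    value = eval_single_sample(g, z, y=100, x=100, rf=3)

This is what makes per-(location, direction) BRDF queries affordable: the cost of one query is independent of the size of the full synthesized pattern.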

Authors/Presenter(s): Alexandr Kuznetsov, University of California, San Diego, United States of America
Miloš Hašan, Adobe Research, United States of America
Zexiang Xu, University of California, San Diego, United States of America
Ling-Qi Yan, University of California, Santa Barbara, United States of America
Bruce Walter, Cornell University, United States of America
Nima Kalantari, Texas A&M University, United States of America
Steve Marschner, Cornell University, United States of America
Ravi Ramamoorthi, University of California, San Diego, United States of America


Deep Point Correlation Design

Abstract: Designing point patterns with desired properties can require substantial effort, in both hand-crafted coding and mathematical derivation. Retaining these properties in multiple dimensions or for large numbers of points can be challenging and computationally expensive. To tackle these two issues, we suggest automatically generating scalable point patterns from design goals using deep learning. We phrase pattern generation as a deep composition of weighted, distance-based, unstructured filters. Deep point pattern design then means optimizing over the space of all such compositions according to a user-provided point correlation loss: a small program that measures a pattern's fidelity with respect to its spatial or spectral statistics, linear or non-linear (e.g., radial) projections, or any combination thereof. Our analysis shows that we can emulate a large set of existing patterns (blue-, green-, step-, projective-, and stair-noise, among others), generalize them to countless new combinations in a systematic way, and leverage existing error estimation formulations to generate novel point patterns for a user-provided class of integrand functions. Our point patterns scale favorably to multiple dimensions and numbers of points: we demonstrate nearly 10k points in 10-D produced in one second on one GPU.
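
A "point correlation loss" of the kind described above can be written as a small differentiable program over a pattern's statistics. Below is a minimal PyTorch sketch using the pattern's power spectrum (the periodogram, which is differentiable with respect to point positions); the integer frequency grid, the zero target (a crude stand-in for the low-frequency notch of blue noise), and the squared penalty are illustrative choices, not the paper's.

    import torch

    def power_spectrum(points, freqs):
        """Periodogram P(f) = |sum_j exp(-2*pi*i <f, x_j>)|^2 / N of an
        (N x D) point set on the unit torus, at (F x D) frequency vectors."""
        phase = -2.0 * torch.pi * points @ freqs.T        # N x F
        re, im = torch.cos(phase).sum(0), torch.sin(phase).sum(0)
        return (re ** 2 + im ** 2) / points.shape[0]

    def correlation_loss(points, freqs, target):
        # Penalize deviation of the pattern's spectrum from a target
        # profile, e.g. near-zero low-frequency energy for blue noise.
        return ((power_spectrum(points, freqs) - target) ** 2).mean()

    # Toy usage: push 256 points in 2-D toward low energy at low frequencies.
    pts = torch.rand(256, 2, requires_grad=True)
    f = torch.stack(torch.meshgrid(torch.arange(1., 9.), torch.arange(1., 9.),
                                   indexing="ij"), dim=-1).reshape(-1, 2)
    loss = correlation_loss(pts, f, target=torch.zeros(f.shape[0]))
    loss.backward()

In the paper's framing, such a loss would drive the optimization of the filter composition that generates the pattern, rather than the point positions directly as in this toy example.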

Authors/Presenter(s): Thomas Leimkuehler, MPI Informatik, Germany
Gurprit Singh, MPI Informatik, Germany
Karol Myszkowski, MPI Informatik, Germany
Hans-Peter Seidel, MPI Informatik, Germany
Tobias Ritschel, University College London (UCL), United Kingdom
