Geometry Off the Deep End

Platinum Pass, Full Conference Pass, Full Conference One-Day Pass
Date: Wednesday, November 20th
Time: 2:15pm - 4:00pm
Venue: Plaza Meeting Room P2
Session Chair(s): Wenping Wang, University of Hong Kong, China

RPM-Net: Recurrent Prediction of Motion and Parts from Point Cloud

Abstract: We introduce RPM-Net, a deep learning-based approach that simultaneously infers movable parts and hallucinates their motions from a single, un-segmented, and possibly partial 3D point cloud shape. RPM-Net is a novel Recurrent Neural Network (RNN), composed of an encoder-decoder pair with interleaved Long Short-Term Memory (LSTM) components, which together predict a temporal sequence of point-wise displacements for the input shape. At the same time, the displacements allow the network to learn movable parts, resulting in a motion-based shape segmentation. Recursive application of RPM-Net to the obtained parts can predict finer-level part motions, resulting in a hierarchical object segmentation. Furthermore, we develop a separate network to estimate part mobilities, i.e., per-part motion parameters, from the segmented motion sequence. Both networks learn deep predictive models from a training set that exemplifies a variety of mobilities for diverse objects. We show results of simultaneous motion and part predictions from synthetic and real scans of 3D objects exhibiting a variety of part mobilities, possibly involving multiple movable parts.
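The inference loop described above — recurrently predicting a per-point displacement, then segmenting points by accumulated motion — can be outlined in a minimal pure-Python sketch. All names here are illustrative stand-ins, not the authors' code; in particular, the learned encoder-decoder/LSTM step is replaced by a hypothetical toy predictor:

```python
# Toy sketch of RPM-Net-style inference: a recurrent step predicts a
# per-point displacement; points with nonzero accumulated motion are
# grouped into a movable part. The real model is a learned
# encoder-decoder RNN with LSTM units; this predictor is a stand-in.

def toy_displacement_step(points, state):
    """Stand-in for the learned recurrent step: moves points with x > 0."""
    disp = [(0.1, 0.0, 0.0) if p[0] > 0 else (0.0, 0.0, 0.0) for p in points]
    return disp, state  # the real model would also update LSTM hidden state

def predict_motion(points, num_steps=3, eps=1e-6):
    """Unroll the recurrent step; accumulate per-point displacement."""
    state = None
    total = [0.0] * len(points)
    for _ in range(num_steps):
        disp, state = toy_displacement_step(points, state)
        points = [(p[0] + d[0], p[1] + d[1], p[2] + d[2])
                  for p, d in zip(points, disp)]
        total = [t + sum(abs(c) for c in d) for t, d in zip(total, disp)]
    # Motion-based segmentation: points that moved form the movable part.
    movable = [i for i, t in enumerate(total) if t > eps]
    return points, movable

pts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
moved, part = predict_motion(pts)
print(part)  # → [0, 2]
```

Applying the same procedure recursively to each recovered part, as the abstract describes, would yield a finer segmentation level per pass.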
Authors/Presenter(s): Zihao Yan, Shenzhen University, China; Ruizhen Hu, Shenzhen University, China; Xingguang Yan, Shenzhen University, China; Luanmin Chen, Shenzhen University, China; Oliver van Kaick, Carleton University, Canada; Hao (Richard) Zhang, Simon Fraser University, Canada; Hui Huang, Shenzhen University, China

Learning Adaptive Hierarchical Cuboid Abstractions of 3D Shape Collections

Abstract: Abstracting man-made 3D objects as assemblies of primitives, i.e., shape abstraction, is an important task in 3D shape understanding and analysis. In this paper, we propose an unsupervised learning method for automatically constructing compact and expressive shape abstractions of 3D objects in a class. The key idea of our approach is an adaptive hierarchical cuboid representation that abstracts a 3D shape with a set of parametric cuboids adaptively selected from a hierarchical, multi-level cuboid representation shared by all objects in the class. The adaptive hierarchical cuboid abstraction offers a compact representation for modeling variant shape structures and their coherence at different abstraction levels. Based on this representation, we design a convolutional neural network (CNN) for predicting the parameters of each cuboid in the hierarchical cuboid representation and the adaptive selection mask of cuboids for each input 3D shape. To train the CNN from an unlabeled 3D shape collection, we propose a set of novel loss functions that maximize the approximation quality and compactness of the adaptive hierarchical cuboid abstraction, and present a progressive training scheme to refine the cuboid parameters and the cuboid selection mask effectively. We evaluate the effectiveness of our approach on various 3D shape collections and demonstrate its advantages over an existing cuboid abstraction approach. We also illustrate applications of the resulting adaptive cuboid representations in various shape analysis and manipulation tasks.
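The core idea of the abstract above — a class-wide hierarchy of candidate cuboids from which a per-shape mask selects one "cut", coarse where one box suffices and fine where more detail is needed — can be sketched in pure Python. The hierarchy, names, and mask here are illustrative; the real method predicts cuboid parameters and the selection mask with a CNN trained by unsupervised losses:

```python
# Toy sketch of an adaptive hierarchical cuboid selection: if a node is
# selected by the mask, it stands for its whole subtree; otherwise we
# descend to its children for a finer abstraction. Leaves are always used.

class CuboidNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def select_abstraction(node, mask):
    """Return the cuboid names chosen by the selection mask."""
    if mask.get(node.name, False) or not node.children:
        return [node.name]
    out = []
    for child in node.children:
        out.extend(select_abstraction(child, mask))
    return out

# A small multi-level hierarchy for a chair-like class (illustrative).
hierarchy = CuboidNode("chair", [
    CuboidNode("seat"),
    CuboidNode("back"),
    CuboidNode("legs", [CuboidNode("leg_front"), CuboidNode("leg_rear")]),
])

coarse = select_abstraction(hierarchy, {"chair": True})
fine = select_abstraction(hierarchy, {})
print(coarse)  # → ['chair']
print(fine)    # → ['seat', 'back', 'leg_front', 'leg_rear']
```

The compactness/approximation trade-off in the paper's losses corresponds here to choosing masks that select nodes as high in the hierarchy as the shape's geometry permits.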
Authors/Presenter(s): Chun-Yu Sun, Tsinghua University, Microsoft Research Asia, China; Qian-Fang Zou, University of Science and Technology of China, Microsoft Research Asia, China; Xin Tong, Microsoft Research Asia, China; Yang Liu, Microsoft Research Asia, China

StructureNet: Hierarchical Graph Networks for 3D Shape Generation

Abstract: The ability to generate novel, diverse, and realistic 3D shapes along with associated part semantics and structure is central to many applications requiring high-quality 3D assets or large volumes of realistic training data. A key challenge towards this goal is how to accommodate diverse shape variations, including both continuous deformations of parts as well as structural or discrete alterations which add to, remove from, or modify the shape constituents and compositional structure. Such object structure can typically be organized into a hierarchy of constituent object parts and relationships, represented as a hierarchy of n-ary graphs. We introduce StructureNet, a hierarchical graph network which (i) can directly encode shapes represented as such n-ary graphs, (ii) can be robustly trained on large and complex shape families, and (iii) can be used to generate a great diversity of realistic structured shape geometries. Technically, we accomplish this by drawing inspiration from recent advances in graph neural networks to propose an order-invariant encoding of n-ary graphs, considering jointly both part geometry and inter-part relations during network training. We extensively evaluate the quality of the learned latent spaces for various shape families and show significant advantages over baseline and competing methods. The learned latent spaces enable several structure-aware geometry processing applications, including shape generation and interpolation, shape editing, and shape structure discovery directly from un-annotated images, point clouds, or partial scans.
Authors/Presenter(s): Kaichun Mo, Stanford University, United States of America; Paul Guerrero, University College London, United Kingdom; Li Yi, Stanford University, United States of America; Hao Su, University of California, San Diego, United States of America; Peter Wonka, King Abdullah University of Science and Technology (KAUST), Saudi Arabia; Niloy Mitra, University College London, Adobe, United Kingdom; Leonidas Guibas, Stanford University, Facebook, United States of America

SDM-NET: Deep Generative Network for Structured Deformable Mesh

Abstract: We introduce SDM-NET, a deep generative neural network which produces structured deformable meshes. Specifically, the network is trained to generate a spatial arrangement of closed, deformable mesh parts, which respect the global part structure of a shape collection, e.g., chairs, airplanes, etc. Our key observation is that while the overall structure of a 3D shape can be complex, the shape can usually be decomposed into a set of parts, each homeomorphic to a box, and the finer-scale geometry of the part can be recovered by deforming the box. The architecture of SDM-NET is that of a two-level variational autoencoder (VAE). At the part level, a PartVAE learns a deformable model of part geometries. At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring coherence between global shape structure and surface details. Through extensive experiments and comparisons with state-of-the-art deep generative models of shapes, we demonstrate the superiority of SDM-NET in generating meshes with high visual quality, flexible topology, and meaningful structures, which benefit shape interpolation and other subsequent modeling tasks.
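The key observation above — each part is homeomorphic to a box, so its geometry can be recovered by deforming a box template, and the structure level then places the deformed parts — can be outlined as a pure-Python sketch. The box template, offsets, and assembly function are illustrative; in SDM-NET the deformation is produced by the learned PartVAE and the arrangement by the SP-VAE:

```python
# Toy sketch of "box plus deformation" parts: each part is a unit box
# whose vertices are displaced by a per-part offset field, and the
# structure level translates each deformed part into place. Offsets and
# placements are toy values standing in for learned VAE decoders.

UNIT_BOX = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def deform_box(offsets):
    """Recover part geometry by displacing the box template's vertices."""
    assert len(offsets) == len(UNIT_BOX)
    return [(vx + ox, vy + oy, vz + oz)
            for (vx, vy, vz), (ox, oy, oz) in zip(UNIT_BOX, offsets)]

def assemble(parts):
    """Structure level: place each deformed part at its translation."""
    shape = []
    for translation, offsets in parts:
        tx, ty, tz = translation
        shape.append([(x + tx, y + ty, z + tz)
                      for x, y, z in deform_box(offsets)])
    return shape

zero = [(0.0, 0.0, 0.0)] * 8
seat = ((0.0, 0.0, 0.0), zero)
back = ((0.0, 0.0, 1.0), zero)  # second part stacked above the first
chair = assemble([seat, back])
print(len(chair), len(chair[0]))  # → 2 parts, 8 vertices each
```

The two-level VAE in the paper factorizes exactly these two decisions: PartVAE models the per-part offset field, SP-VAE models which parts exist and how they are arranged.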
Authors/Presenter(s): Lin Gao, Institute of Computing Technology, Chinese Academy of Sciences, China; Jie Yang, Institute of Computing Technology, Chinese Academy of Sciences, China; Tong Wu, Institute of Computing Technology, Chinese Academy of Sciences, China; Yu-Jie Yuan, Institute of Computing Technology, Chinese Academy of Sciences, China; Hongbo Fu, School of Creative Media, City University of Hong Kong, China; Yu-Kun Lai, Cardiff University, China; Hao Zhang, Simon Fraser University, Canada