Looking & Sounding Great
Passes: Platinum Pass, Full Conference Pass, Full Conference One-Day Pass
Date: Wednesday, November 20th
Time: 9:00am - 10:45am
Venue: Plaza Meeting Room P1
Session Chair(s): Jehee Lee, Seoul National University, Movement Research Lab., South Korea

Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation

Abstract: Authoring dynamic garment shapes for character animation driven by body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing of dense frames with controllers) or lack keyframe-level control (i.e., physically based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach to semi-automatic authoring of garment animation, wherein the user provides the desired garment shape for one keyframe, while our system infers a latent representation of its motion-independent intrinsic parameters (e.g., gravity and cloth materials). Given new character motions, this latent representation allows a plausible garment animation to be generated automatically at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn the intrinsic garment space with a motion-driven autoencoder network, where the encoder maps garment shapes to the intrinsic space conditioned on body motion, while the decoder acts as a differentiable simulator that generates garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system significantly improves the current garment authoring workflow via an interactive user interface. Compared with the standard CG pipeline, our system reduces the ratio of required keyframes from 20% to 1-2%.

Authors/Presenter(s): Tuanfeng Wang, Mihoyo Inc., China; Tianjia Shao, University of Leeds, United Kingdom; Kai Fu, Mihoyo Inc., China; Niloy Mitra, University College London (UCL) / Adobe Research, United Kingdom
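As a rough illustration of the motion-driven autoencoder described above, the sketch below conditions both the encoder and the decoder on a body-motion descriptor, so a single authored keyframe yields an intrinsic code that can be decoded against new motions. This is a minimal sketch assuming PyTorch; the layer sizes, tensor shapes, and concatenation-based conditioning are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of a motion-conditioned garment autoencoder (assumed
# architecture for illustration; not the authors' network).
import torch
import torch.nn as nn

class MotionConditionedAE(nn.Module):
    def __init__(self, garment_dim=3000, motion_dim=128, latent_dim=32):
        super().__init__()
        # Encoder: garment shape + body motion -> intrinsic latent code
        self.encoder = nn.Sequential(
            nn.Linear(garment_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        # Decoder: intrinsic code + body motion -> garment shape
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, garment_dim))

    def forward(self, garment, motion):
        z = self.encoder(torch.cat([garment, motion], dim=-1))
        recon = self.decoder(torch.cat([z, motion], dim=-1))
        return recon, z

# Authoring sketch: encode the artist's keyframe once to get its intrinsic
# code z, then reuse z to decode garment shapes for new body motions.
model = MotionConditionedAE()
keyframe_garment = torch.randn(1, 3000)  # placeholder flattened vertex data
keyframe_motion = torch.randn(1, 128)    # placeholder motion descriptor
_, z = model(keyframe_garment, keyframe_motion)
new_motion = torch.randn(1, 128)
new_garment = model.decoder(torch.cat([z, new_motion], dim=-1))

Because the decoder is differentiable, a reconstruction loss on authored keyframes could be backpropagated end to end during training, which is what lets a decoder of this kind stand in for a simulator at interactive rates.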
Biomimetic Eye Modeling & Deep Neuromuscular Oculomotor Control

Abstract: We present a novel, biomimetic model of the eye for realistic virtual human animation. We also introduce a deep learning approach to oculomotor control that is compatible with our biomechanical eye model. Our eye model consists of the following functional components: (i) submodels of the six extraocular muscles that actuate realistic eye movements, (ii) an iris submodel, actuated by pupillary muscles, that accommodates to incoming light intensity, (iii) a corneal submodel and a deformable, ciliary-muscle-actuated lens submodel, which refract incoming light rays for focal accommodation, and (iv) a retina with a multitude of photoreceptors arranged in a biomimetic, foveated distribution. The light intensity captured by the photoreceptors is computed using ray tracing from the photoreceptor positions through the finite-aperture pupil into the 3D virtual environment, and the visual information is output by the eye via an optic nerve vector. Our oculomotor control system includes a foveation controller implemented as a locally connected, irregular Deep Neural Network (DNN) that conforms to the nonuniform retinal photoreceptor distribution, and a neuromuscular motor controller implemented as a fully connected DNN, plus auxiliary Shallow Neural Networks (SNNs) that control the accommodation of the pupil and lens. The DNNs are trained offline through deep learning from data synthesized by the eye model itself. Once trained, the oculomotor control system operates robustly and efficiently online. It innervates the intraocular muscles to perform illumination and focal accommodation and the extraocular muscles to produce natural eye movements that foveate and pursue moving visual targets. We additionally demonstrate the binocular operation of our eye model within a recently introduced sensorimotor control framework involving an anatomically accurate biomechanical human musculoskeletal model.

Authors/Presenter(s): Masaki Nakada, University of California, Los Angeles, United States of America; Arjun Lakshmipathy, University of California, Los Angeles, United States of America; Tao Zhou, University of California, Los Angeles, United States of America; Honglin Chen, University of California, Los Angeles, United States of America; Xin Ling, University of California, Los Angeles, United States of America; Demetri Terzopoulos, University of California, Los Angeles, United States of America

Acoustic texture rendering for extended sources in complex scenes

Abstract: Extended stochastic sources, like falling rain or a flowing waterway, provide an immersive ambience in virtual environments. In complex scenes, the rendered sound should vary naturally with listener position, differing not only in overall loudness but also in texture, to capture the indistinct murmur of a faraway brook versus the bright babbling of one up close. Modeling an ambient sound as a collection of random events, such as individual raindrop impacts or water bubble oscillations, we view this variation as a change in the statistical distribution of events heard by the listener: the arrival rate of nearby, louder events relative to more distant or occluded, quieter ones. Reverberation and edge diffraction from scene geometry multiply and mix events more extensively than in an empty scene and introduce salient spatial variation in texture. We formalize the notion of acoustic texture by introducing the event loudness density (ELD), which relates the rapidity of received events to their loudness. To model spatial variation in texture, the ELD is made a function of listener location in the scene. We show that this ELD field can be extracted from a single wave simulation for each extended source and rendered flexibly using a granular synthesis pipeline, with grains derived procedurally or from recordings. Our system yields believable, real-time changes in acoustic texture as the listener moves, driven by sound propagation in the scene.

Authors/Presenter(s): Zechen Zhang, Cornell University, United States of America; Nikunj Raghuvanshi, Microsoft Research, United States of America; John Snyder, Microsoft Research, United States of America; Steve Marschner, Cornell University, United States of America
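The ELD-driven rendering idea lends itself to a small granular-synthesis toy: sample event arrival times from a rate, draw a loudness for each event from a position-dependent distribution, and sum scaled grains. This is only a sketch under assumed statistics (Poisson-like arrivals, a decaying-sine grain); the paper's extraction of the ELD field from wave simulation is not reproduced here.

# Toy granular synthesis driven by assumed event statistics (rate + loudness);
# a stand-in for sampling the paper's event loudness density (ELD) field.
import numpy as np

SR = 44100  # audio sample rate in Hz

def grain(duration=0.02, freq=900.0):
    """A single synthetic 'droplet' grain: a decaying sine burst."""
    t = np.arange(int(duration * SR)) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-t / 0.005)

def render_texture(event_rate, loudness_sampler, seconds=2.0, seed=0):
    """Sum randomly timed grains whose arrival rate and loudness follow the
    assumed statistics at a given listener position."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(seconds * SR))
    g = grain()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / event_rate)  # Poisson-like event arrivals
        start = int(t * SR)
        if start + len(g) >= len(out):
            break
        out[start:start + len(g)] += loudness_sampler(rng) * g
    return out

# Near the source: frequent, loud events; far away: sparse, quiet events.
near = render_texture(event_rate=400, loudness_sampler=lambda r: r.uniform(0.5, 1.0))
far = render_texture(event_rate=60, loudness_sampler=lambda r: r.uniform(0.05, 0.2))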
Redirected Smooth Mappings for Multi-User Real Walking in VR

Abstract: We propose a novel technique to provide multi-user real walking experiences with physical interactions in VR applications. In our system, multiple users walk freely while navigating a large virtual environment within a smaller physical workspace. These users can interact with other real users or with physical props at the same physical locations. The key to our method is a redirected smooth mapping that incorporates the redirected walking technique to warp the input virtual scene with small bends and low distance distortion. Users possess a wide field of view to explore the mapped virtual environment while being redirected in the real workspace. To keep multiple users out of the overlapping regions of the mapped virtual scenes, we present an automatic collision avoidance technique based on dynamic virtual avatars. These avatars naturally appear, move, and disappear, exerting as little influence as possible on users’ walking experiences. We evaluate our multi-user real walking system through formative user studies and demonstrate the capability and practicality of our technique in two multi-user applications.

Authors/Presenter(s): Zhi-Chao Dong, University of Science and Technology of China, China; Xiao-Ming Fu, University of Science and Technology of China, China; Zeshi Yang, University of Science and Technology of China, China; Ligang Liu, University of Science and Technology of China, China

Deep Iterative Frame Interpolation for Full-frame Video Stabilization

Abstract: Video stabilization is a fundamental and important technique for producing higher-quality videos. Prior works have extensively explored video stabilization, but most of them crop the frame boundaries and introduce moderate levels of distortion. We present a novel deep approach to video stabilization that can generate video frames without cropping and with low distortion. The proposed framework utilizes frame interpolation techniques to generate in-between frames, leading to reduced inter-frame jitter. When applied iteratively, the stabilization effect becomes stronger. A major advantage is that our framework is end-to-end trainable in an unsupervised manner. In addition, our method is able to run in near real time (15 fps). To the best of our knowledge, this is the first work to propose an unsupervised deep approach to full-frame video stabilization. We show the advantages of our method through quantitative and qualitative evaluations compared to state-of-the-art methods.

Authors/Presenter(s): Jinsoo Choi, KAIST, South Korea; In So Kweon, KAIST, South Korea
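One plausible reading of the iterative interpolation scheme described above: repeatedly replace each frame with an interpolation of its temporal neighbors, damping inter-frame jitter a little more on each pass while keeping the full frame. The sketch below uses naive pixel averaging as a placeholder interpolator; the paper instead relies on a learned deep frame-interpolation model, and the exact loop structure here is an assumption for illustration.

# Schematic of iterative frame interpolation for stabilization. The averaging
# 'interpolator' is a placeholder for a learned interpolation network.
import numpy as np

def interpolate_midframe(prev_frame, next_frame):
    """Placeholder mid-frame interpolator: a deep model would align content
    across frames rather than blend pixels directly."""
    return 0.5 * (prev_frame + next_frame)

def stabilize(frames, iterations=3):
    """Each pass replaces frame t with an interpolation of frames t-1 and t+1,
    progressively reducing inter-frame jitter without cropping the frame."""
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    for _ in range(iterations):
        smoothed = [frames[0]]
        for t in range(1, len(frames) - 1):
            smoothed.append(interpolate_midframe(frames[t - 1], frames[t + 1]))
        smoothed.append(frames[-1])
        frames = smoothed
    return frames

# Example on a synthetic 'video' of 30 random 64x64 grayscale frames.
video = [np.random.rand(64, 64) for _ in range(30)]
stable = stabilize(video, iterations=3)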