• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Tuesday, November 19th
Time: 2:15pm - 4:00pm
Venue: Plaza Meeting Room P1
Session Chair(s): Taku Komura, University of Edinburgh


Learning predict-and-simulate policies from unorganized human motion data

Abstract: The goal of this research is to create physically simulated biped characters equipped with a rich repertoire of motor skills. The user can control the characters interactively by modulating their control objectives. The characters can interact physically with each other and with the environment. We present a novel network-based algorithm that learns control policies from unorganized, minimally labeled human motion data. The network architecture for interactive character animation incorporates an RNN-based motion generator into a DRL-based controller for physics simulation and control. The motion generator guides forward dynamics simulation by feeding a sequence of future motion frames to track. The rich future prediction facilitates policy learning from large training data sets. We will demonstrate the effectiveness of our approach with biped characters that learn a variety of dynamic motor skills from large, unorganized data and react to unexpected perturbations beyond the scope of the training data.
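
The loop below is a minimal illustrative sketch (in Python/NumPy, not the authors' code) of the predict-and-simulate structure described in the abstract: a motion generator emits a short window of future frames, and a tracking policy conditioned on those frames drives the simulated character. All class names, dimensions, and the toy dynamics update are assumptions for illustration.

    import numpy as np

    class MotionGenerator:
        """Stand-in for the RNN motion generator: emits a window of future frames."""
        def __init__(self, frame_dim=30, horizon=6):
            self.frame_dim, self.horizon = frame_dim, horizon
            self.phase = 0.0
        def predict(self):
            # Toy periodic signal standing in for learned RNN predictions.
            t = self.phase + np.arange(1, self.horizon + 1) * 0.033
            self.phase += 0.033
            return np.sin(np.outer(t, np.ones(self.frame_dim)))  # (horizon, frame_dim)

    class TrackingPolicy:
        """Stand-in for the DRL policy: maps (sim state, future frames) to joint targets."""
        def __init__(self, state_dim, frame_dim, horizon, act_dim):
            rng = np.random.default_rng(0)
            self.W = rng.normal(scale=0.01, size=(act_dim, state_dim + frame_dim * horizon))
        def act(self, sim_state, future_frames):
            x = np.concatenate([sim_state, future_frames.ravel()])
            return np.tanh(self.W @ x)  # normalized joint targets

    def run(steps=10, state_dim=60, act_dim=20):
        gen = MotionGenerator()
        policy = TrackingPolicy(state_dim, gen.frame_dim, gen.horizon, act_dim)
        state = np.zeros(state_dim)
        for _ in range(steps):
            frames = gen.predict()              # future reference frames to track
            action = policy.act(state, frames)  # policy conditioned on the prediction
            # A real forward dynamics step would go here; a toy update keeps this runnable.
            state = 0.99 * state + 0.01 * np.pad(action, (0, state_dim - act_dim))
        return state

    if __name__ == "__main__":
        print(run()[:5])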

Authors/Presenter(s): Soohwan Park, Seoul National University, South Korea
Hoseok Ryu, Seoul National University, South Korea
Seyoung Lee, Seoul National University, South Korea
Sunmin Lee, Seoul National University, South Korea
Jehee Lee, Seoul National University, South Korea


DReCon: Data-Driven Responsive Control of Physics-Based Characters

Abstract: Interactive control of self-balancing, physically simulated humanoids is a long-standing problem in the field of real-time character animation. While physical simulation guarantees realistic interactions in the virtual world, simulated characters can appear unnatural if they perform unusual movements in order to maintain balance. As a result, responsiveness to user control, runtime performance, and motion diversity have often been sacrificed in exchange for motion quality. Recent work in the field of deep reinforcement learning has shown that training physically simulated characters to follow motion capture clips can yield high-quality tracking results. We propose a two-step approach for building responsive simulated character controllers from unstructured motion capture data. First, meaningful features from the data, such as movement direction, heading direction, speed, and locomotion style, are interactively specified and drive a kinematic character controller implemented using motion matching. Second, reinforcement learning is used to train a simulated character controller that is general enough to track the entire distribution of motion that can be generated by the kinematic controller. Our design emphasizes responsiveness to user input, visual quality, and low runtime cost for application in video games.
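
As a rough sketch of the first step (the motion-matching kinematic controller), the following Python/NumPy snippet searches a feature database for the frame that best matches the user-specified features named in the abstract. The feature layout, weights, and toy database are illustrative assumptions, not the authors' implementation; the second step would then train an RL policy to track the motion this controller plays back.

    import numpy as np

    def build_feature(move_dir, heading_dir, speed, style_onehot):
        # Concatenate the user-controllable features named in the abstract.
        return np.concatenate([move_dir, heading_dir, [speed], style_onehot])

    def motion_match(query, database_features, feature_weights):
        # Return the index of the database frame whose features best match the query.
        diff = (database_features - query) * feature_weights
        return int(np.argmin(np.einsum("ij,ij->i", diff, diff)))

    # Toy database: 1000 frames, features = 2D move dir + 2D heading dir + speed + 2 styles.
    rng = np.random.default_rng(1)
    db = rng.normal(size=(1000, 7))
    weights = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 2.0, 2.0])

    query = build_feature(move_dir=[0.0, 1.0], heading_dir=[0.0, 1.0],
                          speed=1.2, style_onehot=[1.0, 0.0])
    print("best matching frame:", motion_match(query, db, weights))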

Authors/Presenter(s): Kevin Bergamin, McGill University, Ubisoft La Forge, Canada
Simon Clavet, Ubisoft La Forge, Canada
Daniel Holden, Ubisoft La Forge, Canada
James Richard Forbes, McGill University, Canada


Learning Body Shape Variation in Physics-based Characters

Abstract: Recently, deep reinforcement learning (DRL) has attracted great attention in designing controllers for physics-based characters. Despite the recent success of DRL, the learned controller is typically viable only for a single character; changes in body size and proportions require learning a controller from scratch. In this paper, we present a new method of learning parametric controllers for body shape variation. A single parametric controller enables us to simulate and control various characters with different heights, weights, and body proportions. Users can create new characters by specifying body shape parameters and control them immediately. Our characters can also change their body shapes on the fly during simulation. The key to the success of our approach is adaptive sampling of body shapes, which addresses the challenges of learning parametric controllers by relying on a marginal value function that measures the control capability of each body shape. We demonstrate parametric controllers for various physically simulated characters such as bipeds, quadrupeds, and underwater animals.
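
The snippet below sketches, under assumed names and distributions, how adaptive body-shape sampling guided by a marginal value estimate could look in Python/NumPy: shapes the current policy handles poorly are sampled more often during training. The marginal value function here is a stub, not the learned one from the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    def marginal_value(shape_params):
        # Stub for the learned marginal value: how well the current policy controls
        # a character with these (height, weight, proportion) parameters.
        height, weight, proportion = shape_params
        return float(np.exp(-((height - 1.0) ** 2 + (weight - 1.0) ** 2 + (proportion - 1.0) ** 2)))

    def sample_body_shapes(candidates, n_samples):
        # Prefer shapes with low marginal value, focusing training on the part of
        # the body-shape space the policy currently handles worst.
        values = np.array([marginal_value(c) for c in candidates])
        weights = 1.0 - values / (values.max() + 1e-8)
        probs = weights / weights.sum()
        idx = rng.choice(len(candidates), size=n_samples, p=probs)
        return candidates[idx]

    candidates = rng.uniform(low=[0.7, 0.6, 0.8], high=[1.4, 1.6, 1.2], size=(200, 3))
    print(sample_body_shapes(candidates, n_samples=8))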

Authors/Presenter(s): Jungdam Won, Seoul National University, South Korea
Jehee Lee, Seoul National University, South Korea


SoftCon: Simulation and Control of Soft-Bodied Animals with Biomimetic Actuators

Abstract: We present a novel and general framework for the design and control of underwater soft-bodied animals. The whole body of an animal, consisting of soft tissues, is modeled by tetrahedral and triangular FEM meshes. The contraction of muscles embedded in the soft tissues actuates the body and limbs to move. We present a novel muscle excitation model that mimics the anatomy of muscular hydrostats and their muscle excitation patterns. Our deep reinforcement learning algorithm, equipped with the muscle excitation model, successfully learns control policies for soft-bodied animals that can be physically simulated in real time, controlled interactively, and remain resilient to external perturbations. We demonstrate the effectiveness of our approach with various simulated animals, including octopuses, lampreys, starfish, and stingrays. They learn to swim and perform goal-driven maneuvers, including octopus grasping and escaping from a bottle. We also implemented a simple user interface that allows users to easily create their own creatures.
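
As a small illustration of the kind of muscle excitation pattern described above, the Python/NumPy function below produces a traveling wave of excitation levels along the body, of the sort that drives undulatory swimming. The FEM simulation and the learned policy that would output these excitations are omitted, and all parameter names and values are assumptions.

    import numpy as np

    def excitation_wave(n_segments, t, amplitude=1.0, wavelength=0.5, period=1.0):
        # Excitation level in [0, amplitude] for each muscle segment along the body,
        # forming a wave that travels head-to-tail.
        s = np.linspace(0.0, 1.0, n_segments)           # normalized position along the body
        phase = 2.0 * np.pi * (s / wavelength - t / period)
        return 0.5 * amplitude * (1.0 + np.sin(phase))  # rectified to [0, amplitude]

    for t in (0.0, 0.25, 0.5):
        print(np.round(excitation_wave(8, t), 2))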

Authors/Presenter(s): Sehee Min, Seoul National University, South Korea
Jungdam Won, Seoul National University, South Korea
Seunghwan Lee, Seoul National University, South Korea
Jungnam Park, Seoul National University, South Korea
Jehee Lee, Seoul National University, South Korea


Neural State Machine for Character-Scene Interactions

Abstract: We propose the Neural State Machine, a novel data-driven framework to guide characters to achieve goal-driven actions with precise scene interactions. Even a seemingly simple task such as sitting on a chair is notoriously hard to model with supervised learning. The difficulty arises because such a task involves complex planning with periodic and non-periodic motions reacting to the scene geometry in order to precisely position and orient the character. Our proposed deep auto-regressive framework enables modeling of multi-modal scene interaction behaviors purely from data. Given high-level instructions such as the goal location and the action to be launched there, our system computes a series of movements and transitions to reach the goal in the desired state. To allow characters to adapt to a wide range of geometry, such as different shapes of furniture and obstacles, we incorporate an efficient data augmentation scheme that randomly switches the 3D geometry while maintaining the context of the original motion. To increase precision in reaching the goal at runtime, we introduce a control scheme that combines egocentric inference and goal-centric inference. We demonstrate the versatility of our model with various scene interaction tasks, such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking up and carrying objects, all generated in real time from a single model.
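
The short Python/NumPy sketch below illustrates one plausible way to combine egocentric and goal-centric inference at runtime, as described in the abstract: the goal-centric branch is weighted more strongly as the character approaches the goal, where precise positioning matters most. The two predictor stubs and the weighting schedule are illustrative assumptions, not the published network.

    import numpy as np

    def egocentric_prediction(state):
        # Stub: next-frame pose predicted in the character's own frame.
        return state + 0.1

    def goal_centric_prediction(state, goal):
        # Stub: next-frame pose predicted relative to the goal (e.g. a chair seat).
        return state + 0.2 * (goal - state)

    def blended_step(state, goal):
        dist = np.linalg.norm(goal - state)
        w_goal = np.clip(1.0 - dist / 5.0, 0.0, 1.0)   # more goal-centric when close
        return (1.0 - w_goal) * egocentric_prediction(state) \
             + w_goal * goal_centric_prediction(state, goal)

    state, goal = np.zeros(3), np.array([2.0, 0.0, 1.0])
    for _ in range(5):
        state = blended_step(state, goal)
    print(np.round(state, 3))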

Authors/Presenter(s): Sebastian Starke, University of Edinburgh, United Kingdom
He Zhang, University of Edinburgh, United Kingdom
Taku Komura, University of Edinburgh, United Kingdom
Jun Saito, Adobe Research, United States of America

