• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Monday, November 18th
Time: 11:00am - 12:45pm
Venue: Plaza Meeting Room P3


Fast Terrain-Adaptive Motion Generation Using Deep Neural Networks

Speaker(s): Moonwon Yu, NCSOFT, South Korea
Byungjun Kwon, NCSOFT, South Korea
Jongmin Kim, Kangwon National University, South Korea
Shinjin Kang, Hongik University, South Korea
Hanyoung Jang, NCSOFT, South Korea

Moonwon Yu completed his Master's degree in Computer Science and Engineering, focusing on EEG-based authentication systems and HCI. In 2016, he joined the NCSOFT Motion AI team out of an interest in AI research. His main research area is data-driven motion generation, specifically motion style transfer, data-driven inverse kinematics, and motion control.

No information provided.

I am currently an Assistant Professor in the Department of Computer Science at Kangwon National University (KNU), Korea. I was a research professor at the Institute for Embedded Software, Hanyang University, in 2015 and a mocap software developer and researcher at Weta Digital in 2016; I also worked with Weta Digital as a mocap consultant in 2017. My current research interests include computer graphics and animation, deep learning, and numerical optimization.

No information provided.

Dr. Hanyoung Jang is the Team Leader of the Motion AI team at NCSOFT, a global game company. He has conducted research in robotics, general-purpose computing on graphics processing units (GPGPU), computer graphics, and AI. Currently, his primary research interest lies in creating natural character animation through deep learning.

Description: Our neural network system generates terrain-adaptive motions for a large number of game characters, and the generated motions retain human nuances.
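As a rough illustration of this style of system (a minimal sketch under our own assumptions, not the speakers' actual architecture), the snippet below shows how a small feed-forward network could map a batch of character states plus terrain height samples to next-frame poses, so that a single inference call animates a whole crowd. All names and dimensions are hypothetical placeholders.

```python
# Sketch only: a feed-forward network mapping (character state, terrain
# heights) -> next-frame pose, evaluated in batch over many characters.
# Dimensions below are assumptions, not values from the talk.
import numpy as np

STATE_DIM = 64     # assumed: joint rotations + root velocity
TERRAIN_DIM = 16   # assumed: height samples around the character
POSE_DIM = 64      # assumed: predicted next-frame pose

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (STATE_DIM + TERRAIN_DIM, 256))
b1 = np.zeros(256)
W2 = rng.normal(0, 0.1, (256, POSE_DIM))
b2 = np.zeros(POSE_DIM)

def next_pose(state, terrain):
    """Predict next poses for a batch of characters at once."""
    x = np.concatenate([state, terrain], axis=-1)  # (batch, 80)
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU hidden layer
    return h @ W2 + b2                             # (batch, POSE_DIM)

# One inference call animates a whole crowd of characters.
crowd = next_pose(rng.normal(size=(1000, STATE_DIM)),
                  rng.normal(size=(1000, TERRAIN_DIM)))
print(crowd.shape)  # (1000, 64)
```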


Interactive editing of performance-based facial animation

Speaker(s): Yeongho Seol, Weta Digital, New Zealand
Michael Cozens, Weta Digital, New Zealand

Yeongho Seol is experienced in developing character animation, computer vision, and machine learning technologies and making them useful in real-world VFX production. The tools and techniques he has developed have been actively used to create successful digital characters in many movies, including the Planet of the Apes series, The Hobbit series, and the Avengers series. His research has appeared in a number of forums, including SIGGRAPH and SIGGRAPH Asia.

Mike has been working in visual effects for 20 years. He spent three years as Lead Animator on multiple sequences of Avatar and went on to supervise the extended edition of the film. He worked as a Senior Animator on the first two films in The Hobbit trilogy before moving into the role of Animation Supervisor on The Hobbit: The Battle of the Five Armies. In between The Hobbit films, he was Animation Supervisor on Prometheus and The Wolverine. Mike's most recent work as Animation Supervisor has been on Pete's Dragon, Avatar: Flight of Passage, and Alita: Battle Angel.

Description: We present a set of interactive editing solutions for performance-based facial animation. These solutions allow artists to enhance the result of the automatic solve-retarget with a few tweaks.
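As a hedged sketch of the general idea (not Weta Digital's actual tool), the snippet below layers an artist's weight tweak over an automatically solved blendshape curve, with a smooth falloff so neighbouring frames follow along. The curve layout, falloff shape, and all parameter names are our own assumptions.

```python
# Sketch only: blend an artist edit into an auto-solved blendshape curve.
# The cosine falloff and weight layout are assumptions for illustration.
import numpy as np

def apply_tweak(solved, channel, frame, delta, radius=12):
    """Layer an artist edit over the automatic solve-retarget result.

    solved:  (num_frames, num_blendshapes) weights from the auto solver
    channel: blendshape index being edited
    frame:   frame at which the artist drags the weight
    delta:   offset the artist applied at that frame
    radius:  frames over which the edit falls off to zero
    """
    edited = solved.copy()
    frames = np.arange(len(solved))
    # Cosine falloff: 1 at the edited frame, 0 beyond the radius.
    t = np.clip(np.abs(frames - frame) / radius, 0.0, 1.0)
    falloff = 0.5 * (1.0 + np.cos(np.pi * t))
    edited[:, channel] += delta * falloff
    return np.clip(edited, 0.0, 1.0)  # keep weights in a valid range

weights = np.random.default_rng(1).uniform(0, 1, (100, 50))
tweaked = apply_tweak(weights, channel=3, frame=40, delta=0.2)
```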


Piku Piku Interpolation: An artist-guided sampling algorithm for synthesizing detail applied to facial animation

Speaker(s): Richard Andrew Roberts, CMIC, Victoria University of Wellington, New Zealand
Rafael Kuffner dos Anjos, CMIC, Victoria University of Wellington, New Zealand
Ken Anjyo, CMIC, Victoria University of Wellington; OLM Digital, Inc., Japan
J.P. Lewis, Victoria University of Wellington, New Zealand

Richard Roberts is a Postdoc Researcher at CMIC, Victoria University of Wellington. In this role, he works with international industry practitioners to bring the next generation of digital avatars to life. While focused primarily on Computer Graphics research, Richard also has a background in media design, visual effects, computer science, and programming languages.

Rafael Kuffner dos Anjos is a Postdoc Researcher at CMIC, Victoria University of Wellington. He completed all of his studies at Técnico Lisboa (BSc, MSc, and, most recently, a PhD in Information Systems and Computer Science). There, Rafael worked with the Visualization and Intelligent Multimodal Interfaces (VIMMI) group at INESC-ID Lisbon and was part of the BlackBox Lab at FCSH-NOVA. While his PhD research focused on image-based rendering, his work at VIMMI led to collaborations with HCI researchers on VR and AR topics, and he worked with the performing arts community in the BlackBox Lab.

Ken Anjyo is a co-director of the Computational Media Innovation Center at Victoria University of Wellington. He is an executive R&D adviser of OLM Digital, the digital production company in Tokyo famous for the Pokémon movies and other 3D animated feature films. He is a director of the Advanced Research Group at IMAGICA GROUP, Inc. His research focuses on the construction of mathematical and computationally tractable models.

J.P. Lewis is a numerical programmer and researcher. He is a Research Scientist at Google AI and an adjunct Associate Professor in the machine learning group at Victoria University of Wellington. His interests include computer vision and machine learning applications in entertainment. He has received credits on several movies, including Avatar and The Matrix sequels, and several of his algorithms have been adopted in commercial software, including Maya and MATLAB.

Description: We present a new sampling algorithm that adds realism to early-stage facial animation by recreating detail observed in FACS data extracted from videos.
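One plausible reading of the idea, sketched under our own assumptions (this is not the published Piku Piku algorithm): extract high-frequency residuals from captured FACS activation curves and sample them onto a smooth early-stage curve. The residual library, window length, and gain are hypothetical.

```python
# Sketch only: layer high-frequency detail sampled from captured curves
# onto a smooth block-in animation curve. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def highpass(curve, kernel=9):
    """Detail = captured curve minus its moving average."""
    pad = kernel // 2
    padded = np.pad(curve, pad, mode="edge")
    smooth = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return curve - smooth

def add_detail(smooth_curve, captured_curves, window=24, gain=1.0):
    """Synthesize detail by sampling residual windows from captured data."""
    out = smooth_curve.copy()
    residuals = [highpass(c) for c in captured_curves]
    for start in range(0, len(out), window):
        src = residuals[rng.integers(len(residuals))]
        s = rng.integers(0, len(src) - window)
        n = min(window, len(out) - start)
        out[start:start + n] += gain * src[s:s + n]
    return out

animated = np.sin(np.linspace(0, 4 * np.pi, 240))        # smooth block-in
captured = [rng.normal(0, 0.05, 240) for _ in range(5)]  # stand-in FACS data
detailed = add_detail(animated, captured)
```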


Saliency Diagrams: A tool for analyzing animation through the relative importance of keyposes

Speaker(s): Nicolas Xuan Tan Nghiem, Visual Media Lab, KAIST; École Polytechnique, France
Richard Roberts, CMIC, Victoria University of Wellington, New Zealand
J.P. Lewis, Victoria University of Wellington, New Zealand
Junyong Noh, Visual Media Lab, KAIST, South Korea

Nicolas Nghiem recently graduated from École Polytechnique with a Master's degree in Applied Mathematics and Computer Science, and from Gobelins with a Certificate of Art in 3D Character Animation. His work focuses on creating a toolbox for understanding and analyzing motion, in the hope of building better tools for animators and becoming a better animator himself in the process.

Richard Roberts is a Postdoc Researcher at CMIC, Victoria University of Wellington. In this role, he works with international industry practitioners to bring the next generation of digital avatars to life. While focused primarily on Computer Graphics research, Richard also has a background in media design, visual effects, computer science, and programming languages.

J.P. Lewis is a numerical programmer and researcher. He is a Research Scientist at Google AI and an adjunct Associate Professor in the machine learning group at Victoria University of Wellington. His interests include computer vision and machine learning applications in entertainment. He has received credits on several movies, including Avatar and The Matrix sequels, and several of his algorithms have been adopted in commercial software, including Maya and MATLAB.

Dr. Junyong Noh is a Professor and head of the Graduate School of Culture Technology (GSCT) at KAIST. His current research focus includes facial/character animation, virtual reality, and the immersive and interactive display of panoramic content. Prior to his academic career, he was a graphics scientist at the Hollywood visual effects company Rhythm and Hues Studios. Currently, he is the director of the Visual Media Lab at KAIST, where he conducts research to develop software that can be practically utilized in the creation of movie visual effects and animated films.

Description: In this paper, we take inspiration from keyframe animation to compute what we call the saliency diagram of an animation, which can be used to analyze the motion.
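As an illustration of the underlying notion (our own minimal reading, not necessarily the paper's exact construction), one can score each keypose by the reconstruction error incurred when it is removed and the motion is re-interpolated between its neighbours; plotting these scores over time gives a saliency-diagram-like view. All function and parameter names below are hypothetical.

```python
# Sketch only: salient keyposes are those that linear interpolation
# between their neighbours cannot recover.
import numpy as np

def keypose_saliency(motion, key_frames):
    """motion: (num_frames, dof) curve; key_frames: sorted frame indices."""
    saliency = {}
    for i in range(1, len(key_frames) - 1):
        a, k, b = key_frames[i - 1], key_frames[i], key_frames[i + 1]
        # Re-interpolate across [a, b] as if keypose k did not exist.
        t = (np.arange(a, b + 1) - a) / (b - a)
        recon = (1 - t)[:, None] * motion[a] + t[:, None] * motion[b]
        saliency[k] = float(np.abs(motion[a:b + 1] - recon).max())
    return saliency

frames = np.linspace(0, 2 * np.pi, 120)
motion = np.stack([np.sin(frames), np.cos(2 * frames)], axis=1)
print(keypose_saliency(motion, [0, 30, 60, 90, 119]))
```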

