• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Monday, November 18th
Time: 11:00am - 12:45pm
Venue: Plaza Meeting Room P3
Session Chair: Wan-Chun Alex Ma, Google, United States of America

Fast Terrain-Adaptive Motion Generation Using Deep Neural Networks

Author(s)/Presenter(s): Moonwon Yu, NCSOFT, South Korea
Byungjun Kwon, NCSOFT, South Korea
Jongmin Kim, Kangwon National University, South Korea
Shinjin Kang, Hongik University, South Korea
Hanyoung Jang, NCSOFT, South Korea

Abstract: Our neural network system makes it possible to generate terrain-adaptive motions of a large number of game characters. In addition, the generated motions retain human nuances.

Interactive editing of performance-based facial animation

Author(s)/Presenter(s): Yeongho Seol, Weta Digital, New Zealand
Michael Cozens, Weta Digital, New Zealand

Abstract: We present a set of interactive editing solutions for performance-based facial animation. The presented solutions allow artists to enhance the result of the automatic solve-retarget with a few tweaks.

Piku Piku Interpolation: An artist-guided sampling algorithm for synthesizing detail applied to facial animation

Author(s)/Presenter(s): Richard Andrew Roberts, CMIC, Victoria University of Wellington, New Zealand
Rafael Kuffner dos Anjos, CMIC, Victoria University of Wellington, New Zealand
Ken Anjyo, CMIC, Victoria University of Wellington; OLM Digital, Inc., Japan
J.P. Lewis, Victoria University of Wellington, United States of America

Abstract: We present a new sampling algorithm that adds realism to early-stage facial animation by recreating detail observed in FACS data extracted from videos.

Saliency Diagrams: A tool for analyzing animation through the relative importance of keyposes

Author(s)/Presenter(s): Nicolas Xuan Tan Nghiem, Visual Media Lab, KAIST; École Polytechnique, France
Richard Roberts, CMIC, Victoria University of Wellington, New Zealand
JP Lewis, Victoria University, New Zealand
Junyong Noh, Visual Media Lab, KAIST, South Korea

Abstract: In this paper, we take inspiration from keyframe animation to compute what we call the saliency diagram of an animation, which can be used to analyze the motion.