• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass

Date: Wednesday, November 20th
Time: 4:00pm - 6:00pm
Venue: Great Hall 1&2


A Clever Label

Speaker(s): Michela Ledwidge, Mod Productions Pty Ltd, Australia

Description: A Clever Label (ACL) is an interactive immersive documentary experience with video and VR versions. Inside the ACL experience, the audience is guided by the presenter in exploring the story's facts through real-time spatial data visualizations. These visualizations can be generated by non-technical presenters, offering a new format for presenting information.

Bio of Speaker: Michela Ledwidge is a director who specialises in realtime and virtual production. Her recent work spans augmented and virtual reality, games, and the use of these technologies behind the scenes for film and TV. She is co-founder of the Sydney studio Mod and has been both creative and technical lead on numerous productions. An Internet pioneer since the early 1990s, she created the first website in NSW (Australia) and award-winning remixable film architecture. She is currently working on the investigative documentary experience A Clever Label.


Virtual Reality Protein Builder

Speaker(s): Jean-Marc Gauthier, Virtual Technology & Design, United States of America

Description: A virtual world is an ideal place to explore and visualize physical, biochemical and molecular protein structures. Virtual protein structures can be shared among several people using virtual reality or mixed reality. One of the great appeals of visualizing virtual proteins is the possibility of interaction at the nanoscale.

Bio of Speaker: Jean-Marc Gauthier is an Associate Professor of Virtual Technology and Design at the University of Idaho. He holds a master's degree from ITP, Tisch School of the Arts, NYU, and a DPLG from the Ecole des Beaux Arts, Paris. His fields of study are virtual reality, innovative technologies, 3D visualization, animation, and storytelling; his expertise spans the design, programming, and user testing of virtual worlds, user interface (UI) and user experience (UX) design, and mixed reality (XR). Jean-Marc is involved in research on mobility and entertainment in virtual reality and on human-machine interface design. He tinkers with 3D photogrammetry, artificial intelligence, drones and wildlife, and currently designs virtual agents that can mimic molecular dynamics. His interactive media artwork has been presented at domestic and international venues, and he has written books on creating animations and on the production of real-time 3D animations and games.


Global Bidirectional Remote Haptic Live Entertainment by Virtual Beings

Speaker(s): Akihiko Shirai, GREE, Inc.; GREE VR Studio Lab, Japan
Yusuke Yamazaki, GREE, Inc.; GREE VR Studio Lab, Japan
Kensuke Koike, EXR, Inc.; Koike Design Laboratory, Japan
Yoshitaka Soejima, NTT DOCOMO, INC. / Open Innovation Strategy, Japan

Description: We show next-generation live entertainment that uses a global bidirectional remote network and haptic displays for VTubers. Performers play 3D real-time characters, and remote participants harmonize the stage by applauding and cheering for the on-site performer. Emotion is visualized for a wide range of audiences, including those with hearing and/or visual disabilities.

Bio of Speaker: Akihiko Shirai, Ph.D, is director of GREE VR Studio Lab, which promotes a new industrial research field around VTubers (Virtual YouTubers) and explores VR live entertainment. He received a Ph.D (Engineering) in 2004 from the Tokyo Institute of Technology, Japan, for research on the "Tangible Playroom", an entertainment system for young children using haptics on a human-scale play field with a real-time physics engine. His experience includes postdoctoral work at NHK-ES in Japan and the ENSAM Presence & Innovation Laboratory in France, a role as Science Communicator at the National Museum of Emerging Science and Innovation (Miraikan, Japan), and service on the IVRC Executive Committee. He has been a visiting professor at the Digital Hollywood University Graduate School since 2018.

Yusuke Yamazaki is a doctoral student at Tokyo Tech. In 2015, while a master's student, he invented a new vibration mechanism using a combination of motors and string. The invention was highly regarded at international conferences, winning the Best Demo Award at EuroHaptics 2016. He went on to develop Hapbeat, a haptic device based on the mechanism, and founded Hapbeat LLC. He redesigned Hapbeat as a production model and began selling it; more than 200 units have been sold, mainly to consumers. In April 2019, he started an internship at GREE, Inc. to develop haptic rendering algorithms that put Hapbeat to wider use.

Kensuke Koike has developed educational solutions for companies using VR and tactile sensation since 2018. Since 2013, he has been involved in web promotion as a web design director and marketing consultant at the Koike Design Institute. He collaborates with GREE VR Studio Lab as a program director for VRSionUp and for Virtual Beings World at SIGGRAPH 2019, and he also performs original VTubers himself.

Yoshitaka Soejima has worked at NTT DoCoMo since 1998. He spent five years on research and development of the third-generation mobile phone system and on i-mode. In the service department, starting in 2008, he spent six years considering broadcasting-related services and planning and developing "i-concier", a concierge-style information distribution service. He was then seconded to a TV station for six years to produce animation-related content. In his current department, he has produced live animation (known today as VTubing) for four years, since 2015.


Matt AI - Speech Driven Digital Human with Emotions

Speaker(s): Jingxiang Li, Tencent Technology Company Limited, China
Shiyin Kang, Tencent Technology Company Limited, China
Xinwei Jiang, Tencent Technology Company Limited, China

Description: A photorealistic digital human usually has an exquisitely detailed face, which makes speech-driven facial animation a major challenge, especially when subtle human emotions are involved. Using deep learning, we present a novel framework that drives a digital human with emotions from speech in real time.

Bio of Speaker: Jingxiang Li has worked in computer graphics for more than ten years. Before joining Tencent, he worked at Oriental DreamWorks, which makes animated feature films such as Kung Fu Panda 3. He then moved to Tencent to bring film technology into the real-time digital human field. Over the past three years, he has worked on several digital human projects, including Meet Mike, Siren and Matt AI. He is currently the technical lead for a digital idol project.

Shiyin Kang received the B.S. degree in automation and the M.S. degree in computer science and technology from Tsinghua University, Beijing, China, and the Ph.D. degree in systems engineering and engineering management from The Chinese University of Hong Kong. He joined Tencent Inc. in 2016, where he is currently a senior researcher at Tencent AI Lab. His research interests include speech processing, multi-modal speech synthesis, singing synthesis, voice conversion and applications of machine learning.

Xinwei Jiang joined NExT Studios of Tencent in 2017, where he works as a senior researcher. Before that, he was an assistant researcher at the Institute of Automation, Chinese Academy of Sciences (CASIA), where he was also a PhD student from 2011 to 2016. His research interests include computer vision and machine learning. He now focuses on combining computer vision and machine learning with computer graphics to develop new algorithms for game and digital human production.


Personalized Avatars for Realtime Virtual Try-on

Speaker(s): Koki Nagano, Pinscreen, United States of America
Hao Li, Pinscreen, University of Southern California, United States of America
Lain Goldwhite, Pinscreen, United States of America
Marco Fratarcangeli, Deform Dynamics, Chalmers University of Technology, Sweden
Kyle San, Pinscreen, United States of America
Jaewoo Seo, Pinscreen, United States of America

Description: To solve the virtual fitting room problem, we propose a solution using personalized avatars with cutting-edge clothing simulation. While real-time clothed avatars exist, they are limited to skinning techniques. Our approach produces a fully rigged avatar from an input image and combines performance-driven motion with complex real-time clothing simulation on consumer hardware.

Bio of Speaker: Koki Nagano is a principal scientist at Pinscreen. He received his PhD from USC and a Bachelor of Engineering from the Tokyo Institute of Technology. He previously worked for Oculus Research and Weta Digital.

Hao Li is CEO/Co-Founder of Pinscreen, Associate Professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 innovators under 35 in 2013 and has been awarded the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).

Lain Goldwhite is a software engineer with an interest in graphics programming and a background in games from USC.

Marco Fratarcangeli is an Associate Professor of Computer Science at Chalmers University of Technology in Gothenburg, Sweden. His main academic interests are in the area of large-scale nonlinear deformable simulation for time-critical applications like surgical simulation, physically-based animation, virtual characters, industrial prototyping and design, virtual/augmented reality, etc. He has been an Assistant Professor (2011-2014) at Sapienza University in Rome (Italy), and a visiting professor (2014) at Telecom Paristech in Paris (France). Marco obtained his PhD in Computer Engineering from Sapienza in 2009, jointly with Linköping University in Sweden. He has been awarded the Young Researcher grant from Vetenskapsrådet (Swedish Research Council) in 2015. He is CEO and Co-Founder of Deform Dynamics AB.

Kyle San is a software engineer at Pinscreen.

Dr. Jaewoo Seo is a Director of Engineering at Pinscreen. Previously he worked at ILM, Weta Digital, and OLM Digital as a research engineer. His research interests include geometry processing, GPU programming, motion capture and animation techniques for faces and bodies. He received his PhD in 2012 at KAIST.


The AirSticks: An Instrument for Audio-Visual Performance Through Gesture in Augmented Reality

Speaker(s): Alon Ilsar, SensiLab, Monash University, Australia
Matthew Hughes, University of Technology Sydney, Technische Universität Berlin, Australia

Description: The AirSticks are a gesture-based audio-visual instrument. This latest incarnation combines spatially controlled sound design with a 3D game environment projected onto a transparent screen. This system allows for the composition of highly integrated audio-visual environments superimposed directly onto the performance area.

Bio of Speaker: Alon Ilsar is an Australian-based drummer, composer, instrument designer and researcher at Monash University's SensiLab. He is the co-designer of a new gestural instrument for electronic percussionists, the AirSticks. Alon is researching the uses of the AirSticks in the field of health and wellbeing, making music creation more accessible to the broader community. Alon holds a PhD in instrument design through the University of Technology Sydney. He has played the AirSticks at Sydney’s Vivid Festival, on Triple J’s Like a Version and at NYC’s MET Museum, in projects including Trigger Happy ‘Visualised’, The Hour, The Sticks, Tuka (from Thundamentals), Sandy Evans’ ‘Ahimsa,’ Ellen Kirkwood’s ‘[A]part‘, Kirin J Callinan, Kind of Silence (UK), Cephalon (US) and Silent Spring. He has played drums in Belvoir Theatre’s ‘Keating! the Musical,’ Sydney Theatre Company’s ‘Mojo,’ Meow Meow with the London Philharmonic, Bergen Philharmonic and Sydney Symphony Orchestras, Alan Cumming, Jake Shears and Eddie Perfect.

Matt Hughes is a researcher, artist, programmer and musician whose works dive deep into the connection between sight and sound. Currently a PhD student at UTS' Animal Logic Academy, Matt's research explores the implementation and implications of using augmented reality (AR) in live electronic music performance. His work with Alon Ilsar for the AirSticks won the 'Best Instrument' and 'Best Performance' people's choice awards at the 2019 Guthman New Musical Instrument competition at Georgia Tech.