• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Exhibitor Pass

Date: Wednesday, November 20th
Time: 11:00am - 12:45pm
Venue: Plaza Meeting Room P3


Live 6DoF Video Production with Stereo Camera

Speaker(s): Ayaka Nakatani, Sony Interactive Entertainment Inc., Japan
Takayuki Shinohara, Sony Interactive Entertainment Inc., Japan
Kazuhito Miyaki, Sony Interactive Entertainment Inc., Japan

Description: We propose a lightweight 6DoF video production pipeline that uses only one stereo camera as input. The subject can move freely in any direction (lateral and depth) while the camera follows to keep the subject within the frame. The processing runs in real time to provide a live 6DoF viewing experience.
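
As a rough illustration of the depth-from-stereo step such a pipeline depends on, below is a minimal OpenCV sketch; the frame data, matcher parameters, and Q matrix are placeholders, not the authors' implementation:

    # Illustrative sketch: per-pixel depth from a rectified stereo pair, the
    # core step that makes free-viewpoint (6DoF) re-rendering possible.
    import cv2
    import numpy as np

    # Placeholder frames; in practice these are rectified left/right images.
    left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    # Semi-global matching; parameter values are illustrative, tuned per rig.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

    # Q is the 4x4 disparity-to-depth matrix from stereo calibration
    # (cv2.stereoRectify); identity here is a stand-in.
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 point cloud

    # A 6DoF viewer would re-render points_3d from an arbitrary virtual camera
    # pose every frame, letting the viewer move laterally and in depth.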

Bio of Speaker: Ayaka Nakatani is a software engineer working for the Global R&D Tokyo Division at Sony Interactive Entertainment Inc. She studied computer graphics and received her Master of Engineering from Keio University, Yokohama in 2012. Her research interests include computer graphics, live-action video processing and computer vision.

Takayuki Shinohara is a software engineer at Sony Interactive Entertainment Inc. He worked on the locomotion control software of the entertainment robot AIBO at Sony Corporation from 2001 to 2004. After moving to SIE in 2004, he developed, as lead programmer, several non-game applications for PlayStation platforms, such as Reader for PlayStation Vita. His research interests include virtual reality, computer graphics, and computer vision.

Kazuhito Miyaki is a director of Research & Development at Sony Interactive Entertainment Inc., leading video signal processing technology and deep learning projects.


FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

Speaker(s): Masaaki Fukuoka, Keio University, Japan
Adrien Verhulst, The University of Tokyo, Japan
Fumihiko Nakamura, Keio University, Japan
Ryo Takizawa, Keio University, Japan
Katsutoshi Masai, Keio University, Japan
Maki Sugimoto, Keio University, Japan

Description: In this exhibition, we demonstrate a method for mapping facial expressions to wearable robot arm commands in a virtual reality environment. By acquiring reflection-intensity information from a number of optical sensors embedded in a head-mounted display and training a machine learning system, our system can recognize intermediate facial expressions.
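
A minimal sketch of how such a sensor-to-expression mapping could be trained; the sensor count, labels, classifier choice, and command names are all hypothetical, not the authors' actual model:

    # Illustrative sketch: classify frames of photo-reflective sensor readings
    # into expression classes, then map each class to a robot-arm command.
    import numpy as np
    from sklearn.svm import SVC

    N_SENSORS = 16                              # hypothetical sensor count
    X_train = np.random.rand(200, N_SENSORS)    # stand-in for recorded intensities
    y_train = np.random.randint(0, 4, 200)      # e.g. neutral/smile/frown/pucker

    clf = SVC(probability=True).fit(X_train, y_train)

    def arm_command(sensor_frame):
        """Map one frame of sensor intensities to a robot-arm command.
        Class probabilities give a graded 'intermediate expression' strength."""
        probs = clf.predict_proba(sensor_frame.reshape(1, -1))[0]
        commands = ["hold", "reach", "retract", "grasp"]  # hypothetical mapping
        return commands[int(np.argmax(probs))], float(np.max(probs))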

Bio of Speaker:


Light Me Up: An Augmented-Reality Projection System

Speaker(s): Panagiotis-Alexandros Bokaris, L'Oréal, France
Benjamin Askenazi, L'Oréal, France
Michaël Haddad, L'Oréal, United States of America

Description: We propose a new compact projector-camera system (ProCam) composed of off-the-shelf devices that achieves dynamic facial projection mapping. A mini projector and a depth-sensor camera are coupled to project dynamic content, such as makeup, onto a user's face. Our technology is eye-safe and requires no initialization step.
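
The core ProCam step is mapping a 3D point seen by the depth camera to the projector pixel that lights it. A minimal sketch with hypothetical calibration values, not L'Oréal's actual implementation:

    # Illustrative sketch: depth-camera point -> projector pixel.
    import numpy as np

    # Hypothetical calibration: projector intrinsics K_p and camera-to-projector
    # extrinsics [R|t], both obtained offline via standard ProCam calibration.
    K_p = np.array([[1200.0, 0.0, 640.0],
                    [0.0, 1200.0, 360.0],
                    [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.05, 0.0, 0.0])   # e.g. projector 5 cm beside the camera

    def project_to_projector(point_cam):
        """3D point in depth-camera coordinates -> (u, v) projector pixel."""
        p = R @ point_cam + t        # into projector coordinates
        uvw = K_p @ p
        return uvw[:2] / uvw[2]      # perspective divide

    # Per frame: for each face vertex from the depth sensor, draw the makeup
    # texture at project_to_projector(vertex) in the projector image.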

Bio of Speaker: Panagiotis-Alexandros Bokaris leads a team at L’Oréal Research and Innovation working on appearance digitalization and diagnostics for beauty personalization. He holds a Diploma in Engineering from the University of Patras (2011) and the Erasmus Mundus Master's degree CIMET (2013). He obtained his PhD in computer science from the University of Paris-Saclay (2016), during which he was a visiting PhD student at University College London (2015). His research interests span augmented reality, computer vision, and graphics.

Benjamin Askenazi is an optical engineer who studied at the Institut d'Optique (ParisTech) and Imperial College London. He completed a Ph.D. in experimental optical physics at the Matériaux et Phénomènes Quantiques laboratory of the University of Paris Diderot, working on the interaction between light and matter in semiconductor systems. Benjamin joined L’Oréal R&I in November 2014, where he currently leads the Digital and Applied Optics team.

Michaël Haddad holds a PhD in nonlinear optics and laser-material interaction from École Polytechnique and an MBA from Kedge Business School. He has 20 years of experience in optics across several industrial sectors (defense, telecom, automotive), where he has filed a dozen patents and designed innovative diagnostics. Since joining L’Oréal in September 2015, he has created the Applied Optics and Algorithms division and led the Le Teint Particulier and Shade Finder projects for Lancôme. He is now in charge of the Augmented Beauty Invention Domain for the USA, bringing expertise in optics, AI, and computer vision to innovation projects.


M-Hair: Extended Reality by Stimulating the Body Hair

Speaker(s): Roger Boldu, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Sambhav Jain, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortés, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Haimo Zhang, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Description: M-Hair is a novel method for providing tactile feedback by stimulating only the body hair. Applying passive magnetic materials to the body hair makes it responsive to external magnetic fields, creating a new opportunity for interactions, such as enriching media experiences and evoking emotional responses through this subtle stimulation.
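
As a back-of-the-envelope illustration of the underlying physics (not the authors' design), the pull on a magnetized hair can be estimated as the force on a magnetic dipole in the gradient of a coil's on-axis field:

    # Illustrative sketch: F = m * dB/dz for a dipole of moment m on the axis
    # of a circular coil, where B(z) = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2)).
    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

    def coil_axial_field(z, current, radius):
        return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

    def dipole_force(z, current, radius, moment, dz=1e-6):
        # numerical gradient of B along the coil axis
        dBdz = (coil_axial_field(z + dz, current, radius)
                - coil_axial_field(z - dz, current, radius)) / (2 * dz)
        return moment * dBdz   # newtons; sign gives pull toward/away from coil

    # Hypothetical numbers: 1 A through a 5 mm coil, hair 1 cm away, m = 1e-9 A*m^2.
    print(dipole_force(z=0.01, current=1.0, radius=0.005, moment=1e-9))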

Bio of Speaker:


FreeMo: Extending Hand Tracking Experiences Through Capture Volume and User Freedom

Speaker(s): Pascal Chiu, Tohoku University, Institut national des sciences appliquées de Lyon, Japan
Isamu Endo, Tohoku University, Japan
Kazuki Takashima, Tohoku University, Japan
Kazuyuki Fujita, Tohoku University, Japan
Yoshifumi Kitamura, Tohoku University, Japan

Description: FreeMo removes interaction-area and predefined-workspace limitations for both desktop and VR hand tracking in a single device. Users and developers gain drastically improved interaction freedom, as demonstrated by our featured games, bringing us closer to the full potential of hand tracking.
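
One ingredient any extended-capture-volume hand tracker needs is transforming tracked hand points from a moving sensor's local frame into a fixed world frame. A minimal sketch with hypothetical poses; this is not the FreeMo implementation:

    # Illustrative sketch: sensor-local hand point -> world coordinates.
    import numpy as np

    def pose_matrix(R, t):
        """Build a 4x4 homogeneous transform from rotation R (3x3) and position t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def hand_to_world(hand_point_sensor, sensor_pose_world):
        """hand_point_sensor: 3D point from the tracker, in sensor coordinates.
        sensor_pose_world: 4x4 pose of the (moving) sensor in world coordinates."""
        p = np.append(hand_point_sensor, 1.0)   # homogeneous coordinates
        return (sensor_pose_world @ p)[:3]

    # Hypothetical frame: sensor raised 0.3 m, hand 0.2 m in front of it.
    sensor_pose = pose_matrix(np.eye(3), np.array([0.0, 0.3, 0.0]))
    print(hand_to_world(np.array([0.0, 0.0, 0.2]), sensor_pose))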

Bio of Speaker: Pascal Chiu is a research graduate and computer science engineering student affiliated with both Tohoku University in Japan and INSA de Lyon in France. Having studied in France, England, and Japan, he now conducts research on human-computer interaction, interaction design, virtual reality interfaces, virtual content, and robotics.

Isamu Endo is a first-year master's student at Tohoku University. His research focuses on virtual reality and social interaction.

Dr. Takashima received his Ph.D. from the Graduate School of Information Science and Technology at Osaka University in 2008. He then worked as an assistant professor at Osaka University and joined Tohoku University's Research Institute of Electrical Communication as an assistant professor in 2011. He was promoted to associate professor at Tohoku University in 2018.

Dr. Kazuyuki Fujita is an Assistant Professor at Tohoku University. He received his Ph.D. in Information Science and Technology from Osaka University in 2013. His research interests are spatial user interfaces and virtual reality.

Dr. Yoshifumi Kitamura has been a Professor at the Research Institute of Electrical Communication, Tohoku University, since 2010. His research interests include interactive content design, human-computer interaction, 3D user interfaces, virtual reality, and related fields. He serves in positions such as Japan Liaison of IFIP TC-13, Japan Liaison and Chair of the ACM SIGCHI Asian Development Committee, Chair of the Japan ACM SIGCHI Chapter, Steering Committee Chair of ACM VRST, Conference Chair of ACM SIGGRAPH Asia 2015, and General Chair of CHI 2021. He received his B.Sc. (1985), M.Sc. (1987), and Ph.D. (1996) from Osaka University.


PhantomTouch: Creating an Extended Reality by the Illusion of Touch using a Shape-Memory Alloy Matrix

Speaker(s): Sachith Muthukumarana, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Don Samitha Elvitigala, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortes, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Denys J.C. Matthies, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Description: "PhantomTouch" is a wearable forearm augmentation that enables the recreation of natural touch sensation by applying shear forces onto the skin using a matrix of Shape Memory Alloy based plasters.

Bio of Speaker:


Beyond the Screen - Volumetric Displays from Voxon Photonics

Speaker(s): Ben Weatherall, Voxon Photonics, Australia

Description: This talk will give an overview of the Voxon VX1 3D volumetric display. Volumetric 3D data is fast becoming the gold standard in 3D interactive entertainment. Advances in volumetric capture technology have enabled entirely new digital experiences, including sports replay, music videos, gaming, and advertising. Yet despite the technological advances in 3D content creation, the most common way to view 3D is still on a 2D screen. VR and AR have partially addressed this shortcoming, but the need to wear goggles or headgear creates a barrier between the user and the 3D experience. Voxon is seeking to remove that barrier, and in doing so is enabling human-centric 3D digital experiences that can be viewed from any direction with the naked eye. Using a unique volumetric rendering technology, the Voxon display can bring 3D digital assets into the physical world and, in doing so, enable a new type of shared visual experience. To make this display technology a reality, Voxon had to develop the world's fastest real-time light engine, capable of rendering over 3 billion points of light per second.

Ben Weatherall, Voxon's Unity Lead Programmer, will talk about Voxon's core technology, how to create content, and some of the unique aspects of volumetric imagery that set it apart from other types of media. He will also discuss important areas for future research in the volumetric display space.
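
A minimal sketch of the kind of voxelization step a swept-volume display pipeline performs each refresh; the grid resolution and data are hypothetical, and this is not the Voxon API:

    # Illustrative sketch: quantize a 3D point cloud into the fixed voxel grid
    # a volumetric display redraws on every sweep.
    import numpy as np

    GRID = (256, 256, 128)   # hypothetical display resolution (x, y, z)

    def voxelize(points, bounds_min, bounds_max):
        """points: Nx3 array in world units -> boolean occupancy grid."""
        grid = np.zeros(GRID, dtype=bool)
        # Normalize points into [0, 1] inside the display volume, then index.
        norm = (points - bounds_min) / (np.asarray(bounds_max) - bounds_min)
        idx = (norm * (np.array(GRID) - 1)).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(GRID)), axis=1)
        grid[tuple(idx[inside].T)] = True
        return grid

    # Hypothetical cloud: 100k random points within a 1 m cube.
    pts = np.random.rand(100_000, 3)
    occupancy = voxelize(pts, bounds_min=np.zeros(3), bounds_max=np.ones(3))
    print(occupancy.sum(), "lit voxels this sweep")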

Bio of Speaker: Ben Weatherall is the Software Systems Integration Developer for Voxon Photonics. Primarily concerned with developing an interface between Unity and the Voxon API, Ben also works on integration between third-party products and the VX1 display, handles developer support on the community Slack, and is building a unified front end for Voxon products. When not massaging vectors into voxels, Ben focuses on his love of books, cats, and taming an unwieldy garden. He is currently focused on designing media players for volumetric data formats and writing a DirectX interceptor to bring established games into the volumetric landscape.


SmartSim: Combination of Vibro-Vestibular Wheelchair and Curved Pedestal of Self-Gravitational Acceleration for Road Property and Motion Feedback

Speaker(s): Vibol Yem, Tokyo Metropolitan University, Japan
Ryunosuke Yagi, Tokyo Metropolitan University, Japan
Yasushi Ikei, Tokyo Metropolitan University, Japan

Description: We developed a riding simulation system for immersive virtual reality. Our system consists mainly of a wheelchair that provides vibration and vestibular sensation, and a pedestal with a curved surface on which the wheelchair runs. The system is low-cost and mechanically simple.
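
The curved pedestal exploits a simple relation: tilting the rider by angle theta lets gravity supply a sustained acceleration a = g * sin(theta). A small worked sketch of that relation (not the authors' implementation):

    # Illustrative sketch: tilt angle <-> simulated forward acceleration.
    import math

    G = 9.81  # m/s^2

    def tilt_for_acceleration(a):
        """Tilt angle (degrees) whose gravity component simulates acceleration a."""
        return math.degrees(math.asin(a / G))

    def acceleration_for_tilt(theta_deg):
        return G * math.sin(math.radians(theta_deg))

    # e.g. simulating a gentle 1 m/s^2 vehicle acceleration needs ~5.9 degrees:
    print(tilt_for_acceleration(1.0))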

Bio of Speaker: Vibol Yem received a PhD in engineering from the University of Tsukuba in 2015. He is currently an assistant professor in the Department of Computer Science, Graduate School of Systems Design, Tokyo Metropolitan University. His research interests include human interfaces, tactile/haptic devices, VR/AR, wearables, and robotics. He is a member of ACM and IEEE.

Ryunosuke Yagi is a bachelor's student at Tokyo Metropolitan University in the Department of Mechanical Systems Engineering. His research interests include the development of telepresence systems, virtual reality, and ultra reality.

Yasushi Ikei graduated from the Graduate School of Engineering, The University of Tokyo, with a PhD in industrial mechanical engineering in 1988. He joined the Tokyo Metropolitan Institute of Technology in 1992 and is currently a Professor in the Department of Computer Science, Tokyo Metropolitan University. His research interests include virtual reality, ultra reality, telepresence, multisensory displays, and cognitive engineering. He received paper awards and a contribution award from the Virtual Reality Society of Japan (VRSJ), and innovative technology awards from METI Japan and DCAJ in 2012, 2017, and 2018. He is a former vice president and a fellow member of the VRSJ.


Closing and Awards Ceremony


