• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Exhibitor Pass

Date: Wednesday, November 20th
Time: 11:00am - 11:11am
Venue: Plaza Meeting Room P3


Abstract: We propose a lightweight 6DoF video production pipeline that uses only one stereo camera as input. The subject can move freely in any direction (laterally and in depth) while the camera follows to keep the subject within the frame. The processing runs in real time to provide a live 6DoF viewing experience.
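
The abstract does not detail the reconstruction method, but any single-stereo-camera 6DoF pipeline rests on triangulating depth from the disparity between the left and right views. The sketch below shows that standard relation; the focal length and baseline are made-up placeholders, not the production camera's calibration.

```python
# Minimal sketch of stereo triangulation, the geometric core of a
# one-stereo-camera depth pipeline: Z = f * B / d, where f is the focal
# length in pixels, B the baseline in meters, and d the disparity in
# pixels between matching left/right pixels. All values are illustrative.
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map from a rectified stereo pair to metric depth."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                 # zero disparity = point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical camera: f = 1000 px, 6.5 cm baseline.
disp = np.array([[16.0, 8.0],
                 [4.0,  0.0]])
print(disparity_to_depth(disp, focal_px=1000.0, baseline_m=0.065))
```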

Speaker(s) Bio: Ayaka Nakatani, Sony Interactive Entertainment Inc., Japan
Ayaka Nakatani is a software engineer working in the Global R&D Tokyo Division at Sony Interactive Entertainment Inc. She studied computer graphics and received her Master of Engineering from Keio University, Yokohama, in 2012. Her research interests include computer graphics, live-action video processing, and computer vision.

Takayuki Shinohara, Sony Interactive Entertainment Inc., Japan
Takayuki Shinohara is a software engineer working at Sony Interactive Entertainment Inc. He worked on the locomotion control software of the entertainment robot AIBO at Sony Corporation from 2001 to 2004. After moving to SIE in 2004, he developed several non-game applications for PlayStation platforms, serving as lead programmer on titles such as Reader for PlayStation Vita. His research interests include virtual reality, computer graphics, and computer vision.

Kazuhito Miyaki, Sony Interactive Entertainment Inc., Japan
Kazuhito Miyaki is a director of Research & Development at Sony Interactive Entertainment Inc., leading video signal processing technology and deep learning projects.

Date: Wednesday, November 20th
Time: 11:11am - 11:23am
Venue: Plaza Meeting Room P3


Abstract: In this exhibition, we demonstrate a method for mapping facial expressions to wearable robot arm commands in a virtual reality environment. By acquiring reflection intensity information from optical sensors embedded in a head-mounted display and training a machine learning system, our system is able to recognize intermediate facial expressions.
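
As a rough illustration of the mapping stage (sensor reflection intensities to expression estimates), here is a hedged sketch using a generic multi-output regressor; the sensor count, expression basis, and choice of learner are all assumptions, since the abstract does not specify them.

```python
# Speculative sketch of the learning stage described above: photo-reflective
# sensor intensities -> blend weights over basis facial expressions.
# N_SENSORS, N_EXPRESSIONS, and the MLP are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_SENSORS = 16        # assumed number of optical sensors in the HMD
N_EXPRESSIONS = 4     # assumed basis: e.g. neutral, smile, frown, surprise

# Toy training data: one row per frame of reflection intensities; targets
# are blend-weight vectors that sum to 1 over the basis expressions.
rng = np.random.default_rng(0)
X = rng.random((200, N_SENSORS))
y = rng.dirichlet(np.ones(N_EXPRESSIONS), size=200)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# "Intermediate" expressions fall out naturally: the regressor outputs
# fractional blend weights rather than one hard class label.
print(model.predict(rng.random((1, N_SENSORS))))
```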

Speaker(s) Bio: Masaaki Fukuoka, Keio University, Japan
Adrien Verhulst, The University of Tokyo, Japan
Fumihiko Nakamura, Keio University, Japan
Ryo Takizawa, Keio University, Japan
Katsutoshi Masai, Keio University, Japan
Maki Sugimoto, Keio University, Japan

Date: Wednesday, November 20th
Time: 11:23am - 11:34am
Venue: Plaza Meeting Room P3


Abstract: We propose a new compact projector-camera system (ProCam), composed of off-the-shelf devices, that achieves dynamic facial projection mapping. A mini projector and a depth camera are coupled to project dynamic content, such as makeup, onto a user's face. Our technology is eye-safe and requires no initialization step.
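
To ground the projector-camera idea, here is a minimal geometric sketch: a 3D point measured by the depth camera is transformed into the projector's frame and projected through the projector's intrinsics, so content lands on the right pixel of the face. All calibration numbers are placeholders, not the authors' values.

```python
# Illustrative ProCam geometry: map a 3D point in depth-camera coordinates
# to a projector pixel. K_proj, R, and t are made-up calibration values.
import numpy as np

K_proj = np.array([[1200.0,    0.0, 640.0],   # assumed projector intrinsics
                   [   0.0, 1200.0, 360.0],
                   [   0.0,    0.0,   1.0]])
R = np.eye(3)                                 # camera-to-projector rotation
t = np.array([0.05, 0.0, 0.0])                # assumed 5 cm baseline

def camera_point_to_projector_pixel(p_cam):
    """Re-project a 3D point seen by the depth camera into the projector image."""
    p_proj = R @ p_cam + t                    # into projector coordinates
    u, v, w = K_proj @ p_proj                 # perspective projection
    return np.array([u / w, v / w])

# A face point 40 cm in front of the depth camera:
print(camera_point_to_projector_pixel(np.array([0.0, 0.0, 0.4])))
```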

Speaker(s) Bio: Panagiotis-Alexandros Bokaris, L'Oréal, France
Panagiotis-Alexandros Bokaris leads a team at L'Oréal Research and Innovation working on appearance digitalization and diagnostics for beauty personalization. He holds a Diploma in Engineering from the University of Patras (2011) and the Erasmus Mundus master's degree CIMET (2013). He obtained his PhD in computer science from the University of Paris-Saclay (2016), during which he was a visiting PhD student at University College London (2015). His research interests span augmented reality, computer vision, and graphics.

Benjamin Askenazi, L'Oréal, France
Benjamin Askenazi is an optical engineer who studied at the Institut d'Optique (ParisTech) and Imperial College London. He then completed a Ph.D. in experimental optical physics at the Matériaux et Phénomènes Quantiques laboratory of the University of Paris Diderot, on light-matter interaction in semiconductor systems. Benjamin joined L'Oréal R&I in November 2014, where he currently leads the Digital and Applied Optics team.

Michaël Haddad, L'Oréal, United States of America
Michaël holds a PhD in nonlinear optics and laser-material interaction from École Polytechnique and an MBA from Kedge Business School. He has 20 years of experience in optics across numerous industrial sectors (defense, telecom, automotive), where he has filed a dozen patents and designed innovative diagnostics. Since joining L'Oréal in September 2015, he has created the Applied Optics and Algorithms division and led the Le Teint Particulier and Shade Finder projects for Lancôme. He is now in charge of the Augmented Beauty Invention Domain for the USA, bringing expertise in optics, AI, and computer vision to innovation projects.

Date: Wednesday, November 20th
Time: 11:34am - 11:46am
Venue: Plaza Meeting Room P3


Abstract: M-Hair is a novel method for providing tactile feedback by stimulating only body hair. Applying passive magnetic materials to body hair makes it responsive to external magnetic fields, creating new opportunities for interaction, such as enriching media experiences and evoking emotional responses through this subtle stimulation.

Speaker(s) Bio: Roger Boldu, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Sambhav Jain, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortés, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Haimo Zhang, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Date: Wednesday, November 20th
Time: 11:46am - 11:58am
Venue: Plaza Meeting Room P3


Abstract: FreeMo removes interaction-area and predefined-workspace limitations for both desktop and VR hand tracking in a single device. Users and developers alike gain drastically improved interaction freedom, as demonstrated by our featured games, bringing us closer to the full potential of hand tracking.

Speaker(s) Bio: Pascal Chiu, Tohoku University, Institut national des sciences appliquées de Lyon, Japan
Pascal Chiu is a graduate researcher and computer science engineering student affiliated with both Tohoku University in Japan and INSA de Lyon in France. Having studied in France, England, and Japan, he now conducts research on human-computer interaction, interaction design, virtual reality interfaces, virtual content, and robotics.

Isamu Endo, Tohoku University, Japan
Isamu Endo is a first-year master's student at Tohoku University. His research interests include virtual reality and social interaction.

Kazuki Takashima, Tohoku University, Japan
Dr. Takashima received his Ph.D. from the Graduate School of Information Science and Technology at Osaka University in 2008. He then worked as an assistant professor at Osaka University, and joined Tohoku University's Research Institute of Electrical Communication as an assistant professor in 2011. He was promoted to associate professor at Tohoku University in 2018.

Kazuyuki Fujita, Tohoku University, Japan
Dr. Kazuyuki Fujita is an Assistant Professor at Tohoku University. He received his Ph.D. in Information Science and Technology from Osaka University in 2013. His research interests are spatial user interface and virtual reality.

Yoshifumi Kitamura, Tohoku University, Japan
Dr. Yoshifumi Kitamura has been a Professor in the Research Institute of Electrical Communication, Tohoku University, since 2010. His research interests include interactive content design, human-computer interaction, 3D user interfaces, virtual reality, and related fields. He serves in positions such as Japan Liaison of IFIP TC-13, Japan Liaison and Chair of the ACM SIGCHI Asian Development Committee, Chair of the Japan ACM SIGCHI Chapter, Steering Committee Chair of ACM VRST, Conference Chair of ACM SIGGRAPH Asia 2015, and General Chair of CHI 2021. He received his B.Sc. (1985), M.Sc. (1987), and Ph.D. (1996) from Osaka University.

Date: Wednesday, November 20th
Time: 11:58am - 12:09pm
Venue: Plaza Meeting Room P3


Abstract: "PhantomTouch" is a wearable forearm augmentation that enables the recreation of natural touch sensation by applying shear forces onto the skin using a matrix of Shape Memory Alloy based plasters.

Speaker(s) Bio: Sachith Muthukumarana, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Don Samitha Elvitigala, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortes, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Denys J.C. Matthies, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Date: Wednesday, November 20th
Time: 12:09pm - 12:21pm
Venue: Plaza Meeting Room P3


Abstract: This talk will give an overview of the Voxon VX1 3D volumetric display. Volumetric 3D data is fast becoming the gold standard in 3D interactive entertainment. Advances in volumetric capture technology have enabled entirely new digital experiences, including sports replays, music videos, gaming, and advertising. Yet despite the technological advances in 3D content creation, the most common way to view 3D is still on a 2D screen. VR and AR have partially addressed this shortcoming, but the need to wear goggles or headgear creates a barrier between the user and the 3D experience. Voxon is seeking to remove that barrier, and in doing so is enabling human-centric 3D digital experiences that can be viewed from any direction with the naked eye. Using a unique volumetric rendering technology, the Voxon display brings 3D digital assets into the physical world, enabling a new type of shared visual experience. To make this display technology a reality, Voxon had to develop the world's fastest real-time light engine, capable of rendering over 3 billion points of light per second. Ben Weatherall, Voxon's Unity Lead Programmer, will talk about Voxon's core technology, how to create content, and some of the unique aspects of volumetric imagery that set it apart from other types of media. He will also discuss important areas for future research in the volumetric display space.
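
For scale, the 3-billion-points-per-second figure can be turned into a per-volume budget with simple arithmetic; the refresh rate and slice count below are assumptions chosen for illustration, as only the throughput figure comes from the abstract.

```python
# Back-of-envelope budget for a swept-volume display. Only the
# 3e9 points/s figure is from the talk; the rest are assumptions.
POINTS_PER_SECOND = 3_000_000_000
VOLUME_HZ = 15            # assumed volumetric refresh rate
SLICES_PER_VOLUME = 200   # assumed 2D cross-sections per sweep

points_per_volume = POINTS_PER_SECOND // VOLUME_HZ
points_per_slice = points_per_volume // SLICES_PER_VOLUME
print(f"{points_per_volume:,} points per volume refresh")   # 200,000,000
print(f"{points_per_slice:,} points per 2D slice")          # 1,000,000
```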

Speaker(s) Bio: Ben Weatherall, Voxon, Australia

Date: Wednesday, November 20th
Time: 12:21pm - 12:33pm
Venue: Plaza Meeting Room P3


Abstract: We developed a riding simulation system for immersive virtual reality. Our system mainly consists of a wheelchair, which provides vibration and vestibular sensations, and a pedestal with a curved surface on which the wheelchair runs. The system is low-cost and mechanically simple.

Speaker(s) Bio: Vibol Yem, Tokyo Metropolitan University, Japan
Vibol Yem received a PhD in engineering from the University of Tsukuba in 2015. He is currently an assistant professor in the Department of Computer Science, Graduate School of Systems Design, Tokyo Metropolitan University. His research interests include human interfaces, tactile/haptic devices, VR/AR, wearables, and robotics. He is a member of ACM and IEEE.

Ryunosuke Yagi, Tokyo Metropolitan University, Japan
Ryunosuke Yagi is a bachelor's degree student at Tokyo Metropolitan University in the Department of Mechanical Systems Engineering. His research interests include the development of telepresence systems, virtual reality, and ultra reality.

Yasushi Ikei, Tokyo Metropolitan University, Japan
Yasushi Ikei graduated from the Graduate School of Engineering, The University of Tokyo, with a PhD in industrial mechanical engineering in 1988. He joined Tokyo Metropolitan Institute of Technology in 1992. He is currently a Professor in the Department of Computer Science, Tokyo Metropolitan University. His research interests are in virtual reality, ultra reality, telepresence, multisensory displays, and cognitive engineering. He has received paper awards and a contribution award from the Virtual Reality Society of Japan (VRSJ), as well as innovative technologies awards from METI Japan and DCAJ in 2012, 2017, and 2018. He is a former vice president and a fellow of the VRSJ.

Date: Wednesday, November 20th
Time: 12:33pm - 12:44pm
Venue: Plaza Meeting Room P3


Abstract:

Speaker(s) Bio:
