• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Exhibitor Pass

Date: Tuesday, November 19th
Time: 10:00am - 6:00pm
Venue: Great Hall 3&4 - Experience Hall


[Curated] Beyond the screen - Volumetric Displays from Voxon Photonics

Speaker(s):

Ben Weatherall is the Software Systems Integration Developer for Voxon Photonics. Primarily concerned with developing an interface between Unity and the Voxon API, Ben also works on integration between third-party products and the VX1 display, handles developer support on the company's community Slack, and is building a unified front end for Voxon products. When not massaging vectors into voxels, Ben focuses on his love of books, cats, and taming an unwieldy garden. He is currently focused on designing media players for volumetric data formats and writing a DirectX interceptor to bring established games into the volumetric landscape.

Description: Volumetric 3D data is fast becoming the gold standard in 3D interactive entertainment. Advances in volumetric capture technology have enabled entirely new digital experiences, including sport replays, music videos, gaming, and advertising. Yet despite the technological advances in 3D content creation, the most common way to view 3D is still on a 2D screen. VR and AR have partially addressed this shortcoming, but the need to wear goggles or headgear creates a barrier between the user and the 3D experience. Voxon is seeking to remove that barrier, and in doing so is enabling human-centric 3D digital experiences that can be viewed from any direction with the naked eye. Using a unique volumetric rendering technology, the Voxon display brings 3D digital assets into the physical world and, in doing so, enables a new type of shared visual experience. To make this display technology a reality, Voxon had to develop the world's fastest real-time light engine, capable of rendering over 3 billion points of light per second. Ben Weatherall, Voxon's Unity Lead Programmer, will talk about Voxon's core technology, how to create content, and some of the unique aspects of volumetric imagery that set it apart from other types of media. He will also discuss important areas for future research in the volumetric display space.
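
To make the idea of "massaging vectors into voxels" concrete, here is a minimal sketch (not Voxon's actual API) of how conventional 3D content might be prepared for a swept-volume display: points are quantized into a voxel occupancy grid, and the grid is emitted slice by slice in sync with the sweeping surface. The grid resolution and the draw call are illustrative assumptions.

```python
import numpy as np

GRID = 128  # assumed voxel resolution per axis

def voxelize(points: np.ndarray) -> np.ndarray:
    """Map points in [0, 1)^3 to a GRID^3 boolean occupancy volume."""
    idx = np.clip((points * GRID).astype(int), 0, GRID - 1)
    volume = np.zeros((GRID, GRID, GRID), dtype=bool)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return volume

# Example: a random point cloud standing in for a 3D asset.
cloud = np.random.rand(10_000, 3)
vol = voxelize(cloud)
for z in range(GRID):          # one pass of the swept surface
    slice_ = vol[:, :, z]      # a real driver would light this slice
    # display.draw_slice(z, slice_)  # hypothetical display call
```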


[Curated] Scene Reconstruction for the Oculus Space Sharing Demo

Speaker(s):

Description: The Oculus Space Sharing Demo "Together from Wherever" allows two users to interact in VR inside a photo-realistic re-creation of a real environment. Real-world places (such as your living room) have meaning and lead to a greater sense of presence. With this demo, we can glimpse a future where it should be possible to experience that profound sense of presence with someone important in your life in virtual reality, irrespective of physical distance.

Contributors (Facebook): John Abad, Matthew Banks, Justin Blosch, Anders Bond, Milton Cadogan, Christopher Dotson, Francesco Georg, Simon Green, Shiva Halan, Heath Liles, James Lin, Steven Lovegrove, Jeffrey Mancebo, Alessia Mara, Christopher Ocampo, Luis Pesqueira, Vyacheslav Rakityanskey, Sarthak Ray, Thomas Rubalcava, Ryan Rutherford, Christina Tanouye, Andrew Welch, Benjamin Wulfe, Mingfei Yan, Stefano Zanetti


AR-ia: Volumetric Opera for Mobile Augmented Reality

Speaker(s): Sean Kelly, Google, United States of America
Samantha Cordingley, Google, Australia
Patrick Nolan, Opera Queensland, Australia
Christoph Rhemann, Google, United States of America
Sean Fanello, Google, United States of America
Danhang Tang, Google, United States of America
Jude Osborn, Google, Australia
Jay Busch, Google, United States of America
Philip Davidson, Google, United States of America
Paul Debevec, Google, United States of America
Peter Denny, Google, United States of America
Graham Fyffe, Google, United States of America
Kaiwen Guo, Google, United States of America
Geoff Harvey, Google, United States of America
Shahram Izadi, Google, United States of America
Peter Lincoln, Google, United States of America
Wan-Chun Alex Ma, Google, United States of America
Jonathan Taylor, Google, United States of America
Xueming Yu, Google, United States of America
Matt Whalen, Google, United States of America
Jason Dourgarian, Google, United States of America
Genevieve Blanchett, Australia
Narelle French, Opera Queensland, Australia
Kirstin Sillitoe, Google, Australia
Tea Uglow, Google, Australia
Brenton Spiteri, Opera Queensland, Australia
Emma Pearson, Opera Queensland, Australia
Wade Kernot, Opera Queensland, Australia
Jonathan Richards, Google, Australia

Sean G. Kelly is a Creative Technologist at Google’s Creative Lab and the lead developer on Project AR-ia. He has developed and researched XR in professional and artistic applications ranging from dance and opera to neurosurgery and aerospace.

Samantha Cordingley is a Creative Producer at Google’s Creative Lab, who works with globally recognised partners in the Arts and Culture sector to create immersive digital work. Throughout her career as a producer she has led brands and teams across the Design, Film and Fashion industries.

Patrick Nolan is the Artistic Director & CEO of Opera Queensland. He has worked in theatre, film, opera, and large-scale outdoor performance, creating productions for the London 2012 Cultural Olympiad, the Glasgow Commonwealth Games, Sydney Opera House, Sydney Chamber Opera, Opera Australia, New Zealand Opera, Seattle Opera, Sydney Theatre Company, Belvoir St, Griffin Theatre, Melbourne Theatre Company, and all the major Australian capital city festivals.

Christoph Rhemann works as a Research Scientist at Google. He obtained his Microsoft Research-sponsored PhD from the Vienna University of Technology in 2010. Over the previous seven years he worked at the Vienna University of Technology, Microsoft Research, and perceptiveIO. His research focuses on depth estimation, image segmentation, and matting. Key projects include HoloLens, Holoportation (shown at TED 2016), PatchMatch Stereo, and CostFilter Stereo. He is now working on volumetric reconstruction of humans.

Sean Fanello is a Research Scientist and Manager at Google, where he leads the Volumetric Capture efforts. He obtained his PhD in Robotics from the Italian Institute of Technology and the University of Genoa in 2014. Previously he was a Senior Scientist at perceptiveIO, and he also spent three years at Microsoft Research working on HoloLens and Holoportation, the first real-time 3D telepresence system for augmented and virtual reality.

Danhang Tang is a Senior Research Scientist at Google, interested in applying machine learning to 3D compression, 3D hand gesture recognition, object pose estimation, and related problems. He was previously a founding team member at perceptiveIO, Inc. and a visiting researcher at Microsoft Research. He received his PhD from Imperial College London, his MSc from University College London with distinction, and his BSc from Sun Yat-sen University.

Jude Osborn leads creative development at Google's Creative Lab in Sydney. Jude has been tinkering with tech and writing software longer than he'd care to admit, having led countless development projects at Creative Lab and previously in the startup and non-profit spaces.

Jonathan Richards is a Creative Lead at Google's Creative Lab in Sydney, where he works on projects which explore the intersection between technology and the arts. Previously he was at the Guardian newspaper in London where he ran the Interactive team, and prior to that he was an editor, writer, and developer at the Times of London.

Description: Pushing mobile computing to bold new limits that were not possible as recently as March of this year, we have designed an end-to-end pipeline for reconstruction, compression, and streaming that generates 3D assets for a mixed reality opera experience rendered in real time on high-end mobile phones.


Come to the Table! A digital interface to support intercultural relationship building

Speaker(s): Mairi Gunn, University of Auckland, New Zealand
Huidong Bai, University of Auckland, New Zealand
Prasanth Sasikumar, University of Auckland, New Zealand

An award-winning filmmaker and cinematographer, Mairi is currently a senior lecturer in Digital Design at Auckland University of Technology and a PhD candidate at the Elam School of Fine Arts, University of Auckland. Her practice-led research has broken away from traditional flat screen aspect ratios, using PanOptica software to create ultra-widescreen (48:9) moving images for her installation Common Ground. It highlighted commonalities between her Scottish ancestors and the indigenous Māori of Aotearoa New Zealand. In her PhD thesis common/room, working with Māori and immigrants, she uses 3D/360° image-capture and XR at dining tables that serve as urban commons.

Description: Come to the Table! employs an extended reality interface, centred around a domestic kitchen table, as a first step towards overcoming racial tensions by inviting indigenous Māori, people from migrant backgrounds and all others to join viewers, thereby fostering intercultural relationship building.


Encounters: A Multiparticipant Audiovisual Art Experience with XR

Speaker(s): Ryu Nakagawa, Nagoya City University, Japan
Ken Sonobe, Nagoya City University, Japan

Ryu Nakagawa is an artist and an associate professor at the Graduate School of Design and Architecture, Nagoya City University, where he leads the Art and Media Experience Lab (NakagawaLab). He received his Ph.D. from Tokyo University of the Arts. He was a lecturer at Musashino Art University (2010-2014), a research assistant on the JST-ERATO Okanoya Emotional Information Project (2012-2014), and a researcher at the Department of Intermedia Art at Tokyo University of the Arts (2012-2014). He takes a constructive approach to exploring new artistic acts, such as art expression and art appreciation, with XR (extended reality, cross reality) that changes reality and society.

Ken Sonobe is an undergraduate student at the School of Design and Architecture, Nagoya City University. His research theme is "Possibilities of audiovisual art expression focusing on XR (cross reality) experience." He participates in the "Drawing Sound in MR Space" project organized by Ryu Nakagawa as an engineer and a designer. "Encounters," one of the project's works to which he contributed greatly, was selected as a finalist for the VR CREATIVE AWARD 2019, one of the best-known VR awards in Japan, and was also accepted to the XR program at SIGGRAPH Asia 2019.

Description: What if we could make sound with physical objects using supernatural powers? We propose a multiparticipant audiovisual art experience using XR. In the experience, participants can fire virtual bullets or virtual beams at physical objects, which then create a sound and a corresponding virtual visual effect.


FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

Speaker(s): Masaaki Fukuoka, Keio University, Japan
Adrien Verhulst, The University of Tokyo, Japan
Fumihiko Nakamura, Keio University, Japan
Ryo Takizawa, Keio University, Japan
Katsutoshi Masai, Keio University, Japan
Maki Sugimoto, Keio University, Japan

Description: In this exhibition, we demonstrate a method for mapping facial expressions to wearable robot arm commands in a virtual reality environment. By acquiring reflection-intensity information from a number of optical sensors embedded in the head-mounted display and training a machine learning system, our system is able to recognize intermediate facial expressions.
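
As a rough illustration of the sensing pipeline described above, the sketch below trains a classifier on reflection-intensity vectors and maps predicted expressions to arm commands. The sensor count, expression labels, commands, and placeholder training data are all assumptions for illustration; the authors' actual model and features may differ.

```python
import numpy as np
from sklearn.svm import SVC

N_SENSORS = 16  # assumed number of photo-reflective sensors in the HMD
EXPRESSIONS = ["neutral", "smile", "frown", "mouth_open"]  # assumed labels
COMMANDS = {"smile": "extend_arm", "frown": "retract_arm",
            "mouth_open": "grasp"}  # hypothetical arm commands

# Placeholder data; a real system would record labeled sensor frames.
X_train = np.random.rand(400, N_SENSORS)
y_train = np.random.randint(len(EXPRESSIONS), size=400)

clf = SVC().fit(X_train, y_train)  # any classifier/regressor would do here

def on_sensor_frame(intensities):
    """Map one frame of reflection intensities to an arm command (or None)."""
    label = EXPRESSIONS[int(clf.predict(intensities.reshape(1, -1))[0])]
    return COMMANDS.get(label)

frame = np.random.rand(N_SENSORS)  # one live frame of sensor readings
cmd = on_sensor_frame(frame)
```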


FreeMo: Extending Hand Tracking Experiences Through Capture Volume and User Freedom

Speaker(s): Pascal Chiu, Tohoku University, Institut national des sciences appliquées de Lyon, Japan
Isamu Endo, Tohoku University, Japan
Kazuki Takashima, Tohoku University, Japan
Kazuyuki Fujita, Tohoku University, Japan
Yoshifumi Kitamura, Tohoku University, Japan

Pascal Chiu is a graduate researcher and computer science engineering student affiliated with both Tohoku University in Japan and INSA de Lyon in France. Having studied in France, England, and Japan, he now conducts research on human-computer interaction, interaction design, virtual reality interfaces, virtual content, and robotics.

Isamu Endo is a first-year master's student at Tohoku University, researching virtual reality and social interaction.

Dr. Takashima received his Ph.D. from the Graduate School of Information Science and Technology at Osaka University in 2008. He then worked as an assistant professor at Osaka University, and joined Tohoku University's Research Institute of Electrical Communication as an assistant professor in 2011. Dr. Takashima was promoted to the rank of associate professor at Tohoku University in 2018.

Dr. Kazuyuki Fujita is an Assistant Professor at Tohoku University. He received his Ph.D. in Information Science and Technology from Osaka University in 2013. His research interests are spatial user interface and virtual reality.

Dr. Yoshifumi Kitamura has been a Professor at the Research Institute of Electrical Communication, Tohoku University, since 2010. His research interests include interactive content design, human-computer interaction, 3D user interfaces, virtual reality, and related fields. He serves in positions such as the Japan Liaison of IFIP TC-13, Japan Liaison and Chair of the ACM SIGCHI Asian Development Committee, Chair of the Japan ACM SIGCHI Chapter, Steering Committee Chair of ACM VRST, Conference Chair of ACM SIGGRAPH Asia 2015, and General Chair of CHI 2021. His formal education was obtained at Osaka University: B.Sc. (1985), M.Sc. (1987), and Ph.D. (1996).

Description: FreeMo removes interaction-area and pre-defined workspace limitations for both desktop and VR hand tracking, all in a single device. Users and developers alike can now enjoy drastically improved interaction freedom, as demonstrated by our featured games, bringing us closer to the full potential of hand tracking.


Head Gaze Target Selection for Redirected Interaction

Speaker(s): Brandon J. Matthews, University of South Australia, Australia
Ross T. Smith, University of South Australia, Australia

Description: Redirected interaction enables many virtual buttons to be mapped to one physical button. This paper addresses some limitations of redirected interaction by applying head gaze, enabling users to determine the interaction sequence, and by introducing a new physical-virtual mapping that uses multiple physical targets to remove the required reset button.
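
To make the head-gaze mechanism concrete, here is a minimal geometric sketch, under assumed positions, of selecting the virtual button a user is looking at; the hand redirection offset would then be computed between that virtual button and a physical target. This illustrates the general idea, not the paper's exact algorithm.

```python
import numpy as np

def select_virtual_target(gaze_dir, head_pos, virtual_targets):
    """Return the index of the virtual target most aligned with head gaze."""
    dirs = virtual_targets - head_pos
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return int(np.argmax(dirs @ gaze_dir))  # largest cosine wins

head = np.array([0.0, 1.6, 0.0])
gaze = np.array([0.0, 0.0, 1.0])               # unit vector: looking forward
virtual = np.array([[0.3, 1.5, 1.0],           # three virtual buttons
                    [0.0, 1.6, 1.0],
                    [-0.3, 1.7, 1.0]])
active = select_virtual_target(gaze, head, virtual)
# The hand would then be redirected by the offset between virtual[active]
# and the chosen physical target, so one physical press serves any button.
```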


HyperDrum: Interactive Synchronous Drumming in Virtual Reality using Everyday Objects

Speaker(s): Ryo Hajika, The University of Auckland, New Zealand
Kunal Gupta, The University of Auckland, New Zealand
Prasanth Sasikumar, The University of Auckland, New Zealand
Yun Suen Pai, The University of Auckland, New Zealand

Ryo Hajika is a postgraduate certificate student in Engineering and a research intern at the Empathic Computing Laboratory, The University of Auckland, under the supervision of Professor Mark Billinghurst. He obtained his Bachelor's degree in Engineering at Ritsumeikan University, Japan. His research interest is rooted in human-computer interaction design to enhance people's communication, especially VR/AR techniques and physiological sensing. He is also interested in media art and has work experience as a designer and software engineer on several installations.

Kunal Gupta is a Ph.D. candidate at the Empathic Computing Laboratory, The University of Auckland, New Zealand, under the supervision of Prof. Mark Billinghurst. His research topic revolves around emotion recognition and representation in VR/AR using physiological sensing and contextual information. Prior to this, he worked as a User Experience Researcher at Google Inc. (contract) and as an Interaction Designer at a startup in India, with a total of around five years of industry experience. He obtained his master's from HIT Lab NZ at the University of Canterbury under the supervision of Prof. Mark Billinghurst in 2015.

Prasanth Sasikumar is a Ph.D. candidate at the Empathic Computing Laboratory, The University of Auckland, New Zealand, under the supervision of Prof. Mark Billinghurst. His research topic revolves around multimodal input in remote collaboration and dense scene reconstruction. Prior to this, he worked as an XR Developer at JIX and as a software developer at IBS, India, with a total of around three years of industry experience. He obtained his master's from HIT Lab NZ at the University of Canterbury under the supervision of Prof. Rob Lindeman in 2018.

Yun Suen Pai is currently a research fellow at the Empathic Computing Laboratory, University of Auckland, New Zealand, directed by Mark Billinghurst. He obtained his Bachelor's degree in Computer-Aided Design and Manufacturing Engineering, as well as a Master's degree in Engineering Sciences, at the University of Malaya, Malaysia. He then obtained his Ph.D. in Media Design at Keio University, Japan. His research interest is the effects of VR/AR/MR on human behavior, perception, and learning, using physiological signals and artificial intelligence.

Description: HyperDrum leverages cognitive synchronization between players to create a collaborative music-production experience with immersive visualization in virtual reality. Participants wear an electroencephalography (EEG)-equipped head-mounted display and create music and the VR space together using a physical drum.
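
One common way to quantify the kind of inter-participant synchronization HyperDrum builds on is to correlate band-power envelopes of the two players' EEG signals. The sketch below shows that measure under an assumed sampling rate and frequency band; it is our illustration of a standard synchrony metric, not necessarily the authors' method.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz

def alpha_power(signal: np.ndarray, win: int = FS) -> np.ndarray:
    """Windowed 8-12 Hz (alpha) band power via FFT, one value per second."""
    freqs = np.fft.rfftfreq(win, d=1 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    n = len(signal) // win
    frames = signal[: n * win].reshape(n, win)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spec[:, band].mean(axis=1)

# Placeholder one-minute recordings for two drumming participants.
eeg_a = np.random.randn(FS * 60)
eeg_b = np.random.randn(FS * 60)
sync = np.corrcoef(alpha_power(eeg_a), alpha_power(eeg_b))[0, 1]
# 'sync' near 1 would indicate strongly co-varying alpha power envelopes.
```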


JumpinVR: Enhancing Jump Experience in a Limited Physical Space

Speaker(s): Tomas Havlik, Czech Technical University in Prague, FEE, Czech Republic
Daigo Hayashi, Tohoku University, Japan
Kazuyuki Fujita, Tohoku University, Japan
Kazuki Takashima, Tohoku University, Japan
Robert W. Lindeman, University of Canterbury, New Zealand
Yoshifumi Kitamura, Tohoku University, Japan

Tomas Havlik is a graduate student of human-computer interaction at Czech Technical University in Prague. He is interested in novel applications of virtual reality as well as UX design in general.

Daigo Hayashi is a master's student at the Research Institute of Electrical Communication, Tohoku University. His research interest is novel redirection systems for virtual reality.

Dr. Kazuyuki Fujita is an Assistant Professor in the ICD Lab, Research Institute of Electrical Communication, at Tohoku University. He received his Ph.D. in Information Science and Technology from Osaka University in 2013. He worked for ITOKI from 2013 to 2018, where he was engaged in research and development on designing future workplaces. His research interests include human-computer interaction and virtual reality.

Dr. Takashima received his Ph.D. from the Graduate School of Information Science and Technology at Osaka University in 2008. He then worked as an assistant professor at Osaka University, and joined Tohoku University's Research Institute of Electrical Communication as an assistant professor in 2011. Dr. Takashima was promoted to the rank of associate professor at Tohoku University in 2018.

Dr. Robert W. Lindeman has been doing research in the field of VR since 1993. His work focuses on immersive, multi-sensory feedback systems for VR, AR, and gaming, as well as natural and non-fatiguing interaction. He is Professor of Human-Computer Interaction and Director of the Human Interface Technology Lab NZ (HIT Lab NZ) at the University of Canterbury. Rob was General Chair of the IEEE 2010 VR Conference, Program Co-Chair of the IEEE 2014 & 2015 ISMAR Conferences, and Co-Chair of the IEEE 2014, 2015 & 2016 3DUI Symposia. He is an avid geocacher, skier, and mountain biker.

Dr. Yoshifumi Kitamura has been a Professor at the Research Institute of Electrical Communication, Tohoku University, since 2010. His research interests include interactive content design, human-computer interaction, 3D user interfaces, virtual reality, and related fields. He serves in positions such as the Japan Liaison of IFIP TC-13, Japan Liaison and Chair of the ACM SIGCHI Asian Development Committee, Chair of the Japan ACM SIGCHI Chapter, Steering Committee Chair of ACM VRST, Conference Chair of ACM SIGGRAPH Asia 2015, and General Chair of CHI 2021. His formal education was obtained at Osaka University: B.Sc. (1985), M.Sc. (1987), and Ph.D. (1996).

Description: We introduce a short virtual reality experience highlighting a use-case scenario for the distance-relocation technique in redirected jumping, which reduces the tracked-space requirements of spatial applications. In our demo, the player traverses a virtual factory by jumping between moving platforms, with jump distance scaled by a gain.
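
The core of the technique is a simple transform: while the player is airborne, the horizontal component of their tracked displacement is multiplied by a gain, so a short physical jump covers a longer virtual distance. The sketch below illustrates that mapping; the gain value and pose handling are illustrative assumptions, not the demo's exact implementation.

```python
import numpy as np

JUMP_GAIN = 2.5  # assumed horizontal gain applied while airborne

def redirect_jump(takeoff_pos, current_pos, airborne):
    """Map the tracked physical head position to a virtual head position."""
    offset = current_pos - takeoff_pos
    if airborne:
        offset[[0, 2]] = offset[[0, 2]] * JUMP_GAIN  # scale x/z, keep height y
    return takeoff_pos + offset

takeoff = np.array([0.0, 1.6, 0.0])   # head position at takeoff
mid_air = np.array([0.2, 2.0, 0.3])   # tracked head pose mid-jump
virtual_pos = redirect_jump(takeoff, mid_air, airborne=True)
```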


Light Me Up: An Augmented-Reality Projection System

Speaker(s): Panagiotis-Alexandros Bokaris, L'Oréal, France
Benjamin Askenazi, L'Oréal, France
Michaël Haddad, L'Oréal, United States of America

Panagiotis-Alexandros Bokaris leads a team at L’Oréal Research and Innovation working on appearance digitalization and diagnostics for beauty personalization. He holds a Diploma in Engineering from the University of Patras (2011) and the Erasmus Mundus Master's Degree CIMET (2013). He obtained his PhD in computer science from the University of Paris-Saclay (2016), during which he was a visiting PhD student at University College London (2015). His research interests span augmented reality, computer vision, and graphics.

Benjamin Askenazi is an optical engineer who studied at the Institut d'Optique (ParisTech) and Imperial College London. He then completed a Ph.D. in experimental optics at the Matériaux et Phénomènes Quantiques laboratory of the University of Paris Diderot, on the interaction between light and matter in semiconductor systems. Benjamin joined L’Oréal R&I in November 2014, where he currently leads the Digital and Applied Optics team.

Michaël holds a PhD in nonlinear optics and laser-material interaction completed at École Polytechnique and an MBA from Kedge Business School. He has 20 years of experience in optics across numerous industrial sectors (defense, telecom, automotive), where he has filed a dozen patents and designed innovative diagnostics. Since joining L’Oréal in September 2015, he has created the Applied Optics and Algorithms division and led the Le Teint Particulier and Shade Finder projects for Lancôme. He is now in charge of the Augmented Beauty Invention Domain for the USA, bringing expertise in optics, AI, and computer vision to innovation projects.

Description: We propose a new compact projector-camera system (ProCam) composed of off-the-shelf devices that achieves dynamic facial projection mapping. A mini projector and a depth-sensing camera are coupled together to project dynamic content such as makeup onto a user's face. Our technology is eye-safe and requires no initialization step.
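
At the heart of any projector-camera mapping is re-projecting a 3D point seen by the depth camera into projector pixel coordinates through calibrated intrinsics and extrinsics. The sketch below shows that transform with illustrative matrices; the actual system's calibration and real-time rendering are of course more involved.

```python
import numpy as np

K_proj = np.array([[1400.0, 0.0, 640.0],   # assumed projector intrinsics
                   [0.0, 1400.0, 360.0],
                   [0.0, 0.0, 1.0]])
R = np.eye(3)                               # depth-camera-to-projector rotation
t = np.array([0.08, 0.0, 0.0])              # assumed 8 cm baseline, in meters

def to_projector_pixel(point_cam):
    """Re-project a 3D point in depth-camera coordinates to a projector pixel."""
    p = K_proj @ (R @ point_cam + t)
    return p[0] / p[2], p[1] / p[2]

nose_tip = np.array([0.01, -0.02, 0.45])    # face landmark from the depth camera
u, v = to_projector_pixel(nose_tip)
# Rendering makeup at (u, v) each frame keeps it locked to the moving face.
```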


Live 6DoF Video Production with Stereo Camera

Speaker(s): Ayaka Nakatani, Sony Interactive Entertainment Inc., Japan
Takayuki Shinohara, Sony Interactive Entertainment Inc., Japan
Kazuhito Miyaki, Sony Interactive Entertainment Inc., Japan

Ayaka Nakatani is a software engineer working for the Global R&D Tokyo Division at Sony Interactive Entertainment Inc. She studied computer graphics and received her Master of Engineering from Keio University, Yokohama in 2012. Her research interests include computer graphics, live-action video processing and computer vision.

Takayuki Shinohara is a software engineer working at Sony Interactive Entertainment Inc. He worked on the locomotion control software of entertainment robot AIBO at Sony Corporation from 2001 to 2004. After moving to SIE in 2004, he developed several non-game applications for PlayStation platforms such as Reader for PlayStation Vita as the lead programmer. His research interests include virtual reality, computer graphics and computer vision.

Kazuhito Miyaki is a director of Research & Development at Sony Interactive Entertainment Inc., leading video signal processing technology and deep learning projects.

Description: We propose a lightweight 6DoF video production pipeline using only one stereo camera as input. The subject can move freely in any direction (lateral and depth) as the camera follows to keep the subject within the frame. The processing runs in real time to provide a live 6DoF viewing experience.
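
The first stage of any stereo-based 6DoF pipeline is recovering dense depth from the rectified pair; a hedged sketch using OpenCV's semi-global matcher is shown below. The file names, matcher parameters, and focal/baseline values are assumptions, and the real-time production pipeline described here is certainly more sophisticated.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

FOCAL_PX = 1000.0    # assumed focal length in pixels
BASELINE_M = 0.12    # assumed stereo baseline in meters
with np.errstate(divide="ignore"):
    depth_m = FOCAL_PX * BASELINE_M / disparity  # invalid where disparity <= 0
# From per-pixel depth, a point cloud or mesh can be rendered from new viewpoints.
```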


Lost City of Mer Virtual Reality Experience

Speaker(s): Gregory W. Bennett, Auckland University of Technology, New Zealand
Liz Canner, Astrea Media, United States of America

Gregory Bennett is currently Head of Department for Digital Design and Visual Arts at the School of Art & Design at Auckland University of Technology, New Zealand. His academic teaching and research include collaborative work on Virtual Reality projects in science, heritage and health. He is also a digital artist who works with 3D animation, motion capture, and interactive and immersive technologies. He has exhibited internationally in New Zealand, Australia, the USA, and Europe, including ISEA, Rencontres Internationales Paris/Berlin, Supernova, Currents, AA|LA Gallery, Centre for Contemporary Photography, and his work is represented in both public and private collections.

Liz Canner is a multi-award-winning filmmaker, artist and writer who creates films, cross-platform digital media projects, and installations. She is director of Astrea Media Inc., a non-profit media company dedicated to creating innovative projects on human rights and environmental issues. Notable projects include the public cyber art documentary Symphony of a City about the housing crisis, and the feature documentary Orgasm Inc. about the pharmaceutical industry and women’s health which was a New York Times “Critic's Pick”. Her work has screened at over 100 film festivals, been widely theatrically released, broadcast globally, and streamed on Sundance Now, Netflix and Kanopy.

Description: Lost City of Mer is a virtual reality experience, combined with a smartphone app, that immerses players in a fantasy undersea civilization devastated by ecological disaster caused by global warming. Harnessing the empathetic potential of VR, players are given agency regarding their personal carbon footprint in combating climate change.


M-Hair: Extended Reality by Stimulating the Body Hair

Speaker(s): Roger Boldu, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Sambhav Jain, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortés, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Haimo Zhang, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Description: M-Hair is a novel method for providing tactile feedback by stimulating only the body hair. Applying passive magnetic materials to the body hair makes it responsive to external magnetic forces/fields, creating a new opportunity for interactions, such as enriching media experiences and evoking emotional responses through this subtle stimulation.


PhantomTouch: Creating an Extended Reality by the Illusion of Touch using a Shape-Memory Alloy Matrix

Speaker(s): Sachith Muthukumarana, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Don Samitha Elvitigala, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Juan Pablo Forero Cortes, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Denys J.C. Matthies, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand
Suranga Nanayakkara, Augmented Human Lab, Auckland Bioengineering Institute; The University of Auckland, New Zealand

Description: "PhantomTouch" is a wearable forearm augmentation that enables the recreation of natural touch sensation by applying shear forces onto the skin using a matrix of Shape Memory Alloy based plasters.


Physical e-Sports in VAIR Field system

Speaker(s): Masasuke Yasumoto, Kanagawa Institute of Technology, CENOTE Inc., Japan
Takehiro Teraoka, Takushoku University, CENOTE Inc., Japan

Masasuke Yasumoto is an Associate Professor at the Kanagawa Institute of Technology and CEO of CENOTE Inc. Born in Japan in 1980, he received his Ph.D. in film and new media from Tokyo University of the Arts in 2010. He is a media artist, researcher, and engineer working at the intersection of art and science. His work covers a range of disciplines including media arts, computer graphics, xR, and game design.

Takehiro Teraoka was born in 1980 in Kumamoto, Japan. He received a Ph.D. degree from Keio University, Kanagawa, Japan in 2013. His research interests include natural language processing. He was a research associate at Keio University from 2011 to 2014 and an assistant professor at Tokyo University of Technology from 2014 to 2018. Since 2018, he has been an assistant professor in the Faculty of Engineering at Takushoku University.

Description: This work presents a physical e-sport, played in xR without an HMD, in which players improve through physical training. It is a combat shooting game that mixes gun- and bow-shaped devices. It requires the same sense of operation and physical ability as the real thing, realizing e-sports that are more like conventional sports.


Pumping Life: Embodied Virtual Companion for Enhancing Immersive Experience with Multisensory Feedback

Speaker(s): Jing-Yuan Huang, National Taipei University of Technology, Taiwan
Wei-Hsuan Hung, National Taipei University of Technology, Taiwan
Tzu-Yin Hsu, National Taipei University of Technology, Taiwan
Yi-Chun Liao, National Taipei University of Technology, Taiwan
Ping-Hsuan Han, National Taipei University of Technology, Taiwan

Jing-Yuan Huang received a B.Eng. degree from the Department of Interaction Design at the National Taipei University of Technology. She is an Interaction Designer who works for Awespire Inc. Her current interests include Virtual Reality (VR) and Haptic Technology.

Wei-Hsuan Hung received a B.Eng. degree from the Department of Interaction Design at the National Taipei University of Technology. She is studying for a master's degree at Taipei National University of the Arts. Her current interests include Virtual Reality (VR) and Digital Art.

Tzu-Yin Hsu received a B.Eng. degree from the Department of Interaction Design at the National Taipei University of Technology. She will move to Japan in October to learn Japanese. Her current interests include Virtual Reality (VR) and Web Design.

Yi-Chun Liao received a B.Eng. degree from the Department of Interaction Design at the National Taipei University of Technology and is currently two years into an English major. Her current interests include digital art and 3D animation.

Ping-Hsuan Han received his Ph.D. at the Graduate Institute of Networking and Multimedia at National Taiwan University, M.S. from Master Program in Toy and Game Design, and B.S. from the Department of Digital Technology Design all at National Taipei University of Education. He is currently a lecturer in the Department of Interaction Design at the National Taipei University of Technology. His current research interests include Human-Computer Interaction (HCI), Virtual Reality (VR), Mixed Reality (MR), and Multisensory Technology, and he also focuses on Creativity in Engineering Education.

Description: We present Pumping Life, a dynamic flow system that enhances a virtual companion with multisensory feedback, using water pumps and a heater to provide shape deformation and thermal feedback. To show interactive gameplay with our system, we deploy it inside a teddy bear in a VR game.


SceneCam: Using AR to Improve Multi-Camera Remote Collaboration

Speaker(s): Troels Ammitsbøl Rasmussen, Aarhus University, Denmark
Weidong Huang, University of Technology Sydney, Australia

Description: We present SceneCam, a prototype that uses AR to explore different techniques for improving the usability of multi-camera remote collaboration by making optimal camera selection easier and faster.
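
As one illustration of what "optimal camera selection" can mean (our assumption, not necessarily SceneCam's criterion), the sketch below scores each camera by how directly it faces a task position and how close it is, then picks the best one.

```python
import numpy as np

def best_camera(task_pos, cam_positions, cam_forwards):
    """Pick the camera that is nearest to and most directly facing task_pos."""
    to_task = task_pos - cam_positions
    dist = np.linalg.norm(to_task, axis=1)
    # Cosine between each camera's forward axis and the direction to the task.
    facing = np.einsum("ij,ij->i", to_task / dist[:, None], cam_forwards)
    return int(np.argmax(facing / dist))  # head-on and near wins

task = np.array([0.0, 1.0, 2.0])
positions = np.array([[0.0, 2.0, 0.0], [2.0, 1.0, 2.0], [-1.0, 1.0, 4.0]])
forwards = np.array([[0.0, -0.3, 1.0], [-1.0, 0.0, 0.0], [0.5, 0.0, -1.0]])
forwards /= np.linalg.norm(forwards, axis=1, keepdims=True)
print(best_camera(task, positions, forwards))
```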


SmartSim: Combination of Vibro-Vestibular Wheelchair and Curved Pedestal of Self-Gravitational Acceleration for Road Property and Motion Feedback

Speaker(s): Vibol Yem, Tokyo Metropolitan University, Japan
Ryunosuke Yagi, Tokyo Metropolitan University, Japan
Yasushi Ikei, Tokyo Metropolitan University, Japan

Vibol Yem received a PhD in engineering from the University of Tsukuba in 2015. He is currently an assistant professor in the Department of Computer Science, Graduate School of Systems Design, Tokyo Metropolitan University. His research interests include human interfaces, tactile/haptic devices, VR/AR, wearables, and robotics. He is a member of ACM and IEEE.

Ryunosuke Yagi is a bachelor's degree student at Tokyo Metropolitan University in the Department of Mechanical Systems Engineering. His research interests include the development of telepresence systems, virtual reality, and ultra reality.

Yasushi Ikei graduated from the Graduate School of Engineering, The University of Tokyo, with a PhD in industrial mechanical engineering in 1988. He joined the Tokyo Metropolitan Institute of Technology in 1992. He is currently a Professor in the Department of Computer Science, Tokyo Metropolitan University. His research interests are in virtual reality, ultra reality, telepresence, multisensory displays, and cognitive engineering. He received paper awards and a contribution award from the Virtual Reality Society of Japan (VRSJ), and innovative technology awards from METI Japan and DCAJ in 2012, 2017, and 2018. He is a former vice president and a fellow member of the VRSJ.

Description: We developed a riding simulation system for immersive virtual reality. Our system mainly consists of a wheelchair for vibration and vestibular sensation, and a pedestal with a curved surface on which the wheelchair runs. It is low-cost and simple in mechanism.


Super Size Hero

Speaker(s): Till Sander-Titgemeyer, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Jiayan Chen, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Ramon Schauer, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Mario Bertsch, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Sebastian Selg, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
York von Sydow, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Ihab Al-Azzam, Filmakademie Baden-Württemberg, Animationsinstitut, Germany
Verena Nomura, Filmakademie Baden-Württemberg, Animationsinstitut, Germany

Description: The player takes on the role of an overweight superhero trying to save the day. To do so, the player, wearing a tracked fat-suit, has to use their belly to prevent a bank robbery.


TouchVR: a Wearable Haptic Interface for VR Aimed at Delivering Multi-modal Stimuli at the User's Palm

Speaker(s): Daria Trinitatova, Skolkovo Institute of Science and Technology, Russia
Dzmitry Tsetserukou, Skolkovo Institute of Science and Technology, Russia
Aleksei Fedoseev, Skolkovo Institute of Science and Technology, Russia

Daria Trinitatova received the BS degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology, Russia, in 2015 and the MS degree in Space and Engineering Systems from the Skolkovo Institute of Science and Technology, Moscow, Russia, in 2018. She is currently working toward a Ph.D. in Engineering Systems at the Skolkovo Institute of Science and Technology, Russia. Her research interests include haptics, virtual and augmented reality, and robotic interfaces for virtual reality.

Dzmitry Tsetserukou received the Ph.D. degree in Information Science and Technology from the University of Tokyo, Japan, in 2007. From 2007 to 2009, he was a JSPS Post-Doctoral Fellow at the University of Tokyo. He worked as an Assistant Professor at the Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, from 2010 to 2014. Since August 2014 he has worked at the Skolkovo Institute of Science and Technology as Head of the Intelligent Space Robotics Laboratory. His research interests include wearable haptic and tactile displays, robotic manipulator design, telexistence, human-robot interaction, affective haptics, and virtual and augmented reality.

Aleksei Fedoseev received the BSc degree in Robotic Systems and Mechatronics from Bauman Moscow State Technical University (National Research University), Russian Federation, in 2018. He is currently a second-year Master's student in Space and Engineering Systems at the Skolkovo Institute of Science and Technology, Moscow, Russian Federation. In addition, he completed a personnel exchange under the Monodukuri Engineer Human Resource Program at Kindai University, Japan. His research interests include robotic manipulator control, real-time remote robotics, VR and AR applications, and encounter-type haptic displays.

Description: The TouchVR haptic interface provides cutaneous feedback on the palm via the DeltaTouch haptic display and vibrotactile feedback on the fingertips via vibration motors. The developed interface can potentially bring a new level of immersion to the user in VR, making it more interactive and tangible.


Upload Not Complete

Speaker(s): Bing-Hua Tsai, PSquare Media Lab, Taiwan
Zhao-Qing Chang, Pepperconrs Interactive Media Art Inc., Taiwan
Chin-Hsiang Hu, Pepperconrs Interactive Media Art Inc., Taiwan

Bing-Hua Tsai is an interactive designer and new media artist focused on interactive experience design, creative coding education, and technology arts creation. He graduated from the National Taiwan University of Arts, and his works have been selected and exhibited at the Arte Laguna Prize (2014, 2019), the 404 Electronic Arts Festival (2013, 2014), the Taipei Digital Arts Festival (2013, 2018), and the Taoyuan Creative Art Award (2013), among others. PSquare Media Lab is a new media art group mixing interactive programming, electronic sensor design, motion graphics design, and performance choreography, with a focus on new media performance.

The continuous creation of art and an accumulation of life experiences have led Chin-Hsiang Hu to take on various identities: artist, engineer, interactive designer, lecturer, and more. Even so, he still desires to focus his work on the creation of new media art that examines the over-development of science and technology. Each creative direction expresses a different idea and requires different technologies; in integrating them, the development of interactive software has played a very important role in his practice.

Description: Imagine an upload process during which you can see a virtual object in real space. You see the virtual object and feel its influence (wind and vibration); after passing through an upwardly extending tunnel, the view enters a completely virtual space, but you never know whether the upload has completed.


Who You Are is What You Tell: Effects of Perspectives on Virtual Reality Story Experiences

Speaker(s): Enrique Eduardo Klein Garcia-Godos, University of Queensland, Australia
Valerie Williams Eguiguren, SUQ, Australia
Arindam Dey, University of Queensland, Australia

Enrique Klein is a Game Designer from Peru involved with educational games since the start of his career. Enrique started his journey studying a Bachelor of IT, majoring in Games Modelling, at The University of Queensland. He then specialized in Game Design at Vancouver Film School. After a few years of working on educational and non-educational games back in Peru, Enrique is now a Master of Philosophy student at The University of Queensland exploring the intersection of narrative and virtual reality, in particular how to modify a virtual environment to better align the storyteller's creative intentions with the participant's experience.

Valerie Williams is a Graphic Designer with over four years of experience in fields such as web design, 3D modelling, mobile UI design, VR experience design, corporate identity, editorial design, and packaging design. She studied Graphic Design at the Peruvian University of Applied Sciences (UPC). Valerie is skilled in creating VR experiences from concept to release: conceptualisation, narrative design, scope management, core interaction mechanics design, interface design, and 3D asset creation including modelling, texturing, and lighting.

Arindam Dey is a Lecturer at the School of Information Technology and Electrical Engineering and Co-Director of the Empathic Extended Reality and Pervasive Computing Laboratory. His research is focused on making extended reality interfaces capable of measuring, sharing, and adapting to users' real-time emotional and cognitive states. Before joining the University of Queensland, he was a Research Fellow at the Empathic Computing Laboratory (University of South Australia). Earlier, he held postdoctoral positions at the University of Tasmania, Worcester Polytechnic Institute (USA), and James Cook University.

Description: This virtual reality story lets viewers experience the narrative from the perspectives of different characters. The story revolves around a family dispute between a couple and their unfortunate son, who can do little to stop his father from leaving the house but tries his best.

