• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Visitor Pass
  • Exhibitor Pass

Date/Time: 17 – 20 November 2019, Sunday – Wednesday, 9am – 6pm
Venue: Great Hall Foyer (Mezzanine Level, Merivale St)


[Animation] A Method to Create Fluttering Hair Animations That Can Reproduce Animator's Techniques

Abstract: We propose a method, based on an animator's technique, to create animations of objects fluttering in the wind, such as hair and flags.

Author(s)/Presenter(s):
Naoaki Kataoka, Tokyo University of Science, Japan
Tomokazu Ishikawa, Toyo University, Prometech CG Research, Japan
Ichiro Matsuda, Tokyo University of Science, Japan


[Animation] Human Motion Denoising Using Attention-Based Bidirectional Recurrent Neural Network

Abstract: In this paper, we propose a novel method for denoising human motion using a deep learning framework. Our method can be used in motion capture applications as a post-processing step.
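
The poster gives no implementation details here; purely as a rough illustration, the sketch below shows how an attention-based bidirectional recurrent denoiser could be wired up in PyTorch. The layer sizes, the 63-dimensional pose vector, and the `MotionDenoiser` name are assumptions, not taken from the authors' work.

```python
# Hypothetical sketch of an attention-based bidirectional RNN motion denoiser.
# Layer sizes, names, and the 63-DoF pose vector are illustrative assumptions.
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    def __init__(self, pose_dim=63, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)        # per-frame attention score
        self.out = nn.Linear(4 * hidden, pose_dim)  # frame features + context -> clean pose

    def forward(self, noisy):                       # noisy: (batch, frames, pose_dim)
        h, _ = self.rnn(noisy)                      # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time
        context = (w * h).sum(dim=1, keepdim=True)  # (batch, 1, 2*hidden)
        context = context.expand(-1, h.size(1), -1)
        return self.out(torch.cat([h, context], dim=-1))  # denoised poses

# Training would minimize e.g. an L2 loss between the output and clean mocap:
# loss = nn.functional.mse_loss(model(noisy_clip), clean_clip)
```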

Author(s)/Presenter(s):
Seong Uk Kim, Kangwon National University, South Korea
Hanyoung Jang, NCSOFT, South Korea
Jongmin Kim, Kangwon National University, South Korea


[Animation] Method to Make 3DCG Movement to Anime-Style Using Animation Technique

Abstract: Anime images are often created using 3DCG. We focus on the 3-D movement that occurs when creating anime-style 3DCG. Based on our experimental results, we propose a method to reduce this 3-D movement and produce results that look more anime-like.

Author(s)/Presenter(s):
Kei Kitahata, Hokkaido University, Japan
Yuji Sakamoto, Hokkaido University, Japan


[Animation] Search Space Reduction In Motion Matching by Trajectory Clustering

Abstract: We propose a novel method for solving the minimum-cost search problem in the motion matching technique. Our method performs better than preceding methods while still selecting natural poses.
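
The abstract does not spell out the clustering step; purely as an illustration of the general idea, the sketch below clusters trajectory features with k-means and restricts the nearest-pose search to the closest cluster. The feature layout, cluster count, and function names are assumptions, not the authors' system.

```python
# Illustrative sketch only: reduce the motion-matching search space by
# clustering trajectory features and searching inside the closest cluster.
import numpy as np
from sklearn.cluster import KMeans

def build_index(traj_features, k=64):
    """traj_features: (num_frames, feat_dim) trajectory features per database frame."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(traj_features)
    members = [np.flatnonzero(km.labels_ == c) for c in range(k)]
    return km, members

def best_match(km, members, traj_features, query_feature):
    """Return the database frame whose trajectory best matches the query."""
    c = km.predict(query_feature.reshape(1, -1))[0]   # search only the nearest cluster
    candidates = members[c]
    dists = np.linalg.norm(traj_features[candidates] - query_feature, axis=1)
    return candidates[np.argmin(dists)]

# Usage: build the index once, then query every animation update.
features = np.random.rand(10000, 24)                  # stand-in for a motion database
km, members = build_index(features)
frame = best_match(km, members, features, np.random.rand(24))
```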

Author(s)/Presenter(s):
Junghoon Jee, NCSOFT, South Korea
Gwonjin Yi, NCSOFT, South Korea


[Geometry and Modeling] A Method of Making Wound Molds for Prosthetic Makeup using 3D Printer

Abstract: Conventionally, to make wound props, artists carve a wound sculpture in oil clay, make a wound mold by pouring plaster over the finished sculpture, and obtain the wound prop by pouring silicone into the mold. However, it takes considerable time and effort to learn how to handle the materials and to acquire wound-carving techniques. This paper suggests a simple and rapid way for users to create a wound mold model from a wound image and print it using a 3D printer; our method provides easy-to-use capabilities for wound mold production.

Author(s)/Presenter(s):
Yoon-Seok Choi, Electronics and Telecommunications Research Institute (ETRI), South Korea
Soonchul Jung, Electronics and Telecommunications Research Institute (ETRI), South Korea
Jin-Seo Kim, Electronics and Telecommunications Research Institute (ETRI), South Korea


[Geometry and Modeling] A Wavelet Energy Decomposition Signature for Robust Non-Rigid Shape Matching

Abstract: We present a novel local shape descriptor, named wavelet energy decomposition signature (WEDS), for robustly matching non-rigid 3D shapes with incompatible structures, such as different resolutions, triangulations, and transformations.

Author(s)/Presenter(s):
Yiqun Wang, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Jianwei Guo, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Jun Xiao, University of Chinese Academy of Sciences, China
Dongming Yan, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China


[Geometry and Modeling] Color-Based Edge Detection on Mesh Surface

Abstract: An algorithm for detecting edges based on the color of a mesh surface is proposed.

Author(s)/Presenter(s):
Yi-Jheng Huang, Yuan Ze University, Taiwan


[Geometry and Modeling] Computational Design and Fabrication of 3D Wire Bending Art

Abstract: We introduce a computer-assisted framework for manually creating 3D wire bending art from given 3D models.

Author(s)/Presenter(s):
Yinan Wang, The University of Tokyo, Japan
Xi Yang, The University of Tokyo, Japan
Tsukasa Fukusato, The University of Tokyo, Japan
Takeo Igarashi, The University of Tokyo, Japan


[Geometry and Modeling] Computing 3D Clipped Voronoi Diagrams on GPU

Abstract: An efficient GPU algorithm to compute 3D clipped Voronoi diagrams with respect to a tetrahedral mesh in parallel.

Author(s)/Presenter(s):
Xiaohan Liu, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Dong-Ming Yan, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China


[Geometry and Modeling] Multi-directional 3D Printing with Strength Retention

Abstract: We propose a refined scheme and system that improves the bonding strength of objects printed by multi-directional 3D printers by introducing a CO2 laser.

Author(s)/Presenter(s):
Yupeng Guan, Beijing University of Technology, China
Yisong Gao, Beijing University of Technology, China
Lifang Wu, Beijing University of Technology, China
Kejian Cui, Institute of Automation, Chinese Academy of Sciences, China
Jianwei Guo, Institute of Automation, Chinese Academy of Sciences, China
Zechao Liu, Beijing University of Technology, China


[Human-Computer Interaction] Code Weaver: A Tangible Programming Learning Tool with Mixed Reality Interface

Abstract: We developed a tool for learning basic programming concepts designed for elementary school children. The tool's tangible user interface can be programmed by directly combining parts that have a physical form. In this way, we attempted to resolve several typical obstacles small children encounter when learning programming languages, such as text input via a keyboard, strict syntax rules, and the difficulties associated with group learning involving multiple participants. Interviews and demonstrations at several conferences validated the concept of this tool, which promotes the understanding of programming through cooperative learning via a tangible user interface.

Author(s)/Presenter(s):
Ren Sakamoto, Ritsumeikan University, Japan
Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan


[Human-Computer Interaction] Gamification in a Physical Rehabilitation Setting: Developing Proprioceptive Training Exercises for a Wrist Robot

Abstract: This project describes ongoing efforts to develop a game that can accompany a robot-aided wrist proprioceptive training exercise. Proprioception is an essential sense that aids the neural control of movement.

Author(s)/Presenter(s):
Christopher Curry, University of Minnesota, United States of America
Naveen Elangovan, University of Minnesota, United States of America
Reuben Gardos Reid, University of Minnesota, United States of America
Jiapeng Xu, University of Minnesota, United States of America
Jürgen Konczak, University of Minnesota, United States of America


[Human-Computer Interaction] HaptoBOX: Multi-Sensory Physical Interface for Mixed Reality Experience

Abstract: This study proposes an interface device for augmenting multisensory reality based on a visually unified experience with a high level of consistency between the real and virtual worlds, using video see-through mixed reality (MR). When the user puts on an MR head-mounted display (HMD) and holds a box-shaped device, virtual objects are displayed within the box, and vibrations and reaction forces are presented in synchrony with the dynamics of the objects. Furthermore, the user can also hear the sound emitted from the virtual objects via 3D sound localization.

Author(s)/Presenter(s):
Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan
Kiichiro Kigawa, Ritsumeikan University, Japan


[Human-Computer Interaction] HinHRob: A Performance Robot for Glove Puppetry

Abstract: We develop a glove puppetry performance robot named HinHRob. It delivers a balance of art and technology and promotes the protection and inheritance of this intangible cultural heritage.

Author(s)/Presenter(s):
Huahui Liu, School of Informatics, Xiamen University, China
Yingying She, School of Informatics, Xiamen University, China
Lin Lin, School of Art, Xiamen University, China
Shizhang Chen, Jinjiang Hand Puppet Art Protection and Inheritance Center, China
Jin Chen, School of Informatics, Xiamen University, China
Xiaomeng Xu, School of Informatics, Xiamen University, China
Jiayu Lin, School of Informatics, Xiamen University, China


[Human-Computer Interaction] Interaction Method using Party Horns for Multiple Users

Abstract: We propose a method that uses party horns as an interface for interactive content, and we create action games for multiple players that use party horns as the user interface.

Author(s)/Presenter(s):
Rina Ito, Aichi Institute of Technology, Japan
Shinji Mizuno, Aichi Institute of Technology, Japan


[Human-Computer Interaction] PondusHand: Measure User’s Weight Feeling by Photo Sensor Array around Forearm

Abstract: PondusHand measures weight feeling from the deformation of the forearm muscles, as measured by a photo sensor array (MAE 150 g). Compared to EMG, the sensor is less affected by electrical noise or sweat.

Author(s)/Presenter(s):
Satoshi Hosono, Waseda University, Japan
Shoji Nishimura, Waseda University, Japan
Ken Iwasaki, H2L Inc., Japan
Emi Tamaki, Waseda University, Japan


[Human-Computer Interaction] Reinforcement of Kinesthetic Illusion by Simultaneous Multi-Point Vibratory Stimulation

Abstract: Our investigation finds a change in the intensity of kinesthetic illusion when multiple points on the synergist muscles of arm extension are stimulated. The kinesthetic illusion is strengthened by increasing the number of stimulated points in some participants. However, further investigation of the effect of increasing the number of stimulated points is needed.

Author(s)/Presenter(s):
Keigo Ushiyama, The University of Electro-Communications, Japan
Satoshi Tanaka, The University of Electro-Communications, Japan
Akifumi Takahashi, The University of Electro-Communications, Japan
Hiroyuki Kajimoto, The University of Electro-Communications, Japan


[Human-Computer Interaction] Sense of non-presence: Visualization of invisible presence

Abstract: This is a device that enables visualization of the movement of an invisible creature. We intend to allow viewers to recognize specific movements from particles blown up one by one.

Author(s)/Presenter(s):
Takuya Mikami, Sapporo City University, Japan
Min Xu, Sapporo City University, Japan
Kaori Yoshida, Sapporo City University, Japan
Kosuke Matsunaga, Sapporo City University, Japan
Jun Fujiki, Sapporo City University, Japan


[Image & Video Processing] Animation Video Resequencing with a Convolutional AutoEncoder

Abstract: Given an unordered collection of images, our system decides suitable in-between images for a set of key-frames, or synthesizes new animation sequences that are locally “as smooth as possible”.
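
As a rough illustration of the general approach only (not the authors' network or ordering scheme), the sketch below embeds frames with a small convolutional autoencoder and orders them greedily by latent-space distance so that consecutive frames are close. The architecture, the 64×64 input size, and the greedy heuristic are all assumptions.

```python
# Rough sketch only: embed frames with a small convolutional autoencoder and
# order them greedily by latent-space distance to get a "smooth" sequence.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.enc = nn.Sequential(                      # assumes 3x64x64 input frames
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(128 * 8 * 8, latent))
        self.dec = nn.Sequential(                      # latent -> 3x64x64 reconstruction
            nn.Linear(latent, 128 * 8 * 8), nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def greedy_resequence(latents, start=0):
    """Order frames so consecutive latent codes are as close as possible."""
    order, remaining = [start], set(range(len(latents))) - {start}
    while remaining:
        last = latents[order[-1]]
        nxt = min(remaining, key=lambda i: torch.dist(latents[i], last).item())
        order.append(nxt)
        remaining.remove(nxt)
    return order
```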

Author(s)/Presenter(s):
Shang-Wei Zhang, National Cheng-Kung University, Taiwan
Charles C. Morace, National Cheng-Kung University, Taiwan
Thi Ngoc Hanh Le, National Cheng-Kung University, Taiwan
Chih-Kuo Yeh, National Cheng-Kung University, Taiwan
Sheng-Yi Yao, National Cheng Kung University, Taiwan
Shih-Syun Lin, National Taiwan Ocean University, Taiwan
Tong-Yee Lee, National Cheng-Kung University, Taiwan


[Image & Video Processing] Balance-Based Photo Posting

Abstract: This paper focuses on online photo posting. By analyzing a photo's color and content, we formulate a balance metric and use this measurement to optimize the final result.

Author(s)/Presenter(s):
Yu Song, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Fan Tang, Fosafer, China
Weiming Dong, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences, China
Feiyue Huang, Youtu Lab, Tencent, China
Changsheng Xu, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences, China


[Image & Video Processing] Diverse Layout Generation for Graphical Design Magazines

Abstract: We propose a system that automatically generates layouts for magazines that require graphical design. When images or texts are input as content, the system automatically generates several appropriate and diverse layouts. The automation makes the layout-creation task performed by users such as graphical designers much more efficient. It also allows the user to choose from a wide range of ideas to create attractive layouts.

Author(s)/Presenter(s):
Sou Tabata, Dai Nippon Printing Co., Ltd, Japan
Haruka Maeda, Dai Nippon Printing Co., Ltd, Japan
Keigo Hirokawa, Dai Nippon Printing Co., Ltd, Japan
Kei Yokoyama, A+U Publishing Co., Ltd, Japan


[Image & Video Processing] Generation of Photorealistic QR Codes

Abstract: In this paper, we propose a useful method for generating photorealistic QR codes without compromising the scannability/readability of the QR code.

Author(s)/Presenter(s):
Shih-Syun Lin, National Taiwan Ocean University, Taiwan
Yu-Fan Chang, National Taiwan Ocean University, Taiwan
Thi Ngoc Hanh Le, National Cheng Kung University, Taiwan
Sheng-Yi Yao, National Cheng Kung University, Taiwan
Tong-Yee Lee, National Cheng Kung University, Taiwan


[Image & Video Processing] Real-time Table Tennis Forecasting System based on Long Short-term Pose Prediction Network

Abstract: This is a deep learning-based real-time table tennis serve forecasting system that uses only a single RGB camera. The system can help users train their table tennis skills and increase their motivation.

Author(s)/Presenter(s):
Erwin Wu, Tokyo Institute of Technology, Japan
Florian Perteneder, Tokyo Institute of Technology, Japan
Hideki Koike, Tokyo Institute of Technology, Japan


[Image & Video Processing] User-friendly Interior Design Recommendation

Abstract: We propose a novel interior design recommendation method that takes into account combinations of furniture based on each user's preferences.

Author(s)/Presenter(s):
Akari Nishikawa, Ryukoku University, Japan
Keiko Ono, Ryukoku University, Japan
Mitsunori Miki, Doshisha University, Japan


[Imaging & Rendering] A Low Cost Multi-Camera Array for Panoramic Light Field Video Capture

Abstract: We present a portable multi-camera system for recording panoramic light field video content. The proposed system is capable of capturing wide baseline (0.8 meters), high resolution (>15 pixels per degree), large field of view (>220 degrees) light fields at 30 frames per second. The array contains 47 time-synchronized cameras distributed on the surface of a hemispherical plastic dome. We use commercially available action sports cameras (Yi 4k) mounted inside the dome. The dome, mounts, triggering hardware and cameras are inexpensive and the array is easy to fabricate.

Author(s)/Presenter(s):
Michael Broxton, Google Inc., United States of America
Jay Busch, Google, United States of America
Jason Dourgarian, Google, United States of America
Matthew DuVall, Google, United States of America
Daniel Erickson, Google, United States of America
Dan Evangelakos, Google, United States of America
John Flynn, Google, United States of America
Ryan Overbeck, Google, United States of America
Matt Whalen, Google, United States of America
Paul Debevec, Google, United States of America


[Imaging & Rendering] Computational Spectral-Depth Imaging with a Compact System

Abstract: Relying on an efficient computational reconstruction algorithm with deep learning instead of customized hardware, our system enables simultaneous acquisition of the spectral and depth information in real-time with high resolution.

Author(s)/Presenter(s):
Mingde Yao, University of Science and Technology of China, China
Zhiwei Xiong, University of Science and Technology of China, China
Lizhi Wang, Beijing Institute of Technology, China
Dong Liu, University of Science and Technology of China, China
Xuejin Chen, University of Science and Technology of China, China


[Imaging & Rendering] Fast, memory efficient and resolution independent rendering of cubic Bezier curves using tessellation shaders

Abstract: A novel technique to render cubic Bezier curves that orchestrates the graphics pipeline to approximate cubic Beziers with quadratic Beziers on GPU hardware and performs 5-10x faster than existing techniques.
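
The abstract names the core trick: approximating cubics with quadratics on the GPU. As a CPU-side illustration of that approximation only (not the authors' tessellation-shader code), the sketch below subdivides a cubic Bezier and replaces each piece with the standard best-fit quadratic whose middle control point is (3(P1 + P2) - P0 - P3) / 4.

```python
# Illustration of the cubic -> quadratic approximation named in the abstract
# (CPU-side numpy, not the authors' tessellation-shader implementation).
import numpy as np

def split_cubic(p, t=0.5):
    """de Casteljau split of a cubic Bezier p (4x2 array) at parameter t."""
    p01, p12, p23 = (1-t)*p[0]+t*p[1], (1-t)*p[1]+t*p[2], (1-t)*p[2]+t*p[3]
    p012, p123 = (1-t)*p01+t*p12, (1-t)*p12+t*p23
    mid = (1-t)*p012 + t*p123
    return np.array([p[0], p01, p012, mid]), np.array([mid, p123, p23, p[3]])

def cubic_to_quadratic(p):
    """Best-fit quadratic for a cubic: middle control = (3(P1+P2) - P0 - P3) / 4."""
    return np.array([p[0], (3*(p[1] + p[2]) - p[0] - p[3]) / 4.0, p[3]])

def approximate(cubic, pieces=4):
    """Split a cubic into `pieces` spans (power of two) and fit one quadratic per span."""
    spans = [np.asarray(cubic, float)]
    for _ in range(int(np.log2(pieces))):
        spans = [half for s in spans for half in split_cubic(s)]
    return [cubic_to_quadratic(s) for s in spans]

quads = approximate([[0, 0], [1, 2], [3, 2], [4, 0]], pieces=4)
```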

Author(s)/Presenter(s):
Harish Kumar, Adobe, India
Anmol Sud, Adobe, India


[Imaging & Rendering] Focus stacking by multi-viewpoint focus bracketing

Abstract: An approach to obtain high-quality focus-stacking images by using depth maps generated by a multi-view structure-from-motion algorithm.

Author(s)/Presenter(s):
Yucheng Qiu, Shibaura Institute of Technology, Japan
Daisuke Inagaki, Shibaura Institute of Technology, Japan
Kenji Kohiyama, Keio University, Japan
Hiroya Tanaka, Keio University, Japan
Takashi Ijiri, Shibaura Institute of Technology, Japan


[Imaging & Rendering] Fundus imaging using DCRA toward large eyebox

Abstract: We propose a novel fundus imaging method using a dihedral corner reflector array (DCRA). The proposed method achieves wavelength independence, robustness to eye movement, and a simple optical system.

Author(s)/Presenter(s):
Yui Atarashi, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan
Kazuki Otao, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan
Takahito Aoto, Digital Nature Group, University of Tsukuba, Japan
Yoichi Ochiai, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan


[Imaging & Rendering] Modelling scene data for render time estimation

Abstract: A supervised learning approach to estimate render times of images in post-production. This technique builds a learning model based on scene and rendering parameters, utilising previously rendered 3D scenes.

Author(s)/Presenter(s):
Harsha K. Chidambara, DNEG, United Kingdom


[Imaging & Rendering] Parallel Adaptive Frameless Rendering with Nvidia OptiX

Abstract: We implement an adaptive frameless renderer in parallel and interactively with NVIDIA OptiX. This is a real-time rendering scheme that discards the traditional frame to achieve improved visual accuracy and lower latency.

Author(s)/Presenter(s):
Chung-Che Hsiao, North Carolina State University, United States of America
Benjamin Watson, North Carolina State University, United States of America


[Imaging & Rendering] Rendering Point Clouds with Compute Shaders

Abstract: A compute shader based point cloud rasterizer with up to 10 times higher performance than classic point-based rendering with the GL_POINTS primitive.
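
Compute-based point rasterizers of this kind typically resolve visibility with per-pixel atomic minimums over a value that packs depth in the high bits and color in the low bits. The numpy sketch below emulates that idea on the CPU for intuition only; it is not the authors' shader code, and the packing layout is an assumption.

```python
# CPU emulation (for intuition only) of atomic-min style point rasterization:
# pack depth in the high bits and color in the low bits, keep the minimum per pixel.
import numpy as np

def rasterize(points, colors, view_proj, width, height):
    points = np.asarray(points, float)
    colors = np.asarray(colors, np.int64)
    # Project to clip space and keep points in front of the camera and inside NDC.
    hom = np.c_[points, np.ones(len(points))] @ view_proj.T
    ok = hom[:, 3] > 1e-6
    ndc = hom[ok, :3] / hom[ok, 3:4]
    inside = np.all(np.abs(ndc) <= 1.0, axis=1)
    ndc, col = ndc[inside], colors[ok][inside]

    x = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).astype(np.int64)
    y = ((ndc[:, 1] * 0.5 + 0.5) * (height - 1)).astype(np.int64)
    depth = ((ndc[:, 2] * 0.5 + 0.5) * 0xFFFFFF).astype(np.int64)   # 24-bit depth
    rgb = (col[:, 0] << 16) | (col[:, 1] << 8) | col[:, 2]
    packed = (depth << 24) | rgb             # depth in the high bits wins the min()

    fb = np.full(width * height, np.iinfo(np.int64).max, dtype=np.int64)
    np.minimum.at(fb, y * width + x, packed)  # stands in for the GPU atomicMin

    image = np.zeros((height, width, 3), dtype=np.uint8)
    hit = fb != np.iinfo(np.int64).max
    image.reshape(-1, 3)[hit] = np.stack([(fb[hit] >> 16) & 0xFF,
                                          (fb[hit] >> 8) & 0xFF,
                                          fb[hit] & 0xFF], axis=1)
    return image
```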

Author(s)/Presenter(s):
Markus Schütz, TU Wien, Austria
Michael Wimmer, TU Wien, Austria


[Imaging & Rendering] Sound Propagation Considering Atmospheric Inhomogeneity and Ground Effects

Abstract: This paper proposes an improved FDTD-PE method to simulate sound propagation in 3D outdoor scenes. In the simulation, the ground and atmosphere are treated as a porous medium and an inhomogeneous medium, respectively.

Author(s)/Presenter(s):
Jin Liu, Tianjin University, China
Shiguang Liu, Tianjin University, China


[Virtual Reality, Augmented Reality, and Mixed Reality] 360-Degree-Viewable Tabletop Light-Field 3D Display Having Only 24 Projectors

Abstract: Conventional light-field methods of producing 3D images with circular parallax on a tabletop surface require several hundred projectors. Our novel approach produces a similar light field using only 1/10 the number of projectors. In our method, two cylindrical mirrors are inserted into the projection light paths. By appropriately folding the paths with the mirrors, we can form any viewpoint image in an annular viewing area from a group of rays sourced from all projectors arranged on a circle.

Author(s)/Presenter(s):
Shunsuke Yoshida, National Institute of Information and Communications Technology, Japan


[Virtual Reality, Augmented Reality, and Mixed Reality] AUDIOZOOM: Location Based Sound Delivery system

Abstract: Just like focusing light with a projector, we create a focus of sound that follows a user, tracked with a Kinect. This is useful in public spaces and VR.

Author(s)/Presenter(s):
Chinmay Rajguru, University of Sussex, United Kingdom
Daniel Blaszczak, University of Sussex, United Kingdom
Arash PourYazdan, University of Sussex, United Kingdom
Thomas J. Graham, University of Sussex, United Kingdom
Gianluca Memoli, University of Sussex, United Kingdom


[Virtual Reality, Augmented Reality, and Mixed Reality] DeepMobileAR: A Mobile Augmented Reality Application with integrated Visual SLAM and Object Detection

Abstract: We confirm the possibility of a real-time, integrated AR and deep learning service on consumer mobile devices, both indoors and outdoors.

Author(s)/Presenter(s):
Suwoong Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Seungjae Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Keundong Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Jong Gook Ko, Electronics and Telecommunications Research Institute (ETRI), South Korea


[Virtual Reality, Augmented Reality, and Mixed Reality] Investigating the Role of Task Complexity in Virtual Immersive Training (VIT) Systems

Abstract: The focus of this research is to introduce the concept of Training Task Complexity (TTCXB) into the design of Virtual Immersive Training (VIT) systems. In this report, we describe the design parameters, experiment design, and initial results.

Author(s)/Presenter(s):
Konstantinos Koumaditis, Aarhus University, Denmark
Francesco Chinello, Aarhus University, Denmark


[Virtual Reality, Augmented Reality, and Mixed Reality] Lucciola: Presenting Aerial Images by Generating a Fog Screen at any Point in the Same 3D Space as a User

Abstract: We propose a method of presenting an aerial image at any point in the same three-dimensional space as a user by colliding vortex rings and remaining particles.

Author(s)/Presenter(s):
Takahiro Kusabuka, NTT Service Evolution Laboratories, Japan
Shinichiro Eitoku, NTT Service Evolution Laboratories, Japan


[Virtual Reality, Augmented Reality, and Mixed Reality] Method for estimating display lag in the Oculus Rift S and CV1

Abstract: Cybersickness is the major "kill factor" limiting the uptake of HMD VR. This work shows how display lag inherent in even the most recent systems can contribute to adverse effects.

Author(s)/Presenter(s):
Jason Feng, UNSW Sydney, Australia
Juno Kim, UNSW Sydney, Australia
Wilson Luu, UNSW Sydney, Australia
Stephen Palmisano, University of Wollongong, Australia


[Virtual Reality, Augmented Reality, and Mixed Reality] Pop-up digital tabletop: seamless integration of 2D and 3D visualizations in a tabletop environment

Abstract: We propose a pop-up digital tabletop system that seamlessly integrates two-dimensional (2D) and three-dimensional (3D) representations of contents in a digital tabletop environment.

Author(s)/Presenter(s):
Daisuke Inagaki, Shibaura Institute of Technology, Japan
Yucheng Qiu, Shibaura Institute of Technology, Japan
Raku Egawa, Shibaura Institute of Technology, Japan
Takashi Ijiri, Shibaura Institute of Technology, Japan


[Virtual Reality, Augmented Reality, and Mixed Reality] Stealth Projection: Visually Removing Projectors from Dynamic Projection Mapping

Abstract: In this study, we propose a stealth projection method that visually removes the ProCam system in dynamic projection mapping by using an aerial image display technique and IR camera-based markerless tracking.

Author(s)/Presenter(s):
Masumi Kiyokawa, The University of Electro-Communications, Japan
Shinichi Okuda, The University of Electro-Communications, Japan
Naoki Hashimoto, The University of Electro-Communications, Japan


[Virtual Reality, Augmented Reality, and Mixed Reality] Virtual Immersive Educational Systems: Early Results and Lessons Learned

Abstract: Higher education is embracing digital transformation at a relatively slow adoption rate, with few fragmented solutions to portray the capabilities of new immersive technologies like Virtual Reality (VR). One may argue that deployment costs and the substantial level of design knowledge required are critical stagnation factors for creating effective Virtual Immersive Educational (VIE) systems. We attempt to address these impediments with a cost-effective and user-friendly VIE system. In this paper, we briefly report the main elements of this design, initial results, and lessons learned.

Author(s)/Presenter(s):
Francesco Chinello, Aarhus University, Denmark
Konstantinos Koumaditis, Aarhus University, Denmark


[Visualization] BookVIS: Enhancing Browsing Experiences in Bookstores and Libraries

Abstract: BookVIS is a visual "companion" that recognizes a book's cover and instantly uses it to provide users with information about the book. After the user snaps a photo of the book, the app generates a visual dashboard for further exploration. The system allows users to identify and discover more information about books from pictures of book covers taken on their smartphones. With the presented system and its interplay between the real and digital worlds, new avenues could be opened for creating new dimensions for adaptive visualizations.

Author(s)/Presenter(s):
Zona Kostic, Harvard University, United States of America


[Visualization] Eye-Tracking Based Adaptive Parallel Coordinates

Abstract: We demonstrate how eye-tracking can help the analyst efficiently re-order the axes in a parallel coordinates plot. The implicit input from an eye-tracker assists the system in finding unexplored dimensions.

Author(s)/Presenter(s):
Mohammad Chegini, Graz University of Technology, Nanyang Technological University, Austria
Keith Andrews, Graz University of Technology, Austria
Tobias Schreck, Graz University of Technology, Austria
Alexei Sourin, Nanyang Technological University, Singapore


[Visualization] Midair Haptic Representation for Internal Structure in Volumetric Data Visualization

Abstract: We propose a novel method to present the internal structure of volumetric data using midair haptics to solve the occlusion problem in volumetric visualization. The proposed method simplifies internal structures by approximating them using a Gaussian mixture model (GMM) and converts them into haptic stimuli. We applied this technique to 3D microscopy.
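
Purely as an illustration of the GMM simplification step mentioned in the abstract (not the authors' pipeline), the sketch below thresholds a volume, fits a Gaussian mixture to the occupied voxel positions with scikit-learn, and returns per-component parameters that a haptic renderer could map to stimuli. The component count, threshold, and names are assumptions.

```python
# Minimal sketch of the GMM simplification step described in the abstract:
# threshold the volume, fit a Gaussian mixture to occupied voxel positions,
# and keep per-component parameters for a downstream haptic mapping.
import numpy as np
from sklearn.mixture import GaussianMixture

def simplify_volume(volume, threshold=0.5, n_components=8):
    """volume: 3D numpy array of intensities in [0, 1]."""
    zyx = np.argwhere(volume > threshold)               # occupied voxel coordinates
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full', random_state=0).fit(zyx)
    # Each (mean, covariance, weight) triple is one simplified "blob" that a
    # haptic renderer could convert into a midair stimulus.
    return list(zip(gmm.means_, gmm.covariances_, gmm.weights_))

# Example with a synthetic volume containing two bright blobs:
vol = np.zeros((32, 32, 32))
vol[8:12, 8:12, 8:12] = 1.0
vol[20:26, 20:26, 20:26] = 1.0
blobs = simplify_volume(vol, n_components=2)
```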

Author(s)/Presenter(s):
Tomomi Takashina, Nikon Corporation, Japan
Mitsuru Ito, Nikon Corporation, The University of Tokyo, Japan
Yuji Kokumai, Nikon Corporation, Japan


[Visualization] Nanoscapes: Authentic Scales and Densities in Real-Time 3D Cinematic Visualizations of Cellular Landscapes

Abstract: Nanoscapes is a cinematic first-person interactive exploration of cellular landscapes that attempts to address the challenges of visualizing authentic molecular scales and densities in real-time.

Author(s)/Presenter(s):
Andrew R. Lilja, UNSW Art & Design, Australia
Shereen R. Kadir, UNSW Art & Design, Australia
Rowan T. Hughes, UNSW Art & Design, Australia
Nicholas J. Gunn, UNSW Art & Design, Australia
Campbell W. Strong, UNSW Art & Design, Australia
Benjamin J. Bailey, UNSW Art & Design, Australia
Robert G. Parton, University of Queensland, Australia
John B. McGhee, UNSW Art & Design, Australia


[Visualization] Non-Euclidean Embeddings for Graph Analytics and Visualisation

Abstract: This research project uses machine learning techniques to generate optimised embeddings of graphs in different types of geometric spaces. Our approach utilises non-Euclidean geometries in the data processing and visualisation stages. This results in a flexible, versatile, and interactive visualisation tool, which is suitable for a variety of graph analytics use cases across different high-end visualisation systems and display types.

Author(s)/Presenter(s):
Daniel Filonik, EPICentre, UNSW Art & Design, Australia
Tian Feng, La Trobe University, Australia
Ke Sun, CSIRO's Data61, Australia
Richard Nock, CSIRO's Data61, Australia
Alex Collins, CSIRO's Data61, Australia
Tomasz Bednarz, EPICentre, UNSW Art & Design, Australia


[VRCAI] How to VizSki: Visualizing Captured Skier Motion in a VR Ski Training Simulator

Abstract: Alpine ski training is restricted by environmental requirements and by the incremental and cyclical ways in which movement and form are taught. Therefore, we propose a virtual reality ski training system based on an indoor ski simulator. The system uses two trackers to capture the motion of the skis so that users can control them on the virtual ski slope. For training, we captured the motion of professional athletes and replay it to users to help them improve their skills. In two studies, we explore the utility of visual cues in helping users effectively learn the motion patterns of the pro skier. In addition, we look at the impact of feedback on this learning process. The work provides a basis for developing and understanding the possibilities and limitations of VR ski simulators, which have the potential to support skiers in their learning process.

Author(s)/Presenter(s):
Erwin Wu, Tokyo Institute of Technology, Japan
Florian Perteneder, Tokyo Institute of Technology, Japan
Takayuki Nozawa, Tokyo Institute of Technology, Japan
Hideki Koike, Tokyo Institute of Technology, Japan


[VRST] A Technique for Mixed Reality Remote Collaboration using 360 Panoramas in 3D Reconstructed Scenes

Abstract: We designed a prototype system that combines 360 panoramas with a 3D scene to introduce a novel way for users to interact and collaborate with each other. The system features a few main points:
• Spatially embeds 360 panorama videos into a 3D reconstructed scene
• Provides a platform to evaluate the feature of having a 3D scene with 360 panorama inserts
• Offers inspirations and implications for designing Mixed Reality remote collaboration systems that merge 360 and 3D views

Author(s)/Presenter(s):
Theophilus Teo, University of South Australia, CSIRO, Australia
Ashkan F. Hayati, University of South Australia, Australia
Gun A. Lee, University of South Australia, Australia
Mark Billinghurst, University of South Australia, Australia
Matt Adcock, CSIRO, Australia


[VRST] Measurement-based Hyper-elastic Material Identification and Real-time FEM Simulation for Haptic Rendering

Abstract: In this paper, we propose a measurement-based modeling framework for hyper-elastic material identification and real-time haptic rendering. We build a custom data-collection setup that captures shape deformation and response forces during compressive deformation of cylindrical material samples. We collect training and testing data from four silicone objects with various material profiles. We design an objective function for material parameter identification that incorporates both shape deformation and reactive forces, and we optimize it with a genetic algorithm. We adopt an optimization-based Finite Element Method (FEM) for rendering object deformation. The numerical error of the simulated forces was found to be perceptually negligible.
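
As a toy illustration of the identification step only (not the authors' setup), the sketch below uses a small genetic algorithm to fit two hypothetical material parameters by minimizing a weighted sum of force and shape-deformation errors against recorded measurements. The `simulate()` stub, the weights, and the parameter ranges are all assumptions.

```python
# Toy illustration of parameter identification with a genetic algorithm:
# minimize a weighted sum of force error and shape-deformation error.
# The simulate() stub, weights, and parameter ranges are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, strains):
    """Stand-in forward model: (forces, displacements) for given material parameters."""
    c1, c2 = params
    forces = c1 * strains + c2 * strains**3          # hypothetical hyper-elastic response
    displacements = strains / (1.0 + c1)             # hypothetical shape measure
    return forces, displacements

def objective(params, strains, meas_f, meas_d, w_force=1.0, w_shape=0.5):
    f, d = simulate(params, strains)
    return w_force * np.mean((f - meas_f) ** 2) + w_shape * np.mean((d - meas_d) ** 2)

def genetic_fit(strains, meas_f, meas_d, pop=40, gens=100, bounds=(0.0, 10.0)):
    lo, hi = bounds
    population = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        scores = np.array([objective(p, strains, meas_f, meas_d) for p in population])
        parents = population[np.argsort(scores)[: pop // 2]]             # selection
        idx_a = rng.integers(0, len(parents), pop - len(parents))
        idx_b = rng.integers(0, len(parents), pop - len(parents))
        children = 0.5 * (parents[idx_a] + parents[idx_b])               # arithmetic crossover
        children += rng.normal(0.0, 0.1, children.shape)                 # mutation
        population = np.vstack([parents, np.clip(children, lo, hi)])
    return population[np.argmin([objective(p, strains, meas_f, meas_d) for p in population])]

# Synthetic "measurements" generated with known parameters, then recovered:
strains = np.linspace(0.0, 0.5, 50)
true_f, true_d = simulate((2.0, 5.0), strains)
print(genetic_fit(strains, true_f, true_d))
```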

Author(s)/Presenter(s):
Arsen Abdulali, Kyung Hee University, South Korea
Ibragim Atadjanov, Kyung Hee University, South Korea
Seungkyu Lee, Kyung Hee University, South Korea
Seokhee Jeon, Kyung Hee University, South Korea