• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Visitor Pass
  • Exhibitor Pass

Date: Monday, November 18th
Time: 9:00am - 6:00pm
Venue: Great Hall Foyer


[Animation] A Method to Create Fluttering Hair Animations That Can Reproduce Animator's Techniques

Speaker(s): Naoaki Kataoka, Tokyo University of Science, Japan
Tomokazu Ishikawa, Toyo University, Prometech CG Research, Japan
Ichiro Matsuda, Tokyo University of Science, Japan

Naoaki Kataoka is a Master's student at Tokyo University of Science. He received his B.E. degree in electrical engineering from Tokyo University of Science in 2018. His main research interests cover animation and computer graphics.

Tomokazu Ishikawa is an Associate Professor at Toyo University. He received his B.S. degree in Information Science from Tokyo University of Science in 2003, and his M.S. and Ph.D. in Complexity Science and Engineering from the University of Tokyo in 2005 and 2012, respectively. He worked as a system engineer at IBM Japan, Ltd. until 2007. His research interests center on computer graphics, including visual simulation.

Ichiro Matsuda received his B.E., M.E. and Ph.D. degrees in electrical engineering from Tokyo University of Science in 1991, 1993 and 1996, respectively. Since 2014, he has been a professor in the Department of Electrical Engineering, Tokyo University of Science. His main research interests cover image and video coding and processing.

Description: We propose a method, based on an animator's technique, to create animations of objects fluttering in the wind, such as hair and flags.


[Animation] Human Motion Denoising Using Attention-Based Bidirectional Recurrent Neural Network

Speaker(s): Seong Uk Kim, Kangwon National University, South Korea
Hanyoung Jang, NCSOFT, South Korea
Jongmin Kim, Kangwon National University, South Korea

I am a BS student at Kangwon National University, Korea. I am interested in capturing and editing human motion using deep learning frameworks.

Hanyoung Jang, Ph.D. is the team leader of Creative AI team at NCSOFT, a game company. He has researched in the fields of computer graphics, GPGPU, robotics, and AI. Currently, his primary research interest lies in natural character animation via deep learning.

I am currently an Assistant Professor at the Department of Computer Science, Kangwon National University (KNU), Korea. I was a research professor at the Institute for Embedded Software, Hanyang University in 2015 and a mocap software developer and researcher at Weta Digital in 2016, and also worked with Weta Digital as a mocap consultant in 2017. My current research interests include computer graphics and animation, deep learning, and numerical optimization.

Description: In this paper, we propose a novel method of denoising human motion using a deep learning framework. Our method can be used in motion capture applications as a post-processing step.


[Animation] Method to Make 3DCG Movement to Anime-Style Using Animation Technique

Speaker(s): Kei Kitahata, Hokkaido University, Japan
Yuji Sakamoto, Hokkaido University, Japan

Kei KITAHATA received the B.S. degree in Engineering from Hokkaido University, Japan in 2018. Currently, he is a master's student at the Graduate School of Information Science and Technology, Hokkaido University. He has been engaged in research on computer graphics.

Yuji SAKAMOTO received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from Hokkaido University, Japan in 1983, 1985, and 1988, respectively. In the same year, he joined Hitachi, Ltd. From 1994 to 2000, he was an associate professor at the Department of Electrical Engineering, Muroran Institute of Technology. In 2000, he joined the Graduate School of Engineering, Hokkaido University as an associate professor. Currently, he is a professor at the Graduate School of Information Science and Technology, Hokkaido University. He has been engaged in research on computer-generated holograms, 3D image processing, computer graphics, and digital wireless communication.

Description: Anime images are now often created using 3DCG. We focused on the 3-D movement that arises when creating anime-style 3DCG. Based on experimental results, we proposed a method to reduce this 3-D movement, producing results that look more anime-style.


[Animation] Search Space Reduction In Motion Matching by Trajectory Clustering

Speaker(s): Gwonjin Yi, NCSOFT, South Korea
Junghoon Jee, NCSOFT, South Korea

Developer at NCSOFT

Technical Director at NCSOFT

Description: We propose a novel method for solving the minimum cost search problem in the motion matching technique. This method performs better than the preceding methods while selecting a natural pose.


[Geometry and Modeling] A Method of Making Wound Molds for Prosthetic Makeup using 3D Printer

Speaker(s): Yoon-Seok Choi, Electronics and Telecommunications Research Institute (ETRI), South Korea
Soonchul Jung, Electronics and Telecommunications Research Institute (ETRI), South Korea
Jin-Seo Kim, Electronics and Telecommunications Research Institute (ETRI), South Korea

Yoon-Seok Choi is a Principal Researcher at the Creative Contents Research Division, Electronics and Telecommunications Research Institute, South Korea. His diverse research interests include non-photorealistic rendering, real-time rendering, virtual reality, machine learning, evolutionary computation and image analysis. He received his PhD in Engineering from Seoul National University, Korea in 2008, and completed his BS and MS degrees in Computer Engineering at Chung-Ang University in 1996 and 1998, respectively.

Soonchul Jung received the BS degree from KAIST in 1998, and the MS and PhD degrees in computer engineering from Seoul National University, in 2000 and 2006, respectively. He was a research engineer at Korea Telecom from 2006 to 2012. Since 2012, he has worked for Electronics and Telecommunications Research Institute in Korea. His research interests include machine learning, evolutionary computation, computer vision, and virtual reality.

Jin Seo Kim received his BS in electronics engineering from Incheon National University, Incheon, Rep. of Korea, in 1991, his MS in electrical engineering from the Polytechnic Institute of NYU, New York, NY, USA, in 1993, and his PhD in color science from the University of Leeds, Leeds, UK, in 2009. He joined ETRI, Daejeon, Rep. of Korea, in 1993, and he is currently a managing director in the Affective Interaction Research Group. His current research interests include color science, cross media color reproduction, digital image processing, and image quality enhancement.

Description: Conventionally, to make wound props, artists first carve a wound sculpture in oil clay, make a wound mold by pouring plaster over the finished sculpture, and obtain the wound props by pouring silicone into the mold. However, it takes a lot of time and effort to learn how to handle the materials and to acquire wound-carving techniques. This paper suggests a simple and rapid way for users to create a wound mold model from a wound image and to print it using a 3D printer; our method provides easy-to-use capabilities for wound mold production.


[Geometry and Modeling] A Wavelet Energy Decomposition Signature for Robust Non-Rigid Shape Matching

Speaker(s): Yiqun Wang, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Jianwei Guo, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Jun Xiao, University of Chinese Academy of Sciences, China
Dongming Yan, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China

Yiqun Wang is a Ph.D. candidate at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA) and University of Chinese Academy of Sciences (UCAS). Before that, he received his bachelor's degree from Chongqing University in 2016. His research interests include computer graphics, shape analysis, and pattern recognition. He is a student member of CCF, ACM, and IEEE.

Jianwei Guo is an associate researcher in the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA). He received his Ph.D. degree in computer science from the Institute of Automation, Chinese Academy of Sciences in 2016, and a bachelor's degree from Shandong University in 2011. His research interests include computer graphics, geometry processing, and 3D vision.

Jun Xiao is now a professor at the University of Chinese Academy of Sciences, Beijing. He obtained his Ph.D. degree in communication and information systems from the Graduate University of the Chinese Academy of Sciences, Beijing, in 2008. His research interests include computer graphics, computer vision, image processing, and 3D reconstruction. He is a senior member of CCF.

Dong-Ming Yan received his Ph.D. degree in computer science from Hong Kong University, Hong Kong, in 2010, and his Bachelor’s and Master’s degrees in computer science from Tsinghua University, Beijing, in 2002 and 2005, respectively. He is currently a professor at the National Laboratory of Pattern Recognition of the Institute of Automation, Chinese Academy of Sciences, Beijing. His research interests include computer graphics and geometric processing. He is a member of CCF, ACM, and IEEE.

Description: We present a novel local shape descriptor, named the wavelet energy decomposition signature (WEDS), for robustly matching non-rigid 3D shapes with incompatible shape structures, such as different resolutions, triangulations, and transformations.


[Geometry and Modeling] Color-Based Edge Detection on Mesh Surface

Speaker(s): Yi-Jheng Huang, Yuan Ze University, Taiwan

Yi-Jheng Huang received the BS degree from the Department of Computer Science and Information Engineering, National Dong Hwa University, Taiwan, in 2007, and the MS and PhD degrees in computer science from National Chiao Tung University, Taiwan, in 2009 and 2017, respectively. Now, she is an assistant professor in the Department of Information Communication, Yuan Ze University. Her research interests include computer graphics and virtual reality.

Description: An algorithm for detecting edges based on the color of a mesh surface is proposed.


[Geometry and Modeling] Computational Design and Fabrication of 3D Wire Bending Art

Speaker(s): Yinan Wang, The University of Tokyo, Japan
Xi Yang, The University of Tokyo, Japan
Tsukasa Fukusato, The University of Tokyo, Japan
Takeo Igarashi, The University of Tokyo, Japan

Yinan Wang received her master's degree from the Department of Creative Informatics at The University of Tokyo. Her research interests are geometry processing and HCI.

Xi Yang is currently a project assistant professor in the Graduate School of Information Science and Technology at The University of Tokyo. He received the BE degree from the College of Information Engineering at Northwest A&F University in 2012, and the ME and DE degrees from the Graduate School of Engineering at Iwate University. His research interests include geometric processing, visualization, and deep learning.

Tsukasa Fukusato is currently an assistant professor in the Computer Science Department at The University of Tokyo. He received a Ph.D. degree from the Department of Pure and Applied Physics at Waseda University in 2017. His main research areas are computer graphics and human-computer interaction, including computational geometry.

Takeo Igarashi is a professor in the Computer Science Department at The University of Tokyo. He received his Ph.D. degree from the Information Engineering Department, The University of Tokyo, in 2000. His research interest is in user interfaces in general; his current focus is on interaction techniques for 3D graphics.

Description: We introduce a computer-assisted framework for manually creating 3D wire bending art from given 3D models.


[Geometry and Modeling] Computing 3D Clipped Voronoi Diagrams on GPU

Speaker(s): Xiaohan Liu, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Dong-Ming Yan, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China

Xiaohan Liu received his Bachelor's degree in software engineering from Nanjing University of Aeronautics and Astronautics in 2017. He is currently a Master's student at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA) and the University of the Chinese Academy of Sciences (UCAS). His research interests include computer graphics and geometric processing. He is a student member of CCF and ACM.

Dong-Ming Yan received his Ph.D. degree in computer science from Hong Kong University, Hong Kong, in 2010, and his Bachelor’s and Master’s degrees in computer science from Tsinghua University, Beijing, in 2002 and 2005, respectively. He is currently a professor at the National Laboratory of Pattern Recognition of the Institute of Automation, Chinese Academy of Sciences, Beijing. His research interests include computer graphics and geometric processing. He is a member of CCF, ACM, and IEEE.

Description: An efficient GPU algorithm to compute 3D clipped Voronoi diagrams with respect to a tetrahedral mesh in parallel.


[Geometry and Modeling] Multi-directional 3D Printing with Strength Retention

Speaker(s): Yupeng Guan, Beijing University of Technology, China
Yisong Gao, Beijing University of Technology, China
Lifang Wu, Beijing University of Technology, China
Kejian Cui, Institute of Automation, Chinese Academy of Sciences, China
Jianwei Guo, Institute of Automation, Chinese Academy Of Sciences, China
Zechao Liu, Beijing University of Technology, China

Yupeng Guan is a postgraduate in Electronics and Communication Engineering, Faculty of Information Technology, Beijing University of Technology. He received his BE degree from Beijing Information Science and Technology University in 2018. His current research interest is 3D printing.

Yisong Gao is a postgraduate in Electronics and Communication Engineering, Faculty of Information Technology, Beijing University of Technology, China. He received the B.E. degree from Beijing University of Technology in 2017. His current research interest is geometry processing for 3D printing.

Lifang Wu received the B.S., M.S., and Ph.D. degrees from the Beijing University of Technology, Beijing, China, in 1991, 1994, and 2003, respectively. She is currently a professor with Beijing University of Technology. Her research interests include 3D printing, social recommendation, face encryption and deep learning-based video analysis.

Dr. Kejian Cui, who majored in Applied Chemistry and Polymer Chemistry, received his PhD from Beijing Institute of Technology in 2015. Since 2015, he has been a postdoctoral researcher at the Institute of Chemistry, Chinese Academy of Sciences (ICCAS). His research interests include medicine synthesis, 3D printing, materials preparation, and graphene applications. At present, he is conducting research on “Blue laser-induced photopolymerization for 3D printing” and “Transparent electric heating graphene glass and its application in hot photo-curable 3D printing”. He has published more than 30 journal and conference papers, and has applied for more than 10 Chinese patents as principal investigator.

Jianwei Guo is an associate researcher in the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA). He received his Ph.D. degree in computer science from the Institute of Automation, Chinese Academy of Sciences in 2016, and a bachelor's degree from Shandong University in 2011. His research interests include computer graphics, geometry processing, and 3D vision.

Zechao Liu is a postgraduate in Electronics and Communication Engineering, Faculty of Information Technology, Beijing University of Technology. He received his B.E. degree from Beijing University of Technology. His current research interest is 3D printing.

Description: We propose a refined scheme and system that improves the bonding strength of objects printed by multi-directional 3D printers by introducing a CO2 laser.


[Human-Computer Interaction] Code Weaver: A Tangible Programming Learning Tool with Mixed Reality Interface

Speaker(s): Ren Sakamoto, Ritsumeikan University, Japan
Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan

Ren Sakamoto received a bachelor's degree of Image Arts and Sciences from Ritsumeikan University, Japan, in 2018. He is now a master course student of Graduate School of Image Arts in Ritsumeikan University. His research interests include mixed reality technologies, interactive learning environments, and design of edutainments.

Toshikazu Ohshima has been a professor at Ritsumeikan University since 2006. In 1991, he received a Doctorate from University of Tsukuba and joined Canon Inc., where he worked on the research of virtual reality. From 1997, he worked as a senior researcher in a national research project on mixed reality (MR) and began serving as a section chief at the MR division in Canon Inc. from 2001. His research interests include MR technologies and their deployment in education, medicine, and arts. Since 2006, he has created technological and educational artworks based on scientific and biological simulations utilizing the MR experience.

Description: We developed a tool for learning basic programming concepts, designed for elementary school children. The tool's tangible user interface can be programmed by directly combining parts that have a physical form. In this way, we attempted to resolve several typical obstacles small children encounter when learning programming languages, such as text input via a keyboard, strict syntax requirements, and the difficulties associated with group learning involving multiple participants. Interviews and demonstrations at several conferences validated the concept of this tool, which promotes the understanding of programming through cooperative learning via a tangible user interface.


[Human-Computer Interaction] Gamification in a Physical Rehabilitation Setting: Developing Proprioceptive Training Exercises for a Wrist Robot

Speaker(s): Christopher Curry, University of Minnesota, United States of America
Naveen Elangovan, University of Minnesota, United States of America
Reuben Gardos Reid, University of Minnesota, United States of America
Jiapeng Xu, University of Minnesota, United States of America
Jürgen Konczak, University of Minnesota, United States of America

Christopher Curry is a Ph.D. candidate in Kinesiology at the University of Minnesota. His research seeks to link basic science to real-world applications, with interest in using his expertise in Human Factors and Kinesiology for the development and enhancement of medical devices and products that can be utilized for preventive, diagnostic, and rehabilitative purposes. His dissertation research will investigate ways to mitigate cybersickness with VR head-mounted displays. He is particularly drawn to mitigating cybersickness in VR because he believes VR has immense practical applications. One area he wishes to dive into further is augmenting physical rehabilitation with VR.

Naveen Elangovan is a postdoctoral research associate at the University of Minnesota. He graduated with a Bachelor of Physiotherapy degree in 2006. Before moving to Minnesota for his M.S. in Kinesiology in 2009, he practiced physical therapy for about two and a half years in India. During his master's program, he majored in Biomechanics and Neural Control. In 2016, he earned his PhD in Kinesiology from the University of Minnesota. His research interests revolve around movement neuroscience and neurological rehabilitation. His current research focuses on enhancing proprioceptive function and motor performance in healthy and neurological populations using robotic technology.

Reuben Gardos Reid is an undergraduate research assistant developing software at the Human Sensorimotor Control Lab at the University of Minnesota - Twin Cities. In addition to pursuing a B.S. in Computer Science, he has spent the last three years at the lab working on the WristBot rehabilitation robot project. His research interests include applied machine learning and how it can be used in both the medical and environmental sciences.

Jiapeng Xu is a Ph.D. student in the Human Sensorimotor Control Laboratory at the University of Minnesota. His current work focuses specifically on the development of robot-aided rehabilitation devices. His research interests include neural engineering, more specifically brain-computer interfaces (BCI). He has four years of experience developing motor imagery-based BCIs. He is currently pursuing a Ph.D. degree that focuses on movement neuroscience. He believes that this clinical research experience will give him a deeper understanding of the mechanisms of movement, which will aid his future career goals.

Jürgen Konczak is a Full Professor in the School of Kinesiology at the University of Minnesota. He is the head of the Human Sensorimotor Control Laboratory. Dr. Konczak’s research focuses on the neurophysiology and biomechanics of human motor function in neurological populations such as ataxia, Parkinson's disease and dystonia. His research has been funded by the U.S. National Institutes of Health, the U.S. National Science Foundation, the German Science Foundation and the European Commission.

Description: This project describes ongoing efforts to develop a game to accompany a robot-aided wrist proprioceptive training exercise. Proprioception is an essential sense that aids the neural control of movement.


[Human-Computer Interaction] HaptoBOX: Multi-Sensory Physical Interface for Mixed Reality Experience

Speaker(s): Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan
Kiichiro Kigawa, Ritsumeikan University, Japan

Toshikazu Ohshima has been a professor at Ritsumeikan University since 2006. In 1991, he received a Doctorate from University of Tsukuba and joined Canon Inc., where he worked on the research of virtual reality. From 1997, he worked as a senior researcher in a national research project on mixed reality (MR) and began serving as a section chief at the MR division in Canon Inc. from 2001. His research interests include MR technologies and their deployment in education, medicine, and arts. Since 2006, he has created technological and educational artworks based on scientific and biological simulations utilizing the MR experience.

Kiichiro Kigawa received a bachelor's degree in Image Arts and Sciences from Ritsumeikan University, Kyoto, Japan, in 2019. He is now a master's course student at the Ritsumeikan University Graduate School of Image Arts. His research interest is augmenting multi-sensory reality based on a visually unified experience with a high level of consistency between the real and virtual worlds.

Description: This study proposes an interface device for augmenting multisensory reality based on a visually unified experience with a high level of consistency between the real and virtual worlds, using video see-through mixed reality (MR). When the user puts on an MR head-mounted display (HMD) and holds a box-shaped device, virtual objects are displayed within the box, and vibrations and reaction forces are presented in synchrony with the dynamics of the objects. Furthermore, the user can also hear the sound emitted by the virtual objects through 3D sound localization.


[Human-Computer Interaction] HinHRob: A Performance Robot for Glove Puppetry

Speaker(s): Huahui Liu, School of Informatics, Xiamen University, China
Yingying She, School of Informatics, Xiamen University, China
Lin Lin, School of Art, Xiamen University, China
Shizhang Chen, Jinjiang Hand Puppet Art Protection and Inheritance Center, China
Jin Chen, School of Informatics, Xiamen University, China
Xiaomeng Xu, School of Informatics, Xiamen University, China
Jiayu Lin, School of Informatics, Xiamen University, China

Huahui Liu received his bachelor's degree from Fuzhou University in 2018. Currently, he is studying for a master's degree in digital media technology at Xiamen University, China. His main research direction is human-computer interaction.

Yingying She, PhD, graduated from the Department of Computer Science and Software Engineering, Concord University, Canada. She currently works as an associate professor in the School of Informatics, Xiamen University. Her main research interests are human-computer interaction, artificial intelligence, and virtual reality. In recent years, she has led a research team devoted to intangible cultural heritage projects, focusing on combined research in "Technology and Arts".

Lin Lin received her Bachelor of Fine Arts degree from Xiamen University in 2000 and her Master of Fine Arts degree from Dutch Art Institute in 2003. She is now an associate professor in Art College of Xiamen University and a visiting scholar in University of Maryland, Baltimore County, USA.

Shizhang Chen, Vice Chairman of the Jinjiang Federation of Literary and Art Circles, Director of the Jinjiang Hand Puppet Art Protection and Inheritance Center, Director of the Culture Department of the Jinjiang Culture and Sports Bureau, and Chairman of the Jinjiang Dramatists Association, is committed to keeping the intangible cultural heritage alive.

Jin Chen is majoring in digital media technology at Xiamen University; his professional courses include virtual reality and human-computer interaction. His research direction is HCI based on the combination of software and hardware. His projects include the design of an emotion-based robotic cognition system and the mechanical control of a marionette.

Xiaomeng Xu is from Xiamen University and majors in Human-Computer Interaction.

Jiayu Lin is from Xiamen University and majors in Human-Computer Interaction.

Description: We developed a glove puppetry performance robot named HinHRob. It delivers a balance of art and technology, and promotes the protection and inheritance of intangible cultural heritage.


[Human-Computer Interaction] Interaction Method using Party Horns for Multiple Users

Speaker(s): Rina Ito, Aichi Institute of Technology, Japan
Shinji Mizuno, Aichi Institute of Technology, Japan

Rina Ito is currently a third-year student in the Faculty of Information Science, Aichi Institute of Technology. Her research interests include computer graphics and interactive content.

Shinji Mizuno is currently a professor at the Faculty of Information Science, Aichi Institute of Technology. He received the Ph.D. degree from Nagoya University in 1999. He was a Research Associate at Toyohashi University of Technology from 2000 to 2009. His research interests include computer graphics, image processing, virtual reality, and interactive arts.

Description: We propose a method that uses party horns as an interface for interactive content, and we create action games for multiple players that use party horns as the user interface.


[Human-Computer Interaction] PondusHand: Measure User’s Weight Feeling by Photo Sensor Array around Forearm

Speaker(s): Satoshi Hosono, Waseda University, Japan
Shoji Nishimura, Waseda University, Japan
Ken Iwasaki, H2L Inc., Japan
Emi Tamaki, Waseda University, Japan

Satoshi Hosono is a master's student at Waseda University and an intern at H2L Inc.

Shoji Nishimura is a professor at Waseda University, Tokyo, Japan. His current research interests include educational technology, especially education and the Internet. He received his bachelor's degree in mathematics from Waseda University, his MSc in applied physics from Waseda University, and his PhD in Human Sciences from Osaka University. In 1991, he joined the Advanced Research Center, INES Corporation as a senior researcher. Currently, he is a professor in the Faculty of Human Sciences, Waseda University. He is a member of the Japan Society for Educational Technology, the Japanese Society for Information and Systems in Education, and the Information Processing Society of Japan.

Ken Iwasaki received his B.S. (Biochemistry) and M.S. (Computer Science) in 2008 and 2010, respectively. His main research area is Human-Computer Interaction. As a graduate student, he was selected as a creator in the IPA Exploratory IT Human Resources Project (The MITOH Program), where he learned R&D and project management. After graduating, he gained experience in business consulting at Accenture and in R&D management at RIKEN (Brain Science Institute). In 2012, he co-founded H2L, Inc. with Dr. Tamaki and Dr. Kamada. He draws on his unique experience in both business and research to turn research-stage technologies into products.

Emi Tamaki researches haptic and physical interaction between computers and humans. Her goal is to share rich body experiences by facilitating information exchange between humans and computers. She received her Ph.D. in Interdisciplinary Informatics from the University of Tokyo in 2012, where she also received the President’s Award. In 2012, she co-founded H2L, Inc. H2L developed UnlimitedHand, a commercial version of PossessedHand. A new research project and product, “FirstVR”, which collaborates with NTT docomo 5G, was released in 2019.

Description: PondusHand measures weight feeling from the deformation of the forearm muscles, as measured by a photo sensor array (MAE: 150 g). Compared to EMG, the sensor is less affected by electrical noise and sweat.


[Human-Computer Interaction] Reinforcement of Kinesthetic Illusion by Simultaneous Multi-Point Vibratory Stimulation

Speaker(s): Keigo Ushiyama, The University of Electro-Communications, Japan
Satoshi Tanaka, The University of Electro-Communications, Japan
Akifumi Takahashi, The University of Electro-Communications, Japan
Hiroyuki Kajimoto, The University of Electro-Communications, Japan

Keigo Ushiyama is an undergraduate student at The University of Electro-Communications. His research interests include haptic interfaces, human-computer interaction and virtual reality.

Satoshi Tanaka received the B.S. degree in engineering from the University of Electro-Communications in 2019. He is a graduate student at the University of Electro-Communications. His research interests include haptic interfaces, human-computer interaction and virtual reality.

Akifumi Takahashi is a PhD student at the University of Electro-Communications, Japan. His research interests are tactile displays, tactile sensors, human interfaces and virtual reality.

Hiroyuki Kajimoto received a PhD in information science and technology from The University of Tokyo in 2006. He is currently a professor in the Department of Informatics at the University of Electro-Communications, Japan. His research interests are tactile displays, tactile sensors, human interface and virtual reality. He is a member of IEEE and ACM.

Description: Our investigation finds a change in the intensity of kinesthetic illusion when multiple points on the synergist muscles for arm extension are stimulated. In some participants, the kinesthetic illusion is strengthened by increasing the number of stimulated points. However, further investigation of the effect of increasing the number of stimulated points is needed.


[Human-Computer Interaction] Sense of non-presence: Visualization of invisible presence

Speaker(s): Takuya Mikami, Sapporo City University, Japan
Min Xu, Sapporo City University, Japan
Kaori Yoshida, Sapporo City University, Japan
Kosuke Matsunaga, Sapporo City University, Japan
Jun Fujiki, Sapporo City University, Japan

Takuya Mikami is a graduate student at Sapporo City University (graduate school of design). He is interested in human-computer interaction (using the human senses of touch, perception, and cognition).

Min Xu is a graduate student at Sapporo City University (graduate school of design). She is interested in traditional Japanese art and learning Japanese culture.

Kaori Yoshida is a graduate student at Sapporo City University (graduate school of design). She is interested in computer graphics and researches how to express Japanese anime.

Kosuke Matsunaga (born in Kanagawa, Japan, in Nov. 1978) is an Assistant Professor at Sapporo City University, Japan. He received his master of design degree from Kyushu Institute of Design, Japan, in 2004, and his PhD from Kyushu University in 2015. He researches human deformation modeling using programming based on mathematics and physics, and he creates original geometric puzzle games.

Jun Fujiki is an associate professor at Sapporo City University. He is looking for new relations between expression and principle, and examining the laws of humans and of physics.

Description: This is a device that enables visualization of the movement of an invisible creature. We intend to allow viewers to recognize specific movements from particles blown up one by one.


[Image & Video Processing] Animation Video Resequencing with a Convolutional AutoEncoder

Speaker(s): Shang-Wei Zhang, National Cheng-Kung University, Taiwan
Charles C. Morace, National Cheng-Kung University, Taiwan
Thi Ngoc Hanh Le, National Cheng-Kung University, Taiwan
Chih-Kuo Yeh, National Cheng-Kung University, Taiwan
Sheng-Yi Yao, National Cheng Kung University, Taiwan
Shih-Syun Lin, National Taiwan Ocean University, Taiwan
Tong-Yee Lee, National Cheng-Kung University, Taiwan

Shang-Wei Zhang is currently a master's student in the Department of Computer Science and Engineering, National Cheng Kung University, Taiwan. His research interest is computer graphics.

Charles C. Morace received the BS degree in computer science and mathematics from the University of Rhode Island in 2012. He began graduate study in the Department of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan, ROC, in 2014, where he is currently a master's student. He is a member of the Computer Graphics Group of the Visual System Laboratory. His research interests include computer graphics, visualization, and digital map generation.

Thi Ngoc Hanh Le received her master's degree from Ho Chi Minh University of Science, Vietnam National University, Vietnam, in 2014. She began graduate study in the Department of Computer Science and Information Engineering, National Cheng Kung University, Taiwan, ROC, in 2018, where she is currently a Ph.D. student in the Computer Graphics Group of the Visual System Laboratory. Her research interests include computer graphics, visualization, and media processing.

Chih-Kuo Yeh received the BS degree from the Department of Information Engineering and Computer Science, Feng Chia University, in 2005, the MS degree from the Institute of Bioinformatics, National Chiao Tung University, in 2007, and the PhD degree from the Department of Computer Science and Information Engineering, National Cheng-Kung University, Taiwan, in 2015. He is currently a postdoctoral researcher with National Cheng-Kung University. His research interests include scientific visualization, computer animation, and computer graphics.

Sheng-Yi Yao received the Bachelor of Information Management degree from National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan, in 2017, and the Master of Information Management degree from National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan, in 2019. He is working towards the Ph.D. degree in Multimedia Systems and Intelligent Computing at National Cheng Kung University, Tainan, Taiwan. His research interest is computer graphics.

Shih-Syun Lin received the PhD degree in computer science and information engineering from National Cheng-Kung University, Taiwan, in 2015. He was a postdoctoral fellow in the Computer Graphics Group of the Visual System Laboratory (CGVSL), National Cheng-Kung University, Taiwan, from 2015 to 2016. He is currently an assistant professor in the Department of Computer Science and Engineering, National Taiwan Ocean University, Taiwan, where he leads the Intelligent Graphics Laboratory (http://igl.cse.ntou.edu.tw). His research interests include computer graphics, information visualization, and pattern recognition. He is a member of the IEEE and the ACM.

Tong-Yee Lee received the PhD degree in computer engineering from Washington State University, Pullman, in May 1995. He is currently a chair professor with the Department of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan, ROC. He leads the Computer Graphics Group, Visual System Laboratory, National Cheng-Kung University (http://graphics.csie.ncku.edu.tw/). His current research interests include computer graphics, nonphotorealistic rendering, medical visualization, virtual reality, and media resizing. He is a senior member of the IEEE Computer Society and a member of the ACM.

Description: Given an unordered collection of images, our system selects suitable in-between images for a set of key-frames, or synthesizes new animation sequences that are locally “as smooth as possible”.
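The resequencing idea in the description can be illustrated with a simple stand-in: assuming per-frame latent embeddings (e.g., from a convolutional autoencoder) are already computed, a locally smooth ordering can be sketched as a greedy nearest-neighbor walk in latent space. The `resequence` helper below is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def resequence(embeddings, start=0):
    """Greedily order frames so consecutive latent vectors are close,
    yielding a locally 'as smooth as possible' sequence (illustrative only)."""
    n = len(embeddings)
    remaining = set(range(n)) - {start}
    order = [start]
    while remaining:
        last = embeddings[order[-1]]
        # pick the unused frame whose latent code is nearest to the last one
        nxt = min(remaining, key=lambda i: np.linalg.norm(embeddings[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# toy latent codes along a 1D path, shuffled
codes = np.array([[0.0], [2.0], [1.0], [3.0]])
print(resequence(codes))  # → [0, 2, 1, 3]
```

A real system would replace the toy codes with autoencoder bottleneck vectors and could use a globally optimal path (e.g., a shortest Hamiltonian path approximation) instead of the greedy walk.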


[Image & Video Processing] Balance-Based Photo Posting

Speaker(s): Yu Song, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences; University of Chinese Academy of Sciences, China
Fan Tang, Fosafer, China
Weiming Dong, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences, China
Feiyue Huang, Youtu Lab, Tencent, China
Changsheng Xu, National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences, China

Yu Song received the master's degree in control science and engineering from Tianjin University in 2017. She is currently a PhD candidate at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. Her research involves computer graphics, computer vision, and machine learning.

Fan Tang received the PhD degree from National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences in 2019. He received the BSc degree in computer science from North China Electric Power University in 2013. He is currently a researcher at Fosafer. His research interests include image synthesis and image recognition.

Weiming Dong is a Professor in the Sino-European Lab in Computer Science, Automation and Applied Mathematics (LIAMA) and National Laboratory of Pattern Recognition (NLPR) at Institute of Automation, Chinese Academy of Sciences. He received the BEng and MEng degrees in Computer Science in 2001 and 2004, both from Tsinghua University, China. He received the PhD degree in information technology from the University of Lorraine, France, in 2007. His research interests include visual media synthesis and image recognition. Weiming Dong is a member of the ACM and IEEE.

Feiyue Huang is the director of Youtu Lab, Tencent. He received his BSc and PhD degrees in Computer Science in 2001 and 2008, both from Tsinghua University, China. His research interests include image understanding and face recognition.

Changsheng Xu is a Professor in the National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, and Executive Director of the China-Singapore Institute of Digital Media. His research interests include multimedia content analysis/indexing/retrieval, pattern recognition, and computer vision. He holds 30 granted/pending patents and has published over 200 refereed research papers in these areas. He has served as associate editor, guest editor, general chair, program chair, area/track chair, special session organizer, session chair, and TPC member for over 20 prestigious IEEE and ACM multimedia journals, conferences, and workshops. He is an IEEE Fellow, IAPR Fellow, and ACM Distinguished Scientist.

Description: This paper focuses on online photo posting. By analyzing a photo's color and content, we formulate a balance metric and use it to optimize the final result.


[Image & Video Processing] Diverse Layout Generation for Graphical Design Magazines

Speaker(s): Sou Tabata, Dai Nippon Printing Co., Ltd, Japan
Haruka Maeda, Dai Nippon Printing Co., Ltd, Japan
Keigo Hirokawa, Dai Nippon Printing Co., Ltd, Japan
Kei Yokoyama, A+U Publishing Co., Ltd, Japan

Sou Tabata is a research developer of Dai Nippon Printing Co., Ltd. in Tokyo, Japan. He joined Dai Nippon Printing Co., Ltd. in 2005. He has been working in research and development of image processing, machine learning, and their applications and systems. His current research and development fields are computer vision, image processing, and machine learning.

Haruka Maeda is a research developer at Dai Nippon Printing Co., Ltd. in Tokyo, Japan. She joined Dai Nippon Printing Co., Ltd. in 2018 and has been working on the development of machine learning applications and systems. Her current research and development fields are machine learning, natural language processing, and applications of graph databases. She majored in neuroscience in graduate school.

Keigo Hirokawa is a researcher at Dai Nippon Printing Co., Ltd. in Tokyo, Japan. In recent years, he has been working on automatic layout generation for magazines. His research areas include computer graphics, computer vision, and machine learning.

Kei Yokoyama is a senior editor of the architecture magazine a+u, published by A+U Publishing Co., Ltd., in Tokyo, Japan. He joined A+U Publishing Co., Ltd. in 2007, helped to establish a+u's Singapore branch, and has been engaged in the editing of Shinkenchiku, a magazine published by Shinkenchiku-sha, an affiliate of a+u. He is currently involved in Shinkenchiku-sha's digital platform project for architecture.

Description: We propose a system that automatically generates layouts for magazines that require graphical design. When images or texts are input as content, the system automatically generates several appropriate and diverse layouts. The automation makes layout creation much more efficient for users such as graphic designers, and it allows the user to choose from a wide range of ideas to create attractive layouts.


[Image & Video Processing] Generation of Photorealistic QR Codes

Speaker(s): Shih-Syun Lin, National Taiwan Ocean University, Taiwan
Yu-Fan Chang, National Taiwan Ocean University, Taiwan
Thi Ngoc Hanh Le, National Cheng Kung University, Taiwan
Sheng-Yi Yao, National Cheng Kung University, Taiwan
Tong-Yee Lee, National Cheng Kung University, Taiwan

Shih-Syun Lin received the PhD degree in computer science and information engineering from National Cheng-Kung University, Taiwan, in 2015. He was a postdoctoral fellow in the Computer Graphics Group of the Visual System Laboratory (CGVSL), National Cheng-Kung University, Taiwan, from 2015 to 2016. He is currently an assistant professor in the Department of Computer Science and Engineering, National Taiwan Ocean University, Taiwan, where he leads the Intelligent Graphics Laboratory (http://igl.cse.ntou.edu.tw). His research interests include computer graphics, information visualization, and pattern recognition. He is a member of the IEEE and the ACM.

Yu-Fan Chang received a B.S. degree in computer science and engineering from National Taiwan Ocean University, Taiwan, in 2018. He currently is a master student in the Department of Computer Science and Engineering, National Taiwan Ocean University, Taiwan. His research interests include computer graphics and information visualization.

Thi Ngoc Hanh Le received her master's degree from Ho Chi Minh University of Science, Vietnam National University, Vietnam, in 2014. She began graduate study in the Department of Computer Science and Information Engineering, National Cheng Kung University, Taiwan, ROC, in 2018, where she is currently a Ph.D. student in the Computer Graphics Group of the Visual System Laboratory. Her research interests include computer graphics, visualization, and media processing.

Sheng-Yi Yao received the B.I.M. degree from National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan, in 2017, and the M.I.M. degree from National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan, in 2019. He is working towards the Ph.D. degree in Multimedia Systems and Intelligent Computing at National Cheng Kung University, Tainan, Taiwan. His research interest is computer graphics.

Tong-Yee Lee received the Ph.D. degree in computer engineering from Washington State University, Pullman, in May 1995. He is currently a chair professor in the Department of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan, ROC. He leads the Computer Graphics Group, Visual System Laboratory, National Cheng-Kung University (http://graphics.csie.ncku.edu.tw/). His current research interests include computer graphics, nonphotorealistic rendering, medical visualization, virtual reality, and media resizing. He is a senior member of the IEEE Computer Society and a member of the ACM.

Description: In this paper, we propose a practical method for generating photorealistic QR codes without compromising the scannability/readability of the QR code.


[Image & Video Processing] Real-time Table Tennis Forecasting System based on Long Short-term Pose Prediction Network

Speaker(s): Erwin Wu, Tokyo Institute of Technology, Japan
Florian Perteneder, Tokyo Institute of Technology, Japan
Hideki Koike, Tokyo Institute of Technology, Japan

Erwin Wu is a PhD candidate at the School of Computing, Tokyo Institute of Technology. He received his B.Sc. in computer science from Shanghai Jiao Tong University and his M.Sc. in engineering from Tokyo Institute of Technology. He also spent a year in Kyoto as an exchange student and will become a visiting researcher at Carnegie Mellon University. He is keen on learning different languages and cultures, and has already mastered five languages. His research interests include artificial intelligence, computer vision, vision-based HCI, and mixed reality.

Florian Perteneder is an HCI researcher currently investigating innovative ways of learning sports as a postdoc at the Tokyo Institute of Technology. In 2018, he received a doctoral degree from the Johannes Kepler University Linz after receiving a master’s and a bachelor’s degree from the University of Applied Sciences Upper Austria. Florian was a visiting researcher at the University of Calgary and the University of Canterbury in 2011 and 2009, respectively. He published and showcased several award-winning works at international conferences (e.g. CHI, UIST, TEI, SIGGRAPH Emerging Technologies) in the domain of large interactive whiteboards and interactive environments.

Hideki Koike is a professor at School of Computing, Tokyo Institute of Technology. He received his BE, ME, and Dr. Eng. from the University of Tokyo in 1986, 1988, and 1991, respectively. After working at University of Electro-Communications, Tokyo, he joined Tokyo Institute of Technology in 2014. He was a visiting researcher at UC Berkeley and University of Sydney in 1994 and 2003, respectively. His research interests include vision-based HCI, projector-camera systems, human augmentation, digital sports, and cyber security.

Description: This is a deep learning-based real-time table tennis serve forecasting system that uses only a single RGB camera. The system can help train table tennis skills and increase users' motivation.


[Image & Video Processing] User-friendly Interior Design Recommendation

Speaker(s): Akari Nishikawa, Ryukoku University, Japan
Keiko Ono, Ryukoku University, Japan
Mitsunori Miki, Doshisha University, Japan

Akari Nishikawa received the B.E. degree in electronic informatics from Ryukoku University, Kyoto, Japan, in 2018. She is a student in the Graduate School of Science and Technology, Ryukoku University. Her research interests include image processing and machine learning.

Keiko Ono received her BS, MS, and PhD degrees in engineering from Doshisha University, Kyoto, Japan, in 2001, 2003, and 2007, respectively. In April 2009, she joined Doshisha University as an Assistant Professor in the Organization for Research Initiatives and Development. In April 2010, she joined Ryukoku University, Kyoto, Japan, where she is currently an Associate Professor in the Department of Electronics and Informatics. Her research interests include parallel computing, evolutionary optimization, and machine learning. She is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Information Processing Society of Japan (IPSJ), and the Japanese Society for Evolutionary Computation (JPNSEC).

Mitsunori Miki received his MS and PhD degrees in engineering from Osaka City University, Osaka, Japan, in 1974 and 1978, respectively. In April 1981, he joined Kanazawa Institute of Technology as an Associate Professor in the Faculty of Engineering. In April 1994, he joined Doshisha University, Kyoto, Japan, where he is currently a Professor in the Department of Information and Computer Science. His research interests include parallel computing, office environments, and lighting control. He is a member of the Institute of Electrical and Electronics Engineers (IEEE), the American Institute of Aeronautics and Astronautics (AIAA), and the Information Processing Society of Japan (IPSJ).

Description: We propose a novel interior design recommendation method that takes into account combinations of furniture based on each user's preferences.


[Imaging & Rendering] A Low Cost Multi-Camera Array for Panoramic Light Field Video Capture

Speaker(s): Michael Broxton, Google Inc., United States of America
Jay Busch, Google, United States of America
Jason Dourgarian, Google, United States of America
Matthew DuVall, Google, United States of America
Daniel Erickson, Google, United States of America
Dan Evangelakos, Google, United States of America
John Flynn, Google, United States of America
Ryan Overbeck, Google, United States of America
Matt Whalen, Google, United States of America
Paul Debevec, Google, United States of America

Michael Broxton is a research scientist at Google specializing in light field imaging, view synthesis, inverse problems, and optics. He holds a PhD from Stanford University, where he specialized in light field microscopy. Prior to that he worked at NASA Ames Research Center, where he built automated stereogrammetry and photogrammetry software for mapping Mars and the Moon.

Jay Busch is a technical artist and mechanical engineer at Google.

Jason Dourgarian is a technical artist at Google.

Matthew DuVall is a technical artist at Google.

Daniel Erickson is a software engineer at Google.

Dan Evangelakos is an MBA candidate at Harvard Business School and a former software engineer at Google.

John Flynn is a software engineer at Google.

Ryan Overbeck is a software engineer at Google.

Matt Whalen is a hardware engineer and technical program manager at Google.

Paul Debevec is a Senior Scientist at Google VR and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies.

Description: We present a portable multi-camera system for recording panoramic light field video content. The proposed system is capable of capturing wide-baseline (0.8 meters), high-resolution (>15 pixels per degree), large-field-of-view (>220 degrees) light fields at 30 frames per second. The array contains 47 time-synchronized cameras distributed on the surface of a hemispherical plastic dome. We use commercially available action sports cameras (Yi 4k) mounted inside the dome. The dome, mounts, triggering hardware, and cameras are inexpensive, and the array is easy to fabricate.


[Imaging & Rendering] Computational Spectral-Depth Imaging with a Compact System

Speaker(s): Mingde Yao, University of Science and Technology of China, China
Zhiwei Xiong, University of Science and Technology of China, China
Lizhi Wang, Beijing Institute of Technology, China
Dong Liu, University of Science and Technology of China, China
Xuejin Chen, University of Science and Technology of China, China

Mingde Yao received the B.S. degree in Department of Automation, College of Information Science and Engineering, Northeastern University, China, in 2018. He is currently pursuing the M.S. degree in Dept. Electronic Engineering & Information Science, University of Science and Technology of China (USTC). His research interests include computational photography and low-level vision.

Zhiwei Xiong received his B.S. and Ph.D. degrees in Electronic Engineering from USTC in 2006 and 2011, respectively. After working at Microsoft Research Asia (MSRA) as a Researcher for five years, he returned USTC as a Professor in 2016. His research interests include computational photography, low-level vision, and biomedical image analysis.

Lizhi Wang received the BS and PhD degrees from Xidian University, in 2011 and 2016, respectively. He is now a research professor with the Beijing Institute of Technology. He was a research intern with Microsoft Research Asia from 2013 to 2016. His research interests include compressive sensing, computational photography, and image processing.

Dong Liu received the B.S. and Ph.D. degrees in electrical engineering from the University of Science and Technology of China (USTC), Hefei, China, in 2004 and 2009, respectively. He was a Member of Research Staff with Nokia Research Center, Beijing, China, from 2009 to 2012. He joined USTC as an Associate Professor in 2012. He had visited Microsoft Research Asia from July 2005 to July 2008, and from December 2015 to August 2016. His research interests include image and video coding, multimedia signal processing, and multimedia data mining.

Xuejin Chen received her B.S. and Ph.D. degrees from the Department of Electronic Science and Technology, University of Science and Technology of China (USTC), in 2003 and 2008, respectively. Her Ph.D. supervisor was Prof. Heung-Yeung Shum. She interned with the Visual Computing Group at Microsoft Research Asia (MSRA) from July 2004 to 2008. From 2008 to 2010, she worked with Prof. Julie Dorsey and Prof. Holly Rushmeier on sketching-related projects at the Computer Graphics Lab at Yale. She visited Stanford University and worked in Prof. Leonidas Guibas's lab as a visiting associate professor from Feb. 2017 to Aug. 2017.

Description: Relying on an efficient computational reconstruction algorithm based on deep learning instead of customized hardware, our system enables simultaneous acquisition of spectral and depth information in real time at high resolution.


[Imaging & Rendering] Fast, memory efficient and resolution independent rendering of cubic Bezier curves using tessellation shaders

Speaker(s): Harish Kumar, Adobe, India
Anmol Sud, Adobe, India

I am a Computer Scientist at Adobe India. Over the last few years I have worked on accelerating Illustrator's GPU rendering pipeline and have served as a key contributor on the project. I have designed and implemented several algorithms for rendering Bezier curves, path rendering, and PDF transparency composition on GPUs. I have a solid understanding of GPU architecture and GPU rendering frameworks such as OpenGL and Metal, and I am continuing efforts to improve performance in Illustrator's rendering workflows.

I have been a part of the team developing the GPU rendering pipeline of Illustrator for three years now. Over these three years I have developed a significant understanding of GPU architecture. I have also taken multiple machine learning courses during this period and developed a thorough understanding of the same.

Description: A novel technique for rendering cubic Bezier curves that orchestrates the graphics pipeline to approximate cubic Beziers with quadratic Beziers on GPU hardware, performing 5-10x faster than existing techniques.
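The cubic-to-quadratic step can be illustrated with the standard midpoint-matching approximation, which may differ in detail from the technique presented here: the quadratic control point is C = (3(P1 + P2) − (P0 + P3)) / 4, and the cubic is subdivided until the known error bound √3/36 · ‖P3 − 3P2 + 3P1 − P0‖ falls below tolerance. The sketch below is CPU-side Python; the talk's contribution is doing this on the GPU via tessellation shaders.

```python
import numpy as np

def split_cubic(p, t=0.5):
    """de Casteljau subdivision of a cubic Bezier (4x2 array) into two cubics."""
    p0, p1, p2, p3 = p
    a, b, c = (1-t)*p0 + t*p1, (1-t)*p1 + t*p2, (1-t)*p2 + t*p3
    d, e = (1-t)*a + t*b, (1-t)*b + t*c
    m = (1-t)*d + t*e
    return np.array([p0, a, d, m]), np.array([m, e, c, p3])

def cubic_to_quadratics(p, tol=0.01, depth=8):
    """Approximate a cubic with quadratics via midpoint matching."""
    p0, p1, p2, p3 = p
    # known bound on the distance between the cubic and its quadratic stand-in
    err = (3**0.5 / 36) * np.linalg.norm(p3 - 3*p2 + 3*p1 - p0)
    if err <= tol or depth == 0:
        c = (3*(p1 + p2) - (p0 + p3)) / 4
        return [np.array([p0, c, p3])]
    left, right = split_cubic(p)
    return (cubic_to_quadratics(left, tol, depth-1)
            + cubic_to_quadratics(right, tol, depth-1))
```

A degree-elevated quadratic is recovered exactly as a single segment, while a wiggly cubic subdivides into several quadratics.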


[Imaging & Rendering] Focus stacking by multi-viewpoint focus bracketing

Speaker(s): Yucheng Qiu, Shibaura Institute of Technology, Japan
Daisuke Inagaki, Shibaura Institute of Technology, Japan
Kenji Kohiyama, Keio University, Japan
Hiroya Tanaka, Keio University, Japan
Takashi Ijiri, Shibaura Institute of Technology, Japan

Mr. Yucheng Qiu is an undergraduate student at Shibaura Institute of Technology, Tokyo, Japan. His research interests include image processing and 3D digitization.

Mr. Daisuke Inagaki is an undergraduate student at Shibaura Institute of Technology, Tokyo, Japan. His research interests include interactive systems and 3D digitization.

Prof. Kenji Kohiyama is a photographer and an Emeritus Professor at Keio University SFC, Kanagawa, Japan. His research covers 3D Scanning and 3D digitization.

Prof. Hiroya Tanaka is a Professor at Keio University SFC, Kanagawa, Japan. He received BA in Human Integrated Studies, MA in Human and Environment Studies, and Ph.D in Engineering (the University of Tokyo). His research covers 3D scanning, 3D printing, Creative support system and Digital Fabrication.

Prof. Takashi Ijiri is an associate professor at Shibaura Institute of Technology, Tokyo, Japan. He received his BA in information science (Tokyo Institute of Technology), and his MA and Ph.D. in information science and technology (the University of Tokyo). His research covers 3D digitization, image segmentation, and interactive systems.

Description: An approach to obtaining high-quality focus-stacking images by using depth maps generated by a multi-view structure-from-motion algorithm.
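Focus stacking itself can be sketched with a basic sharpness-selection rule: per pixel, keep the value from the bracketed frame with the strongest local Laplacian response. This sketch ignores the depth maps that are the point of the approach above; the function names are illustrative.

```python
import numpy as np

def laplacian(img):
    """Simple 4-neighbour Laplacian magnitude as a per-pixel sharpness cue."""
    lap = -4.0 * img
    lap += np.roll(img, 1, 0) + np.roll(img, -1, 0)
    lap += np.roll(img, 1, 1) + np.roll(img, -1, 1)
    return np.abs(lap)

def focus_stack(images):
    """For each pixel, keep the value from the frame with the highest
    local sharpness (a basic focus-stacking rule; no depth map used here)."""
    stack = np.stack(images)                 # (n, H, W)
    sharp = np.stack([laplacian(im) for im in stack])
    best = np.argmax(sharp, axis=0)          # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real pipelines smooth the per-pixel decision map (or, as here, constrain it with depth) to avoid halo artifacts at focus transitions.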


[Imaging & Rendering] Fundus imaging using DCRA toward large eyebox

Speaker(s): Yui Atarashi, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan
Kazuki Otao, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan
Takahito Aoto, Digital Nature Group, University of Tsukuba, Japan
Yoichi Ochiai, Digital Nature Group, University of Tsukuba; Pixie Dust Technologies, Inc., Japan

Yui Atarashi was born in Hyogo prefecture, Japan, in 2000. She graduated from Sumagakuen Junior & Senior High School. Since entering as a freshman in 2018, she has belonged to the Digital Nature Group at the University of Tsukuba, hosted by Associate Professor Yoichi Ochiai. Her research interests include optics and 3D printing. Mail: yuiatarashi(-at-)digitalnature.slis.tsukuba.ac.jp

Kazuki Otao was born in 1996. After studying computer science and electronic engineering at the National Institute of Technology, Tokuyama College, from 2012, he entered the College of Media Arts, Science and Technology at the University of Tsukuba in 2017. He has won prizes in many contests, including the U-22 programming contest and the national college of technology programming contest. He is an expert researcher on aerial imaging systems in the Digital Nature Group. https://kazukiotao.com Mail: kazuki.otao(at)digitalnaturegroup.slis.tsukuba.ac.jp

Takahito Aoto has been a researcher at the University of Tsukuba, Japan, since 2018. He was a researcher at the Nara Institute of Science and Technology (NAIST), Japan, in 2016, and at the National Institute of Informatics (NII), Japan, in 2017. He received his M.S. and Ph.D. degrees in Engineering from NAIST in 2012 and 2015. His research interests include applied optics, computational photography, and fabrication. He is a member of OSA, CVF, and ACM.

Yoichi Ochiai was born in 1987 and received his PhD in Applied Computer Science from the Graduate School of Interdisciplinary Information Studies, University of Tokyo, completing the degree in two years, the fastest on record. In 2015 he joined the School of Library, Information and Media Studies at the University of Tsukuba as an Assistant Professor, via a JSPS Research Fellowship (DC1) and a research internship at Microsoft Research Redmond. He is the head of the Digital Nature Laboratory and CEO of Pixie Dust Technologies, Inc. From 2017 to 2019 he served as Advisor to the President of the University of Tsukuba. In 2017 he established the Strategic Research Platform toward Digital Nature, becoming Director of the Platform and Associate Professor at the University of Tsukuba.

Description: We propose a novel fundus imaging method using a dihedral corner reflector array (DCRA). The proposed method achieves wavelength independence, robustness to eye movement, and a simple optical system.


[Imaging & Rendering] Modelling scene data for render time estimation

Speaker(s): Harsha K. Chidambara, DNEG, United Kingdom

The author is a Technical Director at DNEG, London. He has over 5 years of experience in both post-production and feature animation. Prior to joining DNEG, he worked at Iloura (Method) and DreamWorks. He has a keen interest in machine learning and data analysis.

Description: A supervised learning approach to estimating the render times of images in post-production. This technique builds a learning model from scene and rendering parameters, using previously rendered 3D scenes.
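The supervised-learning idea can be sketched as an ordinary least-squares fit over scene parameters. The feature set, training numbers, and `predict_minutes` helper below are hypothetical stand-ins, not DNEG's model, which would use richer features and a stronger regressor.

```python
import numpy as np

# Hypothetical scene features: [polygon count (millions), light count, samples/pixel]
X = np.array([[1.0, 2, 16], [2.0, 4, 16], [4.0, 2, 32], [8.0, 8, 64]])
y = np.array([12.0, 22.0, 45.0, 130.0])   # render minutes (made-up training data)

# Fit a linear model t ≈ Xw + b via ordinary least squares
A = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_minutes(features):
    """Estimate render time for a new scene's feature vector."""
    return float(np.append(features, 1.0) @ w)
```

In practice, render time scales nonlinearly with many parameters, so a tree ensemble or neural regressor trained on a large farm-render history would replace the linear fit.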


[Imaging & Rendering] Parallel Adaptive Frameless Rendering with Nvidia OptiX

Speaker(s): Chung-Che Hsiao, North Carolina State University, United States of America
Benjamin Watson, North Carolina State University, United States of America

Chung-Che Hsiao is a PhD student in computer science at North Carolina State University. He received his master's degree at National Tsing Hua University in Taiwan. He is currently doing research in the Visual Experience Lab at North Carolina State University. His research interests include real-time rendering, ray tracing, virtual reality, and game engines. His current research focuses on adaptive rendering, foveated rendering, and alternative displays. He will be exploring faster ways to produce high-quality imagery that take advantage of the human visual system at an affordable computational load.

Benjamin Watson is Associate Professor of Computer Science at North Carolina State University. His Visual Experience Lab focuses on the engineering of visual meaning, and works in the fields of graphics, visualization, interaction and user experience. His work has been applied in entertainment, security, finance, education, and medicine. Watson co-chaired the Graphics Interface 2001, IEEE Virtual Reality 2004 and ACM Interactive 3D Graphics and Games (I3D) 2006 conferences, and was co-program chair of I3D 2007. Watson is an ACM and senior IEEE member. He earned his doctorate at Georgia Tech's Graphics, Visualization and Usability Center.

Description: We implement an adaptive frameless renderer that runs in parallel and interactively with NVIDIA OptiX. This real-time rendering scheme discards the traditional frame to achieve improved visual accuracy and lower latency.


[Imaging & Rendering] Rendering Point Clouds with Compute Shaders

Speaker(s): Markus Schütz, TU Wien, Austria
Michael Wimmer, TU Wien, Austria

Markus Schütz finished his master's thesis on point-cloud rendering in web browsers (Potree) in 2016 and is currently working on his PhD thesis on improving the performance and quality of point-cloud rendering, with a focus on VR and web browsers.

Michael Wimmer is an Associate Professor at the Institute of Computer Graphics and Algorithms at TU Wien, where he heads the Rendering and Modeling Group and is the director of the Center for Geometry and Computational Design. His academic career started with his M.Sc. in 1997 at TU Wien, where he obtained his Ph.D. in 2001. His research interests are real-time rendering, computer games, real-time visualization of urban environments, point-based rendering, procedural modeling and shape modeling. He has coauthored over 100 papers in these fields. He also coauthored the book Real-Time Shadows.

Description: A compute-shader-based point cloud rasterizer with up to 10 times higher performance than classic point-based rendering with the GL_POINTS primitive.
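The description above hinges on replacing fixed-function point rasterization with atomic scatter in a compute shader. As an illustration only (the bit layout and packing scheme here are our assumptions, not necessarily the authors'), the core idea can be emulated in NumPy: pack each point's depth into the high bits and its color into the low bits of one 64-bit word, then keep the per-pixel minimum, which a GPU kernel would do with a single atomicMin.

```python
import numpy as np

def rasterize_points(xy, depth, rgb, w, h):
    """Emulate the GPU trick: pack depth (high bits) and RGB color (low
    bits) into one 64-bit word per point, then keep the minimum per pixel.
    A compute shader does the same with a single 64-bit atomicMin."""
    # quantize depth in [0, 1] to 24 bits; pack RGB888 into the low bits
    d = (np.clip(depth, 0.0, 1.0) * ((1 << 24) - 1)).astype(np.uint64)
    r, g, b = (rgb[:, i].astype(np.uint64) for i in range(3))
    packed = (d << np.uint64(32)) | (r << np.uint64(16)) | (g << np.uint64(8)) | b

    # framebuffer initialized to "infinitely far" (all ones)
    fb = np.full(w * h, np.iinfo(np.uint64).max, dtype=np.uint64)
    idx = xy[:, 1] * w + xy[:, 0]
    np.minimum.at(fb, idx, packed)        # per-pixel atomicMin equivalent
    return (fb & np.uint64(0xFFFFFFFF)).reshape(h, w)   # unpack colors
```

Because depth occupies the most significant bits, the minimum packed value per pixel automatically carries the color of the nearest point.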


[Imaging & Rendering] Sound Propagation Considering Atmospheric Inhomogeneity and Ground Effects

Speaker(s): Jin Liu, Tianjin University, China
Shiguang Liu, Tianjin University, China

Jin Liu received the B.S. degree from the School of Science, North University of China, Taiyuan, P.R. China, in 2017, and is currently working toward the M.S. degree at the Division of Intelligence and Computing, Tianjin University, P.R. China. Her current research focuses on sound propagation simulation.

Shiguang Liu received the Ph.D. degree from State Key Lab of CAD & CG, Zhejiang University, P.R. China. He is currently a professor in the Division of Intelligence and Computing, Tianjin University, P.R. China. His research interests include computer graphics, multimedia, visualization, and virtual reality, etc.

Description: This paper proposes an improved FDTD-PE method to simulate sound propagation in 3D outdoor scenes. In the simulation, the ground and atmosphere are modeled as porous and inhomogeneous media, respectively.
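As a hedged illustration of the FDTD half of such a hybrid solver (the PE coupling, porous ground, and inhomogeneous atmosphere are omitted, and all parameters are toy values, not the paper's), a minimal 1D acoustic FDTD update looks like this:

```python
import numpy as np

def fdtd_1d(n=200, steps=300, c=343.0, dx=1.0):
    """Toy 1D acoustic FDTD on a staggered pressure/velocity grid.
    Rigid (zero-velocity) boundaries; toy parameters throughout."""
    dt = 0.5 * dx / c            # satisfies the CFL stability condition
    rho = 1.2                    # air density, kg/m^3
    p = np.zeros(n)              # pressure at cell centers
    v = np.zeros(n + 1)          # particle velocity at cell faces
    p[n // 2] = 1.0              # impulse source in the middle
    for _ in range(steps):
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        p -= dt * rho * c * c / dx * (v[1:] - v[:-1])
    return p
```

The full method couples such a near-field FDTD region with a parabolic-equation (PE) solver for efficient long-range propagation.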


[Virtual Reality, Augmented Reality, and Mixed Reality] 360-Degree-Viewable Tabletop Light-Field 3D Display Having Only 24 Projectors

Speaker(s): Shunsuke Yoshida, National Institute of Information and Communications Technology, Japan

Shunsuke Yoshida is a senior researcher at the Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan. He received his PhD in Human Informatics from Nagoya University in 2001. He was a researcher at the Advanced Telecommunications Research Institute International (ATR) from 2002 to 2010. He has been a researcher at NICT since 2006. His work focuses on natural human-machine interfaces, such as glasses-free 3D displays and haptic devices for interacting with virtual environments. His research interests include the application of real-time computer graphics for practical use.

Description: Conventional light-field methods of producing 3D images having circular parallax on a tabletop surface require several hundred projectors. Our novel approach produces a similar light field using only 1/10 the number of projectors. In our method, two cylindrical mirrors are inserted in the projection light paths. By appropriately folding the paths with the mirrors, we form any viewpoint image in an annular viewing area from a group of rays sourced from all projectors arranged on a circle.


[Virtual Reality, Augmented Reality, and Mixed Reality] AUDIOZOOM: Location-Based Sound Delivery System

Speaker(s): Chinmay Rajguru, University of Sussex, United Kingdom
Daniel Blaszczak, University of Sussex, United Kingdom
Arash PourYazdan, University of Sussex, United Kingdom
Thomas J. Graham, University of Sussex, United Kingdom
Gianluca Memoli, University of Sussex, United Kingdom

Chinmay is a first-year Ph.D. student at the University of Sussex, UK. He received his Master of Science degree in Computer Science from Southern Illinois University, USA, in 2017. His ambition is to find new ways to give an additional dimension to digital media through Human-Computer Interaction. He wants to explore the effect of sound and other elements on users in digital entertainment.

Daniel graduated in 2019 from the School of Engineering and Informatics, University of Sussex.

Arash Pouryazdan received his BEng degree in Electrical and Electronic Engineering from the University of Sussex, UK, in 2013 and his Ph.D. in Engineering from the same university in 2018. His current research interests include non-contact electric potential sensing and microscopy, novel flexible sensors, wearable electronics, and innovative acoustic interfaces.

Thomas J. Graham received his Ph.D. in Physics at the University of Exeter. He started working as a research fellow in Acoustic Metamaterials at the University of Sussex.

Gianluca is a Lecturer at the University of Sussex and has been working in acoustics for 15 years. He conceived the vision that sound could be managed like light.

Description: Just like focusing light with a projector, we create a focus of sound that follows a user, tracked with a Kinect. Useful in public spaces and VR.
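Whatever hardware delivers the sound, focusing it at a point requires the wavefronts from all emitters to arrive in phase there. A minimal delay-and-sum sketch (our own illustration, not the authors' implementation) computes per-speaker delays for a given focal point; re-running it with a new Kinect-tracked position makes the focus follow the user:

```python
import numpy as np

def focusing_delays(speakers, focus, c=343.0):
    """Per-speaker firing delays so that all wavefronts reach `focus`
    simultaneously. speakers: (n, 3) positions, focus: (3,) target."""
    tof = np.linalg.norm(speakers - focus, axis=1) / c  # time of flight
    return tof.max() - tof     # farthest speaker fires first (delay 0)
```

Delaying each element by the difference between the longest and its own time of flight aligns all arrivals at the focal point.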


[Virtual Reality, Augmented Reality, and Mixed Reality] DeepMobileAR: A Mobile Augmented Reality Application with Integrated Visual SLAM and Object Detection

Speaker(s): Suwoong Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Seungjae Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Keundong Lee, Electronics and Telecommunications Research Institute (ETRI), South Korea
Jong Gook Ko, Electronics and Telecommunications Research Institute (ETRI), South Korea

Suwoong Lee is a senior researcher at ETRI. He joined the creative contents research division at ETRI in 2007 and has researched image processing and human-machine interaction technologies for augmented reality services. His current research interest is efficient deep learning and its mobile applications. He and his team participated in the Low Power Image Recognition Challenge (1st place in 2018).

Seungjae Lee is a principal researcher at ETRI. He joined the creative content research division at ETRI in 2005 and has researched content identification, classification, and retrieval systems. He and his team participated in visual-search-related challenges such as the ImageNet challenge (classification and localization: 5th place in 2016; detection: 3rd place in 2017), Google Landmark Retrieval (8th place in 2018), and the Low Power ImageNet Recognition Challenge (1st place in 2018).

Keundong Lee is a senior researcher at ETRI. He joined the creative content research division at ETRI in 2009 and has researched object recognition and image retrieval systems. He and his team participated in visual-search-related challenges such as the ImageNet challenge (classification and localization: 5th place in 2016; detection: 3rd place in 2017) and Google Landmark Retrieval (8th place in 2018).

Jong Gook Ko is a principal researcher at ETRI. He joined the security research division at ETRI in 2000. His current research interest is object recognition technologies using deep learning. He and his team participated in visual-search-related challenges such as the ImageNet challenge (detection: 3rd place in 2017), Google Landmark Retrieval (8th place in 2018), and the Low Power ImageNet Recognition Challenge (1st place in 2018).

Description: We confirm the feasibility of a real-time, integrated AR and deep-learning service on consumer mobile devices, both indoors and outdoors.


[Virtual Reality, Augmented Reality, and Mixed Reality] Investigating the Role of Task Complexity in Virtual Immersive Training (VIT) Systems

Speaker(s): Konstantinos Koumaditis, Aarhus University, Denmark
Francesco Chinello, Aarhus University, Denmark

Konstantinos Koumaditis's research aims to understand the interplay between Information Technology (IT) and humans, and to explore the many opportunities and challenges we face in our professional settings to interact in a harmonious and beneficial way. With studies spanning Information Systems, management, and healthcare to Immersive Technologies, his research tries to tackle real-life challenges and create frameworks for a wider audience of stakeholders. Currently, as an associate professor at BTECH, Aarhus University, Denmark, and director of the Extended Reality and Robotics (xR2) lab, he enjoys guiding the new generation of engineers and researching innovative technologies.

He was born on the 5th of February 1984 in Grosseto, Italy. He received his M.S. degree in Computer Engineering (with a focus on Robotics and Industrial Automation) in 2010 from the University of Siena. He defended his PhD, on "Wearability in Robotics: Developing Cutaneous Devices for Haptic Stimuli", in February 2014 at the University of Siena, Italy, and the Italian Institute of Technology, Genova, Italy. He has been a guest summer school student at Arizona State University, SBHSE (Phoenix, Arizona). He is currently an Assistant Professor in the Department of Business Development and Technology, Aarhus School of Business and Social Science, at Herning, Denmark.

Description: The focus of this research is to introduce the concept of Training Task Complexity (TTCXB) in the design of Virtual Immersive Training (VIT) systems. In this report, we describe the design parameters, the experimental design, and initial results.


[Virtual Reality, Augmented Reality, and Mixed Reality] Lucciola: Presenting Aerial Images by Generating a Fog Screen at any Point in the Same 3D Space as a User

Speaker(s): Takahiro Kusabuka, NTT Service Evolution Laboratories, Japan
Shinichiro Eitoku, NTT Service Evolution Laboratories, Japan

He received his B.E. and M.S. in engineering from Ritsumeikan University in 2010 and 2012, respectively. He joined NTT in 2012. His research interests include human interfaces. He has studied technical support systems, interaction using taste, and AR services using transparent displays.

He received the B.S. degree in Engineering from The University of Tokyo in 2004, and the M.S. and Ph.D. degrees in Information Science and Technology from The University of Tokyo in 2006 and 2013, respectively. He joined Nippon Telegraph and Telephone Corporation in 2006. His main interests and activities are information systems and multimedia systems for communications.

Description: We propose a method of presenting an aerial image at any point in the same three-dimensional space as a user by colliding vortex rings, whose remaining particles form a fog screen.


[Virtual Reality, Augmented Reality, and Mixed Reality] Method for Estimating Display Lag in the Oculus Rift S and CV1

Speaker(s): Jason Feng, UNSW Sydney, Australia
Juno Kim, UNSW Sydney, Australia
Wilson Luu, UNSW Sydney, Australia
Stephen Palmisano, University of Wollongong, Australia

Jason is an undergraduate student enrolled in the Materials Science and Engineering (Hons)/Master of Biomedical Engineering program at the University of New South Wales (UNSW Sydney). He concurrently conducts research at the Sensory Processes Research Laboratory (UNSW Sydney), focusing on multisensory perception through virtual reality. Jason has a keen passion for Computer Aided Design (CAD) and its application in 3D printing and fabrication technologies. He applies his fabrication ability to research into display technologies, including HMDs. Jason's current work involves latency testing and benchmarking VR systems to assess the impact of display lag on cybersickness.

Juno is the lead researcher of the Sensory Processes Research Laboratory, University of New South Wales (UNSW Sydney). He conducts research examining the effects of multisensory integration on human performance and perception. One recent area of interest has been to understand how display lag affects perception of self-motion, presence and cybersickness in virtual environments. Dr Kim is also devoted to promoting health technology innovation (HTI) throughout the Asia Pacific region. His collective works have appeared in top outlets, including Neurology, Journal of Vision, Current Biology, Proceedings of the National Academy of Sciences, and Nature Neuroscience.

Wilson Luu graduated from UNSW with a Bachelor of Optometry (Hons) in 2015. Since graduating, Wilson worked as a staff optometrist in rural NSW and metropolitan Sydney, and he travelled overseas to provide eye care to the Nepalese Everest community. Wilson held positions on the Young Optometrists NSW/ACT executive team and has been a speaker at several of their events. He is currently exploring the application of virtual reality in patients with eye disease at the Centre for Eye Health. He has interests in ocular pathology and advancements in technology to be used in eye care.

Stephen Palmisano is internationally recognised as an expert on self-motion perception and vection (i.e., the illusion of self-motion). Over the last 20 years he has shown that stereoscopic motion, changes in optical size, viewpoint jitter and eye-movements all play important roles in vection. Stephen is also well known for his research on stereoscopic depth perception, postural control when standing, and motion sickness. In the last decade he has focussed this expertise into his research on simulation and virtual reality (including his studies on simulation based mines rescue training and cybersickness from head-mounted displays).

Description: Cybersickness is the major "kill factor" limiting the uptake of HMD VR. This work shows how the display lag inherent in even the most recent systems can contribute to adverse effects.


[Virtual Reality, Augmented Reality, and Mixed Reality] Pop-up digital tabletop: seamless integration of 2D and 3D visualizations in a tabletop environment

Speaker(s): Daisuke Inagaki, Shibaura Institute of Technology, Japan
Yucheng Qiu, Shibaura Institute of Technology, Japan
Raku Egawa, Shibaura Institute of Technology, Japan
Takashi Ijiri, Shibaura Institute of Technology, Japan

Mr. Daisuke Inagaki is an undergraduate student at Shibaura Institute of Technology, Tokyo, Japan. His research interests include interactive systems and 3D digitization.

Mr. Yucheng Qiu is an undergraduate student at Shibaura Institute of Technology, Tokyo, Japan. His research interests include image processing and 3D digitization.

Mr. Raku Egawa is an undergraduate student at Shibaura Institute of Technology, Tokyo, Japan. His research interests include interaction in 3D space.

Prof. Takashi Ijiri is an associate professor at Shibaura Institute of Technology, Tokyo, Japan. He received a B.A. in information science (Tokyo Institute of Technology), and an M.A. and Ph.D. in information science and technology (the University of Tokyo). His research covers 3D digitization, image segmentation, and interactive systems.

Description: We propose a pop-up digital tabletop system that seamlessly integrates two-dimensional (2D) and three-dimensional (3D) representations of contents in a digital tabletop environment.


[Virtual Reality, Augmented Reality, and Mixed Reality] Stealth Projection: Visually Removing Projectors from Dynamic Projection Mapping

Speaker(s): Masumi Kiyokawa, The University of Electro-Communications, Japan
Shinichi Okuda, The University of Electro-Communications, Japan
Naoki Hashimoto, The University of Electro-Communications, Japan

Masumi Kiyokawa is currently a master's student at the Graduate School of Informatics and Engineering, the University of Electro-Communications. She received a Bachelor of Informatics from Bunkyo University in 2019. Her research interests include virtual reality, augmented reality, and mid-air imaging techniques. She is a member of ACM SIGGRAPH.

Shinichi Okuda is currently an undergraduate student at the Faculty of Informatics and Engineering, the University of Electro-Communications. His research interests include virtual reality, augmented reality, and mid-air imaging techniques.

Naoki Hashimoto is currently an associate professor at the Graduate School of Informatics and Engineering, the University of Electro-Communications. He received a Doctor of Engineering from Tokyo Institute of Technology in 2001. His research interests include virtual reality, image projection techniques, spatial augmented reality, interactive media techniques and human-machine interface. He is a member of ACM SIGGRAPH, the Institute of Electronics, Information and Communication Engineers, the Virtual Reality Society of Japan, and the Institute of Image Information and Television Engineers.

Description: In this study, we propose a stealth projection method that visually removes the ProCam system from dynamic projection mapping by using an aerial image display technique and IR camera-based markerless tracking.


[Virtual Reality, Augmented Reality, and Mixed Reality] Virtual Immersive Educational Systems: Early Results and Lessons Learned

Speaker(s): Francesco Chinello, Aarhus University, Denmark
Konstantinos Koumaditis, Aarhus University, Denmark

He was born on the 5th of February 1984 in Grosseto, Italy. He received his M.S. degree in Computer Engineering (with a focus on Robotics and Industrial Automation) in 2010 from the University of Siena. He defended his PhD, on "Wearability in Robotics: Developing Cutaneous Devices for Haptic Stimuli", in February 2014 at the University of Siena, Italy, and the Italian Institute of Technology, Genova, Italy. He has been a guest summer school student at Arizona State University, SBHSE (Phoenix, Arizona). He is currently an Assistant Professor in the Department of Business Development and Technology, Aarhus School of Business and Social Science, at Herning, Denmark.

Konstantinos Koumaditis's research aims to understand the interplay between Information Technology (IT) and humans, and to explore the many opportunities and challenges we face in our professional settings to interact in a harmonious and beneficial way. With a background spanning Information Systems, management, and healthcare to Immersive Technologies, his research tries to tackle real-life challenges and create frameworks for a wider audience of stakeholders. Currently, as an associate professor at BTECH, Aarhus University, Denmark, and director of the Extended Reality and Robotics (xR2) lab, he enjoys guiding the new generation of engineers and researching innovative technologies.

Description: Higher education is embracing digital transformation at a relatively slow adoption rate, with few fragmented solutions that portray the capabilities of new immersive technologies like Virtual Reality (VR). One may argue that deployment costs and the substantial level of design knowledge required are critical stagnation factors in creating effective Virtual Immersive Educational (VIE) systems. We attempt to address these impediments with a cost-effective and user-friendly VIE system. In this paper, we briefly report the main elements of this design, initial results, and lessons learned.


[Visualization] BookVIS: Enhancing Browsing Experiences in Bookstores and Libraries

Speaker(s): Zona Kostic, Harvard University, United States of America

My name is Zona Kostic and I'm a research fellow at Harvard University. I actively participate in research projects that combine information visualization and machine learning into intelligent Web systems, and collaborate intensively with researchers and professors inside and outside of the US. I have been a peer reviewer for numerous high-impact scientific journals as well as a committee member for some of the most prestigious conferences. I have also served as a management committee member or participant in multiple EU projects. I have published six books and a number of research works in scientific journals and conferences (IEEE and ACM).

Description: BookVIS is a visual "companion" that recognizes a book's cover and instantly uses it to provide users with information about the book. After snapping a photo of the book, the app generates a visual dashboard that users may use for further exploration. The system allows users to identify and discover more information on books from pictures of book covers taken on their smartphones. With the presented system and its interplay between the real and digital worlds, new avenues could be opened for adaptive visualizations.


[Visualization] Eye-Tracking Based Adaptive Parallel Coordinates

Speaker(s): Mohammad Chegini, Graz University of Technology, Nanyang Technological University, Austria
Keith Andrews, Graz University of Technology, Austria
Tobias Schreck, Graz University of Technology, Austria
Alexei Sourin, Nanyang Technological University, Singapore

My name is Mohammad Chegini, and I am currently a PhD student and researcher at the Institut für Computer Graphik und Wissensvisualisierung at TU Graz. I received my bachelor's and master's degrees in information technology from the Sharif University of Technology and the University of Tehran, respectively. In 2017, I enrolled in a joint PhD programme between TU Graz in Austria and NTU in Singapore. My research interests are visual analytics, HCI, and game technologies. Before starting my research career, I was a co-founder of a gaming company called Hafsang.

Keith Andrews is a tenured associate professor at the Institute of Interactive Systems and Data Science (ISDS) at Graz University of Technology, in Austria. He has a B.Sc.(Hons) in Mathematics and Computer Science from the University of York, England, and an M.Sc. and Ph.D. in Technical Mathematics/Computer Science from Graz University of Technology. Having worked in the fields of computer graphics, 3d virtual worlds, hypermedia, and the web, he is currently pursuing research in the fields of information visualisation and usability.

Since 2015, I have been a Professor at the Institute of Computer Graphics and Knowledge Visualization at the Faculty of Computer Science and Biomedical Engineering of Graz University of Technology. My main research interests are in Visual Data Analysis and applied 3D Object Retrieval. The research questions we are tackling in our group include, among others, how large collections of data, e.g., high-dimensional and spatio-temporal data and 3D object data, can be visually explored and analysed for data understanding and decision making.

Dr. Alexei Sourin was born in Moscow and received his M.Eng. and Ph.D. degrees in computer graphics from the Moscow Engineering Physics Institute (MEPhI), Russia, in 1983 and 1988, respectively. From 1983 to 1993 he was a researcher at MEPhI, where he worked on various scientific visualization and computer animation projects. Since 1993 he has held faculty positions at Nanyang Technological University (NTU) in Singapore, except for a period from 1999 to 2000 when he was an Associate Professor at the Moscow Institute of Physics and Technology (MIPT).

Description: We demonstrate how eye-tracking can help the analyst efficiently re-order the axes in a parallel coordinates plot. The implicit input from an eye-tracker assists the system in finding unexplored dimensions.
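As a toy sketch of how implicit gaze input might drive re-ordering (this heuristic is our own illustration, not the authors' algorithm), one can accumulate dwell time per axis and move the least-viewed dimension next to the most-viewed one:

```python
def suggest_axis_order(order, dwell):
    """Toy gaze-driven re-ordering: move the least-viewed axis next to
    the most-viewed one so an unexplored dimension enters the analyst's
    focus. `dwell` maps axis name -> accumulated gaze dwell time."""
    most = max(order, key=lambda a: dwell.get(a, 0.0))
    least = min(order, key=lambda a: dwell.get(a, 0.0))
    if most == least:            # no gaze signal yet: keep current order
        return list(order)
    reordered = [a for a in order if a != least]
    reordered.insert(reordered.index(most) + 1, least)
    return reordered
```

In a real system the dwell counts would come from mapping eye-tracker fixations onto screen regions of the parallel-coordinates axes.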


[Visualization] Midair Haptic Representation for Internal Structure in Volumetric Data Visualization

Speaker(s): Tomomi Takashina, Nikon Corporation, Japan
Mitsuru Ito, Nikon Corporation, The University of Tokyo, Japan
Yuji Kokumai, Nikon Corporation, Japan

Dr. Tomomi Takashina is a technical expert at Nikon Corporation. His research and engineering work focuses on intelligent image processing and user interfaces. He is also the author of several books on the programming language R. He received a B.E. degree in communication engineering, and M.E. and Ph.D. degrees in computer science, from the University of Electro-Communications, Tokyo, Japan.

Mitsuru Ito is a researcher at Nikon Corporation and a Ph.D. student in the Graduate School of Frontier Sciences, the University of Tokyo, Japan. He received his M.S. degree in Frontier Sciences from the University of Tokyo in 2015. His research interests include midair haptic interfaces and aerial displays.

Yuji Kokumai is an R&D manager of Technology Strategy Department at Nikon Corporation. His research and engineering work focuses on image processing, 3D measurement, and user interface. He received BE and ME degrees in computer science from Hokkaido University, Japan.

Description: We propose a novel method to present the internal structure of volumetric data using midair haptics to solve the occlusion problem in volumetric visualization. The proposed method simplifies internal structures by approximating them using a Gaussian mixture model (GMM) and converts them into haptic stimuli. We applied this technique to 3D microscopy.
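The second half of the pipeline, turning a fitted GMM into a stimulus, can be sketched as evaluating the mixture density at the user's hand position and mapping it to an ultrasound amplitude. The normalization and clamping below are our own illustrative assumptions, not the paper's exact stimulus design:

```python
import numpy as np

def haptic_intensity(point, means, covs, weights):
    """Evaluate a fitted 3D GMM at the hand position `point` and map the
    density to an amplitude in [0, 1], normalized by the tallest mode.
    Overlapping modes may exceed the peak, hence the final clamp."""
    dens, peak = 0.0, 0.0
    for mu, cov, w in zip(means, covs, weights):
        norm = w / np.sqrt(((2.0 * np.pi) ** 3) * np.linalg.det(cov))
        diff = point - mu
        dens += norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
        peak = max(peak, norm)   # component's maximum possible density
    return min(dens / peak, 1.0)
```

Sweeping the hand through the volume then renders internal structure as spatially varying midair pressure, without any visual occlusion.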


[Visualization] Nanoscapes: Authentic Scales and Densities in Real-Time 3D Cinematic Visualizations of Cellular Landscapes

Speaker(s): Andrew R. Lilja, UNSW Art & Design, Australia
Shereen R. Kadir, UNSW Art & Design, Australia
Rowan T. Hughes, UNSW Art & Design, Australia
Nicholas J. Gunn, UNSW Art & Design, Australia
Campbell W. Strong, UNSW Art & Design, Australia
Benjamin J. Bailey, UNSW Art & Design, Australia
Robert G. Parton, University of Queensland, Australia
John B. McGhee, UNSW Art & Design, Australia

Having an extensive research background in biomedical science and a longstanding interest in the visual arts, Andrew works as the lead pre-rendered content developer in the lab. He specializes in developing creative solutions to complex visualizations in addition to carrying out a supportive role in the lab’s interactive VR content development.

Shereen Kadir is originally from London, UK, and has a background in Biochemistry. She worked as a post-doctoral research scientist for many years specializing in cell biology and microscopy. Having a passion for art, she transitioned into a career in medical visualization and science communication. She is particularly interested in 3D animation and VR to help scientists explain difficult concepts and hypotheses.

Rowan Hughes grew up in Galway, Ireland, an altogether wonderful place (if a bit wet). Having a PhD in Computer Science from Trinity College Dublin and experience in the VFX business, Rowan is a veteran computer scientist and researcher with a passion for computer graphics, simulation, data science and Star Wars.

Nick Gunn is the lead 3D Unity Artist at the 3D Visualisation Aesthetics Lab. With a background in architecture, game design and multimedia production, Nick's work explores new forms of communication through 3D spatiality and virtual kinaesthetics.

Campbell acts as lead technical developer in the lab. He has a background in molecular microbiology, film VFX, and making movies about molecular machines. The adoption of game and film 3D modelling software for showcasing biological systems has afforded great directorial freedom, but such tools often lack the features required to bring in raw scientific data sets and work with them in accurate and articulated ways. Campbell spends time addressing this gap by mashing his keyboard.

John Bailey is the lead Virtual Reality developer of the 3D Visualisation Aesthetics Lab. Coming from a background in the advertising and games industries, he specializes in real-time 3D worldbuilding and serves as the creative and technical lead for the lab's interactive VR projects.

Rob is Group Leader of the Cell Biology and Molecular Medicine Division at the Institute of Molecular Bioscience (University of Queensland). As a CBNS Chief Investigator and co-leader of the JTCC project, Rob and his team generate cellular and molecular imaging data in astonishing detail which forms the foundational structures of the interactive cellular VR landscapes. Rob has a keen interest in immersive visualizations of cellular imaging data and how the platform might enhance our understanding of the living world.

John McGhee is a practicing 3D Computer Artist and the Director of the 3D Visualisation Aesthetics Lab. His visual practice explores art- and design-led modes of visualising complex scientific and biomedical data using 3D computer arts approaches, most recently Virtual Reality (VR) Head Mounted Displays (HMDs). In early 2016, he was recognised as one of UNSW Sydney's 20 'Rising Stars' and emerging research leaders.

Description: Nanoscapes is a cinematic first-person interactive exploration of cellular landscapes that attempts to address the challenges of visualizing authentic molecular scales and densities in real-time.


[Visualization] Non-Euclidean Embeddings for Graph Analytics and Visualisation

Speaker(s): Daniel Filonik, EPICentre, UNSW Art & Design, Australia
Tian Feng, La Trobe University, Australia
Ke Sun, CSIRO's Data61, Australia
Richard Nock, CSIRO's Data61, Australia
Alex Collins, CSIRO's Data61, Australia
Tomasz Bednarz, EPICentre, UNSW Art & Design, Australia

Daniel Filonik is a Postdoctoral Fellow in High Performance Visualisation at the EPICentre, UNSW Art and Design, Sydney, Australia. His research interests are in Information Visualisation, Computer Graphics and Human-Computer Interaction. His current work focuses on developing natural interfaces for interactive data exploration in immersive environments. Previously, Daniel conducted his PhD research at the Urban Informatics Research Lab, QUT, Brisbane, Australia. He investigated challenges and opportunities of data visualisation and composition interfaces based on large-scale display walls. In particular, these technologies were applied to explore the notion of Participatory Data Analytics, a collaborative, community-led approach to the interpretation of data.

Dr Tian Feng is a Lecturer in Interactive Digital Media at La Trobe University. He earned his BSc degree in Geographic Information Systems from Zhejiang University in 2012, and his PhD degree in Information Systems Technology and Design from the Singapore University of Technology and Design in 2017. He was a Postdoctoral Fellow in Visualisation, Simulation and AI at UNSW EPICentre. His research interests focus on Computer Graphics, Computer Vision, and Artificial Intelligence.

Ke received a PhD in computer science from the Département d'informatique, Université de Genève, Switzerland. From 2016 to 2017, he was a postdoc at École Polytechnique in Paris and then at King Abdullah University of Science and Technology in Saudi Arabia. Since 2018, he has been a machine learning researcher at Data61 in Sydney. In 2017, Ke received the "Prix de l'excellence de l'UNIGE" from the University of Geneva.

Richard is an Adjunct Professor at the Australian National University and the University of Sydney, and a Senior Principal Researcher at Data61. He defended a PhD in computer science under the supervision of Olivier Gascuel and an accreditation to lead research (HDR) in computer science with Michel Habib. He graduated simultaneously in computer science and in agronomical engineering ("Ingénieur Agro", with majors in statistics and industrial microbiology), after two years of CPGE ("Classes Préparatoires aux Grandes Ecoles"). His interests cover Machine Learning, Privacy, and Information Geometry at large.

Alex is a research and development leader building cutting-edge graph analytics and machine learning technology for data scientists. He is skilled in research and development, delivering technology, data science, software engineering, team leadership, machine learning, and computational physics, and is a strong research, engineering, and management professional with a PhD in Physics from the University of New South Wales.

Tomasz Bednarz is an A/Professor and Director and Head of Visualisation at the Expanded Perception and Interaction Centre (EPICentre), UNSW Art & Design, and Team Leader at CSIRO's Data61 (Visual Analytics Team, Software & Computational Systems Program). He is also an AI/CI at the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) and holds adjunct positions at QUT (School of Mathematical Sciences), the University of Sydney (Design Lab), and the University of South Australia (School of IT and Math Sciences). He completed his doctorate and master's degrees in the computational science area.

Description: This research project uses machine learning techniques to generate optimised embeddings of graphs in different types of geometric spaces. Our approach utilises non-Euclidean geometries in the data processing and visualisation stages. The result is a flexible, versatile, and interactive visualisation tool suitable for a variety of graph analytics use cases across different high-end visualisation systems and display types.
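One concrete instance of a non-Euclidean geometry commonly used for graph embedding is the Poincaré ball model of hyperbolic space, whose geodesic distance has a standard closed form. This is shown as background only; the description does not state which specific geometries the authors use:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between points u, v inside the unit Poincare
    ball (a standard model of hyperbolic space). Distances blow up near
    the boundary, which suits hierarchical, tree-like graphs."""
    uu, vv = u @ u, v @ v
    duv = (u - v) @ (u - v)
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))
```

An embedding optimiser would minimise a loss over these distances so that graph neighbours land close together in the ball.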

