• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Sunday, November 17th
Time: 4:15pm - 6:00pm
Venue: Plaza Meeting Room P3



Abstract: The Chebyshev distance to the nearest occupied region of a volume is precomputed and used to skip empty space efficiently during volume rendering. We show improved performance over the state of the art.
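The precomputation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the dilation-based distance transform, and the small grid are all illustrative assumptions. It relies on the fact that dilating the occupied mask k times with a 3x3(x3) box covers exactly the voxels whose Chebyshev distance to occupied space is at most k, and that box dilation is separable into per-axis shifts.

```python
import numpy as np

def _shift(mask, axis, step):
    """Shift a boolean mask by one voxel along an axis, zero-filling
    at the boundary (np.roll would wrap around the volume)."""
    out = np.zeros_like(mask)
    src = [slice(None)] * mask.ndim
    dst = [slice(None)] * mask.ndim
    if step > 0:
        dst[axis], src[axis] = slice(1, None), slice(None, -1)
    else:
        dst[axis], src[axis] = slice(None, -1), slice(1, None)
    out[tuple(dst)] = mask[tuple(src)]
    return out

def chebyshev_distance_map(occupied):
    """Per-voxel Chebyshev distance to the nearest occupied voxel.

    Repeatedly dilates the occupied mask with a box structuring
    element; a voxel first covered after k dilations has Chebyshev
    distance exactly k.
    """
    occupied = np.asarray(occupied, dtype=bool)
    dist = np.where(occupied, 0, -1).astype(np.int32)
    if not occupied.any():
        return dist  # nothing occupied: leave distances undefined (-1)
    frontier = occupied.copy()
    d = 0
    while (dist < 0).any():
        # box dilation is separable: dilate one axis at a time
        for axis in range(frontier.ndim):
            frontier |= _shift(frontier, axis, +1) | _shift(frontier, axis, -1)
        d += 1
        dist[frontier & (dist < 0)] = d
    return dist
```

A ray marcher standing at voxel v can then safely advance up to `dist[v]` voxels along the ray in one step, since no occupied voxel lies closer than that Chebyshev radius.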

Speaker(s) Bio: Lachlan Deakin, Australian National University, Australia
Lachlan Deakin (BEng) is a PhD student at the Department of Applied Mathematics at the Australian National University. He previously worked for FEI and Thermo Fisher Scientific developing tools for the analysis of massive volumetric images. He specialises in high-performance computing on clusters and the acceleration of algorithms using GPUs. Lachlan has also applied his mechatronic engineering background to collaborative robotics projects such as the Amazon Picking Challenge, with a focus on computer vision for object detection and pose estimation.

Mark Knackstedt, Australian National University, Australia
Mark Knackstedt (BEng ChemEng, Columbia; PhD ChemEng, Rice) is a Professor at the Department of Applied Mathematics at the Australian National University. He has led a group that has worked for 20 years in the field of digital materials technology based on 3D multiscale imaging, analysis and modelling. The key paradigm of the technology is to "image and compute": imaging the material, performing 3D time-series imaging of experiments (e.g., flow, mechanical deformation) and building calibrated numerical simulations of the physical processes. He has helped to translate the technology into tangible commercial outcomes in the energy industry.


Abstract: Faster RPNN is a new deep learning model for rendering anisotropic clouds that outperforms the current state of the art by a factor of 2-3 with no loss of quality.

Speaker(s) Bio: Mikhail Panin, ITMO University, Playneta Ltd., Russia
As a Tech Lead at Playneta Ltd, Mikhail has over 5 years of game development and real-time graphics experience. Mikhail holds a Bachelor's degree in Computer Science and Applied Mathematics from ITMO University in Saint Petersburg, Russia. Currently, he teaches a course on real-time computer graphics for master's students at HSE National Research University, Saint Petersburg.

Sergey Nikolenko, Steklov Institute of Mathematics at St. Petersburg, Neuromation, Russia
Sergey Nikolenko is a computer scientist working in the fields of machine learning (deep learning, including computer vision and natural language processing) and analysis of algorithms (algorithms for networking, online algorithms, bioinformatics). He obtained his Ph.D. from the Steklov Mathematical Institute at St. Petersburg in 2009 and has authored more than 150 research papers and several books, including a best-selling "Deep Learning" book (in Russian).


Abstract: We propose an extended programming model for ray tracing that includes an additional programmable stage called the Traversal Shader, which enables use cases such as multi-level instancing and stochastic LOD.
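To make the idea of a programmable traversal stage concrete, here is a minimal sketch of the kind of decision such a stage might make. All names and the distance-based LOD policy are illustrative assumptions, not the paper's API; the paper's stochastic LOD would make a probabilistic rather than deterministic choice.

```python
import math
from dataclasses import dataclass

@dataclass
class Instance:
    position: tuple   # world-space centre of the instanced geometry
    blas_lods: list   # bottom-level structures, ordered fine -> coarse

def traversal_shader(instance, ray_origin, lod_scale=10.0):
    """Hypothetical traversal-shader body: invoked when a ray reaches an
    instance node in the top-level structure, it chooses which
    bottom-level acceleration structure the traversal descends into.
    Here the choice is a simple distance-based LOD selection."""
    d = math.dist(ray_origin, instance.position)
    lod = min(int(d / lod_scale), len(instance.blas_lods) - 1)
    return instance.blas_lods[lod]
```

With a fixed-function traversal stage this per-instance choice is impossible; exposing it as a shader is what enables multi-level instancing and LOD selection during traversal.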

Speaker(s) Bio: Won-Jong Lee, Intel Corporation, United States of America
Won-Jong is a research scientist on the Advanced Rendering Technology team at Intel, where he works on graphics research. Before joining Intel, he worked on ray tracing, advanced computer graphics, real-time rendering, reconfigurable processors, and hardware algorithms for mobile GPUs at the Samsung Advanced Institute of Technology (SAIT) as a principal research scientist and team lead. He received a Ph.D. and M.S. in computer science at Yonsei University, where he researched graphics architecture, simulation frameworks, and parallel rendering on GPU clusters. He also worked on scientific volume visualization during his internship at AIST, Tokyo.

Gabor Liktor, Intel Corporation, United States of America
Gabor joined Intel's Advanced Rendering Technology team in 2015, following his internship in 2013. He received his diploma in Computer Science from the Budapest University of Technology and Economics in 2009, then pursued his PhD in the graphics research group of the Karlsruhe Institute of Technology, where he is expected to defend his dissertation soon. His further industry experience includes collaborative research with Crytek and an internship at Walt Disney Animation Studios. His primary research interests are shading reuse in real-time rendering architectures, stochastic sampling, and geometry subdivision.

Karthik Vaidyanathan, Intel Corporation, United States of America
Karthik Vaidyanathan received his MS from Stanford University in 2011 and holds a bachelor's degree from Pune University. His research focuses on real-time stochastic rendering. Prior to graduate studies at Stanford, he worked for several years in research and development for wireless communications.


Abstract: This paper presents a novel wave-based precomputation method that enables accurate and fast simulation of sound propagation in an inhomogeneous atmosphere.

Speaker(s) Bio: Jin Liu, Tianjin University, China
Jin Liu received the B.S. degree from the School of Science, North University of China, Taiyuan, P.R. China, in 2017, and is currently working toward the M.S. degree at the Division of Intelligence and Computing, Tianjin University, P.R. China. Her current research focuses on sound propagation simulation.

Shiguang Liu, Tianjin University, China
Shiguang Liu received the Ph.D. degree from the State Key Lab of CAD & CG, Zhejiang University, P.R. China. He is currently a professor in the Division of Intelligence and Computing, Tianjin University, P.R. China. His research interests include computer graphics, multimedia, visualization, and virtual reality.
