• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass

Date: Sunday, November 17th
Time: 9:00am - 4:30pm
Venue: Plaza Meeting Room P7


ACM SIGGRAPH Frontiers Workshop on Truth in Graphics and the Future of AI-Generated Content

Speaker(s):

Hao Li is an Associate Professor of Computer Science at the University of Southern California, the Director of the Vision and Graphics Lab at the USC Institute for Creative Technologies, and the CEO/Co-Founder of Pinscreen, an LA-based startup that makes photorealistic avatars accessible to consumers. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication, scalable 3D content creation, and telepresence in virtual worlds. His research involves the development of novel data-driven and deep learning algorithms for geometry processing. He is known for his seminal work in non-rigid shape alignment, real-time facial performance capture, hair digitization, and dynamic full body capture.

Noelle Martin is an activist and law reform campaigner fighting for justice in Australia and globally against image-based sexual abuse, online misogyny and the weaponisation of deepfakes. Noelle has survived around 7 years of escalating online abuse after anonymous sexual predators doctored images of her into pornography and distributed them online without consent. After speaking out publicly and advocating for law reform across Australia, Noelle was involved in the efforts to criminalise the non-consensual sharing of intimate images across Australia. Despite her efforts, the perpetrators have continued the abuse and distributed fake pornographic videos of her. Noelle advocates for a global response to this global issue, and speaks out against the objectification and dehumanisation of women, victim blaming and slut shaming attitudes. She was awarded Young Western Australian of the Year for 2019 and was listed on the Forbes 30 Under 30 Asia List, Class of 2019.

Chris Bregler is a Senior Staff Scientist and Research Manager at Google AI working on synthetic media and abuse mitigation. He received an Academy Award in the Science and Technology category for his work in visual effects. His other awards include the IEEE Longuet-Higgins Prize for "Fundamental Contributions in Computer Vision that Have Withstood the Test of Time," the Olympus Prize, and grants from the National Science Foundation, Packard Foundation, Electronic Arts, Microsoft, U.S. Navy, U.S. Air Force, NSA, and CIA. Formerly a professor at New York University and Stanford University, he was named Stanford Joyce Faculty Fellow, Terman Fellow, and Sloan Research Fellow. In addition to working for several companies including Hewlett Packard, Interval, Disney Feature Animation, LucasFilm's ILM, and The New York Times, he was the executive producer of Squidball.net, for which he built the world's largest real-time motion capture volume. He received his M.S. and Ph.D. in Computer Science from U.C. Berkeley.

Tianxiang is a software engineer on the ZAO team, and has participated in algorithm R&D from the very beginning at the Deep Learning Lab (supervised by Tao Zhang) at MOMO Co., Ltd. ZAO is a cutting-edge AI-driven face swapping application for entertainment with leading effects and experiences. Due to its high video processing speed and splendid face swapping performance, ZAO has gained great attention from around the world, from both academia and industry. Before joining MOMO, he worked as an engineer at the State Grid Corporation of China. He obtained his Ph.D. and bachelor's degrees from Tsinghua University, China, with his Ph.D. advised by Prof. Guofan Jin.

Sergey is a Machine Learning Manager at Snap Inc., which he joined in 2018. He leads the development of image-to-image translation models, which are used to deliver a new generation of Snapchat lenses, several of which have gone viral. Earlier, he worked at IBM Research Australia on medical image classification and segmentation, and at a computer vision startup. He received his PhD from the University of Melbourne in 2015 under the supervision of James Bailey, Rao Kotagiri and Christopher Leckie.

Taylor Beck is the lead senior privacy features manager at Magic Leap, responsible for implementing privacy-by-design processes across the company and successfully expanding buy-in and engagement across most business units.
Taylor has nearly a decade of experience working in privacy across industries. At Magic Leap, in addition to evaluating and defining privacy requirements for the platform, he has taken a deep dive into spatial computing, focusing on the novel data sets collected by spatial computing devices. This work has centered on evaluating data for identifiability, categorizing data risk vectors, and working with engineers to design software and hardware mitigations. Taylor has also begun working with the Future of Privacy Forum to engage industry thought leaders in defining a privacy-forward spatial internet and application ecosystem.

Alexandre de Brébisson is a researcher and entrepreneur in AI. He co-founded Lyrebird in 2017, a startup pioneering voice cloning. For the very first time, their technology allowed anyone to create a digital copy of their voice from a few minutes of audio only -- for free. In September 2019 Lyrebird merged with Descript and the company is now developing software to edit and generate media. Alexandre is also a PhD candidate at the MILA lab under the supervision of Pascal Vincent and Yoshua Bengio.

Chris Vigorito is the founder and CEO of Crucible AI, a startup building next-generation content creation tools powered by machine learning. He received his PhD in computer science from UMass Amherst, focusing on research in reinforcement learning. He served as technical director at the independent game developer HitPoint Studios for several years, and later as a research engineer at Osaro, a machine learning and robotics startup building solutions for industrial automation.

Jassim Happa is a lecturer in information security in the Information Security Group at Royal Holloway, University of London. He is also a visiting lecturer in the Dept. of Computer Science at the University of Oxford. His research interests include: Computer Graphics, Cyber Security, Human Factors, Resilience, Rendering and Visualization. He obtained his BSc (Hons) in Computing Science at the University of East Anglia in 2006. After a year of working as an Intrusion Detection System (IDS) analyst, he began his PhD at the University of Warwick in October 2007, where he researched physically-based rendering of heritage for archaeological study. He successfully defended his PhD in January 2012, and from December 2011 to August 2019 worked at Oxford as a Research Fellow on cybersecurity analytics: specifically, visual analytics in threat modelling, situational awareness, risk propagation, human factors in security, resilience, decision support, data protection and cyber threat intelligence. He joined Royal Holloway in September 2019, continuing this line of research. On the teaching side, he lectures various courses in both cybersecurity and computer graphics at Oxford and Royal Holloway. He also lectures for and supervises doctoral students in the Centre for Doctoral Training (CDT) in Cyber Security on topics such as situational awareness, intrusion detection and security architecture.

Alain Chesnais started his career in computer graphics working at companies such as Alias|Wavefront, where he led the development of Maya. From 2002 through 2005 he served as ACM SIGGRAPH President. His team received a Technical Academy Award in 2003 for the creation of Maya. In 2010 he was elected ACM President and founded TrendSpottr, where he developed predictive analytics for social media that could predict how much total traffic viral content would engender. Alain has continued in this field since and is currently the founder of Primo Monitum, where, by analyzing the traffic patterns of shared content on social media, the company delivers recommendations to help users find the most interesting content available. One of the most difficult issues the company deals with is detecting fakes and scams.

Juan Miguel de Joya is the Consultant Expert for Strategic Partnerships at the International Telecommunication Union (ITU), where he contributes to the strategic roadmap for AI and other emerging technologies for the UN agency. He previously worked at Facebook, Google, Pixar Animation Studios and Walt Disney Animation Studios, and was an undergraduate researcher in computer-driven physics simulation at the Visual Computing Lab at the University of California, Berkeley. He holds volunteer leadership positions in the Association for Computing Machinery's (ACM) Practitioners Board, the ACM Professional Development Committee, Million Peacemakers and the ACM SIGGRAPH Strategy Group.

Barnaby Francis, often working under the pseudonym Bill Posters, is an award-winning artist-researcher, author and hacktivist who is interested in art as research and critical practice. Posters works collaboratively across interdisciplinary fields to interrogate persuasion architectures and power relations that exist in public space and online. Since 2016, Posters has extended his focus to explore computational forms of propaganda and the associated architectures that define the Digital Influence Industry. He uses art to challenge forms of power in relation to notions of democracy, human rights, autonomy and privacy. His projects have received global media coverage in print, on radio and TV for holding power to account. His latest project 'Big Dada' (2019) utilised deepfake technologies to create AI-synthesised personas of Mark Zuckerberg and other celebrity 'Influencers'. After going viral on Instagram, the project successfully trolled Facebook, one of the largest tech companies on Earth, and raised important questions concerning Facebook and Instagram's policies on computational forms of propaganda on their platforms. Posters is a published author, international speaker and guest lecturer. His works are held in public and private collections.

Koki Nagano is a Principal Scientist at Pinscreen. His work focuses on research and development in achieving realistic digital avatars including 3D face digitization, facial animation, holographic displays, and deep learning. His work on high-resolution skin geometry capture and simulation has helped create digital characters in Hollywood feature films such as “Ready Player One” and “Blade Runner 2049”. His research has won the DC Expo 2015 Special Prize and he was named a Google PhD Fellow 2016. He obtained his PhD from the University of Southern California.

Description: [Organizer: Prof. Hao Li] Increasingly realistic computer graphics and computer vision, together with advances in AI, are facilitating stunningly realistic imagery, video, and social and synthetic experiences. These effects no longer require the skills of visual effects studios, but can be produced by anyone. This realism also means that it can be challenging to distinguish reality from fakes. One example is deepfake technology, which convincingly swaps faces in videos, or allows one to manipulate a person's facial expressions and speech. While such tools enable new creative possibilities for entertainment applications and avatar creation, their accessibility to end users raises growing concerns about the spread of disinformation and threats to privacy. Coupled with this, the increasingly prevalent use of AI algorithms is gaining attention due to a number of high-profile incidents of algorithmic bias, for example computer-judged beauty contests that deducted points for dark skin and AI-driven recruitment that was biased against female applicants. In this workshop, we will examine the positive and negative sides of these technologies and their rapid adoption on social media platforms and apps, and discuss what we as a community of researchers and practitioners can do to build trust in our algorithms and ensure that our techniques are not misused. Furthermore, we will explore how the future of AI-driven content creation will impact society.

