• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Exhibitor Pass

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: This talk explores some of the possible applications of unsupervised machine learning methods in found footage cinema, a tradition of experimental art that re-edits excerpts from existing films. This artistic practice sometimes aims to reconfigure our experience of the moving image heritage. In this context, machine learning algorithms have the potential to capture aspects of the cinematic experience for which we lack critical concepts, and which are for this reason difficult to describe. One important example concerns cinematic motion. Established critical discourse often speaks of motion in film by reference to the movement of objects or the camera. Film scholars might describe a scene by noting, for instance, that a person is walking fast or that the camera is tilting upwards. What is missing in this kind of description is the visual texture of cinematic movement. The two-channel algorithmic installation Errant: The Kinetic Propensity of Images applies matrix factorization techniques to the analysis of optical flow in cinema, focusing on the work of Chinese director King Hu. This method produces a visual dictionary of basic motion patterns that represent what could be described as the "kinetic overtones" of image sequences. The results are then visualized using streaklines, a technique from fluid dynamics. This presentation will discuss the motivation and methodology used in the production of this work, in relation to other work by the speaker. Implications for cinema theory will also be briefly discussed.
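
As a rough, hedged illustration of the general technique named in the abstract (matrix factorization applied to optical flow), the Python sketch below extracts dense optical flow with OpenCV and factors it with scikit-learn's NMF into a small dictionary of basic motion patterns. The clip filename, component count, and parameters are hypothetical; this is not the installation's actual pipeline, and the streakline visualization step is omitted.

    import cv2
    import numpy as np
    from sklearn.decomposition import NMF

    def flow_fields(video_path, max_frames=200):
        """Yield dense optical-flow fields (Farneback) between consecutive frames."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return
        prev_gray = cv2.resize(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (160, 90))
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (160, 90))
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            prev_gray = gray
            yield flow  # shape (H, W, 2): per-pixel horizontal/vertical motion

    # Stack the flow fields into a data matrix (one row per frame). Taking the
    # absolute value makes the data non-negative for NMF at the cost of motion
    # direction; a fuller pipeline might split positive and negative channels.
    flows = list(flow_fields("king_hu_clip.mp4"))          # hypothetical clip
    X = np.abs(np.stack([f.reshape(-1) for f in flows]))   # frames x (H*W*2)
    nmf = NMF(n_components=8, init="nndsvda", max_iter=300)
    W = nmf.fit_transform(X)   # per-frame activations of each motion pattern
    H = nmf.components_        # the learned "dictionary" of basic flow patterns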

Speaker(s) Bio:

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Anathema is the psychological study of Doug's demons, nightmares and fantasies in a dream-like autoscopy that displays the strengths and vulnerabilities of a self-altered and self-made man in his 70s. The piece was created with the Kinect volumetric sensor, with music by Mexican cyber-punk musician Cesar Cardenas aka Zoonosis.

Speaker(s) Bio: Santiago Echeverry, The University of Tampa, United States of America

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Captured by an Algorithm is a commemorative plate series that looks at romance novels through the lens of the Amazon Kindle Popular Highlight algorithm. Each plate features one highlight and an algorithmically generated landscape. The highlights tell a story of the loneliness, grief, vulnerability, and discontent felt by the readers.

Speaker(s) Bio: Sophia Brueckner, University of Michigan, United States of America

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: The personality of living things is beautiful. Is it possible to have a personality in the digital world? CharActor is a video work that produces animation by treating shader programs as genes and evolving the mathematical expressions themselves using evolutionary computation.
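
As a hedged sketch of the underlying idea (evolutionary computation over mathematical expressions, with plain Python expression trees standing in for shader code), the snippet below evolves random formulas by mutation and selection. The operators, fitness function, and parameters are invented for illustration and are not the artist's implementation.

    import math
    import random

    OPS = ("sin", "cos", "add", "mul")

    def random_expr(depth=3):
        """Build a random expression tree over the variables x, y and t."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "y", "t", round(random.uniform(-1, 1), 2)])
        op = random.choice(OPS)
        if op in ("sin", "cos"):
            return (op, random_expr(depth - 1))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(e, x, y, t):
        """Evaluate an expression tree at pixel coordinates (x, y) and time t."""
        if isinstance(e, tuple):
            if e[0] == "sin":
                return math.sin(evaluate(e[1], x, y, t))
            if e[0] == "cos":
                return math.cos(evaluate(e[1], x, y, t))
            if e[0] == "add":
                return evaluate(e[1], x, y, t) + evaluate(e[2], x, y, t)
            return evaluate(e[1], x, y, t) * evaluate(e[2], x, y, t)  # "mul"
        if e == "x":
            return x
        if e == "y":
            return y
        if e == "t":
            return t
        return float(e)

    def mutate(e, rate=0.2):
        """Randomly replace subtrees: the formula itself is the mutating "gene"."""
        if random.random() < rate:
            return random_expr(2)
        if isinstance(e, tuple):
            return (e[0],) + tuple(mutate(c, rate) for c in e[1:])
        return e

    def fitness(e):
        """Toy fitness: mean value over a coarse pixel grid at t = 0."""
        return sum(evaluate(e, x / 8, y / 8, 0.0)
                   for x in range(8) for y in range(8)) / 64

    population = [random_expr() for _ in range(16)]
    for generation in range(10):
        population.sort(key=fitness, reverse=True)
        population = population[:8] + [mutate(p) for p in population[:8]]
    print(population[0])  # the fittest expression tree after ten generations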

Speaker(s) Bio: masaru mizuochi, mizumasa, Japan
Born in Japan in 1989. Since graduating from the University of Tokyo Graduate School, he has pursued art and stage production alongside research on new forms of visual expression. The theme of his works is humanity in the digital world, expressed in both analog and digital media by applying knowledge from computer vision. His work has been presented at museums, events, and conferences in Japan and overseas.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: When we feel and sense through machines, are we still ourselves? In a mixed reality where embodied actions and blinding visions are part woman/part machine, the tactile surface of plants is a portal that conjures augmented materialities into existence.

Speaker(s) Bio: Rewa Wright, University of Newcastle, Australia
Dr. Rewa Wright is Lecturer in Creative Technologies at the University of Newcastle in Australia. Originally from Aotearoa/New Zealand, she is Māori from Ngāti Whātua and Ngāpuhi iwi. She uses embodied movements to perform with plants, software, and sensors, in a perceptually challenging infra-red mixed reality. Her vision punctuated by digital augments, she co-composes with the bio-electrical signals of living plants, generating an audio-visual swarm of movement and sound.

Simon Howden, Independent Artist, Australia
Simon Howden is a conceptual artist, sound designer, and a Billboard music producer. He holds an MFA (Intermedia/Sculpture) from Elam School of Fine Arts at the University of Auckland, and completed his BFA in Film at Ilam School of Fine Arts at the University of Canterbury in New Zealand.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: 'Dream Clanger' is a hybrid art/computer science project that re-imagines AFL Player GPS data and match video. Building on Baden Pailthorpe's 2017 major exhibition 'Clanger', this work pushes the envelope further by integrating machine learning.

Speaker(s) Bio: Baden Pailthorpe, ANU School of Art & Design, Australia
Baden Pailthorpe is a contemporary artist who works with emerging and experimental technologies. He is the Convenor of Hybrid Art Practice at the ANU School of Art & Design, Canberra. His artistic practice interrogates the relationship between aesthetics and power, exploring the politics of technological and economic structures across sport, finance and the military-industrial complex. Since 2011, Baden’s practice has integrated performance and installation alongside screen-based interventions. Examples include: a commissioned performance at the Centre Pompidou, Paris (2014); video work depicting a hacked military simulator at the Palais de Tokyo, Paris (2012); documentation of a video game performance exhibited at the Triennale di Milano, Milan (2016); a ‘start-up as artwork’ at Sullivan+Strumpf (2017); and an experimental data visualisation of AFL player GPS data at UTS Art, Sydney (2017).

Charles Gretton, ANU College of Engineering and Computer Science, Australia
Charles Gretton is a Senior Lecturer convening the TechLauncher program at the Australian National University. From June 2015, he was a founder at HIVERY, and in 2017 also a Catalyst in Residence at the Michael Crouch Innovation Centre at the University of New South Wales. Working with industry, he developed data-driven retail-AI technologies that couple industrial optimisation with machine learning. From August 2011, Charles was at the NICTA (later Data61) Canberra lab and part of CECS at ANU, working at the intersection of Artificial Intelligence and Operations Research on solutions to fleet logistics problems. Before that, he was a research fellow with the Intelligent Robotics Lab at the University of Birmingham from 2008 to 2011, where he worked on a project investigating cognitive robots that can self-understand and self-extend.

Rhys Healy, ANU College of Engineering and Computer Science, Australia
Rhys Healy is a fifth-year undergraduate student at the Australian National University, undertaking a Bachelor of Actuarial Studies and a Bachelor of Advanced Computing (Honours). Through his study, research and industry involvement he has developed an interest in statistical machine learning. In 2018, he worked with Data61 at CSIRO and led a team of students to design, create and optimise a tool for generative music composition using deep learning. Extending the work, he co-authored 'Computer Assisted Computation in Continuous Time', a paper under consideration for AAAI 2020. This research details the derivation and implementation of a sequential Monte-Carlo algorithm in continuous time, used for sampling music from a generative learning model conditioned on a set of musical constraints. Outside of his studies, Rhys plays cricket and AFL at a semi-professional level and has co-founded a local Indigenous tech firm specialising in automation, artificial intelligence and change management.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Fifty Sisters comprises fifty computer-synthesised plant-forms, algorithmically “grown” from computer code using artificial evolution and generative grammars. Each plant-like form is derived from the primitive graphic elements of oil company logos. The title of the work refers to the original “Seven Sisters” – a cartel of seven oil companies that dominated the global petrochemical industry and Middle East oil production from the mid-1940s until the oil crisis of the 1970s. Oil has shaped our civilisation and driven its unprecedented growth over the last century. We have been seduced by oil and its by-products as they are now used across almost every aspect of human endeavour, providing fuels, fertilisers, feedstocks, plastics, medicines and more. But oil has also changed the environment, evident from the petrochemical haze that hangs over many a modern metropolis, the environmental damage of major oil spills, and the looming spectre of the global climate crisis. With worldwide demand for oil now at 93 million barrels per day, humanity’s appetite for oil is unrelenting. Oil companies regularly report many of the all-time largest annual earnings in corporate history. This 3-screen triptych of the work presents each form sequentially as a slow, evolving meditation on nature, technology and human consumption. 3 x 4K synchronised video displays, stereo speakers, 30 mins
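
The "generative grammar" idea mentioned in the abstract can be sketched in a few lines of Python. The axiom and rewrite rules below are a standard textbook branching L-system, chosen purely for illustration; they are not the grammar used in Fifty Sisters.

    # A minimal L-system: strings are rewritten in parallel, then interpreted
    # as turtle-graphics instructions to "grow" a plant-like form.
    RULES = {"X": "F[+X][-X]FX", "F": "FF"}   # illustrative rules only

    def grow(axiom, generations):
        """Rewrite every symbol in parallel, once per generation."""
        s = axiom
        for _ in range(generations):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    # The resulting string is then drawn symbol by symbol
    # (F = draw forward, + / - = turn, [ ] = push / pop the drawing state).
    print(grow("X", 3))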

Speaker(s) Bio:

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: ILLUSION explores the relationship between the body, mind, and machine by taking advantage of a brain-computer interface. It detects whether your consciousness is receiving visual stimuli and produces cadenced sound, an exterior manifestation of the performer’s internal state. When you close your eyes, you can see the world imaged by sounds.

Speaker(s) Bio: Haein Kang, University of Washington, DXARTS, United States of America

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: This artwork is an installation that imagines a future in which users can manufacture designer babies themselves. You can design, customize and manufacture your baby with your favorite genes on your laptop. A 3D-printed child emerges from the display, and the child's face is generated from the visitor's face.

Speaker(s) Bio: Emi Kusano, Independent Artist, Japan
Emi Kusano is a Tokyo-based artist and musician. Born in 1990, she graduated from Keio University. She is currently the lead singer and composer of the synthwave music unit "Satellite Young", which is strongly inspired by 80s sci-fi and modern technology.

Junichi Yamaoka, The University of Tokyo, Japan
Junichi Yamaoka is a researcher and artist. As a postdoctoral fellow at the University of Tokyo, he researches new interfaces connecting the real world and the information society. He also produces media artworks that bring the computer graphics world into the real world, presents them at international exhibitions such as Ars Electronica and ACM SIGGRAPH, and received the Grand Prix at the WIRED CREATIVE HACK AWARD 2014.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: LightWing II creates a mysterious sensation of tactile data. In this interactive installation, a kinetic construction is augmented with stereoscopic 3D projections and spatial sound. A light touch sets the delicate wing-like structure into a rotational oscillation and enables the visitor to navigate through holographic spaces and responsive narratives.

Speaker(s) Bio: Uwe Rieger, arc/sec Lab, University of Auckland, New Zealand
Uwe Rieger is founder and head of the arc/sec Lab for Cross Reality Architecture and Interactive Systems. He is Associate Professor for Design and Design Technology at the University of Auckland and has worked as an architect and researcher in the field of Reactive Architecture for over 20 years. Uwe has exhibited and developed projects for international institutions such as: the Ars Electronica Centre in Austria, the Museum of Modern Art Barcelona (MACBA) in Spain, the Venice Architecture Biennale in Italy, the World EXPO and the International Building Exhibition IBA in Germany, the National Museum of Indonesia, and Te Papa Tongarewa, the National Museum of New Zealand.

Yinan Liu, arc/sec Lab, University of Auckland, New Zealand
Yinan Liu received a MArch(Prof)(Hons) degree from the University of Auckland. She is the lead technologist at the arc/sec Lab and coordinator of the Digital Research Hub at the School of Architecture and Planning at the University of Auckland. Yinan is co-founder of arc/sec Solutions ltd., which develops customised applications for cross-reality environments and interactive systems.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Machine Hallucinations - Latent Study II is part of an ongoing synthetic reality collection that explores the relationship between memory and dreams, recognition and perception.

Speaker(s) Bio: Refik Anadol, Refik Anadol Studio, United States of America

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Metascape: Villers Bretonneux is an immersive, interactive memoryscape, experienced in first-person perspective, that simulates in real time 72 hours of the Second Battle of Villers Bretonneux in the First World War (1918). The work relies on multiple forms of spatial and memory reconstruction, both driven by algorithmic processes.

Speaker(s) Bio: Andrew Yip, UNSW Art & Design, Australia

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Numb, shaped as an exaggerated eyeball, follows you and mirrors your blinks. It makes you aware of your own blinking and sensitive to your own sensations. Numb illustrates how we build relationships with technology through the senses, and how we become sensitive to ourselves by and with technology.

Speaker(s) Bio: Taeil Lee, Korea University, South Korea
He has an industrial design background, holding a B.S. from KAIST and an M.Des from the Institute of Design, IIT. He has been teaching interaction design and product design at Korea University and other universities for about 18 years, and has worked with Korean companies such as Samsung Electronics and LG Electronics on various interaction design research projects. He has always been interested in prototyping interactions, playing with Processing and Arduino, and after all those years is now shifting gears toward a more artistic arena.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Combining algorithmic modeling strategies for data visualization with digital fabrication, the work comprises the generative design of a series of geometrically intricate, crystal-like 3D models. The raw data are tweets gathered via the Twitter API, queried for hashtags related to the 2018 Brazilian presidential elections and posted from defined geolocations.

Speaker(s) Bio: clarissa ribeiro, unifor, CrossLab, Brazil
herbert rocha, unifor, Laboratorio de Pesquisa em Direito Privado e Internet da Universidade de Brasilia (LAPIN/UnB), Brazil

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Smile is a mixed-media installation consisting of a screen in a black box, mounted on the wall. When an interactor smiles, drone footage of the ruins of Gaza fades in. If the interactor stops smiling, the video stops; it plays only when the interactor smiles widely at it.

Speaker(s) Bio: Tomas Laurenzo, School of Creative Media, City University of Hong Kong, Hong Kong
Tomás Laurenzo is an artist and academic who works with both physical and digital media, exploring meaning, power and politics. With a background in both computer science and art, Laurenzo’s work spans different practices and interests, including new media art, HCI, machine learning, and VR. His artistic production includes installations, music, performance, and digital lutherie. His artworks and performances have been shown globally. He is an assistant professor at the School of Creative Media, City University of Hong Kong.

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: "SoS" recreates the experiences of two Syrian asylum seekers as they lose sight of each other during a treacherous ocean voyage from Indonesia to Northern Australia.

Speaker(s) Bio: Dennis Del Favero, UNSW, iCinema + EPICentre, Australia
Tomasz Bednarz, UNSW, EPICentre + iCinema, Australia

Date: Wednesday, November 20th
Time: 10:00am - 4:00pm
Venue: Great Hall 3&4 - Experience Hall


Speaker(s):

Abstract: Tactile Microcosm of ALife offers interaction with artificial organisms, whereby the user can enjoy playing with fish-like organisms through aerial imaging and haptic feedback. The holographic organisms float in water in a petri dish, and the user can feel a force field conveying the vitality of the organisms via force feedback.

Speaker(s) Bio: Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan
Toshikazu Ohshima has been a professor at Ritsumeikan University since 2006. In 1991, he received a Doctorate from the Graduate School of Engineering, University of Tsukuba. In 1991, he joined Canon Inc. and worked on research and development of virtual reality. From 1997, he worked as a senior researcher in a national mixed reality (MR) technology project and from 2001 began serving as a section chief at the MR R&D division in Canon Inc. His research interests include MR technologies and their deployment in education, medicine, and the arts. Since 2006, he and his team have continually created technological and educational artworks based on scientific and biological simulations utilizing mixed reality experience. Since 2012, these works of art and edutainment have been exhibited annually at Laval Virtual ReVolution in France. In 2017, “MitsuDomoe,” a virtual ecosystem in a petri dish, was awarded the Laval Virtual Award of Training and Education.
