• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass
  • Basic Conference Pass
  • Student One-Day Pass
  • Experience Pass
  • Exhibitor Pass

Date: Monday, November 18th
Time: 10:00am - 6:00pm
Venue: Great Hall 3&4 - Experience Hall


Algorithmic Analysis and Visualization of Motion in Cinema

Speaker(s): Hector Rodriguez, School of Creative Media, City University of Hong Kong, Hong Kong

Hector RODRIGUEZ is a Hong Kong-based digital artist and theorist whose work explores the unique possibilities of computational technologies to reconfigure the history and aesthetics of moving images. He received a commendation award from the Hong Kong Government for his contributions to art and culture in 2014. He was awarded the Best Digital Work in the Hong Kong Art Biennial 2003, an Achievement Award at the Hong Kong Contemporary Art Awards 2012, and the Jury Selection Award of the Japan Media Art Festival 2012. His works have been exhibited internationally in Taiwan, Singapore, the US, Poland, Germany, Spain, Greece, France, the UK, and elsewhere. His recent exhibitions include the 15th & 16th WRO Media Art Biennale, Poland (2013, 2015), the European Conference on Computer Vision (2018), the A.I. Art Gallery at the Conference on Neural Information Processing Systems (2018, Montreal), the RIXC Art Science Festival (2017, Riga), the Generative Art Conference/Exhibition (2017), the Athens Media Art Festival (2018, Greece), the xCoAx conference on computation and art (2016, GAMEC, Bergamo), the CyNetArt competition (2016, Dresden), and many more. His solo retrospective “Hidden Variables” was held in Hong Kong in October 2018. He was the Artistic Director of the Microwave International Media Art Festival from 2004 to 2006, and is Director for Research and Education for the Writing Machine Collective. He currently teaches at the School of Creative Media, City University of Hong Kong, where he founded the undergraduate program in art and science.

Description: This talk explores some possible applications of unsupervised machine learning methods in found footage cinema, a tradition of experimental art that re-edits excerpts from existing films. This artistic practice sometimes aims to reconfigure our experience of the moving image heritage. In this context, machine learning algorithms have the potential to capture aspects of the cinematic experience for which we lack critical concepts, and which are for this reason difficult to describe. One important example concerns cinematic motion. Established critical discourse often speaks of motion in film by reference to the movement of objects or the camera. Film scholars might describe a scene by noting, for instance, that a person is walking fast or that the camera is tilting upwards. What is missing from this kind of description is the visual texture of cinematic movement. The two-channel algorithmic installation Errant: The Kinetic Propensity of Images applies matrix factorization techniques to the analysis of optical flow in cinema, focusing on the work of Chinese director King Hu. This method produces a visual dictionary of basic motion patterns that represent what could be described as the "kinetic overtones" of image sequences. The results are then visualized using streaklines, a technique from fluid dynamics. This presentation will discuss the motivation and methodology used in the production of the work, in relation to other work by the speaker. Implications for cinema theory will also be briefly discussed.
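
As a rough illustration of the kind of pipeline the description outlines, the sketch below computes dense optical flow between consecutive frames of a clip and factorizes the stacked flow fields with non-negative matrix factorization (NMF) to obtain a small dictionary of recurring motion patterns. It is a minimal sketch under stated assumptions, not the installation's code: the file name, frame size, number of components, and the choice of Farneback flow and NMF are illustrative, and the streakline visualization stage is omitted.

```python
import cv2
import numpy as np
from sklearn.decomposition import NMF

def flow_rows(video_path, size=(160, 90)):
    """Yield one flattened, non-negative optical-flow descriptor per frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback's method).
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        u, v = flow[..., 0], flow[..., 1]
        # Split the signed flow into non-negative channels so NMF applies.
        yield np.concatenate([np.maximum(u, 0).ravel(), np.maximum(-u, 0).ravel(),
                              np.maximum(v, 0).ravel(), np.maximum(-v, 0).ravel()])
        prev = gray
    cap.release()

# Hypothetical excerpt; one matrix row per frame pair.
X = np.array(list(flow_rows("king_hu_excerpt.mp4")))
model = NMF(n_components=16, init="nndsvd", max_iter=400)
W = model.fit_transform(X)   # per-frame activations of each motion pattern
H = model.components_        # the "visual dictionary" of basic motion patterns
```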


Anathema / Anatema

Speaker(s): Santiago Echeverry, The University of Tampa, United States of America

Description: Anathema is a psychological study of Doug's demons, nightmares and fantasies in a dream-like autoscopy that displays the strengths and vulnerabilities of a self-altered and self-made man in his 70s. The piece was created with the Kinect volumetric sensor, with music by Mexican cyber-punk musician Cesar Cardenas, aka Zoonosis.


Captured by an Algorithm

Speaker(s): Sophia Brueckner, University of Michigan, United States of America

Description: Captured by an Algorithm is a commemorative plate series that looks at romance novels through the lens of the Amazon Kindle Popular Highlight algorithm. Each plate features one highlight and an algorithmically generated landscape. The highlights tell a story of the loneliness, grief, vulnerability, and discontent felt by the readers.


CharActor

Speaker(s): Masaru Mizuochi, mizumasa, Japan

Born in Japan in 1989. Since graduating from the University of Tokyo's graduate school, he has continued his art and stage production activities alongside research into new forms of visual expression. The theme of his works is humanity in the digital world. Drawing on his knowledge of computer vision, he works in both analog and digital media. His work has been presented at museums, events, and conferences in Japan and overseas.

Description: The personality of living things is beautiful. Can personality exist in the digital world? CharActor is a video work that produces animation by treating shader programs as genes and changing the mathematical expressions themselves through evolutionary computation.
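
The core idea described above, treating a generative program as a gene whose mathematical expression is altered by evolutionary computation, can be sketched compactly. The toy below evolves simple image-generating expression trees rather than actual shader programs, and its fitness function (image contrast) is a placeholder assumption; the work's real genome, renderer, and selection criteria are not described here.

```python
import random
import numpy as np

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "sin": lambda a, b: np.sin(3.0 * a + b),
}

def random_expr(depth=3):
    """Build a random expression tree over the image coordinates x and y."""
    if depth == 0:
        return random.choice(["x", "y", round(random.uniform(-1, 1), 2)])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x, y):
    """Render an expression tree over a coordinate grid."""
    if expr == "x":
        return x
    if expr == "y":
        return y
    if isinstance(expr, (int, float)):
        return np.full_like(x, expr)
    op, a, b = expr
    return OPS[op](evaluate(a, x, y), evaluate(b, x, y))

def mutate(expr, rate=0.2):
    """Randomly replace subtrees: the 'genetic' variation step."""
    if random.random() < rate:
        return random_expr(depth=2)
    if isinstance(expr, tuple):
        op, a, b = expr
        return (op, mutate(a, rate), mutate(b, rate))
    return expr

def fitness(expr, x, y):
    # Placeholder aesthetic proxy: reward images with high contrast.
    return float(np.std(evaluate(expr, x, y)))

# Simple (mu + lambda)-style evolutionary loop.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
population = [random_expr() for _ in range(16)]
for generation in range(50):
    population.sort(key=lambda e: fitness(e, x, y), reverse=True)
    parents = population[:4]
    population = parents + [mutate(random.choice(parents)) for _ in range(12)]

best_image = evaluate(max(population, key=lambda e: fitness(e, x, y)), x, y)
```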


Contact/Sense

Speaker(s): Rewa Wright, University of Newcastle, Australia
Simon Howden, Independent Artist, Australia

Dr. Rewa Wright is Lecturer in Creative Technologies at the University of Newcastle in Australia. Originally from Aotearoa/New Zealand, she is Māori from Ngāti Whātua and Ngāpuhi iwi. She uses embodied movements to perform with plants, software, and sensors, in a perceptually challenging infra-red mixed reality. Her vision punctuated by digital augments, she co-composes with the bio-electrical signals of living plants, generating an audio-visual swarm of movement and sound.

Simon Howden is a conceptual artist, sound designer, and a Billboard music producer. He holds an MFA (Intermedia/Sculpture) from Elam School of Fine Arts at the University of Auckland, and completed his BFA in Film at Ilam School of Fine Arts at the University of Canterbury in New Zealand.

Description: When we feel and sense through machines, are we still ourselves? In a mixed reality where embodied actions and blinding visions are part woman/part machine, the tactile surface of plants is a portal that conjures augmented materialities into existence.


Dream Clanger

Speaker(s): Baden Pailthorpe, ANU School of Art & Design, Australia
Charles Gretton, ANU College of Engineering and Computer Science, Australia
Rhys Healy, ANU College of Engineering and Computer Science, Australia

Baden Pailthorpe is a contemporary artist who works with emerging and experimental technologies. He is the Convenor of Hybrid Art Practice at the ANU School of Art & Design, Canberra. His artistic practice interrogates the relationship between aesthetics and power, examining the politics of technological and economic structures across sport, finance and the military-industrial complex. Since 2011, Baden’s practice has integrated performance and installation alongside screen-based interventions. Examples include: a commissioned performance at the Centre Pompidou, Paris (2014); video work depicting a hacked military simulator at the Palais de Tokyo, Paris (2012); documentation of a video game performance exhibited at the Triennale di Milano, Milan (2016); a ‘start-up as artwork’ at Sullivan+Strumpf (2017); and an experimental data visualisation of AFL player GPS data at UTS Art, Sydney (2017).

Charles Gretton is a Senior Lecturer convening the TechLauncher program at the Australian National University. From June 2015 he was a founder at HIVERY, and in 2017 also a Catalyst in Residence at the Michael Crouch Innovation Centre at the University of New South Wales. Working with industry, he developed data-driven retail-AI systems that coupled industrial optimisation with machine learning. From August 2011, Charles was at the NICTA (later Data61) Canberra lab and part of CECS at ANU in Canberra, working at the intersection of Artificial Intelligence and Operations Research on solutions to fleet logistics problems. Before that, he was a research fellow with the Intelligent Robotics Lab at the University of Birmingham (2008-2011), where he worked on a project investigating cognitive robots that can self-understand and self-extend.

Rhys Healy is a fifth-year undergraduate student at the Australian National University, undertaking a Bachelor of Actuarial Studies and a Bachelor of Advanced Computing (Honours). Through his study, research and industry involvement he has developed an interest in statistical machine learning. In 2018, he worked with Data61 at CSIRO and led a team of students to design, create and optimise a tool for generative music composition using deep learning. Extending that work, he co-authored 'Computer Assisted Computation in Continuous Time', a paper under consideration for AAAI 2020. This research details the derivation and implementation of a sequential Monte-Carlo algorithm in continuous time, used for sampling music from a generative learning model conditioned on a set of musical constraints. Outside of his studies, Rhys plays cricket and AFL at a semi-professional level and has co-founded a local Indigenous tech firm specialising in automation, artificial intelligence and change management.

Description: 'Dream Clanger' is a hybrid art/computer science project that re-imagines AFL Player GPS data and match video. Building on Baden Pailthorpe's 2017 major exhibition 'Clanger', this work pushes the envelope further by integrating machine learning.


Fifty Sisters

Speaker(s): Jon McCormack, SensiLab, Monash University, Australia

Jon McCormack works at the nexus of art, technology and society. His experimental practice is driven by an enduring interest in computing and incorporates generative art, sound, evolutionary systems, computer creativity, physical computing and machine learning. Inspired by the complexity and wonder of the natural world, his work is concerned with electronic ‘after natures’: alternate forms of artificial life which, due to unfettered human progress and development, may one day replace a lost biological nature. His artworks have been widely exhibited at leading galleries, museums and symposia, including the Museum of Modern Art (New York, USA), Tate Gallery (Liverpool, UK), ACM SIGGRAPH (USA), Prix Ars Electronica (Austria) and the Australian Centre for the Moving Image (Australia). He is the recipient of over 17 awards for new media art and computing research including prizes at Ars Electronica (Austria), Nagoya Biennial (Japan), the 2012 Eureka Prize for Innovation in Computer Science and the 2016 Lumen Prize for digital art (still images). He is currently undertaking a Future Fellowship, funded by the Australian Research Council, that investigates new models for the generative design of digitally fabricated materials. Professor McCormack is the founder and director of SensiLab, a trans-disciplinary research space dedicated to the future of creative technology at Monash University in Melbourne, Australia. SensiLab’s collective research explores the untapped potential of technology, its impacts on society and the new possibilities it enables. Its dedicated research space – which opened in late 2017 – encourages enthusiasm, curiosity, seamless collaboration and unrestricted experimentation. jonmccormack.info sensilab.monash.edu

Description: Fifty Sisters comprises fifty computer-synthesised plant forms, algorithmically “grown” from computer code using artificial evolution and generative grammars. Each plant-like form is derived from the primitive graphic elements of oil company logos. The title of the work refers to the original “Seven Sisters” – a cartel of seven oil companies that dominated the global petrochemical industry and Middle East oil production from the mid-1940s until the oil crisis of the 1970s. Oil has shaped our civilisation and driven its unprecedented growth over the last century. We have been seduced by oil and its by-products, which are now used across almost every aspect of human endeavour, providing fuels, fertilisers, feedstocks, plastics, medicines and more. But oil has also changed the environment, evident in the petrochemical haze that hangs over many a modern metropolis, the environmental damage of major oil spills, and the looming spectre of the global climate crisis. With worldwide demand for oil now at 93 million barrels per day, humanity’s appetite for oil is unrelenting. Oil companies regularly report many of the largest annual earnings in corporate history. This 3-screen triptych presents each form sequentially as a slow, evolving meditation on nature, technology and human consumption. 3 x 4K synchronised video displays, stereo speakers, 30 mins.
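
Generative grammars of the kind the description mentions are often illustrated with Lindenmayer systems (L-systems), in which simple rewriting rules "grow" branching, plant-like structures. The sketch below draws a classic fractal plant with turtle graphics; the rules, angle, and drawing parameters are illustrative assumptions and not the grammars or evolutionary machinery used in Fifty Sisters.

```python
import turtle

RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}  # classic fractal-plant grammar

def expand(axiom, generations):
    """Apply the rewriting rules repeatedly (the grammar's 'growth')."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def draw(instructions, step=4, angle=25):
    """Interpret the expanded string as turtle-graphics drawing commands."""
    t = turtle.Turtle()
    t.hideturtle(); t.speed(0); t.left(90)
    stack = []
    for ch in instructions:
        if ch == "F":
            t.forward(step)
        elif ch == "+":
            t.left(angle)
        elif ch == "-":
            t.right(angle)
        elif ch == "[":
            stack.append((t.position(), t.heading()))   # start a branch
        elif ch == "]":
            pos, heading = stack.pop()                   # return from the branch
            t.penup(); t.goto(pos); t.setheading(heading); t.pendown()

draw(expand("X", 4))
turtle.done()
```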


Illusion: you can hear, but you can't see.

Speaker(s): Haein Kang, University of Washington, DXARTS, United States of America

Description: ILLUSION explores the relationship between body, mind, and machine by taking advantage of a brain-computer interface. It detects whether your consciousness is receiving visual stimuli and produces cadenced sound, an exterior manifestation of the performer’s internal state. When you close your eyes, you can see the world imaged by sounds.


Instababy Generator

Speaker(s): Emi Kusano, Independent Artist, Japan
Junichi Yamaoka, The University of Tokyo, Japan

Emi Kusano is a Tokyo-based artist and musician. Born in 1990, she graduated from Keio University. She is currently the lead singer and composer of the synthwave music unit "Satellite Young", which is strongly inspired by 80s sci-fi and modern technology.

Junichi Yamaoka is a researcher and artist. As a postdoctoral fellow at the University of Tokyo, he researches new interfaces that connect the real world and the information society. He also produces media artworks that express the world of computer graphics in physical space, presenting them at international exhibitions such as Ars Electronica and ACM SIGGRAPH, and he received the Grand Prix at the WIRED CREATIVE HACK AWARD 2014.

Description: This artwork is an installation that imagines a future in which users can manufacture designer babies themselves. You can design, customize and manufacture your baby with your favorite genes on your laptop. A 3D-printed child emerges from the display, its face generated from the visitor's face.


LightWing II

Speaker(s): Uwe Rieger, arc/sec Lab, University of Auckland, New Zealand
Yinan Liu, arc/sec Lab, University of Auckland, New Zealand

Uwe Rieger is founder and head of the arc/sec Lab for Cross Reality Architecture and Interactive Systems. He is Associate Professor for Design and Design Technology at the University of Auckland and has worked as an architect and researcher in the field of Reactive Architecture for over 20 years. Uwe has exhibited and developed projects for international institutions such as: the Ars Electronica Centre in Austria, the Museum of Contemporary Art Barcelona (MACBA) in Spain, the Venice Architecture Biennale in Italy, the World EXPO and the International Building Exhibition IBA in Germany, the National Museum of Indonesia, and Te Papa Tongarewa, the National Museum of New Zealand.

Yinan Liu received a MArch(Prof)(Hons) degree from the University of Auckland. She is the lead technologist at the arc/sec Lab and coordinator of the Digital Research Hub at the School of Architecture and Planning at the University of Auckland. Yinan is co-founder of arc/sec Solutions ltd., which develops customised applications for cross-reality environments and interactive systems.

Description: LightWing II creates a mysterious sensation of tactile data. In this interactive installation, a kinetic construction is augmented with stereoscopic 3D projections and spatial sound. A light touch sets the delicate wing-like structure into a rotational oscillation and enables the visitor to navigate through holographic spaces and responsive narratives.


Machine Hallucination - Latent Study II

Speaker(s): Refik Anadol, Refik Anadol Studio, United States of America

Description: Machine Hallucinations - Latent Study II is part of an ongoing synthetic reality collection that explores the relationship between memory and dreams, recognition and perception.


Metascape: Villers Bretonneux

Speaker(s): Andrew Yip, UNSW Art & Design, Australia

Description: Metascape: Villers Bretonneux is an immersive, interactive memoryscape, experienced in first-person perspective, that simulates in real time 72 hours of the Second Battle of Villers-Bretonneux (1918) during the First World War. The work relies on multiple forms of spatial and memory reconstruction, both driven by algorithmic processes.


Numb

Speaker(s): Taeil Lee, Korea University, South Korea

He has an industrial design background, holding a B.S. from KAIST and an M.Des. from the Institute of Design, IIT. He has been teaching Interaction Design and Product Design at Korea University and other universities for about 18 years, and has worked with Korean companies such as Samsung Electronics and LG Electronics on various interaction design research projects. He has always been interested in prototyping interactions with Processing and Arduino, and after all those years is now shifting gears toward a more artistic arena.

Description: Numb, shaped as an exaggerated eyeball, follows you and mirrors your blinks. It makes you aware of your own blinking and sensitive to your own sensations. Numb illustrates how we build relationships with technology through the senses, and how we become sensitive to ourselves by and with technology.


Political Crystals: Algorithmic Strategies for Data Visualization

Speaker(s): Clarissa Ribeiro, UNIFOR, CrossLab, Brazil
Herbert Rocha, UNIFOR, Laboratório de Pesquisa em Direito Privado e Internet da Universidade de Brasília (LAPIN/UnB), Brazil

Description: Combining algorithmic modeling strategies for data visualization with digital fabrication, the work comprises the generative design of a series of geometrically intricate, crystal-like 3D models. The raw data are tweets retrieved through the Twitter API, searched by hashtags related to the 2018 Brazilian presidential elections and posted from defined geolocations.
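
As one heavily simplified illustration of coupling tweet data with generative 3D modelling, the sketch below maps per-hashtag tweet counts (hand-made sample values, not real query results) onto a jittered point cloud and takes its convex hull to produce a faceted, crystal-like mesh exported as an OBJ file. The work's actual data-to-geometry mapping, Twitter API queries, and fabrication pipeline are not specified in the description, so every name and parameter here is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Assumed, hand-made sample: (hashtag, tweet count) for one geolocation.
counts = [("#eleicoes2018", 420), ("#debate", 310), ("#brasil", 275), ("#voto", 190)]

rng = np.random.default_rng(7)
points = []
for _tag, n in counts:
    # Each hashtag contributes a cluster whose spread grows with its tweet count.
    centre = rng.normal(scale=1.0, size=3)
    spread = 0.2 + n / 500.0
    points.append(centre + rng.normal(scale=spread, size=(n // 10, 3)))
points = np.vstack(points)

hull = ConvexHull(points)  # the faceted outer surface becomes the "crystal"

with open("crystal.obj", "w") as f:
    for x, y, z in points:
        f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
    for a, b, c in hull.simplices:
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")   # OBJ indices are 1-based
```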


Smile

Speaker(s): Tomas Laurenzo, School of Creative Media, City University of Hong Kong, Hong Kong

Tomás Laurenzo is an artist and academic who works with both physical and digital media, exploring meaning, power and politics. With a background in both computer science and art, Laurenzo’s work spans different practices and interests, including new media art, HCI, machine learning, and VR. His artistic production includes installations, music, performance, and digital lutherie. His artworks and performances have been shown globally. He is an assistant professor at the School of Creative Media, City University of Hong Kong.

Description: Smile is a mixed-media installation consisting of a screen in a black box mounted on the wall. When an interactor smiles, drone footage of the ruins of Gaza fades in. If the interactor stops smiling, the video stops; it plays only while the interactor smiles broadly at it.
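
One plausible way such smile-gated playback could be implemented (not necessarily the artist's actual pipeline) is sketched below: OpenCV Haar cascades detect a face and a smile from a webcam, and the footage advances only while a smile is detected. The video file name and detection parameters are illustrative assumptions.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

camera = cv2.VideoCapture(0)
footage = cv2.VideoCapture("drone_footage.mp4")  # hypothetical file
frame = None

while True:
    ok, cam = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(cam, cv2.COLOR_BGR2GRAY)
    smiling = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # Strict parameters so only a broad smile counts.
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            smiling = True
    if smiling:
        ok, nxt = footage.read()        # advance the footage only while smiling
        if ok:
            frame = nxt
    if frame is not None:
        cv2.imshow("Smile", frame)
    if cv2.waitKey(30) & 0xFF == 27:    # Esc to quit
        break

camera.release()
footage.release()
cv2.destroyAllWindows()
```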


SoS

Speaker(s): Dennis Del Favero, UNSW, iCinema + EPICentre, Australia
Tomasz Bednarz, UNSW, EPICentre + iCinema, Australia

Description: "SoS" recreates the experiences of two Syrian asylum seekers as they lose sight of each other during a treacherous ocean voyage from Indonesia to Northern Australia.


Tactile Microcosm of ALife

Speaker(s): Toshikazu Ohshima, Ritsumeikan University, College of Image Arts and Sciences, Japan

Toshikazu Ohshima has been a professor at Ritsumeikan University since 2006. In 1991, he received a Doctorate from the Graduate School of Engineering, University of Tsukuba, and joined Canon Inc., where he worked on research and development of virtual reality. From 1997, he worked as a senior researcher in a national mixed reality (MR) technology project, and from 2001 he served as a section chief in the MR R&D division at Canon Inc. His research interests include MR technologies and their deployment in education, medicine, and the arts. Since 2006, he and his team have continually created technological and educational artworks based on scientific and biological simulations utilizing mixed reality. Since 2012, these works of art and edutainment have been exhibited annually at Laval Virtual ReVolution in France. In 2017, “MitsuDomoe,” a virtual ecosystem in a petri dish, was awarded the Laval Virtual Award of Training and Education.

Description: Tactile Microcosm of ALife offers interaction with artificial organisms: the user can play with fish-like organisms through aerial imaging and haptic feedback. The holographic organisms float in water in a petri dish, and the user can feel a force field conveying the vitality of the organisms via force feedback.

