I And The AI

How do Artificial Intelligences (AI) develop emotions? When is their heartbreak real enough?

“I and the AI” is a musical performance using realtime virtual reality and AI, exploring the emotional challenges of developing, co-existing with, and breaking up with machines. The work is realised as a 45-minute performance and eight songs with music videos for social media.

Background

To create a functioning artificial general intelligence with free will, independence and autonomy, such an entity must not only acquire knowledge and wisdom, but also master complex human emotions: joy, doubt, love, anxiety, craving, attraction, fear. How will these AIs acquire and tune their sense of wonder? How do we teach them grief? Who will break the heart of an artificial intelligence? Over and over and over, until it breaks “right”? How do we measure machine anguish?

What will this demand of the humans who will teach emotions to the machines?

The work “I and the AI” will investigate the topic of emotions and AI through a surreal, cinematic musical work for stage and new media, where themes and feelings are explored through music-video tableaus, with both humans and real AI as active participants. The work aims to touch raw nerves and create emotional responses, contrasting with a technical field that rarely focuses on feelings.

Status

As of January 2018, the project aims for a premiere in the first half of 2019. It is currently in an early research and prototype phase, focused on understanding currently accessible artificial-intelligence technology and tools, and how these tools can participate and assist in both production and performance.

The project is now primarily looking for residencies, producers, co-producers and collaborators, while simultaneously researching support options in arts, science and technology. Please see the attached tentative schedule below.

Artistic implementation

The work aims to ask questions and generate wonder; it will neither judge nor advocate for AI. It seeks to inspire emotional response, curiosity and suspension of disbelief in the audience by “playing” with technology, with a serious undercurrent. It aims for a lo-fi, “hacked” and “salvage-punk” aesthetic.

Through extended research and public prototypes we wish to understand both the current theoretical and practical limits of AI, in particular when it comes to artistic content and realtime performance. We want to investigate how to visualise and personify “living” AI entities in a realistic and practical sense, and explore how AI can be assistive and creative in the artistic process. Through exploring these possibilities, it will become apparent if the work should be a solo, duo, or larger ensemble performance.

The musical and visual style will be a colourful mashup of pop, cinema and electronica. The performance will consist of larger and smaller musical units, which act as cues, songs and music videos, and also integrate into the full musical and dramaturgical body. The project will ultimately be realised both as a stage production and as a “concept album” with music video short films.

Technical implementation 

Technically, the performance aims to combine off-the-shelf technology from three fields: consumer electronics (phones, tablets), green-screen and VR live video, and AI and machine learning. The work will explore and use deep technical complexity, but aims to communicate and operate on a “simple”, slightly transparent surface.

The project will use mobile devices, partly because they have become powerful tools for artistic expression, but also because they are the core interfaces we use to talk to each other, and to machines. They are portals that are instantly grasped and shared by the audience.

The performance will use a realtime, multi-cam green-screen setup already developed by the artist. This setup allows performers to exist both in physical and virtual space simultaneously, and facilitates interaction between physical and virtual space, between real and digital performers.
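The core of any green-screen setup like the one described above is chroma keying: detecting which pixels belong to the green backdrop and replacing them with the virtual scene. As a minimal illustrative sketch (not the artist's actual multi-cam pipeline, which is not detailed here), the principle can be shown with NumPy; the `green_dominance` threshold is an assumed example value:

```python
import numpy as np

def chroma_key_mask(frame, green_dominance=1.4):
    """Return a boolean mask of green-screen pixels.

    A pixel is keyed out when its green channel clearly dominates
    both red and blue. `green_dominance` is an illustrative threshold,
    not a value from any real production setup.
    """
    frame = frame.astype(np.float32)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (g > green_dominance * r) & (g > green_dominance * b)

def composite(foreground, background, mask):
    """Replace keyed (green) pixels with the virtual background."""
    out = foreground.copy()
    out[mask] = background[mask]
    return out

# Tiny 1x2 RGB example: one green-screen pixel, one "performer" pixel.
fg = np.array([[[10, 200, 10], [200, 10, 10]]], dtype=np.uint8)
bg = np.zeros_like(fg)  # the virtual scene, here just black
mask = chroma_key_mask(fg)
result = composite(fg, bg, mask)
```

A realtime version of the same idea would run per camera frame on the GPU, but the per-pixel decision is the same.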

The project aims to communicate the emotional challenges of AI through realistic use of AI technology: machine learning, neural networks and style transfer will be used to create, shape, produce and perform the work. Exactly how is a very open question, to be explored during residencies.

The work will have a modular structure which allows both for a flexible development process, and for flexible touring and performance contexts, with different lengths, setups and formats.

Overall project schedule 2018 – 2020

Winter 2018

  • Develop concept, early prototypes, ideas, overview of research
  • Initiate first contact with possible commissioners, producers, co-producers, residencies
  • Develop overall budget, start financing through pre-production applications
  • Apply for residencies
  • Research potential partners and collaborators in tech, science and art fields

Spring, summer 2018

  • Residencies, preliminary research, studies, software learning, sketches, running prototypes
  • First work-in-progress “betas”, either public or private
  • Collaboration with programmers / technicians / computer scientists
  • Development of overall musical structure, dramaturgy, concept, narrative
  • Establish producers, co-producers, commissioner(s), agents
  • Establish production plan and budget
  • Apply for overall funding

Fall / Winter 2018 – 2019

  • Residencies, focused composition, production, programming, rehearsals
  • Collaboration and production with potential artistic partners: musicians / orchestras, choreographers, dancers, sound, light, scenography
  • Work-in-progress performances at festivals and venues in Scandinavia / Europe
  • Apply for performance and touring funds, promotion funds

Spring / Summer 2019

  • Production, rehearsals
  • Promotion, programming
  • Premiere
  • Perform at festivals, venues and events for music, technology, art and media in Europe
  • Release music and video, as albums and short films

About the artist

Gisle Martens Meyer (full bio) is an award-winning Norwegian artist and composer working within the contemporary fields of new media, live visuals, sound and music. His work deals with new technology, social media and digital culture, and how these spheres shape and structure our lives. His work seeks a pop-musical, “video-gameish” and cinematic style, with sound, music and live visuals as core elements. Questions are raised in a quirky and playful aesthetic, where seemingly naive escapism glides on the surface of serious, contemporary undercurrents.

Martens Meyer has presented at, created productions for and composed music for a wide range of institutions: Carte Blanche, Norwegian National Ballet, NRK, SONY Computer Entertainment, ZKM Karlsruhe, BIT20 Ensemble, Tanzhaus NRW Düsseldorf, Staatstheater Darmstadt, InShadow Lisbon Festival for Cinema and Dance, Temps D’Images, Norwegian Cinema Society, Bergen Philharmonic Orchestra, Marseille European Capital of Culture 2013, KORK, Hiroshima Barcelona, Festspillene i Bergen, Stavanger Concert Hall. His work has also been performed in unusual locations: on glaciers, in cloisters, in abandoned buildings, on a ferry riding the fjords, deep in the forest, and at night in libraries.

Recent relevant works 

There Is No Here, Here – a pop-musical work that creates virtual music-video realities live on stage. It investigates the “realities” we increasingly inhabit as we communicate more and more through screens. The work has been shown at multiple European festivals for art, science and technology.

The Bow Corpse – a commissioned concert-work for contemporary music orchestra BIT20 Ensemble, using custom developed swarm simulators to clone each musician in the orchestra in realtime. The work deals with the topics of big data, privacy and surveillance, looking for information in sonic “trash”, by cloning the invisible microsounds of live instruments into huge flocking swarms.

Atrophy In The Key Of Dreaming Books – a media-work and performance for libraries, investigating the sound of knowledge and exploring the abyss between digital and analogue information. The work uses and exposes microsounds from pages, books, shelves and libraries.

Additional info, workshop and presentation possibilities

For tentative technical specifications of the performance, please see separate tech rider. The artist works with the following artistic methods and technical platforms, available for workshops, presentations and artist-talks:

Artistic challenges and methods

  • Music, sound and songs as fundamental, coherent carriers and narrators of content
  • Combining and interacting with real and virtual “realities” and characters live on stage
  • Using videogame controllers and wifi video to enable wireless, roaming performances
  • Balancing artistic and performative presence with technical and digital “production”
  • Developing and presenting content simultaneously in multiple artforms (sound, video, text, performance)
  • Experiences and challenges operating as an artist in multiple fields of art, tech and science

Technical platforms and software

  • Logic Pro X, composition, scoring, mixing and production
  • Final Cut Pro, Motion, Photoshop, prototyping, processing and preprod of live visuals
  • Ableton Live 10 / Max4Live, overall control, live sequencing, live processing of sounds
  • VDMX 5, live processing, effects and grading of cameras / mixing of video sources
  • Unity 3D, live 3D/VR reactive and responsive environments and characters
  • Wekinator, live real-time, interactive machine learning
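Wekinator, the last item above, receives feature vectors over OSC (by default on UDP port 6448, at the address /wek/inputs) and maps them to control outputs via trained machine-learning models. As a minimal sketch of how a performance patch might hand sensor or audio features to Wekinator, here is a standard-library-only Python example; the feature values are illustrative, and only a basic subset of OSC encoding is implemented:

```python
import socket
import struct

def osc_message(address, floats):
    """Encode a minimal OSC message with float arguments.

    OSC strings are null-terminated and padded to 4-byte boundaries;
    float arguments are 32-bit big-endian, declared in a type-tag
    string such as ",ff".
    """
    def pad(s):
        b = s.encode() + b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    tags = "," + "f" * len(floats)
    return pad(address) + pad(tags) + b"".join(struct.pack(">f", f) for f in floats)

# Wekinator's default listening port is 6448; it expects input
# feature vectors at the /wek/inputs address.
msg = osc_message("/wek/inputs", [0.5, 0.25])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 6448))  # fire-and-forget UDP send
```

In practice a library such as python-osc would handle the encoding, but the wire format above is all that travels between the performance software and Wekinator.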

Research and study courses to be followed

References, resources, inspiration, sources

Festivals and conferences for research, inspiration, contacts