Program

Live Streams Every Evening


5pm EDT | 11pm CEST

An Ars Electronica Premiere

The Freudian dream-work finds its match in a class of self-supervised AIs, neural networks tasked with learning concise representations of raw data, whose operation hinges on a kind of iterative, suffusing imagination. For this year’s festival, we paint a dream with sounds from 2020.

To create the piece for this year's festival, we fed tens of thousands of audio field recordings scraped from the internet into a neural network called YAMNet. The network analyzes each recording and generates an embedding: a vector of 521 characteristics, or dimensions, describing the sound. To interact with this material on a human scale, we unfold a 2-dimensional map from the 521-dimensional universe of sound using a technique called t-SNE. Each dot on the map above represents a 10-second piece of sound plucked from this universe. This particular map is constructed from a small subset of the universe, corresponding to field recordings made in the first 8 months of 2020.
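The dimensionality-reduction step above can be sketched in a few lines. This is a minimal illustration, not our production pipeline: it uses scikit-learn's t-SNE on randomly generated stand-in vectors rather than actual YAMNet output, since running YAMNet requires downloading the model.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for YAMNet output: in the real pipeline, each 10-second clip
# yields a 521-dimensional vector. Here we simulate 200 clips.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 521))

# Unfold a 2-D map from the 521-dimensional universe of sound.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
map_2d = tsne.fit_transform(embeddings)  # shape: (200, 2)
```

Each row of `map_2d` is the position of one clip's dot on the map; clips that sound alike (that is, whose embeddings are close in the 521-dimensional space) tend to land near each other in 2-D.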

We use the 2020 map to perform live the 30-minute sound experiences in this year's program, zooming way down into the map and traveling through it. As we navigate, our software mixes the short pieces of sound we encounter into a continuously evolving soundscape. We invite the audience to come along with us on this wandering journey.
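The navigation idea can be sketched as a nearest-neighbor walk: at each moment of the journey, the mixer looks up the clips closest to the current position on the map. The positions, the path, and the helper function below are hypothetical stand-ins for our actual software.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-D map positions for 200 ten-second clips.
clip_positions = rng.uniform(-50, 50, size=(200, 2))

def clips_along_path(path, positions, k=3):
    """For each point on a navigation path, return the indices of the
    k nearest clips -- the sounds a mixer would blend at that moment."""
    out = []
    for point in path:
        dists = np.linalg.norm(positions - point, axis=1)
        out.append(np.argsort(dists)[:k])
    return np.array(out)

# A short straight-line journey across the map, sampled at 10 moments.
path = np.linspace([-40.0, -40.0], [40.0, 40.0], num=10)
nearest = clips_along_path(path, clip_positions)  # shape: (10, 3)
```

As the path drifts across the map, the set of nearest clips changes gradually, which is what lets the mix evolve continuously rather than jump-cut between sounds.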
