The smiles of the audience pouring through the Echo Temple were an amazing payoff for all the hard work. People danced and played all day, reacting and smiling in wonder as their movements shaped the music around them. As the day wore on, the crowd grew bigger and more energetic, ending with a massive party.
With the system up and running on-site, we tested and calibrated each tower and balanced the overall mix. In this video you can see us working on one tower: with the main mix low, we focus on tuning the instrument and eventually test it quantized to the groove.
The event was a full-day festival featuring a roster of amazing musical talent and drawing a huge crowd. The challenge was to find a concept that would entertain people, one that involved music but did not replicate the experience they were getting elsewhere during the festival.
We started with the loose idea of an exhibition based entirely on sound, where movements and interactions in the air would manipulate the soundscape. The installation would sit in a wooded area, mostly free of structures beyond monoliths marking points of interaction.
We arrived at a refined concept: a series of towers arranged in a circle around a central element, each delivering audio and offering a point of interaction. Movement within the space would cause a reaction in the music emanating from the speakers.
The towers would be relatively large, defining the space. Each would play a musical role in an overall composition performed within the exhibit, and interactions in front of a tower would cause reactions in the sound originating from that tower. There would be no direct visual response; participants would get all of their cues about interaction from dramatic changes in the sound and music responding to their movements.
The computer vision required to detect and respond to movement within the exhibit had some unique demands. The exhibit would have no roof or other structure to control incoming light, and the location had partial tree coverage, so lighting was likely to change dramatically throughout the day and with the weather. We needed a solution robust enough to deal with variable lighting, partial sunlight, and other environmental factors.
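Surviving patchy sunlight usually means thresholding each region of the image against its own neighborhood rather than against one global value. As a rough sketch of that idea (not our production pipeline; the block size and offset are illustrative), a tile-based adaptive threshold looks like this:

```python
import numpy as np

def adaptive_threshold(gray, block=16, offset=10):
    """Binarize an image against its local mean so uneven lighting
    (sun patches, tree shadows) does not wash out dark markers.
    gray: 2D uint8 array; block: local window size in pixels."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y+block, x:x+block].astype(float)
            # Compare each pixel to its tile's mean, not a global cutoff
            out[y:y+block, x:x+block] = tile > (tile.mean() - offset)
    return out
```

Because every tile brings its own reference level, a marker reads as "dark" whether it sits in full sun or deep shade.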
We settled on a combination of Ableton Live and Max, both standalone and via Max for Live, as the production and performance environment for the music. Live gave us the ability to set up musical scenes and control the overall performance of the music over time, like a DJ set. Max gave us the control and scripting needed to generate notes and sound-processing controllers that would in turn manipulate Live's output dynamically.
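Quantizing triggers "to the groove" boils down to snapping event times onto the beat grid before they fire. A minimal sketch of that logic, assuming a fixed tempo and a 16th-note grid (both assumptions; inside Max this would live in patcher objects rather than Python):

```python
def quantize_to_grid(t, bpm=124.0, division=4):
    """Snap an event time (seconds) to the nearest grid step.
    division=4 gives 16th notes (4 steps per beat).
    The default bpm is illustrative, not a real set tempo."""
    step = 60.0 / bpm / division   # grid step length in seconds
    return round(t / step) * step
```

An interaction detected slightly ahead of or behind the beat is delayed or advanced to the nearest step, so responses always land in time with the music.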
We began producing music demos ranging from mellow ambient tracks to heavy electro-house. Each demo contained strong central elements (bass, drums, etc.) intended for the center speakers within the exhibit, along with various layers of melody and harmony, both moving and sustained, representing the tones that would play from the six surrounding towers.
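One way to picture this arrangement is as a routing table from stems to outputs. The stem names and channel numbering below are hypothetical, chosen only to illustrate the center-versus-tower split:

```python
# Hypothetical routing: rhythmic foundations feed the center stack,
# melodic/harmonic layers feed the 6 surrounding towers.
CENTER_STEMS = ["bass", "drums"]
TOWER_STEMS = ["lead", "pad", "arp", "pluck", "choir", "fx"]

def route(stem):
    """Return the output channel for a stem:
    0 for the center speakers, 1-6 for the surrounding towers."""
    if stem in CENTER_STEMS:
        return 0
    return TOWER_STEMS.index(stem) + 1
```

Keeping the anchor elements in the center means a participant muting or warping one tower's layer never drops the groove that holds the whole space together.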
We ran a few heavy prototyping sessions to determine which technologies would best support the computer-vision requirements of the Echo Temple, and to arrive at a workable design for how the CV data would get pushed to Max and Live.
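Max listens for network data through objects like [udpreceive], and OSC over UDP is the standard way to push tracking data into it. As a hedged sketch of that transport (the address pattern, port, and payload here are assumptions for illustration, not the actual Echo Temple schema), an OSC 1.0 message with float arguments can be encoded with nothing but the standard library:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC 1.0 message carrying float arguments,
    e.g. a marker position reported by the CV pipeline."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())  # type tag string
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian 32-bit float
    return msg

# Example: send marker x/y for one tower to a Max [udpreceive 7400]
# (host and port are illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/tower/3/marker", 0.42, 0.87),
#             ("127.0.0.1", 7400))
```

On the Max side, a matching [udpreceive] plus OSC routing objects can unpack the address pattern and floats into control signals for the patch.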
During testing we went through a number of marker prototypes, focusing on color, scale, and how the lighting and reflectivity of the paper affected accuracy. Ultimately we decided on black-and-white markers, printed matte on fans that could be branded and handed out to festival-goers. The fans would be easy to hold correctly, fun to dance with, and practical given that it's a hot, sweaty summer music festival.
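Black-and-white markers read well partly because, once thresholded, each cell of the pattern collapses cleanly to a single bit. A hypothetical decoding step (the actual layout printed on the fans is not described here) might pack a thresholded cell grid into an integer ID:

```python
def marker_id(bits):
    """Read a marker ID from a thresholded cell grid (row-major,
    True = white cell). A hypothetical scheme for illustration,
    not the marker design actually used on the fans."""
    value = 0
    for row in bits:
        for cell in row:
            value = (value << 1) | int(cell)
    return value
```

Matte printing matters for exactly this step: a glossy surface catching sunlight can flip cells from black to white and corrupt the decoded ID.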