I designed and played these two performances as part of a small team of engineers and musicians I have been collaborating with for a while: Valerio Visconti, Dario Mazzanti, and Marco Gaudina. We share the belief that highly interactive audio/visual performances can create a strong connection between the musicians and the audience, transforming each show into an engaging and unique experience. In these works, "interaction" means actual active control over the same devices the artists use during the show. Using ordinary portable devices, the audience is given the possibility to play music and modify visuals, leaving a personal mark on the performance. This connection also aims to give rise to a new concept of duet and improvisation, one that places on-stage and off-stage participants on the same level.
AvatarKontrol and four:PLAY
2012 - Interactive audio/visual performances
In both performances, the spectators can connect to an open wifi network with their own devices [smartphones, tablets, computers, etc.]. Using a regular browser [no specific apps are required], they are automatically redirected to an on-line application that transforms the device into a multi-purpose interface. All the musicians' instruments are connected to the same network and configured to support remote access. During the show, the musicians themselves mainly use the same interface on smartphones and tablets to play music and control their gear.
Right after connecting, each user has to select a customized avatar from a menu, which is immediately displayed in the visuals projected behind the performers. The device interface then turns into a controller to move the avatar and interact with the real-time visuals. In specific parts of the performance, audience members can move their avatars into special areas of the visuals that grant, for a limited period of time, direct control over interactive parameters.
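A minimal sketch of how a movement update might travel from a device to the visuals, assuming a simple JSON wire format (the field names, normalized range, and `speed` factor are illustrative; the system's actual protocol is not documented here):

```python
import json

def avatar_move_message(user_id, dx, dy):
    """Encode one avatar movement update as a compact JSON string.

    The schema is a hypothetical example, not the original wire format.
    dx/dy are clamped to a normalized -1..1 range before sending.
    """
    return json.dumps({
        "type": "move",
        "user": user_id,
        "dx": max(-1.0, min(1.0, dx)),
        "dy": max(-1.0, min(1.0, dy)),
    })

def apply_move(position, message, speed=10.0):
    """Update an (x, y) avatar position from a decoded movement message."""
    data = json.loads(message)
    x, y = position
    return (x + data["dx"] * speed, y + data["dy"] * speed)
```

On the visuals side, each incoming message nudges the sender's avatar by a fixed step, so the browser only has to transmit direction, not absolute coordinates.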
In AvatarKontrol, there are two types of parameters: music parameters and visual parameters. The former give access to the arbitrary control of a sound effect or to the real-time synthesis of a sound; in this case, the user's device turns into a simple slider. The latter allow the user to modify the appearance and behavior of the current visuals, using a knob on the portable device to drive the change. Additionally, during the climax of certain tracks, the audience is urged to dance, and the values from the devices' accelerometers are read to monitor the amount of movement and entrainment; the higher these values grow, the steeper the climax builds, both for audio and visuals. The piece was presented at the Electropark 2012 Festival in Genoa, Italy, and took second place at the related Make Your Sound competition.
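The accelerometer-driven climax could be computed along these lines (a sketch under stated assumptions: raw readings in m/s², deviation from gravity as the movement measure, and a hypothetical `max_energy` ceiling for normalization):

```python
import math

def movement_energy(samples, gravity=9.81):
    """Estimate the amount of movement from raw accelerometer samples.

    samples: list of (x, y, z) acceleration readings in m/s^2.
    Returns the mean deviation of the acceleration magnitude from
    gravity, which grows as the device is shaken while dancing.
    """
    if not samples:
        return 0.0
    deviations = [abs(math.sqrt(x * x + y * y + z * z) - gravity)
                  for x, y, z in samples]
    return sum(deviations) / len(deviations)

def climax_intensity(energies, max_energy=5.0):
    """Aggregate per-device energies into a single 0..1 climax driver."""
    if not energies:
        return 0.0
    avg = sum(energies) / len(energies)
    return min(avg / max_energy, 1.0)
```

Averaging across all connected devices means the climax responds to the crowd as a whole rather than to any single enthusiastic dancer.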
In four:PLAY, the same system is enhanced with the addition of a new kind of parameter for stage light control. Furthermore, the bodies of the performers are tracked; in some specific tracks their silhouettes are processed and projected onto the visuals, activating interactions with the avatars. The piece was one of the winners of the Call4roBOt 2012 competition and was selected to be showcased at the roBOt Festival 2012 in Bologna, Italy.
While the first on-stage laptop is connected to the network to control and project the visuals, the second one handles all the music output. It locally runs an Ableton Live set wrapping a Max for Live patch. When the audience controls the sound parameters, the signals coming from the audience's devices are forwarded from the server to this machine as OSC messages. Using the Live API embedded in Max for Live, these signals are directly routed to a pre-defined set of Live device parameters, allowing real-time remote control.
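The server-to-music-laptop hop can be sketched with a minimal OSC encoder in the standard library (the `/live/param` address, host, and port are illustrative placeholders; only float32 arguments are handled):

```python
import socket
import struct

def _osc_pad(b):
    """Null-terminate and pad a byte string to a 4-byte boundary (OSC rule)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message carrying only float32 arguments."""
    msg = _osc_pad(address.encode("ascii"))
    msg += _osc_pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for value in args:
        msg += struct.pack(">f", float(value))  # big-endian float32
    return msg

def forward_parameter(host, port, address, value):
    """Forward one audience-controlled value to the music laptop over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))
```

On the receiving end, the Max for Live patch would listen on the matching UDP port and map each OSC address onto one of the pre-defined Live device parameters.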
An external interface is connected to this same machine, so that the remote signals can be output as MIDI and DMX signals to control external devices [e.g., musical instruments, lights]. Furthermore, an Arduino board communicates with the Live set via serial port, permitting the control of any kind of attached actuator.
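Turning a normalized remote value into the MIDI bytes the external interface expects might look like this (a sketch following the standard MIDI 1.0 Control Change layout; the choice of controller number for a given parameter is up to the mapping):

```python
def scale_to_midi(normalized):
    """Map a normalized 0..1 remote value onto the 0..127 MIDI range."""
    return max(0, min(127, int(round(normalized * 127))))

def midi_cc(channel, controller, value):
    """Build a 3-byte MIDI Control Change message.

    channel: 0-15, controller: 0-127, value: 0-127.
    Status byte is 0xB0 (Control Change) OR'd with the channel.
    """
    if not (0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128):
        raise ValueError("MIDI field out of range")
    return bytes([0xB0 | channel, controller, value])
```

The same scaled values could also feed the DMX side, since DMX channels likewise carry single-byte levels (0-255 rather than 0-127).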
A couple of Kinect sensors are used to track the performers. Thanks to a simple hack, the raw data are processed to extract the performers' silhouettes, which are then sent to the visuals.
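The core of silhouette extraction from a depth sensor can be reduced to a depth-band threshold, sketched here on a plain list-of-lists frame (the `near`/`far` values in millimetres are illustrative defaults, not the thresholds used on stage):

```python
def extract_silhouette(depth_frame, near=500, far=2500):
    """Threshold a depth frame (values in mm) into a binary silhouette mask.

    Pixels whose depth falls inside [near, far] are assumed to belong to a
    performer standing in front of the sensor; everything else is treated
    as background and zeroed out.
    """
    return [[1 if near <= d <= far else 0 for d in row]
            for row in depth_frame]
```

In practice the binary mask would be cleaned up (e.g., by removing small blobs) before being composited into the projected visuals.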
The projects are supported by the young Italian startup Circle Garage.