We aim to build performative musical instruments and installations that allow GMIS principles to be applied in a group setting. Overall, the networked multi-instrument setting will be implemented using OOCSI as the communication layer, with Processing clients acting as sensor-driven interactive front-ends and as the MIDI translation layer.
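As a minimal sketch of such a Processing client, the following code forwards a single sensor value over OOCSI. The server hostname, client name, channel name (instrumentChannel), and data key (intensity) are placeholder assumptions; a real client would read from an actual sensor rather than the mouse position used here as a stand-in.

```java
// Minimal Processing/OOCSI sensor client (channel, key, and host names are assumptions).
import nl.tue.id.oocsi.*;

OOCSI oocsi;

void setup() {
  size(200, 200);
  // connect to the OOCSI server; client name and host are placeholders
  oocsi = new OOCSI(this, "sensorClient1", "oocsi.example.com");
}

void draw() {
  // stand-in for a real sensor reading (e.g., a value received from an Arduino)
  int intensity = (int) map(mouseX, 0, width, 0, 127);

  // broadcast the reading on a shared channel for other instruments and the bridge
  oocsi.channel("instrumentChannel")
       .data("intensity", intensity)
       .send();
}
```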
Music and sound will be generated by a MIDI-capable sequencer (e.g., Logic Pro X or Ableton Live) hosting multi-track software instruments, effects, and audio tracks; the sequencer is connected to the system via the OOCSI-MIDI bridge. Processing serves as the basis for implementing the interactive clients, which may connect to external sensors ranging from simple Arduino-connected sensing modalities to complex sensors such as the Kinect or Leap Motion.
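A minimal sketch of such an OOCSI-MIDI bridge is shown below, assuming TheMidiBus library for MIDI output and reusing the placeholder channel and key names from the client sketch above; the MIDI port name (IAC Bus 1) stands in for whatever virtual MIDI port the sequencer listens to.

```java
// Sketch of an OOCSI-MIDI bridge in Processing, using TheMidiBus for MIDI output.
// Channel name, data key, and MIDI port name are assumptions.
import nl.tue.id.oocsi.*;
import themidibus.*;

OOCSI oocsi;
MidiBus midi;

void setup() {
  size(200, 200);
  // connect to the OOCSI server and subscribe to the shared channel
  oocsi = new OOCSI(this, "midiBridge", "oocsi.example.com");
  oocsi.subscribe("instrumentChannel");
  // open a MIDI output that the sequencer (Logic Pro X, Ableton Live) receives from;
  // "IAC Bus 1" is a placeholder for the actual virtual MIDI port
  midi = new MidiBus(this, -1, "IAC Bus 1");
}

void draw() {
  // keep the sketch running so OOCSI events continue to arrive
}

// called by OOCSI for every event on "instrumentChannel"
void instrumentChannel(OOCSIEvent event) {
  // map the received intensity (assumed key) to a note velocity
  int velocity = constrain(event.getInt("intensity", 0), 0, 127);
  // play middle C on MIDI channel 0; a full bridge would also
  // schedule matching note-offs and map events to pitches
  midi.sendNoteOn(0, 60, velocity);
}
```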