One of the things that needs to be decided is the technology used for input, i.e. recognizing the movement of the performer. There are a number of projects that I was considering as a starting point:

Wekinator – an integrated environment combining a control interface, intelligent mapping, gesture recognition and simple synthesis. It sounds like a ready-to-use project, but at the moment the built-in video tracking mechanism is not very robust, and neither are the synthesis options – these can be fully extended using ChucK, but working in that environment is not my goal right now. The gesture recognition part is quite interesting, but it only operates on static data, so it can't recognize, for example, a specific hand movement happening over time. However, both input and output data can take the form of OSC messages, so I'm considering using just the intelligent control part of this project, as sketched below.
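As a minimal sketch of that idea, here is how a feature vector could be fed into Wekinator as an OSC message, using the python-osc library. The port 6448 and the /wek/inputs address are Wekinator's documented defaults, but both are configurable and may differ between versions, so treat them as assumptions:

```python
# Sketch: feed a feature vector into Wekinator over OSC.
# Assumes Wekinator is listening on its default port 6448
# and expects input features at the /wek/inputs address.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

# A hypothetical 3-value feature vector, e.g. x/y/z of one tracked point.
features = [0.42, 0.13, 0.87]
client.send_message("/wek/inputs", features)
```

Any tracking front end that can produce a flat list of numbers per frame could drive Wekinator this way, which is what makes it attractive as the mapping layer.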

As a video tracking input I came across the Predator project (TLD). It is a very robust and reportedly accurate video tracking system; however, you need to define the tracked object at the beginning. I'd like something that expects a human posture and can track it without any pre-setting.

There is interesting development going on around the Kinect Xbox interface – the OpenKinect project. It aims to provide an open source environment for accessing sophisticated features like skeleton tracking. There is an OpenFrameworks version of the library as well. There is a massive number of projects involving Kinect right now; one of them relates to my instrument a little, by simulating a Theremin: Therenect.
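While full skeleton tracking is what I'm after, even raw depth frames from the Kinect are enough to prototype the input side. The sketch below uses the Python wrapper shipped with libfreenect (the OpenKinect driver) to grab a depth frame and reduce it to a crude nearest-region centroid – a stand-in for a real joint position, not skeleton tracking itself; the depth threshold is an arbitrary assumption about the local setup:

```python
# Sketch: grab one Kinect depth frame via libfreenect's Python wrapper
# and compute the centroid of the nearest region as a crude stand-in
# for a tracked body point (real skeleton tracking would replace this).
import freenect
import numpy as np

depth, _timestamp = freenect.sync_get_depth()  # raw depth values, 480x640

# Assumption: raw values below this threshold count as "close to the
# sensor"; zero readings are invalid and are masked out.
NEAR_THRESHOLD = 600
near = (depth > 0) & (depth < NEAR_THRESHOLD)

if near.any():
    ys, xs = np.nonzero(near)
    # Normalize to 0..1 so the values are ready to use as control features.
    cx = xs.mean() / depth.shape[1]
    cy = ys.mean() / depth.shape[0]
    print("nearest-region centroid:", cx, cy)
else:
    print("nothing within threshold")
```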

At this point I think I'll use the Kinect for skeleton tracking, feed that data into Wekinator to get flexible and intuitive control functions, and use the result as a set of control parameters for sound structures created in SuperCollider.
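A rough sketch of the last hop in that chain: the script below listens for Wekinator's output OSC messages and forwards them to SuperCollider's language port as synth control parameters. Port 12000 and the /wek/outputs address are Wekinator's documented output defaults and 57120 is sclang's default port, while /instrument/params is a hypothetical address that an OSCdef on the SuperCollider side would have to match – all of these are assumptions to be adapted:

```python
# Sketch: relay Wekinator's continuous outputs to SuperCollider as
# synth control parameters. Assumes Wekinator sends /wek/outputs to
# port 12000 (its default) and that something in sclang listens on
# the default language port 57120 for a hypothetical /instrument/params.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

sc = SimpleUDPClient("127.0.0.1", 57120)  # sclang's default port

def forward(address, *values):
    # Pass Wekinator's model outputs straight through as control values.
    sc.send_message("/instrument/params", list(values))

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", forward)

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()
```

Since every stage of the chain speaks OSC, each component can be swapped out or tested in isolation with a relay like this.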

I also think I'll stick to purely digital synthesis for now, leaving aside the idea of creating a hybrid electronic-acoustic instrument; that is not crucial for this project, which focuses on innovative and intuitive control giving meaningful musical results.

The next step will be to figure out how to analyze the physical gestures to control musical structures on many levels.