Joseph Callaly Blog Post 3

The biggest difficulty we faced in creating the installation was integrating the interactive elements using the Kinect. We had researched multiple ways this could be done, but found that the most popular methods would not run, either because of compatibility errors or because they depended on other software we lacked. All of the software capable of integrating the Kinect with Ableton is made by small developers or individuals, so bugs and issues are common. We settled on a program called DPKinect, built in Max/MSP, which generates values from the x-y-z coordinates of individual body parts. The main difficulty then became sending this data to Ableton in a way that could map different values to different parameters. My original idea was to convert the values in Max/MSP into MIDI CC messages, which would then control Ableton parameters via a virtual MIDI driver called LoopBe1. However, we had problems sending and receiving the CC data, so we were unable to do it this way. We eventually worked out a way to convert the data in Max into Open Sound Control (OSC) messages, which are transferred over a network, and then used another program built in Max for Live to map the incoming OSC data to Ableton parameters. This was the major breakthrough for completing the project, as it solved two problems simultaneously: it enabled mapping to Ableton, and it allowed the second computer to receive the same x-y-z coordinate values sent from Max over the local network. This had been a problem because one computer was running Windows and the other was a Mac, meaning we could not use a virtual USB hub.
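To give a sense of how the OSC side of this works, here is a minimal Python sketch (not our actual Max patch) of sending the kind of per-joint x-y-z messages DPKinect produces over the local network, using the python-osc library. The IP address, port, and OSC address names are assumptions chosen for illustration.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical receiver: the Max for Live mapping device listening on the LAN.
client = SimpleUDPClient("192.168.1.20", 9000)

def send_joint(joint, x, y, z):
    # One message per axis, e.g. /kinect/righthand/x -> 0.42
    client.send_message(f"/kinect/{joint}/x", x)
    client.send_message(f"/kinect/{joint}/y", y)
    client.send_message(f"/kinect/{joint}/z", z)

# Example: forward a tracked right-hand position to the mapper.
send_joint("righthand", 0.42, 1.10, 2.35)
```

Because OSC rides on ordinary UDP, the same messages can be read by any machine on the network regardless of operating system, which is what let the Windows and Mac machines share one Kinect.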

Having fully integrated the Kinect, we were able to finish the sound design and composition. The sound is constructed from four different tracks that are blended together in different amounts based on the position of the viewer. Each track needed to be able to play over the top of the others, such that any combination of the four would produce a distinct atmosphere. The area in front of the tower is therefore effectively divided into four quadrants, with the viewer controlling a point on a two-dimensional crossfading plane. We decided that, for the sound to remain interesting over a long period of time, the loops should be around sixteen minutes long. We pieced together the four sixteen-minute tracks using much of the material we had been generating while working on the Kinect, as well as new audio. The division of work for the sound design went smoothly, with an even split of audio created by Theo and myself.
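The crossfade itself amounts to bilinear weighting: with the viewer's position normalised to (x, y) in the unit square, each of the four loops gets a gain from the opposite corner. A minimal sketch in Python (illustrative only, not the code running in the installation; the corner names are assumptions):

```python
def quadrant_gains(x, y):
    """Bilinear crossfade for four loops placed at the corners of a
    unit square; x and y are the viewer's normalised position in [0, 1].
    The four gains always sum to 1."""
    return {
        "front_left":  (1 - x) * (1 - y),
        "front_right": x * (1 - y),
        "back_left":   (1 - x) * y,
        "back_right":  x * y,
    }

# A viewer standing dead centre hears all four loops equally:
print(quadrant_gains(0.5, 0.5))  # each gain is 0.25
```

Standing in any corner gives one loop full weight and silences the rest, which is why any combination of the four tracks has to work on its own.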

Setting up the installation went roughly as expected, with only a few problems arising in mapping the projection. We had previously worried that the projector would be blocked by the viewer too often, and that mounting it on the ceiling might be the only way to keep it unobscured. It turned out that the projector was only blocked when the viewer stood directly in front of it, and because of the forty-five-degree angle of the building it is more common to view the tower from either side. Mounting the projector above the viewer would still be ideal, but we were happy with how it worked from in front. The main problem after we set up the piece was that the Kinect's tracking is not perfect. It can lose the position of the viewer in some cases, and it cannot deal with multiple people in the area, as it quickly jumps between focusing on different people. Because of our placement in the room, people walking past can often draw the Kinect's focus. A solution would be to design a program that makes the Kinect always focus on the closest person, or to track multiple people at once and integrate that into the sound and vision; a sketch of the first idea follows below.
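The closest-person idea would be a small selection step between the skeleton data and the OSC output. As a hedged illustration in Python (the skeleton format here is an assumption for the example, not DPKinect's actual output):

```python
def closest_person(skeletons):
    """Pick the tracked skeleton nearest the sensor.

    skeletons: a list of dicts, each with an 'id' and a 'z' value
    (distance from the Kinect in metres). Returns None if nobody
    is currently tracked.
    """
    return min(skeletons, key=lambda s: s["z"], default=None)

# Example: three people in frame; the one at 1.8 m takes the focus.
people = [{"id": 1, "z": 3.2}, {"id": 2, "z": 1.8}, {"id": 3, "z": 2.6}]
print(closest_person(people))  # {'id': 2, 'z': 1.8}
```

Re-running this selection every frame would keep the focus on whoever is nearest the tower, so people walking past further back could no longer steal it.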

