As we move towards the first milestone presentation, initial research has begun to take shape amidst pre-production. This includes Theo and Joseph pulling data from the Kinect sensor, a group day trip to KL to collect images and film, and a foundational mock-up in Unity3D by myself.
Takashi Aiman introduced the class to the history, etymology and key concepts of Audiovisual. His own perspective was to incorporate one's own story, their perceived reality, into works of AV art, even shaping industry projects into more meaningful experiences. A demonstration of MIDI devices and live VJing was interesting, though I was already familiar with the concepts from previous classes at RMIT. I looked into Resolume, the application Takashi gave us a quick walkthrough of. For our group's projection-mapping needs it seems aimed at live visual mixing; it may have been handier if the projection object had more complex geometry, where it would have morphed video onto the facets.
(Takashi Aiman @ thetomoe.com)
Later we met with mentors from MMU's EEE Lab (Experimental Entertainment Experience); their presentations, activities and discussions were invaluable, and I took notes and skimmed Wikipedia articles throughout. The activity prompt asking us to rethink our project with a refocus on AR gave us concepts such as extending the city into a virtualised space, and incorporating live data and statistics that can be overlaid to expand on the metaphors of wealth inequality. Both of these may be built into the final outcome in our later installations back in Melbourne.
Morality in AR and Machine Learning was a thought-provoking discussion, with many hypotheticals and dystopic actualities shaping the goal for our integration of AR: it should be used to enhance our current metaphors through AV and never be a tacked-on application.
(Texture for production)
On Friday our group (Joseph, Mike, Theo and myself) ventured into KL city to gather assets. Capturing hundreds of stills and videos on a range of devices will build a visual library on Google Drive for Mike and myself to cut, edit, distort and begin production on the aesthetic of our tower projection. With our matrix of assets and representations in mind, Mike took us to areas both immersive and stunning where we could capture almost 85% of the images and film we envision needing: textures, textural information and objects representing a range of class segmentations.
(Mike gathering images of transport and networks)
Later on I began to mock up a stage in Unity3D where I could show the group what I envision being the technical setup: using Unity3D and the Kinect data to control all aspects of the projected motion, with randomly generated rooms including random placement of objects, lighting and people.
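As a rough sketch of how the random room generation could work, the snippet below picks objects from a pool tied to a floor band and scatters them at random positions. The asset names, floor bands and ranges are all placeholder assumptions for illustration, not our actual asset library.

```python
import random

# Hypothetical asset pools per class band; placeholder names,
# not the actual Google Drive library.
ASSET_POOLS = {
    "lower": ["market_stall", "laundry_line", "scooter"],
    "middle": ["office_desk", "potted_plant", "tv"],
    "upper": ["sculpture", "chandelier", "skyline_print"],
}

def pool_for_floor(floor, total_floors):
    """Map a floor index to a class band (rough thirds of the tower)."""
    band = floor / max(total_floors - 1, 1)
    if band < 1 / 3:
        return "lower"
    if band < 2 / 3:
        return "middle"
    return "upper"

def generate_room(floor, total_floors, rng=random):
    """One room: a few objects at random positions plus a random light level."""
    pool = ASSET_POOLS[pool_for_floor(floor, total_floors)]
    objects = [
        {
            "asset": rng.choice(pool),
            "x": rng.uniform(-1.0, 1.0),  # normalised room coordinates
            "z": rng.uniform(-1.0, 1.0),
        }
        for _ in range(rng.randint(2, 5))
    ]
    return {"floor": floor, "objects": objects,
            "light": rng.uniform(0.2, 1.0)}

# A 20-floor tower of randomly furnished rooms.
tower = [generate_room(f, 20) for f in range(20)]
```

In Unity the same idea would live in a C# spawner script instantiating prefabs, but the selection logic would be identical.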
Film loops and images, drawn from a range that depends on the floor being generated, will be placed on small low-poly planes inside the rooms. This should produce both a collaged mixed-media aesthetic and a faux-3D perception of depth in the projection: when a user moves within the sensor area, the render cameras in Unity will tilt to change the perceived depth of the windows, rather than using a flat orthographic camera.
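The camera-tilt idea can be sketched as a simple mapping from the viewer's head position to yaw and pitch angles. This assumes the Kinect supplies a head position in metres relative to the sensor; the tilt limit and movement ranges below are guessed constants, not measured values.

```python
# Parallax sketch: map a tracked head offset to camera tilt angles.
MAX_TILT_DEG = 15.0    # clamp so the camera never over-rotates (assumed)
SENSOR_RANGE_X = 1.5   # +/- metres of side-to-side movement expected (assumed)
SENSOR_RANGE_Y = 1.0   # +/- metres of vertical movement expected (assumed)

def camera_tilt(head_x, head_y):
    """Return (yaw, pitch) in degrees for a head offset in metres.
    Moving right (+x) yaws the camera left, so window contents shift
    and reveal depth, like looking past a real window frame."""
    nx = max(-1.0, min(1.0, head_x / SENSOR_RANGE_X))
    ny = max(-1.0, min(1.0, head_y / SENSOR_RANGE_Y))
    yaw = -nx * MAX_TILT_DEG
    pitch = ny * MAX_TILT_DEG
    return yaw, pitch

# Centred viewer: no tilt. Viewer at the edge of range: full 15 degrees.
print(camera_tilt(0.0, 0.0))   # (0.0, 0.0)
print(camera_tilt(1.5, 0.0))   # (-15.0, 0.0)
```

In the Unity scene this would feed the render camera's rotation each frame; clamping keeps the effect stable when someone walks past the edge of the sensor's view.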
This past week has been both productive and conducive to new ideas, and I'm looking forward to Sunday, when audio and visual will come together in the first prototype.