The device consists of several sensors, two projectors (one for video mapping and one for the frontal visual output), solid physical models, and a desktop visual-output program developed in C#. Users can interact with the device in several ways, such as body gestures and voice, and receive corresponding visual feedback.
The entire installation was deployed in a black box, where visitors are detected by sound sensors and infrared cameras as soon as they pass through the entrance.
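As a rough illustration of the entrance detection, the C# sketch below latches a visitor-entered event when either sensor fires; the class, the normalized SoundThreshold value, and the polling interface are assumptions for illustration, not the installation's actual driver code.

    using System;

    // Illustrative sketch only: the sensor-access interface is hypothetical.
    public sealed class EntranceDetector
    {
        private const double SoundThreshold = 0.3; // assumed normalized sound level
        private bool _present;                     // latch to avoid repeated events

        public event Action VisitorEntered;

        // Called once per polling cycle with the latest sensor readings.
        public void Update(double soundLevel, bool irMotion)
        {
            bool detected = soundLevel > SoundThreshold || irMotion;
            if (detected && !_present)
                VisitorEntered?.Invoke();          // rising edge: someone just entered
            _present = detected;
        }
    }

Combining the two signals with an edge-triggered latch means a single visitor produces a single event rather than one per polling cycle.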
We used vvvv to implement both the video mapping and the sensor signal processing. As visitors walk into the room, their footsteps trigger ripple visual effects, which are mapped onto the physical models through the mapping projector.
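vvvv patches are node-based rather than textual, so the footstep-to-ripple logic cannot be quoted directly; the C# sketch below is a rough analogue that spawns an expanding, fading ripple per detected footstep (the class, expansion speed, and decay constants are assumptions, not values from the actual patch).

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Illustrative analogue of the footstep-to-ripple mapping; all constants assumed.
    public sealed class RippleField
    {
        private sealed class Ripple
        {
            public Vector2 Center;   // footstep position on the floor plane
            public float Radius;     // grows over time
            public float Amplitude;  // fades over time
        }

        private readonly List<Ripple> _ripples = new List<Ripple>();
        private const float Speed = 0.8f;   // assumed expansion speed (m/s)
        private const float Decay = 0.5f;   // assumed amplitude loss per second

        // Called by the sensor layer whenever a footstep is detected.
        public void OnFootstep(Vector2 floorPosition) =>
            _ripples.Add(new Ripple { Center = floorPosition, Radius = 0f, Amplitude = 1f });

        // Advance the simulation once per frame; dt is the frame time in seconds.
        public void Update(float dt)
        {
            foreach (var r in _ripples)
            {
                r.Radius += Speed * dt;
                r.Amplitude -= Decay * dt;
            }
            _ripples.RemoveAll(r => r.Amplitude <= 0f); // drop fully faded ripples
        }

        // Sample the ripple height at a point, e.g. per projected pixel.
        public float HeightAt(Vector2 p)
        {
            float h = 0f;
            foreach (var r in _ripples)
            {
                float d = Vector2.Distance(p, r.Center) - r.Radius;
                h += r.Amplitude * MathF.Exp(-d * d * 40f); // narrow ring at the wavefront
            }
            return h;
        }
    }

Each frame, Update advances every ripple and HeightAt can be sampled per pixel to displace or tint the texture projected onto the models.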
Meanwhile, as visitors move along, their body angles are captured by the Kinect's infrared camera; this skeleton data is processed to regenerate a dynamic landscape-art visual, which is shown through the other projector up front.
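Computing a body angle reduces to measuring the angle at a joint between two limb vectors; the sketch below shows that calculation for generic 3D joint positions (the helper name and the example joints are hypothetical; in the installation the positions would come from the Kinect skeleton stream).

    using System;
    using System.Numerics;

    public static class BodyAngles
    {
        // Angle (in degrees) at `joint`, formed by the segments joint->a and joint->b.
        public static float AngleAt(Vector3 joint, Vector3 a, Vector3 b)
        {
            Vector3 u = Vector3.Normalize(a - joint);
            Vector3 v = Vector3.Normalize(b - joint);
            // Clamp guards against floating-point drift outside [-1, 1].
            float cos = Math.Clamp(Vector3.Dot(u, v), -1f, 1f);
            return MathF.Acos(cos) * 180f / MathF.PI;
        }
    }

An elbow angle, for instance, would be AngleAt(elbow, shoulder, wrist); angles like these can then drive parameters of the generated landscape visuals.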