Proj 5: Final Project II: Actual Documentation

Posts: 9
Joined: Fri Apr 01, 2016 2:34 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by zhenyuyang » Tue Jun 07, 2016 1:21 pm

Zhenyu Yang
/ / P A R A L L E L W O R L D / /

/ Concept
The idea of this project is based on two concepts from my previous project (the sculpture project). In that project, I mentioned there are two types of depth: general depth and detailed depth. General depth is the distance between the general objects detected by the Kinect camera; it describes the spatial relationship among objects. For example, based on general depth, we can tell whether an object is near or far from us. Detailed depth describes the detailed geometry of an object. For example, based on detailed depth, we can tell what a car looks like by viewing it from different angles.

In this project, I am creating a space by removing the detailed depth and keeping the general depth. In this space, the audience can actually observe themselves. However, the perception can be unusual, since all detailed depth is removed; all the audience can perceive is the general distance between themselves and the abstract world.

Another feature brought by removing the detailed depth is parallelity: everything is compressed into two dimensions, so objects are always parallel to each other in the space.
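The idea of keeping general depth while discarding detailed depth can be sketched as a depth-map filter: every pixel inside an object's silhouette is replaced by that object's average depth, so each object collapses onto its own flat, parallel plane. This is a minimal illustrative sketch in Python/NumPy, not the project's Unity code; the label map standing in for Kinect object detection is an assumption.

```python
import numpy as np

def flatten_detailed_depth(depth, labels):
    """Collapse each labeled object onto a single depth plane.

    depth  : 2-D float array of per-pixel depth (stand-in for a Kinect frame)
    labels : 2-D int array, 0 = background, k > 0 = object id
    Returns a depth map where every object keeps only its average
    (general) depth and loses its internal (detailed) depth.
    """
    flat = depth.copy()
    for k in np.unique(labels):
        if k == 0:
            continue  # leave the background untouched
        mask = labels == k
        flat[mask] = depth[mask].mean()  # one plane per object
    return flat

# Toy frame: one "object" with internal depth variation 1.0 .. 3.0
depth = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0]])
labels = np.array([[1, 1, 1],
                   [1, 1, 1]])
print(flatten_detailed_depth(depth, labels))  # every pixel becomes 2.0
```

The object still sits at its real distance from the viewer (general depth), but its surface geometry is gone, which is what makes everything in the space mutually parallel.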

Since this project involves creating a 2D world in a 3D space, better 3D display technology can definitely enhance the experience (the ambiguity caused by the fusion of 2D and 3D perception). So I am considering making this project compatible with VR devices like the Oculus Rift or HTC Vive, so that the audience can see themselves walking in the space.

/ Features
- Kinect Camera implementation, depth map filter
- Virtual camera movement controls
- Artificial stereo sound effects in planar space
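The "artificial stereo sound in planar space" feature can be approximated with a constant-power pan driven by an object's horizontal position in the flattened world. This is a hedged sketch: the function name and the scene extents are hypothetical, and the real project runs inside Unity rather than plain Python.

```python
import math

def planar_pan(x, x_min=-5.0, x_max=5.0):
    """Constant-power stereo pan from a horizontal position in the
    planar space. Returns (left_gain, right_gain); the scene extents
    x_min/x_max are hypothetical stand-ins for the world's width."""
    t = min(max((x - x_min) / (x_max - x_min), 0.0), 1.0)  # normalize to 0..1
    angle = t * math.pi / 2          # 0 = hard left, pi/2 = hard right
    return math.cos(angle), math.sin(angle)

print(planar_pan(0.0))  # centered source: roughly (0.707, 0.707)
```

Constant-power panning (cos/sin gains) keeps perceived loudness steady as a source slides across the stereo field, which matters when sources can only move in a plane.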
/ Function Keys

Key F: Switch camera between control mode and free mode

Key C: Switch camera tracking methods

Key T: Switch camera between third person mode and first person mode

Keys 1 to 5: Switch the tracking object; Key 1 corresponds to the user.

/ Inspirational Resources
Some inspirational resources related to the concept of this project (keeping the depth/distance among objects while removing the geometric depth of each object).

Two-dimensional Egyptian art:
Camera movement techniques: ... -and-truck

Stereophonic sound:

Source Code:
(30.51 MiB) Downloaded 414 times
YouTube video demo:

Posts: 2
Joined: Fri Apr 01, 2016 2:31 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by esayyad » Tue Jun 14, 2016 8:31 am



“The lake was silent for some time. Finally, it said:
"I weep for Narcissus, but I never noticed that Narcissus was beautiful. I weep because, each time he knelt beside my banks, I could see, in the depths of his eyes, my own beauty reflected.”
― Paulo Coelho, The Alchemist

This piece tries to depict our desire to love ourselves. You start walking by a frozen lake as you hear a sound calling you to get closer, until you see your face in the water. In the water you find pieces of ice covering the surface, preventing you from seeing through. You start to interact with the environment, pushing ice pieces around, and find yourself under the water.

Technical Aspects:

Microsoft Kinect was used in this project to detect the user and provide interactions with the virtual environment. I used the Microsoft Kinect API in Unity to gather the character's skeletal and point-cloud information. An internal mesh-generation class was used to create a shaded mesh object of the user in front of the Kinect.


We used top-down projection mapping in the UCSB TransLab. The image was provided by a camera virtually placed in the environment. A synchronized, matched position between the virtual character and the user was achieved through a series of calibrations.

Many experiments were done to find a good interaction for the lake. Initially a mesh-manipulation technique was developed, but later I built a wave simulation based on an algorithm described by Hugo Elias.
Rippling was then linked to the user's foot and hand placement, and also to physical colliders like the ice pieces. An interactive sound system was used to play an ambient lake sound and randomly chosen ripple sounds when needed.
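The wave simulation Hugo Elias describes is a classic two-buffer height-field scheme: each cell's new height is the average of its four neighbours times two, minus the cell's height in the previous buffer, then damped, and the buffers are swapped each frame. Below is a minimal NumPy sketch of one step of that scheme, not the project's Unity implementation; the damping constant is an assumption.

```python
import numpy as np

DAMPING = 0.97  # per-step energy loss; a tunable assumption

def ripple_step(current, previous):
    """One step of the two-buffer water simulation (Hugo Elias style).

    new = (sum of 4 neighbours) / 2 - previous, then damped.
    Returns the swapped buffer pair (new_current, new_previous).
    """
    new = np.zeros_like(current)
    new[1:-1, 1:-1] = (
        current[:-2, 1:-1] + current[2:, 1:-1] +   # up + down neighbours
        current[1:-1, :-2] + current[1:-1, 2:]     # left + right neighbours
    ) / 2.0 - previous[1:-1, 1:-1]
    new *= DAMPING
    return new, current

# Poke the surface once (e.g. a footstep) and advance one step:
cur = np.zeros((8, 8)); prev = np.zeros((8, 8))
cur[4, 4] = 1.0
cur, prev = ripple_step(cur, prev)  # the disturbance spreads to neighbours
```

Because each step only reads the two buffers and writes one, the same update maps naturally onto a mesh's vertex heights in Unity, with footsteps and ice colliders injecting disturbances like the poke above.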

Ice Pieces:
Ice pieces were simple colliders with a buoyancy function to make them float on top of the surface. They were also bound to a reset system that was called when the installation had been inactive for 20 seconds. The reset system moves all the ice pieces, in a linearly interpolated way, back to their original positions. Ice pieces could interact with the user through Unity's Rigidbody component.
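The linearly interpolated reset amounts to lerping every piece toward its home position each frame while the inactivity timer has fired. A minimal Python sketch of that idea (the function name is hypothetical; the project does this with Unity transforms):

```python
def lerp_reset(positions, originals, t):
    """Move every ice piece a fraction t (0..1) of the remaining way
    back to its original position; called each frame while resetting."""
    return [
        tuple(p + (o - p) * t for p, o in zip(pos, orig))
        for pos, orig in zip(positions, originals)
    ]

pieces = [(4.0, 0.0), (-2.0, 1.0)]   # current 2-D positions on the lake
homes  = [(0.0, 0.0), (0.0, 0.0)]    # where the pieces started
print(lerp_reset(pieces, homes, 0.5))  # halfway home: [(2.0, 0.0), (-1.0, 0.5)]
```

Calling this with a small t every frame gives the smooth ease-out drift that makes the pieces appear to glide back on their own.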
In order to create the reflection, I used the built-in Kinect face detection in the API, placing the face texture on a plane underneath the water. Face detection runs every 10 seconds, and if there is nothing to detect, a message is sent to the audio system to play a noise; this audio can be perceived as a call to get closer to the lake. If a face is detected, two audio files are played simultaneously: a dark one and a light one. The volume between these two files swings based on the average pixel value of the face texture. If there is no light on your face, a dark noise is played; if you come closer and your face is brighter, it is more visible under water and a light noise is played. The face-reflection plane rotates underneath the water to add some confusion to users' perception of their own faces; this follows the narrative, as Narcissus didn't know the reflection was actually his own face.
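The dark/light volume swing described above is essentially a crossfade driven by the mean brightness of the detected face texture. A minimal sketch, assuming 8-bit grayscale pixel values and a hypothetical function name (the project does this inside its Unity audio system):

```python
def dark_light_mix(face_pixels):
    """Crossfade between the 'dark' and 'light' audio layers based on
    the mean brightness of the face texture (8-bit values, 0..255).
    Returns (dark_volume, light_volume); the two always sum to 1."""
    brightness = sum(face_pixels) / (len(face_pixels) * 255.0)  # 0..1
    return 1.0 - brightness, brightness

print(dark_light_mix([0, 0, 0, 0]))          # unlit face  -> (1.0, 0.0)
print(dark_light_mix([255, 255, 255, 255]))  # bright face -> (0.0, 1.0)
```

Feeding the result to the two looping audio sources each time face detection runs gives the continuous swing between the dark and light layers as the user approaches the lake.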

This piece was exhibited in MAT 2016 EOYS.

References: ... _water.htm
