Wk10 - Final Project presentation Here

aleforae
Posts: 9
Joined: Mon Oct 01, 2012 4:00 pm

Re: Wk10 - Final Project presentation Here

Post by aleforae » Thu Dec 06, 2012 2:48 am

Christin Nolasco - Final Project - The Mindscape

Research:
UCSB's Research Center for Virtual Environments and Behavior (ReCVEB) has an interesting virtual reality technology called the Head Mounted Display (HMD), a visual headset that also tracks every motion of the viewer (eye movement, body movement, etc.) and sends the data back to the virtual environment software so that it can react accordingly. The software used to create these virtual environments is 3DS Max, a modeling, animation, and rendering tool. Using the HMD, ReCVEB's main goal is to study human behavior and interaction through these virtual environments. For example, one environment may involve the user being chased by spiders in order to induce and observe fear. Professor Blascovich, the co-founder and director of ReCVEB, has stated that immersive virtual environments "give us the experimental control we need while allowing us to realistically simulate the environments in which the behaviors we study normally occur." In other words, researchers can conduct far-reaching experiments that induce real reactions, such as fear, while still keeping everyone safe.
[Image: The HMD.]

Oculus VR, a technology company, hopes to release a similar virtual reality HMD device for video games in 2013. They are calling it the Oculus Rift. The project was originally started by Palmer Luckey, Oculus VR's founder, but has since gained the support of significant video game industry figures such as John Carmack. Since its start, the Oculus Rift has raised over $2,400,000 through Kickstarter, a crowdfunding website. There are three integral parts to the Oculus Rift: immersive stereoscopic 3-D rendering, a massive field of view (110 degrees diagonally), and ultra-low-latency head tracking. Currently, it is primarily being developed for use with the PC and takes a Digital Visual Interface (DVI) video input, though it can be connected over HDMI with an adapter. In order to integrate it with their video game, developers must (1) tie the Rift's motion tracking to the character's view camera transformation and (2) implement stereoscopic 3-D rendering with the optical distortion adjustment that creates the correct left/right-eye image on the device. The Software Development Kit (SDK) will assist with these tasks, providing accelerometer and gyro data, screen size information, as well as pre-made shaders to simplify development. However, one of the revolutionary aspects of the Oculus Rift technology is that the image transformation for the 3-D environment is primarily done through software, not optics. Earlier HMDs used a six-lens display with very little distortion, which was costly and produced a narrow field of view. Now that computing power is on a much higher level than it was in the past, the software that feeds a game's video into the Rift is able to distort the image before it goes through the optics.
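To make those two integration steps a little more concrete, here is a minimal sketch in Python/NumPy of the underlying math: turning tracked head orientation into per-eye camera positions, and pre-distorting the image for the lenses. This is not the actual SDK; the function names and distortion coefficients are my own illustrative assumptions.

Code:

import numpy as np

def head_rotation_matrix(yaw, pitch, roll):
    # Build a 3x3 rotation matrix from the head tracker's yaw/pitch/roll (radians).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    r_roll = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return r_yaw @ r_pitch @ r_roll

def eye_positions(head_position, head_rotation, ipd=0.064):
    # Step (1): drive the view camera from head tracking, with one camera per eye
    # offset by half the interpupillary distance for stereoscopic rendering.
    offset = head_rotation @ np.array([ipd / 2.0, 0.0, 0.0])
    return head_position - offset, head_position + offset

def barrel_distort(u, v, k1=0.22, k2=0.24):
    # Step (2): pre-distort a normalized screen coordinate in software so the image
    # looks correct after passing through the headset's wide-angle lenses.
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale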
http://www.youtube.com/watch?v=uzCwczY1jTM
[Image: The Oculus Rift in action.]

The Gallant Lab at UC Berkeley is in the process of decoding dynamic natural visual experiences from the human visual cortex. Using functional magnetic resonance imaging (fMRI), "a procedure that measures brain activity by detecting associated changes in blood flow," the lab measures brain activity in the visual cortex. "The human visual cortex consists of billions of neurons. Each neuron can be viewed as a filter that takes a visual stimulus as input, and produces a spiking response as output. In early visual cortex these neural filters are selective for simple features such as spatial position, motion direction and speed." The data obtained from these neural filters in the visual cortex are then used to develop computational models that allow for the approximate reconstruction of what that person saw.
https://www.youtube.com/watch?feature=p ... sjDnYxJ0bo
fMRI
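The Gallant Lab's actual models are far more sophisticated, but the basic "fit a filter per voxel, then match predictions against measured activity" logic can be sketched roughly like this (Python/NumPy; the variable names and the simple ridge/correlation choices are my own assumptions, not their code):

Code:

import numpy as np

def fit_encoding_model(stimulus_features, voxel_responses, alpha=1.0):
    # Fit a regularized linear "neural filter" per voxel: each voxel's response
    # is modeled as a weighted sum of stimulus features (ridge regression).
    X, Y = stimulus_features, voxel_responses        # (time x features), (time x voxels)
    n_feat = X.shape[1]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)
    return W                                         # (features x voxels)

def decode_by_matching(W, observed_response, candidate_features):
    # Decode: predict the response each candidate clip *would* evoke, then pick
    # the candidate whose predicted pattern best matches what was actually measured.
    predictions = candidate_features @ W             # (candidates x voxels)
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [corr(p, observed_response) for p in predictions]
    return int(np.argmax(scores)), scores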

Translation into Art:
For this project, I am partnering with ReCVEB, Oculus VR, and The Gallant Lab to develop a new HMD called The Mindscape. The Mindscape will build on the psychological research conducted by ReCVEB and the technological resources of both Oculus VR and The Gallant Lab. Using technology similar to the Oculus Rift, users will wear an HMD that allows them to re-create what they currently see as a virtual environment and explore it with their mind alone. This will be made possible by embedding fMRI technology into the HMD to measure brain activity in the visual cortex. The neural filters within the visual cortex will then provide the data necessary to create a computational model, and the 3DS Max software will translate this computational model into a virtual environment for the user of The Mindscape to see and explore.

This virtual environment will simultaneously be projected onto a wall in the room while the user explores it, so that spectators can essentially see how the world looks through that user's eyes. By having various individuals re-create the very same surroundings, we can see how perception differs from individual to individual. For example, if a user who is color-blind and near-sighted is in an art gallery with paintings, the entire room may be rendered in the virtual reality as a different color and the furthest painting may be rendered completely blurry. The Mindscape will be plugged into nearby computers, which will power it as well as run the programming for the device. A projector will also be connected to each computer so that the virtual environment can be seen by spectators, and a game controller will be connected to The Mindscape for navigating through the environment.
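Since The Mindscape is a proposed device, everything below is hypothetical, but a rough sketch of its control loop might look like this (every component and name is an assumption of mine, not an existing API):

Code:

from dataclasses import dataclass

@dataclass
class SceneModel:
    # Computational model decoded from visual-cortex activity: a rough description
    # of what the wearer is seeing (objects, colors, depth/blur).
    objects: list
    color_map: dict
    blur_radius: float          # e.g. larger for a near-sighted user

class MindscapePipeline:
    # Illustrative control loop for the proposed device (all components hypothetical).
    def __init__(self, fmri_sensor, decoder, renderer, hmd_display, projector):
        self.fmri_sensor = fmri_sensor    # embedded fMRI readout
        self.decoder = decoder            # visual-cortex activity -> SceneModel
        self.renderer = renderer          # SceneModel -> 3D environment (e.g. built from 3DS Max assets)
        self.hmd_display = hmd_display    # wearer's headset view
        self.projector = projector        # wall projection for spectators

    def step(self, controller_input):
        activity = self.fmri_sensor.read_volume()               # 1. measure visual-cortex activity
        scene = self.decoder.decode(activity)                    # 2. build the computational model
        frame = self.renderer.render(scene, controller_input)    # 3. render and navigate the environment
        self.hmd_display.show(frame)                             # 4. wearer explores it
        self.projector.show(frame)                               # 5. spectators see the same view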

[Image: A virtual art gallery rendered through the eyes of someone who is color-blind and near-sighted.]

Museum Presentation:
For presentation within an art museum, The Mindscape will have its own large gallery room. There will be multiple copies of The Mindscape (each accompanied by its own computer, projector, and game controller) available for use within the center of the room. All the walls will be painted white and the lights will be very dim so that the projections of the user’s virtual environment will appear crisper. While waiting in line, visitors can watch the projected virtual environments.


Sources:
http://thebottomline.as.ucsb.edu/2011/0 ... ertainment
http://www.kickstarter.com/projects/152 ... o-the-game
http://www.nbcnews.com/technology/ingam ... ity-918527
http://www.nbcnews.com/technology/ingam ... ift-931087
https://sites.google.com/site/gallantla ... et-al-2011
http://en.wikipedia.org/wiki/Functional ... ce_imaging
http://www.recveb.ucsb.edu/
Last edited by aleforae on Thu Dec 06, 2012 9:41 am, edited 5 times in total.

ddavis
Posts: 3
Joined: Mon Oct 01, 2012 4:09 pm

Re: Wk10 - Final Project presentation Here

Post by ddavis » Thu Dec 06, 2012 8:48 am

Technology today is producing cameras powered by computers, allowing them to take larger images, see farther, and see things that are invisible to the naked eye. Currently, a team of engineers from Duke and the University of Arizona, with the aid of the defense research agency DARPA, is working on a camera that can pick out an insect on a leaf in a picture of an entire field. Their project, called AWARE-2, is a multiscale gigapixel camera designed around 98 independent microcameras that together capture a 120-degree field of view. These 98 photos are then stitched together to create a single image of the large field.
Beyond still photography, the development team has also incorporated a live-stream feature, allowing events to be captured and explored in real time. What really makes this research groundbreaking is the AWARE's ability to take the collected images and zoom in on an object that may initially appear as only a dot in the large original picture. While the current size of the device limits its use and functionality, the military has been funding this project and is already using some of the larger, early models of the AWARE because of its ability to analyze large images.
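As a toy illustration of the two ideas at work (stitching many small frames into one mosaic, then cropping a region at full native resolution), here is a rough Python/NumPy sketch. The real AWARE pipeline also corrects overlap and lens distortion between microcameras, so this is not their algorithm, just the general concept.

Code:

import numpy as np

def stitch_microcameras(tiles, grid_shape):
    # Combine the individual microcamera frames into one large mosaic.
    # `tiles` is a list of equally sized (H x W x 3) arrays in row-major grid order.
    rows, cols = grid_shape
    h, w = tiles[0].shape[:2]
    mosaic = np.zeros((rows * h, cols * w, 3), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return mosaic

def zoom_region(mosaic, center_xy, window=512):
    # Crop a small window at full native resolution, i.e. "zoom in" on the dot
    # that turns out to be an insect on a leaf.
    x, y = center_xy
    half = window // 2
    y0, y1 = max(0, y - half), min(mosaic.shape[0], y + half)
    x0, x1 = max(0, x - half), min(mosaic.shape[1], x + half)
    return mosaic[y0:y1, x0:x1]

A viewer would then call zoom_region(mosaic, (x, y)) on whatever point in the big picture catches their eye.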


Translation to art:
Translating this into an art form is very simple. While the device's initial applications were for military surveillance, the ability to dive into any photo and discover almost hidden objects should be available to the public. Allowing individuals to look at large photos and videos of different locations, and then giving them the ability to look more closely at whatever they choose, would give them the sense that one scene can contain thousands of different photographs. Letting individuals take photographs with the device themselves would further expand the interactive possibilities of this developing technology.


Museum Exhibition:
For my exhibition I would like to use either projections or screens that cover large areas of the walls in order to maximize the viewing space within the exhibit. I would also include a hands-free control device that allows museum guests to interact with the photos, such as zooming in or sliding across an image. Once these two pieces are set up, they would be linked to the AWARE. Most of the exhibit would begin near the entrance with a large photograph of a scene (a city, a skyline, nature); next to each image would be a small interactive station showing the same scene, where guests could explore what the photo was hiding in the distance, and by using gesture control they would begin to feel like their own "detectives." Near the end of the exhibit, I would place a large screen connected to AWARE devices located around campus that could be controlled by guests inside the museum, allowing them to take an almost virtual tour of the campus. The aim of this project is to reveal all the little details behind the "big picture."
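A rough sketch of how such a gesture-driven viewer could work (all names and numbers here are my own assumptions): the full gigapixel mosaic stays loaded, and gestures only move and scale a small visible window that gets sent to the projector or screen.

Code:

class GigapixelViewport:
    # Toy pan/zoom viewport a gesture controller could drive.
    def __init__(self, image_width, image_height, screen_w=1920, screen_h=1080):
        self.iw, self.ih = image_width, image_height
        self.sw, self.sh = screen_w, screen_h
        self.cx, self.cy = image_width // 2, image_height // 2   # current view center
        self.zoom = 1.0                                           # 1.0 = whole image fits on screen

    def on_gesture(self, dx=0, dy=0, zoom_factor=1.0):
        # Map a hand gesture to panning (dx, dy in screen pixels) and zooming.
        self.zoom = max(1.0, self.zoom * zoom_factor)
        scale = self.iw / (self.sw * self.zoom)   # image pixels per screen pixel
        self.cx = min(max(self.cx + dx * scale, 0), self.iw)
        self.cy = min(max(self.cy + dy * scale, 0), self.ih)

    def visible_box(self):
        # Return the (x0, y0, x1, y1) region of the gigapixel image to display.
        w = self.iw / self.zoom
        h = w * self.sh / self.sw
        x0, y0 = int(self.cx - w / 2), int(self.cy - h / 2)
        return x0, y0, int(x0 + w), int(y0 + h)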

References:
http://www.redorbit.com/news/technology ... otography/
http://www.nbcnews.com/technology/futur ... ion-840003
http://disp.duke.edu/projects/AWARE/
http://youtu.be/ejB1W_SFYF0
