Wk8 - Vision Lab on Campus

kateedwards
Posts: 17
Joined: Mon Oct 01, 2012 3:15 pm

Re: Wk8 - Vision Lab on Campus

Post by kateedwards » Mon Nov 19, 2012 12:23 pm

While browsing the various research labs on campus at UCSB I came across the Center for Evolutionary Psychology. The research conducted by this department centers on the idea that the human brain evolved specialized solutions to recurring adaptive problems, essentially acting as a collection of "specialized computational devices" used for successful human interaction and survival. CEP alum Brad Duchaine's work involves cognitive neuroscience and social perception. In a "60 Minutes" special, Duchaine explains a condition called "face blindness," or, more technically, "prosopagnosia". Prosopagnosia is a disorder in which an individual's ability to recognize faces is impaired, making it difficult to identify others based solely on their facial features. Duchaine compares the struggle of those with the condition to trying to identify upside-down faces, a task that proved nearly impossible even for people without any brain abnormalities. Scientists in the video explain that because faces are so similar to one another, the brain relies on a highly specialized computational process to distinguish the subtleties and identify specific individuals. People affected by prosopagnosia must rely on attributes other than facial markers, such as clothing, gait, and other mannerisms.

Image
The difficulty people with prosopagnosia have in identifying ordinary faces is comparable to that of trying to recognize upside-down faces

The Four Eyes Lab conducts similar studies of facial recognition and of tracking individuals' motion and mannerisms. Matthew Turk, a professor in the Computer Science Department, is involved in a myriad of Four Eyes projects on human-computer interaction and gesture recognition. The first project, titled "4EyesFace – Detecting, Tracking and Aligning Faces in Real-Time," aims to track faces in video and align the facial expressions to models. The method uses Active Wavelet Networks (AWN), an improvement on Active Appearance Models (AAM), which combine shape and texture models. AWN uses wavelets to account for occlusions and variation in illumination, both problems for the earlier AAM approach.
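To make the detect-and-track step concrete, here is a minimal sketch of a real-time face detection loop in Python with OpenCV. This is not the lab's Active Wavelet Network code; it uses OpenCV's stock Haar cascade purely to illustrate the per-frame detect-and-draw cycle that a system like 4EyesFace builds on.

[code]
# Minimal sketch of a real-time face detection loop, assuming OpenCV is
# installed. Uses a stock Haar cascade, NOT the lab's Active Wavelet
# Networks -- it only illustrates the detect-per-frame idea.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) box around a candidate face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
[/code]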

Image

Since this project has already been discussed by another student during our Computational Photography assignment it will not be my primary focus, but I include it here to round out the discussion of facial recognition.

Another project by Turk and Ya Chang, titled "Probabilistic Expression Recognition on Manifolds," builds a probabilistic video-to-video facial expression recognition system based on manifolds of facial expression. The system learns to group images of different facial expressions based on a collection of varied examples. The manifold approach relies on the idea that images of an individual's face transitioning from one expression to the next change smoothly in a high-dimensional image space: from frame to frame, the subject's face keeps its primary features while making subtle adjustments that can be tracked. Active Wavelet Networks are again applied to account for variation in scale, illumination, and head pose.

Image

Locally linear embedding, a method of visualizing the structure of facial manifolds, is used to lay the images out as a continuous spectrum on charts (called manifold visualizations) and to represent image sequences as "paths" across them. This ultimately allows the researchers to build a probabilistic model with which to estimate the likelihood of one expression following another, as demonstrated by the six basic paths. For example, the system can estimate the likelihood of an expression of anger transitioning either to disgust (high likelihood) or happiness (low likelihood).
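To illustrate the kind of model this produces, here is a toy transition table in Python. The numbers are invented for illustration; the actual system learns its probabilities from training video rather than from a hand-written table.

[code]
# Toy illustration of an expression-transition probability model.
# The probabilities below are invented for illustration -- the real
# system learns them from training video, not a hand-written table.
import numpy as np

expressions = ["neutral", "happy", "sad", "anger", "disgust", "fear"]

# P[i][j] = probability that expression i is followed by expression j.
P = np.array([
    [0.60, 0.10, 0.10, 0.08, 0.06, 0.06],   # from neutral
    [0.30, 0.60, 0.02, 0.02, 0.02, 0.04],   # from happy
    [0.30, 0.05, 0.55, 0.04, 0.03, 0.03],   # from sad
    [0.25, 0.02, 0.05, 0.50, 0.15, 0.03],   # from anger: disgust likely
    [0.25, 0.02, 0.05, 0.15, 0.50, 0.03],   # from disgust
    [0.30, 0.02, 0.08, 0.05, 0.05, 0.50],   # from fear
])
assert np.allclose(P.sum(axis=1), 1.0)       # each row is a distribution

def transition_prob(src, dst):
    """Likelihood that expression `src` transitions to `dst`."""
    return P[expressions.index(src), expressions.index(dst)]

print(transition_prob("anger", "disgust"))   # relatively high
print(transition_prob("anger", "happy"))     # relatively low
[/code]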

Image

By combining the work done by the psychology lab with the technological advances of Four Eyes, I hope to propose a more comprehensive and relatable art installation that addresses both the cognitive and visual aspects of how humans interact, possibly delving further into the problem of face blindness and suggesting a "solution" through my final visual representation.

Sources:
Face Blindness by UCSB Alum Brad Duchaine:
http://www.cbsnews.com/8301-18560_162-5 ... -stranger/

Four Eyes Lab:
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/34
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/45

sydneyvg
Posts: 8
Joined: Mon Oct 01, 2012 3:16 pm

Re: Wk8 - Vision Lab on Campus

Post by sydneyvg » Mon Nov 19, 2012 7:55 pm

Sydney VandeGuchte (Partner: Kevin Alcantar)

For our final project on a vision lab, Kevin and I are focusing on research being conducted at UC Berkeley on reconstructing visual experiences from brain activity, which can be seen at the following link: https://sites.google.com/site/gallantla ... et-al-2011

The Gallant Lab at UC Berkeley is doing research that aims to simulate natural vision. Earlier fMRI studies successfully modeled brain activity in response to static visual stimuli and were consequently able to reconstruct those kinds of visuals from the brain activity. The Gallant Lab's particular "motion-energy encoding model" is a two-stage process that encompasses fast visual information, as generated by neurons in the brain, and the blood-related responses to this neural activity. Modeling both the neural and hemodynamic responses allows visualizations of natural movies to be reconstructed, rather than only visualizations of static stimuli.
Image
The neurons of the visual cortex work as filters that systematize spatial position, motion direction, and speed. At present the best way to measure brain activity is through fMRI; however, fMRI does not measure neural activity directly. Rather, it measures the result of neural activity, i.e., changes in blood flow, blood volume, and blood oxygenation. These hemodynamic changes happen quickly, but they are quite slow relative to the speed of the actual neural activity as the neurons respond to natural movies.
Image
Researchers record BOLD signals from the occipital and temporal lobes, which serve the visual cortex, while subjects watch the natural movies. The brain's responses to these movies are modeled voxel by voxel; voxels are the discrete elements that build up a representation of a 3-dimensional volume, analogous to pixels in 2D. The goal of the researchers in the Gallant Lab is to recreate the movie a subject has just observed: the subject perceives a stimulus and experiences brain activity, which is then decoded and used to reconstruct the initial stimulus.
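As a rough illustration of what "modeling brain responses voxel by voxel" means, here is a toy linear encoding model in Python. It is not the Gallant Lab's motion-energy model; random data stands in for both the stimulus features and the BOLD measurements, purely to show the fitting step.

[code]
# Toy voxel "encoding model": predict each voxel's BOLD response as a
# linear function of stimulus features. The Gallant Lab's real model uses
# thousands of motion-energy features and careful hemodynamic handling;
# here random data stands in for both, purely to show the fitting step.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 200, 50, 10

X = rng.normal(size=(n_time, n_feat))        # stimulus features over time
W_true = rng.normal(size=(n_feat, n_vox))    # unknown voxel weights
Y = X @ W_true + 0.5 * rng.normal(size=(n_time, n_vox))  # noisy BOLD data

# Ridge regression: W = (X'X + lam*I)^-1 X'Y, one solve for all voxels.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Sanity check: the fitted model should correlate with measured responses.
corr = [np.corrcoef(Y[:, v], (X @ W_hat)[:, v])[0, 1] for v in range(n_vox)]
print("mean prediction correlation:", np.mean(corr))
[/code]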

http://youtu.be/nsjDnYxJ0bo
Image
There are various ways Kevin and I may be able to relate this research to art. We have not yet finalized the idea for our project, but I have envisioned some kind of interactive art piece in which different groups of guests to the exhibition or gallery each experience various visual stimuli, and their resulting brain responses form personal reconstructions that are put on display together. Each guest could then see both their own group's reconstructions and those of other groups.

https://sites.google.com/site/gallantla ... et-al-2011
http://www.biomedresearches.com/root/im ... ematic.jpg
http://www.centremedicalomar.es/centrem ... Signal.jpg

andysantoyo
Posts: 6
Joined: Mon Oct 01, 2012 4:06 pm

Re: Wk8 - Vision Lab on Campus

Post by andysantoyo » Mon Nov 19, 2012 10:47 pm

This week I have also decided to use another research project, this one from the Sato Laboratory in Japan. The researchers Hideki Koike and Masataka Toyoura, from the Graduate School of Information Systems, University of Electro-Communications, along with Kenji Oka and Yoichi Sato, from the Institute of Industrial Science, University of Tokyo, are making 3D walls that interact with anyone who has a mobile phone, through which the user can gather information about a public space. They originally planned for it to be a touch panel, but that proved too expensive at the current stage of the research, however useful such a panel would be. By scanning a 2D barcode with their phone, the user can access information about the public space they are in at the time, much like a digital bulletin board. The user does not only get information; they can also post information onto the projected screen. This is beneficial not only for gathering information about the area, but also for interacting with that data so that it becomes more visual.

Image

TV screens and information boards are useful, but they have the disadvantage of not being interactive. People who are not familiar with an area often need answers or help, and an interactive display could provide that further assistance. Of course, people who don't own mobile phones that can tap into the system are at a loss, so there should be a built-in mode that lets even people without the required technology access the information. After one is 'logged in' to the information wall, the system depends on hand/finger recognition for its use: it recognizes the movement of the hand as the user points within the display, so one needs a certain amount of space to make the gestures. Pointing at an object on the display brings up the information it contains.
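The paper describes a more robust tracker than I can reproduce here, but a crude Python/OpenCV sketch of the basic pointing idea might segment the hand by skin color and treat the top of the largest blob as the fingertip. All thresholds below are guesses and would need tuning.

[code]
# Crude sketch of camera-based pointing: segment the hand by skin color,
# take the largest contour, and treat its topmost point as the fingertip.
# This is a stand-in for the researchers' actual (more robust) tracker.
import cv2
import numpy as np

def find_fingertip(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-color range in HSV; lighting-dependent, tune as needed.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 2000:         # ignore small blobs
        return None
    # Topmost contour point ~ extended fingertip when pointing upward.
    x, y = hand[hand[:, :, 1].argmin()][0]
    return int(x), int(y)
[/code]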

Image

This is very similar to technology available to people today, such as Apple products like the iPad, iPhone, and iPod touch, and other smartphones, which let people get on the internet and acquire different types of information. Perhaps another useful feature the researchers could add is letting the displayed information be downloaded onto the personal device, to look over at any time if it is useful to the user. As mentioned before, though, the purpose of the 3D wall is to provide information about the area it inhabits, much like a bulletin board. It would be great to see this project advance to the point where even people who don't use cellphones can use it. It could also become a space where people add their own postings to the bulletin board; this of course requires supervision and updating of the board, since postings can become useless information. The space should be a moderately sized room or area in which one has room to move and the camera can recognize the user's gestures successfully.

Image

My proposal, if I were to translate this piece into an art installation, is to use the gesture-recognizing camera to make an art piece on the screen. There could be different "tools" in the program to create different styles of artwork (painting, drawing, sculpture), in either 3D or 2D. The 'artwork' could then be on display for a certain amount of time before the next user inhabits the space and creates their own art piece.
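Reusing the find_fingertip() sketch from above, a minimal version of this drawing idea might look like the following; the drawing "tool" here is just a white dot trail, standing in for the richer painting and sculpting tools proposed.

[code]
# Minimal drawing loop for the proposed installation, reusing the
# find_fingertip() sketch above: wherever the fingertip goes, paint.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
canvas = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros_like(frame)
    tip = find_fingertip(frame)
    if tip is not None:
        cv2.circle(canvas, tip, 4, (255, 255, 255), -1)   # leave a mark
    cv2.imshow("artwork", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
[/code]
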

http://www.hci.iis.u-tokyo.ac.jp/en/res ... splay.html
http://www.hci.iis.u-tokyo.ac.jp/assets ... -PPD08.pdf

aleforae
Posts: 9
Joined: Mon Oct 01, 2012 4:00 pm

Re: Wk8 - Vision Lab on Campus

Post by aleforae » Tue Nov 20, 2012 8:59 am

Christin Nolasco - Report on Vision Lab on Campus

A lab that caught my interest was the Research Center for Virtual Environments and Behavior (ReCVEB), a multi-disciplinary research organization located in the Psychology East building. This particular lab focuses on using immersive virtual environment technologies to purposefully elicit certain reactions from people in order to understand why people behave the way they do. In short, the lab's main purpose is to study social interaction and influence in these virtual environments. One of the lab's main technologies for such studies is a Head Mounted Display (HMD). The HMD is powered by a backpack that the user wears and contains a tracking device that provides position and orientation information. In one particular study, the HMD user is confined to an empty research room and immersed in a virtual environment that contains nothing but a hole and a plank on the edge of that hole. The researchers put an actual plank in the room so that the effect of stepping off the plank into the virtual hole is enhanced and seems more realistic. This environment is designed to scare users and test their limits. Professor Blascovich, the Co-Founder and Director of ReCVEB, has stated that immersive virtual environments "give us the experimental control we need while allowing us to realistically simulate the environments in which the behaviors we study normally occur". In other words, researchers can conduct far-reaching experiments, which induce real reactions such as fear, while still keeping everyone safe.

Image
The Head Mounted Display (HMD)
Image
The backpack powering the HMD.

One of the technologies used to create these modeled environments is 3DS Max, a modeling, animation, and rendering tool. This, combined with tracking and sensing technologies, comes together to create the HMD experience. There are three vital parts to this immersive virtual environment: the Head Mounted Display (HMD), the tracking devices, and the software. The HMD experience itself is governed by field of view, stereo vision (depth perception), and resolution, all of which come together to create a compelling virtual environment. The tracking device is designed to track user motion, such as eye movement or body gestures, and constantly relay this information back to the graphics computer so that the environment is properly updated according to the user's actions. Range and possible loss of signal are two of the most important factors to consider when utilizing a tracking device. The software is used to design, render, and run the virtual environment; tools such as 3DS Max, as I mentioned previously, are among the most used by ReCVEB in particular.
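To make the tracking-to-rendering link concrete, here is a small Python sketch of one iteration of the HMD update loop. The tracker readings and the renderer are placeholders, not ReCVEB's actual software stack.

[code]
# Sketch of the core HMD update loop: read head orientation from the
# tracker, aim the virtual camera the same way, and redraw. The tracker
# and renderer here are placeholders, not ReCVEB's actual pipeline.
import numpy as np

def yaw_pitch_to_view_vector(yaw, pitch):
    """Convert head yaw/pitch (radians) to a unit gaze direction."""
    return np.array([
        np.cos(pitch) * np.sin(yaw),    # x: left/right
        np.sin(pitch),                  # y: up/down
        np.cos(pitch) * np.cos(yaw),    # z: forward
    ])

def render_frame(position, gaze):
    print(f"camera at {position}, looking along {np.round(gaze, 2)}")

# One iteration of the loop, with made-up tracker readings:
head_pos = np.array([0.0, 1.7, 0.0])            # meters, eye height
gaze = yaw_pitch_to_view_vector(np.radians(30), np.radians(-10))
render_frame(head_pos, gaze)
[/code]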

Image

I am not exactly sure what I want to do for my final project yet, but I find the idea of working with virtual environments and virtual reality technology highly appealing to me on a personal level. After graduating from UCSB, I was thinking about pursuing a career in the video game design industry. Even if I do not pursue this particular career, video games are still one of my favorite hobbies and now that the virtual reality world and video game world are beginning to merge, I think that doing my final project on virtual environments and virtual reality technologies could yield some interesting possibilities.

Sources:
http://thebottomline.as.ucsb.edu/2011/0 ... ertainment
http://www.psych.ucsb.edu/research/recveb/research.html
http://www.psych.ucsb.edu/~beall/vrtechcomp.htm
http://www.recveb.ucsb.edu/pdfs/BailensonEtAl01.pdf

sidrockafello
Posts: 17
Joined: Wed Oct 03, 2012 11:14 am

Re: Wk8 - Vision Lab on Campus

Post by sidrockafello » Tue Nov 20, 2012 9:51 am

Jae Hak and Sid: Eye Tracking

In life there are no guarantees; it is important to do what you love and enjoy the time you have. People who celebrate their creativity and like to construct artworks need their bodies to function in order to do so. What happens if that were all taken away? Inspired by a paralyzed graffiti artist suffering from ALS, Mick Ebeling introduced a new technology, the EyeWriter, a low-cost eye-tracking system that lets the artist draw again using only his eyes, and that would benefit others in similar situations. The remaining problem is getting the materials to a price affordable for the public.
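The EyeWriter itself is an open-source openFrameworks (C++) project; as a rough Python/OpenCV analogue of its core step, the pupil can be found as the darkest blob in a close-up eye image and its centroid used as a pointer. The threshold below is a guess, and the real system also relies on infrared lighting and calibration.

[code]
# Rough analogue of the EyeWriter's core step in Python/OpenCV: the pupil
# is the darkest blob in a close-up eye image, so threshold dark pixels
# and take the centroid of the largest blob as the gaze pointer.
# (The real EyeWriter is an openFrameworks/C++ project with IR lighting.)
import cv2

def find_pupil(eye_gray):
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, dark = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid
[/code]
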
EYeWriterV1Cartoon.jpg
http://thesystemis.com/projects/eyewriter/
http://www.ted.com/talks/mick_ebeling_t ... rtist.html

orourkeamber
Posts: 8
Joined: Mon Oct 01, 2012 3:58 pm

Re: Wk8 - Vision Lab on Campus

Post by orourkeamber » Fri Nov 23, 2012 4:10 pm

Samantha Bohn and Amber O'Rourke
Image
Image
Aerial 3D, created by Burton Inc., was first announced in 2006. This system creates screen-less 3D images in air or underwater. A complex system of laser beams projected from below excites oxygen and nitrogen molecules in the air into small glowing plasma points, and the result is a suspended three-dimensional image. When one views the video of Aerial 3D in action, the eerie green hologram produces nostalgic feelings of watching films such as Star Wars or Star Trek. The viewer is reminded of the past's ideas of what the future's technology might look like. Aerial 3D seems to remind us that what seemed impossible in the past is now possible. The intangible ideas of what sort of technology will be available to us twenty years from now feel more accessible when we see these dreams from the past become the reality of today. Burton Inc. states, "In the future we will try to make our 3D product smaller and also increase power to make a larger image size. The example possible applications of our machine are the following: For Advertising, For Entertainment, For Tsunami Signals."
Image
Image
Image
Sources: http://www.popsci.com/technology/articl ... -necessary

pumhiran
Posts: 9
Joined: Mon Oct 01, 2012 4:07 pm

Re: Wk8 - Vision Lab on Campus

Post by pumhiran » Sun Nov 25, 2012 2:44 pm

Pat Pumhiran

Idea: a 360° camera that corresponds in real time with its user

Concept:
When using a webcam on a laptop, the viewer can see at most about a 180° angle. What if there were a web camera that allowed the viewer to see 360° instead? Imagine being able to fully experience the surrounding environment all around you. Of course, the idea of a 360° camera already exists. However, what would make this camera unique is its ability to turn through 360° the same moment the user turns their head.

How would it work?
A normal webcam has only one lens, and the same goes for a normal point-and-shoot camera. The 360° camera would have multiple lenses whose overlapping views combine to create the full field of vision. To make the image clear, we would use SLR camera lenses to produce sharp, clear pictures. The user would wear vision glasses that respond as the head turns (left/right/up/down/diagonally). Unlike a security camera, which uses a joystick or arrow keys to turn the view, the sensor on the vision glasses would give a real-time view of the surrounding environment.
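Setting the stitching itself aside, the real-time part of this idea reduces to mapping head orientation onto a window of the 360° image. Here is a small Python sketch assuming the stitched panorama is stored in equirectangular form; the glasses' sensor would simply feed yaw angles into this function.

[code]
# Sketch of the head-tracking part: given a stitched 360-degree panorama
# in equirectangular form (x spans the full 360 degrees of yaw), crop the
# window the user's head is pointing at. Stitching itself is omitted.
import numpy as np

def viewport(pano, yaw_deg, fov_deg=90):
    """Return the horizontal slice of the panorama centered on yaw_deg."""
    h, w = pano.shape[:2]
    center = int(((yaw_deg % 360) / 360.0) * w)
    half = int((fov_deg / 360.0) * w) // 2
    cols = (np.arange(center - half, center + half) % w)   # wrap around
    return pano[:, cols]

pano = np.zeros((1000, 4000, 3), dtype=np.uint8)   # placeholder image
view = viewport(pano, yaw_deg=45)                   # head turned 45 deg
print(view.shape)                                   # (1000, 1000, 3)
[/code]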

Research:
There are companies that make 360° cameras for capturing images, and others that make 360° cameras for capturing video. Once I do deeper research, I can better understand the process of making these cameras. I would also need to research the vision goggles.

This is an image of the Immersive Media Dodeca® 2360 camera system, which has 11 lenses on it.
Image


http://www.schubincafe.com/tag/immersive-media/

kendall_stewart
Posts: 6
Joined: Mon Oct 01, 2012 3:57 pm

Re: Wk8 - Vision Lab on Campus

Post by kendall_stewart » Tue Nov 27, 2012 2:33 am

Kendall Stewart
11-20-12

Vision Lab on Campus

Having had no luck finding a vision-related research topic at UCSB, I decided to take the 'invention' approach for this week's assignment. I discovered the revolutionary invention of the PixelOptics company, which has developed the first and only electronically focusing eyewear. Their glasses, called "emPower!," contain an undetectable liquid-crystal layer that can be electrically charged to change the focusing strength of the lenses.

Image

This incredible new technology provides much-needed relief for those suffering from presbyopia, the age-related condition in which the eyes can no longer change focus easily from long distance to short distance. It is common among people over forty and is most often corrected with bifocals or variable-focus lenses. However, these types of glasses are fixed, meaning one must look through a certain part of the lens for distance and another part for reading. PixelOptics' emPower! glasses can change instantly from distance focus to reading focus with just the touch of a sensor in the frame.

Image


Users can even set the change to happen automatically by activating the in-frame accelerometer, which PixelOptics' reps say is the world's smallest such sensor. Much as an iPhone rotates its screen orientation, this sensor switches the focus to reading with just a forward tilt of the head. The glasses need to be charged overnight, but they look and feel no different than a standard pair.
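PixelOptics has not published how its firmware decides when to switch, but the logic plausibly looks something like the following Python sketch: estimate head pitch from the accelerometer's gravity vector and toggle reading focus past a tilt threshold, with hysteresis so the lenses don't flicker near the boundary. The thresholds are assumptions.

[code]
# Guess at the tilt-detection logic (PixelOptics' actual firmware is not
# public): estimate head pitch from the accelerometer's gravity vector,
# then switch to reading focus past a tilt threshold, with hysteresis so
# the lens doesn't flicker near the boundary.
import math

READ_ON_DEG, READ_OFF_DEG = 25.0, 15.0   # assumed thresholds
reading_mode = False

def update(ax, ay, az):
    """ax, ay, az: accelerometer reading in g's; returns reading_mode."""
    global reading_mode
    # Pitch = angle between the forward axis and the horizontal plane.
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    if not reading_mode and pitch > READ_ON_DEG:
        reading_mode = True      # head tilted down: activate near focus
    elif reading_mode and pitch < READ_OFF_DEG:
        reading_mode = False     # head back up: return to distance focus
    return reading_mode

print(update(0.0, 0.0, 1.0))   # level head -> False
print(update(0.5, 0.0, 0.85))  # ~30 degree tilt -> True
[/code]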

As a long-time glasses wearer, and someone who belongs to a family of people who all suffer from presbyopia, this technology fascinates me. This is a product that I will most likely need in the future and I think it’s wonderful that it’s already available. I also enjoy that all aspects of technology are moving towards the futuristic touch-screen concept, including this medical field.


Sources:
http://www.pcmag.com/article2/0,2817,2375261,00.asp
http://www.pixeloptics.com/

aleung
Posts: 9
Joined: Mon Oct 01, 2012 3:12 pm

Re: Wk8 - Vision Lab on Campus

Post by aleung » Tue Nov 27, 2012 7:19 am

For my final project, I will be building a project around the Syrcadian Blue, a light therapy device made to help those who suffer from Seasonal Affective Disorder (SAD).

Image

SAD is the most commonly diagnosed mood disorder, affecting nearly 36 million Americans. It is a depressive disorder caused by a reduction in the amount and type of light hitting the retina, which results in a biochemical imbalance of two key hormones, serotonin and melatonin. The brain does not produce enough serotonin, which produces the symptoms of depression, while at the same time it overproduces melatonin, which tends to make people lethargic. SAD typically occurs during the winter months, when sunlight and blue skies are limited.

Within the past 30 years, scientists discovered that many key "mood" hormones are actually regulated by light. From this knowledge light therapy was developed; today it is prescribed by psychiatrists, psychologists, and family doctors and is considered the most effective non-pharmaceutical treatment for SAD. Light therapy is effective because it treats one of the primary causes of depression: low serotonin levels in the brain. Most antidepressant drugs boost serotonin levels not by creating more serotonin but by blocking its reabsorption; light therapy, by contrast, works by prompting the brain to produce more serotonin.

In 2002, a study in the prestigious journal Science reported a new set of light-sensitive receptors in the eye. These intrinsically photosensitive retinal ganglion cells (ipRGCs) are not used for vision but are linked directly to the production of serotonin and the regulation of melatonin and other hormones. These receptors are located in the lower portion of the eyeball and are primarily sensitive to blue light.

Image

Using this new knowledge, scientists were able to design a device that helps people produce more serotonin in the brain. The Syrcadian Blue light therapy unit has been specifically designed to stimulate the ipRGC receptors to maximize therapeutic activity while minimizing visual discomfort. To do this, the device provides high-intensity light at the precise blue wavelengths required to activate the ipRGC receptors (the cirtopic response) while minimizing the response of the rods and cones used for normal vision (the photopic response). The device succeeds because it is easy to use, excels at promoting serotonin production, and is easy on the eyes.
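The vendor's exact engineering data is not public, but the design goal can be illustrated with a toy calculation: weight a light source's spectrum by an ipRGC-like sensitivity curve (peaking near 480 nm) versus a photopic curve (peaking near 555 nm). The Gaussians below are crude stand-ins for the real sensitivity curves.

[code]
# Toy comparison of ipRGC vs. photopic stimulation. The Gaussians below
# are crude stand-ins for the real sensitivity curves (photopic vision
# peaks near 555 nm, melanopsin/ipRGC response near 480 nm); the vendor's
# exact engineering data is not public.
import numpy as np

wl = np.arange(380, 781)                          # wavelengths, nm

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

photopic = gaussian(555, 45)                      # "normal vision" weight
melanopic = gaussian(480, 40)                     # ipRGC weight

blue_led = gaussian(470, 12)                      # narrowband blue source
white = np.ones_like(wl, dtype=float)             # flat "white" source

for name, spectrum in [("blue LED", blue_led), ("flat white", white)]:
    ip = (spectrum * melanopic).sum()             # ipRGC stimulation
    ph = (spectrum * photopic).sum()              # visual brightness
    print(f"{name}: ipRGC/photopic ratio = {ip / ph:.2f}")
[/code]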

Image


Sources
http://www.syrcadianblue.com/science.html

ddavis
Posts: 3
Joined: Mon Oct 01, 2012 4:09 pm

Re: Wk8 - Vision Lab on Campus

Post by ddavis » Tue Nov 27, 2012 9:32 am

For a little over a month now, I have been looking into a new computer mouse dreamed up by David Holz and Michael Buckwald. They call their idea "The Leap."

It is a device that will be compatible with most computers on the market today, including those that run Mac OS and Windows, the two most popular operating systems. What makes it so intriguing is that the Leap is controlled entirely by gestures, made mostly with your hands. Holz and Buckwald have developed a new algorithm that makes the device accurate to 1 mm, meaning it has already achieved a level of precision that has eluded the motion-capture market and consumer products such as the familiar Xbox Kinect. The Leap works by reading 8 cubic feet of space above it and translating the gestures created within that space into commands on a computer. The accuracy and operating range of the device let users interact on a more personal level with programs and photos, giving them the feeling that they can touch and control what they see on-screen.
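Without tying this to the real Leap SDK's API, the core mapping from a tracked 3D fingertip to an on-screen cursor can be sketched as below; the interaction-box bounds here are placeholders, not the device's actual specifications.

[code]
# Sketch of the core mapping from a tracked 3D fingertip position to a
# 2D screen cursor. The interaction-box bounds and the tracking data are
# placeholders, not the actual Leap SDK API.
import numpy as np

BOX_MIN = np.array([-200.0, 80.0, -150.0])    # mm above the sensor
BOX_MAX = np.array([200.0, 400.0, 150.0])
SCREEN_W, SCREEN_H = 1920, 1080

def to_cursor(tip_mm):
    """Map a fingertip position (x, y, z in mm) to screen pixels."""
    n = (np.asarray(tip_mm) - BOX_MIN) / (BOX_MAX - BOX_MIN)
    n = np.clip(n, 0.0, 1.0)                  # stay inside the box
    px = int(n[0] * (SCREEN_W - 1))           # left/right -> x
    py = int((1.0 - n[1]) * (SCREEN_H - 1))   # higher hand -> higher cursor
    return px, py

print(to_cursor([0.0, 240.0, 0.0]))   # hand centered -> mid-screen
[/code]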

Image

While the idea of gesture-controlled devices is not new, I have noticed a problem with other such products that the Leap is attempting to tackle. Devices such as the Xbox Kinect and PlayStation's Move are read by a single camera, and while they are relatively accurate, people find themselves making awkward movements to achieve the desired command. The Leap can be paired with other Leap devices, creating a large, hyper-accurate field that can detect the slightest motion and build detailed three-dimensional pictures.

http://youtu.be/_d6KuiuteIA

The artistic appeal of this device is that it lets users control a three-dimensional space and draw and sculpt models on a more personal level. While not all artists are at their best painting with their fingers, the Leap can recognize pencils, pens, and other objects that are not the individual's hands, meaning users can create their works by picking up different tools just as one would when actually sculpting or painting. This would eliminate the need to select options from a toolbar within a program, creating a more fluid experience.

https://leapmotion.com/
http://www.legitreviews.com/news/13190/
