Wk9 - Translate from the Science Lab to Art

glegrady
Posts: 160
Joined: Wed Sep 22, 2010 12:26 pm

Wk9 - Translate from the Science Lab to Art

Post by glegrady » Thu Nov 08, 2012 12:34 pm

A big-picture reminder: the goal of our final project is to imagine what the photography of the future will be like by looking into the science labs - seeing what they do, and then imagining how it may migrate into a camera tool or artistic approach. Framing the work as a presentation at the UCSB Art Museum gives us a conceptual situation in which to carry out our research.

Our method is the process of exploring. The process is called "Play of the Imagination".

This past week the task has been to identify some specialized science research on campus. Next week you have to come up with a plan as to how this could function in a UCSB museum exhibition.

Today we discussed methods of translating something into art. The notes from the whiteboard are:

Advanced Vision Research ----> Translation process ------> Artistic Context

Methods for making the Translation process:
. Use a metaphor or story we are familiar with
. Visualize the research (on screens, etc.)
. Recontextualize / repurpose
. Transform - turn inside out, change its function
. Reduce / simplify
. Combine / recombine in unexpected ways
. Demonstrate / explain

Here is an example. There is a Physics lab on campus focused on fluid patterns in convection: http://www.nls.physics.ucsb.edu/
(To see visual animations click on convection Patterns to get to videos and images: http://www.nls.physics.ucsb.edu/image_p ... tures.html)

This scientific research was transformed by an Arts graduate who explored this and proposed an art installation: The Well http://www.mat.ucsb.edu/~g.legrady/acad ... index.html
George Legrady
legrady@mat.ucsb.edu

samibohn
Posts: 8
Joined: Mon Oct 01, 2012 3:54 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by samibohn » Fri Nov 23, 2012 3:21 pm

Amber and I are collaborating on a project using Aerial 3D Technology, which projects 3D holograms using lasers. We are creating a plan for an exhibition space that uses cutting edge technology to simulate a natural environment.

When we think of holograms or 3D imaging, our strongest association is with futuristic sci-fi movies like Star Wars and Star Trek - even in the recent movie Avatar, all the screens were holograms. This technology is becoming a reality. There are different kinds of 3D imaging, using different projection media such as fog or lasers. When our audience enters the exhibition space, we expect them to make this association between holograms and impossible futuristic technology. Seeing something 3D float in midair is immediately impressive and awe-inspiring.

This kind of technology has not yet been perfected. The fog displays still look like fog, wispy and indistinct, and the lasers look pixelated and jarring. Whether or not this technology is perfected by the time of the installation, perfect illusion is not our goal and would detract from the message as well as the experience. The appearance of our holograms will be the jarring, pixelated look that emphasizes the technology itself instead of the illusion.

In the exhibition space, the holograms will be of trees, and the walls will be covered with projections of the forest. However, neither the holograms nor the 2D projections will have a naturalistic realism. This natural environment appears futuristic, pixelated, surreal, creating a strong and pointed contrast between technology and nature. This is where the technology translates into artistic content: by making the audience uncomfortable or aware of this contrast, and making them contemplate it. No direct message is necessary, because the audience will fill in the blanks with their own assumptions and opinions.

crismali
Posts: 8
Joined: Mon Oct 01, 2012 3:22 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by crismali » Fri Nov 23, 2012 5:07 pm

The research I did last week was on reconstructions of internal projections of the mind. There is much research being done on this subject all over the world, and more specifically at the UC research institutes. UC Santa Barbara has a brain imaging research center which conducts all types of research on the brain (the neural basis for memory, differences in brain functions, image analysis for brain imaging, etc.). UC Berkeley's Helen Wills Neuroscience research center has been researching the decoding and reconstruction of people's dynamic visual experiences.

In this case, the research has led so far only to the reconstruction of Hollywood movie trailers that the patient has seen. However, this type of groundbreaking research could lead to the possible reading of minds, watching of dreams, viewing memories, and more. The implications could mean new types of therapies, better analysis of the brain/memories/thought paths, etc.

The reconstruction of what one sees in one's mind turns out to be very visually interesting. In the YouTube video linked below, shots of the reconstructed brain images are shown. These short videos are colorful yet blurred; they are indistinct yet intriguing, just the way memories are inside your own brain, or the way you might remember your dreams. In this way the reconstructions, while lacking detail, are accurate in their depiction of what is happening inside the brain.

Memories and dreams are important aspects of everyone's life, and this research opens the door to making these intangible parts of our lives suddenly more real. Using this research, a compelling art project could be done for the UCSB Art Museum.

This project would be quite simple, because the reconstructions themselves are already very artistic and visually interesting on their own. Using a normal projection system, these reconstructions could be shown in the museum projected onto one of the blank walls. The viewer would question what these images were, and what was happening in them, while understanding at the same time that there was a figure moving, or whatever gist they could gather from the videos. By projecting the videos onto the wall, the videos would remind the audience of watching their own home videos, maybe a slide show of vacation photos. By keeping the project simple, the audience could relate to the project maybe in ways they wouldn’t even be able to put their finger on. In the future, this art project could include actual memories that have been reconstructed by these researchers.

This would be a simple twist on the accepted act of watching home movies, but instead of video-recorded images of real life, they would be recorded images of what the brain sees. More reconstruction videos would have to be collected, as well as permission from UC Berkeley’s research team to use their videos. The title of this work would be “What were you thinking?”.

http://www.youtube.com/watch?v=6FsH7RK1 ... e=youtu.be

http://newscenter.berkeley.edu/2011/09/22/brain-movies/

http://www.chorusandecho.com/uploads/ar ... 93822a.png

giovanni
Posts: 24
Joined: Fri Oct 12, 2012 9:27 am

Re: Wk9 - WHERE IS REAL?

Post by giovanni » Mon Nov 26, 2012 11:44 am

My idea is based on mixed and augmented reality research conducted by the Four Eyes Lab at UCSB.

The main concept is that new technologies are changing the human perception of space and the ways in which people interact with the real world.

Real and virtual/augmented environments are moving closer together, mixing more and more, colliding, and in the not-too-distant future it will be hard to understand what is real and what is not - or better, WHERE is real and where it is not...

So, the performance/exhibition I have in mind is named WHERE IS REAL? and is based on this concept.
It would use a panoramic camera placed in the main square of different cities around the world, and the tracked environment would be translated into a model projected in a room inside the museum.

Image

The user will have the sensation of being in a square somewhere in the world, and by using augmented reality objects and a device like a smartphone it would be possible to interact with users who are actually in that place.

http://ilab.cs.ucsb.edu/index.php/compo ... icle/10/99
Last edited by giovanni on Tue Nov 27, 2012 9:15 am, edited 2 times in total.

amandajackson
Posts: 9
Joined: Mon Oct 01, 2012 3:17 pm

The Research Center for Virtual Environments and Behavior

Post by amandajackson » Mon Nov 26, 2012 1:18 pm

I have been looking into a research project on virtual environments and behavior in the psychology department at UCSB. The ReCVEB is dedicated to understanding the complex interplay of computer-generated virtual environments and human behavior. The virtual environment allows one to place a mobile individual within an illusionary context simulating physical and social environments completely controlled by the researcher. The research has three areas of emphasis: social interaction and social influence, spatial cognition, and vision. While a number of virtual environments involve a decrease in realism, this project increases the realism of situations, which allows for more generalizability of the research. Virtual environments also allow for otherwise impossible manipulations, such as changing research participants' physical attributes (skin and hair color) and even social attributes (gender). Spatially, virtual environments allow us to understand how one aggregates and integrates mental maps, examine alignment effects, measure navigation performance, etc. The visual emphasis of the research works to measure participants' perceived distance, examine how motor skills are learned, and study the perception of lightness and shape.

We have been using virtual reality in video games for many years. We navigate through virtual maps that are at first unfamiliar in games such as Halo and C.O.D., but as we continue to play we learn the map, distances, shapes, etc. of the game. Another virtual reality that we have all been exposed to is The Sims. By creating our own environments and social interactions, we are in control of the virtual environment.

What if the environment controlled us?
In the exhibition space, there would be a number of virtual reality stations set up.
Image
(I don't envision anyone wearing televisions on their heads, but I liked the photo)

Each station would function in the same virtual map, allowing all the participants to interact with one another. They would control their own mobility with a joystick/keyboard, but their facial expressions would be interfaced with their simulated selves so that the subjects will see and react to each other. Above each virtual station will be a screen so that people may observe the faces each participant is making in the simulation.

With three phases of reality, each participant will adjust and react to their changing surroundings and self, ultimately to be faced with the image of how everyone else has perceived their simulation to appear.
sims_3___when_your_mirror_acts_weird_____by_nadiou_13-d4p84ga.jpg
Some may react differently than others.
u-906.jpg
The Research Center for Virtual Environments and Behavior
http://www.psych.ucsb.edu/research/recveb/research.html

slpark
Posts: 10
Joined: Mon Oct 01, 2012 3:18 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by slpark » Mon Nov 26, 2012 5:54 pm

My final project will focus on 3D LiDAR vision technology. LiDAR (Light Detection and Ranging) is a remote sensing technology that illuminates a desired subject with pulsed laser light. The data collected from this remote sensing is then used to create a composite image. Though LiDAR technology is not currently being used at UCSB, it is being used at Santa Barbara's very own Raytheon Corporation. Raytheon has produced a range of 3D Flash LiDAR cameras, the most recent being TigerEye. 3D LiDAR technology captures X, Y, and Z coordinates to create accurate three-dimensional visualizations. This unique process allows people to manipulate an image in ways that 2D images cannot be manipulated (i.e., stripping away layers of the image; see the second YouTube video). LiDAR combines GPS tracking, remote sensing, laser imaging, and Inertial Measurement Units (IMUs) to create accurate geospatial visualizations of information. 3D LiDAR technology is used across different fields such as archaeology, architecture, and physics; however, for the purposes of this project I will use LiDAR for artistic purposes.
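As a rough illustration of how a 3D point cloud lends itself to this kind of layer-stripping, here is a minimal Python sketch; the points and depth thresholds are invented for the example (a real LiDAR frame would come from the camera itself):

```python
# Toy illustration: "stripping layers" from a 3D point cloud by depth.
# Each point is an (x, y, z) tuple, where z is the range the sensor measured.

def strip_layer(points, z_min, z_max):
    """Keep only the points whose depth falls inside [z_min, z_max]."""
    return [(x, y, z) for (x, y, z) in points if z_min <= z <= z_max]

# Synthetic points: a "front" surface near z=1 (the subject) and a "back"
# surface near z=3 (the wall behind them).
cloud = [(0.0, 0.0, 1.0), (0.1, 0.2, 1.1), (0.5, 0.5, 3.0), (0.6, 0.4, 3.1)]

front = strip_layer(cloud, 0.0, 2.0)  # isolates the subject
back = strip_layer(cloud, 2.0, 4.0)   # isolates the background layer
print(front)  # [(0.0, 0.0, 1.0), (0.1, 0.2, 1.1)]
```

Because every point carries its own depth, removing or recoloring a layer is just a filter over z - exactly the kind of manipulation a flat 2D photograph cannot support.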

For my art exhibition, I am proposing an interactive exhibit where viewers become participants who can manipulate the technology and an image of their choosing for themselves. Participants will have the option of having their photo taken by a range of 3D LiDAR cameras, upon which the 3D visualization will be projected onto a photo of a realistic landscape of their choosing (via a computer that will be available for use). If a participant does not wish to have their photo taken, a model will be on standby to keep the presentation continuous. Then the viewer will be allowed to manipulate variables of their portrait such as color, size, and layers, and once they are finished, the image will be stored as part of a series. Art exhibits are normally about the viewer and the artwork(s) being viewed. There is an apparent separation between the viewer and the viewed, as well as between what is real (the viewer) and what is not (the artwork). By projecting a 3D LiDAR image of the viewers onto realistic backgrounds (i.e., a photo of the Eiffel Tower or a well-known painting), this separation will be reversed. Instead, the viewer will become a part of the art installation, there will be a reversal of what is considered real and what is not, and the viewer will be forced to become an active participant.

Image

Example of what people look like when their photos are taken by a 3D LiDAR camera.

Imagine this juxtaposed over a well-known, relatively flat image (below) and projected for the rest of the viewers to see, creating one composite image.

Image

http://www.advancedscientificconcepts.c ... -FINAL.pdf
http://www.youtube.com/watch?v=WJoaksSKaOo
http://www.youtube.com/watch?v=wRpjIxQg ... re=related
http://lidarservices.com/lsi_technology/

orourkeamber
Posts: 8
Joined: Mon Oct 01, 2012 3:58 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by orourkeamber » Mon Nov 26, 2012 6:24 pm

see Sami Bohn's post.

kateedwards
Posts: 17
Joined: Mon Oct 01, 2012 3:15 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by kateedwards » Mon Nov 26, 2012 6:36 pm

Last week I combined research done by UCSB's Center for Evolutionary Psychology and face tracking studies by Four Eyes Lab in preparation for my proposed art installation involving the concept of face blindness and our ability to recognize ourselves and others in a social environment. Through application of the psychological components behind how human beings interact and register individual faces, along with the technological advancements in mapping expression and pinpointing specific facial movements/behaviors, the installation will be both an interactive experiment as well as an algorithmic study of the museum visitors.

This project will rely on the computer based technology done by Four Eyes Lab to track the facial movements of the visitors themselves and then project the collected images in somewhat of a collage, therefore making the artwork a constantly changing and personal experience. Imagine the sensation of looking into a mirror and not recognizing yourself; imagine one day not being able to distinguish your own relative from a stranger on the street. I hope to generate such a sensation in viewers by altering their perception of themselves and their peers.

To do so, I propose a surveillance-style camera installed at the entry to the museum. This camera will film individuals as they enter and store the images of their faces to be altered and later projected. As more and more people enter the museum, a computer will accumulate a database of faces to draw from. The system will rely on the tracking mechanisms used in the project "4EyesFace" and the continual pathways generated by expression manifolds (projects previously explained here: http://www.mat.ucsb.edu/forum/viewtopic ... t=10#p1252) in order to store the faces in a manner that can be further dissected into various components, i.e. eyes, nose, mouth, etc. Once a sufficient number of faces has been detected, the system will combine the features of one individual with another, creating slight variations in one's appearance. The changes will be discernible, but not so drastic that individuals could not identify themselves if they truly studied the image.
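The feature-combining step might be sketched, under heavy simplification, as a weighted blend of aligned face images. Everything here is hypothetical (the tiny 2x2 pixel grids stand in for real cropped, aligned output from the tracking system):

```python
def blend_faces(face_a, face_b, weight=0.75):
    """Blend two aligned grayscale faces pixel by pixel; a weight near 1.0
    keeps the result recognizable as face_a, with subtle traces of face_b."""
    return [
        [weight * a + (1 - weight) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(face_a, face_b)
    ]

# Tiny stand-in "faces" (pixel intensities 0-255) on the same 2x2 grid.
visitor = [[100, 120], [130, 110]]
stranger = [[200, 60], [90, 150]]

subtle = blend_faces(visitor, stranger, weight=0.75)
print(subtle)  # [[125.0, 105.0], [120.0, 120.0]]
```

Tuning the weight is what keeps the alteration "discernible, but not so drastic" that viewers cannot find themselves in the projection.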

Within the gallery space on the right side of the museum, a hidden projector will display the faces onto the wall immediately visible upon entering the room. This will give individuals a few moments between when their photograph/video is taken and when they actually see the image. The altered faces will be displayed in a grid-like fashion, first appearing to be a collection of mere strangers to the viewer. Some faces will be repeated multiple times so as to increase the likelihood of individuals quickly recognizing some aspect of themselves. Once the viewers realize it is their own image being projected in combination with the physical attributes of those surrounding them, I presume they will have an increased fascination with analyzing the artwork and attempting to find bits and pieces of themselves throughout the piece. Every few minutes, as different people enter, the wall projection will transition to a new set of faces, progressively combining the old and new faces to create practically new people before the viewer's eyes.

By using the viewer as a part of the artwork (and based on the human tendency to be self-absorbed/fascinated by humankind in general), I hope to encourage people to think about the cognitive processes behind our own identity and the strange phenomenon of physically melding with others in society. Do the small components of our physical beings really determine who we are, or are they merely interchangeable attributes which can be manipulated without altering our internal sense of being? Is there a universal collection of expressions and emotions which can be easily combined to generate a more attractive or "average" individual? These are questions I wish to explore through the fleeting nature of the display and the shock value of being part of a piece without being aware of such participation.

rosadiaz
Posts: 9
Joined: Mon Oct 01, 2012 3:52 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by rosadiaz » Mon Nov 26, 2012 9:05 pm

Working with Ashley Fong
Remapping for handheld 3D video communications will elevate current 3D usage into an everyday medium. Currently, viewing 3D content strains the eye because of the limits of our depth perception: discomfort occurs when the eyes are pushed outside their comfort zone. Video quality is also dependent upon the user's camera/webcam and the screen resolution. But luckily the mind is capable of perceiving lines that are not there, giving the sense of 3D the natural curvature and facial structure it deserves. The future of 3D vision will expand to other technologies, giving the viewer a chance to see in a 3D plane rather than in 2D.
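The "comfort zone" idea can be illustrated as a disparity remap: screen disparities from the stereo pair are compressed into a range the eyes can fuse comfortably. A toy sketch, assuming a simple linear remap (the comfort limits and input range are invented numbers, not values from the UCSB research):

```python
def remap_disparity(d, comfort_min=-10.0, comfort_max=30.0,
                    input_min=-40.0, input_max=120.0):
    """Linearly squeeze the full input disparity range into the comfort
    zone, preserving depth order while removing eye-straining extremes."""
    scale = (comfort_max - comfort_min) / (input_max - input_min)
    return comfort_min + (d - input_min) * scale

# Pixel disparities from a stereo pair (toy values).
disparities = [-40.0, 0.0, 120.0]
print([remap_disparity(d) for d in disparities])  # [-10.0, 0.0, 30.0]
```

Objects keep their relative depth ordering after the remap; only the exaggerated disparities that cause viewer discomfort are pulled back in.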

For our project we want to maintain viewer comfort while upholding the 3D vision. Viewers will have the chance to be seen and to see others stereoscopically. The key is to communicate with others from across the country as long as the time zones overlap. The immersion of art and the spectator is important because people's interaction with gallery art is limited, as they feel apathetic toward it. This 3D video communication will allow everyone who walks into the gallery to communicate with someone elsewhere. One user can be in the LA gallery talking to a user in a NY space. People will then be exposed to different cultures.

The exhibit space will consist of tablets like iPads, adjustably mounted on the wall to match each viewer's eye level. The space will be open to all ages because there is no limit to the exploration of new places, people, and cultures with 3D imaging. Sections of the galleries will be placed by windows to accommodate a fully immersed experience: the person will be able to see not only the other user but also their surrounding environment. The user will be able to sign in with their location and see who else is online. They can choose whom they want to talk to and learn about them from their side of the world. The choice of talking or typing will be given to the user for their own personal reasons.

Looking ahead, we want this 3D vision to capture multiple viewers in 3D as opposed to one. Language barriers will be broken by real-time subtitles with more accurate translations.

http://vivonets.ece.ucsb.edu/handheld3d.html
http://vivonets.ece.ucsb.edu/mangiat_espa.pdf

sydneyvg
Posts: 8
Joined: Mon Oct 01, 2012 3:16 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by sydneyvg » Mon Nov 26, 2012 10:04 pm

Sydney VandeGuchte (Partner: Kevin Alcantar)

**UPDATED VERSION OF LAST WEEK's POST**

For our final project on a vision lab, Kevin and I are focusing on research being conducted at UC Berkeley on reconstructing visual experiences from brain activity, which can be seen at the following link: https://sites.google.com/site/gallantla ... et-al-2011

The Gallant Lab at UC Berkeley is doing research that aims to simulate natural vision. Recent fMRI studies have successfully modeled brain activity in reaction to static visual stimuli and consequently have been able to recreate these kinds of visuals from the brain activity. The Gallant Lab's particular "motion-energy encoding model" is a two-part process that encompasses fast visual information, as generated by neurons in the brain, and blood-related reactions in response to this neural activity. The neural and hemoglobin responses allow visualizations of natural movies to be formed, rather than visualizations of static stimuli.
Screen shot 2012-11-19 at 9.33.09 PM.png
The neurons of the visual cortex work as filters that systemize spatial position, motion direction, and speed. Today the best way to measure brain activity is through fMRI; however, fMRI does not measure neural activity directly. Rather, it measures the result of neural activity, i.e., changes in blood flow, blood volume, and blood oxygenation. These hemoglobin changes happen quickly but are quite slow with respect to the speed of the actual neural activity as the neurons respond to natural movies.
Screen shot 2012-11-19 at 9.40.48 PM.png
Researchers record BOLD signals from the occipital and temporal lobes that serve the visual cortex of the brain as people watch the natural movies. The brain's responses to these natural movies are modeled voxel by voxel; voxels are the discrete elements that build up a representation of a 3-dimensional object in graphic simulations. The goal of the researchers in the Gallant Lab is to recreate the movie a subject has just observed. The subject perceives a stimulus and experiences brain activity, which is then decoded and used to reconstruct the initial stimulus.
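In spirit, the decoding stage asks which candidate stimulus best explains the recorded voxel pattern. A heavily simplified Python sketch of that identification step (the clip names and voxel numbers are invented; the real model regresses thousands of voxels against motion-energy features):

```python
def identify_stimulus(observed, model_responses):
    """Return the candidate stimulus whose predicted voxel pattern is
    closest (in squared distance) to the observed BOLD pattern."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model_responses, key=lambda name: dist(observed, model_responses[name]))

# Predicted voxel responses for three candidate movie clips (toy numbers).
model = {
    "clip_ocean": [0.9, 0.1, 0.4],
    "clip_faces": [0.2, 0.8, 0.5],
    "clip_street": [0.4, 0.3, 0.9],
}

# A noisy recording from a subject who watched one of the clips.
recorded = [0.85, 0.15, 0.35]
print(identify_stimulus(recorded, model))  # clip_ocean
```

The published reconstructions go further, averaging the best-matching clips into a blurry composite, which is why the videos look indistinct yet recognizable.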

http://youtu.be/nsjDnYxJ0bo
Screen shot 2012-11-19 at 9.28.42 PM.png
There are various ways Kevin and I may be able to relate this research to art. We have not yet finalized the idea for our project, but I have envisioned some kind of interactive art piece. In it, different groups of guests to the exhibition or gallery would all experience various visual stimuli, and their resulting brain reactions would form personal reconstructions that would be put on display together. This would allow each guest to see the brain reconstructions of the others in their group as well as those of other groups.

Kevin and I plan to propose a camera that directly connects to the mind using the type of technology seen in this research. One would be able to simply view a subject in life and take an image of the mind's eye. In this way, various memory snapshots could be taken and even combined to create new images. In artistic areas such as film, a camera of this sort would be particularly useful because theoretically the budget would not need to be as large and there would be no special-effects issues.

Oftentimes artists comment on the disconnect between the mind and a sheet of paper. I believe one of the best explanations of this idea is in "Understanding Comics," though I have not yet found the exact chapter. The author writes about the disconnect that occurs as an art piece moves from an idea in the mind, through the hand with the pen, and finally to the paper. With the creation of this camera, Kevin and I are aiming to eliminate this disconnect by literally using the mind as a medium through which to create.

I had previously mentioned an interactive artwork; our new idea involves an actual MRI machine in the gallery space. After speaking with George, we envisioned having the visitors to the gallery actually enter the exhibition through the machine. Each individual would have imaging from his or her brain put on display among others' brain imaging. In having the gallery space function in this way, the artists (Kevin and I) are eliminated; the audience becomes the artist. We function more as authors, providing the space and form, while the public provides the context of our work through their brains, MRI scans, and the resulting imaging.



https://sites.google.com/site/gallantla ... et-al-2011
http://www.biomedresearches.com/root/im ... ematic.jpg
http://www.centremedicalomar.es/centrem ... Signal.jpg
