Wk9 - Translate from the Science Lab to Art

Posts: 9
Joined: Mon Oct 01, 2012 3:10 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by erikshalat » Mon Nov 26, 2012 10:18 pm

For my project I am collaborating with the Four Eyes Lab (without their knowledge, mind you) on an artistic endeavor based around Augmented Reality and its unrealized possibilities. Shows like Star Trek and those old X-Men cartoons from the '90s promised us huge holodeck rooms that would map themselves into whatever landscape we could scrape from the deepest recesses of our dreaming brains. Well, there is an alternative to holograms: augmented reality.


There are two different interfaces designed for your luxury and comfort: lightweight AR glasses or, alternatively, AR contact lenses with a microscopic computer embedded in each lens. Wearing these, you will be able to see the first combined digital/physical art gallery (to my knowledge). To the unequipped eye the gallery space will appear empty, but upon donning whichever AR interface you choose, you will bear witness to an amazing display of interactive and immersive 2D and 3D artworks.

This is the beauty of augmented reality: lights aren't being projected into a physical space as with holograms. Instead, images are projected onto your eyes but still "mapped" onto a physical space through an interface. Have you ever been told about the rocks you can find on the beach with little holes in them, the ones that let you peek through and see a magical side of the world? That's basically what we're offering.

Within our AR lenses, microscopic computers receive signals from GPS satellites that let them know where you are and map the landscape accordingly with whatever visual imagery and information we want you to see.
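The positioning step described above could be sketched roughly like this: the lens compares the wearer's GPS fix against each artwork's anchored coordinates and renders only the pieces within range. This is a minimal illustration, not the lab's actual pipeline; the gallery coordinates, artwork names, and 50-meter radius are my own assumptions.

```python
# Sketch: decide which anchored AR artworks to render, given the wearer's GPS fix.
# Uses the haversine great-circle distance between two latitude/longitude points.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def visible_artworks(user_fix, anchors, radius_m=50.0):
    """Names of anchored artworks close enough to the wearer to be rendered."""
    return [name for name, (lat, lon) in anchors.items()
            if haversine_m(user_fix[0], user_fix[1], lat, lon) <= radius_m]

# Hypothetical gallery anchors (coordinates chosen for illustration only).
gallery = {"floating_torus": (34.4140, -119.8489), "far_piece": (34.5000, -119.0000)}
print(visible_artworks((34.4141, -119.8490), gallery))  # ['floating_torus']
```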

http://ilab.cs.ucsb.edu/index.php/compo ... icle/10/28
Last edited by erikshalat on Tue Nov 27, 2012 9:41 am, edited 1 time in total.

Posts: 6
Joined: Mon Oct 01, 2012 4:06 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by andysantoyo » Mon Nov 26, 2012 11:46 pm

Looking into the research from last week's report, I chose to build on the 3D Interactive Walls that Sato Laboratories have been working on and develop a new way to use them. Rather than serving up information, the wall can be a tool for creating new artworks. Recent movies with great new technology, such as Tony Stark's gadgets in Iron Man and The Avengers, leave much to the imagination about where one could take it. Tony Stark has not only touch-screen tech but voice and hand-movement recognition embedded in his technology. It realistically seems that his technology almost blends into the real world; he takes it out of its usual spectrum as if he could literally hold it, and it also responds to his voice commands. Much of this touch and voice interaction has been accomplished by Apple; however, it has yet to be perfected to the levels portrayed in films. Having that technology at hand in the future would not only be very resourceful but would offer a new way to blend art and science together.


The way the 3D Interactive Wall can be altered is that instead of being a tool for information, it becomes a tool for creation. People can already interact through the camera, which recognizes their hand movements; but rather than simple clicks and drags to log into a database, one can use one's hands to grab and hold onto "tools" in the system. These artificial tools can be paintbrushes, pencils, markers, even chisels. The space represented on the screen becomes the medium: the canvas on which to paint, the sketch pad on which to draw, the plaster with which to sculpt. Similar to the Xbox Kinect, which recognizes body movements, the camera relies only on hand movements to grab onto the objects represented on the screen. The screen can be a canvas, and one's hand could hold onto the "brush" and begin to paint, or onto a pencil and begin to sketch, and so on.
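The grab-and-draw interaction described above could be sketched like this. The hand-tracking input itself is assumed to come from a Kinect-style camera; here it is faked with plain tuples, and the tool names and stroke widths are my own illustrative choices.

```python
# Sketch: a closed (grabbing) hand picks up a virtual tool and paints with it;
# an open hand releases the tool. Each tracking frame supplies a hand position
# and whether the hand is closed.

TOOLS = {"brush": 8, "pencil": 2, "marker": 5}  # stroke widths in pixels (assumed)

class VirtualCanvas:
    def __init__(self):
        self.strokes = []     # list of (x, y, width) paint marks
        self.held_tool = None

    def update(self, hand_x, hand_y, hand_closed):
        """Process one tracking frame: closed hand grabs/paints, open hand releases."""
        if not hand_closed:
            self.held_tool = None          # open hand drops the tool
            return
        if self.held_tool is None:
            self.held_tool = "brush"       # closing the hand grabs the default tool
        self.strokes.append((hand_x, hand_y, TOOLS[self.held_tool]))

canvas = VirtualCanvas()
for frame in [(10, 10, True), (12, 11, True), (14, 12, False)]:
    canvas.update(*frame)
print(len(canvas.strokes))  # 2 strokes painted while the hand was closed
```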


Like any other touch device, it may seem like an old idea, yet being able to hold onto imaginary objects to create art pieces is a pretty wild trajectory for the technology. Even if the feel of the objects differs from real materials, this could simply be an alternate, more technical way to create art without the actual presence of materials. A person's hand is the only thing the device relies on to work. The screen that displays the movements need not even be that big; it could be the size of any large plasma TV. The only thing that would really take time, as with any computer program, is mastering the functions of every click and drag of the hand; however, it could be made as simple to use as any other touch-based technology.


The display in an exhibition is a bit of a puzzle for me to place. I have in mind putting the screen, along with the camera motion detector, in the middle of the room; guests can test out the different programs within the device and commence making their own art pieces. After one person is done, the art piece is saved into a database. It seems simple and straightforward, but time could be a factor, so to give everyone a chance to use it there would also be a time limit on each creation. If I had more than one, I would like to display at least five Interactive Art Piece Devices scattered around the room; the biggest screen would be on the north wall so that when it's time to display everyone's work, it is easy to see. By the end of the show the device, which was hopefully used by many people, can display all the different forms of art people contributed.

http://www.hci.iis.u-tokyo.ac.jp/en/res ... splay.html
http://www.redmondpie.com/when-will-we- ... eal-world/

Posts: 9
Joined: Mon Oct 01, 2012 3:14 pm


Post by hcboydstun » Tue Nov 27, 2012 12:13 am

Over the past week, I have been further investigating the art possibilities behind light-harvesting materials and quantum computers. While I did most (if not all) of my research last week (http://www.mat.ucsb.edu/forum/viewtopic ... =203#p1235), this week was more about creating the final layout for my project.

As stated last week, the quantum computer (in the simplest of terms) seeks to pose the question: how do we get smaller processors to have bigger storage units and produce more energy than the machine is run on? In other words, how do we create a more efficient, energy-conserving machine that fits in the palm of your hand?

Taking this simplistic idea, combined with what I have discovered about future advances in solar paneling, I generated the following idea for a gallery presentation:
Using a solar panel, I will collect energy from a natural light source. Connected to the solar panel will be a motherboard and transmitter that harvest this energy and send it to a projector. From my research, I discovered that thinner solar cells retain absorption comparable to thicker devices while having a higher voltage. Thus, the thin solar panel will be able to harvest high-voltage light, i.e. a dim sort of laser. From the projector, this laser will point directly at one of several silicon-based canvases.

From my previous research, I concluded that quantum computers work by being activated by a laser. Thus, I am using silicon for my canvases because silicon is reactive to the laser. Specifically, the laser will serve to excite the silicon's photons and produce energy. A photon is the quantum of light. Although technically the laser's effects are only easily observable at the microscopic level, I will be assuming that these minute movements of photons can be visible to the naked eye.
Interestingly, because the photon has no rest mass, it allows for interactions at long distances. Like all elementary particles, photons exhibit wave–particle duality, meaning they show properties of both waves and particles.

This brings me to the core artistic concept. By channeling energy at high velocities across the first canvas, that canvas will be the core reactor and "push" the laser off onto the next canvas. Yet this is not as simple as it seems. The idea behind this photon movement is called quantum teleportation. When photons become entangled (entanglement means that the properties of the photons, such as their polarization, are much more strongly correlated than is possible in classical physics), and a laser is then used to fire one of the photons, it is possible to transmit the laser back and forth from canvas to canvas. This is because when the quantum state of one photon is altered, the quantum state of the second photon is immediately altered, no matter the distance between them. In essence, each canvas would read the mind of the last, creating a dazzling display of photonic activity as the laser hits each canvas.

http://www.pveducation.org/pvcdrom/desi ... t-trapping
http://news.discovery.com/tech/solar-po ... 10919.html
http://www.zamandayolculuk.com/cetinbal ... EPORTb.htm

Posts: 9
Joined: Mon Oct 01, 2012 4:00 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by aleforae » Tue Nov 27, 2012 5:55 am

Christin Nolasco - Week 9 Report

For last week's report, I discussed my interest in UCSB's ReCVEB (The Research Center for Virtual Environments and Behavior). To summarize, ReCVEB's researchers use virtual environments in order to study human behavior in ways they may not ordinarily be able to. One of the main technologies they use to create this virtual environment is a Head Mounted Display (HMD), a visual headset that tracks every motion of the viewer (eye movement, body movement, etc.) and sends the data back to the virtual-environment software so that it can react accordingly.

The HMD in use.

In order to translate this from something scientific into something artistic, I would like to re-purpose the Head Mounted Display (HMD) to an extreme. My idea is to give users of my HMD the ability to create their own simple environments (founded upon pre-created environments, with varying scenarios depending on the user's mindset) based on their thoughts, and then explore that creation (which will probably require a non-invasive brainwave sensor of some sort). However, randomized pre-programmed data would somehow affect or entirely change the virtual environment regardless of the user's state of mind. I would then project the experience through the user's eyes onto a large screen or wall so that others may watch this "showcase". The goal of this art project is to get users to question whether certain parts of their virtual experience were really created by their own thoughts or not (on a conscious or subconscious level). In a sense, this project is heavily inspired by the movie Inception, which revolves around a group of people who are able to construct and control their dreams. Sometimes, though, the dreamer's subconscious influences the dream, and the characters, as well as the viewers, are left wondering whether this is reality or whether they are still stuck in a dream. Therefore, both the virtual experience and the witnessing of this virtual experience can be considered the art pieces. My concept is very similar to the movie in that I'm trying to get people (both the user and the viewers) to question whether what is not physically real to begin with, but is now made visually real, is actually truthful to the HMD user's thoughts. The only difference is that the user will be awake and not dreaming. Unless they think they're dreaming.

An Inception dreamscape.

Link to a similar invention that inspired me: http://www.dailymail.co.uk/sciencetech/ ... -real.html

http://thebottomline.as.ucsb.edu/2011/0 ... ertainment
Last edited by aleforae on Tue Nov 27, 2012 9:38 am, edited 5 times in total.

Posts: 9
Joined: Mon Oct 01, 2012 3:12 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by aleung » Tue Nov 27, 2012 7:47 am

For my final, I have chosen to build a project on the Syrcadian Blue, a light-therapy device made to help those who suffer from Seasonal Affective Disorder. Since this is a commercial product, I will take a step back to the science behind the device, which is simply light. But if you think about it, the production of light is a lot more complicated than an on/off switch. Visible light is electromagnetic radiation that is visible to the human eye and is responsible for the sense of sight. Cones and rods inside our eyes receive these waves, and their stimulation sends messages to the brain; that is what you see. Light originates as single photons: particle/waves with zero mass, no electric charge, and an indefinite lifetime. Photons are emitted by electrons as they lose orbital energy around their atom. The expendable orbital energy is transferred to the electron either when it is bumped by an atom or when it is struck by another photon. Before being excited to a higher electron shell, the electron sits in the lowest possible shell, or ground state. The ground state is enforced by the attraction between the electron and the proton(s) in the nucleus. When an electron is excited to a higher shell, it almost immediately seeks the ground state again, shedding the excess orbital energy as a photon with energy equal to the difference between the orbital energies an electron would have in the two shells.
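The shell-to-shell emission described above can be put in numbers: the emitted photon carries the energy difference between the two shells, and its wavelength follows from E = hc/λ. A minimal sketch, with hypothetical example energy levels that are not from the Syrcadian Blue device:

```python
# Sketch: photon energy and wavelength from an electron dropping between shells.
# E_photon = E_upper - E_lower; wavelength = h*c / E_photon.

H_C_EV_NM = 1239.84  # Planck's constant times speed of light, in eV*nm

def photon_wavelength_nm(e_upper_ev: float, e_lower_ev: float) -> float:
    """Wavelength of the photon emitted as the electron relaxes to the lower shell."""
    e_photon = e_upper_ev - e_lower_ev  # energy shed by the electron
    return H_C_EV_NM / e_photon

# Hypothetical levels: an electron dropping across a 2 eV gap emits a ~620 nm
# photon, which our cones perceive as red light.
print(photon_wavelength_nm(-1.0, -3.0))
```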

My piece will be an interactive piece on light. It will be in a separate room, enclosed from the rest of the gallery. Since light is the subject, the room will be dim to let the lights take the spotlight. My piece will be installed on the wall: a black magnetic board, six feet by six feet, hanging two feet above the ground. Black is chosen as the color of the board so the lights can stand out even more. On the board there will be twenty lights the size of baseballs. Each light has two buttons: an on/off button and a button that changes its color. The available colors are red, orange, yellow, green, blue, purple, pink, black, and white.

Only one person is allowed in the room at a time, and for a maximum of five minutes: five minutes so the wait is not long for other participants, and also so the participant does not overthink and only expresses their feelings and emotions in the moment. The participants will also be grouped in sets of ten, for a reason explained below. The instructions for this piece are to move the lights around to any desired areas, to turn each light on or off, and to pick a color for each light. After the five minutes are up, the participant leaves the room and a photo is taken of the board.

This is only half of the installation. The other half is the combination of all the photos into one single sheet. The combined photo is not shown to those who have not yet participated, so that participants stay original and do not take ideas from others' work. The combining of the photos is why participants are grouped in sets of ten: an artwork can then be produced every hour rather than only at the end of the day. This activity allows participants to be creative and express themselves through light. Different colored lights have different meanings; therefore the personality of the participant can be suggested as well.

Last edited by aleung on Thu Nov 29, 2012 7:22 am, edited 6 times in total.

Posts: 6
Joined: Mon Oct 01, 2012 3:57 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by kendall_stewart » Tue Nov 27, 2012 8:36 am

Kendall Stewart

Translate from the Science Lab to Art

The research I did last week was on the invention of electronic eyewear whose lenses can change focus when the user touches the frame of the glasses. For my imaginary installation I want to take this technology and recreate it on a very large scale.

I would create a giant dome (essentially a giant lens) made of the same LCD crystal-containing glass that the emPower! glasses use. This dome would sit in the center of an art gallery where it could be surrounded by other art pieces. It would have a cutout doorway so that visitors of the museum could enter the dome, and be large enough to hold several people at a time. In the center of the glass dome would be a sensor (like the one on the emPower! glasses) that could be tapped to change the focus of the surrounding glass. The glass would be pre-programmed to shuffle through about ten different focal settings, each one changing the clarity of the surrounding gallery for the museum visitors inside the dome.
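The tap-to-refocus behavior described above amounts to cycling through a preset list. A minimal sketch; the diopter values are hypothetical placeholders, not emPower! specifications:

```python
# Sketch: each tap on the central sensor advances the dome glass to the next
# preset focal setting, wrapping around after the last one.
from itertools import cycle

FOCAL_SETTINGS = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]  # diopters (assumed)

class DomeLens:
    def __init__(self):
        self._settings = cycle(FOCAL_SETTINGS)
        self.current = next(self._settings)  # starts on the first preset

    def tap(self) -> float:
        """Advance to the next focal setting and return it."""
        self.current = next(self._settings)
        return self.current

dome = DomeLens()
print(dome.tap())  # -1.5: the second preset
```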

The most interesting part of this installation is that it will be a unique experience for all the museum visitors. Those who wear glasses will be encouraged to take them off before entering, therefore making their eyes return to their natural focus. As the visitor taps the sensor in the center and changes the focus of the surrounding lens, the outside world will become more or less clear and recognizable. But since everyone’s eyes are different, one setting of the lens will work for some but not necessarily for others. This concept will be most interesting if more than one person enters the dome at the same time. With each tap the viewers can verbalize which one works best for them and spot the differences in their vision capabilities.

This installation could be viewed as a metaphor for how different people view art, museums, or the world in general. It could also function scientifically as a giant eye test for viewers in the gallery. Either way, this imaginary project perfectly combines science and art and presents the research I conducted in a fascinating way.

Posts: 17
Joined: Wed Oct 03, 2012 11:14 am

Re: Wk9 - Translate from the Science Lab to Art

Post by sidrockafello » Tue Nov 27, 2012 8:56 am

Sid & Jae
Last week I looked into the world of eye tracking through the perspective of someone who is disabled and has no use of his body beyond the movement of his eyes. The innovative measure that helped this man regain his ability to draw was breathtaking, as it was configured using a PlayStation camera, glasses, some LEDs, and ready-to-go software.
Eye tracking, as defined by Wikipedia, "is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in cognitive linguistics and in product design. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram." Now, with a simple understanding of how eye tracking is utilized, I venture further into the realm of possibilities this eye-tracking device opens up in regards to art. Much like the members of the Free Art and Technology (FAT), OpenFrameworks, Graffiti Research Lab, and The Ebeling Group communities, who teamed up with a legendary LA graffiti writer to create the affordable EyeWriter, I desire to breach the door and unleash an augmented reality by utilizing this EyeWriter technology to enhance our perception of reality.
Terminator vision.jpg
To begin, it is important to comprehend that eye tracking is a method for measuring reactions; but how can it be brought into the real world? To do this it is important to disconnect oneself. As James Fantus once said, "Don't be confined by the reality of precedent. Think about what could be accomplished if there were no boundaries." So, taking the current research that Free Art and Technology has used, I looked into Acuity Eye Tracking, a company that takes this technology and performs tests to find its limits and measure exactly what we see and focus on in life, which varies from what we observe while we walk, to how we navigate, to how Cristiano Ronaldo fakes out a defender.
The next level in our project is to integrate the capabilities of the eye-tracking system and make it interactive within an augmented reality. If you think about it, we already live in an augmented reality governed by the internet and its vast amount of information and knowledge. For example, those who have an iPhone can search Google Maps and receive an aerial image of their surroundings, which allows them to locate themselves and navigate to where they would like to be. On the other hand, this is not something we can see with our naked eyes; the understanding comes from computer information. We can observe a building, and with current technology we can take a picture, translate the image, and get any information on it that is available on the internet. So what I propose is an eye-tracking system that integrates internet software into the lenses of glasses that use eye-tracking software. This will allow a person to observe the world and interact with it virtually using augmented-reality software. Imagine being able to look at a sign from a mile away and then, using your hands as you would to enlarge an image on an iPhone, do the same thing, except it is device-free: your hands are recognized by the eye-tracking software, connected to computer programming that allows for an enhanced reality.
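The device-free zoom gesture described above could be sketched as a simple mapping from fingertip spread to magnification, like pinch-to-zoom on an iPhone but read by the headset's camera. The mapping and its limits are my own illustrative assumptions, not part of the eye-tracking research:

```python
# Sketch: the spread between two tracked fingertips maps to a magnification
# factor, clamped to a sensible range.

def zoom_factor(start_spread_px: float, current_spread_px: float,
                min_zoom: float = 1.0, max_zoom: float = 10.0) -> float:
    """Magnification proportional to how far apart the fingertips have moved."""
    raw = current_spread_px / start_spread_px
    return max(min_zoom, min(max_zoom, raw))

# Fingertips moved from 40 px apart to 120 px apart: the distant sign is shown at 3x.
print(zoom_factor(40.0, 120.0))
```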

http://www.ted.com/talks/mick_ebeling_t ... rtist.html

Posts: 9
Joined: Mon Oct 01, 2012 3:09 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by rdouglas » Wed Nov 28, 2012 2:48 pm

Using the research I have outlined here:

http://www.mat.ucsb.edu/forum/viewtopic ... =203#p1239

I will be creating a site-specific installation in UCSB's AD&A Museum that, very much like Marie Sester's seemingly invasive X-ray and LiDAR scans, will pry into a potentially personal and private element of people in the museum, their mass.

In the museum, in the largest room, there exists a wall in the center of this room. Museum visitors can freely walk around this wall as it does not touch any of the walls around it. To reach the other side of the wall, a person can either walk around the left side or around the right side, but regardless of their path of choice, they will reach the same destination. On the far wall (on the other side of the room dividing wall) there will be a large, high-definition data projection of little circles or "blips" that represent the individuals walking past the dividing wall.

These "blips" will represent, by their respective sizes, the mass of the people who have walked past the dividing wall. Using two LiDAR-based devices (one on each side of the far wall), each person who walks past the left or right side of the wall will be 3D scanned with lasers; then, based on the average density of a human being, their mass will be calculated from the volume of the resulting 3D model.
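The mass estimate described above is a one-line calculation once the scan volume is known. A minimal sketch, assuming an average human body density of roughly 985 kg/m³ (a commonly cited approximate figure); the example volume is invented for illustration:

```python
# Sketch: estimate a visitor's mass from the volume of their LiDAR-derived
# 3D model, using an assumed average human body density.

AVG_HUMAN_DENSITY = 985.0  # kg per cubic meter, approximate

def estimated_mass_kg(scan_volume_m3: float) -> float:
    """Mass of a scanned visitor: 3D-model volume times average density."""
    return AVG_HUMAN_DENSITY * scan_volume_m3

# A visitor whose scanned mesh encloses about 0.07 m^3 comes out near 69 kg.
print(estimated_mass_kg(0.07))
```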

Each person will then be represented as a slightly or obviously different-sized circle on the projection. However, the circles will be positioned on the right or left side of the projection depending on the side of the wall the participant walked around. For example, if more people choose the right side of the wall to circumnavigate, the projection will show more circles on its right side. However, if more people are using the right side but they are very small or young, and fewer people are using the left side but they are older and larger, the circles from the side with less mass may be attracted to the side with more mass. It will not be a fast-paced visual performance, but the effect that larger mass has on smaller mass will be readily apparent. Along with the purely visual elements, there will be live statistics on the edges of the projection area describing in further detail the preferred side, the number of people who walked past, and each side's respective mass.
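The attraction behavior described above could be animated with a gravity-like rule: each blip drifts toward whichever side holds more total mass, with a pull proportional to the imbalance. The exact force law and constants here are my own assumptions, not part of the original proposal:

```python
# Sketch: one update step for a blip's horizontal position. x runs from -1
# (left wall side) to +1 (right wall side); a positive mass imbalance pulls
# the blip rightward, a negative one pulls it left.

def drift_step(x: float, left_mass: float, right_mass: float, k: float = 0.0005) -> float:
    """Move a blip's x-position toward the heavier side of the wall."""
    imbalance = right_mass - left_mass  # kg; sign picks the direction of pull
    return x + k * imbalance

# With 300 kg of visitors on the right and 150 kg on the left, a centered
# blip slowly drifts right over ten animation steps.
x = 0.0
for _ in range(10):
    x = drift_step(x, left_mass=150.0, right_mass=300.0)
print(round(x, 3))  # 0.75
```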

The possible invasive nature of this exhibition is meant to be rather transparent, very much like the way Marie Sester wished to explore the idea of surveillance in private spaces. I hope for it to be transparent in the sense that the projection on the museum wall is intriguing enough to ignore the fact that you are being scanned and catalogued as a piece of mass and nothing else. The mass of the museum visitors may be transformed from something that they are self-conscious of to something with very positive and satisfying visual results.

Posts: 9
Joined: Mon Oct 01, 2012 4:07 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by pumhiran » Wed Nov 28, 2012 5:23 pm

Pat Pumhiran

After my discussion with professor Legrady, I have decided to focus my final project based on the original concept of “The World in an Eye” by Shree K. Nayar.

Briefly according to the Columbia Science research website, “Shree Nayar is the chair of the Computer Science Department at Columbia University. He co-directs the Columbia Vision and Graphics Center and heads the Computer Vision Laboratory (CAVE), which is dedicated to the development of advanced computer vision systems.”

The concept of “The World in an Eye” is to use the reflection in a human eye as a mirror of the surrounding environment. With the right geometric calculation, a camera can capture that reflected image and display it on a screen, giving the viewer on the opposite side another visual experience. The most interesting fact is that person A can see what person B sees without having to wear any vision goggles.

With a super-high-tech camera plus a huge installation, we could project this new kind of experience to the audience. In fact, why stop at one camera when we could use four? This would allow the viewer to see 360 degrees of the surrounding environment.


Posts: 10
Joined: Mon Oct 01, 2012 4:02 pm

Re: Wk9 - Translate from the Science Lab to Art

Post by aimeejenkins » Thu Nov 29, 2012 8:29 am


Darcey Lachtmen and I have been working on a project that deals with the reconstruction of objects that have deteriorated throughout their existence. As previously mentioned in our research, for all physical structures it is inevitable that they change over time: "Every material substance in the world is [consistently] affected by such natural elements as gravity, temperature, aging, and pressure."

Here we've stumbled across great research. A few Santa Barbara professors have discovered ways to specify particular damage done to objects meant to be perfectly aligned. For the future, Darcey and I predict that a remote-control camera will be patented. The main function of this camera will be to actively catch flaws in structures that have either been destroyed or withered away.

UCSB Professor Philip Lubin has recently discovered "a method that actively corrects imperfections in structures that are meant to be precisely aligned." Scientists like Lubin have been using AMPS (Adaptive Morphable Precision) technology to point out imperfections in structural exteriors. We want our device to utilize the same AMPS while the viewer in the museum space navigates the miniature camera.

We imagine our museum space to be fully immersive. When one walks into the space, one finds oneself in a cylinder-shaped projection room. Upon entering, one receives instructions on how to use the remote and how it controls our little hover detector. While in this cylinder, viewers will be able to control the overall movement of the device. So far we have only discovered that the device can detect imperfect objects up to 100 meters away. As the viewer controls the movement of the device, they are submerged in a projection-like room: what the audience sees around them becomes the perception of the ball, or what the camera sees. Submerged in the ball's atmosphere, we get notifications of objects that have been destroyed over time and need fixing. This can help us when flaws are unseen by the naked eye but seen by the camera. You are the ball; and from the ball's perspective, it can see you up to 100 meters away.

This live hover video camera freely observes the space it occupies. It will benefit both scientists and artists in that it optimizes structures such as buildings, telescopes, and satellites. By using AMPS, engineers can recognize flaws within a structure and fix them before they worsen. The system at the moment only works within a 100-meter radius of the viewer; efforts in this experiment will extend that distance to kilometers or possibly further.

We are relating this technology to art through the idea of symmetry in science, art, and technology. The concepts of symmetry and asymmetry guide our understanding of science as well as photography. We hope to combine symmetry recognition, AMPS technology, and photography into one cohesive project: photographing and recording asymmetry in machines. We envision photography used in the future to capture errors that exist among everyday objects and to deter error in movable structures (such as planes, which are meant to be perfectly symmetrical) before irreparable damage is done. In this way, we combine art, science, technology, and city planning into one effort for structural symmetry.

The device will look similar to a small hovercraft. The video ball on the structure will move according to its environment, and a remote control will be used to navigate it. Luckily, the device is small and nearly weightless. The interior of the recording camera will contain a microchip that allows recording and recovery of what's going on in its atmosphere.

Sources: http://www.deepspace.ucsb.edu/
http://web.physics.ucsb.edu/~jatila/tal ... fO_WIP.pdf
http://web.physics.ucsb.edu/~jatila/pap ... ressed.pdf
Last edited by aimeejenkins on Mon Dec 03, 2012 11:31 pm, edited 1 time in total.
