Wk10 - Final Project presentation Here

samibohn
Posts: 8
Joined: Mon Oct 01, 2012 3:54 pm
Contact:

Re: Wk10 - Final Project presentation Here

Post by samibohn » Mon Dec 03, 2012 9:21 pm

Image
Collaboration by Sami Bohn and Amber O'Rourke.

1) Science/Vision/Technology
Aerial 3D, created by Burton Inc., was first announced in 2006. The system creates screen-less 3D images in air or underwater. A complex system of laser beams projected from below excites atoms of oxygen and nitrogen, and the result is a suspended three-dimensional image that can be observed from all sides. Aerial 3D produces 50,000 points of light per second. Burton Inc. states, "In the future we will try to make our 3D product smaller and also increase power to make a larger image size. The example possible applications of our machine are the following: For Advertising, For Entertainment, For Tsunami Signals."

Image
Image

For the purpose of our exhibition we will be modifying the Aerial 3D to allow it to be projected from above rather than below. This will enable museum visitors to travel through the gallery space without disrupting the hologram monitors. A grid of Aerial 3D laser beams will be hung overhead, covering the full span of the gallery ceiling.

Image

As displayed in the diagram, three projectors will be hung above, angled slightly downward and pointed at three of the four gallery walls. This will allow the wall projections to be shown without the risk of a museum visitor's body blocking the projected image. Audio speakers will be installed between the ceiling and the Aerial 3D grid, where they can be heard but not seen.

2) Translation into Art
Our exhibition uses cutting-edge technology to create the illusion of an interactive forest. However, illusion is not reality. Even though we could create a perfect tree hologram, we are choosing to exploit the "hologram" effect, which will make the trees look an unnatural green, with static. We are using this effect for a specific reason. The hologram effect produces feelings of nostalgia for most people in our generation, reminding us of our favorite sci-fi movies like Star Wars and Star Trek. Aerial 3D reminds us that technology considered science fiction in the past is now possible. Intangible ideas of what sort of technology will be available to us in twenty years feel more accessible when we see these dreams from the past become the reality of today. We want the viewer to feel a sense of wonder.
Technology may be our medium, but the subject of our exhibition is something completely different: nature, particularly the forest. This odd intersection of nature and technology is where the exhibition really translates into art. It prompts viewers to wonder what our objective is, and stirs up their own opinions about technology and nature. Does technological advancement have a positive or negative connotation when contrasted with nature?
As well as being nostalgic, the unrefined hologram effect is inherently creepy. We have enhanced this for artistic purposes. Projections of forest scenes on the walls look like they were shot with a night-vision camera and, with their high level of static, should also create feelings of anxiety. The purpose of this exhibition is to make you think about technology in a new way, but also to affect your emotions. The exhibition space is strange and ominous, and the holograms move in such a way as to make you feel claustrophobic and uncomfortable.


3) Museum presentation:
During our exhibition visitors will be invited to walk through the gallery space and interact with the environment we have created through the use of projections, 3D holograms, and audio recordings. Upon entering the gallery, visitors will be met by projections of "night vision" forest scenes shown on the wall in front of them as well as to their left and right. Surrounding them will be 3D holograms of trees created using Aerial 3D technology.

Image

Visitors will be given a compass to hold during their exploration of the exhibition. Unbeknownst to them, this compass will serve as a hypersensitive GPS tracking device as well as a pulse monitor. The GPS tracker will follow their movements through the gallery and allow the tree holograms to follow them as they explore the space. The pulse monitor will control how far the tree holograms stand from each individual: an increased heart rate will cause the trees to come closer to the visitor, creating a tighter, more claustrophobic space. A slowed heart rate will do the opposite; the trees will move away from the visitor and make the space feel more open. If the visitor walks near a wall, the projected forest images fade to static.
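A minimal sketch of the mapping we have in mind, assuming the compass reports beats per minute and a distance to the nearest wall; the function names, thresholds, and units are our own illustration, not part of Burton's system:

```python
# Sketch of the pulse-to-proximity mapping (our assumption, not part of
# Burton's Aerial 3D API): higher heart rate pulls the holographic trees
# closer, lower heart rate pushes them away.

def tree_radius(bpm, rest_bpm=70.0, min_r=0.5, max_r=4.0):
    """Map a visitor's heart rate to the radius (in meters) at which
    tree holograms are drawn around them."""
    # Normalize: 0 at resting rate, 1 at resting + 50 bpm.
    stress = max(0.0, min(1.0, (bpm - rest_bpm) / 50.0))
    # Anxious visitors get a tighter ring of trees.
    return max_r - stress * (max_r - min_r)

def wall_static_level(dist_to_wall, fade_start=2.0):
    """Projected forest fades to static as the visitor nears a wall."""
    return max(0.0, min(1.0, 1.0 - dist_to_wall / fade_start))

print(tree_radius(70))         # calm visitor: trees stand 4.0 m away
print(tree_radius(120))        # anxious visitor: trees close to 0.5 m
print(wall_static_level(0.5))  # mostly static this close to the wall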
The space will be affected by how each individual feels about it: if it makes them nervous or anxious, the pulse monitor will read the change in heart rate and the environment will change to amplify the visitor's feeling. Recordings of natural sounds, such as birds chirping, mixed with an undertone of white noise will be heard throughout the space. At times the sounds of the birds will cut out and only the white noise will be audible for a short period, again emphasizing the contrast between nature and man-made technology.

Forest Wall Projection Demo: http://www.youtube.com/watch?v=DwCSy5jb ... e=youtu.be

4) References
Aerial 3D: http://www.popsci.com/technology/articl ... -necessary
http://burton-jp.com/en/index.htm

kateedwards
Posts: 17
Joined: Mon Oct 01, 2012 3:15 pm

Re: Wk10 - Final Project presentation Here

Post by kateedwards » Mon Dec 03, 2012 10:18 pm

Face Blindness: Transforming Our Visual Sense of Self

Imagine walking through campus in a crowd of students, faces and bodies coming in and out of your line of sight at fleeting speeds. No particular individual stands out, each student but a mere fragment of a larger overwhelming scene. You notice the unusual teal color of a boy's sweater, then the sun striking a girl's auburn eyes as they make contact with yours, but you keep walking, no detail making a lasting impression. Now imagine their confusion as they wave in your direction, obviously aware of who you are, yet you have no idea what their names are or where you've seen them before. They speak to you as if they've known you for years, yet their faces are entirely strange to you, a foreign combination of eyes, lips and skin.

Face blindness, otherwise known as "prosopagnosia," is a real condition that can render a person unable to recognize even those closest to them. Individuals suffering from face blindness are incapable of distinguishing the facial features of someone they've known their entire life, such as their own mother, from those of a stranger on the street. With prosopagnosia, you may be able to identify your best friend, but only because of features such as their clothing or the way they style their hair. Their face bears no significance. Face blindness strips an individual of one of the most important aspects of human relations--the ability to identify others by their familiarity and their recognizable image.

Inspired by research conducted by UCSB's Center for Evolutionary Psychology and Four Eyes Lab technology, I propose an artwork in which the foreign sensation of face blindness can be felt even by individuals without the condition, creating an environment in which museum visitors may feel uncomfortable but which encourages them to think about the way we relate to others in a physical sense, removed from our emotional connections. Through the use of vision technology, I propose a piece in which our visual relationship with our own appearance and the image of others is manipulated and taken out of its typical context.

Psychological Background:
Our ability to recognize faces is a cognitive skill we take for granted every day. Brad Duchaine, a CEP alum, discusses prosopagnosia in his "60 Minutes" interview, comparing the experience of those with the condition to that of trying to identify faces upside down. A brain with typical cognitive function is able to associate specific combinations of facial features in order to put a name to a face, but once this capacity is impaired, the way we interact with others is drastically altered. This psychological research is the backstory for the application of Four Eyes technology to art.

Technological Background:
The technology required for the piece builds on work done by UCSB's Four Eyes Lab in a project titled "4EyesFace". In this project, conducted by Changbo Hu, Rogerio Feris, and Matthew Turk, AdaBoost face detection software is used to track the movement of an individual's eyes, mouth, and overall facial composition from video so that the results can later be aligned to face models. The face detection pinpoints the edges of facial features and allows the data to be transferred onto models through the use of Active Wavelet Networks. The AWN algorithm is able to align and synthesize faces comprehensively, and attempts to compensate for changes in light and missing pieces of the image. The following image depicts the tracking software's use of points to define specified facial features.

Image
4EyesFace screenshot
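For a sense of the detection step, here is a minimal sketch using OpenCV's Haar cascade detector, a descendant of the Viola-Jones AdaBoost approach. This is a modern stand-in, not the 4EyesFace code, and it stops at detection; the AWN alignment is a separate step not reproduced here:

```python
# Minimal sketch of an AdaBoost-style face detection step using OpenCV's
# built-in Haar cascade (a Viola-Jones/AdaBoost detector).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # any video source, e.g. a webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a box around each detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
```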

Further research by Matthew Turk on compensating for distortions and partial occlusions using a method called "locally salient ICA" will be combined with the 4EyesFace project in order to fully capture facial images. The enhanced detection software will be particularly helpful for capturing the museum surveillance footage at the best quality possible.

Another project by Turk and Ya Chang, titled "Probabilistic Expression Recognition on Manifolds," creates a probabilistic video-to-video facial expression recognition system based on manifolds of facial expression. The system learns to group together images with different facial expressions based on its collection of varied expressions. Manifold expression recognition relies on the idea that images of an individual's face transitioning from one expression to the next change smoothly in a high-dimensional space: from frame to frame, the subject's face maintains its primary features with subtle adjustments that can be tracked. Active Wavelet Networks are again applied to account for variation due to scaling, illumination conditions, and face pose. This project will be used to group together individuals with similar expressions so that the manipulation of features appears more seamless.

Image
Clustering facial patterns via expression manifolds
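As a rough stand-in for that grouping step, here is a sketch that embeds face crops with PCA and clusters them with k-means; this is not the paper's manifold learning, only an illustration of "group similar expressions together":

```python
# Rough stand-in for grouping faces by expression: project face crops
# into a low-dimensional space with PCA and cluster with k-means.
# (The actual paper learns expression manifolds; this is only a sketch.)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_expressions(face_crops, n_groups=4):
    """face_crops: array of shape (n_images, h*w) of grayscale crops."""
    embedded = PCA(n_components=10).fit_transform(face_crops)
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(embedded)
    return labels  # images sharing a label get blended together

# Example with random stand-in data (64x64 grayscale crops):
faces = np.random.rand(40, 64 * 64)
print(cluster_expressions(faces))
```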

Transformation into Art:
To translate the science of face tracking into art, my piece involves taking video images from a surveillance camera outside the museum and combining them into a collage to be projected onto a museum wall. As people enter the museum, the camera will film their expressions and movement without their knowledge, ensuring that their actions reflect their natural mannerisms and appearance. The surveillance camera, placed at eye level to capture a more straight-on shot than typical surveillance footage, will use the face tracking software to zoom in specifically on people's faces.

Image
Example of footage captured by camera

The camera footage will be used to create a database of individual faces, stored in a hidden computer on site. The computer will render a grouping of images in which select features from one individual are blended with those of another. For example, the eyes of one visitor will be swapped with those of the person who entered the museum behind them. One person's nose will be placed on the face of someone else. Through these small tweaks the faces will become less identifiable, yet still retain features which viewers will be able to recognize as their own if they focus on the projection. Once the computer has collected a sufficient number of faces, a collage similar to the clustering of the expression manifold chart will be projected onto a wall. The projection will show the finalized edited images as stills as well as the transformation of one face turning into the next. The collage will be a visual representation of the face blindness sensation, in which individuals will find the images strangely familiar yet off-putting. The juxtaposition of different people's features, along with the combination of static and moving images, will form a visual composite in which the familiar is constantly morphing before a visitor's eyes.
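A sketch of the feature-swap idea, assuming the tracker has already given us a bounding box for the eye region; the filenames and box coordinates are hypothetical, and OpenCV's seamlessClone stands in for whatever blending the final system would use:

```python
# Sketch of swapping one visitor's eye region onto another face.
# Assumes the eye bounding box comes from the face tracker; OpenCV's
# seamlessClone blends the patch so it reads as part of the new face.
import cv2
import numpy as np

def swap_region(src_img, dst_img, box):
    """Copy the region `box` = (x, y, w, h) from src onto dst."""
    x, y, w, h = box
    patch = src_img[y:y + h, x:x + w]
    mask = np.full(patch.shape, 255, dtype=np.uint8)   # blend whole patch
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(patch, dst_img, mask, center, cv2.NORMAL_CLONE)

a = cv2.imread("visitor_a.jpg")   # hypothetical captured frames
b = cv2.imread("visitor_b.jpg")
eyes_box = (210, 180, 200, 60)    # would come from the face tracker
collage_face = swap_region(a, b, eyes_box)
cv2.imwrite("blended.jpg", collage_face)
```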

The following are representations of how the technology of 4EyesFace will lead to the transformation of one face into a different facial structure.

Image
Using 4EyesFace tracking pinpoints to find specific features

Image
Animation showing manipulation of eyes and nose from one subject to another

Image
Still shots comparing before and after manipulation

Museum Experience:
Upon entering the museum, visitors will be unaware that video of them is being taken. They will not know they are going to be part of the artwork until they actually see the piece projected onto the large freestanding wall located in the middle of the right-hand room of the gallery. The manipulation of their faces will make their participation less apparent at first, but slowly they will recognize their own image as part of the piece. They may also notice the inclusion of their surrounding peers. By projecting the images of the visitors onto the wall I hope to evoke their curiosity and invite analysis of specific features. The cognitive process of face blindness will be partially experienced as viewers struggle to find accurate representations of their physical identity in the bits and pieces within the collage. The projection will change every few minutes as new participants enter the museum, making the artwork dynamic and hopefully compelling people to revisit the piece as it changes.

Image
Simulated projection of collected faces

Through this artwork I hope to alter people's perception of their own image as well as those of their peers, rethinking how we relate to those around us based on a strictly physical level. The piece essentially deconstructs the visual components of the human facial structure in order to recombine them and generate new visuals. The use of visitor faces in the exhibition will make the piece relatable and universal, as it can be used in any museum with any collection of individuals.

Sources:
Face Blindness by UCSB Alum Brad Duchaine:
http://www.cbsnews.com/8301-18560_162-5 ... -stranger/

Four Eyes Lab:
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/34
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/45
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/48

sidrockafello
Posts: 17
Joined: Wed Oct 03, 2012 11:14 am

Re: Wk10 - Final Project presentation Here

Post by sidrockafello » Mon Dec 03, 2012 11:40 pm

Sid & Jae

iGlasses

We live in a feel-and-touch world where our senses give us the experiences and memories we have day to day. In scientific terms we are multi-modal beings, with senses that don't operate in isolation but rather as an entire unit. There is a large body of research into the human senses and how to translate them into the computer-based realm. Our senses are ground zero for experiments that inform the measurements and systems behind computer simulations, allowing a user to touch, hear, and see objects in a virtual world or within an augmented reality. In our project we will use research in eye-tracking technology to engage the audience with our iGlasses in a three-dimensional, interactive, photo-realistic sculptural scene that will give the user an unforgettable experience.
iglasses.jpg
AAAAA.jpg
To begin, we look to Zachary Lieberman's work in eye-tracking technology, where he uses the ability to follow a person's eye movement to translate it onto a computer. When an eye movement is recognized, the computer traces the corresponding lines onto the screen, which allows the user to draw hands-free. What makes this possible is the type of tracker used to record the eye movement. At Acuity Eye Tracking Solutions, researchers use an eye-tracking head rig with an infrared light to capture the eye's movements and another camera to record what the user is viewing. Our iGlasses will use the same technology: a non-contact optical infrared tracker, meaning nothing touches your eyes; instead, infrared light is reflected from the eye and interpreted by a camera connected to a computer. In addition, another camera placed in the center of the head rig captures what the user is seeing, and the two images are combined. The resulting single image is a first-person point of view with a red dot in the center indicating what the user is looking at.

The primary function of the eye tracker in this project is to measure the point of gaze and the sweeping motion of the eye in relation to the user's head. Seeing what the user is looking at allows us, the artists, to record and measure where the user's attention is held and for how long. This is vital for the augmented realm we are going to put our audience in, because it is the iGlasses that allow the user to engage and interact with the virtual realm in a cognitive manner. How the iGlasses function within the exhibit comes down to four steps: record what the eye is looking at; project the virtually augmented room through the view of the iGlasses; measure which elements the user is looking at and for how long; and lastly, activate interactive animations through bar codes in the environment that respond to the user's gaze, an idea stemming in part from Magic Vision Lab's research.
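A small sketch of the third step, measuring where attention is held and for how long; the regions, sample format, and function names are our own illustration, not the Acuity rig's actual output format:

```python
# Sketch of the dwell-time measurement the iGlasses rig would perform:
# given a stream of (timestamp, gaze point) samples, accumulate how long
# the gaze stays inside each region of interest.
def dwell_times(samples, regions):
    """samples: list of (t_seconds, x, y); regions: {name: (x, y, w, h)}."""
    totals = {name: 0.0 for name in regions}
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        for name, (rx, ry, rw, rh) in regions.items():
            if rx <= x <= rx + rw and ry <= y <= ry + rh:
                totals[name] += t1 - t0
    return totals

regions = {"handprint_window": (100, 50, 80, 120),
           "hallway": (400, 200, 150, 200)}
samples = [(0.00, 120, 90), (0.03, 125, 95), (0.06, 430, 260)]
print(dwell_times(samples))  # seconds of attention per region
```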
iglasses2.jpg
The iGlasses' goal is to revolutionize mobile augmented reality devices and bring them into the real world. Augmented reality in our case relates more specifically to the concept of mediated reality, a view of reality that enhances one's present perception of it; imagine viewing the world with a heads-up display on your iGlasses, overlaid on the real world. However, we want to bring something special to the audience at the gallery exhibit. In coordination with the iGlasses, we incorporate Magic Vision Lab's research into the exhibit. Magic Vision Lab's goal is to "enhance human vision with computer-generated graphics in order to amplify human intelligence on a world-wide scale." The projects at Magic Vision Lab include devices that allow touch to be actuated and F-code bars to be read and manipulated in the real world. F-code bars are rectangular tabs with a hexagon print that read like the bar codes on cereal boxes; however, they allow computer-generated graphics to be displayed in real time through a visual headset, which in our case will be the iGlasses. In combination, these will allow the user to interact with the world around them and manipulate the augmented objects they see. The purpose of combining the iGlasses with the F-code tabs is to let the user walk around an environment they can touch, while the objects remain open to change and exploitation. F-code tabs covering a simple box give the artist the freedom to have the box appear as various objects, from rocks to milk cartons, all through computer-generated graphics pre-programmed into the iGlasses. This will allow each audience member to experience the same environment in a completely different way.
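Since we don't have Magic Vision Lab's F-code reader, here is a sketch of the same marker-to-object idea using OpenCV's ArUco fiducials (available in recent OpenCV builds) as a stand-in; the marker-to-object table and filenames are hypothetical:

```python
# The F-code tabs work like fiducial markers. As a stand-in we use
# OpenCV ArUco markers (not Magic Vision Lab's actual F-code system):
# each detected marker id selects which virtual object to overlay there.
import cv2

MARKER_TO_OBJECT = {0: "rock", 1: "milk carton", 2: "vase of flowers"}

aruco = cv2.aruco  # requires a recent OpenCV with the aruco module
detector = aruco.ArucoDetector(
    aruco.getPredefinedDictionary(aruco.DICT_4X4_50),
    aruco.DetectorParameters())

frame = cv2.imread("headset_view.jpg")       # hypothetical camera frame
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        obj = MARKER_TO_OBJECT.get(int(marker_id), "plain box")
        # quad[0] holds the four corner points for placing the overlay.
        print(f"marker {marker_id}: render '{obj}' at {quad[0][0]}")
```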
igrasses.JPG
We are combining eye-tracking software with augmented reality software to create an enhanced view of the world that reacts to the user directly. Our goal is to change the visual experience for the audience into something they can enter, explore, and even alter. This immersive application of the scientific research points toward the future of photography: it breaks the traditional mode of storytelling free from the restraints of a single frame, letting narratives expand in various directions and no longer confining the imagination of the audience to a distinct screen. We aim to put the viewer in the center of the action, with a space that navigates around the user and changes the perception and context of the real world. By actively engaging with the real world in combination with augmented reality, we challenge the audience not just to consume ready-made information, but to question it and investigate by defining the illusion. Computer graphics become a means to control matter.
photo (2).JPG
The main exhibit will consist of two components that make up the experience for the audience. The iGlasses act as a heads-up-display device, showing the enhanced world on the glasses, with an eye-tracking system that records and relays to the computer what the user is looking at while simultaneously measuring how far the user is from objects. The second component, drawn from Magic Vision Lab's research, is the F-code tabs. The F-codes will be stamped onto regular geometric objects and walls to leave the environment open to manipulation by the artist, making objects appear as different things and interactive for the viewer. The purpose is for a simple box or wall to have multiple modes of perception: what used to be a box is now a chair, and at the same time it can be anything else the architect of the gallery decides it to be.
photo (1).JPG
The first scenario, in a series of four pre-programmed environments to be displayed in the gallery, will be the entrance to a haunted mansion, unbeknownst to the user. At the gallery door an audience member will put on the iGlasses and enter the gallery space. For this particular scenario only one user will be in the space, but the audience outside the gallery can view the user's experience on a monitor, through the frontal camera capturing the user's point of view and the displayed virtual graphics. The reason for using only one person in the space at a time is to isolate the user's emotional responses to the iGlasses, as opposed to experiencing them in a group of people. Outside, the audience will view the user's experience and respond as one would at a movie theatre; however, the emotional involvement will hopefully be intensified because they know it is a fellow audience member in a fake world, and cognitive responses will dictate how the user navigates the space.
photo.JPG
Let's begin the first scenario. A single member of the audience is chosen and brought into the gallery space. The gallery will be constructed out of boxes covered in F-code tabs in such a way that it looks like a double staircase leading to a second floor; underneath the staircase is a hallway that turns left and runs down to a dead end. A staff member is given the word to let in the user. As the spectator enters the space, the iGlasses will show an extravagant chandelier hanging from the ceiling, red carpets lining the staircase to the second floor, and marble flooring throughout the room. On the walls hang pictures of individual persons, lit by daylight streaming in from windows. Another aspect of the gallery is an array of speakers that fill the room, giving audio feedback to the user's actions when interacting with the environment.

After the user has a feel for the gallery space, the exploration begins. Intentionally placed F-code markers help navigate the user around the space; in contrast to how an audience traditionally walks around a gallery, these markers are meant to incite intrigue. The eye tracker will measure which points of the mansion seem most appealing, such as a window with a questionable handprint or a sound coming from the hallway. As the user moves around the space observing different objects, he can go over to a vase of flowers and pick it up; the flowers will continuously move in real time, because the F-code allows manipulation of an object to be recognized when the user is in proximity to it.

For this exhibit there will be an intentional path set up as the user navigates. First there will be sounds and voices coming from the hallway under the stairs, and the user walks over to investigate. As the user gets closer, the eye-tracking system picks up the eye focusing on the very edge of the wall, as if expecting someone to come around the corner, but no one appears. At the end of the hallway, which is a dead end, is a door, and it begins to be thrashed upon from the other side; vibrations from the audio and a shaking door push the user to either investigate or flee. The user takes the latter option and moves upstairs instead. The pictures lining the staircase catch the user's eyes as he ascends. The user has now activated the marker, and the pictures begin to animate and follow his presence; the room goes dim and the lights fade. The view through the windows downstairs shows a gloomy sky, and lightning strikes. The user approaches a window and a mad-faced figure lurches out! While the animation is not real, the experience will hopefully invite the user to move around the space more cognitively and react as one would in an augmented open world.
photo (3).JPG
As you enter the museum, you will be given a pair of iGlasses. With the iGlasses, visitors can experience being the museum curator: the iGlasses allow people to move and rearrange artworks within the building. Based on these experiments, museum curators will learn the preferred locations of certain artworks. Not only can people virtually move artworks, but they can also rebuild the museum; benches and podiums inside the museum can be moved too. The iGlasses are a great tool for finding the most appealing spot for an artwork.


http://www.magicvisionlab.com/index.html
http://www.musion.co.uk/#!/the-experien ... experience
http://en.wikipedia.org/wiki/Eye_tracking
http://en.wikipedia.org/wiki/Augmented_reality
http://acuity-ets.com/Solutionsforscientific.htm
http://www.youtube.com/watch?feature=pl ... 20c6jdTiLc#!
http://www.youtube.com/watch?v=2NcUkvIX ... re=related
http://www.youtube.com/watch?v=9MeaaCwBW28
http://www.youtube.com/watch?feature=pl ... o_t1NnSYp4#!
http://augmentedstories.wordpress.com/
http://graphics.tu-bs.de/
http://thesystemis.com/projects/eyewriter/
http://www.ted.com/talks/mick_ebeling_t ... rtist.html
http://www.unisa.edu.au/Research/Advanc ... ision-Lab/
http://www.eyewriter.org/

aimeejenkins
Posts: 10
Joined: Mon Oct 01, 2012 4:02 pm

Re: Wk10 - Final Project presentation Here

Post by aimeejenkins » Mon Dec 03, 2012 11:56 pm

When we observed one of Professor Philip Lubin's vision labs at the University of California, Santa Barbara, we envisioned the photography of the future. We examined the possibilities and figured out ways to incorporate Lubin's contemporary physics research into our future camera's design. We developed our idea through several processes of exploration. Exploring scientific research and visual arts in a wide range of socio-cultural contexts broadened our thoughts about new innovations in photography. As we look to the future, we see our enhanced technological device helping every viewer develop analytical problem-solving skills, personal expression, and imagination. By improving each individual's aesthetic sensibilities, our device will shift art's institutional role in relation to the parameters set forth by new technologies.

Lubin's research explores the connections between math, physics, and art through the common ground of symmetry. As a professor, he teaches symmetry and aesthetics in his class Contemporary Interdisciplinary Physics, which treats symmetry as the mathematical foundation of physics as well as a motivating principle in the arts. His lab uses visualizations and drawing, along with mathematics, to teach the foundations of contemporary physics, starting from the perspective of symmetry. His ideas serve as the foundation for how we will combine media, arts, and science. Symmetry and asymmetry will be at the heart of our experiment's aesthetic experience.

Professors at the University of Southern California and the University of California, Santa Barbara have been working on different methods to correct damage done to structural exteriors. The experimental cosmology lab at UCSB, led by Philip Lubin, recently developed "a method that actively corrects imperfections in structures designed to be precisely aligned." They call this flaw-detecting system the Adaptive Morphable Precision Structures System (AMPS). AMPS optimizes structural objects in images. History has proven it inevitable that all physical structures change over time. "Every material substance in the world is [consistently] affected by such natural elements as gravity, temperature, aging, and pressure." Once perfectly constructed buildings, telescopes, and satellites become defective due to damage from natural elements, no matter what the material. With AMPS, engineers no longer have to take extra design efforts to account for the inevitable decay of their products; they can rely on AMPS's ability to fix the imperfections that occur over time. The system works from as far as 100 meters from its subject. Efforts are being made to extend this distance to a kilometer or even further.

We relate this project to art through the idea of symmetry in science, art, and technology. Studies of symmetry and of asymmetric objects guide our understanding of the world. We hope to combine symmetry recognition, AMPS technology, and photography in one cohesive project. By photographing and recording objects, we will use AMPS to reconstruct those that have deteriorated over their existence. In the future, we will hold a remote that controls a small hovering camera. The ball on this doughnut-shaped disk will record visual data adjacent to its view. The viewer will then be able to navigate the device however they desire. The device is small, lightweight, and can fit in the palm of a hand. The microchip inside will contain AMPS, allowing it to recover and record data on the earth's atmosphere. The main function of this camera will be to actively catch flaws in structures that have been destroyed or have withered away. We want our device to mimic AMPS's capabilities. While the viewer in the museum space navigates the miniature camera, it will identify and correct particular damage in that area. The next generation of measurements promises to greatly increase our knowledge of the CMB angular power spectrum, which should enable us to determine many of the cosmological parameters that define our universe. Our device will also pick up on gesture and cultural information in an image.
device.jpg
We imagine our museum space to be fully immersive. When one walks in, they find themselves inside a white cylindrical structure made of the same material used for projection screens. The cylinder will have a ten-foot radius. In the center will be a remote control on a stand, raised to hand level. Upon entering the projection room, one will receive instructions on how to use the remote and how it controls the hover camera. The remote will be a black sphere on an axis, which mimics the camera device overlooking the city from a bird's-eye view. The person using the remote can push (forward, back, side), tilt (north or south), rotate, press down (to zoom in), or pull up (to zoom out) the sphere to their liking, and the results will show on the screen around them in a 360-degree view. The camera device sends data that is then projected onto the walls inside the cylinder. Imperfections that the camera detects will appear in red on the screen; the bright red shape shown depends on the type of imperfection (see figure). The user can double-tap the imperfection with their fingers and zoom in to make a correction. The user can then push a button that shows the corrected structure and its attributes. The corrections will not happen in real time; all the video will be recorded so that the AMPS system can later fix the imperfections as needed. Showing what the structure would look like fixed is purely for the instant gratification of the museum goers, to see how this device and the AMPS system can make a difference.
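We do not have access to the AMPS internals, so the following sketch only mimics the behavior the visitor sees: compare the camera's view of a structure with a reference image of its undamaged state and paint deviations red. Filenames and the threshold are illustrative:

```python
# Sketch of the red-highlight display described above (not AMPS itself):
# difference the current view against a reference model of the structure
# and mark any strong deviation in red.
import cv2
import numpy as np

def highlight_flaws(current_path, reference_path, threshold=40):
    current = cv2.imread(current_path)
    reference = cv2.imread(reference_path)
    diff = cv2.absdiff(current, reference)          # per-pixel deviation
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    flawed = gray > threshold                       # mask of imperfections
    marked = current.copy()
    marked[flawed] = (0, 0, 255)                    # paint flaws red (BGR)
    return marked

cv2.imwrite("flaws.jpg",
            highlight_flaws("bridge_now.jpg", "bridge_design.jpg"))
```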
outside.jpg
joystick.jpg
birdseye.jpg
At the moment, 100 meters is the limit of how far away our device can detect an imperfect object. Research into new technological approaches will attempt to extend this distance to kilometers or possibly further. By using AMPS, engineers can recognize flaws within a structure and fix them before they worsen. Our device will work with the sculptural and experiential possibilities of light and natural phenomena. The device will be defined by its historical moment, and everyone will be able to visualize science in new aesthetic forms. In this way, we will combine design and science, art and technology, in an effort to maintain structural symmetry.
pisaEdited.jpg
goldengate01.jpg
faceEdited.jpg
egypt-pyramids1.jpg
Car Damage 001.jpg
Bibliography

1) Statti, David, comp. Morphable Mirror Support Joint System. UC Santa Barbara Engineering. UCSB, 06 May 2008. Web. 3 Nov. 2012. <http://www.me.ucsb.edu/projects/sites/m ... team12.pdf>.


2) Lubin, Philip. "Faculty and Researchers." UCSB Deepspace Group. UCSB, n.d. Web. 04 Dec. 2012. <http://www.deepspace.ucsb.edu/people/fa ... esearchers>.

3) Juncal, Shaun R., comp. Precision Morphing And Adaptive Structures (AMPS). Technology & Industry Alliances. UC Santa Barbara, 2008. Web. 3 Nov. 2012. <http://techtransfer.universityofcalifor ... 22163.html>.

aleung
Posts: 9
Joined: Mon Oct 01, 2012 3:12 pm

Re: Wk10 - Final Project presentation Here

Post by aleung » Tue Dec 04, 2012 1:14 am

Title: Lights

For my final project, I will build a project around the Syrcadian Blue, a light therapy device made to help those who suffer from Seasonal Affective Disorder (SAD). The science behind this device is light, and I will be translating light into art.

Image

SAD is the most commonly diagnosed mood disorder, affecting nearly 36 million Americans. SAD is a depressive disorder caused by a reduction in the amount and type of light hitting the retina, which results in a biochemical imbalance of two key hormones, serotonin and melatonin. The brain does not produce enough serotonin, which results in the symptoms of depression, and at the same time it overproduces melatonin, which tends to make people lethargic. SAD typically occurs during the winter months, when sun and blue skies are limited.

Within the past 30 years, scientists discovered that many key "mood" hormones are actually regulated by light. With this knowledge, light therapy was developed; today it is prescribed by psychiatrists, psychologists, and family doctors and is considered the most effective non-pharmaceutical treatment for SAD. Light therapy is effective because it treats one of the primary causes of depression: low serotonin levels in the brain. Most antidepressant drugs boost serotonin levels in the brain not by creating more serotonin, but by blocking its reabsorption. Light therapy, by contrast, works by prompting the brain to produce more serotonin.

In 2002, a study published in the prestigious journal Science reported a new set of light-sensitive receptors in the eye. These intrinsically photosensitive retinal ganglion cells (ipRGCs) are not used for vision but are linked directly to the production of serotonin and the regulation of melatonin and other hormones. The receptors are located in the lower portion of the eyeball and are primarily sensitive to blue light.

Image

Using this new knowledge, scientists were able to design a device that helps people produce more serotonin in the brain. The Syrcadian Blue light therapy unit has been specifically designed to stimulate the ipRGC receptors to maximize therapeutic activity while minimizing visual discomfort. To do this, the Syrcadian Blue device provides high-intensity photons at the precise blue wavelengths required to activate the ipRGC receptors (the "Cirtopic" response), while minimizing the response of the rods and cones used for normal vision (the photopic response). The device is successful because it is easy to use, excels at stimulating serotonin production, and is easy on the eyes.

Image

Since this is a commercial product, I will take a step back to the science behind the device, which is simply light. If you think about it, the production of light is a lot more complicated than an on-off switch. Visible light is electromagnetic radiation that is visible to the human eye, and it is responsible for the sense of sight. Cones and rods inside our eyes receive these waves, and the stimulation of the cones and rods sends messages to the brain; that is what you see. Light originates as single photons: particle/waves having zero mass, no electric charge, and an indefinite lifetime. Photons are emitted by electrons as they lose orbital energy around their atom. The expendable orbital energy is transferred to the electron either when it is bumped by an atom or when it is struck by another photon. Before being excited to a higher electron shell, the electron is in the lowest possible shell, or ground state. The ground state is enforced by the attraction between the electron and the proton(s) in the nucleus. When an electron is excited to a higher shell, it almost immediately seeks the ground state again. The electron sheds the excess orbital energy as a photon with energy equal to the difference between the orbital energies an electron would have in the two shells.

Or, in simpler words:
Light is made when an electron, losing orbital energy around its atom, emits a single photon. When an atom or another photon strikes the electron, energy is transferred.
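To make the shell-to-shell arithmetic concrete, here is a small worked example; the 2.64 eV energy gap is an illustrative number chosen because it lands in the blue range this device uses, not a measured value:

```python
# Worked example of the energy drop described above: a photon's energy
# equals the gap between the two shells, and its wavelength follows from
# E = h * c / wavelength. The 2.64 eV gap is illustrative, not measured.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

delta_E = 2.64 * eV              # energy shed by the falling electron
wavelength = h * c / delta_E     # metres
print(wavelength * 1e9)          # ~470 nm: blue light, like the device's
```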

Image
My piece will be an interactive piece about light. It will be in a separate room, enclosed from the rest of the gallery. Since the piece deals with light, the room will be dim so that the lights take the spotlight. The piece will be installed on the wall: a black magnetic board, six feet by six feet, hanging two feet above the ground. Black is chosen for the board so the lights stand out even more. On the board there will be twenty lights the size of baseballs. Each light has two buttons: an on/off button and a button that changes its color. The available colors are red, orange, yellow, green, blue, purple, pink, black, and white. Only one person is allowed in the room at a time, for a maximum of five minutes: five minutes so the wait is not long for other participants, and also so the participant does not overthink and instead expresses their feelings and emotions in the moment. The instructions for the piece are for the participant to move the lights to any desired areas, to turn each light on or off, and to pick a color for each light. After the five minutes are up, the participant leaves the room and their piece is projected onto an area of the ceiling in the gallery. This activity lets participants be creative and express themselves through light. Different colored lights have different meanings; therefore the personality of the participant can be suggested as well.
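A small model of the board's rules as described above; the class and method names are our own illustration of the two buttons and the fixed palette:

```python
# Model of the light board: twenty movable lights, each with an on/off
# toggle and a button that cycles through a fixed palette of nine colors.
COLORS = ["red", "orange", "yellow", "green", "blue",
          "purple", "pink", "black", "white"]

class BoardLight:
    def __init__(self, x, y):
        self.x, self.y = x, y        # position on the 6x6 ft board
        self.on = False
        self.color_index = 0

    def toggle(self):                # first button: on/off
        self.on = not self.on

    def next_color(self):            # second button: cycle the palette
        self.color_index = (self.color_index + 1) % len(COLORS)

    @property
    def color(self):
        return COLORS[self.color_index]

lights = [BoardLight(0.0, 0.0) for _ in range(20)]
lights[0].toggle()
lights[0].next_color()
print(lights[0].on, lights[0].color)   # True orange
```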

Image
Sample of a piece


Sources:
http://science.howstuffworks.com/light.htm
http://en.wikipedia.org/wiki/Light
http://library.thinkquest.org/26111/lightismade.html
http://zebu.uoregon.edu/~soper/Light/atomspectra.html
http://www.syrcadianblue.com/science.html

kevinalcantar
Posts: 7
Joined: Mon Oct 01, 2012 3:49 pm

Re: Wk10 - Final Project presentation Here

Post by kevinalcantar » Tue Dec 04, 2012 2:58 am

We did our research on work being conducted at UC Berkeley on reconstructing visual experiences from brain activity, which can be seen at the following link: https://sites.google.com/site/gallantla ... et-al-2011

The Gallant Lab at UC Berkeley is doing research that aims to simulate natural vision. Recent fMRI studies have successfully modeled brain activity in reaction to static visual stimuli and consequently have been able to recreate those visuals from the brain activity. The Gallant Lab's particular "motion-energy encoding model" is a two-part process that encompasses fast visual information, as generated by neurons in the brain, and blood-related reactions in response to this neural activity. The neural and hemoglobin responses allow visualizations of natural movies to be formed, rather than visualizations of static stimuli.
Image

The neurons of the visual cortex work as filters that systemize spatial position, motion direction, and speed. Today the best way to measure brain activity is through fMRI; however, fMRI does not measure neural activity directly. Rather, it measures the result of neural activity, i.e., changes in blood flow, blood volume, and blood oxygenation. These hemoglobin changes happen quickly, but they are quite slow with respect to the speed of the actual neural activity as the neurons respond to natural movies.


Image


Researchers record BOLD signals from the occipital and temporal lobes that serve the visual cortex as people watch the natural movies. The brain responses to these movies are modeled for independent voxels, the discrete volume elements that build up a representation of a 3-dimensional object. The goal of the researchers in the Gallant Lab is to recreate the movie a subject has just observed: the subject perceives a stimulus and experiences brain activity, which is then decoded and used to reconstruct the initial stimulus.
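A toy sketch of that decoding logic with synthetic data, not the Gallant Lab's actual pipeline: an encoding model predicts each candidate clip's voxel response, and the clips whose predictions best match the measured response are the ones averaged into the reconstruction.

```python
# Toy decode: score candidate clips by how well their predicted voxel
# responses correlate with the measured BOLD response, then take the
# top-scoring clips (which would be averaged to reconstruct the movie).
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_clips = 200, 500
clip_features = rng.standard_normal((n_clips, 50))   # motion-energy features
weights = rng.standard_normal((50, n_voxels))        # fitted encoding model

predicted = clip_features @ weights                  # per-clip predictions
true_clip = 123
measured = predicted[true_clip] + 0.5 * rng.standard_normal(n_voxels)

scores = [np.corrcoef(measured, p)[0, 1] for p in predicted]
best = np.argsort(scores)[::-1][:10]                 # top candidate clips
print(best[0] == true_clip, best[:5])                # True, 123 ranks first
```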

Image
http://youtu.be/nsjDnYxJ0bo

This research could lead down many paths, artistically speaking. One idea, which was Alicia's first idea for this project before joining with Kevin and Sydney, was to focus on memory. The project involved instructing someone to think about certain events in their past (a sister's birthday party, the day they got their dog, etc.) and capturing the footage of their memories using the research technology and an MRI. These memories would then be screened in the gallery using a projector. The artwork would comment on the relationship between home video-camera memories and the memories inside our brains, and the importance of both to the human race today.
That focus on memory carries into the project presented here.



We propose a camera that connects directly to the mind using the type of technology seen in this research. One would be able simply to view a subject in life and take an image of the mind's eye. In this way, various memory snapshots can be taken and even combined to create new images. In artistic fields such as film, a camera of this sort would be particularly useful because, theoretically, budgets would shrink and there would be no special-effects issues.

Image

We envision a gallery space in which this new technology would be showcased. We are going to use each room of the gallery and set up MRIs at the entrance of each room. Each room will have a different theme based upon aspects of the emotional spectrum: fear, passion, anger, humiliation, and joy. Much in the same way that the idea for a work of art is lost in translation from mind to paper, our memories and the emotional meaning we attach to certain things can become blurry when we verbally communicate how they make us feel. In each room, the audience is instructed to think about a memory they associate with passion, fear, anger, joy, or humiliation when entering one of the MRI machines situated just outside each themed room. Each MRI will be connected to a computer, through which the "camera" will function, and will then send images to a projector. Upon entering each theme-specific room, visitors will be asked to imagine a situation related to the theme. Much as telling someone not to think about elephants causes them to think about elephants, the audience will immediately think of the first thing they associate with each emotion, whether it is a simple image or an experience. The proposed camera will allow visitors to project and view their own thoughts or memories on the walls of the gallery. The projections will be delayed so that every person who steps out of the MRI can view their own projections (their fears, their happiest moments, their anger, etc.) alongside those of previous participants.

In having the gallery space function this way, the artists (us) are eliminated; the audience becomes the artist. We function more as authors, providing the space and form, while the public provides the context of the work through their brains, their MRI scans, and the resulting imaging. The work also raises issues of voyeurism by allowing the audience to peer into each other's most private and personal space: their minds.
Image
Image
Image
Oftentimes artists comment on the disconnect between the mind and a sheet of paper. I believe one of the best explanations of this idea is in "Understanding Comics" by Scott McCloud (196-198). The author writes about the disconnect that occurs as an art piece moves from an idea in the mind, through the hand with the pen, and finally to the paper. Then the idea must be picked up by another individual, increasing the disconnect with the conversion from the paper to another's eye and mind. With the creation of this camera, we aim to eliminate this disconnect by literally using the mind as the medium of creation.

http://www.youtube.com/watch?v=6FsH7RK1 ... e=youtu.be
http://newscenter.berkeley.edu/2011/09/22/brain-movies/
Attachments
COMICS - Scott McCloud - Understanding Comics - The Invisible Art (dragged).pdf
COMICS - Scott McCloud - Understanding Comics - The Invisible Art (dragged) CHAPTER (.. 2.pdf
COMICS - Scott McCloud - Understanding Comics - The Invisible Art (dragged) 1.pdf

pumhiran
Posts: 9
Joined: Mon Oct 01, 2012 4:07 pm

Re: Wk10 - Final Project presentation Here

Post by pumhiran » Tue Dec 04, 2012 3:16 am

PAT Pumhiran Presents

Title: The Cornea Experience, The Art Exhibit
Image
Description:
1) Science/Vision/Technology researched:
• Based on the Scientific Research of Ko Nishino & Shree K. Nayar
• From the Department of Computer Science, Columbia University

2) Translation into Art: What are you proposing to translate the science into art
• A panoramic visualization: imagine being able to see exactly what others see, using the human cornea as a mirror to the surrounding environment.
• Giving the viewer a new kind of visual experience through the eyes of others.

Image

3) Museum presentation: What will the public see/experience
• This is a combination of science and interactive art installation
• The gallery visitors become part of the work automatically, whether they are being projected somewhere in the gallery or participating by projecting what is reflected on their corneas.

References: http://www1.cs.columbia.edu/CAVE/public ... CVPR04.pdf


1. How will the work be situated, which space
• For this visual installation, there will be several cameras located in different spaces in the gallery. At the same time, different projections will take turns randomly displaying what the different cameras capture.


Image

2. How will it be represented, physical presence, monitors, some direct connection to the lab, etc.
• What the camera captures on a person's cornea will then be projected on different walls throughout the gallery. It has the playful feel of a mirror house, where you wonder where the reflection is coming from.
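A sketch of the capture step only, using OpenCV's stock eye detector as a stand-in. Nishino and Nayar's actual work goes much further, unwarping the corneal reflection into a wide-angle view of the environment; here we simply find an eye, crop it, and enlarge it for the projector:

```python
# Stand-in for the capture stage: detect an eye in the camera feed,
# crop and enlarge it, and hand the crop to the projector output.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 10):
        crop = frame[y:y + h, x:x + w]
        big = cv2.resize(crop, (640, 640))   # what the projector shows
        cv2.imshow("cornea", big)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
cap.release()
```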

Image

3. What are the key components to making the project understandable
• Again, think of it as a house of mirrors at a carnival. A person sees a reflection from somewhere else, but the source is still within reach of discovery. The key to this art installation is that it uses the only part of the human body capable of reflecting its surroundings.

4. What is the "story", or context?
• An art exhibit does not have to consist only of paintings on the wall or sculptures on the floor. In fact, the Cornea Experience installation is all about perspective and projection rather than particular art objects. The space and walls might be empty, but with the right amount of delicate music, soft drinks, and space to walk around, visitors might find themselves on a joy ride. The space encourages socializing and the exchange of perspectives.

5. Relevant to an artist
• In 1974, an artist named Dan Graham created an installation known as "Time Delay Room." In this installation, the audience would see and hear delayed images of themselves on a projection. The installation seems simple by modern technological standards, but the fact that it was created in the '70s makes it quite an interesting piece. Graham gave each room a different functionality and concept. The piece is not only interactive with the audience; it also introduces the idea of combining science and art.

Image

Cite Sources:
Shree K. Nayar: http://www.cs.columbia.edu/~nayar/
Original Research: http://www1.cs.columbia.edu/CAVE/projects/world_eye/
House of Mirror Image: http://visualparadox.com/wallpapers/houseofmirrors.htm
Dan Graham: http://en.wikipedia.org/wiki/Dan_Graham
Time Delay Room: http://www.medienkunstnetz.de/works/tim ... /images/1/

jlsandberg
Posts: 3
Joined: Mon Oct 01, 2012 3:21 pm

Wk10 - Final Project Presentation - Jonathan Sandberg

Post by jlsandberg » Tue Dec 04, 2012 8:07 am

Since its conception, art has challenged the ways in which humans interact with their surroundings. Be it for personal merit, cultural progression, a method of expression, or experimental intrigue - art has been a steadfast endeavor, pursued by a variety of people.

As time progresses and technology erupts, artists have been and will be increasingly well armed in their pursuit. Painting came of age 40,000 years ago in a Spanish cave. With the advancement of mathematics (and in particular, geometry), Filippo Brunelleschi developed Alhazen's perspectival view - transcending the boundary of foreshortening and spearheading the direction that art would take in the following years. The modern camera was invented - a derivation of the camera obscura - and then came Steven Sasson's digital camera. These examples are just a minuscule blip on the radar of mediums drawn into the practice of art.

With time comes technology - and with technology come new materials and methods with which artists can produce art. This has been the trend since art's birth. By examining technology, we are able to understand the mediums through which human beings interact on a daily basis. Knowing this, we can begin to understand the path that art might take in the near future.

It is here that my research of flexible organic light-emitting diodes will hopefully demonstrate a direction in which art might head.

To begin, I'll outline just what flexible organic light-emitting diodes are. From there, I'll delve into my research on these intriguing advancements - and then couple that research with ideal applications in order to demonstrate the relevance of these flexible LEDs.

Image
A photograph of a light-emitting diode, commonly found in most electronic displays such as televisions, flashlights, cell phones, etc. (2)

Light-emitting diodes take advantage of voltage differences within a circuit. Between the two ends of a circuit there is a voltage difference, which causes current to flow as electrons leave one side and travel to the other (this is due to the difference in charge between the two ends, maintained by an electromotive force, such as a battery). As the electrons flow through the circuit, they pass into the inner workings of the LED. The following picture, and its description, demonstrate how LEDs produce light.

Image
Electrons flow from the cathode (-) toward the anode (+), and as they drop into the "holes" of the LED, their energy level decreases; the lost energy is released as photons (light) (4).
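To make the voltage-and-current behavior above concrete, here is the standard arithmetic for driving an LED: it drops a roughly fixed forward voltage, so a series resistor sets the current. The values are typical illustrative numbers, not specs for any particular diode:

```python
# Worked example of the circuit behavior described above: with a fixed
# forward voltage across the LED, a series resistor sets the current.
supply_v = 5.0        # volts from the source (the electromotive force)
forward_v = 2.0       # volts dropped across the LED itself
target_i = 0.020      # 20 mA, a common LED operating current

resistor = (supply_v - forward_v) / target_i
print(resistor)       # 150 ohms limits the current to 20 mA
```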

Now, of course, most of this is common knowledge in this day and age; LEDs are no new subject. Yet the introduction of flexible organic light-emitting diodes into the modern age brings forth many questions and opportunities for discovery.

What defines a flexible organic LED? How can we as artists use these to our advantage?

We'll first define what an organic light-emitting diode (OLED) is. An OLED is "a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current" (5). What makes OLEDs so special is that these organic compounds emit light "due to the electroluminescence of thin films of organic semiconductors approximately 100nm thick," where 'nm' stands for nanometers (5). Basically, as electricity is conducted through these substances, they emit light. The organic compounds can then be fabricated on flexible materials such as polyethylene terephthalate, a bendable and lightweight substance.

To summarize, organic light-emitting diodes are a special case of LEDs in which the characteristics that produce light through electric conduction are inherent in the material - and can thus be exploited in a pliable, dynamic, and malleable manner. Information Display magazine (the magazine of the Society for Information Display, a leading organization in the field of electronic displays) gives a momentous quote: ". . . and the introduction of curved OLED-TV units is also not far off. The next generation of TV technology is now upon us. . . the industry [is] justified for [its] excitement at what lies ahead" (6).

Adding the "flexible" to flexible organic light-emitting diode technology is as simple as applying the organic compound to bendable substances such as polyethylene terephthalate. Now we understand what a flexible organic light-emitting diode (FOLED) is, and we have some context for the plausibility and depth of the applications FOLEDs will have in society.

Image
Image of a flexible organic light-emitting diode (7).

Applying this to art is now simple. The only requirement to activate FOLEDs is a current through which conduction is possible. Beyond the activation of FOLEDs also comes the manipulation of the intrinsic characteristics found within the technology. I'll introduce a few examples of how flexible organic light-emitting diodes can be activated (and thus demonstrate the practicality of the technology in art), as well as theorize about the artistic possibilities of FOLEDs coupled with further applied science.

In order for the FOLEDs to produce photons, a current must be introduced to the system. There are a multitude of ways in which this is possible.

Applying the OLED material to a piezoelectric material would be one method. Piezoelectric materials produce a voltage when placed under stress. Thus, when stress is applied to the compound, the voltage would "activate" the light-emitting diodes. Examples of this use would be light-up floors or interactive photographs.

Applying the OLED material to materials that exhibit the photomechanical effect, meaning the material changes shape when exposed to light, would also be a great application of OLEDs in an exhibition. One example: fabricating a photomechanical material with OLEDs, so that activating the light-emitting diodes would inherently cause the material to change shape. A direct interaction between dynamic concrete forms of a substance and light could be observed with the simple flip of a switch.

Applying the OLED material to a shape-memory polymer would produce what I believe to be amazing effects; it is here that I choose to create my art piece. Shape-memory polymers are "polymeric smart materials that have the ability to return from a deformed state (temporary shape) to their original (permanent) shape induced by an external stimulus (trigger), such as temperature change" (8). To activate the intrinsic effects of a shape-memory polymer, we'd need a temperature-change stimulus. Thankfully, if we fabricate OLEDs onto such a polymer, the diodes slowly heat up as they receive current. Heat as a by-product of the LEDs' emitted photons would thus give life to the polymer substance, in effect creating a self-sculpting (or self-destructing) three-dimensional piece - incorporating dynamic visual stimuli, classic spatial sculpture, and progressive inter/intra-material relationships. Hovering on the border between holographic imagery and sculpture, a FOLED shape-memory polymer fabrication is a wonderful step forward in the field of digital/virtual spatial practices.
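To show why the waste heat matters, here is a toy thermal model in Python. Every constant (ambient and trigger temperatures, heating and cooling rates) is an assumption for illustration, not a property of any real polymer.

AMBIENT = 20.0   # gallery temperature in deg C (assumed)
TRIGGER = 40.0   # polymer's shape-recovery temperature (assumed)

def step(temp, deformation, duty, heating=3.0, cooling=0.1, recovery=0.2):
    """One time step: OLED duty cycle heats the sheet; heat drives shape recovery."""
    temp += duty * heating - cooling * (temp - AMBIENT)
    if temp > TRIGGER:
        deformation *= (1.0 - recovery)   # drift back toward the permanent shape
    return temp, deformation

temp, deformation = AMBIENT, 1.0   # start fully deformed
for _ in range(40):
    temp, deformation = step(temp, deformation, duty=0.8)
print(f"after 40 steps: {temp:.1f} C, {deformation:.2%} of the deformation left")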

My FOLED shape-memory polymer sculpture would be massive in scale. It would seemingly ebb and flow with the surrounding temperature, the OLED surface temperature, and possibly even the body heat of visitors in the exhibition space. As it changes shape, the light would catch the viewer's eyes differently. The OLEDs could be dimmed, slowing the rate of change of the sculpture, or intensified to produce faster changes. Given a sufficiently sensitive polymer, the piece's dynamism would be showcased in its ever-changing shape, making for an intriguing work.

http://www.youtube.com/watch?v=cVsVB42MLxE#t=2m23s
Video demonstrating the shape-memory polymer effect.


Given the proper polymer (very thin, translucent, and flexible), this synthesis between art and technology is also a step towards achieving interesting results in the field of holographic display.

These examples are, of course, only a few of the possibilities these technologies can produce. There are more materials to which OLEDs can be applied, and more stimuli to activate the diodes, than I could ever hope to record. It is here that I leave you with a link to some common "smart materials" whose surfaces may lend themselves to the application of OLED compounds: http://en.wikipedia.org/wiki/Smart_material.

1 - http://en.wikipedia.org/wiki/Perspective_(graphical)
2 - http://s.eeweb.com/members/andrew_carte ... 154080.jpg
3 - http://en.wikipedia.org/wiki/Light-emitting_diode
4 - http://upload.wikimedia.org/wikipedia/c ... -LED-E.svg
5 - http://en.wikipedia.org/wiki/Organic_li ... ting_diode
6 - LG Displays. "OLED Technology Prepares for Landing in the Commercial TV Market". Information Display. July/August 2012. 33-36.
7 - http://spie.org/Images/Graphics/Newsroo ... 0_fig3.jpg
8 - http://en.wikipedia.org/wiki/Shape-memory_polymer

jacobmiller
Posts: 7
Joined: Mon Oct 01, 2012 3:50 pm

Re: Wk10 - Final Project presentation Here

Post by jacobmiller » Thu Dec 06, 2012 12:18 am

I see photography as an attempt to capture a moment in time. As time has progressed we have seen photography move towards a more accurate representation of the captured moment. We have moved from still black and white photographs to three-dimensional colored films with surround sound. The future of photography is going to follow this same path.

All the advances in photography have had one limitation in common: the photos capture the moment from only one angle, when the real moment can be viewed from any angle. That being said, one of the next steps in photography would be to rid ourselves of this limitation. In order to accomplish this goal we would have to take advantage of multiple research projects.

One research project with applicable technology, called dual photography, comes from Pradeep Sen of UCSB's Advanced Graphics Lab. Dual photography operates on Helmholtz reciprocity, which dictates that the flow of light can be reversed without altering its properties. This idea is exploited to take a photo and then create a second photo from the perspective of the light source: the path traveled by each photon after it hits the object is calculated and then reversed.
Screen Shot 2012-12-05 at 10.58.17 PM.png
Screen Shot 2012-12-05 at 10.58.46 PM.png
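The core identity behind dual photography fits in a few lines of numpy. If T is the light transport matrix taking projector pixels to camera pixels, Helmholtz reciprocity says the dual image is obtained with the transpose of T. The matrix below is random stand-in data, not a measured scene.

import numpy as np

rng = np.random.default_rng(0)
T = rng.random((4, 3))          # 4 camera pixels x 3 projector pixels (toy scene)

p = np.array([1.0, 0.0, 0.5])   # an illumination pattern sent by the projector
c = T @ p                       # primal image: what the camera records

# Reversing the flow of light swaps camera and projector: a "virtual camera"
# sitting at the projector sees the scene through T transposed.
lighting = np.array([0.2, 1.0, 0.0, 0.4])   # imagined light emitted at the camera
dual = T.T @ lighting
print("primal image:", c)
print("dual image:  ", dual)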
Another applicable technology comes out of ReCVEB, UCSB's Research Center for Virtual Environments and Behavior. The lab makes use of virtual environment technology, which allows users to navigate through visual and spatial environments.

Image

My idea is to create a room with numerous cameras on all six walls (including the floor and ceiling). These cameras would capture footage of whatever is in the room from all angles. The disparity between the cameras could be used to calculate distances and, in turn, translate the footage into 3D. All of this information could then be fed into a virtual environment using ReCVEB's technology, allowing users to move through the footage and view it from any angle.
Screen Shot 2012-12-05 at 11.55.53 PM.png
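The distance calculation itself is the standard stereo relation depth = focal length x baseline / disparity. Here is a back-of-the-envelope Python sketch, with assumed calibration numbers:

FOCAL_PX = 1200.0   # camera focal length in pixels (assumed calibration)
BASELINE_M = 0.5    # spacing between two cameras on the same wall (assumed)

def depth_from_disparity(disparity_px):
    """Distance in meters to a point whose image shifts disparity_px between the two views."""
    return FOCAL_PX * BASELINE_M / disparity_px

for d in [300.0, 150.0, 60.0]:
    print(f"disparity {d:.0f} px -> depth {depth_from_disparity(d):.2f} m")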
One problem with this idea is that it only works for a single object. If multiple objects are in the room, the straight-on views between the objects would be obscured. This is where Pradeep Sen's work would prove most useful: cameras on non-obscured walls would capture footage from indirect angles that could be computationally converted into straight-on views. This technique would also decrease the number of cameras necessary.
Screen Shot 2012-12-06 at 12.07.13 AM.png
The art installation would consist of a cylindrical room with projectors covering the entire wall. A controller would be located in the center of the room, and all visitors would wear 3D glasses. The projectors would display footage of a 2-minute scene of two people dancing together, the projections on the wall creating a 360-degree video display. The controller would set the angle and position of the viewpoint.
Screen Shot 2012-12-06 at 12.16.14 AM.png
Links:
http://www.recveb.ucsb.edu/research.html
http://www.ece.ucsb.edu/~psen/
http://www.ece.ucsb.edu/~psen/Papers/SI ... graphy.pdf

kendall_stewart
Posts: 6
Joined: Mon Oct 01, 2012 3:57 pm

Re: Wk10 - Final Project presentation Here

Post by kendall_stewart » Thu Dec 06, 2012 2:47 am

Kendall Stewart
12-4-12

Final Project

“Transforming Lens”

My research led me to scientific inventions that improve vision, and I discovered the revolutionary product of the PixelOptics company. They have developed the first and only electronic focusing eyewear. Their glasses, called "emPower!," contain undetectable liquid crystals that can be electrically charged to change the focal strength of the lenses.

Image

This incredible new technology provides much needed relief for those suffering from presbyopia (when one’s eyes cannot change focus fast enough from long distance to short distance, or vice versa). This is a common condition among people over forty, and is most often corrected with bifocals, or variable-focus lenses. However, these types of glasses are fixed, meaning one must look through a certain part of the lens for distance and another part for reading. PixelOptics’ emPower! glasses have the technology to change instantly from distance focus to reading focus with just the touch of a sensor in their frame.

Image

Users can even set the change to happen automatically by activating the in-frame accelerometer, which PixelOptics' reps say is the world's smallest such sensor. Much like an iPhone rotating its screen orientation, this sensor allows the focus to be switched to reading with just a forward tilt of the head.

Luzerne Optical explains more of the science at work in these glasses on their website: "Hidden in the frames of the otherwise normal-looking glasses, are a microchip, micro-accelerometer and miniature batteries. Each lens has a transparent LCD layer that can electronically change its molecular structure, changing the focus only as needed. If you tilt your head down say to read a book or peek at an object up close, the accelerometer automatically detects the motion, sending a signal to the LCD that alters how light is refracted, change the prescription quietly and in, well, a blink of the eye. You can also put the glasses in manual mode."
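That head-tilt behavior is easy to picture in code. The sketch below is my guess at the logic, with an invented pitch threshold and a little hysteresis so the lenses don't flicker near the switching point; none of these numbers come from PixelOptics.

READING_PITCH = 25.0   # head tilted this far down -> reading focus (assumed)
HYSTERESIS = 5.0       # margin that prevents flicker near the threshold (assumed)

def next_mode(mode, pitch):
    """Switch between 'distance' and 'reading' focus based on accelerometer pitch."""
    if mode == "distance" and pitch > READING_PITCH:
        return "reading"
    if mode == "reading" and pitch < READING_PITCH - HYSTERESIS:
        return "distance"
    return mode

mode = "distance"
for pitch in [0, 10, 27, 26, 22, 18, 5]:   # simulated pitch readings in degrees
    mode = next_mode(mode, pitch)
    print(f"pitch {pitch:3d} deg -> {mode} focus")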

For my imaginary installation I want to take this technology and recreate it at a very large scale. I would construct a giant dome (essentially a giant lens) made of the same LCD-crystal-containing glass that the emPower! glasses use. This dome would sit in the center of an art gallery, surrounded by other art pieces. It would have a cutout doorway so that museum visitors could enter the dome, and it would be large enough to hold several people at a time.
photo.JPG
(sketch of installation)

I would also expand the "touch" aspect of the invention and make the glass dome itself a giant sensor. When tapped, the surrounding glass would change focus and therefore alter the appearance of the surrounding gallery. The glass would be pre-programmed to shuffle through about ten different focal settings, each one changing how clearly the museum visitors inside the dome see the gallery around them (a sketch of this cycling logic follows the example images below).
Screen shot 2012-12-06 at 1.19.17 AM.png
Yale Art Gallery.jpg
Screen shot 2012-12-06 at 1.20.31 AM.png
Screen shot 2012-12-06 at 1.21.36 AM.png
(example focus changes)
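Here is a minimal sketch of that tap-to-cycle behavior, assuming ten preset focal strengths in diopters; the values are placeholders, not part of any real design.

FOCUS_PRESETS = [-2.5, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

class Dome:
    def __init__(self):
        self.index = 0   # start at the first preset

    def on_tap(self):
        """Each tap on the glass advances to the next preset, wrapping around."""
        self.index = (self.index + 1) % len(FOCUS_PRESETS)
        return FOCUS_PRESETS[self.index]

dome = Dome()
for tap in range(3):
    print(f"tap {tap + 1}: dome set to {dome.on_tap():+.1f} diopters")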

The most interesting part of this installation is that it will be a unique experience for every museum visitor. Those who wear glasses will be encouraged to take them off before entering, letting their eyes return to their natural focus. As a visitor taps the glass and changes the focus of the surrounding lens, the outside world will become more or less clear and recognizable. But because everyone's eyes are different, a given setting of the lens will work for some people but not necessarily for others. This concept will be most interesting when more than one person enters the dome at the same time: with each tap, the viewers can say which setting works best for them and spot the differences in their visual capabilities.

Because it deals with altering one’s perception, this installation could be viewed as a metaphor for how different people view art, museums, or the world in general. The meaning is up for interpretation and can be determined by the participants. It could also function scientifically as a giant eye test for viewers in the gallery. However it’s received, this installation will combine science and art in a fascinating way, and encourage interaction from the gallery visitors.


Sources:

http://www.pcmag.com/article2/0,2817,2375261,00.asp

http://www.pixeloptics.com/

http://www.luzerneoptical.com/top-whole ... html?sl=EN
Last edited by kendall_stewart on Fri Dec 14, 2012 10:47 am, edited 1 time in total.
