Proj 5 Final Research Project

annieyfong
Posts: 4
Joined: Wed Apr 12, 2017 5:07 pm

Re: Proj 5 Final Research Project

Post by annieyfong » Mon Jun 05, 2017 12:29 am

For my final project, I am focusing on immersive gallery exhibitions: how an artist creates an entirely new space that viewers can lose themselves in and interact with. The interactions change the display and create a one-of-a-kind experience.
I got this idea from hearing about an exhibit in Menlo Park, CA called Crystal Universe by teamLab. This exhibit is a series of LED lights forming a structure representing the universe.
Another exhibit I am interested in talking about is Bounding Main by Ecco Screen. This interactive installation is a wide water mirror: people are able to see their reflections wading through the waves toward the surface but never emerging past them.

New Angles by Super Nature is another LED installation that uses a sensor-rigged camera to take pictures of approaching viewers and project them onto a quilt of 420 LED-backed prisms. When people are not around to trigger the sensors, it cycles through a series of pre-programmed animations.
Pixel Avenue is an interactive public space by Fred Sapey-Triomphe in Saint-Denis. It is a large pixelated screen on the underside of a tunnel that helps people escape the feeling of enclosure. The display uses lights, colors, forms, and rhythms to reflect the activity in the tunnel throughout the day.
I want to explore SenseImage and how the use of this technology can further enhance immersive installations in the future.
Creature Digitali
Attachments
Art 185GL (1).pdf
(934.83 KiB) Downloaded 162 times
Last edited by annieyfong on Wed Jun 14, 2017 6:48 pm, edited 5 times in total.

mdanielyants
Posts: 2
Joined: Wed Apr 12, 2017 5:16 pm

Re: Proj 5 Final Research Project

Post by mdanielyants » Mon Jun 05, 2017 9:10 am

For my final research project, I have decided to focus on programming and art. More specifically, I will be researching the art of data visualization and the idea that numerical data can be a source for creating visually compelling images. I want to tie this in with synesthesia, because the concept of visualizing data has very interesting intersections with it. I will also briefly mention how art made from data visualization can be used in conjunction with interactive exhibits, and how data visualization fits into the art world in general.

I will be sourcing from several projects created here at the MAT Lab
http://vislab.mat.ucsb.edu/2016.html

As well as from:

http://www.allosphere.ucsb.edu/html/res ... tions.html
and talking about the AlloSphere in general

What is synesthesia?
https://en.wikipedia.org/wiki/Synesthesia
Phonaesthesia (maluma vs. takete)
https://en.wikipedia.org/wiki/Phonaesthetics


David McCandless's work: "data is a fertile, creative medium"
http://www.informationisbeautiful.net/v ... something/
http://www.informationisbeautiful.net/v ... hort-film/
http://www.informationisbeautiful.net/v ... d-are-you/

Interstitial Fragment Processor
https://vimeo.com/86071976

Floccular Portraits
http://www.flong.com/projects/floccugraph/

Fabian Oefner
http://fabianoefner.com/projects/field-of-sound/
http://fabianoefner.com/projects/dancing-colors/

We Feel Fine
http://number27.org/wefeelfine
Attachments
MDanielyantsDataVis.pdf
The videos on the PPT do not show up in pdf
(6.61 MiB) Downloaded 173 times
Last edited by mdanielyants on Mon Jun 19, 2017 7:16 am, edited 2 times in total.

taylormoon2014
Posts: 8
Joined: Wed Apr 12, 2017 5:47 pm

Re: Proj 5 Final Research Project

Post by taylormoon2014 » Mon Jun 05, 2017 11:42 am

Work in Progress Notes - Taylor Moon

Objective of my project: My project is exploring the intersections of cinema, virtual reality, architecture, and robotics. I specifically drew from lectures regarding aesthetic narrative and computational photography while also incorporating the mechanics and functionality of our Zumo robots.

Goldfeather, Jack. “Tracking in Virtual Reality.” Math Horizons, vol. 10, no. 3, 2003, pp. 27–31. JSTOR.
http://www.jstor.org/stable/pdf/2567840 ... b7b544e4bc
object detection and tracking
utilizes eye-tracking systems p 27
by having information about a chair stored in the computer, it is able to project it onto your goggle screens the way an actual object would be projected onto the retinas of your right and left eyes p 27
it factors in your eye coordinates, the world coordinates and the angle of projection from your goggles p 27
optical ceiling tracking p 27
“the whole field of computer graphics, of which virtual reality is a part, has been dubbed ‘mathematical archaeology’” p 27
“computation of position and orientation of an object begins by collecting data from the environment. Data collection might be done by cameras, transmitted signals, magnetic field distortions, etc. Typically, the data are used as coefficients in a system of equations with the position and orientation parameters as variables.” - 28
“problems with systems of equations used in tracking.... the system may have more than one solution. Determining which one to pick can be impossible in some cases. Such situations arise when the system is either under-constrained or ill-conditioned. Unfortunately, this is a common occurrence in tracking systems.... The system may have no solution. This often arises when noise in the data (i.e., measurement errors) produces inconsistent equations. Usually what is done in this case is to try to find a least-squares fit which minimizes error. However, there is no guarantee that the least-squares fit is a meaningful result or even that optimization methods will find it.” - p 29 // My camera would address which one to pick by presenting the viewer with the multiple options it found (i.e. the multiple photos it took while conducting object tracking. If multiple scenarios are found that match the computer objective, then it becomes up to the user’s discretion to pick which image)
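The least-squares fit mentioned in that quote is easy to illustrate numerically. A minimal sketch (my own toy example, not from Goldfeather), assuming numpy: four noisy measurements over-determine a 2-D position, and np.linalg.lstsq returns the position that minimizes the squared error of the inconsistent equations.

```python
import numpy as np

# Toy over-determined system: four noisy measurements constraining
# a 2-D position (x, y). A encodes the measurement geometry, b the readings.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
true_pos = np.array([2.0, 3.0])
b = A @ true_pos + np.random.normal(scale=0.05, size=4)  # simulated sensor noise

# Least-squares fit: the position minimizing the squared residual,
# since the noisy equations have no exact solution.
pos, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", pos)
```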

Self
my camera would use optical tracking in order to detect objects within the environment; it would recognize objects and locate their coordinates, converting the real-world coordinate system into a local object coordinate system - see Goldfeather p 28
images that I want it to capture to generate the narrative sequence/series:
a tight shot/close up
motion/action shot
landscape
color detection/color picker
image processing
color segmentation
color space
model selection
skin detection
the ways in which photoshop and illustrator use a color picker

Cândido, Jorge, and Maurício Marengoni. “Combining Information in a Bayesian Network for Face Detection.” Brazilian Journal of Probability and Statistics, vol. 23, no. 2, 2009, pp. 179–195. JSTOR, www.jstor.org/stable/43601135.
http://www.jstor.org/stable/pdf/4360113 ... 38f93b8f3e
“One of the main tasks in computer vision is object detection. Object detection is the first step in most vision tasks ” p 179
“Object detection is a challenging step because, in general, there are no constraints on how the object shows up in an image. There are differences related to illumination, type of sensor used, visualization, and color, among others. The detection of human faces is important due to its application in surveillance systems, human computer interaction (HCI), and biometrics systems” p 179
“The work developed for face detection can be divided into two main groups: knowledge-based methods and appearance-based methods” p 179
related projects // “The work presented here uses the knowledge-based method. The motivation for this work is related to human computer interaction and it is part of an ongoing optical mouse project for disabled computer users. The optical mouse concept designed here allows users with certain disabilities (e.g., Parkinson's disease) to operate or navigate in the Internet using the eyes to move the mouse and click on certain positions” - p 179 // the way in which it tracks a viewer’s eyes and the subtle changes in motion and direction it is able to pick up on
“The image stream will b[e captured in] real time by the webcam. Once the face is detected, it will be tracked [using] a geometrical face model, the eyes' expected position will be determin[ed] and, finally, the gaze estimation will be computed. Once the eyes are [located, a] simple calibration process should provide enough accuracy for the optical mouse” p 180 // how my project intends to have a motion capture image in the computer-generated narrative.
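The paper's own detector is a Bayesian network, which I am not reproducing here. As a rough stand-in sketch of the detect-then-track idea described above (assuming the opencv-python package and its bundled Haar cascade, not the authors' code):

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade (a stand-in, not the paper's Bayesian network).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()             # grab a single frame
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:     # each detection is a bounding box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.png", frame)
```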

Golda, Gregory J. "Integrated Arts 10 - Film Terminology and Other Resources." Film Terminology. Penn State University, n.d. Web. 24 May 2017.
http://www.psu.edu/dept/inart10_110/inart10/film.html
wide angle
zoom shot
tilt shot
soft focus
fish-eye
dissolve

Vezhnevets, Vladimir, Vassili Sazonov, and Alla Andreeva. "A survey on pixel-based skin color detection techniques." Proc. Graphicon. Vol. 3. 2003.
http://academic.aua.am/Skhachat/Public/ ... niques.pdf
“The final goal of skin color detection is to build a decision rule, that will discriminate between skin and non-skin pixels. This is usually accomplished by introducing a metric, which measures distance (in general sense) of the pixel color to skin tone. The type of this metric is defined by the skin color modeling method. One method to build a skin classifier is to define explicitly (through a number of rules) the boundaries [of the] skin cluster in some colorspace. For example [Peer et al. 2003]:
(R,G,B) is classified as skin if:
R > 95 and G > 40 and B > 20 and
max{R,G,B} - min{R,G,B} > 15 and
|R-G| > 15 and R > G and R > B
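The quoted Peer et al. rule translates almost directly into code. A minimal sketch (thresholds taken verbatim from the quote; the function name and test values are mine):

```python
def is_skin(r, g, b):
    """Peer et al. (2003) explicit RGB rule for classifying skin pixels."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

print(is_skin(220, 170, 140))  # a typical skin tone -> True
print(is_skin(40, 120, 200))   # sky blue -> False
```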

"Oscar Nominees." Oscar.go.com. ABC News, n.d. Web. 31 May 2017.
“Film Synopsis: Set inside their home, a beloved hatchback, PEARL follows a girl and her dad as they crisscross the country chasing their dreams. It’s a story about the gifts we hand down, their power to carry love… and finding grace in the unlikeliest of places.”
“This is the second Academy Award nomination for Patrick Osborne. He was previously nominated for: FEAST (2014). Winner, Short Film (Animated)”
It is a 360-degree animated film that uses virtual reality. It plays on subject position and perspective to make the viewer feel immersed and like an active participant in the movie.

D'Zurilla, Christie. "Watch the Oscar-nominated 360-degree Short Film 'Pearl,' Set Entirely inside a Car." Los Angeles Times. Los Angeles Times, n.d. Web. 31 May 2017.
“the viewer experiences sitting passenger side throughout the whole story.”
This 360 experience is part of a Google branch, Google Spotlight Story

Curtis, Cassidy, et al. "The making of pearl, a 360° google spotlight story." ACM SIGGRAPH 2016 Appy Hour. ACM, 2016.
“Pearl is the first Spotlight Story to include hard cuts from shot to shot, a common film technique once considered impossible for VR, made possible in this case due to the visual anchor of the car, which remains constant while the location, time of day, props and characters are always changing (38 shots, with 26 distinct environments).”
“Pearl combines spatial audio emitters and multiple Ambisonic sound fields which track viewer orientation and are mixed binaurally in real time, as well as a musical score that blends diegetic (on-screen) and non-diegetic sources from scene to scene.”
uses synchronization for the audio
“Non-photorealistic VR: Pearl’s distinctive visual style required a break from traditional CG workflows. Instead of illuminating models with lights, object colors were baked into compact swatch textures that were then customized to achieve the exact palette required for every scene. Rough edges were achieved by warping a color pass with a structured noise field (in a second buffer) that tracks objects in space and time, has correct stereo disparity, and can be animated at any frame rate. A third buffer let lighters create art-directable contours to delineate lit and shadowed regions.”
“Pearl is a single interactive narrative experience that we adapted to a diverse range of hardware modalities, including handheld devices (both mono- and stereoscopic), non-interactive video (both rectangular and spherical), and full 6-degree-of-freedom VR. Our Story Development Kit and platform-agnostic realtime engine enabled the filmmakers to focus on the story, and made adapting it to multiple mediums relatively simple.”



Google Spotlight Stories. Google, n.d. Web. 31 May 2017.
Google Spotlight Stories caters to multiple platforms for VR storytelling. It accommodates, “mobile 360, mobile VR and room-scale VR headsets” while “building the innovative tech that makes it possible.”
It is a full sensory experience.
https://www.youtube.com/watch?v=WqCH4DN ... e=youtu.be
Films featured on Google Spotlight Stories are Pearl by Patrick Osborne, Rain or Shine by Felix Massie, The Simpsons: Planet of the Couches, Buggie Night by Mark Oftedal, On Ice by Shannon Tindle, Help by Justin Lin, and Special Delivery by Tim Ruffle.

"Get Colors from Image (BETA)." HTML Color Codes. N.p., n.d. Web. 31 May 2017.
color picker
samples a 9 x 9 pixel area of the screen
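The site does not publish its implementation, so this is only a hypothetical sketch of the general idea: average a 9 x 9 pixel block around the chosen point and report it as a hex color (assumes Pillow and numpy; the file name and coordinates are placeholders).

```python
import numpy as np
from PIL import Image

def pick_color(path, x, y, size=9):
    """Average a size x size pixel block centered on (x, y) and return a hex color."""
    img = np.asarray(Image.open(path).convert("RGB"))      # shape: (height, width, 3)
    half = size // 2
    block = img[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    r, g, b = (int(round(v)) for v in block.reshape(-1, 3).mean(axis=0))
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

# Hypothetical usage: sample the color around pixel (120, 80) of an image.
# print(pick_color("photo.png", 120, 80))
```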

haytham
Posts: 3
Joined: Wed Apr 12, 2017 5:12 pm

Re: Proj 5 Final Research Project

Post by haytham » Mon Jun 05, 2017 5:13 pm

For the final project I will provide an introduction to the autonomous applications of computer vision systems in automobiles. Computer vision systems can make vehicles fully autonomous or offer the driver more control in various situations. I will examine how computer vision systems in vehicles are used for navigation to calculate the best route, and how they interact with sensor systems to increase safety for the driver, such as providing warnings of obstacles and avoiding pedestrians.
There are a lot of experiments happening right now with self-driving cars. However, the technology is still not ready for the commercial market. I will explore the potential problems facing this technology and what areas must improve before autonomous vehicles can become a viable solution.
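Pedestrian detection is one small building block of such systems. As an illustrative sketch only (nothing like a production automotive stack), OpenCV's pre-trained HOG person detector can flag pedestrians in a single camera frame; the image file name below is a placeholder.

```python
import cv2

# OpenCV's pre-trained HOG + linear-SVM people detector, used here as a
# simple stand-in for the far more robust perception stacks in real vehicles.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")               # hypothetical dashcam frame
if frame is not None:
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:                        # draw a warning box per pedestrian
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("pedestrian_warnings.jpg", frame)
```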

mfargas53
Posts: 6
Joined: Wed Apr 12, 2017 5:48 pm

Re: Proj 5 Final Research Project

Post by mfargas53 » Wed Jun 07, 2017 12:02 pm

For my final project, I am exploring Programming and Art through the use of Processing 3. I would like to explore the visualization of the Mandelbrot Set through Processing and figure out a way to zoom in and out of the graph.

The Mandelbrot Set is a fractal set of complex numbers that keeps revealing detail no matter how far you zoom in. I was able to find code on the Processing website that graphs the fractal set; however, running the code does not allow you to zoom in. I want to play with the code, give it a more aesthetically pleasing appearance, and find a way to interact with the graph. I want to be able to zoom in on the graph and, if possible, move around while the code is running.

Here is the Wolfram MathWorld article on the Mandelbrot Set:
http://mathworld.wolfram.com/MandelbrotSet.html

Here is essentially what the Mandelbrot Set looks like:
Image

And when you zoom into the arms, it looks like this:
Image

So I overall want to achieve bringing this type of interactive image:
Image

Here is a link to the code I found:
https://processing.org/examples/mandelbrot.html
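The linked Processing example renders the set once over a fixed window; zooming amounts to re-running the same escape-time loop over a smaller region of the complex plane. A minimal sketch of that idea in Python/numpy (not the Processing code itself; the zoom coordinates are arbitrary):

```python
import numpy as np

def mandelbrot(center, width, pixels=400, max_iter=200):
    """Escape-time iteration counts for a square window of the complex plane.
    Zooming in is just calling this again with a smaller `width`."""
    cx, cy = center
    xs = np.linspace(cx - width / 2, cx + width / 2, pixels)
    ys = np.linspace(cy - width / 2, cy + width / 2, pixels)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]   # grid of candidate points
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0                      # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] = i
    return counts

full_view = mandelbrot(center=(-0.5, 0.0), width=3.0)       # the whole set
zoomed = mandelbrot(center=(-0.745, 0.11), width=0.01)      # deep into one of the arms
```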

I would like to do more research on other projects that use code in this way. I have previously done a project where I made a 3-dimensional, interactive version of both the Lorenz attractor and the Rössler attractor. I know there are other forms of art made with Processing and programming that take complex and intricate things like graphs and turn them into art. I would like to explore this more with my research.

hernando
Posts: 2
Joined: Mon Jun 05, 2017 10:54 am

Re: Proj 5 Final Research Project

Post by hernando » Sun Jun 11, 2017 2:27 pm

Project Description:
My topic will be computer programming and the arts, and I will be discussing how programming is used and whether it can be defined as an art rather than a science.
To begin my project I will define what programming is and the various uses it has in our world today.
Next I will discuss Donald Knuth and his article "Computer Programming as an Art." In this article Knuth goes in depth comparing how programming is an art as well as a science, and describes how science can also be considered an art. When he speaks about computer programming as an art, he primarily thinks of it as an art form in an aesthetic sense and wants to teach how to write beautiful programs.
Following Knuth's article, I will show examples of artists and their uses of programming to create the beauty Knuth describes. The following artists will be presented:

Robert Henke:
Link- http://roberthenke.com/

Norimichi Hirakawa:
Link- http://counteraktiv.com/

These artists will be presented with examples of their work and how each uses programming differently in their art.
Finally, I will end with a conclusion of the presentation.
Attachments
FINAL ROBOTS.pdf
(32.9 MiB) Downloaded 178 times
Last edited by hernando on Wed Jun 14, 2017 12:13 am, edited 1 time in total.

taylormoon2014
Posts: 8
Joined: Wed Apr 12, 2017 5:47 pm

Re: Proj 5 Final Research Project

Post by taylormoon2014 » Tue Jun 13, 2017 9:45 am

Below are attached
(1) my research notes
(2) the PowerPoint
(3) the paper I wrote on the topic
(4) pictures that inspired the research / were aligned with the concept
Attachments
hqdefault.jpg
Screen Shot 2015-05-18 at 21.23.10 PM.png
pearl_shot.jpg
maxresdefault.jpg
186GL Presentation - Taylor Moon.key.zip
(10.5 MiB) Downloaded 161 times
185GL Presentation Research .pdf
(114.07 KiB) Downloaded 164 times
185GL research writing.pdf
(82.33 KiB) Downloaded 166 times

eszaboky
Posts: 2
Joined: Wed Apr 12, 2017 5:09 pm

Re: Proj 5 Final Research Project

Post by eszaboky » Tue Jun 13, 2017 9:04 pm

SwarmSound_FinalProject.pdf
MLSUC/SwarmSound
(1.39 MiB) Downloaded 161 times
MACHINE LEARNING SYMPHONIC UNIVERSE CREATOR:

Inspired by the cooperative swarm relationships of creatures like bees, ants, termites, crickets, and birds, this is a proposal for a sound installation consisting of a multi-dimensional feedback loop. It is also inspired by chaos theory, entropy, and the butterfly effect.

Many instruments in an enclosed room.
Each instrument has a sound sensor attached which feeds data to a custom machine-learning sound analysis program.
What each individual instrument plays is determined by the program's decision-making in response to its acquired input.
In order for this to work, an initial condition or initial instrumentation is required.
One instrument can either be set to play a phrase to begin the symphonic universe, or a person can utilize the interactive vocal component to initialize the symphonic universe with their own voice.
The sensors are always listening to the sounds of the environment, so each instrument will always be changing its articulations dependent upon the development of the composition as a whole.
Each composition will be unique in its resulting machine conception because each composition has a unique beginning.
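As a very rough toy sketch of the feedback idea only (plain Python, no machine learning or audio; every name and number here is hypothetical): each "instrument" picks its next pitch in response to what the ensemble just played, so a different seed phrase steers the whole run differently.

```python
import random

def run_swarm(seed_phrase, n_instruments=5, steps=16):
    """Toy feedback loop: each instrument's next pitch reacts to the
    ensemble's previous output (a stand-in for the ML analysis step)."""
    state = list(seed_phrase)                      # the initial condition
    history = [state]
    for _ in range(steps):
        ensemble_mean = sum(state) / len(state)    # what the 'sensors' hear
        state = [round(ensemble_mean + random.gauss(0, 2)) for _ in range(n_instruments)]
        history.append(state)
    return history

# A seed phrase of MIDI-like pitch numbers; a different seed diverges into a
# different "composition" -- a crude nod to sensitivity to initial conditions.
for row in run_swarm([60, 64, 67, 72, 76]):
    print(row)
```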

Research Exploration:

- David Rosenboom's Brainwave Feedback Loop Performance: this live composition explores a musical feedback loop between digital sound software responding to the EEG-monitored brain waves of two meditating individuals and Rosenboom playing the electric violin, which in turn affects those individuals' brain pattern responses. (I witnessed this performance at the Don Buchla Memorial Concert @ Gray Area, SF.)
--> Part One: https://vimeo.com/221524303
--> Part Two: https://vimeo.com/221524845
Image
- Ten Dimensions Explained Youtube Video: https://www.youtube.com/watch?v=JkxieS-6WuA
- Google's DeepMind - Utilizing Neural-Network Machine Learning Algorithms to Synthesize New Music:
--> http://www.everydaylistening.com/
- Visual Feedback Loop Installation Exploring Chaos Theory: http://colmeccles.com/infinite-feedback-loop.html
- Nathaniel Stern, an artist whose works focus on the exploration of feedback loops between observer and installation: http://nathanielstern.com/artist-statement/
- More Visual Feedback Loops: http://cornellsun.com/2016/09/11/art-yo ... l-gallery/
- Sound Forest Installation (Sonic Communications With Space - Interactive Sonic Feedback Loops): https://www.youtube.com/watch?v=fZRBgIUC4lg
- A swarm-like physical sound installation influenced by randomness rather than computed articulations: https://www.youtube.com/watch?v=WWgJejAiGFg
Last edited by eszaboky on Wed Jun 14, 2017 3:04 am, edited 3 times in total.

stephannilarsen
Posts: 3
Joined: Mon Apr 17, 2017 11:05 am

Re: Proj 5 Final Research Project

Post by stephannilarsen » Tue Jun 13, 2017 11:36 pm

Stephanni Larsen

Goal:
My idea is to create a project that analyzes the cues and the emphasis we use to determine someone's identity, in reference to Myers-Briggs personality typing. Using the technology we have learned this quarter, my project will attempt to draw a connection between the observer and the test taken to determine one's "personality type".

How:
A camera mounted on a wall records the participant.
The participant will sit at a table and watch 4 different videos (each video will be designed to display aspects of the type functions: Sensing, Intuition, Feeling, Thinking).
The viewer (educated about the types) will make judgments based on how the person responds to the videos.
Following the viewing, the participant will be asked to take a personality test.
The final result would be displayed side by side: the individual in the video and the test results.

The goal of this project is to explore the nature of humanity's desire to categorize. The coded element within the camera will begin recording at a certain part of the video. Once the computer has signaled that the video has begun, the capture of an image will be timed to the scene at which one would expect a response.
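A minimal sketch of that timed capture, assuming opencv-python; the trigger time and file names are placeholders rather than part of the actual setup:

```python
import time
import cv2

TRIGGER_SECONDS = 45.0            # hypothetical moment in the stimulus video

cap = cv2.VideoCapture(0)         # wall-mounted camera
video_started = time.time()       # the signal that the stimulus video has begun

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if time.time() - video_started >= TRIGGER_SECONDS:
        cv2.imwrite("participant_response.png", frame)   # capture the reaction
        break

cap.release()
```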

While in practice this idea is flawed and no scientific/accredited research would come from this small process, the idea is to mimic the scientific research process and present the information in the format of an artist. Hopefully, through these two forms of judgment, the results will challenge the viewer to think more deeply about their own judgments.

Myers-Briggs:
Carl G. Jung was the psychologist who coined the functional typing now so well known within Myers-Briggs type testing. Jung took interest in two widely popular psychologists, Sigmund Freud and Alfred Adler. Both of these researchers had come to understand identity in different ways; Jung saw them as two ways of thinking, Freud's theory directed more inward while Adler's focused more outward. The inward orientation he referred to as introversion, while the outward orientation described a more extraverted person.
Jung meant this type of understanding to be a way to communicate more accurately with the clinical patients he worked with: understanding their basic way of processing, and unearthing the deeper wells of intention and motive.
Myers and Briggs created the personality type test (known as typology) in order to help the everyday citizen, not seeking therapy but eager to know themselves better, have a frame of reference to begin from. Their claim was that the test would help aid in choosing a job better suited to who one is.

Pan/Tilt Face tracking Arduino: https://www.sparkfun.com/tutorials/304

Sources:
"faces for each type" http://www.socionics.org/type/default.aspx
Myers-Briggs: Hidden element http://www.personalitypathways.com/hidd ... ters2.html
"How Your Body Reveals Your Type" http://personalityjunkie.com/02/face-re ... rsonality/
Personality basics: https://www.capt.org/mbti-assessment/mbti-overview.htm
Spoto, Angelo. Jung's Typology in Perspective. 1995. ISBN 0-933029-93-4.
Myers, S. (2016). Myers-Briggs typology and Jungian individuation. Journal Of Analytical Psychology, 61(3), 289-308. doi:10.1111/1468-5922.12233
Wilde, D. J. (2011). Jung's personality theory quantified Springer-Verlag Publishing, New York, NY. doi:http://dx.doi.org/10.1007/978-0-85729-100-4
Attachments
Final Proposal_Larsen.pdf
(777.78 KiB) Downloaded 108 times
Last edited by stephannilarsen on Tue Jun 20, 2017 9:00 am, edited 1 time in total.

haytham
Posts: 3
Joined: Wed Apr 12, 2017 5:12 pm

Re: Proj 5 Final Research Project

Post by haytham » Wed Jun 14, 2017 8:53 am

For the final project I will provide an introduction to the autonomous applications of computer vision systems in automobiles. Computer vision systems can make vehicles fully autonomous or offer the driver more control in various situations. I will examine how computer vision systems in vehicles are used for navigation to calculate the best route, and how they interact with sensor systems to increase safety for the driver, such as providing warnings of obstacles and avoiding pedestrians.
There are a lot of experiments happening right now with self-driving cars. However, the technology is still not ready for the commercial market. I will explore the potential problems facing this technology and what areas must improve before autonomous vehicles can become a viable solution.
Attachments
Art 185 Final Research Paper.pdf
(557.32 KiB) Downloaded 105 times
Alshawaf_Final Presentation .pdf
(2.09 MiB) Downloaded 113 times
Last edited by haytham on Tue Jun 20, 2017 2:45 pm, edited 6 times in total.

Post Reply