Proj 5 Final Research Project

glegrady
Posts: 160
Joined: Wed Sep 22, 2010 12:26 pm

Proj 5 Final Research Project

Post by glegrady » Wed Apr 19, 2017 7:45 pm

Each student in the course will give a project presentation on June 14.

It will consist of a 10-15 slide PPT presentation on a topic of your choice based on material covered in the class. This presentation will represent what you have learned in this class, and what a possible next step could be. Be imaginative, creative, and do your research.

Your presentation can be a proposal for a museum exhibition, or for a future research project, or a research publication. It should be based on material we have covered in the course. These include:

Introduction to camera obscura and its history
Intro to Computational Photography
Image Processing Basics
Programming & Art
Intro to Computer Vision
Physical Computing (working with sensors)
Robots & Art
Robotic behavior
Aesthetic narrative
Installation Staging
Wifi communication

The presentation should involve online research, include links to projects and other documents, include photos and videos that fit your topic, and have a reference section.

Schedule

Week 8:
5.22 Research Project description and review
5.24 Research and preparation

Week 9:
5.29 Holiday
5.31 Rodger individual consulting

Week 10:
6.05 Dead Week
6.07 Dead Week

Exam Week:
6.14 Presentation

You can post work-in-progress notes here, and eventually add a PDF of your PPT as an attachment.
George Legrady
legrady@mat.ucsb.edu

chantelchan
Posts: 8
Joined: Wed Apr 12, 2017 5:15 pm

Computational Zoom

Post by chantelchan » Tue May 30, 2017 3:20 pm

My research project will focus on Computational Zoom: taking multiple photos of the same landscape with different compositions to create a new image that offers contrasting visual depths in the foreground and background. Developed by our own UCSB ECE department and Nvidia, these computing methodologies allow the user to manipulate photos to produce an image that is physically impossible to capture with a camera. I plan to explore the process of computational zoom, the challenges it faces, and future developments in the field. In addition, pictures and examples from the research paper will be used to aid my presentation.
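The core idea can be illustrated with a toy sketch (my own simplification, not the UCSB/Nvidia pipeline): given two aligned shots of the same scene with different compositions and a mask marking the subject, compose a new image whose foreground comes from one shot and whose background comes from the other.

```python
import numpy as np

# Toy illustration of composing across shots (hypothetical values;
# real computational zoom estimates geometry, not a hand-made mask).
H, W = 4, 4
wide = np.full((H, W), 10.0)        # stand-in for the wide-angle shot
tele = np.full((H, W), 200.0)       # stand-in for the telephoto shot
foreground = np.zeros((H, W), bool)
foreground[1:3, 1:3] = True         # pretend these pixels are the subject

# keep the subject from the wide shot, the background from the tele shot
composite = np.where(foreground, wide, tele)
print(composite[1, 1], composite[0, 0])  # 10.0 200.0
```

The real method solves the hard parts this sketch skips: aligning the shots and recovering per-pixel depth so the mask falls out automatically.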

--------------------------------------------------------------------------
Update (6/13)

My presentation will focus on the following topics:
- vocabulary
- challenges
- process
- examples
- what's next?

Files used in presentation:
Computational Zoom.pdf
Slideshow Presentation
(4.03 MiB) Downloaded 95 times
Last edited by chantelchan on Tue Jun 13, 2017 9:26 am, edited 2 times in total.

gbaier
Posts: 2
Joined: Wed Apr 12, 2017 5:09 pm

Re: Proj 5 Final Research Project

Post by gbaier » Sun Jun 04, 2017 3:07 pm

My research project will explain Dr. Yasamin Mostofi's paper X-Ray Vision with Only WiFi Power.

I will begin with a brief overview of Wifi and how it works.
Next I will try to reduce the math that Dr. Mostofi and her team used down to an understandable level for a layperson.
Then I will break down the components of the robots involved in the project.
Next I will go through the experimental setup.
Finally, I will summarize the results and describe future implications of the technology.
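The physical intuition behind the imaging can be sketched in a few lines (my own toy model, not Dr. Mostofi's actual algorithm): each region of space attenuates the WiFi signal by some amount, and a transmitter-receiver link loses roughly the sum of the attenuations along its straight path, so many crossing links reveal where solid objects are.

```python
import numpy as np

# Toy model of through-wall sensing (hypothetical numbers): each grid
# cell has an attenuation in dB, and a WiFi link loses the sum of the
# attenuations along its path.
grid = np.zeros((5, 5))
grid[2, 1:4] = 10.0          # a "wall" occupying three cells of row 2

def link_loss_along_row(row):
    # a horizontal transmitter -> receiver path across one grid row
    return grid[row].sum()

tx_power_dbm = 20.0
for row in range(5):
    rx = tx_power_dbm - link_loss_along_row(row)
    # rows that cross the wall arrive 30 dB weaker than the others
print(link_loss_along_row(2))  # 30.0
```

The paper's math (e.g. the Rytov approximation linked below) handles the far harder inverse problem: recovering the grid from many such measurements.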

My main sources will be Dr. Mostofi's paper, video, and project page, as well as Wikipedia and other websites for general explanations of Wifi and some of the math. See the links below.
Paper - http://www.ece.ucsb.edu/~ymostofi/paper ... ostofi.pdf
Video - https://www.youtube.com/watch?v=iF1fY3bPAt0
Project Page - http://www.ece.ucsb.edu/~ymostofi/SeeTh ... aging.html
Wifi - https://en.wikipedia.org/wiki/Wi-Fi
Math - http://glaser.berkeley.edu/sherman/cont ... _Rytov.pdf
Attachments
X-Ray Vision with WiFi.pdf
Presentation
(881.88 KiB) Downloaded 93 times
Last edited by gbaier on Tue Jun 13, 2017 10:48 pm, edited 1 time in total.

leaharmer
Posts: 2
Joined: Wed Apr 12, 2017 5:08 pm

Re: Proj 5 Final Research Project

Post by leaharmer » Sun Jun 04, 2017 4:29 pm

My final research project will focus on Stanford University's computational photography camera, the Frankencamera, and how cameras like it will affect the commercial camera industry.

Despite the fact that there has been much interest in computational photography within the research and photography communities, progress has been hampered by the lack of a portable, programmable camera with sufficient image quality and computing power. To address this problem, Stanford University has designed and implemented an open architecture and API for such cameras: the Frankencamera. It consists of a base hardware specification, a software stack based on Linux, and an API for C++. The architecture permits control and synchronization of the sensor and image processing pipeline at the microsecond time scale, as well as the ability to incorporate and synchronize external hardware like lenses and flashes. The Frankencamera has six computational photography applications: HDR viewfinding and capture, low-light viewfinding and capture, automated acquisition of extended dynamic range panoramas, foveal imaging, IMU-based hand shake detection, and rephotography.
Their goal was to standardize the architecture and distribute Frankencameras to researchers and students, ultimately creating an open-source camera community, leading eventually to commercial cameras that accept plugins and apps.
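One of the listed applications, HDR capture, can be sketched in simplified form (my own illustration of the general technique, not the Frankencamera's C++ API): scale each bracketed frame back to a common radiance scale by its exposure time, then average with weights that trust mid-tones most.

```python
import numpy as np

# Simplified HDR merge (hypothetical 2x2 "frames"; real pipelines also
# handle alignment, noise, and tone mapping).
exposures = [0.5, 1.0, 2.0]                 # relative exposure times
frames = [np.clip(100.0 * t, 0, 255) * np.ones((2, 2)) for t in exposures]

def merge_hdr(frames, exposures):
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(img / 255.0 - 0.5)  # trust mid-tones most
        num += w * (img / t)                 # undo exposure scaling
        den += w
    return num / den

radiance = merge_hdr(frames, exposures)
print(radiance[0, 0])  # ~100: all frames agree on the scene radiance
```

The point of the Frankencamera architecture is that this kind of per-frame control (exposure, synchronization) is programmable on the camera itself rather than done offline.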

EDIT: My project is now focused on computational photography prototypes, cameras, and future developments.

The Future of Photography: How Computational Photography Will Redefine the Camera Industry

References:
The Frankencamera: https://graphics.stanford.edu/papers/fc ... igtalk.pdf
Experimental Platforms for Computational Photography: https://graphics.stanford.edu/papers/ca ... -cga10.pdf
TIME article: http://time.com/4003527/future-of-photography/
Integrity of the Image: https://www.worldpressphoto.org/sites/d ... report.pdf
Computational Photography MIT: http://web.media.mit.edu/~raskar/photo/
Attachments
leah_185final.pdf
Presentation Slides
(1.99 MiB) Downloaded 97 times
Last edited by leaharmer on Wed Jun 14, 2017 9:36 am, edited 1 time in total.

zoe.m.rathbun
Posts: 5
Joined: Wed Apr 12, 2017 5:14 pm

Re: Proj 5 Final Research Project

Post by zoe.m.rathbun » Sun Jun 04, 2017 6:45 pm

Robotic Spirit Animals



Robotics is a quickly growing field. While some see potential in creating robots to replace much of the workforce, I also see potential in their therapeutic possibilities. Robotic speech assistants like Siri can be helpful for children with autism, and further research is being done on app-assisted learning and therapy for autism spectrum disorders. Similarly, PARO is an interactive robotic seal that needs to be cared for and is used with lonely (usually elderly) individuals in situations where real animals aren't allowed. Many other current projects are researching the therapeutic possibilities of robotics and AI.

But what if robots could specialize not only in helping the autistic, the elderly, and those with social anxiety, but could also help "normal" people who need a bit of encouragement when they're feeling down or are hit with something difficult in life? How many downward spirals could have been prevented if a caring person had been there at a critical time? There are times when everyone feels alone and unwanted, leading people not to see the truth of their intrinsic strength and worth. Giving people a mechanism to fight those thoughts could be of great benefit to society and human potential. Instead of going to a therapist, there could be a companion who is always with you and understands, to some extent, the daily trials of your life.

Although I thought this might be a far-out idea, the concept is actually already gaining popularity in Japan, where there is a large population of single men who live relatively lonely lives. One concern is that close robotic relationships could diminish real human relationships; however, perhaps these robots could act as platforms of connection, using data to connect similar people to one another. In this way, social robotic companions would become a sort of smartphone, helping you navigate the real world as well as social media and the web.
The idea of a friendly therapeutic robot doesn’t sound too strange but I want to take this concept a little further.
The future is shaped by many things, things that are beyond our control. It is impossible to know whether our future will be a shiny, intelligent internet utopia or humans fighting extinction. If the age of technology does continue to prosper, we must hope that we, the people of Earth, have some agency over the next steps for robotics, as they may change the structure of society as we know it. Robotics and mechanization are due to take many jobs in the future, raising questions about how the economy will continue to function. These questions must be addressed before robotics moves further forward, as humanity must learn to think about the consequences of its actions before it acts. I assert that robotic technology should be used for the greater good: to fill a hole in society that contributes to the epidemic of mental illness, depression, drug abuse, and mistreatment of the elderly. If we want to build a better tomorrow, we should be thinking not only about what is possible or exciting but also about what this world needs, about what aspects of it could be made better. This is one place in the world that I think needs to be made a little better.
When I was a kid, I really liked reading fantasy books. At times I found myself wishing that I lived in a fantasy world and had magical powers like some of the characters in the novels I was reading. None incited this feeling more than The Golden Compass, a novel that takes place in a parallel universe in which human souls take on the form of animals [daemons]. When I was thinking of an idea for a project, translating this fantasy into a reality was the first thing that came to mind. I have always felt a deep, intrinsic connection with animals that I have trouble feeling with humans without a large degree of effort. I think an animal companion that is personal to you and genuinely wishes you the best in life could help people in this society greatly.


Project Requires:

1. Recognition of Individual
Although facial recognition software has been widely available for some time, the recognition of specific faces has presented an interesting problem for scientists. Unsurprisingly, researchers at Facebook are leading this area of research with the recent unveiling of "DeepFace," an algorithm that allows a computer to assess whether two pictures show the same person with 97% accuracy [as accurate as the human brain]:
source: https://research.fb.com/publications/de ... ification/

2. Recognition of Emotions
Once an individual face is recognized, it is essential that the emotions on that person's face be recognized so that proper behaviors can be employed. If enough time were spent with an individual, perhaps their mannerisms could be learned as well.
a) Kairos: https://www.kairos.com/
b) clmtrackr
Source: https://www.theatlantic.com/technology/ ... eads-your-emotions-on-your-face/282993/
3. Machine Learning to ‘get to know a specific person’ & Developing social relationship
It would be necessary for the robot to have an interesting enough personality that the person would not grow annoyed with it. It would have to be able to learn social cues, jokes, and mannerisms specific to an individual person.
a) Jibo, Source: https://www.jibo.com/
Able to recognize faces and speech patterns. Able to learn jokes and individual mannerisms and continue to add to social interaction. Meant for a family environment but also lonely single people. Meant to be a social appliance for the home. [Almost a Wall-e-like robot] Face = large screen that is capable of displaying different images etc.
I like the idea of some sort of display screen (as exhibited in Jibo) or holographic medium because this allows for more personalized robotic features as well as more dynamic expressions.
4. Speech Recognition/Comprehension and learning
I think speech comprehension and production would be important because they would make the person feel as though they are interacting with an intelligent being. If they are able to talk to this robot and it understands them and gives a helpful or unexpected response, or if it remembers something about what they said, they may feel that they are building a relationship of some kind. They would not feel quite as ashamed in making friends with a robot if it is at least a cool robot who is better at conversation than other humans.
For autistic and socially anxious people, having the opportunity to interact in person with a somewhat socially functional being would perhaps make them better at interacting with other real humans in real life, benefiting their social interactions.
5. Further Object Recognition for intelligence and interaction:
Source https://research.fb.com/publications/a- ... detection/
6. Robotic Following:
Source = https://www.wired.com/2014/10/robotic-followers/
Very informative article about all of the different types of Robotic Following. This would be important because familiars are supposed to stay by your side.
7. Durability/Adaptability and Animal-Like Behavior.
source https://www.nature.com/nature/journal/v ... 14422.html
8. Therapeutic Applications
a)PARO
Pet therapy robot for the elderly/people in hospitals where animal therapy would be helpful but animals are not allowed.

I like PARO because it is cute and endearing. My idea is similar to PARO but a bit more intelligent and less passive, with the ability to help the individual (it could come in different sizes) rather than just be helped. That said, I do think the idea of reciprocal affection/assistance could strengthen the bond between human and robot and should be explored.
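The face-matching step in requirement 1 boils down to comparing two face representations. A toy sketch (my own, with made-up numbers; a system like DeepFace uses a deep network to produce the representations, which is not shown here): assume some model has already turned each photo into an embedding vector, then compare embeddings with cosine similarity.

```python
import numpy as np

# Toy "are these the same person?" test on hypothetical embeddings.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

person_a_photo1 = np.array([0.9, 0.1, 0.3])    # made-up embedding vectors
person_a_photo2 = np.array([0.88, 0.12, 0.31]) # same person, second photo
person_b_photo  = np.array([0.1, 0.9, 0.2])    # a different person

same = cosine_similarity(person_a_photo1, person_a_photo2)
diff = cosine_similarity(person_a_photo1, person_b_photo)
print(same > 0.95, diff < 0.5)  # True True
```

A real system would pick a similarity threshold from validation data rather than eyeballing it as I do here.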

APPLICATION OF IDEAS:
Phase I:
Art Exhibition: Space in which robots move and target a person. / Quantify ‘type’ of individual (expressions, colors in clothing etc) and change holographic appearance/behavior to best connect with this person

Phase II:
Personal Robotic Familiars for Therapy/ just for funsies.
***Growing popularity in Japan***---> probably first release this in Japan/have the art exhibit in Japan because this is such a popular topic there.
Attachments
Robotic Spirit Animals.pdf
(745.5 KiB) Downloaded 97 times
Last edited by zoe.m.rathbun on Wed Jun 14, 2017 1:42 am, edited 3 times in total.

fangfang
Posts: 4
Joined: Wed Apr 12, 2017 5:14 pm

Re: Proj 5 Final Research Project

Post by fangfang » Sun Jun 04, 2017 6:55 pm

For my final project, I will focus on virtual reality. It will be a proposal for a virtual reality show.
I got the idea from teamLab, which "is an Ultra-technologists group made up of specialists in the information society." Recently they held an exhibition called Living Digital Forest and Future Park in Beijing. Audiences can see the four seasons, and how flowers develop across them, in this exhibition. Participants can even interact with those flowers.
For my proposal, the VR exhibition will also create a world for participants, and they will be able to interact with each other through VR glasses. From my research, I know there are many VR shows, and many are made for fun or for commercial reasons. But I want my proposal to be more about nature and the environment; I want participants to think of it as a kind of art, not as a 3D movie or a 3D game.

History of VR & Technology of VR:
In the nineteenth century, 360-degree murals, also called panorama paintings, were the earliest attempt at virtual reality. Then in 1838, Charles Wheatstone's research showed that the "brain processes the different two-dimensional images from each eye into a single object of three dimensions," so people can get a sense of depth and immersion by viewing two side-by-side stereoscopic images through a stereoscope. The design of Google Cardboard and other low-budget VR head-mounted displays for mobile phones is still based on this principle today.
In the 1930s, Stanley G. Weinbaum, a science fiction writer, described the idea of a pair of goggles that let wearers experience a fictional world through holography, smell, taste, and touch.
In 1960, Morton Heilig invented the Telesphere Mask, the first example of a head-mounted display. This headset provided stereoscopic 3D and wide vision with stereo sound.
In 1965, Ivan Sutherland proposed the concept of the "Ultimate Display": simulating reality to the point that the audience cannot tell which is virtual reality and which is actual reality.
But the term "virtual reality" was not coined until 1987, by Jaron Lanier, the founder of the Visual Programming Lab (VPL). Virtual reality then became the name of this research area.
In 1993, VR glasses saw great development. Sega announced new VR glasses that had head tracking, stereo sound, and LCD screens.

Discussion:
Then in the 21st century, more and more companies focused on virtual reality and made huge advances, which helped bring prices down. Many companies created virtual reality products, such as Google Cardboard, and many game companies focus on this area as well. Virtual reality is very useful and valuable in the entertainment industry, but it can be very meaningful in other areas too. For example, in the art field, it can help audiences see more, and maybe understand more of artists' feelings and thoughts, if they can be "inside" an artist's mind. It can also be very useful in medical training: medical students can practice operations in virtual reality at any time without wasting valuable medical resources.
So I think virtual reality will be one of the most important areas in the future because it can bring immense value.

Links:
Living Digital Forest and Future Park
https://www.teamlab.art/cn/w/flowerforest/
TeamLab
https://vimeo.com/teamlabnet
VR show
https://www.virtualrealityshow.co.uk/
Attachments
art185 .pdf
(913.74 KiB) Downloaded 94 times
Living Digital Forest and Future Park.jpeg
Last edited by fangfang on Mon Jun 19, 2017 6:41 pm, edited 3 times in total.

christinepang
Posts: 4
Joined: Wed Apr 12, 2017 5:10 pm

Re: Proj 5 Final Research Project

Post by christinepang » Sun Jun 04, 2017 7:58 pm

In my project, I will discuss how to alter pixels in an image, where some changes are global while others are more local. To start, I will discuss what a histogram is and how it can be used. A histogram is a graph of the distribution of all the tones in your image. With a histogram, you can analyze your images to determine which type of exposure corrections they might need. Black is at the far left and white is at the far right. A histogram with a large number of tones overall means the photograph has good dynamic range, which means a lot of editing potential. I'll also explain how color is stored in an image when we look at the different channels of the histogram.
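A histogram is just a count of how many pixels fall into each tone bin, so it can be computed in a couple of lines. A minimal sketch with a made-up 8-bit grayscale "image" (my own example, not from the book):

```python
import numpy as np

# Tiny 2x3 grayscale image with tones from black (0) to white (255)
image = np.array([[0, 0, 64], [128, 255, 255]], dtype=np.uint8)

# One bin per possible 8-bit tone, exactly what the histogram panel shows
counts, _ = np.histogram(image, bins=256, range=(0, 256))
print(counts[0], counts[255])  # 2 2
```

Spikes piled up against either end of such a histogram are the sign of clipped shadows or highlights, which is how it guides exposure correction.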
I will also be talking about curves and how changing different points on a curve can transform an image. It is a different interface for the same adjustments that you would make with level controls, but with the histogram in the background, you can decide how you want to shape your curve. An advantage of curves is that they let you adjust as many points as you want, whereas with sliders you're limited to just the highlights, shadows, blacks, and whites. When you first open the curves graph, it shows a 45-degree line from black to white, indicating that the input tones are identical to the output tones. If you change the shape of the line, you change the correspondence of input tones to output tones. With the tone curve, you can clearly see how all of your tones are being stretched and squeezed.
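Under the hood, a tone curve is a lookup table from input tone to output tone: the 45-degree line is the identity table, and bending the curve remaps tones. A minimal sketch (my own example) with a gamma-style curve that lifts the midtones:

```python
import numpy as np

x = np.arange(256, dtype=np.float64)
brighten = 255.0 * (x / 255.0) ** 0.5   # bowed-up curve: lifts midtones

image = np.array([64, 128, 255], dtype=np.uint8)
out = brighten[image].astype(np.uint8)   # apply the curve as a lookup table
print(out)  # white stays white, but the midtones are lifted
```

This is exactly why curve endpoints pin black and white in place while points dragged in the middle stretch and squeeze the tones between them.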
I will then also be talking briefly about noise in a photo. It is like grain in film images. It is not always a bad thing, since it can add texture, depth, and mood to an image. It can create an atmosphere. However, some noise looks like grain while others produce blotchy patterns of red or blue scattered around. Noise typically occurs in darker parts of an image.
Next, I will explain how sharpening works; when taken too far, it can have a destructive effect. Your computer looks for sudden changes in contrast to detect edges: if there is a sudden change in value between two adjacent pixels, that is probably an edge. The darker pixel is darkened and the pixels around it ramped down, whereas the lighter pixel is lightened and the pixels around it ramped up. One side gets darker while the other gets lighter, which creates a bit of a halo. Too much of a change will make your edges look unnaturally contrasted.
I will also briefly talk about how in blurry images, pixels have tones similar to them and lack sharp contrast that we see as edges.
All of this leads up to how we can use custom filters in Photoshop to create blur, sharpen an image, or detect edges using a process called convolution with image kernels. The kernel is overlaid on each pixel's neighborhood, each pixel is multiplied by the kernel value on top of it, and the products are summed. For an averaging (box blur) kernel, that sum is then divided by the size of the kernel (3x3 = 9, for example), and the result becomes the new value of the center pixel. In other words, this allows one pixel to be affected by its surroundings.
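The convolution step above can be sketched directly (my own minimal version, ignoring borders for brevity): overlay the kernel on each neighborhood, multiply element-wise, and sum.

```python
import numpy as np

def convolve3x3(image, kernel):
    # slide the 3x3 kernel over every full neighborhood in the image
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))          # borders skipped for brevity
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y+3, x:x+3] * kernel)
    return out

# Box blur: all ones divided by 9, i.e. the average of the neighborhood
box_blur = np.ones((3, 3)) / 9.0
image = np.zeros((3, 3)); image[1, 1] = 9.0  # one bright pixel
print(convolve3x3(image, box_blur))  # [[1.]] -- the brightness spread out
```

Swapping in a kernel like [[0,-1,0],[-1,5,-1],[0,-1,0]] turns the same loop into a sharpen filter, which is why the halo effect described above appears: neighboring pixels are pushed in opposite directions across an edge.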

Link to my PDF because my file was too big:
https://drive.google.com/open?id=0B_7IU ... 1lYSm1kcW8


https://www.youtube.com/watch?v=C_zFhWd ... BufTYXNNeF
http://setosa.io/ev/image-kernels/
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Complete Digital Photography, Third Edition by Ben Long
Last edited by christinepang on Tue Jun 13, 2017 11:16 pm, edited 1 time in total.

cguijarro
Posts: 3
Joined: Mon Apr 17, 2017 10:56 am

Re: Proj 5 Final Research Project

Post by cguijarro » Sun Jun 04, 2017 10:34 pm

My final project will focus on Kurt Kaminski's Melange project, which was shown at the MAT End of the Year Show a few weeks back. The 2D simulation uses a depth camera to know when a hand is in motion: it senses the hand move and runs the visualization on the screen. I have already contacted Kurt for some info and will be doing additional research on what he gave me. I'll be looking into things like velocity vectors and GLSL shaders, as well as the code that makes his program run. I hope to be able to explain how the whole Melange project came to be and how it works.


There are basically two parts to this project: the depth sensor and the 2D fluid visualization. The depth camera is basically like a Kinect; it uses depth sensing to analyze the depth of the environment's space. It takes images of one's hand and gathers data about its velocity. The sensor then sends the gathered information to the algorithm that creates the fluid simulation. The images taken by the camera are turned into simple figures, and their edges are replaced by velocity vectors representing the velocity of the hand. Eventually, the vectors are mapped to every other aspect of the visual: its color, velocity, and flow lines are all mapped to or influenced by the hand's velocity.
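The "velocity from images" step can be illustrated with a toy stand-in (my own sketch; the actual project does this with a depth camera and GLSL shaders in TouchDesigner): difference two successive frames, and treat the location where new pixels appear as where the hand moved to.

```python
import numpy as np

# Two tiny 4x4 "depth frames" with a hypothetical one-pixel hand
frame_prev = np.zeros((4, 4))
frame_next = np.zeros((4, 4))
frame_prev[1, 1] = 1.0   # hand at (1, 1) ...
frame_next[1, 2] = 1.0   # ... moved to (1, 2) one frame later

# Frame difference: positive where the hand arrived, negative where it left
motion = frame_next - frame_prev
moved_to = np.unravel_index(np.argmax(motion), motion.shape)
print(moved_to == (1, 2))  # True
```

A real implementation estimates a full velocity vector per pixel (optical flow), which is what gets injected into the fluid solver.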
The code is shown here: https://github.com/kamindustries/Melange
I personally can't fully understand the code yet, because I only know the basics of Python, and running it requires TouchDesigner, which is another tool I would have to learn.

There is a lot more on this project here: https://www.twitch.tv/bobarbubbub
Kurt gives a talk on Melange and is able to better explain everything about it.

I've also attached my final Power Point to this post.
Attachments
185GL Carlos.pdf
(5.68 MiB) Downloaded 84 times
Last edited by cguijarro on Wed Jun 14, 2017 11:22 am, edited 2 times in total.

anniewong
Posts: 4
Joined: Wed Apr 12, 2017 5:11 pm

Re: Proj 5 Final Research Project

Post by anniewong » Sun Jun 04, 2017 11:50 pm

Final Project.pdf
Pictures powerpoint
(2.08 MiB) Downloaded 114 times
For my final project, I will be exploring the future and potential of image processing and how it translates into artistic aesthetic narrative using computational photography technology. Taking into account works from artists like Idris Khan, Jim Campbell, and Jason Salavon, their work showcases a future of digital art that narrates human culture and the digital age we currently, and shall continue to, inhabit. Image processing has not only become integral to public art, but has become the favored medium of many artists because of its harmonious alliance with photography and computer generation, which makes art that much more accessible to the general public.

The goal: this project dissects how image processed art and photography plays a significant role in interpreting the culture of humans, particularly in the age of the internet.

Firstly, I introduce Idris Khan, who, although he lacks his own website, is incredibly well known for his masterful layering of photographic images. This method of image processing conveys a sense of movement that takes basic camera photography to the next level.

Next, we shall look at Jason Salavon's "All the Ways" pieces and see how his modification of computer-animated images translates into whole new storytelling perspectives.

Finally, we observe the public art installations of Jim Campbell and how his use of computer images taken from the outside world made it onto the 1,070-foot Salesforce tower in San Francisco. Along with his other public artworks, his use of image processing on skyscrapers, using lights and camera graphics, challenges the boundaries of image processing in shaping the aesthetic narrative for public viewing.
Attachments
Final Project (1).pdf
No pictures Powerpoint
(2.08 MiB) Downloaded 70 times
Last edited by anniewong on Sun Jun 18, 2017 10:22 pm, edited 2 times in total.

xinghan_liu
Posts: 4
Joined: Wed Apr 12, 2017 5:48 pm

Re: Proj 5 Final Research Project

Post by xinghan_liu » Mon Jun 05, 2017 12:21 am

For my final project, I want to explore more of how a camera can analyze shape. I'm inspired by the Zumo's line-follower program, and I wonder whether a robot can react differently to different simple shapes. Furthermore, I want to know whether it is possible for a robot to react to people's hand-drawn shapes, and I want the robot's reaction to involve music.
So my goal is: let a robot make a different sound when it sees a different shape. If each specific shape represents a specific note, will it be able to play a song by seeing a group of ordered shapes?


First, I'll introduce the shape detector in OpenCV: basically how it works, and how it applies to my idea of detecting shapes from the real world.
Shape detector:
OpenCV: http://www.pyimagesearch.com/2016/02/08 ... detection/
http://opencv-srf.blogspot.com/2011/09/ ... tours.html
GitHub: https://github.com/MathieuLoutre/shape-detector
video example: https://www.youtube.com/watch?v=-CyCiSAdgfY&t=16s
https://www.youtube.com/watch?v=ES2KBnE-Be8&t=209s
Shape detection in real world:
https://dsp.stackexchange.com/questions ... ife-images
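The shape-to-note mapping itself can be sketched without any vision code (my own toy version): assume a detector like the pyimagesearch one linked above has already reported how many vertices each contour has, and map those counts to notes in order.

```python
# Hypothetical mapping from detected vertex count to a note name;
# the actual note choices are mine, not from any of the sources above.
NOTE_FOR_SHAPE = {3: "C4", 4: "D4", 5: "E4", 6: "G4"}

def shapes_to_melody(vertex_counts):
    # an ordered group of shapes becomes an ordered group of notes;
    # unknown shapes become rests rather than crashing the robot
    return [NOTE_FOR_SHAPE.get(v, "rest") for v in vertex_counts]

# triangle, square, pentagon drawn left to right -> a 3-note phrase
print(shapes_to_melody([3, 4, 5]))  # ['C4', 'D4', 'E4']
```

In the full project, each note name would be swapped for a Pygame sound object and played in sequence.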


Second, I want to talk about how to connect the shape detector to the robot's reaction: making sound by using Pygame to link each shape to a sound.

Sound:
Air Drums
Example: https://www.youtube.com/watch?v=MAnWwxTjL3k
OpenCV: https://github.com/amolaks76/OpenCV-Air-Drum-Project

Finally, I will talk about how this idea could work for people such as musicians, or anyone interested in it.
Attachments
185GL final.pdf
(1.44 MiB) Downloaded 58 times
Last edited by xinghan_liu on Tue Jun 13, 2017 8:49 pm, edited 8 times in total.

Post Reply