Wk7 - Computational Camera

pumhiran
Posts: 9
Joined: Mon Oct 01, 2012 4:07 pm

Re: Wk7 - Computational Camera

Post by pumhiran » Mon Nov 12, 2012 11:00 pm

Hi there,

This week on Computational Camera, I have decided to do a small review of the Lytro digital camera that we went over in class on Thursday.

The Lytro camera is a small, rectangular, monocular-shaped device. Lytro was founded in 2006 by Ren Ng, a light-field photography researcher from Stanford University. The camera's main function is to take pictures that can be refocused at different depths of field, recording the incoming light with a microlens array in front of the sensor. When viewing the image, we can click on different parts of the image and the focus will be placed on that subject.
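For anyone curious what happens computationally, the refocusing trick is usually described as "shift and add" over the many sub-aperture views the microlens array records. Below is a minimal sketch of that idea — not Lytro's actual software, and the 4D `lf` array is a hypothetical stand-in for real light-field data.

```python
# A minimal sketch of "shift and add" synthetic refocusing -- the idea
# behind click-to-refocus -- not Lytro's actual software. `lf` is a
# hypothetical 4D light field indexed [u, v, y, x]: one grayscale image
# per sub-aperture of the lens, as recorded by the microlens array.
import numpy as np
from scipy.ndimage import shift

def refocus(lf, alpha):
    """Refocus by translating each sub-aperture view toward the chosen
    focal plane (in proportion to its aperture offset) and averaging:
    objects at the matching depth line up and come out sharp."""
    U, V, H, W = lf.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += shift(lf[u, v].astype(float),
                         (alpha * (u - u0), alpha * (v - v0)), order=1)
    return out / (U * V)

# "Clicking on a subject" amounts to choosing the alpha that makes that
# region sharpest, e.g. by scanning:
#   stack = [refocus(lf, a) for a in np.linspace(-2.0, 2.0, 9)]
```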

“Pictures you can interact with” is the company's marketing pitch. The Lytro takes square images, much like Instagram or Hipstamatic, except that Lytro does not carry colorful filters but instead focuses on depth of field. Keep in mind that these interactive files are not plain JPEGs: to view an image at different depths, the viewer must first install the program that comes with the camera.

Unfortunately, the Lytro's LCD screen is only 1.5 inches, which I personally think is pretty outrageous (I'm guessing once they release a new camera, the LCD screen will be much bigger). The one thing I really like is the design and having almost no physical buttons, just a touch tab. The Lytro does not seem to be a good fit for professional photographers, but it isn't the worst tool either. I believe the technology in the Lytro camera is simply another option for photographers, scientists, and engineers to explore.


Last edited by pumhiran on Sat Nov 24, 2012 4:20 pm, edited 1 time in total.

andysantoyo
Posts: 6
Joined: Mon Oct 01, 2012 4:06 pm

Re: Wk7 - Computational Camera

Post by andysantoyo » Mon Nov 12, 2012 11:57 pm

I chose to review a work from the Sato Laboratory: a camera that can track individuals in crowds merely by their features and traits. It seems odd and a bit of a bad idea, since individuals can share common traits that are hard to tell apart, especially in crowds; however, the system can pick a very specific trait, one mostly shown by that person, to help identify him. Identifying or searching for someone in a crowd is complex in any situation unless they are really distinguishable. With this camera, though, it seems that in the future, whether someone is running away or you are looking for a person you've lost in a busy area, this could be a solution for finding them quicker. Not only does it track people, but it can also present the trajectories of people walking closely together, and it figures out how and where they move, based on their gait traits, in order to “track” them.
Such a camera might be used the way Rokeby used his piece "Seen", mapping out people as they moved through an area, even including the birds, during a certain time. The point of this camera is that each individual can be traced and mapped by their traits. Having observed those people, one could look again on a different day and see how close they come to “copying” the traits of others. If a few people stand apart, they are perhaps more susceptible to being noticed by the onlooker than others because of their “unique” traits. Noticing the odd ones out would be my artistic way of using this camera, since it already gives me the advantage of being able to track people.
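As a toy illustration of the trajectory side of this — not the Sato Laboratory's method, which matches much richer gait and appearance traits — here is a minimal OpenCV sketch that pulls per-frame centroids of moving people out of fixed-camera footage; "crowd.mp4" is a hypothetical input video.

```python
# A toy sketch of trajectory extraction from a fixed camera -- NOT the Sato
# Laboratory method, which matches much richer gait/appearance traits.
# "crowd.mp4" is a hypothetical input video.
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")
bg = cv2.createBackgroundSubtractorMOG2()      # learns the static background
kernel = np.ones((3, 3), np.uint8)
trajectories = []                              # list of centroids per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                     # foreground = moving people
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) > 500:           # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))
    trajectories.append(centroids)
cap.release()
# Linking centroids between frames (e.g., nearest neighbor) yields paths
# like the ones Rokeby drew in "Seen"; the real system uses traits to
# keep identities from being swapped when people cross.
```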


http://www.hci.iis.u-tokyo.ac.jp/en/res ... rowds.html

juliacurtis
Posts: 8
Joined: Mon Oct 01, 2012 3:56 pm

Re: Wk7 - Computational Camera

Post by juliacurtis » Tue Nov 13, 2012 12:57 am

Fredo Durand and Julie Dorsey, from MIT, performed computational research on interactive tone mapping. "Tone mapping and visual adaptation are crucial for the generation of static, photorealistic images. A largely unexplored problem is the simulation of adaptation and its changes over time on the visual appearance of a scene." The two researchers address these problems and propose their own model of visual adaptation. Their model creates a more visually enhanced experience of a scene, with real-world conditions contributing to the virtual environment just as they do in real life, greatly enhancing the immersive experience.
chroma.jpg
street.jpg
tunnel.jpg
"[They] describe a multi-pass interactive rendering method that computes the average huminance in a first pass and renders the scene with a tone mapping operator in the second pass. [They] demonstrate [their] model for the display of global illumination solutions and for interactive walkthroughs."

My personal interest in this topic lies in its potential to contribute to architecture and visual models. Although drafting by hand was once the sole method for producing drawings of planned and constructed buildings, a wide variety of programs has since transformed the process, involving the computer heavily at every stage. Some programs make it easy to create three-dimensional shapes in space, making the visualization process entirely new. Others compose a single image from three renderings of the same viewpoint of a building: a watercolor painting, a marker sketch, and a 3-D graphic from a separate program. Still others enable the exact view from a specific point in a house to be realized as a visual rendering, as if standing within the building. All in all, the research Durand and Dorsey have done on interactive tone mapping could further improve the ability of such programs to create a real-life, immersive sense of what a building will feel like and how it will be experienced by people on its site.

Source:
people.csail.mit.edu/fredo/PUBLI/EGWR2000/index.htm

sidrockafello
Posts: 17
Joined: Wed Oct 03, 2012 11:14 am

Re: Wk7 - Computational Camera

Post by sidrockafello » Tue Nov 13, 2012 8:37 am

The Computer Vision Research Laboratory at UCSB: http://excelsior.cs.ucsb.edu/lab.htm

Here at UCSB, the Computer Vision Research Laboratory is using a combination of novel optics and computational programs to produce final images that integrate multiple images into one. This is the goal of many of the challenging tasks taken on at the laboratory, which is headed by Yuan-Fang Wang. The research that the students, visitors, and faculty actively work on relates to computer vision, medical image analysis, computer graphics, and bioinformatics; notably, they rely on devices available to the public, known as COTS (Commercial Off The Shelf) cameras and camcorders.

One project proposed by the lab addresses mapping with multiple images, using the overlap between frames to fill in what any single image misses. The project deals with the highly complex camera issues that arise when physics and motion are in play. The problems they have come across that this project handles are: “(1) the 3D structures can vary significantly from almost planar to highly complex with large variation in depth, (2) the camera can be at varying distances to the scene, and (3) the scene may show significant deformation over time.” The lab provides an example: a reel of images stitched together to form a coherent black-and-white map, captured during an unmanned-aerial-vehicle flight.
8881.gif
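The core operation behind such a mosaic can be sketched in a few lines of OpenCV: match features between two overlapping frames, estimate a homography, and warp one frame onto the other. This is of course far simpler than the lab's pipeline, which must also cope with depth variation and scene deformation; the input filenames are hypothetical.

```python
# A minimal sketch of stitching two overlapping frames with a homography,
# the basic operation behind a mosaic like the one above. Far simpler than
# the lab's pipeline. "frame1.png" / "frame2.png" are hypothetical inputs.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two frames.
orb = cv2.ORB_create()
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

# Estimate the homography mapping frame 2 onto frame 1; RANSAC discards
# matches that disagree (e.g., on moving objects).
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp frame 2 into frame 1's coordinates and paste frame 1 on top.
h, w = img1.shape
mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
mosaic[:, :w] = img1
cv2.imwrite("mosaic.png", mosaic)
```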
What the study covers is how to unify a network of images into a stable frame that is easy for a human operator to observe. This is especially important because, even though many pictures can be used, the system must still compensate for a small amount of image jitter on top of “significant, long-range, and purposeful” flight movement. It is arguable that novel images should be enhanced to experiment with new points of focus, but we must remember that in many real-world imaging programs there may be various camera manipulations with conflicting patterns that will not match iota for iota.
Agent-Dodging.png

What this offers my own art is, above all, a way to create a scene. I like the idea of taking a panoramic picture set and combining it into one massive print. What this project allows is greater manipulation of the images I take. In addition, the captured images can be brought into computer graphics and mapped out. My interest here is that they could be used to show the animation of a character, for anatomically correct art that wants to expand its point of focus to multiple angles. Imagine being inside a shooting scene from The Matrix; I feel that imaginative thought could become reality. Because the software and programs that the UCSB lab uses can run on off-the-shelf devices, it is ideal to work with and use.
Last edited by sidrockafello on Thu Nov 15, 2012 8:34 am, edited 2 times in total.

giovanni
Posts: 24
Joined: Fri Oct 12, 2012 9:27 am

Re: Wk7 - Computational Camera

Post by giovanni » Tue Nov 13, 2012 9:38 pm

From the Computational Photography resource link, select a topic of interest. Give a brief report, and describe how such a camera might advance your own artistic work
2D to 3D Conversion of Sports Content - Lars Schnyder, Oliver Wang, Aljoscha Smolic
Disney Research Zürich and University of California, Santa Cruz


This research is focused on the problem of converting images from a single monoscopic camera into a stereoscopic 3D view.
In particular, it presents a system to automatically create stereoscopic video from monoscopic footage of field-based sports, using a technique that constructs per-shot panoramas to ensure consistent depth in the video reconstruction.
This method produces synthesized 3D shots, based on prior knowledge about the content, that are almost indistinguishable from ground-truth footage.

The spread of 3D-at-home systems suffers from problems such as an insufficient amount of available content and few broadcasts, because the creation of stereoscopic footage is still very expensive and difficult; conversion from 2D to 3D is therefore a very important alternative, not least because it makes it possible to convert existing content.
process.jpg
The typical conversion pipeline consists of estimating the depth for each pixel, projecting them into a new view, and then filling in holes that appear around object boundaries. Each of these steps is difficult and, in the general case, requires large amounts of manual input, making it unsuitable for live broadcast. Existing automatic methods cannot guarantee the quality and reliability that are necessary for TV broadcast applications. Sports games are a prime candidate for stereoscopic viewing, as they are extremely popular, and can benefit from the increased realism that stereoscopic viewing provides.
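As a toy version of that typical pipeline — and emphatically not the Disney method, which replaces per-pixel depth with panoramas and player billboards — here is a sketch that shifts pixels by a depth-derived disparity to synthesize a second view and then naively fills the resulting holes; `img` and `depth` are hypothetical inputs.

```python
# A toy version of the generic depth-image-based rendering pipeline.
# `img` (H x W x 3) and `depth` (H x W, in [0, 1], 1 = near) are
# hypothetical inputs.
import numpy as np

def synthesize_right_view(img, depth, max_disp=20):
    H, W, _ = img.shape
    right = np.zeros_like(img)
    filled = np.zeros((H, W), dtype=bool)
    disparity = (depth * max_disp).astype(int)   # nearer pixels shift more
    for y in range(H):
        for x in range(W):                       # a real renderer would
            nx = x - disparity[y, x]             # composite near over far
            if 0 <= nx < W:
                right[y, nx] = img[y, x]
                filled[y, nx] = True
    # Hole filling: copy the nearest pixel from the left (crude inpainting).
    for y in range(H):
        for x in range(1, W):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```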

This method takes advantage of prior knowledge, such as known field geometry and appearance, player heights, and orientation, and it creates a temporally consistent depth impression by reconstructing a background panorama with depth for each shot (a series of sequential frames belonging to the same camera) and modelling players as billboards.
panorama.jpg
segment.jpg
The result is a rapid, automatic, temporally stable, and robust 2D-to-3D conversion that can provide full 3D viewing of a sporting event at reduced cost, and this can be really interesting for photography and consumer cameras as well.

While it is possible to notice some differences between the two kinds of footage, such as in the graphic overlay, most viewers were not able to distinguish the two videos.
The implementation computes multiple passes to create the homographies, panorama, and stereo frames; running unoptimized research-quality code on a standard personal computer, it achieves per-frame computation times of 6-8 seconds.
The researchers state that this does not depend on the total video length, and that it would be possible to run the algorithm on streaming footage given a short delay and increased processing power.
result2.jpg
result.jpg

aleung
Posts: 9
Joined: Mon Oct 01, 2012 3:12 pm

Re: Wk7 - Computational Camera

Post by aleung » Wed Nov 14, 2012 2:19 am

Computational Research, MIT
Computational Photography and Video
Flash Photography Enhancement Via Intrinsic Relighting
Elmar Eisemann and Fredo Durand

In dark lighting, a photographer usually faces a frustrating dilemma: to use flash or not. A picture without flash has a warm atmosphere but suffers from noise and blur. A picture with flash suffers from red eye, flat and harsh lighting, and distracting sharp shadows at silhouettes. To fix this problem, Eisemann and Durand proposed enhancing photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. The ambiance of the original light is preserved and the sharpness from the flash is inserted. A bilateral filter is used to decompose the images into detail and large-scale layers. The image is reconstructed using the large scale of the available lighting and the detail of the flash. Flash shadows are then detected and corrected. This combines the advantages of available illumination and flash photography.
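A minimal sketch of the decompose-and-recombine step, using OpenCV's stock bilateral filter in place of the paper's implementation and skipping the shadow-correction stage entirely; "ambient.png" and "flash.png" stand in for a hypothetical aligned no-flash/flash pair:

```python
# Decompose both photos into large-scale and detail layers with a
# bilateral filter, then recombine ambient large scale with flash detail.
# Not the paper's implementation; shadow correction is omitted.
import cv2
import numpy as np

ambient = cv2.imread("ambient.png").astype(np.float32) + 1.0  # avoid /0
flash = cv2.imread("flash.png").astype(np.float32) + 1.0

# The bilateral filter smooths texture while preserving edges, giving the
# "large scale" layer; dividing it out of the image leaves the "detail".
large_ambient = cv2.bilateralFilter(ambient, 9, 50, 9)
large_flash = cv2.bilateralFilter(flash, 9, 50, 9)
detail_flash = flash / large_flash

# Recombine: the warm large-scale ambient lighting, the crisp flash detail.
result = np.clip(large_ambient * detail_flash, 0, 255).astype(np.uint8)
cv2.imwrite("relit.png", result)
```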




Image decoupling for flash relighting
Two images with the available light and the flash are taken respectively. Their color, large-scale and detail intensity are decoupled. Flash shadows are corrected. The appropriate layers are re-combined to preserve the available lighting but gain the sharpness and detail from the flash image.

Basic reconstruction and shadow correction. The flash shadows on the right of the face and below the ear need correction. In the naïve correction, note the yellowish halo on the right of the character and the red cast below its ear.

Shadow Treatment
In order to correct the aforementioned artifacts, the pixels that lie in the shadow must be detected. Pixels in the umbra and penumbra have different characteristics and require different treatments. After detection, color and noise in the shadows are corrected. The correction applied in shadow is robust to false positives; potential detection errors at shadow boundaries do not create visible artifacts.

This would be a really useful tool for advancing my own artistic work, because I am one of those photographers frustrated by dark settings. I don't like using flash in dark lighting because it makes the lighting in the photograph look really unnatural and flat, but if I don't use flash, my photos come out really dark. I try to play around with the ISO and find a setting that gives a decent amount of light, but it is hard to keep the pictures from turning out blurry. I have found an iPhone app that is fairly similar, called Pro HDR: it takes two photos, one exposed for the highlights and another exposed for the shadows, and merges the two into one full-resolution HDR image. But that app is for my phone, not my DSLR, so this method would be very helpful to me for rendering better photographs in dark settings.


Source:
http://people.csail.mit.edu/fredo/PUBLI/flash/index.htm

rdouglas
Posts: 9
Joined: Mon Oct 01, 2012 3:09 pm

Re: Wk7 - Computational Camera

Post by rdouglas » Wed Nov 14, 2012 3:14 pm

Matt Richardson, a creative technologist in Brooklyn, has created a project called Descriptive Camera. The Descriptive Camera is like an ordinary digital camera in that you point it at a scene, press a button, and the camera captures the scene as digital information. However, this camera captures scenes in a very different, but interesting and useful, way: its output is a short text description rather than an image. Richardson is interested in the nearly unmanageable amounts of photos that we accumulate over our lifetimes and in how to better analyze and organize these collections. As he states on the project's page, "Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they're doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections."

Since such technology doesn't quite exist yet, Richardson utilizes crowdsourcing in his prototype. More specifically, he uses Amazon's Mechanical Turk API (https://www.mturk.com/mturk/welcome), which allows a developer to set parameters for a worker to work within and then collect the submitted results. For their work, the "mechanical turks" receive a very small payment, and an incentive system of approval and reputation ensures acceptable results. In this project, they are contracted to analyze the photo that the Descriptive Camera sends them, describe it in a few words, and send the description back.

The internal components seen more clearly: a BeagleBone from Texas Instruments, a USB webcam, a thermal printer, status lights and the shutter button.

For a participant or user, the process is as follows: the shutter button is pressed after composing the desired scene, the photo is sent to a Mechanical Turk worker for processing, and a yellow LED alerts the user that the results are currently "developing". After three to six minutes of development, the camera prints the resulting description for the user.
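The workflow is easy to imagine in code. The sketch below uses today's boto3 MTurk client rather than the 2012-era API Richardson actually used, and the HTML form, photo URL, reward, and printing step are all illustrative assumptions rather than details from the project:

```python
# A sketch of the capture -> crowdsource -> print workflow using boto3's
# MTurk client, NOT the 2012-era API the project actually used. The HTML
# form, photo URL, reward, and printing step are illustrative assumptions.
import time
import boto3

QUESTION = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <img src="https://example.com/latest.jpg"/>
    <!-- ...a text box asking the worker to describe the photo... -->
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

mturk = boto3.client("mturk", region_name="us-east-1")

# "Shutter press": publish the task. The photo itself must already be
# hosted somewhere a worker's browser can load it.
hit = mturk.create_hit(
    Title="Describe this photo in a few words",
    Description="Write a short, plain description of the attached photo.",
    Reward="0.25",
    MaxAssignments=1,
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=3600,
    Question=QUESTION,
)

# "Developing": poll until a worker submits, then output the description
# (the real camera sends it to a thermal printer).
hit_id = hit["HIT"]["HITId"]
while True:
    result = mturk.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"])
    if result["Assignments"]:
        print(result["Assignments"][0]["Answer"])
        break
    time.sleep(30)  # the yellow LED would stay lit while we wait
```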


Source:

http://mattrichardson.com/Descriptive-Camera/

kevinalcantar
Posts: 7
Joined: Mon Oct 01, 2012 3:49 pm

Re: Wk7 - Computational Camera

Post by kevinalcantar » Wed Nov 14, 2012 6:15 pm

figure1.jpg
In UC Berkeley's Electrical Engineering and Computer Sciences department, researchers are working on a personalized system for ranking amateur photographs. The system takes into account features such as RGB color, texture, black-and-white rendering, and portraiture when ranking amateur photos according to personal preference. The researchers have provided two personalized ranking user interfaces thus far: one is feature-based and the other is based on examples.

Digital cameras are now ubiquitous, and the ranking system would allow photographers to separate out and discard images that are not pleasing. Since going through numerous photos to pick out which ones are good and which aren't is tedious, the researchers are creating an algorithm based on the subjective, individual preferences of the viewer to automatically find which images would be selected. The tool is meant for amateur photographers, not professionals. This allows a more personalized and individual experience.
fig2.jpg
When creating the ranking system, the researchers took into consideration notions of aesthetics, particularly nine rules: horizontal balance, line patterns, size of the region of interest (ROI), merger avoidance, the rule of thirds, color harmonization, contrast, intensity balance, and blurriness. Using these rules, an accuracy of 81% was achieved on a set of 2,000 photographs; in predicting and classifying, the system was about 93% accurate.
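The paper's code isn't shown, but one can guess at how a few of those rules become numbers. The sketch below computes three simple features (blurriness, contrast, horizontal balance) that a per-user classifier could then weight; it is an assumption-laden illustration, not the Berkeley implementation:

```python
# A guess at how a few of those rules could become numeric features --
# an assumption-laden illustration, not the Berkeley implementation.
import cv2
import numpy as np

def aesthetic_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    # Blurriness: variance of the Laplacian (low variance = blurry photo).
    blurriness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Contrast: spread of the tonal distribution.
    contrast = gray.std()
    # Horizontal/intensity balance: brightness mismatch between halves.
    half = gray.shape[1] // 2
    balance = abs(gray[:, :half].mean() - gray[:, half:].mean())
    return [blurriness, contrast, balance]

# Personalization could then be a classifier fit on the user's own
# keep/discard history, e.g. with scikit-learn:
#   model = sklearn.linear_model.LogisticRegression().fit(X, y)
# and new photos ranked by model.predict_proba(X_new)[:, 1].
```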

In my own work, I might use this by creating a database of portraits of people. Using the ranking system and algorithm, I would create a way to detect the most desirable physical traits in both men and women. It would then also build a database of the human characteristics a user would most likely want in a sexual partner. Using both, the database would continuously change with the spirit of the times, continuously creating the "ideal" or "perfect" individual. You could then compare how you match up to the idealized individual created by analyzing everyone's preferences.

orourkeamber
Posts: 8
Joined: Mon Oct 01, 2012 3:58 pm

Re: Wk7 - Computational Camera

Post by orourkeamber » Wed Nov 14, 2012 8:14 pm

Removing Camera Shake from a Single Photograph: http://people.csail.mit.edu/fergus/pape ... fergus.pdf


A research team from MIT and the University of Toronto, led by Rob Fergus, is creating a way of correcting blurred photographic images that result from camera shake. Because cameras are becoming smaller and lighter in weight, they are more difficult to hold steady, which can cause one's photos to be blurred. It is not always convenient to lug around a tripod to correct this problem, which is why this new technology could be helpful to snapshot photographers everywhere.
The first step in such a correction is to estimate the blur kernel from the original photo; with this estimated kernel, they can then “apply a standard deconvolution algorithm to estimate the latent (unblurred) image” ( http://people.csail.mit.edu/fergus/pape ... fergus.pdf ). Though I still feel unsure of exactly what this means, due to the scientific phrasing of the article, I think essentially what they are doing is estimating the amount and direction (horizontal, vertical, and diagonal) the image moved due to camera shake, then working backwards to align the pixels back to their proper placement (put in the very simplest of terms).
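The "standard deconvolution" half is actually the easy part and fits in a few lines. Below is a sketch of Richardson-Lucy deconvolution, run with a known kernel; `blurred` and `kernel` are hypothetical inputs (image values assumed scaled to [0, 1]). Estimating the kernel from a single photo is the paper's real contribution and is far harder.

```python
# Richardson-Lucy deconvolution with a KNOWN kernel. `blurred` (scaled to
# [0, 1]) and `kernel` (small 2D array summing to 1) are hypothetical
# inputs; estimating the kernel is the hard problem the paper solves.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, kernel, iterations=30):
    estimate = np.full_like(blurred, 0.5)      # start from a flat gray guess
    kernel_flipped = kernel[::-1, ::-1]
    for _ in range(iterations):
        # How well does blurring the current guess reproduce the photo?
        reblurred = fftconvolve(estimate, kernel, mode="same")
        ratio = blurred / (reblurred + 1e-12)
        # Nudge the guess in whatever direction corrects the mismatch.
        estimate *= fftconvolve(ratio, kernel_flipped, mode="same")
    return estimate
```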
I think that this could be applied to my art because I am currently working on a series of portrait studies painted mostly from nonprofessional photographic snapshots. Because I need the photos to be very sharp and readable if I want to work from them, it is quite frustrating when I cannot use what would have been a great photo had it “not been so blurry”. If I had a program such as the one being developed by Fergus's team, I wouldn't have to limit myself as to which photos I could use.
I would have liked to include more images here, but unfortunately the site would not let me save its photos or copy and paste them into another document. To see images of this work or to read more, visit their website: http://people.csail.mit.edu/fergus/pape ... fergus.pdf
Camera-Shake 1.jpg
http://www.digitalcamerainfo.com/conten ... rence-.htm
shake2.jpg
http://digitalgadget.jp/dg095.html

erikshalat
Posts: 9
Joined: Mon Oct 01, 2012 3:10 pm

Re: Wk7 - Computational Camera

Post by erikshalat » Wed Nov 14, 2012 10:36 pm


Gordon Wetzstein, Ramesh Raskar and Wolfgang Heidrich did research on the photography of transparent objects using light field probes. More specifically, they are attempting to "visualize" the refraction of light by transparent objects. Capturing refraction from transparent objects is difficult because the objects are essentially "invisible". Nature has a way of capturing it in the form of shadows, also called "shadowgraphs".


The technology to make this more efficient needed to be developed. It would grant the ability to accurately define volumes of things like glass, water, or even gases. Regular cameras aren't capable of capturing the "non-linear trajectories" of light passing through transparent objects, as opposed to opaque objects, off which light bounces in predictable, linear ways.

The goal of Wetzstein, Raskar and Heidrich was to use "light field probes" to help encode and reveal the colors and tones of refraction. A light field probe, from what I gather, is a white sheet with a subtle cellular-like pattern printed on it that can interact with a camera to produce information. In the experiment, a probe is placed behind the object being photographed, which allows for "Schlieren imaging", in which light is sent through the refractive medium and past a filter before being captured by a camera. This method of photography is called Light Field Background Oriented Schlieren photography, or LFBOS.

The refraction sends light leaving the transparent object in several different directions, and the light field probe's pattern encodes these variations in direction along with their intensity. This information is then computed from the camera's capture and processed into an image.
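A rough way to play with the background-oriented part of this idea, using generic optical flow instead of the paper's coded probes: photograph a patterned background with and without the transparent object in front of it, then measure how the pattern shifted. The filenames are hypothetical.

```python
# Background-oriented refraction visualization via generic optical flow,
# standing in for the paper's coded probes. "reference.png" and
# "refracted.png" are hypothetical captures of the same background.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
obs = cv2.imread("refracted.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: where did each patch of the background appear to move?
flow = cv2.calcOpticalFlowFarneback(ref, obs, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Visualize deflection direction as hue and magnitude as brightness --
# roughly the information the printed probes encode optically.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((ref.shape[0], ref.shape[1], 3), dtype=np.uint8)
hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                            cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("deflection.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```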

Sources:
http://www.cs.ubc.ca/~heidrich/Papers/ICCP.11.pdf
http://andrewcfyb1.files.wordpress.com/ ... shadow.jpg
http://www.cs.ubc.ca/labs/imager/tr/201 ... teaser.jpg
