Proj 1- Intro - (Due April 17)

Posts: 160
Joined: Wed Sep 22, 2010 12:26 pm

Proj 1- Intro - (Due April 17)

Post by glegrady » Tue Apr 11, 2017 2:20 pm

Begin a blog page here by clicking on "Post A Reply". Your first project is to discuss any topic covered in the first two weeks concerning the history of the camera or current computational photography. Please add URLs of articles, videos, or anything you have come across on the internet that is of interest to you, and discuss them briefly in a paragraph or two. This assignment is due by April 17.
George Legrady

Posts: 8
Joined: Wed Apr 12, 2017 5:15 pm

Re: Assignment 1 Due April 17

Post by chantelchan » Thu Apr 13, 2017 2:08 pm

This article shows the first photograph of a person.

Although the picture was taken on a busy street in Paris, the exposure took several minutes, so the moving traffic and crowds left no trace in the image. However, a man who was having his shoes shined at the bottom left stayed still long enough to be recorded. The photo is a "daguerreotype," meaning it was created through a multi-step chemical process. It required polishing a sheet of silver-plated copper, fuming the surface to make it light sensitive, and exposing it inside the camera obscura. This was followed by developing the surface with mercury vapor, more chemical treatments, rinsing, drying, and finally sealing it in a frame.

For a photo taken in 1838, the black-and-white "Boulevard du Temple" by Louis Daguerre was very advanced for its time. Luckily, today's cameras can take photos with far more resolution, color, and speed. And looking toward the future, we can run these cameras with ever quicker processing to further understand the world around us, just like in the TEDx lecture we watched in class.
Last edited by chantelchan on Mon Apr 17, 2017 8:27 am, edited 2 times in total.

Posts: 2
Joined: Wed Apr 12, 2017 5:09 pm

Re: Assignment 1 Due April 17

Post by gbaier » Sun Apr 16, 2017 1:49 pm

This article discusses the "Flutter Shutter" camera developed at MIT.

Traditional cameras introduce motion blur when taking photos of moving objects. While the shutter opens to expose the photo, the object moves during that brief time, smearing its outline in the final image. Increasing the shutter speed can help reduce the motion blur, but it compromises other aspects of the photo: a faster shutter speed decreases exposure time, creating a darker image. Algorithms exist to reduce the motion blur of a photo taken with a normal exposure, but these techniques introduce artifacts and make the final image look unnatural.

The MIT team developed a camera with a ferroelectric shutter that can open and close rapidly according to a binary sequence fed to it. The team programmed the camera's shutter to flutter open and closed in this coded pattern over the course of a photo's exposure, and then used a deconvolution algorithm to reconstruct the final image with the object's motion blur greatly reduced, without introducing artifacts or compromising image quality. I think this technology will help capture better pictures if it trickles down to the commercial market, particularly in sport and motorsport photography, where subjects are almost always in motion.
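To see why the coded pattern matters, here is a tiny 1-D sketch of my own (illustrative kernels only, not MIT's actual binary code or algorithm). Motion blur acts like convolution with the shutter's exposure pattern; a plain open shutter is a box filter whose spectrum contains exact zeros, destroying those frequencies, while a well-chosen coded pattern keeps every frequency and makes the blur invertible:

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
scene = rng.random(n)                       # a 1-D "scene"

box = np.zeros(n)
box[:4] = 0.25                              # ordinary always-open shutter
coded = np.zeros(n)
coded[:3] = [0.6, 0.3, 0.1]                 # invertible coded exposure (toy)

# The box filter has exact spectral zeros (frequencies 16, 32, 48 here):
# those components of the scene are lost and no algorithm can recover them.
assert np.abs(np.fft.fft(box)).min() < 1e-12

# The coded pattern has no spectral zeros, so deblurring by spectral
# division recovers the scene exactly in this noiseless toy example.
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(coded)))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(coded)))
assert np.allclose(restored, scene)
print("coded exposure inverted exactly")
```

The real flutter-shutter work uses a carefully designed binary on/off sequence with the same goal: a broadband exposure spectrum that keeps deconvolution well conditioned.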

Posts: 2
Joined: Wed Apr 12, 2017 5:08 pm

Re: Assignment 1 Due April 17

Post by leaharmer » Sun Apr 16, 2017 3:16 pm

In this Variety article, Lytro introduces a cinema camera that could revolutionize the way we define photography.

A few years ago, Lytro released the Illum, a light field camera for photography that allows its user to refocus photos after they're shot. However, the camera was quite pricey, and consumers complained about image quality in low-light situations. The commercial failure of the Lytro Illum caused the company to shift its focus from the consumer market toward the film and VR industries.

The Lytro Cinema Camera is a huge prototype, almost the size of an SUV and weighing hundreds of pounds, but its technology is incredible. The camera captures information, not pictures. Using a light field sensor, the Lytro camera captures a holographic digital model of the scene 300 times per second. Because the “lens” is virtual, it can have properties of lenses that would be impossible to manufacture in real life. 3D shooting can get left- and right-eye views from the same data. It allows a multitude of settings to be changed in post, such as frame rate, aperture, and lens adjustments. The position of the camera, along with focus and depth of field, can be altered afterward because the data recorded by the camera includes the depth of everything in the scene. Therefore the user can choose to simply ignore everything past a certain distance from the camera — in effect, doing greenscreen without greenscreens. Once the company gets the camera down to a reasonable size, it's going to change the film industry forever.
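That "greenscreen without greenscreens" idea is easy to sketch: once every pixel carries a depth value, keying is just a threshold rather than a chroma match. A minimal toy example of my own (synthetic data, not Lytro's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))                  # toy RGB frame
depth = np.array([[1.0, 1.2, 9.0, 9.0],
                  [1.1, 1.3, 9.0, 9.0],
                  [1.2, 1.4, 8.5, 9.5],
                  [1.3, 1.5, 8.0, 9.9]])       # metres, per pixel
background = np.zeros_like(image)              # replacement plate

near = depth < 3.0                             # keep subjects within 3 m
# Broadcast the 2-D mask across the RGB channels to composite.
composite = np.where(near[..., None], image, background)

assert np.allclose(composite[0, 0], image[0, 0])   # near pixels kept
assert np.allclose(composite[0, 2], 0.0)           # far pixels replaced
```

Unlike a chroma key, this works with any background colors and can be re-thresholded at a different distance in post.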

Posts: 8
Joined: Wed Apr 12, 2017 5:47 pm

Re: Assignment 1 Due April 17

Post by taylormoon2014 » Sun Apr 16, 2017 5:43 pm

Taylor Moon

What makes Lytro cameras so innovative is their spatial command of the six degrees of freedom, creating a sense of presence in virtual reality. This method, when exercised in video games, allows the user to exercise “translational motion or force along each of three orthogonal axes and rotational motion or torque about each of the axes” (Brannon 1). The Lytro camera is a plenoptic camera, which captures the four-dimensional light field through an array of microlenses during a single photographic exposure (Stanford 1). The microlenses are arranged according to GRIN, or a gradient index (“Graded Index”). The camera subdivides the total amount of light at a given location into the amounts carried by each ray, which accounts for varying source distances, or depths, within the image and increases overall photographic sharpness (Stanford 1). This opens up possibilities for hyper-enlargement, reduced image noise, the ability to sustain a wide aperture over a greater depth of field, quicker exposures, multiple perspectives, and extreme motion shots. The plenoptic camera utilizes synthetic-image and refocusing technology, employing an algorithm that measures deviations in defocus blur against an approximate blur (Yosuke 1). This gives the photographer the freedom to control the focus of their choice. Stanford’s plenoptic camera distinguishes itself from other plenoptic counterparts by using range-finding technology. Prior to this research, I did not know that the light passing through a pixel correlates to a parent microlens (Stanford 3), nor was I aware of the sub-apertures that combine to form the total aperture (3). Sub-apertures, to my understanding, are created by the presence of multiple microlenses, each with its own aperture, behind the main lens.
The light field is confined by the aperture, and therefore vignetting occurs when a synthetic photograph requires light outside of that boundary (Stanford 5). Lytro’s Illum camera allows one to refocus, shift the perspective, and pan after the image has been taken; moreover, upgrades include a Living Picture Playback feature and, most interestingly, the ability to create a 3D animation of your images from its desktop software (Torres). The Illum, which is intended for more serious work and was produced after its more kaleidoscopic predecessor, is Lytro’s second camera. Reviews, such as the one produced by The Verge on YouTube, describe the Illum as a heavy camera with a 30-250mm lens. It still has the expected shutter button. It has an angled display screen on the back and is intended, according to the video, to be held about chest-high with two hands. It shoots at f/2 all the time but gives you the option to process the image as high as f/15 after the photo has been taken. The reviewer describes using Lytro’s Illum merely to take regular DSLR-style photos as a waste of money; it is truly intended for the employment of its Living Picture advantages. The Verge also describes how the Illum compensates for the fact that you cannot merely take a shot from any angle: its Lytro button maps the refocusable range of your shot in blue and orange. I take issue with this, because it strips much of the independent voice from the photographer as an artist. In essence, anyone, no matter how inexperienced, could then become an expert photographer. As I explored the Lytro camera, I grappled with the philosophical implications of what this technology will mean for the artist and the ways in which artists will need to reinvent and reclaim the medium.
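The shift-and-add intuition behind synthetic refocusing can be shown in miniature. In this toy sketch of mine (a synthetic 1-D light field, not Lytro's or Stanford's actual pipeline), each sub-aperture view sees a scene point displaced by a disparity proportional to its position `u` within the main aperture; refocusing at a depth means undoing that shift and averaging the stack:

```python
import numpy as np

base = np.zeros(32)
base[10] = 1.0                                  # a single scene point
true_disparity = 2                              # pixels per aperture step
# Three sub-aperture views, indexed by aperture position u.
views = {u: np.roll(base, true_disparity * u) for u in (-1, 0, 1)}

def refocus(views, slope):
    # Shift each sub-aperture view back by slope*u, then average.
    stack = [np.roll(v, -slope * u) for u, v in views.items()]
    return np.mean(stack, axis=0)

sharp = refocus(views, true_disparity)   # refocused at the right depth
blurry = refocus(views, 0)               # "focused" at the wrong depth

assert sharp.max() == 1.0                # the point reassembles perfectly
assert blurry.max() < 1.0                # energy smeared across pixels
```

Sweeping `slope` is exactly the "choose your focus afterward" control: each value realigns the sub-aperture images for a different scene depth.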

Brannon, Daniel J. "Joystick apparatus for measuring handle movement with six degrees of freedom." U.S. Patent No. 5,854,622. 29 Dec. 1998.

Ng, Ren, et al. "Light Field Photography with a Hand-Held Plenoptic Camera." Stanford Tech Report CTSR 2005-02. Stanford, Feb. 2005. Web. 15 Apr. 2017.

Flusberg, Benjamin A., et al. "Fiber-optic fluorescence imaging." Nature methods 2.12 (2005): 941-950.

"Graded Index Microlenses." Liens Vers La Page D'accueil Du Site De L'INO. Canada Economic Development for Quebec Regions, n.d. Web. 15 Apr. 2017.

Torres, JC. "Lytro Releases Major Update to ILLUM Light Field Camera Suite." SlashGear. SlashGear, 10 July 2015. Web. 16 Apr. 2017.

Bando, Yosuke, and Tomoyuki Nishita. "Towards Digital Refocusing from a Single Photograph." N.p., n.d. Web. 15 Apr. 2017.

Posts: 3
Joined: Wed Apr 12, 2017 5:12 pm

Re: Assignment 1 Due April 17

Post by haytham » Sun Apr 16, 2017 6:58 pm

The article Computational Photography ... issue.aspx talks about the computational technology behind camera lenses. It delves into the process of making pictures by describing photosites: the rectangular array of tiny light-sensitive semiconductor devices. Each pixel is computed by combining information from different photosites. This is part of the image-processing computer in the camera, which also applies a "sharpening" algorithm to adjust edges, contrast, and color balance. Thus, the digital camera is said to be making pictures instead of taking them, because it executes a lot of internal computations.
One such computation involves the light field: the camera captures additional information about the light field, which allows focus and depth of field to be corrected after the image is taken. Other techniques address motion blur. One of them, the flutter shutter, acts like stroboscopic lighting, breaking one long smeared exposure into a sequence of several shorter ones to aid in reconstructing an unblurred version.
The future of digital imagery might include even more computational technologies, so the idea of a photograph as a frozen still might become a concept of the past. There is an upcoming trend in digital photography of treating a photograph as a computed object. As a result, cameras seem to be following the trend of photography as a subjective and interpretive form of visual expression as opposed to documentary art.
Last edited by haytham on Mon Apr 17, 2017 9:56 am, edited 1 time in total.

Posts: 4
Joined: Wed Apr 12, 2017 5:14 pm

Re: Assignment 1 Due April 17

Post by fangfang » Sun Apr 16, 2017 7:32 pm

This article ( ... twenn.html) shows the relationship between photography and painting.

At the beginning of the article, it asks two questions that interest me: “Why would an artist spend countless hours painting an image like this when you could create a photograph instantly that looks the same?” and “Is photography art?” The author then traces the history of photography and of paintings made after the invention of photography was announced. Some artists in the article also believed that the era of painting was over. After reading it, I agree that people can use the camera to create a new vision in a new society. Times keep changing, so artists can now create all kinds of artwork with advanced technology. Although some digital cameras can create photos that look like paintings or drawings, I do not think photography will replace painting, because each has its own limitations and advantages. Throughout history they have influenced each other, and artists focused on photography or painting have drawn inspiration from one another. But I believe the line between them will keep blurring.

Posts: 4
Joined: Wed Apr 12, 2017 5:48 pm

Re: Assignment 1 Due April 17

Post by xinghan_liu » Sun Apr 16, 2017 8:49 pm

From the basic physical principle of the pinhole camera model to today's VR cameras, taking pictures has become a fully digital technique. With today's technology, artists keep thinking about how to create art with the new generation of cameras. Different types of cameras offer different picture-taking functions, such as the fosera camera and the one-pixel camera. What kind of photo do we want for our art? It is not only about recording a scene, but also about perspective, focus point, and style, because with today's technology we can obtain almost any kind of image to express our feelings. Besides, one can also fix the image digitally. In the article "New cameras don't just capture photons; they compute pictures," the author mentions that Photoshop has already changed people's view of photography. "'The camera never lies' was always a lie" means that we can change a photo however we want.
We should really think about how to get the photo that exactly expresses our thoughts, because with today's technology, the act of taking a picture can be done anytime and anywhere.

Posts: 5
Joined: Wed Apr 12, 2017 5:14 pm

Re: Assignment 1 Due April 17

Post by zoe.m.rathbun » Sun Apr 16, 2017 11:47 pm

Rather than taking a specifically technical or historical approach, I am examining an underlying philosophical question relating to this class. One question that you posed in the first two weeks of class was "What is an image?"
I find this to be an incredibly nuanced question. To me, an image is some representation of the fleeting moment of existence that we reside in. If you ask Google, an image is (1) a representation of the external form of a person or thing in art, or (2) a simile or metaphor. If you ask Lynda Barry (the first person I heard ask this question): ... -an-image/

I think the answer might be a little different coming from a photographer vs. an illustrator. I think she is referencing an image in a mental sense, whereas in this class we may be referencing an image in a material sense [what is produced by the process of capturing with some medium]. Lynda Barry draws from memory and is concerned with capturing some component of a mental 'image'. Some blurring of these two definitions could be interesting, as in exploring the difference between an image in memory vs. a physical image.
In the case of a captured image, I take an image to be a specially encoded message that happens to be formatted in such a way to be translated by our visual system. Pixels could be translated into cells on a retina, which can then be translated into specially organized visual maps in the optic tract and primary visual cortex. ... ic-mapping

These are signals that we happen to be equipped to interpret, in which we recognize people and objects and assign significance and even aesthetic value. Object recognition is a process that for many years eluded the comprehension of both neuroscientists and computer scientists alike. Now, due to machine learning, computers have the ability to recognize objects and even compose photos. This could be an interesting application for our photographic robot swarm, for instance if the robots could recognize an object and photograph it from multiple angles.

Many of the topics referenced in this class seem to be challenging what an image is in a classical photographic sense. Is something still an image in the same way when it has been computed rather than simply captured? Is something an image in the same way when it is taken at a speed faster than light? Similarly, it is interesting to look at Photoshop as a filtering tool for these signals, in the way that a sound signal might be modulated by a synthesizer. I find the conversion between audio and visual imagery to be an interesting line of artistic inquiry, and I enjoyed the project created by you and one of your MAT students that you exhibited.

Zoë Rathbun

Posts: 4
Joined: Wed Apr 12, 2017 5:07 pm

Re: Assignment 1 Due April 17

Post by annieyfong » Mon Apr 17, 2017 12:44 am

When looking up current computational photography, I came across the Light L16 Camera.

Like the Lytro camera, it generates a photo from multiple image sources. The idea behind this camera is the desire to take quality photographs while carrying around a small, portable device. The Light L16 has 16 different camera modules with varying focal lengths. They all fit in a body the size of a smartphone because the cameras are positioned sideways and fire through mirrors. Multiple cameras are used to generate a single image, and up to ten of the sixteen can fire at one time depending on the zoom level. The feature that intrigued me the most is that you can choose your focus after you take your shot: since multiple cameras gather the same scene from slightly different angles, you can adjust your depth of field and blur things manually. Unfortunately, this camera is only available for pre-order and has not yet been manufactured.
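That after-the-shot focus control depends on recovering depth from the small parallax between the camera modules. A toy 1-D version of my own (synthetic signals, not the L16's real pipeline): two neighboring cameras see the same feature shifted by a disparity that encodes its distance, and cross-correlation finds that shift:

```python
import numpy as np

left = np.zeros(64)
left[10:15] = [1, 3, 5, 3, 1]            # a distinctive scene feature
disparity = 5                            # pixels of parallax
right = np.roll(left, disparity)         # view from a neighboring camera

# Circular cross-correlation via the FFT correlation theorem:
# its peak sits at the shift between the two views.
corr = np.real(np.fft.ifft(np.fft.fft(right) * np.conj(np.fft.fft(left))))
assert int(np.argmax(corr)) == disparity
print("estimated disparity:", int(np.argmax(corr)))
```

In a real multi-camera system the recovered disparity maps to depth (depth is proportional to baseline times focal length divided by disparity), and that per-pixel depth is what lets blur be re-applied at a chosen focal plane in post.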
