As discussed in previous posts, the camera obscura serves as the basic model for all later cameras. However, the camera obscura is limited in functionality: it can sample only a narrow slice of the rays and light field of a particular scene. Today, scientists and artists are joining forces to create computational cameras that combine new sensing strategies with algorithmic techniques to enhance the capabilities of digital photography. This innovation requires sophisticated software-based methods for processing the captured representations and reproducing them in unique and original ways. In each computational camera, the transformation is shared between the optics and the decoding software. On the sensor side, several research teams are also developing detectors that perform early visual processing along with image sensing.
Computational cameras enable new imaging functionalities. These can take the form of an enhanced field of view, spectral resolution, dynamic range, or temporal resolution. They can also manifest as flexibility: thanks to advances in computational cameras, more and more people can adjust optical settings (focus, depth of field, viewpoint, resolution, lighting, etc.) after they capture an image.
Pupil plane coding, for instance, places optical elements at, or close to, the pupil plane of a traditional camera lens. A good example of pupil plane coding can be found in cell phone cameras. More than ever, drastic improvements have been made to the resolution, optical quality, and photographic functionality of the camera phone. Now ubiquitous, these devices can employ coded apertures to enhance signal-to-noise ratio, resolution, and focus, and programmable apertures for viewpoint control and light field capture. By embracing computational imaging, these small devices have achieved a higher performance-to-complexity ratio and high image resolution through post-processing. However, not all computational techniques can be implemented on portable devices: sometimes the phone's sensors and optics cannot be adjusted as needed, the computing resources are not powerful enough, or the APIs connecting the camera to the computing software are too restrictive. Although interest in computational photography has steadily increased, progress has been hampered by the lack of portable, programmable camera platforms.
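To make the coded-aperture idea concrete, here is a minimal sketch in Python with NumPy. It models the capture as the scene blurred by the aperture's point spread function and then inverts that blur with a Wiener filter in post-processing. The function names and the 3x3 binary code are invented for illustration and are not taken from any of the projects linked below.

```python
# Illustrative sketch of pupil-plane (coded aperture) imaging, assuming the
# aperture pattern is known: capture = scene convolved with the aperture's
# point spread function (PSF), then a Wiener filter recovers a sharper estimate.
import numpy as np

def capture_with_coded_aperture(scene, psf, noise_sigma=0.01):
    """Simulate a defocused capture through a coded aperture (circular convolution)."""
    H = np.fft.fft2(psf, s=scene.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
    return blurred + np.random.normal(0.0, noise_sigma, scene.shape)

def wiener_deconvolve(image, psf, snr=100.0):
    """Recover a scene estimate by inverting the coded-aperture blur."""
    H = np.fft.fft2(psf, s=image.shape)
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))

# Toy usage with a made-up 3x3 binary aperture code.
code = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=float)
psf = np.zeros((480, 640))
psf[:3, :3] = code / code.sum()

scene = np.random.rand(480, 640)          # stand-in for a real photograph
observed = capture_with_coded_aperture(scene, psf)
restored = wiener_deconvolve(observed, psf)
```

Broadband binary codes are preferred in practice because their frequency response avoids the zeros of an ordinary circular aperture, which is what makes the deconvolution step well behaved.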
Some researchers have looked to physics instead, implementing physics-based methods in computational photography to remove unwanted artifacts from digital photographs and videos. In one such project, computational algorithms model and remove dirty-lens and thin-occluder artifacts. Imagine, for example, a security camera whose lens accumulates various contaminants over time (fingerprints, dust, dirt); the images it records are taken through these thin layers of particles, which partially obstruct the scene. The algorithms rely on a physical understanding of image formation to directly recover the information lost in such photographs. Because the lens contaminants are far out of focus, the artifacts they produce are low frequency and act on the image either additively (scattered light) or multiplicatively (attenuation), so data about the original scene can be recovered.
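Here is a rough sketch of that image formation model, assuming the low-frequency attenuation and scattering maps have already been estimated (for example, from calibration frames). All names and values are illustrative; this is not the authors' actual code.

```python
# Dirty-lens model: observed = attenuation * clean + scattering.
# Given estimates of the two low-frequency maps, recovering the clean image
# is a per-pixel inversion of that equation.
import numpy as np

def remove_dirty_lens_artifacts(observed, attenuation, scattering, eps=1e-6):
    """Invert observed = attenuation * clean + scattering, pixel by pixel."""
    clean = (observed - scattering) / np.maximum(attenuation, eps)
    return np.clip(clean, 0.0, 1.0)

# Toy usage with synthetic maps (values made up for illustration).
observed = np.random.rand(240, 320)
attenuation = np.full((240, 320), 0.85)   # dirt absorbs some of the scene light
scattering = np.full((240, 320), 0.05)    # dirt also scatters ambient light in
restored = remove_dirty_lens_artifacts(observed, attenuation, scattering)
```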
Focal plane coding is a different approach used in computational photography: an optical element is placed on, or close to, the image detector, and small physical motions of the detector can be used to capture and control pixels across multiple exposures. The Focal Sweep Camera is a prime example of focal plane coding paired with computational software. With one click, the user captures a stack of images of a scene corresponding to the camera's different focal settings. The camera uses a high-speed image sensor that is translated while the lens records, so the plane of focus sweeps through the (possibly dynamic) scene. The imaging system determines the sensor speed and the shortest duration needed to sweep the desired depth range. Because the focal stack is captured over a finite duration of time, it also includes scene motion.
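As a small, hedged example of what one can compute from such a focal stack, the sketch below builds an all-in-focus composite by picking, at each pixel, the slice with the strongest local sharpness (Laplacian energy). This is a generic focus-stacking technique, not the Focal Sweep Camera project's actual reconstruction pipeline.

```python
# All-in-focus composite from a focal stack: per-pixel, keep the slice whose
# local Laplacian energy (a simple sharpness measure) is highest.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack):
    """stack: (num_slices, H, W) grayscale focal stack -> (H, W) composite."""
    # Focus measure: locally averaged squared Laplacian of each slice.
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    best = np.argmax(sharpness, axis=0)           # index of sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy usage with a synthetic 5-slice stack.
stack = np.random.rand(5, 240, 320)
composite = all_in_focus(stack)
```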
Finally, illumination coding is achieved by projecting complex light patterns onto a scene. Advances in digital projectors and controllable flashes have made this kind of active illumination far more sophisticated. In the last project listed below, a team of designers uses active illumination to create optical ("virtual") tags in a scene; a key advantage of their approach is that it requires no physical contact with the tagged objects. They use infrared (IR) projectors to cast temporally coded (blinking) dots onto the scene. These projector-like sources are far more capable than a camera flash: they provide full brightness and color control over the 2D set of rays emitted at each moment in time. The camera can therefore project arbitrarily complex illumination patterns onto the scene, capture images of the illuminated scene, and compute information about it. Although the tags are invisible to the human eye, an IR-sensitive photodetector later picks up their time-varying codes.
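Below is a hedged sketch of how such blinking IR tags might be decoded from a short video: each pixel's brightness over time is correlated against the known on/off codes. The codes, threshold, and function names are invented for illustration; the linked project's actual decoding scheme may differ.

```python
# Decode temporally coded (blinking) IR dots from a (T, H, W) video by
# normalized correlation of each pixel's time series with known binary codes.
import numpy as np

# Hypothetical 8-frame on/off codes identifying two different tags.
TAG_CODES = {
    "tag_A": np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float),
    "tag_B": np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float),
}

def decode_tags(frames, threshold=0.8):
    """frames: (T, H, W) IR video, T matching the code length -> tag pixel locations."""
    signal = frames - frames.mean(axis=0)                # remove the ambient IR level
    hits = {}
    for name, code in TAG_CODES.items():
        c = code - code.mean()
        corr = np.tensordot(c, signal, axes=(0, 0))      # per-pixel correlation
        corr /= (np.linalg.norm(c) * np.linalg.norm(signal, axis=0) + 1e-8)
        hits[name] = np.argwhere(corr > threshold)       # pixels matching this tag
    return hits

# Toy usage with a synthetic 8-frame IR sequence.
frames = np.random.rand(8, 120, 160)
detections = decode_tags(frames)
```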
http://www1.cs.columbia.edu/CAVE/projects/what_is/
http://www.cs.columbia.edu/CAVE/project ... ep_camera/
http://graphics.stanford.edu/projects/lightfield/
http://www1.cs.columbia.edu/CAVE/projects/photo_tags/
http://www1.cs.columbia.edu/CAVE/projects/cc.php