Tuesday, August 16, 2011
Misinformation: Camera Tech
Exciting new developments in computational photography may render aperture irrelevant
In early July, researchers at Cornell University developed a lens-free camera that's literally pinhead-sized. At one-hundredth of a millimeter thick and half a millimeter on each side, the microscopic camera uses no lenses or moving parts and is said to cost pennies to manufacture. Affectionately dubbed the Planar Fourier Capture Array, the "camera" is a flat piece of silicon that looks like a tiny compact disc, and each pixel captures a vital piece of imaging information that can be combined computationally into a single image.
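The idea of combining per-pixel measurements computationally can be sketched in a few lines. This is a toy illustration, not the Cornell team's actual reconstruction pipeline: we simply assume each sensor element reports one Fourier coefficient of the scene, so that the full image can be recovered with an inverse 2D Fourier transform.

```python
import numpy as np

def reconstruct(coeffs):
    """Recover an image from its full set of 2D Fourier coefficients.

    Toy model: each 'pixel' of the lensless sensor is assumed to measure
    one complex Fourier component of the scene; given all of them, the
    scene is recovered with an inverse 2D FFT.
    """
    return np.real(np.fft.ifft2(coeffs))

# Simulate: a random 16x16 scene, "measured" as Fourier coefficients.
scene = np.random.rand(16, 16)
measurements = np.fft.fft2(scene)   # stand-in for the sensor readout
recovered = reconstruct(measurements)
```

With a complete set of coefficients the reconstruction is exact to floating-point precision; the engineering challenge in a real device is that each physical pixel captures its component imperfectly and incompletely.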
Myth: Losing Focus Is A Bad Thing
The term "camera" is vague in that all it implies is a device that can capture images and, in general, store them. As digital technology advances and computers become more and more intrinsic to the photographic process, even this broad definition is going to be put to the test. A new technology from Lytro, for instance, allows you to adjust the focus of an image after it already has been taken. The camera's core concept is the "light field," which comprises all of the rays of light in a scene. Rather than funneling light rays through a lens and flattening them onto a camera's sensor as a two-dimensional projection, the Lytro camera projects these rays onto a sensor behind an incredibly efficient microlens array, preserving their directional information. Lytro is introducing a new file format (.lfp), and the camera includes an internal "light field engine" that works in concert with desktop software to produce "Living Pictures" that maintain multiple focusing points throughout the image.
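After-the-fact refocusing is often demonstrated with a shift-and-add scheme over the sub-aperture views of a light field. The sketch below is a simplified illustration under that assumption, not Lytro's proprietary algorithm: the hypothetical `refocus` function takes a 4D array of sub-aperture images and a relative depth parameter, shifts each view in proportion to its offset from the aperture center, and averages.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add over sub-aperture views.

    light_field: 4D array of shape (U, V, H, W) -- a grid of sub-aperture
    images, one per position on the aperture/microlens plane.
    alpha: relative focal depth; 0 keeps the original focal plane, while
    positive or negative values move the virtual plane nearer or farther.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture center, then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects at the depth matching `alpha` line up across the shifted views and come out sharp, while everything else is averaged into blur, which is exactly the trade a physical aperture makes, except here it happens in software after capture.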
Particularly for action, event and reportage photographers, the implications are enormous, perhaps even allowing for multiple focal points in an image without extensive postprocessing to stack multiple exposures. The technology, however, isn't new. In fact, researchers have been aware of light-field theory for three-quarters of a century, and light-field rendering of 2D images from 4D information was suggested by Marc Levoy and Pat Hanrahan as early as 1996. A German company called Raytrix has had plenoptic light-field cameras available for purchase for nearly a year, offering similar possibilities at a considerably higher price. Adobe also has been experimenting with a plenoptic camera that takes a three-dimensional image, utilizing a grouping of specially configured lenses to combine multiple captures into one behemoth 100-megapixel image.
There has been a lot of discussion online about how this technology could, or could not, change the field of photography as we know it. Plenoptics removes many of the physical limitations of a lens, including lens aberrations, missed focus and the binding relationship between depth of field and aperture. Removing focus (and, to an extent, aperture) from the image equation has broad implications, however, culminating in the notion that you could theoretically place a camera anywhere, fire it remotely, make your adjustments on a computer and remove the photographer from the image-taking process entirely. Of course, you can do that already. Whether through focus stacking of multiple exposures or software that gives you extensive control over bokeh, these options are already available to photographers and clients. Regardless, removing focus doesn't remove the photographer. A camera, no matter how advanced, is just a tool; a photographer is someone who knows how to wield that tool effectively.