Friday, August 31, 2012

The Quest For Ultimate Image Quality

By David Willis Published in Digital SLR Cameras
Quantum efficiency in photography is a summary measure of how effectively a camera converts incoming light into usable image information. Unlike heavily compromised direct comparisons of image noise, sensor formats and resolutions, quantum efficiency factors in all of the variables along the image chain, allowing a direct and unbiased comparison of noise performance as well as final image quality. The advantages of using quantum efficiency as a comparison tool were recently reflected when the full-frame Canon EOS 5D Mark III received a lower overall score on DxOMark.com than the Nikon D800, even though the D800 has smaller pixels at roughly 4.7 microns while the Mark III sports comparatively large 6.25-micron pixels. Adding to the confusion, Nikon's full-frame, 36.3-megapixel D800 outscores all of the medium-format cameras that DxOMark has tested by a significant margin. With results like these, it becomes apparent that when measuring equivalence between imaging systems, a variety of parameters must be equalized so that fair comparisons can be made not only between cameras, but between entire imaging systems and their unique technologies and mechanics.
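To see why quantum efficiency folds so many variables into a single number, consider a rough Python sketch of how a pixel's signal-to-noise ratio follows from photon count, quantum efficiency and read noise. The function name and every numeric value here are illustrative assumptions, not measurements of any particular camera.

```python
import math

def snr_from_exposure(photons_per_pixel, quantum_efficiency, read_noise_e):
    """Estimate a pixel's signal-to-noise ratio from incoming photons.

    Photoelectrons collected = photons * QE; photon (shot) noise grows as
    the square root of the signal, and read noise adds in quadrature.
    """
    signal_e = photons_per_pixel * quantum_efficiency
    shot_noise_e = math.sqrt(signal_e)
    total_noise_e = math.sqrt(shot_noise_e**2 + read_noise_e**2)
    return signal_e / total_noise_e

# A sensor with smaller pixels but higher QE can match one with larger,
# less efficient pixels -- QE folds the whole chain into one number.
print(snr_from_exposure(photons_per_pixel=20000, quantum_efficiency=0.50, read_noise_e=3))
print(snr_from_exposure(photons_per_pixel=30000, quantum_efficiency=0.33, read_noise_e=5))
```

Both hypothetical sensors land at nearly the same SNR despite very different pixel sizes, which is the kind of equivalence quantum efficiency is meant to capture.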

DxOMark ratings of camera image quality have become something of a de facto standard. The company tests new equipment as it becomes available. Using their interactive website, you can compare camera models and see how they stack up under a variety of hypothetical shooting situations.

The Image Chain

After light enters a lens, the camera begins the process of creating an image by converting physical information in the form of light photons into digital information in the form of an electrical charge. This is done primarily via the photodiodes at the sensor, the anti-aliasing filter (which reduces sharpness right off the bat) and the analog-to-digital converter. But the process is lossy because converting a real-world projection into a mathematical representation requires quantization that effectively "maps" the nearly infinite variation of the real world into a rounded-off digital approximation. Most digital cameras use a Bayer sensor layout with a red, green and blue checkerboard pattern of pixels. Half of these pixels are dedicated to capturing green information as a measure of luminance, since our eyes are most sensitive to green light. The Bayer pattern also requires precise positioning in depth so that light rays channeled from the lens and aperture penetrate to the optimum distance for accurate measurement. Because the sensor can't simply be built thicker, adding surface area is the only way to gather more light information, which is why DSLRs come in competing sensor formats, from full-frame (the same size as a frame of 35mm film) all the way down to Four Thirds/Micro Four Thirds at roughly one-quarter the area.
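As a concrete illustration of the Bayer layout described above, here is a minimal Python sketch that samples a full-color image down to a single-channel RGGB mosaic, showing why half of the photosites end up recording green. The function and the exact pattern offsets are a hypothetical example; real sensors differ in the details.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image (H x W x 3) down to a single-channel RGGB mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```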

Interestingly enough, each site in the Bayer array measures only luminance; full color is extrapolated during demosaicing by estimating values from the surrounding red, green and blue pixels. The most popular analogy for how these photodiodes work at the sensor is an empty bucket that captures light instead of water. If there's too much information in an overexposed image, for instance, the pixels fill beyond capacity to the point of clipping, where potential image information is lost. If only a small amount of light hits the bucket, then only a small amount of image information is collected. When it comes to pixel size, just as with the sensor, the larger the area, the better it gathers light information.
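The bucket analogy translates almost directly into code. The sketch below, using a hypothetical quantum efficiency and full-well capacity chosen only for illustration, shows how a photodiode accumulates electrons until it clips.

```python
def expose_pixel(photons, quantum_efficiency=0.5, full_well_e=60000):
    """Model a photodiode as a bucket: electrons accumulate up to full-well capacity."""
    electrons = photons * quantum_efficiency
    return min(electrons, full_well_e)   # overflow is clipped, not recorded

print(expose_pixel(40000))    # 20000 e- : well within capacity
print(expose_pixel(200000))   # clipped at 60000 e- : a blown highlight
```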

Pixel density refers to the number of these photodiodes packed into the limited real estate of the sensor. It also can be expressed as sensor resolution in megapixels (the pixel count across the width multiplied by the pixel count along the height). We're now seeing that a high pixel density, even on a smaller sensor, isn't a bad thing. It's almost always a good thing, actually, because there's a direct correlation between sensor resolution and output print resolution. Also, the more pixels, the better the detail in the image regardless of the sensor area, which translates to better acutance for sharper edges and also to more color information, thanks to the complex relationship between red, green and blue pixels on a Bayer array.
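Because the relationship between sensor size, pixel count and pixel pitch is simple geometry, a short sketch makes the numbers concrete. The grids below are the approximate pixel dimensions of the two full-frame cameras mentioned earlier; the helper function itself is purely illustrative.

```python
def sensor_stats(width_mm, height_mm, px_wide, px_high):
    """Return resolution in megapixels and pixel pitch in microns."""
    megapixels = px_wide * px_high / 1e6
    pitch_um = width_mm * 1000 / px_wide   # center-to-center pixel spacing
    return megapixels, pitch_um

# Full-frame (36 x 24 mm) sensors at two resolutions:
print(sensor_stats(36.0, 24.0, 5760, 3840))   # ~22.1 MP, ~6.25 um pitch
print(sensor_stats(36.0, 24.0, 7360, 4912))   # ~36.2 MP, ~4.89 um pitch
```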

Historically, this required a trade-off: higher sensor resolution meant smaller pixels, and light was lost in the gaps between the ever-more-numerous photodiodes. This was most pronounced in compact cameras with DSLR-level resolutions. But technology has improved a great deal, and modern advances like microlens arrays and backside illumination compensate for these downsides by making much more efficient use of all incoming light, channeling extra light that would otherwise fall in the gaps between photoreceptors. Sensors have become remarkably efficient, and even these gaps have a benefit, as they house components and integrated circuits that aren't light-sensitive.
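To illustrate the idea of gaps and microlenses, the following sketch models a pixel's effective light-gathering area from an assumed fill factor (the light-sensitive fraction of the pixel) and an assumed microlens efficiency. Every number here is a made-up assumption chosen only to show the effect, not a property of any real sensor.

```python
def effective_light_gathering(pixel_pitch_um, active_fraction, microlens_efficiency):
    """Effective light-gathering area (square microns) of one pixel.

    A microlens redirects some of the light that would otherwise land in the
    non-sensitive gaps onto the active photodiode area.
    """
    pixel_area = pixel_pitch_um ** 2
    captured = pixel_area * (active_fraction +
                             (1 - active_fraction) * microlens_efficiency)
    return captured

print(effective_light_gathering(4.7, 0.5, 0.0))   # no microlens: half the light lost
print(effective_light_gathering(4.7, 0.5, 0.9))   # microlens recovers most of it
```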

