
Tuesday, August 14, 2012

The Quest For Ultimate Image Quality

The complex relationship between cameras, sensors and noise: higher resolution, larger sensors and bigger pixels won't necessarily lead to improved image quality



Canon EOS 5D Mark III
Image Noise
Noise appears in an image as seemingly random variations in brightness (luminance noise) or color (chrominance noise) that vary in size, placement and strength. The signal-to-noise ratio describes the amount of desired information (signal) relative to the amount of unwanted, random information that's either created in the camera (read noise) or captured along with the light (photon noise, also known as shot noise). In the incoming light, photon fluctuations cause noise whose absolute level is highest in the brighter areas of an image. Conversely, voltage fluctuations introduced during readout and digital processing add read noise that's most apparent in the darker areas of an exposure. With a short exposure at a high ISO, for example, the faster shutter collects less light, so photon shot noise decreases in absolute terms, while the read noise introduced by the camera's heightened processing increases. The effect is twofold, because any read noise introduced before ISO amplification is amplified right along with the signal. A longer exposure, in contrast, produces more signal and, consequently, more shot noise in absolute terms, but read noise becomes insignificant next to the much stronger signal and its shot noise.
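To make that interplay concrete, here's a minimal simulation sketch in Python/NumPy. It assumes purely Poisson shot noise, Gaussian read noise split into pre- and post-amplification components, and illustrative electron counts rather than figures from any real camera:

```python
import numpy as np

rng = np.random.default_rng(42)

def pixel_snr(mean_photons, pre_read_e, post_read_e, iso_gain, n=200_000):
    """Simulate one pixel n times and return its signal-to-noise ratio."""
    photons = rng.poisson(mean_photons, n)   # shot noise: std grows as sqrt(signal)
    pre = rng.normal(0.0, pre_read_e, n)     # read noise added before ISO amplification
    post = rng.normal(0.0, post_read_e, n)   # read noise added after amplification
    out = (photons + pre) * iso_gain + post  # pre-amp read noise gets amplified too
    return out.mean() / out.std()

# Long exposure at base ISO: strong signal, noise is mostly shot noise, SNR is high.
print(f"long exposure, base ISO : SNR ~ {pixel_snr(10_000, 3.0, 5.0, 1.0):.0f}")
# Short exposure at high ISO: weak signal, amplified read noise matters, SNR is low.
print(f"short exposure, high ISO: SNR ~ {pixel_snr(100, 3.0, 5.0, 16.0):.0f}")
```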

Both shot noise and read noise are random, and they're the two primary causes of image noise. Other variables also affect noise levels to a lesser degree, including low levels of infrared radiation and, especially during longer exposures, heat from both the camera's own electronics and the ambient temperature. Heat can free electrons that are mistaken for image information during the analog-to-digital conversion, but this type of noise isn't random, so it can be compensated for when manufacturers choose to go to the expense of doing so.
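Because that thermal, fixed-pattern component repeats from frame to frame, it can be subtracted out. Below is a simplified sketch of dark-frame subtraction, one common form of such compensation; the arrays are synthetic stand-ins for real raw sensor data:

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (4, 6)  # a tiny "sensor" for illustration

fixed_pattern = rng.uniform(0, 20, shape)  # hot pixels / dark current: repeatable
scene = rng.uniform(100, 200, shape)       # the light we actually want to record

exposure = scene + fixed_pattern + rng.normal(0, 2, shape)  # normal light frame
dark_frame = fixed_pattern + rng.normal(0, 2, shape)        # same exposure, shutter closed

corrected = exposure - dark_frame  # the repeatable pattern cancels; random noise remains
print(f"mean residual error: {np.abs(corrected - scene).mean():.2f}")
```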

Interestingly, noise also limits the potential dynamic range of each exposure. The amount of noise present in the darker areas sets the noise floor, the minimum usable signal upon which the rest of the dynamic range is based. So if there's more noise, dynamic range (the number of stops from true black to sensor saturation) becomes limited. Extrapolating further, ISO sensitivity limits dynamic range as well, because amplifying the signal adds more noise to the image. At high enough ISOs, though, the noise created by amplification becomes the lesser evil: a lower ISO would force either a slower shutter speed or a wider aperture and, consequently, a longer exposure that could overflow the sensor with information, truncating even more of the image. Therein lies the rationale for selectable ISO sensitivity in the first place; it's always a trade-off among image noise, image sharpness and depth of field.
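As a back-of-the-envelope illustration, dynamic range in stops can be modeled as the distance between the noise floor and sensor saturation. The full-well and noise-floor figures below are assumptions, not measurements from any particular camera:

```python
import math

def dynamic_range_stops(full_well_e, noise_floor_e):
    """Stops from the noise floor up to sensor saturation."""
    return math.log2(full_well_e / noise_floor_e)

full_well = 60_000  # electrons at saturation (hypothetical sensor)

print(f"base ISO (quiet floor) : {dynamic_range_stops(full_well, 4.0):.1f} stops")   # ~13.9
# A higher ISO raises the effective noise floor relative to saturation,
# which shrinks the usable range:
print(f"high ISO (raised floor): {dynamic_range_stops(full_well, 32.0):.1f} stops")  # ~10.9
```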
 
We're now seeing that a high pixel density, even on a smaller sensor, isn't a bad thing. It's almost always a good thing, actually, because there's a direct correlation between sensor resolution and the print resolution you can output.
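For a rough sense of that correlation, here's a quick calculation of the largest print a given pixel count supports at a typical 300 PPI; the example resolutions are illustrative figures for 16- and 36-megapixel sensors:

```python
def max_print_size(width_px, height_px, ppi=300):
    """Largest print, in inches, at the given pixels-per-inch density."""
    return width_px / ppi, height_px / ppi

for name, (w, h) in {
    "16 MP (4928 x 3264)": (4928, 3264),
    "36 MP (7360 x 4912)": (7360, 4912),
}.items():
    wi, hi = max_print_size(w, h)
    print(f"{name}: {wi:.1f} x {hi:.1f} inches at 300 PPI")
```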
 
Lens Limitations
There are a number of competing sensor formats, with most cameras sporting a full-frame, APS-C, APS-H or Four Thirds/Micro Four Thirds sensor. When evaluating a lens on each of these formats, most photographers know there's an equivalence value that compares the angle of view at a given focal length to the same angle of view on a full-frame camera. So a 100mm lens on a sub-full-frame APS-C sensor, with its 1.5x/1.6x (Nikon/Canon) crop factor, frames like a 150mm/160mm lens on full-frame. When optics are affected this drastically, you can see that competing sensor sizes also require a way to judge camera performance and quantum efficiency effectively, not just across competing sensor formats, but also across competing imaging systems. Sensors and camera designs even show subtle fluctuations by model and generation.
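That equivalence is simple arithmetic, sketched below with the common nominal crop factors (exact values vary slightly by manufacturer and model):

```python
# Nominal crop factors relative to a full-frame (36x24mm) sensor.
CROP_FACTORS = {
    "full-frame": 1.0,
    "APS-H": 1.3,
    "APS-C (Nikon)": 1.5,
    "APS-C (Canon)": 1.6,
    "Four Thirds": 2.0,
}

def equivalent_focal_length(focal_mm, fmt):
    """Full-frame-equivalent angle of view for a lens on a smaller format."""
    return focal_mm * CROP_FACTORS[fmt]

for fmt in CROP_FACTORS:
    print(f"100mm on {fmt}: ~{equivalent_focal_length(100, fmt):.0f}mm equivalent")
```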

As pixel density (aka sensor resolution) increases, it can reach the limits of what the lens can resolve. Ironically, full-frame lenses used with smaller sensors have to resolve detail onto a much smaller area. While the cropped image circle doesn't itself degrade image quality (in fact, the center of a lens projection is often its sharpest area), enlarging that smaller capture to the same output size also magnifies glaring flaws and poorly focused areas. More importantly, if the lens can't resolve to the resolution a sub-full-frame sensor requires (something you can determine from MTF charts), image quality is degraded not by the resolution of the sensor, but by the subpar resolution of the lens. A lens's resolving power typically peaks at intermediate apertures and trails off dramatically at the largest apertures (ƒ/1.4, ƒ/2.0), where optical aberrations limit resolution, contrast and fine edge detail.
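To see why smaller, denser sensors demand more of a lens, consider the sensor's Nyquist limit, the finest detail it can record, in line pairs per millimeter: it rises as pixel pitch shrinks. The sketch below uses illustrative sensor widths and pixel counts:

```python
def nyquist_lp_per_mm(sensor_width_mm, width_px):
    """Finest detail the sensor can record, in line pairs per millimeter."""
    pitch_mm = sensor_width_mm / width_px  # pixel pitch
    return 1.0 / (2.0 * pitch_mm)          # one line pair spans two pixels

# The same 16 MP pixel count packed onto two formats (hypothetical figures):
print(f"full-frame 16 MP: {nyquist_lp_per_mm(36.0, 4928):.0f} lp/mm")  # ~68
print(f"APS-C 16 MP:      {nyquist_lp_per_mm(23.6, 4928):.0f} lp/mm")  # ~104
```

The second sensor needs a lens that resolves roughly half again as many line pairs per millimeter to deliver the same per-pixel sharpness, which is exactly the demand MTF charts help you evaluate.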

 
