The Quest For Ultimate Image Quality

Quantum efficiency in photography is a summary measure of how effectively incoming light information is put to use. Unlike heavily compromised direct comparisons of image noise, sensor formats and resolutions, quantum efficiency factors in all of the variables along the image chain for a direct and unbiased comparison of noise performance, as well as final image quality. The advantages of using quantum efficiency as a comparison tool were recently reflected when the full-frame Canon EOS 5D Mark III received a lower overall score on DxOMark.com than the Nikon D800, even though the D800 has smaller pixels at 4.7 microns while the Mark III sports comparatively large 6.25-micron pixels. Adding to the confusion, Nikon’s full-frame, 36.3-megapixel D800 outscores every medium-format camera that DxOMark has tested by a significant margin. With results like these, it becomes apparent that when measuring equivalence between imaging systems, a variety of parameters must be equalized so that fair comparisons can be made not only between cameras, but between entire imaging systems with their unique technologies and mechanics.


DxOMark ratings of camera image quality have become something of a de facto standard. The company tests new equipment as it becomes available. Using their interactive website, you can compare camera models and see how they stack up under a variety of hypothetical shooting situations.


The Image Chain

After light enters a lens, the camera begins the process of creating an image by converting physical information in the form of light photons into digital information in the form of an electrical charge. This is done primarily via the photodiodes at the sensor, the anti-aliasing filter (which reduces sharpness right off the bat) and the analog-to-digital converter. The process is lossy because converting a real-world projection into a mathematical representation requires rounding off the nearly infinite variation of the real world into a finite digital approximation. Most digital cameras use a Bayer sensor layout with a red, green and blue checkerboard pattern of pixels. Half of these pixels are dedicated to capturing green information as a measure of luminance, since our eyes are most sensitive to the green portion of the spectrum. The Bayer pattern also requires exact positioning in terms of depth so that light rays channeled from the lens and aperture penetrate to an optimum distance for accurate measurement. This means the sensor can’t simply be made thicker, so more surface area is the only way to gather extra light information, which is why competing sensor formats exist in DSLRs, from full frame (the same size as a frame of 35mm film) all the way down to Four Thirds/Micro Four Thirds at roughly one-quarter the area.
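To make that checkerboard layout concrete, here’s a minimal sketch in Python/NumPy. The RGGB arrangement and the crude bilinear estimate are assumptions for illustration only; real in-camera demosaicing is far more sophisticated.

```python
import numpy as np

def bayer_mask(height, width):
    """Build an RGGB Bayer filter mask: each photosite records only one channel."""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"   # red on even rows, even columns
    mask[0::2, 1::2] = "G"   # green fills the rest of those rows...
    mask[1::2, 0::2] = "G"   # ...so half of all photosites are green
    mask[1::2, 1::2] = "B"   # blue on odd rows, odd columns
    return mask

def estimate_green(raw, y, x):
    """Crude bilinear estimate of the missing green value at a red or blue
    photosite, averaging its four green neighbors (edges ignored for simplicity)."""
    return (raw[y - 1, x] + raw[y + 1, x] + raw[y, x - 1] + raw[y, x + 1]) / 4.0

print(bayer_mask(4, 4))
# A toy 3x3 patch of raw values; the center photosite lacks a green sample:
raw = np.array([[10, 200, 12],
                [190, 50, 210],
                [11, 205, 13]], dtype=float)
print("Estimated green at center:", estimate_green(raw, 1, 1))
```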

Interestingly enough, each photosite in a Bayer array only measures luminance; color is reconstructed by interpolating the values of neighboring red, green and blue photosites. The most popular analogy for how these photodiodes work is an empty bucket that captures light instead of water. If there’s too much light in an overexposed image, for instance, the pixels fill beyond capacity to the point of clipping, where potential image information is lost. If only a small amount of light hits the bucket, then only a small amount of image information is collected. When it comes to pixel size, just as with the sensor, the larger the area, the better it is at gathering light information.
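Here’s a rough sketch of the bucket analogy; the 60,000-electron full-well capacity is an illustrative assumption, not the spec of any particular sensor.

```python
import numpy as np

FULL_WELL = 60_000  # hypothetical full-well capacity in electrons (illustrative only)

def capture(photon_counts):
    """Clip each photosite at the full-well capacity: anything beyond it is lost."""
    electrons = np.minimum(photon_counts, FULL_WELL)
    clipped = photon_counts > FULL_WELL
    return electrons, clipped

# A dim patch, a well-exposed patch and a blown highlight:
photons = np.array([800, 35_000, 150_000])
electrons, clipped = capture(photons)
print(electrons)  # [  800 35000 60000] -> detail beyond the 60k ceiling is gone
print(clipped)    # [False False  True]
```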

Pixel density refers to the number of these photodiodes packed into the tiny amount of real estate on the sensor. It also can be expressed as sensor resolution in megapixels (pixels across the width multiplied by pixels along the height). We’re now seeing that a high pixel density, even on a smaller sensor, isn’t a bad thing. It’s almost always a good thing, actually, because there’s a direct correlation between sensor resolution and output resolution in print. Also, the more pixels, the better the detail in the image regardless of the sensor area, which translates to better acutance for sharper edges and also to more color information, thanks to the complex relationship between red, green and blue pixels on a Bayer array.
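As a quick illustration of the resolution-to-print relationship, here’s a small helper. The 300 ppi target is a common print guideline used as an assumption here, and the pixel dimensions simply represent a 36 MP and a 12 MP sensor.

```python
def megapixels(width_px, height_px):
    return width_px * height_px / 1e6

def max_print_size_in(width_px, height_px, ppi=300):
    """Largest print, in inches, that a given pixel resolution supports at the target ppi."""
    return width_px / ppi, height_px / ppi

# A 36 MP sensor (7360 x 4912) versus a 12 MP sensor (4256 x 2832):
for w, h in [(7360, 4912), (4256, 2832)]:
    pw, ph = max_print_size_in(w, h)
    print(f"{megapixels(w, h):.1f} MP -> about {pw:.1f} x {ph:.1f} in. at 300 ppi")
```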

Historically, this required a trade-off, as higher sensor resolution meant smaller pixels, and the gaps between all of those added photodiodes claimed a proportionally larger share of the incoming light. This was most pronounced in compact cameras with DSLR levels of resolution. But technology has improved a great deal, and modern advances like microlens arrays and backside illumination compensate for these downsides by making much more efficient use of incoming light, channeling rays that would otherwise have fallen into the gaps between photoreceptors. Modern sensors are remarkably efficient, and even the gaps serve a purpose, housing components and integrated circuits that aren’t light-sensitive.


Canon EOS 5D Mark III


Image Noise

Noise appears in an image as seemingly random variations in brightness (luminance noise) or color (chrominance noise) that vary in size, placement and strength. The signal-to-noise ratio describes the amount of desired information (signal) relative to the amount of unwanted, random information that’s created (read noise) or captured (photon noise, also known as shot noise). In the incoming light, photon fluctuations cause shot noise that’s largest in absolute terms in the brighter areas of an image, while voltage fluctuations introduced during the digital processing of the image add read noise that’s most visible in the darker areas of an exposure. With a shorter exposure at a high ISO, for example, the faster shutter means fewer photons and therefore less shot noise in absolute terms, but the read noise introduced by the camera’s heightened processing increases, and any read noise added prior to ISO amplification is amplified right along with the signal. In contrast, a longer exposure produces more signal and, consequently, more shot noise in absolute terms, yet because shot noise only grows with the square root of the signal, the signal-to-noise ratio improves and read noise becomes far less significant by comparison.
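To make the two noise sources concrete, here’s a minimal simulation sketch in Python/NumPy; the photon counts and the 3-electron read noise are illustrative assumptions, not measurements of any particular camera. Shot noise follows Poisson statistics, so it grows only with the square root of the signal, while read noise is modeled as a fixed Gaussian contribution added at readout.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_snr(mean_photons, read_noise_e=3.0, n=100_000):
    """Simulate a flat patch of pixels: Poisson shot noise plus Gaussian read noise."""
    shot = rng.poisson(mean_photons, n).astype(float)   # photon-arrival statistics
    read = rng.normal(0.0, read_noise_e, n)             # readout electronics noise
    signal = shot + read
    return signal.mean() / signal.std()                 # signal-to-noise ratio

for photons in (10, 100, 1_000, 10_000):
    print(f"{photons:>6} photons -> SNR ~ {patch_snr(photons):.1f}")
# SNR improves roughly with the square root of the captured light;
# read noise only matters much at the dim end of the scale.
```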

Both shot noise and read noise are random, and they’re the two primary causes of image noise. Other variables also raise noise levels to a lesser extent, including low levels of infrared radiation and, especially during longer exposures, excess heat from both the camera itself and the surrounding environment. These can excite electrons that are mistaken for image information during the analog-to-digital conversion, but this type of noise isn’t random, and it can be compensated for when manufacturers choose to go to the expense of doing so.

Interestingly, noise also limits the potential dynamic range of each exposure. The amount of noise present in the darker areas affects true black by establishing a noise floor upon which the rest of the dynamic range is based. So the more noise there is, the more limited the dynamic range (the number of steps from true black to sensor saturation in the highlights) becomes. Extrapolating further, ISO sensitivity limits dynamic range, as well, because amplifying the signal also amplifies noise. However, at high enough ISOs, the noise created by amplification becomes the lesser evil, because a lower ISO would demand either a slower shutter speed or a wider aperture, sacrificing sharpness to motion blur or narrowing the depth of field. Therein lies the rationale for selectable ISO sensitivity in the first place: it’s always a trade-off between image noise, image sharpness and depth of field.
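One common back-of-the-envelope definition of engineering dynamic range is the ratio of full-well capacity to the read-noise floor, expressed in stops. The figures below are hypothetical and ignore ISO-dependent behavior, but they show how a noisier readout eats into usable range.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: the ratio of the largest recordable
    signal (full-well capacity) to the noise floor (read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical figures: a quieter readout buys more usable stops.
print(f"{dynamic_range_stops(60_000, 3):.1f} stops")   # ~14.3
print(f"{dynamic_range_stops(60_000, 15):.1f} stops")  # ~12.0
```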


Lens Limitations

There are a number of competing sensor formats, with most cameras sporting a full-frame, APS-C, APS-H or Four Thirds/Micro Four Thirds sensor. When it comes to evaluating a lens on each of these formats, most photographers know that there’s an equivalence value that compares the angle of view and focal length to the same aspects on a full-frame camera. So a 100mm lens on a sub-full-frame APS-C sensor, with its 1.5x/1.6x (Nikon/Canon) crop factor, translates to a 150mm/160mm equivalent field of view. When optics are affected this drastically, you can see that competing sensor sizes also require a way to effectively judge camera performance and quantum efficiency, not just across competing sensor formats, but across competing imaging systems. Sensors and camera designs even show subtle fluctuations by model and generation.
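The equivalence arithmetic itself is simple enough to sketch; the crop factors below are the commonly quoted nominal values.

```python
CROP_FACTORS = {"full frame": 1.0, "APS-C (Nikon)": 1.5,
                "APS-C (Canon)": 1.6, "Four Thirds": 2.0}

def equivalent_focal_length(focal_mm, crop):
    """Full-frame focal length giving the same angle of view."""
    return focal_mm * crop

for fmt, crop in CROP_FACTORS.items():
    eq = equivalent_focal_length(100, crop)
    print(f"100mm on {fmt:>14}: ~{eq:.0f}mm equivalent field of view")
```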

As pixel density (aka sensor resolution) increases, it can also reach the limits of what the lens is able to resolve. Ironically, full-frame lenses used with smaller sensors have to resolve onto a much smaller area. While the cropped image circle doesn’t degrade image quality (in fact, the center of a lens projection is often its sharpest area), enlarging the image further also magnifies flaws and poorly focused areas. More importantly, if the lens can’t resolve to the resolution demanded by a sub-full-frame sensor (something that can be judged from MTF charts), image quality is degraded not by the resolution of the sensor, but by the subpar resolution of the lens. A lens resolves best at intermediate apertures and trails off dramatically at the largest apertures (ƒ/1.4, ƒ/2.0), thanks to optical aberrations that limit resolution, contrast and fine edge detail.

Lens diffraction also limits total image resolution regardless of megapixels, and it’s most noticeable at narrow apertures. There’s a limit to how small an aperture can get before the light passing through it spreads and blurs together, because an opening that small can no longer focus the incoming light waves to a tight point. The resulting diffraction pattern is known as the Airy disk, and even a perfectly made lens with a perfectly circular aperture has a fundamental resolution limit because of it. Diffraction also sets in gradually, another reason why intermediate apertures are often the sharpest on a lens.
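The diffraction limit can be estimated with the standard Airy disk approximation, where the diameter of the disk is roughly 2.44 times the wavelength of light times the ƒ-number; green light at about 550 nanometers is assumed in this sketch.

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Approximate Airy disk diameter (to the first minimum): d = 2.44 * wavelength * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (2.0, 5.6, 11, 22):
    print(f"f/{n:>4}: Airy disk ~ {airy_disk_diameter_um(n):.1f} microns")
# f/2 -> ~2.7 microns, f/22 -> ~29.5 microns: a narrow aperture spreads detail
# across a blur spot far larger than a typical 4-6 micron pixel.
```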

There’s a correlation between Airy disk diffraction and sensor size, as well. Larger sensors require smaller apertures to achieve the same depth of field as smaller sensors because sensor size affects the angle of view, so for a given aperture, total depth of field decreases as sensor size increases. The depth of field from an aperture of ƒ/2.0 on a 100mm lens on an APS-C camera with its 1.6x crop factor, for instance, would require the equivalent of ƒ/3.2 at a 160mm focal length on a full-frame sensor to produce the same perspective. Because the Airy disk grows as the aperture narrows, and because it occupies a larger share of a smaller sensor, it can become larger than the circle of confusion (the largest blur spot that still appears acceptably sharp at a given output size), which reduces sharpness. In fact, the smaller the aperture (ƒ/32, ƒ/22), the larger the Airy disk and, hence, the lower the achievable resolution. Lens limitations aren’t often broached when discussing image quality in regard to the sensor, but they’re a limiting factor on sub-full-frame cameras, in particular, and they’re at least partially responsible for the perceived lack of image quality when comparing full-frame sensors to sub-full-frame sensors.
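Building on the Airy disk estimate above, a quick sketch can compare that blur against conventional circle-of-confusion limits for each format (roughly 0.030mm for full frame and 0.019mm for APS-C are common assumptions) and show the crop-factor scaling of equivalent apertures.

```python
def airy_disk_mm(f_number, wavelength_nm=550):
    """Airy disk diameter in millimeters, d = 2.44 * wavelength * N."""
    return 2.44 * (wavelength_nm / 1e6) * f_number

def equivalent_aperture(f_number, crop):
    """Aperture on full frame giving roughly the same depth of field."""
    return f_number * crop

# Conventional circle-of-confusion limits (assumed): smaller formats tolerate
# less blur because their images are enlarged more for the same output size.
COC_MM = {"full frame (1.0x)": 0.030, "APS-C (1.6x)": 0.019}

for n in (8, 16, 22, 32):
    airy = airy_disk_mm(n)
    status = ", ".join(f"{fmt}: {'soft' if airy > coc else 'ok'}" for fmt, coc in COC_MM.items())
    print(f"f/{n}: Airy disk {airy:.4f}mm -> {status}")

print(f"f/2.0 on a 1.6x crop gives roughly the depth of field of "
      f"f/{equivalent_aperture(2.0, 1.6):.1f} on full frame")
```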


Nikon D800


The Final Resolution

In summation, more pixels on a sensor will indeed lead to higher image quality if all other factors are ignored. It’s also true that a large sensor with smaller pixels will have less noise than a smaller sensor with larger pixels, for two reasons: 1) there’s more total area absorbing light information; and 2) the image from the larger sensor doesn’t have to be enlarged as much to reach a comparable output size. Smaller pixels on a given sensor will also be noisier than larger ones, because each captures less image information relative to the noise being created, which can be expressed mathematically through the signal-to-noise ratio. Conversely, more pixels can fit on a sensor when the pixels are smaller, which yields finer resolution and detail for larger prints and output files.
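As a rough sketch of those two effects (the pixel areas, photon counts and read noise below are illustrative assumptions), per-pixel SNR clearly favors larger photosites, yet once both images are brought to the same output size, the aggregated signal from the smaller pixels closes most of the gap.

```python
import numpy as np

rng = np.random.default_rng(1)

def patch_snrs(pixel_area_um2, n_pixels, photons_per_um2=200.0, read_noise_e=3.0):
    """Simulate a flat gray patch and report per-pixel SNR alongside the SNR
    after averaging everything down to one value (a stand-in for viewing
    both sensors' images at the same output size)."""
    mean_signal = photons_per_um2 * pixel_area_um2
    pixels = rng.poisson(mean_signal, n_pixels) + rng.normal(0, read_noise_e, n_pixels)
    per_pixel = pixels.mean() / pixels.std()
    equal_output = per_pixel * np.sqrt(n_pixels)  # averaging n values cuts noise by sqrt(n)
    return per_pixel, equal_output

# The same hypothetical sensor area split into a few large or many small pixels:
for label, area_um2, count in [("large pixels", 36.0, 1_000), ("small pixels", 9.0, 4_000)]:
    pp, eo = patch_snrs(area_um2, count)
    print(f"{label}: per-pixel SNR ~ {pp:.0f}, equal-output SNR ~ {eo:.0f}")
```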

That said, more pixels on the sensor (megapixels) aren’t a guarantee of better image quality and, in fact, added resolution can degrade image quality if the sensor doesn’t employ microlenses or other measures that increase the overall quantum efficiency of light gathering and interpretation. The same can be said for larger sensor formats. Ideally, a large sensor with a lot of pixel resolution offers the best of both worlds, as long as that sensor can use all of that information efficiently. Larger sensors gather more light and, for that reason, are inherently less noisy than smaller sensors, but overall quantum efficiency matters most, and even large sensors with poorly designed imaging processors can perform worse than well-designed, sub-full-frame imaging chains.


Hasselblad H4D-40

This is because quantum efficiency encompasses a variety of in-camera functions that affect total image quality: the lens, the sensor, the processing algorithms, the anti-aliasing filter, the analog-to-digital converter and in-camera noise reduction, all the way out to software edits on a computer. All of these factors influence image quality to varying degrees, but barring glaringly bad manufacturing, no single one of them is solely responsible for overall image quality, even though improving any single component will also improve overall image quality, if only negligibly.

Even quantum efficiency as a measurement leaves out a number of fundamental imaging-system comparisons, like color rendering and image distortion. It’s important to remember that each element, no matter its perceived importance, is only a single parameter of the image chain, and to evaluate an entire imaging system rather than getting stuck on any single specification, especially one as notorious as megapixel count or sensor size.
