Digital image sensors consist of a grid of light-sensitive photodiodes (pixels). When you make an exposure, each pixel receives a certain amount of light—a certain number of photons—according to the brightness of the portion of the scene being photographed that’s focused at that pixel. Inherent in this process is the moiré that can result when a finely patterned subject’s image at the focal plane conflicts with the pattern of the sensor’s pixel grid (see Diagram A).
A further complication with conventional sensors is that a given pixel records only one primary color of light. Photodiodes don’t detect color; they detect the quantity of light (photons) striking them, but not its wavelength. To produce color images, most sensors employ a Bayer filter array, a grid of green, red and blue filters named after the Kodak scientist who devised it. This grid positions a red, green or blue filter over each pixel so that each pixel receives only one of these primary colors (see Diagram B). The missing color data for each pixel is produced by interpolating data from neighboring pixels, using complex proprietary algorithms, in a process known as demosaicing. Because of this demosaicing, and because there are twice as many green pixels as red or blue ones, aliasing produces not only moiré but also false-color artifacts with Bayer array sensors.
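The interpolation at the heart of demosaicing can be sketched in a few lines. Below is a toy bilinear demosaic in Python; the RGGB layout, function names and simple neighborhood averaging are illustrative assumptions, not any manufacturer’s proprietary algorithm:

```python
import numpy as np

def bayer_pattern(h, w):
    """Channel index (0=R, 1=G, 2=B) recorded at each site of an RGGB mosaic."""
    idx = np.ones((h, w), dtype=int)   # green everywhere by default
    idx[0::2, 0::2] = 0                # red on even rows, even columns
    idx[1::2, 1::2] = 2                # blue on odd rows, odd columns
    return idx

def demosaic_bilinear(mosaic):
    """Fill in each pixel's two missing colors by averaging the nearest
    neighbors that did record those colors (simple bilinear demosaic)."""
    h, w = mosaic.shape
    pattern = bayer_pattern(h, w)
    acc = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    pm = np.pad(mosaic, 1, mode="edge")
    pp = np.pad(pattern, 1, mode="edge")
    for dy in (-1, 0, 1):              # scan each pixel's 3x3 neighborhood
        for dx in (-1, 0, 1):
            vals = pm[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            chan = pp[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            for c in range(3):
                mask = chan == c
                acc[..., c] += np.where(mask, vals, 0.0)
                cnt[..., c] += mask
    out = acc / cnt
    # Keep each pixel's own recorded sample for the channel it measured.
    yy, xx = np.mgrid[0:h, 0:w]
    out[yy, xx, pattern] = mosaic
    return out

mosaic = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 raw capture
rgb = demosaic_bilinear(mosaic)
print(rgb.shape)                                     # (4, 4, 3)
```

Note how each output pixel’s two missing channels are guesses built from neighbors; it’s this guessing, combined with a fine repeating subject pattern, that yields the false-color artifacts described above.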
To counter these effects, sensor manufacturers place an anti-aliasing filter (also known as an optical low-pass filter, or OLPF) atop the Bayer filter grid. The anti-aliasing filter is generally a multilayer unit: a top layer that slightly displaces the image horizontally, an infrared filter and a bottom layer that slightly displaces the image vertically. This blurs the image’s high frequencies (fine detail) at the pixel level, eliminating (or at least greatly reducing) moiré and artifacts, but it also reduces resolution.
Medium-format photographers want maximum sharpness and prefer to compensate for any moiré and artifacts on a per-image basis in postprocessing, so medium-format cameras don’t have anti-aliasing filters. But most users of smaller-format cameras would rather not have to deal with moiré in postprocessing, so these cameras have utilized low-pass filters.
As pixel counts have grown in DSLRs and mirrorless interchangeable-lens cameras, manufacturers have started omitting the anti-aliasing filter. The thinking is that with pixel counts so high, and thus pixel pitch so small, the pixel density itself reduces the possibility of moiré: few subject patterns are fine enough to clash with such a tightly packed grid. Moiré is still possible, but with higher pixel counts and smaller pixel pitch, it’s much less common.
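The underlying sampling argument is the Nyquist limit: a sensor can only faithfully resolve detail up to half its sampling rate, which is set by pixel pitch. A quick sketch with assumed round numbers (a 36mm-wide sensor at two hypothetical resolutions, not any specific camera’s specifications):

```python
# Illustrative only: the Nyquist limit set by pixel pitch. Detail finer
# than this can't be sampled faithfully and may alias into moiré.

def nyquist_lp_per_mm(sensor_width_mm, pixels_wide):
    pitch_mm = sensor_width_mm / pixels_wide   # center-to-center pixel spacing
    return 1.0 / (2.0 * pitch_mm)              # line pairs per millimeter

# A 36mm-wide full-frame sensor at two assumed resolutions:
for px in (4000, 8000):
    print(px, "px wide ->", round(nyquist_lp_per_mm(36.0, px), 1), "lp/mm")
```

Doubling the horizontal pixel count doubles the Nyquist limit, so a subject pattern that aliased on the coarser grid is simply resolved on the finer one.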
Nikon began the trend with the 36.3-megapixel, full-frame D800E early in 2012. (The D800E actually has the top layer of a low-pass filter and a second layer that cancels the first layer’s effect; Nikon also offers the D800 with a weak conventional low-pass filter.) Other current DSLRs with no low-pass filter include Nikon’s 24-megapixel D7100, D5300 and D3300, and Pentax’s 16-megapixel K-5 IIs and 24-megapixel K-3. The K-3 has a unique two-strength anti-aliasing function simulator that uses the sensor-shift shake-reduction mechanism. It rapidly moves the sensor down one pixel, right one pixel and up one pixel, slightly blurring the image at the pixel level as an anti-aliasing filter would.
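The effect of the K-3’s sensor-shift pattern can be modeled as averaging one-pixel-shifted copies of the image. A rough Python sketch (an illustrative model, not Pentax’s actual processing):

```python
import numpy as np

# Rough model: moving the sensor down, right and up by one pixel during the
# exposure means each photosite integrates light from a 2x2 patch of the
# image -- equivalent to averaging four one-pixel-shifted copies.

def aa_simulator(image):
    shifts = [(0, 0), (1, 0), (1, 1), (0, 1)]   # the sensor's four positions
    acc = np.zeros_like(image, dtype=float)
    for dy, dx in shifts:
        # np.roll wraps at the edges; fine for this periodic test pattern
        acc += np.roll(image, (dy, dx), axis=(0, 1))
    return acc / len(shifts)

# A one-pixel checkerboard -- the finest pattern the grid can hold --
# averages out to flat gray, which is exactly what suppresses moiré.
checker = np.indices((4, 4)).sum(axis=0) % 2
print(aa_simulator(checker))
```

The checkerboard, the highest spatial frequency the sensor grid can represent, is blurred to a uniform 0.5: frequencies at the aliasing limit are killed, at the cost of some fine detail, just as with a physical low-pass filter.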
Non-DSLRs with no low-pass filter include Sony’s 24-megapixel full-frame a7 and RX1R, and Fujifilm models with their APS-C X-Trans sensor. The X-Trans features a unique RGB filter array that differs from conventional Bayer arrays by using a more random arrangement of the red, green and blue pixels in every horizontal and vertical row. This minimizes moiré and false colors without needing the blur-inducing anti-aliasing filter. And, of course, Sigma DSLRs and DP-series compact cameras use APS-C Foveon X3 sensors, which record all three colors at each pixel site and thus don’t need low-pass filters (see the sidebar on Foveon sensors).
As medium format begins to shift from CCD to CMOS sensors, we’re seeing the same avoidance of anti-aliasing filters in their CMOS models. Phase One’s IQ250 and Hasselblad’s H5D-50c, which both use 50-megapixel Sony CMOS sensors, don’t have anti-aliasing filters. (Learn more about sensor technology in "Sensors Un-Sensored" in this issue of DPP.)
So what’s the bottom line? If maximum resolution trumps all else, and you’re willing to deal with possible moiré and artifacts in postprocessing, you’ll enjoy the extra sharpness of a sensor without a low-pass filter. (Note that moiré is most evident with subjects whose fine pattern conflicts with the sensor’s pixel grid, so it often can be avoided simply by moving the camera a bit up or down, left or right, closer or farther away, or slightly rotating the camera or subject, if possible, or changing focal length or the focus point. You can use maximum magnification in Live View mode to check for moiré.) If you specialize in subjects with fine repeating patterns or shoot JPEGs rather than RAW (and, of course, JPEGs are a bad idea if maximum image quality is your goal!), then you might be better off with a camera that has an anti-aliasing filter. Of course, if you shoot medium-format digital, your choice is made: None of those cameras has an anti-aliasing filter.
As explained in this article, conventional Bayer sensors record just one primary color at each pixel site; the missing colors for each pixel are produced by interpolating data from neighboring pixels. This compounds the problems of moiré and false-color artifacts, generally requiring the presence of a blurring anti-aliasing filter to minimize them.
Rather than using a Bayer array, the Foveon sensor stacks three layers of pixels, taking advantage of the fact that light penetrates silicon to different depths depending on wavelength.
Foveon sensors have used this principle since the first Sigma DSLR in 2003. Now Sigma has introduced a new Foveon sensor in its dp Quattro cameras. Like previous Foveon X3 sensors, the new X3 Quattro uses vertical color separation technology rather than colored filters to derive color information. Like previous X3s, it stacks three pixel layers: the top recording mainly blue wavelengths, the middle mainly green and the bottom mainly red. Where previous X3 sensors had three layers of identical pixel count, the new Quattro features a 4:1:1 ratio: the top layer has four pixels for each pixel in the lower layers. This allowed Sigma to increase the pixel count while reducing noise and speeding up processing and writing times, thanks to less total data per image file, all while retaining the essential Foveon assets: each primary color recorded at every pixel site, no moiré and no need for a blurring optical low-pass filter. A new TRUE III processor designed for the Quattro sensor optimizes image quality and speeds performance. (For the record, the 14-bit RAW files output by the new X3 Quattro sensor measure 5424×3616 pixels, compared to 4704×3136 for the X3 Merrill sensor’s 12-bit files.)
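Those pixel dimensions let us sanity-check the “less total data” claim with simple arithmetic, with the lower-layer counts inferred from the 4:1:1 ratio described above:

```python
# Back-of-the-envelope check using the pixel dimensions quoted above;
# lower-layer counts are inferred from Sigma's 4:1:1 description.

merrill_layer = 4704 * 3136                 # each of three equal layers
merrill_total = 3 * merrill_layer

quattro_top = 5424 * 3616                   # blue-recording top layer
quattro_lower = quattro_top // 4            # green and red layers: 1/4 count
quattro_total = quattro_top + 2 * quattro_lower

print(f"Merrill samples: {merrill_total:,}")   # 44,255,232
print(f"Quattro samples: {quattro_total:,}")   # 29,419,776
```

So despite the higher output resolution, the Quattro sensor reads roughly a third fewer raw samples per frame than the Merrill, which is where the noise, processing and write-time gains come from.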