Sensors Un-Sensored


Digital cameras—smartphone through pro medium-format—use one of two types of image sensors: CMOS (complementary metal-oxide semiconductor) or CCD (charge-coupled device). Early digital cameras used CCD because it produced better image quality, was easier to manufacture with the technology of the day and, for all practical purposes, was the only type available. Now, interestingly, CCD is found mainly in compact digital cameras—and in high-end medium-format digital cameras and backs. Canon has been producing its own CMOS sensors for DSLRs since 2000; other DSLR makers joined in en masse when Sony introduced its Exmor CMOS sensor in 2007.

Both types of sensors do essentially the same thing: convert light (photons) into an electric charge (electrons) and then voltage, and then convert that into numbers (digital 1s and 0s). Yes, digital image sensors are analog devices—the output isn’t digital until the A/D (analog-to-digital) converter makes it so. CCDs pass the charge from pixel to pixel and convert it to voltage as it leaves the chip. CMOS sensors convert the charge to voltage in each pixel, and some even do A/D conversion on-chip.
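If you like to see that signal path spelled out, here's a minimal sketch in Python of the photons-to-numbers chain. The quantum efficiency, conversion gain and bit depth are illustrative round numbers, not figures from any real sensor.

```python
# Simplified sketch of the sensor signal chain: photons -> electrons -> voltage -> digital number.
# All constants below are illustrative, not taken from any real sensor's datasheet.

def pixel_to_digital(photons, quantum_efficiency=0.5, conversion_gain_uV_per_e=50.0,
                     full_scale_uV=100_000.0, bit_depth=14):
    electrons = photons * quantum_efficiency            # photodiode converts photons to charge
    voltage_uV = electrons * conversion_gain_uV_per_e   # charge-to-voltage conversion (on-chip for CMOS)
    # The A/D converter quantizes the analog voltage into an integer code
    max_code = (1 << bit_depth) - 1
    return round(min(voltage_uV, full_scale_uV) / full_scale_uV * max_code)

print(pixel_to_digital(1000))   # 1,000 photons -> a 14-bit value (4096 with these made-up constants)
```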

The Phase One IQ250 (top) and Hasselblad H5D-50c (above) have 44x33mm CMOS sensors, marking a shift from CCD.

CMOS sensors use less power and are less costly to produce, and image quality has caught up with CCD, so today, many higher-end compact digital cameras and all DSLRs use CMOS. In DxO's sensor testing, 11 "full-frame" (36x24mm) CMOS sensors have scored in the same ballpark as the best of the medium-format CCD sensors tested, and the even smaller 19-megapixel RED EPIC DRAGON CMOS sensor recently became the first to top 100 on the DxO scale.

CMOS is faster than CCD, making it easier to produce 1080p full HD video (although its rolling shutter makes it more prone to wobbly effects when the camera is panned during a video) and enabling such features as high-speed still shooting, in-camera HDR and contrast-based AF off the image sensor.

CCD delivers a "look" favored by many medium-format shooters, based on color reproduction, dynamic range and tonal smoothness, but even medium-format has started going CMOS. While CCD has done wonderfully well for medium-format shooters in terms of low-ISO image quality, it has lacked high-ISO performance, shooting speed and DSLR-style Live View on the LCD monitor. Now, Phase One (IQ250) and Hasselblad (H5D-50c) have introduced models with a 44x33mm CMOS sensor from Sony (with much input from the camera makers) that solves these CCD drawbacks, while delivering the image quality medium-format shooters have come to expect. The new CMOS sensor even delivers one stop more dynamic range than Phase One’s flagship IQ280/IQ260 CCD cameras. Rumor has it that Pentax is also working on a camera using this sensor.

Cutaway of a sensor showing a typical Bayer array.

To Bayer, Or Not To Bayer

From the start, the vast majority of digital cameras, whether CCD or CMOS, have used Bayer sensors, which feature a grid of red, green and blue filters, one over each pixel site. The photodiode ("pixel") receives light of only one primary color. That’s because the photodiodes are colorblind; they detect how much light is striking them, but not what color (wavelength) it is. The Bayer filter grid (named after the Kodak scientist who devised it) allows these colorblind sensors to deliver full-color images.

The Sigma X3 Quattro sensor separates colors vertically in the sensor instead of using a Bayer array.

While it works amazingly well, the Bayer system has its drawbacks. First, because each pixel receives only one primary color, data for the two missing primaries must be obtained through interpolation of data from neighboring pixels using complex proprietary algorithms. This can produce moiré and color artifacts, which must be dealt with, either by using an anti-aliasing filter over the sensor assembly or by the user during postprocessing. (See the article on AA filters, "Do You Use Any Aliases?")
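To make that interpolation concrete, here's a minimal Python sketch that fills in the two missing primaries using simple bilinear interpolation, the textbook baseline; actual cameras use far more sophisticated, proprietary algorithms, as noted above.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal demosaicing sketch for an RGGB Bayer mosaic using bilinear interpolation.
# It only illustrates why each pixel's two missing primaries must be estimated
# from neighboring pixels; it is not any camera maker's algorithm.

def demosaic_bilinear(mosaic):
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)          # RGGB layout: R at even row/col
    b_mask = (y % 2 == 1) & (x % 2 == 1)          # B at odd row/col
    g_mask = ~(r_mask | b_mask)                   # G everywhere else

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green interpolation kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue interpolation kernel

    rgb = np.zeros((h, w, 3))
    for channel, mask, kernel in [(0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)]:
        sparse = np.where(mask, mosaic, 0.0)       # keep only the samples of this primary
        rgb[..., channel] = convolve(sparse, kernel, mode='mirror')
    return rgb

mosaic = np.random.rand(8, 8)                      # stand-in for raw Bayer data
print(demosaic_bilinear(mosaic).shape)             # (8, 8, 3) full-color output
```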

Because each pixel records only one color of light, not all the photons falling on the sensor are used; just half of the green ones, and one-quarter of the red ones and blue ones can be recorded. By using wider-ranging (less specific) colored filters in the Bayer array, sensor manufacturers can allow more light to reach each pixel (at the cost of less precise color rendering); still, at least half of the green light and three-quarters of the red and blue light are lost.
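A quick back-of-the-envelope check of those fractions, counting only which pixels carry which filter (and ignoring filter transmission losses):

```python
# One 2x2 RGGB repeating tile: two green pixels, one red, one blue.
tile = ['R', 'G', 'G', 'B']
for color in 'RGB':
    fraction = tile.count(color) / len(tile)
    print(f"{color}: {fraction:.0%} of that color's photons land on a matching pixel")
# -> R: 25%, G: 50%, B: 25%, matching the "half the green, a quarter of the red and blue" figures above
```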

The Bayer system also means that the sensor doesn’t deliver as much resolution as its horizontal-by-vertical pixel count implies because not all the information coming from the lens is recorded at each pixel site.

We should state again that despite these drawbacks, Bayer sensors work very well; pro DSLRs and medium-format cameras have used them for years, with pro image quality. But the drawbacks are there.

Microlens technology has improved over time to bring more imaging light to the photosites.

Foveon (whose sensors are used in Sigma digital cameras; the company is now owned by Sigma) offered an alternative to the Bayer sensor, starting with the Sigma SD9 DSLR in 2003. Rather than using colored filters to obtain color data for each pixel, the Foveon X3 sensor stacks three pixel layers and takes advantage of the fact that light penetrates silicon to different depths, depending on wavelength (color). In effect, the top layer records mainly blue light; the middle layer, mainly green; and the bottom layer, mainly red. So each pixel site records light of all three primary colors, there's less moiré and there are no Bayer-array color artifacts, and the sensor produces higher resolution than a Bayer sensor of equal horizontal-by-vertical pixel count, in part because there's no need for a blurring AA filter. The drawbacks have been more noise than Bayer at higher ISOs and slower performance because of all the data that must be processed for each image.
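The depth-dependent absorption is easy to illustrate with the Beer-Lambert law. The absorption lengths and layer depths in this Python sketch are illustrative round numbers, not Foveon's actual design parameters; the point is only that each layer ends up weighted toward a different primary.

```python
import numpy as np

# Conceptual sketch of stacked (Foveon-style) color separation: shorter wavelengths are
# absorbed nearer the silicon surface, longer wavelengths deeper. All numbers below are
# illustrative, NOT Foveon's real design values.

absorption_length_um = {'blue': 0.3, 'green': 1.0, 'red': 3.0}   # illustrative 1/e depths
layer_boundaries_um = [0.0, 0.4, 1.5, 5.0]                        # top, middle, bottom layer spans

def fraction_absorbed_in_layer(color, top, bottom):
    """Beer-Lambert: fraction of this color's light absorbed between two depths."""
    L = absorption_length_um[color]
    return np.exp(-top / L) - np.exp(-bottom / L)

for color in ('blue', 'green', 'red'):
    fractions = [fraction_absorbed_in_layer(color, layer_boundaries_um[i], layer_boundaries_um[i + 1])
                 for i in range(3)]
    print(color, [f'{f:.2f}' for f in fractions])
# Each layer sees a different mix, weighted toward blue (top), green (middle) and red (bottom),
# which is enough to recover all three primaries at every pixel site.
```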

Now, Sigma has announced a new Foveon sensor, the Quattro. Like previous Foveon X3 sensors, the new X3 Quattro features vertical color separation technology rather than colored filters to derive color information. Like previous X3s, the new sensor stacks three pixel layers, the top one recording mainly blue, the middle one, green, and the bottom one, red wavelengths. Where previous X3 sensors had three layers of identical pixel count, the new Quattro features a 4:1:1 ratio—the top layer has four pixels for each pixel in the lower layers. This allowed Sigma to up the pixel count while reducing noise and speeding up processing and writing times, thanks to less total data per image file, all while retaining the essential Foveon assets—each primary color recorded at every pixel site, no moiré and no need for a blurring optical low-pass filter. A new TRUE III processor designed for the Quattro sensor optimizes image quality and speeds performance. (For the record, the 14-bit RAW files output by the new X3 Quattro sensor measure 5424×3616 pixels, compared to 4704×3136 for the X3 Merrill sensor's 12-bit files.)
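Running the numbers shows where the data savings come from. This quick Python calculation assumes, per the 4:1:1 description above, that each of the Quattro's two lower layers has one-quarter the pixel count of its top layer.

```python
# Rough comparison of total sensor samples per shot, using the pixel dimensions quoted above.
quattro_top = 5424 * 3616
quattro_total = quattro_top + 2 * (quattro_top // 4)   # assumed quarter-resolution lower layers
merrill_total = 3 * (4704 * 3136)                      # three full-resolution layers

print(f'Quattro samples per shot: {quattro_total / 1e6:.1f} M')   # ~29.4 M
print(f'Merrill samples per shot: {merrill_total / 1e6:.1f} M')   # ~44.3 M
print(f'Reduction: {1 - quattro_total / merrill_total:.0%}')       # roughly a third less data
```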

Fujifilm has tried a number of different filter patterns, currently offering in its X-Trans sensors an RGB filter grid that arranges the red, green and blue filters in a more randomized 6x6 repeating pattern, with all three colors in each horizontal and vertical pixel row. This minimizes moiré and false colors, allowing Fujifilm to do away with the sharpness-robbing optical low-pass filter required by most Bayer-sensor cameras. The newest X-Trans II CMOS sensor incorporates more than 100,000 phase-detection pixels for quicker AF in good light.

On-Sensor Phase-Detection Autofocus

This brings us to another sensor trend: putting phase-detection AF (PDAF) sensors on the image sensor. Theoretically, PDAF can tell from a single reading whether or not the image is in focus, and if not, which way it’s off and by how far. It can also calculate from successive readings how fast and in what direction the subject is moving, and predict where it will be at the instant of exposure. All DSLRs feature phase-detection AF when you’re using the eye-level SLR viewfinder.
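The prediction part is, at its simplest, straight-line extrapolation. This Python sketch is a generic illustration of the idea, not any camera maker's actual tracking algorithm.

```python
# Simplified predictive focus: from two successive phase-detection readings of subject
# distance, estimate velocity and extrapolate to the moment of exposure.

def predict_subject_distance(d1, t1, d2, t2, t_exposure):
    velocity = (d2 - d1) / (t2 - t1)              # how fast the subject is approaching or receding
    return d2 + velocity * (t_exposure - t2)      # where it should be when the shutter fires

# Subject measured at 10.0 m, then 9.6 m a tenth of a second later; exposure 50 ms after that.
print(predict_subject_distance(10.0, 0.00, 9.6, 0.10, 0.15))   # ≈ 9.4 m
```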

Most cameras, including DSLRs, use contrast-based AF in Live View mode because the SLR mirror has to be in the up position for light to reach the image sensor; with the mirror up, you can't see through the eye-level viewfinder, and the dedicated PDAF module, which receives its light via the mirror, goes blind, so the camera must focus using the image sensor itself.

These Canon diagrams demonstrate how on-sensor phase-detect AF works. The system has the potential to be faster, smoother and more accurate than previous AF systems, and having AF on the sensor like this doesn’t reduce the effective resolution.

Putting some PDAF sensors on the image sensor lets phase detection work during live-view operation. These hybrid systems range from a handful of PDAF pixels to Canon's Dual Pixel CMOS AF in the 20.2-megapixel EOS 70D, which actually has two photodiodes at each pixel site: during AF, the dual photodiodes provide phase-detection data, and during image capture, their outputs combine into a single pixel signal. The 70D has a separate 19-point AF system for viewfinder (non-live-view) shooting.
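Here's a conceptual Python sketch of the dual-photodiode idea: compare the left and right sub-images to estimate focus error, then sum them to form the image. It illustrates the principle only and isn't Canon's actual Dual Pixel CMOS AF implementation.

```python
import numpy as np

# Conceptual sketch of a dual-photodiode pixel row: the left and right sub-images are
# compared to estimate focus error (phase detection), then summed to form the image.

def focus_shift(left, right, max_shift=8):
    """Find the pixel shift that best aligns the two sub-images (zero shift = in focus)."""
    best_shift, best_score = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.sum((left[max_shift:-max_shift] - np.roll(right, s)[max_shift:-max_shift]) ** 2)
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift

scene = np.sin(np.linspace(0, 12, 64))                 # a stand-in 1-D scene
left, right = np.roll(scene, 3), np.roll(scene, -3)    # defocus shows up as opposite shifts
print('focus error (pixels):', focus_shift(left, right))   # nonzero -> drive the lens
image_row = left + right                               # capture: the two halves sum to one pixel value
```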

Sony's SLT cameras feature a unique translucent SLR mirror that doesn't move as conventional SLR mirrors do during shooting. Instead, the SLT mirror transmits most of the light from the lens to the image sensor, while simultaneously reflecting a portion up to the PDAF sensor assembly, so you get full-time PDAF during live view (the SLTs are always in Live View mode). The full-frame SLT-A99 combines this with 102 PDAF points on the image sensor itself to further improve performance on moving subjects.

Future Developments

As sensor technology evolves, dramatic advancements are on the horizon. Innovation doesn't happen on a precise timetable, but we're keeping an eye on a few imminent developments. High-end still-plus-motion shooting is currently the domain of RED; the Digital Still & Motion Camera concept is a fundamental part of RED's DNA. Other manufacturers are catching up, and we expect to see this capability, shooting full motion in which each frame is also a full-resolution still image, in more cameras in the future. We're also particularly intrigued by the future of high-speed motion cameras. The recent Edgertronic Kickstarter campaign points to a future of affordable high-speed motion capture, with cameras priced at less than $6,000. Again, as still capture and motion capture continue to merge, high-speed technology will be a key aspect of true "hybrid" shooting.
