Photographic purists will often remind digital photographers that even the most modest piece of 35mm film can outperform digital capture when it comes to resolution. Film, they argue, holds as much detail as somewhere between a 100- and 200-megapixel sensor, depending on exactly how you translate the resolving power of film’s chemical process into digital terms. While that may be true on its face, with every generation of digital photography, the technology gets better, the components get more tightly integrated, and the resulting images gain resolution. That’s thanks to the current state of sensors, lenses, printers and scanners, all of which have benefited from nearly 20 years of development since the digital era truly began.
Getting the most from today’s gear requires an understanding of the different technologies involved in the imaging workflow and how to use them together to create images with unsurpassed resolution and clarity.
Inside the Box: The Development of the Digital Sensor
At the core of digital imaging lies the sensor inside a digital camera. Look inside your camera body, and you’ll see a reflective piece of silicon that doesn’t appear any different from the electronic wafer found in the very first digital cameras.
Look at the sensor under a microscope (and maybe with a degree in electrical engineering), however, and you’ll see that the heart of the camera is vastly different from what it was when digital photography began.
For starters, today’s sensor is a CMOS (complementary metal-oxide semiconductor) chip, whereas the first sensors were CCDs (charge-coupled devices). Despite the similarly arcane acronyms, the two technologies are quite different.
CCDs, which came first, provided high-quality and low-noise images, albeit with a high manufacturing cost and a specialized production line. CMOS sensors can be made on the same manufacturing lines as other computer chips, which means the cost of development is lower—and that has helped keep the prices of digital cameras relatively stable. CMOS sensors also consume less power, which is good for the battery life of today’s camera gear.
When CMOS sensors first arrived, they were largely relegated to lower-end cameras like point-and-shoots and compact systems. It took a long time for the technology not only to catch up with CCDs, but to surpass them in terms of image quality.
The CMOS sensors in today’s cameras are incredibly capable, allowing systems like the Nikon D810, the Canon EOS 5DS/5DS R and the Sony a7R II to create breathtaking images at a very high resolution.
The Nikon D810 has a resolution of 36 megapixels, the Canon EOS 5DS/5DS R has a 50-megapixel sensor, and the Sony a7R II has a 42-megapixel one that’s backside-illuminated.
New technologies have further improved the performance of CMOS sensors. The Backside-Illuminated (BSI) sensors utilized in Sony’s a7R II camera represent a big leap forward in CMOS design, although the technology actually has been available for digital cameras since Sony released the Exmor R sensor in 2009.
Normally, a digital imaging sensor is produced with the electrical wiring needed to transmit the data from each pixel on the sensor located on the front of the sensor, mostly out of ease of manufacturing. But that wiring, even though it’s very, very small, still blocks some available light that might otherwise hit the photo receptors.
BSI design moves the wiring to the back of the sensor, allowing more light to hit the surface, which makes BSI sensors much more capable in low light than their standard CMOS relatives. This helps counteract the physical problem that as resolution on a sensor increases, the sensitivity decreases.
This sounds complex, but it’s just basic physics. The width of a pixel determines how sensitive it is to light. Bigger pixels are more sensitive because there’s more surface area in the pixel for photons of light to hit; smaller pixels are less sensitive because there’s a narrower opening for light to enter the pixel, so fewer photons reach its surface.
A good analogy is thinking of buckets in a rainstorm used to measure how much rain has fallen. Place one bucket in your yard, and you only know how much rain fell in one spot. Place a bunch of buckets in your yard, and the resolution of your measurements increases—you’ll know how much rain fell by your back door, your driveway, your yard and so on. A narrow bucket won’t catch any raindrops that aren’t falling straight down, so you miss recording some rainfall when the wind is blowing. A wider bucket is more likely to catch the individual drops of rain, even if the wind is blowing, so its measurements are more sensitive.
Since a full-frame sensor is always the same size, to increase the resolution (the number of buckets), you have to make the buckets smaller, which reduces their sensitivity. That means that, all things being equal, each pixel of the 50-megapixel Canon EOS 5DS is considerably less sensitive than a pixel of the 22-megapixel sensor in the Canon EOS 5D Mark III, but the 5DS has much higher resolution.
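The trade-off is easy to see with a little back-of-the-envelope math. The sketch below uses the published horizontal pixel counts of the two Canon bodies and a 36mm-wide full-frame sensor; it deliberately ignores real-world factors like microlenses, wiring and fill factor, so treat the numbers as rough illustrations rather than measurements.

```python
# Rough pixel-pitch comparison for two full-frame (36 x 24 mm) sensors.
# Ignores microlenses, wiring fill factor and other real-world details.

SENSOR_WIDTH_MM = 36.0

def pixel_pitch_um(horizontal_pixels):
    """Approximate pixel pitch in microns for a full-frame sensor."""
    return SENSOR_WIDTH_MM / horizontal_pixels * 1000

pitch_5ds = pixel_pitch_um(8688)   # Canon EOS 5DS, 50 MP (8688 x 5792)
pitch_5d3 = pixel_pitch_um(5760)   # Canon EOS 5D Mark III, 22 MP (5760 x 3840)

# Light-gathering area scales with the square of the pitch.
area_ratio = (pitch_5d3 / pitch_5ds) ** 2

print(f"5DS pixel pitch:    {pitch_5ds:.2f} um")   # ~4.14 um
print(f"5D III pixel pitch: {pitch_5d3:.2f} um")   # ~6.25 um
print(f"area advantage:     {area_ratio:.1f}x")    # ~2.3x per pixel
```

In other words, each of the Mark III’s pixels presents roughly 2.3 times the light-collecting area of a 5DS pixel, which is the “wider bucket” from the rainstorm analogy.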
The pixels of BSI sensors aren’t any larger, so they’re not inherently better at gathering light, but since the wiring that blocks part of the opening of each pixel is moved to the back, more light reaches the surface. The effect is like taking off a pair of sunglasses when it gets dark out—your eyes aren’t any more sensitive, but there’s nothing blocking the light in front of them.
So far, Sony has been the only manufacturer to pull off volume production of high-end BSI CMOS sensors, but it’s only a matter of time before the BSI process becomes the standard in digital imaging.
This leap gives digital sensors impressive sensitivity in low light, with a noticeable noise advantage over comparable non-BSI sensors at the same ISO. A camera with a BSI sensor can easily capture low-noise images at ISOs where film used to fall apart.
At the same time that sensors have improved, their onboard processors have improved, as well. The Nikon D810, for example, eliminated the low-pass filter found on the D800, which increased image sharpness. A low-pass filter is a screen that sits in front of the sensor and slightly blurs the incoming light to prevent moiré—interference artifacts that occur when fine, repeating patterns in the subject approach the pixel spacing of the sensor. Moiré is often seen when photographing fabrics and architecture, and it’s difficult to remove in software.
While a low-pass filter eliminates moiré, it does so by softening the image. Yet the Nikon D810 and other cameras without a low-pass filter still produce relatively moiré-free images. How is it possible to remove the filter and avoid the negative consequences? Partly, the sheer pixel density of modern sensors out-resolves many of the patterns that used to cause moiré, and partly, more powerful processors can suppress residual moiré in-camera. The result is a much sharper image with very few artifacts, another huge boon for photographers.
There are also areas where lens design and camera design overlap to create better images, albeit not thanks to the resolving power of glass or sensors. The same advances in technology that have increased the ability of cameras to process out artifacts like moiré have also been responsible for a huge improvement in autofocus speed and accuracy.
If a camera can lock onto a subject more accurately, the result is a higher-quality image. Autofocus systems are increasingly accurate: where cameras a generation or two ago could, at best, lock onto a moving target, many current systems perform not only face detection, but eye detection. Several cameras we’ve reviewed recently have been able to pick out the eye closest to the camera and prioritize focus on it. An incredible amount of processing is needed to do that, and the systems we’ve seen are far more capable of focusing on a face or an eye than camera systems without this technology.
For a photographer working in a fast-capture situation, the ability to lock onto a face or an eye without effort means a higher percentage of in-focus (and therefore useful) images. While the sensor’s resolution hasn’t increased in this case, the ability to capture acceptable photos has increased dramatically. An in-focus picture has more usable resolution than one that’s out of focus.
While film has tremendous resolving power, the lenses from the film era weren’t always able to provide an image of high enough quality to take advantage of that.
Lens production technology has advanced significantly since the advent of digital photography. New optical coatings, new construction methods, blazingly fast autofocus motors and better production methods have resulted in lenses that are tack-sharp with incredibly high resolution.
Lenses in the digital era have also been designed to meet the demands of the digital sensors in cameras. The chemical coating on the surface of a piece of film was receptive to light coming in at all angles, but a digital imaging sensor performs best when light strikes it nearly head-on. Lenses designed in the film era therefore produce images of lower resolution than a lens designed for digital (all other things being equal).
This was one of the big rationales behind the Four Thirds standard and the new lens designs and mounts—lenses for digital sensors needed to be designed to optimize the light coming in. Digital cameras have been around long enough that the stable of lenses produced by the manufacturers has been revised with more modern design.
Combine the newer lens designs with new coatings and improved optical elements, and even some entry-level lenses are able to produce images with professional quality.
While printers seem to have become second-class citizens in the photographic world, they’re actually incredibly important tools in the digital era. (See “Hi-Tech Studio” in this issue for an overview of today’s state-of-the-art printers and the incredible output they produce.) Whether printing is done for prepress proofing, gallery exhibitions or simply to better evaluate a photograph, creating consistent and accurate artwork is incredibly important. The need for printers to lay down consistent ink with a variety of papers is a huge challenge, and it becomes even more challenging when trying to maintain consistent output across multiple printers.
Canon and Epson are the dominant players in the studio printer market, though in the wide-format world there are several other—possibly less well-known—manufacturers. While on the outside printers look like they haven’t changed, inside the technology has seen some pretty major improvements.
Print heads have improved, with more nozzles per inch, anti-clog technologies and the ability (in some printers) to sense when a nozzle is clogged and instantly replace it with a neighboring nozzle. This is a huge improvement, as it allows a printer that otherwise would have gaps in coverage to lay down ink across the entire page. Traditionally, clogged nozzles would require a heavy-duty cleaning cycle, but with automatically reconfiguring nozzles, it’s possible to keep printing without gaps in the ink and without stopping to purge the printer.
Today’s printers are much more capable at laying down ink droplets in exactly the right spot, and the right placement of ink determines the actual resolution of a printer. (Put a few droplets that are supposed to be an eyelash in the wrong place, and you have a blurry eyelash, for example.)
That’s why the overall resolution of a printer (the 2400×1200 dpi number) isn’t the sole measure of image quality. If you lay down 2,400 drops per inch in exactly the right places, the effective resolution of the printer is much higher than if you lay down the same drops slightly out of place.
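To put those dpi figures in physical terms, a quick sketch helps: at 2400 dpi, nominal drop centers are only about ten microns apart, which is why mechanical placement accuracy matters so much. The conversions below are simple unit arithmetic; the 300 ppi print-size example uses the Canon EOS 5DS’s 8688-pixel image width as an illustration.

```python
# Simple unit arithmetic relating printer dpi to physical drop spacing,
# and image pixels to maximum print size. Illustrative numbers only.

MM_PER_INCH = 25.4

def drop_spacing_um(dpi):
    """Nominal center-to-center drop spacing in microns at a given dpi."""
    return MM_PER_INCH / dpi * 1000

def max_print_width_in(image_width_px, ppi=300):
    """Widest print, in inches, at a given pixels-per-inch density."""
    return image_width_px / ppi

print(f"{drop_spacing_um(2400):.1f} um")     # ~10.6 um between drops at 2400 dpi
print(f"{drop_spacing_um(1200):.1f} um")     # ~21.2 um at 1200 dpi
print(f"{max_print_width_in(8688):.1f} in")  # a 50 MP frame spans ~29 in at 300 ppi
```

A drop landing even a few microns off target is a meaningful fraction of that spacing, which is why stepper-motor and nozzle precision, covered next, effectively set the printer’s real-world resolution.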
Several systems work together to lay down ink with scientific precision. A stepper motor moves the print head across the paper; the more precise the motor, the more precisely drops of ink are deposited, and the motors (and related parts) inside today’s printers are far more accurate than they have ever been.
Then there are the print head and nozzles themselves, which are much more capable of laying down drops of ink in the right spots. Some printers, such as the Epson P600 and P800, even use variable-droplet systems that lay down different-sized drops of ink depending on the coverage needs of an area.
But it’s not just improvements in hardware that have improved the resolution of printers; inks and papers have evolved, as well. The ink that’s dispensed by a printer isn’t the same as the ink that runs out of a ballpoint pen. Each drop of ink is encapsulated with a coating that makes sure the ink ends up in the right spot, and only in the right spot. The coating makes sure the drop doesn’t spread to neighboring fibers of paper, and a “gloss optimizer” coating makes sure the ink fills in properly and reflects light correctly on a variety of different substrates.
Papers, too, have improved. Microscopic coatings on papers keep the ink in place, and even rag papers from the major paper mills are now coated with a surface that provides both texture and a safe place for ink to land and stay put. Inks are also more vibrant and more durable, requiring less ink to create an image and offering better theoretical longevity.
Finally, the print engines—the internal processors that break a photograph down into the patterns of dots necessary to produce an image—have improved. Printers are better able to render images into output, eliminating wasted ink and bleeding colors while improving edge detail and sharpness.
The result of these combined enhancements in printer technology is output that’s vastly better than even a few years ago, even though the resolutions listed on the outside of the boxes still show the same 2400×1200 specification they did then.
It might be true that fewer scanners are in use these days than when the film-to-digital transition began, but their role in the photographic chain is no less important. Moving images from the analog world to the digital world requires a combination of several of the technologies already covered: optics and internal mechanics.
A scanner is a combination of the stepper motor of a printer and the lenses of a camera, with a dash of internal processor thrown in. As technologies have improved across the printer and optics spaces, scanners have improved, as well. The sensors and lenses inside a scanner are more accurate and able to process data more quickly, and the motors and mechanics that control their operation are improved, too.
The result is that scanners today can reproduce a piece of film with greater fidelity and a wider range of shades than previous generations could, at a more affordable price. For shooters with a collection of film, the scanners on the market today will produce a better, sharper and more color-accurate image than devices of just a few years ago.
Of course, these technologies will all improve over time. The sensors, lenses, processors, motors and print heads a few years from now will exceed the capabilities of today’s equipment. That’s both the joy and the headache of digital technology.
Film equipment evolved on a linear scale, while digital technology tends to improve on an exponential one. The result is that, every few years, the equipment as a whole gets sharper, more accurate and more affordable, lifting the entire industry with it.
You can follow David Schloss on Twitter and Instagram @davidjschloss.