Autofocus Evolution

Since the advent of digital photography, there have been aspects of a camera’s operation that are shrouded in mystery and confusion. That’s because many of the technologies involved in digital photography are rooted more deeply in optical physics and computational algorithms than in shutter speeds and lens openings.

Most recently, a seismic shift in the technologies and techniques used to perform a camera’s autofocus has caused confusion about the operation and performance of everything from SLRs to mirrorless systems.

There are two different methods used to perform autofocus in a camera: contrast detection and phase detection. It used to be canon (pardon the pun) that contrast-detect autofocus was slower than phase-detect systems. That’s not the case anymore (at least, it’s not always the case, as we’ll see). With the advent of compact mirrorless systems, the ground rules have changed, and thanks to some new products in the DSLR space, the rules are in the process of changing again.

Artificial Eyes

Autofocus systems function by way of one of two mechanisms (contrast detection or phase detection) and, at best, most photographers have a sketchy understanding of the differences. Those who are familiar with the two systems are likely to say "contrast-detection systems are slower than phase detection," and while that was true until recently, it’s now an obsolete assumption.

Here’s an extremely rudimentary (and not scientifically precise) explanation. Phase-detection AF works by taking beams of light from different sides of a lens, bouncing that light to a separate autofocus sensor and comparing those two different beams. The lens focus is adjusted until the waveform of the light from each side overlaps. When those waveforms overlap, they’re in "phase" with each other and an image is in focus.

This is something that would be familiar to anyone with glasses who has held them at arm’s length. Try to read a sentence, and the words overlap and line up improperly. Bring the glasses back toward your face, and at a certain point, the images line up and everything is in focus.

The camera’s phase-detect sensor looks to see if the waveforms are misaligned because they’re too far apart or because they’ve crossed past each other, and that indicates whether the subject is back-focused or front-focused (i.e., the lens is focused too close or too far). To achieve this, SLR cameras have traditionally used part of the camera’s primary mirror (plus additional mirrors) to bounce a portion of the light onto a stand-alone autofocus sensor. As a result, phase detection usually can’t function with the mirror raised, which means it’s disabled during live-view shooting or video recording.
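Here’s a minimal sketch of that waveform comparison in Python, assuming the two beams have been read out as 1-D intensity profiles (the function name, array sizes and use of simple cross-correlation are illustrative, not any manufacturer’s actual algorithm). The sign of the estimated shift tells the camera which way to drive the lens.

```python
import numpy as np

def phase_offset(left: np.ndarray, right: np.ndarray) -> int:
    """Estimate the shift (in samples) between intensity profiles read
    from opposite sides of the lens. Zero means the two are 'in phase'
    (in focus); the sign says front- vs. back-focused."""
    left = left - left.mean()    # remove overall brightness bias
    right = right - right.mean()
    corr = np.correlate(left, right, mode="full")
    # Re-center the correlation peak so 0 means perfect overlap
    return int(np.argmax(corr)) - (len(right) - 1)

# Toy example: the same edge pattern, displaced by 3 samples
profile = np.zeros(64)
profile[20:28] = 1.0
print(phase_offset(np.roll(profile, 3), profile))  # 3: drive focus to cancel it
```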

This is the primary focusing system used in most SLR cameras (both film and digital) and generally has been regarded as superior to contrast-detection autofocus because of its speed and ability to predictively track an object. (More on that trick in a bit.)

By comparison, contrast detection works on a simple measurement of the contrast between adjacent pixels. The camera focuses and looks at a histogram, refocuses and evaluates the histogram again.

If contrast increases, the image is more in focus. If it decreases, it’s less in focus. The camera then refocuses and tries again. This is the cause of the back and forth "seeking" or "hunting" many people experience when focusing, and it happens more in low light because there’s not enough light available for the sensor to judge if contrast is improving or not.
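In code terms, that loop is a simple hill climb. Here’s a deliberately simplified sketch (the `render` scene model, the step sizes and the squared-difference contrast metric are all invented for illustration):

```python
import numpy as np

def contrast_score(pixels: np.ndarray) -> float:
    """A simple contrast metric: mean squared difference between
    adjacent pixels. Sharper images have stronger local differences."""
    return float(np.mean(np.diff(pixels) ** 2))

def contrast_detect_af(render, lo=0.0, hi=1.0, step=0.1, min_step=0.005):
    """Hill-climb over focus positions: move, re-measure, and reverse
    with a smaller step when contrast drops (the 'hunting' you see)."""
    pos, direction = lo, 1.0
    best = contrast_score(render(pos))
    while step > min_step:
        nxt = min(max(pos + direction * step, lo), hi)
        score = contrast_score(render(nxt))
        if score > best:          # contrast improved: keep going
            pos, best = nxt, score
        else:                     # got worse: reverse and refine
            direction *= -1.0
            step /= 2.0
    return pos

# Toy scene: a blurred edge whose sharpness peaks at focus = 0.6
def render(focus, true_focus=0.6):
    blur = 1.0 + 40.0 * (focus - true_focus) ** 2
    x = np.linspace(-5, 5, 200)
    return 1.0 / (1.0 + np.exp(-x / blur))  # softer edge when defocused

print(round(contrast_detect_af(render), 2))  # converges near 0.6
```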

One of the strengths of the contrast-detect focus system is that it can be performed using the same sensor that’s capturing the image, making it cheaper to implement and requiring less space than phase-detection systems, which have traditionally relied on a secondary autofocus-specific sensor.

Even cameras that use phase-detection autofocus will fall back to contrast detection if there’s not enough light to perform phase-detect focus, and it’s the speed discrepancy perceived when this happens that leads many to the conclusion that contrast detection is slow. In most SLR implementations, when the focus speed slows down, it’s because the camera has shifted to contrast detection.

Part of the sluggish performance of contrast-detect autofocus in SLRs is based on the relative heaviness of SLR lenses. Since the camera has to adjust the focus of the lens multiple times to evaluate the focus, the mass of the lens and the power of the focus motor have a huge effect. A more expensive lens with a more powerful motor will focus more quickly on a given camera than a cheaper lens with a less powerful motor.

Autofocus has long relied on a camera’s mirror setup to establish fast and accurate focusing. New technologies and mirrorless models that do away with this design are beginning to proliferate, like the Canon EOS 70D with Dual Pixel CMOS AF, which makes autofocus during Live View a reality.

"I don’t think that the majority of customers knew that SLRs had both [focus systems]," explains Nikon Senior Technical Manager, Steve Heiner. "We hear ‘Why don’t I have the same focus in Live View?’ and it’s because contrast detect can be done with the sensor itself." Customers shelling out a lot of money for a high-end SLR end up getting a powerful phase-detect system and fast focus with pro-level lenses, but then they see that performance slow down when shooting video or in the mirror-up Live View mode.

Once the camera’s mirror is locked up, as is necessary for video and live view, the phase-detect sensor can’t get the image it needs to evaluate, so not only does the camera focus more slowly, but it loses the predictive focus capabilities of the phase-detect system.

While contrast detection can accurately lock onto a stationary subject, it’s not able to predict where the subject will be next, and that’s a problem for continual autofocus on a moving subject.

Remember that phase-detect autofocus can tell if the subject is out of focus because the lens is front- or back-focused, so with a little bit of math, the camera can guess where the subject will be as it continues to move.

Here’s an example. Let’s say a car is moving across the field of view and phase detect locks onto it. A second later, the camera reevaluates and sees that the subject is out of focus by 10 feet, so it corrects and refocuses. Another second, and the car is 10 feet farther out of focus. The car is moving 10 feet per second, so now the camera can predict what the focus should be in four seconds (40 feet farther).
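In code, that prediction is just linear extrapolation. This sketch uses the article’s numbers; the starting distances (50 and 60 feet) are invented for illustration:

```python
def predict_focus(d1_ft: float, d2_ft: float, dt_s: float, lead_s: float) -> float:
    """Linear prediction from two phase-detect distance readings:
    estimate the subject's speed, then extrapolate the focus distance."""
    speed = (d2_ft - d1_ft) / dt_s  # feet per second
    return d2_ft + speed * lead_s

# Readings 10 ft apart, one second apart, so the car covers 10 ft/s;
# four seconds later, it should be 40 ft farther out.
print(predict_focus(d1_ft=50.0, d2_ft=60.0, dt_s=1.0, lead_s=4.0))  # 100.0
```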

Contrast detection can’t do this predictive focus. It can perform superfast continual focus, but it can’t track an object moving across the scene the way phase detect can.

A New Breed

So, up until a few years ago, the state of autofocus was this: Phase-detection autofocus is the fastest system available, and it’s able to track objects with predictive focusing.

But a funny thing happened on the way to a mirrorless world: technologies changed and improved. By designing a digital-specific system, the Four Thirds partners were able to make smaller, lighter cameras and lenses that threw away some of the SLR rulebook. Part of the space savings in these cameras comes from the removal of the mirror and the mechanical mechanisms that actuate it. The iconic pentaprism on top of an SLR can disappear, and the body can get thinner and lighter, because there’s no longer any need to make room for a piece of glass that has to pivot up and down to capture an image.

The result is cameras and lenses that are much, much smaller and lighter than traditional SLRs. Remember, some of the sluggish performance of contrast-detection focusing is due to the weight of SLR lenses. Lighter lenses mean less mass to rack back and forth to measure focus.

But without that heavy, cumbersome mirror, the ability to divert light to a focusing sensor went away (at least at first; more on that in a moment), and as a result, Micro Four Thirds and similar mirror-free systems relied entirely on contrast-detection systems.

"The motors are getting smaller, faster and lighter," explains Nikon’s Heiner of both the company’s SLR and mirrorless cameras. "In DSLR lenses, we have the luxury of much more space. In mirrorless, the space is smaller, but they don’t have to move as much mass, so [contrast detection] tends to work extremely well in that system. Where the Silent Wave motors used [in Nikon pro lenses] work very fast, they’re power-hungry."

"The whole size battle…people have to understand that bigger isn’t necessarily better," says Richard Sasserath, technical specialist at Olympus. "When you talk about an SLR, 90% of the time, they’re using phase detect. With Micro Four Thirds with contrast detect, you’re getting a much faster autofocus."

In fact, Olympus has clocked some of their Micro Four Thirds cameras as having the fastest autofocus in the world—besting phase-detect-based cameras.

That’s because manufacturers, having given up the ability to include traditional phase-detection systems in their mirrorless cameras, have had to focus a lot of time and energy on making contrast-detection focusing faster, and more accurate, to boot.

"It’s one thing to say it’s the world’s fastest," Sasserath adds, "but it’s also incredibly accurate. You can be fast and not focus on the right spot, and what good is it? We have to train people to realize that phase detection doesn’t mean the fastest in every setting."

It doesn’t matter if you don’t know where the subject will be in a few seconds, as long as you can focus so fast when the time comes that you can capture the subject in focus before it has moved out of focus again.

So the rules have changed, and it’s no longer true that phase-detection systems perform best in all conditions; in fact, contrast detection can be a bit faster in some applications. And even that new rule is already changing.

Hybrid View

Once you’ve wrapped your head around the fact that contrast-detection systems can provide faster performance in some systems, there’s a new wrinkle: a technique that combines contrast detection and phase detection on the same chip instead of using a secondary phase-detection autofocus sensor.

Remember, contrast-detection autofocus can’t perform predictive focus. It can’t follow a subject as it moves across the frame without having to refocus constantly. No matter how fast contrast systems become at focusing on a moving subject, that’s still no substitute for being able to predict where to focus.

And, even if the system were fast enough to perform on par with phase detect when capturing a moving subject, it wouldn’t be as smooth as phase detection when capturing video, an area where predicting a subject’s motion is vastly more important than in a still image.

Shoot enough still frames, and eventually one will be in focus, but with video, a lens racking back and forth to achieve focus is hopelessly distracting. No one will put up with a video where the focus is constantly locking and then going soft.

To solve this problem, companies like Canon, Nikon and Olympus have added phase-detect pixels to their imaging sensors, creating a hybrid contrast/phase-detection tool on a single chip. That’s exciting, as it opens up whole new possibilities for this relatively new breed of cameras.

The technique used is to replace some of the pixels on the imaging sensor with autofocus pixels, essentially making two discrete sensors out of one piece of silicon.

Different manufacturers have addressed this new technique differently, with various levels of performance. Nikon’s solution, found in their Nikon 1 cameras, is integrated into their CX sensor, the chip at the heart of that system.

"In the early CX sensor," explains Nikon’s Heiner, "there was a specific area where the phase detect was [located], and that was more toward the center. But it’s a larger center area than you’d typically find on the SLR [sensors]. The algorithms to determine which of those pixels are used depends mostly on the light level. But the phase-detect [pixels] displace imaging pixels; that’s why there are fewer of them in CX designs. There, you’ve got maybe about a third or less of the effective pixels. As soon as you start putting in many phase detect, then you’ve displaced a disproportionate number of pixels."

Olympus has taken a different approach to integrating phase-detect sensors on the chip and has rolled it out in its new flagship camera, the OM-D E-M1. This camera is based on the Micro Four Thirds sensor and replaces the Four Thirds-based E-5 as the company’s flagship.

Four Thirds lenses were designed for phase-detection focusing, and many pieces of that glass are still in circulation. The OM-D E-M1 provides shooters who own Four Thirds lenses (and some newer Olympus MFT glass) with a full phase-detect focus system in predictive autofocus modes thanks to a hybrid imaging/phase-detect sensor.

"By putting phase and contrast on the same sensor, we’ve been able to vastly improve the autofocus on the E-M1," explains Olympus’ Sasserath. "Basically, what we’ve done is scattered the phase-detect pixels on the sensor, we’ve scattered the left and right channel, and we’re interpolating the surrounding pixels to make the final autofocus speed."

Each of these phase-on-chip systems has the same limitation on its practical use in low light. With so few phase-detect pixels (relative to a dedicated phase-detect sensor in an SLR), phase detect stops being practical when ambient light is low. That means a scene that could use phase-detection focus in the afternoon might use contrast detect when evening starts to fall.

Since contrast detect doesn’t perform predictive focus, the light level plays a big part in whether it will be possible to capture a moving subject in focus.

The result is a split similar to the one in SLR cameras: phase detect is active when there’s plenty of light, and contrast detection takes over when light levels drop.
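The switching logic amounts to a simple dispatch on light level, something like the sketch below (the lux threshold is an invented placeholder; real cameras use their own metering and thresholds):

```python
def choose_af_mode(scene_lux: float, threshold_lux: float = 100.0) -> str:
    """Hybrid-sensor fallback: sparse on-chip phase-detect pixels need
    ample light, so dim scenes drop back to contrast detection (and,
    with it, lose predictive tracking)."""
    if scene_lux >= threshold_lux:
        return "phase-detect (predictive tracking available)"
    return "contrast-detect (no prediction)"

print(choose_af_mode(2000.0))  # bright afternoon -> phase detect
print(choose_af_mode(20.0))    # dusk -> contrast detect
```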

But the biggest advance in the nascent hybrid chip world is inside the new Canon EOS 70D, which removes this last limitation with a new and exclusive-to-Canon (for the moment anyhow) technology. Every pixel on the camera’s sensor is split into two photoreceptors, with one facing right and one facing left.

This allows for a phase-detection system that functions across the full frame on every single pixel. The whole sensor is used to both capture an image and to evaluate phase-detection autofocus. It works in low light and is active in both Live View mode and while capturing video. "The major advantage of Dual Pixel CMOS AF," explains Canon’s Chuck Westfall, media spokesperson for professional products, "is the fact that every pixel on the image sensor can measure autofocus and capture image data simultaneously. This enables high-performance phase detection without compromising image quality."
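Here’s a toy model of that idea, assuming a single 1-D row of split photodiodes (the readout function and correlation-based offset are illustrative, not Canon’s actual pipeline): summing the two half-pixel planes yields image data with no gaps, while comparing them yields the phase-detect signal.

```python
import numpy as np

def dual_pixel_readout(left: np.ndarray, right: np.ndarray):
    """Each pixel has a left- and a right-facing photodiode: their sum
    is the image sample; their relative shift is the defocus signal."""
    image_row = left + right  # full image data, no gaps
    corr = np.correlate(left - left.mean(),
                        right - right.mean(), mode="full")
    defocus = int(np.argmax(corr)) - (len(right) - 1)
    return image_row, defocus

edge = np.zeros(64)
edge[30:38] = 1.0
row, shift = dual_pixel_readout(np.roll(edge, 2), edge)
print(shift)  # 2: out of focus; 0 would mean this row is in focus
```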

Adds Westfall, "The [previous hybrid] technique requires interpolation of image data for
the pixels that are performing AF. In other words, the hybrid method takes pixels on the sensor and either assigns them to image data or focus data, but not both tasks. The pixels that are dedicated to autofocus have no image data, so the surrounding pixels have to be used to reconstruct the image. The more pixels allocated to phase detect, the more gaps in the image data are present."
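To see what Westfall means, here’s a minimal sketch of that gap-filling (the four-neighbor average is a deliberately crude stand-in for the real interpolation):

```python
import numpy as np

def fill_af_pixels(image: np.ndarray, af_mask: np.ndarray) -> np.ndarray:
    """Replace pixels given over to phase-detect AF with the average of
    their non-AF immediate neighbors, reconstructing the image data."""
    out = image.astype(float).copy()
    for y, x in zip(*np.nonzero(af_mask)):
        neighbors = [
            image[ny, nx]
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
               and not af_mask[ny, nx]
        ]
        if neighbors:
            out[y, x] = np.mean(neighbors)
    return out

# A smooth gradient with a few pixels sacrificed to AF
img = np.arange(36, dtype=float).reshape(6, 6)
mask = np.zeros_like(img, dtype=bool)
mask[2, 3] = mask[4, 1] = True
print(fill_af_pixels(img, mask)[2, 3])  # 15.0, recovered from neighbors
```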

Since all of the pixels in Canon’s new system are used for phase detection, there’s ample light for evaluation and there’s no degradation in image quality.

Perhaps more important for the future of both SLRs and mirrorless systems is that a dual-pixel approach like the 70D’s changes the basic operation of the camera.

As Westfall explains, "The elimination of contrast-detection AF in Dual Pixel CMOS AF results in smoother autofocus, driving directly to the required distance setting without going past it. Dual Pixel CMOS AF also has advantages compared to conventional phase-detection AF using a separate sensor, in terms of compatibility with a wider range of maximum apertures and the elimination of any need for AF microadjustment."

Because this technology is inside a traditional DSLR’s chassis, the result (for now) is an SLR camera that’s incredibly versatile when shooting video and in Live View mode, the opposite of conventional digital SLRs. In non-dual-pixel cameras, the mirror has to be in the down position to bounce light to the phase-detection system, so Live View and video modes require the camera to use the slower contrast-detection system.

Now that has been turned on its head. The mirror needs to be up in order to have full-time phase detection because the focus is performed on the imaging sensor. The result is a camera that captures video like a camcorder and can track subjects in video and Live View without having to manually adjust focus.

But this is clearly an intermediate step in camera development. SLRs, by definition, have a mirror used to view the image and perform focus. Now the focus is performed without the mirror in place. In a dual-pixel system like that in the 70D, the main reason, then, to have the mirror is to bounce light to the optical viewfinder. All that’s needed now is an electronic viewfinder (EVF) good enough to replace an optical one, and the SLR space is going to radically and quickly change.

EVFs first showed up on mirrorless cameras because without a mirror there’s no way to have an optical viewfinder. The early EVFs were really terrible, but in just a few short years they have progressed to the point that some offer beautiful image quality and high resolution.

Electronic viewfinders also allow for "heads-up"-style data displayed over the image, perfect for things like real-time histograms and horizon-level display.

That means that the Canon EOS 70D isn’t just an SLR that’s designed to provide excellent video and Live View use; it’s a camera designed to move photography to a new place, a world where cameras with full-frame sensors and high-end lenses are free of mirrors. Expect to see full-frame sensors in professional cameras with electronic viewfinders hitting the market very soon.

Double Vision

The stunning thing about digital photography is that the seemingly simple addition of phase-detection sensors to an imaging sensor isn’t just changing what we know about autofocus technology; it’s changing what we know about cameras.

The competition to push contrast-detection and phase-detection systems forward is bringing huge benefits to the consumer. Micro Four Thirds cameras provide some of the fastest focusing times ever seen, and they do so with a contrast-detection system.

Meanwhile, phase-detection systems have moved on-chip with the imaging sensor and stand ready to revolutionize the shape of professional-grade gear.

The takeaway, then, is this: Phase- and contrast-detection systems are both incredibly capable and incredibly powerful, when done right. If Canon’s EOS 70D is an indicator, in a few short years we may even see contrast-detection systems largely abandoned.

The changes in AF technology are moving two different types of cameras toward one destination: a high-speed, mirrorless world. Compact mirrorless cameras are marching steadily toward full-frame performance and features while full-frame cameras are moving toward a mirrorless design.

Just like the waveforms used in phase-detection systems, at some point, these cameras are going to overlap perfectly and the future of photography will snap into focus.
