Tuesday, June 8, 2010
Behind The Scenes
The truth about HD video capture in DSLRs
Still, DSLRs have evolved over the last decade to meet the highly specific needs of image-making, and in a way the technology has been co-opted to perform a task it was never really designed to do. Live View, for instance, isn’t the best method of video capture, but all current video-capable DSLRs use what’s essentially an extension of the Live View capabilities to record video. In 2006, the Olympus E-330 was the first DSLR to include full Live View as a way to preview still images, and once the popularity of Live View made it a standard feature, companies realized that video capture was possible by tweaking a technology that already was incorporated into the working architecture of every available camera.
Just as in camcorders, shooting a still will disrupt video capture; in the case of the Canon EOS 5D Mark II, the break lasts approximately one second.
Everything about a DSLR is built around projecting an image to a single image sensor, which has to act as the sensor for both high-quality still images and motion images in a video-capable camera. To produce color with a single, color-blind silicon sensor, each pixel is covered with a red, green or blue filter in a Bayer array grid. (Sigma’s Foveon X3 sensor, which doesn’t offer video capture, is the only DSLR sensor that’s currently built differently.) Each pixel then records an individual primary color, and data for the missing colors are obtained from neighboring pixels through complex algorithms in a process known as demosaicing. It works very well, resulting in high-resolution images with accurate color.
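To make the demosaicing idea concrete, here’s a minimal sketch of the simplest variant, bilinear interpolation over an RGGB Bayer grid. This is an illustration of the general technique, not any camera maker’s proprietary algorithm, which uses far more sophisticated edge-aware processing; the function names are my own.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer filter grid:
    each pixel records only one primary color."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def demosaic(mosaic):
    """Bilinear demosaicing: estimate each pixel's two missing
    primaries by averaging nearby pixels that did record them."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    # Masks marking which primary each pixel actually recorded.
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    pad = np.pad(mosaic, 1)
    for c, mask in enumerate((r, g, b)):
        mpad = np.pad(mask, 1)
        num = np.zeros((h, w))  # sum of same-color samples in 3x3 window
        den = np.zeros((h, w))  # count of same-color samples in window
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += pad[1+dy:h+1+dy, 1+dx:w+1+dx] * mpad[1+dy:h+1+dy, 1+dx:w+1+dx]
                den += mpad[1+dy:h+1+dy, 1+dx:w+1+dx]
        out[:, :, c] = num / np.maximum(den, 1)
    return out
```

On a uniform-color patch this toy version recovers the original color exactly; the hard cases that real demosaicing algorithms spend their effort on are edges and fine detail, where naive averaging produces color fringing.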
A CMOS sensor is preferable to a CCD sensor when working with Live View video because only CMOS sensors can output at rates that are fast enough to yield HD video in a single-sensor system. In fact, all current video-capable DSLRs use CMOS sensors, with the exception of the Micro Four Thirds-based Live MOS sensor used by Panasonic, which has characteristics of both CCD and CMOS sensors. CMOS sensors include most of the support circuitry internally, as opposed to the separate circuit boards required by CCD sensors. CCD sensors also must shift data across the chip to be read out, while CMOS devices use per-pixel transistors that allow individual pixel information to be read directly. CMOS sensors typically draw less power, too, which is a big boon for power-hungry Live View modes.
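Some back-of-envelope arithmetic shows why readout rate is the bottleneck. These figures count only the pixels delivered to the encoder per second; real sensor readout adds overhead (blanking intervals, higher bit depths, cropping or binning from the full still-image resolution), so treat them as lower bounds rather than exact specifications.

```python
# Minimum pixel throughput for common HD video modes.
def pixels_per_second(width, height, fps):
    """Pixels the sensor must deliver each second for a given mode."""
    return width * height * fps

hd_modes = {
    "1080p30": (1920, 1080, 30),
    "720p60":  (1280, 720, 60),
}
for name, (w, h, fps) in hd_modes.items():
    print(f"{name}: {pixels_per_second(w, h, fps):,} pixels/s")
# 1080p30 works out to roughly 62 million pixels per second.
```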
AF: Contrast Vs. Phase Detection
Camera companies are beginning to find ways around it, but one of the major difficulties has been autofocus while in video mode. In normal through-the-viewfinder operation for still photos, DSLRs use quick and accurate phase-detection AF, which can determine in a single reading whether or not the image is in focus, and if not, by how much and in which direction correct focus lies. This system operates TTL (through the lens) using an AF sensor in the camera body. When Live View is activated, which is necessary for video recording, the SLR mirror moves into the up position to allow light to reach the image sensor. Unfortunately, light can’t reach the AF sensor when the mirror is in the up position.
The mirror can be flipped briefly back to the down position to autofocus, but doing so cuts off Live View and video recording, so it isn’t a practical option. Instead, contrast-based autofocus is used, which doesn’t disrupt the Live View. The downside is that it’s much slower than phase detection and has a harder time tracking action. Cameras can use either phase-detection or contrast-based AF to establish focus before beginning a video clip, but once recording has begun, the relative slowness of contrast-based AF and the sound produced by focusing motors mean it’s better to focus manually. This may seem like a deterrent, but in truth, professional motion pictures are generally shot this way.
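The reason contrast-based AF is slow becomes clear when you see its basic logic: it's a hill climb. The camera nudges the focus motor, measures image contrast, and keeps moving while contrast improves, reversing and taking finer steps when it overshoots the peak. Here is a toy sketch of that search; the `sharpness()` function is a synthetic stand-in for a real contrast metric (such as summed gradient magnitude of the live-view frame), and the step sizes are arbitrary.

```python
def sharpness(focus_pos, true_focus=42.0):
    """Synthetic stand-in for a contrast metric:
    peaks when focus_pos hits the in-focus position."""
    return 1.0 / (1.0 + (focus_pos - true_focus) ** 2)

def contrast_af(start=0.0, step=4.0, min_step=0.25):
    """Hill-climbing focus search, as in contrast-detection AF."""
    pos = start
    best = sharpness(pos)
    direction = 1.0
    while step >= min_step:
        candidate = pos + direction * step
        s = sharpness(candidate)
        if s > best:
            pos, best = candidate, s  # contrast improved: keep going
        else:
            direction = -direction    # overshot the peak: reverse...
            step /= 2.0               # ...and take finer steps
    return pos
```

Notice that the algorithm must step past the point of best focus before it knows it was there, which is exactly the visible "hunting" of contrast AF, and why phase detection, which reads the focus error and its direction in one measurement, is so much faster.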