
Inside RAW Files

In 1999, the Adobe Camera Raw plug-in began to answer that question, and was followed just two years later by the first version of Capture One DSLR. For the very first time, photographers had control over their own raw processing, and for geeky early adopters, that power was intoxicating. But, for many others, the benefit wasn’t yet clear, and they continued shooting JPEG. Eventually, Lightroom came along, and selling “the Lightroom workflow” largely depended upon helping photographers understand the unique nature of raw files and how to take advantage of them.

All the benefits of shooting RAW flow from the fact that raw capture data is just that: every bit of data captured by the sensor at the moment of exposure, completely raw and unprocessed. Anything you do to raw data to process and transform it into a recognizable image is destructive by its very nature, and so is necessarily a one-way street. It’s for this reason that Photoshop and Lightroom (and most other applications) always treat proprietary camera RAW files (.NEF, .CR2, .ARW, etc.) nondestructively. Just as you would treat a film negative, raw image data should be carefully preserved in its original form, and that unchanging source can then be used as the starting point for any type of output: JPEGs for the web, larger TIFF files for printing, black-and-white, color, whatever you need.

I think of raw processing as a “one-way street” because there’s simply no way to save postprocessed, tone-mapped image data back into the RAW format, and you wouldn’t want to even if you could. For starters, all raw image data is grayscale. Image sensors can’t detect or capture color; all they can do is record luminance at each photosite. To create color, tiny red, green and blue filters cover alternating photosites on the sensor so that each one measures only red, green or blue light. For the most common Bayer CFA (color filter array) sensors, this means a RAW image is made up of just one grayscale channel with alternating red, green and blue luminance values. During RAW processing, that grayscale image is “demosaiced” into RGB, and the missing pixel data for each channel is filled in by interpolation. That process, in itself, is a one-way street.
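
If you’re curious what that looks like in practice, here’s a minimal sketch in Python, assuming an RGGB Bayer layout and using NumPy (details that vary by camera and aren’t spelled out in this article). Real converters interpolate the missing color values at full resolution with much smarter algorithms; this “half-size” shortcut simply collapses each 2×2 block of photosites into a single RGB pixel, which is enough to show that color is reconstructed, not captured.

```python
# A minimal sketch, not a real raw converter: assumes an RGGB Bayer layout
# and collapses each 2x2 block of the grayscale mosaic into one RGB pixel.
import numpy as np

def demosaic_half_size(mosaic):
    """Turn a single-channel RGGB mosaic into a half-resolution RGB image."""
    r  = mosaic[0::2, 0::2]              # red photosites
    g1 = mosaic[0::2, 1::2]              # green photosites on even rows
    g2 = mosaic[1::2, 0::2]              # green photosites on odd rows
    b  = mosaic[1::2, 1::2]              # blue photosites
    g  = (g1 + g2) / 2.0                 # average the two green samples per block
    return np.stack([r, g, b], axis=-1)  # shape: (H/2, W/2, 3)

# A tiny synthetic 4x4 "raw" frame: one grayscale channel of luminance values.
raw = np.array([[10, 20, 12, 22],
                [18,  5, 19,  6],
                [11, 21, 13, 23],
                [17,  4, 20,  7]], dtype=float)

print(demosaic_half_size(raw).shape)     # (2, 2, 3): one RGB pixel per 2x2 block
```

A full-resolution converter goes further and fills in the two missing color channels at every single photosite by interpolation, which is exactly where the one-way nature of the process kicks in.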

Next comes tone-mapping. The luminance values captured on the sensor are recorded just as they come from the analog-to-digital converter, making raw luminance values “linear.” This just means there’s a direct relationship between the amount of light measured and the value used to represent it. If some amount of light is recorded at one pixel (let’s call it a value of “10”), and exactly double that amount of light is recorded at some other pixel, then exactly double the first value (20) will be recorded for the second pixel. This may seem like an esoteric point, but it has everything to do with how RAW processing works.

If your camera has a 14-bit sensor, it can record 2^14, or 16,384, distinct levels, so the very brightest value it can record sits at the top of that range, at 16,384. An exposure value that’s just one stop lower has exactly half that luminance and is recorded as a value of 8,192. So you’ve used up fully half of all your 16,384 levels in just the very brightest stop of your exposure range. How often do your photos even have data in the top stop? This should be a hint as to why the concept of “expose to the right” is so important when shooting RAW: that’s where all the bits are. Go down a second stop, which halves the amount of light once again, and the value recorded will be 4,096, and so on. With each one-stop decrease in exposure, the raw value recorded will be half as large as the previous one.
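
To put numbers on that, here’s a quick arithmetic sketch, assuming an idealized 14-bit linear sensor with no noise floor (real sensors are messier). It simply walks down from the clipping point one stop at a time and counts how many raw levels each stop spans.

```python
# A back-of-the-envelope sketch, assuming an idealized 14-bit linear sensor:
# each one-stop drop in exposure halves the recorded value, so the brightest
# stop alone spans half of all available raw levels.
MAX_LEVEL = 2 ** 14                      # 16,384 levels in a 14-bit file

top = MAX_LEVEL
for stop in range(1, 7):
    bottom = top // 2
    share = 100 * (top - bottom) / MAX_LEVEL
    print(f"Stop {stop} below clipping: raw values {bottom:>6,} to {top:>6,} "
          f"({top - bottom:>6,} levels, {share:4.1f}% of the total)")
    top = bottom
```

The brightest stop gets 8,192 levels, the next one 4,096, and by the sixth stop down a full stop of scene brightness is squeezed into just a few hundred values, which is the arithmetic behind “expose to the right.”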
