Back when the first really good DSLRs were starting to become available, the vast majority of photos still were being taken on film. So when you picked up a digital camera for evaluation, you naturally compared it to film. The quality of film was the bar. The new digital images not only had to at least equal film in sharpness and resolution, but they had to look like film, meaning an image had to look like a “photograph,” as we knew it, that had been scanned and displayed on a computer screen.
Because of this expectation, camera manufacturers had to start learning things about the art and science of image processing that they never had to worry about before—things Kodak had been thinking about for over 100 years. Raw digital image data had to be processed into RGB before it even could be viewed on the camera’s LCD!
Being in control of the image processing for the first time was not only a new responsibility, but also an awesome new opportunity. For perhaps the very first time, camera makers could truly begin to differentiate themselves with the actual look of the image their cameras produced. And it wasn’t very difficult to figure out what people were going to want in this regard. By then, “the look” of color film had had more than 60 years to be refined and etched into our minds, first with Kodachrome and Ektachrome setting the standard, and later with Fujichrome upping the ante with even more saturated color.
So camera manufacturers had a very well-defined look to aim for when setting out to tune their in-camera processing. The result of that processing was an RGB image written directly to the camera card in JPEG or TIFF format. If the camera didn’t allow you to save the raw exposure data, what we now think of as the “digital negative” simply evaporated. Even today, most smartphones and point-and-shoot cameras still work this way. Letting the photographer have access to the original raw, unprocessed image data was a pretty radical idea. And, at first, I’m not even sure photographers knew what they would do with raw files, either. There weren’t any consumer-level tools available for processing camera raw files, so what would they do with them?
In 2003, the Adobe Camera Raw plug-in began to answer that question, arriving alongside early versions of Phase One’s Capture One DSLR. For the very first time, photographers had control over their own raw processing, and for geeky early adopters, that power was intoxicating. But, for many others, the benefit wasn’t yet clear, and they continued shooting JPEG. Eventually, Lightroom came along, and selling “the Lightroom workflow” largely depended upon helping photographers understand the unique nature of raw files and how to take advantage of them.
All the benefits of shooting RAW flow from the fact that raw capture data is just that: It’s every bit of data captured by the sensor at the moment of exposure—completely raw and unprocessed. Anything you do to raw data to process and transform it into a recognizable image is destructive by its very nature, and so is necessarily a one-way street. It’s for this reason that Photoshop and Lightroom (and most other applications) always treat proprietary camera RAW files (.NEF, .CR2, .ARW, etc.) nondestructively. Just as you might treat a film negative, raw image data should be carefully preserved in its original form, and from that unchanging source it can then be used as the starting point for any type of output: JPEGs for the web, larger TIFF files for printing, black-and-white, color—whatever.
I think of raw processing as a “one-way street” because there’s simply no way to save postprocessed and tone-mapped image data back into the RAW format, and you wouldn’t want to even if you could. For starters, all raw image data is grayscale. Image sensors can’t detect or capture color—all they can do is record luminance on a pixel-by-pixel level. To create color, tiny red, green and blue filters cover alternating photosites on the sensor to separately measure red, green or blue light. For the more popular Bayer CFA (color filter array) sensors, this means a RAW image is made up of just one grayscale channel with alternating red, green and blue luminance values. During RAW processing, that grayscale image is “demosaiced” into RGB, and the missing pixel data for each channel is filled in by interpolation. So that process, in itself, is a one-way street.
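To make the Bayer idea concrete, here’s a toy sketch in Python (using NumPy, with a hypothetical 4×4 RGGB mosaic and a crude nearest-neighbor fill—real converters use far more sophisticated interpolation). The point is the shape change: one grayscale channel goes in, three RGB channels come out:

```python
import numpy as np

# Hypothetical 4x4 raw mosaic with an RGGB Bayer layout:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
raw = np.arange(16, dtype=float).reshape(4, 4)

def demosaic_nearest(raw):
    """Toy demosaic: each RGB channel keeps its measured photosites and
    fills the gaps from the nearest photosite of the same filter color."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        ys, xs = np.nonzero(mask)
        for y in range(h):
            for x in range(w):
                nearest = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
                rgb[y, x, ch] = raw[ys[nearest], xs[nearest]]
    return rgb

rgb = demosaic_nearest(raw)
print(raw.shape, "->", rgb.shape)   # (4, 4) -> (4, 4, 3)
```

Notice that two of every three values in the output are interpolated guesses—there’s no way to run this backward and know which values were actually measured.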
Next comes tone-mapping. The luminance values that are captured on the sensor are recorded just as they come from the analog-to-digital converter, making raw luminance values “linear.” This just means that there’s a direct relationship between the amounts measured and the values used to represent those amounts. If some amount of light is recorded at one pixel (let’s call it a value of “10”), and exactly double that amount of light is recorded at some other pixel, then exactly double the first value (20) would be recorded for the second pixel. This may seem like an esoteric point, but it has everything to do with how RAW processing works. If your camera has a 14-bit sensor, it can record 2¹⁴, or 16,384, distinct levels, so the very brightest value it can record is 16,383—the largest number that can be represented using 14 bits. An exposure value that’s just one stop lower will have exactly half that luminance, recording a value around 8,192. So you’ve used up fully half of all your 16,384 levels, just in the very brightest stop of your exposure range. How often do your photos even have data in the top stop? This should be a hint as to why the concept of “expose to the right” is so important when shooting RAW. That’s where all the bits are. Next, go down a second stop, which halves the amount of light once again, and that value will be recorded as 4,096, and so on. With each one-stop decrease in exposure, the raw value recorded will be half as large as the previous one.
Again, this may seem like an esoteric point, but it means that when you’re starting with raw image data, you’re starting with the overwhelming majority of all your possible gray values in the brightest two stops of any given exposure. Adobe’s Highlight Recovery technology takes advantage of this fact, giving you a lot more control over highlight detail than you would have if you were starting with tone-mapped (JPEG or TIFF) RGB data.
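The arithmetic behind those last two paragraphs is easy to check. A quick sketch, assuming a 14-bit sensor:

```python
BIT_DEPTH = 14
full_scale = 2 ** BIT_DEPTH                    # 16,384 distinct levels

# Raw value at the top of each stop below clipping: each stop halves it
stop_tops = [full_scale >> s for s in range(6)]
print(stop_tops)                               # [16384, 8192, 4096, 2048, 1024, 512]

# How many distinct levels fall within each one-stop band
levels_per_stop = [stop_tops[s] - stop_tops[s + 1] for s in range(5)]
print(levels_per_stop)                         # [8192, 4096, 2048, 1024, 512]

# The brightest stop alone holds half of all levels; the top two hold 75%
top_two = levels_per_stop[0] + levels_per_stop[1]
print(top_two / full_scale)                    # 0.75
```

That 75% figure is the “overwhelming majority” in action—and the raw material that highlight-recovery tools have to work with.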
Adjusting exposure is also much more flexible when starting with raw data. Because exposure values in the original scene are recorded linearly, changing exposure in RAW processing is simply scaling the raw values up or down before the process of tone-mapping changes their linear relationship to each other.
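Because the data is linear, an exposure change at this stage is nothing more than multiplication. A minimal sketch (NumPy, a 14-bit scale, and made-up pixel values):

```python
import numpy as np

FULL = 2 ** 14 - 1                       # 16,383: top of the 14-bit scale

def apply_exposure(linear, ev):
    """On linear raw data, +1 EV doubles every value and -1 EV halves it;
    anything pushed past full scale simply clips."""
    return np.clip(linear * 2.0 ** ev, 0.0, FULL)

raw = np.array([512.0, 1024.0, 4096.0])  # hypothetical linear raw values
print(apply_exposure(raw, +1))           # [1024. 2048. 8192.]
print(apply_exposure(raw, -1))           # [ 256.  512. 2048.]
```

After tone-mapping, the values no longer have this simple doubling relationship, which is why exposure moves on a JPEG never behave quite the same way.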
The process of squishing this giant range of linear values down to make an image look the way you expect it to look is called “gamma encoding,” or “tone-mapping.” Again, that’s a one-way street. Once you’ve tone-mapped raw values into a lower bit-depth TIFF or JPEG file, it’s impossible to recover that highlight detail, or accurately move tones up or down with exposure adjustments.
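You can watch the squish happen with a simple power-law gamma. This sketch is an approximation—the 2.2 exponent and straight power curve are my assumptions, not sRGB’s exact piecewise formula—but it shows how many 8-bit codes the brightest stop collapses into:

```python
GAMMA = 2.2
FULL = 2 ** 14 - 1                 # 16,383: the largest 14-bit value

def encode(linear):
    """Gamma-encode a linear raw value into an 8-bit code (0..255)."""
    return round(255 * (linear / FULL) ** (1 / GAMMA))

# The brightest stop spans 8,192 distinct raw levels (8192..16383),
# but after encoding they collapse into only a few dozen 8-bit codes
codes = {encode(v) for v in range(8192, FULL + 1)}
print(len(codes))                  # dozens, not thousands
```

Thousands of distinct highlight levels land on the same handful of output codes, and once they do, no amount of editing can tell them apart again.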
Another benefit of starting with raw is the flexibility you have choosing a white balance. Since the entire range of colors the camera sensor is capable of capturing is stored in the RAW file, there’s no need to set white balance when shooting RAW. It can be set to any value later in processing. But once raw data is processed and encoded into RGB, it’s much less flexible. If you’re shooting JPEG, you’re relying on the camera to perform the RAW processing, making it essential to have an accurate white balance set at the time of exposure.
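On linear data, white balance is just a per-channel multiply, which is exactly why it can be deferred until processing. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical linear RGB values for a patch that *should* be neutral gray
neutral = np.array([0.20, 0.30, 0.45])   # a strong blue cast

# White-balancing linear data is just per-channel gain: scale each
# channel so the neutral patch comes out equal in R, G and B
gains = neutral.max() / neutral
print(gains)                             # per-channel multipliers

balanced = neutral * gains
print(balanced)                          # [0.45 0.45 0.45] -- neutral again
```

On tone-mapped RGB, the same correction would have to fight the nonlinear curve already baked into every channel, which is why large white-balance moves on a JPEG fall apart so quickly.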
Finally, in the early days of promoting RAW workflows, we were all fond of saying another benefit of keeping your RAW image files was that RAW processing would get better in the future. I barely believed this bit of marketing myself, but it was the release of the 2012 Process Version in Lightroom 4 that changed my mind forever. Going back to older RAW files, I find that I now have a great deal more latitude, especially in highlight recovery. Opening up deep shadow detail can also be done with a much more natural look. The detail was always there in the files; I just never had the tools to pull it out of the RAW file until now.
This improved processing isn’t limited to just better control over highlight and shadow detail. Lightroom 4 also brought us lens profiling and the incredibly powerful Lens Correction tools. Lightroom 5 came with vastly better chromatic aberration correction and new defringing controls. Every time I revisit an old RAW file, I find new ways to make my corrections better than the last time I tried.
So I’m a believer now, and I fully expect we’ll have new tools that make RAW processing even more powerful than it is today. But if you’ve locked your images into RGB and no longer have the RAW files, you’ll never be able to take advantage of advances that are sure to be developed in the future of RAW processing.
George Jardine is a frequent contributor to Digital Photo Pro. You can see more of his extensive tutorials on Photoshop and Lightroom, and sign up for his workshops at mulita.com.