It’s then interesting to note that black-and-white photography was originally a limitation, not so much a conscious choice: There was no chemistry for rendering color. Even then, it wasn’t until relatively recently that accurate color was possible—and now we’re well into the digital age, where we have the benefit of deciding after capture how we would like to present our images.
Perhaps this isn’t entirely accurate. Firstly, there are cameras such as the Leica M Monochrom and Phase One Achromatics, which only capture luminance information and can’t make a color image afterward. Secondly, an image conceived, executed and presented in either color or black-and-white always will be more visually powerful than one that didn’t have a clear idea from the outset. The presence or absence of color changes composition: Different colors have different “visual weight” and relative prominence; in monochrome, we only have luminance information, and bigger/brighter is always more obvious. Even so, there are ways we can improve the presentation of an image using modern processing. Let’s start by demystifying two things.
1 Certain cameras have particular black-and-white characteristics—partially true, but, even then, only if you use JPEG. If you’re shooting RAW, they provide different starting points—this is from a tonal response point of view—but, ultimately, you can get a consistent look regardless of the camera, even if some require more postprocessing work than others. I know because I have to do this all the time—"the images look different because I used a different camera" isn’t a viable excuse for a professional.
2 There are benefits to a monochrome-only camera. Partially true, again. The Bayer filter and subsequent conversion is an interpolation of neighboring pixel image data to extract color information; luminance information is lifted from the photosite. Any sort of interpolation will reduce tonal accuracy and increase noise because the luminance value you’ve got is now an approximation instead of a true value.
However, it’s fairly easy to see that whilst there are benefits to shooting monochrome-only, you actually can convert a color RAW file into a monochrome one and lower the perceived amount of noise—though not to as low a level as a monochrome-only camera. If you have a poor interpolation method, then the luminance values can be affected, too—once again, increasing the perception of pixel-level image noise in a color image. Bottom line: Monochrome-only will give you, yes, lower noise, and, yes, better detail.
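The noise claim above can be illustrated numerically. This is a toy sketch, not a model of any real sensor: it assumes each channel carries independent Gaussian noise around the same true luminance, in which case averaging the channels during a mono conversion lowers the noise floor.

```python
import numpy as np

# Toy illustration: three noisy per-channel readings of one true
# luminance value; combining them reduces the spread (perceived noise).
rng = np.random.default_rng(0)
true_luminance = 0.5
samples = true_luminance + rng.normal(0.0, 0.05, size=(10_000, 3))

per_channel_noise = samples[:, 0].std()      # one channel alone
averaged_noise = samples.mean(axis=1).std()  # three channels combined
print(per_channel_noise, averaged_noise)     # averaged is lower
```

The reduction is roughly a factor of the square root of the channel count—which is also why it can’t match a sensor that never interpolated in the first place.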
However, what you lose from a monochrome camera is the ability to control the relative luminance level of individual color channels. Why is this important? Suppose your color scene has a relatively small range of background tonal values, but your subject is a very different color. Its luminance may be the same as the background, but it stands out because of the difference in color. Normally, this kind of image is a very bad candidate for monochrome because you’d end up with something very flat-looking. (Real-life translation: Running out and buying an M Monochrom isn’t going to solve your black-and-white conversion woes, but it will give you an interesting starting base—especially when it comes to noise and dynamic range. Those of you who don’t mind doing a bit of work, hold on to your normal cameras. And, in fact, most of these techniques apply equally to the M Monochrom, too.)
The good news is, if you’re prepared to do some work, different colors but similar luminance can be overcome for tonal separation in monochrome. It’s still possible to separate the subject from the background; there are even a few options. Park that thought for a moment because we have to introduce the basics of black-and-white conversion from color first.
The simplest method is to throw out the color information, leaving luminance values only. You’re then free to do whatever you wish to complete processing of the file. After much investigation and experimentation, this is actually the method I use, coupled with another trick or two. Desaturation can be done in ACR (Saturation slider, first tab) or in Photoshop (Hue/Saturation tool, then desaturate the master).
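Under the hood, a conversion like this reduces to a weighted sum of the three channels. A minimal numpy sketch, using the common Rec. 709 luminance coefficients (Photoshop’s own desaturation weights differ slightly, so treat this as an approximation):

```python
import numpy as np

def to_luminance(rgb):
    """Collapse an (H, W, 3) RGB array in [0, 1] to (H, W) luminance."""
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 coefficients
    return rgb @ weights

img = np.array([[[1.0, 0.0, 0.0],    # pure red
                 [0.0, 1.0, 0.0],    # pure green
                 [0.0, 0.0, 1.0]]])  # pure blue
print(to_luminance(img))  # green renders brightest, blue darkest
```

Note that equally saturated colors land on very different gray values—which is exactly why the choice of weights matters so much later on.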
Slightly more complicated is using a gradient map. You can use the standard linear black to white transition (press D in Photoshop first, then add a new gradient map adjustment layer), which gives very similar, but not quite the same, results as desaturation. Gradient maps with a straight gradient tend to result in a higher-contrast image than desaturation. If you want to experiment a bit, it’s actually possible to put intermediate control points into the gradient and bias it toward a high-key (mostly white, black fades out faster) or low-key (black stays for longer) look. What actually works here will, of course, depend on your image, however, so be prepared to do some fiddling. The good news is, if you use a new adjustment layer, the gradient is easily modifiable without having to redo your entire conversion.
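Conceptually, a gradient map is just a lookup from input luminance to output tone through the gradient’s control points. A sketch with linear interpolation standing in for Photoshop’s smooth gradient (the stop positions here are illustrative):

```python
import numpy as np

def gradient_map(lum, stops, values):
    """Remap luminance through gradient control points (linear interpolation)."""
    return np.interp(lum, stops, values)

lum = np.linspace(0.0, 1.0, 5)
linear = gradient_map(lum, [0.0, 1.0], [0.0, 1.0])              # plain black-to-white
high_key = gradient_map(lum, [0.0, 0.4, 1.0], [0.0, 0.6, 1.0])  # black fades out faster
low_key = gradient_map(lum, [0.0, 0.6, 1.0], [0.0, 0.4, 1.0])   # black stays for longer
```

Shifting the midpoint stop left or right is the numerical equivalent of dragging a control point toward a high-key or low-key rendering.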
Finally, we’ve got the channel mixer. Best used on the RAW file in ACR, this lets you decide how much of each individual color channel goes into making the final image. Note that the tool only uses the luminance components of each channel, and it’s additive; this means that color (and perceptual color) information is discarded. To make things even more complicated, there’s a separate black-and-white conversion adjustment layer in Photoshop itself that effectively does the same thing as the ACR conversion, but it only has six channels for you to play with instead of the eight in ACR. In this case, more is definitely better, as it allows for much finer tonal control. It’s very important to remember not to shift any adjacent sliders to opposite ends, though: If you do, there’s a very high chance of posterization. And, don’t forget that magenta runs into red, so these two values should also be similar. Imagine a snake: The slider positions should be joinable with a smooth curve.
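Stripped to its arithmetic, the channel mixer is a per-pixel weighted sum of the channels. A minimal sketch—the weights below are illustrative, not ACR’s defaults, and real converters use more perceptual channel splits than a raw R/G/B sum:

```python
import numpy as np

def channel_mix(rgb, weights):
    """Mono conversion as a weighted sum of R, G, B; weights totaling
    about 1.0 roughly preserve overall brightness."""
    return np.clip(rgb @ np.asarray(weights, dtype=float), 0.0, 1.0)

pixel = np.array([0.6, 0.4, 0.2])
neutral = channel_mix(pixel, [1/3, 1/3, 1/3])      # even mix
red_biased = channel_mix(pixel, [0.8, 0.3, -0.1])  # favors red, suppresses blue
```

The clip at the end is where posterization risk lives: extreme opposing weights push neighboring tones onto the same clipped value.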
Remember the earlier conundrum of how to isolate a different-colored, but similarly luminous, subject from the background? The solution to this is, of course, the channel mixer. You can increase the luminance of the primary color of your subject and decrease that of the predominant background color, or the reverse—thus creating visual separation between the two elements. The problem comes when you’ve got a mixture of colors in both subject and background, and they’re shared—here, changing luminance of different channels isn’t going to help you. There are some images that simply don’t work in black-and-white.
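The separation trick is easy to demonstrate numerically. These pixel values are contrived so that an even-weight mix renders a red subject and a blue background identically flat, while a red-biased mix pulls them apart:

```python
import numpy as np

# Illustrative pixels: a red subject and blue background with equal
# even-weight luminance, so a neutral conversion renders them flat.
subject = np.array([0.8, 0.2, 0.2])
background = np.array([0.2, 0.2, 0.8])

even = np.array([1/3, 1/3, 1/3])
mix = np.array([0.8, 0.2, 0.0])  # boost red, discard blue

flat_s, flat_b = subject @ even, background @ even  # identical grays
sep_s, sep_b = subject @ mix, background @ mix      # subject now far brighter
```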
This isn’t the entire toolkit, of course. You’ll find that after this kind of conversion, things look rather flat. This is actually a good thing because it means you’ve got plenty of tonal and dynamic range information to work with; there isn’t anything clipped on either end. Digging a bit deeper, we need to remember that the way the human eye perceives contrast and separation is highly dependent on both differences in hue and comparing immediately adjacent areas as our eyes scan the frame. We don’t “see” a whole scene at once; our brains compensate with persistence of vision so we can experience large areas simultaneously.
It’s not so easy to replicate this in a still frame because of the limits of output dynamic range. The best thing to do is, once again, remember that we only need to: a) have general global zones to give an image some overall structure; and b) make sure the local areas make visual sense in isolation. Two of Photoshop’s tools will be your best friends here: the dodge and burn brush, and the curves tool. A tablet is also extremely helpful for these things, as it gives you precision control and feathering over your brush application. It lets you avoid hard edges and odd abrupt transitions, and permits highly precise editing without having to resort to lasso masking.
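The curves side of that toolkit is, numerically, another control-point remap—this time of gray values rather than colors. A sketch with linear interpolation standing in for Photoshop’s smooth spline; the control points here are an arbitrary gentle S-curve:

```python
import numpy as np

def apply_curve(lum, x_points, y_points):
    """Apply a tone curve defined by control points, curves-tool style."""
    return np.interp(lum, x_points, y_points)

# Gentle S-curve: deepen shadows, lift highlights, pin the midpoint.
tones = np.array([0.25, 0.5, 0.75])
out = apply_curve(tones, [0.0, 0.25, 0.5, 0.75, 1.0],
                         [0.0, 0.18, 0.5, 0.82, 1.0])
```

Dodging and burning then does the same thing selectively, area by area, which is why the two tools complement each other so well.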
At this point, it’s probably worth talking about plug-ins and filters. The former are either a set of Photoshop actions or a separate program, which control the conversion—specifically, the luminance translation of each color channel into a luminance value—and the tonal map of the final file. Whilst they’re extremely popular and used by many “Internet street photographers” either to save time or because they’re unable to get their desired results from a nuts-and-bolts conversion, I personally avoid them because they don’t give you enough fine control, and even worse, everybody’s images that were run through that filter look the same.
Photography is arguably art and very much down to personal taste. If you’re 100% happy with the way those results look, that’s great, and, honestly, I’m jealous of the amount of time you’ve saved in your workflow. However, claiming this is art is disingenuous; it’s like finding out Ansel Adams shot BW400CN (a black-and-white film designed to be run through a C41 color-processing machine) and developed it at the local pharmacy—instead of Tri-X or Plus-X, controlling his development time and chemical composition, and then cutting precision masks to dodge and burn portions of his subjects. You’re no more in control of the creative process than a diner in a restaurant controls the presentation or timing of his or her dish.
There’s a second type of filter that’s useful, and in either form, it performs a similar function to the channel mixer—it either admits or cuts out light that’s of a certain range of wavelengths. The most common example of this is a physical red filter that goes over the end of your lens; the effect is dark skies because very little of the blue spectrum passes through the red filter and onto the recording medium. It works with digital, too, but you have to remember to adjust exposure accordingly, and obviously not use it in color mode. You can also replicate this effect digitally afterward: Add a new layer, fill it with a single color, select the appropriate blending mode, and only then do your black-and-white conversion. There are interesting results obtainable through this method.
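The physical-filter effect can be approximated in code as attenuating each channel by the filter’s transmission before taking luminance. The transmission values below are illustrative, not measured from any real filter:

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights

def filtered_mono(rgb, transmission):
    """Simulate a colored lens filter: attenuate each channel by its
    transmission, then convert the result to luminance."""
    return (rgb * np.asarray(transmission)) @ LUMA

sky = np.array([0.3, 0.5, 0.9])    # a blue-sky pixel
red_filter = [1.0, 0.2, 0.05]      # passes red, blocks most green/blue
print(filtered_mono(sky, red_filter), sky @ LUMA)  # filtered sky is darker
```

The darkened sky also hints at the exposure penalty: most of the scene’s light never reaches the final value, which is why physical filters demand exposure compensation.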
Finally, if you pull back the black-and-white conversion layer slightly—assuming you didn’t directly apply the conversion to the image—it’s also possible to use a color layer to create a toning effect; sepia or platinum is probably the most common. You can even use a graduated fill layer to provide a variable effect; this is especially useful for increasing the density of skies, for instance.
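Toning, reduced to arithmetic, maps each gray value onto a ramp between a shadow color and a highlight color. A sketch with a warm, sepia-like highlight (the color values are illustrative, not a calibrated sepia):

```python
import numpy as np

def tone(lum, shadow, highlight):
    """Map grayscale values onto a two-color ramp (sepia-style toning)."""
    lum = np.asarray(lum)[..., np.newaxis]
    return (1.0 - lum) * np.asarray(shadow) + lum * np.asarray(highlight)

gray = np.array([0.0, 0.5, 1.0])  # black, midtone, white
sepia = tone(gray, shadow=[0.0, 0.0, 0.0], highlight=[1.0, 0.93, 0.78])
```

A graduated variant simply varies the blend strength spatially—the digital analogue of the sky-density trick mentioned above.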
Personally, I prefer to shoot color and then convert to black-and-white, not because I can’t decide up front how a scene should be presented, but because there’s a lot of flexibility in how I want to handle the conversion later to highlight certain aspects of my subject or achieve a certain tonal feel. Whilst all of these techniques can be applied to JPEGs, best results obviously will be achieved with RAW files because more information has been retained: You, as the artist, can then decide how to allocate that tonality across your available output scale. I use the channel mixer method almost exclusively because of the amount of control possible, especially when combined with dodging and burning (and not to mention the undo option!). If only Ansel Adams had it so easy!
Ming Thein is a fine-art/commercial photographer and author; you can find his blog at mingthein.com. He also teaches workshops internationally and has a range of postprocessing videos available, including The Monochrome Masterclass.