There have been several milestones in the quest to reproduce the visible spectrum on devices like monitors and printers with a vibrancy and fidelity that approach what the human eye can perceive. Another milestone is upon us.
Apple has recently announced deployment of what the company calls Wide Color, its hardware and software implementation of a color standard originally designed for digital projectors at movie theaters. That standard, inelegantly named DCI-P3 (and often referred to simply as P3), was adopted in 2011 as cinemas began moving from film projectors to digital ones for faster delivery of movies.
The DCI-P3 specification was developed to allow a greater range of colors than home or office projectors and to ensure that colors matched from theater to theater. It describes the range of colors that compliant devices must reproduce, along with a way for non-DCI-P3 devices to handle colors they can’t display.
The standard is also baked into the Ultra HD Premium standard, so TVs and monitors that have that branding are required to show at least 90 percent of the DCI-P3 space.
Because Apple has implemented Wide Color in iOS, macOS and tvOS, the company’s newest iPads, iPhones and iMacs can display more colors than previous devices, and they can also communicate these colors to other devices that use the DCI-P3 standard, such as UHD Premium TV sets.
To understand the importance of this new standard and how Apple’s adoption of it will benefit photographers, it’s first necessary to step back and talk a bit about the “exciting” world of color management.
Color Management Basics
The fundamental idea of color management is that every device has a “color space”: the subset of all the colors the human eye can perceive that the device can reproduce.
Be it cameras, monitors or printers, not only is there no device that can display the full range of visible colors, but these different types of imaging devices use different systems to display colors. That means each type of device has some colors it excels at reproducing and many it reproduces poorly or not at all. Most importantly, these are not the same colors from device type to device type.
Color, then, is talked about as part of a color space, often displayed as a 2D or 3D model of all the visible colors together with the subset of those colors that a particular device or standard, like sRGB or Wide Color, can reproduce. This core idea is fundamental to how we, as photographers, work with color, because no device in our imaging chain can faithfully reproduce the whole range of visible colors.
Different devices have different color “channels” as part of their color space, and the space is usually named after those channels. Monitors have red, green and blue channels, so their color space is RGB. A printer uses the channels of cyan, magenta, yellow and black, and so CMYK is its color space.
In an RGB color space, to display white, a monitor sets each color value to its highest intensity. In a printer, white is made by putting down no ink. Pure red on an RGB display is created when only the red channel is set to its highest illumination and the green and blue channels are set to zero. A printer produces this same color with a combination of a lot of magenta ink and almost as much yellow, plus a bit of cyan and black.
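To make the channel arithmetic concrete, here is a minimal sketch in Python of a naive RGB-to-CMYK conversion. The function name and the formula are illustrative only; a real printer driver works from measured profiles, not this arithmetic:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion, for illustration only.

    r, g and b are in [0, 1]. Real printer profiles use measured
    lookup tables, but this shows the idea that the two spaces
    describe the same color with different channels.
    """
    k = 1.0 - max(r, g, b)        # black = how dark the darkest channel is
    if k == 1.0:                  # pure black: no colored ink needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(1, 1, 1))  # white: no ink at all -> (0.0, 0.0, 0.0, 0.0)
print(rgb_to_cmyk(1, 0, 0))  # pure red: magenta + yellow -> (0.0, 1.0, 1.0, 0.0)
```

In this simplified model, the magenta and yellow come out equal for pure red; in practice, a printer’s measured profile produces the kind of uneven ink mix described above.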
Color management is simply the act of translating colors from one space to another and, for the colors the output device can’t reproduce, finding the closest possible color it can output and substituting that instead.
I like to think of this in terms of a translator at the United Nations. Their first job is to take an input language like Japanese and map it to an output language like English. That’s the first part of color management, figuring out how to move things from one language to another without losing the precise meaning.
A translator at the U.N. has a second job as well, which is to figure out the closest possible word where none exists in the output language. For example, the Japanese word “yugen” translates approximately to “a profound awareness of the universe that triggers a deep emotional response.” A translator hearing the word yugen spoken would need to figure out how to describe it to the English-speaking listener.
This is the more important job of color management: figuring out how to map colors that fall outside the range, or gamut, of the target color space to something that not only can be reproduced but can be reproduced faithfully.
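The substitution step itself can be sketched in Python. Per-channel clipping is the crudest possible gamut-mapping strategy, shown here purely to illustrate the idea; real color engines use smarter, perceptual strategies (“rendering intents”) that preserve relationships between colors, and the sample color values below are approximate:

```python
def clip_to_gamut(rgb):
    """Simplest possible gamut mapping: per-channel clipping.

    An out-of-gamut color (values outside [0, 1] in the target
    space) is replaced by the nearest representable value on each
    channel.
    """
    return tuple(min(1.0, max(0.0, v)) for v in rgb)

# A saturated wide-gamut red, expressed in sRGB coordinates, overshoots
# 1.0 on red and undershoots 0.0 on green and blue (illustrative numbers):
print(clip_to_gamut((1.09, -0.23, -0.15)))  # -> (1.0, 0.0, 0.0)
```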
While some things might not matter in terms of the color mapping—a shade of blue in a sky could probably be quite incorrect but still look like “sky”—many other things are more critical. The vibrant hues of dyed fabric, for example, rarely reproduce well in print. Subtle gradations in shades of colors are often mapped to the next-closest color on standard displays.
Consumer computer monitors have never been particularly accurate, but in the early days of computing that didn’t matter. If someone used WordStar on a green CRT or an amber one, it was only a display of text. As design and photography work moved from traditional production to digital, the need for accuracy became apparent.
In 1993, Apple began to look at the color management needs of desktop publishing hardware (the Nikon D1, which would really usher in wide adoption of digital photography, wouldn’t arrive for six more years) and introduced ColorSync, a software tool to perform the task of describing and managing the colors on screen and on press. At first ColorSync was a stand-alone application, but it was later integrated into Mac OS X.
That led the company to co-found the International Color Consortium with a number of other hardware companies in order to establish standards to handle color management tasks between devices without the herculean effort it took up until then. (When you download an ICC profile for your printer, you’re benefiting directly from the early ColorSync work.)
It also led to the establishment of the Standard RGB, or sRGB, space by HP and Microsoft to ease color management between displays and output. The problem with sRGB is that it’s designed to manage colors in a typical consumer environment: a well-lit room and equipment that can display only a limited range of colors.
Computer displays have traditionally been 8-bit, which simply means they can display 256 distinct steps per channel from darkest to brightest. That results in roughly 16.7 million possible color combinations. Many capture devices, including pro digital cameras, capture colors at a higher bit depth, typically 12-bit or 14-bit, which yield 4,096 and 16,384 levels per channel, respectively.
That means an image captured at 12-bit has roughly 69 billion possible colors, and one captured at 14-bit has more than four trillion, vastly more than the human eye can discern. Fortunately, computers do lots of things better than humans, and by working at a higher bit depth, it’s more likely that the imaging device will capture or reproduce the exact visible colors and all the color nuances in a scene.
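The arithmetic behind those numbers is straightforward: levels per channel are 2 raised to the bit depth, and total colors are that number cubed for the three channels:

```python
# Levels per channel and total colors for common bit depths.
for bits in (8, 10, 12, 14):
    levels = 2 ** bits    # distinct steps per channel
    total = levels ** 3   # three channels (R, G, B)
    print(f"{bits}-bit: {levels:,} levels/channel, {total:,} colors")
```

Running this shows 8-bit yielding 256 levels and about 16.7 million colors, 12-bit about 69 billion, and 14-bit about 4.4 trillion.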
Starting with Mac OS X “El Capitan,” the operating system is able to process colors in a 30-bit space (10 bits per channel). That means digital cameras can capture in 12-bit or 14-bit and Macs can process that data at 30-bit, but most monitors still display 8-bit color. This is roughly analogous to capturing a RAW file from a high-end camera and then saving it as a low-quality JPEG.
The use of sRGB as the basis for color display and output is, incidentally, why people find that if they buy a printer and a monitor and do no color management at all, the colors come out reasonably correct, but if they turn on color management, accurate color at first is more elusive. sRGB devices are built within a certain range of tolerances, expecting users to be in a bright office environment, and are calibrated at the factory to be within the sRGB color space. Once you start to move from the sRGB space, with its assumed profiles and calibration, color management has to be a lot more accurate because it’s translating to a variety of color “languages” and dialects. When you download something like an ICC profile for a printer, you’re getting a more accurate set of translations, but only if your particular device matches the output of the devices used to make the profiles. ICC profiles are largely created by averaging out the results from a number of samples, and so individual units will vary from that sampling.
If you use a color calibration and profiling tool on your monitor in bright daylight, it will be incorrect when you’re using your display in dimmer light. If you profile your printer but then some nozzles clog, it will be incorrect when you produce your printed work. Now, instead of accurately translating color from one device into the language of another device, you’re adding errors in that translation.
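Under the hood, a profile-based translation runs each device’s values through a device-independent “connection space” such as CIE XYZ. As a minimal sketch of one leg of that trip, the following uses the published sRGB transfer curve and D65 matrix; a real ICC profile layers measured device data on top of this, and the function name is just for illustration:

```python
def srgb_to_xyz(r, g, b):
    """One leg of a profile-style translation: sRGB -> CIE XYZ.

    An ICC workflow converts source-device values into a
    device-independent profile connection space (XYZ or Lab),
    then out to the destination device's space.
    """
    def linearize(c):
        # Undo the sRGB gamma curve to get linear light.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = (linearize(c) for c in (r, g, b))
    # Standard sRGB-to-XYZ matrix (D65 white point).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# White (1, 1, 1) should land near the D65 white point (~0.95, 1.0, 1.09):
print(srgb_to_xyz(1.0, 1.0, 1.0))
```

When the profile describing either end of that chain no longer matches the physical device—a drifted monitor, a clogged nozzle—every translation through it inherits the error.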
Getting Wider Color
Along with the 30-bit color support in the OS, Apple also began releasing displays with both more resolution (“Retina Display”) and better color reproduction. The display in the current iMacs, for example, can operate in 10-bit, and the screens in the iPhone 7 and iPad Pro models have increased the gamut as well.
This brings us back to Wide Color. With Apple moving to increase the range of available colors in its displays, and with the improving ability of the various operating systems to handle larger bit depths, the sRGB space just doesn’t cut it anymore.
Basing the system-level Wide Color on the Display P3 standard is a smart move, because Display P3 is an extension to sRGB. To return to the analogy of translators, Wide Color is like adding new words to the dictionary to describe new concepts instead of writing a new language.
It also means that applications that work in sRGB can still function in Wide Color, albeit without the added range of colors. An iOS photo editing app that hasn’t been updated to support Wide Color can still edit files in the Wide Color space just fine. Wide Color uses the same white point, gamma and color range as sRGB; it just adds values greater than 1 or less than 0 to the sRGB space.
That sounds confusing, but it’s really very simple. In sRGB, having the least amount of any color is measured as 0 and as much of a color as possible is given a value of 1. Everything else falls between those two values. What Wide Color, and the Display P3 space it’s built upon, allow is for a device to display a color that’s brighter than the brightest shade of a color in sRGB or darker than the darkest shade of a color in sRGB, while the higher bit depth allows there to be more numbers between that 0 and 1. A piece of white paper might be 1, but the tones of a “whiter” wedding dress could be greater than 1.
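A small Python sketch makes the idea concrete. The component values below for the most saturated Display P3 red, expressed in extended sRGB coordinates, are approximate figures used here for illustration:

```python
# In an extended-range sRGB representation, Display P3 colors keep
# sRGB's white point and gamma but may use component values outside
# [0, 1]. The most saturated P3 red, for example, is roughly
# (1.09, -0.23, -0.15) in extended sRGB coordinates (approximate).
p3_red_in_extended_srgb = (1.09, -0.23, -0.15)

def displayable_on_srgb(color):
    """True if a color fits in a plain sRGB display's [0, 1] range."""
    return all(0.0 <= v <= 1.0 for v in color)

print(displayable_on_srgb((1.0, 0.0, 0.0)))         # sRGB red: True
print(displayable_on_srgb(p3_red_in_extended_srgb)) # P3 red: False
```

An sRGB-only app simply never produces values outside [0, 1], which is why it keeps working unmodified inside the wider space.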
Evolution Of A System
Naturally, none of this improved color range or fidelity will be of any use with equipment that can’t take advantage of it. If you have an 8-bit monitor, you’ll need to purchase a 10-bit display to even see some of the extra color data in a 12-bit or 14-bit file, and even then you won’t necessarily get the extra range of colors in Wide Color/Display P3.
A 10-bit display covering a large color gamut was prohibitively expensive just a few years ago, but prices have come down significantly. A 27-inch Dell UP2716D, which Dell says reproduces 100 percent of sRGB and 87 percent of Display P3, can be had for under $700.
Right now, the iPhone 7, iPad Pro and 5K iMac feature Wide Color displays, and a number of video-editing displays do as well. As the Wide Color standard on Macs grows in use and as more people begin editing in the Display P3 standard, the number of third-party displays that can produce the wider color range of Wide Color will only grow.
With the Wide Color standard only now rolling out to iOS, macOS and tvOS, it will be some time before photographers everywhere are enjoying the advantages of a wider range of colors and more accurate gradations, but at least the foundation is now in place, and the era of moving beyond sRGB is here.