Wednesday, May 30, 2007
The Bit-Depth Decision
The choice between an 8-bit and a 16-bit workflow is one of the least understood aspects of digital photography, even among professionals. This primer will get you up to speed quickly.
Within the field of photography and digital imaging, a number of debates are argued by users and experts: Nikon versus Canon, Mac versus Windows, zoom versus prime lens, RAW versus JPEG. The list goes on and on. Add to that 8-bit versus 16-bit. What's the difference, and is the debate even meaningful? After reading our primer, you'll have a better idea of where to stand on the issue.
What Is Bit-Depth?
Digital images are a massive assemblage of numbers. A pixel is merely a solid color or tone defined by numeric values. The earliest computer systems could assign only a value of 1 or 0 to any single pixel. It was either black or white, a 1-bit file, far too crude to produce a usable photographic image. What about a 2-bit system? The numeric value can be 00, 01, 10 or 11. This encoding could produce four possible shades per pixel: white, light gray, dark gray or black. Still not a very useful system if your goal is to reproduce a full-tone image.
Today, the most common encoding systems use an 8-bit scheme, which allows the definition of 256 shades from black to white (2⁸ = 256). Research has shown that the minimum number of shades needed to produce a continuous-tone image is in the neighborhood of 250 values. If you have three color channels such as Red, Green and Blue, and each channel uses 256 tones from black to white, you can now create what's known as a 24-bit color image. A three-channel 8-bit file has the potential to describe 16.7 million colors (256 × 256 × 256).
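The arithmetic above is simple enough to verify yourself. Here's a minimal sketch in Python (the function names are ours, not from any imaging library) that computes shades per channel and total colors for a three-channel image:

```python
# Shades available per channel at a given bit depth,
# and total colors for a three-channel (RGB) image.
def shades(bits):
    return 2 ** bits

def rgb_colors(bits_per_channel):
    return shades(bits_per_channel) ** 3

print(shades(1))      # 2 -- black or white only
print(shades(2))      # 4 -- white, light gray, dark gray, black
print(shades(8))      # 256
print(rgb_colors(8))  # 16777216 -- the "16.7 million" colors of 24-bit RGB
```

Run the same function with 16 bits per channel and you get 65,536 shades per channel, which is where the 16-bit side of the debate comes in.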
Do we need any more? Yes, and here's why. Any time you apply an edit to a pixel, you're altering the numbers. The scale of those numbers isn't infinite, and as you move them around, rounding discards data. This loss is often called "quantization error."
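You can see quantization error in action with a toy edit cycle. This sketch (our own illustration, not any editor's actual code) darkens every 8-bit tone by half, then brightens it back; because each step rounds to whole numbers, levels that collapse together never come back apart:

```python
# Quantization error in an 8-bit edit cycle: darken by 50%, then
# brighten by 2x. Integer rounding merges adjacent tones, and the
# second edit cannot recover them.
ramp = list(range(256))                # every 8-bit tone, 0..255
darkened = [v // 2 for v in ramp]      # halve brightness; integer math rounds down
restored = [v * 2 for v in darkened]   # double it back

print(len(set(ramp)))      # 256 distinct tones before editing
print(len(set(restored)))  # 128 distinct tones after -- half the levels are gone
```

The restored ramp only contains even values; the odd tones were rounded away for good. With 16 bits per channel the same edits still round, but there are so many more levels that the gaps are far too small to show up as posterization in print.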