The Image Chain
After light enters a lens, the camera begins creating an image by converting physical information, in the form of photons, into digital information, in the form of electrical charge. This is done primarily via the photodiodes at the sensor, the anti-aliasing filter (which reduces sharpness right off the bat) and the analog-to-digital converter. The process is lossy because converting a real-world projection into a mathematical representation requires quantization that effectively "maps" the nearly infinite variability of the real world onto a rounded-off digital approximation. Most digital cameras use a Bayer sensor layout with a red, green and blue checkerboard pattern of pixels. Half of these pixels are dedicated to capturing green information as a measure of luminance, since our eyes are most sensitive to the green portion of the spectrum. The Bayer pattern also requires exact positioning in terms of depth so that light rays channeled from the lens and aperture penetrate to the optimum distance for accurate measurement. Because the sensor can't simply be built thicker, more surface area is the only way to add extra light information, which is why DSLRs come in competing sensor formats, from full frame (the same size as a frame of 35mm film) all the way down to Four Thirds/Micro Four Thirds at roughly one-quarter the area.
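As a minimal sketch (an idealized model, not any manufacturer's actual layout), the Bayer pattern can be represented as a repeating 2x2 RGGB tile, which makes it easy to verify that green photosites occupy exactly half the sensor:

```python
# Minimal sketch of a Bayer color filter array as a repeating 2x2
# RGGB tile. Each photosite sees only one color channel.

def bayer_pattern(height, width):
    """Return a height x width grid of channel labels for an RGGB mosaic."""
    tile = [["R", "G"],
            ["G", "B"]]
    return [[tile[y % 2][x % 2] for x in range(width)] for y in range(height)]

mosaic = bayer_pattern(4, 4)
for row in mosaic:
    print(" ".join(row))

# Green occupies half the photosites, mirroring the eye's greater
# sensitivity to green as a measure of luminance.
flat = [channel for row in mosaic for channel in row]
print({ch: flat.count(ch) / len(flat) for ch in "RGB"})
```

Printing the grid shows the checkerboard directly: rows alternate R G R G and G B G B, with green at 50% and red and blue at 25% each.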
Interestingly enough, each photosite in a Bayer array measures only luminance, how much light arrived rather than what color it was, and the camera interpolates full color for every pixel from the filtered values of its neighbors, a process known as demosaicing. The most popular analogy for how these photodiodes work is an empty bucket that captures light instead of water. If too much light arrives, as in an overexposed image, the bucket fills beyond capacity to the point of clipping, where potential image information is lost. If only a small amount of light hits the bucket, then only a small amount of image information is collected. And when it comes to pixel size, just as with the sensor as a whole, the larger the area, the better it is at gathering light information.
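The bucket analogy can be sketched directly: a photosite accumulates charge until it reaches its full-well capacity, after which any further light is clipped away. The capacity figure below is illustrative, not taken from any real sensor:

```python
# Sketch of the "bucket" model of a photosite: charge accumulates
# until full-well capacity, then clips. The number is illustrative.

FULL_WELL = 1000  # hypothetical capacity, in electrons

def expose(photon_signal):
    """Accumulate charge for one exposure; overflow is clipped and lost."""
    charge = min(photon_signal, FULL_WELL)
    clipped = photon_signal > FULL_WELL
    return charge, clipped

print(expose(250))   # dim light: small but faithful signal -> (250, False)
print(expose(1500))  # overexposure: capped at capacity -> (1000, True)
```

Once the bucket overflows, the excess signal is simply gone, which is why blown highlights can't be recovered in post-processing.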
Pixel density refers to the number of these photodiodes packed into the tiny amount of real estate on the sensor. It's also expressed as sensor resolution in megapixels (pixel width multiplied by pixel height, divided by one million). We're now seeing that high pixel density, even on a smaller sensor, isn't a bad thing. It's almost always a good thing, actually, because there's a direct correlation between sensor resolution and the resolution you can output to print. Also, the more pixels, the better the detail in the image regardless of the sensor area, which translates to better acutance for sharper edges and to more color information, thanks to the complex relationship between red, green and blue pixels on a Bayer array.
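The megapixel arithmetic, and its link to print output, can be worked through directly. The 300 DPI figure below is a common print-quality rule of thumb rather than a fixed standard, and the 6000 x 4000 sensor is hypothetical:

```python
# Megapixels = pixel width x pixel height / 1,000,000.
# Print size at a given DPI shows why sensor resolution maps to
# print output resolution.

def megapixels(width_px, height_px):
    return width_px * height_px / 1_000_000

def max_print_size_inches(width_px, height_px, dpi=300):
    """Largest print, in inches, the image supports at the given DPI."""
    return width_px / dpi, height_px / dpi

# A hypothetical 6000 x 4000 sensor:
print(megapixels(6000, 4000))             # 24.0 megapixels
print(max_print_size_inches(6000, 4000))  # roughly 20 x 13.3 inches at 300 DPI
```

Double the linear resolution and the printable area quadruples at the same DPI, which is the direct correlation the paragraph above describes.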
Historically, this required a trade-off: packing more photodiodes onto a sensor meant smaller pixels, and a growing share of the incoming light was lost to the gaps between them. This was most pronounced in compact cameras with DSLR-level resolutions. But the technology has improved a great deal, and modern advances like microlens arrays and backside illumination compensate for these downsides by making much more efficient use of incoming light, channeling rays that would otherwise have fallen into the gaps between photoreceptors. Sensors are now remarkably efficient, and even the gaps have benefits, as they house components and integrated circuits that aren't light-sensitive.
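The trade-off is easy to quantify as pixel pitch, the sensor's width divided by its horizontal pixel count. The sensor widths below are nominal figures used for illustration:

```python
# Pixel pitch at the same horizontal resolution on different sensor
# formats, illustrating why dense compact sensors ended up with much
# smaller pixels. Sensor widths are nominal figures for illustration.

def pixel_pitch_microns(sensor_width_mm, width_px):
    """Approximate center-to-center pixel spacing, in microns."""
    return sensor_width_mm / width_px * 1000

for name, width_mm in [("Full frame", 36.0),
                       ("Four Thirds", 17.3),
                       ("1/2.3-inch compact", 6.17)]:
    pitch = pixel_pitch_microns(width_mm, 6000)  # same 6000-px-wide resolution
    print(f"{name}: {pitch:.2f} microns per pixel")
```

At the same resolution, the compact sensor's pixels come out several times smaller than the full-frame sensor's, which is exactly the situation that microlenses and backside illumination were developed to mitigate.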