I've decided to throw in a technical/instructional post occasionally, and this is the first. I thought of doing one every week or two, but I don't want to tie myself to a schedule, so they will appear at random, whenever I feel inspired to expound on some topic.
Several times in recent years I have encountered a theory of digital exposure that I could never quite comprehend, a philosophy of “exposing to the right.” According to this theory, in an image spanning several f/stops, fully half of all the data (1,024 bits on this chart) in a digital image resides in the brightest f/stop, half of the remaining data in the next f/stop down, and so on, until you reach the lowest f/stop, which (according to the theory) contains only 8 bits of data. The theory holds that by exposing to the right you capture more of the available data. Proponents place a scale at the bottom of a histogram similar to this one to illustrate the idea.
What the histogram actually represents is the distribution of pixels across the 256 available levels of brightness, and every pixel consists of exactly the same number of data bits as every other pixel: 8 bits per channel or 16 bits per channel, no more, no less. In an 8-bit image, every pixel, whether it sits at 0 (pure black), at 255 (pure white), or anywhere in between, contains 8 bits of red data, 8 bits of green data, and 8 bits of blue data, for a total of 24 bits. Likewise, each pixel of a 16-bit image contains 16 bits per channel, for a total of 48 bits per pixel. In practice, today's cameras use only 12 or 14 of the available 16 bits, but the file is still written in 16-bit format. No pixel is written with 1,024 bits. The only pixels described by 8 bits are those in an 8-bit single-channel black-and-white image, and there every pixel consists of 8 bits regardless of where it falls on the histogram.
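The arithmetic is easy to check for yourself. Here is a quick sketch (the function is my own illustration, not from any imaging library): the bit count of a pixel depends only on bit depth and channel count, never on where the pixel falls on the histogram.

```python
def bits_per_pixel(bits_per_channel, channels=3):
    """Total number of data bits describing one pixel."""
    return bits_per_channel * channels

# A deep-shadow pixel and a highlight pixel in an 8-bit RGB image
# are both described by exactly 24 bits (8 per channel):
print(bits_per_pixel(8))   # 24
# A 16-bit-per-channel file uses 48 bits per pixel, even though the
# camera may only fill 12 or 14 of those 16 bits with real data:
print(bits_per_pixel(16))  # 48
```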
So why all the fuss about “exposing to the right” if there isn't more data there? Because digital photographers have discovered that if you stretch the tones of the shadow areas toward the right to “open them up,” you often get confetti-colored “noise,” but if you darken bright pixels in the highlights, you don't appear to get noise. The “half the data is in the brightest f/stop” theory is an attempt to explain that phenomenon, but as the above shows, it has nothing to do with the amount of data. So why is it true that opening up shadows produces noise while darkening highlights to retrieve detail does not? It is a matter of perception.
We’ve all seen noisy shadow areas, so I won't trouble myself to demonstrate that, but we should look at the other end of the brightness scale as a way of understanding what is really going on when we “expose to the right.” I'll start with this snow-scene photo.
The visual difference between color noise in highlights and in shadows comes down to saturation. It has long been a practice of film photographers to deliberately underexpose by a half stop in order to increase saturation. Shadows show noise more than highlights for exactly the same reason: saturation increases as you move left on the histogram and decreases as you move right, while color is most clearly distinguishable in the middle.
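A toy calculation of my own makes the saturation point concrete. Using the standard HSV definition of saturation, hold the disagreement between channels fixed and slide the overall brightness up and down the histogram:

```python
def saturation(r, g, b):
    """HSV-style saturation: (max - min) / max, taken as 0 for pure black."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

# The same 10-level channel imbalance placed in the shadows,
# midtones, and highlights of an 8-bit histogram:
print(saturation(30, 20, 20))     # ~0.33: strongly colored in the shadows
print(saturation(130, 120, 120))  # ~0.08: muted in the midtones
print(saturation(230, 220, 220))  # ~0.04: nearly washed out in the highlights
```

The identical channel imbalance reads as vivid color in the shadows and as a barely tinted gray in the highlights, which is exactly why the same amount of sensor error is so much more visible at the left end of the histogram.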
Even to our eyes, colors are most easily distinguished from one another in average light. In dim light colors become murky, and in really bright light they wash out. If I were to take you into a dimly lit room and ask you what color the armchair in a particularly dark corner was, you would have difficulty describing the color accurately. If I did that with a group of people, I would probably get a different opinion from each person. Those discrepancies would be “noise.”
Digital sensors are no different, and in their attempt to distinguish color, individual pixels generate variances in color (noise) as the color becomes harder to distinguish. When you brighten the shadows of a digital image, you are asking the computer to do the same thing I asked of you in the darkened room: describe a color there was too little light to see accurately, and in brightening it the discrepancies become more apparent. The same would be true if I asked you to name the color of something the sunlight was glaring off of. You (or the camera) can't accurately determine color under those conditions, but since saturation at the bright end is so much lower, the noise in highlights is not as apparent.
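To see the amplification in numbers, here is a small sketch of my own, assuming noise shows up as a few levels of disagreement between channels and that brightening or darkening acts as a simple multiplicative gain:

```python
def apply_gain(pixel, gain, limit=255):
    """Scale every channel of a pixel, clipping to the valid 8-bit range."""
    return tuple(min(limit, round(c * gain)) for c in pixel)

def spread(pixel):
    """Channel-to-channel disagreement, visible as color noise."""
    return max(pixel) - min(pixel)

# A dark gray patch whose channels disagree by a few levels of sensor noise:
noisy_shadow = (10, 14, 8)
# "Opening up" the shadows by two stops (4x gain) scales that disagreement too:
opened = apply_gain(noisy_shadow, 4)
print(spread(noisy_shadow), spread(opened))  # 6 24

# A bright patch with the same disagreement, darkened by one stop (0.5x gain):
noisy_highlight = (240, 244, 238)
darkened = apply_gain(noisy_highlight, 0.5)
print(spread(noisy_highlight), spread(darkened))  # 6 3
```

Brightening multiplies the discrepancies along with the signal, while darkening compresses them, which matches what photographers see when they push shadows up versus pull highlights down.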
So should you expose to the right of the histogram? In their user manuals, most if not all camera manufacturers recommend centering the histogram. If you then darken the shadows and brighten the highlights, you are starting with clear color and darkening it or fading it out; neither will create objectionable noise. If you darken 40% tones to 25 or 30%, you are working from more clearly seen color toward less clearly distinguishable color. The same is true if you brighten 70% tones to 80–90%. If you do the reverse and stretch the shadows or highlights toward the center, you are forcing pixels of less distinct hue to be more distinct, and that is when the discrepancies show up.
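A back-of-the-envelope way to compare these adjustments (my own arithmetic, treating each move as a rough multiplicative gain applied to the tone and its noise alike):

```python
def implied_gain(from_tone, to_tone):
    """Approximate multiplicative gain when moving a tone value."""
    return to_tone / from_tone

# The gentle moves away from the center described above:
darken_midtones  = implied_gain(0.40, 0.27)  # ~0.68x, discrepancies shrink
brighten_brights = implied_gain(0.70, 0.85)  # ~1.2x, only a mild stretch
# Versus stretching a deep shadow toward the center:
open_shadows     = implied_gain(0.10, 0.30)  # ~3x, discrepancies tripled

print(darken_midtones < 1 < open_shadows)  # True
```

Moving tones away from the center applies gains near or below 1, while hauling deep shadows toward the middle can triple whatever color error the sensor recorded.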
Arguably, you might get cleaner whites by exposing so that the histogram sits as far right as possible. Also, if you expose to the right, your darkest tones start out closer to the center, so the shadows will have less noise because the sensor was able to “see” the real color more accurately with the increased exposure. Because desaturated (highlight) noise is less evident than more saturated (shadow) noise, biasing the exposure toward the highlights will produce smoother results than biasing toward the shadows, but not because there are more bits of data on the right. It is a simple matter of saturation.
As a final note, if the light range is such that you will inevitably lose either highlights or shadows, it is better to lose the shadows. The reason is that we are comfortable with not being able to see into deep shadows, but glaring highlights force our eyes either to adjust to the brightness or to look away to avoid the discomfort. Because of this, small washed-out highlights in an image are acceptable, but large areas of blank white tend to look unnatural.