Thursday, February 18, 2010

Right Idea, Wrong Reason

I've decided to throw in a technical/instructional post occasionally. This is the first. I thought of doing one every week or two, but I don't want to tie myself to a schedule, so they will appear at random, whenever I feel inspired to expound on some topic.

Several times in recent years I have encountered a theory of digital exposure that I could never quite comprehend, a philosophy of “exposing to the right”. According to this theory, in an image spanning several f/stops fully half of all the data (1024 bits on this chart) in a digital image resides in the brightest f/stop, half of the remaining data in the next lowest f/stop and so on until you reach the lowest f/stop which (according to the theory) contains only 8 bits of data. The theory holds that by exposing to the right you capture more of the available data. Proponents place a scale at the bottom of a histogram, similar to this one, to illustrate the idea.
I suspect the idea derives from the fact that each f/stop represents double or half its adjacent setting, and in terms of brightness levels it is absolutely correct that the brightness of an image declines geometrically as the f/stop is decreased. In the Zone System, Zone 10 is twice as bright as Zone 9, and so on. It is a fallacy, however, to equate digital data with brightness. If you were to shoot two uncompressed images, one exposed toward the left of the histogram and another toward the right, the two files would contain exactly the same amount of data; it would simply be distributed differently in the histogram. If you expose so that no pixels fall in the brightest segment of the histogram shown above, you are not “losing half the data”. As long as the curve drops to the baseline on both sides of the histogram (as it does in the illustration above) you have lost nothing. You lose data only if the curve ends against a side rather than at the baseline (RAW is somewhat of an exception, but that is another discussion).
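
A quick sketch in Python (using numpy, with made-up pixel values rather than any real photograph) makes the point concrete: a frame exposed to the left and a frame exposed to the right take up exactly the same number of bytes; only the distribution across the levels changes.

```python
# Sketch with invented values: two uncompressed 8-bit frames, one "exposed
# left" and one "exposed right", contain exactly the same amount of data.
import numpy as np

rng = np.random.default_rng(0)

dark   = np.clip(rng.normal(40,  15, (100, 100)), 0, 255).astype(np.uint8)
bright = np.clip(rng.normal(210, 15, (100, 100)), 0, 255).astype(np.uint8)

# Same pixel count, same bits per pixel -> same amount of data.
print(dark.nbytes, bright.nbytes)   # 10000 10000

# Only the distribution across the 256 levels differs.
print(np.histogram(dark,   bins=256, range=(0, 256))[0].argmax())   # peak near 40
print(np.histogram(bright, bins=256, range=(0, 256))[0].argmax())   # peak near 210
```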

What the histogram represents is the distribution of pixels across the 256 available levels of brightness, and each pixel consists of exactly the same number of data bits as every other pixel: 8 bits per channel or 16 bits per channel, no more, no less. In an 8-bit image, the pixels at 0 (pure black), at 255 (pure white), and at every level in between each contain 8 bits of red data, 8 bits of green data and 8 bits of blue data, for a total of 24 bits. Likewise, each pixel of a 16-bit image contains 16 bits per channel, for a total of 48 bits per pixel. In practice, cameras today use only 12 or 14 bits of the available 16, but the file is still written in 16-bit format. No pixel is written with 1024 bits. The only pixels that are described by 8 bits in total are those in an 8-bit single-channel black & white image, and there all pixels consist of 8 bits regardless of where they fall on the histogram.
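
To illustrate (a toy example with invented pixel values, not the file format of any particular camera), every pixel of an 8-bit RGB image occupies 24 bits whether it is black, white, or anything in between, and the histogram is nothing more than a count of pixels at each of the 256 levels:

```python
# Toy example: fixed bits per pixel, histogram as a per-level pixel count.
import numpy as np

rgb = np.zeros((4, 4, 3), dtype=np.uint8)   # tiny 8-bit-per-channel image
rgb[0, 0] = (0, 0, 0)          # pure black  -> still 3 bytes (24 bits)
rgb[0, 1] = (255, 255, 255)    # pure white  -> still 3 bytes (24 bits)
rgb[0, 2] = (128, 64, 200)     # a midtone   -> still 3 bytes (24 bits)

print(rgb.itemsize * rgb.shape[2] * 8, "bits per pixel")   # 24, for every pixel

# The red-channel histogram: how many pixels sit at each of the 256 levels.
hist_red = np.bincount(rgb[..., 0].ravel(), minlength=256)
print(hist_red.sum() == rgb.shape[0] * rgb.shape[1])       # True: each pixel counted once
```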

So why all the fuss about “exposing to the right” if there isn't more data there? Because digital photographers have discovered that if you stretch the tones of the shadow areas toward the right to “open them up” you often get confetti-colored “noise”, but if you darken bright pixels in the highlights you don’t appear to get noise. The “half the data is in the brightest f/stop” theory is an attempt to explain that phenomenon, but as the above shows, it has nothing to do with the amount of data. So why is it true that opening up shadows produces noise but darkening highlights to retrieve detail does not? It is a matter of perception.

We’ve all seen noisy shadow areas, so I won’t trouble myself to demonstrate that, but we should look at the other end of the brightness scale as a way of understanding what is really going on when we “expose to the right”. I'll start with this snow scene photo.
It looks nearly monochromatic, but I took a sample from the bright snow, the dotted rectangle near the center, and enlarged it 600% to see what colors were present. It still looks mostly monochromatic, but when I increase the saturation by 100% it is suddenly apparent that there is a lot of color noise. It just isn’t evident because the saturation is so low.
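
You can simulate the same experiment numerically (the numbers below are invented, not sampled from the actual snow photo): a bright, nearly neutral patch carries color noise that only becomes obvious once the saturation is pushed up.

```python
# Sketch with invented values: low-saturation color noise in a bright patch
# becomes visible when the saturation is boosted.
import numpy as np

rng = np.random.default_rng(1)

# Bright "snow" patch: grey around 235 plus a little independent noise per channel.
patch = np.clip(235 + rng.normal(0, 4, (50, 50, 3)), 0, 255)

def mean_saturation(img):
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    return np.mean((mx - mn) / np.where(mx == 0, 1, mx))

print(round(mean_saturation(patch), 3))     # tiny, roughly 0.03: looks monochrome

# Double each channel's distance from the pixel's own grey value
# (a crude "+100% saturation" move) and the color noise shows itself.
grey = patch.mean(axis=-1, keepdims=True)
boosted = np.clip(grey + (patch - grey) * 2.0, 0, 255)
print(round(mean_saturation(boosted), 3))   # roughly doubled
```
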
The real reason that shadow areas develop noticeable noise and highlights do not is a matter of saturation. Both have noise, because digital sensors don’t see color. They see only tones of grey through the red, green and blue color filters placed over each photosite on the sensor. The software then generates a color for each pixel by comparing the amount of light reaching each photosite relative to its neighbors in a Bayer array. The exception to this is the Foveon system, which stacks a separate sensor layer for each channel at every photosite.
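
Here is a very rough sketch of that idea (nothing like a real camera pipeline, and the raw values are invented): each photosite records a single grey value behind a red, green or blue filter, and full-color pixels are generated by borrowing the missing channels from neighbors in the 2×2 RGGB block.

```python
# Crude block-average "demosaic" of an invented RGGB mosaic of grey values.
import numpy as np

raw = np.array([[200,  90, 210,  95],
                [ 85,  60,  88,  62],
                [198,  92, 205,  93],
                [ 87,  61,  90,  59]], dtype=float)

rgb = np.zeros((4, 4, 3))
for y in range(0, 4, 2):
    for x in range(0, 4, 2):
        r = raw[y, x]                              # red-filtered site
        g = (raw[y, x + 1] + raw[y + 1, x]) / 2    # the two green-filtered sites
        b = raw[y + 1, x + 1]                      # blue-filtered site
        rgb[y:y + 2, x:x + 2] = (r, g, b)          # every pixel in the block gets all three

print(rgb[0, 0])   # this "color" was generated from grey neighbors, not measured
```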

The visual difference between color noise in highlights and in shadows is due to saturation. It has long been the practice of film photographers to deliberately underexpose by a half stop in order to increase saturation. Shadows show noise more than highlights for exactly the same reason. Saturation increases as you go left on the histogram and decreases toward the right, but color is most clearly distinguishable in the middle.
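
A two-line sketch (made-up numbers, with saturation computed the simple (max − min)/max way) shows why: the same absolute channel imbalance reads as far more saturated in a dark pixel than in a bright one.

```python
# Same +/-5 channel imbalance, very different saturation.
def saturation(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

print(saturation(30, 20, 25))      # dark pixel   -> about 0.33
print(saturation(230, 220, 225))   # bright pixel -> about 0.04
```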

Even to our eyes, colors are most easily distinguished from one another in average light. In dim light colors become murky, and in really bright light they wash out. If I were to take you into a dimly lit room and ask you what color the armchair in a particularly dark corner was, you would have difficulty accurately describing the color. If I did that with a group of people, I would probably get different opinions from each person. Those discrepancies would be “noise”.

Digital sensors are no different, and in their attempt to distinguish color, individual pixels generate variances in color (noise) as the color becomes harder to distinguish. When you brighten the shadows of a digital image you are asking the computer to do the same thing I asked of you in the darkened room: to describe a color there was too little light to see accurately, and in brightening it the discrepancies become more apparent. The same goes if I asked you to tell me the color of something the sunlight was glaring off of. You (or the camera) can’t accurately determine color under those conditions, but since the saturation at the bright end is so much lower, the noise in highlights is not as apparent.

So should you expose to the right of the histogram? In their user manuals, most if not all camera manufacturers recommend centering the histogram. If you then darken the shadows and brighten the highlights you are starting with clear color and darkening it or fading it out; neither will create objectionable noise. If you darken 40% tones to 25 or 30% you are working from more clearly seen color toward less clearly distinguishable color. The same is true if you brighten 70% tones to 80-90%. If you do the reverse, stretching the shadows or highlights toward the center, you are forcing pixels of less distinct hue to become more distinct, and that is when the discrepancies show up.
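
A small sketch of those two moves (with invented tone values, and simple linear scaling standing in for a levels or curves adjustment) shows the asymmetry: stretching shadow tones outward multiplies the pixel-to-pixel differences, while squeezing highlight tones downward compresses them.

```python
# Sketch: stretching shadows amplifies their spread; compressing highlights reduces it.
import numpy as np

shadows    = np.array([18, 20, 22, 19, 21], dtype=float)      # noisy dark pixels
highlights = np.array([238, 240, 242, 239, 241], dtype=float)

opened    = shadows * 2.0                      # "open up" the shadows toward the midtones
recovered = 180 + (highlights - 240) * 0.5     # pull the highlights down and squeeze them

print(shadows.std(), opened.std())             # spread doubles: noise is amplified
print(highlights.std(), recovered.std())       # spread halves: noise is hidden further
```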

Arguably, you might get cleaner whites by exposing so that the histogram is as far right as possible. Also, if you expose to the right, your darkest tones start out closer to the center, and the shadows will have less noise because the sensor was able to “see” the real color more accurately with the increased exposure. Because desaturated (highlight) noise is less evident than more saturated (shadow) noise, biasing the exposure toward the highlights will produce smoother results than biasing toward the shadows, but not because there are more bits of data on the right. It is a simple matter of saturation.
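
Under a toy model (a linear sensor with a fixed amount of noise, invented numbers, and nothing else going on), that benefit looks like this: give a dark patch one extra stop of exposure, darken the frame back down in post, and the shadow noise shrinks relative to the signal.

```python
# Toy model: same sensor noise, one frame given +1 stop and darkened afterwards.
import numpy as np

rng = np.random.default_rng(2)
true_shadow = 20.0     # the "real" value of a dark patch
sensor_noise = 4.0     # same noise either way

normal = true_shadow       + rng.normal(0, sensor_noise, 10_000)
ettr   = true_shadow * 2.0 + rng.normal(0, sensor_noise, 10_000)   # +1 stop exposure

normal_out = normal
ettr_out   = ettr / 2.0    # darken the brighter frame back to the same output level

print(round(normal_out.std(), 2))   # about 4.0
print(round(ettr_out.std(), 2))     # about 2.0: cleaner shadows after darkening
```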

As a final note, if the light range is such that you will inevitably lose either highlights or shadows, it is better to lose the shadows. The reason is that we are comfortable with not being able to see into deep shadows, but glaring highlights either force our eyes to adjust to the brightness or to look away to avoid the discomfort. Because of this, small washed-out highlights in an image are okay, but large areas of blank white tend to look unnatural.

6 comments:

  1. "According to this theory, in an image spanning several f/stops fully half of all the data in a digital image resides in the brightest f/stop, half of the remaining data in the next lowest f/stop and so on until you reach the lowest f/stop which (according to the theory) contains only 8 bits of data."

    Ummm, this is not a theory; that is how a digital sensor actually works. From there your assumptions are wrong.

  2. I'm not making assumptions, Vladimir. What I'm saying is based on how digital image files are constructed.

    Data is recorded on a pixel-by-pixel basis. Each pixel contains exactly the same amount of data regardless of the tonal value it is recording. It takes just as many bits of data to describe total black as it does total white and every tone in between.

    Don't confuse the relative brightness of the light on the sensor with the data that records it. In brightness terms an f/stop is twice as bright as the next smaller one, and so forth, which does give you a geometric progression of relative brightness. But when the brightness levels are recorded for reconstruction on the computer screen or for printing, the file structure limits each pixel to 8 or 16 bits (1s and 0s) per channel to describe that particular pixel.

    What the proponents of exposing to the right are suggesting is that there is additional data in the brightest pixels that can be extracted in post-exposure processing. There isn't, because each pixel can describe only one tone and hue. If you change a pixel to another tone/hue combination it still describes only one tone/hue because that is how digital files are constructed. If you stretch the highlights downward to get more tones in those areas you are altering the data, not pulling out additional or compressed data. The "data" that creates those additional in between tones is being interpolated by the editing software. It isn't finding data that was somehow compressed into the brightest pixels.

    In an analog world the theory probably has some basis, but when they set the file structure for digital they needed regular patterns. Film has the advantage of not being bound by the same rigidity as digital. It has a toe and a shoulder for exposure: at the toe the exposure curve rises slowly through the dark tones until it reaches a straight line through the middle tones, then curves back slowly through the highlights until the film is fully exposed. Digital is linear. There is no toe and there is no shoulder. It sees light or it doesn't. When it does see light, each pixel can record the relative brightness of the light falling on it, but it has only 256 tones to choose from and only 8 bits of data with which to describe that tone, no more and no less.
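
    A toy numerical sketch of that difference (the film curve here is just an arbitrary S-shape chosen for illustration, not measured from any real film):

```python
# Linear digital response versus a film-like curve with a toe and a shoulder.
import numpy as np

exposure = np.linspace(0.0, 1.0, 9)             # relative exposure, 0..1

digital = exposure                              # linear: no toe, no shoulder
film = 1 / (1 + np.exp(-8 * (exposure - 0.5)))  # gentle toe and shoulder (logistic)
film = (film - film.min()) / (film.max() - film.min())

for e, d, f in zip(exposure, digital, film):
    print(f"exposure {e:.2f}  digital {d:.2f}  film {f:.2f}")
```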

  3. You are equating "data" with "pixels" here. But it is about brightness levels. As per the image of the histogram above in the article, the sensor will actually capture 1024 levels and then this data will get compressed into the normalized histogram. Accordingly, the 8 levels in the shadows will get stretched. So more data in terms of levels in the highlights.
    Also there is actually less noise in the highlights. Just look up SNR curve. And it is exactly the inverse for film, so the comparison there doesn't work.

  4. Vladimir,

    The original proposition I was testing was that the brightest pixels contained half of all the "data" in the file, and the conclusion drawn from it that the noise in dark areas was the result of less data in the darkest pixels.

    It is true that the brighter areas recorded brighter light, but as you note they get compressed; that is to say, the range of tones that the sensor saw is translated into a shorter range of brightness. If you are shooting RAW you can reinterpret the original data in subsequent conversions, but each conversion will result in 256 tones, because that is all the file can hold (it's a 256-tone box and you can't put 1024 tones in a 256-tone box) and it is all your monitor can display. Paper has an even narrower brightness range, as any darkroom worker knows from experience, although tone in film is not a matter of discrete steps but a continuum.
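
    A tiny sketch of that "256-tone box" point (assuming, for illustration, a simple linear mapping of 1024 raw levels down to 8-bit output):

```python
# However many levels the capture distinguishes, only 256 survive in 8-bit output.
raw_levels = range(1024)
output = [lvl * 256 // 1024 for lvl in raw_levels]   # simple linear requantization

print(len(set(output)))                  # 256 distinct tones remain
print(output[0:4], output[1020:1024])    # four raw levels collapse into each output tone
```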

    I think we may be talking at odds here because we are looking at different aspects of the process. You are looking at exposure and I'm looking at the resulting files and what is actually available for editing and printing.

    My point is that the only actual data we have for editing contains 256 discrete tones in even steps (a linear 'curve'), and any edits we make to alter tones are not the result of extra data somehow squeezed into some pixels while less data was put into others. They are the result of arbitrarily changing the data that is available. My second point is that there then has to be another reason that highlights have less apparent noise, and based on my experiments the reason is that the noise that does exist in highlight areas is less saturated in terms of hue. It can't be due to there being more data in the brighter tones. I think my demonstration image where I raised the saturation shows pretty conclusively that there is in fact noise in the brightest pixels; it is simply less saturated and therefore less evident. YMMV

  5. May I add a few (belated?) observations...

    Jim wrote above: "It takes just as many bits of data to describe total black as it does total white and every tone in between." This doesn't sound right. You can describe black with just one bit: 0. It doesn't get blacker than that. However, describing white with just one bit (set to 1) will produce a pure black and white image with no shades of gray. To do better than that, we use more bits.

    We can describe white (and shades of gray) by as many bits as we can capture and process. So, in 14-bit processing, the darkest pixel will be 0 and the brightest will be 16,383 levels higher, for a total range of 16,384 levels. The more bits you can work with, the more differentiation of shades of gray you can obtain.

    Thus, if you had only one bit per pixel, you would have either pure blacks or pure whites. Not that pleasing, but you could do it. If you introduce noise in this model, pixels that are really black (0) would be recorded as white (1). This would be very apparent noise.

    This leads to the incorrect conclusion: "What the proponents of exposing to the right are suggesting is that there is additional data in the brightest pixels that can be extracted in post-exposure processing. There isn't [...]"

    But there is. Instead of dealing with just blacks and whites (in the one-bit pixel model), you have many more shades of gray (at that level of brightness) to deal with. This means that noise has an effect, but is much less apparent.

    I don't see that it has anything to do with saturation. It's all about the amount of good data (signal) compared to the amount of bad data (noise).

    I'm an old transparency photographer and still tend to underexpose :-(. Perhaps, this also means that the blob-meter in my DSLR that doesn't compare with the wonderful spot-meter in my old SLR isn't that important, because I should be measuring via the histogram instead. Which is what I'm doing anyways... However, the argument for first exposing to the right (which may even be overexposure) and then adjusting to suit the subject (possibly darkening it) is convincing if you want to get better gradation in the shadows, instead of just, say, Really Really Black and nothing else.

  6. Hi Jake,

    While it is true that you can describe black with one bit and white with one bit, that only gives you black or white, no greys. To get greys we use 8-bit or 16-bit files; that is to say, there are 8 or 16 bits available to describe the tone of each pixel. The way our computers are designed, they can display 256 discrete tones and no more. The histogram is a bar graph of how many pixels fall in each of the 256 available tones, and each pixel can be only one tone and hue.

    If a pixel is totally black it is still coded with 8 or 16 bits (24 or 48 if it is color) despite the fact that theoretically it could be described with one bit. You can't have a file in which the number of bits describing each pixel varies; the computer couldn't read a file that had different numbers of ones and zeros for different pixels. Each pixel is a 'data box'. They all have to contain the same amount of data, and consequently you can't put more data in bright pixels than in dark ones.

    As for saturation: if you envision color as a spherical solid, there is a core that consists of a black-to-white gradation. Around the 'equator' of that sphere are the maximally saturated hues. As you move to latitudes nearer the poles, saturation decreases, since the surface is getting closer to the core. If your starting point is near the white pole and you move toward a darker tone (as you are doing when you expose to the right and then pull the middle tones downward), you are increasing the saturation. Likewise if you expose toward the left and then brighten the darker pixels.

    I grant that there is a greater signal-to-noise effect in the dark areas, but that is not what the theory I was looking at is saying. What the chart says is that there is MORE data in the bright areas, and that is simply untrue because of the way an image file is constructed.
