Sunday, October 10, 2010

Dynamic Range versus Sensor Size

I’ve covered this before … but the issue continues to crop up. There are people on the forums who still think it is ok to use harsh tone curves in camera, and this always brings up the issue of tone curves and dynamic range. There are also people who feel that DPReview’s testing of a camera’s jpeg output is somehow more accurate than DXOMark’s testing of RAW images, which of course is what determines the camera’s potential dynamic range if shot correctly.

This particular article just documents some information I left in response to user arguros, who asked me to elaborate on my answer to cacreeks’ assertion that he uses the infamous “tunnel of doom” image to test DR. I suggested that it was useless for that purpose and arguros asked me to explain.

So first, let me show you one of these images. I will show it small here with a link back to the review. Let’s look at the F200EXR version of the image:

DSCF0030[1] Copyright © DCResource.com

Click on the image to get back to the F200EXR image gallery on dcresource.com …

Two things to note:

  • The sky is blue, which is extremely rare in the tunnel of doom images. Most cameras do not have sufficient dynamic range to cover the spread (so to speak) but the EXR cameras do. The pillar is still blown out though … we cannot expect miracles.
  • The shadows are nicely opened. Thus it is obvious that the meter exposed for the shadows and midtones and left the highlights to fend for themselves.

A better exposure would actually have saved the pillar too … but that would have driven the shadows (e.g. ceiling) way, way down.

Here is a version of the image from a camera with a really tiny sensor, the Panasonic ZS7.

P1000005[1] Copyright © DCResource.com

Again, click through to get to the ZS7 review’s gallery page.

More things to note:

  • The sky is not blue on this image. Not enough dynamic range to hold the highlights in the blue channel.
  • The shot is at a different time of day. The various tunnel of doom shots could never be used for objective comparison for that reason alone.
  • The pillar is even more blown out and so is the area of the inner wall where the sunlight touches it.

All this is happening because the images are not shot for dynamic range. The tunnel of doom is about fringing, not dynamic range. Shooting on auto and letting the meter do what it will is no way to determine what the camera can do in competent hands. But that is not what the shot is for, hence no foul on DCResource.

Now … back to what arguros actually asked me …

  1. What would you do if shooting RAW?
  2. What do you mean when you say that the extreme pixel size difference with a dSLR means that there will be something other than noise in the shadows when you use the technique discussed in (1)?
  3. What do you mean when you say “Of course ... some will still believe that shooting jpeg at 0ev or -.3ev with default tone settings is a good test of DR ... I can't really help that.”?

First, how would I shoot to preserve the highest dynamic range in RAW? My answer, verbatim:

If I were actually trying to capture that image as a memory or even something to frame, I would shoot in 14 bit RAW. I would set my exposure to ensure blue sky through the arches. This might require chimping (looking at the result on the LCD) and reshooting. Basically active bracketing.

One would also check the RGB histograms, because it is likely that the blue channel will be blowing out. Here is where UNI WB will make a difference, as the last thing you want is more underexposure of the shadows than you will get with a perfect exposure on the skies.

Once you have the capture, you load that into ACR and tweak the exposure and recovery dials to ensure that the sky is the way you want it. Then you open the shadows and set the correct black point.

That should do it. With a dSLR, the shadows will contain some detail with only low to moderate noise. Trying this with a tiny sensor will leave you with mostly noise in the darkest parts. Which is where the hardware assist of the Fuji EXR technology makes so much difference.
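For anyone who prefers to see that workflow as code, here is a minimal sketch of the same sequence of moves using rawpy and numpy rather than ACR … the file name and every “slider” value below are placeholders for illustration, not settings pulled from any actual shot. The point is only the order of operations: decode to linear data, set the exposure for the sky, roll off the near-blown highlights, open the shadows, and set a black point.

```python
# A rough sketch only: rawpy + numpy standing in for ACR, with a
# hypothetical file name and made-up "slider" values.
import numpy as np
import rawpy

with rawpy.imread("DSCF0030.RAF") as raw:
    # Demosaic to linear 16-bit RGB: no auto-brightening, no display gamma.
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                          output_bps=16, use_camera_wb=True)

img = rgb.astype(np.float64) / 65535.0        # linear data, 0..1

# "Exposure": a simple linear gain, here about +0.3 EV.
img *= 2 ** 0.3

# "Recovery": roll off anything the gain pushed toward clipping
# instead of chopping it, so the sky keeps some texture.
img = np.where(img > 0.85, 0.85 + 0.15 * np.tanh((img - 0.85) / 0.15), img)

# "Fill light": open only the dark end.
shadows = img < 0.25
img[shadows] += 0.35 * (0.25 - img[shadows])

# Black point: everything at or below this clips to pure black.
img = np.clip((img - 0.005) / (1 - 0.005), 0, 1)

# Encode with a display gamma so it looks right on screen.
out = (img ** (1 / 2.2) * 255).astype(np.uint8)
```

The same moves apply whatever tool you use … ACR just hides the arithmetic behind sliders.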

Second, about the pixel size difference from tiny pixels to huge pixels and what that means to dynamic range, again verbatim:

You must read this article to understand linear gamma of the sensor versus the logarithmic gamma of our eyes.

http://www.adobe.com/digitalimag/pdfs/linear_gamma.pdf

As you will note from that article, the signal to noise ratio is much higher when you collect more light. I.e. if you are shifted as far into the highlights as you can safely go, then you have collected as much data in the deep shadows (many stops down) as you can.

Here is where the linear gamma interacts with the well capacity of the sensor ... i.e. the number of photons that each pixel can capture in a given amount of time. This is related to the surface area of the pixels. It should be obvious that a really huge pixel will gather more light in a given (and very short) time than a really tiny pixel.

We know, for example, that a full frame 12mp sensor has a pixel density of 1.4 million pixels per square centimeter, whereas the 12mp sensor in the F200EXR has 25 million pixels in the same area. And since each tiny pixel is trying to capture exactly the same tone as the corresponding huge pixel covering the same field of view, we have a whole lot less chance of capturing the few photons we will see from the darkest areas.
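Those density figures are easy to verify with back-of-the-envelope arithmetic … the sensor dimensions below are nominal (36 x 24 mm for full frame, roughly 8 x 6 mm for the 1/1.6" class chip in the F200EXR), not measured values.

```python
# Back-of-the-envelope check, using nominal sensor sizes.
from math import log2

pixels = 12_000_000

ff_area_cm2  = 3.6 * 2.4      # full frame: 36 x 24 mm = 8.64 cm^2
exr_area_cm2 = 0.8 * 0.6      # 1/1.6" class: ~8 x 6 mm = 0.48 cm^2

print(pixels / ff_area_cm2 / 1e6)    # ~1.39 million pixels per cm^2
print(pixels / exr_area_cm2 / 1e6)   # 25 million pixels per cm^2

# Same pixel count on 1/18th of the area means each tiny pixel sees
# roughly 1/18th of the photons for the same exposure ...
ratio = ff_area_cm2 / exr_area_cm2
print(ratio, log2(ratio))            # 18x ... a bit over 4 stops of light
```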

So that exposure for the highlights, even if perfect (just about blown, but not) is going to leave the deep shadow part of the image starving for photons.

That's why the shadows can get so easily overwhelmed with noise. If you have ever underexposed an image dramatically with a small sensor shot in jpeg and then tried to look in the shadows, you will see exactly what I mean. It can be horrible ... but if you do that with a D700 shot at 14 bits, you would be surprised at how much clean detail is buried in there ...
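To put rough numbers on that, here is a small sketch of photon shot noise, where the signal to noise ratio is simply the square root of the number of photons collected … the full well capacities are assumptions, chosen only to be plausible orders of magnitude for a big dSLR pixel versus a tiny compact pixel.

```python
# Illustrative only: the full well capacities are assumed, plausible
# orders of magnitude for a big dSLR pixel and a tiny compact pixel.
from math import sqrt

def shot_noise_snr(full_well_electrons, stops_below_saturation):
    """Photon shot noise SNR is sqrt(N) for N collected photons."""
    n = full_well_electrons / (2 ** stops_below_saturation)
    return sqrt(n)

for stops in (0, 4, 8):
    big  = shot_noise_snr(60_000, stops)   # assumed big-pixel full well
    tiny = shot_noise_snr(3_500, stops)    # assumed tiny-pixel full well
    print(f"{stops} stops down: big pixel SNR ~{big:.0f}, tiny pixel SNR ~{tiny:.0f}")

# 8 stops below saturation the big pixel still has an SNR around 15,
# while the tiny pixel is down near 4 ... before read noise is even added.
```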

Finally, regarding my statement on shooting auto with little or no exposure compensation as a (non) useful test of dynamic range … again verbatim:

When you shoot a jpeg, you are throwing away quite a bit of dynamic range, unless you use a specialized tone curve. See this article for info on that issue ...

http://www.adobe.com/...ts/photoshop/pdfs/understanding_digitalrawcapture.pdf

So my point here was that the tunnel of doom shot is not optimized for the highlights at all ... he does not apply sufficient exposure compensation to save the highlights, shooting instead in auto mode with little or no exposure compensation.

So the images show nothing with respect to the camera's potential dynamic range. I.e. it is a useless set of images for that purpose.

To use those images for dynamic range comparison, one would have to shoot them for the highlights, hence every single image would have blue skies in it. Then, the shadows would have to be opened in post processing. The quality of the shadow detail would be very instructive and would define, at least subjectively, the amount of dynamic range available in the sensor / jpeg engine combination.

That is what I meant.

One final word ... if you have read those articles I linked, then you now understand why I consider it so goofy to suggest increasing the tone curve when capturing images at night ... the shadows are already deep and the technique is designed to crush them further out of existence. And the harsher tone curve also pushes the highlights further towards blowing out.

TANSTAAFL (translation: there ain't no such thing as a free lunch)
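To see the effect in numbers, here is a toy comparison of a normal tone curve against a harsher, contrastier one … both curves are invented for illustration, but the shape of the result is the point: the deep shadows get crushed to black and the highlights get pushed into clipping.

```python
# Toy curves only: a basic display gamma, then extra contrast around
# mid-grey to stand in for a "harsher" in-camera tone curve.
import numpy as np

def encode(linear, contrast=1.0):
    g = linear ** (1 / 2.2)                  # basic display gamma
    s = 0.5 + (g - 0.5) * contrast           # harsher curve = steeper slope
    return np.clip(np.round(s * 255), 0, 255).astype(int)

# Two deep shadow tones, a mid shadow, and two bright tones (linear, 0..1).
tones = np.array([0.004, 0.016, 0.0625, 0.70, 0.90])

print(encode(tones, contrast=1.0))   # roughly [ 21  39  72 217 243]
print(encode(tones, contrast=1.4))   # roughly [  0   3  50 253 255]
```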

And finally … one last thing. That thread was started by showing two images, one from the F200EXR and one from the F300EXR, and saying nothing other than that the F300EXR was good at DR (a given with EXR technology) and that the F200EXR *might* be as bad with CA if the framing had been the same … this leaves the reader to draw his or her own conclusions, which may or may not be the same as those of the original poster.

This is problematic on many levels, but the key issue is that people cannot easily disregard what they see in the frame, even if it is not present in the other frame. So let’s rectify that …

The F200EXR image above was shown below the following F300EXR image:

DSCF0048[1]

Copyright © DCResource.com

Things to note:

  1. The framing is much wider. This comes from the 24mm versus 28mm field of view.
  2. The shadows indicate that the image was shot at the same time of day as the F200EXR image, yet the inner wall, ceiling and area on the left are rendered much brighter. There is a lot of detail in the shadows, which indicates a brighter exposure.
  3. The skies are still blue, indicating very effective EXR technology despite the increased exposure.
  4. There is a large streak of fringing on the far right of the image … one presumes that this was the point the original poster was trying to make.

So … I like this image a lot more than the one from the F200EXR. It is warmer and more open. But what about that streak of CA? Well, it is pretty bad … that is for sure. But is it actually that much worse than the F200EXR? The answer is that we have no idea, because that part of the frame is missing from the F200EXR image … so a great big DUH goes to that thread.

Let’s look at the parts of the image that are actually common … I will create a matched pair of images that have the same framing by trimming the F300EXR image to match the field of view of the other.

When we do that, we get a rather different perspective on these two cameras … perhaps things are not what they seem at first glance …

F300EXR_vs_F200EXR_tunnel[1] Copyright © DCResource.com

It is not quite as obvious which one would be the better cam once the frames are equalized. I prefer the upper image by quite a bit.

There is one spot, and only one spot, that is pretty much identical in both images. That being the top piece of the second arch from the left … the part with blue sky. That’s as close as we can come to a fair fight. So let’s look at that spot magnified.

ca_f200_vs_f33[1]

Hmmm … I don’t think that anyone quite expected that … which is why variables must be equalized when you make such comparisons. Posting a pair of images without analyzing what they represent just wastes people’s time … and worse, it often misleads them …

8 comments:

Pierpaolo said...

Hi Kim,
Thanks for your answer, excellent and detailed as usual.
I still need to go through it with the appropriate level of attention though.
However, just let me raise a few points.

1) If I have two images with exactly the same exposure (same f-stop, real ISO, shutter speed, similar tone curve) and one camera can get a blue sky while the other can't, would that not be a sign of better dynamic range?

2) When you shoot in RAW, the in-camera histogram shows you the clipped highlights of the JPG version. With this in mind, and given the linear capture of the RAW image, could a clear sky not still be perfectly exposed in a RAW capture?
If this is true, I could still shoot for a "clear sky" in the jpg image, knowing that with RAW I will recover the "missing" channel.

arguros

Pierpaolo said...

I would like to clarify my second point.

If I were to shoot to maximize the dynamic range of that scene, I would do the following, shooting in RAW:
1) I would see what the proper exposure for the highlights would be.
2) Knowing that in RAW I can recover at least 1 to 2 stops of info in the highlights (depending on camera model), I would shoot the image with +1/+2 exposure compensation.
3) Bring the image into LR 3.0, recover the highlights and raise the shadows.

The logic behind this is the linear gamma capture curve of the RAW image. In RAW I'd rather err on the side of overexposing rather than underexposing, because I know the RAW image will be filled with data in the highlight part. Overexposing will also allow the shadows to gather more data and hence detail.
Please let me know your thoughts.
Please let me know your thoughts.

arguros

Kim Letkeman said...

arguros:

Regarding point 1. It's more complex than that. Meters are all biased slightly differently and sensors all have different actual sensitivity. Which is why identical exposures still give slightly different results. I prefer instead to shoot each camera to get the best highlights possible and then open the shadows. If you do that with a step wedge, you will see the maximum possible dynamic range as a result. But this is also very much dependent on the tone curve in use.

Regarding point 2: I have written in the past that I like to slightly overexpose and then recover highlights in ACR. This is risky, but it can maximize dynamic range. This is, in fact, the whole point of shooting with UNI WB and flat tone curves in RAW. The histogram shows accurately what is in the well, versus the usual jpeg tone curve which shows that you are blown out when you are not. It is this typical error in the jpeg tone curve that allows people to choose to shoot a bit overexposed, because the overexposure is in fact false.

So yes, if you shoot RAW and leave a normal tone curve in the camera, then you can usually safely over expose. If, on the other hand, you load a flat tone curve and shoot UNI WB, then you cannot. Because an over exposure is real.
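To put toy numbers on that false clipping, here is a tiny sketch … the gain and gamma below are assumed values, but they show how a raw channel with real headroom can still read as blown on the jpeg-based histogram.

```python
# Toy numbers: a raw red value at ~80% of saturation, an assumed daylight
# red multiplier, and a simple gamma standing in for the jpeg tone curve.
raw_value   = 0.80    # fraction of full well ... NOT clipped in the raw file
wb_red_gain = 2.0     # assumed daylight red multiplier

jpeg_level = min((raw_value * wb_red_gain) ** (1 / 2.2), 1.0)
print(round(jpeg_level * 255))     # 255 ... the histogram says "blown"

uniwb_level = min((raw_value * 1.0) ** (1 / 2.2), 1.0)
print(round(uniwb_level * 255))    # ~230 ... with UNI WB the headroom shows
```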

I hope that helps a bit.

Unknown said...

That was a good blog post.

The only thing that would have improved it for me is if I had a nice lunch in front of me...I like to read good material over lunch, say, a good Mexican, Indian, Pakistani lunch, perhaps Korean, Thai or Vietnamese lunch, or a good ol' guacamole burger.

I was trying to get my head around UniWB recently. Just when I thought I got it, I didn't get it. My issue had to do with how to set it...I must be missing one critical conceptual point like I did back in Organic Chemistry.

Once again, the Adobe articles show their utility.

Very nice job.

Kim Letkeman said...

Dotbalm ... thanks, but they are *all* good blog posts :-)

Regarding UNI WB ... with the D300 and D700 there are 4 custom white balance positions, and each one can be set from an image loaded into the camera. So I looked around for UNI WB images (the perfect color balance to set multipliers around 1,1,1) and loaded them into the camera, then set the CWB-4 position to that white balance. This turns every RAW image green because there are twice the number of green pixels as red or blue. The final step is to flatten the tone curve ... selecting a neutral curve and adjusting contrast fully negative helps, but loading a perfectly flat curve into the cam makes it work even better. Toneup is one application that is cheap and can load curves into the camera.

I hope that is useful.

Unknown said...

Heh, that's right!

Thanks very much for the tip.

I first heard of this in a Nikon context, maybe from you. I googled around for how to do this with my 7D recently and didn't "get" it. With this as a basis I'll keep plugging away...

Kim Letkeman said...

Found a great article on UNI WB with a set of downloads for all Canon cameras, including the 7D.

http://www.guillermoluijk.com/tutorial/uniwb/index_en.htm

Unknown said...

Thanks very much for taking the time to evaluate, Kim!

I found that one last week...the author and article seem to be highly regarded (but it is long).

However, this lends more credence to the idea that I should work through the article. Time to get going.

Thanks again.