
Logic of HDR for photos with dark and bright areas


Hi, I wondered if someone could help with something that puzzles me about photographing with HDR (I’m just learning about DSLRs after getting a Canon 6D). I understand the concept of three shots, one underexposed, one overexposed, and the camera combining them. But to me, if you combine the two bright and dark extremes, you end up back with a middle average, like the middle exposure photo that had areas too dark and too bright.


In other words, if part of your scene is too bright, it seems to me that taking a second photo that corrects it a bit, but also a third where you make it worse, means the averaged merger takes you back to square one.


Or is it the case that, rather than overlaying the whole underexposed image onto the whole overexposed one as I’ve described, the camera is effectively taking the best of the three versions of the brighter area of the image and the best version of the darker area and combining them, sort of “stitching” them together?


In the field, the example I’ve had is photos of narrow canyons where the bottom of the canyon has no direct sunlight so to the eye the walls are brown and patterned, and higher up they are glowing orange cliffs thanks to reflected light, and then you see higher brighter cliffs towering up in the direct sun hundreds of feet higher.

However, that isn’t what you get when you take this photo. The bottom of the canyon comes out quite dark, without its brown colour, and the walls are almost black, while the top is bleached out and loses its lovely deep orange, which becomes very pale or even pure white.

If I expose correctly for the bottom to make it look nice, the problem is that the higher walls disappear in the highlight glare. But if I expose for the top of the cliffs, to reveal the full height of the canyon and keep their real colour, the bottom becomes just a silhouette or even pure black.

Now I’ve read about HDR’s three versions. But if I take the too-dark bottom and combine a second version that makes it better with a third version where it’s been made even worse, doesn’t it just average out somewhere back in the middle that I wasn’t happy with at the start? And the same for the areas that were a bit too bright before using HDR.

Or have I misunderstood the capability of HDR? Does it actually ignore the frame where the too-dark areas have been darkened further and the frame where the too-bright areas have been brightened further, and combine just the improved versions of the two extremes of the image?
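If it helps to see the "best of each area" idea in numbers, here is a toy sketch of Mertens-style exposure fusion, one common way HDR merging works. This is my own illustration with made-up pixel values, not what any particular camera does internally: each pixel is weighted by how well exposed it is in each frame, so clipped pixels get almost no say, rather than everything being plain-averaged.

```python
import numpy as np

# Three toy "exposures" of a two-pixel scene: a dark canyon floor and a
# bright cliff top (values are brightness on a 0..1 scale, made up here).
under = np.array([0.02, 0.70])   # underexposed: cliff top well exposed
mid   = np.array([0.10, 0.90])   # "correct": both ends are a compromise
over  = np.array([0.45, 1.00])   # overexposed: floor well exposed, top clipped

stack = np.stack([under, mid, over])

# Weight each pixel by how close it sits to mid-grey (0.5): well-exposed
# pixels dominate, nearly-black or clipped pixels contribute very little.
sigma = 0.2
weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
weights /= weights.sum(axis=0)          # normalise the weights per pixel

fused = (weights * stack).sum(axis=0)   # exposure-fusion result
plain_average = stack.mean(axis=0)      # the naive "back to square one" merge

# The fused floor pixel lands near the overexposed frame (where it was well
# exposed), and the fused top pixel near the underexposed frame -- not at
# the plain average of the three.
print(fused, plain_average)
```

So the merge really is closer to your "stitching" intuition than to averaging: each area of the final image is dominated by whichever frame exposed it best.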


Thanks, Alan, for showing all that - definitely a significant improvement on the horse in both examples, so worth doing.


Because I'm often taking photos on long hikes, sometimes involving wading or swimming, I usually won't have a tripod, so your one-shot example would be good for hand-held work.


But if I'm taking 3 or more shots, then like you said, if an object is moving there's bound to be ghosting. But if the subject is static and there is still the slight movement in my hands over the second or two of taking the photos, would that lead to ghosting of a still landscape - or is the processing clever enough to MERGE the 2 versions of an almost identical view into one, rather than ADD them, if that makes sense?

To be honest, I really don't know how well any of the HDR software packages will handle multiple images taken handheld, dealing with slight misalignments. The vast majority of my use of that software is as demonstrated above: a single image, multi-processed and then recombined into a single HDR image. I would use a tripod for any multi-shot HDR work.
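For what it's worth, the core of the alignment question is simple: before merging, HDR and panorama software first estimates how much each frame has drifted and shifts it back, so small hand movements get corrected rather than added as ghosts. Here is a toy numpy sketch of that idea with a one-dimensional "frame"; real tools (OpenCV's AlignMTB, Photoshop's auto-align) use much more robust versions of the same principle:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random(200)          # stand-in for one row of a landscape frame
shifted = np.roll(base, 3)      # second frame: hands drifted 3 pixels

# Try candidate shifts and keep the one that lines the frames up best
# (lowest total pixel difference).
candidates = range(-10, 11)
errors = [np.abs(np.roll(shifted, -s) - base).sum() for s in candidates]
best = list(candidates)[int(np.argmin(errors))]

aligned = np.roll(shifted, -best)   # undo the drift before merging
print(best)                          # prints 3: the drift was recovered
```

With the frames aligned first, the merge sees nearly identical pixels and blends them into one, instead of adding two offset copies of the scene - which is exactly where ghosting would otherwise come from.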


But I have done handheld, multi-shot panoramas using Photoshop's Photomerge. That does a pretty neat job correcting for any minor misalignments. The image below was a spur of the moment, three-shot, handheld panorama (the large blank space with my watermark is deliberate, the client wanted to use this as a banner ad and needed a space to overlay a headline).




The only problem I had was someone walked past from right to left while I was taking the shots, so they appeared three times in the final composite image. (I simply cloned out two of them.) The tonality of the sky, tree detail, and everything else lined up just fine automatically with Photomerge. To give you some perspective, the "bushes" in the lower LH and RH corners are actually a hedge running directly across in front of me as I took these shots, so this is close to a 180-degree angle of view. I did have to crop the image a bit to clean up the edges, particularly top and bottom, which always come out uneven with multi-shot panoramas. In the large version of the image, the writing on the arena gate is easily read.


While today's software handles multi-shot panoramas well, I'm not entirely sure about HDR. It probably can deal with it, since panorama and other multi-image software has come a long way. Gigapan technology takes the panorama to the extreme, using a computer-controlled head to take 250 or more images, which are then combined into a single, massive image (see George Lepp's website if interested). Something I want to experiment with is "focus stacking", where multiple images are combined to produce one final image with an extreme depth of field, far more than is attainable optically (see the HeliconSoft website if interested).


Alan Myers

San Jose, Calif., USA
"Walk softly and carry a big lens."
GEAR: 5DII, 7D(x2), 50D(x3), some other cameras, various lenses & accessories