Greetings. Here are a couple of things I've puzzled over at the intersection of what I understand about computer graphics and what I understand about digital photography:
Dynamic Range - Most of our computer displays are 8-bit (there are newer 10-bit panels - confusingly also called 30-bit displays - but they're probably not in common use). An 8-bit display can show 256 shades of gray from black to white, which I'd think works out to about 8 stops of dynamic range (2^8 = 256, i.e. 8 doublings). Doesn't that mean we can only see 8 stops of DR on our screens, no matter what the DR of the captured data is? On a similar note, I've not seen any processing software that tries to show more than the display's 8-bit range (it would have to show, for instance, two versions - the lower part of an expanded range and the upper part. Agreed, not a very satisfying way to show more than 8 stops of DR).
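To put my arithmetic in code form (a toy sketch - it assumes a strictly linear encoding, which real displays don't use because of gamma):

```python
import math

# Levels available in an n-bit linear encoding, and the stops they span
# (each "stop" = one doubling of light). This is just the arithmetic in
# my question above -- it ignores gamma, which changes the picture a lot.
for bits in (8, 10, 14):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels = {math.log2(levels):.0f} stops (if linear)")
```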
How does the higher DR of our newer cameras express itself within the limited DR of the computer screen? Or does it?
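One partial answer I can imagine: raw converters tone-map the capture into the display's range with a nonlinear curve, so shadow detail many stops down still lands on distinct 8-bit codes. A toy sketch (the plain gamma curve here is my hypothetical stand-in - real converters use far fancier curves):

```python
def encode(linear, gamma=2.2):
    """Map a linear scene value in [0, 1] to an 8-bit display code
    via a simple gamma-style curve (a stand-in for real tone mapping)."""
    return round(255 * linear ** (1 / gamma))

# Two scene values one stop apart, deep in the shadows of a ~14-stop capture:
a = encode(1 / 2**13)   # ~13 stops below clipping
b = encode(1 / 2**12)   # ~12 stops below clipping
print(a, b)  # prints: 4 6 -- tiny, but still distinct 8-bit codes
```

So the extra camera DR doesn't show up as extra display DR; it shows up as separable tones that a linear 8-bit encoding would have crushed together at 0.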
Resolution - Similar to the dynamic range question: unless you're looking at fat bits, or at least at 100%, how can the high-resolution images from our cameras display a difference between, say, a 2-3 MP crop and a 36 MP full-frame capture? I admit that when I look at an image and the IQ seems low, I wonder whether it's a heavy crop (leaving only 3, 4, 5 MP). But how can that really express itself on a screen of little more than 2 MP (let alone a 250MB jpeg)?
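Putting rough numbers on the screen side of this (assuming a 1920x1080 display - my assumption, just arithmetic):

```python
# How many capture pixels feed each screen pixel when an image is
# fit to a ~2 MP display? The 36 MP file gets averaged down heavily
# (which also smooths noise), while a 3 MP crop is shown near 1:1,
# so its flaws survive the resize almost untouched.
SCREEN_MP = 1920 * 1080 / 1e6   # ~2.07 MP

for name, mp in [("36 MP full frame", 36.0), ("3 MP crop", 3.0)]:
    ratio = mp / SCREEN_MP
    print(f"{name}: ~{ratio:.1f} capture pixels per screen pixel")
```

If that's right, the visible difference isn't extra detail on screen; it's how much downsampling (and noise averaging) each file gets before display.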
Is it all just a mass hallucination, and we are really just seeing 8 stops DR and 2 MP resolution, no matter the source?
Thanks for thinking on this...
Cheers,
-Michael-