
Thread: ETTR & Perception of Whites

  1. #1
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default ETTR & Perception of Whites

    Greetings. I've long been unhappy with the concept of ETTR but ever more so with ETTR and whites. Here is a grayscale image I concocted:

    Attachment 119771

Here you have three grayscales differing in resolution: 256, 64, and 32 shades of gray (maybe one less ;-). In any event, I think it useful in showing that one's visual impression differs from white to black depending on where one is in the range and how much difference there is between bands. Here the 3 scales differ in contrast; that is, the bands differ in gray value by 1, 4, and 8 respectively (in the grayscale range of 0 to 255).
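
For anyone who wants to reproduce a similar test image, here is a minimal sketch in Python (assuming NumPy and Pillow are available; the panel height and band width are illustrative guesses, not the exact values used for the attachment):

Code:
# Sketch: build three grayscale panels whose neighboring bands differ
# by 1, 4, and 8 levels respectively (8-bit range, 0-255).
import numpy as np
from PIL import Image

height = 256          # pixels per panel (assumed)
band_width = 8        # pixels per band (assumed)

def ramp(step):
    """One panel: bands stepping up from 0 toward 255 in increments of `step`."""
    values = np.arange(0, 256, step, dtype=np.uint8)   # 256, 64, or 32 bands
    row = np.repeat(values, band_width)                # widen each band
    return np.tile(row, (height, 1))                   # stack rows into a panel

panels = [ramp(s) for s in (1, 4, 8)]
gap = np.full((height, 16), 255, dtype=np.uint8)       # white separator
strip = np.hstack([panels[0], gap, panels[1], gap, panels[2]])
Image.fromarray(strip, mode="L").save("grayscale_bands.png")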

Take a close look at this image (download it, zoom in, scroll around). The banding may not be obvious in the left part of the image, but it is there, and if you zoom in you will see it (particularly if you move the image around a bit). The thing to notice is how hard it is to see the banding in the whiter part of the image. This is true for all three grayscales... detail in the whitest whites is hardest to see (and in the blackest blacks as well).

The other thing you might notice is that in the rightmost scale the bands don't look a constant gray within each band. Each band looks darker against its lighter neighbor and lighter against its darker neighbor. I haven't studied this edge effect all that much, but it seems to me that the wider the band, the greater the edge effect. I have noticed that sometimes I perceive a halo that isn't there (not there by the color values) because of this edge effect.

For white feather detail, it seems to me that:
- contrast, that is very localized contrast, sometimes referred to as "micro-contrast", rules much more than exposing to the right
- exposing to the right actually pushes the whites into a part of the grayscale where differences are harder to perceive
- feather spacing, light, and interstitial shadows play a greater role than ETTR

    Anyway some food for thought.

One other thing: color (off-white) helps by adding color contrast to grayscale contrast, promoting perceived detail in "whites".

    Cheers,

    -Michael-

  2. #2
    BPN Viewer Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,116
    Threads
    33
    Thank You Posts

    Default ETTR & Perception of Whites

Thanks, Michael, for your work, thinking, and summary of your grayscale image.

    I presume you've seen this recent BPN posting - http://www.birdphotographers.net/for...tical-illusion
    Seeing is believing???? Nope, we "see" what we expect to see.

    "...rightmost scale the bands don't look a constant gray in each band...."
I agree with you, but Photoshop does not, as I'm sure you know since you made the image. If anyone doubts it, take the image into PS and measure across the bands. Then crop the image so you see only one band; now that isolated band looks a constant gray.
Also, flip the image top to bottom. Now the bands flip, darker at the top of each band.

Also, how I have my monitor set up (the so-called Brightness and Contrast) affects when the light and dark bands merge on my screen.

    Tom

  3. #3
    Forum Participant
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,588
    Threads
    643
    Thank You Posts

    Default

    Michael- Interesting. The edge effect seen at the right is a well known optical illusion.

Regarding ETTR, I think you are missing a key part of the equation: after you have exposed to the right, you need to reduce the exposure in post-processing. If you don't do this your image will look over-exposed and washed out, and more than likely the detail in the whites, although there, will be harder to see, as you point out. Therefore ETTR is a "package deal" and, to work, has to include the final step of adjusting exposure in post.

  4. #4
    BPN Viewer
    Join Date
    Nov 2009
    Location
    Thailand
    Posts
    110
    Threads
    8
    Thank You Posts

    Default

    Incidentally, the basic reasoning behind ETTR is that the system reports better gradation in the white regions.
An identical image exposed to the left (ETTL) would be like shooting a photo with 16 shades of grey, whilst when exposed to the right it will have, say, 256 shades of grey.
Put them both into Lightroom and press Auto-Contrast-and-Brightness and they will come out identical, except that the ETTR version will show more subtle changes in brightness.
    At least that's how I understand it (and note I am paraphrasing).
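
For what it's worth, the usual form of that argument is that a linear sensor devotes half of its raw levels to the brightest stop, a quarter to the next, and so on, which is why shifting exposure to the right keeps more levels per stop. A back-of-the-envelope sketch (assuming a 12-bit raw file purely for the arithmetic):

Code:
# Sketch: how many raw levels a linear 12-bit file assigns to each stop below clipping.
levels = 2 ** 12                       # 4096 levels in a 12-bit raw (assumed)
for stop in range(1, 7):
    top = levels // (2 ** (stop - 1))  # upper bound of this stop
    bottom = levels // (2 ** stop)     # lower bound of this stop
    print(f"stop {stop} below clipping: {top - bottom} levels")
# stop 1: 2048 levels, stop 2: 1024, stop 3: 512 ... stop 6: 64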


    Following from the OP, I understand that it is through utilisation of this edge effect that Lightroom provides a clarity slider - essentially it adjusts brightness near an edge to give a better perception of that edge. If done subtly enough it can really help bring up the fine details.



    [Edit: changed idea that sensor has better gradation to system reports better gradation before I get too many objections. Not sure if I'm right, anyway]
    Last edited by Graeme Sheppard; 10-19-2012 at 06:08 AM.

  5. #5
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Hi Graeme,
You are essentially correct. ETTR maximizes the signal-to-noise ratio for that particular ISO. The sensor response is linear, so a better signal-to-noise ratio will help define subtle gradations. The problem is the tone curve that gets applied: the standard tone curve compresses the high end. Note that in Photoshop ACR, if you say linear, that really means no adjustments to the standard tone curve; the tone curve is still applied. If one outputs 16-bit data, little is lost (probably one could never see the loss, though it can be demonstrated numerically). If one works in 8-bit, there will be limitations to fine gradients in the high end and, in processing, posterization could become evident in some cases. A solution to this problem, if encountered, is to do a raw conversion with true linear output and merge the highlights from the true linear conversion with a standard conversion. But 16-bit output with the standard curve is a simpler solution.
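
To make the quantization point concrete, here is a rough sketch (my own illustration; a simple gamma curve stands in for a real camera tone curve, and the numbers are only indicative):

Code:
# Sketch: apply a gamma-style curve to linear 14-bit data, then count how many
# distinct output codes the brightest stop keeps in 8-bit vs 16-bit output.
import numpy as np

linear = np.arange(2 ** 14, dtype=np.float64) / (2 ** 14 - 1)   # 0..1 in 14-bit steps
curved = linear ** (1 / 2.2)                                     # stand-in tone curve

top_stop = linear >= 0.5                                         # brightest stop of the raw
codes_8  = np.unique(np.round(curved[top_stop] * 255))
codes_16 = np.unique(np.round(curved[top_stop] * 65535))

print(len(codes_8), "distinct 8-bit codes in the top stop")    # ~70
print(len(codes_16), "distinct 16-bit codes in the top stop")  # ~8192, limited only by the 14-bit input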

    Roger

  6. #6
    Forum Participant
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,588
    Threads
    643
    Thank You Posts

    Default

    Hi Graeme- I was under the impression that the "bit-depth argument" (for ETTR) you mention was not important compared to the gains in S/N ratio from ETTR. See:

    http://theory.uchicago.edu/~ejm/pix/.../noise-p3.html (scroll down to "S/N and exposure decisions")

  7. #7
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by John Chardine View Post
    Hi Graeme- I was under the impression that the "bit-depth argument" (for ETTR) you mention was not important compared to the gains in S/N ratio from ETTR. See:

    http://theory.uchicago.edu/~ejm/pix/.../noise-p3.html (scroll down to "S/N and exposure decisions")
In my opinion, ETTR, while improving signal-to-noise ratio, is mainly important for getting as much signal as possible above the fixed pattern noise at low ISO, to improve shadow detail. The above web page (which is excellent) does not address the fixed pattern noise. At high ISO, fixed pattern noise is no longer an issue (on all cameras I have analyzed). At high ISO, ETTR does not improve signal-to-noise ratio, and with fixed pattern noise a non-issue, proper exposure is less critical (caveat: more light always helps as long as the subject doesn't get blurred, unless you want a blur). But when you have a fixed exposure (you can't change the exposure time and can't open the f/stop wider), then once above ISO 800 to 3200 (better cameras: 800; older cameras like the 5DII: 3200; 1D4: 1600), ETTR makes no difference in image quality.

    Roger

  8. #8
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Folks,

    Thanks much for your comments. In reading them over I've had a small ah, hah! moment for myself about something that has puzzled me for a long time in my understanding of the practice of digital photography. As some of you know, I work in the field of scientific visualization where I develop methods for analyzing data by transforming data into images. In a sense, this is raw conversion of arbitrary numerical data (used to call it raw data and cooked data, even). Anyway, the ah, hah! will take some explaining but here is the conclusion:

Sensor data (the raw file) during the raw conversion process is constrained by the digital color encoding (r, g, b) when saved in an image format: jpeg, tiff, etc. Once so constrained, processing behaves within the constraining color encoding, not as though it were working from sensor data.

    In other words, once the raw file is converted to rgb it becomes constrained by the rgb format. How might this have an impact?

    The following image is a histogram of all the colors that can be represented by 24 bits of color (8-bit rgb) by luma. There are different calculations for luma values, but this is more or less what Photoshop shows as the rgb histogram:

[Attachment: histogram24bitcolors_luma.jpg]

This is the histogram for a photo that had one pixel for each of the colors that can be represented in 8 bits per channel: rgb 0,0,0, rgb 0,0,1, rgb 0,0,2, etc. The midtones have the best color resolution.
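
Here is a minimal sketch of the counting behind a histogram like that (my own reconstruction, using the common Rec. 601 luma weights, which is roughly what Photoshop's composite histogram uses):

Code:
# Sketch: count how many of the 2^24 possible 8-bit RGB colors land on each luma level 0-255.
import numpy as np

v = np.arange(256)
luma = np.round(0.299 * v[:, None, None]      # red contribution
                + 0.587 * v[None, :, None]    # green contribution
                + 0.114 * v[None, None, :])   # blue contribution
counts = np.bincount(luma.astype(np.int32).ravel(), minlength=256)

print(counts[0], counts[128], counts[255])    # few colors at the extremes, many in the midtones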

John, exposing to the right and then processing to the left will give you less color resolution but higher contrast than just exposing for the midtones. I think I have that right... it's a bit of a brain twister.

    Roger, an s-curve would flatten this graph a bit giving more contrast to the midtones (less color resolution) but more color resolution (less contrast) to the extremes.

    Of course I could be wrong :-)

    Cheers,

    -Michael-

  9. #9
    BPN Viewer
    Join Date
    Nov 2009
    Location
    Thailand
    Posts
    110
    Threads
    8
    Thank You Posts

    Default

    You've lost me a bit!
    My understanding of what you've done is you have an image with all the possible, unique values of colour. And a luminance histogram for that image.

    Here's my conjecture:
    A total luminance of 0 only has one degree of freedom, (0,0,0) , so the histogram shows a low point at the left.
    A total luminance of 3 has multiple degrees of freedom { (3,0,0), (0,3,0), (1,0,2) etc } so it shows a higher peak. The histogram will follow a bell curve.

    Are you suggesting that the system will try to fit a photograph to this bell curve?

  10. #10
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Quote Originally Posted by Graeme Sheppard View Post
    You've lost me a bit!
    My understanding of what you've done is you have an image with all the possible, unique values of colour. And a luminance histogram for that image.

    Here's my conjecture:
    A total luminance of 0 only has one degree of freedom, (0,0,0) , so the histogram shows a low point at the left.
    A total luminance of 3 has multiple degrees of freedom { (3,0,0), (0,3,0), (1,0,2) etc } so it shows a higher peak. The histogram will follow a bell curve.

    Are you suggesting that the system will try to fit a photograph to this bell curve?
    Graeme,

    A histogram of a photograph would not fall within this bell curve (because there is more than one pixel per color).

It's useful to have this graph in mind when post-processing. For instance, a pixel at luma 1 can only have one of 6 possible hues: red (100), green (010), blue (001), yellow (110), magenta (101) or cyan (011). So if you lift that pixel to, say, luma level 100, you would expect it to be one of those six hues only, even though the full range of hues (1525) is available at that luma level. Saturation is dependent on the choice of interpolation. At luma level 1, the six colors are fully saturated (only the gray color at luma 1 (111) is less than fully saturated). When you lift to luma 100 should it also be fully saturated? (To be more precise we should say "value" here rather than luma, since luma requires a different proportion of reds, greens and blues, but the general idea is the same.)
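
A quick way to see that hue count collapse near black, using "value" (max of R, G, B) as suggested above (a toy sketch, not a color-science tool):

Code:
# Sketch: count the distinct hues available among 8-bit RGB triplets whose
# maximum channel equals a given "value" level.
import colorsys
from itertools import product

def hues_at_value(v):
    hues = set()
    for r, g, b in product(range(v + 1), repeat=3):
        if max(r, g, b) == v and (r, g, b) != (v, v, v):   # skip the pure gray
            h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hues.add(round(h, 4))
    return len(hues)

print(hues_at_value(1))    # 6: red, yellow, green, cyan, blue, magenta
print(hues_at_value(5))    # several dozen
print(hues_at_value(50))   # thousands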

The limited hue resolution for near-black colors is also true for near-white, and is one of the issues with ETTR if one intends to expose to the right and then post-process to the left. One is left with limited hue resolution compared to an image exposed at the final levels without pp.

    Hope this makes sense.

    Cheers,

    -Michael-

  11. #11
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post

    The limited hue resolution for near black colors is also true for the near white and is one of the issues with ettr if one intends to expose to the right and then post process to the left. One is left with limited hue resolution when compared to an image exposed at the end levels without pp.
    Hi Michael,

    In theory there is no difference between theory and practice. In practice there is.
    - various attributions

The reality of the above is usually because the theory is incomplete. Your plot is an interesting exercise, but what does it mean in practice? For example, the in-camera signal is linear, so it doesn't matter what exact level the data are. What limits detail, whether color or spatial (ignoring lens resolution and pixel sampling), is noise, and that noise is dominated by photon counting statistics. While the 14-bit A/D limits/quantizes the output signal, the 14-bit quantization of digital camera image data is certainly more than enough at the high end. So the high-end color is limited by photon noise more than anything else, and exposing to the right minimizes that issue. At the low end, color is limited by sensor read noise and downstream electronics more than by quantization of the 14-bit signal. In theory, in a 16-bits/channel working space, there are for all practical purposes no limits to colors for human perception. In practice, there are limits, but not due to sampling; they have more to do with spectral response differences between the camera and output devices compared to that of the human eye.
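
As an illustration of the square-root behavior, a small simulation (the numbers are made up, not from any particular camera):

Code:
# Sketch: photon (Poisson) noise plus a fixed read noise, at two exposure levels.
# SNR grows roughly as the square root of the signal, which is what ETTR buys you.
import numpy as np

rng = np.random.default_rng(0)
read_noise_e = 5.0                        # read noise in electrons RMS (assumed)

for photons in (100, 1600):               # e.g. a midtone vs. an ETTR exposure
    shot = rng.poisson(photons, 100_000)             # photon counting statistics
    read = rng.normal(0.0, read_noise_e, 100_000)    # downstream electronics
    signal = shot + read
    print(f"{photons:5d} photons -> SNR ~ {signal.mean() / signal.std():.1f}")
# 100 photons  -> SNR ~ 9  (sqrt(100) = 10, degraded slightly by read noise)
# 1600 photons -> SNR ~ 40 (sqrt(1600) = 40; read noise is now negligible)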

    Roger

  12. #12
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    Hi Michael,

    In theory there is no difference between theory and practice. In practice there is.
    - various attributions

The reality of the above is usually because the theory is incomplete. Your plot is an interesting exercise, but what does it mean in practice? For example, the in-camera signal is linear, so it doesn't matter what exact level the data are. What limits detail, whether color or spatial (ignoring lens resolution and pixel sampling), is noise, and that noise is dominated by photon counting statistics. While the 14-bit A/D limits/quantizes the output signal, the 14-bit quantization of digital camera image data is certainly more than enough at the high end. So the high-end color is limited by photon noise more than anything else, and exposing to the right minimizes that issue. At the low end, color is limited by sensor read noise and downstream electronics more than by quantization of the 14-bit signal. In theory, in a 16-bits/channel working space, there are for all practical purposes no limits to colors for human perception. In practice, there are limits, but not due to sampling; they have more to do with spectral response differences between the camera and output devices compared to that of the human eye.

    Roger
    I should have qualified the above. Michael specifically posted about 8-bit data, and if we only worked in 8-bit, then the quantization limits Michael discusses would come into play.

    Roger

  13. #13
    BPN Viewer
    Join Date
    Nov 2009
    Location
    Thailand
    Posts
    110
    Threads
    8
    Thank You Posts

    Default

    I think I've got it, thanks to both for the explanations.

    Now, I used to know this but forgot: noise levels across the brightness range and sensor will be reasonably constant, but bright pixels have a higher signal so signal-to-noise ratio is improved. Hence the ETTR theory to minimise noise.

But, if I have a purely black pixel and decide to make a slight change to it, there are only a few possible hues/combinations available. If I took a grey pixel and made a similarly small change to it, there would be thousands of possible hues I could go to. Hence exposing to the centre gives greater colour contrast. The real problems with this would only show at the extreme ends of the histogram.

That's the theory as I currently see it (to the depth I need).
Going forward (and it is actually what I have done so far), I will very slightly over-expose, since by my understanding ETTR and ETTL have the same limitations in colour contrast, but ETTR improves noise.

    Essentially, I will follow the guideline of "avoid exposing to the left".

  14. #14
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    In theory there is no difference between theory and practice. In practice there is.
    - various attributions
    Roger, in theory there is a relationship between photon noise, read noise and the final output of a photograph to some output device be it screen or print. In practice... there is a whole lot of cooking going on.

I'm just saying that once you fix the sensor data into rgb you take on a new set of rules. I think it's good to know what sorts of impacts those new rules have when one goes about post-processing. The impact with respect to ETTR is that exposing into the bright luma levels (call it luma levels > 220), once encoded in rgb, has two effects:

    1. Localized contrast is harder to perceive (first image, pane #1)
    2. Has lower color resolution in hue and saturation (second image, pane #8)

If these impacts work for you, then ETTR is for you.

    -----

These new rules have many impacts on pp of all sorts. One quick, testable impact can be shown with the image in the OP. There is much discussion about the AA filter when comparing sensors; Nikon even went so far as to offer a choice with the D800. That's all fine and good. But did you know that simply rotating your image causes it to be antialiased? Granted, this is software antialiasing, but the impact is (I'm guessing here) substantially greater than that of the hardware AA filter.

    Here is the quick test to demonstrate the antialiasing for image rotation:

    1. Download the image in the OP.
2. In PS look at the Info panel. Choose a band on the right side of the image and check out one of its edges... you will see a distinct difference of 8 in r, g, or b between the two bands (there may be an occasional off-by-one due to resampling but it will be pretty consistent across an edge).
    3. Go to Image->Image Rotation-> Arbitrary
    4. Rotate say 45 degrees.
5. Now examine the band edge. Instead of a jagged edge with jumps of 8 from one band to the next, you now have a number of intermediate shades. This is antialiasing (a scripted version of this check is sketched below).
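
Here is a scripted version of the same check (a sketch assuming Pillow; it builds a small banded strip rather than loading the attachment):

Code:
# Sketch: rotating an image resamples it, which smooths the hard band edges.
import numpy as np
from PIL import Image

# A small strip with vertical bands stepping by 8 gray levels (like the right panel).
bands = np.repeat(np.arange(0, 256, 8, dtype=np.uint8), 8)   # 32 bands, 8 px wide
before = Image.fromarray(np.tile(bands, (64, 1)), mode="L")

after = before.rotate(45, resample=Image.BILINEAR, expand=True)

print(len(np.unique(np.asarray(before))))   # 32 distinct gray values
print(len(np.unique(np.asarray(after))))    # many more: interpolation creates in-between shades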

All that worry about a sub-pixel AA filter, and one horizon-leveling rotation... poof. By the way, one good pp practice: do one and only one rotation (if you miss, reset to the prior state); the antialiasing accumulates.

    Cheers,

    -Michael-

  15. #15
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Roger, in theory there is a relationship between photon noise, read noise and the final output of a photograph to some output device be it screen or print. In practice... there is a whole lot of cooking going on.
    Hi Michael,
Regardless of any "cooking" of the raw data, photon noise has a unique signature: the noise is the square root of the signal. That simple fact allows us to separate it from other noise sources. And the fact is that photon noise is the dominant noise source in ALL our digital camera images except in the deepest shadows, where sensor read noise and downstream electronics noise are larger factors. These other factors only add noise to the photon noise. Photon noise is the lowest noise possible; other noise sources only increase the total noise. Of course cooking could average spatial resolution to reduce noise, and we see that in jpeg data at high ISOs. But raw data I have analyzed show no evidence for such manipulation, and the digital camera sensor data track well with commercial and scientific sensors where no "cooking" is happening.


    Quote Originally Posted by Michael Gerald-Yamasaki View Post
I'm just saying that once you fix the sensor data into rgb you take on a new set of rules. I think it's good to know what sorts of impacts those new rules have when one goes about post-processing. The impact with respect to ETTR is that exposing into the bright luma levels (call it luma levels > 220), once encoded in rgb, has two effects:
I agree with your assessment for 8-bit/channel data. But one need not lose anything (for all practical purposes) at any level if the digitization is adequate. For example, convert the raw data to 16-bit, not 8-bit, and your >220 level becomes >56320; instead of 255-220 = 35 levels above 220, one has 65535-56320 = 9215 levels, or 36 times more than the 256 levels in 8-bit channel data. And if that isn't good enough, work in 32-bit integer, 32-bit floating point or 64-bit floating point. Note that in Photoshop you get half those levels in 16-bit (really 15 bits), so only 18 times more levels than the full 8-bit range. Working in higher bits than 8 provides plenty of color resolution.

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    1. Localized contrast is harder to perceive (first image, pane #1)
    This is also a human perception limitation, not a 16-bit and higher data limitation.


    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    2. Has lower color resolution in hue and saturation (second image, pane #8)
This is an 8-bits/channel limitation, not a sensor limitation, and not a limitation with 16-bits/channel or higher (or 15-bits/channel).

    How about making your plot with 16-bits/channel and show the same vertical scale as you show in your 8-bits/channel plot? If you work with 16-bit signed integers, a 15-bits/channel plot will still be adequate.

    Roger

  16. #16
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Roger,

    While 16 bits per channel is what the data is under the hood, we make all our decisions about post processing based on the 8-bit representation that we see on our screens (except for those exceptionally few people with 10-bit hardware and workflows). So while 16-bit data mitigates some of the impacts being discussed here, we don't really see this mitigation until we print with a 16 bit workflow.

    Furthermore, 16 bit data doesn't alter the shape of the rgb color space.

[Attachment: optpropdemo_src_02.jpg (from Matlab on the web)]

This is an interesting representation of rgb color space. In addition to the r, g, b coordinates with the origin at 0,0,0 (lower right corner), the diagram represents hue (lines radiating from white to black) and "Lightness" (more or less) along the lines of a Hue-Saturation-Lightness model (bands from the upper right corner to the lower left show descending lightness).

Note that the colors at the surface of the cube are all fully saturated. The grayscale is on a line from black to white, which is the 3D diagonal of this cube. Turning the cube so that the diagonal is parallel to the ground would approximate in 3D the histogram in pane #8 (there are complications involved with transforming this to luma, but it is roughly the same idea). This might take a moment to understand: think of counting all the interior points of this cube (digitized to 8 bits ;-). At the extremes (toward black or toward white) there are fewer points to count. There is a middle section with more or less the same number, just as in the histogram.

Tonal adjustments in rgb space have some challenges (which is why people use Lab mode, but that's another story) involving hue shifts and perceptual saturation. The issues with color resolution and localized contrast that I described before have impacts on tonal adjustments because (IMO) of the shape of the rgb space.

It is of course just a bit more complicated than this because of color profiles (yet another transformation, which would result in a distorted cube). It is all headache inducing, but if you must, there are some good images at http://www.gamutvision.com/

    Hope this helps.

    Cheers,

    -Michael-

  17. #17
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

ETTR is just a way of utilizing dynamic range. Since the sensor itself is linear, exposing as far to the right as you can without clipping maximizes your utilization of the dynamic range your sensor offers. The primary benefit is to put as much of the total useful signal as possible above the electronic noise floor of the sensor before actually running the signal through the ADC (which effectively "bakes in" the electronic noise that exists in the total signal). Since the analog signal on the sensor is linear, I don't believe you can actually "lose" anything unless you overexpose enough to clip highlights. Once you have a RAW image in hand, you effectively have a digital signal to work with, and you can do a lot to move that signal around, recover detail and color fidelity, etc. using clever and advanced processing algorithms and math.

I've learned that modern sensors offer a LOT of headroom when it comes to highlights (well, at least Canon... I don't use Nikon cameras enough to know if they are the same, but with their improved read noise these days, the need to ETTR is far lower). These two photos were taken moments apart... one was radically overexposed and fully recovered in post, the other was properly exposed. The overexposed (ETTR) shot was 1/100s f/5.6 ISO 100, the correctly exposed shot was 1/1000s f/7.1 ISO 160, a difference of 3 stops.

    Before correction (which was a full -3EV in Lightroom 4.2):
[Attachment: Dragon Fly Recovery (1 of 1).jpg]

    After correction:
[Attachment: Dragon Fly Recovery (1 of 2).jpg]

    And a separate shot, taken moments later:
[Attachment: Dragon Fly Recovery (2 of 2).jpg]

No other adjustments have been made to these photos. As far as I can tell, color fidelity, tonality, etc. were all preserved in the ETTR shot (which was accidental; I rarely have the opportunity to really ETTR by over three stops in any realistic scenario). Below are two 100% crops of the same area (the offset is due to me moving slightly between shots). In the normally exposed image, you can see the effects of Poisson noise (photon shot noise), whereas in the ETTR version it is virtually noiseless:

    Normal exposure:
[Attachment: Dragon Fly Normal Noise.jpg]

    ETTR exposure:
[Attachment: Dragon Fly ETTR Noise.jpg]


There is one curious thing I see that maybe Roger could answer. In the ETTR 100% crop image, after a -3EV correction in post, there is a slight amount of vertical banding. I've racked my brain trying to figure out where that came from. Is it reasonable to think that the fixed component of read noise that exists in the lower fraction of the signal is being reduced along with the rest of the exposure, by the same relative amount... and therefore having the effect of "pulling down" the exposure of my midtones a little bit, causing that slight exhibition of vertical banding? In the original uncorrected ETTR exposure, there was no hint of banding, although admittedly everything appeared to be nearly full white.
    Last edited by Jon Rista; 10-22-2012 at 05:07 PM.

  18. #18
    BPN Viewer Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,116
    Threads
    33
    Thank You Posts

    Default

    Roger says pane #15 - ".....photon noise has a unique signature..."
Then in my ignorance I would ask: "Great, a unique signature, then it could be described and subtracted from the original data, thus providing near noise-free data and thus a near noise-free image." No???
    Tom

  19. #19
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

    @Tom: Photon noise follows a Poisson Distribution (http://en.wikipedia.org/wiki/Poisson_distribution). It is effectively a distinct spatial wavelet that exists within the context of the total spatial wavelet that defines your actual image. If the wavelet that describes photon noise could be subtracted, theoretically only the image would remain.

  20. #20
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Jon,

    Okay, so there is analog data which is converted to digital via ADC to a Bayer format RAW file. The RAW file is converted to an RGB file (TIFF, PSD, JPEG, etc.). My comments are regarding the RGB file. Photon noise is baked in long ago in the analog to digital conversion. In RGB the removal of the baked in photon noise is just data smoothing and has no connection with sensor dynamics. We speak of toning down an image by a stop, but pp is really a different beast.

There is banding (posterization) in all your images in pane #17, if you look close enough (just as there is posterization in the far left part of the OP). This is because what you see on the screen is 8-bit RGB. You may see increased local contrast when "pulling down", which is what I suggest happens for the two reasons stated before: perceptual differences between near-white and middle grays, and an increase in contrast from the lower color resolution as pp moves from high luma toward mid-luma.

    What I think, anyway.

    Cheers,

    -Michael-

  21. #21
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

    First, sorry about the posterization. It seems now that I am a paid member of BPN, all my images get uploaded to their content host. I was trying to keep image size down, so JPEG compression added some posterization.


Ok, I understand where you are coming from now. In one sense, yes, what you are describing is true. In another sense, it is not necessarily true. An image is also a digital signal. It is a two-dimensional waveform, to be precise. In terms of wave theory, you can add and subtract waves of different frequencies. It is rather complex, and as someone who does not do Fourier transforms and the like on a regular basis myself, I'll leave the technical explanation of it to someone more qualified. I'll see if I can describe the general idea in as close to layman's terms as I can. The best way to describe it is with a simple sound, which as a waveform is effectively one-dimensional (over time). Let's say a monotone sound heard from a mid-grade speaker. The speaker itself produces some amount of noise. Let's say the monotone is 50 Hz, and the noise is 1000 Hz at 1/4 the power of the monotone. When those two waves are added together, they might look something like this:

[Attachment: Tone and Noise.jpg]

One would think the two are inseparable; however, they are actually just two distinct waveforms heard at the same time. Individually, they appear as so:

[Attachment: Separate Waveforms.jpg]

If one was to apply the inverse of the noise waveform to the sound coming out of the speaker, it would cancel out the noise signal, leaving behind only the clean, clear monotone. An image can be thought of the same way. A dark pixel next to a white pixel is effectively the trough and crest of a single wavelength. Array pixels out in two dimensions, and you have a two-dimensional waveform, with troughs and crests of varying amplitude, phase, and frequency in a layered mesh across the area of your image.

As a purely mathematical construct, you could decompose an RGB TIFF or PSD image in an almost infinite number of ways. (Let's exclude JPEG for now, since it brings with it additional complexities from lossy compression.) Depending simply on how you choose to decompose an RGB TIFF image, you could effectively "isolate and eliminate" the wavelet that describes photon noise by applying its inverse to the image. If you knew the exact nature of the photon noise in your image, you could effectively cancel it out 100%. Seeing as it is effectively impossible to know the exact nature of the photon noise wavelet in an image, we have to approximate it. Wavelet Deconvolution aims to do exactly that: approximate the photon noise wavelet in an image, generate an inverse, and apply it to the original waveform that describes the image. The result is a considerable reduction in photon noise, without the namesake blurring of a normal noise removal algorithm.
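
A tiny numerical version of that sound analogy (a sketch; the "noise" here is a single known frequency, which is exactly what makes it so easy to cancel):

Code:
# Sketch: a 50 Hz tone plus a 1000 Hz "noise" component, separated in the
# frequency domain by zeroing the unwanted bin and transforming back.
import numpy as np

fs = 8000                                   # sample rate in Hz (assumed)
t = np.arange(fs) / fs                      # one second of samples
tone = np.sin(2 * np.pi * 50 * t)
noise = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1/4 the power of the tone
mixed = tone + noise

spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), d=1 / fs)
spectrum[np.isclose(freqs, 1000)] = 0       # cancel the known 1000 Hz component
recovered = np.fft.irfft(spectrum, n=len(mixed))

print(np.max(np.abs(recovered - tone)))     # ~1e-15: the tone survives untouched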

Wavelet Deconvolution for Image Noise Removal is still more theory than practical application, but it is an area of furious research and development. Such algorithms tend to be very complex and extremely math-heavy, so they require a lot of computing power. Some of the alternative noise removal tools on the market, such as Topaz Denoise, Nik Define, and a variety of OSS projects, are starting to apply wavelet and deconvolution theory to their algorithms, which is why they can produce such clean, nearly blur- and noise-free output (relative to a tool like Photoshop or Lightroom, that is).

    I hope that answers the question, and explains how a photograph produced by a digital camera is not necessarily "just RGB pixels in a matrix." With advanced, high-speed computing comes the freedom to apply more advanced theory to common processing, such as noise removal, with vastly superior results. Observing an image as a set of discrete, distinct, and separable waveforms opens a whole different world of non-destructive or minimally-destructive image processing. Technically speaking, all forms of image noise could be described as 2D waveforms, as can their inverses, meaning applying this kind of thought to noise generation in general can mean near-total removal of the specific wavelets that describe noise from the composition of all wavelets of an image, leaving behind only the image itself.
    Last edited by Jon Rista; 10-22-2012 at 07:17 PM.

  22. #22
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Jon,

    I think what you are saying is that one can treat the RGB data as though it were a sampling of a complex system, do transforms in the complex system and resample to transformed RGB data? I can accept that (if that is what you are saying). The unresolved issue (so to speak ;-) is, well, resolution. RGB as "pixels in a matrix" if you perceive it in a model of, say, hue, saturation and brightness has low resolution of hue and saturation at the extremes of brightness. I don't understand a way around that (perhaps, you do?).


    Cheers,


    -Michael-

  23. #23
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

    I guess I may have missed part of the context of the discussion, as I have not yet read all of the replies. So, to set a context for our local discussion, I consider the only valid environment within which one would actually perform any color, exposure, or noise-related post processing to be with the RAW. A RAW image, for all intents and purposes, is an exact replica of the signal on the sensor at the time of exposure, only as a digital signal. So long as you keep an image in its "original digital signal" form, you are not actually processing RGB pixels. You are processing an actual signal that can be interpreted in a variety of ways. You don't even necessarily need to think of a RAW as another matrix of discrete R, G, and B pixels. You can think of it as a 2D waveform of pure luminance, as well as a separate 2D waveform of chrominance (color information described by vision-accurate green-magenta + blue-yellow color axes). You can adjust color information independently of exposure information, or both independently of noise information (or concurrently process the image as a variety of other representations along a pipeline towards the final result you see on your screen). So long as you work with the original digital signal.

When it comes to RGB images, those should only ever crop up in the final stages of your image processing. By the time you start working on a 16-bit RGB TIFF, your color balance, exposure tuning, curves adjustments, and noise removal should have already been done. Personally, I'll only start doing content-related adjustments (content-aware fill, spot healing, patching, etc.) after I've done all of that. I am then working with an appropriately "tuned and ready" PP-Master image. Since that PP-Master is still RAW, with a bunch of non-destructive edits in the edit history of my RAW editor that can be overruled at any time, I can always tweak the raw further as if it was a digital signal (rather than an RGB pixel matrix). From that PP-Master, I'll generate a CC-Working 16-bit TIFF image, for the purposes of using content-aware tools in Photoshop to clean up content, sharpening, and other "final cleanup". From the CC-Working, I save out a CC-Master, which is the same original size and dimensions as the RAW. Once I have a full-sized CC-Master, only then do I really feel free to start scaling and cropping for different output mediums, or applying the very LAST edit: output sharpening. You could kind of think of the progression of edit stages like a small tree. At the root is the original unprocessed RAW, from which a series of edits progress:


    Code:
    
                                    / -> 8x10 Print, Cropped, Sharpened, 600ppi
    RAW -> "PP-Master" -> CC-Master - -> 17x22 Print, Cropped, Sharpened, 300ppi
                                    \ -> 750x500 Web, Sharpened



    To the rest of your last post and to your question:

    RGB as "pixels in a matrix" if you perceive it in a model of, say, hue, saturation and brightness has low resolution of hue and saturation at the extremes of brightness. I don't understand a way around that (perhaps, you do?).
    Since exposure tuning, color balance, and noise removal should always be performed on a RAW, they are effectively being done on an original digital signal. As long as you perform all of your exposure tuning, color balance, and noise removal in a non-destructive RAW editor, you would never really be working with a pixel matrix in an HSB model. You are working with a linear signal, attenuated by a camera profile tone curve (which is what gives it meaningful, good-looking contrast, a highlight shoulder and a shadow foot), and rendered to the screen with your edits. The signal might as well be as fluid as it was on the sensor (there are some limitations, but we can get to that later). You can shift exposure around within the available dynamic range (which in a tool like lightroom is at least 8 stops (+/- 4 EV), but more when you use tools like highlight recovery, shadow lifting, white and black tuning, etc.)

    Nothing really actually changes the original signal in a non-destructive RAW editor. What you see on the screen is simply a rendering of the digital signal contained within your RAW image. Successive edits, multiple successive tweaks with the same tools, such as Exposure, are not compounded edits on top of edits of RGB pixel data. If that was the case, you would only be able to make a few edits before your image started showing a pronounced loss of fidelity and quality...and not long after that it would start to melt into a meaningless "pixel mud" due to the error present in each and every successive edit. In actuality, in a non-destructive RAW editor, each successive edit is simply a change in a rather small set of instructions to the rendering engine that reprocesses the original digital signal and renders the image you see on your screen. After every single tiny little edit. ;)
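
In toy-code terms, the non-destructive model described above looks something like this (a sketch only, not how Lightroom is actually implemented):

Code:
# Sketch: a non-destructive editor keeps the original data untouched and re-renders
# from it whenever a setting changes, instead of stacking edits on top of edits.
import numpy as np

class ToyRawEditor:
    def __init__(self, linear_raw):
        self.original = linear_raw                     # never modified
        self.settings = {"exposure_ev": 0.0, "gamma": 2.2}

    def set(self, name, value):
        self.settings[name] = value                    # only the instructions change

    def render(self):
        img = self.original * 2.0 ** self.settings["exposure_ev"]        # exposure on linear data
        img = np.clip(img, 0.0, 1.0) ** (1.0 / self.settings["gamma"])   # display tone curve
        return np.round(img * 255).astype(np.uint8)

editor = ToyRawEditor(np.linspace(0.0, 1.0, 10))
editor.set("exposure_ev", -3.0)   # "pull down" three stops
editor.set("exposure_ev", -1.0)   # change your mind: no compounding loss,
print(editor.render())            # because rendering always starts from the original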

    Assuming you did not clip your highlights when making an ETTR photo, every last scrap of information in those highlights is not only recoverable, but it also contains full, and maximally rich, color information. You achieve maximum color value in the analog signal on the actual sensor when you approach maximum saturation, since each pixel represents a luminous value for a single specific color (as opposed to an RGB composite pixel). The red, green, and blue sensor pixels have not yet been interpolated into RGB pixels when editing a RAW. If you pull down your highlights by a stop, by two stops, by four stops, the original signal is simply reprocessed with a different set of instructions...different offsets and adjustments and attenuations. You aren't reprocessing previously processed pixels, you are always adjusting the original, and for all intents and purposes "fluid", digital signal. Same goes for curves. Same goes for saturation. Same goes for white balance. If you adjust white balance by 1000k, 5000k, 10000k, the final results will still look accurate because you are always going back to that original digital signal and simply re-rendering.

    So, instead of thinking about your photographs as matrices of RGB pixels...think about them as an original, fluid digital signal until the moment you create a CC-Master as a 16-bit TIFF image. From that point on, you'll always have your original, full quality, fluid digital signal in the PP-Master...your RAW with a bunch of rendering and noise removal instructions, as well as a content-cleaned master that has undesirable content cleaned up, etc. You can always re-tune exposure and create a second CC-Master, or copy the CC-Master to crop, scale, sharpen, and publish to different mediums.



A little bit on digital signals. While I called it "fluid" above, because it is effectively a "baked" representation of a true analog signal from the sensor, it is not "fluid without limits." The analog signal on a sensor can be tuned and adjusted at will, shifting exposure around within the dynamic range, without any loss whatsoever until you actually expose. Once you do expose, you get a digital signal that is very flexible, but not quite as good as the "real thing". For one, you have baked-in noise, and noise of all forms. You obviously have your photon noise, but you also have FPN (fixed pattern noise), HVBN (horizontal and/or vertical banding noise), thermal noise, quantization noise, non-uniform pixel response noise, etc. Each of those forms of noise can be represented by a discrete wavelet in the Fourier series that ultimately represents the digital signal. As such, technically speaking, they could be removed without adversely affecting the image. The tools to really do that don't quite exist in readily, easily usable forms... so for now we can generally treat noise as a fixed quantity of our digital signal that will always be rendered along with everything else, and some of which might potentially be removed by a tool like Lightroom (which might affect other wavelets of your signal and introduce some blur), or Topaz DeNoise, Nik Define, etc. for slightly better results.

    Additionally, a digital signal is represented in a limited precision (12/14-bit) form as discrete integers, rather than in an infinite precision form as real numbers. As such, certain artifacts are intrinsic to the digital signal. These limitations ultimately put a hard limit on just how far we can push, pull, stretch, compress, and otherwise massage a RAW image and still produce an aesthetically pleasing result, even in the best and most advanced of RAW editors.

  24. #24
    Forum Participant
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,588
    Threads
    643
    Thank You Posts

    Default

    Jon- Would that noise removal could be so easy as to reduce a wave function into the signal and the noise, and then remove the latter. Photon shot noise is a random component and cannot be assessed and removed in a single experiment (like you conduct every time you press your shutter release). Say I had a very poor tape measure and I measured the height of one of my kids once. That measurement would be made up of the true measurement and the error introduced by the tape measure. If I measure just once there is no way I can reduce the measurement to the two components and then remove the noise and thus obtain true height. The fact that in a Poisson distributed variable like number of photons per unit time hitting a sensor site, the variance (an error measurement) equals the mean does not help you in the slightest in removing this noise in a single experiment. The way around this is of course to repeat the experiment say 30 or 100 times, generating a mean signal which will be a lot less variable than the individual measurements. The photographic analogue to this would be image-stacking.
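
A quick numerical check of that point (a simulation sketch: the mean of N noisy frames has its noise reduced by roughly the square root of N):

Code:
# Sketch: averaging 30 frames of Poisson (photon) noise cuts the noise by ~sqrt(30).
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 100                                   # per pixel per frame (assumed)
pixels = 100_000

single = rng.poisson(mean_photons, pixels)
stacked = rng.poisson(mean_photons, (30, pixels)).mean(axis=0)   # 30-frame average

print(single.std())    # ~10   (sqrt(100): the shot noise of one frame)
print(stacked.std())   # ~1.8  (10 / sqrt(30))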

Can you provide any references to research into "Wavelet Deconvolution for Image Noise Removal"? It would be of interest to see if this has any application to every-day photography, where we run single experiments with a sample size of one.

  25. #25
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

Sure, noise removal by wavelet deconvolution is by no means a simple process at the algorithmic level. I believe I did mention it is complex, and highly math- and therefore CPU-intensive. There are a variety of plugins for GIMP that use some form of wavelet deconvolution for noise removal. I believe some of the open source RAW editors, such as RawTherapee and DarkTable, offer a variety of sharpening and denoise algorithms that use deconvolution. I believe more recent versions of DeepSkyStacker (or possibly soon-to-be-released versions), an astrophotography auto-stacking tool, use some deconvolution algorithms. Any one of these various tools could be used to experiment with deconvolution. The best algorithms are less usable tools for RAW editing, and more along the lines of an algorithm with a command line that takes an input image and produces an output image. I've used a couple of those in the past, and the results were pretty stunning. Both luminance and color noise could be neatly extracted from the image without any visible degradation of actual photographic detail. If I still have links to those tools, I'll provide them.

    As for papers, here are a few:

    Removal of banding noise:
    http://lib.semi.ac.cn:8080/tsh/dzzy/...623/662316.pdf

    Removal of blur caused by a low-pass filter:
    http://www.cmap.polytechnique.fr/~ma...DeconvIEEE.pdf

A list of resources related to wavelet deconvolution, with some application to denoising:
    http://www.visionbib.com/bibliography/compute93.html

    Biomedical research into wavelet deconvolution for noise removal:
    http://bigwww.epfl.ch/publications/vonesch0704.html

    There are many additional resources. Much of it is in fields entirely unrelated to general photography. For example, the study of seismic waves seems to include a lot of research into wavelet deconvolution for a variety of purposes. Wavelet deconvolution is used for non-noise related algorithms as well. Images can be sharpened with deconvolution, blur from motion can be deconvolved, etc. The decomposition of an image into its component waveforms for independent processing or elimination opens a lot of doors. The actual algorithms are very complex, and they tend to be extremely compute intensive, but when implemented properly the results tend to be mind-blowing (at least by today's standards).

  26. #26
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

If you want to experiment with a decent wavelet domain tool that works with RAW, Darktable has something called "equalizer". I used Darktable a year or so ago on a Linux virtual machine (it is Linux-only), and the results were fairly impressive. At the time, Lightroom 4 had not been released, nor had the latest versions of Topaz DeNoise or Nik Define, so I am not really sure how the wavelet deconvolution algorithms in Equalizer compare, but they were very good. If you have Linux, or the ability to run a Linux virtual machine, I recommend giving it a try.

Another useful tool is Wavelet Denoise. That's a GIMP plugin that brings wavelet-deconvolution-based noise removal to a fairly common, mainstream app. Its algorithm does not seem particularly advanced, and it executes faster than I'd expect for an accurate wavelet deconvolution algorithm. Its results are good, but I wouldn't say they are much better than Lightroom 4 or other current noise removal apps. (Although, who's to say any of those apps don't already apply deconvolution in their own algorithms these days...)
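
For anyone who wants to play with the idea outside those tools, the basic wavelet-thresholding step (a simpler relative of the deconvolution approaches discussed above) can be sketched with the PyWavelets library. This is a toy example of my own, not the algorithm used by Darktable or the GIMP plugin:

Code:
# Sketch: soft-threshold the wavelet detail coefficients of a noisy image.
# Noise lives mostly in the small detail coefficients; the image structure survives.
import numpy as np
import pywt

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 256), (256, 1))      # a smooth test gradient
noisy = clean + rng.normal(0, 0.05, clean.shape)       # add gaussian noise

coeffs = pywt.wavedec2(noisy, "db2", level=3)          # decompose into wavelet bands
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, 0.1, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")[:256, :256]

print(np.abs(noisy - clean).mean())      # error before
print(np.abs(denoised - clean).mean())   # noticeably lower after thresholding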

  27. #27
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Jon,

    I think what you are saying is that one can treat the RGB data as though it were a sampling of a complex system, do transforms in the complex system and resample to transformed RGB data? I can accept that (if that is what you are saying). The unresolved issue (so to speak ;-) is, well, resolution. RGB as "pixels in a matrix" if you perceive it in a model of, say, hue, saturation and brightness has low resolution of hue and saturation at the extremes of brightness. I don't understand a way around that (perhaps, you do?).
    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Roger,

    While 16 bits per channel is what the data is under the hood, we make all our decisions about post processing based on the 8-bit representation that we see on our screens (except for those exceptionally few people with 10-bit hardware and workflows). So while 16-bit data mitigates some of the impacts being discussed here, we don't really see this mitigation until we print with a 16 bit workflow.

    Furthermore, 16 bit data doesn't alter the shape of the rgb color space.
    Hi Michael,

Regardless of the shape of the RGB color space, all one needs to do is sample it mathematically with sufficient resolution and precision to show all the levels and colors one needs to show. Whether any one device, be it an 8-bit monitor or a print that can only show 5 bits of dynamic range, can display them all is irrelevant. After all, the basic dynamic range of an image out of a quality camera like a DSLR is greater than most displays or prints can show. That is why we post-process images: we dodge and burn to compress the dynamic range and adjust color and contrast to best show the image on output devices with less color and/or intensity range than the original scene. There is a huge difference between 8- and 16-bit processing in this regard.

    Roger

  28. #28
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Looking at the DCRAW source code shows a wavelet denoise step, and it appears to be part of the Bayer demosaicing step. I would bet many modern raw converters have the same algorithm, or something very similar.

    Roger

  29. #29
    BPN Viewer
    Join Date
    Sep 2012
    Location
    Colorado
    Posts
    195
    Threads
    16
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    Looking at the DCRAW source code shows a wavelet denoise step, and it appears to be part of the Bayer demosaicing step. I would bet many modern raw converters have the same algorithm, or something very similar.

    Roger
    Hmm, interesting. So DCRAW intrinsically denoises every RAW image it renders?

  30. #30
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Jon Rista View Post
    Hmm, interesting. So DCRAW intrinsically denoises every RAW image it renders?
    Hi Jon
    It is the default, but dcraw gives you options to choose other algorithms too. Other raw converters appear to be running noise filters in the demosaicing step too.

    Roger
