
Highlight Recovery



Ray Rozema
07-31-2009, 02:26 PM
How good is the highlight recovery in Lightroom?

Is it possible to do highlight recovery in Lightroom, keep the result as a RAW/NEF file, and then process the raw image in Nikon NX2? The highlight recovery in NX2 does not appear to be very strong. I have not been able to figure out a way to do this. (I have not been able to bring myself to buy CS3/4 yet.)

Thanks very much.
Ray:)

Nancy A Elwood
07-31-2009, 03:05 PM
I find the highlight recovery slider in NX2 very good!! In NX2, go to the top menu, choose View, then Show Lost Highlights, then play with the slider.

Cheers
Nancy

Alfred Forns
07-31-2009, 04:07 PM
You can, Ray, but always keep in mind you will be introducing noise, and it also makes the blacks look a bit mushy!! It is a good tool!!!

Jim Poor
07-31-2009, 07:04 PM
Highlight recovery gives you some room to recover almost-blown highlights, but if they are truly blown, there is nothing to recover. In cases like that, you end up with light gray areas with no detail rather than white.

arash_hazeghi
07-31-2009, 07:48 PM
Ray,
NX2 has many ways to recover highlights. The easiest is exposure compensation (not the highlight recovery slider). You can also use a local brightness adjustment with the LCA tool to tone down highlights selectively, and you can apply ADL selectively using adjustment masks. I will try to post an example later, but NX2 is far superior to LR2.4 for handling NEF files. LR2.4 can produce nasty artifacts if you push the settings too far, like mushy blacks or strange color casts, and it can't preserve colors as well as NX2.

Also, adjustments made by LR2.4 cannot be read in NX2, or vice versa.

arash_hazeghi
08-01-2009, 02:21 PM
Here is an example. This is the original NEF; as you can see, the feather highlights are blown (peak at the right edge of the histogram).

http://www.stanford.edu/~ahazeghi/Photos/NEF/NX2/1.jpg

I first use -1 EV of EC to recover all the highlights.

http://www.stanford.edu/~ahazeghi/Photos/NEF/NX2/2.jpg

But this makes the BG and other areas of the image dark. So I use a CCP (Color Control Point) to locally adjust the brightness of these areas, placing the points in the areas that I want and increasing brightness by 50% and saturation just a tad to compensate for the loss of saturation due to the negative EC. I also use one point to bring up the under-wing shadows.

http://www.stanford.edu/~ahazeghi/Photos/NEF/NX2/3.jpg

These color points are smart and only affect the intended area, using pattern recognition and color information; however, they might bleed a little bit into the highlight areas, so I put another control point on the wing highlights and click "Protect Detail". This locks the highlight area, preventing it from brightening.

http://www.stanford.edu/~ahazeghi/Photos/NEF/NX2/4.jpg

Now that I am done, I do a final D-Lighting step to equalize the tones and apply a final subtle brightening of all the shadows. Note that I have used a low value for shadows and a high value for highlights to ensure the highlights are fully protected.

http://www.stanford.edu/~ahazeghi/Photos/NEF/NX2/5.jpg

Roger Clark
08-01-2009, 05:06 PM
Arash,

While it is hard to tell from small screen grabs, it appears to me from the histograms that the red channel is the most saturated, blue probably not at all, and green not at all or only slightly. Using exposure compensation reduces the linear intensity (the sensor is linear). This places the highlights on a less compressed portion of the tone curve (this is the gamma-encoded tone curve that gets applied to images during conversion). But that doesn't change the fact that one channel is saturated. Use of highlight recovery will estimate the saturated channel from two unsaturated channels, or if two channels are saturated and one not, will make the saturated channels equal to the unsaturated channel so you at least have some tonality even though it is gray. So try the recovery tool and see if that results in more detail in the feathers.
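A minimal sketch, in Python/NumPy, of the channel-reconstruction idea described above. This is not Adobe's actual algorithm, and the global channel ratios are an illustrative assumption (a real converter would work on local neighborhoods):

import numpy as np

def recover_one_clipped_channel(img, clip=1.0):
    # Toy illustration of the recovery idea described above (not Adobe's
    # actual algorithm): where only the red channel is clipped, estimate it
    # from green and blue using channel ratios taken from unclipped pixels.
    # 'img' is a float RGB array scaled so that 'clip' is the saturation level.
    out = img.astype(np.float64)
    r, g, b = out[..., 0], out[..., 1], out[..., 2]

    clipped_r = (r >= clip) & (g < clip) & (b < clip)
    unclipped = (r < clip) & (g < clip) & (b < clip)
    if clipped_r.any() and unclipped.any():
        k_rg = np.median(r[unclipped] / np.maximum(g[unclipped], 1e-6))
        k_rb = np.median(r[unclipped] / np.maximum(b[unclipped], 1e-6))
        r[clipped_r] = 0.5 * (k_rg * g[clipped_r] + k_rb * b[clipped_r])

    # If two channels are clipped, the best one can do is set them equal to
    # the unclipped one: some tonality is preserved, but it comes out gray.
    clipped_rg = (r >= clip) & (g >= clip) & (b < clip)
    r[clipped_rg] = b[clipped_rg]
    g[clipped_rg] = b[clipped_rg]
    return out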

Roger

arash_hazeghi
08-01-2009, 05:55 PM
Hi Roger,

Good eyes! If the red pixel was saturated, negative EC will just scale it to a smaller value but does not give the real R value with respect to G and B, so a false color will start to appear. In the case of two channels saturated, as you say, renormalizing to the unsaturated channel will give you some gray tonality, which I have found in some cases actually looks worse than the original blown highlight, because it draws too much attention!:D

I assume you are talking about the "highlight recovery" slider in LR? NX2 doesn't actually have such a feature; it has what it calls highlight protection, which applies very subtle changes to the luminosity curve, and I have found it quite useless.
Now in LR, you are saying that in the case of only one channel blown, it interpolates the numeric values of the two other channels to provide a better "estimate" of what was originally there. I guess this works well for relatively neutral tones. BTW, is this something that is written in Adobe documentation or something that you have observed?
When I push the settings in LR I get all sorts of nasty color artifacts, especially with NEF files, so I try to stay away from it. See, for example, this artifact that I posted on the Adobe forum; they have yet to fix this issue :D

http://forums.adobe.com/message/2134973#2134973

Best,
Arash

Roger Clark
08-01-2009, 10:16 PM
Hi Roger,

Good eyes!

Arash,
How ironic--if you knew my eyes.;)




If the red pixel was saturated, negative EC will just scale it to a smaller value but does not give the real R value with respect to G and B, so a false color will start to appear. In the case of two channels saturated, as you say, renormalizing to the unsaturated channel will give you some gray tonality, which I have found in some cases actually looks worse than the original blown highlight, because it draws too much attention!:D

I assume you are talking about the "highlight recovery" slider in LR? NX2 doesn't actually have such a feature; it has what it calls highlight protection, which applies very subtle changes to the luminosity curve, and I have found it quite useless.
Now in LR, you are saying that in the case of only one channel blown, it interpolates the numeric values of the two other channels to provide a better "estimate" of what was originally there. I guess this works well for relatively neutral tones. BTW, is this something that is written in Adobe documentation or something that you have observed?


I do not have Lightroom; I use Photoshop CS3/CS4 and ACR. I assumed that LR would do the same thing as Photoshop. Scientifically, it is about the only thing one could do. One could, for example, in areas that are saturated, move away from the saturated area, examine the color, and when recovering the saturated color, extrapolate that color into the saturated zone. But PS seems to just linearly interpolate from the unsaturated channels: simple and quick. Many of Adobe's tools use 15-bit integer math with additions to approximate multiplies. This causes strange results when pushed far. CS4 is supposedly fixing some of these, and it does seem better, but it is still not up to what scientific 32-bit floating-point image processors can do. So it doesn't surprise me to hear you get strange results when pushing some things.
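For readers curious what "integer math with additions to approximate multiplies" can look like in practice, here is a hedged sketch in Python of a generic fixed-point gain step on 15-bit data. It illustrates the general technique only, not Adobe's actual code, and the constants are made up:

def scale_15bit(value, gain_num, shift):
    # Multiply a 15-bit sample (0..32767) by gain_num / 2**shift using only
    # integer operations, as fast fixed-point pipelines do. Each such step
    # rounds slightly, and the error grows when many steps are chained.
    result = (value * gain_num + (1 << (shift - 1))) >> shift
    return min(result, 32767)

# A gain of ~1.3 approximated coarsely as 1331/1024:
print(scale_15bit(20000, 1331, 10))   # 25996 in fixed point
print(round(20000 * 1.3))             # 26000 with exact arithmetic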

Roger

arash_hazeghi
08-01-2009, 10:53 PM
Arash,
How ironic--if you knew my eyes.;)

Many of Adobe's tools use 15-bit integer math with additions to approximate multiplies. This causes strange results when pushed far. CS4 is supposedly fixing some of these, and it does seem better, but it is still not up to what scientific 32-bit floating-point image processors can do. So it doesn't surprise me to hear you get strange results when pushing some things.

Roger


Interesting. How does it handle 16-bit TIFFs then?
Some of the artifacts are also from how Adobe interprets the white balance. Unfortunately, Nikon uses proprietary encoding for white balance and third-party software has a hard time getting the colors right. That said, MkII RAWs sometimes have weird artifacts with ACR as well:confused:.

Ray Rozema
08-01-2009, 11:59 PM
Hi Arash

Thanks for that detailed answer and for all the work you did to put that together. I will give it a try. And to the others, thanks for the comments and advice also. It is such a great help.

Ray:)

Roger Clark
08-02-2009, 01:17 AM
Interesting. How does it handle 16-bit TIFFs then?


16-bit TIFFs are defined to be unsigned integers, so 0 to 65,535. Photoshop uses signed integers, so they lose one bit, 0 to 32,767, and they ignore negative values. I think it was a decision made long ago for speed, but it has implications. I've also seen Photoshop change the data. For example, read a TIFF file, and immediately write the file with a new name, with no changes. The contents of the two files (the image data) are different. They shouldn't be.
When Photoshop reads 16-bit TIFF data, it rescales 0 to 65535 to 0 to 32767, then when the file is output, it is scaled back to 16 bits. But I've seen changes of more than 1 bit when it writes the file.
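A small sketch of the round trip described above (the exact scaling Photoshop uses internally is an assumption here; this just shows what squeezing 16-bit data into 15 bits and back does to the levels):

import numpy as np

x16 = np.arange(0, 65536, dtype=np.uint32)       # every possible 16-bit value

x15 = (x16 * 32767 + 32767) // 65535             # 16-bit -> 15-bit (0..32767)
back = (x15 * 65535 + 16383) // 32767            # 15-bit -> back to 16-bit

err = back.astype(np.int64) - x16.astype(np.int64)
print(int(err.min()), int(err.max()))            # round-trip error of about +/- 1 count
print(int(np.unique(x15).size))                  # only 32768 distinct levels survive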

Roger

arash_hazeghi
08-02-2009, 04:47 AM
16-bit TIFFs are defined to be unsigned integers, so 0 to 65,535. Photoshop uses signed integers, so they lose one bit, 0 to 32,767, and they ignore negative values. I think it was a decision made long ago for speed, but it has implications. I've also seen Photoshop change the data. For example, read a TIFF file, and immediately write the file with a new name, with no changes. The contents of the two files (the image data) are different. They shouldn't be.
When Photoshop reads 16-bit TIFF data, it rescales 0 to 65535 to 0 to 32767, then when the file is output, it is scaled back to 16 bits. But I've seen changes of more than 1 bit when it writes the file.

Roger

But if you convert a 16-bit integer to 15 bits you are throwing half of the data away, so if you rescale back to 16 bits the histogram will look like a comb; at best you can interpolate the missing bits, but you are still losing half of the data! Also, from a speed point of view, we never had 15-bit registers. A long time ago CPUs were 32-bit, and since the early 2000s CPUs have been 64-bit (albeit with a 32-bit OS), so a 15-bit calculation and a 32-bit calculation take an identical number of clocks. This is a very strange choice:eek: I looked at the CS4 (for Windows) specs and at least it claims to be 64-bit http://en.wikipedia.org/wiki/Adobe_Photoshop

Best,
Arash

Roger Clark
08-02-2009, 09:52 AM
But if you convert a 16-bit integer to 15 bits you are throwing half of the data away, so if you rescale back to 16 bits the histogram will look like a comb; at best you can interpolate the missing bits, but you are still losing half of the data! Also, from a speed point of view, we never had 15-bit registers. A long time ago CPUs were 32-bit, and since the early 2000s CPUs have been 64-bit (albeit with a 32-bit OS), so a 15-bit calculation and a 32-bit calculation take an identical number of clocks. This is a very strange choice:eek: I looked at the CS4 (for Windows) specs and at least it claims to be 64-bit http://en.wikipedia.org/wiki/Adobe_Photoshop

Best,
Arash

Arash,
15-bit: yes, you lose half the data precision. But you would not notice that in a histogram. A histogram is binned, typically into 128 or 256 levels. You wouldn't see the effect of losing one bit on a 16-bit image. Now, on an 8-bit image, losing a bit would be noticeable on a 256-bin histogram.

The 15-bit mode uses a math trick (I saw a description by a mathematician) to speed things up. There is still a speed advantage to doing adds and subtracts versus multiplies and divides on every CPU, I believe. And whether or not the software is 64-bit, it can still use 8-bit and 16-bit integers and the associated integer math. The 64-bit part is only needed for addressing large images greater than 4 GBytes in size.

And for photography, 15 bits is actually enough for any application. 15 bits is a linear dynamic range of 15 stops, and a gamma-encoded range which could be orders of magnitude larger. No camera comes even close to the signal-to-noise ratio (S/N) of 15 bits, and none even have that linear dynamic range (although that will hopefully change soon). The S/N of a digital camera image is the square root of the number of photons captured. Currently, cameras with large pixels, like the Canon 5D (Mark I) and 1D Mark II, capture at most about 80,000 photons in a pixel, which gives S/N = 283. So 15 bits is fine enough precision, and since digital camera data is at most 14-bit, with all the photon and read noise you could never tell a 15- versus 16-bit difference.
See my digital sensor performance summary web page:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html
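As a quick check of the arithmetic above (a simple sketch; the 80,000-photon figure is the one quoted in the post):

import math

full_well_photons = 80_000            # large-pixel DSLR figure quoted above
snr = math.sqrt(full_well_photons)    # photon shot noise: S/N = sqrt(N)
print(round(snr))                     # -> 283

# 15-bit precision already provides 2**15 = 32768 levels, vastly finer than a
# signal whose best-case S/N is ~283, so a 16th bit buys nothing visible.
print(2 ** 15)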

HDR can push the linear dynamic range and I believe all HDR programs convert 15/16-bit image data to at least 32-bit numbers (some floating point).

Roger

arash_hazeghi
08-02-2009, 02:11 PM
Arash,
15-bit: yes, you lose half the data precision. But you would not notice that in a histogram. A histogram is binned, typically into 128 or 256 levels. You wouldn't see the effect of losing one bit on a 16-bit image. Now, on an 8-bit image, losing a bit would be noticeable on a 256-bin histogram.

The 15-bit mode uses a math trick (I saw a description by a mathematician) to speed things up. There is still a speed advantage to doing adds and subtracts versus multiplies and divides on every CPU, I believe. And whether or not the software is 64-bit, it can still use 8-bit and 16-bit integers and the associated integer math. The 64-bit part is only needed for addressing large images greater than 4 GBytes in size.

And for photography, 15 bits is actually enough for any application. 15 bits is a linear dynamic range of 15 stops, and a gamma-encoded range which could be orders of magnitude larger. No camera comes even close to the signal-to-noise ratio (S/N) of 15 bits, and none even have that linear dynamic range (although that will hopefully change soon). The S/N of a digital camera image is the square root of the number of photons captured. Currently, cameras with large pixels, like the Canon 5D (Mark I) and 1D Mark II, capture at most about 80,000 photons in a pixel, which gives S/N = 283. So 15 bits is fine enough precision, and since digital camera data is at most 14-bit, with all the photon and read noise you could never tell a 15- versus 16-bit difference.
See my digital sensor performance summary web page:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html

HDR can push the linear dynamic range and I believe all HDR programs convert 15/16-bit image data to at least 32-bit numbers (some floating point).

Roger

Hi Roger,

I agree that current sensors in DSLRs that are limited by shot noise have 14 bits of DR at most, based on their full well capacity (there is some electronic noise from the row/column sense amplifiers as well). There are some state-of-the-art sensors in the industry that have 20 bits of DR:eek: but they are not used for commercial purposes ;).

BTW, some sensors also use oversampling of the ADC to reduce noise (longer averaging) and overcome the limit above. A good example is the Nikon D300, which has a 12-bit ADC but can generate a 14-bit file from four readouts; the trade-off is slower operation, of course. You can also oversample a 14-bit ADC to generate real 16-bit data. I believe this is the way MF backs work (they all have 16-bit files, and they are all slow :D).

What I don't understand is the benefit of using 15 bits vs 16 bits in PS. Based on VLSI basics, if you have a 32-bit CPU, running 15-bit code and addresses should not provide any benefit over 16 bits. In each clock operation you push 32-bit data into the register and adder units of the CPU (this is hardwired); if you use 15- or even 16-bit data you are just not using the extra bits. So I'd like to learn about the details of how and why 15 bits are used. I googled PS and 15 bits but I didn't find anything technical except for a few forum comments, so if you have some references I'll be happy to look at them :)

Best,
Arash

Roger Clark
08-02-2009, 04:58 PM
Hi Roger,

I agree that current sensors in DSLRs that are limited by shot noise have 14 bits of DR at most, based on their full well capacity (there is some electronic noise from the row/column sense amplifiers as well). There are some state-of-the-art sensors in the industry that have 20 bits of DR:eek: but they are not used for commercial purposes ;).

Arash,
Hmmm... 20 bits is a million-to-one dynamic range. That requires one million squared (1 trillion) photons to be captured if the read noise was only 1 electron (doubtful). Likely several trillion photons would be required. With a typical electron density of 1,000 electrons per square micron, you would need a 32 mm square pixel to collect that many photons. Perhaps you mean that there are systems with 20-bit A/D converters; that is far different than actual 20 bits of real dynamic range.



BTW, some of the sensors also use oversampling of the ADC to reduce noise (longer averaging) and overcome the limit above, a good example is Nikon D300 which has a 12Bit ADC but it can generate a 14Bit file by four readouts, the trade off is slower operation of course. You can also oversample a 14Bit ADC to generate real 16Bit data, I believe this is the way MF backs work (they all have 16Bit files, and they are all slow :D)

Just because you can oversample/multisample a system does not necessarily mean improved dynamic range.
The Nikon D300, for example, has a full well capacity of 42,000 electrons and a read noise of 4.6 electrons, so the dynamic range, if not limited by the A/D, would be 13.1 stops. Many systems generate high bit counts, but I've yet to see any consumer (pro or amateur) digital camera actually deliver more than 12 bits of real dynamic range at any ISO.
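The dynamic range figure above follows directly from the two numbers quoted; a one-line check:

import math

full_well = 42_000    # electrons, D300 figure quoted above
read_noise = 4.6      # electrons

print(round(math.log2(full_well / read_noise), 1))   # -> 13.1 stops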



What I don't understand is the benefit of using 15Bits vs 16Bits in PS, based on VLSI basics if you have a 32Bit CPU, running 15Bit code and address should not provide any benefit over 16Bits. In each clock operation you push 32 Bit data in the register and adder units in the CPU (this is hardwired), if you use 15 or even 16bit data you are just not using the extra bits. So I'd like to learn about the details of how and why 15Bit is used, I googled PS and 15Bits but I didn't find anything technical except for a few forum comments, so if you have some references I'll be happy to look at :)

Best,
Arash

I agree with you on the 15-bit thing. It is something left over from "the olden days" that needs to be done away with, like many other legacy things in the computer world.

Roger

arash_hazeghi
08-02-2009, 08:19 PM
Arash,
Hmmm... 20 bits is a million to one dynamic range. That requires one million squared (1 trillion) photons to be captured if the read noise was only 1 electron (doubtful). Likely several trillion photons would be required. With a typical electron density of 1,000 electrons per square micron, you would need a 32 mm square pixel to collect that many photons. Perhaps you mean that there are systems with 20-bit A/D converters. that is far different than actual 20-bits of real dynamic range.


Roger,

First, the density of states in a bulk semiconductor is far more than 1,000 electrons per sq. um. Secondly, these are specially engineered cascade quantum wells (not the typical pixels you have in your DSLR) that also have amplification (each electron-hole pair generated by a photon generates multiple carrier pairs, so you need far fewer photons). The cell size is huge, but not as big as you think; the total sensor area in some cases is about a full 8" wafer! Such sensors exist and are in use right now!

The less exotic types of these sensors are used in the automotive industry. Here is an example of a 256x256-pixel, 20-bit grayscale sensor with 120 dB DR and 56 dB SNR which has been fabricated with a standard CMOS process. These sensors are very common; I am surprised you haven't heard of them.


A High-Dynamic-Range CMOS Image Sensor for Automotive Applications,
IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 35, NO. 7, JULY 2000






Just because you can oversample/multisample a system does not necessarily mean improved dynamic range.
The Nikon D300, for example, has a full well capacity of 42,000 electrons and a read noise of 4.6 electrons, so the dynamic range, if not limited by the A/D, would be 13.1 stops. Many systems generate high bit counts, but I've yet to see any consumer (pro or amateur) digital camera actually deliver more than 12 bits of real dynamic range at any ISO.


You can always reduce noise by longer averaging: the full well capacity remains the same, but the noise is reduced by multiple readouts (averaging), so you get a higher DR. This is very common practice in SEM imaging. You can actually test this very easily with the Nikon D300, as shown here. It is not really 14-bit, because the original 12-bit is not real 12-bit either:D but it certainly improves things!
http://www.earthboundlight.com/phototips/nikon-d300-d3-14-bit-versus-12-bit.html

In this case, the Nikon D300 actually performs better than if it had a hardware 14-bit ADC!
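A rough sketch of why averaging multiple readouts lowers the noise floor (the four-readout behavior is the claim made above about the D300; this snippet only illustrates the sqrt(N) averaging effect under an uncorrelated-noise assumption):

import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # arbitrary signal level, in electrons
read_noise = 4.6      # per-readout noise, in electrons

one = signal + rng.normal(0, read_noise, 100_000)                     # single readout
four = signal + rng.normal(0, read_noise, (4, 100_000)).mean(axis=0)  # average of four

print(round(one.std(), 2))    # ~4.6
print(round(four.std(), 2))   # ~2.3, i.e. noise reduced by sqrt(4)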


Best,
Arash

Roger Clark
08-03-2009, 12:59 AM
Roger,

First, the density of states in a bulk semiconductor is far more than 1,000 electrons per sq. um,

In an imaging sensor, the photons are absorbed in a short distance, ranging from (1/e depth) about 0.4 microns in the blue to about 8 microns in the deep red. The absorption properties set some basic parameters for electronic sensors. I have seen electron densities range from under 1000 to perhaps 2000 electrons/square micron, but not higher in a quality imaging array. Perhaps in other applications.



Secondly, these are specially engineered cascade quantum wells (not the typical pixels you have in your DSLR) that also have amplification (each electron-hole pair generated by a photon generates multiple carrier pairs, so you need far fewer photons). The cell size is huge, but not as big as you think; the total sensor area in some cases is about a full 8" wafer! Such sensors exist and are in use right now!

Multiplying electrons does not change the noise characteristics. What counts, no pun intended, is the photons counted, not the electrons generated. It is the random arrival time of photons that is the source of photon noise in images. Multiplying electrons generated also amplifies the noise, so at best you maintain your signal to noise ratio, but not improve it. The main reason for electron multiplying sensors is to boost the signal up so that read noise is negligible. This started with CCDs when read noise was on the order of 15 electrons with cooled sensors. We now have ambient DSLR cameras with 2.5 electron read noise. The need for electron multiplication is diminishing.



The less exotic types of these sensors are used in the automotive industry. Here is an example of a 256x256-pixel, 20-bit grayscale sensor with 120 dB DR and 56 dB SNR which has been fabricated with a standard CMOS process. These sensors are very common; I am surprised you haven't heard of them.


A High-Dynamic-Range CMOS Image Sensor for Automotive Applications,
IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 35, NO. 7, JULY 2000

Yes, I have heard of them. Here are some more recent references, with links to full PDF papers:

http://yard.ee.nthu.edu.tw/~syhuang/paper.ps/HDR.sensor.pdf

http://www.robots.ox.ac.uk/~mcad/publications/bhasteph/sb_imtc04.pdf

http://oatao.univ-toulouse.fr/311/1/MartinGonthier_311.pdf

There are a couple of methods. One is a logarithmic response at the pixel level, with the problem of poor signal-to-noise ratio and fixed pattern noise. Two, multiple integration times (we do that now with HDR imaging) but on-chip, with the problem of image movement.
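For anyone wondering what the multiple-integration-time approach amounts to in practice, here is a hedged toy sketch (a made-up 12-bit clip level and a 16x exposure ratio, not any particular sensor's on-chip scheme): take the long exposure where it has not clipped and substitute the rescaled short exposure where it has.

import numpy as np

def merge_two_exposures(short, long, ratio, clip=4095):
    # Toy two-integration-time merge: long exposure for clean shadows,
    # rescaled short exposure wherever the long exposure has clipped.
    merged = long.astype(np.float64)
    blown = long >= clip
    merged[blown] = short[blown].astype(np.float64) * ratio
    return merged

rng = np.random.default_rng(1)
scene = rng.uniform(0, 60_000, size=(4, 4))                  # "true" linear radiance
long_exp = np.clip(scene, 0, 4095).astype(np.uint16)         # highlights clip
short_exp = np.clip(scene / 16, 0, 4095).astype(np.uint16)   # 1/16 integration time
print(merge_two_exposures(short_exp, long_exp, ratio=16))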




You can always reduce noise by longer averaging, the full well capacity remains the same but the noise is reduced by multiple readouts (averaging) so you get a higher DR, this is very common practice in SEM imaging. You can actually test this very easily with Nikon D300, as shown here, it is not really 14Bit because the original 12Bit is not real 12Bit either:D but it certainly improves things!
http://www.earthboundlight.com/phototips/nikon-d300-d3-14-bit-versus-12-bit.html

In this case, actually Nikon D300 performs better than if it actually had a hardware 14Bit ADC!


I don't see on this page where one exposure is read out multiple times to reduce noise. Sure, you can stack images from multiple exposures and average them together to reduce noise. That is standard practice in astrophotography. Here is a page that shows sub-one-photon-per-pixel-per-exposure detection with a DSLR:

http://www.clarkvision.com/photoinfo/night.and.low.light.photography

(in fact down to 0.1 photon per pixel per frame).

Roger

arash_hazeghi
08-03-2009, 02:15 AM
In an imaging sensor, the photons are absorbed in a short distance, ranging from (1/e depth) about 0.4 microns in the blue to about 8 microns in the deep red. The absorption properties set some basic parameters for electronic sensors. I have seen electron densities range from under 1000 to perhaps 2000 electrons/square micron, but not higher in a quality imaging array. Perhaps in other applications.



Roger,


This is getting too technical, but first, the absorption you are talking about is for bulk silicon material; it doesn't apply to quantum wells, nor does it apply to other compound semiconductors such as III-Vs. Also, I believe the density you are talking about is the density of charges gathered in a typical sensor, which is not what I was talking about. The density of electrons in a semiconductor material is proportional to the number of atoms (atomic packing density) and the details of the electronic band structure. For example, for Si, Nc = 3E+19 /cm^3, which is 3E+7 /um^3; in theory you can pump all of these carriers optically if there is enough energy. The typical well densities you have in mind are relatively low because of the low quantum efficiency of commercial CMOS image sensors; you can make sensors, for example, with CdHgTe QWs that have extremely high QE for a given wavelength. The world of image sensors is much broader than digital cameras; the highest performance sensors are made for space and military applications.



Multiplying electrons does not change the noise characteristics. What counts, no pun intended, is the photons counted, not the electrons generated. It is the random arrival time of photons that is the source of photon noise in images. Multiplying electrons generated also amplifies the noise, so at best you maintain your signal to noise ratio, but not improve it. The main reason for electron multiplying sensors is to boost the signal up so that read noise is negligible. This started with CCDs when read noise was on the order of 15 electrons with cooled sensors. We now have ambient DSLR cameras with 2.5 electron read noise. The need for electron multiplication is diminishing.


Of course. Enough photons are gathered in these devices that photon shot noise is not a limiting factor; there is no limit on the intensity of light you can put on a cell (well, unless you melt it), and the cell area is large, so this is not an issue. In addition to photon shot noise there is electronic shot noise, or dark shot noise; this is dealt with via the amplification and averaging techniques which I mentioned above. The fact is that sensors with 20-bit dynamic range and more exist and are in use, so it is perfectly feasible. Also, do you have a reference for how that 2.5e read noise has been measured?



Yes, I have heard of them. Here are some more recent references, and to full pdf papers:

http://yard.ee.nthu.edu.tw/~syhuang/paper.ps/HDR.sensor.pdf

http://www.robots.ox.ac.uk/~mcad/publications/bhasteph/sb_imtc04.pdf

http://oatao.univ-toulouse.fr/311/1/MartinGonthier_311.pdf

There are a couple of methods, one is logarithmic response at the pixel level with the problem of poor signal to noise ratio and fixed pattern noise. Two, multiple integration times (we do that now with HDR imaging) but on chip with the problem of image movement.
I don't see on this page where one exposure is read out multiple times to reduce noise. Sure you can stack images from multiple exposures and average them together to reduce noise. That is standard practice in astrophotography. Here is a page that shows sub one photon per exposure detection with a DSLR:

Sorry, I thought you doubted the existence of these sensors from your previous post, my bad.
When you set your D300 to 14-bit mode, it oversamples the ADC 4 times (thus slowing down to 2.5 fps), so the resulting image has lower shadow noise; look at the examples in the link, which clearly show this point.
http://www.earthboundlight.com/images/phototips/12-14bit-shadows.jpg

Since you have a good setup, you can quickly verify this by grabbing a D300 and doing a quick 12-bit NEF vs. 14-bit NEF test (make sure you shoot in uncompressed mode).

BTW, your astro images are way cool :) and your website is very good as well.


Hope this helps,

Arash

Roger Clark
08-03-2009, 09:01 AM
Roger,
This is getting too technical, but first, the absorption you are talking about is for bulk silicon material; it doesn't apply to quantum wells, nor does it apply to other compound semiconductors such as III-Vs. Also, I believe the density you are talking about is the density of charges gathered in a typical sensor, which is not what I was talking about. The density of electrons in a semiconductor material is proportional to the number of atoms (atomic packing density) and the details of the electronic band structure. For example, for Si, Nc = 3E+19 /cm^3, which is 3E+7 /um^3; in theory you can pump all of these carriers optically if there is enough energy.

Arash
I agree that the bulk density of electrons in semiconductors is very high. But that is not what we are talking about; what we are talking about is photoelectrons generated by photons.



The typical well densities you have in mind are relatively low because of low quantum efficiency of commercial CMOS image sensors, you can make sensors for example with, CdHgTe QWs that have extremely high QE for a given wavelength. The world of image sensor is much broader than digital cameras, the highest performance sensors are made for space and military applications.

I'm not aware of evidence that says higher quantum efficiencies lead to higher full wells. For example, we have thinned, back-side-illuminated CCDs in commercial applications with QEs of 95% and they don't have higher full well depths. Thinned, back-side-illuminated CMOS sensors have also been produced with high QE and they do not have higher full wells. And CMOS sensors in digital cameras have 30 to 40% QE.

The absorption lengths in sensor materials are very close to the values I cited above. The small doping does not change the absorption significantly from the values I cited. The full table is Table 1b, and the reference is in:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

The photoelectrons generated in a sensor are confined to stay near where the photons are absorbed, so are confined to the top few microns.

I do work with scientific sensors in labs, on aircraft, and on spacecraft, including monitoring performance and calibration.





Of course, enough photons are gathered in these devices that photon shot noise is not a limiting factor,

Whoa! Photon shot noise is the ultimate limiting factor. One can never do better than that. It is a quantum mechanical limit. All other noise sources add to the photon noise. The challenge in any design is to reduce other noise sources so that one has only photon noise. A photon noise limited system is the ultimate, and all DSLRs I and others have analyzed are photon noise limited except for the bottom few stops where read noise and A/D and other amplifier noise becomes a larger factor in proportion.



there is no limit on the intensity of light you can put on a cell (well unless you melt it) and cell area is large so this is not an issue. In addition to photon shot noise there is electronic shot noise or dark shot noise, this is dealt with via amplification and averaging techniques which I mentioned above. The fact is that sensors with 20bit dynamic range and more exist and are in use, so it is perfectly feasible. Also do you have a reference for how that 2.5e read noise has been measured?

While that is true regarding no limit on the intensity of light if you control the light, in real-world photography photons are finite. For a normally metered scene, a 20% diffuse reflectance spot in the exposure will deliver about 1,300 photons per square micron over the green passband to the focal plane, regardless of f-stop, focal length, or sensor size.
Sunlight is finite.
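A back-of-the-envelope illustration of what that photon budget means for a single pixel (the 1,300 photons per square micron figure is from the post; the pixel size and QE below are assumptions added for illustration):

photons_per_um2 = 1300     # mid-tone exposure figure quoted above (green passband)
pixel_pitch_um = 6.4       # assumed large-pixel DSLR pitch
quantum_eff = 0.35         # assumed QE of a typical CMOS sensor

electrons = photons_per_um2 * pixel_pitch_um ** 2 * quantum_eff
print(round(electrons))    # roughly 19,000 electrons for a mid-tone pixel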





Sorry, I thought you doubted the existence of these sensors from your previous post, my bad.
When you set your D300 to 14-bit mode, it oversamples the ADC 4 times (thus slowing down to 2.5 fps), so the resulting image has lower shadow noise; look at the examples in the link, which clearly show this point.
http://www.earthboundlight.com/images/phototips/12-14bit-shadows.jpg

Since you have a good setup, you can quickly verify this by grabbing a D300 and doing a quick 12-bit NEF vs. 14-bit NEF test (make sure you shoot in uncompressed mode).

This is interesting, but on the web page there was nothing that showed the same image being sampled 4 times. Is there a technical paper that describes that?

Could it be that Nikon is simply using a slower-speed, higher-precision A/D converter? Canon is using fast 14-bit converters and they have not really improved the shadow noise over their 12-bit systems much (Canon claims half a stop). But Canons don't slow down with 14-bit.

I do not have a D300 to test.




BTW, your astro images are way cool :) and your website is very good as well.

Thanks,

I use the same methods for testing cameras as are used in the sensor industry. My methods for cameras are listed here:
http://www.clarkvision.com/imagedetail/evaluation-1d2

Emil Martinec, also a BPN member, uses the same methodology, as does Christian Buil. References to their web sites are in my reference list at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

My 1D2 analysis results are in a table on the above page just above the references.

Roger

arash_hazeghi
08-03-2009, 02:00 PM
Arash
I agree that the bulk density of electrons in semiconductors is very high. But that is not what we are talking about; what we are talking about is photoelectrons generated by photons.



I'm not aware of evidence that says higher quantum efficiencies lead to higher full wells. For example, we have thinned back side illuminated CCDs in commercial applications with QEs of 95% and they don't have higher full wells depths. Thinned, back side illuminated CMOS sensors have also been produced with high QE and they do not have higher full wells. And CMOS sensors in digital cameras have 30 to 40% QE.

The absorption lengths in sensor materials is very close to the values I cited above. The small doping does not change the absorption significantly over the values I cited. The full table is at table 1b and reference is in:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

The photoelectrons generated in a sensor are confined to stay near where the photons are absorbed, so are confined to the top few microns.

I do work with scientific sensors in labs, on aircraft, and on spacecraft, including monitoring performance and calibration.



Hi Roger,



Again, there is no limit on the number of optically generated carriers, and also, with proper engineering, photon-generated EHPs can travel many microns with no recombination. This is my research field, and we fabricate and test some of these sensors here at Stanford. If you have access to the data, check out HgCdTe DIR sensors, for example. BTW, a well in a CMOS sensor is not a quantum well; it refers to the cell capacitance that stores charge (like a trench capacitance in FLASH). A QW is a different species.





Whoa! Photon shot noise is the ultimate limiting factor. One can never do better than that. It is a quantum mechanical limit. All other noise sources add to the photon noise. The challenge in any design is to reduce other noise sources so that one has only photon noise. A photon noise limited system is the ultimate, and all DSLRs I and others have analyzed are photon noise limited except for the bottom few stops where read noise and A/D and other amplifier noise becomes a larger factor in proportion.




You did not understand my comments correctly. I never said you can overcome the fundamental photon shot noise factor; I said that enough photons are gathered that the SNR due to shot noise is greater than the target SNR of the system. As long as the photon shot noise SNR is above the target, it is not the limiting factor. Anyway, I guess after seeing the papers there is no dispute that sensors with more than 20-bit DR exist and are in use; there is data that shows 56 dB DR. Are you still trying to prove it is not possible ;) ?





While that is true regarding no limit on the intensity of light if you control the light, in real-world photography photons are finite. For a normally metered scene, a 20% diffuse reflectance spot in the exposure will deliver about 1,300 photons per square micron over the green passband to the focal plane, regardless of f-stop, focal length, or sensor size.
Sunlight is finite.

This is interesting, but on the web page there was nothing that showed the same image being sampled 4 times. Is there a technical paper that describes that?

Could it be that Nikon is simply using a slower speed, higher precision A/D converter? Canon is using fast 14-bit converters and they have not really improved the shadow noise over their 12-bit systems much (canon claims half a stop). But Canon's don't slow down with 14-bit.

I do not have a D300 to test.


The Nikon D300 has a 12-megapixel Sony EXMOR CMOS sensor with a 12-bit on-chip column-parallel ADC (look at the datasheet); 14-bit is achieved by oversampling of the 12-bit ADC (that's why the frame rate is reduced when you set it to 14-bit mode). Canon cameras have native 14-bit ADCs.





Thanks,

I use the same methods for testing cameras as is used in the sensor industry. My methods for cameras is listed here:
http://www.clarkvision.com/imagedetail/evaluation-1d2

Emil Martinec, also a BPN member, uses the same methodology, as does Christian Buil. References to their web sites are in my reference list at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

My 1D2 analysis results are in a table on the above page just above the references.

Roger


I gather you use RAW files to measure these properties. This method is good but not scientifically reliable, as cameras "cook" the RAW files; Nikons, for example, subtract hot pixels and do many other things to the RAW files. In the lab we hook the sensor analog outputs up to a spectrum analyzer, use a monochromatic light source, and measure the sensor current, as done in technical papers. I understand this may not be possible for DSLR sensors, but I would be very cautious about the numbers I report.

Best,
Arash

arash_hazeghi
08-03-2009, 02:23 PM
Roger,
If you are interested, here is also a set of PDF slides that describes circuit details for HDR sensors. It is from an invited talk given at ISSCC 2002 by one of the faculty here, who is one of the biggest names in the image sensor field around the world. I hope it is accessible from where you are.

http://isl.stanford.edu/~abbas/group/papers_and_pub/isscc02_tutorial.pdf

Since these are unrelated to this forum, and I bet the majority of BPN members have no interest in this, if you would like to discuss more, please email me.

Best,
Arash

Roger Clark
08-03-2009, 10:26 PM
Hi Roger,

Again, there is no limit on the number of optically generated carriers, and also, with proper engineering, photon-generated EHPs can travel many microns with no recombination. This is my research field, and we fabricate and test some of these sensors here at Stanford. If you have access to the data, check out HgCdTe DIR sensors, for example. BTW, a well in a CMOS sensor is not a quantum well; it refers to the cell capacitance that stores charge (like a trench capacitance in FLASH). A QW is a different species.

Arash,
I'm starting this at the top level as the thread depth was getting too great.

Interesting about the DIR sensors. Like this:
http://www.minatec-crossroads.com/pdf-AR/Mottin.pdf
I was not aware of the DIR. Looks interesting for a future application. We use HgCdTe, for example, in the Moon Mineralogy Mapper on Chandrayaan-1 orbiting the moon.

When you say photoelectrons can travel many microns, how many? And can it be made to go deep, truly making larger storage than we are seeing now in CCDs and CMOS? I understand that CMOS is different. If so, are there any sensors with such deep wells available commercially? I have not seen any.



You did not understand my comments correctly. I never said you can overcome the fundamental photon shot noise factor; I said that enough photons are gathered that the SNR due to shot noise is greater than the target SNR of the system. As long as the photon shot noise SNR is above the target, it is not the limiting factor. Anyway, I guess after seeing the papers there is no dispute that sensors with more than 20-bit DR exist and are in use; there is data that shows 56 dB DR. Are you still trying to prove it is not possible ;) ?

Sorry, I misunderstood you. I haven't gone back to look at the details, but I also made an error in the size of the pixels needed to achieve 20-bit DR. I computed 20-bit SNR. If the read noise were only one electron, then one would need to collect only 2^20 = a hair over one million photons, not a million squared. At an electron density of 1000 electrons per square micron, one only needs on the order of a 33-micron-square pixel, and at a few electrons of read noise a correspondingly larger area. So I agree it is feasible to do a linear 20-bit DR sensor.
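The corrected arithmetic in the paragraph above, written out (all figures are the ones quoted there):

import math

read_noise = 1                         # electrons, best case assumed above
photons_needed = 2 ** 20 * read_noise  # ~1.05 million photons for 20-bit DR
electrons_per_um2 = 1000               # electron density quoted above

area_um2 = photons_needed / electrons_per_um2
print(round(math.sqrt(area_um2)))      # -> 32, i.e. roughly a 33-micron-square pixel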

The references to the high DR sensors seem to be several years old. It seems that interest in those avenues has decreased (the Fuji Super CCD uses one of those principles, a small, less sensitive pixel).

But in the interim, the quality of "standard" sensors continues to improve regarding read noise and fixed pattern noise, and so it seems the need for such high dynamic range is less.

Current DSLRs and P&S cameras are photon noise limited over most of their dynamic range.



The Nikon D300 has a 12-megapixel Sony EXMOR CMOS sensor with a 12-bit on-chip column-parallel ADC (look at the datasheet); 14-bit is achieved by oversampling of the 12-bit ADC (that's why the frame rate is reduced when you set it to 14-bit mode). Canon cameras have native 14-bit ADCs.

Interesting. I have not been able to get some Sony sensor data sheets online, only some of their small sensors. Do you know of an online source?




I gather you use RAW files to measure these properties. This method is good but not scientifically reliable, as cameras "cook" the RAW files; Nikons, for example, subtract hot pixels and do many other things to the RAW files. In the lab we hook the sensor analog outputs up to a spectrum analyzer, use a monochromatic light source, and measure the sensor current, as done in technical papers. I understand this may not be possible for DSLR sensors, but I would be very cautious about the numbers I report.


Yes, we use the raw data, for example from DCRAW. Those of us doing this are aware of some of the issues with raw files. Nikon on some models, for example, encoded the data into something like 667 levels to aid in compression. Even so, the encoding has been good enough to give good results consistent with the data sheets for commercial sensors. We are interested in camera performance, and we are not interested in dissecting a camera to get at the fundamentals, although it would be nice to have the time and money to do it right. The sensor data from the cameras is also showing beautifully linear responses, again consistent with commercial sensor data. I'm pretty confident we are getting good numbers. Consistency is the key. Look at Figure 2 at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary
and note how the Canon 5D Mark II, 40D, 50D, and Nikon D300 plot nicely along the trend line and older models (partly due to lower fill factors) plot below the trend line.

Roger

Roger Clark
08-03-2009, 10:39 PM
Arash,

I got the PDF. Interesting read. You still need a minimum of one million photons to get 20-bit DR. The strategies seem to be ways to decrease the high end, like absorbing some of the incoming photons so you don't have to count them all, or using short integration times so you don't have to count them all. But it still boils down to needing a million photons times the read noise delivered to the focal plane per picture element to get the 20-bit DR.

Roger

Roger Clark
08-03-2009, 10:44 PM
Arash,

We've probably beat this one to death, and the recent discussion has not been relevant to the original question of highlight recovery in current cameras, so we should wind this down. I think we are coming into agreement. But please answer my last couple of questions. If we want to continue, we should do it off-list, or start a new thread in the gear forum as a future-of-cameras discussion.

Roger

arash_hazeghi
08-04-2009, 12:48 AM
Arash,
I'm starting this at the top level as the thread depth was getting too great.

Interesting about the DIR sensors. Like this:
http://www.minatec-crossroads.com/pdf-AR/Mottin.pdf
I was not aware of the DIR. Looks interesting for a future application. We use HgCdTe, for example, in the Moon Mineralogy Mapper on Chandrayaan-1 orbiting the moon.

When you say photo electrons can travel many microns, how many? And can it be made to go deep, truly making larger storage than we are seeing now in CCDs and CMOS. I understand that CMOS is different. If so, are there
any sensors with such deep wells available commercially? I have not seen any.

Hi Roger,

Yes, the main application is space (which I am sure you know better than I do; it has to do with certain wavelengths that distant stars/nebulae(?) emit). The interesting idea about the DIR sensor is that you can engineer the band structure of the QW so it will have a narrow absorption at a selected wavelength. A second application is military, for IR imaging aboard aircraft. You can also modulate the absorption with an electric field (quantum Stark effect), so you can make modulators and such. I am not aware whether these sensors are commercially available, but the auto industry is the likely place.

I have seen diffusion lengths of about ~20 um (I can't remember if it was at room or low temperature, though). The state-of-the-art solar cells (for example the multi-junction cells that are used for spacecraft, with ~40% efficiency) are engineered to have very long carrier diffusion lengths by lowering the doping and engineering the band structure. But the numbers you have, 1-2 um, are reasonable for a typical CMOS sensor.



Sorry I misunderstood you. I haven't gone back to see the details, but I also made an error in the size of the pixels needed to achieve 20 bit DR. I computed 20 bit SNR. If the read noise were only one electron, then the one would need to collect only 2^20 = a hair over one million photons, not a million squared. At an electron density of 1000 electrons per square micron, one only needs on the order of a 33 micron square pixel, and at a few electrons read noise correspondingly larger area. So I agree it is feasible to do a linear 20 bit DR sensor.


No worries I figured it must have been a typo.


The references to the high DR sensors seem to be several years old. It seems that interest in those avenues have decreased (the Fuji super CCD uses one of those principles of a small less sensitive pixel).

But in the interim the quality of "standard" sensors continues to improve regarding read noise, lower fixed pattern noise, and so it seems the need for such high dynamic range is less.

Current DSLRs and P&S cameras are photon noise limited over most of their dynamic range.


I guess the main reason these sensors did not find their way into commercial photography was that the market was not large enough, and also, as you can imagine, they have higher fab cost and other limitations. Fuji's idea was to use two photodiodes per cell, but there was an area penalty and they could not scale beyond 6 Mpixels in a reasonable time frame due to lack of demand. I agree current DSLRs are pretty good, especially the full-frame ones like the D700 and MkII; enough for me:D


Interesting. I have not been able to get some Sony sensor data sheets online, only some of their small sensors. Do you know of an online source?



There was a paper at the IEDM conference back in 2006 (Dec 2006, San Francisco) where they talked about the new column-parallel ADC design. It had all the details: the ADCs were 12-bit and at the end of each row/column, and they hoped that by shortening the readout path they could reduce noise and crosstalk, so the ADCs were hardwired in the CMOS chip. I will try to find the paper; you can also search for the words "Sony 12 mpixel column-parallel-ADC" to see if Google can find it.



Yes, we use the raw data, for example from DCRAW. Those of us doing this are aware of some of the issues with raw files. Nikon on some models, for example, encoded the data into something like 667 levels to aid in compression. Even so, the encoding has been good enough to give good results consistent with the data sheets for commercial sensors. We are interested in camera performance, and we are not interested in dissecting a camera to get at the fundamentals, although it would be nice to have the time and money to do it right. The sensor data from the cameras is also showing beautifully linear responses, again consistent with commercial sensor data. I'm pretty confident we are getting good numbers. Consistency is the key. Look at Figure 2 at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary
and note how the Canon 5D Mark II, 40D, 50D, and Nikon D300 plot nicely along the trend line and older models (partly due to lower fill factors) plot below the trend line.

Roger

Yes, the trends are nice, and most likely for any practical purpose comparing cameras and observing trends is useful, but certain absolute figures are somewhat tedious to separate. For example, there is a parameter called "read noise" in your article, and I gather you measure it by capping the camera and taking a photo with a fast shutter speed; whatever fluctuation is then observed in the RAW file is attributed to "read noise". Some of this is in fact shot noise from dark current that was subtracted on-chip, and also some noise from the sense amplifiers, but a lot of it is actually from crosstalk between the signal lines and also Johnson noise from the sensor wires. So any attempt, for example, to measure dark current noise and thus estimate dark current from such a measurement will be inaccurate.

arash_hazeghi
08-04-2009, 12:51 AM
Arash,

We've probably beat this one to death and the recent discussions have not been relevant to the original question of highlight recovery of current camera, so we should wind this down. I think we are coming into agreement. But please answer my last couple of questions. If we want to continue, we should do it off list, or start a new thread under the gear forum under a future of cameras discussion.

Roger

I enjoyed the discussion. I will look it over, see if there is anything left, and email you.

Thanks,
Arash

P.S. Sorry for posting lots of technical stuff on a photography forum, but some of these references are nice for people who might be interested.

arash_hazeghi
08-04-2009, 01:58 AM
In order to provide something useful for everyone, I am attaching a very good document by Canon about their sensor process and technology. It explains, in a very nice way with cartoons, how Canon was able to solve the dark current problem with their CMOS sensors. This was the main bottleneck of CCD technology at the time (2002-2006), which eventually led to its obsolescence in the DSC (Digital Still Camera) industry.

http://www.usa.canon.com/uploadedimages/FCK/Image/White%20Papers/Canon_CMOS_WP.pdf

Best,
Arash

Roger Clark
08-04-2009, 10:53 PM
Arash,
Great find! Thanks.

Roger

Flavio Rose
08-06-2009, 07:44 PM
Just wanted to respond to the original post and suggest a way to deal with highlight recovery without Capture NX, which Arash discussed.

First, there are two types of highlight recovery. Your raw file can be blown (i.e., have the maximum possible value in some channel for some pixel), or alternatively your raw file is OK but your jpeg is blown.

To see if the raw file is blown, I use the free download Rawnalyze. If it is, then I employ the highlight recovery algorithms in another free download, dcraw. These recovery algorithms are not necessarily so great, but they're worth trying.

To see if the jpeg generated from the raw file is blown (i.e., has a value of 255 in some channel at some pixel), one way is to look at the RGB histogram of the jpeg to see if it collides with the right-hand side. For more precise information, however, I use the free download Histogrammar http://www.guillermoluijk.com/tutorial/histogrammar/index_en.htm since it generates GIFs which show exactly which pixels are blown in which channel.
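For those comfortable with a little scripting, a minimal sketch of the same blown-channel check (assuming an 8-bit JPEG and the Pillow and NumPy libraries; "photo.jpg" is a placeholder file name, and Histogrammar itself does considerably more):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"))   # placeholder file name

for i, name in enumerate("RGB"):
    blown = int((img[..., i] == 255).sum())
    pct = 100.0 * blown / img[..., i].size
    print(f"{name}: {blown} blown pixels ({pct:.2f}%)")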

If a jpeg generated from a raw file is blown, I start darkening the raw file in DPP (i.e., move the top slider in the RAW tool to the left) and keep on darkening it until the jpeg which the raw file gives rise to is no longer blown in areas that I care about. It's a really simple procedure that, in contrast to recovering from blown raw files, requires no special algorithms.

The resulting picture may of course be too dark overall, in which case one has to lighten it either with curves or with some other technique that e.g. lightens selected areas of the image.

I would imagine that these procedures would work with Lightroom in place of DPP.