
Efficiency of making electrons from photons



John Chardine
07-25-2012, 03:14 PM
As I see it you can reduce digital photography to a fundamental process of making electrons out of photons, and if you can do this more efficiently your signal to noise ratio increases, which is a good thing. When you think about it, there are many places to lose photons once they have entered the lens and before they hit the photo sites on the sensor:

1. My Canon 70-200mm f/2.8 II lens has 23 elements. That's roughly 40 air-glass surfaces (some elements are cemented together and act as one, I assume) from which photons passing through the lens can reflect and therefore be lost. I assume modern coatings reduce this loss to a minimum, but still... As well, the light has to get through 23 pieces of glass of varying thicknesses- some are fluorite or low-dispersion glass- and some transmission losses must occur here.

2. Then the reduced number of photons hits the various filters in front of the sensor. According to Canon's white paper, my 1D IV has, in front of the sensor, a dichroic mirror to reflect IR, a layer of IR-absorbing glass, a horizontal low-pass filter, a "phase" layer to convert linearly polarised light to circularly polarised light, and then a vertical low-pass filter. All of these must further reduce the number of photons getting through.

3. Finally, the remaining photons hit the sensor itself, and I assume the process of conversion to electrons is not 100% efficient. One reason for this is that there are gaps on the sensor between the sites. These gaps are getting smaller, to the point that manufacturers are using the term "gapless" to describe their sensors.

I once read that about 70% of the light that manages to get to the sensor is lost due to the filters in front and the less-than-perfect efficiency of the sensor itself. The total light lost is even higher because this figure does not include losses due to the lens itself.

I guess my question is: can we expect continued improvements in efficiency which will ultimately result in higher S/N ratios? Where are these improvements most likely to be focused- in the lens, the sensor filters or the sensor itself? Are other sensor designs (back-lit, etc.) more efficient, and have we "hit the wall" with the current CMOS sensors in modern DSLRs?

Ed Cordes
07-26-2012, 10:42 PM
Very interesting question and perspective. I have some experience in optics as an optometrist of 35 years, so I can say that I think continuing improvements in coatings and lens materials will make light transmission more efficient. In fields outside my profession, I believe there will be newer and better materials from which more efficient sensor pixels will be made. The S/N capabilities and dynamic range of future sensors will be greatly improved by these developments. Imagine an image with 20 f-stops of dynamic range, noise-free, at ISO 200,000!

Is this crazy thinking? Maybe. But 15 years ago, would you have believed we would be making the images we make today at 12 FPS, at the current high ISOs, with the dynamic range we have, while storing all our images on little postage-stamp-sized objects?

Mike Milicia
07-27-2012, 01:27 AM
I might be missing something, but this doesn't sit right with me.

If, for example, there is so much opportunity for loss of light in a lens, how can exposure be consistent from lens to lens?
I understand loss of light due to focal length (which is accounted for in the f/stop setting) but if there is anything the least bit significant beyond that, wouldn't we have to change exposure depending on which specific lens we are using and how efficient it is at transmitting the light that enters it?

The same goes for variations in the transmission efficiency of filters in front of the sensor, although I guess this could be accounted for by changing the native ISO of the sensor, i.e. raising the ISO setting at which no post-capture amplification is needed. But it seems to me that this alone would not buy you anything with respect to S/N ratio (at least for photon shot noise) since it would just mean that you would use faster shutter speeds and/or smaller apertures to end up with the same amount of light at the sensor as before.

I have no idea how any of this stuff works beyond the conceptual level, so I would be glad to hear where my thinking is going wrong, although I have a sneaking suspicion that I'm not going to understand the answer.

John Chardine
07-27-2012, 05:30 AM
Mike- I didn't mean to over-emphasise losses in the lens itself. It would be interesting to partition the losses into lens, sensor filters and sensor to see where most gains can be made. I suspect that the lens accounts for the least loss and the sensor itself the most.

However, a lens with 6 elements clearly transmits more light than an otherwise similar one with 23.

Michael Gerald-Yamasaki
07-27-2012, 10:13 AM
Mike,

Greetings. I would guess that the differences in light transmission are accommodated in the lens design itself (diameters, lengths, shapes...) so that one can change lenses without a dramatic impact on exposure settings (but that's just a WAG).

John,

I would think there is much more to be gained from the efficiency of the sensor, as it tracks Moore's Law to some extent and comes down to a simple matter of money. Cameras are by no means state-of-the-art in computer tech, because end-user numbers are pretty low (meaning the market for high-end cameras doesn't support the applied research needed to use the latest technology). In other words, there is plenty of headroom here for improvement.

Regarding future tech, I wonder about the need for shutters. It would seem that some future capability would allow for streaming data off a sensor (perhaps in very short discrete samples) and computationally constructing a desired "shutter speed", with stabilization, noise reduction, etc. thrown in for good measure. Similarly, I wonder about the need for the number of filters which precondition the light before it hits the sensor, should the sensor be able to distinguish wavelengths on its own...
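
A minimal sketch of that "computational shutter" idea, with made-up numbers (real sensor readout would of course work very differently): if a sensor could stream many short sub-exposures, a chosen shutter speed could be built after the fact by summing them.

[CODE]
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stream: 100 sub-exposures of 1/1000 s each for a 4x4-pixel crop,
# each sub-frame holding Poisson-distributed photon counts (illustrative only).
sub_frames = rng.poisson(lam=5.0, size=(100, 4, 4))

# "Choose" a shutter speed after capture by summing the first k sub-frames:
# 20 x 1/1000 s is roughly a 1/50 s equivalent exposure.
k = 20
synthetic_exposure = sub_frames[:k].sum(axis=0)
print(synthetic_exposure)
[/CODE]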

I think the more of the photographic process that can be transferred to computation, the more computer tech can be leveraged (catching a ride on Moore's Law, the fastest ride in town).

Cheers,

-Michael-

Ken Watkins
07-27-2012, 11:00 AM
John,

I am certainly no technician or scientist but I would like to make a wish list.

I doubt that lenses can be improved much further without excessive cost; reductions in weight and better AF are the most likely areas for improvement, if the general public can afford them.

The abundance of filters has always concerned me and I would definitely like to see a reduction there.

The major breakthrough for me, and anybody else who has a wife taking video, would be a big reduction in shutter noise; I realise this is apparently possible with the new mirrorless cameras. I also cannot believe that a shutter opening and closing so rapidly has no effect on vibration and thus sharpness.

Alan Lillich
07-31-2012, 10:48 AM
Mike,

The variations are real, but in my understanding not big enough with good lenses to affect exposure settings. If you look at high-end lenses for video, they talk about t-stops instead of f-stops, and sometimes a lens spec lists both, the "t" being for transmission: the actual light delivered to the sensor. The difference between a lens's f-stop and t-stop seems to be at most a few percent.

Roger Cicala has a great article on coatings: http://www.lensrentals.com/blog/2011/12/reflections-on-reflections-the-most-important-part-of-your-lens

Two relevant quotes: "Without coatings each interface would reflect about 4% of the light that reaches it." "Use a modern coating that is 99.9% effective, and total reflection changes to less than 2% for the simple lens and just over 3% for the complex lens."
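
Those per-surface figures compound multiplicatively, which is also where the gap between f-stop and t-stop comes from. A rough sketch with illustrative reflectance values (not measured ones), ignoring absorption in the glass:

[CODE]
# Rough sketch: per-surface reflection losses compound multiplicatively.
# Illustrative values only: ~4% per uncoated air-glass surface (as in the quote
# above) versus a much smaller figure for a well-coated surface.
def total_transmission(reflectance_per_surface, n_surfaces):
    return (1.0 - reflectance_per_surface) ** n_surfaces

for label, r in [("uncoated", 0.04), ("coated", 0.002)]:
    t = total_transmission(r, n_surfaces=40)   # ~40 surfaces, as in the OP
    # t-stop of an f/2.8 lens with this transmission: T = f-number / sqrt(t)
    print(f"{label:10s} transmission ~{t:.1%}, f/2.8 acts like ~T{2.8 / t ** 0.5:.2f}")
[/CODE]

With realistic modern coatings, the f-number and t-number end up within a few percent of each other, which fits Alan's point about exposure consistency from lens to lens.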

Alan

P.S. Interesting to see your note about a workshop in Billerica. From 1979 to 1991 we lived on the north side of Nutting Lake.

Graeme Sheppard
08-05-2012, 07:48 AM
I'm not an expert, but I am a physicist so I have a general understanding.

Lens technology (the glass and the coatings) is a trade-off between the amount of light transmitted and the quality of that transmitted light.

Fibre-optic cables hundreds of kilometres long show us that the transmission of light within glass is highly efficient (particularly for expensive glass...). A single, solid lens would transmit pretty well. However, even in these fibres the red and blue light arrive at the end at different times, showing that inside glass the quality of white light can change. In fact, at each surface of a lens (including, to a lesser extent, the cemented ones) the light separates slightly and does other odd things; it also reflects.

So the coatings and the lens design combine to minimise the reflection of light and to stop it spreading out. The spreading out (dispersion, for example) is what worries photographers most in lenses these days, as it affects perceived sharpness.

Any dramatic improvements are more likely to come from the sensors, and these are likely to follow Moore's Law for a while yet, as the conversion from light to electricity becomes more efficient and manufacturing techniques allow smaller sites.

In summary, ridiculously high, clean ISOs are certainly within reach, but the amount of light reaching the sensor is unlikely to change much.

If I have got anything wrong here, I'd love to be corrected as I'm going mostly by intuition.

John Chardine
08-05-2012, 08:16 AM
Makes sense to me Graeme. And thanks for all the other contributions.

John Chardine
08-05-2012, 08:35 AM
Quick question- Michael and Graeme both mention "Moore's Law", which as I understand it refers to an exponential increase in the number of transistors in integrated circuits such as microprocessors. See here: http://en.wikipedia.org/wiki/Moore's_law.

My question is: does Moore's Law strictly apply to digital sensors? My hunch is that it doesn't, but please correct me if I'm wrong. I would have thought that digital sensors, by virtue of what they do, have constraints that do not apply to ICs. For example, there is nothing you can do about quantum photon noise, so as you continue to add more sensor sites to a sensor, the smaller sites collect less light per unit time (all else being equal) and your S/N ratio decreases. So to counter this you have to make the sensor site more efficient, but how far can you go with this, and will improvements in efficiency be exponential as implied by Moore?
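
A quick illustration of that shot-noise point (a minimal sketch; the photon counts are arbitrary, not real sensor values): for Poisson-distributed arrivals the S/N of a single site is roughly the square root of the photons it collects, so a site gathering a quarter of the light has half the S/N.

[CODE]
import numpy as np

rng = np.random.default_rng(42)

# Photons collected per exposure by a "large" and a "small" photosite
# (arbitrary example numbers).
for mean_photons in (10_000, 2_500):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"mean photons {mean_photons:>6}: SNR ~ {snr:.1f} (sqrt = {mean_photons ** 0.5:.1f})")
[/CODE]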

Graeme Sheppard
08-05-2012, 10:57 AM
My feeling is that if we expect sensors to follow Moore's Law, then the people developing the technology will try to keep up!
The "law" is obviously just a basic observation of historical trends. It was originally about transistors, then expanded to include the combined effect of transistors and other advances in rapidly increasing chip speeds.
We're just blithely expanding it further to include sensors, since they too rely on good old semiconductor technology, including transistors, and we feel like we're seeing similar historical trends (maybe I should have said "I", not "we").

The problem of noise in the sensor comes from the electronics; it's not really anything to do with stray light. Therefore, if the manufacturers can get to the point where a packet of light hitting a photosite releases a fixed packet of electrons that can be processed and, importantly, there is no cross-talk with adjacent photosites, then noise will be zero. The challenge is to do that with small sites, and I believe there is still room for massive improvement - as a case in point, what was the max ISO in a digital camera four years ago? (I've no idea, by the way.)

Yes, there are fundamental laws in the way, but very often their effects can be minimised. We'll not get crystal-clear ISO 200,000, but we'll get as close as makes no difference, imo.

John Chardine
08-05-2012, 11:47 AM
Hi Graeme- As I understand it, there is more than one source of noise. The one I referred to as "quantum photon noise" is caused by photons arriving at the sensor in a probabilistic rather than deterministic way, following a Poisson distribution.
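
In symbols, for a Poisson-distributed photon count with mean N, the standard deviation equals the square root of the mean, so shot noise alone gives:

[CODE]
\text{noise} = \sqrt{N}, \qquad
\mathrm{SNR}_{\text{shot}} = \frac{N}{\sqrt{N}} = \sqrt{N}
[/CODE]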

This is a useful review by "our own" Roger Clark:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/

Graeme Sheppard
08-05-2012, 12:08 PM
That sounds reasonable; I knew I was simplifying by just referring to the electronics, but as I said, I'm writing from intuition more than expertise.

Now, how to work around these problems? One possibility is to go with small photosites. Analysing the light reaching each one and comparing it with its neighbours should allow statistical separation of the sites to some degree, especially if neighbouring sites are manufactured to have different behaviours.

I've no idea if this is being done or is feasible, but manufacturers' increasing ability to control photo-sites on a fine scale coupled with innovative design will, I think, let the improvements continue for a bit longer.

Or it may just be wishful thinking on my own part!
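
Graeme's cross-comparison idea goes further than this, but the simplest related trick is already standard: averaging (binning) neighbouring small sites trades resolution for noise, since averaging k independent measurements cuts random noise by roughly the square root of k. A minimal sketch with made-up numbers:

[CODE]
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: a uniformly lit patch recorded by a grid of small photosites,
# each count Poisson-distributed around 100 photons.
small_sites = rng.poisson(100, size=(512, 512)).astype(float)

# 2x2 binning: average each block of four neighbouring sites.
binned = small_sites.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print("per-site SNR:", small_sites.mean() / small_sites.std())  # ~sqrt(100) = 10
print("binned SNR  :", binned.mean() / binned.std())            # roughly twice as high
[/CODE]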

John Chardine
08-05-2012, 04:30 PM
It strikes me as well that we are at the threshold of a whole new set of sensor technologies that could produce discontinuous jumps in image quality. We have been mired in the front-lit CMOS technology for some time and back-lit sensors have not made their way to DSLRs as far as I know. There must be other exciting sensor technologies out there as well, we just need to live long enough to be able to take advantage!

Michael Lloyd
09-16-2012, 10:40 AM
Interesting discussion. I wonder if the loss through the lens is made insignificant by the amount of data reduction that occurs? Pick any scene that you are going to photograph. The amount of real data in that scene is immense, no matter the focal length, compared to the size of the sensor and its recording capabilities. The lens is a data compressor. Take a wide-angle lens aimed at a mountain scene, for instance. The width of the view could easily be measured in miles; the lens compresses that into a space measured in millimeters.

John Chardine
09-16-2012, 04:19 PM
Interesting Michael but I think your "real data" can never be realised. Everything from a falcon's eye to a camera sensor samples that scene remotely at different spatial frequencies, and although the falcon's eye is amazing, it is also compressing (aggregating) that theoretical information to some degree.

Michael Lloyd
09-16-2012, 04:23 PM

You kind of made my point... the tiny bit of loss through the lens elements is nothing compared to the gap between the data in the scene and what the sensor can resolve.

John Chardine
09-16-2012, 08:08 PM
I did and I didn't. My main point is that there is no absolute amount of data in a scene. That scene has to be sampled, measured, seen in some way, and that's the only reality. So, following from that, there is no real loss between the scene and the way it is sampled (whether it be by a falcon's eye or a camera sensor), because you cannot define some theoretical amount of information for the scene independent of the way it is sampled. Yes, a 42MP sensor will sample a scene at a higher frequency than a 10MP sensor, but neither can be compared to any absolute, because no such absolute exists.

I do agree that light loss through the lens is probably insignificant compared to that occurring elsewhere in the system.

Michael Lloyd
09-16-2012, 08:41 PM
In the last few minutes I've typed argument and agreement replies... I see the scene as having an absolute amount of data but, as you noted, it's not really absolute... it's relative to the "sensor's" resolving power.

I think I'll go ask Schrödinger's cat what it thinks...

Graeme Sheppard
09-17-2012, 12:55 AM
There's no point. It'll give you two polar opposite answers that'll both be right then leave you with a perplexing wave.

Jon Rista
09-29-2012, 05:50 PM
I know this thread hasn't been active for a couple weeks. I thought I'd throw in some thoughts, primarily about lens transmission.

From Post #1, the assumption was that 70% of light could be lost through the lens. If we were discussing lenses from the early part of the 20th century, transmission loss of 70% might well be true. The lack of any kind of antireflection coating on early lens designs meant that a lot of light was lost to reflection, and even in lenses of only a few elements, total transmission loss could be considerable. There have been two innovations since those early lenses that greatly reduce the loss of light transmission in a lens. The first was multicoating (which started out as single coating). With a multicoating, transmission loss at each interface, be it air to glass or glass to glass, can be reduced to around 1% on average (thus preserving roughly 99% transmission per interface). In a 23-element lens, that roughly equates to 23% transmission loss in the worst case (which would result in 77% transmission), at least hypothetically. The intriguing thing about multicoating is that it does not actually prevent reflection, it mitigates it, and therefore limits ghosting and flare. Reflection still occurs and is simply cancelled (for the most part) by waveform interference (where the reflected waves cancel each other out because their phases are offset by half a wavelength).

Newer lens designs from both Canon and Nikon use a newer form of antireflection coating. It takes a cue from nature, moths' eyes to be specific, which are structured to let as much light as possible into the eye using a nano-scale surface structure. Nikon's NanoCrystal Coat and Canon's Subwavelength Coating (SWC) are both nanocoatings. Unlike multicoating, which aims to mitigate reflection when it does occur, a nanocoating aims to eliminate reflection entirely. Reflection occurs whenever there is an abrupt change in refractive index. The simple trick with a nanocoating is that small, varying nano-scale structures etched into or added to the surface of a lens combine with air to create a "soft transition" in refractive index. The end result is that light is effectively guided into the lens, resulting in 0.05% transmission loss or less. In a 23-element lens, total transmission loss would be a mere 1.15%, so total transmission would be 98.85% in the worst case.
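
The "abrupt change in refractive index" point can be made concrete with the standard normal-incidence Fresnel reflectance formula (the index values below are typical round numbers, not figures for any particular glass or coating):

[CODE]
# Fresnel reflectance at normal incidence between media of refractive index n1 and n2:
#   R = ((n1 - n2) / (n1 + n2)) ** 2
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

print(fresnel_reflectance(1.0, 1.5))   # air to typical glass: ~0.04, i.e. ~4%
print(fresnel_reflectance(1.0, 1.25))  # a gentler index step reflects far less: ~1.2%
[/CODE]

A nanostructure that grades the index from ~1.0 up to that of the glass in small steps keeps each step's reflection tiny, which is the effect these coatings exploit.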

I am not an optical engineer, so I don't know much about how well optical materials such as optical glass and fluorite transmit the light that isn't reflected. If we assume there is some loss, say 0.01% (a totally random, bogus, off-the-cuff assumption) per element, that would cost us another 0.23% transmission. If we are using a multicoated lens, that's 23.23% total loss, and if we are using a nanocoated lens, our total transmission loss would be 1.38%. There might be additional loss for light that strikes the insides of the lens barrel... we could tack on another 1% for that, I guess. So 24.23% multicoated or 2.38% nanocoated transmission loss. If we are looking to maximize light transmission through the lens, the clear option is to use a nanocoated lens. At the moment, I don't know if any manufacturers besides Nikon and Canon use nanocoating. To my knowledge, only the newest lenses in their lineups use it (I believe NanoCrystal Coat was introduced by Nikon around 5 years ago, and Canon introduced SWC in 2008). All of the new Mark II telephoto/supertelephoto lenses from Canon, the EF 300/2.8, 400/2.8, 500/4 and 600/4, use SWC. I would assume the new normal and wide-angle lenses do as well, but I have not verified that.

One can't forget about the aperture as well. Wide open, we should expect to get the most out of our total transmission. Transmission through the lens is one of the key aspects of exposure, though. We explicitly control the amount of light passing through the lens to the sensor with the aperture. However, as that's a conscious choice, we have no one to blame other than ourselves for that. ;)

As for sensor efficiency, there is some useful information regarding that from a variety of camera reviewers. A useful resource for determining the Quantum Efficiency of a sensor, or its ability to convert photons to electrons, can be found on sensorgen.info (the best resource I know of... if someone knows of a better one, I'd love to hear about it!). If we use the Canon 1D IV as an example, its sensor Q.E. is 44%. That would mean it is capable of converting 44% of the photons that reach it into corresponding electrons. The rest are either reflected or converted to heat. I am not sure whether the sensor Q.E. figure accounts for the filter stack above the actual CMOS die or not. If we assume it does, and we are using a nanocoated lens, then it would be safe to assume we get to keep 44% of the 97.62% of light transmitted through the lens. That would put the total system efficiency at around 42.95%. That means we're losing 57% of our light, and to John's original point, that has to have a pretty significant effect on S/N.
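
Chaining those numbers together (using the figures from this and the previous paragraph; the Q.E. value is the sensorgen.info estimate cited above):

[CODE]
# Multiply the losses described above: lens transmission times sensor quantum efficiency.
lens_transmission = 0.9762   # nanocoated-lens estimate from the previous paragraph
quantum_efficiency = 0.44    # Canon 1D IV Q.E. as reported by sensorgen.info

system_efficiency = lens_transmission * quantum_efficiency
print(f"overall photon-to-electron efficiency ~ {system_efficiency:.1%}")  # ~43%
[/CODE]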

As to the question: Can we continue making gains in terms of converting photons to electrons? Yes, I do believe so. The D800 has a Q.E. (according to sensorgen.info, which is derived from DXOMark statistics) of 57%, as does the D3s. Canon's 5D III has a Q.E. of 49%. There are probably ways to continue improving Q.E. directly via the CMOS sensor die itself. Back-illuminated sensors would directly expose the photodiode to light, and combined with microlensing, that should improve Q.E. (more so for sensors with smaller pixels than large, but as we are continuing to see significant increases in megapixels, I'd imagine back-illuminated sensors will become more prevalent.) Depending on how you look at it, some of a sensor's ability to convert photons to electrons is limited by electronic noise in the sensor circuit (dark current noise). If the "initial few electrons" are "consumed" by noise, that limits the electron-holding volume of our photodiode that can be used to represent photon conversions, so reducing electronic noise should help improve Q.E. a small amount as well. I think the most significant way to improve Q.E. is via active cooling. Scientific-grade devices use powerful cooling (usually via something like a Peltier, or thermoelectric cooler) to reduce sensor temperature to -80°C. At negative 80, dark current noise is nearly eliminated, and Q.E. can surpass 80%.
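
The trade-off described here is usually written as a single signal-to-noise expression in which shot noise, dark-current noise and read noise add in quadrature. A sketch with made-up electron counts (not measured values for any particular camera):

[CODE]
import math

# S = signal electrons, D = dark-current electrons accumulated during the
# exposure, R = read noise in electrons RMS; the noise terms add in quadrature.
def snr(signal_e, dark_e, read_noise_e):
    return signal_e / math.sqrt(signal_e + dark_e + read_noise_e ** 2)

# Made-up example: removing most of the dark current (e.g. by cooling) helps
# mainly when the signal itself is small, as in long or high-ISO exposures.
print(snr(signal_e=200, dark_e=100, read_noise_e=5))  # warm sensor
print(snr(signal_e=200, dark_e=1,   read_noise_e=5))  # cooled sensor
[/CODE]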

If we could keep 80% of the 97.62% of light transmitted by a nanocoated lens, our total system efficiency would be up to 78%! I think there is plenty of room to improve our photon conversion rate, and improve our S/N. I think most of that work would have to be done at the sensor level though. With nanocoating, lens transmission is reaching the point of diminishing returns. Future improvements to nanocoating structure may improve transmission loss to 0.01%, maybe 0.005%. But at that point the difference is so minimal it doesn't really mean anything.

John Chardine
09-30-2012, 05:22 AM
Hi Jon- Good information on the coatings etc. In the OP I said that 70% of the light that manages to get to the sensor is lost at the sensor, so I am not implying at all that this loss takes place in transmission through the lens.

Jon Rista
09-30-2012, 11:30 AM
Ah, sorry John. I guess I misunderstood the first time I read it. Even regarding the filter stack on top of the sensor, I would be surprised if 70% were lost there, unless we are talking about the entire spectrum from near-IR to near-UV... in which case you don't really want a lot of those wavelengths anyway. The low-pass filter might take a little bit of light, but its purpose is to blur the image just slightly in the horizontal and vertical directions, so I wouldn't imagine we are really losing a lot of light there. You can see the CFA on the sensor through the filters if you look into the mirror box with the mirror up. There isn't a whole lot of light in the mirror box most of the time, so I would figure transmission loss through the filters (for the visible wavelengths we want to keep) is probably only a few percent.