
Noise in Prosumer VS Pro Bodies



Jackie Schuknecht
08-31-2009, 09:52 AM
I haven't come across this in my reading yet, but can anybody explain why the prosumer bodies have so much more noise than the pro bodies? The mechanics of the whole thing. Why haven't they translated this feature into lower-end bodies? Is it a cost factor?

Alfred Forns
08-31-2009, 09:54 AM
Hi Jackie, it is basically sensor size, but I'm sure our resident expert Roger will give you the complete rundown!!!

Roger Clark
08-31-2009, 09:22 PM
OK Alfred!

Almost all the noise you see in your images is directly related to the physical size of the pixels in your camera, except in the darkest shadows. So the trends you see result mainly from the pro bodies having larger sensors and larger pixels. I have the gory details in a couple of articles:

Digital Cameras: Does Pixel Size Matter? Factors in Choosing a Digital Camera
http://www.clarkvision.com/imagedetail/does.pixel.size.matter

Digital Camera Sensor Performance Summary
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

We can pretty much predict the performance of a camera as soon as the specs are announced, except for the low end, the deepest shadows, where electronics continue to improve with each generation, reducing the thermal noise, read noise, and fixed pattern noise. We see the results of that shadow improvement in all cameras, even P&S.

Note that contrary to popular belief, cameras with smaller sensors and smaller pixels do not have more noise. In fact they have LESS noise than large sensor pro DSLRs. What scales with pixel size is signal: smaller pixels collect less light, so the electronics must amplify the signal more which also amplifies the noise. What we perceive as noise is really noise relative to the signal and is called the Signal-to-Noise Ratio (SNR or S/N). Larger pixels give higher SNRs at all ISOs and we perceive that as less apparent noise.
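If you want to see that scaling in numbers, here is a rough Python sketch; the saturation density and pixel pitches are assumed round numbers for illustration, not measurements of any particular camera. The point is only that signal grows with pixel area while shot noise grows as its square root, so SNR scales roughly with pixel pitch.

```python
# Back-of-the-envelope sketch of shot-noise SNR vs pixel size (illustrative numbers only).
import math

full_well_density = 1700  # assumed electrons per square micron at saturation (order of magnitude)
for pitch_um in (2.0, 4.3, 6.4, 8.2):
    area = pitch_um ** 2                      # pixel area in square microns
    signal = 0.25 * full_well_density * area  # electrons at a mid-tone, two stops below clipping
    shot_noise = math.sqrt(signal)            # photon (shot) noise in electrons
    print(f"{pitch_um:4.1f} um pixel: signal ~{signal:8.0f} e-, "
          f"shot noise ~{shot_noise:6.1f} e-, SNR ~{signal / shot_noise:6.1f}")
```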

But large pixels aren't everything, because for a given sensor size, larger pixels mean fewer total pixels and less resolution. People have different opinions on what pixel size is best in this trade space. My opinion is that 5 to 8 microns is optimum. As pixels go below about 5 microns, several compromises are made, including reduced dynamic range and loss of contrast at the pixel level due to diffraction and lens aberrations.

Roger

Jackie Schuknecht
09-01-2009, 09:09 AM
Thanks Roger, I actually understood that! Interesting to know all the details.

Emil Martinec
09-01-2009, 11:50 AM
snip

Almost all the noise you see in your images is directly related to the physical size of the pixels in your camera, except in the darkest shadows. So the trends you see result mainly from the pro bodies having larger sensors and larger pixels.


I think it's better to say that almost all the noise one sees in images is directly related to the physical size of the sensor in the camera, with the size of the pixels affecting only the deepest shadows and/or high-ISO images.

The first figure in the first article is somewhat misleading, IMO; silicon has a fixed saturation density, so the bucket corresponding to a smaller pixel should be just as tall, but narrower. Thus, putting together a number of such buckets to equal the area of the larger bucket, the capacity is more or less the same provided the bucket wall thickness can be neglected (the analogue of fill factor for pixels, which seems to be a relatively minor effect in sensors I've examined). Figure 5 of the first article is also misleading, as it suggests that the vast difference in image quality between the two crops is due to the pixel size, when in fact it is due to the sensor sizes differing by a factor of about 13 in area (the digicam collected about 1/13 the photons, and so its S/N ratio is correspondingly about 1.8 stops worse).

Increasing the resolution by using smaller pixels introduces fine scale noise along with fine scale detail; it does not affect noise at coarser scales. A demonstration of this can be found in a couple of posts of mine at DPR:

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922352

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922793

With smaller pixels, one thus has the choice to downsample to recover a coarser scale image with the same noise as a camera with larger pixels, to do nothing and retain both the extra detail along with the fine scale noise, or to filter the fine scale noise and retain much of the extra detail.
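To make the downsampling point concrete, here is a toy Monte Carlo sketch (illustrative numbers, not any real camera): a patch of uniform light is recorded either by one large pixel or by a 4x4 block of small pixels covering the same area, and summing the block recovers essentially the same SNR as the single large pixel.

```python
# Toy simulation: one big pixel vs a 4x4 block of small pixels over the same area.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
mean_photons_big = 1600                                   # photons over the large-pixel area

big = rng.poisson(mean_photons_big, trials)               # one big pixel per trial
small = rng.poisson(mean_photons_big / 16, (trials, 16))  # 4x4 block of small pixels, same total area
binned = small.sum(axis=1)                                # downsample: aggregate the block

for name, x in (("one big pixel", big), ("summed 4x4 small pixels", binned)):
    print(f"{name:25s} mean={x.mean():7.1f}  std={x.std():6.2f}  SNR={x.mean() / x.std():5.1f}")
# Both come out near SNR ~ sqrt(1600) = 40; the small pixels simply also carry finer-scale
# detail (and finer-scale noise) that the single big pixel never records.
```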


The above comments concern photon noise, which arises from the statistical properties of the light forming the image. The camera electronics introduce various forms of electronic read noise, which Roger mentioned. These are improving with time, but tend (for instance in Canon cameras of any given generation) to have a fixed noise cost per pixel, resulting in more noise of this form when the image is comprised of more pixels. Since the read noise cost is fixed, while photon noise rises with the level of light, read noise is only important at high ISO, or in deep shadows at low ISO.


So I would say that the sweet spot depends on the kind of shooting you do. Action photography, in which the capture dictates a high shutter speed, higher ISO, etc., will tend to favor somewhat larger pixels to keep the read noise at bay. Landscape photography, where one can use low ISO and has lots of light, will favor smaller pixels in order to maximize detail; the image noise tends to be dominated by the photon noise, which is independent of the size of the pixel for any fixed scale in the image.

But all of this is a bit of a tangent to the question originally posed. The answer I would give to that, is that one can pretty much determine the noise performance based on the sensor size. For instance, a 1.6 crop body has over two and a half times less area than full frame, and therefore captures that many times less photons for an equivalent exposure. That's 4/3 of a stop, with correspondingly that much more noise. In other words, in terms of image noise, ISO 400 on 1.6x is like ISO 1000 on FF if using the same shutter speed and aperture. However, if you need to stop down the FF camera to get as much DOF as the crop camera, that advantage is lost -- then the images become comparable in terms of noise.
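For anyone who wants the arithmetic spelled out, here is the back-of-the-envelope version of that crop-versus-full-frame comparison (photon noise only, same framing, shutter speed and aperture; read noise ignored):

```python
# Quick arithmetic behind the 1.6x-crop vs full-frame noise comparison.
import math

area_ratio = 1.6 ** 2                 # FF sensor gathers ~2.56x the light of a 1.6x crop
stops = math.log2(area_ratio)         # ~1.36, i.e. roughly 4/3 of a stop
iso_crop = 400
iso_ff_equivalent = iso_crop * area_ratio
print(f"area ratio {area_ratio:.2f}x  ->  {stops:.2f} stops")
print(f"ISO {iso_crop} on 1.6x crop ~ ISO {iso_ff_equivalent:.0f} on full frame (noise-wise)")
```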

arash_hazeghi
09-01-2009, 06:32 PM
Emil,

Fill factor for a number of smaller pixels is always less than that of a larger pixel of the same total area; this is due to global interconnects (shown in the SEM) as well as vias (not shown in the SEM). There is also lateral optical crosstalk, due to dispersion at the MLA and at the pixel level, for very small pixels. These factors are important.

http://www.chipworks.com/uploadedImages/Blog/Test_Blog/Nokia6(2).jpg
SEM cross section of a commercial image sensor

Also, electronic crosstalk scales as ~1/d (d = pixel pitch) for constant plug height, so a tighter pixel is more prone to banding and pattern noise; it is not a linear scaling trend and you can't quite get the same noise by downsampling. There are many more circuit issues beyond the scope of this forum. Also, the smaller the pixels, the heavier the burden on the MLA and the optical lens; things get tricky when you demand more than 100 lp/mm from an optical lens, especially at the borders of the image circle. It is not as simple as you suggest.

For more information check out this excellent paper which was published by one of our labs in 2005 in IEEE CDM http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1438751

John Chardine
09-01-2009, 07:01 PM
snip
But all of this is a bit of a tangent to the question originally posed. The answer I would give to that, is that one can pretty much determine the noise performance based on the sensor size. For instance, a 1.6 crop body has over two and a half times less area than full frame, and therefore captures that many times less photons for an equivalent exposure. That's 4/3 of a stop, with correspondingly that much more noise. In other words, in terms of image noise, ISO 400 on 1.6x is like ISO 1000 on FF if using the same shutter speed and aperture. However, if you need to stop down the FF camera to get as much DOF as the crop camera, that advantage is lost -- then the images become comparable in terms of noise.

Interesting thought Emil but I don't follow. Yes, a larger sensor captures more light per exposure than a smaller sensor, but I don't see how sensor-wide light capture determines noise level. Surely it is what is happening at the pixel site that is important re. noise.

Emil Martinec
09-01-2009, 10:34 PM
Interesting thought Emil but I don't follow. Yes, a larger sensor captures more light per exposure than a smaller sensor, but I don't see how sensor-wide light capture determines noise level. Surely it is what is happening at the pixel site that is important re. noise.

Noise is not a single number -- it has a spectrum as a function of spatial scale (to be precise, as a function of spatial frequency in lines per picture height). This is similar to MTF -- there is not a single characteristic microcontrast passed by a lens, rather it varies with spatial frequency, which is why MTF of a lens is a graph where the horizontal axis is lph, and the vertical axis is the transmitted microcontrast at that spatial frequency. Similarly, noise varies in a well-defined manner with spatial frequency. Using the std dev of pixel values as a measure of noise integrates over the spectrum. Smaller pixels have a higher spatial frequency cutoff (the so-called Nyquist frequency) -- in simple terms, they detect finer scales in the image. They also add up the noise to a higher spatial frequency, and therefore exhibit a higher noise level.
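A tiny numerical sketch of that statement, using simulated white noise (nothing camera specific): the power spectrum of pixel-to-pixel uncorrelated noise is flat, and by Parseval's theorem the familiar std-dev figure is just the sum over all of those frequencies, which is why removing the high-frequency part (by filtering or downsampling) lowers it.

```python
# White noise has a flat spectrum, and its variance is the sum of that spectrum (Parseval).
import numpy as np

rng = np.random.default_rng(1)
n = 256
noise = rng.normal(0.0, 2.0, (n, n))             # zero-mean white noise, sigma = 2 "levels"

spectrum = np.abs(np.fft.fft2(noise)) ** 2        # 2D power spectrum
variance_from_spectrum = spectrum.sum() / n**4    # Parseval: sum of spectrum / N^2, with N = n*n samples
print("pixel variance        :", noise.var())
print("variance from spectrum:", variance_from_spectrum)
# Zeroing the upper half of the frequencies would remove part of that sum, which is why
# noise reduction and downsampling lower the measured std dev.
```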

For instance, if you look at Neat Image, or Noiseware, the controls are for high-, mid-, and low-frequency noise. These sliders affect noise on the corresponding frequency scales in the image. I developed a little demo of this here:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=31774115&q=ejmartin+neat+image&qf=m


Anyway, if we focus on a fixed object in the image, it comprises a fixed area relative to frame size. We should ask what is the level of noise associated with the scale of that object. If it is rendered with small pixels, there will be several pixels rendering the object, and we should ask not what is the noise of the individual pixels, but rather what is the noise of the aggregate of pixels that renders the object. Aggregating pixels (eg by downsampling, or some other form of filtering) decreases noise, because larger samples of photons intrinsically have less noise -- this is explained in the linked DPR posts in my first post above (more simply than what I have written here, I hope). Many smaller pixels in aggregation have about the same noise as one larger pixel of the same area.

Thus to me the issue is what is the noise at a fixed reference scale. DxOMark.com gets at the same point, where for any camera they have two display options for noise levels -- "screen", which is the pixel level, and "print", which is the result of an idealized resampling of the image to a fixed resolution.


Now, if we change the sensor size, a given image scale as a fraction of the frame represents more physical sensor real estate for a larger sensor, thus collects more photons, and has smaller photon statistical noise. There is more light captured to render that particular object. It doesn't much matter whether that capture was done with a few pixels or with lots of pixels, as far as noise at the scale of the object is concerned.

So, smaller pixels are not inherently bad when the dominant image noise is statistical photon noise. The finer pixels sample the image more finely; finer samples have more detail, and more statistical fluctuation per sample (higher noise per pixel). However one can always aggregate smaller samples into larger ones and recover the same level of noise as a more coarsely sampled image using larger pixels, together with the loss of detail that entails. And of course you can't go in the other direction -- recover finer detail from coarser sampling. And in between these two extremes, you have the optimal route -- filter the fine scale noise while retaining most of the important detail.


What I wrote above is specific to photon statistical noise; there are other noise sources (sensor read noise, pattern noise, etc) which do not scale so nicely. But that is perhaps left for a separate discussion.

Emil Martinec
09-01-2009, 10:40 PM
Arash, thanks for the interesting info. I would only add that I looked at the saturation density of photo-electrons per unit area for the 40D (5.7µ), 1Ds3 (6.4µ), and 1D3 (7.2µ), which are all of the same generation of Canon sensors; there was a slight difference in favor of the larger pixels, but it was only a few percent. Certainly nothing of consequence for photography.

I can't speak to the other issues you raise; it's certainly outside the scope of my expertise. At what size pixel spacing do these issues become important? The figure you posted was for 2.2µ pixels, certainly much smaller than current DSLRs are employing.

arash_hazeghi
09-02-2009, 03:05 AM
Hi Emil,
I'm not quite sure I understand what "saturation density" is. If you are referring to the charge stored in the cell capacitance, that more or less scales linearly with area unless you go sub-micron or use a deep trench or other tricks; however, the optical fill factor of a scaled cell is not constant. Remember that pixel pitch is defined as STI-to-STI spacing on chip. There is a lot of overhead from metal 1, 2, ..., via plugs as well as photogate 1, photogate 2, the pass transistor, etc. The distance the signal has to travel from cell to row/column sense gate, and the repeater configuration, directly affect the SNR pre-sense. Also, when you go denser you have to scale VDD for power, which has its own effects on junction field and cell performance. These are very important parameters in CMOS imager design and fabrication at any pixel pitch and for any application. Almost all the advances made in CMOS image sensors in the past 15 years are either in this area or in DSP, very little at the detector level. Scaling a CMOS image sensor is not simply taking a bunch of ideal, i.e. photon-shot-noise-limited, detectors and making them smaller; the paper I attached discusses some of these basic issues and their effect on noise, speed and other metrics.
In the lab when we measure noise, we wire-bond the chip, hook it up to a spectrum analyzer, pump it with a monochromatic source, and measure cell current and sensitivity directly post-sense, as described here http://www-isl.stanford.edu/~abbas/group/papers_and_pub/qe_spie_98.pdf and here
http://www-isl.stanford.edu/groups/elgamal/abbas_publications/C066.pdf
We also have simulation tools that allow us to accurately determine various performance metrics for a scaled sensor; these are the same CAD tools that industry uses.


Anyway, the problem with these discussions is that sometimes individual backgrounds are different, it is hard to convey technical details, and other members who are not technical do not benefit much, as this is a photography forum. Also, unfortunately I cannot provide any exact figure for the SNR of a sensor scaled under a particular scheme, pixel overhead, or SEM micrographs of a patented DSLR sensor; Stanford policy is that we do not offer consulting to third parties unless it is collaborative work under a written agreement.


Best,
Arash

Emil Martinec
09-02-2009, 07:25 AM
I'm referring to the following:

(1) photosite gain (electrons per RAW level of the camera) can be inferred from shot-noise dominated images (both Roger and I do these sorts of measurements),
(2) at base ISO 100 the cameras mentioned (40D/1Ds3/1D3) reach a limiting RAW level below 2^14-1, indicating that the photosites are close to saturation. The number of electrons that RAW level corresponds to is what I would call the practical saturation level of the photosite -- it's certainly all we photographers have access to. Divide that by the photosite area, and one gets what I called above the saturation density. IIRC it only differs by about 5-10% between the 40D and 1D3, even though the photosite areas differ by as much as 60%.
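For the curious, here is a minimal sketch of that kind of photon-transfer measurement run on simulated data (the gain and signal level below are made up): for shot-noise limited light the variance in electrons equals the mean, so in RAW units the gain in electrons per level is simply mean/variance.

```python
# Inferring gain (e-/DN) from a shot-noise dominated uniform patch, on simulated data.
import numpy as np

rng = np.random.default_rng(2)
true_gain = 3.1                                   # electrons per RAW level (assumed)
electrons = rng.poisson(20_000, 500_000)          # uniform patch, shot noise only
raw = electrons / true_gain                       # what an idealized ADC would record

estimated_gain = raw.mean() / raw.var()           # Poisson: var_e = mean_e, so gain = mean_DN / var_DN
print(f"estimated gain ~ {estimated_gain:.2f} e-/DN (true value {true_gain})")
# With the gain known, the RAW level where the channel clips gives the practical full well,
# and dividing by the photosite area gives the "saturation density" discussed above.
```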

Thanks for the references.

Mike Milicia
09-02-2009, 08:51 AM
Increasing the resolution by using smaller pixels introduces fine scale noise along with fine scale detail; it does not affect noise at coarser scales. A demonstration of this can be found in a couple of posts of mine at DPR:

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922352

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922793



Emil,
Thanks for your efforts to explain these issues. I do get lost rather quickly in the details but I feel like I can at least usually absorb some of the basic concepts and summary results. I have a basic question about the above posts. I'm not really sure how to ask it but hopefully you'll get the idea. I think I understand what you say about the statistics of the coin toss and the sampling of a random event. But what is it about the way that photons and sensors behave that makes counting photons at photosites analogous to sampling a random event?

Emil Martinec
09-02-2009, 09:47 AM
Emil,
I think I understand what you say about the statistics of the coin toss and the sampling of a random event. But what is it about the way that photons and sensors behave that makes counting photons at photosites analogous to sampling a random event?

Quantum mechanics. The intensity of light at the photosite gives the probability per unit time of detecting a photon; the arrivals of individual photons are random events governed by this probability.

But many natural (and not-so-natural) phenomena obey the same statistics. For instance, in the raindrops analogy of Roger's article, the arrivals of the raindrops are statistically random events -- there is a mean rate of raindrops arriving at the buckets but there are fluctuations around that which are governed by Poisson statistics (http://en.wikipedia.org/wiki/Poisson_process#Examples), the same statistics that govern photon count fluctuations in photosites.
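A quick numerical illustration of those Poisson statistics (arbitrary numbers): the typical fluctuation in the count is about the square root of the mean, so the relative fluctuation shrinks as the buckets, or photosites, collect more.

```python
# Poisson counting statistics: fluctuation ~ sqrt(mean), relative fluctuation ~ 1/sqrt(mean).
import numpy as np

rng = np.random.default_rng(3)
for mean_count in (10, 100, 1000, 10000):
    counts = rng.poisson(mean_count, 100_000)    # many identical "buckets"/photosites
    print(f"mean {mean_count:6d}: std {counts.std():7.1f} "
          f"(~sqrt(mean) = {mean_count ** 0.5:6.1f}), relative {counts.std() / counts.mean():.3f}")
```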

arash_hazeghi
09-02-2009, 01:49 PM
I'm referring to the following:

(1) photosite gain (electrons per RAW level of the camera) can be inferred from shot-noise dominated images (both Roger and I do these sorts of measurements),
(2) at base ISO 100 the cameras mentioned (40D/1Ds3/1D3) reach a limiting RAW level below 2^14-1, indicating that the photosites are close to saturation. The number of electrons that RAW level corresponds to is what I would call the practical saturation level of the photosite -- it's certainly all we photographers have access to. Divide that by the photosite area, and one gets what I called above the saturation density. IIRC it only differs by about 5-10% between the 40D and 1D3, even though the photosite areas differ by as much as 60%.

Thanks for the references.

The signal value is scaled pre-sense at the bit and word line reference unit, and quantized values are shifted and corrected many times before recording; we do not analyze metrics by looking at the "RAW recording", as it is not a valid method for technical purposes.

Flavio Rose
09-02-2009, 05:06 PM
I think I get the photon count noise. I can understand that the electronics has read noise, although I'm not very clear on just how pixels' signal levels get read -- presumably there is a linear array of analog to digital converters and there are analog signal paths with appropriate muxes and amplifiers leading from the pixels into those converters which are shared between many different pixels?

What I don't get is the spatial frequency of noise. Why isn't noise at one pixel uncorrelated with noise at any other pixel sensing the same level of light, be it close or distant? The only reason for correlation which I can visualize offhand is that certain pixels' signals go to their respective analog to digital converter over one shared path and other pixels' signals would use a different path, so the pixels sharing a path might experience some correlation. To make things very concrete, why are there big splotches in high ISO images?

John Chardine
09-02-2009, 06:12 PM
I appreciate Arash's and Emil's technical input. Thanks guys. I will say though that I always insist that my students can explain complex biological concepts to the "person on the street" (and I never buy the argument that it's impossible). Maybe it isn't possible in physics though?!

Emil Martinec
09-02-2009, 07:16 PM
I always insist that my students can explain complex biological concepts to the "person on the street" (and I never buy the argument that it's impossible). Maybe it isn't possible in physics though?!

Oh dear. I thought that my coin toss analogy was accessible... :confused:

John Chardine
09-02-2009, 08:18 PM
snip


Anyway, the problem with these discussions is that sometimes individual backgrounds are different, it is hard to convey technical details, and other members who are not technical do not benefit much, as this is a photography forum. Also, unfortunately I cannot provide any exact figure for the SNR of a sensor scaled under a particular scheme, pixel overhead, or SEM micrographs of a patented DSLR sensor; Stanford policy is that we do not offer consulting to third parties unless it is collaborative work under a written agreement.


Best,
Arash

It was, Emil. My comment was meant to be light-hearted and really directed at Arash (smile).

Emil Martinec
09-02-2009, 08:54 PM
I think I get the photon count noise. I can understand that the electronics has read noise, although I'm not very clear on just how pixels' signal levels get read -- presumably there is a linear array of analog to digital converters and there are analog signal paths with appropriate muxes and amplifiers leading from the pixels into those converters which are shared between many different pixels?

What I don't get is the spatial frequency of noise. Why isn't noise at one pixel uncorrelated with noise at any other pixel sensing the same level of light, be it close or distant? The only reason for correlation which I can visualize offhand is that certain pixels' signals go to their respective analog to digital converter over one shared path and other pixels' signals would use a different path, so the pixels sharing a path might experience some correlation. To make things very concrete, why are there big splotches in high ISO images?

The layout of ADCs depends on the design -- the Sony sensors used in current Nikons have an ADC for each column of pixels; Canon uses a small number to handle the whole array. But basically you have it right.

Noise is largely uncorrelated from pixel to pixel. Photon noise is entirely uncorrelated; the sensor read noise has some correlation, most of it along rows or columns, which is the patterned line noise that is especially prevalent on Canons (and the fact that it is read noise explains why you see it more when you try to lift the shadows on those cameras).

Even if noise is uncorrelated, one can decompose it into a frequency spectrum. What happens is that noise that is uncorrelated from pixel to pixel is also uncorrelated from frequency to frequency.

As to why there are big splotches in high ISO images, I think some of that is a property of the RAW conversion algorithm, which does correlate neighboring pixel values in the course of interpolating the sensor data to a full set of RGB values at each pixel location. Here for instance is an interpolation of a D700 RAW at ISO 25600; on the left is ACR, on the right is a prototype interpolation algorithm that I am developing:

http://theory.uchicago.edu/~ejm/pix/20d/posts/ojo/D700_ACR-L_AMZ-R.jpg

The prototype has a higher resolution at high ISO than ACR. ACR lacks high frequency structure in the rendering, so what is left is low frequency noise, which is "blotchy". The blotchiness is what results when otherwise uncorrelated noise has its high frequency component scrubbed away, as I showed in the link I posted above:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=31774115&q=ejmartin+neat+image&qf=m

I suspect the blotchiness is also assisted by the structure of the read noise, which can be correlated over neighboring pixels; one sees this in various streaks, etc in both images.

But even completely random noise looks like it has structure, as one sees in the above link. For instance, in a hundred coin tosses, the likelihood of having six in a row turn up heads somewhere in the sequence is better than even odds. When you ask people to construct a sequence of numbers that appears random, they are very unlikely to select a sequence that has as many repeated numbers as is statistically likely in a random sequence. Our minds have been selected to detect patterns, and so even in randomness we are inclined to see structure; and conversely our notion of randomness dismisses the prevalence of chance coincidences.
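If anyone doubts the six-heads claim, here is a small Monte Carlo check (pure simulation, nothing camera related):

```python
# Monte Carlo check: in 100 fair tosses, a run of six heads occurs more often than not.
import numpy as np

rng = np.random.default_rng(4)
trials, n_toss, run_len = 50_000, 100, 6
hits = 0
for _ in range(trials):
    heads = rng.integers(0, 2, n_toss)      # 1 = heads, 0 = tails
    longest = current = 0
    for h in heads:
        current = current + 1 if h else 0
        longest = max(longest, current)
    hits += longest >= run_len
print(f"fraction of {n_toss}-toss sequences with a run of {run_len} heads: {hits / trials:.2f}")
```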

arash_hazeghi
09-02-2009, 08:56 PM
It was, Emil. My comment was meant to be light-hearted and really directed at Arash (smile).

Hi John,
I will send you a PM about this topic.

arash_hazeghi
09-02-2009, 10:34 PM
In order to provide useful information for everyone I am attaching this excellent article http://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html

It should answer most if not all of the questions and it is easy to digest. It explains why a CMOS image sensor is not a simple array of photodiodes; bad assumptions lead to even worse conclusions.

Best

Roger Clark
09-02-2009, 11:23 PM
Emil, and others,

I've been hosting a meeting of scientists this week, so haven't had time for BPN. I'll try and catch up this weekend. But a couple of quick notes.

Your questions about:
http://www.clarkvision.com/imagedetail/does.pixel.size.matter

Figure 1 is meant to be an analogy, not literal. But I see that you are taking it literally, and I agree that in that case I should make the heights of the walls the same. I'll do that when I get a chance.

Figure 5. I disagree. What users have had a choice of is similar megapixels from very different cameras, e.g. a 15 megapixel 50D versus a 15 megapixel G10. The cameras have equivalent field-of-view focal lengths. People do not have the choice of a 15 megapixel DSLR versus a 112 megapixel P&S with the same sensor size. So the choices are relatively constant megapixels between cameras but with widely varying pixel and sensor size. It is the pixel that collects the photons and that is where the noise counts. I understand that pixel density changes on a print as megapixels from the camera increase, but people want to make larger prints with more megapixels. So arguments can go multiple ways and each has validity.

I know this topic has led to huge religious wars. But the question was why do P&S cameras have poorer performance, and that is at the pixel level. The religious wars ignore the fact that as pixel size decreases, multiple compromises come into play that limit data quality and thus image quality. Some of those factors involve human perception, and people have different views, much like the digital versus film wars. Maybe we should call this the pixel size wars.

One can look at it this way. Consider a constant size sensor and vary the number of megapixels, thus pixel size. At one extreme we have the one pixel camera with maximum dynamic range. Obviously not a very good image. At the other extreme, we have pixel size ~ zero, and can only collect a maximum of 1 electron, so dynamic range is zero. Obviously not a good image. You could average those small pixels into something larger and have both spatial resolution and dynamic range. And I haven't even included all the problems with small pixels, like read noise at each pixel, photon absorption length (which is several microns in the red), etc., then add in diffraction effects and what lenses can actually deliver in image quality and there is a disadvantage to small pixels. So there is an optimum size somewhere between really small and very large. That is where the religious wars begin. But I'll not participate in those wars.

Roger

arash_hazeghi
09-03-2009, 12:26 AM
Roger,
Have you ever measured the resolution of Canon super telephoto lenses (500 f/4 IS and 600 f/4 IS, naked and with TC) in terms of lp/mm? If so would you care to share it here? I am interested to know how it compares to the Nyquist limit for the 7D.

Roger Clark
09-03-2009, 08:39 AM
Roger,
Have you ever measured the resolution of Canon super telephoto lenses (500 f/4 IS and 600 f/4 IS, naked and with TC) in terms of lp/mm? If so would you care to share it here? I am interested to know how it compares to the Nyquist limit for the 7D.

Arash,
No, not directly. Here is an image of the full moon with the 500 and Canon 2x TC:
http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/moon.rnclark.handheld.c10.25.2007.jz3f6583f-8s-800.html

Use of TCs on the supertelephotos can appear soft, but it is usually technique because the extreme magnification is amplifying all the little shaking, even with IS. I did get a new appreciation for the sharpness of the 500 when I started using the 5DII. The blur filter on the 5DII is not as aggressive as on the 1DII, so images appear significantly sharper at the pixel level on the 5DII, but I have seen more edge artifacts too. Those artifacts are pretty small, and overall image quality is amazing. But if you try and image a planet, like Jupiter or Venus with the 500 and a TC, some images will have little protrusions out the edge of the planet due to the rectangular sampling (jaggies).

Summary: the 500 f/4 L IS + 1.4x or 2x TC is impressively sharp, but sometimes tiny vibrations rob some of that sharpness, so one needs superb technique to prevent that. A 600 would be even tougher because the magnification is greater.

Roger

Emil Martinec
09-03-2009, 09:51 AM
What users have had a choice of is similar megapixels from very different cameras, e.g. a 15 megapixel 50D versus a 15 megapixel G10. The cameras have equivalent field-of-view focal lengths. People do not have the choice of a 15 megapixel DSLR versus a 112 megapixel P&S with the same sensor size. So the choices are relatively constant megapixels between cameras but with widely varying pixel and sensor size. It is the pixel that collects the photons and that is where the noise counts. I understand that pixel density changes on a print as megapixels from the camera increase, but people want to make larger prints with more megapixels. So arguments can go multiple ways and each has validity.

I know this topic has led to huge religious wars. But the question was why do P&S cameras have poorer performance, and that is at the pixel level. The religious wars ignore the fact that as pixel size decreases, multiple compromises come into play that limit data quality and thus image quality. Some of those factors involve human perception, and people have different views, much like the digital versus film wars. Maybe we should call this the pixel size wars.

Actually, the question of the OP was why do prosumer bodies have more noise than pro bodies, which I interpreted to mean why do APS-C cameras (1.5x or 1.6x crop) have more noise than APS-H (1.25-1.3x crop) or full frame, not about P&S vs DSLR. I agree that smaller sensors do tend to have smaller pixels, because a certain minimum resolution is desirable, but correlation is not causation. For instance, if the answer is that it's the size of the pixel, then one might think that an image from the 5D2 has the same noise as one from the 30D, since they both have the same size pixels (6.4µ). However, that would be wrong; the 5D2 has a noise advantage of 4/3 stop over the 30D, because its sensor is over 2.5 times larger and thus collects that many more photons over the image for a given exposure (Tv/Av).

This is why I think it's misleading to provide an image from two cameras with both different sized sensors and different sized pixels -- by varying two quantities at once (pixel size and sensor size), you cannot be sure which is the cause of any observed difference in noise. One can however look at examples like the one I gave, where sensor size is varied at fixed pixel size, and others (eg D3 vs D3x) where sensor size is kept fixed and pixel size is varied. By controlling one variable at a time, you can be more confident as to what is causing any observed effect.



One can look at it this way. Consider a constant size sensor and vary the number of megapixels, thus pixel size. At one extreme we have the one pixel camera with maximum dynamic range. Obviously not a very good image. At the other extreme, we have pixel size ~ zero, and can only collect a maximum of 1 electron, so dynamic range is zero. Obviously not a good image. You could average those small pixels into something larger and have both spatial resolution and dynamic range. And I haven't even included all the problems with small pixels, like read noise at each pixel, photon absorption length (which is several microns in the red), etc., then add in diffraction effects and what lenses can actually deliver in image quality and there is a disadvantage to small pixels. So there is an optimum size somewhere between really small and very large. That is where the religious wars begin. But I'll not participate in those wars.

Roger

As with most wars, I think this one is founded on a misunderstanding.

Within a given format, one now has a choice between MP counts differing by a factor of two (5D vs 5D2, D3/D700 vs D3x), or more (30D vs 7D), and complaints are often heard that this leads to more noise even at low ISO. These complaints are usually founded on an examination of pixel level noise, ignoring the fact that with more pixels, each one of them comprises a smaller portion of the image, and that when shot-noise dominated images from cameras with different MP counts are printed at a fixed output size, image noise is much the same.

I agree with you that there is a happy medium, that trades off resolution (favored by high MP counts and situations where image noise is shot-noise dominated) against low-light performance (favored by large pixels and images where image noise is read noise dominated). But that discussion, when done properly, involves the more arcane issues you mentioned like pixel level read noise, and the tradeoff is much, much more gradual than most people realize because they are fixated on the pixel level shot noise and not properly taking account of its scaling properties when resolution changes (as discussed in my tutorial links above on sampling statistics).

Your example of pixel size tending to zero overlooks this point, that noise has a spectrum as a function of spatial frequency. The DR at the Nyquist frequency goes to zero in the limit, but the DR at a fixed spatial frequency scale (taking into account only shot noise) remains fixed. Look at any half-tone newspaper image from across the room; it is made up of tiny dots that are either black or white (the analog of photon or no photon in a tiny pixel), yet the DR of the half-tone image is certainly not zero at any scale you can perceive.
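Here is a toy version of that half-tone argument (simulated, just for illustration): each dot carries only one bit, yet averaged over a neighborhood the image reproduces a smooth gradient, so the effective DR depends on the spatial scale at which you evaluate it.

```python
# One-bit "half-tone" rendering of a smooth gradient, recovered by coarse-scale averaging.
import numpy as np

rng = np.random.default_rng(5)
rows, cols = 512, 512
tone = np.linspace(0.05, 0.95, cols)                        # the smooth gradient we want to render
halftone = (rng.random((rows, cols)) < tone).astype(float)  # each dot is strictly 0 or 1

recovered = halftone.mean(axis=0)                           # look at a coarser scale (column average)
print("worst-case error of the recovered tone:", float(np.abs(recovered - tone).max()))
# Per-dot dynamic range is one bit, yet the coarse-scale image tracks the gradient to within
# a few percent: DR depends on the spatial scale you evaluate it at.
```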

arash_hazeghi
09-03-2009, 03:04 PM
The moon picture is really impressive; can you find the landing spots? :D

I agree the MKII has a weaker AA filter, and combined with large, high fill factor pixels it can pull lots of crisp detail. I don't have the 500 f/4 (although I sometimes rent it for some occasions), but my 400 f/5.6, while excellent on my MKII at all apertures, struggles with my 50D when it is wide open, and the 7D is yet more demanding.

As for jaggies have you tried Canon's DPP for RAWs?





Roger Clark
09-03-2009, 11:18 PM
As for jaggies have you tried Canon's DPP for RAWs?

Arash,
I have not tried Canon's DPP. I have observed the jaggies in the CS4 ACR converter and in jpegs out of the camera. The jaggies have generally been minor, and not every planet or moon image shows them.

Roger

Roger Clark
09-04-2009, 12:09 AM
Actually, the question of the OP was why do prosumer bodies have more noise than pro bodies, which I interpreted to mean why do APS-C cameras (1.5x or 1.6x crop) have more noise than APS-H (1.25-1.3x crop) or full frame, not about P&S vs DSLR.

Emil,
It matters little which one, APS-C or P&S; the effects are the same, although less extreme going from APS-H or full frame to APS-C.



I agree that smaller sensors do tend to have smaller pixels, because a certain minimum resolution is desirable, but correlation is not causation. For instance, if the answer is that it's the size of the pixel, then one might think that an image from the 5D2 has the same noise as one from the 30D, since they both have the same size pixels (6.4µ). However, that would be wrong; the 5D2 has a noise advantage of 4/3 stop over the 30D, because its sensor is over 2.5 times larger and thus collects that many more photons over the image for a given exposure (Tv/Av).


This is where we disagree. To say sensor size is the key fails to include key photography situations, for example, bird photography. Let's take a Canon 10D, 20D, 40D, 50D, 1DII, 1DIII, 5D, 5DII, and Nikon D3. Put a 500 mm f/4 lens on each one and go photograph birds in low light. Sensor size is irrelevant except for field of view. The key metric is pixel size and the performance of each pixel. The number of pixels on the subject has nothing to do with sensor size in cases like this (call it subject photography). In your example, you are not only saying the sensor is the key metric, you are also changing the lens focal length to give the same field of view between different sized sensors, thus you are also changing the aperture and the amount of light sent to the sensor. Thus you are effectively changing many variables at the same time.



This is why I think it's misleading to provide an image from two cameras with both different sized sensors and different sized pixels -- by varying two quantities at once (pixel size and sensor size), you cannot be sure which is the cause of any observed difference in noise.

Again, I don't change sensor size; you are doing that, along with changing the lens focal length, and the lens aperture.



One can however look at examples like the one I gave, where sensor size is varied at fixed pixel size, and others (eg D3 vs D3x) where sensor size is kept fixed and pixel size is varied. By controlling one variable at a time, you can be more confident as to what is causing any observed effect.

But you are assuming the subject is magnified to the same full sensor size. That means you are changing the lens focal length and lens aperture while arguing that only the sensor is changing.

The problem here is when people get a new camera, they don't change all their lenses to scale with the sensor size. For example, I now have a full frame camera and had an APS-C camera. I didn't replace my 500 f/4 lens with an 800 f/4 lens when I upgraded to the full frame body. I still use my 50 mm f/1.8 in low light. I still use my 70-200 f/4 and my 20mm f/2.8. But I do make different sized prints. Prints from an 8-megapixel camera made a quality 8x12 inch print, but the 21 megapixel camera makes equal quality 12x19 inch prints with the same pixel size on each camera. So just like photographers zooming in to view images 100% on a computer screen, they also change what print size they can make with different cameras and different sensors.

You have been making the assumption that people only do the same thing with all their images, e.g. make 8x10 inch prints from uncropped images. If that were the case, your idea of sensor size being the key metric would be correct. But it is not for many other situations, including subject photography like wildlife where big telephoto lenses are not scaled with sensor size, astronomical photography, moon, planets, galaxies, nebulae (where again, people have lenses or telescopes that don't scale with sensor size), where people change enlargement size depending on pixel count, cropping and enlarging portions of an image, and very low light detection where light gathering ability is paramount (including surveillance photography).



As with most wars, I think this one is founded on a misunderstanding.

Yes, along with an incomplete model of the real world.

Roger

Emil Martinec
09-04-2009, 12:17 PM
This is where we disagree. To say sensor size is the key fails to include key photography situations, for example, bird photography. Let's take a Canon 10D, 20D, 40D, 50D, 1DII, 1DIII, 5D, 5DII, and Nikon D3. Put a 500 mm f/4 lens on each one and go photograph birds in low light. Sensor size is irrelevant except for field of view. The key metric is pixel size and the performance of each pixel. The number of pixels on the subject has nothing to do with sensor size in cases like this (call it subject photography). In your example, you are not only saying the sensor is the key metric, you are also changing the lens focal length to give the same field of view between different sized sensors, thus you are also changing the aperture and the amount of light sent to the sensor. Thus you are effectively changing many variables at the same time.


In a situation where one is focal length limited, then indeed going from 1.6x crop to FF will not change the area occupied by the subject on the sensor. But then you haven't really changed the sensor size. If you are going to crop all your full frame images by 1.6x, then you aren't really using a larger sensor. So yes, I assumed that the framing of the image is kept the same. That in fact was what you did in figure 5 of your article -- framed the image the same way, rather than using a crop from the DSLR at the same (not equivalent) focal length that was 1/12 or so of the frame area, to match the P&S. If you had, the comparison would have been more equal (though I doubt the resolution of the DSLR crop would have been acceptable).




The problem here is when people get a new camera, they don't change all their lenses to scale with the sensor size. For example, I now have a full frame camera and had an APS-C camera. I didn't replace my 500 f/4 lens with an 800 f/4 lens when I upgraded to the full frame body. I still use my 50 mm f/1.8 in low light. I still use my 70-200 f/4 and my 20mm f/2.8.

I do hope you use the zoom to frame the image appropriately, and change lenses when needed, apart from situations where one is focal-length limited.



But I do make different sized prints. Prints from an 8-megapixel camera made a quality 8x12 inch print, but the 21 megapixel camera makes equal quality 12x19 inch prints with the same pixel size on each camera. So just like photographers zooming in to view images 100% on a computer screen, they also change what print size they can make with different cameras and different sensors.

With the same pixel size on each camera, then to increase the MP count you have had to increase the sensor size, so here I would say that it is the increased sensor size which has allowed you to make bigger prints of the same quality. Pixel size had nothing to do with it, since pixel size was kept the same.

But this is not the way the argument is usually framed. The statement I see most often is that with more MP on the same size sensor, one is tempted to print bigger and therefore it is the pixel noise which matters -- and therefore smaller pixels lead to noisier images. But the image noise will be the same on the same size print. Printing bigger magnifies everything, including the noise which increases at small scales, so one shouldn't be surprised! And the lower MP camera will still make a poorer large print than the higher MP camera. At any given print size, the higher MP count outperforms the lower MP count camera (in good light).

It reminds me of complaining that smaller pixels are "diffraction limited" at a smaller f-stop than larger pixels. At any given f-stop, smaller pixels resolve more, including the regime where the sensor starts to resolve the diffraction pattern. In other words, one is complaining that one's sensor has better performance and is no longer the most limiting factor in resolution. Similarly, complaining that smaller pixels on a fixed size sensor degrade the image because you are printing bigger seems to me in the same class of distorted logic.



You have been making the assumption that people only do the same thing with all their images, e.g. make 8x10 inch prints from uncropped images. If that were the case, your idea of sensor size being the key metric would be correct. But it is not for many other situations, including subject photography like wildlife where big telephoto lenses are not scaled with sensor size, astronomical photography, moon, planets, galaxies, nebulae (where again, people have lenses or telescopes that don't scale with sensor size), where people change enlargement size depending on pixel count, cropping and enlarging portions of an image, and very low light detection where light gathering ability is paramount (including surveillance photography).

Roger

I suppose we have a different point of view. I regard cropping an image as effectively using a smaller sensor, and so any inference one cares to draw about the effect of sensor size vs pixel size should be made in the context of that effective sensor size -- after all, that's why APS-C bodies are called 1.6x crop cameras.

No doubt, different tools are better adapted to different photographic applications. Bigger pixels are better for low light; smaller pixels are better for resolution. Bigger sensors are always better, if one can keep the framing the same. If one doesn't keep framing fixed, and crops the image to maintain the framing, then one shouldn't claim to have changed the sensor size.

Roger Clark
09-05-2009, 12:38 AM
In a situation where one is focal length limited, then indeed going from 1.6x crop to FF will not change the area occupied by the subject on the sensor. But then you haven't really changed the sensor size. If you are going to crop all your full frame images by 1.6x, then you aren't really using a larger sensor. So yes, I assumed that the framing of the image is kept the same.

Exactly, and that is why the pixel and its size is the fundamental metric in an image. And you can get many different sensors, all the same size but with different sized pixels, or different sized pixels and sensors.



That in fact was what you did in figure 5 of your article -- framed the image the same way, rather than using a crop from the DSLR at the same (not equivalent) focal length that was 1/12 or so of the frame area, to match the P&S. If you had, the comparison would have been more equal (though I doubt the resolution of the DSLR crop would have been acceptable).

That is to illustrate the choice users actually face all the time when choosing different cameras. For example users are often confused as to why a DSLR is better over a P&S camera. Some say "I can get a super zoom with 400 mm equivalent and get the same thing as those DSLR users with big expensive lenses." Not.



I do hope you use the zoom to frame the image appropriately, and change lenses when needed, apart from situations where one is focal-length limited.

Yes. That is a part of photography, and the fundamental unit of each image is the pixel. Note that as you zoom with constant f/ratio, the light level on the sensor stays constant. But if you change the pixel size, the light per pixel changes. The sum of small pixels can never equal a large pixel because read noise will always be larger in the summed pixels.



With the same pixel size on each camera, then to increase the MP count you have had to increase the sensor size, so here I would say that it is the increased sensor size which has allowed you to make bigger prints of the same quality. Pixel size had nothing to do with it, since pixel size was kept the same.

It is still the pixel that is the fundamental unit. It is the sum of the pixels that makes the image. The pixel defines how many photons can be captured and sets the fundamental resolution. In the sensor world, this pixel is said to have, with a given lens, a given field of view: the Instantaneous Field of View (IFOV). That pixel also sets limits on dynamic range.



And the lower MP camera will still make a poorer large print than the higher MP camera. At any given print size, the higher MP count outperforms the lower MP count camera (in good light).

You qualify your statement because the pixel size does matter. Your statement about making a poorer print is not always correct. But this gets into human perception and people have different standards. Some do not like the noise of the smaller pixels, others like the resolution (if not limited by lens or diffraction) and can accept more noise. And both of those are controlled by pixel size. It is like the film versus digital wars, and we are seeing just that with recent APS-C cameras.



It reminds me of complaining that smaller pixels are "diffraction limited" at a smaller f-stop than larger pixels. At any given f-stop, smaller pixels resolve more, including the regime where the sensor starts to resolve the diffraction pattern.

This statement is not correct for extended images. The modulation transfer function (MTF) goes to zero and you resolve no additional detail at some aperture. For example, at f/8 the 0 MTF occurs at 4.3 micron sampling for blue light.

You also ignore the absorption length in silicon, which is many microns in the red.



In other words, one is complaining that one's sensor has better performance and is no longer the most limiting factor in resolution. Similarly, complaining that smaller pixels on a fixed size sensor degrade the image because you are printing bigger seems to me in the same class of distorted logic.

You forget that those smaller pixels have reduced dynamic range. It is not just a resolution limit.



I suppose we have a different point of view. I regard cropping an image as effectively using a smaller sensor, and so any inference one cares to draw about the effect of sensor size vs pixel size should be made in the context of that effective sensor size -- after all, that's why APS-C bodies are called 1.6x crop cameras.

Again, this is real world photography. If you want to limit your thesis to specific conditions, you should state that, so that it is clear it does not apply to situations like bird photography where the subject does not fill the frame.



No doubt, different tools are better adapted to different photographic applications. Bigger pixels are better for low light; smaller pixels are better for resolution.

Emil, you are not being consistent. You just said it's the pixel that is the determining factor, while elsewhere you argue it is the sensor.



Bigger sensors are always better, if one can keep the framing the same. If one doesn't keep framing fixed, and crops the image to maintain the framing, then one shouldn't claim to have changed the sensor size.

I disagree. I do a lot of different imaging, from 8x10 inch sensors, to point and shoot. Bigger is not always better. Portability is often a factor. Larger cameras are harder to move and follow action (e.g. try wildlife photography with an 8x10 camera).

Your thesis that the sensor size is the fundamental metric is valid in some photographic situations but not others. But in either case, every image is made up of pixels. Pixels are the fundamental building blocks of the image. The more pixels you have of a given size, of course the sensor is larger, but it is the sum of those pixels that make up the image.

Roger

Emil Martinec
09-05-2009, 01:20 PM
That is to illustrate the choice users actually face all the time when choosing different cameras. For example users are often confused as to why a DSLR is better over a P&S camera. Some say "I can get a super zoom with 400 mm equivalent and get the same thing as those DSLR users with big expensive lenses." Not.

And the reason is that what they are actually getting is an 80mm lens and a sensor that has 20x less area than FF, and because of that smaller area, an ISO 100 image on their camera looks like an ISO 2000 image on a FF DSLR, when the framing and exposure are identical. The marketing never tells them that, however.


The sum of small pixels can never equal a large pixel because read noise will always be larger in the summed pixels.

This is simply not true. The Canon 40D at ISO 100 has 10% less read noise per unit area than the 1D3 (ie, considering a fixed area of sensor, and (RMS) aggregating the read noise of all the pixels in that area). Your statement is only true in low light (ie high ISO applications), where the read noise is dominated by the sensor read noise; at low ISO, it is late-chain read noise contributions which dominate, and smaller pixels have the advantage in current offerings. I suspect it is because there is less light per pixel, and so the situation is similar to a higher ISO on a larger pixel sensor -- one is closer to the point where read noise is sensor dominated.
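To show what "read noise per unit area" means in practice, here is a small sketch; the pixel pitches and read noise values below are invented for illustration (they are not 40D or 1D3 measurements). Uncorrelated read noise adds in quadrature, so the pixels covering a patch contribute sqrt(N) times the per-pixel noise.

```python
# Comparing read noise aggregated over a fixed sensor area rather than per pixel.
# All numbers are hypothetical placeholders, not measured camera values.
import math

def patch_read_noise(pixel_pitch_um, read_noise_e, patch_um=100.0):
    """RMS read noise, in electrons, aggregated over a patch_um x patch_um area."""
    n_pixels = (patch_um / pixel_pitch_um) ** 2
    return math.sqrt(n_pixels) * read_noise_e

print("small pixels:", round(patch_read_noise(5.7, 21.0)), "e- per 100x100 um patch")
print("large pixels:", round(patch_read_noise(7.2, 27.0)), "e- per 100x100 um patch")
# Which one wins depends entirely on how the per-pixel read noise actually scales with pitch,
# which is why the comparison can flip between base ISO and high ISO.
```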

I mostly meant my qualification about smaller pixels outperforming larger ones in good light to refer to shot-noise dominated situations, where the finer image sampling allows one to push sampling artifacts off to finer scales in the image where they are less obtrusive, and higher resolution provides higher image quality. But a minor aspect of that qualification is that smaller pixels can outperform large ones for noise at low ISO (or at least they will until manufacturers get the late-chain read noise out of the way of the sensor performance).


This statement is not correct for extended images. The modulation transfer function (MTF) goes to zero and you resolve no additional detail at some aperture. For example, at f/8 the 0 MTF occurs at 4.3 micron sampling for blue light.

I invite you to go to DPReview's lens tests; for instance they tested the Canon 70-200/2.8 on both the 5D and 40D:

http://www.dpreview.com/lensreviews/widget/Fullscreen.ashx?reviews=15,14&fullscreen=true&av=3,3&fl=70,70&vis=VisualiserSharpnessMTF,VisualiserSharpnessMTF&stack=horizontal&lock=&config=/lensreviews/widget/LensReviewConfiguration.xml%3F3

Looking at the center resolution, the results show that, for a decrease in pixel pitch by a factor of 1.45, the resolution in lp/mm improves at f5.6 by nearly the ratio of pixel pitches, as expected because neither sensor is resolving any diffraction effects; at f8 the 40D has about a 25% advantage, at f11 about a 20% advantage; then the difference narrows to about a 10% advantage for the 40D for all apertures f16 and narrower, all the way out to f32. Note that to get these results, one must convert the DPR measurements, which are in line pairs/picture height, into line pairs per mm. This is accomplished by dividing each measured result in the DPR test by the frame height in mm (24mm for the 5D, 14.8mm for the 40D). A FF camera tiled with 40D sized pixels would outresolve the 5D at any aperture. One can do the same exercise for the D300/D3 combination on the DPR test of the Nikon 70-200, with much the same results.
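For anyone who wants to redo that conversion, here is a minimal sketch; the lp/ph readings below are hypothetical placeholders, not DPReview's measurements, so substitute the values read off the widget:

# Convert DPReview's resolution figures from line pairs per picture height
# (lp/ph) to line pairs per mm on the sensor, so different formats can be
# compared. The lp/ph values below are hypothetical placeholders only.

frame_height_mm = {"5D": 24.0, "40D": 14.8}    # sensor heights quoted above

measured_lpph = {                               # placeholder readings, not DPR data
    "5D":  {"f/8": 1800, "f/16": 1500},
    "40D": {"f/8": 1400, "f/16": 1000},
}

for body, readings in measured_lpph.items():
    for aperture, lpph in readings.items():
        lp_per_mm = lpph / frame_height_mm[body]
        print(f"{body} at {aperture}: {lpph} lp/ph = {lp_per_mm:.0f} lp/mm")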



Roger: "You forget that those smaller pixels have reduced dynamic range. It is not just a resolution limit."

It is not necessary for smaller pixels to have as much DR; they comprise less of the image. The example I gave of half-tone printing is an extreme example of this fact. Since noise is a function of spatial frequency, DR is as well. Raise the Nyquist frequency, and the DR at that frequency can be lower while maintaining the DR at any fixed spatial frequency in the image.




Emil, you are not being consistent. You just said it's the pixel that is the determining factor, while elsewhere you argue it is the sensor.
I am being consistent. The points I have consistently made:


For a given exposure and framing of an image, a larger sensor gathers more light and results in a better image in terms of S/N at any fixed spatial frequency relative to frame size.
If an image is cropped for framing, it is the portion of the sensor that made the crop that should be used in making comparisons. If you crop the image, you change what the pixel means in terms of image resolution in lph, so you have changed the scale in the image it represents.
In shot-noise dominated situations, pixel size (with sensor size fixed) has no impact on noise at any fixed spatial frequency in the image. This is because the light gathered per unit area is unaffected. For instance the 40D, 1Ds3 and 1D3 gather the same number of photons per RAW level per unit area at any given ISO, even though their pixel sizes vary by 60%.
In read-noise dominated situations, pixel size can have an effect on image noise at fixed spatial frequency. In current offerings, read noise per unit area varies quite a bit. It also varies with ISO to the point that one camera may be better for read noise at one ISO, and a different model at another ISO. I gave the example of the 40D and 1D3. But generally, at high ISO larger pixels perform better in current offerings for read noise per area. This statement is independent of sensor size.

I would never say that sensor size is the only metric for image quality, and I don't believe I have. I think people are too interested in having one single combined aggregate measure of IQ; it's silly. But sensor size is an important factor, and IQ increases with sensor size; resolution is also an important factor, and that favors higher MP counts for any given size sensor. Low light ability is another factor, and that pushes in the direction of larger pixels. Each aspect is weighted differently by different people, and so each will have their own "sweet spot" (which is why it's silly to try to find a single number to define IQ).

Roger Clark
09-13-2009, 01:06 PM
Originally Posted by rnclark (http://www.birdphotographers.net/forums/showthread.php?p=337301#post337301)
That is to illustrate the choice users actually face all the time when choosing different cameras. For example, users are often confused as to why a DSLR is better than a P&S camera. Some say "I can get a super zoom with 400 mm equivalent and get the same thing as those DSLR users with big expensive lenses." Not.
Emil Martinec: And the reason is that what they are actually getting is an 80mm lens and a sensor that has 20x less area than FF, and because of that smaller area, an ISO 100 image on their camera looks like an ISO 2000 image on a FF DSLR, when the framing and exposure are identical. The marketing never tells them that, however.

Emil,
Sorry I didn't respond sooner; I had a big paper for Science to finish and I'm trying to get a new server running.

It seems we are saying the same thing. You say the sensor size is the metric whereas I say the pixel. You then split the sensor up into N pixels, working from the top down. I go from the pixel size times N pixels to get to the sensor. We both get to the same result. (Exceptions noted below.)

Roger: " The sum of small pixels can never equal a large pixel because read noise will always be larger in the summed pixels."
Emil: This is simply not true. The Canon 40D at ISO 100 has 10% less read noise per unit area than the 1D3 (i.e., considering a fixed area of sensor and RMS-aggregating the read noise of all the pixels in that area). Your statement is only true in low light (i.e., high-ISO applications), where the read noise is dominated by the sensor read noise; at low ISO, it is the late-chain read noise contributions which dominate, and smaller pixels have the advantage in current offerings. I suspect it is because there is less light per pixel, and so the situation is similar to a higher ISO on a larger pixel sensor -- one is closer to the point where read noise is sensor dominated.

I'm not sure where you get these numbers. Let's work a problem. ISO 100 read noise:
1DIII: 24.4 electrons, 40D: 20.1 electrons. Pixel size: 1DIII: 7.2 microns. 40D: 5.7 microns.
1 square mm on the 1DIII contains 19290.1 pixels; 40D: 30778.7 pixels.
Summed read noise: 1DIII: 24.4 * sqrt(19290.1) = 3389 electrons; 40D: 20.1 * sqrt(30778.7) = 3526 electrons.
The 40D is slightly worse. But note there are some electronics differences here that bias the comparison. If the technologies were equal in the two cameras, the ISO 100 read noise would be very close. It is probably higher in the 1DIII because the A/D is running at a higher speed, not because of some fundamental physics difference.
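For anyone who wants to rerun this with different read noise or pixel pitch figures, here is the same calculation as a short sketch (the numbers are the ones quoted in the lines above):

import math

def read_noise_per_mm2(read_noise_e, pixel_pitch_um):
    # RMS-sum the read noise of all pixels contained in 1 square mm of sensor.
    pixels_per_mm2 = (1000.0 / pixel_pitch_um) ** 2
    return read_noise_e * math.sqrt(pixels_per_mm2)

# ISO 100 figures quoted above: (read noise in electrons, pixel pitch in microns)
cameras = {
    "1D Mark III": (24.4, 7.2),
    "40D":         (20.1, 5.7),
}

for name, (rn, pitch) in cameras.items():
    print(f"{name}: {read_noise_per_mm2(rn, pitch):.0f} electrons per mm^2")
# -> 1D Mark III: ~3389 e-/mm^2, 40D: ~3526 e-/mm^2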


Emil: I mostly meant my qualification about smaller pixels outperforming larger ones in good light to refer to shot-noise dominated situations, where the finer image sampling allows one to push sampling artifacts off to finer scales in the image where they are less obtrusive, and higher resolution provides higher image quality. But a minor aspect of that qualification is that smaller pixels can outperform large ones for noise at low ISO (or at least they will until manufacturers get the late-chain read noise out of the way of the sensor performance).

It is because of qualifications like these that I feel working from the bottom up (from the pixel) is the more fundamental way to approach the problem.


Emil:
I invite you to go to DPReview's lens tests; for instance they tested the Canon 70-200/2.8 on both the 5D and 40D:
http://www.dpreview.com/lensreviews/widget/Fullscreen.ashx?reviews=15,14&fullscreen=true&av=3,3&fl=70,70&vis=VisualiserSharpnessMTF,VisualiserSharpnessMTF&stack=horizontal&lock=&config=/lensreviews/widget/LensReviewConfiguration.xml%3F3
Looking at the center resolution, the results show that, for a decrease in pixel pitch by a factor of 1.45, the resolution in lp/mm improves at f5.6 by nearly the ratio of pixel pitches, as expected because neither sensor is resolving any diffraction effects; at f8 the 40D has about a 25% advantage, at f11 about a 20% advantage; then the difference narrows to about a 10% advantage for the 40D for all apertures f16 and narrower, all the way out to f32. Note that to get these results, one must convert the DPR measurements, which are in line pairs/picture height, into line pairs per mm. This is accomplished by dividing each measured result in the DPR test by the frame height in mm (24mm for the 5D, 14.8mm for the 40D). A FF camera tiled with 40D sized pixels would outresolve the 5D at any aperture. One can do the same exercise for the D300/D3 combination on the DPR test of the Nikon 70-200, with much the same results.

This is a very interesting presentation. I'm surprised at several things. First, that you can read the graphs accurately enough to see a 10% difference, or did you get the actual data?

Second, as the system becomes diffraction limited, at the 0% MTF point no further information can be obtained, so regardless of sampling, 40D or 5D, the linear resolution in the focal plane is the same. There is no 10% advantage for the 40D. The measured difference reflects either your reading of the graph, the accuracy of the data, or both.

Note that in the tests, even at f/32, the 5D images (mouse over the test patterns) look much better. For the test to be conducted the way it was (the test pattern filling the frame), they must have changed the distance to the target between the 5D and 40D. If we talk about frame-filling subjects, e.g. use equivalent focal lengths with each sensor, then your statement "A FF camera tiled with 40D sized pixels would outresolve the 5D at any aperture." is not correct. While the spatial resolution in the focal plane is constant for a diffraction limited system, the angular resolution changes. The 5D will win every time in those cases.

Spatial resolution in the focal plane is important only when you use the same lens on both sensors and the lens is not diffraction or aberration limited. When the lens is diffraction or aberration limited, the two sensors will produce equal resolution images, else the finer pixel pitch sensor will produce the better image IF noise is not too objectionable and dynamic range is not compromised. The key is resolution on the subject, not spatial resolution in the focal plane.
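One rough way to see where diffraction becomes comparable to the pixel pitch is to compare the Airy disk diameter (about 2.44 * wavelength * f-number for a circular aperture) with the pitch. This is a simplified sketch only; it ignores lens aberrations, the AA filter and demosaicing, and the pixel pitches are the approximate published values:

# Rough sketch: Airy disk diameter vs pixel pitch, as a guide to where
# diffraction rather than pixel sampling limits resolution in the focal plane.
# Ignores lens aberrations, the AA filter and demosaicing.

wavelength_um = 0.55                      # green light, in microns
pixel_pitch_um = {"5D": 8.2, "40D": 5.7}  # approximate pixel pitches

print("Pixel pitches (microns):", pixel_pitch_um)
for f_number in (5.6, 8, 11, 16, 32):
    airy_diameter_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy disk diameter ~{airy_diameter_um:.1f} microns")
# The Airy disk grows from roughly one pixel across at f/5.6 to several times
# either pitch by f/16-f/32, which is the regime being argued about above.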

Emil:
It is not necessary for smaller pixels to have as much DR; they comprise less of the image. The example I gave of half-tone printing is an extreme example of this fact. Since noise is a function of spatial frequency, DR is as well. Raise the Nyquist frequency, and the DR at that frequency can be lower while maintaining the DR at any fixed spatial frequency in the image.

I would agree IF sensors had zero read noise. But the addition of read noise changes the equation. As pixels get smaller, read noise becomes a more significant part of the signal and you lose. If read noise were zero, small pixels could be added together to equal larger pixels, but with read noise, the summed result is always worse (assuming the same technology).
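A minimal sketch of that point, under the simplifying assumptions that read noise per pixel is the same for both pixel sizes ("same technology") and that full well scales with pixel area; the electron counts are illustrative, not measured values:

import math

# One large pixel vs a 2x2 block of small pixels covering the same area,
# assuming full well scales with pixel area and read noise per pixel is the
# same for both pixel sizes (the "same technology" assumption above).

full_well_large = 60000.0          # electrons, illustrative
read_noise_per_pixel = 5.0         # electrons, illustrative

# Large pixel
dr_large = full_well_large / read_noise_per_pixel

# Four small pixels binned: signal capacity adds, read noise adds in quadrature
full_well_binned = 4 * (full_well_large / 4)
read_noise_binned = math.sqrt(4) * read_noise_per_pixel
dr_binned = full_well_binned / read_noise_binned

print(f"Large pixel DR: {dr_large:.0f}:1 ({math.log2(dr_large):.1f} stops)")
print(f"Binned 2x2 DR:  {dr_binned:.0f}:1 ({math.log2(dr_binned):.1f} stops)")
# With equal per-pixel read noise the binned block gives up one stop of DR;
# if the per-pixel read noise were zero, the summed pixels would match the
# large pixel exactly, which is the point made above.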

But obviously there is a trade. A camera with only large pixels is not very good. A camera with pixels that are too small (given read noise) is also not as good. There is an optimum, but that optimum involves human perception and I don't think you'll ever get everyone to agree.

Emil Martinec:
I am being consistent. The points I have consistently made:


For a given exposure and framing of an image, a larger sensor gathers more light and results in a better image in terms of S/N at any fixed spatial frequency relative to frame size.


But spatial frequency in the focal plane is not what photographers are concerned about. They are concerned about target resolution, thus angular frequency. See the above discussion. It is the angular resolution that is a main factor in image quality.


If an image is cropped for framing, it is the portion of the sensor that made the crop that should be used in making comparisons. If you crop the image, you change what the pixel means in terms of image resolution in lph, so you have changed the scale in the image it represents.

But not in angular resolution. I think this is a key difference. It is the angular resolution that determines the detail on a subject, e.g. a bird at great distance, and that includes effects from diffraction, lens aberrations, as well as pixel pitch.


In shot-noise dominated situations, pixel size (with sensor size fixed) has no impact on noise at any fixed spatial frequency in the image. This is because the light gathered per unit area is unaffected. For instance the 40D, 1Ds3 and 1D3 gather the same number of photons per RAW level per unit area at any given ISO, even though their pixel sizes vary by 60%.

Again, fixed spatial frequency in the focal plane is not the key; it is angular frequency on the subject. If, for example, we use equivalent focal lengths, so the larger sensors have longer focal length lenses and work at the same f/ratio, the larger sensor collects more light. If the sensors have the same pixel count, then the angular resolution on the subject is the same for all the sensors, and the larger sensor, with its larger pixels, collects more photons per pixel. If the lenses are diffraction limited (e.g. f/32), then the larger sensor resolves more detail in the subject. Again, the key is angular resolution on the subject, not spatial resolution in the focal plane.
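To put rough numbers on the equivalent-focal-length case, here is a minimal sketch; the photon flux, the equal pixel count and the sensor dimensions are illustrative assumptions, and it ignores lens transmission and quantum efficiency differences:

# Same framing, same f-ratio, same pixel count: compare full frame to a 1.6x
# crop sensor using "equivalent" focal lengths (e.g. 400 mm vs 250 mm).
# Illustrative sketch only.

photon_flux_per_um2 = 100.0       # photons per square micron for this exposure
pixel_count = 10_000_000          # same pixel count on both bodies (assumption)

sensors = {
    "full frame": (36.0, 24.0),   # sensor width, height in mm
    "1.6x crop":  (22.2, 14.8),
}

for name, (w_mm, h_mm) in sensors.items():
    area_um2 = (w_mm * 1000.0) * (h_mm * 1000.0)
    photons_total = photon_flux_per_um2 * area_um2
    photons_per_pixel = photons_total / pixel_count
    print(f"{name}: {photons_total:.2e} photons total, "
          f"{photons_per_pixel:.0f} per pixel")
# Same f-ratio means the same light per unit area, so the larger sensor collects
# more total light and more photons per (larger) pixel, while the angular
# sampling of the subject is identical because framing and pixel count match.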


In read-noise dominated situations, pixel size can have an effect on image noise at fixed spatial frequency. In current offerings, read noise per unit area varies quite a bit. It also varies with ISO to the point that one camera may be better for read noise at one ISO, and a different model at another ISO. I gave the example of the 40D and 1D3. But generally, at high ISO larger pixels perform better in current offerings for read noise per area. This statement is independent of sensor size.

Again, the key is angular resolution on the subject, not spatial resolution in the focal plane.

Emil: I would never say that sensor size is the only metric for image quality, and I don't believe I have. I think people are too interested in having one single combined aggregate measure of IQ; it's silly. But sensor size is an important factor, and IQ increases with sensor size; resolution is also an important factor, and that favors higher MP counts for any given size sensor. Low light ability is another factor, and that pushes in the direction of larger pixels. Each aspect is weighted differently by different people, and so each will have their own "sweet spot" (which is why it's silly to try to find a single number to define IQ).

I agree that sensor size is an important factor. I just work from the bottom up (pixel level). And the measure of image quality will be different in people's minds. Some will accept more noise with higher spatial resolution, others will want less noise. We are seeing that in the "sensor wars" going on now and the emotions are similar to the film versus digital wars.

Roger

Emil Martinec
09-16-2009, 12:26 PM
Roger: " The sum of small pixels can never equal a large pixel because read noise will always be larger in the summed pixels."
Emil: This is simply not true. The Canon 40D at ISO 100 has 10% less read noise per unit area than the 1D3 (i.e., considering a fixed area of sensor and RMS-aggregating the read noise of all the pixels in that area). Your statement is only true in low light (i.e., high-ISO applications), where the read noise is dominated by the sensor read noise; at low ISO, it is the late-chain read noise contributions which dominate, and smaller pixels have the advantage in current offerings. I suspect it is because there is less light per pixel, and so the situation is similar to a higher ISO on a larger pixel sensor -- one is closer to the point where read noise is sensor dominated.

I'm not sure where you get these numbers. Let's work a problem. ISO 100 read noise:
1DIII: 24.4 electrons, 40D: 20.1 electrons. Pixel size: 1DIII: 7.2 microns. 40D: 5.7 microns.
1 square mm on the 1DIII contains 19290.1 pixels; 40D: 30778.7 pixels.
Summed read noise: 1DIII: 24.4 * sqrt(19290.1) = 3389 electrons; 40D: 20.1 * sqrt(30778.7) = 3526 electrons.
The 40D is slightly worse. But note there are some electronics differences here that bias the comparison. If the technologies were equal in the two cameras, the ISO 100 read noise would be very close. It is probably higher in the 1DIII because the A/D is running at a higher speed, not because of some fundamental physics difference.

Perhaps it's sample variation, but the 20.1 electrons you quote for the 40D seems anomalously high. You quote two sets of read noise figures in your sensor performance summary data tables; one is 20.1 electrons, the other is 17.9. The latter agrees much better with the sample body I analyzed, at 17.1 electrons. Even using the 17.9 figure gives 3140 electrons for the area normalized read noise. Using my figure gives the 10% I quoted.
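Here is the same per-area calculation rerun with the three 40D read noise figures under discussion (a sketch only; which figure best represents a typical body is exactly the question at issue):

import math

def read_noise_per_mm2(read_noise_e, pixel_pitch_um):
    # RMS-sum of per-pixel read noise over 1 square mm of sensor.
    return read_noise_e * math.sqrt((1000.0 / pixel_pitch_um) ** 2)

# 1D3 reference from above, plus the three 40D read noise figures in question
print(f"1D3 (24.4 e-, 7.2 um): {read_noise_per_mm2(24.4, 7.2):.0f} e-/mm^2")
for rn in (20.1, 17.9, 17.1):
    print(f"40D ({rn} e-, 5.7 um): {read_noise_per_mm2(rn, 5.7):.0f} e-/mm^2")
# -> roughly 3389 for the 1D3 vs 3526 / 3140 / 3000 for the 40D: the ranking
#    flips depending on which 40D read noise figure one adopts, which is the
#    crux of the disagreement.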

As for data rates, is there only one ADC, or one for each readout channel? I don't know the details of the design, except that the 1D3 has twice as many readout channels as the 40D. They use the same generation DIGIC chip, but the 1D3 has two of them and the 40D only one; you can see the doubling of the readout channels in the FFT of a black frame.


As for the rest, I think we've both made our positions clear.