
Thread: Why "crop factor" is so pervasive

  1. #1
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default Why "crop factor" is so pervasive

    Attached Thumbnails: Sensor.jpg

    The subject line is a little unclear; what I mean is, why do we continue to read posts citing the importance of crop factor for reach on distant subjects? The explanation that crop factor is irrelevant, and that it is in fact pixel size, pitch or density that is the relevant metric, is often given by Roger et al., but it seems to need repeating. This is not just a BPN phenomenon; it's all over the web.

    There are several reasons and I'm sure the fact that a crop factor camera trims off the edges of the image (in relation to FF) and makes the subject appear relatively bigger in the frame must be compelling, however, I think there is another reason, illustrated in the attached image.

    I took sensor data published on Roger's web site or elsewhere, and picked some current DSLR camera models and a modern point 'n shoot (PNS) as an outlier. The graph plots pixel size (pixel area, or pixel pitch squared) against sensor size (sensor area, or crop factor) and illustrates a simple relationship I had suspected for some time: as sensor size increases, pixel size increases. In other words, in modern cameras larger sensors tend to be sampled more coarsely than smaller sensors. Coarsely sampled sensors provide less reach than finely sampled sensors, but because of the positive relationship between pixel size and sensor size, the two variables get confounded and reach is assumed, in error, to be determined by sensor size.

    The line on the graph is fitted to the points from sensors smaller than FF and shows that the D4 is right where it "should" be, but that the new D800 is way off the trend, as are most of the newer FF cameras. I guess the D3x was way ahead of its time!
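The pixel-pitch-versus-sensor-size relationship plotted above can be reproduced with a few lines of Python; the sensor dimensions and megapixel counts below are approximate, illustrative values rather than exact published specs:

```python
import math

# Approximate sensor width/height (mm) and megapixels -- illustrative values.
cameras = {
    "Nikon D4":   (36.0, 23.9, 16.2),
    "Nikon D800": (35.9, 24.0, 36.3),
    "Canon 7D":   (22.3, 14.9, 18.0),
}

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Pixel pitch in microns, assuming square pixels covering the sensor."""
    area_um2 = (width_mm * 1000.0) * (height_mm * 1000.0)
    return math.sqrt(area_um2 / (megapixels * 1e6))

for name, (w, h, mp) in cameras.items():
    print(f"{name}: {pixel_pitch_um(w, h, mp):.2f} um pitch")
```

Plotting pixel area (pitch squared) against sensor area for a larger set of cameras gives the positive trend described above, with the D800 sitting well below the small-sensor line.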
    Last edited by John Chardine; 03-22-2012 at 01:51 PM. Reason: typos

  2. #2
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    John,

    Greetings. One of the differences between cropped-sensor cameras and full-frame cameras is the magnification of the viewfinder. So the simple comparison of just looking through the viewfinder of one camera and then the other gives the appearance of the cropped-sensor camera having the advantage of greater magnification, which it does... but only in the viewfinder.

    Figuring out pixel sizes, pixels on subject and the like is much less satisfying than "seeing" the difference.

    Cheers,

    -Michael-

  3. #3
    BPN Member Cal Walters's Avatar
    Join Date
    Aug 2008
    Location
    Piedmont, CA
    Posts
    165
    Threads
    37
    Thank You Posts

    Default

    "having the advantage of greater magnification, which it does... but only in the viewfinder"

    But so many people miss this insight. They think they can multiply the focal length of their lens by 1.6, so a 400 becomes a 640. I have had this conversation three times in the last month.

  4. #4
    BPN Member allanrube's Avatar
    Join Date
    Jan 2008
    Location
    Nashua, New Hampshire, United States
    Posts
    1,225
    Threads
    244
    Thank You Posts

    Default

    But pixels on the subject is important. If I get as close to my subject as I can (and let's assume I would like to get closer but cannot, a quite common occurrence here in New England), I have to crop. With an FX Nikon I have to crop more than I would with a DX Nikon. That is where a 16 MP D4 would give me fewer pixels in the final crop than the 12 MP D300s. Right?

  5. #5
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Hi John,

    You've hit the nail squarely on the head. I put crop factor as probably the most misunderstood concept in photography these days; it is only slightly more misunderstood than ISO (which doesn't change sensitivity), and up and coming is the very misunderstood idea that big pixels are less noisy. A few years ago, when there were fewer choices (e.g. before about 2006), large versus small pixels showed a clearer trend with crop factor, so the (wrong) idea was born. Now, with a large range of pixel sizes in the same sensor size, we will see more myths exposed.

    Roger

  6. #6
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    Along with this goes the idea that "full frame" has to be best: "full", obviously, being better than "not full". Might as well give up; marketing always triumphs over reality.
    Rant over - Tom
    Last edited by Tom Graham; 03-22-2012 at 11:25 PM. Reason: corrected spelling

  7. #7
    Avian Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    11,781
    Threads
    751
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    Along with this goes the idea that "full frame" has to be best: "full", obviously, being better than "not full". Might as well give up; marketing always triumphs over reality.
    Rant over - Tom
    A larger sensor is always better than a smaller sensor for general photography because it collects more light. Plus you can pack it with a higher number of pixels (like D800). It is not marketing.
    New! Canon EOS AF Guide for avian in flight photography
    ------------------------------------------------
    Visit my avian galleries
    http://www.ari1982.smugmug.com

  8. #8
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Good comments all.

    Michael- The viewfinder shows more or less (97-100%) of what the sensor is "seeing", so I think we are talking about the same thing regarding magnification in the viewfinder and my comment that crop sensors cut the edges off a full-frame image.

    Allan- Pixels on the subject is essentially what is graphed on the y-axis in the figure above, with more pixels on the subject as pixel area decreases. Smaller-sensor cameras tend to put more pixels on the subject. Correct on your last statement. Only the D800, and marginally the D3x, put as many pixels on the subject as the mid-crop cameras.

    Roger- The modern trend for sure is to bring those FF points down on the graph but I was surprised to see at least two modern cameras (D4 and 1Dx) fit the trend quite well. Overall I guess we will see the slope of the line decrease as more and more pixels are stuffed onto sensors but I know you have said in the past that there is a point of diminishing returns beyond a certain pixel size.

    Arash- Agreed, it's not marketing. I suppose large sensors with lots of pixels are hard to make perfect, and this is why they have been relatively slow to come to the consumer market?

    I don't know whether folks are familiar with gapminder.org. Hans Rosling has developed this amazing (and quite simple) data visualisation concept that animates 2-D graphs with the time-dimension creating the movement. I have often thought that visualising how camera sensors have evolved over the years using this software would be really instructive.
    Last edited by John Chardine; 03-23-2012 at 11:21 AM. Reason: typo

  9. #9
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    Quote Originally Posted by arash_hazeghi View Post
    A larger sensor is always better than a smaller sensor for general photography because it collects more light.
    Arash,

    Couldn't one use the same argument for pixels? A larger pixel is always better than a smaller pixel for general photography because it collects more light?

    Hard to make an apples-to-apples comparison. A large sensor captures more light, but also from increasingly worse parts of the lens (the edges worsening the more you include)... while with large pixels you have fewer edge effects (of the CFA) than a collection of smaller pixels (per area), I think. Unless CFA performance improves with smaller size (which would surprise me).

    John, I'm a Rosling fan... you've seen Tufte's books?

    Cheers,

    -Michael-

  10. #10
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    John,

    There is a new twist to the crop factor/FOV confusion. It's in Nikon's new video options (D4). For the same 1080p output, the pixels are sampled from the full-frame, 1.5-crop or 2.7-crop region of the sensor (with the 2.7 crop being the actual central 1920x1080 pixels of the sensor). So instead of a "crop", it's a different spatial sampling for the same pixel density (HD video). The talk for video is 1.0x, 1.5x, 2.7x teleconverters with a mode change (an 18-200 zoom turns into an 18-540).

    The extra giggle in the D4 is that with the 1920x1080 size selected one can shoot 24 frames per second (images, not video) completely silently (no shutter at all).
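The effective "teleconverter" of a 1:1 pixel video crop follows directly from the pixel counts. As a quick sanity check, using an assumed full-frame horizontal pixel count of 4928 (roughly the D4's; an approximation, not an exact spec):

```python
# Reading 1920x1080 video straight from the central pixels of a full-frame
# sensor acts like a crop/teleconverter. The sensor width in pixels below is
# an assumed, approximate figure for illustration.
sensor_width_px = 4928   # assumed full-frame horizontal pixel count
video_width_px = 1920

crop = sensor_width_px / video_width_px
print(round(crop, 2))  # ~2.57, i.e. roughly the "2.7x" mode discussed above
```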

    Cheers,

    -Michael-

  11. #11
    Avian Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    11,781
    Threads
    751
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Arash,

    Couldn't one use the same argument for pixels? A larger pixel is always better than a smaller pixel for general photography because it collects more light?

    Hard to make apples to apples comparison. A large sensor captures more light but also from increasingly worse parts of the lens (the edges worsening the more you include)... while with large pixels you have fewer edge effects (of the CFA) than a collection of smaller pixels (per area), I think. Unless the CFA performance improves with a smaller size (which would surprise me).

    John, I'm a Rosling fan... you've seen Tufte's books?

    Cheers,

    -Michael-
    Yes Mike, larger pixels have higher SNR, but if the pixels are scaled perfectly according to Moore's law (this addresses the issues you mention), like the Nikon D800's, you can combine the smaller ones to recover the SNR of the larger pixel. So you can get identical high-ISO performance when you need it, and at the same time get excellent resolution at low ISO. If you look at the NEF files from the D800 and D4, the D800 is noisier at the pixel level, but when you down-sample to 16 MP the visual noise is at least comparable up to very high ISOs, with more detail in the D800 files. And needless to say, the D800 is much better than the D700 despite having smaller pixels. However, this is not always true, e.g. the Canon 7D, which suffers from poor FPN, bad CFA spectral response, etc., and which was a failure of pixel scaling at its time. So it really depends.
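The down-sampling point can be checked with a toy simulation: photon (shot) noise only, ideal pixels, no read noise, and arbitrary assumed photon counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small pixels each collect ~25 photons on average; a pixel covering a
# 2x2 block of them would collect ~100 (assumed, illustrative numbers).
small = rng.poisson(25.0, size=(1000, 1000)).astype(float)
snr_small = small.mean() / small.std()          # ~ sqrt(25) = 5

# Sum 2x2 blocks, emulating down-sampling to the large-pixel grid.
binned = small.reshape(500, 2, 500, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()       # ~ sqrt(100) = 10

print(snr_small, snr_binned)
```

With ideal scaling the binned small pixels match the big pixel's SNR; real sensors add per-pixel read noise and CFA effects, which is exactly where scaling can fail (the 7D example above).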

    BTW, the Nikon D800 just achieved the highest DxO sensor mark ever given to any digital camera (including medium format). The score is 95, while the D4 scored 89. This shows the power of Moore's-law scaling :D
    http://www.dxomark.com/index.php/Pub...or-performance
    Last edited by arash_hazeghi; 03-23-2012 at 10:44 AM. Reason: added link

  12. #12
    Avian Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    11,781
    Threads
    751
    Thank You Posts

    Default

    Quote Originally Posted by John Chardine View Post

    Agree it's not marketing. I suppose large sensors with lots of pixels are hard to make perfect and this is why they have been relatively slow to come to the consumer market?
    Yes, there are basically two issues: 1) yield, and 2) the electronics needed to process the massive files. Also, beyond a certain point it becomes diffraction-limited, so there are diminishing returns.
    Last edited by John Chardine; 03-23-2012 at 11:22 AM. Reason: typo in my quote

  13. #13
    Avian Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    11,781
    Threads
    751
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    John,

    There is a new twist to the crop factor/FOV confusion. It's in Nikon's new video options (D4). For the same 1080p output, the pixels are sampled from the full-frame, 1.5-crop or 2.7-crop region of the sensor (with the 2.7 crop being the actual central 1920x1080 pixels of the sensor). So instead of a "crop", it's a different spatial sampling for the same pixel density (HD video). The talk for video is 1.0x, 1.5x, 2.7x teleconverters with a mode change (an 18-200 zoom turns into an 18-540).

    The extra giggle in the D4 is that with the 1920x1080 size selected one can shoot 24 frames per second (images, not video) completely silently (no shutter at all).

    Cheers,

    -Michael-
    I didn't know Nikon could do cropped video! Since video resolution is fixed, cropped video does translate to better reach. Great feature.

  14. #14
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    A larger pixel is always better than a smaller pixel for general photography because it collects more light?
    I mentioned this myth in my post above: "but up and coming is the very misunderstood big pixels are less noisy idea."

    A larger pixel enables the collection of more light; it does not mean that it collects more light. Consider this analogy: you have two buckets, one that holds 2 gallons of water and one that holds 1 gallon. You put the 2-gallon bucket under the faucet and turn on the water for 1 second. Now you put the 1-gallon bucket under the faucet and turn on the water at the same intensity for one second. Assume the amount of water was not enough to overfill either bucket. Which bucket has more water? (If you answer "I hate story problems" you fail the class.) If your answer is that both buckets have the same amount of water, you are correct. Now, what controls how much water is in the bucket? It is not the size of the bucket; it is the force and duration of the water, controlled by the faucet.

    In digital photography, the bucket is the pixel, the faucet is the lens, and the time the faucet is on is the exposure time. There is one thing missing in the analogy, and that is focal length, which spreads out the light; if the faucet had a spray nozzle on the end, the spray would expand with distance from the faucet. Now, for the larger bucket: if it has a larger diameter, it would collect more water because it sees a larger area. But if the smaller bucket were moved closer to the sprayer, so that it covered the same angular area, it would also collect the same amount of water. People talk about the same sensor field of view, but there is also the same pixel field of view. When the pixel field of view is the same, regardless of pixel size, the two pixels collect the same amount of light in the same amount of time and produce the same signal-to-noise ratio.

    So in the case of digital cameras, the amount of light collected is controlled by the lens, its focal length and the exposure time. The larger pixels only ENABLE the collection of more light when the exposure time is long enough. With digital cameras, that only happens at the lowest ISO. At higher ISO, the buckets (pixels) never get filled.

    So to manage noise in digital camera images, one must manage the lens aperture, the focal length, and the exposure time. The focal length manages the pixel field of view. So it is not the pixel that controls the observed noise in an image.
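The pixel-field-of-view argument can be put in numbers; everything below (the flux constant and the lens values) is an arbitrary illustration, not a real camera:

```python
# Photons collected by one pixel scale with aperture area, exposure time,
# and the square of the pixel's angular field of view (pitch / focal length).
# The flux constant is an arbitrary assumption chosen for readable numbers.
def photons(aperture_mm, focal_mm, pitch_um, t_s, flux=1e12):
    pixel_fov = (pitch_um / 1000.0) / focal_mm   # radians (small-angle)
    return flux * (aperture_mm ** 2) * (pixel_fov ** 2) * t_s

# A small pixel on a short lens vs a pixel twice the size on a lens twice as
# long, with the same aperture diameter: same pixel field of view...
a = photons(aperture_mm=100, focal_mm=300, pitch_um=4.3, t_s=0.01)
b = photons(aperture_mm=100, focal_mm=600, pitch_um=8.6, t_s=0.01)
print(a, b)  # ...and the same photon count, hence the same SNR
```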

    Roger
    Last edited by Roger Clark; 03-23-2012 at 11:02 PM.

  15. Daniel Cadieux thanked for this post
  16. #15
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    That's such a good analogy and explanation Roger, many thanks.
    Tom
    Last edited by Tom Graham; 03-23-2012 at 11:12 PM. Reason: typo

  17. #16
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    Roger,

    I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic, like light drop-off from a lens). So, uh, refining your analogy... noise would be a bucket not capturing the expected amount of water given an equalized spray; that is, out of 100 buckets, the few that fall outside a small variance would be noise. The question is what would cause those few buckets to capture less water, and whether it is size-dependent.

    One possible way of thinking: size doesn't matter; out of 100 buckets, no matter their size, you will end up with about the same number of buckets with an error. One might add that the same area would see the same amount of error, so if you see 4 flaws in 100 buckets you would see only 2 flaws in 50 buckets of twice the size (surface area).

    Another might counter: so for a given area, the smaller buckets have twice as many errors!

    No, no. You can't compare that way... you have to add two of the smaller buckets together to normalize the bucket-to-surface-area ratio. When you do that, only two of the combined buckets are still outside of range, which is the same as the larger buckets.

    Oh, you can't count that! You end up reducing meaningful variance between all the buckets when you combine them.

    on and on like that.

    I don't think there is a universally satisfying solution to this conundrum, nor do I think the new camera releases will expose the myth (whichever one it is)... I'm inclined to think that there are a sufficient number of variables between the sensor and the discriminating output to render a definitive answer incalculable.

    Cheers,

    -Michael-

  18. #17
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Roger,

    I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic like light drop off from a lens). .......
    I don't think there is a universally satisfying solution for this conundrum nor do I think the new camera releases will expose the myth (which ever one it is )... I'm inclined to think that there are a sufficient number of variables between the sensor and discriminating output to render a definitive answer incalculable.

    Hi Michael,
    While one can consider the noise we see in our digital camera images an error in measurement, most of the noise we see is due to the light itself, not the lens, sensor, and electronics recording the light.

    Photons arrive at random times, and we count photons for a relatively short interval (the exposure time in a camera). The noise is the square root of the number of photons collected. This is the dominant noise source (the light itself) that we see in our images. The electronics in digital cameras add a small amount of noise, but it usually only becomes a factor in the deepest shadows. It includes read noise from the sensor and noise from the electronics, including fixed pattern noise (FPN). But for any subject you photograph (excluding very long, minutes-long exposure astrophotos), the main noise you see in images is due to photons. Photon noise is quite predictable, and so are sensor read noise and other electronic noise, so the response of a camera is quite predictable. It is mostly basic physics and engineering.

    What is not predictable is how people react to noise. Different people seem to tolerate noise in images differently. One observation I find interesting: as digital cameras were emerging, people didn't like the images because they were "too smooth". People wanted that film grain, as they thought images should look that way. Now people complain about the tiniest amount of noise.

    Roger

  19. #18
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    I tend to think of noise as being an error in light measurement, specific to a pixel (rather than generic like light drop off from a lens).
    Quote Originally Posted by Roger Clark View Post
    While one can consider the noise we see in our digital camera images an error in measurement, most of the noise we see is due to the light itself, not the lens, sensor, and electronics recording the light.
    Roger, what you're saying makes sense to me (regarding the light and light recording). I think I get it now. I've had this strange mental disconnect from the various web discussions about noise and noise comparisons between various new sensors/cameras (talk about noise)...

    So, for me at least, here is the "aha": the meaning of noise gets converted along with raw conversion. Subsequent to raw, it's all signal, selective smoothing and contrasting as we go.

    Cheers,

    -Michael-

  20. #19
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    So, we have a sensor pixel bucket, and it has a filter on it so it collects only "red" wavelength photons. The noise is the square root of the number of photons collected. Does that mean that different red pixel buckets are collecting -different numbers- of red photons (given the same exposure, etc.)? Thus each red pixel bucket, as it is processed by the electronics, appears different to our eyes? This we call noise? Where did I make a "wrong turn" here?
    Tom

  21. #20
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    So, we have sensor pixel bucket. And it has a filter on it so it collects only "red" wave length photons. The noise is square root of number of photons collected. Does that mean that different red pixel buckets are collecting -different number- of red photons? (Given same exposure etc). Thus each red pixel bucket, as it is processed by electronics, appears different to our eyes? This we call noise? Where did I make a "wrong turn" here?
    Tom
    Hi Tom.

    Let's try this. Say you are imaging a uniform red target and the amount of light reaching the pixel, on a long-term average, is 100 photons per second, but we expose for only 1 second. Say we have a 1-megapixel camera. In each pixel we would expect 100 photons, but when we look at each pixel it varies: say 93 in one pixel, 102 in the next, 88 in another, 114 in another, and so on. If we average all 1 million pixels we will get a number very, very close to 100. But the variation, the standard deviation, in each pixel will be 10. So the noise is ten (the square root of 100) and the signal-to-noise ratio = 100 / sqrt(100) = sqrt(100) = 10.

    The above was for a perfect camera. Then add electronics and sensor read noise on top of that, but that noise is pretty small. Since this was a low-signal situation, we would have been imaging at high ISO. On a 1D Mark IV, the high-ISO read noise would be about 2 electrons. So the noise seen would be the 10 electrons from the detected photons plus the 2 electrons from the sensor and electronics; noise sources add as the square root of the sum of squares, so sqrt(10*10 + 2*2) = 10.2. This illustrates why most of the noise we see in our images results from photon noise (Poisson counting statistics), except in the deepest shadows where we detect only a few photons per pixel.
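Roger's numbers are easy to verify with a quick simulation (numpy; the 2-electron read noise figure is the one quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # a 1-megapixel "uniform red target"

photons = rng.poisson(100.0, size=n).astype(float)   # 100 expected per pixel
read = rng.normal(0.0, 2.0, size=n)                  # ~2 e- read/electronics noise
signal = photons + read

print(photons.std())                 # ~10.0 : shot noise = sqrt(100)
print(signal.std())                  # ~10.2 : sqrt(10^2 + 2^2), quadrature sum
print(signal.mean() / signal.std())  # ~9.8  : overall signal-to-noise ratio
```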

    Probably more than you wanted to know...

    Roger

  22. #21
    BPN Member Hazel Grant's Avatar
    Join Date
    Sep 2008
    Location
    Southern Illinois
    Posts
    1,684
    Threads
    283
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    I mentioned this myth in my post above: "but up and coming is the very misunderstood big pixels are less noisy idea."

    A larger pixel enables the collection of more light; it does not mean that it collects more light. Consider this analogy: you have two buckets, one that holds 2 gallons of water and one that holds 1 gallon. You put the 2-gallon bucket under the faucet and turn on the water for 1 second. Now you put the 1-gallon bucket under the faucet and turn on the water at the same intensity for one second. Assume the amount of water was not enough to overfill either bucket. Which bucket has more water? (If you answer "I hate story problems" you fail the class.) If your answer is that both buckets have the same amount of water, you are correct. Now, what controls how much water is in the bucket? It is not the size of the bucket; it is the force and duration of the water, controlled by the faucet.

    In digital photography, the bucket is the pixel, the faucet is the lens, and the time the faucet is on is the exposure time. There is one thing missing in the analogy, and that is focal length, which spreads out the light; if the faucet had a spray nozzle on the end, the spray would expand with distance from the faucet. Now, for the larger bucket: if it has a larger diameter, it would collect more water because it sees a larger area. But if the smaller bucket were moved closer to the sprayer, so that it covered the same angular area, it would also collect the same amount of water. People talk about the same sensor field of view, but there is also the same pixel field of view. When the pixel field of view is the same, regardless of pixel size, the two pixels collect the same amount of light in the same amount of time and produce the same signal-to-noise ratio.

    So in the case of digital cameras, the amount of light collected is controlled by the lens, its focal length and the exposure time. The larger pixels only ENABLE the collection of more light when the exposure time is long enough. With digital cameras, that only happens at the lowest ISO. At higher ISO, the buckets (pixels) never get filled.

    So to manage noise in digital camera images, one must manage the lens aperture, the focal length, and the exposure time. The focal length manages the pixel field of view. So it is not the pixel that controls the observed noise in an image.

    Roger
    Thanks. That "visual" helps me understand so much more.

  23. #22
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    Good, Roger, thanks. What I see as noise is then varying "intensities" (numbers of photons) from pixel to pixel? "Intensity" (correct word?) being, if expressed as 8 bits, from 0 to 255? Trying to show this visually, here are three "red" pixels, each after collecting different numbers of photons, with the one on the left and the one on the right being noise?

    Attached image: noise.jpg

    "Average" more mathematically being the "mean", yes? And if the mean value is 100 as you use above, the variation is 10. While if the mean should be 36, the variation is 6. So, comparing 10 to 100 and 6 to 36, it is obvious that 10 is smaller proportion to 100 than 6 is to 36. Thus the 36 mean photon event will appear noisier (than the 100)?
    Tom

  24. #23
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    Good, Roger, thanks. What I see as noise is then varying "intensities" (numbers of photons) from pixel to pixel? "Intensity" (correct word?) being, if expressed as 8 bits, from 0 to 255? Trying to show this visually, here are three "red" pixels, each after collecting different numbers of photons, with the one on the left and the one on the right being noise?



    "Average" more mathematically being the "mean", yes? And if the mean value is 100 as you use above, the variation is 10. While if the mean should be 36, the variation is 6. So, comparing 10 to 100 and 6 to 36, it is obvious that 10 is smaller proportion to 100 than 6 is to 36. Thus the 36 mean photon event will appear noisier (than the 100)?
    Tom
    Yes, you've got it. While the 36 mean has a lower signal-to-noise ratio than the 100, perception is another factor. In an image, we tend not to see noise in the brightest pixels, and the darkest pixels don't show a lot of noise because they are so dark. It tends to be the middle grays where we perceive the most noise in our images. Also, we perceive less noise in strong colors, especially towards red or blue; we perceive the most noise in gray-green midtones. So the technical signal-to-noise ratio is folded in with human perception. Apparent noise will also depend on brightness: for example, examining a print in bright office light will usually show more noise than viewing the same print in dim room light.

    Roger

  25. #24
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    Thanks again Roger, your explanations are, as always, very lucid. Our eye-brain system is stranger than fiction! There have been some excellent TV shows about it; they should be required viewing for all serious photographers.
    Tom

  26. #25
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    I mentioned this myth in my post above: "but up and coming is the very misunderstood big pixels are less noisy idea."

    A larger pixel enables the collection of more light; it does not mean that it collects more light. Consider this analogy: you have two buckets, one that holds 2 gallons of water and one that holds 1 gallon. You put the 2-gallon bucket under the faucet and turn on the water for 1 second. Now you put the 1-gallon bucket under the faucet and turn on the water at the same intensity for one second. Assume the amount of water was not enough to overfill either bucket. Which bucket has more water? (If you answer "I hate story problems" you fail the class.) If your answer is that both buckets have the same amount of water, you are correct. Now, what controls how much water is in the bucket? It is not the size of the bucket; it is the force and duration of the water, controlled by the faucet.

    In digital photography, the bucket is the pixel, the faucet is the lens, and the time the faucet is on is the exposure time. One thing missing from the analogy is focal length, which spreads out the light: if the faucet has a spray nozzle on the end, the spray expands the further it travels from the faucet. Now for the larger bucket: if it has a larger diameter, it collects more water because it sees a larger area. But if the smaller bucket were moved closer to the sprayer, so that it collected the same angular area, it would also collect the same amount of water. People talk about the same sensor field of view, but there is also the same pixel field of view. When the pixel field of view is the same, regardless of pixel size, the two pixels collect the same amount of light in the same amount of time and produce the same signal-to-noise ratio.

    So in the case of digital cameras, the amount of light collected is controlled by the lens, its focal length and the exposure time. The larger pixels only ENABLE the collection of more light when the exposure time is long enough. With digital cameras, that only happens at the lowest ISO. At higher ISO, the buckets (pixels) never get filled.

    So to manage noise in digital camera images, one must manage the lens aperture, the focal length, and the exposure time. The focal length manages the pixel field of view. So it is not the pixel that controls the observed noise in an image.

    Roger
    I've been ruminating a bit on this one Roger! What caught my eye on initial reading was the analogy of a faucet and a bucket, but particularly the faucet. The water coming out of your faucet is light = photons, and indeed if light poured onto a pixel bucket like a faucet, and the stream coming out of the faucet were a lot narrower than the mouth of the bucket, then I agree that buckets of different sizes would collect the same amount of light. Even buckets with different-sized openings at the top would collect the same amount of light. However, intuitively (remember, I'm a biologist!) I don't view light this way- to me it's more like a heavy, drenching, minutely fine mist (of photons) falling on the pixels at the speed of light. If light is more like this than tiny streams from faucets, then the width of the pixel opening would hugely affect how much light the pixel collects, with the amount of light collected being proportional to the pixel area (pixel pitch squared). A 64 square micron pixel should collect twice as much light as a 32 square micron pixel in the same amount of time. Therefore I would conclude that big pixels collect more light than small pixels per unit time, therefore have a higher signal-to-noise ratio, and therefore appear less noisy.

    What is wrong with this logic? Can my intuitive view of light be so wrong???

  27. #26
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by John Chardine View Post
    I've been ruminating a bit on this one Roger! What caught my eye on initial reading was the analogy of a faucet and a bucket, but particularly the faucet. The water coming out of your faucet is light = photons, and indeed if light poured onto a pixel bucket like a faucet, and the stream coming out of the faucet were a lot narrower than the mouth of the bucket, then I agree that buckets of different sizes would collect the same amount of light. Even buckets with different-sized openings at the top would collect the same amount of light. However, intuitively (remember, I'm a biologist!) I don't view light this way- to me it's more like a heavy, drenching, minutely fine mist (of photons) falling on the pixels at the speed of light. If light is more like this than tiny streams from faucets, then the width of the pixel opening would hugely affect how much light the pixel collects, with the amount of light collected being proportional to the pixel area (pixel pitch squared). A 64 square micron pixel should collect twice as much light as a 32 square micron pixel in the same amount of time. Therefore I would conclude that big pixels collect more light than small pixels per unit time, therefore have a higher signal-to-noise ratio, and therefore appear less noisy.

    What is wrong with this logic? Can my intuitive view of light be so wrong???
    Hi John,

    While part of the explanation is correct, you changed a critical parameter in the middle. When you changed the area of the pixel, you reduced the resolution (e.g. pixels on subject). While the mist idea is a first approximation, light in a camera system is not like mist falling uniformly all over. It is more a focused beam, so more like a spray nozzle spraying water and that is why I used the spray nozzle idea in my explanation.

    So what you say is correct, that the larger-area pixel collects more light, and one can certainly look at the problem that way (and most photographers do just that). But it is a situation like crop sensors giving the impression of greater telephoto reach. The crop is not the reason for more telephoto reach; it is pixel pitch. Similarly, it is not pixel size that determines the amount of light gathered; it is the lens diameter and the angular area of the pixel that enable the pixel to collect the light. So if you equalize the angular area of the pixel, the amount of light collected is the same for a given exposure, regardless of pixel size. In your example of the 32 and 64 square micron pixels, put a 1.4x TC on the camera with the 64 sq micron pixels using the same lens (let's say a 300 f/2.8), and the 64 sq micron pixel system puts the same pixels on the subject as the 32 sq micron pixel system (a 1.4x focal-length increase halves the pixel's angular area), so both cameras get the same amount of light in a given exposure time. The larger pixels didn't magically get more light.

    In summary, the control of how much light we get in the camera is: 1) lens diameter, 2) angular area seen by the pixel, and 3) exposure time. (I'm assuming all similar focal length lenses would be pretty close in transmission--which they are.) So like crop factor, pixel area is actually not part of the equation for the resolution on subject and the amount of light a pixel receives. The angular size of a pixel is proportional to the ratio of pixel size to lens focal length (and its angular area to the square of that ratio), not pixel size alone. I've written this up in more detail at:
    http://www.clarkvision.com/articles/...m.performance/
    but I plan to add a lot more to the page with examples of how this works.

    But once one realizes this paradigm, we see that adding a TC is no different from decreasing pixel size by a corresponding factor (when working at higher ISO so the pixels don't overflow). There is then no high-ISO/low-light advantage to large pixels.
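    The bookkeeping above can be sketched numerically. This is an illustrative model only (small-angle approximation, idealized transmission; the 107 mm aperture stands in for a 300 f/2.8, and the pixel pitches are made up for the example): light per pixel scales as the lens collecting area times the pixel's angular area.

```python
import math

def light_per_pixel(aperture_mm, pitch_um, focal_mm, exposure_s):
    # Relative light one pixel collects: lens collecting area
    # times the pixel's angular area times the exposure time.
    collecting_area = math.pi * (aperture_mm / 2.0) ** 2
    angular_area = (pitch_um * 1e-6 / (focal_mm * 1e-3)) ** 2  # small-angle, sr
    return collecting_area * angular_area * exposure_s

# Same 107 mm physical aperture: an 8 um pixel behind 600 mm of focal
# length (e.g. the lens plus a 2x TC) sees the same angular area as a
# 4 um pixel behind the bare 300 mm, so both collect the same light
# per exposure despite the 4x difference in pixel area.
a = light_per_pixel(107, 8, 600, 1 / 1000)
b = light_per_pixel(107, 4, 300, 1 / 1000)
print(math.isclose(a, b))  # True
```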

    Did you ever decide on the 300 f/2.8 versus the 500 f/4? Given the choice today, I would choose a 300 f/2.8 and a body with 5 micron pixels. Gee, that's a D800.

    Roger

  28. #27
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    John, I think you hit the nail on the head. Roger, thanks for the explanation. I think I finally got it (of course, I've thought that before).

    Quote Originally Posted by Roger Clark View Post
    So what you say is correct, that the larger area pixel collects more light, [...]
    So one might reasonably conclude for individual pixels:

    More light less noise.
    Larger pixel more light.
    Larger pixel less noise.

    The catch is in comparing noise at the image level. Right?

    Which brings in:

    Quote Originally Posted by Roger Clark View Post
    So if you equalize the angular area of the pixel, the amount of light collected is the same for a given exposure, regardless of pixel size.[...]
    Isn't this the same as saying "normalizing for pixel size"? Which is also the same as normalizing resolution?

    Cheers,

    -Michael-

  29. #28
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Thanks Roger. I'll take a look at your write-up.

    Still not decided on the 300 or 500. I have a (used but new to me) 400DO right now and I plan to give it a good going-over in the next few weeks.

  30. #29
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    John, I think you hit the nail on the head. Roger, thanks for the explanation. I think I finally got it (of course, I've thought that before).



    So one might reasonably conclude for individual pixels:

    More light less noise.
    Larger pixel more light.
    Larger pixel less noise.

    The catch is in comparing noise at the image level. Right?

    Which brings in:



    Isn't this the same as saying "normalizing for pixel size"? Which is also the same as normalizing resolution?

    Cheers,

    -Michael-
    This has helped clarify things Michael, except for one slight modification to your thesis:

    More light same amount of noise
    Larger pixel more light.
    Larger pixel same amount of noise.
    Larger pixel higher signal to noise ratio

    all of this at the individual pixel level, like you said.

  31. #30
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    "Photons arrive at random times and we are counting photons for a relatively short interval (the exposure time in a camera). The noise is the square root of the number of photons collected."
    "More light same amount of noise"
    No, more light means more noise. That is, with 64 photons the noise is 8 (standard deviation), but with more light, say 225 photons, the noise is 15.
    How about - "More light less apparent noise"?
    Tom

  32. #31
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by John Chardine View Post
    This has helped clarify things Michael, except for one slight modification to your thesis:

    More light same amount of noise
    Larger pixel more light.
    Larger pixel same amount of noise.
    Larger pixel higher signal to noise ratio

    all of this at the individual pixel level, like you said.
    This again just repeats variations on the idea that larger pixels produce less apparent noise, along the same lines as the false idea that crop factor multiplies focal length.

    More light = more noise, but higher signal-to-noise ratio, so we perceive less noise.

    "larger pixel more light" again is incorrect (crop factor analogy). Correct statement is larger angular pixel field of view = more light (for a given lens diameter and exposure time)

    "larger pixel same amount of noise" is only true if the angular field of view of the pixel is the same. Pixel size is irrelevant. (again for a given lens diameter and exposure time)

    "Larger pixel higher signal to noise ratio" again pixel size is irrelevant. Angular area of the pixel for a given lens diameter is the key.

    We need a clean slate (or a clean Etch A Sketch): forget linear pixel size and think only in terms of angular pixel size.

    Example: you photograph a bird in flight with a 500 mm lens + 2x TC wide open. You want more light without changing exposure. What to do? Take off the TC. For the same exposure, you get 4 times the light per pixel without changing the linear pixel size. Why? The angular area of the pixel has changed 4x. But now you have 4x fewer pixels (by area) on the bird. If you want the same pixels on subject, move 2x closer without the TC. Now you have 4x more light and the same number of pixels on the subject. And we didn't change the linear size of a pixel. If you moved 2x closer without taking off the TC, you would get the same light per pixel, because we didn't change the angular area of a pixel. Linear pixel size is irrelevant and just causes erroneous ideas.
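    The 4x factors in this example are just the squares of the 2x changes; a quick arithmetic check (focal lengths and distances as in the scenario above):

```python
# Light per pixel (fixed lens diameter and exposure) scales as
# (pixel_pitch / focal_length)^2, i.e. the pixel's angular area.
# 500 mm + 2x TC = 1000 mm effective focal length.
light_gain = (1000 / 500) ** 2
print(light_gain)  # 4.0 -- removing the TC gives 4x the light per pixel

# Pixels on the subject (by area) scale as (focal_length / distance)^2.
pixels_change = (500 / 1000) ** 2
print(pixels_change)  # 0.25 -- i.e. 4x fewer pixels on the bird

# Move 2x closer without the TC: pixels on subject are restored,
# while the 4x light-per-pixel gain remains.
restored = (500 / 0.5) ** 2 / (1000 / 1.0) ** 2
print(restored)  # 1.0
```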

    Roger

  33. #32
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    "larger pixel more light" again is incorrect (crop factor analogy). Correct statement is larger angular pixel field of view = more light (for a given lens diameter and exposure time)
    Roger, For two pixels sitting next to each other on the same sensor, isn't larger angular pixel field of view equivalent to a larger pixel?

    So if larger angular pixel field of view = more light
    then larger pixel = more light

    Right?

    Cheers,

    -Michael-

  34. #33
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    "Photons arrive at random times and we are counting photons for a relatively short interval (the exposure time in a camera). The noise is the square root of the number of photons collected."
    "More light same amount of noise"
    No, more light more noise? That is, if 64 photons the noise is 8 (standard deviation) but if more light 225 photons noise is 15.
    How about - "More light less apparent noise"?
    Tom
    I was incorrect Tom, for this (thanks) and other reasons mentioned by Roger. More light means somewhat more noise (noise = square root of signal).

  35. #34
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Thanks Roger. You can probably tell this is a tough one for me to grasp (the pixel size/noise myth) but I promise to work on it. I see you call it an advanced topic at the link you provided. Advanced it is.

  36. #35
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    Roger is (as you may know) a world recognized/renowned scientist in signal processing. It is great he can explain this at a level most of us can understand - if we try long enough. Come back to it and eventually it will come together for you. I have had a college course in statistics and it is not "intuitive" (unless you have Roger's brains). Yet it is fundamental, and largely hidden, in our everyday lives.
    Tom

  37. #36
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    Roger is (as you may know) a world recognized/renowned scientist in signal processing. It is great he can explain this at a level most of us can understand - if we try long enough. Come back to it and eventually it will come together for you. I have had a college course in statistics and it is not "intuitive" (unless you have Roger's brains). Yet it is fundamental, and largely hidden, in our everyday lives.
    Tom
    Yeah, and I can be wrong too! I can also be confused.

    Actually, while I do this stuff for a living, it still took a while to sink in for digital photography, as the old film paradigm was pretty entrenched in me. But once I started thinking about the total system (lens, its focal length, and the sensor with pixels), it became no different than a spacecraft sensor, and it still took a while to purge the old film and f/ratios ideas. I probably still am purging and changing the paradigm. And I probably still need to fix some web pages that are in the old paradigm. These discussions help me refine my explanation.

    Roger

  38. #37
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Roger, For two pixels sitting next to each other on the same sensor, isn't larger angular pixel field of view equivalent to a larger pixel?

    So if larger angular pixel field of view = more light
    then larger pixel = more light

    Right?

    Cheers,

    -Michael-
    Yes, but the linear pixel size argument is similar to the "crop sensor = more telephoto reach" argument; it seems to work to explain some situations. So the larger pixel has the capability to collect more light, but it is the lens that delivers the light. So the lens is the key, not the pixel. In a specific situation, one can use the linear size as a substitute for angular size, but that keeps one in the wrong paradigm. For example, in the scenario I gave of changing TCs and/or distance to the subject, the light received changed and the pixel size did not.

    Roger

  39. #38
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Quote Originally Posted by John Chardine View Post
    Thanks Roger. You can probably tell this is a tough one for me to grasp (the pixel size/noise myth) but I promise to work on it. I see you call it an advanced topic on the link you provided. Advanced it is.
    I understand. Whenever there is a paradigm shift, it takes a while to sink in. I've been working on this for months, and it is still sinking in.

    Roger

  40. #39
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Quote Originally Posted by Tom Graham View Post
    Roger is (as you may know) a world recognized/renowned scientist in signal processing. It is great he can explain this at a level most of us can understand - if we try long enough. Come back to it and eventually it will come together for you. I have had a college course in statistics and it is not "intuitive" (unless you have Roger's brains). Yet it is fundamental, and largely hidden, in our everyday lives.
    Tom
    Yes Tom, we are certainly privileged to have someone of the calibre of Roger to interact with.

    I'm OK with the stats part myself but my weakness is on the light and lens technology side. I think if I can crack angular pixel size I'll be getting there.

  41. #40
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    I think what makes BPN such a great site is there are so many knowledgeable people in so many areas.

    Roger

  42. #41
    BPN Member Karl Egressy's Avatar
    Join Date
    Mar 2009
    Location
    Ontario, Canada
    Posts
    5,510
    Threads
    518
    Thank You Posts

    Default

    I was reading through this thread a couple of times, as the title indicated that I would get an answer as to how crop factor is related to the size of the picture I'm dealing with.
    As it swayed towards more scientific details, the crop factor got lost in the discussion, at least for the average but curious person like me.
    Here is how I see it, and I want to be enlightened if I see it wrong.
    I compare the Canon 1D Mark IV against the Canon 5D Mark III, as I have the first one and plan to buy the second.
    I almost strictly shoot birds. I almost always crop the image.
    It is important that the remaining pixels will result in a presentable image.
    The Mark IV has a sensor size of 518.9 square mm and a pixel count of 16.2 megapixels on that sensor.
    The Mark III has an 864 square mm sensor and a 22.2 megapixel count on that size of sensor.
    Therefore, cropping the image to be comparable to that of the Mark IV, the pixel count of the Mark III sensor will be 518.9/864 * 22.2 = 13.3 megapixels.
    So the full frame picture will represent a 13.3 megapixel image compared to the 16.2 megapixels of the 1.3-crop image.
    In other words, you will have a 13.3 megapixel image on the Canon 5D Mark III in Canon 1D Mark IV terms.
    You will have a better quality, less noisy image but a lower pixel density.
    I would like to be corrected, in simple terms if possible.
    Thank you.
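    The arithmetic above can be verified directly (sensor areas and pixel counts as stated in this post):

```python
# Crop a 5D Mark III frame (864 sq mm, 22.2 MP) down to the
# 1D Mark IV sensor area (518.9 sq mm); pixel pitch is unchanged,
# so the remaining megapixels scale with the area kept.
remaining_mp = 518.9 / 864 * 22.2
print(round(remaining_mp, 1))  # 13.3 megapixels left after the crop
```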
    Last edited by Karl Egressy; 03-31-2012 at 08:59 AM.

  43. #42
    BPN Member Roger Clark's Avatar
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,956
    Threads
    257
    Thank You Posts

    Default

    Karl,
    If you are cropping your 1DIV images, sensor size has nothing to do with any of this. The differences you would see are due entirely to pixel pitch or pixel density.

    Roger

  44. #43
    BPN Member Karl Egressy's Avatar
    Join Date
    Mar 2009
    Location
    Ontario, Canada
    Posts
    5,510
    Threads
    518
    Thank You Posts

    Default

    "Karl,
    If you are cropping your 1DIV images, sensor size has nothing to do with any of this. The differences you would see are due entirely to pixel pitch or pixel density.

    Roger "



    Thanks Roger.
    I am just strictly focusing on image size so I can compare apples to apples, disregarding pixel density and pitch.
    Is the math correct in my post?
    Will the FF image appear as if it was shot with a 1.3 crop at 13.3 megapixel in terms of image size?
    As if the FF was equivalent with a 1.3 crop camera having only 13.3 megapixels?

  45. #44
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    "Will the FF image appear as if it was shot with a 1.3 crop at 13.3 megapixel in terms of image size?"
    Allow me to try to re-phrase the question - is it the same question as yours?
    - We print both images at 5x7 inches, disregarding print resolution, PPI (or DPI). That is, we print both at whatever PPI/DPI prints the full frame just filling the 7 inches. Will the subject bird that measures 4 inches on the 1D print also measure 4 inches on the 5D print?
    Tom

  46. #45
    BPN Member dankearl's Avatar
    Join Date
    Jan 2011
    Location
    Portland, Oregon
    Posts
    5,171
    Threads
    666
    Thank You Posts

    Default

    I, too, am lost in the details here.
    I just got the Nikon D800, which gives you 4 different crop modes.
    Am I correct to assume that the crops are just done in camera? In other words, other than the file size, should I
    just take FX (full frame) images and crop them myself?
    Am I going to get the same quality cropping after the fact as just taking the image in DX or 1.2 crop mode?
    If that is the case, why aren't all cameras full frame?
    Dan Kearl

  47. #46
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    "If that is the case, why aren't all camera's full frame?"
    Taking this literally ( but applies FF, DX, APC) why ? - size, weight, and cost. Of both the body and lens. I would like a DSLR half the size body, about 1/4 sensor, of a current FF - with same IQ. Then I could afford in terms of both weight and cost to fly to African safari with all the kit I want.
    Tom
    Last edited by Tom Graham; 03-31-2012 at 12:21 PM.

  48. #47
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    1,429
    Threads
    213
    Thank You Posts

    Default

    Quote Originally Posted by dankearl View Post
    I just got the Nikon D800, which gives you 4 different crop modes.
    Am I going to get the same quality cropping after the fact as just taking the image in DX or 1.2 crop mode?
    If that is the case, why aren't all cameras full frame?
    Yes, cropping to the DX or 1.2 field of view in camera and cropping after the fact are equivalent.

    Cropped sensors exist for production-cost reasons.

    Cheers,

    -Michael-

  49. #48
    Forum Participant John Chardine's Avatar
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,631
    Threads
    650
    Thank You Posts

    Default

    Quote Originally Posted by Karl Egressy View Post
    I was reading through this thread a couple of times, as the title indicated that I would get an answer as to how crop factor is related to the size of the picture I'm dealing with.
    As it swayed towards more scientific details, the crop factor got lost in the discussion, at least for the average but curious person like me.
    Here is how I see it, and I want to be enlightened if I see it wrong.
    I compare the Canon 1D Mark IV against the Canon 5D Mark III, as I have the first one and plan to buy the second.
    I almost strictly shoot birds. I almost always crop the image.
    It is important that the remaining pixels will result in a presentable image.
    The Mark IV has a sensor size of 518.9 square mm and a pixel count of 16.2 megapixels on that sensor.
    The Mark III has an 864 square mm sensor and a 22.2 megapixel count on that size of sensor.
    Therefore, cropping the image to be comparable to that of the Mark IV, the pixel count of the Mark III sensor will be 518.9/864 * 22.2 = 13.3 megapixels.
    So the full frame picture will represent a 13.3 megapixel image compared to the 16.2 megapixels of the 1.3-crop image.
    In other words, you will have a 13.3 megapixel image on the Canon 5D Mark III in Canon 1D Mark IV terms.
    You will have a better quality, less noisy image but a lower pixel density.
    I would like to be corrected, in simple terms if possible.
    Thank you.
    Hi Karl- The title of the original post related to the oft-quoted, pervasive myth that crop factor affects reach, and didn't specifically promise to address how crop factor relates to the size of the picture. Also, I am not entirely sure what you mean by "size of picture"- a printed image, or the size of the file on disk?

    Your calculations and logic appear to be correct. If you cut a 5DIII sensor down to the size of a 1DIV sensor you would get a roughly 13mp sensor. By the same token, if you cut a D3s sensor down to the size of a 7D sensor you are left with a 5mp camera (actually 4.6). Given all the beautiful images and prints made from a D3s, it only goes to show that 5mp cameras do a great job (Roger makes this point about an enlargement from a 3mp camera, if I remember correctly).

    "You will have a better quality and less noise image but smaller pixel density". - I'm not entirely sure about the first two parts of your conclusion here, but agreed you would have less pixels on the subject with the 5DIII compared to the 1DIV.
    Last edited by John Chardine; 03-31-2012 at 04:37 PM.

  50. #49
    BPN Member Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,114
    Threads
    33
    Thank You Posts

    Default

    And if the required image size is no larger than 1024 pixels, how small can the sensor's pixels be before viewed IQ suffers noticeably? Not even "noticeably", but even "slightly"?
    Tom

  51. #50
    BPN Member Karl Egressy's Avatar
    Join Date
    Mar 2009
    Location
    Ontario, Canada
    Posts
    5,510
    Threads
    518
    Thank You Posts

    Default

    Thank you, John Chardine. "Reply with Quote" did not work for me for some reason.
    I got the answer I was looking for.
    Thank you.
