I have been under the impression for a while that loss of image quality due to diffraction effects is caused by the sensor, with smaller sensor sites causing diffraction effects at wider apertures than larger sensor sites. However, I have read several references recently to lenses that suffer from diffraction at small apertures more than others. So, is diffraction at small apertures caused by the lens, the sensor, or a combination of both? Are certain lenses more prone to diffraction than others, keeping the sensor constant?
Last edited by John Chardine; 10-12-2011 at 07:27 AM.
Diffraction comes from the fundamental laws of optics; it depends on the wavelength and the numerical aperture (focal length divided by diameter, in air). It does not depend on the sensor or on the particular lens used. Diffraction is always there regardless of the sensor. For a given aperture, a sensor with small pixels resolves the diffraction while a sensor with large pixels doesn't.
OK Arash. Thanks. Good explanation. And in the meantime I've done some reading. The Wiki article is good.
So the source of the diffraction is the light going through the aperture of the lens and is inevitable. However, can you correct for this to some degree? Are some lens designs better than others? The reason I ask is that a while back I started a thread looking at diffraction effects using the Canon 100mm f2.8 Macro. I was shocked by how poor the IQ was when the lens was stopped down to f22 and f32 on the 1DIV.
No, diffraction is independent of the optical design. It has nothing to do with the IQ of the particular lens; it is the wave nature of light.
There are some techniques for improving resolution beyond the diffraction limit used in advanced microscopy, but these are "near-field" methods and are not applicable in a simple camera system.
Almost all the cameras on the market will be diffraction limited at f/16 and beyond, so you stop down for DOF, not for improved sharpness/resolution. At such small apertures the lens practically becomes a simple pinhole :)
Last edited by arash_hazeghi; 10-12-2011 at 11:40 AM.
I think this needs a little clarification. An f/16 aperture may be small on a 20 mm lens (20/16 = 1.25 mm), but on a 600 mm lens, the aperture would be 600/16 = 37.5 mm, so hardly a pinhole.
The diffraction spot size is a constant linear size in the focal plane at a given f/ratio and gets smaller with faster f/ratios, so, an f/4 lens will have a larger diffraction spot size than an f/2.8 lens assuming both are diffraction limited (and lenses like the 300 f/2.8 L IS and 300 f/4 L IS are close to diffraction limited wide open). At f/8, the diffraction spot diameter for green light is 10.3 microns, so a camera with smaller pixels will sample the effects of diffraction at slower f/ratios than will a camera with larger pixels.
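These spot sizes are easy to check with the standard Airy-disk formula for a circular aperture, d = 2.44 * lambda * N. A quick sketch (the 0.53 micron "green" wavelength is an assumption chosen to match the 10.3 micron figure above):

```python
# Airy disk (diffraction spot) diameter for a circular aperture:
# d = 2.44 * wavelength * f_ratio (diameter out to the first minimum).
def airy_diameter_um(f_ratio, wavelength_um=0.53):
    """Diffraction spot diameter in microns."""
    return 2.44 * wavelength_um * f_ratio

for n in (2.8, 4, 5.6, 8):
    print(f"f/{n}: {airy_diameter_um(n):.1f} um")  # f/8 gives ~10.3 um
```

As expected, the spot grows in direct proportion to the f-ratio, which is why an f/4 lens has a larger diffraction spot than an f/2.8 lens.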
John was talking about macro; I don't think anyone shoots a 600mm at f/16 for macro work.
The diameter of the Airy disk at 300mm and f/8 in air is 17um for 520nm (shortest green). The smaller-pixel camera will hit the diffraction limit at a faster f-stop (larger aperture, smaller f-number) than a larger-pixel camera. For example, if a camera like the 7D hits the diffraction limit at f/5.6, a 5D will hit it at about f/11.
Last edited by arash_hazeghi; 10-13-2011 at 10:05 AM.
Many do use the Canon 300 f/4 for macro work, with extenders, making it 600 mm.
Originally Posted by arash_hazeghi
The diameter of the Airy disk at 300mm and f/8 in air is 17um for 520nm (shortest green). The smaller-pixel camera will hit the diffraction limit at a faster f-stop (larger aperture, smaller f-number) than a larger-pixel camera. For example, if a camera like the 7D hits the diffraction limit at f/5.6, a 5D will hit it at about f/11.
Yes, except it doesn't matter if it is 300 mm. The diffraction spot diameter will be the same at a given f/stop regardless of lens focal length (I'm sure you know this--just trying to clarify so others are not confused).
Also, diffraction spot diameter is not the key metric as there is detail resolved below the diffraction spot diameter, just at reduced contrast (MTF). Better to specify the Dawes Limit or Rayleigh criterion. And even then, this only applies to trying to distinguish two closely spaced subjects. Finer sampling beyond the 0% MTF provides better definition of the shape (e.g. round star or hair line) of an isolated subject.
I am not sure if a competent macro photographer would use a 300 and extender for macro; of course amateurs would use anything. The diffraction diameter is d = 1.22*lambda*NA, where NA is the numerical aperture, i.e. focal length/diameter. I am not sure about MTF 0 (never heard this before; it sounds bizarre to me) or its relevance to macro photography. Of course it is different if you want to establish the existence of two individual pixels in astronomy, but that is not photography. Anyway, I hope John got the answer.
Last edited by arash_hazeghi; 10-13-2011 at 11:10 AM.
Check out Greg Lasley's photography, which is outstanding. I met him at Bosque and I have seen many large prints of his, mostly taken with the 300 with TCs and extenders. In particular, go to his damselflies galleries. http://www.greglasley.net
From his web page: http://www.greglasley.net/odephoto.html
"I have used three different lenses for odonate photography; the Canon EF 180 mm F/3.5 Macro, the Canon EF 70-200 mm F/2.8 L IS, and the Canon EF 300 mm F/4 L IS. In addition, I usually use either the Canon 1.4X or 2X extender on all the above lenses."
Your equation above for the diffraction spot is for radius, for diameter the factor is 2.44.
0% MTF is the Dawes Limit. See: Star Testing Astronomical Telescopes, A Manual for Optical Evaluation and Adjustment by H. R. Suiter (1994, Willmann-Bell, Inc., Richmond, Virginia).
The Dawes limit (MTF = 0) = 1/(F*w) in line pairs per mm, where F is the f/ratio and w is the wavelength in mm.
The Rayleigh Resolution Criterion, or Rayleigh limit, = 1/(1.22*F*w) and is about 9% MTF.
For an f/4 diffraction-limited lens, the Dawes limit corresponds to 2 microns, or, on a 35mm full-frame digital camera, 12000 x 18000 pixels = 216 megapixels!
The Dawes limit is the point where no more information can be obtained from an image. So we have a long way to go.
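The 216-megapixel figure follows directly from the arithmetic above. A minimal sketch, assuming a 24 x 36 mm frame and one pixel per 2 micron line pair as in the post:

```python
wavelength_mm = 0.0005                         # green light, 0.5 microns
f_ratio = 4.0
dawes_lp_mm = 1.0 / (f_ratio * wavelength_mm)  # 500 lp/mm at f/4
pitch_mm = 1.0 / dawes_lp_mm                   # 0.002 mm = 2 microns per line pair
pixels_tall = 24.0 / pitch_mm                  # 12000 on the 24 mm side
pixels_wide = 36.0 / pitch_mm                  # 18000 on the 36 mm side
megapixels = pixels_tall * pixels_wide / 1e6   # 216 MP
print(dawes_lp_mm, pixels_tall, pixels_wide, megapixels)
```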
The main effect we see in images suffering from diffraction is a reduction in the contrast of fine details, and that can be compensated by unsharp mask or real sharpening if one has a good signal-to-noise ratio.
Roger
Last edited by Roger Clark; 10-13-2011 at 06:30 PM.
With all due respect to the photographer, the images on the website you linked are mediocre at best IMHO and not on par with the quality macro work that is posted on this site; most of them are not really "macro" but just closeups. Not sure I agree with his techniques.
I agree trying to salvage "information" beyond MTF50 might be beneficial in astronomy and for scientific applications, e.g. proving the existence of a far star, but for photography it is not really relevant IMO. That is why the optics and camera industry has chosen to use MTF50, a choice that was not arbitrary.
Last edited by arash_hazeghi; 10-13-2011 at 07:27 PM.
Wow Arash, You might want to see people's accomplishments before you dis them in public. http://www.greglasley.net/gregbio.html
Greg has published a number of books and has won in photo contests. I find his work outstanding.
Originally Posted by arash_hazeghi
I agree trying to salvage "information" beyond MTF50 might be beneficial in astronomy and for scientific applications, e.g. proving the existence of a far star, but for photography it is not really relevant IMO. That is why the optics and camera industry has chosen to use MTF50, a choice that was not arbitrary.
Wow. You pride yourself on fine detail and sharpness in your images. At f/11, MTF50 is at about 55 line pairs per mm, or 18 micron spacing. With a 1DIV (5.7 micron pixels), you are saying that in images made with a 1DIV at f/11, one could average every 3 pixels and not lose any relevant information. I am pretty sure your images would suffer--try it. The image quality below MTF50 is what gives the fine detail in photos and is very important. Below about 10% MTF there is little contribution to fine detail, but the 50% to 10% MTF range is the fine-detail zone.
Roger, I said that IMHO the work is mediocre and that is my honest opinion. You are entitled to your opinion as well.
Also, no camera can capture detail beyond its Nyquist limit. The Nyquist limit is 2 times the pixel pitch, plus there is attenuation from the low-pass filter as well as the CFA, so the 1D4 cannot sample anywhere close to 5.7um; the math seems wrong.
Anyway, we will have to disagree and let this go, but I am glad we can do this peacefully. Honestly, I don't see how this discussion is related or helpful to John's question; my goal was to answer his question.
The discussion does show that this is a highly technical subject if nothing else. Actually I have gleaned some important messages from the responses and I hope others have as well.
One thing that still puzzles me is how folks continue to obtain sharp macro results at f32. Perhaps this is all about post-processing and sharpening?
Hey, are you looking at 100% view or reduced size? Images that look soft at 100% view can be made to look sharp once you down-sample them and apply careful sharpening.
Well, of course they are mostly ones I see posted here. However, the ones I have been thinking about have a sharpness that doesn't look "forced". I took a look back at some of the BPN macro artists I really admire, like Steve Maxson, and note that he usually shoots at f14 or f16 with the 5DII, so at those apertures he's just getting into diffraction effects, I guess. He doesn't post much any more, but Mike Moats routinely produced beautifully sharp images shot at f32 with, if I remember rightly, a Fuji camera and Tamron lens. I could be wrong, but I'm pretty sure these images would print up quite well.
The macro work you mention is excellent for sure! When you down-sample an image you are averaging those pixels that are affected by diffraction, so the soft look goes away and it sharpens quite nicely. It will print nicely as well, since inkjet prints don't come close to pixel-level resolution anyway, unless you enlarge too much. I am sure if you look at these images at 100% they are not very different from your images with the Canon 100 f/2.8L macro (excellent lens for macro BTW)... In practice, DOF trumps pixel sharpness :)
For photography applications I find this calculator very useful; it will tell you whether softness from diffraction will be visible in the final print or not. It takes print dimensions and viewing distance into account as well.
Arash,
The reason to use a long lens for macro photography is working distance, so one does not scare the subject. Here are links to BPN image of the week winners in the Macro and Flora forum who used 300 mm lenses, often with extenders, and even some using 500 mm lenses. I would call them competent, and they do beautiful work, and certainly the BPN staff thinks so too.
I agree, Roger, these are excellent and worthy and the photographers are skilled for sure. However, these beautiful photos are more like closeups than macro. When I think about macro I think about 1:1 magnification or something close to that; with telephoto lenses the working distance is convenient for sure, but the magnification is still low compared to true macro.
This thread is getting full of misconceptions. I'll try and clear up a few things.
Originally Posted by arash_hazeghi
Also, no camera can capture detail beyond it's Nyquist limit the Nyquist limit is 2 times the pixel pitch plus the attenuation from the low pass filter as well as the CFA, so the 1D4 cannot sample anywhere close to 5.7um, the math seems wrong.
A line pair is the distance from the center of one line to the next. That equals the pixel pitch, not 2 times the pixel pitch. For example, the 1D mark IV, with 5.7 micron pixels, has a Nyquist limit of 1000/5.7 = 175 line pairs per mm (not 87.7). For example, dpreview measured the limit at 183 line pairs per mm, very close (probably within their noise) to the Nyquist limit.
The blur filter acts like other aberrations. It does not reduce the Nyquist limit or the Dawes limit, rather it reduces contrast in the frequencies (finest detail) below the Dawes limit.
Diffraction also does not reduce resolution. It has an upper limit to resolution (at the Dawes limit). Diffraction reduces contrast. People perceive contrast as sharpness. This is why unsharp mask appears to improve sharpness when it does not change sharpness at all, it only changes edge contrast (accutance). Diffraction reduces contrast at lower frequencies than the Dawes limit.
Diffraction limit. Arash posted a link to Cambridge in Colour: http://www.cambridgeincolour.com/tut...hotography.htm
but the premise of the page and their definition of diffraction limit is wrong and does not even make sense:
"at some aperture the softening effects of diffraction offset any gain in sharpness due to better depth of field. When this occurs your camera optics are said to have become diffraction limited."
That premise is incorrect. One can often have very shallow depth of field, and smaller apertures, even beyond the Dawes limit, will still improve the depth of field and the sharpness of out-of-focus areas, especially in macro photography.
Diffraction limit is when the optical system resolves all the detail in an image (samples the Dawes limit; MTF=0). Diffraction limit has nothing to do with out of focus things in an image. (I will contact Cambridge color about their web page).
I'll reproduce the table I posted earlier:
The Dawes limit (MTF = 0) = 1/(F*w) in line pairs per mm, where F is the f/ratio and w is the wavelength in mm.
The Rayleigh Resolution Criterion, or Rayleigh limit, = 1/(1.22*F*w) and is about 9% MTF.
wavelength = 0.0005 mm (green)
F/ratio   Diffraction spot   80% MTF   50% MTF   Rayleigh criterion   Dawes criterion
          size (microns)     (lp/mm)   (lp/mm)   (lp/mm)              (lp/mm)
   2           2.2             160       390        820                 1000
   2.8         3.1             110       280        580                  710
   4           4.5              80       190        410                  500
   5.6         6.3              58       140        290                  360
   8           8.9              40        97        200                  250
  11          12                29        71        150                  180
  16          18                19        48        100                  120
  22          25                15        35         75                   91
  32          36                10        24         51                   63
  45          50                 7        17         36                   44
  64          70                 5        12         26                   31
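The table entries follow from the textbook MTF of an aberration-free circular aperture, MTF(v) = (2/pi)*(arccos(v/vc) - (v/vc)*sqrt(1 - (v/vc)^2)), with cutoff vc = 1/(lambda*F). A sketch that reproduces a few entries, assuming the 0.0005 mm wavelength stated above the table:

```python
import math

def diffraction_mtf(lp_per_mm, f_ratio, wavelength_mm=0.0005):
    """MTF of a perfect (aberration-free) circular aperture."""
    cutoff = 1.0 / (f_ratio * wavelength_mm)   # the Dawes limit (MTF = 0)
    x = min(lp_per_mm / cutoff, 1.0)
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

# f/8: cutoff is 250 lp/mm; contrast is ~80% at 40 lp/mm and ~50% near 97 lp/mm
for freq in (40, 97, 250):
    print(freq, round(diffraction_mtf(freq, 8), 2))
```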
Let's look at an example with the 1D Mark IV (5.7 micron pixel pitch) and a perfect f/5.6 lens. At 58 line pairs per mm (a spacing of 17 microns, or 3 pixels on the 1DIV) one has lost 20% of the true contrast. As one closes down the aperture, the contrast continues to drop; it finally drops to zero for a line pair spread over 3 pixels at about f/32. As one stops down, the highest-frequency (finest detail) limit does not change; the contrast just gets lower and lower and may become lost in the noise. The point is that essentially every lens we use has some effect on image quality at EVERY aperture. The so-called diffraction limit that people perceive is a fuzzy concept indeed (even with the rigorous technical limit of the Dawes limit).
So in macro photography, when one needs to use small apertures (large f/number), contrast in the high frequencies is lowered, and unsharp mask can bring it back at the cost of noise. If one has a long exposure to collect a lot of light (e.g. at ISO 100), the noise impact may not be too bad. It takes some skill in tuning the unsharp mask to produce a credible image without artifacts; I prefer to use Richardson-Lucy image deconvolution.
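For anyone curious what Richardson-Lucy deconvolution actually does, here is a minimal 1-D NumPy sketch. The Gaussian PSF and the two-line test scene are toy stand-ins (a real workflow would use a measured Airy-like PSF and handle noise):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Basic noiseless 1-D Richardson-Lucy deconvolution (toy example)."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Two fine "lines" blurred by the PSF, then partially restored
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.2) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[30] = truth[36] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Real-world use needs noise regularization and a measured PSF; libraries such as scikit-image ship a production 2-D implementation.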
Regarding macro (or not) photography: the original definition of macro photography is GREATER than 1:1 reproduction in the focal plane. By that definition, Canon's 100 mm f/2.8 macro lens is not actually a macro lens, because it goes only to 1:1, not greater. That original definition has been obsolete for a long time; we have many lenses today that are labeled macro but fail to meet it. Today the lines between macro and close-up are completely blurred, and I see no reason to continue denigrating people and their photographs when there is no clear line of distinction anymore. Here on BPN, the macro forum accepts any reasonably close-up image.
Roger, I think your math is wrong. Nyquist is one cycle every two pixels; you can't sample at each pixel because aliasing will occur. For a sampling frequency f, the Nyquist limit is f/2. In digital cameras the Nyquist frequency is calculated as f = 1/(2*pixel spacing). This is the Shannon-Nyquist theorem, pretty basic stuff you can find in any reference. The LPF cuts earlier than that.
You may want to look at the definition of the Nyquist frequency here:
The Nyquist sampling theorem states that if a signal is sampled at a rate d_scan and is strictly band-limited at a cutoff frequency f_C no higher than d_scan/2, the original analog signal can be perfectly reconstructed. The frequency f_N = d_scan/2 is called the Nyquist frequency. For example, in a digital camera with 5 micron pixel spacing, d_scan = 200 pixels per mm or 5080 pixels per inch. Nyquist frequency f_N = 100 line pairs per mm or 2540 line pairs per inch.
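The numbers in the quoted example are simple to verify (5 micron pixel spacing assumed, as in the quote):

```python
pixel_pitch_mm = 0.005                    # 5 micron pixel spacing
d_scan = 1.0 / pixel_pitch_mm             # 200 pixels (samples) per mm
nyquist_lp_mm = d_scan / 2.0              # 100 line pairs per mm
nyquist_lp_inch = nyquist_lp_mm * 25.4    # 2540 line pairs per inch
print(d_scan, nyquist_lp_mm, nyquist_lp_inch)
```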
Also I find this reference very useful in understanding sharpness and resolution and such
Anyway, your numbers are not even close, and I am not sure what you are trying to conclude; it all sounds pretty bizarre to me, like "MTF 0", but I have to excuse myself from this discussion now. At this point, it seems we cannot agree on any of these subjects, from diffraction to the definition of macro. I think people can read the references I provided and figure out the facts for themselves.
I hope John can get the sharp-looking macro shots he wishes for.
Last edited by arash_hazeghi; 10-16-2011 at 03:23 AM.
Reason: added links
"Nature Interpreted" - Photography begins with your mind and eyes, and ends with an image representing your vision and your reality of the captured scene; photography exceeds the camera sensor's limitations. Capturing and Processing landscapes and seascapes allows me to express my vision and reality of Nature.
I can never comment in a Roger-Arash debate; I can just read and glean.
It is a very useful site, Jay, although Roger found some problems with what they are saying about diffraction (see above). Nonetheless I have learned a lot from them. Like with almost any topic these days, I could have gone out to the web and done my own research. However, I think it is often useful to raise a question here in one of the forums in the hope that more than just oneself may benefit from the discussion.
Arash,
See the attached diagram. The pixel pitch describes the distance between TWO pixels. I think you are confusing line pairs per mm with cycles per mm. Modulation Transfer Function is given in line pairs per mm, not cycles. Line-pair spacing is given by the pixel pitch, not 2 * pixel pitch. (Note visual acuity is given in cycles per degree, not line pairs per degree.) One cycle is 2 * pixel pitch. My math, and the results on DPreview in terms of resolution in line pairs per mm, are consistent and correct.
Roger, I think you have trouble understanding the concept of lp/mm. It is line pairs per mm; it means a line and a space. Even in the diagram you drew yourself, each two pixels sample a line and the space next to it.
This is such an obvious issue I can't believe I am writing about it. I am sure you will continue forever, but I am really done here; besides, none of this is in any way related to the OP's question. Good luck to you.
I apologize to everyone if this got too convoluted; I hope you find the links useful.
Last edited by arash_hazeghi; 10-16-2011 at 12:48 PM.
Arash,
What is the problem? I specifically drew line pairs. In the diagram, pixels #2 and #3 sample one line pair, as shown in the bottom diagram. The width of the line pair is given by the pixel pitch.
Let's go to dpreview for the 1DIV. The sensor is 18.6 mm high with 3264 pixels. Vertical resolution is 3400 line pairs per picture height (maximum): http://www.dpreview.com/reviews/cano...kIV/page31.asp
Now 3400 lpm over 18.6 mm = 3400/18.6 = 183 line pairs per mm.
By your definition, one could only get 3264/(2*18.6) = 88 lpm.
How do you explain the difference? Even dpreview's "absolute resolution" of 2500 LPH = 134 lpm is greater than what your definition says is possible.
The answer is simple. Nyquist says one must sample a signal at least at twice the frequency, or in our case two pixels per cycle. That is exactly what my diagram shows. Think of it this way. Look at my diagram. Pixels 1 and 2 sample a line pair. Pixels 2 and 3 sample a line pair. Pixels 3 and 4 sample a line pair. Another way to look at it is to convert the box car signal to a sine wave and the pixel to a point spaced at the same as the pixel pitch in the diagram. The pixels sample the min and max in the sine wave, just as required by Nyquist. That is a line pair: a bright and a dark line and specified by the pixel pitch.
I've added a diagram that may help.
Roger
Last edited by Roger Clark; 10-16-2011 at 02:08 PM.
Here is another example of what I have been talking about, and it may be more relevant to John's question than discussing the Nyquist theorem.
The attached figure shows a test target with a 300 mm f/2.8 lens and 5D Mark II camera with images at f/2.8, f/11, and f/32. Notice that the finest details (closest bars) are close to the same at f/32 as f/2.8, only the contrast is lower as f/ratio increases from 2.8 to 11 to 32. The final panel shows a sharpened f/32 image that is pretty close to the f/11 image. Diffraction's main effect on images is loss of contrast, but some of that can be recovered with sharpening. We perceive that loss in contrast as loss in sharpness.
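The sharpening step in that last panel can be mimicked in a few lines. This toy NumPy sketch blurs a bar target (a Gaussian standing in crudely for diffraction) and shows that an unsharp mask raises the bar contrast again; all names and parameters are illustrative:

```python
import numpy as np

def gaussian_blur(signal, sigma=2.0):
    x = np.arange(-8, 9)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

def unsharp_mask(signal, amount=1.5, sigma=2.0):
    # Classic unsharp mask: add back a scaled high-pass of the image
    return signal + amount * (signal - gaussian_blur(signal, sigma))

bars = np.tile([0.0] * 4 + [1.0] * 4, 8)   # bar target, 8-sample period
soft = gaussian_blur(bars)                  # diffraction-like contrast loss
sharpened = unsharp_mask(soft)

def contrast(s):
    core = s[16:-16]                        # ignore convolution edge effects
    return float(core.max() - core.min())
```

Note the caveat from earlier in the thread: sharpening amplifies noise along with the signal, so a good signal-to-noise ratio is what makes this recovery practical.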
Diffraction limit. Arash posted a link to Cambridge in Colour: http://www.cambridgeincolour.com/tut...hotography.htm
but the premise of the page and their definition of diffraction limit is wrong and does not even make sense:
"at some aperture the softening effects of diffraction offset any gain in sharpness due to better depth of field. When this occurs your camera optics are said to have become diffraction limited."
That premise is incorrect. One can often have a depth of field that is very low and smaller apertures, even beyond the Dawes limit will still improve the depth of field and improve the sharpness of an out of focus area, especially in macro photography.
Diffraction limit is when the optical system resolves all the detail in an image (samples the Dawes limit; MTF=0). Diffraction limit has nothing to do with out of focus things in an image. (I will contact Cambridge color about their web page).
FYI, I just received this from Sean at cambridgeincolour:
Hi Roger,
Thanks for the feedback. Yes, I agree -- that intro sentence is misleading. It was originally intended to refer to sharpness at the focal plane, but looking at it again (after 6 years) it certainly didn't read like that. Besides, depth of field isn't a concept that should be discussed so close to the beginning of an article that's otherwise about diffraction. The relevant part has been changed to:
"This effect is normally negligible, since smaller apertures often improve sharpness by minimizing lens aberrations. However, for sufficiently small apertures this strategy becomes counterproductive — at which point your camera optics are said to have become diffraction limited."
I'm also planning on updating the visual bit with the list of camera types at some point . . .
Roger- Many, many thanks for your explanations, charts, and diagrams on all of this. I've learned a lot. If nothing else, to quit worrying about lens diffraction.
Tom
If anyone is really interested in learning the facts, I recommend this page. It is very elaborate; the formulas for Nyquist and such are given there with explanations and examples.
Thanks for the Norman Koren reference; it is very detailed indeed. I had never seen such a comprehensive article on this topic. Thank you for posting! Now do you have a link like that for BIF or high ISO to share with us???
The Nyquist theorem is one of the few things I remember from Systems 101. In photography it means that you can't possibly capture a period (or line pair) spaced closer than two pixels, so one pair per two pixels, or f = 1/(2*pixel size). It is even clear in Roger's diagram.
Hello, yes, there are some definitions that, if you start off wrong, lead to subsequent confusion:
Arash, the diffraction diameter d is not 1.22*lambda*NA; for a well corrected photographic lens NA and F# are related by:
NA = 1/(2*F#), where F# is focal length/entrance pupil diameter. The diffraction diameter d is given by:
d = 2.44*lambda*F#, or, in terms of NA, d = 1.22*lambda/NA.
Roger, when we use lp/mm, a line pair consists of a white bar and a black bar, the physical length of a lp in the lp/mm calculation is = the width of the white bar + the width of the black bar, i.e. it's not the center-to-center distance between the black bar and the white bar. The physical length of 1 cycle in your diagram is also the physical length of 1 line pair.
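Chris's two expressions are the same relation stated two ways; with NA = 1/(2*F#), the diameter comes out identical from either form. A quick check (the f/8 and 0.5 micron values are purely illustrative):

```python
wavelength_um = 0.5
f_number = 8.0
na = 1.0 / (2.0 * f_number)                    # NA = 1/(2*F#) = 0.0625
d_from_fnum = 2.44 * wavelength_um * f_number  # d = 2.44*lambda*F#
d_from_na = 1.22 * wavelength_um / na          # d = 1.22*lambda/NA
print(d_from_fnum, d_from_na)                  # both ~9.76 microns
```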
Chris, I agree diameter = 2.44*lambda*F#; I typed the radius formula above. Sorry for the confusion.
Arash, Chris,
I did a little more reading regarding the various definitions, and I agree that you guys are correct regarding line pairs per mm being the full cycle. Even my own calculations of MTF use full cycles, and the table I posted is in cycles. The literature is full of inconsistent uses, and the only clear method in my opinion is to label an MTF figure as "cycles (line pairs) per mm" or similar units. I have no excuse for my slip-up, except that I was in the hospital and made those responses equating pixel pitch with line pairs while in recovery, and it somehow stuck in my mind until I could get to my references. Sorry for the confusion.
Arash, I agree the dpreview data are in lines, not line pairs.
The key point about diffraction in the original question was lost in the details of Nyquist. The key for people to remember is that diffraction mainly robs contrast below the Dawes limit, and it can be corrected to some degree with unsharp mask or other sharpening methods, as illustrated in the bar chart figure I posted in post 26.