
Thread: Maintaining image fine detail.

1. #1 Simon Wantling (Forum Participant, Northamptonshire, UK)

I hope you can advise. I've been very happy with the recent images I've been taking; looking at them in DPP at 50% or 100% magnification, the fine detail is superb. It may seem like a stupid question, but is there a way to ensure that this fine detail isn't lost when processing and reducing the image size down? I think my post-processing is not bad now, having taken all your advice, so I'm just looking to improve further.

    thanks

2. #2 Don Lacy (BPN Member, SE Florida)

Simon, normal post-processing will not destroy detail; it's downsizing and then saving for the web where it takes a hit. There is not much you can do about that other than to save the file at the largest size allowed for posting. For BPN that would be 1200 pixels and 400 KB.
    Don Lacy
    You don't take a photograph, you make it - Ansel Adams
    There are no rules for good photographs, there are only good photographs - Ansel Adams
    http://www.witnessnature.net/
    https://500px.com/lacy

3. #3 DickLudwig (BPN Viewer)

If you downsize an image, you should do the final sharpening after the downsize, not before.

4. #4 Don Lacy (BPN Member, SE Florida)

    Quote Originally Posted by DickLudwig View Post
    If you downsize an image you should do the final sharpening after you do the downsize, not before.
    Good point by Dick
    Don Lacy

5. #5 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by Simon Wantling View Post
    I hope you can advise. I've been very happy with my recent images I've been taking and looking at them in DPP at 50 or 100% magnifications! the fine detail is superb. It may seem like a stupid question but is there a way to ensure that this fine detail isn't lost when processing and reducing the image size down. I think my pits processing is not bad now having taken all your advise, so I'm just looking to improve further.

    thanks
Let's talk a little bit about what it means to be "sharp". Fine detail, or high resolution, in an image is the ability to change tonal brightness quickly. That can take two forms.

First, something like hair, texture in fabric, or a picket fence is one thing: that is high-frequency spatial detail. It can be no finer than 1/2 the pixel rate (the Nyquist limit). If every other pixel in a line is bright and the rest dark, that is the maximum possible resolution. Of course, if the image is then reduced so that the same line has half as many pixels, it clearly cannot resolve the same detail. In fact, downsizing an image is a very effective low-pass filter.

Another consideration is how fast a single edge transition goes from one brightness level to another. With a Bayer color filter array being used to encode color, and RAW converters using at least a 3x3 matrix of sensor locations to determine each pixel value, it is impossible to have a transition shorter than 4 to 5 pixels.

But for both forms of "sharpness" it is possible to enhance how acutely our vision perceives transitions. That is what sharpening does: if there actually is detail in the image, the difference between the lighter side of a transition and the darker side can be enhanced.

That is why people say it is always necessary to apply at least some sharpening to an image. And that is why the amount differs for images that have been downsized, and why sharpening should be applied after the image has been resampled to the final size at which it will be displayed.

    The fact is that downsizing an image absolutely clips high frequency detail. It is therefore impossible to look at an image on a computer screen, displayed at 100 pixels per inch, and determine if it will look "sharp" when printed at 16x20 using 300 pixels per inch. Instead we have to look at a 100% crop of the image and judge from that.
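Floyd's point that downsizing is a low-pass filter can be checked in a few lines of NumPy (my illustration, not from the thread): a line of pixels alternating at the Nyquist limit turns into flat gray after a 2x box downsample.

```python
import numpy as np

# A 1-D "scanline" at the Nyquist limit: every other pixel bright, the rest dark.
line = np.tile([255.0, 0.0], 8)           # 16 pixels of maximum-frequency detail

# Downsize 2x by averaging neighbouring pixel pairs (a simple box resample).
half = line.reshape(-1, 2).mean(axis=1)   # 8 pixels

print(line)   # alternating 255/0: full contrast
print(half)   # uniform 127.5: the detail is gone entirely
```

No sharpening applied afterwards can bring that alternating pattern back; the information no longer exists at the smaller size.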

6. #6 John Chardine (Forum Participant, Canada)

    Quote Originally Posted by DickLudwig View Post
    If you downsize an image you should do the final sharpening after you do the downsize, not before.
    This is a common myth. See this excellent BPN thread (viewed 18,300 times to date):

    http://www.birdphotographers.net/for...ng-Information!

7. #7 Don Lacy (BPN Member, SE Florida)

    Quote Originally Posted by John Chardine View Post
    This is a common myth. See this excellent BPN thread (viewed 18,300 times to date):

    http://www.birdphotographers.net/for...ng-Information!
Not so much a myth as a different workflow. As was stated in the thread, an image sharpened before downsampling might still need to be sharpened again to offset the effects of downsampling. This is one of those subjects where even the experts disagree as to the best method.

8. #8 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by John Chardine View Post
    This is a common myth. See this excellent BPN thread (viewed 18,300 times to date):

    http://www.birdphotographers.net/for...ng-Information!
    Not a myth at all, but the cited thread is loaded with inaccuracies.

Arthur Morris made the initial statements about the effects of sharpening images, and he was correct except on one single item. He said that if an image is sharpened and then downsized it should appear to be over-sharpened. In fact it will be under-sharpened, simply because downsizing is a very effective low-pass filter (see the Nyquist sampling criterion). As Roger Clark explained correctly, that causes a certain amount of blur to be introduced, which can to some degree be removed with sharpening. Note also that one of the primary differences between various algorithms for resampling images is the amount of "ringing" at edge transitions. A good algorithm for downsampling will have a slight amount of ringing, and a good algorithm for upsampling will not.

Roger Clark made several gross errors in his analysis. The idea that Unsharp Mask does not "sharpen" because it cannot increase resolution and will only increase acutance is incorrect. No sharpening algorithm can increase resolution, and they all increase acutance. They do that by increasing "micro contrast" and by adding "ringing" to an edge transition. To increase resolution would require adding data that does not exist.

I personally do not use capture sharpening. It isn't a technical advantage; it's an ergonomic distinction. Some people cannot work with an image that isn't sharp; some can.

There is no technical value at all in sharpening the complete image if it is to be resampled to a smaller size. Sharpening amplifies high-frequency spatial components; that is how it works. Resampling removes high-frequency spatial components. Whatever effect of a sharpening pass is not removed is unpredictable at the time the sharpening is applied. Hence there is simply no technical value in applying sharpening immediately before removing it. Resample the image to the desired size, then sharpen by observation for the desired effect.

You can expect that generally a "Sharpen" tool will work better than an "Unsharp Mask" tool on an image that has been upsampled, and the opposite is more likely to be true for an image that has been downsampled. In either case a larger image will require more.

Let's state that one more time: overall sharpening of an image before it is downsized has no benefit to the image, simply because sharpening is a high-pass filter and downsizing is a low-pass filter that will remove the effect.

    But there is no real harm done either, and if a sharper image is easier for you as an artist to work with, do it!
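The mechanics described above can be sketched in NumPy/SciPy (a minimal illustration of the classic unsharp-mask formula; real USM tools add radius, amount and threshold controls): USM adds the high-pass residual back to the image, creating the undershoot and overshoot ("ringing") that raise acutance without adding detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Unsharp mask: boost the high-pass residual (original minus a Gaussian blur).
def unsharp_mask(img, radius=1.0, amount=1.0):
    blurred = gaussian_filter1d(img, sigma=radius)
    return img + amount * (img - blurred)

# A soft edge ramping from dark (10) to light (160) over a few pixels.
edge = np.array([10, 10, 10, 60, 110, 160, 160, 160], dtype=float)
sharp = unsharp_mask(edge)

# The dark side now dips below 10 and the bright side overshoots 160:
# acutance (edge contrast) went up, but no new detail was created.
```

Because the boost is tied to pixel neighbourhoods, its effect at one set of pixel dimensions does not carry over unchanged to a resampled copy, which is the workflow point being argued here.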

9. #9 Jonathan Ashton (Macro and Flora Moderator, Cheshire UK)

    Quote Originally Posted by Floyd Davidson View Post
    Roger Clark made several gross errors in his analysis. The idea that Unsharp Mask does not "sharpen" because it cannot increase resolution and instead will only increase accutance is incorrect. No sharpen algorithm can increase resolution, and they all increase accutance. They do that by increasing "micro contrast" and by adding "ringing" to an edge transition. To increase resolution would require adding data that does not exist.
I have read this a few times; it is getting late, but I don't think that Roger said anything incorrect, and I don't think your statement contradicts him.

Returning to the original question, I would suggest trying Perold sharpening for web-sized images; the Actions can be downloaded from his website. I use them, and have modified them to produce images at 800, 1024 and 1200 pixels. I make the final size as indicated and the preceding size in the action twice that size.
    Last edited by Peter Kes; 01-09-2014 at 11:24 AM.

10. #10 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by Jonathan Ashton View Post
    Roger Clark made several gross errors in his analysis. The idea that Unsharp Mask does not "sharpen" because it cannot increase resolution and instead will only increase accutance is incorrect. No sharpen algorithm can increase resolution, and they all increase accutance. They do that by increasing "micro contrast" and by adding "ringing" to an edge transition. To increase resolution would require adding data that does not exist.
    I have read this a few times , it is getting late but I don't think that Roger said anything incorrect and I think your statement does not contradict?????
    Here is what Roger Clark said,

    "If anything can be said is absolutely wrong, it is calling unsharp mask sharpening! [...] As such, unsharp mask actually does not sharpen; it increases accutance. See, for example: http://en.wikipedia.org/wiki/Sharpness_(visual)"

His cited wiki page contradicts that statement. One of the best discussions of this topic on the Internet is at http://www.lensrentals.com/blog/2009...en-my-acutance, posted by Roger Cicala. It contradicts Clark's statement in detail.

Both of these sources define "sharpness" in the same way, as a combination of "acutance" and "resolution". Both point out that post-processing cannot increase "resolution", and therefore any increase in sharpness due to post-processing comes from an increase in acutance. That is true of "Sharpen" and it is true of "Unsharp Mask".

    "When we sharpen, either in postprocessing the image or with in-camera sharpening, we have increased the acutance, one of the two components of what we refer to as sharpness." -- Roger Cicala in the article cited above.

11. #11 arash_hazeghi (Super Moderator, San Francisco, California)

Sharpening after downsizing is not a myth at all; that's the way it should be done. Output sharpening is essential for achieving high-quality output.

Downsizing is a low-pass filter: it cancels the effect of sharpening by averaging neighboring pixels, thus making a sharp image look softer.

I also like to use "smart sharpen" as opposed to unsharp mask, read "dump blanket sharpening". Smart sharpen analyzes the image first and then sharpens only the contrasty edges, avoiding noise in the uniform areas.
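The edge-selective idea can be sketched as follows (a rough conceptual illustration only; Adobe has not published the actual Smart Sharpen algorithm, and the function name and threshold here are my own): gate an unsharp-mask boost with a gradient-magnitude edge mask, so flat areas, where noise lives, are left untouched.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_selective_sharpen(img, sigma=1.0, amount=1.0, threshold=10.0):
    """Hypothetical sketch: apply a high-pass boost only near strong edges."""
    highpass = img - gaussian_filter(img, sigma=sigma)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    mask = (grad > threshold).astype(float)   # 1 near edges, 0 in flat areas
    return img + amount * highpass * mask

# A flat dark region next to a bright region: only the boundary is sharpened.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
out = edge_selective_sharpen(img)
# Pixels far from the column-4 edge are unchanged; edge pixels get the boost.
```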
    New! Sony Capture One Pro Guide 2022
    https://arihazeghiphotography.com/Gu.../Sony_C1P.html


    ------------------------------------------------
    Visit my blog
    http://www.arihazeghiphotography.com/blog

12. #12 Roger Clark (Colorado)

[Attached images]
There seems to be a lot of misunderstanding here (and Roger Cicala too is incorrect).

Roger Cicala: "Unlike acutance, resolution (AKA microcontrast) can't really be improved in postprocessing."

There is a big difference between acutance, sharpness and resolution. Acutance is edge contrast. Sharpness is partly edge contrast but includes the size of the detail. For example, the bars in Roger Cicala's example have high acutance, which he then blurred and then "sharpened" with unsharp mask, restoring the high acutance. So while his demonstration increased acutance, the unsharp mask did not increase true sharpness and resolution. So Roger's statement is correct in that Photoshop does not have good deconvolution tools to improve resolution and sharpness.

And yes, resolution can be increased in post-processing. That is what image deconvolution does (references below).

    Here is a good example from
    http://web.cecs.pdx.edu/~mperkows/CLASS_573/

    see this power point:
    http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CE8QFjAF&url=http%3A%2F%2Fweb.cecs.pdx.edu%2F~mperkows%2FCL ***_573%2FKumar_2007%2Fdecon_lecture.ppt&ei=shDKUramBsHJygGDnoGIDw&usg=AFQjCNFMKwD9ih_yQAuAI3_yeXRFHHpT6g&bvm=bv.58187178,d.aWc&cad=rja


(not sure how to make the above a link)
Or search Google for: decon_lecture.ppt site:cecs.pdx.edu
It should be the first result.


Download the PowerPoint presentation and look at slide 23. It shows 3 stars blurred by diffraction and atmospheric turbulence, and, in the 3rd panel, Richardson-Lucy deconvolution resolving a 4th star separated by a tiny amount. I attach the images from that slide here, and above them I did an unsharp mask on the images. There are many other examples after slide 23 in the presentation.

The unsharp mask versions show that the bright areas grow. That is actually reducing resolution: it is making the blur function (called the point spread function, or PSF) larger. Deconvolution moves energy blurred by the PSF back into the center, even resolving things below the diffraction limit. So no, unsharp mask does not increase resolution, one component of sharpness.

Now, which image would you say is sharp? The original (lower left)? The unsharp mask version? The Richardson-Lucy deconvolution image? The unsharp mask image has higher acutance, but is actually less sharp. The Richardson-Lucy image is an order of magnitude sharper!

So when you run unsharp mask on an image and something like a catchlight brightens, the acutance is increasing, but so is the size of that catchlight: the point spread function is getting larger, and resolution is actually decreasing. Deconvolution routines will make the catchlight smaller as well as brighter, and similarly with other components. ONLY when you have side-by-side bars (as in Cicala's bar chart) does unsharp mask stay within the bounds and not grow in size. So if you image bar charts, unsharp mask will work fine for showing sharp bars. But in real images, it does not behave as well.

Note the quote: "Super-resolution means recovery of spatial frequency information beyond the cut-off frequency of the measurement system."

Regarding sharpening before downsizing: since downsizing is a low-pass filter, at worst it erases the sharpening one did on the full-resolution image. But if you start with a sharper image, the downsized image will be sharper. It also depends on the downsizing algorithm; Photoshop does not have good downsizing algorithms.
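The Richardson-Lucy iteration described above can be sketched in a few lines (a bare-bones, unregularized version on noiseless synthetic data; production tools such as ImagesPlus or PixInsight add noise handling and regularization). Two point sources closer together than the blur width merge into a single blob in the blurred image, and the iteration separates them again:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations):
    """Minimal Richardson-Lucy deconvolution (no regularization)."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # guard against divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Gaussian PSF with sigma = 2 px; two points 4 px apart (inside one blur width).
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((31, 31))
truth[15, 13] = truth[15, 17] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=500)
# blurred shows a single merged blob peaking between the two sources;
# restored concentrates the energy back toward the two true positions.
```

Note the iterative structure: there is no direct solution, which is why iteration counts in the hundreds (950 in my crane example later in this thread) are not unusual.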

    Other references:

    http://www.astrosurf.com/buil/us/iris/deconv/deconv.htm
    "To increase resolution of the images, the best way is the deconvolution."

    http://www.ncbi.nlm.nih.gov/pubmed/21599665
    Increasing axial resolution of 3D data sets using deconvolution algorithms.
    J Microsc. 2011 Sep;243(3):293-302. doi: 10.1111/j.1365-2818.2011.03503.x. Epub 2011 May 23.
    "Deconvolution algorithms are tools for the restoration of data degraded by blur and noise. An incorporation of regularization functions into the iterative form of reconstruction algorithms can improve the restoration performance and characteristics (e.g. noise and artefact handling). In this study, algorithms based on Richardson-Lucy deconvolution algorithm are tested. The ability of these algorithms to improve axial resolution of three-dimensional data sets is evaluated on model synthetic data. Finally, unregularized Richardson-Lucy algorithm is selected for the evaluation and reconstruction of three-dimensional chromosomal data sets of Drosophila melanogaster. Problems concerning the reconstruction process are discussed and further improvements are proposed."

    http://www.nature.com/srep/2013/130828/srep02523/full/srep02523.html

    Towards real-time image deconvolution: application to confocal and STED microscopy
    "... the ability to provide fast sub-diffraction resolution recordings."

    http://www.mit.edu/~ty20663/Scientif...00501_SPIE.pdf
    Gaussian beam deconvolution in optical coherence tomography
    "present a method for increasing the apparent transverse
    resolution in OCT outside of the confocal parameter using Gaussian beam deconvolution of adjacent
    axial scans,... When applied to experimentally-acquired OCT data, the
    use of these algorithms can improve the apparent transverse
    resolution outside of the confocal parameter, extending
    the comparable confocal parameter range along the axial dir
    ection. These results are likely to further improve the
    high-resolution cross-sectional imaging capabilities of OCT."


    And so on. Find many references to these methods with google searches like:

    richardson lucy deconvolution increase resolution

    and

    planetary imaging beyond the diffraction limit with richardson lucy

    Roger


    Last edited by Roger Clark; 01-05-2014 at 10:44 PM.

13. #13 BobbyPerkins (BPN Viewer)

Arash, you said to "read dump blanket sharpening". Where do I read this? And do you do smart sharpening on the entire image or selectively?
Thanks.

14. #14 arash_hazeghi (Super Moderator, San Francisco, California)

    Quote Originally Posted by BobbyPerkins View Post
    Arash, you said to "read dump blanket sharpening", Where do I read this?
    and do you do smart sharpening to the entire image or selective?
    Thanks.
    Hi Bob,

Sorry for the typo; what I meant was that "unsharp mask" is rather dumb because it sharpens the entire image equally, whereas "smart sharpen" only applies sharpening to detail edges, which avoids amplifying noise in the more uniform areas. I usually use a small radius and a larger amount, depending on the image. And yes, I only sharpen the subject, which I put on a separate layer in PS (you don't want to sharpen the background).

    best

16. #15 Roger Clark (Colorado)

[Attached images]
    I am attaching a series of images that will demonstrate some of the concepts I discussed earlier in this thread and in the above referenced thread.
The first figure shows an image of a crowned crane (photographed in Tanzania with a 1DIV and a 300 f/2.8 lens). Image (A) in the figure has been blurred with a Gaussian blur with a full width at half maximum (FWHM) of 4 pixels. Image (B) shows the result of "sharpening" with unsharp mask. Of course one can push the unsharp mask further and do multiple runs, but the result is increasing artifacts.

In the next panels, I'll show other sharpening methods. Everyone is welcome to take the blurred image and try to do better than what I'll show.

    Roger

17. #16 Roger Clark (Colorado)

[Attached images]
Next is a comparison with Photoshop's smart sharpen. From what I have read, smart sharpen does a deconvolution (in some of the modes in which it is used), but it appears to be only a single iteration (thus very fast).
    Roger

19. #17 Roger Clark (Colorado)

[Attached images]
Next is the blurred image restored with Richardson-Lucy image deconvolution. A total of 950 iterations were used. I purposely chose a Gaussian blur function different from the one I used to blur the image, so that the inaccuracy might limit the result or produce artifacts. This simulates real-world conditions, where one may not be able to determine the exact blur (called the point spread function). The result is much sharper than smart sharpen or unsharp mask. It is sharper in two important ways: restoration of fine detail not perceptible in the blurred image, or in the unsharp mask and smart sharpen images, including detail smaller than the 0% MTF frequency. Strictly speaking, one can't recover multiple parallel lines separated by less than the MTF=0 cutoff, but one CAN recover information on subjects smaller than the MTF=0 frequency with small details, like two close spots. Because real-world images are not parallel bar charts, one can in practice recover a lot of fine detail, even detail below diffraction limits, using image deconvolution. There is over half a century of scientific research on these methods, and they are in routine use in many fields these days, from microscopy to astronomy.

    Roger

20. #18 Roger Clark (Colorado)

[Attached images]
Attached is the original image, before blur, and the restored image (restored from the blurred image). One can compare the two to see that the recovered detail closely matches the original image. This proves that the recovered detail is real and not an artifact of the processing. In fact, I pushed a little further, and the restored image has slightly more detail than the original. This process could go even further: the original image could be upsampled and then deconvolved to reveal even more detail. To do that, one needs an image with a very high signal-to-noise ratio.

    Roger

21. #19 Roger Clark (Colorado)

[Attached images]
Now to downsampling. Anyone is welcome to take the original blurred image from my first post in the series, downsample by 2x, then sharpen, and produce a sharper result than I show in the attached figure (e.g. images E or F). Image B was made from the previously posted blurred image, downsampled to produce image A, with unsharp mask applied only after downsizing. Images C-F all started with images sharpened before downsizing, and ALL those pre-sharpened images show sharper results after downsizing, even when no additional sharpening is applied after downsizing (e.g. image D).

    The best downsized results come from the best sharpened images before downsizing.
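The claim that pre-sharpening partially survives a downsize can be checked numerically (my own sketch, using a 1-D step edge and a simple 2x box resample rather than the crane images): the overshoot that unsharp mask adds before downsizing is still present, at reduced strength, in the half-size result.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def unsharp_mask(img, sigma=1.0, amount=1.0):
    return img + amount * (img - gaussian_filter1d(img, sigma=sigma))

def downsample2x(img):
    return img.reshape(-1, 2).mean(axis=1)   # simple 2x box resample

step = np.repeat([0.0, 100.0], 8)            # a hard 16-pixel step edge

plain = downsample2x(step)                   # downsize only: range stays 0..100
pre = downsample2x(unsharp_mask(step))       # sharpen first, then downsize

# The pre-sharpened version carries some USM under/overshoot through the
# resample, so its total contrast range exceeds the plain version's 0..100.
print(np.ptp(plain), np.ptp(pre))
```

The resample attenuates the sharpening but does not erase it completely, which is consistent with the C-F panels looking sharper than A.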

    Roger

22. #20 Floyd Davidson (BPN Viewer, Barrow, Alaska)

Try doing the same experiment with more useful differences. Don't load the dice with a Gaussian-blurred image (which is easy fodder for any high-pass sharpening algorithm, and particularly for deconvolution, but isn't the target for any USM algorithm). The previous example was a good one, absent the Gaussian blur, because it has both sharp tone transitions and fine detail. Inspection has to be made with 100% crops.

Take a 16 to 24MP camera image and optimize it, sizing and then sharpening it for printing at 20x30, or whatever you consider a "normal" sized print. Then downsize it to, say, 800x1200 for web viewing. For comparison, also downsize the original 24MP image directly to 800x1200.

Given the two, do whatever you think is appropriate with the first one. Then for the second example use deconvolution first and USM second, and then try USM first and deconvolution last for another example. (Note that you previously didn't make a comparison to a downsized image that then had both Sharpen and USM applied.)

As it is, it's difficult to see a difference between your preferred "D" and the "B" example. But your examples weren't realistic.

Regardless, it seems this all misses the actual point, which is not that one should never sharpen prior to downsizing. Rather, it is that no matter what is done before resampling, it is essential to sharpen after an image is resampled.

That is just as true for upsizing, but with downsizing it is particularly obvious, for two reasons. One is that a tool that requires adjustment by inspection logically cannot be adjusted correctly if the high-pass filtering being inspected is then followed by an arbitrary low-pass filter. The second is that Sharpen and USM do not do the same things, and in particular the application of USM at one set of pixel dimensions does not do the same thing at different pixel dimensions.

23. #21 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by Roger Clark View Post
    The best downsized results come from the best sharpened images before downsizing.
Actually that is a very good demonstration that a Gaussian blur is exactly reversible using a Sharpen algorithm; or, conversely, that Sharpen can be reversed with a Gaussian blur.

    USM is not reversible, and does not directly reverse Gaussian blur.

24. #22 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by Roger Clark View Post
    Because real-world images are not parallel bar charts, one can, in practice, recover a lot of fine detail, even detail below diffraction limits using image deconvolution.
The word is "recover", which allows that the data is already there. It cannot be manufactured. Resolution is a measure of the data captured, and if it is covered by noise it can be recovered, but it can't be increased. Acutance is how well it can be seen. Together, that is the definition of "sharpness".

That is exactly what Roger Cicala said, in simple everyday terms that are easy to understand and make no pretences about who is who. He was not even slightly in error; he just made it easy for everyone to understand. His simple examples using Gaussian blur are meant to help with understanding the process. Your use of the same form of example doesn't show the real-world fine tuning you are suggesting. (And good luck with trying, as I have yet to see anyone make that look less than excessively complex.)

25. #23 Roger Clark (Colorado)

    Floyd,
You miss several points. Deconvolution can use ANY point spread function, including complex custom functions derived from one's own images. The examples I posted are all 100% crops. If you can't see the differences, which are clear on my calibrated monitors, either enlarge to 200% or so, or try a better monitor. There are many scientific references one can find with a Google search to support all the methodology.

For photographers, check out PixInsight image processing; it works on Macs, Linux and Windows and has a variety of image deconvolution tools. I use ImagesPlus for this, but will check out PixInsight for Linux.

    Roger
    Last edited by Roger Clark; 01-08-2014 at 08:06 PM.

26. #24 Floyd Davidson (BPN Viewer, Barrow, Alaska)

    Quote Originally Posted by Roger Clark View Post
    You miss several points.
It doesn't look like I missed a single one of them...

27. #25 Jonathan Ashton (Macro and Flora Moderator, Cheshire UK)

    POINT OF CLARIFICATION - FRAME 9

I wish to make it clear that it is not me contradicting Roger in this pane; the first paragraph is the words of Floyd Davidson, not mine.

    The following sentence in pane 9 indicates that I do agree with Roger, and after reading Floyd's paragraph I wasn't too sure that he was actually disagreeing either.

    I made the post late in the evening, on reflection I should have made it the following day with proper quotation marks indicating who was saying precisely what.

I offer my apologies for any misconception or confusion arising from my post in pane 9. I was not disagreeing with or contradicting Roger; I hold his comments in the highest regard.

    Jon.

28. #26 Tom Graham (BPN Viewer, Southern California)

Thanks, Roger, for the most informative and knowledgeable replies, and especially for the image examples.
    Tom

29. #27 Roger Clark (Colorado)

    Thanks Jonathan and Tom.

    I have used what I posted here as a springboard for a new article on image deconvolution. I've added a lot more detail and references to 40+ years of algorithms that improve resolution:
    http://www.clarkvision.com/articles/image-restoration2/

    My original article on this subject from 2005 is at: http://www.clarkvision.com/articles/image-restoration1/

I routinely use image deconvolution in my workflow, and have for over 7 years.

    Roger

30. #28 John Chardine (Forum Participant, Canada)

    Excellent Roger. Thanks.

It's amazing what a few words earlier in the thread can do to stir the pot (if I used "smiley faces" I would here, but I don't because they are daft!).

  31. #29
    BPN Member Don Lacy's Avatar
    Join Date
    Jan 2008
    Location
    SE Florida
    Posts
    3,566
    Threads
    348
    Thank You Posts

    Default

    For those interested, here is a very informative thread on Richardson-Lucy and deconvolution sharpening. What I found really interesting is Eric Chan's confirmation that Adobe does use deconvolution algorithms both in ACR and in the Smart Sharpen filter. Eric, along with several other posters, is among the engineers and scientists who write the software we use, so some of the discussion can get a bit technical for a layman like me. http://www.luminous-landscape.com/fo...?topic=45038.0
    Don Lacy
    You don't take a photograph, you make it - Ansel Adams
    There are no rules for good photographs, there are only good photographs - Ansel Adams
    http://www.witnessnature.net/
    https://500px.com/lacy

  32. #30
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Hi Don,

    Thanks for the link. A somewhat interesting read. A couple of points if people try and get through it.

    There are statements about how one can't get beyond 0% MTF. That is true for bar charts, but fortunately the real world is not made of bar charts. For example, stars are well below the 0% MTF limit, yet we can still see them. There are a lot of theoretical arguments made, and if one only listened to such theories, we would not be able to see stars and many other things in everyday life, like a hair at a great distance. MTF looks at a one-dimensional response, but the real world is 2-dimensional, and the classical 0% MTF limit simply does not apply (it is the wrong theory for the application).

    The response from the Adobe programmer was interesting. He basically said ACR and Smart Sharpen can do just as well as anything, yet offered no examples. Perhaps this arrogance is why Photoshop is falling behind. I have not been able to get anywhere near as good a result with Smart Sharpen as with multiple-iteration deconvolution. Given its speed, deconvolution in ACR and Photoshop can be doing no more than 1 iteration, or at most some integer approximation of a couple of iterations. Deconvolution is an iterative process; there is no direct solution. It must be done in 32-bit or 64-bit floating point, as each iteration makes a small change that could be lost with integer data.
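    To make the "iterative, floating point" point concrete, here is a minimal 1-D sketch of what a Richardson-Lucy restoration loop looks like. This is a toy illustration I wrote for this thread (plain numpy), not Roger's actual workflow or any product's implementation; the bar signal and Gaussian PSF are invented for the demo.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=200):
    """Iterative Richardson-Lucy deconvolution in 64-bit floating point.

    There is no direct solution: each pass re-blurs the current estimate,
    compares it with the observed data, and nudges the estimate toward
    consistency. The per-iteration changes are tiny, which is why integer
    arithmetic would lose them.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = observed.astype(np.float64).copy()
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)  # guard divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: a bright bar on a dark background, blurred by a known Gaussian PSF.
x = np.arange(-3, 4, dtype=float)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
truth = np.zeros(200)
truth[90:110] = 100.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)

# The restored profile sits much closer to the truth than the blurred one.
err_blurred = np.abs(blurred - truth).mean()
err_restored = np.abs(restored - truth).mean()
```

    Note the multiplicative update: starting from a non-negative estimate, the result stays non-negative, which is one reason Richardson-Lucy behaves well on photographic data.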

    I have added a link on my page: http://www.clarkvision.com/articles/image-restoration2/
    to the 16-bit tif image used in my study. The link is just above the conclusions. Anyone is welcome to download it and try sharpening it and post results here. For those who advocate that one must down sample first and then sharpen, I would be particularly interested if someone can take the unsharpened image, downsize it 2x and then sharpen it better than those images in Figure 7, images E or F on the web page (panel 20 above).

    Also, I responded to the lensrentals web page on accutance. My response is now online:
    http://www.lensrentals.com/blog/2009...en-my-acutance

    Roger

  33. #31
    Super Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    18,556
    Threads
    1,321
    Thank You Posts

    Default

    Hey Don,

    Thanks for posting the interview with Adobe; it is interesting and very relevant to this topic. They usually cannot disclose any detail on a public forum, so I hope the guy did not get into trouble. It also has a link to Emil's RAW Therapee tool, which I find very helpful: http://rawtherapee.com/

    Deconvolution is a generic term, i.e. it is just an operation in Fourier space. Any sharpening method today uses some kind of deconvolution. What matters is the kernel used for deconvolution. If you have the exact mathematical description of the blur, it is possible to completely cancel it (like the synthetic blur you apply in Photoshop), provided data has not been lost to numerical errors. Most algorithms try to "guess" the type of blur and use a standard mathematical form for it. This works well when you are dealing with the simplest forms of blur, such as the Gaussian blur synthesized in Photoshop. Some algorithms are so-called adaptive, which means that after the "initial" guess they check whether the result has improved; if not, they try a different solution or change fitting parameters until some criterion is met. Modern algorithms use machine-learning technology and are much more sophisticated, but they take time and resources.
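    The "exact kernel, no numerical loss" case is easy to sketch in Fourier space: dividing the blurred spectrum by the kernel's transform cancels the blur almost perfectly, and it is also easy to see how quickly this breaks once real noise enters. A made-up 1-D numpy example of mine, not any particular product's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
truth = rng.uniform(0.0, 255.0, size=n)

# A known Gaussian blur kernel, centered for circular convolution via the FFT.
x = np.arange(-4, 5, dtype=float)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
kernel = np.zeros(n)
kernel[:9] = psf
kernel = np.roll(kernel, -4)

H = np.fft.rfft(kernel)
blurred = np.fft.irfft(np.fft.rfft(truth) * H, n=n)

# Exact kernel, no noise: division in Fourier space cancels the blur.
restored = np.fft.irfft(np.fft.rfft(blurred) / H, n=n)

# Add a little sensor-like noise and the very same division amplifies it
# badly at the frequencies where the kernel response is tiny.
noisy = blurred + rng.normal(0.0, 1.0, size=n)
restored_noisy = np.fft.irfft(np.fft.rfft(noisy) / H, n=n)

clean_err = np.abs(restored - truth).max()
noisy_err = np.abs(restored_noisy - truth).mean()
```

    The clean case recovers the original to floating-point precision; the noisy case comes back far worse than the noise that went in, which is why practical algorithms regularize or iterate rather than divide outright.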

    In practice it really depends on what kind of blur you are dealing with. A simple Gaussian is easy to fix but if you have motion blur, it is very difficult to remove.

    The blur that photographers have to deal with is not a simple Gaussian blur. Soft edges of detail in a digital photo come from two main factors when you view the image at 100%, assuming of course that focus is tack sharp. It mainly comes from the optical low-pass filter on the sensor as well as the demosaicing process, which is a heavy function of the RAW conversion method. The details of the OLPF and the demosaic that a manufacturer uses are proprietary. That's why the manufacturer's RAW converter (e.g. DPP or NX2) does a better job of delivering high-quality output (assuming one knows the optimal settings), especially without adding noise to the image. It serves as a good starting point, after which you can apply output sharpening based on the application.

    A bit beyond the simple question the OP asked, but I hope this helps. Have you tried PS CC BTW? Any improvements over CS6?

    Also, here is a link that I find most relevant to photography :

    http://www.imatest.com/docs/sharpness/
    Last edited by arash_hazeghi; 01-14-2014 at 03:08 AM. Reason: added link
    New! Sony Capture One Pro Guide 2022
    https://arihazeghiphotography.com/Gu.../Sony_C1P.html


    ------------------------------------------------
    Visit my blog
    http://www.arihazeghiphotography.com/blog

  34. #32
    BPN Member Don Lacy's Avatar
    Join Date
    Jan 2008
    Location
    SE Florida
    Posts
    3,566
    Threads
    348
    Thank You Posts

    Default

    Have you tried PS CC BTW? Any improvements over CS6?
    Arash, sorry for the late reply. No, I have not switched from CS6 to CC yet, nor do I plan to if I can help it. Also, thanks for the deconvolution lesson; it was helpful in clearing up a few points for me.

    The response from the Adobe programmer was interesting. He basically said ACR and smart sharpen can do just as well as anything, yet offered no examples
    Hi Roger, those comments were from Jeff Schewe, who I believe is a consultant for Adobe and one of the founders of the PhotoKit Sharpener plugin, so as he said, he does have some skin in the game when it comes to this particular discussion.

  35. #33
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by arash_hazeghi View Post
    Deconvolution is a generic term, i.e. it is just an operation in Fourier space. Any sharpening method today uses some kind of deconvolution. What matters is the kernel used for deconvolution.
    This is not an accurate statement. 1) Deconvolution is not a generic term; it has a specific mathematical meaning with specific mathematical operations.

    It is also inaccurate to characterize any sharpening as a deconvolution. For example, unsharp mask is a convolution plus a mathematically linear operation; it is not a deconvolution at all. Smart Sharpen in Photoshop is only partly a deconvolution in some modes. And here is the critical factor: deconvolution for restoring image detail is not a direct solution; it is an iterative solution requiring at least 32-bit floating point. Smart Sharpen, while it technically could be considered a (partial) deconvolution, does not do the multiple iterations needed for it to be very effective. So even if some sharpening tools could technically claim they are deconvolution, their effectiveness is limited unless they do multiple iterations in floating point.
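    For readers unfamiliar with the mechanics: unsharp mask really is just a blur plus a scaled subtraction. A toy 1-D sketch of my own (not Photoshop's exact code) on a soft edge:

```python
import numpy as np

# Unsharp mask on a 1-D "soft edge": blur (a convolution), subtract the
# blurred copy, and add a scaled version of the difference back in.
# Every step is linear; no signal is moved back to where the blur came from.
signal = np.array([10.0, 10.0, 10.0, 50.0, 90.0, 90.0, 90.0])
kernel = np.array([0.25, 0.5, 0.25])   # small blur kernel
blurred = np.convolve(signal, kernel, mode="same")
amount = 0.8
sharpened = signal + amount * (signal - blurred)
```

    The characteristic halo falls straight out of the arithmetic: the dark side of the edge is pushed darker and the bright side brighter, which increases edge contrast without restoring any blurred detail.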

    Quote Originally Posted by arash_hazeghi View Post
    If you have the exact mathematical description of the blur it is possible to completely cancel the blur
    This is not true at all in real images. Noise is usually the limiting factor and all images obtained with imaging systems in the real world contain noise.


    Quote Originally Posted by arash_hazeghi View Post
    Most algorithms try to "guess" the type of blur and use a standard mathematical form for it. This works well when you are dealing with the simplest forms of blur, such as the Gaussian blur synthesized in Photoshop. Some algorithms are so-called adaptive, which means that after the "initial" guess they check whether the result has improved; if not, they try a different solution or change fitting parameters until some criterion is met. Modern algorithms use machine-learning technology and are much more sophisticated, but they take time and resources.
    It can actually be more sophisticated than that: an experienced analyst can make a good estimate of the amount of blur and use that as a starting point, which is a lot better than just a guess. For example, one can use the number of pixels in the transition at a hard edge, or a specular highlight (for example, a catchlight), to make a good estimate of the blur.

    Quote Originally Posted by arash_hazeghi View Post
    In practice it really depends on what kind of blur you are dealing with. A simple Gaussian is easy to fix but if you have motion blur, it is very difficult to remove.
    Of course it depends on the amount of blur, whether Gaussian or motion, but a good deconvolution algorithm can use a blur model of any shape. It does not matter if the blur is Gaussian, symmetric, or not. Richardson-Lucy deconvolution is such an algorithm. Whether it is "easy" or not depends less on the shape of the blur and more on the size of the blur and the S/N of the image.

    Quote Originally Posted by arash_hazeghi View Post
    The blur that photographers have to deal with is not a simple Gaussian blur. Soft edges of detail in a digital photo come from two main factors when you view the image at 100%, assuming of course that focus is tack sharp. It mainly comes from the optical low-pass filter on the sensor as well as the demosaicing process, which is a heavy function of the RAW conversion method.
    That is not correct. When multiple processes contribute to blur, the result is almost always well modeled by a Gaussian profile. And you forgot a major contributor to blur: diffraction. In today's digital cameras with 4 to 7 micron pixels, diffraction is usually larger than a pixel. For example, red light at f/4 results in about a 6-micron diameter spot, rising to 11.7 microns at f/8. But it isn't a single diffraction disk; it is multi-wavelength, even within a red, green or blue channel. This makes many overlapping diffraction disks of varying sizes, and that is closely modeled by a Gaussian. Then add in lens aberrations and the blur filter: each process is a convolution, making the result closely modeled by a Gaussian. Only when the image is well out of focus, where typically under- or over-corrected lenses or really bad astigmatism dominate, does the blur become non-Gaussian. And even then, a good deconvolution algorithm can correct a good S/N image.
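    The diffraction numbers follow from the Airy-disk diameter, roughly 2.44 x wavelength x f-number, and the "many convolutions tend toward a Gaussian" point is the central limit theorem at work. A quick check of both in numpy (my own illustration; the 0.6 micron "red" wavelength and the crude box kernel are assumptions for the demo):

```python
import numpy as np

# Airy-disk diameter: about 2.44 * wavelength * f-number.
wavelength_um = 0.6                    # assumed red wavelength in microns
d_f4 = 2.44 * wavelength_um * 4.0      # ~5.9 microns ("about 6")
d_f8 = 2.44 * wavelength_um * 8.0      # ~11.7 microns

# Central limit theorem: repeatedly convolving even a crude box blur
# quickly produces a near-Gaussian profile.
box = np.ones(5) / 5.0
kernel = box.copy()
for _ in range(4):                     # five boxes convolved together
    kernel = np.convolve(kernel, box)

# Variance of n convolved width-w discrete boxes is n*(w^2 - 1)/12.
sigma = np.sqrt(5 * (5.0**2 - 1.0) / 12.0)
offsets = np.arange(len(kernel), dtype=float) - (len(kernel) - 1) / 2.0
gaussian = np.exp(-offsets**2 / (2 * sigma**2))
gaussian /= gaussian.sum()
max_diff = np.abs(kernel - gaussian).max()   # small compared with the peak
```

    After only five convolutions of a decidedly non-Gaussian box, the combined kernel already differs from a matched Gaussian by a few percent of its peak, which is the sense in which stacked blur sources are "closely modeled by a Gaussian."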

    Here is an example of quite complex motion blur corrected by Richardson-Lucy deconvolution:
    http://www.astrosurf.com/buil/us/iris/deconv/deconv.htm


    Quote Originally Posted by arash_hazeghi View Post
    The details of the OLPF and the demosaic that a manufacturer uses are proprietary. That's why the manufacturer's RAW converter (e.g. DPP or NX2) does a better job of delivering high-quality output (assuming one knows the optimal settings), especially without adding noise to the image.
    This is a fine theory, but I don't agree. Photoshop has fallen behind in raw conversion capability. I'm seeing better results out of other raw converters, such as darktable, than even out of DPP. I'm now delivering prints for galleries produced with darktable. And I've run Richardson-Lucy deconvolution on hundreds of images, many of which have sold in galleries and won and placed in contests.



    Quote Originally Posted by arash_hazeghi View Post
    Also, here is a link that I find most relevant to photography :
    http://www.imatest.com/docs/sharpness/
    I agree that it is a great link on MTF, but MTF is a 1-dimensional measure of image sharpness. Real images are not bar charts. Real world images have 2-D information. For example, with MTF, the theory would say there is no information to be gained once 0% MTF is reached. This is true only for the 1-dimensional profile of a bar chart. Deconvolution can restore detail beyond the 0% MTF "limit" on 2-D image objects, just not on bar chart profiles.

    Roger
    Last edited by Roger Clark; 01-18-2014 at 12:51 AM.

  36. #34
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Don Lacy View Post

    "The response from the Adobe programmer was interesting. He basically said ACR and smart sharpen
    can do just as well as anything, yet offered no examples"

    Hi Roger, Those comments where from Jeff Schewe who I believe is a consultant for Adobe and one of the founders of the PhotoKit Sharpener plugin so as he said he does have some skin in the game when it comes to this particular discussion.
    Perhaps that skin is covering his eyes. The simple fact is that the Smart Sharpen discussed in the thread can't come close to multi-iteration deconvolution algorithms, no matter what they may claim. When I'm lazy or in a hurry, I run unsharp mask or Smart Sharpen on an image, trying to tune it for best results. Then later I come back and try RL deconvolution, and the result is usually stunning in comparison; this experience is consistent across hundreds of images. (I'm not familiar with the PhotoKit Sharpener plugin, so I'm not commenting on that.)

    Since Adobe moved to the cloud and I decided not to follow, I've been looking for alternatives. That includes two areas: general image editing, and a raw converter alternative to ACR. I'm being pleasantly surprised. I've found a good alternative in darktable and will next check out RawTherapee. I have already been turning to other image editors for some operations (like downsampling and deconvolution), because Photoshop is lacking. I've read that none of the original Photoshop development team is left. Maybe this is why Photoshop became stagnant. I'm seeing more innovation in other image processing systems. Maybe Adobe needs new blood, but it may be too late. Unless Adobe comes out with some Photoshop innovations (ones that already exist in open-source image processors) and offers it on Linux, it's bye-bye for me. The main thing Photoshop has going for it is a good user interface, and I would pay for it if it also had the innovation and operating-system support.

    Roger

  37. #35
    BPN Member Don Lacy's Avatar
    Join Date
    Jan 2008
    Location
    SE Florida
    Posts
    3,566
    Threads
    348
    Thank You Posts

    Default

    I've read that none of the original Photoshop development team is left.
    Roger, not sure if you have seen this, but here is an interview with Thomas Knoll, the founder of PS, where he discusses some of the issues you bring up: http://www.luminous-landscape.com/in...as_knoll.shtml

  38. #36
    Super Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    18,556
    Threads
    1,321
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    This is not an accurate statement. 1) Deconvolution is not a generic term; it has a specific mathematical meaning with specific mathematical operations.

    It is also inaccurate to characterize any sharpening as a deconvolution. For example, unsharp mask is a convolution plus a mathematically linear operation; it is not a deconvolution at all. Smart Sharpen in Photoshop is only partly a deconvolution in some modes. And here is the critical factor: deconvolution for restoring image detail is not a direct solution; it is an iterative solution requiring at least 32-bit floating point. Smart Sharpen, while it technically could be considered a (partial) deconvolution, does not do the multiple iterations needed for it to be very effective. So even if some sharpening tools could technically claim they are deconvolution, their effectiveness is limited unless they do multiple iterations in floating point.
    This is not true at all in real images. Noise is usually the limiting factor and all images obtained with imaging systems in the real world contain noise.
    It can actually be more sophisticated than that: an experienced analyst can make a good estimate of the amount of blur and use that as a starting point, which is a lot better than just a guess. For example, one can use the number of pixels in the transition at a hard edge, or a specular highlight (for example, a catchlight), to make a good estimate of the blur.
    Of course it depends on the amount of blur, whether Gaussian or motion, but a good deconvolution algorithm can use a blur model of any shape. It does not matter if the blur is Gaussian, symmetric, or not. Richardson-Lucy deconvolution is such an algorithm. Whether it is "easy" or not depends less on the shape of the blur and more on the size of the blur and the S/N of the image.
    That is not correct. When multiple processes contribute to blur, the result is almost always well modeled by a Gaussian profile. And you forgot a major contributor to blur: diffraction. In today's digital cameras with 4 to 7 micron pixels, diffraction is usually larger than a pixel. For example, red light at f/4 results in about a 6-micron diameter spot, rising to 11.7 microns at f/8. But it isn't a single diffraction disk; it is multi-wavelength, even within a red, green or blue channel. This makes many overlapping diffraction disks of varying sizes, and that is closely modeled by a Gaussian. Then add in lens aberrations and the blur filter: each process is a convolution, making the result closely modeled by a Gaussian. Only when the image is well out of focus, where typically under- or over-corrected lenses or really bad astigmatism dominate, does the blur become non-Gaussian. And even then, a good deconvolution algorithm can correct a good S/N image.
    Here is an example of quite complex motion blur corrected by Richardson-Lucy deconvolution:
    http://www.astrosurf.com/buil/us/iris/deconv/deconv.htm
    This is a fine theory, but I don't agree. Photoshop has fallen behind in raw conversion capability. I'm seeing better results out of other raw converters, such as darktable, than even out of DPP. I'm now delivering prints for galleries produced with darktable. And I've run Richardson-Lucy deconvolution on hundreds of images, many of which have sold in galleries and won and placed in contests.

    I agree that it is a great link on MTF, but MTF is a 1-dimensional measure of image sharpness. Real images are not bar charts. Real world images have 2-D information. For example, with MTF, the theory would say there is no information to be gained once 0% MTF is reached. This is true only for the 1-dimensional profile of a bar chart. Deconvolution can restore detail beyond the 0% MTF "limit" on 2-D image objects, just not on bar chart profiles.

    Roger
    I do not agree with any of the above. However, you're free to believe what you want and use your own methods, especially if they are producing "hundreds" of contest-winning photos and gallery sales. I can't say I'm even close to those numbers, but I like to stick to my methods.

    IMO it is naive and clumsy to try to model the blur in a real photo with just a synthetic Gaussian blur and then remove it, as Floyd also mentioned. It reminds me of Adobe adding synthetic motion blur and then removing it; it holds no value IMO. BTW, I personally have zero interest in the world of theoretical, synthetic photography; it's a waste of my time, so excuse me if I do not respond to these kinds of arguments.
    Last edited by arash_hazeghi; 01-21-2014 at 05:21 PM.

  39. #37
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Attached Images
    Arash and Floyd may not believe in Gaussian blur, but it is a good model. There is a reason sharpening algorithms have it as a default (e.g. Smart Sharpen), and why many people the world over are restoring images with deconvolution using Gaussian blur as a starting model. Attached is a profile from a star image I made in 2011. The star was overhead, and the blur in the image is mainly due to diffraction plus the blur filter in the camera. The plot shows the Gaussian model fits the observed data quite well, and this is typical. Note too that the Gaussian profile is fit to the raw-converted data, which has the standard variable-gamma transfer function found in Photoshop CS5 ACR. If I used the linear data from the sensor, a Gaussian would still be a good fit, but the intensity and width of the fitted Gaussian would be different.
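    For anyone who wants to try the same fit on their own star (or catchlight) profile: because the log of a Gaussian is a parabola, a plain polynomial fit recovers the width. A minimal numpy sketch with made-up numbers (the amplitude and sigma below are invented for illustration, not the measured star data):

```python
import numpy as np

# Synthetic background-subtracted star profile (values are made up).
r = np.arange(-10.0, 11.0)               # pixel offsets from the star center
amp_true, sigma_true = 1200.0, 2.5       # hypothetical peak DN and blur width
profile = amp_true * np.exp(-r**2 / (2.0 * sigma_true**2))

# log(Gaussian) = log(amp) - r^2 / (2 sigma^2): a parabola in r,
# so an ordinary quadratic fit recovers both parameters.
c2, c1, c0 = np.polyfit(r, np.log(profile), 2)
sigma_fit = np.sqrt(-1.0 / (2.0 * c2))
amp_fit = np.exp(c0)
```

    On real data one would first subtract the sky background and fit only pixels well above the noise floor; taking the log of noisy near-zero pixels goes badly.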

    In case anyone is wondering why I take such star images, Alpha Lyra is a standard star. I use it for calibrating photometry for deriving quantitative information on intensities of astronomical objects. I also use star data for determining camera + lens system throughput.

    If one followed my image restoration web pages (links above), part 1 in the series used an image from the field (a fox image), and resolution improved with a Gaussian blur model. Part 2 took an image, blurred it with a known Gaussian blur, then restored it using an unknown blur, quite successfully. Part 2 demonstrates that even when guessing at the blur, the restored image matched the original very well, proving the method is not inventing detail. This pretty well establishes that the Gaussian blur model works quite well (not that such confirmation is really needed if one reads the scientific literature; it just shows the model also works well with digital camera images).

    Roger

  40. #38
    Super Moderator Daniel Cadieux's Avatar
    Join Date
    Jan 2008
    Location
    Ottawa, Canada
    Posts
    26,315
    Threads
    3,979
    Thank You Posts

    Default

    The thread veered off topic a bit so a bit of clean-up was done. Just a reminder that discussions and debates like these, however passionate they become, need to remain civil...opinions can differ respectfully. Lots of good info here for members to digest and try for themselves what they prefer...

  41. #39
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Arash in pane 33 says he does not believe in anything I said in pane 32. Of course, Arash can believe in anything he wants, but if people are interested in real world facts, let's look at these issues.


    Quote Originally Posted by arash_hazeghi View Post
    Deconvolution is a generic term, i.e. it is just an operation in Fourier space. Any sharpening method today uses some kind of deconvolution. What matters is the kernel used for deconvolution.

    Quote Originally Posted by Roger Clark View Post
    This is not an accurate statement. 1) Deconvolution is not a generic term; it has a specific mathematical meaning with specific mathematical operations.

    It is also inaccurate to characterize any sharpening as a deconvolution. For example, unsharp mask is a convolution plus a mathematically linear operation; it is not a deconvolution at all. Smart Sharpen in Photoshop is only partly a deconvolution in some modes. And here is the critical factor: deconvolution for restoring image detail is not a direct solution; it is an iterative solution requiring at least 32-bit floating point. Smart Sharpen, while it technically could be considered a (partial) deconvolution, does not do the multiple iterations needed for it to be very effective. So even if some sharpening tools could technically claim they are deconvolution, their effectiveness is limited unless they do multiple iterations in floating point.
    Quote Originally Posted by arash_hazeghi
    I do not agree with any of the above.
    Do you disagree that unsharp mask is a convolution followed by a mathematically linear operation? Unsharp mask does a blur, and that is a convolution. The second step of unsharp mask is to subtract the blurred image from the original and add a scaled version of that difference back into the original image. That is a mathematically linear operation. There is no deconvolution.

    Do you also believe that deconvolution is not an iterative solution? If so, please cite a reference, or publish the breakthrough as 50+ years of scientific papers have not found a direct solution.

    The key here for other readers: convolution is a blurring, a spread of signal to adjacent pixels. Deconvolution tries to put that blurred information back into each pixel. There is no direct solution. For example, if the blur in the optical system spreads the light that should be in one pixel over 25 pixels, and every adjacent pixel has the same problem, then the signal we see in our digital camera images is a combination of the surrounding pixels. That can't easily be undone, because we don't know the signal from those surrounding pixels: they too are a complex result of the signals surrounding them, and so on. The only solution is multiple iterations of estimating the signals in the surrounding pixels, each iteration putting some of the signal back where it should have been. If it takes hundreds of iterations and the signal only gets changed a few tens of data numbers (a small fraction of a digital camera image's 16-bit range), one can see that integer quantization to 16 bits would limit the accuracy of the estimate at each step. Thus, the restoration must be done with higher accuracy, like 32-bit floating point. This is easily verifiable with some Google searches, or even some simple back-of-the-envelope calculations.
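    The quantization point is easy to demonstrate: if each iteration changes a value by less than one data number, rounding to 16-bit integers after every iteration throws the whole correction away, while floating point accumulates it. A deliberately trivial example of my own:

```python
import numpy as np

# 300 iterations, each nudging a pixel by 0.3 DN (a sub-integer change).
val_float = 1000.0                # what a 32/64-bit float pipeline keeps
val_int = np.uint16(1000)         # what per-iteration 16-bit rounding keeps
for _ in range(300):
    val_float += 0.3
    # Rounding back to an integer after every step discards the 0.3 DN nudge.
    val_int = np.uint16(round(float(val_int) + 0.3))

# The float pipeline accumulates the full +90 DN; the integer value never moves.
```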

    Unsharp mask never moves signal from adjacent pixels back to the pixel where the signal would have been without blur. Thus, it is not deconvolution.

    ======

    Quote Originally Posted by arash_hazeghi View Post
    If you have the exact mathematical description of the blur it is possible to completely cancel the blur
    Quote Originally Posted by Roger Clark View Post
    This is not true at all in real images. Noise is usually the limiting factor and all images obtained with imaging systems in the real world contain noise.
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    Please cite a real reference that shows this. All the scientific papers I read say that noise limits the deconvolution. That is also my experience. It is a general property of pretty much all sharpening methods, whether deconvolution, or edge sharpening. I would think most people here have experienced this with sharpening: that noise limits what one can achieve, and that sharpening enhances noise.

    =======

    Quote Originally Posted by arash_hazeghi View Post
    Most algorithms try to "guess" the type of blur and use a standard mathematical form for it. This works well when you are dealing with the simplest forms of blur, such as the Gaussian blur synthesized in Photoshop. Some algorithms are so-called adaptive, which means that after the "initial" guess they check whether the result has improved; if not, they try a different solution or change fitting parameters until some criterion is met. Modern algorithms use machine-learning technology and are much more sophisticated, but they take time and resources.
    Quote Originally Posted by Roger Clark View Post
    It can actually be more sophisticated than that: an experienced analyst can make a good estimate of the amount of blur and use that as a starting point, which is a lot better than just a guess. For example, one can use the number of pixels in the transition at a hard edge, or a specular highlight (for example, a catchlight), to make a good estimate of the blur.
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    So one can't use the profile of a star, sun glint or catchlight to estimate the blur? I guess those scientific papers are all wrong that estimate blur and use Gaussian models to restore images (and quite beautifully). And I guess the Gaussian profile fit I did to the star image in pane 37 must be wrong, because you say Gaussian profiles are not good models. To the contrary, do a google search for:
    deconvolution estimating blur
    and see many scientific papers on the subject. You will also see that the dominant blur function used is Gaussian. Please cite a scientific reference that says all these studies using Gaussian blur models are wrong.


    =======

    Quote Originally Posted by arash_hazeghi View Post
    In practice it really depends on what kind of blur you are dealing with. A simple Gaussian is easy to fix but if you have motion blur, it is very difficult to remove.
    Quote Originally Posted by Roger Clark View Post
    Of course it depends on the amount of blur, whether Gaussian or motion, but a good deconvolution algorithm can use a blur model of any shape. It does not matter if the blur is Gaussian, symmetric, or not. Richardson-Lucy deconvolution is such an algorithm. Whether it is "easy" or not depends less on the shape of the blur and more on the size of the blur and the S/N of the image.
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    So you disagree that Richardson-Lucy deconvolution can use many blur models?
    Do you also disagree that the recovery depends on the signal-to-noise ratio?
    Please try a google search for Richardson-Lucy deconvolution and see the many applications. In ImagesPlus, the program that I use on my photos, the program has the following blur models: Gaussian, binomial, box, and custom. I sometimes use custom to fix a slight motion blur, but Gaussian blur has worked extremely well on most of the many images I have worked on.


    ========

    Quote Originally Posted by arash_hazeghi View Post
    The blur that photographers have to deal with is not a simple Gaussian blur. Soft edges of detail in a digital photo come from two main factors when you view the image at 100%, assuming of course that focus is tack sharp. It mainly comes from the optical low-pass filter on the sensor as well as the demosaicing process, which is a heavy function of the RAW conversion method.
    Quote Originally Posted by Roger Clark View Post
    That is not correct. When multiple processes contribute to blur, the result is almost always well modeled by a Gaussian profile. And you forgot a major contributor to blur: diffraction. In today's digital cameras with 4 to 7 micron pixels, diffraction is usually larger than a pixel. For example, red light at f/4 results in about a 6-micron diameter spot, rising to 11.7 microns at f/8. But it isn't a single diffraction disk; it is multi-wavelength, even within a red, green or blue channel. This makes many overlapping diffraction disks of varying sizes, and that is closely modeled by a Gaussian. Then add in lens aberrations and the blur filter: each process is a convolution, making the result closely modeled by a Gaussian. Only when the image is well out of focus, where typically under- or over-corrected lenses or really bad astigmatism dominate, does the blur become non-Gaussian. And even then, a good deconvolution algorithm can correct a good S/N image.

    Here is an example of quite complex motion blur corrected by Richardson-Lucy deconvolution:
    http://www.astrosurf.com/buil/us/iris/deconv/deconv.htm
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    So you disagree that the motion blur in the link above was corrected? It sure looks like it did an amazing job to me, and the result is obviously much better than the original.
    I posted a star profile that is well fit by a Gaussian, and that is the result of lens aberrations, diffraction, the sensor blur filter, and atmospheric turbulence.
    A Google search will show that a Gaussian blur model is the dominant model used in deconvolution in scientific papers. Are all these scientists wrong?


    Here is another example that was posted on BPN: Grant Atkinson's leopard + warthog:
    http://www.birdphotographers.net/for...read.php/94021
    The Gaussian blur model produced great results.





    ===========

    Quote Originally Posted by arash_hazeghi View Post
    The details of the OLPF and the demosaic that a manufacturer uses are proprietary. That's why the manufacturer's RAW converters (e.g. DPP or NX2) do a better job of delivering a high-quality output (assuming one knows the optimal settings), especially without adding noise to the image.
    Quote Originally Posted by Roger Clark View Post
    This is a fine theory, but I don't agree. Photoshop has fallen behind in raw-conversion capability. I'm seeing better results out of other raw converters, such as darktable, than even DPP. I'm now delivering prints for galleries produced with darktable. And I've run Richardson-Lucy deconvolution on hundreds of images, many of which have sold in galleries and won and placed in contests.
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    So you disagree that I have run Richardson-Lucy deconvolution on hundreds of images,
    and that I have had images using Richardson-Lucy deconvolution that have won or placed in contests?
    No, Arash, I am not lying. I have been running Richardson-Lucy deconvolution for over a decade on my digital camera images. I have postings about Richardson-Lucy deconvolution
    on my web site from at least 2004. It is defined as part of my workflow on my web site, which I first published in 2005, and I had been using Richardson-Lucy deconvolution for quite a while before that. I purchased my first version of ImagesPlus in 2003. When I was a moderator here running the monthly processing exercises, I showed results from Richardson-Lucy deconvolution. Having used it for so long, it should not be hard to believe that I have processed a lot of images with the method.



    =============

    Quote Originally Posted by arash_hazeghi View Post
    Also, here is a link that I find most relevant to photography:
    http://www.imatest.com/docs/sharpness/
    Quote Originally Posted by Roger Clark View Post
    I agree that it is a great link on MTF, but MTF is a 1-dimensional measure of image sharpness. Real images are not bar charts. Real world images have 2-D information. For example, with MTF, the theory would say there is no information to be gained once 0% MTF is reached. This is true only for the 1-dimensional profile of a bar chart. Deconvolution can restore detail beyond the 0% MTF "limit" on 2-D image objects, just not on bar chart profiles.
    Quote Originally Posted by arash_hazeghi View Post
    I do not agree with any of the above.
    So how is MTF not a 1-dimensional measure? One uses a bar chart and measures the response only perpendicular to the bars. That is a 1-dimensional measurement.
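For readers following along, the 1-D nature of the measurement is easy to see in code: the MTF is the magnitude of the 1-D Fourier transform of the line-spread function measured across the bars. A numpy sketch, assuming a Gaussian line-spread function for illustration:

```python
import numpy as np

# Line-spread function (LSF): a 1-D profile measured across an edge or bar.
n, sigma = 256, 2.0
x = np.arange(n) - n // 2
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()                      # normalize so that MTF(0) = 1

# The MTF is the magnitude of the 1-D Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(n)            # spatial frequency in cycles/pixel, up to Nyquist
```

Everything here is a function of a single spatial coordinate; there is no 2-D scene information in the measurement.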
    So you are also disagreeing that deconvolution can restore detail beyond the 0% MTF limit? Then how do you explain the many scientific references (some of which I have already cited, others below) that say otherwise? Do you have some new scientific reference that says all these other papers are wrong? Here is one of many references indicating that what I describe is not only possible but is being used in real-world applications, this one in biology:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3431294/
    Sub-diffraction Limit Localization of Proteins in Volumetric Space Using Bayesian Restoration of
    Fluorescence Images from Ultrathin Specimens, PLoS Comput Biol. 2012 August; 8(8): e1002671.
    Note the results section: "Two-dimensional RL deconvolution is used to improve the resolution of protein structures. Initial deconvolution trials using ultra-thin sections seeded with 110 nm beads using RL with a high-quality, low-noise empirical PSF (Figure 1) or blind deconvolution using a hypothetical Gaussian as an initial PSF (Figure 2A) demonstrated that RL performed significantly better, returning most of the diffracted light back into the central pixel (1 pixel=~100 nm, 1.4NA Oil objective)." Note: RL = Richardson-Lucy
    Key points: blind deconvolution using Gaussian profiles, and recovered detail below the diffraction limit (that is, 0% MTF).

    And general machine vision:
    http://books.google.com/books?id=-eV...0limit&f=false

    Advanced Concepts for Intelligent Vision Systems: 11th International ...
    Li et al. chapter, 2009:
    "...improving the resolution beyond the diffraction limit..."

    and many more references with simple google searches.


    You have made a lot of general charges, but have given no evidence. And the scientific and application literature is full of examples that clearly work, contrary to your general statement that it is all wrong.

    Roger

  42. #40
    Super Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    18,556
    Threads
    1,321
    Thank You Posts

    Default

    "So you disagree that I have run Richardson-Lucy deconvolution on hundreds of images,
    and that I have had images using Richardson-Lucy deconvolution that have won or placed in contests?
    No, Arash, I am not lying. I have been running Richardson-Lucy deconvolution for over a decade on my digital camera images. I have postings about Richardson-Lucy deconvolution
    on my web site from at least 2004. It is defined as part of my workflow on my web site, which I first published in 2005, and I had been using Richardson-Lucy deconvolution for quite a while before that. I purchased my first version of ImagesPlus in 2003. When I was a moderator here running the monthly processing exercises, I showed results from Richardson-Lucy deconvolution. Having used it for so long, it should not be hard to believe that I have processed a lot of images with the method.
    "


    I never said you were lying. I said you should definitely stick with your technique if it has brought you hundreds of selling/award-winning images.

    When I read statements like "two-dimensional RL deconvolution is used to improve the resolution of protein structures" from Sub-diffraction Limit Localization of Proteins in Volumetric Space Using Bayesian Restoration of Fluorescence Images from Ultrathin Specimens, I laugh, not because the reference or the science is bad, but because I don't see the connection between fluorescence imaging of protein chains and avian photography, i.e. creating a work of art. In science you can prove something with one pixel if the data are good; do you also make one-pixel photographs?


    Anyway, sorry, but I do find the reference irrelevant. You can spend years talking about theories, writing books, or posting millions of words on the forum; I believe the proof is in the pudding. When I look at a real-world photographic example like the one you provided (Grant's photo) http://www.birdphotographers.net/for...read.php/94021 the "after" image in pane #12 does not impress me at all. It is noisy and full of artifacts. It is a delete for me as posted. The re-sized version looks better but is over-sharpened and also full of artifacts. I believe I can do a better job than that, but maybe I'm wrong. I will give you the benefit of the doubt.

    I have uploaded an image (a white-tailed kite) here; it is a 1:1 output from RAW with no adjustments applied, except that I have cropped it for composition. This photo came out soft and I had marked it for deletion. Based on what you say, your method should work best on this image, so please try your deconvolution on it and post a final 1200-pixel-wide version (standard BPN format) that looks best to you; no crops, pixel-peeping, or detail views, just a single output. If it looks good to me at that size and better than what I can achieve, I will definitely study the details of your method and its merits, and purchase the software as well.

    here is the file

    http://www.arihazeghiphotography.com...47_example.jpg

    Good luck. I will check this thread again once you upload your result.
    Last edited by arash_hazeghi; 01-26-2014 at 02:25 AM. Reason: uploaded sample
    New! Sony Capture One Pro Guide 2022
    https://arihazeghiphotography.com/Gu.../Sony_C1P.html


    ------------------------------------------------
    Visit my blog
    http://www.arihazeghiphotography.com/blog

  43. #41
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Attached Images
    Here is Arash's image, processed the traditional way in Photoshop: downsized with cubic spline, then sharpened with unsharp mask.

    Some comments.
    The focus is clearly in front of the bird: the vegetation is in sharp focus. Arash, please tell us your focus strategy in this situation. Were you pre-focused on the vegetation and then moved to the bird, with this being the first frame or two? Were you following the bird in, and if so, how long was AF engaged? Or something else?

    The image is quite noisy for a 1DIV at ISO 500 and is underexposed, which adds to the noise. That relatively low S/N makes recovery more difficult.

    Arash puts me at a disadvantage by supplying only an 8-bit JPEG with lossy compression. JPEG compression is worse on darker parts of an image, increasing noise further.

    Arash, would you consider making the raw file available? Perhaps Don would consider resurrecting the monthly raw processing exercises, and this image could be the first in a new series (or even a quarterly raw processing exercise if people don't have the time).

    In the next panel I'll post a deconvolution.

    Roger

  44. #42
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Attached Images
    Attached here is a restored image using Richardson-Lucy image deconvolution. Several things to note:

    I applied all steps uniformly to the entire image. Normally, for an image like this, I would keep the original background, perhaps even smoothed; processing the background just increases noise in this situation. I would also process the feet and the vegetation with different settings, as I discussed in the references to the sharpening thread at the start of the current thread. The vegetation is much sharper, so it needs less aggressive deconvolution; as a result it is over-sharpened here, so ignore that. Similarly, the feet, being further from the plane of focus, have even more blur and should be treated differently, which I would normally do to improve them further. So it is best to concentrate on the detail in the bird's head, body, and wings when comparing to the original. Further, even though I said the image is underexposed, the head above the beak and the legs are too bright for an 8-bit JPEG to retain any detail; there is simply too little tonal information left after quantization. The same goes for detail in the orange feet. The image also suffers from some residual chromatic aberration that limits recovery; that should have been corrected better during raw conversion.

    I made no curves, contrast, or levels adjustments to any part of the image.

    Bottom line: I think this shows a significant improvement over the Photoshop unsharp mask and cubic spline, but it should not be considered the limit of what the technology can do when starting from a raw file. Working from the raw data would mitigate many of the problems discussed above and produce a better image.

    A note on the two-component blur: depending on the lens characteristics as it defocuses, there may be multiple components to the blur. One could in theory treat them as a single non-Gaussian blur function (more like a Lorentzian), or simply deconvolve in multiple steps using different models. This is not unlike running unsharp mask multiple times with different settings to improve results. It is best to start with the larger-diameter settings and work toward finer ones.
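The reason stacked blurs can be peeled off in stages is that Gaussian variances add under convolution: sigma_total = sqrt(sigma1^2 + sigma2^2). A quick numeric check (numpy, 1-D for simplicity; helper names are mine):

```python
import numpy as np

def gaussian(n, sigma):
    """Unit-area 1-D Gaussian kernel."""
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def variance(profile):
    """Second central moment of a 1-D profile."""
    x = np.arange(len(profile))
    mean = (x * profile).sum() / profile.sum()
    return ((x - mean) ** 2 * profile).sum() / profile.sum()

# Convolving two Gaussian blurs gives a Gaussian whose variance is the sum:
# here 2^2 + 3^2 = 13.
n = 257
g1, g2 = gaussian(n, 2.0), gaussian(n, 3.0)
combined = np.convolve(g1, g2, mode="same")
```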

    Roger

  45. #43
    Super Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    18,556
    Threads
    1,321
    Thank You Posts

    Default

    Attached Images
    Roger,

    Thanks for working on the image. It's good, but it does not show any improvement over what I can achieve with my technique in Photoshop. Here is my version; I worked on the same file for about 40 seconds.

    To my eye, your post has a bit more noise (on the bird), some artifacts (maybe from resizing?), and a bit less detail around the beak, talons, and under-wings compared to my version.

    I am posting my output so people can compare and decide which one they prefer.



    Best
    Last edited by arash_hazeghi; 01-26-2014 at 08:32 PM. Reason: upload file
    New! Sony Capture One Pro Guide 2022
    https://arihazeghiphotography.com/Gu.../Sony_C1P.html


    ------------------------------------------------
    Visit my blog
    http://www.arihazeghiphotography.com/blog

  46. #44
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Well, Arash,

    Perhaps we'll just have to agree to disagree. I do not see your image as an improvement at all. First, you brightened it, and that skews the visual impression; it is like a stereo salesman boosting the volume on one system to make it seem better than another when it really is not. I see a lot of ringing artifacts in your image, for example around the beak and talons. Look at the catchlight in the eye: your image has double the number of pixels making up the catchlight compared to the RL-deconvolved image. If you put the two images side by side in a photo editor and enlarge to 200%, all these ringing artifacts in your image are easily visible.

    And you did not start with the same JPEG you originally posted for me to work on: your reworked image above shows more vegetation in the lower right corner than the image you gave me. That is not the same file.

    Roger

  47. #45
    BPN Viewer Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,116
    Threads
    33
    Thank You Posts

    Default

    I agree it is not the same file. When I magnified both the RC and AH JPEGs to 600% in PS, the bird heads are different sizes, AH's being larger, and AH's is a bit brighter (caused by sharpening?).
    Look at both images in PS at 600%. Note the striking differences in contrast. Note that AH's eye highlight is 4 px while RC's is 2 px. Another quite obvious difference is the noise(?) in the background.

    To make a comparison to show here, I cropped each to 144x103 px around the head. 144x103 is small here, so I resized each to 550 px wide. Yes, resizing blurred it all somewhat; for the best comparison, magnify each 600% in PS. Anyway, just to put up something like what I see:
    Tom (and please don't nit-pick this image; do it correctly yourself and compare)
    [Attached image: RC AH.jpg]
    Last edited by Tom Graham; 01-26-2014 at 10:14 PM.

  48. #46
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Hi Tom,
    Yes, the bird in Arash's image is about 6% larger. Arash smoothed the background in his image; I did not. The smoothing encroached on the bird in some locations in Arash's image; you can see that on the bird's back above the head and at the wingtips. The halo around the beak is quite strong in Arash's image, but it also shows weakly in my version, because it is present in the JPEG Arash posted. A good raw conversion would handle that problem better.

    Roger

  49. #47
    BPN Viewer
    Join Date
    Dec 2013
    Location
    Barrow, Alaska
    Posts
    37
    Thank You Posts

    Default

    In the end, we see what works. Richardson-Lucy is a good tool, and certainly the best for certain uses. This is not one of them.

    Roger sharpened the image with an RL algorithm and then found it necessary to also apply Unsharp Mask to get his desired results, after repeatedly arguing that USM is at all times inferior to RL and unnecessary.

    Arash simply provided a more appropriately sharpened image that did not use Richardson Lucy at all. Note that the non-RL version doesn't have nearly as many artifacts, such as "jaggies" on tonal edges caused when the Richardson-Lucy algorithm reduces a high contrast edge transition to just a single pixel width. And the halo around the bird's beak is clearly not due to sharpening, as it exists in the original image.

    When used appropriately, high pass sharpening, Richardson-Lucy sharpening, wavelet sharpening and unsharp mask all have slightly different effects and therefore slightly different uses. None of them replace another.

    Incidentally, reducing the highlight in the bird's eye to just 2 pixels is not, I think, correct. Compare it to the original and to the various other methods: it fails the test of improving the image. It is also a very good demonstration of the difference between sharpening applied before and after resampling. The downsizing algorithm dramatically affects the final product, probably more so than the sharpening tool used.
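How much the downsizing algorithm matters is easy to demonstrate. A minimal numpy sketch on a contrived worst-case pattern (neither method here is Photoshop's cubic spline; plain decimation versus a 2x2 box filter is just the starkest illustration of aliasing):

```python
import numpy as np

# Worst-case fine detail: a 1-pixel checkerboard.
n = 64
checker = (np.indices((n, n)).sum(axis=0) % 2).astype(float)

# Plain decimation keeps every other pixel and aliases the pattern away
# (every sampled pixel lands on the same color):
naive = checker[::2, ::2]

# A 2x2 box filter before decimating preserves the average tone instead:
boxed = checker.reshape(n // 2, 2, n // 2, 2).mean(axis=(1, 3))
```

The decimated version collapses to a flat field, while the filtered version keeps the correct mid-gray; real resamplers differ less dramatically, but in the same direction.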
    Last edited by Floyd Davidson; 01-27-2014 at 06:10 AM.

  50. #48
    Banned
    Join Date
    Feb 2008
    Location
    Colorado
    Posts
    3,949
    Threads
    254
    Thank You Posts

    Default

    Quote Originally Posted by Floyd Davidson View Post
    In the end, we see what works. Richardson-Lucy is a good tool, and certainly the best for certain uses. This is not one of them.

    Roger sharpened the image with a RL algorithm and then found it necessary to also apply Unsharp Mask to get his desired results. That after repeatedly arguing that USM is at all times inferior to RL and unnecessary.

    Arash simply provided a more appropriately sharpened image that did not use Richardson Lucy at all. Note that the non-RL version doesn't have nearly as many artifacts, such as "jaggies" on tonal edges caused when the Richardson-Lucy algorithm reduces a high contrast edge transition to just a single pixel width. And the halo around the bird's beak is clearly not due to sharpening, as it exists in the original image.

    When used appropriately, high pass sharpening, Richardson-Lucy sharpening, wavelet sharpening and unsharp mask all have slightly different effects and therefore slightly different uses. None of them replace another.
    Floyd,
    We can't draw any conclusions about noise, or about which method is better, from Arash's image, because Arash used different data for his version and supplied me with only an 8-bit, lossy-compressed JPEG. At least some of the noise in the RL image is from JPEG artifacts. While one can't draw conclusions about noise, one can see the ringing from whatever sharpening Arash used; in my view that destroys the fine detail of the image. It seems that untrained eyes (I'm talking about the general population here, not specific people) have either come to accept ringing or see it as detail.

    Yes, unsharp mask (USM) is inferior to deconvolution (real sharpening) as a true sharpening tool, but that does not mean USM is not valuable. It enhances edge contrast and gives the perception of sharpness. In both of my tutorials on deconvolution sharpening I show that the two together produce an even better result. It is not an either/or.
    http://www.clarkvision.com/articles/image-restoration1/

    In sharpening an image, the first step is deconvolution, which improves real resolution. After deconvolution, edge enhancement (e.g. unsharp mask) simply improves the perception of sharpness further, and USM on a higher-resolution image (the one after deconvolution) is always better than USM on a lower-resolution image. And Arash's image proves that.
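The edge-enhancement step in this two-stage order can be sketched generically. This is not Photoshop's exact Unsharp Mask, just the standard formulation (add back the difference from a blurred copy); the Gaussian-blur helper and all names here are mine, and the deconvolution stage is assumed to have already run:

```python
import numpy as np

def gaussian_blur(img, sigma, radius=8):
    """Separable Gaussian blur using edge-replicated 1-D convolutions."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = lambda a, axis: np.pad(a, [(radius, radius) if ax == axis else (0, 0)
                                     for ax in range(a.ndim)], mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad(img, 1))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, pad(out, 0))
    return out

def unsharp_mask(img, sigma=1.5, amount=1.0):
    """Edge-contrast boost: add back the difference from a blurred copy."""
    return img + amount * (img - gaussian_blur(img, sigma))

# USM raises contrast across an edge (with the characteristic over/undershoot)
# but creates no new resolution; that is why it follows the deconvolution step.
img = np.zeros((32, 32)); img[:, 16:] = 1.0      # a vertical step edge
sharp = unsharp_mask(img, sigma=1.5, amount=0.7)
```

The over/undershoot around the edge is exactly the halo/ringing being argued about in this thread when the amount is pushed too far.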


    Quote Originally Posted by Floyd Davidson View Post
    Incidentally, reducing the highlight in the bird's eye to just 2 pixels is not what I think is correct. Compare it to the original and to various different methods and it fails the test of improving the image! And it is a very good demonstration of the difference between sharpening applied before and after resampling. The downsizing algorithm dramatically affects the final product, probably more so than the sharpening tool used.
    Remember that these images are downsized for the web. The original is 3 times larger, so in the full-resolution image the catchlight in the RL-deconvolved version is about 3 pixels in diameter, and in Arash's about 6 pixels. That is the point spread function, and it represents a good measure of the resolution of the image. It shows that the RL deconvolution improved real resolution by about a factor of 2 over Arash's image. And it is the large catchlight (combined with the rest of the image) that leads to the impression that the image is not sharp and slightly out of focus. See the full-resolution image that Arash posted a link to (currently in pane 40).

    So despite the disadvantage of an 8-bit, lossy-compressed JPEG, the RL deconvolution did very well and improved real resolution by about 2x over Arash's method.

    Roger

  51. #49
    BPN Viewer
    Join Date
    Dec 2013
    Location
    Barrow, Alaska
    Posts
    37
    Thank You Posts

    Default

    Quote Originally Posted by Roger Clark View Post
    So despite the disadvantage of an 8-bit, lossy-compressed JPEG, the RL deconvolution did very well and improved real resolution by about 2x over Arash's method.
    From the original JPEG posted by Arash:



    And then from the sharpened image posted by Arash:



    And from the sharpend image posted by Roger:



    There are indeed a few things that can be said about the differences, mostly, but not all, in favor of the sharpening done by Arash. Higher resolution from Richardson-Lucy is not apparent.

  52. #50
    BPN Viewer Tom Graham's Avatar
    Join Date
    Apr 2010
    Location
    Southern California, Orange County
    Posts
    1,116
    Threads
    33
    Thank You Posts

    Default

    Pane 49 by FD:
    The 2nd (middle) image, by AH, is very obviously higher contrast than the original or RC's (or not?). Is higher contrast considered sharpening? What does higher contrast contribute to sharpening, if anything?
    Because I see only 2 px in the eye highlight in RC's image, does that mean it is the sharpest?
    Tom
    Last edited by Tom Graham; 01-27-2014 at 03:39 PM. Reason: question structure

