This is not an accurate statement. Deconvolution is not a generic term; it has a specific mathematical meaning defined by specific mathematical operations.
It is also inaccurate to characterize any sharpening as deconvolution. For example, unsharp mask is a convolution and a mathematically linear operation; it is not a deconvolution at all. Smart Sharpen in Photoshop is only partly a deconvolution, and only in some modes. And here is the critical factor: deconvolution for restoring image detail is not a direct solution; it is an iterative solution requiring at least 32-bit floating point. Smart Sharpen, while it could technically be considered a (partial) deconvolution, does not run multiple iterations, so it cannot be very effective. Even if some sharpening tools could technically claim to be deconvolution, their effectiveness is limited unless they run multiple iterations in floating point.
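To make the distinction concrete, here is a minimal sketch of unsharp masking as a single linear convolution (the 3x3 kernel and the amount are arbitrary choices for illustration). One pass, no iteration, no inverse problem being solved, which is why it is not a deconvolution:

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Unsharp mask: add back the difference between the image and a
    blurred copy. This is one linear convolution, not a deconvolution."""
    # Small Gaussian-like blur kernel (arbitrary choice for illustration)
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0
    pad = np.pad(image, 1, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            blurred[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    # sharpened = original + amount * (original - blurred)
    return image + amount * (image - blurred)

img = np.zeros((8, 8))
img[:, 4:] = 1.0              # a hard vertical edge
sharp = unsharp_mask(img)     # edge gets overshoot/undershoot "halos"
```

Note the characteristic overshoot above 1.0 and undershoot below 0.0 at the edge: unsharp mask exaggerates edge contrast rather than undoing a blur.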
This is not true at all for real images. Noise is usually the limiting factor, and all images obtained with real-world imaging systems contain noise.
It can actually be more sophisticated than that: an experienced analyst can make a good estimate of the amount of blur and use it as a starting point, which is a lot better than just a guess. For example, one can use the number of pixels in the transition at a hard edge, or in a specular highlight (a catchlight, for example), to make a good estimate of the blur.
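The edge-transition idea can be sketched in a few lines. The 10%/90% thresholds here are my assumption for illustration, not a standard the original comment implies:

```python
import numpy as np

def edge_blur_estimate(profile, lo=0.1, hi=0.9):
    """Estimate blur width from a 1-D profile across a hard edge by
    counting pixels between the 10% and 90% points of the transition."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    above_lo = np.where(p > lo)[0]
    above_hi = np.where(p > hi)[0]
    return above_hi[0] - above_lo[0]

# Synthetic blurred edge: an ideal step softened over a few pixels
edge = np.array([0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0])
width = edge_blur_estimate(edge)   # transition width in pixels
```

The measured width gives a starting blur diameter for the deconvolution model, which can then be refined by trial.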
Of course it depends on the amount of blur, whether Gaussian or motion, but a good deconvolution algorithm uses a blur model of any shape. It does not matter whether the blur is Gaussian, symmetric, or neither; Richardson-Lucy deconvolution is such an algorithm. Whether it is "easy" depends less on the shape of the blur and more on the size of the blur and the S/N of the image.
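For the curious, a bare-bones Richardson-Lucy iteration looks like this (1-D for brevity; the asymmetric PSF is an arbitrary illustrative blur model, and real implementations add regularization and careful edge handling). Note that the PSF shape is just an input; nothing requires it to be Gaussian or symmetric:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Iterative Richardson-Lucy deconvolution (1-D, float64).
    Accepts any non-negative PSF shape, not just Gaussian."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]                       # adjoint of the blur
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Simulate: a sharp spike blurred by an asymmetric (motion-like) PSF
truth = np.zeros(32)
truth[16] = 1.0
psf = np.array([0.1, 0.6, 0.3])                   # asymmetric blur model
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=50)
```

This also shows why floating point matters: each iteration multiplies the estimate by a correction ratio, and accumulated rounding in low-precision integer math would destroy the faint detail the iterations are trying to recover.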
That is not correct. When multiple processes contribute to blur, the result is almost always well modeled by a Gaussian profile. And you forgot a major contributor to blur: diffraction. In today's digital cameras with 4- to 7-micron pixels, diffraction is usually larger than a pixel. For example, red light at f/4 produces a spot about 6 microns in diameter, rising to 11.7 microns at f/8. But it isn't a single diffraction disk: even within a red, green, or blue channel there are many wavelengths, producing many overlapping diffraction disks of varying sizes, and that is closely modeled by a Gaussian. Then add in lens aberrations and the blur filter; each process is a convolution, and the combined result is closely modeled by a Gaussian. Only when the image is well out of focus, typically when an under- or over-corrected lens or really bad astigmatism dominates, does the blur become non-Gaussian. And even then, a good deconvolution algorithm can correct an image with good S/N.
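Those spot sizes follow directly from the Airy-disk formula, diameter ≈ 2.44 × wavelength × f-number (to the first minimum), taking red light as roughly 0.6 micron:

```python
def airy_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disk to its first minimum, in microns:
    d = 2.44 * wavelength * f-number."""
    return 2.44 * wavelength_um * f_number

# Red light, ~0.6 micron wavelength
d_f4 = airy_diameter_um(0.6, 4)   # about 5.9 microns at f/4
d_f8 = airy_diameter_um(0.6, 8)   # about 11.7 microns at f/8
```

With 4- to 7-micron pixels, even f/4 diffraction already spans more than a pixel, and stopping down makes it worse.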
Here is an example of quite complex motion blur corrected by Richardson-Lucy deconvolution:
http://www.astrosurf.com/buil/us/iris/deconv/deconv.htm
This is a fine theory, but I don't agree. Photoshop has fallen behind in raw-conversion capability. I'm seeing better results from other raw converters, like darktable, than even from DPP, and I'm now delivering prints for galleries produced with darktable. I've run Richardson-Lucy deconvolution on hundreds of images, many of which have sold in galleries and won or placed in contests.
I agree that it is a great link on MTF, but MTF is a 1-dimensional measure of image sharpness. Real images are not bar charts; real-world images contain 2-D information. With MTF, the theory says there is no information to be gained once 0% MTF is reached. That is true only for the 1-dimensional profile of a bar chart. Deconvolution can restore detail beyond the 0% MTF "limit" on 2-D image objects, just not on bar-chart profiles.
Roger