Thread: smaller RAW formats- how?

  1. #1: John Chardine (Forum Participant, Canada)

    My 50D and several other bodies out there can produce smaller RAW images with fewer pixels. The 50D has two such options, which produce files 3267 x 2178 and 2376 x 1584 pixels in size.

    I'd be interested to know how this is done. I can think of two options:

    1. When writing the RAW file to the CF card, only spit out information from every second, third, fourth, etc., set of RGB sensors (i.e., throw away data).
    2. Average information over a set of RGB sensors and spit that out (use all the data).

    There may be other ways of doing it. Just wondering.
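
    To make option 1 concrete, here is a toy sketch of the sort of thing I mean (pure illustration on a synthetic mosaic; certainly not Canon's actual firmware). To keep a valid RGGB mosaic you would have to throw away whole 2x2 blocks rather than individual photosites:

    Code:
    import numpy as np

    # Synthetic 12-bit RGGB Bayer mosaic as a stand-in for sensor data.
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 4096, size=(2304, 3456), dtype=np.uint16)

    def decimate_bayer(mosaic, step=2):
        """Option 1: keep every step-th 2x2 RGGB block, discard the rest."""
        h, w = mosaic.shape
        blocks = mosaic.reshape(h // 2, 2, w // 2, 2)  # group into 2x2 blocks
        kept = blocks[::step, :, ::step, :]            # throw away data
        bh, _, bw, _ = kept.shape
        return kept.reshape(bh * 2, bw * 2)

    small = decimate_bayer(raw)          # half the linear size, still RGGB
    print(raw.shape, '->', small.shape)  # (2304, 3456) -> (1152, 1728)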

  2. #2: rnclark (Colorado)

    John,
    The evidence is that the data from multiple pixels are averaged, thus improving the signal-to-noise ratio. Some CCDs can average pixels on chip (called binning), but that is not practical on a Bayer sensor, where adjacent pixels are different colors. So it must be done in software, but I have not seen the algorithm published.
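
    In software, same-color averaging might look something like the sketch below (my guess at the general idea, not the unpublished algorithm): split the mosaic into its four color planes, average each plane over 2x2 same-color neighborhoods, and reassemble a half-size mosaic.

    Code:
    import numpy as np

    def bin_same_color(mosaic):
        """Average same-color photosites of an RGGB mosaic in software.

        Uses all the data; averaging four uncorrelated samples halves
        the noise standard deviation. Illustrative sketch only.
        """
        offsets = ((0, 0), (0, 1), (1, 0), (1, 1))           # R, G1, G2, B
        planes = [mosaic[r::2, c::2].astype(np.float64) for r, c in offsets]
        binned = []
        for p in planes:
            h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
            p = p[:h, :w]                                    # trim odd edges
            binned.append(p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
        out = np.empty((2 * binned[0].shape[0], 2 * binned[0].shape[1]))
        out[0::2, 0::2], out[0::2, 1::2] = binned[0], binned[1]
        out[1::2, 0::2], out[1::2, 1::2] = binned[2], binned[3]
        return out                                           # half-size RGGB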

    Roger

  3. #3: John Chardine (Forum Participant, Canada)

    Thanks Roger. I had a feeling you might know something about this subject!

    Right now I am massively down-sampling my RAW 50D images for posting on BPN and emailing. Maybe in some circumstances it would produce a better result to shoot a less noisy sRAW image and down-sample a lot less, so long as you don't have aspirations to make huge prints sometime in the future.

  4. #4: Emil Martinec (Guest)

    Since the mid-size sRAW on the 50D is 11/16 (IIRC) of the full RAW in linear dimensions, it is not a simple binning of pixel data from the sensor. It seems reasonable to assume that they are using the demosaic routine in the camera's Digic processing engine to interpolate the Bayer data, and then downsampling the result without applying color balance, tone curves, gamma correction, etc.
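
    Sketched as a two-step pipeline (with OpenCV standing in for the Digic engine; the Bayer layout, demosaic interpolation, and resampling filter are all guesses on my part):

    Code:
    import cv2
    import numpy as np

    # Stand-in for 50D Bayer data: 4752 x 3168 photosites, 14-bit in uint16.
    raw = np.random.default_rng(1).integers(0, 2**14, (3168, 4752)).astype(np.uint16)

    # Step 1: demosaic the Bayer data to full-resolution linear RGB.
    rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)

    # Step 2: downsample to 11/16 of the linear dimensions. The result is
    # still linear data: no white balance, tone curve, or gamma applied.
    h, w = rgb.shape[:2]
    sraw1 = cv2.resize(rgb, (w * 11 // 16, h * 11 // 16),
                       interpolation=cv2.INTER_AREA)
    print(sraw1.shape)  # (2178, 3267, 3)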

    I seem to recall on-sensor binning is possible in CMOS, even with the complications of the color filter array, by suitably sharing transistors between same-color pixels in a column, but one cannot bin more than pairs of pixels this way. CCDs are much better set up for pixel binning on the sensor, if that is desired.

  5. #5: Emil Martinec (Guest)

    Quote Originally Posted by John Chardine View Post
    Thanks Roger. I had a feeling you might know something about this subject!

    Right now I am massively down-sampling my RAW 50D images for posting on BPN and emailing. Maybe in some circumstances it would produce a better result to shoot a less noisy sRAW image and down-sample a lot less, so long as you don't have aspirations to make huge prints sometime in the future.
    sRAW is not less noisy per se; rather, it looks less noisy at the pixel level because the pixel level is a coarser scale in the image than it is in the full-blown RAW, and noise is a scale-dependent quantity. A properly downsampled converted RAW should not have noise performance worse than sRAW -- after all, they are derived from the same sensor data!

    The use of sRAW is for convenience (although it doesn't reduce file sizes as much as one might hope, since one is trading single-color Bayer data for a lower pixel count with all three RGB colors), not for noise performance.

    To show that noise is scale-dependent, I recently did an exercise with 40D and 50D RAW data, calculating the noise power spectrum of each as well as the noise spectrum of the 50D image downsampled to the size of the 40D:

    [attached graph: noise power spectra of the 40D, the 50D, and the downsampled 50D]

    The horizontal scale is spatial frequency -- fine scales in the image are to the right, coarse scales to the left (the overall scale is somewhat arbitrary; the pixel level or Nyquist frequency is at 256 for the 50D, and 209 for the lower resolution 40D). The vertical axis is the noise power. The data points are noise power at the corresponding image scale. The blue points are the 50D, the red points the 40D; the orange points are the 50D downsampled to the 40D pixel dimensions using PS CS3 bicubic, the black points are downsampling via ImageMagick's Lanczos filter.

    As one can see, the more accurate Lanczos resampling filter rather accurately reproduces the noise characteristics of the 40D. It is also true that the noise power at corresponding scales in images from the 40D and 50D is the same, even without downsampling the 50D. What people see as "more noise" in the 50D when pixel-peeping is the result of comparing apples and oranges -- the physics of imaging dictates that the noise spectrum rises with spatial frequency, so there is more noise power at finer scales. Since the pixel level of the 50D is a finer scale than the pixel level of the 40D, there is more noise. The noise of the 50D at the coarser scale of the 40D's pixel level (209 in the above graph) is the same as that of the 40D.
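
    For anyone who wants to try this, a rough numpy sketch of the kind of computation involved (a flat-field patch with synthetic photon noise stands in for the real test frames; the details of the actual measurement differ):

    Code:
    import numpy as np

    def radial_noise_power(patch, pixel_area=1.0):
        """Radially averaged noise power spectrum of a flat-field patch."""
        f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
        power = np.abs(f) ** 2 * pixel_area / patch.size
        cy, cx = power.shape[0] // 2, power.shape[1] // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)         # radial frequency bin
        sums = np.bincount(r.ravel(), power.ravel())
        counts = np.bincount(r.ravel())
        return (sums / counts)[:min(patch.shape) // 2]   # up to Nyquist

    rng = np.random.default_rng(2)
    patch = rng.poisson(1000, (512, 512)).astype(np.float64)
    spectrum_full = radial_noise_power(patch, pixel_area=1.0)

    # Downsample 2x (a simple block average here; Lanczos in the graph)
    # and recompute. Both patches span the same field, so bin r is the
    # same absolute scale in both: the downsampled spectrum matches at
    # the scales it retains and simply stops at its lower Nyquist.
    small = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))
    spectrum_small = radial_noise_power(small, pixel_area=4.0)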

  6. #6: rnclark (Colorado)

    Emil,
    A simple way to look at your graph and results is that the noise is dominated by the Poisson statistics of photon counting. When you normalize the area of photon collection, you capture the same number of photons (e.g. per unit area, or per larger pixel), so one should get the same answer for the 50D and 40D when 50D pixels are added together to cover the same area as a 40D pixel. If your plot showed a difference, it would say something fundamental about the two sensors (e.g. that one was less efficient than the other). Your results show that slicing up the pie (the sensor) into smaller pieces (pixels) loses nothing: the total pie is still the same. That's good.
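
    As a toy numerical check (pure photon noise, read noise ignored): four small pixels splitting the photons of one big pixel give the same S/N once they are added back together.

    Code:
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400                                   # mean photons per big-pixel area

    big = rng.poisson(n, size=10**6)                          # one big pixel
    small = rng.poisson(n / 4, size=(10**6, 4)).sum(axis=1)   # 4 small, summed

    for name, x in (("one big pixel", big), ("4 small summed", small)):
        print(name, "S/N =", round(x.mean() / x.std(), 1))    # both ~20 = sqrt(400)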

    Roger

  7. #7: Emil Martinec (Guest)

    Quote Originally Posted by rnclark View Post
    Emil,
    A simple way to look at your graph and results is that the noise is dominated by the Poisson statistics of photon counting. When you normalize the area of photon collection, you capture the same number of photons (e.g. per unit area, or per larger pixel), so one should get the same answer for the 50D and 40D when 50D pixels are added together to cover the same area as a 40D pixel. If your plot showed a difference, it would say something fundamental about the two sensors (e.g. that one was less efficient than the other). Your results show that slicing up the pie (the sensor) into smaller pieces (pixels) loses nothing: the total pie is still the same. That's good.

    Roger
    Agreed, yet many people are fooling themselves by making comparisons at 100%. I wanted to give a quantitative demonstration as to why that is misleading. It's fine to say that nothing is lost by dividing the photons up among smaller collecting devices; what seems to need more emphasis is the dependence of noise on spatial scale, and the importance of comparing images at a consistent scale when judging image noise. The misunderstanding of what the pixel level view represents when viewing images with different pixel dimensions is quite pervasive. And yes, the 40D and 50D have about the same quantum efficiency (with the 50D a tad better, perhaps due to its increased microlens coverage). That is why the graphs agree up to the resolution limit of the 40D.

  8. #8: rnclark (Colorado)

    Emil,
    If you always intend to make the same-size enlargements, then I agree with you. But many photographers want to push things more. With more pixels, they want to make larger prints, or crop more and still get a decent print (I know I do). So knowing the pixel-level performance tells you how far you can push your camera.

    Roger

  9. #9: Emil Martinec (Guest)

    All well and good. If one has an absolute standard of acceptable S/N, then put the upper bound on spatial frequency where the camera reaches that S/N, and that determines the effective MP count for acceptable S/N. What is problematic to me is when pixel performance is used to compare cameras of different pixel counts; that amounts to comparing prints of different sizes, which seems to me not useful. Things get a bit more nuanced if different sensor formats are thrown into the mix; then, if one wants to crop both images to the same field of view, the analysis of noise and S/N as a function of spatial frequency is better referred to absolute spatial frequency in lines/mm rather than lines/picture height (where picture height is the sensor size). But it still doesn't make sense to compare quantities at different spatial frequencies.
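
    The conversion itself is trivial, but worth writing down (the sensor heights below are round illustrative numbers):

    Code:
    def lp_per_mm(lp_per_picture_height, sensor_height_mm):
        """Refer a per-picture-height frequency to absolute lines/mm."""
        return lp_per_picture_height / sensor_height_mm

    # The same lines-per-picture-height figure is a finer absolute scale
    # on a smaller sensor, so cross-format comparisons need absolute units.
    print(lp_per_mm(1584, 14.9))   # ~106 lines/mm on an APS-C sensor
    print(lp_per_mm(1584, 24.0))   # ~66 lines/mm on a full-frame sensor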
