
Thread: High Resolution Images vs. Jpeg

  1. #1
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default High Resolution Images vs. Jpeg

    Folks,

    Greetings. I've read many times glowing comments and reviews on various sites (LL comes to mind) of the detail and color of high-resolution images created by large-pixel-count sensors (80 MP medium format backs, D800E shots, and the like). The evidence is provided as a full-frame JPEG.

    On the other hand, detail is usually suspect on images posted here on BPN if the image has been cropped substantially from full frame, even if the final crop size exceeds the size as posted on BPN.

    My question is: how does high resolution express itself in the native resolution of JPEGs (8-bit color, full frame reduced to screen size, say 1200x800) such that you could tell the difference between, for example, a 12000x8000 16-bit TIFF original and a 2400x1600 8-bit TIFF original? (Not to mention viewing them on an 8-bit/channel computer screen.)

    Cheers,

    -Michael-

  2. #2
    Super Moderator arash_hazeghi's Avatar
    Join Date
    Oct 2008
    Location
    San Francisco, California, United States
    Posts
    18,545
    Threads
    1,318
    Thank You Posts

    Default

    Hi Mike,

    How much you can crop depends on the quality of the original RAW... If the RAW is tack sharp and very clean you can produce an excellent-looking 1200x800 file from a 4-Mpixel crop (2x larger on each side), but if it is soft or noisy even a 20-Mpixel file will look poor when reduced to 1200 pixels. The bit depth only affects subtle tones; it does not affect detail (as long as you don't compress it).

    On BPN the lack of fine detail usually comes from softness in the RAW; many folks struggle to make tack-sharp photos (specifically at long focal lengths), and thus much downsizing/sharpening is needed to make them look good...

    Since Bayer sensors are a 3X3 pattern I usually prefer to have an original that is 9X larger than the final size (so information from a full unit cell is averaged into one output pixel), i.e. about 9 Mpixels for 1200-pixel-wide output, which is a 45% crop on my 1DX.

    It also depends on your vision, some people have good eyes for fine detail and can spot the difference, some people can't...
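    The rule of thumb above can be sanity-checked with a few lines of arithmetic (my own sketch, not code from the thread; the only assumed figure is the 1DX sensor size of 5184x3456):

```python
# The "3x linear" rule of thumb: one output pixel per full Bayer
# neighbourhood, so the source should be 3x the output on each side.
out_w, out_h = 1200, 800
factor = 3
src_w, src_h = out_w * factor, out_h * factor       # 3600 x 2400
src_mp = src_w * src_h / 1e6                        # ~8.6 MP ("about 9")
crop_fraction = (src_w * src_h) / (5184 * 3456)     # vs. a full 1DX frame
print(round(src_mp, 1), round(crop_fraction * 100))  # 8.6 48
```

    This lands close to the "about 9 Mpixels" and "45% crop" figures quoted in the post.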
    Last edited by arash_hazeghi; 08-28-2013 at 04:14 PM.
    New! Sony Capture One Pro Guide 2022
    https://arihazeghiphotography.com/Gu.../Sony_C1P.html


    ------------------------------------------------
    Visit my blog
    http://www.arihazeghiphotography.com/blog

  3. #3
    Lifetime Member Michael Gerald-Yamasaki's Avatar
    Join Date
    Jan 2010
    Location
    Santa Cruz, CA USA
    Posts
    2,035
    Threads
    311
    Thank You Posts

    Default

    Arash,

    I was thinking of the question from the other direction. In creating a JPEG at 1200x800, is there something intrinsic about a high-resolution source that would produce a better-quality JPEG than one created from a lower-resolution source, say 2x the final JPEG resolution? If so, what would be the differences in the JPEG?

    In other words, if you have a tack-sharp 12000x8000 source and a tack-sharp 2400x1600 source, would there be a quality difference between the JPEGs at 1200x800? If so, how is that quality difference expressed in the JPEG: better color accuracy, better acutance...?

    -Michael-

  4. #4
    Super Moderator arash_hazeghi

    From a photographic point of view it depends on the subject, the amount of detail, and the person viewing it... if it is a very finely detailed subject and the viewer has critical eyes, they can spot some differences for the numbers you gave.


    Mathematically, it depends on the down-sampling too. You are low-pass filtering the data, but it is not a hard-cut filter, obviously. A very rough estimate is, as I said, 3X linear reduction to cancel the effect of Bayer and arrive at a "real" RGB pixel. From that point on, as long as the image is larger than the intended output size, it makes little visual difference (for bicubic). So take that as a rule of thumb.
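    As a minimal sketch of that rule of thumb (my own framing, not the poster's): treat one third of the linear pixel count as the "real" RGB resolution after de-Bayering, and compare it to the intended output width.

```python
def looks_fully_detailed(sensor_width_px, output_width_px, bayer_factor=3):
    """Rough check: does the 'real RGB' width (sensor width divided by
    the Bayer factor) still cover the intended output width?"""
    return sensor_width_px / bayer_factor >= output_width_px

# A 3600-px-wide source comfortably feeds a 1200-px output;
# a 2400-px-wide source falls short under this rule of thumb.
print(looks_fully_detailed(3600, 1200), looks_fully_detailed(2400, 1200))
# True False
```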

    A somewhat similar example (after de-Bayer-ing) is taking 196 kb/s and 41 kb/s audio and playing it through a 24 kb/s channel... the higher bandwidth will hardly sound better through the narrow-band channel...
    Last edited by arash_hazeghi; 08-28-2013 at 06:04 PM.

  5. #5
    Forum Participant
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    6,588
    Threads
    643
    Thank You Posts

    Default

    Michael- An easy way to answer your question empirically would be to choose a scene with lots of detail and colour and make one image of it, then zoom in and make several (say, four up and four down for eight in total, with some overlap) and stitch the set together. Crop both in the same way, downsample both to 1200x800, and examine the result. One would have to assume that your zoom lens is performing well at all focal lengths, which I think is pretty safe for a good-quality lens.

  6. #6
    Lifetime Member Michael Gerald-Yamasaki

    Quote Originally Posted by arash_hazeghi View Post
    from a photographic point of view, it depends on the subject, amount of detail and the person viewing it.... if it is a very finely detailed subject and the viewer has critical eyes they can spot some differences for the numbers you gave.

    A somewhat similar example (after de-Bayer-ing) is taking 196Kbs and 41Kbs audio and playing it through a 24kbs channel... the higher bandwidth will hardly sound better through the narrow-band channel...
    Let's put aside Bayer for a moment. I can't convince myself that the second statement isn't true for photography... the resolution is the resolution limit, irrespective of the origin. Isn't that right?

    Bayer... I can see how it might raise the ante on a minimum size difference. I like your 3x linear reduction rule of thumb.

    -------

    John, In theory, an empirical test should yield a reasonable answer. In practice, the complexity (for me) of setting up a sufficient empirical study is too great. ;-)

    Thanks for the thoughts.

    Cheers,

    -Michael-

  7. #7
    Super Moderator arash_hazeghi

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    the resolution is the resolution limit, irrespective of the origin. Isn't that right?
    Not quite. Only as long as the original spectrum was wider than the filter cut-off. If not, it is limited by that, not the filter. Bayer reduces your effective spatial bandwidth dramatically; in the case of 2400 pixels wide, let's say every 4 pixels can contain distinct color detail, so the true FFT is only 600 points. If the cut-off is at 1200 points it will not limit, thus the larger image will look better after down-sampling...
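    A toy mosaic/demosaic round trip makes the information loss concrete (entirely my own sketch, not the thread's numbers or the linked code; an RGGB layout and a nearest-neighbour fill are assumed):

```python
def bayer_channel(x, y):
    # RGGB mosaic: even rows alternate R,G; odd rows alternate G,B
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

W, H = 8, 8
# Ground truth: single-pixel colour stripes (pure red / pure blue columns)
truth = [[(255, 0, 0) if x % 2 == 0 else (0, 0, 255) for x in range(W)]
         for y in range(H)]

# Sensor output: each site records only one of the three channels
mosaic = [[truth[y][x][bayer_channel(x, y)] for x in range(W)]
          for y in range(H)]

def demosaic(x, y, c):
    # Naive reconstruction: copy the nearest recorded sample of channel c
    for dy in (0, -1, 1):
        for dx in (0, -1, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < W and 0 <= ny < H and bayer_channel(nx, ny) == c:
                return mosaic[ny][nx]
    return 0

recon = [[tuple(demosaic(x, y, c) for c in range(3)) for x in range(W)]
         for y in range(H)]
errors = sum(recon[y][x] != truth[y][x] for y in range(H) for x in range(W))
print(errors > 0)  # True: pixel-level colour detail is not recoverable
```

    Real demosaic algorithms do far better than this nearest-neighbour fill, but the underlying point stands: colour detail finer than the mosaic pitch has to be interpolated, not recorded.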

    You can simply go to dpreview.com and download their test scene from an old 4-Mpixel camera and a 12-Mpixel camera (ISO 100) and downsample to 1200 pixels; you will surely see differences if you look carefully. Now repeat with a 20-Mpixel and a 30-Mpixel camera down to 1200 and you'll have a hard time spotting any difference.

    If you are a geek :D you can download a very simple piece of code here http://www.peter-cockerell.net/Bayer/bayer.html and experiment with a bunch of images to see how it affects them. It is very old and the demosaic algorithm is very simple, but it gives you an idea...

    best
    Last edited by arash_hazeghi; 08-29-2013 at 01:26 AM. Reason: added link

  8. #8
    Lifetime Member Michael Gerald-Yamasaki

    Arash,

    Thank you for your well-reasoned responses. I see your point about the impact of Bayer on the effective spatial resolution of an image. Doesn't demosaicing, sharpening, and NR nearly return the spatial resolution to 2400 pixels?

    Thanks for the link.

    Cheers,

    -Michael-

  9. #9
    Super Moderator arash_hazeghi

    Hey Mike, no it doesn't; even the best demosaic algorithm can't recover all the information that was lost. If you compare a low-ISO image from a Foveon sensor (which isn't 100% perfect itself) to Bayer output at the pixel level, you can see how striking the difference is.

  10. #10
    Forum Participant (Canada)

    I agree with that Arash. The detail I get out of my little Sigma DP2 Merrill is amazing. Low ISO is the key as you suggest.

  11. #11
    Lifetime Member Michael Gerald-Yamasaki

    Arash, John,

    Greetings. I've given this a fair amount of thought and played with resizing images and such and have come to a different conclusion: The greatest decrease in IQ from cropping as seen from, say, a BPN sized jpeg is due to the decreased impact on sharpness & noise reduction from downsizing. That is, downsizing results in a sharper image with less noise (if appropriate downsizing algorithms are employed). So, if you start with a larger image to downsize to a standard size, the sharpening & nr impact will be greater.

    That's not to say the Bayer & all that isn't important, just that the resizing has an overriding impact on what is seen in screen size jpegs.

    So, I'm wondering what the relative merits are of uprezzing as an early step in pp large crops ;-)...

    Cheers,

    -Michael-

  12. #12
    BPN Viewer
    Join Date
    Jan 2008
    Location
    Corning, NY
    Posts
    2,507
    Threads
    208
    Thank You Posts

    Default

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Arash, John,

    Greetings. I've given this a fair amount of thought and played with resizing images and such and have come to a different conclusion: The greatest decrease in IQ from cropping as seen from, say, a BPN sized jpeg is due to the decreased impact on sharpness & noise reduction from downsizing. That is, downsizing results in a sharper image with less noise (if appropriate downsizing algorithms are employed). So, if you start with a larger image to downsize to a standard size, the sharpening & nr impact will be greater.

    That's not to say the Bayer & all that isn't important, just that the resizing has an overriding impact on what is seen in screen size jpegs.

    So, I'm wondering what the relative merits are of uprezzing as an early step in pp large crops ;-)...

    Cheers,

    -Michael-
    Creating pixels then throwing them away seems a lot different than downsizing from a native resolution that is larger than posted.

  13. #13
    Forum Participant (Canada)

    Quote Originally Posted by Michael Gerald-Yamasaki View Post
    Arash, John,

    Greetings. I've given this a fair amount of thought and played with resizing images and such and have come to a different conclusion: The greatest decrease in IQ from cropping as seen from, say, a BPN sized jpeg is due to the decreased impact on sharpness & noise reduction from downsizing. That is, downsizing results in a sharper image with less noise (if appropriate downsizing algorithms are employed). So, if you start with a larger image to downsize to a standard size, the sharpening & nr impact will be greater.

    That's not to say the Bayer & all that isn't important, just that the resizing has an overriding impact on what is seen in screen size jpegs.

    So, I'm wondering what the relative merits are of uprezzing as an early step in pp large crops ;-)...

    Cheers,

    -Michael-
    Hi Michael- I don't quite understand the second sentence. Could you clarify, please?

  14. #14
    Super Moderator arash_hazeghi

    Hi Mike, I don't understand what you are trying to say either.

  15. #15
    Lifetime Member Michael Gerald-Yamasaki

    Okay, let me try again.

    1. Downsizing improves IQ both by decreasing noise and by giving the appearance of sharper lines and edges.

    2. If the output is of a standard size (say, 1200x800), the JPEG from a larger original will display better IQ by virtue of having undergone greater downsizing (all other things being equal), as the greater downsizing results in a sharper, less noisy image.

    3. Since one is trying to judge whether "all other things are equal" for critiquing purposes, a cropped image is at a disadvantage because it is displayed with less downsizing. In other words, it's unknown whether all other things are equal, so the difference in downsizing levels gives an advantage to larger original images.

    Ed, the uprez idea is not to uprez then immediately turn around and throw the pixels away. I've been experimenting a bit with uprezzing then processing the uprezzed version (detailing, tonal work, etc.) before downsizing. It's interesting how this affects the way sharpening tools, for instance, are used for effect. NR doesn't work on the uprezzed version (uprezzing increases the size of the noise beyond the range that the NR software recognizes as noise), so NR needs to be performed before uprezzing.

    If the above isn't understandable yet, I can put some images together to show the effect.

    Cheers,

    -Michael-

  16. #16
    Super Moderator arash_hazeghi

    I still don't understand, are you asking the same question again?

    Up-rezing makes no sense at all. You are not adding any information, just adding rounding and interpolation error to the original, which makes things worse.
    Last edited by arash_hazeghi; 09-09-2013 at 10:22 PM.

  17. #17
    Lifetime Member Michael Gerald-Yamasaki

    Attached Images
    Quote Originally Posted by arash_hazeghi View Post
    I still don't understand, are you asking the same question again ?

    up-rezing makes no sense at all. you are not adding any information but just adding rounding and interpolation error to the original which makes things worse.
    Let's skip the up-rezing stuff because I'm just playing with it and haven't reached any conclusions about it.

    Here are three images of black lines on white, created in PS with a 15-pixel, 0%-hardness brush. The original (fat-bits view) is on the left; the original image is 2000x2000. The other two fat-bits images are from 500x500 JPEGs: the middle one was created from a 1000x1000 crop, and the right one from the entire 2000x2000 image.

    The width of the black-to-white gradient is about 7 pixels in the left image, about 3 in the middle image, and 2 in the right image.

    The cropped version (middle) has had less downsizing, so it is less sharp than the right version, which has had more downsizing.

    The right version is often considered to have the better IQ, coming from the "full frame". Here (IMO) the cropped version looks to be a better representation of the original data.
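    A 1-D version of the same experiment can be sketched in a few lines (my own toy, not the posted files; pairwise averaging stands in for the resize):

```python
def down2(row):
    # Reduce 2x by averaging neighbouring pixel pairs (a crude resize)
    return [(a + b) / 2 for a, b in zip(row[::2], row[1::2])]

def ramp_width(row, lo=10, hi=245):
    # Count pixels sitting inside the black-to-white transition
    return sum(lo < v < hi for v in row)

# A soft edge: flat black, a 7-step ramp, flat white
edge = [0] * 12 + [int(255 * i / 8) for i in range(1, 8)] + [255] * 13
once = down2(edge)
twice = down2(once)
print(ramp_width(edge), ramp_width(once), ramp_width(twice))  # 7 4 2
```

    The gradient narrows with each reduction, matching the fat-bits observation that the more-downsized version shows the harder edge.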

    Which of the middle or right image do you think has the better IQ?

    Cheers,

    -Michael-

  18. #18
    Super Moderator arash_hazeghi

    Attached Images
    Hi Mike, sorry, but it doesn't make sense to me... your final images are not the same size; they look like the same thing just interpolated digitally to different sizes. Your original question was about taking files from different-resolution cameras and reducing them to the same output size... totally different.

    you are probably using a bad interpolation method because the last one has hard and "pixelated" edges. Good down-sampling should make the edges smoother than the original, not harder.

    Correct down-sampling alone does not make an image sharper; it actually makes it softer, because you are averaging neighboring pixels (a low-pass filter), and that's why it reduces visible noise. It is the sharpening applied after down-sampling that makes the image look sharper.
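    The noise half of that statement is easy to demonstrate (a minimal sketch, assuming a 2x2 box average as a stand-in for a proper down-sampling filter):

```python
import random

random.seed(0)
N = 200
# A flat grey patch with +/-10 of uniform noise added
img = [[100 + random.uniform(-10, 10) for _ in range(N)] for _ in range(N)]

def box_down2(im):
    # Average each 2x2 block into one output pixel (a simple low-pass)
    return [[(im[y][x] + im[y][x + 1] + im[y + 1][x] + im[y + 1][x + 1]) / 4
             for x in range(0, len(im[0]), 2)] for y in range(0, len(im), 2)]

def noise_std(im):
    vals = [v for row in im for v in row]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

# Averaging four samples cuts uncorrelated noise roughly in half
print(noise_std(img) > noise_std(box_down2(img)))  # True
```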


    If you want a relevant example, here is one:

    above left: 1400-pixel-tall original

    above right: bicubic down-sample to 800 pixels.



    good luck
    Last edited by arash_hazeghi; 09-10-2013 at 01:43 AM. Reason: added example

  19. #19
    Lifetime Member Michael Gerald-Yamasaki

    Attached Images
    Quote Originally Posted by arash_hazeghi View Post
    Hi Mike, Sorry but it doesn't make sense to me ... your final images are not the same size they look like the same thing just interpolated digitally to different sizes. Your original question was taking different files from different resolution cameras and reducing them to the same output size...totally different.
    Arash, in order to understand the impact of downsizing variable-sized images to the same output size, I was removing variables for experimentation. By using the same source image, varying the crop size, then resizing to a standard size, one can compare the standard-sized output images against the original full image and try to understand the visual impact of different levels of reduction (the original question, I think).

    The reduction I used is just what LR does to create a smaller jpeg from the full sized 8-bit TIFF. Isn't that bicubic of sorts?

    Here are the two 500x500 output JPEGs from the original 2000x2000 full image. The left image corresponds to the middle fat-bits image posted earlier; the right image corresponds to the right fat-bits image. Remember that the original 2000x2000 image was drawn with 15-pixel-wide, 0%-hardness brush lines: very soft lines.

    I find the lines in the right image quite a bit sharper than in the cropped version, though from looking at the fat bits, the left image is a closer representation of the original.

    Are there flaws in my experimental design or conclusions?

    Not sure how to insert your images into this discussion without spending a lot of discussion on all the different variables, but a great image nonetheless. I love the eyes.

    Cheers,

    -Michael-

  20. #20
    Super Moderator arash_hazeghi

    Hi Mike,

    No, it still doesn't make sense to me :) , I don't understand what variables you are talking about. I am not sure you can say anything from the funny example; you made the image smaller and the lines look thinner. The lines were too blurry to begin with; maybe you wanted to simulate an out-of-focus photo (?)


    Anyway, as mentioned before, you can just download a real photo from the web (e.g. dpreview full-size samples), down-sample it in Photoshop, and see/conclude for yourself.


    good luck

    Arash
    Last edited by arash_hazeghi; 09-10-2013 at 10:33 AM.

  21. #21
    Lifetime Member Michael Gerald-Yamasaki

    Attached Images
    Arash,

    Sorry, this doesn't seem to be getting anywhere. One more try... same deal: fat-bits view, both reduced to the same size, cropped on the left, full on the right. The impact on line edges is about the same as in the other set of images.

    If no one else is listening (& has a comment), I guess we can call it a day.

    Cheers,

    -Michael
    Last edited by Michael Gerald-Yamasaki; 09-10-2013 at 11:31 AM. Reason: not finished

  22. #22
    Super Moderator arash_hazeghi

    Hey Mike, I think your confusion comes from the fact that you perceive things as sharper when they become smaller, e.g. lines that are several pixels wide will be interpolated into a single-pixel line with hard edges, but that is actually throwing out detail, not gaining sharpness.

    Especially if you pick a blurry line like your examples or something like that (soft photo, OOF, etc.): when you shrink it, the "blur" around the pixels will disappear too, but that's because your original was bad to begin with.

    look at the example I provided. if it doesn't make sense to you I don't have much more to add.


    good luck
    Last edited by arash_hazeghi; 09-10-2013 at 08:36 PM. Reason: system crash

  23. #23
    Lifetime Member Michael Gerald-Yamasaki

    Arash,

    You make a good point about detail. Two things happen on downsizing:
    - loss of detail
    - harder edges (if you like this name better than sharpening).

    They work against each other depending on content:
    - sharp fine-line content doesn't gain harder edges, it just loses detail.
    - edges with gradient dropoff (blurry lines, if you will) gain a harder-edged appearance (what I called appearing sharper).

    An already-cropped image downsized to a common small JPEG (a smaller reduction than the full frame of the same image) should look better because it doesn't lose as much detail. This is frequently not the case; I think that is indicative of a softer original.

    What this means to me is:
    - cropping from full frame in and of itself is not as bad for IQ as I previously thought, but it does make softness more apparent in downsized images
    - If one's downsized image is getting worse in apparent IQ, that means your original has sharp fine line detail (and that's a good thing!).

    Hope your system is back to normal.

    Cheers,

    -Michael-

  24. #24
    Forum Participant
    Join Date
    Feb 2008
    Posts
    274
    Threads
    71
    Thank You Posts

    Default Bayer Pattern Quibbling

    Minor technical point. Bayer sensors have a 2x2 pattern, not 3x3. There are two green pixels on a diagonal, then a red and a blue on the other diagonal to finish out a square.

    Bill

    Quote Originally Posted by arash_hazeghi View Post
    Hi Mike,

    Since Bayer sensors are a 3X3 pattern I usually prefer to have an original that is 9X larger (so information from a full unit cell is averaged into one output pixel ) than final size i.e. about 9 Mpixels for 1200 pixel wide output which is 45% crop on my 1DX.

  25. #25
    BPN Viewer
    Join Date
    Dec 2013
    Location
    Barrow, Alaska
    Posts
    37
    Thank You Posts

    Default

    Quote Originally Posted by BillTyler View Post
    Minor technical point. Bayer sensors have a 2x2 pattern, not 3x3. There are two green pixels on a diagonal, then a red and a blue on the other diagonal to finish out a square.

    Bill

    Quote Originally Posted by arash_hazeghi View Post
    Hi Mike,

    Since Bayer sensors are a 3X3 pattern I usually prefer to have an original that is 9X larger (so information from a full unit cell is averaged into one output pixel ) than final size i.e. about 9 Mpixels for 1200 pixel wide output which is 45% crop on my 1DX.
    This thread is a bit old now, but a little more info on this might be worth posting for future readers. All of the above is "correct".

    The sensor itself is a 2x2 matrix. But the software that interpolates the data to produce an RGB image will rarely use a 2x2 matrix (too many artifacts) and will use a 3x3 matrix at minimum. The 3x3 will probably be used only in areas where fine detail is detected; otherwise a 4x4 or even 5x5 matrix might be used.

    The significance is that with a 3x3 matrix it takes a width of 6 pixels to fully make a sharp-edged tonal transition, and obviously more if the matrix is larger. The effect is that a single tonal transition from black to white is necessarily no sharper than 6 pixels across. That is specifically why it is always said that sharpening is necessary for images produced with a Bayer filter array.

    It also explains why different resampling algorithms do better for different purposes. The only one mentioned in this thread was bicubic, which is relatively poor for downsampling of generic photography. Software that offers a "Lanczos" filter is significantly better. For upsizing, a Mitchell filter is a good choice. There are other filters, and for any given image they might produce better results. (The problem with bicubic is "ringing", which essentially introduces a non-optimal form of sharpening.)
    Last edited by Floyd Davidson; 12-19-2013 at 03:59 AM.
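    For reference, the Lanczos filter mentioned above has a standard closed form; a minimal sketch (the textbook definition, not code from any product named in this thread):

```python
import math

def lanczos(x, a=3):
    # Lanczos-a kernel: a windowed sinc, 1 at the centre, zero at other
    # integer offsets, with small negative lobes in between. The negative
    # lobes are what help preserve edge contrast when resampling.
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos(0), abs(lanczos(1)) < 1e-9, lanczos(1.5) < 0)  # 1.0 True True
```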

  26. #26
    Forum Participant (Canada)

    Interesting information, Floyd. I did not realise that raw processors dynamically sample the sensor data depending on the detail detected. Anyway, this is what I took from your comments; if I'm off-track please let me know.
