
View Full Version : Femto-photography



John Chardine
10-03-2012, 09:36 AM
I saw this link to a captivating TED talk on Naturescapes and thought I would post here in case some missed it.

http://www.youtube.com/watch?v=SoHeWgLvlXI&feature=related

Please take time to watch this incredible presentation.

John Chardine
10-03-2012, 11:14 AM
A little off-topic, but for those interested in the art of presentation, Dr. Raskar did a masterful job here: compelling graphics, no horrible PowerPoint bullet lists, maintained eye contact with the audience, and treated the whole affair as a conversation. TED talks are typically excellent in this regard. These ideas and more are laid out in Garr Reynolds's excellent website Presentation Zen: http://www.presentationzen.com (and elsewhere).

Jon Rista
10-03-2012, 03:55 PM
I love the concept behind femtosecond photography. I read a paper by Eric Fossum on the concept of a "Gigapixel Digital Film Sensor" (http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf ) that used repeated, short exposure pulses additively combined to produce very high resolution photographs. I've wondered if the same concept, additively stacking very short moment-exposures taken over the chosen shutter time, could be used to produce photographs of near-infinite dynamic range, to greatly reduce the visible impact of photon shot noise, etc.
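The additive-stacking idea can be sketched with a toy simulation. Everything below is invented for illustration (the full-well size, photon rates, and pulse count are assumptions, not figures from Fossum's paper): a single long exposure clips its brightest pixels at the full-well limit, while many short sub-exposures summed into a wider accumulator never clip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: per-pixel photon arrival rates spanning a range
# wider than a single exposure can hold.
rates = np.array([50, 500, 5_000, 50_000, 500_000], dtype=float)  # photons/s

FULL_WELL = 10_000        # electrons a pixel can hold before clipping
T_TOTAL = 1.0             # chosen shutter time, seconds
N_PULSES = 1_000          # number of short sub-exposures
t_sub = T_TOTAL / N_PULSES

# Single long exposure: the bright pixels saturate and lose information.
single = np.minimum(rng.poisson(rates * T_TOTAL), FULL_WELL)

# Additive stacking: each sub-exposure stays far below full well, so the
# running sum (held in a wider accumulator) never clips.
stacked = np.zeros_like(rates)
for _ in range(N_PULSES):
    stacked += np.minimum(rng.poisson(rates * t_sub), FULL_WELL)

print("single :", single)    # brightest pixels pinned at FULL_WELL
print("stacked:", stacked)   # tracks rates * T_TOTAL across the whole range
```

Note that stacking extends dynamic range but cannot eliminate shot noise: the summed counts are still Poisson-distributed, just never clipped.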

I think the kind of technology employed by Dr. Raskar and his team, and outlined in Fossum's Gigapixel DFS, has so many awesome implications that it could radically change the nature of photography (much as digital sensors, high dynamic range, and high ISO have done recently).

Roger Clark
10-04-2012, 08:00 AM
Hi John,

Great presentation and interesting topic. I certainly see applications in science. However, using the technique to "see" around corners in a real-world situation (like the example presented of driving a car) has a practical limitation that wasn't mentioned: to collect enough photons in a short time from moving subjects, the laser pulses would have to be so intense that they would vaporize the reflection point. Even if the intensity were below the vaporization threshold, it would still be bright enough to be a danger to people, potentially blinding them.
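Roger's intensity worry can be illustrated with a back-of-envelope photon budget. Every number below (wavelength, bounce distance, aperture size, required photon count, and the simple two-bounce Lambertian loss model) is an assumption chosen only to show the orders of magnitude involved, not a property of Raskar's actual system:

```python
import math

# Physical constants and an assumed green laser.
H_PLANCK, C_LIGHT = 6.626e-34, 3.0e8
WAVELENGTH = 532e-9
e_photon = H_PLANCK * C_LIGHT / WAVELENGTH   # ~3.7e-19 J per photon

photons_needed = 1e6                  # photons the detector must collect (assumed)
leg_length = 5.0                      # metres per bounce leg (assumed)
aperture_area = math.pi * 0.01 ** 2   # 2 cm diameter collection lens (assumed)

# A diffuse bounce scatters into roughly a hemisphere (2*pi steradians);
# the fraction headed toward the next "aperture" is about area / (2*pi*r^2).
frac_per_bounce = aperture_area / (2 * math.pi * leg_length ** 2)

# Seeing around a corner costs (at least) two diffuse bounces.
photons_emitted = photons_needed / frac_per_bounce ** 2
pulse_energy = photons_emitted * e_photon

print(f"photons emitted : {photons_emitted:.1e}")
print(f"pulse energy    : {pulse_energy:.2f} J")
# Even ~0.1 J per pulse is many orders of magnitude above the
# microjoule-scale exposures considered eye-safe for visible lasers.
```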

This reminds me of an article I read in Applied Optics many years ago. The military described a field test of an active laser-scanning system for remotely sensing composition. They flew the system in an airplane; the laser did a raster scan that vaporized the surface, and a spectrometer measured the emission lines from the vaporized target. The problem was that it was a danger to people on the ground, not to mention the burn marks and the risk of setting something on fire!

Jon, regarding the gigapixel sensor, the author describes a digital sensor that mimics film. He introduces the concept of sub-diffraction imaging, but doesn't actually say how one would circumvent the diffraction limit. He compared it to film, but film was always subject to the same diffraction limits.

Roger

Jon Rista
10-04-2012, 02:55 PM
@Roger: Regarding Fossum's DFS, I was never too sure about the sub-diffraction-limit bit either. I think the idea stems from the dynamic nature of his "grains", where each 'jot' has the potential to fragment a large grain into smaller grains... but if you are experiencing severe diffraction softening, I'm not sure how that helps. Perhaps we're thinking of the problem in the inverse... at high diffraction you would have coarse grains, whereas at low diffraction (say a diffraction-limited f/2.8 lens) you could potentially have very small grains... smaller than today's average pixel size?
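As a toy model of the jot idea being discussed (a sketch of the general binary-photosite concept as I read it, not Fossum's actual design; all sizes and rates invented): each jot reports only whether it caught at least one photon during a very short field, many fields are summed, and the grain size is a post-capture choice.

```python
import numpy as np

rng = np.random.default_rng(1)

JOTS = 64                  # jots along one row (toy 1-D sensor)
FIELDS = 2_000             # short fields summed per exposure
flux = np.linspace(0.02, 0.5, JOTS)   # mean photons per jot per field

# Binary readout: Poisson photon arrival, thresholded at >= 1 photon.
hits = np.zeros(JOTS)
for _ in range(FIELDS):
    hits += (rng.poisson(flux) >= 1)

# The hit probability follows P(hit) = 1 - exp(-flux), a D-log-H-like
# response; invert it to estimate the underlying flux.
est_flux = -np.log(1.0 - hits / FIELDS)

# "Grain" size chosen after capture: bin 8 jots per grain, trading
# spatial resolution for lower noise.
coarse = est_flux.reshape(-1, 8).mean(axis=1)

print(np.round(est_flux[:4], 3))   # tracks the low end of `flux`
```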

I found that article several years ago, and the idea of an exposure generated by successive "short" exposures that add up to the entire shutter time seemed intriguing from a dynamic range standpoint. The human eye effectively works that way, on a "refresh rate" of something like 500 exposures per second that our brain processes and combines to produce a high resolution (perhaps the biological equivalent of super resolution), high dynamic range image. I've always wondered if we could do something similar with digital cameras, and it seems Dr. Raskar's team has done something similar with their femto-photography system (imaging their scene millions of times in the presence of only a few photons and combining the results to produce a visible image of useful dynamic range).

Michael Gerald-Yamasaki
10-05-2012, 06:44 PM
John,

Thanks for posting a link to Ramesh Raskar's TED talk. Very interesting. I saw a paper of his at Siggraph '98 on the office of the future, an immersive environment (that hasn't arrived yet ;-). He also wrote a paper on motion deblurring using a fluttered shutter that you might find interesting (search Google Scholar for Ramesh Raskar and fluttered shutter); it is also the subject of a patent.

Jon, Eric Fossum also has a patent for a Wide Dynamic Range Optical Sensor. The sensor has the ability to record the signal over two exposures ... I gather something like a two-shot HDR in CMOS. It would be interesting if a sensor could be recorded "continuously" (no idea what sensor hardware such a device would require) and an exposure constructed from collections of sufficiently small timesteps. "Exposure" could then be applied after the fact instead of selected beforehand (and could differ over the image if desired). Very high data bandwidth & storage required ;-).
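That after-the-fact exposure could be sketched like this, assuming a synthetic stream of 1 ms timestep frames (every name, size, and rate below is hypothetical): record the stream, then synthesize any exposure window afterwards, globally or per pixel.

```python
import numpy as np

rng = np.random.default_rng(2)

H, W = 4, 4
N_STEPS = 1_000
T_STEP = 1e-3                               # 1 ms per timestep frame
scene = rng.uniform(100, 10_000, (H, W))    # photons/s per pixel (synthetic)

# The "continuously recorded" stream: one Poisson frame per timestep.
stream = rng.poisson(scene * T_STEP, (N_STEPS, H, W))

def expose(stream, t_start, t_stop):
    """Synthesize an exposure over [t_start, t_stop) after capture."""
    i0 = int(round(t_start / T_STEP))
    i1 = int(round(t_stop / T_STEP))
    return stream[i0:i1].sum(axis=0)

short = expose(stream, 0.0, 0.01)    # a 1/100 s "exposure"
long_ = expose(stream, 0.0, 1.0)     # a full-second "exposure"

# Per-pixel exposure: give bright pixels a shorter integration time.
t_px = np.where(scene > 1_000, 0.05, 1.0)
per_pixel = np.array([[expose(stream, 0.0, t_px[i, j])[i, j]
                       for j in range(W)] for i in range(H)])
```

The bandwidth cost Michael flags is visible even here: 1000 frames stored per second of capture, before any exposure is ever chosen.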

Cheers,

-Michael-

Roger Clark
10-06-2012, 08:59 AM
Michael Gerald-Yamasaki wrote:
Jon, Eric Fossum also has a patent for a Wide Dynamic Range Optical Sensor. The sensor has the ability to record the signal over two exposures ... I gather something like a two-shot HDR in CMOS. It would be interesting if a sensor could be "continuously" recorded (no idea the sensor hardware required for such a device) and exposure constructed from collections of sufficiently small timesteps. "Exposure" could be applied after the fact instead of selected before hand (and could differ over the image if desired). Very high data bandwidth & storage required ;-).


Hi Michael,
Your idea certainly has merit and could in theory work for small sensors (small pixel counts). The problem, however, is readout time. The fastest cameras that do 14-bit output run at around 200 megapixels per second. That means for a 10-megapixel camera it takes about 1/20 second to read out the sensor, so multiple exposures are limited by the time needed to read the array. That would be a real problem for moving subjects. One can read out faster, but precision suffers, so faster systems tend to be only 8-bit. In theory, one could put a readout channel and A/D converter on every column (the 1D4 has only 4 readout channels, so each channel reads 4 megapixels); then readout rates could be increased dramatically and your idea could probably be implemented (perhaps at the cost of the power to run all those electronics). If I remember correctly, Sony said they were moving to a channel-per-column readout.
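The readout arithmetic, written out (the rates here are the ballpark figures from the post; the 4000-column count and per-channel rate in the second half are assumptions for illustration, not any camera's spec):

```python
# Whole-sensor readout through a shared pipeline.
PIXELS = 10e6            # 10-megapixel sensor
RATE_14BIT = 200e6       # ~200 Mpixel/s total throughput at 14-bit precision

t_frame = PIXELS / RATE_14BIT
print(f"full-sensor readout: {t_frame * 1000:.0f} ms")   # 50 ms, i.e. 1/20 s

# With a readout channel + ADC per column, all columns drain in parallel
# and the frame time is set by the rows in one column, not the full array.
COLUMNS = 4000            # assumed column count
ROWS = PIXELS / COLUMNS   # 2500 rows per column
PER_CHANNEL_RATE = 50e6   # assumed pixels/s each column ADC can digitize

t_parallel = ROWS / PER_CHANNEL_RATE
print(f"per-column readout : {t_parallel * 1e6:.0f} us")  # 50 us per frame
```

At 50 µs per frame, thousands of sub-exposures per second become plausible, which is why column-parallel readout makes Michael's scheme look feasible (power budget aside).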

Roger