Stuff I've been working on

Apr 29, 2013 09:53

You know how when you look at something that's not in focus, it's blurry? There's lost information there -- you can't resolve things very well when they're not in focus. But there are ways to correct for that. If you know how your sample blurs, then you can look at a blurred image and do a pretty good job of reconstructing what the nonblurred image ( Read more... )
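For the curious, the core of that correction can be sketched in a few lines. This is a toy Wiener deconvolution in Python/numpy, not the actual pipeline (which uses a measured 3D PSF and fancier iterative algorithms); it just shows the "divide out the blur in frequency space" idea:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """A toy stand-in for a measured point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=0.01):
    """Recover an estimate of the unblurred image, given the PSF.

    k is a regularization constant: naive division by the PSF's
    spectrum blows up wherever the blur destroyed information, so
    we damp those frequencies instead of amplifying noise.
    """
    # Pad the PSF to image size and center it on the origin so the
    # FFT-based convolution lines up with the image.
    padded = np.zeros_like(blurred)
    s = psf.shape[0]
    padded[:s, :s] = psf
    padded = np.roll(padded, (-(s // 2), -(s // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: invert the blur where H is strong, damp where weak.
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Blur a test image with a known PSF and this gets you most of the way back; the better your PSF measurement, the better the reconstruction.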

Leave a comment

Comments 9

rose_garden April 30 2013, 06:41:54 UTC
I have another friend who is working on holography like this (in Vinny Manoharan's group at Harvard). It's neat stuff.



derakonsdad April 30 2013, 14:13:01 UTC
Very neat! What does the time variable in the videos represent (i.e., why don't you get a static image)?


derakon April 30 2013, 14:14:41 UTC
The view is being rotated about the Z axis of the original volume to show how the PSF is not radially symmetric.


derakonsdad April 30 2013, 22:38:33 UTC
If the bead is a point (or sphere), why is the PSF not radially symmetric? Is this an artifact?


derakon April 30 2013, 22:45:53 UTC
Correct -- the asymmetry is induced by imperfections in the optics, mismatches in the refractive indices of the different media the light travels through, etc. All systematic sources of error that can be corrected for, so long as you have an accurate measurement!



gwalla April 30 2013, 15:53:13 UTC
I wonder if images at various focal lengths could be combined to form a static 3D image from a single conventional camera.


derakon April 30 2013, 16:14:58 UTC
You can do something similar in microscopy -- instead of taking multiple images at different focal lengths, you take a single image and split the light out into different depths of focus, giving you an entire 3D volume with a single exposure of your sample. It's high-speed and less damaging to the sample (most samples don't like being fried by high-powered lasers). Unfortunately the optics involved lose about 50% of your light (the beam splitters needed to redirect the light to different parts of the camera sensor are not perfectly efficient), but it's still better than taking 9 separate images.

But it sounds like you're saying "put a camera on a tripod, take images at multiple focal lengths, reconstruct a 3D image from the result". I think the problem you'll run into there is that you can't see around things using that approach -- you need multiple viewpoints to get what we generally think of as a 3D image. But you could probably get a depth map, i.e. knowing how far a given part of the image is from the camera.
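That depth-map idea is the classic "shape from focus" trick: for each pixel, find which plane in the focal stack is sharpest. A minimal sketch (generic technique, not the microscopy system above -- a squared Laplacian stands in for a real focus measure):

```python
import numpy as np

def depth_from_focus(stack):
    """stack: (num_planes, H, W) images focused at increasing depths.
    Returns the per-pixel index of the sharpest plane -- a coarse depth map."""
    # Discrete Laplacian as a cheap focus measure: in-focus regions have
    # strong local contrast, out-of-focus regions are smooth.
    lap = (-4 * stack
           + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
           + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2))
    sharpness = lap ** 2
    return np.argmax(sharpness, axis=0)
```

Real implementations smooth the focus measure over a neighborhood and interpolate between planes for sub-plane depth, but the argmax captures the idea.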


derakon April 30 2013, 20:41:54 UTC
Note that the microscopy technique I described above works mostly because the samples are so thin that you can see through them. Otherwise you'd have the same issue as in conventional photography where objects in the foreground occlude objects in the background.


gwalla May 1 2013, 02:07:40 UTC
I don't mean a complete scene you could fly around, exactly. More like enough to generate a right-left stereoscopic image.
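That's doable once you have the depth map: shift each pixel horizontally by a disparity proportional to its depth to fake the two viewpoints (depth-image-based rendering). A toy sketch, assuming a grayscale image and a depth map normalized to 0..1:

```python
import numpy as np

def stereo_pair(image, depth, max_disparity=4):
    """Synthesize left/right views from one image plus a depth map.

    Pixels are processed far-to-near per row so that nearer pixels
    overwrite farther ones where they overlap.
    """
    h, w = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for r in range(h):
        order = np.argsort(depth[r])  # far first, near last
        for c in order:
            d = int(round(depth[r, c] * max_disparity))
            if 0 <= c + d < w:
                left[r, c + d] = image[r, c]
            if 0 <= c - d < w:
                right[r, c - d] = image[r, c]
    return left, right
```

Note the holes left behind where a near pixel shifts away -- that's the occlusion problem from the previous comment showing up again: the single camera never saw the background the new viewpoint should reveal.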


