Stuff I've been working on

Apr 29, 2013 09:53

You know how when you look at something that's not in focus, it's blurry? There's lost information there -- you can't resolve things very well when they're not in focus. But there are ways to correct for that. If you know how your optical system blurs things, then you can look at a blurred image and do a pretty good job of reconstructing what the unblurred image should have been. This is called deconvolution, and we use it in microscopy all the time to improve our resolution -- blurring is a huge factor when you're dealing with very high-magnification lenses like the ones we use.
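
For the curious, here's a rough sketch of the idea in Python -- plain Wiener deconvolution, which is not necessarily what our software actually does; the function name and the snr parameter are just illustrative:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Estimate the unblurred image from a blurred image and its PSF.

    Illustrative sketch, not production code. In frequency space,
    blurring is just multiplication by the PSF's transfer function,
    so we divide it back out -- damped by a noise term so we don't
    amplify frequencies the PSF barely transmits.
    """
    # Embed the PSF in an image-sized array, centered on the origin,
    # so its FFT is a proper transfer function.
    kernel = np.zeros_like(blurred, dtype=float)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))

    H = np.fft.fft2(kernel)   # how the optics scale each frequency
    G = np.fft.fft2(blurred)  # spectrum of the blurred image
    F = G * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(F))
```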

What we do is take images of fluorescent "beads", which are about as close to point light sources as we can reasonably get; each bead is under 100nm in size. As we scan the microscope in and out of focus, we get a pattern of blurring that is known as the Point-Spread Function, or PSF. For example, here's a bead that's in focus:

[image: a fluorescent bead, in focus]

As we move out of focus, it blurs:

[image: the same bead, slightly out of focus]

And as we get even further out of focus, patterns of diffraction start showing up:

[image: the bead far out of focus, showing diffraction rings]

These are all top-down views, like what the microscope sees. But if you took a bunch of these images and compiled them into a "stack", a 3D volume of images laid on top of each other, then you could view it from the side (and indeed, rotate your perspective about the volume), creating something like this:

[image: side-on view of a 3D stack of bead images]

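In numpy terms, by the way, such a stack really is just a 3D array, and a side view is just a slice along a different axis. A toy sketch, with made-up sizes:

```python
import numpy as np

# Pretend these are 2D images taken at 64 successive focus positions.
slices = [np.random.rand(256, 256) for _ in range(64)]

# Pile them up along a new z axis: the stack has shape (z, y, x).
stack = np.stack(slices, axis=0)

# A side view is a cut through the XZ plane -- here, through the
# middle row of pixels.
side_view = stack[:, stack.shape[1] // 2, :]

# A maximum-intensity projection along y is closer to what the
# rotatable rendering above shows.
projection = stack.max(axis=1)
```
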
In that side view you can see the in-focus part, where there's a narrow, bright band; above and below that, the band gets broader and dimmer. However, this is pretty noisy data, and it wouldn't be very useful for our deconvolution efforts. What biologists around here commonly do is "radially average" the PSF. That is, they assume the PSF is basically symmetrical about its axis, so they take a PSF image and average together all of the lines radiating out from the center of the image, creating a perfectly circular PSF.
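
Radial averaging is easy to sketch in code: bin the pixels by their distance from the center, average within each bin, then sweep the resulting 1D profile back around. Something like this, applied slice by slice (the function name is mine):

```python
import numpy as np

def radially_average(image, center=None):
    """Return a circularly symmetric version of `image`: every pixel
    is replaced by the mean of all pixels at the same radius."""
    if center is None:
        center = ((image.shape[0] - 1) / 2.0, (image.shape[1] - 1) / 2.0)
    y, x = np.indices(image.shape)
    # Distance of each pixel from the center, binned to integer radii.
    radius = np.hypot(y - center[0], x - center[1]).astype(int)
    # Average all the pixels within each radius bin...
    totals = np.bincount(radius.ravel(), weights=image.ravel())
    counts = np.bincount(radius.ravel())
    profile = totals / np.maximum(counts, 1)
    # ...then paint the 1D profile back onto the 2D grid.
    return profile[radius]
```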

Of course, you can see that the PSF isn't perfectly symmetrical. And the less well your PSF matches your true "pattern of blurring", the worse your deconvolution results will be. However, the gain from the reduced noise generally makes up for the loss in accuracy of the PSF.

What I've been working on lately is a system to average together multiple PSFs. This isn't trivial, since the PSFs need to be precisely aligned before being averaged, and different beads tend to have different intensities, which requires us to use a weighted average. I've written code that will (steps 3-5 are sketched after the list):

1) Automatically detect isolated beads,
2) Collect "stacks" of images of those beads,
3) Align all the stacks together,
4) Normalize their intensities, and
5) Average the stacks together.
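
Here's roughly what the last three steps look like -- a simplified sketch of my own, not the actual code; real alignment wants subpixel registration, and all of the names are illustrative:

```python
import numpy as np

def alignment_shift(reference, stack):
    """Find the integer (z, y, x) shift that best aligns `stack`
    to `reference`, via FFT-based cross-correlation."""
    corr = np.fft.ifftn(np.fft.fftn(reference) *
                        np.conj(np.fft.fftn(stack))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

def average_psfs(stacks):
    """Align, normalize, and average bead stacks (steps 3-5).

    Each element of `stacks` is a same-shaped (z, y, x) array cropped
    around one isolated bead (the output of steps 1 and 2).
    """
    reference = stacks[0].astype(float)
    total = np.zeros_like(reference)
    total_weight = 0.0
    for stack in stacks:
        # Step 3: shift this stack into register with the reference.
        shift = alignment_shift(reference, stack)
        aligned = np.roll(stack.astype(float), shift, axis=(0, 1, 2))
        # Steps 4 and 5: normalizing each stack by its total intensity
        # and then weighting by that same intensity cancels out, so
        # brighter beads (better signal) simply count for more.
        weight = aligned.sum()
        total += aligned      # equals weight * (aligned / weight)
        total_weight += weight
    return total / total_weight
```

In practice you'd also want subpixel shifts and some way to reject beads that turn out not to be isolated, but the weighted-average bookkeeping is the same.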

And now I have some results! Check this out:

[image: side-on view of the averaged PSF]

As you can see, the image is much less noisy than the raw data we saw earlier, and the diffraction patterns show up very strongly. Moreover, the asymmetry in the PSF is quite visible.

Presumably this will lead to better deconvolution results; we haven't quite gotten that far yet. But I'm enjoying having results I can show to people, even if it requires some explaining first!