So another algorithm I used recently is "filtered backprojection". It's used in a lot of applications where you have many projected views of an object from many detectors and one or more projectors, -and- you know the geometry of all the projectors and detectors. Stuff like medical CT scan imaging. Each projection is a data set of what each detector saw for that view. The algorithm reconstructs a single image by summing each projection's contribution at every point of the reconstructed image.
It's kind of like looking at all the shadows cast by a moving flashlight on an object, then adding up all the shadows to recover the original object. Kind of like magic, too.
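The idea above can be sketched in a few dozen lines of NumPy for the simplest case, parallel-beam geometry: filter each projection with a ramp (|f|) filter, then "smear" it back across the image along its viewing angle and sum. This is a minimal sketch, not production CT code; the function names, the toy disk phantom, and the choice of an unwindowed ramp filter are all my own assumptions for illustration.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp (|f|) filter to each row (one projection) in the frequency domain."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # frequencies in cycles per detector sample
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def filtered_backprojection(sinogram, angles):
    """Reconstruct an n x n image from a parallel-beam sinogram (one row per angle)."""
    n = sinogram.shape[1]
    coords = np.arange(n) - n / 2          # pixel coordinates centered on the image
    xx, yy = np.meshgrid(coords, coords)
    filtered = ramp_filter(sinogram)
    image = np.zeros((n, n))
    for proj, theta in zip(filtered, angles):
        # Detector position each pixel maps to for this view; interpolate the
        # filtered projection there and accumulate (the "smearing back" step).
        t = xx * np.cos(theta) + yy * np.sin(theta) + n / 2
        image += np.interp(t, np.arange(n), proj, left=0.0, right=0.0)
    return image * np.pi / len(angles)     # dtheta weight for the angular sum

# Toy run: the parallel-beam projection of a centered disk of radius R and
# density 1 is 2*sqrt(R^2 - t^2) at every angle, so the sinogram can be built
# analytically instead of forward-projecting an image.
n, R = 128, 12
t = np.arange(n) - n / 2
profile = 2.0 * np.sqrt(np.maximum(R**2 - t**2, 0.0))
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
sinogram = np.tile(profile, (len(angles), 1))  # identical view at every angle

recon = filtered_backprojection(sinogram, angles)
# recon should now be roughly 1.0 inside the disk and roughly 0.0 outside.
```

Note the cost structure this makes obvious: the inner loop touches every pixel once per projection, so the work scales as (number of angles) x (pixels), which is why large 2D images take seconds and 3D volumes take far longer.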
It takes some time. To reconstruct a 1024x1024 image, you've got over a million pixels, and each projection's data set has to be considered for every pixel. I'm seeing maybe 30 seconds for a 1024x1024 image. 3D images are even worse. Isn't that amazing? Our brains reconstruct a very detailed, fine-grained 3D color image seemingly effortlessly, in milliseconds. We humans are so clever, and yet sometimes it seems to me that we are barely at the level of finger-painting two-year-old children. A fine level, to be sure, and yet such a long, long way to go.