This is one of my bugaboos, and one of the reasons I can't watch shows like CSI. (NCIS doesn't count. I have no good explanation why, but I really do enjoy the show.) As a professional graphic designer who knows what this software is capable of doing, I find that most cop/espionage/thriller movies that introduce Photoshop or some variation thereof just plain hurt to watch. The thing is this: with a low resolution image, there are limits on enhancement. I hate using a word like, "enhancement," too, because it's so vague. Why? LOLcat images are enhanced by dropping text on top of the image. Lots and lots of images in advertising are enhanced with special effects, or retouched to make them look ideal.
Video yanked from BoingBoing

Generally, most of these, "enhancements," in these films and TV shows fall under the heading of sharpening an image. The goofball stuff like finding a reflection and enhancing it, or rotating a JPEG so you can see someone's face, is so damned silly I can't even begin to make fun of it. The problem with enhancing a reflection, for instance, is that CCD chips (and film, back in the day, I guess) don't capture that much information, at least not relative to what the human eye is capable of seeing. There is a point, with a digital photo, where the only information left is a solid color. Go into those areas, down to the pixel level, and look: you will see fields of a single color. In a spot where only a single color exists, there is no other information. If you can see a fuzzy face reflected in a pane of glass looking out onto a nighttime sky, there is a limit to the detail that was there to start with.

Sharpening an image works basically like this: you make the pixels on one side of an edge more similar to one another, while making the adjacent group of pixels on the other side more similar in the same way. This increases contrast between the two areas of color, but it has another effect: it destroys information. That's right, sharpening an image doesn't add information, it destroys it by making groups of pixels more alike. The most information an image can have, or will ever have, is when it is first imported onto your computer. Everything you do to it after that point degrades the image, and with formats like JPEG, even just opening and resaving it degrades it. I don't mean, "degrades," as some sort of lofty aesthetic principle, I mean, "degrades," as in the image loses information. JPEG is a, "lossy," format: it throws away image information every single time you resave the file. Saving to GIF throws information away too, by squeezing the image down to a limited palette of colors. Lossless formats like TIFF and PSD will at least preserve the pixels you have, but they can't bring back anything that was already lost.

Sharpening works, then, not because we're adding information, but because we're selectively removing it. We don't notice that the sharpened image has lost information, because the information we are looking for, the information we need in order to, "read," the image, is still present. We leave enough for the eye to do its magic trick. The same thing is true for blurring an image, or for color correcting it, or for retouching it.

Okay, so sharpening an image works by selectively throwing away information. So, looking at this from the cop show perspective, aren't we throwing away data that would distract us so we can focus on what we need and want to know? Sure. Yeah. Except that if the information you want to know/see isn't already there, it won't be there after you sharpen, run a fractal pattern, add noise, or do any of the other things you could actually, conceivably do to an image. If the image is really low resolution, you can't add information. If I can, on a TV screen during the show, count the number of pixels across a guy's head, then you're not going to be able to enhance that into a studio-portrait-quality image. You can try all day, if you want, but I can tell you how much success you will have. None. That's because the information isn't there.
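If you want to poke at this yourself, here's a quick sketch in Python using the Pillow library rather than Photoshop; the filenames are made up, so point it at any small, fuzzy JPEG. The, "enhancement," is a bog-standard unsharp mask, and every pixel it produces is just a weighted blend of pixels that were already in the file.

```python
from PIL import Image, ImageFilter

# Any small, fuzzy JPEG will do; "frame.jpg" is a stand-in name.
frame = Image.open("frame.jpg")

# "Enhance": an unsharp mask. It boosts contrast at edges by blending each
# pixel with its neighbors. It rearranges the information that is already
# there; it does not create any new detail.
sharpened = frame.filter(
    ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
)
sharpened.save("frame_enhanced.jpg", quality=85)

# JPEG is lossy, so every decode-and-re-encode cycle throws away a little
# more information on top of that. Copies of copies only get worse.
for _ in range(10):
    generation = Image.open("frame_enhanced.jpg")
    generation.load()  # read the pixels before overwriting the same file
    generation.save("frame_enhanced.jpg", quality=85)
```

Run it on any photo and flip between the before and after at 800% zoom: the edges get crunchier, and nothing that wasn't in the original ever appears.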
But what if I, "blow up," the image? Well, digitally, that means you increase the resolution, and that means one of two things happens:

• the image doesn't change on screen, but when you print it, its print dimensions shrink
• the image gets a lot bigger on screen, and a little (or a lot) fuzzier

This is because of a slight misunderstanding about what the word, "resolution," means in this context. "Resolution," is actually the relationship between the number of pixels across an image and the print dimensions of that image. 100 pixels per inch (ppi) is lower resolution than 300ppi. This means that a 300ppi image at 3"x5" has three times as many pixels in each direction as a 100ppi image at the same print size, nine times as many pixels overall. The 100ppi file is 300 pixels across by 500 pixels tall; the 300ppi file is 900 pixels across by 1500 pixels tall. With a higher resolution file, you have more pixels available to describe your image, which means that you can have more detail.

So, if I have an image that is 10" x 10" at 100ppi, and I increase the resolution to 300ppi, one of the aforementioned two things happens, if I know what I'm doing. (And I do.) First, I can leave the actual number of pixels in the image alone, and alter the print dimensions. This means that the image remains 1000 pixels wide by 1000 pixels tall, but is now 3.33" wide by 3.33" tall. I've made the image 300ppi, but the print dimensions have shrunk. Perfectly acceptable; to be honest, I do this all the time with photos given to me in a web-friendly format.

The other thing you can do when increasing the print resolution is to leave the print dimensions alone, and increase the number of pixels in the image. A 10" x 10" image at 100ppi is 1000 pixels by 1000 pixels. Increasing this image to 300ppi in this case results in an image that is 10" x 10" and 3000 pixels by 3000 pixels. Great, dandy, except for one thing: what is the computer supposed to do with the space you've just added? You just created a bunch of pixels, a bunch of information, that didn't exist when you started. You are essentially pushing the pixels apart and asking the computer to guess what information should go there.
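If it helps to see both options in code, here's a tiny sketch, again in Python with Pillow rather than Photoshop, though the arithmetic is identical. The filename and the starting size are assumptions: pretend "photo.jpg" is our 1000 pixel by 1000 pixel image, 10" x 10" at 100ppi.

```python
from PIL import Image

# Assume "photo.jpg" is 1000 x 1000 pixels, i.e. 10" x 10" at 100 ppi.
img = Image.open("photo.jpg")
width, height = img.size

# Option 1: keep every pixel, change only the print-resolution tag.
# Still 1000 x 1000 pixels, but now it prints at 3.33" x 3.33".
print(width / 300, "inches wide at 300 ppi")
img.save("photo_300ppi_smaller_print.jpg", dpi=(300, 300))

# Option 2: keep the 10" x 10" print size, which requires 3000 x 3000
# pixels. The eight million pixels that didn't exist a moment ago are
# interpolated guesses, not recovered detail.
upsampled = img.resize((width * 3, height * 3))
upsampled.save("photo_300ppi_resampled.jpg", dpi=(300, 300))
```

Neither branch adds detail: the first just changes how big the same pixels print, and the second asks the computer to invent pixels to fill the gap.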
Allow me to illustrate. Remember those old puzzles your Grandma always had when you were a kid? The sliding tile puzzles that you had to move around to get things in the proper order? Imagine one of those:
fig. 1a

A digital photograph (raster image) is a lot like this. Each tile in the puzzle is like a pixel in your raster image. When you increase the resolution, you are asking the computer to guess what the new pixels should look like. Like so:
fig. 1b

So, how should the computer go about filling in these pixels? There are several mathematical models used for different purposes. Nearest neighbor, bilinear, bicubic, &c are all used to fill in the blanks, but none of them generates actual information. They all basically look at the existing pixels and average the colors between them. (There's a quick sketch at the bottom of this post if you want to see exactly what that averaging looks like.) This won't help you reveal a killer's face, or make a license plate legible. In some cases, it will let you fudge a low resolution photograph in place of a high resolution one when you are working on a graphics project.

What it comes down to is that the math doesn't work out on these shows. Most of the time, what they're talking about is laughably outside the realm of what is possible. There are a bunch of really, really interesting things that can be done with Photoshop, including some miraculous image, "saves," but nothing like the casual crap that is so often shown in movies and TV shows. There are even some image sharpening programs that use fractals to create structured, "noise," to fool the eye into thinking that there are hard edges and extra useful information in the image. Unfortunately, even these have limitations, and they are good for creating something passable, not something specific. Also, they are better suited to inanimate objects than human faces, because humans are really, really good at detecting the presence of other humans. (Uncanny valley and all that.) But despite all this, despite what Photoshop can do, most of the stunts pulled with it and other image editors on cop shows are just flat outside the realm of possibility.
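And here, as promised, is that interpolation sketch. It blows a four-pixel grayscale, "photo," up to 6 x 6 with the three common resampling methods (Python and Pillow again; the numbers are toy data) and prints the results, so you can see that every new value is built from nothing but the four values we started with.

```python
import numpy as np
from PIL import Image

# Four pixels, four known values. That's all the "detail" there is.
tiny = np.array([[  0,  80],
                 [160, 240]], dtype=np.uint8)
img = Image.fromarray(tiny, mode="L")

for name, method in [("nearest", Image.Resampling.NEAREST),
                     ("bilinear", Image.Resampling.BILINEAR),
                     ("bicubic", Image.Resampling.BICUBIC)]:
    big = img.resize((6, 6), resample=method)
    print(name)
    print(np.asarray(big))  # every value is a copy or blend of the original four
```

Nearest neighbor just repeats the original pixels; bilinear and bicubic blend them, and bicubic overshoots a little at hard edges, which is why it can look crisper. None of them knows anything about faces or license plates.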