You might have heard whispers about a computer scientist posting a game-changing paper on the internet a few weeks ago. Vinay Deolalikar's attempt at a P != NP proof might net him a million dollars.
More likely, you've never heard about P and NP and have no idea what I'm talking about. If you're curious at all about one of the most fundamental problems in computer science, and why proving a simple statement is worth $1M to anybody, or just want to understand a bit about how CS people think about computers, here's a layperson's guide to P ?= NP.
How hard is a problem?
Let's say you have to look up a name in the phone book. You might start at the beginning of the book and thumb through the letters until you find the first letter of the name you're looking for, and repeat for the second letter amongst the names that start with the first, and so on.
"Let's see... Smith. [thumbs through to S] A, B, C, …. Q, R, S... okay, S. [now thumbs through S to Sm] Sa, Sb, … Sl, Sm... great. Sma... "
Or you might flip the book open to the middle and see if the name you’re looking for comes before or after the names on the open page, and focus on the first or second half of the book, respectively. You'd repeat this halving procedure, getting down to smaller and smaller sections of the phone book, until you’ve found the name you're looking for.
"Let's see... Smith... [opens book to Nelson] that's after Nelson... [open halfway between Nelson and the end, gets Talbot] before Talbot, so between Nelson and Talbot... [halfway between Nelson and Talbot: Quinn] after Quinn..."
Which way is easier? You may have an intuitive idea. Imagine for a minute that your job is to look up names in a phone book all day, as fast as possible. Now you care a lot more about which of these two methods, or algorithms, you should use. You'd probably want something better than a rough guess; you'd want a quantified understanding of difficulty.
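To make that concrete, here are the two methods side by side as a minimal Python sketch (the phone book is just a sorted list of names; the function names are mine, not from any library). Counting the steps each one takes is exactly the kind of quantification we're after:

```python
# A phone book is just a sorted list of names.
phonebook = ["Adams", "Baker", "Nelson", "Quinn", "Smith", "Talbot"]

def lookup_by_thumbing(book, name):
    """Method 1: thumb through from the front, one entry at a time."""
    for position, entry in enumerate(book):
        if entry == name:
            return position
    return None  # not listed

def lookup_by_halving(book, name):
    """Method 2: repeatedly halve the book, keeping the half that
    must contain the name."""
    low, high = 0, len(book) - 1
    while low <= high:
        middle = (low + high) // 2
        if book[middle] == name:
            return middle
        elif book[middle] < name:
            low = middle + 1   # name comes after this page
        else:
            high = middle - 1  # name comes before this page
    return None  # not listed

print(lookup_by_thumbing(phonebook, "Smith"))  # 4
print(lookup_by_halving(phonebook, "Smith"))   # 4
```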
There is a field of computer science called complexity theory that deals with classifying problems by difficulty. Classification by difficulty is very useful. For instance,
- If you have two proposed algorithms for solving the same problem (as with the phone book lookup above), you can determine which is more efficient, and thus which one to use.
- If you have a fixed amount of time to prime your thrusters before the space shuttle launches, or else it’ll blow up, you may want to prove that the priming algorithm will take no more than that fixed amount of time to run.
- If you are searching for an efficient solution to a difficult problem, you may be able to expose the problem as one that is simply too hard to solve by traditional means - one that could literally take longer than the age of the universe to solve, depending on the parameters. Knowing this could save you a lot of time fruitlessly looking for a solution.
What does difficulty mean when solving a problem? If you’re a human, you might think it’s a measure of hard work, intuition, and insight. Of these three, computers are really only good at hard work, so computer scientists measure difficulty in two primary ways, time and space.
Say you’re trying to solve a problem, and you're using a whiteboard for your step-by-step scratch work. Time represents the number of instances you write or erase something on the whiteboard as you work through your solution, and space represents the size of the whiteboard you’d need to fit in all of your scratch work (assuming that you could erase work you don’t need anymore along the way). Analogously, for a computer, time is the number of instructions it executes, and space is the amount of memory (RAM) it needs.
For this essay, we’ll only deal with time.[1]
How do we measure how much time a given algorithm takes? There are a couple of different ways to measure time: what's the fastest the algorithm could run? The average time it takes over different inputs? For simplicity, we'll focus on the worst-case time: the longest the algorithm could take to run. That way, no matter what parameters we are given, our thrusters will be primed before the space shuttle launches.
And then there's the issue of size. Looking up a name in the Nome, Alaska phone book takes less time than looking one up in the New York City phone book, right? The time it takes to solve a problem with a given algorithm depends on the size of the input to the problem (what you are asked to do - look a name up in Nome, or NYC?). If the input is a list of words, the size is the number of words. If the input is a single number, the size is the number of digits it has. And so on.
Let’s take a simple example. Say you want to sort a list of n numbers from smallest to biggest:
4 13 21 5 6 9 18 12 7 0
(Here, n is 10, but imagine that it could be anything.) How might you sort the numbers? Here’s one way: scan through the list and find the smallest number. In this case, that’s 0. Take that number out of the list and put it aside. Now, scan through the remaining n-1 numbers to find the next smallest number - in this case, 4. Put that next to the 0. Keep repeating this process, removing one number at a time, until you have no numbers left in the list, and a sorted pile of numbers to the side. Perfect.
How much time did that take? Well, you looked through the n numbers and found the smallest one, and pulled it aside; you repeated this process for each of the n numbers in the list. So you took n*n = n² steps to sort the list.[2] We say that sorting numbers with this algorithm, called selection sort, takes n² time. If there are 100 items in the list, it’ll take around 100*100 = 10,000 time steps to sort them. That’s pretty neat! You give me an input - the list of numbers to sort - and I can give you a limit on how long it’ll take me to sort them, without having to do the actual sorting at all.
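Here's what that procedure looks like as code - a minimal sketch of selection sort, written for clarity rather than speed:

```python
def selection_sort(numbers):
    """Repeatedly pull the smallest remaining number into a sorted pile."""
    remaining = list(numbers)  # work on a copy
    sorted_pile = []
    while remaining:
        smallest = min(remaining)     # one full scan of what's left
        remaining.remove(smallest)    # take it out of the list...
        sorted_pile.append(smallest)  # ...and set it aside
    return sorted_pile

print(selection_sort([4, 13, 21, 5, 6, 9, 18, 12, 7, 0]))
# [0, 4, 5, 6, 7, 9, 12, 13, 18, 21]
```

Each trip through the loop scans the remaining numbers once, and there are n trips - hence the n² steps.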
Here’s another sorting algorithm. Given a list of n numbers, we permute them (jumble them up) in a random order and see if this new order is sorted or not. If it is, great! We’re done. If not, we jumble them up again in a new random order and repeat. This algorithm will eventually find the right sorted order. But how long will it take? Well, to permute the numbers, we select one of the n to be the first number in the new list. Then of the remaining n-1 numbers, we select another, and so on. The total number of permutations is n*(n-1)*(n-2)*…*1, otherwise written as n! (factorial). So in the worst case, when we come across the right sorted order last, this permutation sort algorithm takes n! time to sort. If there are 100 items in the list, that’s 100!, or 100*99*98*…, which is big - something like 1 followed by 158 zeros.
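And a sketch of permutation sort. Instead of jumbling truly at random, this version uses Python's itertools to march through the orderings one by one - the worst case is the same: we might have to look at all n! of them:

```python
from itertools import permutations

def is_sorted(nums):
    return all(nums[i] <= nums[i + 1] for i in range(len(nums) - 1))

def permutation_sort(numbers):
    """Try orderings one at a time until we stumble on the sorted one."""
    for candidate in permutations(numbers):
        if is_sorted(candidate):
            return list(candidate)

print(permutation_sort([3, 1, 2]))
# [1, 2, 3] -- fine for three numbers, hopeless for a hundred
```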
So you just saw two algorithms for sorting numbers, with vastly different execution times. How difficult, then, is the problem of sorting? We say that a problem is as difficult as the most efficient algorithm we know of to solve it. So in some sense, it doesn’t matter that permutation sort is really slow; selection sort does a better job, so sorting is at worst an n² problem. This is a crucial point: if the only algorithm you know of is permutation sort, you think sorting is a really hard problem. When you discover selection sort, suddenly sorting looks easy and life is good. Our classification of problems, then, is often limited by our own creativity and knowledge. There are many problems out there that might be easier - that might be solved with more efficient algorithms - if only we could come up with these better algorithms to solve them.[3]
In fact, we know of even faster algorithms that can sort in n*log(n) time, which is faster than n², so sorting really takes no more than n*log(n) time in the general case.
What are P and NP?
So far we’ve shown how problems can be classified by the amount of time it takes to solve them with the most efficient algorithms available. From here we can provide a very simple explanation for P:
P is the set of problems that can be solved in polynomial time.
A polynomial looks like nᵏ: n, n², n¹⁰⁰, etc. So sorting, which takes n² time, is in P. So are searching, finding greatest common divisors, zip data compression, multiplying, primality checking,[4] and a host of other common problems. In fact, with few exceptions, the world of computers is built on algorithms that run in polynomial time. In addition to simple stuff like sorting, generic algorithmic sledgehammers like dynamic and linear programming, which solve problems from spell checking to resource allocation, are in P.
Other problems are exponential: their times look like 2ⁿ, 100ⁿ, and so on.
We consider polynomial problems to be easy: if n is 100, then n² is 10,000, which isn’t too bad. Exponential problems, on the other hand, are very hard: if n is 100, then 2ⁿ is 1 followed by 30 zeros. So if you have a problem, you’d better hope it’s in P and not exponential, or something even worse.[5] In a broad sense, problems in P are considered practical, and problems harder than P are cause for concern.
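If you want to feel the difference, a quick back-of-the-envelope computation does it:

```python
for n in (10, 20, 50, 100):
    print(n, n**2, 2**n)
# 10 100 1024
# 20 400 1048576
# 50 2500 1125899906842624
# 100 10000 1267650600228229401496703205376
```

The n² column grows politely; the 2ⁿ column leaves the realm of the practical well before n reaches 100.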
NP is a little more nuanced.
NP is the set of problems that can be checked in polynomial time.
By checking I mean: if I pose a problem and you provide a solution, I can check if your solution is correct relatively quickly, even if solving the problem took a lot of time.
For instance, if I give you a really complicated maze, and you give it back to me with a line drawn from start to finish, I don’t have to solve the whole maze myself to verify your answer; I can just check that your line goes from start to finish without crossing over any other lines, which is a lot easier.
Or: if I ask for the prime factors of a large number, and you give me a list of supposed factors back, it’s easy for me to check if you’re right. If the numbers you give me are prime (which I can check in P), and when I multiply them (which I can also do in P) I get my original number, you’re right. Otherwise you’re wrong.
In each of these cases, I don’t have to do the work of solving the problem myself.
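As a sketch, here is what that checking step looks like in Python for the factoring example. (The is_prime helper below uses naive trial division just to keep the sketch self-contained; a real verifier would use one of the polynomial-time primality tests mentioned in footnote 4.)

```python
def is_prime(k):
    """Naive trial division -- a stand-in for a real polynomial-time
    primality test (see footnote 4)."""
    if k < 2:
        return False
    for d in range(2, int(k ** 0.5) + 1):
        if k % d == 0:
            return False
    return True

def check_factorization(n, claimed_factors):
    """Verify a claimed prime factorization without factoring n ourselves."""
    product = 1
    for f in claimed_factors:
        if not is_prime(f):
            return False       # every claimed factor must be prime
        product *= f
    return product == n        # and they must multiply back to n

print(check_factorization(15, [3, 5]))  # True
print(check_factorization(15, [3, 4]))  # False -- 4 isn't prime
```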
There are many other interesting problems in NP, including:
- Given a bunch of integers, can you pick some of them that together add up to 0?
- Given a map of roads and cities and a budget of miles, is there a trip that visits all the cities exactly once without exceeding the budget?
Note that NP specifies nothing about how hard it is to actually solve one of its problems; some problems might be easy to solve and others hard. We really don't know. In fact, it is this unknown -- how hard it is to solve problems that are easily checkable -- that is at the heart of P ?= NP.
What does "P ?= NP" mean?
Read "P ?= NP" as "Does P equal NP?" Are the problems in P the same as the problems in NP? In other words, is every problem that is checkable in polynomial time also solvable in polynomial time?[6]
Here's another way to phrase the question. For certain problems in NP, the best known algorithms involve exhaustively enumerating every possible combination of inputs - an exponential task - to find a solution (much like permutation sort, above), even though checking that solution is ultimately easy to do. P ?= NP asks "Are there some problems for which brute-force (exhaustive) searching for a solution is the best possible algorithm, even if the solution is easy to check?" In other words, are some problems so hard that brute force is the best we can do?
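Here's what that looks like for the add-up-to-0 problem from the list above - a brute-force sketch that tries every subset (there are 2ⁿ of them), even though checking any one proposed answer takes just a single sum:

```python
from itertools import combinations

def find_zero_subset(nums):
    """Brute force: try every nonempty subset until one sums to 0."""
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):  # 2^n subsets in all
            if sum(subset) == 0:
                return subset
    return None

def check_zero_subset(subset):
    """Verifying a proposed answer is easy: one addition."""
    return len(subset) > 0 and sum(subset) == 0

print(find_zero_subset([3, -9, 8, 4, 5, 1]))  # (-9, 8, 1)
print(check_zero_subset((-9, 8, 1)))          # True
```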
You can begin to see why this is a hard question. You have to show that either:
- All problems that are quickly checkable are quickly solvable (P = NP). And your proof better hold for problems that we haven't even thought up yet.
- A single problem that's quickly checkable is not quickly solvable (P != NP). Sure, you can easily show that the best known algorithm for this problem doesn't run in polynomial time, but you also have to show that no polynomial-time algorithm for the problem can possibly exist, whether we know of it yet or not.
We know of many problems in NP that may or may not be in P. (I'll describe one in the next section.) The solutions to these problems can be checked in polynomial time, but arriving at the solutions using the best algorithms we've got takes exponential time. We simply do not know whether polynomial time algorithms exist to solve these problems. No one has invented (or discovered) one yet, and we don't know whether that's because none exist, or because we just haven't been smart enough to find one.
If it were ever shown that P = NP, the world would be a different place. Many problems that we consider totally out of reach could be solved. Sure, a few that we hope are hard (such as ones that would allow a hacker to decode your credit card number when you buy something on Amazon) would become simple, which would be bad. But on the whole, it would be an amazing broadening of powers for computers everywhere.
Life couldn't be that easy, could it? Intuitively, most theorists believe that P != NP: that some problems are hard to solve, even if they are easy to check. But no one has been able to show it yet. For the record, Deolalikar claims to show that in fact P != NP. The current consensus is that his proof is incorrect, even though the underlying claim is most likely true.
And that's P ?= NP.
(Feel free to stop reading here.)
NP-completeness
Of course, there's more to the story than that. Thousands of papers have been published about P and NP. One of the more intriguing discoveries is the notion of NP-completeness. A problem is NP-complete if it is in NP (checkable in P) and "as hard as" any other problem in NP.
What does that mean? It's pretty astounding. If I have an NP-complete problem, and you give me any other problem in NP, I can convert your problem's inputs[7] into a formulation of my problem, solve that, and translate the answer back to an answer for yours.
So if you give me an NP problem about scheduling traffic, and I have an NP-complete problem that finds cliques of friends amongst all the relationships on Facebook, I can solve your problem by somehow converting your arrangement of cars into a social graph. These conversions can be technical and complicated, but the bottom line is that I can solve your problem with my problem, and so mine is at least as hard to solve as yours. (If my problem were easier, I wouldn't be able to solve yours with it, right?) Furthermore, this property holds for any problem you can find in NP; my problem is as hard as anything else out there.
There are many NP-complete problems, each of which can be used to solve the others; you could say they're all equally hard.[8] They allow for some interesting implications:
- If you find a single NP-complete problem that is in P, then all NP problems are in P, since they're all no harder than that single problem.
- Conversely, if you find a single NP problem that's not in P, then all NP-complete problems are not in P, since they're all at least as hard as that single problem.
Despite their significance, NP-complete problems are not bizarre theoretical constructs. I'll describe one, just to show you how simple they can be.
Imagine you're going on a camping trip and have a knapsack that can hold a given amount of weight, say p pounds. You have a whole bunch of gear that you'd like to take along: a sleeping bag (4 lbs), a stove (1 lb), a tent (6 lbs), and so on. What items do you put in your knapsack so that you get the closest to p pounds of stuff in there without going over?
Amazingly, the best known way to get the correct answer is to enumerate combinations of items until you've found the solution,[9] an exponential task. No one, to date, can do better. If you can - you win a million bucks. More amazingly, this knapsack problem is NP-complete, simple though it is. You can transform any other problem in NP (factoring, summing to 0, route finding, etc.) into a knapsack formulation and solve it.
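Here's the brute-force search as a sketch (the gear list is made up; any items and weights would do):

```python
from itertools import combinations

gear = {"sleeping bag": 4, "stove": 1, "tent": 6, "lantern": 2, "cooler": 5}

def best_packing(items, capacity):
    """Try all 2^n subsets of gear; keep the heaviest one that fits."""
    best, best_weight = (), 0
    names = list(items)
    for size in range(len(names) + 1):
        for combo in combinations(names, size):
            weight = sum(items[name] for name in combo)
            if best_weight < weight <= capacity:
                best, best_weight = combo, weight
    return best, best_weight

print(best_packing(gear, 10))
# (('sleeping bag', 'tent'), 10) -- one of the packings that hits 10 exactly
```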
On the other side of things, people have tried to approximate the knapsack problem with polynomial time algorithms, since solving it outright takes so long. If you can't get the correct answer efficiently, might as well see how close you can get, right? So rather than enumerate all combinations of items, these algorithms heuristically pick and choose certain combinations, sacrificing total correctness for speed. And they've succeeded: the best polynomial approximations can get arbitrarily close to the right answer, though not all the way there. So the problem is NP-complete - super hard - but also very easy to approximate. Crazy.[10]
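The approximation schemes that get arbitrarily close are more involved, but even a simple greedy heuristic shows the flavor of trading correctness for speed (a sketch of the idea, not the real thing):

```python
def greedy_packing(items, capacity):
    """Heuristic: grab the heaviest items that still fit. One sorted pass --
    fast, but with no guarantee of finding the best possible load."""
    load, total = [], 0
    for name, weight in sorted(items.items(), key=lambda kv: -kv[1]):
        if total + weight <= capacity:
            load.append(name)
            total += weight
    return load, total

print(greedy_packing({"sleeping bag": 4, "stove": 1, "tent": 6, "cooler": 5}, 10))
# (['tent', 'sleeping bag'], 10) -- lucky this time; greedy can miss the optimum
```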
In addition to knapsack, the two problems I described above as being in NP, integer summing and trip planning, are also NP-complete. But other hard problems, such as prime factorization, are not known to be NP-complete. It's a mysterious world out there.
If you find any of this interesting, Wikipedia is a good place to start. Oh yeah, bonus points if you can figure out which phone book lookup method is better.
[1] Note that since it takes one unit of time to write something down in one unit of space, space requirements can never exceed time requirements.
[2] Well, not quite. The first time you scanned the list for the smallest number, you looked through n numbers. But the second time, you’d already taken a number out, so there were only n-1 numbers. And so on. The real number of steps is n + (n-1) + (n-2) + … + 1 = (n²+n)/2. In such cases, we look for the "biggest" term - the one that will yield the largest result as n gets larger - and discard the others as relatively insignificant. In this case, n² is the biggest term, so we forget about the rest.
[3] In certain cases it’s possible to prove a lower bound on problem difficulty. That is, you could prove that a problem can’t take less than a certain amount of time, say t. If you know of an algorithm that takes t time, then you can rest easy, because you know that no other faster algorithm can exist. Here’s one example: finding the smallest number in a list of n numbers must take at least n time steps: one each to examine every number in the list. If you ever try to take fewer than n steps, you can’t be examining all the numbers, and I’ll foil you by giving you an input list in which the smallest number is one of the ones you didn’t look at. Boom - you’ve overlooked the smallest number in the list and your algorithm is incorrect. Many problems are complex enough, however, that proving a useful lower bound is very difficult.
[4] Interestingly, primality checking was only shown to be in P in 2002!
[5] Wait a minute, you say. If my problem takes n¹⁰⁰ time, it’s in P, but it’ll still take really long! You’re right: in practice, the actual exponents and constants of the polynomial (in this case, the exponent 100) make a difference. An n¹⁰⁰ problem is a lot harder than an n² problem. And in fact, there are certain problems (such as linear programming) for which an exponential algorithm can find a solution faster than a polynomial algorithm in many common cases. But as n gets really large, even the largest polynomial is smaller than the smallest exponential: n¹⁰⁰⁰⁰⁰⁰ is smaller than 2ⁿ (or even 1.0001ⁿ) for very large values of n. And geeky computer scientists like to think very big.
[6] We know that the reverse is true: problems solvable in polynomial time are checkable in polynomial time; just solve them again and see if your result agrees with the proposed one. In other words, we know that all the problems in P are in NP. But we don't know whether all the problems in NP are in P.
[7] For the record, this conversion process must itself take no more than polynomial time.
[8] You can actually differentiate among NP-complete problems by how well you can approximate them with faster algorithms, but that's a topic for another time. (If you are surprised and curious about the fact that problems that are equally hard to solve can be approximated to different degrees, then start reading up, and welcome to the world of theoretical computer science!)
[9] Without bogging you down with details, I'll acknowledge that there are other known ways to solve this problem, but they're all still exponential in the size of the input.
[10] Just for kicks, one more twist: if you add just one more constraint to the knapsack, for instance giving it a volume limit in addition to a weight limit, you can no longer approximate it to an arbitrary degree of accuracy.