Sep 13, 2013 09:03
I'm being driven kind of nuts by this book because it confuses perfect machine learning with near-perfect predictive ability. The examples given for the consequences of P=NP are simply outlandish. That's not to say they're all wrong--- machine-generated proofs really could demolish long-held open problems if that were the situation. But in other cases he vastly overestimates our ability to predict even with perfect knowledge. There is no way that a year of weather or an entire baseball season can be accurately predicted no matter how much "big data" you have to learn from; that's just a limitation of chaotic systems.
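To make the chaos point concrete, here's a minimal sketch (my example, not the book's) using the logistic map, a standard toy chaotic system: two starting points that differ by one part in ten billion end up completely different after a few dozen steps, so no amount of training data rescues a long-horizon forecast once your measurement of the initial state is even slightly off.

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1-x).
# Illustrative sketch only; the function and starting values are my own choices.
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)  # perturb the start by 1e-10
gaps = [abs(x - y) for x, y in zip(a, b)]
print(max(gaps[40:]))  # the tiny perturbation has grown to order 1
```

The error roughly doubles every step, so a 1e-10 uncertainty swamps the signal within about 35 iterations; real weather models hit the same wall, just on a timescale of days rather than steps.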
I am also a bit less than impressed that he doesn't much acknowledge that there are problems harder than NP, where even verifying an answer takes super-polynomial time, or where the answer is outright uncomputable. (This does get a mention later in the book.) But this is not a rigorous introduction, it's a popular science book.
It's a good introduction for all that (including some interesting history even for those already familiar with the topic). But even a complexity theorist should understand the difference between modelling and prediction.
theory,
complexity,
book,
rant