Computers are useless; they can only give you answers

Jun 12, 2007 14:43

Here's something that's been bugging me for a while: why are computers useful?

A bit of an odd question, you might think, and I'm going to need to explain some background. First, you need to know that there are problems that computers can't solve, even in principle. A classic example is the halting problem: given a program and its input, can you ( Read more... )

computers, philosophy, maths, ideas

Comments 30

(The comment has been removed)

pozorvlak June 12 2007, 23:12:03 UTC
That's the badger. Formally, A is dense in X if for all x in X and all e > 0, there exists some a in A within e of x. Take the rationals embedded in the reals as an example. Take any real number x, and any e > 0 (assume it's very small). There's some natural number n such that 10^(-n) < e: take q to be the rational number whose decimal expansion agrees with that of x to n+1 places, and is 0 thereafter. Then |q-x| < 10^(-n) < e.
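
To make the construction concrete, here's a quick sketch in Python (the helper name rational_within is just illustrative, and a float is itself only an approximation to a real, so treat this as a cartoon of the argument rather than the argument itself):

```python
from fractions import Fraction
from math import ceil, log10, pi

def rational_within(x, e):
    """Return a rational q with |q - x| < e, by truncating x's decimal expansion."""
    n = max(1, ceil(-log10(e)) + 1)        # enough decimal places that 10**(-n) < e
    q = Fraction(int(x * 10**n), 10**n)    # digits of x up to place n, zero thereafter
    return q

q = rational_within(pi, 1e-6)
print(q, float(abs(q - Fraction(pi))))     # a rational within 10**-6 of pi
```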

totherme June 12 2007, 14:41:47 UTC
Byron Cook from MS Research Cambridge has been doing a fair bit in the way of solving the bits of the halting problem that most people actually care about.

He's produced this tool called Terminator for doing something closely approximating the impossible. In the separation logic community, the East London Massive have been doing some cool stuff in the same area. Those guys are getting together now, using shape analysis to extend the space of things their termination checkers can deal with.

In another place entirely, there's Catch, which apparently does some interesting termination analysis on Haskell programs - even dealing with laziness sanely :)
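
The general flavour is easy to sketch, even if the real tools are far cleverer: a practical checker never answers the full halting problem, it answers "terminates" when some sufficient condition holds and "unknown" otherwise. Here's a toy Python illustration - nothing like what Terminator actually does internally (it constructs proofs rather than running the program), just the sound-but-incomplete shape of the answer:

```python
# Toy, sound-but-incomplete "termination checker": it only says "terminates"
# when it has actually watched the program halt within a step budget, and
# says "unknown" otherwise -- so it is never wrong, merely often unhelpful.

def check_termination(step, state, max_steps=100_000):
    """step(state) returns the next state, or None once the program has halted."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return "terminates"
    return "unknown"

# The Collatz map reaches 1 for every input anyone has tried, but nobody has
# proved it always does -- exactly the sort of thing real checkers choke on.
def collatz(n):
    return None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)

print(check_termination(collatz, 27))                       # terminates
print(check_termination(lambda n: n + 1, 0, max_steps=50))  # unknown (never halts)
```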

totherme June 12 2007, 18:36:38 UTC
You know, whenever you write one of these posts I have an overwhelming desire to reply with something completely irrelevant. Yours is an interesting, meaningful description of both humans and maths, and I have nothing to offer other than food for the spell checker. I'm going to flick blonde hair at someone, and then try to feel clever by reading more of the Stern Review - though that isn't very clever, because it's very clear and well written and I'm still having trouble following it... :(
PS: it's Toni, I can't be bothered to log in.

half_of_monty June 13 2007, 10:52:43 UTC
On the other hand, the Stern Review matters more. So you win, really.

pozorvlak June 13 2007, 11:12:54 UTC
Cool! Though of course this makes it even more mysterious, if even the canonical uncomputable problem has useful computable approximations :-)

shuripentu June 12 2007, 20:48:08 UTC
Typing is faster than writing.

pozorvlak June 13 2007, 09:54:57 UTC
Computers were useful even when the major input method was punched cards :-)

totherme June 13 2007, 12:57:35 UTC
I think shorthand is faster than typing too - but that's not the point. Lump-hammers are useful for breaking up rocks, even though there are some (precious ones) that they can't break up. A tool is useful if it does even one thing well.


metamoof June 12 2007, 23:44:26 UTC
I must save this argument up for the next time someone tries to tell me that it must be possible to program a solution. It's a teeny bit less hand-wavey than my current one, and, more to the point, has Seriously Incomprehensible Mathematics behind it.

pozorvlak June 13 2007, 09:57:02 UTC
:-)

Make sure you get the phrases "probability measure" and "Borel algebra" in...
(They're part of the technical gadgetry that makes the "probability 0" bit make sense)

metamoof June 13 2007, 13:03:43 UTC
I think you may, however, need to explain that probability bit.

I can understand that it may be so small as to be virtually 0, but I can't quite work out how that could be exactly 0. That being said, the intuitive "virtually 0" idea works well enough for my particular use of this argument.

pozorvlak June 13 2007, 13:40:35 UTC
*fires off a quick email to an actual analyst to make sure*

Intuitively, it's basically for the same reason that 0.999999... is equal to 1. Suppose they're not equal: then set e = 1 - 0.999... . By our assumption, e > 0. So e > 10^(-n) for some sufficiently large n. But 10^(-n) = 1 - 0.{n 9s}, which is greater than e, since 0.9999... has more than n nines. That contradicts e > 10^(-n), so the assumption fails and e must be equal to zero. You can play a similar game and show that the probability has to be smaller than any positive number, and thus must be zero. Maybe we'd be better off if we allowed infinitesimals...
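
Spelled out as a squeeze, which is just a tidier version of the same argument:

\[
0 \;\le\; e \;=\; 1 - 0.\overline{9} \;\le\; 1 - \underbrace{0.9\cdots9}_{n\text{ nines}} \;=\; 10^{-n} \quad\text{for every } n,
\]

and 10^(-n) tends to 0 as n grows, so the only non-negative number lying below every 10^(-n) is 0. The same squeeze is what forces a probability that is smaller than every positive number to be exactly 0.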

In a bit more detail: to talk meaningfully about probabilities over some space of possibilities (here, the subsets of N), you need to have a notion of measure on the space. I think (and have just sent an email to check) that there's a general theorem that if the space is infinite, a subset of cardinality less than that of the whole space must have measure zero - you can deduce this from the axioms for a measure, which you need for probability theory to make sense. I'm not an expert ( ... )
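
Whatever the right general statement turns out to be, the countable case is all the original claim needs. As a sketch - taking the obvious measure on subsets of N, where each number is included or not by an independent fair coin flip (my choice of setup, purely for illustration) - for any fixed subset A,

\[
\Pr(S = A) \;\le\; \Pr\big(S \cap \{0,\dots,n-1\} = A \cap \{0,\dots,n-1\}\big) \;=\; 2^{-n} \quad\text{for every } n,
\]

so each individual subset has probability 0. Since there are only countably many programs, the computable subsets form a countable family, and countable additivity then gives them total probability 0 + 0 + ... = 0.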

half_of_monty June 13 2007, 10:51:45 UTC
I think it's something to do with the way people think, or maybe with the way probability is designed. For example: if some computer (that could represent real numbers, not finite approximations of them) were to pick a real number at random, it would be transcendental with probability 1 ( ... )
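
For anyone following along, the standard justification is a countability argument: the algebraic numbers form a countable set, and under any continuous distribution (say, uniform on [0,1]) each single point has probability zero, so

\[
\Pr(x \text{ algebraic}) \;\le\; \sum_{a \,\in\, \mathbb{A}\cap[0,1]} \Pr(x = a) \;=\; \sum_a 0 \;=\; 0,
\]

and hence x is transcendental with probability 1.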

pozorvlak June 13 2007, 13:43:40 UTC
Ah - like the way that people are quite likely to pick a prime if asked for a random number?

I suspect the answer to a lot of this can be found in algorithmic information theory. Problems that humans are interested in can, since they fit into finite human brains, be expressed relatively compactly (where "compact" here might mean "in a multiple-volume design document"...). AIT suggests that such problems are comparatively rare...
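
Roughly, the counting argument behind that last claim - just a sketch, not tied to any particular formulation of AIT:

\[
\#\{\text{binary descriptions of length} < n-k\} \;=\; \sum_{i=0}^{n-k-1} 2^i \;=\; 2^{n-k} - 1 \;<\; 2^{-k}\cdot 2^n,
\]

so fewer than a fraction 2^(-k) of the 2^n strings of length n can be compressed by more than k bits. Objects with short descriptions - which is roughly what "fits into a finite human brain" amounts to - are vanishingly rare among all objects of a given size.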
