Sep 25, 2013 17:35
Okay, so I did some math yesterday, and I was gonna type it up, but then I discovered a very problematic error in it. Now I'm pretty sure that I've fixed it so I'm gonna type it up again. We'll see. This has happened like five times now.
The proposition I will prove is that the residue of the Riemann zeta function at s=1 is not efficiently computable as a continuous limit. In particular, if you approach s=1 via s=1+1/n, then the number of terms of the sum necessary for an accurate approximation grows exponentially in n. Heuristically, this means that to accurately approximate (s-1)\zeta(s) at s=1.01, one would need to compute something like the first 10^33 terms of the sum. The empirical result that inspired me to seek a proof that this is terrible: computing the first 10^7 terms or so of the sum got me .0007 instead of the pi/2 I expected.
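(For concreteness, a minimal Python sketch of the kind of truncated computation I mean. It uses the plain series sum of 1/k^s, which shows the same truncation problem even if it isn't the exact series behind the pi/2 figure; the cutoff 10^7 and s = 1.01 match the numbers above.)

    # Quick sketch: truncate the series for (s-1)*zeta(s) at s = 1.01 and see
    # how far off it is (plain zeta series, just to illustrate the problem).
    s = 1.01              # s = 1 + 1/n with n = 100
    N = 10**7             # number of terms kept
    partial = sum(k ** -s for k in range(1, N + 1))
    print((s - 1) * partial)    # comes out nowhere near the limiting value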
The proof proceeds as follows: Assume that you must keep increasing the number of terms of the sum in order to accurately approximate the limit as s goes to 1. Then look at the points s=1+1/n, and write the number of terms needed at s=1+1/n as f(n), a function of n.
Now if we cut off the sum at the f(n)th term, we have an error term consisting of the sum, as k goes from 0 to infinity, of 1/(n*(f(n)+k)^(1+1/n)) (the 1/n out front is just the factor s-1).
The terms of this sum go to zero. Consider how long it takes for the summands to be one half the size of the first summand. Some analysis shows that this takes k >= (2^(n/(n+1)) - 1)*f(n); we can similarly compute the number of summands it takes to reach 1/4, 1/8, 1/16, etc. the size of the first summand by replacing the 2 in that formula with 4, 8, 16, and so on. Counting the summands in each of these blocks at the smallest value they can take there and summing the resulting geometric series gives us a telescoping sum which bounds our error term from below.
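(A quick numerical sanity check on that half-size threshold; the sample values n = 10 and f(n) = 1000 below are arbitrary.)

    # Summand a_k = 1/(n*(f + k)^(1 + 1/n)). Find the smallest k with
    # a_k <= a_0/2 and compare it to the predicted (2^(n/(n+1)) - 1)*f.
    n, f = 10, 1000                          # arbitrary sample values

    def a(k):
        return 1.0 / (n * (f + k) ** (1 + 1 / n))

    k = 0
    while a(k) > a(0) / 2:
        k += 1
    print(k, (2 ** (n / (n + 1)) - 1) * f)   # these should agree closely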
More analysis (summing that geometric series) tells us that the lower bound works out to (2^(n/(n+1)) - 1) / ((2 - 2^(n/(n+1))) * n * f(n)^(1/n)); this consists of four basic parts. The first is the numerator, which quickly converges to 1. The second is the portion we've been looking for: the factor (2 - 2^(n/(n+1))) in the denominator, which shrinks like a constant over n and so, on its own, would make the bound grow linearly in n. The next is the n in the denominator, which kills off that linear growth. Finally, there is the f(n)^(1/n) in the denominator, coming from the number of terms of the sum we've included. We need this to grow unboundedly in order for the error bound to shrink as n increases.
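(A similar check that this lower bound sits below, but on the same order as, the actual error term; the tail is a Hurwitz zeta value, which mpmath can evaluate directly, and n = 10, f(n) = 1000 are again arbitrary sample values.)

    from mpmath import mp, zeta

    mp.dps = 30
    n, f = 10, 1000                          # arbitrary sample values
    s = 1 + mp.mpf(1) / n

    # Exact error term: sum over k >= 0 of 1/(n*(f+k)^s) = zeta(s, f)/n (Hurwitz zeta).
    tail = zeta(s, f) / n

    # The telescoping lower bound from above.
    r = mp.mpf(2) ** (mp.mpf(n) / (n + 1))
    bound = (r - 1) / ((2 - r) * n * mp.mpf(f) ** (mp.mpf(1) / n))

    print(tail, bound)                       # the bound comes out a bit below the tail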
So the number of terms of the sum we must compute has to be, as a function of n, a function f(n) such that f(n)^(1/n) goes to infinity. But f(n)^(1/n) going to infinity means log(f(n))/n goes to infinity, so log(f(n)) must outgrow every constant multiple of n, which means that f(n) must eventually outgrow every exponential c^n: it must be superexponential.
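(To see that concretely: plugging a fixed-base exponential like f(n) = 10^n into the lower bound leaves f(n)^(1/n) stuck at 10, so the bound levels off at a constant, while something like f(n) = n^n, where f(n)^(1/n) = n, drives it to zero. Both choices of f are just examples.)

    # Evaluate the error lower bound for two growth rates of f(n).
    # Only f(n)^(1/n) enters it, so pass that in directly:
    # f(n) = 10^n gives f(n)^(1/n) = 10, while f(n) = n^n gives f(n)^(1/n) = n.
    def lower_bound(n, f_pow):
        r = 2 ** (n / (n + 1))
        return (r - 1) / ((2 - r) * n * f_pow)

    for n in (10, 100, 1000):
        print(n, lower_bound(n, 10.0), lower_bound(n, float(n)))
    # middle column stalls near a constant; right column keeps shrinking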
Thus we have proven that if we are to approximate the Riemann zeta function in this fashion, the number of terms of the sum we must include grows at least exponentially in n, that is, in how closely we approach s=1: holding any fixed accuracy costs exponentially many terms, and driving the error to zero costs superexponentially many. This makes the computation impossible in practice.
Q.E. Mother-fucking D.
That took me like three days. Ugh.
math,
lazy,
industrious