Aug 23, 2015 20:10
I realized after writing part 5 that by continuing on to the anthropic principle and observer selection effects, I skipped over a different issue I had planned to write more about, which was how statistical mechanics and quantum mechanics are actually the same thing. I think I actually covered most of what I'd wanted to cover in part 4, but then forgot to finish the rest in part 5. However, thinking more about it has led to lots more thoughts, which make all of this more complicated and might change my perspective somewhat from what I said earlier in this series. So let me just briefly note some of the things I was going to talk about there, and what complications have arisen. Later, we'll get to the quantum suicide booth stuff.
The first time I used Feynman diagrams in a physics class, believe it or not, was not in Quantum Field Theory, where they are used most frequently, but in graduate Statistical Mechanics, which I took the year before. We weren't doing anything quantum, just regular classical statistical mechanics. But we used Feynman diagrams for it! How is this possible? Because the path integral formulation of quantum mechanics looks nearly identical, mathematically, to the way in which classical statistical mechanics is done. In both cases, you have to integrate an exponential function over a set of possible states to obtain an expression called the "partition function". Then you take derivatives of that to find correlation functions and expectation values of random variables (known as "operators" in quantum mechanics), and to compute the probability of transitions between initial and final states. This might even be the same reason why the Schrodinger Equation is sometimes used by Wall Street quants to predict the stock market, although I'm not sure about that.
One difference between the two approaches is what function gets integrated. In classical statistical mechanics, it's the Boltzmann factor e^{-E/kT} for each energy state, summed over all accessible states to get the partition function. In Feynman's path integral formalism for quantum mechanics, you instead integrate e^{iS}, where S is the action (the Lagrangian integrated over time along a specific path), over all possible paths connecting an initial and final state. Another difference is what you get out. Instead of the partition function, in quantum mechanics you get a probability amplitude, whose magnitude then has to be squared to be interpreted as a transition probability.
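To line the two up explicitly (schematically, suppressing normalization constants and working in units where hbar = 1, as I've been doing above):

    Classical stat mech:     Z = \sum_s e^{-E_s/(kT)}     (sum over accessible states s)
    Quantum path integral:   A = \int D[x] e^{iS[x]}      (integral over paths x(t))

In both cases the machinery downstream is the same: couple a source to the quantity you care about, differentiate Z or A with respect to that source, and read off correlation functions and expectation values.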
I was going to write about how these are very close to the same thing, but as I read more in anticipation of writing this, I got more confused about how they fit together. In the path integral for quantum mechanics, you can split it up into a series of tiny time intervals, integrating over each one separately and then taking the limit as the size of these time intervals approaches zero. When you look at one link in the chain, you find that you can split the factor e^{iS} into a product of 2 factors. One is e^{ip\delta_x}, which performs a Fourier transform, and the other is e^{-iH\delta_t}, which tells you how to time-evolve an energy eigenstate in quantum mechanics forward across that little interval. The latter factor can be viewed as the equivalent of the Schrodinger Equation, and this is how Schrodinger's Equation is derived from Feynman's path integral. (There's a slight part of this I don't quite understand, which is why energy eigenstates and momentum eigenstates seem to be conflated here. The Fourier transform converts the initial and final states from position into momentum eigenstates, but in order to use the e^{-iH\delta_t} factor it would seem you need an energy eigenstate. These are the same for a "free" particle, but not if there is some potential energy source affecting the particle! But let's not worry about that now.) So after this conversion is done, it looks even more like statistical mechanics: instead of summing over the exponential of the Lagrangian, we're summing over the exponential of the Hamiltonian, whose eigenvalues are the energies being summed over in the stat mech approach.

However, there are still 2 key differences. First, there's the factor of "i". e^{-iEt} has an imaginary exponent, while e^{-E/(kT)} has a negative real exponent. This makes a pretty big difference, although sometimes that difference is made to disappear by using the "imaginary time" formalism, where you replace t with an imaginary time (this is also known as "analytic continuation to Euclidean time"). There's a whole mystery about where the i in quantum mechanics comes from, and this seems to be the initial source--it's right there in the path integral, where it's missing in regular classical statistical mechanics. This causes interference between paths which you otherwise wouldn't get.

The second remaining difference is that you have a t instead of 1/kT (time instead of inverse temperature). I've never studied the subject known as Quantum Field Theory at Finite Temperature in depth, but I've been passed along some words of wisdom from it, including the insight that if you want to analyze a system of quantum fields at finite temperature, you can do so with almost the same techniques you use for zero temperature, so long as you pretend that (imaginary) time is a periodic variable that loops around every 1/kT seconds, instead of continuing infinitely into the past and the future. This is very weird, and I'm not sure it has any physical interpretation; it may just be a mathematical trick. But nevertheless, it's something I want to think about more and understand better.
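Here's a minimal numerical sketch of that imaginary-time connection (a toy example of my own, with an arbitrary 2-level Hamiltonian, not something from the references): for a finite-dimensional system, the partition function Z = \sum_n e^{-E_n/(kT)} is exactly the trace of the time-evolution operator e^{-iHt} continued to the imaginary time t = -i/(kT).

    # Toy check: Z = sum_n e^{-beta E_n} equals Tr e^{-beta H},
    # i.e. the quantum evolution operator e^{-iHt} evaluated at t = -i*beta.
    import numpy as np
    from scipy.linalg import expm

    beta = 2.0                                # inverse temperature 1/(kT)
    H = np.array([[0.0, 0.3],                 # arbitrary 2-level Hamiltonian
                  [0.3, 1.0]])

    E = np.linalg.eigvalsh(H)                 # energy eigenvalues E_n
    Z_statmech = np.sum(np.exp(-beta * E))    # stat mech partition function

    Z_trace = np.trace(expm(-beta * H)).real  # trace of imaginary-time evolution
    print(Z_statmech, Z_trace)                # agree to machine precision

And the trace, as I understand it, is where the periodicity comes from: taking a trace identifies the initial and final states, which is exactly what rolls imaginary time up into a loop of circumference 1/kT.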
Another thing I'd like to think about more, in order to understand the connection here, is what happens when you completely discretize the path integral. That is, what if we pretend there's no such thing as continuous space, and we just consider a quantum universe consisting solely of a finite number of qubits? Is there a path integral formulation of this universe? There's no relativity here, nor any notion of space or spacetime. But as with any version of quantum mechanics, there is still a notion of time, so it should be possible. And the path integral usually used (due to Dirac and Feynman) should be the continuum limit of this. I feel like I would understand quantum mechanics a lot more if I knew what the discrete version looked like.
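I don't know what the standard construction is, but here's a guess at the shape it might take, as a toy calculation (the Hamiltonian, time step, and single-qubit setup are arbitrary choices of mine): with time discrete, a "path" is just a sequence of basis states, and the amplitude to get from an initial to a final state is a sum over all such sequences of the product of one-step amplitudes.

    # Toy discrete "path integral" for a single qubit: the amplitude
    # <f|U^N|i> equals a sum over all sequences of basis states
    # s_0 = i, s_1, ..., s_N = f, each path contributing the product
    # of one-step amplitudes <s_{k+1}|U|s_k>.
    import itertools
    import numpy as np
    from scipy.linalg import expm

    N = 5                                   # number of time slices
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])             # arbitrary qubit Hamiltonian
    U = expm(-1j * H * 0.1)                 # one-step evolution, dt = 0.1
    i, f = 0, 1                             # initial and final basis states

    direct = np.linalg.matrix_power(U, N)[f, i]   # matrix element of U^N

    path_sum = 0j
    for middle in itertools.product([0, 1], repeat=N - 1):
        path = (i,) + middle + (f,)
        amp = 1 + 0j
        for k in range(N):
            amp *= U[path[k + 1], path[k]]  # one-step amplitude along the path
        path_sum += amp

    print(direct, path_sum)                 # identical

Notably, there's no e^{iS} anywhere in this: with no continuum there's no obvious Lagrangian, and the phases come entirely from the matrix elements of the one-step evolution operator. Presumably the usual Dirac/Feynman expression is what those matrix elements turn into in the continuum limit.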
Oh, one more thing before we move on to the quantum suicide booth. While reading through some Wikipedia pages related to the path integral recently, I found something pretty interesting and shocking. Apparently, there is some kind of notion of non-commutativity even in the classical version of the path integral used to compute Brownian motion. In this version of the path integral, you use stochastic calculus (also known as Ito calculus, I think?) to find the probabilistic behavior of a random walk. (And here again, we find a connection with Wall Street--this is how the Black-Scholes formula for options pricing is derived!) I had stated in a previous part of this series that non-commutativity was the one thing that makes quantum mechanics special, and that there is no classical analog of it. But apparently I'm wrong, because some kind of non-commutativity of differential operators does show up in stochastic calculus. I've tried to read how it works, though, and I must confess I don't understand it much. They say that you get a commutation relationship like [x, k] = 1 in the classical version of the path integral. And then in the quantum version, where there's an imaginary i in the exponent instead of a negative sign, this becomes [x, k] = i or, equivalently, [x, p] = ih. So apparently both non-commutativity and the uncertainty principle are directly derivable from stochastic calculus, whether it's the quantum or the classical version. This would indicate that really the *only* difference between classical and quantum is the factor of i. But I'm not sure that's true if looked at from the Koopman-von Neumann formalism. Clearly I have a lot more reading and thinking to do on this!
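One piece of this I can at least check numerically is the Ito rule that makes stochastic calculus different from ordinary calculus: the squared increments of a Brownian path don't vanish in the continuum limit, they add up deterministically to the elapsed time, (dW)^2 = dt. As I understand it, this is the root of the classical non-commutativity: because increments are of size \sqrt{dt}, evaluating a function just before vs. just after a step changes the answer at order dt (the Ito-vs-Stratonovich ambiguity), so the order of operations matters. A quick simulation (my own toy check, not from the pages I was reading):

    # Quadratic variation of a Brownian path: the sum of squared increments
    # converges to the elapsed time T, not to zero -- the Ito rule (dW)^2 = dt.
    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 1.0, 1_000_000                      # total time, number of steps
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments

    print(np.sum(dW**2))                       # ~ 1.0 = T, not 0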
Tags: ito calculus, quantum mechanics, statistical mechanics