wandering sets part 9: still confused

Sep 09, 2013 22:14

I have to admit, in trying to tie all of this together, I have realized that there still seems to be something big that I don't understand about the whole thing. And there is at least one minor mistake I should correct in part 8. So from here on out we're treading on thin ice: I'm doing something more akin to explaining what I don't understand than describing a solution.

It seemed that if I could understand wandering sets, then all of the pieces would fit together. And it still seems that way, although the big thing I still don't get about wandering sets is how they relate to mixing. And that seems crucial.

The minor mistake I should correct in part 8 is my proposed example of a completely dissipative action. I said you could take the entire space minus the attractor as your initial starting set, and then watch it evolve into the attractor. But this wouldn't work, because the initial set would include points in the neighborhood of the attractor. However, a minor modification fixes it--you just need to start with a set that excludes not only the attractor itself but also some neighborhood around it.

In thinking about this minor problem, however, I realized there are also some more subtle problems with how I presented things. First, I may have overstated the importance of dimensionality. To have a completely dissipative action, you really just need a space with an attractor that is some subset of it, one which pulls any points outside of it into the attractor basin. The subset wouldn't necessarily have to have a lower dimension--my intuition is that in thermodynamics the lower-dimensional case would be the usual one, although I must admit I'm not sure, and I don't want to rule out other possibilities.
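To make the corrected example concrete, here's a minimal sketch in Python. The contracting map f(x) = x/2 and the specific numbers are my own toy choices, not anything from the earlier parts: the attractor is the single point {0}, and a starting set bounded away from a neighborhood of 0 falls in and never comes back.

```python
import numpy as np

# A toy completely dissipative system: f(x) = x/2 contracts everything
# toward the attractor {0}. (My own illustrative choice of map.)
def f(x):
    return x / 2.0

# Starting set: sample points in (1, 2), which excludes both the
# attractor {0} and the neighborhood (-0.1, 0.1) around it.
x = np.linspace(1.1, 1.9, 5)

for step in range(6):
    print(step, np.round(x, 4))
    x = f(x)

# f maps (1, 2) into (0.5, 1), already disjoint from (1, 2), and every
# further image is squeezed closer to 0. So the starting set never
# meets its own future images again: it wanders into the attractor.
```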

This leads to a more general point: the real issue with irreversibility need not be stated in terms of dimension going up or down--a process is irreversible any time there is a 1-to-many mapping or a many-to-1 mapping. So a much simpler way of putting my higher/lower dimensionality confusion is that I'm often not sure whether irreversible processes are supposed to time-evolve things from 1-to-many or from many-to-1. Going from a higher- to a lower-dimensional space is one type of many-to-1 mapping, and going from lower to higher is one type of 1-to-many mapping. But these are not the only types, just the ones that arise as typical cases in thermodynamics, because of the large number of independent degrees of freedom involved in macroscopic systems.
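As a toy illustration of the two kinds of irreversibility (squaring a real number is my own example here, nothing specific to thermodynamics):

```python
import math

# Many-to-1: f(x) = x**2 sends both +2 and -2 to 4. The sign
# information is destroyed, so the map cannot be run backwards.
f = lambda x: x ** 2
print(f(2), f(-2))   # 4 4 -- two inputs, one output

# 1-to-many: the "inverse" assigns two preimages to each output,
# so it isn't a function at all; following it forward means either
# picking one branch or keeping both branches around.
g = lambda y: (+math.sqrt(y), -math.sqrt(y))
print(g(4))          # (2.0, -2.0) -- one input, two outputs
```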

Then there's the issue of mixing. I still haven't figured out how mixing relates to wandering sets at all. Mixing very clearly seems like an irreversible process of the 1-to-many variety, but the Wikipedia page on wandering sets seems to be describing something of the many-to-1 variety. And yet it says at the top of the page that wandering sets describe mixing! I still have no idea how this could be the case. But now let's move on to quantum mechanics...

In quantum mechanics, one can think of the measurement process in terms of a quantum Hilbert space (sort of the analog of state space in classical mechanics) where different subspaces (called "superselection sectors") "decohere" from each other upon measurement. That is, they split off from each other, leading to the Many Worlds terminology of one world splitting into many. Thinking about it this way, one would immediately guess that the quantum measurement process is therefore a 1-to-many process: 1 initial world splits into many different worlds. However, if you think of it more in terms of a "collapse" of the wavefunction, you start out with many possibilities before a measurement, and they all collapse into 1 after the measurement. Thinking about it that way, you might instead conclude that quantum physics involves the many-to-1 type of irreversibility. So which is it? Well, this part I understand, mostly... and the answer is that it's both.

The 1-to-many and many-to-1 perspectives can be synthesized by looking at quantum mechanics in terms of what's called the "density matrix". Indeed, you need the density matrix formulation in order to really see how the quantum version of Liouville's theorem works. In the density matrix formulation of QM, instead of tracking the state of the system using a wavefunction--a vector whose components can represent all of the different positions of a particle (or field, or string) in a superposition--you use a matrix, which is sort of like the 2-dimensional version of a vector. By using a density matrix instead of just a vector to keep track of the state of the system, you can distinguish between two kinds of states: pure states and mixed states. A pure state is a coherent quantum superposition of many different possibilities, whereas a mixed state is more like a classical probability distribution over many different pure states. A measurement process in the density matrix formalism, then, is described by a mixing process that evolves a pure state into a mixed state. This happens due to entanglement between the original coherent state of the system and the environment. When a pure state becomes entangled in a random way with a large number of degrees of freedom, this is called "decoherence". What was originally a coherent state (nice and pure, all the same phases) is now a mixed state (decoherent, lots of random phases, too difficult to disentangle from the environment).
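A minimal numpy sketch of the pure/mixed distinction, using a single qubit (my own minimal example): the standard diagnostic is the purity Tr(rho^2), which is exactly 1 for a pure state and strictly less than 1 for a mixed one.

```python
import numpy as np

# Pure state: an equal superposition |psi> = (|0> + |1>)/sqrt(2),
# represented as the density matrix rho = |psi><psi|.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: a 50/50 classical mixture of |0> and |1>.
rho_mixed = np.diag([0.5, 0.5])

# The purity Tr(rho^2) tells them apart.
purity = lambda rho: np.trace(rho @ rho).real
print(rho_pure, purity(rho_pure))    # off-diagonals 0.5, purity 1.0
print(rho_mixed, purity(rho_mixed))  # off-diagonals 0,   purity 0.5
```

Notice that the two matrices have identical diagonals; it's the off-diagonal "coherence" terms that carry the difference, which is exactly what decoherence destroys.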

What happens is that you originally represent the system plus the environment by a single large density matrix. And then, once the system becomes entangled with the environment, the matrix decomposes into the different superselection sectors. These are different sub-matrices, each of which represents a different pure state. The entire matrix is then seen as a classical distribution over the various pure states. As I began writing this, I was going to say that because it was a mixing process, it went from 1-to-many. But now that I think of it, because the off-diagonal elements between the different sectors end up being zero after the measurement, the final space is actually smaller than the initial space. And I think that's even before you decide to ignore all but one of the sectors (which is where the "collapse" part comes in, in collapse-based interpretations). From what I recall, the off-diagonal elements wind up being exactly zero--or so close to zero that you could never tell the difference--because you assume the way in which the environment gets entangled is random. As long as each phase is random (or more specifically, as long as they are uncorrelated with each other), when you sum over a whole lot of them at once, they add up to zero--although I'd have to look this up to remember the details of how that works.
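The detail I'd have to look up is essentially the cancellation of random phases. A quick numerical sketch of that cancellation (my own check, not a derivation):

```python
import numpy as np

# Average e^{i*theta} over n uncorrelated random phases. As n grows,
# the sum cancels toward zero (roughly like 1/sqrt(n)), which is the
# mechanism that drives the off-diagonal coherence terms so close to
# zero when a state entangles randomly with a large environment.
rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    phases = rng.uniform(0.0, 2.0 * np.pi, n)
    print(n, abs(np.exp(1j * phases).mean()))
```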

I was originally going to say that mixed states are more general and involve more possibilities than pure states, so evolving from a pure state to a mixed state goes from 1-to-many, and then when you choose to ignore all but one of the final sectors, you go back from many-to-1, both of these being irreversible processes. However, as I write it out, I remember 2 things. The first is what I mentioned above--even before you pick one sector out, you've already gone from many-to-1! Then you go from many-to-1 again if you throw away the other sectors. And the second thing I remember is that, mathematically, pure states never really do evolve into mixed states. As long as you are applying the standard unitary time evolution operator, a pure state always evolves into another pure state and entropy always remains constant. However, if there is an obvious place where you can split system from environment, it's traditional to "trace over the degrees of freedom of the environment" at the moment of measurement. And it's this act of tracing that actually takes things from pure to mixed, and from many to 1. I think you can prove that, from a point of view inside the system, whether you trace over the degrees of freedom of the environment or not is irrelevant. You'll wind up with the same physics either way, the same predictions for all future properties of the system. It's just a way of simplifying the calculation. But when you do get this kind of massive random entanglement, you wind up with a situation where tracing can be used to simplify the description of the system from that point on. You're basically going from a fine-grained description of the system+environment to a more coarse-grained one. So it's no wonder that this involves a change in entropy. Although whether entropy goes up or down in the system or in the environment+system, before or after the tracing, or before or after you decide to consider only one superselection sector--that I'll have to think about and answer in the next part.
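Here's a minimal numpy sketch of that tracing step, with a single qubit standing in for the whole environment (my own toy setup, vastly smaller than any real environment): globally the state stays pure with zero entropy, but the reduced state of the system alone comes out mixed, and its entropy jumps from 0 to ln 2.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy -Tr(rho log rho), via the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop (numerically) zero eigenvalues
    return -np.sum(evals * np.log(evals))

# System S entangled with a one-qubit stand-in for the environment E:
# the Bell state |psi> = (|00> + |11>)/sqrt(2). Globally this is pure.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_SE = np.outer(psi, psi.conj())
print(entropy(rho_SE))                    # 0.0: the global state is pure

# "Trace over the degrees of freedom of the environment": reshape the
# 4x4 matrix to indices (s, e, s', e') and sum over the e = e' index.
rho_S = np.einsum('iaja->ij', rho_SE.reshape(2, 2, 2, 2))
print(rho_S)                              # 0.5 * identity: maximally mixed
print(entropy(rho_S))                     # ln(2) ~ 0.693: entropy appeared
```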

This is getting into the issues I thought I sorted out from reading Leonard Susskind's book. But I see that after a few years away from it, I'm already having trouble remembering exactly how it works again. I will think about this some more and pick this up again in part 10. Till next time...

quantum mechanics, statistical mechanics, thermodynamics
