Jul 30, 2013 09:58
Our universe has 3 large spatial dimensions (plus one temporal dimension, and possibly another 6 or 7 microscopic dimensions if string theory is right, but those won't be of any importance here).
Given 3 numbers (say, longitude, latitude, and altitude), you can uniquely identify where a particle is located in space. But the state of a system depends not only on what the particles' positions are, but also on what their momenta are, i.e. how fast they are moving (the momentum of a particle in classical mechanics is simply its mass times its velocity--when relativistic and quantum effects are taken into account, this relationship becomes much more complicated). So another 3 numbers are required to fully specify the state of each particle.
If you were to specify all 6 of these numbers for every particle in a given system, you would have completely described the state of that system. (I'm ignoring spin and charge here, which you'd also need to keep track of in order to fully specify the state.) In order to catalog all possible states of such a system, you therefore need a space with 6N dimensions, where N is the number of particles. It is this 6N-dimensional space which is called "phase space", and it is in this space where wandering sets are defined. The state of the system is represented by a single point in phase space, and as the system evolves over time, this point moves around.
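To make the bookkeeping concrete, here's a minimal sketch (in Python, with a made-up particle count and random numbers standing in for real data) of packing the positions and momenta of N particles into a single 6N-component phase-space point:

```python
import numpy as np

# A "state" of N classical particles: 3 position components and
# 3 momentum components per particle = one point in a
# 6N-dimensional phase space.
N = 4  # hypothetical particle count
rng = np.random.default_rng(0)

positions = rng.normal(size=(N, 3))      # x, y, z for each particle
masses = np.ones(N)                      # unit masses, for illustration
velocities = rng.normal(size=(N, 3))
momenta = masses[:, None] * velocities   # classical p = m * v

state = np.concatenate([positions.ravel(), momenta.ravel()])
print(state.shape)  # (24,) -- i.e. (6N,)
```

The whole system is now just one vector; its motion through time traces out a single trajectory in this 6N-dimensional space.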
In statistical mechanics and in quantum mechanics, you often deal with probability distributions rather than a single state. So you might start out with some distribution of points in phase space--some fuzzy cloud occupying a neighborhood of a point, for instance. As the system evolves, this cloud can move around and change its shape. But one really central and important theorem in physics, Liouville's theorem, says that as this cloud of probability moves around, the volume it takes up in phase space always remains constant. This theorem can be derived from the equations of motion in classical or quantum mechanics. But it really follows as a consequence of energy conservation, which in turn is a consequence of the invariance of the laws of physics under time translations. At any given moment in time, the basic laws of physics appear to be the same; they do not depend explicitly on time, so energy is conserved and so is the volume of this cloud in phase space. The cloud can morph into whatever different shape it wants as it wanders around, but its total volume in phase space must remain constant.
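As a quick numerical illustration, here's a check of Liouville's theorem for the simplest conservative system, a 1D harmonic oscillator (unit mass and frequency assumed). Its exact time-t flow is linear in (q, p), so a triangle of phase-space points maps to another triangle, and its area--the 2D version of phase-space volume--should come out unchanged:

```python
import math

def ho_flow(q, p, t, m=1.0, w=1.0):
    # Exact time-t flow of H = p**2/(2m) + m*w**2*q**2/2.
    # The Jacobian of this linear map has determinant exactly 1,
    # which is Liouville's theorem for this system.
    c, s = math.cos(w * t), math.sin(w * t)
    return q * c + p * s / (m * w), -m * w * q * s + p * c

def tri_area(pts):
    # Area of a triangle of phase-space points (half the cross product).
    (q0, p0), (q1, p1), (q2, p2) = pts
    return 0.5 * abs((q1 - q0) * (p2 - p0) - (q2 - q0) * (p1 - p0))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
before = tri_area(tri)                                  # 0.5
after = tri_area([ho_flow(q, p, 2.7) for q, p in tri])
# 'before' and 'after' agree to rounding error: the cloud's
# shape rotates, but its phase-space volume is conserved.
```

The same bookkeeping works for any shape of cloud, since a linear flow scales all areas by the same Jacobian determinant.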
Systems that obey Liouville's theorem are called conservative systems, and they don't have wandering sets. Systems that do not obey Liouville's theorem are called dissipative systems, and they *do* contain wandering sets.
But wait-- I just said that Liouville's theorem follows from some pretty basic principles of physics, like the fact that the laws of physics are the same at all times. So doesn't that mean that all physical systems in our universe are conservative--in other words, that there really is no such thing as dissipation? And does that in turn mean that entropy never really increases, it just remains constant?
This is one of the most frustrating paradoxes for me whenever I start to think about dissipation. It's very easy to convince yourself that dissipation doesn't really exist, but it's equally easy to convince yourself that all real world systems are dissipative, and that this pervasive tendency physicists have for treating all systems as conservative is no better than approximating a cow as a perfect sphere (a running joke about physicists).
I'll let this sink in for now, and end this part by hinting at where things are going next and what the resolution of the above paradox involves. In order to understand the real distinction between conservative and dissipative systems, we have to talk about the difference between open and closed systems, about the measurement problem in quantum mechanics, what it means to make a "measurement", how to separate a system from its environment, and when such distinctions are important and what they mean. We need to talk about the role of the observer in physics.

One of the popular myths you'll find all over pop physics books is that this role for an observer, and this problem of separating a system from its environment, came out of quantum mechanics. But in fact, this problem has been around much longer than quantum mechanics, and originates in thermodynamics / statistical mechanics. It's something that people like Boltzmann and Maxwell spent a long time thinking about and puzzling over. (But it's certainly true that quantum mechanics has made the problem seem deeper and weirder, and raised the stakes somewhat.) Philosophically, it's loosely connected to the problem of the self vs the other, and how we reconcile subjective and objective descriptions of the world. In short, this is probably the most important and interesting question in the philosophy of physics; it seems to involve all areas of physics equally, and it all revolves somehow around dissipation and entropy. To be continued...
physics,
quantum mechanics,
statistical mechanics