And so, it congeals…

Jun 17, 2007 21:21

This morning I found myself asking a question, “What is the difference between abstraction and encoding?” The answer, the only answer I could arrive at, is that abstraction destroys the information in an irreversible way, while encoding always allows you to recreate the original message.

To understand what I’m babbling about, look at this:

[image: the simplified double-helix cartoon of DNA]

This is an abstraction of the DNA molecule and the archetypal representation that you probably remember from high school. Much of the salient information about the molecule is lost, and to regain it you must have a decent understanding of the molecule a priori; that is, you’d better understand it before you even receive the ‘message’.

Now, look at this:

[image: the full chemical structure of DNA, atom by atom and bond by bond]

Just knowing how to read the notation chemists use to encode information about atoms and chemical bonds is enough to acquire all of the information about the DNA molecule. This is a lossless encoding method. You don't have to know anything about that particular molecule to understand it in its entirety.

The first example is an abstraction and the second is a lossless encoding. The temptation to call the first a lossy encoding is easily resisted: the first could stand for any arbitrary DNA molecule, so there is no particular message there to lose.

Here’s another example:

(x - 2)^2 is a convenient way of encoding x^2 - 4x + 4, while (a - b)^n abstracts it. It kills off the particular, but at the same time it encompasses all possible variants of that polynomial.
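
If you want to poke at that with a computer, here’s a minimal sketch in Python (sympy is purely my choice of tool, not anything essential to the point) that checks the encoding really is lossless and that the abstraction really does swallow the particular:

```python
# Quick sanity check of the polynomial example using sympy
# (just an illustration; any computer algebra system would do).
from sympy import symbols, expand

x, a, b = symbols('x a b')

# The encoding: (x - 2)^2 is exactly x^2 - 4x + 4, nothing lost.
print(expand((x - 2)**2))           # -> x**2 - 4*x + 4

# The abstraction: (a - b)^2 covers every polynomial of that shape;
# substituting a = x, b = 2 recovers the particular one.
general = expand((a - b)**2)
print(general)                      # -> a**2 - 2*a*b + b**2
print(general.subs({a: x, b: 2}))   # -> x**2 - 4*x + 4
```

The round trip from the general form back to the particular one only works because I handed it the substitution a = x, b = 2; without that extra knowledge the abstraction tells you nothing about which polynomial was meant.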

In my head, questions about information and its link to physics are running rampant. At the center is the bridge, ENTROPY, the so-called measure of disorder in a system.

What is entropy?

It is the tendency of a system to move towards the statistical mean of all its possible states.

In statistical mechanics, it is the distribution of molecular velocities drifting towards the most probable of all the possible states of the system: a completely even distribution. Since temperature is just the average kinetic energy of the molecules, it is the tendency of a system to move towards an even temperature distribution. To identify the location and velocity of a particular particle you would have to actually hunt it down; no equation could tell you anything about it without first finding it.
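
A toy way to see that “most probable state” claim, not a real derivation, just my own back-of-the-envelope sketch: split N particles between the two halves of a box and count how many microstates each split has. The even split dwarfs everything else, which is why a system left to wander ends up there.

```python
# Toy microstate count (my own example): N identical particles, each one
# independently on the left or right half of a box. The number of ways to
# have `left` particles on the left is a binomial coefficient, and the
# even split accounts for by far the largest share of all 2**N microstates.
from math import comb

N = 100          # number of particles (arbitrary choice)
total = 2 ** N   # every particle independently left or right

for left in (0, 10, 25, 50):
    ways = comb(N, left)   # microstates with exactly `left` particles on the left
    print(f"{left:>3} on the left: {ways / total:.2e} of all microstates")
```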

In information theory, it is how uncertain you are about whether the next bit in a string will be a 1. To understand this, let’s talk a bit more about encoding. OK, let’s examine two strings, 010101010101010101010101010101010101 and 011101001101011010110001010101000111. The first one is just 01 copied and pasted 18 times, while the second was produced by random flips of a coin. For the first one, all I really needed was 01 and a rule to repeat it 17 more times. As for the second, well, you’ll just have to take it as is. We call the first compressible and the second, you guessed it, incompressible. We say the first has low entropy and the second is at maximal entropy, equilibrium.
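
To make the “guess the next bit” idea concrete, here is a rough sketch (my own illustration, with Shannon’s -p log2 p formula doing the work) that estimates how predictable the next bit is in each of those two strings:

```python
# Estimate the average uncertainty (in bits) about the next bit in a string,
# given the bit you are currently looking at. My own sketch, not from the post.
from collections import Counter
from math import log2

periodic   = "01" * 18
coin_flips = "011101001101011010110001010101000111"  # the random string above

def next_bit_entropy(s: str) -> float:
    """Average uncertainty (in bits) about the next bit, given the current one."""
    pairs = Counter(zip(s, s[1:]))          # counts of (current, next) bit pairs
    total = sum(pairs.values())
    entropy = 0.0
    for cur in "01":
        following = {nxt: c for (c0, nxt), c in pairs.items() if c0 == cur}
        n = sum(following.values())
        if n == 0:
            continue
        # Shannon entropy of the next bit when the current bit is `cur`
        h = -sum((c / n) * log2(c / n) for c in following.values())
        entropy += (n / total) * h          # weight by how often `cur` occurs
    return entropy

print(next_bit_entropy(periodic))    # 0.0 -- the next bit is a foregone conclusion
print(next_bit_entropy(coin_flips))  # close to 1 bit -- nearly every bit is a surprise
```

For the repeated string the answer is zero bits, because once you see a 0 you already know a 1 is coming; for the coin-flip string it sits near the one-bit maximum, which is exactly the conversation game below.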

Let’s say you and I are having a nice chat. Now, if you can guess the next word out of my mouth, I’m not telling you anything, am I? However, let’s say every word I say is a complete surprise, random in the sense that you don’t know what’s coming next. Now there is some information!

Entropy is a measure of randomness! Randomness is information; whether it is intelligible is irrelevant. But this is the thing that has been on my mind: I think I can use it to explain how the random actions of ants make ant hills and how the random firing of neurons makes personalities. But, beyond all of that, I think I can use it to explain the distribution of the primes.

I want to free the idea of information from the content of a message and analyze it in a pure and abstract form. I want to liberate it from its shackles to create a mathematically rigorous definition of uncertainty, one independent of content and meaning, because it goes far beyond that. Entropy is a property of any set, be it countable or not.