How to write a really freaking huge number (in a small space)

Sep 09, 2007 01:44

http://www-users.cs.york.ac.uk/susan/cyc/g/graham.htm

Something I was thinking about: What's the minimum number of bits (b) needed to precisely describe a whole number (n)?

Suppose you simply write the number in binary.
Then b is about log2(n) -- more precisely, b = floor(log2(n)) + 1 bits.
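As a quick sanity check, Python's integers already expose this count directly (`int.bit_length()` is exactly floor(log2(n)) + 1 for n >= 1):

```python
def bits_needed(n: int) -> int:
    # Plain binary representation of n takes floor(log2(n)) + 1 bits;
    # Python's bit_length() computes exactly that.
    return n.bit_length()

print(bits_needed(1))          # 1
print(bits_needed(255))        # 8
print(bits_needed(2**10_000))  # 10001
```

So a number around 2^10,000 takes about 10,001 bits (roughly 1.25 kilobytes) to write out in raw binary.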

But the text of the article on Graham's number (linked above) can't be more than ten kilobytes in total size. And the number itself is clearly much, much larger than 2^10,000.
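That's the punchline: a few lines of definition pin down a number no binary expansion could ever hold. Here's a sketch of Knuth's up-arrow notation, the recursion underlying Graham's number -- only the very smallest cases are actually computable:

```python
def up(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow operator a ^(n) b.
    n = 1 is ordinary exponentiation; each higher n iterates the level below.
    Only safe for tiny arguments -- the values explode almost immediately."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# Graham's number starts from g_1 = 3 ^^^^ 3 (four arrows) and iterates
# 64 times, with each level's arrow count given by the previous level's
# value. Even g_1 is far beyond computation; these cases do finish:
print(up(3, 1, 3))  # 27
print(up(3, 2, 3))  # 3^(3^3) = 7625597484987
```

The whole definition fits in a paragraph, which is the sense in which ten kilobytes of text can "describe" a number vastly bigger than 2^10,000.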

What kind of methods would you use to encode an arbitrary file (say, an image file) using a mathematical description that takes up much, much less space than a simple bitmap?

It occurs to me that this is not an original idea. PNG and JPEG and other image compression algorithms have been around for a long time. For large files, can such algorithms get extremely good, with compression ratios on the order of Graham's number / 2^10,000?
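For what it's worth, a general-purpose compressor only wins big on highly redundant input -- here's a small zlib sketch (exact compressed sizes vary by library version, so treat the numbers as ballpark):

```python
import zlib

# A megabyte of a single repeated byte is maximally redundant and
# compresses down to roughly a kilobyte:
raw = b"\x00" * 1_000_000
packed = zlib.compress(raw, 9)
print(len(raw), len(packed))

# Round-trip check: decompression recovers the original exactly.
assert zlib.decompress(packed) == raw
```

But a simple counting argument caps what any generic algorithm can do: there are only 2^k distinct descriptions of k bits, so almost no files can be compressed below their own size, let alone by Graham's-number ratios. Astronomical ratios seem to require exploiting specific mathematical structure (like the formula defining the number), not a one-size-fits-all compressor.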

Is there a theory that describes the theoretical tradeoff between compact compressed file size and minimum decompression time?

information science, math, numbers, theory, image compression
