## Tuesday, September 6, 2011

### computability and big numbers

I recently realized (through finding a computer version of Trivial Pursuit, which provides player statistics) that I'm a little less terrible at the geography category than at the science category. The resulting minor identity crisis put me on a bit of a science kick, which included reading this great essay a friend of mine sent me (thank you Kenny!) that I can't believe I'd never seen before. Read the whole thing if you like numbers or computation in the least. In particular, the relation between computability and big numbers is fun. The idea goes like this:
• Consider a computer and all the possible programs we could write for it. Intuitively, don't think of interactive software; think of a program you start running, then wait on, and that eventually gives you an 'answer', like a calculator. Specifically, consider a Turing machine.
• Computer programs can be written to compute whatever you want them to. The problem is, it's hard to know whether a given program will finish computing that thing in finite time. It might get stuck in an infinite loop, counting all the way to infinity looking for something that doesn't exist. The halting problem asks: can I examine an arbitrary program and decide whether or not it will terminate in finite time?
• It turns out that it is not possible to write a computer program that solves the halting problem. The proof is great, but repeating it here would take too much length and specificity, so just read it at that link.
• But since I can't resist, it basically goes: say h(i,j) is an algorithm that returns 1 if program i terminates on input j, and 0 otherwise. Then define g(i) to halt and return 0 if h(i,i)=0, and to loop forever if h(i,i)=1. Now run g on its own code: if g(g) halts, then h(g,g)=0, which claims g(g) doesn't halt; if g(g) loops forever, then h(g,g)=1, which claims it does. Either way, you have a contradiction with the definition of h. As Aaronson puts it, "Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction. (That’s the sort of thing you don’t say in a research paper.)" *grin*
• Any program is defined by some exact number of rules. Think about all the possible programs that contain exactly N rules. There are only finitely many of these, since N is finite and there are only finitely many possible kinds of rules, so we can hypothetically make a list of all of them.
• Now consider what happens when you run each of these programs. Some might terminate and some might loop forever, but among the ones that terminate, one of them runs the longest. So you can define BB(N) to be the number of steps that the longest-running (but still halting) program with N rules takes (that is, the Nth Busy Beaver number).
• Can a Turing machine compute these numbers? If it could, it could solve the halting problem: simply watch a program with N rules run for BB(N) steps, and if it hasn't finished by then, by definition of BB(N) it never will. Since we already know the halting problem isn't computable, the BB sequence can't be computable either.
• Even more striking, BB(N) grows faster than any computable sequence. Suppose some computable sequence D always stayed above it, D(N)>BB(N). Then computing D(N) would automatically let us compute BB(N): just run every N-rule program for D(N) steps, and among those that stop in time, the longest-running one takes exactly BB(N) steps. So no such computable D exists; every computable sequence eventually falls below BB.
• One minor addendum to that: even if we didn't know that D is greater than BB, or even if that were somehow impossible to know for sure, having computational access to D would let us unknowingly calculate BB, which is in principle not possible.
• So in summary: the BB numbers are really, really big. So big that no computer can possibly keep up when trying to calculate them. And this connection between computability theory and big numbers is cool :)
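To make the "searching for something that doesn't exist" step concrete, here's a small Python sketch of my own (not from the essay): a loop whose termination is equivalent to Goldbach's conjecture, so nobody currently knows whether it halts.

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_goldbach_pair(n):
    """Is the even number n the sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

def search_for_counterexample():
    """Halts if and only if some even number >= 4 is NOT a sum of two
    primes -- so predicting whether this loop ever terminates is
    exactly as hard as settling Goldbach's conjecture."""
    n = 4
    while has_goldbach_pair(n):
        n += 2
    return n
```

A halting-problem solver would have to settle questions like this one as a trivial special case, which is a first hint that no such solver exists.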
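The diagonalization in the proof sketch can be acted out in Python (my own illustration; the names are mine, not the essay's). Since no correct halting decider can exist, the stand-in below just guesses, and the diagonal program g is built to contradict whatever it says:

```python
from itertools import islice

def halts(program, inp):
    """A claimed halting decider. No correct one can exist, so this
    stand-in deliberately just answers True -- watch g contradict it."""
    return True

def g(i):
    """The diagonal program: do the opposite of whatever halts(i, i)
    predicts. (Written as a generator so we can step it safely.)"""
    if halts(i, i):
        while True:   # halts claims g halts on input i, so loop forever
            yield
    else:
        return        # halts claims g loops on input i, so stop at once

def still_running_after(gen, limit):
    """Step gen up to limit times; True if it never finished."""
    return len(list(islice(gen, limit))) == limit

# halts claims g terminates when run on itself...
assert halts(g, g) is True
# ...yet g(g) is still going after 10,000 steps, by construction.
assert still_running_after(g(g), 10_000)
```

Whatever answer halts gives about g run on itself, g does the opposite, so no possible implementation of halts can be right about this one input.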
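For very small N, the Busy Beaver definition can actually be brute-forced. Here is a sketch of my own using 2-symbol Turing machines; the step cutoff used to flag non-halters is a cheat that only works because these tiny cases are well understood:

```python
from itertools import product

HALT = -1  # sentinel "next state" meaning the machine halts

def run(machine, max_steps):
    """Run a 2-symbol Turing machine on a blank tape; return the number
    of steps if it halts within max_steps, else None."""
    tape, state, pos = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step
        state = nxt
    return None

def busy_beaver_steps(n_states, max_steps=100):
    """Brute-force BB(n): enumerate every n-state, 2-symbol machine and
    return the longest finite running time. Machines exceeding
    max_steps are treated as non-halting -- only safe for tiny n."""
    keys = [(s, b) for s in range(n_states) for b in (0, 1)]
    actions = list(product((0, 1), (-1, 1), list(range(n_states)) + [HALT]))
    best = 0
    for choice in product(actions, repeat=len(keys)):
        steps = run(dict(zip(keys, choice)), max_steps)
        if steps is not None and steps > best:
            best = steps
    return best
```

In this machine model, busy_beaver_steps(1) comes out to 1 and busy_beaver_steps(2) to 6, matching the known small values; the enumeration explodes so quickly, and the cutoff trick becomes so untrustworthy, that this approach dies almost immediately as N grows.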
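The "any computable upper bound lets you compute BB" argument can also be acted out in a toy universe (entirely my own construction): three "programs" modeled as Python generators, stepped one yield at a time.

```python
# A toy universe of "N-rule programs"; each yield is one step.
def p1():                      # halts after 3 steps
    for _ in range(3):
        yield

def p2():                      # halts after 7 steps
    for _ in range(7):
        yield

def p3():                      # never halts
    while True:
        yield

PROGRAMS = [p1, p2, p3]        # so BB for this toy universe is 7

def steps_to_halt(prog, limit):
    """Steps until prog finishes, or None if it outlives the limit."""
    gen, steps = prog(), 0
    while steps <= limit:
        try:
            next(gen)
        except StopIteration:
            return steps
        steps += 1
    return None

def bb_from_bound(bound):
    """Any computed bound >= BB lets us compute BB itself: run every
    program for `bound` steps; the slowest finisher takes BB steps."""
    finite = [s for s in (steps_to_halt(p, bound) for p in PROGRAMS)
              if s is not None]
    return max(finite)
```

Note that bb_from_bound never needs to *know* its input exceeds BB, which is exactly the addendum's point: feeding it any number that happens to be large enough produces the true value.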
A side note about amateur interests: isn't it nice learning this stuff in bite-sized chunks instead of in five-minute flurries in math classes so dense with mulling-requiring ideas that you can never catch up enough to enjoy how beautiful it is? Sure, at this rate I could never learn enough to be a theoretical computer scientist, but what's the point if it's not enjoyable? Not to say it's not possible to learn enough or work hard enough that that pace would be enjoyable, but I already picked a different niche to specialize in, and I don't want to completely lose touch with science just because I can't devote so much time to it.