Artificial Intelligence Philosophy

Apr 21, 2008 13:22

I used to be much more of a booster of Artificial Intelligence than I am now. Growing up, I was into Gödel, Escher, Bach, Hofstadter and Dennett's "The Mind's I", and other similar books. In grad school, I used to get into debates online with Mikhail Zeleny, this Russian grad student, kind of a crank, who believed that AI was impossible. He was prone to obscure and somewhat bizarre claims (computers would never achieve "noumenal perception of the infinite"), but in some ways he was intelligent and erudite. He used to get into arguments with McCarthy, the guy who created the AI programming language LISP, and would cite obscure errors from McCarthy's papers, if I recall correctly.

Some AI critics have influenced my thinking even though I find their criticisms highly suspect. Searle and Penrose have made me appreciate more fully that a human is a physical system, and that "intelligence", "thought", "cognition", etc. are abstractions we make from that physical system. What seemed "natural" to someone who grew up with programming was that the Mind is to the Brain as Software is to Hardware--that "we" as thinking beings could be equivalent to intelligent computer programs, just "running" on the "hardware" of a brain instead of a computer.

There's something a little arbitrary about abstracting from a real physical system to leave only "thoughts"--though one has to admit that, while arbitrary, it's not entirely artificial. Many cultures have imagined a spirit separate from the body. I think what makes AI seem so transparently plausible is that computers are designed so that the hardware is separated cleanly from the software. We know what Microsoft Word is (unfortunately!) independently of Mac or PC (or even PowerPC Mac).

Many supporters of AI think that "intelligence" is a perfectly coherent abstraction from a physical system but "consciousness" is not--that consciousness is vague, or meaningless, an "epiphenomenon" arising from short-term memory or from the interaction of different computational "modules" in the brain's "Cartesian theater". I guess the reason is that intelligence is in some ways measurable and consciousness is not. Though what we actually measure is a statistical abstraction from tests: this "g", or "general intelligence" factor, is not necessarily what we mean by intelligence, and it was only sought out (and identified with "intelligence") because of confidence that the concept we normally spoke about had some correlate in reality. It's hard to imagine developments that could convince us that intelligence (or its lack) is not a useful concept to apply. (Particularly after exposure to the works of George W. Bush.)
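Just to make concrete what that statistical abstraction is: "g" falls out of the correlations among test scores, roughly as the first factor (or first principal component) of the correlation matrix. Here's a toy sketch with made-up numbers--a hypothetical illustration, not real psychometric data:

```python
# Toy sketch: a single "g"-like factor emerging from correlated test scores.
# The data are synthetic (one latent ability plus noise), purely illustrative.
import numpy as np

np.random.seed(0)
n_people, n_tests = 1000, 6

latent = np.random.randn(n_people, 1)                  # each person's latent ability
loadings = np.random.uniform(0.5, 0.9, (1, n_tests))   # how strongly each test reflects it
scores = np.dot(latent, loadings) + 0.6 * np.random.randn(n_people, n_tests)

# Correlate the tests, then look at the spectrum of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]                # largest eigenvalue first

print("variance explained by the first factor: %.0f%%" % (100 * eigvals[0] / eigvals.sum()))
```

The single number is just a summary of the correlation structure; whether it deserves the name "intelligence" is exactly the kind of question at issue.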

One easy way to make the brain-computer analogy work is to understand neurons as performing a parallel computation. But although Penrose and Hameroff's speculations appear mostly bunk, they did convince me that there's a lot of really interesting stuff going on below the level of the neuron. The cytoskeleton of neurons (or of animal cells generally) is fascinating, with those microtubules... And the way messenger RNA sends signals around the cell, both through its bases and through its shape--that's fascinating, and I'd like to learn more about it.
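For contrast, here's roughly what a unit looks like in that "parallel computation" picture when you actually write it down--a deliberately crude sketch with made-up weights, nothing biological, just to show how much detail below the level of the neuron the abstraction throws away:

```python
# Crude sketch of the "neuron as unit of parallel computation" abstraction:
# each unit is a weighted sum of its inputs pushed through a threshold.
# Microtubules, RNA signalling, growth -- none of it appears anywhere here.
import numpy as np

np.random.seed(1)
inputs = np.random.randn(100)             # activity of 100 upstream cells
weights = np.random.randn(10, 100)        # synaptic weights onto 10 downstream cells
threshold = 0.5

activations = np.dot(weights, inputs)     # all ten "neurons" computed in one parallel step
firing = (activations > threshold).astype(int)
print(firing)
```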

My guess is that something like IBM's Blue Brain Project, which tries to simulate an entire mammalian brain neuron by neuron, would not really reproduce the process by which neurons grow new outgrowths (or by which new neurons are born and become interconnected)--that perhaps the short-term behavior of brains will be easier to simulate than the longer-term changes, in which what goes on within the cell to alter its growth might matter more.

When you look at, say, soap and water mixing, it's hard not to anthropomorphize and think that the liquids feel repulsion the way humans do--and to think that some of how we "feel" is inseparable from actual chemical reactions, as opposed to being patterns in which the underlying substrate doesn't matter. This is, of course, not a scientific argument.

What I'm getting at is that AI boosters and critics may be carving up the world differently, and may frame the "ultimate" question differently. The boosters may be asking, "Can a computer do what a human does?" (abstracting away certain things that merely "make up" a person but don't count toward what a person "does"), whereas the critics would answer, "But it's not a human, so what?"