Notes on consciousness.

Nov 17, 2010 03:19

I'm pretty sure anyone whose primary objection to computational theories of mind is that symbol manipulation is insufficient to produce complex effects also has no idea how computer programming works, or what kinds of wicked things one can do with a few simple words and a compiler.
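
To make that concrete, here's a minimal sketch (Python, nothing to do with anyone's actual model of mind) of Rule 110, a one-dimensional cellular automaton: the whole "program" is an eight-entry lookup table applied to neighbouring symbols, yet the rule is known to be Turing-complete - about as stark a case of trivial symbol manipulation yielding arbitrarily complex behavior as you could ask for.

    # Rule 110: each cell's next state depends only on itself and its two
    # neighbours, via an eight-entry lookup table -- pure symbol manipulation.
    RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        # Apply the table to every (left, centre, right) triple, wrapping at the edges.
        n = len(cells)
        return [RULE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # Start from a single live cell and watch non-trivial structure emerge.
    row = [0] * 79 + [1]
    for _ in range(40):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)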

easwaran November 17 2010, 21:10:45 UTC
I don't think anyone objects because symbol manipulation can't produce complex effects - the objections from people like Searle or Chalmers are (respectively) that symbol manipulation doesn't produce semantic or phenomenological effects. They claim to have arguments that these sorts of effects are qualitatively different in kind from syntactic processes. It's not about mere complexity. (Of course, I think their arguments are probably wrong too, but at least they're not making the simple mistake that you're pointing out.)

vaelynphi November 18 2010, 00:25:09 UTC
I was actually referring to an (old) argument by Chalmers which I've heard parroted quite a bit recently.

However, on the question of whether symbolic machines can produce semantic effects, I believe the notion that this is impossible is wrongheaded. There are, to my knowledge, no examples of how one would produce such effects with machines, but neither is there any argument I've yet seen that machines cannot have qualitative experience, or at least emulate it to such a degree as to be indistinguishable.

Most arguments I've heard to the contrary amount to 'but they're *just* machines' or some other tripe.

From the other end, I have yet to be convinced that the mind is not basically a computer; or, to put it more specifically, a Turing machine. Or, at the very least, the current set of supposed mind-brain/consciousness problems seems to dissolve pretty swiftly in the face of computational theories. That is not to say that those theories have produced a functional model, nor that other, often newer, problems don't remain or arise.

easwaran November 18 2010, 00:45:39 UTC
The argument is a bit more sophisticated than "they're just machines". The argument is supposed to be that qualitative experiences are a different sort of thing than input-output behaviors, so that you could have one without the other. The Chalmers argument is supposed to be, "we can imagine the possibility of a zombie, therefore consciousness and behavior are different things". Of course, that by itself (if correct) doesn't prove that a computer couldn't be conscious, just that it wouldn't be conscious in virtue of having the appropriate behavior. I don't quite understand what Chalmers' positive claim is though - I think he'd say that it's possible for a machine to be conscious, provided that you build in the appropriate quasi-physical structures that give rise to consciousness in us. But I don't really understand it ( ... )

vaelynphi November 19 2010, 00:00:01 UTC
Hah hah; I guess that's *not* the direction they'd prefer... heh. But I agree with you there; presumably my use of the term 'Turing machine' is incorrect.

Hrm; I think Chalmers is plain wrong about that; I think consciousness is built up from myriad nonconscious elements, in much the same way that sophisticated software is built up from myriad simple processes. Precisely *how* that works is obviously still up for grabs, but the idea seems simple enough to be uncontroversial.

And the Dretske/Millican argument is close to my response to the Chinese room: the room does understand Chinese; the person inside need not.
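
(A purely hypothetical toy, in Python, just to make the analogy concrete: neither the clerk function nor any entry in the rulebook 'knows' Chinese - each piece is bare pattern matching - yet the assembled system holds up its end of a tiny exchange. The systems-reply thought is that, scaled up enormously, whatever understanding there is belongs to the whole, not to any part.)

    # Hypothetical toy "room": pure symbol matching, no semantics in any one piece.
    RULEBOOK = {
        "你好": "你好！",            # greeting -> greeting
        "你会说中文吗": "会一点。",   # "do you speak Chinese?" -> "a little"
    }

    def clerk(message):
        # The clerk only compares symbols against the rulebook entries.
        return RULEBOOK.get(message, "请再说一遍。")  # "please say that again"

    for note in ["你好", "你会说中文吗", "今天天气怎么样"]:
        print(note, "->", clerk(note))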

easwaran November 19 2010, 00:21:27 UTC
I agree with you about Chalmers, although the idea isn't exactly that simple. It's not any more clear how physical processes can add up to phenomenal facts than how physical processes can add up to moral facts. I think both can happen, but these are at least counterintuitive claims. We have to deny the claim that zombies are even conceivable in order to support this.

