Notes on consciousness.

Nov 17, 2010 03:19

I'm pretty sure anyone whose primary objection to computational theories of mind is that symbol manipulation is insufficient to produce complex effects has no idea how computer programming works, or what kinds of wicked things one can do with a few simple words and a compiler.
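
To make that concrete, here's a toy sketch of my own (nothing canonical, just an illustration): Rule 110, an elementary cellular automaton whose entire "program" is an eight-entry lookup table over three neighboring bits, is known to be Turing-complete. A few lines of Python are enough to watch it go:

```python
# Toy sketch: Rule 110, an elementary cellular automaton. Each cell is a
# single bit, and the update rule is a lookup on three neighboring bits --
# about as dumb a piece of symbol manipulation as you can write -- yet the
# rule is known to be Turing-complete.
RULE = 110

def step(cells):
    """Apply the rule to every cell, wrapping around at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, generations=32):
    cells = [0] * width
    cells[-1] = 1  # start from a single live cell
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

if __name__ == "__main__":
    run()
```

Nothing in that lookup table "knows" anything, yet in principle the system can compute anything a Turing machine can.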

Comments 9

marcimojo November 17 2010, 09:42:49 UTC
You tell 'em.

(The comment has been removed)

vaelynphi November 18 2010, 00:26:48 UTC
I'm all ears!

I'm not sure if my views could be boiled down to 'mind-as-computer', but as caricatures go it's probably a fair one.

(The comment has been removed)

vaelynphi November 18 2010, 23:37:49 UTC
Hrm; it sounds like you agree with me more than disagree with me.

At least, you seem to agree with me that there's nothing directly true about consciousness that prevents machines from having, or at least emulating, it, other than our current state of technology. (I could append an 'as far as I know' to that. There may turn out to be some magical qualia-juice that's incompatible with electronics. Bender can't go to heaven after all...)

So far as computational models go, I'm thinking of Dennett's explanations, which might not strictly be 'generalized computational models', so perhaps I'm speaking the wrong language. Perhaps calling it computational is wrong, and I should simply say 'material' or 'mechanical'; after all, I know of no good reason that all the facts about mental states in a mind can't be expressed in terms of all the facts about physical states in a brain.

easwaran November 17 2010, 21:10:45 UTC
I don't think anyone objects because symbol manipulation can't produce complex effects - the objections from people like Searle or Chalmers are (respectively) that symbol manipulation doesn't produce semantic or phenomenological effects. They claim to have arguments that these sorts of effects are qualitatively different in kind from syntactic processes. It's not about mere complexity. (Of course, I think their arguments are probably wrong too, but at least they're not making the simple mistake that you're pointing out.)

vaelynphi November 18 2010, 00:25:09 UTC
I was actually referring to an (old) argument by Chalmers which I've heard parroted quite a bit recently.

However, on the question of whether symbolic machines can produce semantic effects, I think the notion that this is impossible is wrongheaded. To my knowledge there are no examples of how one would produce such effects with machines, but neither have I seen an argument that machines cannot have qualitative experience, or at least emulate it to a degree that's indistinguishable.

Most arguments I've heard to the contrary amount to 'but they're *just* machines' or some other tripe.

From the other end, I have yet to be convinced that the mind is not basically a computer, or, to put it more specifically, a Turing machine. Or, at the very least, the current set of supposed mind-brain/consciousness problems seems to dissolve pretty swiftly in the face of computational theories. That is neither to say that those theories have produced a functional model nor that other, often newer, problems don't remain or arise.

easwaran November 18 2010, 00:45:39 UTC
The argument is a bit more sophisticated than "they're just machines". The argument is supposed to be that qualitative experiences are a different sort of thing than input-output behaviors, so that you could have one without the other. The Chalmers argument is supposed to be, "we can imagine the possibility of a zombie, therefore consciousness and behavior are different things". Of course, that by itself (if correct) doesn't prove that a computer couldn't be conscious, just that it wouldn't be conscious in virtue of having the appropriate behavior. I don't quite understand what Chalmers' positive claim is though - I think he'd say that it's possible for a machine to be conscious, provided that you build in the appropriate quasi-physical structures that give rise to consciousness in us. But I don't really understand it ( ... )

vaelynphi November 19 2010, 00:00:01 UTC
Hah hah; I guess that's *not* the direction they'd prefer... heh. But I agree with you there; presumably my use of the term 'Turing machine' is incorrect.

Hrm; I think Chalmers is plain wrong about that; I think consciousness is built up from myriad nonconscious elements, in much the same way that sophisticated software is built up from myriad simple processes. Precisely *how* that works is obviously still up for grabs, but the idea seems simple enough to be uncontroversial.
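
(A toy sketch of my own of that 'simple parts' point, not anything out of Dennett: in Conway's Game of Life the only rule is a local birth/survival count, yet a glider, a pattern no single cell represents or knows about, marches across the grid.)

```python
# Toy sketch: Conway's Game of Life. No individual cell knows anything about
# "gliders", yet a glider -- a higher-level pattern -- reliably crawls across
# the grid, built entirely out of dumb local rules.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live-cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick if it has 3 live neighbors, or 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(12):
    cells = step(cells)
# After 12 generations the whole pattern has moved 3 cells down-right.
print(cells == {(x + 3, y + 3) for (x, y) in glider})  # True
```

The glider's 'motion' exists only at the level of the whole pattern; the parts just flip bits.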

And the Dretske/Millican argument is close to my response to the Chinese room: the room does understand Chinese; the person inside need not.
