Nov 17, 2010 03:19
I'm pretty sure that anyone whose primary objection to computational theories of mind is that symbol manipulation is insufficient to produce complex effects has no idea how computer programming works, or what kinds of wicked things one can do with a few simple words and a compiler.
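Case in point, a toy of my own devising (Python; nothing to do with any particular theory of mind): Rule 110, an elementary cellular automaton. The entire 'program' is an eight-entry lookup table, yet the behavior it generates is rich enough that Matthew Cook proved it Turing-complete.

    # Rule 110: each cell's next state depends only on itself and its two
    # neighbors -- an eight-entry lookup table, nothing more.
    RULE = 110

    def step(cells):
        """Apply the rule once, wrapping at the edges."""
        n = len(cells)
        out = []
        for i in range(n):
            # Pack the three-cell neighborhood into a number 0-7...
            pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            # ...and read off the corresponding bit of the rule number.
            out.append((RULE >> pattern) & 1)
        return out

    # Start from a single live cell and watch the structure pile up.
    row = [0] * 79
    row[39] = 1
    for _ in range(40):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)

Fifteen-odd lines, one table, and the output is already more intricate than anything you'd guess from reading the rule itself.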
(The comment has been removed)
I'm not sure my views can be boiled down to 'mind-as-computer', but as caricatures go, it's probably a fair one.
(The comment has been removed)
At least, you seem to agree with me that there's nothing about consciousness itself that prevents machines from having it, or at least emulating it, other than our current state of technology. (I could append an 'as far as I know' to that. There may turn out to be some magical qualia-juice that's incompatible with electronics. Bender can't go to heaven after all...)
So far as computational models go, I'm thinking of Dennett's explanations, which might not strictly be 'generalized computational models', so perhaps I'm speaking the wrong language. Perhaps calling it computational is wrong, and I should simply say 'material' or 'mechanical'; after all, I know of no good reason that all the facts about mental states in a mind can't be expressed in terms of all the facts about physical states in a brain.
However, on the subject of whether symbolic machines can produce semantic effects, I believe the notion that this is impossible is wrongheaded. There are, to my knowledge, no worked examples of how one would produce such effects with machines, but neither have I seen any argument that machines cannot have qualitative experience, or at least emulate it to a degree indistinguishable from the real thing.
Most arguments I've heard to the contrary amount to 'but they're *just* machines' or some other tripe.
From the other end, I have yet to be convinced that the mind is not basically a computer, or, to put it more specifically, a Turing machine. At the very least, the current set of supposed mind-brain/consciousness problems seems to dissolve pretty swiftly in the face of computational theories. That is not to say that those theories have produced a functional model, nor that other, often newer, problems don't remain or arise.
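Just to pin down what 'Turing machine' commits me to, here's a toy sketch of my own (Python; no claim that minds look like this under the hood). The whole formalism is a finite transition table plus a tape; this particular table increments a binary number.

    # A Turing machine is just a transition table:
    #   (state, symbol read) -> (new state, symbol to write, head movement)
    # This toy machine increments a binary number written on the tape.
    TABLE = {
        ('right', '0'): ('right', '0', +1),   # scan right to the last digit
        ('right', '1'): ('right', '1', +1),
        ('right', ' '): ('carry', ' ', -1),   # ran off the end; back up
        ('carry', '1'): ('carry', '0', -1),   # 1 + carry = 0, keep carrying
        ('carry', '0'): ('halt',  '1',  0),   # 0 + carry = 1, done
        ('carry', ' '): ('halt',  '1',  0),   # carry past the top digit
    }

    def run(tape_string):
        tape = dict(enumerate(tape_string))   # sparse tape; blank cells read ' '
        state, head = 'right', 0
        while state != 'halt':
            state, symbol, move = TABLE[(state, tape.get(head, ' '))]
            tape[head] = symbol
            head += move
        return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1)).strip()

    print(run('1011'))   # 11 in binary -> '1100', i.e. 12

The claim isn't that neurons implement anything this tidy; it's that whatever they do implement is, at worst, something a machine of this kind could also compute.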
Hrm; I think Chalmers is plain wrong about that; I think consciousness is built up from myriad nonconscious elements, in much the same way that sophisticated software is built up from myriad simple processes. Precisely *how* that works is obviously still up for grabs, but the idea seems simple enough to be uncontroversial.
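To make the software analogy literal, here's another toy of my own (Python again): addition built from nothing but NAND gates. No individual gate has the faintest idea what arithmetic is; whatever 'adds' here is the arrangement, not any of the parts.

    # Every gate below is defined in terms of NAND alone.
    def nand(a, b): return 1 - (a & b)
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    def full_adder(a, b, carry):
        # One column of binary addition, as gates.
        s = xor(xor(a, b), carry)
        c = or_(and_(a, b), and_(carry, xor(a, b)))
        return s, c

    def add(x, y, bits=8):
        # Ripple-carry: chain full adders, least significant bit first.
        result, carry = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(add(19, 23))   # -> 42

Swap 'gate' for 'neuron' and 'adds' for 'is conscious of' and you have the shape of the claim, if not, obviously, the proof of it.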
And the Dretske/Millican argument is close to my response to the Chinese room: the room does understand Chinese; the person inside need not.