Nov 17, 2010 03:19
I'm pretty sure anyone whose primary objection to computational theories of mind is that symbol manipulation is insufficient to produce complex effects has no idea how computer programming works, or what kinds of wicked things one can do with a few simple words and a compiler.
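As a toy illustration of that point (my own sketch, not something from the original post): an elementary cellular automaton's update rule is pure symbol manipulation, three bits in and one bit out, yet Rule 110 is known to be capable of universal computation. The rule number 110 below is just the standard encoding of that particular rule table.

```python
RULE = 110  # rule table encoded as a byte; Rule 110 is Turing-complete

def step(cells):
    """Rewrite each cell from its three-cell neighborhood (wrapping at edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=32, steps=10):
    """Grow a pattern from a single seed symbol; return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history
```

Eleven lines of mechanical rewriting, and the resulting patterns are rich enough to host arbitrary computation.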
However, on the subject of whether symbolic machines can produce semantic effects: I believe the notion that this is impossible is wrongheaded. There are, to my knowledge, no examples of how one would produce such effects with machines, but I've yet to see an argument that machines cannot have qualitative experience, or at least emulate it to such a degree as to be indistinguishable.
Most arguments I've heard to the contrary amount to 'but they're *just* machines', or some other tripe.
From the other end, I have yet to be convinced that the mind is not basically a computer; or, to put it more specifically, a Turing machine. At the very least, the current set of supposed mind-brain/consciousness problems seems to dissolve pretty swiftly in the face of computational theories. That is not to say that those theories have produced a functional model, nor that other, often newer, problems don't remain or arise.
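For concreteness about what "a Turing machine" means here: the formalism is nothing but a finite transition table plus an unbounded tape. A minimal sketch (the two-state machine below is a made-up example that just flips every bit on the tape, then halts at the first blank):

```python
from collections import defaultdict

# (state, symbol read) -> (symbol to write, head move, next state); 'H' halts.
TABLE = {
    ('A', '0'): ('1', +1, 'A'),
    ('A', '1'): ('0', +1, 'A'),
    ('A', '_'): ('_', 0, 'H'),
}

def run_tm(tape_str, state='A', blank='_', max_steps=1000):
    """Simulate the machine on an input string; return the final tape."""
    tape = defaultdict(lambda: blank, enumerate(tape_str))
    pos = 0
    for _ in range(max_steps):
        if state == 'H':
            break
        write, move, state = TABLE[(state, tape[pos])]
        tape[pos] = write
        pos += move
    cells = ''.join(tape[i] for i in range(min(tape), max(tape) + 1))
    return cells.strip(blank)
```

Whether brains implement anything like this is exactly the open question; the point is only how spare the formal device is that the thesis appeals to.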
Hrm; I think Chalmers is plain wrong about that; I think consciousness is built up from myriad nonconscious elements, in much the same way that sophisticated software is built up from myriad simple processes. Precisely *how* that works is obviously still up for grabs, but the idea seems simple enough to be uncontroversial.
And the Dretske/Millican argument is close to my response to the Chinese room: the room does understand Chinese; the person inside need not.