Alan, you'll enjoy this

Jan 22, 2008 17:56

So, I'm writing about a girl named Ada and a robot named Galatea and an ornery old barkeeper named Deuce X. McKenna, and I'm doing research on strong A.I. on Wikipedia to make sure my facts are at least believable, if not right. I start reading about the Turing test and such, and then I get into the "Chinese Room" thought experiment. John Searle argues that just because a machine passes a Turing test doesn't necessarily mean it has any sort of intelligence or understanding of what it's really doing.
Basically, he argues that a machine built to interpret Chinese characters and respond accordingly understands Chinese no better than a system involving a man who can't speak a word of Chinese but responds to Chinese characters using a set of rules. I guess we can pretend he has pens and paper and rows upon rows of filing cabinets and reference cards. You'll have to read the link. Keep in mind, the man isn't decoding the Chinese characters into English at all; he's merely responding to them based on a set of rules, the same as a computer does.
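
Just for kicks, here's the Chinese Room boiled down to a few lines of Python. The rule book, the phrases, and the chinese_room function name are my own toy inventions (Searle's room would need astronomically more rules), but the point survives: the program matches symbols and emits symbols, and nowhere does understanding enter into it.

# A toy Chinese Room: match incoming symbols against a rule book,
# emit the canned reply. No translation, no comprehension, just rules.
RULE_BOOK = {
    "你好": "你好！",            # rule: see a greeting, return a greeting
    "你会说中文吗？": "会一点。",  # rule: "do you speak Chinese?" -> "a little"
}

def chinese_room(symbols):
    # Look the symbols up; if no rule applies, ask them to repeat.
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(chinese_room("你会说中文吗？"))  # fluent-looking answer, zero understanding

From the outside, the room "speaks Chinese"; inside, it's dictionary lookups all the way down. That's exactly the gap Searle is pointing at.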

I'd argue that his argument is irrelevant, because isn't the human mind essentially a machine that decodes characters and responds to them according to a set of rules? When I read these words I've just typed, some part of my brain has to go through the process of taking all the letters and words and making them into something intelligible. Now admittedly, the rules my brain uses to react to stimuli aren't exactly well-defined or clear-cut, but isn't that just general human irrationality? A rational A.I. isn't a difficult concept to grasp. Making an A.I. that appreciates art, has mood swings, gets lonely, and just wants to stay in bed drunk some days... that would impress me.

Anyhow, what's y'all's take on Searle's argument? Think he has a point, or is it completely invalidated by the notion that the human brain, although complex and irrational, is nothing more than a strong A.I.?