I just read Mark Halpern's The Trouble with the Turing Test. I'm just aghast at how off-the-mark Halpern is. He asks all the wrong questions, and makes all the wrong observations. He desperately wants to point out that the emperor is wearing no clothes, by lamely flinging the rhetoric of a critical thinker like a chimp flings poo. I am very disturbed by this article, since I was pointed here by Arts & Letters Daily. I hope this is not being taken seriously.
You see, Alan Turing is very well respected in my field. Our closest equivalent to the Nobel Prize is the Turing Award. The Turing Machine is the rock on which the theory of computation is founded. He is a very interesting man, credited with very significant ideas. Blah blah blah. I'm a computer scientist. Touting the merits of Turing's work is a bit like masturbation. I will refrain.
Turing also wrote about a thought experiment involving a game, which has since been called the Turing test. Here is a summary. Put a computer and a man in two separate rooms, with well-defined interfaces (say, text-based terminals) through which the occupants may communicate with an outsider, called a judge. Let the judge be a human. If the judge cannot guess which room has the human after interacting with both through the interface, then the machine might be said to be intelligent. Its intelligence has been mistaken for a human's, and thus it (functionally) is as intelligent as a human.
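To make the structure concrete, here is a minimal sketch of the game in Python. Everything here -- the judge, the occupants, the question list -- is hypothetical scaffolding of my own; Turing specified no such interfaces.

```python
import random

def imitation_game(judge_guess, human_reply, machine_reply, questions):
    """One round: the judge interrogates rooms A and B through a text
    interface, then guesses which room holds the human."""
    rooms = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide who is where
        rooms["A"], rooms["B"] = rooms["B"], rooms["A"]
    # Each room answers the judge's questions; only text crosses the wall.
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in rooms.items()}
    guess = judge_guess(transcripts)               # judge names "A" or "B" as the human
    return rooms[guess] is machine_reply           # True: the machine fooled the judge

# Toy stand-ins, purely illustrative:
human = lambda q: "Hmm, let me think about that."
machine = lambda q: "42."
naive_judge = lambda transcripts: random.choice(["A", "B"])

print(imitation_game(naive_judge, human, machine, ["What is 7 times 6?"]))
```

Note that nothing in the skeleton says what counts as a fair judge or a fair set of questions; that vagueness is exactly what the next paragraph is about.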
Who is this judge? Should the computer be able to fool a particular judge, or any judge? Who is the other human? It is a vaguely defined game, but it was never intended to be instantiated, so the details are, in a sense, unnecessary. The point is that the thought experiment lays out a framework in which intelligence can be discussed in functional terms, instead of letting it remain an amorphous trait subject to deep philosophical debate. Artificial intelligence is a scientific field built upon the view that intelligence may be a thing that can be discussed in computational terms. Philosophy is still allowed to debate intelligence, and Turing's game is an important issue in such debates. Fodder for philosophical debate, foundation of a science -- it's a rather important paper. It's also a very simple, accessible argument that may be used to problematize the notion of "intelligence" when people claim computational intelligence has inherent limits that would prevent it from ever matching a human's innate smarts; it forces you to ask: well, what is intelligence, really? It's a loaded word. "Can machines think?" is an ill-defined question. Turing said as much in the same article: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." If, however, we look at it from a functional, utilitarian perspective (as sciences often do), then computational intelligence might not be a perverse notion.
If using language that may provoke philosophical debate upsets you, or you feel like scientists are stripping you of your humanity, then let's just call it something else, like artificial computational banana soup-atronics. It doesn't matter. ACBSatronics exists. It's doing stuff: neural networks, support vector machines, problem solving, pattern matching, automatic rule generation, sampling and clustering, probably approximately correct learning -- oh, shit! Wait. Learning may be too much like 'thinking' and 'intelligence.' We'll have to think of another word to use there too.
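To ground the claim that ACBSatronics really does do stuff, here is a minimal sketch of one of its oldest tricks, perceptron learning, in Python. The data and parameters are toy values of my own invention, not from any source.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Find weights w and bias b so that sign(w . x + b) matches the labels."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                          # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy task: infer the rule "first coordinate exceeds the second" from examples.
X = [(2, 1), (3, 0), (1, 2), (0, 3), (4, 1), (1, 4)]
Y = [1, 1, -1, -1, 1, -1]
w, b = train_perceptron(X, Y)
print(w, b)   # a separating line, learned -- er, "rule-adjusted" -- from data alone
```

Nobody hard-coded the rule; the program extracted it from examples. Call that whatever word keeps you comfortable.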
Turing was not Hilbert; he was not setting out a programme. He was not directing the community to construct a machine that passes his test. He thought that such a machine would probably exist in the near future (early computer scientists were prone to making funny predictions [1]). The fact that such a machine does not exist might be a sign that his prediction was wrong, or that AI has had some setbacks, or maybe that he didn't realize that making a machine that fools people into thinking it's a human (but does not solve an interesting problem, has no real expertise, nor any greater utility) is not a very worthwhile topic of research. We've found his argument more useful as a framework for thinking about intelligence as a functional property (which might be demonstrated by non-humans) and as a rhetorical device for making people think about intelligence.
MH, however, seems to think that the Turing test is the main goal of AI. That AI researchers who laud Turing but do not pursue this goal are hypocrites. That the idea of looking at intelligence as something functional is laughable. That the debate over the semantics or nature of intelligence is more interesting than the science of machine learning, expert systems, model checkers, and theorem-provers. (This last point is the only one at which I do not cringe -- MH is allowed to prefer philosophy of science over science; however, I think both disciplines are interesting).
I think MH is incredibly naive, his exposé-style criticism of AI researchers off the mark, and his brief dip into some basic arguments from the philosophy of mind not quite as endearing as watching a child think about something "really hard," squinty-little-face, you-can-kinda-see-the-wheels-turning-in-his-head style.
Let's take his first impression of the Turing test:

"Turing's thought experiment was simple and powerful, but problematic from the start. Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking -- he simply asserts it."
Yes. This is very fair. There is an implicit view (functionalism). But instead of saying that a sneaky assertion is problematic, why not view the experiment as an explanation of this view? If you would like to argue that the view itself is problematic, that is totally legitimate. There is quite a bit of merit in exploring the alternate views (whose truth values are equally uncertain), as this causes one to realize that any other view forecloses the possibility of a computational science that can discuss intelligent machines. That is, think through the other views enough to realize that they cannot serve as a foundation for a science.
MH claims "Turing gave detailed examples of what he wanted and expected programmers to do." This naive, mistaken view is the basis of his argument that, since no machine passes the Turing test, the field is in trouble:

"...no computer, however sophisticated, has come anywhere near real thinking. These facts have made the Test highly problematic for AI enthusiasts, who want to enlist Turing as their spiritual father and philosophic patron. When members of the AI community need some illustrious forebear to lend dignity to their position, Turing's name is regularly invoked, and his paper referred to as if holy writ. But when the specifics of that paper are brought up, and when critics ask why the Test has not yet been successfully performed, he is brushed aside as an early and rather unsophisticated enthusiast. His ideas, we are then told, are no longer the foundation of AI work, and his paper may safely be relegated to the shelf where unread classics gather dust, even while we are asked to pay its author the profoundest respect. Turing's is a name to conjure with, and that is just what most AI workers do with it."
He doesn't seem to understand that Turing was not writing a program specification. He wasn't saying "build me a machine that can do the following!" He was demonstrating (1) that the machine does not have to always do what it is instructed (note the sonnet example) or be correct or honest in its answers, and (2) that such a machine's programming is simple enough not to be unreasonable.

"Apart from his Test, no one has proposed any compelling alternative for judging the success or failure of AI..."
The test was never meant to judge the success of AI as a field. Indeed, there was no such field. No one in AI thinks the test is a good metric for their work. I'm not sure any field has a single test by which one measures its success.

"Yet the Test remains a living issue in almost all discussions of AI, if only because Turing provided a concrete goal for AI workers..."
Again, it was not a goal. It was such a reasonable-sounding machine that Turing believed it might exist in the future, but such a machine's utility is clearly suspect.
He goes on to quote a number of authorities (all of which are fantastic quotes) but doesn't actually listen to any of them. He is still convinced that passing the Turing test is the pinnacle achievement, to be pursued by all of AI. He doesn't seem to realize that the reason this is part of "almost all discussions of AI" is because it's so basic and accessible that when journalists come to our offices, we take the book off the "shelf," "dust" it off, and use it as a very accessible starting point for a conversation which (hopefully) leads our journalist down the path that views intelligence in the way that is useful for a scientist in the field of computational intelligence.
MH quotes Maurice Wilkes's 1967 Turing lecture, and indeed this is a wonderful and obvious first choice of quotations for this article. Wilkes's speech is very critical of the AI field. He is a very critical man (for instance, he feels computer science has no right to call itself a science [2]). His perspectives are not shared by the entire field. I feel, however, that his speech can be viewed positively, as a call to arms. I feel Wilkes can be interpreted as follows: a very unfantastic machine might pass Turing's test; indeed, this was probably Turing's belief. The work on expert systems, for example, might fulfill this goal, though in an incredibly unremarkable way. Wilkes wants the work to fulfill a stronger notion of AI. You might say he is criticizing AI for not understanding Turing's grander dream (which, arguably, is not well expressed by the Turing test), or you might say he is re-envisioning the dream for the future. MH, however, summarizes Wilkes as something like "all this so-called AI research is rubbish -- we haven't come close to passing Turing's test." Indeed, this summary is a fair reading; others were left with the same impression of unwarranted criticality and rudeness, and the lecture generated several heated responses.
I particularly enjoy Martha Pollack's view on AI's dilemma when faced with the Turing test as an implicit metric:

"As soon as we solve a problem, instead of looking at the solution as AI, we come to view it as just another computer system," she told Wired News. But so far, nothing that has emerged from AI laboratories actually deserves the name "artificial intelligence."
MH doesn't seem to understand that Pollack is explaining why the Test is a silly measuring stick for AI's achievements. MH, however, is so caught up in its simplicity and its apparent goal that he cannot see it as anything but the most wondrously shiny measuring stick that ever existed.
Interestingly, this reminds me of a lecture by a professor on her research into the usability of various CAPTCHAs. Her observation was that CAPTCHAs based on hard computer vision problems were subject to two phenomena, usually at the same time: (1) humans couldn't solve the problems either, and (2) programmers soon taught computers to solve them. Each time we try to think of a visual trick humans can do easily but a computer cannot, we are wrong on both ends. Pollack's dilemma is the dual: each time computers do an intelligence trick that seems human-ish, humans soon recognize it as a task solvable by software.
One of the best quotes MH uses is that of Douglas Lenat:

"The Turing test is a red herring," he declared in 2001. "Anthropomorphizing a computer program isn't a useful goal." Lenat is dedicated to building a computing system with enough facts about the world, and enough power of drawing inferences from those facts, to be able to arrive at reasonable conclusions about matters it has not been explicitly informed about. Yet this goal suggests that his project, even more than Turing's, is rightly described as "anthropomorphizing" a computer.
But... wha? Are you serious? You haven't listened to his quote at all. How can you quote someone, and ignore what you are quoting? You are projecting anthropomorphization onto this research agenda, right after he said you shouldn't. It's preposterous. Again -- let's not call it "intelligence." Fine? His research goal in ACBSatronics is a machine that can draw inferences from a knowledge base to arrive at conclusions it has not been explicitly programmed to have. If you feel this makes a computer too human-like, MH, that's totally your own paranoia. If it makes you feel any better, go read Gödel's incompleteness theorem. Such a machine will never be able to come to all true conclusions about propositions in arithmetic. See? Limits. We have no such known limits for humans. Feel better? Have I de-anthropomorphized this research project enough? You can still kiss your sweet little wife goodnight and know, in the morning, that she isn't a computer pretending to be your little lady: just use a statement composed of every true proposition in arithmetic as a "secret code word."
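(For the record, one standard statement of the first incompleteness theorem -- the formalization below is my paraphrase, not anything from MH's article:)

```latex
% First incompleteness theorem, paraphrased (requires amssymb for \nvdash):
\[
  T \text{ consistent, effectively axiomatized, and interpreting arithmetic}
  \;\Longrightarrow\;
  \exists\, G_T :\; T \nvdash G_T \ \text{ and } \ T \nvdash \lnot G_T .
\]
```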
Maybe Peter Wegner will convince you of the utility of the Turing test as a description of the foundation for scientific inquiry in this area. If one line of philosophical inquiry leads to a foundation on which our science has meaning, while the other line takes us into a realm where science can say nothing... and if we don't see why one point of view is more valid than the other (that is, either might be true), then for a researcher there is utility in believing the first.

"In answering a metaphysical question like 'Can Machines Think?' it is more important to answer it in a manner that is useful than to juggle the meaning of fuzzy concepts to prove its truth or falsity" ... For Wegner, the main object is not to decide what thinking really is; it is to keep the boys in the lab happy and productive.
Yes! They are productive! They are happy to accept a view that doesn't lead to fatalism or fundamentalism! MH, you make this sound almost negative. This is the positive, cheery, useful, (at least) equally true p.o.v. of science. It's why biologists can't do research in intelligent design. It's why doctors don't say "I can repair your kidneys, but what is the point when modern science has no tools to repair your soul?" MH, are you a fundamentalist? If so... is this a fundamentalist assault on AI? I'm beginning to think it is.

"Those AI workers who still hope to create machine intelligence do so because they believe that such an ambitious achievement is possible in the full sense of 'intelligence.'"
Again -- it's not really a goal. I am beginning, however, to sense a really rude use of the term "AI worker," suggesting a misled blue-collar dupe following the lead of academic hypocrites.
MH goes on to describe some of the responses to Turing's functionalist p.o.v. by philosophers, most notably John Searle and his Chinese Room thought experiment. It seems, however, a little too complicated for MH to follow. Poor MH. Pull the string and he sighs "Science of the Mind is hard..."

"Searle's critics -- and surprisingly, sometimes Searle himself -- introduce further personae into the Chinese Room: they postulate that the room's inhabitant is a woman (no reason given); that there are other characters ('demons') who are always -- again, for no clear reason -- male; that the whole Chinese Room should be put inside a robot; and, somewhat more seriously, that the collection of elements in the thought experiment (the room, its inhabitant, the slips of paper on which symbols are handed in and out, etc.) constitutes a 'system' with properties possessed by none of its elements."
Ok, dude, you are frustrating me. How about attending a couple classes, or getting some friends to explain this stuff to you? Here's a clue:
- The reason the English speaker is female and the demons are male is to make the pronouns less confusing. It's purely for ease of exposition.
- The reason there are two demons is that each has a different perspective in the rebuttal.
- The reason they are demons stems from a long history of perverse mental experiments presupposing players whose powers it is not up to the philosopher to defend. See Maxwell's demon.
- Try looking at the arguments one at a time, if it gets too confusing.
- Remember to breathe.
"In thought experiments even more than in most intellectual constructs, entities are not to be multiplied without necessity, but Hofstadter points to no such necessity. And if we are to admit the new players he calls for, why stop there? Why not introduce the whole Latvian army, the Radio City Music Hall Rockettes, and the Worshipful Company of Fishmongers?"
Man... you just don't get it.

"Consider a different example: suppose that the first sine-function table had just been developed and that only one copy existed. The man who secretly possessed that sole copy, though completely unmathematical himself, could make a handsome living selling instant sine values to everyone who needed them. His clients, unaware of his possession of the table, would credit him with being a whiz at mathematics, if not a positive magician... And just as one man acquired an undeserved reputation as a mathematician by responding instantly to any request for a sine value..."
I really have no clue how a man must pose or act to convince a mathematician that he is a mathematician. I think that if a man seemed to have knowledge of sine values, and no further mathematical knowledge, they would call him a rather lackluster savant. Is this really the only way you can understand the Chinese Room rebuttal? Is it that complicated? Sigh... I don't understand your argument; it introduces hairier notions, while the original task -- demonstrating to a Chinese speaker that you know the Chinese language -- is much clearer.
I will not summarize criticisms of the Chinese Room [3], but (personally) I appreciate the System perspective. We will return to the much less interesting, much more excruciating, inner debate of MH. Look -- he's about to stumble down a road that leads nowhere, wherein our own intelligence might be an artifact of a greater pre-existent intelligence:

"...when a machine does something 'intelligent,' it is because some extraordinarily brilliant person or persons, sometime in the past, found a way to preserve some fragment of intelligent action in the form of an artifact."
Yay! It's a reasonable extension that human intelligence is an artifact of an intelligent creator! It's proof by analogy that intelligent design is right, if you follow MH's argument. It's amazing how many places you can see God: biology, girls locked in boxes being forced to speak Chinese, burned onto tortillas, etc.
Let's return:

"Perhaps the absurdity of trying to make computers that can 'think' is best demonstrated by reviewing a series of attempts to do just that..."
Finally, we agree: the attempts are silly. Building "chatterbots" is a source of amusement and problem solving, much like any programming contest. Turing said your question was meaningless. Every AI researcher you quote says this goal is silly, a red herring, a perverse interpretation of their research's purpose; sometimes they think it's interesting, but they mean something more sophisticated than Loebner's test. Loebner is quite a character. Please take him with a grain of salt... not the field of AI.

"And what if judges mistake humans for computers -- the very opposite of what Turing expected?"
This possibility is implicit in Turing's mental experiment. It's not "the very opposite" of what was expected. Are you really this dense?

"Where does one find judges who are representative of 'general educated opinion,' yet who have had no interaction with cleverly programmed computers and no encounter with the notion of 'thinking machines'?"
Yes, you may debate the fairness of the prize's instantiation of the test all you want. This trouble (making the game rigorous and well-defined), however, is why Turing was vague, why the field doesn't find this to be a useful metric, and why nobody thinks passing the test is worthwhile research. Thank goodness the Loebner prize is just for recreation [4]; to seriously and heatedly argue the fairness of its conditions is up there with getting angry and calling a "do-over" in a Pokémon card game.
Since I have never seen these transcripts or heard the play-by-play, MH's article suddenly becomes worth reading. What follows is enormously entertaining, although MH doesn't seem to appreciate it one bit. The prize's game is as follows: each terminal connects judges to an entity (either human or computer), and each terminal is assigned a specific topic about which the entity and judge must converse. One such topic was Shakespeare's plays. Apparently, the entity behind that terminal (a very personable lady) happened to know quite a bit about Shakespeare, unlike the judges.
"How useful is a judge who, as in this case, is so ignorant of and uninterested in Shakespeare that he cannot help trivializing a conversation about the playwright?"
Ouch. So, you feel human judge 1 was too stupid to be a judge of intelligence? That's rather harsh. Is it so hard to believe that a human expert's encyclopedic knowledge might be confused for a computer's? Each time a judge is fooled, will you really cry "shenanigans!" and ask for a more intelligent judge? I think you are taking this too personally, or something.
Interestingly, terminal 2 (T2) was a computer program with randomly generated responses, including: “I want to have a child someday,” “He is a sonar technician, he hunts Russian submarines,” “It came from forest fires in West Virginia,” and “I mean, have I convinced you I’m a computer yet?” At one point, T2’s remarks are so wildly incoherent that Judge 1 announces that he’s penetrated its disguise: “Okay, you’re busted! You’re no machine. You’re trying too hard, and it’s becoming obvious.”
The strategy was a mind-fuck. The judges weren't sure if humans might try to trick them into thinking they were the computers; T2 was a computer acting like a human acting like a computer. T2 was judged to be a human. This, apparently, enrages MH, who has no patience for machines that can fool humans with cheap psychological tricks.

"Overall, the performance of the judges leaves us to draw some sad conclusions about their inability to engage in sustained conversation, their lack of knowledge on general human subjects, and their need to share their personal concerns even with entities that contribute little more to the 'conversation' than a mirror does to the relief of loneliness."
Yes. Psychologists found the same thing when people played with Eliza. (For the unfamiliar, a sketch of Eliza's core trick follows.)
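Roughly, Eliza pattern-matches your sentence and reflects your own words back as a question. Here is a minimal, hypothetical sketch in Python -- these rules are invented for illustration; the real Eliza's script was far richer.

```python
import re

# Pronoun reflection table and a couple of invented rules.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please, tell me more."),           # catch-all: the "mirror"
]

def reflect(phrase):
    return " ".join(REFLECT.get(w, w) for w in phrase.split())

def eliza(utterance):
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

A few lines of pattern matching, and people pour their hearts out to it. That is the known observation MH is unwittingly replicating.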
The fact that you make this observation about the judges' interaction with T5 -- a machine whose programming is based on Eliza -- means you are regurgitating known observations about how humans interact with Eliza. Sigh... I really hope you don't think you have observed something clever. Let's continue with this pageant of unincredible discoveries:

"If it is denied that computers can, even in principle, think, then a claim is being tacitly made that humans have some special property that science will never understand -- a 'soul' or some similarly mystical entity. This is of course unacceptable to many scientists."
Not unacceptable... just not useful. Not science. It's the reason we study evolution, instead of putting down our books and going home. It's the reason we build computers that achieve a type of highly-defined intelligence, a little at a time, instead of throwing up our hands and saying "oh, let's stop -- they will never become really intelligent" (where intelligence is defined in a way that keeps it just out of reach from being anything we can pin down or test or emulate with any measured success).
I feel ashamed that I presumed earlier that MH was interested in the philosophy of science, instead of in science. Apparently, MH is actually interested in theology, not philosophy or science or any combination of the two.
The following is the entry from Arts & Letters Daily that links here:

"Failing the Turing Test. To be sure, true artificial intelligence remains a possibility. But to question if it will ever be realized does not mean wanting to return to the Dark Ages..."
Actually, if ALD had read the article, they would see that MH is trying to roll the clock back possibly even earlier than the Dark Ages.
I am so astounded by this garbage that I barely notice the article is published in some random journal called The New Atlantis, sponsored by the Ethics and Public Policy Center, both devoted to the "Judeo-Christian moral tradition" in technology and science. The other articles are all about stem cells and crap. Jesus. Get out of my science! AL Daily, stop pointing people to this! This... this... I feel sick...
1. "I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943.
"Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and weigh only 1.5 tons." -- Popular Mechanics, 1949
"I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won't last out the year." -- The editor in charge of business books for Prentice Hall, 1957.
"But what...is it good for?" -- Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip.
"There is no reason anyone would want a computer in their home." -- Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977.
"640K ought to be enough for anybody." -- Attributed to Bill Gates, 1981, but believed to be an urban legend.
2. Re: the term CS, Wilkes once said "I have been a scientist and I have been an engineer. I know the difference." For a potential rebuttal, explaining the "science" in CS, see Newell and Simon's 1975 Turing lecture "Computer science as empirical inquiry."
3. They are here: [1, 2]
4. At least, I think this is its purpose, and that it has no scientific value. Apparently, it receives some NSF funding, for some reason. Whether it has any scientific worth, and whether this is a misuse of public funds, is debated here, in a 1994 CACM article (followed by Loebner's rather crazy/angry rebuttal).