
The Turing Test has captured the imagination of the general public due to fundamental questions about the nature of the mind. But Stevan Harnad argues the hype over the supposed passing of the Turing Test is misplaced. Alan Turing’s idea for cognitive science was simple: Stop worrying about what the mind “is” and explain instead what the mind does. But we are nowhere near having designed a system that can do everything a person with a mind can do.

It has been reported that the Turing Test has been passed, 64 years after it was first proposed by Turing, because 32% of judges mistook a computer programme for a real 13-year-old Ukrainian boy called Eugene Goostman in a 5-minute test. Nothing of the sort. Really passing the Turing Test would require designing a system that has real, lifelong verbal capacity, indistinguishable from a real pen-pal, not just fooling an arbitrary percentage of interrogators in a series of 5-minute exchanges!

At the beginning of his 1950 paper Turing had written:

Turing: “[A] statistical survey such as a Gallup poll [would be] absurd [as a way to define or determine whether a machine can think]” (Turing 1950)

Taking a statistical survey like a Gallup Poll — to find out people’s opinions of what thinking is — would indeed be a waste of time, as Turing points out. Later in the paper, however, in a throwaway remark that is merely his personal prediction about progress in attempts to pass his Test, he mentions the equivalent of a statistical survey in which 30% of interrogators will be successfully fooled for five minutes:

Turing: “I believe that in about fifty years’ time it will be possible to programme computers… [to] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” (Turing 1950)

No doubt this party-game/Gallup-Poll criterion can be met by today’s computer programs — but that remains as meaningless a demographic fact today as it was when predicted 64 years ago: Like any other science, cognitive science is not the art of fooling some or most of the people for some or most of the time! The candidate must really have the generic performance capacity of a real human being — capacity that is totally indistinguishable from that of a real human being to any real human being (for a lifetime, if need be!). No short-term tricks: real lifelong performance capacity (Harnad 2008).

Turing was not only the co-inventor of the computer and the code-breaker of the Nazis’ Enigma Machine, thereby helping the Allies win World War II, but with what came to be called the “Turing Test” he also set the agenda for what would eventually come to be called “cognitive science”: the science of explaining how the mind works. Turing’s idea was simple: Stop worrying about what the mind “is” and explain instead what the mind does. If you can design a system that can do everything that a person with a mind can do – and can do it so that people cannot tell it apart from a real person – then that system will have passed the Turing Test, and the explanation of how that system works will be the explanation of how the mind works.

But the lion’s share of the enormous research agenda proposed by Turing for cognitive science is getting the system to be able to do everything a person with a mind can do. Testing whether people can tell the candidate apart from a real person only becomes relevant at the endgame, once the system already has our generic performance capacities. And we are nowhere near having designed a system that can do everything a person with a mind can do. Not even if we restrict the test to everything a mind can do verbally. (The real Turing Test will of course have to be robotic, not just verbal, because what we can do is not just what we can do with our mouths! But let’s set aside for another discussion the “symbol grounding problem” of whether computation alone can indeed do everything the mind can do.)

Image credit: Robot Noir by LittleLostRobot (Flickr, CC BY-NC)

It should be obvious from all this that the Turing Test is not – and never was – about fooling anyone, let alone fooling some people, some of the time. It is about designing – indeed “reverse-engineering” — a system that is really able to do anything an ordinary person can do, any time, as long as you like, indistinguishably from the way a real person does it. Nothing about 5-minute tests and percentages of judges that think the candidate is or isn’t a real person (although obviously eventual success can only be achieved by degrees).

The Turing Test has captured the imagination of the general public partly because of our interactions with computers that are able to do more and more things that only people with minds had been able to do. Another reason has been the growth in the number of science fiction books and movies about computers and robots that have – or seem to have – minds. But the biggest reason for the fascination is the “other minds problem” itself – the very problem that the Turing Test is meant to resolve.

We are not mind-readers. The only one I can be sure has a mind is myself; we have known that at least since Descartes’ famous “I think, therefore I am.” For all bodies other than my own, the only way I can infer whether they indeed have a mind is if they can do what minds can do. I can’t observe other minds, but I can observe what they can do. So Turing’s real insight was that Turing-testing is — and always has been — our only means of mind-reading. Hence once we have designed a system that can do anything a person with a mind can do, indistinguishably from a person with a mind, not only will we be in no better or worse a position to know whether that system really has a mind than with any other person, but we will come as close as possible to having explained how the mind works.

But that’s certainly not where we are when we have a system that can fool 30% of people for 5 minutes. And Turing certainly never said, implied or intended any such thing.

For more on this topic, see Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion and Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Stevan Harnad currently holds a Canada Research Chair in cognitive science at Université du Québec à Montréal (UQAM) and is professor of cognitive science at the University of Southampton. In 1978, Stevan founded Behavioral and Brain Sciences, of which he remained editor-in-chief until 2002. In addition, he founded CogPrints (an electronic eprint archive in the cognitive sciences hosted by the University of Southampton) and, since 1998, the American Scientist Open Access Forum. Stevan is an active promoter of open access.
