Alan Turing's legacy

Toby Howard

This article first appeared in Personal Computer World magazine, August 1997.

WAY BACK in the prehistory of computing, Alan Turing posed a question that almost 50 years later is still the subject of intense debate. It's inspired several specialist fields within computer science, fuelled endless amounts of research, and led to all kinds of misunderstandings in the media and popular culture. Turing's question: "Can a machine think?"

Turing is the scandalously unsung hero of computing. A British genius who died in tragic circumstances at the peak of his creativity, he was years ahead of his time, laying the foundation for theoretical computer science and algorithm analysis with his mathematical formulation of the "universal machine" now known as the Turing Machine. And half a century before IBM's "Deeper Blue" finally defeated Garry Kasparov, Turing invented one of the first-ever chess-playing programs, "Turochamp".

Turing was fascinated with the idea of mechanical intelligence, and conceived the idea of a test to determine whether or not a machine could think. Realising the extreme difficulty, if not impossibility, of defining the nature of "intelligence", he instead devised a test to identify intelligent behaviour. The idea was that anyone -- or anything -- that could behave as if it were intelligent could be said, in some sense, to actually be intelligent.

The test, which today remains controversial among philosophers and computer scientists, engages a judge in conversation, first with a computer and then with a human being. The judge is not told which is which, and if the judge cannot distinguish between the two in terms of "human-ness", the computer can be said to be truly intelligent.

It wasn't until 1991 that anyone actually carried out the test, when Hugh Loebner, a theatre equipment manufacturer from New York, instigated an annual contest, stumping up a $100,000 prize for the first computer program to pass the test (initially a restricted form of the test was used). Additionally, knowing that Artificial Intelligence (AI) techniques were still in their infancy and no true winner was likely for some years, Loebner added an annual $2000 prize for the computer that appeared to be the "most human" of its fellow competitors. The protocols of Loebner's contest are simple: a team of judges engage in conversations (typed at a keyboard) with each of the entrants' programs, and also with a human confederate. The judges don't know which is which, and award points according to how human-like each conversation seems.
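The scoring idea is simple enough to sketch in a few lines of code. The following is a minimal, illustrative sketch only (the real contest's rules are more involved, and the names and numbers here are invented): each judge ranks every conversant, human confederates and programs alike, by how human the conversation seemed, and the program with the best average rank takes the "most human" prize.

```python
from statistics import mean

def most_human_program(rankings, programs):
    """rankings: one dict per judge, mapping conversant name -> rank
    (1 = most human-seeming). Returns the program, among `programs`,
    with the lowest (best) mean rank across all judges."""
    scores = {p: mean(r[p] for r in rankings) for p in programs}
    return min(scores, key=scores.get)

# Hypothetical rankings from three judges over two programs and one
# human confederate; the third judge is fooled by the program.
judges = [
    {"human": 1, "Converse": 2, "OtherBot": 3},
    {"human": 1, "Converse": 2, "OtherBot": 3},
    {"Converse": 1, "human": 2, "OtherBot": 3},
]
print(most_human_program(judges, ["Converse", "OtherBot"]))  # Converse
```

A program need not fool anyone to win this prize: it only needs to seem more human than its fellow machines, which is why Loebner could award it annually while the $100,000 remained unclaimed.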

The 1997 winner of the "most human" prize was a program called Converse, written by UK computer chess expert David Levy. With just 600 KB of C++ code and 28.8 MB of data, Converse is an impressive conversationalist. Here are two extracts from its winning dialogue with two of the judges. Just for fun, we'll conceal the true identities of the speakers for a moment:

Extract one:

A: Nathan, have you ever been burgled.......?
B: Yes. I lost a great deal of very meaningful and not valuable things.
A: How long ago was that?
B: Many years ago.
A: I imagine that was pretty awful.

Extract two:

X: Who is Al Gore?
Y: Al Gore is an American Democratic politician. What was I
   saying.. oh yes... What, if anything, do you enjoy about
   being a man?
X: I get to act manlike without people thinking I am strange.
Y: Do you think of yourself as being attractive?
X: Yes to the enlightened woman.
Y: Well, that's good to know.

Did you guess which was which? In the first extract, the computer was A; in the second, it was Y. Although these exchanges are quite convincing, they should be seen in context. Levy claims that "Converse can currently converse intelligently on 60 different topics, using its knowledge about several thousand people and places", but its conversation does not always flow easily, as the following exchange shows:

Judge:    Do you need water or something before the 5 minute rounds?
Converse: OK, yes on balance... Time to get off this one I think and
          onto something more down to earth!
Judge:    Like what?
Converse: Almost certainly! let's move on

I don't mean to single out Converse; all programs of this kind share the same weakness: the lack of what we might call "common sense". Although there have been many "expert systems" covering a fairly narrow range of knowledge -- such as MYCIN, the computerised medical "expert" from the 1970s, which advised physicians on treating blood infections and meningitis -- the creation of a more generally "clever" system has remained an elusive goal.

Researchers have realised for some time that AI programs need access to the same general knowledge about the world that we effortlessly carry around in our heads, and several efforts are underway to put this in place. Since 1984, AI researcher Douglas Lenat has been working on "Cyc", which he calls "a very large, multi-contextual knowledge base and inference engine", now being marketed through Cycorp, of Austin, Texas. The idea is that Cyc will serve as a knowledge base into which AI programs can dip -- for a price, of course.

Perhaps not surprisingly, Loebner's annual machine intelligence contest is regarded as highly controversial within the professional AI community. In a field which has suffered from bloated claims and subsequent embarrassments over the years, AI researchers are especially keen to avoid any more of the damaging hype and unrealised promises that have led to massive cuts in research budgets -- and jobs. One of Loebner's fiercest critics is Stuart Shieber of Harvard University. He's written a detailed critique of the contest, to which Loebner has responded (you can read both, and much more, including transcripts of the contests, on the contest's Web pages).

And last year Marvin Minsky of the MIT Media Lab, one of the most senior figures in AI, posted to USENET a plea for Loebner to "revoke his stupid prize" and to "spare us the horror of this obnoxious and unproductive annual publicity campaign", offering to pay $100 to the first person to convince Loebner to do so. Without missing a beat, Loebner turned the tables on Minsky. Loebner has stated that he will cease his annual tests once a computer is judged to be truly intelligent; since Minsky was prepared to pay a bounty to someone to bring the contest to a close, Loebner argued, Minsky was effectively a "co-sponsor" of the contest. Which explains why Minsky is rather wickedly listed on Loebner's Web page as an official co-sponsor.

While the Turing test continues to exercise fascination, Alan Turing himself died a lonely and premature death in 1954, aged only 42. Professionally, Turing was a brilliant mathematician and logician who not only made astonishing contributions to wartime code-breaking at Bletchley Park, but also, at the Universities of Cambridge and Manchester, helped to define the electronic computer. Privately, as a homosexual in those less understanding times, Turing suffered persecution and criminal charges. After undergoing the primitive and demeaning "treatment" of his sexuality with female hormone therapy, Turing died, allegedly after eating an apple he had laced with cyanide.

Throughout his short, extraordinary life, Turing remained a futurist. He wrote in his ground-breaking 1950 paper:

	The original question "Can machines think?" I believe to
	be too meaningless to deserve discussion. Nevertheless, I
	believe that at the end of the century the use of words
	and general educated opinion will have altered so much
	that one will be able to speak of machines thinking without
	expecting to be contradicted.

As we approach the end of the century, our PCs as unreliable as ever, creaking under the weight of their enormously complicated software, with all its unsuspected and dangerous labyrinthine interactions, we might think that Turing was over-optimistic. Perhaps, but perhaps not. There are still two and a half years to go before the millennium, which in terms of modern computer developments is a very long time indeed.

All we can do is wait and see if the "intelligent machine" emerges. If it doesn't, we'll surely continue to seek it. If it does, we shall have to decide what we want it to do; and perhaps the day will come when we can even ask it what it thinks . . . about us.

Toby Howard teaches at the University of Manchester.