Artificial Intelligence for 2001?
This article first appeared in Personal Computer World magazine, July 1998.
WHATEVER HAPPENED to Artificial Intelligence? 2001 is looming, but where is HAL? After some misleading and premature claims for "thinking machines" in the 70s and 80s, AI got itself something of a bad name. But according to two researchers working at opposite ends of the field, true AI really might be just around the corner.
As a science, AI began in 1956 at a US conference funded by Nelson Rockefeller. Thinking back to those early days, when computers filled entire rooms, it seems astonishing that anyone could be bold enough to suggest that human intelligence could be synthesised, or at least mimicked. There were some successes in the 60s and 70s: industrial robots, new programming methods and languages, the beginnings of computer vision, and new theories about knowledge, but the majority of AI researchers eschewed the bold claims that once made them media stars.
AI went into deep hibernation in the mid-80s, and although today its public face has all but disappeared from view, many of its ideas have quietly entered mainstream computer science. But AI is poised to enter the limelight again, and researchers are beginning to make cautious noises about creating truly intelligent systems.
According to computer scientist Douglas Lenat, before a computer can behave intelligently, it must have common sense about the world. In his view, intelligence is simply a matter of being able to reason about the world. The problem, of course, is that the world is a rather complicated place. But Lenat isn't fazed. He's building a gigantic database of common-sense facts. He calls it "Cyc".
The trouble with common sense is that it isn't written down anywhere. In order to codify it, you have to write down facts and their inter-relationships, and to avoid ambiguities do so in excruciating detail. Lenat reports that it took his team three months to write down enough knowledge so that Cyc could understand the sentence "Napoleon died on St Helena; Wellington was saddened". Think of the web of background knowledge you need to make sense of this: that Napoleon and Wellington are persons; that people die; that death is final; that people have emotions; that a person's death may cause emotions in others; that Wellington must have known of Napoleon's death -- and so on.
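To see why such a web of background knowledge is needed, here is a purely illustrative toy in Python -- not Cyc or its CycL language, and all the predicate names are invented for this sketch. It hand-codes a few of the facts above and shows one chained inference:

```python
# A toy knowledge base: facts as (subject, relation, object) triples.
# Predicate names ("is_a", "died_at", ...) are illustrative only.
facts = {
    ("Napoleon", "is_a", "person"),
    ("Wellington", "is_a", "person"),
    ("Napoleon", "died_at", "St Helena"),
    ("Wellington", "knew_of_death_of", "Napoleon"),
}

def can_be_saddened_by_death(knower, deceased, facts):
    """Chain several background facts together: people have emotions,
    people die, and knowing of a death may cause sadness."""
    return (
        (knower, "is_a", "person") in facts        # emotions require a person
        and (deceased, "is_a", "person") in facts  # only persons die, here
        and any(s == deceased and r == "died_at" for (s, r, o) in facts)
        and (knower, "knew_of_death_of", deceased) in facts
    )

print(can_be_saddened_by_death("Wellington", "Napoleon", facts))  # True
print(can_be_saddened_by_death("Napoleon", "Wellington", facts))  # False
```

Even this trivial conclusion needs four separate facts and a rule to link them -- multiplied across all of everyday knowledge, the scale of Lenat's task becomes clear.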
Common sense is also often contradictory. Lenat cites the example of vampires: we all know that Dracula was a vampire; we know that vampires come from Transylvania; that Christopher Lee is famous for his creepy portrayal of the evil Count. And yet, we also know that vampires don't exist.
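One common way to cope with such contradictions (Cyc partitions its knowledge into contexts for this reason) is to make every fact true only relative to a context, so "vampires exist" can hold in fiction and fail in reality without clashing. A minimal sketch, not Cyc's actual mechanism:

```python
# Context-tagged facts: the same question gets different answers
# depending on which body of knowledge you consult.
kb = {
    "fiction":    {("vampires", "exist"), ("Dracula", "is_a", "vampire")},
    "real_world": {("Christopher Lee", "portrayed", "Dracula")},
}

def holds(context, fact):
    """A fact holds only within a given context."""
    return fact in kb.get(context, set())

print(holds("fiction", ("vampires", "exist")))     # True
print(holds("real_world", ("vampires", "exist")))  # False
```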
The Cyc project is a mammoth undertaking. It began life in 1984 as a government-industry consortium funded to the tune of $25 million over 10 years. Now it's managed by Cycorp, Inc., a private concern based in Texas. The US Government believes in Cyc, and last year provided further funding of $1.5 million. For the last 14 years, a team of programmers have laboriously created a gigantic "knowledge base" which currently contains about ten million facts, written in a special language called "CycL". This was phase one. The second phase, now in progress, is to provide facts to Cyc in plain English. Cyc now has enough common sense to correctly figure out a good proportion of the sentences fed to it.
Lenat expects Cyc to enter its third and final phase early in the next millennium. By then, Cyc will have enough common sense to automatically read and digest everything it can get its virtual hands on -- dictionaries, encyclopaedias, novels -- and it will constantly scour the Web for new material. When it finds contradictions, ambiguities, or concepts it doesn't recognise, it will ask a human "tutor" for assistance. But most of the time, Lenat says, Cyc will quietly consume human knowledge.
Cyc is a commercial venture, but last year Cycorp Inc released a tiny part of its common sense database on the Web -- 3,000 facts which they claim "capture the most general concepts of human consensus reality".
While Cyc is undeniably a major achievement, some AI workers think Lenat is barking up the wrong tree, comparing his efforts to trying to reach the moon by constructing "a really tall building". Intelligence is not, they say, an inevitable consequence of having enough common sense. Rather, it can only arise from interacting with the world and learning from direct experience. Scientists at the Massachusetts Institute of Technology are taking this approach to its logical conclusion: their goal is to build a robot with human intelligence.
Led by Rodney Brooks, the MIT team have created an android called Cog. In some respects, Cog is humanoid: it has a trunk, a head with video cameras for eyes, and an arm with a grasping hand. Ears, touch sensors, a fully articulated hand, and a voice are on the drawing board. But it can't move around: it's fixed to a heavy iron frame, linked by cables to a rack of dedicated processors which form its brain.
Whereas Cyc seeks intelligence through top-down programming, nobody programs Cog at all. The idea is that it learns entirely through its experience, constructing its common-sense view of the world bottom-up, like a developing child. Critics have suggested that there's no need to build physical robots, since they can be easily simulated in software. But Brooks defends his engineering approach, claiming that the form of the human body is intimately connected with its internal thought, and that physical interaction with the world is crucial to the formation of true intelligence. Cog is in its infancy, but it's already attracting phenomenal interest in the AI community. It's also popular with the media, so much so that Brooks is now refusing all requests for interviews and visits to meet Cog in person.
Brooks and Lenat are worlds apart: while Lenat spoon-feeds Cyc with knowledge about the world, Cog is left alone to learn from its own experience. It's too early to tell where these approaches will lead. Burnt by their previous excessive claims, this time around the AI community is being deeply and publicly sceptical. But everyone agrees on one thing: something extremely interesting is happening in Artificial Intelligence.
Toby Howard teaches at the University of Manchester.