The field of artificial intelligence doesn’t lack for ambition. In January, Google’s chief executive, Sundar Pichai, claimed in an interview that AI “is more profound than, I dunno, electricity or fire.”

Day-to-day developments, though, are more mundane. Recently Pichai stood onstage in front of a cheering audience and proudly showed a video in which a new Google program, Google Duplex, made a phone call and scheduled a hair salon appointment. The program performed those tasks well enough that a human at the other end of the call didn’t suspect she was talking to a computer. Assuming the demonstration is legitimate, that’s an impressive (if somewhat creepy) accomplishment. But Google Duplex is not the advance toward meaningful AI that many people seem to think.

If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.”

Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionise medicine, say, or to produce trustworthy robot helpers for the home.

The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of AI doesn’t yet have a clue how to do any better.

As Google concedes, the trick to making Google Duplex work was to limit it to “closed domains,” or highly constrained types of data (like conversations about making hair salon appointments), “which are narrow enough to explore extensively.” Google Duplex can have a human-sounding conversation only “after being deeply trained in such domains.” Open-ended conversation on a wide range of topics is nowhere in sight.

The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine AI is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of AI researchers in the world, vast amounts of computing power and enormous quantities of data.

The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules. A genuine, human-level AI will need to be able to cope with all of those possible sentences, not just a small fragment of them.
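The generativity the authors describe is easy to demonstrate. The toy sketch below (invented vocabulary, not any real system) uses one recursive rule — a sentence may embed another sentence after "that" — and the number of distinct sentences multiplies without bound as embedding deepens:

```python
import itertools

# Toy illustration: a handful of words plus one recursive rule already
# generate sentences without bound. All vocabulary here is invented.
SUBJECTS = ["the customer", "the stylist"]
VERBS = ["thinks", "says"]
CLAUSES = ["the salon is open", "the booking failed"]

def sentences(depth):
    """Yield every sentence with exactly `depth` levels of embedding."""
    if depth == 0:
        yield from CLAUSES
        return
    for subj, verb in itertools.product(SUBJECTS, VERBS):
        for tail in sentences(depth - 1):
            yield f"{subj} {verb} that {tail}"

for depth in range(4):
    print(depth, sum(1 for _ in sentences(depth)))
```

Each added level of embedding multiplies the count by four here; with a realistic vocabulary and grammar, the space of possible sentences dwarfs any training set.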

The narrower the scope of a conversation, the easier it is to have. If your interlocutor is more or less following a script, it is not hard to build a computer program that, with the help of simple phrase-book-like templates, can recognise a few variations on a theme. (“What time does your establishment close?” “I would like a reservation for four people at 7pm.”) But mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.
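A minimal sketch of what such a phrase-book approach amounts to (hypothetical patterns, not Duplex's actual machinery): each template recognises scripted variations, and anything off-script simply fails to match.

```python
import re

# Hypothetical phrase-book: each template maps a surface pattern to an intent.
# Real systems have many more templates, but the principle is the same.
TEMPLATES = [
    (re.compile(r"what time (do you|does your \w+) close", re.I),
     "ASK_CLOSING_TIME"),
    (re.compile(r"(a )?reservation for (?P<n>\d+)( people)? at (?P<time>\d+\s?(am|pm))", re.I),
     "BOOK_TABLE"),
]

def match_intent(utterance):
    """Return (intent, captured slots) for the first matching template, else None."""
    for pattern, intent in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return intent, {k: v for k, v in match.groupdict().items() if v}
    return None  # off-script input: the phrase book has no answer

print(match_intent("A reservation for 4 people at 7pm"))
print(match_intent("Unfortunately, we are redecorating the restaurant that week"))
```

Note the brittleness: "a reservation for 4 people" matches, but "a reservation for four people" does not, and the redecorating remark falls through entirely — exactly the non sequiturs the phrase book cannot absorb.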

Even in a closed domain like restaurant reservations, unusual circumstances are bound to come up. (“Unfortunately, we are redecorating the restaurant that week.”) A good computer programmer can dodge many of these bullets by inducing an interlocutor to rephrase. (“I’m sorry, did you say you were closed that week?”) In short stylised conversations, that may suffice. But in open-ended conversations about complex issues, such hedges will eventually get irritating, if not outright baffling.

To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.

So what should the field of artificial intelligence do instead? Once upon a time, before the fashionable rise of machine learning and “big data,” AI researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalise, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities. That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. AI researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
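In spirit, knowledge engineering looks less like pattern-matching over text and more like explicit inference over facts. A toy sketch, with invented facts and a single hand-written rule, purely for illustration:

```python
# Toy forward-chaining inference in the spirit of knowledge engineering:
# explicit facts, plus a rule that derives a new fact when its premises hold.
# Everything here is invented; real knowledge bases are vastly larger.

facts = {
    ("salon", "closed_on", "sunday"),
    ("appointment", "requested_for", "sunday"),
}

def infer(facts):
    """Apply one rule: a closure day that coincides with the requested day
    makes the appointment impossible. Return facts plus anything derived."""
    derived = set(facts)
    for (_, rel1, day1) in facts:
        for (_, rel2, day2) in facts:
            if rel1 == "closed_on" and rel2 == "requested_for" and day1 == day2:
                derived.add(("appointment", "status", "impossible"))
    return derived

print(("appointment", "status", "impossible") in infer(facts))
```

The point is not this particular rule but the architecture: the conclusion follows from represented knowledge, so the same machinery handles any day, any venue, and any new fact added later — the flexibility that pure pattern extraction lacks.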

Today’s dominant approach to AI has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable AI company, it is time to reconsider that strategy.

— New York Times News Service

Gary Marcus is a professor of psychology and neural science. Ernest Davis is a professor of computer science.