The “robot invasion” meme has become a staple of publishing. The storylines are familiar: robots and super-smart computers are displacing human workers in even the most complex tasks. They are anticipating our needs and making decisions for us without us even noticing. Fascination and fear are the natural responses. We are awestruck, even as we contemplate our increasing irrelevance.

Now it’s time for the sequel. If this were Hollywood, a plucky band of misfits would regroup and take the fight back to the machines. Humanity, in all its imperfection and striving, would carry the day.

Alas, the narrative that unfolds in the latest spate of robot books is not quite so simple. This isn’t Independence Day, and things most certainly won’t return to the way they were. But a more nuanced view of the effect that intelligent machines will have on our world is taking shape. It is one in which people are not entirely superseded and in which to be human is still to rise above the machines — as long as we make the right choices, and soon.

In short, it’s high time we put humans back into the robot future. When it comes to contemplating the future of work, however, this is not an easy thing to do. Trying to keep up with the rapidly advancing capabilities of the machines is like hitting a moving target. Every time someone identifies distinctly “human” characteristics that set us apart, the machines match them.

A decade ago Frank Levy, an economist at the Massachusetts Institute of Technology, predicted that complex problem-solving would give people an edge over the machines for a long time to come. That looks hopelessly optimistic these days. Machine learning — the latest iteration of artificial intelligence, which is based on highly advanced forms of pattern-recognition — is capable of all the flexible, problem-solving strategies Professor Levy had in mind. We all think, deep down, we’re special. The scary thing is, we probably aren’t.

In The Future of the Professions, father-and-son authors Richard and Daniel Susskind do a remorselessly effective job of demolishing the self-deception most people engage in when comparing themselves to machines.

For professionals like lawyers, doctors and accountants, it is particularly tempting to believe in human exceptionalism. Many might secretly admit that their hard-won specialist knowledge will soon be matched by machines. But they still like to believe they bring something more to their work. Personal judgment and adherence to a code of ethics are things that set professionals apart. There is also the all-important personal interaction: the best advisers and experts know how to anticipate their clients’ concerns and always find the right words.

Richard Susskind, a British law professor and technologist, has been thinking about this issue longer than most. Back in the 1980s, he was one of the first to set up a business using “expert systems” — software based on “decision trees” that tried to map the thinking of experts — to tackle legal problems. Real-world problems proved much too complex.
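The expert systems of that era captured specialist knowledge as hand-authored decision trees that a program would simply walk, question by question. A minimal sketch of the idea follows — the legal rules, questions and function names here are invented for illustration, not taken from Susskind’s actual systems:

```python
# A toy 1980s-style "expert system": knowledge is a hand-written decision
# tree, and the program walks it by putting questions to the user.
# The legal rules below are invented purely for illustration.

# Each internal node is (question, yes_branch, no_branch); each leaf is advice.
TREE = (
    "Is the contract in writing?",
    (
        "Was it signed by both parties?",
        "Likely enforceable; review the specific terms.",
        "Possibly unenforceable; signatures are missing.",
    ),
    "Oral contract: enforceability depends on subject matter and jurisdiction.",
)

def consult(tree, answers):
    """Walk the decision tree using a dict mapping question -> True/False."""
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if answers[question] else no_branch
    return tree

advice = consult(TREE, {
    "Is the contract in writing?": True,
    "Was it signed by both parties?": False,
})
print(advice)  # -> Possibly unenforceable; signatures are missing.
```

The brittleness is easy to see even in this sketch: every contingency must be anticipated and hand-coded as a branch, which is why, as Susskind found, real-world problems quickly overwhelmed such systems.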

These days, according to the Susskinds, most professional jobs can be “unbundled” into a set of discrete tasks. Indeed, this has been happening for some time: much legal research is already carried out by advanced natural language systems that hunt through stacks of documents far faster than any human could.

Once jobs are picked apart, there is often surprisingly little left that couldn’t be done by a machine, according to the Susskinds. Paralegals or nurse practitioners, with an intelligent machine at their fingertips, can handle work that once required more highly trained professionals — not to mention online self-service systems that replace people altogether, such as tax preparation services.

There are close parallels here with what industrialisation did to craft workers. Once tasks have been standardised and turned into repeatable routines, what is left? True, new process-based jobs emerge from these kinds of upheaval. Someone has to organise the work into sub-routines and analyse the data. But these do not seem as attractive as the jobs they replace, and may themselves eventually be automated.

Like many who have thought about these areas, the Susskinds come up with one quality that seems uniquely human: empathy. Stripping the human interaction out from other aspects of professional work, they suggest that there may be a role for full-time empathisers who can deal with the deeply interpersonal work that computers can’t handle.

Any quest for the uniquely human, however, can lead to some disturbing realisations. One is that machines might often be better at empathising than people are — if empathy is defined as recognising a person’s emotional needs and adjusting behaviour accordingly. “Affective” computing, the work of machines that observe humans and analyse their emotional state, is already an established branch of the science, as the Susskinds note.

Even when machines aren’t trying to read our moods — something that many people would find deeply creepy, if they knew it was happening — they are still coming to be seen as an acceptable replacement for the personal touch. Since the first experiments involving people and rudimentary “thinking” machines, AI researchers have found that many people would actually prefer to deal with computers that appear to understand them than with other human beings. And even if human interaction is something we all crave, there’s no reason we should seek it from a surgeon or a tax lawyer — let alone a call-centre employee or a checkout assistant. If all those jobs can be done effectively by machines, then we should take our quest for fellow-feeling elsewhere.

Geoff Colvin, a senior editor at Fortune magazine, also looks for hope in the softer side of human nature. In Humans are Underrated, he makes the case that there is no point trying to beat machines at their own game. Computers may not actually think, but they do a very good job of using massive number-crunching to emulate our cognitive functions. Any job that relies on applying the grey matter is in jeopardy.

The irony here is that the spread of IT has brought huge demand for analytical skills. In education, science and technology are all the rage. These, though, are the very jobs that machines are best at copying. Learning how to code may be exactly the wrong response to the spread of computing, since this is the kind of work the computers will eventually do for themselves (which provokes an entirely different set of anxieties).

What makes people special, according to Colvin, is their inbuilt propensity for social interaction. We work well in groups — communicating, collaborating and, yes, empathising. That puts women in the driving seat, he says. They are better listeners and communicators. Our best hope lies in what makes us most different from the logic-processors.

But in the long term, do groups of people have any more hope in the race against the machine than individuals of the species? Colvin’s response is that only humans can work out how to satisfy human needs, since they are the best at identifying what other humans care most about.

It would be nice to think he’s right. But technology has often defined our wants and even our sense of our own identity. In a world of Facebook, Instagram and Twitter, our relationships — and our self-conception — are reduced to likes, selfies and tweets. In this era, Colvin himself notes, true empathy is in short supply. This is not simply a case of people shaping their technological tools, but of the tools shaping us.

So if the robots are heading, inevitably, for a more central role in our lives, how do we learn to live with them? In the popular imagination, robots are almost akin to an alien life-form. Following their own inbuilt logic, they veer all too easily from the wishes and desires of their human creators. In the triumph of technology, as automation takes over, the human is too easily forgotten.

This is the risk identified by New York Times technology reporter John Markoff. But in his engaging and informative history of robotics and artificial intelligence, Machines of Loving Grace, Markoff argues that this is not the way it has to be. He identifies a common fallacy. Since the onward advance of technological capability is inexorable, the outcome is often seen as inevitable: anything that can be built, will be built. But just because machines are becoming more powerful, says Markoff, it doesn’t mean we have to give them dominion over the most important spheres of human existence.

He describes the future, instead, as a choice. It is one that he thinks most technologists are ducking: they can either treat the advancing technologies of robotics and AI as replacements for human thought and action — or they can see them as tools to enhance the human. It is a choice between “AI” (artificial intelligence, or making people redundant) and “IA” (intelligence augmentation, or making people smarter).

As a long-term reporter on Silicon Valley and a tech historian, Markoff is well-placed to describe how we came to this point. In his telling, the heroes are the humanists who have always seen technology as a tool to be put at the service of people. They include Doug Engelbart and Alan Kay, two of the pioneers of personal computing, as well as Terry Winograd, the Stanford University professor who became disillusioned with AI and who, as thesis adviser to Google co-founder Larry Page, had a powerful influence on Page’s early thinking. According to Markoff, Google now stands on the cusp of the choice that will define the future of technology — and, perhaps, of the human race. Its search engine is one of the glories of IA, extending the power of the individual in a way that would have been almost incomprehensible two decades ago. But Google is now bent on AI, in the form of driverless cars and an attempt to build a super-intelligent machine, dubbed Google Brain. If it succeeds, humans may no longer be in the equation.

This is also the issue that occupies David Mindell in Our Robots, Ourselves. Mindell brings an altogether refreshing perspective to a field that can sometimes get lost in the “what if”. As an aeronautical engineer, pilot and expert in undersea exploration, he has worked alongside autonomous machines and can report back on the experience.

If you want to understand how robots will change our world, he says, then look to where they are already being pushed to their limits: in the extremes of underwater or space exploration. In Mindell’s examples, robots aren’t “other”: they respond to their human controllers. Even supposedly fully autonomous systems embody the values of their designers. If we want to understand robots, he says, we should first look into ourselves.

This is all well and good. But the quest for a humanistic response to the rise of the robots faces two big challenges. The first is the difficulty of managing increasingly complex human/machine interactions. The machines are our tools, but we may simply not be able to control them any more.

Mindell describes one such cautionary tale. High over the Atlantic, two inexperienced Air France pilots were caught off guard when their autopilot disengaged and handed back control for an entirely routine reason. They misread the situation and pulled the aircraft into a stall, crashing into the ocean with the loss of everyone on board.

This handover problem between machines and people is about to become a very real one for “driverless” cars. In their first incarnation, these will not be entirely driverless: they will hold their lanes on freeways, edge along in slow-moving traffic and park themselves. But when driving conditions become more complex, they will need to hand back control to their flesh-and-blood chauffeurs. It is hard to see how a human, face buried in a smartphone, would be able to respond in time. Faced with this kind of problem, one response is to design people out of the system altogether. That is what Google has decided to do with its all-in bet on an entirely driverless future. The motivation to build machines that replace us rather than enhance us is understandable, even if it does make humans beside the point.

The second challenge is economic. It isn’t so much an issue of the technology but rather the system in which it is designed. If the return on investment from a fully automated process is greater than one that combines both people and machines, then replacing workers with technology is a no-brainer. It’s all very well asking the technologists to think twice before they act, but if the incentives are skewed heavily towards full automation, why would they hesitate?

Yet this is a moment that won’t recur. To plunge into this future without a second thought would be to ignore a historic responsibility. As Markoff and Mindell both suggest, it is not enough to divorce the technology from the human. We created the machines in our image, and they will in turn shape our image of ourselves.

In the final analysis, the robots are us. We need to decide how we want to live with them — before it’s too late.

–Financial Times