Stanford, California: In factories and warehouses, robots routinely outdo humans in strength and precision. Artificial intelligence software can drive cars, beat grandmasters at chess and leave Jeopardy! champions in the dust.

But machines still lack a critical element that will keep them from eclipsing most human capabilities anytime soon: a well-developed sense of touch.

Consider Dr. Nikolas Blevins, a head and neck surgeon at Stanford Health Care who routinely performs ear operations requiring that he shave away bone deftly enough to leave an inner surface as thin as the membrane in an eggshell.

Blevins is collaborating with roboticists J. Kenneth Salisbury and Sonny Chan on designing software that will make it possible to rehearse these operations before performing them. The programme blends X-ray and magnetic resonance imaging data to create a vivid three-dimensional model of the inner ear, allowing the surgeon to practice drilling away bone, to take a visual tour of the patient’s skull and to virtually “feel” subtle differences in cartilage, bone and soft tissue. Yet no matter how thorough or refined, the software provides only the roughest approximation of Blevins’ sensitive touch.

“Being able to do virtual surgery, you really need to have haptics,” he said, referring to the technology that makes it possible to mimic the sensations of touch in a computer simulation.
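In broad strokes, haptic rendering of tissue comes down to assigning each material in the volumetric model a mechanical stiffness and pushing back against the virtual tool in proportion to how far it has penetrated. Here is a toy sketch of that idea; the stiffness values and names are invented for illustration, and real tissue models are far more involved:

```python
# Toy sketch of material-dependent haptic feedback in a volumetric model.
# Stiffness values are invented for illustration, not real tissue data.

TISSUE_STIFFNESS = {       # N/m, assumed
    "soft_tissue": 100.0,
    "cartilage": 800.0,
    "bone": 5000.0,
}

def feedback_force(material: str, penetration_m: float) -> float:
    """Spring-like resistance: stiffer materials push back harder,
    which is what lets a user 'feel' bone versus cartilage."""
    return TISSUE_STIFFNESS[material] * penetration_m

for material in TISSUE_STIFFNESS:
    # Force felt at 1 millimetre of tool penetration.
    print(material, feedback_force(material, 0.001), "N")
```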

The software’s limitations typify those of robotics, in which researchers lag in designing machines to perform tasks that humans routinely do instinctively. Since the first robotic arm was designed at the Stanford Artificial Intelligence Laboratory in the 1960s, robots have learnt to perform repetitive factory work, but they can barely open a door, pick themselves up if they fall, pull a coin out of a pocket or twirl a pencil.

The correlation between highly evolved artificial intelligence and physical ineptness even has a name: Moravec’s paradox, after robotics pioneer Hans Moravec, who wrote in 1988, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility.”

Advances in haptics and kinematics, the study of motion control in jointed bodies, are essential if robots are ever to collaborate with humans in hoped-for roles like food service worker, medical orderly, office secretary and health care assistant.

“It just takes time, and it’s more complicated,” Ken Goldberg, a roboticist at the University of California, Berkeley, said of such advances. “Humans are really good at this, and they have millions of years of evolution.”

 

Touch Impulses

Touch is a much more complicated sense than one might think. Humans have an array of organs that allow them to sense pressure, shear forces, temperature and vibrations with remarkable precision. (And German researchers have shown that raccoons have evolved the animal world’s most sophisticated brain functions to process touch impulses in the dark.)

Research suggests that our sense of touch is actually several orders of magnitude finer than previously believed. Last fall, for example, Swedish scientists reported in the journal Nature that dynamic human touch — for example, when a finger slides across a surface — could distinguish ridges no higher than 13 nanometres, or about 0.0000005 of an inch.

That is the scale of individual molecules. Or as Mark Rutland, a professor of surface chemistry at the KTH Royal Institute of Technology in Sweden, put it, if your finger were as big as the Earth, it could feel the difference between a car and a house.

Physiologists have shown that the interaction between a finger and a surface is detected by organs called mechanoreceptors, which are embedded at different depths in the skin. Some are sensitive to changes in an object’s size or shape and others to vibrations.

In the case of tiny surface variations, cues come from Pacinian corpuscles, oval-shaped structures about 1 millimetre long (one twenty-fifth of an inch) that signal when they are deformed.
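The figures above are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, assuming a finger width of roughly 1.5 centimetres (a round number, not taken from the research):

```python
# Back-of-the-envelope check of the touch-sensitivity figures above.
# The finger width is an assumed round number, not from the article.

INCH = 0.0254              # metres per inch
EARTH_DIAMETER = 1.2742e7  # metres
FINGER_WIDTH = 0.015       # metres (assumed)

ridge = 13e-9              # 13 nanometres, in metres
print(f"13 nm = {ridge / INCH:.7f} inch")              # ~0.0000005 inch

scale = EARTH_DIAMETER / FINGER_WIDTH                  # ~8.5e8
print(f"Ridge at Earth scale: {ridge * scale:.1f} m")  # ~11 m, house-sized

corpuscle = 1e-3           # Pacinian corpuscle, ~1 millimetre
print(f"1 mm = 1/{INCH / corpuscle:.0f} inch")         # ~1/25 inch
```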

Replicating that sensitivity is the goal of haptics, a science that is playing an increasing role in connecting the computing world to humans. One of the most significant advances in haptics has been made by Mako Surgical, founded in 2004 by roboticist Rony Abovitz. In 2006, Mako began offering a robot that provides precise feedback to surgeons repairing arthritic knee joints.

“I thought haptics was a way to combine machine intelligence and human intelligence in a way that the machine would do what it was good at and the human would do what the human was good at, and there was this really interesting symbiosis that could come about,” Abovitz said, adding:

“The surgeon still has the sense of control and can put the energy into the motion and push. But all of the intelligent guidance and what you thought the surgeon would normally do is done by the machine.”
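Guidance of the kind Abovitz describes is often implemented as what roboticists call a virtual fixture: the tool moves freely inside the planned cutting region and meets spring-like resistance beyond it. A schematic one-dimensional sketch of the idea, not Mako’s actual control code, with the gain invented for illustration:

```python
# Schematic "virtual fixture": free motion inside the planned boundary,
# spring-like resistance outside it. The 1-D geometry and the gain are
# simplifications for illustration, not Mako's actual design.

WALL_STIFFNESS = 3000.0   # N/m, assumed restoring gain at the boundary

def guidance_force(tool_pos: float, boundary: float) -> float:
    """Positions below `boundary` are inside the planned cut region.
    Inside it the surgeon feels nothing; past it, a restoring force."""
    overshoot = tool_pos - boundary
    return -WALL_STIFFNESS * overshoot if overshoot > 0 else 0.0

print(guidance_force(0.004, 0.005))  # inside the plan: 0.0 N
print(guidance_force(0.006, 0.005))  # 1 mm past the boundary: -3.0 N
```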

 

Robotic Dangers

Even in industries where robots are entrenched, experts worry about the dangers they pose to the people who work alongside them. Robots have caused dozens of workplace deaths and injuries in the United States; if a robot revolution is ever to take place, scientists will have to create machines that meet exacting safety standards — and do it inexpensively.

“For the last 30 years, industrial robots have focused on one metric: being fast and cheap,” said Kent Massey, the director of advanced programmes at HDT Global, a robotics firm based in Solon, Ohio. “It has been about speed. It’s been awesome, but a standard arm today is precise and stiff and heavy, and they’re really dangerous.”

Massey’s company is one of a number of robot-arm designers that are beginning to build safer machines. Rethink Robotics in Boston and Universal Robots in Denmark have built “compliant” robots that sense human contact. The Universal system uses a combination of sensors in its joints and software, while the Rethink robot uses “series elastic actuators”, essentially springs in the joints that mimic the compliance of human muscles and tendons, as well as acoustic sensors that let the robot slow down when humans approach.
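The appeal of series elastic actuation is that the spring doubles as a force sensor: by Hooke’s law, the torque on a joint can be read directly from how far the spring has deflected, so unexpected contact with a person shows up immediately. A minimal sketch, with the constants and threshold invented for illustration:

```python
# Minimal sketch of series-elastic force sensing and limiting.
# Constants are hypothetical, not Rethink's actual design.

SPRING_STIFFNESS = 300.0   # N*m/rad, assumed torsional spring constant
TORQUE_LIMIT = 5.0         # N*m, above which we assume unexpected contact

def joint_torque(motor_angle: float, joint_angle: float) -> float:
    """Hooke's law: torque is proportional to spring deflection."""
    return SPRING_STIFFNESS * (motor_angle - joint_angle)

def control_step(motor_angle: float, joint_angle: float) -> str:
    torque = joint_torque(motor_angle, joint_angle)
    if abs(torque) > TORQUE_LIMIT:
        # Unexpected resistance: likely contact with a person or object.
        return "stop"
    return "continue"
```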

Beyond advances necessary for basic safety, scientists are focusing on more subtle aspects of touch. Last year, researchers at Georgia Tech reported in the journal Science that they had fabricated bundles of tiny transistors called taxels to measure changes in electrical charges that signal mechanical strain or pressure. The goal is to design touch-sensitive applications, including artificial skin for robots and other devices.

Much research is focusing on vision and its role in touch. The newest da Vinci Xi, a surgery system developed by Intuitive Surgical Inc., uses high-resolution 3-D cameras to enable doctors to perform delicate operations remotely, manipulating tiny surgical instruments. The company focused on giving surgeons better vision, because the necessary touch for operating on soft tissue like organs is still beyond the capability of haptics technology.

Curt Salisbury, a principal research engineer at SRI International, a non-profit research institute, said that while surgeons could rely on visual cues provided by soft tissues to understand the forces exerted by their tools, there were times when vision alone would not suffice.

“Haptic feedback is critical when you don’t have good visual access,” he said.

Other researchers believe that advances in sensors that more accurately model human skin, as well as algorithms that fuse vision, haptics and kinematics, will lead to vast improvements in the next generation of robots.

One path is being pursued by Eduardo Torres-Jara, an assistant professor of robotics at Worcester Polytechnic Institute in Massachusetts, who has defined an alternative theory he describes as “sensitive robotics.” He has created a model of robotic motion, grasping and manipulation that begins with simply knowing where the robot’s feet or hands meet the ground or an object. “It is all about recognising the tactile events and understanding that very well,” he said. Using biologically inspired artificial skin that can detect tiny changes in magnetic forces, he has built a two-legged walking robot that is able to balance and stride by measuring changing forces on the bottoms of its feet.
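A common way to turn raw foot-force readings into a balance signal is to compute a centre of pressure, the force-weighted average of the contact positions. The sketch below illustrates that general idea; it is not Torres-Jara’s actual algorithm:

```python
# Sketch: centre of pressure from foot force sensors. This illustrates
# the general idea of balancing from tactile events, not the actual
# algorithm used in Torres-Jara's robot.

def center_of_pressure(sensors: list[tuple[float, float]]) -> float:
    """Each sensor is (position_along_foot_m, force_N).
    Returns the force-weighted average position."""
    total_force = sum(force for _, force in sensors)
    if total_force == 0:
        raise ValueError("no ground contact")
    return sum(pos * force for pos, force in sensors) / total_force

# If the centre of pressure drifts toward the toe, the robot is tipping
# forward and can shift its weight or take a step to compensate.
readings = [(0.00, 40.0), (0.10, 55.0), (0.20, 80.0)]  # heel to toe
print(f"CoP: {center_of_pressure(readings):.3f} m from heel")
```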

If improving tactile performance depends on greater computing power, help may be on the way. Goldberg, the Berkeley roboticist, has begun designing cloud-based robotic systems that can tap vast pools of computing power via the Internet.

“I’m very excited about the idea of cloud robotics,” he said. “It is lifting the limitation of computing that we’ve always had.”

In July, roboticists at Brown, Cornell, Stanford and Berkeley described a database called Robo Brain, sponsored by the National Science Foundation, that is intended to offer an Internet-based repository of images and videos to give robots support for performing actions in the physical world. For example, information on how to identify, grasp and carry a coffee mug would be accessible to any robot or robotic arm connected to the Internet.
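The researchers’ interface is not described here, but from a robot’s side such a lookup might resemble an ordinary web query. A purely hypothetical sketch, with the endpoint and response fields invented for illustration rather than drawn from Robo Brain’s actual API:

```python
# Purely hypothetical sketch of a robot querying a cloud knowledge base
# for grasp information. The endpoint and response fields are invented;
# this is not Robo Brain's actual interface.
import json
from urllib.request import urlopen

def fetch_grasp_plan(object_name: str) -> dict:
    url = f"https://example.org/robobrain/grasps?object={object_name}"
    with urlopen(url) as response:
        return json.load(response)

# e.g. plan = fetch_grasp_plan("coffee_mug")
# A plan might include an approach vector and gripper width, letting any
# connected arm reuse knowledge it never learned locally.
```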

Other haptics researchers believe that artificially replicating touch will have a powerful effect on the development of autonomous robots, as well as systems that augment humans.

Last fall, Allison Okamura, an associate professor of mechanical engineering at the Laboratory for Collaborative Haptics and Robotics in Medicine at Stanford, taught an online course in haptics. Students assembled “hapkits” designed by Stanford education professor Paulo Blikstein, then programmed them to create virtual devices like springs and dampers that could be manipulated as if they were in the real world.
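Rendering a virtual spring or damper is the classic first exercise in haptics: read the handle position, compute the force F = -kx - bv, and command it to the motor, typically around a thousand times per second. A minimal sketch, in which the two device-I/O callbacks stand in for whatever a kit’s firmware actually provides:

```python
# Minimal haptic-rendering sketch: a virtual spring-damper.
# read_position_m() and command_force_n() are hypothetical placeholders
# for device I/O; gains are invented for illustration.
import time

STIFFNESS = 200.0   # k, N/m   (spring)
DAMPING = 2.0       # b, N*s/m (damper)
LOOP_HZ = 1000      # haptic loops typically run near 1 kHz

def render_spring_damper(read_position_m, command_force_n):
    prev_pos = read_position_m()
    prev_t = time.monotonic()
    while True:
        pos = read_position_m()
        now = time.monotonic()
        dt = max(now - prev_t, 1e-6)          # guard against zero interval
        vel = (pos - prev_pos) / dt           # finite-difference velocity
        command_force_n(-STIFFNESS * pos - DAMPING * vel)  # F = -kx - bv
        prev_pos, prev_t = pos, now
        time.sleep(1.0 / LOOP_HZ)
```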

The students followed with new projects, tweaking the hardware and sharing programmes they had created. Okamura said their enthusiasm was understandable.

“If you have all these senses — vision, hearing, taste, touch and smell — and someone took them away from you one by one, which is the last one you would give up?” she asked. “Almost everyone says vision, but for me, it would be touch.”