Already Anticipating ‘Terminator’ Ethics

TEHRAN (Tasnim) – What could possibly go wrong? That was a question that some of the world’s leading roboticists faced at a technical meeting in October, when they were asked to consider what the science-fiction writer Isaac Asimov anticipated: the need to design ethical behavior into robots.

A lot has changed since then. Generally, we have moved from the industrial era of caged robots toward a time when robots will increasingly wander freely among us. On the military front, we now have “brilliant” weapons like self-navigating cruise missiles, pilotless drones and even Humvee-mounted, tele-operated M16 rifles.

Advocates in the Pentagon make the case that these robotic systems keep troops out of harm’s way and are more effective killing machines. Some even argue that robotic systems have the potential to wage war more ethically than human soldiers do, which, of course, sounds like an oxymoron. Proponents suggest that machines can kill with less collateral damage and are less likely to commit war crimes, The New York Times wrote in its Monday edition.

All of which makes questions about robots and ethics more than hypothetical for roboticists and policymakers alike.

The discussion came during this year’s Humanoids technical conference, which focused on the design and application of robots that appear humanlike. There, Ronald C. Arkin delivered a talk titled “How to NOT Build a Terminator,” picking up where Asimov left off with his fourth law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

While he posed the ethical dilemmas effectively, he did not offer a simple solution. His intent was to persuade the researchers to confront the implications of their work.

Dr. Arkin, a veteran roboticist at the Georgia Institute of Technology whose research has included the ethics of military robots, began his talk by focusing on the Pentagon’s Defense Advanced Research Projects Agency Robotics Challenge, which asks teams to design robots capable of operating in emergency situations, like the Fukushima nuclear power plant crisis in Japan.

“We all know that that is motivated by urban seek-and-destroy,” Dr. Arkin said, only half sardonically adding, “Oh no, I meant urban search-and-rescue.”

He then showed an array of clips from sci-fi movies, including James Cameron’s 1984 “The Terminator,” starring Arnold Schwarzenegger. Each of the clips showed evil robots performing tasks that Darpa has specified as part of its robotics challenge: clearing debris, opening doors, breaking through walls, climbing ladders and stairs, and riding in utility vehicles. All have “dual use” implications, meaning they can be used constructively or destructively depending on the intent of the designer, Dr. Arkin showed.

The audience of 250 roboticists laughed nervously. “I’m being facetious,” he told them, “but I’m just trying to tell you that these kinds of technologies you are developing may have uses in places you may not have fully envisioned.”

High hopes and science fiction aside, we are a long way from perfecting a robot intelligent enough to disobey an order because it would violate the laws of war or humanity.

Yet the issue is looming. It was discussed in a fascinating but little-noted Pentagon report last year, “The Role of Autonomy in DoD Systems.” The report points out the nuances involved in automating battle systems. For example, contrary to the goal of reducing staffing, the authors wrote, an unmanned aerial combat patrol might require as many as 170 people.
