Hundreds of top AI and robotics researchers gathered last month at the Beneficial Artificial Intelligence (BAI) conference in Asilomar, California, to compile a set of 23 principles to guide research, safety, and ethics in AI development. The list of principles has been endorsed by Stephen Hawking and Elon Musk.

In 1942, the science fiction author Isaac Asimov introduced his famous Three Laws of Robotics in the short story “Runaround”. These rules, built into robots’ brains, were meant to control their behaviour:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He later added a fourth law, the Zeroth Law, to precede all the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Since then, Asimov’s Laws have become a familiar symbol of science fiction culture. As a fictional device, they are the leitmotif running through the author’s stories, guiding his robot characters and often failing to do so: in “Runaround” itself, a conflict between two of the Laws causes Speedy, a robot, to go around in circles, unable to respond to orders.

Robots might have seemed a far-off prospect when Asimov wrote the Three Laws 75 years ago, but today when we talk about robots, it is hardly science fiction we are talking about. Artificial intelligence is now virtually everywhere, from driverless cars to drones used for military purposes, and even in our phones’ voice recognition systems such as Apple’s Siri or Google Now. Which raises a fundamental question: as AI rapidly advances, what should society do to manage it well?

HAL refuses to obey astronaut David Bowman after the crew decides to disconnect it. Scene from “2001: A Space Odyssey” (1968) by Stanley Kubrick.

At the Beneficial AI conference, held last January in Asilomar, California, a group of researchers, academics, philosophers and entrepreneurs gathered to discuss the future of artificial intelligence and its implications for people’s lives. The result was a list of 23 principles ranging from research strategies to data rights to longer-term issues, including potential superintelligence.

This was the second conference on AI held by the Future of Life Institute, an organisation focused on keeping artificial intelligence beneficial and on exploring ways of reducing risks from nuclear weapons and biotechnology. Its scientific advisory board includes SpaceX and Tesla CEO Elon Musk, theoretical physicist Stephen Hawking, Nobel laureates in Physics Saul Perlmutter (2011) and Frank Wilczek (2004, for his work on the strong nuclear force), as well as the actor and science communicator Morgan Freeman.

The Asilomar AI Principles document is divided into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.

The first part of the text includes recommendations on how research should be funded and used to create beneficial intelligence, urging teams of AI developers to avoid a racing culture that could lead to “corner-cutting on safety standards”.

The Ethics and Values section features perhaps the most complex and controversial points of the list, which retained only those principles that at least 90 per cent of the conference’s attendees agreed on. In this section, scientists suggest that highly autonomous AI systems should be aligned with human values, including ideals of human dignity, rights, freedoms, and cultural diversity.

While most researchers agreed with the underlying idea of this principle, which they called Value Alignment, there was no consensus on how to put it into action. Should these values be embedded in AI systems? Can they be programmed into machines, as Asimov envisioned? Do we (humans) even agree on what those values are, or do they change with time?

The debate around the ethical and legal aspects of AI still has a long way to go, but initial discussions are already taking place in the European Union, where Members of the European Parliament are looking at whether robots should have a legal status.

The Asilomar Principles also address the increasing use of AI for warfare, warning against “an arms race in lethal autonomous weapons”. This had previously been stressed by the Future of Life Institute in 2015, when the organisation sent an open letter petitioning the UN to ban the development of offensive autonomous weapons.

In the last section of the Asilomar document, scientists draw attention to other issues that might occur in the long run, given “the profound change in the history of life on Earth” that advanced AI could represent. Society should, therefore, plan for and mitigate the risks posed by AI, they say, “especially catastrophic or existential risks”.

In addition, “strict safety and control measures” are advised for “AI systems that are designed to recursively self-improve”, and “superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation”.

So far, the Asilomar AI Principles have been signed by more than one thousand artificial intelligence and robotics researchers and some 1,900 people from other fields. The 23 Principles and the names of their signatories are available here.

Cover illustration: Isaac Asimov © 2015 Zakeena. Licensed under CC-BY
