ARTIFICIAL INTELLIGENCE AND REGULATIONS FOR MACHINES

With the breakneck pace of experimentation happening in the field, artificial intelligence is fast becoming something of a Pandora’s box. Though the technology is in its infancy, examples are already emerging that suggest the need for regulation – and sooner rather than later.

Revolution in warfare

In August, 116 experts from the fields of artificial intelligence and robotics wrote a now celebrated and frequently quoted open letter to the United Nations. In it, they warned of the prospect of autonomous weapons systems developed to identify targets and use lethal force without human intervention. Signatories included the great and good of AI, among them Tesla boss Elon Musk and Mustafa Suleyman, Head of Applied AI at Google DeepMind. The letter anticipates a “third revolution in warfare” that could change conflict to the degree that gunpowder did. It states that autonomous weapons “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.” This, coupled with the risk of systems being hacked or falling into the hands of despots and terrorists, provides grounds for an early global ban, the signatories argue.

If “intelligent” weapons sound like science fiction, they are not. Around the time the technologists penned their letter, Kalashnikov Group – famous for its eponymous assault rifle – unveiled a formidable-looking AI cannon. The company says its “fully automated combat module” can spot and kill a target without a human finger on the trigger. This raises complex questions, both ethical and practical, about what limits should be placed on AI. Should robots be trusted to make life-and-death decisions? Would their choices be better than human ones? Even if democracies curtail development, will authoritarian regimes follow suit?

Whatever the answers, they need to address not just military scenarios but every other sphere in which AI could impact society: health care, transport, government and law, to name only a handful of areas where the technology is already being developed. And the answers need to come sooner rather than later.

 

Second law

Three-quarters of a century ago, science fiction author Isaac Asimov provided a useful starting point for the governance of AI with his Three Laws of Robotics: a robot may not injure a human being (or, through inaction, allow one to come to harm), must obey human orders unless they conflict with the First Law, and must protect its own existence unless doing so conflicts with the First or Second Law.

But even these simple rules will encounter difficulties when applied in the real world, according to Greg Benson, professor of computer science at the University of San Francisco. Take autonomous vehicles. “A self-driving car might have to decide between potentially harming its passengers or a greater number of pedestrians. Should the car protect the passengers at all costs, or try to minimize the total harm to humans involved, even if that means injuring people in the car?” He points out that if people knew autonomous vehicles were coded to weigh their safety equally against that of other road users, they probably wouldn’t buy one.
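To make the tension Benson describes concrete, here is a minimal sketch of the two competing policies. Everything in it is a hypothetical illustration – the harm scores, the scenarios and the policy functions are invented for the example and do not reflect how any real vehicle is programmed.

```python
# Hypothetical illustration of the two policies Benson contrasts.
# The harm numbers are invented stand-ins for whatever risk model a
# real system would use; nothing here reflects an actual vehicle's code.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    passenger_harm: int   # estimated number of occupants harmed
    pedestrian_harm: int  # estimated number of pedestrians harmed


def protect_passengers(options: list) -> Outcome:
    """Policy 1: protect the occupants at all costs."""
    return min(options, key=lambda o: (o.passenger_harm, o.pedestrian_harm))


def minimize_total_harm(options: list) -> Outcome:
    """Policy 2: minimize total harm, even if that injures the occupants."""
    return min(options, key=lambda o: o.passenger_harm + o.pedestrian_harm)


options = [
    Outcome("swerve into barrier", passenger_harm=1, pedestrian_harm=0),
    Outcome("stay on course", passenger_harm=0, pedestrian_harm=3),
]

print(protect_passengers(options).description)   # -> "stay on course"
print(minimize_total_harm(options).description)  # -> "swerve into barrier"
```

The same scenario produces opposite choices under the two policies – exactly the kind of disagreement that regulation would have to settle before such systems reach the road.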