
The Use of AI in Weapons Systems – The UK and US Legal and Regulatory Framework

15 October 2017

A few weeks ago I was invited to speak at the DSEI Defence Conference (see https://www.dsei.co.uk/dsei-press-releases/dsei-2017-strategic-conferences#/). As a tech lawyer, the defence sector is not my natural environment, but speaking at the conference gave me the opportunity to spend some time thinking about the important issue of the use of AI in weapons systems and the legal and regulatory framework in which weapons systems operate.

This note is a summary of my presentation at the DSEI conference.

In any business sector – including the defence sector – the introduction of AI requires consideration of the current and future legal environment, both to assess whether AI can lawfully be introduced into that environment and to assess how the law should manage and control its usage in that sector once it has been introduced.

As I have previously commented in this blog (see http://digitalbusiness.law/2017/02/do-we-need-robot-law/) the two key questions are:

(1) Are facilitative changes to the law needed in order to allow the use of AI in a particular business sector? In effect: what changes to the law are needed to enable the use of AI in that sector?

(2) Are changes to the law needed to manage the operation of AI in a particular business sector? Are there any special features about the use of AI in that sector that mean such usage should be specifically regulated?

Of course, the legal regulatory framework does not exist in a political or social vacuum. If changes to the law are needed in order to facilitate the introduction of AI into a particular business sector, then the changes need to be assessed politically, socially – and increasingly – ethically.

Political Level

Within the field of weapons systems AI this takes us straight into the recent political-level (small “p”) debates led by a significant number of high-level AI researchers and individuals – such as Stephen Hawking and Elon Musk – on the need for a ban on the introduction of AI into the battlefield environment. In July 2015 attendees at the International Joint Conference on Artificial Intelligence, including Stephen Hawking, Elon Musk (Tesla and SpaceX) and Steve Wozniak (Co-Founder of Apple), signed a letter calling for a ban on the development and operation of offensive autonomous weapons that operate beyond “meaningful human control”. In August 2017 a follow-up letter from the founders of leading AI and robotics companies urged the United Nations to ban lethal autonomous weapons (see https://futureoflife.org/2017/08/20/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/).

They suggest that the dangers from battlefield AI outweigh the benefits. They see a risk of a global AI arms race if one country pushes ahead with AI weapon development – with the result that AI weapons will end up “in the hands of terrorists, dictators wanting to control their populations, warlords wishing to perpetrate ethnic cleansing, etc.” and that, “unlike nuclear weapons, they require no costly or hard-to-obtain raw materials – they will become ubiquitous and cheap for all significant military powers to mass produce”.

The timing of the August 2017 letter is significant. It was designed to put pressure on the United Nations Group of Governmental Experts, which was due to meet in August 2017 to start a review of the Convention on Certain Conventional Weapons in connection with the development of battlefield AI systems. The August meeting was cancelled and is due to be reconvened in November.

Also, continuing the theme of political and ethical debate, earlier this year a conference of AI specialists identified 23 principles designed to ensure that AI remains a “force for good”. These principles were developed at the Asilomar Conference in California and are being referred to as the Asilomar Principles; they cover research issues, ethics and values, and longer-term issues (see https://futureoflife.org/ai-principles/).

They include a requirement that “an arms race in lethal autonomous weapons should be avoided” and, on the issue of human control, that “humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives”.

UK Government Policy

Moving on to the UK: the Government’s policy on battlefield AI systems has been discussed on a number of occasions in the House of Lords.

In 2013 Lord Astor (as Parliamentary Under-Secretary of State for Defence) said that “the United Kingdom does not have fully autonomous weapon systems. Such systems are not yet in existence and are not likely to be for many years, if at all. There are currently a limited number of naval defensive systems that could operate in automatic mode, although there would always be naval personnel involved in setting the parameters of any such operation” (see https://hansard.parliament.uk/Lords/2013-03-26/debates/13032658000808/ArmedForcesAutonomousWeaponSystems).

In terms of regulation and control, Lord Astor commented: “I must emphasise that any type of weapon system would be used only in strict adherence with international humanitarian law”, and that he wanted to be “absolutely clear that the operation of weapons systems will always—always—be under human control”.

In December 2016, similar comments were made by Earl Howe (as Minister of State for Defence) in response to questions by Lord Judd and Lord West.

In a letter dated January 2017 (see http://www.article36.org/autonomous-weapons/uk-govt-response-2017/), the FCO stated in response to a letter from the Article 36 group:

“The UK does not support a pre-emptive ban on such systems” and that the Government considers that “existing International Humanitarian Law is sufficient to control and regulate Lethal Autonomous Weapons systems. Whatever the characteristics of such weapons they would not be capable of satisfying International Humanitarian Law in the critical areas of proportionality and discrimination, and it is therefore highly likely that they would de facto be illegal under existing regulations”.

So the Government’s position is that there is no need for additional UK law to regulate the introduction and operation of battlefield AI systems: International Humanitarian Law is sufficient in this field.

International Humanitarian Law

What are the key features of applicable International Humanitarian Law in this context?

Firstly – weapons prohibited by their nature: weapons that cause excessive injury or unnecessary suffering serving no military purpose are prohibited. Unnecessary suffering, in this context, refers primarily to the effect of such weapons on combatants. Weapons in this category include poisoned weapons, chemical and biological weapons and blinding laser weapons.

Since robotic weapons do not inherently cause excessive injury or unnecessary suffering, it may be reasonable to say that they are not illegal due to their nature, unless they serve as delivery platforms for prohibited weapons.

Secondly – the principle of distinction requires that weapons be aimed only at specific military targets.

Weapons that are not capable of discriminate use – of being aimed at a specific military target – are unlawful. Weapons that are inherently indiscriminate could include: long-range missiles with very rudimentary guidance systems; biological weapons that spread contagious diseases; and anti-personnel mines.

It has been argued that autonomous AI weapons, for the foreseeable future, will not be able to distinguish between combatants/military targets and civilians. Is this true? Even if this were generally the case, it does not necessarily mean that autonomous robotic weapons are indiscriminate by nature; while they may, in some contexts, be highly inaccurate, there are some environments in which these weapons could be used lawfully.

These might include remote areas where no civilians are present, such as the high seas, deserts and outer space. As a result, it is possible to argue that these weapons cannot be banned on grounds that they are indiscriminate by nature.
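To make the distinction requirement concrete, here is a minimal, purely illustrative sketch in Python of how an engagement gate based on the principle of distinction might be encoded. Every name and threshold here is hypothetical – real targeting logic would be vastly more complex and context-dependent – but it shows the shape of the rule: no engagement without a positive, high-confidence classification of a military target in an environment free of civilians.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected object, with hypothetical classifier outputs."""
    classification: str  # e.g. "military_vehicle", "civilian", "unknown"
    confidence: float    # classifier confidence in [0, 1]

def distinction_gate(contact: Contact,
                     civilians_in_area: bool,
                     min_confidence: float = 0.99) -> bool:
    """Permit engagement only if it would be consistent with the
    principle of distinction under these (hypothetical) rules."""
    if civilians_in_area:
        return False  # never engage where civilians are present
    if contact.classification != "military_vehicle":
        return False  # only positively identified military targets
    return contact.confidence >= min_confidence

# In a permissive environment (the high seas, a desert), the gate can
# be satisfied; in a populated area it never is.
print(distinction_gate(Contact("military_vehicle", 0.995), civilians_in_area=False))  # True
print(distinction_gate(Contact("military_vehicle", 0.995), civilians_in_area=True))   # False
```

The sketch also illustrates the point made above: the lawfulness of such a system turns on the reliability of the classifier, which is exactly the contested question.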

Thirdly – the rule of proportionality prohibits an attack if the ensuing civilian harm is excessive in relation to the concrete and direct military advantage anticipated by the attack. An attack may become illegal if excessive collateral damage affecting civilians or civilian objects is to be expected. The application of the rule of proportionality in practice leads to a number of intricate and value-laden questions. For example, what is the value of a military objective relative to the likely civilian casualties? How many casualties are acceptable in order to eliminate, say, an enemy tank or supply bridge? What is the likelihood that the destruction of a bridge is going to lead to casualties in the school nearby?
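Reduced to pseudo-arithmetic, the rule looks deceptively simple. The hypothetical Python sketch below makes the point: the comparison itself is trivial, and everything difficult is hidden in producing commensurable numbers for its two inputs – which is precisely where the value judgements discussed next come in.

```python
def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float) -> bool:
    """Naive reduction of the rule of proportionality: an attack is
    impermissible if the expected civilian harm is excessive relative
    to the concrete and direct military advantage anticipated.
    The comparison is trivial; quantifying the two inputs is not --
    in practice they are contextual value judgements, not numbers."""
    return expected_civilian_harm <= anticipated_military_advantage

# What number represents "one enemy tank destroyed"? What number
# represents "risk of casualties in the school near the bridge"?
# No agreed scale exists; the values below are placeholders.
print(proportionality_check(expected_civilian_harm=0.2,
                            anticipated_military_advantage=1.0))  # True
```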

Answering these questions requires a number of value judgements that are highly contextual. Questions have therefore been raised as to whether AI weapons systems can be developed to take account of the indefinite number of situations in armed conflict that involve value judgements. Making the relevant judgements requires considerable experience, and military personnel spend a great deal of time in training learning how to make those decisions and calculations.

Of course, military personnel sometimes get these decisions and calculations wrong, with highly negative humanitarian consequences. Even extensive training cannot guarantee that the military never makes the wrong decision. As with many areas of AI, for the case for the introduction of AI weapons systems to succeed, it needs to be shown that autonomous robotic weapons could outperform humans – that they would be more reliable in making the necessary value judgements than trained and experienced humans.

The key problem is that it is simply not clear how machines, even if AI programming techniques improve considerably, could make the necessary value judgements. However, in other fields, AI solutions involving value judgements and complex factual circumstances are moving ahead very quickly. Systems are being developed to provide AI solutions for medical and legal diagnosis. At the moment it is not clear whether these will become sufficiently powerful to replace human diagnosis in these areas or whether they will simply become tools to assist and supplement human diagnosis. However, the fact that a solution requires complex human value judgements no longer means that it is immune to replacement by AI.

Finally, there is a general obligation for states to ensure that the employment of new weapons, means or methods of warfare complies with the rules of international law.

Under Article 36 of Additional Protocol I to the Geneva Conventions (API), the obligation to conduct reviews applies to every State, whether or not it is party to API, and whether or not it develops and manufactures weapons itself or purchases them.

This legal review obligation stems from the general principle that holds that a State’s right to choose the means and methods of warfare is not unlimited. More precisely, the aim of Article 36 is:

“To prevent the use of weapons that would violate international law in all circumstances and to impose restrictions on the use of weapons that would violate international law in some circumstances, by determining their lawfulness before they are developed, acquired or otherwise incorporated into a State’s arsenal”.

Clearly, all AI weapons systems should be subject to regular legal reviews.
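As an illustration of how the review obligation ties together the three rules discussed above, the hypothetical Python sketch below records an Article 36 review as structured data. The criteria are a drastic simplification for illustration only, not an official checklist.

```python
from dataclasses import dataclass

@dataclass
class Article36Review:
    """Hypothetical record of an Article 36 legal review of a new
    weapon. The criteria are an illustrative simplification."""
    weapon_name: str
    prohibited_by_nature: bool         # excessive injury / unnecessary suffering?
    capable_of_discriminate_use: bool  # principle of distinction
    proportionality_assessable: bool   # can effects be anticipated and weighed?

    def may_be_fielded(self) -> bool:
        return (not self.prohibited_by_nature
                and self.capable_of_discriminate_use
                and self.proportionality_assessable)

review = Article36Review(
    weapon_name="hypothetical autonomous system",
    prohibited_by_nature=False,
    capable_of_discriminate_use=False,  # the contested question for AI systems
    proportionality_assessable=False,
)
print(review.may_be_fielded())  # False
```

For an AI system whose behaviour changes with each software update, the natural consequence is that such a review cannot be a one-off event: it would need to be repeated throughout the system’s life.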

The US Approach

In the US a rather more specific framework for AI weapons systems has already been introduced. In November 2012 the US Department of Defense issued Directive 3000.09 (updated earlier this year) (see http://www.esd.whs.mil/DD/), establishing policy for the “design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.”

It was a first attempt at establishing policy prescriptions and demarcating lines of responsibility for the creation and use of semi-autonomous, “human supervised” and fully autonomous weapons systems. In layman’s terms, it attempts to answer the who, what, when, where and how of autonomous systems in military combat.

The Directive sets out reasonably clear lines of responsibility for system development, testing and evaluation, equipment/weapons training, as well as developing doctrine, tactics, techniques and procedures. The explicit purpose of the Directive is to establish guidelines to “minimize the probability and consequences of failures in autonomous and semi-autonomous weapons systems that could lead to unintended engagements.” These “unintended engagements” refer to “the use of force resulting in damage to persons or objects that human operators did not intend to be the targets of US military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE, and commander’s intent.”

Of course, the Directive raises as many questions as it answers. One of the key worries is the extent to which the policy could be avoided in certain circumstances. It has been argued that the Directive erodes the notion of “proper authority” and gives cause for concern in terms of US military compliance with the international legal framework.


Concluding Thoughts

In conclusion: in my view the UK Government is correct that International Humanitarian Law already provides an extensive and – potentially – adequate legal framework relating to the introduction and control of battlefield AI systems, at least at the moment.

In my view, a ban on AI weaponry is unlikely to be effective in preventing AI weaponry from getting into the “wrong hands”. AI is becoming pervasive and the “bad guys” are probably already in the process of developing AI weaponry.

I’m less convinced that the UK Government is correct in saying that battlefield AI systems would not be capable of satisfying International Humanitarian Law. AI is developing rapidly, and within a relatively short timescale AI systems may become just as good as – if not better than – humans at making the types of value judgements that are made on the battlefield.

Written by Roger Bickerstaff
Roger is a partner at Bird & Bird LLP in London and San Francisco and Honorary Professor in Law at Nottingham University. Bird & Bird LLP is an international law firm specializing in Tech and digital transformation.