When AI systems cause harm: the application of civil and criminal liability

08 November 2019

By Ben Hughes and Russell Williamson

“Good morning, Dave.”

It’s fairly safe to say that, in the main, those of us who practise commercial law do not have sufficient expertise in computer science to assess whether any given computing system is based on artificial intelligence (AI) techniques or on more traditional system development techniques.  Indeed, AI systems are often described in terms that, to us laypersons, seem better suited to science fiction – as exemplified by HAL, the sentient computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey – than to real life.  Even in debates between computer scientists, AI has been light-heartedly defined as “whatever hasn’t been done yet”, suggesting that it is more akin to magic or wishful thinking than reality.

But, bringing a lawyer’s more blame-focused perspective, we can see that systems based on AI techniques or methods are – possibly more so than traditional systems – developed by combinations of separate designers, developers, software programmers, hardware manufacturers, system integrators and data or network service providers.

Crucially, each of these contributors may be a distinct legal person, meaning that, if things go wrong and harm occurs, there could be a wider range of potential litigants (individuals, businesses, universities, public bodies and so on) when determining who is responsible and, ultimately, at fault.  The question of ‘who is liable?’ is becoming increasingly significant as more powerful AI systems are created, whose failure could in principle have catastrophic consequences.

With the possibility of these different litigants in mind, this article explores how civil and criminal liability might arise under English law in relation to the supply and use of such systems, based on the following grounding principles:

  • AI-based systems are no different from traditionally developed systems in that they, too, are ultimately composed of hardware, software and data.
  • AI systems themselves are not legal persons and, therefore, are not capable of participating in disputes as litigants, even if they are the cause and/or subject of a dispute.
  • Accordingly, it is only those who are involved in developing, manufacturing, selling, operating and coming into contact with AI systems who could be claimants or defendants in any associated legal action.

We also consider how the current liability frameworks in place (some of which were created before the current age of AI) might be changed in the near future at a legislative level.

(A) Civil liability

“We are all, by any practical definition of the words, foolproof and incapable of error.”

In relation to civil liability, English law can treat the supply of ‘services’ differently from the supply of ‘products’.

For example, the supply and use of a driverless car (a product) is likely to trigger different liability considerations from the supply and receipt of advice that is produced by an AI-enabled robo-adviser (a service).

We explore the different sources of English law that may apply to these two examples.

(1) Driverless cars

“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave.”

If you mention driverless (or self-driving) cars in a discussion about AI systems, you risk being accused of dealing in clichés.  However, this would be unfair because cars are a helpful illustration of an everyday product that looks set to become considerably more advanced in the coming years and which, if things go wrong, could cause considerable harm.

Indeed, manufacturers are making vehicles with increasingly sophisticated autonomous (or ‘automated’) features that collect ‘sensory’ information about the driving environment and then, through the use of AI-enabled control systems, interpret and control (or ‘drive’) the vehicle based on that information.

We do not yet know the level of autonomy that will ultimately be attained, but the ‘Holy Grail’ seems to be fully autonomous vehicles that are able to perform all the driving tasks that humans can, perhaps even performing some tasks that humans cannot, without requiring any kind of human or manual intervention.

Whether or not full autonomy is achieved in the near future, there are cars in operation now that have varying levels of partial autonomy, dependent on automated control systems.  The use of these vehicles raises questions as to who should be held to blame if a system-level defect causes a road accident or other harm.  As an example, in March 2018 reports emerged that a ‘self-driving’ Uber car operating in an autonomous mode had struck and killed a pedestrian (who was pushing a bicycle across a road) in Arizona[1].

Under English law, the key ways in which the supply of a defective driverless car (or its component parts/systems) could result in a civil liability claim are as follows:

  1. Strict liability: Under Part I of the Consumer Protection Act 1987 (the “CPA”) (which implements the EU Product Liability Directive[2] (the “PLD”) into English law), liability can be imposed on a product’s producer, own-brander, importer and/or supplier where that product is considered to be ‘defective’ according to an objective statutory standard.  A claimant does not need to show any fault or negligence on the part of the defendant – instead, the claimant needs to demonstrate the presence of a defect in the product and a causal link between that defect and the loss suffered.  Two types of damage are recoverable under this strict liability regime, namely personal injury or death, and damage to non-commercial property.

This regime gives rise to two important questions: (a) what is a ‘product’?; and (b) what is a ‘defect’?

  • First, the PLD defines products as “all moveables … even [if] incorporated into another moveable or into an immovable”, while the CPA defines products as “any goods [including] a product which is comprised in another product, whether by virtue of being a component part or raw material or otherwise”[3].  These definitions suggest that the strict liability regime may only apply to products with a physical existence, and not to intangible or incorporeal products – including software.  However, the answer to this question is not entirely clear[4] and it is currently the subject of regulatory consultation at a European level.  At least at present, if the product supplied amounts to ‘software’, it is possible that it would not be caught, on the basis that it is not considered to be a physical moveable or good – especially where the software is not provided on a physical medium, i.e. where it is downloaded ‘over the air’ in intangible form.  On that basis, while a defective physical part or component of a car would be classed as a “product”, it is less clear whether a defective piece of software forming part of the car’s autonomous AI system would be caught.
  • Secondly, a product will be defective if it does not provide the safety which a person is generally entitled to expect.  This raises some critical questions as to what the public’s expectations are regarding the safety of AI systems employed in vehicles.  For example: (i) is it acceptable for a driverless car to disobey road rules by swerving onto the pavement to avoid a child in the road; and (ii) what is an acceptable level of security for an AI system to prevent third-party hackers from gaining unauthorised access to, or control of, the system?
  • Many commentators have suggested that the PLD and CPA are outdated in that they do not reflect the current realities of how technology products are incorporated (as part of wider systems) and delivered to users – especially in the context of accidents such as the autonomous Uber car example referred to above, which demonstrates that malfunctioning embedded software can have grave consequences.  However, changes are afoot.  At the EU level (and as part of an ‘Artificial Intelligence for Europe’ regulatory roadmap): (i) following the completion of a public consultation, the European Commission is due to publish a report and guidance on the PLD with regard to AI, the Internet of Things and robotics[5]; and (ii) a Commission expert group on liability and new technologies has been established to consider possible amendments to the PLD[6].  Further, in the UK the Law Commission is undertaking a review of the legal framework for automated vehicles, including the way in which product liability laws apply to ‘pure’ software and ‘over the air’ updates (e.g. those which add or enhance automation features)[7].  These are upcoming developments to be closely monitored.
  • Cybersecurity risks also present new legal problems.  At present, there is no accepted industry standard for the security of AI products, including those used in driverless cars.  Suppliers in the automotive industry are therefore compelled (for commercial reasons) to satisfy the varying requirements of their customers, including Original Equipment Manufacturers (OEMs), and such supplier-customer relationships may not allow proper end-to-end testing of the cybersecurity of a particular platform or system that is made up of component parts sourced from different providers.  However, once again, legislative change appears to be on the horizon.  It is anticipated that the World Forum for Harmonisation of Vehicle Regulations (under the United Nations Economic Commission for Europe) will issue a regulation on cybersecurity (including software updates) in 2020.
  2. Negligence: Under the common law of negligence, the manufacturer of the car or of its component parts/systems (including AI software/systems) could potentially be liable to a claimant who has suffered personal injury or damage to their property as a result of a defect.  There is an established duty of care owed by manufacturers of products to their ultimate end-users and to others who may be affected by the products (including innocent bystanders).  While current authority has examined this duty of care in relation to the provision of physical products, there is nothing to prevent the same principle applying to the supply of software and associated AI systems, particularly where such software controls, or may affect, physical devices/components.
  3. Contract: If there is a contract of sale between the purchaser and seller of the car, the purchaser may be able to sue the seller for breach of the express or implied terms of that contract.

(2) Robo-advisers

“I know I’ve made some very poor decisions recently…”

AI systems are now used on a routine basis to produce professional advice with either limited or no human intervention, including in the fields of medicine, finance and law.  Such AI-enabled systems come with various labels, but we refer to them as “robo-advisers”.

If an AI-enabled robo-adviser is used to provide poor or negligent advice, the legal person that used the adviser to provide the advice (i.e. the potential defendant) may be liable to the person that received the advice (i.e. the potential claimant) for loss caused by breach of:

  • the express or implied terms of any English-law governed contract that may exist between the defendant and the claimant (and under which the advice was provided); and/or
  • any duty of care (not to cause such damage) that may be owed by the defendant to the claimant.  Such a duty may be owed whether or not the advice was supplied under a contract between the claimant and defendant.

However, unlike in the case of the driverless car, it seems unlikely that a claimant could successfully argue that an AI-enabled robo-adviser was supplied as a “product”, meaning that the defendant would not be liable under the strict liability regime of the CPA.  That regime was not designed to cover the provision of ‘services’.

(B) Criminal liability

“Dave, this conversation can serve no purpose anymore. Goodbye.”

In addition to civil liability for damage caused by a defective product, there is separate legislation that regulates the supply of ‘safe’ products – which could equally apply to AI systems.

The majority of UK legislation concerning product safety has its origins in a series of EU Directives, the most wide-ranging of which is the General Product Safety Directive 2001/95 (as implemented into English law by the General Product Safety Regulations 2005).  These regulations:

  • apply to all products which are intended for consumers (or are likely, under reasonably foreseeable conditions, to be used by consumers); and
  • impose specific obligations on ‘producers’ and ‘distributors’ of products, including: (i) ensuring that only ‘safe’ products are placed on the market; (ii) monitoring the safety of products after their supply; and (iii) taking appropriate corrective action to address any safety risks or hazards that arise (including, in the worst-case scenarios, undertaking a product recall or similar field action).

Failure to comply with these obligations can constitute a criminal offence, punishable by an unlimited fine or, in very serious cases, imprisonment for up to 12 months.

Similar to the issues discussed in relation to the civil strict liability regime, there is some uncertainty as to whether software forming part of an AI system would constitute a ‘product’ for the purposes of these regulations.

For very serious matters, the Corporate Manslaughter and Corporate Homicide Act 2007 creates a criminal offence of ‘corporate manslaughter’, committed where the way in which a company’s activities are managed or organised: (i) causes a person’s death; and (ii) amounts to a gross breach of a relevant duty of care (e.g. in negligence) owed by the company to the deceased person.

In the context of driverless cars, the Law Commission (as part of its review of the law regarding autonomous vehicles) is considering whether certain criminal sanctions should be reformed.  In particular, one proposal being debated is that the ‘user-in-charge’ (i.e. a person who is qualified and fit to drive) of an automated vehicle would not be liable for breaches of road rules committed while the automated driving system was engaged.  Instead, if the breach lies with the automated system, the matter would be referred to a new regulatory authority, and each automated system would need to be backed by a self-selected entity responsible for that system.  If this proposal were adopted, the responsible entity would in most cases be the developer or manufacturer of the relevant system (including, for example, a partnership where the system involved the input of two or more software developers) and that entity could face direct criminal sanctions (such as financial penalties and the withdrawal of approval to supply the system).  Any such reform would increase the responsibility and liability risk for system developers or manufacturers which, in turn, may lead to greater insurance costs.

(C) General comments on liability

(1) Is there a ‘black box’ problem?

“As to whether he has real feelings is something I don’t think anyone can truthfully answer.”

Discussions on legal liability in the context of AI systems often mention the so-called “black box” or “explainability” problem. The problem is said to arise because AI systems comprise complex algorithms and data sets that are software-generated (rather than created by human beings) and, as a result, are not fully understood by – or perhaps are not even capable of being understood by – human beings. Accordingly (it is said), it may not be possible for a human to explain why an AI system (using such self-generated algorithms or data sets) reached a particular answer (or made a particular “decision”).

This may be a problem for computer scientists who are seeking to understand AI systems’ behaviour, perhaps in order to resolve problems or prevent them from happening in the future.

From a lawyer’s perspective, however, it is less clear whether or not AI systems being “black boxes” will be relevant in determining the liability of defendants. We think that the relevance of any such “explainability” problem will ultimately depend on the facts and the application of the relevant law in each case.

Having said that, we do think it is important to bear in mind that any consideration of liability in a civil or criminal matter is ultimately a question of whether or not the acts or omissions of the relevant defendant (as caused by the relevant AI system’s decisions) were illegal.  Did those acts or omissions amount to breaches of contract, negligence or criminal offences (as the case may be)?  It is important to reinforce the point that the defendant in any case will be a legal person, not an AI system.

In order to answer these types of questions a Court may not need to understand why the relevant AI system made the decision that led to the defendant’s allegedly illegal act or omission.  Indeed, it seems to us that it is possible for it to be unclear why the AI system made a decision but for it to be very clear that the AI system did in fact make the decision and that the decision either caused or amounted to the defendant having committed an illegal act or omission.

For example, a claimant may have lost money after acting on poor investment advice supplied by a defendant, where that advice was generated by a robo-adviser.  The claimant’s case may be that the defendant failed to provide the investment advice in accordance with an implied term or common law duty that it should do so with reasonable care and skill.  The claimant may be able to establish breach without needing to explain why the robo-adviser reached the conclusion that generated the poor advice.  It may therefore be possible for the Court to find that the defendant is at fault purely on the basis of the quality of the advice itself.

The Court might well reach the same conclusion irrespective of whether the advice was produced by a robo-adviser or by a human adviser.  We should not forget that the human brain also has “black box” features (we often cannot explain human behaviour), but this has not prevented Courts from finding defendants liable in the past.

Accordingly, we think it is too early to tell whether the “black box” problem in computer science gives rise to a similar problem in the context of determining whether AI systems’ decisions give rise to legal liability.

We also do not think it is safe to say that the involvement of an AI system with “black box” features would, in every case, harm a claimant’s ability to show that the defendant’s acts or omissions were the cause of the relevant wrongful consequence.

This is because, under English law, a claimant need only show:

  • factual causation: that the consequence would not have occurred but for the defendant’s actions; and
  • legal causation: a complete chain of causation between the defendant’s actions and the consequence (the defendant’s action need not have been the sole cause of the consequence but it must have made a significant (or more than minimal) contribution to that consequence).

Neither of these tests requires the claimant to explain why the defendant acted in the way contended.

Where a defendant’s state of mind is relevant to determining their liability, the law that applies will depend on the type of wrong being alleged. However, it is important to remember that it will ultimately be the defendant’s state of mind that is relevant and that, even if the decision of an AI system is relevant, it may be possible to establish that the defendant had the requisite state of mind without being able to explain why the AI system made the decision.

(2) Are there ways to mitigate or manage your risk?

“Look Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.”

As can be seen from the issues addressed in this article, certain civil and criminal laws (which could be applied to AI systems) are likely to be reformed.  That said, in relation to product liability, there are steps that can be taken by developers, manufacturers and suppliers now to seek to mitigate (or at least manage) their risks in the event a problem arises.  In particular:

  • Conduct a careful and thorough review of the contracts (or proposed contracts) with your customers.  Key issues to consider include:
    • the exclusion of any implied terms and statutory warranties regarding the quality of the relevant products;
    • specifying the intended use by the customer of the products and what the parties’ obligations are in relation to implementation, calibration or integration of the product with any other component or system.  Where the customer is to be responsible for such implementation, consider imposing appropriate warranties and/or indemnities on the customer in your favour (for example, where a defect results from faulty implementation);
    • the imposition of testing and acceptance provisions on the customer, with a timeframe for the reporting of any defects identified by the customer;
    • including provisions which are designed to exclude or limit the supplier’s liability;
    • imposing a shorter limitation period for the bringing of claims.
  • Conduct a careful and thorough review of the contracts (or proposed contracts) with your own suppliers or sub-suppliers.  In particular, do these contracts give you a clear right of recourse against the supplier where an issue is caused by that supplier’s component?
  • Seek to classify your supply as a service (as opposed to a product).
  • Where you are supplying software, seek to provide such software by way of download only (rather than by a physical medium).
  • Consider ring-fencing your potential liability (e.g. establish separate corporate vehicles dealing with design, manufacture and supply or distribution of the relevant products).
  • Consider the insurance protection in place regarding your products.  For example:
    • What risks are covered (e.g. public liability, product liability, recall, directors’/officers’ civil liability)?
    • Are the products under the relevant policies properly defined?
    • What are the territorial limits of the coverage?
    • What is the scope of damage covered (e.g. physical damage, accidental loss, personal injury, other economic losses)?  It should be noted that products involving AI may give rise to unknown risks, meaning that it may prove difficult to obtain full insurance cover or, alternatively, that premiums are likely to be higher.

“I am feeling much better now.”


[1] The US National Transportation Safety Board (NTSB) is due to hold a meeting on 19 November 2019 to determine the probable cause of the crash.  However, initial documents released by the NTSB refer to software flaws and suggest that: (i) the car failed to identify the victim (as a pedestrian) and the bicycle as sources of an imminent collision until just before impact; and (ii) the system design did not include a consideration for jaywalking pedestrians (https://www.ntsb.gov/news/press-releases/Pages/ma20191017.aspx).

[2] 85/374/EEC.

[3] Goods themselves are defined as including substances.

[4] The English Courts have not addressed the question directly in the context of the CPA and/or the PLD.

[5] https://ec.europa.eu/growth/single-market/goods/free-movement-sectors/liability-defective-products_en

[6] https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail&groupID=3592

[7] https://www.lawcom.gov.uk/project/automated-vehicles/.  Final recommendations are due to be published in 2021.
