Digital technologies in healthcare: the liability challenges

28 February 2020

Digital technologies are increasingly transforming the way in which products are being made and services are being delivered.

Nowhere is this more evident than in the healthcare sector. In August 2019, the NHS set up a national artificial intelligence (AI) laboratory to enhance patient care and research. Prior to this, there were calls from within the NHS for technology firms to help it become a world leader in the use of AI and machine learning. Indeed, over the last few years we have seen AI trialled in the NHS in clinical applications including the detection of eye disease, the diagnosis of breast cancer from mammograms and the planning of radiotherapy treatment for head and neck cancer. Globally, we have also seen AI trialled in areas such as the identification of skin cancer from photographs and the prediction of patient deterioration. Digital technologies are also being used to monitor patients (so as to facilitate discharge from in-patient care) and in operational planning contexts, such as identifying patients who are likely to cancel appointments.

However, if digital technologies such as AI are going to be widely used in front-line clinical services, then thought needs to be given to who is responsible, and how patients are properly compensated, if something goes wrong.

This will depend in part on the type and form of the digital technology. For example, the supply and use of a robot to perform surgery (a product) is likely to trigger different liability considerations under current law than the supply and receipt of a diagnosis produced by an AI-enabled robo-advisor (which may be considered a service).

This issue of liability has been considered in depth recently by an independent expert group set up by the European Commission, which has published a report on ‘Liability for Artificial Intelligence and other emerging digital technologies’.

The report explores the ways in which existing liability frameworks at EU level may need to adapt to keep pace with the use of digital technologies such as AI in order to plug gaps in victims’ ability to receive compensation or recover losses. Brexit notwithstanding, it is an important insight into current EU level thinking on this issue (with the UK represented in the expert group).

The report is ambitious, far-reaching and sector-agnostic, containing 34 key findings. The most interesting findings in a healthcare context include the following:

It is not necessary to give autonomous systems a legal personality

The expert group considers that harm caused by technologies (even where fully autonomous) is generally reducible to risks attributable to natural or legal persons. Where this is not the case under existing liability frameworks, it is better to plug the gaps with new laws directed at individuals than to create a new category of legal person.

Furthermore, giving autonomous systems legal personality would arguably be a mere formality rather than a substantive change. In reality, someone (but who?) would still need either to provide the relevant autonomous system with assets from which damages could be paid or to pay insurance premiums on its behalf.

Ultimately, this means that where a healthcare provider uses digital technology such as AI in delivering clinical care, liability for harm caused to the patient will need to sit with the healthcare provider, the original developer or another legal entity in the supply chain or otherwise contributing to the technology. Given that a doctor has a well-established duty of care to a patient, where in the chain liability sits may well depend on the terms of the healthcare provider's contract with the digital technology supplier.

In certain circumstances, strict liability may be an appropriate response to the risks posed by emerging digital technologies

Currently, under Part I of the Consumer Protection Act 1987 (which implements the EU Product Liability Directive into English law), liability can be imposed on a product's producer, own-brander, importer and/or supplier where that product is considered to be ‘defective’ according to an objective statutory standard. This means that a claimant does not need to show any fault or negligence on the part of the defendant. Rather, the claimant needs to demonstrate the presence of a defect in the product and a causal link between that defect and the loss suffered. Two types of damage are recoverable under this strict liability regime: personal injury/death and loss of or damage to non-commercial property (the former being clearly on point in the context of healthcare provision).

The findings of the expert group are interesting as they suggest that:

  • Strict product liability could be clarified to extend to products in digital (not just tangible) form. This would reflect the current reality of how intangible technologies are often incorporated as part of wider tangible products.
  • The point in time at which a product is placed on the market should not set a strict limit on the producer’s liability for defects where, after that point in time, the producer (or a third party acting on its behalf) remains responsible for providing updates or digital services.
  • Strict liability could be additionally introduced for digital technology service offerings (not just limited to products).

If the findings of the expert group are followed, this could mean (or clarify) that technology such as AI software designed to be used in diagnosis could attract strict liability, even where the software is not provided on a physical medium.

Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation

Digital technologies such as AI often introduce a complex matrix of contributors. For example, an AI system will be developed by the original producer, may be maintained or updated by that producer or another party, may receive and act on data from multiple external sources (and may itself modify its algorithms based on that data), and will be used by the end operator (e.g. a doctor or other member of the healthcare provider's staff). It may therefore be difficult to work out who ‘controls’ or ‘operates’ an AI system.

Given this, the expert group considers that there is often more than one person who may be considered to be ‘operating’ digital technologies in a meaningful way. As such, strict liability should lie with the operator who has more control over the risks of the operation and who is the cheapest cost avoider and cheapest taker of insurance. To avoid uncertainty, lawmakers should define which operator is liable in which circumstances, and should regulate all other relevant matters (including insurance).

We would suggest that the fact that healthcare providers will in any event have clinical negligence insurance may play into a consideration of who is best placed to take on (and insure) the risks connected with emerging digital technologies. It is difficult to envisage a scenario in which the well-established duty of care between doctor and patient would be displaced, or would otherwise shift to a duty of care between technology supplier and patient (though new duties of care may be introduced). Given this, it may well be that the need for healthcare providers to insure against harm to patients is unaffected by the technology used in patient care – in which case additionally passing liability to technology suppliers may be of limited benefit (and may indeed hinder innovation).

Steps should be considered to address difficulties with proving a causal link arising from the increasing opacity and complexity of digital technologies

The introduction of digital technologies may make it much more difficult to determine what has happened if something goes wrong.

The more complex emerging digital technologies become, the less those using them can comprehend the processes that may have caused harm or damage. For example, AI systems comprise complex algorithms and data sets that are software-generated (rather than human-generated). As such, it may not be possible for a human (for instance, a doctor) to understand why an AI system (using self-generated algorithms or data sets) reached a particular answer or made a particular decision or diagnosis. This is often called the “black box” problem.

Indeed, this issue has been noted in previous AI trial results. For example, in a 2019 US study focused on the use of AI to diagnose lung cancer, it was noted that the AI system sometimes highlighted a lung nodule (a growth) that to all intents and purposes looked benign, but which the system judged not to be. The AI system was usually correct, yet the human experts could not tell why it reached the conclusions that it did.

This opacity, combined with the issue of multiple contributors, creates difficulties around proving a causal link between damage and a defect. Waters could be muddied, for instance, by the impact of updates and upgrades to AI systems (which may or may not be implemented by the original producer) and the dependence of AI systems on external information (which may be generated by in-built sensors or communicated from external sources).

The expert group makes a number of findings around this, including the following:

  • Lawmakers should consider alleviating the burden of proving causation if a balancing of certain factors warrants this. These factors include the likelihood that the technology at least contributed to the harm, the risk associated with any known defect within the technology and the degree of ex-post traceability and intelligibility of processes within the technology.
  • If there are multiple possible causes and it remains unclear which of them triggered the harm, but the combined likelihood of all the possible causes attributable to one party exceeds a certain threshold (e.g. 50% or more), this could contribute to shifting onto that party the burden of producing evidence to rebut a presumption of a causal link. For example, if three possible causes attributable to the same supplier each carried a 20% likelihood, their combined 60% likelihood could cross that threshold.
  • Joint and several liability should be explored where two or more persons cooperate on a contractual or similar basis in the provision of different elements of a commercial and technological unit, and where the claimant can demonstrate that at least one element of that unit caused the damage in a way triggering liability, but not which element.
  • There should be a duty on producers to equip technology with means of logging information about the operation of the technology in certain proportionate circumstances. The suggestion is to couple this with a reversal of the burden of proof where the victim is not given reasonable access to such information (including where it is not logged in the first place). If the operator (e.g. the end user) of the digital technology becomes liable due to the absence of logged information, it should have a recourse claim against the producer who failed to equip the technology with logging facilities.
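
To give a concrete sense of what equipping technology with ‘means of logging information about the operation of the technology’ might involve in practice, the following is a minimal, hypothetical sketch in Python of the kind of audit record a producer could build into a diagnostic AI tool. The function, field names and values (for example, log_diagnosis and MODEL_VERSION) are illustrative assumptions rather than any prescribed standard, and a real system would also need to address data protection and clinical safety requirements.

import hashlib
import json
import logging
from datetime import datetime, timezone

# Append structured audit records to a simple log file.
logging.basicConfig(filename="diagnosis_audit.log", level=logging.INFO,
                    format="%(message)s")

MODEL_VERSION = "1.4.2"  # hypothetical identifier for the deployed model version


def log_diagnosis(patient_ref: str, scan_bytes: bytes, prediction: str,
                  confidence: float, operator_id: str) -> None:
    """Record the key facts about a single automated diagnosis.

    The aim is ex-post traceability: which model version processed which
    input, what it concluded, with what confidence, and who invoked it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "patient_ref": patient_ref,  # pseudonymised reference, not raw identifiers
        "input_hash": hashlib.sha256(scan_bytes).hexdigest(),  # evidences which input was processed
        "prediction": prediction,
        "confidence": round(confidence, 4),
        "operator_id": operator_id,  # the clinician or system that invoked the tool
    }
    logging.info(json.dumps(record))


# Illustrative usage with a hypothetical diagnostic output.
scan = b"...raw scan bytes..."
log_diagnosis(patient_ref="P-0042", scan_bytes=scan,
              prediction="nodule flagged for further review",
              confidence=0.87, operator_id="clinician-17")

Records of this kind would bear directly on the ex-post traceability factor mentioned above, and on whether a victim has been given reasonable access to information about how a particular decision was produced.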

It is important to note that the impact of any changes to causation needs to be considered alongside any potential strict liability, given that strict liability already removes the need to show fault or negligence and so leaves causation as the claimant's main hurdle.

Operators of emerging digital technologies should have to comply with an adapted range of duties of care

The expert group suggests the introduction of additional duties of care, including duties around choosing the right system for the task and the user's skills, monitoring the system and maintaining it. The suggested duty to monitor stems from the acknowledgement that, unlike the traditional products for which the current harmonised product liability framework was designed, highly sophisticated digital technologies (particularly AI systems) are not finished products in the same sense when they are put on the market.

Furthermore, producers (e.g. the original developers of the technology) should have to design, describe and market products in a way that effectively enables operators (e.g. the end users) to discharge their duties of care around choosing, monitoring and maintaining the system. This could include a responsibility on technology suppliers to alert customers to particular features and risks of the software in question, possibly to offer training, and to monitor the system once it is on the market.

These findings are perhaps not surprising – and we would expect users of emerging digital technologies to be required to undertake some form of quality and suitability assessment, even under a light touch regulatory regime. Indeed, in sectors such as healthcare (where the use of digital technologies may have an impact on fundamental rights, given the type of harm that could be caused), we would envisage that more robust regulatory controls around digital technology use may be put in place.

What does this mean?

The expert group is independent, and the findings of its report are recommendations only. Nevertheless, its existence and content demonstrate that much thinking is going into whether and, if so, how laws need to adapt to keep pace with digital technologies.

If the law adapts as envisaged by the expert group, it may place additional technical requirements (such as logging features) on producers to mitigate issues around opacity caused by digital technologies. Additionally, producers may have express duties around the way in which they describe and market their products.

Furthermore, given the thinking around strict liability, causation and additional duties of care, both technology suppliers and healthcare providers using digital technologies would need to carefully consider how liability is addressed in their contracts. This is particularly so as digital technologies such as AI become used in areas such as diagnosis, even if only to augment the other information available to a doctor.

Currently, suppliers often seek to limit their liability for harm caused by digital technologies used in a healthcare environment, for instance through:

  • Excluding any warranty that the technology is fit for a particular purpose or will be error free.
  • Including express acknowledgements that the technology is not a substitute for professional medical care.
  • Including other liability limitations and exclusions.

There is potential though for tension to grow between broad supplier exclusions of liability and the increasing complexity and opacity of digital technologies such as AI, particularly where used in a healthcare context. On the other hand, care needs to be taken to avoid hindering innovation and stifling market growth by introducing unsustainable increases to suppliers’ liability exposure. Otherwise, the benefits of emerging digital technologies may not be realised at all.

For an article on the application of civil and criminal liability in relation to AI systems, please see here.

For an article on a light touch regulatory framework for AI, please see here.

Written by Ben Woodfield, an associate in Bird & Bird's London-based Commercial Group, with a particular focus on major projects, primarily in the IT, healthcare and defence sectors.