AI in healthcare and life sciences: key considerations

10 November 2020

Digital technology is changing the way in which we receive healthcare. Healthcare providers are increasingly looking to adopt digital technologies such as Artificial Intelligence (AI) as the benefits become clear. AI is particularly attractive given its ability to analyse large quantities of complex data. The NHS’s commitment to AI is demonstrated through the NHS AI Lab, which brings together government, health and care providers, academics and technology companies with the aim of accelerating the safe adoption of AI in health and care.

One key use case for AI is in clinical diagnosis. There are a number of examples of healthcare providers collaborating with technology companies to trial the use of AI in areas such as the detection of eye disease and the diagnosis of different forms of cancer.

Additionally, AI has application in monitoring patients remotely (including in their own homes), in assisting preparations for surgery or other treatments and in non-clinical planning.

AI also permeates the life sciences sector. As well as its use in drug discovery, pharmaceutical companies are looking to AI to build efficiencies into their operations (for example, to provide real-time data on their supply chains and to manage any issues or bottlenecks in production), to analyse and leverage the large quantities of data that they hold and to support patients in administering and better understanding their treatment.

AI presents a myriad of opportunities for healthcare and life sciences, but also some key challenges to understand and navigate.

The need for large volumes of quality data

AI systems are reliant on large volumes of data so that they can learn and improve the decisions that they make. This data is often called training data.
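
By way of a purely illustrative sketch of why data volume matters (the library, dataset and figures below are our own assumptions for illustration, not anything drawn from the collaborations discussed in this article), the following Python snippet trains the same classifier on progressively larger slices of training data, using scikit-learn's bundled breast-cancer dataset as a stand-in for health data:

```python
# Illustrative sketch only: a classifier generally improves as it sees
# more training data. scikit-learn's bundled breast-cancer dataset is
# used purely as a stand-in for "health data" -- an assumption of ours,
# not anything referenced in the article.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger slices of the available training data.
for n in (25, 100, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:3d} records: "
          f"test accuracy = {model.score(X_test, y_test):.2f}")
```

Run end to end, the reported test accuracy will generally climb as the slice grows – which is precisely why access to a large, high-quality pool of training data is such a valuable contribution to a collaboration.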

As a result, a collaboration model lends itself well to AI system development. We have seen a number of examples of collaboration arrangements under which a technology developer brings expertise in deploying machine learning and a healthcare provider (or perhaps a pharmaceutical company) brings a large volume of training data. Such collaborations could include multiple healthcare providers (so as to increase the data pool) or other parties such as universities (to assist in running and testing the AI solution).

Collaborations of this nature create questions around IP, which should be addressed from the outset. These include questions of ownership and use of the AI system itself and of use of any branding or trade marks.

From a data perspective, collaborating parties will not only need to deal with the usual data protection and privacy requirements relating to the handling and processing of personal data, but also the additional obligations and restrictions which attach to health data.

Navigating the regulatory framework

Thought also needs to be given to the regulatory framework. Software products will be subject to specific regulatory obligations around safety and performance if they constitute a medical device within the meaning of that framework. This is an area of law where change is on the horizon, in the form of the expected implementation of the Medical Device Regulation (EU) 2017/745.

Mitigating the risks associated with reliance on an IT system

If an AI system is service or business critical then, like any other critical IT infrastructure, the system needs to be robust and there need to be appropriate contractual safeguards in place to address or mitigate the impact of the system “going down”. These include appropriate acceptance procedures, requirements around maintaining integrations with other systems, support obligations, service levels (e.g. around defect resolution and service availability) and business continuity and disaster recovery obligations.

Special consideration also needs to be given to the process for migrating away from the system at the end of the contract term. Transitional assistance and clear exit obligations (including an exit plan addressing migration of data) should be considered carefully. This is a critical part of any strategy around complex IT projects.

What happens if something goes wrong?

Finally, if digital technologies such as AI are going to be widely used in front-line clinical services, then thought needs to be given to who is responsible, and how patients are properly compensated, if something goes wrong.

AI presents some particular challenges in terms of liability. Firstly, it may not be possible to tell how or why an AI system has reached a particular conclusion. This is often called the “black box” problem and stems from the fact that the inner workings of AI systems comprise complex algorithms and data sets that are software-generated, as opposed to human-generated.
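
To make the “black box” point concrete (again as a hedged sketch, with scikit-learn and all names below being our own illustrative assumptions rather than anything cited here), the snippet below trains a small neural network and then inspects it. The model will readily produce a prediction, but its “reasoning” is spread across matrices of machine-learned numbers that cannot be straightforwardly read as an explanation for any individual decision:

```python
# Illustrative sketch of the "black box" problem: the trained model's
# decision logic lives in learned weight matrices, which can be printed
# but cannot be read as a human-style explanation of a single prediction.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, y)

print("prediction for first record:", model.predict(X[:1]))

# The "why" behind that prediction is distributed across these numbers:
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {weights.shape}")
```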

Coupled with this, AI systems tend to involve a complex matrix of contributors, including the software developer, the system maintainer (who may or may not be the software developer), the party or parties providing training data and the person using the system.

As a result, if an AI system reaches an incorrect conclusion then it may be very difficult to work out why this has happened and who is responsible.

Potential changes in the law have been discussed for some time, particularly at EU level. For example, in May the European Parliament's Committee on Legal Affairs published a report and a draft EU Regulation. The direction of travel seems to be for liability to sit with the “deployer” of an AI system, being the person who decides on the use of the AI system, who exercises control over the risk and who benefits from its operation.

The question then is to what extent the deployer should seek to pass liability risk on to the technology provider under the contract. There are a number of considerations here, including who is best placed to insure the risk and the need to encourage innovation.

Historically, suppliers of complex software in high-risk sectors have often taken the approach of limiting liability as far as possible – for example, expressly excluding any warranty that the technology is fit for a particular purpose. However, the inherent lack of transparency around how AI works, and growing reliance on AI systems, could create some tension with this approach. It is also noteworthy that the draft EU Regulation published in May provides for a potential defence for the deployer where, amongst other things, it has exercised due diligence by selecting a suitable AI system for the right task – which does not fit well with excluding warranties around fitness for purpose.

This is certainly an area to keep an eye on, including how closely the UK follows any EU approach.

By Ben Woodfield & Bridget Chamberlain.
