When AI goes wrong – part 2: autonomous vehicles

16 June 2017

In this second of two articles considering liability for the mistakes of AI systems, we look at driverless cars and who might be responsible when things go wrong.

Driverless cars promise a great deal – machines with reflexes far sharper than any human's should make the roads safer. At the same time, they will be able to travel closer together at higher speeds, reducing congestion and journey times. Driverless cars may even change the norm when it comes to vehicle ownership: owners could send their vehicles out as autonomous taxis while they work or sleep, reducing the need for others to own a car in the first place.

The technology is advancing rapidly, but hurdles remain. One of the biggest questions to be answered if driverless vehicles are to become widespread is who carries the can when something goes wrong. A driver was killed while using his Tesla in autonomous mode in May 2016 and, despite the increases in safety which driverless cars may bring, it seems certain that there will be further accidents in future.

A form of Artificial Intelligence is at the heart of a driverless car – the computer system constantly gathers data from the myriad sensors with which the car is equipped, analyses and processes that data on board to make decisions, and sends instructions to move the car.
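
To make that sense-decide-act loop concrete, here is a minimal sketch in Python. Every name in it (SceneReading, classify_scene, decide, the five-metre threshold) is a hypothetical stand-in of ours, not any real vehicle's software:

```python
from dataclasses import dataclass

@dataclass
class SceneReading:
    stop_sign_ahead: bool       # did perception detect a stop sign?
    obstacle_distance_m: float  # distance to the nearest obstacle

def classify_scene(camera_frame, lidar_points) -> SceneReading:
    """Stand-in for the on-board perception model. A misclassification
    here (e.g. a weathered stop sign read as clear road) feeds straight
    into the decision step below."""
    raise NotImplementedError  # in reality: a trained model over sensor data

def decide(scene: SceneReading) -> str:
    """Decision step: turn processed sensor data into a driving command."""
    if scene.stop_sign_ahead or scene.obstacle_distance_m < 5.0:
        return "brake"
    return "maintain_course"

# The "act" step would send the chosen command to the car's actuators:
print(decide(SceneReading(stop_sign_ahead=True, obstacle_distance_m=40.0)))  # brake
```

The point of the sketch is simply that there is no human judgement in the loop: whatever the perception step reports, the decision step acts on.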

But what if something went wrong? Suppose, for example, that the camera misread a weathered stop sign and the car proceeded onto a high-speed road, crashing into oncoming traffic. Who should be responsible for this accident?

As with the example of the algorithm in our previous article, there are potentially multiple parties who could be at fault here: the developer of the AI, whose systems incorrectly processed the data they were given, or perhaps the supplier of the camera sensor, if it was found to be substandard. Perhaps even the local authority could arguably bear responsibility for failing to maintain the road sign properly (though under English law, the duty of a local authority to maintain a public highway does not extend to the provision of information signs¹).

However, the drivers and owners of the vehicles with which our wayward autonomous car collided do not ultimately care who was at fault – they simply want redress for the damage they have suffered. Is there any reason why this situation should be treated any differently from how a traffic accident is treated today?

Perhaps not – if a human driver had failed to stop at the junction, blame would lie with him or her. But in the UK that driver, if driving lawfully, would be carrying motor vehicle liability insurance. The insurer would pay out to the other driver, and it would then be for the insurance company to look back up the chain and establish who was at fault.

The difference in our scenario is that it was the car, rather than the driver, that made the mistake. Our first reaction might be that this completely changes things: the driver is not responsible, the AI is! But the AI does not have its own legal personality. Nor does it "think" in any sense we would recognise – it simply makes decisions based on the data it has.

So who is responsible for the AI? Bird & Bird's Roger Bickerstaff argues that a person needs to bear that responsibility, drawing an analogy with dog ownership: it is the owner, not the dog, who is liable for any damage the dog causes.

This is the approach proposed in the Vehicle Technology and Aviation Bill 2016-17, which makes owners liable for damage caused by their vehicles when they are uninsured (and makes the insurer liable when the car is insured). Insurance companies will therefore have a significant role in ensuring autonomous vehicle safety. No longer will they be assessing the skill of a driver when setting the premium (based on demographics or claims history); instead they will be assessing the make and model of the car for its safety record. Manufacturers will be incentivised to continuously refine their AI systems to improve safety, making their vehicles cheaper to insure and thus more attractive to customers.
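
The allocation rule itself is simple enough to express in a few lines. Below is a rough, hypothetical Python sketch of the routing the Bill envisages – the claim goes to the insurer where the car is insured and to the owner where it is not – with the insurer's onward recourse against manufacturer or supplier happening afterwards. The names and structure are our illustration, not the Bill's drafting:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vehicle:
    owner: str
    insurer: Optional[str]  # None if the vehicle is uninsured

def liable_party(vehicle: Vehicle) -> str:
    """First port of call for the injured party's claim: the insurer if
    the vehicle is insured, otherwise the owner. Recourse against the
    manufacturer or sensor supplier comes later, up the chain."""
    return vehicle.insurer if vehicle.insurer is not None else vehicle.owner

print(liable_party(Vehicle(owner="A. Owner", insurer="Acme Insurance")))  # Acme Insurance
print(liable_party(Vehicle(owner="A. Owner", insurer=None)))              # A. Owner
```

The virtue of the rule is that the injured party always has one clear defendant, whatever went wrong inside the AI.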

Compulsory insurance does not fully put to bed the question of responsibility for AIs. But, provided that insurers do not set premiums at prohibitively high levels, it does appear to be a sensible approach, removing one of the big barriers to the adoption of driverless cars.


¹ Gorringe v Calderdale Metropolitan Borough Council [2004] UKHL 15

Written by Will Bryson
William is an enthusiastic and incisive commercial lawyer, with a particular focus on cutting-edge technology. William is an associate in Bird & Bird's Commercial Group. He advises on a variety of commercial contracts, with particular interest in the IT, communications and defence sectors. Within those sectors, William has developed a keen interest in emerging and disruptive technologies, particularly the Internet of Things and Big Data. William's experience ranges from advising on general commercial contracts, including software licensing and development agreements, to working on the Department of Energy and Climate Change's multi-billion pound procurement for the roll-out of "smart meter" infrastructure across the UK and acting for BAE Systems as part of the nuclear submarine project team. William has also worked for a number of clients, both employers and contractors, on contracts for the build and maintenance of solar power plants.