In this second of two articles considering liability for the mistakes of AI systems, we look at driverless cars and who might be responsible when things go wrong.

Driverless cars promise a great deal – these machines, with reflexes far sharper than any human’s, will make the roads safer. At the same time, they will be able to travel closer together at higher speeds, reducing congestion and journey times. Driverless cars may even change the norms of vehicle ownership – owners could send their vehicles out as autonomous taxis while they work or sleep, reducing the need to own a car in the first place.

The technology is advancing rapidly, but hurdles remain. One of the biggest questions to be answered if driverless vehicles are to become widespread is who carries the can when something goes wrong. A driver was killed while using his Tesla in autonomous mode in July 2016 and, despite the safety improvements driverless cars may bring, it seems certain that there will be further accidents in future.

A form of Artificial Intelligence is at the heart of a driverless car – the computer system constantly gathers data from the myriad of sensors with which the car is equipped, analyses and processes that data on board to make decisions, and sends instructions to move the car.
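This gather–analyse–act cycle can be illustrated with a deliberately simplified sketch. The sensor readings, function names and control outputs below are hypothetical illustrations of the loop described above, not any manufacturer’s actual system:

```python
# A minimal, hypothetical sketch of the sense-process-act loop described
# above. Sensor names and control outputs are illustrative assumptions,
# not a real vehicle's API.

def read_sensors():
    """Gather data inputs from the car's sensors (stubbed with fixed values)."""
    return {
        "camera_sign": "STOP",    # e.g. a road sign recognised by the camera
        "speed_mph": 30.0,
        "obstacle_ahead_m": 50.0,
    }

def decide(inputs):
    """Analyse and process the sensor data on board to make a decision."""
    if inputs["camera_sign"] == "STOP" or inputs["obstacle_ahead_m"] < 10.0:
        return {"throttle": 0.0, "brake": 1.0}   # stop the car
    return {"throttle": 0.5, "brake": 0.0}       # proceed

def act(command):
    """Send instructions to move the car (here, simply report them)."""
    return f"throttle={command['throttle']}, brake={command['brake']}"

# One iteration of the loop: sense -> process -> act
print(act(decide(read_sensors())))
```

The accident scenario discussed below corresponds to the `decide` step receiving bad input – if the camera misreports the sign, the decision logic produces the wrong command even though the code itself ran exactly as designed.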

But what if something went wrong? For example, what if the camera misread a weathered stop sign and the car proceeded onto a high-speed road, crashing into oncoming traffic? Who should be responsible for this accident?

As with the example of the algorithm in our previous article, there are potentially multiple parties who could be at fault here: the developer of the AI, as its systems incorrectly processed the data they were given, or perhaps the supplier of the camera sensor, if it was found to be substandard. The local authority might even arguably bear responsibility for failing to properly maintain the road sign (though under English law, the duty of a local authority to maintain a public highway does not extend to the provision of information signs¹).

However, as far as the drivers and owners of the other vehicles with which our wayward car collided are concerned, it ultimately matters little who was at fault – they simply want redress for the damage they have suffered. Is there any reason why this situation should be treated differently from how a traffic accident is treated today?

Perhaps not – if a human driver had failed to stop at the junction, blame would lie with him or her. But in the UK that driver, if law-abiding, would be carrying motor vehicle liability insurance. The insurer would pay out to the other driver, and it would then be for the insurance company to look back up the chain to establish who was at fault.

The difference in our scenario is that it was the car, rather than the driver, that made the mistake. Our first reaction might be that this changes everything: the driver is not responsible, the AI is! But the AI has no legal personality of its own. Nor does it “think” as we understand thinking – it makes decisions based on the data it has.

So who is responsible for the AI? Bird & Bird’s Roger Bickerstaff argues that a person must bear that responsibility, drawing an analogy with a dog owner, who takes responsibility for any damage their dog may cause – the liability does not rest with the dog itself.

This is the approach proposed in the Vehicle Technology and Aviation Bill 2016-17, which makes owners liable for damage caused by their vehicles when they are uninsured (and the insurer liable when the car is insured). Insurance companies will therefore have a significant role in ensuring autonomous vehicle safety. No longer will they consider the skill of a driver (based on demographics or claims history) when setting the premium; instead they will assess the make and model of the car for its safety record. Manufacturers will be incentivised to continuously refine their AI systems to improve safety, making insurance cheaper for their vehicles and thus more attractive to customers.

Compulsory insurance does not fully put to bed the question of responsibility for AIs. But, provided insurers do not set premiums at prohibitively high levels, it does appear to be a sensible approach, removing one of the big barriers facing the adoption of driverless cars.

¹ Gorringe v Calderdale Metropolitan Borough Council [2004] UKHL 15

William is an enthusiastic and incisive commercial lawyer, with a particular focus on cutting-edge technology. William is an associate in Bird & Bird's Commercial Group. He advises on a variety of commercial contracts with particular interest in the IT, communications and defence sectors. Within those sectors, William has developed a keen interest in emerging and disruptive technologies, particularly the Internet of Things and Big Data. William's experience ranges from advising on general commercial contracts, including software licensing and development agreements, to working on the Department of Energy and Climate Change's multi-billion pound procurement for the roll-out of "smart meter" infrastructure across the UK and acting for BAE Systems as part of the nuclear submarine project team. William has also worked for a number of clients, both employers and contractors, on contracts for the build and maintenance of solar power plants.