When AI goes wrong – part 1: trading algorithms

16 June 2017

Artificial Intelligence applications are appearing everywhere, promising to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider who should be responsible when an AI makes a mistake, by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.

The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead we are considering “narrow” or “weak” AIs: computer systems which can analyse data in order to take actions that maximise their chance of success at an identified goal.

Our example AI application for this article is a trading algorithm. Our algorithm has a particular goal: to execute trades which seek to maximise return on investment in a fund (perhaps with further parameters set around risk tolerance, desired levels of return and/or the exclusion of certain classes of assets, to align with the profile of the fund). The algorithm has been developed using machine learning, and continues to use machine learning to refine its decision-making process on a day-to-day basis.
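
By way of illustration only, the fund profile described above might be encoded as a simple set of parameters the algorithm checks before executing any trade. The names and thresholds below are hypothetical, a minimal sketch rather than any real trading system:

```python
# Hypothetical illustration only: encoding the fund's profile as parameters
# the algorithm must respect. None of these names comes from a real system.
from dataclasses import dataclass, field

@dataclass
class FundProfile:
    risk_tolerance: float            # e.g. maximum acceptable volatility
    target_annual_return: float      # desired level of return
    excluded_asset_classes: set = field(default_factory=set)

def trade_is_permitted(asset_class: str, projected_volatility: float,
                       profile: FundProfile) -> bool:
    """Screen a candidate trade against the fund's profile before execution."""
    if asset_class in profile.excluded_asset_classes:
        return False
    return projected_volatility <= profile.risk_tolerance

profile = FundProfile(risk_tolerance=0.15,
                      target_annual_return=0.08,
                      excluded_asset_classes={"tobacco", "armaments"})
print(trade_is_permitted("equities", 0.12, profile))  # True
```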

Our algorithm has been licensed to a bank by the software house that originally wrote and developed it. Once the licence agreement was signed, the software house set about training the algorithm to achieve its goal by feeding it a diet of labelled data from the software house’s own banks of historic data. After the training and subsequent demonstration phase (in which the software house demonstrated to the bank the decisions the algorithm would have made on live market data), the bank accepts the algorithm and it starts trading the bank’s money.
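
In software terms, that lifecycle might be pictured as a supervised training step on the labelled historic data, followed by a “demonstration” phase in which the model’s would-be decisions on live market data are recorded without any money changing hands. The model class and random features below are stand-in assumptions, not a description of the actual system:

```python
# A sketch of the lifecycle described above: supervised training on labelled
# historic data, then a "demonstration" phase in which the model's would-be
# decisions on live data are recorded without trading real money.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the software house's labelled historic data:
# market features paired with a buy (1) / sell (0) label.
X_hist = rng.normal(size=(1000, 5))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)  # the training phase

def demonstration_phase(model, live_feature_stream):
    """Record the decisions the model *would* make, without executing them."""
    return ["BUY" if model.predict(x.reshape(1, -1))[0] else "SELL"
            for x in live_feature_stream]

print(demonstration_phase(model, rng.normal(size=(3, 5))))
```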

After a year of successful trading, in which the algorithm used the results of its previous decisions to refine its investing strategy and deliver significant returns to the bank, the algorithm makes a catastrophic error: it executes a very large but seemingly illogical series of trades, resulting in a loss of tens of millions of pounds.
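
The day-to-day refinement mentioned above can be imagined as an online learning loop, in which the outcome of each executed trade is fed back to the model as a fresh labelled example. This is a hedged sketch using scikit-learn’s incremental SGDClassifier purely for illustration:

```python
# A hedged sketch of the day-to-day refinement loop: each executed trade's
# outcome is fed back to the model as a new labelled example.
# (loss="log_loss" assumes a recent scikit-learn version.)
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = trade lost money, 1 = trade was profitable

for day in range(250):                      # roughly one trading year
    features = rng.normal(size=(1, 5))      # today's market features
    outcome = int(features[0, 0] > 0)       # stand-in for the realised result
    model.partial_fit(features, [outcome], classes=classes)
```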

The first question here is whether anyone should be liable for this “glitch” at all. Markets can be volatile, and a bad investment decision is not, by itself, a basis for attributing liability to anyone.

Moreover, the fact that the trades seemed illogical in hindsight does not necessarily mean they were wrong. In one of the matches against the Go world champion Lee Sedol in March 2016, Google DeepMind’s AlphaGo AI made such a highly unusual move that commentators thought the AI must have malfunctioned; presumably, though, AlphaGo had its reasons (it went on to win the match and the series).

This leads on to a second problem with ascribing liability for losses caused by AI: proving who is at fault. In the case of AlphaGo, the system could not express why it made the move it did, or explain what in its experience of playing Go led it to think that move would be a wise one. Likewise with our algorithm, it is very unlikely that the background to the decision could be unpicked to see which previous experience caused it to be made. Without the ability to interrogate the decision, it would not be possible to say whether the fault lay in the original code written by the software house or in the diet of data the algorithm was fed (and, in the latter case, whether it arose from the training data or from the real “live” decisions made once in use by the bank).
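
A small illustration of why such decisions resist interrogation: even with complete access to a trained model, all that can be inspected are matrices of numeric weights, and nothing in them links a given decision back to the historic examples that shaped it. The toy model below is a stand-in assumption, not any real trading system:

```python
# Toy illustration of the opacity problem: the only record of "why" a trained
# model decides anything is matrices of weights. Nothing connects a given
# decision to the historic data that produced those weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)    # an arbitrary stand-in pattern

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)

print([w.shape for w in net.coefs_])          # e.g. [(5, 16), (16, 1)]
print(net.predict(rng.normal(size=(1, 5))))   # a decision, with no rationale
```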

In practice, questions of liability are likely to be answered by the parties through the contracts they enter into, much as they are today. It may take some time for a “market standard” approach to emerge, but standard approaches seem likely to develop as the licensing of AI systems in environments carrying a degree of risk becomes more widespread.

Written by
Will Bryson
William is an enthusiastic and incisive commercial lawyer, with a particular focus on cutting-edge technology. William is an associate in Bird & Bird's Commercial Group. He advises on a variety of commercial contracts, with particular interest in the IT, communications and defence sectors. Within those sectors, William has developed a keen interest in emerging and disruptive technologies, particularly the Internet of Things and Big Data. William's experience ranges from advising on general commercial contracts, including software licensing and development agreements, to working on the Department of Energy and Climate Change's multi-billion pound procurement for the roll-out of "smart meter" infrastructure across the UK and acting for BAE Systems as part of the nuclear submarine project team. William has also worked for a number of clients, both employers and contractors, on contracts for the build and maintenance of solar power plants.