AI and Online Harms: what does the Government’s White Paper mean for the industry?
6 min to read

Date: 03 May 2019

Last month the Government released the much-anticipated Online Harms White Paper. Jointly authored by the Department for Digital, Culture, Media and Sport and the Home Office, the Paper sets out the Government’s proposals to address harms ranging from terrorism and child sexual exploitation to disinformation and harassment, and it proposes fundamental change to internet regulation in the UK. The Paper suggests that AI, along with other technologies, may be both a source of the problem and part of the answer. As Culture Secretary Jeremy Wright put it at the launch, “Just as technology has created the challenges we are addressing here, technology will provide many of the solutions.” The White Paper signals challenges ahead for the AI industry, as a push for ‘transparency’ may see operators asked to explain how their algorithms function. But there may also be opportunities on the horizon.

AI as part of the problem

AI is particularly cited as a concern in relation to online misinformation and disinformation (the latter being false information that is deliberately created and spread in order to mislead). The topic has come increasingly under the spotlight in recent years. Governments worldwide have been alarmed by reports of deliberate disinformation campaigns designed to influence the outcome of high-profile elections, whilst the spread of false information about vaccinations is already believed to have contributed to falling uptake and rising cases of preventable disease.

AI has long been integral to the way content is targeted at online users. The Government’s concern, however, is that more sophisticated algorithms and more comprehensive data allow for micro-targeting, leading to an ‘echo chamber’ effect in which consumers are exclusively exposed to content likely to reinforce existing beliefs, and shielded from anything that might challenge them. The argument goes that this highly specific targeting can make it easier to manipulate people, particularly given a lack of public awareness: according to research by doteveryone, almost two thirds of adult internet users don’t realise that the news and information they see online can depend on the people they are connected to on social media. AI systems can also play a part in generating convincing fake content (or ‘deepfakes’) which humans and algorithms alike struggle to distinguish from the real thing. Developers are all too aware of the potential for misuse of tools like this, with OpenAI prompting debate in February with its announcement that its new AI writing system was potentially too dangerous for public release. To read more about OpenAI’s decision and the potential misuse of natural language tools click here.
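
By way of illustration, the ‘echo chamber’ mechanic can be sketched in a few lines of code. The toy Python below uses entirely hypothetical data and scoring, not any platform’s real ranking system: it ranks content purely by similarity to a user’s past behaviour, so material that might challenge the user’s views never reaches the top of the feed.

```python
# Illustrative sketch only: a toy similarity-based ranker showing how
# engagement-driven recommendation can narrow what a user sees.
# All items, vectors and names are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each item carries a crude topic mix, e.g. [politics, health, sport].
items = {
    "article_a": [0.9, 0.1, 0.0],
    "article_b": [0.8, 0.2, 0.0],
    "article_c": [0.1, 0.9, 0.0],
    "article_d": [0.0, 0.1, 0.9],
}

# A user profile inferred purely from past clicks.
user_profile = [1.0, 0.1, 0.0]

# Ranking purely by similarity to past behaviour keeps surfacing
# near-duplicates of what the user already engages with.
ranked = sorted(items, key=lambda k: cosine(items[k], user_profile), reverse=True)
print(ranked)  # politics-heavy items first; 'article_d' is effectively buried
```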

AI’s role in the solution

On the other hand, the White Paper cites numerous examples of the ways in which AI is already being used to combat online harms, including the automatic detection of problematic content and automated fact-checking (for more detail on the use of AI to automate content monitoring, click here). The White Paper argues that there is much more scope for innovation in this area, particularly regarding the safety of young people online. It cites examples, some of which are already receiving Government funding, ranging from systems to protect the digital privacy of children, to hate speech detection tools, to apps designed to track children’s use of smartphones and guide them towards safer online behaviours. However, there is also considerable scepticism that AI will provide a panacea for content detection. Despite some early success stories, current AI systems are often limited in their ability to correctly classify certain categories of content, even more so when the categories are vague or depend heavily on context.
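
One common pattern behind these limitations can be sketched briefly. In the hypothetical Python below (the classifier, labels and threshold are all invented for illustration, not drawn from the White Paper), the system decides automatically only when its confidence is high, and escalates ambiguous, context-dependent content to human reviewers.

```python
# Illustrative sketch only: routing low-confidence moderation
# decisions to human review. Everything here is hypothetical.
def classify(text: str):
    """Toy stand-in for a trained content classifier.
    Returns a (label, confidence) pair; the scores are illustrative."""
    flagged_terms = {"attack", "threat"}
    hits = sum(term in text.lower() for term in flagged_terms)
    if hits == 0:
        return "allow", 0.95
    # Keyword matches are weak evidence: 'attack' in a sports report
    # means something very different from 'attack' in a direct threat.
    return "flag", 0.55

def moderate(text: str, threshold: float = 0.8) -> str:
    label, confidence = classify(text)
    # Below the threshold the system cannot safely decide on its own,
    # so the item is escalated to a human moderator.
    return label if confidence >= threshold else "human_review"

print(moderate("Match report: a late attack won the game"))  # human_review
print(moderate("Lovely weather this weekend"))               # allow
```

Patterns like this reduce, but do not remove, the need for human judgement wherever meaning depends heavily on context.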

One of the central planks of the White Paper’s proposals is a new online regulator: either a new body set up for the purpose, or an existing regulator, such as Ofcom, with an extended remit. This regulator’s responsibilities will include encouraging further innovation in the area of online safety. It will be expected to work with industry partners to promote “the rapid innovation, development and scale-up of safety products.” The regulator will also be tasked with supporting the development of “scalable privacy-enhancing technologies to allow companies to access training data to develop AI solutions, without compromising highly sensitive or illegal datasets”.
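
The White Paper does not name particular techniques, but differential privacy is one plausible example of the kind of privacy-enhancing technology it alludes to. In the sketch below, which uses hypothetical data and parameters chosen purely for demonstration, calibrated Laplace noise is added to a count query so that aggregate statistics can be shared without exposing any individual record.

```python
# Illustrative sketch only: a differentially private count query.
# Data and parameters are hypothetical, for demonstration only.
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential variables is
    # Laplace-distributed with the given scale.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Count matching records, then add noise with scale 1/epsilon.
    A counting query has sensitivity 1: adding or removing one
    person changes the true answer by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 52, 19, 23]  # hypothetical user records
print(private_count(ages, lambda a: a < 30))  # noisy count of under-30s
```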

What to watch out for

The digital industry should be prepared for more intense scrutiny from both regulators and the press over where and how algorithms operate online, and platform operators may face the challenge of providing adequate explanations of how their algorithms work. The White Paper suggests the new regulator would have the power to require explanations of how algorithms function: for example, to assess whether their use of data leads to a particular bias, or to test how they select content. However, the White Paper does not acknowledge the ‘black box’ issue: AI decision-making can often be opaque even to a system’s creators, and explaining why an AI does what it does can be far more complicated than these proposals suggest. As the topic continues to attract public attention, and with numerous calls for improvements to digital literacy, it would also be wise to anticipate greater interest in how algorithms operate from consumers themselves.
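
Explainability is a research field in its own right and no single method settles the question, but one widely used post-hoc approach can be sketched. The hypothetical example below treats a model as a black box and measures how often its output changes when one input feature is shuffled, a simplified form of permutation importance: features whose shuffling flips many predictions are the ones the model appears to rely on.

```python
# Illustrative sketch only: a simplified permutation-importance probe
# of a 'black box' model. Model, features and data are hypothetical.
import random

def model(features):
    """Stand-in for a black box: callers see outputs, not internals."""
    age, connections, clicks = features
    return 1 if (0.7 * clicks + 0.3 * connections) > 5 else 0

# Hypothetical records: (age, connections, clicks).
data = [(25, 4, 8), (40, 10, 2), (31, 6, 7), (52, 2, 1)]
baseline = [model(row) for row in data]

def importance(feature_index, trials=200):
    """Shuffle one feature across the dataset; the more predictions
    flip relative to the baseline, the more the model appears to
    depend on that feature."""
    flips = 0
    for _ in range(trials):
        shuffled = [row[feature_index] for row in data]
        random.shuffle(shuffled)
        for row, value, base in zip(data, shuffled, baseline):
            perturbed = list(row)
            perturbed[feature_index] = value
            flips += model(perturbed) != base
    return flips / (trials * len(data))

for i, name in enumerate(["age", "connections", "clicks"]):
    print(f"{name}: {importance(i):.2f}")  # 'age' scores 0: never used
```

Probes of this kind describe a model’s behaviour rather than its reasoning, which is one reason the explanations the regulator may demand will be harder to produce than the proposals imply.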

Meanwhile, more regulation will likely see an increased interest – from both public and private bodies – in digital tools that can combat online harms. Those with the right expertise may be able to capitalise on increasing opportunities in the online safety market.

Anyone with an interest in AI will be watching for further developments as the Government attempts to square the circle it has drawn for itself: making the UK’s online space arguably the most highly regulated in the democratic world, whilst attempting to make the UK the best place for digital innovators to flourish. Many have also criticised the White Paper for overreliance on the notion of ‘harm’ – a concept that is open to interpretation. It remains to be seen whether the Codes of Practice to be developed by the regulator will define in robust terms who must guard against which harms, and how.

Those with views on how the Government might address these challenges can contribute to the online harms consultation, which is open until 1 July 2019.


By Elizabeth Greene, Bird & Bird
