The Code of Practice: Putting an end to fake news?
4 min to read


November 05, 2020

Disinformation and fake news have come under much scrutiny in the past couple of years, and the (voluntary) Code of Practice on Disinformation was Big Tech’s attempt to stave off compulsory legislative measures. The Code is a self-regulatory initiative signed by Facebook, Google, Twitter, Mozilla and members of the advertising industry in October 2018, with Microsoft and TikTok subscribing more recently. It sets out a wide range of commitments the signatories agree to, with the aim of taking a collective approach to preventing the spread of online disinformation. Despite this, fake news and conspiracy theories have flooded social media since the beginning of the COVID-19 pandemic, leading the WHO Director-General to declare: ‘we’re not just fighting a pandemic; we’re fighting an infodemic’.

As we continue to face an increase in unverified information spreading online, the EU Commission has published its assessment of how the Code has been implemented. The assessment, published on 10 September 2020, highlights that whilst the Code is a valuable instrument for platforms, its self-regulatory nature falls short of the hard-line approach needed to give users greater protection. Twelve months on from the Code’s implementation, what further steps are necessary to ensure platforms and advertisers tackle the problem of disinformation effectively?

Monitoring the Code of Practice

The Commission assessed the effectiveness of the Code by monitoring how well signatories had implemented each of the commitments they had agreed to. These included measures such as:  

  • Reducing advertising opportunities for accounts spreading disinformation;
  • Enhanced transparency of political advertising;
  • Taking action against techniques to artificially boost posts and enable false narratives to become viral;
  • Setting up features that give prominence to trustworthy information; and
  • Collaborating more with fact-checkers and the research community.

In light of the potentially harmful spread of fake news about COVID-19 during the pandemic, the Commission also considered what platforms had done to tackle health-related disinformation.

Did self-regulation work?

Crucially, the Code has started to force platforms and the advertising sector to hold themselves accountable by putting them under public scrutiny. As much of the world enters a second wave of coronavirus, it is more important than ever that platforms consistently fact-check posts and remove content shown to be false, misleading and potentially harmful. This marks a big step forward in regulating an increasingly digital world.

However, the assessment highlighted a need for greater clarity. This is not entirely surprising: the writing has arguably been on the wall since shortly after the Code was introduced, when the Code’s Sounding Board, a multi-stakeholder forum, opined that “there is no common approach, no clear and meaningful commitments, and the KPIs and objectives are not measurable.” At a time when containing the ‘infodemic’ is paramount, the voluntary Code’s shortcomings highlight the need for a Europe-wide approach to tackling disinformation. The Code’s voluntary nature has created an inherent ‘regulatory asymmetry’ between those who choose to implement it and those who don’t, which limits how effective the Code can be: malicious actors can simply move their disinformation to platforms that have chosen not to self-regulate.

The future of fake news

As people look online for answers amid the uncertainty of COVID-19, it is clear Europe needs to take a more assertive approach to tackling disinformation. Facebook and Instagram, for example, have directed more than 2 billion people to resources from health authorities, emphasising how crucial a role social media sites play.

The shift away from self-regulation ties in with the development of the UK’s regulatory framework to tackle online harms. Companies falling within the scope of the Online Harms Bill, currently making its way (slowly) through Parliament, will have a legal duty to comply, rather than a choice. The Bill’s proposals are intended to provide a more uniform approach to protecting users online, although it remains to be seen whether this will in fact be achieved as and when the legislation comes into force.

The Code has helped progress the conversation between platforms and authorities about the problem of disinformation. The Commission has said it will deliver a more comprehensive approach by the end of the year, in the form of a European Democracy Action Plan and a Digital Services Act package. If the ‘infodemic’ has taught us anything, it is that an EU-wide approach is likely the most effective way to tackle the issue.

Written by
Maisie Briggs
Maisie is a trainee in the Commercial department at Bird & Bird.