Disinformation and fake news have come under much scrutiny in the past couple of years, and the (voluntary) Code of Practice on Disinformation was Big Tech’s attempt to stave off compulsory legislative measures. The Code is a self-regulatory initiative signed by Facebook, Google, Twitter, Mozilla, and members of the advertising industry in October 2018, with Microsoft and TikTok subscribing more recently. It sets out a wide range of commitments the signatories agree to, with the aim of taking a collective approach to preventing the spread of online disinformation. Despite this, fake news and conspiracy theories have flooded social media since the beginning of the COVID-19 pandemic, leading the WHO director-general to claim that: ‘we’re not just fighting a pandemic; we’re fighting an infodemic’.
As we continue to face a rise in unverified information spreading online, the EU Commission published its assessment of how the Code has been implemented. The assessment, published on 10 September 2020, highlights that whilst the Code is a valuable instrument for platforms, its self-regulatory nature falls short of the hard-line approach needed to promote greater protection for users. Twelve months on from the Code’s implementation, what further steps are necessary to ensure platforms and advertisers tackle the problem of disinformation effectively?
Monitoring the Code of Practice
The Commission assessed the effectiveness of the Code by monitoring how well signatories had implemented each of the commitments they had agreed to. These included measures such as:
- Reducing advertising opportunities for accounts spreading disinformation;
- Enhanced transparency of political advertising;
- Taking action against techniques used to artificially boost posts and make false narratives go viral;
- Setting up features that give prominence to trustworthy information; and
- Collaborating more with fact-checkers and the research community.
In light of the potentially harmful spread of fake news about COVID-19 during the pandemic, the Commission also considered what platforms had done to tackle health-related disinformation.
Did self-regulation work?
Crucially, the Code has started to force platforms and the advertising sector to hold themselves accountable by putting them under public scrutiny. As much of the world enters a second wave of coronavirus, it is more important than ever that platforms consistently fact-check posts and remove content shown to be false, misleading and potentially harmful. This marks a big step forward in regulating an increasingly digital world.
However, the assessment highlighted a need for greater clarity. This is not entirely surprising – arguably the writing has been on the wall since shortly after the Code was introduced, when the Code’s Sounding Board, a multi-stakeholder forum, opined that “there is no common approach, no clear and meaningful commitments, and the KPIs and objectives are not measurable.” At a time when preventing the ‘infodemic’ is paramount, the voluntary Code’s shortcomings highlight the need for a Europe-wide approach to tackling disinformation. The Code’s voluntary nature has created an inherent ‘regulatory asymmetry’ between those who choose to implement it and those who don’t, limiting how effective it can be: malicious actors can simply migrate to platforms that have chosen not to self-regulate and propagate their disinformation there.
The future of fake news
As people look for answers online in response to the uncertainty of COVID-19, it is clear Europe needs to take a more assertive approach to tackling disinformation. Indeed, Facebook and Instagram have directed more than 2 billion people to resources from health authorities, underlining the crucial role social media platforms play.
The shift away from self-regulation ties in with the development of the UK’s regulatory framework to tackle online harms. Companies falling within the scope of the Online Harms Bill, currently making its way (slowly) through Parliament, will have a legal duty to comply with it, rather than being able to choose. The proposals contained within the Bill are intended to provide a more uniform approach to protecting users online, although it remains to be seen whether this will in fact be achieved as and when the legislation comes into force.
The Code has helped progress the conversation between platforms and authorities about the problem of disinformation. The Commission has said it will deliver a more comprehensive approach by the end of the year in the form of a European Democracy Action Plan and a Digital Services Act package. If the ‘infodemic’ has taught us anything, it is that an EU-wide approach is likely the most effective way to tackle the issue.