OPINION:

What is the Online Safety Act and why have riots reopened debates about it?

Amid concerns that the recent riots in the UK were fuelled by misinformation spread on social media, some have questioned whether the Online Safety Act, passed last year, needs to be revisited. However, Associate Professor Dr Olivia Brown and Postdoctoral Researcher Dr Alicia Cork of the University of Bath argue that, with parts of the Act not due to come into effect until late 2024, the effectiveness of the legislation won’t be fully understood until it has been tested in another situation like the recent riots.

Recent rioting and unrest in the UK have led to calls for the Online Safety Act to be revisited. Mayor of London Sadiq Khan has called it “not fit for purpose”, and Cabinet Office Minister Nick Thomas-Symonds suggested that the Government could change the law, which was passed under the previous government and includes a raft of measures relevant to the recent riots, such as powers to fine social media companies.

Prime Minister Keir Starmer has been less forthcoming about the Act and has said only that he would “look more broadly at social media after this disorder”. His spokesperson suggested the Act was not under active review.

In practical terms, social media played a key role in the widespread co-ordination of events in locations across the country. Online platforms have also served as a vehicle through which misinformation and hateful rhetoric have spread.

The Act, enforced by the independent communications regulator Ofcom, deals with the regulation of online speech and aims to protect users from potential harms including abuse and harassment, fraudulent activity and hate offences.

Specifically, it seeks to place more responsibility on social media companies to ensure their platforms are safe, with providers whose platforms are deemed unsafe facing fines of up to 10% of their annual revenue.

In more extreme cases, Ofcom has the power to require advertisers and internet providers to cease working with platforms that do not comply with the regulations. The Act passed into law in October 2023, and laws in relation to individual offences are already in effect. For example, it is now an offence to share false information with an intention to cause non-trivial harm.

However, the frustration in the wake of the riots has arisen from the fact that parts of the Act are not due to come into effect until late 2024. These include enforcement powers and other measures that Ofcom could apply to social networking platforms and other platform providers, such as online forums and instant messaging services.

This raises questions as to what might have been different in the past 14 days had they already been in place.

Algorithm concerns

A key concern has been the way in which algorithms deciding what content is recommended on social networking platforms may have propagated harmful content in relation to the riots – including racist, hateful and violent content.

For example, people were found to be using TikTok to live-stream footage of the riots as they unfolded.

At present, social media platforms such as TikTok, X, Facebook and YouTube are designed to optimise user engagement through their recommendation algorithms, with safety concerns not typically weighted within these systems. X, for example, employs different algorithms for content moderation versus content recommendation.

As a result, harmful content can be recommended by one algorithm before it has been flagged for moderation by another.

The Online Safety Act aims to address this challenge by requiring platforms to test the safety implications of their recommendation algorithms. That is, when changes are made to their recommendation algorithms, services will be encouraged to collect safety metrics, allowing them to assess whether these algorithm changes are likely to increase individuals’ exposure to illegal content.

By incorporating these safety considerations when designing and refining content recommendation algorithms, it is hoped that fewer individuals will be exposed to harmful content before content moderation teams have had the opportunity to remove it.
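
To make this idea concrete, below is a minimal, purely illustrative Python sketch of the kind of before-and-after comparison a platform might run when changing a recommendation algorithm. All names, data structures and figures are hypothetical and invented for illustration; the Act does not prescribe any particular metric or implementation.

# Illustrative sketch only: comparing a simple "exposure to flagged content"
# metric before and after a hypothetical recommendation-algorithm change.
# All names and data are invented; no real platform is described here.

from dataclasses import dataclass

@dataclass
class Recommendation:
    user_id: str
    item_id: str
    flagged_harmful: bool  # set later by a separate moderation process

def exposure_rate(recommendations: list) -> float:
    """Share of recommended items that were later flagged as harmful."""
    if not recommendations:
        return 0.0
    flagged = sum(1 for r in recommendations if r.flagged_harmful)
    return flagged / len(recommendations)

# Hypothetical recommendation logs gathered under the old and new algorithms.
old_algorithm_log = [
    Recommendation("u1", "post_a", False),
    Recommendation("u2", "post_b", True),
    Recommendation("u3", "post_c", False),
    Recommendation("u4", "post_d", False),
]
new_algorithm_log = [
    Recommendation("u1", "post_e", True),
    Recommendation("u2", "post_f", True),
    Recommendation("u3", "post_g", False),
    Recommendation("u4", "post_h", False),
]

before = exposure_rate(old_algorithm_log)
after = exposure_rate(new_algorithm_log)
print(f"Exposure before change: {before:.0%}; after change: {after:.0%}")
if after > before:
    print("The change increased exposure to flagged content; review before rollout.")

In practice any such evaluation would be far more involved, but the underlying principle of measuring exposure before and after a change is what the Act’s safety-testing duty points towards.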

Neutral oversight

One of the primary challenges around the regulation of online content is the unwillingness of platform providers to be seen as “arbiters of truth”.

For example, X has recently changed the name of its Trust and Safety team to just Safety, as Elon Musk, CEO of X, stated that: “Any organisation that puts ‘Trust’ in their name cannot be trusted as that is obviously a euphemism for censorship.”

Mark Zuckerberg, CEO of Meta, said something similar back in 2016 after the US election, when he stated that Facebook “shouldn’t be the arbiter of truth of everything that people say online”.

However, as recent events have shown, this has not precluded Musk himself from propagating specific narratives in relation to the UK riots and adding fuel to an already inflamed discourse.

The Act addresses this challenge by using the independent regulator, Ofcom, to enforce and regulate online content and algorithms. While the law was passed by the UK Government, the Government does not have powers to determine what content is allowed and what should be disallowed – thus securing political neutrality in the long-term implementation of the Act.

Prevailing challenges

At present, the Online Safety Act does not include any provisions on misinformation and disinformation. This appears to be why Mayor Khan suggested that in its current form, the Act does not go far enough.

The prevailing challenge of misinformation was put in sharp focus by the murders that led to the riots, with content falsely claiming that the Southport attacker was a Muslim migrant trending across several social networking platforms in the aftermath of the incident.

The Home Secretary Yvette Cooper claimed that social networking platforms “put rocket boosters” under the spread of this content, and there has been much debate as to whether it helped fuel the violence seen on many city streets.

This leaves some observers concerned that, until the Act fully comes into force, we are in a legal purgatory over what online activity can and cannot be acted against.

However, we won’t really know how effective the Online Safety Act can be until all of it has come into force and it has been tested in another situation like the recent riots.

This article first appeared on The Conversation and is republished under a Creative Commons Licence.

Dr Olivia Brown is an Associate Professor in Digital Futures at the University of Bath, School of Management. Her research sits at the intersection of psychology and behavioural science, aiming to understand how intra- and inter-group processes influence individual and group behaviour in online and offline settings. For instance, Liv studies high-reliability organisations to understand what factors support teamwork in extreme and challenging settings and has conducted extensive research examining how online communications data might be used to predict and understand behaviour offline. She is especially interested in how online interactions can influence offline behaviour, with a focus on extremism and terrorism.

Dr Alicia Cork is a postdoctoral interdisciplinary researcher at the University of Bath, working between psychology and computer science to understand and mitigate online harms. She is currently affiliated with REPHRAIN, a National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, and has previously been a visiting researcher at the Alan Turing Institute. Alicia completed her Social Psychology PhD at the University of Exeter in 2021, where her focus was on testing social psychological theories using machine learning and natural language processing techniques.

1 Response to “What is the Online Safety Act and why have riots reopened debates about it?”

  1. js3890 says:

    How would the Online Safety Act protect against misinformation from all government bodies too? The control of information sources seems very “big brother”. I agree, misinformation can be dangerous, from any and all directions; in the most recent cases, this is where authorities need to move quicker with disclosure to the public. Yes, certain information is sensitive and personal, however if revealing some can prevent disquiet and violence from the start, and even people being sent to prison, then isn’t it a case of “the needs of the many outweigh the needs of the few” [Mr Spock]?

    When the public feel they are being deceived or lied to, that is when the gaps are filled with misinformation and conspiracy, even anger.
