The UK government’s Online Safety Bill has been continually beset by delays and difficulties – and by some justified criticisms. Further thought is clearly needed…
Last week, the UK government announced that it was once again pulling its Online Safety Bill from the parliamentary schedule. Apparently, it remains a ‘top priority’ for the new Minister, but it is unclear when it will return. It’s possible that the delay is simply a result of the continuing chaos of British politics; but it may also reflect the ever-increasing complexity of the proposed legislation, and the controversy it has generated on all sides.
This is by no means the first such delay. Legislation in this area was first officially proposed over five years ago, in the Internet Safety Strategy Green Paper of 2017. The Online Harms White Paper followed two years later, and a draft bill was eventually published in May 2021. Further amendments have been made following the scrutiny report of a Joint Committee of the Commons and the Lords, published in December last year.
The chair of that committee, Tory MP Damian Collins, expressed appreciation that the government had accepted many of its recommendations, claiming – in an echo of the government’s own political rhetoric – that ‘we really will make the UK the safest place to be online in the world’. Yet declaring the UK’s exceptionalism – while aligned with Brexit – is surely problematic in an age of globalised technology; and many significant problems with the legislation have yet to be addressed.
Meanwhile, in recent weeks, the pressure has grown following the inquest of Molly Russell. Molly committed suicide at the age of fourteen in 2017; and it was subsequently found that she had accessed thousands of images of self-harm and suicide in the months before her death. The ensuing press coverage was predictable – ‘Instagram Helped to Kill My Daughter’ screamed the Daily Mail – although Molly’s father Ian has gradually emerged as a calm and articulate campaigner.
For the first time in a UK court, executives from social media companies were called as witnesses at the inquest. The coroner, Andrew Walker, ruled that social media had indeed contributed to Molly’s death, and later issued a ‘Prevention of Future Deaths’ notice to companies including Meta, Pinterest, Snapchat and Twitter. Walker also argued that in future there should be separate social media platforms for children and adults; that age verification systems should be compulsory; and that companies should provide technology for parents to monitor their children’s online behaviour.
Like all such cases, this was undoubtedly horrifying. However, establishing the precise role of social media is not straightforward. Molly was almost certainly disposed to be suicidal in any case, and sought out social media for this reason. What precisely she saw, and how much of it might be deemed to be encouraging suicide, rather than seeking to prevent it (and how we would make that judgment), isn’t clear. As ever, there is a risk that blaming media provides a simplified explanation of what happened, which prevents us from looking more closely at other (perhaps very painful) aspects of the case.
The current bill is exceptionally long (the full text runs to 230 pages), although it is helpfully summarised in a government factsheet (albeit produced before the most recent amendments). Briefly, the legislation will make it mandatory for social media companies to remove illegal material from their platforms, as well as material that is deemed ‘lawful but harmful’. It will place a ‘duty of care’ on companies, requiring them to use available (that is, government-approved) technologies to proactively monitor such content. The communications regulator, Ofcom, will be empowered to develop and enforce relevant codes of practice, especially for platforms likely to be accessed by children. Alongside this are requirements (currently somewhat vaguely worded) to guarantee freedom of expression, and to ensure transparency for users.
While several of the forms of harm addressed in the bill are already covered by existing laws, these proposals are much more far-reaching, not least in respect of the penalties for violation: companies found to be in breach could be fined up to £18 million or 10% of their overall turnover, while individual executives could be imprisoned.
One key problem with the bill is that it covers an enormous range of different types of harm. Among the issues mentioned explicitly in the bill and the accompanying documentation are: abuse and harassment; terrorism; pornography (of various kinds); material promoting eating disorders, self-harm and suicide; gambling; cyber-flashing; bullying; child sexual exploitation; fraudulent advertising; racist and misogynistic abuse; antisemitism and Islamophobia; flashing images likely to induce epileptic seizures; mis- and dis-information; content relating to the conduct of elections; violence; unclear terms and conditions; modern slavery; online fraud; ‘addictive’ technologies; and the sale of firearms.
Obviously, these all entail quite different forms of potential harm: they have different causes and consequences, and different perpetrators and victims; they use different techniques and technologies; and they would need to be addressed in quite different ways. Concerns about such issues derive from different sources, and may reflect very different political and moral commitments: they should not simply be conflated. Yet it would seem that, in its journey through the legislative process, the remit of the bill has steadily grown, accumulating more and more harms as it goes. The Joint Committee in particular appears to have taken on any and every problem it came across, resulting in a bill that aims to solve almost all internet-related problems in one fell swoop.
Despite the length of the list, the bill also leaves other, yet-to-be-defined categories of potentially harmful material to secondary legislation (or to the regulator) – each of which might well present quite different challenges of implementation. Significant questions remain about whether our current media regulator is either willing or able to perform this role – a role that is very different from its current remit, and potentially vast in scope. Meanwhile, this also raises the perennial issue of who is to regulate the regulator, and how it is to be accountable to the public.
Legal experts have pointed to the lack of clarity in defining several of the bill’s key terms (some thorough analysis can be found here and here). ‘Harm’ seems to be a very inclusive concept, yet there is no real requirement to produce evidence of harm; instead, the bill refers to the ‘real and substantial risk’ of causing ‘non-trivial psychological or physical harm’, or of causing people to act in such a way as to cause such harm to themselves or to others. As such, harm can potentially mean whatever the individual, or the government, says it means.
One glaring problem here relates to material that is described as ‘lawful but harmful’. In effect, this means that some activities that are legal offline will no longer be regarded as legal online. This all raises further difficulties of definition. Who is to differentiate between harmful and harmless (or ‘trivial’) abuse, for example? How might we differentiate between a site that provides information about eating disorders, say, and one that encourages them? Who is to determine what counts as ‘information’, as opposed to disinformation?
Without clear definitions, and with exceptionally high sanctions for violation, it seems likely that companies will err on the side of caution, by taking down or blocking any material they regard as potentially problematic. In this situation, as the Guardian journalist Alex Hern has said, ‘The message of the bill is simple: take down exactly the content the government wants taken down, and no more. Guess wrong and you could face swingeing fines. Keep guessing wrong and your senior managers could even go to jail.’
The challenge of having to monitor and remove online content as it appears (rather than just retrospectively, in response to complaints) is enormous. While companies might wish to achieve this by means of algorithms, this is quite unlikely to succeed. The bill requires social media platforms to consider ‘all relevant information that is reasonably available’, including ‘contextual information’, when making such judgments; and they are also required to make inferences about the ‘mental states’ of users who post material (including those who create bots in order to do so). If companies choose to remove user-generated material, it’s hard to know how those searching for such material will know that this has occurred; nor is it clear how appeals against such decisions could be mounted.
At the same time, the bill is equally vague in defining categories of material that will be exempted from prosecution. It does apply to ‘misinformation’ and ‘disinformation’, as (presumably) distinct from ‘information’, but it fails to specify by whom these things are to be defined, or how. ‘Journalism’ and ‘content of democratic importance’, on the other hand, are excluded – although these too remain undefined. Larger companies will also be forbidden from discriminating against the expression of any and all political viewpoints. Again, there is a massive potential for legal dispute here.
One particular problem in this respect, discussed in detail by groups such as the Media Reform Coalition and Article 19, is that online material produced by established print-based news organisations will not be subject to regulation. Astonishingly, this applies not just to so-called ‘news’ stories, but also to ‘below the line’ comments, which (in the case of sites like the Daily Mail) are notorious locations for hate speech. It isn’t clear where this leaves those who might be called (or call themselves) ‘citizen journalists’, or simply members of the public who are taking the opportunity to participate online. Who defines ‘journalism’?
In the draft bill, the notion of ‘freedom of expression’ is vaguely and inconsistently applied, and there is no positive incentive for companies to address this: they are merely required to show that they have ‘taken steps to mitigate any adverse effects’ on free speech – while simultaneously removing material they fear the regulator might not approve. Concerns here have led some to describe the bill as an extension of ‘state-backed censorship’, while others have called it ‘a threat to human rights’. Interestingly, objections to the bill on these grounds have come not just from the political left but also from the right, including from some Conservative MPs: we need to be wary about how the idea of ‘free speech’ is invoked in such debates.
Meanwhile, the bill requires companies to operate age verification systems (an approach that has already been considered and dropped by policy-makers in the past), and to gather information about users via technologies that include user profiling and behaviour identification (and to provide this information to the regulator). The requirement to monitor user-generated content implicitly discourages the continuing use of end-to-end encryption. These developments all raise significant questions about privacy.
Another strange exclusion (particularly when it comes to young people) is that of online gaming platforms, some of which have been noted for high levels of abuse and harassment. In addition, it appears that there will be different requirements for companies of different sizes (smaller companies will be required to do less), although the thresholds between them are yet to be specified, and a great many smaller companies will undoubtedly be affected (the government estimates as many as 25,000 in total).
Many though they may be, these problems should not be insurmountable. My argument here is not motivated by a kind of ‘anything goes’ libertarianism. There are many good (and some less good) reasons why the activities of social media companies should be regulated; and despite the many dangers of government regulation, it is clear (as the Joint Committee report proposes) that self-regulation has failed. Social media companies need to be held accountable for the consequences of their systems and practices. And the law is bound to require regular updating in this respect.
Responsibility here cannot and should not simply be passed to individual users. In fact, despite the government’s problematic Online Media Literacy Strategy, the section on media literacy included in earlier drafts of the bill has now been deleted – although, in any case, media literacy (however defined) is in no way an adequate means of dealing with many of the challenges that are at stake here.
The question is not whether to regulate, but how. I’ve had my say about this in several previous posts and articles, for example on young people and online pornography, and on the dangers of teaching internet safety. In some ways, the controversies around the current bill replay dilemmas that have recurred throughout the history of censorship legislation. How do we meaningfully assess risk and harm – and how might we do so in a way that would satisfy a court of law? How far should the fear of harm to children determine what should be available to the public in general? How (and by whom) is such harmful material to be identified and (if necessary) removed forever? Measures to keep material away from children have had limited success in older media such as the cinema and broadcasting, but it’s much harder to see how these can be applied in relation to newer media – and of course we cannot assume that children are any less competent than adults in accessing such media.
Despite what many politicians might claim, policy-making is not an objective, scientific process. What counts as evidence is problematic and often contested; and what comes out is often about serendipity rather than design. In areas such as this, which press powerful emotional buttons, there is a major risk of legislation proving counter-productive, or simply unfit for purpose.
To its credit, the Joint Committee report offers a great many specific suggestions as regards implementation, not least in relation to the overall design and structure of social media services (rather than merely ‘harmful content’). Yet despite claims that most of its recommendations have been taken on board, several key issues appear to have been ignored: most of the changes that have been made have entailed further expanding the scope of the bill, not least those that give greater power to government itself. The outcome is, to put it mildly, a confused, unwieldy mess, and a potentially dangerous one. What we need is a much more considered, step-by-step approach, which reflects the diversity and difficulty of the issues at stake.
Even so, regulating social media is not going to be enough to address the whole range of problems that have been identified in these debates. We need to understand the causes of phenomena like violent extremism, teen suicides or child abuse – causes that are obviously very different from each other. As ever, blaming the media – and then seeking to regulate them – might allow us to feel that such problems have been solved. This, in my view, is probably the most harmful delusion of all.