Three months have now passed since the draft of the Online Safety Bill was published, and it has already been described as a ‘power grab’, ‘a censor’s charter’ and a Bill of ‘real constitutional significance’. The purpose of the Bill, according to the Explanatory Notes, is to ‘establish a new regulatory regime to address illegal and harmful content online, with the aim of preventing harm to individuals in the United Kingdom.’
How is harm going to be prevented? Simple: by imposing a duty of care on providers of internet services that allow the upload and sharing of user-generated content, and on providers of search engines.
The Bill serves some other purposes too: it imposes duties on providers to protect users’ rights to freedom of expression and privacy, and it confers new powers on Ofcom to create codes of practice and to enforce the new regulatory regime.
There is a clear and urgent need for a Bill that protects users and imposes duties; service providers must be held to account for the presence and dissemination of repugnant material, especially in the fields of child abuse and terrorism.
Reactions to the draft have been both fierce and fearful. Let’s take a look at five things we need to know about this Bill.
‘Harmful’ does not mean illegal. Content harmful to adults or children is defined as content ‘having, or indirectly having, a significant adverse physical or psychological impact on a child [or adult] of ordinary sensibilities’ (ss 45 and 46).
The definition of ‘harmful’ is left deliberately wide and open for the courts to interpret, which will allow the law to develop naturally in response to individual cases. However, the open definition has attracted criticism for being too vague, particularly the concept of ‘indirect harm’.
On its website, Ofcom states that ‘free expression is the lifeblood of the internet’, explaining that its role will not be to censor the internet, and social media platforms in particular.
Category 1 services (the largest user-to-user and search engine platforms) will have a duty to protect content of democratic importance and journalistic content.
However, campaigners have voiced concern that the duties in the Bill will lead tech companies to censor heavily, handing them, rather than Parliament, the courts or the police, the power to decide what content is acceptable. If AI or algorithms are used to identify ‘harmful content’, they may fail to detect irony and sarcasm, leading to unnecessary and extreme censorship.
The Bill will apply to private messages, excluding SMS and email. The Open Rights Group has expressed serious concerns over a possible end to encrypted messaging, and suggests that this is by design. Making private messages open to scrutiny would yield clear advantages for national security and for protecting vulnerable groups of people. However, the concerns about the end of encryption are indeed legitimate: there is no precedent, and little available information, for how the content of private messages will be monitored, searched and used.
The headline figure is that, for failing to comply with the duty of care, companies can be fined up to £18m, or 10 per cent of global turnover, whichever is higher.
Furthermore, Category 1 services will have additional responsibilities: they will need to conduct and share regular risk assessments of their effect on freedom of expression and they will need to show that they have taken steps to mitigate any unfavourable outcomes.
The Bill has also received criticism for what it leaves out, despite the threat those omissions pose to online safety. Martin Lewis has pointed out, for example, that scam adverts are not covered by the Bill. Anonymous abuse, a wide area of concern, is also not tackled: nearly 700,000 people have signed an online petition calling for ID verification to be part of opening a social media account. Finally, the Bill has been criticised, notably by Nicola Roberts when talking to the BBC, for not responding to the issue of people opening new accounts after they have been banned from a social media platform.