

The Social Media Report #31: Will UK's 'world first' Online Safety Bill send ripples or a tsunami?
Social media users are to benefit from new laws to protect them online, according to the UK government’s Online Safety Bill, which was launched yesterday. The bill is being called a world first by its creators, and it aims to hold the social networks themselves to account, stating that executives from the likes of Facebook, TikTok, Twitter or Instagram could face jail sentences if they fail to comply with the new laws. A world first it is, and its impact will be felt globally too, as the new features it asks social networks to develop could bring about universal changes to social media.

I’m going to dig into just two of the issues that, as I see it, will determine the impact this bill could have. They touch on two crucial elements: the cause of much harmful behaviour online, and the reality of protecting reputations on social media.
If and when this bill becomes law, the biggest obstacle to making it work in practice is that forcing platforms to police themselves better is only half of the battle. Threatening a social network with fines for non-compliance will no doubt bring change fast. But policing user behaviour and educating users to stay safe online should both be massive priorities too.
The policing and changing of user behaviour are topics largely missing from the 225-page bill, which focuses on how social networks and internet companies themselves should act to make the internet safer for those using it. In practice, this will cause huge problems, as I’ve seen first-hand in my day job working with individuals and brands on reputation management.
So let’s look at trolls and how this bill could impact things.


What makes trolls tick: they are masters of disguise, and they are relentless
The new bill describes measures that it suggests will reduce harmful content online, one of which concerns identity verification. The suggestion is that all social networks should be able to verify a user’s real identity, and that a user who so desires can automatically block all ‘non-verified’ accounts in order to reduce their exposure to trolling. I like this and can see its value, for high-profile accounts such as footballers or politicians as much as for the average person. It will be a major feature for all social networks to develop and standardise, but it could prevent many cases of trolling.
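To make the mechanics concrete, here is a minimal sketch in Python of how such an opt-in filter might behave. The names and data model are entirely hypothetical; the bill describes the duty, not the implementation.

```python
from dataclasses import dataclass

# Hypothetical data model: nothing in the bill prescribes these names.
@dataclass
class Account:
    handle: str
    is_verified: bool  # identity confirmed by the platform

@dataclass
class SafetySettings:
    block_unverified: bool  # the opt-in switch the bill proposes

def should_show(author: Account, viewer: SafetySettings) -> bool:
    """Hide content from non-verified accounts when the viewer opts in."""
    return not (viewer.block_unverified and not author.is_verified)

# A user who opts in never sees posts from unverified accounts.
troll = Account(handle="@random123", is_verified=False)
viewer = SafetySettings(block_unverified=True)
assert should_show(troll, viewer) is False
```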
But the reality of protecting social media users from attacks is that the sheer volume of the attacks, and the methods attackers deploy to cover their tracks, make detection hard and protection even harder.
First, let’s look at the volume of harmful social media content. This is something I have researched in depth and spoken about, including in my TED talk on the topic. Facebook alone removed 1.2 billion spam posts in the last three months of 2021, which averages out at more than 500,000 pieces of spam every hour of every day. Every one of those posts broke Facebook’s code of conduct; in other words, each was so harmful in some way that it had to be deleted.
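The hourly figure follows directly from the quarterly one. A quick back-of-the-envelope calculation, taking Q4 2021 as 92 days (October to December):

```python
# Back-of-the-envelope check on the figures above.
spam_actioned = 1_200_000_000  # spam posts Facebook actioned in Q4 2021
days = 92                      # October-December 2021

per_day = spam_actioned / days  # ~13 million spam posts a day
per_hour = per_day / 24         # ~543,000 spam posts an hour

print(f"{per_day:,.0f} per day, {per_hour:,.0f} per hour")
```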
In those same three months, Facebook removed 8.2 million posts for harassment and bullying, while Instagram removed a further 6.6 million for the same reason. Another 19.8 million Facebook posts and 2.6 million Instagram posts were removed for child endangerment, and for hate speech, 17.4 million Facebook posts and 3.8 million Instagram posts were removed.
In addition, much of the content that Facebook removes only comes down because another user has reported it. While Facebook states that “technology proactively detects and removes the vast majority of violating content before anyone reports it”, data shows that in the last three months of 2021, taking bullying and harassment content as an example, 41.2% of the content Facebook removed had been seen, and had to be reported, by a user. What this illustrates is that content may well be getting removed, but it is still doing harm first, in enormous quantities, and that poses real challenges for lawmakers.
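To put a number on how much harmful content real people see before it disappears, here is an illustrative calculation, assuming the 41.2% applies across the full 8.2 million removals:

```python
# Illustrative arithmetic from the bullying-and-harassment figures above.
total_removed = 8_200_000    # Facebook removals in Q4 2021
user_reported_share = 0.412  # share flagged by users, per the data above

user_reported = total_removed * user_reported_share  # ~3.4 million posts someone saw and flagged
proactive = total_removed - user_reported            # ~4.8 million caught by automated detection

print(f"user-reported: {user_reported:,.0f}, proactive: {proactive:,.0f}")
```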


Next, let’s look at the behaviour of trolls.
Trolls create convincing personas online. They rarely present as obviously random, throwaway accounts, as this lessens their impact. On top of building realistic-looking profiles, they hack real ‘verified’ accounts and use them to carry out their attacks. Recent data shows there are over 600 million hacked accounts in circulation, and I have personally encountered numerous people whose authentic social media profiles were taken over by ‘bad actors’, sometimes permanently.
A considerable amount of the harmful content I see online comes from what I can tell are trolls, but to the lay person it appears to come from real people. The difference between a disguised troll and a harmless real person is only slight, and invisible to the untrained eye, yet hugely damaging in practice.
Ultimately, reducing the damage done by trolls will require close analysis of their activities, not just the rollout of a blocking system. A blocked troll is still there, and trolls will only become more proficient at passing themselves off as real. Education is key.
Below is the introduction to the bill, which is downloadable in full here.
Ofcom is to be put in charge of policing the new UK laws when they come into effect, meaning the regulator will be given new responsibilities and new powers. How this shapes up will be critical to the success of the bill, and to the successful evolution of the safety features the social networks need to build to improve user safety.
This bill will no doubt send ripples globally; it won’t just be a UK thing. But if done right, it will change behaviour internationally, among the social networks and among us, their users, and we will see a tsunami of change online.
The Social Media Report is written by Drew Benvie, founder & CEO of Battenhall.
You can follow The Social Media Report on Twitter at @TheSMReport. Suggestions for stories can be emailed to db@battenhall.com. Thank you for reading.