The Social Media Report #23: UK moves to regulate social media
Moderator minefield will set a global precedent
In this edition of The Social Media Report, I take a look at the newly announced UK government regulation of social media - the Online Safety Bill. Aimed at both protecting free speech from Big Tech moderation and removing ‘online harms’, it will likely set a precedent in digital regulation globally, as Europe and the US look on.
I also have, as usual, my must-read articles of the week. Subscribe below if you’ve been sent this, and you’ll get every edition as soon as it’s out.
The UK government has made its move to regulate social media. The new Online Safety Bill is lined up to give regulator Ofcom the power to ban social networks if they put a foot wrong, to bring criminal charges against senior executives, and to levy fines of up to $13bn.
The new regulatory framework is a pincer movement. It aims both to shield users from harm and to protect political content from removal. Social media content that will need to be removed includes posts that are abusive, illegal, terrorism-related, harmful to children or harmful to adults. At the same time, social networks will be asked to apply “protections equally to a range of political opinions, no matter their affiliation.” Just wait for the next major election and watch social media moderators decide what is political, what is harmful, and what is therefore OK to leave up.
This will be a minefield for social media moderators, seeking to define what content is illegal, harmful or abusive, while at the same time protecting free speech, all the while dealing with the looming threat of a jail sentence, nationwide bans and material fines if the decision-makers in the social networks fall foul.
Abusive and illegal behaviour online has long been the bane of social media platforms, but in reality, not all harmful activity is plain to see. With millions of posts removed every hour of every day by Facebook and Instagram alone, more is left up online in the margins, awaiting reports from users who just don’t bother, or, even worse, dwelling in the murky middle ground between innocuous and invasive. In this category of harmful content you will see posts that are plainly inappropriate, disguised spam, plagiarised, mildly offensive, stolen or simply fake.
This is where the social networks and the new legislation will fail. I have reported content like this that is, to me, clearly causing harm, but that to the naked eye of a social network moderator looks totally fine. I give some specific examples in this section of my TED talk on the topic, in particular on how an organised Russian campaign disguised as British tweets sought to destabilise Western governments through propaganda.
Therein lies the problem. And also the solution. I spend a good proportion of my time helping my clients to protect themselves online. This includes FTSE 100 CEOs, company spokespeople who put their heads above the parapet, and regular everyday colleagues and the brands they represent alike. These people suffer all kinds of abuse online every day, and it tends to come from all angles. Sometimes abuse is driven by personal or political beliefs, sometimes it comes from a vested interest of some kind, and sometimes it appears to come via a coordinated attack. But often, attacks are left online because they are borderline, often they are disguised, and often there isn’t someone like me working to protect the victim.
Which is why the success of new regulation will come from education, not moderation. The social networks are already running moderation machines, and it’s not easy. The people working in moderation teams have it hard, too, as a Facebook employee shared just this week in an interview on the topic. Education on all sides will be the answer: helping the average user of a social network to spot an actual online harm and report it, as well as educating the masses on the repercussions and impact of abuse online.
In the coming weeks we will see this new regulation take shape, and it is certain that fines will be levied on the social networks. I will be interested to see what infringements trigger Ofcom intervention. Will it be the curtailing of political discourse, overseas influence promoting disinformation, or just some digital hooliganism? Time will tell.
My must reads from this week
Here are the stories that I have been reading this week.
China is clamping down on social networks, bans LinkedIn competitor: China has ordered domestic app stores to remove 90 apps, including Maimai, a LinkedIn rival.
Instagram and Twitter have apologised for deleting pro-Palestine posts: they have blamed tech glitches with their algorithms.
We call it fanning the flames: research shows that confronting trolls worsens the harassment they inflict, something I’ve experienced and can support.
Snapchat bans anonymous apps over safety: anonymous Q&A apps Yolo and LMK taken off Snapchat after lawsuit over teen’s death.
The new rules of the creator economy: a look by The Economist on how YouTubers, Instagrammers and TikTokers are shaping the business world.
TikTok and infomercials: the app is working on ecommerce.
Facebook begins testing its Clubhouse competitor: the social network is rolling out live audio in Facebook Groups.
The blue tick’s evil cousin on Clubhouse: The Atlantic writes: “On Clubhouse, a black badge was meant to identify trolls. It’s become an emblem of the app’s dysfunctional moderation system.”
Dogecoin developers and Elon Musk: it looks like, after announcing that Tesla cars cannot be bought using Bitcoin due to its harmful effects on the environment, Elon Musk is working with the developers behind Dogecoin.
Facebook’s digital currency project, Diem, formerly called Libra, focuses on US: plans have been abandoned to secure a payment license from the Swiss watchdog.
Facebook is working on an AI that knows what to forget: kind of like how we choose what to pay attention to and what to let go, and how that improves intelligence.
Google plans to double the size of its AI ethics team: that will be 200 researchers at Google focused on ethics in artificial intelligence.
Long read on Chamath Palihapitiya and SPACs: really worth a read - “Today it seems almost every rich, underemployed man with a bit of name recognition has raised, or is raising, a SPAC, seeking to earn a big payday by finding a company to buy. There are SPACs advised by retired athletes (Alex Rodriguez, Shaquille O’Neal), SPACs advised by washed-up politicians (Paul Ryan, John Delaney), and one SPAC advised by a dream team of NBA great David Robinson and former Senate Majority Leader Bill Frist…. Chamath Palihapitiya—tech billionaire, Golden State Warriors co-owner, and all-around meme lord—has a sure-thing, 100%, can’t-miss investment for you that will definitely, absolutely pay off for him.”
The Decoder podcast hosts Kate Klonick, professor and researcher of Facebook’s moderation moves: an insightful chat about the Facebook Oversight Board, Trump and brands. It goes deep into the inner workings of social media moderation, and the types of characters involved. Should Facebook be trying to create a moral, legal, spiritual justification for the figures it chooses to remove? Is this right, real, or a big distraction? Should every social network have an oversight board? Klonick says these are all things she’s been thinking about for the last two years, all on her own. Well, hi Kate!
The Social Media Report is written by Drew Benvie, founder & CEO of Battenhall. Drew advises brands, organisations and governments on social media strategies, lectures on social media at The American University of Paris, and wrote the first page on Social Media on Wikipedia in 2006. Drew has been named the UK’s most respected social media consultant by NMA and was given the outstanding contribution award by the PRCA in 2020, and his consultancy, Battenhall, has been named UK and global consultancy of the year 2021.