The Social Media Report #22

The ethics behind what we see and why we see it

Straight in at number one in Europe’s list of most dangerous uses of technology is how AI, when powering ‘toys’, can encourage dangerous behaviour in children. Sound a bit like an Insta filter or TikTok meme to you? In this edition of The Social Media Report, I take a look at the new AI regulation coming in through the European Commission and what it will likely mean for social media. As usual, I also have my must-read articles of the week.

Subscribe below if you’ve been sent this, and you’ll get every edition as soon as it’s out.

AI determines what we see and why we see it. Now it will be regulated to reduce harm. This week, the European Commission launched a ‘first ever’ legal framework for regulating artificial intelligence technologies. The AI framework, unveiled by executive vice-president and chief digital lawmonger Margrethe Vestager, is dressed up to be some kind of launchpad or incubator. Its very title, ‘Excellence and trust in artificial intelligence’, suggests a new set of tools to take AI to the next level, and the website talks of EU-funded projects in AI. But this is a first-of-its-kind set of limitations, fines and bans for uses of AI that could harm society, or even subliminally affect people.

There’s more to this AI regulation than meets the eye. The framework will start with the relatively easier-to-regulate business uses of AI, like biased CV screening or toys that harm children. Once those bed in, the framework could easily expand to cover biased screening of social media content, or social media content that harms children. This, I think, is the goal: regulate AI and social media in one play.

What counts as AI, and what will be the impact on social media? Setting frameworks is key to providing safety, and nowhere more so, if you ask me of course, than on social media. Social networks use sophisticated machine learning, a form of AI, to decide what to show their users based on data about them. Social networks read what we type, see where we search, and know what we like. They turn this information into troves of data and use it to improve their products and keep us scrolling, clicking, following and buying. An example can be seen below, in one of my must-reads from this week, where the Chinese government is swooping in specifically on data troves. So how will social media’s use of AI and the algorithm change in the future? One word: safety.

Algorithms for screening are in the spotlight: the European Commission has said that this new AI framework will look specifically at how AI is used for screening. Screening job applicants is one example, but screening what information people are shown, for example on a social network, is a clear, close comparison. When we buy a newspaper or watch television, everyone sees the same content. But open a browser, and we are shown information that subliminally affects us, and that can cause harm as well as bring about good, which is why I believe we will see the algorithms changing as soon as this framework clunks into gear.

Examples are already coming to light of social networks’ unfair use of data and AI. TikTok, for example, is being sued for several billion pounds for allegedly ‘illegally collecting the personal information of millions of children in the UK and Europe.’ This information includes biometrics, contact details and behaviours, and the case contends that it is shared with unknown third parties for profit.

Going back to one of the two examples the AI framework gives of unacceptable uses of AI - the encouragement of dangerous behaviour in children - I think an absolutely fundamental element of the new framework will need to be the safety of children consuming digital and social media content. From screen-time addiction and advertising guidelines to inappropriate content and dangerous memes pushed to users by AI, I personally think that regulation’s biggest impact here will be on the safety of future generations.

My must reads from this week

Here are the stories that I have been reading this week.






Long reads

The Social Media Report is written by Drew Benvie, founder & CEO of Battenhall.

You can follow The Social Media Report on Twitter at @TheSMReport. Suggestions for stories can be emailed to me. Thank you for reading, and see you next time.