The Social Media Report #22
The ethics behind what we see and why we see it
Straight in at number one in Europe’s list of most dangerous uses of technology is how AI, when powering ‘toys’, can encourage dangerous behaviour in children. Sound a bit like an Insta filter or TikTok meme to you? In this edition of The Social Media Report, I take a look at the new AI regulation coming in through the European Commission and what it will likely mean for social media. As usual, I also have my must-read articles of the week.
Subscribe below if you’ve been sent this, and you’ll get every edition as soon as it’s out.
AI determines what we see and why we see it. Now it will be regulated to reduce harm. This week, the European Commission launched a ‘first ever’ legal framework for regulating artificial intelligence technologies. The AI framework, unveiled by executive vice-president and chief digital lawmonger Margrethe Vestager, is dressed up to be some kind of launchpad or incubator. Its very title, ‘Excellence and trust in artificial intelligence’, suggests a new set of tools to take AI to the next level, and the website talks of EU-funded projects in AI. But this is a first-of-its-kind set of limitations, fines and bans for uses of AI that could be harmful to society, and even for uses that subliminally affect people.
There’s more to this AI regulation than meets the eye. This framework will start with the relatively easier-to-regulate uses of AI that apply to businesses, like biased CV screening, or toys that harm children. Those rules will bed in, and then the framework will easily be able to expand to cover biased screening of social media content, or social media content that harms children. This, I think, will be the goal: regulate AI and social media in one play.
What counts as AI, and what will be the impact on social media? Setting frameworks is key to providing safety, and nowhere more so, if you ask me of course, than on social media. Social networks use sophisticated machine learning, which is AI, to make use of data on their users to decide what to show them. Social networks read what we type, they see where we search, and they know what we like. They turn this information into hordes of data and use it to improve their products, and to keep us scrolling, clicking, following and buying. An example of this can be seen below, in one of my must-reads from this week, where the Chinese government is swooping in specifically on data troves. So how will social media’s use of AI and the algorithm change in the future? One word: safety.
Algorithms for screening are in the spotlight: the European Commission has said that this new AI framework will look specifically at how AI is being used for screening. Screening applicants for jobs in companies is one example, but screening what information people are shown, for example on a social network, is a clear, close comparison. When we buy a newspaper, or watch television, it’s the same for all. But open a browser, and we are being shown information that subliminally affects us, and can cause harm as well as bring about good, which is why I believe we will see the algorithms changing as soon as this framework clunks into gear.
Examples are already coming to light of social networks’ unfair use of data and AI. For example, TikTok is being sued for several billion pounds for allegedly ‘illegally collecting the personal information of millions of children in the UK and Europe.’ This information includes biometrics, contact details and behaviours, and the case contends that this information is shared with unknown third parties for profit.
Going back to one of the two examples the AI framework gives of unacceptable uses of AI - the encouragement of dangerous behaviour in children - I think that an absolutely fundamental element of the new AI framework will need to be the safety of children consuming digital and social media content. From screen time addiction and advertising guidelines, to inappropriate content and dangerous memes which AI pushes to users, I personally think that regulation’s biggest impact here will be on the safety of future generations.
My must reads from this week
Here are the stories that I have been reading this week.
How social media is being used for sharing healthcare information in India: crowdsourcing online is being used to locate free beds in hospitals.
India orders Twitter to remove critical posts by users: the social network has been requested by the Indian Government to take down tweets, such as this one, that criticise the nation’s handling of the Covid crisis.
Facebook contractor pens damning note on mental healthcare of social media moderators: a ‘blistering’ internal note wonders ‘how much policies would change if Mark Zuckerberg and other executives at social media companies had to sit and do content moderation for even just one 8-hour shift.’
How social media shaped European Football’s week of ‘utter chaos’: twelve clubs announced the formation of a European Super League, but within 48hrs we saw a spectacular collapse. As the FT puts it, football is based on community, not mutiny, and social media reaction sparked a turning point in negotiation talks.
A look behind TikTok’s viral music process: the social network is paying musicians’ university fees, amongst other things.
Facebook working on a music integration service: Project Boombox is reported to be building audio features where users can engage in real-time conversations with others.
Spotify turns 15 - how the music industry has evolved: a look back by Variety on the impact Spotify has made on audio and music.
China is forecast to hold one third of the world’s data by 2025: new laws coming in show how China is looking to create a market for its data to propel growth.
Facebook accidentally sends data policies to a journalist: “Longer term,” Facebook said, “we expect more scraping incidents and think it's important to both frame this as a broad industry issue and normalize the fact that this activity happens regularly.”
Data analytics automation firm secures $35m investment: I really like this space. Data analytics is a growth area and automating it like Unsupervised is doing will be powerful for brands.
How global governments are moving to limit the power of tech companies: from the US and Europe to China, The New York Times notes that there’s now an urgency and breadth “that no single industry has experienced before.”
Trump’s last social media memorial of a post is still a troll fest: the LA Times looks at the discussion still burning brightly in the comments under Donald Trump’s very last Facebook post which is still up online.
An in-depth look at the ‘incredible rise’ of North Korea's hacking army: The New Yorker looks into North Korea’s extensive and expanding hacking operations, almost always focused on generating revenue for the closed-off country.
Is Clubhouse a venture capital experiment? PR executive Ed Zitron looks at length into Clubhouse and how it lacks the virality of Twitch, Snapchat, and Discord: “I believe that Clubhouse is A16Z's experiment in their own venture ecosystem, to take a company and pump money and time into it to make it worth billions of dollars.”
The Social Media Report is written by Drew Benvie, founder & CEO of Battenhall.