The Social Media Report #32: I have a funny feeling
Some holiday reading on dystopian AI-controlled futures
Bags packed, books chosen, break incoming. It’s almost time to roll the dice that is Heathrow Airport and, hopefully, enjoy a reset. Holidays are an important part of the summer for me. Growing up, I went to my mother’s homeland, Spain, every July or August, so my muscle memory is ready for that mental detox, and reading a good book or two is one of my favourite parts of the summer slow-down.
My summer reading: AI with all the feels
This year one of the books I’m taking is Klara and the Sun. So many people had told me to read it, and I’d been putting it off until this break. It seems a timely read too. The author, Kazuo Ishiguro, explores the topic that’s taken all the headlines of late: can AI have feelings, and if so, how will this shape humans’ relationships with our ‘Artificial Friends’? The book blurb says it explores “the fundamental question: what does it mean to love?”
This makes Blake Lemoine’s recent assertion that AI has feelings seem pretty tame by comparison. The software engineer was derided by the AI community last month for proclaiming that he’d discovered a sentient AI, when it was seemingly just pattern-matching across the mountains of text it was trained on, which is what AI does. But if an AI can make a person feel a connection, isn’t that the bigger point?
Could someone really fall in love with AI?
Fast forward to our relationship with our phones, and it’s clear to see that the algorithm already has us hooked, and not always in a good way. It isn’t just our screen time or the messages we’re exposed to that are relevant here. It’s also the way in which AI’s advances could multiply the effect of us being in so deep. For future generations, or maybe even sooner, as people’s relationships develop with their AI, be it Siri, Alexa, a Tamagotchi, a social media connection, or whatever comes next, how deep could these relationships go? It seems to me that people could very soon become reliant on ‘Artificial Friends’.
The last book I read was Conor Grennan’s Little Princes, a funny and retro throwback tale of exploration, humanitarianism and love. In fact, I’m expecting it to be the exact opposite of Klara and the Sun. Last year I had the pleasure of speaking to one of the stars of Little Princes, Liz, who was the first to mention Conor’s book to me. She explained how it’s really the tale of how she and Conor first met.
I can’t recommend the book highly enough. Conor was about my age now in the book, which is set 20 years ago, as he explores Nepal and the Far East with the technology and trappings befitting the period. What really struck me was how long every little thing took back in the early noughties, from sending and receiving a message (i.e. by email) through to completing a huge task, like tracking down a family or setting up an initiative. It also showed the power of networks, and of talking to people to find things out, rather than the fire-and-forget culture we find ourselves in today. It’s all very nostalgic, and it shows how much can be achieved when you take your time. Conor’s also hilarious.
After reading the book, I found myself in a Zoom meeting with Liz, the star of the book, who is now an expert in law and AI. She said something unrelated to the book, but related to AI’s supposed potential for sentience, that got me thinking.
If you put aside the argument over whether AI could have feelings, or could indeed love, and think instead about whether a human could have feelings for an AI, or fall in love with one, isn’t that the more pressing concern for future generations? What if our children fell in love with their Siri? Or worse, were manipulated by their AI?
Show me the algorithm
The social media news headlines this week are dominated by how our networks have been used for harm. We’ve had the British Army’s social media accounts hacked and used to promote crypto scams, and in China, a hacker claiming to have stolen the data of 1 billion people, 23 terabytes of it to be precise, and to be selling it on a forum. In Iran, a social media bot ‘army’ one million strong is attacking the #MeToo movement. It’s all a bit doom and gloom. And this morning a story caught my eye linking all this with AI.
In Japan, a new case has come to light that may result in the courts forcing the country’s equivalent of TripAdvisor to expose the inner workings of its algorithm. Are we seeing a new precedent emerge, one that will force an AI’s owner to disclose its inner workings? To expose its soul?
The case has been reported in the FT: Kakaku.com’s restaurant review site Tabelog has been ordered by a Japanese court to disclose part of its algorithm, something tech companies have long considered a trade secret and not for disclosure. This follows a high-profile ruling in which a restaurant successfully argued that Tabelog’s algorithm had harmed its business, resulting in Kakaku.com being ordered to pay over $250,000 in damages, a decision Kakaku.com is appealing.
So should an algorithm, or the workings of an AI, be secret and mystical, or public and downloadable? And will this shape AI’s potential to strike up a relationship with a person? I seriously doubt it.
Social media and AI futures
Looking ahead, I will be watching the regulators’ moves on AI and algorithms with great interest, because I think they are more closely linked to social media than many might think. If an account on an app or a social network can befriend us, it can become a powerful force for good, but it could also cause great harm, whether we know it or not.
The Social Media Report is written by Drew Benvie, founder & CEO of Battenhall.
Thanks for reading The Social Media Report. Subscribe to receive new posts in your inbox.