Whilst mingling at a summer soiree laden with artisan delicacies and golden fizz, a bout of imposter syndrome kicked in. Why was I, a mid-year Bachelor's degree student, invited to the launch of a luxury wedding venue? This was around 2010, and the answer may have been my Klout score.
In the earlier days of what’s commonly referred to as web 2.0, a business called Klout claimed it could rate you as an individual via something called a “Klout Score”. You would plug in your social profiles and be presented with a number between 1 and 100 – the higher, the better. It became a social weighting and a sinister benchmark.
A popular episode of Charlie Brooker’s Black Mirror, “Nosedive”, portrayed a world governed by numerical socioeconomic status. The protagonist, Lacie, receives an upvote or downvote for every daily interaction – from buying a coffee to the social circle she keeps. As the episode’s name suggests, an artificial rise in score eventually leads to a downfall. It highlighted how malevolent such a social rating can be.
Can you believe that, in the real world, some companies began using Klout as part of their recruitment process – especially for digital-related roles?
Bloggers and other online personalities would use the number in contract negotiations. It was likely the reason I was gorging on food at the soiree. It wasn’t that challenging for me to focus on building a Klout score that identified me as an influencer in the local area.
For me, as a public relations student, Klout could have become a meaningless algorithmic barrier to reaching City-based roles in London. After Klout changed its secret scoring methodology, the service began to receive appropriate criticism and, eventually, met its demise – but the social need to artificially upweight a status remains.
Today, Elon Musk’s Twitter deal is on hold until more information about the prevalence of spam accounts is shared. Twitter claims fewer than 5% of daily active users are spam accounts; most people suspect the true number is far higher. To understand why Twitter has a bot problem, we need to look back to the earlier days of social media.
I joined Twitter in 2008, the same year Klout was founded. A far bigger story could be told about how the Twitter of 2008 compares with that of 2022, but focusing on bots alone: they were commonplace. I’m talking about the bots that could be purchased to increase your follower count and raise your engagement levels – what it meant to astroturf your social presence, to be someone you’re not.
This deception continues to this very day. Try it yourself: plenty of free tools exist that can detect fake followers – you’ll find some of the biggest ‘influencers’ on Instagram and Twitter are especially guilty. Due diligence is necessary before any campaigning of this nature.
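The tools that flag fake followers typically combine a handful of public profile signals. As a toy illustration only – the real services use far richer models, and every name, threshold, and weight below is my own invented assumption, not any actual tool's logic – a crude heuristic might look like this:

```python
# Illustrative fake-follower heuristic. All signals, thresholds, and
# weights here are hypothetical, chosen purely to sketch the idea.
from dataclasses import dataclass


@dataclass
class Profile:
    followers: int
    following: int
    tweets: int
    account_age_days: int
    has_default_avatar: bool


def bot_likelihood(p: Profile) -> float:
    """Return a crude 0-1 score from a few public profile signals."""
    score = 0.0
    # Following vastly more accounts than follow back is a classic bot trait.
    if p.following > 0 and p.followers / p.following < 0.1:
        score += 0.4
    # A default avatar correlates with throwaway accounts.
    if p.has_default_avatar:
        score += 0.2
    # Very new accounts tweeting at high volume are suspicious.
    if p.account_age_days < 30 and p.tweets > 500:
        score += 0.4
    return min(score, 1.0)


suspect = Profile(followers=12, following=4800, tweets=9000,
                  account_age_days=14, has_default_avatar=True)
human = Profile(followers=850, following=400, tweets=3200,
                account_age_days=2600, has_default_avatar=False)
print(bot_likelihood(suspect))  # 1.0 – trips every rule
print(bot_likelihood(human))    # 0.0 – trips none
```

Run a scorer like this over a sample of an account's followers and the proportion of high scores gives a rough sense of how astroturfed the audience is.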
In 2020, I was involved in two pieces of research that revealed how bot networks on Twitter work together to spread disinformation. One piece looked at how bots amplified 5G conspiracy theories, finding that 44.7% of profiles were potentially bots.
The other examined the COVID-19 ‘anti-mask movement’, finding that 7% were potentially bot accounts. In total, that’s 5,018 fake accounts across what were two very small sample sets. New research has since come to light showing that 23% of Elon Musk’s 93 million Twitter followers are fake.
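To put that headline figure into scale, a quick back-of-the-envelope calculation on the numbers above:

```python
# 23% of 93 million followers, per the research cited above.
followers = 93_000_000
fake_share = 0.23
fake_followers = followers * fake_share
print(f"{fake_followers:,.0f}")  # 21,390,000
```

Roughly 21 million accounts – more than the entire population of many countries.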
Today, the influence and use of bots has shifted into the mainstream. Questions are being asked specifically about whether Twitter is a ‘brand-safe’ environment – I pity the sales team trying to convince everyone of the value of the network. The acquisition has thrown the social network into disrepute. But this goes far beyond Twitter.
The spread of disinformation impacts every social network. It is often targeted, can be weaponised, and is used to change the course of conversations. Social networks can, and do, introduce safeguards, but the Wild West elements of social media will always find holes – like malware slipping past anti-virus software.
Every communications programme must monitor for disinformation. If you don’t, you will miss something.