Whenever a big political event takes place on Twitter, you know with absolute certainty that the bots will follow. From this perspective, COP27 didn’t disappoint. New research by the Kekst CNC Intelligence team shows that during COP27, 6% of the accounts involved in the conversation were suspected bots, responsible for driving an estimated 12% of overall mentions.
Our worldwide search for COP27 tweets revealed more than 2.2 million mentions between 4th and 21st November, generating 66 billion potential impressions (non-unique views). It was the talk of the digital town square, with influential business leaders, politicians, celebrities and companies sharing their aspirations and criticisms. Behind the veil, an automated circus of fake Twitter accounts was driving alternative narratives.
This bot activity comes against the backdrop of Twitter having now lost over 50% of its employees, including its moderation team. Our research suggests that the relaxing of safeguards may have allowed more sophisticated behaviours by fake profiles to re-emerge.
During COP27, we’ve been able to identify the following behaviours by suspected bot accounts*:
Bots targeting COP27 were part of four key networks
Each network was focused on driving specific topics into the COP27 hashtag. In each case, a central Twitter profile was surrounded by other profiles whose tweets received no direct engagement. Each of the central profiles tweeted 50+ times per day, flooding Twitter with agenda-driven messages.
Activism across clear subject areas
If you were following the COP27 hashtag closely, then you may have seen tweets calling for justice for Tigray (North Ethiopia), criticism of 400 private jets touching down in Egypt, promotional tweets of a certain politician’s stand on the “Loss and Damage” funding mechanism, and rogue accounts publicising rooms for rent. All of these “conversations” showed clear and significant bot behaviours.
Targeted disinformation towards energy companies
It’s unusual for Twitter bots to share links to external sites, but COP27 saw an exception: the sharing of a Breitbart article about “fossil fuel lobbyists” attending the conference. Breitbart’s website has a credibility score of 49.5/100 from NewsGuard’s misinformation-rating technology, reflecting the publication’s track record of publishing false and misleading claims. This points to a concerted effort by bot accounts to amplify negative stories about energy companies during COP27.
These insights come after weeks of negative headlines about Elon Musk’s acquisition of Twitter and his attempts to address the platform’s many fake accounts. The attempt to verify users via a paid subscription model through Twitter Blue ultimately failed in its last iteration, with one impersonation even costing Eli Lilly billions in market value. As Musk himself has said, Twitter will be experimenting with “lots of dumb things” over the next few months.
Identifying Twitter bots in 2022 was far harder than in our previous research in 2020. Indicators such as an unverified public profile, no profile URL, and an unsustainably high number of tweets per day remain giveaways. But the nature of the content means that mindless bot behaviours and genuine activism are now far harder to distinguish.
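As a toy illustration of how such indicators might be combined, here is a minimal heuristic scorer. The thresholds, weights, and the `Profile`/`bot_score` names are our own assumptions for demonstration purposes only, not the methodology behind this research.

```python
# Toy illustration only: a heuristic bot-suspicion scorer built from the
# indicators named above (unverified profile, no profile URL, implausibly
# high daily tweet rate). Weights and thresholds are assumptions for
# demonstration, not the research team's actual method.

from dataclasses import dataclass

@dataclass
class Profile:
    verified: bool
    has_url: bool
    tweets_per_day: float

def bot_score(profile: Profile) -> float:
    """Return a suspicion score from 0.0 to 1.0 (higher = more bot-like)."""
    points = 0
    if not profile.verified:
        points += 2   # a weak signal on its own
    if not profile.has_url:
        points += 3
    if profile.tweets_per_day >= 50:  # central network profiles tweeted 50+/day
        points += 5
    return points / 10

# An unverified, URL-less account tweeting 80 times a day scores the maximum
print(bot_score(Profile(verified=False, has_url=False, tweets_per_day=80)))
```

In practice no single indicator is decisive, which is why a score that accumulates several weak signals is a common starting point for this kind of triage.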
At Kekst CNC, we advise organisations to be especially vigilant when managing their presence on Twitter, or when using Twitter data to inform social insights.
*This snapshot is unable to verify whether bot activity has increased overall since Musk’s acquisition.
**The analysis of bot behaviours is based on a 10% random sample of the suspected bot dataset, due to the volume of mentions involved.