The start of 2024 has brought with it a range of misinformation and disinformation challenges for governments, businesses, and society. Every week, five to ten incidents of harmful content circulating the internet reach my inbox, each with a proven impact on audience beliefs.
Organisations have never been more focused on building resilience to navigate the complexities and dangers of this fast-moving ‘zero trust’ media landscape, where the default question has become: why should anyone trust anything?
In 2022, the risks posed by misinformation and disinformation were not widely understood in the mainstream. The narrative shifted quickly this year when the World Economic Forum (WEF) included these threats in its annual risk report for the first time, ranking them among the top 10 risks and naming them the biggest threat facing the world in a landmark election year.
The public availability of Artificial Intelligence (AI) technologies has certainly informed the WEF’s ranking. In recent months we’ve seen an AI-generated video impact Zara, ‘leaked’ fake audio of London Mayor Sadiq Khan claiming that Armistice Day events should be postponed for a pro-Palestinian march, and fake harmful images of Taylor Swift draw global condemnation.
If I ask you to think of a social network at the heart of misinformation spread, you’ll probably think of X/Twitter, following the business’s turbulent transformation and the likes of Foreign Policy magazine calling the network “… a sewer of disinformation”. Contrary to this, journalist investigators at NewsGuard have recently identified YouTube as an incubator of Russian disinformation.
In just six months, NewsGuard identified seven viral false claims about Ukraine that were seeded on anonymous YouTube channels before spreading further, including the claim that Ukraine had assassinated a journalist and false narratives about President Zelensky. These are not harmless online narratives: such claims have previously been cited as truth by members of the U.S. Congress.
Businesses have typically been slow to respond to risk, reacting only once a crisis hits, although perhaps even this is changing. Writing in Communicate Magazine, Rebecca Pardon cites the collapse of Silicon Valley Bank, the so-called “first Twitter-fuelled bank run”, to show that many organisations are still ill-equipped to deal with social crises and misinformation spread. Detecting manipulated content is fast becoming a hygiene requirement of managing communications.
As the timeliness of this article suggests, the deluge of harmful content created and shared online is keeping us busy at Kekst CNC. Remaining at the forefront takes serious time and investment, which is partly why our parent company Publicis has announced its €300mn investment in an AI strategy. We’re on the same journey.