Language traits of disinformation in the age of generative AI


The rapid acceleration of generative AI has democratized access to technologies that will transform entire industries. When it comes to the spread of mis- and disinformation, the potential harm is significant. For several years, disinformation has had a tangible, well-documented business impact: one estimate puts the share price losses it causes at $78bn per year.

This year, the specialists on the Kekst CNC Intelligence team have supported multiple clients targeted by malicious threat actors spreading false or misleading information. In one major incident, a company lost more than $100bn in value. Our strong partnerships with leading companies, research labs, AI pioneers, and universities give us unique insights into how AI continues to transform industries and communications.

Based on our own dataset of 4.2 million online articles, it’s clear that AI is being exploited by malicious actors. Generative AI is fuelling accessible and cost-effective disinformation campaigns. We’re seeing AI automate content creation, including articles and social media posts that mimic human language and style.

Deep learning models have additionally been used to create realistic deepfake videos, voice recordings, and images. This content can be woven into stories that manipulate audiences and reinforce dangerous perceptions. Bot activity, an issue we have been supporting clients with for years, also continues. These fake social media accounts, which often look like real people or brands, are programmed to disseminate disinformation at a rapid pace.

By deploying advanced natural language processing and machine learning algorithms across our dataset, we identified that disinformation presents the following language traits:

Sensationalism: Disinformation tends to use exaggerated or sensational language to grab attention and evoke strong emotional responses.

Polarization: It often employs biased or divisive language, using loaded terms that elicit emotional reactions or reinforce existing beliefs.

Manipulation: It weaves compelling narratives that play on people’s fears, prejudices, or desires, appealing to their emotions rather than relying on rational analysis.

Repetition: Campaigns often repeat key phrases, slogans, or buzzwords to reinforce messaging and make it more memorable.

Distortion: It selectively presents or distorts facts, taking them out of context or using them in misleading ways to support a particular narrative.
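To make the traits above concrete, here is a minimal sketch of how simple lexicon-based heuristics can flag some of them in a piece of text. The word lists, scoring, and threshold choices are illustrative assumptions for this example only, not the NLP pipeline used in our analysis.

```python
# Illustrative trait scorer. The lexicons below are invented for
# demonstration purposes; a real system would use learned models.
import re
from collections import Counter

SENSATIONAL_WORDS = {"shocking", "outrageous", "unbelievable", "disaster", "explosive"}
POLARIZING_WORDS = {"traitor", "elite", "corrupt", "enemy", "radical"}

def trait_scores(text: str) -> dict:
    """Return rough per-trait scores as fractions of total tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    # Repetition proxy: share of tokens taken by the single most common word.
    top_count = counts.most_common(1)[0][1] if counts else 0
    return {
        "sensationalism": sum(counts[w] for w in SENSATIONAL_WORDS) / total,
        "polarization": sum(counts[w] for w in POLARIZING_WORDS) / total,
        "repetition": top_count / total,
    }

scores = trait_scores("Shocking disaster! The corrupt elite hide the shocking truth.")
```

Distortion and manipulation are harder to capture with word counts alone, since they depend on context and factual grounding rather than surface vocabulary.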

As generative AI continues to evolve, the battle against disinformation will remain a constant. Countering it is now a core part of reputation management, and the challenge is growing in scale. In future articles, I’ll go into more detail.

About the author

Michael White
