Research Can Help to Tackle AI-generated Disinformation

Srijan Kumar is an assistant professor in Georgia Tech's School of Computational Science and Engineering

In an article published this week in Nature Human Behaviour, Assistant Professor Srijan Kumar and his colleagues describe why new behavioral science interventions are needed to tackle AI-generated disinformation.

Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and that may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.

In March 2023, images purportedly showing former President Donald Trump being arrested circulated on social media. Trump, however, was not arrested in March; the images were fabricated using generative AI technology. Although fabricated or altered content is nothing new, recent advances in generative AI have made it easy to produce fabricated content that is increasingly realistic, making it harder for people to distinguish what is real.

Generative AI tools can be used to create original content such as text, images, audio, and video. Although most applications of these tools are benign, there is substantial concern about the potential for an increased proliferation of disinformation (which we refer to broadly as content spread with the intent to deceive, including propaganda and fake news). Because AI-generated content appears highly realistic, it renders ineffective some of the strategies currently used to detect manipulative accounts and content.

How AI disinformation differs

What makes AI-generated disinformation different from traditional, human-generated disinformation? Here, we highlight four potentially differentiating factors: scale, speed, ease of use and personalization. First, generative AI tools make it possible to mass-produce content for disinformation campaigns.

One example of this scale is the use of generative AI tools to produce dozens of different fake images showing Pope Francis in high-fashion outfits across different poses and backgrounds. More broadly, AI tools can be used to create multiple variations of the same false story, translate them into different languages, mimic conversational dialogue, and more.

Second, compared with the manual generation of content, AI technology allows disinformation to be produced very rapidly. For example, fake images can be created in seconds with tools such as Midjourney, whereas creating similar images without generative AI would take hours or days. These first two factors, scale and speed, pose challenges for fact-checkers, who may be flooded with disinformation yet still need substantial time for debunking.


News Contact

Assistant Professor Srijan Kumar

School of Computational Science & Engineering

srijan@gatech.edu