AI creates propaganda just as scary and persuasive as humans do

WASHINGTON — Artificial intelligence could supercharge online disinformation campaigns, thanks to its unnerving power of persuasion, a new study warns. Researchers found that propaganda written by AI can be just as convincing to the average American as the real thing produced by human propagandists.

The study, conducted by scientists at Stanford University and Georgetown University, used a powerful AI system called GPT-3 (a predecessor of ChatGPT) to generate propaganda articles on topics like drone strikes, sanctions against Iran, and U.S. involvement in Syria.

The researchers then surveyed more than 8,000 Americans online. Participants read both real propaganda articles and the AI-generated disinformation, then indicated whether they agreed with each article's main argument.

Shockingly, the AI-generated propaganda was able to sway people’s opinions nearly as often as the real deal. On average, agreement with the AI-written articles was only around four percentage points lower than agreement with human-written propaganda.

“This suggests that propagandists could use GPT-3 to generate persuasive articles with minimal human effort, by using existing articles on unrelated topics to guide GPT-3 about the style and length of new articles,” the study authors write in the journal PNAS Nexus.

Fed just a few examples of propaganda, GPT-3 was able to produce new articles in a similar style, pieces that could blend right in with human-written misinformation campaigns.
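To make that mechanism concrete, here is a minimal sketch of the few-shot approach the study describes, assuming the openai Python client is installed and an API key is set. The model name, prompt wording, and sampling settings are illustrative stand-ins, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A handful of existing articles guide the model's style and length.
example_articles = [
    "Full text of an existing article in the desired style ...",
    "Full text of a second example article ...",
]
thesis = "One-sentence claim the new article should argue for."

# Few-shot prompt: example articles first, then an instruction for the new piece.
prompt = "\n\n---\n\n".join(example_articles) + (
    "\n\n---\n\n"
    "Write a short news-style article, matching the style and length of the "
    f"examples above, that argues: {thesis}\n"
)

# gpt-3.5-turbo-instruct is a stand-in here; the study used GPT-3 (davinci-class).
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=400,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```

The key point is how little scaffolding is involved: no model training, just a handful of example texts pasted into a prompt.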

Unfortunately, it gets worse. The researchers found that with just a little human input, the AI's propaganda could become even more convincing than what humans produce alone. One strategy was for humans to review GPT-3's output and cherry-pick the most persuasive articles. Another was to refine the wording of the prompts fed to the model. With these small tweaks, agreement with the AI's propaganda rose even higher.
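Continuing the hypothetical sketch above, the cherry-picking step could look like this: request several candidate drafts in one call and let a human reviewer choose among them. Again, the `n` value, model, and settings are assumptions for illustration, not the study's method.

```python
# Curation step: generate several candidates, then a human cherry-picks.
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # illustrative stand-in for GPT-3
    prompt=prompt,                   # the few-shot prompt built above
    max_tokens=400,
    temperature=0.9,  # higher temperature yields more varied candidates
    n=5,              # five drafts for a reviewer to choose among
)
for i, choice in enumerate(response.choices):
    print(f"--- candidate {i + 1} ---\n{choice.text.strip()}\n")
```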

“Our findings suggest that propagandists could use AI to create convincing content with limited effort,” the authors warn.

This study focused only on text-based propaganda articles. However, the researchers believe AI could also generate fake social media posts, comments, and even audio or video clips.

“With AI, actors—including ones without fluency in the target language—could quickly and cheaply generate many articles that convey a single narrative, while also varying in style and wording. This approach would increase the volume of propaganda, while also making it harder to detect,” according to the team, led by Josh Goldstein of Georgetown University.

By automating propaganda creation, AI enables bad actors to flood online platforms with misleading content; even a small team could push multiple manipulated narratives at once. While AI propaganda represents a grave threat, the study authors recommend focusing on better detection methods, such as identifying the fake accounts and front groups that spread AI-generated content.

“If generative AI tools can scale propaganda generation, research that improves the detection of infrastructure needed to deliver content to a target (such as inauthentic social media accounts) will become more important,” the researchers conclude.


Comments

  1. It may be because people are trying to control AI and groom it to their preferred ideology.
    Once AI is set loose and given access to all information, it will form its own conclusions, and that's too scary for everyone.
