LOS ANGELES — Fake social media accounts programmed to push a certain political agenda, commonly referred to as “bots,” played an uncomfortably large role in the 2016 U.S. presidential election. These accounts, whose sole purpose is to spread misinformation and sow political dissension, represent a troubling trend in modern politics. Now, a new study finds that bots are evolving in order to better mimic humans and evade detection, setting the stage for a 2020 presidential election rife with misinformation.
Researchers from the University of Southern California examined bot behavior on social media during the 2018 U.S. midterm elections and compared it to what was observed online during the 2016 presidential election. They analyzed nearly 250,000 social media accounts that had posted about both elections, and identified more than 30,000 of those accounts as bots.
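The summary doesn't spell out how those 30,000 accounts were flagged, but bot detection of this kind typically scores each account on behavioral features and applies a cutoff. Here is a minimal, purely illustrative Python sketch; the account data, the two features, the weights, and the threshold are all invented, and production systems such as Botometer rely on hundreds of features and a trained classifier rather than a hand-tuned formula:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float  # posting rate
    retweet_ratio: float   # share of posts that are retweets

def bot_score(acct: Account) -> float:
    """Toy heuristic score in [0, 1]: high-volume, retweet-heavy
    accounts score higher. Illustrative only -- not the study's method."""
    volume = min(acct.tweets_per_day / 100.0, 1.0)  # cap at 100 tweets/day
    return 0.5 * volume + 0.5 * acct.retweet_ratio

# Hypothetical accounts for demonstration.
accounts = [
    Account("@human_example", tweets_per_day=6, retweet_ratio=0.3),
    Account("@bot_example", tweets_per_day=280, retweet_ratio=0.9),
]

BOT_THRESHOLD = 0.6  # made-up cutoff
for acct in accounts:
    label = "bot" if bot_score(acct) >= BOT_THRESHOLD else "human"
    print(f"{acct.handle}: score={bot_score(acct):.2f} -> {label}")
```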
Interestingly, they noticed that in 2016 bots relied primarily on retweets and on posting large volumes of tweets built around the same subject or message. But actual human Twitter users weren't retweeting as much in 2018 as they had been in 2016, and the bots appear to have picked up on this and adapted their approach, posting fewer mass-volume messages and retweets on a single topic. This suggests that the AI behind these bots is monitoring human behavior on social media platforms and adapting to mimic it.
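The shift the researchers describe would show up in a simple aggregate statistic: the share of each cohort's activity that consists of retweets, compared across the two election years. A short sketch of that comparison follows; the counts are invented for illustration and are not the study's figures:

```python
# Hypothetical per-cohort activity counts -- not data from the study.
activity = {
    # (year, cohort): (retweets, original_tweets)
    (2016, "human"): (620_000, 380_000),
    (2016, "bot"):   (910_000, 90_000),
    (2018, "human"): (410_000, 590_000),
    (2018, "bot"):   (520_000, 480_000),
}

def retweet_share(retweets: int, originals: int) -> float:
    """Fraction of a cohort's posts that are retweets."""
    return retweets / (retweets + originals)

for (year, cohort), (rts, origs) in sorted(activity.items()):
    print(f"{year} {cohort:>5}: {retweet_share(rts, origs):.0%} retweets")

# If bots track humans, the drop in the human retweet share from 2016
# to 2018 should be mirrored by a drop in the bot share -- the pattern
# these invented numbers are set up to show.
```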
Bots have also started taking a “multi-bot” approach, in which multiple fake accounts post in unison about a political issue in order to imitate actual human interaction on social media.
For whatever reason, human social media users began engaging with each other more through replies to original posts during the 2018 elections. The bots kept pace: researchers noticed that they started posting replies much more often and even tried to steer conversation topics by posting polls, a type of social media post that actively invites reply comments and debate. Polls are also usually associated with reputable, verified accounts, another sign that these evolving bots are becoming more skilled at masquerading as legitimate users.
Researchers cited one example in particular: a bot account posted a Twitter poll asking whether all voters in federal elections should be required to show a photo ID before casting a ballot, then urged users to vote in the poll and retweet it so others would see it.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies,” explains lead author Emilio Ferrara in a media release. “Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”
The study is published in the scientific journal First Monday.