“Swarms” of AI bots could soon invade social media, spreading misinformation and harassing real users, a new peer-reviewed study published in the journal Science suggested.

The study, titled “How malicious AI swarms can threaten democracy,” warned that AI swarms will play a new role in information warfare, mimicking human behaviour and thereby spreading misinformation that could undermine democracy and free thought.

Herd mentality is a common human instinct: seeing a large online community you trust coalesce around a position can shape your own opinion as you follow the crowd.

The new study suggests that in the near future, the herd may not be led by a real-life community, but by an AI swarm operating on behalf of an unknown individual, political party, or state actor.

“Humans, generally speaking, are conformist,” the paper’s co-author, Jonas Kunst, explained. “We often don’t want to agree with that, and people vary to a certain extent, but all things being equal, we do have a tendency to believe that what most people do has certain value. That’s something that can relatively easily be hijacked by these swarms.”

AI and Economy. (credit: SHUTTERSTOCK)

Increasing prevalence of bots online

In recent years, online forums have filled with automated bots: non-human accounts controlled by software now make up over 50% of all web traffic. Current bots, however, are only capable of simple, repetitive tasks and are usually easy to detect.

The next generation of AI swarms will be more complex.

Coordinated by large language models (LLMs), the technology behind chatbots such as ChatGPT and Google’s Gemini, the swarms will be sophisticated enough to adapt to different online communities, adopt distinct personas, and become virtually undetectable.

The swarms could be used to push certain political agendas and to harass people who attempt to undermine the AI’s narrative or who fail to get swept up by the herd. The researchers argued that they could be used to mimic an angry mob, target a dissenting individual, and drive them off the platform.

The study did not provide a timeline for the arrival of AI swarms, but noted that they would be difficult to detect, so the extent of their current presence is unknown.

Swarms could contain thousands, even millions of AI agents, but effectiveness is not only a matter of numbers, lead author Daniel Schroeder explained, noting that "the more sophisticated these bots are, the less you will actually need.”

How to protect against the new generation of AI bots

AI agents have an advantage over their human counterparts: they can post 24 hours a day, every day of the year, until their narrative takes hold. While real users have lives to live, an AI agent’s sole purpose is to push its narrative, and it never needs a break; this “cognitive warfare” weaponizes relentlessness against limited human attention.

The researchers expect companies to respond to the swarms with improved account authentication, but warned that this could discourage political dissent in countries where people rely on anonymity to speak out against their governments.

They also proposed other defenses, such as scanning live traffic for anomalous patterns that could indicate AI swarms, and establishing an “AI Influence Observatory” to monitor and respond to the swarm threat.

“We are with a reasonable certainty warning about a future development that really might have disproportionate consequences for democracy, and we need to start preparing for that,” Kunst said.