A group of prominent experts in AI and online misinformation have issued a stark warning about the emerging threat of “AI swarms” – coordinated networks of human-imitating AI agents that could be deployed to manipulate public opinion and undermine democracy.
The warning, published in the journal Science, comes from a global consortium including the Nobel Peace Prize-winning free speech activist Maria Ressa, as well as leading researchers from institutions such as Berkeley, Harvard, Oxford, Cambridge and Yale. They caution that these AI swarms, capable of infiltrating online communities and manufacturing the appearance of consensus, could be used by would-be autocrats to sway populations and even overturn election results.
“A disruptive threat is emerging: swarms of collaborative, malicious AI agents,” the authors state. “These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy.”
The experts point to early examples of AI-powered influence operations being used in recent elections in Taiwan, India and Indonesia. They predict this technology could be deployed at scale to disrupt the 2028 US presidential election.
“It’s just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools,” said Daniel Thilo Schroeder, a research scientist at the Sintef research institute in Oslo, who has been simulating such swarms in laboratory conditions.
The threat is being supercharged by advances in AI’s ability to pick up on the tone and content of online discourse, allowing bots to mimic human social dynamics more convincingly. Progress in “agentic” AI also means such systems can autonomously plan and coordinate their actions across social media, messaging channels, blogs and email.
In Taiwan, where voters are regularly targeted by Chinese propaganda, AI bots have been increasing their engagement with citizens on platforms like Threads and Facebook in recent months, providing “tonnes of information that you cannot verify” and encouraging young Taiwanese to remain neutral on the China-Taiwan dispute.
“It’s not telling you that China’s great, but it’s [encouraging them] to be neutral,” said Puma Shen, a Taiwanese Democratic Progressive Party MP and campaigner against Chinese disinformation. “This is very dangerous, because then you think people like me are radical.”
While some experts have expressed scepticism about the pace of AI progress, Michael Wooldridge, professor of the foundations of AI at Oxford University, does not share that doubt. “I think it is entirely plausible that bad actors will try to mobilise virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion,” he said.
The experts are calling for coordinated global action to counter the risk, including “swarm scanners” and watermarked content to detect and combat AI-run misinformation campaigns. They warn that without such measures, democracy itself could be under threat from these increasingly sophisticated and coordinated AI systems.
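The paper does not spell out how a “swarm scanner” would work in practice. As a rough illustration only, and not the researchers’ own method, one of the simpler signals such a tool might look for is clusters of accounts posting near-identical messages within a short time window. The sketch below is a hypothetical Python example; the function names, thresholds and sample data are invented for illustration.

```python
# Hypothetical sketch of one signal a "swarm scanner" might use:
# flagging groups of accounts that post near-duplicate text within a
# short time window. Thresholds and names are illustrative assumptions,
# not taken from the Science paper.

from difflib import SequenceMatcher


def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Return True if two posts are near-duplicates of each other."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def flag_coordinated_accounts(posts, window_seconds=3600, min_cluster=3):
    """
    posts: iterable of (account_id, timestamp_seconds, text) tuples.
    Returns sets of accounts that posted near-identical text within the
    same time window -- a crude coordination signal, nothing more.
    """
    posts = sorted(posts, key=lambda p: p[1])  # order by time
    clusters = []  # each entry: (first_timestamp, reference_text, {accounts})
    for account, ts, text in posts:
        placed = False
        for ref_ts, ref_text, accounts in clusters:
            if ts - ref_ts <= window_seconds and similar(text, ref_text):
                accounts.add(account)
                placed = True
                break
        if not placed:
            clusters.append((ts, text, {account}))
    return [accounts for _, _, accounts in clusters if len(accounts) >= min_cluster]


if __name__ == "__main__":
    sample = [
        ("bot_a", 0, "Stay neutral on the dispute, both sides are the same."),
        ("bot_b", 120, "Stay neutral on the dispute -- both sides are the same!"),
        ("bot_c", 300, "Stay neutral on the dispute, both sides are the same."),
        ("human", 400, "Here is my honest take after reading both platforms."),
    ]
    print(flag_coordinated_accounts(sample))  # -> [{'bot_a', 'bot_b', 'bot_c'}]
```

A real detector would combine many such signals, and the researchers’ proposed scanners would likely operate at platform scale on far richer behavioural data; this toy heuristic is only meant to make the idea concrete.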