General Discussion
How malicious AI swarms can threaten democracy (new paper in Science magazine + link to free copy)
One of the authors, Jay Van Bavel, posted about this on both Bluesky and X, linking to both the paper at Science.org, which requires a subscription or AAAS membership, and to a free preprint for those who don't have access to Science.
The free preprint is at https://osf.io/preprints/osf/qm9yk_v4
The paper in Science magazine is at https://www.science.org/doi/10.1126/science.adz1697
Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors on a population-wide level (1). Large language models (LLMs) and autonomous agents (2) let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility (3) and inexpensively create falsehoods that are rated as more human-like than those written by humans (3, 4). Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can be used to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multiagent architectures (2), these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design, commercial incentives, and governance, we prioritize interventions at multiple leverage points, focusing on pragmatic mechanisms over voluntary compliance.
From the free preprint at https://osf.io/preprints/osf/qm9yk_v4 (this excerpt is just part of two long paragraphs):
This chorus erodes the independence essential to collective intelligence and democracy, already weakened by pervasive social influence operations on contemporary platforms. Beyond social norms, this directly undermines human cognitive information processing. The wisdom of crowds, where aggregated judgments outperform experts, depends critically on independence between judgments. While rudimentary botnets already replicate messages to simulate consensus, swarms of AI agents can do so with far greater sophistication, adaptivity, and contextual awareness. Citizens may then overestimate the informational value of this artificial consensus and may further magnify it by sharing the information themselves...
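The independence point above can be illustrated with a toy simulation (my own sketch, not from the paper): when every guess carries its own independent error, averaging cancels the noise, but a shared bias, such as an artificially amplified consensus, survives averaging no matter how large the crowd is. All names and parameter values here are illustrative assumptions.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate
N = 1000            # crowd size (illustrative)

# Independent judgments: each person's error is their own.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# Correlated judgments: a single shared bias (e.g. a fabricated
# consensus everyone has seen) pushes the whole crowd one way,
# on top of smaller individual noise.
shared_bias = random.gauss(0, 20)
correlated = [TRUE_VALUE + shared_bias + random.gauss(0, 6) for _ in range(N)]

def avg_error(guesses):
    """Absolute error of the crowd's mean estimate."""
    return abs(sum(guesses) / len(guesses) - TRUE_VALUE)

print(f"independent crowd error: {avg_error(independent):.2f}")
print(f"correlated crowd error:  {avg_error(correlated):.2f}")
```

With independent errors, the mean's error shrinks roughly with the square root of the crowd size; with a shared bias, it converges to that bias instead, which is why fabricated consensus undermines aggregation even among many honest participants.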
And you can bet that Trump has already been told about this by his aides and the AI bros allied with him. For more on why they're allies, see https://www.democraticunderground.com/100220960669
AI now allows propaganda campaigns to reach unprecedented scale and precision.
— Jay Van Bavel, PhD (@jayvanbavel.bsky.social) 2026-01-22T21:44:35.438Z
Our new paper in Science explains how a disruptive threat is emerging: swarms of collaborative, malicious AI agents.
www.science.org/doi/10.1126/...
Led by @daniel-thilo.bsky.social & @kunstjonas.bsky.social
Free preprint kindly shared here: bsky.app/profile/jane...
— Jay Van Bavel, PhD (@jayvanbavel.bsky.social) 2026-01-22T23:16:53.441Z
flying rabbit
(4,932 posts)

gulliver
(13,760 posts)

We have the technology to fight impersonations, whether single-person or group: digital signatures and government-controlled identity verification.
Fraud is nothing new. AI didn't invent it. And natural intelligence is very often less trustworthy than AI.
highplainsdem
(60,434 posts)gulliver
(13,760 posts)You can ask an AI about something, and it will generally tell you if it's sound or not. It's not perfect, but it's an improvement. Non-AI software was already very bad for us, intolerably so I would say. AI might help us equalize things.
highplainsdem
(60,434 posts)gulliver
(13,760 posts)... when you do.
I agree, they absolutely can dumb people down. Google does that too. When I forget a name or a word, for example, I no longer let myself Google it. I wait for my mind to have a chance to struggle with it. Eventually it floats to the surface. If you let Google do it, it hurts your memory. It's called the Google Effect.
If you use AI to learn things and magnify your natural abilities, it makes you more knowledgeable. If you just have it do your work for you, that's bad.
LearnedHand
(5,259 posts)

By Daniel Suarez:
Daemon (Book 1 of 2)
Freedom(tm) (Book 2 of 2)
Qutzupalotl
(15,687 posts)

This sentence describes the illness well and points to the cure. It is human nature to align with your peers. But as hard as it is, we must learn to form our own opinions and defend them, even against the in-group.
Our thoughts and opinions are two of the very few things under our complete control, according to the Stoics. It is not surprising that technology has developed a method to short-circuit that, to subtly shift our thinking based on artificial consensus. It is also not surprising that this is being exploited by those who would control us.
So we must consciously fight against these attempts at mass brainwashing by first being aware of them, then cultivating sources of information that are strictly factual and short on hyperbole. Ditching social media and opinion news is one way to combat the propaganda; continuing to use these tools, but warily, is another.