General Discussion
Science Is Drowning in AI Slop (Ross Andersen in the Atlantic. Gift link.)
Gift link that Andersen posted on both Bluesky and X:
https://www.theatlantic.com/science/2026/01/ai-slop-science-publishing/685704/?gift=S4EwRLGNogt2Kqjs1lNdf-xGfJttf2MOaA3NRxCqJjM&utm_source=copy-link&utm_medium=social&utm_campaign=share
Science Is Drowning in AI Slop
Peer review has met its match.
By Ross Andersen
January 22, 2026, 8:49 AM ET
-snip-
Some scientific disciplines have become hotbeds for slop. Publishers are sharing intelligence about the most egregious ones, according to Jennifer Wright, the head of research integrity and publication ethics at Cambridge University Press. Unfortunately, many are fields that society would very much like to be populated with genuinely qualified scientists: cancer research, for one. The mills have hit on a very effective template for a cancer paper, Day told me. Someone can claim to have tested the interactions between a tumor cell and just one protein of the many thousands that exist, and as long as they aren't reporting a dramatic finding, no one will have much reason to replicate their results.
AI can also generate the images for a fake paper. A now-retracted 2024 review paper in Frontiers in Cell and Developmental Biology featured an AI-generated illustration of a rat with hilariously disproportionate testicles, which not only passed peer review but was published before anyone noticed. As embarrassing as this was for the journal, little harm was done. Much more worrying is the ability of generative AI to conjure up convincing pictures of thinly sliced tissue, microscopic fields, or electrophoresis gels that are commonly used as evidence in biomedical research.
-snip-
A similar influx of AI-assisted submissions has hit bioRxiv and medRxiv, the preprint servers for biology and medicine. Richard Sever, the chief science and strategy officer at the nonprofit organization that runs them, told me that in 2024 and 2025, he saw examples of researchers who had never once submitted a paper sending in 50 in a year. Research communities have always had to sift out some junk on preprint servers, but this practice makes sense only when the signal-to-noise ratio is high. "That won't be the case if 99 out of 100 papers are manufactured or fake," Sever said. "It's potentially an existential crisis."
-snip-
When I called A. J. Boston, a professor at Murray State University who has written about this issue, he asked me if I'd heard of the dead-internet conspiracy theory. Its adherents believe that on social media and in other online spaces, only a few real people create posts, comments, and images. The rest are generated and amplified by competing networks of bots. Boston said that in the worst-case scenario, the scientific literature might come to look something like that. AIs would write most papers, and review most of them, too. This empty back-and-forth would be used to train newer AI models. Fraudulent images and phantom citations would embed themselves deeper and deeper in our systems of knowledge. They'd become a permanent epistemological pollution that could never be filtered out.
It's been only three years since ChatGPT was released.
Three years.
I wrote about the deluge of AI slop that is gushing into scientific discourse
— Ross Andersen (@rossandersen.bsky.social) 2026-01-22T14:31:11.310Z
www.theatlantic.com/science/2026...
3 replies
Science Is Drowning in AI Slop (Ross Andersen in the Atlantic. Gift link.) (Original Post)
highplainsdem
23 hrs ago
OP
SheltieLover (77,651 posts)
1. AI needs to be illegal
highplainsdem (60,486 posts)
2. Yes. And the AI bros should be in prison.
SheltieLover (77,651 posts)
3. Absolutely!