AI-generated text is overwhelming institutions - setting off a no-win 'arms race' with AI detectors
Published: February 5, 2026 8:27am EST
Bruce Schneier
Adjunct Lecturer in Public Policy, Harvard Kennedy School
Nathan Sanders
Affiliate, Berkman Klein Center for Internet & Society, Harvard University
(The Conversation) In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone. Other fiction magazines have also reported a high number of AI-generated submissions.
This is only one example of a ubiquitous trend. Legacy systems relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms these systems because the humans on the receiving end can't keep up.
This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism and hiring, it's the same story.
Like Clarkesworld's initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students. ... (more)
https://theconversation.com/ai-generated-text-is-overwhelming-institutions-setting-off-a-no-win-arms-race-with-ai-detectors-274720
highplainsdem
(60,770 posts)
know it was trained illegally on stolen intellectual property - and at this point you'd have to be pretty ignorant of all the news about AI the last few years to be unaware of the IP theft.
It's ALWAYS foolish to use genAI voluntarily because you're dumbing yourself down when you use it, and hurting your credibility in the eyes of everyone who's aware of the fraud and IP theft.
Clarkesworld and at least some other magazines not only reject AI submissions, but refuse future submissions from the fake writers who thought it was OK to use AI.
SheltieLover
(78,138 posts)
stage left
(3,235 posts)
There are five of us. Some use AI to help them in plotting. I'm horrible at plotting, but I refuse to use AI. It steals from artists and is busily turning what's left of literature into slop, fast food fiction. I'll share this with my fellow scribblers. We don't want magazines to stop taking submissions.
SheltieLover
(78,138 posts)
stage left
(3,235 posts)
Jacson6
(1,829 posts)
We did all our writing in bluebooks at the test site and did our testing with multiple choice in a secure room with nothing allowed but a No. 2 pencil. If you were caught cheating, you got a zero for the class. They did catch two students in one of our test sessions. Our writing tests lasted 1 to 1.5 hours.
But really, cheating hurts the student in the long run.