Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region Forums

AI coding can be what the Harvard Business Review calls "workslop" - and it can be catastrophic
https://leodemoura.github.io/blog/2026/02/28/when-ai-writes-the-worlds-software.html

When AI Writes the World's Software, Who Verifies It?

AI Is Rewriting the World's Software
Code Metal recently raised $125 million to rewrite defense industry code using AI. Google and Microsoft both report that 25-30% of their new code is AI-generated. AWS used AI to modernize 40 million lines of COBOL for Toyota. Microsoft's CTO predicts that 95% of all code will be AI-generated by 2030. The rewriting of the world's software is not coming. It is underway.
-snip-
Andrej Karpathy described the pattern: "I 'Accept All' always, I don't read the diffs anymore." When AI code is good enough most of the time, humans stop reviewing carefully. Nearly half of AI-generated code fails basic security tests, and newer, larger models do not generate significantly more secure code than their predecessors. The errors are there. The reviewers are not. Even Karpathy does not trust it: he later outlined a cautious workflow for "code [he] actually care[s] about," and when he built his own serious project, he hand-coded it.
-snip-
The Harvard Business Review recently documented what it calls "workslop": AI-generated work that looks polished but requires someone downstream to fix. When that work is a memo, it is annoying. When it is a cryptographic library, it is catastrophic. As AI accelerates the pace of software production, the verification gap does not shrink. It widens. Engineers stop understanding what their systems do. AI outsources not just the writing but the thinking.
The threat extends beyond accidental errors. When AI writes the software, the attack surface shifts: an adversary who can poison training data or compromise the model's API can inject subtle vulnerabilities into every system that AI touches. These are not hypothetical risks. Supply chain attacks are already among the most damaging in cybersecurity, and AI-generated code creates a new supply chain at a scale that did not previously exist. Traditional code review cannot reliably detect deliberately subtle vulnerabilities, and a determined adversary can study the test suite and plant bugs specifically designed to evade it. A formal specification is the defense: it defines what "correct" means independently of the AI that produced the code. When something breaks, you know exactly which assumption failed, and so does the auditor.
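The linked blog is by the creator of the Lean theorem prover, so for context, here is a toy Lean sketch of what such a specification can look like. This example is my illustration, not from the post; it assumes `List.Perm` from the Lean/Mathlib libraries. The point is that the spec constrains any candidate implementation, however it was written:

```lean
-- Hypothetical sketch: a specification of sorting that is
-- independent of any implementation, human- or AI-written.

-- A list is Sorted when every adjacent pair is ordered.
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- A candidate `sort` is correct exactly when, for every input,
-- its output is ordered and is a permutation of that input.
-- (`List.Perm` is assumed from the standard library/Mathlib.)
def CorrectSort (sort : List Nat → List Nat) : Prop :=
  ∀ xs : List Nat, Sorted (sort xs) ∧ List.Perm (sort xs) xs
```

An AI-generated `sort` that merely passes a test suite could still hide a planted bug; a machine-checked proof of `CorrectSort sort` could not, because the definition of correctness lives outside the code being checked.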
-snip-
3 replies
AI coding can be what the Harvard Business Review calls "workslop" - and it can be catastrophic (Original Post)
highplainsdem
8 hrs ago
OP
dalton99a
(93,625 posts)
1. About Andrej Karpathy
Andrej Karpathy
@karpathy
I like to train large deep neural nets. Previously Director of AI @ Tesla, founding team @ OpenAI, PhD @ Stanford.
highplainsdem
(61,486 posts)
2. I've had some exchanges with him online. We don't always agree.
Editing to add that at least he's smart to realize AI shouldn't be used for work you care about.
highplainsdem
(61,486 posts)
3. kick