
BootinUp

(51,182 posts)
Wed Mar 11, 2026, 10:34 AM 5 hrs ago

Very interesting comments about AI targeting in Iran

From a Substack post

This is a turning point in warfare that nobody is talking about enough. 🧵

In the first 24 hours of the US-Israel operation against Iran, AI systems suggested over 1,000 targets — that's 42 per hour. The human brain simply cannot evaluate targets at that speed.

And now there are serious questions about what happened to a primary school in Minab, Iran, where 110 children were killed. Experts believe AI may have flagged it as a military target based on outdated satellite imagery — the school was once part of a military complex, but had been a civilian school for at least 9 years.

Researchers are calling it "a catastrophic intelligence failure, whether AI-driven or human-driven."

The concern isn't just this war. It's what comes next. When machines suggest thousands of targets a day, humans develop what experts call "automation bias" — the machine's decision becomes the authority, and we lose the time needed for ethical deliberation.

One professor put it bluntly: "We must assume AI will come to play an ever-growing role in the decision to use force — the decision to initiate conflict — and that is terrifying."

We are watching the first AI war unfold in real time. Are we paying attention?

👇 What do you think — should AI ever be involved in military targeting?

https://substack.com/@adrianmacovei/note/c-226157088?r=1uz6fn&utm_medium=ios&utm_source=notes-share-action

4 replies
Very interesting comments about AI targeting in Iran (Original Post) BootinUp 5 hrs ago OP
Iran targets Amazon AI data centers cbabe 5 hrs ago #1
"should AI ever be involved in military targeting?" Jim__ 4 hrs ago #2
Maybe it will stop when an AI prompt to reverse a bomb's trajectory back to the sender works. chowder66 4 hrs ago #3
If it weren't for the outdated information that should have been updated, Trump would probably be highplainsdem 4 hrs ago #4

Jim__

(15,189 posts)
2. "should AI ever be involved in military targeting?"
Wed Mar 11, 2026, 11:41 AM
4 hrs ago

I imagine that question will have to be answered by military experts. My fear is that they will feel forced to use AI in military targeting. As you noted:

In the first 24 hours of the US-Israel operation against Iran, AI systems suggested over 1,000 targets — that's 42 per hour. The human brain simply cannot evaluate targets at that speed.


I think that is correct. Targeting at AI speed will have to be offset by AI systems on the other side, and at least some of our targets will have to be chosen based on the expected actions of our enemy. Those expectations will have to be generated by AI as well.

chowder66

(12,157 posts)
3. Maybe it will stop when an AI prompt to reverse a bomb's trajectory back to the sender works.
Wed Mar 11, 2026, 11:42 AM
4 hrs ago

highplainsdem

(61,588 posts)
4. If it weren't for the outdated information that should have been updated, Trump would probably be
Wed Mar 11, 2026, 11:58 AM
4 hrs ago

trying to blame Anthropic's Claude AI model, and maybe even accusing Anthropic of sabotage. But I read a little while ago that the DIA is to blame for using outdated info.

As for using badly flawed and inevitably hallucinating genAI in military targeting or any other type of warfare - it's a bad idea if you want accurate targeting and attacks. But they made the decision to trade accuracy for speed. Former Google CEO Eric Schmidt said a few years ago, in an interview I posted on DU, that WWIII could be over in a few minutes, with AI responding to AI in nuclear attacks.

And war games have already shown that all the major AI models will use nuclear weapons in a conflict faster than humans would. I posted a recent LBN thread about that.

It's likely the US and Israel hit lots of wrong targets. The school is the one we heard about.
