General Discussion
Very interesting comments about AI targeting in Iran
From a Substack post
This is a turning point in warfare that nobody is talking about enough. 🧵
In the first 24 hours of the US-Israel operation against Iran, AI systems suggested over 1,000 targets; that's 42 per hour. The human brain simply cannot evaluate targets at that speed.
And now there are serious questions about what happened at a primary school in Minab, Iran, where 110 children were killed. Experts believe AI may have flagged it as a military target based on outdated satellite imagery: the school was once part of a military complex, but had been a civilian school for at least 9 years.
Researchers are calling it "a catastrophic intelligence failure, whether AI-driven or human-driven."
The concern isn't just this war. It's what comes next. When machines suggest thousands of targets a day, humans develop what experts call "automation bias": the machine's decision becomes the authority, and we lose the time needed for ethical deliberation.
One professor put it bluntly: "We must assume AI will come to play an ever-growing role in the decision to use force, the decision to initiate conflict, and that is terrifying."
We are watching the first AI war unfold in real time. Are we paying attention?
👇 What do you think: should AI ever be involved in military targeting?
https://substack.com/@adrianmacovei/note/c-226157088?r=1uz6fn&utm_medium=ios&utm_source=notes-share-action
cbabe
(6,563 posts)

Jim__
(15,189 posts)

I imagine that question will have to be answered by military experts. My fear is that they will be forced to use AI in military targeting. As you noted:
I think that is correct. AI targeting will have to be offset by AI systems, and at least some of our targets will have to be chosen based on the expected actions of our enemy. Those expectations will have to be generated by AI.
chowder66
(12,157 posts)

highplainsdem
(61,588 posts)

trying to blame Anthropic's Claude AI model, and maybe even accusing Anthropic of sabotage. But I read a little while ago that the DIA is to blame for using outdated info.
As for using badly flawed and inevitably hallucinating genAI in military targeting, or any type of warfare: it's a bad idea if you want accurate targeting and attacks. But they made the decision to trade accuracy for speed. Former Google CEO Eric Schmidt said a few years ago, in an interview I posted on DU, that WWIII could be over in a few minutes, with AI responding to AI in nuclear attacks.
And war games have already shown all major AI models will use nuclear weapons in a conflict faster than humans would. I posted a recent LBN thread about that.
It's likely the US and Israel hit lots of wrong targets. The school is just the one we heard about.