General Discussion
AI Pen test
And we are supposed to trust AI with military weapons?
The AI cannot even admit it is wrong.
LiberalArkie
(19,634 posts)

edhopper
(37,269 posts)
...is replace the 35B Communication Module and wait for it to fail, Dave.
doc03
(39,012 posts)

LiberalArkie
(19,634 posts)
...to hear and does not backtrack. It will lie to cover itself. And our DOW wants to use it to target and have control over the DOW's weapons.

And speaking of the DOW, with the financial industry getting rid of humans and replacing them with AI...
I smell a crash coming, says former Goldman Sachs boss [link:https://www.telegraph.co.uk/business/2026/03/02/i-smell-a-crash-coming-goldman-sachs-boss/]
highplainsdem
(61,352 posts)
...be trusted. (There are other, older types of machine learning that are more reliable.)
GenAI can make mistakes at any time. Even if it's giving correct answers for a while, it can suddenly hallucinate.
I posted a while back about genAI weapons systems that were tested and not only had a very high error rate, but the genAI itself, which supposedly should have been aware of its error rate, said it was much more accurate than it really was.
GenAI can screw up anything. Including AI overviews and summaries too many people trust.
When I ran across an article recently reporting that many developers now trust genAI so much for coding that about half of them no longer check its results, I didn't know whether to laugh or cry. But I'd bet there are already plenty of hackers out there, including ones working for foreign governments, who know which vulnerabilities are most likely to appear in AI-generated code and be missed by developers. And they're waiting for what they think will be the best times to exploit those weaknesses.