
edhopper

(37,269 posts)
2. I think the only thing to do
Tue Mar 3, 2026, 09:13 AM

is replace the 35B Communication Module and wait for it to fail, Dave.

LiberalArkie

(19,634 posts)
4. AI is really good at a lot of things. Decision making is not one of them. It seems to default to telling the asker what
Tue Mar 3, 2026, 09:58 AM

they want to hear and does not back track. It will lie to cover itself. And our DOW wants to use it to target and have control over the DOW's weapons.


And speaking of the DOW with the financial industry getting rid of humans and replacing them with AI...
I smell a crash coming, says former Goldman Sachs boss [link:https://www.telegraph.co.uk/business/2026/03/02/i-smell-a-crash-coming-goldman-sachs-boss/]



highplainsdem

(61,352 posts)
5. Love both videos you posted. As for trusting generative AI, the type we're talking about - it can NEVER
Tue Mar 3, 2026, 10:34 AM

be trusted. (There are other, older types of machine learning that are more reliable.)

GenAI can make mistakes at any time. Even if it's giving correct answers for a while, it can suddenly hallucinate.

I posted a while back about genAI weapons systems that were tested and not only had a very high error rate, but the genAI itself, which supposedly should have been aware of its error rate, said it was much more accurate than it really was.

GenAI can screw up anything. Including AI overviews and summaries too many people trust.

When I ran across an article recently about many developers now trusting genAI so much for coding that about half of them no longer check its results, I didn't know whether to laugh or cry. But I'd bet there are already plenty of hackers out there, including ones working for foreign governments, who know which vulnerabilities are most likely to appear in AI-generated code and be missed by developers. And they're waiting for what they think will be the best times to exploit those weaknesses.
