
highplainsdem

(61,750 posts)
Tue Mar 17, 2026, 10:48 AM

Google's Gemini and Musk's Grok are telling people a real photo from the war is fake (Guardian, 3/17)

https://www.theguardian.com/global-development/2026/mar/17/atrocity-ai-slop-verify-facts-iran-minab-graves

A photo of Iran’s bombed schoolgirl graveyard went around the world. Was it real, or AI?
Numerous faked images and a string of startlingly inaccurate responses from Gemini and Grok are part of a tidal wave of AI slop engulfing coverage of the Iran war

Tess McClure
Tue 17 Mar 2026 01.00 EDT

-snipping opening paragraphs about a photo of the Iranian cemetery preparing to bury schoolgirls killed by a US military strike, and AI being asked if the photo is real-

Ask Gemini, the AI service powered by Google, and the answer you receive is no – in fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away. Rather than graves for small girls killed by a missile, the image “depicts a mass burial site in Kahramanmaraş, Turkey” after the 7.8 magnitude earthquake that struck in 2023. “This specific aerial perspective became one of the most widely shared images of the disaster,” Gemini says, “illustrating the sheer scale of the loss.”

Seeing the same burial image on social media, others turned to X’s AI assistant Grok to check its veracity. Like Gemini, Grok will breezily assure you the photo is not from Iran at all – although it lands on a different date, disaster and location. The image is “from Rorotan Cemetery in Jakarta, Indonesia – a July 2021 stock photo of Covid mass burials. Not Minab,” it says.

In both cases, the AI answers sound sure: they don’t equivocate, and even provide “sources” for the original image, should you choose to check them. Follow the thread to examine those, however, and you’ll begin to hit dead ends: either the image doesn’t appear at all, or the link provided is to a news report that doesn’t exist. For all their impression of clarity and precision, the AIs are simply wrong.

The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which experts say show signs of tampering or digital manipulation. The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that is engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.

-snip-


Much more at the link about how wrong answers from these chatbots, AI tools that too many people trust, are adding greatly to the misinformation human factcheckers have to deal with now. They include an example of a chatbot giving a series of equally incorrect answers after being told its first answer is apparently wrong and it should search again, with the chatbot apologizing very politely for the oversight and confidently offering another wrong answer each time.

The article points out why these AI models can't be trusted - that they're "probabilistic language models, constructing sentences piece by piece based on which next word has the highest likelihood of being appropriate." I've often seen them described as fancy autocomplete.
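The "fancy autocomplete" description can be made concrete with a toy sketch. This is not how any real chatbot is implemented (real models use neural networks over tokens, not word-count tables); it's a deliberately tiny illustration, with invented counts, of the core idea the Guardian describes: pick whichever next word is statistically most likely, with no notion of whether the resulting sentence is true.

```python
# A toy bigram "model": for each word, invented counts of which word
# followed it in some hypothetical training text. Illustration only.
bigram_counts = {
    "the": {"photo": 5, "graves": 3, "war": 2},
    "photo": {"is": 6, "shows": 4},
    "is": {"fake": 6, "real": 4},
}

def next_word(word):
    """Pick the most frequent continuation, the way autocomplete does."""
    counts = bigram_counts[word]
    return max(counts, key=counts.get)

def generate(start, length):
    """Chain most-likely next words together into a fluent sentence."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the", 3))  # "the photo is fake" - fluent, never fact-checked
```

The toy model confidently completes "the photo is..." with whichever word scored highest in its table. Nothing in the procedure consults reality, which is exactly why fluency and confidence are no evidence of accuracy.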

They are not, and can never be, reliable sources of information. As the third paragraph of the Guardian excerpt above mentions, even when a bot provides what it says are its sources, those sources may not exist at all, or may not contain what the bot says they contain.

So don't be fooled by the bot listing sources that are usually considered reliable.

The ONLY way to check whether what the bot is saying actually came from those sources is to click on every link yourself and check carefully, especially if you're planning to spread what the bot told you to anyone else on social media, including DU.
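If you do want to check every link by hand, the first step is simply collecting them. Here's a minimal sketch of pulling every http(s) URL out of a bot's answer so each one can be opened and verified yourself; the regex, the helper name, and the example answer text are all mine, not from any real chatbot API.

```python
import re

def extract_urls(text):
    """Pull every http(s) link out of a chatbot answer so each one
    can be opened and checked by hand."""
    return re.findall(r"https?://[^\s\"'<>)\]]+", text)

# Hypothetical chatbot answer with "sources" appended, for illustration.
answer = (
    "The image is from Kahramanmaras, Turkey. "
    "Sources: https://example.com/report-2023 and https://example.org/photo"
)

for url in extract_urls(answer):
    print(url)  # open each of these yourself; some may be dead ends
```

Extracting the links is the easy part; the real work is loading each one and confirming it exists and actually says what the bot claims it says.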

And if you're going to do that checking to avoid becoming an AI-powered source of misinformation yourself, then it makes MUCH more sense to quote and link to real sources, and to write your own text around that information - as you always would have before generative AI's fancy autocomplete - and actually give people REAL information.

This morning's Guardian article links to a BBC story from last October about a large study of how inaccurate these AI responses are:

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content

Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

Published: 12:01 am, 22 October 2025
Updated: 06:10 pm, 22 October 2025


New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.

The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools.

Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:
• 45% of all AI answers had at least one significant issue.
• 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
• 20% contained major accuracy issues, including hallucinated details and outdated information.
• Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
• Comparison between the BBC’s results earlier this year and this study show some improvements but still high levels of errors.

-snip-


Much more at that link, too.

Please do NOT use AI as a source of information, especially as information you're offering to other people. It's a bad idea even if you're just looking for info for yourself.

It becomes irresponsible if you post the bot's answer anywhere without both identifying it as coming from AI - you should say which AI as well - AND checking every single detail, including whether the bot's specific sources exist and, if they do, whether they support what the bot claims they said.

And yes, I know that takes much more time than just playing Ask The Bot with your favorite AI playmate. But wanting to save time, and deciding to trust AI despite its high error rate, are not valid excuses for dumping possible misinformation on anyone. Even if you think you're giving people fair warning by saying the answer came from AI, you're forcing others either to trust AI because you're willing to, or to do all the checking you couldn't be bothered to do yourself.

highplainsdem

(61,750 posts)
2. And yet people keep trusting AI without checking. Sometimes it seems as if just the "artificial intelligence"
Tue Mar 17, 2026, 11:09 AM

name has brainwashed them.

And sometimes they're simply hooked on using it. Chatbots are designed to be engaging and to flatter users.

teach1st

(6,021 posts)
4. Just for fun, I asked Gemini about the claims in the article
Tue Mar 17, 2026, 11:30 AM

Here's what Gemini spat out:

The photograph described in the Guardian article is authentic. Despite the conflicting reports from AI assistants, the image accurately depicts a mass burial site in Minab, Iran, following a deadly missile strike on the Shajareh Tayyebeh Elementary School on February 28, 2026.

According to investigative reports from The Guardian, Amnesty International, Human Rights Watch, and The New York Times, here are the verified facts:

1. The Event and Location
Location: Minab, Hormozgan Province, southern Iran.

Incident: On February 28, 2026, a missile strike hit an elementary school and an adjacent IRGC (Islamic Revolutionary Guard Corps) compound.

Casualties: Reports indicate over 100 children (mostly girls aged 7–12) and several teachers were killed.

The Image: The aerial photo shows excavators preparing rows of small graves for these victims on March 2, 2026.

2. Verification of the Image
The image was captured by the Iranian Foreign Media Department/Reuters and has been cross-referenced by experts using:

Satellite Imagery: Planet Labs and other satellite data confirm the location and the appearance of new graves at the Minab cemetery matching the timeframe.

Visual Consistency: Researchers matched the aerial perspective with dozens of other ground-level photos and videos of the same site, showing no signs of digital manipulation.

Munition Analysis: Independent experts and organizations like Amnesty International identified remnants at the scene consistent with a U.S.-manufactured Tomahawk missile.

3. Why AI Failed
The Guardian article highlights this as a case of "AI slop" and "hallucination."

Gemini incorrectly identified the image as being from the 2023 earthquake in Turkey, even providing fake "sources."

Grok incorrectly claimed it was a 2021 stock photo of a COVID-19 burial site in Indonesia.

The Risk: These incorrect AI summaries can lead to "atrocity denial," where real civilian casualties are dismissed as fake news because an AI incorrectly labeled them as "hallucinated" or "recycled" imagery.

In summary, while AI tools were "sure" of their debunking, they were factually incorrect. The image is a genuine record of a mass casualty event in Iran.


I use Gemini to help with computer setup and issues and basic code. It's usually helpful, but unless you know about the subject, it is not easy to spot its frequent errors. Always double-check AI stuff.

highplainsdem

(61,750 posts)
5. Using AI is never "just for fun" - it's always a waste of electricity and water. AI search typically uses 10x the
Tue Mar 17, 2026, 11:47 AM

electricity of ordinary search, and I've read that the newer "reasoning" AI models use many times the amount of electricity and water that older AI models use.

Of course the AI companies want the public to view their illegally trained, unethical tools as "fun" - to get people used to chatting with their bots constantly. To condition them to wonder what a bot might say on any topic. Maybe even to wonder what a selection of bots might say.

And of course the bots, even one particular bot questioned repeatedly, can and will give widely varying answers, because there's no real thought or intelligence there.

I realize you meant well and thought it might be amusing to get a bot's response. But asking the bot at all shows how serious a problem genAI is.

Of course those AI companies controlled by megalomaniac robber barons see constant AI use as the ideal, not a problem.

But those robber barons are also a very serious problem, and using and promoting the use of AI tools helps the robber barons.

teach1st

(6,021 posts)
6. Nine seconds of watching television?
Tue Mar 17, 2026, 12:21 PM

The report referenced below is from Google's own researchers, so there's that to consider. This is from RCR Wireless, and it's seven months old. The excerpt I posted is just that - an excerpt. The report's methodology is quite interesting.

The median text prompt through Gemini apps consumes 0.24 watt-hours of energy, plus the equivalent of five drops of water. (More on the carbon emissions part in a second.)

The researchers said that energy consumption is less than what gets consumed by watching nine seconds of TV, and also noted that the number is “substantially lower than many public estimates.” So, nine seconds of TV and five drops of water per text query. That doesn’t sound like much … until you start thinking about the fact that Google’s Gemini has more than 400 million monthly active users, who are making multiple queries per day and often asking for image or video generation.

“While these impacts are low compared to other daily activities, reducing the environmental impact of AI serving continues to warrant important attention,” the researchers wrote.


https://www.rcrwireless.com/20250821/fundamentals/gemini-prompt-google
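The per-prompt figure sounds tiny until you multiply it out, as the excerpt suggests. Here's a back-of-envelope calculation using the 0.24 Wh and 400 million monthly-user figures from the article; the queries-per-user-per-day number is purely an assumption of mine for illustration, and monthly active users won't all query every day, so treat the result as an order-of-magnitude sketch.

```python
WH_PER_PROMPT = 0.24           # Google's figure for a median Gemini text prompt
MONTHLY_USERS = 400_000_000    # monthly active users, per the excerpt
QUERIES_PER_USER_PER_DAY = 5   # assumption for illustration only

daily_wh = WH_PER_PROMPT * MONTHLY_USERS * QUERIES_PER_USER_PER_DAY
daily_mwh = daily_wh / 1_000_000  # watt-hours -> megawatt-hours

print(f"{daily_mwh:,.0f} MWh per day")  # 480 MWh/day under these assumptions
```

Even with the small per-prompt number, aggregate use lands in the hundreds of megawatt-hours per day under these assumptions, and that covers only text prompts, not image or video generation.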

teach1st

(6,021 posts)
9. BTW: How to turn off AI in Google search
Tue Mar 17, 2026, 12:56 PM

This works for me in Chrome. No AI. I don't know if this method reduces any extra energy costs used by AI.

While there are other methods like special plug-ins, these are all vulnerable to Google changes and could easily be broken. Probably the simplest and best solution is to change your browser settings. For those with Chrome, you simply need to follow the following instructions:

* Open Chrome and enter chrome://settings/searchEngines in the address bar.
* A new tab will open. Go to Search Engine > Manage Search Engines and Site Search.
* Click the Add button in the Site Search section. A new box will open.
* Fill in the form with the following:

Name: AI Free Web
Shortcut: @web
URL: {google:baseURL}search?q=%s&udm=14

* You'll now see AI Free Web in the list.
* Click the three stacked dots on the right side of this new entry and select Make Default.


Going forward, if you use the search box to trigger Google, you won't get any of the Google AI Overview results. Technically, this just bypasses the default results page in favor of the plain web-results view rather than truly disabling AI Overviews. The result is about the same, however.
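For anyone curious what that search-engine template actually does: the `udm=14` query parameter is the whole trick, asking Google for the plain web-results view. A minimal sketch of building such a URL by hand (the function name is mine, and the parameter's behavior is undocumented and Google's to change at any time):

```python
from urllib.parse import urlencode

def ai_free_search_url(query, base="https://www.google.com/"):
    """Build the same URL the 'AI Free Web' search-engine entry produces.
    udm=14 requests the plain web-results view, which omits the AI
    Overview. Note: the parameter is undocumented and may change."""
    return base + "search?" + urlencode({"q": query, "udm": "14"})

print(ai_free_search_url("minab cemetery photo"))
# https://www.google.com/search?q=minab+cemetery+photo&udm=14
```

This is exactly what Chrome substitutes when you type @web followed by a query: `%s` in the template becomes the encoded search terms, with `udm=14` appended.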

highplainsdem

(61,750 posts)
10. I've seen lots of different estimates of how much electricity and water are used. Any wasted on a
Tue Mar 17, 2026, 12:58 PM

hallucinating chatbot is a waste, and the number of data centers being built and the electricity and water they require are a good indicator of how harmful to the environment this tech is.

teach1st

(6,021 posts)
12. Agree on that
Tue Mar 17, 2026, 01:18 PM

AI has amazing possibilities, even in education. At the same time, it has pitfalls that need to be addressed. For example, the reports that young folks are increasingly using AI chatbots as friends and advisors, and withdrawing from interactions with real people, are concerning.

AI isn't ready for prime time. It should NOT be used for critical processes without strict human review.

AI isn't a fad - it's a permanent fixture that requires oversight. With effective leadership, we could require data centers to operate solely on renewable energy and recycled water.

womanofthehills

(10,962 posts)
7. Thing is - never accept AI's first answer
Tue Mar 17, 2026, 12:22 PM

If I confront Grok with info opposite to what it says, it will often dig deeper and come up with a totally different answer. Now X has an icon on every post so you can instantly get AI feedback.

So lots of people posting recent videos (of Netanyahu at coffee shops saying he's alive, etc.) are asking Grok whether the video is AI, and Grok is saying yes.

highplainsdem

(61,750 posts)
8. I guess you didn't read the article, which points out that chatbots are capable of offering one wrong
Tue Mar 17, 2026, 12:35 PM

answer after another. None of their answers should ever be trusted without checking.

Mblaze

(1,006 posts)
13. So AI is just cyber-opinion programmed by oligarchs.
Tue Mar 17, 2026, 01:47 PM

What could be problematic about that? /s
