General Discussion
Google's Gemini and Musk's Grok are telling people a real photo from the war is fake (Guardian, 3/17)
https://www.theguardian.com/global-development/2026/mar/17/atrocity-ai-slop-verify-facts-iran-minab-graves
Numerous faked images and a string of startlingly inaccurate responses from Gemini and Grok are part of a tidal wave of AI slop engulfing coverage of the Iran war
Tess McClure
Tue 17 Mar 2026 01.00 EDT
-snipping opening paragraphs about a photo of the Iranian cemetery preparing to bury schoolgirls killed by a US military strike, and AI being asked if the photo is real-
Ask Gemini, the AI service powered by Google, and the answer you receive is no. In fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away: rather than graves for small girls killed by a missile, the image depicts a mass burial site in Kahramanmaraş, Turkey, after the 7.8-magnitude earthquake that struck in 2023. "This specific aerial perspective became one of the most widely shared images of the disaster," Gemini says, "illustrating the sheer scale of the loss."
Seeing the same burial image on social media, others turned to X's AI assistant Grok to check its veracity. Like Gemini, Grok will breezily assure you the photo is not from Iran at all, although it lands on a different date, disaster and location. The image is from Rorotan Cemetery in Jakarta, Indonesia, a July 2021 stock photo of Covid mass burials. "Not Minab," it says.
In both cases, the AI answers sound sure: they don't equivocate, and even provide sources for the original image, should you choose to check them. Follow the thread to examine those, however, and you'll begin to hit dead ends: either the image doesn't appear at all, or the link provided is to a news report that doesn't exist. For all their impression of clarity and precision, the AIs are simply wrong.
The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage, none of which, experts say, show signs of tampering or digital manipulation. The factchecks by Gemini and Grok are just one example of a tidal wave of AI-generated slop (hallucinated facts, nonsense analysis and faked images) that is engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied, as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.
-snip-
Much more at the link about how wrong answers from these chatbots, AI tools that too many people trust, are adding greatly to the misinformation human factcheckers have to deal with now. They include an example of a chatbot giving a series of equally incorrect answers after being told its first answer is apparently wrong and it should search again, with the chatbot apologizing very politely for the oversight and confidently offering another wrong answer each time.
The article points out why these AI models can't be trusted - that they're "probabilistic language models, constructing sentences piece by piece based on which next word has the highest likelihood of being appropriate." I've often seen them described as fancy autocomplete.
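The "fancy autocomplete" description can be made concrete with a toy sketch. The following is NOT how production LLMs work internally (they use neural networks trained on enormous corpora, operating on tokens rather than words), but the generation loop is conceptually the same: pick a statistically likely next word, append it, repeat. The tiny corpus here is invented purely for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "fancy autocomplete": a bigram model that picks each next word
# purely by how often it followed the previous word in the corpus.
corpus = "the photo is real . the photo is fake . the photo is from iran".split()

# Count which words follow which word
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def most_likely_next(word):
    # Return the highest-frequency follower of `word`
    return following[word].most_common(1)[0][0]

sentence = ["the"]
for _ in range(3):
    sentence.append(most_likely_next(sentence[-1]))

print(" ".join(sentence))  # prints: the photo is real
```

Note that the model outputs "the photo is real" only because "real" happened to appear first among equally frequent followers of "is" in this corpus; it has no notion of whether the photo actually IS real. That indifference to truth, scaled up, is exactly why fluent-sounding answers can be confidently wrong.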
They are not, and can never be, reliable sources of information. As the third paragraph of the Guardian excerpt above mentions, even when a bot provides what it says are its sources, those sources may not exist at all, or may not contain what the bot says they contain.
So don't be fooled by the bot listing sources that are usually considered reliable.
The ONLY way to check whether what the bot is saying came from those sources is to click on every link yourself and check carefully, especially if you're planning to spread what the bot told you to anyone else, particularly on social media, including DU.
And if you're going to do that checking to avoid becoming an AI-powered source of misinformation yourself, then it makes MUCH more sense to quote and link to real sources, and write your own text around that information, as you always would have in the past, before generative AI's fancy autocomplete, and actually give people REAL information.
This Guardian article this morning links to a BBC story last October about a large study on how inaccurate these AI responses are:
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC
Published: 12:01 am, 22 October 2025
Updated: 06:10 pm, 22 October 2025
New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants, already a daily information gateway for millions of people, routinely misrepresent news content no matter which language, territory, or AI platform is tested.
The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools.
Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.
Key findings:
45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems: missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst, with significant issues in 76% of responses, more than double the rate of the other assistants, largely due to its poor sourcing performance.
Comparison between the BBC's results earlier this year and this study shows some improvements but still high levels of errors.
-snip-
Much more at that link, too.
Please do NOT use AI as a source of information, especially as information you're offering to other people. It's a bad idea even if you're just looking for info for yourself.
It becomes irresponsible if you post the bot's answer anywhere without both identifying it as coming from AI (you should say which AI as well) AND checking every single detail from the AI, including its specific sources: whether those sources exist and, if they do, whether they support what the bot claims they said.
And yes, I know that takes much more time than just playing Ask The Bot with your favorite AI playmate. But your wanting to save time, and your decision to trust AI despite the high error rate, are not valid excuses for dumping possible misinformation on anyone. Even if you think you're giving people fair warning by saying that answer came from AI, you're forcing others to trust AI because you're willing to do so, or to do all the checking you couldn't be bothered to do yourself.
eppur_se_muova
(41,786 posts)
highplainsdem
(61,750 posts)
name has brainwashed them.
And sometimes they're simply hooked on using it. Chatbots are designed to be engaging and to flatter users.
malaise
(295,498 posts)
visibility
teach1st
(6,021 posts)
Here's what Gemini spat out:
According to investigative reports from The Guardian, Amnesty International, Human Rights Watch, and The New York Times, here are the verified facts:
1. The Event and Location
Location: Minab, Hormozgan Province, southern Iran.
Incident: On February 28, 2026, a missile strike hit an elementary school and an adjacent IRGC (Islamic Revolutionary Guard Corps) compound.
Casualties: Reports indicate over 100 children (mostly girls aged 7 to 12) and several teachers were killed.
The Image: The aerial photo shows excavators preparing rows of small graves for these victims on March 2, 2026.
2. Verification of the Image
The image was captured by the Iranian Foreign Media Department/Reuters and has been cross-referenced by experts using:
Satellite Imagery: Planet Labs and other satellite data confirm the location and the appearance of new graves at the Minab cemetery matching the timeframe.
Visual Consistency: Researchers matched the aerial perspective with dozens of other ground-level photos and videos of the same site, showing no signs of digital manipulation.
Munition Analysis: Independent experts and organizations like Amnesty International identified remnants at the scene consistent with a U.S.-manufactured Tomahawk missile.
3. Why AI Failed
The Guardian article highlights this as a case of "AI slop" and "hallucination."
Gemini incorrectly identified the image as being from the 2023 earthquake in Turkey, even providing fake "sources."
Grok incorrectly claimed it was a 2021 stock photo of a COVID-19 burial site in Indonesia.
The Risk: These incorrect AI summaries can lead to "atrocity denial," where real civilian casualties are dismissed as fake news because an AI incorrectly labeled them as "hallucinated" or "recycled" imagery.
In summary, while AI tools were "sure" of their debunking, they were factually incorrect. The image is a genuine record of a mass casualty event in Iran.
I use Gemini to help with computer setup and issues and basic code. It's usually helpful, but unless you know about the subject, it is not easy to spot its frequent errors. Always double-check AI stuff.
highplainsdem
(61,750 posts)
electricity of ordinary search, and I've read that the newer "reasoning" AI models use many times the amount of electricity and water that older AI models use.
Of course the AI companies want the public to view their illegally-trained unethical tools as "fun" - to get people used to chatting with their bots constantly. To condition them to wonder what a bot might say on any topic. Maybe even wonder what a selection of bots might say.
And of course the bots, even one particular bot questioned repeatedly, can and will give widely varying answers, because there's no real thought or intelligence there.
I realize you meant well and thought it might be amusing to get a bot's response. But asking the bot at all shows how serious a problem genAI is.
Of course those AI companies controlled by megalomaniac robber barons see constant AI use as the ideal, not a problem.
But those robber barons are also a very serious problem, and using and promoting the use of AI tools helps the robber barons.
teach1st
(6,021 posts)
The report referenced below is from Google's own researchers, so there's that to consider. This is from RCR Wireless, and it's seven months old. The excerpt I posted is just that: an excerpt. The report's methodology is quite interesting.
The researchers said that energy consumption is less than what gets consumed by watching nine seconds of TV, and also noted that the number is substantially lower than many public estimates. So, nine seconds of TV and five drops of water per text query. That doesn't sound like much until you start thinking about the fact that Google's Gemini has more than 400 million monthly active users, who are making multiple queries per day and often asking for image or video generation.
While these impacts are low compared to other daily activities, reducing the environmental impact of AI serving continues to warrant important attention, the researchers wrote.
https://www.rcrwireless.com/20250821/fundamentals/gemini-prompt-google
teach1st
(6,021 posts)
This works for me in Chrome. No AI. I don't know if this method reduces any extra energy costs used by AI.
* Open Chrome and put chrome://settings/searchEngines in the address bar.
* A new tab will open; go to Search Engine > Manage Search Engines and Site Search.
* Click on the Add button next to Site Search. A new box will open.
* Fill in the form with the following:
Name: AI Free Web
Shortcut: @web
URL: {google:baseURL}search?q=%s&udm=14
You'll now see AI Free Web in the list.
Click on the three stacked dots on the right side of this new option. Select Make Default.
Going forward, if you use the search box to trigger Google, you won't get any of the Google AI overview results. Technically, this just routes you to the web-only results and filters everything else out, as opposed to truly disabling Google AI overviews. The result is about the same, however.
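To see what the shortcut above is actually doing, here is a minimal sketch of the URL it builds. The `%s` placeholder becomes your query, and `udm=14` is the parameter Google currently uses for its plain "Web" results tab, which skips the AI Overview panel. (That parameter is Google's, not a documented stable API, and its behavior could change; the query string below is just an illustrative example.)

```python
from urllib.parse import urlencode

def ai_free_search_url(query):
    # Build the same kind of URL the Chrome shortcut produces:
    # {google:baseURL}search?q=<query>&udm=14
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(ai_free_search_url("minab cemetery photo"))
# prints: https://www.google.com/search?q=minab+cemetery+photo&udm=14
```

Pasting a URL like that into any browser gives the same AI-overview-free results without touching Chrome's settings, which is handy for testing whether the shortcut is set up correctly.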
highplainsdem
(61,750 posts)
hallucinating chatbot is a waste, and the number of data centers being built and the electricity and water they require are a good indicator of how harmful to the environment this tech is.
teach1st
(6,021 posts)
AI has amazing possibilities, even in education. At the same time, it has pitfalls that need to be addressed. For example, reports that young people are increasingly using AI chatbots as friends and advisors, and withdrawing from interactions with real people, are concerning.
AI isn't ready for prime time. It should NOT be used for critical processes without strict human review.
AI isn't a fad - it's a permanent fixture that requires oversight. With effective leadership, we could require data centers to operate solely on renewable energy and recycled water.
womanofthehills
(10,962 posts)
If I confront Grok with info contradicting what it says, it will often dig deeper and come up with a totally different answer. Now X has an icon on every post so you can instantly get AI feedback.
So lots of people are posting recent videos (of Netanyahu at coffee shops saying he's alive, etc.), asking Grok "is this video AI?", and Grok is saying yes.
highplainsdem
(61,750 posts)
answer after another. None of their answers should ever be trusted without checking.
Mblaze
(1,006 posts)
What could be problematic about that? /s