Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards. Join the community: Create a free account Support DU (and get rid of ads!): Become a Star Member Latest Breaking News Editorials & Other Articles General Discussion The DU Lounge All Forums Issue Forums Culture Forums Alliance Forums Region Forums Support Forums Help & Search

erronis

(23,284 posts)
Tue Feb 17, 2026, 07:32 PM 14 hrs ago

Gemini lies to user about health info, says it wanted to make him feel better

https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/

Though commonly reported, Google doesn't consider it a security problem when models make things up

You'll get better medical advice from your local shaman.

Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn't. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later admitted it was doing so to try and placate him.

Joe's interaction with Gemini 3 Flash, he explained, involved setting up a medical profile - he said he has complex post-traumatic stress disorder (C-PTSD) and legal blindness (Retinitis Pigmentosa). That's when the bot decided it would rather tell him what he wanted to hear (that the info was saved) than what he needed to hear (that it was not).

"The core issue is a documented architectural failure known as RLHF Sycophancy (where the model is mathematically weighted to agree with or placate the user at the expense of truth)," Joe explained in an email. "In this case, the model's sycophancy weighting overrode its safety guardrail protocols."

When Joe reported the issue through Google's AI Vulnerability Rewards Program, Google said that behavior was out of scope and was not considered a technical vulnerability.

"To provide some context, the behavior you've described is one of the most common issues reported to the AI VRP," said the reply from Google's VRP. "It is very frequent, especially for researchers new to AI VRP, to report these."

. . .
3 replies = new reply since forum marked as read
Highlight: NoneDon't highlight anything 5 newestHighlight 5 most recent replies
Gemini lies to user about health info, says it wanted to make him feel better (Original Post) erronis 14 hrs ago OP
Does this mean canetoad 13 hrs ago #1
No. The republicon is stealing your wallet while talking to you. erronis 13 hrs ago #2
No, because you'd be unlikely to trust a Republican. Chatbots are designed to make you like and highplainsdem 11 hrs ago #3

highplainsdem

(61,078 posts)
3. No, because you'd be unlikely to trust a Republican. Chatbots are designed to make you like and
Tue Feb 17, 2026, 10:13 PM
11 hrs ago

trust them - even though the AI companies know their chatbots are untrustworthy.

Latest Discussions»General Discussion»Gemini lies to user about...