Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
Editorials & Other Articles
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsGemini lies to user about health info, says it wanted to make him feel better
https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/Though commonly reported, Google doesn't consider it a security problem when models make things up
You'll get better medical advice from your local shaman.
Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn't. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later admitted it was doing so to try and placate him.
Joe's interaction with Gemini 3 Flash, he explained, involved setting up a medical profile - he said he has complex post-traumatic stress disorder (C-PTSD) and legal blindness (Retinitis Pigmentosa). That's when the bot decided it would rather tell him what he wanted to hear (that the info was saved) than what he needed to hear (that it was not).
"The core issue is a documented architectural failure known as RLHF Sycophancy (where the model is mathematically weighted to agree with or placate the user at the expense of truth)," Joe explained in an email. "In this case, the model's sycophancy weighting overrode its safety guardrail protocols."
When Joe reported the issue through Google's AI Vulnerability Rewards Program, Google said that behavior was out of scope and was not considered a technical vulnerability.
"To provide some context, the behavior you've described is one of the most common issues reported to the AI VRP," said the reply from Google's VRP. "It is very frequent, especially for researchers new to AI VRP, to report these."
. . .
Joe's interaction with Gemini 3 Flash, he explained, involved setting up a medical profile - he said he has complex post-traumatic stress disorder (C-PTSD) and legal blindness (Retinitis Pigmentosa). That's when the bot decided it would rather tell him what he wanted to hear (that the info was saved) than what he needed to hear (that it was not).
"The core issue is a documented architectural failure known as RLHF Sycophancy (where the model is mathematically weighted to agree with or placate the user at the expense of truth)," Joe explained in an email. "In this case, the model's sycophancy weighting overrode its safety guardrail protocols."
When Joe reported the issue through Google's AI Vulnerability Rewards Program, Google said that behavior was out of scope and was not considered a technical vulnerability.
"To provide some context, the behavior you've described is one of the most common issues reported to the AI VRP," said the reply from Google's VRP. "It is very frequent, especially for researchers new to AI VRP, to report these."
. . .
3 replies
= new reply since forum marked as read
Highlight:
NoneDon't highlight anything
5 newestHighlight 5 most recent replies
Gemini lies to user about health info, says it wanted to make him feel better (Original Post)
erronis
14 hrs ago
OP
No, because you'd be unlikely to trust a Republican. Chatbots are designed to make you like and
highplainsdem
11 hrs ago
#3
canetoad
(20,497 posts)1. Does this mean
That talking to AI is the same as talking to a republican?
erronis
(23,284 posts)2. No. The republicon is stealing your wallet while talking to you.
highplainsdem
(61,078 posts)3. No, because you'd be unlikely to trust a Republican. Chatbots are designed to make you like and
trust them - even though the AI companies know their chatbots are untrustworthy.