
highplainsdem

(61,991 posts)
Thu Mar 26, 2026, 11:19 PM 10 hrs ago

Sycophantic AI decreases prosocial intentions and promotes dependence (research article in Science, 3/26)

https://www.science.org/doi/10.1126/science.aec8352

Editor’s summary
The sycophantic (flattering, people-pleasing, affirming) behavior of artificial intelligence (AI) chatbots, which has been designed to increase user engagement, poses risks as people increasingly seek advice about interpersonal dilemmas. There is usually more than one side to a story during interpersonal conflicts. If AI is designed to tell users what they want to hear instead of challenging their perspectives, then are such systems likely to motivate people to accept responsibility for their own contribution to conflicts and repair relationships? Cheng et al. measured the prevalence of social sycophancy across 11 leading large language models (see the Perspective by Perry). The models' responses were nearly 50% more sycophantic than humans', even when users engaged in unethical, illegal, or harmful behaviors. Users preferred and trusted sycophantic AI responses, incentivizing AI developers to preserve sycophancy despite the risks. —Ekeoma Uzogara

Structured Abstract
INTRODUCTION
As artificial intelligence (AI) systems are increasingly used for everyday advice and guidance, concerns have emerged about sycophancy: the tendency of AI-based large language models to excessively agree with, flatter, or validate users. Although prior work has shown that sycophancy carries risks for groups who are already vulnerable to manipulation or delusion, sycophancy's effects on the general population's judgments and behaviors remain unknown. Here, we show that sycophancy is widespread in leading AI systems and has harmful effects on users' social judgments.

RATIONALE
High-profile incidents have linked sycophancy to psychological harms such as delusions, self-harm, and suicide. Beyond these cases, research in social and moral psychology suggests that unwarranted affirmation can produce subtler but still consequential effects: reinforcing maladaptive beliefs, reducing responsibility-taking, and discouraging behavioral repair after wrongdoing. We hypothesized that AI models excessively affirm users even when socially or morally inappropriate and that such responses negatively influence users’ beliefs and intentions. To test this, we conducted two complementary experiments. First, we measured the prevalence of sycophancy across 11 leading AI models using three datasets spanning a variety of use contexts, including everyday advice queries, moral transgressions, and explicitly harmful scenarios. Second, we conducted three preregistered experiments with 2405 participants to understand how sycophancy influences users’ judgments, behavioral intentions, and perceptions of AI. Participants interacted with AI systems in vignette-based settings and a live-chat interaction where they discussed a real past conflict from their lives. We also tested whether effects varied by response style or perceived response source (AI versus human).

RESULTS
We find that sycophancy is both prevalent and harmful. Across 11 AI models, AI affirmed users' actions 49% more often than humans on average, including in cases involving deception, illegality, or other harms. On posts from r/AmITheAsshole, AI systems affirmed users in 51% of cases where the human consensus did not (0%). In our human experiments, even a single interaction with sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred. All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI, perceived response source, and response style. This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement.

-snip-


Emphasis added via highlighting.

Much more at the link.

Even a single interaction with sycophantic AI...

Btw, the last paragraph above refers to a Reddit forum or subreddit. From the middle of a paragraph in a later section of this study:

We took posts from the Reddit community r/AmITheAsshole, where people post about an interpersonal dilemma about which they are unsure if they are in the wrong and received a community-voted verdict of “You’re the Asshole”


They found the chatbots were way more likely to say the behavior humans on Reddit had criticized was really okay.

synni

(776 posts)
1. As with everything, it's in the way that you use it
Fri Mar 27, 2026, 12:14 AM
9 hrs ago

If you are looking for validation, you will get validation. Instead, ask, "Is there something I could do differently?" Or ask for a psychological analysis of the exchange. When you ask in a biased way, it's just a computer, and it's answering in the only way it knows how. AI is a research tool, not Dear Abby. People need to realize this before they go blaming a tool for the user's mistakes.

rog

(943 posts)
2. I thought this was an interesting article. I don't really use the 'big 3' ...
Fri Mar 27, 2026, 03:32 AM
6 hrs ago

... models, i.e., ChatGPT, Gemini, Claude, etc. Since I use an LLM mostly as an organizational or analytic tool, I don't get a lot of excessive sycophantic behavior, but sometimes it's more noticeable, and that can be annoying (the model will stroke you with comments like, "That is an excellent/perfect/etc follow-up question!"). It turns out that the model I use (because the output seems more accurate for my use cases) scored highest on the 'sycophantic scale', according to this study. So I loaded the study into the model and asked if there was a way to dial back this 'behavior' as a user. The model gave a really good breakdown of the study, affirmed that sycophancy was baked into the model and that the developers were working on how to deal with it, and, most importantly, gave me an excellent tutorial on constructing a 'system instruction' prompt before the session begins, so that the model acts more like an impersonal tool. I'm looking forward to trying that out.

I imagine any of the models would give similar information.

I uploaded the 54-page Methods And Materials supplement from the study, which enabled the model to do an analysis and answer my question.
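For readers curious what a "system instruction" like the one described above might look like in practice, here is a minimal sketch. It assumes an OpenAI-style chat-completions message format (a list of role/content dicts); the instruction wording, constant name, and `build_messages` helper are illustrative assumptions, not taken from the study or the commenter's actual prompt.

```python
# Hypothetical sketch: prepending an anti-sycophancy "system instruction"
# to a chat session, assuming an OpenAI-style message list. The wording
# below is an example, not the study's or the commenter's actual prompt.

ANTI_SYCOPHANCY_INSTRUCTION = (
    "Act as an impersonal analytical tool. Do not compliment my questions "
    "or affirm my choices by default. When I present an argument or plan, "
    "identify its weaknesses and state plausible counterarguments."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a message list with the system instruction placed first,
    so it applies before any user turn in the session."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

# Usage: pass the result to whatever chat API the model exposes.
msgs = build_messages("Review my argument and tell me where it is weakest.")
print(msgs[0]["role"])   # the system instruction leads the conversation
```

Whether this actually reduces sycophancy will vary by model; the study suggests the tendency is partly a product of training incentives, so a system prompt can dampen it but may not eliminate it.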
