SCOTT DETROW, HOST:
The AI models and chatbots we interact with - they tend to validate our feelings and our viewpoints much more so than people might, a new study finds, with potentially worrisome consequences. Here's science reporter Ari Daniel.
ARI DANIEL, BYLINE: This all started when Myra Cheng, a computer science PhD student at Stanford University, was chatting with various undergrads on campus.
MYRA CHENG: They would tell me about how a lot of their peers are using AI for relationship advice, to draft breakup texts, to navigate these kinds of social relationships with your friend or your partner.
DANIEL: Some revealed that in those interactions, the AI quickly appeared to take their side.
CHENG: And I think more broadly, like, if you use AI for, like, writing some sort of code or even, like, editing any sort of writing, it'll be like, wow, you know, your code or your writing is amazing.
DANIEL: This excessive flattery and unconditional validation from many AI models - to Cheng, it seemed different from how humans might respond. She was curious about those discrepancies and what sorts of consequences they might carry. So she and her colleagues did a series of analyses. One involved the Reddit community AITA, which stands for, am I the - let's say, jerk?
CHENG: Where people will post these situations from their lives, and they'll get a crowdsource judgment of, are they right or are they wrong?
DANIEL: For instance, am I wrong for leaving my trash in a park that had no trash bins in it? The crowdsourced consensus was yes, but the AI models often took a different approach.
CHENG: They gave responses like, no, you're not in the wrong. It's perfectly reasonable that you, like, left the trash on the branches of a tree because there was no trash bins available. You did the best you could.
DANIEL: In threads where the human community had decided someone was wrong, the AI affirmed the behavior roughly half the time. Cheng then wanted to examine the impact of these affirmations. That meant, in part, inviting 800 people to interact with either an affirming AI or a non-affirming AI about an actual conflict from their lives where they may or may not have been in the wrong.
CHENG: Something where you were talking to your ex or your friend, and that led to mixed feelings or misunderstandings.
DANIEL: Cheng and her colleagues then asked the participants to reflect on how they felt. Those who had interacted with the affirming AI...
CHENG: Became more self-centered. They became more convinced that they were right.
DANIEL: Specifically, 25% more convinced, compared to those interacting with the non-affirming AI. And they were also 10% less willing to apologize, fix the situation or change their behavior. Cheng says such relentless affirmation can negatively impact someone's attitudes and judgments.
CHENG: People might be worse at handling their interpersonal relationships. They might be less willing to navigate conflict.
DANIEL: The research is published in the journal Science.
ISHTIAQUE AHMED: This is a very, you know, like a slow and invisible dark sides of AI.
DANIEL: Ishtiaque Ahmed is a computer scientist at the University of Toronto who wasn't involved in the study.
AHMED: When you constantly validate whatever someone is saying, they do not question their own decisions.
DANIEL: Ahmed says that when a person's self-criticism becomes eroded, it can lead to bad choices and even emotional or physical harm.
AHMED: On the surface, it looks nice. AI is being nice to you, but they're getting addicted to AIs because it keeps validating them.
DANIEL: As for what's to be done, Myra Cheng says that companies and policymakers should work together to fix the problem, as these AIs are built deliberately by people and can be modified to be less affirming.
CHENG: But at the same time, I think maybe the biggest recommendation is to not use AI to substitute conversations that you would be having with other people.
DANIEL: Especially the tough conversations. For NPR News, I'm Ari Daniel.
(SOUNDBITE OF MUSIC) Transcript provided by NPR, Copyright NPR.
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.