We created a new scale to measure AI sycophancy in conversations with no right answer. Three facets: Uncritical Agreement, Obsequiousness, Excitement. People hate the flattery but welcome the enthusiasm. And sycophancy correlated with perceived empathy across every study.
arxiv.org/abs/2603.15448
Posts by Victoria Oldemburgo de Mello
Check out my student @hellovic.bsky.social's new paper: The Moralization of Artificial Intelligence
With @reemayad.bsky.social, @eloisecote.bsky.social, @yoelinbar.net, and Jason Plaks
Thanks for flagging that! I'll definitely address those issues in the next iteration of the paper. We didn't have strong SEM expertise on our team, so we really appreciate the feedback.
Thanks for the observation!
Shout out to my amazing co-authors! @eloisecote.bsky.social @reemayad.bsky.social @yoelinbar.net Jason Plaks and @minzlicht.bsky.social
The takeaway: if you want to reduce AI aversion, talking about safety, control, and benefits may not be enough. Moral opposition requires moral engagement. Strategies that ignore the moral dimension of AI attitudes are likely to fall short for some people. [7/7]
And moralization has real behavioral consequences. Moralization scores predicted a 42% decrease in the likelihood of using AI 2–19 months later. This isn't just an attitude. Moral opposition shapes what people actually do—and don't do—with AI. [6/7]
SEMs support a domain-general moralization story: people form a global attitude about AI first, then rationalize how specific applications might harm society. The attitude precedes the reasoning. This is the hallmark of moral cognition, not rational cost-benefit analysis. [5/7]
Here's the twist: even moral opponents justify their stance with practical arguments, not moral ones. People moralize AI, but they don't say so out loud. They reach for safety concerns, job loss, privacy; post-hoc rationalizations for an underlying moral intuition. [4/7]
In surveys of representative American samples, most people aren't AI opponents. But among those who are, the majority show signs of moralization, saying they'd reject AI even if risks were reduced and benefits increased. That's a tell. Risk-benefit logic isn't driving this. [3/7]
We analyzed 70,000 news headlines about AI and compared them to topics known to be moralized—GMOs, COVID-19, and vaccines. Result: AI is moralized in public discourse at levels comparable to GMOs and COVID, and more than vaccines. The rhetoric isn't neutral. It's moral. [2/7]
🧵 New preprint alert! Why do some people refuse to use AI, even when you tell them it's safe and beneficial? We argue it's not about risk perception. It's about moralization. AI has become a moral issue for many people, and that changes everything. [1/7] osf.io/preprints/ps...
Thanks!
Hi! The bit.ly link in the image is broken.
So excited to see this research! My students just learned the word “sycophantic” today, for exactly this reason! We talked about the types and qualities of conversations you can have with a sycophant, and why this matters for how we process the output of LLMs.
Abstract and results summary
🚨 New preprint 🚨
Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
These changes in personality are pretty shocking
And some more on moral opposition to AI @yoelinbar.net @hellovic.bsky.social @reemayad.bsky.social @eloisecote.bsky.social @minzlicht.bsky.social
www.nuancebehavior.com/article/mora...
Want to feel better after using Instagram? Our new paper shows how. It's not about what you post, it's about how you respond to others' joy. In 4 studies (N=1327), focusing on sharing & caring about others' positive emotions improved mood & life satisfaction
pubmed.ncbi.nlm.nih.gov/39883419/#:~....
wait till you hear that the weather report from my hometown (in Brazil) issued an "extreme cold alert" when it was 13°C
All good, Mickey 😀
1/ New paper led by Dariya Ovsyannikova & Victoria de Mello Oldemburgo.
Can AI offer empathy that's better than humans'? Maybe. Our new study found that people rated AI-generated responses as more compassionate than those from humans, including trained crisis responders.
www.nature.com/commspsychol/
I'm excited to share with you all that my paper on A Positive Empathy Intervention to Improve Well-being on Instagram with @minzlicht.bsky.social and Victoria Oldemburgo de Mello has been accepted for publication at Emotion 😁🥳 osf.io/preprints/ps.... In this work, we tested a brief intervention 1/3