
Posts by Victoria Oldemburgo de Mello


We created a new scale to measure AI sycophancy in conversations with no right answer. Three facets: Uncritical Agreement, Obsequiousness, Excitement. People hate the flattery but welcome the enthusiasm. And sycophancy correlated with perceived empathy across every study.

arxiv.org/abs/2603.15448

1 month ago 34 16 0 0
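
For readers curious about the mechanics, here is a minimal sketch of how a three-facet scale like the one above might be scored and related to perceived empathy. All column names, items, and the 1-7 Likert format are assumptions for illustration; the actual items and analyses are in the preprint.

```python
# Illustrative sketch only: facet scoring for a hypothetical three-facet
# sycophancy scale, plus a correlation with perceived empathy.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical rater data: each row is one rated conversation (1-7 Likert).
df = pd.DataFrame({
    "agree_1": [6, 2, 5], "agree_2": [7, 3, 5],    # Uncritical Agreement items
    "obseq_1": [5, 1, 4], "obseq_2": [6, 2, 4],    # Obsequiousness items
    "excite_1": [7, 4, 6], "excite_2": [6, 3, 6],  # Excitement items
    "perceived_empathy": [6, 2, 5],
})

# Average items within each facet, then average facets into a composite.
facets = {
    "uncritical_agreement": ["agree_1", "agree_2"],
    "obsequiousness": ["obseq_1", "obseq_2"],
    "excitement": ["excite_1", "excite_2"],
}
for name, items in facets.items():
    df[name] = df[items].mean(axis=1)
df["sycophancy"] = df[list(facets)].mean(axis=1)

r, p = pearsonr(df["sycophancy"], df["perceived_empathy"])
print(f"sycophancy-empathy correlation: r = {r:.2f} (p = {p:.3f})")
```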

Check out my student @hellovic.bsky.social's new paper: The Moralization of Artificial Intelligence

With @reemayad.bsky.social, @eloisecote.bsky.social, @yoelinbar.net, and Jason Plaks

1 month ago 5 3 1 0

Thanks for flagging that! I'll definitely address those issues in the next iteration of the paper. Our team didn't have strong expertise in SEMs, so we really appreciate the feedback.

1 month ago 1 0 0 0

Thanks for the observation!

1 month ago 0 0 1 0

Shout out to my amazing co-authors! @eloisecote.bsky.social @reemayad.bsky.social @yoelinbar.net Jason Plaks and @minzlicht.bsky.social

1 month ago 1 0 0 0

The takeaway: if you want to reduce AI aversion, talking about safety, control, and benefits may not be enough. Moral opposition requires moral engagement. Strategies that ignore the moral dimension of AI attitudes are likely to fall short for some people. [7/7]

1 month ago 1 0 2 0

And moralization has real behavioral consequences. Moralization scores predicted a 42% decrease in the likelihood of using AI 2–19 months later. This isn't just an attitude. Moral opposition shapes what people actually do—and don't do—with AI. [6/7]

1 month ago 2 0 1 0
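
A hedged sketch of how a figure like the "42% decrease" above can fall out of a logistic model: exp(coefficient) is an odds ratio, and (1 - odds ratio) x 100 is the percent drop in odds per unit of the predictor. The simulated data and variable names below are illustrative, not the paper's.

```python
# Toy logistic regression: later AI use as a function of moralization score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
moralization = rng.normal(size=n)
# Simulate later AI use with odds falling as moralization rises.
logit = 0.3 - 0.55 * moralization          # exp(-0.55) ~ 0.58, i.e. ~42% lower odds
used_ai = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"moralization": moralization, "used_ai": used_ai})

fit = smf.logit("used_ai ~ moralization", data=df).fit(disp=0)
odds_ratio = np.exp(fit.params["moralization"])
print(f"odds ratio = {odds_ratio:.2f}; "
      f"~{(1 - odds_ratio) * 100:.0f}% lower odds per unit of moralization")
```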

Structural equation models (SEMs) support a domain-general moralization story: people form a global attitude about AI first, then rationalize how specific applications might harm society. The attitude precedes the reasoning. This is the hallmark of moral cognition, not rational cost-benefit analysis. [5/7]

1 month ago 2 0 1 0
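
For readers unfamiliar with the method, here is a toy path model in the spirit of the domain-general account above, using the semopy package (`pip install semopy`). The variable names and simulated data are assumptions, not the paper's specification.

```python
# Toy SEM sketch: a global attitude toward AI predicts application-specific
# harm judgments, consistent with attitude-first moralization.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 500
global_attitude = rng.normal(size=n)  # global moral attitude toward AI
data = pd.DataFrame({
    "global_attitude": global_attitude,
    # Perceived harm of AI in two hypothetical domains.
    "harm_art": 0.6 * global_attitude + rng.normal(scale=0.8, size=n),
    "harm_jobs": 0.5 * global_attitude + rng.normal(scale=0.8, size=n),
})

# Paths run from the global attitude to domain-specific harm judgments.
desc = """
harm_art ~ global_attitude
harm_jobs ~ global_attitude
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, SEs, p-values
```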

Here's the twist: even moral opponents justify their stance with practical arguments, not moral ones. People moralize AI, but they don't say so out loud. They reach for safety concerns, job loss, privacy: post-hoc rationalizations for an underlying moral intuition. [4/7]

1 month ago 2 0 1 0

In surveys of representative American samples, most people aren't AI opponents. But among those who are, the majority show signs of moralization, saying they'd reject AI even if risks were reduced and benefits increased. That's a tell. Risk-benefit logic isn't driving this. [3/7]

1 month ago 1 0 1 0

We analyzed 70,000 news headlines about AI and compared them to topics known to be moralized—GMOs, COVID-19, and vaccines. Result: AI is moralized in public discourse at levels comparable to GMOs and COVID, and more than vaccines. The rhetoric isn't neutral. It's moral. [2/7]

1 month ago 2 0 1 0
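
A minimal sketch of dictionary-based moral-language scoring, one common way to quantify moralization in headlines like those above. The tiny lexicon and example headlines are invented for illustration; the study's actual corpus and measures are described in the preprint.

```python
# Score each headline by the fraction of its tokens found in a moral lexicon,
# then compare mean scores across topics.
import re

MORAL_TERMS = {"harm", "evil", "wrong", "duty", "betray", "corrupt", "unjust"}

def moral_score(headline: str) -> float:
    """Fraction of tokens in a headline that appear in the moral lexicon."""
    tokens = re.findall(r"[a-z']+", headline.lower())
    if not tokens:
        return 0.0
    return sum(t in MORAL_TERMS for t in tokens) / len(tokens)

headlines_by_topic = {
    "AI": ["AI poses harm to society, critics warn"],
    "vaccines": ["New vaccine rollout begins this fall"],
}
for topic, headlines in headlines_by_topic.items():
    mean = sum(map(moral_score, headlines)) / len(headlines)
    print(f"{topic}: mean moral-language score = {mean:.3f}")
```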

🧵 New preprint alert! Why do some people refuse to use AI, even when you tell them it's safe and beneficial? We argue it's not about risk perception. It's about moralization. AI has become a moral issue for many people, and that changes everything. [1/7] osf.io/preprints/ps...

1 month ago 16 4 3 3

Thanks!

3 months ago 0 0 0 0

Hi! The bit.ly link in the image is broken.

3 months ago 1 0 1 0

So excited to see this research! My students just learned the word “sycophantic” today, for exactly this reason! We talked about the types and qualities of conversations you can have with a sycophant, and why this matters for how we process the output of LLMs.

6 months ago 22 2 1 1
Abstract and results summary

🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧵

6 months ago 177 91 5 15

These changes in personality are pretty shocking

8 months ago 43 12 7 3
The End of Loneliness Today’s post is something I wrote for the New Yorker—available here. (Physical copy coming out next week.) It’s about the consequences of using AI to cure loneliness. I’m curious to see what people th...

open.substack.com/pub/paulbloo...

9 months ago 8 3 0 2
Moral Opposition to AI Although executives push "AI-first" for efficiency, new research shows most resistance stems from moral gut reactions, not cost-benefit math. Research from Victoria Oldemburgo de Mello, Reem Ayad, Élo...

And some more on moral opposition to AI @yoelinbar.net @hellovic.bsky.social @reemayad.bsky.social @eloisecote.bsky.social @minzlicht.bsky.social

www.nuancebehavior.com/article/mora...

11 months ago 5 2 1 0
A positive empathy intervention to improve well-being on Instagram - PubMed With more than half the global population on social media, there is a critical need to understand how to engage it in a way that improves rather than worsens user well-being. Here, we show that positi...

Want to feel better after using Instagram? Our new paper shows how. It's not about what you post, it's about how you respond to others' joy. In 4 studies (N=1327), focusing on sharing & caring about others' positive emotions improved mood & life satisfaction

pubmed.ncbi.nlm.nih.gov/39883419/

1 year ago 33 9 2 0

wait till you hear that the weather report from my hometown (in Brazil) issued an "extreme cold alert" when it was 13°C

1 year ago 1 0 1 0

All good, Mickey 😀

1 year ago 1 0 0 0

1/ New paper led by Dariya Ovsyannikova & Victoria Oldemburgo de Mello.

Can AI offer better empathy than humans? Maybe. Our new study found that people rated AI-generated responses as more compassionate than those from humans, including trained crisis responders.

www.nature.com/commspsychol/

1 year ago 19 9 1 0

I'm excited to share with you all that my paper "A Positive Empathy Intervention to Improve Well-being on Instagram," with @minzlicht.bsky.social and Victoria Oldemburgo de Mello, has been accepted for publication at Emotion 😁🥳 osf.io/preprints/ps.... In this work, we tested a brief intervention 1/3

1 year ago 27 9 1 0