Posts by Josh Wenger

🚨 New preprint with @dcameron.bsky.social and @minzlicht.bsky.social!
The empathic voice penalty: Vocal delivery reduces perceived empathy in humans and AI osf.io/preprints/ps...
We know AI often outperforms humans at text-based empathy—but what happens when emotional support is expressed vocally?
We found an "empathic voice penalty": empathy delivered via voice is rated as less empathetic and makes people feel less heard than the same message delivered as text. This held for BOTH human and AI empathizers, but was especially strong for AI.
A likely mechanism, at least for AI: spoken empathy triggers greater feelings of uncanniness—which, in turn, undermines perceived empathic quality.
A few other findings:
→ AI empathy is still rated as higher in quality than human empathy, even for spoken empathy
→ We also find an "AI label" penalty (knowing a response is AI-generated makes it feel less empathetic), but this appears smaller than the voice penalty
These results challenge the assumption that more immersive, human-like modes of engagement are more effective, with real implications for how AI companions, mental health chatbots, and customer service tools are designed. More immersive ≠ more effective.
New publication with @dcameron.bsky.social and @minzlicht.bsky.social in @commspsychol.nature.com!
www.nature.com/articles/s44...
AI empathy is good, but would people actually choose to turn to AI for emotional support over a human empathizer?
In our new research, we examined whether people choose to receive empathy from a human or an AI empathizer when given a free choice.
Across four studies, we find an “AI empathy choice paradox”:
—People generally choose human empathizers.
—But when they do choose AI, they rate it as more empathetic.
This effect appears for participants’ real-life emotional situations, and even when the human empathizer is an expert (e.g., trained crisis responders).
Our findings highlight the impressive potential of AI for high-quality emotional support, while emphasizing the importance of respecting individual preferences in empathy-seeking behavior.
@jdweng.bsky.social @dcameron.bsky.social @minzlicht.bsky.social
www.nature.com/articles/s44...
When given the choice, participants sought human empathy, despite rating AI responses as more empathetic and as making them feel more heard.
New preprint with @dcameron.bsky.social and @mgreinecke.bsky.social!
Rethinking empathy in the age of AI osf.io/preprints/ps...
How should we define empathy as a construct in an age where AI can provide quality emotional support, but doesn’t actually feel?
1/5
Traditional models of empathy focus on the empathizer’s embodied emotional experience, which AI lacks. Yet people report feeling cared for by AI. This tension between human experience and researcher-imposed construct definitions raises questions about what it truly means for AI to “empathize.”
2/5
In our new preprint, we argue for a functional-relational approach to empathy, highlighting:
- the multiple functions empathy serves
- the role of relational context
- the importance of lived experience in defining psychological constructs
3/5
Rather than letting modality (human vs. AI) dictate construct boundaries, we suggest grounding empathy in what it does for people—and why that matters for theory, measurement, and public relevance. As AI reshapes social interaction, our constructs need to be flexible enough to keep up.
4/5
We also see empathy as part of a broader philosophy of science conversation: how should scientists engage with participant experiences to inform our construct definitions, and how should we bound our constructs as new technologies and relational possibilities emerge?
5/5
Wintry walk with the EMP Lab
@amormino.bsky.social @jokretz.bsky.social @farvk.bsky.social @jdweng.bsky.social
@psuliberalarts.bsky.social
@prcpennstate.bsky.social
@ssripennstate.bsky.social
@rockethics.bsky.social
New chapter with @dcameron.bsky.social, Martina Orlandi, and @minzlicht.bsky.social:
osf.io/preprints/ps...
In this chapter, we argue that instead of debating whether human or AI empathy is superior overall, it is more useful to focus on the distinct trade-offs that each source of empathy offers.
Human empathy has the potential for unique qualities such as selectivity and effort, though it expresses these qualities in a wide variety of forms. AI empathy, on the other hand, offers its own advantages, including consistency and accessibility.
From an empathy recipient’s perspective, the preference for one source over another may depend on how they weigh these trade-offs in light of their particular emotional situation. In some moments, accessible empathy may be more valuable than selective empathy.
Finally, we suggest that questions about whether empathy from one source is inherently “better” are difficult to answer without grounding them in a normative ethical framework that provides guidance on the relative value of different empathic qualities and their effects on well-being.
A snapshot of our last summer meeting of the EMP Lab. A Happy Valley welcome to new members, graduate student @jokretz.bsky.social & post-doc @amormino.bsky.social, & farewell to alum @rachelbuterbaugh.bsky.social, who's off to grad school. With @jdweng.bsky.social @farvk.bsky.social, a great team!
[Photo: members of the Empathy & Moral Psychology (EMP) Lab]
So grateful for the chance to attend the EASP Summer School organized by @jimaceverett.bsky.social. Huge thanks to @jimaceverett.bsky.social and @mgreinecke.bsky.social for your mentorship in the Moral Psych of AI workstream, and to all of the other amazing students I had the chance to learn from!
Thanks to everybody who chimed in!
I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist.
So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:
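(The figure itself can't be reproduced here, so below is a minimal simulation sketch of the same idea in Python with numpy/pandas/statsmodels. The variables X, M, C and all effect sizes are illustrative, not taken from the original figure: when a confounder C drives both the moderator M and the outcome, a naive X × M interaction can look real unless the model also adjusts for the confounder-by-treatment term X × C; adjusting for C's main effect alone is not enough.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n = 20_000

# Confounder C drives both the moderator M and the outcome Y.
C = rng.normal(size=n)
M = 0.8 * C + rng.normal(size=n)                 # moderator is partly just C
X = rng.binomial(1, 0.5, size=n).astype(float)   # randomized treatment

# Ground truth: X's effect is moderated by C, NOT by M.
Y = 1.0 * X + 0.5 * C + 0.7 * X * C + rng.normal(size=n)

df = pd.DataFrame({"X": X, "M": M, "C": C, "Y": Y})

# Naive moderation model: X:M looks real because M proxies for C.
naive = smf.ols("Y ~ X * M", data=df).fit()

# Adjusting for C's main effect alone still leaves X:M biased...
main_only = smf.ols("Y ~ X * M + C", data=df).fit()

# ...the fix is to also include the confounder-by-treatment term X:C.
full = smf.ols("Y ~ X * M + X * C", data=df).fit()

for name, fit in [("naive", naive),
                  ("C main effect only", main_only),
                  ("X*C adjusted", full)]:
    print(f"{name:>20}: X:M = {fit.params['X:M']:+.3f}")
```

Running this, the first two fits should show a sizable spurious X:M coefficient, while the X*C-adjusted fit recovers an X:M estimate near zero; the general principle is that a covariate worth adjusting for in a moderation model usually needs its interaction with the focal predictor adjusted for as well.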
I'll be sharing some data from our recent preprint on AI empathy choice (osf.io/preprints/os...) in a talk at the Society for Affective Science Annual Conference this Saturday. Stop by or reach out if you're interested in talking about AI, empathy, or causal inference!
@affectscience.bsky.social
New preprint with @dcameron.bsky.social and @minzlicht.bsky.social!
We find that people choose to receive empathy from human over AI empathizers, despite rating AI responses as more empathetic and as making them feel more heard.
Link: osf.io/preprints/os...
Across multiple studies, we examine this AI empathy choice paradox, exploring how it varies between empathy vs. compassion, physical vs. emotional suffering, and positive vs. negative situations, and examining the importance of perceived empathizer effort.
On another note, the EMP Lab will be at @affectscience.bsky.social in Portland, Oregon, for anyone who'd like to chat about empathy, motivated emotion regulation, & moral outrage. My grad student @jdweng.bsky.social will be giving his first external talk on human vs. AI empathy. The coffee awaits!