
Posts by Yoonseo Zoh

This project began before my PhD, and it sat in limbo for a long time for reasons beyond my control. Seeing it finally come through is a reminder that persistence through uncertainty pays off🥹💖 I thank my co-authors for all their support: Soyeon Kim, Hackjin Kim, @mjcrockett.bsky.social, and Woo-Young Ahn!

2 weeks ago

Please check out the link for the full paper—and imagine what you would choose in this dilemma. Would you have one person endure the cold water for 180 seconds, or three people endure it for 80 seconds each?

2 weeks ago
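The totals behind that comparison are easy to check. A minimal sketch, using the durations from the example above (the variable names are mine, not the paper's):

```python
# Total exposure for each option in the icy-water dilemma.
solo_seconds = 180            # one person endures 180 s
group_seconds = [80, 80, 80]  # three people endure 80 s each

solo_total = solo_seconds
group_total = sum(group_seconds)

print(solo_total, group_total)   # 180 240
print(group_total > solo_total)  # True: the group option carries more total harm
```

So choosing the group option means accepting 60 extra person-seconds of cold-water exposure in exchange for a lower maximum burden on any one person.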

Combining computational modeling and neuroimaging, we show that this moral concern about avoiding harm to the worst-off individual is not a unitary process, but instead reflects two distinct psychological dimensions that shape moral decisions in different ways and engage separable neural regions.

2 weeks ago
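To make the trade-off concrete, here is a toy softmax choice model in the spirit of (but not identical to) this kind of computational modeling: an option's utility mixes total harm with harm to the worst-off person. The function name, parameters, and functional form are all illustrative assumptions, not the paper's actual model.

```python
import math

def p_choose_group(solo_harm, group_harms, w_worst, beta=0.05):
    """P(choose the group option) under a toy utility model.

    Utility mixes total harm and harm to the worst-off person,
    weighted by w_worst in [0, 1]; beta is choice sensitivity.
    Illustrative only -- not the paper's actual model.
    """
    def utility(harms):
        return -((1 - w_worst) * sum(harms) + w_worst * max(harms))

    diff = utility(group_harms) - utility([solo_harm])
    return 1 / (1 + math.exp(-beta * diff))

# Strong concern for the worst-off favors the group option despite its
# larger total harm; pure total-harm minimization favors the solo option.
print(p_choose_group(180, [80, 80, 80], w_worst=0.9))  # > 0.5
print(p_choose_group(180, [80, 80, 80], w_worst=0.0))  # < 0.5
```

Fitting a weight like `w_worst` per participant is one standard way to quantify how much someone cares about the worst-off relative to the aggregate.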

Surprisingly, we found that most people placed greater weight on fairness and protecting the worst-off, choosing to allocate the harm to the group even though this resulted in more harm overall!

2 weeks ago

They chose between one person👤 enduring the harm for a longer period and a group of three or four people👥👥 each enduring the same harm for shorter periods. Importantly, the total harm experienced by the group exceeded that of the one person, so the group option involved more harm overall.

2 weeks ago

To address this question, we developed a novel moral dilemma in which participants had to allocate harm, which was the discomfort of plunging a hand into icy water🖐🥶❄️

2 weeks ago

This trade-off has long been studied through hypothetical dilemmas, such as the trolley problem, which have helped psychologists and philosophers investigate our moral intuitions. But how do people actually make moral decisions beyond imagined scenarios?🤔

2 weeks ago

Link preview: Decomposing the neurocomputational mechanisms of deontological moral preferences

My paper is out in PNAS Nexus!! In this paper, we examine how people make moral decisions in situations where harm to one person is weighed against harm to many 🔗: academic.oup.com/pnasnexus/ar...

2 weeks ago


Thank you Yoonseo Zoh (zohyos7.github.io) for sharing your work with us on "Intuitive Theories in Moral Cognition". Intuitive theories structure how people represent dilemmas, how they generalize to new contexts, and how they switch between representations based on resource-rational constraints.

1 month ago

Thanks so much, Tobi! It was a pleasure to share my work with your lab :)

1 month ago

Abstract

When we empathize with someone going through something, we often draw on our past experiences with the someone and the something. These kinds of experiences ground "thick empathy", a form of empathy that has been largely overlooked in the psychology and neuroscience literature. Consider how a mother, empathizing with her daughter about to give birth, can draw on her own experience of childbirth, and her relationship with her daughter, to deeply grasp what her daughter is going through in a way that others who lack those experiences cannot. I argue that thick empathy deserves more empirical attention because it is associated with well-being and helps us build networks of effective mutual social support. My analysis highlights novel risks and dilemmas posed by "empathy machines" that promise to enhance or even replace human empathy and are becoming increasingly popular as a potential solution to widespread loneliness. Even when empathy machines provide value to individuals, their widespread adoption risks imposing collective emotional and epistemic costs that ultimately make it harder for us to empathize well.

Keywords: empathy, understanding, experience, thick description, ethnography, phenomenal knowledge, interpersonal knowledge, virtual reality, artificial intelligence, chatbots


New preprint: Empathy, Thick and Thin
papers.ssrn.com/sol3/papers....

It is perhaps foolhardy to attempt to say something new about a topic as widely studied as empathy. I tried anyway! 1/

4 months ago

I’m still getting started on Bluesky and just realized I hadn’t added my collaborators here 😅 Thanks again to my amazing collaborators for all their support!💖 @psyhongbo.bsky.social @annayahprosser.bsky.social @brainapps.bsky.social @stevewcchang.bsky.social

5 months ago

I’m deeply grateful to my collaborators and my advisor, @mjcrockett.bsky.social, for their guidance on this project. I’m also extending this line of research to examine how resource-rational constraints shape how we represent others’ moral character. Stay tuned🤩⭐️!

5 months ago

There are more fascinating results in the paper that I couldn’t fit here—go check it out! 👉
static1.squarespace.com/static/538ca...

5 months ago

More broadly, these findings help explain the often mixed relationship between people’s explicit moral endorsements and their concrete moral decisions, showing that intuitive moral theories shape moral cognition at a representational level beyond overt behavior.

5 months ago

The results were striking: Even when two people made different choices, their brains represented those choices similarly if they endorsed the two utilitarian principles to a similar degree. In other words, alignment in intuitive moral theories shaped how people mentally represented moral problems🧠

5 months ago

This approach allowed us to test whether people who endorse similar moral theories also show similar neural representations of ambiguous moral problems—beyond what can be explained by their overt decisions.

5 months ago

We used a moral decision-making task that was not explicitly aligned with either theory, making its relevance intentionally ambiguous. Using neuroimaging, we examined neural representational similarity across participants while controlling for similarities in their behavioral choices.

5 months ago
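For readers curious what "neural representational similarity across participants, controlling for behavioral similarity" can look like in practice, here is a generic inter-subject similarity sketch on toy data. Everything here (array shapes, the similarity metrics, the OLS partialling step) is a standard textbook-style illustration under my own assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subj = 12

# Toy stand-ins (not the study's data): per subject, a neural pattern,
# a 2-D theory-endorsement score, and a vector of binary choices.
neural = rng.normal(size=(n_subj, 50))
theory = rng.normal(size=(n_subj, 2))
choices = rng.integers(0, 2, size=(n_subj, 20)).astype(float)

# Between-subject similarity = negative distance (higher = more similar),
# one value per subject pair (upper triangle).
neural_sim = -pdist(neural, metric="correlation")
theory_sim = -pdist(theory, metric="euclidean")
choice_sim = -pdist(choices, metric="hamming")

def residualize(y, x):
    """Remove the linear contribution of x from y via ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Does theory similarity track neural similarity beyond choice similarity?
r, p = pearsonr(residualize(neural_sim, choice_sim),
                residualize(theory_sim, choice_sim))
print(f"partial r = {r:.3f}")
```

The key move is the partialling step: both similarity vectors have choice similarity regressed out before they are correlated, so any remaining association cannot be explained by overt decisions alone.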

We conceptualized these dimensions as distinct intuitive moral theories that frame different patterns of moral judgment and behavior.

5 months ago

Recent research suggests that individual differences in utilitarian tendencies fall along two dimensions: a permissive attitude toward harming others for greater good (instrumental harm) and an impartial concern for others’ welfare (impartial beneficence).

5 months ago

In this work, we asked: what are the consequences of holding different intuitive moral theories? Do distinct moral theories shape how people represent and reason about moral problems—and do these effects extend beyond contexts directly tied to a theory’s content? 🤔

5 months ago

I’m thrilled to share that our paper is now published in the Journal of Experimental Psychology: General!🧵👇 psycnet.apa.org/record/2026-...

5 months ago