This project began before my PhD, and it sat in limbo for a long time for reasons beyond my control. Seeing it finally come through is a reminder that persistence through uncertainty pays off 🥹💖 I thank my co-authors for all their support: Soyeon Kim, Hackjin Kim, @mjcrockett.bsky.social, and Woo-Young Ahn!
Posts by Yoonseo Zoh
Please check out the link for the full paper—and imagine what you would choose in this dilemma. Would you have one person endure the cold water for 180 seconds, or three people endure it for 80 seconds each? 🔗: academic.oup.com/pnasnexus/ar...
Combining computational modeling and neuroimaging, we show that this moral concern about avoiding harm to the worst-off individual is not a unitary process, but instead reflects two distinct psychological dimensions that shape moral decisions in different ways and engage separable neural regions.
Surprisingly, we found that most people placed greater weight on fairness and protecting the worst-off, choosing to allocate the harm to the group even though this resulted in more harm overall!
They chose between one person👤 enduring the harm for a longer period and a group of three or four people👥👥 each enduring the same harm for shorter periods. Importantly, the total harm experienced by the group exceeded that of the one person, so the group option involved more harm overall.
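Using the example durations from the dilemma above (and assuming, for illustration only, that total harm simply sums exposure time across people), the totals work out as:

```python
# One person endures the icy water for 180 seconds...
solo_harm = 1 * 180
# ...versus three people enduring it for 80 seconds each.
group_harm = 3 * 80

print(solo_harm)   # 180
print(group_harm)  # 240

# The group option involves more total harm, as the thread notes.
assert group_harm > solo_harm
```

This is just a back-of-the-envelope check of the numbers in the post, not the paper's actual harm model.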
To address this question, we developed a novel moral dilemma in which participants had to allocate harm: the discomfort of plunging a hand into icy water 🖐🥶❄️
This trade-off has long been studied through hypothetical dilemmas, such as the trolley problem, which have helped psychologists and philosophers investigate our moral intuitions. But how do people actually make moral decisions beyond imagined scenarios?🤔
My paper is out in PNAS Nexus!! In this paper, we examine how people make moral decisions in situations where harm to one person is weighed against harm to many 🔗: academic.oup.com/pnasnexus/ar...
Thank you, Yoonseo Zoh (zohyos7.github.io), for sharing your work with us on "Intuitive Theories in Moral Cognition". Intuitive theories structure how people represent dilemmas, how they generalize to new contexts, and how they switch between representations based on resource-rational constraints.
Thanks so much, Tobi! It was a pleasure to share my work with your lab :)
Abstract
When we empathize with someone going through something, we often draw on our past experiences with the someone and the something. These kinds of experiences ground "thick empathy", a form of empathy that has been largely overlooked in the psychology and neuroscience literature. Consider how a mother, empathizing with her daughter about to give birth, can draw on her own experience of childbirth, and her relationship with her daughter, to deeply grasp what her daughter is going through in a way that others who lack those experiences cannot. I argue that thick empathy deserves more empirical attention because it is associated with well-being and helps us build networks of effective mutual social support. My analysis highlights novel risks and dilemmas posed by "empathy machines" that promise to enhance or even replace human empathy and are becoming increasingly popular as a potential solution to widespread loneliness. Even when empathy machines provide value to individuals, their widespread adoption risks imposing collective emotional and epistemic costs that ultimately make it harder for us to empathize well.
Keywords: empathy, understanding, experience, thick description, ethnography, phenomenal knowledge, interpersonal knowledge, virtual reality, artificial intelligence, chatbots
New preprint: Empathy, Thick and Thin
papers.ssrn.com/sol3/papers....
It is perhaps foolhardy to attempt to say something new about a topic as widely studied as empathy. I tried anyway! 1/
I’m still getting started on Bluesky and just realized I hadn’t added my collaborators here 😅 Thanks again to my amazing collaborators for all their support!💖 @psyhongbo.bsky.social @annayahprosser.bsky.social @brainapps.bsky.social @stevewcchang.bsky.social
I’m deeply grateful to my collaborators and my advisor, @mjcrockett.bsky.social, for their guidance on this project. I’m also extending this line of research to examine how resource-rational constraints shape how we represent others’ moral character. Stay tuned🤩⭐️!
There are more fascinating results in the paper that I couldn’t fit here—go check it out! 👉
static1.squarespace.com/static/538ca...
More broadly, these findings help explain the often-mixed relationship between people's explicit moral endorsements and their concrete moral decisions, showing that intuitive moral theories shape moral cognition at a representational level, beyond overt behavior.
The results were striking: Even when two people made different choices, their brains represented those choices similarly if they endorsed the two utilitarian principles to a similar degree. In other words, alignment in intuitive moral theories shaped how people mentally represented moral problems🧠
This approach allowed us to test whether people who endorse similar moral theories also show similar neural representations of ambiguous moral problems—beyond what can be explained by their overt decisions.
We used a moral decision-making task that was not explicitly aligned with either theory, making its relevance intentionally ambiguous. Using neuroimaging, we examined neural representational similarity across participants while controlling for similarities in their behavioral choices.
We conceptualized these dimensions as distinct intuitive moral theories that frame different patterns of moral judgment and behavior.
Recent research suggests that individual differences in utilitarian tendencies fall along two dimensions: a permissive attitude toward harming others for greater good (instrumental harm) and an impartial concern for others’ welfare (impartial beneficence).
In this work, we asked: what are the consequences of holding different intuitive moral theories? Do distinct moral theories shape how people represent and reason about moral problems—and do these effects extend beyond contexts directly tied to a theory’s content? 🤔
I’m thrilled to share that our paper is now published in the Journal of Experimental Psychology: General!🧵👇 psycnet.apa.org/record/2026-...