
Posts by William J. Brady

New preprint! w/Tessa Charlesworth & @williambrady.bsky.social:

The Psychology of Algorithmic Bias

We introduce a psychology-centered framework to specify mechanisms through which human behavior interacts dynamically with AI systems to produce algorithmic bias.

osf.io/preprints/psyarxiv/rxu37_v1

2 weeks ago 20 13 3 0

New preprint! ✨ Do you and your partner have made-up words ("eggy" to mean awkward)? Do you and your bestie have an anecdote you love to tell together (that time one of you tripped over an acorn)? Do you and your closest colleague have a cherished ritual (weekly lunch at "the usual spot")? 🧵

3 weeks ago 36 15 4 2

Nice - this worked well!

1 month ago 0 0 0 0
Photo of Billy talking to a crowd of confused speakers & attendees

Photo of negotiations continuing between speakers

They double-booked a room for the "Healthy, Wise, Wealthy, Decision Making" and "Moral Machines" sessions at SPSP, but @williambrady.bsky.social exercised some masterful (& healthy & wise & moral & wealthy?) negotiation skills and we're now getting two sessions of talks in one 😊

1 month ago 16 1 2 1

🫡

1 month ago 3 0 0 0
Post image

Also! Causal evidence from an A/B testing dataset: victim framing causes more click-throughs

1 month ago 2 0 0 0
Post image

@killianmcloughlin.bsky.social social media news is especially likely to use villain/victim framing, low-quality news especially; it's more likely to evoke outrage and draw engagement when doing so #spsp

1 month ago 17 3 1 0
Post image

New work from @drsanaz.bsky.social : using smartphone sensing data, one of the biggest predictors of scoring higher on authoritarianism measures was Facebook and social media use #spsp

1 month ago 17 2 1 0
Post image

Many think LLM-simulated participants can transform behavioral science. But there's been a lack of accessible discussion of what it means to validate LLMs for behavioral scientists. Under what conditions can we trust LLMs to learn about human parameters? Our paper maps the validation landscape.
1/

4 months ago 99 26 2 3

Including many fantastic participants I was able to tag + more: @markthornton.bsky.social n.bsky.social @clemensstachl.bsky.social @hyogweon.bsky.social @abbycassario.bsky.social @bufangao.bsky.social o.bsky.social @drsanaz.bsky.social @mattgroh.bsky.social @mohammadatari.bsky.social

1 month ago 5 0 0 0

Thanks to lead organizer Tessa Charlesworth, and co-organizers @baixuechunzi.bsky.social Brent Hughes @chujunlin.bsky.social

1 month ago 3 0 1 0
ComputationalPsychTalks_SPSP2026

One thing we want to do this year is highlight all the cool computational psych happening throughout the conference symposia. If you have a talk broadly related to comp psych, enter it at this link: tinyurl.com/bdyuxmx3. We will put a QR code during precon so attendees can view this list!

1 month ago 2 0 1 0
Post image

We're excited about the upcoming Computational Psychology preconference at @spspnews.bsky.social this Thursday. See our action-packed full-day agenda below! Featuring 3 keynote talk themes with related early-career speakers, a data blitz session, and a panel discussion. Don't miss it! #SPSP

1 month ago 21 9 1 1

Last year at SPSP we had some great discussions about research with LLMs. This time #spsp2026 we're back with a whole workshop!

Friday 2/27 at 8am, + informal drinks at 6pm to continue the conversation

More info, plus a chance to submit discussion topics, here:
spsp2026.carrd.co

1 month ago 20 5 1 1
Post image

Interested in why moral conflict is so common on social media?

Join us at #SPSP 2026 for our symposium. We’ll present new findings on how platforms shape digital discourse and explore pathways toward healthier online environments

🗓 Saturday, the 28th | 9:30–10:40 AM
📍 Room E270, Level 2

1 month ago 10 3 1 0

WSP has been the birthplace of many great things (and perhaps also some questionable things 😅)

1 month ago 1 0 0 0
Post image

@chazfirestone.bsky.social it's a true honor 😅

1 month ago 6 0 3 0

Appreciate you Molly!!

1 month ago 1 0 0 0

Very honored by this one! Thanks to all my mentors, students and colleagues who made it possible! And congrats to all the recipients for their amazing work.

1 month ago 32 0 2 1
Post image Post image

For folks interested in learning about our lab's research, check out this flyer with all our presentations at the upcoming #SPSP2026 conference @spspnews.bsky.social. With research by several rising stars covering tech, culture, politics, and more

Credit to our talented lab manager Hanying Yao!

1 month ago 15 5 0 0
Preview
Troland Research Award – NAS Two Troland Research Awards of $75,000 are given annually to recognize unusual achievement by early-career researchers (preferably 45 years of age or younger) and to further empirical research within ...

Congrats to @mjcrockett.bsky.social on the Troland Research Award from @nasonline.org ! Having witnessed the "unusual achievement" first hand, very happy to see the recognition 🌹💐

www.nasonline.org/award/trolan...

2 months ago 6 2 1 0
Post image

Before the end of this year, I’m glad to share a short perspective/policy piece, recently out with @joshcjackson.bsky.social , Zhao Wang, and @williambrady.bsky.social: “Large AI Models Have a Prioritization Problem: Policy Implications and Solutions.”

3 months ago 6 3 2 0

Abstract

When we empathize with someone going through something, we often draw on our past experiences with the someone and the something. These kinds of experiences ground "thick empathy", a form of empathy that has been largely overlooked in the psychology and neuroscience literature. Consider how a mother, empathizing with her daughter about to give birth, can draw on her own experience of childbirth, and her relationship with her daughter, to deeply grasp what her daughter is going through in a way that others who lack those experiences cannot. I argue that thick empathy deserves more empirical attention because it is associated with well-being and helps us build networks of effective mutual social support. My analysis highlights novel risks and dilemmas posed by "empathy machines" that promise to enhance or even replace human empathy and are becoming increasingly popular as a potential solution to widespread loneliness. Even when empathy machines provide value to individuals, their widespread adoption risks imposing collective emotional and epistemic costs that ultimately make it harder for us to empathize well.

Keywords: empathy, understanding, experience, thick description, ethnography, phenomenal knowledge, interpersonal knowledge, virtual reality, artificial intelligence, chatbots

New preprint: Empathy, Thick and Thin
papers.ssrn.com/sol3/papers....

It is perhaps foolhardy to attempt to say something new about a topic as widely studied as empathy. I tried anyway! 1/

4 months ago 252 66 12 11
Preview
AI-generated political videos are more about memes and money than persuading and deceiving Don’t discount the threat of AI political videos fooling people, but for now, they’re mostly about bolstering group identity and cashing in on viral content.

New from me - how AI-generated political videos have become just another part of social media, used to entertain, outrage, and monetize attention

theconversation.com/ai-generated...

4 months ago 33 15 2 1
Preview
ChatGPT does not replicate human moral judgments: the importance of examining metrics beyond correlation to assess agreement - Scientific Reports

Out now in Scientific Reports! Despite high correlations, ChatGPT models failed to replicate human moral judgments. We propose tests beyond correlation to compare LLM data and human data.

With @mattgrizz.bsky.social @andyluttrell.bsky.social @chasmonge.bsky.social

www.nature.com/articles/s41...

4 months ago 24 9 0 1
Post image Post image

So there you have it, twin study estimates were greatly inflated, and molecular data sets the record straight. I walk through possible counter-arguments, but ultimately the uncomfortable truth is that genes contribute to traits much less than we always thought.

4 months ago 135 43 4 8
Preview
How public involvement can improve the science of AI | PNAS As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being ...

Great work by @natematias.bsky.social & Megan Price: public involvement in AI is an important part of rigorous science. AI systems are sociotechnical, meaning that the lived experience of the public is essential for validation, etc.

www.pnas.org/doi/10.1073/...

5 months ago 2 3 1 0
OSF

New preprint out 📄
“Why Reform Stalls: Justifications of Force Are Linked to Lower Outrage and Reform Support.”

Why do some cases of police violence spark reform while others fade? We look at how people explain them—through justification or outrage.

osf.io/preprints/ps...

5 months ago 9 5 1 1
Post image

🚨Out in PNAS🚨
Examining news on 7 platforms:
1)Right-leaning platforms=lower quality news
2)Echo-platforms: Right-leaning news gets more engagement on right-leaning platforms, vice-versa for left-leaning
3)Low-quality news gets more engagement EVERYWHERE - even Bluesky!
www.pnas.org/doi/10.1073/...

5 months ago 218 105 11 8
Preview
Estimating cognitive biases with attention-aware inverse planning People's goal-directed behaviors are influenced by their cognitive biases, and autonomous systems that interact with people should be aware of this. For example, people's attention to objects in their...

Excited to share a new preprint, accepted as a spotlight at #NeurIPS2025!

Humans are imperfect decision-makers, and autonomous systems should understand how we deviate from idealized rationality

Our paper aims to address this! 👀🧠✨
arxiv.org/abs/2510.25951

a 🧵⤵️

5 months ago 63 14 1 2