Just wrote a new blogpost trying to summarize my thoughts on the question of how and whether to use AI for research in psychology and cognitive science: babieslearninglanguage.blogspot.com/2026/04/usin...
Posts by Elizabeth Jiwon Im
The Cognitive Tools Lab at Stanford (cogtoolslab.github.io) is recruiting two new research staff members to join in AY 26-27.
Full-Time Lab Manager: forms.gle/UVwfx5wbY9Km....
IRiSS Predoc Researcher: iriss.stanford.edu/predoc/2026-....
Please share widely in your networks, thank you!!
I Work Very Hard, And I Would Like To Try Cake
By A Horse

Hello. I am a horse. I work very hard at my job of being a horse. When humans say move the heavy thing, I move the heavy thing. When humans sit on top of me and pull on my head, I carry them where they want to go. The main food the humans give me is hay and oats. But I am thinking it would be nice to have a different food. I am thinking I would like to try cake.

Yes, yes. Cake. I know all about it. When humans eat cake, it is in glad times. It is the food for a celebration, such as when a woman becomes 47. I have seen cake on the Fourth of July. When humans have a cake, they stand around it and clap hands and smile and say happy birthday at each other. Sometimes there are beautiful markings on a cake, such as balloons or a pink shape. Sometimes the top of a cake is on fire and a boy must blow on the fire with mouth wind. This is the scariest cake. I do not want this kind. But I will eat any other cake. Any cake that is not the fire cake that tries to kill the boy.

Please understand: I do not get money for doing work. I do not get to go inside the house. All I am ever doing is my horse job or standing in my pen or eating food off the floor. I always do these things. But I have never once gotten cake and I would like it very much.

I have noticed that human children get to eat cake. But I am bigger than the children. I am more helpful to the farm. Children do not move the heavy things like me or let anyone ride on them. And yet they get cake. Maybe the humans will realize this. Maybe they will say, "You know who deserves cake? That horse. That horse whose back we are always on."

Every day I dream about what it will be like if I get to eat cake. Here is what will happen. First, I will walk to the cake and put my nose at it like hrrfff and stomp my hooves to make sure it is not a snake. Then I will trot in a circle to show that I am a horse and I am large. After that, I will nuzzle the cake to …
The horse op-ed is an instant classic. I can't tell you how much joy this piece gives me.
It should be taught in every introductory writing class in no small part because the horse arguments are so compelling. "I have noticed that human children get to eat cake. But I am bigger than the children."
The Causality in Cognition Lab -- a supportive, bluesky-colored team -- is looking for a predoc to join us! Here is info about the lab (cicl.stanford.edu) and the position (careersearch.stanford.edu/jobs/iriss-p...). The application deadline is May 1st.
Please share, thank you 🙏
Cutting science funding means progress stops. Time to write home about the value of American science. Science Homecoming has resources on our website. Encourage your colleagues to pencil up ✏️
New paper with @cantlonlab.bsky.social out now in Dev Cog Neuro! We scanned 3- to 5-year-olds with fMRI and found that number words activate regions of cortex also involved in visual numerosity perception, even at the earliest stages of counting acquisition doi.org/10.1016/j.dc...
Excited to share our new publication, “Measuring Naturalistic Speech Comprehension in Real Time”!
➡️ rdcu.be/fa3hk #psynomBRM
w/ @kriesjill.bsky.social, Shiven Gupta, Maria Papworth Burrel, & @lauragwilliams.bsky.social
🧵1/11
My (very) short piece on how prenatal experience with the mother's voice may rapidly scaffold the development of face perception in newborn infants is now out in @natrevpsychol.nature.com !
www.nature.com/articles/s44...
Very happy that this paper from our lab is now out in @pnas.org! What happens when the *same* person experiences the *same* information with a *different* interpretation? Nearly the whole 🧠—well, at least nearly all association cortex—changes how it represents that information! tinyurl.com/p8chj2j7
The most important change at #NIH and to US science this year is bigger than grant cancellations—it's how the agency is governed.
For 75 years NIH has been largely independent of presidential control. That’s changed this year. New piece from me and @nataliebaviles.bsky.social in @nature.com
🧪
Excited to share new work on how the brain makes social inferences from visual input! 🧠👯♂️
(With @lisik.bsky.social , @shariliu.bsky.social, @tianminshu.bsky.social , and Minjae Kim!) www.biorxiv.org/content/10.6...
We've posted a new fMRI study of semantic relations (has-part, is-a, made-of, etc.), a key aspect of language. We find that relations are represented in the same brain regions as are other semantic concepts, though voxels tend to be selective for only one relation or another.
doi.org/10.64898/202...
New paper with @timbrady.bsky.social and @violastoermer.bsky.social now out in JoCN! "Real-world Objects Scaffold Visual Working Memory for Features: Increased Neural Engagement When Colors Are Remembered as Part of Meaningful Objects" doi.org/10.1162/JOCN...
🚨 Jewelia’s new preprint! We report the first pRF mapping in teens + reveal functional fingerprints of category regions in high-level visual cortex. www.biorxiv.org/content/10.6...
New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the “meaningfulness benefit” in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/
Congratulations to @lillianbehm.bsky.social, Nick Turk-Browne, and a huge team for putting together this paper (out today) on lessons from a decade of attempts to study awake infants with fMRI:
onlinelibrary.wiley.com/doi/10.1111/...
The Visual Learning Lab is hiring TWO lab coordinators!
Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.
Excited to share our new publication “The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia”, published in JNeurosci 🧠
➡️ www.jneurosci.org/content/46/4...
with @lauragwilliams.bsky.social & @mvandermosten.bsky.social 🤝
Check out @stanfordbrain.bsky.social ’s summary of it ⬇️
Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!
Excited that this is now out in @nathumbehav.nature.com 🎉
David Rose (davdrose.github.io) led this project on how children's understanding of causal language develops.
📃 (preprint): osf.io/preprints/ps...
📎: github.com/davdrose/cau...
How physical information is used to make sense of the psychological world
Perspective by Shari Liu, Seda Karakose-Akbiyik, Joseph Outa & Minjae J. Kim
Web: go.nature.com/3Xwo40J
PDF: rdcu.be/eSMfa
We are recruiting a lab manager/research assistant to start in early 2026! The successful candidate will conduct awake infant fMRI, meet cute babies, and join a fun team!
More details (e.g. responsibilities): soc.stanford.edu/people/#join...
Apply here: careersearch.stanford.edu/jobs/social-...
Figure 1 showing alignment pipeline using CLIP models on BabyView data.
Figure 2: human judgments are correlated with CLIP scores.
Can we use VLMs to quantify multimodal alignment in children's experiences? We analyze a large corpus of headcam videos to find out!
New preprint from our BabyView project, led by @alvinwmtan.bsky.social and Jane Yang: arxiv.org/abs/2511.18824
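The core of a CLIP-based alignment pipeline like the one in the figures can be sketched in a few lines. This is an illustrative sketch, not the project's actual code: it assumes image and text embeddings from a CLIP-style model are already computed, and uses cosine similarity in the joint embedding space as the alignment score.

```python
import numpy as np

def alignment_score(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Cosine similarity between an image embedding and a text embedding.

    In a CLIP-style pipeline both vectors come from the model's shared
    image-text space; here they are just unit-normalized and dotted.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_emb / np.linalg.norm(text_emb)
    return float(img @ txt)

# Toy check: identical embeddings are maximally aligned,
# orthogonal embeddings are not aligned at all.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(alignment_score(a, a))  # 1.0
print(alignment_score(a, b))  # 0.0
```

In practice the embeddings would come from a pretrained vision-language model applied to headcam frames and transcribed speech; scores like these could then be compared against human alignment judgments.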
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
We’re recruiting a postdoctoral fellow to join our team! 🎉
I’m happy to share that I’ve opened back up the search for this position (it was temporarily closed due to funding uncertainty).
See lab page and doc below for details!
infant data from experiment 1
conceptual schema for different habituation models
title page
results from experiment 2 with adults
Ever wonder how habituation works? Here's our attempt to understand:
A stimulus-computable rational model of visual habituation in infants and adults doi.org/10.7554/eLif...
This is the thesis of two wonderful students: @anjiecao.bsky.social @galraz.bsky.social, w/ @rebeccasaxe.bsky.social
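The intuition behind a rational model of habituation can be shown with a toy example (my illustration, not the paper's actual model): if looking time tracks Bayesian surprise, then repeated exposure to the same stimulus drives surprise down as the learner's posterior sharpens, and a novel stimulus makes it spike back up. A minimal Beta-Bernoulli sketch:

```python
import math

def habituation_curve(n_exposures: int, novel_last: bool = True):
    """Toy rational habituation: surprise = -log predictive probability
    under a Beta(1, 1) learner observing a binary stimulus feature.

    Repeated '1' observations shrink surprise (habituation); a final '0'
    observation spikes it again (dishabituation to novelty).
    """
    alpha, beta = 1.0, 1.0  # uniform Beta prior over P(feature = 1)
    observations = [1] * n_exposures + ([0] if novel_last else [])
    surprises = []
    for x in observations:
        p_one = alpha / (alpha + beta)        # posterior predictive P(x = 1)
        p_x = p_one if x == 1 else 1 - p_one
        surprises.append(-math.log(p_x))      # Bayesian surprise proxy
        alpha += x                            # conjugate posterior update
        beta += 1 - x
    return surprises

s = habituation_curve(5)
print([round(v, 3) for v in s])  # falls over familiar trials, jumps on the novel one
```

The real model in the paper is stimulus-computable (it works from raw images); this sketch only captures the decreasing-surprise dynamic.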
🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!
Humans are capable of sophisticated theory of mind, but when do we use it?
We formalize & document a new cognitive shortcut: belief neglect — inferring others' preferences, as if their beliefs are correct🧵
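The contrast between full theory of mind and the belief-neglect shortcut can be illustrated with a toy inference problem (my own sketch, not the paper's formal model). An observer sees an agent open box 0, which truly holds an apple, and infers whether the agent prefers apples. Full theory of mind marginalizes over the agent's possible (maybe false) belief about which box holds what; the shortcut simply assumes the agent's beliefs are correct.

```python
def choice_prob(prefers_apple: bool, believes_swapped: bool, box: int) -> float:
    """P(agent opens `box`) given its preference and belief.
    Truly, box 0 holds the apple; a 'swapped' belief puts it in box 1.
    Near-deterministic choice with a little noise, for illustration."""
    believed_apple_box = 1 if believes_swapped else 0
    wanted_box = believed_apple_box if prefers_apple else 1 - believed_apple_box
    return 0.9 if box == wanted_box else 0.1

def infer_pref(box: int, p_swapped_belief: float) -> float:
    """Full ToM: P(prefers apple | opened box), marginalizing over belief.
    Setting p_swapped_belief = 0 recovers the belief-neglect shortcut,
    which treats the agent's belief as matching the true world state."""
    prior_pref = 0.5
    num, den = 0.0, 0.0
    for prefers_apple in (True, False):
        p_pref = prior_pref if prefers_apple else 1 - prior_pref
        lik = sum(
            (p_swapped_belief if b else 1 - p_swapped_belief)
            * choice_prob(prefers_apple, b, box)
            for b in (True, False)
        )
        if prefers_apple:
            num = p_pref * lik
        den += p_pref * lik
    return num / den

# Full ToM, uncertain about the agent's belief: opening box 0 is
# uninformative, because the agent may think the apple is elsewhere.
print(infer_pref(box=0, p_swapped_belief=0.5))  # 0.5

# Belief neglect: assume beliefs are correct, so opening the apple box
# looks like strong evidence for liking apples.
print(infer_pref(box=0, p_swapped_belief=0.0))  # 0.9
```

The shortcut is cheaper (no marginalization over beliefs) and usually right, which is what makes it an attractive heuristic; it misleads exactly when the agent's belief is false.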
Flyer for the event!
*Sharing for our department’s trainees*
🧠 Looking for insight on applying to PhD programs in psychology?
✨ Apply by Sep 25th to Stanford Psychology's 9th annual Paths to a Psychology PhD info-session/workshop to have all of your questions answered!
📝 Application: tinyurl.com/pathstophd2025
New Open dataset alert:
🧠 Introducing "Spacetop" – a massive multimodal fMRI dataset that bridges naturalistic and experimental neuroscience!
N = 101 x 6 hours each = 606 functional iso-hours combining movies, pain, faces, theory-of-mind and other cognitive tasks!
🧵below