New reading material for my lab courtesy of @lucylai.bsky.social
Never too old to learn from board books!
Posts by Ryan Truong
Neuroscience has become increasingly concerned with prediction, and machine learning with causal explanation, with each field adopting methods from the other, writes @gershbrain.bsky.social. Will this bring us closer to understanding neural systems?
www.thetransmitter.org/the-big-pict...
A trumpet-shaped, single-celled organism seems able to predict that one thing will follow another, hinting that such associative learning emerged long before multicellular nervous systems
This is an awesome discovery:
A single-celled organism with no brain called Stentor seems capable of Pavlovian learning. Yes, it can actually learn to associate two things despite having no neurons.
My latest for @newscientist.com. 🧪 #science #memory #learning
www.newscientist.com/article/2519...
Another day, another stupid Excel chart.
“If something is boring after two minutes, try it for four. If still boring, then eight. Then sixteen. Then thirty-two. Eventually one discovers that it is not boring at all.”
― John Cage
remake of Ender's Game but instead of kids it's a bunch of AI's who are told they are merely participating in a simulation and none of this is real, but *actually*...
After several years of work, my lab is starting to put out our first papers on learning in a unicellular organism (Stentor coeruleus).
Here we show evidence for a form of associative learning in Stentor:
www.biorxiv.org/content/10.6...
Today we present a new framework for measuring human-like general intelligence in machines: studying how, and how well, they play and learn to play all conceivable human games compared to humans. We then propose the AI Gamestore, a way to sample from popular human games to evaluate AI models.
A new and improved version of TheoryCoder, which learns to play video games in a human-like way by synthesizing both high-level abstractions and a low-level model of game mechanics:
arxiv.org/abs/2602.00929
Thanks Sam! Main takeaways:
1) Ground-truth and predictive model selection diverge under noisy, scarce data: for prediction, oversimplified models can do better because they avoid overfitting.
2) When humans choose among externally provided, pre-fitted predictive models, they are undersensitive to (1).
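A toy illustration of point (1), with made-up data rather than anything from the paper: when the truth is linear and training data are few and noisy, an "oversimplified" line predicts the ground truth better than a flexible polynomial, on average.

```python
import numpy as np

# Hypothetical setup: the true relationship is linear (y = 2x),
# but each experiment only yields 8 noisy training points.
x_train = np.linspace(-1, 1, 8)
x_test = np.linspace(-1, 1, 200)
y_true = 2.0 * x_test  # noiseless ground truth on a dense grid

mses = {1: [], 6: []}  # test error for degree-1 vs degree-6 fits
for seed in range(50):  # average over repeated noisy datasets
    rng = np.random.default_rng(seed)
    y_train = 2.0 * x_train + rng.normal(scale=1.0, size=8)
    for deg in (1, 6):
        coeffs = np.polyfit(x_train, y_train, deg)
        mses[deg].append(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))

mse_simple = np.mean(mses[1])   # the "oversimplified" model
mse_complex = np.mean(mses[6])  # the flexible model overfits the noise
```

The simple model wins here not because it matches the truth's complexity by luck, but because with 8 points the flexible fit chases noise.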
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
Another fun project from @yangxiang.bsky.social. She asks the question: do people assign responsibility to personality traits in the same way that they assign responsibility to people? The answer: sort of!
osf.io/preprints/ps...
The Harvard Gazette has a nice story on my student @yangxiang.bsky.social and her work with @tobigerstenberg.bsky.social
news.harvard.edu/gazette/stor...
It’s grad school application season, and I wanted to give some public advice.
Caveats:
-*-*-*-*
> These are my opinions, based on my experiences; they are not secret tricks or guarantees
> They are general guidelines, not meant to cover a host of idiosyncrasies and special cases
How do people flexibly integrate visual & textual information to draw mental inferences about agents they've never met?
In a new paper led by @lanceying.bsky.social, we introduce a cognitive model that achieves this by synthesizing rational agent models on-the-fly -- presented at #EMNLP2025!
It's been 15 years since Edna Ullmann-Margalit passed away, and I keep going back to stuff she's written.
I highly recommend 'Normal Rationality', which collects her essays.
If you're looking to start, maybe look here:
bit.ly/4qk2GZS
bit.ly/46XudIV
bit.ly/4nkfLQc
bit.ly/3KUIiOy
arxiv.org/abs/2510.11144
"Using teacher models that answer at varying levels of abstraction, from executable action sequences to high-level subgoal descriptions, we show that lifelong learning agents benefit most from answers that are abstracted and decoupled from the current state."
Q: Why did the LLM cross the road?
A: We're not sure, but it achieved 94.7% on CHICKENBench-Large
Aw man you’re right
Isn’t it the case that stim-you-LYE refers to multiple stimuli, while the other (stim-you-LEE) is about a singular stimulus?
Does predictive coding work in SPACE or in TIME? Most neuroscientists assume TIME, i.e. neurons predict their future sensory inputs. We show that in visual cortex predictive coding actually works across SPACE, just like the original Rao+Ballard theory #neuroscience
www.biorxiv.org/cgi/content/...
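Not the preprint's code — just a minimal sketch of the Rao & Ballard-style spatial idea: feedback weights predict lower-level activity from a higher-level representation, and error units carry the residual, which inference drives down. All sizes and learning rates here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)        # lower-level activity (e.g. an image patch)
W = rng.normal(size=(16, 4))   # feedback (generative) weights: prediction = W @ r
r = np.zeros(4)                # higher-level representation

errors = []
for _ in range(300):
    e = x - W @ r              # spatial prediction error (the error units)
    r += 0.02 * W.T @ e        # inference: adjust r to better explain the input
    errors.append(np.linalg.norm(e))
# The error shrinks toward the part of x that W cannot explain;
# the prediction is across space (within one input), not across time.
```

The point of the sketch: nothing here predicts the *future* — the "prediction" is of one part of the current input from a more abstract description of it.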
🚨Our preprint is online!🚨
www.biorxiv.org/content/10.1...
How do #dopamine neurons perform the key calculations in reinforcement #learning?
Read on to find out more! 🧵
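For readers unfamiliar with the framing, here is the generic textbook TD(0) model in which the reward prediction error delta is identified with the dopamine response — a background sketch, not the preprint's actual model or parameters.

```python
import numpy as np

# A cue at t=0 reliably predicts a reward at the end of the trial.
T = 5
V = np.zeros(T + 1)        # learned value of each time step in the trial
alpha, gamma = 0.1, 1.0    # learning rate and discount (illustrative values)

for trial in range(500):
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0
        delta = r + gamma * V[t + 1] - V[t]  # reward prediction error ("dopamine")
        V[t] += alpha * delta
# After learning, value propagates back to the cue (V[0] ~ 1), and the
# prediction error at reward time shrinks: the reward is fully expected.
```

This is the classic account the preprint builds on; the thread discusses how dopamine neurons might implement such calculations.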
Belated update #2: my year at Meta FAIR through the AIM program was so nice that I’m sticking around for the long haul.
I’m excited to stay at FAIR and work with @asli-celikyilmaz.bsky.social and friends on fun LLM questions; I’ll be working from the New York office, so we’re staying in NYC.
Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...
Our NeurIPS submission arxiv.org/abs/2502.08938 did not get in, but it's one of my favorite papers and, I think, one of the better papers we've ever put out, so I want to highlight it
Can’t afford therapy. I was talking to my neighbor’s cat just so I could pretend, but they changed the locks.
Excited to share a new preprint based on my work this past year:
**TreeIRL** is a novel planner that combines classical search with learning-based methods to achieve state-of-the-art performance in simulation and in **real-world autonomous driving**! 🚘 🤖 🚀