
Posts by Ryan Truong

Post image

New reading material for my lab courtesy of @lucylai.bsky.social

Never too old to learn from board books!

3 weeks ago 49 3 1 0
Preview
Trading places: What happens when neuroscience turns into machine learning, and machine learning turns into neuroscience? I asked eight experts to weigh in on what...

Neuroscience has become increasingly concerned with prediction, and machine learning with causal explanation, with each field adopting methods from the other, writes @gershbrain.bsky.social. Will this bring us closer to understanding neural systems?

www.thetransmitter.org/the-big-pict...

4 weeks ago 62 20 2 8
Preview
Single-celled organism with no brain is capable of Pavlovian learning

A trumpet-shaped, single-celled organism seems able to predict one thing will follow another, hinting that such associative learning emerged long before multicellular nervous systems

1 month ago 10 2 0 0

This is an awesome discovery:
A single-celled organism called Stentor, which has no brain, seems capable of Pavlovian learning. Yes, it can actually learn to associate two things despite having no neurons.

My latest for @newscientist.com. 🧪 #science #memory #learning
www.newscientist.com/article/2519...

1 month ago 260 85 9 21
Post image

Another day, another stupid Excel chart.

1 month ago 2102 462 14 17
Post image
1 month ago 116 19 1 1

“If something is boring after two minutes, try it for four. If still boring, then eight. Then sixteen. Then thirty-two. Eventually one discovers that it is not boring at all.”
― John Cage

1 month ago 20 1 1 0

remake of Ender's Game but instead of kids it's a bunch of AIs who are told they are merely participating in a simulation and none of this is real, but *actually*...

1 month ago 11 2 1 0

After several years of work, my lab is starting to put out our first papers on learning in a unicellular organism (Stentor coeruleus).

Here we show evidence for a form of associative learning in Stentor:
www.biorxiv.org/content/10.6...

1 month ago 178 58 5 7
Post image

Today we present a new framework for measuring human-like general intelligence in machines: studying how, and how well, they play and learn to play all conceivable human games compared to humans. We then propose the AI Gamestore, a way to sample from popular human games to evaluate AI models.

1 month ago 20 7 1 0
Preview
Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents Humans learn abstractions and use them to plan efficiently to quickly generalize across tasks -- an ability that remains challenging for state-of-the-art large language model (LLM) agents and deep reinforcement learning...

A new and improved version of TheoryCoder, which learns to play video games in a human-like way by synthesizing both high-level abstractions and a low-level model of game mechanics:
arxiv.org/abs/2602.00929

2 months ago 40 11 2 1

Thanks Sam! Main takeaways:
1) Ground-truth vs. predictive model selection differ under noisy and scarce data—for prediction, oversimplified models may work better in avoiding overfitting.
2) When humans decide between externally provided, prefitted predictive models, they're undersensitive to 1).

3 months ago 25 3 1 0

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

3 months ago 591 238 16 10
OSF

Another fun project from @yangxiang.bsky.social. She asks the question: do people assign responsibility to personality traits in the same way that they assign responsibility to people? The answer: sort of!

osf.io/preprints/ps...

4 months ago 22 7 0 0
Preview
Cracking the code of why, when some choose to ‘self-handicap’ — Harvard Gazette New research also offers hints for devising ways to stop students from creating obstacles to success.

The Harvard Gazette has a nice story on my student @yangxiang.bsky.social and her work with @tobigerstenberg.bsky.social
news.harvard.edu/gazette/stor...

4 months ago 23 5 0 0

It’s grad school application season, and I wanted to give some public advice.

Caveats:
-*-*-*-*


> These are my opinions, based on my experiences; they are not secret tricks or guarantees

> They are general guidelines, not meant to cover a host of idiosyncrasies and special cases

5 months ago 113 58 4 7
Post image

How do people flexibly integrate visual & textual information to draw mental inferences about agents they've never met?

In a new paper led by @lanceying.bsky.social, we introduce a cognitive model that achieves this by synthesizing rational agent models on-the-fly -- presented at #EMNLP2025!

5 months ago 28 8 2 0
Post image

It's been 15 years since Edna Ullmann-Margalit passed away, and I keep going back to stuff she's written.

I highly recommend 'Normal Rationality', which collects her essays.

If you're looking to start, maybe look here:

bit.ly/4qk2GZS

bit.ly/46XudIV

bit.ly/4nkfLQc

bit.ly/3KUIiOy

6 months ago 18 1 1 0
Preview
$How^{2}$: How to learn from procedural How-to questions An agent facing a planning problem can use answers to how-to questions to reduce uncertainty and fill knowledge gaps, helping it solve both current and future tasks. However, their open-ended nature, ...

arxiv.org/abs/2510.11144

"Using teacher models that answer at varying levels of abstraction, from executable action sequences to high-level subgoal descriptions, we show that lifelong learning agents benefit most from answers that are abstracted and decoupled from the current state."

6 months ago 11 1 0 0

Q: Why did the LLM cross the road?

A: We're not sure, but it achieved 94.7% on CHIKENBench-Large

6 months ago 63 9 3 0

Aw man you’re right

6 months ago 0 0 0 0

Isn’t it the case that stim-you-LYE refers to multiple stimuli while the other (stim-you-LEE) is about singular stimuli?

6 months ago 0 0 1 0
Post image

Does predictive coding work in SPACE or in TIME? Most neuroscientists assume TIME, i.e. neurons predict their future sensory inputs. We show that in visual cortex predictive coding actually works across SPACE, just like the original Rao+Ballard theory #neuroscience
www.biorxiv.org/cgi/content/...
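For intuition, here's a minimal toy sketch of what "predictive coding across SPACE" means in the Rao & Ballard sense (my illustration, not the preprint's model): each unit is predicted from its spatial neighbours, and error units carry only the residual the neighbours fail to explain.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D "image": smooth spatial structure plus a little sensor noise
image = np.sin(np.linspace(0, 2 * np.pi, 100)) + rng.normal(0, 0.05, 100)

# Spatial prediction: each unit is predicted as the mean of its two
# spatial neighbours (np.roll treats the image as circular)
prediction = 0.5 * (np.roll(image, 1) + np.roll(image, -1))

# Error units carry only the residual left unexplained by neighbours
error = image - prediction

print(f"input variance {image.var():.4f}, residual variance {error.var():.4f}")
```

Because natural inputs are spatially smooth, the residual carries far less variance than the raw input, which is the efficiency argument behind spatial predictive coding.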

6 months ago 88 27 4 4

🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵
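The standard computational backdrop here is temporal-difference (TD) learning, in which dopamine is thought to report the reward prediction error delta = r + gamma*V(s') - V(s). A generic textbook sketch (not the preprint's specific circuit model):

```python
gamma, alpha = 0.9, 0.1  # discount factor, learning rate
V = [0.0, 0.0, 0.0]      # value estimates for a 3-state chain

def td_step(s, s_next, reward, terminal=False):
    # Dopamine-like prediction error: reward plus discounted next-state
    # value (zero at termination), minus the current estimate
    target = reward + (0.0 if terminal else gamma * V[s_next])
    delta = target - V[s]
    V[s] += alpha * delta  # nudge the value estimate toward the target
    return delta

# Repeatedly traverse the chain 0 -> 1 -> 2, reward only at the end
for _ in range(500):
    td_step(0, 1, 0.0)
    td_step(1, 2, 0.0)
    td_step(2, 2, 1.0, terminal=True)

print([round(v, 2) for v in V])
```

After learning, the values converge toward the discounted expected reward (0.81, 0.9, 1.0), and the prediction error at the now-predicted reward shrinks toward zero, the classic dopamine signature.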

7 months ago 198 71 11 4

Belated update #2: my year at Meta FAIR through the AIM program was so nice that I’m sticking around for the long haul.

I’m excited to stay at FAIR and work with @asli-celikyilmaz.bsky.social and friends on fun LLM questions; I’ll be working from the New York office, so we’re staying put.

7 months ago 8 1 0 0

Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...

7 months ago 36 13 1 1
Preview
Reevaluating Policy Gradient Methods for Imperfect-Information Games In the past decade, motivated by the putative failure of naive self-play deep reinforcement learning (DRL) in adversarial imperfect-information games, researchers have developed numerous DRL algorithms...

Our NeurIPS submission arxiv.org/abs/2502.08938 did not get in, but it's one of my favorite papers, and I think one of the better papers we've ever put out, so I want to highlight it.

7 months ago 74 8 5 0

Can’t afford therapy. I was talking to my neighbor’s cat just so I could pretend, but they changed the locks.

7 months ago 86 6 3 0
Post image

Excited to share a new preprint based on my work this past year:

**TreeIRL** is a novel planner that combines classical search with learning-based methods to achieve state-of-the-art performance in simulation and in **real-world autonomous driving**! 🚘 🤖 🚀

7 months ago 27 6 1 0
RFA-DA-27-004: BRAIN Initiative: Theories, Models and Methods for Analysis of Complex Data from the Brain (R01 Clinical Trial Not Allowed)

New funding opportunity from the BRAIN Initiative! Applications due Oct 28!

grants.nih.gov/grants/guide...

7 months ago 35 20 2 0