
Posts by Aran Nayebi

Talk recordings from our CoSyNe 2026 workshop on Agent-based Models in Neuroscience are now online!

Playlist: www.youtube.com/playlist?lis...

RL agents and theory, interoception, biomechanical models, connectome-constrained models, and more.

#CoSyNe #NeuroAI #CompNeuro #RL #EmbodiedAI

3 weeks ago
Aran Nayebi on X: "Does anyone know how this virtual fly moves *without* RL, given that the actual motor neurons weren't traced out (because the body wasn't scanned)? @Leokoz8 @michaelandregg @oh_that_hat @eonsys @alexwg @Philip_Shiu @AdamMarblestone"

More technical details here: x.com/aran_nayebi/...

1 month ago

Honored to be quoted in this great reporting by @theroberthart.bsky.social on getting the facts straight here. No, this is *not* a fly uploaded to a computer, and in fact imitation learning via RL from fly behavior was responsible for the intelligent behavior, not the "uploaded" pseudo-connectome :)

1 month ago

If you're at #Cosyne2026, stop by @reecedkeller.bsky.social's poster tonight (Poster 1-034) and ask him questions! :)

1 month ago

If you're attending @cosynemeeting.bsky.social, come check out our NeuroAgents workshop on Tuesday March 17!

Speakers: Omri Barak, Cristina Savin, @lilweb.bsky.social @reecedkeller.bsky.social Caroline Haimerl, Hannah Choi @xaqlab.bsky.social Srini Turaga, Yanan Sui, @trackingskills.bsky.social

👇

1 month ago
What Capable Agents Must Know: Selection Theorems for Robust Decision-Making under Uncertainty As artificial agents become increasingly capable, what internal structure is *necessary* for an agent to act competently under uncertainty? Classical results show that optimal control can be *implemen...

15/ Paper: arxiv.org/abs/2603.02491

Thanks to @lenoreblum.bsky.social & Manuel Blum, @dhadfieldmenell.bsky.social, & @dyamins.bsky.social, @leokoz8.bsky.social, @reecedkeller.bsky.social, Noushin Quazi for discussions and feedback, & @bwfund.bsky.social & @protocollabs.bsky.social for funding.

1 month ago

14/ Therefore, the selection-theoretic approach we develop here helps establish ground truth and guidance for which signatures we can expect to find in more capable systems.

1 month ago

13/ Altogether, these results have implications for the emerging science of AI alignment/welfare. As AI systems become more robustly agentic, we should expect signatures like world models and belief-like memory to emerge, and, under task-distribution assumptions, modularity and regime-tracking variables as well.

1 month ago

12/ This connects to the Contravariance Principle / Platonic Representation Hypothesis that similar representations develop across high-performing models, and helps explain why capable models often develop brain-aligned representations, as the past decade of NeuroAI has consistently observed.

1 month ago

11/ Finally: if two agents both achieve vanishing regret on the same task family, their internal representations must match up to an *invertible* recoding.

1 month ago

10/ Structure in the task distribution further shapes internal organization:
• block-structured tasks → informational modularity
• mixtures of task regimes → persistent regime-tracking variables that globally modulate behavior (functionally analogous to affective modulators)

1 month ago

9/ Combining the same betting framework with predictive-state-style tests (PSRs), we address an *open question* recently posed by Jonathan Richens & @tom4everitt.bsky.social 2025: even in POMDPs, low regret forces a predictive state and belief-like memory via a quantitative no-aliasing result.

1 month ago

8/ Partial observability is harder because the same observation can come from multiple latent states, mixing together different underlying dynamics. No amount of training data can resolve this.
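A minimal illustration of this aliasing problem (my own toy construction, not from the paper): two latent states emit the same observation but lead to different futures, so a memoryless predictor is stuck at chance no matter how much data it sees, while a single bit of memory resolves the ambiguity.

```python
# Toy aliasing example (illustrative construction): latent states A and B both
# emit the observation 'x', but the future differs (A is followed by '1', B by
# '2'). The episode's first observation reveals which latent state produced 'x'.
episodes = [("a", "x", "1"), ("b", "x", "2")] * 50

def accuracy(predict):
    """Fraction of episodes whose third observation is predicted correctly."""
    hits = sum(predict(prev, cur) == nxt for prev, cur, nxt in episodes)
    return hits / len(episodes)

memoryless = lambda prev, cur: "1"                        # sees only cur == 'x'
with_memory = lambda prev, cur: "1" if prev == "a" else "2"

print(accuracy(memoryless))   # 0.5: aliased, at chance however much data we add
print(accuracy(with_memory))  # 1.0: belief-like memory resolves the aliasing
```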

1 month ago

7/ But we also highlight a limit: we show that counterfactual reasoning generally *cannot* be recovered from this alone, echoing critiques from Judea Pearl and others on the limits of causal reasoning in standard world models.

1 month ago

6/ This error bound improves with goal depth n (longer-horizon competence demands tighter dynamics estimates). And it highlights a pitfall: myopic (n=1) competence doesn't force world models, echoing a recent result of Richens & Everitt, but without assuming worst-case competence or deterministic policies.

1 month ago

5/ In fully observed environments, we show even stochastic policies with only average-case competence implicitly encode an approximate interventional transition model (“what happens if I do a?”).

1 month ago

4/ Main idea: reduce prediction to binary bets.

If a test isn’t a coin flip, regret bounds limit how often an agent can bet wrong. So strong performance forces internal state to track the predictive distinctions that matter.
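As a toy sketch of the reduction (my own construction, not the paper's formal setup): take a binary "test" x that biases the outcome. An agent whose bets ignore x can do no better than a coin flip, while one whose internal state tracks x nearly eliminates its error, so any low-regret bettor must encode the distinction. The conditional probabilities below are hypothetical numbers chosen for illustration.

```python
# Toy sketch of the betting reduction (illustrative). A "test" is a binary
# feature x biasing the outcome y; an agent whose error approaches the optimum
# must condition its bets on x, i.e. track the predictive distinction x encodes.

def expected_error(bet_fn):
    """Average probability of betting wrong; x is uniform on {0, 1} and
    P(y=1 | x) is 0.1 for x=0 and 0.9 for x=1 (hypothetical numbers)."""
    p_y1_given_x = {0: 0.1, 1: 0.9}
    err = 0.0
    for x, p in p_y1_given_x.items():
        bet = bet_fn(x)
        err += 0.5 * (p if bet == 0 else 1.0 - p)
    return err

blind_err = min(expected_error(lambda x, b=b: b) for b in (0, 1))  # ignores x
aware_err = expected_error(lambda x: x)                            # tracks x

print(blind_err)  # 0.5: the best x-blind bet is a coin flip
print(aware_err)  # 0.1: tracking x closes the gap to the optimum
```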

1 month ago

3/ In RL, classic results show belief states are sufficient statistics for optimal control, but they don’t show such predictive structure is *necessary*.

1 month ago

2/ Cybernetics argued that “every good regulator is a model” (Good Regulator Theorem). But this has pitfalls: even a constant policy can regulate trivial goals without modeling anything.

1 month ago

1/ As AI agents become increasingly capable, what must *inevitably* emerge inside them?

We prove selection theorems: strong task performance forces world models, belief-like memory, and—under task mixtures—persistent variables resembling core primitives associated with emotion.

1 month ago

PyTorchTNN tutorial (prepared by my students @trinityjchung.com and Yuchen Shen): colab.research.google.com/drive/11QuXu...

Slides from today's talk: anayebi.github.io/files/slides...

1 month ago

Want to learn how to build your own biologically-plausible temporal neural networks (TNNs)?

Check out the PyTorchTNN tutorial, prepared by my students @trinityjchung.com and Yuchen Shen! 👇

colab.research.google.com/drive/11QuXu...

Check out the thread below for a high-level overview 👇
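For a flavor of the core idea as I understand it, here is a minimal NumPy sketch of temporal unrolling (illustrative only; this is *not* the PyTorchTNN API, and `TemporalLayer` is a hypothetical name): each layer holds a state updated once per discrete timestep, so activity propagates one layer per step rather than flowing through the whole stack instantaneously.

```python
import numpy as np

# Minimal sketch of temporal unrolling (NOT the PyTorchTNN API). Each layer
# keeps a state updated once per global timestep from the *previous* state of
# the layer below, so a stimulus takes one timestep per layer to propagate,
# loosely mimicking conduction delays in cortex.

rng = np.random.default_rng(0)

class TemporalLayer:
    def __init__(self, n_in, n_out):
        self.W = 0.1 * rng.standard_normal((n_out, n_in))
        self.state = np.zeros(n_out)

    def step(self, bottom_up):
        self.state = np.tanh(self.W @ bottom_up)

layers = [TemporalLayer(8, 8) for _ in range(3)]
stimulus = np.ones(8)
first_active = [None] * len(layers)

for t in range(4):
    # Snapshot states so every layer reads time-t input, then update in lockstep.
    inputs = [stimulus] + [layer.state.copy() for layer in layers[:-1]]
    for i, (layer, bottom_up) in enumerate(zip(layers, inputs)):
        layer.step(bottom_up)
        if first_active[i] is None and np.abs(layer.state).sum() > 1e-9:
            first_active[i] = t

print(first_active)  # [0, 1, 2]: activity reaches layer i only at timestep i
```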

1 month ago

Colab notebook tutorial: colab.research.google.com/drive/11QuXu...

1 month ago

PyTorchTNN tutorial (prepared by my students @trinityjchung.com and Yuchen Shen): colab.research.google.com/drive/11QuXu...

Slides from today's talk: anayebi.github.io/files/slides...

1 month ago
Neuroscience and Machine Learning Workshop

All details can be found at the link below. Be sure to check out the other talks by @cpehlevan.bsky.social and @engeltatiana.bsky.social! neuroscience.uchicago.edu/neuroscience...

1 month ago

Finally, I'll end by giving a tutorial on our PyTorchTNN library: bsky.app/profile/anay...

1 month ago

Then I'll talk about how similar principles of recurrence emerge in tactile sensing, suggesting shared organization across sensory cortex: bsky.app/profile/trin...

1 month ago

I'll first be talking about our work on recurrence in vision: x.com/aran_nayebi/...

1 month ago

Looking forward to presenting on "How behavior shapes recurrent circuits across sensory systems and species: from vision to touch" at the University of Chicago Neuroscience and ML workshop on Wednesday! Details below 👇🧵

1 month ago

It was breathtaking to see this view from your balcony in real life yesterday! :)

1 month ago