This is an important one. That soma-targeting interneurons can perform background subtraction and gain modulation is well understood, but the role of dendrite-targeting interneurons in normalizing top-down learning signals has not been studied in detail.
Wouldn't have been possible without you! Thanks Dan!!
Here's a lovely #preprint on a new study from our lab led by @royeyono.bsky.social.
tl;dr: it implies that there may be interneurons whose role is to normalize credit assignment signals during learning.
#neuroscience 🧪
🫶 This work was done in collaboration with @dlevenstein.bsky.social @tyrellturing.bsky.social @arnaghosh.bsky.social @repromancer.bsky.social
Check out the full paper to dive deeper into our findings!
Paper: arxiv.org/abs/2603.17676
These findings complement several theories in which SST interneuron subtypes targeting apical dendrites (where error-related signals arrive) are critical for supporting learning.
Potentially, via the normalization of error signals!
e.g., Payeur et al.: nature.com/articles/s41...
This led us to believe that if the brain uses inhibition for normalization, it must also have a dedicated mechanism to normalize learning signals!
tl;dr:
In the end, we found that centering the gradients via lateral inhibition was enough to recapitulate performance on the task!
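For intuition, here is a minimal sketch of what that can look like in a conventional deep-learning setup: a toy MLP (sizes assumed) with a gradient hook standing in for lateral inhibition. This is my own illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class CenteredMLP(nn.Module):
    """Toy MLP whose hidden-layer error signals are mean-centered on the
    backward pass, analogous to subtractive lateral inhibition."""
    def __init__(self, d_in=784, d_hidden=256, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if h.requires_grad:
            # Center the backpropagated error across hidden units.
            h.register_hook(lambda g: g - g.mean(dim=-1, keepdim=True))
        return self.fc2(h)
```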
Upon further inspection, we indeed noticed that, despite high alignment in neural activity, our gradients were poorly aligned with layer norm's.
Why, you may ask? It turns out that layer normalization actually implicitly normalizes back-propagated error signals "under the hood"!
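A quick way to see this for yourself (a minimal PyTorch check of my own, not code from the paper): backpropagate an arbitrary error signal through LayerNorm and inspect the gradient that reaches the inputs. It sums to zero for every sample, i.e. the error has been mean-centered on the way back.

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 16, requires_grad=True)            # a batch of activations
ln = torch.nn.LayerNorm(16, elementwise_affine=False)

y = ln(x)
error = torch.randn_like(y)   # arbitrary upstream "error signal"
y.backward(error)

print(error.sum(dim=-1))      # generally nonzero
print(x.grad.sum(dim=-1))     # ~0 for every sample: the error arrives centered
```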
Surprisingly, despite successfully normalizing feedforward neural activity, this didn't translate into performance: we were unable to recapitulate layer normalization's performance on the task.
With our simple inhibitory loss function on our EI network, we were able to successfully *learn* to layer-normalize neural activity!
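The exact loss is in the paper; as a hedged guess at its flavor (the name and form below are mine, not the authors'), an inhibitory loss could penalize post-inhibition activity for deviating from layer-norm statistics: zero mean and unit variance per sample.

```python
import torch

def inhibitory_loss(post_inhibition: torch.Tensor) -> torch.Tensor:
    # Push each sample's post-inhibition activity toward zero mean and
    # unit variance, i.e. toward what layer normalization would produce.
    mu = post_inhibition.mean(dim=-1)
    var = post_inhibition.var(dim=-1, unbiased=False)
    return (mu ** 2).mean() + ((var - 1.0) ** 2).mean()
```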
Our network was trained on a robust perceptual-invariance task: FashionMNIST with randomized brightness levels (bounded by epsilon) during both training and testing.
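Something like this torchvision sketch should be close to that setup; the additive-shift form and the epsilon value are my assumptions, not pulled from the paper.

```python
import torch
from torchvision import datasets, transforms

EPS = 0.5  # hypothetical brightness bound; the paper's epsilon may differ

randomized_brightness = transforms.Compose([
    transforms.ToTensor(),
    # Shift all pixels by a random offset in [-EPS, EPS], then clamp.
    transforms.Lambda(lambda x: (x + torch.empty(1).uniform_(-EPS, EPS)).clamp(0.0, 1.0)),
])

train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=randomized_brightness)
```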
The brain is believed to recruit inhibitory neurons to perform normalization, but how does it actually help us learn? 🤔
To find out, we built an ANN with separate excitatory and inhibitory neurons, and trained the E weights on the task and the I weights to do layer normalization (sketch below).
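Roughly along these lines: a minimal sketch in which the layer sizes, ReLU nonlinearity, and subtractive inhibition are my assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EILayer(nn.Module):
    """One layer with separate excitatory (E) and inhibitory (I) populations."""
    def __init__(self, d_in, d_e, d_i):
        super().__init__()
        self.w_e  = nn.Linear(d_in, d_e)             # E weights: trained on the task
        self.w_ei = nn.Linear(d_e, d_i, bias=False)  # E -> I projection
        self.w_ie = nn.Linear(d_i, d_e, bias=False)  # I -> E weights: trained to normalize

    def forward(self, x):
        e = F.relu(self.w_e(x))    # excitatory activity
        i = F.relu(self.w_ei(e))   # interneuron activity
        return e - self.w_ie(i)    # subtractive inhibition of the E population

# Separate optimizers keep the two objectives apart: E parameters follow the
# task loss, I parameters follow the normalization (inhibitory) loss.
layer = EILayer(784, 256, 32)
opt_e = torch.optim.Adam(layer.w_e.parameters(), lr=1e-3)
opt_i = torch.optim.Adam(
    list(layer.w_ei.parameters()) + list(layer.w_ie.parameters()), lr=1e-3)
```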
How do neural circuits in the brain implement normalization? 🧠
In our new paper, we show that just normalizing sensory input isn't enough. Crucially, we must also normalize the error signals! 🧵👇
Paper: arxiv.org/abs/2603.17676
Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!
My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵
Apropos of the never-ending discussions about whether ANNs are "good" models of the nervous system, here is a slide I present to masters students showing a network that is found in motor control circuits *across phyla* (that's pretty ubiquitous!). I ask them to guess what it does...
1/ Okay, one thing that has been revealed to me from the replies to this is that many people don't know (or refuse to recognize) the following fact:
The units in ANNs are actually not a terrible approximation of how real neurons work!
A tiny 🧵.
🧠📈 #NeuroAI #MLSky
OK, if we are moving to Bluesky, I am rescuing my favourite-ever Twitter thread (Jan 2019).
Now renamed:
The Bluesky-sized history of neuroscience (biased by my interests)
Delighted to be in Leeds joining the School of Computing! Fantastic first impressions — like a "less offensive London" (youtu.be/watch?v=_6_VVLgrgFI). Stay tuned for a PhD position starting next October. Meanwhile, drop me a message with your CV and research interests—I'd love to hear from you!