
Posts by Roy Eyono

This is an important one. That soma-targeting interneurons can do background subtraction and gain modulation is well understood. But the roles of dendrite-targeting interneurons in normalizing top-down learning signals have not been studied in detail.
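The two somatic operations mentioned here can be sketched with a toy rate model (purely illustrative; the inhibitory drive values `b` and `g` are made-up assumptions, not fitted parameters):

```python
import numpy as np

def soma_inhibition(x, b=1.0, g=2.0):
    """Toy soma-targeting inhibition on input drive x.

    b (subtractive) and g (divisive) are illustrative values, not fitted.
    """
    subtracted = np.maximum(x - b, 0.0)  # background subtraction (shifts threshold)
    return subtracted / g                # gain modulation (rescales response slope)

x = np.linspace(0.0, 5.0, 6)             # input drives 0..5
print(soma_inhibition(x))                # [0, 0, 0.5, 1, 1.5, 2]
```

Subtraction shifts the response threshold (background subtraction), while division rescales the slope of the response (gain modulation).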

1 month ago 26 12 2 0

Wouldn't have been possible without you! Thanks Dan!!

1 month ago 1 1 0 0

Here's a lovely #blueprint on a new study from our lab led by @royeyono.bsky.social.

tl;dr: it implies that there may be interneurons whose role is to normalize credit assignment signals during learning.

#neuroscience 🧪

1 month ago 50 12 2 0
Preview
Inhibitory normalization of error signals improves learning in neural circuits Normalization is a critical operation in neural circuits. In the brain, there is evidence that normalization is implemented via inhibitory interneurons and allows neural populations to adjust to chang...

🫶 This work was done in collaboration with @dlevenstein.bsky.social @tyrellturing.bsky.social @arnaghosh.bsky.social @repromancer.bsky.social

Check out the full paper to dive deeper into our findings!

Paper: arxiv.org/abs/2603.17676

1 month ago 7 0 0 0

These findings complement several theories on how SST-subtypes targeting apical dendrites (where error-related signals arrive) are critical in supporting learning.

Potentially, via the normalization of error signals!

e.g. Payeur et al.: nature.com/articles/s41...

1 month ago 5 0 1 0

This led us to believe that if the brain uses inhibition for normalization, it must also have a dedicated mechanism for normalizing learning signals!

tl;dr:

In the end, we found that centering the gradients via lateral inhibition was enough to recapitulate performance on the task!
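As a rough sketch of what "centering the gradients via lateral inhibition" means in practice (assuming simple mean-subtraction across a layer; the paper's exact rule may differ):

```python
import numpy as np

def center_errors(delta):
    """Center backpropagated errors across the units of a layer.

    Each unit's error is reduced by the layer's mean error, the
    'lateral inhibition' operation described above.
    """
    return delta - delta.mean(axis=-1, keepdims=True)

delta = np.array([[3.0, -1.0, 2.0, 0.0]])  # raw error signals (mean = 1)
centered = center_errors(delta)
print(centered)        # [[ 2. -2.  1. -1.]]
print(centered.sum())  # 0.0: the layer's errors now sum to zero
```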

1 month ago 8 0 1 0

Upon further inspection, we noticed that despite high alignment in neural activity, we had poor gradient alignment with layer norm.

1 month ago 4 0 1 0

Why, you may ask? It turns out that layer normalization implicitly normalizes back-propagated error signals "under the hood"!
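This implicit effect can be checked directly from layer norm's analytic backward pass (the standard derivation for y = (x - mean)/std without learnable affine parameters; variable names here are illustrative):

```python
import numpy as np

def layernorm_backward(x, g, eps=1e-5):
    """Analytic input gradient of layer norm y = (x - mean) / std.

    Standard derivation (no learnable affine parameters): the gradient
    passed back through layer norm is centered and decorrelated from y,
    whatever the upstream error g looks like.
    """
    std = np.sqrt(x.var() + eps)
    y = (x - x.mean()) / std
    return (g - g.mean() - y * (g * y).mean()) / std

x = np.array([1.0, 2.0, 4.0, 7.0])   # pre-normalization activity
g = np.array([0.5, -1.0, 2.0, 3.0])  # arbitrary upstream error signal
dx = layernorm_backward(x, g)
print(np.isclose(dx.sum(), 0.0))     # True: the error comes out centered
```

The centering falls out of the algebra: the `g.mean()` term removes the error's mean, and the `y * (g * y).mean()` term only redistributes a quantity that itself sums to zero (since y is mean-zero).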

1 month ago 3 0 1 0

Surprisingly!

Despite successfully normalizing feedforward neural activity, this was not reflected in performance: we were unable to recapitulate layer normalization's performance on the task.

1 month ago 2 0 1 0

With our simple inhibitory loss function on our EI network, we were able to successfully *learn* to layer normalize neural activity!

1 month ago 5 1 1 0

Our network was trained on a robust perceptual-invariance task: FashionMNIST with brightness levels randomized (bounded by epsilon) during both training and testing.
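A minimal sketch of such an augmentation (the additive form and the epsilon value are assumptions for illustration; the paper's exact scheme may differ):

```python
import numpy as np

def randomize_brightness(x, eps=0.3, rng=None):
    """Shift an image's brightness by a random offset in [-eps, eps].

    The additive form and eps=0.3 are illustrative assumptions; the
    paper's exact augmentation and bound may differ.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    shift = rng.uniform(-eps, eps)   # one brightness level per image
    return np.clip(x + shift, 0.0, 1.0)

img = np.full((28, 28), 0.5)         # stand-in for a FashionMNIST image
out = randomize_brightness(img)
print(out.shape, out.min() == out.max())  # (28, 28) True: uniform shift
```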

1 month ago 4 1 1 0

The brain is believed to recruit inhibitory neurons to perform normalization, but how does that actually help us learn? 🤔

To find out, we built an ANN with separate excitatory and inhibitory neurons, and trained the E weights on the task and the I weights to perform layer normalization.
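One plausible way to phrase "training the I weights to do layer normalization" is an auxiliary loss that penalizes deviations of post-inhibition activity from zero mean and unit variance (a hypothetical sketch; the paper's exact loss may differ):

```python
import numpy as np

def inhibitory_loss(h):
    """Auxiliary loss on post-inhibition activity h (per layer).

    Minimized when activity has zero mean and unit variance, i.e. the
    layer-normalized statistics. A plausible sketch of the 'simple
    inhibitory loss'; the paper's exact form may differ.
    """
    return h.mean(axis=-1) ** 2 + (h.var(axis=-1) - 1.0) ** 2

rng = np.random.default_rng(0)
raw = rng.normal(loc=3.0, scale=5.0, size=(1, 100))  # un-normalized activity
normed = (raw - raw.mean()) / raw.std()              # layer-normalized activity
print(inhibitory_loss(raw) > inhibitory_loss(normed))  # the loss prefers normed
```

Driving this loss to zero with the inhibitory weights leaves the layer's activity with exactly the statistics layer norm would impose, without ever computing layer norm explicitly.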

1 month ago 4 0 1 0

How do neural circuits in the brain implement normalization? 🧠

In our new paper, we show that just normalizing sensory input isn't enough. Crucially, we must also normalize the error signals! 🧵👇

Paper: arxiv.org/abs/2603.17676

1 month ago 68 22 1 2

Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!

My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵

10 months ago 420 48 61 7

Apropos of never ending discussions about whether ANNs are "good" models of the nervous system, here is a slide I present to masters students showing a network that is found in motor control circuits *across phyla* (that's pretty ubiquitous!) I ask them to guess what it does...

1 year ago 166 63 6 12

1/ Okay, one thing that has been revealed to me from the replies to this is that many people don't know (or refuse to recognize) the following fact:

The units in ANNs are actually not a terrible approximation of how real neurons work!

A tiny 🧵.

🧠📈 #NeuroAI #MLSky
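For concreteness, here is the standard ANN unit being compared to a neuron (a generic rate-model point neuron; the numbers below are arbitrary examples):

```python
import numpy as np

def ann_unit(x, w, b):
    """A standard ANN unit: weighted input sum through a nonlinearity.

    Read as a rate-model point neuron: synaptic drive w·x + b passed
    through a rectifying transfer function, roughly a neuron's f-I
    curve near threshold. Numbers used below are arbitrary.
    """
    return np.maximum(w @ x + b, 0.0)

x = np.array([0.2, 0.8, 0.5])   # presynaptic firing rates
w = np.array([1.5, -1.0, 0.5])  # mixed excitatory/inhibitory weights
print(ann_unit(x, w, b=0.1))    # 0.0: subthreshold drive is rectified away
print(ann_unit(x, w, b=0.5))    # 0.25: above threshold, response is graded
```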

1 year ago 152 38 21 17

OK If we are moving to Bluesky I am rescuing my favourite ever twitter thread (Jan 2019).

The renamed:

Bluesky-sized history of neuroscience (biased by my interests)

1 year ago 630 205 14 14

Delighted to be in Leeds joining the School of Computing! Fantastic first impressions — like a "less offensive London" (youtu.be/watch?v=_6_VVLgrgFI). Stay tuned for a PhD position starting next October. Meanwhile, drop me a message with your CV and research interests—I'd love to hear from you!

1 year ago 34 9 0 3

9️⃣

1 year ago 0 0 0 0
Preview
The oneirogen hypothesis: modeling the hallucinatory effects of classical psychedelics in terms of replay-dependent plasticity mechanisms Classical psychedelics induce complex visual hallucinations in humans, generating percepts that are coherent at a low level, but which have surreal, dream-like qualities at a high level. While there a...

1. Hi all: I’m here to advertise our new preprint: www.biorxiv.org/content/10.1..., with Fabrice Normandin, @tyrellturing.bsky.social, and @glajoie.bsky.social!

1 year ago 55 12 3 6
Preview
Afro-Centric Effective Altruism: A Sankaran Essay Afro-futurism: What would Thomas Sankara think about AI, and its implications on the African diaspora.

I have admired Thomas Sankara for as long as I can remember. In this blog, I put myself in his shoes and imagine what he would have thought of this new age of AI.

Thought this would be fitting as my inaugural post on BlueSky!

1 year ago 5 0 0 0