

The pleasure is mine, and I’m sure our paths will cross again. :)

4 months ago

Amazing work and talk, congratulations!
And I very much agree that the “Best part about being a scientist is the people I get to work with.” 🙂

4 months ago
A hardwired neural circuit for temporal difference learning The neurotransmitter dopamine plays a major role in learning by acting as a teaching signal to update the brain's predictions about rewards. A leading theory proposes that this process is analogous to...

Beautiful and clear results showing that the temporal difference error calculation is hardwired in dopamine/striatum microcircuits: www.biorxiv.org/content/10.1...
from @malcolmgcampbell.bsky.social and @naoshigeuchida.bsky.social
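The "teaching signal" idea the preview describes can be sketched with textbook tabular TD(0), where the update term delta = r + gamma*V(s') - V(s) plays the role ascribed to dopamine. The chain task, learning rate, and discount below are illustrative choices, not the paper's model.

```python
import numpy as np

def td0(n_states=5, episodes=500, alpha=0.1, gamma=0.9):
    """Learn state values on a deterministic chain with reward at the end."""
    V = np.zeros(n_states + 1)  # extra terminal state with V = 0
    for _ in range(episodes):
        for s in range(n_states):
            s_next = s + 1
            r = 1.0 if s_next == n_states else 0.0   # reward only at the end
            delta = r + gamma * V[s_next] - V[s]     # TD error (the "dopamine" signal)
            V[s] += alpha * delta
    return V[:n_states]

V = td0()
print(np.round(V, 3))  # values ramp up toward the rewarded end of the chain
```

The learned values converge to gamma^(distance to reward), the ramping profile TD theory predicts.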

7 months ago
Convergent motifs of early olfactory processing are recapitulated by layer-wise efficient coding The architecture of early olfactory processing is a striking example of convergent evolution. Typically, a panel of broadly tuned receptors is selectively expressed in sensory neurons (each neuron exp...

Excited to share new computational work, led by @jzv.bsky.social, driven by Juan Carlos Fernandez del Castillo, with contributions from Farhad Pashakanloo. We recover 3 core motifs in the olfactory systems of evolutionarily distant animals using a biophysically-grounded model + efficient coding ideas!
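A toy version of the efficient-coding idea behind this post: a linear stage that whitens correlated "receptor" responses so that output channels carry non-redundant information. The Gaussian stimulus model and ZCA whitening below are illustrative assumptions, not the paper's biophysically grounded model.

```python
import numpy as np

rng = np.random.default_rng(1)
mix = rng.normal(size=(10, 10))
stimuli = rng.normal(size=(5000, 10)) @ mix.T     # correlated receptor inputs

C = np.cov(stimuli, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # ZCA whitening filter

outputs = stimuli @ W.T
C_out = np.cov(outputs, rowvar=False)
print(np.round(C_out[:3, :3], 2))  # output covariance is the identity
```

Because W is built from the sample covariance of the same inputs, the output covariance is exactly the identity: every output channel is decorrelated and equalized, the linear caricature of an efficient code.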

7 months ago
Frontiers | Summary statistics of learning link changing neural representations to behavior How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical phy...

Since I'm back on BlueSky - with @frostedblakess.bsky.social and @cpehlevan.bsky.social we wrote a brief perspective on how ideas about summary statistics from the statistical physics of learning could potentially help inform neural data analysis... (1/2)
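One concrete summary statistic of the kind the perspective discusses is kernel-target alignment: instead of tracking every unit, track a scalar order parameter of the hidden representation across learning. The two-layer tanh network, toy task, and plain gradient descent below are illustrative choices, not the perspective's specific proposal.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.sign(X[:, 0])                          # simple linearly separable task

W = rng.normal(size=(20, 50)) / np.sqrt(20)   # input-to-hidden weights
a = rng.normal(size=50) / np.sqrt(50)         # hidden-to-output readout

def alignment(H, y):
    """Cosine similarity between the representation Gram matrix and the target kernel."""
    K = H @ H.T
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

losses, history = [], []
for step in range(200):
    H = np.tanh(X @ W)
    err = H @ a - y
    losses.append(float(np.mean(err ** 2)))
    a -= 0.01 * H.T @ err / len(y)                                 # readout gradient step
    W -= 0.01 * X.T @ (np.outer(err, a) * (1 - H ** 2)) / len(y)   # hidden gradient step
    history.append(alignment(np.tanh(X @ W), y))

print(round(history[0], 3), round(history[-1], 3))  # alignment typically grows with learning
```

The point of the summary-statistic view is that a single trajectory like `history` can be compared directly to theory, without matching individual neurons.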

7 months ago
The school will open the thematic period on Data Science and will be dedicated to the mathematical foundations and methods for high-dimensional data analysis. It will provide an in-depth introduction ...

Just got back from a great summer school at Sapienza University sites.google.com/view/math-hi... where I gave a short course on Dynamics and Learning in RNNs. I compiled a (very biased) list of recommended readings on the subject, for anyone interested: aleingrosso.github.io/_pages/2025_...
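The classic starting point for a course on dynamics and learning in RNNs is the random rate network tau * dx/dt = -x + J*phi(x), which transitions to chaotic activity when the coupling gain g exceeds 1. The network size, gain, and Euler time step below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, tau, dt = 200, 1.5, 1.0, 0.05
J = g * rng.normal(size=(N, N)) / np.sqrt(N)     # random coupling, variance g^2 / N

x = rng.normal(size=N)
for _ in range(2000):
    x = x + (dt / tau) * (-x + J @ np.tanh(x))   # Euler step of the rate dynamics

print(round(float(np.std(np.tanh(x))), 2))  # g > 1: activity persists instead of decaying
```

For g < 1 the same loop decays to the zero fixed point; sweeping g is a quick way to see the transition discussed in that literature.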

7 months ago

😊

7 months ago
From Spikes To Rates (YouTube video by Gerstner Lab)

Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.
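The paper's exact, averaging-free mapping is its own contribution; for contrast, here is the standard temporal-averaging route the post refers to: simulate a leaky integrate-and-fire neuron under constant drive and check that its empirical spike count recovers the closed-form LIF rate 1 / (tau * ln((I - v_r) / (I - v_th))). The parameters below are illustrative.

```python
import numpy as np

tau, v_th, v_r, I, dt, T = 0.02, 1.0, 0.0, 1.5, 1e-5, 5.0
v, spikes = 0.0, 0
for _ in range(int(T / dt)):
    v += dt / tau * (-v + I)   # leaky integration of constant input current
    if v >= v_th:              # threshold crossing: emit a spike and reset
        v = v_r
        spikes += 1

empirical = spikes / T                                       # rate by temporal averaging
analytic = 1.0 / (tau * np.log((I - v_r) / (I - v_th)))      # closed-form LIF rate
print(round(empirical, 1), round(analytic, 1))
```

Note that the agreement here requires averaging spikes over a 5-second window, which is exactly the step the paper's construction avoids.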

8 months ago

Out today in @nature.com: we show that individual neurons have diverse tuning to a decision variable computed by the entire population, revealing a unifying geometric principle for the encoding of sensory and dynamic cognitive variables.
www.nature.com/articles/s41...

9 months ago

Our new preprint 👀

10 months ago