
Posts by Alessandro Galloni

We're hiring for a project on #SpikingNeuralNetworks and #neuromorphic computing, to start in October this year, for 36 months. Can hire at pre- or post-PhD level. Email me informally, or apply at the link below. Share with your networks / anyone who would be interested. 🤖🧠🧪

16 hours ago 10 5 0 0

Happy to see the main paper from my postdoc finally out!

3 weeks ago 14 3 0 0

Biology is full of coconuts. 🥥

4 weeks ago 28 4 1 0

Doesn't that imply that the upstream neuron has to 'know' if its message is important in order to increase the rate? But surely importance depends on what info downstream neurons need at a given time, so should be dictated top-down rather than bottom-up

1 month ago 1 0 1 0

Scientists are always doing what's interesting to them, but I think that's the wrong approach. We should be going after all the stuff that bores us. Because it's actually not boring at all once you get into it, and it's precisely the things we think are going to be boring that open our minds.

1 month ago 49 5 2 1
22 years of Brain Science: what CoSyNe tells us about the evolution of Neuroscience
Tracking the intellectual DNA of Computational and Systems Neuroscience through its flagship meeting

I tracked every keyword in 22 years of Cosyne abstracts to map how computational neuroscience evolved (from Bayesian brains to neural manifolds to LLMs) and where it's heading next.
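The counting itself is simple; a minimal sketch of the kind of per-year keyword tally described above (the abstracts and keyword list here are hypothetical stand-ins, not the actual Cosyne data):

```python
from collections import Counter

# Hypothetical stand-ins for (year, abstract text) pairs
abstracts = [
    (2004, "bayesian inference in sensory coding"),
    (2015, "neural manifolds and population dynamics"),
    (2024, "comparing llms to neural population dynamics"),
]

keywords = ["bayesian", "manifolds", "llms"]

def keyword_counts_by_year(abstracts, keywords):
    """Tally keyword occurrences per year across abstracts."""
    counts = {}
    for year, text in abstracts:
        year_counts = counts.setdefault(year, Counter())
        for kw in keywords:
            year_counts[kw] += text.lower().count(kw)
    return counts

counts = keyword_counts_by_year(abstracts, keywords)
```

Plotting these per-year tallies over the full abstract corpus is what produces the trend lines in the post.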

1 month ago 159 70 7 18
Front page of the website for the book: Practical Spiking Neural Networks

The field of #neuromorphics is lacking *accessible*, *intuitive*, and *practical* introductions. Ramashish Gaurav, Petruț Antoniu Bogdan, and I are setting out to fix this with a book on Practical Spiking Neural Networks! ✅

Any and all contributions are welcome! 💕

Early access at: snnbook.net

3 months ago 17 5 1 1

One of the underrated papers this year:
"Small Batch Size Training for Language Models:
When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful" (arxiv.org/abs/2507.07101)

(I can confirm this holds for RLVR, too! I have some experiments to share soon.)
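For intuition on why accumulation is wasteful: averaging the gradients of k micro-batches reproduces the large-batch gradient exactly, so accumulation buys no new information over just taking the small-batch SGD steps directly. A toy NumPy sketch (hypothetical linear-regression loss, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: mean-squared-error loss, gradient wrt weights w
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)

def grad(Xb, yb, w):
    """MSE gradient for a (micro-)batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# One large batch of 8...
g_large = grad(X, y, w)

# ...equals the average of gradients accumulated over 4 micro-batches of 2
g_accum = np.mean([grad(X[i:i + 2], y[i:i + 2], w) for i in range(0, 8, 2)], axis=0)

assert np.allclose(g_large, g_accum)
```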

3 months ago 70 9 0 1

Very neat approach to mapping biological circuits to neuromorphic HW!
Had the pleasure of providing some feedback to Suraj Honnuraiah on an earlier draft of the paper, great to see it finally in print! (I actually missed that it already came out in Oct)

Link to paper:
www.pnas.org/doi/10.1073/...

4 months ago 1 0 0 0
A diagram showing 128 neural network architectures.

How does the structure of a neural circuit shape its function?

@neuralreckoning.bsky.social & I explore this in our new preprint:

doi.org/10.1101/2025...

🤖🧠🧪

🧵1/9

8 months ago 110 40 6 7

With Masashi launching his new lab, we'll be recruiting a new postdoc in the Oldenburg Lab.
Work: high-precision multiphoton holography, neural coding, motor cortex circuits, all-optical physiology.
If you’re interested, just reach out.

4 months ago 3 4 0 0

1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social

4 months ago 142 42 3 4
An old record player set by the side of the road, with a note reading: "does not work but could be fun to fix!"

When you join a lab to do some voltage imaging

4 months ago 75 8 4 1

I'm still able to log in fine; I can try submitting an error/bug report on your behalf

4 months ago 0 0 0 0

Maybe it's 'just' a question of figuring out how few neurons you can nudge (and which ones) to get max desynch. Anyway, it definitely sounds like a cool modelling problem! 😅
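That nudge-for-desynchrony idea can be prototyped in a few lines; a toy sketch (Kuramoto-style order parameter, made-up numbers) of kicking a subset of oscillators and measuring the drop in synchrony:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
phases = rng.normal(0, 0.1, size=N)  # near-synchronous starting phases

def order_parameter(phases):
    """Kuramoto synchrony measure |mean(e^{i*phase})|: 1 = fully synchronous."""
    return np.abs(np.mean(np.exp(1j * phases)))

r_before = order_parameter(phases)

# Nudge k oscillators with an antiphase kick and see how much synchrony drops
k = 10
kicked = phases.copy()
kicked[:k] += np.pi

r_after = order_parameter(kicked)
assert r_after < r_before
```

Sweeping k (and which oscillators get kicked) would give the "how few neurons, and which ones" curve.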

4 months ago 1 0 0 0

Interesting, how would one actually do this? (either the isochron or the Lyapunov) E.g. optogenetics seems more likely to induce synchrony

4 months ago 1 0 1 0

whereas more global LFP (~0.1-0.5 mV amplitude) would have a much smaller direct effect (but maybe still meaningful). Much harder to demonstrate such causal effects in mammals, since manipulating single neurons generally doesn't affect behavior like it does in Drosophila

4 months ago 1 0 0 0

Yes, thanks for sharing @stevenflorek.bsky.social! This is a pretty convincing example of ephaptic coupling having a causal role (Drosophila ppl always have the coolest results!). My guess is that the key variable here is distance, e.g. many neurons might have such an effect on their immediate neighbors

4 months ago 3 0 1 0

E.g. you could argue that hippocampal theta-sweeps would be completely messed up if you jitter the spikes just a little, but the underlying place field would be essentially the same

4 months ago 1 0 0 0

Yes, it's always a question of temporal resolution. Saying "spike time matters" implies that jittering spikes by ~1-2 ms would meaningfully change the computation. "Rate code" implies that computation would only be meaningfully affected by shifting the spike times by a much larger amount (e.g. >10 ms)
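A toy sketch of the distinction (hypothetical spike train, illustrative readouts): a spike-count rate readout is blind to small jitter, while a ~1 ms coincidence readout is not:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spike times (ms) within a 100 ms window
spikes = np.array([12.0, 35.0, 36.5, 60.0, 88.0])

def jitter(spikes, sigma_ms, rng):
    """Add Gaussian timing noise to each spike."""
    return spikes + rng.normal(0, sigma_ms, size=spikes.shape)

def rate(spikes, window_ms=100.0):
    """Rate-code readout: spike count over the window (spikes/s)."""
    return 1000.0 * len(spikes) / window_ms

def coincidences(spikes, ref, tol_ms=1.0):
    """Timing readout: spikes landing within tol_ms of a reference time."""
    return sum(np.any(np.abs(ref - s) < tol_ms) for s in spikes)

jittered = jitter(spikes, sigma_ms=2.0, rng=rng)

# The rate readout is untouched by ~2 ms jitter...
assert rate(jittered) == rate(spikes)
# ...while the 1 ms coincidence readout generally is not.
```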

4 months ago 1 0 1 0

Since this is clearly one of your favorite topics, would you be able to point to 1-2 papers / key results that you find to be the most compelling piece(s) of evidence?

4 months ago 1 0 1 0

Lots of insignificant weak things can be measured, showing that something matters causally is extremely difficult. The claim that spike timing matters broadly is only moderately controversial. The claim that ephaptic coupling is causal in some larger circuit computation is much more controversial

4 months ago 1 0 1 0

💯, I'm confused about what people are actually trying to claim here. Oscillations are important in the sense that *spike timing* matters. I think there is a good amount of data backing that up (e.g. HPC theta). This has nothing to do with any direct effect of the (extremely weak) LFP electric field

4 months ago 4 0 1 0

supportive mentor I could have wished for, and who gave me the time and space to learn and grow, and the intellectual freedom to explore my ideas (and the occasional rabbit hole). Will miss all the wonderful colleagues I am leaving behind. Now on to new adventures, see you all back in Europe!

5 months ago 2 0 1 0

Feel incredibly fortunate to have worked alongside @neurosutras.bsky.social over the last few years and proud of all that we achieved. I'm very grateful to all the people who supported my many fellowship applications over the years.

Especially grateful to Aaron, who has been the most

5 months ago 2 0 1 0

Last week was my last at Rutgers University. After nearly 5 years in the US, I am moving to the Netherlands to join Innatera, a neuromorphic computing startup pushing the boundaries of what we can do with ultra-efficient hardware running SNNs for edge computing.

5 months ago 18 0 1 0

There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.

5 months ago 52 11 7 2

Very cool! Maybe it's just my bad intuition, but I find it surprising that weights can tolerate more extreme quantisation than delays

5 months ago 0 0 1 0
Exploiting heterogeneous delays for efficient computation in low-bit neural networks
Neural networks rely on learning synaptic weights. However, this overlooks other neural parameters that can also be learned and may be utilized by the brain. One such parameter is the delay: the brain...

Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434

5 months ago 43 16 3 3

With my great advisors and colleagues, @achterbrain.bsky.social @zhe @danakarca.bsky.social @neural-reckoning.org, we show that if heterogeneous axonal delays, even imprecise ones, can capture the essential temporal structure of a task, spiking networks do not need precise synaptic weights to perform well.
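For readers curious what low-bit weights and delays mean concretely, a minimal uniform-quantiser sketch (the bit-widths and parameter ranges here are illustrative, not the paper's exact configuration):

```python
import numpy as np

def quantize_uniform(x, n_bits, x_min, x_max):
    """Uniform quantiser: snap x onto 2**n_bits evenly spaced levels in [x_min, x_max]."""
    levels = 2 ** n_bits - 1
    x = np.clip(x, x_min, x_max)
    q = np.round((x - x_min) / (x_max - x_min) * levels)
    return x_min + q / levels * (x_max - x_min)

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, size=100)    # hypothetical synaptic weights
delays_ms = rng.uniform(0, 25, size=100)  # hypothetical axonal delays (ms)

# e.g. 2-bit weights, 4-bit delays
w_q = quantize_uniform(weights, n_bits=2, x_min=-1.0, x_max=1.0)
d_q = quantize_uniform(delays_ms, n_bits=4, x_min=0.0, x_max=25.0)

assert len(np.unique(w_q)) <= 4    # at most 2**2 weight values
assert len(np.unique(d_q)) <= 16   # at most 2**4 delay values
```

Parameter memory then scales with bits per parameter rather than float32, which is where numbers like 22 kb for the whole network come from.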

5 months ago 22 10 2 0