
Posts by Tom George

ten years of continuous dog walking seems oddly achievable compared to the others

2 weeks ago

A little overdue, but happy to announce that I
1) Got my PhD 🎓
2) Started a postdoc and CNS fellowship with Blake Richards and Guillaume Lajoie (@glajoie.bsky.social @tyrellturing.bsky.social)
Come find me at @Mila_Quebec in Montreal, working on new things in neuroAI!

5 months ago

woah, are my retinas working, or is that one character away from #RatInABox 👀...

@dlevenstein.bsky.social is right, could be time for a collab

6 months ago

Deadline extended until 31st January!!!

1 year ago
Tom M George, PhD, University College London · Cited by 124 · Machine learning · Theoretical neuroscience

Hey Antonio, great list! Please could you add me too? I consider myself firmly in this space 😁
scholar.google.com/citations?hl...

thanks!

1 year ago
Neural heterogeneity promotes robust learning - Nature Communications The authors show that heterogeneity in spiking neural networks improves accuracy and robustness of prediction for complex information processing tasks, results in optimal parameter distribution simila...

and there's this nice work by @neuralreckoning.bsky.social ! www.nature.com/articles/s41...

1 year ago

Very nice work by @sjshipley.bsky.social on place cells and Alzheimer's!

1 year ago

That's great to hear, reach out if you run into any problems!

1 year ago

Great question. Local optima will always be hard to identify. Of course, if you have reason to believe behaviour really _isn't_ a good initialisation then you shouldn't use it.

You can always (and we already do) track the log-likelihood of held-out spikes. If this increases then things are looking good.
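For anyone who wants to sanity-check this idea, here's a minimal numpy sketch of the held-out check: score held-out spike counts under a Poisson model and compare log-likelihoods. The `poisson_log_likelihood` helper and all the numbers are made up for illustration, not taken from the paper.

```python
import numpy as np

def poisson_log_likelihood(spike_counts, rates, dt=1.0):
    """Log-likelihood of binned spike counts under an inhomogeneous
    Poisson model with per-bin firing rates (Hz) and bin width dt (s)."""
    lam = rates * dt
    # drop the constant log(n!) term: it doesn't depend on the model
    return np.sum(spike_counts * np.log(lam + 1e-12) - lam)

# Toy check: rates that match the data score higher than mismatched rates.
rng = np.random.default_rng(0)
true_rates = rng.uniform(1.0, 5.0, size=1000)
held_out = rng.poisson(true_rates)
ll_good = poisson_log_likelihood(held_out, true_rates)
ll_bad = poisson_log_likelihood(held_out, np.full(1000, 3.0))
```

If a refit model moves `ll_good`-style scores upward on spikes it never saw, that's evidence the optimisation isn't just fitting noise.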

1 year ago

the sky is bluer*

1 year ago

you were right though....the grass is greener over here ;)

1 year ago
SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour An efficient technique for optimising tuning curves starting from behaviour by iteratively refitting the tuning curves and redecoding the latent variables.

At the risk of rambling I'll end the thread here and perhaps do a deeper dive in the future. Give it a read (or better, try it on your data) and let us know your thoughts!

tomge.org/papers/simpl/

21/21

1 year ago

This isn’t cheating, behaviour has always been there for the taking and we should exploit it (many techniques specialise in joint behavioural-neural analysis). If we ignore behaviour SIMPL still works but the latent space isn’t smooth and “identifiable”...certainly something to consider.

20/21

1 year ago

Initialising at behaviour is a powerful trick here. In many regions (e.g., but not limited to, hippocampus 👀), a behavioural correlate (position 👀) exists which is VERY CLOSE to the true latent. Starting right next to the global maximum makes optimisation straightforward.

1 year ago

These non-local dynamics aren’t a new discovery by any means but this is, in our opinion, the correct and quickest way to find them.

18/21

1 year ago

And there’s cool stuff in the optimised latent too. It mostly tracks behaviour (hippocampus is still mostly a cognitive map) but makes occasional big jumps, as though the animal is contemplating another location in the environment.

17/21

1 year ago

Dubious analogy: Using behaviour alone to study neural representations (status quo for hippocampus) is like wearing mittens and trying to figure out the shape of a delicate statue in the dark. Everything is blurred.

16/21

1 year ago

The old paradigm of “just smooth spikes against position” is wrong! Those aren’t tuning curves in a causal sense…they’re just smoothed spikes. These “real” tuning curves (the output of an algorithm like SIMPL) are the ones we should be analysing/theorising about.
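For concreteness, the "smooth spikes against position" estimator amounts to something like this: bin spikes by position and normalise by occupancy time. This is a hypothetical numpy sketch (not the paper's code), with every parameter invented for illustration.

```python
import numpy as np

def naive_tuning_curve(spikes, position, n_bins=20, extent=(0.0, 1.0), dt=0.02):
    """The status-quo estimate: histogram spikes by the animal's position
    and divide by occupancy time. These are 'smoothed spikes', not
    tuning curves in any causal sense."""
    bins = np.linspace(extent[0], extent[1], n_bins + 1)
    spike_count, _ = np.histogram(position, bins=bins, weights=spikes)
    occupancy, _ = np.histogram(position, bins=bins)
    return spike_count / (occupancy * dt + 1e-12)

# Toy data: one simulated place field at x = 0.5 on a 1D track.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 1, 5000)
true_rate = 10 * np.exp(-((pos - 0.5) ** 2) / 0.02)
spk = rng.poisson(true_rate * 0.02).astype(float)
curve = naive_tuning_curve(spk, pos)
```

When position equals the true latent (as here), this estimator recovers the field; the thread's point is that in real data position is only an approximation of the latent, so the estimate is systematically blurred.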

15/21

1 year ago

It’s quite a sizeable effect. The median place cell has 23% more place fields...the median place field is 34% smaller and has a firing rate 45% higher. It’s hard to overstate this result…

14/21

1 year ago

When applied to a similarly large (but now real) hippocampal dataset SIMPL optimises the tuning curves. “Real” place fields, it turns out, are much smaller, sharper, more numerous and more uniformly-distributed than previously thought.

13/21

1 year ago

SIMPL outperforms CEBRA — a contemporary, more general-purpose, neural-net-based technique — in terms of performance and compute-time. It’s over 30x faster. It also beats pi-VAE and GPLVM.

12/21

1 year ago

Let’s test SIMPL: We make artificial grid cell data and add noise to the position (latent) variable. This noise blurs the grid fields out of recognition. Apply SIMPL and you recover a perfect estimate of the true trajectory and grid fields in a handful of compute-seconds.
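The blurring effect itself is easy to reproduce. Here's a toy numpy sketch (a 1D place field rather than 2D grid cells, and all parameters made up): rate maps computed against a noise-corrupted latent come out lower and wider than the same maps computed against the true latent.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.02
true_pos = rng.uniform(0, 1, 20000)                    # the true latent
rate = 20 * np.exp(-((true_pos - 0.5) ** 2) / 0.005)   # a sharp place field
spikes = rng.poisson(rate * dt)
# 'behaviour' = the true latent corrupted by noise
noisy_pos = np.clip(true_pos + rng.normal(0, 0.1, true_pos.size), 0, 1)

def rate_map(pos):
    bins = np.linspace(0, 1, 26)
    s, _ = np.histogram(pos, bins, weights=spikes)
    occ, _ = np.histogram(pos, bins)
    return s / (occ * dt + 1e-12)

sharp = rate_map(true_pos)     # field vs the true latent: tall and narrow
blurred = rate_map(noisy_pos)  # field vs noisy 'behaviour': low and wide
```

Recovering `sharp` from `spikes` and `noisy_pos` alone is exactly the deblurring problem an algorithm like SIMPL is built for.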

11/21

1 year ago

I think this gif explains it well. The animal is "thinking" of the green location but located at the yellow. Spikes plotted against green give sharp grid fields but against yellow are blurred.

In the brain this discrepancy will be caused by replay, planning, uncertainty and more.

1 year ago

behaviour ≠ latent.

This is obvious in non-navigational regions. But for HPC/MEC/etc. it's often overlooked…behaviour alone explains the spikes SO well (read: grid cells look pretty) that it's common to just stop there. But that leaves some error.

9/21

1 year ago

In order to know the “true” tuning curves we need to know the “true” latent which passed through those curves to generate the spikes, i.e. what the animal was thinking of, not what the animal was doing. This latent, of course, is often close to a behavioural readout such as position.
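As a cartoon of that generative picture (everything here is invented for illustration, including the curve itself): spikes are Poisson draws from a tuning curve evaluated at the true latent, while behaviour is only a noisy readout of that latent.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.02

def tuning_curve(x):
    """A made-up 1D tuning curve: a 15 Hz place field centred at x = 0.3."""
    return 15 * np.exp(-((x - 0.3) ** 2) / 0.01)

# The generative picture: the latent passes through the tuning curve
# to produce spikes; behaviour is a noisy correlate of the latent.
latent = rng.uniform(0, 1, 10000)                      # what the animal 'thinks'
behaviour = latent + rng.normal(0, 0.05, latent.size)  # what we measure
spikes = rng.poisson(tuning_curve(latent) * dt)        # what the neuron emits
```

Plotting `spikes` against `behaviour` instead of `latent` is what produces the blurred, "not-really-causal" tuning curves the thread is criticising.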

1 year ago

So what’s the idea inspiring this? Basically, tuning curves (defined as plotting spikes against behaviour) aren’t the brain’s “real” tuning curves in any causal sense. But often we analyse and theorise about them as though they are. That's a problem.

7/21

1 year ago

SIMPL is also "identifiable", returning not just any tuning curves but specifically THE tuning curves which generated the data (there are some caveats / subtleties here)

6/21

1 year ago

SIMPL is very cheap to run (30x faster than CEBRA).
Why? Because we designed it to be:
• Fitting: we use spike smoothing (strictly speaking, kernel density estimation)
• Decoding: we use Kalman-smoothed MLE.
No neural nets or Gaussian processes to slow it down. Optimised in JAX. It has only 2 hyperparameters.

1 year ago

This works because it’s really just expectation maximisation (EM) — a well-known algorithm for optimising models with hidden latents — with some of the faff removed. We show how, under relatively weak assumptions, repeated redecoding (SIMPL) is the same thing.

4/21

1 year ago

TL;DR
1. Start by assuming the latent IS behaviour (e.g. the animal's position) …
2. ...fit tuning curves to this latent…
3. …re-decode the latent from these tuning curves…
4. …repeat steps 2 and 3
You will, very quickly, converge on the “true” latent space.
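The four steps above can be sketched end-to-end as a self-contained toy in numpy. This is NOT the paper's implementation: it simulates a population of 1D place cells, decodes with a per-time-bin Poisson MLE instead of a Kalman smoother, and every parameter (cell count, rates, noise levels, bandwidth) is made up.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_t, n_cells = 0.05, 4000, 30
grid = np.linspace(0, 1, 40)

# Ground truth: place cells tiling a 1D track; the true latent is position.
centres = np.linspace(0, 1, n_cells)
def rates(x):
    """(T,) positions -> (T, n_cells) firing rates in Hz."""
    return 30 * np.exp(-((x[:, None] - centres[None, :]) ** 2) / 0.005)

true_latent = rng.uniform(0, 1, n_t)
spikes = rng.poisson(rates(true_latent) * dt)                  # (T, n_cells)
behaviour = np.clip(true_latent + rng.normal(0, 0.08, n_t), 0, 1)

def fit_curves(latent):
    """Step 2: KDE-style tuning-curve fit on a grid of latent values."""
    w = np.exp(-0.5 * ((grid[:, None] - latent[None, :]) / 0.04) ** 2)
    return (w @ spikes) / (w.sum(1)[:, None] * dt + 1e-9)     # (G, n_cells)

def decode(curves):
    """Step 3: per-time-bin Poisson MLE decode (no Kalman smoothing)."""
    lam = curves * dt
    log_lik = spikes @ np.log(lam + 1e-9).T - lam.sum(1)       # (T, G)
    return grid[np.argmax(log_lik, axis=1)]

latent = behaviour                  # step 1: initialise at behaviour
for _ in range(3):                  # step 4: repeat
    curves = fit_curves(latent)     # step 2
    latent = decode(curves)         # step 3

err_before = np.mean(np.abs(behaviour - true_latent))
err_after = np.mean(np.abs(latent - true_latent))
```

On this toy data the decoded latent ends up closer to the true latent than the noisy behaviour it was initialised from, which is the convergence claimed in the thread.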

1 year ago