
Posts by Jonathan Cornford

Highly recommend working with Dan!

4 days ago 3 0 0 0

🚨 I’m excited to say that my CIHR Project Grant was funded! My NHP lab is now full-speed-ahead, and I’m hiring experimentalists (postdoc, PhD student, and/or a tech/manager). We’ll do multi-region ephys during reaching/grasping in macaques, with behavioral and spinal perturbations.

8 months ago 84 25 2 3

Trying to train RNNs in a biologically plausible (local) way? Try our new method using predictive alignment. Paper just out in Nature Communications. Toshitake Asabuki deserves all the credit!
www.nature.com/articles/s41...

8 months ago 57 16 1 0

I’ve just had a grant costed at 1.7x salary at Leeds. And I, clearly mistakenly, thought that must be pushing the limits...

9 months ago 1 0 0 0

Fabulous day at UK Neural Computation 2025!

Thanks to today’s invited speakers Jonathan Cornford, Jenny Bizley, Petr Znamenskiy and Flavia Mancini

Congratulations to ECR speakers Ian Hawes and Andrea Colins Rodriguez, selected from 80+ outstanding abstract submissions

Roll on Day 3!

#UKNC25

9 months ago 9 3 0 0
Log-Normal Multiplicative Dynamics for Stable Low-Precision Training of Large Networks Studies in neuroscience have shown that biological synapses follow a log-normal distribution whose transitioning can be explained by noisy multiplicative dynamics. Biological networks can function sta...

Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs.

This group got it working!

arxiv.org/abs/2506.17768

May be a great way to reduce AI energy use!!!

#MLSky 🧪
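For readers unfamiliar with the method being referenced: a minimal sketch of the exponentiated gradient (EG) update, in which each weight is updated multiplicatively (w ← w · exp(−η∇L)) instead of additively, so weights keep their sign and magnitudes evolve multiplicatively, consistent with log-normal weight statistics. The toy regression task, learning rate, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy problem: recover positive weights w_true by linear regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.abs(rng.normal(size=5)) + 0.1   # positive target weights
y = X @ w_true

w = np.ones(5)        # positive initialisation; sign is fixed thereafter
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
    w *= np.exp(-lr * grad)                 # EG: multiplicative update

final_mse = np.mean((X @ w - y) ** 2)
```

Because the update is multiplicative, it is equivalent to gradient descent on the log-weights, which is one reason it pairs naturally with log-normal weight distributions and (as the linked paper explores) with low-precision representations.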

9 months ago 39 13 3 0

💯 Great to see!

9 months ago 2 0 0 0

Really like this work!

9 months ago 9 0 1 0

Very much agree with you that they have these troubles, and that they are important to focus on. But then the disagreement becomes about the strength of evidence (not no evidence), which is largely an emotional call. Hence the need for a peer-review jury! :)

9 months ago 0 0 0 0

So, being provocative, I’d ask whether any of the qualitatively different approaches you mention can learn non-trivial things (which has to be important for a model of the mind). I agree with you re explaining (see above). But the point is that our best model of intelligence also very naturally shares this redundancy.

9 months ago 1 0 1 0

The neuropsych observations are interesting and valid, but they aren’t inconsistent with PDP models. They could (imo do) just point to learning dynamics and resource pressures shaping the circuitry. Combined with PDP they generate hypotheses that can be tested. And this is true of other modelling approaches.

9 months ago 1 0 1 0

I’m a bit confused reading this. Redundancy, e.g. to cell loss, is not a smoking gun, but it certainly is evidence *supporting* PDP models as good models of the brain. If the reverse were true, we would very likely discard ANNs as models, as redundancy is such a basic issue in experimental neuro.

9 months ago 1 0 2 0

Closing soon! Register by July 1st for UK Neural Computation 2025

neuralcomputation.uk

ECR day: 9 July - careers, grants, starting a lab
Main meeting: 10-11 July - 13 speakers, 70+ posters, sandpit
Hosted by @imperialcollegeldn.bsky.social

Sponsors @aria-research.bsky.social @crick.ac.uk

9 months ago 1 2 0 0

Excited for the UK Neural Computation Conference 2025 @ Imperial, 9th - 11th July! 🚀

World-leading scientists working in brain computation - from experimental to modelling, mathematics & ML (+ all combinations thereof).

Registration closes 1st July! Plz share:
neuralcomputation.uk

10 months ago 12 6 1 1

1/6 Why does the brain maintain such precise excitatory-inhibitory balance?
Our new preprint explores a provocative idea: Small, targeted deviations from this balance may serve a purpose: to encode local error signals for learning.
www.biorxiv.org/content/10.1...
led by @jrbch.bsky.social
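As background on how precise E-I balance can arise in the first place, here is a sketch of the standard rate-based inhibitory plasticity rule often used to establish detailed balance (note: this is textbook background, not the preprint’s proposed error-coding mechanism, and all rates and constants are illustrative assumptions):

```python
import numpy as np

# Inhibitory synapses onto one neuron adjust until excitatory and inhibitory
# drive cancel down to a small target rate rho -- "detailed" E-I balance.
rng = np.random.default_rng(0)
r_exc = rng.uniform(0.5, 1.5, size=20)   # fixed excitatory input rates
w_exc = rng.uniform(0.5, 1.5, size=20)   # fixed excitatory weights
r_inh = rng.uniform(0.5, 1.5, size=20)   # inhibitory input rates
w_inh = np.zeros(20)                     # plastic inhibitory weights
rho = 0.1                                # target postsynaptic rate
eta = 0.01                               # learning rate

for _ in range(2000):
    post = max(w_exc @ r_exc - w_inh @ r_inh, 0.0)  # rectified net drive
    w_inh += eta * r_inh * (post - rho)             # inhibitory plasticity
    w_inh = np.clip(w_inh, 0.0, None)               # weights stay non-negative

balance_error = abs((w_exc @ r_exc - w_inh @ r_inh) - rho)
```

Under a rule like this the residual imbalance shrinks toward the target; the preprint’s twist, as I read the post, is that small deviations from this balanced state are not mere noise but can carry a local error signal for learning.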

10 months ago 181 57 5 3

New #NeuroAI #compneurosky preprint! To better understand how target-directed learning works in the brain, we sought to engineer an artificial neural network, built only from experimentally supported biological building blocks, that can solve complex image classification tasks. (1/15)

10 months ago 57 19 1 2

Can I register without booking a room? I’m based in Leeds!

11 months ago 0 0 1 0

Discussions around AI ethics and sustainability tend to happen in different circles, with different people and from different perspectives... but what if we had these important conversations together? 🤝

1 year ago 23 7 1 0

Really enjoyed TAing for this tutorial, had great discussions with several attendees. Do check out `torch_brain` and the other packages here:
github.com/neuro-galaxy

1 year ago 19 3 0 0

Are you training self-supervised/foundation models, and worried about whether they are learning good representations? We’ve got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning

1 year ago 27 9 3 2

Thanks!

1 year ago 1 0 0 0

Come by Poster 068 to learn about why comp neuro studies should use exponentiated gradient descent!

1 year ago 22 2 1 0

Yes exactly! Not sure how it would play out in implementation. But I can see a role in providing an enriched learning objective for a mechanistic model, since the foundation model has extracted and compressed what is meaningful in the neural data into a representation.

1 year ago 2 0 0 0

Do you see a use case in distilling the neural-structure understanding of foundation models into a mechanistic model?

1 year ago 3 0 1 0

Congratulations Guillaume!

1 year ago 1 0 1 0

Can you point me to one or two? I don’t see anywhere that you say why language isn’t one.

My take is that language is compressing information about the world. Eg take a newspaper article. How is that not an encoding of reality?

1 year ago 0 0 0 0

You don’t view language as an encoding of the world? Obviously there are differences from sense inputs, but that seems pretty obvious to me. I presume this has been hashed out and argued about before, though? What do you disagree with?

1 year ago 0 0 1 0

Am I missing something? If language is an encoding of the world, just like pixel values are, and sensory input in general is, why can’t an llm in principle do all of the above just in an RL-like setting? And then we’re just debating how much of the cake the RL cherry is?

1 year ago 1 0 1 0

The Clopath Lab has a fully funded PhD position open for October in Computational Neuroscience at Imperial College London. If you are interested, just send us an informal email!

1 year ago 20 19 2 1

📢 We have a new #NeuroAI postdoctoral position in the lab!

If you have a strong background in #NeuroAI or computational neuroscience, I’d love to hear from you.

(Repost please)

🧠📈🤖

1 year ago 59 40 2 3