Posts by Jonathan Cornford
🚨 I’m excited to say that my CIHR Project Grant was funded! My NHP lab is now full-speed-ahead, and I’m hiring experimentalists (postdoc, PhD student, and/or a tech/manager). We’ll do multi-region ephys during reaching/grasping in macaques, with behavioral and spinal perturbations.
Trying to train RNNs in a biologically plausible (local) way? Well, try our new method using predictive alignment. Paper just out in Nature Communications. Toshitake Asabuki deserves all the credit!
www.nature.com/articles/s41...
I’ve just had a grant costed at 1.7x salary at Leeds. And I, clearly mistakenly, thought that must be pushing the limits...
Fabulous day at UK Neural Computation 2025!
Thanks to today’s invited speakers Jonathan Cornford, Jenny Bizley, Petr Znamenskiy and Flavia Mancini
Congratulations to ECR speakers Ian Hawes and Andrea Colins Rodriguez, selected from 80+ outstanding abstract submissions
Roll on Day 3!
#UKNC25
Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs.
This group got it working!
arxiv.org/abs/2506.17768
May be a great way to reduce AI energy use!!!
#MLSky 🧪
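For readers unfamiliar with the method: exponentiated gradient replaces the usual additive weight update with a multiplicative one, so weight magnitudes are scaled rather than shifted and weights never change sign. A minimal sketch of one such update step (my own illustration, not the paper's implementation):

```python
import numpy as np

def exponentiated_gradient_step(w, grad, lr=0.1):
    """One exponentiated-gradient update.

    Each weight's magnitude is scaled by exp(-lr * grad * sign(w)),
    so updates are multiplicative and weights keep their sign -
    a property often appealing in comp neuro (e.g. Dale's law).
    """
    return w * np.exp(-lr * grad * np.sign(w))

# Example: a positive weight shrinks, a negative weight grows in
# magnitude, when both have positive gradients.
w = np.array([1.0, -1.0])
grad = np.array([0.5, 0.5])
w_new = exponentiated_gradient_step(w, grad, lr=0.1)
```

Because the update acts in the log-domain of the weight magnitudes, it pairs naturally with low-precision number formats, which is presumably the connection the linked paper exploits.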
💯 Great to see!
Really like this work!
Very much agree with you that they have these troubles, and that they are important to focus on. But I think the disagreement then becomes about the strength of evidence (not no evidence), which is largely an emotional call. Hence the need for a peer review jury! :)
So, being provocative, I’d ask whether any of the qualitatively different approaches you mention can learn non-trivial things (which has to be important for a model of the mind). I agree with you re explaining (see above). But the point is that our best model of intelligence also very naturally shares redundancy.
The neuropsych observations are interesting and valid, but aren’t inconsistent with PDP models. They could (imo do) just point to learning dynamics & resource pressures shaping the circuitry. Combined with PDP, they generate hypotheses that can be tested. And this is true of other modelling approaches.
I’m a bit confused reading this. Redundancy, e.g. to cell loss, is not a smoking gun, but it certainly is evidence *supporting* PDP models as good models of the brain. If the reverse were true, we would very likely discard ANNs as models, as redundancy is such a basic issue in experimental neuro.
Closing soon! Register by July 1st for UK Neural Computation 2025
neuralcomputation.uk
ECR day: 9 July - careers, grants, starting a lab
Main meeting: 10-11 July - 13 speakers, 70+ posters, sandpit
Hosted by @imperialcollegeldn.bsky.social
Sponsors @aria-research.bsky.social @crick.ac.uk
Excited for the UK Neural Computation Conference 2025 @ Imperial, 9th - 11th July! 🚀
World-leading scientists working on brain computation - from experimental to modelling, mathematics & ML (+ all combinations thereof).
Registration closes 1st July! Plz share:
neuralcomputation.uk
1/6 Why does the brain maintain such precise excitatory-inhibitory balance?
Our new preprint explores a provocative idea: Small, targeted deviations from this balance may serve a purpose: to encode local error signals for learning.
www.biorxiv.org/content/10.1...
led by @jrbch.bsky.social
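The core idea can be caricatured in a few lines. A hypothetical toy sketch (my own illustration, not the preprint's actual model): treat the residual imbalance between a neuron's excitatory and inhibitory currents as a local error signal driving a weight update.

```python
import numpy as np

def imbalance_error_step(w_exc, x, inhibition, lr=0.01):
    """Toy sketch: E-I imbalance as a local learning signal.

    w_exc: (n_neurons, n_inputs) excitatory weights (illustrative names).
    If excitation and inhibition are precisely balanced the error is
    zero and nothing changes; a targeted deviation from balance drives
    a local, Hebbian-style weight correction.
    """
    excitation = w_exc @ x                  # excitatory drive per neuron
    error = excitation - inhibition         # deviation from E-I balance
    w_new = w_exc - lr * np.outer(error, x) # purely local update
    return w_new, error

# Balanced case: inhibition exactly cancels excitation -> no learning.
w = np.ones((2, 3))
x = np.ones(3)
w_new, err = imbalance_error_step(w, x, inhibition=np.array([3.0, 3.0]))
```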
New #NeuroAI #compneurosky preprint! To better understand how target-directed learning works in the brain, we sought to engineer an artificial neural network capable of solving complex image classification tasks that comprises only experimentally supported biological building blocks. (1/15)
Can I register without booking a room? I’m based in Leeds!
Discussions around AI ethics and sustainability tend to happen in different circles, with different people and from different perspectives... but what if we had these important conversations together? 🤝
Really enjoyed TAing for this tutorial, had great discussions with several attendees. Do check out `torch_brain` and the other packages here:
github.com/neuro-galaxy
Are you training self-supervised/foundation models and worried about whether they are learning good representations? We’ve got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
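To give a flavour of what a representation-quality metric looks like (a generic example in plain NumPy, not Reptrix's actual API): effective rank measures how many dimensions a feature matrix really uses, via the entropy of its normalised singular values. A collapsed representation scores near 1; a well-spread one scores near the full dimensionality.

```python
import numpy as np

def effective_rank(features):
    """Effective rank of a (n_samples, dim) feature matrix.

    Computes exp(entropy) of the normalised singular value
    distribution: ~1 for a collapsed representation, ~dim for
    a representation that uses all directions equally.
    """
    s = np.linalg.svd(features, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]  # drop numerical-zero singular values
    return np.exp(-(p * np.log(p)).sum())

# A fully spread representation uses all 4 directions equally...
full = effective_rank(np.eye(4))      # ~4
# ...while a rank-1 (collapsed) one uses a single direction.
collapsed = effective_rank(np.ones((4, 4)))  # ~1
```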
Thanks!
Come by Poster 068 to learn about why comp neuro studies should use exponentiated gradient descent!
Yes exactly! Not sure how it would play out in implementation. But I can see a role in providing an enriched learning objective for a mechanistic model, as the foundation model has extracted and compressed what is meaningful in the neural data into a representation.
Do you see a use case in distilling the neural-structure understanding of foundation models into a mechanistic model?
Congratulations Guillaume!
Can you point me to one or two? I don’t see where you say why language isn’t one.
My take is that language is compressing information about the world. Eg take a newspaper article. How is that not an encoding of reality?
You don’t view language as an encoding of the world? Obviously there are differences to sense inputs, but that seems pretty obvious to me. I presume this has been hashed out and argued about before, though? What do you disagree with?
Am I missing something? If language is an encoding of the world, just like pixel values are, and sensory input in general is, why can’t an LLM in principle do all of the above, just in an RL-like setting? And then we’re just debating how much of the cake the RL cherry is?
The Clopath Lab has a fully funded PhD position open for Oct in Computational Neuroscience at Imperial College London. If you are interested, just send us an informal e-mail!