Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
Posts by Zejin Lu
If you are interested in development and development-inspired NeuroAI, and are coming to CCN this year,
come join the workshop with us!
📅 Monday, Aug 11
🕒 3:00 – 6:00 pm
📍 Room A2.11
Register here: sites.google.com/view/child2m...
(You can also come by my poster to chat!)
Not just one, but two fantastic chances to discuss how infant development can inform machine learning and vice versa at CCN 2025 in Amsterdam!!! Satellite workshop sites.google.com/view/child2m...
and Generative Adversarial Collaboration sites.google.com/ccneuro.org/...
🚨 Finally out in Nature Machine Intelligence!!
"Visual representations in the human brain are aligned with large language models"
🔗 www.nature.com/articles/s42...
Hi Lukas, very interesting work! Is it possible to report the shape bias the way Geirhos does? He reports the average shape bias across categories (see his plotting code here: github.com/bethgelab/mo...).
It would be even better if we could also know each model's average shape bias across seeds :)!
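For reference, the Geirhos-style shape bias is computed on cue-conflict images (shape from one category, texture from another): the fraction of cue-consistent decisions that follow shape, averaged across categories. A minimal sketch, assuming a trial is a (shape category, texture category, prediction) triple; all names and the data layout here are illustrative, not the bethgelab toolbox's actual API:

```python
# Hedged sketch of the Geirhos cue-conflict shape-bias metric.
# Each trial records the image's shape category, its texture category,
# and the model's predicted category (names are illustrative).
from collections import defaultdict

def shape_bias_per_category(trials):
    """trials: iterable of (shape_cat, texture_cat, predicted_cat)."""
    shape_hits = defaultdict(int)  # decisions that followed the shape cue
    cue_hits = defaultdict(int)    # decisions that followed either cue
    for shape_cat, texture_cat, pred in trials:
        if pred == shape_cat:
            shape_hits[shape_cat] += 1
            cue_hits[shape_cat] += 1
        elif pred == texture_cat:
            cue_hits[shape_cat] += 1
        # predictions matching neither cue are excluded from the ratio
    return {c: shape_hits[c] / cue_hits[c] for c in cue_hits}

def average_shape_bias(trials):
    """Average the per-category shape bias, as in Geirhos' plots."""
    per_cat = shape_bias_per_category(trials)
    return sum(per_cat.values()) / len(per_cat)
```

Averaging this quantity over training seeds would then give the per-model number asked about above.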
🚨 Preprint alert! Excited to share my second PhD project: “Adopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168
In conclusion, All-TNNs are an exciting new class of networks for modelling primate vision, addressing questions beyond the scope of CNNs and their topographic derivatives. 12/12
Next, we will use All-TNNs to explore which factors allow smooth maps to emerge from model training without the need for a secondary smoothness loss. Possible avenues include wiring-length optimisation, energy constraints, local inhibition, and top-down connectivity patterns. 11/12
Can TNNs expand to self-supervised objectives? Yes, to a degree. We show that training All-TNNs with SimCLR yields smooth topography and category-independent spatial biases. However, SimCLR training fails to reproduce the structure of human-like category-specific spatial biases. 10/12
We show that these behavioural accuracy maps are structured and exhibit category-specific effects. Importantly, All-TNNs better reproduce these spatial structures of human visual biases than CNNs and other control models. 9/12
To study the impact of topography on behaviour, we conducted a human psychophysical experiment to quantify object recognition performance across spatial locations. This provided us with category-specific spatial accuracy maps for humans. 8/12
Similarly, All-TNNs allocate energy expenditure to task-relevant input regions, at an order of magnitude lower “metabolic” cost than CNNs! And the smoother the topography, the more energy-efficient the network, even though energy efficiency was never explicitly optimised for. 7/12
Interestingly, All-TNNs exhibit a form of foveation, and allocate more processing resources to spatial regions rich in task-relevant information. 6/12
Upon training, topographical features reminiscent of the ventral stream emerge in All-TNNs, including smooth orientation selectivity maps in the first layer, and category-based selectivity clusters for tools, scenes, and faces in the last layer. 6/12
Overall network architecture of All-TNNs
All-TNNs overcome this limitation. In All-TNNs: 1) each unit has its own local RF, 2) units in each layer are arranged on a 2D “cortical sheet” without weight sharing, and 3) feature selectivity varies smoothly across space, by encouraging similar selectivity in neighboring units. 5/12
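The three ingredients above can be sketched in plain NumPy: a locally connected layer (one kernel per sheet unit, no weight sharing) plus a penalty pulling neighbouring units' weights together. This is a minimal illustration, not the paper's implementation; the sheet size, kernel size, and squared-difference form of the smoothness penalty are assumptions:

```python
# Hedged sketch of the All-TNN ingredients described in the tweet above.
import numpy as np

rng = np.random.default_rng(0)

H, W, K = 8, 8, 3                            # 8x8 cortical sheet, 3x3 RFs
sheet_w = rng.standard_normal((H, W, K, K))  # a separate kernel per unit

def locally_connected(image, weights):
    """Each output unit applies its own kernel to its own local patch,
    i.e. a convolution-like sweep with NO weight sharing."""
    H, W, K, _ = weights.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(image[i:i + K, j:j + K] * weights[i, j])
    return out

def smoothness_loss(weights):
    """Penalise weight differences between horizontally and vertically
    adjacent units, encouraging smooth selectivity across the sheet."""
    dh = weights[1:, :] - weights[:-1, :]
    dw = weights[:, 1:] - weights[:, :-1]
    return (dh ** 2).mean() + (dw ** 2).mean()

image = rng.standard_normal((H + K - 1, W + K - 1))
activations = locally_connected(image, sheet_w)
loss = smoothness_loss(sheet_w)
```

Adding `smoothness_loss` to the task loss during training is what would encourage neighbouring units toward similar selectivity in this sketch.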
Yet, their reliance on weight sharing, i.e., detecting identical features across visual space, renders them unable to model central aspects of biological vision, such as the origin of topography and its relation to behaviour. 4/12
Background: CNNs are commonly used to model primate vision, and have been successful at predicting neural activity and at accounting for complex visual behaviour. 3/12
With Adrien Doerig (@adriendoerig.bsky.social), Victoria Bosch (@initself.bsky.social), Daniel Kaiser (@dkaiserlab.bsky.social), Radoslaw Martin Cichy, and Tim C. Kietzmann (@timkietzmann.bsky.social). 2/12
In this work, we introduce All-Topographic Neural Networks (All-TNNs)—ANNs that drop weight sharing and learn on a smooth “cortical sheet,” capturing both human-like neural topography and visual biases in behaviour. 2/12
Now out in Nature Human Behaviour @nathumbehav.nature.com: “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Read it here: www.nature.com/articles/s41...