The Cowley Group at CSHL has an opening for a bioAI PhD student, starting Fall 2026, to work on closed-loop AI models of visual processing (see below). You *must* have a Master's degree in a quant/eng/cs field.
www.cshl.edu/phd-program/...
Please reach out to me if interested!
I'll be at Cosyne.
Posts by Bryan M. Li
The model also works with datasets containing a few hundred neurons from different animals and laboratories. There is more good stuff in the appendix of the paper and the code repository!
Paper: www.biorxiv.org/content/10.1...
Code and model weights: github.com/bryanlimy/Vi...
7/7
We sincerely thank Turishcheva & Fahey et al. (2023) for organising the Sensorium challenge(s!) and for making their high-quality, large-scale mouse V1 recordings publicly available, which made this work possible!
6/7
We compared our model against SOTA models from the Sensorium 2023 challenge and showed that ViV1T is the most accurate while also being more computationally efficient than the competing models. We also evaluated the data efficiency of the model by varying the number of training samples and neurons.
5/7
Moving beyond gratings, we used ViV1T to generate centre-surround most exciting videos (MEVs) via the Inception Loop (Walker et al. 2019). Our in vivo experiments confirmed that MEVs elicit stronger contextual modulation than gratings, natural images and videos, and most exciting images (MEIs).
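The Inception Loop idea — optimise the stimulus itself by gradient ascent on a trained response model, then test the optimised stimulus in vivo — can be sketched in a few lines. This is a toy numpy stand-in (a linear filter + softplus neuron), not the authors' ViV1T pipeline; all names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained response model: linear filter + softplus.
# In the real Inception Loop this would be the trained network model.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))           # the model neuron's preferred pattern

def response(stim):
    drive = np.sum(w * stim)
    return np.log1p(np.exp(drive))      # softplus "firing rate"

def grad_response(stim):
    drive = np.sum(w * stim)
    return w / (1.0 + np.exp(-drive))   # d(softplus)/d(stim)

# Gradient ascent on the stimulus, with a norm constraint so the
# optimised stimulus stays in a physically plausible range.
stim = rng.normal(size=(16, 16)) * 0.01
for _ in range(200):
    stim += 0.1 * grad_response(stim)
    stim *= min(1.0, 10.0 / np.linalg.norm(stim))

baseline = response(rng.normal(size=(16, 16)) * 0.1)
print(response(stim) > baseline)  # optimised stimulus drives the model harder
```

For videos (MEVs) the same loop runs over a spatiotemporal stimulus tensor; the only change is the shape of `stim` and the model it is fed through.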
4/7
ViV1T also revealed novel functional features: we found that contextual responses to surround stimuli in V1 neurons are both movement- and contrast-dependent. We validated this in vivo!
3/7
ViV1T, trained only on natural movies, captured well-known direction tuning and contextual modulation in V1. Despite having no built-in mechanism for modelling neuronal connectivity, the model predicted feedback-dependent contextual modulation (including the feedback onset delay!) (Keller et al. 2020).
2/7
We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.
With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.
Paper and code at the end of the thread!
🧵1/7
Two flagship papers from the International Brain Laboratory, now out in @Nature.com:
🧠 Brain-wide map of neural activity during complex behaviour: doi.org/10.1038/s41586-025-09235-0
🧠 Brain-wide representations of prior information in mouse decision-making: doi.org/10.1038/s41586-025-09226-1 +
Excited to share our new pre-print on bioRxiv, in which we reveal that feedback-driven motor corrections are encoded in small, previously missed neural signals.
I suspect that behaviour seems unimportant because normalised correlation averages over repeats, minimising the effect of trial-to-trial variability. Single-trial correlation should show a bigger difference; we observed something similar in: openreview.net/pdf?id=qHZs2... (Table 1 vs Table A.7)
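The averaging effect is easy to demonstrate with synthetic data: a model that captures only the stimulus-driven signal correlates much better with the trial-averaged response than with any single trial, because averaging over repeats suppresses the trial-to-trial (e.g. behaviour-driven) component. A minimal numpy sketch, with made-up noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_time = 20, 500

signal = rng.normal(size=n_time)                   # stimulus-driven component
# each trial = signal + strong trial-to-trial variability
trials = signal + 2.0 * rng.normal(size=(n_trials, n_time))

# model prediction that captures the signal but none of the variability
pred = signal + 0.3 * rng.normal(size=n_time)

# correlation with the trial-averaged response hides the variability
avg_corr = np.corrcoef(pred, trials.mean(axis=0))[0, 1]

# single-trial correlations expose the unexplained component
single_corrs = [np.corrcoef(pred, t)[0, 1] for t in trials]
print(avg_corr, np.mean(single_corrs))  # avg_corr is noticeably higher
```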
I'd be happy to read it.