We're happy to release NeuralSet: a simple, fast, scalable package for Neuro-AI
Supports:
🧠 fMRI, EEG, MEG, iEEG, spikes… preprocessing
💬 text 🔊 audio ▶️ video 🏞️ image… embeddings
📦 pip install neuralset
🔍 facebookresearch.github.io/neuroai/neur...
📄 kingjr.github.io/files/neural...
🧵 Details👇
Posts by Thomas Serre
Postdocs and advanced grad students in theoretical/computational neuroscience - come join our workshop in July!
Candidates should have strong research potential and interests consistent with current APMA faculty, plus a clear vision for collaboration with brain science and/or CS. Up to 3 years of prior postdoc experience allowed.
ccbs.carney.brown.edu
Full ad and application: www.mathjobs.org/jobs/list/28...
Postdoc opening in Applied Mathematics at Brown! Bridging APMA + brain science or CS. Two-year appointment starting July 2026 — review begins April 1!
Great opportunity to collaborate with @carneyinstitute.bsky.social faculty at the Nancy G. Zimmerman Center for Computational Brain Science.
I'm excited to share that this paper was accepted at ICLR 2026! We show that language models encode one of the most basic ingredients of a world model: the ability to distinguish plausible from implausible states. Check out the paper for more details!
See you in Rio!
Paper: arxiv.org/abs/2507.12553
Published @cp-trendscognsci.bsky.social with @drewlinsley.bsky.social & @tonyfeng.bsky.social: As vision models scale to human/superhuman accuracy, they’re becoming worse models of primate vision—benchmark engineering isn’t neuroscience. @carneyinstitute.bsky.social @browncopsy.bsky.social
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision and one in language. Please repost!
This is such a bizarre illusion - the helmet creates an expectation of a face inside, and all I saw was a blurred-out face until I realized what's really happening...
That image is from 1961 and an idealization. Here is an actual trajectory of fixational eye movements. The dots are 2 ms apart. If a midget ganglion cell, with a single-cone receptive field, fires at 100 Hz, then every spike reports from a different cone. How can we ever read anything?
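A back-of-envelope version of this argument, using assumed ballpark numbers (foveal cone spacing ≈0.5 arcmin, fixational drift speed ≈50 arcmin/s; neither figure is from the post, only the 100 Hz firing rate is):

```python
# Rough check: how far does the receptive field drift between spikes,
# measured in cone spacings? Both anatomical numbers are assumptions.

cone_spacing_arcmin = 0.5      # center-to-center foveal cone spacing (assumed)
drift_speed_arcmin_per_s = 50  # typical fixational drift speed (assumed)
firing_rate_hz = 100           # from the post

interspike_interval_s = 1 / firing_rate_hz                          # 10 ms
drift_per_spike = drift_speed_arcmin_per_s * interspike_interval_s  # 0.5 arcmin

# Roughly one cone spacing per spike: successive spikes sample different cones.
cones_per_spike = drift_per_spike / cone_spacing_arcmin
print(cones_per_spike)  # 1.0
```

Under these assumptions the eye moves about one cone spacing per interspike interval, which is the sense in which each spike "reports from a different cone."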
An array of 9 purple discs on a blue background. Figure from Hinnerk Schulz-Hildebrandt.
A nice shift in perceived colour between central and peripheral vision. The fixated disc looks purple while the others look blue.
The effect presumably comes from the absence of S-cones in the fovea.
From Hinnerk Schulz-Hildebrandt:
arxiv.org/pdf/2509.115...
The Python Software Foundation won a $1.5m grant from the US government National Science Foundation.
Turned it down because accepting would have required affirming that they "will not... operate any programs that advance or promote DEI"
simonwillison.net/2025/Oct/27/...
Apply to become a CSHL-Simons Fellow in Neuroscience!
Run your own lab, pursue bold ideas, join a highly collaborative community!
All areas welcome: experimental or computational neuro, including NeuroAI & systems
PhD required; ≤~1 yr postdoc
www.cshl.edu/about-us/car...
Last call for applications! Join us in advancing AI and the science of mind at Brown. Apply by Nov 8, 2025 👉 apply.interfolio.com/173939
Personal take: Current XAI tools can't yet discover novel mechanisms—they test hypotheses more than reveal the unexpected.
We need better methods NOW, before digital twins become so convincing we stop asking how they work.
📚 Full ref: arxiv.org/abs/2509.17280
📄 doi.org/10.1016/j.neuron.2025.09.039
Moving beyond prediction means:
- Grounding models in neuroscience/cognitive science theory
- Revealing computations through interpretability/XAI studies
- Generating testable hypotheses to drive experiments
Challenge: turning data-fitting machines into theory-bearing instruments.
Yet debate continues: Do high-performing models capture genuine mechanisms or just exploit statistical regularities?
Even with perfect predictions, we risk replacing one black box (the brain) with another (a deep neural network).
Explanatory value requires more than fit.
Based on successes across AI and science, optimism is growing that scaled models will uncover true generative processes.
If a model predicts like a brain, has it discovered how the brain works?
Tempting to think so.
🧠 Thrilled to share our NeuroView with Ellie Pavlick!
"From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?"
AI foundation models are coming to neuroscience—if scaling laws hold, predictive power will be unprecedented.
But is that enough?
Thread 🧵👇
I wrote an op-ed for the Washington Square News about the government's attempt to extort universities. nyunews.com/opinion/gues...
Join a top interdisciplinary program exploring the intersection of artificial & natural intelligence. Strong ties with the @carneyinstitute.bsky.social, the Center for Computational Brain Science (CCBS), and the new NSF-funded AI Institute (ARIA).
Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 apply.interfolio.com/173939
#AI #CognitiveScience #AcademicJobs #BrownUniversity
With Pieter Roelfsema, our TICS response to Scholte & de Haan (2025): in deep nets, distributed codes ≠ solved binding. Flexible vision needs dynamic grouping + attention (or object-centric slots) to link features to objects/relations. www.sciencedirect.com/science/arti...
Front-page-style graphic titled “BREAKING NEWS” with photos of RFK Jr. and Dr. Bhattacharya in front of a government hearing chamber. Text reads: “NIH Scientists Sound the Alarm as Health Research Faces Historic Threat” and “NIH Employees Send Trump Cronies Scathing Wake-Up Call.”
🚨BREAKING: 300+ NIH employees call out the harm of censorship & politicized science in scathing email to Bhattacharya, demanding an end to political interference, a lift on funding freezes, & rehiring of fired staff whose work saves lives.
This is historic - insiders are blowing the whistle.
🧵(1/5)
From the NIH news desk: "NIH to prioritize human-based research technologies; new initiative aims to reduce use of animals in NIH-funded research"
www.nih.gov/news-events/...
📢 Our takeaway: To truly model biological vision, vision science must diverge from conventional AI approaches and develop deep learning methods tailored to the intricacies of biological visual systems.
🧠 This divergence suggests that DNNs may adopt visual strategies differing from those used by primates, as highlighted in our previous work on harmonization.
🔍 Key finding: As DNNs achieve human or superhuman accuracy, their alignment with primate vision plateaus—and in some cases, deteriorates.