We've posted a new group-based lexical-semantic brain viewer! You can now inspect cortical conceptual maps at the group level (24 participants), vertex-by-vertex. Check it out!
gallantlab.org/viewer-stori...
Posts by Simon Faghel-Soubeyrand
Emergence of Successor Representations and Experimental Design.
Top: Example of how sequence learning and sleep might change neural representations. Upon encountering a Welsh Corgi, the brain primarily represents the current stimulus entity. If the Corgi is part of a recurring temporal sequence (Corgi → Girl → House), subsequent stimuli (Girl and House) might be integrated into the Corgi representation. Post-learning sleep might provide an opportunity for the brain to replay learned experiences and thereby further strengthen successor representations. Upon post-sleep exposure to a Corgi image (right), brain activation patterns might reflect both the current stimulus (Corgi) as well as learned successors (Girl, House). Faded images indicate weaker representations.
Middle: Timeline of the experiment. Participants first completed a perceptual task, followed by a sequence learning task (Memory Arena). Memory for the learned sequence was then assessed both before and after a period of sleep. Finally, participants completed the perceptual task again.
Bottom left: Memory Arena sequence design. Participants (N = 26) were tasked with learning the spatiotemporal structure of 50 images. These images belonged to five distinct categories (letter strings, scenes, objects, faces, and body parts) and were organized into 10 subsequences of five images each, following one of two fixed category orders: (i) letter string, scene, object, face, or (ii) object, scene, letter string, face, with body part images randomly inserted to obscure the primary category sequences. The two subsequence types were counterbalanced across participants.
Bottom right: Memory Arena location design. The Arena was spatially organized into five principal ‘slices’, with each slice corresponding to one of the five main image categories.
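The successor-representation idea in the figure caption can be sketched with a toy temporal-difference update. This is an illustrative sketch only, not the study's model; the learning rate, discount factor, and number of replay passes are arbitrary assumptions.

```python
import numpy as np

# Toy successor-representation (SR) learning over the example sequence
# Corgi -> Girl -> House. Illustrative values; not the authors' code.
states = ["corgi", "girl", "house"]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

M = np.zeros((n, n))     # SR matrix: M[s, s'] ~ expected discounted future occupancy of s'
alpha, gamma = 0.1, 0.9  # learning rate and discount factor (assumed values)

seq = ["corgi", "girl", "house"]
# Replay the sequence many times, as might happen during post-learning sleep.
for _ in range(500):
    for t, s in enumerate(seq):
        i = idx[s]
        onehot = np.eye(n)[i]
        if t + 1 < len(seq):
            j = idx[seq[t + 1]]
            # TD update: current state counts fully, successors are discounted.
            M[i] += alpha * (onehot + gamma * M[j] - M[i])
        else:
            # Terminal state of the subsequence: no successors to add.
            M[i] += alpha * (onehot - M[i])

# After learning, the Corgi row carries weight on Girl and House as well --
# the "faded" successor representations in the figure.
print(np.round(M[idx["corgi"]], 2))
```

With these values the Corgi row converges toward [1, 0.9, 0.81]: the current stimulus at full strength, successors at progressively discounted strength.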
How do experiences reshape our internal representations of the world? @bstaresina.bsky.social & co. show that learning sequential experiences reshapes how the #brain represents what we see; a post-learning nap strengthens these predictive changes @plosbiology.org 🧪 plos.io/4dJGwMC
🔔PREPRINT: Sleep ripples drive single-neuron reactivation for human memory consolidation
1/9: How does sleep support human memory consolidation? To test this, we recorded hundreds of neurons in the human medial temporal lobe (MTL) across learning, wakefulness, and sleep.
doi.org/10.64898/202...
Clarifying the conceptual dimensions of representation in neuroscience — a Perspective by Stephan Pohl, Edgar Y. Walker, David L. Barack, Jennifer Lee, Rachel N. Denison, Ned Block, Florent Meyniel & Wei Ji Ma
www.nature.com/articles/s41...
NSD-synthetic, the out-of-distribution companion dataset of NSD consisting of 7T fMRI responses to 284 artificial images, is now published.
#NeuroAI #CompNeuro #neuroscience #AI
doi.org/10.1038/s414...
How do memories guide behaviour?
Multiple memory representations, from detailed to gist-like, let us flexibly reconstruct or reproduce past experiences and behave adaptively, a capacity shared across species.
Now out in Physiological Reviews with Morris Moscovitch, Melanie Sekeres & @brianlevine.bsky.social!
1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals 2-month-old infants already possess complex visual representations in VVC that align with DNNs.
Last week to apply! Cognitive Neuroscience Research Laboratory Manager at @oxexppsy.bsky.social (with links to @oxcin.bsky.social and @ox.ac.uk)
www.jobs.ac.uk/job/DPZ833/c...
For those into sleep, memory, single units, and neural dynamics, this one is for you!
New preprint from the fantastic @fabian31415.bsky.social in collaboration with @humansingleneuron.bsky.social exploring how precisely timed sleep rhythms shape memory at the level of single neurons in humans.
How does the brain replay memories during sleep?
Excited to share our new preprint, the outcome of an extensive effort led by Johannes Niediek, showing that reactivation of human concept neurons reflects memory content rather than event sequence.
What if we could tell you how well you’ll remember your next visit to your local coffee shop? ☕️
In our new Nature Human Behaviour paper, we show that the 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝗼𝗳 𝗮 𝘀𝗽𝗮𝘁𝗶𝗮𝗹 𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 can be measured with neuroimaging – and 𝘁𝗵𝗮𝘁 𝘀𝗰𝗼𝗿𝗲 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝘀 𝗵𝗼𝘄 𝘄𝗲𝗹𝗹 𝗻𝗲𝘄 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲𝘀 𝘄𝗶𝗹𝗹 𝘀𝘁𝗶𝗰𝗸.
Need more fMRI data (beyond the amazing NSD)? Introducing MOSAIC! Incredible effort led expertly by Ben Lahner, with help from grad student Mayukh Deb. Work in collaboration with the amazing Aude Oliva! @neurosky.bsky.social. More below..
Thrilled that my recent paper, Hippocampal Ripples during Offline Periods Predict Human Motor Sequence Learning, was selected for the “This Week in The Journal” highlight! 🤩
Huge thanks to @bstaresina.bsky.social and our collaborators who made this work possible!
doi.org/10.1523/JNEU...
#JNeurosci
Check out our new paper! We evaluate what we know (and don't know) about the link between memory consolidation during sleep and next-day learning 👇
What aspects of human knowledge do vision models like CLIP fail to capture, and how can we improve them? We suggest models miss key global organization; aligning them makes them more robust. Check out Lukas Muttenthaler's work, finally out (in Nature!?) www.nature.com/articles/s41... + our blog! 1/3
Super excited to share a new preprint!
We asked a simple-but-big question:
What changes in the brain when someone becomes an expert?
Using chess ♟️ + fMRI 🧠 + representational geometry & dimensionality 📈, we ask:
1️⃣ WHAT information is encoded?
2️⃣ HOW is it structured?
3️⃣ WHERE is it expressed?
1/n
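The two generic tools named in the thread, representational geometry and dimensionality, can be illustrated on simulated data. A minimal sketch, assuming a correlation-distance RDM and a participation-ratio dimensionality estimate; the data shapes are toy values, not the study's design.

```python
import numpy as np

# Simulated activity patterns: 20 conditions (e.g., chess positions) x 500 voxels.
# Toy data only -- not the study's stimuli or analysis pipeline.
rng = np.random.default_rng(1)
patterns = rng.normal(size=(20, 500))

# Representational dissimilarity matrix (RDM):
# 1 - Pearson correlation between every pair of condition patterns.
rdm = 1 - np.corrcoef(patterns)

# Participation ratio, a common dimensionality estimate:
# (sum of eigenvalues)^2 / sum of squared eigenvalues of the condition covariance.
cov = np.cov(patterns)
ev = np.linalg.eigvalsh(cov)
pr = ev.sum() ** 2 / (ev ** 2).sum()

print(rdm.shape, round(pr, 1))
```

For unstructured random patterns the participation ratio sits near its maximum (20 here); structured representations, e.g. expertise-related compression, would push it lower.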
Our "mind captioning" paper is now published in Science Advances @science.org .
The method generates descriptive text of what we perceive and recall from brain activity — a linguistic interpretation of nonverbal mental content rather than language decoding.
doi.org/10.1126/scia...
My Lab @unlv.edu is recruiting motivated students interested in human memory and brain research! Learn #EEG, #fMRI, and data analysis while exploring how we remember 🧠
📧 DM me or check out #PhD program www.unlv.edu/degree/phd-n... & www.unlv.edu/psychology/g...
Plus, Vegas is a fun place to live!🤟
I wrote a thing on episodic memory and systems consolidation. I hope you all enjoy it and/or find it interesting.
A neural state space for episodic memories
www.sciencedirect.com/science/arti...
#neuroskyence #psychscisky #cognition 🧪
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
[Image: an array of nine purple discs on a blue background. Figure from Hinnerk Schulz-Hildebrandt.]
A nice shift in perceived colour between central and peripheral vision. The fixated disc looks purple while the others look blue.
The effect presumably comes from the absence of S-cones in the fovea.
From Hinnerk Schulz-Hildebrandt:
arxiv.org/pdf/2509.115...
🚨Preprint: Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe
1/8: How do human neurons encode meaning?
In this work, led by Katharina Karkowski, we recorded hundreds of human MTL neurons to study semantic coding in the human brain:
doi.org/10.1101/2025...
How well do classifiers trained on visual activity actually transfer to non-visual reactivation?
#Decoding studies often train a classifier in one (visual) condition and apply it to another (e.g., rest reactivation). But how well does this actually work? Show us what makes it work and win up to $1000!
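The train-on-perception, test-on-reactivation setup can be sketched with simulated data. This is an illustrative sketch only, not the challenge pipeline; the assumption is that reactivation reuses the perceptual patterns at reduced amplitude, and all sizes and noise levels are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate cross-condition decoding: train on clean "visual" patterns,
# test on weaker "reactivation" patterns of the same two classes.
rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 100, 200, 200

# Two class templates; reactivation reuses them at reduced amplitude.
templates = rng.normal(size=(2, n_voxels))

def simulate(n, amplitude, noise):
    y = rng.integers(0, 2, size=n)
    X = amplitude * templates[y] + noise * rng.normal(size=(n, n_voxels))
    return X, y

X_vis, y_vis = simulate(n_train, amplitude=1.0, noise=1.0)  # perception
X_rea, y_rea = simulate(n_test, amplitude=0.3, noise=1.0)   # reactivation

clf = LogisticRegression(max_iter=1000).fit(X_vis, y_vis)
acc_within = clf.score(X_vis, y_vis)   # training accuracy, for reference
acc_transfer = clf.score(X_rea, y_rea)
print(f"within-condition accuracy: {acc_within:.2f}")
print(f"transfer accuracy:         {acc_transfer:.2f}")
```

Under this toy generative model transfer works because the signal direction is shared across conditions; the interesting empirical question is precisely when that assumption holds in real data.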
@dotproduct.bsky.social's first first-author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate at alpha frequency, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social. 🧠🟦 🧠🤖
What do we talk about when we talk about "readout"?
I argued that our overly specialized, modular approach to studying the brain has given us a simplistic view of readout.
🧠📈
Why do we remember emotional events so vividly? Our new paper @nathumbehav.nature.com suggests that emotional arousal enhances memory by strengthening integration across large-scale brain networks! Led by the amazing @jadynpark.bsky.social & @ycleong.bsky.social! doi.org/10.1038/s415...
A memory can be represented at different levels of granularity, from highly specific to generalized.
Different representational formats of a memory can be used at different times or in different contexts, and draw on different neural representations.
doi.org/10.31234/osf...
Our article is out in Annual Review of Vision Science: “Visual Image Reconstruction from Brain Activity via Latent Representation”
We trace the path from early brain decoding to modern NeuroAI, highlight progress & pitfalls, and discuss future directions www.annualreviews.org/content/jour...
New preprint! My stellar undergrad, June Kim, & @charan-neuro.bsky.social find that intersubject pattern similarity at encoding (especially in posteromedial cortex) relates to shared/differing content between subjects at recall (measured using topic modeling) www.biorxiv.org/content/10.1...
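The core measure, intersubject pattern similarity at encoding, can be sketched in a few lines. A toy illustration under a simple assumption (a shared event-driven signal plus subject-specific noise), not the preprint's pipeline.

```python
import numpy as np

# Toy intersubject pattern similarity: correlate two subjects' voxel
# patterns for the same encoding event. Simulated data only.
rng = np.random.default_rng(2)
shared = rng.normal(size=300)                  # event-driven signal common to both
subj_a = shared + 0.5 * rng.normal(size=300)   # subject-specific noise
subj_b = shared + 0.5 * rng.normal(size=300)

isps = np.corrcoef(subj_a, subj_b)[0, 1]
# Higher similarity at encoding would be expected to track more shared recall content.
print(round(isps, 2))
```

With these noise levels the expected correlation is 1 / 1.25 = 0.8; dialing the subject-specific noise up or down moves the similarity accordingly.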