How do the brain’s event representations change as we gain familiarity with an experience?
Brain regions’ representations can become coarser or finer as events become familiar. Slow-timescale structure predicts memory.
Excited to share this work w/ Narjes Al-Zahli & @chrisbaldassano.bsky.social!
Posts by Kelsey Han
Attention fluctuates over time and across contexts—how is this reflected in the brain?🧠Fitting a dynamical systems model to fMRI data, we show that the geometry of neural dynamics along the attractor landscape reflects changes in attention. Out in @natcomms.nature.com
www.nature.com/articles/s41...
Cosyne invited me to give a long tutorial (4 hours!) on methods for quantifying differences between high-dimensional neural recordings across animals, brain regions, deep neural nets, etc.
The recording is up on YouTube. I hope it inspires more research on this fundamental topic!
www.youtube.com/watch?v=n44x...
Pleased to share that our paper "Representation Biases: Variance is Not Always a Good Proxy for Importance" is now out as a Theory/New Concepts paper in eNeuro!
www.eneuro.org/content/13/3... 1/
Thanks Erica!
So yes—high-dimensional neural codes do shape behavior. Not only do stimulus representations scale unboundedly, but individual differences also span the full dimensional capacity of cortical codes. We're only beginning to understand the rich structure that makes each brain unique.
The upshot: your subjective experience isn't encoded in a low-dimensional subspace of cortical activity. It emerges from the full high-dimensional geometry of cortical population responses—most of which we've been missing with conventional approaches.
We also found that neural dimensionality is related to the concreteness of each subject’s recollection. Subjects who focus on concrete details, as opposed to abstract aspects of the movies, tend to share more dimensions with others.
These neural differences matter! Fine-grained structure in higher dimensions of cortical activity predicts behavioral differences during recall—even after accounting for coarse-scale effects captured by standard methods.
In our new work, we find that the ways individual brains differ are *not* confined to a few dominant patterns. Individual brains process natural movies distinctly along many latent dimensions, and these differences are reliable across different movies.
Recent work from our lab revealed the scale-free structure of cortical image representations in large-scale studies of humans and monkeys. Stimulus-related information is distributed across thousands of dimensions, extending far beyond the few dominant components typically studied.
Our findings show that individual neural patterns during movie viewing span orders of magnitude of dimensions—and these high-dimensional codes predict how people describe their experiences.
Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...
Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size—we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...
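A toy numpy sketch (not the paper's analysis) of why estimated dimensionality can keep scaling with dataset size: if the signal eigenspectrum follows a power law, then as the number of samples `n` grows, the noise floor of the sample covariance (the Marchenko–Pastur edge) shrinks, and ever-smaller signal components rise above it. All parameters here (`d = 200` dimensions, a `10/i` spectrum, unit noise) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200                                  # ambient dimensionality (e.g., units/voxels)
spectrum = 10.0 / np.arange(1, d + 1)    # power-law signal variances (scale-free spectrum)

def detected_dims(n):
    # independent signal dims with power-law variances, plus unit-variance noise
    X = rng.standard_normal((n, d)) * np.sqrt(spectrum) + rng.standard_normal((n, d))
    evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    # Marchenko-Pastur edge: largest eigenvalue expected from pure noise alone
    edge = (1 + np.sqrt(d / n)) ** 2
    return int(np.sum(evals > edge))

counts = {n: detected_dims(n) for n in (100, 1000, 10000)}
print(counts)  # detected dimensionality keeps growing as sample size grows
```

With a power-law spectrum there is no sample size at which the count saturates below the ambient dimensionality, which is the intuition behind "dimensionality is unbounded and scales with dataset size."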
📢The UniReps x @ellis.eu
speaker series is back! Join us for the next session on 18 December at 4 pm CET with @meenakshikhosla.bsky.social
and Raj Magesh Gauthaman🔵🔴
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision and one in language. Please repost!