When universities and research institutions become military targets, academics cannot remain silent. I co-initiated an open letter calling for global academic solidarity and support for affected students and scholars. Please share.
To read and sign: sites.google.com/view/protect...
Posts by Raj Magesh
The 6x US memory champion – Nelson Dellis – can memorize a deck of cards in 40 seconds and knows the first 10K digits of pi.
To figure out how, he let us peek inside his brain. Here is what we learned in our precision brain mapping study www.biorxiv.org/content/10.6...
youtube.com/shorts/MryMq...
Excited to share new work on how the brain makes social inferences from visual input! 🧠👯♂️
(With @lisik.bsky.social , @shariliu.bsky.social, @tianminshu.bsky.social , and Minjae Kim!) www.biorxiv.org/content/10.6...
Do you work or study in the fields of psychology, neuroscience, computer science, artificial intelligence, or philosophy?
What does the term 'representation' mean to you?
We invite you to participate in a brief survey on key conceptual questions across fields.
eu.surveymonkey.com/r/VX9GNXM
New typography idea: replace consecutive dots with a bar
Even worse idea: if the dots are close enough within a word, join them with an arc
Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...
Yeah, definitely!
A relevant paper along these lines is www.nature.com/articles/nat..., where they show dimensionality collapse on error trials in monkey PFC representations!
Sorry, I'd missed this sub-thread!
Yes, several prior reports of low-D representations were because of deliberate constraints to measure behavioral relevance. Here, we only consider cross-trial/cross-subject reliability, not task-related constraints (a very interesting Q in its own right).
Also: the ease of reaching out to the devs who *actually wrote* the software and getting timely responses from them.
And how easy it is to contribute bugfixes.
Even if the frequency of bugs is higher, the total annoyance is much lower, perhaps because I feel like I have agency.
Long live FOSS!
I think more the latter than the former.
But my point is simpler: I think neuroscience experiments often yield low-D manifolds because of simple inputs (e.g. carefully controlled stimuli) and easy tasks. I expect naturalistic stimuli and behaviors would elicit higher-dimensional representations.
Prediction: task-based optimization will ultimately prove to have a relatively minor role in DNN models of the ventral stream. Although tasks (including self-supervised ones) are currently crucial, there are signs that a simpler approach is possible. A thread:
I agree that relative measures are cleaner to measure and easier to interpret!
Our point in this paper is mainly that the absolute dimensionality is much higher than previously thought throughout visual cortex! And so we might need different approaches to understand these high-D data.
📢 The UniReps x @ellis.eu speaker series is back! Come join us for our next session on 18th December, 4 pm CET, with @meenakshikhosla.bsky.social and Raj Magesh Gauthaman 🔵🔴
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision and one in language. Please repost!
Yeah, given all the limitations, it's amazing how there's still so much stimulus-related information in BOLD signals!
In Fig S12 (journals.plos.org/ploscompbiol...) we find power-law spectra in a monkey electrophysiology dataset too.
And the same in mouse Ca-imaging: www.nature.com/articles/s41...
But also, we're binning the eigenspectrum heavily to measure this small-but-nonzero signal in the tail!
This is a tradeoff: we lose spectral resolution but at least we can measure the signal there.
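To make the tradeoff concrete, here is a minimal numpy sketch of log-spaced binning of an eigenspectrum (my own illustration; the function name and parameters are hypothetical, not from the paper):

```python
import numpy as np

def bin_eigenspectrum(spectrum, n_bins=10):
    """Average eigenvalues within log-spaced rank bins.

    Binning sacrifices spectral resolution but stabilizes the estimate
    of the small-but-nonzero signal in the tail of the spectrum.
    """
    ranks = np.arange(1, len(spectrum) + 1)
    edges = np.logspace(0, np.log10(len(spectrum)), n_bins + 1)
    # Assign each rank to a bin; clip so the last rank falls in the last bin.
    idx = np.clip(np.digitize(ranks, edges) - 1, 0, n_bins - 1)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    means = np.array([spectrum[idx == b].mean() for b in range(n_bins)])
    return centers, means

# A 1/rank power-law spectrum stays a decaying power law after binning.
centers, means = bin_eigenspectrum(1.0 / np.arange(1, 1001))
```

Note that the log spacing matters: it gives the sparse, noisy tail wide bins (many eigenvalues averaged per estimate) while preserving resolution for the first few dimensions.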
The nice thing about the estimator we're using in the paper is that if there is no stimulus-related signal (i.e., no signal that generalizes across repeated presentations and to new stimuli), the expected value of the variance estimate is 0.
So what we're seeing significantly above zero is not noise.
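To illustrate the general principle (this is a toy sketch, not the paper's actual estimator; all names here are made up): noise that is independent across repeats contributes zero covariance in expectation, so the covariance between two repeats of the same stimuli isolates stimulus-driven variance.

```python
import numpy as np

def cross_trial_variance(rep_a, rep_b):
    # Covariance between two repeats of the same stimuli (stimuli x units).
    # Noise is independent across repeats, so its expected contribution
    # is zero; only stimulus-related variance survives in expectation.
    a = rep_a - rep_a.mean(axis=0)
    b = rep_b - rep_b.mean(axis=0)
    return (a * b).sum(axis=0) / (a.shape[0] - 1)

rng = np.random.default_rng(0)
n_stim, n_units = 2000, 50
signal = rng.standard_normal((n_stim, n_units))         # stimulus-driven, unit variance
rep1 = signal + rng.standard_normal((n_stim, n_units))  # + independent noise
rep2 = signal + rng.standard_normal((n_stim, n_units))  # + independent noise

signal_var = cross_trial_variance(rep1, rep2).mean()    # near the true value, 1
noise_var = cross_trial_variance(
    rng.standard_normal((n_stim, n_units)),             # pure noise, no shared signal
    rng.standard_normal((n_stim, n_units)),
).mean()                                                # near 0
```

With no shared signal the estimate fluctuates around zero, which is why values significantly above zero in the tail cannot be explained as noise.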
Ohhh I see what you meant! I've been using "high" and "low" variance to refer to the first few dimensions and the tail of the eigenspectrum respectively.
Yeah, in principle, noise should definitely inflate the tail of the eigenspectrum (also the rest, but less noticeably).
Thanks!
The cross-decomposition method we're using measures variance that generalizes (i) across multiple presentations of the stimuli and (ii) to a held-out test set, so I'm not too worried about that: we are measuring only stimulus-related signal.
(I think you meant low variance?)
Yep, I think many tasks often used in neuroscience won't require attention to many features, but actual naturalistic behavior is probably way more high-dimensional.
www.pnas.org/doi/full/10....
I'll refactor it into a standalone tool at some point when I get the time. 🙃
But the sklearn implementation is likely sufficient for most purposes.
I think the best place to start would be this implementation of cross-decomposition in sklearn: scikit-learn.org/stable/modul...
I've written a GPU-accelerated version that does other stuff too (permutation tests, etc.) but it's unfortunately not quite plug-and-play (github.com/BonnerLab/sc...).
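For a sense of what cross-decomposition does (my own numpy illustration, not the sklearn or BonnerLab implementation): at its core, it finds paired directions in two datasets with maximal cross-covariance, e.g. via an SVD of the cross-covariance matrix.

```python
import numpy as np

def cross_decomposition(x, y, n_components=5):
    # SVD of the cross-covariance matrix between two datasets (e.g. two
    # subjects, or two repeats): the singular vectors are paired directions
    # with maximal cross-covariance, and the singular values measure the
    # shared (stimulus-related) variance along each pair of directions.
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    u, s, vt = np.linalg.svd(xc.T @ yc / (x.shape[0] - 1))
    return xc @ u[:, :n_components], yc @ vt[:n_components].T, s[:n_components]

rng = np.random.default_rng(0)
shared = rng.standard_normal((500, 3))  # low-D signal shared by both datasets
x = shared @ rng.standard_normal((3, 40)) + 0.5 * rng.standard_normal((500, 40))
y = shared @ rng.standard_normal((3, 40)) + 0.5 * rng.standard_normal((500, 40))
x_scores, y_scores, singvals = cross_decomposition(x, y)
```

In this toy example, the first few components recover the shared signal while directions dominated by independent noise get small singular values.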
Yep, at some point in the process, the relevant info must be extracted for the task at hand, and a low-D manifold is what I'd expect to see there. Though it seems that throughout visual cortex at least, the code remains pretty high-dimensional (how much of it ends up being used for any given task is unclear).
Also, while I think many would agree visual representations are high-dimensional, often our datasets and tools have been too limited to detect it.
Estimates of visual cortex dimensionality have traditionally been much lower (~10s-100), not the unbounded power-law we're reporting here.
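For intuition, the "unbounded power-law" claim is that eigenvalues decay as a power of rank with no cutoff, so variance keeps trickling into ever-higher dimensions. A quick numpy sketch of estimating the decay exponent from a spectrum (illustrative only, not the paper's fitting procedure):

```python
import numpy as np

def power_law_exponent(spectrum):
    # Least-squares slope of log(eigenvalue) vs log(rank): a straight line
    # in log-log coordinates with slope -alpha means lambda_n ~ n^(-alpha),
    # i.e. no finite dimensionality cutoff.
    ranks = np.arange(1, len(spectrum) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(spectrum), 1)
    return -slope

# An exact 1/n spectrum has decay exponent 1.
alpha = power_law_exponent(1.0 / np.arange(1, 101))
```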
I tend to think of these representations as being a rich, general-purpose feature bank that can be easily read out from for a variety of tasks. But yeah, I'm sure different latent subspaces are differentially activated based on task demands.
Yeah, that's an important point! Our analysis here only measures reliability of the representation across trials/held-out stimuli, not whether the info is used for downstream processing.
I'm also curious how dimensionality depends on task demands, but that's hard to answer with this dataset.
But also, networks do have pretty high-dimensional representations in general, often with power-law statistics too!
A nice example is in proceedings.neurips.cc/paper_files/...
Yeah, compression of info is something that often happens close to the final layers of DNNs, likely because networks are trained on much more limited tasks than the open-ended problems our brains face.
e.g. networks trained on CIFAR-10 often end up lower-dimensional than those trained on CIFAR-100
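One common way to quantify this kind of comparison (my illustration; the papers in this thread use various dimensionality measures) is the participation ratio of the covariance eigenvalues:

```python
import numpy as np

def participation_ratio(x):
    # Effective dimensionality of a (samples x features) matrix:
    # (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    # Close to the feature count for isotropic data, close to k when
    # variance is concentrated in k directions.
    lam = np.clip(np.linalg.eigvalsh(np.cov(x, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam**2).sum()

rng = np.random.default_rng(0)
# Toy stand-ins for a narrow-task vs. a broad-task representation:
# variance concentrated in 5 of 64 directions vs. spread evenly.
low_d = rng.standard_normal((2000, 64)) * np.r_[np.full(5, 10.0), np.full(59, 0.1)]
high_d = rng.standard_normal((2000, 64))
```

A representation trained on a narrower objective tends to score lower on measures like this, matching the CIFAR-10 vs. CIFAR-100 observation above.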