
Posts by Raj Magesh


When universities and research institutions become military targets, academics cannot remain silent. I co-initiated an open letter calling for global academic solidarity and support for affected students and scholars. Please share.

To read and sign: sites.google.com/view/protect...

1 week ago
The machines are fine. I'm worried about us. On AI agents, grunt work, and the part of science that isn't replaceable.

Hey, I wrote a thing about AI in astrophysics
ergosphere.blog/posts/the-ma...

3 weeks ago
How does his brain do it? #neuroscience #memory #sport Nelson Dellis, 6x US memory champion
YouTube video by Roselyne Chauvin

The 6x US memory champion – Nelson Dellis – can memorize a deck of cards in 40 seconds and knows the first 10K digits of pi.
To figure out how, he let us peek inside his brain. Here is what we learned in our precision brain-mapping study www.biorxiv.org/content/10.6...

youtube.com/shorts/MryMq...

1 month ago

Excited to share new work on how the brain makes social inferences from visual input! 🧠👯‍♂️
(With @lisik.bsky.social , @shariliu.bsky.social, @tianminshu.bsky.social , and Minjae Kim!) www.biorxiv.org/content/10.6...

1 month ago

Do you work or study in the fields of psychology, neuroscience, computer science, artificial intelligence, or philosophy?

What does the term 'representation' mean to you?

We invite you to participate in a brief survey on key conceptual questions across fields.

eu.surveymonkey.com/r/VX9GNXM

2 months ago

New typography idea: replace consecutive dots with a bar

Even worse idea: if the dots are close enough within a word, join them with an arc

2 months ago
High-dimensional structure underlying individual differences in naturalistic visual experience Han and Bonner reveal that individual visual experience arises from high-dimensional neural geometry distributed across multiple representational scales. By characterizing the full dimensional spectru...

Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...

2 months ago
The importance of mixed selectivity in complex cognitive tasks - Nature When an animal is performing a cognitive task, individual neurons in the prefrontal cortex show a mixture of responses that is often difficult to decipher and interpret; here new computational methods...

Yeah, definitely!

A relevant paper along these lines is www.nature.com/articles/nat..., where they show dimensionality collapse on error trials in monkey PFC representations!

4 months ago

Sorry, I'd missed this sub-thread!

Yes, several prior reports of low-D representations were because of deliberate constraints to measure behavioral relevance. Here, we only consider cross-trial/cross-subject reliability, not task-related constraints (a very interesting Q in its own right).

4 months ago

Also: the ease of reaching out to the devs who *actually wrote* the software and getting timely responses from them.

And how easy it is to contribute bugfixes.

Even if the frequency of bugs is higher, the total annoyance is much lower, perhaps because I feel like I have agency.

Long live FOSS!

4 months ago

I think more the latter than the former.

But my point is simpler: I think neuroscience experiments often yield low-D manifolds because of simple inputs (e.g., carefully controlled stimuli) and easy tasks. I expect naturalistic stimuli and behaviors would elicit higher-D representations.

4 months ago

Prediction: task-based optimization will ultimately prove to have a relatively minor role in DNN models of the ventral stream. Although tasks (including self-supervised ones) are currently crucial, there are signs that a simpler approach is possible. A thread:

4 months ago

I agree that relative measures are cleaner to measure and easier to interpret!

Our point in this paper is mainly that the absolute dimensionality is much higher than previously thought throughout visual cortex! And so we might need different approaches to understand these high-D data.

4 months ago
Post image

📢 The UniReps x @ellis.eu speaker series is back! Join us for the next session on 18th December at 4 pm CET with @meenakshikhosla.bsky.social and Raj Magesh Gauthaman 🔵🔴

4 months ago

Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision and one in language. Please repost!

4 months ago

Yeah, given all the limitations, it's amazing how there's still so much stimulus-related information in BOLD signals!

In Fig S12 (journals.plos.org/ploscompbiol...) we find power-law spectra in a monkey electrophysiology dataset too.

And the same in mouse Ca-imaging: www.nature.com/articles/s41...

4 months ago

But also, we're binning the eigenspectrum heavily to measure this small-but-nonzero signal in the tail!

This is a tradeoff: we lose spectral resolution but at least we can measure the signal there.

4 months ago
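The log-binning tradeoff described above can be sketched in a few lines. This is a hypothetical illustration on a synthetic power-law spectrum, not the paper's actual code; the function name and bin count are arbitrary choices:

```python
import numpy as np

def log_binned_spectrum(eigvals, n_bins=20):
    """Average an eigenspectrum within logarithmically spaced rank bins.

    Trades spectral resolution for a more stable estimate of the
    small-but-nonzero variance in the tail. (Illustrative sketch only.)
    """
    ranks = np.arange(1, len(eigvals) + 1)
    # Bin edges from rank 1 to just past the last rank, log-spaced
    edges = np.logspace(0, np.log10(len(eigvals) + 1), n_bins + 1)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ranks >= lo) & (ranks < hi)
        if mask.any():  # skip bins that contain no integer rank
            centers.append(ranks[mask].mean())
            means.append(eigvals[mask].mean())
    return np.array(centers), np.array(means)

# Synthetic noisy power-law spectrum with 1000 dimensions
rng = np.random.default_rng(0)
spectrum = np.arange(1, 1001) ** -1.0 * (1 + 0.1 * rng.standard_normal(1000))
centers, means = log_binned_spectrum(spectrum)
```

Each bin averages over more ranks as you move into the tail, which is exactly where the per-dimension signal is smallest and noisiest.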

The nice thing about the estimator we're using in the paper is that if there is no stimulus-related signal (i.e., nothing that generalizes across repeated presentations and to new stimuli), the expected value of the variance estimate is 0.

So what we're seeing significantly above zero is not noise.

4 months ago
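This zero-expectation property isn't specific to the paper's estimator; a minimal cross-trial covariance sketch (hypothetical function and synthetic data, not the published method) shows the same logic:

```python
import numpy as np

def cross_trial_signal_variance(rep_a, rep_b):
    """Estimate stimulus-related variance from two repeats of the same stimuli.

    Per unit, the covariance between repeat A and repeat B estimates the
    signal variance: trial noise is independent across repeats, so its
    expected contribution to the covariance is zero.
    """
    a = rep_a - rep_a.mean(axis=0)
    b = rep_b - rep_b.mean(axis=0)
    return (a * b).sum(axis=0) / (rep_a.shape[0] - 1)

rng = np.random.default_rng(0)
n_stim, n_units = 500, 50
signal = rng.standard_normal((n_stim, n_units))          # shared across repeats
rep_a = signal + rng.standard_normal((n_stim, n_units))  # independent noise
rep_b = signal + rng.standard_normal((n_stim, n_units))

est = cross_trial_signal_variance(rep_a, rep_b).mean()   # near the true signal variance (1)
noise_only = cross_trial_signal_variance(
    rng.standard_normal((n_stim, n_units)),
    rng.standard_normal((n_stim, n_units)),
).mean()                                                  # near 0: no shared signal
```

So anything that lands significantly above zero reflects stimulus-driven structure, not trial noise.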

Ohhh I see what you meant! I've been using "high" and "low" variance to refer to the first few dimensions and the tail of the eigenspectrum respectively.

Yeah, in principle, noise should definitely inflate the tail of the eigenspectrum (also the rest, but less noticeably).

4 months ago

Thanks!

The cross-decomposition method we're using measures variance that generalizes (i) across multiple presentations of the stimuli and (ii) to a held-out test set, so I'm not too worried about that---we are measuring only stimulus-related signal.

(I think you meant low variance?)

4 months ago

Yep, I think many tasks commonly used in neuroscience don't require attention to many features, but actual naturalistic behavior is probably far more high-dimensional.

www.pnas.org/doi/full/10....

4 months ago

I'll refactor it into a standalone tool at some point when I get the time. 🙃

But the sklearn implementation is likely sufficient for most purposes.

4 months ago
PLSSVD

I think the best place to start would be this implementation of cross-decomposition in sklearn: scikit-learn.org/stable/modul...

I've written a GPU-accelerated version that does other stuff too (permutation tests, etc.) but it's unfortunately not quite plug-and-play (github.com/BonnerLab/sc...).

4 months ago

Yep, at some point in the process, the relevant info must be extracted for task purposes, and a low-D manifold is what I'd expect to see there. That said, throughout visual cortex at least, the code remains pretty high-dimensional (how much of it ends up being used for a task is unclear).

4 months ago

Also, while I think many would agree visual representations are high-dimensional, often our datasets and tools have been too limited to detect it.

Estimates of visual cortex dimensionality have traditionally been much lower (~10s-100), not the unbounded power-law we're reporting here.

4 months ago

I tend to think of these representations as a rich, general-purpose feature bank that can be easily read out for a variety of tasks. But yeah, I'm sure different latent subspaces are differentially activated based on task demands.

4 months ago

Yeah, that's an important point! Our analysis here only measures reliability of the representation across trials/held-out stimuli, not whether the info is used for downstream processing.

I'm also curious how dimensionality depends on task demands, but that's hard to answer with this dataset.

4 months ago
α-ReQ: Assessing Representation Quality in Self-Supervised Learning by measuring eigenspectrum decay

But also, networks do have pretty high-dimensional representations in general, often with power-law statistics too!

A nice example is in proceedings.neurips.cc/paper_files/...

4 months ago
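The eigenspectrum-decay summary that α-ReQ builds on (an exponent α for λ_i ∝ i^(−α)) is commonly estimated by linear regression in log-log coordinates. A sketch on a synthetic spectrum with known exponent; the fit range here is an arbitrary choice, not the paper's:

```python
import numpy as np

def fit_powerlaw_exponent(eigvals, fit_range=(10, 500)):
    """Fit the decay exponent alpha of an eigenspectrum lambda_i ~ i**(-alpha)
    by least-squares regression of log(lambda) on log(rank)."""
    lo, hi = fit_range
    ranks = np.arange(1, len(eigvals) + 1)
    x = np.log(ranks[lo:hi])
    y = np.log(eigvals[lo:hi])
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope  # a power law appears as a line of slope -alpha in log-log

# Synthetic spectrum with known alpha = 1.2
spectrum = np.arange(1, 1001, dtype=float) ** -1.2
alpha = fit_powerlaw_exponent(spectrum)  # recovers ~1.2
```

On real (noisy) spectra, the choice of fit range matters a lot, which connects back to the binning tradeoff discussed earlier in the thread.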

Yeah, compression of info is something that often happens close to the final layers of DNNs, likely because networks are often trained on a more limited task than an open-ended system like our brains.

e.g. networks trained on CIFAR-10 often end up lower-dimensional than those trained on CIFAR-100

4 months ago
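One common way to quantify "lower-dimensional" in this context is the participation ratio of the eigenspectrum. A toy sketch with synthetic spectra (illustrative only; not measurements from CIFAR-trained networks):

```python
import numpy as np

def participation_ratio(eigvals):
    """Effective dimensionality of a representation:
    PR = (sum of lambda_i)**2 / sum of lambda_i**2."""
    eigvals = np.asarray(eigvals, dtype=float)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Steeper eigenvalue decay (a more "compressed" code, as one might expect
# from training on a narrower task) gives a lower effective dimensionality.
ranks = np.arange(1, 513, dtype=float)
pr_fast = participation_ratio(ranks ** -2.0)  # steep decay -> low-D
pr_slow = participation_ratio(ranks ** -0.8)  # shallow decay -> high-D
```

The participation ratio collapses the whole spectrum into one number, so it mostly reflects the leading dimensions; power-law exponents are more sensitive to the tail.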
Hierarchical Text-Conditional Image Generation with CLIP Latents Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-s...

Yeah, there are definitely analogous findings in DNNs!

I particularly like Figure 7 in arxiv.org/abs/2204.06125 as an example of high-dimensional representations being useful in DNNs.

4 months ago