
Posts by Fabian Schneider

Ironically, I often find myself hoping to see long supplements (particularly in sys neuro and spiking work) because, in my experience, broader papers often come with a free serving of “practically irreproducible given the (lack of) details and math in the paper”.

1 day ago 2 0 1 0

Some reports say over 500 schools, 55 libraries, & 25 universities hit.
You can debate the numbers, but hitting Sharif University & Beheshti is like hitting MIT & Stanford. I keep wondering: how would the scientific community respond differently if it were those universities? What’s the difference?

2 weeks ago 473 191 6 5

Significant acceleration of global warming since 2015, finds new PIK study with @rahmstorf.bsky.social
Recent warming: around 0.35°C per decade.
1970–2015 average: just under 0.2°C per decade.
➡️Current rate is higher than in any decade since records began in 1880.
www.pik-potsdam.de/en/news/late...

1 month ago 178 108 4 16

"The software detected 18 cases in the first 600 datasets that were serious enough to report...based on that limited sample, around 3% of papers contain these types of errors."

😭

1 month ago 30 13 0 0

hey siri whats it like doing a phd

2 months ago 2 0 0 0
A functional influence based circuit motif that constrains the set of plausible algorithms of cortical function

There are several plausible algorithms for cortical function that are specific enough to make testable predictions of the interactions between functionally identified cell types. Many of these algorithms are based on some variant of predictive processing. Here we set out to experimentally distinguish between two such predictive processing variants. A central point of variability between them lies in the proposed vertical communication between layer 2/3 and layer 5, which stems from the diverging assumptions about the computational role of layer 5. One assumes a hierarchically organized architecture and proposes that, within a given node of the network, layer 5 conveys unexplained bottom-up input to prediction error neurons of layer 2/3. The other proposes a non-hierarchical architecture in which internal representation neurons of layer 5 provide predictions for the local prediction error neurons of layer 2/3. We show that the functional influence of layer 2/3 cell types on layer 5 is incompatible with the hierarchical variant, while the functional influence of layer 5 cell types on prediction error neurons of layer 2/3 is incompatible with the non-hierarchical variant. Given these data, we can constrain the space of plausible algorithms of cortical function. We propose a model for cortical function based on a combination of a joint embedding predictive architecture (JEPA) and predictive processing that makes experimentally testable predictions.

### Competing Interest Statement

The authors have declared no competing interest.

Funding: Swiss National Science Foundation, https://ror.org/00yjd3n13; Novartis Foundation, https://ror.org/04f9t1x17; European Research Council, https://ror.org/0472cxd90, 865617

Our work with @georgkeller.bsky.social on testing predictive processing (PP) models in cortex is out on bioRxiv now! www.biorxiv.org/content/10.6... A short thread on our findings and thoughts on where we should move on from PP below.

2 months ago 49 16 2 1
Sensory sharpening and semantic prediction errors unify competing models of predictive processing in human speech comprehension Speech comprehension relies on predictive mechanisms, but models disagree on whether the brain prioritizes expected or unexpected information. This study shows that sharpening of sensory representatio...

Same sound, different perception: Do expectations change what you hear?👂🧠

We paired faces with topics and played the same ambiguous speech with different faces. The brain sharpened sensory signals toward predictions and showed gated prediction errors at higher levels.

Read @plosbiology.org. Blueprint👇

3 months ago 43 13 1 1
GitHub - FabulousFabs/MVPy: Multivariate pattern analysis for neuroscience with full GPU support in python. Multivariate pattern analysis for neuroscience with full GPU support in python. - FabulousFabs/MVPy

PS: The computational cost of some of our analyses required writing a lot of custom code to put everything on GPUs. We are now working to consolidate this tooling into MVPy. If you'd like to contribute (features, docs, tests, benchmarks), come say hi!

3 months ago 5 0 0 0

For a quick glance at our key findings, see our quoted preprint thread.

Elated to finally see this published. Huge thanks to @helenblank.bsky.social, the Predictive Cognition Lab, and colleagues at @isnlab.bsky.social. This project truly took a village. 👏

3 months ago 3 1 1 0
WARN-D machine learning competition is live » Eiko Fried If you share one single thing of our team in 2026—on social media or per email with your colleagues—please let it be this machine learning competition. It was half a decade of work to get here, especi...

After 5 years of data collection, our WARN-D machine learning competition to forecast depression onset is now LIVE! We hope many of you will participate—we have incredibly rich data.

If you share a single thing of my lab this year, please make it this competition.

eiko-fried.com/warn-d-machi...

3 months ago 189 159 5 7

Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n

5 months ago 134 52 7 8
IMAGINE-decoding-challenge Can you predict which words participants were hearing, based on brain activity recorded while they visually saw these items?

How well do classifiers trained on visual activity actually transfer to non-visual reactivation?

#Decoding studies often rely on training in one (visual) condition and applying the classifier to another (e.g. rest reactivation). But how well does this actually work? Show us what makes it work and win up to $1000!
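As a rough illustration of the setup behind the challenge (not the organizers' code; all data, dimensions, and noise levels below are synthetic and purely illustrative), a classifier trained on one condition can be scored on a second condition whose class patterns are weaker and noisier:

```python
# Toy sketch of cross-condition transfer: train a classifier on simulated
# "visual" trials, then test it on a second condition whose class patterns
# are attenuated and noisier. All data are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 200, 50, 4

# Class-specific activity patterns shared across conditions.
patterns = rng.normal(size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=n_trials)

visual = patterns[labels] + rng.normal(size=(n_trials, n_features))
# "Reactivation": same patterns, but weaker and noisier.
reactivation = 0.2 * patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))

clf = LogisticRegression(max_iter=1000).fit(visual, labels)
within = clf.score(visual, labels)          # within-condition accuracy
transfer = clf.score(reactivation, labels)  # cross-condition transfer
print(f"within: {within:.2f}, transfer: {transfer:.2f}")
```

In a toy like this, transfer accuracy lands above chance but well below within-condition accuracy; the challenge asks what drives that gap in real data.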

5 months ago 32 14 3 3
Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain Abstract. The idea that the brain is a probabilistic (Bayesian) inference machine, continuously trying to figure out the hidden causes of its inputs, has become very influential in cognitive (neuro)sc...

🧠 Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain

direct.mit.edu/jocn/article...

(still uncorrected proofs, but the corrected version should be posted soon; OA is also forthcoming. For now, PDF at brainandexperience.org/pdf/10.1162-...)

5 months ago 29 12 2 3
[Figure: What do representations tell us about a system? A mouse with a scope yielding a vector of activity patterns, and a neural network with a vector of unit activity patterns. Common analyses of neural representations: encoding models (relating activity to task features); comparing models via neural predictivity (e.g. by their R² to mouse brain activity); RSA (assessing brain–brain or model–brain correspondence using representational dissimilarity matrices).]


In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
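For readers less familiar with these tools, both analyses can be sketched in a few lines on synthetic data (illustrative only; all names, dimensions, and noise levels are made up):

```python
# Toy versions of two common analyses (synthetic data, illustrative names):
# an encoding model (regress activity on task features, score by R^2) and
# RSA (correlate representational dissimilarity matrices, RDMs).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_features, n_neurons = 20, 5, 30

X = rng.normal(size=(n_conditions, n_features))               # task features
W = rng.normal(size=(n_features, n_neurons))                  # true tuning
Y = X @ W + 0.1 * rng.normal(size=(n_conditions, n_neurons))  # "brain" data
M = X @ W                                                     # "model" units

# Encoding model: least-squares fit of activity from features.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
r2 = 1 - (Y - X @ beta).var() / Y.var()

# RSA: compare condition-by-condition dissimilarity structure.
rdm_brain = pdist(Y, metric="correlation")
rdm_model = pdist(M, metric="correlation")
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"encoding R^2 = {r2:.2f}, RSA rho = {rho:.2f}")
```

Note that both scores depend on what the chosen features or model emphasize, which is exactly the kind of bias the commentary is about.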

8 months ago 171 53 5 1

Cool! May I join?

8 months ago 1 0 0 0

Thanks Peter!! :-)

For anyone looking for a brief summary, here's a quick tour of our key findings: bsky.app/profile/fabi...

8 months ago 1 0 0 0
Sensory sharpening and semantic prediction errors unify competing models of predictive processing in communication The human brain makes abundant predictions in speech comprehension that, in real-world conversations, depend on conversational partners. Yet, models diverge on how such predictions are integrated with...

🧵16/16
More results, details and discussion in the full preprint: www.biorxiv.org/content/10.1...

Huge thanks to Helen Blank, the Predictive Cognition Lab, and colleagues @isnlab.bsky.social.

Happy to discuss here, via email or in person! Make sure to catch us at CCN if you're around. 🥳

8 months ago 4 0 0 0

🧵15/16
3. Prediction errors are not computed indiscriminately and appear to be gated by likelihood, potentially underlying robust updates to world models (where extreme prediction errors might otherwise lead to deleterious model updates).

8 months ago 0 0 1 0

🧵14/16
2. Priors sharpen representations at the sensory level, and produce high-level prediction errors.

While this contradicts traditional predictive coding, it aligns well with recent views by @clarepress.bsky.social, @peterkok.bsky.social, @danieljamesyon.bsky.social: doi.org/10.1016/j.ti...

8 months ago 2 0 1 0

🧵13/16
So what are the key takeaways?

1. Listeners apply speaker-specific semantic priors in speech comprehension.

This extends previous findings showing speaker-specific adaptations at the phonetic, phonemic and lexical levels.

8 months ago 0 0 1 0

🧵12/16
In fact, neurally we find a double dissociation between type of prior and congruency: semantic prediction errors are apparent relative to speaker-invariant priors iff a word is highly unlikely given the speaker prior, but emerge relative to speaker-specific priors otherwise!

8 months ago 0 0 1 0

🧵11/16
Interestingly, participants take longer to respond to words incongruent with the speaker, but response times are a function of word probability given the speaker only for congruent words. This may also suggest some kind of gating, incurring a switch cost!

8 months ago 0 0 1 0

🧵10/16
So is there some process gating which semantic prediction errors are computed?

In real time, we sample particularly congruent and incongruent exemplars of a speaker for each subject. We present unmorphed but degraded words and ask for word identification.

8 months ago 0 0 1 0

🧵9/16
Conversely, here we find that only speaker-specific semantic surprisal improves encoding performance. Explained variance clusters across all sensors between 150-630ms, consistent with prediction errors at higher levels of the processing hierarchy such as semantics!

8 months ago 0 0 1 0

🧵8/16
What about high-level representations? Let's zoom out to the broadband EEG response.

To test information-theoretic measures, we encode single-trial responses from acoustic/semantic surprisal, controlling for general linguistic confounds (in part through LLMs).
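A minimal sketch of this kind of encoding comparison, on synthetic data with a single confound regressor (not the authors' pipeline; names and effect sizes are illustrative):

```python
# Hypothetical sketch of one encoding comparison: does semantic surprisal
# explain variance in single-trial responses beyond a general linguistic
# confound? Synthetic data; variable names are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_trials = 500

confound = rng.normal(size=n_trials)                    # e.g. generic LM surprisal
surprisal = 0.5 * confound + rng.normal(size=n_trials)  # correlated regressor of interest
response = confound + 0.8 * surprisal + rng.normal(size=n_trials)

base = LinearRegression().fit(confound[:, None], response)
full = LinearRegression().fit(np.column_stack([confound, surprisal]), response)

r2_base = base.score(confound[:, None], response)
r2_full = full.score(np.column_stack([confound, surprisal]), response)
print(f"unique variance explained by surprisal: {r2_full - r2_base:.3f}")
```

The logic is the R² gain of the full over the base model, which isolates variance the regressor of interest explains beyond the confound.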

8 months ago 0 0 1 0

🧵7/16
How are they altered? Our RSMs naturally represent expected information. Due to their geometry, a sign flip inverts the pattern to represent unexpected information.

Coefficients show clear evidence of sharpening at the sensory level, pulling representations towards predictions!
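The sign-flip point is a property of the predictor's geometry: negating an RSM predictor exactly negates its fit, so the same regressor tests "expected" and "unexpected" coding, distinguished only by the sign of its coefficient. A toy illustration on synthetic data (names and sizes are made up):

```python
# Toy illustration (synthetic data): correlating an observed pattern with a
# sign-flipped RSM predictor exactly negates the fit, so one regressor tests
# both "expected" and "unexpected" coding via its coefficient sign.
import numpy as np

rng = np.random.default_rng(3)
expected = rng.normal(size=45)  # vectorized model RSM (10 items -> 45 pairs)
observed = 0.6 * expected + rng.normal(scale=0.5, size=45)

r_expected = np.corrcoef(observed, expected)[0, 1]
r_unexpected = np.corrcoef(observed, -expected)[0, 1]  # sign-flipped predictor
print(f"expected: {r_expected:.2f}, unexpected: {r_unexpected:.2f}")
```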

8 months ago 0 0 1 0

🧵6/16
We find that similarity structure of sensory representations is best explained by combining speaker-invariant and -specific acoustic predictions. Critically, purely semantic predictions do not help.

Semantic predictions alter sensory representations at the acoustic level!

8 months ago 1 0 1 0

🧵5/16
We compute similarity between reconstructions for both speakers and original words from morph creation. We encode observed sensory RSMs from speaker-invariant and -specific acoustic and semantic predictions, controlling for raw acoustics and general linguistic predictions.

8 months ago 0 0 1 0

🧵4/16
Let's zoom in on the sensory level: We train stimulus reconstruction models to decode auditory spectrograms from EEG recordings.

If predictions shape neural representations at the sensory level, we should find reconstructed representational content shifted by speakers.
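A minimal sketch of such a backward (stimulus-reconstruction) model on synthetic data, assuming a simple instantaneous linear mixing from spectrogram to channels (illustrative only; this is not the authors' pipeline, which would also handle temporal lags):

```python
# Toy backward model: ridge-regress the stimulus spectrogram from
# multichannel "EEG" and evaluate reconstruction on held-out samples.
# Synthetic data; dimensions and names are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_samples, n_channels, n_freqs = 1000, 32, 16

spectrogram = rng.normal(size=(n_samples, n_freqs))  # stimulus features
mixing = rng.normal(size=(n_freqs, n_channels))      # forward (encoding) map
eeg = spectrogram @ mixing + rng.normal(scale=2.0, size=(n_samples, n_channels))

train, test = slice(0, 800), slice(800, None)
decoder = Ridge(alpha=10.0).fit(eeg[train], spectrogram[train])
recon = decoder.predict(eeg[test])

# Per-frequency reconstruction accuracy on held-out samples.
r = [np.corrcoef(recon[:, f], spectrogram[test, f])[0, 1] for f in range(n_freqs)]
print(f"mean held-out reconstruction r = {np.mean(r):.2f}")
```

Held-out correlation between reconstructed and true spectrograms is the usual yardstick here; the representational analyses then operate on the reconstructions rather than the raw EEG.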

8 months ago 0 0 1 0