
Posts by Ebrahim Feghhi

LightBeam: An Accurate and Memory-Efficient CTC Decoder for Speech Neuroprostheses
A promising pathway for restoring communication in patients with dysarthria and anarthria is speech neuroprostheses, which directly decode speech from cortical neural activity. Two benchmarks, Brain-t...

Excited to introduce LightBeam, a CTC decoder for speech neuroprostheses that drastically cuts memory load while achieving state-of-the-art (SOTA) results.

Paper: arxiv.org/abs/2603.14002

Code: github.com/ebrahimfeghh...

Co-authors: @who-is-lionel.bsky.social, @nrhadidi.bsky.social, Jonathan Kao.

1 month ago 1 0 0 0
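For readers unfamiliar with CTC decoding, here is a minimal greedy decoder in Python: take the most likely token per frame, collapse consecutive repeats, and drop blanks. This is the textbook baseline, not LightBeam's actual algorithm (which, per the paper title, is a full decoder with memory optimizations); the array shapes and blank index here are illustrative assumptions.

```python
import numpy as np

def ctc_greedy_decode(frame_logits: np.ndarray, blank: int = 0) -> list[int]:
    """Collapse the per-frame argmax path: merge consecutive repeats,
    then remove blank tokens. frame_logits has shape (frames, vocab)."""
    best = frame_logits.argmax(axis=1)      # most likely token per frame
    out, prev = [], blank
    for tok in best:
        if tok != prev and tok != blank:    # new non-blank token starts here
            out.append(int(tok))
        prev = tok
    return out
```

Beam-search CTC decoders improve on this by tracking many candidate paths (and often a language model), which is where the memory cost that LightBeam targets comes from.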

Our result that OASM, a trivial model of temporal autocorrelation, achieves higher neural predictivity than GPT2-XL on the Pereira2018 dataset has been replicated thanks to @kartikpradeepan.bsky.social! Interested if @mschrimpf.bsky.social thinks this changes conclusions from Schrimpf et al., 2021.

2 months ago 5 0 1 0

This post has generated a super interesting debate between the authors of the paper @ebrahimfeghhi.bsky.social @nrhadidi.bsky.social and one of the authors of a paper they criticised @mschrimpf.bsky.social including an attempted reproduction of their results. This is such a great use of social media

2 months ago 16 6 3 0

While we greatly appreciate the discussion, it seems like the responses either (1) claim our results are not reproducible without making meaningful attempts to reproduce them, or (2) cite other studies with similar conclusions as evidence that the original claims hold, a line of reasoning we don't agree with.

2 months ago 2 0 0 0

I'll do my best to also look into brain-score, but to be totally honest we have used it before and found it hard to navigate. I believe we have given sufficient details for reproduction as well as provided code to run OASM. We will work over the next few days to ensure our code is very easy to run.

2 months ago 2 0 1 0

This paper shows alignment between LLMs and brain data is outperformed by a null model. More evidence for the argument I've been making in talks lately that we shouldn't believe any computational paper that puts less effort into the null model than into the main model.

www.biorxiv.org/content/10.1...

2 months ago 66 10 2 1

Neither Nima nor I received any such feedback regarding the framing of the manuscript. We have always been happy to engage with counter-evidence, and are very receptive to further discussion.

2 months ago 4 0 0 0

Therefore, we see no convincing evidence showing that the original results presented in the highly cited Schrimpf et al., 2021 paper are robust.

2 months ago 3 0 1 0

Furthermore, given that these studies used different neural datasets and conducted different experiments than Schrimpf et al., 2021, they're not replications. For instance, participants are shown images in Shen et al. 2025, whereas Schrimpf et al., 2021 used neural datasets with natural language stimuli.

2 months ago 3 0 1 0

Mischler et al. 2024 uses shuffled splits, and the Caucheteux et al. 2022 result is somewhat nuanced given that the correlation saturates beyond a certain point.

2 months ago 3 0 1 0
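To see why shuffled train/test splits matter for temporally autocorrelated signals, here is an illustrative toy (not the paper's analysis; the signal, features, and all parameters are made up): bump-shaped features indexed by time can predict a smooth random signal very well when splits are shuffled, because every test point has training neighbors, but much worse on a held-out contiguous block.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# smooth, temporally autocorrelated "neural" signal (no linguistic content)
y = np.convolve(rng.standard_normal(n + 20), np.ones(21) / 21, mode="valid")

# time-indexed features: one Gaussian bump per timepoint
t = np.arange(n)
X = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 3.0 ** 2))

def ridge_r(train, test, lam=1e-3):
    """Fit ridge regression on the train indices, return Pearson r on test."""
    Xtr, Xte = X[train], X[test]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n), Xtr.T @ y[train])
    return np.corrcoef(Xte @ w, y[test])[0, 1]

idx = rng.permutation(n)
r_shuffled = ridge_r(np.sort(idx[50:]), np.sort(idx[:50]))  # interleaved test
r_contig = ridge_r(np.arange(50, n), np.arange(50))         # held-out block
```

With shuffled splits the model effectively interpolates between nearby training timepoints, inflating predictivity even though the features carry no content about the stimulus.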
Post image

Regarding the other studies you cited, we acknowledge that findings are mixed in the literature, and we have further emphasized this in our updated preprint which we plan to release soon. We include the updated section below:

2 months ago 3 0 1 0

A paper you were senior author on, AlKhamissi et al., 2024, also claimed to fail to replicate our result, specifically that positional and word-rate information explains the neural predictivity of untrained LLMs. However, AlKhamissi et al. “replicated” our results incorrectly by using shuffled splits.

2 months ago 3 0 1 0
Post image

Please see Section 4.4 of our preprint, which we also include here:

2 months ago 4 0 1 0

Your reconstruction of OASM is wrong. The text above your code reads “OASM: Ordinal position, Average word length, Sentence Length, Mean Features”. We’re unsure how this acronym was generated, since OASM is a model of temporal autocorrelation, constructed by Gaussian-blurring an identity matrix.

2 months ago 6 0 2 1
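Based on the description above, a hypothetical sketch of the OASM design matrix: Gaussian-blur an identity matrix so that each regressor is a smooth bump centered on one timepoint. The function name, blur width, and pure-NumPy kernel formulation are assumptions for illustration; the linked repository has the actual construction.

```python
import numpy as np

def oasm_features(n_timepoints: int, sigma: float = 2.0) -> np.ndarray:
    """Sketch of OASM regressors: a Gaussian-blurred identity matrix.
    Feature j is a smooth bump centered on timepoint j, so the model
    captures only temporal autocorrelation, not stimulus content."""
    t = np.arange(n_timepoints)
    dist2 = (t[:, None] - t[None, :]) ** 2  # pairwise squared time distances
    return np.exp(-dist2 / (2 * sigma ** 2))
```

Note that none of these features encode ordinal position, word length, or sentence length; the matrix depends only on the distance between timepoints.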
GitHub - ebrahimfeghhi/beyond-brainscore: Code for paper: "What are large language models mapping to in the brain? A case against over-reliance on brain score"

We will update the preprint, and here is the link to the codebase: github.com/ebrahimfeghh.... The code to construct OASM is here: github.com/ebrahimfeghh....

2 months ago 4 0 1 0

Apologies for not including the link to the codebase in our new preprint. We included the link in our old preprint (Feghhi et al., 2024), which also had the OASM results, but thank you for pointing this out.

2 months ago 5 0 1 0

Hi Martin, thanks for the response and we appreciate the discussion.

2 months ago 6 0 1 1