Constituent-constrained word prediction during language comprehension
www.nature.com/articles/s41...
Dataset of chronic intracranial EEG of epilepsy patients via responsive neurostimulation system
www.frontiersin.org/journals/neu...
Philosopher Jolien Francken and I analysed the "neurobiology of language"; here's our view of some of its problems: 1) defining language; 2) identifying its neurobiological basis; 3) overlooking goal dependence. Problems 1 & 2 are principled, but we can modify 3 and use it to remedy 1 & 2: doi.org/10.31234/osf...
I love it! Thanks for the link, this is right up my street 😄
The much more careful work by other scholars, whom we cite in our Commentary, motivates a much sharper critique of LLMs.
Thank you to my co-authors @evelinaleivada.bsky.social , Paolo Morosi & Andrew Nevins, and to @kmahowald.bsky.social & @futrell.bsky.social for their target article!
Nowhere in that paper do we advocate for replicating our method exclusively. And the bizarre "Glarts glarts glart..." example was only ever used to see how the model tends to "reason" through such weird inputs. The other, more damning cases in our paper are, oddly, never cited by LLM enjoyers.
And it goes without saying that one-off prompts are not the gold standard here; our 2025 paper was only meant as a quick pilot analysis, and we say so in the paper.
At the same time, they *also* do use this approach (e.g. in Hu et al. 2026, which they also cite in this BBS reply) to make some pretty strong claims. We think one cannot have it both ways: either they use it because it works up to some point (I concur), or they "don't endorse the method", as they say here.
But on this specific point: we of course agree that one-off prompts should be complemented by a different type of evaluation (probably one that looks into underlying states, rather than readily accepting the models' black-box nature).
They seem to have engaged much more with a paper we cited than with our arguments! They *do* mention our Kubrickian rebuttal to their cinematic references, though 🤓
With respect to the authors' reply, as far as I can see, the only direct engagement with our Commentary article was a brief mention of one example LLM prompt from an entirely different paper cited in our commentary (Murphy et al. 2025).
We also argue that LLMs have been unable to help with causal-mechanistic questions concerning how the algebraic properties of syntax-semantics are neurally enforced, and point to a couple of relevant studies (see also arxiv.org/abs/2604.00291)
Our points about "emergence" are also echoed in recent work by Krakauer et al. (arxiv.org/pdf/2506.111...)
Other work casts doubt on the claim that LLM probabilities can distinguish between possible/impossible languages (see the extensive work by my co-author @evelinaleivada.bsky.social) - and for the common fallacies in the literature, see @olivia.science / @irisvanrooij.bsky.social
F&M point to work on the geometry of word embeddings and their ability to capture syntactic dependencies. Yet dependency grammar is not a viable candidate theory of syntax, since it isolates word-word dependency graphs rather than hierarchical constituency structure.
We argue that LLMs have not in fact demonstrated "mastery" of syntax, marshalling recent evidence, and that they further serve to obscure explanatory insights with respect to topics in the cognitive neuroscience of language.
First, the authors should be applauded for generously engaging such a broad set of critiques. From the philosophical to the linguistic to the neuroscientific, they have undertaken a mammoth task.
Happy to share our BBS commentary, now in press!
Thanks to @kmahowald.bsky.social & @futrell.bsky.social for a very thoughtful response.
A couple of thoughts [on their reply [to our commentary [on their target article]]] 🧵
lingbuzz.net/lingbuzz/009...
A new paper from the lab!
We use MEG and a "local/global" design in the language domain to ask whether the transitions between words in a sentence are encoded by a shallow transition-probability mechanism, in parallel to a tree-based syntactic mechanism.
Disentangling hierarchical and sequential computations during sentence processing
www.sciencedirect.com/science/arti...
New paper out in Proceedings of the Royal Society B: we apply linguistic tools to sperm whale vowels.
The result: sperm whale vowels do not just look like human vowels. They also behave like them.
We found several parallels. As in Latin, whales have short and long vowels.
I really liked your points about "different effectors but same capacity". Language surely *is* a "multimodal socially embedded phenomenon", but that doesn't contradict its also being some kind of capacity for discrete infinity.
Interestingly, the reverse is often true of so-called "ethical debates", where folks often agree on the abstractions and the morality but just contest empirical facts and statistics/surveys etc., and they rarely reach any actual ethical discussion.
Some debates at conferences between the “language network nihilists” and more traditional neuropsychology researchers often end up with “we actually don’t disagree on the facts, just the nomenclature”.
I totally agree with you! We should be pluralistic and embrace complexity whilst also being able to push our favorite abstractions towards whichever causal-mechanistic basis we have evidence for.
Extremely cool paper 😎
All elementary functions from a single binary operator
arxiv.org/abs/2603.21852
Cortex is rhythmic: brain rhythms coordinate over large distances. The strongest phase organization can span 8–16 cm of cortex.
The dominance of large-scale phase dynamics in human cortex, from delta to gamma
doi.org/10.7554/eLif...
#neuroscience