
Posts by Lars Meyer

Post image

The Obleser lab will be hiring soon!

New postdoc (fully funded) and
new PhD or part-time postdoc position (soft-money funded).

Spread the word. Start in Sept/Oct.
Watch out for official announcements!
Please be in touch.

auditorycognition.com
obleserlab.com
hoerhanse.de
lemmi.uni-luebeck.de

6 days ago
Postdoctoral Position in AI, Machine Learning and Auditory Neuroscience - Research Paris, France | Institut de l’Audition (Institut Pasteur)
Duration: 24 months (flexible start)
Supervisor: Keith Doelling (INSERM)
About the position: We are recruiting a postdoctoral researcher to focus...

🚨 Post-doc Job ALERT! Interested in using AI to better understand how the brain enjoys music? Or how auditory processing changes in Hearing Loss? Want to eat your weight in 🥖, 🧀, and 🍷? Click on this! #neurojobs, #blackinneuro, #neuroskyence, #psychjobs
research.pasteur.fr/en/job/postd...

3 weeks ago
MEG – LanguageCycles

John T. Hale and Team Language Cycles are searching for participants from 25 languages for The Little Prince MEG study. Get in touch via languagecycles.com/home/tlpp/. Eight years and seven rejected proposals in the making, now generously funded by the Johns Hopkins University and the @dfg.de

2 weeks ago
More than words: Effects of grammaticality and lexical surprisal in self-paced reading
Language comprehension requires the integration of information from a wide variety of sources, including sensory input and memory. The present study c…

New paper! 🥳

When reading a sentence, do you rely on probabilistic or grammatical relations between words, or both? To find out, we designed an experiment that contrasts these two pressures. 🧵

in Cognition with @andreaeyleen.bsky.social & Antje S. Meyer
www.sciencedirect.com/science/arti...

2 weeks ago
Video

Every PI

2 weeks ago
Post image

profmusgrave.github.io/anotherday/

3 weeks ago
Bridging the neural synchronization to linguistic structures and natural speech comprehension

Speech comprehension involves the inference of abstract information from continuous acoustic signals. Prior work suggests that electrophysiological activity is synchronized with abstract linguistic structures (phrases and sentences) during the processing of isochronous syllable sequences. It is yet unclear whether this prior evidence generalizes to natural speech comprehension, which requires the flexible processing of continuous speech, where syllables and other types of linguistic units are anisochronous. Our magnetoencephalography experiment investigated neural synchronization to acoustic (syllables) and abstract units (phrases and sentences) using continuous speech ranging from artificial isochronous to more natural anisochronous. We find that neural synchronization to phrases and sentences, but not syllables, is resilient to naturalistic anisochrony. This suggests that linguistic structure processing reflects endogenous inferences that are fundamentally distinct from the exogenous processing of syllables driven by speech acoustics. Lateralization and linear regression results extend this functional dissociation as hemispheric asymmetry: stimulus-independent leftward lateralization for linguistic structure processing but stimulus-driven rightward lateralization (or bilaterality) for both syllable and acoustic processing. Our findings provide a more realistic characterization of the flexible neural mechanisms supporting the efficient comprehension of natural speech.

Competing Interest Statement: The authors have declared no competing interest.

New preprint! 🧠

We show that neural synchronization to abstract linguistic structures is independent from acoustic processing, even with stimuli mirroring natural speech!

w/ the dream team @diliberg.bsky.social @nicolaml.bsky.social @languagecycles.bsky.social

🔗 doi.org/10.64898/202...

3 weeks ago
Questionnaire | Page 1

Are you a speech therapist in Germany? If you have 10 minutes, could you please help our research and fill out this questionnaire? 💫

survey.ifkw.lmu.de/EYELED/

@alanlangus.bsky.social

3 weeks ago
Screenshot of Futures of Language landing page at https://futuresoflanguage.org, with tagline "Human interaction is too important to be left into the hands of merchants of hype."

Text continues: "We study artisanal and artificial ways of languaging to better understand language + technology, and to reimagine our linguistic futures."

Features a tricolour image of strands representing the three lines of work in the project.


Screenshot of Futures of Language library page with two recent posts. One by Samira Ibnelkaïd: Sara Ahmed & the Killjoy Ethics. One by Mark Dingemanse: The Stories we weave around "Technology".


Screenshot of lower part of Futures of Language landing page showing a quotation by Prof. Enongo Lumumba-Kasongo: "Creativity is so much about trying out new weird things - and AI is so much about averaging. It's the people invested in playing in that space of the anomaly and playing with the unexpected who will continue to thrive."

Page also shows some logos of the home base and funder of the Futures of Language project.


We've revamped the Futures of Language website! Read about the project, team, papers, and other news at futuresoflanguage.org

Also, our weekly updates are collated in the Futures Library, a growing collection of short pieces on ideas & people that drive the project: futuresoflanguage.org/library/

1 month ago
Less is more: Probabilistic reduction is best explained by small-scale predictability measures
The primary research questions of this paper center on defining the amount of context that is necessary and/or appropriate when investigating the relationship between language model probabilities and ...

We wrote a thing -- showing you don't need LLMs to model language production dynamics like the tendency for speakers to reduce predictable words. All you have to do is better model how speech rate varies depending on where a word is and how long the utterance is. arxiv.org/abs/2512.23659

3 months ago
PhD Student (gn*), Department of Phoniatrics and Pediatric Audiology

Interested in a PhD intersecting clinical & fundamental language neuroscience? Help us uncover the neurological underpinnings of speech and language disorders! Work with me at UK Münster on a DFG-funded project, together w. Chris Kell & Katrin Neumann. Apply at jobs-sf.ukmuenster.de/job/UKM-PhD-...

2 months ago

another great paper from @mh-christiansen.bsky.social, showing that non-constituents* can be primed

It's more evidence that traditional linguists were mistaken to believe memory was in short supply:
Human memory is compressed, clustered, implicit and vast

2 months ago

We are humbled!!! Thank you, Xinchi Yu!!!

2 months ago
A language processing time window of around three seconds | Nature Reviews Psychology

It was such a privilege to introduce recent cool work from Lena Henke and Lars Meyer (@languagecycles.bsky.social) on time windows and language processing in a "Journal Club" piece on Nature Reviews Psychology @natrevpsychol.nature.com! View-only link: rdcu.be/e1xaY

2 months ago

Go work with Benedikt, his research is legit!

2 months ago
AI slop published on your watch: very bad look for HSSComms and Springer/Nature

Dear editors,

I have failed to locate contact information for most of the academic editors of Humanities and Social Sciences Communications; I trust that you will forward this message to them.

I want to note that AI slop is being published on your watch in the journal you edit. This paper, out last week (Al-Jarrah 2026), is full of inaccurate claims and includes countless hallucinated references. Even a cursory look at the bibliography shows that at least 10 and probably many more references are bullshit pure and simple; the simplest explanation is that they were confabulated by generative AI, which sheds doubt on the quality of the manuscript as a whole and on the review and editorial processes at your journal.

It is a great lapse of editorial judgement to let this kind of obvious drivel pass, and I am warning my colleagues at MPI and Radboud to avoid your journal until clear and unambiguous action is taken. You may also want to take note that many people are finding out about this; the online discussion of this paper and of the journal's failing standards is something I hope that Springer Nature cares about.

In my opinion the paper does not pass even the most minimal quality assurance checks and runs fully against the COPE guidelines on publication ethics. You may want to hold the author accountable for this; I think the only reasonable course is retraction.

But the scholarly community also holds the journal accountable. By letting AI slop through, your journal is polluting the information ecology of scientific publishing. Amidst a rising tide of synthetic text, scholarly publishing, with its tradition of human oversight and strong peer review, should be one of the last stalwarts to defend the integrity of our research. I look forward to your response and to finding out what decisive action you are taking.


Re: the AI slop paper shared by @thomaspellard.bsky.social and @lameensouag.bsky.social, I wrote to the editors — will update when I get a reply, and will be following closely what they do.

Key point is that we should hold the *journal* accountable for this mess

I have a few predictions...

1/n

3 months ago

🧠 How strong is speech decoding from MEG signals? Much stronger during speech production (73% accuracy) than comprehension (~51%) (& Delta & Theta bands carry most relevant information). More discussion in our recent decoding study: doi.org/10.1016/j.cs...

3 months ago

If I could do a second PhD, this would be it. Join John!

3 months ago

This paper had a pretty shocking headline result (40% of voxels!), so I dug into it, and I think it is wrong. Essentially: they compare two noisy measures and find that about 40% of voxels have different sign between the two. I think this is just noise!

3 months ago
Two fully funded PhD positions in Natural Language Processing at University of Leipzig | European Laboratory for Learning and Intelligent Systems

🏹 Job alert: Two fully funded PhD positions in Natural Language Processing at University of Leipzig

📍 Leipzig 🇩🇪
📅 Apply by Jan 15th
🔗 ellis.eu/research/jobs/2025-12-16...

4 months ago

Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!

4 months ago
Vacancies

#wearehiring
We have vacancies @jacobscenteruzh.bsky.social:
* Staff member (Stabmitarbeiter:in), Swiss EdLab, 80–100%
* Research associate (Wissenschaftliche:r Mitarbeiter:in), Swiss EdLab (2 positions), 80–100%
* Staff position on the z-proso study (20–50%)
Visit: www.jacobscenter.uzh.ch/en/jobs.html

4 months ago
Post image

Happy to share the advent of the new *Journal of Experimental Pragmatics*!

- Diamond Open Access
- innovative, transparent bottom-up reviewing process

Go community! Let's make it a success story!

@irastamon.bsky.social @richardbreheny.bsky.social

4 months ago

Very cool stuff! I am very much enjoying the Neon trackers, and it's wonderful that they show overlap with stationary trackers.

6 months ago

Job announcement!

Research Fellow (65% TV-L 13), to work on co-singing gesture in beatboxing, based at the University of Cologne. Application deadline: 12 January, 2026

Feel free to contact me with any questions about the position.

jobportal.uni-koeln.de/ausschreibun...

4 months ago
Substitute Professorship in Computational Linguistics (m/f/d, W3, 100%)

Come work with us!!

Two full substitute professorships for Computational Linguistics (1 year) and General Linguistics (1.5 years) at the University of Tübingen. @unituebingen.bsky.social

uni-tuebingen.de/universitaet...

uni-tuebingen.de/universitaet...

4 months ago

Thank you, Martin, for all the guidance and support throughout those years!

4 months ago
Lars Meyer receives ERC Consolidator Grant from the EU

Lars Meyer receives #ERC Consolidator Grant from @erc.europa.eu 🥳
For his project "Language in Balance: an Imprint of Brain Electrophysiology? (BALANG)", Lars Meyer @languagecycles.bsky.social receives funding of up to 2 million euros over the next 5 years. Huge congratulations! tinyurl.com/4a5y78nz

4 months ago
Lars Meyer receives ERC Consolidator Grant from the EU

www.cbs.mpg.de/2425446/2025...

4 months ago
Behavioral and Neural Effects of Proactive Control Adjustments on a Trial-by-Trial Basis

New @jocnforum.bsky.social post by Dariusz Asanowicz, replying to @bradpostle.bsky.social and Chunyue Teng: “Behavioral and Neural Effects of Proactive Control Adjustments on a Trial-by-Trial Basis”

doi.org/10.21428/8e6...

4 months ago