
Posts by Ladislas Nalborczyk


🚨🚨JOB ALERT🚨🚨
I'm hiring a cogsci/philosophy/compneuro postdoc at @ucl.ac.uk @uclbrainscience.bsky.social @uclpals.bsky.social!

www.jobs.ac.uk/job/DRH486/p...

Come to London & work on frameworks for "testing" for consciousness using Bayesian belief updating & latent variable modeling.

Pls share!

3 hours ago
Diagram showing a speaker and listener interacting with each other. 

The speaker predicts they will say "head", but actually produces "had". The resulting sensory prediction error is minimised by active inference - changing speech movements to correct for this error.

The listener predicts they will hear "head" but hears "had" and performs perceptual inference to minimise their sensory prediction error - updating their predictions during perception.


I'm pleased to share this excellent paper from @abbiebradshawphd.bsky.social with @clarepress.bsky.social that presents an active inference account of speech motor control doi.org/10.3758/s134...
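The two error-minimisation routes in the diagram can be sketched numerically. This is a toy illustration only, not code from the paper: both agents minimise the same sensory prediction error, the listener by updating the prediction (perceptual inference) and the speaker by changing the output (active inference).

```python
# Toy sketch (illustrative, not from the paper): predicted vs observed
# stand in for, e.g., a formant value for "head" vs "had".
predicted, observed = 1.0, 0.0

# Perceptual inference (listener): move the prediction toward the input.
belief = predicted
for _ in range(100):
    belief -= 0.1 * (belief - observed)  # gradient step on 0.5 * (belief - observed)**2

# Active inference (speaker): change the motor output until the input
# matches the prediction.
action = observed
for _ in range(100):
    action -= 0.1 * (action - predicted)

print(round(belief, 3), round(action, 3))  # → 0.0 1.0
```

Same error signal, opposite fixes: the listener's belief converges to what was heard, while the speaker's action converges to what was intended.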

7 hours ago

Open call for a permanent MEG Lab Manager position for our new MEGIN scanner at CIMeC Trento.

If you have extensive MEG experience and a degree in physics, engineering, or neuroscience:

Apply here lavoraconnoi.unitn.it/bando-pta/co...

Deadline 29th April!

p.s. the call is in Italian, but international candidates are welcome!! 🌎

1 day ago
Constituent-constrained word prediction during language comprehension - Nature Neuroscience Zou et al. reveal a key difference between human brains and large language models (LLMs). While LLMs are optimized to predict the next word, the human brain modulates prediction efficiency by strategi...

New paper that merits a read (I'm totally unbiased... not). Simple, straightforward, impactful message. Prediction à la LLM is nice. Constituent-constrained prediction is nicer. @jiajiezou.bsky.social and Nai Ding show brain, behavioral, MEG, ECoG data.
www.nature.com/articles/s41... #neuroskyence

1 day ago

We're happy to release NeuralSet: a simple, fast, scalable package for Neuro-AI

Supports:
🧠 fMRI, EEG, MEG, iEEG, spikes… preprocessing
💬 text 🔊 audio ▶️ video 🏞️ image… embeddings

📦 pip install neuralset
🔍 facebookresearch.github.io/neuroai/neur...
📄 kingjr.github.io/files/neural...

🧵 Details👇

1 day ago
Spiking the mind: Rethinking the role of cortical feedback in visual mental imagery - PubMed Recent research has revealed similarities between visual mental imagery and visual perception. Visual imagery is supported by cortical feedback involving multiple visual areas, including the primary v...

Excellent review by Pearson and team on how imagery is implemented through modulatory rather than excitatory top-down signals to visual cortex: pubmed.ncbi.nlm.nih.gov/41973795/

1 day ago
figure 1 from the paper linked in OP - similar prevalence of self-reported imagery scores across auditory and visual domains


new preprint from PhD student Gage Quigley-Tump, reporting a survey of 200,000 people on themusiclab.org about auditory imagery or the lack thereof ('anauralia', the auditory version of aphantasia)

osf.io/cm85z

some findings:

(1) self-reported imagery ability similar across auditory & visual domains

3 days ago

To accompany my textbook (Computational Foundations of Cognitive Neuroscience) and the class I taught this semester, I'm open-sourcing my lecture slides:
gershmanlab.com/lectures.html
I'll continue to update these as I improve them.

5 days ago

New preprint out by @grassocamille.bsky.social, @lnalborczyk.bsky.social and @virginievanw.bsky.social! Check it out if you're interested in the mental timeline, neural geometry analysis, and duration encoding!
shorturl.at/ABUeu
@cea.fr @inserm.fr @univparissaclay.bsky.social @unicog.bsky.social

5 days ago

Hiring: Neuroscience roles at the International Brain Laboratory. We’re looking for:
Neuroscience Community Engineer
Neuroscience Research Software Engineer / Data Scientist
All applications must be submitted by May 8th.
Flexible US/Europe locations.
www.internationalbrainlab.com/opportunitie...

6 days ago
Modelling time-resolved electrophysiological data with Bayesian generalised additive multilevel models Providing utility functions for fitting Bayesian generalised additive multilevel models (BGAMMs) to time-resolved data (e.g., M/EEG, pupillometry, mouse-tracking, etc) and identifying clusters.

If you analyse time-resolved data (M/EEG, iEEG, pupillometry, force recordings…) and feel limited by cluster-based permutation tests (CBPTs), especially when trying to determine when an effect starts or ends, you may want to try our new R package: lnalborczyk.github.io/neurogam/
#rstats #brms #EEG
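For readers unfamiliar with the CBPTs the post contrasts against, here is a minimal sketch of a cluster-mass permutation test on simulated 1D data. This is purely illustrative Python, not the neurogam package (which is R/brms-based); all names and parameters here are ours.

```python
# Toy cluster-mass permutation test on simulated time-resolved data.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_time = 20, 100
data = rng.normal(0, 1, (n_subj, n_time))
data[:, 40:60] += 1.0  # inject a true effect between samples 40 and 60

def cluster_mass(x, thresh=2.0):
    """Largest cluster mass of contiguous supra-threshold t-values."""
    t = x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)))
    best, mass = 0.0, 0.0
    for tv in t:
        mass = mass + abs(tv) if abs(tv) > thresh else 0.0
        best = max(best, mass)
    return best

observed = cluster_mass(data)

# Null distribution: randomly flip the sign of each subject's data.
null = [cluster_mass(data * rng.choice([-1, 1], (n_subj, 1)))
        for _ in range(500)]
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(f"cluster-mass p = {p:.3f}")
```

Note the limitation the post alludes to: the test licenses a claim about the cluster as a whole, not about exactly when the effect starts or ends, which is what the model-based approach in the package targets.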

4 months ago
Postdoctoral Position in AI, Machine Learning and Auditory Neuroscience - Research Paris, France | Institut de l’Audition (Institut Pasteur). Duration: 24 months (flexible start). Supervisor: Keith Doelling (INSERM). About the position: We are recruiting a postdoctoral researcher to focus...

🚨 Post-doc Job ALERT! Interested in using AI to better understand how the brain enjoys music? Or how auditory processing changes in Hearing Loss? Want to eat your weight in 🥖, 🧀, and 🍷? Click on this! #neurojobs, #blackinneuro, #neuroskyence, #psychjobs
research.pasteur.fr/en/job/postd...

3 weeks ago

new collaborative paper! Speech is defined by theta-gamma coupled acoustic rhythms, mapped onto segregated populations in human early auditory cortex doi.org/10.7554/eLif...
@davidpoeppel.bsky.social @luc-arnal.bsky.social @brungio.bsky.social @jremygiroud.bsky.social @ilcb.bsky.social
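For readers unfamiliar with theta-gamma coupling, a synthetic signal makes the idea concrete. This is an illustrative sketch only, not data or code from the paper: gamma (60 Hz) amplitude is locked to the phase of a slower theta (5 Hz) rhythm, and the gamma envelope therefore tracks theta.

```python
# Toy theta-gamma coupled signal (illustrative, not from the paper).
import numpy as np
from scipy.signal import hilbert

fs = 1000  # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
theta = np.sin(2 * np.pi * 5 * t)                 # 5 Hz theta rhythm
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)  # theta-modulated 60 Hz gamma

# Phase-amplitude coupling: the gamma envelope should track theta.
envelope = np.abs(hilbert(gamma))
coupling = np.corrcoef(theta, envelope)[0, 1]
print(f"theta-gamma envelope correlation: {coupling:.2f}")
```

In the speech case the paper describes, the slow (theta) rhythm corresponds roughly to syllable-rate energy fluctuations and the fast (gamma) content to finer acoustic structure.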

1 week ago

🧠 the Digital Brain Project is now live:

$5M total · up to $500k per selected team

Let's open-source the modeling of human brain activity!

➡️Apply on: digitalbrainproject.org

1 week ago
Tempo comparison across scales, taxa, modalities, and media. Top left: Spectrogram of cricket(s) chirping for 1 min. Top right: Spectrogram of nearby fireflies flashing for 1 min (N = 21). The colorbars in both heatmaps correspond to Power/frequency (dB/Hz). Bottom: Typical tempos at which different animals signal vs. their respective mean body weights on a logarithmic scale (N = 24). The plot consists of six main groups: insects, amphibians, birds, fish, crustaceans (these last four in an overlapping region due to similar weights—note that the labels here don’t necessarily correspond to specific points as the species are mixed), and mammals. The icons (light bulb, speaker, and a moving human) represent the form of the signal (light, sound, or gesture). Note that the signals are mostly transmitted through air, with two examples through water (both fish, written in blue).


Do animals have a favored tempo for communicating with each other? This study reveals a hotspot of 0.5-4 Hz for #communication across distinct species & modalities, hypothesizing that this may be driven by biophysical commonalities of the receivers' #neurons @plosbiology.org 🧪 plos.io/4ccWuhh

1 week ago

Beautiful paper by Tingting Wu and @qing-yu.bsky.social suggesting reduced top-down influence from IPS to EVC during imagery and VWM maintenance in aphantasia: www.biorxiv.org/content/10.6...

1 week ago

"Burst-related potentials as a temporal anchor for cognition"
#preprint #neuroskyence

by M.C. Schuma, @ayeletlandau.bsky.social I. Diester, @memorycontrol.bsky.social & G. Karvat
osf.io/preprints/ps...

1 week ago
This satellite meeting showcases the latest research on metacognition, in the form of both keynotes and short oral presentations.

🚨 Announcing another edition of the Metacognitive Science satellite in NYC, August 2nd 2026 (day before CCN) 🧠 🧪

Abstract submission is now open and closes May 15th!

metacognitivescience.org

Co-organised with @meganakpeters.bsky.social @luciecharlesneuro.bsky.social @dobyrahnev.bsky.social

1 week ago
Neural circuits encode prior knowledge of temporal statistics - Nature Neuroscience This study shows that cerebellar circuits learn and encode prior probabilities of event timing. Cell-type-specific neural activity reflects environmental statistics and guides predictive motor behavio...

Our work on how neural circuits in the cerebellum encode prior probabilities led by Julius Koppen is out now in Nature Neuroscience www.nature.com/articles/s41...

Big thanks to Julius Koppen & the whole team! And dedicated to all of us who found inspiration in Bayesian theories of the brain!

1 week ago
Assistant Professor with Tenure Track or Associate Professor (6846) in Computational Neuroscience

Assistant/Associate Professor position on Computational Neuroscience of Language opening at the University of Geneva, in collaboration with the National Center for Competence in Research "Evolving Language".

Full details on the position and how to apply: jobs.unige.ch/www/wd_porta....

1 week ago
Postdoctoral Position in the Cognitive Computational Neuroscience of Language | Max Planck Institute

OPEN POSTDOC position (part of @erc.europa.eu Consolidator DYNALANG)

We build mathematical & computational models of neural dynamics using insights from formal linguistics + ML

Seeking theory-driven researchers w/ interests in language, neural dynamics, & math/comp neuroscience.

Apply here: tinyurl.com/55exdpse

1 week ago

Happy to share our new preprint:

Uncovering the representational geometry of durations

Is time represented along a single mental timeline? We combine behaviour + EEG to show that duration is organised in a richer, multidimensional space.

w/ @lnalborczyk.bsky.social & @virginievanw.bsky.social

3 weeks ago
Seeing and imagining activate some of the same brain cells By recording brain activity directly, scientists showed that imagining an object can revive parts of the neural pattern used to see it.

Some of the same *single neurons* in VTC being activated when seeing and imagining the same things!
www.sciencenews.org/article/seei...

1 week ago

New paper in Imaging Neuroscience by Yuanyuan Weng, Jelmer P. Borst, and Elkan G. Akyürek:

Sustained alpha oscillations serve attentional prioritization in working memory, not maintenance

doi.org/10.1162/IMAG...

1 week ago
Job Opportunity at Lancaster University: Senior Research Associate in Machine Learning for Speech Processing. Department: Phonetics Laboratory / Linguistics and English Language. Location: Bailrigg, Lancaster, UK. Salary: £39,906 (pro-rata if part...

I’m hiring an 18-month postdoc to work on physics-informed machine learning for acoustic-articulatory speech inversion at
@phoneticslab.bsky.social

🗓️ Deadline: Friday 10 April.

🔗 More info & applications: hr-jobs.lancs.ac.uk/Vacancy.aspx...

📣 Please share with anyone who might be a good fit!

1 month ago
Postdoctoral Fellowships The information provided on this page is a summary of the main rules and requirements for Postdoctoral Fellowships (PFs) and who can apply for them.

Interested in applying for an MSCA Postdoctoral Fellowship 2026?

I'd be very happy to support postdoctoral researchers in co-developing a proposal in Aix-en-Provence, France, on the cognitive and neural mechanisms of inner speech.

marie-sklodowska-curie-actions.ec.europa.eu/actions/post...

2 weeks ago
Graphic announcing the MSCA Postdoctoral Fellowships 2026 call. It shows the opening date (9 April 2026), closing date (9 September 2026), and a budget of €399.05 million. The design features scientific visuals such as cells, a leaf, and lab elements on a dark background, with the European Commission logo at the bottom.


Big opportunities for researchers 🌍

We are investing nearly €400 million to help researchers share their work and collaborate with the best scientific teams across the EU.

The 2026 Marie Skłodowska-Curie Actions Postdoctoral Fellowships are now open.

More: link.europa.eu/PNxpxw

1 week ago
How the human brain builds our sense of time A new study reveals the brain doesn’t rely on a single clock but builds our sense of time through multiple stages across different regions.

If you’re curious to learn more about how we build our sense of time, without too many technicalities, here’s a nice read on our recent work!

www.earth.com/news/how-the...

2 weeks ago

🔥 We're very pleased to release our latest study 🧠: "Temporal structure of the language hierarchy within small cortical patches"
Paper → arxiv.org/abs/2604.03021
🧵 Summary thread below: 1/7

2 weeks ago

Using time-resolved EEG/MEG decoding?🧠 Here’s a new approach!
No feature engineering (it decodes from raw signals), yet it captures information that standard decoding often misses (oscillatory/aperiodic activity, connectivity).
Lightweight, INTERPRETABLE, and easy to use. (1/6)
www.biorxiv.org/content/10.6...

2 weeks ago