🚨🚨JOB ALERT🚨🚨
I'm hiring a cogsci/philosophy/compneuro postdoc at @ucl.ac.uk @uclbrainscience.bsky.social @uclpals.bsky.social!
www.jobs.ac.uk/job/DRH486/p...
Come to London & work on frameworks for "testing" for consciousness using Bayesian belief updating & latent variable modeling.
Pls share!
Posts by Ladislas Nalborczyk
Diagram showing a speaker and listener interacting with each other. The speaker predicts they will say "head", but actually produces "had". The resulting sensory prediction error is minimised by active inference - changing speech movements to correct for this error. The listener predicts they will hear "head" but hears "had" and performs perceptual inference to minimise their sensory prediction error - updating their predictions during perception.
I'm pleased to share this excellent paper from @abbiebradshawphd.bsky.social with @clarepress.bsky.social that presents an active inference account of speech motor control doi.org/10.3758/s134...
Open call for a permanent MEG Lab Manager position for our new MEGIN scanner at CIMeC Trento.
You have extensive MEG experience and a degree in physics, engineering, or neuroscience?
Apply here lavoraconnoi.unitn.it/bando-pta/co...
Deadline 29th April!
p.s. the call is in Italian, but foreigners are welcome!! 🌎
New paper that merits a read (I'm totally unbiased... not). Simple, straightforward, impactful message. Prediction à la LLM is nice. Constituent-constrained prediction is nicer. @jiajiezou.bsky.social and Nai Ding show brain, behavioral, MEG, and ECoG data.
www.nature.com/articles/s41... #neuroskyence
We're happy to release NeuralSet: a simple, fast, scalable package for Neuro-AI
Supports:
🧠 fMRI, EEG, MEG, iEEG, spikes… preprocessing
💬 text 🔊 audio ▶️ video 🏞️ image… embeddings
📦 pip install neuralset
🔍 facebookresearch.github.io/neuroai/neur...
📄 kingjr.github.io/files/neural...
🧵 Details👇
Excellent review by Pearson and team on how imagery is implemented through modulatory rather than excitatory top-down signals to visual cortex: pubmed.ncbi.nlm.nih.gov/41973795/
figure 1 from the paper linked in OP - similar prevalence of self-reported imagery scores across auditory and visual domains
new preprint from PhD student Gage Quigley-Tump, reporting a survey of 200,000 ppl on themusiclab.org about auditory imagery or the lack thereof ('anauralia', the auditory version of aphantasia)
osf.io/cm85z
some findings:
(1) self-reported imagery ability similar across auditory & visual domains
To accompany my textbook (Computational Foundations of Cognitive Neuroscience) and the class I taught this semester, I'm open-sourcing my lecture slides:
gershmanlab.com/lectures.html
I'll continue to update these as I improve them.
New preprint out by @grassocamille.bsky.social, @lnalborczyk.bsky.social and @virginievanw.bsky.social! Check it out if you're interested in the mental timeline, neural geometry analysis, and duration encoding!
shorturl.at/ABUeu
@cea.fr @inserm.fr @univparissaclay.bsky.social @unicog.bsky.social
Hiring: Neuroscience roles at the International Brain Laboratory. We’re looking for:
Neuroscience Community Engineer
Neuroscience Research Software Engineer / Data Scientist
All applications must be submitted by May 8th.
Flexible US/Europe locations.
www.internationalbrainlab.com/opportunitie...
If you analyse time-resolved data (M/EEG, iEEG, pupillometry, force recordings…) and feel limited by cluster-based permutation tests (CBPTs), especially when trying to determine when an effect starts or ends, you may want to try our new R package: lnalborczyk.github.io/neurogam/
#rstats #brms #EEG
🚨 Post-doc Job ALERT! Interested in using AI to better understand how the brain enjoys music? Or how auditory processing changes in Hearing Loss? Want to eat your weight in 🥖, 🧀, and 🍷? Click on this! #neurojobs, #blackinneuro, #neuroskyence, #psychjobs
research.pasteur.fr/en/job/postd...
new collaborative paper! Speech is defined by theta-gamma coupled acoustic rhythms, mapped onto segregated populations in human early auditory cortex doi.org/10.7554/eLif...
@davidpoeppel.bsky.social @luc-arnal.bsky.social @brungio.bsky.social @jremygiroud.bsky.social @ilcb.bsky.social
🧠 the Digital Brain Project is now live:
$5M total · up to $500k per selected team
Let's open-source the modeling of human brain activity!
➡️Apply on: digitalbrainproject.org
Tempo comparison across scales, taxa, modalities, and media. Top left: Spectrogram of cricket(s) chirping for 1 min. Top right: Spectrogram of nearby fireflies flashing for 1 min (N = 21). The colorbars in both heatmaps correspond to Power/frequency (dB/Hz). Bottom: Typical tempos at which different animals signal vs. their respective mean body weights on a logarithmic scale (N = 24). The plot consists of six main groups: insects, amphibians, birds, fish, crustaceans (these last four in an overlapping region due to similar weights—note that the labels here don’t necessarily correspond to specific points as the species are mixed), and mammals. The icons (light bulb, speaker, and a moving human) represent the form of the signal (light, sound, or gesture). Note that the signals are mostly transmitted through air, with two examples through water (both fish, written in blue).
Do animals have a favored tempo for communicating with each other? This study reveals a hotspot of 0.5-4 Hz for #communication across distinct species & modalities, hypothesizing that this may be driven by biophysical commonalities of the receivers' #neurons @plosbiology.org 🧪 plos.io/4ccWuhh
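As a toy illustration of what a "favored tempo" means (a sketch, not the paper's actual analysis pipeline): the dominant rhythm of a repetitive signal like a cricket chirp can be estimated by picking the spectral peak of its amplitude envelope, restricted to the 0.5-4 Hz band the study highlights. The signal here is synthetic; the 2 Hz rhythm is an assumed example value.

```python
import numpy as np

# Toy example: estimate the dominant signalling tempo of a rhythmic
# signal by locating the spectral peak of its amplitude envelope.
fs = 100.0                      # envelope sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # one minute of synthetic "chirping"
tempo_true = 2.0                # a 2 Hz rhythm, inside the 0.5-4 Hz hotspot
envelope = 1 + np.cos(2 * np.pi * tempo_true * t)

# FFT of the demeaned envelope; peak frequency = signalling tempo
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)

# Restrict the search to the 0.5-4 Hz band reported in the study
band = (freqs >= 0.5) & (freqs <= 4.0)
tempo_est = freqs[band][np.argmax(spectrum[band])]
print(round(tempo_est, 2))      # prints 2.0
```

Real recordings would first need an envelope extraction step (e.g. rectification plus low-pass filtering), but the peak-picking logic is the same.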
Beautiful paper by Tingting Wu and @qing-yu.bsky.social suggesting reduced top-down influence from IPS to EVC during imagery and VWM maintenance in aphantasia: www.biorxiv.org/content/10.6...
"Burst-related potentials as a temporal anchor for cognition"
#preprint #neuroskyence
by M.C. Schuma, @ayeletlandau.bsky.social I. Diester, @memorycontrol.bsky.social & G. Karvat
osf.io/preprints/ps...
🚨 Announcing another edition of the Metacognitive Science satellite in NYC, August 2nd 2026 (day before CCN) 🧠 🧪
Abstract submission is now open and closes May 15th!
metacognitivescience.org
Co-organised with @meganakpeters.bsky.social @luciecharlesneuro.bsky.social @dobyrahnev.bsky.social
Our work on how neural circuits in the cerebellum encode prior probabilities led by Julius Koppen is out now in Nature Neuroscience www.nature.com/articles/s41...
Big thanks to Julius Koppen & the whole team! And dedicated to all of us who found inspiration in Bayesian theories of the brain!
Assistant/Associate Professor position on Computational Neuroscience of Language opening at the University of Geneva, in collaboration with the National Center for Competence in Research "Evolving Language".
Full details on the position and how to apply: jobs.unige.ch/www/wd_porta....
OPEN POSTDOC position (part of @erc.europa.eu Consolidator DYNALANG)
We build mathematical & computational models of neural dynamics using insights from formal linguistics + ML.
Seeking theory-driven researchers w/ interests in language, neural dynamics, & math/comp neuroscience.
Apply here: tinyurl.com/55exdpse
Happy to share our new preprint:
Uncovering the representational geometry of durations
Is time represented along a single mental timeline? We combine behaviour + EEG to show that duration is organised in a richer, multidimensional space.
w/ @lnalborczyk.bsky.social & @virginievanw.bsky.social
Some of the same *single neurons* in VTC are activated when seeing and imagining the same things!
www.sciencenews.org/article/seei...
New paper in Imaging Neuroscience by Yuanyuan Weng, Jelmer P. Borst, and Elkan G. Akyürek:
Sustained alpha oscillations serve attentional prioritization in working memory, not maintenance
doi.org/10.1162/IMAG...
I’m hiring an 18-month postdoc to work on physics-informed machine learning for acoustic-articulatory speech inversion at
@phoneticslab.bsky.social
🗓️ Deadline: Friday 10 April.
🔗 More info & applications: hr-jobs.lancs.ac.uk/Vacancy.aspx...
📣 Please share with anyone who might be a good fit!
Interested in applying for an MSCA Postdoctoral Fellowship 2026?
I'd be very happy to support postdoctoral researchers in co-developing a proposal in Aix-en-Provence, France, on the cognitive and neural mechanisms of inner speech.
marie-sklodowska-curie-actions.ec.europa.eu/actions/post...
Graphic announcing the MSCA Postdoctoral Fellowships 2026 call. It shows the opening date (9 April 2026), closing date (9 September 2026), and a budget of €399.05 million. The design features scientific visuals such as cells, a leaf, and lab elements on a dark background, with the European Commission logo at the bottom.
Big opportunities for researchers 🌍
We are investing nearly €400 million to help researchers share their work and collaborate with the best scientific teams across the EU.
The 2026 Marie Skłodowska-Curie Actions Postdoctoral Fellowships are now open.
More: link.europa.eu/PNxpxw
If you’re curious to learn more about how we build our sense of time, without too many technicalities, here’s a nice read on our recent work!
www.earth.com/news/how-the...
🔥 We're very pleased to release our latest study 🧠: "Temporal structure of the language hierarchy within small cortical patches"
Paper → arxiv.org/abs/2604.03021
🧵 Summary thread below: 1/7
Using time-resolved EEG/MEG decoding?🧠 Here’s a new approach!
No feature engineering (it decodes from raw signals), yet it captures info that standard decoding often misses (oscillatory/aperiodic activity, connectivity).
Lightweight, INTERPRETABLE, and easy to use. (1/6)
www.biorxiv.org/content/10.6...