I’m hiring an 18-month postdoc to work on physics-informed machine learning for acoustic-articulatory speech inversion at
@phoneticslab.bsky.social
🗓️ Deadline: Friday 10 April.
🔗 More info & applications: hr-jobs.lancs.ac.uk/Vacancy.aspx...
📣 Please share with anyone who might be a good fit!
Posts by Eleanor Chodroff
It's that time again!
Abstracts for CorpusPhon2 are due this Friday, March 13! Corpus phonetic and phonological studies, methodologies, descriptions, etc. are all welcome contributions. Looking forward to your submissions, and hope to see you there!
sites.google.com/view/corpusp...
CorpusPhon2 will take place on Monday, June 29, 2026 in Montréal, Québec, right after LabPhon20. Submissions are due on March 13, 2026, and our website has been updated to reflect this. Please consider submitting to our workshop!
sites.google.com/view/corpusp...
📣 Phoneticians/phonologists: CorpusPhon is happening again this year on June 29th at LabPhon in Montreal! Submissions due Friday, March 13th. Hope to see you there!
sites.google.com/view/corpusp...
🤯 Phonetic Universal Uncovered! 🎤
We analyzed over 60,000 speakers across 75 languages and confirmed a universal phonetic bias: High vowels (like /i, u/) are consistently spoken with a slightly higher pitch (F0) than low vowels (/a/).
For everyone working on the intersections between linguistic and computational research, consider submitting to the upcoming SCiL! We're very excited that it will be co-located with ACL 2026 as a workshop, and we've also received NSF funding to help cover costs.
sites.google.com/view/scil2026
Dark blue background, with six puzzle pieces on the right in various shades of blue. On the left, the text 'Interspeech Challenges' in white, with Interspeech 2026 logo above and website interspeech.org below.
Get ready to tackle one of the #Interspeech2026 Challenges! Check out the accepted Challenge Proposals 👉 interspeech2026.org/en-AU/pages/... – detailed information to come!
Deadline 4 Jan: Postdoc, variability and vowel harmony, metaphony (phonetic and psycholinguistic approaches), Potsdam (w/ A. Gafos) docs.google.com/document/d/1...
To the right, a toy koala wearing a lanyard and sitting on a lectern in front of a projector screen. To the left, a dark blue background with text 'Special Session Announcement', with Interspeech 2026 logo above and website interspeech.org below.
An extraordinary response to the call for Special Sessions for #Interspeech2026! Consider submitting your paper to one of these accepted themed sessions 👉 interspeech2026.org/en-AU/pages/... (and stay tuned for more details!)
The Spoken Language group @bcbl.bsky.social is currently recruiting for these positions:
👉PhD students (expressions of interest are welcome on a rolling basis)
👉Postdoctoral Researcher (start date: Nov '26 - Sep '27) tinyurl.com/3um3bjze
👉Research Assistant tinyurl.com/y5uebra6
Please share ☺️
Friendly PSA: we are currently in the purgatory period of continental daylight saving time changes, when Central Europe and the North American East Coast are 5 hours apart instead of the usual 6.
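For anyone scripting meeting times around this gap, here is a minimal stdlib-only Python sketch. The example dates are assumptions chosen to illustrate the window: 2025-10-28 falls after Europe's clocks changed back but before North America's, and 2025-11-15 falls after both.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def hours_apart(iso_dt: str) -> float:
    """Hours between Central European and US East Coast clocks
    at a given local wall-clock instant."""
    dt = datetime.fromisoformat(iso_dt)
    berlin = dt.replace(tzinfo=ZoneInfo("Europe/Berlin")).utcoffset()
    new_york = dt.replace(tzinfo=ZoneInfo("America/New_York")).utcoffset()
    return (berlin - new_york).total_seconds() / 3600

# During the purgatory window (EU back on standard time, US still on DST):
print(hours_apart("2025-10-28T12:00"))  # 5.0
# Once both sides have switched, the usual gap returns:
print(hours_apart("2025-11-15T12:00"))  # 6.0
```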
a peek through fall foliage to some people at the University of Pennsylvania, photo by Eric Sucar
Seeking applications from recent PhDs in neuro, psych, ling, philo, comp sci, or other cog sci discipline, for our MindCORE Fellowship.
MindCORE is an interdisciplinary effort at Penn to understand human intelligence and behavior.
Apply by Dec 1: mindcore.sas.upenn.edu/post-doctora...
Please join @mariamaly.bsky.social and me October 30th! We'll help you navigate the confusing world of peer review.
A memorial service, followed by a reception, will take place on Friday, 14 November at 5:00 PM in the Aula (KOL-G-201) of the main building of the University of Zurich (Rämistrasse 71, 8006 Zurich).
recent picture of Martin Volk
It is with deep sorrow that we bid farewell to Prof. Dr. Martin Volk, who passed away on 15 September 2025 at the age of 64 after a brief and sudden illness. www.cl.uzh.ch/en/about-us/...
Postdoc (flexible start): child language development across different populations and contexts, methods including behavioral studies, large-scale data analysis, and/or computational modeling. M. Cychosz, Linguistics, Stanford Univ. postdocs.stanford.edu/prospective/...
The differences are small but consistent in direction, supporting a biomechanical account critically tied to uniform phonetic targets across vowels. At the same time, variation in effect size across languages suggests that speakers differ in how strongly this uniformity is realized.
Our findings:
📉 Clear crosslinguistic bias—high vowels are shorter than low vowels
➡️ But no systematic difference between high front vs. high back vowels
Previous work has focused on two explanations:
🗣️Automatic accounts
👂Speaker control
As a novel contribution, we reinterpret intrinsic vowel duration as a statistical universal emerging from the competing pressures of target uniformity and enhancement.
We provide a large-scale crosslinguistic corpus analysis of intrinsic vowel duration – the observation that high vowels (like /i/ or /u/) tend to be shorter than low vowels (like /a/).
Our dataset:
✅ 60+ languages
✅ 16 language families
✅ Thousands of speakers
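For readers who want to try something similar on their own data, here is an illustrative stdlib-only sketch (not the authors' pipeline; the CSV columns are hypothetical) that computes the low-minus-high duration difference per language, where a positive value reflects the reported bias:

```python
import csv
from collections import defaultdict
from statistics import mean

def height_effect_by_language(path):
    """Per-language difference in mean duration: low vowels minus high vowels.
    Assumes a CSV with columns: language, speaker, vowel, height, duration_ms."""
    durs = defaultdict(lambda: defaultdict(list))  # language -> height -> durations
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            durs[row["language"]][row["height"]].append(float(row["duration_ms"]))
    return {
        lang: mean(heights["low"]) - mean(heights["high"])
        for lang, heights in durs.items()
        if heights["low"] and heights["high"]
    }
```

A real analysis would of course control for speaker, speech rate, and segmental context (e.g. with mixed-effects models) rather than pooling raw means.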
Excited to share our new preprint with @mzhang89.bsky.social : “A crosslinguistic corpus phonetic analysis of intrinsic vowel duration” 🎉
🔗 osf.io/preprints/ps...
📢PhD fellowships
The Spoken Language group at #BCBL (Spain) offers sponsorship for the #INPhINIT Predoctoral Fellowships
Potential PhD projects can be related to:
🗣️Speech perception
🧠Language learning
ℹ️Info about the position and application process: tinyurl.com/2kcfsjr3
📆Deadline: October 30
Thank you!!!
Does anyone have a PDF of Klatt and Cooper (1975) "Perception of segment duration in sentence contexts" that goes beyond the first two pages? (link.springer.com/chapter/10.1...)
✅Similarity scores: huggingface.co/datasets/pac...
📄Paper: www.isca-archive.org/interspeech_...
💻Code: github.com/pacscilab/CV...
💫This was joint work with @mzhang89.bsky.social, Aref Farhadipour, Annie Baker, Jiachen Ma, and Bogdan Pricop
Pairs below this threshold were more likely perceived as different speakers; pairs above it, as the same speaker. Of course there's no ground truth, so you can also choose your own threshold.
The similarity scores, paper, and code can be found at the links below.
Happy data cleaning 😊
We ran automatic speaker verification (ResNet-293 trained with multilingual VoxBlink2) to obtain similarity scores among files for each client ID. Based on previous thresholds and a perceptual evaluation, we found an optimal threshold of ~0.35–0.40 for distinguishing same- vs. different-speaker pairs.
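As a rough illustration of how such a threshold can be applied (a sketch, not the paper's code; the embeddings here are placeholders, whereas the paper's scores come from a ResNet-293 speaker-verification model), a client ID can be flagged when any pair of its recordings scores below the same-speaker threshold:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def likely_multi_speaker(embeddings, threshold=0.40):
    """Flag a client ID whose recordings contain at least one pair of
    embeddings scoring below the same-speaker threshold (~0.35-0.40)."""
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_sim(embeddings[i], embeddings[j]) < threshold:
                return True  # some pair looks like different speakers
    return False
```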
Interspeech 2025 poster on Quantifying and reducing speaker heterogeneity within the Common Voice Corpus
🗣️Mozilla Common Voice users!🗣️
Important notice: the client ID does not always correspond to a single speaker ID! Every so often, a single client ID contains more than one speaker’s voice. Our #Interspeech2025 paper examines the extent of this problem and proposes a solution