✨ELOQUENCE at Interspeech 2025
At this year’s Interspeech, ELOQUENCE showcased cutting-edge research advancing multilingual, trustworthy, and inclusive AI.
Read more in our latest blog 👉 eloquenceai.eu/eloquence-at...
#ELOQUENCEAI #Interspeech2025 #SpeechTechnology #TrustworthyAI #MultilingualAI
Sydney opera house with night-time Sydney city skyline behind. Main text is 'Sign up for Interspeech 2026' #interspeech2026 interspeech2026.org, with sponsor logos below image.
Now that #interspeech2025 has wrapped up, #interspeech2026 is at the helm of this account. Stay tuned for emerging info, and remember to register your interest at interspeech2026.org/en-AU to receive updates!
📆 Sydney – 27 September – 1 October 2026
What happens when you say:
“I want a horror -- comedy -- movie”? 🎥
That slip of the tongue can confuse recommender systems.
Our INTERSPEECH 2025 paper shows some LLMs handle it better than others.
📄 mariateleki.github.io/pdf/HorrorCo...
#INTERSPEECH2025 #ConversationalAI #RecSys
Our pick of the week by Marco Gaido: "Context-Driven Dynamic #Pruning for Large #Speech #Foundation Models" by Masao Someki, Shikhar Bharadwaj, Atharva Anand Joshi, Chyi-Jiunn Lin, Jinchuan Tian, Jee-weon Jung, @shinjiw.bsky.social, et al. #INTERSPEECH2025.
arxiv.org/abs/2505.18860
Pol Pastells and Javier Román, from the #CLiC research group, presented the SCRIBAL project this August at #Interspeech2025, in an edition dedicated to fair and inclusive speech science and technology ⚖️🌐💻
More information 🔗 linguistica.ub.edu/el-clic-part...
Interspeech 2025 poster on Quantifying and reducing speaker heterogeneity within the Common Voice Corpus
🗣️Mozilla Common Voice users!🗣️
Important notice: the client ID does not always correspond to a single speaker ID! Every so often, a single client ID contains more than one speaker’s voice. Our #Interspeech2025 paper examines the extent of this problem and proposes a solution
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum
✨ Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?
At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!
📄 arxiv.org/abs/2506.00981
⬇️
Congrats to @shinjiw.bsky.social and his team for their Best Student Paper award at #Interspeech2025!
x.com/i/status/195...
Video of Prof Takayuki Arai with his vocal tract models at #Interspeech2025
Love this analogue demonstration.
I finished my presentation. Thank you for attending the session and discussion! #Interspeech2025
It's been a great #interspeech2025!
I presented a TTS-for-ASR paper:
www.isca-archive.org/interspeech_...
And one on prosody reps: www.isca-archive.org/interspeech_...
There were many interesting questions & comments - if you have more and didn't get the chance feel free to send me a message.
On behalf of Mina Serajian and colleagues, I had the honor of presenting their poster at #Interspeech2025 on Farsi VC.
As a PhD focusing on Frisian ASR, I found it a nice opportunity to network and to see how advances in other LR languages face similar challenges and offer insights for Frisian.
What a great conference #Interspeech2025! There is still time to stop by our booth and grab a limited-edition TIMIT word poetry magnet. Also don’t miss our colleague’s oral session on TELVID: A multilingual, multi-modal corpus for speaker recognition at 13:30, A04, Port 1A @interspeech.bsky.social
Empty chairs
We are living in a time (again) when not all researchers are free to travel to international conferences. Thanks to everybody who stepped in (maybe last-minute) to present work on behalf of the original authors who could not attend #interspeech2025!
I’ll be presenting this tomorrow at 8:50 at #interspeech2025, come by if you’re interested in prosodic representations!
Banquet at Stadshaven Brouwerij & Gastropub🍻🎸🥁🎺🎹⛴️ #Interspeech2025
Thank you to everyone who stopped by, I’m grateful for all the feedback and interesting questions #interspeech2025
Are lifetime changes in mean f0 (high or low pitched voice) of female speakers due to hormonal changes or age? It's not hormones, according to a study presented at #interspeech2025 by Melanie Weirich and Adrian Simpson from Friedrich-Schiller-University Jena. www.isca-archive.org/interspeech_...
Good morning #Interspeech2025 Stop by our booth during the coffee breaks today to say hello. Also don't miss today's special session co-organized by LDC on Challenges in Speech Collection, Curation and Annotation in two parts beginning at 13:30, Dock 15. @interspeech.bsky.social
Daniel Duran, Leonie Schade, Petra Wagner, Jana Eichmann at the conference.
Phoneticians from Bielefeld University at #interspeech2025
Had such a great time presenting our tutorial on Interpretability Techniques for Speech Models at #Interspeech2025! 🔍
For anyone looking for an introduction to the topic, we've now uploaded all materials to the website: interpretingdl.github.io/speech-inter...