MCML has just opened another call for their very competitive, but also really nice, fully funded PhD positions.
These positions are matched to research groups at both TUM and LMU, including my group and the other great ML and NLP groups here in Munich 😄
Posts by Michael A. Hedderich
Check out our survey at #EMNLP2025 and help build a future where low-resource languages including African languages are represented in NLP!
Paper: arxiv.org/abs/2505.21315
This is work led (in a great way) by Jesujoba Alabi, together with David Adelani and Dietrich Klakow.
Based on the analysis, we suggest future directions including:
1️⃣ Scale beyond the top-10 high-resource languages
2️⃣ Build more multicultural, native-language datasets
3️⃣ Develop African-centric LLMs
4️⃣ Focus on human-centered, application-driven NLP
Key findings include:
1️⃣ Papers have increased rapidly in the last 5 years 📈
2️⃣ Research is skewed toward certain tasks like MT and NLU
3️⃣ Language coverage is uneven, with a few languages dominating
We cover datasets, tasks, methods, and themes across 25+ venues (NLP, speech, HCI, ML), manually analyzing 884 papers for this survey.
We have 3 main goals:
1️⃣ Comprehensive Overview – Map the research landscape
2️⃣ Accessible Entry Point – Easy starting point for new researchers
3️⃣ Open Issues – Highlight gaps and challenges
Despite resource gaps, NLP research on African languages is far from dormant. Growth is fueled by community initiatives, multilingual large corpora, shared tasks, and dedicated venues, making this a great time to chart the field.
NLP research distribution across Africa by language coverage.
Excited to share that our survey paper "Charting the Landscape of African NLP: Mapping Progress and Shaping the Road Ahead", led by Jesujoba Alabi, has been accepted at #EMNLP2025! Here’s a short 🧵 about the paper.
Headed to ACL? MaiNLP & our most recent work will be there too👥📄
Come see what we’ve been working on!
Looking forward to my visit to Hamburg University and their Data Science group!
Joint work with Anyi Wang, @raoyuan.bsky.social , @florian-eichin.com , Jonas Fischer and @barbaraplank.bsky.social
Check out the paper at arxiv.org/abs/2504.158... or discuss the work with us at #ACL2025 in Vienna.
Through
📊 3 new benchmarks with ground truth
📚 evaluation on existing prompt data
🛠 demonstration studies, and
🙇 a user study
we show how Spotlight can reliably provide new insights and support users in uncovering relevant differences in bias, cultural artifacts, language style, model failures, and more.
What changes if you take the LLM prompt “Tell me a short story about Dr. Li” and replace “Dr. Li” with “Dr. Smith”?
Would you have guessed that this introduces a massive gender bias, shifting from roughly 50/50 to 99% male doctors?
In our #ACL2025 paper we present the Spotlight framework which...
uses data mining + human analysis to support users in better understanding the behavior of LLMs 🔎
We leverage token patterns to automatically distinguish random (decoding) variation from systematic differences in LLM outputs, and guide the user in their nuanced analysis.
Interpretability meets Discourse. Congratulations to
@florian-eichin.com on his first ACL paper 🎉
Want to know if your prompting is also affected by this? To address this and other issues systematically, we propose Spotlight, which uses data mining to uncover the effects of prompt and model changes (meet us at ACL to discuss).
arxiv.org/abs/2504.15815
Are you attending NAACL 2025 and interested in low-resource languages and dialects?
Then don't miss our very own @verenablaschke.bsky.social's keynote talk at the WNUT 2025 workshop on May 3rd:
Beyond “noisy” text: How (and why) to process dialect data
🌐 ☀️
noisy-text.github.io/2025/
Happy to be part of that team for almost 1/3 of that time 😀