
Posts by Angelika Romanou


🎉 Re-Align is back for its 4th edition at ICLR 2026!

📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields.

📝 Tracks: Short (≤5p), Long (≤10p), Challenge (blog)

⏰ Deadline: Feb 5, 2026 for papers

🔗 representational-alignment.github.io/2026/

3 months ago

1/ 🌍 How does mixing data from hundreds of languages affect LLM training?
In our new paper "Revisiting Multilingual Data Mixtures in Language Model Pretraining" we revisit core assumptions about multilinguality using 1.1B-3B models trained on up to 400 languages.
🧵👇

4 months ago

Keynote talk:
Apertus: Democratizing Open and Compliant LLMs for Global Language Environments.

Imanol Schlag introduces Apertus, a fully open suite of LLMs with a focus on compliance, transparency, and multilingual representation training across 1000+ languages. 🌍🤖

6 months ago

1/🚨 New preprint

How do #LLMs' inner features change as they train? Using #crosscoders + a new causal metric, we map when features appear, strengthen, or fade across checkpoints, opening a new lens on training dynamics beyond loss curves & benchmarks.

#interpretability

6 months ago

Proud to have been part of the team behind #Apertus 🌍✨, an open multilingual LLM.

Trained on open data, supporting 1,800+ languages, and built with transparency, compliance & responsible AI in mind.

🤖 Try Apertus models: huggingface.co/collections/...

7 months ago

If you’re at @iclr-conf.bsky.social this week, come check out our spotlight poster INCLUDE during the Thursday 3:00–5:30pm session!

I will be there to chat about all things multilingual & multicultural evaluation.

Feel free to reach out anytime during the conference. I'd love to connect!

1 year ago

NEW PAPER ALERT: Generating visual narratives to illustrate textual stories remains an open challenge, due to the lack of knowledge to constrain faithful and self-consistent generations. Our #CVPR2025 paper proposes a new benchmark, VinaBench, to address this challenge.

1 year ago

Lots of great news out of the EPFL NLP lab these last few weeks. We'll be at @iclr-conf.bsky.social and @naaclmeeting.bsky.social in April / May to present some of our work in training dynamics, model representations, reasoning, and AI democratization. Come chat with us during the conference!

1 year ago

🚨 New Paper!

Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖

Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!

w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social
🧵👇

1 year ago

🚀 Introducing PICLe: a framework for in-context named-entity detection (NED) using pseudo-annotated demonstrations.
🎯 No human labeling needed, yet it outperforms few-shot learning with human annotations!
#AI #NLProc #LLMs #ICL #NER
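Roughly, the in-context part of the idea can be sketched as below. The demonstrations and the `build_ned_prompt` helper are invented for illustration: in PICLe the entity labels in the demos would come from a model's own predictions (pseudo-annotations), not humans, and the actual pipeline is not reproduced here.

```python
# Hypothetical pseudo-annotated demonstrations: the entity spans here
# stand in for labels produced by a model, not human annotators.
pseudo_demos = [
    ("Marie Curie won the Nobel Prize in 1903.",
     [("Marie Curie", "PER"), ("Nobel Prize", "MISC")]),
    ("CERN is located near Geneva.",
     [("CERN", "ORG"), ("Geneva", "LOC")]),
]

def build_ned_prompt(demos, query):
    """Assemble a few-shot prompt for in-context named-entity detection."""
    parts = []
    for text, entities in demos:
        labeled = "; ".join(f"{span} -> {tag}" for span, tag in entities)
        parts.append(f"Sentence: {text}\nEntities: {labeled}")
    # The query sentence ends the prompt; the model completes the labels.
    parts.append(f"Sentence: {query}\nEntities:")
    return "\n\n".join(parts)

prompt = build_ned_prompt(pseudo_demos, "EPFL sits on the shore of Lake Geneva.")
print(prompt)
```

The prompt ends with an open `Entities:` slot, so the LLM's completion is the entity prediction for the query sentence.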

1 year ago

Introducing Global-MMLU🌍: A multilingual benchmark featuring MMLU translations in 42 languages crafted with:
✅ Human curation
✅ Extensive metadata
✅ Insights into cultural sensitivity

Proud to have collaborated with Shivalika Singh, @sarahooker.bsky.social and Cohere For AI!

1 year ago

1/ 📘 Could ChatGPT get an engineering degree? Spoiler: yes! In our new @pnas.org article, we explore how AI assistants like GPT-4 perform in STEM university courses, and on average they pass a staggering 91.7% of core courses. 🧵 #AI #HigherEd #STEM #LLMs #NLProc

1 year ago

πŸ™‹πŸ»β€β™€οΈ

1 year ago

πŸ‘ As well as the fantastic multilingual research community that helped us collect and validate INCLUDE!

1 year ago

πŸ™ We thank our amazing core team and advisors:
@negarforoutan.bsky.social, Anna Sotnikova, @eric-zemingchen.bsky.social, Sree Harsha Nelaturu, Shivalika Singh, Rishabh Maheshwary, Micol Altomare, Mohamed A Haggag, Imanol Schlag, @mziizm.bsky.social, @sarahooker.bsky.social, @abosselut.bsky.social

1 year ago

For easy evaluation, we provide the following subsets:
INCLUDE-base: up to 550 samples per language, totaling ~23K questions
🤗 : huggingface.co/datasets/Coh...
INCLUDE-lite: up to 250 samples per language, totaling ~11K questions
🤗 : huggingface.co/datasets/Coh...
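The "up to N samples per language" capping behind these subsets can be sketched like this. The `make_subset` helper and the language pools are illustrative inventions, not the actual release code.

```python
import random

def make_subset(pool_by_language, cap, seed=0):
    """Sample up to `cap` questions per language, mirroring how
    INCLUDE-base (cap=550) and INCLUDE-lite (cap=250) are described."""
    rng = random.Random(seed)
    subset = {}
    for lang, questions in pool_by_language.items():
        if len(questions) <= cap:
            # Small pools are kept whole, hence "up to" N per language.
            subset[lang] = list(questions)
        else:
            subset[lang] = rng.sample(questions, cap)
    return subset

# Hypothetical pools of question IDs for three languages.
pools = {"el": list(range(900)), "sw": list(range(300)), "uk": list(range(1200))}
base = make_subset(pools, cap=550)
print({lang: len(qs) for lang, qs in base.items()})
# -> {'el': 550, 'sw': 300, 'uk': 550}
```

A fixed seed keeps the subset reproducible across runs.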

1 year ago
INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge

Check out our paper: 📄 : arxiv.org/abs/2411.19799

1 year ago

🤝 Information is transferred across languages of the same script, though untrained languages might also excel due to potential data contamination.

🌎 Models can struggle with non-English instructions, entangling knowledge evaluation with other factors such as task formatting.

1 year ago

Analysis shows:
📚 Models have a long way to go in capturing the regional knowledge reflected in languages.

💪 Model scale improves regional knowledge understanding, but other techniques like CoT or instruction tuning have minimal or negative impacts.

1 year ago

To build INCLUDE, we collected ~200K multiple-choice questions spanning 44 languages and 58 knowledge domains, sourced from local materials in 52 countries and representing a rich array of cultural and regional knowledge.

1 year ago

🤔 Why is regional knowledge so important?

Users expect #LLMs to know information relevant to their environments: customs, culture, etc.
To be relevant & relatable, LLMs need to know these nuances. It's not just about global knowledge; it's about meeting user needs where they are.

1 year ago

🌍 First, what is regional knowledge?

It's the local information, culture & practices of a regional context. US law is a great topic, but not especially relevant for multilingual LLMs serving other regions.

For INCLUDE, we collect regional knowledge rather than translating Western-centric benchmarks.

1 year ago

🚀 Introducing INCLUDE 🌍: A multilingual LLM evaluation benchmark spanning 44 languages!

Contains *newly-collected* data, prioritizing *regional knowledge*.
Setting the stage for truly global AI evaluation.
Ready to see how your model measures up?
#AI #Multilingual #LLM #NLProc

1 year ago

πŸ™‹πŸ»β€β™€οΈ

1 year ago

.@icepfl.bsky.social is hiring for multiple positions in CS (including one open call): www.epfl.ch/about/workin...

Apply to come join us in beautiful Lausanne!

1 year ago

πŸ™‹πŸ»β€β™€οΈ

1 year ago

πŸ™‹πŸ»β€β™€οΈ

1 year ago

πŸ™‹πŸ»β€β™€οΈ

1 year ago

Happy to be added! Thanks!

1 year ago