Re-Align is back for its 4th edition at ICLR 2026!
📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields.
Tracks: Short (≤5p), Long (≤10p), Challenge (blog)
⏰ Paper deadline: Feb 5, 2026
representational-alignment.github.io/2026/
Posts by Angelika Romanou
1/ How does mixing data from hundreds of languages affect LLM training?
In our new paper "Revisiting Multilingual Data Mixtures in Language Model Pretraining", we revisit core assumptions about multilinguality using 1.1B-3B models trained on up to 400 languages.
🧵👇
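For readers curious about the mechanics: a common knob in multilingual mixing is temperature-based sampling over per-language data sizes. The sketch below illustrates that general technique only; it is not the paper's actual mixture recipe, and the token counts are invented.

```python
import numpy as np

def mixture_weights(token_counts: dict[str, float], tau: float = 0.7) -> dict[str, float]:
    """Temperature-based sampling: p_i proportional to (n_i / sum_n) ** tau.
    tau=1 keeps natural proportions; tau -> 0 approaches uniform,
    upweighting low-resource languages."""
    langs = list(token_counts)
    p = np.array([token_counts[l] for l in langs], dtype=float)
    p /= p.sum()   # natural proportions
    p = p ** tau   # temperature flattening
    p /= p.sum()   # renormalize
    return dict(zip(langs, p))

# Hypothetical per-language token counts (in billions); not from the paper.
counts = {"en": 500.0, "de": 60.0, "sw": 1.5, "yo": 0.2}
print(mixture_weights(counts, tau=0.5))
```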
Keynote talk:
Apertus: Democratizing Open and Compliant LLMs for Global Language Environments.
Imanol Schlag introduces Apertus, a fully open suite of LLMs with a focus on compliance, transparency, and multilingual representation, trained across 1000+ languages.
1/ 🚨 New preprint
How do #LLMs' inner features change as they train? Using #crosscoders + a new causal metric, we map when features appear, strengthen, or fade across checkpoints, opening a new lens on training dynamics beyond loss curves & benchmarks.
#interpretability
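Rough background for that thread: a crosscoder learns one shared feature dictionary across several models or checkpoints, with a separate decoder per checkpoint, so a feature's per-checkpoint decoder norm traces when it appears, strengthens, or fades. A minimal PyTorch sketch of that idea, with illustrative dimensions and without the paper's training loss or new causal metric:

```python
import torch
import torch.nn as nn

class Crosscoder(nn.Module):
    """Shared encoder over concatenated checkpoint activations,
    one decoder per checkpoint (model-diffing-style crosscoder sketch)."""
    def __init__(self, d_model: int, n_features: int, n_ckpts: int):
        super().__init__()
        self.encoder = nn.Linear(d_model * n_ckpts, n_features)
        self.decoders = nn.ModuleList(
            [nn.Linear(n_features, d_model, bias=False) for _ in range(n_ckpts)]
        )

    def forward(self, acts: list[torch.Tensor]):
        # acts: one (batch, d_model) activation tensor per checkpoint
        f = torch.relu(self.encoder(torch.cat(acts, dim=-1)))  # shared features
        recons = [dec(f) for dec in self.decoders]             # per-checkpoint reconstructions
        return f, recons

def feature_strength(model: Crosscoder, feature_idx: int) -> list[float]:
    """Per-checkpoint decoder norm of one feature: a simple proxy for
    when that feature appears, strengthens, or fades over training."""
    return [dec.weight[:, feature_idx].norm().item() for dec in model.decoders]
```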
Proud to have been part of the team behind #Apertus ✨, an open multilingual LLM.
Trained on open data, supporting 1,800+ languages, and built with transparency, compliance & responsible AI in mind.
🤗 Try Apertus models: huggingface.co/collections/...
If you're at @iclr-conf.bsky.social this week, come check out our spotlight poster INCLUDE during the Thursday 3:00–5:30pm session!
I will be there to chat about all things multilingual & multicultural evaluation.
Feel free to reach out anytime during the conference. I'd love to connect!
NEW PAPER ALERT: Generating visual narratives to illustrate textual stories remains an open challenge, due to the lack of knowledge constraints that keep generations faithful and self-consistent. Our #CVPR2025 paper proposes a new benchmark, VinaBench, to address this challenge.
Lots of great news out of the EPFL NLP lab these last few weeks. We'll be at @iclr-conf.bsky.social and @naaclmeeting.bsky.social in April / May to present some of our work in training dynamics, model representations, reasoning, and AI democratization. Come chat with us during the conference!
🚨 New Paper!
Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖
Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!
w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social
🧵👇
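For intuition on the localizer method (a sketch of the general fMRI-inspired procedure, not the paper's exact pipeline): present contrasting stimulus sets, e.g. sentences vs. strings of nonwords for language, and keep the small fraction of units most selective for the contrast:

```python
import numpy as np
from scipy import stats

def localize_units(acts_cond: np.ndarray, acts_ctrl: np.ndarray,
                   top_pct: float = 1.0) -> np.ndarray:
    """Select the top_pct% of units with the strongest condition > control
    contrast, analogous to an fMRI functional localizer.
    acts_*: (n_stimuli, n_units) unit activations."""
    t, _ = stats.ttest_ind(acts_cond, acts_ctrl, axis=0)
    k = max(1, int(t.shape[0] * top_pct / 100))
    return np.argsort(t)[-k:]  # indices of the most selective units

# Hypothetical activations: 100 stimuli per condition, 4096 units,
# with a small mean boost in the "language" condition.
rng = np.random.default_rng(0)
lang_units = localize_units(rng.normal(size=(100, 4096)) + 0.1,
                            rng.normal(size=(100, 4096)))
```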
Introducing PICLe: a framework for in-context named-entity detection (NED) using pseudo-annotated demonstrations.
🎯 No human labeling needed, yet it outperforms few-shot learning with human annotations!
#AI #NLProc #LLMs #ICL #NER
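The core move, as described in the post, is to let a model pseudo-annotate unlabeled text and reuse those noisy examples as in-context demonstrations. A minimal sketch of assembling such a prompt; the format and the demo data are made up for illustration:

```python
def build_icl_prompt(pseudo_demos: list[tuple[str, list[tuple[str, str]]]],
                     query: str) -> str:
    """Assemble an in-context NED prompt from pseudo-annotated demos:
    (sentence, [(entity, type), ...]) pairs produced by a model rather
    than by human annotators."""
    blocks = []
    for sent, ents in pseudo_demos:
        tagged = "; ".join(f"{e} [{t}]" for e, t in ents) or "none"
        blocks.append(f"Sentence: {sent}\nEntities: {tagged}")
    blocks.append(f"Sentence: {query}\nEntities:")
    return "\n\n".join(blocks)

# Hypothetical pseudo-annotations (model-generated, possibly noisy).
demos = [("Alan Turing worked at Bletchley Park.",
          [("Alan Turing", "PER"), ("Bletchley Park", "LOC")])]
print(build_icl_prompt(demos, "Grace Hopper joined the US Navy."))
```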
Introducing Global-MMLU 🌍: A multilingual benchmark featuring MMLU translations in 42 languages crafted with:
✅ Human curation
✅ Extensive metadata
✅ Insights into cultural sensitivity
Proud to have collaborated with Shivalika Singh, @sarahooker.bsky.social and Cohere For AI!
1/ Could ChatGPT get an engineering degree? Spoiler: yes! In our new @pnas.org article, we explore how AI assistants like GPT-4 perform in STEM university courses, and on average they pass a staggering 91.7% of core courses. 🧵 #AI #HigherEd #STEM #LLMs #NLProc
As well as the fantastic multilingual research community that helped us collect and validate INCLUDE!
We thank our amazing core team and advisors:
@negarforoutan.bsky.social, Anna Sotnikova, @eric-zemingchen.bsky.social, Sree Harsha Nelaturu, Shivalika Singh, Rishabh Maheshwary, Micol Altomare, Mohamed A Haggag, Imanol Schlag, @mziizm.bsky.social, @sarahooker.bsky.social, @abosselut.bsky.social
For easy evaluation, we provide the following subsets:
INCLUDE-base: up to 550 samples per language, totaling ~23K questions
🤗: huggingface.co/datasets/Coh...
INCLUDE-lite: up to 250 samples per language, totaling ~11K questions
🤗: huggingface.co/datasets/Coh...
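Loading either subset should be a one-liner with the datasets library. The dataset IDs above are truncated, so the ID and split below are placeholders to adapt, not the real paths:

```python
from datasets import load_dataset

# Hypothetical dataset ID: the real ones sit behind the truncated
# huggingface.co/datasets/Coh... links above.
ds = load_dataset("CohereForAI/include-lite")  # placeholder ID
print(ds)             # available splits and sizes
print(ds["test"][0])  # one MCQ record; the "test" split is an assumption
```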
🤔 Information is transferred across languages of the same script, though untrained languages might also excel due to potential data contamination.
Models can struggle with non-English instructions, entangling knowledge evaluation with other factors such as task formatting.
Analysis shows:
Models have a long way to go in capturing the regional knowledge reflected in languages.
💪 Model scale improves regional knowledge understanding, but other techniques like CoT or instruction tuning have minimal or negative impacts.
To build INCLUDE, we collected ~200K multiple-choice questions across 44 languages and 58 knowledge domains, drawn from local sources in 52 countries and representing a rich array of cultural and regional knowledge.
🤔 Why is regional knowledge so important?
Users expect #LLMs to know information relevant to their environments' customs, culture, etc.
To be relevant & relatable, LLMs need to know these nuances. It's not just global knowledge; it's about meeting user needs where they are.
First, what is regional knowledge?
It's the local info, culture & practices of a regional context. US law is a great topic, but not as relevant for multilingual LLMs serving other regions.
For INCLUDE, we collect regional knowledge rather than translating Western-centric benchmarks.
Introducing INCLUDE 🌍: A multilingual LLM evaluation benchmark spanning 44 languages!
Contains *newly-collected* data, prioritizing *regional knowledge*.
Setting the stage for truly global AI evaluation.
Ready to see how your model measures up?
#AI #Multilingual #LLM #NLProc
.@icepfl.bsky.social is hiring for multiple positions in CS (including one open call): www.epfl.ch/about/workin...
Apply and come join us in beautiful Lausanne!
Happy to be added! Thanks!