
Posts by Heidelberg University NLP Group

I am honored to receive the 2025 #GSCL Best Thesis Award at #KONVENS in Hildesheim for my Master’s thesis, which investigates multilinguality and develops language models for Ancient Greek and Latin. Thank you to my mentors and collaborators. I look forward to what comes next.

7 months ago

Frederick's talk is happening today! Learn how MLLMs generalize across languages!

8 months ago
[Figure] Probing classifier performance compared between an early and a late checkpoint across layers. While the early checkpoint shows uniformly high performance, the later checkpoint exhibits relatively high variance across layers.

How and when do multilingual LMs achieve cross-lingual generalization during pre-training? And why do later, supposedly more advanced, checkpoints lose some of their language-identification ability in the process? Our #ACL2025 paper investigates.
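The layer-wise probing setup behind the figure can be sketched as follows: freeze the LM, extract hidden states from each layer, and train a small classifier (here logistic regression) per layer on a task such as language identification. This is a minimal illustration with synthetic features standing in for real hidden states (in practice they would come from the model, e.g. via `output_hidden_states=True`); the dimensions and signal strength are made up for the demo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer hidden states: (n_layers, n_examples, dim).
n_layers, n_examples, dim, n_langs = 6, 600, 32, 3
labels = rng.integers(0, n_langs, size=n_examples)  # language-ID labels
hidden = rng.normal(size=(n_layers, n_examples, dim))
# Inject a weak language-dependent signal so the probe has something to find.
for layer in range(n_layers):
    hidden[layer, :, :n_langs] += 2.0 * np.eye(n_langs)[labels]

# Train one probe per layer; its held-out accuracy measures how linearly
# decodable the language identity is from that layer's representations.
layer_accuracy = []
for layer in range(n_layers):
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden[layer], labels, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    layer_accuracy.append(probe.score(X_te, y_te))

print([round(a, 2) for a in layer_accuracy])
```

Comparing these per-layer accuracy curves between an early and a late checkpoint is what produces plots like the one above.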

10 months ago

What did Aristotle actually write? We think we know, but reality is messy. As ancient Greek texts traveled through 2,500 years of history, they were copied and recopied countless times, accumulating subtle errors with each generation. Our new #NAACL2025 paper tackles this fascinating challenge.

11 months ago

Debates aren't always black and white: opposing sides often share common ground, and these partial agreements are key to meaningful compromises.
Presenting "Perspectivized Stance Vectors" (PSVs), an interpretable method for identifying nuanced (dis)agreements.

📜 arxiv.org/abs/2502.09644
🧵 More details below
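The core intuition of per-perspective stance scores can be sketched with a toy example. This is a hypothetical illustration of the idea, not the paper's actual formulation: each side's stance is a vector of scores in [-1, 1] over debate aspects, and "partial agreement" is read off from aspects where the scores share a sign. The aspect names and values here are invented.

```python
# Hypothetical stance vectors: one score in [-1, 1] per debate aspect,
# from -1 (strongly against) to +1 (strongly in favor).
aspects = ["cost", "safety", "environment", "jobs"]
side_a = {"cost": -0.8, "safety": 0.6, "environment": 0.9, "jobs": -0.2}
side_b = {"cost": -0.5, "safety": -0.7, "environment": 0.8, "jobs": 0.4}

# Aspects where both sides lean the same way (same sign) form the
# common ground, even though the sides disagree overall.
common_ground = [a for a in aspects if side_a[a] * side_b[a] > 0]
print(common_ground)  # -> ['cost', 'environment']
```

Even two opposing sides can thus be shown to agree on a subset of aspects, which is the kind of nuanced (dis)agreement the post describes.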

1 year ago

🎉 Exciting news from our team!

The final paper of @aicoffeebreak.bsky.social's PhD journey is accepted at #ICLR2025! 🙌 🖼️📄

Check out her original post below for more details on Vision & Language Models (VLMs), their modality use and their self-consistency 🔥

1 year ago