Only 10 days left to submit your work to our International Workshop on News Recommendation and Analytics! 🚀
▶️ More details: research.idi.ntnu.no/NewsTech/INR...
📆 Submission deadline: July 17th, 2025 AoE
📍 Event co-located with @recsys.bsky.social
in Prague on September 26th (tentative)!
Posts by Fabian David Schmidt
📢 Introducing Walk&Retrieve, a simple yet effective zero-shot #RAG framework based on #knowledgegraph walks!
arXiv: arxiv.org/abs/2505.16849
GitHub: github.com/MartinBoeckl...
Joint work w/ @martinboeckling.bsky.social @heikopaulheim.bsky.social
Details 👇
Title slide: Processing Trans Languaging - Vagrant Gautam (they/xe), Saarland University, with a very brightly patterned background featuring colourful people and math symbols.
Come to my keynote tomorrow at the first official @queerinai.com workshop at #NAACL2025 to hear about how trans languaging is complex and cool, and how this makes it extra difficult to process computationally. I will have SO many juicy examples!
Diagram illustrating a hypothesis about knowledge unlearning in language models. The left side shows a training corpus with varying frequencies of facts, such as 'Montreal is a city in Quebec' (high frequency) and 'Atlantis is a city in the ocean' (lower frequency). The center shows a language model being trained on this data, then undergoing unlearning. The right side demonstrates the 'Forget Quality' results, where the model more effectively unlearns the less frequent fact ('Atlantis is in Greece') while retaining the more frequent knowledge. Labels A, B, and C mark key points in the hypothesis: A (frequency variations in training data), B (influence of frequency), and C (unlearning effectiveness).
Check out our new paper on unlearning for LLMs 🤖. We show that *not all data are unlearned equally* and argue that future work on LLM unlearning should take properties of the data to be unlearned into account. This work was led by my intern @a-krishnan.bsky.social
🔗: arxiv.org/abs/2504.05058
📣 Call for Papers is out! 📣
Working on #news #recsys & their societal, legal, and ethical dimensions?
👉 Submit to the 13th INRA workshop, co-located w/ @recsys.bsky.social in Prague!
📅 Paper deadline: ** July 17th, 2025 **
More info: research.idi.ntnu.no/NewsTech/INR...
#INRA2025 #RecSys2025
Hello! INRA is a forum for researchers and practitioners to discuss technical innovations, societal, ethical, and legal aspects of news recommendation and analytics.
The upcoming 13th edition of our workshop will be co-located w/ @recsys.bsky.social in Prague.
Stay tuned to this channel!
Joint work with Florian Schneider, Chris Biemann, and @gglavas.bsky.social
My first paper on multilingual vision-language, and I couldn't be happier with how this work turned out!🙂
Cross-modal topic matching correlates well with other multilingual vision-language tasks!
🤗Images-To-Sentence (given images, select the topically fitting sentence) & Sentences-To-Image (given sentences, pick the topically matching image) probe complementary aspects of VLU
The cross-modal vs. text-only performance *gap* shows that VL support decreases from high- to low-resource language tiers:
Images/Topic→Sentence (for I/T, pick S): narrows with less textual support (left)
Sentences→Image/Topic (for S, pick I/T): widens with less VL support (right)
Strong vision-language models (VLMs) like GPT-4o-mini maintain good performance on the top-150 languages, but drop to chance-level performance on the lowest-resource languages!
Introducing MVL-SIB, a massively multilingual vision-language benchmark for cross-modal topic matching in 205 languages!
🤔Tasks: Given images (sentences), select topically matching sentence (image).
arXiv: arxiv.org/abs/2502.12852
HF: huggingface.co/datasets/Wue...
Details👇
Excited to present a poster today at @IASEAIorg, hosted at the @OECD in Paris, based on our upcoming paper "Societal Alignment Frameworks Can Improve LLM Alignment" (stay tuned for the pre-print soon!🎊). Today (Fri) at 1pm CET. Conference livestream: iaseai.org/conference
⚠️Struggling with multilingual news recommendation?
We introduce NaSE, a news-adapted sentence encoder!🙌
✅No costly fine-tuning needed
✅Perfect for cold-start & few-shot scenarios
Read our ECIR 2025 📰: arxiv.org/abs/2406.12634
Try it out @hf.co 🤗: huggingface.co/aiana94/NaSE
I'm making an unofficial starter pack with some of my colleagues at Mila. WIP for now but here's the link!
go.bsky.app/BHKxoss