
Posts by Ece Takmaz

INLG2026: The 19th International Natural Language Generation Conference will be held in Utrecht, the Netherlands, from October 17 to 21, 2026.

📍It's official! #INLG2026 is coming to Utrecht, Netherlands, Oct 17-21! Hosted with support from Utrecht University and the local NLP community. Follow us here and check 2026.inlgmeeting.org for updates -- hope to see you there!

2 weeks ago

We are announcing a very new workshop! 🌿 MINT @ EMNLP 2026, 1st Workshop on Multimodal Interaction in Face-to-Face Dialogue! ✨ mintworkshop.github.io Raquel Fernández, Diego Frassinelli, @esamghaleb.bsky.social, Bulat Khaertdinov, @asliozyurek.bsky.social Zerrin Yumak @emnlpmeeting.bsky.social

3 weeks ago

CMCL deadline extended to Feb 28 AoE!

1 month ago

The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16!🌴https://sites.google.com/view/cmclworkshop/cfp

2 months ago

I had so many inspiring conversations with lovely colleagues and I am already looking forward to visiting again in the future! Many thanks to @simeonjunker.bsky.social, @bbunzeck.bsky.social, @manarali.bsky.social, @hbuschme.bsky.social, Clara Lachenmaier, Lisa Gottschalk, Emilie Sitter, Yu Wang ✨

3 months ago

I have just returned from a week-long visit to Bielefeld University! Thank you very much for hosting me, Sina Zarrieß and @ozgealacam.bsky.social 😊 @clausebielefeld.bsky.social

3 months ago

This week we’re having @ecekt.bsky.social as our guest in Bielefeld. She gave a very timely talk on language+vision models: how they process images under noisy conditions, and how to train a highly effective multimodal BabyLM with model merging. 🗣️👀💻

3 months ago

Photos from the Computational Psycholinguistics Meeting in Utrecht, many thanks to everyone who joined us in making this a memorable event! ✨

3 months ago

The CfP for CMCL is out!🌴 We are looking forward to receiving many interesting submissions! ✨ (Deadline: February 25, 2026) sites.google.com/view/cmclwor...

4 months ago

Which song did they use? Money for Nothing?

4 months ago

Many thanks to @dnliu.bsky.social for inviting me, and to the members of the group for their insightful questions! 😊✨

4 months ago

The program of the Computational Psycholinguistics Meeting 2025 at Utrecht University is out, packed with many interesting talks! Registration is full, but there is a waiting list if you would like to attend ✨ cpl2025.sites.uu.nl/schedule/

4 months ago

The Cognitive Modeling and Computational Linguistics (CMCL) workshop will be co-located with LREC 2026 in Palma, Mallorca!🌴Stay tuned for more details!✨
@byungdoh.bsky.social Tatsuki Kuribayashi @grambelli.bsky.social Philipp Wicke, Jixing Li, Ryo Yoshida @cmclworkshop.bsky.social

4 months ago

I was in Sweden this week! 🇸🇪❄️ Many thanks to Nikolai Ilinykh for inviting me to give a talk at the University of Gothenburg. I enjoyed having inspiring chats and delicious food with Sharid Loáiciga, @asayeed.bsky.social, Simon Dobnik, Hyewon Jang and Chris Howes at CLASP. Much appreciated! 😊🎄

4 months ago
Model Merging to Maintain Language-Only Performance in Developmentally Plausible Multimodal Models Ece Takmaz, Lisa Bylinina, Jakub Dotlacil. Proceedings of the First BabyLM Workshop. 2025.

I hope our findings will be helpful for future contributors to the multimodal track of the BabyLM challenge! aclanthology.org/2025.babylm-...

5 months ago

Instead of using the data provided in the BabyLM challenge, I opted to obtain it from the original sources, which added extra layers of filtering and complexity and revealed some discrepancies in the multimodal BabyLM data. I discuss these in the paper.

5 months ago

Unfortunately, we had limited time and resources to modify the whole evaluation pipeline for our specific multimodal architecture. As a result, we tested our models on a subset of the benchmarks.

5 months ago

The report on the Findings of the Third BabyLM Challenge indicates that the multimodal track received only one full submission this year. We submitted our paper to the workshop track instead of the challenge.

5 months ago

We experiment with weighted linear interpolation of language-only and multimodal model weights. Merging with language-only checkpoints alleviates the issue to some extent, improving performance on language-only benchmarks without heavily disrupting accuracy on multimodal tasks.

5 months ago
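To give a concrete flavor of what weighted linear interpolation of checkpoints looks like, here is a minimal sketch using NumPy arrays as stand-ins for model weights. The function name, the `alpha` parameter, and the toy checkpoints are my own illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def merge_checkpoints(lang_only, multimodal, alpha=0.5):
    """Weighted linear interpolation of two checkpoints with identical keys.

    alpha = 1.0 returns the language-only weights, 0.0 the multimodal ones.
    """
    assert lang_only.keys() == multimodal.keys()
    return {
        name: alpha * lang_only[name] + (1.0 - alpha) * multimodal[name]
        for name in lang_only
    }

# Toy example: two "checkpoints" holding a single 2x2 weight matrix each.
lang_ckpt = {"embed.weight": np.ones((2, 2))}
mm_ckpt = {"embed.weight": np.zeros((2, 2))}
merged = merge_checkpoints(lang_ckpt, mm_ckpt, alpha=0.25)
print(merged["embed.weight"][0, 0])  # 0.25
```

In practice the same interpolation would be applied parameter-by-parameter over full model state dicts; sweeping `alpha` trades off language-only against multimodal performance.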

How can we mitigate this issue in developmentally plausible multimodal models and maintain language-only performance? We explored model merging, a technique that has been shown to benefit multi-task and multi-language models, reducing the effects of catastrophic forgetting.

5 months ago

Our multimodal BabyLM model surpasses previous multimodal baselines and submissions on the leaderboard. Yet, compared to language-only models, it underperforms on grammar-oriented benchmarks, despite being exposed to the same language-only data as the language-only models (plus multimodal data).

5 months ago

Previous work, including BabyLM contributions, indicates that multimodal data has limited or no benefits in text-only benchmarks. We reach similar conclusions in our low-resource multimodal scenario.

5 months ago

I will be attending EMNLP in China to present our paper with @bylinina.bsky.social (who will be in China, too) and Jakub Dotlacil in the BabyLM workshop! Looking forward to meeting people there! ✨ 😊 #EMNLP2025 @emnlpmeeting.bsky.social

lnkd.in/e-Bzz6De

5 months ago
Traces of Image Memorability in Vision Encoders: Activations, Attention Distributions and Autoencoder Losses Images vary in how memorable they are to humans. Inspired by findings from cognitive science and computer vision, this paper explores the correlates of image memorability in pretrained vision encoders...

I felt very much at home at #ICCV2025! Here is the paper: arxiv.org/abs/2509.01453

5 months ago

Just got back from Hawaii, where I presented a workshop paper on image memorability at @iccv.bsky.social 🌺 Coming from multimodal NLP, it was my first time attending a CV conference. Everywhere I looked, there were talks and posters that were incredibly interesting!

5 months ago

🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

6 months ago

I will be presenting this work at the @iccv.bsky.social 2025 workshop MemVis: The 1st Workshop on Memory and Vision! 🌺 Work done with Albert Gatt & Jakub Dotlacil arxiv.org/abs/2509.01453

6 months ago

What makes an image memorable? And can we predict image memorability using pretrained vision encoders? We explored activations, attention distributions, image patch uniformity and sparse autoencoder losses using image representations across the layers of CLIP, DINOv2 and SigLIP2.

6 months ago
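To give a concrete flavor of this kind of probing, here is a small, self-contained sketch: it computes one simple activation statistic (mean patch-token L2 norm) per image and rank-correlates it with memorability scores. The data is synthetic and deliberately constructed so magnitude tracks memorability; the helper names and setup are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def activation_magnitude(features):
    """Mean L2 norm over patch tokens for one image; features: (patches, dim)."""
    return float(np.linalg.norm(features, axis=-1).mean())

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Toy stand-in for encoder outputs: 8 "images", 4 patch tokens of dim 16 each,
# built so that activation magnitude grows with the made-up memorability score.
feats = [np.full((4, 16), 1.0 + i) for i in range(8)]
memorability = np.linspace(0.1, 0.9, 8)

magnitudes = np.array([activation_magnitude(f) for f in feats])
print(round(spearman(magnitudes, memorability), 2))  # 1.0 by construction
```

With real encoders, `feats` would be the patch-token representations taken from a chosen layer of a pretrained model, and the same correlation could be computed per layer to see where memorability signal is strongest.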