Posts by Anna Bavaresco
Oh, and I've also presented a paper: Vision-Language Models Align with Human Neural Representations in Concept Processing, joint work with @mdhk.net, Sandro Pezzelle, and Raquel Fernández. If you missed the poster, you can still check out the paper 👇🏻
aclanthology.org/2026.eacl-lo...
It was great to attend EACL2026 in Rabat!🇲🇦 I've learnt a few interesting things about Arabic, seen a friend receive a well-deserved Outstanding Paper Award (kudos @alberto-testoni.bsky.social), and had stimulating conversations with amazing researchers✨
📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16! 🌴 https://sites.google.com/view/cmclworkshop/cfp
Does it matter how you prompt an LLM with a persona? Do LLMs respond differently to natural conversation history compared to names and explicit mentions? Go check out our new preprint! 👀
🚀 Call for Papers!
The 9th Multimodal Learning and Applications Workshop (MULA 2026) is coming to CVPR 2026.
If you’re working on multimodal learning, this is your stage!
🗓️ Submission deadline: March 9, 2026 (AoE)
More information ➡️ mula-workshop.github.io
Grateful to my co-authors @mdhk.net, Sandro Pezzelle, and Raquel Fernández for all the time and effort they’ve put into this project ✨
Check out our preprint if you’re curious about this work: arxiv.org/abs/2407.179...
In addition, they highlight striking differences among VLM architectures, both in their alignment with activations from different brain networks and in how much they are influenced by the type of context (visual or sentential) provided alongside the input concept words.
Our paper has been accepted to EACL 2026!🎉 We systematically evaluate several vision-language models (VLMs) and language-only models, measuring their alignment with brain responses to concept words. Our results show that vision-language models offer a promising tool for modelling human concept processing.
I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you, or do you know, an ambitious recent (or soon-to-be) MSc graduate with a background in NeuroAI and an interest in large-scale data collection and video perception? Check out our vacancy! (deadline Feb 15).
werkenbij.uva.nl/en/vacancies...
Don't read my blog it's lame
annabavaresco.github.io/annabs-unsol...
The CfP for CMCL is out!🌴 We are looking forward to receiving many interesting submissions! ✨ (Deadline: February 25, 2026) sites.google.com/view/cmclwor...
@annabavaresco.bsky.social and @tlmnhut.bsky.social show: supervised pruning of a DNN’s feature space better aligns with human category representations, selects distinct subspaces for different categories, and more accurately predicts people’s preferences for GenAI images.
doi.org/10.1145/3768...
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum
✨ Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?
At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!
📄 arxiv.org/abs/2506.00981
⬇️
Had such a great time presenting our tutorial on Interpretability Techniques for Speech Models at #Interspeech2025! 🔍
For anyone looking for an introduction to the topic, we've now uploaded all materials to the website: interpretingdl.github.io/speech-inter...
As always, I'm thankful to Raquel Fernández, who supervised this work, agreed to pose for the moderately cringe picture, and presented this other work carried out together with @mdhk.net and Sandro Pezzelle 👇
arxiv.org/abs/2407.17914
I also had the opportunity to present this work on experiential semantic information in multimodal and language-only models, recently published at CoNLL 👇
aclanthology.org/2025.conll-1...
What a privilege to have #CCN2025 in (an exceptionally warm and sunny) Amsterdam this year!
It was my first time attending the conference, and being surrounded by so many talented researchers whose interests are similar to mine was a deeply enriching experience ✨