
Posts by Martin Tutek


We are delighted to welcome @marlutz.bsky.social to our lab over the next few months! 🎉
She'll work on the representation of different demographic groups in LLMs.

#NLProc

1 day ago
Registration: Official website for the 64th Annual Meeting of the Association for Computational Linguistics

FYI #ACL2026 has an unusual registration system this year, and probably a lot of people who want to attend will not be able to.

Spots are limited to 3.5k people, and only presenting authors can register during the first phase. Then, *if* there are spots left, others can try to register.

1 day ago
2026 Call for Papers: Workshop on Insights from Negative Results in NLP

📢 The Workshop on Insights from Negative Results will be back at EMNLP'26!

Your most insightful failures can be submitted in 4 pages by June 25. It's also possible to commit short papers reviewed through ARR.

insights-workshop.github.io/2026/cfp

1 week ago
ICML 2026 Workshop GenAICreativity: OpenReview homepage

How can generative AI better support human creativity without limiting it? If you have thoughts, we invite submissions to our ICML workshop on Generative AI, Creativity, and Human-AI Co-Creation.

📍 July 2026, Seoul
📄 Submit by: April 24 (AOE)
🔗 Submission link: openreview.net/group?id=ICM...

1 week ago
A llama sweating while writing a paper at a desk. A sign says "Deadline! March 31 11:59pm AOE"

❗The full paper submission deadline for COLM is ~14 hours from now (11:59pm AOE)!

Please submit your final PDFs on the same page where you uploaded your abstracts. And please use the provided LaTeX templates; do not handwrite your manuscript like this llama is!

Good luck!

3 weeks ago

Interested in pursuing a PhD in NLP/cog-sci?

Studying language learning in LMs from the perspective of human language acquisition? A few more days to apply!

3 weeks ago

I notice a surprising lack of emdashes in this post, do you not like them?

3 weeks ago
Opinion | Your Chatbot Isn’t a Therapist

A piece co-authored by an old friend (Divya Saini, a psychiatrist at Massachusetts General Hospital)

www.nytimes.com/2026/03/29/o...

3 weeks ago

Check out works on sequence repetition 🔁 and evaluating synthetic data 🧮 from our lab in Rabat!
@eaclmeeting.bsky.social #EACL2026

3 weeks ago
The Curse of Verbalization: How Presentation Order Constrains LLM Reasoning Yue Zhou, Henry Peng Zou, Barbara Di Eugenio, Yang Zhang. Findings of the Association for Computational Linguistics: EACL 2026. 2026.

The Curse of Verbalization: How Presentation Order Constrains LLM Reasoning
aclanthology.org/2026.finding...

> Restructuring problems to align the order of information presentation with the order of utilization consistently improves performance.

Intuitive but neat.

4 weeks ago
Mary, the Cheeseburger-Eating Vegetarian: Do LLMs Recognize Incoherence in Narratives? Karin De Langis, Püren Öncel, Ryan Peters, Andrew Elfenbein, Laura Kristen Allen, Andreas Schramm, Dongyeop Kang. Proceedings of the 19th Conference of the European Chapter of the Association for Comp...

Mary, the Cheeseburger-Eating Vegetarian: Do LLMs Recognize Incoherence in Narratives?
aclanthology.org/2026.eacl-lo...

How LLMs process contradictory information in context (wrt what they expect or know) is a big question. Contradictions in settings seem to matter more than contradictions in character traits.

4 weeks ago
Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity Prakhar Ganesh, Reza Shokri, Golnoosh Farnadi. Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). 2026.

Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity
aclanthology.org/2026.eacl-lo...

To put it plainly, hallucinations are a frustratingly poorly defined phenomenon. Mitigating them requires nuance and categorization, which this paper does a good job of.

4 weeks ago
LLMs Faithfully and Iteratively Compute Answers During CoT: A Systematic Analysis With Multi-step Arithmetics Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Ana Brassard, Keisuke Sakaguchi, Kentaro Inui. Findings of the Association for Computational Linguistics: EACL 2026. 2026.

LLMs Faithfully and Iteratively Compute Answers During CoT: A Systematic Analysis With Multi-step Arithmetics
aclanthology.org/2026.finding...

Big fan of faithfulness work and causal interventions, even more so in specialized scenarios (arithmetic) where pseudolabels can be derived.

4 weeks ago
Sycophancy Hides Linearly in the Attention Heads Rifo Ahmad Genadi, Munachiso Samuel Nwadike, Nurdaulet Mukhituly, Tatsuya Hiraoka, Hilal AlQuabeh, Kentaro Inui. Proceedings of the 19th Conference of the European Chapter of the Association for Compu...

Sycophancy Hides Linearly in the Attention Heads
aclanthology.org/2026.eacl-lo...

Sycophancy is a concerning phenomenon that LLMs regularly exhibit. Showing where it is encoded, and, more surprisingly, that it sits in the attention heads, is very cool.

4 weeks ago

To ease my FOMO from not attending @eaclmeeting.bsky.social, I skimmed the proceedings while playing the Tangier episode of Parts Unknown.

I'll do something different and shout out 5 (subjectively) interesting works by authors I'm *not* closely related to, in no specific order: 🧵⬇️

4 weeks ago
Horizon Europe - Marie Skłodowska-Curie Actions Postdoctoral Fellowships 2026 - Bocconi University

Thinking of applying for an #MSCA Postdoctoral Fellowship in 2026?

I’m open to supervising at Bocconi! Feel free to reach out.

Applicants who submit an expression of interest to Bocconi and are selected will receive full proposal support.

🗓️ Deadline: April 15
👉 www.unibocconi.it/en/horizon-e...

4 weeks ago

Excited to present this work together with @dippedrusk.com at #EACL. Join us in poster session 1 (11:30-13:00) 🔥

4 weeks ago

Excited to share that @milanlp.bsky.social will be presenting 5 new papers at #EACL2026 and workshops in Rabat 🇲🇦!

4 weeks ago

Argh, this sucks. Apparently they lost their funding guarantee.

4 weeks ago

You're at #EACL2026? Check out some great work from the CSS Department @gesis.org and our Data Science Methods team, with great collaborators!

4 weeks ago

I’m seeing close to zero reaction/conversation about this on here. This is huge news for open research on language models, especially in the US.

4 weeks ago

Yup this is a massive loss. OLMo (+entire ecosystem, OLMoTrace, the NeurIPS tutorial on the LM pipeline,...) was incredibly valuable and now I feel I took it for granted all this time. Not even counting all the great research coming out of AllenAI.

4 weeks ago
This is your kid's brain on AI slop

“It’s toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.”

Children's media experts say AI-generated "slop" has infiltrated the internet, preying on young children and their unsuspecting caregivers.

1 month ago

We'll have a reproducibility track at this year's Blackbox workshop! Details are still within a slightly opaque box.

We want to see if cleaning solutions that make opaque boxes 📦 transparent 🍱 work on different boxes 🎁📮🧰🥡, and with different 🧽solution-to-water🧼ratios!

1 month ago

Thank you Maria!

1 month ago
Old Habits Die Hard: How Conversational History Geometrically Traps LLMs How does the conversational past of large language models (LLMs) influence their future performance? Recent work suggests that LLMs are affected by their conversational history in unexpected ways. For...

Check out our paper & code for the full results!

arxiv.org/abs/2603.03308
technion-cs-nlp.github.io/OldHabitsDie...

1 month ago

The probabilistic mechanism can also be calculated for closed models.

We find relatively similar probabilistic results compared to open models. Given the high correlation between our probabilistic and geometric results:
➡️ We could attempt to induce the geometry of closed models!

1 month ago

This correlation dissolves in inconsistent conversations (spanning different topics).

This finding aligns with adversarial strategies that employ unrelated tokens to jailbreak models (Zou et al., 2023; Qi et al., 2025).

1 month ago

We bridge two worlds:
- Probabilistic: Modeling chats as Markov chains.
- Geometric: Measuring the orthogonality and dynamics in the internal state.

We find a high correlation between the two: the larger the probabilistic consistency, the larger the internal trap!
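The two sides above can be illustrated with a minimal sketch (hypothetical helper names and toy data, not the paper's actual code): one function estimates a Markov chain over chat topics and returns its average self-transition probability (the probabilistic consistency), the other measures how aligned vs. orthogonal consecutive hidden states are (the geometric side).

```python
import numpy as np

def markov_consistency(topic_sequence, n_topics):
    """Probabilistic side: estimate a Markov chain over discrete topic
    labels and return the mean probability of staying on the same topic."""
    counts = np.ones((n_topics, n_topics))  # Laplace smoothing
    for a, b in zip(topic_sequence, topic_sequence[1:]):
        counts[a, b] += 1
    transitions = counts / counts.sum(axis=1, keepdims=True)
    return float(np.mean(np.diag(transitions)))

def geometric_alignment(hidden_states):
    """Geometric side: mean cosine similarity between consecutive
    hidden states (1.0 = fully aligned/'trapped', 0.0 = orthogonal)."""
    h = np.asarray(hidden_states, dtype=float)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    return float(np.mean(np.sum(h[:-1] * h[1:], axis=1)))

# A chat that stays on one topic is more "consistent" than one that hops around:
consistent = markov_consistency([0] * 10, n_topics=3)
hopping = markov_consistency([0, 1, 2, 0, 1, 2, 0, 1, 2], n_topics=3)
print(consistent > hopping)  # True
```

The reported finding is then a correlation across conversations between these two quantities; the sketch only shows how each could be computed in isolation.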

1 month ago

How does an LLM’s past influence its future?🤔

In new work, led by @adisimhi.bsky.social, together with @fbarez.bsky.social @boknilev.bsky.social and Shay Cohen, we find conversational history creates a latent "geometric trap" which makes old habits (e.g. hallucinations) hard to break!

1 month ago