
Posts by Tim van Erven


How are AI tools changing our writing process? In our #chi2026 paper, we find a shift in the way writers write and engage with ideas: from generating ideas to reacting to the ones AI surfaces. We call it reactive writing.

2 days ago

The bug was apparently not in the verified part of the code.

5 days ago

Looking forward to Anton Xue's talk at the Theory of Interpretable AI Seminar! 🚀
He’ll present a framework with stability guarantees for certifying feature attribution explanations and LLM reasoning chains through robust feature selection. Join us! 🧠✨

tverven.github.io/tiai-seminar/

1 week ago

Don't forget this!

3 weeks ago

We want to speak directly to the concern many of you have expressed, and we owe you a clear explanation of what happened, why it happened, and where we stand now. We understand this situation caused genuine alarm and we take that seriously.

3 weeks ago

Did I miss the discussion about this over here?

3 weeks ago
NeurIPS 2025: The Thirty-Ninth Annual Conference on Neural Information Processing Systems

Following the success of the EurIPS and NeurIPS-Mexico City pilots in 2025, we are thrilled to announce two official NeurIPS 2026 satellite events for this year!

These will be held in Paris, France and Atlanta, USA, respectively, running alongside the main venue in Sydney, Australia.

3 weeks ago

Good advice from the university library. Applying this to all my social media as well.

1 month ago

Upcoming talk in the Theory of Interpretable AI seminar: Federico Adolfi frames interpretability in terms of computational complexity theory.

1 month ago

Wow. Donald Knuth on AI for math: www-cs-faculty.stanford.edu/~knuth/paper...

1 month ago

Very inspiring talk today by Damien in the Theory of Interpretable AI seminar!

Recording is up: youtu.be/qI4oVWhvz-4

Upcoming talks on the seminar website: tverven.github.io/tiai-seminar/

2 months ago

Happening today!

2 months ago

Excited for Damien’s seminar talk this Thursday!🚀

Steering is an exciting area in interpretability—but how strongly should we steer?

Damien will present a theory of steering strength: choosing the right magnitude of representation change, not too weak, not too strong.
tverven.github.io/tiai-seminar/

2 months ago

Apparently the NeurIPS chairs decided to reopen the camera-ready submission to ensure “that the proceedings reflect the highest scientific quality.” @neuripsconf.bsky.social, is that true?

So the solution to academic misconduct is to give the authors a chance to cover their tracks?

2 months ago
Giving University Exams in the Age of Chatbots, by Ploum (Lionel Dricot).

This link from the comments is very inspiring: ploum.net/2026-01-19-e...

2 months ago

Video recording of Chhavi's talk in the Theory of Interpretable AI seminar is now available: youtu.be/ylW3H--g7SU

Check out the seminar website for upcoming speakers: tverven.github.io/tiai-seminar/

3 months ago
ICML 2026 Intro LLM Policy

This is also a divisive issue in the machine learning community. One of the largest international conferences (ICML) is trying out ways to adapt the reviewing process: icml.cc/Conferences/...

3 months ago

Happening today!

3 months ago

Looking forward to the next “Theory of Interpretable AI” seminar on January 15, where Chhavi Yadav will present "ExpProof"! A fresh take on trustworthy explanations for confidential ML models using Zero-Knowledge Proofs. Feel free to join! #interpretability #Crypto

tverven.github.io/tiai-seminar/

3 months ago

New blog post (on a shiny new ICML blog!): What's New in #ICML2026 Peer Review

Some highlights:
- Policies to combat thinly sliced contributions
- Cascading desk rejections for peer-review abuse
- Reviewer reciprocity
- New ways to support authors and reviewers

Post: blog.icml.cc/2026/01/08/w...

3 months ago

In case you missed Ayman's talk, here is the video: youtu.be/KunGyRGbRdk

4 months ago

Well-deserved. Congrats!!

4 months ago

Happening today!

4 months ago

Ayman will present at the Theory of Interpretability Seminar!

He’ll share a new approach for optimal, interpretable decision trees with strong efficiency guarantees. 🌳✨
🗓️ Join us to learn how AO* search leads to provably better sparse trees!

tverven.github.io/tiai-seminar/

4 months ago
The Central Challenge in Explainable AI: Channel Capacity
Explainable AI is about communication: we want to tell people how or why a machine learning model is making certain decisions. Why is this so difficult? In this post I take an information-theoretic pers...

When reading a large literature, it is really helpful to have strong, opinionated views that help to categorize papers.

One of mine for explainable AI is that methods need to address a fundamental limit in how much information can be communicated.

Blog post: www.timvanerven.nl/blog/xai-com... (no math)

4 months ago

Our libraries are cutting staff so that Elsevier can have its 32% profit margin

5 months ago
Theory of XAI Workshop
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...

The schedule for our Workshop on the Theory of XAI is now online!

🕰️ Dec 2, starting 9am
📍 Bella Center Copenhagen (co-located with EurIPS)
🔗 sites.google.com/view/theory-...

5 months ago

Great seminar talk by @ulrikeluxburg.bsky.social yesterday.

Here's the video if you missed it: youtu.be/zR_GvDF65OM

Seminar info: tverven.github.io/tiai-seminar/

5 months ago

Happening in two hours.

5 months ago

Coming up tomorrow (Tuesday 11 Nov) in the Theory of Interpretability seminar: Ulrike von Luxburg will discuss why informative explanations only exist for simple functions 👀

tverven.github.io/tiai-seminar/

5 months ago