How are AI tools changing our writing process? In our #chi2026 paper, we find a shift in the way writers write and engage with ideas: from generating ideas to reacting to the ones AI surfaces. We call it reactive writing.
Posts by Tim van Erven
The bug was apparently not in the verified part of the code.
Looking forward to Anton Xue's talk at the Theory of Interpretable AI Seminar! 🚀
He’ll present a framework with stability guarantees for certifying feature attribution explanations and LLM reasoning chains through robust feature selection. Join us! 🧠✨
tverven.github.io/tiai-seminar/
Don't forget this!
We want to speak directly to the concern many of you have expressed, and we owe you a clear explanation of what happened, why it happened, and where we stand now. We understand this situation caused genuine alarm and we take that seriously.
Did I miss the discussion about this over here?
Following the success of the EurIPS and NeurIPS-Mexico City pilots in 2025, we are thrilled to announce two official NeurIPS 2026 satellite events for this year!
These will be held in Paris, France, and Atlanta, USA, running alongside the main venue in Sydney, Australia.
Good advice from the university library. Applying this to all my social media as well.
Upcoming talk in the Theory of Interpretable AI seminar: Federico Adolfi frames interpretability in terms of computational complexity theory.
Wow. Donald Knuth on AI for math: www-cs-faculty.stanford.edu/~knuth/paper...
Very inspiring talk today by Damien in the Theory of Interpretable AI seminar!
Recording is up: youtu.be/qI4oVWhvz-4
Upcoming talks on the seminar website: tverven.github.io/tiai-seminar/
Happening today!
Excited for Damien’s seminar talk this Thursday!🚀
Steering is an exciting area in interpretability—but how strongly should we steer?
Damien will present a theory of steering strength: choosing the right magnitude of representation change, not too weak and not too strong.
tverven.github.io/tiai-seminar/
Apparently the NeurIPS chairs decided to reopen the camera-ready submission to ensure “that the proceedings reflect the highest scientific quality.” @neuripsconf.bsky.social, is that true?
So the solution to academic misconduct is to give the authors a chance to cover their tracks?
Video recording of Chhavi's talk in the Theory of Interpretable AI seminar is now available: youtu.be/ylW3H--g7SU
Check out the seminar website for upcoming speakers: tverven.github.io/tiai-seminar/
This is also a divisive issue in the machine learning community. One of the largest international conferences (ICML) is trying out ways to adapt the reviewing process: icml.cc/Conferences/...
Happening today!
Looking forward to the next “Theory of Interpretable AI” seminar on January 15, where Chhavi Yadav will present "ExpProof"! A fresh take on trustworthy explanations for confidential ML models using Zero-Knowledge Proofs. Feel free to join! #interpretability #Crypto
tverven.github.io/tiai-seminar/
New blog post (on a shiny new ICML blog!): What's New in #ICML2026 Peer Review
Some highlights:
- Policies to combat thinly sliced contributions
- Cascading desk rejections for peer-review abuse
- Reviewer reciprocity
- New ways to support authors and reviewers
Post: blog.icml.cc/2026/01/08/w...
In case you missed Ayman's talk, here is the video: youtu.be/KunGyRGbRdk
Well-deserved. Congrats!!
Happening today!
Ayman will present at the Theory of Interpretability Seminar!
He’ll share a new approach for optimal, interpretable decision trees with strong efficiency guarantees. 🌳✨
🗓️ Join us to learn how AO* search leads to provably better sparse trees!
tverven.github.io/tiai-seminar/
When reading a large literature, it is really helpful to have opinionated views that help to categorize papers.
One of mine for explainable AI is that methods need to address a fundamental limit in how much information can be communicated.
Blog post: www.timvanerven.nl/blog/xai-com... (no math)
Our libraries are cutting staff so that Elsevier can have its 32% profit margin
The schedule for our Workshop on the Theory of XAI is now online!
🕰️ Dec 2, starting 9am
📍 Bella Center Copenhagen (co-located with EurIPS)
🔗 sites.google.com/view/theory-...
Great seminar talk by @ulrikeluxburg.bsky.social yesterday.
Here's the video if you missed it: youtu.be/zR_GvDF65OM
Seminar info: tverven.github.io/tiai-seminar/
Happening in two hours.
Coming up tomorrow (Tuesday 11 Nov) in the Theory of Interpretability seminar: Ulrike von Luxburg will discuss why informative explanations only exist for simple functions 👀
tverven.github.io/tiai-seminar/