any advice on how to reliably find them? asking for a friend…
Posts by Martin Jaggi
using LLMs by authors is allowed, if done responsibly. it is also allowed for reviewers who chose the permissive policy. what we require is that authors who want their paper to be reviewed by humans must, as (reciprocal) reviewers, adhere to the same standards. see also here: icml.cc/Conferences/...
To ensure compliance with peer-review policies, ICML has removed 795 reviews (1% of the total) written by reviewers who used LLMs after explicitly agreeing not to. Consequently, 497 papers (2% of all submissions) authored by these (reciprocal) reviewers have been desk-rejected.
Details in blog post 👇
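For context, a back-of-envelope check of the totals implied by those (rounded) percentages — this is a rough sketch from the stated figures only, not official ICML statistics:

```python
# Implied totals from the announced numbers: 795 reviews removed (~1% of all
# reviews) and 497 papers desk-rejected (~2% of all submissions).
# The percentages are rounded, so these are estimates only.
removed_reviews = 795
desk_rejected_papers = 497

approx_total_reviews = removed_reviews / 0.01          # ~79,500 reviews
approx_total_submissions = desk_rejected_papers / 0.02  # ~24,850 submissions

print(f"~{approx_total_reviews:,.0f} reviews, ~{approx_total_submissions:,.0f} submissions")
```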
There has been some online discussion of prompt watermarks in ICML submissions.
tl;dr:
- Yes, this is one of the *conference*'s (several) scientific integrity measures
- Yes, it's not infallible (but it still helps)
- No, your paper won't be desk rejected as a result 1/4
More like every week.
Open models continue to track closed models at a roughly 9-month lag.
A factor of 10 billion since 2010 😮
A couple of eye-opening slides from @sloeschcke.bsky.social's presentation at today’s @belongielab.org meeting (1/2)
🙀
The #ICML2026 abstract deadline has passed! We're at 33540 active abstracts (and dropping). How many will make it over the finish line? 🏁
New blog post (on a shiny new ICML blog!): What's New in #ICML2026 Peer Review
Some highlights:
- Policies to combat thinly sliced contributions
- Cascading desk rejections for peer-review abuse
- Reviewer reciprocity
- New ways to support authors and reviewers
Post: blog.icml.cc/2026/01/08/w...
A multidisciplinary team of ETH Zurich researchers developed a method of using an autonomous excavator to construct a dry-stone wall that is six metres high and sixty-five metres long.
We updated the ~8 plots we use at Interconnects to measure the open model ecosystem, guide The ATOM Project, and understand what's happening.
First, the high level picture showing China's growing adoption lead.
Announcing the ICML 2026 policy for LLMs in reviewing! Reviewers and authors each pick either conservative or permissive LLM use, and will be matched accordingly. Importantly: authors of papers that choose conservative must obey the conservative policy as reviewers.
what about Apertus? (it seems they forgot to include us in that ranking)
Experimental Git branch to support Apertus in the browser with Transformers.js
👀 I am working on something pretty cool..
Hopefully, it will soon be possible to try #Apertus 🇨🇭 directly in your browser, powered by Transformers.js 🎉
The threshold for consistent English/query understanding is now 3M parameters.
thanks for the lausanne visit and sharing these super cool results!
Breaking: we release SYNTH, a fully synthetic generalist dataset for pretraining, and two new SOTA reasoning models trained exclusively on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
🎉 ICML 2026 Call for Papers (& Position Papers) is here! 🎉
📅 Key Dates
Abstract deadline: Jan 23, 2026 AOE
Paper deadline: Jan 28, 2026 AOE
A few key changes this year:
- Attendance for authors of accepted papers is optional
- The originally submitted versions of accepted papers will be made public
...
so open-weights models are much happier than closed ones i guess, cause they live on in the long run, did i get that right?
I just tried the official demo for the new Gemini 2.5 Computer Use model and it started by navigating to Google, solving Google's own CAPTCHA and then running a search! simonwillison.net/2025/Oct/7/gemini-25-com...
apertus also! (september release, same mission but multilingual)
cool idea. let us know how it goes! btw maybe these can be useful: github.com/swiss-ai/ape...
or, since today, also unsloth and llamacpp
on the engineering track it usually renews yearly, but a permanent position is possible after some experience & paperwork. on the academic track see e.g. here www.epfl.ch/about/workin...
Link to the first version of the Apertus open-data, open-weights LLM — multilingual in >1000 languages, and compliant, ethical AI huggingface.co/collections/...
Several open positions at EPFL Lausanne and ETH Zurich as part of the Swiss AI Initiative. We cover the entire stack of foundation model training. And we're open to international applicants, of course (no H-1B required ;))