PS: Quite amazingly, just before writing this we saw that PriorGuide has already been implemented in the SBI package @sbi-devs.bsky.social -- kudos to the developers!
Posts by Luigi Acerbi
5/ Work led by Yang Yang and with Severi Rissanen @mummitrollet.bsky.social @nasrullohloka.bsky.social @huangdaolang.bsky.social @arnosolin.bsky.social Markus Heinonen and yours truly.
@univhelsinkics.bsky.social & FCAI & @ellisinstitute.fi
Website: yangyang-pro.github.io/PriorGuide/
4/ The idea is that we can pretrain diffusion models for simulator-based inference and then adapt them at runtime using available task-specific information. We can do both posterior (parameter) inference and posterior-predictive inference (new data).
3/ By using a Gaussian mixture model (GMM) approximation, the modified score takes an analytical closed form, which makes the base method virtually free.
We can also refine the approximation with corrective Langevin steps -- which improve the approximation quality by spending test-time compute.
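The corrective Langevin refinement can be sketched in a few lines (a minimal unadjusted-Langevin sketch, not the paper's implementation; `score_fn` stands in for the modified posterior score, and the step size and step count are illustrative):

```python
import numpy as np

def langevin_refine(x, score_fn, step=1e-3, n_steps=50, rng=None):
    """Unadjusted Langevin steps toward the distribution whose score is score_fn.

    score_fn(x) should return grad log p(x). Each extra step spends a bit more
    test-time compute to pull approximate samples closer to the target.
    """
    rng = np.random.default_rng(rng)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score_fn(x) + np.sqrt(2.0 * step) * noise
    return x
```

With enough steps, samples drift toward the target regardless of where they start, at the cost of extra score evaluations.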
2/ PriorGuide applies to amortized inference based on diffusion models, like Simformer.
We leverage a "guidance" term based on the new-to-old prior ratio... and a bunch of other tricks which let us express the modified score for diffusion.
No retraining: it's a pure test-time technique.
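As a toy illustration of the prior-ratio guidance idea (hypothetical Gaussian priors for readability; PriorGuide itself handles the diffusion-time dependence of this term via a GMM approximation, which this sketch ignores):

```python
import numpy as np

def gaussian_log_prior_grad(x, mu, var):
    """Gradient of log N(x; mu, var): grad log p(x) = -(x - mu) / var."""
    return -(x - mu) / var

def guided_score(x, base_score, mu_old, var_old, mu_new, var_new):
    """Sketch of prior-ratio guidance: add grad log [pi_new(x) / pi_old(x)]
    to the pretrained score, so no retraining is needed."""
    ratio_grad = (gaussian_log_prior_grad(x, mu_new, var_new)
                  - gaussian_log_prior_grad(x, mu_old, var_old))
    return base_score(x) + ratio_grad
```

Sanity check: if the base model was trained with a standard normal prior and the new prior is N(1, 1), the guided score should equal the score of N(1, 1) when the likelihood is flat.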
1/ One of the issues of fully amortized inference / pretrained simulator-based inference is that you are stuck with the "prior" training distribution. What if you change your mind after training?
In PriorGuide, one of our papers at ICLR this week, we allow the prior to be changed at runtime!
Big day ahead for @chengkunli.bsky.social -- no pressure!
www.helsinki.fi/en/faculty-s...
@univhelsinkics.bsky.social
Featuring @upicchini.bsky.social as our honoured Opponent, and yours truly as the Custos. Looking forward to the big event.
We’re launching a new AI Methods & Software Hub in the ML cluster and are looking for a Director to build and lead it!
Shape how ML/AI and software drive scientific discovery—in close collaboration with AI and domain scientists—within an extremely vibrant and collaborative ecosystem!
3. The style was not surface-level LLM-y (all lowercase, some carefully placed typos), but the general structure -- a hook, a rhetorical question, then some recognizable sentences -- gave it away, on top of the extremely implausible but click-baity (for me) content of the question.
2. How do I know this was AI-generated? There is no plausible reason why an undergrad with some light NLP background would ask expert-level questions about how Gaussian process surrogates work in a very specific method from one of my early papers.
1. Dear colleagues, I finally got my first extremely well-crafted, highly manipulative nerd-sniping email from a prospective student, the core of which was 100% AI-assisted. It felt weird, and I almost fell for it. We are in for a fun ride, aren't we?
Today NeurIPS is announcing our official satellite event in Paris.
After responding to the call from ELLIS following the success of EurIPS in December, we are pleased to reach a new milestone by joining forces with the NeurIPS organizing committee for the 2026 edition.
𝐷𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑡𝑙𝑦!
A fitting ending to a great research journey!
24-Hour Final Call for BAMB! 2026 ⏳
Join us in Barcelona (July 12–23) and learn from our expert faculty:
@meganakpeters.bsky.social
@marcelomattar.bsky.social
@khamascience.bsky.social
@thecharleywu.bsky.social
Apply now:
www.bambschool.org
Yes, this still happens! Before doing a manual check, I deploy agents to do the first check for me (literally a /doublecheck skill in Claude Code which I force to call automatically at the end of each task). This often catches issues, including once a totally fabricated MCMC analysis...
Look who's there!
With (in pseudo-random order) @mummitrollet.bsky.social @nasrullohloka.bsky.social @huangdaolang.bsky.social @conorhassan.bsky.social @sfrancesco.bsky.social @arnosolin.bsky.social @samikaski.bsky.social and several others not on here -- check their names above!
More info soon; all papers fit our AI4science research agenda within the Finnish Center for Artificial Intelligence & @ellisinstitute.fi -- building efficient methods for inference, uncertainty quantification and decision making, leveraging powerful autoregressive transformers and diffusion models.
A bit of a delayed celebration, but happy that our three submitted papers were accepted at @iclr-conf.bsky.social 2026! This was a... complicated year for ICLR, but hopefully now we can focus on the science.
This game was 100% designed, made, and tested by Claude Code with one prompt to "make a complete Sierra-style adventure game with EGA-like graphics and text parser, with 10-15 minutes of gameplay." I gave two prompts to play-test the game & deploy it.
Play: enchanted-lighthouse-game.netlify.app
7/ Work by @sfrancesco.bsky.social, @chengkunli.bsky.social & myself, with many thanks to the Research Council of Finland.
The S-VBMC code is available as an easy-to-use Python library: github.com/acerbilab/sv...
Check out the paper: openreview.net/forum?id=M2i...
6/ S-VBMC is an inexpensive post-processing step, so it greatly improves posterior quality at a negligible computational cost!
5/ This optimization is made possible by VBMC’s handy property of providing a closed-form solution for individual components of the ELBO (I_m,k), allowing the following formulation for M independent VBMC solutions:
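The formulation might look roughly as follows (a sketch with notation guessed from the post -- see the paper for the exact form):

```latex
% Stack the K_m components of M independent VBMC runs into one mixture
%   q_w(\theta) = \sum_{m=1}^{M} \sum_{k=1}^{K_m} w_{m,k}\, q_{m,k}(\theta),
% and maximize the stacked ELBO over the weights w only:
\mathrm{ELBO}(w) = \sum_{m=1}^{M} \sum_{k=1}^{K_m} w_{m,k}\, I_{m,k}
  + \mathcal{H}\!\left[q_w\right],
\qquad w_{m,k} \ge 0, \quad \sum_{m,k} w_{m,k} = 1,
```

where each $I_{m,k}$ is the closed-form expectation contributed by component $q_{m,k}$, so re-optimizing the weights requires no new likelihood evaluations.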
4/ S-VBMC “stacks” the Gaussian mixture posteriors output by independent VBMC runs by maximizing the “stacked” ELBO with respect to the weights of the individual Gaussian components. It doesn’t change the components, it just re-weights them!
VBMC vs. S-VBMC
3/ However, VBMC’s relatively conservative active learning strategy can lead it to miss some portions of the true posterior when it has challenging features (multiple modes, long tails). S-VBMC fixes this!
2/ Bayesian inference of model parameters can be a complex problem to solve, especially with expensive likelihood functions. We addressed this in the past with Variational Bayesian Monte Carlo (VBMC repo: github.com/acerbilab/py...).
Stacking Variational Bayesian Monte Carlo paper in TMLR.
1/ Excited to share our new work published in Transactions on Machine Learning Research (TMLR), Stacking Variational Bayesian Monte Carlo (S-VBMC)!
I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity
What an amazing Yule gift from @stefanradev.bsky.social & colleagues: a tour-de-force tutorial on diffusion models for simulator-based inference.
This is one of the most comprehensive and useful review/tutorials I have ever seen -- a must read! Kudos to all the authors!
arxiv.org/abs/2512.20685