
Posts by Luigi Acerbi

PS: Quite amazingly, just before writing this we saw that PriorGuide has already been implemented in the SBI package @sbi-devs.bsky.social -- kudos to the developers!

21 hours ago
PriorGuide: Test-Time Prior Adaptation for Simulation-Based Inference
PriorGuide enables efficient incorporation of arbitrary priors at inference time for amortized diffusion-based simulation-based inference, without retraining the model.

5/ Work led by Yang Yang, with Severi Rissanen @mummitrollet.bsky.social @nasrullohloka.bsky.social @huangdaolang.bsky.social @arnosolin.bsky.social, Markus Heinonen, and yours truly.

@univhelsinkics.bsky.social & FCAI & @ellisinstitute.fi

Website: yangyang-pro.github.io/PriorGuide/

21 hours ago

4/ The idea is that we can pretrain diffusion models for simulator-based inference and then adapt them at runtime using available task-specific information. We can do both posterior (parameter) inference and posterior-predictive inference (new data).

21 hours ago

3/ By using a Gaussian mixture model (GMM) approximation, the modified score takes an analytical closed form, which makes the base method virtually free.

We can also refine the approximation with corrective Langevin steps -- which improve the approximation quality by spending test-time compute.
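For intuition on why a Gaussian mixture buys analytical tractability (a generic property of GMMs, sketched here outside the paper's actual derivation, with made-up mixture parameters): the score of a mixture of Gaussians is available in closed form as a responsibility-weighted sum of the component scores. A minimal 1D check against a numerical derivative:

```python
import numpy as np

# Closed-form score of a 1D Gaussian mixture (generic GMM property,
# NOT the PriorGuide derivation; all parameters below are made up).
def gmm_logpdf(x, w, mu, sigma):
    """log sum_k w_k N(x; mu_k, sigma_k^2), evaluated elementwise on x."""
    comp = -0.5 * ((x[:, None] - mu) / sigma) ** 2 \
           - np.log(sigma * np.sqrt(2 * np.pi))
    return np.log(np.exp(comp) @ w)

def gmm_score(x, w, mu, sigma):
    """d/dx log sum_k w_k N(x; mu_k, sigma_k^2), via component responsibilities."""
    comp = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    resp = (w * comp) / (comp @ w)[:, None]        # posterior over components
    return (resp * (-(x[:, None] - mu) / sigma**2)).sum(axis=1)

w = np.array([0.3, 0.7]); mu = np.array([-1.0, 2.0]); sigma = np.array([0.5, 1.0])
x = np.linspace(-3, 4, 9)

# Sanity check against a central finite difference of the log-density.
h = 1e-5
num = (gmm_logpdf(x + h, w, mu, sigma) - gmm_logpdf(x - h, w, mu, sigma)) / (2 * h)
print(np.allclose(gmm_score(x, w, mu, sigma), num, atol=1e-5))  # True
```

Having the score in closed form is what makes a guidance term built on a GMM essentially free to evaluate at sampling time.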

21 hours ago

2/ PriorGuide applies to amortized inference based on diffusion models, like Simformer.

We leverage a "guidance" term based on the new-to-old prior ratio... and a bunch of other tricks which let us express the modified score for diffusion.

No retraining: it's a pure test-time technique.
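As a toy illustration of the guidance idea (my own schematic 1D construction, not PriorGuide's actual equations; all distributions and numbers below are made up): with Gaussian old and new priors, the gradient of the new-to-old log prior ratio is linear in the sample, so it can simply be added to a base score, and running Langevin dynamics on the modified score pulls samples toward the new prior.

```python
import numpy as np

# Schematic 1D prior-ratio guidance (toy construction, NOT the paper's math).
# Old prior N(0, 1), new prior N(2, 0.5^2); the guidance term is the gradient
# of the log prior ratio, which is linear in x for Gaussians.

def gaussian_score(x, mu, sigma):
    """Gradient of log N(x; mu, sigma^2) with respect to x."""
    return -(x - mu) / sigma**2

def guided_score(x, mu_post, sigma_post, old=(0.0, 1.0), new=(2.0, 0.5)):
    """Base (Gaussian) score plus the new-to-old prior log-ratio gradient."""
    base = gaussian_score(x, mu_post, sigma_post)
    guidance = gaussian_score(x, new[0], new[1]) - gaussian_score(x, old[0], old[1])
    return base + guidance

# Sample the guided target with plain Langevin dynamics (5000 parallel chains).
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
eps = 0.01
for _ in range(2000):
    x = x + eps * guided_score(x, mu_post=0.5, sigma_post=1.0) \
          + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(round(x.mean(), 2))  # samples shift toward the new prior (target mean ~2.1)
```

Here all three densities are Gaussian, so everything stays linear; the point of the GMM machinery in the post above is to get comparable tractability for much richer priors.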

21 hours ago

1/ One of the issues with fully amortized / pretrained simulator-based inference is that you are stuck with the "prior" training distribution. What if you change your mind after training?

In PriorGuide, one of our papers at ICLR this week, we allow the prior to be changed at runtime!

21 hours ago
Chengkun Li defends his PhD thesis on Surrogate-based methods for efficient Bayesian posterior computation | Faculty of Science | University of Helsinki
On Wednesday the 1st of April 2026, M.Eng. Chengkun Li defends his PhD thesis on Surrogate-based methods for efficient Bayesian posterior computation. The thesis is related to research done in the Mac...

Big day ahead for @chengkunli.bsky.social -- no pressure!

www.helsinki.fi/en/faculty-s...

@univhelsinkics.bsky.social

Featuring @upicchini.bsky.social as our honoured Opponent, and yours truly as the Custos. Looking forward to the big event.

2 weeks ago

We’re launching a new AI Methods & Software Hub in the ML cluster and are looking for a Director to build and lead it!

Shape how ML/AI and software drive scientific discovery—in close collaboration with AI and domain scientists—within an extremely vibrant and collaborative ecosystem!

3 weeks ago

3. The style was not surface-level LLM-y (all lowercase, some carefully placed typos), but the general structure -- a hook, a rhetorical question, then some recognizable sentences -- gave it away, on top of the extremely implausible but click-baity (for me) content of the question.

4 weeks ago

2. How do I know this was AI-generated? There is no plausible reason why an undergrad with some light NLP background would ask expert-level questions about how Gaussian process surrogates work in a very specific method from one of my early papers.

4 weeks ago

1. Dear colleagues, I finally got my first extremely well-crafted, highly manipulative nerd-sniping email from a prospective student, whose core was 100% AI-assisted. It felt weird, and I almost fell for it. We are in for a fun ride, aren't we?

4 weeks ago

Today NeurIPS is announcing our official satellite event in Paris.

After responding to the call from ELLIS following the success of EurIPS in December, we are pleased to reach a new milestone by joining forces with the NeurIPS organizing committee for the 2026 edition.

4 weeks ago

𝐷𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑡𝑙𝑦!

1 month ago

The fitting ending for a great research journey!

1 month ago
BAMB! 2026 | Barcelona Summer School for Advanced Modeling of Behavior
Intensive training for experienced researchers in cognitive science, computational neuroscience and neuro-AI. Five interconnected modules, expert faculty, hands-on projects. July 12-23, 2026.

24-Hour Final Call for BAMB! 2026 ⏳

Join us in Barcelona (July 12–23) and learn from our expert faculty:

@meganakpeters.bsky.social
@marcelomattar.bsky.social
@khamascience.bsky.social
@thecharleywu.bsky.social

Apply now:
www.bambschool.org

1 month ago

Yes, this still happens! Before doing a manual check, I deploy agents to do the first check for me (literally a /doublecheck skill in Claude Code, which I force it to call automatically at the end of each task). This often catches issues, including once a totally fabricated MCMC analysis...

2 months ago

Look who's there!

2 months ago

With (in pseudo-random order) @mummitrollet.bsky.social @nasrullohloka.bsky.social @huangdaolang.bsky.social @conorhassan.bsky.social @sfrancesco.bsky.social @arnosolin.bsky.social @samikaski.bsky.social and several others not on here -- check their names above!

2 months ago

More info soon; all papers fit our AI4science research agenda within the Finnish Center for Artificial Intelligence & @ellisinstitute.fi -- building efficient methods for inference, uncertainty quantification and decision making, leveraging powerful autoregressive transformers and diffusion models.

2 months ago

A bit of a delayed celebration, but happy that our three submitted papers were accepted at @iclr-conf.bsky.social 2026! This was a... complicated year for ICLR, but hopefully now we can focus on the science.

2 months ago

This game was 100% designed, made, and tested by Claude Code with one prompt to "make a complete Sierra-style adventure game with EGA-like graphics and text parser, with 10-15 minutes of gameplay." I gave two prompts to playtest the game & deploy it.

Play: enchanted-lighthouse-game.netlify.app

2 months ago
GitHub - acerbilab/svbmc: Stacking Variational Bayesian Monte Carlo (S-VBMC) algorithm for combining Variational Bayesian Monte Carlo (VBMC) posteriors to boost inference performance.

7/ Work by @sfrancesco.bsky.social, @chengkunli.bsky.social & myself, with many thanks to the Research Council of Finland.

The S-VBMC code is available as an easy-to-use Python library: github.com/acerbilab/sv...

Check out the paper: openreview.net/forum?id=M2i...

3 months ago

6/ S-VBMC is an inexpensive post-processing step, so it greatly improves posterior quality at a negligible computational cost!

3 months ago

5/ This optimization is made possible by VBMC’s handy property of providing a closed-form solution for individual components of the ELBO (I_m,k), allowing the following formulation for M independent VBMC solutions:

3 months ago

4/ S-VBMC “stacks” the Gaussian mixture posteriors output by independent VBMC runs by maximizing the “stacked” ELBO with respect to the weights of the individual Gaussian components. It doesn’t change the components, it just re-weights them!
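A toy numpy sketch of the re-weighting idea (a heavy simplification under stated assumptions: the component parameters and the per-component expected-log-joint values I_k are made up, the ELBO's entropy term is replaced by a standard closed-form lower bound on Gaussian-mixture entropy rather than VBMC's actual estimator, and a naive hill-climb stands in for the optimizer):

```python
import numpy as np

# Toy S-VBMC-style re-weighting (my own simplification, NOT the paper's exact
# objective). Pool Gaussian components from two hypothetical VBMC runs and
# optimize only the mixture weights, keeping means/variances fixed.

mu = np.array([-2.0, 0.0, 0.5, 3.0])     # pooled component means (two runs)
var = np.array([0.5, 1.0, 0.8, 0.6])     # pooled component variances
I = np.array([-1.2, -0.4, -0.5, -2.0])   # per-component expected log joint (toy)

def entropy_lb(w):
    """Closed-form lower bound on the entropy of the Gaussian mixture:
    H[q] >= -sum_k w_k log sum_j w_j N(mu_k; mu_j, var_k + var_j)."""
    s2 = var[:, None] + var[None, :]
    z = np.exp(-0.5 * (mu[:, None] - mu[None, :]) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    return -np.sum(w * np.log(z @ w))

def objective(theta):
    """Stacked-ELBO-like objective over softmax-parameterized weights."""
    w = np.exp(theta - theta.max()); w /= w.sum()
    return w @ I + entropy_lb(w), w

# Simple accept-if-better coordinate search on the softmax parameters.
theta = np.zeros(4)
best, w = objective(theta)
for _ in range(200):
    for k in range(4):
        for step in (0.1, -0.1):
            cand = theta.copy(); cand[k] += step
            val, w_cand = objective(cand)
            if val > best:
                best, theta, w = val, cand, w_cand

print(np.round(w, 2), round(best, 3))
```

Because only the weights move, each iteration is a few cheap matrix operations; no new evaluations of the expensive model are needed, matching the "inexpensive post-processing" framing in the thread.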

3 months ago

VBMC vs. S-VBMC

3/ However, VBMC’s relatively conservative active learning strategy can lead it to miss some portions of the true posterior when the posterior has challenging features (multiple modes, long tails). S-VBMC fixes this!

3 months ago
GitHub - acerbilab/pyvbmc: PyVBMC: Variational Bayesian Monte Carlo algorithm for posterior and model inference in Python

2/ Bayesian inference of model parameters can be a complex problem to solve, especially with expensive likelihood functions. We addressed this in the past with Variational Bayesian Monte Carlo (VBMC repo: github.com/acerbilab/py...).
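The surrogate idea in a deliberately crude sketch (nothing like VBMC's actual GP-plus-variational machinery; the target density and the evaluation budget below are made up): spend a handful of expensive log-density evaluations, fit a cheap surrogate, and do the rest of the inference work on the surrogate.

```python
import numpy as np

# Crude sketch of surrogate-based inference (NOT VBMC's GP/variational
# machinery): evaluate an "expensive" 1D log-density at a few points, fit a
# cheap quadratic surrogate, and normalize the surrogate on a grid.

def expensive_log_density(x):
    # Stand-in for a model whose likelihood is costly to evaluate.
    return -0.5 * (x - 1.0) ** 2 - 0.1 * x ** 4

xs = np.linspace(-3, 3, 7)             # budget: only 7 expensive evaluations
ys = expensive_log_density(xs)
coefs = np.polyfit(xs, ys, deg=2)      # cheap quadratic surrogate of log-density

grid = np.linspace(-3, 3, 1001)        # heavy lifting happens on the surrogate
dx = grid[1] - grid[0]
q = np.exp(np.polyval(coefs, grid) - np.polyval(coefs, grid).max())
q /= q.sum() * dx                      # normalized surrogate posterior
post_mean = (grid * q).sum() * dx

print(round(post_mean, 2))
```

The grid integration touches the surrogate a thousand times but the expensive model only seven; VBMC pushes the same trade-off much further with an actively learned GP surrogate and a variational posterior.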

3 months ago

Stacking Variational Bayesian Monte Carlo paper in TMLR.

1/ Excited to share our new work published in Transactions on Machine Learning Research (TMLR), Stacking Variational Bayesian Monte Carlo (S-VBMC)!

3 months ago

I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity

3 months ago
Diffusion Models in Simulation-Based Inference: A Tutorial Review
Diffusion models have recently emerged as powerful learners for simulation-based inference (SBI), enabling fast and accurate estimation of latent parameters from simulated and real data. Their score-b...

What an amazing Yule gift from @stefanradev.bsky.social & colleagues: a tour-de-force tutorial on diffusion models for simulator-based inference.

This is one of the most comprehensive and useful review/tutorials I have ever seen -- a must-read! Kudos to all the authors!

arxiv.org/abs/2512.20685

3 months ago