
Posts by Tommy Rochussen

If you're at ICLR and want to chat about amortised inference/neural processes or probabilistic ML more broadly, swing by my poster between 10:30 and 13:00 on Thursday (poster session 1) in Pavilion 3 at location #206. The poster is about NP-style learning in BNNs to find good priors.

2 hours ago

Check out the paper 👉 arxiv.org/pdf/2602.087...

Looking forward to presenting this work in Rio, and many thanks to @vincefort.bsky.social for his supervision!

2 months ago

Are humble Gaussian priors enough for BNNs to model highly complex stochastic processes? Do well-specified BNN priors remove the need for more costly approximate inference algorithms?

We provide answers in the paper!

2 months ago

2. It turns BNNs into flexible generative models (i.e., you can sample functions from the learned prior; see the sketch after this list).

3. It enables capabilities that have been difficult for neural processes so far, including:
• Within-task minibatching
• Meta-learning in extremely data-scarce regimes.
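
To make the generative-model point concrete, here's a minimal sketch (not the actual code from the paper; the architecture and prior values are stand-ins) of drawing function samples from a learned factorised Gaussian prior over the weights of a small MLP:

import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

# A small MLP standing in for the BNN (architecture chosen arbitrarily here).
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
n_w = parameters_to_vector(net.parameters()).numel()

# Stand-ins for the learned prior p_theta(w); in the paper these would be the
# meta-learned mean and scale, not constants.
prior_mu = torch.zeros(n_w)
prior_sigma = 0.3 * torch.ones(n_w)

x = torch.linspace(-3, 3, 200).unsqueeze(-1)
functions = []
with torch.no_grad():
    for _ in range(10):
        w = prior_mu + prior_sigma * torch.randn(n_w)   # w ~ p_theta(w)
        vector_to_parameters(w, net.parameters())       # load the sample into the network
        functions.append(net(x))                        # one function drawn from the prior

Each entry of functions is one sampled function over the grid, i.e. the BNN used purely as a generative model over functions.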

2 months ago

Why this matters:

1. It lets us study BNNs under well-specified, data-driven priors rather than the usual isotropic guff.

2 months ago

3. The resulting model can be viewed as a neural process whose latent variable is the weights of a BNN, with the network itself acting as the decoder.
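
In code terms, the analogy is roughly the following. This is a sketch with assumed names and a placeholder posterior, not the actual model: the latent variable is the flat weight vector, and decoding is just a forward pass of the network at the target inputs. The real q(w | context) would come from the amortised encoder, as in the training sketch further down.

import torch
import torch.nn as nn
from torch.distributions import Normal
from torch.nn.utils import parameters_to_vector, vector_to_parameters

decoder = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))  # the BNN itself
n_w = parameters_to_vector(decoder.parameters()).numel()

# q(w | context) is a placeholder Gaussian here; in the model it would be
# produced by the amortised encoder run on the context set.
q_w = Normal(torch.zeros(n_w), 0.2 * torch.ones(n_w))

x_target = torch.linspace(-2, 2, 50).unsqueeze(-1)
with torch.no_grad():
    preds = []
    for _ in range(16):                                   # Monte Carlo over the latent weights
        vector_to_parameters(q_w.sample(), decoder.parameters())
        preds.append(decoder(x_target))
preds = torch.stack(preds)
mean, std = preds.mean(0), preds.std(0)                   # NP-style predictive mean and spread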

2 months ago

2. This is achieved via per-dataset amortised variational inference, allowing the model to infer dataset-specific posteriors while learning a shared, well-specified prior.
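
Schematically, the per-dataset objective is an ELBO with a KL back to the shared prior. The sketch below is just the gist rather than the actual implementation; the SetEncoder architecture and the bnn_log_lik likelihood term are placeholders.

import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

N_WEIGHTS = 4353   # flattened BNN weight count (example value)

class SetEncoder(nn.Module):
    """Maps a dataset of (x, y) pairs to the parameters of q(w | D)."""
    def __init__(self, x_dim=1, y_dim=1, hidden=128):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, 2 * N_WEIGHTS)

    def forward(self, x, y):
        r = self.point_net(torch.cat([x, y], dim=-1)).mean(dim=0)   # permutation-invariant pooling
        mu, log_sigma = self.head(r).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

encoder = SetEncoder()
prior_mu = nn.Parameter(torch.zeros(N_WEIGHTS))      # shared learned prior p_theta(w)
prior_log_sigma = nn.Parameter(torch.zeros(N_WEIGHTS))
opt = torch.optim.Adam(list(encoder.parameters()) + [prior_mu, prior_log_sigma], lr=1e-3)

def neg_elbo(x, y, bnn_log_lik):
    """Per-dataset objective: -E_q[log p(y | x, w)] + KL(q(w | D) || p_theta(w))."""
    q = encoder(x, y)
    w = q.rsample()                                  # reparameterised weight sample
    prior = Normal(prior_mu, prior_log_sigma.exp())
    return -bnn_log_lik(x, y, w) + kl_divergence(q, prior).sum()

# Meta-training would minibatch over datasets (and, for large datasets, over
# observations within a task), e.g.:
#   loss = neg_elbo(x_i, y_i, bnn_log_lik); opt.zero_grad(); loss.backward(); opt.step()

The intent is that the shared prior captures what the related datasets have in common, while each q(w | D) handles the dataset-specific part.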

2 months ago

What we do:

1. We propose a way to learn a prior over neural network weights from data, using a collection of related datasets.

2 months ago

Bayesian neural network (BNN) practitioners have to specify priors over weights, but how to do so is often unclear, so the choice ends up being ad hoc. In this paper, we bridge Bayesian deep learning and probabilistic meta-learning to offer a concrete answer.

2 months ago

The work tackles a fairly fundamental question in Bayesian deep learning:

"how can we be Bayesian if we don’t have any meaningful prior beliefs in the first place?"

2 months ago

I’m pleased to share that our latest paper, “Amortising Inference and Meta-Learning Priors in Neural Networks”, has been accepted to ICLR 2026 in Rio!

2 months ago

Are bitterns as fiendishly difficult to spot in Singapore as they are in Europe?

2 months ago

Arxiv link: arxiv.org/pdf/2504.01650

It’s great to get the ball rolling on my PhD with this paper, and a nice milestone to have published my first non-workshop paper. A big thanks to @vincefort.bsky.social for his supervision on this project!

1 year ago

1.) you want/need GP levels of interpretability
2.) you don’t have that many training tasks, so need SOTA data efficiency (at the meta-level)
3.) you have accurate domain knowledge (in GP-prior form)
4.) each task has too many observations for exact GP inference

1 year ago

If you need probabilistic predictions across multiple related tasks/datasets, you should use this model if any combination of the following holds:

1 year ago

We introduce the ability to meta-learn sparse variational Gaussian process inference, resulting in a new type of neural process that is amenable to prior elicitation.
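
Roughly, the idea can be sketched as follows (an illustration, not the exact architecture from the paper): a set encoder amortises the SVGP variational distribution over inducing outputs, while the GP prior itself (kernel, inducing inputs) remains hand-specifiable, which is what keeps prior elicitation on the table.

import torch
import torch.nn as nn
import torch.nn.functional as F

M = 16                                   # number of inducing points

def rbf(a, b, lengthscale=0.5, variance=1.0):
    """The GP prior lives here: an RBF kernel whose hyperparameters can be elicited by hand."""
    d2 = (a.unsqueeze(-2) - b.unsqueeze(-3)).pow(2).sum(-1)
    return variance * torch.exp(-0.5 * d2 / lengthscale ** 2)

class InferenceNet(nn.Module):
    """Set encoder that amortises the SVGP variational distribution q(u) = N(m, L L^T)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, M + M * M)

    def forward(self, x_ctx, y_ctx):
        r = self.point_net(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)   # permutation invariant
        out = self.head(r)
        m = out[:M]
        raw = out[M:].view(M, M)
        L = torch.tril(raw, diagonal=-1) + torch.diag(F.softplus(torch.diag(raw)))
        return m, L

Z = torch.linspace(-3, 3, M).unsqueeze(-1)   # inducing inputs (fixed here; could be learned or elicited)
net = InferenceNet()

def predict(x_ctx, y_ctx, x_tst):
    """Standard SVGP predictive, but with q(u) amortised from the context set."""
    m, L = net(x_ctx, y_ctx)
    S = L @ L.T
    Kzz = rbf(Z, Z) + 1e-5 * torch.eye(M)
    Kxz = rbf(x_tst, Z)
    A = torch.linalg.solve(Kzz, Kxz.T).T             # K_xz K_zz^{-1}
    mean = A @ m
    cov = rbf(x_tst, x_tst) - A @ (Kzz - S) @ A.T
    return mean, cov

Here predict runs the standard SVGP predictive equations with the amortised q(u), so a single encoder forward pass stands in for per-task variational optimisation.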

1 year ago

Very pleased to share that our new paper “Sparse Gaussian Neural Processes” has been accepted under the proceedings track at AABI 2025! 🎉 (1/n)

1 year ago

I've seen things you people wouldn't believe.

Attacks from reviewers on fire off the shoulders of #OpenReview.

I watched logic fallacies glitter in the dark near @iclr-conf.bsky.social

All those moments will be lost in time, like tears in the next resubmission. 

Time to die.

#ML #Ai #PhDlife

1 year ago

🙋‍♂️

1 year ago

Thanks for putting this together - keen to be added!

1 year ago