
Posts by Christian A. Naesseth

Vacancy – Assistant Professor in AI for Science (AI4Science): Are you passionate about advancing Machine Learning by integrating insights from the natural sciences? Are you eager to bridge the 3rd (computational) and 4th (...

We're looking for a new colleague at @amlab.bsky.social: Assistant Professor in AI for Science 🔬🤖

World-class ML research, Amsterdam's thriving AI ecosystem (ELLIS, startups, big tech), and some of the best academic labor conditions in Europe ❤️

Deadline: May 30 👉 werkenbij.uva.nl/en/vacancies...

6 days ago 28 16 0 4
Post image

ProbML 2026 (formerly AABI) invites submissions on probabilistic ML (both Bayesian and otherwise!), July 5 in Seoul (co-located with ICML). Website: probml.cc. Tracks: proceedings (PMLR), workshop, fast track. New focus includes applications in healthcare and climate!
Submit by: 20 March 2026.

2 months ago 24 16 1 8
Monitoring Risks in Test-Time Adaptation Encountering shifted data at test time is a ubiquitous challenge when deploying predictive models. Test-time adaptation (TTA) methods address this issue by continuously adapting a deployed model using...

📜 Monitoring Risks in Test-Time Adaptation
(ICML PUT Workshop Oral!)

Time: Fri 18 Jul 10 a.m. PDT
Location: West Meeting Room 220-222
Presenter: @monaschir.bsky.social

arxiv.org/abs/2507.08721
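For anyone new to the setting: the paper is about monitoring risks of TTA, not a specific adaptation rule, but a minimal sketch of a common TTA baseline (entropy minimization on the unlabeled test batch, in the spirit of Tent) helps fix ideas. The toy model, names, and finite-difference gradient below are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Average Shannon entropy of a batch of predictive distributions."""
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Hypothetical "deployed model": fixed per-example logits whose sharpness is
# controlled by one adaptable scale parameter. Real TTA methods such as Tent
# instead adapt e.g. normalization parameters of a neural network.
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 5))  # unlabeled test batch, 5 classes

def adapt_by_entropy_minimization(scale, logits, lr=0.1, steps=50):
    """Gradient descent on mean prediction entropy of the test batch
    (finite-difference gradient for brevity)."""
    for _ in range(steps):
        eps = 1e-4
        grad = (mean_entropy(softmax((scale + eps) * logits))
                - mean_entropy(softmax((scale - eps) * logits))) / (2 * eps)
        scale -= lr * grad
    return scale

scale = adapt_by_entropy_minimization(1.0, logits)
before = mean_entropy(softmax(1.0 * logits))
after = mean_entropy(softmax(scale * logits))
```

The update sharpens predictions on the shifted test data without any labels; the question the paper studies is how to monitor the risk of a model that keeps updating itself like this.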

9 months ago 2 0 0 1
Controlled Generation with Equivariant Variational Flow Matching We derive a controlled generation objective within the framework of Variational Flow Matching (VFM), which casts flow matching as a variational inference problem. We demonstrate that controlled genera...

📜 Controlled Generation with Equivariant Variational Flow Matching

Time: Wed 16 Jul, 11 a.m.–1:30 p.m. PDT
Location: East Exhibition Hall A-B #E-3309
Presenter: @eijkelboomfloor.bsky.social

arxiv.org/abs/2506.18340

9 months ago 3 0 1 0

📜 SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations

Time: Thu 17 Jul, 11 a.m.–1:30 p.m. PDT
Location: East Exhibition Hall A-B #E-2412
Presenter: @gbarto.bsky.social

arxiv.org/abs/2502.02472

9 months ago 1 0 1 0

At #ICML2025 this week?

Come check out our work on controlled generation, simulation-free latent SDEs, and risk monitoring in test-time adaptation, and chat with the awesome students that made it happen!

#SDE #Diffusion #FlowMatching #TTA #UncertaintyQuantification

9 months ago 5 1 1 0
https://us05web.zoom.us/j/7780256206?pwd=flsq8weBOvaZgAsr3ThNiHq9d1mXMS.1&omn=89044077993

Tomorrow, Tuesday (July 1st) from 4pm to 5pm (UK time).

"SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations" (arxiv.org/abs/2502.02472) 🚀

Join via Zoom 🔥

t.co/N1C3UFukxd

9 months ago 3 1 0 0

🚨🚀
Come hear @gbarto.bsky.social talk about SDE Matching tomorrow!

SDE Matching is a highly efficient and scalable training framework for Latent/Neural SDEs.

You no longer have to discretize or simulate your SDE models when fitting them to data.

#SDE #Diffusion #FlowMatching #ML
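For contrast, here is the standard discretize-and-simulate routine (Euler–Maruyama) that training Latent SDEs otherwise relies on; this is a generic sketch of what simulation-free objectives avoid, not the SDE Matching objective itself, and the toy Ornstein–Uhlenbeck example and all names are mine:

```python
import numpy as np

def euler_maruyama(f, g, x0, t1, n_steps, rng):
    """Simulate dX_t = f(X_t, t) dt + g(X_t, t) dW_t on [0, t1].

    This is the discretize-and-simulate step that simulation-free training
    avoids: cost grows linearly with n_steps, and fitting parameters through
    it requires backprop (or adjoints) along the whole simulated path.
    """
    dt = t1 / n_steps
    x, t = np.asarray(x0, dtype=float), 0.0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + f(x, t) * dt + g(x, t) * dw
        t += dt
    return x

# Toy Ornstein-Uhlenbeck process: dX = -theta * X dt + sigma dW, X_0 = 1.
rng = np.random.default_rng(0)
theta, sigma, t1 = 1.0, 0.5, 2.0
paths = euler_maruyama(lambda x, t: -theta * x, lambda x, t: sigma,
                       x0=np.ones(5000), t1=t1, n_steps=400, rng=rng)
empirical_mean = paths.mean()
exact_mean = np.exp(-theta * t1)  # OU marginal mean: x0 * exp(-theta * t)
```

The simulated marginal mean matches the known closed form for OU; a simulation-free objective would instead target the marginals directly, without unrolling the 400-step loop.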

9 months ago 9 1 1 0
a cartoon dog is sitting at a table with a cup of coffee in front of a fire with the words "this is fine".

Overleaf down 😅 #Overleaf #NeurIPS

11 months ago 13 0 0 1

Thanks, Oskar!

11 months ago 0 0 0 0

Wow, I am floored! With the UAI results in, my lab, together with collaborators, has achieved the #PerfectGame 🏆

100% acceptance rate across an entire #ML cycle! (5/5 #NeurIPS, #ICLR, 2/2 #AISTATS, 2/2 #ICML, 1/1 #UAI)

10 for 10. 🥳🍾🤩

#Science #AI #ElementalAI

11 months ago 10 0 1 0

Exciting news: AMLab is happy to have 7 papers accepted at #ICML2025! 🎉

See the thread below for the full list 📝 and meet us in Vancouver to discuss them further! 🇨🇦

🧵 1/8

11 months ago 14 4 1 0
Post image

Oh, rip, the camera-ready PDF on OpenReview is only "privately revealed". Sorry about that :(

proceedings.mlr.press/v258/chen25f...
proceedings.mlr.press/v258/timans2...

11 months ago 2 0 0 0
Max-Rank: Efficient Multiple Testing for Conformal Prediction Multiple hypothesis testing (MHT) frequently arises in scientific inquiries, and concurrent testing of multiple hypotheses inflates the risk of Type-I errors or false positives, rendering MHT...

openreview.net/forum?id=1Yi...

openreview.net/forum?id=29c...

Check out the papers and/or the posters tomorrow (Sunday)!

#Statistics #SMC #ConformalPrediction #Testing #ML #Bayes
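Background on why multiple testing needs correcting at all: with many tests, the chance of at least one false positive blows up. A generic numerical illustration with the classical Bonferroni baseline (the paper's Max-Rank procedure aims to be more efficient than such baselines; nothing below is from the paper):

```python
import numpy as np

alpha, m = 0.05, 20

# With m independent level-alpha tests, the family-wise error rate
# (probability of at least one false positive) is 1 - (1 - alpha)^m.
fwer_uncorrected = 1 - (1 - alpha) ** m        # already > 0.6 for m = 20
fwer_bonferroni = 1 - (1 - alpha / m) ** m     # stays below alpha

# Monte Carlo check: p-values are Uniform(0, 1) under the null.
rng = np.random.default_rng(0)
p = rng.uniform(size=(100_000, m))
mc_uncorrected = (p.min(axis=1) < alpha).mean()      # reject if ANY p < alpha
mc_bonferroni = (p.min(axis=1) < alpha / m).mean()   # Bonferroni threshold
```

Bonferroni restores validity but is conservative, which is exactly the inefficiency more refined procedures try to recover.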

11 months ago 2 0 1 0

#AISTATS2025 happening in Phuket, Thailand! I have two papers at the conference:

1. Max-Rank: Efficient Multiple Testing for Conformal Prediction

2. Variational Combinatorial Sequential Monte Carlo for Bayesian Phylogenetics in Hyperbolic Space

Both at poster session 2!

11 months ago 12 1 1 0

Thanks Pierre! Was great meeting in person as well :)

11 months ago 1 0 1 0

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

11 months ago 19 2 1 0

If you missed it and are attending #AABI at NTU today, you can find me presenting it again at the afternoon poster session!

approximateinference.org

11 months ago 7 2 0 0
SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations The Latent Stochastic Differential Equation (SDE) is a powerful tool for time series and sequence modeling. However, training Latent SDEs typically relies on adjoint sensitivity methods, which depend ...

Paper: arxiv.org/abs/2502.02472
FPI workshop: sites.google.com/view/fpiwork...
DeLTa workshop: delta-workshop.github.io

Joint work with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov.

11 months ago 1 0 0 0
Post image

Come check out SDE Matching, a new simulation-free framework for training fully general Latent/Neural SDEs (a generalisation of diffusion and bridge models), at the #ICLR2025 workshops.

FPI: Morning poster session
DeLTa: Afternoon poster session

#SDE #Bayes #GenAI #Diffusion #Flow

11 months ago 13 1 1 1
Post image (x4)

The calm before the storm #ICLR2025 🔥🔥🔥

11 months ago 48 6 0 2
E-Valuating Classifier Two-Sample Tests We introduce a powerful deep classifier two-sample test for high-dimensional data based on E-values, called E-C2ST. Our test combines ideas from existing work on split likelihood ratio tests and...

Attending #ICLR2025 and #AABI2025. Presenting at the conference and workshops:

1. E-Valuating Classifier Two-Sample Tests, Friday, Hall 3 + Hall 2B #437
2. SDE Matching, Sunday-Tuesday, FPI/DeLTa/AABI

openreview.net/forum?id=dwF...
arxiv.org/abs/2502.02472

lmk if you want to chat!
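For anyone new to e-values (generic background, not the E-C2ST construction from the paper): an e-variable E is nonnegative with E[E] ≤ 1 under the null, so Markov's inequality gives P(E ≥ 1/α) ≤ α, making "reject when E ≥ 1/α" a valid level-α test. The canonical e-variable is a likelihood ratio; a toy numerical check:

```python
import numpy as np

# H0: x ~ N(0, 1); alternative N(mu, 1). The likelihood ratio
# E = p1(x) / p0(x) satisfies E[E] = 1 under H0, so by Markov's
# inequality P(E >= 1/alpha) <= alpha.
rng = np.random.default_rng(0)
alpha, mu = 0.05, 1.0
x = rng.normal(size=200_000)     # samples drawn under the null

e = np.exp(mu * x - mu**2 / 2)   # N(mu,1) density / N(0,1) density
type1 = (e >= 1 / alpha).mean()  # realized Type-I error of the e-value test
```

Unlike p-values, such e-values can be multiplied across datasets or optionally continued, which is part of their appeal for testing pipelines.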

1 year ago 14 0 1 0
Generative modelling in latent space: Latent representations for generative models.

New blog post: let's talk about latents!
sander.ai/2025/04/15/l...

1 year ago 74 18 3 5

I'm not sure I followed this comment. I read your earlier comment about disliking mandatory cites as leaning towards allowing more author discretion, but this one reads like an argument for less author discretion?

1 year ago 0 0 1 0

However, if you cite something that you think is actively bad/wrong, I think it is perfectly fine to argue that point in the related work/discussion section, or perhaps in an extended part of it in the supplementary/appendix.

1 year ago 1 0 0 0

Ah, I see. Perhaps I then misunderstood your comment about being opinionated about what is worth citing.

As I mentioned, my comment wasn't about this specific case, as it is, from my understanding, quite a bit more complex than what was available on OpenReview.

1 year ago 1 0 2 0

Just to be extra clear, my comment was not (and is not) a comment about this specific case.

My comment was about whether it is ok in general to not cite relevant work because an author dislikes it and therefore doesn't think it is worth citing.

Of course relevance is to some degree subjective.

1 year ago 0 0 1 0

Indeed, as I mentioned, it is not black and white, and there is of course a cutoff. But not citing relevant papers because of personal taste is the wrong direction imo.

1 year ago 2 0 2 0

Of course there is a grayscale, but I don't think one's personal opinion about a work's worth should be given much weight when deciding whether a citation is warranted or not.

(note these are comments about citation norms in general and not this case in particular)

1 year ago 1 0 1 0

In general, I believe in stronger norms, as weaker ones would allow for even more abuse and gaming than whatever our current norms are. If the work is relevant, it should be cited. If the work is highly relevant, it should be cited and discussed. In the discussion you can ofc give your opinion about it.

1 year ago 2 0 1 0