
Posts by Quentin Bertrand


My paper on Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness (together with collaborators from EPFL) was accepted as an oral at NeurIPS! We extend the theory for our Scion algorithm to include gradient clipping. Read about it here arxiv.org/abs/2506.01913

7 months ago

Thanks!

7 months ago

Our work on the generalization of Flow Matching got an oral at NeurIPS!

Go see @quentinbertrand.bsky.social present it there :)

7 months ago

New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719

🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?

w @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social 👇👇👇

10 months ago

What an amazing week with insightful discussions and interactions! @franceausenegal.bsky.social

1 year ago
Output of DecisionBoundaryDisplay for a set of probabilistic classifiers on a 3-class classification problem.

The two logistic regression models fitted on the original features display linear decision boundaries, as expected. For this particular problem, this does not seem to be detrimental: both models are competitive with the non-linear models when quantitatively evaluated on the test set. We can observe that the amount of regularization influences the model's confidence: lighter colors for the strongly regularized model with a lower value of C. Regularization also impacts the orientation of the decision boundary, leading to slightly different ROC AUC scores.

The log-loss, on the other hand, evaluates both sharpness and calibration, and as a result strongly favors the weakly regularized logistic regression model, probably because the strongly regularized model is under-confident. This could be confirmed by looking at the calibration curve using sklearn.calibration.CalibrationDisplay.
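Such a check could look like the following sketch (not the example's actual code). It uses sklearn.calibration.calibration_curve, the computation underlying CalibrationDisplay, so the result is inspectable as arrays; the dataset and C values are illustrative:

```python
# Compare the calibration of a strongly vs weakly regularized
# logistic regression on a synthetic binary problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for C in (0.01, 1.0):  # strong vs weak regularization
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]
    # Fraction of positives vs mean predicted probability, per bin:
    # an under-confident model shows predictions squeezed toward 0.5.
    frac_pos, mean_pred = calibration_curve(y_test, proba, n_bins=5)
    print(f"C={C}: mean predicted probability per bin {mean_pred.round(2)}")
```

CalibrationDisplay.from_estimator would produce the corresponding plot directly from `clf` and the test set.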

The logistic regression model with RBF features has a “blobby” decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the Gaussian process classifier which is configured to use an RBF kernel.

The logistic regression model fitted on binned features with interactions has a decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the gradient boosting classifier: both models favor axis-aligned decisions when extrapolating to unseen regions of the feature space.

The logistic regression model fitted on spline features with interactions has a similar axis-aligned extrapolation behavior but a smoother decision boundary in the dense region of the feature space than the two previous models.


Recently merged in scikit-learn's main branch: display the maximum predicted class probability in 2D continuous feature spaces (mostly for didactic purposes):

scikit-learn.org/dev/auto_exa...

The linked example has been updated to include some conclusions we can draw from this plot.
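Under the hood, the quantity being displayed is simply the maximum of predict_proba over a 2D grid. A minimal sketch with a hypothetical dataset and model, computing the grid manually rather than through DecisionBoundaryDisplay:

```python
# Maximum predicted class probability over a 2D grid for a 3-class
# problem: the scalar field the display colors.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min(), X[:, 0].max(), 50),
    np.linspace(X[:, 1].min(), X[:, 1].max(), 50),
)
grid = np.c_[xx.ravel(), yy.ravel()]
max_proba = clf.predict_proba(grid).max(axis=1).reshape(xx.shape)
# Near a decision boundary max_proba approaches 1/3 (for 3 classes);
# deep inside a class region it approaches 1.
```

In recent scikit-learn, DecisionBoundaryDisplay.from_estimator with response_method="predict_proba" produces this plot directly, per the merged feature the post links to.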

1 year ago

Visit the playground at the end of our blog post (with co-authors @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, @quentinbertrand.bsky.social)
dl.heeere.com/cfm#cfm-play...

1 year ago

👩‍🎓👨‍🎓 Internship offers (1st step to PhD program) in my group:
team.inria.fr/soda/job-off...

Topics:
◼ Health AI & causality, accounting for censoring (for people who love health impact)
◼ Foundation models for tabular learning (for people into bigger models)

Come work with us!

1 year ago

This blog post provides intuition and nice illustrations to understand normalizing flows and flow matching techniques!

w. @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, and @remiemonet.bsky.social (the king for figures)

1 year ago

Very cool ref! Did not know about it!

1 year ago

Nice blogpost and very cool illustrations 😍. I will die on the hill that most of the FM ideas were introduced back in 2021 by Stefano Peluchetti in his underappreciated paper openreview.net/forum?id=oVf...

1 year ago

Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Remi Emonet, and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!

We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423

1 year ago