
Posts by Mathieu Blondel


πŸ“£ Please share: the 29th International Conference on Artificial Intelligence and Statistics (#AISTATS 2026) welcomes paper submissions at the intersection of AI, machine learning, statistics, and related areas. [1/3]

8 months ago

I'm not on TV yet, but I'm on YouTube 😊 talking about research, ML, how I prepare talks and the difference between Bayesian and frequentist statistics.

Many thanks to Charles Riou, who has already posted many interview videos with ML & stats researchers on his YouTube channel "ML New Papers"!! πŸ™

11 months ago

1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to 🀿:

1 year ago

Am I the only one who feels this is awful? If someone wants to remain anonymous, people should respect that...

1 year ago

SchrΓΆdinger's snack

1 year ago

yes!

1 year ago
Preview: "Loss Functions and Operators Generated by f-Divergences". The logistic loss (a.k.a. cross-entropy loss) is one of the most popular loss functions used for multiclass classification. It is also the loss function of choice for next-token prediction in language...

Cool work! We recently found that Tsallis q=1.5 (alpha=1.5 in our notation) seems to work really well across several datasets for language modeling: arxiv.org/abs/2501.18537 It would be great to find a theoretical justification for why 1.5 seems to be a sweet spot.

1 year ago
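The Tsallis (alpha) family mentioned above comes with a sparse counterpart of softmax. As a minimal sketch (not the paper's code, and `entmax_15` is my own name for it), here is the classical alpha=1.5 entmax map obtained from Tsallis entropy regularization, computed by bisection on the normalization threshold:

```python
import numpy as np

def entmax_15(z, n_iter=60):
    """alpha-entmax for alpha = 1.5 (Tsallis-entropy-regularized argmax).

    Solves p_i = [z_i/2 - tau]_+^2 with tau chosen by bisection so that
    sum(p) = 1. Unlike softmax, it can assign exactly zero probability
    to low-scoring classes.
    """
    z = np.asarray(z, dtype=float) / 2.0
    # tau lies in [max(z) - 1, max(z)]: at the lower end sum(p) >= 1,
    # at the upper end sum(p) = 0, and sum(p) decreases in tau.
    lo, hi = z.max() - 1.0, z.max()
    for _ in range(n_iter):
        tau = (lo + hi) / 2.0
        if (np.clip(z - tau, 0.0, None) ** 2).sum() >= 1.0:
            lo = tau
        else:
            hi = tau
    p = np.clip(z - (lo + hi) / 2.0, 0.0, None) ** 2
    return p / p.sum()  # tiny renormalization for numerical safety
```

For example, `entmax_15([2.0, 1.0, -1.0])` puts exactly zero mass on the last coordinate, whereas softmax would keep it strictly positive.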

πŸ§—β€β™‚οΈWhy GD converges beyond [step size]<2/[smoothness]? We investigate loss functions and identify their *separation margin* is an important factor. Surprisingly Renyi 2-entropy yields super fast rate T=Ξ©(Ξ΅^{-1/3})!
arxiv.org/abs/2502.04889

1 year ago

Modern post-training is essentially distillation then RL. While reward hacking is well-known and feared, could there be such a thing as teacher hacking? Our latest paper confirms it. Fortunately, we also show how to mitigate it! The secret: diversity and onlineness! arxiv.org/abs/2502.02671

1 year ago

The reason is that the usual duality theory still works in the spaces of functions and probability measures, but breaks down in the space of network parameters. We need to apply duality first and then parameterize, not the other way around!

1 year ago

The EBM paper below parameterizes dual variables as neural nets. This idea (which has been used in other contexts such as OT or GANs) is very powerful and may be *the* way duality can be useful for neural nets (or rather, neural nets can be useful for duality!).

1 year ago

Surprisingly, we found that we still obtain good performance even if we use the classical softargmax at inference time and our losses at train time. This means that we can keep the inference code the same and just change the training code, which is useful e.g. for open-weight LMs

1 year ago

We obtain good performance across several language modeling tasks with the alpha-divergence, for alpha=1.5.

1 year ago

The table below summarizes the link between some entropies and f-divergences.

1 year ago

2) We instantiate Fenchel-Young losses with f-divergence regularization. This generalizes the cross-entropy loss in two directions: i) by replacing the KL with f-divergences and ii) by allowing non-uniform prior class weights. Each loss is associated with an f-softargmax operator.

1 year ago
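To make the Fenchel-Young construction concrete, here is a minimal sketch (my own illustration, not the paper's code) of the loss L(theta; y) = Omega*(theta) + Omega(y) - <theta, y>. Taking Omega to be the Shannon negentropy recovers the ordinary cross-entropy, the special case that the f-divergence family generalizes:

```python
import numpy as np

def fy_loss_shannon(theta, y):
    """Fenchel-Young loss L(theta; y) = Omega*(theta) + Omega(y) - <theta, y>.

    With Omega(p) = sum_i p_i log p_i (Shannon negentropy), the convex
    conjugate is Omega*(theta) = logsumexp(theta). For a one-hot target y
    we have Omega(y) = 0, so the loss reduces to the usual cross-entropy
    logsumexp(theta) - theta_y, and its gradient is softmax(theta) - y.
    """
    theta = np.asarray(theta, dtype=float)
    # Numerically stable logsumexp.
    lse = np.log(np.exp(theta - theta.max()).sum()) + theta.max()
    # Omega(y) = sum_i y_i log y_i, with 0 log 0 = 0.
    omega_y = np.where(y > 0, y * np.log(np.clip(y, 1e-12, None)), 0.0).sum()
    return lse + omega_y - theta @ y
```

Swapping in a different regularizer Omega (e.g. one generated by an f-divergence, as in the paper) changes both the loss and its associated prediction operator, while this generic three-term structure stays the same.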

Our approach naturally generalizes to Fenchel-Young losses, allowing us to obtain the first tractable approach for optimizing the sparsemax loss in general combinatorial spaces.

1 year ago

We propose a new joint formulation for learning the EBM and the log-partition, and an MCMC-free doubly stochastic optimization scheme with unbiased gradients.

1 year ago

Pushing this idea a little bit further, we can parameterize the log-partition as a separate neural network. This allows us to evaluate the *learned* log-partition on new data points.

1 year ago

By treating the log-partition not as a quantity to compute but as a variable to optimize, we no longer need it to be exact (in machine learning we never look for exact solutions to optimization problems!).

1 year ago

1) EBMs are generally challenging to train due to the partition function (normalization constant). At first, learning the partition function seems weird O_o But the log-partition exactly coincides with the Lagrange multiplier (dual variable) associated with equality constraints.

1 year ago
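A toy illustration of this idea (my own construction; the paper's actual formulation is in the arXiv link): treat log Z as a scalar variable b and minimize b + e^{-b} E_q[e^{-E(x)}/q(x)], which is convex in b and minimized exactly at b = log Z. A single importance-weighted sample gives an unbiased gradient, so plain SGD works with no MCMC:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * x**2  # Z = integral exp(-x^2/2) dx = sqrt(2*pi)

# Proposal q = N(0, sigma^2); importance weight w(x) = exp(-E(x)) / q(x),
# so that E_q[w(x)] = Z.
sigma = 2.0
def weight(x):
    q = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.exp(-energy(x)) / q

# Minimize f(b) = b + exp(-b) * E_q[w(x)] over b.  Setting f'(b) = 0 gives
# exp(b) = Z, i.e. b* = log Z.  The one-sample gradient 1 - exp(-b) * w(x)
# is unbiased, so b is just another SGD variable alongside the EBM params.
b, lr = 0.0, 0.05
avg, n_avg = 0.0, 0
for t in range(20000):
    x = rng.normal(0.0, sigma)
    b -= lr * (1.0 - np.exp(-b) * weight(x))
    if t >= 10000:  # Polyak averaging over the second half
        avg += b
        n_avg += 1
b_hat = avg / n_avg  # should approach log Z = 0.5 * log(2*pi) ~ 0.919
```

In the full method b would be a network parameter (or, as the later post suggests, a separate network evaluated on new data points) trained jointly with the energy; here the energy is fixed so the dual-variable mechanism is visible in isolation.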

Really proud of these two companion papers by our team at GDM:

1) Joint Learning of Energy-based Models and their Partition Function
arxiv.org/abs/2501.18528

2) Loss Functions and Operators Generated by f-Divergences
arxiv.org/abs/2501.18537

A thread.

1 year ago

Sparser, better, faster, stronger

1 year ago

Former French minister of Education and "philosopher" Luc Ferry, who said a few years ago that maths was useless, wrote a book on artificial intelligence πŸ˜‚

1 year ago

Huge congrats!

1 year ago

We are organising the First International Conference on Probabilistic Numerics (ProbNum 2025) at EURECOM in southern France in Sep 2025. Topics: AI, ML, Stat, Sim, and Numerics. Reposts very much appreciated!

probnum25.github.io

1 year ago

Slides for a general introduction to the use of Optimal Transport methods in learning, with an emphasis on diffusion models, flow matching, training two-layer neural networks, and deep transformers. speakerdeck.com/gpeyre/optim...

1 year ago
MLSS Senegal 2025

MLSS is coming to Senegal!

πŸ“ AIMS Mbour, Senegal
πŸ“… June 23 - July 4, 2025

An international summer school to explore, collaborate, and deepen your understanding of machine learning in a unique and welcoming environment.
Details: mlss-senegal.github.io

1 year ago

But ensuring that your program supports complex numbers throughout could be a bit tedious.

1 year ago
4th and 5th of December, Sorbonne Center for Artificial Intelligence (SCAI)

Thrilled to be co-organizing NeurIPS in Paris at @sorbonne-universite.fr next week!

πŸ“‘ 100 papers from NeurIPS 2024. Nearly twice as many as in 2023!
πŸ§‘β€πŸŽ“ over 300 registered participants
βœ… a local and sustainable alternative to flying to Vancouver.

More info: neuripsinparis.github.io/neurips2024p...

1 year ago