
Posts by Thomas George

Ilies Chibane, Thomas George, Pierre Nodet, Vincent Lemaire: Calibration improves detection of mislabeled examples. https://arxiv.org/abs/2511.02738 https://arxiv.org/pdf/2511.02738 https://arxiv.org/html/2511.02738

5 months ago

Aziz Bacha, Thomas George
Training Feature Attribution for Vision Models
https://arxiv.org/abs/2510.09135

6 months ago

📢 Talk Announcement

"Unlock the full predictive power of your multi-table data", by Luc-Aurélien Gauthier and Alexis Bondu

📜 Talk info: pretalx.com/pydata-paris-2025/talk/H9X8TG
📅 Schedule: pydata.org/paris2025/schedule
🎟 Tickets: pydata.org/paris2025/tickets

8 months ago
PhD thesis: Explaining "black box" AI algorithms through their training examples. Global context: Recent advances in machine learning have led to new AI applications promising increased automation of new tasks to enhance operational efficiency or relie...

PhD offer at Orange Innov in Paris: example-based explainability of deep networks' predictions.

Please share with interested candidates, or do not hesitate to reach out to me for further information 😁

1 year ago

Very interesting challenge! How will you balance accuracy and energy efficiency in your final score?

1 year ago

A unified view of mislabeling detection methods using a simple principle: your trained machine learning model knows more about your data than what you usually query it for (i.e., its predicted class). Instead, there are many other ways to *probe* it.

www.youtube.com/watch?v=fT9V...
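A minimal sketch of the "probe beyond the predicted class" idea from the post above: instead of only asking the model for its argmax prediction, query the out-of-fold probability it assigns to each example's *given* label (its self-confidence), and flag low-confidence examples as mislabel candidates. The dataset, the logistic-regression model, and the 10% flip rate are illustrative assumptions, not details from the post.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X, y = make_classification(
    n_samples=500, n_classes=3, n_informative=5, random_state=0
)

# Simulate mislabeling: flip 10% of the labels to a different class.
n_flip = 50
flip_idx = rng.choice(len(y), size=n_flip, replace=False)
y_noisy = y.copy()
y_noisy[flip_idx] = (y_noisy[flip_idx] + rng.integers(1, 3, size=n_flip)) % 3

# The probe: out-of-fold predicted probabilities, so each example is
# scored by a model that never trained on it.
proba = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y_noisy, cv=5,
    method="predict_proba",
)
# Self-confidence = probability assigned to the (possibly wrong) given label.
self_confidence = proba[np.arange(len(y_noisy)), y_noisy]

# Lowest self-confidence examples are the mislabel suspects.
suspects = np.argsort(self_confidence)[:n_flip]
recall = len(set(suspects) & set(flip_idx)) / n_flip
print(f"fraction of flipped labels among top-{n_flip} suspects: {recall:.2f}")
```

Many of the probes surveyed in the talk reduce to a choice of scoring function here; self-confidence is just the simplest one to write down.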

1 year ago
Implicit Regularization via Neural Feature Alignment We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features i...

Congratulations on a very interesting paper! On the same topic, allow me to advertise our AISTATS paper arxiv.org/abs/2008.00938, where we use the "sum of linearized steps" view to derive a Rademacher complexity bound that uses tangent features during training (fig. 6).

1 year ago