
Posts by CSML IIT Lab


Almost 5 years in the making... "Hyperparameter Optimization in Machine Learning" is finally out! 📘

We designed this monograph to be self-contained, covering: Grid, Random & Quasi-random search, Bayesian & Multi-fidelity optimization, Gradient-based methods, Meta-learning.

arxiv.org/abs/2410.22854

4 months ago
Preview
Hyperparameter Optimization in Machine Learning
Hyperparameters are configuration variables controlling the behavior of machine learning algorithms. They are ubiquitous in machine learning and artificial intelligence and the choice of their values ...
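For a feel of the simplest method family the monograph covers, here is a minimal random-search sketch (a toy scikit-learn setup of our own, not code from the book):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, random_state=0)

best_score, best_cfg = -np.inf, None
for _ in range(20):  # budget of 20 random trials
    # Sample a configuration from a simple, hypothetical search space.
    cfg = {
        "n_estimators": int(rng.integers(10, 200)),
        "max_depth": int(rng.integers(2, 16)),
    }
    model = RandomForestClassifier(**cfg, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, best_score)  # incumbent configuration and its CV score
```

Bayesian and multi-fidelity methods, also covered in the monograph, replace this blind sampling with a model of the score surface.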

🚨 OpenReview might have leaked names, but it won't leak the best hyperparameters, unfortunately! 😅

Tired of the drama? Solve your HPO problems before the ICML deadline with this new monograph by our own Luca Franceschi & Massimiliano Pontil (& colleagues).

arxiv.org/abs/2410.22854

4 months ago

He will also present an entropy-respecting forward–backward learning scheme that mitigates the inherent ill-posedness of stochastic learning problems.

Join us for what promises to be a very insightful session!

5 months ago

In this talk, Arthur Bizzi will introduce Neural Kolmogorov Equations, a deterministic and parallelizable framework for learning continuous-time stochastic processes using Forward and Backward Kolmogorov Equations.

5 months ago

Abstract:
Learning differential equations becomes substantially more challenging in the presence of stochasticity, as Neural SDEs typically require expensive, sequential integration during training.

5 months ago
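To see why that integration is sequential, here is a generic Euler-Maruyama sketch (the standard textbook scheme, not the speaker's method): each step consumes the previous state, so the loop cannot be parallelized across time.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW, one step at a time."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        # Each update depends on the previous state: inherently sequential.
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x.copy())
    return np.stack(path)

rng = np.random.default_rng(0)
# Ornstein-Uhlenbeck process: dX = -X dt + 0.5 dW
path = euler_maruyama(lambda x: -x, lambda x: 0.5, [1.0], dt=0.01, n_steps=1000, rng=rng)
```

A Kolmogorov-equation approach instead targets the PDEs governing the process's distribution, which is what opens the door to parallelizable training.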

📢 Upcoming Talk at Our Lab

We’re excited to host Arthur Bizzi from EPFL for a research talk next week!

Title: Towards Neural Kolmogorov Equations: Parallelizable SDE Learning with Neural PDEs

🗓 Date: November 19
⏰ Time: 16:00 CET
📍 Galileo Sala, CHT @iitalk.bsky.social

5 months ago

Excited to share our group’s latest work at #AISTATS2025! 🎓
Tackling concentration in dependent data settings with empirical Bernstein bounds for Hilbert space-valued processes.
📍 Catch the poster tomorrow!

🔁 See the original tweet for details!

11 months ago
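For context, the classical scalar empirical Bernstein bound (Maurer and Pontil, 2009) for i.i.d. samples Z_1, ..., Z_n in [0, 1] states that, with probability at least 1 - delta,

```latex
\mathbb{E}[Z] \;\le\; \frac{1}{n}\sum_{i=1}^{n} Z_i
\;+\; \sqrt{\frac{2 V_n \ln(2/\delta)}{n}}
\;+\; \frac{7 \ln(2/\delta)}{3(n-1)},
```

where V_n is the sample variance; the variance-adaptive first term is what makes the bound "empirical". The AISTATS paper extends this flavor of bound to dependent, Hilbert space-valued processes.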

DeltaProduct is here! Achieve better state tracking through highly parallel execution. Explore more! 🚀

1 year ago
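The parallelism claim rests on a classical fact: a linear recurrence h_t = a_t * h_{t-1} + b_t is a composition of affine maps, and composition is associative, so all states can be computed with a log-depth parallel scan. A minimal illustration (generic sketch, not the DeltaProduct code):

```python
def combine(f, g):
    """Compose affine maps h -> a*h + b: first apply f, then g."""
    a_f, b_f = f
    a_g, b_g = g
    return a_g * a_f, a_g * b_f + b_g

def prefix_scan(maps):
    """Inclusive scan, written sequentially for clarity. Because `combine`
    is associative, the same result can be computed as a log-depth
    parallel tree reduction, which is what enables fast training."""
    out, acc = [], (1.0, 0.0)  # identity map: h -> 1*h + 0
    for m in maps:
        acc = combine(acc, m)
        out.append(acc)
    return out

# Recurrence h_t = a_t * h_{t-1} + b_t with h_0 = 0
a, b = [0.5, -1.0, 0.9], [1.0, 0.0, 2.0]
h = [A * 0.0 + B for A, B in prefix_scan(list(zip(a, b)))]
print(h)  # [1.0, -1.0, 1.1]
```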
Preview
Slow dynamical modes from static averages
In recent times, efforts are being made to describe the evolution of a complex system not through long trajectories, but via the study of probability distribution evolution. This more collective app...

[P11] (submitted to The Journal of Chemical Physics)
chemrxiv.org/engage/chemr...

Kooplearn library:
kooplearn.readthedocs.io/latest/

For a longer version of this thread, take a look at the blog post:
vladi-iit.github.io/posts/2024-1...

1 year ago
Preview
Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces
We study a class of dynamical systems modelled as Markov chains that admit an invariant distribution via the corresponding transfer, or Koopman, operator. While data-driven algorithms to reconstruct s...

Publications:
[P1] NeurIPS 2022
arxiv.org/abs/2205.14027

[P2] NeurIPS 2023
arxiv.org/abs/2302.02004

[P3] ICML 2024
arxiv.org/abs/2312.13426

[P4] NeurIPS 2023
arxiv.org/abs/2306.04520

[P5] ICLR 2024
arxiv.org/abs/2307.09912

[P6] NeurIPS 2024
arxiv.org/abs/2405.12940

1 year ago

14/ Looking ahead, we’re excited to tackle new challenges:
• Learning from partial observations
• Modeling non-time-homogeneous dynamics
• Expanding applications in neuroscience, genetics, and climate modeling

Stay tuned for groundbreaking updates from our team! 🌍

1 year ago

🙏 Collaborations with the Dynamic Legged Systems group led by Claudio Semini and the Atomistic Simulations group led by Michele Parrinello enriched our research, resulting in impactful works like [P9, P10] and [P7, P11].

1 year ago

12/ This journey wouldn’t have been possible without the inspiring collaborations that shaped our work.

🌟 Special thanks to Karim Lounici from École Polytechnique, whose insights were a major driving force behind many projects.

1 year ago
Predicting the quantiles for opening/closing of the Chignolin protein in the next simulation step

11/ One of our most exciting results:
[P8] NeurIPS 2024 proposed Neural Conditional Probability (NCP) to efficiently learn conditional distributions. It simplifies uncertainty quantification and guarantees accuracy for nonlinear, high-dimensional data.

1 year ago
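The quantile predictions in the figure follow from a standard identity: a conditional CDF is the conditional expectation of an indicator, so any accurate model of conditional expectations yields quantiles by inversion (generic background, not the NCP construction itself):

```latex
F(y \mid x) \;=\; \mathbb{E}\!\left[\mathbf{1}\{Y \le y\} \mid X = x\right],
\qquad
q_\alpha(x) \;=\; \inf\{\, y : F(y \mid x) \ge \alpha \,\}.
```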

10/ [P7] NeurIPS 2024 developed methods to discover slow dynamical modes in systems like molecular simulations. This is transformative for studying rare events and costly data acquisition scenarios in atomistic systems.

1 year ago

9/ Addressing continuous dynamics:
[P6] NeurIPS 2024 introduced a physics-informed framework for learning Infinitesimal Generators (IG) of stochastic systems, ensuring robust spectral estimation.

1 year ago
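For reference, when the system is a diffusion dX_t = b(X_t) dt + sigma(X_t) dW_t, the IG being learned takes the standard form

```latex
(\mathcal{L}f)(x) \;=\; b(x) \cdot \nabla f(x)
\;+\; \tfrac{1}{2}\,\mathrm{Tr}\!\left[\sigma(x)\sigma(x)^{\top}\,\nabla^{2} f(x)\right],
```

acting on smooth test functions f.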

8/ 🌟 Representation learning takes center stage in:
[P5] ICLR 2024
We combined neural networks with operator theory via Deep Projection Networks (DPNets). This approach enhances robustness, scalability, and interpretability for dynamical systems.

1 year ago
Free energy surface of Chignolin protein folding

7/ 📈 Scaling up:
[P4] NeurIPS 2023 introduced a Nyström sketching-based method that reduces computational cost from cubic to almost linear without sacrificing accuracy, validated on massive datasets such as molecular dynamics simulations; see the figure.

1 year ago
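A minimal sketch of the Nyström idea (generic kernel approximation with hypothetical sizes, not the paper's estimator): approximate the n x n kernel matrix through m << n landmark points, so downstream solves cost roughly O(n m^2) instead of O(n^3).

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
n, m = 2000, 100                          # m << n landmark points
X = rng.normal(size=(n, 3))
landmarks = X[rng.choice(n, size=m, replace=False)]

K_nm = rbf(X, landmarks)                  # n x m block
K_mm = rbf(landmarks, landmarks)          # m x m block
# Nystrom: K ~= K_nm @ pinv(K_mm) @ K_nm.T -- the full n x n matrix
# is never formed; all downstream linear algebra stays low-rank.
factor = K_nm @ np.linalg.pinv(K_mm)      # n x m factor of the approximation
```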
Effects of metric distortion in learning eigenvalues (left) and stabilization of forecasting (right) for the Ornstein-Uhlenbeck process

6/ [P3] ICML 2024 addressed a critical issue in TO-based modeling: reliable long-term predictions.
Our Deflate-Learn-Inflate (DLI) paradigm ensures uniform error bounds, even for infinite time horizons. This method stabilized predictions in real-world tasks; see the figure.

1 year ago

5/ [P2] NeurIPS 2023 advanced TOs with theoretical guarantees for spectral decomposition, which previously lacked finite-sample analysis. We derived sharp learning rates, enabling accurate, reliable models of long-term system behavior.

1 year ago
Koopman Operator Regression Pipeline

4/ 🔑 The journey began with:
[P1] NeurIPS 2022
We introduced the first ML formulation for learning TO, which led to the development of the open-source Kooplearn library. This step laid the groundwork for exploring the theoretical limits of operator learning from finite data.

1 year ago
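A minimal sketch of what "learning a TO" means in practice (ridge regression on a fixed, hypothetical feature dictionary; illustrative only, not the Kooplearn API):

```python
import numpy as np

def phi(x):
    """Hypothetical dictionary of observables."""
    return np.stack([x, x**2, np.sin(x)], axis=-1)

rng = np.random.default_rng(0)
# One long trajectory of a noisy linear system: x_{t+1} = 0.9 x_t + noise
x = np.zeros(1000)
for t in range(999):
    x[t + 1] = 0.9 * x[t] + 0.1 * rng.normal()

Phi_now, Phi_next = phi(x[:-1]), phi(x[1:])
lam = 1e-6
# Ridge estimate of the operator matrix K on the feature space:
# minimize ||Phi_next - Phi_now @ K||^2 + lam * ||K||^2
K = np.linalg.solve(Phi_now.T @ Phi_now + lam * np.eye(3), Phi_now.T @ Phi_next)
print(np.linalg.eigvals(K))  # spectrum -> dominant modes and timescales
```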

3/ TOs describe system evolution over finite time intervals, while IGs capture instantaneous rates of change. Their spectral decomposition is key for identifying dominant modes and understanding long-term behavior in complex or stochastic systems.

1 year ago
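Concretely, for a Markov process (X_t) the two objects are tied together: the TOs form a semigroup generated by the IG,

```latex
(\mathcal{A}_t f)(x) \;=\; \mathbb{E}\left[\, f(X_t) \mid X_0 = x \,\right],
\qquad
\mathcal{L}f \;=\; \lim_{t \downarrow 0} \frac{\mathcal{A}_t f - f}{t},
\qquad
\mathcal{A}_t \;=\; e^{t\mathcal{L}} .
```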

2/ 🌐 Our work revolves around Markov/Transfer Operators (TO) and their Infinitesimal Generators (IG)—tools that allow us to model complex dynamical systems by understanding their evolution in higher-dimensional spaces. Here’s why this matters.

1 year ago

1/ 🚀 Over the past two years, our team, CSML at IIT, has made significant strides in the data-driven modeling of dynamical systems. Curious about how we use advanced operator-based techniques to tackle real-world challenges? Let’s dive in! 🧵👇

1 year ago

An inspiring dive into understanding dynamical processes through 'The Operator Way.' A fascinating approach made accessible for everyone—check it out! 👇👀

1 year ago
Preview
Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers in large language modeling, offering linear scaling with…

Excited to present
"Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues"
at the M3L workshop at #NeurIPS
https://buff.ly/3BlcD4y

If interested, you can attend the presentation on the 14th at 15:00, stop by the afternoon poster session, or DM me to discuss :)

1 year ago
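A toy version of the headline point: once the recurrence eigenvalue may be negative, a single linear state can track the parity of a bit stream, something no recurrence confined to eigenvalues in [0, 1] can do (illustrative sketch, not the paper's model):

```python
def parity_via_linear_rnn(bits):
    """Linear recurrence h_t = a(x_t) * h_{t-1} with a(1) = -1, a(0) = +1."""
    h = 1.0
    for x in bits:
        h *= -1.0 if x == 1 else 1.0  # negative eigenvalue flips the sign
    return 0 if h > 0 else 1          # sign of the state encodes parity

assert parity_via_linear_rnn([1, 0, 1, 1]) == 1  # three ones -> odd parity
assert parity_via_linear_rnn([1, 1, 0, 0]) == 0  # two ones  -> even parity
```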

In his book “The Nature of Statistical Learning Theory”, V. Vapnik wrote:
“When solving a given problem, try to avoid a more general problem as an intermediate step”

1 year ago

Join us at our posters and talks to connect, share ideas, and explore collaborations. 🚀✨

1 year ago

🔬 Fine-tuning Foundation Models for Molecular Dynamics: A Data-Efficient Approach with Random Features
✍️ @pienovelli.bsky.social, L. Bonati, P. Buigues, G. Meanti, L. Rosasco, M. Pontil | 📅 ML4PS Workshop, Dec 15.

1 year ago
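For readers unfamiliar with the last ingredient: random-features regression fits a closed-form linear model on a randomized nonlinear embedding, which is what makes the approach data-efficient. A generic sketch (random Fourier features in the style of Rahimi and Recht; toy data, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # toy stand-in for molecular descriptors
y = np.sin(X).sum(axis=1)          # toy regression target

# Random Fourier features approximating an RBF kernel
D = 500
W = rng.normal(size=(X.shape[1], D))          # random projection directions
phase = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + phase)  # (200, D) feature matrix

# Ridge regression in closed form: one small linear solve, no backprop
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
pred = Z @ w
```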

🔗 Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
✍️ R. Grazzi, J. Siems, J. Franke, A. Zela, F. Hutter, M. Pontil
📃 https://arxiv.org/abs/2411.12537 | 📅 Oral @ M3L workshop, Dec 14, 15:00 - 15:15.

1 year ago