
Posts by Bartolomeo Stellato

Honored to receive a 2026 Sloan Research Fellowship in Mathematics. This wouldn't be possible without my entire research group at Princeton, and I'm grateful to the colleagues who supported my research. #SloanFellow

1 month ago 11 2 0 0

Proud to celebrate the graduation of my PhD student Vinit Ranjan, who defended his thesis this month: "Beyond the Worst Case: Verification of First-Order Methods for Parametric Optimization Problems" 🎉 Congratulations Dr. Ranjan!

3 months ago 8 0 0 0

Wishing everyone happy holidays! 🎄 Feeling lucky to work with such a fantastic group of students. Here's to good research, great company, and Neapolitan pizza 🍕

3 months ago 8 0 0 0

New preprint! 📄 Data-driven convergence guarantees for first-order methods via PEP + Wasserstein DRO.

Less pessimistic probabilistic rates that reflect how your solver actually behaves 🎯

📎 arxiv.org/abs/2511.17834
💻 github.com/stellatogrp/dro_pep

w/ Jisun Park & Vinit Ranjan #optimization #fom

3 months ago 2 2 0 0
AI helps Princeton scientists plot the best paths for space exploration When a spacecraft or probe is sent to explore Mars or do flybys of one of Saturn's moons, it stays in constant contact with mission control back on Earth, where scientists recalculate and adjust as ne...

Autonomous spacecraft are still a far-off ideal 🚀

But Ryne Beeson and @stella.to are taking the first steps in that direction by finding the optimal trajectories to a given planet or moon with the help of machine learning: ai.princeton.edu/news/2025/ai...

3 months ago 3 1 0 0

📚 New Arxiv Paper

Title: Data-driven Analysis of First-Order Methods via Distributionally Robust Optimization
Authors: Jisun Park, Vinit Ranjan, Bartolomeo Stellato

Read more: https://arxiv.org/abs/2511.17834

4 months ago 5 1 0 0

📢 New in JMLR (w/ @rajivsambharya.bsky.social)! 🎉 Data-driven guarantees for classical & learned optimizers via sample bounds + PAC-Bayes theory.

📄 jmlr.org/papers/v26/2...
💻 github.com/stellatogrp/...

7 months ago 7 3 0 0

📢 Our paper "Verification of First-Order Methods for Parametric Quadratic Optimization" with my student Vinit Ranjan (vinitranjan1.github.io/) has been accepted to Mathematical Programming! 🎉

🔗 DOI: doi.org/10.1007/s10107-025-02261-w
📄 arXiv: arxiv.org/pdf/2403.033...
💻 Code: github.com/stellatogrp/...

8 months ago 10 1 0 0

I'm happy to share that I'll be spending the fall semester at Princeton as a visiting student in the Department of Operations Research and Financial Engineering (ORFE), working with @stellato.io funded through the WASP program. If you're in the area and would like to connect, feel free to reach out.

8 months ago 11 1 0 0

🔄 Updated Arxiv Paper

Title: Exact Verification of First-Order Methods via Mixed-Integer Linear Programming
Authors: Vinit Ranjan, Jisun Park, Stefano Gualandi, Andrea Lodi, Bartolomeo Stellato

Read more: https://arxiv.org/abs/2412.11330

1 year ago 6 2 0 0

📚 New Arxiv Paper

Title: Data Compression for Fast Online Stochastic Optimization
Authors: Irina Wang, Marta Fochesato, Bartolomeo Stellato

Read more: https://arxiv.org/abs/2504.08097

11 months ago 3 2 0 0

🚀 Gave a talk at the EURO @euroonline.bsky.social Seminar Series on "Data-Driven Algorithm Design and Verification for Parametric Convex Optimization"!

๐ŸŽฅ Recording: https://euroorml.euro-online.org/

Big thanks to Dolores Romero Morales for the invitation! 🙌 #MachineLearning #Optimization #ORMS

1 year ago 7 1 0 0

The new season of the Robust Optimization Webinar (#ROW) starts this week. Our first presentation will take place this Friday, January 24, at 15:00 (CET).

Speaker: Peyman Mohajerin Esfahani (TU Delft)

Title: Inverse Optimization: The Role of Convexity in Learning

1 year ago 15 3 1 0

📚 New Arxiv Paper

Title: Exact Verification of First-Order Methods via Mixed-Integer Linear Programming
Authors: Vinit Ranjan, Stefano Gualandi, Andrea Lodi, Bartolomeo Stellato

Read more: https://arxiv.org/abs/2412.11330

1 year ago 9 3 0 0

What happens to the hyperparameters of learned optimizers? Turns out, we learn long steps! 🚀

👇 Check out our latest work with @rajivsambharya.bsky.social!

1 year ago 10 3 0 1

Clustering is a powerful tool for decision-making under uncertainty!

Work w/ my students Irina Wang (lead) and Cole Becker, in collab. w/ Bart Van Parys

🧵 (7/7)

1 year ago 1 0 0 0

We have several examples in the paper. Here is a sparse portfolio optimization one. Clustering barely affects the solution objective. Speedups are more than 3 orders of magnitude. 🧵 (6/7)

1 year ago 1 1 1 0

By varying the number of clusters K, our method bridges Robust and Distributionally Robust optimization! We also derive theoretical bounds on 1) how to adjust the Wasserstein ball radius to compensate for clustering, and 2) how to exactly quantify the effect of clustering. 🧵 (5/7)

1 year ago 1 0 1 0

In Mean Robust Optimization, we define an uncertainty set around the cluster centroids, with weights defined by the number of samples in each cluster. 🧵 (4/7)

1 year ago 0 0 1 0

Our procedure: we first cluster N data points into K clusters. Then, we solve the Mean Robust Optimization problem. 🧵 (3/7)

1 year ago 0 0 1 0
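The clustering-plus-weights step described in this thread can be sketched as follows. This is a minimal illustration assuming plain Lloyd's k-means; the function name `cluster_for_mro` and all details are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def cluster_for_mro(data, K, iters=50, seed=0):
    """Cluster N data points into K clusters with plain k-means and
    return the centroids plus each cluster's weight, i.e. the fraction
    of the N samples that fall in that cluster."""
    rng = np.random.default_rng(seed)
    N = data.shape[0]
    # Initialize centroids from K distinct data points.
    centroids = data[rng.choice(N, size=K, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids as cluster means (keep old centroid if a
        # cluster is empty).
        centroids = np.stack([
            data[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(K)
        ])
    weights = np.bincount(labels, minlength=K) / N
    return centroids, weights

# Example: 200 points in 3 dimensions, K = 5 clusters.
X = np.random.default_rng(1).normal(size=(200, 3))
centroids, weights = cluster_for_mro(X, K=5)
# The Mean Robust Optimization problem would then place an uncertainty
# set around each centroid, weighted by `weights` (which sum to 1).
```

Note the extremes of K mentioned later in the thread: K = 1 recovers a single set around the sample mean, while K = N puts one set around every data point.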

Robust optimization is tractable but often very conservative. Wasserstein Distributionally Robust Optimization is less conservative but often computationally expensive. How can we bridge the two? 🧵 (2/7)

1 year ago 0 0 1 0

Our paper "Mean Robust Optimization" has been accepted to Mathematical Programming: https://buff.ly/3B3VpIG

📰 Arxiv (longer version): https://buff.ly/3CT4aWD
👩‍💻 Code: https://buff.ly/3ATqAXh

w/ Irina Wang, Cole Becker, and Bart Van Parys

A thread 🧵 (1/7) 👇

1 year ago 28 7 1 0

Cool! Thanks for creating this. Could you please add me? :)

1 year ago 0 0 0 0
Anytime Acceleration of Gradient Descent This work investigates stepsize-based acceleration of gradient descent with anytime convergence guarantees. For smooth (non-strongly) convex optimization, we propose a stepsize schedule that all...

arxiv.org/abs/2411.17668 Our postdoc Zihan slays another COLT open problem! proceedings.mlr.press/v247/kornows...

1 year ago 68 11 1 3

๐Ÿ“š New Arxiv Paper

Title: Learning Algorithm Hyperparameters for Fast Parametric Convex Optimization
Authors: Rajiv Sambharya, Bartolomeo Stellato

Read more: http://arxiv.org/abs/2411.15717v1

1 year ago 2 2 0 0

👋👋👋

1 year ago 0 0 1 0

Congratulations @atlaswang.bsky.social :)

1 year ago 1 0 0 0
2025 ICS Conference The 18th INFORMS Computing Society (ICS) Conference welcomes you to Toronto, Canada. We invite researchers, practitioners, and innovators to come together and share insights at the cutting edge where...

We are very excited to announce that the 2025 INFORMS Computing Society (ICS) Conference will take place March 14-16, 2025, in Toronto:

sites.google.com/view/ics-2025

Submissions for contributed talks are due on December 23.

We invite talks that showcase the dynamic interface of CS, AI & #ORMS.

1 year ago 16 4 0 0

New #arxiv bot for #optimization and #control! 🎉

bsky.app/profile/arxi...

1 year ago 26 9 0 0

Thanks @tmaehara.bsky.social! It looks great! I will let you know if I find anything wrong, but from a brief look at the first post it looks exactly like what one would expect. Thanks again!

1 year ago 1 0 1 0