
Posts by Alessio Russo

Post image

A neat result: the Complete Class Theorem.

➡️ Pick any non-Bayes decision rule: there is always a Bayes rule that is at least as good as the non-Bayes one.

When we talk about “good” procedures, we never really need to leave the Bayes world, at least for compact parameter spaces.
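In symbols, a minimal sketch of the statement (Θ, the risk R, and the Bayes rule δ_π are the standard decision-theoretic objects; regularity conditions are omitted):

```latex
% For a compact parameter space \Theta (plus regularity conditions),
% the Bayes rules form a complete class: for any decision rule \delta
% there is a prior \pi whose Bayes rule \delta_\pi weakly dominates it:
\forall \delta \;\; \exists \pi : \quad
  R(\theta, \delta_\pi) \,\le\, R(\theta, \delta)
  \quad \text{for all } \theta \in \Theta .
```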

4 months ago 0 0 0 0
Post image

Amazing poster session yesterday at #NeurIPS2025 where we presented Adversarial Diffusion for Robust Reinforcement Learning! Thanks for all the great work @DanieleFoffano, looking forward to the next one! #RL #Diffusion

4 months ago 2 2 0 0
Post image

Excited to be in San Diego next week for #NeurIPS2025 🎉!

Will present Adversarial Diffusion for Robust RL together with @DanieleFoffano. Poster session on Fri 5 Dec 7:30 p.m. EST, Exhibit Hall C,D,E.

AD-RRL uses diffusion models to train Robust RL policies.

#RL #Diffusion

4 months ago 3 1 1 0

In the picture are all the people who have worked with me (some on topics unrelated to this one, but I still feel like everyone is part of the journey!)

And I'm probably missing some!

4 months ago 0 0 0 0
Post image

Incredibly happy to have presented at the CS theory seminar at #UPenn!

Many thanks to @sikatasengupta for organizing this!

4 months ago 1 0 1 0

Thanks again to @aldopacchiano.bsky.social and #BostonUniversity for this opportunity! #reinforcementlearning #rl

4 months ago 1 0 0 0
Post image

Just wrapped up my Pure Exploration short course; it was an amazing experience 🚀 All the lectures are now online:

🔗 sites.google.com/vie...

Had a lot of fun teaching this, and I’d be happy to run it again for workshops, seminars, etc... just reach out! 📚

4 months ago 1 0 1 0

PS: In the local timezone the poster session will be at 4:30 p.m. PST

4 months ago 1 0 0 0
Preview
Adversarial Diffusion for Robust Reinforcement Learning Robustness to modeling errors and uncertainties remains a central challenge in reinforcement learning (RL). In this work, we address this challenge by leveraging diffusion models to train robust...


With Daniele Foffano & Alexandre Proutiere. Happy to meet in SD!
Paper:

4 months ago 1 0 1 0
Post image

AD-RRL uses diffusion-guided adversarial trajectories to train robust policies using a CVaR objective.

It's a Dyna-style loop: collect rollouts; train a diffusion model; adversarially guide sampling to produce worst-case trajectories; train the RL agent on this data; iterate.
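A rough sketch of that Dyna-style loop, with my own illustrative placeholder names (`collect_rollouts`, the diffusion and agent APIs are assumptions for exposition, not the paper's actual code):

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of returns."""
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(returns))))  # number of worst tail samples
    return returns[:k].mean()

def ad_rrl_loop(env, agent, diffusion, n_iters=10):
    """Hypothetical sketch of the AD-RRL training loop described above."""
    for _ in range(n_iters):
        real_traj = collect_rollouts(env, agent)             # 1. collect rollouts
        diffusion.fit(real_traj)                             # 2. train diffusion model
        adv_traj = diffusion.sample(guidance="adversarial")  # 3. worst-case trajectories
        agent.update(adv_traj, objective=cvar)               # 4. train agent on this data
    return agent                                             # 5. iterate
```

The CVaR objective is what makes the policy robust: the agent optimizes the average return over the worst-case tail of trajectories rather than the plain mean.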

4 months ago 2 0 1 0
Theory Seminar | Penn CS Theory Group The theory seminar is back for Fall 2025! Talks are on Fridays from 12-1pm, usually in Amy Gutmann Hall room 414. All talks are announced on the theory-group listserv; you can sign up here. Below is a tentative list of speakers for Fall 2025. 9/5: Tomer Ezra 9/12: Yuhao Li 9/26: George Li 10/3: Kostas Stavropoulos 10/10: Tushant Mittal 10/17: Sophie Yu 10/24: Alexandr Andoni 10/31: Giannis Fikioris 11/21: Alessio Russo 11/28: Krish Singal 12/05: Beepul Bharti

See details here: theory.cis.upenn.edu...
#ReinforcementLearning

5 months ago 0 0 0 0

Excited to be visiting #UPenn for the CS Theory Seminar tomorrow (Nov 21), where I’ll present my recent work on pure exploration in reinforcement learning, done together with @aldopacchiano.bsky.social

Many thanks to @sikatasengupta.bsky.social for organizing this!

5 months ago 2 1 1 0

In each review cycle a reviewer usually handles 2 to 5 papers. For each paper reviewed, they would receive a score from that paper's AC, normalized by that AC's pattern of scores. If during a PhD you review 20-30 papers, that should give a reasonably good estimate of the quality of your reviews.

5 months ago 0 0 0 0

How do we do that? This is not the place to brainstorm, but I applaud #ICLR's system of making reviews public. This is a first step and, in my opinion, it should be the standard. What do we need to hide? I'll also throw out a simple idea: introduce an Elo system for reviewers.
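As a back-of-the-envelope sketch of what a reviewer Elo could look like (the function name, the K-factor, and the idea of mapping the AC-normalized score to a quality signal in [0, 1] are all my own illustrative assumptions, not a concrete proposal):

```python
def elo_update(r_reviewer, r_pool, quality, k=32):
    """One Elo-style rating update for a reviewer.

    r_reviewer: reviewer's current rating.
    r_pool:     average rating of reviewers on comparable papers.
    quality:    review quality in [0, 1], e.g. the AC's assessment
                normalized by that AC's pattern of scores.
    """
    # Standard Elo expected score against the pool rating.
    expected = 1 / (1 + 10 ** ((r_pool - r_reviewer) / 400))
    return r_reviewer + k * (quality - expected)
```

A reviewer rated at the pool average who delivers an average-quality review keeps their rating; consistently poor reviews drift the rating down over the 20-30 reviews of a typical PhD.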

5 months ago 0 0 1 0

LLMs should be used to augment your skills, to empower you and make you more productive.

What we lack is a form of accountability. It is irresponsible not to make reviewers accountable for poor-quality reviews that contain wrong or false statements.

5 months ago 0 0 1 0

I don't see the review process collapsing. We won't see a hard technical failure; people will simply rely on LLMs more and more to cope with the ever-growing workload. Note that I don't oppose the use of LLMs.

5 months ago 0 0 1 0

Some people are waiting for the "inevitable" collapse of the review process. Even if a collapse happens (it's not even clear what "collapse" would mean here), unless we take action to prevent this sort of problem, it will simply happen again.

5 months ago 0 0 1 0

This news is not new. I'm surprised by the lack of action from chairs who, supposedly, should be doing something; instead I see the same pattern conference after conference.

5 months ago 0 0 1 0
Preview
Some Ethical Issues in the Review Process of Machine Learning Conferences Recent successes in the Machine Learning community have led to a steep increase in the number of papers submitted to conferences. This increase made more prominent some of the issues that affect the c...

My X and LinkedIn feeds are filled with posts complaining about the review process at supposedly "top" ML conferences. I wrote about this issue several years ago:

"Ethical Issues in the Review Process of Machine Learning Conferences"

arxiv.org/abs/2106.00810

5 months ago 1 0 1 0
Post image

Honored to be named a Top Reviewer (10%) for #NeurIPS2025 🎉.

I’ll be in San Diego and happy to meet! I’ll also be presenting, with @DanieleFoffano, our latest on Robust RL with Diffusion Models arxiv.org/abs/2509.2.... Ping me if you want to chat about RL.

#RL #DiffusionModels

5 months ago 2 0 0 0
Post image

Excited for tomorrow, where Pure Exploration in RL kicks off at Boston University 🎉!

Lectures will be recorded & uploaded - stay tuned.
Many thanks to @bostonu.bsky.social (CDS) & @aldopacchiano.bsky.social for making this opportunity possible.
Website: sites.google.com/vie...

#BU #RL #PureExploration

5 months ago 1 0 0 0
Post image


80⭐️ on a tiny repo on data-driven control I put out during my PhD!

PyDeePC is a tiny Python implementation of Data-enabled Predictive Control (DeePC). It is model-free, quick to try, and easy to learn. Feedback welcome!

github.com/rssalessi...
#MPC #ControlTheory #DataDrivenControl
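At the core of DeePC are block-Hankel matrices built from recorded input/output data; a minimal scalar-signal sketch of that construction (my own illustration assuming numpy, not PyDeePC's actual API):

```python
import numpy as np

def hankel(u, L):
    """Depth-L Hankel matrix of a length-T signal u.

    Column i holds the window u[i : i+L]; there are T - L + 1 columns.
    DeePC splits such a matrix into "past" and "future" rows and uses it
    directly as a data-driven predictor, with no identified model.
    """
    u = np.asarray(u)
    T = len(u)
    return np.column_stack([u[i:i + L] for i in range(T - L + 1)])
```

For example, `hankel([1, 2, 3, 4], 2)` has columns `[1,2]`, `[2,3]`, `[3,4]`. For the data to be informative, the input signal must be persistently exciting of sufficient order, which in practice means the Hankel matrix of the inputs has full row rank.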

5 months ago 1 0 0 0
Preview
Course on Pure Exploration in RL and Active Sequential Hypothesis Testing- Expression of Interest Dear all, I'm Alessio Russo, a Postdoc at BU (PLAIA group). I’m planning a PhD-level short course on sequential hypothesis testing and pure exploration. See also the website for more information ht...

Registration form forms.gle/ohC8KJPPbBt6...

5 months ago 1 0 0 0
Preview
Seminars on Pure Exploration and Active Sequential Hypothesis Testing Course Description

Excited to teach a PhD-level short course at #BostonUniversity: Pure Exploration and Active Sequential Hypothesis Testing. Nov 14–25, 4–6pm.

We will cover best-arm and best-policy identification, sample-complexity bounds, and optimal algorithms.

Website👉 sites.google.com/view/asht202...

5 months ago 2 0 1 0

Interested in #RL, #bandits, and #learning in general?

This is a collection of interesting papers (and books) that I have read so far or want to read. Note that the list is not up-to-date.

github.com/rssalessio/r...

1 year ago 4 0 0 0

Happy to say that we will also present this work at #INFORMS, at the Applied Probability Conference #APS this year in Atlanta!

1 year ago 2 0 0 0

In this work we studied the sample complexity of pure exploration in an online learning problem with a general, unknown stochastic feedback graph, a setting that had not been well studied in the literature.

1 year ago 2 0 1 0
Post image

Thrilled to share that this work with Yichen Song and @aldopacchiano.bsky.social got accepted for an oral presentation at AISTATS 2025!

Pure Exploration with Feedback Graphs arxiv.org/pdf/2503.07824

Code: github.com/rssalessio/P...

#AISTATS #AISTATS2025 #ML

1 year ago 2 0 1 1