my main takeaway from a talk on reward design in RL: AI only beat humans when they were asked not to collaborate
Posts by Anastasiia Pedan
thank you, Claas, you're the best mentor I could've asked for!!!!
This was an amazing collaboration with a cracked team consisting of @cvoelcker.bsky.social, me, Arash Ahmadian, Romina Abachi, @igilitschenski.bsky.social, and @sologen.bsky.social
#ReinforcementLearning #ModelBasedRL #RLTheory #ICML2025
For more details, feel free to come chat with us in Vancouver and check out our paper! www.arxiv.org/abs/2505.22772
We can correct the MuZero loss and other losses from the same family by pushing the value estimates computed from different sampled model rollouts to have the correct variance and mean. We prove the soundness of this change and show that it is beneficial for agent performance!
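A minimal sketch of the moment-matching idea described above, assuming a simple shift-and-rescale correction; the function name, the exact correction rule, and the toy numbers are my illustration, not the paper's actual loss:

```python
import numpy as np

def moment_matched_values(rollout_values, target_mean, target_std, eps=1e-8):
    """Hypothetical sketch: shift and rescale value estimates from
    sampled model rollouts so their empirical mean and standard
    deviation match the desired target moments."""
    v = np.asarray(rollout_values, dtype=np.float64)
    centered = v - v.mean()                      # zero-mean the estimates
    scaled = centered * (target_std / (v.std() + eps))  # match spread
    return target_mean + scaled                  # match mean

# toy usage: five rollout value estimates with biased mean and spread
vals = moment_matched_values([0.9, 1.1, 1.4, 0.7, 1.0],
                             target_mean=2.0, target_std=0.5)
print(vals.mean(), vals.std())
```

After the correction, the empirical mean and standard deviation of the rollout value estimates match the targets, so downstream model-learning losses see consistently calibrated value targets.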
Getting a correct value estimate is instrumental in model-based RL, so if your algorithm fails to provide correct targets for model learning, your agent is in trouble because these errors will accumulate fast!
Would you be surprised to learn that many empirical implementations of value-aware model learning (VAML) algorithms, including MuZero, lead to incorrect model & value functions when training stochastic models? In our new @icmlconf.bsky.social 2025 paper, we show why this happens and how to fix it!