
Posts by Jayden

Cool setup

5 months ago

Been feeling FOMO from all the ICLR posts the past 2 days. Will finally be at the conference tomorrow. Please do come by our poster and I’m happy to chat! 😊

⏰: Sat, 26 Apr 3pm
📍: Hall 3 + Hall 2B Poster 374

11 months ago

Also, I'll be presenting this work at ICLR next month, please do come by!

1 year ago

Our benchmark code is already available for testing out new algorithms and I will be sharing additional instructions on using our code in the coming days. Stay tuned. I look forward to engaging and collaborating with anyone interested in advancing this new area of research! 🙂

1 year ago

There are numerous promising avenues for further exploration, particularly in adapting techniques and insights from single-objective RL generalization research to tackle this harder problem setting!

1 year ago

Ultimately, a priori scalarization of rewards in single-objective RL limits the agent's flexibility to adapt its behavior to environment changes and objective tradeoffs. Developing agents capable of generalizing across multiple environments AND objectives will become a crucial research direction.
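To make the scalarization point concrete, here is a minimal toy sketch (not from the paper; the action names, reward values, and weights are all made up for illustration). It shows how fixing a linear scalarization weight vector before training commits the agent to one trade-off, while keeping the reward as a vector lets preferences be re-applied when conditions change:

```python
# Toy two-objective rewards (speed, safety) for two hypothetical actions.
rewards = {
    "fast_risky": (1.0, 0.2),  # high speed, low safety
    "slow_safe":  (0.3, 1.0),  # low speed, high safety
}

def scalarize(reward_vec, weights):
    """A priori linear scalarization: collapses the reward vector to a scalar."""
    return sum(w * r for w, r in zip(weights, reward_vec))

# Fixing weights before training bakes one trade-off into the learned policy...
w_speed = (0.8, 0.2)
best_for_speed = max(rewards, key=lambda a: scalarize(rewards[a], w_speed))

# ...whereas an agent that keeps the vector reward can re-rank its behavior
# when the preferred trade-off shifts (e.g. prioritizing safety on icy roads).
w_safety = (0.2, 0.8)
best_for_safety = max(rewards, key=lambda a: scalarize(rewards[a], w_safety))

print(best_for_speed, best_for_safety)  # fast_risky slow_safe
```

Under the fixed speed-heavy weights the scalarized objective always prefers the risky action; only by retaining the multi-objective structure can the agent adapt when the trade-off changes.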

1 year ago

Our baseline evaluations of current MORL algorithms uncover 2 key insights:

1) Current MORL algorithms struggle with generalization.
2) However, MORL algorithms show greater potential for learning adaptable behaviors that generalize than single-objective RL algorithms.

1 year ago

We also introduce a benchmark featuring diverse multi-objective domains with parameterized environment configurations to facilitate studies in this area.
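The actual benchmark code is linked further down this thread; as a rough illustration of what "parameterized environment configurations" means here, the following is a self-contained toy sketch (all class and parameter names are invented for this example, not the benchmark's API). Each sampled configuration changes the dynamics, and the reward is a vector over conflicting objectives:

```python
import random
from dataclasses import dataclass

@dataclass
class EnvConfig:
    # Hypothetical parameters that define one environment instance.
    friction: float = 1.0
    goal_distance: float = 10.0

class ParamMOEnv:
    """Toy multi-objective environment: step() returns a reward *vector*
    (progress made, energy spent) whose dynamics depend on the config."""
    def __init__(self, config: EnvConfig):
        self.config = config
        self.position = 0.0

    def step(self, effort: float):
        gain = effort * self.config.friction   # dynamics vary with config
        self.position += gain
        reward = (gain, -effort)               # two conflicting objectives
        done = self.position >= self.config.goal_distance
        return self.position, reward, done

def sample_env(rng: random.Random) -> ParamMOEnv:
    """Generalization is studied by training and evaluating agents on
    environments drawn from a distribution over configurations."""
    cfg = EnvConfig(friction=rng.uniform(0.5, 1.5),
                    goal_distance=rng.uniform(5.0, 15.0))
    return ParamMOEnv(cfg)
```

Train-time and test-time configuration sets can then be kept disjoint, so an agent is scored on environment instances it never saw during training.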

1 year ago

Despite its importance, the intersection of generalization and multi-objectivity remains a significant gap in RL literature.

In this paper, we formalize generalization in Multi-Objective Reinforcement Learning (MORL) and propose how it can be evaluated.
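The paper defines the actual evaluation protocol; purely as a hedged sketch of one common MORL evaluation device, here is a minimal 2-objective hypervolume computation (both objectives maximized), averaged over a set of test environments. This is a generic illustration, not the paper's metric:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective Pareto front relative to a reference
    point `ref` that is dominated by every point on the front."""
    # Sweep points in decreasing order of the first objective, adding the
    # rectangle each point contributes above the previous height.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

def generalization_score(fronts_per_env, ref):
    """Average the hypervolume an agent attains across held-out test
    environments: one crude way to summarize MORL generalization."""
    return sum(hypervolume_2d(f, ref) for f in fronts_per_env) / len(fronts_per_env)
```

For example, the front [(3, 1), (2, 2), (1, 3)] with reference point (0, 0) covers a dominated area of 6.0; averaging such scores over unseen environment configurations rewards agents whose Pareto fronts stay strong out of distribution.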

1 year ago

Consider an autonomous vehicle, which must not only generalize across varied environmental conditions—different weather patterns, lighting, and road surfaces—but also learn optimal trade-offs between competing objectives such as travel time, passenger comfort, and safety.

1 year ago

Real-world sequential decision-making tasks often involve balancing trade-offs among conflicting objectives and generalizing across diverse environments.

1 year ago
Preview
On Generalization Across Environments In Multi-Objective Reinforcement Learning Real-world sequential decision-making tasks often require balancing trade-offs between multiple conflicting objectives, making Multi-Objective Reinforcement Learning (MORL) an increasingly prominent f...

Our work "On Generalization Across Environments in Multi-Objective Reinforcement Learning" has been accepted at ICLR 2025!

Paper: arxiv.org/abs/2503.00799
Code: github.com/JaydenTeoh/M...
Authors: Jayden Teoh, Pradeep Varakantham, Peter Vamplew (@amp1874.bsky.social)

More below --->

1 year ago

Our ICLR paper on generalization in multi-objective reinforcement learning is now on arXiv: arxiv.org/abs/2503.00799

This work, led by Jayden Teoh, is the first to examine RL generalization across multiple multi-objective environments, and is a great basis for an exciting new field of research.

1 year ago