
Paper: Metaheuristics-based Exploration Strategies for Multi-Objective Reinforcement Learning

Authors: Florian Felten 1; Grégoire Danoy 2,1; El-Ghazali Talbi 3 and Pascal Bouvry 2,1

Affiliations: 1 SnT, University of Luxembourg, Esch-sur-Alzette, Luxembourg; 2 FSTM/DCS, University of Luxembourg, Esch-sur-Alzette, Luxembourg; 3 University of Lille, CNRS/CRIStAL, Inria Lille, France

Keyword(s): Reinforcement Learning, Multi-objective, Metaheuristics, Pareto Sets.

Abstract: The fields of Reinforcement Learning (RL) and Optimization both aim at finding an optimal solution to a problem characterized by an objective function. The exploration-exploitation dilemma (EED) is a well-known subject in those fields: a substantial body of literature has already addressed it and shown that handling it well is essential for good performance. Yet many real-life problems involve the optimization of multiple objectives. Multi-Policy Multi-Objective Reinforcement Learning (MPMORL) offers a way to learn various optimised behaviours for the agent in such problems. This work introduces a modular framework for the learning phase of such algorithms, which eases the study of the EED in Inner-Loop MPMORL algorithms. We present three new exploration strategies inspired by the metaheuristics domain. To assess the performance of our methods on various environments, we use a classical benchmark, the Deep Sea Treasure (DST), and also propose a harder version of it. Our experiments show that all of the proposed strategies outperform the current state-of-the-art ε-greedy based methods on the studied benchmarks.
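To make the ε-greedy baseline mentioned in the abstract concrete, the sketch below shows one common way such a rule can be applied in a multi-objective setting: with probability ε the agent explores uniformly, otherwise it exploits an action whose vector-valued Q-estimate is Pareto non-dominated. This is an illustrative toy, not the authors' algorithm; the names (dominates, pareto_front, epsilon_greedy_mo), the tabular dictionary of per-action Q-vectors, and the example numbers are all assumptions made for this sketch.

import random
from typing import Dict, List, Tuple

Vector = Tuple[float, ...]  # one Q-estimate per objective

def dominates(a: Vector, b: Vector) -> bool:
    """True if vector a Pareto-dominates vector b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(q_values: Dict[int, Vector]) -> List[int]:
    """Actions whose Q-vectors are not dominated by any other action's Q-vector."""
    return [
        a for a, qa in q_values.items()
        if not any(dominates(qb, qa) for b, qb in q_values.items() if b != a)
    ]

def epsilon_greedy_mo(q_values: Dict[int, Vector], epsilon: float = 0.1) -> int:
    """With probability epsilon explore uniformly; otherwise exploit an action
    drawn from the Pareto front of the per-action Q-vectors."""
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return random.choice(pareto_front(q_values))

if __name__ == "__main__":
    # Hypothetical Q-vectors for 3 actions and 2 objectives (e.g. treasure value vs. time cost).
    q = {0: (1.0, -3.0), 1: (5.0, -14.0), 2: (0.5, -20.0)}
    print(pareto_front(q))       # [0, 1]: action 2 is dominated by action 0
    print(epsilon_greedy_mo(q))  # mostly picks from the non-dominated actions

The metaheuristics-inspired strategies proposed in the paper replace the uniform exploration branch of such a rule with more structured search behaviour; the details are in the full text.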

CC BY-NC-ND 4.0

Paper citation in several formats:
Felten, F.; Danoy, G.; Talbi, E. and Bouvry, P. (2022). Metaheuristics-based Exploration Strategies for Multi-Objective Reinforcement Learning. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-547-0; ISSN 2184-433X, SciTePress, pages 662-673. DOI: 10.5220/0010989100003116

@conference{icaart22,
author={Florian Felten and Grégoire Danoy and El{-}Ghazali Talbi and Pascal Bouvry},
title={Metaheuristics-based Exploration Strategies for Multi-Objective Reinforcement Learning},
booktitle={Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2022},
pages={662-673},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010989100003116},
isbn={978-989-758-547-0},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Metaheuristics-based Exploration Strategies for Multi-Objective Reinforcement Learning
SN - 978-989-758-547-0
IS - 2184-433X
AU - Felten, F.
AU - Danoy, G.
AU - Talbi, E.
AU - Bouvry, P.
PY - 2022
SP - 662
EP - 673
DO - 10.5220/0010989100003116
PB - SciTePress
ER -