Jul 13, 2017 · We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing ...
We find that the Distral algorithms learn faster and achieve better asymptotic performance, are significantly more robust to hyperparameter settings, and learn ...
We implemented the Distral algorithm and replicated the results obtained in the recent Google DeepMind's paper: "Distral: Robust Multitask Reinforcement ...
People often think that in multitask learning, each task will require less data and achieve a higher asymptotic performance (i.e., the other tasks help).
The paper presents an approach to performing transfer between multiple reinforcement learning tasks by regularizing the policies of different tasks towards a ...
This work proposes a new approach for joint training of multiple tasks, which it refers to as Distral (Distill & transfer learning), and shows that the ...
Apr 28, 2021 · Actor-mimic: Deep multitask and transfer reinforcement learning [ICLR 2016] ... “Distral: Robust multitask reinforcement learning.” arXiv preprint ...
Mar 15, 2018 · ... Distral (DIStill & TRAnsfer Learning). Instead of ... robust and more stable—attributes that are critical in deep reinforcement learning.
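The core mechanism described in the snippets above is regularizing each task policy toward a shared distilled policy. In Distral this appears as a shaped per-step reward that adds a log-probability bonus under the distilled policy pi_0 and an entropy bonus for the task policy pi_i, weighted by the paper's alpha and beta coefficients. The sketch below illustrates that shaped reward for a single discrete-action transition; the function name and the default coefficient values are illustrative choices, not from the paper.

```python
import numpy as np

def distral_shaped_reward(reward, action, task_pi, distilled_pi,
                          alpha=0.5, beta=5.0):
    """Distral-style regularized reward for one transition:

        r'(s, a) = r(s, a) + (alpha/beta) * log pi_0(a|s)
                           - (1/beta)     * log pi_i(a|s)

    The log pi_0 term pulls the task policy toward the shared
    distilled policy; the -log pi_i term is an entropy bonus.
    `task_pi` and `distilled_pi` are action-probability vectors
    for the current state.
    """
    return (reward
            + (alpha / beta) * np.log(distilled_pi[action])
            - (1.0 / beta) * np.log(task_pi[action]))

# With alpha = beta = 1 and identical policies, the two log terms
# cancel and the shaped reward equals the raw reward.
uniform = np.ones(4) / 4
print(distral_shaped_reward(1.0, 0, uniform, uniform, alpha=1.0, beta=1.0))
```

Optimizing the sum of these shaped rewards (plus a distillation update for pi_0) is what couples the tasks: each task policy trades off its own return against staying close to, and more stochastic than, the shared policy.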