Abstract: In this paper, we explore the capability of selective decentralization in improving reinforcement learning performance for unknown systems using model-based approaches. In selective decentralization, we automatically select the best communication policies among agents. Our learning design, which is built on control system principles, includes two phases. First, we apply system identification to train an approximated model for the unknown system. Second, we find the suboptimal solution of the Hamilton–Jacobi–Bellman (HJB) equation to derive the suboptimal control. For linear systems, the HJB equation reduces to the well-known Riccati equation with a closed-form solution. For nonlinear systems, we discretize the approximated model as a Markov Decision Process (MDP) in order to determine the control using dynamic programming algorithms. Since the theoretical foundation of using an MDP to control a nonlinear system has not been thoroughly developed, we prove that the control law learned by the discrete-MDP approach is guaranteed to stabilize the system, which is the learning goal, given several sufficient conditions. These learning and control techniques can be applied in a centralized, completely decentralized, or selectively decentralized manner. Our results show that selective decentralization outperforms the complete decentralization and centralization approaches when the systems are completely decoupled or strongly interconnected.
Keywords: Decentralized control, Hamilton–Jacobi–Bellman equation, Markov process, multi-agent systems
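For concreteness, here is a minimal Python sketch of the linear branch of this approach, where the HJB equation reduces to the algebraic Riccati equation solved numerically; the system matrices and cost weights below are illustrative placeholders, not values from the paper.

# Minimal sketch: LQR gain from an identified linear model (A, B).
# For dynamics x_dot = A x + B u with cost integral of x'Qx + u'Ru, the HJB
# equation reduces to the algebraic Riccati equation, solved here with SciPy.
# A, B, Q, R are illustrative stand-ins, not the paper's identified system.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])   # identified system matrix (example)
B = np.array([[0.0],
              [1.0]])          # identified input matrix (example)
Q = np.eye(2)                  # state cost weight
R = np.eye(1)                  # control cost weight

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain: u = -K x
print("LQR gain K =", K)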
Abstract: In this paper, we propose a novel, partially decentralized learning algorithm for the control of a finite, multi-agent Markov Decision Process with unknown transition probabilities and reward values. One learning automaton is associated with each agent acting in a state, and the automata acting within a state may communicate with each other. However, there is no communication between the automata present in different states, thus making the system partially decentralized. We propose novel algorithms so that the entire automata team converges to the policy that maximizes the long-term expected reward per step. Simulation results are presented to demonstrate the usefulness of the proposed algorithms.
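As background for the automata-team idea, here is a minimal Python sketch of a classic linear reward-inaction (L_R-I) learning automaton; the stochastic environment and step size are assumed for illustration, and the paper's partially decentralized algorithms are more elaborate.

# Minimal sketch of a linear reward-inaction (L_R-I) learning automaton,
# the standard building block behind automata-team methods. The environment
# below is a hypothetical stand-in, not the paper's multi-agent MDP.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
p = np.ones(n_actions) / n_actions   # action probabilities, start uniform
lr = 0.05                            # reward step size

def environment(action):
    # Hypothetical stochastic reward: action 2 is best on average.
    success_prob = [0.2, 0.5, 0.8][action]
    return 1.0 if rng.random() < success_prob else 0.0

for _ in range(5000):
    a = rng.choice(n_actions, p=p)
    r = environment(a)
    if r > 0:                        # reward-inaction: update only when rewarded
        e = np.zeros(n_actions); e[a] = 1.0
        p = p + lr * r * (e - p)     # move probability mass toward action a

print("learned action probabilities:", p)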
Abstract: Multidisciplinary Design Optimization (MDO) is a computational approach for optimizing the design of a complex system of systems that requires knowledge from multiple disciplines. In a former study, we found that the individual discipline feasible (IDF) formulation, a type of MDO design technique, performed well in several benchmark test cases of decentralized Reinforcement Learning (RL) problems, in particular, stabilizing an unknown system. However, the earlier study was not able to resolve why the overall system of systems, even with strongly coupled subsystems, could be stabilized when each agent focused only on stabilizing itself. In this work, we make a significant extension toward resolving this behavior by conducting a theoretical analysis of the MDO solution of RL problems. Through the analysis, we show that with the proper control law, each MDO agent is able to bring its state closer to the 0-stable point regardless of how the other agents’ states impact the state of the whole system. This is the main reason why the ‘selfish’ MDO-IDF agents are successful in learning to stabilize the overall system. The simulation results, including benchmark test cases, verify our analysis. Therefore, we propose that MDO would be a promising approach for many other decentralized RL problems.
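To illustrate the ‘selfish’ stabilization argument, here is a minimal Python sketch of two coupled scalar agents, each applying a purely local feedback law; the dynamics, coupling strength, and gains are assumed for illustration and are not taken from the paper.

# Minimal sketch: each agent stabilizes its own scalar state with a local
# gain while ignoring the cross-coupling term. With sufficiently strong local
# feedback, both states are driven toward 0 despite the coupling.
import numpy as np

dt = 0.01
x = np.array([1.0, -2.0])          # states of agents 1 and 2
k = np.array([3.0, 3.0])           # local feedback gains (each agent's own)
coupling = 0.5                     # cross-coupling strength (unknown to the agents)

for _ in range(2000):
    u = -k * x                                     # each agent uses only its own state
    dx = np.array([
        -0.2 * x[0] + coupling * x[1] + u[0],      # agent 1 dynamics + coupling
        -0.2 * x[1] + coupling * x[0] + u[1],      # agent 2 dynamics + coupling
    ])
    x = x + dt * dx                                # forward-Euler integration

print("final states (near 0 if stabilized):", x)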