ADPRL 2011 : 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning
Link: http://www.ieee-ssci.org/2011/adprl-2011

Call For Papers
Part of IEEE Symposium Series on Computational Intelligence 2011
Adaptive (or Approximate) dynamic programming (ADP) is a general and effective approach for solving optimal control problems by adapting to uncertain environments over time. ADP optimizes a user-defined cost function with respect to an adaptive control law, conditioned on prior knowledge of the system and its state, in the presence of system uncertainties. A numerical search over the present value of the control minimizes a nonlinear cost function forward in time, providing a basis for real-time, approximate optimal control. The ability to improve performance over time, subject to new or unexplored objectives or dynamics, has made ADP an attractive approach in a number of application domains, including optimal control and estimation, operations research, and computational intelligence. ADP can be viewed as a form of reinforcement learning based on an actor-critic architecture that optimizes a user-prescribed value online and obtains the resulting optimal control policy.

Reinforcement learning (RL) algorithms optimize an agent's behavior by letting it interact with an environment and learn from the feedback it receives. The goal of the agent is to maximize its accumulated reward over time, and to this end it estimates value functions that predict its future reward when executing a particular policy. RL techniques can be combined with many different function approximators and do not assume any a priori knowledge about the environment. An important aspect of RL is that the agent must explore parts of the environment it does not know well while simultaneously exploiting its knowledge to maximize reward. RL techniques have already been applied successfully to many problems, such as robot control, game playing, elevator control, network routing, and traffic light optimization.
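As a concrete illustration of the value-function estimation and exploration-exploitation ideas described above, the following is a minimal tabular Q-learning sketch in Python. The five-state chain environment, reward structure, and hyperparameters are illustrative assumptions chosen for this example, not material from the symposium.

import random

# Illustrative 5-state chain MDP: states 0..4, where state 4 is the goal/terminal.
N_STATES = 5
ACTIONS = [-1, +1]                        # step left or right along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # assumed hyperparameters

# Q-table: estimated accumulated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Assumed dynamics: reward 1.0 only on reaching the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = choose_action(state)
        next_state, reward = step(state, action)
        # Temporal-difference update toward reward plus discounted next value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Report the greedy action learned for each state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Running the script prints the greedy action learned for each state; under these assumed rewards the learned policy moves right in every non-terminal state.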
Topics

The symposium topics include, but are not limited to:
* Convergence and performance bounds of ADP
* Complexity issues in RL and ADP
* Statistical learning and RL, PAC bounds for RL
* Monte-Carlo and quasi-Monte-Carlo methods
* Direct policy search, actor-critic methods
* Parsimonious function representation
* Adaptive feature discovery
* Learning rules and architectures for RL
* Sensitivity analysis for policy gradient estimation
* Neuroscience and biologically inspired control
* Partially observable Markov decision processes
* Distributed intelligent systems
* Multi-agent RL systems
* Multi-level multi-objective optimization for ADPRL
* Kernel methods and value function representation
* Applications of ADP and RL

Symposium Chair

Jagannathan Sarangapani, Missouri University of Science and Technology, USA

Symposium Co-Chairs

Huaguang Zhang, Northeastern University, China
Marco Wiering, University of Groningen, Netherlands

Program Committee

Charles W. Anderson, Colorado State University, USA
S. N. Balakrishnan, Missouri University of Science and Technology, USA
Tamer Basar, University of Illinois, USA
Dimitri P. Bertsekas, Massachusetts Institute of Technology, USA
Mingcong Deng, Okayama University, Japan
Hai-Bin Duan, Beihang University, China
El-Sayed El-Alfy, King Fahd University of Petroleum and Minerals, Saudi Arabia
Xiao Hu, GE Global Research, USA
Derong Liu, University of Illinois Chicago, USA
Abhijit Gosavi, Missouri University of Science and Technology, USA
Zeng-Guang Hou, Chinese Academy of Sciences, China
Hossein Javaherian, General Motors, USA
George G. Lendaris, Portland State University, USA
Frank L. Lewis, University of Texas at Arlington, USA
Eduardo Morales, INAOE, Mexico
Remi Munos, INRIA Lille - Nord Europe, France
Kumpati S. Narendra, Yale University, USA
Hector D. Patino, Universidad Nacional de San Juan, Argentina
Jan Peters, Max Planck Institute for Biological Cybernetics, Germany
Warren Powell, Princeton University, USA
Philippe Preux, INRIA & CNRS (LIFL), France
Danil Prokhorov, Toyota Technical Center, USA
Jennie Si, Arizona State University, USA
L. Enrique Sucar, INAOE, Mexico
Csaba Szepesvari, University of Alberta, Canada
Antonios Tsourdos, Cranfield University (DCMT), UK
G. Kumar Venayagamoorthy, Missouri University of Science and Technology, USA
Draguna Vrabie, University of Texas at Arlington, USA
Paul Werbos, National Science Foundation, USA
Bernard Widrow, Stanford University, USA
Donald C. Wunsch, Missouri University of Science and Technology, USA
Gary G. Yen, Oklahoma State University, USA