
LC-Learning: Phased Method for Average Reward Reinforcement Learning — Analysis of Optimal Criteria

  • Conference paper
  • In: PRICAI 2002: Trends in Artificial Intelligence (PRICAI 2002)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2417)

Abstract

This paper presents an analysis of the criteria that measure policy optimality in average reward reinforcement learning. In previous work on undiscounted tasks, two criteria have been presented: gain-optimality and bias-optimality. The former measures the average reward of a policy, while the latter evaluates its transient actions. However, the limit in the definition of gain-optimality makes the real meaning of the criterion unclear and, what is worse, the performance function for bias-optimality does not always converge. As a result, previous methods compute an optimal policy by approximation; that is, because of finite errors they do not always acquire the optimal policy, and proving theoretical convergence to the optimal policy is difficult. To eliminate the ambiguity in these criteria, we show a necessary and sufficient condition for gain-optimality: a policy is gain-optimal if and only if it includes an optimal cycle. In other words, to find a gain-optimal policy we only need to search for the stationary cycle with the highest average reward. We also make the performance function for bias-optimality always converge by dividing it into two terms, the cycle-bias-value and the path-bias-value. Finally, we lay the foundation of LC-learning, an algorithm for computing the bias-optimal policy in a cyclic domain.
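
The key condition stated above (a policy is gain-optimal if and only if it contains an optimal cycle, i.e. a stationary cycle with the highest average reward) can be illustrated with a small sketch. The Python code below is not the authors' LC-learning implementation; it is a brute-force illustration over a hypothetical deterministic, cyclic MDP (the transitions table is invented for this example), where enumerating simple cycles and scoring each by its mean reward identifies the cycle a gain-optimal policy must contain.

    from itertools import permutations

    # Hypothetical 3-state deterministic MDP (illustration only, not from the paper):
    # transitions[state][action] = (next_state, reward)
    transitions = {
        0: {"a": (1, 1.0), "b": (0, 0.2)},
        1: {"a": (2, 0.0), "b": (0, 2.0)},
        2: {"a": (0, 5.0)},
    }

    def best_mean_reward_cycle(transitions):
        # Brute-force search for the simple cycle with the highest average reward.
        # By the condition above, a gain-optimal policy must contain this cycle.
        states = list(transitions)
        best_cycle, best_gain = None, float("-inf")
        for k in range(1, len(states) + 1):
            for order in permutations(states, k):
                cycle, total, feasible = [], 0.0, True
                for i, s in enumerate(order):
                    target = order[(i + 1) % k]
                    # Actions in s that lead to the next state on the candidate cycle.
                    choices = [(a, r) for a, (ns, r) in transitions[s].items() if ns == target]
                    if not choices:
                        feasible = False
                        break
                    a, r = max(choices, key=lambda ar: ar[1])
                    cycle.append((s, a))
                    total += r
                if feasible and total / k > best_gain:
                    best_cycle, best_gain = cycle, total / k
        return best_cycle, best_gain

    cycle, gain = best_mean_reward_cycle(transitions)
    print(cycle, gain)  # [(0, 'a'), (1, 'a'), (2, 'a')] 2.0

In this toy domain the three-state loop 0 -> 1 -> 2 -> 0 has mean reward 2.0, beating the two-state loop through states 0 and 1 (mean 1.5) and the self-loop at state 0 (mean 0.2). A gain-optimal policy therefore drives the system onto that cycle; distinguishing among the transient paths that lead into it is what bias-optimality, via the paper's cycle-bias-value and path-bias-value terms, is meant to capture.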




Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Konda, T., Yamaguchi, T. (2002). LC-Learning: Phased Method for Average Reward Reinforcement Learning — Analysis of Optimal Criteria. In: Ishizuka, M., Sattar, A. (eds) PRICAI 2002: Trends in Artificial Intelligence. PRICAI 2002. Lecture Notes in Computer Science, vol 2417. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45683-X_23


  • DOI: https://doi.org/10.1007/3-540-45683-X_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44038-3

  • Online ISBN: 978-3-540-45683-4

  • eBook Packages: Springer Book Archive
