DOI: 10.1145/1390156.1390227

A worst-case comparison between temporal difference and residual gradient with linear function approximation

Published: 05 July 2008

Abstract

Residual gradient (RG) was proposed as an alternative to TD(0) for policy evaluation when function approximation is used, but there exists little formal analysis comparing them except in very limited cases. This paper employs techniques from online learning of linear functions and provides a worst-case (non-probabilistic) analysis to compare these two types of algorithms when linear function approximation is used. No statistical assumptions are made on the sequence of observations, so the analysis applies to non-Markovian and even adversarial domains as well. In particular, our results suggest that RG may result in smaller temporal differences, while TD(0) is more likely to yield smaller prediction errors. These phenomena can be observed even in two simple Markov chain examples that are non-adversarial.
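For context, the two algorithms compared in the paper differ in how they descend on the squared temporal difference. The sketch below is a minimal illustration of the standard semi-gradient TD(0) update and the residual-gradient update with a linear value estimate; the step size, discount factor, features, and toy two-state chain are assumptions chosen only to make the form of the updates concrete, and the code is not taken from the paper.

```python
import numpy as np

# Minimal sketch (not from the paper): one-step updates for TD(0) and
# residual gradient (RG) with a linear value estimate V(s) = w . phi(s).
# Step size, discount factor, features, and the toy chain are illustrative.

def td0_update(w, phi_s, phi_next, reward, gamma=0.9, alpha=0.1):
    """TD(0): adjust w along phi(s), treating the bootstrapped target as constant."""
    delta = reward + gamma * np.dot(w, phi_next) - np.dot(w, phi_s)
    return w + alpha * delta * phi_s

def rg_update(w, phi_s, phi_next, reward, gamma=0.9, alpha=0.1):
    """Residual gradient: exact gradient descent on the squared temporal
    difference, so the update also differentiates through the next-state value."""
    delta = reward + gamma * np.dot(w, phi_next) - np.dot(w, phi_s)
    return w + alpha * delta * (phi_s - gamma * phi_next)

# Toy two-state chain (s0 -> s1 with reward 0, s1 -> s0 with reward 1),
# used here only to exercise the two update rules.
phi = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
w_td, w_rg = np.zeros(2), np.zeros(2)
for _ in range(1000):
    for s, s_next, r in [(0, 1, 0.0), (1, 0, 1.0)]:
        w_td = td0_update(w_td, phi[s], phi[s_next], r)
        w_rg = rg_update(w_rg, phi[s], phi[s_next], r)
print("TD(0) weights:", w_td)
print("RG weights:   ", w_rg)
```

The paper's worst-case bounds compare the accumulated squared temporal differences and prediction errors produced by these two update rules; the toy chain above is only meant to make the update formulas concrete.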





Published In

ICML '08: Proceedings of the 25th international conference on Machine learning
July 2008
1310 pages
ISBN:9781605582054
DOI:10.1145/1390156

Sponsors

  • Pascal
  • University of Helsinki
  • Xerox
  • Federation of Finnish Learned Societies
  • Google Inc.
  • NSF
  • Machine Learning Journal/Springer
  • Microsoft Research
  • Intel
  • Yahoo!
  • Helsinki Institute for Information Technology
  • IBM

Publisher

Association for Computing Machinery

New York, NY, United States



Qualifiers

  • Research-article


Conference

ICML '08
Sponsor:
  • Microsoft Research
  • Intel
  • IBM

Acceptance Rates

Overall Acceptance Rate: 140 of 548 submissions, 26%


Cited By

  • (2021) Gaussian Process Temporal-Difference Learning with Scalability and Worst-Case Performance Guarantees. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3485-3489. DOI: 10.1109/ICASSP39728.2021.9414667
  • (2021) Reviewing On-Policy/Off-Policy Critic Learning in the Context of Temporal Differences and Residual Learning. Reinforcement Learning Algorithms: Analysis and Applications, pp. 15-24. DOI: 10.1007/978-3-030-41188-6_2
  • (2020) Deep Residual Reinforcement Learning. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1611-1619. DOI: 10.5555/3398761.3398946
  • (2018) Temporal regularization in Markov decision process. Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 1784-1794. DOI: 10.5555/3326943.3327107
  • (2016) Online Bellman residual and temporal difference algorithms with predictive error guarantees. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 4213-4217. DOI: 10.5555/3061053.3061249
  • (2016) A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning. Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 557-565. DOI: 10.5555/2936924.2937006
  • (2015) Online Bellman Residual algorithms with predictive error guarantees. Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, pp. 852-861. DOI: 10.5555/3020847.3020935
  • (2015) Scalable estimation strategies based on stochastic approximations. Statistics and Computing, 25(4), 781-795. DOI: 10.1007/s11222-015-9560-y
  • (2015) Chaotic dynamics and convergence analysis of temporal difference algorithms with bang-bang control. Optimal Control Applications and Methods, 37(1), 108-126. DOI: 10.1002/oca.2156
  • (2014) Accelerated gradient temporal difference learning algorithms. 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 1-8. DOI: 10.1109/ADPRL.2014.7010611
