DOI: 10.1145/1160633.1160775

RVσ(t): a unifying approach to performance and convergence in online multiagent learning

Published: 08 May 2006
    Abstract

    We present a new multiagent learning algorithm, RVσ(t), that can guarantee both no-regret performance (in all games) and policy convergence (in some games of arbitrary size). Unlike its predecessor ReDVaLeR, it (1) does not need to distinguish whether its opponents are playing the same algorithm (self-play) or are otherwise non-stationary, and (2) is allowed to know its own portion of any equilibrium, which, we argue, leads to convergence in some games in addition to no-regret. Although the regret of RVσ(t) is analyzed in continuous time, we show that it grows more slowly than in other no-regret techniques such as GIGA and GIGA-WoLF. We also show that RVσ(t) can converge to coordinated behavior in coordination games, whereas GIGA and GIGA-WoLF may converge to poorly coordinated (mixed) behaviors.
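    For readers unfamiliar with the baselines named above, GIGA [6] is Zinkevich's online gradient ascent over the probability simplex, and GIGA-WoLF [3] augments it with a WoLF-style ("win or learn fast") adjustment. The sketch below is a minimal Python illustration of a GIGA-style no-regret update for the row player of a matrix game; it is not RVσ(t), whose update rule is not reproduced in this abstract, and the payoff matrix, opponent model, step count, and function names are illustrative assumptions.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    cumulative = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - cumulative) / (np.arange(len(v)) + 1) > 0)[0][-1]
    shift = (1.0 - cumulative[rho]) / (rho + 1)
    return np.maximum(v + shift, 0.0)

def giga_row_player(payoff, opponent_mixed, steps=10_000):
    """GIGA-style online gradient ascent for the row player of a matrix game (illustrative sketch).

    payoff[i, j] is the row player's reward for pure action i against the
    opponent's pure action j; opponent_mixed(t) returns the (possibly
    non-stationary) column player's mixed strategy at step t.
    """
    n_actions = payoff.shape[0]
    x = np.full(n_actions, 1.0 / n_actions)      # start from the uniform mixed strategy
    for t in range(1, steps + 1):
        step_size = 1.0 / np.sqrt(t)             # diminishing step size from Zinkevich's analysis
        gradient = payoff @ opponent_mixed(t)    # expected payoff of each pure action
        x = project_to_simplex(x + step_size * gradient)
    return x

# Illustrative coordination game: the row player is rewarded only for matching the column player.
coordination = np.array([[1.0, 0.0],
                         [0.0, 1.0]])
# Against a fixed opponent favouring action 0, the strategy concentrates on action 0.
print(giga_row_player(coordination, lambda t: np.array([0.7, 0.3])))
```

    In self-play on such a coordination game, the abstract's claim is that GIGA and GIGA-WoLF may settle on a poorly coordinated mixed profile, whereas RVσ(t) converges to one of the coordinated behaviors.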

    References

    [1]
    B. Banerjee and J. Peng. Performance bounded reinforcement learning in strategic interactions. In Proceedings of the 19th National Conference on Artificial Intelligence (AAAI-04), pages 2--7, San Jose, CA, 2004. AAAI Press.
    [2]
    B. Banerjee and J. Peng. Convergence of no-regret learning in multiagent systems. In Proceedings of the First International Workshop on Learning and Adaptation in Multiagent Systems (LAMAS), Utrecht, The Netherlands, 2005. Held in conjunction with AAMAS-05.
    [3]
    M. Bowling. Convergence and no-regret in multiagent learning. In Advances in Neural Information Processing Systems 17 (NIPS 2004), 2005.
    [4]
    M. Bowling and M. Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136:215--250, 2002.
    [5]
    V. Conitzer and T. Sandholm. AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. In Proceedings of the 20th International Conference on Machine Learning, 2003.
    [6]
    M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, Washington DC, 2003.


    Published In

    AAMAS '06: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
    May 2006
    1631 pages
    ISBN:1595933034
    DOI:10.1145/1160633

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 08 May 2006


    Author Tags

    1. game theory
    2. multiagent learning

    Qualifiers

    • Article

    Conference

    AAMAS06

    Acceptance Rates

    Overall Acceptance Rate 1,155 of 5,036 submissions, 23%


    Cited By

    • (2009) Approximation guarantees for fictitious play. In 2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 636-643. DOI: 10.1109/ALLERTON.2009.5394918. Online publication date: Sep-2009.
    • (2008) Online multiagent learning against memory bounded adversaries. In Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I, pages 211-226. DOI: 10.5555/3120828.3120864. Online publication date: 15-Sep-2008.
    • (2008) Using adaptive consultation of experts to improve convergence rates in multiagent learning. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 3, pages 1337-1340. DOI: 10.5555/1402821.1402866. Online publication date: 12-May-2008.
    • (2008) MB-AIM-FSI. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1, pages 371-378. DOI: 10.5555/1402383.1402438. Online publication date: 12-May-2008.
    • (2008) Online Multiagent Learning against Memory Bounded Adversaries. In Machine Learning and Knowledge Discovery in Databases, pages 211-226. DOI: 10.1007/978-3-540-87479-9_32. Online publication date: 2008.
