CSAI '18 Conference Proceedings · Research Article
DOI: 10.1145/3297156.3297188

Artificial Intelligence Techniques on Real-time Strategy Games

Published: 08 December 2018

Abstract

Real-time strategy (RTS) games simulate complex, dynamic real-world environments within a small, bounded world, and they pose important challenges for the development of artificial intelligence. Existing applications of AI in RTS games cannot yet compete with professional human players, although methods that control the macro-level play of RTS games can already compete with amateurs. RTS games are therefore an excellent platform for testing AI techniques, and increasingly sophisticated methods are being applied to their overall control. The purpose of this paper is to systematically review the AI techniques used in RTS games in recent years, covering the definition of RTS games, the challenges they pose, the research platforms available, and the AI methods used to address these problems. Finally, we propose future research directions for real-time strategy games. The article provides researchers with a quick-start guide, a systematic theoretical framework, and possible research directions.


Cited By

• (2024) The role of video games in enhancing managers' strategic thinking and cognitive abilities: An experiential survey. Entertainment Computing, 50, 100694. DOI: 10.1016/j.entcom.2024.100694 (May 2024)
• (2021) Multi-agent Simulation of Agile Team Dynamics: An Investigation on Team Strategies Perspective. Software Engineering and Algorithms, pp. 1-36. DOI: 10.1007/978-3-030-77442-4_1 (20 July 2021)
• (2020) Path Planning for Non-Playable Characters in Arcade Video Games using the Wavefront Algorithm. 2020 IEEE Games, Multimedia, Animation and Multiple Realities Conference (GMAX), pp. 1-5. DOI: 10.1109/GMAX49668.2020.9256835 (2020)

Published In

CSAI '18: Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence
December 2018
641 pages
ISBN: 9781450366069
DOI: 10.1145/3297156

In-Cooperation

• Shenzhen University

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. Artificial Intelligence
2. Machine Learning
3. Multi-agent Collaboration
4. Real-time Strategy Games

Qualifiers

• Research-article
• Research
• Refereed limited
