DOI: 10.1145/3599733.3600261

EXARL-PARS: Parallel Augmented Random Search Using Reinforcement Learning at Scale for Applications in Power Systems

Published: 28 June 2023

Abstract

With recent advances in deep learning and large-scale computing, learning-based control has become increasingly attractive for complex physical systems. Consequently, developing generalized learning-based control software that accounts for the next generation of computing architectures is paramount. Specifically, for the case of complex control, we present the Easily eXtendable Architecture for Reinforcement Learning (EXARL), which aims to support a variety of scientific applications seeking to leverage reinforcement learning (RL) on exascale computing architectures. We demonstrate the efficacy and performance of the EXARL library on the scientific use case of designing a complex control policy that stabilizes a power system after it experiences a fault. We use a parallel augmented random search method developed within EXARL and present its preliminary validation and its performance in stabilizing a fault on the IEEE 39-bus system.
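For readers unfamiliar with the method, the sketch below shows one iteration of the basic augmented random search (ARS) update of Mania et al. (2018), the algorithm the paper parallelizes. This is a minimal single-process sketch, not the EXARL-PARS implementation: episode_return is a hypothetical stand-in for one simulated episode (e.g., a grid fault scenario) scored by its cumulative reward, and the hyperparameter names and defaults are illustrative.

import numpy as np

def ars_step(theta, episode_return, n_dirs=8, step_size=0.02,
             noise=0.03, top_k=4):
    """One basic ARS (V1-t) iteration: perturb the weights theta of a
    linear policy in random directions, roll out each perturbation, and
    move theta along a reward-weighted average of the best directions.
    episode_return is assumed to map a weight array to the episode's
    cumulative reward."""
    deltas = np.random.randn(n_dirs, *theta.shape)
    # Evaluate each direction in both senses (antithetic pairs).
    r_plus = np.array([episode_return(theta + noise * d) for d in deltas])
    r_minus = np.array([episode_return(theta - noise * d) for d in deltas])
    # Keep only the top_k directions, ranked by their best-case reward.
    best = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top_k]
    # Scale the step by the reward standard deviation, as in ARS.
    sigma = np.concatenate([r_plus[best], r_minus[best]]).std() + 1e-8
    update = sum((r_plus[i] - r_minus[i]) * deltas[i] for i in best)
    return theta + (step_size / (top_k * sigma)) * update

The 2 * n_dirs rollouts in each iteration are mutually independent, which is what makes the method attractive for scaling: a parallel variant can scatter the perturbed weight arrays to worker processes (for example, ranks each running a power-system simulation), gather the returns, and apply the update, broadcasting only theta between iterations.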

Published In

e-Energy '23 Companion: Companion Proceedings of the 14th ACM International Conference on Future Energy Systems
June 2023
157 pages
ISBN: 9798400702273
DOI: 10.1145/3599733
Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Reinforcement learning
  2. augmented random search
  3. exascale computing
  4. power systems

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

e-Energy '23

Acceptance Rates

Overall Acceptance Rate 160 of 446 submissions, 36%

