DOI: 10.1609/aaai.v38i20.30238

Carbon footprint reduction for sustainable data centers in real-time

Published: 20 February 2024

Abstract

As machine learning workloads significantly increase energy consumption, sustainable data centers with low carbon emissions are becoming a top priority for governments and corporations worldwide. Achieving them requires a paradigm shift: using collaborative agents to jointly optimize the power consumed by cooling and IT loads, shift flexible loads to hours when renewable energy is available on the grid, and leverage the battery storage of the data center's uninterruptible power supply (UPS). The complex interdependencies among these strategies, and their dependence on variable external factors such as weather and grid carbon intensity, make this a hard problem, and no existing real-time controller optimizes all of these goals simultaneously in a dynamic real-world setting. We propose DC-CFR, a Data Center Carbon Footprint Reduction multi-agent reinforcement learning (MARL) framework that optimizes data centers for the joint objectives of reducing carbon footprint, energy consumption, and energy cost. The results show that the DC-CFR agents effectively resolved the interdependencies among cooling, load shifting, and energy storage in real time across various locations under real-world dynamic weather and grid-carbon-intensity conditions. Evaluated over one year across multiple geographical regions, DC-CFR significantly outperformed the industry-standard ASHRAE controller, reducing carbon emissions by 14.5%, energy usage by 14.4%, and energy cost by 13.7%.
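
To make the three-way coordination described above concrete, below is a minimal, self-contained Python sketch of a toy hourly data-center loop with the three agent roles the abstract names (cooling, load shifting, battery storage) acting on shared weather and grid-carbon-intensity observations. This is not the authors' implementation: the environment dynamics, state variables, constants, reward weights, and the random stand-in policies are all illustrative assumptions.

import random

class ToyDataCenterEnv:
    """Toy hourly data-center model with exogenous weather and grid carbon
    intensity. All dynamics and constants are illustrative assumptions,
    not values from the DC-CFR paper."""

    def __init__(self, horizon=24):
        self.horizon = horizon
        self.t = 0
        self.battery_soc = 0.5      # battery state of charge in [0, 1]
        self.deferred_load = 0.0    # flexible IT load (kWh) pushed to later hours

    def observe(self):
        # Shared observation: hour of day, outside temperature, grid carbon intensity.
        temp = 20.0 + 8.0 * random.random()     # degrees C (assumed range)
        carbon = 0.2 + 0.3 * random.random()    # kgCO2 per kWh (assumed range)
        return {"hour": self.t % 24, "temp": temp, "carbon": carbon}

    def step(self, actions, obs):
        # actions: 'cooling' = setpoint offset in degrees C, 'shift' = fraction
        # of flexible load deferred, 'battery' = charge (+) / discharge (-) rate.
        it_kwh = 100.0 * (1.0 - actions["shift"]) + self.deferred_load
        self.deferred_load = 100.0 * actions["shift"]
        cooling_kwh = max(0.0, 40.0 - 5.0 * actions["cooling"])  # higher setpoint, less cooling
        grid_kwh = it_kwh + cooling_kwh + 50.0 * actions["battery"]
        self.battery_soc = min(1.0, max(0.0, self.battery_soc + 0.1 * actions["battery"]))
        # One shared reward over the abstract's three objectives:
        # carbon (intensity-weighted), energy, and cost (assumed flat price).
        reward = -(obs["carbon"] * grid_kwh + 0.10 * grid_kwh + 0.15 * grid_kwh)
        self.t += 1
        return reward, self.t >= self.horizon


env = ToyDataCenterEnv()
obs, done, episode_return = env.observe(), False, 0.0
while not done:
    # Random stand-in policies; DC-CFR instead trains one RL policy per role.
    actions = {"cooling": random.uniform(0.0, 2.0),
               "shift": random.uniform(0.0, 0.5),
               "battery": random.uniform(-1.0, 1.0)}
    reward, done = env.step(actions, obs)
    episode_return += reward
    obs = env.observe()
print(f"episode return (toy units): {episode_return:.1f}")

The shared-reward structure here only illustrates why the three decisions interact: shifting load or charging the battery changes the grid draw that the carbon-intensity signal prices, which is what couples the agents' objectives.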

Published In

AAAI'24/IAAI'24/EAAI'24: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence
February 2024
23861 pages
ISBN: 978-1-57735-887-9

Sponsors

  • Association for the Advancement of Artificial Intelligence

Publisher

AAAI Press

