DOI: 10.1145/3600100.3625682
Research Article · Open Access

A Lightweight Calibrated Simulation Enabling Efficient Offline Learning for Optimal Control of Real Buildings

Published: 15 November 2023

Abstract

Modern commercial Heating, Ventilation, and Air Conditioning (HVAC) devices form a complex and interconnected thermodynamic system with the building and outside weather conditions, and current setpoint control policies are not fully optimized for minimizing energy use and carbon emissions. Given a suitable training environment, a Reinforcement Learning (RL) model is able to improve upon these policies, but training such a model, especially in a way that scales to thousands of buildings, presents many real-world challenges. We propose a novel simulation-based approach, where a customized simulator is used to train the agent for each building. Our open-source simulator is lightweight and calibrated via telemetry from the building to reach a higher level of fidelity. On a two-story, 68,000-square-foot building with 127 devices, we were able to calibrate our simulator to have just over half a degree of drift from the real world over a six-hour interval. This approach is an important step toward having a real-world RL control system that can be scaled to many buildings, allowing for greater efficiency and resulting in reduced energy consumption and carbon emissions.
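
The calibration idea in the abstract, nudging a lightweight thermal model until its simulated zone temperature tracks building telemetry within a small drift, can be illustrated with a rough sketch. The Python below is not the paper's implementation: the lumped single-zone model, the parameter names (tau, gain), and the synthetic telemetry arrays are all hypothetical, chosen only to show how a drift objective over a six-hour window could be minimized against measured data.

    # Hypothetical calibration sketch (not the paper's code): fit a lumped
    # single-zone thermal model to telemetry by minimizing temperature drift.
    import numpy as np
    from scipy.optimize import minimize

    def simulate(params, t0, t_outside, hvac_power, dt=300.0):
        # Forward-Euler rollout of dT/dt = (T_out - T) / tau + gain * P.
        tau, gain = params
        temps, t = [], t0
        for t_out, p in zip(t_outside, hvac_power):
            t = t + dt * ((t_out - t) / tau + gain * p)
            temps.append(t)
        return np.array(temps)

    def drift(params, t_measured, t_outside, hvac_power):
        # Mean absolute gap between simulated and measured zone temperature.
        sim = simulate(params, t_measured[0], t_outside, hvac_power)
        return float(np.abs(sim - t_measured[1:]).mean())

    # Synthetic six-hour telemetry at 5-minute resolution, purely illustrative.
    steps = 72
    rng = np.random.default_rng(0)
    t_outside = 30.0 + 2.0 * np.sin(np.linspace(0.0, np.pi, steps))
    hvac_power = -0.5 * np.ones(steps)   # constant cooling, arbitrary units
    t_measured = 22.0 + 0.1 * rng.standard_normal(steps + 1)

    result = minimize(drift, x0=[3600.0, 0.01],
                      args=(t_measured, t_outside, hvac_power),
                      method="Nelder-Mead")
    print("calibrated (tau, gain):", result.x, "mean drift:", result.fun)

In the paper's setting the calibration targets real telemetry from 127 devices rather than a single synthetic zone; the sketch only conveys the shape of the optimization loop.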

Published In

BuildSys '23: Proceedings of the 10th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
November 2023, 567 pages
ISBN: 9798400702303
DOI: 10.1145/3600100
Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

1. HVAC Optimization
2. Reinforcement Learning
3. Simulation

Qualifiers

• Research-article
• Research
• Refereed limited

Funding Sources

• Google

Conference

BuildSys '23

Acceptance Rates

Overall Acceptance Rate: 148 of 500 submissions, 30%
