Deep Belief Network Based Hybrid Model for Building Energy Consumption Prediction
Abstract
1. Introduction
2. Introduction to the DBN
2.1. Restricted Boltzmann Machine
- Step 1: Initialize the number of visible units $n$, the number of hidden units $m$, the number of training data $N$, the weighting matrix $W$, the visible bias vector $b$, the hidden bias vector $c$ and the learning rate $\eta$.
- Step 2: Assign a sample from the training data to be the initial state $v^{(0)}$ of the visible layer.
- Step 3: Calculate $P(h_j^{(0)} = 1 \mid v^{(0)})$ according to Equation (1), and extract $h_j^{(0)}$ from the conditional distribution $P(h_j^{(0)} \mid v^{(0)})$, where $j = 1, 2, \ldots, m$.
- Step 4: Calculate $P(v_i^{(1)} = 1 \mid h^{(0)})$ according to Equation (2), and extract $v_i^{(1)}$ from the conditional distribution $P(v_i^{(1)} \mid h^{(0)})$, where $i = 1, 2, \ldots, n$.
- Step 5: Calculate $P(h_j^{(1)} = 1 \mid v^{(1)})$ according to Equation (1).
- Step 6: Update the parameters according to the following equations:
  $$W \leftarrow W + \eta \left( v^{(0)} P(h^{(0)} = 1 \mid v^{(0)})^{\top} - v^{(1)} P(h^{(1)} = 1 \mid v^{(1)})^{\top} \right),$$
  $$b \leftarrow b + \eta \left( v^{(0)} - v^{(1)} \right),$$
  $$c \leftarrow c + \eta \left( P(h^{(0)} = 1 \mid v^{(0)}) - P(h^{(1)} = 1 \mid v^{(1)}) \right).$$
- Step 7: Assign another sample from the training data to be the initial state of the visible layer, and iterate Steps 3 to 7 until all the $N$ training data have been used.
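To make Steps 1 to 7 concrete, the following is a minimal NumPy sketch of one CD-1 pass over the training data. It illustrates the standard contrastive divergence algorithm rather than reproducing the authors' code, and assumes binary (0/1) training vectors:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(data, m, eta=0.1, rng=None):
    """One CD-1 pass over `data` (N, n), with m hidden units."""
    rng = rng or np.random.default_rng(0)
    N, n = data.shape
    W = 0.01 * rng.standard_normal((n, m))  # weighting matrix (Step 1)
    b = np.zeros(n)                         # visible bias vector
    c = np.zeros(m)                         # hidden bias vector

    for v0 in data:                                # Steps 2 and 7
        p_h0 = sigmoid(c + v0 @ W)                 # Step 3, Eq. (1)
        h0 = (rng.random(m) < p_h0).astype(float)  # sample h^(0)
        p_v1 = sigmoid(b + h0 @ W.T)               # Step 4, Eq. (2)
        v1 = (rng.random(n) < p_v1).astype(float)  # sample v^(1)
        p_h1 = sigmoid(c + v1 @ W)                 # Step 5, Eq. (1)
        # Step 6: CD-1 parameter updates
        W += eta * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b += eta * (v0 - v1)
        c += eta * (p_h0 - p_h1)
    return W, b, c
```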
2.2. Deep Belief Network
3. The Proposed Hybrid Model
3.1. Structure of the Hybrid Model
- Step 1: Extract the energy-consuming pattern as the periodicity knowledge from the training data.
- Step 2: Remove the energy-consuming pattern from the training data to generate the residual data.
- Step 3: Utilize the residual data to train the MDBN model.
- Step 4: Combine the outputs from the MDBN model with the periodicity knowledge to obtain the final prediction results of the hybrid model (the sketch after this list summarizes this data flow).
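The data flow of these four steps can be summarized as follows. This is a structural sketch only: `train_mdbn` and the pattern arguments are placeholders standing in for Sections 3.3 and 3.2, not actual APIs.

```python
def hybrid_predict(train_series, test_inputs,
                   pattern_train, pattern_test, train_mdbn):
    """Data flow of the four-step hybrid scheme.

    pattern_train / pattern_test : periodicity knowledge aligned with
    the training series and the prediction horizon (Step 1, Section 3.2)
    train_mdbn : placeholder for the MDBN training of Section 3.3
    """
    residual = train_series - pattern_train          # Step 2
    mdbn = train_mdbn(residual)                      # Step 3
    return mdbn.predict(test_inputs) + pattern_test  # Step 4
```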
3.2. Extraction of the Energy-Consuming Patterns and Generation of the Residual Data
3.2.1. The Daily-Periodic Pattern
3.2.2. The Weekly-Periodic Pattern
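One common way to obtain such periodic patterns, assuming the series is sampled at a fixed number of points per day, is to average the load at each within-day (or within-week) position over all days (or weeks) in the training data. The sketch below illustrates this idea; it is not necessarily the authors' exact formulation.

```python
import numpy as np

def daily_pattern(series, samples_per_day=24):
    """Average load profile over all whole days in the series."""
    series = np.asarray(series, dtype=float)
    n = len(series) // samples_per_day * samples_per_day
    return series[:n].reshape(-1, samples_per_day).mean(axis=0)

def weekly_pattern(series, samples_per_day=24):
    """Average load profile over all whole weeks in the series."""
    return daily_pattern(series, 7 * samples_per_day)

# Residual generation (Step 2): tile the pattern over the series
# and subtract, e.g.
#   pattern  = weekly_pattern(train_series)
#   k        = len(train_series) // len(pattern)
#   residual = train_series[:k * len(pattern)] - np.tile(pattern, k)
```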
3.3. Modified DBN and Its Training Algorithm
3.3.1. Structure of the MDBN
3.3.2. Pre-Training of the DBN Part
- Step 1: Initialize the number of hidden layers $k$, the number of training data $N$ and the initial sequence number of the hidden layer $u = 1$.
- Step 2: Assign a sample from the training data to be the input data of the DBN.
- Step 3: Regard the input layer and the first hidden layer of the DBN as an RBM, and compute the activation $h^{(1)}$ by Equation (3) when the training process of this RBM is finished.
- Step 4: Regard the $u$th and the $(u+1)$th hidden layers as an RBM with the input $h^{(u)}$, and compute the activation $h^{(u+1)}$ by Equation (3) when the training process of this RBM is completed.
- Step 5: Let $u = u + 1$, and iterate Step 4 until $u = k$.
- Step 6: Use the activation $h^{(k)}$ of the top hidden layer as the input of the regression part.
- Step 7: Assign another sample from the training data as the input data of the DBN, and iterate Steps 3 to 7 until all the $N$ training data have been assigned.
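The following sketch mirrors Steps 1 to 7, reusing `train_rbm_cd1` and `sigmoid` from the RBM sketch in Section 2.1. It processes the whole training set per layer rather than sample by sample, a common batched simplification rather than the authors' exact loop:

```python
import numpy as np

def pretrain_dbn(data, layer_sizes, eta=0.1):
    """Greedy layer-wise pre-training: one RBM per adjacent layer pair.

    data        : (N, n) training inputs
    layer_sizes : numbers of units in the k hidden layers
    """
    params, h = [], data              # the raw input feeds the first RBM
    for m in layer_sizes:             # Steps 3-5: train RBMs bottom-up
        W, b, c = train_rbm_cd1(h, m, eta)
        params.append((W, b, c))
        h = sigmoid(c + h @ W)        # activation by Eq. (3), fed upward
    return params, h                  # h^(k) feeds the regression part (Step 6)
```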
3.3.3. Least Squares Learning of the Regression Part
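Given the top-layer activations $h^{(k)}$ collected for all training samples (Step 6 above), the output weights of the regression part can be obtained in closed form. A minimal sketch, assuming a linear output layer solved by the Moore–Penrose pseudoinverse (the function name is illustrative):

```python
import numpy as np

def fit_regression_part(H, T):
    """Least-squares output weights for the regression part.

    H : (N, m_k) top-layer activations h^(k) for the N training samples
    T : (N,) training targets
    """
    return np.linalg.pinv(H) @ T  # Moore-Penrose solution of H @ beta = T
```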
4. Experiments
4.1. Introduction of the Comparative Approaches
4.1.1. Backward Propagation Neural Network
4.1.2. Generalized Radial Basis Function Neural Network
4.1.3. Extreme Learning Machine
- Randomly assign the input weights $w_j$ and the biases $b_j$ of the hidden neurons.
- Calculate the hidden layer output matrix $H$, where $H_{ij} = g(w_j \cdot x_i + b_j)$ for the activation function $g$.
- Calculate the output weights $\beta = H^{\dagger} T$, where $T$ is the vector of training targets and $H^{\dagger}$ is the Moore–Penrose generalized inverse of the matrix $H$.
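A minimal sketch of this three-step procedure follows. A sigmoid activation stands in for $g$ here; the hardlim variant used in the experiments would replace it with a 0/1 threshold, e.g. `(X @ W + b >= 0).astype(float)`:

```python
import numpy as np

def train_elm(X, T, L, rng=None):
    """Basic ELM: random hidden layer, least-squares output weights.

    X : (N, d) inputs, T : (N,) targets, L : number of hidden neurons
    """
    rng = rng or np.random.default_rng(0)
    N, d = X.shape
    W = rng.uniform(-1.0, 1.0, (d, L))       # random input weights
    b = rng.uniform(-1.0, 1.0, L)            # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T             # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```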
4.1.4. Support Vector Regression
4.2. Applied Data Sets and Experimental Setting
4.2.1. Applied Data Sets
4.2.2. Design Factors for MDBN
- Design Factor i: the number of hidden layers $k$. The number of hidden layers determines how many RBMs are stacked. In this study, we consider 2, 3 and 4 hidden layers as Levels 1, 2 and 3, respectively.
- Design Factor ii: the number of hidden units $m_u$ in the $u$th hidden layer. The number of hidden units is an important factor that greatly influences the performance of the MDBN model. Here, we assume that the numbers of neurons in all hidden layers are equal, i.e., $m_1 = m_2 = \cdots = m_k$. In this paper, we set 50, 100 and 150 neurons as Levels 1, 2 and 3, respectively.
- Design Factor iii: the number of input variables $r$. In this paper, we utilize the $r$ energy consumption data in the building energy consumption time series before time $t$ to predict the value at time $t$. In other words, we utilize $x_{t-r}, x_{t-r+1}, \ldots, x_{t-1}$ to predict the value of $x_t$ (see the sketch after this list). Here, we consider 4, 5 and 6 input variables as Levels 1, 2 and 3, respectively.
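As noted for Design Factor iii, the model inputs are lagged values of the series. A minimal sketch of this sliding-window construction (the helper name is illustrative):

```python
import numpy as np

def make_lagged_dataset(series, r):
    """Build inputs/targets so that x_{t-r}, ..., x_{t-1} predict x_t."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[t - r:t] for t in range(r, len(series))])
    y = series[r:]
    return X, y

# With r = 4 (Level 1): X[0] = series[0:4] predicts y[0] = series[4].
```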
4.2.3. Comparison Setting
4.3. Energy Consumption Prediction for the Retail Store
4.3.1. Energy-Consuming Pattern of the Retail Store
4.3.2. Configurations of the Prediction Models
- For the BPNN, there were 110 neurons in the hidden layer, which realized the nonlinear transformation of features through the sigmoid function. Additionally, the algorithm was run for 7000 iterations to achieve the learning objective.
- For the GRBFNN, 6-fold cross-validation was adopted to determine the optimized spread of the radial basis function. The spread was chosen from 0.01 to 2 with a step length of 0.1.
- For the ELM, there were 100 neurons in the hidden layer, and the hardlim function was chosen as the activation function for converting the original features into another space.
- For the SVR, the penalty coefficient was set to 80, and the radial basis function was chosen as the kernel function to realize the nonlinear transformation of the input features (a configuration sketch follows this list).
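The paper does not name a software package; as one possible realization of the BPNN and SVR settings above, scikit-learn offers direct equivalents (the GRBFNN and ELM do not have direct scikit-learn counterparts; see the ELM sketch in Section 4.1.3). The office-building configurations in Section 4.4.2 can be realized the same way with their respective parameters:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# BPNN: 110 sigmoid hidden neurons, trained by gradient descent
# for 7000 iterations
bpnn = MLPRegressor(hidden_layer_sizes=(110,), activation="logistic",
                    solver="sgd", max_iter=7000)

# SVR: penalty coefficient C = 80 with a radial basis function kernel
svr = SVR(kernel="rbf", C=80)
```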
4.3.3. Experimental Results
4.4. Energy Consumption Prediction for the Office Building
4.4.1. Energy-Consuming Pattern of the Office Building
4.4.2. Configurations of the Prediction Models
- For the BPNN, there were 200 neurons in the hidden layer, and the sigmoid function was chosen to realize the nonlinear transformation of features. Additionally, we ran the BP algorithm for 1000 iterations to obtain the final outputs.
- For the GRBFNN, 5-fold cross-validation was utilized to determine the optimized spread of the radial basis function. The spread was chosen from 0.01 to 2 with a step length of 0.1.
- For the ELM, there were 150 neurons in the hidden layer, and the hardlim function was chosen as the activation function for converting the original features into another space.
- For the SVR, the penalty coefficient was set to 10, and the sigmoid function was chosen as the kernel function to realize the nonlinear transformation of the input features.
4.4.3. Experimental Results
4.5. Comparisons and Discussions
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
| Design Factors | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- |
| i | 2 hidden layers | 3 hidden layers | 4 hidden layers |
| ii | 50 hidden units | 100 hidden units | 150 hidden units |
| iii | 4 input variables | 5 input variables | 6 input variables |
Prediction errors of the MDBN on the residual data of the retail store for the 27 orthogonal-design trials (factors i–iii at the levels defined above):

| Trial | i | ii | iii | MAE (kWh) | MRE (%) | RMSE (kWh) | Trial | i | ii | iii | MAE (kWh) | MRE (%) | RMSE (kWh) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 49.21 | 5.26 | 80.79 | 15 | 2 | 2 | 3 | 49.65 | 5.31 | 80.03 |
| 2 | 1 | 1 | 2 | 48.74 | 5.18 | 80.03 | 16 | 2 | 3 | 1 | 50.18 | 5.36 | 82.13 |
| 3 | 1 | 1 | 3 | 48.73 | 5.19 | 78.06 | 17 | 2 | 3 | 2 | 48.43 | 5.11 | 78.24 |
| 4 | 1 | 2 | 1 | 49.12 | 5.24 | 81.20 | 18 | 2 | 3 | 3 | 48.33 | 5.12 | 77.96 |
| 5 | 1 | 2 | 2 | 48.25 | 5.16 | 79.39 | 19 | 3 | 1 | 1 | 47.71 | 5.03 | 76.83 |
| 6 | 1 | 2 | 3 | 49.16 | 5.24 | 79.36 | 20 | 3 | 1 | 2 | 48.37 | 5.11 | 77.63 |
| 7 | 1 | 3 | 1 | 49.42 | 5.28 | 81.85 | 21 | 3 | 1 | 3 | 48.13 | 5.11 | 77.60 |
| 8 | 1 | 3 | 2 | 49.33 | 5.25 | 81.40 | 22 | 3 | 2 | 1 | 48.72 | 5.18 | 79.16 |
| 9 | 1 | 3 | 3 | 48.65 | 5.18 | 78.69 | 23 | 3 | 2 | 2 | 49.66 | 5.28 | 79.84 |
| 10 | 2 | 1 | 1 | 48.73 | 5.20 | 79.65 | 24 | 3 | 2 | 3 | 49.08 | 5.22 | 78.03 |
| 11 | 2 | 1 | 2 | 49.61 | 5.29 | 81.24 | 25 | 3 | 3 | 1 | 51.07 | 5.50 | 83.35 |
| 12 | 2 | 1 | 3 | 47.95 | 5.08 | 77.96 | 26 | 3 | 3 | 2 | 48.81 | 5.18 | 79.22 |
| 13 | 2 | 2 | 1 | 48.83 | 5.17 | 79.93 | 27 | 3 | 3 | 3 | 48.33 | 5.09 | 77.50 |
| 14 | 2 | 2 | 2 | 49.97 | 5.33 | 81.33 | | | | | | | |
| Methods | Data Type | MAE (kWh) | MRE (%) | RMSE (kWh) | r | $R^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| MDBN | Residual data | 47.71 | 5.03 | 76.83 | 0.94 | 0.89 |
| MDBN | Original data | 54.38 | 5.59 | 86.43 | 0.93 | 0.86 |
| BPNN | Residual data | 65.69 | 7.24 | 93.38 | 0.92 | 0.85 |
| BPNN | Original data | 75.45 | 8.20 | 100.40 | 0.94 | 0.87 |
| GRBFNN | Residual data | 54.60 | 5.75 | 83.87 | 0.93 | 0.87 |
| GRBFNN | Original data | 52.51 | 5.62 | 87.54 | 0.93 | 0.86 |
| ELM | Residual data | 58.54 | 6.29 | 88.62 | 0.93 | 0.86 |
| ELM | Original data | 78.86 | 8.34 | 113.02 | 0.89 | 0.79 |
| SVR | Residual data | 48.28 | 5.19 | 81.31 | 0.93 | 0.87 |
| SVR | Original data | 52.19 | 5.42 | 89.93 | 0.92 | 0.85 |
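The error measures reported in this and the following tables can be computed as below. This is a minimal sketch: the MRE formula (mean of the pointwise absolute relative errors) and the $R^2$ column label are assumptions consistent with the reported values, not reproduced from the paper's equations.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """MAE, MRE (%), RMSE, r and R^2 as reported in the comparison tables."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mre = 100.0 * np.mean(np.abs(err) / np.abs(y_true))  # assumed definition
    rmse = np.sqrt(np.mean(err ** 2))
    r = np.corrcoef(y_true, y_pred)[0, 1]                # correlation coefficient
    r2 = 1.0 - err.dot(err) / np.sum((y_true - y_true.mean()) ** 2)
    return mae, mre, rmse, r, r2
```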
Prediction errors of the MDBN on the residual data of the office building for the 27 orthogonal-design trials (factors i–iii at the levels defined above):

| Trial | i | ii | iii | MAE (kWh) | MRE (%) | RMSE (kWh) | Trial | i | ii | iii | MAE (kWh) | MRE (%) | RMSE (kWh) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 2.30 | 12.67 | 3.69 | 15 | 2 | 2 | 3 | 2.35 | 12.99 | 3.69 |
| 2 | 1 | 1 | 2 | 2.22 | 12.29 | 3.61 | 16 | 2 | 3 | 1 | 2.25 | 12.49 | 3.65 |
| 3 | 1 | 1 | 3 | 2.32 | 12.74 | 3.67 | 17 | 2 | 3 | 2 | 2.30 | 12.78 | 3.68 |
| 4 | 1 | 2 | 1 | 2.23 | 11.97 | 3.63 | 18 | 2 | 3 | 3 | 2.36 | 13.10 | 3.71 |
| 5 | 1 | 2 | 2 | 2.35 | 12.81 | 3.71 | 19 | 3 | 1 | 1 | 2.21 | 12.19 | 3.65 |
| 6 | 1 | 2 | 3 | 2.40 | 13.10 | 3.71 | 20 | 3 | 1 | 2 | 2.23 | 12.29 | 3.66 |
| 7 | 1 | 3 | 1 | 2.17 | 11.93 | 3.58 | 21 | 3 | 1 | 3 | 2.27 | 12.54 | 3.67 |
| 8 | 1 | 3 | 2 | 2.29 | 12.70 | 3.67 | 22 | 3 | 2 | 1 | 2.17 | 12.06 | 3.60 |
| 9 | 1 | 3 | 3 | 2.27 | 12.55 | 3.63 | 23 | 3 | 2 | 2 | 2.26 | 12.51 | 3.65 |
| 10 | 2 | 1 | 1 | 2.26 | 12.53 | 3.65 | 24 | 3 | 2 | 3 | 2.23 | 12.25 | 3.67 |
| 11 | 2 | 1 | 2 | 2.31 | 12.83 | 3.68 | 25 | 3 | 3 | 1 | 2.14 | 11.91 | 3.60 |
| 12 | 2 | 1 | 3 | 2.36 | 13.10 | 3.70 | 26 | 3 | 3 | 2 | 2.32 | 12.64 | 3.73 |
| 13 | 2 | 2 | 1 | 2.09 | 11.62 | 3.54 | 27 | 3 | 3 | 3 | 2.21 | 12.30 | 3.64 |
| 14 | 2 | 2 | 2 | 2.31 | 12.84 | 3.68 | | | | | | | |
| Methods | Data Type | MAE (kWh) | MRE (%) | RMSE (kWh) | r | $R^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| MDBN | Residual data | 2.09 | 11.62 | 3.54 | 0.97 | 0.93 |
| MDBN | Original data | 2.32 | 11.50 | 4.19 | 0.95 | 0.90 |
| BPNN | Residual data | 2.57 | 12.64 | 4.04 | 0.96 | 0.93 |
| BPNN | Original data | 3.85 | 23.21 | 4.75 | 0.95 | 0.91 |
| GRBFNN | Residual data | 2.54 | 12.62 | 4.39 | 0.95 | 0.91 |
| GRBFNN | Original data | 4.35 | 21.94 | 5.98 | 0.93 | 0.87 |
| ELM | Residual data | 3.50 | 17.18 | 4.92 | 0.96 | 0.92 |
| ELM | Original data | 4.61 | 25.52 | 5.92 | 0.90 | 0.82 |
| SVR | Residual data | 3.23 | 14.89 | 4.98 | 0.94 | 0.88 |
| SVR | Original data | 6.13 | 34.42 | 7.55 | 0.92 | 0.85 |