Energy management strategy for electric vehicles based on deep Q-learning using Bayesian optimization

Published: 01 September 2020

Abstract

    In this paper, a deep Q-learning (DQL)-based energy management strategy (EMS) is designed for an electric vehicle. First, the energy management problem is reformulated to meet the conditions for applying DQL by accounting for the dynamics of the system. Then, to minimize electricity consumption and maximize battery lifetime, the DQL-based EMS is designed to properly split the power demand into two parts: one supplied by the battery and the other by the supercapacitor. In addition, a hyperparameter tuning method, Bayesian optimization (BO), is introduced to optimize the hyperparameter configuration of the DQL-based EMS. Simulations are conducted to validate the improvements brought by BO and the convergence of the DQL algorithm equipped with the tuned hyperparameters. Simulations on both the training dataset and the testing dataset further validate the optimality and adaptability of the DQL-based EMS, which outperforms a previously published rule-based EMS in almost all cases.
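
    The abstract describes two coupled pieces: a DQL agent that chooses how to split each power demand between the battery and the supercapacitor, and an outer Bayesian-optimization loop that tunes the agent's hyperparameters. The sketch below illustrates that structure only and is not the authors' implementation: the toy plant model, the state variables (battery SoC, supercapacitor SoC, power demand), the reward weights, the 11-level action discretization, the choice of tuned hyperparameters (learning rate and discount factor), and the use of scikit-optimize's gp_minimize as the BO routine are all assumptions made for illustration.

```python
# Hedged sketch: a toy DQL power-split agent with a Bayesian-optimization
# outer loop. Plant model, reward weights, and hyperparameter ranges are
# illustrative assumptions, not the paper's values.
import numpy as np
import torch
import torch.nn as nn
from skopt import gp_minimize          # GP-based Bayesian optimization
from skopt.space import Real

rng = np.random.default_rng(0)
N_ACTIONS = 11  # supercapacitor share of demand: 0.0, 0.1, ..., 1.0

class QNet(nn.Module):
    """Maps state (battery SoC, supercap SoC, power demand) to Q-values."""
    def __init__(self, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                               nn.Linear(hidden, N_ACTIONS))

    def forward(self, s):
        return self.f(s)

def plant_step(state, action_idx):
    """Toy plant: split the demand, update both SoCs, return reward."""
    soc_b, soc_sc, demand = state
    share = action_idx / (N_ACTIONS - 1)        # fraction sent to supercap
    p_sc, p_batt = share * demand, (1.0 - share) * demand
    soc_b = np.clip(soc_b - 1e-3 * p_batt, 0.0, 1.0)
    soc_sc = np.clip(soc_sc - 5e-3 * p_sc, 0.0, 1.0)
    # Penalize energy drawn plus a quadratic battery-stress term as a
    # crude proxy for battery degradation.
    reward = -abs(demand) - 0.5 * p_batt ** 2
    next_state = np.array([soc_b, soc_sc, rng.uniform(0.0, 1.0)])
    return next_state, reward

def train_dql(lr, gamma, episodes=20, horizon=50):
    """Train a one-step DQL agent; mean episode return is the BO objective."""
    q = QNet()
    opt = torch.optim.Adam(q.parameters(), lr=lr)
    returns = []
    for _ in range(episodes):
        s, total = np.array([1.0, 1.0, rng.uniform(0.0, 1.0)]), 0.0
        for _ in range(horizon):
            sv = torch.tensor(s, dtype=torch.float32)
            a = int(rng.integers(N_ACTIONS)) if rng.random() < 0.1 \
                else int(q(sv).argmax())        # epsilon-greedy action
            s2, r = plant_step(s, a)
            total += r
            with torch.no_grad():               # bootstrapped TD target
                tgt = r + gamma * q(torch.tensor(s2, dtype=torch.float32)).max()
            loss = (q(sv)[a] - tgt) ** 2
            opt.zero_grad(); loss.backward(); opt.step()
            s = s2
        returns.append(total)
    return float(np.mean(returns))

# BO over two hyperparameters; gp_minimize minimizes, so negate the return.
space = [Real(1e-4, 1e-2, prior="log-uniform", name="lr"),
         Real(0.90, 0.999, name="gamma")]
result = gp_minimize(lambda p: -train_dql(lr=p[0], gamma=p[1]),
                     space, n_calls=15, random_state=0)
print("best (lr, gamma):", result.x, " best mean return:", -result.fun)
```

    A faithful reproduction would add the usual DQN machinery the paper builds on (experience replay and a target network) and replace the random demand signal with a drive-cycle dataset.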


    Published In

    Neural Computing and Applications, Volume 32, Issue 18
    September 2020, 1004 pages
    ISSN: 0941-0643
    EISSN: 1433-3058

    Publisher

    Springer-Verlag, Berlin, Heidelberg

    Publication History

    Published: 01 September 2020
    Accepted: 05 October 2019
    Received: 03 January 2019

    Author Tags

    1. Energy management strategy (EMS)
    2. Electric vehicle (EV)
    3. Deep Q-learning (DQL)
    4. Bayesian optimization (BO)

    Qualifiers

    • Research-article

    Funding Sources

    • National Science and Technology Support Program
    • Fundamental Research Funds for the Central Universities


    Cited By

    • (2023) A Novel Model based Energy Management Strategy for Plug-in Hybrid Electric Vehicles using Deep Reinforcement Learning. Proceedings of the 2023 Fifteenth International Conference on Contemporary Computing, pp 289–293. https://doi.org/10.1145/3607947.3608004. Online publication date: 3-Aug-2023
    • (2023) Generalized gradient emphasis learning for off-policy evaluation and control with function approximation. Neural Computing and Applications 35(32), 23599–23616. https://doi.org/10.1007/s00521-023-08965-4. Online publication date: 1-Nov-2023
    • (2022) Research on power industry engineering optimization algorithm based on deep learning. Proceedings of the 3rd Asia-Pacific Conference on Image Processing, Electronics and Computers, pp 914–919. https://doi.org/10.1145/3544109.3544380. Online publication date: 14-Apr-2022
    • (2022) Dsa-PAML: a parallel automated machine learning system via dual-stacked autoencoder. Neural Computing and Applications 34(15), 12985–13006. https://doi.org/10.1007/s00521-022-07119-2. Online publication date: 1-Aug-2022
