Abstract
Autonomous vehicles aim to improve driving safety and comfort. In SAE Level 3 automated driving, the system must determine whether driving authority can be transferred from the computer to the driver; the driver must be awake and sufficiently alert to resume manual operation. Physiological measurement methods require sensors attached to the driver's body, which drivers find annoying, frustrating, and inconvenient. The purpose of this study is to determine a driver's condition from eye-closing time and yawning frequency using a visible-light camera and a thermal camera. Eye closures and yawns were detected with a non-contact detector applied to both the visible-light and thermal images. With the visible-light camera, the driver-state recognition rate was 90 % or higher when the driver's surroundings were brightly lit, but it decreased sharply to approximately 4 % when the surroundings were dark. With the thermal camera, the face recognition rate was 74 % under both bright and dark conditions, and both DLib and OpenCV performed well on the thermal images. Therefore, the combination of DLib and thermal imaging could be used for reliable driver-drowsiness detection.
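As a concrete illustration of the landmark-based measurement of eye-closing time and yawning frequency described above, the sketch below estimates eye closure and mouth opening from DLib facial landmarks on frames read with OpenCV. This is a minimal sketch, not the paper's exact pipeline: the eye- and mouth-aspect-ratio thresholds, the frame-counting logic, the camera index, and the shape_predictor_68_face_landmarks.dat model file are illustrative assumptions; the same measurement function can be fed either grayscale visible-light frames or 8-bit thermal frames.

```python
# Minimal sketch of landmark-based eye-closure and yawn measurement with DLib
# and OpenCV. Not the authors' exact method: thresholds, the landmark-model
# file, and the frame-counting logic are illustrative assumptions.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model, distributed separately by DLib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

EYE_CLOSED_THRESH = 0.21   # assumed eye-aspect-ratio threshold for a closed eye
YAWN_THRESH = 0.60         # assumed mouth-aspect-ratio threshold for a yawn


def eye_aspect_ratio(p):
    """EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|); drops toward 0 as the eye closes."""
    return (dist.euclidean(p[1], p[5]) + dist.euclidean(p[2], p[4])) / (
        2.0 * dist.euclidean(p[0], p[3]))


def mouth_aspect_ratio(p):
    """Inner-lip opening (landmarks 60-67) relative to mouth width."""
    return dist.euclidean(p[2], p[6]) / dist.euclidean(p[0], p[4])


def frame_measurements(gray):
    """Return (ear, mar) for the largest face in a frame, or None.

    `gray` may be a grayscale visible-light frame or an 8-bit thermal frame.
    """
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, max(faces, key=lambda r: r.area()))
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
    return ear, mouth_aspect_ratio(pts[60:68])


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)            # camera index 0 is an assumption
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    closed_frames, yawning, yawn_count = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        m = frame_measurements(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if m is None:
            continue
        ear, mar = m
        # Accumulate consecutive closed-eye frames to estimate eye-closing time.
        closed_frames = closed_frames + 1 if ear < EYE_CLOSED_THRESH else 0
        if closed_frames / fps > 1.0:     # eyes closed for more than ~1 s
            print("drowsiness warning: prolonged eye closure")
        # Count rising edges of the mouth-aspect ratio as yawns.
        if mar > YAWN_THRESH and not yawning:
            yawn_count += 1
        yawning = mar > YAWN_THRESH
    cap.release()
```

In practice the thresholds and the eye-closure duration would need to be tuned per camera and per subject; the sketch only shows how the two measures discussed in the abstract can be derived from the DLib landmark set.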
Cite this article
Kajiwara, S. Driver-Condition Detection Using a Thermal Imaging Camera and Neural Networks. Int. J. Automot. Technol. 22, 1505–1515 (2021). https://doi.org/10.1007/s12239-021-0130-3