Hybrid Representation of Sensor Data for the Classification of Driving Behaviour
Abstract
1. Introduction
2. Background
2.1. CNNs
2.2. RNNs
2.2.1. LSTMs
2.2.2. GRUs
3. Hybrid Representation Approach
3.1. CNN-Based Image-like Representation of Sensor Measurements
3.2. RNN-Based Time-Series Representation
3.3. Rule-Guided Event Detection
3.4. Route Level Classification
4. Experimental Evaluation
4.1. Datasets
4.1.1. UAH
4.1.2. MOTIF Datasets
4.2. Experimental Setup
4.3. Results
4.3.1. Time Slice Classification
4.3.2. Route Level Classification
- (1) In almost all cases, each hybrid variant obtains equal or higher classification accuracy than its NN-only counterpart (the one exception arises in CNN-based classification of normal samples in the MOTIF 1 dataset). This indicates that the rule-based component contributes to overall classification performance and, in several cases, ‘corrects’ the result obtained by the NN-based component.
- (2) Among the three NN architectures investigated, the RNN-based ones (LSTM and GRU) obtain higher classification performance than the CNN-based architecture, particularly on the UAH and MOTIF 2 datasets. This could be attributed to the fact that RNNs were formulated to capture patterns in time series such as these sensor measurements.
- (3) Between LSTM and GRU, the latter achieves slightly more accurate classification.
- (1) The GRU-based hybrid variant (‘Hybrid (GRU)’) has 1/23 misclassification, whereas the CNN-based and LSTM-based hybrid variants (‘Hybrid (CNN)’ and ‘Hybrid (LSTM)’) have 6/23 and 2/23 misclassifications, respectively. The approach of Romera et al. [12] has 3/23 misclassifications.
- (2) All ‘NN-only’ variants lead to a considerable number of misclassifications. Still, when the ‘NN-only’ variants are combined with ‘Events-only’, overall classification accuracy increases, as evident in the results obtained by the ‘Hybrid’ counterparts. There is also a case in which the ‘NN-only’ variants ‘correct’ ‘Events-only’ (‘D4-Aggressive-Secondary’). These observations demonstrate that each component contributes complementary information, increasing overall classification accuracy.
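The complementarity described above can be illustrated with a simple vote-based fusion of the two components. The exact fusion rule of the proposed method is not reproduced in this excerpt, so the following is a hypothetical sketch (the function name and the weighting of the event-based label are assumptions, not taken from the paper): each NN-classified time slice casts one vote, and the label suggested by rule-based event detection casts additional weighted votes.

```python
from collections import Counter

def route_label(nn_slice_labels, event_label, event_weight=None):
    """Hypothetical route-level fusion sketch (not the authors' exact rule).

    nn_slice_labels: per-time-slice labels from the NN component.
    event_label: route label suggested by rule-based event detection.
    event_weight: votes given to the event-based label; by default it is
    weighted like half the slices, so it can overturn a narrow NN
    majority but not a clear one.
    """
    votes = Counter(nn_slice_labels)
    if event_weight is None:
        event_weight = max(1, len(nn_slice_labels) // 2)  # assumption
    votes[event_label] += event_weight
    # The route label is the class with the most combined votes.
    return votes.most_common(1)[0][0]
```

With this weighting, an event-based ‘aggressive’ verdict overrides a 6-vs-4 NN split, while a unanimous NN decision stands, mirroring how the components ‘correct’ each other in the results above.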
5. Conclusions
- (1) Both NN-guided time-series encoding and rule-guided event detection contribute to the accuracy obtained by the proposed hybrid classification method.
- (2) The RNN-based variants (LSTM and GRU) obtain higher classification performance than the CNN-based variants, particularly on the UAH and MOTIF 2 datasets.
- (3) Between LSTM and GRU, the latter achieves slightly more accurate classification.
- (4)
- (5) In terms of overall route classification, the proposed approach outperforms the approach of Romera et al. [12] in distinguishing between normal and aggressive driving behaviour, resulting in fewer misclassifications on the UAH dataset. The proposed method obtains this result without using camera-derived data, which the method of Romera et al. requires.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bergasa, L.M.; Almería, D.; Almazán, J. DriveSafe: An app for alerting inattentive drivers and scoring driving behaviors. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Dearborn, MI, USA, 8–11 June 2014; pp. 240–245.
- Joubert, J.W.; de Beer, D.; de Koker, N. Combining accelerometer data and contextual variables to evaluate the risk of driver behaviour. Transp. Res. Part F Traffic Psychol. Behav. 2016, 41, 80–96.
- Van Ly, M.; Martin, S.; Trivedi, M.M. Driver classification and driving style recognition using inertial sensors. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013; pp. 1040–1045.
- Vaitkus, V.; Lengvenis, P.; Zylius, G. Driving style classification using long-term accelerometer information. In Proceedings of the International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 2–5 September 2014; pp. 641–644.
- Yi, D.; Du, J.; Liu, C.; Quddus, M.; Chen, W.-H. A machine learning based personalized system for driving state recognition. Transp. Res. Part C 2019, 105, 241–261.
- Bouhoute, A.; Oucheikh, R.; Boubouh, K.; Berrada, I. Advanced driving behavior analytics for an improved safety assessment and driver fingerprinting. IEEE Trans. Intell. Transp. Syst. 2019, 20, 2171–2184.
- Xie, J.; Zhu, M. Maneuver-based driving behavior classification based on random forest. IEEE Sens. Lett. 2019, 3, 1–4.
- Yuksel, A.S.; Atmaca, S. Driver’s black box: A system for driver risk assessment using machine learning and fuzzy logic. J. Intell. Transp. Syst. 2020, 25, 482–500.
- Savelonas, M.; Karkanis, S.; Spyrou, E. Classification of driving behaviour using short-term and long-term summaries of sensor data. In Proceedings of the IEEE South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Corfu, Greece, 25–27 September 2020; pp. 1–4.
- Spyrou, E.; Vernikos, I.; Savelonas, M.; Karkanis, S. An image-based approach for classification of driving behaviour using CNNs. In Advances in Mobility-as-a-Service Systems. CSUM 2020. Advances in Intelligent Systems and Computing; Nathanail, E.G., Adamos, G., Karakikes, I., Eds.; Springer: Cham, Switzerland, 2021; Volume 1278.
- Saleh, K.; Hossny, M.; Nahavandi, S. Driving behavior classification based on sensor data fusion using LSTM recurrent neural networks. In Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
- Romera, E.; Bergasa, L.M.; Arroyo, R. Need data for driver behaviour analysis? Presenting the public UAH-DriveSet. In Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 387–392.
- Mantzekis, D.; Savelonas, M.; Karkanis, S.; Spyrou, E. RNNs for classification of driving behaviour. In Proceedings of the IEEE International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece, 15–17 July 2019; pp. 1–2.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555.
- Cho, K.; van Merrienboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014; pp. 103–111.
- Savelonas, M.; Mantzekis, D.; Labiris, N.; Tsakiri, A.; Karkanis, S.; Spyrou, E. Hybrid time-series representation for the classification of driving behaviour. In Proceedings of the International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), Zakynthos, Greece, 29–30 October 2020; pp. 1–6.
- Khodairy, M.A.; Abosamra, G. Driving behavior classification based on oversampled signals of smartphone embedded sensors using an optimized stacked-LSTM neural networks. IEEE Access 2020, 9, 4957–4972.
- Xie, J.; Hu, K.; Li, G.; Guo, Y. CNN-based driving maneuver classification using multi-sliding window fusion. Expert Syst. Appl. 2021, 169, 114442.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; Volume 25, pp. 1097–1105.
- Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.-R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag. 2012, 29, 82–97.
- Graves, A. Supervised sequence labelling with recurrent neural networks. In Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2008.
- Graves, A.; Mohamed, A.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649.
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. arXiv 2014, arXiv:1409.3215.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2016, arXiv:1409.0473v7.
- Graves, A. Generating sequences with recurrent neural networks. arXiv 2014, arXiv:1308.0850v5.
- FMS500 LIGHT+, Sensata Technologies. Available online: https://www.xirgoglobal.com/export/en/model/fms500-light-0 (accessed on 14 September 2021).
- Gal, Y. Uncertainty in Deep Learning. Ph.D. Thesis, University of Cambridge, Cambridge, UK, 2016.
- Chollet, F. Keras. Available online: https://keras.io (accessed on 14 September 2021).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Available online: https://www.tensorflow.org/ (accessed on 14 September 2021).
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Silva, F.; Analide, C.; Novais, P. Traffic expression through ubiquitous and pervasive sensorization: Smart cities and assessment of driving behaviour. In Proceedings of the International Conference on Pervasive and Embedded Computing and Communication Systems (PECCS), Angers, France, 11–13 February 2015; pp. 33–42.
- Semanjski, I.; Bellens, R.; Gautama, S.; Witlox, F. Integrating big data into a sustainable mobility policy 2.0 planning support system. Sustainability 2016, 8, 1142.
- Alwattar, T.A.; Mian, A. Development of an elastic material model for BCC lattice cell structures using finite element analysis and neural networks approaches. J. Compos. Sci. 2019, 3, 33.
- Alwattar, T.A. Developing Equivalent Solid Model for Lattice Cell Structure Using Numerical Approaches. Ph.D. Thesis, Wright State University, Dayton, OH, USA, 2020.
Event Type | Low Sensitivity | Medium Sensitivity | High Sensitivity
---|---|---|---
Acceleration | 0.1 g < az < 0.2 g | 0.2 g < az < 0.4 g | 0.4 g < az
Braking | −0.1 g > az > −0.2 g | −0.2 g > az > −0.4 g | −0.4 g > az
Turning | 0.1 g < \|ay\| < 0.2 g | 0.2 g < \|ay\| < 0.4 g | 0.4 g < \|ay\|
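The thresholds in this table can be applied directly to per-sample accelerations. A minimal sketch, assuming az and ay are expressed in g and that the longitudinal and lateral axes are checked independently (the function name and return format are assumptions, not from the paper):

```python
def detect_events(az, ay):
    """Map one accelerometer sample (in g) to the rule-based events it
    triggers, following the threshold table above. Each event is a
    (type, sensitivity) pair; an empty list means no event fired."""
    def level(mag):
        # Map an absolute acceleration magnitude to a sensitivity level.
        if mag > 0.4:
            return "high"
        if mag > 0.2:
            return "medium"
        if mag > 0.1:
            return "low"
        return None

    events = []
    lon = level(abs(az))  # longitudinal axis: acceleration / braking
    if lon:
        events.append(("acceleration" if az > 0 else "braking", lon))
    lat = level(abs(ay))  # lateral axis: turning
    if lat:
        events.append(("turning", lat))
    return events
```

For example, a sample with az = 0.3 g fires a medium-sensitivity acceleration event, while az = −0.5 g fires a high-sensitivity braking event. In practice the per-event counts over a route would feed the ‘Events-only’ component.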
Driver | Gender | Age Range | Model | Fuel
---|---|---|---|---
D1 | Male | 40–50 | Audi Q5 | Diesel
D2 | Male | 20–30 | Mercedes B180 | Diesel
D3 | Male | 20–30 | Citroen C4 | Diesel
D4 | Female | 30–40 | Kia Picanto | Gasoline
D5 | Male | 30–40 | Opel Astra | Gasoline
D6 | Male | 40–50 | Citroen C-Zero | Electric
Driver | Gender | Age Range | Model | Fuel
---|---|---|---|---|
MTF1-D1 | Male | 30–40 | Toyota Yaris | Gasoline |
MTF1-D2 | Male | 30–40 | Toyota Yaris | Gasoline |
MTF2-D3 | Male | 30–40 | MERCEDES-41hp | Diesel |
MTF2-D4 | Male | 40–50 | IVECO-177hp | Diesel |
MTF2-D5 | Male | 40–50 | MERCEDES-41hp | Diesel |
MTF2-D6 | Male | 50–60 | IVECO-177hp | Diesel |
UAH

CNN | Normal | Aggressive
---|---|---
Normal | 0.85 | 0.15
Aggressive | 0.70 | 0.30

LSTM | Normal | Aggressive
---|---|---
Normal | 0.97 | 0.03
Aggressive | 0.25 | 0.75

GRU | Normal | Aggressive
---|---|---
Normal | 0.98 | 0.02
Aggressive | 0.22 | 0.78

MOTIF 1

CNN | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 0.92 | 0.05 | 0.03
Semi-aggressive | 0.14 | 0.83 | 0.03
Aggressive | 0.03 | 0.04 | 0.94

LSTM | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 1.00 | 0 | 0
Semi-aggressive | 0.005 | 0.99 | 0.005
Aggressive | 0 | 0.02 | 0.98

GRU | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 1.00 | 0 | 0
Semi-aggressive | 0 | 1.00 | 0
Aggressive | 0 | 0.01 | 0.99

MOTIF 2

CNN | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 0.86 | 0.07 | 0.07
Semi-aggressive | 0.15 | 0.81 | 0.04
Aggressive | 0.08 | 0.16 | 0.76

LSTM | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 0.95 | 0.03 | 0.02
Semi-aggressive | 0.01 | 0.98 | 0.01
Aggressive | 0 | 0.01 | 0.99

GRU | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 0.98 | 0.01 | 0.01
Semi-aggressive | 0.03 | 0.96 | 0.01
Aggressive | 0.03 | 0.01 | 0.96
UAH

Model | Acc | P | R | F1
---|---|---|---|---
CNN | 0.64 | 0.63 | 0.64 | 0.60
LSTM | 0.89 | 0.89 | 0.89 | 0.89
GRU | 0.91 | 0.91 | 0.91 | 0.91

MOTIF 1

Model | Acc | P | R | F1
---|---|---|---|---
CNN | 0.82 | 0.83 | 0.82 | 0.82
LSTM | 0.99 | 0.99 | 0.99 | 0.99
GRU | 0.99 | 1.00 | 1.00 | 1.00

MOTIF 2

Model | Acc | P | R | F1
---|---|---|---|---
CNN | 0.78 | 0.79 | 0.78 | 0.78
LSTM | 0.97 | 0.97 | 0.97 | 0.97
GRU | 0.96 | 0.97 | 0.97 | 0.97
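The aggregate metrics in these tables follow from the confusion matrices by the standard computation. A minimal sketch (the function name is an assumption, and macro-averaging over classes is assumed; the paper may use a different averaging):

```python
import numpy as np

def classification_metrics(cm):
    """Compute accuracy and macro-averaged precision, recall and F1
    from a confusion matrix cm, where cm[i, j] counts samples of true
    class i predicted as class j. Assumes every class appears at least
    once as a true label and as a prediction (no zero rows/columns)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                   # correctly classified counts
    precision = tp / cm.sum(axis=0)    # per predicted class
    recall = tp / cm.sum(axis=1)       # per true class
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "Acc": tp.sum() / cm.sum(),
        "P": precision.mean(),
        "R": recall.mean(),
        "F1": f1.mean(),
    }
```

For a two-class matrix [[9, 1], [2, 8]], this yields Acc = 0.85 and macro recall 0.85, matching the row-normalised confusion matrices reported above when scaled to counts.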
UAH

CNN | Normal | Aggressive
---|---|---
Normal | 11/12 (8/12) | 1/12 (4/12)
Aggressive | 4/11 (6/11) | 7/11 (5/11)

LSTM | Normal | Aggressive
---|---|---
Normal | 11/12 (10/12) | 1/12 (2/12)
Aggressive | 1/11 (5/11) | 10/11 (6/11)

GRU | Normal | Aggressive
---|---|---
Normal | 12/12 (11/12) | 0/12 (1/12)
Aggressive | 1/11 (5/11) | 10/11 (6/11)

MOTIF 1

CNN | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 10/11 (11/11) | 1/11 (0/11) | 0/11 (0/11)
Semi-aggressive | 0/11 (0/11) | 11/11 (9/11) | 0/11 (2/11)
Aggressive | 0/11 (0/11) | 0/11 (2/11) | 11/11 (9/11)

LSTM | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 11/11 (5/11) | 0/11 (6/11) | 0/11 (0/11)
Semi-aggressive | 0/11 (2/11) | 11/11 (7/11) | 0/11 (2/11)
Aggressive | 0/11 (0/11) | 1/11 (1/11) | 10/11 (10/11)

GRU | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 11/11 (5/11) | 0/11 (6/11) | 0/11 (0/11)
Semi-aggressive | 0/11 (2/11) | 11/11 (7/11) | 0/11 (2/11)
Aggressive | 0/11 (0/11) | 0/11 (0/11) | 11/11 (11/11)

MOTIF 2

CNN | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 7/12 (4/12) | 1/12 (8/12) | 4/12 (0/12)
Semi-aggressive | 1/12 (0/12) | 11/12 (12/12) | 0/12 (0/12)
Aggressive | 0/12 (0/12) | 0/12 (2/12) | 12/12 (10/12)

LSTM | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 10/12 (9/12) | 2/12 (3/12) | 0/12 (0/12)
Semi-aggressive | 1/12 (1/12) | 11/12 (11/12) | 0/12 (0/12)
Aggressive | 0/12 (0/12) | 0/12 (0/12) | 12/12 (12/12)

GRU | Normal | Semi-aggressive | Aggressive
---|---|---|---
Normal | 11/12 (10/12) | 1/12 (2/12) | 0/12 (0/12)
Semi-aggressive | 0/12 (0/12) | 12/12 (12/12) | 0/12 (0/12)
Aggressive | 0/12 (0/12) | 0/12 (0/12) | 12/12 (12/12)
State | Driver | Time (min) | Km | CNN (Only) | LSTM (Only) | GRU (Only) | Events (Only) | Hybrid (CNN) | Hybrid (LSTM) | Hybrid (GRU) | Romera et al. [12]
---|---|---|---|---|---|---|---|---|---|---|---
Normal (Motorway) | D1 | 14 | 25 | T | T | T | T | T | T | T | T
Normal (Motorway) | D2 | 15 | 26 | F | T | T | T | T | T | T | T
Normal (Motorway) | D3 | 15 | 26 | F | T | T | T | T | T | T | T
Normal (Motorway) | D4 | 16 | 25 | T | T | T | T | T | T | T | T
Normal (Motorway) | D5 | 15 | 25 | F | T | T | T | T | T | T | T
Normal (Motorway) | D6 | 17 | 25 | T | T | T | T | T | T | T | T
Aggressive (Motorway) | D1 | 12 | 24 | F | T | T | T | T | T | T | T
Aggressive (Motorway) | D2 | 14 | 26 | T | F | F | T | F | T | T | F
Aggressive (Motorway) | D3 | 13 | 26 | F | T | T | T | T | T | T | T
Aggressive (Motorway) | D4 | 15 | 25 | T | F | F | T | T | T | T | F
Aggressive (Motorway) | D5 | 13 | 25 | F | F | F | T | T | T | T | F
Aggressive (Motorway) | D6 | 15 | 25 | F | T | T | T | T | T | T | T
Normal (Secondary) | D1 | 10 | 16 | T | T | T | T | T | T | T | T
Normal (Secondary) | D2 | 10 | 16 | T | F | T | T | T | F | T | T
Normal (Secondary) | D3 | 11 | 16 | F | T | T | T | T | T | T | T
Normal (Secondary) | D4 | 11 | 16 | T | F | F | T | T | T | T | T
Normal (Secondary) | D5 | 11 | 16 | T | T | T | T | F | T | T | T
Normal (Secondary) | D6 | 13 | 16 | T | T | T | T | T | T | T | T
Aggressive (Secondary) | D1 | 8 | 16 | T | F | F | T | T | T | T | T
Aggressive (Secondary) | D2 | 10 | 16 | T | T | T | T | T | T | T | T
Aggressive (Secondary) | D3 | 11 | 16 | F | F | F | F | F | F | F | T
Aggressive (Secondary) | D4 | 10 | 16 | F | T | T | F | F | T | T | T
Aggressive (Secondary) | D5 | 7 | 12 | T | T | T | T | F | T | T | T
Accuracy at Normal | | | | 8/12 | 10/12 | 11/12 | 12/12 | 11/12 | 11/12 | 12/12 | 12/12
Accuracy at Aggressive | | | | 5/11 | 6/11 | 6/11 | 9/11 | 6/11 | 10/11 | 10/11 | 8/11
Overall Accuracy | | | | 13/23 | 16/23 | 17/23 | 21/23 | 17/23 | 21/23 | 22/23 | 20/23
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Savelonas, M.; Vernikos, I.; Mantzekis, D.; Spyrou, E.; Tsakiri, A.; Karkanis, S. Hybrid Representation of Sensor Data for the Classification of Driving Behaviour. Appl. Sci. 2021, 11, 8574. https://doi.org/10.3390/app11188574