Lightweight Driver Behavior Identification Model with Sparse Learning on In-Vehicle CAN-BUS Sensor Data
Abstract
1. Introduction
- A lightweight deep-learning network is proposed for driver-behavior identification using in-vehicle CAN-BUS sensor data. The proposed architecture outperforms state-of-the-art methods: it achieves higher accuracy with more efficient memory usage and lower computational complexity (fewer floating-point operations (FLOPs) and fewer parameters), which in turn improves the inference time.
- Our proposed architecture requires a shorter window size (40 s) for driver identification, whereas previous research required at least 60 s of time-series data to perform the classification.
- We study the impact of the window size (number of time steps) and the degree of overlap of the sliding window on accuracy and computational complexity, and determine the optimal values for our network (a windowing sketch follows this list).
- To further validate the effectiveness of our lightweight model, we also evaluate current research methods after applying channel pruning at different layers to make them lightweight, and we assess how far they can be pruned without significantly compromising their accuracy. Even after pruning, the proposed solution, which introduces depthwise convolutions, remains more compact than the existing methods. We present detailed results in terms of inference time and memory usage on Jetson embedded systems (Xavier, TX2, and Nano).
- For robust testing of the proposed model, we inject anomalous data at different time sequences and then apply an anomaly-detection method (one-class support vector machine). We present a comparison of robustness with existing algorithms.
- To keep the model lightweight and accommodate new classes without sacrificing accuracy, we make the proposed solution adaptable to new classes by applying a state-of-the-art sparse-learning technique at the fully connected layer. Specifically, we carefully select the nodes most important to the existing classes, freeze them, and retrain the network by improving the weaker nodes to classify the new classes. This sustains high accuracy on the existing classes while providing room to absorb new classes without increasing the network size.
- We deploy the proposed algorithm, equipped with sparse learning, in an NVIDIA-Docker container environment on the Jetson platform (Xavier, TX2, and Nano). Here, the proposed model runs as an active instance (container) and supports incremental learning to absorb a greater number of classes. This makes the model a favorable candidate for deployment under real-time conditions, as containers are a virtualization method that is gradually becoming the base environment for edge-computing applications.
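As an illustration of the windowing scheme mentioned above, the following minimal sketch (in NumPy; the function and variable names are ours, for illustration only) segments a multivariate CAN-BUS time series into overlapping windows with the window size Wx = 40 and stride dx = 6 used for the proposed model:

```python
# Minimal sliding-window segmentation sketch.
# Assumes `series` is a (T, F) array of CAN-BUS readings sampled at 1 Hz,
# so a 40-step window corresponds to 40 s of driving data.
import numpy as np

def sliding_windows(series: np.ndarray, wx: int = 40, dx: int = 6) -> np.ndarray:
    """Split a (T, F) time series into overlapping (wx, F) windows."""
    starts = range(0, len(series) - wx + 1, dx)
    return np.stack([series[s:s + wx] for s in starts])

# Example: 10 min of data with the 15 selected CAN features.
data = np.random.rand(600, 15)      # placeholder for real CAN-BUS logs
windows = sliding_windows(data)     # shape: (num_windows, 40, 15)
print(windows.shape)
```

A smaller stride increases window overlap and the amount of training data, whereas a larger stride reduces both; Section 4 reports the accuracy/complexity trade-off we observed.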
2. Related Work
2.1. Data Source
2.2. Machine-Learning Models
2.2.1. CNN-RNN Architectures
2.2.2. Driver Identification Using DeepConvRNN-Attention
2.2.3. Driver Identification Using FCN-LSTM
2.3. Applied Platform
3. Methodology
3.1. Problem Formulation
3.2. OCS Lab—Security Driving Dataset
3.3. Our Proposed Framework
3.3.1. Depthwise Convolution
3.3.2. Recurrent Neural Networks
3.3.3. Hyperparameter Optimization
4. Performance Evaluation
4.1. Experimental Setup
4.2. Cross-Validation of Time-Series Data
4.3. Computational Complexity of the Proposed Model
- The input size (40 × 15) is key to reducing the network size, which mainly determines memory consumption. Although the input layer itself has no parameters, it indirectly reduces the number of parameters of the subsequent layers.
- A max-pooling layer with a large filter size is utilized to effectively reduce the network size (memory and parameters), thereby alleviating over-fitting; it also indirectly reduces the parameter counts of subsequent layers.
- Depthwise convolution layers with large filters and small depth multipliers effectively reduce the number of parameters.
- The LSTM layer is not stacked sequentially after the convolutions; instead, the input is fed separately to the convolutional and LSTM branches of the proposed model, which allows a small recurrent component, namely a single LSTM layer with 10 hidden units in our case. By contrast, DeepConvLSTM needs two LSTM layers with 128 hidden neurons each to reach a competitive accuracy (see the architecture sketch after this list).
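The parallel design described in the last bullet can be sketched in Keras as follows. This is an illustrative sketch only: the kernel size, depth multiplier, pool size, and layer arrangement below are assumed values chosen to mirror the bullets above, not our exact reported configuration.

```python
# Sketch of a lightweight parallel DepthConv + LSTM classifier.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(40, 15))            # Wx = 40 time steps, 15 CAN features

# Convolutional branch: a depthwise convolution with a large kernel and a
# small depth multiplier keeps the parameter count low, and a large
# max-pooling filter shrinks the feature map (and downstream parameters).
x = layers.Reshape((40, 15, 1))(inp)
x = layers.DepthwiseConv2D(kernel_size=(8, 1), depth_multiplier=2,
                           activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(4, 1))(x)
x = layers.Flatten()(x)

# Recurrent branch: the raw input is fed directly to a single small LSTM
# (10 hidden units) instead of stacking the LSTM after the convolutions.
r = layers.LSTM(10)(inp)

merged = layers.concatenate([x, r])
out = layers.Dense(10, activation="softmax")(merged)   # 10 driver classes

model = tf.keras.Model(inp, out)
model.summary()   # inspect output shapes and parameter counts
```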
4.4. Robustness to Data Anomalies
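This section evaluates the robustness procedure summarized in the Introduction: anomalous readings are injected into the windows at different rates and durations, and a one-class support vector machine is then used to flag them. A minimal sketch is given below; the synthetic data, the flattening of each 40 × 15 window into a feature vector, and the nu/gamma settings are illustrative assumptions, not our exact experimental setup.

```python
# Sketch: flag anomalous windows with a one-class SVM before classification.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 40 * 15))       # flattened clean windows
test = rng.normal(0.0, 1.0, size=(100, 40 * 15))
test[:10] += rng.normal(8.0, 1.0, size=(10, 40 * 15))   # inject 10% anomalies

# Fit on clean data only; predict() returns +1 for normal, -1 for anomaly.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(clean)
flags = ocsvm.predict(test)
print("flagged anomalies:", int(np.sum(flags == -1)))
```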
4.5. Comparison with Compressed Versions of Existing Models
- For each filter in a given layer, compute the sum of the absolute values of its kernel weights, i.e., its L1 norm [51].
- Sort the filters by the L1-norm values calculated in the previous step.
- Prune the m filters with the smallest L1-norm values, and remove the corresponding feature maps from the current and next layers.
- To select the optimal number m of filters to prune, we begin with a small number (10) and evaluate the accuracy on a validation dataset after each pruning step.
- To delete the channels from layers i and i + 1, we use the Keras-surgeon [52] tool, which deletes the channels and returns a new copy of the pruned model (a code sketch of this procedure follows the list).
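The steps above can be sketched as follows. This is an illustrative sketch: `model` and the layer name "conv1" are placeholders, and keras-surgeon's delete_channels is used as documented in [52] to obtain a pruned copy of the network.

```python
# Sketch: L1-norm filter ranking and channel pruning with keras-surgeon.
import numpy as np
from kerassurgeon.operations import delete_channels

layer = model.get_layer("conv1")        # layer i whose filters we prune
kernel = layer.get_weights()[0]         # kernel tensor, last axis = filters
l1 = np.abs(kernel).reshape(-1, kernel.shape[-1]).sum(axis=0)

m = 10                                  # start small, then re-check accuracy
weakest = np.argsort(l1)[:m].tolist()   # indices of the m smallest-L1 filters
pruned_model = delete_channels(model, layer, weakest)
# Re-evaluate pruned_model on the validation set before pruning further.
```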
5. Deploying the Proposed Model with Sparse Learning
5.1. Sparse Learning
5.2. Node Selection Towards Sparse Learning
- Case 1 (existing classes outnumber new classes): As a proof of concept, we begin with seven randomly selected classes (for simplicity, A–G) and train the model from scratch (see Table 5) to obtain an initial network with a high test accuracy of 99.11%. The full validation-accuracy pattern is depicted in Figure 7b. First, we freeze 20% of the nodes in the network pre-trained on the existing classes. We then feed the network the data of all 7 + 3 classes and retrain the remaining 80% of (weak) nodes for fine-tuning. The test accuracy decreases to 97.86%, similar to that obtained by training 10 classes from scratch. However, upon freezing 40% of the nodes and retraining only 60%, the test accuracy becomes significantly higher (see Table 5). Sparse learning of this kind can thus yield higher accuracy than the original procedure of training from scratch. A further supporting observation is the behavior of the validation accuracy: when we retrain 80% of the nodes, the validation accuracy starts in the seventies, whereas when we retrain only 60% of the nodes, the validation accuracy for the first few epochs starts in the nineties (see Figure 7b). A similar pattern is observed for DepthConv-GRU, as shown in Figure 9a,b.
- Case 2 (new classes outnumber existing classes): We repeat similar experiments for the case where there are more new classes than existing ones. In practice, this case can degrade the network performance [54], because the pre-trained model may struggle to adjust to a large amount of new data compared with the small amount previously used for training; it may thus require major weight updates to achieve acceptable accuracy. To resolve this issue, the authors in [54] proposed an effective node-selection formula based on average node activations. This formula was previously validated on LeNet, AlexNet, and VGGNet, which adapted to up to 40% new classes without a significant accuracy loss on existing classes [54]. We exploited this formula for node selection in the fully connected layer of the proposed model, retraining the nodes whose magnitudes ranged from 0 to the value obtained from the formula. Because all node values are activated by ReLU, their magnitudes are greater than or equal to zero; accordingly, we retrained all nodes lying in this range and froze the remaining ones. With four existing classes, we obtained an initial test accuracy of 99.51% upon training only those 4 classes (see Table 5). However, upon freezing nodes according to the formula and feeding the network the data of all 4 + 6 classes, the accuracy dropped to 96.79%, which is lower than the original accuracy (97.86%) of the network trained on 10 classes from scratch. Similarly, freezing 20% of the nodes using the earlier ranking-by-magnitude method yielded a lower accuracy of 96.37%. We achieved 97.86% upon freezing 40% of the nodes and retraining the rest using sparse learning. The corresponding validation-accuracy curves for DepthConv-LSTM are depicted in Figure 8a,b; a similar pattern is observed for DepthConv-GRU, as shown in Figure 9a,b. One way to realize this freeze-and-retrain scheme in code is sketched below.
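The sketch below ranks the fully connected nodes by the magnitude of their incoming weights and zeroes the gradients of the frozen nodes during retraining. It is illustrative only: the layer name "fc", the 40% freezing ratio, and the gradient-masking strategy are implementation assumptions, not our exact code.

```python
# Sketch: freeze the strongest 40% of FC nodes, retrain the weaker 60%.
import numpy as np
import tensorflow as tf

fc = model.get_layer("fc")                    # assumed name of the FC layer
W = fc.get_weights()[0]                       # kernel, shape (in_dim, n_units)
strength = np.abs(W).sum(axis=0)              # L1 magnitude per node
n_freeze = int(0.4 * W.shape[1])
frozen = np.argsort(strength)[-n_freeze:]     # strongest nodes -> freeze
mask_np = np.ones(W.shape[1], dtype=np.float32)
mask_np[frozen] = 0.0                         # 0 = frozen, 1 = retrainable
mask = tf.constant(mask_np)

opt = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    masked = []
    for g, v in zip(grads, model.trainable_variables):
        # Broadcasting the (n_units,) mask zeroes the frozen nodes' updates.
        if v is fc.kernel or v is fc.bias:
            g = g * mask
        masked.append(g)
    opt.apply_gradients(zip(masked, model.trainable_variables))
    return loss

# Retraining on the data of all old + new classes then updates only the
# weaker nodes of the FC layer, preserving what the frozen nodes learned.
```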
6. Discussion and Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Carfora, M.F.; Martinelli, F.; Mercaldo, F.; Nardone, V.; Orlando, A.; Santone, A.; Vaglini, G. A "pay-how-you-drive" car insurance approach through cluster analysis. Soft Comput. 2019, 23, 2863–2875.
2. Troncoso, C.; Danezis, G.; Kosta, E.; Balasch, J.; Preneel, B. Pripayd: Privacy-friendly pay-as-you-drive insurance. IEEE Trans. Dependable Secure Comput. 2010, 8, 742–755.
3. Dai, R.; Lu, Y.; Ding, C.; Lu, G. The effect of connected vehicle environment on global travel efficiency and its optimal penetration rate. J. Adv. Transp. 2017, 2017.
4. Lee, J.; Kao, H.; Yang, S. Service innovation and smart analytics for industry 4.0 and big data environment. Procedia CIRP 2014, 16, 3–8.
5. Kwak, B.I.; Woo, J.; Kim, H.K. Know your master: Driver profiling-based anti-theft method. In Proceedings of the 2016 IEEE 14th Annual Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 12–14 December 2016; pp. 211–218.
6. Kang, Y.G.; Park, K.H.; Kim, H.K. Automobile theft detection by clustering owner driver data. arXiv 2019, arXiv:1909.08929.
7. Zhang, J.; Wu, Z.; Li, F.; Xie, C.; Ren, T.; Chen, J.; Liu, L. A deep learning framework for driving behavior identification on in-vehicle CAN-BUS sensor data. Sensors 2019, 19, 1356.
8. el Mekki, A.; Bouhoute, A.; Berrada, I. Improving driver identification for the next-generation of in-vehicle software systems. IEEE Trans. Veh. Technol. 2019, 68, 7406–7415.
9. Júnior, J.F.; Carvalho, E.; Ferreira, B.V.; de Souza, C.; Suhara, Y.; Pentland, A.; Pessin, G. Driver behavior profiling: An investigation with different smartphone sensors and machine learning. PLoS ONE 2017, 12, e0174959.
10. Fugiglando, U.; Massaro, E.; Santi, P.; Milardo, S.; Abida, K.; Stahlmann, R.; Netter, F.; Ratti, C. Driving behavior analysis through CAN bus data in an uncontrolled environment. IEEE Trans. Intell. Transp. Syst. 2018, 20, 737–748.
11. Castignani, G.; Derrmann, T.; Frank, R.; Engel, T. Driver behavior profiling using smartphones: A low-cost platform for driver monitoring. IEEE Intell. Transp. Syst. Mag. 2015, 7, 91–102.
12. Park, K.H.; Kim, H.K. This car is mine!: Automobile theft countermeasure leveraging driver identification with generative adversarial networks. arXiv 2019, arXiv:1911.09870.
13. Android Auto: Connect Your Phone to Car Display. Available online: https://www.android.com/auto/ (accessed on 7 June 2020).
14. Automotive Grade Linux. 2020. Available online: https://www.automotivelinux.org/ (accessed on 7 June 2020).
15. QNX in Automotive: QNX Software Systems. 2020. Available online: https://blackberry.qnx.com/en/software-solutions/connected-autonomous-vehicles (accessed on 7 June 2020).
16. Kashevnik, A.; Lashkov, I.; Gurtov, A. Methodology and mobile application for driver behavior analysis and accident prevention. IEEE Trans. Intell. Transp. Syst. 2019, 6, 2427–2436.
17. Warren, J.; Lipkowitz, J.; Sokolov, V. Clusters of driving behavior from observational smartphone data. IEEE Intell. Transp. Syst. Mag. 2019, 11, 171–180.
18. Li, M.G.; Jiang, B.; Che, Z.; Shi, X.; Liu, M.; Meng, Y.; Ye, J.; Liu, Y. DBUS: Human driving behavior understanding system. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019.
19. Ramanishka, V.; Chen, Y.; Misu, T.; Saenko, K. Toward driving scene understanding: A dataset for learning driver behavior and causal reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7699–7707.
20. Fridman, L.; Brown, D.E.; Glazer, M.; Angell, W.; Dodd, S.; Jenik, B.; Terwilliger, J.; Kindelsberger, J.; Ding, L.; Seaman, S.; et al. MIT autonomous vehicle technology study: Large-scale deep learning based analysis of driver behavior and interaction with automation. arXiv 2017, arXiv:1711.06976.
21. Wijnands, J.S.; Thompson, J.; Nice, K.A.; Aschwanden, G.D.P.A.; Stevenson, M. Real-time monitoring of driver drowsiness on mobile platforms using 3D neural networks. Neural Comput. Appl. 2019, 32, 9731–9743.
22. Kim, W.; Jung, W.; Choi, H.K. Lightweight driver monitoring system based on multi-task mobilenets. Sensors 2019, 19, 3200.
23. Taamneh, S.; Tsiamyrtzis, P.; Dcosta, M.; Buddharaju, P.; Khatri, A.; Manser, M.; Ferris, T.; Wunderlich, R.; Pavlidis, I. A multimodal dataset for various forms of distracted driving. Sci. Data 2017, 4, 170110.
24. Zhang, X.; Zhao, X.; Rong, J. A study of individual characteristics of driving behavior based on hidden Markov model. Sens. Transducers 2014, 167, 194–202.
25. Miyajima, C.; Nishiwaki, Y.; Ozawa, K.; Wakita, T.; Itou, K.; Takeda, K.; Itakura, F. Driver modeling based on driving behavior and its evaluation in driver identification. Proc. IEEE 2007, 95, 427–437.
26. Van Ly, M.; Martin, S.; Trivedi, M.M. Driver classification and driving style recognition using inertial sensors. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast City, Australia, 23–26 June 2013; pp. 1040–1045.
27. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Harrah's and Harvey's, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105.
28. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2222–2232.
29. Ha, S.; Choi, S. Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In Proceedings of the 2016 IEEE International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 381–388.
30. Cui, Z.; Chen, W.; Chen, Y. Multi-scale convolutional neural networks for time series classification. arXiv 2016, arXiv:1603.06995.
31. Karim, F.; Majumdar, S.; Darabi, H.; Chen, S. LSTM fully convolutional networks for time series classification. IEEE Access 2017, 6, 1662–1669.
32. Liu, T.; Bao, J.; Wang, J.; Zhang, Y. A hybrid CNN–LSTM algorithm for online defect recognition of CO2 welding. Sensors 2018, 18, 4369.
33. Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473.
34. Wang, Z.; Yan, W.; Oates, T. Time series classification from scratch with deep neural networks: A strong baseline. In Proceedings of the 2017 IEEE International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1578–1585.
35. Brookhuis, K.A.; de Waard, D.; Janssen, W.H. Behavioural impacts of advanced driver assistance systems: An overview. Eur. J. Transp. Infrastruct. Res. 2001, 1, 245–253.
36. Curry, E.; Sheth, A. Next-generation smart environments: From system of systems to data ecosystems. IEEE Intell. Syst. 2018, 33, 69–76.
37. Hui, K.; Le, M.; Tao, S. Container and microservice driven design for cloud infrastructure devops. In Proceedings of the 2016 IEEE International Conference on Cloud Engineering (IC2E), Berlin, Germany, 4–8 April 2016; pp. 202–211.
38. Bernstein, D. Containers and cloud: From LXC to Docker to Kubernetes. IEEE Cloud Comput. 2014, 1, 81–84.
39. Mittal, S. A survey on optimized implementation of deep learning models on the NVIDIA Jetson platform. J. Syst. Architect. 2019, 97, 428–442.
40. Kim, C.E.; Oghaz, M.M.D.; Fajtl, J.; Argyriou, V.; Remagnino, P. A comparison of embedded deep learning methods for person detection. arXiv 2018, arXiv:1812.03451.
41. OCS Lab. Driving Dataset. Available online: http://ocslab.hksecurity.net/Datasets/driving-dataset (accessed on 27 August 2020).
42. Information Protection R&D Data Challenge 2019. Available online: http://datachallenge.kr/challenge18/vehicle/tutorial/ (accessed on 25 June 2020).
43. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18.
44. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
45. Rastgoo, M.N. Driver Stress Level Detection Based on Multimodal Measurements. Ph.D. Thesis, Queensland University of Technology, Brisbane, Australia, 2019. Available online: https://eprints.qut.edu.au/134144/ (accessed on 8 July 2020).
46. Dehghani, A.; Sarbishei, O.; Glatard, T.; Shihab, E. A quantitative comparison of overlapping and non-overlapping sliding windows for human activity recognition using inertial sensors. Sensors 2019, 19, 5026.
47. Ullah, S.; Kim, D.H. Benchmarking Jetson platform for 3D point-cloud and hyper-spectral image classification. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020; pp. 477–482.
48. A Driver Identification Framework on Automotive Grade Linux. Available online: https://github.com/vcar/AGL (accessed on 7 July 2020).
49. Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv 2015, arXiv:1510.00149.
50. Scardapane, S.; Comminiello, D.; Hussain, A.; Uncini, A. Group sparse regularization for deep neural networks. Neurocomputing 2017, 241, 81–89.
51. Li, H.; Kadav, A.; Durdanovic, I.; Samet, H.; Graf, H.P. Pruning filters for efficient ConvNets. arXiv 2016, arXiv:1608.08710.
52. Keras-Surgeon, for Network Pruning. Available online: https://github.com/BenWhetton/keras-surgeon (accessed on 5 July 2020).
53. Quattoni, A.; Collins, M.; Darrell, T. Transfer learning for image classification with sparse prototype representations. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
54. Ibrokhimov, B.; Hur, C.; Kang, S. Effective node selection technique towards sparse learning. Appl. Intell. 2020, 50, 3239–3251.
Dataset | # of Selected Features | Features
---|---|---
Security Driving Dataset [5] | 15 | Long_Term_Fuel_Trim_Bank1, Intake_air_pressure, Accelerator_Pedal_value, Fuel_consumption, Torque_of_friction, Maximum_indicated_engine_torque, Engine_torque, Calculated_LOAD_value, Activation_of_Air_compressor, Engine_coolant_temperature, Transmission_oil_temperature, Wheel_velocity_front_left-hand, Wheel_velocity_front_right-hand, Wheel_velocity_rear_left-hand, Torque_converter_speed
Input | Algorithm | Accuracy (%) | FLOPs | Memory | Feature Engineering | Windowing
---|---|---|---|---|---|---
60 × 45 | DeepConvLSTM | 97.72 | 1.624 M | 7.88 MB | Yes | Wx = 60, dx = 6
60 × 45 | DeepConvGRU | 95.19 | 1.623 M | 7.88 MB | Yes | Wx = 60, dx = 6
60 × 45 | DeepConvLSTM-Attention | 97.86 | 1.632 M | 7.91 MB | Yes | Wx = 60, dx = 6
60 × 45 | DeepConvGRU-Attention | 98.36 | 1.631 M | 7.91 MB | Yes | Wx = 60, dx = 6
60 × 15 | FCN-LSTM | 95.10 | 0.56 M | 3.28 MB | No | Wx = 60, dx = 10
60 × 15 | Proposed DepthConv-LSTM | 97.78 | 0.235 M | 1.74 MB | No | Wx = 60, dx = 10
60 × 15 | Proposed DepthConv-GRU | 98.52 | 0.234 M | 1.74 MB | No | Wx = 60, dx = 10
40 × 15 | Proposed DepthConv-LSTM | 97.86 | 0.233 M | 1.69 MB | No | Wx = 40, dx = 6
40 × 15 | Proposed DepthConv-GRU | 98.72 | 0.232 M | 1.69 MB | No | Wx = 40, dx = 6
The last three columns report the inference time (s/sample) measured inside an NVIDIA-Docker container on each Jetson device.

Algorithm | Pruning | FLOPs (M) | Memory (MB) | Accuracy (%) | # of Channels Pruned/Total | Xavier | TX2 | Nano
---|---|---|---|---|---|---|---|---
DeepConvGRU-Attention | 0% | 1.627 | 7.91 | 98.36 | - | ∼505 | ∼1175 | ∼2580
DeepConvGRU-Attention | 8.50% | 1.482 | 7.48 | 98.04 | LSTM1 (07/128), LSTM2 (11/128) | ∼469 | ∼1040 | ∼2270
DeepConvGRU-Attention | 19% | 1.314 | 6.64 | 96.89 | LSTM1 (20/128), LSTM2 (15/128) | ∼452 | ∼997 | ∼2160
FCN-LSTM | 0% | 0.566 | 3.28 | 95.10 | - | ∼284 | ∼365 | ∼450
FCN-LSTM | 5.30% | 0.535 | 3.14 | 94.32 | Conv1 (10/128), Conv2 (10/256) | ∼253 | ∼342 | ∼416
FCN-LSTM | 12.20% | 0.496 | 3.07 | 93.92 | Conv1 (10/128), Conv2 (20/256), Conv3 (10/256) | ∼241 | ∼333 | ∼371
Proposed DC-LSTM | 0% | 0.233 | 1.69 | 97.86 | - | ∼188 | ∼207 | ∼230
Proposed DC-GRU | 0% | 0.232 | 1.69 | 98.72 | - | ∼182 | ∼205 | ∼227
All values are accuracy (%); the corrected accuracies (one-class SVM) apply per anomaly rate, independent of anomaly duration.

Anomaly Rate | Anomaly Duration | Proposed DC-GRU (with Anomalies) | Proposed DC-LSTM (with Anomalies) | FCN-LSTM (with Anomalies) | Proposed DC-GRU (OC-SVM Corrected) | Proposed DC-LSTM (OC-SVM Corrected) | FCN-LSTM (OC-SVM Corrected)
---|---|---|---|---|---|---|---
0% | 1 s | 98.72 | 97.86 | 95.10 | 98.72 | 97.86 | 95.10
0% | 10 s | 98.72 | 97.86 | 95.10 | 98.72 | 97.86 | 95.10
1% | 1 s | 98.08 | 97.22 | 93.25 | 97.32 | 96.62 | 93.89
1% | 10 s | 98.08 | 97.22 | 92.60 | 97.32 | 96.62 | 93.89
10% | 1 s | 92.74 | 92.52 | 85.85 | 97.12 | 96.22 | 93.57
10% | 10 s | 93.16 | 92.09 | 84.89 | 97.12 | 96.22 | 93.57
30% | 1 s | 82.69 | 81.62 | 70.42 | 96.32 | 95.52 | 92.93
30% | 10 s | 81.20 | 81.62 | 69.77 | 96.32 | 95.52 | 92.93
50% | 1 s | 72.86 | 73.50 | 57.23 | 95.14 | 94.51 | 91.64
50% | 10 s | 75.21 | 75.43 | 57.88 | 95.14 | 94.51 | 91.64
# of Classes | Sparse Learning (FC Layer Only) | Proposed DepthConv-LSTM Accuracy (%) | Proposed DepthConv-GRU Accuracy (%) | Node Selection Method
---|---|---|---|---
10 | Initial network | 97.86 | 98.72 | -
10 | 80% re-train, 20% freeze | 98.72 | 98.72 | Ranking by node magnitude
10 | 60% re-train, 40% freeze | 98.08 | 98.50 | Ranking by node magnitude
7 | Initial network | 99.11 | 99.40 | -
7 + 3 | 80% re-train, 20% freeze | 97.86 | 99.15 | Ranking by node magnitude
7 + 3 | 60% re-train, 40% freeze | 98.29 | 99.15 | Ranking by node magnitude
4 | Initial network | 99.51 | 99.51 | -
4 + 6 | 80% re-train, 20% freeze | 96.37 | 98.08 | Ranking by node magnitude
4 + 6 | 60% re-train, 40% freeze | 97.86 | 97.44 | Ranking by node magnitude
4 + 6 | Re-train nodes with 0 ≤ Av < Eq_value | 96.79 | 98.08 | Avg. activation method [54]