Abstract
In this research, we present a novel deep multi-task learning model that handles the perception stage of an autonomous driving system. The model fuses RGB and dynamic vision sensor (DVS) images to perform semantic segmentation and depth estimation from four different viewpoints simultaneously. For the experiments, the CARLA simulator is used to generate thousands of samples for the training, validation, and testing processes. A dynamically changing environment with varying weather conditions, times of day, maps, and non-player characters (NPCs) is also considered to simulate more realistic conditions, in the expectation of better model generalization. An ablation study is conducted by modifying the network architecture to evaluate the influence of the sensor fusion technique. Based on test results on two different datasets, the model that shares feature maps between the RGB and DVS encoders performs better. Furthermore, we show that our model infers faster than, and performs comparably to, another recent model. The official implementation code is shared at https://github.com/oskarnatan/RGBDVS-fusion.
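The abstract does not specify the network in detail, but the core idea it describes (two modality-specific encoders whose feature maps are shared before task-specific decoders) can be illustrated with a minimal sketch. The snippet below is an illustrative PyTorch sketch only, not the authors' implementation: the class name FusionMultiTaskNet, the channel widths, the element-wise-sum fusion, and the class count are all assumptions; see the linked repository for the official code.

```python
# Minimal sketch of the RGB-DVS feature-map-sharing idea from the abstract.
# All architecture choices below (stages, channels, sum fusion, 23 classes)
# are illustrative assumptions, NOT the paper's actual configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions followed by 2x downsampling: a common encoder stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class FusionMultiTaskNet(nn.Module):
    """Hypothetical two-encoder, two-decoder multi-task network.

    One encoder consumes RGB frames, the other DVS events rendered as
    images; their feature maps are summed stage by stage, and two
    task-specific heads predict semantic segmentation and depth.
    """

    def __init__(self, n_classes=23):  # class count is an assumption
        super().__init__()
        chs = [3, 32, 64, 128]
        self.rgb_enc = nn.ModuleList(conv_block(i, o) for i, o in zip(chs, chs[1:]))
        self.dvs_enc = nn.ModuleList(conv_block(i, o) for i, o in zip(chs, chs[1:]))

        def decoder(out_ch):
            # Simple upsampling head standing in for a proper decoder.
            return nn.Sequential(
                nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
                nn.Conv2d(chs[-1], 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, out_ch, 1),
            )

        self.seg_head = decoder(n_classes)  # per-pixel class logits
        self.depth_head = decoder(1)        # per-pixel depth estimate

    def forward(self, rgb, dvs):
        f_rgb, f_dvs = rgb, dvs
        for enc_r, enc_d in zip(self.rgb_enc, self.dvs_enc):
            f_rgb = enc_r(f_rgb)
            f_dvs = enc_d(f_dvs)
            f_rgb = f_rgb + f_dvs  # share DVS feature maps with the RGB stream
        return self.seg_head(f_rgb), self.depth_head(f_rgb)


# One forward pass per camera view; the paper handles four views.
net = FusionMultiTaskNet()
rgb = torch.randn(1, 3, 128, 256)  # RGB frame
dvs = torch.randn(1, 3, 128, 256)  # DVS events rendered as a 3-channel image
seg, depth = net(rgb, dvs)
print(seg.shape, depth.shape)      # (1, 23, 128, 256) (1, 1, 128, 256)
```

In this sketch the fusion is an element-wise sum at every encoder stage; the ablation the abstract mentions could then be reproduced by simply dropping the `f_rgb + f_dvs` line to cut off the shared feature maps.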
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Natan, O., Miura, J. (2022). Semantic Segmentation and Depth Estimation with RGB and DVS Sensor Fusion for Multi-view Driving Perception. In: Wallraven, C., Liu, Q., Nagahara, H. (eds) Pattern Recognition. ACPR 2021. Lecture Notes in Computer Science, vol 13188. Springer, Cham. https://doi.org/10.1007/978-3-031-02375-0_26
Print ISBN: 978-3-031-02374-3
Online ISBN: 978-3-031-02375-0