DOI: 10.1109/ITSC55140.2022.9922598
Research article

Accurate Cooperative Sensor Fusion by Parameterized Covariance Generation for Sensing and Localization Pipelines in CAVs

Published: 08 October 2022

Abstract

A major challenge in cooperative sensing is weighting the measurements taken from the various sources so as to obtain an accurate result. Ideally, the weights should be inversely proportional to the error in the sensing information. However, previous cooperative sensor fusion approaches for autonomous vehicles use a fixed error model, in which the covariance of a sensor and its recognizer pipeline is simply the mean of the measured covariance across all sensing scenarios. The approach proposed in this paper instead estimates error using key predictor terms that correlate strongly with sensing and localization accuracy, yielding an accurate covariance estimate for each sensor observation. We adopt a tiered fusion model consisting of local and global sensor fusion steps. At the local fusion level, we add a covariance generation stage that uses the error model for each sensor together with the measured distance to generate the expected covariance matrix for each observation. At the global fusion level, we add a stage that generates the localization covariance matrix from the key predictor term velocity and combines it with the covariance produced by local fusion for accurate cooperative sensing. To showcase our method, we built a set of 1/10-scale model autonomous vehicles with scale-accurate sensing capabilities and characterized their error against a motion-capture system. Results show average and maximum improvements in RMSE of 1.42x and 1.78x, respectively, when detecting vehicle positions in a four-vehicle cooperative fusion scenario with our error model versus a typical fixed error model.
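The core idea in the abstract can be illustrated with a minimal sketch: each observation's covariance is generated from a parameterized error model (here assumed linear in measured distance, with illustrative coefficients; the paper's actual predictor terms and model forms are fit against motion-capture ground truth), and observations are then fused with weights inversely proportional to their covariances. All function names and coefficients below are hypothetical, not taken from the paper.

```python
import numpy as np

def sensing_covariance(distance, a=0.05, b=0.01):
    """Expected 2x2 position covariance for one sensor observation.
    a, b are illustrative error-model coefficients (assumed, not from
    the paper) that would be fit offline against ground truth."""
    sigma2 = (a + b * distance) ** 2
    return sigma2 * np.eye(2)

def localization_covariance(velocity, c=0.02, d=0.005):
    """Ego-localization covariance generated from the predictor term
    velocity, again with assumed coefficients."""
    sigma2 = (c + d * velocity) ** 2
    return sigma2 * np.eye(2)

def fuse(observations):
    """Information-form (inverse-covariance-weighted) fusion of
    (position, covariance) pairs:
        x = (sum_i P_i^-1)^-1 * sum_i (P_i^-1 x_i)
    so noisier measurements contribute less to the fused estimate."""
    info = np.zeros((2, 2))
    info_vec = np.zeros(2)
    for pos, cov in observations:
        w = np.linalg.inv(cov)
        info += w
        info_vec += w @ pos
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_vec, fused_cov

# Two CAVs observe the same target. The nearer, slower vehicle's
# observation receives more weight because its generated covariance
# (sensing + localization) is smaller.
obs = [
    (np.array([10.0, 5.0]),
     sensing_covariance(distance=8.0) + localization_covariance(velocity=2.0)),
    (np.array([10.4, 5.2]),
     sensing_covariance(distance=40.0) + localization_covariance(velocity=10.0)),
]
pos, cov = fuse(obs)
```

The fused position lands much closer to the near vehicle's observation than a fixed-error (equal-weight) average would, which is the behavior the parameterized model is meant to produce.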


Cited By

• (2023) B-AWARE: Blockage Aware RSU Scheduling for 5G Enabled Autonomous Vehicles. ACM Transactions on Embedded Computing Systems, vol. 22, no. 5s, pp. 1–23. DOI: 10.1145/3609133. Online publication date: 31 Oct 2023.


Published In

2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
Oct 2022, 4379 pages
Publisher: IEEE Press

