Federated Self-Supervised Learning of Monocular Depth Estimators for Autonomous Vehicles

EFS Soares, CAV Campos - arXiv preprint arXiv:2310.04837, 2023 - arxiv.org
Image-based depth estimation has gained significant attention in recent computer vision research for autonomous vehicles in intelligent transportation systems. This focus stems from its cost-effectiveness and wide range of potential applications. Unlike binocular depth estimation methods, which require two fixed cameras, monocular depth estimation methods rely on a single camera, making them highly versatile. While state-of-the-art approaches for this task leverage self-supervised learning of deep neural networks in conjunction with tasks such as pose estimation and semantic segmentation, none of them has explored combining federated learning with self-supervision to train models on the unlabeled, private data captured by autonomous vehicles. Federated learning offers notable benefits, including enhanced privacy protection, reduced network consumption, and improved resilience to connectivity issues. To address this gap, we propose FedSCDepth, a novel method that combines federated learning and deep self-supervision to learn monocular depth estimators with effectiveness comparable to, and efficiency superior to, current state-of-the-art methods. Evaluation experiments on the Eigen split of the KITTI dataset show that the proposed method achieves near state-of-the-art performance, with a test loss below 0.13, while requiring, on average, only 1.5k training steps and at most 0.415 GB of weight data transfer per autonomous vehicle in each round.
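The self-supervision the abstract refers to typically trains a depth network (together with a pose network) by synthesizing the target frame from adjacent video frames and penalizing the photometric reconstruction error, so no depth labels are needed. As a hedged illustration only, the sketch below shows the SSIM + L1 photometric objective common in this line of work (e.g., Monodepth2 and SC-Depth); the function names, the 3x3 average-pooling SSIM approximation, and the weight alpha = 0.85 are conventional assumptions, not necessarily FedSCDepth's exact formulation.

```python
import torch
import torch.nn.functional as F


def ssim_loss(x: torch.Tensor, y: torch.Tensor,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Per-pixel (1 - SSIM) / 2 using 3x3 average pooling, a common
    approximation in self-supervised depth pipelines."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return ((1 - ssim) / 2).clamp(0, 1)


def photometric_loss(pred: torch.Tensor, target: torch.Tensor,
                     alpha: float = 0.85) -> torch.Tensor:
    """SSIM + L1 reconstruction error between the view synthesized from
    the predicted depth and pose and the real target frame."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    ssim = ssim_loss(pred, target).mean(1, keepdim=True)
    return (alpha * ssim + (1 - alpha) * l1).mean()
```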
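On the federated side, each vehicle would optimize such an objective locally on its own camera stream and periodically exchange only model weights with a server, which is what makes the abstract's per-round, per-vehicle transfer figure meaningful. Below is a minimal FedAvg-style aggregation sketch under that assumption; the fedavg helper and the weighting by local dataset size follow the standard McMahan et al. recipe and are not claimed to be the paper's exact protocol.

```python
import copy
from typing import Dict, List

import torch


def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Average client state_dicts, weighting each vehicle by its local
    number of training samples (standard FedAvg aggregation)."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes))
    return global_state


# One hypothetical communication round: each vehicle fine-tunes the
# shared depth model on its unlabeled local video, then only the weights
# (never the raw images) are sent back and averaged:
#
# for _ in range(num_rounds):
#     states = [local_train(copy.deepcopy(global_model), data_v)
#               for data_v in vehicle_datasets]
#     global_model.load_state_dict(fedavg(states, sizes))
```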