DOI: 10.1145/3566097.3567874
Research article | Open access

Semantic Guided Fine-Grained Point Cloud Quantization Framework for 3D Object Detection

Published: 31 January 2023

Abstract

Unlike grid-based RGB images, the irregular and sparse 3D point cloud poses greater challenges for network compression, i.e., pruning and quantization. Traditional quantization ignores the unbalanced semantic distribution in a 3D point cloud. In this work, we propose a semantic-guided adaptive quantization framework for 3D point clouds. Unlike traditional quantization methods that adopt a static, uniform quantization scheme, our framework adaptively locates the semantic-rich foreground points in the feature maps and allocates a higher bitwidth to these "important" points. Since foreground points make up only a small proportion of the sparse 3D point cloud, such adaptive quantization achieves higher accuracy than uniform compression at a similar compression rate. Furthermore, we adopt a block-wise fine-grained compression scheme to accommodate the larger dynamic range of point cloud features. Moreover, we propose a software-hardware co-evaluation flow for 3D point clouds to assess the effectiveness of the proposed adaptive quantization on actual hardware. On the nuScenes dataset, we achieve a 12.52% precision improvement under average 2-bit quantization. Compared with 8-bit quantization, co-evaluation results show a 3.11× gain in energy efficiency.
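The core mechanism the abstract describes can be sketched in a few lines: quantize foreground points at a higher bitwidth than background points, and compute quantization scales per block rather than per tensor to track local dynamic range. This is an illustrative reconstruction, not the authors' implementation; the function names, the 4-bit/2-bit split, the block size, and the toy foreground mask are all assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    peak = np.max(np.abs(x))
    scale = peak / qmax if peak > 0 else 1.0
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def blockwise_quantize(x, bits, block_size=2):
    """Quantize each block with its own scale to fit the local dynamic range."""
    out = np.empty_like(x)
    for start in range(0, len(x), block_size):
        out[start:start + block_size] = quantize(x[start:start + block_size], bits)
    return out

def semantic_adaptive_quantize(features, fg_mask, fg_bits=4, bg_bits=2):
    """Give semantic-rich foreground points a higher bitwidth than background."""
    out = np.empty_like(features)
    out[fg_mask] = quantize(features[fg_mask], fg_bits)
    out[~fg_mask] = quantize(features[~fg_mask], bg_bits)
    return out

# Toy example: 6 point features, 2 of them flagged as foreground.
feats = np.array([0.9, -0.3, 0.05, 0.7, -0.8, 0.1])
mask = np.array([True, False, False, True, False, False])
q = semantic_adaptive_quantize(feats, mask)
# Average bitwidth stays low because foreground points are a minority.
avg_bits = mask.mean() * 4 + (1 - mask.mean()) * 2
```

Because foreground points are rare in a sparse scene, the average bitwidth here lands near 2.7 bits even though the "important" points keep 4-bit precision, which is the accuracy-per-bit trade the abstract claims.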


    Published In

    ASPDAC '23: Proceedings of the 28th Asia and South Pacific Design Automation Conference
    January 2023
    807 pages
    ISBN:9781450397834
    DOI:10.1145/3566097
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    In-Cooperation

    • IPSJ
    • IEEE CAS
    • IEEE CEDA
    • IEICE

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. 3D object detection
    2. adaptive quantization
    3. block-wise
    4. point cloud
    5. semantic-guided



    Acceptance Rates

ASPDAC '23 paper acceptance rate: 102 of 328 submissions (31%)
Overall acceptance rate: 466 of 1,454 submissions (32%)

    Article Metrics

• Total Citations: 0
• Total Downloads: 344
• Downloads (last 12 months): 163
• Downloads (last 6 weeks): 20
Reflects downloads up to 25 Feb 2025
