Adaptive offset self-attention network for 3D point cloud
Gaihua Wang, Nengyuan Wang, Qi Li, Hong Liu, Qianyu Zhai
Author Affiliations +
Abstract

The semantic segmentation of point clouds has achieved significant progress in recent years. However, most current methods rely on complex stacking of modules to learn local features, resulting in a complicated network structure. We propose an adaptive offset self-attention network (AOSANet), a simple and efficient point cloud learning framework for classification and segmentation tasks. AOSANet is permutation-invariant when processing three-dimensional point cloud data and can dynamically learn local features based on the input features. To better fuse the local features, we introduce a maximum feature extraction module, which performs maximum feature weighting prior to local feature fusion. This allows the network to concentrate on the most important features and improves overall robustness. Through extensive experiments on publicly available datasets, the proposed method achieves state-of-the-art performance on point cloud classification and segmentation tasks.
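The abstract names two ingredients: an offset-style self-attention that is permutation-equivariant over points, and a channel-wise maximum operation that yields a permutation-invariant global feature. The sketch below is a minimal, illustrative NumPy reconstruction of those two ideas only; the function names, the single-head formulation, and the shared projection matrices `wq`, `wk`, `wv` are assumptions for illustration, not the authors' actual AOSANet architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_self_attention(x, wq, wk, wv):
    """Offset-style self-attention over a point set (hypothetical sketch).

    x: (N, d) per-point features; wq, wk, wv: (d, d) shared projections.
    The offset (x - attention output) is fed back with a residual add,
    so the operator is permutation-equivariant over the N points.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)  # (N, N)
    sa = attn @ v                                           # attended features
    return x + (x - sa)                                     # offset + residual

def max_feature(x):
    # Channel-wise max over points: a permutation-invariant global feature,
    # the simplest form of max-feature aggregation before fusion.
    return x.max(axis=0)
```

Because permuting the input points only permutes the attention output row-wise, the channel-wise max is identical for any ordering of the same point set, which is the permutation-invariance property the abstract claims.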

© 2023 SPIE and IS&T
Gaihua Wang, Nengyuan Wang, Qi Li, Hong Liu, and Qianyu Zhai "Adaptive offset self-attention network for 3D point cloud," Journal of Electronic Imaging 32(2), 023045 (27 April 2023). https://doi.org/10.1117/1.JEI.32.2.023045
Received: 8 November 2022; Accepted: 10 April 2023; Published: 27 April 2023
KEYWORDS: Point clouds, Feature extraction, 3D modeling, Data modeling, Feature fusion, Education and training, Image segmentation