The semantic segmentation of point clouds has achieved significant progress in recent years. However, most current methods rely on complex stacking of modules to learn local features, resulting in complicated network structures. We propose an adaptive offset self-attention network (AOSANet), a simple and efficient point cloud learning framework for classification and segmentation tasks. AOSANet is permutation-invariant when processing three-dimensional point cloud data and can dynamically learn local features based on the input features. To better fuse the local features, we introduce a maximum feature extraction module, which performs maximum feature weighting prior to local feature fusion. This allows the network to concentrate on the most important features and improves overall robustness. Through extensive experiments on publicly available datasets, the proposed method achieves state-of-the-art performance on point cloud classification and segmentation tasks.
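The core ideas above (offset self-attention plus permutation invariance via symmetric pooling) can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the authors' implementation: the weight matrices `wq`, `wk`, `wv` stand in for learned projections, and the offset formulation follows the common pattern of subtracting the attended features from the input before the residual connection.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_self_attention(x, wq, wk, wv):
    """Offset self-attention sketch for a point feature matrix x (N points x C channels).

    Hypothetical learned projections wq, wk, wv produce queries, keys, and
    values; the *offset* between the input and the attended features is then
    added back as a residual (a common offset-attention pattern, assumed here).
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    attended = attn @ v
    offset = x - attended   # offset between input and attention output
    return x + offset       # residual connection on the offset

# Per-point operations are permutation-equivariant; a symmetric max pooling
# over points then yields a permutation-invariant global feature.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
wq, wk, wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = offset_self_attention(x, wq, wk, wv)
perm = rng.permutation(8)
out_perm = offset_self_attention(x[perm], wq, wk, wv)
assert np.allclose(out[perm], out_perm)                   # equivariance
assert np.allclose(out.max(axis=0), out_perm.max(axis=0))  # invariant pooling
```

The channel-wise `max` at the end is also the spirit of max-feature weighting: the network keeps the strongest responses per channel regardless of point ordering.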
Keywords: Point clouds, Feature extraction, 3D modeling, Data modeling, Feature fusion, Education and training, Image segmentation