Lightweight salient object detection in optical remote-sensing images via semantic matching and edge alignment

G Li, Z Liu, X Zhang, W Lin - IEEE Transactions on Geoscience and Remote Sensing, 2023 - ieeexplore.ieee.org
Recently, many methods for salient object detection in optical remote-sensing images (ORSI-SOD) have been proposed based on convolutional neural networks (CNNs). However, most of these methods ignore the parameter count and computational cost that CNNs incur, and only a few pay attention to portability and mobility. To facilitate practical applications, in this article we propose a novel lightweight network for ORSI-SOD based on semantic matching and edge alignment, termed SeaNet. Specifically, SeaNet comprises a lightweight MobileNet-V2 for feature extraction, a dynamic semantic matching module (DSMM) for high-level features, an edge self-alignment module (ESAM) for low-level features, and a portable decoder for inference. First, the high-level features are compressed into semantic kernels. Then, these semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations in the DSMM. Meanwhile, in the ESAM, cross-scale edge information extracted from two groups of low-level features is self-aligned through a loss and used for detail enhancement. Finally, starting from the highest-level features, the decoder infers salient objects based on the accurate locations and fine details contained in the outputs of the two modules. Extensive experiments on two public datasets demonstrate that our lightweight SeaNet not only outperforms most state-of-the-art lightweight methods but also yields accuracy comparable to state-of-the-art conventional methods, while having only 2.76 M parameters and requiring only 1.7 G floating-point operations (FLOPs) per input. Our code and results are available at https://github.com/MathLee/SeaNet.
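The core idea behind the DSMM, as described in the abstract, is to compress high-level features into semantic kernels and then apply those kernels back to the feature maps as a dynamic convolution, so that locations whose features match the global semantics respond strongly. The following is a minimal, framework-free sketch of that mechanism; the global-average-pooling compression, the 1x1 kernel size, and the toy shapes are illustrative assumptions, not the exact SeaNet implementation (see the linked repository for the real code).

```python
def global_average_pool(feat):
    """Compress a C x H x W feature map into a C-dim 'semantic kernel'
    by averaging each channel over all spatial positions
    (one plausible compression choice; SeaNet's may differ)."""
    c = len(feat)
    h, w = len(feat[0]), len(feat[0][0])
    return [sum(feat[ch][i][j] for i in range(h) for j in range(w)) / (h * w)
            for ch in range(c)]

def dynamic_1x1_conv(feat, kernel):
    """Apply the semantic kernel as a dynamic 1x1 convolution:
    the response at each position is the dot product of the kernel with
    the local feature vector, yielding an activation map where
    semantically matching positions score high."""
    c = len(feat)
    h, w = len(feat[0]), len(feat[0][0])
    return [[sum(kernel[ch] * feat[ch][i][j] for ch in range(c))
             for j in range(w)] for i in range(h)]

# Toy 2-channel, 2x2 feature map.
feat = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
kernel = global_average_pool(feat)   # -> [0.5, 1.0]
act = dynamic_1x1_conv(feat, kernel)
print(act)                           # -> [[0.5, 2.0], [2.0, 0.5]]
```

Because the kernel is derived from the input itself rather than learned as a fixed weight, the same module can adapt its "template" to each image, which is what distinguishes dynamic convolution from an ordinary convolution layer.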