Search Results (29)

Search Parameters:
Keywords = camouflaged object detection

14 pages, 3188 KiB  
Article
Camouflaged Object Detection That Does Not Require Additional Priors
by Yuchen Dong, Heng Zhou, Chengyang Li, Junjie Xie, Yongqiang Xie and Zhongbo Li
Appl. Sci. 2024, 14(6), 2621; https://doi.org/10.3390/app14062621 - 21 Mar 2024
Viewed by 1068
Abstract
Camouflaged object detection (COD) is an arduous challenge due to the striking resemblance of camouflaged objects to their surroundings. The abundance of similar background information can significantly impede the efficiency of camouflaged object detection algorithms. Prior research in this domain has often relied on supplementary prior knowledge to guide model training. However, acquiring such prior knowledge is resource-intensive. Furthermore, the additionally provided prior information is typically already embedded in the original image, yet it remains underutilized. To address these issues, in this paper, we introduce a novel Camouflage Cues Guidance Network (CCGNet) for camouflaged object detection that does not rely on additional prior knowledge. Specifically, we use an adaptive approach to track the learning state of the model with respect to the camouflaged object and dynamically extract the cues of the camouflaged object from the original image. In addition, we introduce a foreground separation module and an edge refinement module to effectively utilize these camouflage cues, assisting the model in fully separating camouflaged objects and enabling precise edge prediction. Extensive experimental results demonstrate that our proposed method can achieve superior performance compared with state-of-the-art approaches. Full article
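For illustration, the edge refinement idea mentioned in this abstract can be sketched as a small PyTorch block that sharpens a coarse prediction with high-resolution backbone features. The layer names and channel sizes below are assumptions for illustration, not the authors' CCGNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeRefine(nn.Module):
    """Illustrative edge refinement block: sharpens a coarse COD prediction
    using low-level (high-resolution) backbone features. Not the paper's code."""
    def __init__(self, low_ch: int, mid_ch: int = 64):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch, mid_ch, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(mid_ch + 1, mid_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse_pred: torch.Tensor, low_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse prediction to the resolution of the low-level features.
        coarse_up = F.interpolate(coarse_pred, size=low_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
        feat = self.reduce(low_feat)
        # Concatenate the prediction with detail features and predict a residual correction.
        residual = self.fuse(torch.cat([feat, coarse_up], dim=1))
        return coarse_up + residual  # refined logits

# Usage sketch: refined = EdgeRefine(low_ch=256)(coarse_logits, shallow_backbone_features)
```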

15 pages, 563 KiB  
Article
Camouflaged Object Detection Based on Deep Learning with Attention-Guided Edge Detection and Multi-Scale Context Fusion
by Yalin Wen, Wei Ke and Hao Sheng
Appl. Sci. 2024, 14(6), 2494; https://doi.org/10.3390/app14062494 - 15 Mar 2024
Viewed by 1234
Abstract
In nature, objects that use camouflage have features like colors and textures that closely resemble their background. This creates visual illusions that help them hide and protect themselves from predators. This similarity also makes the task of detecting camouflaged objects very challenging. Methods for camouflaged object detection (COD), which rely on deep neural networks, are increasingly gaining attention. These methods focus on improving model performance and computational efficiency by extracting edge information and using multi-layer feature fusion. Our improvement is based on researching ways to enhance efficiency in the encode–decode process. We have developed a variant model that combines Swin Transformer (Swin-T) and EfficientNet-B7. This model integrates the strengths of both Swin-T and EfficientNet-B7, and it employs an attention-guided tracking module to efficiently extract edge information and identify objects in camouflaged environments. Additionally, we have incorporated dense skip links to enhance the aggregation of deep-level feature information. A boundary-aware attention module has been incorporated into the final layer of the initial shallow information recognition phase. This module utilizes the Fourier transform to quickly relay specific edge information from the initially obtained shallow semantics to subsequent stages, thereby more effectively achieving feature recognition and edge extraction. In the later stage, which focuses on deep semantic extraction, we employ a dense skip joint attention module to improve the decoder’s performance and efficiency in capturing precise deep-level information, feature details, and edges; this module efficiently identifies the specifics and edge information of undetected camouflaged objects across channels and spaces. Differing from previous methods, we introduce an adaptive pixel strength loss function for handling key captured information. Our proposed method shows strong competitive performance on three current benchmark datasets (CHAMELEON, CAMO, COD10K); compared with 26 previously proposed methods on 4 measurement metrics, it exhibits favorable competitiveness. Full article
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
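The boundary-aware attention module above is said to use the Fourier transform to relay edge information from shallow features. As a rough illustration of that idea (not the authors' implementation), high-frequency content can be isolated with a high-pass mask in the Fourier domain; the cutoff radius here is an arbitrary assumption.

```python
import torch

def high_frequency_cues(feat: torch.Tensor, cutoff: float = 0.1) -> torch.Tensor:
    """Keep only high spatial frequencies of a feature map (B, C, H, W).
    Edges and fine texture live in the high-frequency band, so this acts as a
    crude boundary cue extractor. `cutoff` is a fraction of the spectrum radius."""
    B, C, H, W = feat.shape
    spec = torch.fft.fft2(feat, norm="ortho")
    spec = torch.fft.fftshift(spec, dim=(-2, -1))  # move DC component to the centre
    # Build a circular low-frequency mask and suppress it.
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    high_pass = (dist > cutoff * min(H, W)).to(feat.dtype).to(feat.device)
    spec = spec * high_pass
    spec = torch.fft.ifftshift(spec, dim=(-2, -1))
    return torch.fft.ifft2(spec, norm="ortho").real

# Usage sketch: edge_map = high_frequency_cues(shallow_features).abs().mean(dim=1, keepdim=True)
```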

18 pages, 4194 KiB  
Article
Robust Localization-Guided Dual-Branch Network for Camouflaged Object Segmentation
by Chuanjiang Wang, Yuepeng Li, Guohui Wei, Xiankai Hou and Xiujuan Sun
Electronics 2024, 13(5), 821; https://doi.org/10.3390/electronics13050821 - 20 Feb 2024
Cited by 1 | Viewed by 832
Abstract
The existence of camouflage targets is widespread in the natural world, as they blend seamlessly or closely resemble their surrounding environment, making it difficult for the human eye to identify them accurately. In camouflage target segmentation, challenges often arise from the high similarity between the foreground and background, resulting in segmentation errors, imprecise edge detection, and overlooking of small targets. To address these issues, this paper presents a robust localization-guided dual-branch network for the recognition of camouflaged targets. Two crucial branches, i.e., a localization branch and an overall refinement branch, are designed and incorporated. The localization branch achieves accurate preliminary localization of camouflaged targets by incorporating the robust localization module, which integrates different high-level feature maps in a partially decoded manner. The overall refinement branch optimizes segmentation accuracy based on the output predictions of the localization branch. Within this branch, the edge refinement module is devised to effectively reduce false negative and false positive interference. By conducting context exploration on each feature layer from top to bottom, this module further enhances the precision of target edge segmentation. Additionally, our network employs five jointly trained output prediction maps and introduces attention-guided heads for diverse prediction maps in the overall refinement branch. This design adjusts the spatial positions and channel weights of different prediction maps, generating output prediction maps based on the emphasis of each output, thereby further strengthening the perception and feature representation capabilities of the model. To improve its ability to generate highly confident and accurate prediction candidate regions, tailored loss functions are designed to cater to the objectives of different prediction maps. We conducted experiments on three publicly available datasets for camouflaged object detection to assess our methodology and compared it with state-of-the-art network models. On the largest dataset, COD10K, our method achieved a Structure-measure of 0.827 and demonstrated superior performance in other evaluation metrics, outperforming recent network models. Full article
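The guidance pattern described above (a coarse localization map steering a refinement branch) can be sketched in a few lines of PyTorch. The module and channel names below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalizationGuidance(nn.Module):
    """Toy version of localization-guided refinement: a coarse map predicted from
    high-level features becomes a spatial prior that re-weights the refinement
    branch's features before the final prediction."""
    def __init__(self, high_ch: int, refine_ch: int):
        super().__init__()
        self.locate = nn.Conv2d(high_ch, 1, kernel_size=3, padding=1)    # coarse location logits
        self.refine = nn.Conv2d(refine_ch, 1, kernel_size=3, padding=1)  # refined prediction

    def forward(self, high_feat: torch.Tensor, refine_feat: torch.Tensor):
        loc = self.locate(high_feat)                                      # (B, 1, h, w)
        prior = torch.sigmoid(F.interpolate(loc, size=refine_feat.shape[-2:],
                                            mode="bilinear", align_corners=False))
        guided = refine_feat * prior + refine_feat                        # emphasise the located region
        return loc, self.refine(guided)                                   # both maps can be supervised
```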

16 pages, 4024 KiB  
Article
Features Split and Aggregation Network for Camouflaged Object Detection
by Zejin Zhang, Tao Wang, Jian Wang and Yao Sun
J. Imaging 2024, 10(1), 24; https://doi.org/10.3390/jimaging10010024 - 18 Jan 2024
Viewed by 1722
Abstract
Camouflaged objects are not distinct enough from their surroundings, making it easy to overlook the difference between background and foreground; this imposes higher demands on detection systems. In this paper, we present a new framework for Camouflaged Object Detection (COD) named FSANet, which consists mainly of three operations: spatial detail mining (SDM), cross-scale feature combination (CFC), and a hierarchical feature aggregation decoder (HFAD). The framework simulates the three-stage detection process of the human visual mechanism when observing a camouflaged scene. Specifically, we extract five feature layers using the backbone and divide them into two parts with the second layer as the boundary. The SDM module simulates a human’s cursory inspection of the camouflaged objects to gather spatial details (such as edges and texture) and fuses the features to create a cursory impression. The CFC module observes high-level features from various viewing angles and extracts their common features by thoroughly filtering features of various levels. We also design side-join multiplication in the CFC module to avoid detail distortion and use feature element-wise multiplication to filter out noise. Finally, we construct an HFAD module to deeply mine effective features from these two stages, direct the fusion of low-level features using high-level semantic knowledge, and improve the camouflage map using hierarchical cascade technology. Compared to nineteen deep-learning-based methods in terms of seven widely used metrics, our proposed framework has clear advantages on four public COD datasets, demonstrating the effectiveness and superiority of our model. Full article
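The abstract's element-wise multiplication used to filter noise across scales is a common pattern in COD decoders. A minimal sketch of that filtering step, under assumed names and matching channel counts (not FSANet's actual CFC module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleFilter(nn.Module):
    """Illustrative cross-scale combination: features from an adjacent deeper level
    are upsampled and multiplied element-wise with the current level, so responses
    that both levels agree on are kept and uncorrelated noise is attenuated."""
    def __init__(self, ch: int):
        super().__init__()
        self.smooth = nn.Conv2d(ch, ch, kernel_size=3, padding=1)

    def forward(self, cur: torch.Tensor, deeper: torch.Tensor) -> torch.Tensor:
        deeper_up = F.interpolate(deeper, size=cur.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = cur * deeper_up          # agreement survives, noise is suppressed
        return self.smooth(fused + cur)  # residual path keeps original detail
```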

10 pages, 5260 KiB  
Proceeding Paper
A Linear Differentiation Scheme for Camouflaged Target Detection using Convolution Neural Networks
by Jagadesh Sambbantham, Gomathy Balasubramanian, Rajarathnam and Mohit Tiwari
Eng. Proc. 2023, 59(1), 45; https://doi.org/10.3390/engproc2023059045 - 13 Dec 2023
Viewed by 676
Abstract
Camouflaged objects are masked within an existing image or video under similar patterns. This makes it tedious to detect target objects after classification. The pattern distributions are monotonous due to similar pixels and non-contrast regions. In this paper, a distribution-differentiated target detection scheme (DDTDS) is proposed for segregating and identifying camouflaged objects. First, the image is segmented using textural pixel patterns, over which linear differentiation is performed. Convolutional neural learning is used for training the regions across pixel distribution and pattern formations. The neural network employs two layers for linear training and pattern differentiation. The differentiated region is trained for its positive rate in identifying the region around the target. Non-uniform patterns are used for training the second layer of the neural network. The proposed scheme iterates recurrently until maximum segmentation is achieved. The metrics of positive rate, detection time, and false negatives are used for assessing the proposed scheme’s performance. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

17 pages, 19458 KiB  
Technical Note
CamoNet: A Target Camouflage Network for Remote Sensing Images Based on Adversarial Attack
by Yue Zhou, Wanghan Jiang, Xue Jiang, Lin Chen and Xingzhao Liu
Remote Sens. 2023, 15(21), 5131; https://doi.org/10.3390/rs15215131 - 27 Oct 2023
Cited by 2 | Viewed by 1521
Abstract
Object detection algorithms based on convolutional neural networks (CNNs) have achieved remarkable success in remote sensing images (RSIs), such as aircraft and ship detection, which play a vital role in military and civilian fields. However, CNNs are fragile and can be easily fooled. There have been a series of studies on adversarial attacks for image classification in RSIs. However, the existing gradient attack algorithms designed for classification cannot achieve excellent performance when directly applied to object detection, which is an essential task in RSI understanding. Although we can find some works on adversarial attacks for object detection, they are weak in concealment and easily detected by the naked eye. To handle these problems, we propose a target camouflage network for object detection in RSIs, called CamoNet, to deceive CNN-based detectors by adding imperceptible perturbation to the image. In addition, we propose a detection space initialization strategy to maximize the diversity in the detector’s outputs among the generated samples. It can enhance the performance of the gradient attack algorithms in the object detection task. Moreover, a key pixel distillation module is employed, which can further reduce the modified pixels without weakening the concealment effect. Compared with several of the most advanced adversarial attacks, the proposed attack has advantages in terms of both peak signal-to-noise ratio (PSNR) and attack success rate. The transferability of the proposed target camouflage network is evaluated on three dominant detection algorithms (RetinaNet, Faster R-CNN, and RTMDet) with two commonly used remote sensing datasets (i.e., DOTA and DIOR). Full article
(This article belongs to the Special Issue Deep Learning in Optical Satellite Images)
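CamoNet builds on gradient attacks against a detector, plus a key pixel distillation step that discards most of the perturbed pixels. The following is a rough PGD-style sketch of that flow; `detector_loss` is a placeholder for whatever loss the attacked detector exposes, and the top-k magnitude selection merely stands in for the paper's distillation module.

```python
import torch

def camouflage_attack(image, detector_loss, eps=8/255, alpha=1/255, steps=10, keep_ratio=0.1):
    """Toy detection-suppression attack: maximise the detector's loss with a small
    L-inf perturbation, then keep only the strongest fraction of perturbed pixels."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = detector_loss(image + delta)        # larger loss -> fewer correct detections
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    with torch.no_grad():
        # Crude stand-in for key pixel distillation: zero out low-magnitude perturbations.
        flat = delta.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = flat.topk(k).values.min()
        delta = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
    return (image + delta).clamp(0, 1)
```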

17 pages, 5455 KiB  
Article
Improving the Detection and Positioning of Camouflaged Objects in YOLOv8
by Tong Han, Tieyong Cao, Yunfei Zheng, Lei Chen, Yang Wang and Bingyang Fu
Electronics 2023, 12(20), 4213; https://doi.org/10.3390/electronics12204213 - 11 Oct 2023
Cited by 3 | Viewed by 2417
Abstract
Camouflaged objects can be perfectly hidden in the surrounding environment by designing their texture and color. Existing object detection models have high false-negative rates and inaccurate localization for camouflaged objects. To resolve this, we improved the YOLOv8 algorithm based on feature enhancement. In the feature extraction stage, an edge enhancement module was built to enhance the edge feature. In the feature fusion stage, multiple asymmetric convolution branches were introduced to obtain larger receptive fields and achieve multi-scale feature fusion. In the post-processing stage, the existing non-maximum suppression algorithm was improved to address the issue of missed detection caused by overlapping boxes. Additionally, a shape-enhanced data augmentation method was designed to enhance the model’s shape perception of camouflaged objects. Experimental evaluations were carried out on camouflaged object datasets, including COD and CAMO, which are publicly accessible. The improved method exhibits enhancements in detection performance by 8.3% and 9.1%, respectively, compared to the YOLOv8 model. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
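The post-processing change above targets missed detections caused by standard NMS discarding overlapping boxes. The abstract does not spell out the exact modification; a widely used remedy with the same goal is Soft-NMS, which decays the scores of overlapping boxes instead of removing them, sketched below.

```python
import torch

def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """IoU between boxes a (N, 4) and b (M, 4) in x1, y1, x2, y2 format."""
    tl = torch.max(a[:, None, :2], b[None, :, :2])
    br = torch.min(a[:, None, 2:], b[None, :, 2:])
    inter = (br - tl).clamp(min=0).prod(dim=-1)
    area_a = (a[:, 2:] - a[:, :2]).prod(dim=-1)
    area_b = (b[:, 2:] - b[:, :2]).prod(dim=-1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Soft-NMS (Gaussian variant): overlapping boxes have their scores decayed
    rather than being suppressed outright, which reduces missed detections when
    camouflaged objects overlap. Returns indices of the kept boxes."""
    boxes, scores = boxes.clone().float(), scores.clone().float()
    keep, idxs = [], torch.arange(boxes.size(0))
    while idxs.numel() > 0:
        best = scores[idxs].argmax().item()
        cur = idxs[best]
        keep.append(cur.item())
        idxs = torch.cat([idxs[:best], idxs[best + 1:]])
        if idxs.numel() == 0:
            break
        ious = box_iou(boxes[cur].unsqueeze(0), boxes[idxs]).squeeze(0)
        scores[idxs] *= torch.exp(-(ious ** 2) / sigma)   # Gaussian score decay
        idxs = idxs[scores[idxs] > score_thresh]
    return torch.tensor(keep, dtype=torch.long)
```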

18 pages, 3683 KiB  
Article
Real-Time Segmentation of Artificial Targets Using a Dual-Modal Efficient Attention Fusion Network
by Ying Shen, Xiancai Liu, Shuo Zhang, Yixuan Xu, Dawei Zeng, Shu Wang and Feng Huang
Remote Sens. 2023, 15(18), 4398; https://doi.org/10.3390/rs15184398 - 7 Sep 2023
Viewed by 1037
Abstract
The fusion of spectral–polarimetric information can improve the autonomous reconnaissance capability of unmanned aerial vehicles (UAVs) in detecting artificial targets. However, the current spectral and polarization imaging systems typically suffer from low image sampling resolution, which can lead to the loss of target information. Most existing segmentation algorithms neglect the similarities and differences between multimodal features, resulting in reduced accuracy and robustness of the algorithms. To address these challenges, a real-time spectral–polarimetric segmentation algorithm for artificial targets based on an efficient attention fusion network, called ESPFNet (efficient spectral–polarimetric fusion network), is proposed. The network employs a coordination attention bimodal fusion (CABF) module and a complex atrous spatial pyramid pooling (CASPP) module to fuse and enhance low-level and high-level features at different scales from the spectral feature images and the polarization encoded images, effectively achieving the segmentation of artificial targets. Additionally, the introduction of the residual dense block (RDB) module refines feature extraction, further enhancing the network’s ability to classify pixels. In order to test the algorithm’s performance, a spectral–polarimetric image dataset of artificial targets, named SPIAO (spectral–polarimetric image of artificial objects), is constructed, which contains various camouflaged nets and camouflaged plates with different properties. The experimental results on the SPIAO dataset demonstrate that the proposed method accurately detects the artificial targets, achieving a mean intersection-over-union (MIoU) of 80.4%, a mean pixel accuracy (MPA) of 88.1%, and a detection rate of 27.5 frames per second, meeting the real-time requirement. The research has the potential to provide a new multimodal detection technique for enabling autonomous reconnaissance by UAVs in complex scenes. Full article
(This article belongs to the Section Remote Sensing Image Processing)

19 pages, 11064 KiB  
Article
Edge-Guided Camouflaged Object Detection via Multi-Level Feature Integration
by Kangwei Liu, Tianchi Qiu, Yinfeng Yu, Songlin Li and Xiuhong Li
Sensors 2023, 23(13), 5789; https://doi.org/10.3390/s23135789 - 21 Jun 2023
Cited by 2 | Viewed by 1544
Abstract
Camouflaged object detection (COD) aims to segment those camouflaged objects that blend perfectly into their surroundings. Due to the low boundary contrast between camouflaged objects and their surroundings, their detection poses a significant challenge. Despite the numerous excellent camouflaged object detection methods developed in recent years, issues such as boundary refinement and multi-level feature extraction and fusion still need further exploration. In this paper, we propose a novel multi-level feature integration network (MFNet) for camouflaged object detection. Firstly, we design an edge guidance module (EGM) to improve the COD performance by providing additional boundary semantic information, combining high-level semantic information and low-level spatial details to model the edges of camouflaged objects. Additionally, we propose a multi-level feature integration module (MFIM), which leverages the fine local information of low-level features and the rich global information of high-level features across three adjacent feature levels to provide a supplementary feature representation for the current-level features, effectively integrating the full contextual semantic information. Finally, we propose a context aggregation refinement module (CARM) to efficiently aggregate and refine the cross-level features to obtain clear prediction maps. Our extensive experiments on three benchmark datasets show that the MFNet model is an effective COD model and outperforms other state-of-the-art models in all four evaluation metrics (Sα, Eϕ, Fβw, and MAE). Full article
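Of the four metrics reported (Sα, Eϕ, Fβw, MAE), MAE is the simplest: the mean absolute difference between the normalized prediction map and the ground-truth mask. A short reference implementation, assuming both inputs are already in [0, 1] and the same size:

```python
import torch

def mae_metric(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Mean absolute error between a predicted COD/saliency map and a binary
    ground-truth mask; lower is better."""
    return (pred.float() - gt.float()).abs().mean().item()
```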

16 pages, 2467 KiB  
Article
Camouflaged Object Detection with a Feature Lateral Connection Network
by Tao Wang, Jian Wang and Ruihao Wang
Electronics 2023, 12(12), 2570; https://doi.org/10.3390/electronics12122570 - 7 Jun 2023
Cited by 3 | Viewed by 1932
Abstract
We propose a new framework for camouflaged object detection (COD) named FLCNet, which comprises three modules: an underlying feature mining module (UFM), a texture-enhanced module (TEM), and a neighborhood feature fusion module (NFFM). Existing models overlook the analysis of underlying features, which results in extracted low-level feature texture information that is not prominent enough and contains more interference due to the slight difference between the foreground and background of the camouflaged object. To address this issue, we created a UFM using convolution with various expansion rates, max-pooling, and avg-pooling to deeply mine the textural information of underlying features and eliminate interference. Motivated by the traits passed down through biological evolution, we created an NFFM, which primarily consists of element multiplication and concatenation followed by an addition operation. To obtain precise prediction maps, our model employs the top-down strategy to gradually combine high-level and low-level information. Using four benchmark COD datasets, our proposed framework outperforms 21 deep-learning-based models in terms of seven frequently used indices, demonstrating the effectiveness of our methodology. Full article
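The UFM described above mixes convolutions with different expansion (dilation) rates, max-pooling, and average-pooling to mine low-level texture while damping interference. A compact PyTorch sketch of such a block follows; the channel counts and dilation rates are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class UnderlyingFeatureMining(nn.Module):
    """Illustrative texture-mining block: parallel dilated convolutions plus
    max/avg pooling branches, concatenated and fused back to the input width."""
    def __init__(self, ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        self.avg_pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.fuse = nn.Conv2d(ch * (len(rates) + 2), ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        feats += [self.max_pool(x), self.avg_pool(x)]     # pooling keeps salient / smooth texture
        return self.fuse(torch.cat(feats, dim=1)) + x     # residual connection
```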

22 pages, 5906 KiB  
Article
Camouflaged Object Detection Based on Ternary Cascade Perception
by Xinhao Jiang, Wei Cai, Yao Ding, Xin Wang, Zhiyong Yang, Xingyu Di and Weijie Gao
Remote Sens. 2023, 15(5), 1188; https://doi.org/10.3390/rs15051188 - 21 Feb 2023
Cited by 6 | Viewed by 2754
Abstract
Camouflaged object detection (COD), in a broad sense, aims to detect image objects that have high degrees of similarity to the background. COD is more challenging than conventional object detection because of the high degree of “fusion” between a camouflaged object and the background. In this paper, we focused on the accurate detection of camouflaged objects, conducting an in-depth study on COD and addressing the common detection problems of high miss rates and low confidence levels. We proposed a ternary cascade perception-based method for detecting camouflaged objects and constructed a cascade perception network (CPNet). The innovation lies in the proposed ternary cascade perception module (TCPM), which focuses on extracting the relationship information between features, the spatial information of the camouflaged target, and the location information of key points. In addition, a cascade aggregation pyramid (CAP) and a joint loss function have been proposed to recognize camouflaged objects accurately. We conducted comprehensive experiments on the COD10K dataset and compared our proposed approach with seventeen other object detection models. The experimental results showed that CPNet achieves optimal results in terms of six evaluation metrics, including an average precision AP50 that reaches 91.41, an AP75 that improves to 73.04, and significantly higher detection accuracy and confidence. Full article

17 pages, 8696 KiB  
Article
Camouflaged Insect Segmentation Using a Progressive Refinement Network
by Jing Wang, Minglin Hong, Xia Hu, Xiaolin Li, Shiguo Huang, Rong Wang and Feiping Zhang
Electronics 2023, 12(4), 804; https://doi.org/10.3390/electronics12040804 - 6 Feb 2023
Cited by 1 | Viewed by 1915
Abstract
Accurately segmenting an insect from its original ecological image is the core technology restricting the accuracy and efficiency of automatic recognition. However, the performance of existing segmentation methods is unsatisfactory for insect images shot against wild backgrounds on account of several challenges: various sizes, colors or textures similar to the surroundings, transparent body parts, and vague outlines. These challenges of image segmentation are accentuated when dealing with camouflaged insects. Here, we developed an insect image segmentation method based on deep learning termed the progressive refinement network (PRNet), especially for camouflaged insects. Unlike existing insect segmentation methods, PRNet captures the possible scale and location of insects by extracting the contextual information of the image, and fuses comprehensive features to suppress distractors, thereby clearly segmenting insect outlines. Experimental results based on 1900 camouflaged insect images demonstrated that PRNet could effectively segment the camouflaged insects and achieved superior detection performance, with a mean absolute error of 3.2%, pixel-matching degree of 89.7%, structural similarity of 83.6%, and precision and recall error of 72%, which represent improvements of 8.1%, 25.9%, 19.5%, and 35.8%, respectively, compared to recent salient object detection methods. As a foundational technology for insect detection, PRNet provides new opportunities for understanding insect camouflage, has the potential to substantially improve the accuracy of the intelligent identification of general insects, and may even serve as an ultimate insect detector. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision)

18 pages, 78897 KiB  
Article
Rust-Style Patch: A Physical and Naturalistic Camouflage Attacks on Object Detector for Remote Sensing Images
by Binyue Deng, Denghui Zhang, Fashan Dong, Junjian Zhang, Muhammad Shafiq and Zhaoquan Gu
Remote Sens. 2023, 15(4), 885; https://doi.org/10.3390/rs15040885 - 5 Feb 2023
Cited by 10 | Viewed by 3063
Abstract
Deep neural networks (DNNs) can improve the image analysis and interpretation of remote sensing technology by extracting valuable information from images, and have extensive applications in areas such as military affairs, agriculture, environment, transportation, and urban planning. DNNs for object detection can identify and analyze objects in remote sensing images through rich image features, which improves the efficiency of image processing and enables the recognition of large-scale remote sensing images. However, many studies have shown that deep neural networks are vulnerable to adversarial attack: after small perturbations are added, the generated adversarial examples cause the network to output undesired results, threatening the normal recognition and detection capability of remote sensing systems. Depending on the application scenario, attacks can be divided into the digital domain and the physical domain. A digital-domain attack modifies the original image directly and is mainly used to simulate the attack effect, while a physical-domain attack adds a perturbation to the actual object and captures it with a device, which is closer to the real situation. Attacks in the physical domain are more threatening; however, existing attack methods generally produce patches with a bright style and a large attack range that are easy for human vision to notice. Our goal is to generate a natural patch with a small perturbation area that can help objects in remote sensing images used in the military avoid detection by object detectors while remaining imperceptible to human eyes. To address these issues, we propose a rust-style adversarial patch generation framework based on style transfer. The framework uses a heat-map-based interpretability method to locate the key areas for target recognition and generates irregular-shaped, natural-looking patches, reducing the disturbance area and alleviating suspicion from humans. To give the generated adversarial examples a higher attack success rate in the physical domain, we further improve the robustness of the adversarial patch through data augmentations such as rotation, scaling, and brightness changes, ultimately preventing the object detector from detecting the camouflage patch. We attacked the YOLOV3 detection network on multiple datasets. The experimental results show that our model achieves a success rate of 95.7% in the digital domain. We also conduct physical attacks in indoor and outdoor environments and achieve attack success rates of 70.6% and 65.3%, respectively. The structural similarity index metric shows that the generated adversarial patches are more natural than those of existing methods. Full article
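To survive the physical domain, the patch is optimized under augmentations such as rotation, scaling, and brightness changes. The snippet below sketches that expectation-over-transformation idea in PyTorch; the detector loss, patch placement function, and augmentation ranges are placeholders and assumptions, not the authors' settings (rotation is simplified to 90-degree steps).

```python
import torch
import torch.nn.functional as F

def augment(patch: torch.Tensor) -> torch.Tensor:
    """Random brightness, scale, and 90-degree rotation applied to a (1, 3, H, W) patch."""
    p = patch * (0.8 + 0.4 * torch.rand(1, device=patch.device))          # brightness in [0.8, 1.2]
    scale = float(0.8 + 0.4 * torch.rand(1))                               # scale in [0.8, 1.2]
    h, w = patch.shape[-2:]
    p = F.interpolate(p, size=(int(h * scale), int(w * scale)),
                      mode="bilinear", align_corners=False)
    p = F.interpolate(p, size=(h, w), mode="bilinear", align_corners=False)
    return torch.rot90(p, k=int(torch.randint(0, 4, (1,))), dims=(2, 3))

def optimise_patch(patch, apply_patch, detector_loss, steps=200, lr=0.01):
    """Maximise the detector's loss over random augmentations so the patch keeps
    working under physical-world variation. `apply_patch` pastes the patch onto an
    image batch; `detector_loss` scores the detector on that batch."""
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -detector_loss(apply_patch(augment(patch)))   # ascend the detection loss
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                               # keep the patch a valid image
    return patch.detach()
```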

26 pages, 13008 KiB  
Article
MAGNet: A Camouflaged Object Detection Network Simulating the Observation Effect of a Magnifier
by Xinhao Jiang, Wei Cai, Zhili Zhang, Bo Jiang, Zhiyong Yang and Xin Wang
Entropy 2022, 24(12), 1804; https://doi.org/10.3390/e24121804 - 9 Dec 2022
Cited by 5 | Viewed by 3612
Abstract
In recent years, protecting important objects by simulating animal camouflage has been widely employed in many fields. Therefore, camouflaged object detection (COD) technology has emerged. COD is more difficult to achieve than traditional object detection techniques due to the high degree of fusion of objects camouflaged with the background. In this paper, we strive to more accurately and efficiently identify camouflaged objects. Inspired by the use of magnifiers to search for hidden objects in pictures, we propose a COD network that simulates the observation effect of a magnifier, called the MAGnifier Network (MAGNet). Specifically, our MAGNet contains two parallel modules: the ergodic magnification module (EMM) and the attention focus module (AFM). The EMM is designed to mimic the process of a magnifier enlarging an image, and the AFM is used to simulate the observation process in which human attention is highly focused on a particular region. The two sets of output camouflaged object maps are merged to simulate the observation of an object by a magnifier. In addition, a weighted key point area perception loss function, which is more applicable to COD, is designed based on the two modules to give greater attention to the camouflaged object. Extensive experiments demonstrate that, compared with 19 cutting-edge detection models, MAGNet can achieve the best comprehensive effect on eight evaluation metrics in the public COD dataset. Additionally, compared to other COD methods, MAGNet has lower computational complexity and faster segmentation. We also validated the model’s generalization ability on a military camouflaged object dataset constructed in-house. Finally, we experimentally explored some extended applications of COD. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
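MAGNet's weighted key point area perception loss gives the camouflaged object extra weight during training. Its exact definition is not reproduced here; a common weighted BCE of the same flavour, which up-weights pixels near the object boundary with a local-average trick, is sketched below as an assumption.

```python
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Boundary-emphasising BCE: pixels whose local neighbourhood disagrees with
    the mask value (i.e. pixels near the object edge) receive a larger weight.
    logits, mask: (B, 1, H, W); mask is a binary ground-truth map."""
    mask = mask.float()
    # The local average of the mask differs most from the mask right at the boundary.
    local_avg = F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15)
    weight = 1.0 + 5.0 * (local_avg - mask).abs()
    bce = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
    return ((weight * bce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))).mean()
```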

14 pages, 5191 KiB  
Article
Spectral Camouflage Characteristics and Recognition Ability of Targets Based on Visible/Near-Infrared Hyperspectral Images
by Jiale Zhao, Bing Zhou, Guanglong Wang, Jiaju Ying, Jie Liu and Qi Chen
Photonics 2022, 9(12), 957; https://doi.org/10.3390/photonics9120957 - 9 Dec 2022
Cited by 8 | Viewed by 2360
Abstract
Hyperspectral imaging can simultaneously obtain the spatial morphological information of the ground objects and the fine spectral information of each pixel. Through the quantitative analysis of the spectral characteristics of objects, it can complete the task of classification and recognition of ground objects. The appearance of imaging spectrum technology provides great advantages for military target detection and promotes the continuous improvement of military reconnaissance levels. At the same time, spectral camouflage materials and methods that are relatively resistant to hyperspectral reconnaissance technology are also developing rapidly. In order to study the reconnaissance effect of visible/near-infrared hyperspectral images on camouflage targets, this paper analyzes the spectral characteristics of different camouflage targets using the hyperspectral images obtained in the visible and near-infrared bands under natural conditions. Two groups of experiments were carried out. The first group of experiments verified the spectral camouflage characteristics and camouflage effects of different types of camouflage clothing with grassland as the background; the second group of experiments verified the spectral camouflage characteristics and camouflage effects of different types of camouflage paint sprayed on boards and steel plates. The experiment shows that the hyperspectral image based on the near-infrared band has a good reconnaissance effect for different camouflage targets, and the near-infrared band is an effective “window” band for detecting and distinguishing true and false targets. However, the stability of the visible/near-infrared band detection for the target identification under camouflage paint is poor, and it is difficult to effectively distinguish the object materials under the same camouflage paint. This research confirms the application ability of detection based on the visible/near-infrared band, and points out the direction for the development of imaging detectors and camouflage materials in the future. Full article
