Camouflaged Object Detection with a Feature Lateral Connection Network
Abstract
1. Introduction
- We propose a new COD model that incorporates three modules: an underlying feature mining module, a texture-enhanced module [15], and a neighborhood feature fusion module. We conduct a series of experiments to validate the effectiveness of our model.
- To fully mine the spatial texture information of low-level features, we design an underlying feature mining module. Drawing inspiration from biological evolution, we design the neighborhood feature fusion module. To improve the accuracy of the prediction map, we adopt a top-down strategy that gradually integrates high-level and low-level features.
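The top-down integration described above can be sketched as follows. This is a minimal illustration, not the paper's NFFM: nearest-neighbor upsampling and element-wise addition stand in for the actual fusion operations, and all function and variable names are ours.

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse_top_down(features):
    """Fuse a list of (C, H, W) feature maps ordered deep -> shallow.

    Each high-level (coarse) map is upsampled to the next level's
    resolution and merged with the low-level map via a lateral
    connection (here: simple addition).
    """
    fused = features[0]
    for low in features[1:]:
        fused = upsample2x(fused) + low
    return fused
```

In the actual network the merge step would involve learned convolutions rather than plain addition; the sketch only shows the coarse-to-fine control flow.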
2. Related Works
2.1. Salient Object Detection (SOD)
2.2. Camouflaged Object Detection (COD)
3. The Proposed Method
3.1. Overall Architecture
Algorithm 1 Camouflaged Object Detection with a Feature Lateral Connection Network
Input: Training dataset D; maximal number of learning epochs E.
Output: Parameters for Res2Net-50 [26], for UFM, for TEM, for NFFM.
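Algorithm 1 names only its inputs and outputs; its outer structure (epochs over a training set, a forward pass, a loss-driven parameter update) can be sketched with a stand-in model. The real network is Res2Net-50 with the UFM, TEM, and NFFM modules, which this toy logistic model does not reproduce.

```python
import numpy as np

def train(dataset, epochs, lr=0.1):
    """Structural sketch of Algorithm 1's training loop.

    `dataset` is a list of (input, target) pairs; the single-feature
    logistic model below is a hypothetical stand-in for the network.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=2)                # stand-in for all network parameters
    for _ in range(epochs):               # "for epoch = 1 .. E"
        for x, y in dataset:
            pred = 1.0 / (1.0 + np.exp(-(w[0] * x + w[1])))  # forward pass
            grad_logit = pred - y          # BCE gradient w.r.t. the logit
            w -= lr * np.array([grad_logit * x, grad_logit])  # parameter update
    return w
```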
3.2. Underlying Feature Mining Module (UFM)
3.3. Neighborhood Feature Fusion Module (NFFM)
3.4. Loss Function
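A loss commonly used in COD decoders combines binary cross-entropy [30] with an IoU term, e.g., the weighted BCE + weighted IoU of F3Net [31]. The sketch below shows the unweighted form as an assumption about this family of losses, not the paper's exact formulation.

```python
import numpy as np

def bce_iou_loss(pred, gt, eps=1e-7):
    """pred: predicted probabilities in (0, 1); gt: binary ground-truth mask."""
    p = np.clip(pred, eps, 1.0 - eps)
    # Binary cross-entropy, averaged over pixels
    bce = -np.mean(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p))
    # Soft IoU loss: 1 - intersection / union, smoothed to avoid 0/0
    inter = np.sum(p * gt)
    union = np.sum(p) + np.sum(gt) - inter
    iou = 1.0 - (inter + 1.0) / (union + 1.0)
    return bce + iou
```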
4. Experimental Results
4.1. Datasets and Implementation
4.2. Evaluation Metrics
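Two of the cited metrics are simple enough to state directly: MAE [35] and the Fβ-measure with β² = 0.3 [34]. A sketch follows; the fixed 0.5 binarization threshold is our simplification (papers often report adaptive-threshold variants), and the S-measure [33] and E-measure [36] are more involved and omitted here.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a prediction map and a binary mask."""
    return np.mean(np.abs(pred - gt))

def f_measure(pred, gt, thresh=0.5, beta2=0.3, eps=1e-8):
    """F-beta measure on a thresholded prediction map (beta^2 = 0.3 [34])."""
    b = (pred >= thresh).astype(float)
    tp = np.sum(b * gt)
    precision = tp / (np.sum(b) + eps)
    recall = tp / (np.sum(gt) + eps)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
```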
4.3. Comparison with State-of-the-Art Methods
4.3.1. Quantitative Comparison
4.3.2. Qualitative Comparison
4.4. Comparisons of Inference
4.5. Ablation Studies
4.6. Failure Cases and Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Lv, Y.; Zhang, J.; Dai, Y.; Li, A.; Liu, B.; Barnes, N.; Fan, D.P. Simultaneously localize, segment and rank the camouflaged objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 18–22 June 2021; pp. 11591–11601.
- Mei, H.; Ji, G.P.; Wei, Z.; Yang, X.; Wei, X.; Fan, D.P. Camouflaged object segmentation with distraction mining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 18–22 June 2021; pp. 8772–8781.
- Zhou, T.; Zhou, Y.; Gong, C.; Yang, J.; Zhang, Y. Feature Aggregation and Propagation Network for Camouflaged Object Detection. IEEE Trans. Image Process. 2022, 31, 7036–7047.
- Pang, Y.; Zhao, X.; Zhang, L.; Lu, H. Multi-scale interactive network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9413–9422.
- Le, X.; Mei, J.; Zhang, H.; Zhou, B.; Xi, J. A learning-based approach for surface defect detection using small image datasets. Neurocomputing 2020, 408, 112–120.
- Lidbetter, T. Search and rescue in the face of uncertain threats. Eur. J. Oper. Res. 2020, 285, 1153–1160.
- Zhang, X.; Zhu, C.; Wang, S.; Liu, Y.; Ye, M. A Bayesian approach to camouflaged moving object detection. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 2001–2013.
- Feng, X.; Guoying, C.; Richang, H.; Jing, G. Camouflage texture evaluation using a saliency map. Multimed. Syst. 2015, 21, 169–175.
- Tankus, A.; Yeshurun, Y. Detection of regions of interest and camouflage breaking by direct convexity estimation. In Proceedings of the 1998 IEEE Workshop on Visual Surveillance, Bombay, India, 2 January 1998; pp. 42–48.
- Guo, H.; Dou, Y.; Tian, T.; Zhou, J.; Yu, S. A robust foreground segmentation method by temporal averaging multiple video frames. In Proceedings of the 2008 International Conference on Audio, Language and Image Processing, Shanghai, China, 7–9 July 2008; pp. 878–882.
- Fan, D.P.; Ji, G.P.; Cheng, M.M.; Shao, L. Concealed object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6024.
- Yang, F.; Zhai, Q.; Li, X.; Huang, R.; Luo, A.; Cheng, H.; Fan, D.P. Uncertainty-guided transformer reasoning for camouflaged object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4146–4155.
- Sun, Y.; Chen, G.; Zhou, T.; Zhang, Y.; Liu, N. Context-aware cross-level fusion network for camouflaged object detection. arXiv 2021, arXiv:2105.12555.
- Zhai, Q.; Li, X.; Yang, F.; Chen, C.; Cheng, H.; Fan, D.P. Mutual graph learning for camouflaged object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 18–22 June 2021; pp. 12997–13007.
- Wu, Z.; Su, L.; Huang, Q. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3907–3916.
- Le, T.N.; Nguyen, T.V.; Nie, Z.; Tran, M.T.; Sugimoto, A. Anabranch network for camouflaged object segmentation. Comput. Vis. Image Underst. 2019, 184, 45–56.
- Skurowski, P.; Abdulameer, H.; Błaszczyk, J.; Depta, T.; Kornacki, A.; Kozieł, P. Animal camouflage analysis: Chameleon database. Unpubl. Manuscr. 2018, 2, 7.
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
- Fang, H.; Gupta, S.; Iandola, F.; Srivastava, R.K.; Deng, L.; Dollár, P.; Gao, J.; He, X.; Mitchell, M.; Platt, J.C.; et al. From captions to visual concepts and back. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1473–1482.
- Liu, N.; Han, J.; Yang, M.H. Picanet: Learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3089–3098.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Zhang, X.; Wang, T.; Qi, J.; Lu, H.; Wang, G. Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 714–722.
- Wang, B.; Chen, Q.; Zhou, M.; Zhang, Z.; Jin, X.; Gai, K. Progressive feature polishing network for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12128–12135.
- Fan, D.P.; Ji, G.P.; Sun, G.; Cheng, M.M.; Shen, J.; Shao, L. Camouflaged object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2777–2787.
- Li, A.; Zhang, J.; Lv, Y.; Liu, B.; Zhang, T.; Dai, Y. Uncertainty-aware joint salient object and camouflaged object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 18–22 June 2021; pp. 10071–10081.
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662.
- Dobzhansky, T. Nothing in biology makes sense except in the light of evolution. Am. Biol. Teach. 2013, 75, 87–91.
- Fan, D.P.; Ji, G.P.; Qin, X.; Cheng, M.M. Cognitive vision inspired object segmentation metric and loss function. Sci. Sin. Inform. 2021, 6, 6.
- Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. Basnet: Boundary-aware salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7479–7489.
- De Boer, P.T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67.
- Wei, J.; Wang, S.; Huang, Q. F3Net: Fusion, feedback and focus for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12321–12328.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Fan, D.P.; Cheng, M.M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4548–4557.
- Margolin, R.; Zelnik-Manor, L.; Tal, A. How to evaluate foreground maps? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 248–255.
- Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604.
- Fan, D.P.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.M.; Borji, A. Enhanced-alignment measure for binary foreground map evaluation. arXiv 2018, arXiv:1805.10421.
- Zhang, D.; Han, J.; Li, C.; Wang, J.; Li, X. Detection of co-salient objects by looking deep and wide. Int. J. Comput. Vis. 2016, 120, 215–232.
- Zhao, J.X.; Liu, J.J.; Fan, D.P.; Cao, Y.; Yang, J.; Cheng, M.M. EGNet: Edge guidance network for salient object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8779–8788.
- Wu, Z.; Su, L.; Huang, Q. Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7264–7273.
- Liu, J.J.; Hou, Q.; Cheng, M.M.; Feng, J.; Jiang, J. A simple pooling-based design for real-time salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3917–3926.
- Gao, S.H.; Tan, Y.Q.; Cheng, M.M.; Lu, C.; Chen, Y.; Yan, S. Highly efficient salient object detection with 100k parameters. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 8–14 September 2020; pp. 702–721.
- Zhang, J.; Yu, X.; Li, A.; Song, P.; Liu, B.; Dai, Y. Weakly-supervised salient object detection via scribble annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12546–12555.
- Zhang, J.; Fan, D.P.; Dai, Y.; Anwar, S.; Saleh, F.S.; Zhang, T.; Barnes, N. UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8582–8591.
- Zhou, H.; Xie, X.; Lai, J.H.; Chen, Z.; Yang, L. Interactive two-stream decoder for accurate and fast saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9141–9150.
- Fan, D.P.; Ji, G.P.; Zhou, T.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Pranet: Parallel reverse attention network for polyp segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 263–273.
- Yan, J.; Le, T.N.; Nguyen, K.D.; Tran, M.T.; Do, T.T.; Nguyen, T.V. Mirrornet: Bio-inspired camouflaged object segmentation. IEEE Access 2021, 9, 43290–43300.
- Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. J. Comput. Phys. 2020, 404, 109136.
- Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proc. R. Soc. A 2020, 476, 20200334.
- Jagtap, A.D.; Karniadakis, G.E. How important are activation functions in regression and classification? A survey, performance comparison, and future directions. J. Mach. Learn. Model. Comput. 2023, 4, 21–75.
- Jagtap, A.D.; Shin, Y.; Kawaguchi, K.; Karniadakis, G.E. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing 2022, 468, 165–180.
Method | CAMO Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | CHAMELEON Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | COD10K Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | NC4K Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
EGNet [38] | 0.732 | 0.604 | 0.670 | 0.800 | 0.109 | 0.797 | 0.649 | 0.702 | 0.860 | 0.065 | 0.736 | 0.517 | 0.582 | 0.810 | 0.061 | 0.777 | 0.639 | 0.696 | 0.841 | 0.075 |
PoolNet [40] | 0.730 | 0.575 | 0.643 | 0.747 | 0.105 | 0.845 | 0.691 | 0.749 | 0.864 | 0.054 | 0.740 | 0.506 | 0.576 | 0.777 | 0.056 | 0.785 | 0.635 | 0.699 | 0.814 | 0.073 |
F3Net [31] | 0.711 | 0.564 | 0.616 | 0.741 | 0.109 | 0.848 | 0.744 | 0.770 | 0.894 | 0.047 | 0.739 | 0.544 | 0.593 | 0.795 | 0.051 | 0.780 | 0.656 | 0.705 | 0.824 | 0.070 |
SCRN [39] | 0.779 | 0.643 | 0.705 | 0.797 | 0.090 | 0.876 | 0.741 | 0.787 | 0.889 | 0.042 | 0.789 | 0.575 | 0.651 | 0.817 | 0.047 | 0.830 | 0.698 | 0.757 | 0.854 | 0.059 |
CSNet [41] | 0.771 | 0.642 | 0.705 | 0.795 | 0.092 | 0.856 | 0.718 | 0.766 | 0.869 | 0.047 | 0.778 | 0.569 | 0.635 | 0.810 | 0.047 | 0.750 | 0.603 | 0.655 | 0.773 | 0.088 |
SSAL [42] | 0.644 | 0.493 | 0.579 | 0.721 | 0.126 | 0.757 | 0.639 | 0.702 | 0.849 | 0.071 | 0.668 | 0.454 | 0.527 | 0.768 | 0.066 | 0.699 | 0.561 | 0.644 | 0.780 | 0.093 |
UCNet [43] | 0.739 | 0.640 | 0.700 | 0.787 | 0.094 | 0.880 | 0.817 | 0.836 | 0.930 | 0.036 | 0.776 | 0.633 | 0.681 | 0.857 | 0.042 | 0.811 | 0.729 | 0.775 | 0.871 | 0.055 |
MINet [4] | 0.748 | 0.637 | 0.691 | 0.792 | 0.090 | 0.855 | 0.771 | 0.802 | 0.914 | 0.036 | 0.770 | 0.608 | 0.657 | 0.832 | 0.042 | 0.812 | 0.720 | 0.764 | 0.862 | 0.056 |
ITSD [44] | 0.750 | 0.610 | 0.663 | 0.780 | 0.102 | 0.814 | 0.662 | 0.705 | 0.844 | 0.057 | 0.767 | 0.557 | 0.615 | 0.808 | 0.051 | 0.811 | 0.680 | 0.729 | 0.845 | 0.064 |
PraNet [45] | 0.769 | 0.663 | 0.710 | 0.824 | 0.094 | 0.860 | 0.763 | 0.789 | 0.907 | 0.044 | 0.789 | 0.629 | 0.671 | 0.861 | 0.045 | 0.822 | 0.724 | 0.762 | 0.876 | 0.059 |
ANet [16] | 0.682 | 0.484 | 0.541 | 0.685 | 0.126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
MirrorNet [46] | 0.785 | 0.719 | 0.754 | 0.848 | 0.077 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
SINet [24] | 0.745 | 0.644 | 0.702 | 0.804 | 0.092 | 0.872 | 0.806 | 0.827 | 0.936 | 0.034 | 0.776 | 0.631 | 0.679 | 0.864 | 0.043 | 0.808 | 0.723 | 0.769 | 0.871 | 0.058 |
PFNet [2] | 0.782 | 0.695 | 0.746 | 0.842 | 0.085 | 0.882 | 0.810 | 0.828 | 0.931 | 0.033 | 0.800 | 0.660 | 0.701 | 0.877 | 0.040 | 0.829 | 0.745 | 0.784 | 0.888 | 0.053 |
UJSC [25] | 0.800 | 0.728 | 0.772 | 0.859 | 0.073 | 0.891 | 0.833 | 0.847 | 0.945 | 0.030 | 0.809 | 0.684 | 0.721 | 0.884 | 0.035 | 0.842 | 0.771 | 0.806 | 0.898 | 0.047 |
SLSR [1] | 0.787 | 0.696 | 0.744 | 0.838 | 0.080 | 0.890 | 0.822 | 0.841 | 0.935 | 0.030 | 0.804 | 0.673 | 0.715 | 0.880 | 0.037 | 0.840 | 0.766 | 0.804 | 0.895 | 0.048 |
MGL-R [14] | 0.775 | 0.673 | 0.726 | 0.812 | 0.088 | 0.893 | 0.813 | 0.834 | 0.918 | 0.030 | 0.814 | 0.666 | 0.711 | 0.852 | 0.035 | 0.833 | 0.740 | 0.782 | 0.867 | 0.052 |
C2FNet [13] | 0.796 | 0.719 | 0.762 | 0.854 | 0.080 | 0.888 | 0.828 | 0.844 | 0.935 | 0.032 | 0.813 | 0.686 | 0.723 | 0.890 | 0.036 | 0.838 | 0.762 | 0.795 | 0.897 | 0.049 |
UGTR [12] | 0.784 | 0.684 | 0.736 | 0.822 | 0.086 | 0.887 | 0.794 | 0.820 | 0.910 | 0.031 | 0.817 | 0.666 | 0.711 | 0.853 | 0.036 | 0.839 | 0.747 | 0.787 | 0.875 | 0.052 |
SINet_V2 [11] | 0.820 | 0.743 | 0.782 | 0.882 | 0.070 | 0.888 | 0.816 | 0.835 | 0.942 | 0.030 | 0.815 | 0.680 | 0.718 | 0.887 | 0.037 | 0.847 | 0.770 | 0.805 | 0.903 | 0.048 |
FAPNet [3] | 0.815 | 0.734 | 0.776 | 0.865 | 0.076 | 0.893 | 0.825 | 0.842 | 0.940 | 0.028 | 0.822 | 0.694 | 0.731 | 0.888 | 0.036 | 0.851 | 0.775 | 0.810 | 0.899 | 0.047 |
Ours | 0.808 | 0.741 | 0.782 | 0.873 | 0.071 | 0.891 | 0.837 | 0.851 | 0.948 | 0.028 | 0.818 | 0.700 | 0.734 | 0.893 | 0.034 | 0.845 | 0.780 | 0.813 | 0.905 | 0.046 |
Method | Amphibian Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | Aquatic Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | Flying Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | Terrestrial Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
EGNet [38] | 0.776 | 0.588 | 0.650 | 0.843 | 0.056 | 0.712 | 0.515 | 0.584 | 0.784 | 0.091 | 0.769 | 0.558 | 0.621 | 0.838 | 0.046 | 0.713 | 0.467 | 0.531 | 0.794 | 0.056 |
PoolNet [40] | 0.781 | 0.584 | 0.644 | 0.823 | 0.050 | 0.737 | 0.534 | 0.607 | 0.782 | 0.078 | 0.767 | 0.539 | 0.610 | 0.797 | 0.045 | 0.707 | 0.441 | 0.508 | 0.745 | 0.054 |
F3Net [31] | 0.808 | 0.657 | 0.700 | 0.846 | 0.039 | 0.728 | 0.554 | 0.611 | 0.788 | 0.076 | 0.760 | 0.571 | 0.618 | 0.818 | 0.040 | 0.712 | 0.490 | 0.538 | 0.770 | 0.048 |
SCRN [39] | 0.839 | 0.665 | 0.729 | 0.867 | 0.041 | 0.780 | 0.600 | 0.674 | 0.818 | 0.064 | 0.817 | 0.608 | 0.683 | 0.840 | 0.036 | 0.758 | 0.509 | 0.588 | 0.784 | 0.048 |
CSNet [41] | 0.828 | 0.649 | 0.711 | 0.857 | 0.041 | 0.768 | 0.587 | 0.656 | 0.808 | 0.067 | 0.809 | 0.610 | 0.676 | 0.838 | 0.036 | 0.744 | 0.501 | 0.566 | 0.776 | 0.047 |
SSAL [42] | 0.729 | 0.560 | 0.637 | 0.817 | 0.057 | 0.632 | 0.428 | 0.509 | 0.737 | 0.101 | 0.702 | 0.504 | 0.576 | 0.795 | 0.050 | 0.647 | 0.405 | 0.471 | 0.756 | 0.060 |
UCNet [43] | 0.827 | 0.717 | 0.756 | 0.897 | 0.034 | 0.767 | 0.649 | 0.703 | 0.843 | 0.060 | 0.806 | 0.675 | 0.718 | 0.886 | 0.030 | 0.742 | 0.566 | 0.617 | 0.830 | 0.042 |
MINet [4] | 0.823 | 0.695 | 0.732 | 0.881 | 0.035 | 0.767 | 0.632 | 0.684 | 0.831 | 0.058 | 0.799 | 0.650 | 0.697 | 0.856 | 0.031 | 0.732 | 0.536 | 0.584 | 0.802 | 0.043 |
ITSD [44] | 0.810 | 0.628 | 0.679 | 0.852 | 0.044 | 0.762 | 0.584 | 0.648 | 0.811 | 0.070 | 0.793 | 0.588 | 0.645 | 0.831 | 0.040 | 0.736 | 0.496 | 0.552 | 0.777 | 0.051 |
PraNet [45] | 0.842 | 0.717 | 0.750 | 0.905 | 0.035 | 0.781 | 0.643 | 0.692 | 0.848 | 0.065 | 0.819 | 0.669 | 0.707 | 0.888 | 0.033 | 0.756 | 0.565 | 0.607 | 0.835 | 0.046 |
ANet [16] | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
MirrorNet [46] | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
SINet [24] | 0.820 | 0.714 | 0.756 | 0.891 | 0.034 | 0.766 | 0.643 | 0.698 | 0.854 | 0.063 | 0.803 | 0.663 | 0.707 | 0.887 | 0.031 | 0.749 | 0.577 | 0.625 | 0.845 | 0.042 |
PFNet [2] | 0.848 | 0.740 | 0.775 | 0.911 | 0.031 | 0.793 | 0.675 | 0.722 | 0.868 | 0.055 | 0.824 | 0.691 | 0.729 | 0.903 | 0.030 | 0.773 | 0.606 | 0.647 | 0.855 | 0.040 |
UJSC [25] | 0.841 | 0.742 | 0.769 | 0.905 | 0.031 | 0.805 | 0.705 | 0.747 | 0.879 | 0.049 | 0.836 | 0.719 | 0.752 | 0.906 | 0.026 | 0.778 | 0.624 | 0.664 | 0.863 | 0.037 |
SLSR [1] | 0.845 | 0.751 | 0.783 | 0.906 | 0.030 | 0.803 | 0.694 | 0.740 | 0.875 | 0.052 | 0.830 | 0.707 | 0.745 | 0.906 | 0.026 | 0.772 | 0.611 | 0.655 | 0.855 | 0.038 |
MGL-R [14] | 0.854 | 0.734 | 0.770 | 0.886 | 0.028 | 0.807 | 0.688 | 0.736 | 0.855 | 0.051 | 0.839 | 0.701 | 0.743 | 0.873 | 0.026 | 0.785 | 0.606 | 0.651 | 0.823 | 0.036 |
C2FNet [13] | 0.849 | 0.752 | 0.779 | 0.899 | 0.030 | 0.807 | 0.700 | 0.741 | 0.882 | 0.052 | 0.840 | 0.724 | 0.759 | 0.914 | 0.026 | 0.783 | 0.627 | 0.664 | 0.872 | 0.037 |
UGTR [12] | 0.857 | 0.738 | 0.774 | 0.896 | 0.029 | 0.810 | 0.686 | 0.734 | 0.855 | 0.050 | 0.843 | 0.699 | 0.744 | 0.873 | 0.026 | 0.789 | 0.606 | 0.653 | 0.823 | 0.036 |
SINet_V2 [11] | 0.858 | 0.756 | 0.788 | 0.916 | 0.030 | 0.811 | 0.696 | 0.738 | 0.883 | 0.051 | 0.839 | 0.713 | 0.749 | 0.908 | 0.027 | 0.787 | 0.623 | 0.662 | 0.866 | 0.039 |
FAPNet [3] | 0.854 | 0.752 | 0.783 | 0.914 | 0.032 | 0.821 | 0.717 | 0.757 | 0.887 | 0.049 | 0.845 | 0.725 | 0.760 | 0.906 | 0.025 | 0.795 | 0.639 | 0.678 | 0.868 | 0.037 |
Ours | 0.863 | 0.776 | 0.803 | 0.925 | 0.028 | 0.817 | 0.725 | 0.764 | 0.895 | 0.048 | 0.844 | 0.733 | 0.763 | 0.913 | 0.024 | 0.785 | 0.638 | 0.675 | 0.869 | 0.037 |
Method | Ours | FAPNet [3] | SINet_V2 [11] | UGTR [12] | C2FNet [13] | MGL-R [14] | SINet [24] | SLSR [1] | UJSC [25] | PFNet [2] |
---|---|---|---|---|---|---|---|---|---|---|
Params. | 31.554 M | 29.524 M | 26.976 M | 48.868 M | 28.411 M | 63.595 M | 48.947 M | 50.935 M | 217.982 M | 46.498 M |
FLOPs | 43.435 G | 59.101 G | 24.481 G | 1.007 T | 26.167 G | 553.939 G | 38.757 G | 66.625 G | 112.341 G | 53.222 G |
FPS | 34.013 | 28.476 | 38.948 | 15.446 | 36.941 | 12.793 | 34.083 | 32.547 | 18.246 | 29.175 |
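FPS figures like those in the table above are typically obtained by timing repeated forward passes after a warm-up phase. A generic sketch follows; the callable passed in is a stand-in for the network's inference step, not the paper's benchmarking code.

```python
import time

def measure_fps(infer, n_warmup=5, n_runs=50):
    """Average frames per second of a zero-argument inference callable."""
    for _ in range(n_warmup):          # warm-up runs are excluded from timing
        infer()
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return n_runs / (time.perf_counter() - t0)
```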
Method | CAMO Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | COD10K Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓ | NC4K Sα↑ | Fωβ↑ | Fβ↑ | Eφ↑ | M↓
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Basic | 0.797 | 0.712 | 0.757 | 0.861 | 0.078 | 0.800 | 0.660 | 0.698 | 0.878 | 0.040 | 0.832 | 0.749 | 0.785 | 0.893 | 0.052 |
Basic+UFM | 0.804 | 0.728 | 0.774 | 0.865 | 0.076 | 0.814 | 0.685 | 0.725 | 0.883 | 0.036 | 0.841 | 0.765 | 0.803 | 0.894 | 0.049 |
Basic+NFFM | 0.806 | 0.738 | 0.781 | 0.873 | 0.072 | 0.817 | 0.700 | 0.734 | 0.892 | 0.035 | 0.843 | 0.777 | 0.810 | 0.903 | 0.046 |
Ours | 0.808 | 0.741 | 0.782 | 0.873 | 0.071 | 0.818 | 0.700 | 0.734 | 0.893 | 0.034 | 0.845 | 0.780 | 0.813 | 0.905 | 0.046 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, T.; Wang, J.; Wang, R. Camouflaged Object Detection with a Feature Lateral Connection Network. Electronics 2023, 12, 2570. https://doi.org/10.3390/electronics12122570