This paper proposes a Dual-constraint Coarse-to-Fine Network (DCNet) for camouflaged object detection (COD), the task of detecting objects that are highly similar to their surroundings. DCNet integrates region and boundary constraints to improve detection accuracy through three key modules: an Area-Boundary Decoder (ABD) that aggregates multi-level backbone features to generate initial region and boundary cues, an Area Search Module (ASM) that uses the region cues to adaptively search for coarse object regions, and an Area Refinement Module (ARM) that refines these coarse regions into fine ones using the boundary cues. Combined with a deep supervision strategy, this design fuses multi-level features and localizes camouflaged objects accurately in a coarse-to-fine manner.
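The coarse-to-fine pipeline described above can be sketched as follows. The module names (ABD, ASM, ARM) follow the paper, but every layer choice, channel count, and fusion step here is an illustrative placeholder, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AreaBoundaryDecoder(nn.Module):
    """Hypothetical ABD: aggregates multi-level backbone features
    into initial region (area) and boundary cue maps."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.fuse = nn.Conv2d(sum(channels), 64, 3, padding=1)
        self.area_head = nn.Conv2d(64, 1, 1)      # region cue
        self.boundary_head = nn.Conv2d(64, 1, 1)  # boundary cue

    def forward(self, feats):
        # Upsample every level to the finest resolution, then fuse.
        h, w = feats[0].shape[-2:]
        up = [F.interpolate(f, size=(h, w), mode="bilinear",
                            align_corners=False) for f in feats]
        x = torch.relu(self.fuse(torch.cat(up, dim=1)))
        return self.area_head(x), self.boundary_head(x)

class DCNetSketch(nn.Module):
    """Coarse-to-fine flow: ASM searches a coarse region from the
    area cue; ARM refines it under the boundary constraint."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.abd = AreaBoundaryDecoder(channels)
        self.asm = nn.Conv2d(2, 1, 3, padding=1)  # coarse search (placeholder)
        self.arm = nn.Conv2d(2, 1, 3, padding=1)  # fine refinement (placeholder)

    def forward(self, feats):
        area, boundary = self.abd(feats)
        coarse = self.asm(torch.cat([area, torch.sigmoid(area)], dim=1))
        fine = self.arm(torch.cat([coarse, boundary], dim=1))
        # Deep supervision would attach a loss to each of these
        # intermediate maps during training.
        return area, boundary, coarse, fine
```

Passing three feature maps of decreasing resolution (e.g. 32×32, 16×16, 8×8) yields four single-channel prediction maps at the finest resolution, mirroring the region/boundary/coarse/fine outputs the paper supervises.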
Extensive experiments on three benchmark COD datasets (CHAMELEON, COD10K, and CAMO) demonstrate that DCNet outperforms 12 state-of-the-art COD methods on standard evaluation metrics, including the structure measure $ S_{\alpha} $, the enhanced-alignment measure $ E_{\varphi} $, the weighted F-measure $ F_{\beta}^{\omega} $, and mean absolute error (MAE). Ablation studies show that both the ASM and ARM contribute positively to detection performance. DCNet also generalizes well to two COD-related tasks, industrial defect detection and polyp segmentation, where it achieves high accuracy and robustness. The study concludes that DCNet is a promising approach for COD, with room for future improvements in computational efficiency and performance.