Multi-task consistency-preserving adversarial hashing for cross-modal retrieval
Owing to the advantages of low storage cost and high query efficiency, cross-modal hashing has received increasing attention recently. However, because they fail to bridge the inherent modality gap, most existing cross-modal hashing methods have limited ability to exploit the semantic consistency between data from different modalities, leading to unsatisfactory retrieval performance. To address this problem, we propose a novel deep hashing method named Multi-Task Consistency-Preserving Adversarial Hashing (CPAH), which fully exploits the semantic consistency and correlation between different modalities for efficient cross-modal retrieval. First, we design a consistency refined module (CR) that divides the representation of each modality into two independent parts, i.e., a modality-common and a modality-private representation. Then, a multi-task adversarial learning module (MA) is presented, which brings the modality-common representations of different modalities close to each other in both feature distribution and semantic consistency. Finally, compact and discriminative hash codes are generated from the modality-common representations. Comprehensive evaluations on three representative cross-modal benchmark datasets show that our method outperforms state-of-the-art cross-modal hashing methods.
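The pipeline described above — split each modality's representation into a common and a private part, then hash only the common part — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the projection matrices, dimensions, and `tanh` activations are assumptions, and in CPAH these components are learned jointly with the multi-task adversarial losses rather than fixed at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 512-d input features,
# 128-d common/private parts, 64-bit hash codes.
feat_dim, common_dim, private_dim, hash_bits = 512, 128, 128, 64

# Consistency refined (CR) idea: separate projections carve the input
# representation into a modality-common and a modality-private part.
W_common = rng.standard_normal((feat_dim, common_dim)) * 0.01
W_private = rng.standard_normal((feat_dim, private_dim)) * 0.01
W_hash = rng.standard_normal((common_dim, hash_bits)) * 0.01

def cr_split(x):
    """Divide a representation into modality-common and modality-private parts."""
    return np.tanh(x @ W_common), np.tanh(x @ W_private)

def hash_codes(common):
    """Binary codes in {-1, +1} are generated from the common part only."""
    return np.sign(common @ W_hash)

img_feat = rng.standard_normal((4, feat_dim))   # a batch of 4 image features
common, private = cr_split(img_feat)
codes = hash_codes(common)
```

In the full method, the adversarial module would additionally push the common representations of paired image and text inputs toward the same distribution, so that their codes agree across modalities.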