DOI: 10.1145/3663976.3663997
Research article (Open access)

Dense Contrastive Learning and Depth Dynamic Aggregation for Reference-based Super-Resolution

Published: 27 June 2024

Abstract

Reference-based super-resolution (RefSR) has attracted growing interest due to its capacity to leverage external prior information. RefSR migrates texture details from a reference (Ref) image to the low-resolution (LR) image based on their pixel- or patch-level correspondences, so high-quality feature matching and feature extraction are crucial. However, most recent RefSR approaches focus primarily on advanced network architectures to improve performance, overlooking the potential of better matching and extraction. In this paper, we introduce DDSR, a dense contrastive learning and depth dynamic aggregation network for reference-based super-resolution, comprising a dense contrastive learning (DCL) network and a depth dynamic aggregation (DDA) module. To improve matching accuracy, the DCL network optimizes the feature-space distribution in a dense contrastive manner, yielding more precise correspondences. In addition, the DDA module extracts features from the Ref image at multiple depths, aggregating feature information more comprehensively. Experimental results demonstrate that DDSR outperforms state-of-the-art methods in both quantitative and qualitative evaluations.
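The abstract gives no implementation details, so the following is only an illustrative aside: dense contrastive objectives over paired feature maps are commonly realised as a per-position InfoNCE loss. The sketch below is a minimal PyTorch example written under that assumption; the function name dense_infonce_loss, the temperature value, and the premise that the LR and Ref feature maps are already spatially aligned (each position's counterpart is its positive, all other positions are negatives) are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dense_infonce_loss(lr_feat, ref_feat, temperature=0.07):
    """Per-position InfoNCE loss over two (B, C, H, W) feature maps.

    Assumes position i in lr_feat corresponds to position i in ref_feat
    (the positive pair); every other position serves as a negative.
    """
    b, c, h, w = lr_feat.shape
    # Flatten spatial dimensions and L2-normalise each feature vector.
    q = F.normalize(lr_feat.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C)
    k = F.normalize(ref_feat.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    # Cosine similarity between every query position and every key position.
    logits = torch.bmm(q, k.transpose(1, 2)) / temperature        # (B, HW, HW)
    # The matching index is the positive class for each query position.
    target = torch.arange(h * w, device=lr_feat.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, h * w), target.reshape(-1))

# Example usage with random feature maps standing in for encoder outputs.
loss = dense_infonce_loss(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
```

In a real RefSR pipeline the positive pairs would come from estimated LR-Ref correspondences rather than identical coordinates; the identity pairing above only keeps the sketch self-contained.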



      Information

      Published In

      CVIPPR '24: Proceedings of the 2024 2nd Asia Conference on Computer Vision, Image Processing and Pattern Recognition
      April 2024
      373 pages
      ISBN: 9798400716607
      DOI: 10.1145/3663976
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 27 June 2024


      Author Tags

      1. dense contrastive learning
      2. depth dynamic aggregation
      3. reference-based super-resolution

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • The Key R&D Program of the Department of Education of Guangdong Province

      Conference

      CVIPPR 2024

      Acceptance Rates

      Overall Acceptance Rate 14 of 38 submissions, 37%
