DOI: 10.1145/3503161.3547887
A Numerical DEs Perspective on Unfolded Linearized ADMM Networks for Inverse Problems

Published: 10 October 2022

Abstract

Many research works have shown that continuous-time Differential Equations (DEs) allow for a better understanding of the traditional Alternating Direction Method of Multipliers (ADMM). Many unfolded algorithms directly inherit the traditional iterations to build deep networks; although they achieve faster convergence and superior practical performance, an appropriate explanation of the unfolded network architectures is lacking. We therefore explore the connection between the existing unfolded Linearized ADMM (LADMM) and numerical DEs, and propose efficient unfolded network design schemes. First, we present an unfolded Euler LADMM scheme as a by-product, which originates from the Euler method for solving first-order DEs. Then, inspired by the trapezoid method in numerical DEs, we design a new, more effective network scheme, called the unfolded Trapezoid LADMM scheme. Moreover, we show that the Trapezoid LADMM scheme has higher precision than the Euler LADMM scheme. To the best of our knowledge, this is the first work to explore the connection between unfolded ADMMs and numerical DEs with theoretical guarantees. Finally, we instantiate our Euler LADMM and Trapezoid LADMM schemes as ELADMM and TLADMM with proximal operators, and as ELADMM-Net and TLADMM-Net with convolutional neural networks. Extensive experiments show that our algorithms are competitive with state-of-the-art methods.
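The abstract's central numerical-analysis claim, that the trapezoid method has higher precision per step than the Euler method, can be illustrated on a toy first-order DE. The sketch below is a generic comparison of the two classical integration schemes (not the paper's unfolded networks); the function names and test equation are our own illustrative choices:

```python
import math

def euler_step_solve(f, y0, t1, n):
    # Forward Euler: y_{k+1} = y_k + h * f(y_k)  -- first-order accurate
    h = t1 / n
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

def trapezoid_step_solve(f, y0, t1, n):
    # Explicit trapezoid (Heun's method): average the slopes at both
    # ends of the step -- second-order accurate
    # y_{k+1} = y_k + (h/2) * (f(y_k) + f(y_k + h * f(y_k)))
    h = t1 / n
    y = y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + h * k1)
        y = y + 0.5 * h * (k1 + k2)
    return y

# Test DE: dy/dt = -y, y(0) = 1, with exact solution y(t) = exp(-t)
f = lambda y: -y
exact = math.exp(-1.0)
err_euler = abs(euler_step_solve(f, 1.0, 1.0, 50) - exact)
err_trap = abs(trapezoid_step_solve(f, 1.0, 1.0, 50) - exact)
print(err_euler, err_trap)  # trapezoid error is far smaller at the same step count
```

With 50 steps, the Euler error is on the order of 1e-3 while the trapezoid error is on the order of 1e-5, reflecting the first- versus second-order local truncation error that motivates the paper's Trapezoid LADMM scheme over the Euler LADMM scheme.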


Published In

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022
7537 pages
ISBN:9781450392037
DOI:10.1145/3503161
Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. differential equations
  2. inverse problems
  3. network architecture design
  4. unfolded LADMMs

Qualifiers

  • Research-article

Conference

MM '22

Acceptance Rates

Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%
