research-article
Free access
Just Accepted

Binary Iterative Hard Thresholding Converges with Optimal Number of Measurements for 1-Bit Compressed Sensing

Online AM: 29 July 2024
Abstract

    Compressed sensing has been a very successful high-dimensional signal acquisition and recovery technique that relies on linear operations. However, in practice, measurements of signals must be quantized before they are stored or processed. 1-bit (or one-bit) compressed sensing is a heavily quantized version of compressed sensing, where each linear measurement of a signal is reduced to just one bit: the sign of the measurement. Once enough such measurements are collected, the recovery problem in 1-bit compressed sensing aims to find the original signal with as much accuracy as possible. The recovery problem is related to the traditional “halfspace-learning” problem in learning theory.
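
    A concrete way to read this measurement model: for a k-sparse, unit-norm signal x and a measurement matrix A, only the sign pattern y = sign(Ax) is retained. The following is a minimal NumPy sketch; the Gaussian choice of A and all problem sizes are illustrative assumptions, not specifics taken from this paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, k, m = 1000, 10, 400  # ambient dimension, sparsity, number of measurements (illustrative)

        # A k-sparse signal on the unit sphere; 1-bit measurements carry no scale information.
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        x /= np.linalg.norm(x)

        # Gaussian measurement matrix (an assumption); each measurement keeps only its sign bit.
        A = rng.standard_normal((m, n))
        y = np.sign(A @ x)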
    For recovery of sparse vectors, a popular reconstruction method from one-bit measurements is the binary iterative hard thresholding (BIHT) algorithm. The algorithm is a simple projected subgradient descent method and is known to converge well empirically, despite the nonconvexity of the problem. However, the convergence of BIHT had not been fully justified theoretically; for example, it was known that a number of measurements greater than \(\max \{ k^{10}, 24^{48}, k^{3.5}/\epsilon \} \), where \(k\) is the sparsity and \(\epsilon \) denotes the approximation error, is sufficient (Friedlander et al., 2021). In this paper we show that the BIHT estimates converge to the original signal with only \(\frac{k}{\epsilon } \) measurements (up to logarithmic factors). Note that this dependence on \(k\) and \(\epsilon \) is optimal for any recovery method in 1-bit compressed sensing. With this result, to the best of our knowledge, BIHT is the only practical and efficient (polynomial-time) algorithm that requires the optimal number of measurements in all parameters (both \(k\) and \(\epsilon \)). This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem under suitable structural conditions.
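
    The BIHT iteration itself is short: a subgradient step that pushes the current estimate toward sign consistency with y, followed by hard thresholding onto the k largest-magnitude coordinates. Below is a minimal sketch continuing from the model above; the step size and iteration count are heuristic assumptions, not the parameters analyzed in the paper.

        def biht(A, y, k, iters=100, tau=None):
            """Projected subgradient descent for one-bit recovery (BIHT sketch)."""
            m, n = A.shape
            tau = 1.0 / m if tau is None else tau  # heuristic step size (an assumption)
            x_hat = np.zeros(n)
            for _ in range(iters):
                # Subgradient step: correct measurements whose signs disagree with y.
                x_hat = x_hat + tau * (A.T @ (y - np.sign(A @ x_hat)))
                # Hard thresholding: project onto k-sparse vectors by keeping
                # the k largest-magnitude entries.
                keep = np.argpartition(np.abs(x_hat), -k)[-k:]
                pruned = np.zeros(n)
                pruned[keep] = x_hat[keep]
                x_hat = pruned
            nrm = np.linalg.norm(x_hat)
            return x_hat / nrm if nrm > 0 else x_hat  # rescale: magnitude is unrecoverable

        x_hat = biht(A, y, k)
        print("l2 recovery error:", np.linalg.norm(x_hat - x))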

    References

    [1]
    Jayadev Acharya, Arnab Bhattacharyya, and Pritish Kamath. 2017. Improved bounds for universal one-bit compressive sensing. In 2017 IEEE International Symposium on Information Theory, ISIT 2017, Aachen, Germany, June 25-30, 2017. IEEE, 2353–2357. https://doi.org/10.1109/ISIT.2017.8006950
    [2]
    Richard G. Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, and Mary Wootters. 2017. Exponential decay of reconstruction error from binary measurements of sparse signals. IEEE Transactions on Information Theory 63, 6 (2017), 3368–3385.
    [3]
    Petros Boufounos and Richard G. Baraniuk. 2008. 1-Bit compressive sensing. In 42nd Annual Conference on Information Sciences and Systems, CISS 2008, Princeton, NJ, USA, 19-21 March 2008. IEEE, 16–21. https://doi.org/10.1109/CISS.2008.4558487
    [4]
    Emmanuel J. Candès, Justin Romberg, and Terence Tao. 2006. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52, 2 (2006), 489–509.
    [5]
    Moses Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, John H. Reif (Ed.). ACM, 380–388. https://doi.org/10.1145/509907.509965
    [6]
    David L. Donoho. 2006. Compressed sensing. IEEE Transactions on Information Theory 52, 4 (2006), 1289–1306.
    [7]
    Larkin Flodin, Venkata Gandikota, and Arya Mazumdar. 2019. Superset Technique for Approximate Recovery in One-Bit Compressed Sensing. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (Eds.). Curran Associates, Inc., 10387–10396. https://proceedings.neurips.cc/paper/2019/hash/c900ced7451da79502d29aa37ebb7b60-Abstract.html
    [8]
    Simon Foucart. 2017. Flavors of compressive sensing. In Approximation Theory XV: San Antonio 2016. Springer, 61–104.
    [9]
    Michael P. Friedlander, Halyun Jeong, Yaniv Plan, and Özgür Yılmaz. 2021. NBIHT: An Efficient Algorithm for 1-bit Compressed Sensing with Optimal Error Decay Rate. IEEE Transactions on Information Theory 68, 2 (2021), 1157–1177.
    [10]
    Sivakant Gopi, Praneeth Netrapalli, Prateek Jain, and Aditya Nori. 2013. One-bit compressed sensing: Provable support and vector recovery. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013 (JMLR Workshop and Conference Proceedings, Vol. 28). JMLR.org, 154–162. http://proceedings.mlr.press/v28/gopi13.html
    [11]
    Jarvis D. Haupt and Richard G. Baraniuk. 2011. Robust support recovery using sparse compressive sensing matrices. In 45th Annual Conference on Information Sciences and Systems, CISS 2011, The Johns Hopkins University, Baltimore, MD, USA, 23-25 March 2011. IEEE, 1–6. https://doi.org/10.1109/CISS.2011.5766202
    [12]
    Laurent Jacques, Kévin Degraux, and Christophe De Vleeschouwer. 2013. Quantized iterative hard thresholding: Bridging 1-bit and high-resolution quantized compressed sensing. arXiv preprint arXiv:1305.1786 (2013). http://arxiv.org/abs/1305.1786
    [13]
    Laurent Jacques, Jason N. Laska, Petros T. Boufounos, and Richard G. Baraniuk. 2013. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Transactions on Information Theory 59, 4 (2013), 2082–2102.
    [14]
    Karin Knudson, Rayan Saab, and Rachel Ward. 2016. One-bit compressive sensing with norm estimation. IEEE Transactions on Information Theory 62, 5 (2016), 2748–2758.
    [15]
    Ping Li. 2016. One Scan 1-Bit Compressed Sensing. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, May 9-11, 2016 (JMLR Workshop and Conference Proceedings, Vol. 51), Arthur Gretton and Christian C. Robert (Eds.). JMLR.org, 1515–1523. http://jmlr.org/proceedings/papers/v51/li16g.html
    [16]
    Dekai Liu, Song Li, and Yi Shen. 2019. One-bit compressive sensing with projected subgradient method under sparsity constraints. IEEE Transactions on Information Theory 65, 10 (2019), 6650–6663.
    [17]
    Arya Mazumdar and Soumyabrata Pal. 2022. Support Recovery in Universal One-Bit Compressed Sensing. In 13th Innovations in Theoretical Computer Science Conference, ITCS 2022, January 31 - February 3, 2022, Berkeley, CA, USA (LIPIcs, Vol. 215), Mark Braverman (Ed.). Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 106:1–106:20.
    [18]
    Samet Oymak and Ben Recht. 2015. Near-optimal bounds for binary embeddings of arbitrary sets. arXiv preprint arXiv:1512.04433 (2015). http://arxiv.org/abs/1512.04433
    [19]
    Yaniv Plan and Roman Vershynin. 2013. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory 59, 1 (2013), 482–494.
    [20]
    Yaniv Plan and Roman Vershynin. 2013. One-Bit Compressed Sensing by Linear Programming. Communications on Pure and Applied Mathematics 66, 8 (2013), 1275–1297.
    [21]
    Yaniv Plan and Roman Vershynin. 2013. Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach. IEEE Transactions on Information Theory 59, 1 (2013), 482–494.
    [22]
    Yaniv Plan and Roman Vershynin. 2016. The generalized lasso with non-linear observations. IEEE Transactions on Information Theory 62, 3 (2016), 1528–1537.
    [23]
    Yaniv Plan, Roman Vershynin, and Elena Yudovina. 2017. High-dimensional estimation with geometric constraints. Information and Inference: A Journal of the IMA 6, 1 (2017), 1–40.
    [24]
    Rayan Saab, Rongrong Wang, and Özgür Yılmaz. 2018. Quantization of compressive samples with stable and robust recovery. Applied and Computational Harmonic Analysis 44, 1 (2018), 123–143.
    [25]
    Roman Vershynin. 2018. High-dimensional probability: An introduction with applications in data science. Vol. 47. Cambridge University Press.

      Published In

      Journal of the ACM Just Accepted
      ISSN: 0004-5411
      EISSN: 1557-735X
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Online AM: 29 July 2024
      Accepted: 15 July 2024
      Revised: 22 June 2024
      Received: 18 September 2022


      Author Tags

      1. compressed sensing
      2. quantization
      3. gradient descent
      4. sparsity

