This paper confirms a surprising phenomenon first observed by Wright \textit{et al.} \cite{WYGSM_Face_2009_J, WM_denseError_2010_J} under a different setting: given $m$ highly corrupted measurements $y = A_{\Omega \bullet} x^{\star} + e^{\star}$, where $A_{\Omega \bullet}$ is a submatrix whose rows are selected uniformly at random from the rows of an orthogonal matrix $A$ and $e^{\star}$ is an unknown sparse error vector whose nonzero entries may be unbounded, we show that with high probability $\ell_1$-minimization can recover the sparse signal of interest $x^{\star}$ exactly from only $m = C \mu^2 k (\log n)^2$ measurements, where $k$ is the number of nonzero components of $x^{\star}$ and $\mu = n \max_{ij} A_{ij}^2$, even if nearly $100\%$ of the measurements are corrupted. We further guarantee that stable recovery is possible when measurements are polluted by both gross sparse and small dense errors: $y = A_{\Omega \bullet} x^{\star} + e^{\star} + \nu$, where $\nu$ is a small dense noise term with bounded energy. Numerous simulation results under various settings are presented to verify the validity of the theory and to illustrate the promising potential of the proposed framework.
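The recovery above can be cast as a linear program by minimizing $\|x\|_1 + \|e\|_1$ subject to $y = A_{\Omega \bullet} x + e$. A minimal numerical sketch, not the paper's code: the problem sizes, random seed, corruption level, and use of SciPy's LP solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 48, 3              # illustrative sizes: signal length, measurements, sparsity

# A_{Omega}: m rows drawn uniformly at random from an orthogonal matrix A
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q[rng.choice(n, size=m, replace=False)]

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
e_true = np.zeros(m)                              # gross sparse error with large entries
e_true[rng.choice(m, size=m // 6, replace=False)] = 10.0 * rng.standard_normal(m // 6)
y = A @ x_true + e_true

# min ||x||_1 + ||e||_1  s.t.  A x + e = y, as an LP over z = [x+, x-, e+, e-] >= 0
c = np.ones(2 * n + 2 * m)
A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
z = res.x
x_hat = z[:n] - z[n:2 * n]
e_hat = z[2 * n:2 * n + m] - z[2 * n + m:]
print(np.max(np.abs(x_hat - x_true)))             # near zero when recovery succeeds
```

Splitting each signed variable into its positive and negative parts is the standard way to express an $\ell_1$ objective as a linear program.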
Structurally random matrices (SRMs) were first proposed as fast and highly efficient measurement operators for large-scale compressed sensing applications. Motivated by the bridge between compressed sensing and the Johnson-Lindenstrauss lemma, this paper introduces a related application of SRMs: realizing a fast and highly efficient embedding. In particular, it shows that an SRM is also a promising dimensionality-reduction transform that preserves all pairwise distances of high-dimensional vectors within an arbitrarily small factor $\epsilon$, provided that the projection dimension is on the order of $O(\epsilon^{-2} \log^3 N)$, where $N$ denotes the number of $d$-dimensional vectors. In other words, an SRM can be viewed as a sub-optimal Johnson-Lindenstrauss embedding that, however, has very low computational complexity $O(d \log d)$ and a highly efficient implementation that uses only $O(d)$ random bits, making it a promising candidate for practical, large-scale applications where efficiency and speed of computation are critical.
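A typical SRM construction (random sign flipping, a fast orthonormal transform, then random subsampling) can be sketched as follows; the choice of the DCT as the fast transform and the specific dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
d, m = 1024, 128                       # ambient and projection dimensions (illustrative)

signs = rng.choice([-1.0, 1.0], size=d)           # random sign flips: only O(d) random bits
rows = rng.choice(d, size=m, replace=False)       # uniformly subsampled output coordinates

def srm_embed(x):
    """SRM embedding: subsample a fast orthonormal transform of the sign-flipped input.
    Cost is dominated by the O(d log d) fast transform."""
    return np.sqrt(d / m) * dct(signs * x, norm="ortho")[rows]

# Distances between embedded vectors stay close to the original distances
X = rng.standard_normal((5, d))
orig = np.linalg.norm(X[0] - X[1])
emb = np.linalg.norm(srm_embed(X[0]) - srm_embed(X[1]))
print(emb / orig)                                 # close to 1
```

Because the sign pattern and the subsampled coordinates are the only randomness, the transform is cheap to store and to apply, which is the point of the SRM construction.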
DISTRIBUTED COMPRESSED VIDEO SENSING. Thong T. Do, Yi Chen, Dzung T. Nguyen, Nam Nguyen, Lu Gan and Trac D. Tran, Department of Electrical and Computer Engineering, The Johns Hopkins ...
Given an order-$d$ tensor $\tensor A \in \R^{n \times n \times \ldots \times n}$, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of $\tensor A$, keeps all sufficiently large elements of $\tensor A$, and retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a powerful inequality that we derive. This inequality bounds the spectral norm of a random tensor and is of independent interest. As a result, we obtain novel bounds for the tensor sparsification problem. As an added bonus, we obtain improved bounds for the matrix ($d=2$) sparsification problem.
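A minimal NumPy sketch of the element-wise scheme; the thresholds, the sampling-probability scale, and the unbiased $1/p$ rescaling convention are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def sparsify(A, small, large, scale):
    """Element-wise tensor sparsification (illustrative thresholds):
    zero out entries below `small`, keep entries at or above `large`, and
    retain intermediate entries with probability p proportional to the
    squared magnitude, rescaling kept entries by 1/p so the result is an
    unbiased estimate of A."""
    rng = np.random.default_rng(2)
    out = np.zeros_like(A)
    mag = np.abs(A)
    big = mag >= large
    out[big] = A[big]                                # keep all large entries exactly
    mid = (mag > small) & (mag < large)
    p = np.clip(scale * A**2, 0.0, 1.0)              # sampling probabilities
    keep = mid & (rng.random(A.shape) < p)           # keep requires p > 0
    out[keep] = A[keep] / p[keep]                    # unbiased 1/p rescaling
    return out

A = np.random.default_rng(3).standard_normal((8, 8, 8))   # order-3 tensor
S = sparsify(A, small=0.1, large=1.5, scale=0.5)
print(np.count_nonzero(S), A.size)                        # S has fewer nonzeros than A
```

The $1/p$ rescaling is the standard device that makes the sparsified tensor an unbiased estimator, which is what the spectral-norm analysis then controls.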
The problem of affine rank minimization seeks the minimum-rank matrix that satisfies a set of linear equality constraints. Since affine rank minimization is NP-hard in general, a popular heuristic is to minimize the nuclear norm, the sum of the singular values of the matrix variable. A recent intriguing paper shows that if the linear transform that defines the set of equality constraints is nearly isometrically distributed and the number of constraints is at least $O(r(m+n) \log mn)$, where $r$ and $m \times n$ are the rank and size of the minimum-rank matrix, minimizing the nuclear norm yields exactly the minimum-rank solution. Unfortunately, solving the nuclear norm minimization problem with known nearly isometric transforms requires a large amount of computation and memory buffering. This paper presents a fast and efficient algorithm for nuclear norm minimization that employs structurally random matrices for its linear transform and a projected subgradient method that exploits the unique features of structurally random matrices to substantially speed up the optimization process. Theoretically, we show that nuclear norm minimization using structurally random linear constraints guarantees the minimum-rank solution if the number of linear constraints is at least $O(r(m+n) \log^3 mn)$. Extensive simulations verify that structurally random transforms retain optimal performance while their implementation complexity is just a fraction of that of completely random transforms, making them promising candidates for large-scale applications.
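The projected subgradient idea can be sketched as follows. This is a generic version with a Gaussian constraint matrix standing in for the structurally random transform; the sizes, step schedule, and iteration count are illustrative assumptions, and the paper's method additionally exploits SRM structure for speed.

```python
import numpy as np

rng = np.random.default_rng(4)
m_dim, n_dim, r = 10, 10, 1
p = 60                                    # number of linear equality constraints

M = rng.standard_normal((m_dim, r)) @ rng.standard_normal((r, n_dim))   # rank-r target
A = rng.standard_normal((p, m_dim * n_dim)) / np.sqrt(p)  # Gaussian stand-in for an SRM
b = A @ M.ravel()
A_pinv = np.linalg.pinv(A)

def proj(X):
    """Project X onto the affine feasible set {X : A vec(X) = b}."""
    v = X.ravel()
    return (v - A_pinv @ (A @ v - b)).reshape(m_dim, n_dim)

X = proj(np.zeros((m_dim, n_dim)))        # feasible starting point
for k in range(1, 2001):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    G = U @ Vt                            # a subgradient of the nuclear norm at X
    X = proj(X - (1.0 / k) * G)           # diminishing step, then re-project

print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative error; shrinks with iterations
```

Every iterate stays exactly feasible because the subgradient step is followed by a projection back onto the constraint set; the SRM-specific speedup in the paper comes from applying that projection with fast transforms instead of a dense pseudoinverse.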
This paper proposes a novel framework called distributed compressed video sensing (DISCOS), a solution for distributed video coding (DVC) based on the recently emerging compressed sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder, but recovers video frames jointly at the decoder by exploiting an interframe sparsity model and by performing sparse recovery with side information. In particular, along with global frame-based measurements, the DISCOS encoder also acquires local block-based measurements for block prediction at the decoder. Our interframe sparsity model mimics state-of-the-art video codecs: the sparsest representation of a block is a linear combination of a few temporally neighboring blocks from previously reconstructed frames or nearby key frames. This model enables a block to be optimally predicted from its local measurements by $\ell_1$-minimization. The DISCOS decoder also employs sparse recovery with side information to jointly reconstruct a frame from its global measurements and its local block-based prediction. Simulation results show that the proposed framework outperforms the baseline compressed-sensing scheme of intraframe coding and intraframe decoding by 8--10 dB. Finally, unlike conventional DVC schemes, the DISCOS framework can perform most encoding operations in the analog domain with very low complexity, making it a promising candidate for real-time, practical applications where analog-to-digital conversion is expensive, e.g., in Terahertz imaging.
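The block-prediction step can be illustrated in isolation: given local measurements $\Phi b$ of a block $b$ that is a sparse combination of candidate neighboring blocks (columns of $D$), solve $\min \|\alpha\|_1$ subject to $\Phi D \alpha = \Phi b$. A toy sketch; the block size, dictionary, measurement matrix, and SciPy LP solver are illustrative assumptions, not the DISCOS implementation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
B, nc, mloc = 64, 20, 16    # flattened 8x8 block, candidate blocks, local measurements

D = rng.standard_normal((B, nc))          # dictionary of candidate neighboring blocks
alpha = np.zeros(nc)
alpha[[3, 11]] = [0.7, 0.3]               # block = sparse combination of two neighbors
block = D @ alpha
Phi = rng.standard_normal((mloc, B))      # local block-based measurement operator
y = Phi @ block

# min ||a||_1  s.t.  Phi D a = y, as an LP with a split into a = a+ - a-
M = Phi @ D
res = linprog(np.ones(2 * nc), A_eq=np.hstack([M, -M]), b_eq=y,
              bounds=(0, None), method="highs")
a_hat = res.x[:nc] - res.x[nc:]
pred = D @ a_hat                          # block predicted from local measurements only
print(np.linalg.norm(pred - block) / np.linalg.norm(block))
```

The decoder never sees the block itself, only its few local measurements, yet the sparse interframe model lets it identify which neighboring blocks explain those measurements.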
Papers by Hà Trần