
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 62, NO. 8, AUGUST 2015

Compressed Sensing of Multichannel EEG Signals: The Simultaneous Cosparsity and Low-Rank Optimization

Yipeng Liu*, Member, IEEE, Maarten De Vos, Member, IEEE, and Sabine Van Huffel, Fellow, IEEE

Abstract:
Goal: This paper addresses two problems in compressed sensing of multichannel EEG signals: some EEG signals have no good sparse representation, and single-channel processing is not computationally efficient. Methods: An optimization model with the L0 norm and the Schatten-0 norm is proposed to enforce cosparsity and low-rank structure in the reconstructed multichannel EEG signals. Both convex relaxation and global consensus optimization with the alternating direction method of multipliers are used to compute the optimization model. Results: The performance of multichannel EEG signal reconstruction is improved in terms of both accuracy and computational complexity. Conclusion: The proposed method is a better candidate than previous sparse signal recovery methods for compressed sensing of EEG signals. Significance: The proposed method enables successful compressed sensing of EEG signals even when the signals have no good sparse representation. Using compressed sensing would greatly reduce the power consumption of wireless EEG systems.

Index Terms: Alternating direction method of multipliers (ADMM), compressed sensing (CS), cosparse signal recovery, low-rank matrix recovery, multichannel electroencephalogram (EEG).

Manuscript received October 29, 2014; revised February 5, 2015; accepted March 4, 2015. Date of publication March 11, 2015; date of current version July 15, 2015. This work was supported by FWO of the Flemish Government: G.0108.11 (Compressed Sensing); the Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization, 2012–2017); and ERC Advanced Grant BIOTENSORS (339804). Asterisk indicates corresponding author.

*Y. Liu was with the ESAT-STADIUS Division/iMinds Medical IT Department, Department of Electrical Engineering, University of Leuven, 3001 Leuven, Belgium. He is now with the School of Electronic Engineering/Center for Information in BioMedicine, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China (e-mail: yipengliu@uestc.edu.cn).

M. De Vos is with the Institute of Biomedical Engineering, Department of Engineering, University of Oxford, Oxford, U.K.

S. Van Huffel is with the ESAT-STADIUS Division/iMinds Medical IT Department, Department of Electrical Engineering, University of Leuven, 3001 Leuven, Belgium.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TBME.2015.2411672

I. INTRODUCTION

WIRELESS body sensor networks use spatially distributed sensors to acquire physiological signals and transmit them over wireless links to a central unit for signal processing [1]. The electroencephalogram (EEG) is one of the most frequently used biomedical signals; it has important applications in medical healthcare, brain-computer interfacing, and so on [2]. Continuous EEG monitoring usually requires a large amount of data to be sampled and transmitted, which in turn requires large batteries. The recording unit of a wireless portable EEG system is powered by batteries, and the physical size of the batteries sets the overall device size and operational lifetime. A physically too large device would not be portable, and excessive battery power consumption would make long-term wireless recording very hard [3]–[5].
Compressed sensing (CS) was proposed to deal with this challenge. Rather than first sampling the analog signal at the Nyquist rate and discarding most of the samples during compression, it directly acquires compressed digital measurements at a lower sampling rate and recovers the digital signals from the compressed measurements by nonlinear algorithms [6]. CS relies on the assumption that the signal vector $\mathbf{x}$ is compressed by a random matrix $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ (measurement or sampling matrix) in discrete form as [6], [7]

$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} \qquad (1)$$

where $\mathbf{y}$ is the random sub-Nyquist compressed measurement. Here $M \ll N$, which means that the signal is sampled at a greatly reduced rate. If $\mathbf{x}$ is sparse, its recovery only requires the compressed signal $\mathbf{y}$ and the sampling matrix $\boldsymbol{\Phi}$. If it is not sparse, the signal $\mathbf{x}$ should be represented (transformed) using a representation matrix (dictionary) $\boldsymbol{\Psi} \in \mathbb{R}^{N \times P}$ with $N \leq P$ and a sparse vector $\boldsymbol{\alpha} \in \mathbb{R}^{P \times 1}$ with most of its entries zero or almost zero, as

$$\mathbf{x} = \boldsymbol{\Psi}\boldsymbol{\alpha}. \qquad (2)$$

With the compressed measurement $\mathbf{y}$, sampling matrix $\boldsymbol{\Phi}$, and dictionary $\boldsymbol{\Psi}$, we can recover $\mathbf{x}$ by (2) after computing $\boldsymbol{\alpha}$ by

$$\begin{aligned} \underset{\boldsymbol{\alpha}}{\text{minimize}} \; & \|\boldsymbol{\alpha}\|_0 \\ \text{subject to} \; & \mathbf{y} = \boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{\alpha} \end{aligned} \qquad (3)$$

where $\|\cdot\|_0$ is the pseudo-$\ell_0$ norm that counts the number of nonzero entries, i.e., $\|\boldsymbol{\alpha}\|_0 = \#\{\alpha_n \neq 0,\; n = 1, 2, \ldots, P\}$. The signal $\mathbf{x}$ is called $K$-sparse when the number of nonzero entries is $K$. Most of the current methods for biomedical signal recovery from compressed samples are based on the solution of the $\ell_0$ programming problem (3), such as basis pursuit, orthogonal matching pursuit (OMP), iterative hard thresholding (IHT), etc. [4], [8], [9]. In addition, [5] found that some EEG signals are not sparse in any transformed domain, and proposed to exploit block sparsity by block sparse Bayesian learning (BSBL) to recover EEG signals.
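For concreteness, the following minimal Python sketch (not taken from the paper; the problem sizes, the identity dictionary, and the plain OMP loop are illustrative assumptions) simulates the synthesis model (1)–(3): a K-sparse vector is measured through a Gaussian sampling matrix and recovered greedily.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 90, 8                              # length, measurements, sparsity (toy values)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # Gaussian sampling matrix
Psi = np.eye(N)                                   # identity dictionary, for simplicity
A = Phi @ Psi                                     # effective sensing matrix

alpha = np.zeros(N)                               # K-sparse coefficient vector
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ alpha                                     # compressed measurements, as in (1)-(2)

def omp(A, y, K):
    """Plain orthogonal matching pursuit: K greedy atom selections."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

alpha_hat = omp(A, y, K)
print("relative error:", np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha))
```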
Contrary to the traditional sparse or block-sparse signal model, the cosparse signal model uses an analysis operator that multiplies the signal to produce a sparse vector [10]:

$$\boldsymbol{\alpha} = \boldsymbol{\Omega}\mathbf{x} \qquad (4)$$

where $\boldsymbol{\Omega} \in \mathbb{R}^{Q \times N}$ is the cosparse representation matrix (analysis dictionary) with $N \leq Q$, and $\boldsymbol{\alpha} \in \mathbb{R}^{Q \times 1}$ is the cosparse vector if most of its entries are nearly zero.


Fig. 1. EEG signals reconstructed by OMP, BSBL, GAP, and analysis L1 optimization with SSR = 0.35.

Several sufficient conditions theoretically guarantee the successful recovery of the cosparse signal from the compressed measurement, such as the restricted isometry property adapted to the dictionary, the restricted orthogonal projection property (ROPP), etc. [10]–[12]. When $N = P$, a cosparse signal model equivalent to the sparse signal model can be found by letting $\boldsymbol{\Omega} = \boldsymbol{\Psi}^{-1}$; but there is no such equivalence when $N < P$. The traditional sparse synthesis model puts its emphasis on the nonzeros of the sparse vector $\boldsymbol{\alpha}$, whereas the cosparse analysis model draws its strength from the zeros of the analysis vector $\boldsymbol{\alpha}$.
Cosparse signal recovery has some unique advantages in CS-based EEG systems. First, the sparse signal recovery (3) yields the best estimate of the sparse vector $\boldsymbol{\alpha}$, whereas the cosparse signal recovery (5) yields the best estimate of the EEG signal directly. Second, theoretically the sparse signal recovery (3) requires the columns of the representation matrix $\boldsymbol{\Psi}$ to be incoherent, but the cosparse formulation (5) allows coherence within the cosparse representation matrix $\boldsymbol{\Omega}$, which can result in super-resolution of the EEG signal estimate [11]. Third, the EEG signal can hardly be sparsely represented [5]. However, data analysis shows that EEG signals are approximately piecewise linear [13], as shown in Fig. 1, which implies that the signal fits the cosparse signal model (4) well with the second-order difference matrix as the cosparse analysis dictionary. Therefore, cosparse signal recovery should be more appropriate for CS of EEG signals.
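A minimal sketch of this observation, using a synthetic piecewise-linear signal rather than real EEG (the signal and its breakpoints are assumptions for illustration): applying the second-order difference operator of (4) leaves nonzero entries only at the rows that straddle the breakpoints, so the analysis coefficient vector is highly cosparse.

```python
import numpy as np

N = 256
# Second-order difference operator, used here as the cosparse analysis dictionary.
Omega = np.zeros((N - 2, N))
for q in range(N - 2):
    Omega[q, q:q + 3] = [1.0, -2.0, 1.0]

# Synthetic piecewise-linear signal with breakpoints at samples 80 and 170.
t = np.arange(N)
x = np.interp(t, [0, 80, 170, N - 1], [0.0, 1.6, 0.7, 1.2])

alpha = Omega @ x                                  # analysis coefficients of (4)
# Only the rows straddling the two breakpoints are (numerically) nonzero.
print("entries of Omega @ x above 1e-9:", int(np.sum(np.abs(alpha) > 1e-9)))
```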
Since nearly all types of EEG systems have multiple channels, it is natural to expect that jointly processing the multichannel EEG signals is beneficial. [14] proposed to jointly process multichannel EEG signals by allowing slightly different phases of the dictionaries in different channels. Another classical approach assumes that the multiple channels share a similar support of the sparse vector; this generalizes the single measurement vector problem straightforwardly to a multiple measurement vector problem [15], [16]. [17] proposed to incorporate preprocessing and entropy coding in the sampling to reduce the redundancy in correlated multichannel signals, but the added preprocessing and encoder would increase the power consumption of EEG sampling [4]; moreover, the procedure can hardly be realized for analog signals, which implies that the analog EEG signals would still have to be sampled at the Nyquist rate in the first place. To compress the multichannel EEG signals from the complete digital measurement, [18] used a wavelet-based volumetric coding method, while [19] exploited the low-rank structure in matrix/tensor form and achieved better performance.
Since most of the multichannel EEG signals are more or less correlated with each other, the low-rank structure-based compression method motivates the use of a low-rank data structure in CS of multichannel EEG signals as well. The multichannel EEG signals are put columnwise into a matrix. Our EEG data analysis finds that the newly formed EEG data matrix has only a few nonzero singular values.
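A toy illustration of this low-rank effect, with channels generated as noisy linear mixtures of a few latent oscillatory sources (an assumption standing in for real EEG data): the singular values of the resulting channel matrix decay rapidly.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R, n_sources = 256, 23, 4                          # samples, channels, latent sources
t = np.arange(N) / 256.0
# Channels as noisy linear mixtures of a few oscillatory sources.
sources = np.stack([np.sin(2 * np.pi * f * t) for f in (3, 7, 11, 19)], axis=1)
mixing = rng.standard_normal((n_sources, R))
X = sources @ mixing + 0.01 * rng.standard_normal((N, R))   # N x R data matrix

s = np.linalg.svd(X, compute_uv=False)
print("largest five singular values:", np.round(s[:5], 3))
print("fraction of energy in top 4:", round(float(np.sum(s[:4]**2) / np.sum(s**2)), 4))
```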
In this paper, the second-order difference matrix is chosen as the cosparse analysis dictionary, which enforces the approximately piecewise-linear structure. Exploiting the low-rank structure in addition, we can further enhance the signal recovery performance by simultaneously using the cosparsity of the single-channel EEG signals and the low-rank property of the multichannel EEG signals in the framework of multistructure CS. An optimization model based on the $\ell_0$ norm and the Schatten-0 norm is used to encourage cosparsity and low-rank structure in the reconstructed signals. Two methods are proposed to solve the multicriteria optimization problem: one relaxes it to a convex optimization; the other transforms it into a global consensus optimization problem, which the alternating direction method of multipliers (ADMM) solves efficiently. The convergence and computational complexity are briefly analyzed. In the numerical experiments, a group of real-life EEG recordings is used to test the performance of both single-channel and multichannel EEG signal recovery methods. Numerical results show that the cosparse signal recovery method and the simultaneous cosparsity and low-rank (SCLR) optimization achieve the best performance in terms of mean squared error (MSE) and mean cross-correlation (MCC) in single-channel and multichannel EEG signal recovery, respectively.
The rest of the paper is organized as follows. Section II presents an optimization model that exploits both cosparsity and low-rank data structures to recover the EEG signals. In Section III, two methods are given to solve the optimization problem, i.e., convex relaxation and ADMM. In Section IV, numerical experiments demonstrate the proposed methods' performance improvement. Section V draws the conclusion.
II. SCLR OPTIMIZATION MODEL

The optimization model for cosparse signal recovery can be formulated as [10]

$$\begin{aligned} \underset{\mathbf{x}}{\text{minimize}} \; & \|\boldsymbol{\Omega}\mathbf{x}\|_0 \\ \text{subject to} \; & \mathbf{y} = \boldsymbol{\Phi}\mathbf{x}. \end{aligned} \qquad (5)$$

Here, we call (5) the analysis L0 optimization. When the EEG system records $R$ channels simultaneously, the extension of analysis L0 optimization to multichannel data is

$$\begin{aligned} \underset{\mathbf{X}}{\text{minimize}} \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X})\|_0 \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X} \end{aligned} \qquad (6)$$

where $\mathbf{X} \in \mathbb{R}^{N \times R}$, and $\mathrm{vec}(\mathbf{X})$ puts all the columns of $\mathbf{X}$ into one column vector sequentially. A series of solvers are summarized in [10].

Reconstructing the EEG matrix from the compressed measurements by exploiting the low-rank structure can be formulated as

$$\begin{aligned} \underset{\mathbf{X}}{\text{minimize}} \; & \|\mathbf{X}\|_{\text{Schatten-0}} \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X} \end{aligned} \qquad (7)$$

where $\|\mathbf{X}\|_{\text{Schatten-0}}$ is the Schatten-0 norm that counts the number of nonzero singular values of $\mathbf{X}$ [20]. A variety of methods to solve it can be found in [21].

Motivated by the fact that many EEG signals have both cosparsity and low-rank structure, we propose to simultaneously exploit these two data structures in multichannel EEG signal reconstruction from the compressed measurement. Both $\ell_0$ norm and Schatten-0 norm based constraints are used in the optimization model. Combining them with the linear data fitting constraint, we can formulate the SCLR optimization model as follows:

$$\begin{aligned} \underset{\mathbf{X}}{\text{minimize}} \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X})\|_0 + \|\mathbf{X}\|_{\text{Schatten-0}} \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X}. \end{aligned} \qquad (8)$$

III. SOLUTIONS

A. Convex Relaxation

To solve the SCLR optimization (8), one classical way relaxes the nonconvex $\ell_0$ norm and Schatten-0 norm into the convex $\ell_1$ norm and Schatten-1 norm, respectively, where the $\ell_1$ norm sums all the absolute values of the entries, i.e., $\|\mathbf{x}\|_1 = \sum_{n=1}^{N} |x_n|$. The Schatten-1 norm is also called the nuclear norm, and sums all the singular values of the data matrix, i.e., $\|\mathbf{X}\|_{\text{Schatten-1}} = \|\mathbf{X}\|_* = \sum_{n=1}^{\min(N,R)} \sigma_n$. The newly formed convex simultaneous cosparsity and low-rank optimization model can be formulated as

$$\begin{aligned} \underset{\mathbf{X}}{\text{minimize}} \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X})\|_1 + \|\mathbf{X}\|_* \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X}. \end{aligned} \qquad (9)$$

Similarly to the reformulation of $\text{minimize}_{\mathbf{x}} \|\mathbf{x}\|_1$ as $\text{minimize}_{\mathbf{x}, \mathbf{e} \geq \mathbf{0}} \; \mathbf{1}^T\mathbf{e}$ subject to $-\mathbf{e} \leq \mathbf{x} \leq \mathbf{e}$, which follows from the definition of the $\ell_1$ norm, we can reformulate the $\ell_1$ norm minimization in (9) into its equivalent linear program [22]. By introducing new nonnegative variables $\mathbf{e}$ and $f$, (9) can be expressed as

$$\begin{aligned} \underset{\mathbf{X}, \mathbf{e} \geq \mathbf{0}, f \geq 0}{\text{minimize}} \; & \mathbf{1}^T\mathbf{e} + f \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X} \\ & -\mathbf{e} \leq \mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}) \leq \mathbf{e} \\ & \|\mathbf{X}\|_* \leq f \end{aligned} \qquad (10)$$

where $\mathbf{1} \in \mathbb{R}^{QR \times 1}$ is a column vector with all entries equal to 1.

The nuclear norm constraint can be replaced by its linear matrix inequality (LMI) equivalent; such constraints can be expressed via LMIs using Schur complements [23]. The obtained optimization model is

$$\begin{aligned} \underset{\mathbf{X}, \mathbf{e} \geq \mathbf{0}, f \geq 0, \mathbf{A}, \mathbf{B}}{\text{minimize}} \; & \mathbf{1}^T\mathbf{e} + f \\ \text{subject to} \; & \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X} \\ & -\mathbf{e} \leq \mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}) \leq \mathbf{e} \\ & \begin{bmatrix} \mathbf{A} & \mathbf{X} \\ \mathbf{X}^T & \mathbf{B} \end{bmatrix} \succeq \mathbf{0} \\ & \mathrm{Tr}(\mathbf{A}) + \mathrm{Tr}(\mathbf{B}) \leq 2f \end{aligned} \qquad (11)$$

where $\mathbf{A} = \mathbf{A}^T$ and $\mathbf{B} = \mathbf{B}^T$ are new variables. Equation (11) is a semidefinite program (SDP), which can be solved by an interior-point method [22], [23]. The software CVX can compute the solution in this way [24].

B. ADMM

Besides the classical SDP, another method, called ADMM, can be used to solve the SCLR optimization [25]. With individual constraints on copies of the same variable, (9) can be rewritten as a global consensus optimization with local variables $\mathbf{X}_i$, $i = 1, 2$ and a common global variable $\mathbf{X}$ as

$$\begin{aligned} \underset{\mathbf{X}_1, \mathbf{X}_2, \mathbf{X}}{\text{minimize}} \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}_1)\|_1 + \|\mathbf{X}_2\|_* \\ \text{subject to} \; & \mathbf{X} = \mathbf{X}_1; \; \mathbf{X} = \mathbf{X}_2; \; \mathbf{Y} = \boldsymbol{\Phi}\mathbf{X}. \end{aligned} \qquad (12)$$

Here, the new constraints are that all the local variables should be equal. It is equivalent to

$$\begin{aligned} \underset{\mathbf{X}_1, \mathbf{X}_2, \mathbf{X}}{\text{minimize}} \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}_1)\|_1 + \|\mathbf{X}_2\|_* \\ \text{subject to} \; & \widetilde{\mathbf{Y}} = \widetilde{\boldsymbol{\Phi}}\mathbf{X}_1; \; \widetilde{\mathbf{Y}} = \widetilde{\boldsymbol{\Phi}}\mathbf{X}_2 \end{aligned} \qquad (13)$$

where

$$\widetilde{\mathbf{Y}} = \begin{bmatrix} \mathbf{Y} \\ \mathbf{X} \end{bmatrix} \qquad (14)$$

$$\widetilde{\boldsymbol{\Phi}} = \begin{bmatrix} \boldsymbol{\Phi} \\ \mathbf{I} \end{bmatrix}. \qquad (15)$$

The corresponding augmented Lagrangian of (13) is

$$\begin{aligned} L_{\rho}(\mathbf{X}_1, \mathbf{X}_2; \mathbf{Z}_1, \mathbf{Z}_2) = \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}_1)\|_1 + \|\mathbf{X}_2\|_* \\ & + \mathrm{vec}(\mathbf{Z}_1)^T \mathrm{vec}\big(\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_1\big) + \mathrm{vec}(\mathbf{Z}_2)^T \mathrm{vec}\big(\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_2\big) \\ & + \frac{\rho}{2}\Big(\big\|\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_1\big\|_F^2 + \big\|\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_2\big\|_F^2\Big) \end{aligned} \qquad (16)$$

where $\rho > 0$, and $\mathbf{Z}_1$ and $\mathbf{Z}_2$ are dual variables. The resulting ADMM algorithm in the scaled dual form is the following:

$$\mathbf{X}_1^{t+1} := \arg\min_{\mathbf{X}_1} \; \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}_1)\|_1 + \frac{\rho}{2}\big\|\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_1 + \mathbf{U}_1^{t}\big\|_F^2 \qquad (17)$$

$$\mathbf{X}_2^{t+1} := \arg\min_{\mathbf{X}_2} \; \|\mathbf{X}_2\|_* + \frac{\rho}{2}\big\|\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_2 + \mathbf{U}_2^{t}\big\|_F^2 \qquad (18)$$

$$\mathbf{X}^{t+1} = \frac{1}{2}\big(\mathbf{X}_1^{t+1} + \mathbf{X}_2^{t+1}\big) \qquad (19)$$

$$\mathbf{U}_i^{t+1} = \mathbf{U}_i^{t} + \mathbf{X}^{t+1} - \mathbf{X}_i^{t+1}, \quad i = 1, 2 \qquad (20)$$

where $\mathbf{U}_1 = (1/\rho)\mathbf{Z}_1$ and $\mathbf{U}_2 = (1/\rho)\mathbf{Z}_2$ are scaled dual variables. In the proposed ADMM algorithm for SCLR optimization, the steps optimize over groups of variables separately, i.e., updating the primal variables $\mathbf{X}_1$ and $\mathbf{X}_2$, and updating the scaled dual variables $\mathbf{U}_1$ and $\mathbf{U}_2$. In this iterative algorithm, the variables are updated in an alternating fashion.

For both (17) and (18), there are many computationally efficient algorithms [10], [21]. For example, analysis L1 optimization or greedy analysis pursuit (GAP) can be used to solve (17); to solve (18), an SDP method or singular value thresholding can be used. The solutions of (19) and (20) are straightforward. The ADMM for SCLR optimization is summarized in Algorithm 1.

Many convergence results exist for ADMM in the literature [25]. Generally, convergence to the optimum can be guaranteed when the epigraph of $g_i$,

$$\mathrm{epi}\, g_i = \{(\mathbf{X}, \tau) \,|\, g_i(\mathbf{X}) \leq \tau\}, \quad i = 1, 2, \qquad (21)$$

is a closed nonempty convex set, where $g_1(\mathbf{X}) = \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X})\|_1$ and $g_2(\mathbf{X}) = \|\mathbf{X}\|_*$, and the unaugmented Lagrangian

$$\begin{aligned} L_{\rho=0}(\mathbf{X}_1, \mathbf{X}_2; \mathbf{Z}_1, \mathbf{Z}_2) = \; & \|\mathrm{vec}(\boldsymbol{\Omega}\mathbf{X}_1)\|_1 + \|\mathbf{X}_2\|_* \\ & + \mathrm{vec}(\mathbf{Z}_1)^T \mathrm{vec}\big(\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_1\big) + \mathrm{vec}(\mathbf{Z}_2)^T \mathrm{vec}\big(\widetilde{\mathbf{Y}} - \widetilde{\boldsymbol{\Phi}}\mathbf{X}_2\big) \end{aligned} \qquad (22)$$

has a saddle point. The proof can be found in [26].

The ADMM decomposes the optimization model with multiple constraints into several subproblems with fewer constraints, for which fast algorithms may exist. Besides, it allows multiple steps within one iteration to be processed in parallel. With a multicore processor, the computational time can be decreased. Previous experience shows that a few iterations will often produce acceptable results of practical use.
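The paper computes the convex relaxation with CVX in MATLAB; the sketch below is a rough Python analogue using CVXPY, with toy problem sizes, synthetic piecewise-linear multichannel data, and an assumed equal weighting of the analysis-l1 and nuclear-norm terms in (9). It is meant only to show the shape of the convex SCLR problem, not to reproduce the paper's solver or results.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, R, M = 64, 8, 32                              # toy sizes, much smaller than real EEG

# Second-order difference analysis operator (same structure as in Section I).
Omega = np.zeros((N - 2, N))
for q in range(N - 2):
    Omega[q, q:q + 3] = [1.0, -2.0, 1.0]

# Correlated, roughly piecewise-linear multichannel test data (stand-in for EEG).
base = np.interp(np.arange(N), [0, 20, 45, N - 1], [0.0, 1.0, -0.5, 0.3])
X_true = np.outer(base, rng.standard_normal(R)) + 0.01 * rng.standard_normal((N, R))

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sampling matrix
Y = Phi @ X_true                                 # compressed measurements

X = cp.Variable((N, R))
analysis_l1 = cp.sum(cp.abs(Omega @ X))          # ||vec(Omega X)||_1
objective = cp.Minimize(analysis_l1 + cp.normNuc(X))
problem = cp.Problem(objective, [Phi @ X == Y])
problem.solve()

print("status:", problem.status)
print("reconstruction MSE:", float(np.mean((X.value - X_true) ** 2)))
```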

IV. NUMERICAL EXPERIMENTS

To demonstrate the performance of the candidate methods for EEG signal recovery from compressed measurements, we perform two groups of numerical experiments. The details about the data material and subjects are given in Section IV-A. In Section IV-B, we test the performance of two cosparse signal recovery methods, i.e., analysis L1 optimization and GAP, for single-channel EEG signals in different situations. Some other algorithms are tested for comparison, such as BSBL, which is reported to be the best of all the current candidates for EEG signal recovery from compressed measurements [5], and OMP, which is a proper representative of the classical sparse signal recovery algorithms [4]. In Section IV-C, a group of multichannel EEG signals is recovered by the proposed algorithms for SCLR optimization, as well as by simultaneous orthogonal matching pursuit (SOMP) [27], BSBL [5], [16], and simultaneous greedy analysis pursuit (SGAP) [28].

In all experiments, as argued by our analysis in Section I, the second-order difference matrix is chosen as the analysis dictionary for cosparse EEG signal recovery. A Gaussian matrix is chosen as the sampling matrix for CS of EEG signals. The sparse dictionaries for OMP and SOMP are Daubechies wavelets [4].

To measure the compression degree, the subsampling ratio (SSR) is defined as

$$\mathrm{SSR} = \frac{M}{N} \times 100\%. \qquad (23)$$

To quantify the difference between the values implied by the estimator and the true values of the quantity being estimated, two evaluation functions are often used in EEG signal processing. One is the MSE, which measures the average of the squares of the errors, where the error is the amount by which the value implied by the estimator differs from the quantity to be estimated. Here, we can formulate it as

$$\mathrm{MSE} = \frac{\sum_{l=1}^{L} \big\|\mathbf{X} - \widehat{\mathbf{X}}_l\big\|_F^2}{LNR} \qquad (24)$$

where $\mathbf{X}$ is the true EEG data with $R$ channels, each of length $N$, $\widehat{\mathbf{X}}_l$ is its estimate in the $l$th experiment, and $L$ is the number of experiments. Both $\mathbf{X}$ and $\widehat{\mathbf{X}}_l$ are normalized by their Frobenius norms. When $R = 1$, the matrix $\mathbf{X}$ degenerates into a vector $\mathbf{x}$; in that case, the MSE can be used to evaluate single-channel EEG signal reconstruction. The MSE has other equivalent variants, such as the mean L2 error [29] and the percentage root-mean-square difference [4].

The other evaluation function is the MCC. It is equivalent to the Structural SIMilarity index, which measures the similarity of two waveforms [4], [5], [30]. It can be formulated as

$$\mathrm{MCC} = \sum_{l=1}^{L} \frac{\mathrm{vec}(\mathbf{X})^T \mathrm{vec}\big(\widehat{\mathbf{X}}_l\big)}{L \,\|\mathbf{X}\|_F \,\big\|\widehat{\mathbf{X}}_l\big\|_F}. \qquad (25)$$
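A small helper implementing (24) and (25) as defined above; the true data and the list of estimates in the usage example are random placeholders, not EEG.

```python
import numpy as np

def mse_and_mcc(X_true, estimates):
    """Return (MSE, MCC) over a list of reconstructions, following (24)-(25).

    The true data matrix and every estimate are normalised by their
    Frobenius norms before the comparison, as described above.
    """
    N, R = X_true.shape
    L = len(estimates)
    Xn = X_true / np.linalg.norm(X_true)
    mse = mcc = 0.0
    for X_hat in estimates:
        Xh = X_hat / np.linalg.norm(X_hat)
        mse += np.linalg.norm(Xn - Xh, "fro") ** 2
        mcc += float(Xn.ravel() @ Xh.ravel()) / (np.linalg.norm(Xn) * np.linalg.norm(Xh))
    return mse / (L * N * R), mcc / L

# Toy usage with placeholder data (256 samples, 23 channels, 5 noisy estimates).
rng = np.random.default_rng(3)
X_true = rng.standard_normal((256, 23))
estimates = [X_true + 0.05 * rng.standard_normal(X_true.shape) for _ in range(5)]
print(mse_and_mcc(X_true, estimates))
```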


A. Data Material and Subjects

The EEG data used is the CHB-MIT scalp EEG database, which is available online in the Physiobank database: http://www.physionet.org/cgi-bin/atm/ATM [31], [32]. Collected at the Children's Hospital Boston, these EEG recordings are from pediatric subjects with intractable seizures. Subjects were monitored without antiseizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. All the recordings were collected from 22 subjects (five males, ages 3–22; and 17 females, ages 1.5–19). All used datasets consist of 23-channel EEG recordings, sampled at 256 samples per second with 16-bit resolution. The international 10–20 system of EEG electrode positions and nomenclature was used for these recordings. More details about the EEG database can be found in [31]. In our experiments, the EEG recording chb01_31.edf was selected to demonstrate the recovery algorithms' performance.

In Section IV-B, $L = 500$ segments of EEG data are used, i.e., $\mathbf{x}_l \in \mathbb{R}^{N \times 1}$, $l = 1, 2, \ldots, L$. They are taken from all the $R = 23$ channels sequentially. The length of each segment of EEG data $\mathbf{x}$ is $N = 256$. Each segment is normalized by its $\ell_2$ norm.

In Section IV-C, $L = 50$ segments of 23-channel EEG data are used, i.e., $\mathbf{X}_l \in \mathbb{R}^{N \times R}$, $l = 1, 2, \ldots, L$. In each segment of the EEG data matrix $\mathbf{X}$, the number of sampling points is $N \times R = 256 \times 23$. Each segment is normalized by its Frobenius norm.
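A sketch of this segmentation and normalization step, under the assumption that the multichannel recording is already in memory; the array below is a random placeholder, and reading the actual .edf file from PhysioNet is not shown.

```python
import numpy as np

fs, n_channels = 256, 23
rng = np.random.default_rng(4)
# Placeholder for the 23-channel recording; loading the real chb01_31.edf
# file from the CHB-MIT database is outside the scope of this sketch.
recording = rng.standard_normal((n_channels, 60 * fs))       # 60 s of data

def make_segments(recording, n_samples=256):
    """Cut the recording into (n_samples x R) matrices with unit Frobenius norm."""
    n_ch, total = recording.shape
    segments = []
    for start in range(0, total - n_samples + 1, n_samples):
        X = recording[:, start:start + n_samples].T           # columns = channels
        segments.append(X / np.linalg.norm(X))
        # For the single-channel experiments, each column would instead be
        # normalised by its own l2 norm.
    return segments

segments = make_segments(recording)
print(len(segments), "segments, each of shape", segments[0].shape)
```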
B. Single-Channel EEG Signal Recovery

To show how the proposed cosparse signal recovery methods work, we take a segment of single-channel EEG signal and reconstruct it from the compressed measurement with SSR = 0.35. The reconstructed and real signals are shown in Fig. 1. We can see that the signals reconstructed by GAP and by the analysis L1 optimization method fit the real signal better than those from the classical OMP and BSBL methods.

Fig. 2(a) and (b) gives the MSE and MCC values of analysis L1 optimization, GAP, OMP, and BSBL for different SSRs. We can see that analysis L1 optimization, GAP, and BSBL have similar accuracy, and they outperform OMP. Analysis L1 optimization is slightly more accurate than GAP, and GAP is slightly more accurate than BSBL. Fig. 2(c) shows that the greedy algorithms GAP and OMP are much faster than BSBL and analysis L1 optimization, and GAP is even slightly faster than OMP. Therefore, if we only care about accuracy, the analysis L1 optimization is the best choice; if both accuracy and computational complexity matter, GAP should be the better choice.

Fig. 2. Differences in average performance evaluation of single-channel EEG signal recovery from compressed measurements with different SSRs, using 500 different single-channel EEG segments: (a) MSE versus SSR; (b) MCC versus SSR; (c) CPU time versus SSR.

C. Multichannel EEG Signal Recovery

In these experiments, most of the parameters are selected as in Section IV-B. Two algorithms for SCLR optimization are used, i.e., the interior-point method for SCLR optimization and the ADMM for SCLR optimization, with experience-based parameter choices of a maximum of $T_{\max} = 5$ iterations and the remaining parameters set to 1 and 0.05. For comparison with the proposed methods, three other popular multichannel sparse/cosparse signal recovery methods are used as well, i.e., BSBL, SOMP, and SGAP.
Fig. 3(a), (b), and (c) displays the MSE, MCC, and CPU time values of the interior-point method for SCLR optimization, the ADMM for SCLR optimization, BSBL, SOMP, and SGAP for different values of SSR. We can see that the interior-point method for SCLR optimization and the ADMM for SCLR optimization have similar accuracy, and that they outperform the other methods in accuracy. Comparing the speed of these two solutions for SCLR optimization, the ADMM for SCLR optimization is faster. In Fig. 3(c), we can see that the greedy algorithms SOMP and SGAP are much faster than the rest, but their accuracy is much worse and not acceptable. Therefore, we recommend the ADMM for SCLR optimization as a better candidate for multichannel EEG signal recovery than the other methods.

Fig. 3. Differences in average performance evaluation of multichannel EEG signal recovery from compressed measurements with different SSRs, using 50 different multichannel EEG segments: (a) MSE versus SSR; (b) MCC versus SSR; (c) CPU time versus SSR.

V. CONCLUSION

With the second-order difference matrix as the cosparse analysis dictionary, the EEG signal's cosparsity is exploited for single-channel EEG signal recovery from compressed measurements. To further enhance the performance, cosparsity and low-rank structure are jointly used in the multichannel EEG signal recovery. In the proposed optimization model, the $\ell_0$ norm constraint is used to encourage cosparsity, while the Schatten-0 norm constraint is used for the low-rank structure. To solve the optimization model, two methods are used. One approximates it by relaxing the $\ell_0$ and Schatten-0 norms into the $\ell_1$ norm and the nuclear norm, respectively, which leads to a convex optimization. The other is ADMM, which divides the multicriteria optimization into several connected single-criterion optimizations in the form of a global consensus optimization; each single-criterion optimization can be solved by a range of existing efficient methods. In the numerical experiments, the cosparsity of EEG signals for CS is validated by the single-channel EEG results, and the multichannel EEG results show that the SCLR optimization outperforms all the previous methods.

REFERENCES

[1] C. Bachmann et al., "Low-power wireless sensor nodes for ubiquitous long-term biomedical signal monitoring," IEEE Commun. Mag., vol. 50, no. 1, pp. 20–27, Jan. 2012.
[2] M. De Vos et al., "Towards a truly mobile auditory brain-computer interface: Exploring the P300 to take away," Int. J. Psychophysiol., vol. 91, no. 1, pp. 46–53, 2014.
[3] S. Debener et al., "How about taking a low-cost, small, and wireless EEG for a walk?" Psychophysiology, vol. 49, no. 11, pp. 1617–1621, 2012.
[4] A. M. Abdulghani et al., "Compressive sensing scalp EEG signals: Implementations and practical performance," Med. Biological Eng. Comput., vol. 50, no. 11, pp. 1137–1145, 2012.
[5] Z. Zhang et al., "Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware," IEEE Trans. Biomed. Eng., vol. 60, no. 1, pp. 221–224, Jan. 2013.
[6] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge, U.K.: Cambridge Univ. Press, 2012.
[7] S. R. Becker, "Practical compressed sensing: Modern data acquisition and signal processing," Ph.D. dissertation, Dept. Appl. Comput. Math., California Inst. Technol., CA, USA, 2011.
[8] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proc. IEEE, vol. 98, no. 6, pp. 948–958, Jun. 2010.
[9] H. Mamaghanian et al., "Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes," IEEE Trans. Biomed. Eng., vol. 58, no. 9, pp. 2456–2466, Sep. 2011.
[10] S. Nam et al., "The cosparse analysis model and algorithms," Appl. Comput. Harmonic Anal., vol. 34, no. 1, pp. 30–56, 2013.
[11] E. J. Candes et al., "Compressed sensing with coherent and redundant dictionaries," Appl. Comput. Harmonic Anal., vol. 31, no. 1, pp. 59–73, 2011.
[12] T. Peleg and M. Elad, "Performance guarantees of the thresholding algorithm for the cosparse analysis model," IEEE Trans. Inf. Theory, vol. 59, no. 3, pp. 1832–1845, Mar. 2013.
[13] C. Yan et al., "An approach of time series piecewise linear representation based on local maximum, minimum and extremum," J. Inform. Comput. Sci., vol. 10, no. 9, pp. 2747–2756, 2013.
[14] P. J. Durka et al., "Multichannel matching pursuit and EEG inverse solutions," J. Neurosci. Methods, vol. 148, no. 1, pp. 49–59, 2005.


[15] S. F. Cotter et al., "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Signal Process., vol. 53, no. 7, pp. 2477–2488, Jul. 2005.
[16] Z. Zhang et al., "Spatiotemporal sparse Bayesian learning with applications to compressed sensing of multichannel physiological signals," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 6, pp. 1186–1197, Nov. 2014.
[17] S. Fauvel and R. K. Ward, "An energy efficient compressed sensing framework for the compression of electroencephalogram signals," Sensors, vol. 14, no. 1, pp. 1474–1496, 2014.
[18] K. Srinivasan et al., "Multichannel EEG compression: Wavelet-based image and volumetric coding approach," IEEE J. Biomed. Health Informat., vol. 17, no. 1, pp. 113–120, Jan. 2013.
[19] J. Dauwels et al., "Near-lossless multichannel EEG compression based on matrix and tensor decompositions," IEEE J. Biomed. Health Informat., vol. 17, no. 3, pp. 708–714, May 2013.
[20] A. Rohde and A. Tsybakov, "Estimation of high-dimensional low-rank matrices," Ann. Statist., vol. 39, no. 2, pp. 887–930, 2011.
[21] B. Recht et al., "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Rev., vol. 52, no. 3, pp. 471–501, 2010.
[22] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2009.
[23] L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Rev., vol. 38, no. 1, pp. 49–95, 1996.
[24] M. Grant et al., "CVX: Matlab software for disciplined convex programming, version 2.0 beta," in Recent Advances in Learning and Control. Heidelberg, Germany: Springer, 2012, pp. 95–110.
[25] S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learning, vol. 3, no. 1, pp. 1–122, 2011.
[26] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," Math. Program., vol. 55, no. 1–3, pp. 293–318, 1992.
[27] J. A. Tropp et al., "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," Signal Process., vol. 86, no. 3, pp. 572–588, 2006.
[28] Y. Avonds et al., "Simultaneous greedy analysis pursuit for compressive sensing of multi-channel ECG signals," in Proc. IEEE Annu. Int. Conf. Eng. Med. Biol. Soc., 2014, pp. 6385–6388.
[29] Y. Liu et al., "Multi-structural signal recovery for biomedical compressive sensing," IEEE Trans. Biomed. Eng., vol. 60, no. 10, pp. 2794–2805, Oct. 2013.
[30] Z. Wang and A. C. Bovik, "Mean squared error: Love it or leave it? A new look at signal fidelity measures," IEEE Signal Process. Mag., vol. 26, no. 1, pp. 98–117, Jan. 2009.
[31] A. H. Shoeb, "Application of machine learning to epileptic seizure onset detection and treatment," Ph.D. dissertation, Dept. Electr. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, 2009.
[32] A. L. Goldberger et al., "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. e215–e220, 2000.


Yipeng Liu (S'09–M'13) was born in Chengdu, China, in 1983. He received the B.Sc. degree in biomedical engineering and the Ph.D. degree in information and communication engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, in June 2006 and June 2011, respectively.

From June 2011 to November 2011, he was a Research Engineer at Huawei Technologies. From November 2011 to November 2014, he was a Postdoctoral Research Fellow at the University of Leuven, Leuven, Belgium. Since September 2014, he has been an Associate Professor with the School of Electronic Engineering, UESTC. His research interests include compressed sensing theory and its applications.

Maarten De Vos (M'09) received the M.Sc. degree in electrotechnical-mechanical engineering and the Ph.D. degree in engineering from KU Leuven, Leuven, Belgium, in 2005 and 2009, respectively.

From 2009 to 2014, he was a Postdoctoral Researcher with KU Leuven and the University of Oldenburg. From 2013 to 2014, he was a Junior Professor at the University of Oldenburg. Since 2014, he has been an Associate Professor with the Institute of Biomedical Engineering, Department of Engineering, University of Oxford, Oxford, U.K. His current research interests include linear and multilinear algebra, decomposition techniques for biomedical signals, and the development of human-machine interfacing solutions.

Sabine Van Huffel (M'96–A'96–SM'99–F'09) received the M.Sc. degree in computer science engineering, the M.Sc. degree in biomedical engineering, and the Ph.D. degree in electrical engineering from KU Leuven, Leuven, Belgium, in June 1981, July 1985, and June 1987, respectively.

She is currently a Full Professor at the Department of Electrical Engineering, KU Leuven. Her research interests include numerical (multi)linear algebra and software, system identification, parameter estimation, and biomedical data processing.
