
Linear Algebra and its Applications 420 (2007) 540–552

www.elsevier.com/locate/laa

On Kruskal’s uniqueness condition for the Candecomp/Parafac decomposition
Alwin Stegeman a,∗ ,1 , Nicholas D. Sidiropoulos b
a Heijmans Institute of Psychological Research, University of Groningen, Grote Kruisstraat 2/1,
9712 TS Groningen, The Netherlands
b Department of Electronic and Computer Engineering, Technical University of Crete, Kounoupidiana Campus,
Chania – Crete 731 00, Greece

Received 21 March 2005; accepted 4 August 2006


Available online 13 October 2006
Submitted by V. Mehrmann

Abstract

Let X be a real-valued three-way array. The Candecomp/Parafac (CP) decomposition is written as
X = Y(1) + · · · + Y(R) + E, where the Y(r) are rank-1 arrays and E is a residual term. Each rank-1 array is defined
by the outer product of three vectors a(r), b(r) and c(r), i.e. y^(r)_ijk = a^(r)_i b^(r)_j c^(r)_k. These vectors make up
the R columns of the component matrices A, B and C. If 2R + 2 is less than or equal to the sum of the
k-ranks of A, B and C, then the fitted part of the decomposition is unique up to a change in the order of the
rank-1 arrays and rescaling/counterscaling of each triplet of vectors (a(r) , b(r) , c(r) ) forming a rank-1 array.
This classical result was shown by Kruskal. His proof is, however, rather inaccessible and does not seem
intuitive. In order to contribute to a better understanding of CP uniqueness, this paper provides an accessible
and intuitive proof of Kruskal’s condition. The proof is both self-contained and compact and can easily be
adapted for the complex-valued CP decomposition.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Candecomp; Parafac; Three-way arrays; Uniqueness; Kruskal-rank condition

∗ Corresponding author. Tel.: +31 50 363 6193; fax: +31 50 363 6304.
E-mail addresses: a.w.stegeman@rug.nl (A. Stegeman), nikos@telecom.tuc.gr (N.D. Sidiropoulos).
1 The author is supported by the Dutch Organisation for Scientific Research (NWO), VENI grant 451-04-102.

0024-3795/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.laa.2006.08.010

1. Introduction

The decomposition of three-way (or three-mode) arrays into rank-one three-way outer products
was proposed independently by Carroll and Chang [1] (who called it Candecomp) and Harsh-
man [2] (who called it Parafac). Candecomp/Parafac (CP) decomposes a three-way array X of
order I × J × K into a fixed number of R components Y(r) , r = 1, . . . , R, and a residual term
E, i.e.
X = Σ_{r=1}^R Y(r) + E.    (1.1)
In the sequel, we will denote three-way arrays by underlined boldface capitals (Z), matrices by boldface capitals (Z), vectors by boldface lowercase letters (z) and scalars by lowercase italics (z).
We assume all arrays, matrices, vectors and scalars to be real-valued.
CP has its origins in psychology, where it was conceived primarily as an exploratory data
analysis tool. Later, CP attracted considerable interest in chemistry (see Smilde et al. [10] and
references therein), and, more recently, in signal processing and telecommunications research
(see Sidiropoulos [9] and references therein). For example, CP is appropriate for the analysis of
fluorescence excitation–emission measurements. CP is also useful in the context of certain parameter and signal estimation problems in wireless communications, including emitter localization, carrier frequency offset estimation, and the separation of spread-spectrum communication signals.
This renewed interest has helped to sustain advances in both theory and applications of CP.
Each component Y(r) is the outer product of three vectors a(r), b(r) and c(r), i.e. y^(r)_ijk = a^(r)_i b^(r)_j c^(r)_k. This implies that each of the R components has three-way rank 1. Analogous to
matrix algebra, the three-way rank of X is defined (see Kruskal [5]) as the smallest number of
rank-1 arrays whose sum equals X. Since there are no restrictions on the vectors a(r) , b(r) and c(r) ,
the array X has three-way rank R if and only if R is the smallest number of components for which
a CP decomposition (1.1) exists with perfect fit, i.e. with an all-zero residual term E. For a fixed
value of R, the CP decomposition (1.1) is found by minimizing the sum of squares of E. It may be
noted that the CP decomposition is a special case of the three-mode principal component model of
Tucker [15], which reduces to CP when the Tucker3 core array is R × R × R and superdiagonal.
A CP solution is usually expressed in terms of the component matrices A (I × R), B (J × R)
and C (K × R), which have as columns the vectors a(r) , b(r) and c(r) , respectively. Let Xk (I × J )
and Ek (I × J ) denote the kth slices of X and E, respectively. Now (1.1) can be written as
Xk = ACk BT + Ek , k = 1, . . . , K, (1.2)
where Ck is the diagonal matrix with the kth row of C as its diagonal.
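As a concrete illustration of (1.1)–(1.3), the following minimal numpy sketch (with arbitrary small dimensions, not taken from the paper) builds X as a sum of R rank-1 outer products and checks that its frontal slices satisfy Xk = ACkBT with E = O.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 3, 2                      # illustrative sizes only

# component matrices A (I x R), B (J x R), C (K x R)
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# X = sum of R rank-1 arrays a(r) o b(r) o c(r), i.e. a perfect-fit CP decomposition (E = O)
X = np.zeros((I, J, K))
for r in range(R):
    X += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])

# slice representation (1.2)-(1.3): Xk = A Ck B^T with Ck = diag(k-th row of C)
for k in range(K):
    assert np.allclose(X[:, :, k], A @ np.diag(C[k, :]) @ B.T)
```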
The uniqueness of a CP solution is usually studied for given residuals E. It can be seen that the
fitted part of a CP decomposition, i.e. a full decomposition of X − E, can only be unique up to
rescaling/counterscaling and jointly permuting columns of A, B and C. For instance, suppose the
rth columns of A, B and C are multiplied by scalars λa , λb and λc , respectively, and there holds
λa λb λc = 1. Then the rth CP component Y(r) remains unchanged. Furthermore, a joint permuta-
tion of the columns of A, B and C amounts to a new order of the R components. Hence, the residuals
E will be the same for the solution (A, B, C) as for the solution (AΠΛa, BΠΛb, CΠΛc), where Π is a permutation matrix and Λa, Λb and Λc are diagonal matrices such that ΛaΛbΛc = IR. Contrary
to the 2-dimensional situation, these are usually the only transformational indeterminacies in CP.
When, for given residuals E, the CP solution (A, B, C) is unique up to these indeterminacies, the
solution is called essentially unique.

To avoid unnecessarily complicated notation, we assume (without loss of generality) the residual array E to be all-zero. Hence, we consider the essential uniqueness of the full CP decomposition
Xk = ACk BT , k = 1, . . . , K. (1.3)
We introduce the following notation. Let ◦ denote the outer product, i.e. for vectors x and y we
define x ◦ y = xyT . For three vectors x, y and z, the product x ◦ y ◦ z is a three-way array with
elements xi yj zk .
We use ⊗ to denote the Kronecker product. The Khatri–Rao product, which is the column-wise Kronecker product, is denoted by ⊙. It is defined as follows. Suppose matrices X and Y both have n columns. Then the product X ⊙ Y also has n columns and its jth column is equal to xj ⊗ yj, where xj and yj denote the jth columns of X and Y, respectively. Notice that X ⊙ Y contains the columns 1, n + 2, 2n + 3, . . . , (n − 1)n + n of the Kronecker product X ⊗ Y.
For a matrix X, let Vec(X) denote the column vector which is obtained by placing the columns of X below each other, such that the first column of X is on top. For a vector x, let diag(x) denote the diagonal matrix with its diagonal equal to x. Notice that diag(xT) = diag(x).
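The following short numpy sketch (arbitrary sizes) illustrates these conventions: it forms the Khatri–Rao product column by column, checks which columns of the Kronecker product it selects, and applies the column-stacking Vec operator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.normal(size=(4, n))
Y = rng.normal(size=(5, n))

# Khatri-Rao product: column-wise Kronecker product
KR = np.column_stack([np.kron(X[:, j], Y[:, j]) for j in range(n)])

# it consists of the columns 1, n + 2, 2n + 3, ... of the Kronecker product X (x) Y
full = np.kron(X, Y)
cols = [j * n + j for j in range(n)]        # 0-based version of 1, n + 2, 2n + 3, ...
assert np.allclose(KR, full[:, cols])

# Vec places the columns of a matrix below each other, first column on top
M = rng.normal(size=(2, 3))
assert np.allclose(M.flatten(order='F'), np.concatenate([M[:, j] for j in range(3)]))
```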
Although we consider the real-valued CP decomposition, all presented results also hold in
the complex-valued case. To translate the proofs to the complex-valued case, the ordinary
transpose T should be changed into the Hermitian or conjugated transpose H (except in those
cases where the transpose is due to the formulation of the CP decomposition, such as in (1.3)).

2. Conditions for essential uniqueness

The first sufficient conditions for essential uniqueness are due to Jennrich (in Harshman [2])
and Harshman [3]. For a discussion, see Ten Berge and Sidiropoulos [14]. The most general
sufficient condition for essential uniqueness is due to Kruskal [5]. Kruskal’s condition involves a
variant of the concept of matrix rank that he introduced, which has been named k-rank after him.
The k-rank of a matrix is defined as follows.

Definition 2.1. The k-rank of a matrix is the largest value of x such that every subset of x columns
of the matrix is linearly independent.

We denote the k-rank and rank of a matrix A by kA and rA, respectively. There holds kA ≤ rA. The matrix A has an all-zero column if and only if kA = 0 and A contains proportional columns if kA = 1.
Kruskal [5] showed that a CP solution (A, B, C) is essentially unique if
2R + 2 ≤ kA + kB + kC.    (2.1)
Notice that (2.1) cannot hold when R = 1. However, in that case uniqueness has been proven by
Harshman [3]. Ten Berge and Sidiropoulos [14] have shown that Kruskal’s sufficient condition
(2.1) is also necessary for R = 2 and R = 3, but not for R > 3. The same authors conjectured that
Kruskal’s condition might be necessary and sufficient for R > 3, provided that k-ranks and ranks
of the component matrices coincide, i.e. kA = rA , kB = rB and kC = rC . However, Stegeman
and Ten Berge [11] refuted this conjecture. Kruskal’s condition was generalized to n-way arrays
(n > 3) by Sidiropoulos and Bro [8].
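For small R, the k-rank of Definition 2.1 and condition (2.1) can be checked numerically by brute force. The sketch below is one straightforward (and, for large R, expensive) way to do this; the helper names k_rank and kruskal_condition are ad hoc, and the random example only shows a case in which (2.1) holds.

```python
import numpy as np
from itertools import combinations

def k_rank(M, tol=1e-10):
    """Largest k such that every subset of k columns of M is linearly independent (Definition 2.1)."""
    R = M.shape[1]
    k = 0
    for size in range(1, R + 1):
        if all(np.linalg.matrix_rank(M[:, list(s)], tol=tol) == size
               for s in combinations(range(R), size)):
            k = size
        else:
            break
    return k

def kruskal_condition(A, B, C):
    """Sufficient uniqueness condition (2.1): 2R + 2 <= kA + kB + kC."""
    R = A.shape[1]
    return 2 * R + 2 <= k_rank(A) + k_rank(B) + k_rank(C)

rng = np.random.default_rng(2)
I, J, K, R = 6, 6, 6, 4
A, B, C = (rng.normal(size=(d, R)) for d in (I, J, K))
# generic random matrices here have k-rank R, so 2R + 2 = 10 <= 12 and (2.1) holds
print(k_rank(A), k_rank(B), k_rank(C), kruskal_condition(A, B, C))
```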
Alternative sufficient conditions for essential uniqueness have been obtained for the case where
one of the component matrices has full column rank, e.g. if rC = R. See Jiang and Sidiropoulos

[4] and De Lathauwer [6], who have independently proposed the same uniqueness condition (this
was noticed in Stegeman, Ten Berge and De Lathauwer [12]). The latter author also provides a
corresponding algorithm. Moreover, for random component matrices A, B and C, [6] derives a
condition for “uniqueness with probability 1” in the form of a dimensionality constraint. A link
between the deterministic approach of Jiang and Sidiropoulos [4] and the random setting of De
Lathauwer [6] is provided in [12].
Besides sufficient conditions for essential uniqueness, also necessary conditions can be formu-
lated. For instance, the CP solution is not essentially unique if one of the component matrices has
an all-zero column. Indeed, suppose the rth column of A is all-zero, then the rth component Y(r)
is all-zero and the rth columns of B and C can be arbitrary vectors. Also, the CP solution is not
essentially unique if the k-rank of one of the component matrices equals 1. This can be seen as
follows. Suppose kA = 1. Then A has two proportional columns, i.e. a(s) = λa(t) for some s ≠ t.
We have

Y(s) + Y(t) = λa(t) ◦ b(s) ◦ c(s) + a(t) ◦ b(t) ◦ c(t)
            = a(t) ◦ [λb(s) | b(t)][c(s) | c(t)]T
            = a(t) ◦ [λb(s) | b(t)]U([c(s) | c(t)]U^{-T})T,    (2.2)

for any nonsingular 2 × 2 matrix U. Eq. (2.2) describes mixtures of the sth and tth columns of
B and C for which the fitted part of the CP model remains unchanged. If U is not the product of
a diagonal and a permutation matrix, then (2.2) indicates that the CP solution is not essentially
unique. Since the three-way array X may be “viewed from different sides”, the roles of A, B and
C are exchangeable in the sequel. Hence, it follows that

kA ≥ 2,   kB ≥ 2,   kC ≥ 2,    (2.3)

is a necessary condition for essential uniqueness of (A, B, C).
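The construction in (2.2) can be made concrete with a small numpy sketch (illustrative sizes; the helper cp_array is ad hoc): giving A two proportional columns, so that kA = 1, and mixing the corresponding columns of B and C with a nonsingular U that is not a scaled permutation yields an essentially different solution with the same fitted array.

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K = 4, 5, 3
lam = 2.0

def cp_array(A, B, C):
    """Sum of rank-1 outer products built from the columns of A, B and C."""
    return sum(np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
               for r in range(A.shape[1]))

# A has two proportional columns (kA = 1): a(s) = lam * a(t)
a = rng.normal(size=I)
A = np.column_stack([lam * a, a])
B = rng.normal(size=(J, 2))
C = rng.normal(size=(K, 2))
X = cp_array(A, B, C)

# mix the corresponding columns of B and C with a nonsingular U as in (2.2)
U = np.array([[1.0, 2.0], [0.5, 3.0]])       # not a scaled permutation matrix
A2 = np.column_stack([a, a])                 # both columns equal a(t)
B2 = np.column_stack([lam * B[:, 0], B[:, 1]]) @ U
C2 = np.column_stack([C[:, 0], C[:, 1]]) @ np.linalg.inv(U).T

# same fitted part, essentially different solution
assert np.allclose(X, cp_array(A2, B2, C2))
```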


Another necessary condition for essential uniqueness is due to Liu and Sidiropoulos [7]. Let
Xk be as in (1.3), i.e. Xk = ACk BT . Let X be the matrix having Vec(XkT ) as its kth column,
k = 1, . . . , K. Then X can be written as

X = (A ⊙ B)CT.    (2.4)

Suppose that (A ⊙ B) is not of full column rank. Then there exists a nonzero vector n such that (A ⊙ B)n = 0. Adding n to any column of CT preserves (2.4), but produces a different solution for C. Moreover, we have (A ⊙ B)CT = (A ⊙ B)(C + znT)T for any vector z, and we can choose z such that one column of (C + znT) vanishes. It follows that X has a full CP decomposition with R − 1 components if (A ⊙ B) does not have full column rank.
Hence, full column rank of (A ⊙ B) is necessary for essential uniqueness. By exchanging the roles of A, B and C, we obtain that

(A ⊙ B) and (C ⊙ A) and (B ⊙ C) have full column rank,    (2.5)

is a necessary condition for essential uniqueness of (A, B, C). Note that (A ⊙ B) = Π(B ⊙ A), where Π is a row-permutation matrix. Hence, in each of the three Khatri–Rao products in (2.5) the two matrices may be swapped.
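The matricized form (2.4) and the row-permutation relation between (A ⊙ B) and (B ⊙ A) are easy to verify numerically; in the sketch below the sizes and the explicit index formula for the row permutation are merely illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
I, J, K, R = 4, 5, 3, 2
A, B, C = (rng.normal(size=(d, R)) for d in (I, J, K))

def khatri_rao(P, Q):
    """Column-wise Kronecker product."""
    return np.column_stack([np.kron(P[:, r], Q[:, r]) for r in range(P.shape[1])])

# slices Xk = A diag(k-th row of C) B^T, stacked as columns Vec(Xk^T)
slices = [A @ np.diag(C[k, :]) @ B.T for k in range(K)]
Xmat = np.column_stack([Xk.T.flatten(order='F') for Xk in slices])

# matricized CP model (2.4): X = (A kr B) C^T
assert np.allclose(Xmat, khatri_rao(A, B) @ C.T)

# (A kr B) = Pi (B kr A) for a fixed row permutation Pi; one explicit choice of that permutation:
perm = np.array([j * I + i for i in range(I) for j in range(J)])
assert np.allclose(khatri_rao(A, B), khatri_rao(B, A)[perm, :])
```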
In the following, we denote the column space of a matrix A by span(A) and its orthogonal complement by null(A), i.e. null(A) = {x : xT A = 0T}.

3. Proof of Kruskal’s uniqueness condition

The proof of Kruskal’s condition (2.1) in Kruskal [5] is long and rather inaccessible and does not
seem very intuitive. Therefore, Kruskal’s condition has been partially reproved by Sidiropoulos
and Bro [8] and Jiang and Sidiropoulos [4]. However, apart from [5] no other complete proof
of Kruskal’s condition exists in the literature. In the remaining part of this paper, a complete,
accessible and intuitive proof of Kruskal’s condition will be provided. Kruskal’s uniqueness
condition is formalized in the following theorem.

Theorem 3.1. Let (A, B, C) be a CP solution which decomposes the three-way array X into R rank-1 arrays. Suppose Kruskal’s condition 2R + 2 ≤ kA + kB + kC holds and we have an alternative CP solution (Ā, B̄, C̄) also decomposing X into R rank-1 arrays. Then there holds Ā = AΠΛa, B̄ = BΠΛb and C̄ = CΠΛc, where Π is a unique permutation matrix and Λa, Λb and Λc are unique diagonal matrices such that ΛaΛbΛc = IR.

The cornerstone in both Kruskal’s and our proof of Theorem 3.1 is Kruskal’s Permutation
Lemma, see Section 5 of Kruskal [5]. Below, we present this lemma. Its proof is postponed until
Section 4 of this paper. Let ω(z) denote the number of nonzero elements of the vector z.

Lemma 3.2 (Permutation Lemma). Let C and C̄ be two K × R matrices and let kC ≥ 2. Suppose the following condition holds: for any vector x such that ω(C̄T x) ≤ R − rC̄ + 1, we have ω(CT x) ≤ ω(C̄T x). Then there exists a unique permutation matrix Π and a unique nonsingular diagonal matrix Λ such that C̄ = CΠΛ.

The condition of the Permutation Lemma states that if a vector x is orthogonal to h ≥ rC̄ − 1 columns of C̄, then x is orthogonal to at least h columns of C. Assuming the Permutation Lemma is true, this condition has to be equivalent to: if a vector x is orthogonal to h columns of C̄, then x is orthogonal to at least h columns of C, 1 ≤ h ≤ R. As an alternative to Kruskal’s proof of the Permutation Lemma, Jiang and Sidiropoulos [4] prove the equivalence of these two conditions and show that the latter condition implies C̄ = CΠΛ. In Section 4, we reconsider Kruskal’s original proof of the Permutation Lemma and explain the link with the approach of Jiang and Sidiropoulos [4].
In our proof of Theorem 3.1, we need the following result of Sidiropoulos and Bro [8] on the
k-rank of a Khatri–Rao product. The proof below is a shorter version of the proof in [8] and is
due to Ten Berge [13].

Lemma 3.3. Consider matrices A (I × R) and B (J × R):

(i) If kA = 0 or kB = 0, then kA⊙B = 0.
(ii) If kA ≥ 1 and kB ≥ 1, then kA⊙B ≥ min(kA + kB − 1, R).

Proof. First, we prove (i). If kA = 0, then A has an all-zero column. This implies that also (A ⊙ B) has an all-zero column and, hence, that kA⊙B = 0. The same is true if kB = 0. This completes the proof of (i).
Next, we prove (ii). Suppose kA ≥ 1 and kB ≥ 1. Premultiplying a matrix by a nonsingular matrix affects neither the rank nor the k-rank. We have (SA) ⊙ (TB) = (S ⊗ T)(A ⊙ B), where

(S ⊗ T) is nonsingular if both S and T are nonsingular. Hence, premultiplying A and B by nonsingular matrices also does not affect the rank and k-rank of (A ⊙ B). Since A has no all-zero
columns, a linear combination of its rows exists such that all its elements are nonzero. Hence,
since both A and B have no all-zero columns, we can find nonsingular matrices S and T such
that both SA and TB have at least one row with all elements nonzero. Therefore, we may assume
without loss of generality that both A and B have at least one row with all elements nonzero.
Since (A ⊙ B) has R columns, we have either kA⊙B = R or kA⊙B < R. Suppose kA⊙B < R. Let n be the smallest number of linearly dependent columns of (A ⊙ B), i.e. kA⊙B = n − 1. We collect n linearly dependent columns of (A ⊙ B) in the matrix (An ⊙ Bn), where An and Bn contain the corresponding columns of A and B. Let d be a nonzero vector such that (An ⊙ Bn)d = 0. Let D = diag(d), which is nonsingular by the definition of n. Note that Vec(Bn D AnT) = (An ⊙ Bn)d = 0, which implies Bn D AnT = O. Sylvester’s inequality gives
0 = r(Bn D AnT) ≥ rAn + r(Bn D) − n = rAn + rBn − n,    (3.1)
where the last equality is due to the fact that D is nonsingular. Writing n = kA⊙B + 1, Eq. (3.1) yields
kA⊙B ≥ rAn + rBn − 1.    (3.2)
Clearly, rAn ≥ kAn. Because B has at least one row with all elements nonzero, it follows from (An ⊙ Bn)d = 0 that the columns of An are linearly dependent. Hence, we also have kAn ≥ kA. Equivalently, rBn ≥ kBn ≥ kB. Hence, if kA⊙B < R, we have kA⊙B ≥ kA + kB − 1. This completes the proof. □
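A quick numerical check of statement (ii), reusing brute-force helpers like those sketched in Section 2 (the helper names and sizes are again ad hoc):

```python
import numpy as np
from itertools import combinations

def k_rank(M, tol=1e-10):
    # brute-force k-rank, as in the earlier sketch (Definition 2.1)
    R = M.shape[1]
    k = 0
    for size in range(1, R + 1):
        if all(np.linalg.matrix_rank(M[:, list(s)], tol=tol) == size
               for s in combinations(range(R), size)):
            k = size
        else:
            break
    return k

def khatri_rao(P, Q):
    # column-wise Kronecker product
    return np.column_stack([np.kron(P[:, r], Q[:, r]) for r in range(P.shape[1])])

rng = np.random.default_rng(5)
R = 5
A = rng.normal(size=(3, R))     # generically kA = 3 (more columns than rows)
B = rng.normal(size=(4, R))     # generically kB = 4
kA, kB = k_rank(A), k_rank(B)
kAB = k_rank(khatri_rao(A, B))
assert kAB >= min(kA + kB - 1, R)   # Lemma 3.3(ii)
print(kA, kB, kAB)
```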

The following lemma shows that Kruskal’s condition implies the necessary uniqueness condi-
tions (2.3) and (2.5). For (2.5), the proof is due to Liu and Sidiropoulos [7].

Lemma 3.4. If Kruskal’s condition 2R + 2 ≤ kA + kB + kC holds, then

(i) kA ≥ 2, kB ≥ 2 and kC ≥ 2,
(ii) rA⊙B = rC⊙A = rB⊙C = R.

Proof. We first prove (i). From Kruskal’s condition and kC ≤ R it follows that R ≤ kA + kB − 2. If kA ≤ 1, then this implies kB ≥ R + 1, which is impossible. Hence, kA ≥ 2 if Kruskal’s condition holds. The complete proof of (i) is obtained by exchanging the roles of A, B and C.
Next, we prove (ii). By the proof of (i) and statement (ii) of Lemma 3.3, we have kA⊙B ≥ min(kA + kB − 1, R) = R. Hence, rA⊙B ≥ kA⊙B ≥ R, which implies rA⊙B = R. The complete proof of (ii) is obtained by exchanging the roles of A, B and C. □

We are now ready to prove Theorem 3.1. The first part of our proof, up to Eq. (3.11), is similar to Sidiropoulos and Bro [8]. Let the CP solutions (A, B, C) and (Ā, B̄, C̄) be as in Theorem 3.1 and assume Kruskal’s condition 2R + 2 ≤ kA + kB + kC holds. First, we use the Permutation Lemma to show that C and C̄ are identical up to a column permutation and rescaling. For this, we need to prove that for any x such that ω(C̄T x) ≤ R − rC̄ + 1 there holds ω(CT x) ≤ ω(C̄T x).
Let x = (x1, . . . , xK)T. We consider the linear combination Σ_{k=1}^K xk Xk of the slices Xk in (1.3). For the Vec of the transpose of this linear combination, see (2.4), we have
(A ⊙ B)CT x = (Ā ⊙ B̄)C̄T x.    (3.3)

By Lemma 3.4, the matrix (A ⊙ B) has full column rank. This implies that if ω(C̄T x) = 0, then also ω(CT x) = 0. Hence, null(C̄) ⊆ null(C). The orthogonal decomposition theorem states that any y ∈ span(C) can be written as the sum y = s + s⊥, where s ∈ span(C̄) and s⊥ ∈ null(C̄). But since null(C̄) ⊆ null(C), there must hold s⊥ = 0 and y ∈ span(C̄). Hence, we have span(C) ⊆ span(C̄) and also rC ≤ rC̄. This implies that if ω(C̄T x) ≤ R − rC̄ + 1, then
ω(C̄T x) ≤ R − rC̄ + 1 ≤ R − rC + 1 ≤ R − kC + 1 ≤ kA + kB − (R + 1),    (3.4)
where the last inequality follows from Kruskal’s condition.
Next, consider the linear combination Σ_{k=1}^K xk Xk again. By (1.3), it is equal to

A diag(CT x)BT = Ā diag(C̄T x)B̄T . (3.5)
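Both slice-mixture identities, (3.3) and (3.5), can be checked numerically; the following sketch uses arbitrary small sizes and a random mixing vector x.

```python
import numpy as np

rng = np.random.default_rng(6)
I, J, K, R = 4, 5, 3, 2
A, B, C = (rng.normal(size=(d, R)) for d in (I, J, K))
x = rng.normal(size=K)

# slices Xk as in (1.3) and their linear combination sum_k xk Xk
slices = [A @ np.diag(C[k, :]) @ B.T for k in range(K)]
lin_comb = sum(x[k] * slices[k] for k in range(K))

# (3.5): the slice mixture equals A diag(C^T x) B^T
assert np.allclose(lin_comb, A @ np.diag(C.T @ x) @ B.T)

# (3.3): the Vec of its transpose equals (A kr B) C^T x
AB = np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(R)])
assert np.allclose(lin_comb.T.flatten(order='F'), AB @ (C.T @ x))
```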


We have
ω(C̄T x) = r(diag(C̄T x)) ≥ r(Ā diag(C̄T x)B̄T) = r(A diag(CT x)BT).    (3.6)

Let γ = ω(CT x) and let Ã and B̃ consist of the columns of A and B, respectively, corresponding to the nonzero elements of CT x. Then Ã and B̃ both have γ columns. Let t be the γ × 1 vector containing the nonzero elements of CT x such that A diag(CT x)BT = Ã diag(t)B̃T. Sylvester’s inequality now yields
r(A diag(CT x)BT) = r(Ã diag(t)B̃T) ≥ rÃ + r(B̃ diag(t)) − γ = rÃ + rB̃ − γ,    (3.7)
where the last equality is due to the fact that t contains no zero elements. From the definition of the k-rank, it follows that
rÃ ≥ min(γ, kA),   rB̃ ≥ min(γ, kB).    (3.8)
From (3.6)–(3.8) we obtain
ω(C̄T x) ≥ min(γ, kA) + min(γ, kB) − γ.    (3.9)
Combining (3.4) and (3.9) yields
min(γ, kA) + min(γ, kB) − γ ≤ ω(C̄T x) ≤ kA + kB − (R + 1).    (3.10)
Recall that we need to show that γ = ω(CT x) ≤ ω(C̄T x). From (3.10) it follows that we are done if γ < min(kA, kB). We will prove this by contradiction. Suppose γ > max(kA, kB). Then (3.10) gives γ ≥ R + 1, which is impossible. Suppose next that kA ≤ γ ≤ kB. Then (3.10) gives kB ≥ R + 1, which is also impossible. Since A and B can be exchanged in the latter case, this shows that γ < min(kA, kB) must hold. Therefore, γ = ω(CT x) ≤ ω(C̄T x) follows from (3.10) and the Permutation Lemma yields that a unique permutation matrix Πc and a unique nonsingular diagonal matrix Λc exist such that C̄ = CΠcΛc. Notice that kC ≥ 2 follows from Lemma 3.4.
The analysis above can be repeated for A and B. Hence, it follows that
Ā = AΠaΛa,   B̄ = BΠbΛb,   C̄ = CΠcΛc,    (3.11)
where Πa, Πb and Πc are unique permutation matrices and Λa, Λb and Λc are unique nonsingular diagonal matrices. It remains to show that Πa = Πb = Πc and that ΛaΛbΛc = IR. The following lemma states that we only need to show that Πa = Πb.

Lemma 3.5. Let the CP solutions (A, B, C) and (Ā, B̄, C̄) be as in Theorem 3.1 and assume Kruskal’s condition 2R + 2 ≤ kA + kB + kC holds. If (3.11) holds and Πa = Πb, then Πa = Πb = Πc and ΛaΛbΛc = IR.

Proof. Let Πa = Πb = Π. We have (A ⊙ B)CT = (AΠΛa ⊙ BΠΛb)(CΠcΛc)T, which may be written as (A ⊙ B)(CΠcΛcΛaΛbΠT)T. Since (A ⊙ B) has full column rank (see Lemma 3.4), it follows that C = CΠcΛcΛaΛbΠT. The matrix ΠcΛcΛaΛbΠT is a rescaled permutation matrix. Since C has no all-zero columns or proportional columns (kC ≥ 2, see Lemma 3.4), it follows that ΠcΠT = IR and ΛaΛbΛc = IR. This completes the proof. □
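The key identity used above, (AΠΛa ⊙ BΠΛb) = (A ⊙ B)ΠΛaΛb, is easy to verify numerically; the permutation and scalings in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
I, J, R = 4, 5, 3
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))

def khatri_rao(P, Q):
    return np.column_stack([np.kron(P[:, r], Q[:, r]) for r in range(P.shape[1])])

Pi = np.eye(R)[:, [2, 0, 1]]                     # a permutation matrix
La = np.diag(rng.uniform(0.5, 2.0, size=R))      # nonsingular diagonal scalings
Lb = np.diag(rng.uniform(0.5, 2.0, size=R))

# (A Pi La) kr (B Pi Lb) = (A kr B) Pi La Lb
lhs = khatri_rao(A @ Pi @ La, B @ Pi @ Lb)
rhs = khatri_rao(A, B) @ Pi @ La @ Lb
assert np.allclose(lhs, rhs)
```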

In the remaining part of this section, we will show that if Kruskal’s condition holds, then (3.11) implies Πa = Πb. The proof of this is based on Kruskal [5, pp. 129–132].
Consider the CP decomposition in (1.3) and assume (3.11) holds. For vectors v and w we have
vT Xk w = vT ACk (wT B)T = vT AΠa C̄k ΛaΛb (wT BΠb)T.    (3.12)

We combine (3.12) for k = 1, . . . , K into
(vT X1 w, . . . , vT XK w)T = C diag(vT A) diag(wT B) (1, . . . , 1)T
                           = C̄ΛaΛb diag(vT AΠa) diag(wT BΠb) (1, . . . , 1)T.    (3.13)

Let ar and br denote the rth columns of A and B, respectively. We define
p = diag(vT A) diag(wT B) (1, . . . , 1)T = (vT a1 · wT b1, . . . , vT aR · wT bR)T.    (3.14)

Let the index function g(x) be given by AΠa = [ag(1) | ag(2) | · · · | ag(R)]. Analogously, let h(x) be given by BΠb = [bh(1) | bh(2) | · · · | bh(R)]. We define
q = diag(vT AΠa) diag(wT BΠb) (1, . . . , 1)T = (vT ag(1) · wT bh(1), . . . , vT ag(R) · wT bh(R))T.    (3.15)
Eq. (3.13) can now be written as Cp = C̄ΛaΛb q. Below, we will show that if Kruskal’s condition holds and Πa ≠ Πb, then we can choose v and w such that q = 0 and p ≠ 0 has less than kC nonzero elements. This implies that C contains n linearly dependent columns, with 1 ≤ n ≤ kC − 1, which contradicts the definition of kC. Hence, if Kruskal’s condition and (3.11) hold, then Πa = Πb.
Suppose Πa ≠ Πb. Then there exists an r such that ar is the sth column of AΠa, column br is the tth column of BΠb and s ≠ t. Hence, there exists an r such that r = g(s) = h(t) and s ≠ t.
Next, we create an index set S ⊂ {1, . . . , R} for which we find a vector v such that vT aj = 0 if j ∈ S. Equivalently, we create an index set T ⊂ {1, . . . , R} for which we find a vector w such that wT bj = 0 if j ∈ T. The sets S and T are created as follows:

• Put g(t) in S and h(s) in T.
• For all x ∉ {s, t}, add g(x) to S if card(S) < kA − 1. Otherwise, add h(x) to T.

Here, card(S) denotes the number of elements in the set S. We observe the following. In row x of the vector q in (3.15), either g(x) ∈ S or h(x) ∈ T, x = 1, . . . , R. The index r = g(s) = h(t) is neither an element of S nor an element of T. Since kA − 1 ≤ R − 1, the set S will contain exactly kA − 1 elements. The number of elements in T equals R − card(S) = R − kA + 1, which is less than or equal to kB − 1 (see the proof of Lemma 3.4).
We choose vectors v and w such that
vT aj = 0 if j ∈ S, and vT ar ≠ 0,
wT bj = 0 if j ∈ T, and wT br ≠ 0.
It can be seen that such v and w always exist. The vector v has to be chosen from the orthogonal
complement of span{aj , j ∈ S}, which is an (I − kA + 1)-dimensional space. If the column ar is
orthogonal to all possible vectors v, it lies in span{aj , j ∈ S}. But then we would have kA linearly
dependent columns in A, which is not possible. Hence, a vector v as above can always be found.
Analogous reasoning shows that this is also true for w.
For the sets S and T and the vectors v and w above, we have q = 0 and the rth element of p nonzero. Let S^c = {1, . . . , R}\S and T^c = {1, . . . , R}\T. The number of nonzero elements in p is bounded from above by
card(S^c ∩ T^c) ≤ card(S^c) = R − kA + 1 ≤ kC − 1,    (3.16)
where the last inequality follows from Kruskal’s condition and kB ≤ R. Hence, Cp = 0 implies that C contains n linearly dependent columns, with 1 ≤ n ≤ kC − 1, which contradicts the definition of kC. This completes the proof of Theorem 3.1.

4. Proof of Kruskal’s Permutation Lemma

Here, we prove Kruskal’s Permutation Lemma, i.e. Lemma 3.2. The proof presented in Section
4.1 is similar to Kruskal’s original proof in [5]. However, we add more clarification to it in order
to show that the proof is natural and intuitive. In Section 4.2 we discuss the link between the proof
in Section 4.1 and the alternative proof of the Permutation Lemma by Jiang and Sidiropoulos [4].

4.1. Kruskal’s original proof revisited

Let C and C̄ be two K × R matrices and let kC ≥ 2. Suppose the following condition holds: for any vector x such that ω(C̄T x) ≤ R − rC̄ + 1, we have ω(CT x) ≤ ω(C̄T x). We need to show that there exists a unique permutation matrix Π and a unique nonsingular diagonal matrix Λ such that C̄ = CΠΛ.
From the condition above, it follows that if ω(C̄T x) = 0, then also ω(CT x) = 0. Hence, null(C̄) ⊆ null(C). As before, this implies span(C) ⊆ span(C̄) and also rC ≤ rC̄. Since rC ≥ kC ≥ 2, the matrix C̄ must have at least two nonzero and nonproportional columns.
Recall that the condition above states that if a vector x is orthogonal to h ≥ rC̄ − 1 columns
of C̄, then x is orthogonal to at least h columns of C. Any vector x is orthogonal to the all-zero
columns of C̄. If x is orthogonal to a nonzero column c̄j of C̄, then it is also orthogonal to all
nonzero columns which are scalar multiples of c̄j . Therefore, it makes sense to partition the
columns of C̄ into the following sets.

G0 = {the all-zero columns of C̄},


Gj = {column c̄j of C̄ and all its nonzero scalar multiples in C̄}, j = 1, . . . , M.

Hence, if x is orthogonal to a nonzero column c̄j of C̄, then it is orthogonal to at least card(G0 ) +
card(Gj ) columns of C̄. If x is orthogonal to two linearly independent columns c̄i and c̄j of C̄,
then it is orthogonal to all columns in G0 , Gi and Gj and to all columns of C̄ in span{c̄i , c̄j }. To
be able to work with such sets of columns of C̄, we define the following.

Definition 4.1. A set Hk of columns of C̄ is called a k-dimensional column set if Hk contains exactly k linearly independent columns of C̄ and all columns of C̄ in the span of those k columns, k = 1, . . . , rC̄. The 0-dimensional column set H0 is defined as H0 = G0.

Notice that the rC̄ -dimensional column set contains all columns of C̄. For k ∈ {1, . . . , rC̄ − 1},
there are more possibilities of forming a k-dimensional column set. For example, the
1-dimensional column sets are given by the unions G0 ∪ Gj , j = 1, . . . , M. The span of a
k-dimensional column set Hk is denoted by span(Hk ) and its orthogonal complement by null(Hk ).
It is important to note that if x is orthogonal to some nonzero columns of C̄, then those columns form a k-dimensional column set Hk and x ∈ null(Hk), k ≥ 1. This fact, together with span(C) ⊆
span(C̄), yields the following result. For ease of presentation, we postpone its proof until the end
of this section.

Lemma 4.2. Let C and C̄ be two K × R matrices. Suppose that ω(CT x) ≤ ω(C̄T x) for any vector x such that ω(C̄T x) ≤ R − rC̄ + 1. For k ∈ {0, . . . , rC̄}, let Hk be a k-dimensional column set. Then the number of columns of C in span(Hk) is larger than or equal to card(Hk).

The Permutation Lemma now follows from the result of Lemma 4.2 for k = 0 and k = 1. First, consider k = 0. Since C does not have any all-zero columns (kC ≥ 2), the number of columns of C in span(H0) is zero. From Lemma 4.2 it then follows that also card(H0) must be zero. Hence, C̄ contains no all-zero columns and the set G0 is empty.
Next, consider k = 1. There must hold that the number of columns of C in span(H1) is larger than or equal to card(H1), for any 1-dimensional column set H1. The set G0 is empty, which implies that the 1-dimensional column sets are given by Gj, j = 1, . . . , M. Hence, the number of columns of C in span(Gj) must be larger than or equal to card(Gj), for j = 1, . . . , M. Since kC ≥ 2, the number of columns of C contained in a particular span(Gj) cannot be larger than one. This implies card(Gj) ≤ 1, for j = 1, . . . , M. However, from the construction of the sets Gj, it follows that card(Gj) ≥ 1. Hence, card(Gj) = 1, for j = 1, . . . , M, and M = R. Since any column of C is contained in at most one span(Gj), it follows that this can only be true if C and C̄ have the same columns up to permutation and scalar multiplication. That is, there exists a permutation matrix Π and a nonsingular diagonal matrix Λ such that C̄ = CΠΛ. It can be seen that Π and Λ are indeed unique.
In order to prove the Permutation Lemma, we need the result of Lemma 4.2 only for k = 0
and k = 1. However, as will be seen in the proof of Lemma 4.2 below, the only way to obtain the
result for k = 0 and k = 1 seems to be through induction, starting at k = rC̄ and subsequently
decreasing the value of k. This explains why Lemma 4.2 is formulated for all k ∈ {0, . . . , rC̄ }.
Finally, we prove Lemma 4.2. We need the following result.

Lemma 4.3. Let Hk be a k-dimensional column set and y a K × 1 vector. If y ∉ span(Hk), then there is at most one (k + 1)-dimensional column set Hk+1 with Hk ⊂ Hk+1 and y ∈ span(Hk+1).

Proof. Suppose to the contrary that there are two different (k + 1)-dimensional column sets H^(1)_{k+1} and H^(2)_{k+1} which both contain Hk and such that y ∈ span(H^(1)_{k+1}) and y ∈ span(H^(2)_{k+1}). Since y ∉ span(Hk), it follows that span{Hk, y} = span(H^(1)_{k+1}) = span(H^(2)_{k+1}). Without loss of generality we may assume that H^(1)_{k+1} contains some column c̄ of C̄ and c̄ ∉ H^(2)_{k+1}. Then span{H^(2)_{k+1}, c̄} = span{Hk, y, c̄} has dimension k + 2. But this is not possible, since span{Hk, y} = span(H^(1)_{k+1}) has dimension k + 1 and contains c̄. This completes the proof. □

Proof of Lemma 4.2. First, we give the proof for k = rC̄ and k = rC̄ − 1. Then we use induction
on k (going from k + 1 to k) to complete the proof.
Let k = rC̄ . There holds span(Hk ) = span(C̄). Hence, the number of columns of C̄ in span(Hk )
is equal to R. Since span(C) ⊆ span(C̄), the number of columns of C in span(Hk ) is also equal
to R. This completes the proof for k = rC̄ .
Next, we consider the case k = rC̄ − 1. Let Hk be an (rC̄ − 1)-dimensional column set. Pick
a vector x ∈ null(Hk ), which does not lie in null(C̄). Then x can be chosen from a 1-dimensional
subspace. Let q̄ = card(Hk ) and let q denote the number of columns of C in span(Hk ). We have
ω(C̄T x) ≤ R − q̄; we claim that in fact ω(C̄T x) = R − q̄. Indeed, suppose x is orthogonal to
column c̄j of C̄ and c̄j is not included in Hk . Then x would lie in null{Hk , c̄j } = null(C̄), which
is a contradiction. In the same way we can show that ω(CT x) = R − q. As before, suppose x is
orthogonal to column cj of C and cj does not lie in span(Hk ). Since cj lies in span(C) ⊆ span(C̄),
the vector x lies in null{Hk , cj } = null(C̄). Again, we obtain a contradiction.
Since q̄ ≥ rC̄ − 1, we have ω(C̄T x) = R − q̄ ≤ R − rC̄ + 1. Hence, the condition of Lemma 4.2 implies ω(CT x) ≤ ω(C̄T x), which yields q ≥ q̄. This completes the proof for k = rC̄ − 1.
The remaining part of the proof is by induction on k. Suppose the result of Lemma 4.2 holds for k + 1 < rC̄. We will show that it holds for k as well. Let Hk be a k-dimensional column set. Let H^(i)_{k+1}, i = 1, . . . , N, be all (k + 1)-dimensional column sets such that Hk ⊂ H^(i)_{k+1}. Since there exists a group of exactly rC̄ − k linearly independent columns of C̄ which are not included in Hk, it follows that there are at least rC̄ − k ≥ 2 different (k + 1)-dimensional column sets H^(i)_{k+1} containing Hk. Thus, N ≥ 2.
As above, let q̄ = card(Hk) and let q denote the number of columns of C in span(Hk). Also, let q̄i = card(H^(i)_{k+1}) and let qi denote the number of columns of C in span(H^(i)_{k+1}), for i = 1, . . . , N. By the induction hypothesis, we know that qi ≥ q̄i for all i. Next, we will show that q ≥ q̄.
According to Lemma 4.3, a column of C which does not lie in span(Hk) is included in at most one span(H^(i)_{k+1}), i ∈ {1, . . . , N}. This implies
q + Σ_{i=1}^N (qi − q) ≤ R,    (4.1)
where qi − q denotes the number of columns of C which are included in span(H^(i)_{k+1}) but not in span(Hk). Analogously, Lemma 4.3 yields that a column of C̄ which is not in Hk is an element of at most one H^(i)_{k+1}, i ∈ {1, . . . , N}. Hence,
q̄ + Σ_{i=1}^N (q̄i − q̄) ≤ R,    (4.2)

where q̄i − q̄ denotes the number of columns of C̄ which are included in H^(i)_{k+1} but not in Hk. Notice that a column of C̄ which is included in the span of some k-dimensional column set is by definition an element of the k-dimensional column set itself.
Next, we show that we have equality in (4.2). Let c̄j be a column of C̄ which is not an element of Hk. Then we may create H^(i)_{k+1} such that span(H^(i)_{k+1}) = span{Hk, c̄j}. It can be seen that Hk ⊂ H^(i)_{k+1} and c̄j ∈ H^(i)_{k+1}. Therefore, any column of C̄ not included in Hk is included in some H^(i)_{k+1}, i ∈ {1, . . . , N}. This implies that we may replace (4.2) by
q̄ + Σ_{i=1}^N (q̄i − q̄) = R.    (4.3)
Now, the result follows from Eqs. (4.1) and (4.3) and the induction hypothesis. Indeed, we have
(N − 1)q ≥ Σ_{i=1}^N qi − R ≥ Σ_{i=1}^N q̄i − R = (N − 1)q̄,    (4.4)
where the first inequality follows from (4.1), the second inequality follows from the induction hypothesis and the equality follows from (4.3). Since N ≥ 2, Eq. (4.4) yields q ≥ q̄. This completes the proof of Lemma 4.2. □

4.2. Connection with the proof of Jiang and Sidiropoulos

Here, we explain the link between the proof in Section 4.1 and the alternative proof of the
Permutation Lemma by Jiang and Sidiropoulos [4]. As stated in Section 3, the latter authors
prove the Permutation Lemma in two steps. First, they show that the condition in the Permutation
Lemma is equivalent to: if a vector x is orthogonal to h columns of C̄, then it is orthogonal to at
least h columns of C. Second, they show that the latter condition implies C̄ = CΠΛ. These two
steps are formalized in the following lemmas.

Lemma 4.4. Let C and C̄ be two K × R matrices and let kC ≥ 1. Then ω(CT x) ≤ ω(C̄T x) for all x if and only if ω(CT x) ≤ ω(C̄T x) for any x such that ω(C̄T x) ≤ R − rC̄ + 1.

Lemma 4.5. Let C and C̄ be two K × R matrices and let kC ≥ 2. Suppose that ω(CT x) ≤ ω(C̄T x) for all x. Then there exists a unique permutation matrix Π and a unique nonsingular diagonal matrix Λ such that C̄ = CΠΛ.

Using the approach in Section 4.1, we will prove Lemma 4.4 and Lemma 4.5. For Lemma 4.4, it suffices to prove the “if” part. This can be done by using Lemma 4.2 above. Indeed, suppose that ω(CT x) ≤ ω(C̄T x) for any x such that ω(C̄T x) ≤ R − rC̄ + 1 and let kC ≥ 1. We need to show that ω(CT x) ≤ ω(C̄T x) holds for all x. Since null(C̄) ⊆ null(C), the case x ∈ null(C̄) is trivial. Also the case ω(C̄T x) = R is trivial. Suppose 0 < ω(C̄T x) < R. Then the columns of C̄ orthogonal to x form a k-dimensional column set Hk, for some k ∈ {1, . . . , rC̄ − 1}, and x ∈ null(Hk). Notice that since C̄ does not contain all-zero columns (this follows from kC ≥ 1 and the case k = 0 in Lemma 4.2, see above), the value of k must be at least one. As in the proof of Lemma 4.2, we have card(Hk) = R − ω(C̄T x). Similarly, the number of columns of C in span(Hk) equals

R − ω(CT x). Thus, according to Lemma 4.2, we have R − ω(CT x) ≥ card(Hk) = R − ω(C̄T x), which proves Lemma 4.4.
As in Section 4.1, Lemma 4.5 can be obtained from Lemma 4.4 and Lemma 4.2. This explains
the link between the proof of the Permutation Lemma in Section 4.1 and the one of Jiang and
Sidiropoulos [4].

Acknowledgments

The authors would like to thank Jos ten Berge for sharing his short-cut proof of Lemma 3.3
and for commenting on an earlier version of this manuscript.

References

[1] J.D. Carroll, J.J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization
of Eckart–Young decomposition, Psychometrika 35 (1970) 283–319.
[2] R.A. Harshman, Foundations of the Parafac procedure: models and conditions for an “explanatory” multimodal
factor analysis, UCLA Working Papers in Phonetics 16 (1970) 1–84.
[3] R.A. Harshman, Determination and proof of minimum uniqueness conditions for Parafac-1, UCLA Working Papers
in Phonetics 22 (1972) 111–117.
[4] T. Jiang, N.D. Sidiropoulos, Kruskal’s permutation lemma and the identification of Candecomp/Parafac and bilinear
models with constant modulus constraints, IEEE Trans. Signal Process. 52 (2004) 2625–2636.
[5] J.B. Kruskal, Three-way arrays: rank and uniqueness of trilinear decompositions, with applications to arithmetic
complexity and statistics, Linear Algebra Appl. 18 (1977) 95–138.
[6] L. De Lathauwer, A link between the Canonical Decomposition in multilinear algebra and simultaneous matrix
diagonalization, SIAM J. Matrix Anal. Appl. 28 (2006) 642–666.
[7] X. Liu, N.D. Sidiropoulos, Cramér–Rao lower bounds for low-rank decomposition of multidimensional arrays,
IEEE Trans. Signal Process. 49 (2001) 2074–2086.
[8] N.D. Sidiropoulos, R. Bro, On the uniqueness of multilinear decomposition of N-way arrays, J. Chemometr. 14
(2000) 229–239.
[9] N.D. Sidiropoulos, Low-rank decomposition of multi-way arrays: a signal processing perspective, in: Proceedings
of IEEE Sensor Array and Multichannel (SAM) Signal Processing Workshop, July 18–21, Sitges, Barcelona, Spain,
2004.
[10] A. Smilde, R. Bro, P. Geladi, Multi-way Analysis: Applications in the Chemical Sciences, Wiley, 2004.
[11] A. Stegeman, J.M.F. Ten Berge, Kruskal’s condition for uniqueness in Candecomp/Parafac when ranks and k-ranks
coincide, Computational Statistics and Data Analysis 50 (2006) 210–220.
[12] A. Stegeman, J.M.F. Ten Berge, L. De Lathauwer, Sufficient conditions for uniqueness in Candecomp/Parafac and
Indscal with random component matrices, Psychometrika 71 (2006) 219–229.
[13] J.M.F. Ten Berge, The k-rank of a Khatri–Rao product, Unpublished Note, Heijmans Institute of Psychological
Research, University of Groningen, The Netherlands, 2000.
[14] J.M.F. Ten Berge, N.D. Sidiropoulos, On uniqueness in Candecomp/Parafac, Psychometrika 67 (2002) 399–409.
[15] L.R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (1966) 279–311.
