n-LINEAR ALGEBRA OF TYPE I
AND ITS APPLICATIONS
W. B. Vasantha Kandasamy
e-mail: vasanthakandasamy@gmail.com
web: http://mat.iitm.ac.in/~wbv
www.vasantha.net
Florentin Smarandache
e-mail: smarand@unm.edu
INFOLEARNQUEST
Ann Arbor
2008
CONTENTS

Preface

Chapter One
BASIC CONCEPTS

Chapter Two
n-VECTOR SPACES OF TYPE I AND THEIR PROPERTIES

Chapter Three
APPLICATIONS OF n-LINEAR ALGEBRA OF TYPE I

Chapter Four
SUGGESTED PROBLEMS

FURTHER READING

INDEX

ABOUT THE AUTHORS
PREFACE
With the advent of computers one needs algebraic structures that can simultaneously work with bulk data. One such algebraic structure, namely the n-linear algebra of type I, is introduced in this book, and its applications to n-Markov chains and n-Leontief models are given. These structures can be thought of as generalizations of bilinear algebras and bivector spaces. Several interesting properties of n-linear algebras are proved.

This book has four chapters. The first chapter introduces n-groups, which are essential for the definition of n-vector spaces and n-linear algebras of type I. Chapter two gives the notion of n-vector spaces and several related results which are analogues of classical linear algebra theorems. In the case of n-vector spaces we can define several types of linear transformations.

The notion of n-best approximations can be used for error correction in coding theory. The notion of n-eigen values can be used in the deterministic modal superposition principle for undamped structures, which finds applications in finite element analysis of mechanical structures with uncertain parameters. Further, it is suggested that the concept of n-matrices can be used in real world problems which adopt fuzzy models like Fuzzy Cognitive Maps, Fuzzy Relational Equations and Bidirectional Associative Memories. The applications of these algebraic structures are given in Chapter 3. Chapter four gives some problems to make the subject easily understandable.

The authors deeply acknowledge the unflinching support of Dr. K. Kandasamy, Meena and Kama.
W.B.VASANTHA KANDASAMY
FLORENTIN SMARANDACHE
Chapter One
BASIC CONCEPTS
In this chapter we introduce the notions of n-fields and n-groups (n ≥ 2) and illustrate them by examples. Throughout this book F will
denote a field, Q the field of rationals, R the field of reals, C the
field of complex numbers and Zp, p a prime, the finite field of
characteristic p. The fields Q, R and C are fields of zero
characteristic.
Now we proceed on to define the concept of n-groups.
DEFINITION 1.1: Let G = G1 ∪ G2 ∪ … ∪ Gn (n ≥ 2) where each (Gi, *i, ei) is a group with *i the binary operation and ei the identity element, such that Gi ≠ Gj if i ≠ j, 1 ≤ i, j ≤ n. Further Gi ⊄ Gj and Gj ⊄ Gi if i ≠ j. Any element x ∈ G is represented as x = x1 ∪ x2 ∪ … ∪ xn, where xi ∈ Gi, i = 1, 2, …, n. Now the operation on G is described so that G becomes a group. For x, y ∈ G, where x = x1 ∪ x2 ∪ … ∪ xn and y = y1 ∪ y2 ∪ … ∪ yn with xi, yi ∈ Gi, i = 1, 2, …, n,
x * y = (x1 ∪ x2 ∪ … ∪ xn) * (y1 ∪ y2 ∪ … ∪ yn)
= (x1 *1 y1 ∪ x2 *2 y2 ∪ … ∪ xn *n yn).
Since each xi *i yi ∈ Gi we see x * y = (p1 ∪ p2 ∪ …∪ pn) where
xi *i yi = pi for i = 1, 2, …, n. Thus G is closed under the binary
operation *.
Now let e = (e1 ∪ e2 ∪ … ∪ en) where ei ∈ Gi is the identity of Gi with respect to the binary operation *i, i = 1, 2, …, n; we see e * x = x * e = x for all x ∈ G. e will be known as the identity element of G under the operation *.
Further for every x = x1 ∪ x2 ∪ … ∪ xn ∈ G we have
x−1 = x1−1 ∪ x2−1 ∪ … ∪ xn−1 in G such that
x * x−1 = (x1 ∪ x2 ∪ … ∪ xn) * (x1−1 ∪ x2−1 ∪ … ∪ xn−1)
= x1 *1 x1−1 ∪ x2 *2 x2−1 ∪ … ∪ xn *n xn−1
= (e1 ∪ e2 ∪ … ∪ en) = e = x−1 * x.
x−1 = x1−1 ∪ x2−1 ∪ … ∪ xn−1 is known as the inverse of x = x1 ∪ x2 ∪ … ∪ xn. We define (G, *, e) to be the n-group (n ≥ 2). When n = 1 we get a group; n = 2 gives us the bigroup described in [37-38]; when n > 2 we have the n-group.
Now we illustrate this by examples before we proceed on to
recall more properties about them.
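Before the examples, the componentwise operation and inverse of Definition 1.1 can be sketched in code. The fragment below is an illustrative sketch only; the tuple representation, the function names n_op and n_inv, and the sample 2-group built from Z5 under + and Z7 \ {0} under × are our assumptions, not from the text.

```python
# Sketch of the componentwise n-group operation of Definition 1.1:
# x * y = x1 *1 y1 ∪ x2 *2 y2 ∪ ... ∪ xn *n yn.

def n_op(x, y, ops):
    """Apply the i-th component group's operation to the i-th components."""
    return tuple(op(a, b) for op, a, b in zip(ops, x, y))

def n_inv(x, invs):
    """Componentwise inverse: x^-1 = x1^-1 ∪ x2^-1 ∪ ... ∪ xn^-1."""
    return tuple(inv(a) for inv, a in zip(invs, x))

# Hypothetical 2-group: G1 = Z5 under + and G2 = Z7 \ {0} under x.
ops  = (lambda a, b: (a + b) % 5, lambda a, b: (a * b) % 7)
invs = (lambda a: (-a) % 5,       lambda a: pow(a, 5, 7))  # a^-1 = a^5 (mod 7)
e = (0, 1)                        # e = e1 ∪ e2

x = (3, 4)
print(n_op(x, n_inv(x, invs), ops) == e)  # True: x * x^-1 = e
```

Here pow(a, 5, 7) computes a−1 in Z7 \ {0} via Fermat's little theorem (a raised to 7 − 2 modulo 7).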
Example 1.1: Let G = G1 ∪ G2 ∪ G3 ∪ G4 ∪ G5 where G1 = S3, the symmetric group of degree 3 with
e1 = ⎛1 2 3⎞
     ⎝1 2 3⎠,
G2 = 〈g | g⁶ = e2〉, the cyclic group of order 6, G3 = Z5, the group under addition modulo 5 with e3 = 0, G4 = D8 = {a, b | a² = b⁸ = 1; bab = a}, the dihedral group of order 16, with e4 = 1 the identity element of G4, and G5 = A4, the alternating subgroup of S4, with
e5 = ⎛1 2 3 4⎞
     ⎝1 2 3 4⎠.
Clearly G = S3 ∪ G2 ∪ Z5 ∪ D8 ∪ A4 is an n-group with n = 5. Any x ∈ G would be of the form
x = ⎛1 2 3⎞ ∪ g² ∪ 4 ∪ b³ ∪ ⎛1 2 3 4⎞
    ⎝2 1 3⎠                 ⎝1 3 4 2⎠,
with
x−1 = ⎛1 2 3⎞ ∪ g⁴ ∪ 1 ∪ b⁵ ∪ ⎛1 2 3 4⎞
      ⎝2 1 3⎠                 ⎝1 4 2 3⎠.
The identity element of G is
⎛1 2 3⎞ ∪ e2 ∪ 0 ∪ 1 ∪ ⎛1 2 3 4⎞
⎝1 2 3⎠                ⎝1 2 3 4⎠
= e1 ∪ e2 ∪ e3 ∪ e4 ∪ e5.
Thus G is a 5-group. Clearly the order of G is o(G1) × o(G2) × o(G3) × o(G4) × o(G5) = 6 × 6 × 5 × 16 × 12 = 34,560.
We see o(G) < ∞. Thus if in the n-group G1 ∪ G2 ∪ … ∪ Gn every group Gi is of finite order, 1 ≤ i ≤ n, then G is of finite order.
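The order count above is simply the product of the component orders; a quick sanity check (illustrative only):

```python
from math import prod

# Component orders in Example 1.1: o(S3), o(<g | g^6>), o(Z5), o(D8), o(A4).
component_orders = [6, 6, 5, 16, 12]
print(prod(component_orders))  # 34560
```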
Example 1.2: Let G = G1 ∪ G2 ∪ G3 where G1 = Z10, the group under addition modulo 10, G2 = 〈g | g⁵ = 1〉, the cyclic group of order 5, and G3 = Z, the set of integers under +.
Clearly G is a 3-group. We see G is an infinite group, for the order of G3 is infinite.
Further it is interesting to observe that every group in the 3-group G is abelian. Thus if G = G1 ∪ G2 ∪ … ∪ Gn is an n-group (n ≥ 2), we see G is an abelian n-group if each Gi is an abelian group, i = 1, 2, …, n. Even if one of the Gi in G is a non-abelian group then we call G a non-abelian n-group. Having seen an example of an abelian and a non-abelian n-group we now proceed on to define the notion of n-subgroup. We need all these concepts mainly to define the new notion of linear n-algebra or n-linear algebra and n-vector spaces of type I.
DEFINITION 1.2: Let G = G1 ∪ G2 ∪ … ∪ Gn be an n-group. A proper subset H ⊂ G of the form H = H1 ∪ H2 ∪ … ∪ Hn, with Hi ≠ Gi, Hi ≠ {ei} and φ ≠ Hi ⊂ Gi, i = 1, 2, …, n, each Hi a proper subgroup of Gi, is defined to be a proper n-subgroup of the n-group G. If Hi = Gi or Hi = {ei} or Hi = φ for some i, then H will not be called a proper n-subgroup but only an m-subgroup of the n-group, m < n, where m of the subgroups Hj of Gj are proper and the rest are either {ej} or φ, 1 ≤ j ≤ n.
We illustrate both these situations by the following example.

Example 1.3: Let G = G1 ∪ G2 ∪ G3 ∪ G4 be a 4-group where G1 = S4, G2 = Z10, the group under addition modulo 10, G3 = D12, the dihedral group of order 12 given by the set {a, b | a² = b⁶ = 1, bab = a}, and G4 = Z, the set of positive and negative integers with zero, under +.
Consider H = H1 ∪ H2 ∪ H3 ∪ H4 where H1 = A4, the alternating subgroup of S4, H2 = {0, 2, 4, 6, 8}, a subgroup of order 5 under addition modulo 10, H3 = {1, b, b², b³, b⁴, b⁵}, the subgroup of D12, and H4 = {2n | n ∈ Z}, a subgroup of Z. Clearly H is a proper 4-subgroup of the 4-group G.
Let K = K1 ∪ K2 ∪ K3 ∪ K4 ⊆ G where K1 = A4, K2 = {0, 5}, K3 = D12 and K4 = Z. Clearly K is not a proper 4-subgroup of the 4-group G but only an improper 4-subgroup of G.
Let T = T1 ∪ T2 ∪ T3 ∪ T4 ⊆ G where T1 = A4, T2 = {0}, T3 = φ and T4 = {2n | n ∈ Z}; clearly T is only a 2-subgroup or bisubgroup of the 4-group G.
In this book we mainly need n-groups which are abelian. Now we proceed on to define the notion of n-fields.
DEFINITION 1.3: Let F = F1 ∪ F2 ∪ … ∪ Fn (n ≥ 2) be such that each Fi is a field, Fi ≠ Fj if i ≠ j, and Fi ⊄ Fj and Fj ⊄ Fi, 1 ≤ i, j ≤ n. Then we define (F, +, ×) to be an n-field if (F, +) is an n-group and (F1 \ {0}) ∪ (F2 \ {0}) ∪ … ∪ (Fn \ {0}) is an n-group under ×.
Further
[(a1 ∪ a2 ∪ … ∪ an) + (b1 ∪ b2 ∪ … ∪ bn)] × (c1 ∪ c2 ∪ … ∪ cn)
= (a1 + b1) × c1 ∪ (a2 + b2) × c2 ∪ … ∪ (an + bn) × cn
and
(c1 ∪ c2 ∪ … ∪ cn) × [(a1 ∪ a2 ∪ … ∪ an) + (b1 ∪ b2 ∪ … ∪ bn)]
= c1 × (a1 + b1) ∪ c2 × (a2 + b2) ∪ … ∪ cn × (an + bn)
for all ai, bi, ci ∈ Fi, i = 1, 2, …, n. Thus (F, +, ×) is an n-field.
We illustrate this by the following example.
Example 1.4: Let F = F1 ∪ F2 ∪ F3 ∪ F4 where F1 = Q, F2 = Z2,
F3 = Z17 and F4 = Z11; F is a 4-field.
Example 1.5: Let F = F1 ∪ F2 ∪ F3 ∪ F4 ∪ F5 ∪ F6 where F1 = Z2, F2 = Z3, F3 = Z13, F4 = Z7, F5 = Z19 and F6 = Z31; F is a 6-field.

Let F = F1 ∪ F2 ∪ … ∪ Fn be an n-field where each Fi is a field of characteristic zero, 1 ≤ i ≤ n; then F is called an n-field of characteristic zero. Let F = F1 ∪ F2 ∪ … ∪ Fm (m ≥ 2) be an m-field; if each field Fi is of finite characteristic then we call F an m-field of finite characteristic. Suppose F = F1 ∪ F2 ∪ … ∪ Fn, n ≥ 2, where some Fi's are of finite characteristic and some Fj's are of zero characteristic, then we say F is an n-field of mixed characteristic.

Example 1.6: Let F = F1 ∪ F2 ∪ … ∪ F5 where F1 = Q, F2 = Z7, F3 = Z23, F4 = Z17 and F5 = Z2; F is a 5-field of mixed characteristic.
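The three characteristic cases can be told apart mechanically. A minimal sketch, assuming each component field is represented only by its characteristic (0 for Q, R, C and p for Zp); the function name is ours, not from the text:

```python
def n_field_characteristic(chars):
    """Classify an n-field as zero, finite or mixed characteristic."""
    if all(c == 0 for c in chars):
        return "zero"
    if all(c > 0 for c in chars):
        return "finite"
    return "mixed"

# Example 1.6: F = Q ∪ Z7 ∪ Z23 ∪ Z17 ∪ Z2.
print(n_field_characteristic([0, 7, 23, 17, 2]))  # mixed
```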
Example 1.7: Let F = F1 ∪ F2 ∪ … ∪ F6 = Z4 ∪ R ∪ Z7 ∪ Q ∪ R ∪ Z11. Clearly F is not a 6-field: F2 = F5, and moreover Z4 is not even a field. We need each field Fi to be distinct, 1 ≤ i ≤ n.

Note: Clearly F1 ∪ F2 ∪ F3 = Q ∪ R ∪ Z2 is not a 3-field, as Q ⊆ R. Here, as in the case of bistructures, we need non-containment of one set in another.
Chapter Two
n-VECTOR SPACES OF TYPE I
AND THEIR PROPERTIES
In this chapter we introduce the notion of n-vector spaces and
describe some of their important properties.
Here we define the concept of n-vector spaces over a field
which will be known as the type I n-vector spaces or n-vector
spaces of type I. Several interesting properties about them are
derived in this chapter.
DEFINITION 2.1: An n-vector space or n-linear space of type I (n ≥ 2) consists of the following:
1. a field F of scalars;
2. a set V = V1 ∪ V2 ∪ … ∪ Vn of objects called n-vectors;
3. a rule (or operation) called vector addition, which associates with each pair of n-vectors α = α1 ∪ α2 ∪ … ∪ αn, β = β1 ∪ β2 ∪ … ∪ βn ∈ V = V1 ∪ V2 ∪ … ∪ Vn the sum
α + β = (α1 ∪ α2 ∪ … ∪ αn) + (β1 ∪ β2 ∪ … ∪ βn) = (α1 + β1) ∪ (α2 + β2) ∪ … ∪ (αn + βn) ∈ V,
in such a way that
a. α + β = β + α; i.e., addition is commutative (α, β ∈ V).
b. α + (β + γ) = (α + β) + γ; i.e., addition is associative (α, β, γ ∈ V).
c. There is a unique n-vector 0n = 0 ∪ 0 ∪ … ∪ 0 ∈ V such that α + 0n = α for all α ∈ V, called the zero n-vector of V.
d. For each n-vector α = α1 ∪ α2 ∪ … ∪ αn ∈ V, there exists a unique n-vector −α = −α1 ∪ −α2 ∪ … ∪ −αn ∈ V such that α + (−α) = 0n;
4. a rule (or operation) called scalar multiplication, which associates with each scalar c in F and each n-vector α in V = V1 ∪ V2 ∪ … ∪ Vn an n-vector cα = cα1 ∪ cα2 ∪ … ∪ cαn in V, called the product of c and α, in such a way that
1. 1.α = 1.(α1 ∪ α2 ∪ … ∪ αn) = 1.α1 ∪ 1.α2 ∪ … ∪ 1.αn = α1 ∪ α2 ∪ … ∪ αn = α for every n-vector α in V.
2. (c1.c2).α = c1.(c2.α) for all c1, c2 ∈ F and α ∈ V; i.e., if α = α1 ∪ α2 ∪ … ∪ αn is an n-vector in V we have
(c1.c2).α = (c1.c2)(α1 ∪ α2 ∪ … ∪ αn) = c1[c2(α1 ∪ α2 ∪ … ∪ αn)] = c1[c2α1 ∪ c2α2 ∪ … ∪ c2αn] = c1[c2α].
3. c(α + β) = c.α + c.β for all α, β ∈ V and for all c ∈ F; i.e., if α1 ∪ α2 ∪ … ∪ αn and β1 ∪ β2 ∪ … ∪ βn are n-vectors of V then for any c ∈ F we have
c(α + β) = c[(α1 ∪ α2 ∪ … ∪ αn) + (β1 ∪ β2 ∪ … ∪ βn)]
= c[(α1 + β1) ∪ (α2 + β2) ∪ … ∪ (αn + βn)]
= c(α1 + β1) ∪ c(α2 + β2) ∪ … ∪ c(αn + βn)
= (cα1 ∪ cα2 ∪ … ∪ cαn) + (cβ1 ∪ cβ2 ∪ … ∪ cβn)
= cα + cβ.
4. (c1 + c2).α = c1α + c2α for all c1, c2 ∈ F and α ∈ V.
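The addition and scalar multiplication above act coordinatewise in each component. A minimal Python sketch, not from the text; the representation of an n-vector as a tuple of lists, one list per Vi, is our assumption:

```python
def n_add(alpha, beta):
    """alpha + beta = (a1 + b1) ∪ (a2 + b2) ∪ ... ∪ (an + bn)."""
    return tuple([x + y for x, y in zip(a, b)] for a, b in zip(alpha, beta))

def n_scale(c, alpha):
    """c.alpha = c.a1 ∪ c.a2 ∪ ... ∪ c.an."""
    return tuple([c * x for x in a] for a in alpha)

# A 2-vector: alpha1 in V1 = Q^2, alpha2 in V2 = Q^3 (sample spaces assumed).
alpha = ([1, 2], [3, 0, 1])
beta  = ([4, 1], [1, 5, 2])
print(n_add(alpha, beta))   # ([5, 3], [4, 5, 3])
print(n_scale(3, alpha))    # ([3, 6], [9, 0, 3])
```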
Just as a vector space is a composite algebraic structure consisting of a field and a set of vectors which form a group, the n-vector space of type I is a composite of a set of n-vectors (an n-group) and a field F of scalars. V is a linear n-algebra or n-linear algebra if V has a closed multiplicative binary operation "." which is associative; i.e., if α, β ∈ V then α.β ∈ V. Thus if α = (α1 ∪ α2 ∪ … ∪ αn) and β = (β1 ∪ β2 ∪ … ∪ βn) ∈ V, then
α.β = (α1 ∪ α2 ∪ … ∪ αn).(β1 ∪ β2 ∪ … ∪ βn) = (α1.β1 ∪ α2.β2 ∪ … ∪ αn.βn) ∈ V,
and the linear n-vector space of type I becomes a linear n-algebra of type I.
We make an important observation: all linear n-algebras of type I are linear n-vector spaces of type I; however an n-vector space of type I over F in general need not be an n-linear algebra of type I over F.
We now illustrate this by the following example.
Example 2.1: Let V = V1 ∪ V2 ∪ V3 ∪ V4 where V1 = Q[x], the vector space of polynomials over Q, V2 = Q × Q, the vector space of dimension two over Q,
V3 = ⎧⎛ a b ⎞ a, b, c, d ∈ Q ⎫,
     ⎩⎝ c d ⎠                ⎭
the vector space of all 2 × 2 matrices with entries from Q, and
V4 = ⎧⎛ a b c ⎞ a, b, c, d, e, f ∈ R ⎫,
     ⎩⎝ d e f ⎠                      ⎭
the vector space of all 2 × 3 matrices with entries from R, over Q. Thus V is a linear 4-vector space of type I over Q. Clearly V is not a linear 4-algebra of type I over Q.
Now we give yet another example of a linear n-vector space of
type-I.
Example 2.2: Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space over Q of type I, where V1 = Q[x], the set of all polynomials with coefficients from Q, is a vector space over Q, V2 = Q × R × Q is a vector space over Q,
V3 = ⎧⎛ a b c ⎞ a, b, c, d, e, f, g, h, i ∈ Q ⎫
     ⎪⎜ d e f ⎟                               ⎪
     ⎩⎝ g h i ⎠                               ⎭
is a vector space over Q,
V4 = ⎧⎛ a 0 0 0 ⎞ a, b, c, d ∈ R ⎫
     ⎪⎜ 0 b 0 0 ⎟                ⎪
     ⎪⎜ 0 0 c 0 ⎟                ⎪
     ⎩⎝ 0 0 0 d ⎠                ⎭
is a vector space over Q, and V5 = R is a vector space over Q. Clearly V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 is a linear 5-vector space of type I over Q. Also V is a 5-linear algebra of type I over Q.
Thus we have seen from example 2.1 that every vector n-space
of type-I need not be a linear n-algebra of type-I. Also every
linear n-algebra of type-I is a linear n-vector space of type-I.
Now we can also define the notion of n-vector space of type-I in
a very different way.
DEFINITION 2.2: Let V = V1 ∪ V2 ∪ … ∪ Vn (n ≥ 2) where each
Vi is a vector space over the same field F and Vi ≠ Vj , if i ≠ j
and Vi ⊄ Vj and Vj ⊄ Vi if i ≠ j, 1 ≤ i, j ≤ n, then V is defined to
be a n-vector space of type-I over F.
If each of the Vi's is a linear algebra over F then we call V a linear n-algebra of type I over F.
Now we proceed on to define the notion of n-subvector space of
the n-vector space of type-I.
DEFINITION 2.3: Let V = V1 ∪ V2 ∪ … ∪ Vn (n ≥ 2) be an n-vector space of type I over F. Suppose W = W1 ∪ W2 ∪ … ∪ Wn is a proper subset of V such that each Wi is a proper subspace of the vector space Vi over F, with Wi ≠ Vi and Wi ≠ φ or (0), and such that Wi ≠ Wj, Wi ⊄ Wj and Wj ⊄ Wi if i ≠ j, 1 ≤ i, j ≤ n; then we define W to be an n-subspace of type I over F.
We now illustrate it by the following example.
Example 2.3: Let V = V1 ∪ V2 ∪ V3 where V1 = R × R, a vector space over R, V2 = R[x], a vector space over R, and
V3 = ⎧⎛ a c ⎞ a, b, c, d ∈ R ⎫,
     ⎩⎝ d b ⎠                ⎭
a vector space over R; i.e., V is a 3-vector space of type I over R. Let W = W1 ∪ W2 ∪ W3 ⊂ V = V1 ∪ V2 ∪ V3 where
W1 = R × {0} ⊂ V1,
W2 = { Σi ri x^(2i) | ri ∈ R } ⊂ V2,
W3 = ⎧⎛ a 0 ⎞ a, b ∈ R ⎫ ⊆ V3.
     ⎩⎝ 0 b ⎠          ⎭
Clearly W is a 3-subspace of V of type I. Suppose
T = R × {0} ∪ R ∪ ⎧⎛ a 0 ⎞ a, b ∈ R ⎫ ⊆ V1 ∪ V2 ∪ V3;
                  ⎩⎝ 0 b ⎠          ⎭
then T is not a 3-subspace of type I, as R × {0} and R are essentially the same, i.e., R ⊆ R × {0}.
Now we proceed on to define the notion of n-linear dependence
and n-linear independence in the n-vector space V of type-I.
DEFINITION 2.4: Let V = V1 ∪ V2 ∪ … ∪ Vn be an n-vector space of type I over F. Any proper n-subset S ⊆ V is of the form S = S1 ∪ S2 ∪ … ∪ Sn ⊆ V1 ∪ V2 ∪ … ∪ Vn where φ ≠ Si ⊆ Vi, 1 ≤ i ≤ n, each Si a proper subset of Vi. If each of the subsets Si ⊆ Vi is a linearly independent set over F for i = 1, 2, …, n then we define S to be an n-linearly independent subset of V. Even if one of the subsets Sk of Vk is not a linearly independent subset of Vk for some 1 ≤ k ≤ n then we call the n-subset of V an n-linearly dependent subset or a linearly dependent n-subset of V.
Now we illustrate this situation by the following examples.
Example 2.4: Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q, where V1 = Q[x], V2 = Q × Q × Q, V3 = {the set of all 2 × 2 matrices with entries from Q} and V4 = {the set of all 4 × 2 matrices with entries from Q} are all vector spaces over Q. Let S = S1 ∪ S2 ∪ S3 ∪ S4 be a 4-subset of V, with
S1 = {1, x², x⁵, x⁷, 3x⁸},
S2 = {(7, 0, 2), (0, 5, 1)},
S3 = ⎧⎛ 5 1 ⎞  ⎛ 0 0 ⎞ ⎫
     ⎩⎝ 0 0 ⎠, ⎝ 7 3 ⎠ ⎭
and
S4 = ⎧⎛ 0 2 ⎞  ⎛ 1 0 ⎞  ⎛ 0 0 ⎞ ⎫
     ⎪⎜ 1 0 ⎟  ⎜ 0 2 ⎟  ⎜ 0 0 ⎟ ⎪
     ⎪⎜ 0 0 ⎟, ⎜ 0 0 ⎟, ⎜ 7 3 ⎟ ⎪
     ⎩⎝ 3 0 ⎠  ⎝ 0 1 ⎠  ⎝ 0 1 ⎠ ⎭.
Clearly every subset Si of Vi is a linearly independent subset, for i = 1, 2, 3, 4. Thus S is a 4-linearly independent subset of V.
Example 2.5: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over R where V1 = R[x], V2 = {set of all 3 × 3 matrices with entries from R} and V3 = R × R × R × R. Clearly V1, V2 and V3 are all vector spaces over R. Let S = S1 ∪ S2 ∪ S3 ⊆ V1 ∪ V2 ∪ V3 = V be a proper 3-subset of V, where
S1 = {x³, 3x³ + 7, x⁵},
S2 = ⎧⎛ 6 0 0 ⎞  ⎛ 0 1 −2 ⎞ ⎫
     ⎪⎜ 0 0 3 ⎟, ⎜ 1 0  1 ⎟ ⎪
     ⎩⎝ 1 1 0 ⎠  ⎝ 0 7  0 ⎠ ⎭
and
S3 = {(3 1 0 0), (0 7 2 1), (5 1 1 1), (0 8 9 1), (2 1 3 0)}.
We see S1 is a linearly independent subset of V1 over R and S2 is a linearly independent subset over R, but S3 is a linearly dependent subset of V3 over R, being five vectors in the four-dimensional space V3. Thus S is a 3-linearly dependent subset of the 3-vector space V over R.
Now we proceed on to define the notion of an n-basis of the n-vector space V over a field F.
DEFINITION 2.5: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space
over a field F. A proper n-subset S = S1 ∪ S2 ∪…∪ Sn of V is
said to be an n-basis of V if S is an n-linearly independent set and each Sj ⊆ Vj generates Vj, i.e., Sj is a basis of Vj, for j = 1, 2, …, n. Even if one of the Sj is not a basis of Vj for some 1 ≤ j ≤ n then S is not an n-basis of V.
As in the case of vector spaces, n-vector spaces can have many n-bases, but the number of basis elements in each of the n subsets is the same.
Now we illustrate this situation by the following example.
Example 2.6: Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q, where V1 = {all polynomials of degree less than or equal to 5}, V2 = Q × Q × Q, V3 = {the set of all 2 × 2 matrices with entries from Q} and V4 = Q × Q × Q × Q × Q are vector spaces over Q. Now let
B = B1 ∪ B2 ∪ B3 ∪ B4
= {1, x, x², x³, x⁴, x⁵} ∪ {(1 0 0), (0 1 0), (0 2 1)} ∪
⎧⎛ 0 0 ⎞  ⎛ 1 0 ⎞  ⎛ 0 0 ⎞  ⎛ 0 1 ⎞ ⎫
⎩⎝ 1 0 ⎠, ⎝ 0 0 ⎠, ⎝ 0 1 ⎠, ⎝ 0 0 ⎠ ⎭
∪ {(0 0 0 0 1), (0 0 0 1 0), (0 0 1 0 0), (0 1 0 0 0), (1 0 0 0 0)}
⊆ V1 ∪ V2 ∪ V3 ∪ V4 = V.
B is a 4-basis of V as each Bi is a basis of Vi, i = 1, 2, 3, 4.
Example 2.7: Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space over Q where V1 = R, V2 = Q × Q, V3 = Q[x], V4 = R × R × R and V5 = {set of all 2 × 2 matrices with entries from Q}. Clearly V1, V2, V3, V4 and V5 are vector spaces over Q. We see some of the vector spaces Vi over Q are finite dimensional, i.e., have a finite basis, and some of the vector spaces Vj have an infinite number of elements in the basis set. This suggests defining the new notions of finite n-dimensional space and infinite n-dimensional space. To be more specific, in this example V1, V3 and V4 are infinite dimensional vector spaces over Q, while V2 and V5 are finite dimensional vector spaces over Q.
DEFINITION 2.6: Let V = V1 ∪ V2 ∪ … ∪ Vn be an n-vector space of type I over F. If every vector space Vi in V is finite dimensional over F then we say the n-vector space is finite n-dimensional over F. Even if one of the vector spaces Vj in V is infinite dimensional then we say V is infinite dimensional over F. We denote the n-dimension of V by (n1, n2, …, nn), where ni is the dimension of Vi, i = 1, 2, …, n.
We illustrate the definition by some examples.
Example 2.8: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over Q,
where V1 = Q[x], V2 = {set of all 2×2 matrices with entries from
Q} and V3 = Q the one dimensional vector space over Q.
Clearly 3-dimension of the 3-vector space over Q is (∞, 4, 1).
Thus V is an infinite 3-dimensional space over Q.
Example 2.9: Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space
of type-I over Q. Suppose V1 = {set of all 2 × 2, matrices with
entries from Q}; V2 = Q × Q × Q a vector space over Q,
V3 = {All polynomials of degree less than or equal to 7 with
coefficients from Q} and V4 = {the collection of all 5 × 5,
matrices with entries from Q}, we see V1, V2, V3 and V4 are
vector spaces over Q. The 4-dimension of V is (4, 3, 8, 25), so V is a finite 4-dimensional 4-vector space of type I over Q.
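An n-dimension is simply the tuple of component dimensions, and V is finite n-dimensional exactly when every entry is finite. A small illustrative sketch (using math.inf to mark an infinite dimensional component is our convention, not the text's):

```python
import math

def n_dimension(dims):
    """Return the n-dimension tuple and whether V is finite n-dimensional."""
    return tuple(dims), all(d != math.inf for d in dims)

print(n_dimension([4, 3, 8, 25]))     # Example 2.9: ((4, 3, 8, 25), True)
print(n_dimension([math.inf, 4, 1]))  # Example 2.8: infinite dimensional
```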
Having seen n-subspaces, n-bases and n-dimensions of n-vector spaces of type I, we now proceed on to define the notion of an n-transformation of an n-vector space of type I.
DEFINITION 2.7: Let V = V1 ∪ V2 ∪ … ∪ Vn be an n-vector space over a field F of type I and W = W1 ∪ W2 ∪ … ∪ Wm be an m-vector space over the same field F of type I (n ≤ m, m ≥ 2 and n ≥ 2). We call T = T1 ∪ T2 ∪ … ∪ Tn : V → W an n-map if T is defined by Ti : Vi → Wj, 1 ≤ i ≤ n, 1 ≤ j ≤ m, for every i. If each Ti is a linear transformation from Vi to Wj, i = 1, 2, …, n, 1 ≤ j ≤ m, then we call the n-map an n-linear transformation from V to W, or a linear n-transformation from V to W. No two Vi's are mapped onto the same Wj, 1 ≤ i ≤ n, 1 ≤ j ≤ m. Even if one of the Ti is not a linear transformation from Vi to Wj then T is not an n-linear transformation.
We will illustrate this by a simple example.

Example 2.10: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over Q and W = W1 ∪ W2 ∪ W3 ∪ W4 be a 4-vector space over Q. V is of finite (3, 2, 4) dimension and W is of finite (4, 3, 2, 4) dimension. Let T : V → W be a 3-linear transformation defined by T = T1 ∪ T2 ∪ T3 : V1 ∪ V2 ∪ V3 → W1 ∪ W2 ∪ W3 ∪ W4 as follows:
T1 : V1 → W1 given by
T1(a¹₁, a¹₂, a¹₃) = (a¹₁ + a¹₂, a¹₂, a¹₃ + a¹₂, a¹₁ + a¹₂ + a¹₃),
T2 : V2 → W3 defined by
T2(a²₁, a²₂) = (a²₁ + a²₂, a²₁),
and T3 : V3 → W4 defined by
T3(a³₁, a³₂, a³₃, a³₄) = (a³₂, a³₄, a³₄, a³₁ + a³₂).
Clearly T is a 3-linear transformation or linear 3-transformation from V to W, i.e., from the 3-vector space V to the 4-vector space W.
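A linear n-transformation acts by routing each component vector through its own linear map. A sketch following Example 2.10; writing the component maps as plain Python functions on tuples is our representation, not the text's:

```python
def n_transform(maps, v):
    """Apply the i-th component map Ti to the i-th component vector."""
    return tuple(T(x) for T, x in zip(maps, v))

# The three component maps of Example 2.10 (V of dimension (3, 2, 4)).
T1 = lambda a: (a[0] + a[1], a[1], a[2] + a[1], a[0] + a[1] + a[2])  # V1 -> W1
T2 = lambda a: (a[0] + a[1], a[0])                                   # V2 -> W3
T3 = lambda a: (a[1], a[3], a[3], a[0] + a[1])                       # V3 -> W4

v = ((1, 2, 3), (4, 5), (1, 0, 2, 6))
print(n_transform((T1, T2, T3), v))  # ((3, 2, 5, 6), (9, 4), (0, 6, 6, 1))
```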
It may so happen that we have an n-vector space over a field F and it becomes essential for us to make a linear n-transformation to an m-vector space over F where n > m. In such a situation we define a linear n-transformation which we call a shrinking linear n-transformation, as follows.
DEFINITION 2.8: Let V be an n-vector space over F and W an m-vector space over F, n > m. The shrinking n-map T from V = V1 ∪ V2 ∪ … ∪ Vn to W = W1 ∪ W2 ∪ … ∪ Wm is defined as a map from V to W as follows: T = T1 ∪ T2 ∪ … ∪ Tn with Ti : Vi → Wj, 1 ≤ i ≤ n and 1 ≤ j ≤ m, with the condition that two distinct maps Ti : Vi → Wj and Tk : Vk → Wl may have j = l; i.e., the range spaces need not be distinct as in the case of a linear n-map.
Now if each Ti : Vi → Wj is in addition a linear transformation, then we call T = T1 ∪ T2 ∪ … ∪ Tn, the shrinking n-map, a shrinking linear n-transformation or a shrinking n-linear transformation.
We illustrate this situation by the following example.
Example 2.11: Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space defined over Q of 5-dimension (3, 2, 5, 7, 6) and W = W1 ∪ W2 ∪ W3 be a 3-vector space defined over Q of 3-dimension (5, 3, 6). T = T1 ∪ T2 ∪ … ∪ T5 : V → W can only be a shrinking 5-linear transformation, defined by
T1 : V1 → W3,
T2 : V2 → W1,
T3 : V3 → W2,
T4 : V4 → W3 and
T5 : V5 → W1,
where
T1(x¹₁, x¹₂, x¹₃) = (x¹₁ + x¹₂, x¹₃, x¹₂, x¹₂ + x¹₃, x¹₁ + x¹₃, x¹₁) for all (x¹₁, x¹₂, x¹₃) ∈ V1,
T2(x²₁, x²₂) = (x²₁ + x²₂, x²₂, x²₁ + x²₂, x²₁, x²₂) for all (x²₁, x²₂) ∈ V2,
T3(x³₁, x³₂, x³₃, x³₄, x³₅) = (x³₁ + x³₂, x³₂ + x³₃, x³₄ + x³₅) for all (x³₁, …, x³₅) ∈ V3,
T4(x⁴₁, x⁴₂, x⁴₃, x⁴₄, x⁴₅, x⁴₆, x⁴₇) = (x⁴₁ + x⁴₂, x⁴₂ + x⁴₃, x⁴₃ + x⁴₄, x⁴₄ + x⁴₅, x⁴₅ + x⁴₆, x⁴₆ + x⁴₇) for all (x⁴₁, …, x⁴₇) ∈ V4, and
T5(x⁵₁, x⁵₂, x⁵₃, x⁵₄, x⁵₅, x⁵₆) = (x⁵₁, x⁵₂, x⁵₃ + x⁵₄, x⁵₅, x⁵₆) for all (x⁵₁, …, x⁵₆) ∈ V5.
Clearly T is a shrinking linear 5-transformation.
Note: It may sometimes be essential to define a linear n-transformation from an n-vector space V into an m-vector space W, m > n, where not all the spaces of the m-vector space are used and only a set of r vector spaces from W is needed, r < n < m. In such cases we call the linear n-transformation a special shrinking linear n-transformation of V into W.
We illustrate this situation by the following example.
Example 2.12: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over Q and W = W1 ∪ W2 ∪ W3 ∪ W4 ∪ W5 be a 5-vector space over Q. Suppose V is a finite 3-dimension (3, 5, 4) space and W a finite 5-dimension (3, 5, 4, 8, 2) space. Let T = T1 ∪ T2 ∪ T3 : V → W be defined by T1 : V1 → W1, T2 : V2 → W3, T3 : V3 → W1 as follows:
T1(x¹₁, x¹₂, x¹₃) = (x¹₁ + x¹₂, x¹₂ + x¹₃, x¹₂) for all (x¹₁, x¹₂, x¹₃) ∈ V1,
T2(x²₁, x²₂, x²₃, x²₄, x²₅) = (x²₂, x²₁, x²₃ + x²₅, x²₄) for all (x²₁, …, x²₅) ∈ V2,
T3(x³₁, x³₂, x³₃, x³₄) = (x³₁ + x³₂, x³₄ + x³₁, x³₂ + x³₃) for all (x³₁, x³₂, x³₃, x³₄) ∈ V3.
Thus T : V → W is only a special shrinking linear 3-transformation.
DEFINITION 2.9: Let V be an n-vector space over the field F and W be an n-vector space over the same field F. T = T1 ∪ T2 ∪ … ∪ Tn is a linear one to one n-transformation if each Ti is a linear transformation from Vi to some Wj and for no Vk, k ≠ i, do we have Tk : Vk → Wj; i.e., no two distinct domain spaces have the same range space. Then we call T a one to one vector space preserving linear n-transformation.
We just show this by a simple example.
Example 2.13: Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q and W = W1 ∪ W2 ∪ W3 ∪ W4 be another 4-vector space over Q. Let V be a (3, 4, 5, 2) finite 4-dimensional space and W a (2, 5, 6, 3) finite 4-dimensional space. Let T = T1 ∪ T2 ∪ T3 ∪ T4 : V = V1 ∪ V2 ∪ V3 ∪ V4 → W1 ∪ W2 ∪ W3 ∪ W4 be given by T1 : V1 → W2, T2 : V2 → W3, T3 : V3 → W4 and T4 : V4 → W1, where T1, T2, T3 and T4 are linear transformations. Clearly T is a linear one to one 4-transformation.
Note: In Definition 2.9 it is interesting and important to note that the Ti's need not be 1-1 linear transformations with dim Vi = dim Wj when Ti : Vi → Wj; i.e., the Ti's need not be vector space isomorphisms for i = 1, 2, …, n. We now give a new name for an n-linear transformation T : V → W where T = T1 ∪ T2 ∪ … ∪ Tn with each Ti a vector space isomorphism, i.e., each Ti a 1-1 and onto linear transformation from Vi to Wj, 1 ≤ i ≤ n, 1 ≤ j ≤ n.
DEFINITION 2.10: Let V and W be n-vector spaces defined over a field F. We say V and W are of the same n-dimension if and only if, when the n-dimension of V is (n1, …, nn), the n-dimension of W is a permutation of (n1, n2, …, nn).
Example 2.14: Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space over R of 5-dimension (7, 2, 3, 4, 5). Suppose W = W1 ∪ W2 ∪ W3 ∪ W4 ∪ W5 is a 5-vector space over R of 5-dimension (2, 5, 4, 7, 3); then we say V and W are of the same 5-dimension. If X = X1 ∪ X2 ∪ X3 ∪ X4 ∪ X5 is a 5-vector space of 5-dimension (2, 7, 9, 3, 4) then clearly X and V are not 5-vector spaces of the same dimension. So for any n-dimensional n-vector space V with distinct ni we have only n! possible n-dimensions in the same dimension class, including that of V.
We just show this by an example.
Example 2.15: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space of 3-dimension (7, 5, 3). Then W, X, Y, Z and S of 3-dimensions (5, 7, 3), (5, 3, 7), (7, 3, 5), (3, 5, 7) and (3, 7, 5) are of the same dimension as V.
In view of this we have the following interesting theorem.
THEOREM 2.1: Let V be a finite n-dimension n-vector space over the field F of n-dimension (n1, n2, …, nn) with ni ≠ nj for i ≠ j; then there exist n! finite n-dimension n-vector spaces over F of the same dimension as that of V, including V.
Proof: Given V is a finite n-vector space of n-dimension (n1, n2, …, nn), i.e., each ni < ∞ and i ≠ j implies ni ≠ nj. We know two n-vector spaces V and W are of the same dimension if and only if the n-dimension of one (say V) can be got by permuting the n-dimension of W, or vice versa. Further, from group theory we know that a set (1, 2, …, n) has n! permutations. Thus we have n! n-vector space dimensions obtained from (n1, n2, …, nn).
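The count in Theorem 2.1 is just the number of permutations of the n distinct component dimensions; checking it against Example 2.15 (an illustrative sketch only):

```python
from itertools import permutations
from math import factorial

dims = (7, 5, 3)                 # the 3-dimension of Example 2.15
perms = set(permutations(dims))
print(len(perms), factorial(3))  # 6 6: six dimension tuples, V's included
```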
Note: If we have an n-vector space of n-dimension (m1, m2, …, mn) with some mi ≠ nj, 1 ≤ i ≤ n, then we get another set of n! n-vector spaces of n-dimension (m1, m2, …, mn) and all its permutations. Clearly this set of n-vector spaces of n-dimension (m1, m2, …, mn) is distinct from the n-vector spaces of n-dimension (n1, n2, …, nn). From this one can conclude that we have an infinite number of n-vector spaces of varying dimensions. Only n-vector spaces of the same n-dimension can be n-isomorphic.
DEFINITION 2.11: Let V and W be n-vector spaces of the same dimension. Let the n-dimension of V be (n1, n2, …, nn) and that of W be a permutation of it, say (n4, n2, nn, …, n5); i.e., let V = V1 ∪ V2 ∪ … ∪ Vn and W = W1 ∪ W2 ∪ … ∪ Wn. A linear n-transformation T = T1 ∪ T2 ∪ … ∪ Tn : V → W is defined to be an n-vector space linear n-isomorphism if and only if each Ti : Vi → Wj is such that dim Vi = dim Wj, 1 ≤ i, j ≤ n.
We illustrate this situation by an example.
Example 2.16: Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 and W = W1 ∪ W2 ∪ W3 ∪ W4 ∪ W5 be two 5-vector spaces of the same dimension. Let the 5-dimensions of V and W be (3, 2, 5, 4, 6) and (4, 2, 5, 3, 6) respectively. Suppose T = T1 ∪ T2 ∪ T3 ∪ T4 ∪ T5 : V → W is given by T1(V1) = W4, T2(V2) = W2, T3(V3) = W3, T4(V4) = W1 and T5(V5) = W5; then T is a one to one n-isomorphic n-linear transformation of V to W (n = 5). Suppose P : V → W where P = P1 ∪ P2 ∪ P3 ∪ P4 ∪ P5 is given by P1 : V1 → W2, P2 : V2 → W3, P3 : V3 → W4, P4 : V4 → W5 and P5 : V5 → W1, the Pi linear transformations, so that P is a 5-linear transformation from V to W. Clearly P is not a one to one isomorphic 5-linear transformation of V; P is only a one to one 5-linear transformation of V.
Now, having seen different types of linear n-transformations of an n-vector space V to W, W a linear n-space, we proceed on to define the notion of the n-kernel of T.
DEFINITION 2.12: Let V = V1 ∪ V2 ∪ … ∪ Vn be an n-vector space over the field F and W = W1 ∪ W2 ∪ … ∪ Wm be an m-vector space over the field F. Let T = T1 ∪ T2 ∪ … ∪ Tn be an n-linear transformation from V to W defined by Ti : Vi → Wj, 1 ≤ i ≤ n and 1 ≤ j ≤ m, such that no two domain spaces are mapped onto the same range space. The n-kernel of T is denoted by ker T = ker T1 ∪ ker T2 ∪ … ∪ ker Tn where
ker Ti = {vi ∈ Vi | Ti(vi) = 0}, i = 1, 2, …, n.
Thus
ker T = {v1 ∪ v2 ∪ … ∪ vn ∈ V1 ∪ V2 ∪ … ∪ Vn | T(v1 ∪ v2 ∪ … ∪ vn) = T1(v1) ∪ T2(v2) ∪ … ∪ Tn(vn) = 0 ∪ 0 ∪ … ∪ 0}.
It is easily verified that ker T is an n-subgroup of V. Further, ker T is an n-subspace of V.
We will illustrate this situation by the following example.
Example 2.17: Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over
Q of 3-dimension (3, 2, 4). Let W = W1 ∪ W2 ∪ W3 ∪ W4 be a
4-vector space over Q of 4-dimension (4, 3, 2, 5). Let T = T1 ∪
T2 ∪ T3: V → W be a 3-linear transformation given by
T1:V1 → W4,
T 1 ( x , x , x ) = ( x 11 + x 12 , x 13 , x 11 , x 11 + x 13 , x 12 )
1
1
1
2
1
3
for all x11 , x12 , x13 ∈ V1 ,
ker T1 = {(x11 , x12 , x13 ) T(x11 , x12 , x13 ) = (0) i.e. x13 = 0 , x11 = 0 ,
x12 = 0 and x11 + x12 = 0 and x11 + x13 = 0 }
27
Thus ker T1 = {(0, 0, 0)} is the trivial subspace of V1,
T2: V2 → W3, T2 (x12 , x 22 ) = (x12 + x 22 , x12 )
for all x12 , x 22 ∈ V2 .
ker T2 = {(x12 , x 22 ) T(x12 , x 22 ) = (0)}
i.e. x12 + x 22 = 0 and x12 = 0 which forces x 22 = 0 . Thus ker T2 =
{(0 0)}.
Now
T3: V3 → W1
given by
T3 (x13 , x 32 , x 33 , x 34 ) = (x13 + x 32 , x 33 , x 34 , x 33 + x 34 )
for all x13 , x 32 , x 33 , x 34 ∈ V3 .
Now ker T3 gives
x13 + x 32 = 0 , x 33 = 0 , x 34 = 0 , x 33 + x 34 = 0 .
This gives the condition x13 = − x 32 and x 33 = x 34 = 0 . Thus
ker T3 = {(x13 , − x13 ,0,0)} .
Thus ker T3 is a subspace of V3. Hence we see the 3-kernel of T is a 1-subspace of V, i.e. 〈{(0, 0, 0) ∪ (0, 0) ∪ (x^3_1, −x^3_1, 0, 0)}〉. We can define the kernel for any n-linear transformation T, be it a usual n-linear transformation or a one to one n-linear transformation.
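The component kernels in Example 2.17 can be checked numerically. The sketch below is an illustration only (the book works over Q; here the three component maps are encoded as real matrices, and a rank computation stands in for the hand calculation): the nullity of each matrix recovers ker T1 = ker T2 = {0} and the one-dimensional ker T3.

```python
import numpy as np

def null_space_dim(A, tol=1e-10):
    """Dimension of the kernel of the linear map x -> A @ x."""
    rank = np.linalg.matrix_rank(A, tol=tol)
    return A.shape[1] - rank

# Matrices of T1, T2, T3 from Example 2.17 (rows = image coordinates).
# T1(x1,x2,x3) = (x1+x2, x3, x1, x1+x3, x2): V1 (dim 3) -> W4 (dim 5)
T1 = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 0], [1, 0, 1], [0, 1, 0]], float)
# T2(x1,x2) = (x1+x2, x1): V2 (dim 2) -> W3 (dim 2)
T2 = np.array([[1, 1], [1, 0]], float)
# T3(x1,x2,x3,x4) = (x1+x2, x3, x4, x3+x4): V3 (dim 4) -> W1 (dim 4)
T3 = np.array([[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 1, 1]], float)

# The 3-kernel of T is the union of the component kernels.
kernel_dims = [null_space_dim(T) for T in (T1, T2, T3)]
print(kernel_dims)  # [0, 0, 1]: only ker T3 is non-trivial, spanned by (x, -x, 0, 0)
```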
It is easily verified that for any n-vector space V = V1 ∪ V2 ∪ … ∪ Vn and any m-vector space W = W1 ∪ W2 ∪ … ∪ Wm over the same field F, if T: V → W is any n-linear transformation, then ker T = ker T1 ∪ ker T2 ∪ … ∪ ker Tn is always a t-subspace of V, as each ker Ti is a subspace of Vi, i = 1, 2, …, n. It may so happen that some of the ker Ti are the zero space; in such a case we call the subspace of V only a t-subspace of V, where 1 ≤ t ≤ n. If all the subspaces ker Ti, i = 1, 2, …, n, are zero, then we call ker T the n-zero subspace of V.
Now we proceed on to give some more results in case of n-vector spaces and their related linear n-transformations.
DEFINITION 2.13: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over a field F of type I. Let T = T1 ∪ T2 ∪ … ∪ Tn: V → V be a linear n-transformation of V such that each Ti: Vi → Vi, i = 1, 2, …, n; i.e., each Ti is a linear operator on Vi. Then we define T = T1 ∪ T2 ∪ … ∪ Tn to be a n-linear operator on V.
Clearly not all n-linear transformations are n-linear operators on V. Thus T is a n-linear operator on V if and only if each Ti is a linear operator from Vi to Vi, 1 ≤ i ≤ n.
This is the marked difference between a linear operator on a vector space and a n-linear operator on a n-vector space. A n-linear transformation from the n-vector space V to the same n-vector space V need not always be a n-linear operator.
We illustrate this situation by the following example.
Example 2.18: Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q of 4-dimension (5, 4, 2, 3). Let T = T1 ∪ T2 ∪ T3 ∪ T4: V → V be a 4-linear transformation given by T1: V1 → V2, T2: V2 → V3, T3: V3 → V4 and T4: V4 → V1. Clearly none of the linear transformations Ti are linear operators, for they have different domain and range spaces; i = 1, 2, 3, 4. So though T is on the same n-vector space V, T is still a linear n-transformation and not a linear n-operator on V, where n = 4.
Suppose we define a 4-linear transformation P = P1 ∪ P2 ∪ P3 ∪ P4: V → V by P1: V1 → V1, P2: V2 → V2, P3: V3 → V3, and P4: V4 → V4; clearly the 4-linear transformation P is a 4-linear operator on V.
The above example shows the reader that in general a n-linear transformation of a n-vector space V need not be a n-linear operator on V. But of course, trivially, every n-linear operator on V is a n-linear transformation on V.
We have the following result in case of finite n-dimensional n-vector spaces over the field F.
THEOREM 2.2: Let V = V1 ∪ V2 ∪ … ∪ Vn and W = W1 ∪ W2 ∪ … ∪ Wn be any two n-vector spaces over the field F. Let
B = {(α^1_1, α^1_2, …, α^1_{n1}) ∪ (α^2_1, α^2_2, …, α^2_{n2}) ∪ … ∪ (α^n_1, α^n_2, …, α^n_{nn})}
be a n-basis of V = V1 ∪ V2 ∪ … ∪ Vn; i.e., (α^i_1, α^i_2, …, α^i_{ni}) is a basis of Vi, i = 1, 2, …, n. Let
C = {(β^1_1, β^1_2, …, β^1_{n1}) ∪ (β^2_1, β^2_2, …, β^2_{n2}) ∪ … ∪ (β^n_1, β^n_2, …, β^n_{nn})}
be any n-vector in W = W1 ∪ W2 ∪ … ∪ Wn. Then there is precisely one linear n-transformation T = T1 ∪ T2 ∪ … ∪ Tn from V into W such that T(α^i_j) = β^i_j, j = 1, 2, …, ni, 1 ≤ i ≤ n.
Proof: To prove that there is some n-linear transformation T with T(B) = C, it is enough to show that for T = T1 ∪ T2 ∪ … ∪ Tn we have Ti(α^i_j) = β^i_j, i = 1, 2, …, n and j = 1, 2, …, ni.
Given the basis (α^i_1, α^i_2, …, α^i_{ni}) of Vi, for each αi in Vi there is a unique ni-tuple (x^i_1, x^i_2, …, x^i_{ni}) such that αi = x^i_1 α^i_1 + x^i_2 α^i_2 + … + x^i_{ni} α^i_{ni}. For this vector αi we define
Ti(αi) = x^i_1 β^i_1 + x^i_2 β^i_2 + … + x^i_{ni} β^i_{ni},
true for each i; i = 1, 2, …, n.
Clearly Ti is a well defined rule associating with each vector αi in Vi a vector Ti(αi) in Wi. From the definition it is clear that Ti(α^i_j) = β^i_j for each j. To see that Ti is linear, let βi = y^i_1 α^i_1 + y^i_2 α^i_2 + … + y^i_{ni} α^i_{ni} be in Vi and ci be any scalar.
Then
Ti(ci αi + βi) = (ci x^i_1 + y^i_1) β^i_1 + … + (ci x^i_{ni} + y^i_{ni}) β^i_{ni}.
On the other hand,
ci(Ti(αi)) + Ti(βi) = ci Σ_{j=1}^{ni} x^i_j β^i_j + Σ_{j=1}^{ni} y^i_j β^i_j = Σ_{j=1}^{ni} (ci x^i_j + y^i_j) β^i_j,
and thus Ti(ci αi + βi) = ci(Ti αi) + Ti βi, true for each i; i = 1, 2, …, n. If U = U1 ∪ U2 ∪ … ∪ Un is a linear n-transformation
from V into W with Ui(α^i_j) = β^i_j, j = 1, 2, …, ni, true for each i; i = 1, 2, …, n, then for the vector αi = Σ_{j=1}^{ni} x^i_j α^i_j we have
Ui(αi) = Ui(Σ_{j=1}^{ni} x^i_j α^i_j) = Σ_{j=1}^{ni} x^i_j Ui(α^i_j) = Σ_{j=1}^{ni} x^i_j β^i_j,
true for each and every i; i = 1, 2, …, n. Thus Ui is exactly the rule Ti, i = 1, 2, …, n; hence U is exactly the rule T which we have defined. This shows the n-linear transformation T = T1 ∪ T2 ∪ … ∪ Tn is unique.
Having defined n-kernel of a n-linear transformation T we now
proceed on to define the n-range of the n-linear transformation
T = T1 ∪ T2 ∪ … ∪ Tn.
DEFINITION 2.14: Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear transformation from the n-vector space V = V1 ∪ V2 ∪ … ∪ Vn into another m-vector space W, m > n. The range of T, called the n-range of T and denoted by R_T^n, is a p-subspace of W, p < m; that is
R_T^n = {β = β1 ∪ β2 ∪ … ∪ βm ∈ W | β = T(α) for some α = α1 ∪ α2 ∪ … ∪ αn in V}.
Clearly if β, γ ∈ R_T^n and c is any scalar, then there are n-vectors α, δ in V such that Tα = β and Tδ = γ. Since T is n-linear,
T(cα + δ) = cTα + Tδ = cβ + γ,
which is in R_T^n.
Now let V and W be any two n-vector space and m-vector space respectively defined over the field F and let T = T1 ∪ T2 ∪ … ∪ Tn be a linear n-transformation from V into W. The n-null space of T is the n-set of all n-vectors α in V such that
Tα = T(α1 ∪ α2 ∪ … ∪ αn) = (T1 ∪ T2 ∪ … ∪ Tn)(α1 ∪ α2 ∪ … ∪ αn) = T1α1 ∪ T2α2 ∪ … ∪ Tnαn = 0 ∪ 0 ∪ … ∪ 0.
If the n-vector space V is finite n-dimensional, the n-rank of T is the n-dimension of the n-range of T; it will vary depending on the nature of the n-linear transformation; for instance, if T is a shrinking n-linear transformation it would be different, and so on.
Now we can prove the most important theorem relating the n-rank of T and the n-nullity of T, for a n-linear transformation only, as for other n-linear transformations, like the shrinking n-linear transformation, the result in general may not be true.
THEOREM 2.3: Let V and W be a n-vector space and a m-vector space over the field F, m > n, and let T = T1 ∪ T2 ∪ … ∪ Tn be a linear n-transformation from V to W such that Ti: Vi → Wj and the Wj's are distinct spaces for each Ti; i.e., no two subspaces of V are mapped on to the same subspace in W. Suppose V is (n1, n2, …, nn) finite dimensional; then n-rank T + n-nullity T = n-dim V.
Proof: Given V = V1 ∪ V2 ∪ … ∪ Vn is a n-vector space over F
and W = W1 ∪ W2 ∪ … ∪ Wm is a m-vector space over F (m >
n) of dimensions (n1, n2, …, nn) and (m1, m2, …, mn)
respectively. T = T1 ∪ T2 ∪ … ∪ Tn is a n-linear transformation
such that each Ti is a linear transformation from Vi to a unique
Wj, i.e. no two vector spaces Vi and Vk can be mapped to same
Wj if i ≠ k; 1 ≤ i, k ≤ n and 1 ≤ j ≤ m. We must prove n-rank T + n-nullity T = n-dim V,
i.e. n-rank (T1 ∪ T2 ∪ … ∪ Tn) + n-nullity (T1 ∪ T2 ∪ … ∪ Tn) = n-dim (V1 ∪ V2 ∪ … ∪ Vn),
i.e. (rank T1 ∪ rank T2 ∪ … ∪ rank Tn) + (nullity T1 ∪ nullity T2 ∪ … ∪ nullity Tn) = (dim V1, dim V2, …, dim Vn) = (n1, n2, …, nn).
Suppose N = N1 ∪ N2 ∪ … ∪ Nn is the p-null space of the n-space V; 0 ≤ p ≤ n. Let
α = {(α^1_1, α^1_2, …, α^1_{k1}) ∪ (α^2_1, α^2_2, …, α^2_{k2}) ∪ … ∪ (α^n_1, α^n_2, …, α^n_{kn})}
be a n-basis for N. Here 0 ≤ ki ≤ ni; i = 1, 2, …, n. If ki = 0, then the corresponding null space is the zero space. We now show the working for any i, Ti: Vi → Wj; the result we prove is true for all i = 1, 2, …, n.
Let {α^i_1, α^i_2, …, α^i_{ki}} be a basis for Ni, the null space of Ti. There are vectors α^i_{ki+1}, …, α^i_{ni} in Vi such that {α^i_1, α^i_2, …, α^i_{ni}} is a basis for Vi; true for each i; i = 1, 2, …, n. We shall now prove that {Ti α^i_{ki+1}, …, Ti α^i_{ni}} is a basis for the range of Ti. The vectors Ti α^i_1, Ti α^i_2, …, Ti α^i_{ni} certainly span the range of Ti, and since Ti α^i_j = 0 for j ≤ ki, we see that Ti α^i_{ki+1}, …, Ti α^i_{ni} span the range. To see that these vectors are linearly independent, suppose we have scalars cj such that
Σ_{j=ki+1}^{ni} cj Ti(α^i_j) = 0.
This says that
Ti(Σ_{j=ki+1}^{ni} cj α^i_j) = 0,
and accordingly the vector
αi = Σ_{j=ki+1}^{ni} cj α^i_j
is in the null space of Ti. Since α^i_1, α^i_2, …, α^i_{ki} form a basis for Ni, there must be scalars b^i_1, b^i_2, …, b^i_{ki} such that αi = Σ_{j=1}^{ki} b^i_j α^i_j. Thus
Σ_{j=1}^{ki} b^i_j α^i_j − Σ_{j=ki+1}^{ni} cj α^i_j = 0.
Since α^i_1, α^i_2, …, α^i_{ni} are linearly independent, we must have b^i_1 = … = b^i_{ki} = c_{ki+1} = … = c_{ni} = 0. If ri is the rank of Ti, the fact that Ti α^i_{ki+1}, …, Ti α^i_{ni} form a basis for the range of Ti tells us that ri = ni − ki. Since ki is the nullity of Ti and ni is the dimension of Vi, we get rank Ti + nullity Ti = dim Vi. This is true for each and every i. That is
(rank T1 + nullity T1) ∪ (rank T2 + nullity T2) ∪ … ∪ (rank Tn
+ nullity Tn)
= dim (V1 ∪ V2 ∪ … ∪ Vn)
i.e., (rank T1 ∪ rank T2 ∪ … ∪ rank Tn) + (nullity T1 ∪ nullity
T2 ∪ … ∪ nullity Tn)
= dim (V1 ∪ V2 ∪ … ∪ Vn) that is
rank (T1 ∪ T2 ∪ … ∪ Tn) + nullity (T1 ∪ T2 ∪ … ∪ Tn)
= (n1, n2, … , nn).
n rank T + n nullity T = n dim V.
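The componentwise identity just proved can be illustrated numerically. In this sketch (an illustration, not from the text: the component maps are arbitrary real matrices, one of them deliberately rank-deficient, with rank read off the singular values), rank Ti + nullity Ti = dim Vi holds in every component.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-linear transformation T = T1 ∪ T2 ∪ T3; each component Ti: Vi -> Wj
# is encoded as a (dim Wj) x (dim Vi) matrix. n-dim V = (3, 2, 4).
Ts = [rng.standard_normal((5, 3)),
      rng.standard_normal((2, 2)),
      rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))]  # rank 2, nullity 2

def rank_and_nullity(A, tol=1e-10):
    """Rank = number of significant singular values; nullity = dim Vi - rank."""
    s = np.linalg.svd(A, compute_uv=False)
    rank = int((s > tol).sum())
    return rank, A.shape[1] - rank

pairs = [rank_and_nullity(T) for T in Ts]
# rank Ti + nullity Ti = dim Vi, in every component
assert all(r + k == T.shape[1] for (r, k), T in zip(pairs, Ts))
```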
Now consider the relation
n-rank T + n-nullity T = n-dim (V) = (n1, n2, …, nn).
We assume the n-linear transformation is not shrinking; it is a n-linear transformation as given in Definition 2.12. Also we see that if nullity Ti = 0 for some i, then rank Ti = dim Vi. Since a p-null space in general need not always be a non-trivial subspace, we may have the p-null space of the n-vector space be such that p < n.
Now we proceed on to the algebra of n-linear transformations. Let us assume V and W are a n-vector space and a m-vector space respectively, defined over the field F.
THEOREM 2.4: Let V and W be any two n-vector space and m-vector space respectively defined over the field F (m > n). Let T
and U be n-linear transformations as given in definition from V
into W. The n-function (T + U) defined by (T + U)α = Tα + Uα
is a n-linear transformation from V into W, if c is any element
from F, the function cT defined by (cT)α = cTα is a n-linear
transformation from V into W.
The set of all n-linear transformations from V into W with
addition and scalar multiplication defined above is an n-vector
space over the field F.
Proof: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over F
and W = W1 ∪ W2 ∪ … ∪ Wm (m>n) a m-vector space over F.
T = T1 ∪ T2 ∪ … ∪ Tn a n-linear transformation from V to W.
If U = U1 ∪ U2 ∪ …∪ Un is a n-linear transformation from V
into W; define the n-function (T + U) for α = α1 ∪ α2 ∪ … ∪ αn
∈ V by (T + U) α = Tα + Uα then (T + U) is a n-linear
transformation of V into W.
(T + U) (cα + β)
= [(T1 ∪ T2 ∪ … ∪ Tn) + (U1 ∪ U2 ∪ … ∪ Un)] [cα + β]
= [(T1 + U1) ∪ (T2 + U2) ∪ … ∪ (Tn + Un)]
[c(α1 ∪ α2 ∪ … ∪ αn) + (β1 ∪ β2 ∪ … ∪ βn))]
= [(T1 + U1) ∪ (T2 + U2) ∪ … ∪ (Tn + Un)]
[(cα1 + β1) ∪ (cα2 + β2) ∪ … ∪ (cαn + βn)]
= [(T1 + U1) (cα1 + β1)] ∪ [(T2 + U2) (cα2 + β2)] ∪ … ∪
[(Tn + Un) (cαn + βn)].
Now using the properties of linear transformation on linear
vector space we get (Ti + Ui) (cαi + βi) = c (Ti + Ui) (αi) + (Ti +
Ui) (βi) for each i = 1, 2, …, n.
Thus (T + U)(cα + β) = {[c(T1 + U1)α1 ∪ c(T2 + U2)α2 ∪ … ∪ c(Tn + Un)αn] + [(T1 + U1)β1 ∪ (T2 + U2)β2 ∪ … ∪ (Tn + Un)βn]} = c(T + U)α + (T + U)β, which shows (T + U) is a n-linear transformation from V into W.
Next,
(cT)(dα + β)
= c[(T1 ∪ T2 ∪ … ∪ Tn)[d(α1 ∪ α2 ∪ … ∪ αn) + (β1 ∪ β2 ∪ … ∪ βn)]]
= c[(T1 ∪ T2 ∪ … ∪ Tn)[(dα1 + β1) ∪ (dα2 + β2) ∪ … ∪ (dαn + βn)]]
= c[T1(dα1 + β1) ∪ T2(dα2 + β2) ∪ … ∪ Tn(dαn + βn)]
= c[(dT1α1 + T1β1) ∪ (dT2α2 + T2β2) ∪ … ∪ (dTnαn + Tnβn)] (since each Ti is a linear transformation)
= d[(cT)α] + (cT)β.
This shows cT is a n-linear transformation from V into W.
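The vector-space structure asserted in Theorem 2.4 is easy to see concretely when each component map is represented by a matrix. The sketch below is an illustration only (all dimensions and entries are made up): both T + U and cT act componentwise, and (T + U)α = Tα + Uα.

```python
import numpy as np

rng = np.random.default_rng(1)

# n-linear transformations as tuples of component matrices; component i
# maps Vi (dim n_i) into its matched component of W (dim m_i).
n_dims = [3, 2, 4]
m_dims = [5, 2, 4]
T = [rng.standard_normal((m, n)) for m, n in zip(m_dims, n_dims)]
U = [rng.standard_normal((m, n)) for m, n in zip(m_dims, n_dims)]

def apply(F, alpha):           # (F1 ∪ ... ∪ Fn)(α1 ∪ ... ∪ αn), componentwise
    return [Fi @ ai for Fi, ai in zip(F, alpha)]

def add(F, G):                 # (T + U)i = Ti + Ui
    return [Fi + Gi for Fi, Gi in zip(F, G)]

def scale(c, F):               # (cT)i = c Ti
    return [c * Fi for Fi in F]

alpha = [rng.standard_normal(n) for n in n_dims]

# (T + U)α = Tα + Uα and (cT)α = c(Tα), componentwise
lhs = apply(add(T, U), alpha)
rhs = [t + u for t, u in zip(apply(T, alpha), apply(U, alpha))]
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
assert all(np.allclose(a, 2.5 * b)
           for a, b in zip(apply(scale(2.5, T), alpha), apply(T, alpha)))
```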
THEOREM 2.5: Let V be a n-vector space of n-dimension (n1, n2, …, nn) over the field F, and let W be a m-vector space over the same field F with m-dimension (m1, m2, …, mn) (m > n). Then Ln(V, W), the space of all n-linear transformations of V into W, is a finite dimensional n-space over F of n-dimension (m_{i1} n1, m_{i2} n2, …, m_{in} nn), where 1 ≤ i1, i2, …, in ≤ m.
Proof: Let
B = {(α^1_1, α^1_2, …, α^1_{n1}) ∪ (α^2_1, α^2_2, …, α^2_{n2}) ∪ … ∪ (α^n_1, α^n_2, …, α^n_{nn})}
be a n-basis for V = V1 ∪ V2 ∪ … ∪ Vn, the n-vector space of n-dimension (n1, n2, …, nn). Let
C = {(β^1_1, β^1_2, …, β^1_{m1}) ∪ (β^2_1, β^2_2, …, β^2_{m2}) ∪ … ∪ (β^m_1, β^m_2, …, β^m_{mm})}
be a m-basis of the m-vector space W = W1 ∪ W2 ∪ … ∪ Wm of m-dimension (m1, m2, …, mm).
Let Ln(V, W) be the set of all n-linear transformations of V into W. Fix i and the matched j (no two spaces Vi of V are mapped into the same Wj). For every pair of integers (p, q) with 1 ≤ p ≤ mj and 1 ≤ q ≤ ni, we define a linear transformation E^{p,q} of Vi into Wj by
E^{p,q}(α^i_t) = 0 if t ≠ q and E^{p,q}(α^i_t) = β^j_p if t = q; that is, E^{p,q}(α^i_t) = δ_{tq} β^j_p.
By Theorem 2.2 there is a unique such linear transformation from Vi into Wj. We claim that the mj ni transformations E^{p,q} form a basis for L(Vi, Wj). This is true for each i, i = 1, 2, …, n, and the appropriate j, 1 ≤ j ≤ m. Let Ti be a linear transformation from Vi into Wj. For each k, 1 ≤ k ≤ ni, let A_{1k}, …, A_{mj k} be the coordinates of the vector Ti α^i_k in the ordered basis (β^j_1, β^j_2, …, β^j_{mj}) of Wj given in C.
That is,
Ti α^i_k = Σ_{p=1}^{mj} A_{pk} β^j_p.   (1)
We wish to show that
Ti = Σ_{p=1}^{mj} Σ_{q=1}^{ni} A_{pq} E^{p,q}.   (2)
Let Ui be the linear transformation in the right hand member of (2). Then for each k,
Ui α^i_k = Σ_p Σ_q A_{pq} E^{p,q}(α^i_k) = Σ_p Σ_q A_{pq} δ_{kq} β^j_p = Σ_{p=1}^{mj} A_{pk} β^j_p = Ti α^i_k,
and consequently Ui = Ti. Now from (2) we see the E^{p,q} span L(Vi, Wj). We must now only show that they form a linearly independent set. This is clear from the fact that if
Ui = Σ_p Σ_q A_{pq} E^{p,q}
is the zero transformation, then Ui α^i_k = 0 for each k, so that
Σ_{p=1}^{mj} A_{pk} β^j_p = 0,
and the independence of the β^j_p implies that A_{pk} = 0 for every p and k. Since this is true for every i, i = 1, 2, …, n, we have
Ln(V, W) = L(V1, W_{i1}) ∪ L(V2, W_{i2}) ∪ … ∪ L(Vn, W_{in}),
where i1, i2, …, in are distinct elements from the set {1, 2, …, m} and m > n. Hence Ln(V, W) is a n-space of n-dimension (m_{i1} n1, m_{i2} n2, …, m_{in} nn) over the same field F. This n-space will be known as the n-space of n-linear transformations of the n-vector space V = V1 ∪ V2 ∪ … ∪ Vn of n-dimension (n1, n2, …, nn) into the m-vector space W = W1 ∪ W2 ∪ … ∪ Wm of m-dimension (m1, m2, …, mn), m > n.
Now having proved that the space of all n-linear transformations
of a n-vector space V into a m-vector space W forms a n-vector
space over the same field F, we prove another interesting
theorem.
THEOREM 2.6: Let V and W be two n-vector spaces of n-dimensions (n1, n2, …, nn) and (t1, t2, …, tn) respectively, defined over the field F. Let Z be a m-vector space defined over the same field F (m > n). Let T be a n-linear transformation of V into W and U be a n-linear transformation from W into Z. Then the composed function UT defined by (UT)(α) = U(T(α)), α ∈ V, is a n-linear transformation from V into Z.
Proof: Given V = V1 ∪ V2 ∪ … ∪ Vn and W = W1 ∪ W2 ∪ … ∪ Wn are two n-vector spaces over F, and Z = Z1 ∪ Z2 ∪ … ∪ Zm is a m-vector space over F, m > n. T: V → W is a n-linear transformation; that is, T = T1 ∪ T2 ∪ … ∪ Tn: V → W with Ti: Vi → Wj, and no two vector spaces in V are mapped into the same vector space Wj; i = 1, 2, …, n and 1 ≤ j ≤ n.
Now U = U1 ∪ … ∪ Un: W → Z is a n-linear transformation such that Uj: Wj → Zk, j = 1, 2, …, n and 1 ≤ k ≤ m, such that no two subspaces of W are mapped into the same Zk.
Now
(Uj Ti)(cαi + βi)
= Uj[Ti(cαi + βi)]
= Uj[Ti(cαi) + Ti(βi)]
= Uj[c Ti(αi) + Ti(βi)]
= Uj[cωj + δj]  (as Ti: Vi → Wj; ωj, δj ∈ Wj)
= c Uj(ωj) + Uj(δj)
= c ak + bk; ak, bk ∈ Zk.
Thus Uj Ti is a linear transformation from Vi to Zk. Hence the claim, for the result is true for each i and each j. Thus UT is a n-linear transformation from V to Z.
So
U ∘ T = (U1 ∪ U2 ∪ … ∪ Un) ∘ (T1 ∪ T2 ∪ … ∪ Tn) = U1 T_{i1} ∪ U2 T_{i2} ∪ … ∪ Un T_{in},
where (i1, i2, …, in) is a permutation of 1, 2, 3, …, n. Now for notational convenience we recall that if V = V1 ∪ V2 ∪ … ∪ Vn is a n-vector space over a field F, then the Vi's are called the component subvector spaces of V. Vi is also known as a component of V.
Now we proceed on to define the notion of linear n-operator.
DEFINITION 2.15: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over F. A n-linear operator on V is a n-linear transformation T from V to V such that T = T1 ∪ T2 ∪ … ∪ Tn with Ti: Vi → Vi for 1 ≤ i ≤ n. Thus if in the above theorem not only V = W = Z, but U and T are such that Ti: Vi → Vi and Ui: Vi → Vi, so that U and T are n-linear operators on the n-space V, we see the composition UT is again a n-linear operator on V.
Thus the n-space Ln(V, V) has a multiplication defined as composition. In this case the operator TU is also defined. In general TU ≠ UT, i.e., UT − TU ≠ 0.
Now Ln(V, V) is a n-vector space of dimension (n1^2, n2^2, …, nn^2), the n-dimension of V being (n1, n2, …, nn).
UT = (U1 ∪ U2 ∪ … ∪ Un) ∘ (T1 ∪ T2 ∪ … ∪ Tn) = U1T1 ∪ U2T2 ∪ … ∪ UnTn,
TU = (T1 ∪ T2 ∪ … ∪ Tn) ∘ (U1 ∪ U2 ∪ … ∪ Un) = T1U1 ∪ T2U2 ∪ … ∪ TnUn.
Here Ti: Vi → Vi and Ui: Vi → Vi, i = 1, 2, …, n.
Only in this case is T^2 = TT defined, and in general T^k = TT…T (k times), k = 1, 2, …. We define T^0 = I1 ∪ I2 ∪ … ∪ In, the identity n-function of V = V1 ∪ V2 ∪ … ∪ Vn. It may so happen, depending on each Vi, that different powers of the Ti approach the identity for varying linear transformations. That is, if T = T1 ∪ T2 ∪ … ∪ Tn on V = V1 ∪ V2 ∪ … ∪ Vn is such that Ti: Vi → Vi (only) for i = 1, 2, …, n, the n-dimension of V being (n1, n2, …, nn), then T ∘ T = T^2 = (T1 ∪ T2 ∪ … ∪ Tn)(T1 ∪ T2 ∪ … ∪ Tn) = T1^2 ∪ T2^2 ∪ … ∪ Tn^2, and likewise any power of T. I = I1 ∪ I2 ∪ … ∪ In is the identity function on V, i.e. each Ii: Vi → Vi is such that Ii(μi) = μi for all μi ∈ Vi; i = 1, 2, …, n. Only under these special conditions do we define Ln(V, V); elements of Ln(V, V) are called special n-linear operators.
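The componentwise composition just described can be sketched in code. This is an illustration only (components are arbitrary real matrices; the dimensions are made up): UT and TU are formed component by component, they generally differ, and powers of T are powers of the components.

```python
import numpy as np

rng = np.random.default_rng(2)

# n-linear operators on V = V1 ∪ V2 with n-dimension (2, 3):
# each component is a square matrix Vi -> Vi.
T = [rng.standard_normal((2, 2)), rng.standard_normal((3, 3))]
U = [rng.standard_normal((2, 2)), rng.standard_normal((3, 3))]

def compose(F, G):
    """(F ∘ G)i = Fi Gi, componentwise."""
    return [Fi @ Gi for Fi, Gi in zip(F, G)]

UT = compose(U, T)
TU = compose(T, U)
# In general UT != TU, i.e. UT - TU != 0
assert not all(np.allclose(a, b) for a, b in zip(UT, TU))

# T^2 = T ∘ T is again a n-linear operator with square components
T2 = compose(T, T)
assert [M.shape for M in T2] == [(2, 2), (3, 3)]
```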
LEMMA 2.1: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over the field F; let U, T1 and T2 be n-linear operators on V; let c be an element of F. Then
a. IU = UI = U, where I = I1 ∪ I2 ∪ … ∪ In is the n-identity transformation;
b. U(T1 + T2) = UT1 + UT2 and (T1 + T2)U = T1U + T2U;
c. c(UT1) = (cU)T1 = U(cT1).
Proof: Given V = V1 ∪ V2 ∪ … ∪ Vn, a n-vector space over the field F, let I = I1 ∪ I2 ∪ … ∪ In be the n-identity transformation of V to V, i.e. Ij: Vj → Vj is the identity transformation of each Vj, j = 1, 2, …, n. U = U1 ∪ U2 ∪ … ∪ Un: V → V is such that Ui: Vi → Vi for i = 1, 2, …, n, and Ti = Ti1 ∪ Ti2 ∪ … ∪ Tin: V → V is such that Tij: Vj → Vj; j = 1, 2, …, n and i = 1, 2.
UI = (U1 ∪ U2 ∪ … ∪ Un) ∘ (I1 ∪ I2 ∪ … ∪ In) = U1 ∘ I1 ∪ U2 ∘ I2 ∪ … ∪ Un ∘ In = U1 ∪ U2 ∪ … ∪ Un = U.
Further,
IU = (I1 ∪ I2 ∪ … ∪ In) ∘ (U1 ∪ U2 ∪ … ∪ Un) = I1 ∘ U1 ∪ I2 ∘ U2 ∪ … ∪ In ∘ Un = U1 ∪ U2 ∪ … ∪ Un = U.
Thus IU = UI = U.
U(T1 + T2)
= [U1 ∪ U2 ∪ … ∪ Un][(T11 ∪ T12 ∪ … ∪ T1n) + (T21 ∪ T22 ∪ … ∪ T2n)]
= [U1 ∪ U2 ∪ … ∪ Un][(T11 + T21) ∪ (T12 + T22) ∪ … ∪ (T1n + T2n)]
= U1(T11 + T21) ∪ U2(T12 + T22) ∪ … ∪ Un(T1n + T2n)
= UT1 + UT2.
We know from the results in linear algebra U (T1 + T2) = UT1 +
UT2 for any U, T1, T2 ∈ L(V1 ; V1) where V1 is a vector space
and L(V1 ,V1) is the collection of all linear operators from V1 to
V1.
Now in Ui(T1i + T2i), the maps Ui, T1i and T2i are linear operators from Vi to Vi, true for each i = 1, 2, …, n. Thus U(T1 + T2) = UT1 + UT2 and (T1 + T2)U = T1U + T2U. Further, c(UT1) = (cU)T1 = U(cT1) for all U, T1 ∈ Ln(V, V). Let U = U1 ∪ U2 ∪ … ∪ Un and T1 = T11 ∪ T12 ∪ … ∪ T1n, where Ui: Vi → Vi and T1i: Vi → Vi for each i = 1, 2, …, n.
c(UT1) = c[(U1 ∪ U2 ∪ … ∪ Un)(T11 ∪ T12 ∪ … ∪ T1n)]
= c[U1T11 ∪ U2T12 ∪ … ∪ UnT1n]
= cU1T11 ∪ cU2T12 ∪ … ∪ cUnT1n
= (cU1 ∪ cU2 ∪ … ∪ cUn)(T11 ∪ T12 ∪ … ∪ T1n) = (cU)T1.
But also
c(UT1) = (U1 ∪ U2 ∪ … ∪ Un)(cT11 ∪ cT12 ∪ … ∪ cT1n)
= U1(cT11) ∪ U2(cT12) ∪ … ∪ Un(cT1n)
= (U1 ∪ U2 ∪ … ∪ Un)(cT1) = U(cT1).
Let LnT(V, V) denote the set of all n-linear transformations from V to V; this set also includes the n-linear operators T = T1 ∪ T2 ∪ … ∪ Tn with Ti: Vi → Vi, i = 1, 2, …, n. Clearly Ln(V, V) ⊆ LnT(V, V). This is the marked difference between the usual linear operator and the n-linear operator: a n-linear transformation from V to V may or may not be a n-linear operator, whereas every linear operator from V to V is always a linear transformation.
Let V and W be two n-vector spaces of the same dimension, say (n1, n2, …, nn) and (n_{i1}, n_{i2}, …, n_{in}), where (i1, i2, …, in) is a permutation of (1, 2, …, n).
Let Ts: V → W be a n-linear transformation, Ts = T1 ∪ T2 ∪ … ∪ Tn with Ti: Vi → Wj, where Wj is such that dim Vi = dim Wj; this is the way every Vi is matched. This will certainly happen because the n-dimensions of V and W are one and the same. We call such a n-linear transformation from the same dimensional space V into W, satisfying the conditions mentioned by each Ti, i = 1, 2, …, n, a special n-linear transformation, denoted by Ts.
If each Ti in Ts, i = 1, 2, …, n, is invertible, then we can find a special n-linear transformation Us: W → V such that TsUs is the identity function on W and UsTs is the identity function on V. If Ts is invertible, the function Us is unique and is denoted by Ts^{-1}. Furthermore, Ts is one to one, that is Tsα = Tsβ implies α = β, where α = α1 ∪ α2 ∪ … ∪ αn and β = β1 ∪ β2 ∪ … ∪ βn; and Ts is onto, that is, the range of Ts is all of W.
THEOREM 2.7: Let V and W be n-vector spaces of the same dimension (n1, n2, …, nn) over the field F. If Ts is a special n-linear transformation from V into W and Ts is invertible, then the inverse function Ts^{-1} is a special n-linear transformation from W into V.
Proof: Let Ts = T1 ∪ T2 ∪ … ∪ Tn be a special n-linear transformation from V into W, where the n-dimension of V is (n1, n2, …, nn) and that of W is (n_{i1}, n_{i2}, …, n_{in}); (i1, i2, …, in) a permutation of (1, 2, 3, …, n); i.e., Ts: V → W with Ti: Vi → Wj, where dim Vi = dim Wj.
Ts^{-1} = T1^{-1} ∪ T2^{-1} ∪ … ∪ Tn^{-1} is the inverse of Ts.
Let β1, β2 be vectors in W and let c ∈ F. To show Ts^{-1}(cβ1 + β2) = c Ts^{-1}β1 + Ts^{-1}β2, where β1 = β1^1 ∪ β1^2 ∪ … ∪ β1^n and β2 = β2^1 ∪ β2^2 ∪ … ∪ β2^n:
Ts^{-1}(cβ1 + β2)
= Ts^{-1}((cβ1^1 + β2^1) ∪ (cβ1^2 + β2^2) ∪ … ∪ (cβ1^n + β2^n))
= T1^{-1}(cβ1^1 + β2^1) ∪ T2^{-1}(cβ1^2 + β2^2) ∪ … ∪ Tn^{-1}(cβ1^n + β2^n)
= (cT1^{-1}β1^1 + T1^{-1}β2^1) ∪ (cT2^{-1}β1^2 + T2^{-1}β2^2) ∪ … ∪ (cTn^{-1}β1^n + Tn^{-1}β2^n)
= c Ts^{-1}β1 + Ts^{-1}β2.
Alternatively, let αi = Ts^{-1}βi, i = 1, 2; that is, let αi be the unique n-vector in V such that Tsαi = βi. Since Ts is n-linear,
Ts(cα1 + α2) = cTsα1 + Tsα2 = cβ1 + β2.
Thus cα1 + α2 is the unique n-vector in V which is sent by Ts into cβ1 + β2, and so
Ts^{-1}(cβ1 + β2) = cα1 + α2 = c(Ts^{-1}β1) + Ts^{-1}β2,
and Ts^{-1} is n-linear; the proof is similar to the earlier one using Ts = T1 ∪ T2 ∪ … ∪ Tn and Ts^{-1} = T1^{-1} ∪ T2^{-1} ∪ … ∪ Tn^{-1}.
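Theorem 2.7 can be read computationally: inverting a special n-linear transformation amounts to inverting each square component matrix. A sketch (illustrative dimensions, arbitrary invertible components; not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# A special n-linear transformation: dim Vi = dim Wj for each matched pair,
# so every component matrix is square; here the 3-dimension is (2, 3, 4).
Ts = [rng.standard_normal((k, k)) for k in (2, 3, 4)]
Ts_inv = [np.linalg.inv(Ti) for Ti in Ts]     # Ts^{-1} = T1^{-1} ∪ T2^{-1} ∪ T3^{-1}

beta = [rng.standard_normal(k) for k in (2, 3, 4)]
alpha = [Tinv @ b for Tinv, b in zip(Ts_inv, beta)]   # α = Ts^{-1} β, componentwise

# Ts α recovers β in every component, so Ts Ts^{-1} is the identity on W
assert all(np.allclose(Ti @ a, b) for Ti, a, b in zip(Ts, alpha, beta))
```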
THEOREM 2.8: Let Ts = T1 ∪ T2 ∪ … ∪ Tn be a special n-linear transformation from V = V1 ∪ V2 ∪ … ∪ Vn into W = W1 ∪ W2 ∪ … ∪ Wn, where dim V = (n1, n2, …, nn) and dim W = (n_{i1}, n_{i2}, …, n_{in}), (i1, i2, …, in) a permutation of (1, 2, …, n). Then Ts is non-singular if and only if Ts carries each n-linearly independent n-subset of V into a n-linearly independent n-subset of W.
Proof: First suppose Ts is non-singular. Let S = S1 ∪ S2 ∪ … ∪ Sn be a n-linearly independent n-subset of V = V1 ∪ V2 ∪ … ∪ Vn, i.e. Si ⊆ Vi is a linearly independent subset of Vi, i = 1, 2, …, n. Let
S = {(α^1_1, α^1_2, …, α^1_{k1}) ∪ (α^2_1, α^2_2, …, α^2_{k2}) ∪ … ∪ (α^n_1, α^n_2, …, α^n_{kn})}
= S1 ∪ S2 ∪ … ∪ Sn ⊂ V1 ∪ V2 ∪ … ∪ Vn. Given Ts = T1 ∪ T2 ∪ … ∪ Tn with Ti: Vi → Wj, the vectors Ti α^i_1, Ti α^i_2, …, Ti α^i_{ki} are linearly independent for each i; for if
c^i_1 (Ti α^i_1) + … + c^i_{ki} (Ti α^i_{ki}) = 0,
then Ti(c^i_1 α^i_1 + … + c^i_{ki} α^i_{ki}) = 0, and since Ti is non-singular,
c^i_1 α^i_1 + … + c^i_{ki} α^i_{ki} = 0,
from which it follows that each c^i_j = 0, j = 1, 2, …, ki, because Si is an independent set. This is true for each i, so the image of S = S1 ∪ S2 ∪ … ∪ Sn under Ts is an independent n-set.
Conversely, suppose Ts carries independent n-sets onto independent n-sets. Let α = α1 ∪ α2 ∪ … ∪ αn be a non-zero n-vector of V. Then S = S1 ∪ S2 ∪ … ∪ Sn with Si = {αi}, i = 1, 2, …, n, is independent. The image n-set of S is the n-row vector T1α1 ∪ T2α2 ∪ … ∪ Tnαn and this set is independent. Hence Ts(α) = T1α1 ∪ T2α2 ∪ … ∪ Tnαn ≠ 0, because the set consisting of the zero n-vector alone is dependent. Thus the null space of Ts is 0 ∪ 0 ∪ … ∪ 0.
The following concept of a non-singular n-linear transformation is a little different.
DEFINITION 2.16: Let V and W be two n-vector spaces of the same n-dimension over F, i.e. dim V = (n1, n2, …, nn) and dim W = (n_{i1}, n_{i2}, …, n_{in}), where (i1, i2, …, in) is a permutation of (1, 2, 3, …, n). If T = T1 ∪ T2 ∪ … ∪ Tn is a special n-linear transformation of V into W, i.e. if Ti: Vi → Wj then dim Vi = dim Wj = ni for every i, then T is n-non singular if each Ti is non-singular.
In view of this the reader is expected to prove the following
theorem.
THEOREM 2.9: Let V and W be two n-vector spaces of same
dimension defined over the same field F. T is special n-linear
transformation from V into W. Then T is n-invertible if and only
if T is non-singular.
Note: We say the special n-linear transformation T = T1 ∪ T2 ∪ … ∪ Tn is n-invertible if and only if each Ti is invertible, i = 1, 2, …, n.
Now we proceed on to define the n-representation of n-transformations by n-matrices (n ≥ 2).
Let V be a n-vector space of n dimension (n1, n2, … , nn) and W
be a m-vector space of m-dimension (m1, m2, … , mn) defined
over the same field F. Let
B = {(α^1_1, α^1_2, …, α^1_{n1}) ∪ (α^2_1, α^2_2, …, α^2_{n2}) ∪ … ∪ (α^n_1, α^n_2, …, α^n_{nn})}
be a n-ordered n-basis of V. We say the n-basis is a n-ordered n-basis if each of the bases (α^i_1, α^i_2, …, α^i_{ni}) of Vi is an ordered basis, for i = 1, 2, …, n. Let
B1 = {(β^1_1, β^1_2, …, β^1_{m1}) ∪ (β^2_1, β^2_2, …, β^2_{m2}) ∪ … ∪ (β^m_1, β^m_2, …, β^m_{mm})}
be a m-ordered m-basis of W. If T is any n-linear transformation from V into W, i.e. T = T1 ∪ T2 ∪ … ∪ Tn, then each Ti: Vi → Wk is determined by its action on the vectors α^i_j; 1 ≤ k ≤ m, true for each i = 1, 2, …, n and 1 ≤ j ≤ ni. Each of the ni vectors Ti α^i_j is uniquely expressible as a linear combination
Ti α^i_j = Σ_{p=1}^{mk} A_{pj} β^k_p,   (1)
where β^k_p ∈ Wk, the scalars A_{1j}, A_{2j}, …, A_{mk j} being the coordinates of Ti α^i_j in the m-ordered m-basis B1. Accordingly the transformation Ti is determined by the mk ni scalars A_{pj} via equation (1). The mk × ni matrix A^{i,k} defined by (A^{i,k})_{pj} = A_{pj} is called the submatrix, relative to the n-linear transformation T = T1 ∪ T2 ∪ … ∪ Ti ∪ … ∪ Tn, of the pair of ordered bases
{(α^i_1, α^i_2, …, α^i_{ni})} and {(β^k_1, β^k_2, …, β^k_{mk})}
of Vi and Wk respectively. This is true for each i and k, 1 ≤ i ≤ n and 1 ≤ k ≤ m; i.e., the n-matrix of T is given by
A = A^{m_{i1} × n1} ∪ A^{m_{i2} × n2} ∪ … ∪ A^{m_{in} × nn} = A(m, n), here (m_{i1}, m_{i2}, …, m_{in}) ⊂ (m1, m2, …, mm).
Clearly A represents the n-linear transformation, mapping each Vi → Wj with no two Vi's mapped onto the same Wj, 1 ≤ i ≤ n and 1 ≤ j ≤ m. Thus if
αi is a vector in Vi, say αi = x^i_1 α^i_1 + … + x^i_{ni} α^i_{ni}, then
Ti αi = Ti(Σ_{j=1}^{ni} x^i_j α^i_j)
= Σ_{j=1}^{ni} x^i_j (Ti α^i_j)
= Σ_{j=1}^{ni} x^i_j Σ_{p=1}^{mk} A_{pj} β^k_p
= Σ_{p=1}^{mk} (Σ_{j=1}^{ni} A_{pj} x^i_j) β^k_p.
This is true for each i, i = 1, 2, …, n. If X = X1 ∪ X2 ∪ … ∪ Xn is the coordinate n-matrix of α in the n-basis B, then the computation above shows that
AX = (A^{m_{i1} × n1} ∪ A^{m_{i2} × n2} ∪ … ∪ A^{m_{in} × nn})(X1 ∪ X2 ∪ … ∪ Xn)
is the coordinate n-matrix of the n-vector Tα in the ordered basis B1, because the scalar
Σ_{j=1}^{n1} A_{pj} x^1_j ∪ Σ_{j=1}^{n2} A_{pj} x^2_j ∪ … ∪ Σ_{j=1}^{nn} A_{pj} x^n_j
is the entry in the p-th n-row of the n-column matrix AX. Conversely, if A is given by the m_{ik} × nk n-matrices over the field F, then
T1(Σ_{j=1}^{n1} x^1_j α^1_j) ∪ T2(Σ_{j=1}^{n2} x^2_j α^2_j) ∪ … ∪ Tn(Σ_{j=1}^{nn} x^n_j α^n_j)
= Σ_{p=1}^{m_{i1}} (Σ_{j=1}^{n1} A_{pj} x^1_j) β^{i1}_p ∪ Σ_{p=1}^{m_{i2}} (Σ_{j=1}^{n2} A_{pj} x^2_j) β^{i2}_p ∪ … ∪ Σ_{p=1}^{m_{in}} (Σ_{j=1}^{nn} A_{pj} x^n_j) β^{in}_p,
where (i1, i2, …, in) ⊂ {1, 2, …, m} taken in some order, defines a n-linear transformation T from V into W whose n-matrix relative to the n-basis B and m-basis B1 is A, as stated in the following theorem.
THEOREM 2.10: Let V = V1 ∪ V2 ∪ … ∪ Vn be a finite (n1, n2, …, nn)-dimensional n-vector space over the field F and W = W1 ∪ W2 ∪ … ∪ Wm a (m1, m2, …, mn)-dimensional m-vector space over the same field F (m > n). For each n-linear transformation T from V into W there is a n-tuple of mixed rectangular matrices A of orders (m_{i1} × n1, m_{i2} × n2, …, m_{in} × nn) with entries in F such that [Tα]_{B1} = A[α]_B for every α ∈ V. Further, T → A is a one to one correspondence between the set of all n-linear transformations from V into W and the set of all such mixed rectangular n-matrices over the field F. The matrix A = A^{m_{i1} × n1} ∪ A^{m_{i2} × n2} ∪ … ∪ A^{m_{in} × nn} is the n-matrix associated with T, the n-linear transformation of V into W, relative to the bases B and B1.
Several interesting results true for the usual vector spaces can be
derived in case of n-vector spaces n ≥ 2 with appropriate
modifications.
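Theorem 2.10 can be read computationally: relative to fixed n-bases, applying T is one matrix-vector product per component. The sketch below (illustrative dimensions and arbitrary entries, not from the text) also checks that the union of blocks acts exactly like a single block-diagonal matrix on the stacked coordinates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Blocks of the n-matrix A, one m_{ik} x nk block per component space.
blocks = [rng.standard_normal((5, 3)),
          rng.standard_normal((2, 2)),
          rng.standard_normal((4, 4))]
X = [rng.standard_normal(B.shape[1]) for B in blocks]   # [α]_B, one block per Vi

TX = [B @ x for B, x in zip(blocks, X)]                 # [Tα]_{B1} = A[α]_B, blockwise

# Equivalently, A acts as one block-diagonal matrix on the stacked coordinates.
rows = sum(B.shape[0] for B in blocks)
cols = sum(B.shape[1] for B in blocks)
big = np.zeros((rows, cols))
r = c = 0
for B in blocks:
    big[r:r + B.shape[0], c:c + B.shape[1]] = B
    r += B.shape[0]
    c += B.shape[1]
assert np.allclose(big @ np.concatenate(X), np.concatenate(TX))
```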
Now we give the definition of n-inner product on a n-vector
space V.
DEFINITION 2.17: Let F be the field of reals or complex numbers and V = V1 ∪ V2 ∪ … ∪ Vn a n-vector space over F. A n-inner product on V is a n-function which assigns to each ordered pair of n-vectors α = α1 ∪ α2 ∪ … ∪ αn and β = β1 ∪ β2 ∪ … ∪ βn in the n-vector space V a scalar n-tuple from F:
〈α | β〉 = 〈α1 ∪ α2 ∪ … ∪ αn | β1 ∪ β2 ∪ … ∪ βn〉 = (〈α1 | β1〉, 〈α2 | β2〉, …, 〈αn | βn〉),
where 〈αi | βi〉 is an inner product on Vi (αi, βi ∈ Vi), true for each i, i = 1, 2, …, n, satisfying the following conditions:
a. 〈α + γ | β〉 = 〈α | β〉 + 〈γ | β〉, where α = α1 ∪ α2 ∪ … ∪ αn, β = β1 ∪ β2 ∪ … ∪ βn and γ = γ1 ∪ γ2 ∪ … ∪ γn with αi, βi, γi ∈ Vi for each i = 1, 2, …, n; that is, (〈α1 | β1〉, …, 〈αn | βn〉) + (〈γ1 | β1〉, …, 〈γn | βn〉) = (〈α1 | β1〉 + 〈γ1 | β1〉, …, 〈αn | βn〉 + 〈γn | βn〉).
b. 〈Cα | β〉 = C〈α | β〉 = (C1〈α1 | β1〉, C2〈α2 | β2〉, …, Cn〈αn | βn〉), C = (C1, C2, …, Cn).
c. 〈β | α〉 is the complex conjugate of 〈α | β〉, the bar denoting complex conjugation.
d. 〈α | α〉 > (0, 0, …, 0) if α ≠ 0, i.e. (〈α1 | α1〉, 〈α2 | α2〉, …, 〈αn | αn〉) > (0, 0, …, 0) when each αi ≠ 0 in α = α1 ∪ α2 ∪ … ∪ αn, i = 1, 2, …, n.
On F^{n1} ∪ F^{n2} ∪ … ∪ F^{nn} there is a n-inner product which we call the n-standard inner product. It is defined on
α = (x^1_1, x^1_2, …, x^1_{n1}) ∪ (x^2_1, x^2_2, …, x^2_{n2}) ∪ … ∪ (x^n_1, x^n_2, …, x^n_{nn})
and
β = (y^1_1, y^1_2, …, y^1_{n1}) ∪ (y^2_1, y^2_2, …, y^2_{n2}) ∪ … ∪ (y^n_1, y^n_2, …, y^n_{nn})
by
〈α | β〉 = (Σ_{j=1}^{n1} x^1_j y^1_j, Σ_{j=1}^{n2} x^2_j y^2_j, …, Σ_{j=1}^{nn} x^n_j y^n_j)
if F is a real field. If F is the field of complex numbers, then
〈α | β〉 = (Σ_{j=1}^{n1} x^1_j ȳ^1_j, Σ_{j=1}^{n2} x^2_j ȳ^2_j, …, Σ_{j=1}^{nn} x^n_j ȳ^n_j).
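The n-standard inner product is simply a tuple of ordinary dot products, one per component. A minimal sketch (the function name `n_inner` and the sample vectors are illustrative, not from the text; `np.vdot` supplies the conjugation needed over the complex field):

```python
import numpy as np

def n_inner(alpha, beta):
    """n-standard inner product: one dot product per component space.
    np.vdot conjugates its first argument, giving sum(x_j * conj(y_j))."""
    return tuple(np.vdot(b, a) for a, b in zip(alpha, beta))

# An n-vector here is a list of component vectors, possibly of different lengths.
alpha = [np.array([1.0, 2.0]), np.array([0.0, 1.0, 3.0])]
beta  = [np.array([2.0, 1.0]), np.array([1.0, 1.0, 1.0])]

print(n_inner(alpha, beta))    # (4.0, 4.0)
print(n_inner(alpha, alpha))   # (5.0, 10.0): positive in every component, as in (d)
```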
The reader is expected to work out the properties related to n-inner products on n-vector spaces over the field F.
Now we proceed on to define n-orthogonal sets.
DEFINITION 2.18: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over the field F. We say V is a n-inner product space if on V is defined a n-inner product. Let α = α1 ∪ α2 ∪ … ∪ αn and β = β1 ∪ β2 ∪ … ∪ βn ∈ V with αi, βi ∈ Vi, i = 1, 2, …, n. We say α is n-orthogonal to β if 〈α | β〉 = (〈α1 | β1〉, …, 〈αn | βn〉) = (0, 0, …, 0), i.e. if each αi is orthogonal to βi ∈ Vi, i.e. 〈αi | βi〉 = 0 for i = 1, 2, …, n. This equivalently implies β is n-orthogonal to α. Hence we simply say α and β are orthogonal.
Let S = S1 ∪ S2 ∪ … ∪ Sn ⊆ V1 ∪ V2 ∪ … ∪ Vn = V be a n-set of n-vectors in V. S is called a n-orthogonal set provided all pairs of distinct n-vectors in S are orthogonal. A n-orthogonal set is called a n-orthonormal set if ||α|| = (1, 1, …, 1) for every α in S = S1 ∪ S2 ∪ … ∪ Sn.
We denote 〈α | β〉 also by (α | β).
THEOREM 2.11: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space
which is a n-inner product space defined over the field F. Let S =
S1 ∪ S2 ∪ … ∪ Sn be an n-orthogonal set in V. The set of non-zero
n-vectors in S is n-linearly independent.
Proof: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space over F.
Let S = S1 ∪ S2 ∪ … ∪ Sn ⊂ V = V1 ∪ V2 ∪ … ∪ Vn be an
orthogonal n-set of V. We show the non-zero elements of the n-set
are n-linearly independent. Let α1^i, α2^i, …, αmi^i ∈ Si for i = 1, 2, …, n; i.e.,
α1^1, α2^1, …, αm1^1 ∈ S1, α1^2, α2^2, …, αm2^2 ∈ S2 and so on,
α1^n, α2^n, …, αmn^n ∈ Sn. Let α1^i, α2^i, …, αmi^i be distinct non-zero vectors
in Si and let βi = c1^i α1^i + c2^i α2^i + … + cmi^i αmi^i. Then

(βi | αk^i) = ( Σ_{j=1}^{mi} cj^i αj^i | αk^i ) = Σ_{j=1}^{mi} cj^i (αj^i | αk^i) = ck^i (αk^i | αk^i).

Since (αk^i | αk^i) ≠ 0, when βi = 0 each ck^i = 0, for each i. So each Si is an
independent set. Hence S = S1 ∪ S2 ∪ … ∪ Sn is a n-independent set.
Several interesting results, including the Gram-Schmidt n-orthogonalization process, can be derived.
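In each component space Vi the classical Gram-Schmidt process applies unchanged; the n-orthogonalization is simply its componentwise application. A minimal sketch in one component space over Q, with an illustrative input basis:

```python
from fractions import Fraction

def gram_schmidt(vectors):
    """Classical Gram-Schmidt in one component space V_i; the
    n-orthogonalization process applies this in each V_i separately."""
    ortho = []
    for v in vectors:
        w = [Fraction(c) for c in v]
        for u in ortho:
            # subtract the projection of w on the earlier vector u
            c = Fraction(sum(a * b for a, b in zip(w, u)),
                         sum(a * a for a in u))
            w = [a - c * b for a, b in zip(w, u)]
        ortho.append(w)
    return ortho

u1, u2 = gram_schmidt([(1, 1, 0), (1, 0, 1)])
print(sum(a * b for a, b in zip(u1, u2)))  # 0, i.e., u1 and u2 are orthogonal
```

Exact rational arithmetic (`Fraction`) keeps the orthogonality check free of floating-point error.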
We now proceed onto define the notion of n-best
approximation to the n-vector β relative to a n-sub-vector space
W.
DEFINITION 2.19: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-inner
product n-vector space over the field F. Let W = W1 ∪ W2 ∪ … ∪
Wn be a n-subspace of V. Let β = β1 ∪ β2 ∪ … ∪ βn ∈ V; the n-best
approximation to β by n-vectors in W is a n-vector α =
α1 ∪ α2 ∪ … ∪ αn in W such that ||β – α|| ≤ ||β – γ|| for every n-vector
γ in W, i.e., ||βi – αi|| ≤ ||βi – γi|| for every γi ∈ Wi, and
this is true for each i; i = 1, 2, …, n. We know if
β = β1 ∪ β2 ∪ … ∪ βn is a n-linear combination of an n-orthogonal
sequence of non-zero n-vectors α1, α2, …, αm, where
each αk = αk^1 ∪ αk^2 ∪ … ∪ αk^n, k = 1, 2, …, m, then

β = ( Σ_{k=1}^{m1} (β1 | αk^1) αk^1 / ||αk^1||² ) ∪ ( Σ_{k=1}^{m2} (β2 | αk^2) αk^2 / ||αk^2||² ) ∪ … ∪ ( Σ_{k=1}^{mn} (βn | αk^n) αk^n / ||αk^n||² ).
The following theorem is left as an exercise for the reader to
prove.
THEOREM 2.12: Let W = W1 ∪ W2 ∪ … ∪ Wn be a n-subspace
of an n-inner product space V = V1 ∪ V2 ∪ … ∪ Vn and let β =
β1 ∪ β2 ∪ … ∪ βn be a n-vector in V.

1. The n-vector α = α1 ∪ α2 ∪ … ∪ αn in W is a n-best
approximation to β by n-vectors in W if and only if β – α is
n-orthogonal to every n-vector in W.
2. If a n-best approximation to β by n-vectors in W exists,
it is unique.
3. If W is n-finite n-dimensional and {αk^1 ∪ αk^2 ∪ … ∪ αk^n}
is any n-orthonormal n-basis for W then the n-vector

α = ( Σ_k (β1 | αk^1) αk^1 / ||αk^1||² ) ∪ ( Σ_k (β2 | αk^2) αk^2 / ||αk^2||² ) ∪ … ∪ ( Σ_k (βn | αk^n) αk^n / ||αk^n||² )

is the unique n-best approximation to β by n-vectors in W.
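Item 3 is easy to check numerically in a single component space; the n-version is componentwise. A sketch with an illustrative orthogonal basis of a plane in Q³:

```python
from fractions import Fraction

def project(beta, ortho_basis):
    """Best approximation of beta by the span of an orthogonal basis:
    alpha = sum_k (beta | a_k) a_k / ||a_k||^2 (one component space)."""
    alpha = [Fraction(0)] * len(beta)
    for a in ortho_basis:
        c = Fraction(sum(x * y for x, y in zip(beta, a)),
                     sum(x * x for x in a))
        alpha = [p + c * q for p, q in zip(alpha, a)]
    return alpha

beta = (1, 2, 3)
W = [(1, 1, 0), (1, -1, 0)]          # orthogonal basis of a plane
alpha = project(beta, W)
residual = [b - a for b, a in zip(beta, alpha)]
# beta - alpha is orthogonal to every basis vector of W:
print(alpha == [1, 2, 0])  # True
print(all(sum(r * w for r, w in zip(residual, v)) == 0 for v in W))  # True
```

The orthogonality of the residual to W is exactly the characterization in item 1 of the theorem.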
Now we proceed on to define the notion of n-orthogonal
complement.
DEFINITION 2.20: Let V be a n-inner product n-space and S any
n-set of n-vectors in V. The n-orthogonal complement of S is the
n-set S⊥ of all n-vectors in V which are n-orthogonal to every n-vector
in S; here S = S1 ∪ S2 ∪ … ∪ Sn ⊆ V = V1 ∪ V2 ∪ … ∪
Vn and S⊥ = S1⊥ ∪ S2⊥ ∪ … ∪ Sn⊥ ⊆ V, i.e., each Si⊥ is the
orthogonal complement of Si for every i, i = 1, 2, …, n. We call
α the n-orthogonal projection of β on W. If every n-vector β
in V has an n-orthogonal projection on W, the n-mapping that
assigns to each n-vector in V its n-orthogonal projection on W
is called the n-orthogonal projection of V on W.
The reader is expected to prove the following theorems.
THEOREM 2.13: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-inner
product n-vector space defined over the field F, W a finite
dimensional n-subspace of V and E the n-orthogonal projection
of V on W. Then the n-mapping β → (β – Eβ) is the n-orthogonal
projection of V on W⊥.
THEOREM 2.14: Let W = W1 ∪ W2 ∪ … ∪ Wn ⊂ V be a finite
dimensional n-subspace of the n inner product space V = V1 ∪
V2 ∪ … ∪ Vn and let E = E1 ∪ E2 ∪ … ∪ En be the n-orthogonal
projection of V on W. Then E is an n-idempotent n-linear
transformation of V onto W and W⊥ is the n-null space of E and
V = W ⊕ W⊥ i.e. if V = V1 ∪ V2 ∪ … ∪ Vn and W = W1 ∪ W2 ∪
… ∪ Wn and W⊥ = W1⊥ ∪ W2⊥ ∪ ... ∪ Wn⊥ then V = (W ⊕ W⊥) =
(W1 ⊕ W1⊥ ) ∪ (W2⊕ W2⊥ ) ∪ … ∪ (Wn⊕ Wn⊥ ).
THEOREM 2.15: Under the conditions of the above theorems, I – E
is the n-orthogonal projection of V on W⊥. It is an n-idempotent
linear n-transformation of V onto W⊥, with n-null
space W.
THEOREM 2.16: Let {αk^1 ∪ αk^2 ∪ … ∪ αk^n} be an orthogonal n-set
of non-zero n-vectors in an n-inner product space V. If β =
β1 ∪ β2 ∪ … ∪ βn is any n-vector in V then

Σ_k |(βi | αk^i)|² / ||αk^i||² ≤ ||βi||² for each i = 1, 2, …, n.
It is pertinent to mention here that the notions of linear functionals,
dual spaces or adjoints cannot be extended in an analogous way
in the case of n-vector spaces of type I.
Now we proceed on to define the notion of n-unitary operators
on n-inner product n-vector spaces V over the field F.
DEFINITION 2.21: Let V and W be n-inner product n-vector
spaces over the same field F. Let
T be a n-linear transformation from V into W. We say that T
preserves n-inner products if (Tα | Tβ) = (α | β) for all α, β ∈ V;
i.e., if V = V1 ∪ V2 ∪ … ∪ Vn, W = W1 ∪ W2 ∪ … ∪ Wn and
T = T1 ∪ T2 ∪ … ∪ Tn with α = α1 ∪ α2 ∪ … ∪ αn and β = β1 ∪
β2 ∪ … ∪ βn ∈ V, and Ti: Vi → Wj with no two Vi mapped on to the
same Wj, then Tiαi, Tiβi ∈ Wj and (Tiαi | Tiβi) = (αi | βi) for every i,
i = 1, 2, …, n. An n-isomorphism of V into W is a n-vector space
isomorphism T of V onto W which also preserves n-inner
products.
THEOREM 2.17: Let V and W be n-finite dimensional n-inner
product n-vector spaces of the same n-dimension, i.e., dim V = (n1, n2, …, nn)
and dim W = (ni1, ni2, …, nin) where (i1, i2, …, in) is a
permutation of (1, 2, …, n), defined over the same field F. If T =
T1 ∪ T2 ∪ … ∪ Tn is a n-linear transformation from V into W
the following are equivalent:

1. T preserves n-inner products, i.e., each Ti in T preserves
inner products, Ti: Vi → Wj; 1 ≤ i, j ≤ n.
2. T is an n-inner product n-isomorphism.
3. T carries every n-orthonormal n-basis for V onto an n-orthonormal
n-basis for W.
4. T carries some n-orthonormal n-basis for V onto an n-orthonormal
n-basis for W, i.e., Ti carries some orthonormal
basis of Vi into an orthonormal basis for Wj.
The reader is expected to prove the following theorems.
THEOREM 2.18: Let V and W be finite dimensional n-inner
product n-spaces over the same field F. Then V = V1 ∪ V2 ∪ …
∪ Vn is n-isomorphic with W = W1 ∪ W2 ∪ … ∪ Wn (i.e., there are
isomorphisms Ti: Vi → Wj for i = 1, 2, …, n) if and only if V and
W are of the same n-dimension.
THEOREM 2.19: Let V and W be two n-inner product spaces
over the same field F. Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear
transformation from V into W. Then T preserves n-inner products
if and only if ||Tα|| = ||α||, i.e., ||(T1 ∪ T2 ∪ … ∪ Tn)
(α1 ∪ α2 ∪ … ∪ αn)|| = ||T1(α1) ∪ T2(α2) ∪ … ∪ Tn(αn)|| =
(||α1||, ||α2||, …, ||αn||) for every α ∈ V, i.e., ||Ti(αi)|| = ||αi|| for every αi ∈ Vi,
i = 1, 2, …, n.
We define the notion of n-unitary operator of a n-vector space V
over the field F.
DEFINITION 2.22: A n-unitary operator on an n-inner product
space V is a n-isomorphism of V onto itself.
DEFINITION 2.23: If T is a n-linear operator on an n-inner
product space V = V1 ∪ V2 ∪ … ∪ Vn, then we say T = T1 ∪ T2
∪ … ∪ Tn has an n-adjoint on V if there exists a n-linear
operator T* = T1* ∪ T2* ∪ … ∪ Tn* on V such that (Tα | β) = (α |
T*β) for all α = α1 ∪ α2 ∪ … ∪ αn, β = β1 ∪ β2 ∪ … ∪ βn in V
= V1 ∪ V2 ∪ … ∪ Vn, i.e., (Tiαi | βi) = (αi | Ti*βi) for each i =
1, 2, …, n.
It is easily verified, as in the case of adjoints, that the n-adjoint of T
depends not only on T but also on the n-inner product on V.
Interesting results in this direction can be derived by the reader.
The following theorems are also left as an exercise for the
reader.
THEOREM 2.20: Let V = V1 ∪ V2 ∪ … ∪ Vn be a finite n-dimensional
n-inner product n-space defined over the field F. If
T and U are n-linear operators on V and c is a scalar, then

1. (T + U)* = T* + U*, i.e., if T = T1 ∪ T2 ∪ … ∪ Tn and U
= U1 ∪ U2 ∪ … ∪ Un then in (T + U)* we have for each
i, (Ti + Ui)* = Ti* + Ui*, i = 1, 2, …, n.
2. (cT)* = c̄T*, the bar denoting the complex conjugate of c.
3. (TU)* = U*T*; here also (TiUi)* = Ui*Ti* for i = 1, 2,
…, n, i.e., (TU)* = (T1U1)* ∪ (T2U2)* ∪ … ∪ (TnUn)* =
U1*T1* ∪ U2*T2* ∪ … ∪ Un*Tn*.
4. (T*)* = T since (Ti*)* = Ti for each i = 1, 2, …, n.
THEOREM 2.21: Let U be a n-linear operator on an n-inner
product space V, defined over the field F. Then U is n-unitary if
and only if the n-adjoint U* of U exists and UU* = U*U = I.
THEOREM 2.22: Let V = V1 ∪ V2 ∪ … ∪ Vn be a finite dimensional
n-inner product n-vector space and U be a n-linear
operator on V. Then U is n-unitary if and only if the n-matrix
related with U in some ordered n-orthonormal n-basis is a
n-unitary matrix, i.e., if A = A1 ∪ A2 ∪ … ∪ An is that n-matrix,
each Ai in A is unitary, i.e., Ai*Ai = Ii for each i, i.e., A*A =
A1*A1 ∪ A2*A2 ∪ … ∪ An*An = I1 ∪ … ∪ In.
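The condition Ai*Ai = Ii can be checked mechanically component by component; the small 2-matrix A = A1 ∪ A2 below is an illustrative example, not one taken from the text.

```python
def conj_transpose(A):
    """Conjugate transpose A* of a square matrix given as nested lists."""
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_unitary(A):
    """Check A* A == I for one component matrix."""
    n = len(A)
    identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    return matmul(conj_transpose(A), A) == identity

# Illustrative 2-unitary 2-matrix A = A1 ∪ A2: each component is unitary.
A1 = [[0, 1], [1, 0]]         # real permutation matrix
A2 = [[1j, 0], [0, -1j]]      # diagonal complex unitary
print(is_unitary(A1), is_unitary(A2))  # True True
```

Checking every component in this way is exactly the n-unitary condition A*A = I1 ∪ … ∪ In of the theorem.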
Several interesting results can be obtained using appropriate
and analogous modifications.
Now we proceed on to define the notion of n-normal
operator or normal n-operator on a n-vector space V. The
principal objective in doing this is that we can obtain some
interesting properties about the n-orthonormal n-basis of V = V1
∪ V2 ∪ … ∪ Vn.
Let the n-orthonormal n-basis of V be denoted by B =
{(α1^1, α2^1, …, αn1^1) ∪ (α1^2, α2^2, …, αn2^2) ∪ … ∪ (α1^n, α2^n, …, αnn^n)}
where each (α1^i, α2^i, …, αni^i) is an orthonormal basis of Vi for i = 1,
2, …, n. Let T = T1 ∪ T2 ∪ … ∪ Tn, the n-linear operator on V,
be defined by Ti αj^i = cj^i αj^i for j = 1, 2, …, ni and for each Ti, i =
1, 2, …, n. This simply implies that the n-matrix of T
(consequently each matrix of Ti in the ordered basis
(α1^i, α2^i, …, αni^i) is a diagonal matrix with diagonal entries
(c1^i, c2^i, …, cni^i)) is the n-diagonal n-matrix

D = diag(c1^1, c2^1, …, cn1^1) ∪ diag(c1^2, c2^2, …, cn2^2) ∪ … ∪ diag(c1^n, c2^n, …, cnn^n).

The n-adjoint operator T* = T1* ∪ T2* ∪ … ∪ Tn* of T = T1 ∪
T2 ∪ … ∪ Tn is represented by the n-conjugate transpose n-matrix,
i.e., once again a n-diagonal n-matrix, whose diagonal
entries are the complex conjugates of c1^i, c2^i, …, cni^i; i = 1, 2, …, n. If V is a real n-vector
space over the real field F then of course we have T = T*.
DEFINITION 2.24: Let V = V1 ∪ V2 ∪ … ∪ Vn be a finite dimensional
n-inner product n-vector space and T a n-linear operator on V.
We say T is n-normal if it commutes with its n-adjoint T*, i.e.,
TT* = T*T.
Now in order to derive some more properties we proceed
on to define the notion of n-characteristic values or n-eigen
values of a n-linear operator on the n-vector space V, and so on.
DEFINITION 2.25: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector
space over the field F of type I. Let T = T1 ∪ T2 ∪ … ∪ Tn be a
n-linear operator on V. A n-characteristic value (or
equivalently characteristic n-value) of T is a n-tuple of scalars
c = c1 ∪ c2 ∪ … ∪ cn such that there exists a non-zero n-vector α =
α1 ∪ α2 ∪ … ∪ αn in V with Tα = cα, i.e., (c1 ∪ c2 ∪ … ∪ cn)
(α1 ∪ α2 ∪ … ∪ αn) = c1α1 ∪ c2α2 ∪ … ∪ cnαn = T1α1 ∪
T2α2 ∪ … ∪ Tnαn. If c = c1 ∪ c2 ∪ … ∪ cn is a n-characteristic
value of T then

a. any α = α1 ∪ α2 ∪ … ∪ αn such that Tα = cα is called
a n-characteristic n-vector of T associated with the n-characteristic
value c = c1 ∪ c2 ∪ … ∪ cn.
b. The collection of all α = α1 ∪ α2 ∪ … ∪ αn such that Tα
= cα is called the n-characteristic space associated
with c. n-characteristic values will also be known as n-eigen
values or n-spectral values.
If T is any n-linear operator on the n-vector space V and c any n-scalar,
the set of n-vectors α in V = V1 ∪ V2 ∪ … ∪ Vn such that
Tα = cα is a n-subspace of V. It is the n-null space of the n-linear
transformation (T – cI) = (T1 ∪ T2 ∪ … ∪ Tn) –
(c1 ∪ c2 ∪ … ∪ cn)(I1 ∪ I2 ∪ … ∪ In) = (T1 – c1I1) ∪ (T2 –
c2I2) ∪ … ∪ (Tn – cnIn). We call c a n-characteristic value of
T if this n-subspace is different from the zero subspace, i.e.,
if (T – cI) = (T1 – c1I1) ∪ … ∪ (Tn – cnIn) fails to be one to one,
i.e., each Ti – ciIi fails to be one to one. If V is a finite n-dimensional
n-vector space, (T – cI) fails to be one to one precisely
when the n-determinant vanishes, i.e., det(T – cI) = det(T1 – c1I1) ∪ det(T2
– c2I2) ∪ … ∪ det(Tn – cnIn) = (0 ∪ 0 ∪ … ∪ 0), i.e., each
det(Ti – ciIi) = 0 for i = 1, 2, …, n.
This is made into the following nice theorem.

THEOREM 2.23: Let T be a n-linear operator on a finite n-dimensional
n-vector space V = V1 ∪ V2 ∪ … ∪ Vn and c =
c1 ∪ c2 ∪ … ∪ cn be a n-scalar; then the following are equivalent:

a. c = c1 ∪ c2 ∪ … ∪ cn is a n-characteristic value of T =
T1 ∪ T2 ∪ … ∪ Tn, i.e., each ci is a characteristic value
of Ti; i = 1, 2, …, n.
b. The n-operator (T – cI) = (T1 – c1I1) ∪ … ∪ (Tn –
cnIn) is singular (i.e., not invertible), i.e., each (Ti –
ciIi) is not invertible, for i = 1, 2, …, n.
c. det(T – cI) = (0 ∪ 0 ∪ … ∪ 0), i.e., det(Ti – ciIi) = 0 for
each i = 1, 2, …, n.
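For a single component the equivalence (a) ⇔ (c) is easy to test numerically; the upper triangular 2 × 2 matrix below is illustrative, with characteristic values read off its diagonal.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det_shift(A, c):
    """det(A - c I) for a 2 x 2 matrix A and a scalar c."""
    return det2([[A[0][0] - c, A[0][1]], [A[1][0], A[1][1] - c]])

A = [[1, 2], [0, 2]]   # illustrative; characteristic values 1 and 2
print(det_shift(A, 1), det_shift(A, 2), det_shift(A, 5))  # 0 0 12
```

The determinant vanishes exactly at the characteristic values 1 and 2, and is non-zero at any other scalar such as 5.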
Now we give the analogue for n-matrices.
DEFINITION 2.26: Let A = A1 ∪ A2 ∪ … ∪ An be a n-square
matrix where each matrix Ai is a ni × ni matrix, i = 1, 2, …, n, with ni ≠
nj if i ≠ j, 1 ≤ i, j ≤ n, over the field F. A n-characteristic
value of A in F is a n-scalar C = C1 ∪ C2 ∪ … ∪ Cn; Ci ∈ F, i =
1, 2, 3, …, n, such that the n-matrix (A – CI) = (A1 – C1I1) ∪
(A2 – C2I2) ∪ … ∪ (An – CnIn) is singular, i.e., C is a n-characteristic
value of A if and only if det(A – CI) = 0 ∪ 0 ∪ …
∪ 0, or equivalently det(CI – A) = 0 ∪ 0 ∪ … ∪ 0, i.e., if
det(Ai – CiIi) = 0 for each and every i, i = 1, 2, …, n. We form
the n-matrix (xI – A), where x = x1 ∪ x2 ∪ … ∪ xn, with polynomial
entries and consider the n-polynomial f = det(xI – A) = det(x1I1
– A1) ∪ det(x2I2 – A2) ∪ … ∪ det(xnIn – An) = f1 ∪ f2 ∪ … ∪ fn in
the n variables x1, x2, …, xn. Clearly the n-characteristic values of
A in F are just the n-tuples of scalars C = C1 ∪ C2 ∪ … ∪ Cn in F
such that

f(C) = 0 ∪ 0 ∪ … ∪ 0 = f1(C1) ∪ f2(C2) ∪ … ∪ fn(Cn).

For this reason f is called the n-characteristic n-polynomial
of A.

It is important to note that f is a n-monic n-polynomial whose
n-degree is exactly (n1, n2, …, nn); this is the n-degree of the n-monic
polynomial f = f1 ∪ f2 ∪ … ∪ fn.
We can prove the following simple lemma.
LEMMA 2.2: Similar n-matrices have the same n-characteristic
polynomial.

Proof: We first recall when two mixed square n-matrices A and B
of the same dimension (n1, n2, …, nn) are similar. We
say A is similar to B (or B is similar to A) if there exists an
invertible n-matrix P of dimension (n1, n2, …, nn) such that, if A
= A1 ∪ A2 ∪ … ∪ An, B = B1 ∪ B2 ∪ … ∪ Bn and P = P1 ∪ P2
∪ … ∪ Pn, then B = P⁻¹AP, i.e., B = B1 ∪ B2 ∪ … ∪ Bn = P1⁻¹A1P1
∪ P2⁻¹A2P2 ∪ … ∪ Pn⁻¹AnPn; i.e., each Ai is similar to Bi for i
= 1, 2, …, n.
Suppose A and B are similar n-mixed square matrices of
identical dimension, i.e., order Ai = order Bi for i = 1, 2, …, n,
with B = P⁻¹AP. Then

n-det(xI – B) = n-det(xI – P⁻¹AP)
= n-det(P⁻¹(xI – A)P)
= n-det P⁻¹ · n-det(xI – A) · n-det P
= det P1⁻¹ det(x1I1 – A1) det P1 ∪ det P2⁻¹ det(x2I2 – A2) det P2 ∪ … ∪ det Pn⁻¹ det(xnIn – An) det Pn
= n-det(xI – A),

i.e., n-det(xI – B) = det(x1I1 – B1) ∪ det(x2I2 – B2) ∪ … ∪ det(xnIn – Bn) = n-det(xI – A).
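For 2 × 2 components the characteristic polynomial is x² − (tr A)x + det A, so Lemma 2.2 can be spot-checked componentwise; the matrices A and P below are illustrative choices, not taken from the text.

```python
def char_poly_2x2(M):
    """Coefficients (1, -trace, det) of det(xI - M) for a 2 x 2 matrix M."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (1, -tr, det)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [0, 2]]
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]            # inverse of P
B = matmul(Pinv, matmul(A, P))      # B = P^{-1} A P is similar to A
print(char_poly_2x2(A) == char_poly_2x2(B))  # True
```

Both matrices share trace 3 and determinant 2, hence the same characteristic polynomial x² − 3x + 2.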
DEFINITION 2.27: Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear
operator on a finite dimensional n-vector space V = V1 ∪
V2 ∪ … ∪ Vn. We say T is n-diagonalizable if there is a n-basis
for V each n-vector of which is a n-characteristic n-vector of T.
The following two lemmas are left as an exercise to the reader.
LEMMA 2.3: Suppose Tα = Cα where T = T1 ∪ T2 ∪ … ∪ Tn, α
= α1 ∪ α2 ∪ … ∪ αn and C = C1 ∪ C2 ∪ … ∪ Cn. If f = f1 ∪
f2 ∪ … ∪ fn is any n-polynomial then f(T)α = f(C)α, i.e.,
f1(T1)α1 ∪ f2(T2)α2 ∪ … ∪ fn(Tn)αn =
f1(C1)α1 ∪ f2(C2)α2 ∪ … ∪ fn(Cn)αn.
LEMMA 2.4: Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear operator
on a finite (n1, n2, …, nn) dimensional n-vector space V = V1 ∪
V2 ∪ … ∪ Vn. Let {(C1^1, C2^1, …, Ck1^1) ∪ (C1^2, C2^2, …, Ck2^2) ∪ … ∪
(C1^n, C2^n, …, Ckn^n)} be the distinct n-characteristic values of T1 ∪ T2
∪ … ∪ Tn. Let W1^1, W2^1, …, Wk1^1 be the subspaces of V1
associated with the characteristic values C1^1, C2^1, …, Ck1^1 respectively,
W1^2, W2^2, …, Wk2^2 be the subspaces of V2 associated with the
characteristic values C1^2, C2^2, …, Ck2^2 respectively, and so on, and
let W1^n, W2^n, …, Wkn^n be the subspaces of Vn associated with the
characteristic values C1^n, C2^n, …, Ckn^n. If

W^1 = W1^1 + W2^1 + … + Wk1^1,
W^2 = W1^2 + W2^2 + … + Wk2^2, …, W^n = W1^n + W2^n + … + Wkn^n

and W = W^1 ∪ W^2 ∪ … ∪ W^n, then n-dim W = (dim W^1, dim W^2,
…, dim W^n) with dim W^j = dim W1^j + dim W2^j + … + dim Wkj^j for
each j = 1, 2, …, n; and if B^i is an ordered basis of W^i, i = 1,
2, …, n, then (B^1, B^2, …, B^n) is the n-ordered n-basis of W = W^1
∪ W^2 ∪ … ∪ W^n.
Using these lemmas the reader is expected to prove the
following theorem.
THEOREM 2.24: Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear
operator on the finite n-dimensional n-vector space V = V1 ∪ V2
∪ … ∪ Vn. Let {(C1^1, C2^1, …, Ck1^1) ∪ (C1^2, C2^2, …, Ck2^2) ∪ … ∪
(C1^n, C2^n, …, Ckn^n)} be the distinct n-characteristic n-values of T
and let Wj^i be the null space of (Ti – Cj^iIi), for j = 1, 2, …, ki and
i = 1, 2, …, n. Then the following are equivalent:

1. T is n-diagonalizable.
2. The n-characteristic polynomial for T is f = f1 ∪ f2 ∪ … ∪ fn, where

fi = (xi – C1^i)^(d1^i) (xi – C2^i)^(d2^i) … (xi – Cki^i)^(dki^i)

for every i = 1, 2, …, n, with dim Wj^i = dj^i, and
dim W^i = d^i where d^i = d1^i + d2^i + … + dki^i for every i = 1, 2, …,
n; that is, dim V = (n1, n2, …, nn) with
dim W^1 = dim W1^1 + dim W2^1 + … + dim Wk1^1 = n1, dim W^2 =
dim W1^2 + dim W2^2 + … + dim Wk2^2 = n2 and so on, and dim W^n =
dim W1^n + dim W2^n + … + dim Wkn^n = nn.
The proof is left as an exercise for the reader.
Now we proceed on to define the notion of n-annihilating
polynomials.

Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear operator on a n-vector
space V over the field F. If p(x) = p1(x) ∪ p2(x) ∪ … ∪
pn(x) is a n-polynomial in x with coefficients from F then p(T)
= p1(T1) ∪ p2(T2) ∪ … ∪ pn(Tn) is again a n-linear operator on
V. If q(x) = q1(x) ∪ q2(x) ∪ … ∪ qn(x) is another n-polynomial
over F then

(p + q)(T) = p(T) + q(T),
(pq)(T) = p(T)q(T) = p1(T1)q1(T1) ∪ p2(T2)q2(T2) ∪ … ∪ pn(Tn)qn(Tn).

Therefore the collection of n-polynomials p(x) which n-annihilate
T, in the sense that p(T) = 0, is a n-ideal in the n-polynomial
algebra F[x]. Clearly Ln(V, V) is a n-linear space of
dimension (n1², n2², …, nn²) where ni is the dimension of the vector
space Vi in V = V1 ∪ V2 ∪ … ∪ Vn. If for each Ti in the n-linear
operator T = T1 ∪ T2 ∪ … ∪ Tn we take the ni² + 1 powers
Ii, Ti, Ti², …, Ti^(ni²), these must be linearly dependent, so

C0^i Ii + C1^i Ti + C2^i Ti² + … + C(ni²)^i Ti^(ni²) = 0

for some scalars Cj^i not all zero, 0 ≤ j ≤ ni².
So the n-ideal of polynomials which n-annihilate T contains
a non-zero n-polynomial of n-degree (n1², n2², …, nn²) or less.
Now we define the notion of n-minimal polynomial for T =
T1 ∪ T2 ∪ … ∪ Tn.
DEFINITION 2.28: Let T be a n-linear operator on a finite
dimensional n-vector space V over the field F. The n-minimal
polynomial for T = T1 ∪ T2 ∪ … ∪ Tn is the unique n-monic
generator of the n-ideal of polynomials over F which n-annihilate
T, i.e., the n-monic generator of the ideals of
polynomials over F which annihilate each Ti for i = 1, 2, …, n.

The term n-minimal comes from the fact that the n-generator of
a polynomial n-ideal is characterized by being the n-tuple of monic
polynomials, each of minimum degree in its ideal of the n-ideal;
that is, the n-minimal polynomial p = p1 ∪ p2 ∪ … ∪
pn for the n-linear operator T is uniquely determined by these
three properties:

1. In p = p1 ∪ p2 ∪ … ∪ pn, each pi is a monic
polynomial over the scalar field F; we shortly call p the
n-monic polynomial over F.
2. p(T) = 0, i.e., pi(Ti) = 0 for each i, i = 1, 2, …, n;
i.e., p1(T1) ∪ p2(T2) ∪ … ∪ pn(Tn) = 0 ∪ 0 ∪ … ∪ 0.
3. No n-polynomial over F which n-annihilates T has smaller
degree than p, i.e., no polynomial over F which annihilates Ti has
smaller degree than pi, for each i = 1, 2, …, n.

If A is a n-mixed square matrix over F, i.e., A = A1 ∪ A2 ∪ … ∪ An is a n-mixed
matrix where each Ai is a ni × ni matrix over F, we define the n-minimal
polynomial for A in an analogous way, as the unique n-monic
generator of the n-ideal of all n-polynomials over F which n-annihilate
A, i.e., annihilate Ai for each i, i = 1, 2, …, n.
Similar results which hold good in case of linear vector spaces
can be analogously extended to the case of n-vector spaces with
proper and appropriate modifications.
The proof of the following interesting theorem can be worked
out by the interested reader.
THEOREM 2.25: Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear
operator on a finite (n1, n2, …, nn) dimensional n-vector space
[or let A = A1 ∪ A2 ∪ … ∪ An be a n-mixed square matrix where
each Ai is a ni × ni matrix, i = 1, 2, …, n]; then the n-characteristic
and n-minimal polynomials for T [for A] have the same n-roots,
except for multiplicities.
The Cayley-Hamilton theorem for a n-linear operator T on the n-vector
space V is stated next; the proof is also left as an exercise for
the reader.
THEOREM 2.26: (CAYLEY HAMILTON THEOREM FOR n-VECTOR
SPACES) Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear operator on a
finite (n1, n2, …, nn) dimensional n-vector space V = V1 ∪ V2 ∪
… ∪ Vn over a field F. If f = f1 ∪ f2 ∪ … ∪ fn is the n-characteristic
polynomial for T = T1 ∪ T2 ∪ … ∪ Tn then f(T) =
0 ∪ 0 ∪ … ∪ 0, i.e., f1(T1) ∪ f2(T2) ∪ … ∪ fn(Tn) = 0 ∪ 0 ∪ …
∪ 0; in other words the n-minimal polynomial divides the n-characteristic
polynomial for T.
We just give a hint of the proof.

Hint: Choose a n-ordered n-basis {(α1^1, α2^1, …, αn1^1) ∪
(α1^2, α2^2, …, αn2^2) ∪ … ∪ (α1^n, α2^n, …, αnn^n)} for V = V1 ∪ V2 ∪ …
∪ Vn and let A = A^1 ∪ A^2 ∪ … ∪ A^n be the n-matrix which
represents T = T1 ∪ T2 ∪ … ∪ Tn in the given n-basis. Then

Ti αk^i = Σ_{j=1}^{ni} Ajk^i αj^i; 1 ≤ k ≤ ni.

This is true for each i, i.e., true for each Ti. Thus

pk = Σ_{j=1}^{nk} (δji Tk – Aji^k Ik) αj^k = 0,

this equation being true for k = 1, 2, …, n, i.e., P = P1 ∪ P2 ∪ … ∪ Pn. Suppose K
= K1 ∪ K2 ∪ … ∪ Kn is a commutative n-ring with identity
consisting of all n-polynomials in T = T1 ∪ T2 ∪ … ∪ Tn. Let
B = B1 ∪ B2 ∪ … ∪ Bn be the element of

K^(n×n) = K^(n1×n1) ∪ K^(n2×n2) ∪ … ∪ K^(nn×nn)

with entries Bij^k = δij Tk – Aji^k Ik, k = 1, 2, …, n. We can show f(T)
= det B, i.e., f1(T1) ∪ f2(T2) ∪ … ∪ fn(Tn) = det B1 ∪ det B2 ∪
… ∪ det Bn.
Using this hint the interested reader can prove the result.
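Componentwise the theorem says fi(Ai) = 0 for each matrix Ai. For 2 × 2 components fi(x) = x² − (tr Ai)x + det Ai, which gives a quick check; the two components below form an illustrative 2-matrix, not one from the text.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cayley_hamilton_2x2(A):
    """Evaluate f(A) = A^2 - (tr A) A + (det A) I for a 2 x 2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = matmul(A, A)
    return [[A2[i][j] - tr * A[i][j] + (det if i == j else 0)
             for j in range(2)] for i in range(2)]

# Componentwise on an illustrative 2-matrix A = A1 ∪ A2:
for Ai in ([[1, 2], [0, 2]], [[0, -1], [1, 0]]):
    print(cayley_hamilton_2x2(Ai))  # [[0, 0], [0, 0]] each time
```

Each component vanishing is exactly f1(T1) ∪ f2(T2) = 0 ∪ 0 in the statement above.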
Now we proceed on to define the notion of a n-subspace W of V
to be n-invariant under T.
DEFINITION 2.29: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector
space over F and T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear operator on
V. If W = W1 ∪ W2 ∪ … ∪ Wn is a n-subspace of V, we say that
W is n-invariant under T if for each n-vector α = α1 ∪ α2 ∪ … ∪
αn in W the n-vector T(α) is in W, i.e., each Ti(αi) ∈ Wi for i = 1, 2,
…, n; i.e., T(W) is contained in W, or, what is the same, Ti(Wi)
is contained in Wi for i = 1, 2, …, n.
LEMMA 2.5: Let V be a finite (n1, n2, …, nn) dimensional n-vector
space over the field F. Let T = T1 ∪ T2 ∪ … ∪ Tn be a n-linear
operator on V such that the n-minimal polynomial for T
is a product of linear n-factors p = p1 ∪ p2 ∪ … ∪ pn, where

pi = (x – C1^i)^(r1^i) … (x – Cki^i)^(rki^i); Cj^i ∈ F, 1 ≤ j ≤ ki, for i = 1, 2, …, n.

Let W = W^1 ∪ W^2 ∪ … ∪ W^n be a proper (W ≠ V) n-subspace of
V, where each W^i = W1^i + … + Wki^i, i = 1, 2, …, n, which is n-invariant
under T. There exists a vector α = α1 ∪ α2 ∪ … ∪ αn
in V such that α is not in W and

(T – CI)α = (T1 – C1I1)α1 ∪ (T2 – C2I2)α2 ∪ … ∪ (Tn – CnIn)αn

is in W for some n-characteristic values, C^1 among (C1^1, C2^1, …, Ck1^1), 1 ≤
k1 ≤ n1; C^2 among (C1^2, C2^2, …, Ck2^2), and so on.
The proof can be derived without much difficulty; in fact it is very
straightforward, using the working for each Ti: Vi → Vi and
W^i = W1^i + … + Wki^i, 1 ≤ ki ≤ ni. When the result holds for every
component of V and T it is true for the n-vector space and its n-linear
operator T which is defined on V.
The following theorem on the n-diagonalizability of the n-linear
operator T on V is given below.
THEOREM 2.27: Let V = V1 ∪ V2 ∪ … ∪ Vn be a finite (n1, n2,
…, nn) dimensional n-vector space over the field F and let T =
T1 ∪ T2 ∪ … ∪ Tn be a n-linear operator on V. Then T is n-diagonalizable
if and only if the n-minimal polynomial for T has
the form

p = {(x – C1^1)(x – C2^1)…(x – Ck1^1)} ∪ {(x – C1^2)(x – C2^2)…(x – Ck2^2)} ∪ … ∪ {(x – C1^n)(x – C2^n)…(x – Ckn^n)}

where the Cj^i are distinct elements of F (i.e., C1^1, C2^1, …, Ck1^1 form a
distinct set in F, C1^2, C2^2, …, Ck2^2 form a distinct set in F, and so on,
C1^n, C2^n, …, Ckn^n form a distinct set in F).
Proof: We know if T = T1 ∪ T2 ∪ … ∪ Tn is n-diagonalizable
its n-minimal polynomial is a n-product of distinct linear factors,
for each Ti: Vi → Vi (where Vi is a component of the n-vector
space V = V1 ∪ V2 ∪ … ∪ Vn and Ti is a linear operator on Vi
and a component of T).
So we can say if pi = (x – C1^i)(x – C2^i)…(x – Cki^i) is the
minimal polynomial associated with the diagonalizable operator
Ti, then pi is a product of distinct linear factors. This is true
for each i; i = 1, 2, …, n. Hence the claim. To prove the
converse, let W = W1 ∪ W2 ∪ … ∪ Wn be the n-subspace
spanned by all the n-characteristic n-vectors of T and suppose
W ≠ V, that is, each Wi ≠ Vi for i = 1, 2, …, n.
By Lemma 2.5 there is a n-vector α = α1 ∪ α2 ∪ … ∪ αn not
in W (i.e., each αi ∉ Wi for i = 1, 2, …, n) and a n-characteristic
value C = C1 ∪ C2 ∪ … ∪ Cn of T such that the
n-vector β = (T – CI)α lies in W; i.e., if β = β1 ∪ β2 ∪ … ∪ βn then
βi = (Ti – Cj^i Ii)αi lies in Wi (1 ≤ j ≤ ki), this being true for each i, i =
1, 2, …, n. Since βi ∈ Wi we have βi = β1^i + β2^i + … + βki^i (true for
each i, i = 1, 2, …, n) where Ti βj^i = Cj^i βj^i; 1 ≤ j ≤ ki and i = 1, 2,
…, n, and hence the vector

hi(Ti)βi = hi(C1^i)β1^i + … + hi(Cki^i)βki^i

is in Wi for every polynomial hi; this is true for each i, i = 1, 2, …, n.
Now pi = (x – Cj^i)qi for some polynomial qi, and also
qi – qi(Cj^i) = (x – Cj^i)hi (this is true for each i, i = 1, 2, …, n).
We have qi(Ti)αi – qi(Cj^i)αi = hi(Ti)(Ti – Cj^i Ii)αi = hi(Ti)βi, 1
≤ i ≤ n. But hi(Ti)βi is in Wi (for each i) and since 0 = pi(Ti)αi
= (Ti – Cj^i Ii)qi(Ti)αi, the vector qi(Ti)αi is in Wi.
Therefore qi(Cj^i)αi is in Wi. Since αi is not in Wi we must have
qi(Cj^i) = 0, true for every i = 1, 2, …, n. This contradicts the fact that
pi has distinct roots, for i = 1, 2, …, n. Hence the claim.
However, we give an illustration of this theorem so that the
reader can understand how it is applied in general.
Example 2.19: Let V = V1 ∪ V2 ∪ V3 where V1 = Q × Q, V2 =
Q × Q × Q × Q and V3 = Q × Q × Q, i.e., V is a 3-vector space over
Q of finite dimension, with 3-dimension (2, 4, 3). Define T: V →
V by T = T1 ∪ T2 ∪ T3: V1 ∪ V2 ∪ V3 → V1 ∪ V2 ∪ V3, where T1:
V1 → V1 is defined by the related matrix

A1 =
[ 1 2 ]
[ 0 2 ],

T2: V2 → V2 is defined by the related matrix

A2 =
[ 2 1 1 3 ]
[ 0 1 2 1 ]
[ 0 0 3 5 ]
[ 0 0 0 4 ]

and T3: V3 → V3 is defined by the related matrix

A3 =
[  5 -6 -6 ]
[ -1  4  2 ]
[  3 -6 -4 ].
The 3-matrix associated with T is A1 ∪ A2 ∪ A3 as given above,
so the 3-characteristic polynomial associated with T is given by

C = (x – 1)(x – 2) ∪ (x – 2)(x – 1)(x – 3)(x – 4) ∪ (x – 2)²(x – 1)
  = C1 ∪ C2 ∪ C3.

The 3-minimal polynomial p is given by

p = p1 ∪ p2 ∪ p3
  = (x – 1)(x – 2) ∪ (x – 2)(x – 1)(x – 3)(x – 4) ∪ (x – 1)(x – 2).
Hence T is a 3-diagonalizable operator and the 3-diagonal 3-matrix
associated with T is given by

D =
[ 1 0 ]
[ 0 2 ]
∪
[ 2 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 3 0 ]
[ 0 0 0 4 ]
∪
[ 1 0 0 ]
[ 0 2 0 ]
[ 0 0 2 ].
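The claim that p3 = (x − 1)(x − 2) for the component A3 can be verified directly: (A3 − I)(A3 − 2I) is the zero matrix, even though the characteristic polynomial of A3 has (x − 2) squared. A short check:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def shift(A, c):
    """A - c I for a 3 x 3 matrix A."""
    return [[A[i][j] - (c if i == j else 0) for j in range(3)]
            for i in range(3)]

A3 = [[5, -6, -6], [-1, 4, 2], [3, -6, -4]]
Z = matmul(shift(A3, 1), shift(A3, 2))
print(Z == [[0, 0, 0], [0, 0, 0], [0, 0, 0]])  # True: p3 = (x - 1)(x - 2)
```

Since the minimal polynomial of A3 has distinct roots, Theorem 2.27 confirms this component (and hence T) is diagonalizable.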
Now we proceed on to describe the n-linear operator which is n-diagonalizable
in the language of n-invariant direct sum
decomposition.

DEFINITION 2.30: Let V be a n-vector space over F. A n-projection
of V = V1 ∪ V2 ∪ … ∪ Vn is a n-linear operator E =
E1 ∪ E2 ∪ … ∪ En on V such that E² = E, i.e., E² =
(E1)² ∪ (E2)² ∪ … ∪ (En)² = E1 ∪ E2 ∪ … ∪ En. All properties associated with
linear operators as projections can be analogously derived.

Clearly if V = V1 ∪ V2 ∪ … ∪ Vn and V = (W1^1 ⊕ … ⊕ Wk1^1) ∪
(W1^2 ⊕ … ⊕ Wk2^2) ∪ … ∪ (W1^n ⊕ … ⊕ Wkn^n) then for each space
Wj^i; 1 ≤ i ≤ n and 1 ≤ j ≤ ki, we can define an operator Ej^i on Vi
such that if αi ∈ Vi is of the form αi = α1^i + α2^i + … + αki^i with
αj^i ∈ Wj^i, then Ej^i(αi) = αj^i; Ej^i is a well defined rule. This is
true for each i and j, so

E1^1 + E2^1 + … + Ek1^1 = E1,
E1^2 + E2^2 + … + Ek2^2 = E2, …,
E1^n + E2^n + … + Ekn^n = En;
E = E1 ∪ E2 ∪ … ∪ En.
Now, as in the case of linear vector spaces, we can in the case of n-vector
spaces derive the properties of projections.

THEOREM 2.28: Let V = V1 ∪ V2 ∪ … ∪ Vn be a n-vector space
over the field F. Suppose each Vi = W1^i ⊕ … ⊕ Wki^i for i = 1, 2,
…, n, i.e., V = (W1^1 ⊕ … ⊕ Wk1^1) ∪ (W1^2 ⊕ W2^2 ⊕ … ⊕ Wk2^2) ∪ … ∪
(W1^n ⊕ W2^n ⊕ … ⊕ Wkn^n); then there exist (k1 + k2 + … + kn)
linear operators E1^1, E2^1, …, Ek1^1; E1^2, E2^2, …, Ek2^2; …; E1^n, E2^n, …, Ekn^n
on the n-vector space V such that

1. Each Ej^i is a projection, i = 1, 2, …, n; 1 ≤ j ≤ ki.
2. Ej^i Ek^i = 0 if j ≠ k.
3. Ii = E1^i + … + Eki^i, i = 1, 2, …, n, i.e., I = I1 ∪ I2 ∪ … ∪ In.
4. The range of Ej^i is Wj^i for i = 1, 2, …, n, 1 ≤ j ≤ ki.

Conversely if E1^1, E2^1, …, Ek1^1; E1^2, E2^2, …, Ek2^2; …; E1^n, …, Ekn^n are k1
+ k2 + … + kn linear operators on V which satisfy conditions
1, 2 and 3, and if we let Wj^i be the range of Ej^i, then V =
(W1^1 ⊕ … ⊕ Wk1^1) ∪ (W1^2 ⊕ … ⊕ Wk2^2) ∪ … ∪ (W1^n ⊕ … ⊕ Wkn^n).
Proof: We prove the converse statement; the direct part follows
from the basic definitions and properties of projections
(conditions 1 to 4 can be easily verified).

Suppose we have the ki linear operators E1^i, E2^i, …, Eki^i on Vi, Vi a
component of the n-vector space V = V1 ∪ V2 ∪ … ∪ Vn; what
we prove for one i is true for i = 1, 2, …, n.
Given that these operators satisfy conditions (1), (2) and
(3), if we let Wj^i be the range of Ej^i then certainly V = W^1 ∪
… ∪ W^n where W^i = W1^i ⊕ … ⊕ Wki^i. By condition (3) we have,
for α = α1 ∪ α2 ∪ … ∪ αn,

α = (E1^1α1 + E2^1α1 + … + Ek1^1α1) ∪ (E1^2α2 + E2^2α2 + … + Ek2^2α2) ∪ … ∪ (E1^nαn + E2^nαn + … + Ekn^nαn),

since each Ii = E1^i + … + Eki^i, i = 1, 2, …, n, and
Ep^i Ek^i = 0 if p ≠ k. Thus αi = α1^i + … + αki^i with αj^i = Ej^iαi in Wj^i, true for i =
1, 2, …, n; this holds for each αi ∈ Vi and hence for each α ∈
V. This expression for each αi is unique,
because if αi = α1^i + α2^i + … + αki^i with
each αj^i ∈ Wj^i, say αj^i = Ej^iβj^i, then from (1) and (2) we have

Ej^iαi = Σ_{k=1}^{ki} Ej^iαk^i = Σ_{k=1}^{ki} Ej^iEk^iβk^i = (Ej^i)²βj^i = Ej^iβj^i = αj^i.

This is true for every i, i = 1, 2, …, n and every j, j = 1, 2, …, ki.
This proves each Vi is the direct sum of the Wj^i; hence V is the direct sum
of W1^1, …, Wk1^1, …, W1^n, W2^n, …, Wkn^n. Hence the result.
Now we give a sketch of the proof of the following theorem;
however, the reader is expected to prove the theorem in full.
THEOREM 2.29: Let T = T_1 ∪ T_2 ∪ … ∪ T_n be a n-linear operator on the n-space V = V_1 ∪ V_2 ∪ … ∪ V_n and let W^1, …, W^n and E^1, E^2, …, E^n be as in the above theorem. Then a necessary and sufficient condition for each n-subspace W^i to be n-invariant under T (i.e., each W^i_j invariant under T_i) is that T commutes with each of the projections E^i, i.e., TE^i = E^iT for i = 1, 2, …, n (i.e., each T_i commutes with each E^i_j, i.e., T_iE^i_j = E^i_jT_i, i = 1, 2, …, n and j = 1, 2, …, k_i).
Proof: Suppose T commutes with each E^i_j, i.e., T_i commutes with E^i_j for j = 1, 2, …, k_i; this is true for each T_i. Let α = α_1 ∪ α_2 ∪ … ∪ α_n with α^i_j ∈ W^i_j; then E^i_j α^i_j = α^i_j and
T_i α^i_j = T_i(E^i_j α^i_j) = E^i_j(T_i α^i_j)
(since T_i commutes with E^i_j for j = 1, 2, …, k_i and i = 1, 2, …, n). This shows that T_i α^i_j is in the range of E^i_j, i.e., W^i_j is invariant under T_i.
Assume now that each W^i_j is invariant under T_i, 1 ≤ j ≤ k_i, i = 1, 2, …, n; we shall show that T_iE^i_j = E^i_jT_i for every i, 1 ≤ i ≤ n, and j = 1, 2, …, k_i. Let α_i ∈ V_i; then
α_i = E^i_1 α_i + … + E^i_{k_i} α_i,
T_i α_i = T_iE^i_1 α_i + … + T_iE^i_{k_i} α_i.
Since E^i_k α_i is in W^i_k, which is invariant under T_i, we must have T_i(E^i_k α_i) = E^i_k β^i_k for some β^i_k. Then
E^i_j T_i E^i_k α_i = E^i_j E^i_k β^i_k = 0 if k ≠ j, and = E^i_j β^i_j if k = j.
Thus
E^i_j T_i α_i = E^i_j T_i E^i_1 α_i + … + E^i_j T_i E^i_{k_i} α_i = E^i_j β^i_j = T_i E^i_j α_i.
This is true for each α_i ∈ V_i, so E^i_j T_i = T_i E^i_j. This result is true for each i, i = 1, 2, …, n.
We now prove the main theorem, which describes the n-diagonalization of a n-linear operator.
THEOREM 2.30: Let T = T_1 ∪ T_2 ∪ … ∪ T_n be a n-linear operator on a finite dimensional n-vector space V = V_1 ∪ V_2 ∪ … ∪ V_n. If T is n-diagonalizable and if (C^1_1, C^1_2, …, C^1_{k_1}) ∪ (C^2_1, C^2_2, …, C^2_{k_2}) ∪ … ∪ (C^n_1, C^n_2, …, C^n_{k_n}) are the n-characteristic values, such that for each i the C^i_1, C^i_2, …, C^i_{k_i} are the distinct characteristic values of T_i, i = 1, 2, …, n, then there exist n-linear operators
(E^1_1, E^1_2, …, E^1_{k_1}), (E^2_1, E^2_2, …, E^2_{k_2}), …, (E^n_1, E^n_2, …, E^n_{k_n})
on V such that
1. T = (C^1_1E^1_1 + … + C^1_{k_1}E^1_{k_1}) ∪ (C^2_1E^2_1 + … + C^2_{k_2}E^2_{k_2}) ∪ … ∪ (C^n_1E^n_1 + … + C^n_{k_n}E^n_{k_n});
2. I = (E^1_1 + E^1_2 + … + E^1_{k_1}) ∪ … ∪ (E^n_1 + … + E^n_{k_n}) = I_1 ∪ … ∪ I_n;
3. E^i_kE^i_j = 0, j ≠ k;
4. (E^i_j)^2 = E^i_j;
5. the range of each E^i_j is the characteristic space for T_i associated with C^i_j.
Conversely, if there exist a (k_1, k_2, …, k_n) set of k_i distinct n-scalars C^i_1, C^i_2, …, C^i_{k_i}, i = 1, 2, …, n, and k_i distinct non-zero linear operators E^i_1, E^i_2, …, E^i_{k_i}, i = 1, 2, …, n, which satisfy conditions (1), (2) and (3), then each T_i is diagonalizable, hence T = T_1 ∪ T_2 ∪ … ∪ T_n is n-diagonalizable; the C^i_1, C^i_2, …, C^i_{k_i} are the distinct characteristic values of T_i for i = 1, 2, …, n, and conditions (4) and (5) are satisfied.
Proof: Suppose that T is n-diagonalizable, i.e., each T_i of T is diagonalizable, with distinct characteristic values (C^1_1, C^1_2, …, C^1_{k_1}) ∪ (C^2_1, C^2_2, …, C^2_{k_2}) ∪ … ∪ (C^n_1, C^n_2, …, C^n_{k_n}), i.e., each set (C^i_1, C^i_2, …, C^i_{k_i}) consists of distinct values. Let W^i_j be the space of characteristic vectors associated with the characteristic value C^i_j. As we have seen,
V = (W^1_1 ⊕ … ⊕ W^1_{k_1}) ∪ (W^2_1 ⊕ … ⊕ W^2_{k_2}) ∪ … ∪ (W^n_1 ⊕ … ⊕ W^n_{k_n}),
where V_i = W^i_1 ⊕ … ⊕ W^i_{k_i} for i = 1, 2, …, n.
Let E^i_1, E^i_2, …, E^i_{k_i} be the projections associated with this decomposition, as in the above theorem. Then (2), (3), (4) and (5) are satisfied. To verify (1) we proceed as follows: for each α = α_1 ∪ α_2 ∪ … ∪ α_n in V, α_i ∈ V_i, we have α_i = E^i_1 α_i + … + E^i_{k_i} α_i, and so
T_i α_i = T_iE^i_1 α_i + … + T_iE^i_{k_i} α_i = C^i_1E^i_1 α_i + … + C^i_{k_i}E^i_{k_i} α_i.
In other words T_i = C^i_1E^i_1 + … + C^i_{k_i}E^i_{k_i}. Now suppose that we are given a n-linear operator T = T_1 ∪ T_2 ∪ … ∪ T_n together with distinct n-scalars C = C^1 ∪ C^2 ∪ … ∪ C^n, with scalars C^i_j and non-zero operators E^i_j satisfying (1), (2) and (3); this is true for each i = 1, 2, …, n and j = 1, 2, …, k_i. Since E^i_jE^i_k = 0 when j ≠ k, we multiply both sides of
I = I_1 ∪ I_2 ∪ … ∪ I_n = (E^1_1 + E^1_2 + … + E^1_{k_1}) ∪ (E^2_1 + E^2_2 + … + E^2_{k_2}) ∪ … ∪ (E^n_1 + E^n_2 + … + E^n_{k_n})
by E^1_t ∪ E^2_t ∪ … ∪ E^n_t and obtain immediately
E^1_t ∪ E^2_t ∪ … ∪ E^n_t = (E^1_t)^2 ∪ (E^2_t)^2 ∪ … ∪ (E^n_t)^2.
Multiplying
T = (C^1_1E^1_1 + … + C^1_{k_1}E^1_{k_1}) ∪ … ∪ (C^n_1E^n_1 + … + C^n_{k_n}E^n_{k_n})
by E^1_t ∪ E^2_t ∪ … ∪ E^n_t we have
T_1E^1_t ∪ T_2E^2_t ∪ … ∪ T_nE^n_t = C^1_tE^1_t ∪ C^2_tE^2_t ∪ … ∪ C^n_tE^n_t,
which shows that any n-vector in the n-range of E^1_t ∪ E^2_t ∪ … ∪ E^n_t is in the n-null space of (T − CI) = (T_1 − C^1_tI_1) ∪ … ∪ (T_n − C^n_tI_n), where I = I_1 ∪ I_2 ∪ … ∪ I_n. Since we have assumed E^1_t ∪ E^2_t ∪ … ∪ E^n_t ≠ 0 ∪ 0 ∪ … ∪ 0, this proves that there is a non-zero n-vector in the n-null space of (T − CI) = (T_1 − C^1_tI_1) ∪ … ∪ (T_n − C^n_tI_n), i.e., that C^i_t is a characteristic value of T_i for each i, i = 1, 2, …, n. For if C_i is any scalar, then (T_i − C_iI_i) = (C^i_1 − C_i)E^i_1 + … + (C^i_{k_i} − C_i)E^i_{k_i}, true for i = 1, 2, …, n; so if (T_i − C_iI_i)α_i = 0, we must have (C^i_j − C_i)E^i_j α_i = 0 for every j. If α_i is not the zero vector, then E^i_j α_i ≠ 0 for some j, so that for this j we have C^i_j − C_i = 0.
Certainly each T_i is diagonalizable, since we have shown that every non-zero vector in the range of E^i_j is a characteristic vector of T_i, and the fact that I_i = E^i_1 + … + E^i_{k_i} shows that these characteristic vectors span V_i. This is true for each i, i = 1, 2, …, n. All that remains to be shown is that the n-null space of (T − CI) = (T_1 − C^1_kI_1) ∪ (T_2 − C^2_kI_2) ∪ … ∪ (T_n − C^n_kI_n) is exactly the n-range of E^1_k ∪ … ∪ E^n_k. But this is clear, because Tα = Cα, i.e., T_i α_i = C^i_k α_i for each i, i = 1, 2, …, n, means
Σ_{j=1}^{k_1} (C^1_j − C^1_k)E^1_j α_1 ∪ Σ_{j=1}^{k_2} (C^2_j − C^2_k)E^2_j α_2 ∪ … ∪ Σ_{j=1}^{k_n} (C^n_j − C^n_k)E^n_j α_n = 0 ∪ 0 ∪ … ∪ 0.
Hence (C^i_j − C^i_k)E^i_j α_i = 0 for each j and each i = 1, 2, …, n, so that E^i_j α_i = 0 for j ≠ k, for each i, i = 1, 2, …, n. Since α_i = E^i_1 α_i + … + E^i_{k_i} α_i for each i and E^i_j α_i = 0 for j ≠ k, we have α_i = E^i_k α_i, which proves that α_i is in the range of E^i_k. This is true for each i; hence the claim.
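For a single diagonalizable component T_i, the operators E^i_j of the theorem can be produced explicitly as polynomials in T_i by Lagrange interpolation, E_j = Π_{k≠j}(T − c_kI)/(c_j − c_k). The sketch below, in Python with NumPy, checks conditions (1)–(4) numerically for one component; the 2 × 2 matrix and helper name are illustrative, not from the text.

```python
import numpy as np

def spectral_projections(T, eigvals):
    """Projections E_j of Theorem 2.30 for a diagonalizable matrix T
    with distinct eigenvalues, built by Lagrange interpolation:
    E_j = prod_{k != j} (T - c_k I) / (c_j - c_k)."""
    n = T.shape[0]
    projections = []
    for j, cj in enumerate(eigvals):
        E = np.eye(n)
        for k, ck in enumerate(eigvals):
            if k != j:
                E = E @ (T - ck * np.eye(n)) / (cj - ck)
        projections.append(E)
    return projections

# One component of a n-linear operator: a diagonalizable 2x2 matrix
# with distinct characteristic values 3 and 5.
T1 = np.array([[3.0, 1.0], [0.0, 5.0]])
E = spectral_projections(T1, [3.0, 5.0])

# Conditions of Theorem 2.30 for this component:
assert np.allclose(3.0 * E[0] + 5.0 * E[1], T1)   # (1) T = sum c_j E_j
assert np.allclose(E[0] + E[1], np.eye(2))        # (2) sum E_j = I
assert np.allclose(E[0] @ E[1], np.zeros((2, 2))) # (3) E_j E_k = 0
assert np.allclose(E[0] @ E[0], E[0])             # (4) E_j^2 = E_j
```

The same check can be run independently on every component T_i of T; the n-operator versions of (1)–(4) then hold componentwise.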
We now give the statement of the primary decomposition theorem for the n-vector space V.
THEOREM 2.31: Let T = T_1 ∪ T_2 ∪ … ∪ T_n be a n-linear operator on the finite dimensional n-vector space V = V_1 ∪ V_2 ∪ … ∪ V_n over the field F. Let p = p_1 ∪ p_2 ∪ … ∪ p_n be the n-minimal polynomial of T, where
p_i = (p_{i1})^{r_{i1}} (p_{i2})^{r_{i2}} … (p_{ik_i})^{r_{ik_i}}, i = 1, 2, …, n; i.e.,
p = (p_{11})^{r_{11}} … (p_{1k_1})^{r_{1k_1}} ∪ (p_{21})^{r_{21}} … (p_{2k_2})^{r_{2k_2}} ∪ … ∪ (p_{n1})^{r_{n1}} … (p_{nk_n})^{r_{nk_n}},
where the p_{ik} are distinct irreducible monic polynomials over F and the r_{ik} are positive integers. Let W^i_k be the null space of p_{ik}(T_i)^{r_{ik}}, k = 1, 2, …, k_i, i = 1, 2, …, n. Then
1. V = (W^1_1 ⊕ … ⊕ W^1_{k_1}) ∪ (W^2_1 ⊕ … ⊕ W^2_{k_2}) ∪ … ∪ (W^n_1 ⊕ … ⊕ W^n_{k_n});
2. each W^i_j is invariant under T_i, i = 1, 2, …, n, 1 ≤ j ≤ k_i;
3. if T_{ij} is the operator induced on W^i_j by T_i, then the minimal polynomial for T_{ij} is (p_{ij})^{r_{ij}}, true for j = 1, 2, …, k_i and i = 1, 2, …, n.
Several interesting results can be found in this direction analogously.
Now we define the notion of the n-diagonalizable part and the n-nilpotent part of a n-linear operator T.
Let V = V_1 ∪ V_2 ∪ … ∪ V_n be a n-vector space over the field F and T = T_1 ∪ T_2 ∪ … ∪ T_n a n-linear operator on V. Suppose the n-minimal polynomial of T is a product of first degree polynomials, i.e., the case in which each p_{ij} is of the form x − C^i_j. Then the range of E^i_j, for each T_i in T, is the null space W^i_j of (T_i − C^i_jI_i)^{r_{ij}}; this is true for each i, i = 1, 2, …, n. Put
D = D_1 ∪ D_2 ∪ … ∪ D_n = (C^1_1E^1_1 + C^1_2E^1_2 + … + C^1_{k_1}E^1_{k_1}) ∪ (C^2_1E^2_1 + C^2_2E^2_2 + … + C^2_{k_2}E^2_{k_2}) ∪ … ∪ (C^n_1E^n_1 + C^n_2E^n_2 + … + C^n_{k_n}E^n_{k_n}).
Clearly D is a n-diagonalizable operator, which we call the n-diagonalizable part of T. Let us consider N = T − D.
Now
T = (T_1E^1_1 + … + T_1E^1_{k_1}) ∪ (T_2E^2_1 + … + T_2E^2_{k_2}) ∪ … ∪ (T_nE^n_1 + … + T_nE^n_{k_n})
and
D = (C^1_1E^1_1 + … + C^1_{k_1}E^1_{k_1}) ∪ (C^2_1E^2_1 + … + C^2_{k_2}E^2_{k_2}) ∪ … ∪ (C^n_1E^n_1 + … + C^n_{k_n}E^n_{k_n}),
so
N = ((T_1 − C^1_1I_1)E^1_1 + … + (T_1 − C^1_{k_1}I_1)E^1_{k_1}) ∪ ((T_2 − C^2_1I_2)E^2_1 + … + (T_2 − C^2_{k_2}I_2)E^2_{k_2}) ∪ … ∪ ((T_n − C^n_1I_n)E^n_1 + … + (T_n − C^n_{k_n}I_n)E^n_{k_n}).
Clearly
N^2 = ((T_1 − C^1_1I_1)^2E^1_1 + … + (T_1 − C^1_{k_1}I_1)^2E^1_{k_1}) ∪ … ∪ ((T_n − C^n_1I_n)^2E^n_1 + … + (T_n − C^n_{k_n}I_n)^2E^n_{k_n}),
and in general
N^r = ((T_1 − C^1_1I_1)^rE^1_1 + … + (T_1 − C^1_{k_1}I_1)^rE^1_{k_1}) ∪ … ∪ ((T_n − C^n_1I_n)^rE^n_1 + … + (T_n − C^n_{k_n}I_n)^rE^n_{k_n}).
When r ≥ (r_1, r_2, …, r_n), i.e., r ≥ r_i for i = 1, 2, …, n (by misuse of notation), we have N^r = 0, because the n-operator (T − CI)^r will then be 0 ∪ 0 ∪ … ∪ 0, i.e., each (T_i − C^i_jI_i)^{r_{ij}} = 0 where r ≥ r_{ij} for j = 1, 2, …, k_i and i = 1, 2, …, n.
Now we define a nilpotent n-linear operator.

DEFINITION 2.31: Let N be a n-linear operator on V = V_1 ∪ V_2 ∪ … ∪ V_n. We say N is n-nilpotent if there exists some positive integer r, r ≥ r_i, i = 1, 2, …, n, such that N^r = 0.

Note: If N = N_1 ∪ N_2 ∪ … ∪ N_n, then N_i: V_i → V_i where V_i is of dimension n_i, n_i ≠ n_j if i ≠ j, true for i = 1, 2, …, n; so we may have N_i^{r_i} = 0, i = 1, 2, …, n, with r_i ≠ r_j possible for i ≠ j; hence the claim.
Now we give only a sketch of the proof; the reader is expected to complete the proof using this sketch.
THEOREM 2.32: Let T = T_1 ∪ T_2 ∪ … ∪ T_n be a n-linear operator on a finite dimensional n-vector space V = V_1 ∪ V_2 ∪ … ∪ V_n over the field F. Suppose the n-minimal polynomial for T decomposes over F into a product of linear polynomials. Then there is a n-diagonalizable n-operator D on V and a n-nilpotent n-operator N on V such that
I. T = D + N,
II. DN = ND.
The n-diagonalizable operator D and the n-nilpotent operator N are uniquely determined by (I) and (II), and each of them is a n-polynomial in T.
Proof: We give only a sketch of the proof; the interested reader can find a complete proof using this sketch.
Let V = V_1 ∪ V_2 ∪ … ∪ V_n be a finite (n_1, n_2, …, n_n) dimensional n-vector space and T = T_1 ∪ T_2 ∪ … ∪ T_n a n-linear operator on V such that T_i: V_i → V_i for each i = 1, 2, …, n. We can write each T_i = D_i + N_i, with a diagonalizable part D_i and a nilpotent part N_i, i = 1, 2, …, n. Thus
T = T_1 ∪ T_2 ∪ … ∪ T_n = (N_1 + D_1) ∪ (N_2 + D_2) ∪ … ∪ (N_n + D_n) = (N_1 ∪ N_2 ∪ … ∪ N_n) + (D_1 ∪ D_2 ∪ … ∪ D_n),
i.e., T = N + D where N = N_1 ∪ N_2 ∪ … ∪ N_n and D = D_1 ∪ D_2 ∪ … ∪ D_n. Since each D_i and N_i not only commute but are polynomials in T_i, we see that D and N commute and are n-polynomials in T, as the result is true for each i, i = 1, 2, …, n.
Suppose we also have T = D^1 + N^1, i.e.,
T = T_1 ∪ T_2 ∪ … ∪ T_n = (D^1_1 + N^1_1) ∪ (D^1_2 + N^1_2) ∪ … ∪ (D^1_n + N^1_n) = D^1 + N^1,
where D^1 is a n-diagonalizable part of T, i.e., each D^1_i is a diagonalizable part of T_i for i = 1, 2, …, n, and N^1 is a n-nilpotent part of T, i.e., each N^1_i is a nilpotent part of T_i for i = 1, 2, …, n.
Since each D^1_i and N^1_i commute with any polynomial in T_i, for i = 1, 2, …, n, the operators D^1 and N^1 n-commute with any n-polynomial in T; in particular they commute with D and N.
Now we have D + N = D^1 + N^1, i.e., D − D^1 = N^1 − N, and these four n-operators commute with each other. Since D and D^1 are n-diagonalizable and commute, D − D^1 is also n-diagonalizable. Since both N and N^1 are n-nilpotent and n-commute, the operator N^1 − N is also n-nilpotent. Since N^1 − N = D − D^1 and N^1 − N is n-nilpotent, the n-diagonalizable n-operator D − D^1 is also n-nilpotent.
Such an n-operator can only be the zero operator: since it is n-nilpotent, the n-minimal polynomial for it is of the form x^{r_1} ∪ x^{r_2} ∪ … ∪ x^{r_n} for appropriate r_i ≥ 1, i = 1, 2, …, n. But since the n-operator is n-diagonalizable, the n-minimal polynomial cannot have repeated n-roots; hence each r_i = 1 and the n-minimal polynomial is simply x ∪ x ∪ … ∪ x, which confirms that the operator is zero. Thus D = D^1 and N = N^1.
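Theorem 2.32 can be checked componentwise on a small example. The sketch below, assuming SymPy is available, extracts D_i and N_i for one non-diagonalizable component from its Jordan form; the matrix is illustrative, not from the text.

```python
from sympy import Matrix, zeros

# One non-diagonalizable component T_i; its minimal polynomial
# (x - 2)^2 splits into linear factors, so Theorem 2.32 applies.
T = Matrix([[2, 1], [0, 2]])

# jordan_form returns P, J with T = P J P^{-1}; the diagonal of J
# carries the diagonalizable part, the remainder the nilpotent part.
P, J = T.jordan_form()
D = P * Matrix.diag(*[J[k, k] for k in range(J.rows)]) * P.inv()
N = T - D

assert D + N == T              # condition I: T = D + N
assert D * N == N * D          # condition II: DN = ND
assert N ** 2 == zeros(2, 2)   # N is nilpotent
```

Repeating this on each component T_i and taking unions gives the n-diagonalizable part D = D_1 ∪ … ∪ D_n and the n-nilpotent part N = N_1 ∪ … ∪ N_n of T.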
The interested reader is expected to derive analogous results when F is the field of complex numbers.
Now we proceed to work with the n-characteristic values and n-characteristic vectors of a special n-linear operator on V.
Let V = V_1 ∪ V_2 ∪ … ∪ V_n be a n-vector space of finite dimension (n_1, n_2, …, n_n) over the field F. Let T = T_1 ∪ T_2 ∪ … ∪ T_n be a special n-linear operator on V, i.e., T_i: V_i → V_i for each i, i = 1, 2, …, n.
We say C = (C_1 ∪ C_2 ∪ … ∪ C_n) is a n-characteristic value of T if for some n-vector α = α_1 ∪ α_2 ∪ … ∪ α_n we have Tα = Cα, i.e., Tα = (T_1 ∪ T_2 ∪ … ∪ T_n)(α_1 ∪ α_2 ∪ … ∪ α_n) = (C_1 ∪ C_2 ∪ … ∪ C_n)(α_1 ∪ α_2 ∪ … ∪ α_n), i.e., Tα = T_1α_1 ∪ T_2α_2 ∪ … ∪ T_nα_n = C_1α_1 ∪ C_2α_2 ∪ … ∪ C_nα_n, i.e., each T_iα_i = C_iα_i for i = 1, 2, …, n.
Here α = α_1 ∪ α_2 ∪ … ∪ α_n is defined to be a n-characteristic vector of T. The collection of all α such that Tα = Cα is called the n-characteristic space associated with C.
We shall illustrate the working of the n-characteristic values and n-characteristic vectors associated with T.

Example 2.20: Let V = V_1 ∪ V_2 ∪ V_3 be a 3-vector space over Q where V_1 = Q × Q × Q, V_2 = Q × Q and V_3 = Q × Q × Q × Q are vector spaces over Q of dimensions 3, 2 and 4 respectively, i.e., V is of (3, 2, 4) dimension. Define T: V → V where the 3-matrix associated with T is given by A = A_1 ∪ A_2 ∪ A_3 with
A_1 =
[3 0 2]
[0 1 5]
[0 0 7],
A_2 =
[1 2]
[0 3],
A_3 =
[1 0 2 1]
[0 2 5 0]
[0 0 3 7]
[0 0 0 4].
Now we determine the 3-characteristic values associated with T. The 3-characteristic polynomial is
p = det(xI_1 − A_1) ∪ det(xI_2 − A_2) ∪ det(xI_3 − A_3),
with
xI_1 − A_1 =
[x−3   0   −2]
[0   x−1   −5]
[0    0   x−7],
xI_2 − A_2 =
[x−1  −2]
[0   x−3],
xI_3 − A_3 =
[x−1   0   −2   −1]
[0   x−2   −5    0]
[0    0   x−3   −7]
[0    0    0   x−4],
so that
p = (x − 3)(x − 1)(x − 7) ∪ (x − 1)(x − 3) ∪ (x − 1)(x − 2)(x − 3)(x − 4).
Thus the 3-characteristic values of A = A_1 ∪ A_2 ∪ A_3 are {3, 1, 7} ∪ {1, 3} ∪ {1, 2, 3, 4}. One can find the 3-characteristic values as in the case of usual vector spaces; their set theoretic union gives a 3-row mixed vector. Choosing one characteristic value from each component gives 3 × 2 × 4 = 24 choices of 3-characteristic values, namely {3} ∪ {1} ∪ {1}, {3} ∪ {1} ∪ {2}, {3} ∪ {1} ∪ {3}, {3} ∪ {1} ∪ {4} and so on, up to {7} ∪ {3} ∪ {4}.
Now, having seen the working of 3-characteristic values, we recall that in the case of a matrix A we say A is orthogonal if AA^t = I; further, A is anti orthogonal if AA^t = −I.
We now, for the first time, define the notion of n-orthogonal matrices and n-anti orthogonal matrices.
DEFINITION 2.32: Let A = (A_1 ∪ A_2 ∪ … ∪ A_n) be a n-matrix. Then
A^t = (A_1 ∪ … ∪ A_n)^t = A_1^t ∪ A_2^t ∪ … ∪ A_n^t and AA^t = A_1A_1^t ∪ A_2A_2^t ∪ … ∪ A_nA_n^t.
We say A is n-orthogonal if and only if AA^t = I_1 ∪ I_2 ∪ … ∪ I_n where each I_j is an identity matrix; i.e., if A = A_1 ∪ A_2 ∪ … ∪ A_n with each A_i an m_i × n_i matrix, i = 1, 2, …, n, then AA^t = I_1 ∪ I_2 ∪ … ∪ I_n where I_j is the m_j × m_j identity matrix, j = 1, 2, …, n. We say A is n-anti orthogonal if and only if AA^t = (−I_1) ∪ (−I_2) ∪ … ∪ (−I_n) where I_j is the m_j × m_j identity matrix; for instance, if
I =
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
then
−I =
[−1  0  0  0]
[ 0 −1  0  0]
[ 0  0 −1  0]
[ 0  0  0 −1].
We say A is n-semi orthogonal if AA^t = B_1 ∪ B_2 ∪ … ∪ B_n where some of the B_i are identity matrices and some are not; on similar lines we say A is n-semi anti orthogonal if in AA^t = C_1 ∪ C_2 ∪ … ∪ C_n some of the C_i are −I_i and some are not.
It is not a very difficult task for the reader to construct examples of these four types of n-matrices.
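A hypothetical checker for Definition 2.32 can classify a n-matrix by testing each component against A_iA_i^t = I; the function name and test matrices below are illustrative, not from the text.

```python
import numpy as np

def n_orthogonality_type(components):
    """Classify an n-matrix A = A1 U ... U An per Definition 2.32 by
    testing each component Ai against Ai Ai^T = I (or -I)."""
    kinds = []
    for A in components:
        G = A @ A.T
        I = np.eye(A.shape[0])
        if np.allclose(G, I):
            kinds.append("orthogonal")
        elif np.allclose(G, -I):
            kinds.append("anti orthogonal")
        else:
            kinds.append("neither")
    if all(k == "orthogonal" for k in kinds):
        return "n-orthogonal"
    if any(k == "orthogonal" for k in kinds):
        return "n-semi orthogonal"
    return "other"

R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation: R R^T = I
S = np.array([[1.0, 1.0], [0.0, 1.0]])    # shear: S S^T != I

assert n_orthogonality_type([R, R]) == "n-orthogonal"
assert n_orthogonality_type([R, S]) == "n-semi orthogonal"
```

Note that over the reals AA^t always has non-negative diagonal entries, so examples with AA^t = −I must be sought over other fields.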
Chapter Three

APPLICATIONS OF n-LINEAR ALGEBRA OF TYPE I
In this chapter we introduce the applications of n-linear algebras of type I. We first recall the notion of Markov bichains and indicate the applications of bivector spaces and linear bialgebras to Markov biprocesses. For this we have to first define the notion of a Markov biprocess and its implications for linear bialgebras / bivector spaces. We may call it a Markov biprocess or Markov bichain.
Suppose a physical or mathematical system is such that at any moment it occupies two of a finite number of states. (In the case of one of a finite number of states we apply Markov chains or the Markov process.) For example, consider an individual's emotional states such as happy, sad, etc.; suppose the system moves with time from one pair of states to another pair of states. Let us construct a schedule of observation times and a record of the states of the system at these times. If we find that the transition from one pair of states to another pair of states is not predetermined, but rather can only be specified in terms of certain probabilities depending on the previous history of the system, then the biprocess is called a stochastic biprocess. If in addition these transition probabilities depend only on the immediate history of the system, that is, if the state of the system at any observation depends only on its state at the immediately preceding observation, then the process is called a Markov biprocess or Markov bichain.
The bitransition probability p_ij = p^1_{i_1 j_1} ∪ p^2_{i_2 j_2} (i, j = 1, 2, …, k) is the probability that if the system is in state j = (j_1, j_2) at any observation, it will be in state i = (i_1, i_2) at the next observation. A transition bimatrix
P = [p_ij] = [p^1_{i_1 j_1}] ∪ [p^2_{i_2 j_2}]
is any square bimatrix with non-negative entries for which each bicolumn sum is 1 ∪ 1. A probability bivector is a column bivector with non-negative entries whose sum is 1 ∪ 1.
The probability bivectors are said to be the state bivectors of the Markov biprocess. If P = P_1 ∪ P_2 is the transition bimatrix of the Markov biprocess and x^(n) = x_1^(n) ∪ x_2^(n) is the state bivector at the nth observation, then x^(n+1) = Px^(n), and thus x_1^(n+1) ∪ x_2^(n+1) = P_1x_1^(n) ∪ P_2x_2^(n). Thus Markov bichains find all their applications in bivector spaces and linear bialgebras.
Now we proceed on to define the new notion of a Markov n-chain, n ≥ 2. Suppose a physical or mathematical system is such that at any time it can occupy a finite number of states; we view it as a stochastic biprocess or Markov bichain when we make the assumption that the system moves with time from one state to another, so that a schedule of observation times records the states of the system at these times. But when we tackle real world problems, even simple ones, the emotions of a person may be very unpredictable, depending largely on the situation, the mood of the person and his relation with others, so such a study cannot come under Markov chains. Even more complicated is the situation of the mood of a boss with subordinates, where the mood of a person varies with n persons and with varying emotions at a time; in such cases more than one emotion is experienced by a person, and such states cannot be included and recorded as a single next observation.
These changes, with several feelings, say at least n at a time (n > 2), will largely affect the transition n-matrix
P = P_1 ∪ … ∪ P_n = [p^1_{i_1 j_1}] ∪ … ∪ [p^n_{i_n j_n}]
with non-negative entries, which we will explain shortly. We indicate how n-vector spaces and n-linear algebras are used in a Markov n-process (n ≥ 2); when n = 2 the study is termed a Markov biprocess. We first define the Markov n-process and its implications for linear n-algebras and n-vector spaces; we may call it a Markov n-process or Markov n-chain.
Suppose a physical or mathematical system is such that at any moment it occupies n of a finite number of states (in the case of one of a finite number of states we apply Markov chains or the Markov process; in the case of two we apply Markov bichains or Markov biprocesses) — for example, individual emotional states such as happy, sad, cold, angry, etc. Suppose the system moves with time from one n-tuple of states to another n-tuple of states; let us construct a schedule of observation times and a record of the states of the system at these times. If we find that the transition from one n-tuple of states to another n-tuple of states is not predetermined, but rather can only be specified in terms of certain probabilities depending on the previous history of the system, then the n-process is called a stochastic n-process. If in addition these transition probabilities depend only on the immediate history of the system, that is, if the state of the system at any observation depends only on its state at the immediately preceding observation, then the process is called a Markov n-process or Markov n-chain.
The n-transition probability
p_ij = p^1_{i_1 j_1} ∪ p^2_{i_2 j_2} ∪ … ∪ p^n_{i_n j_n},
i, j = 1, 2, …, K, is the probability that if the system is in state j = (j_1, j_2, …, j_n) at any observation, it will be in state i = (i_1, i_2, …, i_n) at the next observation.
The transition matrix associated with it,
P = [p_ij] = [p^1_{i_1 j_1}] ∪ … ∪ [p^n_{i_n j_n}],
is a square n-matrix with non-negative entries for which each n-column sum is (1 ∪ … ∪ 1). A probability n-vector is a column n-vector with non-negative entries whose sum is 1 ∪ … ∪ 1.
The probability n-vectors are said to be the state n-vectors of the Markov n-process. If P = P_1 ∪ … ∪ P_n is the transition n-matrix of the Markov n-process and x^(m) = x_1^(m) ∪ … ∪ x_n^(m) is the state n-vector at the mth observation, then x^(m+1) = Px^(m), and thus x_1^(m+1) ∪ … ∪ x_n^(m+1) = P_1x_1^(m) ∪ … ∪ P_nx_n^(m). Thus Markov n-chains find all their applications in n-vector spaces and linear n-algebras (n-linear algebras).
Example 3.1: (Random Walk): A random walk by n persons on real lines, i.e., lines parallel to the x axis, is a Markov n-chain such that p^1_{j_1 k_1} ∪ … ∪ p^n_{j_n k_n} = 0 ∪ … ∪ 0 unless k_t = j_t − 1 or j_t + 1, t = 1, 2, …, n; transition is possible only to the neighbouring states, from j to j − 1 or j + 1. Here the state n-space is S = S_1 ∪ … ∪ S_n where S_i = {…, −3, −2, −1, 0, 1, 2, 3, …}, i = 1, 2, …, n.
The following theorem is direct.

THEOREM 3.1: The Markov n-chain {X^1_{m_1}; m_1 ≥ 0} ∪ … ∪ {X^n_{m_n}; m_n ≥ 0} is completely determined by the transition n-matrix P = P_1 ∪ … ∪ P_n and the initial n-distribution {p^1_{K_1}} ∪ … ∪ {p^n_{K_n}}, defined as
P_1[X^1_0 = K_1] ∪ … ∪ P_n[X^n_0 = K_n] = p_{K_1} ∪ … ∪ p_{K_n} ≥ 0 ∪ … ∪ 0
and
Σ_{K_1 ∈ S_1} p_{K_1} ∪ … ∪ Σ_{K_n ∈ S_n} p_{K_n} = 1 ∪ … ∪ 1.

The proof is similar to that for a Markov chain.
The n-vector u = (u^1_1, …, u^1_{n_1}) ∪ … ∪ (u^n_1, …, u^n_{n_n}) is called a probability n-vector if the components are non-negative and their sum is one.
The square n-matrix P = P_1 ∪ … ∪ P_n = (p^1_{i_1 j_1}) ∪ … ∪ (p^n_{i_n j_n}) is called a stochastic n-matrix if each row of each component is a probability vector, i.e., each element of P_i is non-negative and the sum of the elements in each row of P_i is one, for i = 1, 2, …, n.
We illustrate this by a simple example.
Example 3.2: Let P = P_1 ∪ P_2 ∪ P_3 with
P_1 =
[ 1    0    0 ]
[1/3  1/6  1/2]
[1/4   0   3/4],
P_2 =
[ 1    0 ]
[3/7  4/7],
P_3 =
[ 0   1/2   0   1/2]
[ 1    0    0    0 ]
[1/4  1/4  1/2   0 ]
[3/7  2/7  1/7  1/7]
be a stochastic 3-matrix.
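A quick numerical check that each component of the 3-matrix in Example 3.2 is row stochastic; the helper name is illustrative, and the entries are transcribed from the example.

```python
import numpy as np

def is_stochastic(P):
    """Every entry non-negative and every row summing to one."""
    return bool((P >= 0).all()) and np.allclose(P.sum(axis=1), 1.0)

# The three components of the stochastic 3-matrix of Example 3.2.
P1 = np.array([[1, 0, 0],
               [1/3, 1/6, 1/2],
               [1/4, 0, 3/4]])
P2 = np.array([[1, 0],
               [3/7, 4/7]])
P3 = np.array([[0, 1/2, 0, 1/2],
               [1, 0, 0, 0],
               [1/4, 1/4, 1/2, 0],
               [3/7, 2/7, 1/7, 1/7]])

assert all(is_stochastic(P) for P in (P1, P2, P3))
# A product of stochastic matrices stays stochastic.
assert is_stochastic(P3 @ P3)
```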
The transition n-matrix P of a Markov n-chain is a stochastic n-matrix. A stochastic n-matrix A = A_1 ∪ … ∪ A_n is said to be n-regular if all the entries of some power of each A_i, i.e., of A_i^{m_i}, are positive, the m_i positive integers, for every i, i = 1, 2, …, n; i.e., for some (m_1, …, m_n) ≥ (1, 1, …, 1) we have A^m = A_1^{m_1} ∪ … ∪ A_n^{m_n}, m = (m_1, …, m_n), with A_i^{m_i} > 0 for each i, so that we write A^m > (0 ∪ … ∪ 0). It is easily verified that if P = P_1 ∪ … ∪ P_n is a stochastic n-matrix, then P^m is also a stochastic n-matrix for all m ≥ (1, 1, …, 1). Is P a stochastic n-matrix if P^m is a stochastic n-matrix?
One can prove that (1, …, 1) is a n-eigen value of a stochastic n-matrix, i.e., if A = A_1 ∪ … ∪ A_n then |λI − A| = 0 ∪ … ∪ 0, i.e., |λ_1I_1 − A_1| ∪ … ∪ |λ_nI_n − A_n| = 0 ∪ 0 ∪ … ∪ 0, is satisfied by λ = (λ_1, …, λ_n) = (1, …, 1). We define n-independent trials analogously to independent trials: if P = P_1 ∪ … ∪ P_n and
P^m = P_1^m ∪ … ∪ P_n^m = P_1 ∪ … ∪ P_n = P
for all m ≥ (1, …, 1), where p^t_{i_t j_t} = p^t_{j_t} for t = 1, 2, …, n, i.e., all the rows of each P_t are the same, then we say P is an n-independent trial.
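The claim that (1, …, 1) is always a n-eigen value can be verified componentwise: in the row-stochastic convention each P_i maps the all-ones vector to itself. A small check in Python with NumPy (the matrices are illustrative, not from the text):

```python
import numpy as np

# Two illustrative row-stochastic components of a stochastic n-matrix.
P1 = np.array([[0.5, 0.5], [0.2, 0.8]])
P2 = np.array([[0.1, 0.6, 0.3], [0.4, 0.4, 0.2], [0.0, 0.5, 0.5]])

for P in (P1, P2):
    ones = np.ones(P.shape[0])
    # Rows sum to one, so the all-ones vector is an eigenvector
    # of each component with eigenvalue 1 ...
    assert np.allclose(P @ ones, ones)
    # ... equivalently, det(1*I - P) = 0 in each component.
    assert np.isclose(np.linalg.det(np.eye(P.shape[0]) - P), 0.0)
    # And each power of P is again stochastic.
    assert np.allclose((P @ P).sum(axis=1), 1.0)
```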
We can also define the notion of Bernoulli n-trials. We now depict the n-random walk with absorbing barriers. Let the possible n-states be (E^1_0, E^1_1, …, E^1_{K_1}) ∪ … ∪ (E^n_0, E^n_1, …, E^n_{K_n}). Consider the n-matrix of transition n-probabilities
P = P^1_{i_1 j_1} ∪ … ∪ P^n_{i_n j_n} =
[ 1    0    0   …   0 ]
[q_1   0   p_1  …   0 ]
[ 0   q_1   0  p_1  0 ]
[ 0    …   q_1  0  p_1]
[ 0    …    0   0   1 ]
∪ … ∪
[ 1    0    0   …   0 ]
[q_n   0   p_n  …   0 ]
[ 0   q_n   0  p_n  0 ]
[ 0    …   q_n  0  p_n]
[ 0    …    0   0   1 ],
where the rows and columns of the tth component are indexed by the states E^t_0, E^t_1, …, E^t_{K_t}. From each of the interior n-states
{E^1_1, …, E^1_{K_1 − 1}} ∪ … ∪ {E^n_1, …, E^n_{K_n − 1}},
n-transitions are possible to the right and left neighbours, with
(p_t)_{i_t, i_t + 1} = p_t, (p_t)_{i_t, i_t − 1} = q_t, t = 1, 2, …, n.
However no n-transition is possible from either E_0 = (E^1_0 ∪ … ∪ E^n_0) or E_K = (E^1_{K_1} ∪ … ∪ E^n_{K_n}) to any other n-state. This n-system may move from one n-state to another, but once E_0 or E_K is reached the n-system stays there permanently.
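One component of the absorbing-barrier transition matrix can be generated and checked as follows; the helper name and parameters are illustrative, not from the text.

```python
import numpy as np

def absorbing_walk(K, p):
    """Transition matrix of a random walk on the states 0, 1, ..., K
    with absorbing barriers: each interior state j moves to j + 1
    with probability p and to j - 1 with probability q = 1 - p,
    while states 0 and K are trapping."""
    q = 1.0 - p
    P = np.zeros((K + 1, K + 1))
    P[0, 0] = 1.0          # barrier E_0 absorbs
    P[K, K] = 1.0          # barrier E_K absorbs
    for j in range(1, K):
        P[j, j - 1] = q
        P[j, j + 1] = p
    return P

P = absorbing_walk(4, 0.6)
assert np.allclose(P.sum(axis=1), 1.0)        # row stochastic
assert P[0, 0] == 1.0 and P[4, 4] == 1.0      # trapping rows
# Once absorbed, the walk stays there: powers keep those rows fixed.
P100 = np.linalg.matrix_power(P, 100)
assert np.allclose(P100[0], [1, 0, 0, 0, 0])
```

Taking the union of n such matrices (one per component, with its own p_t, q_t and K_t) gives the n-matrix displayed above.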
Now we describe the random walk with reflecting barriers. Let P = P^1_{i_1 j_1} ∪ … ∪ P^n_{i_n j_n} be a n-matrix with
P =
[q_1  p_1   0    0   …   0 ]
[q_1   0   p_1   0   …   0 ]
[ 0   q_1   0   p_1  …   0 ]
[ 0    0    …   q_1  0  p_1]
[ 0    0    …    0  q_1 p_1]
∪ … ∪
[q_n  p_n   0    0   …   0 ]
[q_n   0   p_n   0   …   0 ]
[ 0   q_n   0   p_n  …   0 ]
[ 0    0    …   q_n  0  p_n]
[ 0    0    …    0  q_n p_n],
where p_t and q_t, t = 1, 2, …, n, are defined by
P^t_{i_t j_t} = P^t(X^t_{n_t} = j_t | X^t_{n_t − 1} = i_t) = p_t if j_t = i_t + 1; q_t if j_t = i_t − 1 (with q_t if j_t = i_t = 0 and p_t if j_t = i_t is the last state, at the reflecting barriers); 0 otherwise,
true for t = 1, 2, …, n.
It may be possible that p^{(2)}_{i_t j_t} = 0 and p^{(3)}_{i_t j_t} = 0 but p^{(n)}_{i_t j_t} > 0 for some n. We say the state j_t is accessible from the state i_t if p^{(n)}_{i_t j_t} > 0 for some n > 0; in notation i_t → j_t, i.e., i_t leads to j_t. If i_t → j_t and j_t → i_t, then i_t and j_t communicate, and we denote this by i_t ↔ j_t; if this happens for every t we say the states n-communicate. If only some of them communicate and others do not communicate, we say the n-system semi communicates. Here, for the walk above,
p^{(n_t)}_{i_t j_t} = q_t^{j_t} p_t^{j_t} for j_t = 0, 1, 2, …, i_t + n_t − 1; p_t^{j_t} for j_t = i_t + n_t; and 0 otherwise.
The state i_t is essential if i_t → j_t implies j_t → i_t, i.e., if any state j_t is accessible from i_t, then i_t is accessible from that state; this is true for t = 1, 2, …, n. Let ℑ = ℑ_1 ∪ … ∪ ℑ_n denote the set of all essential n-states, i.e., each ℑ_t denotes the set of all essential states, t = 1, 2, …, n. States that are not n-essential are called n-inessential. We have a semi essential state when only a few of the ℑ_t are essential; we call a semi essential state an m-essential state when m < n and only m out of the n states are essential, the rest, n − m, being inessential.
A Markov n-chain is called n-irreducible (or n-ergodic) if there is only one n-communicating class, i.e., all states n-communicate with each other, or every n-state can be reached from every other n-state.
A n-subset c = c_1 ∪ … ∪ c_n of S = S_1 ∪ … ∪ S_n is said to be n-closed (or n-transient) if it is impossible to leave c in one step, i.e., p_ij = 0 ∪ … ∪ 0, i.e., p^1_{i_1 j_1} ∪ … ∪ p^n_{i_n j_n} = 0 ∪ … ∪ 0 for all i = (i_1, …, i_n) ∈ c_1 ∪ … ∪ c_n and all j = (j_1, …, j_n) ∉ c, i.e., for all i_t ∈ c_t and all j_t ∉ c_t, t = 1, 2, …, n.
We say a n-subset c = c_1 ∪ … ∪ c_n of S = S_1 ∪ … ∪ S_n is semi n-closed (or semi n-transient) if it is impossible to leave only m of the c_t, 1 ≤ t ≤ n, m < n, in one step, i.e., p^t_{i_t j_t} = 0 for all i_t ∈ c_t and all j_t ∉ c_t for those m values of t. We call this also m-closed (m < n) or m-transient, m = 1, 2, …, n − 1. If m = n − 1 we call c hyper n-closed (or hyper n-transient).
A Markov n-chain is n-irreducible if the only n-closed set in S is S itself, i.e., there is no n-closed set other than the set of all n-states.
We say a Markov n-chain is semi irreducible or m-irreducible (m < n) if the closed sets in S = S_1 ∪ … ∪ S_n come from only m of the n components {S_1, …, S_n}, m < n. If m = n − 1 then we say the Markov n-chain is hyper n-irreducible.
A single n-state {K_1, …, K_n} forming a closed n-set is called n-absorbing (n-trapping), i.e., it is a n-state such that the n-system remains in that state once it enters it. Thus a n-state {K_1, …, K_n} is n-absorbing if the K_1th, …, K_nth rows of the transition n-matrix P = P_1 ∪ … ∪ P_n have 1 on the main n-diagonal and 0 elsewhere.
Example 3.3: Let P = P_1 ∪ P_2 ∪ P_3 ∪ P_4 be a transition 4-matrix in which the 4th row of P_1, the 3rd row of P_2, the 1st row of P_3 and the 6th row of P_4 each have 1 on the main diagonal and 0 elsewhere. [The explicit entries of the four stochastic component matrices are not recoverable from this copy.] Clearly the 4-absorbing state is (4, 3, 1, 6).
Several interesting results true in the case of Markov chains can be proved for Markov n-chains with appropriate changes and suitable modifications.
Now we briefly describe the method of spectral m-decomposition (m ≥ 2). Let P = P_1 ∪ … ∪ P_m be a m-matrix whose tth component P_t is N_t × N_t, with m sets of latent roots λ^1_1, …, λ^1_{N_1}; λ^2_1, …, λ^2_{N_2}; …; λ^m_1, …, λ^m_{N_m}, all distinct and simple, i.e., each set of latent roots {λ^t_1, …, λ^t_{N_t}} consists of distinct simple roots, for t = 1, 2, …, m. Then
(P_1 − λ^1_{i_1}I_1)U^1_{i_1} ∪ … ∪ (P_m − λ^m_{i_m}I_m)U^m_{i_m} = 0 ∪ … ∪ 0
for the column latent m-vectors U^1_{i_1} ∪ … ∪ U^m_{i_m}, and
V^1_{i_1}′(P_1 − λ^1_{i_1}I_1) ∪ … ∪ V^m_{i_m}′(P_m − λ^m_{i_m}I_m) = 0 ∪ … ∪ 0
for the row latent m-vectors V^1_{i_1} ∪ … ∪ V^m_{i_m}. The matrices
A^1_{i_1} ∪ … ∪ A^m_{i_m} = U^1_{i_1}V^1_{i_1}′ ∪ … ∪ U^m_{i_m}V^m_{i_m}′
are called the latent or spectral m-matrices associated with (λ^1_{i_1}, …, λ^m_{i_m}), i_t = 1, 2, …, N_t, t = 1, 2, …, m.
The following properties of A^1_{i_1} ∪ … ∪ A^m_{i_m} are well known:
(i) The A^1_{i_1} ∪ … ∪ A^m_{i_m} are m-idempotent, i.e., (A^1_{i_1} ∪ … ∪ A^m_{i_m})^2 = A^1_{i_1} ∪ … ∪ A^m_{i_m}, i.e., each (A^t_{i_t})^2 = A^t_{i_t}, t = 1, 2, …, m.
(ii) They are m-orthogonal, i.e., A^t_{i_t}A^t_{j_t} = 0 for i_t ≠ j_t, t = 1, 2, …, m.
(iii) They give a spectral m-decomposition
P_1 ∪ … ∪ P_m = Σ_{i_1=1}^{N_1} λ^1_{i_1}A^1_{i_1} ∪ … ∪ Σ_{i_m=1}^{N_m} λ^m_{i_m}A^m_{i_m}.
It follows from (i) to (iii) that
P^K = P_1^{K_1} ∪ … ∪ P_m^{K_m}
= (Σ_{i_1=1}^{N_1} λ^1_{i_1}A^1_{i_1})^{K_1} ∪ … ∪ (Σ_{i_m=1}^{N_m} λ^m_{i_m}A^m_{i_m})^{K_m}
= Σ_{i_1=1}^{N_1} (λ^1_{i_1})^{K_1}A^1_{i_1} ∪ … ∪ Σ_{i_m=1}^{N_m} (λ^m_{i_m})^{K_m}A^m_{i_m}
= Σ_{i_1=1}^{N_1} (λ^1_{i_1})^{K_1}U^1_{i_1}V^1_{i_1}′ ∪ … ∪ Σ_{i_m=1}^{N_m} (λ^m_{i_m})^{K_m}U^m_{i_m}V^m_{i_m}′.
Also we know that
P^K = UD^KU^{−1} = U_1D_1^{K_1}(U_1)^{−1} ∪ … ∪ U_mD_m^{K_m}(U_m)^{−1},
where U = {U^1_1, …, U^1_{N_1}} ∪ … ∪ {U^m_1, …, U^m_{N_m}} and
D = D_1 ∪ D_2 ∪ … ∪ D_m =
[λ^1_1   0   …   0  ]
[ 0    λ^1_2 …   0  ]
[ …               ]
[ 0     0   … λ^1_{N_1}]
∪ … ∪
[λ^m_1   0   …   0  ]
[ 0    λ^m_2 …   0  ]
[ …               ]
[ 0     0   … λ^m_{N_m}].
Since the m-latent m-vectors are determined uniquely only up to a multiplicative constant, we choose them such that
V^1_{i_1}′U^1_{i_1} ∪ … ∪ V^m_{i_m}′U^m_{i_m} = (1 ∪ … ∪ 1).
One can work out any m-power of P once the λ^t_{i_t} and A^t_{i_t}, t = 1, 2, …, m, are known. Even though we write P^K = P_1^{K_1} ∪ … ∪ P_m^{K_m}, we work with K = (K_1, …, K_m); when the working with any P_t is over, that tth component remains as it is and the calculations are performed for the rest of the components of P. With appropriate computer programming, simultaneous working is easy; moreover, in the present technologically advanced age one cannot think of computing one by one, and things do not occur one by one in many situations. Under these circumstances the adaptation of n-matrices plays a vital role, saving both time and money. Also a stage by stage comparison of the simultaneous occurrence of n-events is possible.
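For a single component with distinct simple latent roots, the spectral matrices A_i = U_iV_i′ and the power formula P^K = Σ_i λ_i^K A_i can be checked numerically. A sketch in Python with NumPy (the matrix is illustrative; the V_i′ are taken as the rows of U^{−1}, so V_i′U_i = 1 holds automatically):

```python
import numpy as np

def spectral_parts(P):
    """Latent roots and spectral matrices A_i = U_i V_i' for one
    component: columns of U are right (column) latent vectors and
    the rows of U^{-1} serve as the left (row) latent vectors V_i',
    normalised so that V_i' U_i = 1."""
    lam, U = np.linalg.eig(P)
    V = np.linalg.inv(U)                    # rows are the V_i'
    A = [np.outer(U[:, i], V[i, :]) for i in range(len(lam))]
    return lam, A

# One component with distinct, simple latent roots (1 and 3).
P = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, A = spectral_parts(P)

assert np.allclose(A[0] @ A[0], A[0])               # (i) idempotent
assert np.allclose(A[0] @ A[1], np.zeros((2, 2)))   # (ii) orthogonal
assert np.allclose(lam[0] * A[0] + lam[1] * A[1], P)  # (iii)
# Power formula P^K = sum_i lam_i^K A_i, here for K = 5.
assert np.allclose(lam[0]**5 * A[0] + lam[1]**5 * A[1],
                   np.linalg.matrix_power(P, 5))
```

Running this per component and taking unions realises the spectral m-decomposition componentwise.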
Matrix theory has been very successful in describing the interrelations between prices, outputs and demands in an economic model. Here we discuss some simple models based on the ideas of the Nobel laureate Wassily Leontief. The two types of models discussed are the closed or input-output model and the open or production model, each of which assumes some economic parameters which describe the interrelations between the industries in the economy under consideration. Using matrix theory we evaluate certain parameters.
The basic equations of the input-output model are the following:
[a_11 a_12 … a_1n] [p_1]   [p_1]
[a_21 a_22 … a_2n] [p_2] = [p_2]
[ …    …  …   … ] [ … ]   [ … ]
[a_n1 a_n2 … a_nn] [p_n]   [p_n]
where each column sum of the coefficient matrix is one:
i. p_i ≥ 0, i = 1, 2, …, n;
ii. a_ij ≥ 0, i, j = 1, 2, …, n;
iii. a_1j + a_2j + … + a_nj = 1 for j = 1, 2, …, n.
Here
p = (p_1, p_2, …, p_n)^t
is the price vector, A = (a_ij) is called the input-output matrix, and Ap = p, that is, (I − A)p = 0.
If A is an exchange matrix, then Ap = p always has a nontrivial solution p whose entries are nonnegative. Let A be an exchange matrix such that, for some positive integer m, all of the entries of A^m are positive. Then there is exactly one linearly independent solution of (I – A)p = 0, and it may be chosen such that all of its entries are positive.

Leontief open production model
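The closed-model computation above can be checked numerically. The following is a minimal sketch (the function name and the 3-industry exchange matrix are invented for illustration, not taken from the text): the price vector is an eigenvector of A for the eigenvalue 1, which always exists because the columns of an exchange matrix sum to one.

```python
import numpy as np

def closed_model_prices(A):
    """Nonnegative price vector p with Ap = p for an exchange matrix A
    (nonnegative entries, every column summing to one)."""
    w, V = np.linalg.eig(A)
    # Eigenvalue 1 always exists because the columns of A sum to one.
    p = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()   # normalise so the prices sum to one

# Hypothetical 3-industry exchange matrix (each column sums to 1).
A = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.4, 0.1],
              [0.4, 0.3, 0.4]])
p = closed_model_prices(A)
# p satisfies (I - A) p = 0 and has nonnegative entries.
```

Since this A has all entries positive, the uniqueness statement applies and the computed p is the (normalised) unique positive solution.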
In contrast with the closed model in which the outputs of k
industries are distributed only among themselves, the open
model attempts to satisfy an outside demand for the outputs.
Portions of these outputs may still be distributed among the industries themselves to keep them operating, but there is to be some excess, some net production, with which to satisfy the outside demand. In the closed model the outputs of the industries were fixed, and our objective was to determine the prices for these outputs so that the equilibrium condition, that expenditures equal incomes, was satisfied.
xi = monetary value of the total output of the ith industry.
di = monetary value of the output of the ith industry needed to
satisfy the outside demand.
σij = monetary value of the output of the ith industry needed by
the jth industry to produce one unit of monetary value of its own
output.
With these quantities we define the production vector

x = (x_1, x_2, …, x_k)^T,

the demand vector

d = (d_1, d_2, …, d_k)^T,

and the consumption matrix

C = (σ_ij), a k × k matrix.
By their nature we have
x ≥ 0, d ≥ 0 and C ≥ 0.
From the definition of σij and xj it can be seen that the quantity
σi1 x1 + σi2 x2 +…+ σik xk
is the value of the output of the ith industry needed by all k
industries to produce a total output specified by the production
vector x.
Since this quantity is simply the ith entry of the column vector
Cx, we can further say that the ith entry of the column vector x –
Cx is the value of the excess output of the ith industry available
to satisfy the outside demand. The value of the outside demand for the output of the ith industry is the ith entry of the demand vector d; consequently, we are led to the following equation:
x – Cx = d or
(I – C) x = d
for the demand to be exactly met without any surpluses or
shortages. Thus, given C and d, our objective is to find a
production vector x ≥ 0 which satisfies the equation (I – C)x =
d.
A consumption matrix C is said to be productive if (I – C)^{-1} exists and (I – C)^{-1} ≥ 0.
A consumption matrix C is productive if and only if there is
some production vector x ≥ 0 such that x > Cx.
A consumption matrix is productive if each of its row sums
is less than one. A consumption matrix is productive if each of
its column sums is less than one.
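As a numerical sketch of the open model (the two-industry numbers and function names are invented for illustration), finding the production vector amounts to solving the linear system (I – C)x = d, and the sufficient productivity conditions just stated are easy to test:

```python
import numpy as np

def is_productive(C):
    """Sufficient test from the text: C is productive when every row
    sum, or every column sum, is below one."""
    return C.sum(axis=1).max() < 1 or C.sum(axis=0).max() < 1

def open_model_output(C, d):
    """Production vector x solving (I - C) x = d exactly."""
    return np.linalg.solve(np.eye(C.shape[0]) - C, d)

# Hypothetical 2-industry consumption matrix and outside demand.
C = np.array([[0.1, 0.3],
              [0.2, 0.4]])
d = np.array([50.0, 30.0])
x = open_model_output(C, d)   # both row sums of C are < 1
```

Because C is productive, (I – C)^{-1} ≥ 0 and the solution x is automatically nonnegative for any nonnegative demand d.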
Now we formulate the Smarandache analogue for this; at the outset we justify why an analogue of these two models is needed. Clearly, in the Leontief closed input-output model, p_i, the price charged by the ith industry for its total output, need not in reality always be a positive quantity: owing to competition to capture the market, the price may be fixed at a loss, or the demand for the product may have fallen so badly that the industry charges far less than its real value just to market it.
Similarly, a_ij ≥ 0 need not always hold. Thus in the Smarandache Leontief closed (input-output) model (S-Leontief closed (input-output) model) we do not demand p_i ≥ 0; p_i can be negative. Also, in the matrix A = (a_ij) we permit a_1j + a_2j + … + a_nj ≠ 1, so that the a_ij's may be both positive and negative. The only adjustment is that (I – A)p = 0 may then have more than one linearly independent solution instead of exactly one, and we will have to choose only the best solution.
In such complicated real-world problems we may not, in practice, have so nice a situation, so we work only with the best solution.
On similar lines we formulate the Smarandache Leontief open model (S-Leontief open model) by relaxing the conditions x ≥ 0, d ≥ 0 and C ≥ 0: we allow x ≤ 0, or d ≤ 0, and/or C ≤ 0. For in the opinion of the author the monetary total output may not in reality always be a positive quantity for all industries, and similar arguments hold for the d_i's and the c_ij's. When we permit negative values, the corresponding production vector is redefined as the Smarandache production vector (S-production vector), the demand vector as the Smarandache demand vector (S-demand vector), and the consumption matrix as the Smarandache consumption matrix (S-consumption matrix). So when we work under these assumptions we may have different sets of conditions.
We say C is productive if (I – C)^{-1} ≥ 0, and non-productive or not up to satisfaction if (I – C)^{-1} < 0. The reader is expected to construct real models by taking data from several industries; thus one can develop several other properties in the case of different models.
Matrix theory has been very successful in describing the interrelations between prices, outputs and demands. Now when we use n-matrices in the input-output model we can, under the same setup, study the price vectors of all the goods manufactured by an industry simultaneously. For in the present modernized world no industry thrives on the production of a single good. For instance, take Godrej Industries: it manufactures several goods, from simple locks to bureaus. If it wanted to apply the input-output model to each and every good, it would have to work several times with the exchange matrix; but with the introduction of n-mixed matrices we can use an n-matrix as the input-output n-model to study the interrelations between the prices, outputs and demands of each and every good manufactured by that industry. Suppose the industry manufactures n goods, n ≥ 2.
Thus A = A1 ∪ … ∪ An is an exchange n-matrix, where each Ai is an ni × ni matrix, i = 1, 2, …, n. The basic n-equations of the input-output model are the following:
A_1 p^1 ∪ A_2 p^2 ∪ … ∪ A_n p^n = p^1 ∪ p^2 ∪ … ∪ p^n,

where A_t = (a^t_{i_t j_t}) is the n_t × n_t coefficient matrix and p^t = (p^t_1, p^t_2, …, p^t_{n_t})^T for t = 1, 2, …, n. Each column sum of each coefficient matrix of the n-matrix is one, i.e. the n-column sums equal (1 ∪ … ∪ 1):

(i) p^t_{i_t} ≥ 0; i_t = 1, 2, …, n_t and t = 1, 2, …, n.
(ii) a^t_{i_t j_t} ≥ 0; i_t, j_t = 1, 2, …, n_t and t = 1, 2, …, n.
(iii) a^t_{1 j_t} + a^t_{2 j_t} + … + a^t_{n_t j_t} = 1 for j_t = 1, 2, …, n_t and t = 1, 2, …, n.

Here

p = p^1 ∪ … ∪ p^n = (p^1_1, p^1_2, …, p^1_{n_1})^T ∪ (p^2_1, p^2_2, …, p^2_{n_2})^T ∪ … ∪ (p^n_1, p^n_2, …, p^n_{n_n})^T

is the price n-vector of the n goods.
A = A_1 ∪ … ∪ A_n = (a^1_{i_1 j_1}) ∪ … ∪ (a^n_{i_n j_n})

is called the input-output n-matrix. Ap = p, that is, (I – A)p = 0 ∪ … ∪ 0, i.e.

(I_1 – A_1)p^1 ∪ … ∪ (I_n – A_n)p^n = 0 ∪ … ∪ 0.
Thus if A is an exchange n-matrix, then Ap = p always has a nontrivial n-solution p = p^1 ∪ … ∪ p^n whose entries are nonnegative. Let A be an exchange n-matrix such that, for some n-tuple of positive integers (m_1, …, m_n), all the entries of A^m = A_1^{m_1} ∪ … ∪ A_n^{m_n} are positive. Then there is exactly one linearly n-independent solution of (I – A)p = 0 ∪ … ∪ 0, and it may be chosen such that all of its entries are positive, as in the Leontief production model.
Thus the model provides simultaneously the price n-vector, i.e. the price vector of each of the n goods. When n = 1 the structure corresponds to the Leontief input-output model; when n = 2 we get the Leontief economic bi-model. This n-model is useful when an industry manufactures more than one good; it not only saves time and money but also permits a stage-by-stage comparison of the price n-vector p = p^1 ∪ … ∪ p^n.
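Continuing the earlier numerical sketch (with invented component matrices; this is our illustration, not the authors' computation), the closed n-model simply applies the same eigenvector calculation to every component of the exchange n-matrix in one pass:

```python
import numpy as np

def n_closed_model_prices(exchange_matrices):
    """One closed-model price vector per component of the exchange
    n-matrix A = A1 ∪ ... ∪ An, computed in a single pass."""
    prices = []
    for A_t in exchange_matrices:
        w, V = np.linalg.eig(A_t)
        # eigenvalue 1 exists: every column of A_t sums to one
        p_t = np.real(V[:, np.argmin(np.abs(w - 1.0))])
        prices.append(p_t / p_t.sum())
    return tuple(prices)

# A hypothetical exchange 2-matrix: one 2x2 and one 3x3 component,
# i.e. the industry manufactures two goods with different numbers
# of participating industries.
A = (np.array([[0.6, 0.2],
               [0.4, 0.8]]),
     np.array([[0.2, 0.3, 0.5],
               [0.4, 0.4, 0.1],
               [0.4, 0.3, 0.4]]))
p = n_closed_model_prices(A)  # p = p1 ∪ p2, one price vector per good
```

Each component price vector satisfies (I_t – A_t)p^t = 0, so the whole n-solution p = p^1 ∪ p^2 is obtained at once.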
Now we proceed onto describe the S-Leontief open n-model
using n-matrices.
In reality we may not always have the exchange n-matrix A = A_1 ∪ … ∪ A_n = (a^1_{i_1 j_1}) ∪ … ∪ (a^n_{i_n j_n}) with a^t_{i_t j_t} ≥ 0, for the entries can be both positive and negative. Thus in the S-Leontief closed (input-output) n-model we do not demand p^t_{i_t} ≥ 0; p^t_{i_t} can be negative. Also, in the n-matrix A = A_1 ∪ … ∪ A_n = (a^1_{i_1 j_1}) ∪ … ∪ (a^n_{i_n j_n}) we permit a^t_{1 j_t} + … + a^t_{n_t j_t} ≠ 1 for every t = 1, 2, …, n; i.e. the a^t_{i_t j_t} may be both positive and negative. The only adjustment is that (I – A)p = 0 ∪ … ∪ 0 may have more than one n-linearly independent solution, and we will have to choose only the best solution, the one most helpful to the economy of the nation. The best solution by no means should favour high prices; rather it should give a medium price with the most satisfactory outputs, best catering to the demands, as this is an economic n-model.
So n-matrices will be highly helpful. Out of one set of solutions, which will have n components associated with the exchange n-matrix A = A_1 ∪ … ∪ A_n, we have to pick from the nontrivial solution p = p^1 ∪ … ∪ p^n the best-suited p^i's, then once again find a p′ = p′^1 ∪ … ∪ p′^n in which the p^i's already estimated from the earlier p remain zero, choose the best p′^j's for the solution p′, and so on. The final p = p^1 ∪ … ∪ p^n will be filled with the best p^i's, p′^j's, and so on.
Thus the solution so obtained would be the best-suited solution for the economic model. The difference between the Leontief closed (input-output) n-model and the S-Leontief closed (input-output) economic n-model is that in the Leontief model there is only one independent solution, whereas in the S-Leontief closed input-output economic n-model we can choose the best solution from a set of solutions. That best solution may vary from person to person, for what is best for one may not be best for another, especially when it describes the interrelations between prices, outputs and demands in an economic n-model.
Now we briefly describe the Leontief open production n-model. In contrast with the Leontief closed n-model, in which the outputs of the n-tuple of industries, say (K_1, …, K_n), are distributed only among themselves, the open n-model attempts to satisfy an outside demand for the n-outputs, for i = 1, 2, …, n. Portions of these n-outputs may still be distributed among the (K_1, …, K_n) sets of industries themselves to keep them operating, but there is to be some excess, some net production, with which to satisfy the outside demand. In the closed n-model the n-outputs of the industries were fixed, and the objective was to determine the n-prices for these n-outputs so that the equilibrium condition, that expenditures equal income, was satisfied.
x^t_{i_t} = monetary value of the total output of the i_t-th industry of the t-th unit; that is, we have

K_1 = industries in the first unit, denoted by c_1,
K_2 = industries in the second unit, denoted by c_2,
…,
K_t = industries in the t-th unit, denoted by c_t,
and so on up to
K_n = industries in the n-th unit, denoted by c_n.

d^t_{i_t} = monetary value of the output of the i_t-th industry needed to satisfy the outside demand.

σ^t_{i_t j_t} = monetary value of the output of the i_t-th industry needed by the j_t-th industry to produce one unit of monetary value of its own output.
This is true for every t, t = 1, 2, …, n. With these quantities we define the n-production vector, which is an n-vector,
x = x^1 ∪ … ∪ x^n = (x^1_1, …, x^1_{K_1})^T ∪ (x^2_1, …, x^2_{K_2})^T ∪ … ∪ (x^n_1, …, x^n_{K_n})^T,

the n-demand vector, which is an n-vector,

d = d^1 ∪ … ∪ d^n = (d^1_1, …, d^1_{K_1})^T ∪ … ∪ (d^n_1, …, d^n_{K_n})^T,

and the n-consumption matrix, which is an n-matrix,

c = c_1 ∪ … ∪ c_n = (σ^1_{i_1 j_1}) ∪ … ∪ (σ^n_{i_n j_n}),

where c_t = (σ^t_{i_t j_t}) is a K_t × K_t matrix, t = 1, 2, …, n.
We have x ≥ 0 ∪ … ∪ 0, i.e. x = x^1 ∪ … ∪ x^n ≥ 0 ∪ … ∪ 0; d ≥ 0 ∪ … ∪ 0, i.e. d = d^1 ∪ … ∪ d^n ≥ 0 ∪ … ∪ 0; and c ≥ 0 ∪ … ∪ 0, i.e. c = c_1 ∪ c_2 ∪ … ∪ c_n ≥ 0 ∪ … ∪ 0.
From the definition of σ^t_{i_t j_t} and x^t_{j_t} it can be seen that the quantity

σ^t_{i_t 1} x^t_1 + σ^t_{i_t 2} x^t_2 + … + σ^t_{i_t K_t} x^t_{K_t}

is the value of the output of the i_t-th industry of the t-th unit needed by all K_t industries to produce the total output specified by the production component vector x^t = (x^t_1, …, x^t_{K_t})^T of the n-vector x = x^1 ∪ x^2 ∪ … ∪ x^n. This is true for each t, t = 1, 2, …, n. Since this quantity is simply the i_t-th entry of the t-th component column vector c_t x^t, we can further say that the i_t-th entry of the column vector x^t – c_t x^t is the value of the excess output of the i_t-th industry available to satisfy the outside demand, for t = 1, 2, …, n. Thus the excess n-output of the (i_1, …, i_n) industries is given by the n-column vector

x – cx = (x^1 – c_1 x^1) ∪ … ∪ (x^n – c_n x^n).

The value of the outside demand for the n-output of the (i_1, …, i_n) industries is the corresponding entry of the demand n-vector d = d^1 ∪ … ∪ d^n. Consequently, we are led to the following equation:

x – cx = (x^1 – c_1 x^1) ∪ … ∪ (x^n – c_n x^n) = d^1 ∪ … ∪ d^n.
That is, (I – c)x = d, i.e.

(I_1 – c_1)x^1 ∪ … ∪ (I_n – c_n)x^n = d^1 ∪ … ∪ d^n,

for the demand to be exactly met without any surpluses or shortages. Thus, given c and d, our objective is to find a production n-vector x = x^1 ∪ … ∪ x^n ≥ 0 ∪ … ∪ 0 which satisfies the n-equation (I – c)x = d, i.e. (I_1 – c_1)x^1 ∪ … ∪ (I_n – c_n)x^n = d^1 ∪ … ∪ d^n.
The consumption n-matrix c = c_1 ∪ … ∪ c_n is said to be n-productive if (I – c)^{-1} = (I_1 – c_1)^{-1} ∪ … ∪ (I_n – c_n)^{-1} exists and (I – c)^{-1} ≥ 0 ∪ … ∪ 0. A consumption n-matrix c = c_1 ∪ … ∪ c_n is n-productive if and only if there is some production n-vector x = x^1 ∪ … ∪ x^n ≥ 0 ∪ … ∪ 0 such that x > cx, i.e. x^1 ∪ … ∪ x^n > c_1x^1 ∪ … ∪ c_nx^n. A consumption n-matrix is n-productive if each of its n-row sums is less than one; likewise it is n-productive if each of its column sums is less than one.
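A sketch of the open n-model computation (the component sizes and numbers are invented for illustration): each component equation (I_t – c_t)x^t = d^t is solved independently, which is exactly what permits the simultaneous, stage-by-stage treatment the text emphasises.

```python
import numpy as np

def n_open_model_output(consumptions, demands):
    """Solve (I_t - c_t) x^t = d^t for every component of the
    consumption n-matrix c = c1 ∪ ... ∪ cn in one pass."""
    return tuple(np.linalg.solve(np.eye(c_t.shape[0]) - c_t, d_t)
                 for c_t, d_t in zip(consumptions, demands))

# Hypothetical consumption 2-matrix (components of different sizes)
# and the matching demand 2-vector; all row sums are below one,
# so each component is productive.
c = (np.array([[0.1, 0.3],
               [0.2, 0.4]]),
     np.array([[0.2, 0.1, 0.0],
               [0.1, 0.3, 0.2],
               [0.0, 0.2, 0.1]]))
d = (np.array([50.0, 30.0]),
     np.array([10.0, 20.0, 15.0]))
x = n_open_model_output(c, d)   # x = x^1 ∪ x^2, one production vector per unit
```

Since each c_t here is productive, every (I_t – c_t)^{-1} is nonnegative and the resulting production n-vector is nonnegative componentwise.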
Now we formulate the Smarandache analogue for this; at the outset we justify why an analogue of the open or production n-model is needed. In the S-Leontief open n-model we also allow x ≤ 0, or d ≤ 0, and/or c ≤ 0. For in the opinion of the author the monetary total output may not in reality always be a positive quantity for all industries, and similar arguments hold for the d^t_i's and the c^t_{ij}'s.
When we permit negative values, the corresponding production n-vector is redefined as the S-production n-vector, the demand n-vector as the S-demand n-vector, and the consumption n-matrix as the S-consumption n-matrix. Under these assumptions we may have different sets of conditions. We say c is n-productive if (I – c)^{-1} > 0, and non-n-productive or not up to satisfaction if (I – c)^{-1} < 0.
We have now given some applications of these n-matrices to industrial problems. Finally, it is pertinent to mention that in the consumption n-matrices a particular industry, or several industries, can appear in more than one consumption matrix. In this situation only the open Leontief n-model will serve the purpose. We can also study the performance of such industries which lie in several groups, i.e. in several c_i's. One can simultaneously study the group in which an industry has its best performance and the group in which it has its worst performance. In such situations only this model is handy.
Chapter Four
SUGGESTED PROBLEMS
In this chapter we suggest some problems for the reader. Solving these problems will be a great help in understanding the notions given in this book.
1. Find all p-subspaces of the n-vector space V = V1 ∪ V2 ∪ V3 ∪ V4, where n = 4 and p ≤ 4, over Q, with

V1 = {2 × 3 matrices (a b e / c d f) with a, b, c, d, e, f ∈ Q},

V2 = Q × Q × Q × Q over Q,

V3 = {polynomials in Q[x] of degree less than or equal to 6 with coefficients from Q}, and

V4 = {2 × 4 matrices (a b c d / e f g h) with a, b, …, g, h ∈ Q}.

What is the 4-dimension of V? Find a 4-basis of V.
2. Let V = V1 ∪ V2 ∪ V3 and W = W1 ∪ W2 ∪ W3 ∪ W4 be a 3-vector space and a 4-vector space over the field Q, of 3-dimension (3, 2, 4) and 4-dimension (5, 3, 4, 2) respectively. Find a 3-linear transformation from V to W. Also find a shrinking 3-linear transformation from V into W.
3. Let V = V1 ∪ V2 ∪ V3 and W = W1 ∪ W2 ∪ W3 be 3-vector spaces of dimensions (4, 2, 3) and (3, 5, 4) respectively, defined over Q. Find a 3-linear transformation from V to W. What is the 3-dimension of the 3-vector space of all 3-linear transformations from V into W?
4. Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space defined over Q of dimension (3, 4, 2, 1). Give a 4-linear operator T on V. Verify: 4-rank T + 4-nullity T = 4-dim V = (3, 4, 2, 1).
5. Define T : V → W, a 4-linear transformation, where V = V1 ∪ V2 ∪ V3 ∪ V4 and W = W1 ∪ W2 ∪ W3 ∪ W4 have 4-dimensions (3, 2, 4, 5) and (4, 3, 5, 2) respectively, such that 4-ker T is a 4-dimensional subspace of V. Verify 4-rank T + 4-nullity T = 4-dim V = (3, 2, 4, 5).
6. Explicitly describe the n-vector space of n-linear
transformations Ln (V,W) of V = V1 ∪ V2 ∪ V3 into W =
W1 ∪ W2 ∪ W3 ∪ W4 over Q of 3-dimension (3, 2, 4) and
4-dimension (4, 3, 2, 5) respectively.
7. What is the n-dimension of Ln (V, W) given in problem 6?
8. For T = T1 ∪ T2 ∪ T3 defined for the V and W given in problem 6, let T1 : V1 → W3, T1(x, y, z) = (x + y, y + z) for all x, y, z ∈ V1; T2 : V2 → W2 defined by T2(x1, y1) = (x1 + y1, 2y1, y1) for all x1, y1 ∈ V2; and T3 : V3 → W4 defined by T3(a, b, c, d) = (a + b, b + c, c + d, d + a, a + b + d) for all a, b, c, d ∈ V3. Prove 3-rank T + 3-nullity T = 3-dim V = (3, 2, 4).
9. Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q, where V1 = Q × Q × Q, V2 = Q × Q × Q × Q, V3 = Q × Q and V4 = Q × Q × Q × Q × Q. Let T^j = T^j_1 ∪ T^j_2 ∪ T^j_3 ∪ T^j_4 : V → V with T^j_i : Vi → Vi, i = 1, 2, 3, 4. Define two distinct 4-transformations T^1 and T^2 and find T^1 o T^2 and T^2 o T^1.
10. Give an example of a special linear 6-transformation T = T1 ∪ T2 ∪ ... ∪ T6 of V into W, where V and W are 6-vector spaces of the same 6-dimension.
11. Let T : V → W where 3-dim V = (3, 7, 8) and 3-dim W = (8, 3, 7). Give an example of such a T and find T^{-1}. Define a T that is only a 3-linear transformation, for which T^{-1} cannot be found.
12. Derive for an n-vector space the Gram-Schmidt n-orthogonalization process.
13. Prove every finite n-dimensional inner product n-space has
an n-orthonormal basis.
14. Give an example of a 4-orthogonal matrix.
15. Give an example of a 5-antiorthogonal matrix.
16. Give an example of a 7-semi orthogonal matrix.
17. Give an example of a 5-semi antiorthogonal matrix.
18. Is

A = [3 1 0 2; 1 1 6 1; 0 2 0 1; 1 0 5 0] ∪ [1 1 1 1 0; 2 0 0 2 1; 0 1 2 3 4; 5 6 7 0 12] ∪ [3 1 8; 0 1 1; 1 0 1; 0 1 4; -1 0 1; 1 1 0]

(rows separated by semicolons) a 3-semi orthogonal 3-matrix?
19. Find the 4-eigen values and 4-eigen vectors of

A = A1 ∪ Α2 ∪ Α3 ∪ Α4 = [5 1; 0 1] ∪ [3 0 1; 0 1 4; 0 0 5] ∪ [3 0 1 0; 0 4 1 3; 0 5 0 1; 1 2 1 0] ∪ [0 1 2 3 6; 0 2 1 0 2; 0 0 2 1 1; 0 0 1 0 0; 0 0 0 0 5]

(rows separated by semicolons). Find the 4-minimal polynomial and the 4-characteristic polynomial associated with A. Is A a diagonalizable transformation? Justify your claim.
20. Give an example of 5-linear transformation on V = V1 ∪ V2
∪ ... ∪ V5 which is not a 5-linear operator on V.
21. Let V = V1 ∪ V2 ∪ V3 be a 3-vector space over the field Q of finite dimension (5, 3, 2). Give a special 3-linear operator on V. Give a 3-linear transformation on V which is not a special linear operator on V.
22. Define a 3-inner product on V given in the above problem and construct a normal 3-linear operator T on V such that T*T = TT*.
23. Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space of (3, 5, 2, 4) dimension over Q. Find a 4-linear operator T on V so that the 4-minimal polynomial of T is the same as the 4-characteristic polynomial of T. Give a 4-linear operator U on V so that the 4-minimal polynomial is different from the 4-characteristic polynomial.
24. Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space of (2, 3, 4, 5, 6) dimension over Q. Construct a 5-linear operator T on V so that T is 5-diagonalizable.
25. Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q. Define a suitable T and find the 4-monic generators of the 4-ideals of polynomials over Q which 4-annihilate T. Prove or disprove that every 4-linear operator T on V is 4-annihilated by some nonzero polynomial.
26. State and prove the Cayley Hamilton theorem for an n-linear operator on an n-vector space V.
27. Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space over Q of (2, 4, 6, 3, 5) dimension. Give a 5-basis of V so that the Cayley Hamilton Theorem is true. Is the Cayley Hamilton Theorem true for every 5-basis of V? Justify your claim.
28. Given V = V1 ∪ V2 ∪ V3 ∪ V4, a 4-vector space over Q of dimension (3, 7, 4, 2), construct T, a 4-linear operator on V, so that V has a 4-subspace 4-invariant under T. Does V have a 4-linear operator T and a non-trivial 4-subspace W so that W is 4-invariant under T? Justify your answer.
29. Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a 5-vector space of (2, 4, 5, 3, 7) dimension over Q. Construct a 5-linear operator T on V so that the 5-minimal polynomial associated with T factors linearly. Find a T on V so that the 5-minimal polynomial does not factor linearly over Q.
30. Let V = V1 ∪ V2 ∪ V3 be a 3-vector space of (2, 4, 3)
dimension over Q. Find L3 (V, V) the set of all 3-linear
transformations on V. Suppose L3S (V, V) is the set of all
special 3-linear transformations on V.
a. Prove L3S (V, V) ⊆ L3(V, V).
b. What is the 3-dimension of L3 (V, V)?
c. What is the 3-dimension of L3S (V, V)?
d. Find a set of 3-orthogonal 3 basis for L3S (V, V).
e. Find a set of 3-orthonormal 3-bases for L3 (V, V).
f. Find a T : V → V, T only a 3-linear transformation, which has a nontrivial 3-null space.
g. Find the 3-rank of the T given in (f).
h. Can any T ∈ L3S (V, V) have a nontrivial 3-null space? Justify your answer.
i. Define a 3-unitary operator on V.
j. Define a 3-normal operator on V which is not 3-unitary.
31. Let V and W be two 6-inner product spaces of the same dimension (W ≠ V) defined over the same field F. Define a 6-linear transformation T from V into W which preserves inner products, taking (3, 4, 6, 2, 1, 5) to be the dimension of V and (6, 5, 4, 2, 3, 1) the dimension of W. Does every T ∈ L6 (V, W) preserve inner products? Justify your claim.
32. Given V = V1 ∪ V2 ∪ V3, a (4, 5, 3)-dimensional 3-vector space over Q, give an example of a 3-linear operator T on V which is 3-diagonalizable. Does there exist a 3-linear operator T′ on V such that T′ is not 3-diagonalizable? Justify your answer.
33. Let V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5 be a (3, 4, 5, 2, 6)-dimensional 5-vector space over Q. Define a 5-linear operator T on V and decompose it into a 5-nilpotent operator and a 5-diagonal operator.
a. Does there exist a 5-linear operator T on V such that the 5-diagonal part is zero, i.e., the operator T is nilpotent?
b. Does there exist a 5-linear operator P on V such that it is completely 5-diagonal and its 5-nilpotent part is zero?
c. Give examples of the above-mentioned 5-operators in (a) and (b).
d. What is the form of the 5-minimal polynomial in case of (a) and (b)?
34. Define, for an n-vector space V over a field F, the notion of n-independent n-subspaces of V. Give an example when n = 4.
35. Let V = V1 ∪ V2 ∪ ... ∪ V6 be a 6-vector space over Q. Define a 6-linear operator E on V such that E² = E.
36. Let V = V1 ∪ V2 ∪ V3 ∪ V4 be a 4-vector space over Q of (3, 4, 5, 2) dimension. Suppose V = (W^1_1 ⊕ W^1_2) ∪ (W^2_1 ⊕ W^2_2 ⊕ W^2_3) ∪ (W^3_1 ⊕ W^3_2 ⊕ W^3_3) ∪ (W^4_1 ⊕ W^4_2). Define 4-linear operators E^1_i ∪ E^2_j ∪ E^3_k ∪ E^4_m, i = 1, 2; j = 1, 2, 3; k = 1, 2, 3; m = 1, 2, such that each E^i_p is a projection, i = 1, 2, 3, 4, and

E^i_p E^i_j = 0 if p ≠ j, and E^i_p E^i_j = E^i_p if p = j.
37. Prove that if T is any 4-linear operator on V then TE^i_j = E^i_jT, for i = 1, 2, 3, 4 and j = 1, 2 or 1, 2, 3 as appropriate, for the V given in problem 36.
38. Given V = V1 ∪ V2 ∪ V3 ∪ V4 ∪ V5, a 5-vector space over Q of (2, 3, 4, 5, 6) dimension, define T, a 5-linear operator on V, and find the 5-minimal polynomial for T. Is every 5-subspace of V related to the 5-minimal polynomial, i.e. the 5-null space of the minimal polynomial, invariant under T? Obtain the 5-nilpotent and 5-diagonalizable operators N and D respectively so that T = N + D. Verify ND = DN for these N and D of T.
39. Let T be a 7-linear operator on V = V1 ∪ V2 ∪ ... ∪ V7 of (3, 2, 5, 1, 6, 4, 7) dimension over Q. Is the generalized Cayley Hamilton Theorem true for T?
40. For the 3-vector spaces V = V1 ∪ V2 ∪ V3 of (3, 4, 2) dimension over Q and W = W1 ∪ W2 ∪ W3 of dimension (4, 5, 3) over Q, given any 3-linear transformation T, find the 3-matrix associated with T. Find the 3-adjoint of T.
41. For any n-linear transformation T of an n-vector space V = V1 ∪ V2 ∪ ... ∪ Vn of dimension (n1, n2, ..., nn) into an m-vector space W (m > n) of dimension (m1, m2, …, mm) over Q, prove there exists an n-matrix A = (A1 ∪ A2 ∪ ... ∪ An) which is related to T. Prove Ln (V, W) ≅ {set of all n-matrices A1 ∪ A2 ∪ ... ∪ An where each Ai is an ni × mj matrix with entries from Q}.
42. If V = V1 ∪ V2 ∪ ... ∪ Vn is an n-vector space over the field F of (n1, n2, …, nn) dimension, and T : V → V is such that Ti : Vi → Vi, i = 1, 2, …, n, show LSn (V, V) ≅ {all n-mixed square matrices A = (A1 ∪ A2 ∪ ... ∪ An) where Ai is an ni × ni matrix with entries from F}.
43. Define an n-norm on an n-inner product space V. Is it possible to prove the Cauchy-Schwarz inequality?
44. Derive the Gram-Schmidt orthogonalization process for an n-vector space V with an inner product, for an n-set of n-independent vectors in V.
45. Let V be an n-inner product space over F and W a finite dimensional n-subspace of V. Suppose E is the n-orthogonal projection of V on W, with E an n-idempotent n-linear transformation of V onto W and W⊥ the n-null space of E. Prove V = W ⊕ W⊥.
FURTHER READING
1. ABRAHAM, R., Linear and Multilinear Algebra, W. A.
Benjamin Inc., 1966.
2. ALBERT, A., Structure of Algebras, Colloq. Pub., 24,
Amer. Math. Soc., 1939.
3. BIRKHOFF, G., and MACLANE, S., A Survey of Modern
Algebra, Macmillan Publ. Company, 1977.
4. BIRKHOFF, G., On the structure of abstract algebras, Proc. Cambridge Philos. Soc., 31, 433-454, 1935.
5. BURROW, M., Representation Theory of Finite Groups,
Dover Publications, 1993.
6. CHARLES W. CURTIS, Linear Algebra – An introductory
Approach, Springer, 1984.
7. DUBREIL, P., and DUBREIL-JACOTIN, M.L., Lectures on
Modern Algebra, Oliver and Boyd., Edinburgh, 1967.
8. GEL'FAND, I.M., Lectures on linear algebra, Interscience, New York, 1961.

9. GREUB, W.H., Linear Algebra, Fourth Edition, Springer-Verlag, 1974.
10. HALMOS, P.R., Finite dimensional vector spaces, D
Van Nostrand Co, Princeton, 1958.
11. HARVEY E. ROSE, Linear Algebra, Birkhäuser Verlag, 2002.
12. HERSTEIN, I.N., Abstract Algebra, John Wiley, 1990.
13. HERSTEIN, I.N., Topics in Algebra, John Wiley, 1975.
14. HERSTEIN, I.N., and DAVID J. WINTER, Matrix Theory and Linear Algebra, Maxwell Pub., 1989.
15. HOFFMAN, K. and KUNZE, R., Linear algebra, Prentice
Hall of India, 1991.
16. HUMMEL, J.A., Introduction to vector functions,
Addison-Wesley, 1967.
17. JACOB BILL, Linear Functions and Matrix Theory ,
Springer-Verlag, 1995.
18. JACOBSON, N., Lectures in Abstract Algebra, D Van
Nostrand Co, Princeton, 1953.
19. JACOBSON, N., Structure of Rings, Colloquium
Publications, 37, American Mathematical Society,
1956.
20. JOHNSON, T., New spectral theorem for vector spaces
over finite fields Zp , M.Sc. Dissertation, March 2003
(Guided by Dr. W.B. Vasantha Kandasamy).
21. KATSUMI, N., Fundamentals of Linear Algebra,
McGraw Hill, New York, 1966.
22. KEMENY, J. and SNELL, J., Finite Markov Chains, Van Nostrand, Princeton, 1960.
23. KOSTRIKIN, A.I, and MANIN, Y. I., Linear Algebra and
Geometry, Gordon and Breach Science Publishers,
1989.
24. LANG, S., Algebra, Addison Wesley, 1967.
25. LAY, D. C., Linear Algebra and its Applications,
Addison Wesley, 2003.
26. PADILLA, R., Smarandache algebraic structures, Smarandache Notions Journal, 9, 36-38, 1998.
27. PETTOFREZZO, A. J., Elements of Linear Algebra,
Prentice-Hall, Englewood Cliffs, NJ, 1970.
28. ROMAN, S., Advanced Linear Algebra, Springer-Verlag,
New York, 1992.
29. RORRES, C., and ANTON H., Applications of Linear
Algebra, John Wiley & Sons, 1977.
30. SEMMES, Stephen, Some topics pertaining to algebras of linear operators, November 2002. http://arxiv.org/pdf/math.CA/0211171
31. SHILOV, G.E., An Introduction to the Theory of Linear
Spaces, Prentice-Hall, Englewood Cliffs, NJ, 1961.
32. SMARANDACHE, Florentin (editor), Proceedings of the
First International Conference on Neutrosophy,
Neutrosophic Logic, Neutrosophic set, Neutrosophic
probability and Statistics, December 1-3, 2001 held at
the University of New Mexico, published by Xiquan,
Phoenix, 2002.
33. SMARANDACHE, Florentin, A Unifying field in Logics:
Neutrosophic Logic, Neutrosophy, Neutrosophic set,
Neutrosophic probability, second edition, American
Research Press, Rehoboth, 1999.
34. SMARANDACHE, Florentin, Special Algebraic Structures, in Collected Papers III, Abaddaba, Oradea, 78-81, 2000.
35. THRALL, R.M., and TORNKHEIM, L., Vector spaces and
matrices, Wiley, New York, 1957.
36. VASANTHA KANDASAMY, W.B., SMARANDACHE,
Florentin and K. ILANTHENRAL, Introduction to
bimatrices, Hexis, Phoenix, 2005.
37. VASANTHA KANDASAMY, W.B., Bialgebraic structures
and Smarandache bialgebraic structures, American
Research Press, Rehoboth, 2003.
38. VASANTHA KANDASAMY, W.B., Bivector spaces, U. Sci. Phy. Sci., 11, 186-190, 1999.
39. VASANTHA KANDASAMY, W.B., Linear Algebra and
Smarandache Linear Algebra, Bookman Publishing,
2003.
40. VASANTHA KANDASAMY, W.B., On a new class of semivector spaces, Varahmihir J. of Math. Sci., 1, 23-30, 2003.
41. VASANTHA KANDASAMY and THIRUVEGADAM, N., Application of pseudo best approximation to coding theory, Ultra Sci., 17, 139-144, 2005.
42. VASANTHA KANDASAMY and RAJKUMAR, R., Use of best biapproximation in algebraic bicoding theory, Varahmihir Journal of Mathematical Sciences, 509-516, 2006.
43. VASANTHA KANDASAMY, W.B., On fuzzy semifields and fuzzy semivector spaces, U. Sci. Phy. Sci., 7, 115-116, 1995.
44. VASANTHA KANDASAMY, W.B., On semipotent linear
operators and matrices, U. Sci. Phy. Sci., 8, 254-256,
1996.
45. VASANTHA KANDASAMY, W.B., Semivector spaces over semifields, Zeszyty Nauwoke Politechniki, 17, 43-51, 1993.
46. VASANTHA KANDASAMY, W.B., Smarandache Fuzzy
Algebra, American Research Press, Rehoboth, 2003.
47. VASANTHA KANDASAMY, W.B., Smarandache rings,
American Research Press, Rehoboth, 2002.
48. VASANTHA KANDASAMY, W.B., Smarandache semirings and semifields, Smarandache Notions Journal, 7, 88-91, 2001.
49. VASANTHA KANDASAMY, W.B., Smarandache
Semirings, Semifields and Semivector spaces, American
Research Press, Rehoboth, 2002.
50. VOYEVODIN, V.V., Linear Algebra, Mir Publishers,
1983.
51. ZELINKSY, D., A first course in Linear Algebra,
Academic Press, 1973.
INDEX
B
Bigroup, 8
C
Cayley Hamilton theorem for n-vector spaces of type I, 62-3
Characteristic n-value in type I vector spaces, 56-7
E
Essential n-states, 87-8
F
Finite n-dimensional n-vector space, 20
H
Hyper n-irreducible, 88
I
Infinite n-dimensional n-vector space, 20
L
Leontief model, 92
Leontief open production n-models, 99
Linear n-algebra of type I, 13-8
Linear n-transformation, 21
Linear n-vector space of type I, 13-8
Linearly dependent n-subset, 18
M
Markov bichains, 81-2
Markov bioprocess, 81-3
Markov chains, 81-2
Markov n-chains, 81-4
Markov n-process, 82-4
m-idempotent, 90
m-n-C stochastic n matrix, 85-6
m-spectral m-matrix, 90
N
n-adjoints of T in type I n-vector spaces, 55-6
n-annihilating polynomials in n-vector spaces of type I, 61-2
n-basis of a n-vector space, 19
n-best approximation in n-vector spaces of type I, 50
n-characteristic n-polynomial in type I n-vector spaces, 57-8
n-characteristic n-vector in type I n-vector spaces, 56-7
n-characteristic value in type I n-vector spaces, 56-7
n-diagonalizable n-linear operator, 59, 71
n-eigen value of the stochastic n-matrix, 85-6
n-eigen values in type I vector spaces, 56-7
n-ergodic, 88
n-field of characteristic zero, 11-2
n-field of finite characteristic, 11-2
n-field of mixed characteristic, 11-2
n-field, 7, 10-1, 81-4
n-group, 7-10
n-independent trial, 85-6
n-inner product of n-vector space of type I, 47-48
n-invariant under T, 63-4
n-irreducible (n-ergodic), 88-9
n-kernel of a n-linear transformation, 27
n-latent n-vectors, 91
n-linear algebra of type I, 13-8
n-linear operator of a type I n-vector space, 28-9
n-linear transformation, 21
n-linearly independent subset, 18
n-minimal polynomial, 77-8
n-monic polynomial, 61-3
n-nilpotent n-linear operator, 76
n-normal linear n-operator on type I n-vector spaces, 56
n-orthogonal complement of a n-set in a n-vector space of type I, 51-2
n-orthogonal n-vectors, 48-9
n-orthogonal, 80, 90
n-projection of n-linear operator, 67-8
n-range of a n-linear transformation, 31-2
n-row probability n-vector, 85
n-semi anti orthogonal, 80
n-semi orthogonal, 80
n-subfield, 83
n-subgroup, 10
n-subspace of type I, 17
n-system semi communicates, 87-8
n-unitary operator of type I vector space, 53
n-vector space linear n-isomorphism, 26
n-vector space of type I, 13-8
O
One to one n-linear transformation, 24
P
Probability n-vector, 85
R
Random walk with reflecting barriers, 86-7
Random walk, 84
S
Same n-dimension n-vector space, 25
Shrinking n-linear transformation, 22
Shrinking n-map, 22
S-Leontief n-closed n-model, 98
S-Leontief n-open models, 98
S-Leontief open model, 95
Special n-linear operators, 39-40
Special n-shrinking transformation, 23-4
Special shrinking n-transformation, 23-4
Spectral n-decomposition, 90
T
Transition matrix, 83-4
Transition n-matrix, 84-5
ABOUT THE AUTHORS

Dr. W.B. Vasantha Kandasamy is an Associate Professor in the
Department of Mathematics, Indian Institute of Technology
Madras, Chennai. In the past decade she has guided 12 Ph.D.
scholars in the different fields of non-associative algebras,
algebraic coding theory, transportation theory, fuzzy groups, and
applications of fuzzy theory to the problems faced in the chemical
and cement industries.

She has to her credit 646 research papers. She has guided
over 68 M.Sc. and M.Tech. projects. She has worked in
collaboration projects with the Indian Space Research
Organization and with the Tamil Nadu State AIDS Control Society.
This is her 37th book.

On India's 60th Independence Day, Dr. Vasantha was
conferred the Kalpana Chawla Award for Courage and Daring
Enterprise by the State Government of Tamil Nadu in recognition
of her sustained fight for social justice in the Indian Institute of
Technology (IIT) Madras and for her contribution to mathematics.
The award, instituted in the memory of Indian-American
astronaut Kalpana Chawla who died aboard Space Shuttle
Columbia, carried a cash prize of five lakh rupees (the
highest prize-money for any Indian award) and a gold medal.

She can be contacted at vasanthakandasamy@gmail.com
You can visit her on the web at: http://mat.iitm.ac.in/~wbv

Dr. Florentin Smarandache is a Professor of Mathematics and
Chair of the Math & Sciences Department at the University of New
Mexico, USA. He has published over 75 books and 150 articles and
notes in mathematics, physics, philosophy, psychology, rebus, and
literature.

In mathematics his research is in number theory, non-Euclidean
geometry, synthetic geometry, algebraic structures,
statistics, neutrosophic logic and set (generalizations of fuzzy
logic and set respectively), and neutrosophic probability
(a generalization of classical and imprecise probability). He has
also made small contributions to nuclear and particle physics,
information fusion, neutrosophy (a generalization of dialectics),
the law of sensations and stimuli, etc.
He can be contacted at smarand@unm.edu