AE 738 Tensors
Contents
1 Review of index notation and mapping
  1.1 Summation convention
  1.2 Kronecker delta
  1.3 Dot product
  1.4 Permutation symbol
  1.5 Cross product
  1.6 Differentiation notation
  1.7 Mappings
    1.7.1 Injective or one-to-one mapping
    1.7.2 Surjective or onto mapping
    1.7.3 Bijective or one-to-one correspondence mapping
    1.7.4 Composition of two mappings
  1.8 Homomorphism
  1.9 Isomorphism
2 Vector space
  2.1 Field
  2.2 Vector space or linear space
  2.3 Linear combination
  2.4 Linearly independent set
  2.5 Linear span
  2.6 Basis, components and dimension
    2.6.1 Standard basis
  2.7 Vector subspaces
  2.8 Linear sum of two subspaces
  2.9 Direct sum of two subspaces
6 Tensor product
  6.1 Bilinear functionals
  6.2 Tensor product as homomorphism
  6.3 Matrix representation of tensor products
  6.4 Tensor spaces: a general multilinear function
  6.5 Tensor multiplication
  6.6 Contraction
9 Invariants in 3D
  9.1 Cofactor tensor
In the index notation, they are represented by: aij , aij and aij respectively, where
each of the indices (i, j) takes values 1 and 2.
1. The convention does not apply to numerical indices. For instance, $a_2 x_2$ stands for a single term.
2. The repeated index may be replaced by any other index. For instance ai xi =
at xt . For this reason, the repeated index is called a dummy index.
3. If an index is not a dummy index, it is called a free index. Thus in the expression $a_{ik} x_i$,
$k$ is the free index.
Some examples:
1. If n = 3, ak xk = a1 x1 + a2 x2 + a3 x3 .
2. If $\phi$ is a function of $x_1, x_2, x_3, \ldots, x_n$, then
$d\phi = \dfrac{\partial \phi}{\partial x_1}\, dx_1 + \dfrac{\partial \phi}{\partial x_2}\, dx_2 + \cdots + \dfrac{\partial \phi}{\partial x_n}\, dx_n = \dfrac{\partial \phi}{\partial x_i}\, dx_i$
3. Let yi = αit xt and zi = βit yt . We can rewrite yt = αtk xk and now we can
substitute back as zi = βit (αtk xk ) = [βit αtk ]xk = γik xk . Here γik = βit αtk ,
where $t$ is the dummy index and $i$, $k$ are the free indices.
4. $A^k_{mn} B^l_{kn} = (A^1_{mn} B^l_{1n}) + (A^2_{mn} B^l_{2n}) + \cdots + (A^N_{mn} B^l_{Nn}) = (A^1_{m1} B^l_{11} + A^1_{m2} B^l_{12} + \cdots) + (A^2_{m1} B^l_{21} + A^2_{m2} B^l_{22} + \cdots) + \cdots$
5. Consider the summation (a1j xj )2 + (a2j xj )2 + ... + (anj xj )2 . This expression
is written as (ais xs )(ait xt ) or ais ait xs xt .
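As a quick numerical illustration of the summation convention (not part of the original notes), numpy's einsum performs exactly this repeated-index summation; the arrays below are arbitrary illustrative data.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])        # a_i
x = np.array([0.5, -1.0, 2.0])       # x_i
A = np.arange(9.0).reshape(3, 3)     # a_ij

# Example 1: a_k x_k (the repeated index k is summed)
print(np.einsum('k,k->', a, x))      # a_1 x_1 + a_2 x_2 + a_3 x_3

# Example 5: a_is a_it x_s x_t equals the sum over i of (a_ij x_j)^2
lhs = np.einsum('is,it,s,t->', A, A, x, x)
rhs = np.sum((A @ x) ** 2)
print(np.isclose(lhs, rhs))          # True
```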
Note that,
ei · (ej × ek ) = ϵijk
Basically we represent $a = a_j \hat{e}_j$ and $b = b_k \hat{e}_k$, so we can also write the component form as $a \times b = \epsilon_{ijk}\, a_j b_k\, \hat{e}_i$.
Show that
$a \times (b \times c) = (a \cdot c)\, b - (a \cdot b)\, c$
Ans: Consider
a = ai ei , b = bj ej , c = ck ek
Now
b × c = ϵmjk bj ck em = Am em .
Then,
$a \times (b \times c) = \epsilon_{rim}\, a_i A_m e_r$
$= \epsilon_{rim}\, a_i (\epsilon_{mjk} b_j c_k) e_r$
$= (\epsilon_{mri} \epsilon_{mjk})\, a_i b_j c_k e_r$
$= (\delta_{rj}\delta_{ik} - \delta_{rk}\delta_{ij})\, a_i b_j c_k e_r$
$= \delta_{rj}\delta_{ik}\, a_i b_j c_k e_r - \delta_{rk}\delta_{ij}\, a_i b_j c_k e_r$
$= (\delta_{ik} a_i)(\delta_{rj} b_j)\, c_k e_r - (\delta_{ij} a_i)\, b_j c_k (\delta_{rk} e_r)$
$= (a_k c_k)(b_r e_r) - (a_j b_j)(c_r e_r)$
$= (a \cdot c)\, b - (a \cdot b)\, c$
$(a \times b) \cdot (a \times b) = (a \cdot a)(b \cdot b) - (a \cdot b)^2$
Consider,
$a \times b = \epsilon_{mij}\, a_i b_j e_m, \qquad a \times b = \epsilon_{pqr}\, a_q b_r e_p.$
Then
$(a \times b) \cdot (a \times b) = (\epsilon_{mij} a_i b_j e_m) \cdot (\epsilon_{pqr} a_q b_r e_p)$
$= \epsilon_{mij} \epsilon_{pqr}\, a_i b_j a_q b_r\, (e_m \cdot e_p)$
$= \epsilon_{mij} \epsilon_{pqr}\, a_i b_j a_q b_r\, \delta_{mp}$
$= \epsilon_{mij} \epsilon_{mqr}\, a_i b_j a_q b_r$
$= (\delta_{iq}\delta_{jr} - \delta_{ir}\delta_{jq})\, a_i b_j a_q b_r$
$= \delta_{iq}\delta_{jr}\, a_i b_j a_q b_r - \delta_{ir}\delta_{jq}\, a_i b_j a_q b_r$
$= a_q b_r a_q b_r - a_r b_q a_q b_r$
$= (a_q a_q)(b_r b_r) - (a_r b_r)(a_q b_q)$
$= (a \cdot a)(b \cdot b) - (a \cdot b)^2$
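A minimal numerical check of the two identities just derived (my own illustrative vectors, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))          # three arbitrary 3D vectors

# bac-cab rule: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
print(np.allclose(lhs, rhs))                   # True

# Lagrange identity: (a x b).(a x b) = (a.a)(b.b) - (a.b)^2
lhs = np.dot(np.cross(a, b), np.cross(a, b))
rhs = np.dot(a, a) * np.dot(b, b) - np.dot(a, b) ** 2
print(np.isclose(lhs, rhs))                    # True
```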
$a = a(x_1, x_2, x_3), \qquad a_i = a_i(x_1, x_2, x_3), \qquad a_{ij} = a_{ij}(x_1, x_2, x_3)$
Then
$a_{,i} = \dfrac{\partial a}{\partial x_i} = \partial_i a, \qquad a_{i,j} = \dfrac{\partial a_i}{\partial x_j} = \partial_j a_i, \qquad a_{ij,k} = \dfrac{\partial a_{ij}}{\partial x_k} = \partial_k a_{ij} \qquad (10)$
Example 3 Show that
$\nabla \times (\phi u) = \nabla\phi \times u + \phi\, \nabla \times u$
1.7 Mappings
1.7.1 Injective or one-to-one mapping
Definition 1.1 Let X and Y be two sets. Then a mapping
T :X →Y
is called an injective or one-to-one mapping if
1. x1 ≠ x2 =⇒ T x1 ≠ T x2, or equivalently T x1 = T x2 =⇒ x1 = x2,
2. the domain of the map D(T ) = X ,
3. the image of the map I(T ) ⊂ Y,
where x1 , x2 ∈ X , and T x1 , T x2 ∈ Y. We will use the notation T x instead of
T (x), and T x reads as 'T operates on x'.
In other words, an injective mapping means that for every input there is a
unique output. An injective mapping is shown in Fig. 3.
Definition 1.4 Let X , Y and Z be three sets. Consider the following mappings
T1 : X → Y, T2 : Y → Z.
The composition (T2 ◦ T1 ) : X → Z is then defined such that
(T2 ◦ T1 )x = T2 (T1 x) ∈ Z, ∀x ∈ X .
1.8 Homomorphism
Let X and Y be two sets, each equipped with some operations³. Then X and Y
are known as algebraic structures. Consider a mapping
T : X → Y,
and let us denote '∗' as a list of generic operations in X , and '⋆' as a list of generic
operations in Y. Then T is called a homomorphism if
T (x1 ∗ x2 ) = T x1 ⋆ T x2 ∈ Y, ∀x1 , x2 ∈ X .
For instance, with addition as the operation,
T (x1 + x2 ) = T x1 + T x2 , ∀x1 , x2 ∈ X .
This means, we first apply the operation on some elements in X and then map the
resulting element into Y. It will be the same if we map those elements first into
Y and then perform the operations in Y.
Example 4 Let X and Y be equipped with the binary operations + and ·, i.e.,
∗ = ⋆ = {+, ·}, and consider the map
$T : \mathcal{X} \to \mathcal{Y}$, such that $Tx = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} \in \mathcal{Y}$, $x \in \mathcal{X}$.
(³An operation takes input value/s to a well-defined output value, e.g., binary operations like + and · (multiplication).)
Then $T(x_1 + x_2) = T(x_1) + T(x_2)$, and
$T(x_1 \cdot x_2) = \begin{bmatrix} x_1 \cdot x_2 & 0 \\ 0 & x_1 \cdot x_2 \end{bmatrix} = \begin{bmatrix} x_1 & 0 \\ 0 & x_1 \end{bmatrix} \cdot \begin{bmatrix} x_2 & 0 \\ 0 & x_2 \end{bmatrix} = T(x_1) \cdot T(x_2)$
So the mapping preserves the operations and structures, and so the map is a homo-
morphism. We say that X and Y are homomorphic to each other.
        |  E       C4      (C4)²   (C4)³
E       |  E       C4      (C4)²   (C4)³
(C4)³   |  (C4)³   E       C4      (C4)²
(C4)²   |  (C4)²   (C4)³   E       C4
C4      |  C4      (C4)²   (C4)³   E

        |  1     i     −1    −i
1       |  1     i     −1    −i
−i      |  −i    1     i     −1
−1      |  −1    −i    1     i
i       |  i     −1    −i    1
as they have similar multiplication tables, i.e., having the same rearrangement.
Note that T (C4 · (C4 )2 ) = T (C4 )3 = −i and T (C4 ) · T (C4 )2 = i · (−1) = −i. It
is then satisfying T (C4 · (C4 )2 ) = T (C4 ) · T (C4 )2 . You can also check that this
relation is satisfied for the other elements. So, T is a homomorphism.
Very often, several groups, which arise in different contexts in everyday life
and consequently with different physical meanings attached to the elements, are
isomorphic to one abstract group, whose properties can then be analyzed once and
for all.
1.9 Isomorphism
A bijective homomorphism is known as isomorphism.
Figure 9: Essential elements of a field and the closure properties under addition and
multiplication. The boundary of the set in the diagram is only a schematic
representation of the set.
Suppose F is equipped with two binary operations called addition and multi-
plication and denoted by + and ·, respectively. This means ∀a, b ∈ F, we have
a + b ∈ F and a · b ∈ F. This algebraic structure (F, +, ·) is called a field if the
following postulates are satisfied :
1. Closed under addition: a + b ∈ F, ∀a, b ∈ F
An operation is called vector addition if all of the above rules hold. Next, an
operation is called scalar multiplication if a scalar (or a number) a ∈ F can be
combined with every element x ∈ V to give an element ax ∈ V such that the
following rules hold:
since aij + bij = bij + aij ∈ F as aij , bij ∈ F. Similarly, you can verify the
rest of the following relations.
Let Fn (x) denote the set of all polynomials of x of degree n over a field F. Let us
consider the following polynomials:
f (x) = a0 + a1 x + a2 x2 + · · · + an xn , f (x) ∈ Fn (x), ai ∈ F
g(x) = b0 + b1 x + b2 x2 + · · · + bn xn , g(x) ∈ Fn (x), bi ∈ F
h(x) = c0 + c1 x + c2 x2 + · · · + cn xn , h(x) ∈ Fn (x), ci ∈ F
Moreover consider the zero polynomial as
0̂(x) = 0 + 0x + 0x2 + · · · + 0xn , 0̂(x) ∈ Fn (x), 0 ∈ F
Then,
1. f (x) + g(x) ∈ Fn (x). Note that
f (x) + g(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2 + · · · + (an + bn )xn
= c0 + c1 x + c2 x2 + · · · + cn xn ∈ Fn (x)
since ai + bi = ci ∈ F as ai , bi ∈ F.
2. f (x) + g(x) = g(x) + f (x) as
f (x) + g(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2 + · · · + (an + bn )xn
= (b0 + a0 ) + (b1 + a1 )x + (b2 + a2 )x2 + · · · + (bn + an )xn
= (b0 + b1 x + b2 x2 + · · · + bn xn ) + (a0 + a1 x + a2 x2 + · · · + an xn )
= g(x) + f (x)
since ai + bi = bi + ai ∈ F as ai , bi ∈ F. Similarly check the remaining points.
8. 1f (x) = f (x)
Note that the polynomials are nonlinear with respect to x, but the polynomial space
is linear!
x = a1 x 1 + a2 x 2 + · · · + an x n = ai x i
a1 x1 + a2 x2 + · · · + an xn = 0, ai ∈ F, 1 ≤ i ≤ n
Figure 10: (a) Two mutually perpendicular vectors and (b) two oblique vectors.
The first figure is obvious. Let’s examine the second case. We write,
$\alpha \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \beta \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$, i.e., $\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \qquad (12)$
If the determinant of the coefficient matrix is not equal to zero, then we always get
α = β = 0. Then a and b are linearly independent. However, if the determinant
is zero, then
$a_1 b_2 - a_2 b_1 = 0$, or $\dfrac{a_2}{a_1} = \dfrac{b_2}{b_1}$.
This means a and b have the same direction (Fig. 11). We can express any of
them with a scalar multiplication of the other vector, say b = λa, where λ belongs
to the field and represents a scaling.
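A short numerical sketch of this determinant test (the vectors below are my own illustrative choices):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 6.0])            # b = 3a, deliberately parallel to a

M = np.column_stack([a, b])
print(np.linalg.det(M))              # 0.0: nontrivial (alpha, beta) exist, so {a, b} is dependent
print(np.linalg.matrix_rank(M))      # 1

b = np.array([3.0, 5.0])             # a vector not parallel to a
M = np.column_stack([a, b])
print(np.linalg.det(M))              # -1.0 (nonzero): alpha = beta = 0 is the only solution
```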
Example 12 Let
$S = \left\{ \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 3 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\} \subset \mathcal{V} = \mathbb{R}^3.$
This gives,
a−c = 0
2a + 3b = 0
b + c = 0.
Span (S) = a1 x1 + a2 x2 + · · · + ap xp
x = xi g i = ai g i =⇒ (xi − ai )g i = 0i .
Example: Show that the vectors
$\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$
form a basis of V = R³.
Let a, b, c ∈ F. We start with
$a \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + b \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + c \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$
We further write,
$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 1 & -1 \\ 1 & 0 & 2 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$
Now det A = −9 ≠ 0, and the only solution is a = b = c = 0. So the vectors
are linearly independent. Next we need to show that they also span the space.
Consider an arbitrary vector
$\begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix} \in \mathcal{V} = \mathbb{R}^3.$
Then,
$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 1 & -1 \\ 1 & 0 & 2 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix}.$
As detA ̸= 0, for any arbitrary k1 , k2 , k3 , the system is solvable uniquely. So, the
considered vectors span R3 , and they are a basis.
is a basis.
First step is to show that the set is linearly independent. Let a, b, c, d ∈ F. Then,
aα + bβ + cγ + dδ = 0,
i.e.,
$a \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + c \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + d \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \implies \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
then
K = k1 α + k2 β + k3 γ + k4 δ,
i.e., the vectors α, β, γ, δ span the space. The set is linearly independent and spans
the space as well. So the set of vectors is a basis.
Example 16 What is the dimension of the space V = {0}, i.e., space only with
the zero element?
First check (by yourself) that V = {0} is a vector space. Since 0 is the only
element, the only candidate basis is B = {0}. But
α0 = 0
is satisfied for any α ≠ 0, i.e., the set {0} is linearly dependent, so the considered
space cannot have a basis. So dim V = 0.
Theorem 2.2 The necessary and sufficient condition for a nonempty subset W
of a vector space V over field F to be a subspace is
a, b ∈ F, and x, y ∈ W =⇒ ax + by ∈ W. (14)
Proof:
First note that any subset of V automatically inherits the operations of vector addition and scalar
multiplication, i.e., A2, A3, B2, B4, B5. We do not need to prove those again.
Necessary condition
Consider W is a subspace of V. Then W must be closed under scalar multiplication
and vector addition. This means
a ∈ F, x ∈ W =⇒ ax ∈ W
b ∈ F, y ∈ W =⇒ by ∈ W
Sufficient condition
Now suppose W is a non-empty subset of V satisfying the given condition ax+by ∈
W for all a, b ∈ F, and x, y ∈ W. We need to satisfy that
2. ∃ an element 0 ∈ W
As ax + by ∈ W,
Let,
x = (x1 + 2x2 , x2 , −x1 + 3x2 ) x1 , x2 ∈ F
y = (y1 + 2y2 , y2 , −y1 + 3y2 ) y1 , y2 ∈ F
W1 + W2 := {w1 + w2 ; w1 ∈ W1 , w2 ∈ W2 }
Bα = {γ 1 , γ 2 , . . . , γ k , α1 , α2 , . . . , αt }
and a basis of W2 as
Bβ = {γ 1 , γ 2 , . . . , γ k , β 1 , β 2 , . . . , β m }
So,
dim W1 + dim W2 − dim(W1 ∩ W2 ) = (k + t) + (k + m) − k = k + t + m.
We need to show that dim(W1 + W2 ) = k + t + m. Our claim is then,
B = {γ 1 , γ 2 , . . . , γ k , α1 , α2 , . . . , αt , β 1 , β 2 , . . . , β m }
is a basis of W1 + W2 . First we will show that the set B is linearly independent.
Consider,
$(c_1 \gamma_1 + c_2 \gamma_2 + \cdots + c_k \gamma_k + a_1 \alpha_1 + a_2 \alpha_2 + \cdots + a_t \alpha_t) + (b_1 \beta_1 + b_2 \beta_2 + \cdots + b_m \beta_m) = 0 \qquad (15)$
where the first bracketed term belongs to W₁ and the second to W₂. Then
$(b_1 \beta_1 + b_2 \beta_2 + \cdots + b_m \beta_m) = -(c_1 \gamma_1 + c_2 \gamma_2 + \cdots + c_k \gamma_k + a_1 \alpha_1 + a_2 \alpha_2 + \cdots + a_t \alpha_t)$
Here the left-hand side belongs to W₂ and the right-hand side to W₁, so
$\implies b_1 \beta_1 + b_2 \beta_2 + \cdots + b_m \beta_m \in \mathcal{W}_1 \cap \mathcal{W}_2$
So, it can be written with respect to the basis of W1 ∩ W2 , and we write,
b1 β 1 + b2 β 2 + · · · + bm β m = d1 γ 1 + d2 γ 2 + · · · + dk γ k
=⇒ b1 β 1 + b2 β 2 + · · · + bm β m − d1 γ 1 − d2 γ 2 − · · · − dk γ k = 0
But {γ 1 , γ 2 , . . . , γ k , β 1 , β 2 , . . . , β m } is a basis of W2 . We get
b1 = b2 = · · · = bm = 0 = d1 = · · · = dk
From (15),
(c1 γ 1 + c2 γ 2 + · · · + ck γ k + a1 α1 + a2 α2 + · · · + at αt ) = 0.
Moreover, Bα = {γ 1 , γ 2 , . . . , γ k , α1 , α2 , . . . , αt } is a basis of W1 and so c1 = · · · =
ck = a1 = · · · = at = 0. This implies
c 1 = · · · = c k = a1 = · · · = at = b 1 = b 2 = · · · = b m = 0
and B = {γ 1 , γ 2 , . . . , γ k , α1 , α2 , . . . , αt , β 1 , β 2 , . . . , β m } is linearly independent.
Next step is to show that the set B spans W1 +W2 . Let w1 ∈ W1 , and w2 ∈ W2
such that w = w1 + w2 ∈ W1 + W2 . Now
w = w1 + w2
= (c1 γ 1 + c2 γ 2 + · · · + ck γ k + a1 α1 + a2 α2 + · · · + at αt )
+(c′1 γ 1 + c′2 γ 2 + · · · + c′k γ k + b1 β 1 + b2 β 2 + · · · + bm β m )
= (c1 + c′1 )γ 1 + (c2 + c′2 )γ 2 + · · · + (ck + c′k )γ k
+a1 α1 + a2 α2 + · · · + at αt + b1 β 1 + b2 β 2 + · · · + bm β m
= linear combination of B
We then have:
1. B is linearly independent.
2. Span B = W1 + W2 .
1. W1 + W2 = V
2. W1 ∩ W2 = {0}
We then write
V = W1 ⊕ W 2 .
Or,
$\begin{bmatrix} w_1 + 3w_2 + w_3 \\ 2w_1 + 6w_2 + 2w_3 \\ -w_2 + w_3 \end{bmatrix} \in \mathcal{W}$
So,
$\alpha_1 x + \alpha_2 y = \begin{bmatrix} \alpha_1(x_1 + 3x_2 + x_3) + \alpha_2(y_1 + 3y_2 + y_3) \\ \alpha_1(2x_1 + 6x_2 + 2x_3) + \alpha_2(2y_1 + 6y_2 + 2y_3) \\ \alpha_1(-x_2 + x_3) + \alpha_2(-y_2 + y_3) \end{bmatrix}$
$= \begin{bmatrix} (\alpha_1 x_1 + \alpha_2 y_1) + 3(\alpha_1 x_2 + \alpha_2 y_2) + (\alpha_1 x_3 + \alpha_2 y_3) \\ 2(\alpha_1 x_1 + \alpha_2 y_1) + 6(\alpha_1 x_2 + \alpha_2 y_2) + 2(\alpha_1 x_3 + \alpha_2 y_3) \\ -(\alpha_1 x_2 + \alpha_2 y_2) + (\alpha_1 x_3 + \alpha_2 y_3) \end{bmatrix} = \begin{bmatrix} p_1 + 3p_2 + p_3 \\ 2p_1 + 6p_2 + 2p_3 \\ -p_2 + p_3 \end{bmatrix}$
The space of linear combinations of the column vectors of A, i.e., the span of the
column vectors of A, is known as column space, colA, and W = colA. Similarly,
the span of the row vectors also forms a subspace, known as row space, rowA,
which is definitely different from W. Note that rowAt = colA = W.
If det A ≠ 0, then a, b, c are all zero, and the given set is a basis. However,
in our case det A = 0. One convenient way forward is to find the echelon form of Aᵗ,
since row operations on Aᵗ simply produce different linear combinations of the column vectors of
A.
$\begin{bmatrix} 1 & 2 & 0 \\ 3 & 6 & -1 \\ 1 & 2 & 1 \end{bmatrix} \xrightarrow[R_3 \to R_3 - R_1]{R_2 \to R_2 - 3R_1} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{R_3 \to R_3 + R_2} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}$
So a basis is
$\left\{ \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix} \right\},$
So a basis is
$\left\{ \begin{bmatrix} 1 \\ -2 \\ 5 \\ 3 \end{bmatrix}, \begin{bmatrix} 0 \\ 7 \\ -9 \\ 2 \end{bmatrix} \right\},$
and dim W = 2.
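A small sketch of the same column-space computation using sympy (a tooling choice of mine, not part of the notes); A holds the generators of W from the example above as its columns.

```python
from sympy import Matrix

A = Matrix([[1, 3, 1],
            [2, 6, 2],
            [0, -1, 1]])

# Row-reduce A^t: its nonzero rows span col A = W
rref_At, _ = A.T.rref()
print(rref_At)            # rows (1, 2, 0) and (0, 0, 1), matching the basis above up to sign

# Equivalently, the pivot columns of A itself give a basis of col A
_, pivot_cols = A.rref()
print([list(A.col(j)) for j in pivot_cols])
print(A.rank())           # 2, so dim W = 2
```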
T :V→U
2. T (ax) = aT x, a ∈ F, x ∈ V.
The above two conditions could be combined into a single condition as
L = Iω.
The mass moment of inertia I is a second-order tensor that maps the vector ω to L.
These two vectors may not necessarily be in the same direction. A scalar multipli-
cation with a vector also gives a vector. However, unlike a tensor operation, scalar
multiplication cannot change the direction of a vector. The above relation also
follows the linear relationship (17), i.e.,
Example 23 The polarization vector P , and the electric field vector E are con-
nected as
P = ε0 χe E.
Here, ε0 , a scalar, is the electric permittivity of free space. χe , also a scalar, is the
electric susceptibility of the medium concerned. Note that P and E have the same
direction.
Example 24 The traction vector t, and the unit normal n to a plane (cut) at a
point are connected as
t = σn
where σ is the Cauchy stress and a second order tensor. The above relation also
follows the linear relationship (17).
is homomorphism of V into U.
Let,
$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}. \qquad (18)$
Then
$T(ax + by) = T\left( \begin{bmatrix} ax_1 \\ ax_2 \\ ax_3 \end{bmatrix} + \begin{bmatrix} by_1 \\ by_2 \\ by_3 \end{bmatrix} \right) = T \begin{bmatrix} ax_1 + by_1 \\ ax_2 + by_2 \\ ax_3 + by_3 \end{bmatrix} = \begin{bmatrix} ax_1 + by_1 \\ ax_2 + by_2 \end{bmatrix}$
$= \begin{bmatrix} ax_1 \\ ax_2 \end{bmatrix} + \begin{bmatrix} by_1 \\ by_2 \end{bmatrix} = a \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + b \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = a\, T \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + b\, T \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$
$= a\, T x + b\, T y$
T : V → V.
I : V → V, such that Iv = v
T (α1 v 1 + · · · + αn v n ) = α1 T v 1 + · · · + αn T v n (19)
R(T ) := {T v = u ∈ U, v ∈ V} .
T (α1 v 1 + α2 v 2 ) = α1 T v 1 + α2 T v 2
= α1 u1 + α2 u2
N (T ) := {v ∈ V : T v = 0 ∈ U} ⊆ V.
T (α1 v 1 + α2 v 2 ) = α1 T v 1 + α2 T v 2
= 0+0=0
αk+1 T γ k+1 + · · · + αn T γ n = 0,
T (αk+1 γ k+1 + · · · + αn γ n ) = 0.
αk+1 γ k+1 + · · · + αn γ n = β1 γ 1 + β2 γ 2 + · · · + βk γ k ,
β1 γ 1 + β2 γ 2 + · · · + βk γ k − αk+1 γ k+1 − · · · − αn γ n = 0
or
Then,
$\mathrm{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \\ -2 \end{bmatrix} \right\} = R(T).$
where the space of linear combinations of the column vectors of A, i.e., the column
space is equal to R(T ). Conducting the row-operations of At we write
$\begin{bmatrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ -1 & 1 & -2 \end{bmatrix} \xrightarrow[R_3 \to R_3 + R_1]{R_2 \to R_2 - 2R_1} \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 1 & -1 \end{bmatrix} \xrightarrow{R_3 \to R_3 - R_2} \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$
So a basis of R(T) is
$\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} \right\},$
$\begin{bmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$
where a3 is the free component. We then express the other two components in
terms of a3 . We write,
a1 + 2a2 − a3 = 0
a2 + a3 = 0 (26)
Or,
a2 = −a3
a1 = 3a3 (27)
and so,
$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = a_3 \begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix} \qquad (28)$
That is, a basis of N(T) is
$\left\{ \begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix} \right\}. \qquad (29)$
Figure 18: R(T ) is a plane containing the two basis vectors and N (T ) is a line
containing its basis.
the mapped value would be the origin. Any other vectors in V will stay on the
plane (hatched, see Fig. 18).
3.6 Isomorphism
When T : V → U is bijective, we call it an isomorphism. Moreover, T is an isomorphism
if
1. T is nonsingular
2. dim V = dim U
Proof: To show isomorphism, we need to prove that the mapping is bijective, i.e.,
T is both injective and surjective.
T is injective:
We want to show T v 1 = T v 2 =⇒ v 1 = v 2 , where v 1 , v 2 ∈ V. Let us start with
T v1 = T v2,
T v 1 − T v 2 = 0,
T (v 1 − v 2 ) = 0
As, T is nonsingular, T (v 1 − v 2 ) = 0 means (v 1 − v 2 ) = 0 or v 1 = v 2 . So, T is
injective.
T is surjective:
We need to show that R(T ) = U. As T is nonsingular, then N (T ) = {0}, and
Rank T + Null T = dim V
gives
dim R(T ) + 0 = dim V = dim U (given).
As, dim R(T ) = dim U, and R(T ) ⊆ U then R(T ) = U, i.e., T is surjective. So,
T is injective and surjective, i.e., bijective.
3.7 Invertible T :
When T : V → U is an isomorphism, then T is invertible. We write,
T −1 : U → V
and if T v = u, then T −1 u = v. When T is invertible, it is also nonsingular and
dim V = dim U.
T : V → U.
$T g_j = T_{ij}\, f_i \qquad (31)$
i.e., each T g_j is expressed as a linear combination of the f_i. The scalars T_{1j}, · · · , T_{mj} are the components of T g_j with respect to the basis
B_U. The n × m matrix [T]_{B_V − B_U}, or simply [T] (or [T_{ij}]), is the matrix representation
of T relative to the pair of bases B_V and B_U. If v = v_1 g_1 + · · · + v_n g_n ∈ V, then
u = Tv = T (vj g j )
= vj (T g j )
= vj Tij f i
ui f i = (Tij vj )f i
Or we write,
ui = Tij vj
and we have
β1 + β2 + β3 = 2
β1 + β2 = −3
β1 = 3
Or
β1 = T12 = 3, β2 = T22 = −6, β3 = T32 = 5
Finally,
$T \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \qquad (38)$
Now we want to express (0, 1, 3)t as a linear combination of the given basis. One
writes,
$\begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} = \gamma_1 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + \gamma_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + \gamma_3 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$
and we have
γ1 + γ2 + γ3 = 0
γ1 + γ2 = 1
γ1 = 3
Or
γ1 = T13 = 3, γ2 = T23 = −2, γ3 = T33 = 1.
So,
$[T]_B = \begin{bmatrix} 3 & -6 & 6 \\ 3 & -6 & 5 \\ 3 & -2 & 1 \end{bmatrix}$
Example 28 Consider a differential operator
D:V→V (39)
where V be the vector space of all polynomials
f (x) = a0 + a1 x + a2 x2 + a3 x3 , ai ∈ F.
Write the transformation matrix [D] relative to the basis BV and BR(D) .
D1 = 0 = 0 + 0x + 0x2
Dx = 1 = 1 + 0x + 0x2
D x2 = 2x = 0 + 2x + 0x2
D x3 = 3x2 = 0 + 0x + 3x2
So,
$[D]_{B_V - B_{R(D)}} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$
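A minimal numerical sketch of the same differential operator on the monomial basis {1, x, x², x³}; note that the notes arrange these numbers as a 4 × 3 array (the transposed convention), while the sketch below uses the "matrix acts on coefficient columns" convention, which is an assumption of mine for illustration only.

```python
import numpy as np

# Column j holds the coefficients of D applied to the j-th basis monomial,
# so D_mat @ (coefficients of f) gives the coefficients of f'.
D_mat = np.array([[0., 1., 0., 0.],   # D 1 = 0,  D x = 1
                  [0., 0., 2., 0.],   # D x^2 = 2x
                  [0., 0., 0., 3.]])  # D x^3 = 3x^2

f = np.array([5., -1., 4., 2.])       # f(x) = 5 - x + 4x^2 + 2x^3
print(D_mat @ f)                      # [-1. 8. 6.] -> f'(x) = -1 + 8x + 6x^2
```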
So, cT ∈ Hom (V, U). One can show that the other laws for the vector space are
satisfied.
Lemma 4.1
T 2 ◦ T 1 (α1 v 1 + α2 v 2 ) = T 2 (T 1 (α1 v 1 + α2 v 2 ))
= T 2 (α1 T 1 v 1 + α2 T 1 v 2 )
= α1 T 2 T 1 v 1 + α2 T 2 T 1 v 2
= α1 (T 2 ◦ T 1 )v 1 + α2 (T 2 ◦ T 1 )v 2
So, T 2 ◦ T 1 ∈ Hom (V, W), and we simply write the product T 2 T 1 , i.e, (T 2 T 1 )v =
T 2 (T 1 v). Commutative diagrams are very useful to represent composition map-
pings. We can represent the above mentioned composition mapping as
V --T₁--> U and U --T₂--> W, with the composition T₂ ∘ T₁ = T₂T₁ : V → W. (40)
V --T--> U together with the identity maps I_V : V → V and I_U : U → U, (41)
which satisfy
I_U T = T = T I_V (42)
V --T--> U and U --T⁻¹--> V, with T⁻¹T = I_V and T T⁻¹ = I_U. (43)
T −1 u1 = T −1 u2
T T −1 u1 = T T −1 u2
I U u1 = I U u2
u1 = u2 .
V --T--> U and U --S--> V, with S T = I_V and T S = I_U. (44)
From the commutative diagram (44), we can directly write S = T −1 . We can also
check it in the following way.
Similarly,
V --S⁻¹--> U and U --T⁻¹--> V, with T⁻¹S⁻¹ = I_V and S⁻¹T⁻¹ = I_U. (45)
i (A + B)v = Av + Bv
ii (αA)v = αAv
Try yourself to prove the other relations! Note that, as V = U, we can write from
(42)
IT = T = T I (46)
Lemma 4.7 From Theorem 4.3, one can write for an algebraic structure Hom (V, V),
T −1 T = I = T T −1 (47)
Then ϕ ∈ Hom (V, R). The notation ϕ⟨v⟩ is the same as T v; we are just writing ϕ⟨v⟩
instead of ϕv. Many researchers use ϕ(v) as well. Note that
ϕi ⟨g j ⟩ = δij . (48)
α1 ϕ1 + α2 ϕ2 + · · · + αn ϕn = 0
Now,
(α1 ϕ1 + α2 ϕ2 + · · · + αn ϕn )⟨g j ⟩ = 0⟨g j ⟩ = 0
α1 ϕ1 ⟨g j ⟩ + α2 ϕ2 ⟨g j ⟩ + · · · + αn ϕn ⟨g j ⟩ = 0
αi ϕi ⟨g j ⟩ = 0
αi δij = 0
=⇒ αj = 0 for j = 1, · · · , n
As the set {ϕ1 , . . . , ϕn } is linearly independent, and dim V ∗ = n, then {ϕ1 , . . . , ϕn }
is a basis of V ∗ . This basis, BV ∗ is called the dual basis of BV . Such a dual basis
is unique (we are not proving its uniqueness). It is standard practice to denote
ϕ as g ∗ , so we rename and rewrite the dual basis as
BV ∗ = {g ∗1 , . . . , g ∗n }, and so g ∗i ⟨g j ⟩ = δij (49)
Let v ∗ ∈ V ∗ , and we write
v ∗ = c1 g ∗1 + c2 g ∗2 + · · · + cn g ∗n
v ∗ ⟨g i ⟩ = ci
=⇒ v ∗ = {v ∗ ⟨g i ⟩}g ∗i
Similarly, if v ∈ V, then
v = c1 g 1 + c2 g 2 + · · · + cn g n
g ∗i ⟨v⟩ = ci
=⇒ v = {g ∗i ⟨v⟩}g i
V ∗∗ := Hom (V ∗ , R).
$g^{**}_i \langle g^{*j} \rangle = \delta_i^j. \qquad (50)$
$g^{**}_i \langle g^{*j} \rangle = g^{*j} \langle g_i \rangle. \qquad (51)$
The above relation naturally connects with g ∗∗i to g i , and for a finite dimensional
space, a natural isomorphism is established between V, and V ∗∗ . Such an isomor-
phism is also known as canonical isomorphism. It could be shown that in such a
scenario the double dual space becomes exactly same as the original space, i.e.,
$\mathcal{V}^{**} = \mathcal{V}$. We then identify $g^{**}_i = g_i$, and simply write
$g^{*i} \equiv g^i. \qquad (53)$
and
Ans: Let v ∈ V, and our goal is to find g¹⟨v⟩ and g²⟨v⟩, i.e., the explicit form of
the mapping or function. Let us consider the standard basis {e1 , e2 } ⊂ V. Then,
If we know g 1 ⟨e1 ⟩ and g 2 ⟨e2 ⟩, then the explicit form for g 1 ⟨v⟩ is known. Consid-
ering the properties of (48) one can write
Solving the two equations we get g 1 ⟨e1 ⟩ = −1, and g 1 ⟨e2 ⟩ = 3. Then
Similarly,
Solving the above two equations we get g 2 ⟨e1 ⟩ = 1, and g 2 ⟨e2 ⟩ = −2. Then
g 2 ⟨v⟩ = v1 − 2v2
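The primal basis of this worked example is not visible in the surviving text; assuming g₁ = (2, 1) and g₂ = (3, 1), which reproduce the stated values g¹⟨e₁⟩ = −1, g¹⟨e₂⟩ = 3, g²⟨e₁⟩ = 1, g²⟨e₂⟩ = −2, the dual basis can be computed numerically as the rows of the inverse of the basis matrix:

```python
import numpy as np

g1 = np.array([2.0, 1.0])          # assumed primal basis vectors
g2 = np.array([3.0, 1.0])

G = np.column_stack([g1, g2])      # columns are the basis vectors
G_dual = np.linalg.inv(G)          # rows are the dual covectors g^1, g^2

print(G_dual)                      # [[-1.  3.] [ 1. -2.]] -> g^1<v> = -v1 + 3v2, g^2<v> = v1 - 2v2
print(G_dual @ G)                  # identity: g^i<g_j> = delta^i_j
```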
5.3 Transposition
Let v ∈ V and {g i } be a basis of V. Let v ∗ ∈ V ∗ , and {g i } be the dual basis in
V ∗ (Fig. 23). Then,
$v = v^i g_i$ and $v^* = v_i g^i$
$u = u^i f_i$ and $u^* = u_i f^i$
We define,
The extension of this definition to functionals of more than two variables are
simple, and such functions are called multilinear functionals.
is bilinear.
Proof: Consider,
Similarly,
Therefore, L is bilinear. There are also other bilinear functionals. However, this
particular form is our further interest to explore tensors. All such real valued
bilinear functionals (V × U → R) again form a vector space (see Fig. 24) when the
following rules further added:
Linearly independent
Consider
cij g i ⊗ f j = 0 ⊗ 0.
Applying the defined bilinear mapping at left hand side, we write
$c_{ij}\, g^i \otimes f^j \langle g_m, f_n \rangle = 0$
$c_{ij}\, [g^i \langle g_m \rangle][f^j \langle f_n \rangle] = 0$
$c_{ij}\, [\delta^i_m][\delta^j_n] = 0 \implies c_{mn} = 0$
So $g^i \otimes f^j$ are linearly independent.
v ⊗ u ⟨v ∗ , u∗ ⟩ = [v⟨v ∗ ⟩][u⟨u∗ ⟩]
= [v i g i ⟨vj g j ⟩][um f m ⟨un f n ⟩]
= [v i vj g i ⟨g j ⟩][um un f m ⟨f n ⟩]
= [v i vj δij ][um un δmn ]
= [vi v i ][um um ] (60)
Moreover,
g i ⊗ f j ⟨v ∗ , u∗ ) = [g i ⟨v ∗ ⟩][f j ⟨u∗ ⟩]
= [vi ][uj ]
= vi uj (61)
From (60) and (61), we finally write
v ⊗ u ⟨v ∗ , u∗ ⟩ = v i uj g i ⊗ f j ⟨v ∗ , u∗ ⟩
∴ v ⊗ u = v i uj g i ⊗ f j . (62)
and
V × U ∗ → R.
The collection of all such bilinear functionals generate the vector space V ∗ ⊗ U. An
element v ∗ ⊗ u ∈ V ∗ ⊗ U could be then expressed as
v ∗ ⊗ u = vi uj g i ⊗ f j
v ⊗ u : U ∗ → V.
such that
Linearly independent
Consider cij (g i ⊗ f j ) = 0 ⊗ 0 ∈ Hom (U ∗ , V). Then
cij (g i ⊗ f j )f m = (0 ⊗ 0)f m
cij [f j ⟨f m ⟩]g i = 0
cij δjm g i = 0
cim g i = 0
v ⊗ u = v i uj g i ⊗ f j . (63)
and so gi ⊗ f j spans the space Hom (U ∗ , V). Comparing (62) and (63) we see that
the two spaces V ⊗ U and Hom (U ∗ , V) share the same element, same basis and
also the same dimension. We thus identify these two spaces are equal. i.e,
v ⊗ u ∈ V ⊗ U = Hom (U ∗ , V).
Similarly,
Finally,
Definition 6.4 For any vectors v ∗ ∈ V ∗ and u ∈ U, the tensor product v ∗ ⊗ u is
defined as a homomorphism
v∗ ⊗ u : U ∗ → V ∗
such that if w∗ ∈ U ∗ , then
(v ∗ ⊗ u)w∗ = (u⟨w∗ ⟩)v ∗
and
v ∗ ⊗ u ∈ V ∗ ⊗ U = Hom (U ∗ , V ∗ ).
We can extend this to the higher order spaces.
Theorem 6.6 Consider the basis {g i } ∈ V and {f i } ∈ U. Then {g i ⊗ f j } is
a basis of V ⊗ U = Hom (U ∗ , V). Then any T ∈ V ⊗ U could be expressed as
T = T ij g i ⊗ f j .
As g i ⊗ f j is a basis of Hom (U ∗ , V), any T ∈ Hom (U ∗ , V) can be expressed as
T = T ij g i ⊗ f j . In summary,
T = T ij g i ⊗ f j ∈ V ⊗ U ≡ Hom (U ∗ , V)
T̂ = T̂ij g i ⊗ f j ∈ V ∗ ⊗ U ∗ ≡ Hom (U, V ∗ )
T̆ = T̆i·j g i ⊗ f j ∈ V ∗ ⊗ U ≡ Hom (U ∗ , V ∗ )
Ť = Ť·ji g i ⊗ f j ∈ V ⊗ U ∗ ≡ Hom (U, V)
Note that Ti·j means i is the first index. A dot (·) is used to denote the second
index. T ij , T̂ij , T̆i·j , and Ť·ji are the components with respect to the associated
basis. Also note that Tj·i ̸= T·ji .
Example 30 Let u ⊗ v ∈ U ⊗ V, and v ∗ ⊗ w ∈ V ∗ ⊗ W. Then
(u ⊗ v)(v ∗ ⊗ w) = (u ⊗ w)[v⟨v ∗ ⟩] (64)
Ans: Note that the left hand side is the composition mapping or product of two
homomorphisms. Consider the following commutative diagram,
W* --(v*⊗w)--> V* --(u⊗v)--> U, with (u ⊗ v) ∘ (v* ⊗ w) = (u ⊗ v)(v* ⊗ w). (65)
If p∗ ∈ W ∗ , then
T = Tij f i ⊗ g j ∈ U ∗ ⊗ V ∗ . (66)
As,
T : V → U ∗,
T g 1 = T11 f 1 + T21 f 2
T g 2 = T12 f 1 + T22 f 2 .
T (v j g j ) = um f m
v j (T g j ) = um f m
v j Tmj f m = um f m
=⇒ v j Tmj = um (68)
So,
u1 = T11 v 1 + T12 v 2
u2 = T21 v 1 + T22 v 2
In matrix notation,
$\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \end{bmatrix}$
Now,
u1 v1 u1 v2 v 1 u1 (v1 v 1 + v2 v 2 ) u
∗ ∗
[(u ⊗ v )v]j = 2 = 1 2 = [v ⟨v⟩] 1 .
∗
u2 v1 u2 v2 v u2 (v1 v + v2 v ) u2
i.e., the component representation of (u∗ ⊗ v ∗ )v = [v ∗ ⟨v⟩]u.
A tensor of type (r, 0) is called contravariant tensor and one of type (0, s) is called
covariant tensor. Suppose {g i } and {g i } are the bases of V and V ∗ respectively,
then
g i1 ⊗ · · · ⊗ g ir ⊗ g j1 ⊗ · · · ⊗ g js
form a basis of Tsr . Therefore an (r, s)-type tensor T can be uniquely expressed as
$T = T^{i_1 \ldots i_r}_{\cdot\; j_1 \ldots j_s}\; g_{i_1} \otimes \cdots \otimes g_{i_r} \otimes g^{j_1} \otimes \cdots \otimes g^{j_s} \in \mathcal{T}^r_s$
Note that this is one particular format that is often useful: you have to start with the
covariant basis vectors, and the contravariant basis vectors come after them. You cannot have a
shuffling of covariant and contravariant basis vectors.
$T \otimes S \in \mathcal{T}^{r_1 + r_2}_{s_1 + s_2}$
This means if
$T = T^{i_1 \ldots i_{r_1}}_{\cdot\; j_1 \ldots j_{s_1}}\; g_{i_1} \otimes \cdots \otimes g_{i_{r_1}} \otimes g^{j_1} \otimes \cdots \otimes g^{j_{s_1}} \in \mathcal{T}^{r_1}_{s_1}$
and
$S = S^{m_1 \ldots m_{r_2}}_{\cdot\; n_1 \ldots n_{s_2}}\; g_{m_1} \otimes \cdots \otimes g_{m_{r_2}} \otimes g^{n_1} \otimes \cdots \otimes g^{n_{s_2}} \in \mathcal{T}^{r_2}_{s_2}$
then
$T \otimes S = T^{i_1 \ldots i_{r_1}}_{\cdot\; j_1 \ldots j_{s_1}}\, S^{m_1 \ldots m_{r_2}}_{\cdot\; n_1 \ldots n_{s_2}}\; g_{i_1} \otimes \cdots \otimes g_{i_{r_1}} \otimes g_{m_1} \otimes \cdots \otimes g_{m_{r_2}} \otimes g^{j_1} \otimes \cdots \otimes g^{j_{s_1}} \otimes g^{n_1} \otimes \cdots \otimes g^{n_{s_2}} \in \mathcal{T}^{r_1 + r_2}_{s_1 + s_2}.$
1. $A \otimes (B \otimes C) = (A \otimes B) \otimes C \in \mathcal{T}^{r_1 + r_2 + r_3}_{s_1 + s_2 + s_3}$
2. $A \otimes (B + C) = A \otimes B + A \otimes C \in \mathcal{T}^{r_1 + r_2}_{s_1 + s_2}$
3. For each α ∈ F:
$\alpha(A \otimes B) = \alpha A \otimes B = A \otimes \alpha B \in \mathcal{T}^{r_1 + r_2}_{s_1 + s_2}$
The vector space equipped with such an addition and multiplication is called tensor
algebra.
6.6 Contraction
Consider tensors of type Tsr or (r, s) such that r, s ≥ 1.
such that a tensor space $\mathcal{T}^r_s$ becomes $\mathcal{T}^{r-1}_{s-1}$. For example, let us consider the tensor
$T = T^{mp}_{\cdot\; k}\; g_m \otimes g_p \otimes g^k \in \mathcal{T}^2_1$
Note that:
1. Every contraction of a tensor removes one contravariance and one covariance.
such that a tensor space $\mathcal{T}^r_s$ becomes $\mathcal{T}^{r-2}_{s-2}$. The symbol ':' is used when two
contractions take place. Consider,
$T = T^{mp}_{\cdot\; kl}\; g_m \otimes g_p \otimes g^k \otimes g^l \in \mathcal{T}^2_2$
Then
$: T = T^{mp}_{\cdot\; kl}\, [g_m \langle g^k \rangle][g_p \langle g^l \rangle] = T^{mp}_{\cdot\; kl}\, [\delta^k_m][\delta^l_p] = T^{kl}_{\cdot\; kl} \in \mathcal{T}^0_0$
An n-contraction is a mapping such that a tensor space $\mathcal{T}^r_s$ becomes $\mathcal{T}^{r-n}_{s-n}$. Unfortunately, the symbol '•' (or
sometimes ':') is used most of the time beyond double contraction; you need to
understand it from the context.
Single contraction between two tensors is defined in the following way. First
we will consider, for simplicity, a pure contravariant tensor $(r_1, 0)$ and a pure covariant
tensor $(0, s_2)$ such that
$[\cdot] : \mathcal{T}^{r_1}_0 \times \mathcal{T}^0_{s_2} \to \mathcal{T}^{r_1 - 1}_{s_2 - 1}$
A single dot '·' is used for a single contraction. For example, if $T = T^{ijk}\, g_i \otimes g_j \otimes g_k$
and $S = S_{qr}\, g^q \otimes g^r$, then a single contraction between $r$ and $k$ (starting from the extreme
right) is given by
$T \cdot S = (T^{ijk}\, g_i \otimes g_j \otimes g_k) \cdot (S_{qr}\, g^q \otimes g^r)$
$= T^{ijk} S_{qr}\; g_i \otimes g_j \otimes g^q\, [g_k \langle g^r \rangle]$
$= T^{ijk} S_{qr}\; g_i \otimes g_j \otimes g^q\, [\delta^r_k]$
$= T^{ijk} S_{qk}\; g_i \otimes g_j \otimes g^q$
If nothing is mentioned, it is a standard practice to start from the last member.
A double contraction between a pure contravariant $(r_1, 0)$ and a pure covariant
tensor $(0, s_2)$ is such that
$[:] : \mathcal{T}^{r_1}_0 \times \mathcal{T}^0_{s_2} \to \mathcal{T}^{r_1 - 2}_{s_2 - 2}$
Continuing the previous example we write
$T : S = (T^{ijk}\, g_i \otimes g_j \otimes g_k) : (S_{qr}\, g^q \otimes g^r)$
$= T^{ijk} S_{qr}\; g_i\, [g_j \langle g^q \rangle][g_k \langle g^r \rangle]$
$= T^{ijk} S_{qr}\; g_i\, [\delta^q_j][\delta^r_k]$
$= T^{ijk} S_{jk}\; g_i$
A generalized full contraction is given by
$[\bullet] : \mathcal{T}^{r_1}_0 \times \mathcal{T}^0_{s_2} \to \mathcal{T}^{r_1 - s_2}_0 \text{ if } r_1 > s_2, \quad \text{or} \quad \mathcal{T}^0_{s_2 - r_1} \text{ if } s_2 > r_1.$
If $T = T^{ijkl}\, g_i \otimes g_j \otimes g_k \otimes g_l$ and $S = S_{qr}\, g^q \otimes g^r$, then
$T \bullet S = (T^{ijkl}\, g_i \otimes g_j \otimes g_k \otimes g_l) \bullet (S_{qr}\, g^q \otimes g^r)$
$= T^{ijkl} S_{qr}\; g_i \otimes g_j\, [g_k \langle g^q \rangle][g_l \langle g^r \rangle]$
$= T^{ijkl} S_{qr}\; g_i \otimes g_j\, [\delta^q_k][\delta^r_l]$
$= T^{ijkl} S_{kl}\; g_i \otimes g_j$
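A small numerical sketch of these contractions with einsum; it assumes an orthonormal (Cartesian) basis so index position can be ignored, and the component arrays are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))        # components T^{ijk}
S = rng.standard_normal((3, 3))           # components S_{qr}

single = np.einsum('ijk,qk->ijq', T, S)   # T . S : contract the last index of T with the last of S
double = np.einsum('ijk,jk->i',  T, S)    # T : S : contract the last two index pairs
print(single.shape, double.shape)         # (3, 3, 3) and (3,): one and two orders lower in total
```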
The following double contraction is equivalent to a bilinear functional of a tensor
product.
u ⊗ v : u∗ ⊗ v ∗ = u ⊗ v ⟨u∗ , v ∗ ⟩ = [u⟨u∗ ⟩][v⟨v ∗ ⟩].
(u ⊗ v ⊗ w ⊗ x ⊗ y)(a∗ ⊗ b∗ ⊗ c∗ ) = u ⊗ v ⊗ w ⊗ x ⊗ b∗ ⊗ c∗ [y⟨a∗ ⟩]
g :V ×V →R
1. g⟨u, v⟩ is bilinear,
2. g⟨u, v⟩ = g⟨v, u⟩ (symmetric),
3. g⟨u, u⟩ > 0, if u ̸= 0.
g = gij g i ⊗ g j
Now,
g⟨g i , g j ⟩ = gmn g m ⊗ g n ⟨g i , g j ⟩
= gmn [g m ⟨g i ⟩][g n ⟨g j ⟩]
= gmn δim δjn
= gij (69)
Moreover,
g⟨g i , g j ⟩ = g⟨g j , g i ⟩
=⇒ gij = gji (70)
g⟨g i , g j ⟩ = g i · g j = gij
and
gi · gj = gj · gi.
A vector space V equipped with the inner product is called an inner product space.
We will denote such space as ·V. By including the inner product, we have an
alternative of the dual space V ∗ to produce scalars. However, dual space carries
functionals where as the inner product space carries the vectors!
g is known as metric tensor or fundamental tensor. gij is called the components
of the metric tensor or fundamental tensor as it is directly linked with the measure
of distance in a generalized way.
g i · g j = δji
where δji is called the Kronecker delta. The basis {g 1 , g 2 , · · · , g n } spans the same
vector space V again. Such a basis is known as reciprocal basis.
From this construction, if v = v i g i ∈ V, then by taking the inner product with
i
g we have
v · g i = (v j g j ) · g i = v j (g j · g i ) = v j δji = v i .
uj = gij ui .
u = ui g i = ui g i .
We call
and
g⟨g i , g j ⟩ = g ij = g i · g j .
Then
uj = g ij ui .
$g_{ij} : u^j \mapsto u_i, \qquad g^{ij} : u_j \mapsto u^i$
g i = gij g j , or g i = g ij g j .
Therefore, lowering or raising the index for the reciprocal basis can be made in
the same manner. Note that
$g^k \cdot g_i = g_{ij}\; g^k \cdot g^j$
$\delta^k_i = g_{ij}\, g^{kj} = g_{ij}\, g^{jk}.$
This means [g ij ] = [gij ]−1 . In a tensor space with inner product, for a, b, c,
d, x, y ∈ ·V we write
1. (a ⊗ b)c = (b · c)a
2. (a ⊗ b) : (c ⊗ d) = (a · c)(b · d)
3. (a ⊗ b)(c ⊗ d) = (a ⊗ d)(b · c)
4. (a ⊗ b ⊗ c)d = (a ⊗ b)(c · d)
5. (a ⊗ b ⊗ c) : x ⊗ y = a(b · x)(c · y)
Note that, all the dual space operations are now represented by the dot · instead
of ⟨ ⟩.
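A short numerical sketch of the metric, the reciprocal basis, and raising/lowering of indices; the non-orthogonal basis below is an arbitrary illustrative choice of mine:

```python
import numpy as np

g1 = np.array([1.0, 0.0])
g2 = np.array([1.0, 1.0])
G = np.column_stack([g1, g2])

g_co = G.T @ G                    # metric components g_ij = g_i . g_j
g_contra = np.linalg.inv(g_co)    # g^ij, since [g^ij] = [g_ij]^(-1)

u_contra = np.array([2.0, -1.0])  # components u^i
u_co = g_co @ u_contra            # lowering: u_j = g_ij u^i
print(np.allclose(g_contra @ u_co, u_contra))   # raising recovers u^i

# Reciprocal basis g^i = g^ij g_j (columns of G_recip) satisfies g^i . g_j = delta^i_j
G_recip = G @ g_contra
print(np.allclose(G_recip.T @ G, np.eye(2)))
```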
For T ∈ ·V ⊗ ·V, we can write it in four possible ways:
All the components T ij , Tij , Ti·j , and T·ji are different. T ij , Ti·j , and T·ji are also
called associate tensor components of Tij .If we consider T = T ij g i ⊗ g j , then
T g k = T ij (g i ⊗ g j )g k = T ij (g j · g k )g i = T ij δjk g i = T ik g i
=⇒ g m · T g k = g m · T ik g i = T ik g m · g i = T ik δim = T mk
In summary,
Now,
g i · T g j = T ij , (g im g m ) · T (g jn g n ) = T ij , g im g jn (g m · T g n ) = T ij
=⇒ g im g jn Tmn = T ij .
Similarly, knowing the components with respect to a basis, one can directly cal-
culate components with respect to other basis through gij and g ij . We also have
four versions of g s:
$g = g^{ij}\, g_i \otimes g_j$
$= g_{ij}\, g^i \otimes g^j$
$= \delta_i^{\;j}\, g^i \otimes g_j = g^i \otimes g_i$
$= \delta^i_{\;j}\, g_i \otimes g^j = g_i \otimes g^i$
If you operate g on any vector $v^i g_i$ or $v_i g^i$, you get that vector back, i.e., $gv = v$.
So g is indeed an identity tensor!
A vector space equipped with such a norm is called Euclidean vector space.
The notion of transpose is similar to what we studied in the dual space. How-
ever, we have only one vector space ·V. Consider,
T : ·V → ·V
such that v and T v belong to ·V. Let a and u be any other two vectors such
that v · a = T v · u. Then the transpose is the mapping T ᵗu = a, and it satisfies the
relation (Fig. 25)
v · T ᵗu = T v · u. (71)
r = x1 e1 + x2 e2 + x3 e3
The correspondence between points (x1 , x2 , x3 ) and the three scalar parameters
(u1 , u2 , u3 ) is given by mappings of the form xi = ϕi (u1 , u2 , u3 ).
We assume that the mappings are bijective and have continuous derivatives, so
that the correspondence between (x1 , x2 , x3 ) and (u1 , u2 , u3 ) is unique.
If we only vary one local coordinate, say u1 , while the other two local coordi-
nates, (u2 , u3 ), are fixed, then we call the locus of the points (x1 , x2 , x3 ) a coordinate
line. So
are the three coordinate lines (Fig. 26). If the coordinate lines are straight,
we call it a rectilinear coordinate system; otherwise it is a curvilinear coordinate system.
Consider a point P ′ on the coordinate line u1 , obtained by a small change δu1 .
Then the position vector of the new point with respect to the standard basis could
be written as
P P ′ = OP ′ − OP
= ϕ1 (u1 + δu1 , c2 , c3 )e1 + ϕ2 (u1 + δu1 , c2 , c3 )e2 + ϕ3 (u1 + δu1 , c2 , c3 )e3
$\{g_1, g_2, g_3\} := \left\{ \dfrac{\partial r}{\partial u^1}, \dfrac{\partial r}{\partial u^2}, \dfrac{\partial r}{\partial u^3} \right\} \qquad (72)$
are known as the covariant basis or natural basis (Fig. 28). As $r = r(u^1, u^2, u^3)$,
we write
$dr(u^1, u^2, u^3) = \dfrac{\partial r}{\partial u^1}\, du^1 + \dfrac{\partial r}{\partial u^2}\, du^2 + \dfrac{\partial r}{\partial u^3}\, du^3 = du^1\, g_1 + du^2\, g_2 + du^3\, g_3. \qquad (74)$
So,
or
Here ds is the differential arc-length distance along dr with respect to the local
coordinates {u1 , u2 , u3 }. Recall that, gij (u1 , u2 , u3 ) is known as metric tensor or
fundamental tensor. It is associated with the measure of distance in a generalized
way.
Figure 29: Covariant or tangent basis vectors {g 1 , g 2 } for the polar coordinates.
Example 32 Find ds² for the polar coordinate system (Fig. 29).
We identify u¹ = r and u² = θ. Then, r = r cos θ e₁ + r sin θ e₂. Therefore,
$g_1 = \dfrac{\partial r}{\partial r} = \cos\theta\, e_1 + \sin\theta\, e_2$
$g_2 = \dfrac{\partial r}{\partial \theta} = -r\sin\theta\, e_1 + r\cos\theta\, e_2$
We compute
$\nabla \psi^3(x_1, x_2, x_3) = \dfrac{\partial \psi^3}{\partial x_1} e_1 + \dfrac{\partial \psi^3}{\partial x_2} e_2 + \dfrac{\partial \psi^3}{\partial x_3} e_3$
or
$g^3 = \dfrac{\partial u^3}{\partial x_1} e_1 + \dfrac{\partial u^3}{\partial x_2} e_2 + \dfrac{\partial u^3}{\partial x_3} e_3.$
Then g 3 = ∇u3 (x1 , x2 , x3 ) is perpendicular to the tangent plane 6 at P (Fig. 30).
(⁶ Recall that the gradient of a constant-value surface is normal to its tangent plane. Suppose we have a
surface xy³ = z + 2. To find the normal to the surface at (1, 1, −1), we consider f = xy³ − z = 2.
Then ∇f = y³e₁ + 3xy²e₂ − e₃, which evaluated at (1, 1, −1) gives e₁ + 3e₂ − e₃, the normal vector at that point.)
Now,
$g^3 \cdot g_3 = \left( \dfrac{\partial u^3}{\partial x_1} e_1 + \dfrac{\partial u^3}{\partial x_2} e_2 + \dfrac{\partial u^3}{\partial x_3} e_3 \right) \cdot \left( \dfrac{\partial x_1}{\partial u^3} e_1 + \dfrac{\partial x_2}{\partial u^3} e_2 + \dfrac{\partial x_3}{\partial u^3} e_3 \right)$
$= \dfrac{\partial u^3}{\partial x_1}\dfrac{\partial x_1}{\partial u^3} + \dfrac{\partial u^3}{\partial x_2}\dfrac{\partial x_2}{\partial u^3} + \dfrac{\partial u^3}{\partial x_3}\dfrac{\partial x_3}{\partial u^3} = \dfrac{\partial u^3}{\partial u^3} = 1$
Similarly,
$g^3 \cdot g_2 = \left( \dfrac{\partial u^3}{\partial x_1} e_1 + \dfrac{\partial u^3}{\partial x_2} e_2 + \dfrac{\partial u^3}{\partial x_3} e_3 \right) \cdot \left( \dfrac{\partial x_1}{\partial u^2} e_1 + \dfrac{\partial x_2}{\partial u^2} e_2 + \dfrac{\partial x_3}{\partial u^2} e_3 \right)$
$= \dfrac{\partial u^3}{\partial x_1}\dfrac{\partial x_1}{\partial u^2} + \dfrac{\partial u^3}{\partial x_2}\dfrac{\partial x_2}{\partial u^2} + \dfrac{\partial u^3}{\partial x_3}\dfrac{\partial x_3}{\partial u^2} = \dfrac{\partial u^3}{\partial u^2} = 0$
g i · g j = δji (76)
Example 33 Find the reciprocal basis {g 1 , g 2 } for the polar coordinates, as shown
in Fig. 29.
We first find
$r = \sqrt{(x_1)^2 + (x_2)^2}, \qquad \theta = \tan^{-1}\dfrac{x_2}{x_1}$
So,
$g^1 = \nabla r = \dfrac{\partial r}{\partial x_1} e_1 + \dfrac{\partial r}{\partial x_2} e_2 = \dfrac{x_1}{\sqrt{(x_1)^2 + (x_2)^2}} e_1 + \dfrac{x_2}{\sqrt{(x_1)^2 + (x_2)^2}} e_2 = \cos\theta\, e_1 + \sin\theta\, e_2$
and
$g^2 = \nabla\theta = \dfrac{\partial\theta}{\partial x_1} e_1 + \dfrac{\partial\theta}{\partial x_2} e_2 = -\dfrac{x_2}{(x_1)^2 + (x_2)^2} e_1 + \dfrac{x_1}{(x_1)^2 + (x_2)^2} e_2 = -\dfrac{\sin\theta}{r} e_1 + \dfrac{\cos\theta}{r} e_2$
Moreover, $g^1 \cdot g^1 = g^{11} = 1$, $g^1 \cdot g^2 = g^{12} = g^{21} = 0$, $g^2 \cdot g^2 = g^{22} = \dfrac{1}{r^2}$.
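A quick numerical check of Examples 32 and 33 at an arbitrary point of my choosing:

```python
import numpy as np

r, theta = 2.0, 0.7

# Covariant (tangent) basis and reciprocal basis for polar coordinates
g_1 = np.array([np.cos(theta), np.sin(theta)])
g_2 = np.array([-r * np.sin(theta), r * np.cos(theta)])
gr1 = np.array([np.cos(theta), np.sin(theta)])              # g^1 = grad r
gr2 = np.array([-np.sin(theta) / r, np.cos(theta) / r])     # g^2 = grad theta

G = np.column_stack([g_1, g_2])
R = np.column_stack([gr1, gr2])
print(np.allclose(R.T @ G, np.eye(2)))   # g^i . g_j = delta^i_j
print(G.T @ G)                           # metric diag(1, r^2), so ds^2 = dr^2 + r^2 dtheta^2
```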
Figure 32: Local coordinates in the reference (B0 ) and current configuration (Bt ).
X = X 1 e1 + X 2 e2 + X 3 e3
We then write,
1. $G_I = \dfrac{\partial X}{\partial U^I}$
2. $G^I = \nabla_X U^I$
We then write,
1. $g_i = \dfrac{\partial x}{\partial u^i}$
2. $g^i = \nabla_x u^i$
3. Covariant basis: $\{g_1, g_2, g_3\}$
4. Contravariant basis: $\{g^1, g^2, g^3\}$
We also call U^I the material coordinates and u^i the spatial coordinates. During
deformation, consider the following maps between the material and spatial coor-
dinates.
Or
uk = ϕk (U I , t). (77)
dX = dU I GI .
The point P ′ maps to the point p′ with the position vector x + dx, and we write
dx = duk g k .
Figure 33: A material line in the reference configuration passing through a point
P . At α = 0, we have the position vector of the point P , X = C(α = 0).
dx = F dX (79)
F : TX → Tx
Consider,
Figure 34: Set of all material line elements dX, belonging to the point P forms a
vector space, known as tangent space TX .
Figure 35: Change in length and angle between material line elements.
dx · dy − dX · dY = F dx · F dy − dX · dY
= F dX · F dY − dX · dY
= dX · F t F dY − dX · dY
= dX · (F t F − I)dY
dx · dy − dX · dY = dX · (C − I)dY
E : TX × TX → R (81)
such that,
E⟨dX, dY ⟩ = dX · 2EdY ∈ R
E represents the change in length and angle between material line elements. From
the bilinear mapping (81), we can say that E ∈ T∗X ⊗ T∗X , or rather in an inner
product space, $E \in T^{\#}_X \otimes T^{\#}_X$, where $T^{\#}_X$ is spanned by the reciprocal basis of
$T_X$. We then write
$E = E_{IJ}\, G^I \otimes G^J$
si · sj = δij
α1 s 1 + · · · + αn s n = 0
Taking dot product with sk , we write αk |sk |2 = 0. Note that k is not summed,
rather it is just one term. As |sk |2 = 1, we can further write αk = 0. So, Sm is
linearly independent.
Moreover, if v ∈ ·V, then v = αₖsₖ = (v · sₖ)sₖ (k = 1, …, n). So any arbitrary
vector in ·V is spanned by S.
Figure 36: Change in length and angle between material line elements.
4. Finally,
$h_n = g_n - \sum_{i=1}^{n-1} (g_n \cdot s_i)\, s_i, \qquad \text{and} \qquad s_n = \dfrac{h_n}{|h_n|}$
Then,
$h_2 = g_2 - (g_2 \cdot s_1)\, s_1 = \begin{bmatrix} 2 \\ 1 \\ -2 \end{bmatrix} - \dfrac{1}{\sqrt{2}}(2 + 1)\, \dfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ -1/2 \\ -2 \end{bmatrix}$
So,
$s_2 = \dfrac{h_2}{|h_2|} = \sqrt{2}\begin{bmatrix} 1/6 \\ -1/6 \\ -2/3 \end{bmatrix}.$
Finally,
$h_3 = g_3 - (g_3 \cdot s_1)\, s_1 - (g_3 \cdot s_2)\, s_2 = \begin{bmatrix} 10/9 \\ -10/9 \\ 5/9 \end{bmatrix},$
and
$s_3 = \dfrac{h_3}{|h_3|} = \begin{bmatrix} 2/3 \\ -2/3 \\ 1/3 \end{bmatrix}.$
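A minimal Gram–Schmidt sketch in code. The vectors g₁ = (1, 1, 0) and g₂ = (2, 1, −2) match the worked example; g₃ = (1, 0, 1) is only a guess to complete a basis, since the third input vector is not visible in the surviving text:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    ortho = []
    for g in vectors:
        h = g - sum(np.dot(g, s) * s for s in ortho)   # remove projections on earlier s_i
        ortho.append(h / np.linalg.norm(h))
    return ortho

g = [np.array([1.0, 1.0, 0.0]), np.array([2.0, 1.0, -2.0]), np.array([1.0, 0.0, 1.0])]
s = np.array(gram_schmidt(g))
print(np.round(s @ s.T, 10))    # identity matrix: s_i . s_j = delta_ij
```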
Thus W ⊥ is the set of all vectors in ·V which are orthogonal to every vector in W.
p · s = 0, q · s = 0.
Let α1 , α2 ∈ F. Then,
(α1 p + α2 q) · s = α1 p · s + α2 q · s = 0.
W ⊕ W ⊥ = ·V.
h ⊥ {f 1 , · · · , f m } and h ∈ W ⊥ .
Now,
m
X m
X
⊥
v =h+ (v · f i )f i , where h ∈ W , and (v · f i )f i ∈ W
i=1 i=1
W + W ⊥ = ·V. (82)
W ∩ W ⊥ = {0}. (83)
W ⊕ W ⊥ = ·V. (84)
v = w + w⊥ .
Ans: Let v ∈ R(T ). Then there exists a vector, say u, such that T u = v. Let
s ∈ N (T ) such that T s = 0. Now,
v · s = T u · s = u · T t s = u · T s = u · 0 = 0.
Then v ⊥ s, and so
Now,
From (85) and (86), we write dim R(T ) = dim N (T )⊥ . Moreover, R(T ) ⊆
N (T )⊥ . So we conclude R(T ) = N (T )⊥ .
$\mathrm{Sym}\,(v_1 \otimes \cdots \otimes v_r) = \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(r)}. \qquad (87)$
where the sum is taken over the r! permutations of the integers 1, ..., r. If we write
v 1 ⊗ · · · ⊗ v r = v1i1 · · · vrir g i1 ⊗ · · · ⊗ g ir ,
then
$\mathrm{Sym}\,(v_1 \otimes \cdots \otimes v_r) = \left[ \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} v_1^{i_{\sigma(1)}} \cdots v_r^{i_{\sigma(r)}} \right] g_{i_1} \otimes \cdots \otimes g_{i_r} \qquad (88)$
T = T i1 i2 ...ir g i1 ⊗ · · · ⊗ g ir
then
$\mathrm{Sym}\, T = \left[ \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} T^{i_{\sigma(1)} \ldots i_{\sigma(r)}} \right] g_{i_1} \otimes \cdots \otimes g_{i_r}. \qquad (89)$
that by exchanging one contravariant position and one covariant position, the new
tensor does not belong to Tsr space anymore. Such a situation does not arise for
a pure contravariant and a pure covariant space, i.e., for Tor or Tr0 . Symmetry
operation is not defined for mixed tensors.
1 2
σ(1) = 1 σ(2) = 2
σ(1) = 2 σ(2) = 1
where the top row is the natural order. From (87) we write
$\mathrm{Sym}\,(v_1 \otimes v_2) = \dfrac{1}{2!} \sum_{\sigma \in \mathcal{P}(2)} v_{\sigma(1)} \otimes v_{\sigma(2)} = \dfrac{1}{2}(v_1 \otimes v_2 + v_2 \otimes v_1)$
If we express $v_1 = v_1^{i_1} g_{i_1}$ and $v_2 = v_2^{i_2} g_{i_2}$, then from (88) we write
$\mathrm{Sym}\,(v_1 \otimes v_2) = \left[ \dfrac{1}{2!} \sum_{\sigma \in \mathcal{P}(2)} v_1^{i_{\sigma(1)}} v_2^{i_{\sigma(2)}} \right] g_{i_1} \otimes g_{i_2} = \left[ \dfrac{1}{2!} (v_1^{i_1} v_2^{i_2} + v_1^{i_2} v_2^{i_1}) \right] g_{i_1} \otimes g_{i_2}$
1 2 3
σ(1) = 1 σ(2) = 2 σ(3) = 3
σ(1) = 1 σ(2) = 3 σ(3) = 2
σ(1) = 2 σ(2) = 1 σ(3) = 3
σ(1) = 2 σ(2) = 3 σ(3) = 1
σ(1) = 3 σ(2) = 2 σ(3) = 1
σ(1) = 3 σ(2) = 1 σ(3) = 2
where the top row is the natural order. From (87) we write
$\mathrm{Sym}\,(v_1 \otimes v_2 \otimes v_3) = \dfrac{1}{3!} \sum_{\sigma \in \mathcal{P}(3)} v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes v_{\sigma(3)}$
$= \dfrac{1}{6}(v_1 \otimes v_2 \otimes v_3 + v_1 \otimes v_3 \otimes v_2 + v_2 \otimes v_1 \otimes v_3 + v_2 \otimes v_3 \otimes v_1 + v_3 \otimes v_2 \otimes v_1 + v_3 \otimes v_1 \otimes v_2)$
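A small sketch of the symmetrization operation on component arrays, averaging over all index permutations (arbitrary illustrative data):

```python
import itertools
import numpy as np

def sym(T):
    """Symmetrize a tensor of components T^{i1...ir} over all r! index permutations."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

T = np.random.default_rng(2).standard_normal((3, 3, 3))
S = sym(T)
print(np.allclose(S, np.transpose(S, (1, 0, 2))))   # symmetric in any index pair
print(np.allclose(sym(S), S))                       # Sym is a projection: Sym(Sym T) = Sym T
```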
So, we have an algebra of symmetric tensors. The first point shows that the algebra
is commutative.
$\mathrm{Skw}\,(v_1 \otimes \cdots \otimes v_r) = \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} \mathrm{sgn}\,\sigma\; v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(r)}. \qquad (90)$
where the sum is taken over the r! permutations of the integers 1, ..., r. Here
T ∈ T0r and v 1 , . . . , v r ∈ V. If we write
v 1 ⊗ · · · ⊗ v r = v1i1 · · · vrir g i1 ⊗ · · · ⊗ g ir ,
then
$\mathrm{Skw}\,(v_1 \otimes \cdots \otimes v_r) = \left[ \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} \mathrm{sgn}\,\sigma\; v_1^{i_{\sigma(1)}} \cdots v_r^{i_{\sigma(r)}} \right] g_{i_1} \otimes \cdots \otimes g_{i_r} \qquad (91)$
If we consider
T = T i1 i2 ...ir g i1 ⊗ · · · ⊗ g ir ,
we can then immediately write
$\mathrm{Skw}\, T = \left[ \dfrac{1}{r!} \sum_{\sigma \in \mathcal{P}(r)} \mathrm{sgn}\,\sigma\; T^{i_{\sigma(1)} \ldots i_{\sigma(r)}} \right] g_{i_1} \otimes \cdots \otimes g_{i_r}. \qquad (92)$
where the top row is the natural order. From (90) we write
$\mathrm{Skw}\,(v_1 \otimes v_2 \otimes v_3) = \dfrac{1}{3!} \sum_{\sigma \in \mathcal{P}(3)} \mathrm{sgn}\,\sigma\; v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes v_{\sigma(3)}$
$= \dfrac{1}{6}(v_1 \otimes v_2 \otimes v_3 - v_1 \otimes v_3 \otimes v_2 - v_2 \otimes v_1 \otimes v_3 + v_2 \otimes v_3 \otimes v_1 - v_3 \otimes v_2 \otimes v_1 + v_3 \otimes v_1 \otimes v_2)$
1. Note that by exchanging any two positions, we get a negative sign out.
2. Repeating any two vectors twice always gives zero! This is true for any order.
For example, Skw (v 1 ⊗ a ⊗ v 3 ⊗ a) = 0 as a is repeated twice.
$T \wedge S = \dfrac{(p+q)!}{p!\, q!}\, \mathrm{Skw}\,(T \otimes S)$
$T \wedge S = v_1 \wedge v_2 = \dfrac{(1+1)!}{1!\,1!}\, \mathrm{Skw}\,(v_1 \otimes v_2) = 2!\, \dfrac{1}{2!} \sum_{\sigma \in \mathcal{P}(2)} \mathrm{sgn}\,\sigma\; v_{\sigma(1)} \otimes v_{\sigma(2)} = \sum_{\sigma \in \mathcal{P}(2)} \mathrm{sgn}\,\sigma\; v_{\sigma(1)} \otimes v_{\sigma(2)} = v_1 \otimes v_2 - v_2 \otimes v_1$
v 1 ∧ v 2 = −v 2 ∧ v 1 . (93)
$v_1 \wedge v_2 \wedge \ldots \wedge v_p = p!\, \mathrm{Skw}\,(v_1 \otimes v_2 \otimes \cdots \otimes v_p) = \sum_{\sigma \in \mathcal{P}(p)} \mathrm{sgn}\,\sigma\; v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes \cdots \otimes v_{\sigma(p)} \in \Lambda^p \qquad (94)$
1. Associativity: (A ∧ B) ∧ C = A ∧ (B ∧ C)
A ∧ B = (−1)pq B ∧ A.
In particular, ∀a, b ∈ Λ1 , a ∧ b = −b ∧ a.
3. Distributivity: (A + B) ∧ C = A ∧ C + B ∧ C.
8.5 Basis of Λk
Case-1: Suppose T = c ∈ Λ0 , i.e., a scalar, and the basis is {1}
Case-2: Suppose T = v 1 ∈ Λ1 , i.e., a covector or 1-form and v 1 = vi11 g i1 .
Obviously {g i1 } is a basis of Λ1 .
Case-3: Suppose $T = v^1 \wedge v^2 \in \Lambda^2$, i.e., a 2-form, with $v^1 = v^1_{i_1} g^{i_1}$ and $v^2 = v^2_{i_2} g^{i_2}$. Now
$v^1 \wedge v^2 = v^1_{i_1} g^{i_1} \wedge v^2_{i_2} g^{i_2}$
$= (v^1_1 g^1 + v^1_2 g^2 + v^1_3 g^3) \wedge (v^2_1 g^1 + v^2_2 g^2 + v^2_3 g^3)$ (assuming dim ·V = 3)
$= (v^1_1 v^2_2 - v^1_2 v^2_1)\, g^1 \wedge g^2 + (v^1_2 v^2_3 - v^1_3 v^2_2)\, g^2 \wedge g^3 + (v^1_1 v^2_3 - v^1_3 v^2_1)\, g^1 \wedge g^3 \qquad (97)$
C1 g 1 ∧ g 2 + C2 g 2 ∧ g 3 + C3 g 1 ∧ g 3 = 0
C1 (g 1 ∧ g 2 )g 2 + C2 (g 2 ∧ g 3 )g 2 + C3 (g 1 ∧ g 3 )g 2 = 0 (98)
Note that,
(g 1 ∧ g 2 )g 2 = (g 1 ⊗ g 2 − g 2 ⊗ g 1 )g 2 = g 1 (g 2 · g 2 ) − g 2 (g 1 · g 2 ) = g 1
(g 2 ∧ g 3 )g 2 = (g 2 ⊗ g 3 − g 3 ⊗ g 2 )g 2 = g 2 (g 3 · g 2 ) − g 3 (g 2 · g 2 ) = −g 3
(g 1 ∧ g 3 )g 2 = (g 1 ⊗ g 3 − g 3 ⊗ g 1 )g 2 = g 1 (g 3 · g 2 ) − g 3 (g 1 · g 2 ) = 0
C1 g 1 − C2 g 3 = 0 =⇒ C1 = C2 = 0.
We can easily extend our calculations to higher dimensions and write
$v^1 \wedge v^2 = \left[ \sum_{\sigma \in \mathcal{P}(2)} \mathrm{sgn}\,\sigma\; v^1_{i_{\sigma(1)}} v^2_{i_{\sigma(2)}} \right] g^{i_1} \wedge g^{i_2} \qquad (i_1 < i_2 \le n = \dim \cdot\mathcal{V})$
and we have $\binom{n}{2}$ terms in the basis $\{g^{i_1} \wedge g^{i_2}\}$ $(i_1 < i_2 \le n)$. We then write
$v^1 \wedge v^2 \wedge \ldots \wedge v^k = \left[ \sum_{\sigma \in \mathcal{P}(k)} \mathrm{sgn}\,\sigma\; v^1_{i_{\sigma(1)}} v^2_{i_{\sigma(2)}} \ldots v^k_{i_{\sigma(k)}} \right] g^{i_1} \wedge g^{i_2} \wedge \ldots \wedge g^{i_k} \qquad (i_1 < i_2 < \cdots < i_k \le n)$
···    ···    ···
Λⁿ    $\binom{n}{n} = 1$    {g¹ ∧ g² ∧ g³ … ∧ gⁿ}

All the above results are equally applicable to r-vectors, and we write

Space    Dimension    Basis
Ω⁰    1    {1}
Ω¹    $\binom{n}{1}$    {g_{i₁}}
Ω²    $\binom{n}{2}$    {g_{i₁} ∧ g_{i₂}}, i₁ < i₂ ≤ n
Ω³    $\binom{n}{3}$    {g_{i₁} ∧ g_{i₂} ∧ g_{i₃}}, i₁ < i₂ < i₃ ≤ n
···    ···    ···
Ωᵏ    $\binom{n}{k}$    {g_{i₁} ∧ g_{i₂} ∧ … ∧ g_{i_k}}, i₁ < i₂ < ⋯ < i_k ≤ n
···    ···    ···
Ωⁿ    $\binom{n}{n}$    {g₁ ∧ g₂ ∧ g₃ … ∧ gₙ}
Example 38 Consider
$a \wedge b = (a_2 b_3 - a_3 b_2)\, e_2 \wedge e_3 - (a_3 b_1 - a_1 b_3)\, e_1 \wedge e_3 + (a_1 b_2 - a_2 b_1)\, e_1 \wedge e_2$
$\star(a \wedge b) = (a_2 b_3 - a_3 b_2)\, e_1 + (a_3 b_1 - a_1 b_3)\, e_2 + (a_1 b_2 - a_2 b_1)\, e_3 = a \times b$
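A quick numerical check of this identification of ⋆(a ∧ b) with the cross product, with a ∧ b stored as the skew matrix of components (a ∧ b)_{ij} = a_i b_j − a_j b_i:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, 3.0, 2.0])

wedge = np.outer(a, b) - np.outer(b, a)                 # components of a ^ b
star = np.array([wedge[1, 2], wedge[2, 0], wedge[0, 1]]) # Hodge star in 3D
print(np.allclose(star, np.cross(a, b)))                 # True: star(a ^ b) = a x b
```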
Example 39 Consider
8.7 Determinant
Definition 8.1 Let T ∈ Hom(·V, ·V) and nonsingular. Then the determinant of
T , det T ∈ R, is defined as the factor
T v 1 ∧ · · · ∧ T v n = det T (v 1 ∧ · · · ∧ v n )
$v_1 \wedge v_2 \ldots \wedge v_n = [\Theta]\; g_1 \wedge g_2 \ldots \wedge g_n \qquad (100)$
So, we write
$T v_1 \wedge T v_2 \ldots \wedge T v_n = [\Theta]\; T g_1 \wedge T g_2 \ldots \wedge T g_n \qquad (102)$
Note that the definition of determinant does not depend on the selection of a basis
or set of vectors. Hence determinant is an invariant. If we consider T = Ti·j g i ⊗g j ,
then
T g 1 = (Ti·j g i ⊗ g j )g 1 = Ti·1 g i
T g 2 = (Ti·j g i ⊗ g j )g 2 = Ti·2 g i
..
.
T g n = (Ti·j g i ⊗ g j )g n = Ti·n g i
This gives,
and so
$\det T = \det[T_i^{\cdot j}] = \sum_{\sigma \in \mathcal{P}(n)} \mathrm{sgn}\,\sigma\; T_{\sigma(1)}^{\cdot 1}\, T_{\sigma(2)}^{\cdot 2} \ldots T_{\sigma(n)}^{\cdot n} \qquad (103)$
and so
$\det T^t = \sum_{\tau \in \mathcal{P}(n)} \mathrm{sgn}\,\tau\; T_1^{\cdot \tau(1)}\, T_2^{\cdot \tau(2)} \ldots T_n^{\cdot \tau(n)}. \qquad (104)$
Let us now compare the right-hand sides of (103) and (104). We will justify
that they are equal by a simple argument. Consider one particular
permutation σ among the n! terms of (103), and suppose σ(2) = 8 (with 8 < n). Then there is always a
factor $T_8^{\cdot \tau(8)}$ in (104) with τ(8) = 2. So for a specific permutation σ there
exists a permutation τ such that the two terms are exactly equal, including the
sign. We are skipping the details here. We can then immediately write
det T = det Tᵗ.
Example 40 det u ⊗ v = 0
Let T = u ⊗ v. Then,
(u ⊗ v)v 1 ∧ ... ∧ (u ⊗ v)v n = det(u ⊗ v)(v 1 ∧ ... ∧ v n )
(v · v 1 )u ∧ ... ∧ (v · v n )u = det(u ⊗ v)(v 1 ∧ ... ∧ v n
(v · v 1 )(v · v 2 )...(v · v n )[u ∧ ... ∧ u] = det(u ⊗ v)(v 1 ∧ ... ∧ v n )
0 = det(u ⊗ v)(v 1 ∧ ... ∧ v n )
So, det(u ⊗ v) = 0.
Example 41 det(αI) = αn
Let T = αI. Then,
αIv 1 ∧ ... ∧ αIv n = det(αI)(v 1 ∧ ... ∧ v n )
αv 1 ∧ ... ∧ αv n = det(αI)(v 1 ∧ ... ∧ v n )
αn (v 1 ∧ ... ∧ v n ) = det(αI)(v 1 ∧ ... ∧ v n )
So, det(αI) = αn .
8.8 Trace
Definition 8.2 Let T ∈ Hom(·V, ·V). Then the trace of T , tr T ∈ R, is defined
as
$\sum_{i=1}^{n} (v_1 \wedge \ldots \wedge T v_i \ldots \wedge v_n) = \mathrm{tr}\,T\; (v_1 \wedge \ldots \wedge v_n)$
We can also select a basis {g₁, ..., gₙ} of ·V; then (like the definition of the determinant)
$\sum_{i=1}^{n} (g_1 \wedge \ldots \wedge T g_i \ldots \wedge g_n) = \mathrm{tr}\,T\; (g_1 \wedge \ldots \wedge g_n)$
$\sum_{i=1}^{n} (g_1 \wedge \ldots \wedge (\alpha S g_i + \beta T g_i) \ldots \wedge g_n) = \mathrm{tr}\,(\alpha S + \beta T)\,(g_1 \wedge \ldots \wedge g_n)$
$\sum_{i=1}^{n} (g_1 \wedge \ldots \wedge \alpha S g_i \ldots \wedge g_n) + \sum_{i=1}^{n} (g_1 \wedge \ldots \wedge \beta T g_i \ldots \wedge g_n) = \mathrm{tr}\,(\alpha S + \beta T)\,(g_1 \wedge \ldots \wedge g_n)$
$\alpha \sum_{i=1}^{n} (g_1 \wedge \ldots \wedge S g_i \ldots \wedge g_n) + \beta \sum_{i=1}^{n} (g_1 \wedge \ldots \wedge T g_i \ldots \wedge g_n) = \mathrm{tr}\,(\alpha S + \beta T)\,(g_1 \wedge \ldots \wedge g_n)$
$\alpha\,(\mathrm{tr}\,S)(g_1 \wedge \ldots \wedge g_n) + \beta\,(\mathrm{tr}\,T)(g_1 \wedge \ldots \wedge g_n) = (\alpha\, \mathrm{tr}\,S + \beta\, \mathrm{tr}\,T)(g_1 \wedge \ldots \wedge g_n)$
Example 44 tr I = n
$\sum_{i=1}^{n} (g_1 \wedge \ldots \wedge I g_i \ldots \wedge g_n) = \mathrm{tr}\,I\,(g_1 \wedge \ldots \wedge g_n)$
$I g_1 \wedge \ldots \wedge g_n + g_1 \wedge I g_2 \wedge \ldots \wedge g_n + \cdots = \mathrm{tr}\,I\,(g_1 \wedge \ldots \wedge g_n)$
So, tr I = n.
Example 45 tr (u ⊗ v) = u · v
$\sum_{i=1}^{n} (g_1 \wedge \ldots \wedge (u \otimes v) g_i \ldots \wedge g_n) = \mathrm{tr}\,(u \otimes v)\,(g_1 \wedge \ldots \wedge g_n)$
$(u \otimes v) g_1 \wedge \ldots \wedge g_n + g_1 \wedge (u \otimes v) g_2 \wedge \ldots \wedge g_n + \cdots = \mathrm{tr}\,(u \otimes v)\,(g_1 \wedge \ldots \wedge g_n)$
Continuing, we get
This gives tr (u ⊗ v) = u · v.
iv : ·V × Λk → Λk−1
iv · T = v · T . (105)
So,
iv · T = v · T
= v · (v 1 ∧ v 2 . . . ∧ v k )
= v · k! Skw (v 1 ⊗ v 2 · · · ⊗ v k )
iv · T = 0.
iv · T = v · 1! Skw (v 1 ) = v · v 1 .
iv · T = v · 2! Skw (v 1 ⊗ v 2 ) = v · (v 1 ⊗ v 2 − v 2 ⊗ v 1 )
= (v · v 1 )v 2 − (v · v 2 )v 1
= (iv · v 1 )v 2 − (iv · v 2 )v 1
or
iv · T = v · 3! Skw (v 1 ⊗ v 2 ⊗ v 3 )
= v · (v 1 ⊗ v 2 ⊗ v 3 − v 1 ⊗ v 3 ⊗ v 2 − v 2 ⊗ v 1 ⊗ v 3 + v 2 ⊗ v 3 ⊗ v 1
−v 3 ⊗ v 2 ⊗ v 1 + v 3 ⊗ v 1 ⊗ v 2 )
= (v · v 1 )v 2 ⊗ v 3 − (v · v 1 )v 3 ⊗ v 2 − (v · v 2 )v 1 ⊗ v 3 + (v · v 2 )v 3 ⊗ v 1
−(v · v 3 )v 2 ⊗ v 1 + (v · v 3 )v 1 ⊗ v 2
= (v · v 1 )[v 2 ⊗ v 3 − v 3 ⊗ v 2 ] − (v · v 2 )[v 1 ⊗ v 3 − v 3 ⊗ v 1 ]
−(v · v 3 )[v 2 ⊗ v 1 − v 1 ⊗ v 2 ]
= (iv · v 1 )v 2 ∧ v 3 + (iv · v 2 )v 3 ∧ v 1 + (iv · v 3 )v 1 ∧ v 2
or
iv · (A ∧ v 3 ) = v · 3! Skw (v 1 ⊗ v 2 ⊗ v 3 )
= (iv · v 1 )v 2 ∧ v 3 + (iv · v 2 )v 3 ∧ v 1 + (iv · v 3 )v 1 ∧ v 2
= [(iv · v 1 )v 2 − (iv · v 2 )v 1 ] ∧ v 3 + (iv · v 3 )v 1 ∧ v 2
= iv · (v 1 ∧ v 2 ) ∧ v 3 + (iv · v 3 )v 1 ∧ v 2 (from (106))
= (iv · A) ∧ v 3 + A(iv · v 3 )
iv · (v 1 ∧ B) = v · 3! Skw (v 1 ⊗ v 2 ⊗ v 3 )
= (iv · v 1 )v 2 ∧ v 3 + (iv · v 2 )v 3 ∧ v 1 + (iv · v 3 )v 1 ∧ v 2
= (iv · v 1 )v 2 ∧ v 3 − v 1 ∧ [(iv · v 2 )v 3 − (iv · v 3 )v 2 ]
= (iv · v 1 )B − v 1 ∧ (iv · B) (from (106))
In general, we write
We also have
iv · (T + S) = iv · T + iv · S (109)
⋆ : Λᵖ → Λⁿ⁻ᵖ.
⋆(ei1 ∧ ... ∧ eip ) = sgn σ(ej1 ∧ ... ∧ ejn−p ). i1 < · · · < ip & j1 < · · · < jn−p
where {j1 , . . . jn−p } ∈ {n}/{i1 , . . . , ip }, and σ ∈ P(n) is the permutation {i1 , . . . , ip , j1 , . . . , jn−p }.
⋆e1 = e2
⋆e2 = −e1
⋆(e1 ∧ e2 ) = 1
⋆e1 = e2 ∧ e3 , ⋆(e1 ∧ e2 ) = e3
⋆e2 = −e1 ∧ e3 , ⋆(e2 ∧ e3 ) = e1
⋆e3 = e1 ∧ e2 , ⋆(e1 ∧ e3 ) = −e2
and ⋆(e1 ∧ e2 ∧ e3 ) = 1.
Example 48 Consider
Example 49 Consider
T u ∧ T v ∧ T w = det T (u ∧ v ∧ w)
T u ∧ v ∧ w + u ∧ T v ∧ w + u ∧ v ∧ T w = tr T (u ∧ v ∧ w)
T u ∧ T v ∧ w + u ∧ T v ∧ T w + T u ∧ v ∧ T w = I2 (u ∧ v ∧ w)
⋆(a ∧ b ∧ c) = a · (b × c) = [a, b, c]
we write
[T u, T v, T w] = I3 [u, v, w] (110)
Cof T (u × v) := T u × T v. (113)
$\det T = \dfrac{[Tu, Tv, Tw]}{[u, v, w]} = \dfrac{Tu \cdot (Tv \times Tw)}{u \cdot (v \times w)} = \dfrac{Tu \cdot \mathrm{Cof}\,T\,(v \times w)}{u \cdot (v \times w)}$
$\Rightarrow (\det T)\,(u \cdot (v \times w)) = Tu \cdot \mathrm{Cof}\,T\,(v \times w) = (\mathrm{Cof}\,T)^t\, Tu \cdot (v \times w)$
$\Rightarrow \left[ (\det T)\, I\, u \right] \cdot (v \times w) = \left[ (\mathrm{Cof}\,T)^t\, T u \right] \cdot (v \times w)$
$\mathrm{tr}\,T = \dfrac{[Tu, v, w]}{[u, v, w]} + \dfrac{[u, Tv, w]}{[u, v, w]} + \dfrac{[u, v, Tw]}{[u, v, w]}$
and replacing u by Tu we get
$\mathrm{tr}\,T = \dfrac{[T^2u, v, w]}{[Tu, v, w]} + \dfrac{[Tu, Tv, w]}{[Tu, v, w]} + \dfrac{[Tu, v, Tw]}{[Tu, v, w]}$
$\Rightarrow \mathrm{tr}\,T\, [Tu, v, w] = [T^2u, v, w] + [Tu, Tv, w] + [Tu, v, Tw]$
$\Rightarrow \mathrm{tr}\,T\, [Tu, v, w] - [T^2u, v, w] + [u, Tv, Tw] = [Tu, Tv, w] + [Tu, v, Tw] + [u, Tv, Tw]$
$\Rightarrow \mathrm{tr}\,T\, [Tu, v, w] - [T^2u, v, w] + [u, Tv, Tw] = I_2\, [u, v, w]$
$\Rightarrow \mathrm{tr}\,T\, (Tu \cdot (v \times w)) - T^2u \cdot (v \times w) + u \cdot (Tv \times Tw) = I_2\, (u \cdot (v \times w))$
$\Rightarrow \mathrm{tr}\,T\, (Tu \cdot (v \times w)) - T^2u \cdot (v \times w) + u \cdot \mathrm{Cof}\,T\,(v \times w) = I_2\, (u \cdot (v \times w))$
$\Rightarrow \left[ (\mathrm{tr}\,T)\,T u - T^2 u + (\mathrm{Cof}\,T)^t u \right] \cdot (v \times w) = \left[ I_2\, I\, u \right] \cdot (v \times w)$
This gives
$(\mathrm{tr}\,T)\,T - T^2 + (\mathrm{Cof}\,T)^t = I_2\, I$
(⁸ Recall that a · (b × c) = b · (c × a) = c · (a × b).)
10 Orthogonal tensors
Let R ∈ Hom (·V, ·V) and dim ·V = 3 such that Rt = R−1 or alternatively
RRt = Rt R = I. (122)
Then R is said to be orthogonal tensor. It has the following properties
2. It preserves the angle between two vectors after transformation (Fig. 37):
Ru · Rv = u · Rt Rv = u · Iv = u · v. So,
$\cos\theta = \dfrac{Ru \cdot Rv}{|Ru||Rv|} = \dfrac{u \cdot v}{|u||v|}$
Figure 37: θ and the lengths of the vectors remain the same after an orthogonal
transformation.
1. for a vector v: v ′ = Rv
11 Skew-symmetric tensors
W u = ω × u
1. u · W v = W t u · v = −v · W u.
If ω = (ω₁, ω₂, ω₃)ᵗ, then
$[W]_{ij} = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \qquad (123)$
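A small sketch of this correspondence between a skew-symmetric tensor and its axial vector:

```python
import numpy as np

def skew(w):
    """Skew-symmetric tensor W of the axial vector w, so that W @ u = w x u."""
    return np.array([[0.0, -w[2],  w[1]],
                     [w[2],  0.0, -w[0]],
                     [-w[1], w[0],  0.0]])

w = np.array([0.3, -1.2, 2.0])
u = np.array([1.0, 4.0, -2.0])
W = skew(w)
print(np.allclose(W @ u, np.cross(w, u)))   # True
print(np.allclose(W.T, -W))                 # W is skew-symmetric
```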
T n = λn or (T − λI)n = 0. (124)
where I is the identity tensor. As we know, there are four possibilities to
express T and I, i.e.,
Note that the matrix representations of the last two forms of I are diagonal. Now
going back to the eigenvalue problem, we have to make sure that both T and I
should be expressed with respect to the same basis for the subtraction operation.
Let us consider the form T = Tij g i ⊗ g j (also Tij = Tji ), then we should consider
I = gij g i ⊗ g j . Let n = nk g k (we can also consider n = nk g k ), then
$(T - \lambda I)n = \left[ (T_{ij} - \lambda g_{ij})\, g^i \otimes g^j \right] n^k g_k = (T_{ij} - \lambda g_{ij})\, n^k \delta^j_k\, g^i = (T_{ij} - \lambda g_{ij})\, n^j g^i$
The above form is known as a generalized eigenvalue problem, i.e., of the form
An = λBn. However, solving a standard eigenvalue problem is more convenient. Note
that $n^j = g^{jk} n_k$, and we write
$\det(T_i^{\cdot j} - \lambda \delta_i^j) = 0$
$\mathcal{P}(\lambda) = \lambda^3 - I_1 \lambda^2 + I_2 \lambda - I_3,$
where $I_1 = \mathrm{tr}\,T = \lambda_1 + \lambda_2 + \lambda_3$, $I_2 = \frac{1}{2}\left[ (\mathrm{tr}\,T)^2 - \mathrm{tr}\,T^2 \right] = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1$, and
$I_3 = \det T = \lambda_1\lambda_2\lambda_3$.
Theorem 12.1 Let T ∈ Hom (·V, ·V), and dim ·V = 3. If T has three distinct eigenvalues,
and n(i) is the eigenvector corresponding to i-th eigenvalue, then {n(1) , n(2) , n(3) }
is a basis of ·V.
Consider,
Now n(3) is not a null vector, and all the eigenvalues are distinct, i.e., λ1 ̸= λ2 ̸=
λ3 . This means c3 = 0. Similarly, starting with the operators (T − λ2 I), and
(T − λ3 I) give c1 = 0. And starting with the operators (T − λ1 I), and (T − λ3 I)
give c2 = 0. This means {n(1) , n(2) , n(3) } is linearly independent. Now, one can
express any vector v = v 1 n(1) + v 2 n(2) + v 3 n(3) , i.e., it spans the space. This
shows {n(1) , n(2) , n(3) } is a basis. One can extend this algorithm to show that if
dim ·V = n, {n(1) , n(2) , . . . , n(n) } is a basis.
Now,
n(j) · T n(i) = n(j) · λi n(i) , ⇒ T t n(j) · n(i) = λi n(j) · n(i) , ⇒ T n(j) · n(i) = λi n(j) · n(i) .
We write,
As λi ̸= λj , we get n(j) ⊥ n(i) . So, for three distinct eigenvalues, {n(1) , n(2) , n(3) }
is an orthonormal basis.
For three distinct eigenvalues, when {n(1) , n(2) , n(3) } is an orthonormal basis
of ·V, we write the tensor T as
This is called spectral decomposition of T with respect to the basis system {n(i) ⊗
n(j) }. Note that the matrix form of T will be diagonal in this case. One of the
advantages, among many, of spectral decomposition is the following:
$T^2 = (\lambda_1 n^{(1)} \otimes n^{(1)} + \lambda_2 n^{(2)} \otimes n^{(2)} + \lambda_3 n^{(3)} \otimes n^{(3)})(\lambda_1 n^{(1)} \otimes n^{(1)} + \lambda_2 n^{(2)} \otimes n^{(2)} + \lambda_3 n^{(3)} \otimes n^{(3)})$
$= \lambda_1^2 [n^{(1)} (n^{(1)} \cdot n^{(1)}) \otimes n^{(1)}] + \lambda_2^2 [n^{(2)} (n^{(2)} \cdot n^{(2)}) \otimes n^{(2)}] + \lambda_3^2 [n^{(3)} (n^{(3)} \cdot n^{(3)}) \otimes n^{(3)}]$
$= \lambda_1^2\, n^{(1)} \otimes n^{(1)} + \lambda_2^2\, n^{(2)} \otimes n^{(2)} + \lambda_3^2\, n^{(3)} \otimes n^{(3)}$
In general,
$T^n = \lambda_1^n\, n^{(1)} \otimes n^{(1)} + \lambda_2^n\, n^{(2)} \otimes n^{(2)} + \lambda_3^n\, n^{(3)} \otimes n^{(3)}, \quad (n \in \mathbb{R}). \qquad (126)$
Figure 39: Any vector in the 2 − 3 plane, i.e., linear combination of n(2) and n(3)
is an eigenvector.
be an eigenvector with the eigenvalue λ. Moreover, n(2) and n(3) need not to be
orthonormal to each other. Note that the choice of n(1) is unique and it has to be
perpendicular to the plane containing n(2) and n(3) (Fig. 39). If we select that
n(2) ⊥ n(3) , the spectral decomposition in this case becomes
T = λ1 n(1) ⊗ n(1) + λn(2) ⊗ n(2) + λn(3) ⊗ n(3)
= (λ1 − λ)n(1) ⊗ n(1) + λI,
where the identity tensor is given by
I = n(1) ⊗ n(1) + n(2) ⊗ n(2) + n(3) ⊗ n(3) .
T = λI,
v · T v ≥ 0, v · T v = 0 =⇒ v = 0.
v · T v = v · (T s + T ss )v = v · T s v + v · T ss v = v · T s v + 0.
So, even for a general tensor, it is sufficient to study the symmetric part of it.
When a symmetric tensor T is positive definite, we say T ∈ Psym .
Theorem 12.3 Let T ∈ Hom (·V, ·V), and also T ∈ Sym , where dim ·V = 3. If
T has strictly positive eigenvalues, then T ∈ Psym .
We consider the form
T v = (λ1 n(1) ⊗ n(1) + λ2 n(2) ⊗ n(2) + λ3 n(3) ⊗ n(3) )(v1 n(1) + v2 n(2) + v3 n(3) )
= λ1 v1 n(1) + λ2 v2 n(2) + λ3 v3 n(3) .
So,
$v \cdot T v = \lambda_1 v_1^2 + \lambda_2 v_2^2 + \lambda_3 v_3^2 > 0 \quad \text{for } v \neq 0,$
and T ∈ Psym .
Theorem 12.4 Let T ∈ Hom (·V, ·V), and also T ∈ Psym , where dim ·V = 3.
Then all the eigenvalues of T are strictly positive.
Theorem 12.5 Let T ∈ Hom (·V, ·V), and also T ∈ Psym , where dim ·V = 3.
Then there exists a U ∈ Psym such that U² = T. The tensor U is called the positive
definite square root of T and we write U = √T.
Let us start from T n = λn, where λ > 0 as T ∈ Psym . We then write
(T − λI)n = 0
$(U^2 - \lambda I)n = 0$
$(U + \sqrt{\lambda}\, I)(U - \sqrt{\lambda}\, I)n = 0$
Let $\hat{n} = (U - \sqrt{\lambda}\, I)n$. Then continuing we get
$(U + \sqrt{\lambda}\, I)\hat{n} = 0 \;\Rightarrow\; U \hat{n} = -\sqrt{\lambda}\, \hat{n}$
This means $-\sqrt{\lambda} < 0$ would be an eigenvalue of U. This is not possible as U ∈ Psym.
So the only option is $\hat{n} = 0$, and we write
$(U - \sqrt{\lambda}\, I)n = 0 \;\Rightarrow\; U n = \sqrt{\lambda}\, n.$
This shows n is also an eigenvector of U, with eigenvalue $\sqrt{\lambda}$. If the spectral
decomposition of T is
$T = \lambda_1 n^{(1)} \otimes n^{(1)} + \lambda_2 n^{(2)} \otimes n^{(2)} + \lambda_3 n^{(3)} \otimes n^{(3)}$
then
$U = \sqrt{\lambda_1}\, n^{(1)} \otimes n^{(1)} + \sqrt{\lambda_2}\, n^{(2)} \otimes n^{(2)} + \sqrt{\lambda_3}\, n^{(3)} \otimes n^{(3)}.$
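A short numerical sketch of the spectral decomposition and the tensor square root (the symmetric positive definite matrix below is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])

lam, N = np.linalg.eigh(A)          # eigenvalues and orthonormal eigenvectors (columns of N)

# Spectral decomposition: A = sum_i lam_i n^(i) (x) n^(i)
A_rebuilt = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
print(np.allclose(A_rebuilt, A))    # True

# Positive definite square root: U = sum_i sqrt(lam_i) n^(i) (x) n^(i)
U = sum(np.sqrt(lam[i]) * np.outer(N[:, i], N[:, i]) for i in range(3))
print(np.allclose(U @ U, A))        # True: U^2 = A
```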
Let R ∈ Orth, and let us study the associated eigenvalue problem Rn = λn.
We write,
Rn · Rn = λ2 n · n ⇒ n · Rt Rn = λ2 n · n ⇒ n · n = λ2 n · n.
This gives that the only real eigenvalues are λ = ±1. Further considering R is a
rotation tensor, i.e., R ∈ Orth+ , we have the following information:
Rt R = I, ⇒ Rt (R − I) = −(R − I)t ⇒ (det Rt ) det(R − I) = − det(R − I)t
As det Rt = 1, we have det(R − I) = 0. So, one real eigenvalue for a rotation
tensor is λ = 1. We have Rp = p, where p is the associated eigenvector. p is
known as the axis of rotation, which does not change when operated on by the rotation
tensor R. Note that R^t p = p as well, since R^t Rp = Ip means R^t (Rp) = p, i.e., R^t p = p.
Let us select two other mutually perpendicular unit vectors q and r, which are
perpendicular to the axis p. We then construct a basis {p, q, r}. We have,
p · Rq = Rt p · q = p · q = 0 ⇒ p ⊥ Rq
p · Rr = Rt p · r = p · r = 0 ⇒ p ⊥ Rr
Rr · Rq = r · (Rt R)q = r · q = 0 ⇒ Rr ⊥ Rq.
This means that q and r, after transformation, remain perpendicular to the axis of
rotation p. The initial angle (90°) between q and r remains unchanged after the
transformation. So we can write
Rq = αq + βr, Rr = γq + δr (127)
Note that |Rq| = |q| = 1 and |Rr| = |r| = 1, and let θ be the angle of rotation measured from q.
We then have the following components of R:
R = p ⊗ p + α q ⊗ q + δ r ⊗ r + γ q ⊗ r + β r ⊗ q.
Identifying α = δ = cos θ, β = sin θ and γ = − sin θ for a rotation by θ about p, we obtain
R = p ⊗ p + (q ⊗ q + r ⊗ r) cos θ + (r ⊗ q − q ⊗ r) sin θ.
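The representation above is easy to test numerically. The sketch below (Python/numpy assumed, with an arbitrary axis p and angle θ) assembles R from {p, q, r} and checks R^t R = I, det R = 1, Rp = p and q · Rq = cos θ.

import numpy as np

p = np.array([0.0, 0.0, 1.0])            # chosen axis of rotation
q = np.array([1.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0])            # {p, q, r}: right-handed orthonormal triad
theta = 0.3                              # chosen angle of rotation

R = (np.outer(p, p)
     + (np.outer(q, q) + np.outer(r, r)) * np.cos(theta)
     + (np.outer(r, q) - np.outer(q, r)) * np.sin(theta))

print(np.allclose(R.T @ R, np.eye(3)))        # R^t R = I
print(np.isclose(np.linalg.det(R), 1.0))      # det R = 1, so R in Orth+
print(np.allclose(R @ p, p))                  # Rp = p: the axis is unchanged (lambda = 1)
print(np.isclose(q @ (R @ q), np.cos(theta))) # q . Rq = cos(theta): the angle of rotation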
A similar analysis can be done for a skew tensor W ∈ Skw with axial vector ω = ω p, so that W v = ω × v and W p = 0 (λ = 0 is the only real eigenvalue). With the same triad {p, q, r},
p · W q = W^t p · q = −W p · q = 0 ⇒ p ⊥ W q
p · W r = W t p · r = −W p · r = 0 ⇒ p ⊥ W r
p · W p = q · W q = r · W r = 0 = q · W p = r · W p.
[Figure: W q and W r lie in the plane perpendicular to the axis p, and W p = 0 (λ = 0).]
q · W r = q · (ω × r) = ω · (r × q) = −ω · p = −ωp · p = −ω
r · W q = W t r · q = −W r · q = ω
W = ω(r ⊗ q − q ⊗ r).
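A small check of this dyadic form, assuming Python/numpy and the same right-handed triad {p, q, r}: the assembled W is skew, annihilates the axis p, and reproduces W v = ω × v.

import numpy as np

p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0])            # q x r = p, so {p, q, r} is right handed
omega = 2.5                              # magnitude of the axial vector  w = omega * p
W = omega * (np.outer(r, q) - np.outer(q, r))

print(np.allclose(W, -W.T))              # W is skew
print(np.allclose(W @ p, 0.0))           # W p = 0: the axis is the eigenvector with lambda = 0

v = np.array([0.3, -1.2, 0.7])
print(np.allclose(W @ v, np.cross(omega * p, v)))   # W v = omega x v for any v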
15.1 Norm
Let ·V be a vector space. A norm is a function || · || : ·V → R such that, for all u, v ∈ ·V and α ∈ R,
(i) ||v|| ≥ 0, with ||v|| = 0 only when v = 0, (ii) ||αv|| = |α| ||v||, and (iii) ||u + v|| ≤ ||u|| + ||v||.
Then the vector space is called a normed linear space, or simply a normed space.
Example 1: If V = R, then ||v|| = |v|, v ∈ R.
Example 2: If V = E^n (Euclidean), then ||v|| = √(v1^2 + · · · + vn^2).
Example 3: If V = E^2 (Euclidean), then we can define many norms, for example the usual
length of the vector v, ||v|| = √(v1^2 + v2^2). Note that a normed space is not always an inner
product space, but an inner product space always has a norm, ||v|| = √(v · v). A norm always
gives rise to a metric, which is the distance between two points; here the points
are the members of the set. The distance function between any two points u and v is
written as
d(u, v) = ||u − v||.
This particular distance function, among many other possible ones, is known as the metric induced by the norm.
15.2 Differentiation
Let ·V and ·U be two inner product spaces, each equipped with the norm (and metric) induced by
its inner product. Let v ∈ ·V and consider a map f (v) ∈ ·U. Note that f need not be a linear
transformation. Let f be defined in a neighborhood of 0 ∈ ·V and have values in ·U.
If f (v) approaches 0 ∈ ·U faster than v, we write
f (v) = o(v), as v → 0,
or simply
f (v) = o(v)
if
lim_{v→0} ||f (v)|| / ||v|| = 0.
For example, consider f (t) = t^a for a > 1; then f (t) = o(t). Let D be a subset of
·V. We say f is differentiable at x ∈ D if the difference
f (x + v) − f (x)
is equal to a linear transformation of v (say, T v ∈ ·U) plus a term (i.e. o(v) ∈ ·U)
that approaches zero faster than v. We write
f (x + v) − f (x) = D_x f (x)[v] + o(v), where D_x f (x)[v] = T v is linear in v.
The derivative can be computed as the directional derivative
D_x f (x)[v] = d/dα f (x + αv) |_{α=0} . (134)
Consider, for example, ϕ(u) = u · u. Then
ϕ(u + v) = (u + v) · (u + v) = u · u + u · v + v · u + v · v
= ϕ(u) + 2u · v + o(v).
Then, using (134),
D_u ϕ(u)[v] = d/dα ϕ(u + αv) |_{α=0} = (u · v + v · u + 2α v · v) |_{α=0} = 2u · v,
so that D_u ϕ(u) = 2u.
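The directional-derivative formula (134) also lends itself to a finite-difference check. The sketch below (Python/numpy assumed; the step size h is an arbitrary choice) compares a central-difference approximation of d/dα ϕ(u + αv)|_{α=0} with the analytical result 2u · v.

import numpy as np

def phi(u):
    return u @ u

def directional_derivative(f, u, v, h=1e-6):
    # central-difference approximation of d/dalpha f(u + alpha v) at alpha = 0, cf. (134)
    return (f(u + h * v) - f(u - h * v)) / (2.0 * h)

rng = np.random.default_rng(2)
u = rng.standard_normal(3)
v = rng.standard_normal(3)

print(directional_derivative(phi, u, v))     # numerical value of D_u phi(u)[v]
print(2.0 * (u @ v))                         # analytical value 2 u . v (agreement ~1e-9)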
Consider now ϕ(A) = det A, where A ∈ Hom (·V, ·V) is invertible. Recall that
det(A + I) = 1 + I_1 (A) + I_2 (A) + I_3 (A),
where I_1 , I_2 , I_3 are the principal invariants. So,
ϕ(A + V ) = det(A + V ) = det A det(I + A^{-1} V ) = det A [ 1 + I_1 (A^{-1} V ) + I_2 (A^{-1} V ) + I_3 (A^{-1} V ) ].
So, since I_2 and I_3 are quadratic and cubic in V ,
ϕ(A + V ) = ϕ(A) + det A tr (A^{-1} V ) + o(V ).
In this case, the derivative D_A ϕ(A)[ · ] maps Hom (·V, ·V) → R (whereas for a vector argument
D_x ϕ(x) ∈ Hom (·V, R) = ·V^∗ ≡ ·V). So, D_A ϕ(A) ∈ Hom (Hom (·V, ·V), R) =
[Hom (·V, ·V)]^∗ = Hom (·V^∗ , ·V^∗ ) ≡ Hom (·V, ·V). So it is a second order tensor.
We identify D_A ϕ(A)[V ] = D_A ϕ(A) : V , as two second order tensors produce a
scalar only by a double contraction. Then
D_A ϕ(A) : V = det A tr (A^{-1} V ) = [ (det A) A^{-t} ] : V .
So,
D_A (det A) = (det A) A^{-t} ,
which is the cofactor tensor of A.
Example 59 Find the derivative of ϕ(A, u) = u · Au, where A ∈ Hom (·V, ·V),
u ∈ D ⊂ ·V and ϕ : Hom (·V, ·V) × ·V → R (tensor- and vector valued scalar
function)
We denote Dϕ(A, u)[v] = ∂u ϕ(A, u)[v] and Dϕ(A, u)[V ] = ∂A ϕ(A, u)[V ], where
v ∈ ·V and V ∈ Hom (·V, ·V).
ϕ(A, u + v) = (u + v) · A(u + v) = u · Au + u · Av + v · Au + v · Av
= ϕ(A, u) + At u · v + Au · v + o(v).
So,
∂_u ϕ(A, u)[v] = d/dα ϕ(A, u + αv) |_{α=0}
= d/dα [ (u + αv) · A(u + αv) ] |_{α=0}
= [ v · Au + u · Av + 2α v · Av ] |_{α=0}
= v · Au + u · Av.
∂_u ϕ(A, u) · v = (Au + A^t u) · v =⇒ ∂_u ϕ(A, u) = Au + A^t u.
For the tensor argument, ϕ(A + V , u) = u · (A + V )u = ϕ(A, u) + u · V u = ϕ(A, u) + (u ⊗ u) : V ,
so that ∂_A ϕ(A, u) = u ⊗ u.
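Both partial derivatives of Example 59 can be verified the same way. In the sketch below (Python/numpy assumed, with randomly chosen A, u and directions v, V), the finite-difference values agree with (Au + A^t u) · v and (u ⊗ u) : V.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)               # direction in V
V = rng.standard_normal((3, 3))          # direction in Hom(V, V)
h = 1e-6

phi = lambda A_, u_: u_ @ A_ @ u_        # phi(A, u) = u . A u

# d_u phi(A, u)[v]  vs  (A u + A^t u) . v
fd_u = (phi(A, u + h * v) - phi(A, u - h * v)) / (2 * h)
print(fd_u, (A @ u + A.T @ u) @ v)

# d_A phi(A, u)[V]  vs  (u (x) u) : V
fd_A = (phi(A + h * V, u) - phi(A - h * V, u)) / (2 * h)
print(fd_A, np.tensordot(np.outer(u, u), V))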
Next we consider a product rule. Let f (x) ∈ ·F and g(x) ∈ ·G be differentiable at x ∈ D ⊂ ·V,
and let π denote a bilinear operation (a "product"), for instance a scalar multiplication, a dot
product, or a tensor product, each being a map of the form
π : ·F × ·G → ·H,
and so on. Consider h(x) = π(f (x), g(x)) and, if v ∈ ·V, then
h(x + v) = π(f (x + v), g(x + v))
= π(f (x) + D_x f (x)[v] + o(v), g(x) + D_x g(x)[v] + o(v))
≈ π(f (x) + D_x f (x)[v], g(x) + D_x g(x)[v])
= π(f (x), g(x) + D_x g(x)[v]) + π(D_x f (x)[v], g(x) + D_x g(x)[v])
= π(f (x), g(x)) + π(f (x), D_x g(x)[v]) + π(D_x f (x)[v], g(x)) + π(D_x f (x)[v], D_x g(x)[v])
= h(x) + π(f (x), D_x g(x)[v]) + π(D_x f (x)[v], g(x)) + o(v)
h(x + v) − h(x) = π(f (x), D_x g(x)[v]) + π(D_x f (x)[v], g(x)) + o(v)
So,
D_x h(x)[v] = π(f (x), D_x g(x)[v]) + π(D_x f (x)[v], g(x)).
If ·V = R and t, α ∈ R, the directional derivative reduces to the ordinary derivative,
D_t f (t)[α] = α ḟ (t),
and
D_t h(t) = d/dt h(t) = ḣ(t) = π(ḟ (t), g(t)) + π(f (t), ġ(t)). (141)
Note that here ·V = R. Let f (t) = A(t), and g(·) = det(·), so that g(f (t)) = det A(t).
So, following equation (136), we write
d/dt (det A(t)) = D_{A(t)} (det A(t))[Ȧ(t)] = (det A) A^{-t} : Ȧ. (144)
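Equation (144) can be checked numerically as well. The sketch below (Python/numpy assumed; A(t) = A0 + t Ȧ is an arbitrary invertible choice) compares a central difference of det A(t) with (det A) A^{-t} : Ȧ.

import numpy as np

A0 = np.array([[2.0, 0.3, 0.1],
               [0.0, 1.5, 0.2],
               [0.4, 0.0, 1.2]])
Adot = np.array([[0.1, 0.0, 0.5],
                 [0.2, -0.3, 0.0],
                 [0.0, 0.1, 0.4]])
A = lambda t: A0 + t * Adot              # a simple invertible A(t)

t, h = 0.1, 1e-6
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)       # d/dt det A(t)
rhs = np.linalg.det(A(t)) * np.tensordot(np.linalg.inv(A(t)).T, Adot)     # (det A) A^{-t} : Adot
print(lhs, rhs)                          # the two values agree to about 1e-8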
15.6 Gradient and divergence
15.6.1 Gradient
1. Consider a vector valued scalar function
h(x) : D → R, D ⊂ ·V.
Then we write
D_x h(x)[v] = grad h(x) · v, with grad h(x) ∈ ·V.
2. Consider a vector valued vector function
h(x) : D → ·V.
Then
D_x h(x)[v] = [grad h(x)]v, with grad h(x) ∈ ·V ⊗ ·V.
3. In general, if
h(x) : D → ⊗^n ·V,
then
grad h(x) = D_x h(x) ∈ ⊗^{n+1} ·V.
Example 61 Show that grad (ϕw) = w⊗grad ϕ+ϕ grad w, where ϕ(x) ∈ ·F =
R, and w(x) ∈ ·G.
(Footnote 9: the notation ⊗^n ·V = ·V ⊗ ·V ⊗ · · · ⊗ ·V means n copies of ·V.)
Consider h = π(ϕ, w) = ϕw. From the product rule we write
D_x h(x)[v] = π(ϕ(x), D_x w(x)[v]) + π(D_x ϕ(x)[v], w(x))
= ϕ [grad w]v + (grad ϕ · v) w
= [ϕ grad w + w ⊗ grad ϕ]v.
So,
grad (ϕw) = w ⊗ grad ϕ + ϕ grad w. (145)
Example 62 Show that grad (u · w) = (grad u)^t w + (grad w)^t u, where
u(x) ∈ ·F, and w(x) ∈ ·G.
Consider h = π(u, w) = u · w. From the product rule we write
D_x h(x)[v] = π(D_x u(x)[v], w(x)) + π(u(x), D_x w(x)[v])
= [grad u]v · w + u · [grad w]v
= (grad u)^t w · v + (grad w)^t u · v.
So,
grad (u · w) = (grad u)^t w + (grad w)^t u. (146)
Example 63 Show that grad (S^t u) = S^t (grad u) + u · (grad S), where u(x) ∈ ·F and S(x) is a second order tensor field.
Consider h = π(u, S) = u · S = S^t u. From the product rule we write
D_x h(x)[v] = π(D_x u(x)[v], S(x)) + π(u(x), D_x S(x)[v])
= π([grad u(x)]v, S(x)) + π(u(x), [grad S(x)]v)
= [grad u]v · S + u · [grad S]v.
Noting that ([grad u]v) · S = S^t [grad u]v,
[grad h]v = S^t [grad u]v + (u · [grad S])v
= [ S^t (grad u) + u · (grad S) ] v,
and we write
grad (S^t u) = S^t (grad u) + u · (grad S). (147)
15.6.2 Divergence
Divergence makes sense when T (x) is a vector or a higher order tensor. Consider
T (x) ∈ ⊗^n ·V. Then
div T (x) = grad T (x) : I,
where I ∈ ·V ⊗ ·V, as long as the argument variable x ∈ ·V. Note that grad T (x) ∈
⊗^{n+1} ·V and so div T (x) ∈ ⊗^{n+1−2} ·V = ⊗^{n−1} ·V. If n = 1, then without any
ambiguity we can write
div T (x) = grad T (x) : I = tr (grad T (x)).
Example 64 Show that div (ϕw) = w · grad ϕ + ϕ div w, where ϕ(x) ∈ ·F = R,
and w(x) ∈ ·G.
We know grad (ϕw) from (145). Taking the trace of both sides,
tr (grad (ϕw)) = tr (w ⊗ grad ϕ) + tr (ϕ grad w)
=⇒ div (ϕw) = w · grad ϕ + ϕ div w (148)
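Identity (148) is easy to verify pointwise by finite differences. In the sketch below (Python/numpy assumed), ϕ(x) = x · x and the vector field w(x) are arbitrary choices made only for this check.

import numpy as np

phi = lambda x: x @ x                                             # scalar field (arbitrary choice)
w = lambda x: np.array([x[1] * x[2], x[0] ** 2, x[0] * x[1]])     # vector field (arbitrary choice)

def grad_scalar(f, x, h=1e-6):
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])

def div_vector(f, x, h=1e-6):
    return sum((f(x + h * e)[i] - f(x - h * e)[i]) / (2 * h) for i, e in enumerate(np.eye(3)))

x = np.array([0.4, -0.7, 1.1])
lhs = div_vector(lambda y: phi(y) * w(y), x)                      # div(phi w)
rhs = w(x) @ grad_scalar(phi, x) + phi(x) * div_vector(w, x)      # w . grad(phi) + phi div(w)
print(lhs, rhs)                                                   # the two sides agree to ~1e-8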
For a second order tensor field S(x) and a constant vector a, one similarly obtains
div (S) · a = div (S^t a). (150)
Sometimes the above relation is used as the definition of the divergence of a second
order tensor.
Example 66 Show that div (u ⊗ w) = u(div w) + (grad u)w, where u(x) ∈ ·F,
and w(x) ∈ ·G.
Let a be a constant vector. Then
div (u ⊗ w) · a = div ((w ⊗ u)a) (by using (150))
= div ((u · a)w) = w · grad (u · a) + (u · a) div w (by using (148))
= w · [(grad u)^t a] + (u · a) div w (by using (146))
= (grad u)w · a + u (div w) · a.
So we get
div (u ⊗ w) = u (div w) + (grad u)w.
15.6.3 Curl
Consider the map
ϕ : D ⊂ R^3 → R^3 .
Then the curl is defined through
(grad ϕ(x) − (grad ϕ(x))^t )v = curl ϕ(x) × v, for all v.
Generalizing curl to any dimension and any order requires exterior calculus, which
is beyond the scope of this course.
Let D be a bounded region with boundary ∂D and outward unit normal n. Then
∫_∂D ϕ n dA = ∫_D grad ϕ dV (153)
∫_∂D u · n dA = ∫_D div u dV (154)
∫_∂D S n dA = ∫_D div S dV (155)
You are already familiar with (153) and (154) from your elementary vector calculus
course. So, we will prove only (155).
Let a be a constant vector. Then
a · ∫_∂D S n dA = ∫_∂D a · S n dA = ∫_∂D S^t a · n dA
= ∫_D div (S^t a) dV = ∫_D a · div (S) dV
= a · ∫_D div (S) dV
=⇒ ∫_∂D S n dA = ∫_D div S dV.
Example 68 Show that ∫_∂D (u ⊗ n) dA = ∫_D grad u dV.
Let a be a constant vector. Then
a · ∫_∂D (u ⊗ n) dA = ∫_∂D a · (u ⊗ n) dA = ∫_∂D (a · u) n dA
= ∫_D grad (a · u) dV (by (153))
= ∫_D (grad u)^t a dV = a · ∫_D grad u dV
=⇒ ∫_∂D (u ⊗ n) dA = ∫_D grad u dV.
More generally, let π be a differentiable (not necessarily bilinear) function of f and g, and again let h(x) = π(f (x), g(x)). Writing ∂_{f(x)} and ∂_{g(x)} for the partial derivatives with respect to the two arguments,
h(x + v) = π(f (x + v), g(x + v))
= π(f (x) + D_x f (x)[v] + o(v), g(x) + D_x g(x)[v] + o(v))
≈ π(f (x) + D_x f (x)[v], g(x) + D_x g(x)[v])
= π(f (x), g(x)) + ∂_{f(x)} π(f (x), g(x))[D_x f (x)[v]] + ∂_{g(x)} π(f (x), g(x))[D_x g(x)[v]] + o(v)
= h(x) + ∂_{f(x)} π(f (x), g(x))[D_x f (x)[v]] + ∂_{g(x)} π(f (x), g(x))[D_x g(x)[v]] + o(v)
h(x + v) − h(x) = ∂_{f(x)} π(f (x), g(x))[D_x f (x)[v]] + ∂_{g(x)} π(f (x), g(x))[D_x g(x)[v]] + o(v)
So,
D_x h(x)[v] = ∂_{f(x)} π(f (x), g(x))[D_x f (x)[v]] + ∂_{g(x)} π(f (x), g(x))[D_x g(x)[v]]. (156)
We now work in curvilinear coordinates, with covariant basis {g_i} at the point x with local coordinates u. Here g_ij = g_i · g_j is the metric tensor, and it is positive definite when the space is Riemannian.
Denoting the local coordinate system at a point x + dx by u + du, the change
in the covariant basis becomes {g_i + dg_i }. Now our task is to find dg_i .
Since x = x(u) and g_i = ∂x/∂u^i , we define the Christoffel symbols
Γ^k_ij = ∂g_i/∂u^j · g^k and Γ_ijk = ∂g_i/∂u^j · g_k ,
so that dg_i = (∂g_i/∂u^j ) du^j = Γ^k_ij du^j g_k . They are symmetric in the indices i and j:
Γ^k_ij = ∂g_i/∂u^j · g^k = ∂/∂u^j (∂x/∂u^i ) · g^k = ∂/∂u^i (∂x/∂u^j ) · g^k = ∂g_j/∂u^i · g^k = Γ^k_ji
and
Γ_ijk = ∂g_i/∂u^j · g_k = ∂/∂u^j (∂x/∂u^i ) · g_k = ∂/∂u^i (∂x/∂u^j ) · g_k = ∂g_j/∂u^i · g_k = Γ_jik .
Example 69 Show that ∂g^i/∂u^j = −Γ^i_{mj} g^m .
We take advantage of the relation g^i · g_m = δ^i_m . Taking the derivative with respect to
u^j we write
∂g^i/∂u^j · g_m + g^i · ∂g_m/∂u^j = 0
∂g^i/∂u^j · g_m = −Γ^i_{mj}
∂g^i/∂u^j = −Γ^i_{mj} g^m (162)
16.1 Ricci identities
∂_k g_ij = (∂_k g_i ) · g_j + g_i · (∂_k g_j ) (where ∂_k ≡ ∂/∂u^k )
= Γ_ikj + Γ_jki . (163)
Writing the analogous relations for ∂_i g_jk and ∂_j g_ki , and combining them using the symmetry of Γ in its first two indices, we obtain
Γ_ijk = (1/2) ( −∂_k g_ij + ∂_i g_jk + ∂_j g_ki ) (166)
Similarly, raising the third index with the inverse metric,
Γ^m_ij = g^{mk} Γ_ijk = (1/2) g^{mk} ( −∂_k g_ij + ∂_i g_jk + ∂_j g_ki ) (167)
This means that, knowing the metric, we can compute the Christoffel symbols.
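As an illustration of this statement, the sketch below (Python/numpy assumed) evaluates Γ^m_ij from the metric alone via (167), using central differences of g_ij for plane polar coordinates, where the nonzero symbols are known to be Γ^r_θθ = −r and Γ^θ_rθ = Γ^θ_θr = 1/r.

import numpy as np

def metric(u):
    r, _ = u
    return np.diag([1.0, r ** 2])        # g_ij for plane polar coordinates u = (r, theta)

def christoffel(u, h=1e-6):
    ginv = np.linalg.inv(metric(u))
    # dg[k, i, j] = d g_ij / d u^k, by central differences
    dg = np.array([(metric(u + h * e) - metric(u - h * e)) / (2 * h) for e in np.eye(2)])
    Gamma = np.zeros((2, 2, 2))          # Gamma[m, i, j] = Gamma^m_ij
    for m in range(2):
        for i in range(2):
            for j in range(2):
                Gamma[m, i, j] = 0.5 * sum(ginv[m, k] * (dg[i, j, k] + dg[j, k, i] - dg[k, i, j])
                                           for k in range(2))
    return Gamma

G = christoffel(np.array([2.0, 0.3]))    # evaluate at r = 2
print(G[0, 1, 1])                        # Gamma^r_{theta theta} = -r  -> -2.0
print(G[1, 0, 1], G[1, 1, 0])            # Gamma^theta_{r theta}  = 1/r -> 0.5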
Consider
ϕ : D → R,
and we have dϕ = (∂ϕ/∂u^i ) du^i for a scalar function ϕ(u). The gradient could then be written as
grad ϕ(u) = (∂ϕ/∂u^i ) g^i . (169)
Similarly, consider
ϕ : D → R^n ,
and we have dϕ = (∂ϕ/∂u^i ) du^i for a vector mapping ϕ(u). The gradient could now be written as
grad ϕ(u) = (∂ϕ/∂u^i ) ⊗ g^i . (171)
If ϕ = ϕ^i g_i , then
∂ϕ/∂u^i = ∂/∂u^i (ϕ^k g_k ) = (∂ϕ^k/∂u^i ) g_k + ϕ^k ∂g_k/∂u^i
= (∂ϕ^k/∂u^i ) g_k + ϕ^k Γ^m_{ki} g_m .
So, we write
grad ϕ(u) = ∂ϕ/∂u^i ⊗ g^i = ( ∂ϕ^m/∂u^i + ϕ^k Γ^m_{ki} ) g_m ⊗ g^i = ϕ^m|i g_m ⊗ g^i ,
where
ϕ^m|i = ∂ϕ^m/∂u^i + ϕ^k Γ^m_{ki}
is the covariant derivative of the contravariant components ϕ^m .
If ϕ = ϕ_k g^k , then
∂ϕ/∂u^i = ∂/∂u^i (ϕ_k g^k ) = (∂ϕ_k/∂u^i ) g^k + ϕ_k ∂g^k/∂u^i
= (∂ϕ_k/∂u^i ) g^k − ϕ_k Γ^k_{mi} g^m , from equation (162).
So, we write
grad ϕ(u) = ∂ϕ/∂u^i ⊗ g^i = ( ∂ϕ_m/∂u^i − ϕ_k Γ^k_{mi} ) g^m ⊗ g^i = ϕ_m|i g^m ⊗ g^i ,
where
ϕ_m|i = ∂ϕ_m/∂u^i − ϕ_k Γ^k_{mi} .
If T = T^{ij} g_i ⊗ g_j , we write
grad T (u) = ∂T/∂u^k ⊗ g^k = ( ∂T^{ij}/∂u^k + T^{tj} Γ^i_{kt} + T^{it} Γ^j_{kt} ) g_i ⊗ g_j ⊗ g^k = T^{ij}|k g_i ⊗ g_j ⊗ g^k ,
where
T^{ij}|k = ∂T^{ij}/∂u^k + T^{tj} Γ^i_{kt} + T^{it} Γ^j_{kt} .
If T = T_{ij} g^i ⊗ g^j , we write
grad T (u) = ∂T/∂u^k ⊗ g^k = ( ∂T_{ij}/∂u^k − T_{it} Γ^t_{jk} − T_{jt} Γ^t_{ik} ) g^i ⊗ g^j ⊗ g^k = T_{ij}|k g^i ⊗ g^j ⊗ g^k ,
where
T_{ij}|k = ∂T_{ij}/∂u^k − T_{it} Γ^t_{jk} − T_{jt} Γ^t_{ik} (173)
Example 70 Show that the covariant derivative of the metric coefficient gij is
always zero.
Substituting g_ij in place of T_ij in equation (173),
g_ij|k = ∂g_ij/∂u^k − g_it Γ^t_{jk} − g_jt Γ^t_{ik}
= ∂g_ij/∂u^k − Γ_jki − Γ_ikj
= ∂g_ij/∂u^k − (Γ_jki + Γ_ikj )
= ∂g_ij/∂u^k − ∂g_ij/∂u^k (from equation (163))
= 0. (174)
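The result (174) can also be seen numerically. Assuming Python/numpy and the closed-form polar Christoffel symbols quoted in the comments, the sketch below evaluates g_ij|k by central differences of the metric and finds it zero to rounding error.

import numpy as np

metric = lambda u: np.diag([1.0, u[0] ** 2])     # polar metric, u = (r, theta)

def christoffel_polar(u):
    # closed-form symbols for this metric: Gamma^r_tt = -r, Gamma^t_rt = Gamma^t_tr = 1/r
    G = np.zeros((2, 2, 2))                      # G[m, i, j] = Gamma^m_ij
    G[0, 1, 1] = -u[0]
    G[1, 0, 1] = G[1, 1, 0] = 1.0 / u[0]
    return G

u, h = np.array([1.7, 0.8]), 1e-6
g = metric(u)
Gamma = christoffel_polar(u)
dg = np.array([(metric(u + h * e) - metric(u - h * e)) / (2 * h) for e in np.eye(2)])  # dg[k,i,j]

cov = np.array([[[dg[k, i, j]
                  - sum(g[i, t] * Gamma[t, j, k] for t in range(2))
                  - sum(g[j, t] * Gamma[t, i, k] for t in range(2))
                  for k in range(2)] for j in range(2)] for i in range(2)])
print(np.allclose(cov, 0.0))                     # True: g_ij|k vanishes identically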
Example 71 Show that Γ^i_{ik} = (1/2) g^{ij} ∂g_ij/∂u^k .
From (163),
∂g_ij/∂u^k = g_it Γ^t_{jk} + g_jt Γ^t_{ik}
g^{ij} ∂g_ij/∂u^k = g^{ij} g_it Γ^t_{jk} + g^{ij} g_jt Γ^t_{ik}
= δ^j_t Γ^t_{jk} + δ^i_t Γ^t_{ik}
= Γ^j_{jk} + Γ^i_{ik} = 2Γ^i_{ik} =⇒ Γ^i_{ik} = (1/2) g^{ij} ∂g_ij/∂u^k (175)
Example 72 Show that Γ^i_{ik} = (1/√g) ∂√g/∂u^k , where g = det(g_ij ).
Using Jacobi's formula ∂g/∂u^k = g g^{ij} ∂g_ij/∂u^k together with (175),
Γ^i_{ik} = (1/2) g^{ij} ∂g_ij/∂u^k = (1/(2g)) ∂g/∂u^k = (1/√g) ∂√g/∂u^k . (176)
Consider a vector field ϕ(u) = ϕ^k g_k . Then
div ϕ(u) = grad ϕ(u) : I = tr (grad ϕ(u)) = ϕ^i|i
= ∂ϕ^i/∂u^i + ϕ^k Γ^i_{ki}
= ∂ϕ^i/∂u^i + ϕ^k (1/√g) ∂√g/∂u^k (from (176))
= (1/√g) ∂(√g ϕ^k )/∂u^k . (177)
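Formula (177) can be cross-checked against the ordinary Cartesian divergence, since div ϕ is a coordinate-independent scalar. The sketch below (Python/numpy assumed; the contravariant components ϕ^r, ϕ^θ are an arbitrary choice) evaluates both at the same point in plane polar coordinates.

import numpy as np

h = 1e-6
phi_r = lambda u: u[0] * np.cos(u[1])            # phi^r(r, theta)      (arbitrary choice)
phi_t = lambda u: np.sin(u[1]) / u[0]            # phi^theta(r, theta)  (arbitrary choice)

def div_formula(u):
    # (1/sqrt(g)) d(sqrt(g) phi^k)/du^k with sqrt(g) = r for the polar metric diag(1, r^2)
    er, et = np.array([h, 0.0]), np.array([0.0, h])
    f_r = lambda w: w[0] * phi_r(w)              # sqrt(g) * phi^r
    f_t = lambda w: w[0] * phi_t(w)              # sqrt(g) * phi^theta
    return ((f_r(u + er) - f_r(u - er)) / (2 * h) + (f_t(u + et) - f_t(u - et)) / (2 * h)) / u[0]

def div_cartesian(x):
    # the same field written out in Cartesian components, diverged by central differences
    def v(y):
        r, th = np.hypot(y[0], y[1]), np.arctan2(y[1], y[0])
        g_r = np.array([np.cos(th), np.sin(th)])             # covariant basis vector g_r
        g_t = r * np.array([-np.sin(th), np.cos(th)])        # covariant basis vector g_theta
        return phi_r((r, th)) * g_r + phi_t((r, th)) * g_t
    return sum((v(x + h * e)[i] - v(x - h * e)[i]) / (2 * h) for i, e in enumerate(np.eye(2)))

u = np.array([1.3, 0.6])                                     # the point (r, theta)
x = u[0] * np.array([np.cos(u[1]), np.sin(u[1])])            # the same point in Cartesian form
print(div_formula(u), div_cartesian(x))                      # both give div(phi), agreeing closely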
For a second order tensor T = T^{ij} g_i ⊗ g_j , we have
grad T (u) = T^{ij}|k g_i ⊗ g_j ⊗ g^k .
Then
div T (u) = grad T (u) : I = T^{ij}|k g_i δ^k_j
= T^{ik}|k g_i .
Note that we consider the form I = δ^m_n g_n ⊗ g^m just to have simplified contraction
operations. You can consider any form among the four possible options. You will
get the same final results; however, the intermediate steps will be different.