A Gentle Introduction To Tensors
Boaz Porat
Department of Electrical Engineering
Technion Israel Institute of Technology
boaz@ee.technion.ac.il
All vector spaces discussed in this document are over the field R of real
numbers. We will not mention this every time but assume it implicitly.
Chapter 1
Let us begin with the simplest possible setup: that of plane vectors. We
think of a plane vector as an arrow having direction and length, as shown in
Figure 1.1.
The length of a physical vector must have physical units; for example: distance is measured in meters, velocity in meters/second, force in newtons, electric field in volts/meter, and so on. The length of a mathematical vector is
a pure number. Length is absolute, but direction must be measured relative
to some (possibly arbitrarily chosen) reference direction, and has units of ra-
dians (or, less conveniently, degrees). Direction is usually assumed positive
in counterclockwise rotation from the reference direction.
Vectors are abstract objects, but they may be manipulated numerically and
algebraically by expressing them in bases. Recall that a basis in a plane is an ordered pair of linearly independent vectors, say $(e_1, e_2)$.
Figure 1.1: A plane vector having length and direction
The basis depicted in Figure 1.2 happens to be orthonormal; that is, the two
vectors are perpendicular and both have unity length. However, a basis need
not be orthonormal. Figure 1.3 shows another basis $(\tilde{e}_1, \tilde{e}_2)$, whose vectors are neither perpendicular nor of equal length.
Let x be an arbitrary plane vector and let (e1 , e2 ) be some basis in the plane.
Then x can be expressed in a unique manner as a linear combination of the
basis vectors; that is,
$$x = e_1 x^1 + e_2 x^2 \qquad (1.1)$$
The two real numbers $(x^1, x^2)$ are called the coordinates of $x$ in the basis $(e_1, e_2)$.
Figure 1.3: Another basis in the plane
Consider two bases: $(e_1, e_2)$, which we will henceforth call the old basis, and $(\tilde{e}_1, \tilde{e}_2)$, which we will call the new basis. See, for example, Figure 1.4, in which we have brought the two bases to a common origin.
Figure 1.4: Two bases in the plane
The matrix S is the direct transformation matrix from the old basis to the
new basis. This matrix is uniquely defined by the two bases. Note that
the rows of S appear as superscripts and the columns appear as subscripts;
remember this convention for later.
A special case occurs when the new basis is identical with the old basis. In this case, the transformation matrix becomes the identity matrix $I$, where $I^i_i = 1$ and $I^i_j = 0$ for $i \neq j$.
where $T = S^{-1}$ or, equivalently, $ST = TS = I$. The object $T^i_j$, $1 \le i, j \le 2$, is the inverse transformation matrix.
In summary, with each pair of bases there are associated two transformations.
Once we agree which of the two bases is labeled old and which is labeled new,
there is a unique direct transformation (from the old to the new) and a unique
inverse transformation (from the new to the old). The two transformations
are the inverses of each other.
The coordinates $(\tilde{x}^1, \tilde{x}^2)$ differ from $(x^1, x^2)$, but the vector $x$ is the same.
The situation is depicted in Figure 1.5. The vector x is shown in red. The
basis $(e_1, e_2)$ and its associated coordinates $(x^1, x^2)$ are shown in black; the basis $(\tilde{e}_1, \tilde{e}_2)$ and its associated coordinates $(\tilde{x}^1, \tilde{x}^2)$ are shown in blue.
We now pose the following question: how are the coordinates $(\tilde{x}^1, \tilde{x}^2)$ related to $(x^1, x^2)$? To answer this question, recall the transformation formulas between the two bases and perform the following calculation:
$$\begin{pmatrix} \tilde{e}_1 & \tilde{e}_2 \end{pmatrix}\begin{pmatrix} \tilde{x}^1 \\ \tilde{x}^2 \end{pmatrix} = \begin{pmatrix} e_1 & e_2 \end{pmatrix} S \begin{pmatrix} \tilde{x}^1 \\ \tilde{x}^2 \end{pmatrix} = x = \begin{pmatrix} e_1 & e_2 \end{pmatrix}\begin{pmatrix} x^1 \\ x^2 \end{pmatrix} \qquad (1.7)$$
Since (1.7) must hold identically for an arbitrary vector $x$, we are led to conclude that
$$S \begin{pmatrix} \tilde{x}^1 \\ \tilde{x}^2 \end{pmatrix} = \begin{pmatrix} x^1 \\ x^2 \end{pmatrix} \qquad (1.8)$$
or, equivalently,
$$\begin{pmatrix} \tilde{x}^1 \\ \tilde{x}^2 \end{pmatrix} = S^{-1}\begin{pmatrix} x^1 \\ x^2 \end{pmatrix} = T \begin{pmatrix} x^1 \\ x^2 \end{pmatrix} \qquad (1.9)$$
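To make this concrete, here is a short numerical sketch in Python; the particular basis and matrix $S$ are illustrative choices, not taken from the text. It verifies that (1.9) leaves the vector itself unchanged:

```python
import numpy as np

# Illustrative example: an orthonormal old basis and a hypothetical
# direct transformation matrix S (columns of e are the basis vectors).
e = np.array([[1.0, 0.0],
              [0.0, 1.0]])           # old basis (e1, e2)
S = np.array([[1.0, 1.0],
              [0.0, 2.0]])           # new basis: e_new_i = e_j S^j_i
e_new = e @ S
T = np.linalg.inv(S)                 # inverse transformation

x_old = np.array([3.0, 5.0])         # coordinates of x in the old basis
x_new = T @ x_old                    # (1.9): new coordinates via T

# The abstract vector e_i x^i is the same in both bases:
assert np.allclose(e @ x_old, e_new @ x_new)
```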
Figure 1.5: A vector in two bases
1.4 Generalization to Higher-Dimensional Vector Spaces
We assume that you have studied a course in linear algebra; therefore you are
familiar with general (abstract) finite-dimensional vector spaces. In particu-
lar, an n-dimensional vector space possesses a set of n linearly independent
vectors, but no set of $n+1$ linearly independent vectors. A basis for an $n$-dimensional vector space $\mathcal{V}$ is any ordered set of $n$ linearly independent vectors $(e_1, e_2, \ldots, e_n)$. An arbitrary vector $x$ in $\mathcal{V}$ can be expressed as a linear
combination of the basis vectors:
$$x = \sum_{i=1}^{n} e_i x^i \qquad (1.10)$$
The real numbers in (1.10) are called linear coordinates. We will refer to them
simply as coordinates, until we need to distinguish them from curvilinear
coordinates in Chapter 2. Note again our preferred convention of writing
the vector on the left of the scalar. If a second basis $(\tilde{e}_1, \tilde{e}_2, \ldots, \tilde{e}_n)$ is given,
there exist unique transformations S and T such that
$$\tilde{e}_i = \sum_{j=1}^{n} e_j S^j_i, \qquad e_i = \sum_{j=1}^{n} \tilde{e}_j T^j_i, \qquad T = S^{-1} \qquad (1.11)$$
The coordinates of x in the new basis are related to those in the old basis
according to the transformation law
$$\tilde{x}^i = \sum_{j=1}^{n} T^i_j x^j \qquad (1.12)$$
Equation (1.12) is derived in exactly the same way as (1.9). Thus, vectors in
an n-dimensional space are contravariant.
Note that the rows of S appear as superscripts and the columns appear as
subscripts. This convention is important and should be kept in mind.
orthogonal or normal, although you may know these definitions from previ-
ous studies; we will return to this topic later. All that is needed here are
the concepts of linear dependence/independence, finite dimension, basis, and
transformation of bases.
$$x = e_j x^j, \qquad \tilde{e}_i = e_j S^j_i, \qquad e_i = \tilde{e}_j T^j_i, \qquad \tilde{x}^i = T^i_j x^j, \qquad x^i = S^i_j \tilde{x}^j \qquad (1.13)$$
5. Indices other than dummy indices may appear any number of times
and are free, in the same sense as common algebraic variables. For
example, i in (1.13) is a free index. A free index may be replaced by
any other index that is not already in use, provided that replacement
is consistent throughout the equation. For example, replacing all is by
ms in (1.13) will not change the meaning.
6. The last rule is not in common use and we include it ad-hoc here, for
a reason to be explained soon: Attempt to place the symbol carrying
the summation index as subscript on the left of the symbol carrying
the summation index as superscript. For example, write $a^i_j x^j$ or $x_j a^j_i$, but avoid $x^j a^i_j$ or $a^j_i x_j$.
Einstein's notation takes some getting used to, but then it becomes natural and convenient. As an example, let us use it for multiplying two matrices. Let $a^i_j$ and $b^k_m$ stand for the square matrices $A$ and $B$, and recall that superscripts denote row indices and subscripts denote column indices. Then $c^i_k = a^i_j b^j_k$ stands for the product $C = AB$, and $d^i_j = b^i_k a^k_j$ stands for the product $D = BA$. You may write $d^i_j = a^k_j b^i_k$ and this will also be a legal expression for $D$, but the untrained reader may interpret it as $C$. Rule 6 helps avoid such potential confusion because the order of terms in Einstein's notation then agrees with the order of operands in matrix multiplication. Sometimes, however, we must abandon rule 6 because other factors take precedence. For example, rule 6 cannot be used in the expression $a^{ij}_k b^k_{mj}$.
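The summation convention maps directly onto numpy's einsum, which can serve as a quick sanity check of such index expressions; here is a small sketch with arbitrary matrices:

```python
import numpy as np

a = np.arange(9.0).reshape(3, 3)        # a^i_j: rows are superscripts
b = np.arange(9.0).reshape(3, 3) + 1.0  # b^j_k: columns are subscripts

# c^i_k = a^i_j b^j_k: the repeated index j is summed automatically
c = np.einsum('ij,jk->ik', a, b)
assert np.allclose(c, a @ b)            # the product C = AB

# d^i_j = b^i_k a^k_j stands for D = BA
d = np.einsum('ik,kj->ij', b, a)
assert np.allclose(d, b @ a)
```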
1.6 Covectors
We conclude that the members of the dual basis are transformed by change of
basis using the inverse transformation T . It follows, as in the case of vectors,
that the coordinates of covectors are transformed by change of basis using
the direct transformation S:
$$\tilde{y}_i = y_j S^j_i \qquad (1.18)$$
So, in summary, covectors behave opposite to the behavior of vectors under
change of basis. Vector bases are transformed using S and vector coordinates
are transformed using T . Covector bases are transformed using T and cov-
ector coordinates are transformed using S. Consequently, covectors are said
to be covariant whereas vectors, as we recall, are contravariant.
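This opposite behavior can be checked numerically. In the sketch below (with an arbitrary invertible $S$), the pairing $y_i x^i$ comes out the same in both bases precisely because vectors use $T$ while covectors use $S$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
S = rng.normal(size=(n, n))            # direct transformation (assumed invertible)
T = np.linalg.inv(S)

x = rng.normal(size=n)                 # contravariant (vector) coordinates
y = rng.normal(size=n)                 # covariant (covector) coordinates

x_new = T @ x                          # vectors transform with T, as in (1.12)
y_new = S.T @ y                        # covectors transform with S, as in (1.18)

# The scalar y_i x^i is independent of the basis:
assert np.isclose(y @ x, y_new @ x_new)
```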
of basis. We want to express the partial derivatives of f (x) with respect to
the coordinates of the new basis. Using the chain rule for partial derivatives
we obtain
$$\frac{\partial f}{\partial \tilde{x}^1} = \frac{\partial f}{\partial x^1}\frac{\partial x^1}{\partial \tilde{x}^1} + \frac{\partial f}{\partial x^2}\frac{\partial x^2}{\partial \tilde{x}^1}, \qquad \frac{\partial f}{\partial \tilde{x}^2} = \frac{\partial f}{\partial x^1}\frac{\partial x^1}{\partial \tilde{x}^2} + \frac{\partial f}{\partial x^2}\frac{\partial x^2}{\partial \tilde{x}^2} \qquad (1.19)$$
Or, in Einstein's notation:
$$\frac{\partial f}{\partial \tilde{x}^i} = \frac{\partial f}{\partial x^j}\frac{\partial x^j}{\partial \tilde{x}^i} \qquad (1.20)$$
But from (1.13) we know that
$$x^j = S^j_i \tilde{x}^i \quad \Rightarrow \quad \frac{\partial x^j}{\partial \tilde{x}^i} = S^j_i \qquad (1.21)$$
If we try to substitute (1.21) in (1.20) we will find that things do not quite work well, because the index $j$ will appear twice as a superscript, once in $x^j$ and once in $S^j_i$. Therefore, such substitution will result in an expression that does not conform to Einstein's notation. To remedy this problem, we introduce a new notational device for the partial derivatives, as follows:
$$\partial_k f = \frac{\partial f}{\partial x^k}, \qquad \tilde{\partial}_k f = \frac{\partial f}{\partial \tilde{x}^k} \qquad (1.22)$$
Then we can combine (1.20), (1.21), and (1.22) to find
$$\tilde{\partial}_i f = (\partial_j f)\, S^j_i \qquad (1.23)$$
1.7 Linear Operators on Vector Spaces
Having covered vectors and covectors, and their laws of transformation under
change of basis, we are now ready to introduce new geometric objects. A
linear operator on an $n$-dimensional vector space $\mathcal{V}$ is a function $f : \mathcal{V} \to \mathcal{V}$ which is additive and homogeneous. In terms of coordinates with respect to a chosen basis, the operator acts as
$$y^i = f^i_j x^j \qquad (1.25)$$
Remark: Although the notation $f^i_j$ resembles the notations $S^i_j$, $T^i_j$ used for the direct and inverse basis transformations, there is a subtle but important difference in our interpretation of these notations. The objects $S^i_j$, $T^i_j$ are nothing to us but square arrays of real numbers and their definitions depend on a specific choice of bases, so they both depend on two bases (the old and the new). By contrast, we interpret $f^i_j$ as a basis-independent geometric object, whose numerical representation depends on a single chosen basis.
Let us explore the transformation law for $f^i_j$ when changing from a basis $e_i$ to a basis $\tilde{e}_i$. We find from the contravariance of $x^i$ and $y^i$ that
Hence we conclude that, in order for the relation (1.25) to hold in the new
basis, we must have
$$\tilde{f}^k_m = T^k_i\, f^i_j\, S^j_m \qquad (1.27)$$
Expression (1.27) is the transformation law for linear operators. As we see,
the transformation involves both the direct and the inverse transformations.
Therefore a linear operator is contravariant in one index and covariant in
the second index. The transformation (1.27) can also be expressed in matrix form as $\tilde{F} = TFS = S^{-1}FS$, which is the way it is presented in linear
algebra.
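In matrix terms this is just a similarity transformation, which is easy to verify numerically; the following sketch uses arbitrary $F$ and $S$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
F = rng.normal(size=(n, n))            # f^i_j in the old basis
S = rng.normal(size=(n, n))            # direct basis transformation
T = np.linalg.inv(S)

F_new = T @ F @ S                      # (1.27) in matrix form

x = rng.normal(size=n)
# Applying the operator commutes with changing coordinates:
assert np.allclose(F_new @ (T @ x), T @ (F @ x))
# The trace is preserved, foreshadowing the invariance of contractions:
assert np.isclose(np.trace(F), np.trace(F_new))
```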
1.8 Tensors
Vectors, covectors, and linear operators are all special cases of tensors. We
will not attempt to define tensors in abstract terms, but settle for a coordinate-
based definition, as follows.
$$\tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s} = T^{i_1}_{k_1} \cdots T^{i_r}_{k_r}\, a^{k_1 \ldots k_r}_{m_1 \ldots m_s}\, S^{m_1}_{j_1} \cdots S^{m_s}_{j_s} \qquad (1.28)$$
We know from linear algebra that a vector is an abstract object, which should
be distinguished from its coordinate representation. Whereas the coordinates
depend on the choice of basis, the vector itself is invariant. The same applies
to tensors. Tensors are abstract objects and what we see in (1.28) is only a
law of transformation of the coordinates of the tensor, while the tensor itself
is invariant.
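For a concrete instance of (1.28), the sketch below transforms a (1,2)-tensor with einsum; the array and matrices are arbitrary, and transforming back with the roles of $S$ and $T$ exchanged recovers the original coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
a = rng.normal(size=(n, n, n))         # a^i_{jk}: one contravariant, two covariant
S = rng.normal(size=(n, n))            # S^m_j: rows = superscripts, cols = subscripts
T = np.linalg.inv(S)

# (1.28) with r = 1, s = 2: a_new^i_{jk} = T^i_p a^p_{qm} S^q_j S^m_k
a_new = np.einsum('ip,pqm,qj,mk->ijk', T, a, S, S)

# The inverse change of basis restores the original coordinates:
back = np.einsum('ip,pqm,qj,mk->ijk', S, a_new, T, T)
assert np.allclose(back, a)
```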
the address as an integer represented in radix $n$ and call an $n$-radix digit an "enit" (a term derived from "bit"). Then $k_1 \ldots k_r m_1 \ldots m_s$ may be thought of as
an address consisting of r + s enits. However, the r leftmost enits and the
s rightmost enits are used differently in specifying the address: the former
are used as superscripts and the latter as subscripts. Thus $k_1 \ldots k_r$ is the contravariant part of the address and $m_1 \ldots m_s$ is the covariant part
of the address. The contents of the storage, which is to say the coordinates,
depend on the basis. Under change of basis, the transformation law (1.28)
applies.
1.9.1 Addition
Two tensors of the same type can be added term-by-term. The expression
$$c^{i_1 \ldots i_r}_{j_1 \ldots j_s} = a^{i_1 \ldots i_r}_{j_1 \ldots j_s} + b^{i_1 \ldots i_r}_{j_1 \ldots j_s} \qquad (1.30)$$
means that each coordinate on the left-hand side holds the sum of the corresponding coordinates on the right-hand side. We can write tensor addition symbolically
as c = a + b. Tensor addition is obviously commutative. Furthermore, it is
straightforward to verify that the change-of-basis transformation law holds
for c, hence c is indeed a tensor.
Remarks:
1. Tensors of different ranks cannot be added.
2. The tensor 0 can be defined as a tensor of any rank whose coordinates
are all 0. Therefore, 0 is not a uniquely defined tensor, but if we accept
this vague definition, then a + 0 = 0 + a = a holds for any tensor a.
$$c^{i_1 \ldots i_r}_{j_1 \ldots j_s} = x\, a^{i_1 \ldots i_r}_{j_1 \ldots j_s} \qquad (1.31)$$
That the tensor product is indeed a tensor is almost self-evident and is left to the reader as an exercise. Simply apply the change-of-basis transformation to each factor, examine the resulting expression, and compare it with the change-of-basis transformation needed for $c$ to be a tensor.
$$(xa + yb) \otimes c = x(a \otimes c) + y(b \otimes c), \qquad c \otimes (xa + yb) = x(c \otimes a) + y(c \otimes b) \qquad (1.34)$$
denotes their tensor product. This tensor product is multilinear in the fol-
lowing sense: For every k we have
$$a(1) \otimes \cdots \otimes [x\,a(k) + y\,b] \otimes \cdots \otimes a(m) = x[a(1) \otimes \cdots \otimes a(k) \otimes \cdots \otimes a(m)] + y[a(1) \otimes \cdots \otimes b \otimes \cdots \otimes a(m)] \qquad (1.35)$$
1.9.4 Contraction
Note how the index $k$ disappears through the implied summation and the resulting tensor has type $(r-1, s-1)$. The operation (1.36) is called contraction. For a general $(r, s)$-tensor there are $rs$ possible contractions, one for each pair of contravariant and covariant indices.
To see that (1.36) indeed defines a tensor, consider first the simple case of a $(1,1)$-tensor $a^g_h$. Then $a^g_g$ is the trace of $a$. Applying a change-of-basis transformation to $a$ and then computing the trace gives
$$T^m_g\, a^g_h\, S^h_m = a^g_h\, \delta^h_g = a^g_g \qquad (1.37)$$
Hence the trace is invariant under change of basis. Now replace $a^g_h$ by a general tensor and repeat (1.37) for the chosen pair of indices to find
$$T^k_{g_i}\, a^{g_1 \ldots g_{i-1}\, g_i\, g_{i+1} \ldots g_r}_{h_1 \ldots h_{j-1}\, h_j\, h_{j+1} \ldots h_s}\, S^{h_j}_k = a^{g_1 \ldots g_{i-1}\, g_i\, g_{i+1} \ldots g_r}_{h_1 \ldots h_{j-1}\, g_i\, h_{j+1} \ldots h_s} = b^{g_1 \ldots g_{i-1}\, g_{i+1} \ldots g_r}_{h_1 \ldots h_{j-1}\, h_{j+1} \ldots h_s} \qquad (1.38)$$
Now, applying the transformations associated with the remaining indices would complete the change-of-basis transformation for (1.36), resulting in $\tilde{b}^{g_1 \ldots g_{i-1}\, g_{i+1} \ldots g_r}_{h_1 \ldots h_{j-1}\, h_{j+1} \ldots h_s}$ as required.
Consider the set T (r, s) of all (r, s)-tensors, including the zero tensor 0.
Equipped with the addition operation of two tensors and the multiplication
operation of multiplication of a tensor by a scalar, this set becomes a vector space of dimension $n^{r+s}$. Let $(e_1, \ldots, e_n)$ be a basis for $\mathcal{V}$ and $(f^1, \ldots, f^n)$ a basis for $\mathcal{V}^*$. Then all tensor products $e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes f^{j_1} \otimes \cdots \otimes f^{j_s}$, whose number is $n^{r+s}$, constitute a basis for $T(r, s)$. To save space, we will henceforth use the abridged notation $e_{i_1} \ldots e_{i_r} f^{j_1} \ldots f^{j_s}$ for the basis tensors (i.e., the operation $\otimes$ between the basis vectors is implied).
$$a = a^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, e_{i_1} \ldots e_{i_r}\, f^{j_1} \ldots f^{j_s} \qquad (1.40)$$
Expression (1.40) brings out a problem that is both fundamental and notational. We implicitly assumed that, when writing a basis tensor $e_{i_1} \ldots e_{i_r} f^{j_1} \ldots f^{j_s}$, all the basis vectors come first and all the dual basis covectors come next. In general, however, the factors $\mathcal{V}$ and $\mathcal{V}^*$ may appear in any order, for example
$$\mathcal{V} \otimes \mathcal{V} \otimes \mathcal{V}^*, \qquad \mathcal{V} \otimes \mathcal{V}^* \otimes \mathcal{V}, \qquad \mathcal{V}^* \otimes \mathcal{V} \otimes \mathcal{V}$$
However, the abridged convention works only for specific $r$ and $s$, and a specific order of contravariance/covariance.
The full notation (1.40) disambiguates the tensor; for example, in the case of $(2,1)$-tensors, we understand $a^{ij}_k e_i e_j f^k$, $a^{ij}_k f^k e_i e_j$, and $a^{ij}_k e_i f^k e_j$ as being different. But what about the simplified coordinate notation $a^{ij}_k$ that we have used until now? In fact, many authors distinguish between $a^{ij}{}_k$, $a_k{}^{ij}$, and $a^i{}_k{}^j$.
The coordinates of general tensors are often written as $a^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}$ when the contravariant basis vectors are meant to come first. Authors who use such notations may also use $S^i{}_j$ for transformation matrices. Another common notation uses primed indices, with the same symbol standing for both the direct transformation matrix and its inverse; one must then pay attention to the location of the primes to distinguish between the two transformations. Such notations quickly become awkward and difficult to follow.
We have chosen to adopt a pragmatic approach. We will continue to use $a^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ whenever the distinction between the $\binom{r+s}{r}$ possibilities is unimportant; such was the case until now. In the future, we will adopt the notation $a^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}$ and its mixed variations only when the distinction is significant and ambiguity may arise otherwise.
1. Bilinearity:
$$(a u_1 + b u_2) \cdot v = a(u_1 \cdot v) + b(u_2 \cdot v)$$
$$u \cdot (a v_1 + b v_2) = a(u \cdot v_1) + b(u \cdot v_2)$$
for all $u, u_1, u_2, v, v_1, v_2 \in \mathcal{V}$ and $a, b \in \mathbb{R}$.
Such a function is called an inner product on the space $\mathcal{V}$. If, in addition, the following property holds:
2. Symmetry:
$$u \cdot v = v \cdot u$$
for all $u, v \in \mathcal{V}$, then the inner product is called symmetric.
If, furthermore, the following property holds:
3. Nondegeneracy:
$$u \cdot x = 0 \ \text{for all}\ u \in \mathcal{V} \quad \Rightarrow \quad x = 0$$
then the inner product is called nondegenerate.
1.11.2 The Gram Matrix
If $n$ is the dimension of $G$ then, by Sylvester's theorem, $n$ can be expressed as $n = n_+ + n_- + n_0$, according to the numbers of $+1$, $-1$, and $0$ entries along the diagonal of the diagonalized Gram matrix. The triplet $(n_+, n_-, n_0)$ is called the signature of $G$. Sylvester's theorem tells us that the signature is invariant under the congruence relation. If $G$ is nonsingular, as in our case, then necessarily $n_0 = 0$ and $n = n_+ + n_-$. If $G$ is positive definite, then $n_- = n_0 = 0$ and $n = n_+$. This happens if and only if the space is Euclidean.
$$u \cdot v = g_{ij}\, u^i v^j \qquad (1.44)$$
The $(0,2)$-tensor $g_{ij}$ is called the metric tensor of the inner product space.
Like all tensors, it is a geometric object, invariant under change-of-basis
transformations. By Sylvesters theorem, there exists a basis which makes
the metric diagonal and reveals the signature of the space. This signature
is uniquely defined by the definition of the inner product. It immediately
follows from (1.44) that the inner product is invariant under change of basis.
This is not surprising, since the definition of inner product does not depend
on a basis.
$$g_{jk}\, g^{ki} = \delta^i_j \qquad (1.45)$$
If the vector space is Euclidean and the basis is orthonormal, then the coordinates of the metric tensor in this basis are, by definition, $g_{ij} = \delta_{ij}$, where $\delta_{ij}$ is defined in a similar manner to the Kronecker delta $\delta^i_j$. In this case the inner product is simply given by $u \cdot v = \sum_{i=1}^{n} u^i v^i$. This is the inner-product expression usually encountered in linear algebra courses. Our discussion makes it clear that this expression holds only when the space is Euclidean and the basis is orthonormal.
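The role of the Gram matrix is easy to see numerically. In the sketch below, a hypothetical non-orthonormal basis of the Euclidean plane is chosen; formula (1.44) with $g_{ij}$ then reproduces the ordinary dot product of the underlying vectors:

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # columns: a non-orthonormal basis (e1, e2)
G = E.T @ E                            # Gram matrix: g_ij = e_i . e_j

u = np.array([2.0, 1.0])               # coordinates u^i in this basis
v = np.array([1.0, 3.0])               # coordinates v^j

lhs = np.einsum('ij,i,j->', G, u, v)   # (1.44): u . v = g_ij u^i v^j
rhs = (E @ u) @ (E @ v)                # dot product of the underlying vectors
assert np.isclose(lhs, rhs)
```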
Clearly, $x \cdot x$ is not always nonnegative, because the inner product of the Minkowski space is not positive definite. The following terminology is in use, depending on the sign of $x \cdot x$:
$$x \cdot x \ \begin{cases} < 0: & \text{timelike} \\ = 0: & \text{lightlike} \\ > 0: & \text{spacelike} \end{cases}$$
There is much more to tell about relativity theory, but this is not the place
to do so.
Let $a^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ be the coordinates of the $(r,s)$-tensor $a$ in some basis and $g_{ij}$ be the metric tensor in this basis. Let us form the tensor product $g_{pq}\, a^{i_1 \ldots i_r}_{j_1 \ldots j_s}$. This tensor has type $(r, s+2)$. Now choose one of the contravariant indices of $a$, say $i_k$. Replace $i_k$ by $q$ and perform contraction with respect to $q$. Then $q$ will disappear and we will be left with a tensor of type $(r-1, s+1)$:
$$b^{i_1 \ldots i_{k-1}\, i_{k+1} \ldots i_r}_{p\, j_1 \ldots j_s} = g_{pq}\, a^{i_1 \ldots i_{k-1}\, q\, i_{k+1} \ldots i_r}_{j_1 \ldots j_s} \qquad (1.48)$$
Raising is the dual of lowering. We start with the dual metric tensor $g^{pq}$ and form the tensor product $g^{pq}\, a^{i_1 \ldots i_r}_{j_1 \ldots j_s}$. We choose an index $j_k$, replace $j_k$ by $q$, and perform contraction with respect to $q$, obtaining
is placed in the first position. If different placement is necessary, some ad-hoc
notation must be used.
$$v_i = g_{ik}\, v^k, \qquad v^i = g^{ik}\, v_k \qquad (1.50)$$
1.13.2 The Levi-Civita Symbol
The Levi-Civita symbol $\epsilon_{i_1 i_2 \ldots i_n}$ is a function of $n$ indices, each taking values from 1 to $n$. It is therefore fully defined by $n^n$ values, one for each choice of indices. The definition of the Levi-Civita symbol is as follows:
$$\epsilon_{i_1 i_2 \ldots i_n} = \begin{cases} 1, & i_1 i_2 \ldots i_n \ \text{is an even permutation of}\ 12 \ldots n \\ -1, & i_1 i_2 \ldots i_n \ \text{is an odd permutation of}\ 12 \ldots n \\ 0, & i_1 i_2 \ldots i_n \ \text{is not a permutation of}\ 12 \ldots n \end{cases} \qquad (1.51)$$
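Definition (1.51) translates directly into a short function; the helper below (an illustrative sketch) computes the symbol for 1-based indices by counting the transpositions needed to sort them:

```python
def levi_civita(*indices):
    """Levi-Civita symbol (1.51) for 1-based indices: +1 for an even
    permutation of 1..n, -1 for an odd one, 0 otherwise."""
    n = len(indices)
    if sorted(indices) != list(range(1, n + 1)):
        return 0                       # not a permutation of 1..n
    sign, idx = 1, list(indices)
    for i in range(n):                 # sort by transpositions, tracking parity
        j = idx.index(i + 1, i)
        if j != i:
            idx[i], idx[j] = idx[j], idx[i]
            sign = -sign
    return sign

assert levi_civita(1, 2, 3) == 1       # even permutation
assert levi_civita(2, 1, 3) == -1      # odd permutation
assert levi_civita(1, 1, 3) == 0       # repeated index
```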
it is an odd permutation, and to 0 if it is not a permutation at all. It follows that
$$\tilde{\epsilon}_{j_1 j_2 \ldots j_n} = (\det S)\, \epsilon_{j_1 j_2 \ldots j_n} \qquad (1.54)$$
The expression (1.54) indicates why $\epsilon_{i_1 i_2 \ldots i_n}$ cannot be a tensor. If it were a tensor, the values of its coordinates would be in the set $\{1, -1, 0\}$ in some preferred basis and would have different values in other bases. But, in defining $\epsilon_{i_1 i_2 \ldots i_n}$, we have not used any preferred basis, so we must assume that its values are in the set $\{1, -1, 0\}$ in any basis. Expression (1.54) contradicts this and hence $\epsilon_{i_1 i_2 \ldots i_n}$ is not a tensor.
The conclusion from (1.57) is that $\epsilon_{i_1 i_2 \ldots i_n}$ is almost a tensor, except for a possible sign change. As long as all transformations $S$ in the context of the application are guaranteed to have positive determinant, $\epsilon_{i_1 i_2 \ldots i_n}$ is a tensor. In the general case, we refer to $\epsilon_{i_1 i_2 \ldots i_n}$ as a pseudotensor. We warn, however, that this terminology is not standard.
The quantity $\sqrt{|\det G|}$ is equal to the volume of the parallelepiped formed by the basis vectors in the case of a 3-dimensional Euclidean space. Therefore, we call the scaled symbol $\sqrt{|\det G|}\, \epsilon_{i_1 i_2 \ldots i_n}$ the volume pseudotensor, or the volume tensor in the special case of positive $\det S$. Again, this name is not standard.
³ For example, $\det G$ is positive in Euclidean spaces but is negative in the Minkowski space.
1.14 Symmetry and Antisymmetry
In the following discussion, $a_{j_1 j_2 \ldots j_s}$ is a $(0,s)$-tensor, but it can also be a symbol such as the Levi-Civita symbol or a pseudotensor such as the volume tensor. We say that $a_{j_1 j_2 \ldots j_s}$ is symmetric with respect to a pair of indices $p$ and $q$ if
$$a_{j_1 \ldots p \ldots q \ldots j_s} = a_{j_1 \ldots q \ldots p \ldots j_s} \qquad (1.58)$$
We say that $a_{j_1 j_2 \ldots j_s}$ is antisymmetric with respect to a pair of indices $p$ and $q$ if
$$a_{j_1 \ldots p \ldots q \ldots j_s} = -a_{j_1 \ldots q \ldots p \ldots j_s} \qquad (1.59)$$
We note that each of (1.58) and (1.59) involves a transposition of $p$ and $q$; thus, symmetry and antisymmetry are defined by the behavior of the coordinates under transpositions.
A tensor $a_{j_1 j_2 \ldots j_s}$ is completely antisymmetric if and only if
$$a_{j_{k_1} j_{k_2} \ldots j_{k_s}} = \pm a_{j_1 j_2 \ldots j_s}$$
where the sign is positive for any even permutation $k_1 k_2 \ldots k_s$ and negative for any odd permutation. The proof is similar to the preceding proof for the symmetric case.
The symmetric part of a tensor $a_{j_1 j_2 \ldots j_s}$ with respect to a pair of indices $p$ and $q$ is defined by
$$a_{j_1 \ldots (p \ldots q) \ldots j_s} = 0.5\,(a_{j_1 \ldots p \ldots q \ldots j_s} + a_{j_1 \ldots q \ldots p \ldots j_s})$$
The antisymmetric part of a tensor $a_{j_1 j_2 \ldots j_s}$ with respect to a pair of indices $p$ and $q$ is defined by
$$a_{j_1 \ldots [p \ldots q] \ldots j_s} = 0.5\,(a_{j_1 \ldots p \ldots q \ldots j_s} - a_{j_1 \ldots q \ldots p \ldots j_s})$$
The tensor $a_{j_1 \ldots p \ldots q \ldots j_s}$ is the sum of its symmetric and antisymmetric parts.
Partial symmetrizations and antisymmetrizations are also useful in certain
applications. Their definitions are easy to understand and we discuss them
only briefly. We may select any subset of j1 j2 . . . js and perform complete
symmetrization or antisymmetrization using this subset and leaving the re-
maining indices intact. If the subset of indices is consecutive, the notation
is self-explanatory. For example, if $a_{ijkm}$ is a tensor, then $a_{i(jkm)}$ is the partial symmetrization with respect to $jkm$ and $a_{[ijk]m}$ is the partial antisymmetrization with respect to $ijk$. To compute $a_{i(jkm)}$, we add all $a_{iuvw}$, where $uvw$ is a permutation of $jkm$, and divide by 6. To compute $a_{[ijk]m}$, we compute the alternating sum of all $a_{uvwm}$, where $uvw$ is a permutation of $ijk$, and divide by 6.
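These recipes are mechanical enough to code. The sketch below (an illustrative helper, not from the text) averages an array over all permutations of a chosen subset of axes, with an optional sign for antisymmetrization:

```python
import math
import numpy as np
from itertools import permutations

def sym_part(a, axes, antisym=False):
    """Symmetrize (or, with antisym=True, antisymmetrize) the array a
    over the listed axes: average over all permutations of those axes,
    each weighted by its sign in the antisymmetric case."""
    out = np.zeros_like(a)
    for perm in permutations(range(len(axes))):
        inv = sum(p > q for i, p in enumerate(perm) for q in perm[i + 1:])
        order = list(range(a.ndim))
        for ax, p in zip(axes, perm):
            order[ax] = axes[p]        # axis ax takes data from axis axes[p]
        out += ((-1) ** inv if antisym else 1) * np.transpose(a, order)
    return out / math.factorial(len(axes))

a = np.random.default_rng(3).normal(size=(2, 2, 2, 2))  # a_{ijkm}
a_sym = sym_part(a, axes=(1, 2, 3))                     # a_{i(jkm)}
a_anti = sym_part(a, axes=(0, 1, 2), antisym=True)      # a_{[ijk]m}
```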
1.15 Summary
transform under the direct change-of-basis matrix and the latter transform
under its inverse.
The remaining sections of this chapter deal with somewhat more advanced
subjectsthe Levi-Civita symbol, the volume pseudotensor, and symmetriza-
tion and antisymmetrization of tensors. These may be skipped on first read-
ing, but in the long run they are important.
We may call vector spaces flat, because all geometric objects in them are fixed, although their coordinates vary depending on the basis. Applications in
physics, as well as many branches of mathematics, require more complex ob-
jects, in particular ones which involve functions and calculus. The remaining
chapters deal with tensors in more general settings and, in particular, intro-
duce tensor calculus and some special tensors of importance.
Chapter 2
In this chapter we will consider tensors that vary from point to point in space.
We therefore change our viewpoint on the underlying vector space V. Rather
than an abstract space, we will think of V as a real physical space, which can
be the usual 3-dimensional Euclidean Newtonian space or the 4-dimensional
Minkowski space. The former is associated with classical physics whereas the
latter constitutes the framework for relativistic physics. To V we attach a
fixed origin and a reference basis (e1 , e2 , . . . , en ). Each point in V has a radius
vector r with respect to the origin. The coordinates of r can be expressed as
linear coordinates in terms of the reference basis, as we learned in Chapter 1,
say (x1 , x2 , . . . , xn ). However, they can also be expressed in other ways, rather
than through bases. For example, you are probably familiar with cylindrical
and spherical coordinates in a 3-dimensional space. We may express a radius
vector in cylindrical coordinates as r(, , z) or in spherical coordinates as
r(r, , ). In both cases the arguments are not linear coordinates relative to
any basis. These are special cases of curvilinear coordinates, which we will
study in great detail in the sequel.
To each point in the space V we will assign a tensor a(r). These tensors have
the same rank (r, s) throughout the space; they are geometrical objects over
the space V by their very definition. Initially we define their coordinates
relative to the reference basis and later consider change of basis. Thus, $a^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)$ denotes the coordinates of a space-dependent tensor with respect to the reference basis $(e_1, e_2, \ldots, e_n)$. Such an object is called a tensor field over $\mathcal{V}$.
We have already met the gradient operator $\partial_k$ in Sec. 1.6, applied to a scalar field and expressed as a covariant operator in linear coordinates. We wish to generalize the gradient operator to tensor fields. As long as we restrict ourselves to linear coordinates, this is not difficult. Let $a^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)$ be a tensor field. Upon expressing the radius vector $r$ in terms of the reference basis (i.e. $r = e_i x^i$), the tensor $a^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)$ becomes a function of the coordinates $x^i$. We may now differentiate the tensor with respect to a particular coordinate $x^p$:
$$\partial_p\, a^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r) = a^{i_1 \ldots i_r}_{j_1 \ldots j_s;p}(r) = \frac{\partial a^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)}{\partial x^p} \qquad (2.1)$$
There are several novelties in equation (2.1) that we should note. First, the
right side expresses the fact that each component of the tensor, contravari-
ant and covariant, is differentiated separately and each provides n partial
derivatives, one for each $p$. Second, the result depends on the choice of basis, since the partial derivatives are with respect to the coordinates in the
basis. Third, the resulting object is a tensor field of type (r, s + 1), with an
additional covariant component; this fact is far from being obvious and must
be proved. Fourth, (2.1) introduces a new notation, with the new covariant
component appearing as the last one and separated by a semicolon. Several
different notations are used in the literature but we will adhere to this one.
$$\tilde{\partial}_p\, \tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r) = \tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s;p}(r) = \frac{\partial \tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)}{\partial \tilde{x}^p} \qquad (2.2)$$
$$\tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r) = T^{i_1}_{k_1} \cdots T^{i_r}_{k_r}\, a^{k_1 \ldots k_r}_{m_1 \ldots m_s}(r)\, S^{m_1}_{j_1} \cdots S^{m_s}_{j_s} \qquad (2.3)$$
$$\frac{\partial a^{k_1 \ldots k_r}_{m_1 \ldots m_s}(r)}{\partial \tilde{x}^p} = \frac{\partial a^{k_1 \ldots k_r}_{m_1 \ldots m_s}(r)}{\partial x^q}\, \frac{\partial x^q}{\partial \tilde{x}^p} = \frac{\partial a^{k_1 \ldots k_r}_{m_1 \ldots m_s}(r)}{\partial x^q}\, S^q_p \qquad (2.5)$$
with implied summation over q on the right side of (2.5). Substitution of
(2.5) in (2.4) gives
$$\frac{\partial \tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s}(r)}{\partial \tilde{x}^p} = T^{i_1}_{k_1} \cdots T^{i_r}_{k_r}\, \frac{\partial a^{k_1 \ldots k_r}_{m_1 \ldots m_s}(r)}{\partial x^q}\, S^{m_1}_{j_1} \cdots S^{m_s}_{j_s}\, S^q_p \qquad (2.6)$$
In our new notation, equation (2.6) reads
$$\tilde{a}^{i_1 \ldots i_r}_{j_1 \ldots j_s;p}(r) = T^{i_1}_{k_1} \cdots T^{i_r}_{k_r}\, a^{k_1 \ldots k_r}_{m_1 \ldots m_s;q}(r)\, S^{m_1}_{j_1} \cdots S^{m_s}_{j_s}\, S^q_p \qquad (2.7)$$
Let us assume that we are given $n$ functions of the coordinates of the reference basis, to be denoted by $y^i(x^1, \ldots, x^n)$, $1 \le i \le n$. These functions are assumed to be continuous and to possess continuous partial derivatives on a certain region in the space¹. Additionally, they are assumed to be invertible, and the inverse functions $x^i(y^1, \ldots, y^n)$, $1 \le i \le n$, are also assumed to be continuous and to possess continuous partial derivatives on the same region. Such functions $y^i(x^1, \ldots, x^n)$ are called curvilinear coordinates. "Curvi" implies nonlinearity, and "linear" implies that the functions can be locally linearized in the vicinity of each point $r$, as we shall see soon.
Consider the partial derivatives of the radius vector r with respect to the
curvilinear coordinates. We define
$$E_i = \frac{\partial r}{\partial y^i} = \frac{\partial (e_j x^j)}{\partial y^i} = e_j\, \frac{\partial x^j}{\partial y^i} \qquad (2.8)$$
This derivation is correct because the $e_j$ are fixed vectors; hence their derivatives are identically zero. The $n^2$ partial derivatives $\partial x^j / \partial y^i$ form a square matrix
¹ Technically, this region must be an open set, but you may ignore this if you are not familiar with the notion of an open set. In the next chapter we will be more precise about this point.
called the Jacobian matrix. This matrix is nonsingular since the $y^i(x^1, \ldots, x^n)$, $1 \le i \le n$, are invertible. Let us denote
$$S^j_i = \frac{\partial x^j}{\partial y^i}, \qquad T^m_k = \frac{\partial y^m}{\partial x^k} \qquad (2.9)$$
Then
$$T^m_j\, S^j_i = \delta^m_i, \qquad S^i_m\, T^m_k = \delta^i_k \qquad (2.10)$$
The vectors $(E_1, \ldots, E_n)$ are called the tangent vectors of the curvilinear coordinates at the point $r$. The vector space spanned by the tangent vectors is called the tangent space at the point $r$. The tangent space varies from point to point, unless all functions $y^i(x^1, \ldots, x^n)$ are linear. The space is $n$-dimensional in general, unless one (or more) of the $E_i$ is zero.
vector may be expressed in terms of the local basis $(E_1, \ldots, E_n)$. Doing so yields $n^2$ linear expressions
$$\frac{\partial E_i}{\partial y^j} = E_k\, \Gamma^k_{ij} \qquad (2.12)$$
We now derive several explicit formulas for the affine connections, which will
be useful later. First, differentiation of (2.11) and substitution in (2.12) gives
$$T^p_k\, S^k_{i,j} = T^p_k\, S^k_m\, \Gamma^m_{ij} = \delta^p_m\, \Gamma^m_{ij} = \Gamma^p_{ij} \qquad (2.16)$$
We thus obtain the first explicit formula for the affine connections:
$$\Gamma^p_{ij} = T^p_k\, S^k_{i,j} = T^p_k\, \frac{\partial^2 x^k}{\partial y^i \partial y^j}$$
Since the mixed second partial derivatives do not depend on the order of differentiation, we deduce that the affine connections are symmetric in their lower indices.
The third explicit formula expresses the affine connections in terms of the metric tensor $g_{ij} = E_i \cdot E_j$ associated with the basis $(E_1, \ldots, E_n)$:
$$g_{mj,i} = g_{pj}\, \Gamma^p_{mi} + g_{mp}\, \Gamma^p_{ji}, \qquad g_{ij,m} = g_{pj}\, \Gamma^p_{im} + g_{ip}\, \Gamma^p_{jm} \qquad (2.22)$$
Substituting (2.21) and (2.22) in (2.20) and using the symmetry of the affine connection and of the metric tensor with respect to the lower indices, we find
$$0.5\, g^{km} \left( g_{pi}\, \Gamma^p_{mj} + g_{mp}\, \Gamma^p_{ij} + g_{pj}\, \Gamma^p_{mi} + g_{mp}\, \Gamma^p_{ji} - g_{pj}\, \Gamma^p_{im} - g_{ip}\, \Gamma^p_{jm} \right) = g^{km}\, g_{mp}\, \Gamma^p_{ij} = \delta^k_p\, \Gamma^p_{ij} = \Gamma^k_{ij} \qquad (2.23)$$
2.4.2 Example
$$x = r\cos\theta, \qquad y = r\sin\theta$$
Finally,
$$\Gamma^r_{rr} = 0, \quad \Gamma^r_{r\theta} = \Gamma^r_{\theta r} = 0, \quad \Gamma^r_{\theta\theta} = -r, \quad \Gamma^\theta_{rr} = 0, \quad \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = r^{-1}, \quad \Gamma^\theta_{\theta\theta} = 0$$
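The following sympy sketch reproduces these values from formula (2.20), assuming the usual Euclidean metric of the plane in polar coordinates, $ds^2 = dr^2 + r^2 d\theta^2$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
y = [r, th]                            # curvilinear coordinates (r, theta)
g = sp.Matrix([[1, 0], [0, r**2]])     # metric of the plane in polar coordinates
ginv = g.inv()

def Gamma(k, i, j):
    """Affine connection from the metric, as in (2.20)."""
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[k, m]
                           * (sp.diff(g[m, i], y[j]) + sp.diff(g[m, j], y[i])
                              - sp.diff(g[i, j], y[m]))
                           for m in range(2)))

assert sp.simplify(Gamma(0, 1, 1) + r) == 0       # Gamma^r_{theta theta} = -r
assert sp.simplify(Gamma(1, 0, 1) - 1 / r) == 0   # Gamma^theta_{r theta} = 1/r
```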
For the purpose of the material on tensor fields in curvilinear coordinates,
we will use the following convention henceforth. Tensor coordinates relative
to the reference basis will continue to be denoted by lower case letters. For
tensors in curvilinear coordinates we will use upper case letters, but without
the tilde. Thus, if $a^{k_1 \ldots k_r}_{m_1 \ldots m_s}$ are the coordinates of the tensor $a$ in the reference basis, then $A^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ denote the coordinates of the same tensor in the tangent basis $E_i$, expressed as functions of the curvilinear coordinates $y^i$. This convention will facilitate convenient visual identification of tensors in curvilinear coordinates, which are heavily used in subsequent material.
Recall (2.7), which was shown to hold for change of basis in linear coordinates.
Let us require that (2.7) hold in curvilinear coordinates as well; that is,
$$A^{i_1 \ldots i_r}_{j_1 \ldots j_s;p} = T^{i_1}_{k_1} \cdots T^{i_r}_{k_r}\, a^{k_1 \ldots k_r}_{m_1 \ldots m_s;q}\, S^{m_1}_{j_1} \cdots S^{m_s}_{j_s}\, S^q_p \qquad (2.24)$$
$$\frac{\partial a^k_m}{\partial x^q}\, \frac{\partial x^q}{\partial y^p} = \frac{\partial a^k_m}{\partial x^q}\, S^q_p = S^k_h\, \Gamma^h_{ip}\, A^i_j\, T^j_m + S^k_i\, A^i_{j,p}\, T^j_m - S^k_i\, A^i_j\, \Gamma^j_{hp}\, T^h_m \qquad (2.27)$$
This is the same as
$$T^i_k\, \frac{\partial a^k_m}{\partial x^q}\, S^m_j\, S^q_p = \Gamma^i_{hp}\, A^h_j + A^i_{j,p} - A^i_h\, \Gamma^h_{jp} \qquad (2.28)$$
or
$$T^i_k\, a^k_{m;q}\, S^m_j\, S^q_p = A^i_{j,p} + \Gamma^i_{hp}\, A^h_j - A^i_h\, \Gamma^h_{jp} \qquad (2.29)$$
Finally, comparing the left side of (2.29) with the right side of (2.24), we find
that
$$A^i_{j;p} = A^i_{j,p} + \Gamma^i_{hp}\, A^h_j - A^i_h\, \Gamma^h_{jp} \qquad (2.30)$$
Expression (2.30) is called the covariant derivative of the $(1,1)$-tensor $A^i_j$ in curvilinear coordinates. As we saw, $A^i_{j;p}$ is a $(1,2)$-tensor, related to the $(1,2)$-tensor $a^k_{m;q}$ via the change-of-basis formula (2.24).
2.6 The Second Covariant Derivative
Let us expand each of the first two terms on the right side of (2.35) (but not the third, for reasons to be seen later), using $A^i_{;p}$ from (2.32):
$$(A^i_{;p})_{,q} = (A^i_{,p} + \Gamma^i_{hp} A^h)_{,q} = A^i_{,pq} + \Gamma^i_{hp,q} A^h + \Gamma^i_{hp} A^h_{,q} \qquad (2.36)$$
$$A^i_{;pq} = A^i_{,pq} + \Gamma^i_{hp,q} A^h + \Gamma^i_{hp} A^h_{,q} + \Gamma^i_{hq} A^h_{,p} + \Gamma^i_{hq} \Gamma^h_{kp} A^k - A^i_{;h} \Gamma^h_{pq} \qquad (2.38)$$
Expression (2.38) provides the desired result for the second covariant deriva-
tive of a vector in curvilinear coordinates. In the next section we will continue
to explore this result.
let us compute $A^i_{;pq} - A^i_{;qp}$ and check whether the difference is identically zero. Using (2.38) and canceling terms that are equal due to symmetries gives
$$\tilde{R}^i_{kqp} - \hat{R}^i_{kqp} = 0$$
Finally, this holds for all $i, k, p, q$, so $\hat{R}^i_{kqp} = \tilde{R}^i_{kqp}$, and it follows that $R^i_{hqp}$ is a tensor.
The tensor $R^i_{hqp}$ is called the Riemann curvature tensor. The Riemann curvature tensor plays a central role in differential geometry and the theory of manifolds, as well as in applications of these theories. Here we do not provide enough material on these subjects to really appreciate the importance of the Riemann tensor. We will settle for discussing some of its mathematical properties and then present some tensors related to the Riemann tensor.
The first thing to note is that the $R^i_{hqp}$ depend only on the affine connections $\Gamma^k_{ij}$. These, in turn, depend only on the metric $g_{ij}$, as seen from (2.20). It follows that the Riemann tensor depends only on the metric. We should remember, however, that in curvilinear coordinates the metric varies from point to point and the Riemann tensor varies with it. Only when the curvilinear coordinates are linear is the metric constant. In this case the affine connections are identically zero and so is the Riemann tensor.
This identity also follows by substituting (2.40) and performing the cancellations. Note that the lower indices of the three terms in (2.43) are cyclic permutations of one another.
Other symmetries of the Riemann tensor are more subtle; to make them
explicit, we introduce the purely covariant Riemann tensor, obtained by lowering the contravariant index of $R^i_{hqp}$:
$$R_{ihqp} = g_{ij}\, R^j_{hqp} \qquad (2.44)$$
2.8 Some Special Tensors
The Ricci tensor is the result of contracting the contravariant and last covariant indices of the Riemann tensor:
$$R_{ij} = R^k_{ijk} \qquad (2.46)$$
The Ricci tensor is symmetric, as results from the following chain of equalities:
$$R_{ij} = R^k_{ijk} = g^{ku} R_{uijk} = g^{ku} R_{jkui} = g^{ku} R_{kjiu} = R^u_{jiu} = R_{ji} \qquad (2.47)$$
$$R = g^{ij} R_{ij} \qquad (2.48)$$
Einstein's tensor is symmetric, since both $R_{ij}$ and $g_{ij}$ are symmetric. Einstein's tensor can also be expressed in mixed and contravariant forms.
2.9 Summary
Chapter 3
Tensors on Manifolds
3.1 Introduction
patch can be mapped in a one-to-one way to a patch of a Euclidean space such that the map and its inverse are continuous. Let us exemplify this informal description by considering the surface of a sphere in $\mathbb{R}^3$ centered at the origin and having radius $d = 1$. Denote the surface of the sphere by $S$ and the three cartesian coordinates by $(x, y, z)$. Then the two sets
$$O_1 = \{(x, y, z) \in S : z \neq 1\} \quad \text{and} \quad O_2 = \{(x, y, z) \in S : z \neq -1\}$$
together cover $S$. Note that these sets overlap considerably; the first contains all points of $S$ except the north pole and the second contains all points but the south pole. Let $(u, v)$ be cartesian coordinates on $\mathbb{R}^2$ and define
$$u = \frac{2x}{1 - z}, \qquad v = \frac{2y}{1 - z}$$
These functions, called the stereographic projection, map $O_1$ onto $\mathbb{R}^2$ continuously. The inverse functions
$$z = \frac{0.25(u^2 + v^2) - 1}{0.25(u^2 + v^2) + 1}, \qquad x = 0.5u(1 - z), \qquad y = 0.5v(1 - z)$$
are continuous on $\mathbb{R}^2$ onto $O_1$. In a similar way $O_2$ can be mapped onto $\mathbb{R}^2$; simply replace $1 - z$ by $1 + z$ in the above. The surface of a sphere in three dimensions is therefore a two-dimensional manifold.
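A short numeric sketch confirms that these formulas are mutually inverse on $O_1$ (the test point is arbitrary):

```python
import numpy as np

def to_plane(p):
    """Stereographic projection of O1 (the sphere minus the north pole)."""
    x, y, z = p
    return np.array([2 * x / (1 - z), 2 * y / (1 - z)])

def to_sphere(uv):
    """Inverse of to_plane, mapping R^2 back onto O1."""
    u, v = uv
    q = 0.25 * (u**2 + v**2)
    z = (q - 1) / (q + 1)
    return np.array([0.5 * u * (1 - z), 0.5 * v * (1 - z), z])

p = np.array([2.0, 1.0, 2.0])
p /= np.linalg.norm(p)                 # a point on the unit sphere, not a pole
assert np.allclose(to_sphere(to_plane(p)), p)
```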
Section 3.3 introduces manifolds. This is the most challenging section; read-
ing and re-reading will probably be needed by most students. By comparison,
the remaining sections should be easier. Section 3.4, on tensors and tensor
fields, is a relatively straightforward extension of the material in Chapter 2.
Sections 3.5, 3.6, and 3.7 on curves, curvature and geodesics are optional,
and rely mostly on standard multivariate calculus.
We assume that you know the concept of an abstract set, at least at the intuitive level. We remind you that the notation $x \in A$ means that $x$ belongs to the set $A$ or, equivalently, that $x$ is a member of the set $A$. We also say that $A$ contains $x$. The notation $x \notin A$ means that $x$ does not belong to $A$.
If $A$ is a set and $B$ is another set such that, for all $x \in B$, it holds true that $x \in A$, then $B$ is a subset of $A$, denoted $B \subseteq A$. We also say that $B$ is included in $A$ and that $A$ includes $B$. For two sets $A$ and $B$, the notation $A \setminus B$ stands for the set of all $x \in A$ such that $x \notin B$. We sometimes call this set (informally) "$A$ minus $B$". If, as a special case, $B \subseteq A$, then $A \setminus B$ is called the complement of $B$ in $A$.
Unions and intersections are not limited to two sets, or even to a finite collection of sets. If $I$ is an arbitrary set (finite, countably infinite, or even uncountably infinite) we may assign to each member $i \in I$ a set $A_i$, thus obtaining a collection $\{A_i, i \in I\}$. Then the union $\bigcup_{i \in I} A_i$ is the set of all $x$ such that $x \in A_i$ for some $i \in I$, and the intersection $\bigcap_{i \in I} A_i$ is the set of all $x$ such that $x \in A_i$ for all $i \in I$.
The range of a function is the subset of all $y \in Y$ for which $y = f(x)$ for some $x \in X$. The range of a surjective function is therefore identical with its codomain. Every function becomes surjective if we redefine its codomain as its range. On the other hand, a function that is not injective cannot be made injective in a similar manner.
The image and inverse image satisfy the following union and intersection
properties:
$$|x - y| \le |x - z| + |z - y| \qquad (3.5)$$
Let $x_0$ be a point in $\mathbb{R}^n$ and $d$ a positive number. The set $B(x_0, d) = \{y : |y - x_0| < d\}$ is called the open ball with center $x_0$ and radius $d$.
Property T2 is almost self-evident. Given any collection of open sets, each is a union of open balls; thus their union is a union of unions of open balls, which is itself a union of open balls, and hence open.
Property T3 is more difficult to prove. We break the proof into three steps,
as follows.
$$A = B(x_1, d_1) \cap B(x_2, d_2)$$
Let $y$ be a point in $A$. We aim to find an open ball $B(y, \epsilon)$ such that $B(y, \epsilon) \subseteq B(x_1, d_1)$ and $B(y, \epsilon) \subseteq B(x_2, d_2)$. It will then follow that $B(y, \epsilon) \subseteq A$. Define
$$\epsilon_1 = d_1 - |y - x_1|, \qquad \epsilon_2 = d_2 - |y - x_2|, \qquad \epsilon = \min\{\epsilon_1, \epsilon_2\}$$
Let $z$ be any point in $B(y, \epsilon)$. Then
$$|z - y| < \epsilon \le \epsilon_1 = d_1 - |y - x_1|$$
so by the triangle inequality
$$|z - x_1| \le |z - y| + |y - x_1| < d_1 - |y - x_1| + |y - x_1| = d_1$$
Step Two. So far we proved that, given a point $y$ in the intersection $A$ of two open balls, there exists an open ball $B(y, \epsilon)$ that contains $y$ and is included in $A$. The union $\bigcup_{y \in A} B(y, \epsilon)$ therefore both includes and is included in $A$, so it is equal to $A$. It follows that $A$ is an open set¹.
Step Three. To complete the proof, consider two open sets $O_1$, $O_2$. Each is a union of open balls, and hence $O_1 \cap O_2$ is the union of all intersections of the open balls comprising $O_1$ with those comprising $O_2$ (this follows from the distributive laws for unions and intersections of sets). As we have just shown, each such intersection is an open set; therefore $O_1 \cap O_2$ is an open set and the proof is complete.
The space Rn , together with the collection of all open sets, is an example of
a topological space. It is called the usual topology on Rn . This, in fact, was
the model on which general topology was founded.
¹ If you have not seen this kind of mathematical proof, you may be surprised and hopefully amazed. This is the power of infinity in action. It is Georg Cantor's theory of infinity that revolutionized mathematics; the revolution in physics followed only shortly after.
Two simple examples of topological spaces are:
Any set $S$ with $\mathcal{T} = \{\emptyset, S\}$. This topology has only two open sets and is called the indiscrete topology on $S$.
Any set $S$ with $\mathcal{T}$ containing all subsets of $S$. This is called the discrete topology on $S$.
Many more examples can be given, but since our aim is not to teach topology
for its own sake, we will not provide more examples.
We call a set $C$ closed if its complement $S \setminus C$ is open. It follows easily from this definition that $\emptyset$ and $S$ are closed, that an arbitrary intersection of closed sets is closed, and that the union of two closed sets is closed. In the usual topology on $\mathbb{R}^n$, the closed ball
$$C(x_0, d) = \{y : |x_0 - y| \le d\} \qquad (3.7)$$
is closed. For a proof, note that the union of all open balls in $\mathbb{R}^n$ which are disjoint from $C(x_0, d)$ is open and is equal to $\mathbb{R}^n \setminus C(x_0, d)$.
3.2.4 More on Rn
The usual topology on Rn has many interesting properties that arise fre-
quently in advanced calculus on Rn . We will mention some of them here;
most are not essential to understanding of the material in this chapter but
are good to know for completeness.
Hausdorff If $x_1$ and $x_2$ are two different points, then there exist open neighborhoods $O_1$ and $O_2$ of $x_1$ and $x_2$ such that $O_1 \cap O_2 = \emptyset$. A topological space having this property is called a Hausdorff space². To
² Named after Felix Hausdorff, one of the founders of the abstract theory of topological spaces.
prove the Hausdorff property for $\mathbb{R}^n$, let $d = |x_1 - x_2|$ and then let $O_1 = B(x_1, d/3)$, $O_2 = B(x_2, d/3)$. It is straightforward to prove that $O_1 \cap O_2 = \emptyset$.
Separability A subset $A$ of a topological space is dense if every open set contains a point of $A$. A topological space is separable if it contains a countable dense set. It is well known that the rational numbers are dense in $\mathbb{R}$ (we will not prove this here). It then follows that the points $x$ whose coordinates are all rational are dense in $\mathbb{R}^n$. The set of rational numbers is known to be countable. It then follows that the set of $n$-tuples of rational numbers is countable. Therefore, $\mathbb{R}^n$ equipped with the usual topology is separable.
Second Countability A base for a topology is a collection $\mathcal{B}$ of open sets such that every open set is a union of members of $\mathcal{B}$. The definition of the usual topology on $\mathbb{R}^n$ automatically makes the collection of all open balls a base for the topology. A topological space is second countable if it has a countable base (first countable is another property, which we will not discuss here). The collection $\mathcal{B}$ of all open balls whose centers have rational coordinates and whose radii are rational is a countable base for the usual topology on $\mathbb{R}^n$. Here is a sketch of the proof. Given an open set $O$, take all points $x$ of $O$ having rational coordinates. For each $x$, take all open balls $B(x, r)$ with rational $r$ such that $B(x, r) \subseteq O$. Then $\bigcup_{x,r} B(x, r) = O$.
All these definitions are applicable to any topological space, not just to Rn .
points of $X$. The following theorem is of interest: A function is continuous on $X$ if and only if, for any open set $V$ in $Y$, the inverse image $U = f^{-1}[V]$ is an open set in $X$. The proof is a simple exercise which you may want to carry out, to test your understanding of images, inverse images, and open sets.
Let $X$ and $Y$ be two topological spaces and assume that there exists a bijective function $f$ on $X$ onto $Y$ such that both $f$ and $f^{-1}$ are continuous. Then the two spaces are homeomorphic and $f$ is called a homeomorphism. You may think of two spaces as being homeomorphic if they are made of infinitely flexible rubber and one can be obtained from the other by arbitrary stretching, squeezing, or bending, but no tearing or punching of holes. The two spaces look the same in the sense that an arbitrary set $O$ in $X$ is open if and only if $f[O]$ is open in $Y$.
3.3 Manifolds
3. It follows from axiom M3 that $\phi_i[O_i]$ is open in $\mathbb{R}^n$.
5. A chart is also called a coordinate system. Given a chart $(O, \phi)$, the set $O$ is called the chart set and the function $\phi$ is called the chart function.
characterized on $M$ in its entirety. Analytic and $C^k$ functions on manifolds to $\mathbb{R}^m$ can be defined similarly.
Let $M$ be a smooth manifold and let $\mathcal{F}$ be the set of all smooth functions $f : M \to \mathbb{R}$, as defined in the preceding subsection. Note that the target space here is the real line $\mathbb{R}$ rather than a general Euclidean space. Let $f$ and $g$ be two functions in $\mathcal{F}$. Then $\alpha f + \beta g$ clearly belongs to $\mathcal{F}$ for any two real numbers $\alpha$ and $\beta$. It is, perhaps, less obvious that $fg$ also belongs to $\mathcal{F}$. Note that $fg$ denotes pointwise multiplication; that is, the value of $fg$ at $p \in M$ is equal to $f(p)g(p)$. Once this is understood, it is easy to verify that $fg$ is smooth by recalling MF1 and the fact that the product of two numerical smooth functions is a smooth function.
MD2 Product rule: $d_p(fg) = f(p)\, d_p(g) + g(p)\, d_p(f)$.
As an illustration, let $f$ be the constant function $f \equiv c$. Then $f^2 = cf$, so by linearity $d_p(f^2) = d_p(cf) = c\, d_p(f)$; on the other hand, the product rule gives $d_p(f^2) = 2f(p)\, d_p(f) = 2c\, d_p(f)$. Together these force $d_p(f) = 0$: a derivative operator annihilates constant functions.
Now comes a point that calls for special attention. Let us substitute $f \circ \phi^{-1}$ for the function and $\phi(p)$ for the point in the partial derivatives. Doing so will result in a real number. Now let us repeat for all functions $f \in \mathcal{F}$. This will result in an operator $\partial/\partial x^k|_p : \mathcal{F} \to \mathbb{R}$. Moreover, being a bona-fide partial derivative, it satisfies (3.9) and (3.10) automatically. Therefore, $\partial/\partial x^k|_p$ is a derivative operator on $M$, as defined in the preceding subsection.
The main property of the tangent space of a manifold at a point is given by the following theorem.
We will prove only one part, namely that the directional derivatives are independent. The other part, that every derivative operator $d$ can be expressed as a linear sum of the directional derivatives, will be omitted³. Assume that
$$v^k \left. \frac{\partial}{\partial x^k} \right|_p = 0$$
³ Although I strive to include, or at least provide hints for, proofs of all claims in this document, I decided to make an exception here. The omitted proof is rather long and technical. Perhaps I will add it in the future.
for some $n$ numbers $v^k$. Then, by the definition of the directional derivatives,
$$v^k \left. \frac{\partial}{\partial x^k} \right|_p (f) = 0$$
This equality cannot hold identically for all smooth functions at a specific point $p$ unless all $v^k$ are zero.
At this point, the road is open to the development of tensor theory on mani-
folds, parallel to tensor theory on curvilinear coordinates. This endeavor will
be taken up in the next section.
3.4.1 Coordinate Transformations
called the cotangent space at $p$, and we will refer to its elements as covectors or dual vectors, as in Chapter 1. A common notation for the covectors comprising the dual basis is $(dx^1, dx^2, \ldots, dx^n)$. The change-of-basis transformation for dual bases is
$$\widetilde{dx}^i = \frac{\partial \tilde{x}^i}{\partial x^j}\, dx^j, \qquad dx^i = \frac{\partial x^i}{\partial \tilde{x}^j}\, \widetilde{dx}^j \qquad (3.16)$$
and the change-of-basis rules for the coordinates of a covector w are
$$\tilde{w}_i = \frac{\partial x^j}{\partial \tilde{x}^i}\, w_j, \qquad w_i = \frac{\partial \tilde{x}^j}{\partial x^i}\, \tilde{w}_j \qquad (3.17)$$
We can now define the notion of a smooth vector field. A vector field d is
smooth if df is smooth on M for every smooth function f on M .
Note that (3.19) can be defined only on the chart set $O_p$ corresponding to the chart function $\phi_p$. The numerical functions $\partial(f \circ \phi_p^{-1})/\partial x^k$ are obviously smooth on $\phi_p[O_p]$. When we move to a different chart, we must differentiate with respect to the new coordinates, so all we can say is that the vector field $\partial/\partial x^k$ can be defined on each chart separately and is smooth on each chart.
It is now easy to conclude that, since the fields of coordinate bases are smooth, a general vector field $v$ is smooth if and only if its coordinates $v^k$ (which are functions $M \to \mathbb{R}$) are smooth on every chart.
$$a^{i_1 \ldots i_r}_{j_1 \ldots j_s,p} = \frac{\partial a^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^p} \qquad (3.21)$$
The result is not a tensor field, however, since it does not satisfy the trans-
formation law (3.20). To obtain a derivative tensor field, we will need the
covariant derivative, as we saw in Chapter 2 and as we will see again later in
this chapter.
The special case of a (1, 0)-tensor field gives rise to a vector field, as defined
in Subsection 3.4.3. The special case of a (0, 1)-tensor field gives rise to a
covector field.
3.4.5 The Metric Tensor
The metric tensor can be expressed in full form, including its basis covectors, as in (1.40):
$$ds^2 = g_{ij}\, dx^i\, dx^j \qquad (3.25)$$
The notation $ds^2$, although it is merely symbolic and should not be understood as the square of a real number, is called the (square of the) line element.
The transformation law of the metric tensor under change of basis is the same as for any $(0,2)$-tensor:
$$\tilde{g}_{ij} = \frac{\partial x^p}{\partial \tilde{x}^i}\, \frac{\partial x^q}{\partial \tilde{x}^j}\, g_{pq} = S^p_i\, S^q_j\, g_{pq}, \qquad g_{ij} = \frac{\partial \tilde{x}^k}{\partial x^i}\, \frac{\partial \tilde{x}^m}{\partial x^j}\, \tilde{g}_{km} \qquad (3.26)$$
3.4.6 Interlude: A Device
A simple device, which we now introduce, will enable us to save many pages
of definitions and derivations. Let us conjure up an n-dimensional inner
product space $\mathcal{V}$, together with a basis $(e_1, \ldots, e_n)$ whose Gram matrix is
$$e_k \cdot e_m = \eta_{km} \qquad (3.27)$$
Now, as we did in Chapter 2, let us define a local basis at each point of the
manifold:
$$E_i = S^k_i\, e_k \qquad (3.28)$$
We then find that
We emphasize that this device is possible only because of axiom MM3 of the metric tensor. The constancy of $\eta$ facilitates the use of a fixed inner product space with a fixed signature for the entire manifold. It should also be clear that $\mathcal{V}$ and the bases $e_i$ and $E_i$ are not part of the definition of a manifold. They are artificial devices, introduced only for their technical usefulness, as we will see in the next subsection.
Once the device employed in the preceding subsection is understood, all the
material in Chapter 2 from Section 2.4 onwards applies to tensors on man-
ifolds, with no changes. In particular, affine connections, covariant deriva-
tives, and the special tensors, are defined and used as in Chapter 2. All you
need to remember is that:
The coordinates xk on some chart (O, ) take the place of the curvi-
linear coordinates y k .
We have not dealt with the difficulty of patching charts and thereby
facilitating working on the manifold as a whole. Although such patch-
ing can be made rigorous, it is quite technical and is outside the scope
of this document. Here we must keep in mind that a single fixed chart
is assumed, although this can be any chart in the atlas.
The "almost" in the title of this subsection implies that we are not quite finished yet. Additional material, both interesting and important, is provided in the next section.
$$\tau(f) = \frac{d(f \circ \gamma)}{dt} \qquad (3.30)$$
for every $f \in \mathcal{F}$. This definition makes $\tau$ a derivative operator; that is, a member of $\mathcal{D}_p$, hence a vector at $p$.
$$\tau = \frac{dx^k}{dt}\, \frac{\partial}{\partial x^k} = \tau^k\, \frac{\partial}{\partial x^k} \qquad (3.31)$$
where $\tau^k = dx^k/dt$ are the coordinates of the tangent vector in the coordinates basis. Note that $x^k(t)$ are the coordinates of the curve in the coordinates basis.
Let $\gamma(t)$ be a curve and $\tau^k(t)$ its tangent vector. Let $v^i(t)$ be a function $\mathbb{R} \to \mathcal{D}$; that is, a collection of vectors defined at all points of the curve.
Then $v^i(t)$ is parallel transported on the curve if
$$\tau^k(t)\, v^i_{;k}(t) = 0 \quad \text{for all } t \text{ and all } i \qquad (3.32)$$
where, as we recall, $v^i_{;k}$ is the covariant derivative of $v^i$ along the $k$-th coordinate. The left side of equation (3.32) is thus a contraction of a vector with a mixed $(1,1)$-tensor, resulting in a vector.
The definition (3.32) is, perhaps, not very illuminating and makes it hard
to understand what parallel means in this context. Let us therefore bring
(3.32) to a form that will make it more transparent. First, note that the
requirement that the left side of (3.32) be zero for all i translates to
$$\tau^k(t)\, v^i_{;k}(t)\, E_i = 0 \quad \text{for all } t \qquad (3.33)$$
$$\frac{dx^k}{dt} \left[ \frac{\partial v^i}{\partial x^k} + \Gamma^i_{km}\, v^m \right] S^p_i\, e_p = \frac{dx^k}{dt} \left[ \frac{\partial v^i}{\partial x^k}\, S^p_i + \Gamma^i_{km}\, S^p_i\, v^m \right] e_p = 0 \qquad (3.34)$$
$$\frac{dx^k}{dt} \left[ \frac{\partial v^i}{\partial x^k}\, S^p_i + v^m\, \frac{\partial S^p_m}{\partial x^k} \right] e_p = \frac{dx^k}{dt}\, \frac{\partial (v^i S^p_i)}{\partial x^k}\, e_p = 0 \qquad (3.35)$$
We now observe that $v^i S^p_i = V^p$, where $V^p$ are the coordinates of the vector $v$ in the hypothetical reference vector space $\mathcal{V}$. Therefore,
$$\frac{\partial (V^p e_p)}{\partial x^k}\, \frac{dx^k}{dt} = \frac{d(V^p e_p)}{dt} = 0 \qquad (3.36)$$
The conclusion from this derivation is: a parallel transported vector remains unchanged (that is, remains parallel to itself) when viewed as an abstract geometrical object in the hypothetical reference space $\mathcal{V}$.
What have we learned from the preceding derivation? On the practical level, certainly not much. On the intuitive level, we must exercise our imagination and visualize the vector transported parallel to itself in some hypothetical space along a curve that exists on a real manifold. Of course, both $M$ and $\mathcal{V}$ are ultimately abstract, but the former is a defined mathematical object, whereas the latter is an artificial device. If this digression was not helpful, you may simply disregard it.
3.6 Curvature
Let us consider a simple example first. Place an arrow at the North Pole of the earth, tangent to the surface and pointing south in the direction of the Greenwich meridian (longitude 0°). Now begin to move south along the meridian with the arrow pointing south all the time, until you hit the equator.
At this point start moving east along the equator with the arrow continuing to point south. When you get to longitude 90° (you will be somewhere in the Indian Ocean, but never mind that), start moving north along the meridian with the arrow still pointing south. When you get to the north pole, the arrow will be pointing south parallel to the 90° meridian, so it will be rotated 90° relative to the direction of the arrow when you started! Although we have cheated slightly in this example (the curve in this example is not smooth, having three corners), the general behavior is indeed true.
The area of the triangle spanning the angle between $\theta$ and $\theta + d\theta$ is
$$dA = 0.5\, r^2\, d\theta \qquad (3.38)$$
such that $\gamma(t)$ is a fixed curve and $\epsilon$ is a small scalar. You may visualize the curve as a small loop surrounding $x_0$. Our aim is to approximate the parallel transport equations (3.37) to order $\epsilon^2$; that is, to neglect terms of higher order in $\epsilon$.
and thus
$$v^i_1(1) - v^i_1(0) = -[\gamma^k(1) - \gamma^k(0)]\, \Gamma^i_{km}(x_0)\, v^m_0 = 0 \qquad (3.52)$$
We conclude that there is no first-order effect of parallel transport on a closed curve.
Let us now proceed to the second-order term in $\epsilon$. We again find from (3.48)
$$\frac{dv^i_2(t)}{dt} + [\Gamma^i_{kj,\ell}(x_0) - \Gamma^i_{km}(x_0)\, \Gamma^m_{j\ell}(x_0)]\, v^j_0\, \gamma^\ell(t)\, \frac{d\gamma^k(t)}{dt} = 0 \qquad (3.54)$$
This differential equation can be integrated to give
$$v^i_2(1) - v^i_2(0) = -[\Gamma^i_{kj,\ell}(x_0) - \Gamma^i_{km}(x_0)\, \Gamma^m_{j\ell}(x_0)]\, v^j_0 \int_0^1 \gamma^\ell(t)\, \frac{d\gamma^k(t)}{dt}\, dt = -[\Gamma^i_{kj,\ell}(x_0) - \Gamma^i_{km}(x_0)\, \Gamma^m_{j\ell}(x_0)]\, v^j_0\, A^{k\ell} \qquad (3.55)$$
We can give (3.55) a more symmetrical form. First, since $k$ and $\ell$ are dummy indices, we may interchange them without affecting the result. Therefore, (3.55) is equal to
$$v^i_2(1) - v^i_2(0) = -[\Gamma^i_{\ell j,k}(x_0) - \Gamma^i_{\ell m}(x_0)\, \Gamma^m_{jk}(x_0)]\, v^j_0\, A^{\ell k} \qquad (3.56)$$
We now recall the definition of the Riemann curvature tensor (2.40) and recognize the quantity in the brackets in (3.58) as $R^i_{k\ell j}$. Therefore, finally,
$$v^i(1) - v^i(0) \approx 0.5\, \epsilon^2\, R^i_{k\ell j}\, A^{k\ell}\, v^j_0 \qquad (3.59)$$
We arrived at the result promised at the beginning of this section and, at the same time, established an interesting interpretation of the Riemann curvature tensor. When a vector is parallel transported along an infinitesimal closed curve in a manifold, there is a second-order, nonzero difference between the vectors at the end of the curve and at the beginning of the curve. This difference is proportional to the Riemann tensor at the central point of the loop and also to the area of the loop, as expressed by $\epsilon^2 A^{k\ell}$. Another way of expressing this result is: when a given vector is parallel transported from a point $p$ to a point $q$ on a manifold, the resulting vector is not uniquely determined by the points $p$ and $q$, but depends on the chosen path between the points.
3.7.1 Geodesics
A geodesic in a manifold is a curve having the property that its tangent vector is parallel transported along the curve. Returning to (3.37), the differential equation for the coordinates of a parallel transported vector, and substituting the tangent vector in place of the transported vector, yields the equation
$$\frac{d^2 x^i}{dt^2} + \Gamma^i_{km}\, \frac{dx^k}{dt}\, \frac{dx^m}{dt} = 0 \quad \text{for all } t \text{ and all } i \qquad (3.60)$$
This is known as the geodesic equation. It is a set of $n$ second-order, nonlinear, coupled differential equations in the unknown functions $x^i(t)$, which are the coordinates of the geodesic. One must remember that the affine connections $\Gamma^i_{km}$ are not constant coefficients but functions of the $x^i(t)$, because the affine connections are not constant in general on a curved space.
Assuming that the metric $g_{ij}$ is smooth on the manifold, the affine connections are smooth. Then, given $x^i(0)$ and $dx^i(0)/dt$ at a point $p$ on the manifold, the geodesic equation has a unique solution. This solution can be found analytically in very simple cases, or numerically when no analytic solution exists.
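As an illustration of the numerical route, the sketch below integrates (3.60) in polar coordinates on the flat plane, using the affine connections computed in Section 2.4.2; the resulting geodesic is, as expected, a straight line in cartesian coordinates (here the vertical line $x = 1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, s):
    """Geodesic equation (3.60) in polar coordinates on the plane:
    Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r."""
    r, th, dr, dth = s
    ddr = r * dth**2                   # -Gamma^r_{theta theta} (dtheta)^2
    ddth = -2.0 * dr * dth / r         # -2 Gamma^theta_{r theta} dr dtheta
    return [dr, dth, ddr, ddth]

# Initial point (r, theta) = (1, 0) with velocity dr/dt = 0, dtheta/dt = 1:
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), [1.0, 0.0, 0.0, 1.0],
                rtol=1e-10, atol=1e-10)
x = sol.y[0] * np.cos(sol.y[1])
# The solution traces the straight line x = 1 in cartesian coordinates:
assert np.allclose(x, 1.0, atol=1e-6)
```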
It is well known that the curve of shortest length between two points is a straight line. By definition, a straight line has coordinates $x^i(t) = a^i + b^i t$, where $a^i$, $b^i$ are constants. We note that a straight line satisfies the geodesic equation (3.60), because the affine connections are identically zero on a flat space and $d^2 x^i(t)/dt^2 = 0$ for a straight line. So, the curve of shortest length between two points in a Euclidean space satisfies the geodesic equation. We wish to find out whether this result can be generalized to manifolds.
While we are still in a Euclidean space, let us introduce the concept of natural
parameterization. First define the partial length of a curve as
$$s(t) = \int_0^t \left[ \sum_{k=1}^n \left( \frac{dx^k(u)}{du} \right)^2 \right]^{1/2} du \qquad (3.62)$$
$$t = t(s) \qquad (3.63)$$
and redefine the functions $x^i(s)$ and $dx^i(s)/ds$ accordingly. We distinguish between functions of $t$ and functions of $s$ only by the argument in parentheses and do not assign different symbols to the functions themselves. This is obviously an abuse of notation, but it is convenient and hopefully will not lead to confusion. Note also that (3.63) requires the inversion of the function $s(t)$. This may be computationally difficult, but we are ignoring such difficulties in the present discussion.
The parameterization $x^i(s)$ is called the natural parameterization of the curve. With the natural parameterization, (3.62) becomes a triviality:
$$\int_0^s du = s \qquad (3.64)$$
Let us now pose the following question: Given two points on the manifold,
what is the curve of minimum length connecting these points? A partial an-
swer is given as follows: The curve of minimum length satisfies the geodesic
equation, provided the curve is parameterized in the natural parameteriza-
tion. Note that this condition is necessary, but not sufficient, for the curve
to have minimum length.
Appendix A
The subject of dual vector spaces is not usually covered in linear algebra
courses, but is necessary for tensors. We will therefore provide a brief intro-
duction to this subject in this appendix.
The sum of two linear functionals is a linear functional and the product of a
linear functional by a scalar is a linear functional. Sums of linear functionals
and products by scalars obey all the properties of vectors in a vector space.
The proofs of these statements are straightforward. Therefore, the set of
all linear functionals on V is a vector space, called the dual space of V and
denoted by V . The elements of V are called dual vectors or covectors.
We will use bold font for dual vectors; usually there will be no confusion with vectors, but in case of ambiguity we will mention explicitly whether the symbol stands for a vector or a dual vector. If $y$ is a dual vector and $x$ is a vector,
we will denote
$$\langle y, x \rangle = y(x)$$
$$\langle f^i, x \rangle = x^i \qquad (A.2)$$
Next we show that every linear functional can be expressed as a linear combination of $(f^1, \ldots, f^n)$. Let $g \in \mathcal{V}^*$ and define the $n$ scalars $g_i = \langle g, e_i \rangle$. We will prove that $g$ is given by $g = \sum_{i=1}^n g_i f^i$. For an arbitrary vector $x$,
$$\langle g, x \rangle = \left\langle g, \sum_{i=1}^n e_i x^i \right\rangle = \sum_{i=1}^n \langle g, e_i \rangle\, x^i = \sum_{i=1}^n g_i\, x^i = \sum_{i=1}^n g_i\, \langle f^i, x \rangle \qquad (A.3)$$
Since this holds identically for all $x$, it follows that $g = \sum_{i=1}^n g_i f^i$ and the proof is complete.
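Numerically, when vectors in $\mathbb{R}^n$ are identified with their coordinate columns, the dual basis consists of the rows of the inverse basis matrix; here is a small sketch with a hypothetical basis:

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])        # columns: a basis (e1, e2, e3) of R^3
F = np.linalg.inv(E)                   # rows: the dual basis functionals f^i

# <f^i, e_j> = delta^i_j:
assert np.allclose(F @ E, np.eye(3))

x = E @ np.array([2.0, -1.0, 3.0])     # the vector with coordinates (2, -1, 3)
assert np.allclose(F @ x, [2.0, -1.0, 3.0])   # f^i extracts x^i, as in (A.2)
```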
Appendix B
To find the covariant Riemann tensor, we must compute each of the four terms in (2.40) and lower the contravariant index. Let us begin with the first term $g_{iu}\, \Gamma^u_{hp,q}$. We use the product rule for derivatives and then substitute (B.2) to find
$$g_{iu}\, \Gamma^u_{hp,q} = (g_{iu}\, \Gamma^u_{hp})_{,q} - g_{iu,q}\, \Gamma^u_{hp} = \Gamma_{ihp,q} - (\Gamma_{iuq} + \Gamma_{uiq})\, \Gamma^u_{hp} \qquad (B.3)$$
The second term of (2.40) is simpler:
$$g_{iu}\, \Gamma^u_{kq}\, \Gamma^k_{hp} = \Gamma_{ikq}\, \Gamma^k_{hp} = \Gamma_{iuq}\, \Gamma^u_{hp} \qquad (B.4)$$
Adding (B.3) and (B.4) gives
$$g_{iu}\, (\Gamma^u_{hp,q} + \Gamma^u_{kq}\, \Gamma^k_{hp}) = \Gamma_{ihp,q} - \Gamma_{uiq}\, \Gamma^u_{hp} = \Gamma_{ihp,q} - g^{ut}\, \Gamma_{uiq}\, \Gamma_{thp} \qquad (B.5)$$
The sum of the third and fourth terms of (2.40) is obtained from (B.5) upon interchanging $p$ and $q$:
$$g_{iu}\, (\Gamma^u_{hq,p} + \Gamma^u_{kp}\, \Gamma^k_{hq}) = \Gamma_{ihq,p} - g^{ut}\, \Gamma_{uip}\, \Gamma_{thq} \qquad (B.6)$$
Since the indices $u$ and $t$ in (B.6) are dummy (they are summation indices) and since $g^{ut}$ is symmetric in $u$ and $t$, we may interchange them and rewrite (B.6) as
$$g_{iu}\, (\Gamma^u_{hq,p} + \Gamma^u_{kp}\, \Gamma^k_{hq}) = \Gamma_{ihq,p} - g^{ut}\, \Gamma_{tip}\, \Gamma_{uhq} \qquad (B.7)$$
Combining (B.5) and (B.7) gives the desired formula for $R_{ihqp}$:
$$R_{ihqp} = g_{iu}\, R^u_{hqp} = g_{iu}\, (\Gamma^u_{hp,q} + \Gamma^u_{kq}\, \Gamma^k_{hp}) - g_{iu}\, (\Gamma^u_{hq,p} + \Gamma^u_{kp}\, \Gamma^k_{hq}) = (\Gamma_{ihp,q} - \Gamma_{ihq,p}) + g^{ut}\, (\Gamma_{tip}\, \Gamma_{uhq} - \Gamma_{thp}\, \Gamma_{uiq}) \qquad (B.8)$$
It is easy to see that the second term on the right side of (B.8) is antisymmetric in $i$ and $h$, and it remains to check the first term. We have
Appendix C
The scalar function $F$ will result from a $(2,2)$-tensor $F^{ij}_{pq}$ upon performing the contraction $F^{ij}_{ij}$. For the sake of clarity, let us keep the $(2,2)$-tensor for the time being. Also, to make the derivation easier to typeset and read, we use the shorthand notation $\dot{x}^i(t) = dx^i(t)/dt$. So, we define
$$F^{ij}_{pq}(x(t), \dot{x}(t)) = g_{pq}(x(t))\, \dot{x}^i(t)\, \dot{x}^j(t) \qquad (C.2)$$
and then
$$F^{ij}_{pq}(x(t) + \epsilon y(t), \dot{x}(t) + \epsilon \dot{y}(t)) = g_{pq}(x(t) + \epsilon y(t))\, [\dot{x}^i(t) + \epsilon \dot{y}^i(t)]\, [\dot{x}^j(t) + \epsilon \dot{y}^j(t)] = F^{ij}_{pq}(x(t), \dot{x}(t)) + \epsilon\, \delta F^{ij}_{pq}(x(t), \dot{x}(t)) + O(\epsilon^2) \qquad (C.3)$$
where $\epsilon\, \delta F^{ij}_{pq}(x(t), \dot{x}(t))$ is the linear term in $\epsilon$ and $O(\epsilon^2)$ is a remainder term; that is, a term bounded in magnitude by some multiple of $\epsilon^2$.
To compute an explicit expression for $\delta F^{ij}_{pq}(x(t), \dot{x}(t))$, let us first approximate $g_{pq}$ up to first order in $\epsilon$:
$$g_{pq}(x(t) + \epsilon y(t)) = g_{pq}(x(t)) + \epsilon\, g_{pq,k}(x(t))\, y^k(t) + O(\epsilon^2) \qquad (C.4)$$
Therefore,
$$\delta F^{ij}_{pq}(x(t), \dot{x}(t)) = g_{pq,k}(x(t))\, y^k(t)\, \dot{x}^i(t)\, \dot{x}^j(t) + g_{pq}(x(t))\, \dot{x}^i(t)\, \dot{y}^j(t) + g_{pq}(x(t))\, \dot{x}^j(t)\, \dot{y}^i(t) \qquad (C.5)$$
Consider the following identity:
$$\frac{d}{dt}\, [g_{pq}(x(t))\, \dot{x}^i(t)\, y^j(t)] = g_{pq,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^i(t)\, y^j(t) + g_{pq}(x(t))\, \ddot{x}^i(t)\, y^j(t) + g_{pq}(x(t))\, \dot{x}^i(t)\, \dot{y}^j(t) \qquad (C.6)$$
from which we can write
$$g_{pq}(x(t))\, \dot{x}^i(t)\, \dot{y}^j(t) = \frac{d}{dt}\, [g_{pq}(x(t))\, \dot{x}^i(t)\, y^j(t)] - g_{pq,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^i(t)\, y^j(t) - g_{pq}(x(t))\, \ddot{x}^i(t)\, y^j(t) \qquad (C.7)$$
and similarly
$$g_{pq}(x(t))\, \dot{x}^j(t)\, \dot{y}^i(t) = \frac{d}{dt}\, [g_{pq}(x(t))\, \dot{x}^j(t)\, y^i(t)] - g_{pq,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^j(t)\, y^i(t) - g_{pq}(x(t))\, \ddot{x}^j(t)\, y^i(t) \qquad (C.8)$$
We can now substitute (C.7) and (C.8) in (C.5) and get
$$\delta F^{ij}_{pq}(x(t), \dot{x}(t)) = g_{pq,k}(x(t))\, y^k(t)\, \dot{x}^i(t)\, \dot{x}^j(t) + \frac{d}{dt}\, [g_{pq}(x(t))\, \dot{x}^i(t)\, y^j(t)] + \frac{d}{dt}\, [g_{pq}(x(t))\, \dot{x}^j(t)\, y^i(t)] - g_{pq,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^i(t)\, y^j(t) - g_{pq,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^j(t)\, y^i(t) - g_{pq}(x(t))\, \ddot{x}^i(t)\, y^j(t) - g_{pq}(x(t))\, \ddot{x}^j(t)\, y^i(t) \qquad (C.9)$$
We are now in a position to perform the contraction from $(2,2)$-tensors to scalars:
$$F(x(t), \dot{x}(t)) = g_{ij}(x(t))\, \dot{x}^i(t)\, \dot{x}^j(t) = 1 \qquad (C.10)$$
and
$$\delta F(x(t), \dot{x}(t)) = g_{ij,k}(x(t))\, y^k(t)\, \dot{x}^i(t)\, \dot{x}^j(t) + \frac{d}{dt}\, [g_{ij}(x(t))\, \dot{x}^i(t)\, y^j(t)] + \frac{d}{dt}\, [g_{ij}(x(t))\, \dot{x}^j(t)\, y^i(t)] - g_{ij,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^i(t)\, y^j(t) - g_{ij,\ell}(x(t))\, \dot{x}^\ell(t)\, \dot{x}^j(t)\, y^i(t) - g_{ij}(x(t))\, \ddot{x}^i(t)\, y^j(t) - g_{ij}(x(t))\, \ddot{x}^j(t)\, y^i(t) \qquad (C.11)$$
We now recall the formula (2.20) for the affine connection to express (C.12) as
$$\delta F(x(t), \dot{x}(t)) = 2\, \frac{d}{dt}\, [g_{ij}(x(t))\, \dot{x}^i(t)\, y^j(t)] - 2\, [g_{ik}\, \ddot{x}^i(t) + g_{k\ell}\, \Gamma^\ell_{ij}\, \dot{x}^i(t)\, \dot{x}^j(t)]\, y^k(t) \qquad (C.13)$$
Adding the zero-order term and the first-order term and approximating the square root up to first order gives
$$[F(x(t), \dot{x}(t)) + \epsilon\, \delta F(x(t), \dot{x}(t)) + O(\epsilon^2)]^{1/2} = [1 + \epsilon\, \delta F(x(t), \dot{x}(t)) + O(\epsilon^2)]^{1/2} = 1 + 0.5\, \epsilon\, \delta F(x(t), \dot{x}(t)) + O(\epsilon^2) \qquad (C.14)$$
where the number 1 results from the fact that we are using the natural parameterization for the minimum-length solution.
which follows from the fact that $y^j(0) = y^j(t_f) = 0$.
Now, finally, comes the main point of the preceding derivation. For small enough $\epsilon$, only the first-order term determines whether the right side of (C.16) is less than, equal to, or greater than $S$. Since we are free to choose a positive or negative $\epsilon$, we can force the first-order term to be positive or negative, unless the integral is identically zero. It follows that a necessary condition for $S$ to be the minimum length is that
$$\int_0^{t_f} [g_{ik}\, \ddot{x}^i(t) + g_{k\ell}\, \Gamma^\ell_{ij}\, \dot{x}^i(t)\, \dot{x}^j(t)]\, y^k(t)\, dt = 0 \qquad (C.17)$$
But, since the $y^k(t)$ are arbitrary functions, the necessary condition is, in fact,
$$\frac{d^2 x^m(t)}{dt^2} + \Gamma^m_{ij}\, \frac{dx^i(t)}{dt}\, \frac{dx^j(t)}{dt} = 0 \qquad (C.21)$$
Equation (C.21) is the geodesic equation.