Introduction To Vectors and Tensors
Volume 2
Ray M. Bowen
Mechanical Engineering
Texas A&M University
College Station, Texas
and
C.-C. Wang
Mathematical Sciences
Rice University
Houston, Texas
Preface To Volume 2
This is the second volume of a two-volume work on vectors and tensors. Volume 1 is concerned
with the algebra of vectors and tensors, while this volume is concerned with the geometrical
aspects of vectors and tensors. This volume begins with a discussion of Euclidean manifolds. The
principal mathematical entity considered in this volume is a field, which is defined on a domain in a
Euclidean manifold. The values of the field may be vectors or tensors. We investigate results that arise from the distribution of the vector or tensor values of the field on its domain. While we do not discuss
general differentiable manifolds, we do include a chapter on vector and tensor fields defined on
hypersurfaces in a Euclidean manifold.
This volume contains frequent references to Volume 1. However, references are limited to
basic algebraic concepts, and a student with a modest background in linear algebra should be able
to utilize this volume as an independent textbook. As indicated in the preface to Volume 1, this
volume is suitable for a one-semester course on vector and tensor analysis. On occasions when we
have taught a one-semester course, we covered material from Chapters 9, 10, and 11 of this
volume. This course also covered the material in Chapters 0, 3, 4, 5, and 8 from Volume 1.
We wish to thank the U.S. National Science Foundation for its support during the
preparation of this work. We also wish to take this opportunity to thank Dr. Kurt Reinicke for
critically checking the entire manuscript and offering improvements on many points.
__________________________________________________________________________
CONTENTS
Section 55. Normal Vector, Tangent Plane, and Surface Metric… 407
Section 56. Surface Covariant Derivatives………………………. 416
Section 57. Surface Geodesics and the Exponential Map……….. 425
Section 58. Surface Curvature, I. The Formulas of Weingarten
and Gauss…………………………………………… 433
Section 59. Surface Curvature, II. The Riemann-Christoffel
Tensor and the Ricci Identities……………………... 443
Section 60. Surface Curvature, III. The Equations of Gauss and Codazzi 449
Section 63. The General Linear Group and Its Subgroups……….. 463
Section 64. The Parallelism of Cartan……………………………. 469
Section 65. One-Parameter Groups and the Exponential Map…… 476
Section 66. Subgroups and Subalgebras…………………………. 482
Section 67. Maximal Abelian Subgroups and Subalgebras……… 486
INDEX………………………………………………………………………. x
_____________________________________________________________________________
PART III
BISHOP, R. L., and R. J. CRITTENDEN, Geometry of Manifolds, Academic Press, New York, 1964.
BISHOP, R. L., and S. I. GOLDBERG, Tensor Analysis on Manifolds, Macmillan, New York, 1968.
CHEVALLEY, C., Theory of Lie Groups, Princeton University Press, Princeton, New Jersey, 1946.
COHN, P. M., Lie Groups, Cambridge University Press, Cambridge, 1965.
EISENHART, L. P., Riemannian Geometry, Princeton University Press, Princeton, New Jersey, 1925.
ERICKSEN, J. L., Tensor Fields, an appendix in The Classical Field Theories, Vol. III/1,
Encyclopedia of Physics, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1960.
FLANDERS, H., Differential Forms with Applications in the Physical Sciences, Academic Press,
New York, 1963.
KOBAYASHI, S., and K. NOMIZU, Foundations of Differential Geometry, Vols. I and II, Interscience,
New York, 1963, 1969.
LOOMIS, L. H., and S. STERNBERG, Advanced Calculus, Addison-Wesley, Reading, Massachusetts,
1968.
MCCONNELL, A. J., Applications of Tensor Analysis, Dover Publications, New York, 1957.
NELSON, E., Tensor Analysis, Princeton University Press, Princeton, New Jersey, 1967.
NICKERSON, H. K., D. C. SPENCER, and N. E. STEENROD, Advanced Calculus, D. Van Nostrand,
Princeton, New Jersey, 1958.
SCHOUTEN, J. A., Ricci Calculus, 2nd ed., Springer-Verlag, Berlin, 1954.
STERNBERG, S., Lectures on Differential Geometry, Prentice-Hall, Englewood Cliffs, New Jersey,
1964.
WEATHERBURN, C. E., An Introduction to Riemannian Geometry and the Tensor Calculus,
Cambridge University Press, Cambridge, 1957.
_______________________________________________________________________________
Chapter 9
EUCLIDEAN MANIFOLDS
This chapter is the first where the algebraic concepts developed thus far are combined with
ideas from analysis. The main concept to be introduced is that of a manifold. We will discuss here
only a special case called a Euclidean manifold. The reader is assumed to be familiar with certain
elementary concepts in analysis, but, for the sake of completeness, many of these shall be inserted
when needed.
Consider an inner product space V and a set E. The set E is a Euclidean point space if
there exists a function f : E × E → V such that:
(a) f ( x, y ) = f (x, z) + f ( z, y ), x, y , z ∈ E
and
(b) For every x ∈E and v ∈V there exists a unique element y ∈E such that
f ( x, y ) = v .
The elements of E are called points, and the inner product space V is called the translation space.
We say that f ( x, y ) is the vector determined by the end point x and the initial point y . Condition
b) above is equivalent to requiring the function f x : E → V defined by f x ( y ) = f ( x, y ) to be one to
one for each x . The dimension of E , written dimE , is defined to be the dimension of V . If V
does not have an inner product, the set E defined above is called an affine space.
A Euclidean point space is not a vector space, but a vector space with an inner product is made
a Euclidean point space by defining f(v₁, v₂) ≡ v₁ − v₂ for all v₁, v₂ ∈ V. For an arbitrary point space
the function f is called the point difference, and it is customary to use the suggestive notation
f ( x, y ) = x − y (43.1)
In this notation, condition (a) takes the form

x − y = (x − z) + (z − y)   (43.2)

and condition (b) asserts that, for given y and v, the equation

x − y = v   (43.3)

has a solution x. The point difference has the properties

(i) x − x = 0
(ii) x − y = −(y − x)   (43.4)
(iii) if x − y = x′ − y′, then x − x′ = y − y′

To establish (i), take y = z = x in (43.2); then

x − x = (x − x) + (x − x)

which implies x − x = 0. To obtain (ii), take y = x in (43.2) and use (i). For (iii) observe that

x − y = (x − x′) + (x′ − y′) + (y′ − y)

from (43.2). However, we are given x − y = x′ − y′, which implies (iii).
The equation
x−y =v
has the property that given any v and y , x is uniquely determined. For this reason it is customary
to write
x=y+v (43.5)
Sec. 43 • Euclidean Point Spaces 299
The distance function d : E × E → R is defined by

d(x, y) = |x − y| = { (x − y) ⋅ (x − y) }^(1/2)   (43.6)
It easily follows from the definition (43.6) and the properties of the inner product that
d ( x, y ) = d ( y , x ) (43.7)
and
d ( x , y ) ≤ d ( x , z ) + d ( z, y ) (43.8)
for all x, y , and z in E . Equation (43.8) is simply rewritten in terms of the points x, y , and z
rather than the vectors x − y , x − z , and z − y . It is also apparent from (43.6) that
d ( x, y ) ≥ 0 and d ( x, y ) = 0 ⇔ x = y (43.9)
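The metric properties (43.7)-(43.9) can be checked concretely for E = R³ with the standard inner product. The following sketch (a hypothetical numerical example, not part of the text) verifies symmetry, positivity, definiteness, and the triangle inequality on random sample points:

```python
import math
import random

def d(x, y):
    # distance function (43.6): d(x, y) = {(x - y)·(x - y)}^(1/2)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

random.seed(0)
pts = [tuple(random.uniform(-5.0, 5.0) for _ in range(3)) for _ in range(10)]
for x in pts:
    for y in pts:
        assert abs(d(x, y) - d(y, x)) < 1e-12       # symmetry (43.7)
        assert d(x, y) >= 0.0                       # positivity (43.9)
        assert (d(x, y) == 0.0) == (x == y)         # definiteness (43.9)
        for z in pts:
            assert d(x, y) <= d(x, z) + d(z, y) + 1e-12   # triangle inequality (43.8)
```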
There are several concepts from the theory of metric spaces which we need to summarize.
For simplicity the definitions are stated here in terms of Euclidean point spaces only even though
they can be defined for metric spaces in general.
In a Euclidean point space E an open ball of radius ε > 0 centered at x 0 ∈E is the set
B( x 0 , ε ) = {x d ( x, x 0 ) < ε } (43.10)
and the closed ball of radius ε > 0 centered at x₀ ∈ E is the set

B̄(x₀, ε) = { x | d(x, x₀) ≤ ε }   (43.11)

A subset U of E is open if for every x ∈ U there exists an open ball B(x, ε) ⊂ U. Open sets have the following two properties.
The union of an arbitrary collection of open sets is open.

Proof. Let {Uα | α ∈ I} be a collection of open sets, where I is an index set. Assume that x ∈ ∪α∈I Uα. Then x must belong to at least one of the sets in the collection, say Uα₀. Since Uα₀ is open, there exists an open ball B(x, ε) ⊂ Uα₀ ⊂ ∪α∈I Uα. Since x is arbitrary, ∪α∈I Uα is open.
The intersection of a finite collection of open sets is open.

Proof. Let {U₁, ..., U_n} be a finite family of open sets. If U₁ ∩ ⋯ ∩ U_n is empty, the assertion is trivial. Thus, assume U₁ ∩ ⋯ ∩ U_n is not empty and let x be an arbitrary element of it. Then x ∈ Uᵢ for i = 1, ..., n, and there is an open ball B(x, εᵢ) ⊂ Uᵢ for each i. Let ε be the smallest of the positive numbers ε₁, ..., ε_n. Then x ∈ B(x, ε) ⊂ U₁ ∩ ⋯ ∩ U_n. Thus U₁ ∩ ⋯ ∩ U_n is open.
It should be noted that arbitrary intersections of open sets will not always lead to open sets.
The standard counterexample is given by the family of open subsets of R of the form (−1/n, 1/n),
n = 1, 2, 3, .... The intersection of the whole family is the set {0}, which is not open.
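The failure of openness under infinite intersections can be made concrete. In this sketch, 0 belongs to every interval (−1/n, 1/n), yet no candidate ball B(0, ε) fits inside all of them, since the point ε/2 escapes Uₙ once 1/n ≤ ε/2:

```python
# The family U_n = (-1/n, 1/n), n = 1, 2, 3, ...: each U_n is open and
# contains 0, but the intersection of all of them is {0}, which is not open.
def in_U(n, x):
    return -1.0 / n < x < 1.0 / n

# 0 lies in every U_n:
assert all(in_U(n, 0.0) for n in range(1, 1000))

# but any proposed radius eps > 0 fails for large n: the point eps/2 is
# within eps of 0 yet falls outside U_n once 1/n <= eps/2.
for eps in (1.0, 0.1, 0.001):
    n = int(2 / eps) + 1          # choose n with 1/n < eps/2
    assert not in_U(n, eps / 2)
```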
By a sequence in E we mean a function on the positive integers {1, 2,3,..., n,...} with values
in E . The notation {x1 , x 2 , x 3..., x n ,...} , or simply {x n } , is usually used to denote the values of the
sequence. A sequence {x n } is said to converge to a limit x ∈E if for every open ball B (x, ε )
centered at x , there exists a positive integer n0 (ε ) such that x n ∈ B( x, ε ) whenever n ≥ n0 (ε ) .
Equivalently, a sequence {x n } converges to x if for every real number ε > 0 there exists a positive
integer n0 (ε ) such that d ( x n , x ) < ε for all n > n0 (ε ) . If {x n } converges to x , it is conventional to
write
x = lim_{n→∞} x_n   or   x_n → x as n → ∞

The limit of a convergent sequence is unique. To see this, assume that

x = lim_{n→∞} x_n   and   y = lim_{n→∞} x_n

By the triangle inequality (43.8),

d(x, y) ≤ d(x, x_n) + d(x_n, y)

for every n. Let ε be an arbitrary positive real number. Then from the definition of convergence of {x_n}, there exists an integer n₀(ε) such that n ≥ n₀(ε) implies d(x, x_n) < ε and d(x_n, y) < ε. Therefore,

d(x, y) ≤ 2ε

and, since ε is arbitrary, d(x, y) = 0; thus x = y.
The reader is cautioned not to confuse the concepts limit of a sequence and limit point of a
subset. A sequence is not a subset of E ; it is a function with values in E . A sequence may have a
limit when it has no limit point. Likewise the set of points which represent the values of a sequence
may have a limit point when the sequence does not converge to a limit. However, these two
concepts are related by the following result from the theory of metric spaces: a point x is a limit
point of a set U if and only if there exists a convergent sequence of distinct points of U with x as
a limit.
A mapping f :U → E ' , where U is an open set in E and E ' is a Euclidean point space or
an inner product space, is continuous at x 0 ∈U if for every real number ε > 0 there exists a real
number δ(x₀, ε) > 0 such that d(x, x₀) < δ(x₀, ε) implies d′(f(x₀), f(x)) < ε. Here d′ is the
distance function for E ' . When f is continuous at x 0 , it is conventional to write
lim_{x→x₀} f(x) = f(x₀)   or   f(x) → f(x₀) as x → x₀

The mapping f is differentiable at x ∈ U if there exists a linear transformation A_x, from V into the translation space of E′, such that

f(x + v) = f(x) + A_x v + o(x, v)   (43.12)
where

lim_{v→0} o(x, v)/|v| = 0   (43.13)
The linear transformation A_x is unique. For if A_x and Ā_x both satisfy (43.12), with remainders o and ō respectively, then

(A_x − Ā_x)v = ō(x, v) − o(x, v)

and dividing by |v| and letting v → 0 shows that A_x = Ā_x. The gradient of f at x is defined by

grad f(x) = A_x,   x ∈ U   (43.14)
A_x v = lim_{τ→0} [ f(x + τv) − f(x) ] / τ = (d/dτ) f(x + τv) |_{τ=0}   (43.15)
for all v ∈V . To obtain (43.15) replace v by τ v , τ > 0 in (43.12) and write the result as
A_x v = [ f(x + τv) − f(x) ] / τ − o(x, τv) / τ   (43.16)
By (43.13) the limit of the last term is zero as τ → 0 , and (43.15) is obtained. Equation (43.15)
holds for all v ∈V because we can always choose τ in (43.16) small enough to ensure that x + τ v
is in U , the domain of f . If f is differentiable at every x ∈U , then (43.15) can be written
( grad f(x) ) v = (d/dτ) f(x + τv) |_{τ=0}   (43.17)
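Formula (43.15) can be checked numerically for a concrete (hypothetical) function f(x) = (x ⋅ x)² on R³, whose gradient is grad f(x) = 4(x ⋅ x)x; the directional derivative computed by a central difference agrees with grad f(x) ⋅ v:

```python
import math

def f(x):
    # hypothetical scalar field f(x) = (x·x)^2
    return sum(c * c for c in x) ** 2

def grad_f(x):
    # analytic gradient: grad f(x) = 4 (x·x) x
    s = sum(c * c for c in x)
    return [4.0 * s * c for c in x]

def directional(f, x, v, tau=1e-6):
    # (43.15): A_x v = (d/dτ) f(x + τv) |_{τ=0}, by a central difference
    xp = [c + tau * w for c, w in zip(x, v)]
    xm = [c - tau * w for c, w in zip(x, v)]
    return (f(xp) - f(xm)) / (2.0 * tau)

x = [1.0, -2.0, 0.5]
v = [0.3, 0.4, -1.0]
exact = sum(g * w for g, w in zip(grad_f(x), v))    # grad f(x) · v
assert abs(exact - directional(f, x, v)) < 1e-4
```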
Before closing this section, there is an important theorem which needs to be recorded for later use. We shall not prove this theorem here; it is the inverse mapping theorem, and we assume that the reader is familiar with it from multivariable calculus.
Theorem 43.7. Let f :U → E ' be a C r mapping and assume that grad f ( x 0 ) is a linear
isomorphism. Then there exists a neighborhood U1 of x 0 such that the restriction of f to U1 is a
C r diffeomorphism. In addition
grad f⁻¹( f(x₀) ) = ( grad f(x₀) )⁻¹   (43.18)
This theorem provides a condition under which one can assert the existence of a local inverse of a smooth mapping.
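Equation (43.18) can be illustrated with an assumed two-dimensional example f(u, v) = (u cos v, u sin v), u > 0, whose local inverse is f⁻¹(z¹, z²) = ( ((z¹)² + (z²)²)^(1/2), atan2(z², z¹) ); the product of the two Jacobian matrices should be the identity:

```python
import math

# Hypothetical example: f(u, v) = (u cos v, u sin v) on u > 0.
def jac_f(u, v):
    # Jacobian of f at (u, v)
    return [[math.cos(v), -u * math.sin(v)],
            [math.sin(v),  u * math.cos(v)]]

def jac_finv(z1, z2):
    # Jacobian of the local inverse at the image point (z1, z2)
    r2 = z1 * z1 + z2 * z2
    r = math.sqrt(r2)
    return [[z1 / r, z2 / r],
            [-z2 / r2, z1 / r2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

u, v = 2.0, 0.7
z1, z2 = u * math.cos(v), u * math.sin(v)
P = matmul(jac_finv(z1, z2), jac_f(u, v))   # should be the 2x2 identity, per (43.18)
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```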
Exercises
43.2 Show that arbitrary intersections and finite unions of closed sets yield closed sets.
43.3 Let f :U → E ' , where U is open in E , and E ' is either a Euclidean point space or an
inner product space. Show that f is continuous on U if and only if f⁻¹(D) is open in
E for all D open in f (U ) .
43.4 Let f :U → E ' be a homeomorphism. Show that f maps any open set in U onto an
open set in E ' .
43.5 If f is a differentiable scalar-valued function on L(V; V), show that the gradient of f at A ∈ L(V; V), written

(∂f/∂A)(A)

is the element of L(V; V) determined by

tr( (∂f/∂A)(A) Bᵀ ) = (d/dτ) f(A + τB) |_{τ=0}

for all B ∈ L(V; V).
43.6 Show that

(∂μ₁/∂A)(A) = I   and   (∂μ_N/∂A)(A) = (adj A)ᵀ

where μ₁, ..., μ_N are the fundamental invariants of A ∈ L(V; V).
for all x ∈U . We call these fields the coordinate functions of the chart, and the mapping x̂ is also
called a coordinate map or a coordinate system on U . The set U is called the coordinate
neighborhood.
Two charts xˆ :U1 → R N and yˆ :U2 → R N , where U1 ∩U2 ≠ ∅ , yield the coordinate
transformation yˆ xˆ −1 : xˆ (U1 ∩U2 ) → yˆ (U1 ∩U2 ) and its inverse xˆ yˆ −1 : yˆ (U1 ∩U2 ) → xˆ (U1 ∩U2 ) .
Since

ŷ(x) = ( ŷ¹(x), ..., ŷᴺ(x) )   (44.2)

and x̂(x) = ( x̂¹(x), ..., x̂ᴺ(x) ), we have

( ŷ¹(x), ..., ŷᴺ(x) ) = ŷ ∘ x̂⁻¹( x̂¹(x), ..., x̂ᴺ(x) )   (44.3)

and

( x̂¹(x), ..., x̂ᴺ(x) ) = x̂ ∘ ŷ⁻¹( ŷ¹(x), ..., ŷᴺ(x) )   (44.4)
The component forms of (44.3) and (44.4) can be written in the simplified notation
y j = y j ( x1 ,..., x N ) ≡ y j ( x k ) (44.5)
Sec. 44 • Coordinate Systems 307
and
x j = x j ( y1 ,..., y N ) ≡ x j ( y k ) (44.6)
The two N-tuples ( y1 ,..., y N ) and ( x1 ,..., x N ) , where y j = yˆ j ( x ) and x j = xˆ j ( x ) , are the
coordinates of the point x ∈U1 ∩U2 . Figure 6 is useful in understanding the coordinate
transformations.
[Figure 6: the coordinate neighborhoods U₁ and U₂ with overlap U₁ ∩ U₂, the coordinate maps x̂ and ŷ into Rᴺ, and the coordinate transformations ŷ ∘ x̂⁻¹ and x̂ ∘ ŷ⁻¹ between x̂(U₁ ∩ U₂) and ŷ(U₁ ∩ U₂)]
It is important to note that the quantities x1 ,..., x N , y1 ,..., y N are scalar fields, i.e., real-
valued functions defined on certain subsets of E . Since U1 and U2 are open sets, U1 ∩U2 is open,
and because x̂ and ŷ are C r diffeomorphisms, x̂ (U1 ∩U2 ) and ŷ (U1 ∩U2 ) are open subsets of
Rᴺ. In addition, the mappings ŷ ∘ x̂⁻¹ and x̂ ∘ ŷ⁻¹ are C^r diffeomorphisms. Since x̂ is a diffeomorphism, equation (44.1) can be written in the form

x = x(x¹, ..., xᴺ) = x(xʲ)   (44.8)

where x is a diffeomorphism x : x̂(U) → U.
A C^r-atlas on E is a family of charts {(Uα, x̂α) | α ∈ I} whose coordinate neighborhoods cover E, i.e.,

E = ∪α∈I Uα   (44.9)
Equation (44.9) states that E is covered by the family of open sets {Uα α ∈ I } . A C r -Euclidean
manifold is a Euclidean point space equipped with a C r -atlas . A C ∞ -atlas and a C ∞ -Euclidean
manifold are defined similarly. For simplicity, we shall assume that E is C ∞ .
The jth coordinate curve through x₀ ∈ U is the curve

t ↦ x̂⁻¹( x₀¹, ..., x₀ʲ⁻¹, x₀ʲ + t, x₀ʲ⁺¹, ..., x₀ᴺ )   (44.10)

defined for all t such that ( x₀¹, ..., x₀ʲ⁻¹, x₀ʲ + t, x₀ʲ⁺¹, ..., x₀ᴺ ) ∈ x̂(U), where (x₀ᵏ) = x̂(x₀). The subset of U obtained by requiring

xʲ = x̂ʲ(x) = const   (44.11)

is called the jth coordinate surface.
Euclidean manifolds possess certain special coordinate systems of major interest. Let {i₁, ..., i_N} be an arbitrary basis, not necessarily orthonormal, for V. We define N constant vector fields i_j : E → V, j = 1, ..., N, by

i_j(x) = i_j,   x ∈ E   (44.12)
The use of the same symbol for the vector field and its value will cause no confusion and simplifies
the notation considerably. If 0E denotes a fixed element of E , then a Cartesian coordinate system
on E is defined by the N scalar fields zˆ1 , zˆ 2 ,..., zˆ N such that
z j = zˆ j ( x ) = ( x − 0E ) ⋅ i j , x ∈E (44.13)
If the basis {i1 ,..., i N } is orthonormal, the Cartesian system is called a rectangular Cartesian
system. The point 0E is called the origin of the Cartesian coordinate system. The vector field
defined by
r ( x ) = x − 0E (44.14)
for all x ∈E is the position vector field relative to 0E . The value r (x ) is the position vector of x .
If {i1 ,..., i N } is the basis reciprocal to {i1 ,..., i N } , then (44.13) implies
x − 0_E = x(z¹, ..., zᴺ) − 0_E = ẑʲ(x) i_j = zʲ i_j   (44.15)

Equivalently, in terms of fields,

r = ẑʲ i_j   (44.16)
The product of the scalar field zˆ j with the vector field i j in (44.16) is defined pointwise; i.e., if f
is a scalar field and v is a vector field, then fv is a vector field defined by
fv ( x ) = f ( x ) v ( x ) (44.17)
for all x in the intersection of the domains of f and v . An equivalent version of (44.13) is
zˆ j = r ⋅ i j (44.18)
where the operation r ⋅ iʲ between vector fields is defined pointwise in a fashion similar to (44.17). As an illustration, let { ī₁, ī₂, ..., ī_N } be another basis for V, with reciprocal basis { ī¹, ..., īᴺ } related to { i¹, ..., iᴺ } by

īʲ = Qʲ_k iᵏ   (44.19)

and let 0̄_E be another origin. The Cartesian coordinates z̄ʲ associated with { ī_j } and 0̄_E are related to the original ones by

z̄ʲ = (x − 0̄_E) ⋅ īʲ = (x − 0_E) ⋅ īʲ + (0_E − 0̄_E) ⋅ īʲ
   = Qʲ_k (x − 0_E) ⋅ iᵏ + (0_E − 0̄_E) ⋅ īʲ = Qʲ_k zᵏ + cʲ   (44.20)

where (44.19) has been used. Also in (44.20) the constant scalars cᵏ, k = 1, ..., N, are defined by

cᵏ = (0_E − 0̄_E) ⋅ īᵏ   (44.21)

If the bases { i₁, ..., i_N } and { ī₁, ..., ī_N } are both orthonormal, then the matrix [Qʲ_k] is orthogonal. Note that the coordinate neighborhood of a Cartesian coordinate system is the entire space E.
The jth coordinate curve which passes through 0E of the Cartesian coordinate system is the
curve
λ (t ) = ti j + 0E (44.22)
Equation (44.22) follows from (44.10), (44.15), and the fact that for x = 0E , z1 = z 2 = ⋅⋅⋅ = z N = 0 .
As (44.22) indicates, the coordinate curves are straight lines passing through 0E . Similarly, the jth
coordinate surface is the plane defined by
( x − 0E ) ⋅ i j = const
[Figure 7: a Cartesian coordinate system for N = 3, showing the origin 0_E, the basis vectors i₁, i₂, i₃, and the coordinate axes z¹, z², z³]
Geometrically, the Cartesian coordinate system can be represented by Figure 7 for N=3. The
coordinate transformation represented by (44.20) yields the result in Figure 8 (again for N=3).
Since every inner product space has an orthonormal basis (see Theorem 13.3), there is no loss of
generality in assuming that associated with every point of E as origin we can introduce a
rectangular Cartesian coordinate system.
[Figure 8: two Cartesian coordinate systems with origins 0_E and 0̄_E, basis vectors i_j and ī_j, coordinate axes zʲ and z̄ʲ, and the position vectors r and r̄ of a point relative to the two origins]
Given any rectangular coordinate system ( zˆ1 ,..., zˆ N ) , we can characterize a general or a
curvilinear coordinate system as follows: Let (U , xˆ ) be a chart. Then it can be specified by the
coordinate transformation from ẑ to x̂ as described earlier, since in this case the overlap of the
coordinate neighborhood is U = E ∩U . Thus we have
( z1 ,..., z N ) = zˆ xˆ −1 ( x1 ,..., x N )
(44.23)
( x1 ,..., x N ) = xˆ zˆ −1 ( z1 ,..., z N )
z j = z j ( x1 ,..., x N ) = z j ( x k )
(44.24)
x j = x j ( z1 ,..., z N ) = x j ( z k )
Sec. 44 • Coordinate Systems 313
As an example of the above ideas, consider the cylindrical coordinate system.¹ In this case N = 3 and equations (44.23) take the special form

z¹ = x¹ cos x²
z² = x¹ sin x²   (44.25)
z³ = x³
In order for (44.25) to qualify as a coordinate transformation, it is necessary for the transformation
functions to be C ∞ . It is apparent from (44.25) that ẑ xˆ −1 is C ∞ on every open subset of R 3 .
Also, by examination of (44.25), ẑ ∘ x̂⁻¹ is one-to-one if we restrict it to an appropriate domain, say (0, ∞) × (0, 2π) × (−∞, ∞). The image of this subset of R³ under ẑ ∘ x̂⁻¹ is easily seen to be the set R³ ∖ { (z¹, z², z³) | z¹ ≥ 0, z² = 0 }, and the inverse transformation is
(x¹, x², x³) = ( [ (z¹)² + (z²)² ]^(1/2), tan⁻¹(z²/z¹), z³ )   (44.26)
which is also C∞. Consequently we can choose the coordinate neighborhood to be any open subset U in E such that

ẑ(U) ⊂ R³ ∖ { (z¹, z², z³) | z¹ ≥ 0, z² = 0 }   (44.27)

or, equivalently, x̂(U) ⊂ (0, ∞) × (0, 2π) × (−∞, ∞).
Figure 9 describes the cylindrical system. The coordinate curves are a straight line (for x1 ), a
circle lying in a plane parallel to the ( z1 , z 2 ) plane (for x 2 ), and a straight line coincident with z 3
(for x 3 ). The coordinate surface x1 = const is a circular cylinder whose generators are the z 3
lines. The remaining coordinate surfaces are planes.
¹ The computer program Maple has a plot command, coordplot3d, that is useful when trying to visualize coordinate curves and coordinate surfaces. The program MATLAB will also produce useful and instructive plots.
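The cylindrical transformation (44.25) and its inverse (44.26) can be checked as a round trip on the restricted domain (0, ∞) × (0, 2π) × (−∞, ∞). This sketch uses atan2 (reduced mod 2π) in place of tan⁻¹ so that the angle lands in (0, 2π):

```python
import math

def z_of_x(x1, x2, x3):
    # forward transformation (44.25)
    return (x1 * math.cos(x2), x1 * math.sin(x2), x3)

def x_of_z(z1, z2, z3):
    # inverse transformation (44.26); atan2 reduced mod 2π plays the
    # role of tan⁻¹(z²/z¹) and places the angle in (0, 2π)
    ang = math.atan2(z2, z1) % (2.0 * math.pi)
    return (math.hypot(z1, z2), ang, z3)

x = (2.0, 1.25, -3.0)          # a point of (0, ∞) × (0, 2π) × (−∞, ∞)
back = x_of_z(*z_of_x(*x))
assert all(abs(a - b) < 1e-12 for a, b in zip(x, back))
```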
[Figure 9: the cylindrical coordinate system, showing the Cartesian axes z¹, z², z³ = x³ and the cylindrical coordinates x¹ (radial distance) and x² (azimuthal angle)]
Returning to the general transformations (44.5) and (44.6), we can substitute the second into
the first and differentiate the result to find
(∂yⁱ/∂xᵏ)(x¹, ..., xᴺ) (∂xᵏ/∂yʲ)(y¹, ..., yᴺ) = δⁱ_j   (44.28)

and

(∂xⁱ/∂yᵏ)(y¹, ..., yᴺ) (∂yᵏ/∂xʲ)(x¹, ..., xᴺ) = δⁱ_j   (44.29)

It follows from (44.28) that

det[ (∂yⁱ/∂xᵏ)(x¹, ..., xᴺ) ] = 1 / det[ (∂xʲ/∂yˡ)(y¹, ..., yᴺ) ] ≠ 0   (44.30)
For example, det[ (∂zʲ/∂xᵏ)(x¹, x², x³) ] = x¹ ≠ 0 for the cylindrical coordinate system. The determinant det[ (∂xʲ/∂yᵏ)(y¹, ..., yᴺ) ] is the Jacobian of the coordinate transformation (44.6).
Just as a vector space can be assigned an orientation, a Euclidean manifold can be oriented
by assigning the orientation to its translation space V . In this case E is called an oriented
Euclidean manifold. In such a manifold we use Cartesian coordinate systems associated with
positive basis only and these coordinate systems are called positive. A curvilinear coordinate
system is positive if its coordinate transformation relative to a positive Cartesian coordinate system
has a positive Jacobian.
Given a chart (U , xˆ ) for E , we can compute the gradient of each coordinate function xˆ i
and obtain a C ∞ vector field on U . We shall denote each of these fields by g i , namely
g i = grad xˆ i (44.31)
for i = 1,..., N . From this definition, it is clear that g i (x ) is a vector in V normal to the ith
coordinate surface. From (44.8) we can define N vector fields g₁, ..., g_N on U by

g_i(x) = (∂x/∂xⁱ)( x̂(x) )   (44.32)

or, equivalently,

dx = g_i(x) dxⁱ   (44.33)

for all x ∈ U. Equations (44.10) and (44.33) show that g_i(x) is tangent to the ith coordinate curve.
Since

x̂ⁱ( x(x¹, ..., xᴺ) ) = xⁱ   (44.34)

the chain rule along with the definitions (44.31) and (44.32) yields

gⁱ(x) ⋅ g_j(x) = δⁱ_j   (44.35)

as they should.
The values of the vector fields {g1 ,..., g N } form a linearly independent set of vectors
{g1 (x ),..., g N (x )} at each x ∈U . To see this assertion, assume x ∈U and
λ 1g1 ( x ) + λ 2g 2 ( x ) + ⋅⋅⋅ + λ N g N ( x ) = 0
for λ¹, ..., λᴺ ∈ R. Taking the inner product of this equation with gʲ(x) and using equation (44.35), we see that λʲ = 0, j = 1, ..., N, which proves the assertion.
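The duality (44.35) between the fields g_i = ∂x/∂xⁱ and gⁱ = grad x̂ⁱ can be verified numerically for the cylindrical chart, approximating both families by central differences (a sketch assuming the standard cylindrical transformation):

```python
import math

def chart_inv(c):          # x : coordinates -> point, cylindrical (44.25)
    return (c[0] * math.cos(c[1]), c[0] * math.sin(c[1]), c[2])

def chart(z):              # x̂ : point -> coordinates
    return (math.hypot(z[0], z[1]), math.atan2(z[1], z[0]), z[2])

def g_lower(c, i, h=1e-6): # g_i = ∂x/∂xⁱ  (44.32), central difference
    cp = list(c); cm = list(c); cp[i] += h; cm[i] -= h
    return [(a - b) / (2 * h) for a, b in zip(chart_inv(cp), chart_inv(cm))]

def g_upper(z, i, h=1e-6): # gⁱ = grad x̂ⁱ  (44.31), central difference
    out = []
    for k in range(3):
        zp = list(z); zm = list(z); zp[k] += h; zm[k] -= h
        out.append((chart(zp)[i] - chart(zm)[i]) / (2 * h))
    return out

c = (1.5, 0.8, 2.0)
z = chart_inv(c)
for i in range(3):
    for j in range(3):
        dot = sum(a * b for a, b in zip(g_upper(z, i), g_lower(c, j)))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-6   # (44.35)
```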
If (U1 , xˆ ) and (U2 , yˆ ) are two charts such that U1 ∩U2 ≠ ∅ , we can determine the
transformation rules for the changes of natural bases at x ∈U1 ∩U2 in the following way: We shall
let the vector fields h j , j = 1,..., N , be defined by
h j = grad yˆ j (44.36)
By the chain rule,

hʲ(x) = grad ŷʲ(x) = (∂yʲ/∂xⁱ)(x¹, ..., xᴺ) grad x̂ⁱ(x) = (∂yʲ/∂xⁱ)(x¹, ..., xᴺ) gⁱ(x)   (44.37)

Similarly, for the fields h_j tangent to the coordinate curves of ŷ,

h_j(x) = (∂xⁱ/∂yʲ)(y¹, ..., yᴺ) g_i(x)   (44.38)
for all x ∈U1 ∩U2 . Equations (44.37) and (44.38) are the desired transformations.
The inner products of the natural basis fields define scalar fields g_ij and gⁱʲ on U by

g_ij(x) = g_i(x) ⋅ g_j(x) = g_ji(x)   (44.39)
and
g ij ( x ) = g i ( x ) ⋅ g j ( x ) = g ji ( x ) (44.40)
The matrices of these components are inverses of one another,

[gⁱʲ(x)] = [g_ij(x)]⁻¹   (44.41)
since we have
g i = g ij g j (44.42)
and
g j = g ji g i (44.43)
where the product of the scalar fields with vector fields is defined by (44.17). If θ_ij is the angle
between the ith and the jth coordinate curves at x ∈U , then from (44.39)
cos θ_ij = g_ij(x) / [ g_ii(x) g_jj(x) ]^(1/2)   (no sum)   (44.44)
Based upon (44.44), the curvilinear coordinate system is orthogonal if g_ij = 0 when i ≠ j. The symbol g denotes a scalar field on U defined by

g(x) = det[ g_ij(x) ]   (44.45)

for all x ∈ U.
The differential element of arc length ds is defined by

ds² = dx ⋅ dx   (44.46)

From (44.33),

ds² = (∂x/∂xⁱ)( x̂(x) ) ⋅ (∂x/∂xʲ)( x̂(x) ) dxⁱ dxʲ = g_ij(x) dxⁱ dxʲ   (44.47)
If (U1 , xˆ ) and (U2 , yˆ ) are charts where U1 ∩U2 ≠ ∅ , then at x ∈U1 ∩U2
h_ij(x) = h_i(x) ⋅ h_j(x) = (∂xᵏ/∂yⁱ)(y¹, ..., yᴺ) (∂xˡ/∂yʲ)(y¹, ..., yᴺ) g_kl(x)   (44.48)
Equation (44.48) is helpful for actual calculations of the quantities gij (x ) . For example, for
the transformation (44.23), (44.48) can be arranged to yield
g_ij(x) = (∂zᵏ/∂xⁱ)(x¹, ..., xᴺ) (∂zᵏ/∂xʲ)(x¹, ..., xᴺ)   (44.49)
since i k ⋅ i l = δ kl . For the cylindrical coordinate system defined by (44.25) a simple calculation
based upon (44.49) yields
[g_ij(x)] =
⎡ 1    0      0 ⎤
⎢ 0  (x¹)²   0 ⎥   (44.50)
⎣ 0    0      1 ⎦
Among other things, (44.50) shows that this coordinate system is orthogonal. By (44.50) and
(44.41)
[gⁱʲ(x)] =
⎡ 1      0       0 ⎤
⎢ 0   1/(x¹)²   0 ⎥   (44.51)
⎣ 0      0       1 ⎦
g ( x ) = ( x1 ) 2 (44.52)
ds 2 = ( dx1 ) 2 + ( x1 ) 2 ( dx 2 ) 2 + ( dx 3 )2 (44.53)
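The computation leading from (44.49) to (44.50) can be reproduced numerically: differentiate the cylindrical transformation by central differences, assemble g_ij, and compare with diag(1, (x¹)², 1). A self-contained sketch:

```python
import math

def z_of_x(c):             # cylindrical transformation (44.25)
    return (c[0] * math.cos(c[1]), c[0] * math.sin(c[1]), c[2])

def dz_dx(c, j, h=1e-6):   # column ∂zᵏ/∂xʲ by central difference
    cp = list(c); cm = list(c); cp[j] += h; cm[j] -= h
    return [(a - b) / (2 * h) for a, b in zip(z_of_x(cp), z_of_x(cm))]

def metric(c):             # g_ij = Σ_k (∂zᵏ/∂xⁱ)(∂zᵏ/∂xʲ), per (44.49)
    J = [dz_dx(c, j) for j in range(3)]
    return [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

c = (2.0, 0.9, -1.0)
g = metric(c)
expected = [[1.0, 0.0, 0.0], [0.0, c[0] ** 2, 0.0], [0.0, 0.0, 1.0]]   # (44.50)
for i in range(3):
    for j in range(3):
        assert abs(g[i][j] - expected[i][j]) < 1e-6
```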
Exercises
i_k = ∂x/∂zᵏ   and   iᵏ = grad ẑᵏ
I = grad r (x )
z1 = x1 sin x 2 cos x 3
z 2 = x1 sin x 2 sin x 3
z 3 = x1 cos x 2
[g_ij(x)] =
⎡ 1    0       0           ⎤
⎢ 0  (x¹)²    0           ⎥
⎣ 0    0    (x¹ sin x²)²  ⎦
z¹ = x¹x² cos x³
z² = x¹x² sin x³
z³ = ½( (x¹)² − (x²)² )
[g_ij(x)] =
⎡ (x¹)² + (x²)²        0             0       ⎤
⎢       0        (x¹)² + (x²)²       0       ⎥
⎣       0              0          (x¹x²)²    ⎦
z¹ = a sin x² cos x³ / (cosh x¹ − cos x²)

z² = a sin x² sin x³ / (cosh x¹ − cos x²)

and

z³ = a sinh x¹ / (cosh x¹ − cos x²)
[g_ij(x)] =
⎡ a²/(cosh x¹ − cos x²)²             0                         0              ⎤
⎢            0            a²/(cosh x¹ − cos x²)²               0              ⎥
⎣            0                       0          a² sin²x² /(cosh x¹ − cos x²)² ⎦
[g_ij(x)] =
⎡ a²(cosh²x¹ − cos²x²)           0                   0          ⎤
⎢          0           a²(cosh²x¹ − cos²x²)          0          ⎥
⎣          0                     0          a² sinh²x¹ sin²x²   ⎦
z1 = a cosh x1 cos x 2
z 2 = a sinh x1 sin x 2
z3 = x3
[g_ij(x)] =
⎡ a²(sinh²x¹ + sin²x²)           0             0 ⎤
⎢          0           a²(sinh²x¹ + sin²x²)    0 ⎥
⎣          0                     0             1 ⎦
44.9 At a point x in E , the components of the position vector r (x ) = x − 0E with respect to the
basis {i1 ,..., i N } associated with a rectangular Cartesian coordinate system are z1 ,..., z N .
This observation follows, of course, from (44.16). Compute the components of r ( x ) with
respect to the basis {g1 ( x ), g 2 ( x ), g 3 ( x )} for (a) cylindrical coordinates, (b) spherical
coordinates, and (c) parabolic coordinates. You should find that
z¹ = a sinh x¹ cos x³ / (cosh x¹ − cos x²)

z² = a sinh x¹ sin x³ / (cosh x¹ − cos x²)

and

z³ = a sin x² / (cosh x¹ − cos x²)
[g_ij(x)] =
⎡ a²/(cosh x¹ − cos x²)²             0                          0               ⎤
⎢            0            a²/(cosh x¹ − cos x²)²                0               ⎥
⎣            0                       0          a² sinh²x¹ /(cosh x¹ − cos x²)² ⎦
In this section, we shall formalize certain ideas regarding fields on E and then investigate
the transformation rules for vectors and tensor fields. Let U be an open subset of E ; we shall
denote by F ∞ (U ) the set of C ∞ functions f :U → R . First we shall study the algebraic structure
of F ∞ (U ) . If f1 and f 2 are in F ∞ (U ) , then their sum f1 + f 2 is an element of F ∞ (U ) defined by
(f₁ + f₂)(x) = f₁(x) + f₂(x)   (45.1)

Similarly, their product f₁f₂ is an element of F∞(U) defined by

(f₁f₂)(x) = f₁(x)f₂(x)   (45.2)
for all x ∈U . For any real number λ ∈R the constant function is defined by
λ (x) = λ (45.3)
for all x ∈U . For simplicity, the function and the value in (45.3) are indicated by the same
symbol. Thus, the zero function in F ∞ (U ) is denoted simply by 0 and for every f ∈ F ∞ (U )
f + 0 = f   (45.4)

Similarly, the unit function is denoted by 1, and

1f = f   (45.5)
In addition, we define
− f = ( −1) f (45.6)
Sec. 45 • Transformation Rules 325
It is easily shown that the operations of addition and multiplication obey commutative, associative,
and distributive laws. These facts show that F ∞ (U ) is a commutative ring (see Section 7).
An important collection of scalar fields can be constructed as follows: Given two charts
(U₁, x̂) and (U₂, ŷ), where U₁ ∩ U₂ ≠ ∅, we define the N² partial derivatives (∂yⁱ/∂xʲ)(x¹, ..., xᴺ) at every (x¹, ..., xᴺ) ∈ x̂(U₁ ∩ U₂). Using a suggestive notation, we can define N² C∞ functions ∂yⁱ/∂xʲ : U₁ ∩ U₂ → R by
(∂yⁱ/∂xʲ)(x) = (∂yⁱ/∂xʲ)( x̂(x) )   (45.7)
A vector field v on U₁ has the component form

v = υⁱ g_i = υ_j gʲ   (45.8)

where the covariant components υ_j are related to the contravariant components υⁱ by

υ_j = g_ji υⁱ   (45.9)

In particular, if (U₂, ŷ) is another chart such that U₁ ∩ U₂ ≠ ∅, then the component form of h_j relative to x̂ is
h_j = (∂xⁱ/∂yʲ) g_i   (45.11)
Relative to the chart (U₂, ŷ), the same vector field v has the component form

v = ῡᵏ h_k   (45.12)

where ῡᵏ : U₁ ∩ U₂ → R. From (45.12), (45.11), and (45.8), the transformation rule for the components of v relative to the two charts (U₁, x̂) and (U₂, ŷ) is

υⁱ = (∂xⁱ/∂yʲ) ῡʲ   (45.13)
The components of v can also be computed directly from

υⁱ = v ⋅ gⁱ   (45.15)
Now let us consider tensor fields in general. Let Tq∞ (U ) denote the set of all tensor fields of
order q defined on an open set U in E . As with the set F ∞ (U ) , the set Tq∞ (U ) can be assigned
an algebraic structure. The sum of A : U → T_q(V) and B : U → T_q(V) is a C∞ tensor field A + B : U → T_q(V) defined by

(A + B)(x) = A(x) + B(x)   (45.16)

and the product of a scalar field f ∈ F∞(U) and a tensor field A is the tensor field fA defined by

(fA)(x) = f(x)A(x)   (45.17)
Clearly this multiplication operation satisfies the usual associative and distributive laws with
respect to the sum for all x ∈U . As with F ∞ (U ) , constant tensor fields in Tq∞ (U ) are given the
same symbol as their value. For example, the zero tensor field is 0 :U → T q (V ) and is defined by
0( x ) = 0 (45.18)
− A = (−1) A (45.19)
The algebraic structure on the set T_q∞(U) just defined is called a module over the ring F∞(U).
The components of a tensor field A :U → T q (V ) with respect to a chart (U1 , xˆ ) are the N q
scalar fields A_{i₁⋯i_q} : U ∩ U₁ → R defined by

A_{i₁⋯i_q}(x) = A(x)( g_{i₁}(x), ..., g_{i_q}(x) )   (45.20)

for all x ∈ U ∩ U₁. Clearly we can regard tensor fields as multilinear mappings on vector fields
with values as scalar fields. For example, A(g_{i₁}, ..., g_{i_q}) is a scalar field defined by

A(g_{i₁}, ..., g_{i_q})(x) = A(x)( g_{i₁}(x), ..., g_{i_q}(x) )

for all x ∈ U ∩ U₁. In fact we can, and shall, carry over to tensor fields the many algebraic
operations previously applied to tensors. In particular a tensor field A :U → T q (V ) has the
representation
A = A_{i₁⋯i_q} g^{i₁} ⊗ ⋯ ⊗ g^{i_q}   (45.21)
for all x ∈U ∩U1 , where U1 is the coordinate neighborhood for a chart (U1 , xˆ ) . The scalar fields
A_{i₁⋯i_q} are the covariant components of A and under a change of coordinates obey the transformation rule

Ā_{j₁⋯j_q} = (∂x^{i₁}/∂y^{j₁}) ⋯ (∂x^{i_q}/∂y^{j_q}) A_{i₁⋯i_q}   (45.22)

where Ā_{j₁⋯j_q} denote the components relative to the chart (U₂, ŷ).
Equation (45.22) is a relationship among the component fields and holds at all points
x ∈U1 ∩U2 ∩U where the charts involved are (U1 , xˆ ) and (U2 , yˆ ) . We encountered an example of
(45.22) earlier with (44.48). Equation (44.48) shows that the gij are the covariant components of a
tensor field I whose value is the identity or metric tensor, namely
I = g_ij gⁱ ⊗ gʲ = gʲ ⊗ g_j = g_j ⊗ gʲ = gⁱʲ g_i ⊗ g_j   (45.23)
for points x ∈U1 , where the chart in question is (U1 , xˆ ) . Equations (45.23) show that the
components of a constant tensor field are not necessarily constant scalar fields. It is only in
Cartesian coordinates that constant tensor fields have constant components.
Another important tensor field is the one constructed from the positive unit volume tensor E. With respect to an orthonormal basis {i_j} which has positive orientation, E is given by (41.6) of Volume 1. The associated constant tensor field E is defined by

E(x) = E   (45.25)
for all x ∈ E. With respect to a chart (U₁, x̂), it follows from the general formula (42.27) that the covariant and contravariant components of E are

E_{i₁⋯i_N} = √g ε_{i₁⋯i_N}   (45.27)

and

E^{i₁⋯i_N} = ε^{i₁⋯i_N} / √g   (45.28)
An interesting application of the formulas derived thus far is the derivation of an expression
for the differential element of volume in curvilinear coordinates. Given the position vector r
defined by (44.14) and a chart (U1 , xˆ ) , the differential of r can be written
dr = dx = g i ( x )dx i (45.30)
where (44.33) has been used. Given N differentials of r, dr₁, dr₂, ⋯, dr_N, the differential volume element dυ generated by them is defined by

dυ = E(dr₁, dr₂, ..., dr_N)   (45.31)

If we select dr₁ = g₁(x)dx¹, dr₂ = g₂(x)dx², ⋯, dr_N = g_N(x)dxᴺ, we can write (45.31) as

dυ = E( g₁(x), ..., g_N(x) ) dx¹dx² ⋯ dxᴺ = √(g(x)) dx¹dx² ⋯ dxᴺ

Therefore, for the paraboloidal coordinate system of the Section 44 exercises,

dυ = ( (x¹)² + (x²)² ) x¹x² dx¹dx²dx³   (45.36)
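The integrand of (45.36) is just √g for the paraboloidal coordinate system of the Section 44 exercises. The following sketch builds g_ij from (44.49) by central differences and checks that √(det[g_ij]) = ((x¹)² + (x²)²)x¹x²:

```python
import math

def z_of_x(c):   # paraboloidal transformation from the Section 44 exercises
    x1, x2, x3 = c
    return (x1 * x2 * math.cos(x3), x1 * x2 * math.sin(x3),
            0.5 * (x1 ** 2 - x2 ** 2))

def metric(c, h=1e-6):   # g_ij by (44.49), central differences
    def col(j):
        cp = list(c); cm = list(c); cp[j] += h; cm[j] -= h
        return [(a - b) / (2 * h) for a, b in zip(z_of_x(cp), z_of_x(cm))]
    J = [col(j) for j in range(3)]
    return [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

c = (1.3, 0.7, 0.4)
x1, x2, _ = c
sqrt_g = math.sqrt(det3(metric(c)))
assert abs(sqrt_g - (x1 ** 2 + x2 ** 2) * x1 * x2) < 1e-5   # integrand of (45.36)
```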
Exercises
45.1 Let v be a C∞ vector field and f a C∞ function, both defined on an open set U in E. We define v f : U → R by

(v f)(x) = v(x) ⋅ grad f(x)   (45.37)

Show that

v(λf + μg) = λ(v f) + μ(v g)
and
v ( fg ) = ( v f ) g + f (v g)
for all constant functions λ, μ and all C∞ functions f and g. In differential geometry,
an operator on F ∞ (U ) with the above properties is called a derivation. Show that,
conversely, every derivation on F ∞ (U ) corresponds to a unique vector field on U by
(45.37).
45.2 By use of the definition (45.37), the Lie bracket of two vector fields $\mathbf{v} : U \to \mathscr{V}$ and $\mathbf{u} : U \to \mathscr{V}$, written $[\mathbf{u}, \mathbf{v}]$, is a vector field defined by

$$[\mathbf{u}, \mathbf{v}]f = \mathbf{u}(\mathbf{v}f) - \mathbf{v}(\mathbf{u}f) \tag{45.38}$$

for all scalar fields $f \in F^\infty(U)$. Show that $[\mathbf{u}, \mathbf{v}]$ is well defined by verifying that (45.38) defines a derivation on $F^\infty(U)$. Also, show that
(c) Let (U1 , xˆ ) be a chart with natural basis field {g i } . Show that ⎡⎣ g i , g j ⎤⎦ = 0 .
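The derivation property of the bracket can be spot-checked symbolically. In the sketch below (sympy), the particular fields $\mathbf{u}, \mathbf{v}$ and functions $f, g$ are our own arbitrary choices; the definitions follow (45.37) and (45.38):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Sample smooth vector fields (components chosen arbitrarily for the check)
u = [x1**2, sp.sin(x2)]
v = [x2, x1 * x2]

def apply_field(w, f):
    # w f = grad f . w, as in (45.37)
    return w[0] * sp.diff(f, x1) + w[1] * sp.diff(f, x2)

def bracket(a, b, f):
    # [a, b] f = a(b f) - b(a f), as in (45.38)
    return apply_field(a, apply_field(b, f)) - apply_field(b, apply_field(a, f))

f = x1 * x2**2
g = sp.exp(x1)

# Product rule of a derivation: [u, v](fg) = ([u, v] f) g + f ([u, v] g)
lhs = bracket(u, v, f * g)
rhs = bracket(u, v, f) * g + f * bracket(u, v, g)
print(sp.simplify(lhs - rhs))  # 0
```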
$$d\boldsymbol{\sigma} = d\mathbf{r}_1 \times d\mathbf{r}_2$$

Show that
In many applications, the components of interest are not always the components with
respect to the natural basis fields {g i } and {g j } . For definiteness let us call the components of a
tensor field $\mathbf{A} \in T_q^\infty(U)$ as defined by (45.20) the holonomic components of $\mathbf{A}$. In this section, we
shall consider briefly the concept of the anholonomic components of A ; i.e., the components of
A taken with respect to an anholonomic basis of vector fields. The concept of the physical
components of a tensor field is a special case and will also be discussed.
Let $U_1$ be an open set in $\mathscr{E}$ and let $\{\mathbf{e}_a\}$ denote a set of $N$ vector fields on $U_1$ which are linearly independent, i.e., at each $\mathbf{x} \in U_1$, $\{\mathbf{e}_a(\mathbf{x})\}$ is a basis of $\mathscr{V}$. If $\mathbf{A}$ is a tensor field in $T_q^\infty(U)$, where $U_1 \cap U \neq \emptyset$, then by the same type of argument as used in Section 45, we can write

$$\mathbf{A} = A^{b_1 \ldots b_q}\, \mathbf{e}_{b_1} \otimes \cdots \otimes \mathbf{e}_{b_q} \tag{46.2}$$

where $\{\mathbf{e}^a\}$ denotes the reciprocal basis field determined by

$$\mathbf{e}^a(\mathbf{x}) \cdot \mathbf{e}_b(\mathbf{x}) = \delta_b^a \tag{46.3}$$

for all $\mathbf{x} \in U_1$. Equations (46.1) and (46.2) hold on $U \cap U_1$, and the component fields as defined by

$$A_{b_1 \ldots b_q} = \mathbf{A}\left( \mathbf{e}_{b_1}, \ldots, \mathbf{e}_{b_q} \right) \tag{46.4}$$

and

$$A^{b_1 \ldots b_q} = \mathbf{A}\left( \mathbf{e}^{b_1}, \ldots, \mathbf{e}^{b_q} \right) \tag{46.5}$$
Sec. 46 • Anholonomic and Physical Components 333
are scalar fields on $U \cap U_1$. These fields are the anholonomic components of $\mathbf{A}$ when the bases $\{\mathbf{e}_a\}$ and $\{\mathbf{e}^a\}$ are not the natural bases of any coordinate system.
Given a set of N vector fields {e a } as above, one can show that a necessary and sufficient
condition for {e a } to be the natural basis field of some chart is
$$\left[ \mathbf{e}_a, \mathbf{e}_b \right] = \mathbf{0}$$
for all a, b = 1,..., N , where the bracket product is defined in Exercise 45.2. We shall prove this
important result in Section 49. Formulas which generalize (45.22) to anholonomic components can
easily be derived. If {eˆ a } is an anholonomic basis field defined on an open set U2 such that
U1 ∩U2 ≠ ∅ , then we can express each vector field eˆ b in anholonomic component form relative to
the basis {e a } , namely
$$\hat{\mathbf{e}}_b = T_b^{\;a}\, \mathbf{e}_a \tag{46.6}$$

where $T_b^{\;a} = \hat{\mathbf{e}}_b \cdot \mathbf{e}^a$, and

$$\mathbf{e}_a = \hat{T}_a^{\;b}\, \hat{\mathbf{e}}_b \tag{46.7}$$

where $\hat{T}_a^{\;b} = \mathbf{e}_a \cdot \hat{\mathbf{e}}^b$,
334 Chap. 9 • EUCLIDEAN MANIFOLDS
for all x ∈U1 ∩U2 . It follows from (46.4) and (46.7) that
where
Equation (46.10) is the transformation rule for the anholonomic components of A . Of course,
(46.10) is a field equation which holds at every point of U1 ∩U2 ∩U . Similar transformation rules
for the other components of A can easily be derived by the same type of argument used above.
$$\mathbf{g}^a \cdot \mathbf{g}_b = \delta_b^a \tag{46.14}$$
$$\left[ g^{ij}(\mathbf{x}) \right] = \begin{bmatrix} 1/g_{11}(\mathbf{x}) & 0 & \cdots & 0 \\ 0 & 1/g_{22}(\mathbf{x}) & & \vdots \\ \vdots & & \ddots & \\ 0 & 0 & \cdots & 1/g_{NN}(\mathbf{x}) \end{bmatrix} \tag{46.15}$$
$$\mathbf{g}_{\langle i \rangle}(\mathbf{x}) = \frac{\mathbf{g}_i(\mathbf{x})}{\left( g_{ii}(\mathbf{x}) \right)^{1/2}} = \left( g^{ii}(\mathbf{x}) \right)^{1/2} \mathbf{g}_i(\mathbf{x}) \qquad \text{(no sum)} \tag{46.16}$$
$$\left[ \hat{T}_a^{\;b} \right] = \begin{bmatrix} 1/(g_{11})^{1/2} & 0 & \cdots & 0 \\ 0 & 1/(g_{22})^{1/2} & & \vdots \\ \vdots & & \ddots & \\ 0 & 0 & \cdots & 1/(g_{NN})^{1/2} \end{bmatrix} \tag{46.17}$$
By the transformation rule (46.10), the physical components of $\mathbf{A}$ are related to the covariant components of $\mathbf{A}$ by

$$A_{\langle a_1 a_2 \ldots a_q \rangle} \equiv \mathbf{A}\left( \mathbf{g}_{\langle a_1 \rangle}, \mathbf{g}_{\langle a_2 \rangle}, \ldots, \mathbf{g}_{\langle a_q \rangle} \right) = \left( g_{a_1 a_1} \cdots g_{a_q a_q} \right)^{-1/2} A_{a_1 \ldots a_q} \qquad \text{(no sum)} \tag{46.18}$$

Equation (46.18) is a field equation which holds for all $\mathbf{x} \in U$. Since the coordinate system is orthogonal, we can replace (46.18) with several equivalent formulas as follows:
$$\begin{aligned}
A_{\langle a_1 a_2 \ldots a_q \rangle} &= \left( g_{a_1 a_1} \cdots g_{a_q a_q} \right)^{1/2} A^{a_1 \ldots a_q} \\
&= \left( g_{a_1 a_1} \right)^{-1/2} \left( g_{a_2 a_2} \cdots g_{a_q a_q} \right)^{1/2} A_{a_1}{}^{a_2 \ldots a_q} \\
&\;\;\vdots \\
&= \left( g_{a_1 a_1} \cdots g_{a_{q-1} a_{q-1}} \right)^{-1/2} \left( g_{a_q a_q} \right)^{1/2} A_{a_1 \ldots a_{q-1}}{}^{a_q}
\end{aligned} \qquad \text{(no sum)} \tag{46.19}$$
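As a concrete illustration (our own sample numbers, assuming the cylindrical metric $\operatorname{diag}(1, r^2, 1)$), the scalings in (46.19) act on a vector's contravariant components as follows:

```python
import math

# Cylindrical metric at a point with r = x^1 = 2: diag(1, r^2, 1)
r = 2.0
g_diag = [1.0, r**2, 1.0]

# Contravariant components of a sample vector at that point (arbitrary values)
v_contra = [3.0, 0.5, 1.0]

# Physical components: v<a> = (g_aa)^(1/2) v^a  (no sum), the q = 1 case of (46.19)
v_phys = [math.sqrt(g_diag[a]) * v_contra[a] for a in range(3)]

print(v_phys)  # [3.0, 1.0, 1.0] -- only the angular component picks up the factor r
```

The physical components all carry the same physical dimensions, which is the motivation for the construction.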
In mathematical physics, tensor fields often arise naturally in component forms relative to
product bases associated with several bases. For example, if {e a } and {eˆ b } are fields of bases,
possibly anholonomic, then it might be convenient to express a second-order tensor field A as a
field of linear transformations such that
$$\mathbf{A}\mathbf{e}_a = A^{\hat{b}}_{\;\,a}\, \hat{\mathbf{e}}_b, \qquad a = 1, \ldots, N \tag{46.20}$$

or, equivalently,

$$\mathbf{A} = A^{\hat{b}}_{\;\,a}\, \hat{\mathbf{e}}_b \otimes \mathbf{e}^a \tag{46.21}$$

relative to the product basis $\{\hat{\mathbf{e}}_b \otimes \mathbf{e}^a\}$ formed by $\{\hat{\mathbf{e}}_b\}$ and $\{\mathbf{e}^a\}$, the latter being the reciprocal basis of $\{\mathbf{e}_a\}$ as usual. For definiteness, we call $\{\hat{\mathbf{e}}_b \otimes \mathbf{e}^a\}$ a composite product basis associated with the bases $\{\hat{\mathbf{e}}_b\}$ and $\{\mathbf{e}_a\}$. Then the scalar fields $A^{\hat{b}}_{\;\,a}$ defined by (46.20) or (46.21) may be called the composite components of $\mathbf{A}$, and they are given by

$$A^{\hat{b}}_{\;\,a} = \mathbf{A}\left( \hat{\mathbf{e}}^b, \mathbf{e}_a \right) \tag{46.22}$$
etc. Further, the composite components are related to the regular tensor components associated with a single basis field by

where $T_{\hat{b}}^{\;a}$ and $\hat{T}_b^{\;\hat{a}}$ are given by (46.6) and (46.7) as before. In the special case where $\{\mathbf{e}_a\}$ and $\{\hat{\mathbf{e}}_b\}$ are orthogonal but not orthonormal, we define the normalized basis vectors $\mathbf{e}_{\langle a \rangle}$ and $\hat{\mathbf{e}}_{\langle a \rangle}$ as before. Then the composite physical components $A_{\langle \hat{b}, a \rangle}$ of $\mathbf{A}$ are given by

$$A_{\langle \hat{b}, a \rangle} = \mathbf{A}\left( \hat{\mathbf{e}}_{\langle \hat{b} \rangle}, \mathbf{e}_{\langle a \rangle} \right) \tag{46.26}$$

and

$$A_{\langle \hat{b}, a \rangle} = \left( \hat{g}_{\hat{b}\hat{b}} \right)^{1/2} \left( g_{aa} \right)^{-1/2} A^{\hat{b}}_{\;\,a} \qquad \text{(no sum)} \tag{46.27}$$

Clearly the concepts of composite components and composite physical components can be defined for higher-order tensors also.
Exercises
46.2 Consider the helical basis $\{\mathbf{e}_a\}$ defined by

$$\begin{aligned}
\mathbf{e}_1 &= \mathbf{g}_{\langle 1 \rangle} \\
\mathbf{e}_2 &= (\cos\alpha)\, \mathbf{g}_{\langle 2 \rangle} + (\sin\alpha)\, \mathbf{g}_{\langle 3 \rangle} \\
\mathbf{e}_3 &= -(\sin\alpha)\, \mathbf{g}_{\langle 2 \rangle} + (\cos\alpha)\, \mathbf{g}_{\langle 3 \rangle}
\end{aligned} \tag{46.28}$$

where $\alpha$ is a constant called the pitch, and where $\{\mathbf{g}_{\langle i \rangle}\}$ is the orthonormal basis associated with the natural basis of the cylindrical system. Show that $\{\mathbf{e}_a\}$ is anholonomic. Determine the anholonomic components of the tensor field $\mathbf{A}$ in the preceding exercise relative to the helical basis.
46.3 Determine the composite physical components of A relative to the composite product basis
$\{\mathbf{e}_a \otimes \mathbf{g}^b\}$.
Sec. 47 • Christoffel Symbols, Covariant Differentiation 339
In this section we shall investigate the problem of representing the gradient of various
tensor fields in components relative to the natural basis of arbitrary coordinate systems. We
consider first the simple case of representing the tangent of a smooth curve in E . Let
λ : ( a, b ) → E be a smooth curve passing through a point x , say x = λ (c) . Then the tangent vector
of λ at x is defined by
$$\dot{\boldsymbol{\lambda}}_{\mathbf{x}} = \left. \frac{d\lambda(t)}{dt} \right|_{t=c} \tag{47.1}$$

Given the chart $(U, \hat{x})$ covering $\mathbf{x}$, we can project the vector equation (47.1) into the natural basis $\{\mathbf{g}_i\}$ of $\hat{x}$. First, the coordinates of the curve $\lambda$ are given by

$$\lambda^j(t) = \hat{x}^j\left( \lambda(t) \right), \qquad j = 1, \ldots, N \tag{47.2}$$

so that

$$\dot{\boldsymbol{\lambda}}_{\mathbf{x}} = \left. \frac{\partial \mathbf{x}}{\partial x^j} \frac{d\lambda^j}{dt} \right|_{t=c} \tag{47.3}$$

or, equivalently,

$$\dot{\boldsymbol{\lambda}}_{\mathbf{x}} = \left. \mathbf{g}_j(\mathbf{x})\, \frac{d\lambda^j}{dt} \right|_{t=c} \tag{47.4}$$
Thus, the components of λ relative to {g j } are simply the derivatives of the coordinate
representations of λ in x̂ . In fact (44.33) can be regarded as a special case of (47.3) when λ
coincides with the ith coordinate curve of x̂ .
$$\upsilon^i = d\lambda^i/dt \tag{47.6}$$

$$\lambda(t) = \mathbf{x} + t\mathbf{v} \tag{47.7}$$

so that

$$\dot{\boldsymbol{\lambda}}(t) = \mathbf{v} \tag{47.8}$$
$$\operatorname{grad} f(\mathbf{x}) \cdot \mathbf{v} = \left. \frac{d}{d\tau} f(\mathbf{x} + \tau\mathbf{v}) \right|_{\tau=0} \tag{47.10}$$
for all $\mathbf{v} \in \mathscr{V}$. As before, we choose a chart near $\mathbf{x}$; then $f$ can be represented by the function

$$f(\mathbf{x}) = f_{\hat{x}}\left( x^1, \ldots, x^N \right)$$

For definiteness, we call the function $f_{\hat{x}}$ the coordinate representation of $f$. From (47.9), we see that

$$\left. \frac{d}{d\tau} f(\mathbf{x} + \tau\mathbf{v}) \right|_{\tau=0} = \frac{\partial f(\mathbf{x})}{\partial x^j}\, \upsilon^j \tag{47.13}$$

Comparing (47.13) with (47.10), we obtain

$$\operatorname{grad} f = \frac{\partial f}{\partial x^j}\, \mathbf{g}^j \tag{47.14}$$
where g j is a natural basis vector associated with the coordinate chart as defined by (44.31). In
fact, that equation can now be regarded as a special case of (47.14) where f reduces to the
coordinate function xˆ i .
Having considered the tangent of a curve and the gradient of a function, we now turn to the
problem of representing the gradient of a tensor field in general. Let A ∈ Tq∞ (U ) be such a field
and suppose that x is an arbitrary point in its domain U . We choose an arbitrary chart x̂ covering
$\mathbf{x}$. Then the formula generalizing (47.4) and (47.14) is

$$\operatorname{grad} \mathbf{A}(\mathbf{x}) = \frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^j} \otimes \mathbf{g}^j(\mathbf{x}) \tag{47.15}$$

where the quantity $\partial \mathbf{A}(\mathbf{x})/\partial x^j$ on the right-hand side is the partial derivative of the coordinate representation of $\mathbf{A}$, i.e.,

$$\frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^j} = \frac{\partial}{\partial x^j} \mathbf{A}_{\hat{x}}\left( x^1, \ldots, x^N \right) \tag{47.16}$$
From (47.15) we see that $\operatorname{grad} \mathbf{A}$ is a tensor field of order $q + 1$ on $U$, i.e., $\operatorname{grad} \mathbf{A} \in T_{q+1}^\infty(U)$.
Moreover,

$$\left( \operatorname{grad} \mathbf{A}(\mathbf{x}) \right) \mathbf{v} = \left. \frac{d}{d\tau} \mathbf{A}(\mathbf{x} + \tau\mathbf{v}) \right|_{\tau=0} \tag{47.17}$$
for all $\mathbf{v} \in \mathscr{V}$. By using exactly the same argument from (47.10) to (47.13), we now have

$$\left( \operatorname{grad} \mathbf{A}(\mathbf{x}) \right) \mathbf{v} = \frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^j}\, \upsilon^j \tag{47.18}$$

Since this equation must hold for all $\mathbf{v} \in \mathscr{V}$, we may take $\mathbf{v} = \mathbf{g}_k(\mathbf{x})$ and find

$$\left( \operatorname{grad} \mathbf{A}(\mathbf{x}) \right) \mathbf{g}_k(\mathbf{x}) = \frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^k} \tag{47.19}$$
Since $\operatorname{grad} \mathbf{A}(\mathbf{x}) \in T_{q+1}(\mathscr{V})$, it can be represented by its component form relative to the natural basis, with components

$$\left( \operatorname{grad} \mathbf{A}(\mathbf{x}) \right)^{i_1 \ldots i_q}{}_{j} = \frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^j}\left( \mathbf{g}^{i_1}(\mathbf{x}), \ldots, \mathbf{g}^{i_q}(\mathbf{x}) \right) \tag{47.21}$$
for all $j = 1, \ldots, N$. In applications it is convenient to express the components of $\operatorname{grad} \mathbf{A}(\mathbf{x})$ in terms of the components of $\mathbf{A}(\mathbf{x})$ relative to the same coordinate chart. If we write the component form of $\mathbf{A}(\mathbf{x})$ as usual by

$$\mathbf{A}(\mathbf{x}) = A^{i_1 \ldots i_q}(\mathbf{x})\, \mathbf{g}_{i_1}(\mathbf{x}) \otimes \cdots \otimes \mathbf{g}_{i_q}(\mathbf{x}) \tag{47.22}$$

then

$$\frac{\partial \mathbf{A}(\mathbf{x})}{\partial x^j} = \frac{\partial A^{i_1 \ldots i_q}(\mathbf{x})}{\partial x^j}\, \mathbf{g}_{i_1} \otimes \cdots \otimes \mathbf{g}_{i_q} + A^{i_1 \ldots i_q}(\mathbf{x}) \left[ \frac{\partial \mathbf{g}_{i_1}(\mathbf{x})}{\partial x^j} \otimes \cdots \otimes \mathbf{g}_{i_q}(\mathbf{x}) + \cdots + \mathbf{g}_{i_1}(\mathbf{x}) \otimes \cdots \otimes \frac{\partial \mathbf{g}_{i_q}(\mathbf{x})}{\partial x^j} \right] \tag{47.23}$$
From this representation we see that it is important to express the gradient of the basis vector $\mathbf{g}_i$ in component form first, since from (47.21) for the case $\mathbf{A} = \mathbf{g}_i$ we have

$$\frac{\partial \mathbf{g}_i(\mathbf{x})}{\partial x^j} = \left( \operatorname{grad} \mathbf{g}_i(\mathbf{x}) \right)^k{}_j\ \mathbf{g}_k(\mathbf{x}) \tag{47.24}$$

or, equivalently,

$$\frac{\partial \mathbf{g}_i(\mathbf{x})}{\partial x^j} = \begin{Bmatrix} k \\ ij \end{Bmatrix} \mathbf{g}_k(\mathbf{x}) \tag{47.25}$$
We call $\begin{Bmatrix} k \\ ij \end{Bmatrix}$ the Christoffel symbol associated with the chart $\hat{x}$. Notice that, in general, $\begin{Bmatrix} k \\ ij \end{Bmatrix}$ is a function of $\mathbf{x}$, but we have suppressed the argument $\mathbf{x}$ in the notation. More accurately, (47.25) should be replaced by the field equation

$$\frac{\partial \mathbf{g}_i}{\partial x^j} = \begin{Bmatrix} k \\ ij \end{Bmatrix} \mathbf{g}_k \tag{47.26}$$
which is valid at each point x in the domain of the chart x̂ , for all i, j = 1,..., N . We shall consider
some preliminary results about the Christoffel symbols first.
From (47.26),

$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = \frac{\partial \mathbf{g}_i}{\partial x^j} \cdot \mathbf{g}^k \tag{47.27}$$

Since $\mathbf{g}_i = \partial \mathbf{x}/\partial x^i$, this can be rewritten as

$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = \frac{\partial^2 \mathbf{x}}{\partial x^i\, \partial x^j} \cdot \mathbf{g}^k \tag{47.28}$$
It follows from (47.28) that the Christoffel symbols are symmetric in the pair (ij ) , namely
$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = \begin{Bmatrix} k \\ ji \end{Bmatrix} \tag{47.29}$$
$$g_{ij} = \mathbf{g}_i \cdot \mathbf{g}_j \tag{47.30}$$
Taking the partial derivative of (47.30) with respect to x k and using the component form (47.26),
we get
$$\frac{\partial g_{ij}}{\partial x^k} = \frac{\partial \mathbf{g}_i}{\partial x^k} \cdot \mathbf{g}_j + \mathbf{g}_i \cdot \frac{\partial \mathbf{g}_j}{\partial x^k} \tag{47.31}$$
When the symmetry property (47.29) is used, equation (47.31) can be solved for the Christoffel
symbols:
$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = \frac{1}{2}\, g^{kl} \left( \frac{\partial g_{il}}{\partial x^j} + \frac{\partial g_{jl}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l} \right) \tag{47.32}$$
where

$$g^{kl} = \mathbf{g}^k \cdot \mathbf{g}^l \tag{47.33}$$
The formula (47.32) is most convenient for the calculation of the Christoffel symbols in any given
chart.
As an example, we now compute the Christoffel symbols for the cylindrical coordinate
system in a three-dimensional space. In Section 44 we have shown that the components of the
metric tensor are given by (44.50) and (44.51) relative to this coordinate system. Substituting those
components into (47.32), we obtain
$$\begin{Bmatrix} 1 \\ 22 \end{Bmatrix} = -x^1, \qquad \begin{Bmatrix} 2 \\ 12 \end{Bmatrix} = \begin{Bmatrix} 2 \\ 21 \end{Bmatrix} = \frac{1}{x^1} \tag{47.34}$$
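Formula (47.32) is easy to mechanize. The sketch below (sympy; the cylindrical metric $\operatorname{diag}(1, (x^1)^2, 1)$ follows the Section 44 results cited in the text, while the function names are our own) reproduces (47.34):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
X = [x1, x2, x3]

# Metric components of the cylindrical system: diag(1, (x^1)^2, 1)
g = sp.diag(1, x1**2, 1)
g_inv = g.inv()

def christoffel(k, i, j):
    # Formula (47.32): {k ij} = (1/2) g^{kl} (g_il,j + g_jl,i - g_ij,l)
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[k, l] *
        (sp.diff(g[i, l], X[j]) + sp.diff(g[j, l], X[i]) - sp.diff(g[i, j], X[l]))
        for l in range(3)))

print(christoffel(0, 1, 1))  # -x1  : {1 22} = -x^1
print(christoffel(1, 0, 1))  # 1/x1 : {2 12} = {2 21} = 1/x^1
```

Indices here are zero-based, so `christoffel(0, 1, 1)` is the symbol written $\begin{Bmatrix} 1 \\ 22 \end{Bmatrix}$ in the text.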
Given two charts x̂ and ŷ with natural bases {g i } and {hi } , respectively, the
transformation rule for the Christoffel symbols can be derived in the following way:
$$\begin{aligned}
\begin{Bmatrix} k \\ ij \end{Bmatrix} &= \mathbf{g}^k \cdot \frac{\partial \mathbf{g}_i}{\partial x^j} = \frac{\partial x^k}{\partial y^l}\, \mathbf{h}^l \cdot \frac{\partial}{\partial x^j}\!\left( \frac{\partial y^s}{\partial x^i}\, \mathbf{h}_s \right) \\
&= \frac{\partial x^k}{\partial y^l}\, \mathbf{h}^l \cdot \left( \frac{\partial^2 y^s}{\partial x^j\, \partial x^i}\, \mathbf{h}_s + \frac{\partial y^s}{\partial x^i} \frac{\partial \mathbf{h}_s}{\partial y^t} \frac{\partial y^t}{\partial x^j} \right) \\
&= \frac{\partial x^k}{\partial y^l} \frac{\partial^2 y^l}{\partial x^j\, \partial x^i} + \frac{\partial x^k}{\partial y^l} \frac{\partial y^s}{\partial x^i} \frac{\partial y^t}{\partial x^j}\ \mathbf{h}^l \cdot \frac{\partial \mathbf{h}_s}{\partial y^t} \\
&= \frac{\partial x^k}{\partial y^l} \frac{\partial^2 y^l}{\partial x^j\, \partial x^i} + \frac{\partial x^k}{\partial y^l} \frac{\partial y^s}{\partial x^i} \frac{\partial y^t}{\partial x^j} \begin{Bmatrix} l \\ st \end{Bmatrix}
\end{aligned} \tag{47.35}$$
where $\begin{Bmatrix} k \\ ij \end{Bmatrix}$ and $\begin{Bmatrix} l \\ st \end{Bmatrix}$ denote the Christoffel symbols associated with $\hat{x}$ and $\hat{y}$, respectively. Since (47.35) is different from the tensor transformation rule (45.22), it follows that the Christoffel symbols are not the components of a particular tensor field. In fact, if $\hat{y}$ is a Cartesian chart, then $\begin{Bmatrix} l \\ st \end{Bmatrix}$ vanishes since the natural basis vectors $\mathbf{h}_s$ are constant. In that case (47.35) reduces to

$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = \frac{\partial x^k}{\partial y^l} \frac{\partial^2 y^l}{\partial x^j\, \partial x^i} \tag{47.36}$$
and $\begin{Bmatrix} k \\ ij \end{Bmatrix}$ need not vanish unless $\hat{x}$ is also a Cartesian chart. The formula (47.36) can also be used to calculate the Christoffel symbols when the coordinate transformation from $\hat{x}$ to a Cartesian system $\hat{y}$ is given.
Having presented some basic properties of the Christoffel symbols, we now return to the general formula (47.23) for the components of the gradient of a tensor field. Substituting (47.26) into (47.23) yields

$$A^{i_1 \ldots i_q}{}_{,j} \equiv \left( \operatorname{grad} \mathbf{A} \right)^{i_1 \ldots i_q}{}_{j} = \frac{\partial A^{i_1 \ldots i_q}}{\partial x^j} + A^{k i_2 \ldots i_q} \begin{Bmatrix} i_1 \\ kj \end{Bmatrix} + \cdots + A^{i_1 \ldots i_{q-1} k} \begin{Bmatrix} i_q \\ kj \end{Bmatrix} \tag{47.38}$$
This particular formula gives the components $A^{i_1 \ldots i_q}{}_{,j}$ of the gradient of $\mathbf{A}$ in terms of the contravariant components $A^{i_1 \ldots i_q}$ of $\mathbf{A}$. If the mixed components of $\mathbf{A}$ are used, the formula becomes
$$\begin{aligned}
A^{i_1 \ldots i_r}{}_{j_1 \ldots j_s, k} = \frac{\partial A^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}}{\partial x^k} &+ A^{l i_2 \ldots i_r}{}_{j_1 \ldots j_s} \begin{Bmatrix} i_1 \\ lk \end{Bmatrix} + \cdots + A^{i_1 \ldots i_{r-1} l}{}_{j_1 \ldots j_s} \begin{Bmatrix} i_r \\ lk \end{Bmatrix} \\
&- A^{i_1 \ldots i_r}{}_{l j_2 \ldots j_s} \begin{Bmatrix} l \\ j_1 k \end{Bmatrix} - \cdots - A^{i_1 \ldots i_r}{}_{j_1 \ldots j_{s-1} l} \begin{Bmatrix} l \\ j_s k \end{Bmatrix}
\end{aligned} \tag{47.39}$$
We leave the proof of this general formula as an exercise. From (47.39), if $\mathbf{A} \in T_q^\infty(U)$, where $q = r + s$, then $\operatorname{grad} \mathbf{A} \in T_{q+1}^\infty(U)$. Further, if the coordinate system is Cartesian, then $A^{i_1 \ldots i_r}{}_{j_1 \ldots j_s, k}$ reduces to the ordinary partial derivative of $A^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}$ with respect to $x^k$.
Some special cases of (47.39) should be noted here. First, since the metric tensor is a
constant second-order tensor field, we have
$$g_{ij,k} = g^{ij}{}_{,k} = \delta^i{}_{j,k} = 0 \tag{47.40}$$
for all $i, j, k = 1, \ldots, N$. In fact, (47.40) is equivalent to (47.31), which we have used to obtain the
formula (47.32) for the Christoffel symbols. An important consequence of (47.40) is that the
operations of raising and lowering of indices commute with the operation of gradient or covariant
differentiation.
Another constant tensor field on $\mathscr{E}$ is the tensor field $\mathbf{E}$ defined by (45.26). While the sign of $\mathbf{E}$ depends on the orientation, we always have

$$E_{i_1 \ldots i_N, k} = 0 \tag{47.41}$$

If we substitute (45.27) and (45.28) into (47.41), we can rewrite the result in the form

$$\frac{1}{\sqrt{g}} \frac{\partial \sqrt{g}}{\partial x^i} = \begin{Bmatrix} k \\ ik \end{Bmatrix} \tag{47.42}$$
Similarly,

$$\delta^{i_1 \ldots i_r}{}_{j_1 \ldots j_r, k} = 0 \tag{47.43}$$
Some classical differential operators can be derived from the gradient. First, if A is a
tensor field of order q ≥ 1 , then the divergence of A is defined by
$$\operatorname{div} \mathbf{A} = \mathbf{C}_{q, q+1}\left( \operatorname{grad} \mathbf{A} \right) \tag{47.44}$$

or, in components,

$$\operatorname{div} \mathbf{A} = A^{i_1 \ldots i_{q-1} k}{}_{,k}\ \mathbf{g}_{i_1} \otimes \cdots \otimes \mathbf{g}_{i_{q-1}} \tag{47.45}$$
so that $\operatorname{div} \mathbf{A}$ is a tensor field of order $q - 1$. In particular, for a vector field $\mathbf{v}$, (47.44) reduces to

$$\operatorname{div} \mathbf{v} = \upsilon^k{}_{,k} = \frac{1}{\sqrt{g}} \frac{\partial \left( \sqrt{g}\, \upsilon^i \right)}{\partial x^i} \tag{47.46}$$

where (47.42) has been used.
This result is useful since it does not depend on the Christoffel symbols explicitly.
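The equivalence of the two forms in (47.46) can be verified symbolically. In this sketch (sympy), the vector field is our own arbitrary choice, while $\sqrt{g} = x^1$ and the contracted Christoffel symbol $\begin{Bmatrix} k \\ 1k \end{Bmatrix} = 1/x^1$ come from the cylindrical results quoted in the text:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
X = [x1, x2, x3]

# Cylindrical data: sqrt(g) = x^1; contracted symbol {k ik} is 1/x^1 for i = 1, zero otherwise
sqrt_g = x1

# An arbitrary smooth vector field in contravariant components (our own choice)
v = [x1 * sp.sin(x2), sp.cos(x2) / x1, x1 * x3]

# Covariant-derivative form: div v = dv^i/dx^i + v^k {i ki}
div_cov = sum(sp.diff(v[i], X[i]) for i in range(3)) + v[0] / x1

# Divergence formula (47.46): div v = (1/sqrt(g)) d(sqrt(g) v^i)/dx^i
div_46 = sum(sp.diff(sqrt_g * v[i], X[i]) for i in range(3)) / sqrt_g

print(sp.simplify(div_cov - div_46))  # 0
```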
The Laplacian of a tensor field $\mathbf{A}$ of order $q$ is a tensor field of the same order defined by
$$\Delta \mathbf{A} = \operatorname{div}\left( \operatorname{grad} \mathbf{A} \right) \tag{47.47}$$

In components,

$$\Delta \mathbf{A} = g^{kl}\, A^{i_1 \ldots i_q}{}_{,kl}\ \mathbf{g}_{i_1} \otimes \cdots \otimes \mathbf{g}_{i_q} \tag{47.48}$$

In particular, for a scalar field $f$,

$$\Delta f = g^{kl}\, f_{,kl} = \frac{1}{\sqrt{g}} \frac{\partial}{\partial x^l} \left( \sqrt{g}\, g^{kl}\, \frac{\partial f}{\partial x^k} \right) \tag{47.49}$$
where (47.42), (47.14), and (47.39) have been used. Like (47.46), the formula (47.49) does not depend explicitly on the Christoffel symbols. In (47.48), $A^{i_1 \ldots i_q}{}_{,kl}$ denotes the components of the second gradient $\operatorname{grad}\left( \operatorname{grad} \mathbf{A} \right)$ of $\mathbf{A}$. The reader will verify easily that $A^{i_1 \ldots i_q}{}_{,kl}$, like the ordinary second partial derivative, is symmetric in the pair $(k, l)$. Indeed, if the coordinate system is Cartesian, then $A^{i_1 \ldots i_q}{}_{,kl}$ reduces to $\partial^2 A^{i_1 \ldots i_q} / \partial x^k\, \partial x^l$.
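Formula (47.49) can be checked against the Cartesian Laplacian in a concrete chart. The sketch below (sympy; plane polar coordinates and a sample scalar field of our own choosing) confirms that the two computations agree:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y')

# A sample scalar field, expressed in both charts (our own choice)
F = x**2 * y                                 # Cartesian representation
f = (r * sp.cos(th))**2 * (r * sp.sin(th))   # the same field in polar coordinates

# Polar metric: g = diag(1, r^2), sqrt(g) = r, g^{11} = 1, g^{22} = 1/r^2
# Formula (47.49): Delta f = (1/sqrt g) d/dx^l ( sqrt g g^{kl} df/dx^k )
lap_polar = (sp.diff(r * sp.diff(f, r), r) + sp.diff(sp.diff(f, th) / r, th)) / r

# Cartesian Laplacian of F, rewritten in polar coordinates for comparison
lap_cart = (sp.diff(F, x, 2) + sp.diff(F, y, 2)).subs({x: r*sp.cos(th), y: r*sp.sin(th)})

print(sp.simplify(lap_polar - lap_cart))  # 0
```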
Finally, the classical curl operator can be defined in the following way. If v is a vector
field, then curl v is a skew-symmetric second-order tensor field defined by
$$\operatorname{curl} \mathbf{v} = \frac{1}{2} \left( \upsilon_{i,j} - \upsilon_{j,i} \right) \mathbf{g}^i \otimes \mathbf{g}^j \tag{47.51}$$

Since the Christoffel symbols are symmetric in their lower indices, the covariant derivatives here can be replaced by partial derivatives:

$$\operatorname{curl} \mathbf{v} = \frac{1}{2} \left( \frac{\partial \upsilon_i}{\partial x^j} - \frac{\partial \upsilon_j}{\partial x^i} \right) \mathbf{g}^i \otimes \mathbf{g}^j \tag{47.52}$$
which no longer depends on the Christoffel symbols. We shall generalize the curl operator to
arbitrary skew-symmetric tensor fields in the next chapter.
Exercises
47.1 Show that in spherical coordinates on a three-dimensional Euclidean manifold the nonzero Christoffel symbols are

$$\begin{gathered}
\begin{Bmatrix} 2 \\ 21 \end{Bmatrix} = \begin{Bmatrix} 2 \\ 12 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 31 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 13 \end{Bmatrix} = \frac{1}{x^1} \\
\begin{Bmatrix} 1 \\ 22 \end{Bmatrix} = -x^1, \qquad \begin{Bmatrix} 1 \\ 33 \end{Bmatrix} = -x^1 \left( \sin x^2 \right)^2 \\
\begin{Bmatrix} 3 \\ 32 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 23 \end{Bmatrix} = \cot x^2, \qquad \begin{Bmatrix} 2 \\ 33 \end{Bmatrix} = -\sin x^2 \cos x^2
\end{gathered}$$
47.2 On an oriented three-dimensional Euclidean manifold the curl of a vector field can be regarded as a vector field by

$$\operatorname{curl} \mathbf{v} \equiv -E^{ijk}\, \upsilon_{j,k}\, \mathbf{g}_i = -E^{ijk}\, \frac{\partial \upsilon_j}{\partial x^k}\, \mathbf{g}_i \tag{47.53}$$

where $E^{ijk}$ denotes the components of the positive volume tensor $\mathbf{E}$. Show that $\operatorname{curl}\left( \operatorname{curl} \mathbf{v} \right) = \operatorname{grad}\left( \operatorname{div} \mathbf{v} \right) - \operatorname{div}\left( \operatorname{grad} \mathbf{v} \right)$. Also show that
$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = -\frac{\partial \mathbf{g}^k}{\partial x^j} \cdot \mathbf{g}_i \tag{47.56}$$

Therefore,

$$\frac{\partial \mathbf{g}^k}{\partial x^j} = -\begin{Bmatrix} k \\ ij \end{Bmatrix} \mathbf{g}^i \tag{47.57}$$

and

$$\operatorname{div} \mathbf{g}_j = \frac{1}{\sqrt{g}} \frac{\partial \sqrt{g}}{\partial x^j} \tag{47.58}$$
For an orthogonal coordinate system,

$$\begin{Bmatrix} k \\ ij \end{Bmatrix} = 0, \qquad i \neq j,\ i \neq k,\ j \neq k$$

$$\begin{Bmatrix} j \\ ii \end{Bmatrix} = -\frac{1}{2 g_{jj}} \frac{\partial g_{ii}}{\partial x^j} \qquad \text{if } i \neq j$$

$$\begin{Bmatrix} i \\ ij \end{Bmatrix} = \begin{Bmatrix} i \\ ji \end{Bmatrix} = \frac{1}{2 g_{ii}} \frac{\partial g_{ii}}{\partial x^j} \qquad \text{if } i \neq j$$

$$\begin{Bmatrix} i \\ ii \end{Bmatrix} = \frac{1}{2 g_{ii}} \frac{\partial g_{ii}}{\partial x^i}$$

(no sums).
$$\frac{\partial}{\partial x^j} \begin{Bmatrix} k \\ il \end{Bmatrix} - \frac{\partial}{\partial x^l} \begin{Bmatrix} k \\ ij \end{Bmatrix} + \begin{Bmatrix} t \\ il \end{Bmatrix} \begin{Bmatrix} k \\ tj \end{Bmatrix} - \begin{Bmatrix} t \\ ij \end{Bmatrix} \begin{Bmatrix} k \\ tl \end{Bmatrix} = 0$$
The quantity on the left-hand side of this equation is a component of a fourth-order tensor $\mathbf{R}$, called the curvature tensor, which is zero for any Euclidean manifold.²

²The Maple computer program contains a package, tensor, that will produce Christoffel symbols and other important tensor quantities associated with various coordinate systems.
Sec. 48 • Covariant Derivatives along Curves 353
In the preceding section we have considered covariant differentiation of tensor fields which
are defined on open submanifolds in E . In applications, however, we often encounter vector or
tensor fields defined only on some smooth curve in $\mathscr{E}$. For example, if $\lambda : (a, b) \to \mathscr{E}$ is a smooth curve, then the tangent vector $\dot{\boldsymbol{\lambda}}$ is a vector field defined on the curve $\lambda$. In this section we shall consider the
problem of representing the gradients of arbitrary tensor fields defined on smooth curves in a
Euclidean space.
Given any smooth curve λ : (a, b) → E and a field A : (a , b) → T q (V ) we can regard the
value A(t ) as a tensor of order q at λ (t ) . Then the gradient or covariant derivative of A along λ
is defined by
$$\frac{d\mathbf{A}(t)}{dt} \equiv \lim_{\Delta t \to 0} \frac{\mathbf{A}(t + \Delta t) - \mathbf{A}(t)}{\Delta t} \tag{48.1}$$
for all $t \in (a, b)$. If the limit on the right-hand side of (48.1) exists, then $d\mathbf{A}(t)/dt$ is itself also a tensor field of order $q$ on $\lambda$. Hence, we can define the second gradient $d^2\mathbf{A}(t)/dt^2$ by replacing $\mathbf{A}$ by $d\mathbf{A}(t)/dt$ in (48.1). Higher gradients of $\mathbf{A}$ are defined similarly. If all gradients of $\mathbf{A}$ exist, then $\mathbf{A}$ is $C^\infty$-smooth on $\lambda$. We are interested in representing the gradients of $\mathbf{A}$ in component form.
Relative to a chart $\hat{x}$, we can write

$$\mathbf{A}(t) = A^{i_1 \ldots i_q}(t)\, \mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}_{i_q}(\lambda(t)) \tag{48.2}$$
where the product basis is that of $\hat{x}$ at $\lambda(t)$, the point where $\mathbf{A}(t)$ is defined. Differentiating (48.2) with respect to $t$, we obtain

$$\begin{aligned}
\frac{d\mathbf{A}(t)}{dt} = \frac{dA^{i_1 \ldots i_q}(t)}{dt}\ &\mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}_{i_q}(\lambda(t)) \\
+\, A^{i_1 \ldots i_q}(t) &\left[ \frac{d\mathbf{g}_{i_1}(\lambda(t))}{dt} \otimes \cdots \otimes \mathbf{g}_{i_q}(\lambda(t)) + \cdots + \mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \frac{d\mathbf{g}_{i_q}(\lambda(t))}{dt} \right]
\end{aligned} \tag{48.3}$$

By the chain rule,

$$\frac{d\mathbf{g}_i(\lambda(t))}{dt} = \frac{\partial \mathbf{g}_i(\lambda(t))}{\partial x^j} \frac{d\lambda^j(t)}{dt} \tag{48.4}$$
where (47.5) and (47.6) have been used. In the preceding section we have represented the partial
derivative of g i by the component form (47.26). Hence we can rewrite (48.4) as
$$\frac{d\mathbf{g}_i(\lambda(t))}{dt} = \begin{Bmatrix} k \\ ij \end{Bmatrix} \frac{d\lambda^j(t)}{dt}\, \mathbf{g}_k(\lambda(t)) \tag{48.5}$$
Substituting (48.5) into (48.3), we obtain

$$\begin{aligned}
\frac{d\mathbf{A}(t)}{dt} = \left\{ \frac{dA^{i_1 \ldots i_q}(t)}{dt} + \left[ A^{k i_2 \ldots i_q}(t) \begin{Bmatrix} i_1 \\ kj \end{Bmatrix} + \cdots + A^{i_1 \ldots i_{q-1} k}(t) \begin{Bmatrix} i_q \\ kj \end{Bmatrix} \right] \frac{d\lambda^j(t)}{dt} \right\} \\
\times\ \mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}_{i_q}(\lambda(t))
\end{aligned} \tag{48.6}$$
where the Christoffel symbols are evaluated at the position $\lambda(t)$. The representation (48.6) gives the contravariant components $\left( d\mathbf{A}(t)/dt \right)^{i_1 \ldots i_q}$ of $d\mathbf{A}(t)/dt$ in terms of the contravariant components $A^{i_1 \ldots i_q}(t)$ of $\mathbf{A}(t)$ relative to the same coordinate chart $\hat{x}$. If the mixed components are used, the representation becomes

$$\begin{aligned}
\frac{d\mathbf{A}(t)}{dt} = \left\{ \frac{dA^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}(t)}{dt} + \left[ A^{k i_2 \ldots i_r}{}_{j_1 \ldots j_s}(t) \begin{Bmatrix} i_1 \\ kl \end{Bmatrix} + \cdots + A^{i_1 \ldots i_{r-1} k}{}_{j_1 \ldots j_s}(t) \begin{Bmatrix} i_r \\ kl \end{Bmatrix} \right. \right. \\
\left. \left. -\, A^{i_1 \ldots i_r}{}_{k j_2 \ldots j_s}(t) \begin{Bmatrix} k \\ j_1 l \end{Bmatrix} - \cdots - A^{i_1 \ldots i_r}{}_{j_1 \ldots j_{s-1} k}(t) \begin{Bmatrix} k \\ j_s l \end{Bmatrix} \right] \frac{d\lambda^l(t)}{dt} \right\} \\
\times\ \mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}_{i_r}(\lambda(t)) \otimes \mathbf{g}^{j_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}^{j_s}(\lambda(t))
\end{aligned} \tag{48.7}$$
We leave the proof of this general formula as an exercise. In view of the representations (48.6) and (48.7), we see that it is important to distinguish the notation $\left( d\mathbf{A}/dt \right)^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}$, which denotes a component of the derivative of $\mathbf{A}$, from the notation $dA^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}/dt$, which denotes the derivative of a component of $\mathbf{A}$. For this reason we shall denote the former by the new notation $DA^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}/Dt$. For example, for a vector field $\mathbf{v}$ on $\lambda$,

$$\frac{d\mathbf{v}(t)}{dt} = \left\{ \frac{d\upsilon^i(t)}{dt} + \upsilon^k(t) \begin{Bmatrix} i \\ kj \end{Bmatrix} \frac{d\lambda^j(t)}{dt} \right\} \mathbf{g}_i(\lambda(t)) \tag{48.8}$$

or, equivalently,

$$\frac{D\upsilon^i(t)}{Dt} = \frac{d\upsilon^i(t)}{dt} + \upsilon^k(t) \begin{Bmatrix} i \\ kj \end{Bmatrix} \frac{d\lambda^j(t)}{dt} \tag{48.9}$$
In particular, when $\mathbf{v}$ is the $i$th basis vector $\mathbf{g}_i$ and $\lambda$ is the $j$th coordinate curve, (48.8) reduces to

$$\frac{\partial \mathbf{g}_i}{\partial x^j} = \begin{Bmatrix} k \\ ij \end{Bmatrix} \mathbf{g}_k$$
$$\frac{d^2\lambda(t)}{dt^2} = \left\{ \frac{d^2\lambda^i(t)}{dt^2} + \frac{d\lambda^k(t)}{dt} \begin{Bmatrix} i \\ kj \end{Bmatrix} \frac{d\lambda^j(t)}{dt} \right\} \mathbf{g}_i(\lambda(t)) \tag{48.10}$$

where (47.4) has been used. In particular, if $\lambda$ is a straight line with homogeneous parameter, i.e., if $\dot{\boldsymbol{\lambda}} = \mathbf{v} = \text{const}$, then

$$\frac{d^2\lambda^i(t)}{dt^2} + \frac{d\lambda^k(t)}{dt} \frac{d\lambda^j(t)}{dt} \begin{Bmatrix} i \\ kj \end{Bmatrix} = 0, \qquad i = 1, \ldots, N \tag{48.11}$$
This equation shows that, for the straight line λ (t ) = x + tv given by (47.7), we can sharpen the
result (47.9) to
$$\hat{x}\left( \lambda(t) \right) = \left( x^1 + \upsilon^1 t - \frac{1}{2}\, \upsilon^k \upsilon^j \begin{Bmatrix} 1 \\ kj \end{Bmatrix} t^2, \ldots, x^N + \upsilon^N t - \frac{1}{2}\, \upsilon^k \upsilon^j \begin{Bmatrix} N \\ kj \end{Bmatrix} t^2 \right) + o(t^2) \tag{48.12}$$
for sufficiently small t , where the Christoffel symbols are evaluated at the point x = λ (0) .
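Equation (48.11) can be tested on a concrete straight line expressed in plane polar coordinates, where $\begin{Bmatrix} 1 \\ 22 \end{Bmatrix} = -r$ and $\begin{Bmatrix} 2 \\ 12 \end{Bmatrix} = \begin{Bmatrix} 2 \\ 21 \end{Bmatrix} = 1/r$. The line below is our own sample; both components of (48.11) reduce to zero:

```python
import sympy as sp

t = sp.Symbol('t')

# A straight line in the plane with homogeneous parameter: x(t) = 1 + t, y(t) = 2t
xc = 1 + t
yc = 2 * t

# Its polar coordinate representation lambda^1 = r(t), lambda^2 = theta(t)
rt = sp.sqrt(xc**2 + yc**2)
tht = sp.atan2(yc, xc)

# Equations (48.11) with the polar Christoffel symbols listed above
eq1 = sp.diff(rt, t, 2) - rt * sp.diff(tht, t)**2
eq2 = sp.diff(tht, t, 2) + 2 * sp.diff(rt, t) * sp.diff(tht, t) / rt

print(sp.simplify(eq1), sp.simplify(eq2))  # 0 0
```

A straight line thus satisfies (48.11) in any chart, even though its coordinate functions $\lambda^i(t)$ are not linear in $t$.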
Now suppose that A is a tensor field on U , say A ∈ Tq∞ , and let λ : (a, b) → U be a curve
in U . Then the restriction of A on λ is a tensor field
$$\hat{\mathbf{A}}(t) \equiv \mathbf{A}\left( \lambda(t) \right), \qquad t \in (a, b) \tag{48.13}$$
In this case we can compute the gradient of A along λ either by (48.6) or directly by the chain rule
of (48.13). In both cases the result is
$$\begin{aligned}
\frac{d\mathbf{A}(\lambda(t))}{dt} = \left[ \frac{\partial A^{i_1 \ldots i_q}(\lambda(t))}{\partial x^j} + A^{k i_2 \ldots i_q}(t) \begin{Bmatrix} i_1 \\ kj \end{Bmatrix} + \cdots + A^{i_1 \ldots i_{q-1} k}(t) \begin{Bmatrix} i_q \\ kj \end{Bmatrix} \right] \frac{d\lambda^j(t)}{dt} \\
\times\ \mathbf{g}_{i_1}(\lambda(t)) \otimes \cdots \otimes \mathbf{g}_{i_q}(\lambda(t))
\end{aligned} \tag{48.14}$$
or, equivalently,
$$\frac{d\mathbf{A}(\lambda(t))}{dt} = \left( \operatorname{grad} \mathbf{A}(\lambda(t)) \right) \dot{\boldsymbol{\lambda}}(t) \tag{48.15}$$

or, directly by the chain rule,

$$\frac{d\mathbf{A}(\lambda(t))}{dt} = \frac{\partial \mathbf{A}(\lambda(t))}{\partial x^j} \frac{d\lambda^j(t)}{dt} \tag{48.16}$$
By (48.16) it follows that the gradients of the metric tensor, the volume tensor, and the skew-symmetrization operator all vanish along any curve.
Exercises

$$\mathbf{v} = \frac{d\boldsymbol{\lambda}}{dt}, \qquad \mathbf{a} = \frac{d\mathbf{v}}{dt}$$
Let $\mathscr{E}$ be a Euclidean manifold and let $\mathbf{u}$ and $\mathbf{v}$ be vector fields defined on some open set $U$ in $\mathscr{E}$. In Exercise 45.2 we have defined the Lie bracket $[\mathbf{u}, \mathbf{v}]$ by

$$[\mathbf{u}, \mathbf{v}]f = \mathbf{u}(\mathbf{v}f) - \mathbf{v}(\mathbf{u}f) \tag{49.1}$$

In this section we shall first explain the geometric meaning of the formula (49.1), and then generalize the operation to the Lie derivative of arbitrary tensor fields.
To interpret the operation on the right-hand side of (49.1), we start from the concept of
the flow generated by a vector field. We say that a curve λ : ( a , b ) → U is an integral curve of a
vector field v if
$$\frac{d\lambda(t)}{dt} = \mathbf{v}\left( \lambda(t) \right) \tag{49.2}$$
for all t ∈ ( a , b ) . By (47.1), the condition (49.2) means that λ is an integral curve of v if and
only if its tangent vector coincides with the value of v at every point λ ( t ) . An integral curve
may be visualized as the orbit of a point flowing with velocity v. Then the flow generated by v
is defined to be the mapping that sends a point λ ( t0 ) to a point λ ( t ) along any integral curve of
v.
To make this concept more precise, let us introduce a local coordinate system x̂ . Then
the condition (49.2) can be represented by
d λ i ( t ) / dt =υ i ( λ ( t ) ) (49.3)
where (47.4) has been used. This formula shows that the coordinates ( λ i ( t ) , i = 1,… , N ) of an
integral curve are governed by a system of first-order differential equations. Now it is proved in
the theory of differential equations that if the fields υ i on the right-hand side of (49.3) are
smooth, then corresponding to any initial condition, say
360 Chap. 10 • VECTOR FIELDS
$$\lambda(0) = \mathbf{x}_0 \tag{49.4}$$

or, equivalently,

$$\lambda^i(0) = x_0^i \tag{49.5}$$

a unique solution of (49.3) exists on a certain interval $(-\delta, \delta)$, where $\delta$ may depend on the
initial point x 0 but it may be chosen to be a fixed, positive number for all initial points in a
sufficiently small neighborhood of $\mathbf{x}_0$. For definiteness, we denote the solution of (49.2) corresponding to the initial point $\mathbf{x}$ by $\lambda(t, \mathbf{x})$; then it is known that the mapping from $\mathbf{x}$ to $\lambda(t, \mathbf{x})$ is smooth for each $t$ belonging to the interval of existence of the solution. We denote this mapping by $\rho_t$, namely

$$\rho_t(\mathbf{x}) = \lambda(t, \mathbf{x}) \tag{49.6}$$

In particular,

$$\rho_0(\mathbf{x}) = \mathbf{x}, \qquad \mathbf{x} \in U \tag{49.7}$$

reflecting the fact that $\mathbf{x}$ is the initial point of the integral curve $\lambda(t, \mathbf{x})$.
Since the fields $\upsilon^i$ are independent of $t$, the system (49.3) is said to be autonomous. A characteristic property of such a system is that the flow generated by $\mathbf{v}$ forms a local one-parameter group. That is, locally,

$$\rho_{t_1 + t_2} = \rho_{t_2} \circ \rho_{t_1} \tag{49.8}$$

or, equivalently,

$$\lambda\left( t_1 + t_2, \mathbf{x} \right) = \lambda\left( t_2, \lambda(t_1, \mathbf{x}) \right) \tag{49.9}$$
for all t1 , t2 , x such that the mappings in (49.9) are defined. Combining (49.7) with (49.9), we see
that the flow ρt is a local diffeomorphism and, locally,
$$\rho_t^{-1} = \rho_{-t} \tag{49.10}$$
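The group properties (49.8)–(49.10) are easy to observe numerically. The sketch below (plain Python with a hand-rolled RK4 integrator; the sample field is our own choice, not from the text) compares $\rho_{t_1 + t_2}$ with $\rho_{t_2} \circ \rho_{t_1}$ and checks that $\rho_{-t} \circ \rho_t$ returns the initial point:

```python
import math

def flow(v, x, t, steps=2000):
    """Approximate rho_t(x) for the field v by integrating (49.3) with RK4."""
    h = t / steps
    for _ in range(steps):
        k1 = v(x)
        k2 = v([x[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = v([x[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = v([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6 for i in range(2)]
    return x

# A sample smooth autonomous field (our own choice)
v = lambda x: [x[1], -math.sin(x[0])]

x0 = [1.0, 0.5]
a = flow(v, x0, 0.7)                  # rho_{0.3 + 0.4}(x0)
b = flow(v, flow(v, x0, 0.3), 0.4)    # (rho_{0.4} o rho_{0.3})(x0)
c = flow(v, flow(v, x0, 0.7), -0.7)   # (rho_{-0.7} o rho_{0.7})(x0)

print(max(abs(a[i] - b[i]) for i in range(2)))   # close to zero: property (49.8)
print(max(abs(c[i] - x0[i]) for i in range(2)))  # close to zero: property (49.10)
```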
Sec. 49 • Lie Derivatives 361
Consequently the gradient, grad ρt , is a linear isomorphism which carries a vector at any point
x to a vector at the point ρt ( x ) . We call this linear isomorphism the parallelism generated by
the flow and, for brevity, denote it by Pt . The parallelism Pt is a typical two-point tensor whose
component representation has the form
$$P_t(\mathbf{x}) = P_t(\mathbf{x})^i{}_j\ \mathbf{g}_i\left( \rho_t(\mathbf{x}) \right) \otimes \mathbf{g}^j(\mathbf{x}) \tag{49.11}$$

or, equivalently,

$$\left[ P_t(\mathbf{x}) \right]\left( \mathbf{g}_j(\mathbf{x}) \right) = P_t(\mathbf{x})^i{}_j\ \mathbf{g}_i\left( \rho_t(\mathbf{x}) \right) \tag{49.12}$$
where $\{\mathbf{g}_j(\mathbf{x})\}$ and $\{\mathbf{g}_i(\rho_t(\mathbf{x}))\}$ are the natural bases of $\hat{x}$ at the points $\mathbf{x}$ and $\rho_t(\mathbf{x})$, respectively.
Now the parallelism generated by the flow of v gives rise to a difference quotient of the
vector field u in the following way: At any point x ∈U , Pt ( x ) carries the vector u ( x ) at x to
the vector ( Pt ( x ) ) ( u ( x ) ) at ρt ( x ) , which can then be compared with the vector u ( ρt ( x ) ) also at
ρt ( x ) . Thus we define the difference quotient
$$\frac{1}{t}\left[ \mathbf{u}\left( \rho_t(\mathbf{x}) \right) - \left( P_t(\mathbf{x}) \right)\left( \mathbf{u}(\mathbf{x}) \right) \right] \tag{49.13}$$

The limit of this difference quotient as $t$ approaches zero is called the Lie derivative of $\mathbf{u}$ with respect to $\mathbf{v}$ and is denoted by

$$\mathop{L}_{\mathbf{v}} \mathbf{u}(\mathbf{x}) \equiv \lim_{t \to 0} \frac{1}{t}\left[ \mathbf{u}\left( \rho_t(\mathbf{x}) \right) - \left( P_t(\mathbf{x}) \right)\left( \mathbf{u}(\mathbf{x}) \right) \right] \tag{49.14}$$
We now derive a representation for the Lie derivative in terms of a local coordinate
system xˆ. In view of (49.14) we see that we need an approximate representation for Pt to within
first-order terms in t. Let x 0 be a particular reference point. From (49.3) we have
$$\lambda^i(t, \mathbf{x}_0) = x_0^i + \upsilon^i(\mathbf{x}_0)\, t + o(t) \tag{49.15}$$
Suppose that x is an arbitrary neighboring point of x 0 , say x i = x0i + Δx i . Then by the same
argument as (49.15) we have also
$$\begin{aligned}
\lambda^i(t, \mathbf{x}) &= x^i + \upsilon^i(\mathbf{x})\, t + o(t) \\
&= x_0^i + \Delta x^i + \upsilon^i(\mathbf{x}_0)\, t + \frac{\partial \upsilon^i(\mathbf{x}_0)}{\partial x^j}\left( \Delta x^j \right) t + o\left( \Delta x^k \right) + o(t)
\end{aligned} \tag{49.16}$$

Subtracting (49.15), we get

$$\lambda^i(t, \mathbf{x}) - \lambda^i(t, \mathbf{x}_0) = \Delta x^i + \frac{\partial \upsilon^i(\mathbf{x}_0)}{\partial x^j}\left( \Delta x^j \right) t + o\left( \Delta x^k \right) + o(t) \tag{49.17}$$

Consequently, the components of the parallelism are

$$P_t(\mathbf{x}_0)^i{}_j \equiv \left( \operatorname{grad} \rho_t(\mathbf{x}_0) \right)^i{}_j = \delta^i_j + \frac{\partial \upsilon^i(\mathbf{x}_0)}{\partial x^j}\, t + o(t) \tag{49.18}$$

Similarly,

$$u^i\left( \rho_t(\mathbf{x}_0) \right) = u^i(\mathbf{x}_0) + \frac{\partial u^i(\mathbf{x}_0)}{\partial x^j}\, \upsilon^j(\mathbf{x}_0)\, t + o(t) \tag{49.19}$$
where (49.15) has been used. Substituting (49.18) and (49.19) into (49.14) and taking the limit,
we finally obtain
$$\mathop{L}_{\mathbf{v}} \mathbf{u}(\mathbf{x}_0) = \left[ \frac{\partial u^i(\mathbf{x}_0)}{\partial x^j}\, \upsilon^j(\mathbf{x}_0) - \frac{\partial \upsilon^i(\mathbf{x}_0)}{\partial x^j}\, u^j(\mathbf{x}_0) \right] \mathbf{g}_i(\mathbf{x}_0) \tag{49.20}$$

or, as a field equation,

$$\mathop{L}_{\mathbf{v}} \mathbf{u} = \left[ \frac{\partial u^i}{\partial x^j}\, \upsilon^j - \frac{\partial \upsilon^i}{\partial x^j}\, u^j \right] \mathbf{g}_i \tag{49.21}$$

Since the Christoffel-symbol terms cancel, the partial derivatives here may be replaced by covariant derivatives:

$$\mathop{L}_{\mathbf{v}} \mathbf{u} = \left( u^i{}_{,j}\, \upsilon^j - \upsilon^i{}_{,j}\, u^j \right) \mathbf{g}_i \tag{49.22}$$

Comparing (49.21) with the component form of the Lie bracket, we see that

$$\mathop{L}_{\mathbf{v}} \mathbf{u} = [\mathbf{v}, \mathbf{u}] \tag{49.24}$$
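Definition (49.14) can be checked against the component formula (49.21) when the flow is known in closed form. For the rotation field $\mathbf{v} = (-x^2, x^1)$ (our own choice, in Cartesian coordinates) the flow $\rho_t$ is rotation by angle $t$, so the parallelism $P_t = \operatorname{grad} \rho_t$ is the same rotation matrix; the field $\mathbf{u}$ and the evaluation point are likewise our own choices:

```python
import math

def R(t):
    # Flow of v(x) = (-x2, x1) is rotation by angle t; grad rho_t = R(t) is P_t
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def mat_vec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def u(x):
    # A sample vector field (arbitrary polynomial components)
    return [x[0]**2, x[0]*x[1]]

x0 = [1.0, 2.0]
t = 1e-6

# Difference quotient (49.13)/(49.14): [u(rho_t(x)) - P_t(x) u(x)] / t
num = [(p - q) / t for p, q in zip(u(mat_vec(R(t), x0)), mat_vec(R(t), u(x0)))]

# Component formula (49.21), evaluated by hand at x0 = (1, 2): L_v u = (-x1 x2, x1^2 - x2^2 ... )
# which gives (-2, -4) at this point
print(num)  # approximately [-2.0, -4.0]
```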
Since the Lie derivative is defined by the limit of the difference quotient (49.13), $\mathop{L}_{\mathbf{v}} \mathbf{u}$ vanishes if and only if $\mathbf{u}$ commutes with the flow of $\mathbf{v}$ in the following sense:

$$\mathbf{u} \circ \rho_t = P_t\, \mathbf{u} \tag{49.25}$$
When u satisfies this condition, it may be called invariant with respect to v . Clearly, this
condition is symmetric for the pair ( u, v ) , since the Lie bracket is skew-symmetric, namely
$$[\mathbf{u}, \mathbf{v}] = -[\mathbf{v}, \mathbf{u}] \tag{49.26}$$
$$\mathbf{v} \circ \varphi_t = Q_t\, \mathbf{v} \tag{49.27}$$

$$\varphi_s \circ \rho_t = \rho_t \circ \varphi_s \tag{49.28}$$

$$[\mathbf{u}, \mathbf{v}] = \mathbf{0} \tag{49.30}$$
$$\mu(0, t) = \lambda(t) \tag{49.31}$$
The integral curves μ (⋅, t ) with t ∈ ( −δ , δ ) then sweep out a surface parameterized by ( s, t ) .
We shall now show that (49.30) requires that the curves μ ( s, ⋅) be integral curves of v for all s.
$$\frac{\partial \mu^i(s, t)}{\partial s} = u^i\left( \mu(s, t) \right) \tag{49.32}$$

and

$$\frac{\partial \mu^i(0, t)}{\partial t} = \upsilon^i\left( \mu(0, t) \right) \tag{49.33}$$

We must show that

$$\frac{\partial \mu^i(s, t)}{\partial t} = \upsilon^i\left( \mu(s, t) \right) \tag{49.34}$$
for s ≠ 0. We put
$$\zeta^i(s, t) \equiv \frac{\partial \mu^i(s, t)}{\partial t} - \upsilon^i\left( \mu(s, t) \right) \tag{49.35}$$
By (49.33),

$$\zeta^i(0, t) = 0 \tag{49.36}$$
We now show that ζ i ( s, t ) vanishes identically. Indeed, if we differentiate (49.35) with respect
to s and use (49.32), we obtain
$$\begin{aligned}
\frac{\partial \zeta^i}{\partial s} &= \frac{\partial}{\partial t}\left( \frac{\partial \mu^i}{\partial s} \right) - \frac{\partial \upsilon^i}{\partial x^j} \frac{\partial \mu^j}{\partial s} \\
&= \frac{\partial u^i}{\partial x^j} \frac{\partial \mu^j}{\partial t} - \frac{\partial \upsilon^i}{\partial x^j}\, u^j \\
&= \frac{\partial u^i}{\partial x^j}\, \zeta^j + \left( \frac{\partial u^i}{\partial x^j}\, \upsilon^j - \frac{\partial \upsilon^i}{\partial x^j}\, u^j \right)
\end{aligned} \tag{49.37}$$
As a result, when (49.30) holds, ζ i are governed by the system of differential equations
$$\frac{\partial \zeta^i}{\partial s} = \frac{\partial u^i}{\partial x^j}\, \zeta^j \tag{49.38}$$
and subject to the initial condition (49.36) for each fixed t. Consequently, ζ i = 0 is the only
solution. Conversely, when ζ i vanishes identically on the surface, (49.37) implies immediately
that the Lie bracket of u and v vanishes. Thus the condition (49.28) is shown to be equivalent to
the condition (49.30).
The result just established can be used to prove the theorem mentioned in Section 46 that a field of bases is holonomic if and only if

$$\left[ \mathbf{h}_i, \mathbf{h}_j \right] = \mathbf{0} \tag{49.39}$$
for all $i, j = 1, \ldots, N$. Necessity is obvious, since when $\{\mathbf{h}_i\}$ is holonomic, the components of
each hi relative to the coordinate system corresponding to {hi } are the constants δ ij . Hence
from (49.21) we must have (49.39). Conversely, suppose (49.39) holds. Then by (49.28) there
exists a surface swept out by integral curves of the vector fields $\mathbf{h}_1$ and $\mathbf{h}_2$. We denote the surface parameters by $x^1$ and $x^2$. Now if we define an integral curve for $\mathbf{h}_3$ at each surface point $(x^1, x^2)$, then by the conditions

$$\left[ \mathbf{h}_1, \mathbf{h}_3 \right] = \left[ \mathbf{h}_2, \mathbf{h}_3 \right] = \mathbf{0}$$

we see that the integral curves of $\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3$ form a three-dimensional net which can be regarded as a "surface coordinate system" $(x^1, x^2, x^3)$ on a three-dimensional hypersurface in the $N$-dimensional Euclidean manifold $\mathscr{E}$. By repeating the same process based on the condition (49.39), we finally arrive at an $N$-dimensional net formed by integral curves of $\mathbf{h}_1, \ldots, \mathbf{h}_N$. The corresponding $N$-dimensional coordinate system $x^1, \ldots, x^N$ now forms a chart in $\mathscr{E}$ and its
natural basis is the given basis {hi } . Thus the theorem is proved. In the next section we shall
make use of this theorem to prove the classical Frobenius theorem.
So far we have considered the Lie derivative $\mathop{L}_{\mathbf{v}} \mathbf{u}$ of a vector field $\mathbf{u}$ relative to $\mathbf{v}$ only. To generalize this operation, we first define the parallelism on simple tensors by

$$P_t\left( \mathbf{a} \otimes \mathbf{b} \otimes \cdots \right) = \left( P_t \mathbf{a} \right) \otimes \left( P_t \mathbf{b} \right) \otimes \cdots \tag{49.40}$$
Then we extend Pt to arbitrary tensors by linearity. Using this extended parallelism, we define
the Lie derivative of a tensor field A with respect to v by
$$\mathop{L}_{\mathbf{v}} \mathbf{A}(\mathbf{x}) \equiv \lim_{t \to 0} \frac{1}{t}\left[ \mathbf{A}\left( \rho_t(\mathbf{x}) \right) - \left( P_t(\mathbf{x}) \right)\left( \mathbf{A}(\mathbf{x}) \right) \right] \tag{49.41}$$
which is clearly a generalization of (49.14). In terms of a coordinate system it can be shown that
$$\begin{aligned}
\left( \mathop{L}_{\mathbf{v}} \mathbf{A} \right)^{i_1 \ldots i_r}{}_{j_1 \ldots j_s} = \upsilon^k\, \frac{\partial A^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}}{\partial x^k} &- A^{k i_2 \ldots i_r}{}_{j_1 \ldots j_s}\, \frac{\partial \upsilon^{i_1}}{\partial x^k} - \cdots - A^{i_1 \ldots i_{r-1} k}{}_{j_1 \ldots j_s}\, \frac{\partial \upsilon^{i_r}}{\partial x^k} \\
&+ A^{i_1 \ldots i_r}{}_{k j_2 \ldots j_s}\, \frac{\partial \upsilon^k}{\partial x^{j_1}} + \cdots + A^{i_1 \ldots i_r}{}_{j_1 \ldots j_{s-1} k}\, \frac{\partial \upsilon^k}{\partial x^{j_s}}
\end{aligned} \tag{49.42}$$
which generalizes the formula (49.21). We leave the proof of (49.42) as an exercise. By the
same argument as (49.22), the partial derivatives in (49.42) can be replaced by covariant
derivatives.
It should be noted that the operations of raising and lowering of indices by the Euclidean
metric do not commute with the Lie derivative, since the parallelism Pt generated by the flow
generally does not preserve the metric. Consequently, to compute the Lie derivative of a tensor
field A, we must assign a particular contravariant order and covariant order to A. The formula
(49.42) is valid when A is regarded as a tensor field of contravariant order r and covariant order
s.
By the same token, the Lie derivative of a constant tensor such as the volume tensor or
the skew-symmetric operator generally need not vanish.
Exercises
49.1 Prove the general representation formula (49.42) for the Lie derivative.
49.2 Show that the right-hand side of (49.42) obeys the transformation rule of the components
of a tensor field.
49.3 In the two-dimensional Euclidean plane $\mathscr{E}$, consider the vector field $\mathbf{v}$ whose components relative to a rectangular Cartesian coordinate system $(x^1, x^2)$ are

$$\upsilon^1 = \alpha x^1, \qquad \upsilon^2 = \alpha x^2$$

Determine the flow generated by $\mathbf{v}$, and find the integral curve passing through the point

$$\left( x_0^1, x_0^2 \right) = (1, 1)$$
49.4 In the same two-dimensional plane E consider the vector field such that
Sec. 49 • Lie Derivatives 367
υ^1 = −x^2,    υ^2 = x^1
Show that the flow generated by this vector field is the group of rotations of E . In
particular, show that the Euclidean metric is invariant with respect to this vector field.
49.5 Show that the flow of the autonomous system (49.3) possesses the local one-parameter
group property (49.8).
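The situation of Exercises 49.4 and 49.5 can be illustrated numerically. The sketch below, assuming numpy, uses the closed-form flow of the rotation-generating field v(x) = (−x^2, x^1) — a rotation through angle t — and tests the one-parameter group property, the invariance of the Euclidean metric, and the recovery of v as the tangent of the flow.

```python
# Numerical illustration: v(x) = (−x2, x1) generates the rotation group of the
# plane; its flow satisfies ρ_t ∘ ρ_s = ρ_{t+s} and preserves distances.
import numpy as np

def flow(t, x):
    # Closed-form flow of v(x) = (−x2, x1): rotation through angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([c*x[0] - s*x[1], s*x[0] + c*x[1]])

x0 = np.array([1.0, 1.0])
# one-parameter group property (49.8)
assert np.allclose(flow(0.3, flow(0.5, x0)), flow(0.8, x0))
# metric invariance: distances between points are unchanged by the flow
y0 = np.array([2.0, -0.7])
assert np.isclose(np.linalg.norm(x0 - y0),
                  np.linalg.norm(flow(1.2, x0) - flow(1.2, y0)))
# the tangent of the flow at t = 0 recovers v
eps = 1e-6
assert np.allclose((flow(eps, x0) - x0)/eps, [-x0[1], x0[0]], atol=1e-5)
```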
are integral hypersurfaces of D . Since the natural basis vectors g i of any coordinate system are
smooth, by the very definition D must be smooth at x 0 if it is integrable there. Naturally, D is
said to be integrable if it is integrable at each point of its domain. Consequently, every
integrable distribution must be smooth.
Sec. 50 • The Frobenius Theorem 369
Now we are ready to state and prove the Frobenius theorem, which characterizes
integrable distributions.
Theorem 50.1. A smooth distribution D is integrable if and only if it is closed with respect to
the Lie bracket, i.e.,
u, v ∈D ⇒ [ u, v ] ∈D (50.2)
Proof. Necessity can be verified by direct calculation. Suppose D is integrable, and let x̂ be a local coordinate system such that {g_α, α = 1,…,D} forms a local basis for D. Then any u, v ∈ D have the component forms

u = u^α g_α,    v = υ^α g_α                (50.3)
where the Greek index α is summed from 1 to D. Substituting (50.3) into (49.21), we see that

[u, v] = (u^β ∂υ^α/∂x^β − υ^β ∂u^α/∂x^β) g_α                (50.4)

which shows that [u, v] ∈ D.
Conversely, suppose that (50.2) holds. Then for any local basis {vα , α = 1,… , D} for D
we have
[v_α, v_β] = C^γ_{αβ} v_γ                (50.5)

where C^γ_{αβ} are some smooth functions. We show first that there exists a local basis
{uα , α = 1,…, D} which satisfies the somewhat stronger condition
⎡⎣ uα , u β ⎤⎦ = 0, α , β = 1,…, D (50.6)
To construct such a basis, we choose a local coordinate system ŷ and represent the basis {v_α} by the component form

v_α = υ_α^β k_β + υ_α^Δ k_Δ                (50.7)

where {k_i} denotes the natural basis of ŷ, and where the repeated Greek indices β and Δ are summed from 1 to D and from D + 1 to N, respectively. Since the local basis {v_α} is linearly independent, without loss of generality we can assume that the D × D matrix [υ_α^β] is nonsingular, namely

det[υ_α^β] ≠ 0                (50.8)

In this case we define

u_α ≡ (υ^{−1})_α^β v_β                (50.9)

where [(υ^{−1})_α^β] denotes the inverse matrix of [υ_α^β], as usual. Substituting (50.7) into (50.9), we see that the component representation of u_α is

u_α = k_α + (υ^{−1})_α^β υ_β^Δ k_Δ                (50.10)
We now show that the basis {uα } has the property (50.6). From (50.10), by direct
calculation based on (49.21), we can verify easily that the first D components of ⎡⎣ uα , u β ⎤⎦ are
zero, i.e., ⎡⎣ uα , u β ⎤⎦ has the representation
[u_α, u_β] = K^Δ_{αβ} k_Δ                (50.11)
But, by assumption, D is closed with respect to the Lie bracket; it follows that

[u_α, u_β] = K^γ_{αβ} u_γ                (50.12)

for some functions K^γ_{αβ}. Substituting (50.10) into (50.12) yields

[u_α, u_β] = K^γ_{αβ} k_γ + K̄^Δ_{αβ} k_Δ                (50.13)

for certain functions K̄^Δ_{αβ}. Comparing this representation with (50.11), we see that K^γ_{αβ} must vanish and hence, by (50.12),
(50.6) holds.
Now we claim that the local basis {uα } for D can be extended to a field of basis {ui } for
V and that
⎡⎣ ui , u j ⎤⎦ = 0, i, j = 1,… , N (50.14)
This fact is more or less obvious. From (50.6), by the argument presented in the preceding
section, the integral curves of {uα } form a “coordinate net” on a D-dimensional hypersurface
defined in the neighborhood of any reference point x_0. To define u_{D+1}, we simply choose an arbitrary smooth curve λ(t, x_0) passing through x_0, having a nonzero tangent, and not belonging to the hypersurface generated by {u_1,…,u_D}. We regard the points of the curve λ(t, x_0) as having constant coordinates in the coordinate net on the D-dimensional hypersurfaces, say (x_0^1,…,x_0^D). Then we define the curves λ(t, x) for all neighboring points x on the hypersurface of x_0, by exactly the same condition with constant coordinates (x^1,…,x^D).
the flow generated by the curves λ ( t , x ) preserves the integral curves of any uα . Hence if we
define u_{D+1} to be the tangent vector field of the curves λ(t, x), then

[u_{D+1}, u_α] = 0,    α = 1,…,D                (50.15)

where we have used the necessary and sufficient condition (49.29) for the condition (49.30).
Having defined the vector fields {u1 ,… , u D +1} which satisfy the conditions (50.6) and
(50.15), we repeat that same procedure and construct the fields u D + 2 , u D +3 ,… , until we arrive at a
field of basis {u1 ,… , u N }. Now from a theorem proved in the preceding section [cf. (49.39)] the
condition (50.14) is necessary and sufficient that {ui } be the natural basis of a coordinate system
xˆ. Consequently, D is integrable and the proof is complete.
From the proof of the preceding theorem it is clear that an integrable distribution D can
be characterized by the opening remark: In the neighborhood of any point x 0 in the domain of
D there exists a D-dimensional hypersurface S such that D ( x ) is the D-dimensional tangent
hyperplane of S at each point x in S .
We shall state and prove a dual version of the Frobenius theorem in Section 52.
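The bracket criterion (50.2) is easy to test by machine. A minimal sympy sketch (the distributions below are illustrative choices, not from the text): the "contact" distribution spanned by (1,0,0) and (0,1,x) fails the criterion, while the distribution tangent to the planes z = const satisfies it.

```python
# sympy check of the Frobenius bracket criterion (50.2) in E^3.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]

def bracket(u, v):
    # [u,v]^i = u^k ∂v^i/∂x^k − v^k ∂u^i/∂x^k, cf. (49.21)
    return [sum(u[k]*sp.diff(v[i], X[k]) - v[k]*sp.diff(u[i], X[k])
                for k in range(3)) for i in range(3)]

def in_span(w, basis):
    # is w a pointwise linear combination of the two basis fields?
    M = sp.Matrix([basis[0], basis[1], w]).T
    return sp.simplify(M.det()) == 0

v1, v2 = [1, 0, 0], [0, 1, x]
assert not in_span(bracket(v1, v2), [v1, v2])   # [v1,v2] = (0,0,1): not integrable

u1, u2 = [1, 0, 0], [0, 1, 0]
assert in_span(bracket(u1, u2), [u1, u2])       # coordinate fields commute: integrable
```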
Exercises
Section 51. Differential Forms and Exterior Derivative
Let U be an open set in E and let A be a skew-symmetric covariant tensor field of order
r on U, i.e.,

A : U → T̂_r(V)                (51.1)
Then for any point x ∈U , A ( x ) is a skew-symmetric tensor of order r (cf. Chapter 8).
Choosing a coordinate chart xˆ onU as before, we can express A in the component form
A = A_{i_1⋯i_r} g^{i_1} ⊗ ⋯ ⊗ g^{i_r}                (51.2)
where {g^i} denotes the natural basis of x̂. Since A is skew-symmetric, its components obey the identity

A_{i_1⋯j⋯k⋯i_r} = −A_{i_1⋯k⋯j⋯i_r}                (51.3)

for any pair of indices (j, k) in (i_1,…,i_r). As explained in Section 39, we can then represent A by
A = Σ_{i_1<⋯<i_r} A_{i_1⋯i_r} g^{i_1} ∧ ⋯ ∧ g^{i_r} = (1/r!) A_{i_1⋯i_r} g^{i_1} ∧ ⋯ ∧ g^{i_r}                (51.4)
¹ H. Flanders, Differential Forms with Applications to the Physical Sciences, Academic Press, New York-London, 1963.
374 Chap. 10 • VECTOR FIELDS
We define first the notion of the exterior derivative of a differential form. Let A be the r-form represented by (51.2) or (51.4). Then the exterior derivative dA is an (r + 1)-form given by any one of the following three equivalent formulas:
dA = Σ_{i_1<⋯<i_r} Σ_{k=1}^N (∂A_{i_1⋯i_r}/∂x^k) g^k ∧ g^{i_1} ∧ ⋯ ∧ g^{i_r}

   = (1/r!) (∂A_{i_1⋯i_r}/∂x^k) g^k ∧ g^{i_1} ∧ ⋯ ∧ g^{i_r}                (51.5)

   = (1/(r!(r + 1)!)) δ^{k i_1⋯i_r}_{j_1⋯j_{r+1}} (∂A_{i_1⋯i_r}/∂x^k) g^{j_1} ∧ ⋯ ∧ g^{j_{r+1}}
Of course, we have to show that dA as defined by (51.5) is independent of the choice of the chart x̂. If this result is granted, then (51.5) can be written in the coordinate-free form

dA = (−1)^r (r + 1)! K_{r+1}(grad A)                (51.6)

To establish the independence of the choice of chart, we must show that, for any other coordinate system (x̄^i),

δ^{k i_1⋯i_r}_{j_1⋯j_{r+1}} ∂A_{i_1⋯i_r}/∂x^k = δ^{k i_1⋯i_r}_{l_1⋯l_{r+1}} (∂Ā_{i_1⋯i_r}/∂x̄^k)(∂x̄^{l_1}/∂x^{j_1}) ⋯ (∂x̄^{l_{r+1}}/∂x^{j_{r+1}})                (51.8)
for all j_1,…,j_{r+1}, where Ā_{i_1⋯i_r} and {ḡ^i} denote the components of A and the natural basis corresponding to (x̄^i). To prove (51.8), we recall first the transformation rule

A_{i_1⋯i_r} = Ā_{j_1⋯j_r} (∂x̄^{j_1}/∂x^{i_1}) ⋯ (∂x̄^{j_r}/∂x^{i_r})                (51.9)
for any covariant tensor components. Now differentiating (51.9) with respect to x k , we obtain
Sec. 51 • Differential Forms, Exterior Derivative 375
∂A_{i_1⋯i_r}/∂x^k = (∂Ā_{j_1⋯j_r}/∂x̄^l)(∂x̄^l/∂x^k)(∂x̄^{j_1}/∂x^{i_1}) ⋯ (∂x̄^{j_r}/∂x^{i_r})
    + Ā_{j_1⋯j_r} (∂²x̄^{j_1}/∂x^k∂x^{i_1})(∂x̄^{j_2}/∂x^{i_2}) ⋯ (∂x̄^{j_r}/∂x^{i_r})                (51.10)
    + ⋯ + Ā_{j_1⋯j_r} (∂x̄^{j_1}/∂x^{i_1}) ⋯ (∂x̄^{j_{r−1}}/∂x^{i_{r−1}})(∂²x̄^{j_r}/∂x^k∂x^{i_r})
Since the second derivative ∂²x̄^j/∂x^k∂x^i is symmetric with respect to the pair (k, i), when we form the contraction of (51.10) with the skew-symmetric operator K_{r+1} the result is (51.8). Here we have used the fact that K_{r+1} is a tensor of order 2(r + 1), so that we have the identities
δ^{k i_1⋯i_r}_{j_1⋯j_{r+1}} = δ̄^{l m_1⋯m_r}_{p_1⋯p_{r+1}} (∂x^k/∂x̄^l)(∂x^{i_1}/∂x̄^{m_1}) ⋯ (∂x^{i_r}/∂x̄^{m_r})(∂x̄^{p_1}/∂x^{j_1}) ⋯ (∂x̄^{p_{r+1}}/∂x^{j_{r+1}})
                (51.11)
    = δ^{l m_1⋯m_r}_{p_1⋯p_{r+1}} (∂x^k/∂x̄^l)(∂x^{i_1}/∂x̄^{m_1}) ⋯ (∂x^{i_r}/∂x̄^{m_r})(∂x̄^{p_1}/∂x^{j_1}) ⋯ (∂x̄^{p_{r+1}}/∂x^{j_{r+1}})
In particular, if u = u_i g^i is a 1-form, then (51.5) reduces to

du = (∂u_i/∂x^j − ∂u_j/∂x^i) g^j ⊗ g^i                (51.12)
Comparing this representation with (47.52), we see that the exterior derivative and the curl
operator are related by
2 curl u = − du (51.13)
for any 1-form u. Equation (51.13) also follows from (51.6) and (47.50). In the sense of (51.13), the exterior derivative is a generalization of the curl operator from 1-forms to r-forms in
general. We shall now consider some basic properties of the exterior derivative.
I. If f is a smooth function, i.e., a 0-form, then the exterior derivative of f coincides with its gradient:

df = grad f                (51.14)
II. For any smooth vector fields u, v and any 1-form w,

u ⋅ d(v ⋅ w) − v ⋅ d(u ⋅ w) = [u, v] ⋅ w + dw(u, v)                (51.15)
We can prove this formula by direct calculation. Let the component representations of
u, v, w in a chart x̂ be
u = u^i g_i,    v = υ^i g_i,    w = w_i g^i                (51.16)

Then

u ⋅ w = u^i w_i,    v ⋅ w = υ^i w_i                (51.17)
From I it follows that

d(u ⋅ w) = ∂(u^i w_i)/∂x^k g^k = (u^i ∂w_i/∂x^k + w_i ∂u^i/∂x^k) g^k                (51.18)
and similarly

d(v ⋅ w) = ∂(υ^i w_i)/∂x^k g^k = (υ^i ∂w_i/∂x^k + w_i ∂υ^i/∂x^k) g^k                (51.19)
Consequently,
u ⋅ d(v ⋅ w) − v ⋅ d(u ⋅ w) = (u^k ∂υ^i/∂x^k − υ^k ∂u^i/∂x^k) w_i + (u^k υ^i − u^i υ^k) ∂w_i/∂x^k                (51.20)
Now from (49.21) the first term on the right-hand side of (51.20) is simply [ u, v ] ⋅ w. Since the
coefficient of the second term on the right-hand side of (51.20) is skew-symmetric, that term can be rewritten as

u^k υ^i (∂w_i/∂x^k − ∂w_k/∂x^i)
or, equivalently,
d w ( u, v )
when the representation (51.12) is used. Thus the identity (51.15) is proved.
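The identity (51.15) just proved can also be confirmed by brute-force computer algebra. A sympy sketch in E³ with arbitrary sample fields (dw(u, v) is evaluated through the representation (51.12)):

```python
# Direct symbolic verification of (51.15):
#   u·d(v·w) − v·d(u·w) = [u,v]·w + dw(u,v)
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
u = [y*z, x, sp.sin(z)]
v = [x**2, z, y]
w = [sp.exp(x), x*y, z**2]        # covariant components w_i

D = lambda f, k: sp.diff(f, X[k])
dot = lambda a, b: sum(a[i]*b[i] for i in range(3))

lhs = sum(u[k]*D(dot(v, w), k) - v[k]*D(dot(u, w), k) for k in range(3))
bracket = [sum(u[k]*D(v[i], k) - v[k]*D(u[i], k) for k in range(3)) for i in range(3)]
# dw(u,v) = (∂w_i/∂x^j − ∂w_j/∂x^i) u^j v^i, cf. (51.12)
dw_uv = sum((D(w[i], j) - D(w[j], i))*u[j]*v[i] for i in range(3) for j in range(3))
assert sp.simplify(lhs - dot(bracket, w) - dw_uv) == 0
```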
III. For any r-form A and any s-form B,

d(A ∧ B) = dA ∧ B + (−1)^r A ∧ dB                (51.21)

IV. For any r-forms A and B,

d(A + B) = dA + dB                (51.22)
The proof of this result is obvious from the representation (51.5). Combining (51.21) and (51.22), we have

d(α A + β B) = α dA + β dB                (51.23)

for any r-forms A and B and any constants α, β.

V. For any differential form A, applying the exterior derivative twice yields zero:

d²A ≡ d(dA) = 0                (51.24)
To verify (51.24), we substitute (51.5) into itself and obtain

d²A = (1/(r!(r + 2)!((r + 1)!)²)) δ^{l j_1⋯j_{r+1}}_{p_1⋯p_{r+2}} δ^{k i_1⋯i_r}_{j_1⋯j_{r+1}} (∂²A_{i_1⋯i_r}/∂x^l∂x^k) g^{p_1} ∧ ⋯ ∧ g^{p_{r+2}}

    = (1/(r!(r + 1)!(r + 2)!)) δ^{l k i_1⋯i_r}_{p_1⋯p_{r+2}} (∂²A_{i_1⋯i_r}/∂x^l∂x^k) g^{p_1} ∧ ⋯ ∧ g^{p_{r+2}}                (51.25)

    = 0

where the last step follows since ∂²A_{i_1⋯i_r}/∂x^l∂x^k is symmetric in the pair (l, k) while δ^{l k i_1⋯i_r}_{p_1⋯p_{r+2}} is skew-symmetric in (l, k).
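For low-order forms in E³, property V reduces to the classical identities curl grad f = 0 and div curl v = 0, which a short sympy computation confirms (sample f and v are arbitrary):

```python
# Property V, d(dA) = 0, checked for a 0-form and a 1-form in E^3.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
D = sp.diff

f = sp.exp(x)*sp.sin(y) + z**3
gradf = [D(f, c) for c in X]
curl = lambda v: [D(v[2], y) - D(v[1], z), D(v[0], z) - D(v[2], x), D(v[1], x) - D(v[0], y)]
assert all(sp.simplify(c) == 0 for c in curl(gradf))        # d(df) = 0

v = [x*y, y*z**2, sp.cos(x)]
divcurl = sum(D(c, X[i]) for i, c in enumerate(curl(v)))
assert sp.simplify(divcurl) == 0                            # d(dv) = 0
```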
VI. If f : Ū → U is a smooth mapping, then for any r-form A on U

d̄(f*(A)) = f*(dA)                (51.26)

where f* denotes the induced linear map (defined below) corresponding to f, so that f*(A) is an r-form on Ū, and where d̄ denotes the exterior derivative on Ū. To establish (51.26), let
378 Chap. 10 • VECTOR FIELDS
x^i = f^i(x̄^1,…,x̄^M),    i = 1,…,N                (51.27)
Suppose that A has the representation (51.4)₂ in (x^i). Then by definition f*(A) has the representation

f*(A) = (1/r!) A_{i_1⋯i_r} (∂x^{i_1}/∂x̄^{α_1}) ⋯ (∂x^{i_r}/∂x̄^{α_r}) ḡ^{α_1} ∧ ⋯ ∧ ḡ^{α_r}                (51.28)
where {ḡ^α} denotes the natural basis of (x̄^α). Now by direct calculation of the exterior derivatives of A and f*(A), and by using the symmetry of the second derivative, we obtain the representation

d̄(f*(A)) = f*(dA) = (1/r!) (∂A_{i_1⋯i_r}/∂x^k)(∂x^k/∂x̄^β)(∂x^{i_1}/∂x̄^{α_1}) ⋯ (∂x^{i_r}/∂x̄^{α_r}) ḡ^β ∧ ḡ^{α_1} ∧ ⋯ ∧ ḡ^{α_r}                (51.29)
Applying the result (51.26) to the special case when f is the flow ρt generated by a vector
field v as defined in Section 49, we obtain the following property.
VII. The exterior derivative and the Lie derivative commute, i.e., for any differential form
A and any smooth vector field v
d(L_v A) = L_v(dA)                (51.30)
We leave the proof of (51.30), by direct calculation based on the representations (51.5) and
(49.42), as an exercise.
The results I-VII summarized above are the basic properties of the exterior derivative. In
fact, results I and III-V characterize the exterior derivative completely. This converse assertion
can be stated formally in the following way.
To see that this definition is equivalent to the representations (51.5), we consider any r-
form A given by the representation (51.4)2. Applying the operator d to both sides of that
representation and making use of conditions I and III-V, we get
r! dA = d(A_{i_1⋯i_r} g^{i_1} ∧ ⋯ ∧ g^{i_r})

    = (dA_{i_1⋯i_r}) ∧ g^{i_1} ∧ ⋯ ∧ g^{i_r} + A_{i_1⋯i_r} dg^{i_1} ∧ g^{i_2} ∧ ⋯ ∧ g^{i_r}
      − A_{i_1⋯i_r} g^{i_1} ∧ dg^{i_2} ∧ g^{i_3} ∧ ⋯ ∧ g^{i_r} + ⋯                (51.31)

    = (∂A_{i_1⋯i_r}/∂x^k g^k) ∧ g^{i_1} ∧ ⋯ ∧ g^{i_r} + 0 − 0 + ⋯

    = (∂A_{i_1⋯i_r}/∂x^k) g^k ∧ g^{i_1} ∧ ⋯ ∧ g^{i_r}
where we have used the fact that the natural basis vector g i is the gradient of the coordinate
function x i , so that, by I,
g^i = grad x^i = dx^i                (51.32)
and thus, by V,
dg^i = d²x^i = 0                (51.33)
Exercises
51.1 Prove the product rule (51.21) for the exterior derivative.
51.2 Given (47.53), the classical definition of the curl of a vector field, prove that

curl v = D₂(dv)                (51.34)
51.3 Let f : Ū → U be a smooth mapping as in (51.27). Show first that

grad f = (∂x^i/∂x̄^α) g_i ⊗ ḡ^α

Next show that f*, which maps r-forms on U into r-forms on Ū, can be defined by

(f*(A))(ū_1,…,ū_r) = A((grad f)ū_1,…,(grad f)ū_r)

for all ū_1,…,ū_r in the translation space of Ē. Note that for r = 1, f* is the transpose of grad f.
Section 52. The Dual Form of the Frobenius Theorem; the Poincaré Lemma
The Frobenius theorem as stated and proved in Section 50 characterizes the integrability
of a distribution by a condition on the Lie bracket of the generators of the distribution. In this
section we shall characterize the same by a condition on the exterior derivative of the generators
of the orthogonal complement of the distribution. This condition constitutes the dual form of the
Frobenius theorem.
Theorem 52.1. Let {z^Γ, Γ = 1,…,N − D} be a field of linearly independent 1-forms spanning the orthogonal complement D⊥ of a smooth distribution D. Then D is integrable if and only if

dz^Γ ∧ z^1 ∧ ⋯ ∧ z^{N−D} = 0,    Γ = 1,…,N − D                (52.1)
Proof. As in Section 50, let {v_α, α = 1,…,D} be a local basis for D. Then

z^Γ ⋅ v_α = 0,    α = 1,…,D,  Γ = 1,…,N − D                (52.2)

By the Frobenius theorem D is integrable if and only if (50.5) holds. Substituting (50.5) and (52.2) into (51.15) with u = v_α, v = v_β, and w = z^Γ, we see that (50.5) is equivalent to

dz^Γ(v_α, v_β) = 0,    α, β = 1,…,D                (52.3)
To prove the said equivalence, we extend {v_α} into a basis {v_i} in such a way that its dual basis {v^i} satisfies v^{D+Γ} = z^Γ for all Γ = 1,…,N − D. This extension is possible by virtue of (52.2). Relative to the basis {v^i}, the condition (52.3) means that the representation of dz^Γ contains no term v^α ∧ v^β with α, β = 1,…,D, i.e.,

2 dz^Γ = ζ^Γ_{i(Δ+D)} v^i ∧ z^Δ                (52.5)

where the repeated index Δ is summed from 1 to N − D. Taking the exterior product with z^1 ∧ ⋯ ∧ z^{N−D} annihilates every term on the right-hand side, so (52.3) implies (52.1); conversely, (52.1) holds only if every term of dz^Γ contains at least one factor z^Δ, which is precisely (52.5).
As an application, consider a vector field z which admits locally the representation

z = h grad f                (52.6)

Such a vector field is called complex-lamellar in the classical theory. Using the terminology here, we see that a complex-lamellar vector field z is a vector field such that z⊥ is integrable. Hence by (52.1), z is complex-lamellar if and only if dz ∧ z = 0. In a three-dimensional space this condition can be written as
( curl z ) ⋅ z = 0 (52.7)
z = z_i g^i = z_i dx^i                (52.8)
then dz is represented by
dz = ½ (∂z_i/∂x^k − ∂z_k/∂x^i) dx^k ∧ dx^i                (52.9)
or, equivalently,
δ^{kij}_{pqr} (∂z_i/∂x^k) z_j = 0,    p, q, r = 1,…,N                (52.11)
Since it suffices to consider the special cases with p < q < r in (52.11), when N = 3 we have

0 = δ^{kij}_{123} (∂z_i/∂x^k) z_j = z_j ε^{kij} ∂z_i/∂x^k                (52.12)
Sec. 52 • Dual Form of Frobenius Theorem; Poincaré Lemma 383
which is equivalent to the component version of (52.7). In view of this special case we see that
the dual form of the Frobenius theorem is a generalization of Kelvin’s condition from 1-forms to
r-forms in general.
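Kelvin's condition is easy to test symbolically. The sympy sketch below checks (52.7) for a field of the form z = h grad f — complex-lamellar by construction — and for a field that is not; h and f are arbitrary illustrative choices.

```python
# Symbolic check of Kelvin's condition (52.7): z = h grad f satisfies
# (curl z)·z = 0, while a generic field need not.
import sympy as sp

x, y, zc = sp.symbols('x y z')
X = [x, y, zc]

def grad(f):
    return [sp.diff(f, c) for c in X]

def curl(v):
    return [sp.diff(v[2], y) - sp.diff(v[1], zc),
            sp.diff(v[0], zc) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]

h = 1 + x**2 + y**2
f = sp.sin(x) + y*zc
zfield = [h*g for g in grad(f)]                 # complex-lamellar by construction
assert sp.simplify(sum(c*v for c, v in zip(curl(zfield), zfield))) == 0

w = [y, zc, x]                                  # a field that is not complex-lamellar
assert sp.simplify(sum(c*v for c, v in zip(curl(w), w))) != 0
```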
In the classical theory a vector field (1-form) z is called lamellar if locally z is the gradient of a smooth function (0-form) f, namely

z = grad f = df                (52.13)

A classical result asserts that, locally, (52.13) holds if and only if

curl z = 0                (52.14)
The generalization of lamellar fields from 1-forms to r-forms in general is obvious: We say that an r-form A is closed if its exterior derivative vanishes,

dA = 0                (52.15)

and that A is exact if there exists an (r − 1)-form B such that

A = dB                (52.16)
From (51.24) any exact form is necessarily closed. The theorem that generalizes the classical
result is the Poincaré lemma, which implies that (52.16) and (52.15) are locally equivalent.
Before stating the Poincaré lemma, we define first the notion of a star-shaped (or
retractable) open set U in E : U is star-shaped if there exists a smooth mapping
ρ : U ×R → U (52.17)
such that, for some fixed point x_0 ∈ U,

ρ(x, t) = x    when t ≤ 0
ρ(x, t) = x_0   when t ≥ 1                (52.18)
The equivalence of (52.13) and (52.14) requires only that the domain U of z be simply
connected, i.e., every closed curve in U be retractable. When we generalize the result to the
equivalence of (52.15) and (52.16), the domain U of A must be retractable relative to all r-
dimensional closed hypersurfaces. For simplicity, we shall assume that U is star-shaped.
Theorem 52.2. If A is a closed r-form defined on a star-shaped domain, then A is exact, i.e.,
there exists an (r-1)-form B on U such that (52.16) holds.
(y^1,…,y^{N+1}) = (x^1,…,x^N, t)                (52.19)

so that, by (52.18),

ρ(⋅, t) = identity    when t ≤ 0                (52.20)

and

ρ(⋅, t) = x_0    when t ≥ 1                (52.21)
x^i = ρ^i(y^α),    i = 1,…,N                (52.22)

Then

∂ρ^i/∂y^j = δ^i_j,    ∂ρ^i/∂y^{N+1} = 0    when t ≤ 0                (52.23)

and

∂ρ^i/∂y^j = 0,    ∂ρ^i/∂y^{N+1} = 0    when t ≥ 1                (52.24)
Now let the closed r-form A have the representation

A = (1/r!) A_{i_1⋯i_r} g^{i_1} ∧ ⋯ ∧ g^{i_r}                (52.25)
The pullback of A under ρ is then

ρ*(A) = (1/r!) A_{i_1⋯i_r} (∂ρ^{i_1}/∂y^{α_1}) ⋯ (∂ρ^{i_r}/∂y^{α_r}) h^{α_1} ∧ ⋯ ∧ h^{α_r}                (52.26)

with

h^α = dy^α                (52.27)

Separating the terms in (52.26) which contain the factor h^{N+1} = dt from those which do not, we can write

ρ*(A) = (1/r!) X_{i_1⋯i_r} h^{i_1} ∧ ⋯ ∧ h^{i_r} + (1/(r − 1)!) Y_{i_1⋯i_{r−1}} h^{i_1} ∧ ⋯ ∧ h^{i_{r−1}} ∧ dt                (52.28)
where
X_{i_1⋯i_r} = A_{j_1⋯j_r} (∂ρ^{j_1}/∂y^{i_1}) ⋯ (∂ρ^{j_r}/∂y^{i_r})                (52.29)

and

Y_{i_1⋯i_{r−1}} = A_{j_1⋯j_r} (∂ρ^{j_1}/∂y^{i_1}) ⋯ (∂ρ^{j_{r−1}}/∂y^{i_{r−1}}) (∂ρ^{j_r}/∂y^{N+1})                (52.30)
We put
X = (1/r!) X_{i_1⋯i_r} h^{i_1} ∧ ⋯ ∧ h^{i_r}                (52.31)

and

Y = (1/(r − 1)!) Y_{j_1⋯j_{r−1}} h^{j_1} ∧ ⋯ ∧ h^{j_{r−1}}                (52.32)

so that

ρ*(A) = X + Y ∧ h^{N+1} = X + Y ∧ dt                (52.33)
From (52.23), (52.24), (52.29), and (52.30), X and Y satisfy the end conditions

X_{i_1⋯i_r}(⋅, t) = A_{i_1⋯i_r}  when t ≤ 0,    X_{i_1⋯i_r}(⋅, t) = 0  when t ≥ 1                (52.34)

and

Y_{i_1⋯i_{r−1}}(⋅, t) = 0    when t ≤ 0 or t ≥ 1                (52.35)

We now claim that A = dB, where B is the (r − 1)-form on U defined by

B ≡ ((−1)^r/(r − 1)!) (∫₀¹ Y_{i_1⋯i_{r−1}}(⋅, t) dt) g^{i_1} ∧ ⋯ ∧ g^{i_{r−1}}                (52.36)
To prove this, we take the exterior derivative of (52.33). By (51.26) and the fact that A is closed, the result is

dX + dY ∧ dt = 0                (52.37)
where we have used also (51.21) and (51.24) on the term Y ∧ dt. From (52.31) and (52.32) the exterior derivatives of X and Y have the representations

r! dX = (∂X_{i_1⋯i_r}/∂x^j) h^j ∧ h^{i_1} ∧ ⋯ ∧ h^{i_r} + (∂X_{i_1⋯i_r}/∂t) dt ∧ h^{i_1} ∧ ⋯ ∧ h^{i_r}                (52.38)

and

(r − 1)! dY = (∂Y_{i_1⋯i_{r−1}}/∂x^j) h^j ∧ h^{i_1} ∧ ⋯ ∧ h^{i_{r−1}} + (∂Y_{i_1⋯i_{r−1}}/∂t) dt ∧ h^{i_1} ∧ ⋯ ∧ h^{i_{r−1}}                (52.39)
Since the terms in (52.37) free of dt and those containing dt must vanish separately, we obtain

(∂X_{i_1⋯i_r}/∂x^j) h^j ∧ h^{i_1} ∧ ⋯ ∧ h^{i_r} = 0                (52.40)
and
((−1)^r ∂X_{i_1⋯i_r}/∂t + r ∂Y_{i_2⋯i_r}/∂x^{i_1}) h^{i_1} ∧ ⋯ ∧ h^{i_r} ∧ dt = 0                (52.41)
Consequently
−(1/(r − 1)!) δ^{j_1⋯j_r}_{i_1⋯i_r} ∂Y_{i_2⋯i_r}/∂x^{i_1} = (−1)^r ∂X_{j_1⋯j_r}/∂t                (52.42)
Now from (52.36) if we take the exterior derivative of the ( r − 1) -form B on U , the
result is
dB = ((−1)^r/(r − 1)!) (∫₀¹ (∂Y_{i_2⋯i_r}/∂x^{i_1}) dt) g^{i_1} ∧ ⋯ ∧ g^{i_r}

   = ((−1)^r/((r − 1)! r!)) δ^{i_1⋯i_r}_{j_1⋯j_r} (∫₀¹ (∂Y_{i_2⋯i_r}/∂x^{i_1}) dt) g^{j_1} ∧ ⋯ ∧ g^{j_r}                (52.43)

Substituting (52.42) into (52.43) and using the end conditions on X, we get

dB = (1/r!) (−∫₀¹ (∂X_{j_1⋯j_r}/∂t) dt) g^{j_1} ∧ ⋯ ∧ g^{j_r}

   = (1/r!) (X_{j_1⋯j_r}(⋅, 0) − X_{j_1⋯j_r}(⋅, 1)) g^{j_1} ∧ ⋯ ∧ g^{j_r}                (52.44)

   = (1/r!) A_{j_1⋯j_r} g^{j_1} ∧ ⋯ ∧ g^{j_r} = A
The (r − 1)-form B, whose existence has just been proved by (52.44), is of course not unique. Since

dB = dB̂  ⟺  d(B − B̂) = 0                (52.45)

B is unique to within an arbitrary additive closed (r − 1)-form.
Exercises
52.1 In calculus a “differential”
P(x, y, z) dx + Q(x, y, z) dy + R(x, y, z) dz                (52.46)

is called exact if there exists a function U(x, y, z) such that

dU = (∂U/∂x) dx + (∂U/∂y) dy + (∂U/∂z) dz = P dx + Q dy + R dz                (52.47)
Use the Poincaré lemma and show that (52.46) is exact if and only if
∂P/∂y = ∂Q/∂x,    ∂P/∂z = ∂R/∂x,    ∂Q/∂z = ∂R/∂y                (52.48)
52.2 The “differential” (52.46) is called integrable if there exists a nonvanishing function
μ ( x, y , z ) , called an integration factor, such that the differential
μ ( P dx + Q dy + R dz ) (52.49)
is exact. Show that (52.46) is integrable if and only if the two-dimensional distribution
orthogonal to the 1-form (52.46) is integrable in the sense defined in this section. Then
use the dual form of the Frobenius theorem and show that (52.46) is integrable if and
only if
P(∂R/∂y − ∂Q/∂z) + Q(∂P/∂z − ∂R/∂x) + R(∂Q/∂x − ∂P/∂y) = 0                (52.50)
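Both exercises are easy to check mechanically. A sympy sketch encoding the conditions (52.48) and (52.50) for some ad hoc sample differentials:

```python
# Exercises 52.1–52.2 in code: y dx + x dy + 2z dz is exact ((52.48) holds),
# while y dx + dz fails the integrability condition (52.50).
import sympy as sp

x, y, z = sp.symbols('x y z')

def exact(P, Q, R):
    # conditions (52.48)
    return all(sp.simplify(e) == 0 for e in
               (sp.diff(P, y) - sp.diff(Q, x),
                sp.diff(P, z) - sp.diff(R, x),
                sp.diff(Q, z) - sp.diff(R, y)))

def integrable(P, Q, R):
    # condition (52.50): P(R_y − Q_z) + Q(P_z − R_x) + R(Q_x − P_y) = 0
    expr = (P*(sp.diff(R, y) - sp.diff(Q, z))
            + Q*(sp.diff(P, z) - sp.diff(R, x))
            + R*(sp.diff(Q, x) - sp.diff(P, y)))
    return sp.simplify(expr) == 0

assert exact(y, x, 2*z)                     # d(xy + z^2)
assert not exact(y, 0, 1) and not integrable(y, 0, 1)
assert integrable(y, x, 0)                  # exact differentials are integrable
```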
Sec. 53 • Three-Dimensional Euclidean Manifold, I 389
We recall first that when E is three-dimensional the exterior product may be replaced by the cross product [cf. (41.19)]. Specifically, relative to a positive orthonormal basis {e_1, e_2, e_3} for V, the component representation of u × v for any u, v ∈ V is

u × v = ε_{ijk} u^i υ^j e^k
      = (u^2 υ^3 − u^3 υ^2) e_1 + (u^3 υ^1 − u^1 υ^3) e_2 + (u^1 υ^2 − u^2 υ^1) e_3                (53.1)

where
u = u^i e_i,    v = υ^j e_j                (53.2)
are the usual component representations of u and v and where the reciprocal basis {ei } coincides
with {e i } . It is important to note that the representation (53.1) is valid relative to a positive
orthonormal basis only; if the orthonormal basis {e i } is negative, the signs on the right-hand side
of (53.1) must be reversed. For this reason, u × v is called an axial vector in the classical theory.
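The representation (53.1) can be reproduced with the permutation symbol. A numpy sketch (the construction of ε_{ijk} via determinants is an implementation choice, not from the text):

```python
# Contracting the permutation symbol ε_ijk with the components of u and v
# reproduces the cross product relative to a positive orthonormal basis.
import numpy as np
import itertools

def levi_civita():
    eps = np.zeros((3, 3, 3))
    for i, j, k in itertools.permutations(range(3)):
        eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))
    return eps

eps = levi_civita()
u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
w = np.einsum('ijk,i,j->k', eps, u, v)      # (u × v)_k = ε_ijk u^i v^j
assert np.allclose(w, np.cross(u, v))
# reversing the orientation of the basis reverses the sign: u × v is axial
assert np.allclose(np.einsum('ijk,i,j->k', -eps, u, v), -np.cross(u, v))
```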
We recall also that when E is three-dimensional, then the curl of a vector field can be
represented by a vector field [cf. (47.53) or (51.34)]. Again, if a positive rectangular Cartesian
coordinate system ( x1 , x 2 , x 3 ) induced by {e i } is used, then curl v has the components
curl v = ε^{ijk} (∂υ_j/∂x^i) e_k
       = (∂υ_3/∂x^2 − ∂υ_2/∂x^3) e_1 + (∂υ_1/∂x^3 − ∂υ_3/∂x^1) e_2 + (∂υ_2/∂x^1 − ∂υ_1/∂x^2) e_3                (53.3)
v = υ_i e^i = υ^i e_i                (53.4)
Since ( x i ) is rectangular Cartesian, the natural basis vectors and the component fields satisfy the
usual conditions
υ^i = υ_i,    e^i = e_i,    i = 1, 2, 3                (53.5)
By the same remark as before, curl v is an axial vector field, so the signs on the right-hand side
must be reversed when ( x i ) is negative.
Now let v be a smooth vector field which is nonvanishing on its domain, and put

s = v/‖v‖                (53.6)

Then s is a unit vector field in the same direction as v. In Section 49 we have introduced the notions of integral curves and flows corresponding to any smooth vector field. We now apply these to the vector field s. Since s is a unit vector, its integral curves are parameterized by the arc length¹ s. A typical integral curve of s is

λ = λ(s)                (53.7)
where
dλ/ds = s(λ(s))                (53.8)

and hence dλ/ds is parallel to v(λ(s))                (53.9)
Now assuming that λ is not a straight line, i.e., s is not constant on λ, we can take the covariant derivative of s on λ (cf. Section 48) and write the result in the form
¹ Arc length will be defined in general in Section 68. Here s can be regarded as a parameter such that ‖dλ/ds‖ = 1.
ds / ds = κ n (53.10)
where κ and n are called the curvature and the principal normal of λ , and they are characterized
by the condition
κ = ‖ds/ds‖ > 0                (53.11)
It should be noted that (53.10) defines both κ and n: κ is the norm of ds/ds and n is the unit vector in the direction of the nonvanishing vector field ds/ds. The reciprocal of κ,

r = 1/κ                (53.12)

is called the radius of curvature of λ. Since s is a unit vector field,

0 = d(s ⋅ s)/ds = 2s ⋅ ds/ds = 2κ s ⋅ n                (53.13)
or, equivalently,
s⋅n = 0 (53.14)
Thus n is normal to s , as it should be. In view of (53.14) the cross product of s with n is a unit
vector
b ≡ s×n (53.15)
which is called the binormal of λ . The triad {s, n, b} now forms a field of positive orthonormal
basis in the domain of v. In general {s, n, b} is anholonomic, of course.
Now we compute the covariant derivative of n and b along the curve λ . Since b is a
unit vector, by the same argument as (53.13) we have
(db/ds) ⋅ b = 0                (53.16)

Also, since b ⋅ s = 0,

(db/ds) ⋅ s = −b ⋅ ds/ds = −κ b ⋅ n = 0                (53.17)
where we have used (53.10). Combining (53.16) and (53.17), we see that db / ds is parallel to n,
say
db/ds = −τ n                (53.18)
where τ is called the torsion of the curve λ . From (53.10) and (53.18) the gradient of n along
λ can be computed easily by the representation
n = b×s (53.19)
so that
dn/ds = b × ds/ds + db/ds × s = b × κn − τ n × s = −κ s + τ b                (53.20)
The results (53.10), (53.18), and (53.20) are the Serret-Frenet formulas for the curve λ .
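The Serret-Frenet formulas can be verified numerically on a concrete curve. The sketch below, assuming numpy, uses the circular helix, whose constant curvature κ = a/c² and torsion τ = b/c² (with c = √(a² + b²)) are classical facts; the frame is written in closed form and the formulas checked by finite differences.

```python
# Serret–Frenet formulas (53.10), (53.18), (53.20) on the circular helix
# λ(s) = (a cos(s/c), a sin(s/c), b s/c), c = sqrt(a^2 + b^2).
import numpy as np

a, b = 2.0, 1.0
c = np.hypot(a, b)

def frame(s):
    t = s/c
    sv = np.array([-a*np.sin(t), a*np.cos(t), b])/c            # unit tangent s
    n = np.array([-np.cos(t), -np.sin(t), 0.0])                # principal normal
    return sv, n, np.cross(sv, n)                              # binormal b = s × n

s0, eps = 0.7, 1e-6
(s1, n1, b1), (s2, n2, b2) = frame(s0), frame(s0 + eps)
kappa, tau = a/c**2, b/c**2
assert np.allclose((s2 - s1)/eps, kappa*n1, atol=1e-5)            # ds/ds = κ n
assert np.allclose((b2 - b1)/eps, -tau*n1, atol=1e-5)             # db/ds = −τ n
assert np.allclose((n2 - n1)/eps, -kappa*s1 + tau*b1, atol=1e-5)  # dn/ds = −κs + τb
```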
So far we have introduced a field of basis {s, n, b} associated with the nonzero and
nonrectilinear vector field v. Moreover, the Serret-Frenet formulas give the covariant derivative
of the basis along the vector lines of v . In order to make full use of the basis, however, we need
a complete representation of the covariant derivative of that basis along all curves. Then we can
express the gradient of the vector fields s, n, b in component forms relative to the anholonomic
basis {s, n, b} . These components play the same role as the Christoffel symbols for a holonomic
basis. The component forms of grad s , grad n and grad b have been derived by Bjørgum.2 We
shall summarize his results without proofs here.
Bjørgum shows first that the components of the vector fields curl s, curl n, and curl b relative to the basis {s, n, b} are given by

curl s = Ω_s s + κ b
curl n = −(div b) s + Ω_n n + θ b                (53.21)
curl b = (κ + div n) s + η n + Ω_b b
² O. Bjørgum, “On Beltrami Vector Fields and Flows, Part I. A Comparative Study of Some Basic Types of Vector Fields,” Universitetet i Bergen, Årbok 1951, Naturvitenskapelig rekke Nr. 1.
The scalar fields

Ω_s ≡ s ⋅ curl s,    Ω_n ≡ n ⋅ curl n,    Ω_b ≡ b ⋅ curl b                (53.22)

are called the abnormality of s, n, b, respectively. From Kelvin’s theorem (cf. Section 52)
we know that the abnormality measures, in some sense, the departure of a vector field from a
complex-lamellar field. Since b is given by (53.15), the abnormalities Ωs , Ω n , Ω b are not
independent. Bjørgum shows that
Ω n + Ω b = Ωs − 2τ (53.23)
where τ is the torsion of λ as defined by (53.18). The quantities θ and η in (53.21) are defined by

θ ≡ b ⋅ curl n,    η ≡ n ⋅ curl b                (53.24)

and they satisfy

θ − η = div s                (53.25)

Notice also from (53.21)₁ that

n ⋅ curl s = 0                (53.26)
Next Bjørgum shows that the components of the second-order tensor fields grad s ,
grad n , and grad b relative to the basis {s, n, b} are given by
grad s = κ n ⊗ s + θ n ⊗ n − (Ω_n + τ) n ⊗ b + (Ω_b + τ) b ⊗ n − η b ⊗ b

grad n = −κ s ⊗ s − θ s ⊗ n + (Ω_n + τ) s ⊗ b + τ b ⊗ s − (div b) b ⊗ n + (κ + div n) b ⊗ b                (53.27)

grad b = −(Ω_b + τ) s ⊗ n + η s ⊗ b − τ n ⊗ s + (div b) n ⊗ n − (κ + div n) n ⊗ b
These representations are clearly consistent with the representations (53.21) through the general
formula (47.53). Further, from (48.15) the covariant derivatives of s, n, and b along the integral
curve λ of s are
ds/ds = (grad s) s = κ n

dn/ds = (grad n) s = −κ s + τ b                (53.28)

db/ds = (grad b) s = −τ n
which are consistent with the Serret-Frenet formulas (53.10), (53.20), and (53.18).
The representations (53.27) tell us also the gradients of {s, n, b} along any integral curves
μ = μ ( n ) of n and ν = ν ( b ) of b. Indeed, we have
ds / dn = ( grad s ) n = θ n + ( Ω b + τ ) b
dn / dn = ( grad n ) n = −θ s − ( div b ) b (53.29)
db / dn = ( grad b ) n = − ( Ω b + τ ) s + ( div b ) n
and
ds / db = ( grad s ) b = − ( Ω n + τ ) n − η b
dn / db = ( grad n ) b = ( Ω n + τ ) s + (κ + div n ) b (53.30)
db / db = ( grad b ) b = ηs − (κ + div n ) n
Since the basis {s, n, b} is anholonomic in general, the parameters ( s, n, b ) are not local
coordinates. In particular, the differential operators d / ds, d / dn, d / db do not commute. We
derive first the commutation formulas3 for a scalar function f.
From (47.14) and (46.10) we verify easily that the anholonomic representation of grad f is
grad f = (df/ds) s + (df/dn) n + (df/db) b                (53.31)
for any smooth function f defined on the domain of the basis {s, n, b} . Taking the gradient of
(53.31) and using (53.27), we obtain
³ See A.W. Marris and C.-C. Wang, “Solenoidal Screw Fields of Constant Magnitude,” Arch. Rational Mech. Anal. 39, 227-244 (1970).
grad(grad f) = s ⊗ grad(df/ds) + (grad s)(df/ds) + n ⊗ grad(df/dn) + (grad n)(df/dn) + b ⊗ grad(df/db) + (grad b)(df/db)

    = [d/ds(df/ds) − κ df/dn] s ⊗ s

    + [d/dn(df/ds) − θ df/dn − (Ω_b + τ) df/db] s ⊗ n

    + [d/db(df/ds) + (Ω_n + τ) df/dn + η df/db] s ⊗ b

    + [κ df/ds + d/ds(df/dn) − τ df/db] n ⊗ s

    + [θ df/ds + d/dn(df/dn) + (div b) df/db] n ⊗ n

    + [−(Ω_n + τ) df/ds + d/db(df/dn) − (κ + div n) df/db] n ⊗ b

    + [τ df/dn + d/ds(df/db)] b ⊗ s                (53.32)

    + [(Ω_b + τ) df/ds − (div b) df/dn + d/dn(df/db)] b ⊗ n

    + [−η df/ds + (κ + div n) df/dn + d/db(df/db)] b ⊗ b
Since grad(grad f) is a symmetric tensor field, the skew-symmetric part of (53.32) must vanish, and we obtain

d/dn(df/ds) − d/ds(df/dn) = κ df/ds + θ df/dn + Ω_b df/db

d/ds(df/db) − d/db(df/ds) = Ω_n df/dn + η df/db                (53.33)

d/db(df/dn) − d/dn(df/db) = Ω_s df/ds − (div b) df/dn + (κ + div n) df/db

where we have used (53.23). The formulas (53.33)₁₋₃ are the desired commutation rules.
These rules can be applied, in particular, to the structure functions κ, τ, θ, η, Ω_s, Ω_n, Ω_b themselves, and the results are the following nine intrinsic equations⁴ for the basis {s, n, b}:
−d(Ω_n + τ)/dn − dθ/db + (κ + div n)(Ω_b − Ω_n) − (θ + η) div b + Ω_s κ = 0

−dη/dn − d(Ω_b + τ)/db − (Ω_b − Ω_n) div b − (θ + η)(κ + div n) = 0

dκ/db + d(Ω_n + τ)/ds − η(Ω_s − Ω_b) + Ω_n θ = 0

dτ/db − d(κ + div n)/ds − Ω_n div b + η(2κ + div n) = 0

−dη/ds + η² − κ(κ + div n) − τ² − Ω_n(Ω_s − Ω_n) = 0

dκ/dn − dθ/ds − κ² − θ² + (2Ω_s − 3τ)τ + Ω_n(Ω_s − Ω_n − 4τ) = 0

dτ/dn + d(div b)/ds − κ(Ω_s − Ω_n) + θ div b − Ω_b(κ + div n) = 0

d(κ + div n)/dn + d(div b)/db − θη + (div b)² + (κ + div n)² + Ω_s τ + (Ω_n + τ)(Ω_b + τ) = 0

dΩ_s/ds + dκ/db + Ω_s(θ − η) + κ div b = 0                (53.35)
Having obtained the representations (53.21) and (53.27), the commutation rules (53.33),
and the intrinsic equations (53.35) for the basis {s, n, b} , we can now return to the original
relations (53.6) and derive various representations for the invariants of v. For brevity, we shall
now denote the norm of v by υ . Then (53.6) can be rewritten as
v = υs (53.36)
⁴ A. W. Marris and C.-C. Wang, see footnote 3.
Taking the gradient of (53.36) and using (53.27)₁, we obtain the representation

grad v = s ⊗ grad υ + υ grad s
       = (dυ/ds) s ⊗ s + (dυ/dn) s ⊗ n + (dυ/db) s ⊗ b + υκ n ⊗ s + υθ n ⊗ n                (53.37)
         − υ(Ω_n + τ) n ⊗ b + υ(Ω_b + τ) b ⊗ n − υη b ⊗ b

Notice that the component of grad v in b ⊗ s vanishes identically, i.e.,

b ⋅ (grad v) s = 0                (53.38)

or, equivalently,

b ⋅ dv/ds = 0                (53.39)
so that the covariant derivative of v along its vector lines stays on the plane spanned by s and n.
In differential geometry this plane is called the osculating plane of the said vector line.
div v = dυ/ds + υθ − υη = dυ/ds + υ div s

curl v = υ(Ω_b + Ω_n + 2τ) s + (dυ/db) n + (υκ − dυ/dn) b                (53.40)
       = υ Ω_s s + (dυ/db) n + (υκ − dυ/dn) b
where we have used (53.23) and (53.25). From (53.40)₄ the scalar field Ω = v ⋅ curl v is given by

Ω = v ⋅ curl v = υ² Ω_s                (53.41)
The representation (53.37) and its consequences (53.40) and (53.41) have many
important applications in hydrodynamics and continuum mechanics. It should be noted that
(53.21)1 and (53.27)1 now can be regarded as special cases of (53.40)4 and (53.37)2, respectively,
with υ = 1.
Exercises
53.1 Prove the intrinsic equations (53.35).
53.2 Show that the Serret-Frenet formulas can be written
⁵ O. Bjørgum, see footnote 2. See also J.L. Ericksen, “Tensor Fields,” Handbuch der Physik, Vol. III/1, Appendix, edited by Flügge, Springer-Verlag (1960).
ds/ds = ω × s,    dn/ds = ω × n,    db/ds = ω × b                (53.42)
where ω = τ s + κ b .
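Exercise 53.2 can be checked directly at a single point. With s, n, b taken as a standard right-handed triad and ad hoc sample values for κ and τ, a numpy sketch:

```python
# With the Darboux vector ω = τ s + κ b, the three Serret–Frenet formulas
# collapse to d(·)/ds = ω × (·).
import numpy as np

s = np.array([1.0, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
b = np.cross(s, n)
kappa, tau = 0.8, 0.3
omega = tau*s + kappa*b

assert np.allclose(np.cross(omega, s), kappa*n)            # (53.10)
assert np.allclose(np.cross(omega, n), -kappa*s + tau*b)   # (53.20)
assert np.allclose(np.cross(omega, b), -tau*n)             # (53.18)
```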
Sec. 54 • Three-Dimensional Euclidean Manifold, II 399
In Section 52 we have proved the Poincaré lemma, which asserts that, locally, a
differential form is exact if and only if it is closed. This result means that we have the local
representation
f = dg (54.1)
In a three-dimensional Euclidean manifold the local representation (54.1) has the following two
special cases.
(i) Lamellar Fields. A vector field v admits locally the representation

v = grad f                (54.3)

if and only if

curl v = 0                (54.4)

Such a vector field v is called a lamellar field in the classical theory, and the scalar function f is called the potential of v. Clearly, the potential is locally unique to within an arbitrary additive constant.
(ii) Solenoidal Fields. A vector field v admits locally the representation

v = curl u                (54.5)

if and only if

div v = 0                (54.6)

Such a vector field v is called a solenoidal field in the classical theory, and the vector field u is called the vector potential of v. The vector potential is locally unique to within an arbitrary additive lamellar field.
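The gauge freedom in the vector potential noted above can be confirmed symbolically; a sympy sketch with arbitrary u and k:

```python
# Adding a lamellar field grad k to a vector potential u leaves curl u, and
# hence the solenoidal field v, unchanged; and div curl u = 0, cf. (54.6).
import sympy as sp

x, y, z = sp.symbols('x y z')
D = sp.diff
curl = lambda v: [D(v[2], y) - D(v[1], z), D(v[0], z) - D(v[2], x), D(v[1], x) - D(v[0], y)]

u = [y**2*z, sp.sin(x), x*y]
k = sp.exp(x*y) + z**2
u_hat = [u[i] + D(k, c) for i, c in enumerate([x, y, z])]
assert all(sp.simplify(a - b) == 0 for a, b in zip(curl(u_hat), curl(u)))

v = curl(u)
assert sp.simplify(D(v[0], x) + D(v[1], y) + D(v[2], z)) == 0
```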
In Section 52 we remarked also that the dual form of the Frobenius theorem implies the
following representation.
(iii) Complex-Lamellar Fields. A vector field v admits locally the representation

v = h grad f                (54.7)

if and only if

v ⋅ curl v = 0                (54.8)

Such a vector field v is called complex-lamellar in the classical theory. In the representation (54.7) the surfaces defined by

f(x) = const                (54.9)

are normal to v.
We shall now derive some other well-known representations in the classical theory.

Theorem (Euler representation). A solenoidal vector field v can be represented locally in the form

v = grad h × grad f                (54.10)

Proof. We claim that v has a particular vector potential û which is complex-lamellar. From the remark on (54.5), we may choose û by
uˆ = u + grad k (54.11)
where u is any vector potential of v . In order for û to be complex-lamellar, it must satisfy the
condition (54.8), i.e.,

(u + grad k) ⋅ curl u = 0                (54.12)

Clearly, this equation possesses infinitely many solutions for the scalar function k, since it is a
first-order partial differential equation with smooth coefficients. Hence by the representation
(54.7) we may write
Sec. 54 • Three-Dimensional Euclidean Manifold, II 401
uˆ = h grad f (54.13)
Taking the curl of this equation, we obtain the Euler representation (54.10):

v = curl(h grad f) = grad h × grad f                (54.14)
It should be noted that in the Euler representation (54.10) the vector v is orthogonal to grad h as well as to grad f, namely

v ⋅ grad h = v ⋅ grad f = 0                (54.15)

Hence v is tangent to the one-parameter families of surfaces

h(x) = const                (54.16)

or

f(x) = const                (54.17)
For this reason, these surfaces are then called vector sheets of v . If v ≠ 0, then from (54.10),
grad h and grad f are not parallel, so that h and f are functionally independent. In this case the
intersections of the surfaces (54.16) and (54.17) are the vector lines of v.
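The Euler representation is easy to probe by computer algebra: for any sample potentials h and f (arbitrary choices below), curl(h grad f) = grad h × grad f, and the resulting field is solenoidal and orthogonal to both gradients.

```python
# Symbolic check: curl(h grad f) = grad h × grad f; the result is solenoidal
# and orthogonal to grad h and grad f.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
D = sp.diff
grad = lambda f: [D(f, c) for c in X]
curl = lambda v: [D(v[2], y) - D(v[1], z), D(v[0], z) - D(v[2], x), D(v[1], x) - D(v[0], y)]
cross = lambda a, b: [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
dot = lambda a, b: sum(p*q for p, q in zip(a, b))

h = x*y + z
f = sp.sin(x) + y**2*z
v = curl([h*g for g in grad(f)])
assert all(sp.simplify(a - b) == 0 for a, b in zip(v, cross(grad(h), grad(f))))
assert sp.simplify(dot(v, grad(h))) == 0 and sp.simplify(dot(v, grad(f))) == 0
assert sp.simplify(sum(D(v[i], X[i]) for i in range(3))) == 0   # div v = 0
```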
where the scalar functions, h, k, f are called the Monge potentials (not unique) of v .
Proof. Since (54.10) is a representation for any solenoidal vector field, from (54.5) we can write
curl v as
curl v = grad k × grad f (54.19)
so that curl ( v − k grad f ) = 0 . Thus v − k grad f is a lamellar vector field. From (54.3) we then have the local
representation
v − k grad f = grad h
which is equivalent to (54.18).
Next we prove another well-known representation for arbitrary smooth vector fields in
the classical theory, the Stokes representation
v = grad h + curl u (54.22)
Proof. We show that there exists a scalar function h such that v − grad h is solenoidal.
Equivalently, this condition means
Δh = div v (54.24)
where Δ denotes the Laplacian [cf. (47.49)]. Thus h satisfies the Poisson equation (54.24). It is
well known that, locally, there exist infinitely many solutions of (54.24). Hence the representation
(54.22) is valid.
Notice that, from (54.19), Stokes’ representation (54.22) can also be put in the form
Next we consider the intrinsic conditions for the various special classes of vector fields.
First, from (53.40)2 a vector field v is solenoidal if and only if
dυ / ds = −υ div s (54.26)
where s,υ , and s are defined in the preceding section. Integrating (54.26) along any vector line
λ = λ ( s ) defined before, we obtain
υ ( s ) = υ0 exp ( −∫_{s0}^{s} div s ds ) (54.27)
where υ0 is the value of υ at any reference point λ ( s0 ) . Thus1 in a solenoidal vector field v
the vector magnitude is determined to within a constant factor along any vector line λ = λ ( s ) by
the vector line pattern of v .
Also from (53.40) a vector field v is complex-lamellar if and only if
υΩ s = 0 (54.28)
or, equivalently,
Ωs = 0 (54.29)
From (53.40)4 again, a vector field v is lamellar if and only if, in addition to (54.28) or
(54.29), we have also
dυ / db = 0, dυ / dn = υκ (54.30)
It should be noted that, when v is lamellar, it can be represented by (54.3), and thus the potential
surfaces defined by
f ( x ) = const (54.31)
are formed by the integral curves of n and b. From (54.30)1 along any b − line
υ ( b ) = υ ( b0 ) = const (54.32)
and from (54.30)2 along any n − line
υ ( n ) = υ ( n0 ) exp ( ∫_{n0}^{n} κ dn ) (54.33)
1 O. Bjørgum, see footnote 2 in Section 53.
404 Chap. 10 • VECTOR FIELDS
Finally, in the classical theory a vector field v is called a screw field or a Beltrami field if
v is parallel to its curl, namely
v × curl v = 0 (54.34)
or, equivalently,
curl v = Ω s v (54.35)
where Ω s is the abnormality of s, defined by (53.22)1. In some sense a screw field is just the
opposite of a complex-lamellar field, which is defined by the condition that the vector field is
orthogonal to its curl [cf. (54.8)]. Unfortunately, there is no known simple direct representation
for screw fields. We must refer the reader to the three long articles by Bjørgum and Godal (see
footnote 1 above and footnotes 4 and 5 below), which are devoted entirely to the study of these
fields.
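Although no simple direct representation is known, concrete screw fields are easy to exhibit. The sketch below is our own example, not taken from the articles cited: the field v = (sin z, cos z, 0) satisfies curl v = v, so (54.34) holds with constant abnormality 1.

```python
import numpy as np

# Sketch of a concrete screw (Beltrami) field -- our own example, not from
# the text: v = (sin z, cos z, 0) satisfies curl v = v, hence v x curl v = 0
# as in (54.34), with constant abnormality 1 (so v is even Trkalian).
def v(p):
    x, y, z = p
    return np.array([np.sin(z), np.cos(z), 0.0])

def curl(f, p, h=1e-6):
    # central-difference Jacobian, then the usual curl components
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2*h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

p = np.array([0.4, 1.1, -0.8])
c = curl(v, p)
err_parallel = np.linalg.norm(np.cross(v(p), c))  # (54.34): v x curl v = 0
err_beltrami = np.linalg.norm(c - v(p))           # curl v = (Omega_s) v, Omega_s = 1
```

Since the abnormality of this example is constant, it also illustrates the Trkalian fields discussed below.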
We can, of course, use some general representations for arbitrary smooth vector fields,
such as Monge’s representation or Stokes’ representation, to express a screw field first. Then we
impose some additional restrictions on the scalar fields involved in the said representations. For
example, if we use Monge’s representation (54.18) for v , then curl v is given by (54.19). In
this case v is a screw field if and only if the constant potential surfaces of k and f are vector
sheets of v , i.e.,
From (53.40)4 the intrinsic conditions for a screw field are easily found to be simply the
conditions (54.30). So the integrals (54.32) and (54.33) remain valid in this case, along the
b − lines and the n − lines. When the abnormality Ω s is nonvanishing, the integral of υ along
any s − line can be found in the following way: From (53.40) we have
Now from the basic conditions (54.35) for a screw field we obtain
0 = div ( Ω s v ) = Ω s div v + υ dΩ s / ds (54.38)
or, equivalently,
div v = − ( υ / Ω s ) dΩ s / ds (54.39)
dυ / ds = −υ ( div s + (1/Ω s ) dΩ s / ds ) (54.40)
or, equivalently,
d (υΩ s ) / ds = −υΩ s div s (54.41)
Integrating (54.41) along any s − line, we find
υ ( s ) = ( υ ( s0 ) Ω s ( s0 ) / Ω s ( s ) ) exp ( −∫_{s0}^{s} div s ds ) (54.42)
or, equivalently,
υ ( s ) / υ ( s0 ) = ( Ω s ( s0 ) / Ω s ( s ) ) exp ( −∫_{s0}^{s} div s ds ) (54.43)
since υ is nonvanishing. From (54.43), (54.33), and (54.32) we see that the magnitude of a
screw field, except for a constant factor, is determined along any s − line, n − line, or b − line by
the vector line pattern of the field.2
A screw field whose curl is also a screw field is called a Trkalian field. According to a
theorem of Nemenyi and Prim (1949), a screw field is a Trkalian field if and only if its
abnormality is a constant. Further, all Trkalian fields are solenoidal and successive curls of the
field are screw fields, all having the same abnormality.3
The proof of this theorem may be found also in Bjørgum’s article. Trkalian fields are
considered in detail in the subsequent articles of Bjørgum and Godal4 and Godal.5
2 O. Bjørgum, see footnote 2, Section 53.
3 P. Nemenyi and R. Prim, “Some Properties of Rotational Flow of a Perfect Gas,” Proc. Nat. Acad. Sci. 34, 119-124; Erratum 35, 116 (1949).
4 O. Bjørgum and T. Godal, “On Beltrami Vector Fields and Flows, Part II. The Case when Ω is Constant in Space,” Universitetet i Bergen, Arbok 1952, Naturvitenskapelig rekke Nr. 13.
5 T. Godal, “On Beltrami Vector Fields and Flows, Part III. Some Considerations on the General Case,” Universitetet i Bergen, Arbok 1957, Naturvitenskapelig rekke Nr. 12.
________________________________________________________________
Chapter 11
HYPERSURFACES

Section 55. Normal Vector, Tangent Plane, and Surface Metric

Locally, on a neighborhood N of any of its points, a hypersurface S in E can be represented by
x ∈ N ⊂ S ⇔ f (x ) = 0 (55.1)
where f is a smooth function having nonvanishing gradient. The unit vector field on N
n = grad f / |grad f| (55.2)
is called a unit normal of S , since from (55.1) and (48.15) for any smooth curve λ = λ (t ) in S
we have
0 = d f ( λ ( t ) ) / dt = ( grad f ) ⋅ λ̇ = |grad f| ( n ⋅ λ̇ ) (55.3)
The local representation (55.1) of S is not unique, of course. Indeed, if f satisfies (55.1),
so does –f, the induced unit normal of –f being –n. If the hypersurface S can be represented
globally by (55.1), i.e., there exists a smooth function whose domain contains the entire
hypersurface such that
x ∈ S ⇔ f (x) = 0 (55.4)
then S is called orientable. In this case S can be equipped with a smooth global unit normal
field n. (Of course, -n is also a smooth global unit normal field.) We say that S is oriented if a
particular smooth global unit normal field n has been selected and designated as the positive unit
normal of S . We shall consider oriented hypersurfaces only in this chapter.
408 Chap. 11 • HYPERSURFACES
Locally S can also be represented by a smooth parameterization
x = x( y1 ,…, y N −1 ) (55.5)
in such a way that ( y1 ,… , y N −1 , f ) forms a local coordinate system in S . If n is the positive unit
normal and f satisfies (55.2), then the parameters ( y1 ,… , y N −1 ) are said to form a positive local
coordinate system in S when ( y1 ,… , y N −1 , f ) is a positive local coordinate system in E . Since
the coordinate curves of y Γ , Γ = 1,… , N − 1, are contained in S the natural basis vectors
h Γ = ∂x / ∂y Γ , Γ = 1,… , N − 1 (55.6)
are tangent to S , so that
h Γ ⋅ n = 0, Γ = 1,…, N − 1 (55.7)
The reciprocal basis of {h Γ , n} has the form {h Γ , n} where h Γ are also tangent to S , namely
h Γ ⋅ n = 0, Γ = 1,… , N − 1 (55.8)
and
h Γ ⋅ h Δ = δ ΔΓ , Γ, Δ = 1,…, N − 1 (55.9)
v = υ Γh Γ + υ N n = υ Γh Γ + υ N n (55.10)
υ Γ = v ⋅ hΓ , υΓ = v ⋅ hΓ , υN = υ N = v ⋅ n (55.11)
Sec. 55 • Normal Vector, Tangent Plane, and Surface Metric 409
vS ≡ υ Γ h Γ = υ Γ h Γ = v − ( v ⋅ n ) n (55.12)
is called the tangential projection of v , and
v n ≡ υ N n = υ N n = v − vS (55.13)
the normal projection of v . Notice that in (55.10)-(55.12) the repeated Greek index is summed
from 1 to N-1. We say that v is a tangential vector field on S if v n = 0 and a normal vector
field if vS = 0 .
In terms of a coordinate system ( x i ) on E the representation (55.5) takes the component form
x i = x i ( y1 ,… , y N −1 ), i = 1,… , N (55.14)
h Γ = hΓi g i = ( ∂x i / ∂y Γ ) g i , Γ = 1,… , N − 1 (55.15)
while from (55.2) the unit normal n has the component form
n = ( ∂f / ∂x i ) g i / ( g ab ( ∂f / ∂x a )( ∂f / ∂x b ) )1/2 (55.16)
where, as usual, {g i } is the reciprocal basis of {g i } and {g ab } are the components of the Euclidean
metric, namely
g ab = g a ⋅ g b (55.17)
The component representation for the surface reciprocal basis {h Γ } is somewhat harder
to find. We find first the components of the surface metric.
aΓΔ ≡ h Γ ⋅ h Δ = gij ( ∂x i / ∂y Γ )( ∂x j / ∂y Δ ) (55.18)
where
gij = g i ⋅ g j (55.19)
The matrix [ a ΓΔ ] ≡ [ aΓΔ ]−1 exists because, from (55.18), [ aΓΔ ] is positive-definite and symmetric. In
fact, from (55.18) and (55.9),
a ΓΔ = h Γ ⋅ h Δ (55.21)
so that a ΓΔ is also a component of the surface metric. From (55.21) and (55.15) we then have
h Γ = a ΓΔ ( ∂x i / ∂y Δ ) g i , Γ = 1,… , N − 1 (55.22)
At each point x ∈ S the components aΓΔ ( x ) and a ΓΔ ( x ) defined by (55.18) and (55.21)
are those of an inner product on S x relative to the surface coordinate system ( y Γ ) , the inner
product being the one induced by that of V since S x is a subspace of V . In other words,
if u and v are tangent to S at x , say
u = u Γh Γ ( x ) , v = υ Δ h Δ ( x ) (55.23)
then
u ⋅ v = aΓΔ ( x )u Γυ Δ (55.24)
This inner product gives rise to the usual operations of raising and lowering of indices for tangent
vectors of S . Thus (55.23) 1 is equivalent to
u = aΓΔ ( x )u Γ h Δ ( x ) = uΔ h Δ ( x ) (55.25)
Obviously we can also extend the operations to tensor fields on S having nonzero components
in the product basis of the surface bases {h Γ } and {h Γ } only. Such a tensor field A may be
called a tangential tensor field of S and has the representation
A = AΓ1 ... Γr h Γ1 ⊗ ⋯ ⊗ h Γr
(55.26)
= AΓ1 Γ2 ... Γr h Γ1 ⊗ h Γ2 ⊗ ⋯ ⊗ h Γr , etc.
Then
There is a fundamental difference between the surface metric a on S and the Euclidean
metric g on E however. In the Euclidean space E there exist coordinate systems in which the
components of g are constant. Indeed, if the coordinate system is a rectangular Cartesian one,
then gij is δ ij at all points of the domain of the coordinate system. On the hypersurface S ,
generally, there need not be any coordinate system in which the components aΓΔ or a ΓΔ are
constant unless S happens to be a hyperplane. As we shall see in a later section, the departure
of a from g in this regard can be characterized by the curvature of S .
Another important difference between S and E is the fact that in S the tangent planes at
different points generally are different (N-1)-dimensional subspaces of V . Hence a vector in V
may be tangent to S at one point but not at another point. For this reason there is no canonical
parallelism which connects the tangent planes of S at distinct points. As a result, the notions of
gradient or covariant derivative of a tangential vector or tensor field on S must be carefully
defined, as we shall do in the next section.
The notions of Lie derivative and exterior derivative introduced in Sections 49 and 51,
however, can be readily defined for tangential fields of S . We consider first the Lie derivative.
Let v be a smooth tangent field defined on a domain in S . Then as before we say that a
smooth curve λ = λ (t ) in the domain of v is an integral curve if
dλ (t ) / dt = v( λ (t )) (55.28)
d λ Γ (t ) / dt = υ Γ ( λ (t )) (55.30)
Hence integral curves exist for any smooth tangential vector field v , and they generate a flow, and
hence a parallelism, along any integral curve. By the same argument as in Section 49, we define
the Lie derivative of a smooth tangential vector field u relative to v by the limit (49.14), except
that now ρt and Pt are the flow and the parallelism in S . Following exactly the same derivation
as before, we then obtain
L v u = ( ( ∂u Γ / ∂y Δ ) υ Δ − ( ∂υ Γ / ∂y Δ ) u Δ ) h Γ (55.31)
defined by (49.41) with ρt and Pt as just explained, and the component representation for L v A is
A surface r-form A can be written
A = AΓ1 ... Γr h Γ1 ⊗ ⋯ ⊗ h Γr
= Σ_{Γ1 <⋯<Γr} AΓ1 ... Γr h Γ1 ∧ ⋯ ∧ h Γr (55.33)
= (1/ r !) AΓ1 ... Γr h Γ1 ∧ ⋯ ∧ h Γr
and its surface exterior derivative is
dA = Σ_{Γ1 <⋯<Γr} Σ_{Δ=1}^{N−1} ( ∂AΓ1…Γr / ∂y Δ ) h Δ ∧ h Γ1 ∧ ⋯ ∧ h Γr
= (1/ r !) ( ∂AΓ1…Γr / ∂y Δ ) h Δ ∧ h Γ1 ∧ ⋯ ∧ h Γr (55.34)
= (1/( r !( r + 1)!)) δ^{ΔΓ1…Γr}_{Σ1…Σr+1} ( ∂AΓ1…Γr / ∂y Δ ) h Σ1 ∧ ⋯ ∧ h Σr+1
which generalizes (51.5). However, since the surface covariant derivative ∇A has not yet been
defined, we cannot write down the equation that generalizes (51.6) to the surface exterior
derivative. But we shall achieve this generalization in the next section. Other than this
exception, all results of Sections 49-52 can now be generalized and restated in an obvious way
for tangential fields on the hypersurface. In fact, those results are valid for differentiable
manifolds in general, so that they can be applied to Euclidean manifolds as well as to
hypersurfaces therein.
Exercises
55.1 For each x ∈ S define L x : V → V by
L x v = vS
for all v ∈V . Show that L x is an orthogonal projection whose image space is S x and
whose kernel is the one-dimensional subspace generated by n at x .
55.2 Show that
L x = h Γ ( x ) ⊗ h Γ (x ) = aΓΔ ( x )h Γ ( x ) ⊗ h Δ (x )
= a ΓΔ ( x ) ( ∂x i / ∂y Γ )( ∂x j / ∂y Δ ) gi (x ) ⊗ g j (x )
Thus L x is the linear transformation naturally isomorphic to the surface metric tensor at
x.
55.3 Show that
I = L x + n ⊗ n
and
g ij ( x ) = a ΓΔ ( x ) ( ∂x i / ∂y Γ )( ∂x j / ∂y Δ ) + n i ( x ) n j ( x )
55.4 Show that
L x g j ( x ) = g jl ( x ) ( ∂x l / ∂y Σ ) h Σ (x )
and
L x g j ( x ) = ( ∂x j / ∂y Σ ) h Σ (x )
55.5 Compute the components aΓΔ for (a) the spherical surface defined by
x1 = c sin y1 cos y 2
x 2 = c sin y1 sin y 2
x 3 = c cos y1
and (b) the cylindrical surface defined by
x1 = c cos y1 , x 2 = c sin y1 , x 3 = y 2
where c is a constant.
55.7 Compute the components aΓΔ for the surface defined by
x1 = b cos y1 sin y 2
x 2 = b sin y1 sin y 2
x 3 = c cos y1
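A quick numerical sketch of the computation asked for in part (a) of Exercise 55.5, our own illustration; the expected closed form a = diag(c², c² sin² y¹) is the classical answer, quoted here as an assumption for the check.

```python
import numpy as np

# Numerical sketch of (55.18), a_GD = h_G . h_D, for the spherical surface of
# Exercise 55.5(a).  The expected closed form a = diag(c^2, c^2 sin^2 y1) is
# classical, stated here as an assumption for the check.
c = 2.0

def x(y):
    y1, y2 = y
    return c * np.array([np.sin(y1)*np.cos(y2), np.sin(y1)*np.sin(y2), np.cos(y1)])

def surface_metric(y, h=1e-6):
    # columns of H are the natural basis vectors h_G = dx/dy^G of (55.6)
    H = np.array([(x(y + h*e) - x(y - h*e)) / (2*h) for e in np.eye(2)]).T
    return H.T @ H          # a_GD = h_G . h_D

y = np.array([0.9, 0.5])
a = surface_metric(y)
expected = np.diag([c**2, (c*np.sin(y[0]))**2])
```

The same routine applies verbatim to the cylinder of part (b) by swapping the parameterization.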
Section 56. Surface Covariant Derivatives
∇A = AΓ1…Γr ,Δ h Γ1 ⊗ ⊗ h Γr ⊗ h Δ (56.1)
where
AΓ1…Γr ,Δ = ∂AΓ1…Γr / ∂y Δ + AΣΓ2…Γr {Γ1/ΣΔ} + ⋯ + AΓ1…Γr −1Σ {Γr/ΣΔ} (56.2)
and
{Ω/ΓΔ} = (1/2) a ΩΣ ( ∂aΣΓ / ∂y Δ + ∂aΣΔ / ∂y Γ − ∂aΓΔ / ∂y Σ ) (56.3)
The quantities {Ω/ΓΔ} are the surface Christoffel symbols. They obey the symmetry condition
{Ω/ΓΔ} = {Ω/ΔΓ} (56.4)
We would like to have available a formula like (47.26) which would characterize the surface
Christoffel symbols as components of ∂h Δ / ∂y Γ . However, we have no assurance that
∂h Δ / ∂y Γ is a tangential vector field. In fact, as we shall see in Section 58, ∂h Δ / ∂y Γ is not
generally a tangential vector field. However, given the surface Christoffel symbols, we can
formally write
Dh Δ / Dy Γ = {Ω/ΓΔ} h Ω (56.5)
With this definition, we can combine (56.2) with (56.1) and obtain
Sec. 56 • Surface Covariant Derivatives 417
∇A = ( ∂AΓ1…Γr / ∂y Δ ) h Γ1 ⊗ ⋯ ⊗ h Γr ⊗ h Δ
+ AΓ1…Γr ( ( Dh Γ1 / Dy Δ ) ⊗ ⋯ ⊗ h Γr + ⋯ + h Γ1 ⊗ ⋯ ⊗ ( Dh Γr / Dy Δ ) ) ⊗ h Δ (56.6)
where the formal operation D / Dy Δ obeys the product rule and acts on the scalar components as the partial derivative,
DAΓ1…Γr / Dy Δ = ∂AΓ1…Γr / ∂y Δ (56.7)
Then (56.6) can be written compactly as
∇A = ( DA / Dy Δ ) ⊗ h Δ (56.8)
The components AΓ1…Γr ,Δ of ∇A represent the surface covariant derivative. If the mixed
components of A are used, say with
AΓ1…Γr Δ1…Δ s ,Σ = ∂AΓ1…Γr Δ1…Δ s / ∂y Σ
+ AΩΓ2…Γr Δ1…Δ s {Γ1/ΩΣ} + ⋯ + AΓ1…Γr −1Ω Δ1…Δ s {Γr/ΩΣ} (56.10)
− AΓ1…Γr ΩΔ2…Δ s {Ω/Δ1Σ} − ⋯ − AΓ1…Γr Δ1…Δ s−1Ω {Ω/Δ s Σ}
which generalizes (47.39). Equation (56.10) can be formally derived from (56.8) if we adopt the
definition
Dh Γ / Dy Σ = − {Γ/ΣΔ} h Δ (56.11)
The formulas (56.2) and (56.10) give the covariant derivative of a tangential tensor field
in component form. To show that AΓ1…Γr ,Δ and AΓ1…Γr Δ1…Δ s ,Σ are the components of some
tangential tensor fields, we must show that they obey the tensor transformation rule, e.g., if
( ȳ Γ ) is another surface coordinate system, then
ĀΓ1…Γr ,Δ = ( ∂ȳ Γ1 / ∂y Σ1 ) ⋯ ( ∂ȳ Γr / ∂y Σr ) ( ∂y Ω / ∂ȳ Δ ) AΣ1…Σr ,Ω (56.13)
where the ĀΓ1…Γr ,Δ are obtained from (56.2) when all the fields on the right-hand side are referred to
( ȳ Γ ).
To prove (56.13), we observe first that aΓΔ and a ΓΔ are components of the surface metric, so that
ā ΓΔ = ( ∂ȳ Γ / ∂y Σ )( ∂ȳ Δ / ∂y Θ ) a ΣΘ , āΓΔ = ( ∂y Σ / ∂ȳ Γ )( ∂y Ω / ∂ȳ Δ ) aΣΩ (56.14)
Then from (56.3) the Christoffel symbols relative to ( ȳ Γ ) are given by
{Σ/ΓΔ} = ( ∂ȳ Σ / ∂y Ω ) ( ∂2 y Ω / ∂ȳ Γ ∂ȳ Δ ) + ( ∂ȳ Σ / ∂y Ω )( ∂y Θ / ∂ȳ Γ )( ∂y Φ / ∂ȳ Δ ) {Ω/ΘΦ} (56.15)
which generalizes (47.35). Now using (56.15), (56.3), and the transformation rule
ĀΓ1…Γr = ( ∂ȳ Γ1 / ∂y Δ1 ) ⋯ ( ∂ȳ Γr / ∂y Δr ) AΔ1…Δ r (56.16)
we can verify that (56.13) is valid. Thus ∇A , as defined by (56.1), is indeed a tangential tensor
field.
The surface covariant derivative ∇A just defined is not the same as the covariant
derivative defined in Section 47. First, the domain of A here is contained in the hypersurface
S , which is not an open set in E . Second, the surface Christoffel symbols {Σ/ΓΔ} are generally
nonvanishing relative to any surface coordinate system ( y Γ ) unless S happens to be a
hyperplane on which the metric components a ΓΔ and aΓΔ are constant relative to certain
“Cartesian” coordinate systems. Other than these two points the formulas for the surface
covariant derivative are formally the same as those for the covariant derivative on E .
In view of (56.3) and (56.10), we see that the surface covariant derivative and the surface
exterior derivative are still related by a formula formally the same as (51.6), namely
for any surface r-form. Here K r +1 denotes the surface skew-symmetric operator. That is, in
terms of any surface coordinate system ( y Γ )
K p = (1/ p !) δ^{Γ1…Γp}_{Δ1…Δp} h Δ1 ⊗ ⋯ ⊗ h Δp ⊗ h Γ1 ⊗ ⋯ ⊗ h Γp (56.18)
for any integer p from 1 to N-1. Notice that from (56.18) and (56.10) the operator K p , like the
surface metric a , is a constant tangential tensor field relative to the surface covariant derivative,
i.e.,
δ^{Γ1…Γp}_{Δ1…Δp} ,Σ = 0 (56.19)
As a result, the skew-symmetric operator as well as the operators of raising and lowering of
indices for tangential fields both commute with the surface covariant derivative.
A ( t ) = AΓ1…Γr (t ) h Γ1 ( λ ( t ) ) ⊗ ⋯ ⊗ h Γr ( λ ( t ) ) (56.20)
which formally generalizes (48.6). By use of (56.15) and (56.16) we can show that the surface
covariant derivative D A / Dt along λ is a tangential tensor field on λ independent of the choice
of the surface coordinate system ( y Γ ) employed in the representation (56.21). Like the surface
covariant derivative of a field in S , the surface covariant derivative along a curve commutes
with the operations of raising and lowering of indices. Consequently, when the mixed
component representation is used, say with A given by
DA / Dt = ( dAΓ1…Γr Δ1…Δ s / dt + ( AΣΓ2…Γr Δ1…Δ s {Γ1/ΣΩ} + ⋯ + AΓ1…Γr −1Σ Δ1…Δ s {Γr/ΣΩ}
− AΓ1…Γr ΣΔ2…Δ s {Σ/Δ1Ω} − ⋯ − AΓ1…Γr Δ1…Δ s−1Σ {Σ/Δ s Ω} ) dλΩ / dt ) h Γ1 ⊗ ⋯ ⊗ h Γr ⊗ h Δ1 ⊗ ⋯ ⊗ h Δ s (56.23)
As before, if A is a tangential tensor field and λ is a curve in the domain of A , then the
restriction of A on λ is a tensor of the form (56.20) or (56.22). In this case the covariant
derivative of A along λ is given by
which generalizes (48.15). The component form of (56.24) is formally the same as (48.14). A
special case of (56.24) is equation (56.5), which is equivalent to
Dh Γ / Dy Δ = [ ∇h Γ ] h Δ (56.25)
where
∇h Γ = {Σ/ΓΩ} h Σ ⊗ h Ω (56.26)
If a tangential vector field v on λ satisfies
Dv / Dt = 0 (56.27)
then v may be called a constant field or a parallel field on λ . From (56.21), v is a parallel field
if and only if its components satisfy the equations of parallel transport:
dυ Γ / dt + υ Δ {Γ/ΔΣ} ( dλΣ / dt ) = 0, Γ = 1,…, N − 1 (56.28)
Since dλΣ / dt and {Γ/ΔΣ} ( λ ( t ) ) are smooth functions of t , it follows from a theorem in ordinary
differential equations that (56.28) possesses a unique solution
υ Δ = υ Δ (t ) , Δ = 1,… , N − 1 (56.29)
defined by
ρ0,t ( v ( 0 ) ) = v ( t ) (56.32)
for all parallel fields v on λ . Naturally, we call ρ0,t the parallel transport along λ induced by
the surface covariant derivative.
The parallel transport ρ0,t preserves the surface metric in the sense that
ρ0,t ( u ( 0 ) ) ⋅ ρ0,t ( v ( 0 ) ) = u ( 0 ) ⋅ v ( 0 ) (56.33)
for all u ( 0 ) , v ( 0 ) in S λ (0) but generally ρ0,t does not coincide with the Euclidean parallel
transport on E through the translation space V . In fact since S λ (0) and S λ (t ) need not be the
same subspace in V , it is not always possible to compare ρ0,t with the Euclidean parallelism. As
we shall prove later, the parallel transport ρ0,t depends not only on the end points λ ( 0 ) and
λ ( t ) but also on the particular curve joining the two points. When the same two end points
λ ( 0 ) and λ ( t ) are joined by another curve μ in S , generally the parallel transport along μ from
λ ( 0 ) to λ ( t ) need not coincide with that along λ .
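This path dependence can be made concrete. The sketch below, our own illustration, integrates the transport equations (56.28) on the unit sphere once around a latitude circle; the Christoffel symbols {1/22} = −sin y¹ cos y¹ and {2/12} = {2/21} = cot y¹ are the classical values for this surface, quoted rather than derived here. The surface metric of the transported vector is preserved, but the vector returns rotated by 2π cos θ, so the transport depends on the loop.

```python
import numpy as np

# Sketch of (56.28) on the unit sphere, (y1, y2) = (colatitude, longitude),
# transporting a vector around the latitude circle y1 = theta.  The sphere's
# Christoffel symbols are assumed classical values, not derived here.
theta = 1.1

def rhs(v):
    v1, v2 = v
    # along the curve lambda1 = theta, lambda2 = t: dl1/dt = 0, dl2/dt = 1
    return np.array([np.sin(theta)*np.cos(theta)*v2, -v1/np.tan(theta)])

def transport(v, T, n=4000):
    h = T/n
    for _ in range(n):                      # classical RK4 integration
        k1 = rhs(v); k2 = rhs(v + h/2*k1)
        k3 = rhs(v + h/2*k2); k4 = rhs(v + h*k3)
        v = v + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return v

sq_norm = lambda v: v[0]**2 + (np.sin(theta)*v[1])**2   # a_GD v^G v^D
v0 = np.array([1.0, 0.0])
vT = transport(v0, 2*np.pi)                 # once around the latitude circle
hol = 2*np.pi*np.cos(theta)                 # rotation picked up by the loop
```

The closed-loop rotation vanishes only at the equator (θ = π/2), where the latitude circle is itself a geodesic.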
Dv / Dt = lim_{Δt →0} ( v ( t + Δt ) − ρt ,t +Δt ( v ( t ) ) ) / Δt (56.34)
v ( t + Δt ) = ( υ Γ ( t ) + ( dυ Γ ( t ) / dt ) Δt ) h Γ ( λ ( t + Δt ) ) + o ( Δt ) (56.35)
and
ρt ,t +Δt ( v ( t ) ) = ( υ Γ ( t ) − υ Δ ( t ) {Γ/ΔΣ} ( λ ( t ) ) ( dλΣ / dt ) Δt ) h Γ ( λ ( t + Δt ) ) + o ( Δt ) (56.36)
so that
lim_{Δt →0} ( v ( t + Δt ) − ρt ,t +Δt ( v ( t ) ) ) / Δt = ( dυ Γ / dt + υ Δ {Γ/ΔΣ} ( dλΣ / dt ) ) h Γ (56.37)
which is consistent with the previous formula (56.21) for the surface covariant derivative
Dv Dt of v along λ .
DA / Dt = 0 (56.38)
Then the equations of parallel transport along λ for tensor fields of the forms (56.20) are
If we denote the induced parallel transport by P0,t , or, more generally, by Pt ,t ′ , then as before we
have
DA / Dt = lim_{Δt →0} ( A ( t + Δt ) − Pt ,t +Δt ( A ( t ) ) ) / Δt (56.40)
The coordinate-free definitions (56.34) and (56.40) demonstrate clearly the main
difference between the surface covariant derivative on S and the ordinary covariant derivative
on the Euclidean manifold E . Specifically, in the former case the parallelism used to compute
the difference quotient is the path-dependent surface parallelism, while in the latter case the
parallelism is simply the Euclidean parallelism, which is path-independent between any pair of
points in E .
Exercises
56.1 Use (55.18), (56.5), and (56.4) and formally derive the formula (56.3).
56.2 Use (55.9) and (56.5) and the assumption that Dh Γ / Dy Σ is a surface vector field and
formally derive the formula (56.11).
56.3 Show that
h Ω ⋅ ( ∂h Δ / ∂y Γ ) = h Ω ⋅ ( Dh Δ / Dy Γ ) = {Ω/ΓΔ}
and
h Δ ⋅ ( ∂h Γ / ∂y Σ ) = h Δ ⋅ ( Dh Γ / Dy Σ ) = − {Γ/ΣΔ}
These formulas show that the tangential parts of ∂h Δ / ∂y Γ and ∂h Γ / ∂y Σ coincide with
Dh Δ / Dy Γ and Dh Γ / Dy Σ , respectively.
56.4 Show that
L ( ∂h Δ / ∂y Γ ) = Dh Δ / Dy Γ and L ( ∂h Γ / ∂y Σ ) = Dh Γ / Dy Σ
where L is the field whose value at each x ∈ S is the projection L x defined in Exercise
55.1.
56.5 Show that
∇A = L∗ ( ∂A / ∂y Δ ) ⊗ h Δ
56.6 Adopt the result of Exercise 56.5 and derive (56.10). This result shows that the above
formula for ∇A can be adopted as the definition of the surface gradient. One advantage
of this approach is that one does not need to introduce the formal operation DA / Dy Δ . If
needed, it can simply be defined to be L∗ ( ∂A / ∂y Δ ) .
56.7 Another advantage of adopting the result of Exercise 56.5 as a definition is that it can be
used to compute the surface gradient of fields on S which are not tangential. For
example, each g k can be restricted to a field on S but it does not have a component
representation of the form (56.9). As an illustration of this concept, show that
∇g k = {j/kl} g jq ( ∂x q / ∂y Σ )( ∂x l / ∂y Δ ) h Σ ⊗ h Δ
and
∇L = ( ∂2 x k / ∂y Γ ∂y Δ − {Σ/ΔΓ} ( ∂x k / ∂y Σ ) + {k/jl} ( ∂x l / ∂y Δ )( ∂x j / ∂y Γ ) ) g ks ( ∂x s / ∂y Φ ) h Φ ⊗ h Δ ⊗ h Γ = 0
56.8 Compute the surface Christoffel symbols for the surfaces defined in Exercises 55.5 and
55.7.
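As a numerical cross-check for this exercise, the following sketch (our own) evaluates the formula (56.3) for the sphere of Exercise 55.5(a); the metric a = diag(c², c² sin² y¹) and the expected closed-form symbols are classical values, quoted as assumptions.

```python
import numpy as np

# Cross-check of the Christoffel formula (56.3),
#   {O/GD} = (1/2) a^{OS} (da_SG/dy^D + da_SD/dy^G - da_GD/dy^S),
# for the sphere of Exercise 55.5(a), metric a = diag(c^2, c^2 sin^2 y1).
c = 2.0

def a(y):
    return np.diag([c**2, (c*np.sin(y[0]))**2])

def christoffel(y, h=1e-5):
    da = np.zeros((2, 2, 2))          # da[S] = derivative of [a_GD] w.r.t. y^S
    for S in range(2):
        e = np.zeros(2); e[S] = h
        da[S] = (a(y + e) - a(y - e)) / (2*h)
    ainv = np.linalg.inv(a(y))
    sym = np.zeros((2, 2, 2))         # sym[O, G, D] = {O/GD}
    for O in range(2):
        for G in range(2):
            for D in range(2):
                sym[O, G, D] = 0.5*sum(ainv[O, S]*(da[D, S, G] + da[G, S, D]
                                                   - da[S, G, D]) for S in range(2))
    return sym

y = np.array([0.8, 0.3])
sym = christoffel(y)
# classical values: {1/22} = -sin y1 cos y1, {2/12} = {2/21} = cot y1
```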
56.9 If A is a tensor field on E, then we can, of course, calculate its spatial gradient, grad A.
Also, we can restrict its domain to S and compute the surface gradient ∇A . Show that
∇A = L∗ ( grad A )
56.10 Show that
∂ ( det [ aΔΓ ] )1/2 / ∂y Σ = ( det [ aΔΓ ] )1/2 {Φ/ΦΣ}
Sec. 57 • Surface Geodesics, Exponential Map 425
In Section 48 we pointed out that a straight line λ in E with homogeneous parameter can
be characterized by equation (48.11), which means that the tangent vector λ̇ of λ is constant
along λ , namely
dλ̇ / dt = 0 (57.1)
The same condition for a curve in S defines a surface geodesic. In terms of any surface
coordinate system ( y Γ ) the equations of geodesics are
d 2λΓ / dt 2 + {Γ/ΣΔ} ( dλΣ / dt )( dλΔ / dt ) = 0, Γ = 1,… , N − 1 (57.2)
Since (57.2) is a system of second-order differential equations with smooth coefficients, at each
point
x0 = λ ( 0 ) ∈ S (57.3)
and in each tangential direction
v0 = λ̇ ( 0 ) ∈ S x0 (57.4)
there exists a unique geodesic λ = λ ( t ) satisfying the initial conditions (57.3) and (57.4).
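This initial-value problem is easy to solve numerically. The sketch below (our own illustration) integrates (57.2) on the unit sphere; the Christoffel symbols {1/22} = −sin y¹ cos y¹ and {2/12} = cot y¹ are the classical values for this surface, assumed rather than derived, and the result is compared with the exact great circle in E³.

```python
import numpy as np

# Numerical sketch of the geodesic equations (57.2) on the unit sphere,
# (y1, y2) = (colatitude, longitude).  The solution should trace the great
# circle cos(st) x0 + sin(st) V0/s in E^3 (s = initial speed).
def x(y):
    return np.array([np.sin(y[0])*np.cos(y[1]),
                     np.sin(y[0])*np.sin(y[1]), np.cos(y[0])])

def rhs(s):
    y1, y2, u1, u2 = s
    return np.array([u1, u2,
                     np.sin(y1)*np.cos(y1)*u2**2,   # = -{1/22} (u2)^2
                     -2.0*u1*u2/np.tan(y1)])        # = -2 {2/12} u1 u2

def rk4(s, T, n=2000):
    h = T/n
    for _ in range(n):
        k1 = rhs(s); k2 = rhs(s + h/2*k1)
        k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
        s = s + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return s

y0, u0 = np.array([1.0, 0.2]), np.array([0.3, 0.4])
end = rk4(np.concatenate([y0, u0]), 1.0)

# compare with the exact great circle in the ambient Euclidean space
h = 1e-6
H = np.array([(x(y0 + h*e) - x(y0 - h*e))/(2*h) for e in np.eye(2)]).T
V0 = H @ u0                                 # initial velocity in E^3
speed = np.linalg.norm(V0)
err = np.linalg.norm(x(end[:2]) - (np.cos(speed)*x(y0) + np.sin(speed)*V0/speed))
```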
In classical calculus of variations it is known that the geodesic equations (57.2) represent
the Euler-Lagrange equations for the arc length integral
s ( λ ) = ∫_{t0}^{t1} ( λ̇ ( t ) ⋅ λ̇ ( t ) )1/2 dt
= ∫_{t0}^{t1} ( aΓΔ ( λ ( t ) ) ( dλΓ / dt )( dλΔ / dt ) )1/2 dt (57.5)
with the fixed end points
x = λ ( t0 ) , y = λ ( t1 ) (57.6)
on S . If we consider all smooth curves λ in S joining the fixed end points x and y , then the
ones satisfying the geodesic equations are curves whose arc length integral is an extremum in the
class of variations of curves.
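The extremal property can be seen numerically. In the sketch below (our own one-parameter variation, not from the text) the curves λα(t) = (π/2 + α sin πt, t), 0 ≤ t ≤ 1, join two fixed points on the unit sphere, and the arc length (57.5) is smallest at α = 0, i.e. on the equatorial geodesic.

```python
import numpy as np

# Arc length (57.5) for the family lam_a(t) = (pi/2 + a sin(pi t), t) on the
# unit sphere, whose metric is a = diag(1, sin^2 y1) (a classical value,
# assumed here).  The end points are the same for every a.
def arc_length(a, n=20000):
    t = (np.arange(n) + 0.5)/n            # midpoint rule on [0, 1]
    y1 = np.pi/2 + a*np.sin(np.pi*t)
    dy1 = a*np.pi*np.cos(np.pi*t)
    # integrand (a_GD dl^G/dt dl^D/dt)^(1/2)
    return np.sqrt(dy1**2 + np.sin(y1)**2).mean()

lengths = {a: arc_length(a) for a in (-0.2, -0.1, 0.0, 0.1, 0.2)}
```

For this family the minimum at α = 0 reflects the fact that a short geodesic arc minimizes length, in line with the discussion of N x0 below.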
To prove this, we observe first that the arc length integral is invariant under any change
of parameter along the curve. This condition is only natural, since the arc length is a geometric
property of the point set that constitutes the curve, independent of how that point set is
parameterized. From (57.1) it is obvious that the tangent vector of a geodesic must have constant
norm, since
D ( λ̇ ⋅ λ̇ ) / Dt = 2 ( Dλ̇ / Dt ) ⋅ λ̇ = 0 ⋅ λ̇ = 0 (57.7)
Consequently, we seek only those extremal curves for (57.5) on which the integrand on the right-
hand of (57.5) is constant.
The integrand of (57.5) is the Lagrangian
L ( λΓ , λ̇Γ ) ≡ ( aΓΔ ( λΩ ) λ̇Γ λ̇Δ )1/2 (57.8)
and the Euler-Lagrange equations are
d ( ∂L / ∂λ̇Δ ) / dt − ∂L / ∂λΔ = 0, Δ = 1,… , N − 1 (57.9)
From (57.8)
∂L / ∂λ̇Δ = (1/2 L ) ( aΓΔ λ̇Γ + aΔΓ λ̇Γ ) = (1/ L ) aΓΔ λ̇Γ (57.10)
and hence
d ( ∂L / ∂λ̇Δ ) / dt = (1/ L ) ( ( ∂aΓΔ / ∂y Ω ) λ̇Ω λ̇Γ + aΓΔ λ̈Γ ) (57.11)
where we have used the condition that L is constant on those curves. From (57.8) we have also
∂L / ∂λΔ = (1/2 L ) ( ∂aΓΩ / ∂y Δ ) λ̇Γ λ̇Ω (57.12)
Combining (57.11) and (57.12), we see that the Euler-Lagrange equations have the explicit form
aΓΔ λ̈Γ + (1/2) ( ∂aΓΔ / ∂y Ω + ∂aΩΔ / ∂y Γ − ∂aΓΩ / ∂y Δ ) λ̇Γ λ̇Ω = 0 (57.13)
where we have used the symmetry of the product λ̇Γ λ̇Ω with respect to Γ and Ω . Since
L ≠ 0 (otherwise the extremal curve is just one point), (57.13) is equivalent to
λ̈Θ + (1/2) a ΘΔ ( ∂aΓΔ / ∂y Ω + ∂aΩΔ / ∂y Γ − ∂aΓΩ / ∂y Δ ) λ̇Γ λ̇Ω = 0 (57.14)
which is identical to (57.2) upon using the formula (56.3) for the surface Christoffel symbols.
The preceding result in the calculus of variations shows only that a geodesic is a curve
whose arc length is an extremum in a class of variations of curves. In terms of a fixed surface
coordinate system ( y Γ ) we can characterize a typical variation of the curve λ by N − 1 smooth
functions η Γ ( t ) such that
η Γ ( t0 ) = η Γ ( t1 ) = 0, Γ = 1,… , N − 1 (57.15)
and the varied curves λα by
λαΓ ( t ) = λΓ ( t ) + α ηΓ ( t ) , Γ = 1,… , N − 1 (57.16)
From (57.15) the curves λ α satisfy the same end conditions (57.6) as the curve λ , and
λ α reduces to λ when α = 0. The Euler-Lagrange equations express simply the condition that
ds ( λα ) / dα |α =0 = 0 (57.17)
for all choices of η Γ satisfying (57.15). We note that (57.17) allows α = 0 to be a local minimum,
a local maximum, or a local minimax point in the class of variations. In order for the arc length
integral to be a local minimum, additional conditions must be imposed on the geodesic.
from a change of parameter) which lies entirely in N x0 . That geodesic has the minimum arc
length among all curves in S , not just curves in N x0 , joining x to y .
Now, as we have remarked earlier in this section, at any point x 0 ∈ S , and in each
tangential direction v 0 ∈ S x0 there exists a unique geodesic λ satisfying the conditions (57.3) and
(57.4). For definiteness, we denote this particular geodesic by λ v0 . Since λ v0 is smooth,
when the norm of v 0 is sufficiently small the geodesic λ v0 ( t ) , t ∈ [0,1] , is contained entirely in
the surface neighborhood N x0 of x 0 . Hence there exists a one-to-one mapping
exp x0 : Bx0 → N x0 (57.18)
defined by
exp x0 ( v ) ≡ λ v (1) (57.19)
for all v belonging to a certain small neighborhood Bx0 of 0 in S x0 . We call this injection the
exponential map at x 0 .
depends smoothly on the components υ Γ of v relative to ( y Γ ) , i.e., there exist smooth functions
where (υ Γ ) can be regarded as a Cartesian coordinate system on Bx0 induced by the basis
{h Γ ( x 0 )} , namely,
v = υ Γh Γ ( x 0 ) (57.22)
In the sense of (57.21) we say that the exponential map expx 0 is smooth.
Smoothness of the exponential map can be visualized also from a slightly different point
of view. Since N x0 is contained in S , which is contained in E , expx0 can be regarded also as a
mapping from a domain Bx0 in a Euclidean space S x0 to the Euclidean space E , namely
Now the smoothness of expx0 has the usual meaning as defined in Section 43. Since the surface
coordinates ( y Γ ) can be extended to a local coordinate system ( y Γ , f ) as explained in Section
55, smoothness in the sense of (57.21) is consistent with that of (57.23).
As explained in Section 43, the smooth mapping exp x0 has a gradient at any point v in the
domain Bx0 . In particular, at v = 0 , grad ( exp x0 ) ( 0 ) exists and corresponds to a linear map
from S x0 to V . We claim that the image of this linear map is precisely the tangent plane S x0 , considered as a
subspace of V ; moreover, grad ( exp x0 ) ( 0 ) is simply the identity map on S x0 . This fact is more
or less obvious, since by definition the linear map grad ( exp x0 ) ( 0 ) is characterized by the
condition that
[ grad ( exp x0 ) ( 0 ) ] ( v̇ ( 0 ) ) = d exp x0 ( v ( t ) ) / dt |t =0 (57.25)
for any curve v = v ( t ) such that v ( 0 ) = 0 . In particular, for the straight lines
v ( t ) = vt (57.26)
so that
d exp x0 ( vt ) / dt |t =0 = v (57.28)
Hence
[ grad ( exp x0 ) ( 0 ) ] ( v ) = v, v ∈Bx0 (57.29)
But since grad ( exp x0 ) ( 0 ) is a linear map, (57.29) implies that the same holds for all v ∈ S x0 ,
and thus
grad ( exp x0 ) ( 0 ) = idS x0 (57.30)
It should be noted, however, that the condition (57.30) is valid only at the origin 0 of S x0 ; the
same condition generally is not true at any other point v ∈Bx0 .
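The identity (57.30) can be observed numerically. The sketch below (our own illustration) uses the classical closed form of the exponential map on the unit sphere, exp_x0(v) = cos|v| x0 + sin|v| v/|v|, which is assumed rather than derived in the text, and differentiates it at v = 0 by central differences.

```python
import numpy as np

# Numerical sketch of (57.30) on the unit sphere: the gradient of the
# exponential map at v = 0 acts as the identity on tangent vectors.  The
# closed form of exp_x0 below is the classical one for the sphere (assumed).
x0 = np.array([0.0, 0.0, 1.0])

def exp_map(v):
    r = np.linalg.norm(v)
    if r < 1e-14:
        return x0.copy()
    return np.cos(r)*x0 + np.sin(r)*v/r

w = np.array([0.3, -0.5, 0.0])            # tangent to the sphere at x0
eps = 1e-6
grad_w = (exp_map(eps*w) - exp_map(-eps*w)) / (2*eps)
err = np.linalg.norm(grad_w - w)          # (57.30): grad(exp_x0)(0) = identity
```

Repeating the difference quotient at a nonzero base point v would not reproduce the identity, in line with the remark above.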
Now suppose that {e Γ , Γ = 1,… , N − 1} is any basis of S x0 . Then as usual it gives rise to a
Cartesian coordinate system ( w Γ ) on S x0 by the component representation
w = w Γe Γ (57.31)
for any w ∈ S x0 . Since exp x0 is a smooth one-to-one map, it carries the Cartesian system
( w Γ ) on Bx0 to a surface coordinate system ( z Γ ) on the surface neighborhood N x0 of x 0 in S .
For definiteness, this coordinate system ( z Γ ) is called a canonical surface coordinate system at
x 0 . Thus a point z ∈ N x0 has the coordinates ( z Γ ) if and only if
z = expx0 ( z Γe Γ ) (57.32)
Relative to ( z Γ ) a geodesic λ passing through x 0 has the coordinate representation
λ Γ ( t ) = υ Γt , Γ = 1,… , N − 1 (57.33)
for some constants υ Γ . From (57.32) the geodesic λ is simply the one denoted earlier by λ v ,
where
v = υ Γe Γ (57.34)
Substituting (57.33) into the geodesic equations (57.2), we obtain
{Σ/ΓΔ} υ Γ υ Δ = 0 (57.35)
where the Christoffel symbols are evaluated at any point of the curve λ . In particular, at t = 0
we get
{Σ/ΓΔ} ( x 0 ) υ Γ υ Δ = 0 (57.36)
Now since υ Γ is arbitrary, by use of the symmetry condition (56.4) we conclude that
{Σ/ΓΔ} ( x 0 ) = 0 (57.37)
∂2 y Δ / ∂z Γ ∂z Ω |x0 = 0 (57.38)
y Δ = eΓΔ z Γ (57.39)
for some nonsingular matrix [ eΓΔ ] when ( y Δ ) is also a canonical surface coordinate system at x 0 ,
the [ eΓΔ ] being simply the transformation matrix connecting the basis {e Γ } for ( z Γ ) and the basis
for ( y Γ ) .
∇A ( x 0 ) = ( ∂AΓ1…Γr ( x 0 ) / ∂z Δ ) e Γ1 ⊗ ⋯ ⊗ e Γr ⊗ e Δ (57.40)
It should be noted, however, that (57.40) is valid at the point x 0 only, since a geodesic
coordinate system at x 0 generally does not remain a geodesic coordinate system at any
neighboring point of x 0 . Notice also that the basis {e Γ } in S x0 is the natural basis of ( z Δ ) at x 0 ,
this fact being a direct consequence of the condition (57.30).
aΓΔ ( x 0 ) = δ ΓΔ (57.41)
Then we do not even have to distinguish the contravariant and the covariant components of a
tangential tensor at x 0 . In classical differential geometry such a canonical surface coordinate
system is called a normal coordinate system or a Riemannian coordinate system at the surface
point under consideration.
Sec. 58 • Surface Curvature, I 433
We consider first the covariant derivative dn / dt of the unit normal field of S along any
curve λ in S . Since n is a unit vector field and since d / dt preserves the spatial metric, we
have
( dn / dt ) ⋅ n = 0 (58.1)
Thus dn / dt is a tangential vector field. We claim that there exists a symmetric second-order
tangential tensor field B on S such that
dn dt = − Bλ (58.2)
for all surface curves λ . The proof of (58.2) is more or less obvious. From (55.2), the unit
normal n on S is parallel to the gradient of a certain smooth function f , and locally S can be
characterized by (55.1). By normalizing the function f to a function
$w(x) = \dfrac{f(x)}{\left|\operatorname{grad} f(x)\right|}$   (58.3)
we find
n ( x ) = grad w ( x ) , x ∈S (58.4)
Now for the smooth vector field grad w we can apply the usual formula (48.15) to compute the
spatial covariant derivative ( d dt )( grad w ) along any smooth curve in E . In particular, along
the surface curve $\lambda$ under consideration we have the formula (58.2), with $B = -\operatorname{grad}(\operatorname{grad} w)$ restricted to $S$. Since second gradients of smooth functions are symmetric, $B$ is symmetric. The fact that $B$ is a tangential tensor has been remarked after (58.1).
From (58.2) the surface tensor B characterizes the spatial change of the unit normal n of S .
Hence in some sense B is a measurement of the curvature of S in E . In classical differential
geometry, B is called the second fundamental form of S , the surface metric a defined by
(55.18) being the first fundamental form. In component form relative to a surface coordinate
system $(y^\Gamma)$, $B$ can be represented as usual by

$B = b_{\Gamma\Delta}\,h^\Gamma\otimes h^\Delta$   (58.6)

From (58.2) the components of $B$ are those of $\partial n/\partial y^\Gamma$ taken along the $y^\Gamma$-curve in $S$, namely

$\dfrac{\partial n}{\partial y^\Delta} = -b_{\Gamma\Delta}\,h^\Gamma = -b^\Gamma_{\ \Delta}\,h_\Gamma$   (58.7)
Next we consider the covariant derivatives dh Γ dt and dh Γ dt of the surface natural basis
vectors h Γ and h Γ along any curve λ in S . From (56.21) and (56.23) we have
$\dfrac{Dh_\Gamma}{Dt} = \left\{{}^{\Delta}_{\Gamma\Sigma}\right\}\dot\lambda^\Sigma\,h_\Delta$   (58.8)

and

$\dfrac{Dh^\Gamma}{Dt} = -\left\{{}^{\Gamma}_{\Delta\Sigma}\right\}\dot\lambda^\Sigma\,h^\Delta$   (58.9)

Similarly to (58.2), we can write

$\dfrac{dh^\Gamma}{dt} = -C^\Gamma\dot\lambda$   (58.10)

where $C^\Gamma$ is a symmetric spatial tensor field on $S$. In particular, when $\lambda$ is the coordinate curve of $y^\Delta$, (58.10) reduces to

$\dfrac{\partial h^\Gamma}{\partial y^\Delta} = -C^\Gamma h_\Delta$   (58.11)

Taking the inner product of (58.11) with $h_\Sigma$ and using the symmetry of $C^\Gamma$, we obtain

$-h_\Sigma\cdot\dfrac{\partial h^\Gamma}{\partial y^\Delta} = -h_\Delta\cdot\dfrac{\partial h^\Gamma}{\partial y^\Sigma} = C^\Gamma(h_\Delta, h_\Sigma)$   (58.12)

We claim that the quantity given by this equation is simply the surface Christoffel symbol $\left\{{}^{\Gamma}_{\Delta\Sigma}\right\}$:

$-h_\Sigma\cdot\dfrac{\partial h^\Gamma}{\partial y^\Delta} = \left\{{}^{\Gamma}_{\Delta\Sigma}\right\}$   (58.13)
Indeed,

$0 = \dfrac{\partial\delta^\Gamma_\Sigma}{\partial y^\Delta} = \dfrac{\partial(h_\Sigma\cdot h^\Gamma)}{\partial y^\Delta} = h_\Sigma\cdot\dfrac{\partial h^\Gamma}{\partial y^\Delta} + \dfrac{\partial h_\Sigma}{\partial y^\Delta}\cdot h^\Gamma$   (58.14)

so that (58.13) is equivalent to

$h^\Gamma\cdot\dfrac{\partial h_\Sigma}{\partial y^\Delta} = \left\{{}^{\Gamma}_{\Sigma\Delta}\right\}$   (58.15)

Now

$\dfrac{\partial a_{\Gamma\Delta}}{\partial y^\Sigma} = \dfrac{\partial(h_\Gamma\cdot h_\Delta)}{\partial y^\Sigma} = h_\Gamma\cdot\dfrac{\partial h_\Delta}{\partial y^\Sigma} + h_\Delta\cdot\dfrac{\partial h_\Gamma}{\partial y^\Sigma} = a_{\Gamma\Omega}\left(h^\Omega\cdot\dfrac{\partial h_\Delta}{\partial y^\Sigma}\right) + a_{\Delta\Omega}\left(h^\Omega\cdot\dfrac{\partial h_\Gamma}{\partial y^\Sigma}\right)$   (58.16)

and, since $\partial h_\Delta/\partial y^\Sigma = \partial h_\Sigma/\partial y^\Delta$,

$h^\Omega\cdot\dfrac{\partial h_\Delta}{\partial y^\Sigma} = h^\Omega\cdot\dfrac{\partial h_\Sigma}{\partial y^\Delta}$   (58.17)
Comparing (58.17) and (58.16) with (56.4) and (56.12), respectively, we see that (58.15) holds.
Similarly,

$0 = \dfrac{\partial(n\cdot h^\Gamma)}{\partial y^\Delta} = n\cdot\dfrac{\partial h^\Gamma}{\partial y^\Delta} + h^\Gamma\cdot\dfrac{\partial n}{\partial y^\Delta}$   (58.18)

which, with (58.7), implies

$n\cdot\dfrac{\partial h^\Gamma}{\partial y^\Delta} = b^\Gamma_{\ \Delta}$   (58.19)

The formulas (58.19) and (58.13) determine completely the spatial covariant derivative of $h^\Gamma$ along any $y^\Delta$-curve:

$\dfrac{\partial h^\Gamma}{\partial y^\Delta} = b^\Gamma_{\ \Delta}\,n - \left\{{}^{\Gamma}_{\Delta\Sigma}\right\}h^\Sigma = b^\Gamma_{\ \Delta}\,n + \dfrac{Dh^\Gamma}{Dy^\Delta}$   (58.20)

where we have used (58.9) for (58.20)₂. By exactly the same argument we have also

$\dfrac{\partial h_\Gamma}{\partial y^\Delta} = b_{\Gamma\Delta}\,n + \left\{{}^{\Sigma}_{\Gamma\Delta}\right\}h_\Sigma = b_{\Gamma\Delta}\,n + \dfrac{Dh_\Gamma}{Dy^\Delta}$   (58.21)
As we shall see, (58.20) and (58.21) are equivalent to the formula of Gauss in classical
differential geometry.
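The formulas above can be checked concretely on a simple surface. The following sketch is an assumed example, not from the text: it computes the first and second fundamental forms of a sphere symbolically, using $b_{\Gamma\Delta} = n\cdot\partial^2 x/\partial y^\Gamma\partial y^\Delta$ [cf. Exercise 58.1(c)].

```python
import sympy as sp

# Assumed example (not from the text): a sphere of radius R,
# parametrized by surface coordinates (y^1, y^2) = (theta, phi).
R = sp.symbols('R', positive=True)
th, ph = sp.symbols('theta phi', real=True)
x = sp.Matrix([R*sp.sin(th)*sp.cos(ph), R*sp.sin(th)*sp.sin(ph), R*sp.cos(th)])
y = [th, ph]

h = [x.diff(c) for c in y]      # natural basis h_Gamma = dx/dy^Gamma
n = x / R                       # outward unit normal of the sphere

# First and second fundamental forms:
# a_GD = h_G . h_D,  b_GD = n . d^2x / dy^G dy^D
a = sp.Matrix(2, 2, lambda i, j: h[i].dot(h[j]))
b = sp.Matrix(2, 2, lambda i, j: n.dot(x.diff(y[i], y[j])))
a, b = sp.simplify(a), sp.simplify(b)

print(a)                          # diag(R**2, R**2*sin(theta)**2)
print(sp.simplify(b + a / R))     # zero matrix: b = -a/R for the outward normal
```

For the outward normal the sphere satisfies $B = -a/R$, so every tangential direction is principal with the same curvature $-1/R$.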
The formulas (58.20), (58.21), and (58.7) determine completely the spatial covariant
derivatives of the bases {h Γ , n} and {h Γ , n} along any curve λ in S . Indeed, if the coordinates
of λ in ( y Γ ) are ( λ Γ ) , then we have
$\dfrac{dn}{dt} = -b_{\Gamma\Delta}\dot\lambda^\Delta\,h^\Gamma = -b^\Gamma_{\ \Delta}\dot\lambda^\Delta\,h_\Gamma$

$\dfrac{dh^\Gamma}{dt} = b^\Gamma_{\ \Delta}\dot\lambda^\Delta\,n - \left\{{}^{\Gamma}_{\Delta\Sigma}\right\}\dot\lambda^\Delta h^\Sigma = b^\Gamma_{\ \Delta}\dot\lambda^\Delta\,n + \dfrac{Dh^\Gamma}{Dt}$   (58.22)

$\dfrac{dh_\Gamma}{dt} = b_{\Gamma\Delta}\dot\lambda^\Delta\,n + \left\{{}^{\Sigma}_{\Gamma\Delta}\right\}\dot\lambda^\Delta h_\Sigma = b_{\Gamma\Delta}\dot\lambda^\Delta\,n + \dfrac{Dh_\Gamma}{Dt}$
From these representations we can compute the spatial covariant derivative of any vector or
tensor fields along λ when their components relative to {h Γ , n} or {h Γ , n} are given. For
example, if v is a vector field having the component form
$v(t) = \upsilon^n(t)\,n(\lambda(t)) + \upsilon^\Gamma(t)\,h_\Gamma(\lambda(t))$   (58.23)

along $\lambda$, then

$\dfrac{dv}{dt} = \left(\dot\upsilon^n + b_{\Gamma\Delta}\upsilon^\Gamma\dot\lambda^\Delta\right)n + \left(\dot\upsilon^\Gamma - \upsilon^n b^\Gamma_{\ \Delta}\dot\lambda^\Delta + \left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\upsilon^\Sigma\dot\lambda^\Delta\right)h_\Gamma$   (58.24)

In particular, if $v$ is tangential, then $\upsilon^n = 0$ and (58.24) reduces to

$\dfrac{dv}{dt} = b_{\Gamma\Delta}\upsilon^\Gamma\dot\lambda^\Delta\,n + \left(\dot\upsilon^\Gamma + \left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\upsilon^\Sigma\dot\lambda^\Delta\right)h_\Gamma = b_{\Gamma\Delta}\upsilon^\Gamma\dot\lambda^\Delta\,n + \dfrac{Dv}{Dt}$   (58.25)
where we have used (56.21). Equation (58.25) shows clearly the difference between d dt and
D Dt for any tangential field.
Applying (58.25) to the tangent $\dot\lambda$ of the curve itself, we obtain

$\dfrac{d\dot\lambda}{dt} = B(\dot\lambda,\dot\lambda)\,n + \dfrac{D\dot\lambda}{Dt}$   (58.26)

In particular, if $\lambda$ is a geodesic, then $D\dot\lambda/Dt = 0$ and

$d\dot\lambda/dt = B(\dot\lambda,\dot\lambda)\,n$   (58.27)
Comparing this result with the classical notions of curvature and principal normal of a curve (cf.
Section 53), we see that the surface normal is the principal normal of a surface curve λ if and
only if λ is a surface geodesic; further, in this case the curvature of λ is the quadratic form
B ( s, s ) of B in the direction of the unit tangent s of λ .
In general, if λ is not a surface geodesic but it is parameterized by the arc length, then
(58.26) reads
$\dfrac{ds}{ds} = B(s,s)\,n + \dfrac{Ds}{Ds}$   (58.28)
where s denotes the unit tangent of λ , as usual. We call the norms of the three vectors in
(58.28) the spatial curvature, the normal curvature, and the geodesic curvature of λ and denote
them by κ , κ n , and κ g , respectively, namely
$\kappa = \left\|\dfrac{ds}{ds}\right\|,\qquad \kappa_n = B(s,s),\qquad \kappa_g = \left\|\dfrac{Ds}{Ds}\right\|$   (58.29)

Since $n$ and $Ds/Ds$ are orthogonal, these curvatures are related by

$\kappa^2 = \kappa_n^2 + \kappa_g^2$   (58.30)

If $n_0$ denotes the principal normal of $\lambda$ and $n_g$ the unit vector in the direction of $Ds/Ds$, then (58.28) can be written

$\kappa\, n_0 = \kappa_n\, n + \kappa_g\, n_g$   (58.31)

or, equivalently,

$n_0 = (\kappa_n/\kappa)\,n + (\kappa_g/\kappa)\,n_g$   (58.32)
At this point we can define another kind of covariant derivative for tensor fields on S .
As we have remarked before, the tangent plane S x of S is a subspace of V . Hence a tangential
vector v can be regarded either as a surface vector in S x or as a special vector in V . In the
former sense it is natural to use the surface covariant derivative Dv Dt , while in the latter sense
we may consider the spatial covariant derivative dv dt along any smooth curve λ in S . For a
tangential tensor field $A$, such as the one represented by (56.20) or (56.22), we may choose to recognize certain indices as spatial indices and the remaining ones as surface indices. In this sense the surface metric $a$, for example, has the representations

$a = a_{\Gamma\Delta}\,h^\Gamma\otimes h^\Delta = h_\Delta\otimes h^\Delta = a^{\Gamma\Delta}\,h_\Gamma\otimes h_\Delta$   (58.33)

and we have

$0 = \dfrac{Da}{Dt} = \dfrac{D\left(h_\Delta\otimes h^\Delta\right)}{Dt} = h_\Delta\otimes\dfrac{Dh^\Delta}{Dt} + \dfrac{Dh_\Delta}{Dt}\otimes h^\Delta$   (58.34)
The tensor a , however, can be regarded also as the inclusion map A of S x in V , namely
$A: S_x \to V$   (58.35)

defined by

$Av = \left(h_\Delta\otimes h^\Delta\right)(v) = h_\Delta\left(h^\Delta\cdot v\right) = \upsilon^\Delta\, h_\Delta$   (58.36)
In (58.36) it is more natural to regard the first basis vector h Δ in the product basis h Δ ⊗ h Δ as a
spatial vector and the second basis vector h Δ as a surface vector. In fact, in classical differential
geometry the tensor A given by (58.35) is often denoted by
$A = h^i_\Delta\,g_i\otimes h^\Delta = \dfrac{\partial x^i}{\partial y^\Delta}\,g_i\otimes h^\Delta$   (58.37)
where ( x i ) is a spatial coordinate system whose natural basis is {g i } . When the indices are
recognized in this way, it is natural to define a total covariant derivative of A , denoted by
δ A δ t , by
$\dfrac{\delta A}{\delta t} = \dfrac{dh_\Delta}{dt}\otimes h^\Delta + h_\Delta\otimes\dfrac{Dh^\Delta}{Dt} = \dfrac{d}{dt}\left(\dfrac{\partial x^i}{\partial y^\Delta}\,g_i\right)\otimes h^\Delta + h_\Delta\otimes\dfrac{Dh^\Delta}{Dt}$   (58.38)
In classical differential geometry the indices are distinguished directly in the notation of
the components. Thus a tensor A represented by
$A(t) = A^{ik\ldots\Gamma\ldots}_{\ j\ldots\Delta\ldots}(t)\;g_i\otimes g^j\otimes g_k\otimes\cdots\otimes h_\Gamma\otimes h^\Delta$   (58.39)

has an obvious interpretation: the Latin indices $i,j,k,\ldots$ are designated as "spatial" and the Greek indices $\Gamma,\Delta,\ldots$ are "surface." So $\delta A/\delta t$ is defined by
$\dfrac{\delta A}{\delta t} = \left(\dfrac{d}{dt}A^{ik\ldots\Gamma\ldots}_{\ j\ldots\Delta\ldots}\right)g_i\otimes g^j\otimes g_k\otimes\cdots\otimes h_\Gamma\otimes h^\Delta$
$\qquad\qquad + A^{ik\ldots\Gamma\ldots}_{\ j\ldots\Delta\ldots}\left[\dfrac{d}{dt}\left(g_i\otimes g^j\otimes g_k\otimes\cdots\right)\otimes h_\Gamma\otimes h^\Delta + g_i\otimes g^j\otimes g_k\otimes\cdots\otimes\dfrac{D}{Dt}\left(h_\Gamma\otimes h^\Delta\right)\right]$   (58.40)

The derivative $dA^{ik\ldots\Gamma\ldots}_{\ j\ldots\Delta\ldots}/dt$ is the same as $DA^{ik\ldots\Gamma\ldots}_{\ j\ldots\Delta\ldots}/Dt$, of course, since for scalars there is but one kind of parallelism along any curve. In particular, we can compute explicitly the representation for $\delta A/\delta t$ as defined by (58.38):
$\dfrac{\delta A}{\delta t} = \dfrac{d}{dt}\left(\dfrac{\partial x^i}{\partial y^\Delta}\right)g_i\otimes h^\Delta + \dfrac{\partial x^i}{\partial y^\Delta}\left(\dfrac{dg_i}{dt}\otimes h^\Delta + g_i\otimes\dfrac{Dh^\Delta}{Dt}\right)$
$\qquad = \left(\dfrac{\partial^2 x^i}{\partial y^\Delta\partial y^\Gamma} + \dfrac{\partial x^j}{\partial y^\Delta}\dfrac{\partial x^k}{\partial y^\Gamma}\left\{{}^{i}_{jk}\right\} - \dfrac{\partial x^i}{\partial y^\Sigma}\left\{{}^{\Sigma}_{\Gamma\Delta}\right\}\right)\dot\lambda^\Gamma\, g_i\otimes h^\Delta$   (58.41)

In particular, taking $\lambda$ to be the $y^\Gamma$-coordinate curve, we get

$\dfrac{\delta A}{\delta y^\Gamma} = \left(\dfrac{\partial^2 x^i}{\partial y^\Delta\partial y^\Gamma} + \dfrac{\partial x^j}{\partial y^\Delta}\dfrac{\partial x^k}{\partial y^\Gamma}\left\{{}^{i}_{jk}\right\} - \dfrac{\partial x^i}{\partial y^\Sigma}\left\{{}^{\Sigma}_{\Gamma\Delta}\right\}\right)g_i\otimes h^\Delta$   (58.42)
Actually, the quantity on the right-hand side of (58.42) has a very simple representation. Since $\delta A/\delta y^\Gamma$ can be expressed also by (58.38)₁, namely

$\dfrac{\delta A}{\delta y^\Gamma} = \dfrac{\partial h_\Delta}{\partial y^\Gamma}\otimes h^\Delta + h_\Delta\otimes\dfrac{Dh^\Delta}{Dy^\Gamma}$   (58.43)

we obtain from (58.21)

$\dfrac{\delta A}{\delta y^\Gamma} = b_{\Delta\Gamma}\,n\otimes h^\Delta + \dfrac{Dh_\Delta}{Dy^\Gamma}\otimes h^\Delta + h_\Delta\otimes\dfrac{Dh^\Delta}{Dy^\Gamma} = b_{\Delta\Gamma}\,n\otimes h^\Delta + \dfrac{D}{Dy^\Gamma}\left(h_\Delta\otimes h^\Delta\right) = b_{\Delta\Gamma}\,n\otimes h^\Delta$   (58.44)
where we have used (58.34). The formula (58.44)₃ is known as Gauss' formula in classical differential geometry. It is often written in component form

$h^i_{\Delta;\Gamma} = b_{\Delta\Gamma}\,n^i$   (58.45)

where the semicolon in the subscript on the left-hand side denotes the total covariant derivative.
Exercises
58.1 Show that

(a) $\ b_{\Gamma\Delta} = -\dfrac12\left(\dfrac{\partial n}{\partial y^\Delta}\cdot\dfrac{\partial x}{\partial y^\Gamma} + \dfrac{\partial n}{\partial y^\Gamma}\cdot\dfrac{\partial x}{\partial y^\Delta}\right)$

and

(c) $\ b_{\Delta\Gamma} = n\cdot\dfrac{\partial^2 x}{\partial y^\Delta\partial y^\Gamma}$
58.2 Compute the quantities bΔΓ for the surfaces defined in Exercises 55.5 and 55.7.
$A = A^{j_1\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_s}\;g_{j_1}\otimes\cdots\otimes g_{j_r}\otimes h^{\Gamma_1}\otimes\cdots\otimes h^{\Gamma_s}$
show that
$\dfrac{\delta A}{\delta y^\Delta} = A^{j_1\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_s;\Delta}\;g_{j_1}\otimes\cdots\otimes g_{j_r}\otimes h^{\Gamma_1}\otimes\cdots\otimes h^{\Gamma_s}$

where

$A^{j_1\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_s;\Delta} = \dfrac{\partial A^{j_1\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_s}}{\partial y^\Delta} + \sum_{\beta=1}^{r}\left\{{}^{j_\beta}_{\,lk}\right\}A^{j_1\ldots j_{\beta-1}\,l\,j_{\beta+1}\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_s}\,\dfrac{\partial x^k}{\partial y^\Delta} - \sum_{\beta=1}^{s}\left\{{}^{\Lambda}_{\Gamma_\beta\Delta}\right\}A^{j_1\ldots j_r}_{\ \ \Gamma_1\ldots\Gamma_{\beta-1}\Lambda\Gamma_{\beta+1}\ldots\Gamma_s}$
Similar formulas can be derived for other types of mixed tensor fields defined on S .
$n^i_{\ ;\Gamma} = -b_\Gamma^{\ \Delta}\,\dfrac{\partial x^i}{\partial y^\Delta}$
Section 59. Surface Curvature, II. The Riemann-Christoffel Tensor and the Ricci
Identities
In the preceding section we have considered the curvature of S by examining the change
of the unit normal n of S in E . This approach is natural for a hypersurface, since the metric on
S is induced by that of E . The results of this approach, however, are not entirely intrinsic to
S , since they depend not only on the surface metric but also on the particular imbedding of S
into E . In this section, we shall consider curvature from a more intrinsic point of view. We
seek results which depend only on the surface metric. Our basic idea is that curvature on S
corresponds to the departure of the Levi-Civita parallelism on S from a Euclidean parallelism.
Relative to a rectangular Cartesian coordinate system $(x^i)$ in $E$, the gradient of a smooth spatial vector field $v$ has the representation

$\operatorname{grad} v = \upsilon^i_{\ ,j}\,g_i\otimes g^j = \dfrac{\partial\upsilon^i}{\partial x^j}\,g_i\otimes g^j$   (59.1)

Hence if we take the second covariant derivatives, then in the same coordinate system we have

$\operatorname{grad}(\operatorname{grad} v) = \upsilon^i_{\ ,jk}\,g_i\otimes g^j\otimes g^k = \dfrac{\partial^2\upsilon^i}{\partial x^j\partial x^k}\,g_i\otimes g^j\otimes g^k$   (59.2)
In particular, the second covariant derivatives satisfy the same symmetry condition as that of the
ordinary partial derivatives:
υ i , jk = υ i ,kj (59.3)
Note that the proof of (59.3) depends crucially on the existence of a Cartesian coordinate
system relative to which the Christoffel symbols of the Euclidean parallelism vanish identically.
For the hypersurface S in general, the geodesic coordinate system at a reference point is the
closest counterpart of a Cartesian system. However, in a geodesic coordinate system the surface
Christoffel symbols vanish at the reference point only. As a result the surface covariant
derivative at the reference point x 0 still has a simple representation like (59.1)
$\operatorname{grad} v(x_0) = \upsilon^\Gamma_{\ ,\Delta}(x_0)\,e_\Gamma(x_0)\otimes e^\Delta(x_0) = \left.\dfrac{\partial\upsilon^\Gamma}{\partial z^\Delta}\right|_{x_0} e_\Gamma(x_0)\otimes e^\Delta(x_0)$   (59.4)

[cf. (57.40)]. But generally, in the same coordinate system, the representation (59.4) does not hold at any neighboring point of $x_0$. This situation has been explained in detail in Section 57.
In particular, there is no counterpart for the representation (59.2) 2 on S . Indeed the surface
second covariant derivatives generally fail to satisfy the symmetry condition (59.3) valid for the
spatial covariant derivatives.
To see this fact we choose an arbitrary surface coordinate system ( y Γ ) with natural basis
{h Γ } and {h Γ } on S as before. From (56.10) the surface covariant derivative of a tangential
vector field
v = υ Γh Γ (59.5)
is given by
$\upsilon^\Gamma_{\ ,\Delta} = \dfrac{\partial\upsilon^\Gamma}{\partial y^\Delta} + \upsilon^\Sigma\left\{{}^{\Gamma}_{\Sigma\Delta}\right\}$   (59.6)

and the second surface covariant derivative by

$\upsilon^\Gamma_{\ ,\Delta\Omega} = \dfrac{\partial}{\partial y^\Omega}\left(\dfrac{\partial\upsilon^\Gamma}{\partial y^\Delta} + \upsilon^\Sigma\left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\right) + \left(\dfrac{\partial\upsilon^\Phi}{\partial y^\Delta} + \upsilon^\Sigma\left\{{}^{\Phi}_{\Sigma\Delta}\right\}\right)\left\{{}^{\Gamma}_{\Phi\Omega}\right\} - \left(\dfrac{\partial\upsilon^\Gamma}{\partial y^\Psi} + \upsilon^\Sigma\left\{{}^{\Gamma}_{\Sigma\Psi}\right\}\right)\left\{{}^{\Psi}_{\Delta\Omega}\right\}$
$\qquad = \dfrac{\partial^2\upsilon^\Gamma}{\partial y^\Delta\partial y^\Omega} + \dfrac{\partial\upsilon^\Sigma}{\partial y^\Omega}\left\{{}^{\Gamma}_{\Sigma\Delta}\right\} + \dfrac{\partial\upsilon^\Phi}{\partial y^\Delta}\left\{{}^{\Gamma}_{\Phi\Omega}\right\} - \dfrac{\partial\upsilon^\Gamma}{\partial y^\Psi}\left\{{}^{\Psi}_{\Delta\Omega}\right\} + \upsilon^\Sigma\left(\left\{{}^{\Phi}_{\Sigma\Delta}\right\}\left\{{}^{\Gamma}_{\Phi\Omega}\right\} - \left\{{}^{\Gamma}_{\Sigma\Psi}\right\}\left\{{}^{\Psi}_{\Delta\Omega}\right\} + \dfrac{\partial}{\partial y^\Omega}\left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\right)$   (59.7)
In particular, even if the surface coordinate system reduces to a geodesic coordinate system
( y Γ ) at a reference point x0 , (59.7) can only be simplified to
$\upsilon^\Gamma_{\ ,\Delta\Omega}(x_0) = \left.\dfrac{\partial^2\upsilon^\Gamma}{\partial z^\Delta\partial z^\Omega}\right|_{x_0} + \upsilon^\Sigma(x_0)\left.\dfrac{\partial}{\partial z^\Omega}\left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\right|_{x_0}$   (59.8)

which contains a term in addition to the spatial case (59.2)₂. Since that additional term generally need not be symmetric in the pair $(\Delta,\Omega)$, we have shown that the surface second covariant derivatives do not necessarily obey the symmetry condition (59.3).
From (59.7), if we subtract $\upsilon^\Gamma_{\ ,\Omega\Delta}$ from $\upsilon^\Gamma_{\ ,\Delta\Omega}$, then the result is the following commutation rule:

$\upsilon^\Gamma_{\ ,\Delta\Omega} - \upsilon^\Gamma_{\ ,\Omega\Delta} = -\upsilon^\Sigma R^\Gamma_{\ \Sigma\Delta\Omega}$   (59.9)

where

$R^\Gamma_{\ \Sigma\Delta\Omega} \equiv \dfrac{\partial}{\partial y^\Delta}\left\{{}^{\Gamma}_{\Sigma\Omega}\right\} - \dfrac{\partial}{\partial y^\Omega}\left\{{}^{\Gamma}_{\Sigma\Delta}\right\} + \left\{{}^{\Phi}_{\Sigma\Omega}\right\}\left\{{}^{\Gamma}_{\Phi\Delta}\right\} - \left\{{}^{\Phi}_{\Sigma\Delta}\right\}\left\{{}^{\Gamma}_{\Phi\Omega}\right\}$   (59.10)
Since the commutation rule is valid for all tangential vector fields $v$, (59.9) implies that under a change of surface coordinate system the fields $R^\Gamma_{\ \Sigma\Delta\Omega}$ satisfy the transformation rule for the components of a fourth-order tangential tensor. Thus we define

$R \equiv R^\Gamma_{\ \Sigma\Delta\Omega}\; h_\Gamma\otimes h^\Sigma\otimes h^\Delta\otimes h^\Omega$   (59.11)

and we call the tensor $R$ the Riemann-Christoffel tensor of $S$. Notice that $R$ depends only on the surface metric $a$, since its components are determined completely by the surface Christoffel symbols. In particular, in a geodesic coordinate system $(z^\Gamma)$ at $x_0$, (59.10) simplifies to

$R^\Gamma_{\ \Sigma\Delta\Omega}(x_0) = \left.\dfrac{\partial}{\partial z^\Delta}\left\{{}^{\Gamma}_{\Sigma\Omega}\right\}\right|_{x_0} - \left.\dfrac{\partial}{\partial z^\Omega}\left\{{}^{\Gamma}_{\Sigma\Delta}\right\}\right|_{x_0}$   (59.12)
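The components (59.10) can be built directly from the surface Christoffel symbols with a computer algebra system. The following hedged sketch (the metrics used are assumed examples, not from the text) recovers $R^1_{\ 212} = \sin^2\theta$ for the unit sphere, while a flat metric gives a vanishing tensor.

```python
import sympy as sp
from itertools import product

# Assumed example: surface metric of the unit sphere in coordinates (theta, phi).
th, ph = sp.symbols('theta phi', real=True)
y = [th, ph]
a = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ainv = a.inv()
N = 2

def christoffel(G, S, D):
    # {G; S D} = (1/2) a^{G W} (a_{W S, D} + a_{W D, S} - a_{S D, W})
    return sp.Rational(1, 2) * sum(
        ainv[G, W] * (a[W, S].diff(y[D]) + a[W, D].diff(y[S]) - a[S, D].diff(y[W]))
        for W in range(N))

def riemann(G, S, D, W):
    # The components R^G_{S D W} as in (59.10)
    val = christoffel(G, S, W).diff(y[D]) - christoffel(G, S, D).diff(y[W])
    val += sum(christoffel(P, S, W) * christoffel(G, P, D)
               - christoffel(P, S, D) * christoffel(G, P, W) for P in range(N))
    return sp.simplify(val)

print(riemann(0, 1, 0, 1))   # sin(theta)**2 for the unit sphere

# A flat metric (polar coordinates in the plane) makes every component vanish:
a = sp.Matrix([[1, 0], [0, th**2]]); ainv = a.inv()
assert all(riemann(i, j, k, l) == 0 for i, j, k, l in product(range(N), repeat=4))
```

The vanishing of every component in the flat case is exactly the situation addressed by the lemma that follows.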
Lemma. The Riemann-Christoffel tensor $R$ vanishes identically near $x_0$ if and only if the system of first-order partial differential equations

$0 = \upsilon^\Gamma_{\ ,\Delta} = \dfrac{\partial\upsilon^\Gamma}{\partial y^\Delta} + \upsilon^\Omega\left\{{}^{\Gamma}_{\Omega\Delta}\right\},\qquad \Gamma = 1,\ldots,N-1$   (59.13)

possesses $N-1$ linearly independent solutions $\{v_\Sigma,\ \Sigma = 1,\ldots,N-1\}$ near $x_0$.
Proof. Necessity. Let {v Σ , Σ = 1,… , N − 1} be linearly independent and satisfy (59.13). Then we
can obtain the surface Christoffel symbols from
$0 = \dfrac{\partial\upsilon^\Gamma_\Sigma}{\partial y^\Delta} + \upsilon^\Omega_\Sigma\left\{{}^{\Gamma}_{\Omega\Delta}\right\}$   (59.15)

and

$\left\{{}^{\Gamma}_{\Omega\Delta}\right\} = -u^\Sigma_\Omega\,\dfrac{\partial\upsilon^\Gamma_\Sigma}{\partial y^\Delta}$   (59.16)
where ⎡⎣ uΩΣ ⎤⎦ denotes the inverse matrix of ⎡⎣υΣΩ ⎤⎦ . Substituting (59.16) into (59.10), we obtain
directly
R Γ ΣΔΩ = 0 (59.17)
Sufficiency. It follows from the Frobenius theorem that the conditions of integrability for
the system of first-order partial differential equations
$\dfrac{\partial\upsilon^\Gamma}{\partial y^\Delta} = -\upsilon^\Omega\left\{{}^{\Gamma}_{\Omega\Delta}\right\}$   (59.18)

are

$\dfrac{\partial}{\partial y^\Sigma}\left(\upsilon^\Omega\left\{{}^{\Gamma}_{\Omega\Delta}\right\}\right) - \dfrac{\partial}{\partial y^\Delta}\left(\upsilon^\Omega\left\{{}^{\Gamma}_{\Omega\Sigma}\right\}\right) = 0,\qquad \Gamma = 1,\ldots,N-1$   (59.19)
If we expand the partial derivatives in (59.19) and use (59.18), then (59.17) follows as a result of
(59.10). Thus the lemma is proved.
Now we return to the proof of the necessity part of the theorem. From (59.16) and the
condition (56.4) we see that the basis {v Σ } obeys the rule
$\dfrac{\partial\upsilon^\Gamma_\Sigma}{\partial y^\Delta}\,\upsilon^\Delta_\Omega - \dfrac{\partial\upsilon^\Gamma_\Omega}{\partial y^\Delta}\,\upsilon^\Delta_\Sigma = 0$   (59.20)
that is, the Lie brackets vanish:

$[v_\Sigma, v_\Omega] = 0,\qquad \Sigma,\Omega = 1,\ldots,N-1$   (59.21)
As a result there exists a coordinate system ( z Γ ) whose natural basis coincides with {v Σ } . In
particular, relative to ( z Γ ) (59.16) reduces to
$\left\{{}^{\Gamma}_{\Omega\Delta}\right\} = -\delta^\Sigma_\Omega\,\dfrac{\partial\delta^\Gamma_\Sigma}{\partial z^\Delta} = 0$   (59.22)
It should be noted that tangential vector fields satisfying (59.13) are parallel fields
relative to the Levi-Civita parallelism, since from (56.24) we have the representation
$Dv(\lambda(t))/Dt = \upsilon^\Gamma_{\ ,\Delta}\,\dot\lambda^\Delta\,h_\Gamma$   (59.23)
along any curve λ. Consequently, the Levi-Civita parallelism becomes locally path-
independent. For definiteness, we call this a locally Euclidean parallelism or a flat parallelism.
Then the preceding theorem asserts that the Levi-Civita parallelism on S is locally Euclidean if
and only if the Riemann-Christoffel tensor R based on the surface metric vanishes.
The commutation rule (59.9) is valid for tangential vector fields only. However, we can
easily generalize that rule to the following Ricci identities:
for each prescribed initial value at any reference point x 0 ; further, a solution of (59.25)
corresponds to a parallel tangential tensor field on the developable hypersurface $S$.
Section 60. Surface Curvature, III. The Equations of Gauss and Codazzi
In the preceding two sections we have considered the curvature of S from both the
extrinsic point of view and the intrinsic point of view. In this section we shall unite the results of
these two approaches.
Our starting point is the general representation (58.25) applied to the y Δ -coordinate
curve:
$\dfrac{\partial v}{\partial y^\Delta} = b_{\Gamma\Delta}\upsilon^\Gamma\,n + \dfrac{Dv}{Dy^\Delta}$   (60.1)

for an arbitrary tangential vector field $v$. This representation gives the natural decomposition of the spatial vector field $\partial v/\partial y^\Delta$ on $S$ into a normal projection $b_{\Gamma\Delta}\upsilon^\Gamma n$ and a tangential projection $Dv/Dy^\Delta$. Applying the spatial covariant derivative $\partial/\partial y^\Sigma$ along the $y^\Sigma$-coordinate curve to (60.1), we obtain
$\dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{\partial v}{\partial y^\Delta}\right) = \dfrac{\partial}{\partial y^\Sigma}\left(b_{\Gamma\Delta}\upsilon^\Gamma n\right) + \dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{Dv}{Dy^\Delta}\right) = \dfrac{\partial}{\partial y^\Sigma}\left(b_{\Gamma\Delta}\upsilon^\Gamma n\right) + b_{\Gamma\Sigma}\upsilon^\Gamma_{\ ,\Delta}\,n + \dfrac{D}{Dy^\Sigma}\left(\dfrac{Dv}{Dy^\Delta}\right)$   (60.2)
where we have applied (60.1) in (60.2)2 to the tangential vector field Dv / Dy Δ . Now, since the
spatial parallelism is Euclidean, the left-hand side of (60.2) is symmetric in the pair $(\Sigma,\Delta)$,
namely
$\dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{\partial v}{\partial y^\Delta}\right) = \dfrac{\partial}{\partial y^\Delta}\left(\dfrac{\partial v}{\partial y^\Sigma}\right)$   (60.3)

Substituting (60.2) and its counterpart with $\Sigma$ and $\Delta$ interchanged into (60.3), we obtain

$\dfrac{D}{Dy^\Delta}\left(\dfrac{Dv}{Dy^\Sigma}\right) - \dfrac{D}{Dy^\Sigma}\left(\dfrac{Dv}{Dy^\Delta}\right) = \dfrac{\partial}{\partial y^\Sigma}\left(b_{\Gamma\Delta}\upsilon^\Gamma n\right) - \dfrac{\partial}{\partial y^\Delta}\left(b_{\Gamma\Sigma}\upsilon^\Gamma n\right) + \left(b_{\Gamma\Sigma}\upsilon^\Gamma_{\ ,\Delta} - b_{\Gamma\Delta}\upsilon^\Gamma_{\ ,\Sigma}\right)n$   (60.4)
This is the basic formula from which we can extract the relation between the Riemann-
Christoffel tensor R and the second fundamental form B.
Specifically, the left-hand side of (60.4) is a tangential vector having the component
form
$\dfrac{D}{Dy^\Delta}\left(\dfrac{Dv}{Dy^\Sigma}\right) - \dfrac{D}{Dy^\Sigma}\left(\dfrac{Dv}{Dy^\Delta}\right) = \left(\upsilon^\Gamma_{\ ,\Sigma\Delta} - \upsilon^\Gamma_{\ ,\Delta\Sigma}\right)h_\Gamma = -\upsilon^\Omega R^\Gamma_{\ \Omega\Sigma\Delta}\,h_\Gamma$   (60.5)
as required by (59.9), while the right-hand side is a vector having the normal projection
$\left(\dfrac{\partial(b_{\Gamma\Delta}\upsilon^\Gamma)}{\partial y^\Sigma} - \dfrac{\partial(b_{\Gamma\Sigma}\upsilon^\Gamma)}{\partial y^\Delta} + b_{\Gamma\Sigma}\upsilon^\Gamma_{\ ,\Delta} - b_{\Gamma\Delta}\upsilon^\Gamma_{\ ,\Sigma}\right)n$   (60.6)
where we have used Weingarten’s formula (58.7) to compute the covariant derivatives
$\partial n/\partial y^\Sigma$ and $\partial n/\partial y^\Delta$. Consequently, (60.5) implies that

$\dfrac{\partial(b_{\Gamma\Delta}\upsilon^\Gamma)}{\partial y^\Sigma} - \dfrac{\partial(b_{\Gamma\Sigma}\upsilon^\Gamma)}{\partial y^\Delta} + b_{\Gamma\Sigma}\upsilon^\Gamma_{\ ,\Delta} - b_{\Gamma\Delta}\upsilon^\Gamma_{\ ,\Sigma} = 0$   (60.8)

and that, on comparing tangential projections,

$R^\Omega_{\ \Gamma\Sigma\Delta} = b_{\Gamma\Delta}\,b^\Omega_{\ \Sigma} - b_{\Gamma\Sigma}\,b^\Omega_{\ \Delta}$   (60.9)

which are the equations of Gauss. Expanding the covariant derivatives in (60.8), that condition is equivalent to
$\dfrac{\partial b_{\Gamma\Delta}}{\partial y^\Sigma} - b_{\Phi\Delta}\left\{{}^{\Phi}_{\Gamma\Sigma}\right\} - \dfrac{\partial b_{\Gamma\Sigma}}{\partial y^\Delta} + b_{\Phi\Sigma}\left\{{}^{\Phi}_{\Gamma\Delta}\right\} = 0$   (60.12)
By use of the symmetry condition (56.4) we can rewrite (60.12) in the more elegant form

$b_{\Gamma\Delta,\Sigma} = b_{\Gamma\Sigma,\Delta}$   (60.13)

where the comma denotes the surface covariant derivative; these are the equations of Codazzi.
The importance of the equations of Gauss and Codazzi lies not only in their uniting the second fundamental form with the Riemann-Christoffel tensor for any hypersurface $S$ in $E$, but also in their being the conditions of integrability, as asserted by the following theorem.

Theorem 60.1. Suppose that $a_{\Gamma\Delta}$ and $b_{\Gamma\Delta}$ are any prescribed smooth functions of $(y^\Omega)$ such that $[a_{\Gamma\Delta}]$ is positive-definite and symmetric, $[b_{\Gamma\Delta}]$ is symmetric, and together $[a_{\Gamma\Delta}]$ and $[b_{\Gamma\Delta}]$ satisfy the equations of Gauss and Codazzi. Then locally there exists a hypersurface $S$ with representation
representation
xi = xi ( y Ω ) (60.14)
on which the prescribed aΓΔ and bΓΔ are the first and second fundamental forms.
Proof. We consider first the system of first-order partial differential equations

$\partial h^i_\Gamma/\partial y^\Delta = b_{\Gamma\Delta}\,n^i + \left\{{}^{\Sigma}_{\Gamma\Delta}\right\}h^i_\Sigma,\qquad \partial n^i/\partial y^\Gamma = -b_\Gamma^{\ \Delta}\,h^i_\Delta$   (60.15)
We claim that this system can be integrated and that the solution preserves the algebraic conditions

$\delta_{ij}h^i_\Gamma h^j_\Delta = a_{\Gamma\Delta},\qquad \delta_{ij}h^i_\Gamma n^j = 0,\qquad \delta_{ij}n^i n^j = 1$   (60.16)

In particular, if (60.16) are imposed at any one reference point, then they hold identically on a neighborhood of the reference point.
The fact that the solution of (60.15) preserves the conditions (60.16) is more or less
obvious. We put
$\dfrac{\partial f_{\Gamma\Delta}}{\partial y^\Sigma} = b_{\Gamma\Sigma}f_\Delta + b_{\Delta\Sigma}f_\Gamma,\qquad \dfrac{\partial f_\Gamma}{\partial y^\Sigma} = -b_\Sigma^{\ \Delta}f_{\Gamma\Delta} + \left\{{}^{\Delta}_{\Gamma\Sigma}\right\}f_\Delta + b_{\Gamma\Sigma}f,\qquad \dfrac{\partial f}{\partial y^\Sigma} = -2b_\Sigma^{\ \Delta}f_\Delta$   (60.19)
along the coordinate curve of any y Σ . From (60.18) and (60.19) we see that f ΓΔ , f Γ , and f
must vanish identically.
Now the system (60.15) is integrable, since by use of the equations of Gauss and Codazzi we
have
$\dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{\partial h^i_\Gamma}{\partial y^\Delta}\right) - \dfrac{\partial}{\partial y^\Delta}\left(\dfrac{\partial h^i_\Gamma}{\partial y^\Sigma}\right) = \left(R^\Omega_{\ \Gamma\Sigma\Delta} + b_{\Gamma\Sigma}b^\Omega_{\ \Delta} - b_{\Gamma\Delta}b^\Omega_{\ \Sigma}\right)h^i_\Omega + \left(b_{\Gamma\Delta,\Sigma} - b_{\Gamma\Sigma,\Delta}\right)n^i = 0$   (60.20)
and
$\dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{\partial n^i}{\partial y^\Gamma}\right) - \dfrac{\partial}{\partial y^\Gamma}\left(\dfrac{\partial n^i}{\partial y^\Sigma}\right) = \left(b_{\Sigma\ ,\Gamma}^{\ \Omega} - b_{\Gamma\ ,\Sigma}^{\ \Omega}\right)h^i_\Omega = 0$   (60.21)
Hence locally there exist functions $h^i_\Gamma(y^\Omega)$ and $n^i(y^\Omega)$ which verify (60.15) and (60.16). Finally, we consider the system

$\partial x^i/\partial y^\Delta = h^i_\Delta(y^\Omega)$   (60.22)
where the right-hand side is the solution of (60.15) and (60.16). Then (60.22) is also integrable,
since we have
$\dfrac{\partial}{\partial y^\Sigma}\left(\dfrac{\partial x^i}{\partial y^\Delta}\right) = \dfrac{\partial h^i_\Delta}{\partial y^\Sigma} = b_{\Delta\Sigma}\,n^i + \left\{{}^{\Omega}_{\Delta\Sigma}\right\}h^i_\Omega = \dfrac{\partial}{\partial y^\Delta}\left(\dfrac{\partial x^i}{\partial y^\Sigma}\right)$   (60.23)
Consequently the solution (60.14) exists; further, from (60.22), (60.16), and (60.15) the first and
the second fundamental forms on the hypersurface defined by (60.14) are precisely the
prescribed fields aΓΔ and bΓΔ , respectively. Thus the theorem is proved.
It should be noted that, since the algebraic conditions (60.16) are preserved by the solution of (60.15), any two solutions $(h^i_\Gamma, n^j)$ and $(\bar h^i_\Gamma, \bar n^j)$ of (60.15) can differ by at most a transformation of the form
While the equations of Gauss show that the Riemann-Christoffel tensor $R$ of $S$ is
completely determined by the second fundamental form B, the converse is generally not true.
Thus mathematically the extrinsic characterization of curvature is much stronger than the
intrinsic one, as it should be. In particular, if B vanishes, then S reduces to a hyperplane which
is trivially developable so that R vanishes. On the other hand, if R vanishes, then S can be
developed into a hyperplane, but generally B does not vanish, so S need not itself be a
hyperplane. For example, in a three-dimensional Euclidean space a cylinder is developable, but
it is not a plane.
Exercise
60.1 In the case where $N = 3$, show that the only independent equations of Codazzi are

$b_{\Delta\Delta,\Gamma} - b_{\Delta\Gamma,\Delta} = 0$

for $\Delta \ne \Gamma$ and where the summation convention has been abandoned. Also show that the only independent equation of Gauss is the single equation

$R_{1212} = b_{11}b_{22} - b_{12}^2$
$S = \sqrt a\; h_1\wedge\cdots\wedge h_{N-1}$   (61.1)

where $a = \det[a_{\Gamma\Delta}]$, and the surface area of a domain $U$ in $S$ is

$\sigma(U) = \int\cdots\int_U \sqrt a\; dy^1\cdots dy^{N-1}$   (61.3)

where the $(N-1)$-fold integral is taken over the coordinates $(y^\Gamma)$ on the domain $U$. Since $\sqrt a$ obeys the transformation rule

$\sqrt{\bar a} = \sqrt a\,\det\left[\dfrac{\partial y^\Gamma}{\partial\bar y^\Delta}\right]$   (61.4)

the integral (61.3) is independent of the choice of surface coordinate system.
We shall consider the theory of integration relative to $\sigma$ in detail in Chapter 13. Here we shall note only a particular result which connects a property of the surface area to the mean curvature of $S$. Namely, we seek a geometric condition for $S$ to be a minimal surface, which is defined by the condition that the surface area $\sigma(U)$ be an extremum in the class of variations of hypersurfaces having the same boundary as $U$. The concept of minimal surface is similar to that of a geodesic, which we have explored in Section 57, except that here we are interested in the variation of the integral $\sigma$ given by (61.3) instead of the integral $s$ given by (57.5).
As before, the geometric condition for a minimal surface follows from the Euler-
Lagrange equation for (61.3), namely,
Sec. 61 • Surface Area, Minimal Surface 455
$\dfrac{\partial}{\partial y^\Gamma}\left(\dfrac{\partial\sqrt a}{\partial h^i_\Gamma}\right) - \dfrac{\partial\sqrt a}{\partial x^i} = 0$   (61.5)

where $\sqrt a$ is regarded as a function of the variables $h^i_\Gamma = \partial x^i/\partial y^\Gamma$ through

$a_{\Gamma\Delta} = \delta_{ij}h^i_\Gamma h^j_\Delta = \delta_{ij}\,\dfrac{\partial x^i}{\partial y^\Gamma}\,\dfrac{\partial x^j}{\partial y^\Delta}$   (61.6)
For simplicity we have chosen the spatial coordinates to be rectangular Cartesian, so that
aΓΔ does not depend explicitly on x i . From (61.6) the partial derivative of a with respect to
hΓi is given by
$\dfrac{\partial\sqrt a}{\partial h^i_\Gamma} = \sqrt a\; a^{\Gamma\Delta}\delta_{ij}h^j_\Delta = \sqrt a\;\delta_{ij}h^{\Gamma j}$   (61.7)
Substituting this formula into (61.5), we see that the condition for S to be a minimal surface is
$\sqrt a\,\delta_{ij}\left(\left\{{}^{\Omega}_{\Omega\Gamma}\right\}h^{\Gamma j} + \dfrac{\partial h^{\Gamma j}}{\partial y^\Gamma}\right) = 0,\qquad i = 1,\ldots,N$   (61.8)

or, in vector form,

$\sqrt a\left(\left\{{}^{\Omega}_{\Omega\Gamma}\right\}h^\Gamma + \dfrac{\partial h^\Gamma}{\partial y^\Gamma}\right) = 0$   (61.9)

Here we have used the identity

$\dfrac{\partial\sqrt a}{\partial y^\Gamma} = \left\{{}^{\Omega}_{\Omega\Gamma}\right\}\sqrt a$   (61.10)
which we have noted in Exercise 56.10. Now from (58.20) we can rewrite (61.9) in the simple
form
$\sqrt a\; b^\Gamma_{\ \Gamma}\, n = 0$   (61.11)
Since $\sqrt a \ne 0$ and $n \ne 0$, this condition is equivalent to

$I_B = \operatorname{tr} B = b^\Gamma_{\ \Gamma} = 0$   (61.12)

In general, the first invariant $I_B$ of the second fundamental form $B$ is called the mean curvature of the hypersurface. Equation (61.12) then asserts that $S$ is a minimal surface if and
only if its mean curvature vanishes.
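As an illustration of (61.12), one can verify symbolically that a classical minimal surface has vanishing mean curvature. The catenoid used below is an assumed example, not taken from the text.

```python
import sympy as sp

# Assumed example: the catenoid x(u, v) = (cosh v cos u, cosh v sin u, v).
u, v = sp.symbols('u v', real=True)
x = sp.Matrix([sp.cosh(v)*sp.cos(u), sp.cosh(v)*sp.sin(u), v])
y = [u, v]

h = [x.diff(c) for c in y]
nn = h[0].cross(h[1])
n = nn / sp.sqrt(nn.dot(nn))                    # unit normal

a = sp.Matrix(2, 2, lambda i, j: h[i].dot(h[j]))             # first form
b = sp.Matrix(2, 2, lambda i, j: n.dot(x.diff(y[i], y[j])))  # second form

mean_curv = sp.simplify((a.inv() * b).trace())  # I_B = tr(a^{-1} b)
gauss_curv = sp.simplify((a.inv() * b).det())   # II_B

print(mean_curv)     # 0: the catenoid is a minimal surface
print(gauss_curv)    # negative everywhere
```

The negative value of $II_B$ agrees with the remark in Section 62 that a minimal surface not reducing to a plane is hyperbolic at every point.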
Since $B$ is symmetric, it admits the spectral representation

$B = \sum_{\Gamma=1}^{N-1}\beta_\Gamma\, c_\Gamma\otimes c_\Gamma$   (61.13)
where the principal basis {c Γ } is orthonormal on the surface. In differential geometry the
direction of c Γ ( x ) at each point x ∈ S is called a principal direction and the corresponding
proper number β Γ ( x ) is called a principal (normal) curvature at x. As usual, the principal
(normal) curvatures are the extrema of the normal curvature
κ n = B ( s, s ) (61.14)
in all unit tangents s at any point x [cf. (58.29)2]. The mean curvature I B , of course, is equal to
the sum of the principal curvatures, i.e.,
$I_B = \sum_{\Gamma=1}^{N-1}\beta_\Gamma$   (61.15)
Sec. 62 • Three-Dimensional Euclidean Manifold 457
The surface $S$ is now represented locally by

$x^i = x^i(y^\Gamma)$   (62.1)

with the surface natural basis

$h_\Gamma \equiv \dfrac{\partial x^i}{\partial y^\Gamma}\,g_i,\qquad \Gamma = 1,2$   (62.2)
The first and the second fundamental forms aΓΔ and bΓΔ relative to ( y Γ ) are defined
by
$a_{\Gamma\Delta} = h_\Gamma\cdot h_\Delta,\qquad b_{\Gamma\Delta} = n\cdot\dfrac{\partial h_\Gamma}{\partial y^\Delta} = -h_\Gamma\cdot\dfrac{\partial n}{\partial y^\Delta}$   (62.4)
Since $b_{\Gamma\Delta}$ is symmetric, at each point $x\in S$ there exists a positive orthonormal basis $\{c_\Gamma(x)\}$ relative to which $B(x)$ can be represented by the spectral form

$B(x) = \beta_1(x)\,c_1(x)\otimes c_1(x) + \beta_2(x)\,c_2(x)\otimes c_2(x)$   (62.6)
[c1 , c 2 ] = 0 (62.7)
However, locally we can always choose a coordinate system ( z Γ ) in such a way that the natural
basis {h Γ } is parallel to {c Γ ( x )} . Naturally, such a coordinate system is called a principal
coordinate system and its coordinate curves are called the lines of curvature. Relative to a
principal coordinate system the components $a_{\Gamma\Delta}$ and $b_{\Gamma\Delta}$ satisfy

$a_{12} = b_{12} = 0$   (62.8)

In terms of the principal curvatures,

$I_B = \operatorname{tr} B = \beta_1 + \beta_2 = a^{\Gamma\Delta}b_{\Gamma\Delta},\qquad II_B = \det B = \beta_1\beta_2 = \det\left[b^\Gamma_{\ \Delta}\right]$   (62.9)
In the preceding section we defined $I_B$ to be the mean curvature; $II_B$ is called the Gaussian curvature. It is determined by the surface metric alone, since

$II_B = \dfrac{\det[b_{\Gamma\Delta}]}{\det[a_{\Gamma\Delta}]} = \dfrac{b_{11}b_{22} - b_{12}^2}{a}$   (62.10)

where the numerator on the right-hand side is given by the equation of Gauss:

$b_{11}b_{22} - b_{12}^2 = R_{1212}$   (62.11)

Notice that for the two-dimensional case the tensor $R$ is completely determined by $R_{1212}$, since from (62.5)₁, or indirectly from (59.10), $R_{\Phi\Gamma\Sigma\Delta}$ vanishes when $\Phi = \Gamma$ or when $\Sigma = \Delta$.
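The invariants $I_B$ and $II_B$ are conveniently computed as the trace and determinant of the mixed matrix $[b^\Gamma_{\ \Delta}] = [a^{\Gamma\Sigma}][b_{\Sigma\Delta}]$, whose eigenvalues are the principal curvatures. The following sketch (an assumed example, not from the text) does this for a circular cylinder, of the kind the text identifies below as parabolic and developable.

```python
import sympy as sp

# Assumed example: a circular cylinder of radius rho, coordinates (u, v).
rho = sp.symbols('rho', positive=True)
u, v = sp.symbols('u v', real=True)
x = sp.Matrix([rho*sp.cos(u), rho*sp.sin(u), v])
y = [u, v]

h = [x.diff(c) for c in y]
n = sp.Matrix([sp.cos(u), sp.sin(u), 0])        # outward unit normal

a = sp.Matrix(2, 2, lambda i, j: h[i].dot(h[j]))
b = sp.Matrix(2, 2, lambda i, j: n.dot(x.diff(y[i], y[j])))

mixed = sp.simplify(a.inv() * b)                # the matrix [b^G_D]
betas = sorted(mixed.eigenvals().keys(), key=str)   # principal curvatures

print(betas)           # [-1/rho, 0]
print(mixed.det())     # II_B = 0: the cylinder is parabolic (developable)
```

One principal curvature vanishes (the generators are straight lines), so the Gaussian curvature $II_B$ is zero everywhere, while the mean curvature is $-1/\rho$ for the outward normal.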
To interpret the second fundamental form geometrically, we choose a surface coordinate system $(y^\Gamma)$ such that the coordinates of $x_0$ are $(0,0)$ and the basis $\{h_\Gamma(x_0)\}$ is orthonormal. Let $d(y^1,y^2)$ denote the distance from the point $x$ of $S$ with coordinates $(y^1,y^2)$ to the tangent plane $S_{x_0}$. Then

$d(y^1,y^2) = n(x_0)\cdot(x - x_0) \cong n(x_0)\cdot\left(h_\Gamma(x_0)\,y^\Gamma + \dfrac12\left.\dfrac{\partial h_\Gamma}{\partial y^\Delta}\right|_{x_0} y^\Gamma y^\Delta\right) \cong \dfrac12\,b_{\Gamma\Delta}(x_0)\,y^\Gamma y^\Delta$   (62.14)
where (62.14)₂,₃ are valid to within an error of third or higher order in $y^\Gamma$. From this estimate we see that the intersection of $S$ with a plane parallel to $S_{x_0}$ is represented approximately by the curve

$x_1 = y^1,\qquad x_2 = y^2,\qquad x_3 \cong \dfrac12\,b_{\Gamma\Delta}(x_0)\,y^\Gamma y^\Delta$   (62.16)
As usual we can define the notion of conjugate directions relative to the symmetric
bilinear form of B ( x 0 ) . We say that the tangential vectors u, v ∈ S x0 are conjugate at x 0 if
B ( u, v ) = u ⋅ ( Bv ) = v ⋅ ( Bu ) = 0 (62.17)
For example, the principal basis vectors c1 ( x 0 ) and c 2 ( x 0 ) are conjugate since the components
of B form a diagonal matrix relative to {c Γ } . Geometrically, conjugate directions may be
explained in the following way: We choose any curve $\lambda(t)$ in $S$ such that $\lambda(0) = x_0$ and

$\dot\lambda(0) = u\in S_{x_0}$   (62.18)

Then the conjugate direction $v$ of $u$ corresponds to the limit of the intersection of $S_{\lambda(t)}$ with $S_{x_0}$
as t tends to zero. We leave the proof of this geometric interpretation for conjugate directions as
an exercise.
A nonzero tangential vector $v$ at a point $x\in S$ is said to be in an asymptotic direction if it is self-conjugate, i.e.,

$B(v,v) = v\cdot(Bv) = 0$   (62.19)
If every point of S is hyperbolic, then the asymptotic lines form a coordinate net, and
we can define an asymptotic coordinate system. Relative to such a coordinate system the
components $b_{\Gamma\Delta}$ satisfy the condition

$b_{11} = b_{22} = 0$   (62.20)

so that

$II_B = -b_{12}^2/a$   (62.21)
A minimal surface which does not reduce to a plane is necessarily hyperbolic at every point,
since when β1 + β 2 = 0 we must have β1β 2 < 0 unless β1 = β 2 = 0.
From (62.12), S is parabolic at every point if and only if it is developable. In this case
the asymptotic lines are straight lines. In fact, it can be shown that there are only three kinds of
developable surfaces, namely cylinders, cones, and tangent developables. Their asymptotic lines
are simply their generators. Of course, these generators are also lines of curvature, since they are
in the principal direction corresponding to the zero principal curvature.
Exercises
62.1 Show that
$\sqrt a = n\cdot(h_1\times h_2)$

and that

$b_{\Gamma\Delta} = \dfrac{1}{\sqrt a}\,\dfrac{\partial^2 x}{\partial y^\Gamma\partial y^\Delta}\cdot\left(\dfrac{\partial x}{\partial y^1}\times\dfrac{\partial x}{\partial y^2}\right)$
62.3 Compute the principal curvatures for the surfaces defined in Exercises 55.5 and 55.7.
________________________________________________________________
Chapter 12

CLASSICAL CONTINUOUS GROUPS
In this chapter we consider the structure of various classical continuous groups which are formed
by linear transformations of an inner product space $V$. Since the space of all linear transformations of $V$ is itself an inner product space, the structure of the continuous groups contained in it can be
exploited by using ideas similar to those developed in the preceding chapter. In addition, the group
structure also gives rise to a special parallelism on the groups.
In Section 17 we pointed out that the vector space $L(V;V)$ has the structure of an algebra,
the product operation being the composition of linear transformations. Since V is an inner product
space, the transpose operation
$\mathrm T: L(V;V)\to L(V;V)$   (63.1)

is defined by the condition

$u\cdot A^{\mathrm T}v = Au\cdot v,\qquad u,v\in V$   (63.2)

and the inner product of $L(V;V)$ is given by

$A\cdot B = \operatorname{tr}\left(AB^{\mathrm T}\right)$   (63.3)

Recall also that $A\in L(V;V)$ is a linear isomorphism if and only if

$\det A \ne 0$   (63.4)
Clearly, if $A$ and $B$ are isomorphisms, then $AB$ and $BA$ are also isomorphisms, and from Exercise 22.1 we have

$\det(AB) = \det(BA) = \det A\,\det B \ne 0$   (63.5)
464 Chap. 12 • CLASSICAL CONTINUOUS GROUP
As a result, the set of all linear isomorphisms of $V$ forms a group $GL(V)$, called the general linear group of $V$. This group was mentioned in Section 17. We claim that $GL(V)$ is the disjoint union of two connected open sets in $L(V;V)$. This fact is more or less obvious, since $GL(V)$ is the preimage of the disjoint union $(-\infty,0)\cup(0,\infty)$ under the continuous map

$\det: L(V;V)\to\mathbb R$

so that

$GL(V) = GL^-(V)\cup GL^+(V)$   (63.6)

where $GL^-(V)$ and $GL^+(V)$ denote the preimages of $(-\infty,0)$ and $(0,\infty)$, respectively. These two sets are separated by the set $S$ of singular elements,

$A\in S \iff A\in L(V;V)\ \text{and}\ \det A = 0$
1. Left multiplication: for each $A\in GL(V)$ the map

$L_A: GL(V)\to GL(V)$

is defined by

$L_A(X) \equiv AX,\qquad X\in GL(V)$   (63.7)

2. Right multiplication: the map

$R_A: GL(V)\to GL(V)$

is defined by

$R_A(X) \equiv XA,\qquad X\in GL(V)$   (63.8)

3. Inversion: the map

$J: GL(V)\to GL(V)$

is defined by

$J(X) = X^{-1},\qquad X\in GL(V)$   (63.9)
Clearly these operations are smooth mappings, so they give rise to various gradients which
are fields of linear transformations of the underlying inner product space L (V ;V ) . For example,
the gradient of $L_A$ is a constant field given by

$\left[\nabla L_A(X)\right](Y) = \left.\dfrac{d}{dt}L_A(X + Yt)\right|_{t=0} = \left.\dfrac{d}{dt}(AX + AYt)\right|_{t=0} = AY,\qquad Y\in L(V;V)$   (63.10)

Similarly, the gradient of $R_A$ is the constant field

$\left[\nabla R_A\right](Y) = YA,\qquad Y\in L(V;V)$   (63.11)
On the other hand, ∇J is not a constant field; its value at any point X ∈GL (V ) is given by
$\left[\nabla J(X)\right](Y) = -X^{-1}YX^{-1},\qquad Y\in L(V;V)$   (63.12)

In particular,

$\left[\nabla J(I)\right](Y) = -Y,\qquad Y\in L(V;V)$   (63.13)

Since $\nabla L_A$ is a constant field, it defines a parallelism on $GL(V)$: a tangent vector $Z$ at $X$ is parallel to the tangent vector

$\left[\nabla L_A(X)\right](Z) = AZ$ at the point $Y = AX$   (63.14)

From (63.10) we see that the parallelism defined by (63.14) is not the same as the Euclidean
parallelism. Also, ∇LA is not the same kind of parallelism as the Levi-Civita parallelism on a
hypersurface because it is independent of any path joining X and Y. In fact, if X and Y do not
belong to the same connected set of GL (V ) , then there exists no smooth curve joining them at all,
but the parallelism ∇LA ( X ) is still defined. We call the parallelism ∇LA ( X ) with A ∈GL (V ) the
Cartan parallelism on GL (V ) , and we shall study it in detail in the next section.
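A minimal numerical sketch of the point made above (the matrices below are arbitrary assumptions, not from the text): the gradient of $L_A$ is the constant map $Y\mapsto AY$, so the transport it defines depends only on the endpoints, never on a path between them.

```python
import numpy as np

# Assumed example: verify that grad L_A is the constant map Y -> AY.
rng = np.random.default_rng(1)
N = 3
A = rng.standard_normal((N, N)) + 3 * np.eye(N)   # an element of GL(V)
X = rng.standard_normal((N, N)) + 3 * np.eye(N)   # base point in GL(V)
Y = rng.standard_normal((N, N))                   # a tangent vector at X

t = 1e-6
finite_diff = (A @ (X + t * Y) - A @ X) / t       # directional derivative
assert np.allclose(finite_diff, A @ Y)            # agrees with (63.10)

# The transport is base-point independent: the same map Y -> AY works
# at any other point X2, with no curve joining X and X2 required.
X2 = rng.standard_normal((N, N)) + 3 * np.eye(N)
assert np.allclose((A @ (X2 + t * Y) - A @ X2) / t, A @ Y)
```

Because $L_A$ is linear, the difference quotient is exact up to floating-point error, which is why no limit process is needed in the check.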
The choice of left multiplication rather than right multiplication is merely a convention. The gradient $\nabla R_A$ also defines a parallelism on the group. Further, the two parallelisms $\nabla L_A$ and $\nabla R_A$ are related by
$L_A = J\circ R_{A^{-1}}\circ J$   (63.16)
Before closing the section, we mention here that besides the proper general linear group $GL^+(V)$, several other subgroups of $GL(V)$ are important in the applications. First, the special linear group $SL(V)$ consists of all $A$ with $\det A = 1$, while the unimodular group $UM(V)$ consists of all $A$ with $\left|\det A\right| = 1$; these groups are related by

$UM(V)\cap GL^+(V) = SL(V)$   (63.19)

Next, the orthogonal group $O(V)$ consists of all orthogonal transformations of $V$, i.e.,

$A\in O(V) \iff A^{-1} = A^{\mathrm T}$   (63.20)
From (18.18) or (63.2) we see that A belongs to O (V ) if and only if it preserves the inner product
of V , i.e.,
Au ⋅ Av = u ⋅ v, u, v ∈V (63.21)
The orthogonal transformations with determinant $+1$ form a subgroup of $O(V)$, denoted by $SO(V)$, called the special orthogonal group or the rotational group of $V$. As we shall see in Section 65, $O(V)$ is a hypersurface of dimension $\frac12 N(N-1)$. From (63.20), $O(V)$ is contained in the sphere of radius $\sqrt N$ in $L(V;V)$, since if $A\in O(V)$, then

$A\cdot A = \operatorname{tr}\left(AA^{\mathrm T}\right) = \operatorname{tr}\left(AA^{-1}\right) = \operatorname{tr} I = N$   (63.22)
As a result, O (V ) is bounded in L (V ;V ).
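The bound (63.22) is easy to check numerically. The following is an illustrative sketch; the random orthogonal matrix is an assumption for the example.

```python
import numpy as np

# Assumed example: an orthogonal A satisfies A . A = tr(A A^T) = tr(I) = N,
# so O(V) lies on the sphere of radius sqrt(N) in L(V;V).
N = 4
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthogonal matrix

assert np.allclose(A @ A.T, np.eye(N))            # A^{-1} = A^T, cf. (63.20)
assert np.isclose(np.trace(A @ A.T), N)           # |A|^2 = N, cf. (63.22)
print(np.sqrt(np.trace(A @ A.T)))                 # 2.0 = sqrt(N)
```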
$\left[\nabla L_A(X)\right]: SL(V)_X \to SL(V)_Y,\qquad Y = AX$   (63.23)

Thus there exists a Cartan parallelism on the subgroups as well as on $GL(V)$. The tangent spaces of the subgroups are subspaces of $L(V;V)$, of course; further, as explained in the preceding chapter, they vary from point to point. We shall characterize the tangent spaces of $SL(V)$ and $SO(V)$ at the identity element $I$ in Section 66. The tangent space at any other point can then be obtained by the Cartan parallelism.
Exercise
63.1 Verify (63.12) and (63.13).
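Exercise 63.1 can also be spot-checked numerically. The sketch below (plain Python on 2×2 matrices; the helper names are ours, not the text's) compares a finite-difference directional derivative of the inversion map J(X) = X⁻¹ with the closed form −X⁻¹YX⁻¹ of (63.12), which reduces to −Y at X = I as in (63.13).

```python
# Finite-difference check of (63.12): [∇J(X)](Y) = -X^{-1} Y X^{-1}.
# Plain 2x2 matrices as nested lists; helper names are illustrative only.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def add(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

X = [[2.0, 1.0], [0.0, 1.0]]
Y = [[0.0, 1.0], [1.0, 3.0]]
h = 1e-6

# directional derivative of J(X) = X^{-1} along Y, approximated by differences
num = add(inv(add(X, Y, h)), inv(X), -1.0)
num = [[num[i][j] / h for j in range(2)] for i in range(2)]

# closed form (63.12)
exact = [[-v for v in row] for row in mul(inv(X), mul(Y, inv(X)))]

err = max(abs(num[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err < 1e-3)  # True: matches -X^{-1} Y X^{-1}
```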
Sec. 64 • The Parallelism of Cartan 469
The concept of Cartan parallelism on GL (V ) and on its various subgroups was introduced
in the preceding section. In this section, we develop the concept in more detail. As explained
before, the Cartan parallelism is path-independent. To emphasize this fact, we now replace the
notation ∇LA ( X ) by the notation C ( X ,Y ) , where Y=AX .
Then we put
C ( X ) ≡ C ( I, X ) ≡ ∇LX ( I ) (64.1)
C(X, Y) = C(Y) ∘ C(X)⁻¹   (64.2)
for all X and Y. We use the same notation C ( X, Y ) for the Cartan parallelism from X to Y for
pairs X, Y in GL (V ) or in any continuous subgroup of GL (V ) , such as SL (V ). Thus
C(X, Y) denotes either the linear isomorphism

C(X, Y): GL(V)_X → GL(V)_Y   (64.3)

when X, Y belong to GL(V), or the linear isomorphism

C(X, Y): SL(V)_X → SL(V)_Y   (64.4)

when X, Y belong to SL(V).
Now let V be a vector field on GL(V), i.e., a smooth mapping

V: GL(V) → L(V;V)   (64.5)

Then we say that V is a left-invariant field if its values are parallel vectors relative to the Cartan parallelism, i.e.,
⎡⎣C ( X , Y ) ⎤⎦ ( V ( X ) ) = V ( Y ) (64.6)
for any X, Y in GL(V). Since the Cartan parallelism is not the same as the Euclidean parallelism induced by the inner product space L(V;V), a left-invariant field is not a constant field. From (64.2) we have the following representation for a left-invariant field.
Theorem 64.1. A vector field V is left-invariant if and only if it has the representation
V ( X ) = ⎡⎣C ( X ) ⎤⎦ ( V ( I ) ) (64.7)
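As a numerical illustration of (64.6) and (64.7): the parallelism C(X) acts on L(V;V) as left multiplication by X, so the field V(X) = XA with A = V(I) should be carried onto V(Y) by C(X, Y) = C(Y) ∘ C(X)⁻¹, which acts as W ↦ YX⁻¹W. A minimal sketch (2×2 matrices, plain Python; helper names are ours):

```python
# Check of (64.6)/(64.7): the field V(X) = X A (with A = V(I)) is parallel
# under the Cartan parallelism [C(X, Y)](W) = Y X^{-1} W.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[0.0, 1.0], [2.0, 0.0]]            # A = V(I), a tangent vector at I
X = [[1.0, 1.0], [0.0, 2.0]]
Y = [[3.0, 0.0], [1.0, 1.0]]

V_X = mul(X, A)                          # V(X) = [C(X)](V(I)) = XA
V_Y = mul(Y, A)                          # V(Y) = YA
transported = mul(Y, mul(inv2(X), V_X))  # [C(X, Y)](V(X)) = Y X^{-1} V(X)

err = max(abs(transported[i][j] - V_Y[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # True: (64.6) holds for this field
```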
As a result, each tangent vector V(I) at the identity element I has a unique extension into a left-invariant field. Consequently, the set of all left-invariant fields, denoted by gl(V), is a copy of the tangent space GL(V)_I, which is canonically isomorphic to L(V;V). We call the restriction map

gl(V) → GL(V)_I ≅ L(V;V)   (64.8)

the standard representation of gl(V).
Now let X(t) be a smooth curve in the group and let U(t) be a tangent vector field along X(t). The covariant derivative of U relative to the Cartan parallelism is defined by

DU(t)/Dt = lim_{Δt→0} { U(t + Δt) − ⎡⎣C(X(t), X(t + Δt))⎤⎦(U(t)) } / Δt   (64.9)
Since the Cartan parallelism is defined not just on GL (V ) but also on the various continuous
subgroups of GL (V ) , we may use the same formula (64.9) to define the covariant derivative of
tangent vector fields along a smooth curve in the subgroups also. For this reason we shall now
derive a general representation for the covariant derivative relative to the Cartan parallelism
without restricting the underlying continuous group to be the general linear groupGL (V ) . Then
we can apply the representation to vector fields on GL (V ) as well as to vector fields on the
various continuous subgroups ofGL (V ) , such as the special linear group SL (V ).
The simplest way to represent the covariant derivative defined by (64.9) is to first express
the vector field U ( t ) in component form relative to a basis of the Lie algebra of the underlying
continuous group. From (64.7) we know that the values {EΓ(X), Γ = 1,…,M} of a left-invariant field of bases {EΓ} form a basis of the tangent space at X for all X belonging to
the underlying group. Here M denotes the dimension of the group; it is purposely left arbitrary so
as to achieve generality in the representation. Now since U ( t ) is a tangent vector at X ( t ) , it can be
represented as usual by the component form relative to the basis {EΓ(X(t))}, say

U(t) = Û^Γ(t) EΓ(X(t))   (64.10)
where Γ is summed from 1 to M . Substituting (64.10) into (64.9) and using the fact that the basis
{EΓ } is formed by parallel fields relative to the Cartan parallelism, we obtain directly
DU(t)/Dt = ( dÛ^Γ(t)/dt ) EΓ(X(t))   (64.11)
This formula is comparable to the representation of the covariant derivative relative to the Euclidean parallelism when the vector field is expressed in component form in terms of a Cartesian basis.
As we shall see in the next section, the left-invariant basis {EΓ} used in the representation
(64.11) is not the natural basis of any coordinate system. Indeed, this point is the major difference
between the Cartan parallelism and the Euclidean parallelism, since relative to the latter a parallel
basis is a constant basis which is the natural basis of a Cartesian coordinate system. If we
introduce a local coordinate system with natural basis {H Γ ,Γ = 1,..., M } , then as usual we can
represent X(t) by its coordinate functions (X^Γ(t)), and EΓ and U(t) by their components

EΓ = EΓ^Δ H_Δ,   U(t) = U^Δ(t) H_Δ   (64.12)

Inverting the first of these relations, we find
Û Γ ( t ) = U Δ ( t ) FΔΓ ( X ( t ) ) (64.13)
where ⎡⎣ FΔΓ ⎤⎦ is the inverse of ⎡⎣ EΔΓ ⎤⎦ . Substituting (64.13) into (64.11), we get
DU/Dt = ( dU^Δ/dt + U^Ω (∂F_Ω^Γ/∂X^Σ) EΓ^Δ (dX^Σ/dt) ) H_Δ   (64.14)

or, equivalently,

DU/Dt = ( dU^Δ/dt + U^Ω L^Δ_ΩΣ (dX^Σ/dt) ) H_Δ   (64.15)
where

L^Δ_ΩΣ = (∂F_Ω^Γ/∂X^Σ) EΓ^Δ = −F_Ω^Γ (∂EΓ^Δ/∂X^Σ)   (64.16)
Now the formula (64.15) is comparable to (56.37) with LΔΩΣ playing the role of the Christoffel
symbols, except that LΔΩΣ is not symmetric with respect to the indices Ω and Σ . For definiteness,
we call LΔΩΣ the Cartan symbols. From (64.16) we can verify easily that they do not depend on the
choice of the basis {EΓ } .
It follows from (64.12)1 and (64.16) that the Cartan symbols obey the same transformation
rule as the Christoffel symbols. Specifically, if (X̄^Γ) is another coordinate system in which the Cartan symbols are L̄^Δ_ΩΣ, then

L̄^Δ_ΩΣ = L^Φ_ΨΘ (∂X̄^Δ/∂X^Φ)(∂X^Ψ/∂X̄^Ω)(∂X^Θ/∂X̄^Σ) + (∂X̄^Δ/∂X^Φ)(∂²X^Φ/∂X̄^Ω∂X̄^Σ)   (64.17)
This formula is comparable to (56.15). In view of (64.15) and (64.17) we can define the covariant
derivative of a vector field relative to the Cartan parallelism by
∇U = ( ∂U^Δ/∂X^Σ + U^Ω L^Δ_ΩΣ ) H_Δ ⊗ H^Σ   (64.18)
where {HΣ } denotes the dual basis of {H Δ } . The covariant derivative defined in this way clearly
possesses the following property.
Theorem 64.2. A vector field U is left-invariant if and only if its covariant derivative relative to
the Cartan parallelism vanishes.
The proof of this proposition is more or less obvious, since the condition

∂U^Δ/∂X^Σ + U^Ω L^Δ_ΩΣ = 0   (64.19)

is equivalent to

∂Û^Δ/∂X^Σ = 0   (64.20)
where Û Δ denotes the components of U relative to the parallel basis {EΓ } . The condition (64.20)
means simply that the Û^Δ are constant or, equivalently, that U is left-invariant.
Comparing (64.16) with (59.16), we see that the Cartan parallelism also possesses the
following property.
The curvature tensor

R^Γ_ΣΔΩ ≡ ∂L^Γ_ΣΩ/∂X^Δ − ∂L^Γ_ΣΔ/∂X^Ω + L^Φ_ΣΩ L^Γ_ΦΔ − L^Φ_ΣΔ L^Γ_ΦΩ   (64.21)

vanishes identically.
Notice that (64.21) is comparable with (59.10) where the Christoffel symbols are replaced
by the Cartan symbols. The vanishing of the curvature tensor
R Γ ΣΔΩ = 0 (64.22)
is simply the condition of integrability of equation (64.19) whose solutions are left-invariant fields.
From the transformation rule (64.17) and the fact that the second derivative therein is
symmetric with respect to the indices Ω and Σ, we obtain the result that

T^Δ_ΩΣ ≡ L^Δ_ΩΣ − L^Δ_ΣΩ   (64.24)
are the components of a third-order tensor field. We call T the torsion tensor of the Cartan
parallelism. To see the geometric meaning of this tensor, we substitute the formula (64.16) into
(64.24) and rewrite the result in the following equivalent form:
T^Δ_ΩΣ E_Φ^Ω E_Ψ^Σ = E_Φ^Ω (∂E_Ψ^Δ/∂X^Ω) − E_Ψ^Ω (∂E_Φ^Δ/∂X^Ω)   (64.25)
The right-hand side of (64.25) shows that

T(EΦ, EΨ) = ⎡⎣EΦ, EΨ⎤⎦   (64.26)

where the right-hand side denotes the Lie bracket of EΦ and EΨ, namely

⎡⎣EΦ, EΨ⎤⎦ ≡ L_{EΦ} EΨ = ( E_Φ^Ω (∂E_Ψ^Δ/∂X^Ω) − E_Ψ^Ω (∂E_Φ^Δ/∂X^Ω) ) H_Δ   (64.27)
As we shall see in the next section, the Lie bracket of any pair of left-invariant fields is
itself also a left-invariant field. Indeed, this is the very reason that the set of all left-invariant fields
is so endowed with the structure of a Lie algebra. As a result, the components of ⎡⎣ EΦ , EΨ ⎤⎦
relative to the left-invariant basis {EΓ } are constant scalar fields, namely
⎡⎣EΦ, EΨ⎤⎦ = C^Γ_ΦΨ EΓ   (64.28)
We call C^Γ_ΦΨ the structure constants of the left-invariant basis {EΓ}. From (64.26), the C^Γ_ΦΨ are nothing but the components of the torsion tensor T relative to the basis {EΓ}, namely

T = C^Γ_ΦΨ EΓ ⊗ E^Φ ⊗ E^Ψ   (64.29)
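For gl(2) with the matrix-unit basis {E_ab}, the structure constants can be computed directly from the commutator. The following sketch (our own illustration, not from the text) checks them against the identity [E_ab, E_cd] = δ_bc E_ad − δ_da E_cb:

```python
# Structure constants (64.28) for gl(2) with matrix units E_ab.
import itertools

def unit(a, b):
    M = [[0.0, 0.0], [0.0, 0.0]]
    M[a][b] = 1.0
    return M

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

idx = [(i, j) for i in range(2) for j in range(2)]
ok = True
for (a, b), (c, d) in itertools.product(idx, repeat=2):
    L = bracket(unit(a, b), unit(c, d))
    for i, j in idx:
        # the component of [E_ab, E_cd] along E_ij is just its (i, j) entry
        expect = (float(b == c and i == a and j == d)
                  - float(d == a and i == c and j == b))
        ok = ok and abs(L[i][j] - expect) < 1e-12
print(ok)  # True: the structure constants match the δ-formula
```

Note that swapping the two arguments flips the sign, so these constants are antisymmetric in the lower pair of indices, as (64.24) and (64.29) require.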
Now the covariant derivative defined by (64.9) for vector fields can be generalized, as in the preceding chapter, to covariant derivatives of arbitrary tangential tensor fields. The resulting general formula (64.30), which is comparable to (48.7) and (56.23), represents the covariant derivative
in terms of a coordinate system (X^Γ). If we express A in component form relative to a left-invariant basis {EΓ}, say

A(t) = Â^{Δ₁…Δᵣ}_{Γ₁…Γₛ}(t) E_{Δ₁}(X(t)) ⊗ ⋯ ⊗ E^{Γₛ}(X(t))   (64.31)

then, since the basis fields are parallel relative to the Cartan parallelism,

DA(t)/Dt = ( dÂ^{Δ₁…Δᵣ}_{Γ₁…Γₛ}(t)/dt ) E_{Δ₁}(X(t)) ⊗ ⋯ ⊗ E^{Γₛ}(X(t))   (64.32)
The formulas (64.30) and (64.32) represent the covariant derivative of a tangential tensor field A
along a smooth curve X ( t ) . Now if A is a tangential tensor field defined on the continuous
group, then we define the covariant derivative of A relative to the Cartan parallelism by a formula
similar to (56.10) except that we replace the Christoffel symbols there by the Cartan symbols.
Naturally we say that a tensor field A is left-invariant if ∇A vanishes. The torsion tensor
field T given by (64.29) is an example of a left-invariant third-order tensor field. Since a tensor
field is left invariant if and only if its components relative to the product basis of a left-invariant
basis are constants, a representation formally generalizing (64.7) can be stated for left-invariant
tensor fields in general. In particular, if {EΓ } is a left-invariant field of bases, then
E = E₁ ∧ ⋯ ∧ E_M   (64.33)

is a left-invariant field of M-vectors.
Section 65. One-Parameter Groups, Exponential Map

In Section 57 of the preceding chapter we introduced the concepts of geodesics and the exponential map relative to the Levi-Civita parallelism on a hypersurface. In this section we consider similar concepts relative to the Cartan parallelism. As before, we define a geodesic to be a
smooth curve X ( t ) such that
DẊ/Dt = 0   (65.1)

where Ẋ denotes the tangent vector of X(t) and where the covariant derivative is taken relative to the Cartan parallelism.
Since (65.1) is formally the same as (57.1), relative to a coordinate system ( X Γ ) we have
the following equations of geodesics
d²X^Γ/dt² + L^Γ_ΣΔ (dX^Σ/dt)(dX^Δ/dt) = 0,   Γ = 1,…,M   (65.2)
which are comparable to (57.2). However, the equations of geodesics here are no longer the Euler-Lagrange equations of the arc length integral, since the Cartan parallelism is not induced by a
metric and the arc length integral is not defined. To interpret the geometric meaning of a geodesic
relative to the Cartan parallelism, we must refer to the definition (64.9) of the covariant derivative.
If we represent the tangent vector of X(t) in component form, say

Ẋ(t) = G^Γ(t) EΓ(X(t))   (65.3)
then from (64.11) a necessary and sufficient condition for X ( t ) to be a geodesic is that the
components G Γ ( t ) be constant independent of t . Equivalently, this condition means that
Ẋ(t) = G(X(t))   (65.4)

where G is the left-invariant field

G = G^Γ EΓ   (65.5)
where G Γ are constant. In other words, a curve X ( t ) is a geodesic relative to the Cartan
parallelism if and only if it is an integral curve of a left-invariant vector field.
A corollary of this theorem is that every geodesic can be extended indefinitely from t = − ∞
to t = + ∞ . Indeed, if X ( t ) is a geodesic defined for an interval, say t ∈[ 0,1] , then we can extend
X ( t ) to the interval t ∈[1, 2] by
X(t + 1) ≡ L_{X(1)X(0)⁻¹}(X(t)),   t ∈ [0, 1]   (65.6)
Theorem 65.2. A smooth curve X ( t ) passing through the identity element I at t = 0 is a geodesic
if and only if it forms a one-parameter group, i.e.,
X ( t1 + t2 ) = X ( t1 ) X ( t2 ) , t1 , t2 ∈R (65.7)
To see this, notice that (65.7) means

X(t₁ + t₂) = L_{X(t₁)}(X(t₂))   (65.8)

and that, by hypothesis,

X(0) = I   (65.9)
Conversely, if (65.7) holds, then by differentiating with respect to t2 and evaluating the result at
t2 = 0 , we obtain
Ẋ(t₁) = ⎡⎣C(X(t₁))⎤⎦(Ẋ(0))   (65.10)
Now combining the preceding propositions, we see that the class of all geodesics can be
characterized in the following way. First, for each left-invariant field G there exists a unique one-
parameter group X ( t ) such that
Ẋ(0) = G(I)   (65.11)
where G ( I ) is the standard representation of G . Next, the set of all geodesics tangent to G can be
represented by LA ( X ( t ) ) for all A belonging to the underlying group.
As in Section 57, the one-to-one correspondence between G ( I ) and X ( t ) gives rise to the
notion of the exponential map at the identity element I. For brevity, let A be the standard
representation of G , i.e.,
A ≡ G (I) (65.12)
Then we define
X ( t ) = exp ( At ) (65.14)
for all t ∈R . Here we have used the extension property of the geodesic. Equation (65.14) formally
represents the one-parameter group whose initial tangent vector at the identity element I is A .
We claim that the exponential map defined by (65.13) can be represented explicitly by the
exponential series
exp(A) = I + A + A²/2! + A³/3! + ⋯ + Aⁿ/n! + ⋯   (65.15)
Indeed, since

‖Aⁿ‖ ≤ ‖A‖ⁿ   (65.16)

for all positive integers n, the partial sums of (65.15) form a Cauchy sequence in the inner product space L(V;V). That is,
‖Aⁿ/n! + ⋯ + Aᵐ/m!‖ ≤ ‖A‖ⁿ/n! + ⋯ + ‖A‖ᵐ/m!   (65.17)
and the right-hand side of (65.17) converges to zero as n and m approach infinity.
Now to prove that (65.15) is the correct representation for the exponential map, we have to
show that the series
exp(At) = I + At + A²t²/2! + ⋯ + Aⁿtⁿ/n! + ⋯   (65.18)
defines a one-parameter group. This fact is more or less obvious since the exponential series
satisfies the usual power law

exp(At₁) exp(At₂) = exp(A(t₁ + t₂))   (65.19)

which can be verified by direct multiplication of the power series for exp(At₁) and exp(At₂).
Finally, it is easily seen that the initial tangent of the curve exp ( At ) is A since
(d/dt)( I + At + ½A²t² + ⋯ )│_{t=0} = A   (65.20)
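The series (65.15)/(65.18) and the power law for a single generator A are easy to exercise with a truncated matrix exponential. A minimal sketch in plain Python (2×2 matrices; helper names are ours):

```python
# Truncated series (65.15); checks exp(A t1) exp(A t2) = exp(A (t1 + t2)).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    # I + A + A^2/2! + ... + A^{terms-1}/(terms-1)!
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def scale(A, t):
    return [[t * v for v in row] for row in A]

A = [[0.0, 1.0], [-2.0, 0.5]]
t1, t2 = 0.3, 0.7
lhs = mul(expm(scale(A, t1)), expm(scale(A, t2)))
rhs = expm(scale(A, t1 + t2))

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-10)  # True: exp(At) is a one-parameter group
```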
Theorem 65.3. Let G be a left-invariant field with standard representation A . Then the geodesics
X ( t ) tangent to G can be expressed by
X(t) = X(0) exp(At) = X(0)( I + At + A²t²/2! + ⋯ )   (65.21)
In view of this representation, we see that the flow generated by the left-invariant field G is simply the right multiplication by exp(At), namely
ρt = Rexp( At ) (65.22)
for all t . As a result, if K is another left-invariant field, then the Lie bracket of G with K is given
by
[G, K](X) = lim_{t→0} ( K(X exp(At)) − K(X) exp(At) ) / t   (65.23)
This formula implies immediately that [ G, K ] is also a left-invariant field. Indeed, if the standard
representation of K is B , then from the representation (64.7) we have
K ( X ) = XB (65.24)
and therefore

K(X exp(At)) = X exp(At)B   (65.25)

Substituting (65.24) and (65.25) into (65.23) and using the power series representation (65.18), we
obtain
[G, K ] ( X ) = X ( AB − BA ) (65.26)
which shows that [G, K ] is left-invariant with the standard representation AB − BA . Hence in
terms of the standard representation the Lie bracket on the Lie algebra is given by
[ A, B] = AB − BA (65.27)
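The limit (65.23) and the commutator formula (65.26)/(65.27) can be compared numerically at X = I, taking G(X) = XA and K(X) = XB. A small sketch (our own; nilpotent 2×2 generators, for which the exponential series terminates):

```python
# Flow-limit bracket (65.23) at X = I versus the commutator [A, B] = AB - BA.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
t = 1e-6
E = expm([[t * v for v in row] for row in A])   # exp(At)

# (65.23) at X = I with K(X) = XB: ( exp(At) B - B exp(At) ) / t
num = [[(mul(E, B)[i][j] - mul(B, E)[i][j]) / t for j in range(2)]
       for i in range(2)]
com = [[mul(A, B)[i][j] - mul(B, A)[i][j] for j in range(2)] for i in range(2)]

err = max(abs(num[i][j] - com[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)  # True: the limit reproduces AB - BA
```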
So far we have shown that the set of left-invariant fields is closed with respect to the operation of
the Lie bracket. In general, a vector space equipped with a bilinear bracket product which obeys the identities

[A, B] = −[B, A]   (65.28)
and the Jacobi identity

⎡⎣A, [B, C]⎤⎦ + ⎡⎣B, [C, A]⎤⎦ + ⎡⎣C, [A, B]⎤⎦ = 0   (65.29)
is called a Lie algebra. From (65.27) the Lie bracket of left-invariant fields clearly satisfies the
identities (65.28) and (65.29). As a result, the set of all left-invariant fields has the structure of a
Lie algebra with respect to the Lie bracket. This is why that set is called the Lie algebra of the
underlying group, as we have remarked in the preceding section.
Before closing the section, we remark that the Lie algebra of a continuous group depends
only on the identity component of the group. If two groups share the same identity component,
then their Lie algebras are essentially the same. For example, the Lie algebras of GL(V) and GL⁺(V) are both representable by L(V;V) with the Lie bracket given by (65.27). The fact that GL(V) has two components, namely GL⁺(V) and GL⁻(V), cannot be reflected in any way by the Lie algebra gl(V). We shall consider the relation between the Lie algebra and the identity
component of the underlying group in more detail in the next section.
Exercises
(c) exp 0 = I
(d) B ( exp A ) B −1 = exp BAB −1 for regular B .
(e) A = Aᵀ if and only if exp A = (exp A)ᵀ.
(f) If P is a projection, i.e., P² = P, then

exp λP = I + (e^λ − 1)P

for λ ∈ R.
Let H be a continuous subgroup of GL(V), and let h denote the Lie algebra of H, i.e., the set of vector fields on H that are left-invariant relative to the Cartan parallelism on H. Each V in h has a unique extension V̄ to a left-invariant field on GL(V), namely

V̄(X) = ⎡⎣C(X)⎤⎦(V(I))   (66.1)

for all X ∈ GL(V). Here we have used the fact that the Cartan parallelism on H is the restriction of that on GL(V) to H. Therefore, when X ∈ H, the representation (66.1) reduces to V(X), since V is left-invariant on H. Moreover, the extension is compatible with the Lie bracket,

⎡⎣V̄, Ū⎤⎦ = ⎡⎣V, U⎤⎦¯   (66.2)
for all U and V in h . This condition shows that the extension h of h consisting of all left-
invariant fields V with V in h is a Lie subalgebra of gl (V ) . In view of (66.1) and (66.2), we
simplify the notation by suppressing the overbar. If this convention is adopted, then h becomes a
Lie subalgebra of gl (V ) .
It turns out that every Lie subalgebra of gl(V) can be identified as the Lie algebra of a unique connected continuous subgroup of GL(V). To prove this, let h be an arbitrary Lie subalgebra of gl(V). Then the values of the left-invariant fields belonging to h form a linear
subspace of L (V ;V ) at each point of GL (V ) . This field of subspaces is a distribution on
GL (V ) as defined in Section 50. According to the Frobenius theorem, the distribution is
integrable if and only if it is closed with respect to the Lie bracket. This condition is clearly
satisfied since h is a Lie subalgebra. As a result, there exists an integral hypersurface of the
distribution at each point inGL (V ) .
We denote the maximal connected integral hypersurface of the distribution at the identity
by H . Here maximality means that H is not a proper subset of any other connected integral
hypersurface of the distribution. This condition implies immediately that H is also the maximal
connected integral hypersurface at any point X which belongs to H . By virtue of this fact we
claim that
LX ( H ) = H (66.3)
for all X ∈H . Indeed, since the distribution is generated by left-invariant fields, its collection of
maximal connected integral hypersurfaces is invariant under any left multiplication. In particular,
LX ( H ) is the maximal connected integral hypersurface at the point X, since H contains the
identity I . As a result, (66.3) holds.
Now from (66.3) we see that X ∈H implies X −1 ∈H since X −1 is the only possible element
such that LX(X⁻¹) = I. Similarly, if X and Y are contained in H, then XY must also be contained
in H since XY is the only possible element such that LY−1 LX −1 ( XY ) = I . For the last condition we
have used the fact that
LY−1 LX −1 ( H ) = LY−1 ( H ) = H
which follows from (66.3) and the fact that X −1 and Y −1 are both in H . Thus we have shown that
H is a connected continuous subgroup of GL (V ) having h as its Lie algebra.
Summarizing the results obtained so far, we can state the following theorem.
Theorem 66.1. There exists a one-to-one correspondence between the set of Lie subalgebras of
gl (V ) and the set of connected continuous subgroups of GL (V ) in such a way that each Lie
subalgebra h of gl (V ) is the Lie algebra of a unique connected continuous subgroup H of
GL (V ) .
To illustrate this theorem, we now determine explicitly the Lie algebras s l (V ) and
s o (V ) of the subgroups SL (V ) and SO (V ) . We claim first
A∈s l (V ) ⇔ tr A = 0 (66.4)
where tr A denotes the trace of A . To prove this, we consider the one-parameter group exp ( At )
for any A ∈ L(V;V). In order that A ∈ sl(V), we must have

det exp(At) = 1,   t ∈ R   (66.5)

Differentiating this condition with respect to t and evaluating the result at t = 0, we obtain [cf. Exercise 65.1(g)]

tr A = 0   (66.6)

Conversely, if tr A vanishes, then (66.5) holds, because the one-parameter group property of exp(At) implies det exp(At) = e^{t tr A}.
From the representation (65.27) the reader will verify easily that the subspace of
L (V ;V ) characterized by the right-hand side of (66.4) is indeed a Lie subalgebra, as it should be.
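A numerical illustration of (66.4): for a trace-free A, the one-parameter group exp(At) should stay in SL(V), i.e., have unit determinant for every t. A sketch with a 2×2 example (helper names are ours):

```python
# (66.4): tr A = 0 implies det exp(At) = e^{t tr A} = 1 for all t.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=50):
    # truncated series (65.15)
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[1.0, 2.0], [3.0, -1.0]]          # tr A = 1 + (-1) = 0
ok = True
for t in (0.1, 0.5, 1.0):
    E = expm([[t * v for v in row] for row in A])
    det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
    ok = ok and abs(det - 1.0) < 1e-9
print(ok)  # True: the curve exp(At) lies in SL
```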
A ∈ so (V ) ⇔ AT = − A (66.7)
where A T denotes the transpose of A . Again we consider the one-parameter group exp ( At ) for
any A ∈ L(V;V). In order that A ∈ so(V), we must have

⎡⎣exp(At)⎤⎦⁻¹ = ⎡⎣exp(At)⎤⎦ᵀ,   t ∈ R   (66.8)

Now for any A the identities

⎡⎣exp(A)⎤⎦ᵀ = exp(Aᵀ),   ⎡⎣exp(A)⎤⎦⁻¹ = exp(−A)   (66.9)

can be verified directly from (65.15). The condition (66.7) clearly follows from the condition (66.8). From (65.27) the reader also will verify the fact that the subspace of L(V;V) characterized by the right-hand side is a Lie subalgebra.
The conditions (66.4) and (66.7) characterize completely the tangent spaces of SL (V ) and
SO (V ) at the identity element I . These conditions verify the claims on the dimensions of
SL (V ) and SO (V ) made in Section 63.
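Similarly, (66.7) can be illustrated numerically: for a skew-symmetric A, the exponential Q = exp(A) should satisfy QᵀQ = I. A minimal 2×2 sketch (helper names are ours):

```python
# (66.7): A^T = -A implies Q = exp(A) is orthogonal, so exp(At) lies in SO.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=50):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.0, -1.3], [1.3, 0.0]]          # skew-symmetric
Q = expm(A)
Qt = [[Q[j][i] for j in range(2)] for i in range(2)]
QtQ = mul(Qt, Q)

err = max(abs(QtQ[i][j] - (1.0 if i == j else 0.0))
          for i in range(2) for j in range(2))
print(err < 1e-10)  # True: exp of a skew transformation is orthogonal
```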
In this section we consider the problem of determining the Abelian subgroups ofGL (V ) .
Since we shall use the Lie algebras to characterize the subgroups, our results are necessarily
restricted to connected continuous Abelian subgroups only. We define first the tentative notion of
a maximal Abelian subset H of GL(V). The subset H is required to satisfy the following two conditions: first, any two elements of H commute; second, H is not a proper subset of any other subset of GL(V) with the first property.

Theorem 67.1. Every maximal Abelian subset H of GL(V) is a subgroup of GL(V).

The proof is more or less obvious. Clearly, the identity element I is a member of every
maximal Abelian subset. Next, if X belongs to a certain maximal Abelian subset H , then X −1 also
belongs to H . Indeed, X ∈H means that
XY = YX, Y ∈H (67.1)
which implies

YX⁻¹ = X⁻¹Y,   Y ∈ H   (67.2)
As a result, X −1 ∈H since H is maximal. By the same argument we can prove also that
XY ∈H whenever X ∈H and Y ∈H . Thus, H is a subgroup ofGL (V ) .
In view of this theorem and the opening remarks we shall now consider the maximal,
connected, continuous, Abelian subgroups ofGL (V ) . Our first result is the following.
Theorem 67.2. The one-parameter groups exp ( At ) and exp ( Bt ) commute if and only if their
initial tangents A and B commute.
Sufficiency is obvious, since when AB = BA the series representations for exp ( At ) and
exp(Bt) imply directly that exp(At) exp(Bt) = exp(Bt) exp(At). In fact, we have

exp(At) exp(Bt) = exp((A + B)t)

in this case. Conversely, if exp(At) and exp(Bt) commute, then their initial tangents A and B
must also commute, since we can compare the power series expansions for exp ( At ) exp ( Bt ) and
exp ( Bt ) exp ( At ) for sufficiently small t . Thus the proposition is proved.
Notice that although the implication

AB = BA ⇒ exp(A) exp(B) = exp(B) exp(A)   (67.3)

is true, its converse is not true in general. This is due to the fact that the exponential map is a local
diffeomorphism, but globally it may or may not be one-to-one. Thus there exists a nonzero
solution A for the equation
exp ( A ) = I (67.4)
For example, in the simplest case when V is a two-dimensional space, we can check directly from (65.15) that

exp ⎡ 0  −θ ⎤ = ⎡ cos θ  −sin θ ⎤
    ⎣ θ   0 ⎦   ⎣ sin θ   cos θ ⎦   (67.5)

so that a nonzero solution of (67.4) is

A = ⎡ 0   −2π ⎤
    ⎣ 2π    0 ⎦   (67.6)
For this solution exp ( A ) clearly commutes with exp ( B ) for all B, even though A may or may not
commute with B. Thus the converse of (67.3) does not hold in general.
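Both (67.5) and the solution (67.6) are easy to confirm with the truncated series: exp of the 2×2 skew matrix with parameter θ is the rotation through θ, and θ = 2π returns the identity. A sketch (plain Python; helper names are ours):

```python
# (67.5)/(67.6): exp of the skew matrix is a rotation; a full turn gives I.
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=60):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def skew(theta):
    return [[0.0, -theta], [theta, 0.0]]

R = expm(skew(math.pi / 2))            # quarter turn: [[0, -1], [1, 0]]
I2 = expm(skew(2.0 * math.pi))         # full turn: the identity, as in (67.6)

err_R = max(abs(R[i][j] - [[0.0, -1.0], [1.0, 0.0]][i][j])
            for i in range(2) for j in range(2))
err_I = max(abs(I2[i][j] - (1.0 if i == j else 0.0))
            for i in range(2) for j in range(2))
print(err_R < 1e-9 and err_I < 1e-9)  # True
```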
We need the following lemma: if the continuous group H is connected and N is a neighborhood of the identity in H, then every X ∈ H has a representation

X = Y₁Y₂ ⋯ Y_k   (67.7)

where Yᵢ or Yᵢ⁻¹ belongs to N. [The number of factors k in the representation (67.7) is arbitrary.]
To prove the lemma, let H 0 be the subgroup generated by N . Then H 0 is an open set in
H since from (67.7) every point X ∈H 0 has a neighborhood LX ( N ) in H . On the other hand,
H₀ is also a closed set in H, because the complement of H₀ in H is the union of LY(H₀), Y ∈ H ∖ H₀, which are all open sets in H. As a result, H₀ must coincide with H, since by hypothesis H has only one component.
By virtue of the lemma H is Abelian if and only if N is an Abelian set. Combining this
remark with Theorem 67.2, and using the fact that the exponential map is a local diffeomorphism at
the identity element, we can conclude immediately that H is a maximal connected continuous
Abelian subgroup of GL (V ) if and only if h is a maximal Abelian Lie subalgebra of gl (V ) .
This completes the proof.
Relative to a basis {EΓ, Γ = 1,…,M} of the Lie algebra h of H, the mapping (X^Γ) ↦ X defined by

X ≡ exp(X^Γ EΓ)   (67.8)

is a homomorphism of the additive group R^M with the Abelian group H. This coordinate system
plays the role of a local Cartesian coordinate system on a neighborhood of the identity element
of H . The mapping defined by (67.8) may or may not be one-to-one. In the former case H is
isomorphic to R M , in the latter case H is isomorphic to a cylinder or a torus of dimension M .
We say that the Cartan parallelism on the Abelian group H is a Euclidean parallelism because there exists a local Cartesian coordinate system relative to which the Cartan symbols vanish identically. This Euclidean parallelism on H should not be confused with the Euclidean parallelism on the underlying inner product space L(V;V) in which H is a hypersurface. In general, even if H is Abelian, the tangent spaces at different points of H are still different subspaces of L(V;V). Thus the Euclidean parallelism on H is not the restriction of the Euclidean parallelism of L(V;V) to H.
For example, relative to a basis {e_i} of V, a dilatation X is a linear transformation whose component matrix is diagonal,

⎡⎣X^i_j⎤⎦ = diag(λ₁, …, λ_N),   λᵢ > 0,   i = 1,…,N   (67.9)
where λi may or may not be distinct. The dilatation group with axes {e i } is the group of all
dilatations X . We leave the proof of the fact that a dilatation group is a maximal connected
Abelian continuous subgroup of GL (V ) as an exercise.
Dilatation groups are not the only class of maximal Abelian subgroups of GL(V), of course.
For example, when V is three-dimensional we choose a basis {e1 , e 2 , e 3} forV ; then the subgroup
consisting of all linear transformations having component matrix relative to {e i } of the form
⎡ a  b  0 ⎤
⎢ 0  a  0 ⎥
⎣ b  c  a ⎦
with positive a and arbitrary b and c is also a maximal connected Abelian continuous subgroup of
GL (V ) . Again, we leave the proof of this fact as an exercise.
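As a spot check (not a proof of maximality), the following sketch verifies that two members of this 3×3 family commute and that their product is again of the same form:

```python
# The family [[a, b, 0], [0, a, 0], [b, c, a]] (a > 0) is commutative and
# closed under multiplication; a numerical spot check.

def mat(a, b, c):
    return [[a, b, 0.0], [0.0, a, 0.0], [b, c, a]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

M1 = mat(2.0, 3.0, -1.0)
M2 = mat(0.5, -1.0, 4.0)
P, Q = mul(M1, M2), mul(M2, M1)

commute = all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(3) for j in range(3))
# the product has the same pattern of entries as mat(a, b, c)
same_form = (P[0][2] == 0.0 and P[1][0] == 0.0 and P[1][2] == 0.0
             and abs(P[0][0] - P[1][1]) < 1e-12
             and abs(P[0][0] - P[2][2]) < 1e-12
             and abs(P[0][1] - P[2][0]) < 1e-12)
print(commute and same_form)  # True
```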
For inner product spaces of lower dimensions a complete classification of the Lie subalgebras of the Lie algebra gl(V) is known. In such cases a corresponding classification of
connected continuous subgroups of GL (V ) can be obtained by using the main result of the
preceding section. Then the set of maximal connected Abelian continuous subgroups can be
determined completely. These results are beyond the scope of this chapter, however.
________________________________________________________________
Chapter 13

INTEGRATION OF FIELDS
In this chapter we consider the theory of integration of vector and tensor fields defined on
various geometric entities introduced in the preceding chapters. We assume that the reader is
familiar with the basic notion of the Riemann integral for functions of several real variables.
Since we shall restrict our attention to the integration of continuous fields only, we do not need
the more general notion of the Lebesgue integral.
Section 68. Arc Length, Surface Area, Volume

Let E be a Euclidean manifold and let λ be a smooth curve in E. Then the tangent of λ
is a vector in the translation space V of E defined by
λ̇(t) = lim_{Δt→0} ( λ(t + Δt) − λ(t) ) / Δt   (68.1)
The tangent vector has the magnitude

│λ̇(t)│ = ⎡⎣λ̇(t) ⋅ λ̇(t)⎤⎦^{1/2}   (68.2)
which is a continuous function of t , the parameter of λ . Now suppose that λ is defined for t
from a to b. Then we define the arc length of λ between λ ( a ) and λ ( b ) by
l = ∫ₐᵇ │λ̇(t)│ dt   (68.3)
We claim that the arc length possesses the following properties which justify the definition
(68.3).
(i) The arc length depends only on the path of λ joining λ ( a ) and λ ( b ) ,
independent of the choice of parameterization on the path.
492 Chap. 13 • INTEGRATION OF FIELDS
To prove this, suppose that the path is reparameterized by

λ̄(t̄) = λ(t)   (68.4)

with

t̄ = t̄(t)   (68.5)

Then by the chain rule

dλ̄/dt̄ = λ̇(t) (dt/dt̄)   (68.6)
As a result, we have
∫_{ā}^{b̄} │dλ̄/dt̄│ dt̄ = ∫ₐᵇ │λ̇(t)│ dt   (68.7)
(ii) When the path joining λ ( a ) and λ ( b ) is a straight line segment, the arc length is
given by
l = │λ(b) − λ(a)│   (68.8)
To prove this, we parameterize the segment by

λ(t) = (λ(b) − λ(a))t + λ(a),   t ∈ [0, 1]   (68.9)

so that

l = ∫₀¹ │λ(b) − λ(a)│ dt = │λ(b) − λ(a)│   (68.10)
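Properties (i) and (ii) can be illustrated by evaluating (68.3) with a simple quadrature. The sketch below (composite midpoint rule; helper names are ours) computes the length of a half circle, which is π, under two different parameterizations:

```python
# The arc length integral (68.3) by a composite midpoint rule; the value
# is unchanged under reparameterization, as property (i) asserts.
import math

def arc_length(speed, a, b, n=2000):
    # l = integral of |dλ/dt| over [a, b]
    h = (b - a) / n
    return sum(speed(a + (k + 0.5) * h) * h for k in range(n))

# λ(t) = (cos t, sin t), t in [0, π]:  |λ'(t)| = 1
l1 = arc_length(lambda t: 1.0, 0.0, math.pi)

# reparameterization t = s^2, s in [0, sqrt(π)]:  |dλ/ds| = 2s
l2 = arc_length(lambda s: 2.0 * s, 0.0, math.sqrt(math.pi))

print(abs(l1 - math.pi) < 1e-6 and abs(l2 - math.pi) < 1e-6)  # True
```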
(iii) The arc length integral is additive, i.e., the sum of the arc lengths from
λ ( a ) to λ ( b ) and from λ ( b ) to λ ( c ) is equal to the arc length from λ ( a ) to λ ( c ) .
Now using property (i), we can parameterize the path of λ by the arc length relative to a
certain reference point on the path. As usual, we assume that the path is oriented. Then we
assign a positive parameter s to a point on the positive side and a negative parameter s to a
point on the negative side of the reference point, the absolute value of s being the arc length
between the point and the reference point. Hence when the parameter t is positively oriented,
the arc length parameter s is related to t by
s = s(t) = ∫₀ᵗ │λ̇(τ)│ dτ   (68.11)

so that

ds/dt = │λ̇(t)│   (68.12)
Substituting this formula into the general transformation rule (68.6), we see that the tangent
vector relative to s is a unit vector pointing in the positive direction of the path, as it should be.
Having defined the concept of arc length, we consider next the concept of surface area.
For simplicity we begin with the area of a two-dimensional smooth surface S in E . As usual,
we can characterize S in terms of a pair of parameters ( u Γ , Γ = 1, 2 ) which form a local
coordinate system on S
x ∈ S ⇔ x = ζ ( u1 , u 2 ) (68.13)
where ζ is a smooth mapping. We denote the tangent vector of the coordinate curves by
h Γ ≡ ∂ζ / ∂u Γ , Γ = 1, 2 (68.14)
Then {h Γ } is a basis of the tangent plane S x of S at any x given by (68.13). We assume that
S is oriented and that ( u Γ ) is a positive coordinate system. Thus {h Γ } is also positive for S x .
Now let U be a domain in S with piecewise smooth boundary. We consider first the
simple case when U can be covered entirely by the coordinate system ( u Γ ) . Then we define the
surface area of U by
σ = ∫∫_{ζ⁻¹(U)} e(u¹, u²) du¹ du²   (68.15)

where e(u¹, u²) is defined by

e(u¹, u²) = ( det ⎡⎣h_Γ ⋅ h_Δ⎤⎦ )^{1/2}   (68.16)
The double integral in (68.15) is taken over ζ −1 (U ) , which denotes the set of coordinates ( u Γ )
for points belonging to U .
By essentially the same argument as before, we can prove that the surface area has the
following properties which justify the definition (68.15).
(iv) The surface area depends only on the domain U , independent of the choice of
parameterization on U .
To prove this, we note that under a change of surface coordinates the integrand e of
(68.15) obeys the transformation rule [cf. (61.4)]
ē = e det ⎡⎣∂u^Γ/∂ū^Δ⎤⎦   (68.17)
As a result, we have
∫∫_{ζ̄⁻¹(U)} ē(ū¹, ū²) dū¹ dū² = ∫∫_{ζ⁻¹(U)} e(u¹, u²) du¹ du²   (68.18)
(v) When S is a plane and U is the square spanned by the vectors h₁ and h₂ at the point x₀ ∈ S, the surface area of U is

σ = │h₁ ∧ h₂│   (68.19)

To prove this, we parameterize U by

ζ(u¹, u²) = x₀ + u^Γ h_Γ   (68.20)

so that the density (68.16) reduces to the constant

e = │h₁ ∧ h₂│   (68.21)

and, from (68.20), ζ⁻¹(U) is the square [0, 1] × [0, 1]. Hence by (68.15) we have

σ = ∫₀¹∫₀¹ │h₁ ∧ h₂│ du¹ du² = │h₁ ∧ h₂│   (68.22)
(vi) The surface area integral is additive in the same sense as (iii).
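The definition (68.15)-(68.16) can be exercised numerically. For the unit sphere with the usual coordinates (θ, φ) (a standard example, not one worked in the text), the density is e = sin θ and the total surface area should come out as 4π:

```python
# Surface area via (68.15)-(68.16) for the unit sphere, coordinates (θ, φ).
import math

def tangents(theta, phi):
    # h_1 = ∂ζ/∂θ, h_2 = ∂ζ/∂φ for ζ = (sinθ cosφ, sinθ sinφ, cosθ)
    h1 = (math.cos(theta) * math.cos(phi),
          math.cos(theta) * math.sin(phi),
          -math.sin(theta))
    h2 = (-math.sin(theta) * math.sin(phi),
          math.sin(theta) * math.cos(phi),
          0.0)
    return h1, h2

def density(theta, phi):
    # e = sqrt(det [a_ΓΔ]) with a_ΓΔ = h_Γ · h_Δ, as in (68.16)
    h1, h2 = tangents(theta, phi)
    a11 = sum(x * x for x in h1)
    a22 = sum(x * x for x in h2)
    a12 = sum(x * y for x, y in zip(h1, h2))
    return math.sqrt(max(a11 * a22 - a12 * a12, 0.0))

n = 200
dth, dph = math.pi / n, 2.0 * math.pi / n
area = sum(density((i + 0.5) * dth, (j + 0.5) * dph) * dth * dph
           for i in range(n) for j in range(n))
print(abs(area - 4.0 * math.pi) < 1e-2)  # True: area ≈ 4π
```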
Like the arc length parameter s on a path, a local coordinate system (u^Γ) on S is called an isochoric coordinate system if the surface area density e(u¹, u²) is identically equal to 1 for all (u^Γ).
We can define an isochoric coordinate system (ū^Γ) in terms of an arbitrary surface coordinate system (u^Γ) in the following way. We put

ū¹ = ū¹(u¹, u²) ≡ u¹   (68.23)
and
ū² = ū²(u¹, u²) ≡ ∫₀^{u²} e(u¹, t) dt   (68.24)
where we have assumed that the origin (0, 0) is a point in the domain of the coordinate system (u^Γ).
From (68.23) and (68.24) we see that
∂ū¹/∂u¹ = 1,   ∂ū¹/∂u² = 0,   ∂ū²/∂u² = e(u¹, u²)   (68.25)
and hence

det ⎡⎣∂ū^Γ/∂u^Δ⎤⎦ = e(u¹, u²)   (68.26)
As a result,

ē(ū¹, ū²) = 1   (68.27)
by virtue of (68.17). From (68.26), the coordinate systems (u^Γ) and (ū^Γ) are of the same orientation. Hence if (u^Γ) is positively oriented, then (ū^Γ) is a positive isochoric coordinate system on S.
system (i.e., a rectangular Cartesian coordinate system) generally does not exist on an arbitrary
surface S . But the preceding proof shows that isochoric coordinate systems exist on all S .
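As a check on this construction, the following numerical sketch (a hypothetical example using the graph surface $\boldsymbol\zeta(u^1,u^2) = (u^1, u^2, u^1 u^2)$) computes $\bar{u}^2$ by quadrature of (68.24) and verifies (68.25): the Jacobian determinant equals $e$, so the new density is identically 1.

```python
import math

# A hypothetical surface patch: the graph zeta(u1,u2) = (u1, u2, u1*u2),
# with tangent vectors h_1 = (1,0,u2) and h_2 = (0,1,u1).
def e_density(u1, u2):
    # (68.16): e = sqrt(det[a_GammaDelta])
    a11, a22, a12 = 1 + u2*u2, 1 + u1*u1, u1*u2
    return math.sqrt(a11*a22 - a12*a12)

def u2_bar(u1, u2, n=2000):
    # (68.24): integral of e(u1, t) dt from 0 to u2 (midpoint rule)
    h = u2/n
    return h*sum(e_density(u1, (k + 0.5)*h) for k in range(n))

# (68.25)-(68.26): the Jacobian determinant of the new coordinates is
# d(u2_bar)/du2, which should equal e(u1,u2); the new density is then 1.
u1, u2, d = 0.7, 0.4, 1e-4
jac = (u2_bar(u1, u2 + d) - u2_bar(u1, u2 - d))/(2*d)
new_density = e_density(u1, u2)/jac          # cf. (68.17)
```

The central difference approximates $\partial\bar{u}^2/\partial u^2$; the agreement with $e(u^1,u^2)$ is limited only by the quadrature and differencing steps.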
So far, we have defined the surface area for any domain $U$ which can be covered by a single surface coordinate system. Now suppose that $U$ is not homeomorphic to a domain in $R^2$. Then we decompose $U$ into a collection of subdomains, say

$$U = U_1 \cup \cdots \cup U_K \tag{68.28}$$

where the interiors of $U_1,\ldots,U_K$ are mutually disjoint. We assume that each $U_a$ can be covered by a surface coordinate system so that the surface area $\sigma\left(U_a\right)$ is defined. Then we define $\sigma(U)$ naturally by

$$\sigma(U) \equiv \sum_{a=1}^{K} \sigma\left(U_a\right) \tag{68.29}$$

While the decomposition (68.28) is not unique, of course, by the additive property (vi) of the integral we can verify easily that $\sigma(U)$ is independent of the decomposition. Thus the surface area is well defined.
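As a concrete illustration of the definition (68.15), the sketch below (assuming the standard spherical parameterization of the unit sphere, for which $e = \sin\theta$) computes the area of the upper hemisphere; the exact value is $2\pi$.

```python
import math

# Unit-sphere patch in spherical coordinates (u1,u2) = (theta,phi):
# a_11 = 1, a_22 = sin^2(theta), a_12 = 0, so e = sin(theta)   (68.16)
def surface_area(n=2000):
    ht = (math.pi/2)/n           # theta in [0, pi/2]
    total = 0.0
    for i in range(n):
        t = (i + 0.5)*ht
        # e is independent of phi here, so phi integrates to 2*pi  (68.15)
        total += math.sin(t) * ht * (2*math.pi)
    return total

area = surface_area()            # hemisphere area; exact value 2*pi
```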
Having considered the concepts of arc length and surface area in detail, we can now extend the idea to hypersurfaces in general. Specifically, let $S$ be a hypersurface of dimension $M$. Then locally $S$ can be represented by

$$\mathbf{x} \in S \iff \mathbf{x} = \boldsymbol\zeta\left(u^1,\ldots,u^M\right) \tag{68.30}$$

As before, we put

$$\mathbf{h}_\Gamma \equiv \partial\boldsymbol\zeta/\partial u^\Gamma, \qquad \Gamma = 1,\ldots,M \tag{68.31}$$

and

$$a_{\Gamma\Delta} \equiv \mathbf{h}_\Gamma \cdot \mathbf{h}_\Delta \tag{68.32}$$

Then the $\{\mathbf{h}_\Gamma\}$ span the tangent space $S_{\mathbf{x}}$ and the $a_{\Gamma\Delta}$ define the induced metric on $S_{\mathbf{x}}$. We define the surface area density $e$ by the same formula (68.16) except that $e$ is now a smooth function of the $M$ variables $\left(u^1,\ldots,u^M\right)$, and the matrix $\left[a_{\Gamma\Delta}\right]$ is also $M\times M$.

Now let $U$ be a domain with piecewise smooth boundary in $S$, and assume that $U$ can be covered by a single surface coordinate system. Then we define the surface area of $U$ by

$$\sigma(U) \equiv \int\cdots\int_{\zeta^{-1}(U)} e\left(u^1,\ldots,u^M\right) du^1 \cdots du^M \tag{68.33}$$
The properties (iv)-(vi) remain valid for this definition. In particular, when $S$ is a hyperplane and $U$ is a cube spanned by mutually orthogonal vectors $\mathbf{h}_1,\ldots,\mathbf{h}_M$ at a point of $S$, the surface area of $U$ is

$$\sigma = \left\|\mathbf{h}_1\right\| \cdots \left\|\mathbf{h}_M\right\| \tag{68.34}$$

Isochoric coordinate systems likewise exist on any $S$: in terms of an arbitrary surface coordinate system $(u^\Gamma)$ we put

$$\bar{u}^\Gamma = \bar{u}^\Gamma\left(u^1,\ldots,u^M\right) \equiv u^\Gamma, \qquad \Gamma = 1,\ldots,M-1 \tag{68.35}$$

and

$$\bar{u}^M = \bar{u}^M\left(u^1,\ldots,u^M\right) \equiv \int_0^{u^M} e\left(u^1,\ldots,u^{M-1},t\right) dt \tag{68.36}$$

Then as before

$$\det\left[\frac{\partial \bar{u}^\Gamma}{\partial u^\Delta}\right] = e\left(u^1,\ldots,u^M\right) \tag{68.37}$$

and thus

$$\bar{e}\left(\bar{u}^1,\ldots,\bar{u}^M\right) = e\left(u^1,\ldots,u^M\right) \frac{1}{\det\left[\partial\bar{u}^\Gamma/\partial u^\Delta\right]} = 1 \tag{68.38}$$
Now let $f$ be a continuous scalar field on $S$. Then we define the integral of $f$ over a domain $U$ in $S$ by

$$\int_U f\, d\sigma \equiv \int\cdots\int_{\zeta^{-1}(U)} f e\, du^1 \cdots du^M \tag{69.1}$$

where the function $f$ on the right-hand side denotes the representation of $f$ in terms of the surface coordinates $(u^\Gamma)$:

$$f(\mathbf{x}) = f\left(u^1,\ldots,u^M\right) \tag{69.2}$$

where

$$\mathbf{x} = \boldsymbol\zeta\left(u^1,\ldots,u^M\right) \tag{69.3}$$

It is understood that the multiple integral in (69.1) is taken over the positive orientation on $\zeta^{-1}(U)$ in $R^M$.
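A minimal numerical sketch of (69.1), using a cylinder patch for which $e \equiv 1$ and the hypothetical integrand $f(\mathbf{x}) = \left(x^3\right)^2$:

```python
import math

# Cylinder patch zeta(u1,u2) = (cos u1, sin u1, u2): a_11 = a_22 = 1,
# a_12 = 0, hence e = 1.  Integrate f(x) = (x3)^2 by (69.1) over
# u1 in [0, 2*pi], u2 in [0, 1]; the exact value is 2*pi/3.
def integral(n=1000):
    h = 1.0/n
    total = 0.0
    for k in range(n):
        v = (k + 0.5)*h
        total += v*v * h * (2*math.pi)   # f*e du1 du2; u1 integrates to 2*pi
    return total

val = integral()
```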
By the same argument as in the preceding section, we see that the integral possesses the
following properties.
(i) When $f$ is identical to 1 the integral of $f$ is just the surface area of $U$, namely

$$\sigma(U) = \int_U d\sigma \tag{69.4}$$

(ii) The integral of $f$ is independent of the choice of the coordinate system $(u^\Gamma)$ and is additive with respect to its domain.

500 Chap. 13 • INTEGRATION OF FIELDS

(iii) The integral is a linear function of the integrand in the sense that

$$\int_U \left(\alpha_1 f_1 + \alpha_2 f_2\right) d\sigma = \alpha_1 \int_U f_1\, d\sigma + \alpha_2 \int_U f_2\, d\sigma \tag{69.5}$$

Further, the integral obeys the estimate

$$\sigma(U)\,\min_U f \le \int_U f\, d\sigma \le \sigma(U)\,\max_U f \tag{69.6}$$

(iv) When $U$ is decomposed into subdomains $U_1,\ldots,U_K$ as in (68.28), then

$$\int_U f\, d\sigma = \int_{U_1} f\, d\sigma + \cdots + \int_{U_K} f\, d\sigma \tag{69.7}$$
By property (ii) we can verify easily that the integral is independent of the decomposition.
Having defined the integral of a scalar field, we define next the integral of a vector field. Let $\mathbf{v}$ be a continuous vector field on $S$, i.e.,

$$\mathbf{v} : S \to V \tag{69.8}$$

where $V$ is the translation space of the underlying Euclidean manifold $E$. Generally the values of $\mathbf{v}$ may or may not be tangent to $S$. We choose an arbitrary Cartesian coordinate system with natural basis $\{\mathbf{e}_i\}$. Then $\mathbf{v}$ can be represented by

$$\mathbf{v}(\mathbf{x}) = \upsilon^i(\mathbf{x})\, \mathbf{e}_i \tag{69.9}$$

and we define the integral of $\mathbf{v}$ over $U$ by

$$\int_U \mathbf{v}\, d\sigma \equiv \left(\int_U \upsilon^i\, d\sigma\right) \mathbf{e}_i \tag{69.10}$$

Similarly, a continuous tensor field $\mathbf{A}$ on $S$ can be represented by

$$\mathbf{A}(\mathbf{x}) = A^{i_1\ldots i_r}{}_{j_1\ldots j_s}(\mathbf{x})\, \mathbf{e}_{i_1} \otimes\cdots\otimes \mathbf{e}^{j_s} \tag{69.11}$$

and we define

$$\int_U \mathbf{A}\, d\sigma \equiv \left(\int_U A^{i_1\ldots i_r}{}_{j_1\ldots j_s}\, d\sigma\right) \mathbf{e}_{i_1} \otimes\cdots\otimes \mathbf{e}^{j_s} \tag{69.12}$$
The integrals defined by (69.10) and (69.12) possess the same tensorial order as the
integrand. The fact that a Cartesian coordinate system is used in (69.10) and (69.12) reflects
clearly the crucial dependence of the integral on the Euclidean parallelism of E . Without the
Euclidean parallelism it is generally impossible to add vectors or tensors at different points of the
domain, and an integral would then be meaningless. For example, if we suppress the Euclidean
parallelism on the underlying Euclidean manifold $E$, then the tangential vectors or tensors at
different points of a hypersurface $S$ generally do not belong to the same tangent space or tensor
space. As a result, it is generally impossible to "sum" the values of a tangential field to obtain an
integral without the use of some kind of path-independent parallelism. The Euclidean
parallelism is just one example of such parallelisms. Another example is the Cartan parallelism
on a continuous group defined in the preceding chapter. We shall consider integrals relative to
the Cartan parallelism in Section 72.
In view of (69.10) and (69.12) we see that the integral of a vector field or a tensor field
possesses the following properties.
$$\left\|\int_U \mathbf{v}\, d\sigma\right\| \le \int_U \left\|\mathbf{v}\right\| d\sigma \tag{69.13}$$

and similarly

$$\left\|\int_U \mathbf{A}\, d\sigma\right\| \le \int_U \left\|\mathbf{A}\right\| d\sigma \tag{69.14}$$

where the norm of a vector or a tensor is defined as usual by the inner product of $V$. Then it follows from (69.6) that

$$\left\|\int_U \mathbf{v}\, d\sigma\right\| \le \sigma(U)\, \max_U \left\|\mathbf{v}\right\| \tag{69.15}$$

and

$$\left\|\int_U \mathbf{A}\, d\sigma\right\| \le \sigma(U)\, \max_U \left\|\mathbf{A}\right\| \tag{69.16}$$

However, it does not follow from (69.6), and in fact it is not true, that $\sigma(U)\min_U \left\|\mathbf{v}\right\|$ is a lower bound for the norm of the integral of $\mathbf{v}$.
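The bounds (69.13) and (69.15) can be observed numerically. The sketch below (a hypothetical test with $\mathbf{v}$ equal to the unit normal of the upper unit hemisphere, so $\left\|\mathbf{v}\right\| = 1$) evaluates both sides componentwise as in (69.10):

```python
import math

# v(x) = x, the unit outward normal on the upper unit hemisphere U,
# so |v| = 1 everywhere and sigma(U) = 2*pi (a hypothetical test field).
def integrate(n=400):
    ht, hp = (math.pi/2)/n, (2*math.pi)/n
    vx = vy = vz = 0.0   # Cartesian components of the integral (69.10)
    norm_int = 0.0       # integral of |v| d sigma
    area = 0.0           # sigma(U)
    for i in range(n):
        t = (i + 0.5)*ht
        for j in range(n):
            p = (j + 0.5)*hp
            w = math.sin(t)*ht*hp          # e du1 du2, e = sin(theta)
            vx += math.sin(t)*math.cos(p)*w
            vy += math.sin(t)*math.sin(p)*w
            vz += math.cos(t)*w
            norm_int += w                  # |v| = 1
            area += w
    return math.sqrt(vx*vx + vy*vy + vz*vz), norm_int, area

lhs, rhs, area = integrate()   # lhs = |integral of v|, rhs = integral of |v|
```

Here `lhs` is close to $\pi$ while `rhs` and `area` are close to $2\pi$, consistent with (69.13) and (69.15) but showing that $\sigma(U)\min_U\left\|\mathbf{v}\right\| = 2\pi$ is not a lower bound.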
Sec. 70 • Integration of Differential Forms 503
Integration with respect to the surface area density of a hypersurface is a special case of
a more general integration of differential forms. As remarked in Section 68, the transformation
rule (68.17) of the surface area density is the basic condition which implies the important
property that the surface area integral is independent of the choice of the surface coordinate
system. Since the transformation rule (68.17) is essentially the same as that of the strict
components of certain differential forms, we can extend the operation of integration to those
forms also. This extension is the main result of this section.
Let $\mathbf{Z}$ be a differential $N$-form on the $N$-dimensional Euclidean manifold $E$. Relative to a coordinate system $\left(u^i\right)$, $\mathbf{Z}$ can be represented by

$$\mathbf{Z} = z\, \mathbf{h}^1 \wedge\cdots\wedge \mathbf{h}^N \tag{70.1}$$

where $z$ is called the relative scalar or the density of $\mathbf{Z}$. As we have shown in Section 39, the transformation rule for $z$ is

$$z = \bar{z}\,\det\left[\frac{\partial \bar{u}^i}{\partial u^j}\right] \tag{70.2}$$

This formula is comparable to (68.17). In fact if we require that $\left(u^i\right)$ and $\left(\bar{u}^i\right)$ both be positively oriented, then (70.2) can be regarded as a special case of (68.17) with $M = N$. As a result, we can define the integral of $\mathbf{Z}$ over a domain $U$ in $E$ by

$$\int_U \mathbf{Z} \equiv \int\cdots\int_{\zeta^{-1}(U)} z\left(u^1,\ldots,u^N\right) du^1 \cdots du^N \tag{70.3}$$

and the integral is independent of the choice of the (positive) coordinate system $\mathbf{x} = \boldsymbol\zeta\left(u^i\right)$.
and the integral is independent of the choice of the (positive) coordinate system x = ζ ( u i ) .
Notice that in this definition the Euclidean metric and the Euclidean volume density $e$ are not used at all. In fact, (68.39) can be regarded as a special case of (70.3) when $\mathbf{Z}$ reduces to the Euclidean volume tensor

$$\mathbf{E} = e\, \mathbf{h}^1 \wedge\cdots\wedge \mathbf{h}^N \tag{70.4}$$
Here we have assumed that the coordinate system is positively oriented; otherwise, a negative sign should be inserted on the right-hand side, since the volume density $e$ as defined by (68.16) is always positive. Hence, unlike the volume integral, the integral of a differential $N$-form $\mathbf{Z}$ is
defined only if the underlying space E is oriented. Other than this aspect, the integral of Z and
the volume integral have essentially the same properties since they both are defined by an
invariant N-tuple integral over the coordinates.
Similarly, let $S$ be a hypersurface of dimension $M$ in $E$, and let $\mathbf{Z}$ be a tangential $M$-form on $S$. Relative to a surface coordinate system $(u^\Gamma)$, $\mathbf{Z}$ can be represented by

$$\mathbf{Z} = z\, \mathbf{h}^1 \wedge\cdots\wedge \mathbf{h}^M \tag{70.5}$$

where the density $z$ obeys the transformation rule

$$z = \bar{z}\,\det\left[\frac{\partial \bar{u}^\Gamma}{\partial u^\Delta}\right] \tag{70.6}$$

Hence we can define the integral of $\mathbf{Z}$ over a domain $U$ in $S$ by

$$\int_U \mathbf{Z} \equiv \int\cdots\int_{\zeta^{-1}(U)} z\left(u^1,\ldots,u^M\right) du^1 \cdots du^M \tag{70.7}$$

and the integral is independent of the choice of the positive surface coordinate system $(u^\Gamma)$. By the same remark as before, we can regard (70.7) as a generalization of (68.33).
The definition (70.7) is valid for any tangential $M$-form $\mathbf{Z}$ on $S$. In this definition the surface metric and the surface area density are not used. The fact that $\mathbf{Z}$ is a tangential field on $S$ is not essential in the definition. Indeed, if $\mathbf{Z}$ is an arbitrary skew-symmetric spatial covariant tensor of order $M$ on $S$, then we define the density of $\mathbf{Z}$ on $S$ relative to $(u^\Gamma)$ simply by

$$z = \mathbf{Z}\left(\mathbf{h}_1,\ldots,\mathbf{h}_M\right) \tag{70.8}$$

Using this density, we define the integral of $\mathbf{Z}$ again by (70.7). Of course, the formula (70.8) is valid for a tangential $M$-form $\mathbf{Z}$ also, since it merely represents the strict component of the tangential projection of $\mathbf{Z}$.
This remark can be further generalized in the following situation: Suppose that $S$ is a hypersurface contained in another hypersurface $S_0$ in $E$, and let $\mathbf{Z}$ be a tangential $M$-form on $S_0$. Then $\mathbf{Z}$ gives rise to a density on $S$ by the same formula (70.8), and the integral of $\mathbf{Z}$ over any domain $U$ in $S$ is defined by (70.7). Algebraically, this remark is a consequence of the simple fact that a skew-symmetric tensor over the tangent space of $S_0$ gives rise to a unique skew-symmetric tensor over the tangent space of $S$, since the latter tangent space is a subspace of the former one.

It should be noted, however, that the integral of $\mathbf{Z}$ is defined over $S$ only if the order of $\mathbf{Z}$ coincides with the dimension of $S$. Further, the value of the integral is always a scalar, not a vector or a tensor as in the preceding section. We can regard the integral of a vector field or a tensor field as a special case of the integral of a differential form only when the fields are represented in terms of their Cartesian components as shown in (69.9) and (69.11).
An important special case of the integral of a differential form is the line integral in
classical vector analysis. In this case S reduces to an oriented path λ , and Z is a 1-form w.
When the Euclidean metric on $E$ is used, $\mathbf{w}$ corresponds simply to a (spatial or tangential) vector field on $\lambda$. Now using any positive parameter $t$ on $\lambda$, we obtain from (70.7)

$$\int_\lambda \mathbf{w} = \int_a^b \mathbf{w}(t)\cdot\dot{\boldsymbol\lambda}(t)\, dt \tag{70.9}$$

Here we have used the fact that for an inner product space the isomorphism of a vector and a covector is given by

$$\left\langle \mathbf{w}, \dot{\boldsymbol\lambda} \right\rangle = \mathbf{w} \cdot \dot{\boldsymbol\lambda} \tag{70.10}$$

The tangent vector $\dot{\boldsymbol\lambda}$ plays the role of the natural basis vector $\mathbf{h}_1$ associated with the parameter $t$, and (70.10) is just the special case of (70.8) when $M = 1$.

The reader should verify directly that the right-hand side of (70.9) is independent of the choice of the (positive) parameterization $t$ on $\lambda$. By virtue of this remark, (70.9) is also written as

$$\int_\lambda \mathbf{w} = \int_\lambda \mathbf{w} \cdot d\boldsymbol\lambda \tag{70.11}$$
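The parameterization independence of (70.9) can be checked numerically; the sketch below (a hypothetical example with $\mathbf{w} = \left(-x^2, x^1, 0\right)$ along a quarter circle) evaluates the line integral for two different positive parameterizations:

```python
import math

# w(x) = (-x2, x1, 0); path: quarter circle from (1,0,0) to (0,1,0).
def line_integral(gamma, dgamma, a, b, n=4000):
    # (70.9): midpoint-rule sum of w(gamma(t)) . gamma'(t) dt
    h = (b - a)/n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5)*h
        x, y, z = gamma(t)
        dx, dy, dz = dgamma(t)
        total += (-y*dx + x*dy) * h
    return total

# parameterization 1: t in [0, pi/2]
I1 = line_integral(lambda t: (math.cos(t), math.sin(t), 0.0),
                   lambda t: (-math.sin(t), math.cos(t), 0.0),
                   0.0, math.pi/2)
# parameterization 2: t = s^2, s in [0, sqrt(pi/2)]
I2 = line_integral(lambda s: (math.cos(s*s), math.sin(s*s), 0.0),
                   lambda s: (-2*s*math.sin(s*s), 2*s*math.cos(s*s), 0.0),
                   0.0, math.sqrt(math.pi/2))
```

Both evaluations give $\pi/2$, as the reparameterization $t = s^2$ merely rescales the tangent vector $\dot{\boldsymbol\lambda}$ and the parameter interval in compensating ways.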
Similarly, when $E$ is three-dimensional and $S$ is a two-dimensional surface, a 2-form $\mathbf{Z}$ on $S$ corresponds to a spatial vector field $\mathbf{w}$ by

$$\mathbf{Z}\left(\mathbf{h}_1,\mathbf{h}_2\right) = \mathbf{w} \cdot \left(\mathbf{h}_1 \times \mathbf{h}_2\right) \tag{70.12}$$

In this case (70.7) reduces to

$$\int_U \mathbf{Z} = \iint_{\zeta^{-1}(U)} \mathbf{w} \cdot \left(\mathbf{h}_1 \times \mathbf{h}_2\right) du^1 du^2 \equiv \int_U \mathbf{w} \cdot d\boldsymbol\sigma \tag{70.13}$$

where

$$d\boldsymbol\sigma \equiv \left(\mathbf{h}_1 \times \mathbf{h}_2\right) du^1 du^2 \tag{70.14}$$

The reader will verify easily that the right-hand side of (70.14) can be rewritten as

$$d\boldsymbol\sigma = e\,\mathbf{n}\, du^1 du^2 \tag{70.15}$$

where $e$ is the surface area density on $S$ defined by (68.16), and where $\mathbf{n}$ is the positive unit normal of $S$ defined by

$$\mathbf{n} = \frac{\mathbf{h}_1 \times \mathbf{h}_2}{\left\|\mathbf{h}_1 \times \mathbf{h}_2\right\|} = \frac{1}{e}\, \mathbf{h}_1 \times \mathbf{h}_2 \tag{70.16}$$

Substituting (70.15) into (70.13), we see that the integral of $\mathbf{Z}$ can be represented by

$$\int_U \mathbf{Z} = \iint_{\zeta^{-1}(U)} \left(\mathbf{w} \cdot \mathbf{n}\right) e\, du^1 du^2 \tag{70.17}$$

which shows clearly that the integral is independent of the choice of the (positive) surface coordinate system $(u^\Gamma)$.
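As a numerical illustration of (70.13)-(70.16), the following sketch (a hypothetical flux computation with $\mathbf{w} = \mathbf{e}_3$ through the upper unit hemisphere) builds $\mathbf{h}_1$ and $\mathbf{h}_2$ by finite differences and integrates $\mathbf{w}\cdot\left(\mathbf{h}_1\times\mathbf{h}_2\right)$; the exact value is $\pi$, the area of the equatorial disk.

```python
import math

def zeta(t, p):   # unit sphere, (u1,u2) = (theta, phi)
    return (math.sin(t)*math.cos(p), math.sin(t)*math.sin(p), math.cos(t))

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def partial(t, p, wrt, d=1e-5):
    # h_Gamma = d zeta / d u^Gamma by central differences  (68.31)
    if wrt == 0:
        u, v = zeta(t+d, p), zeta(t-d, p)
    else:
        u, v = zeta(t, p+d), zeta(t, p-d)
    return tuple((a-b)/(2*d) for a, b in zip(u, v))

def flux(n=200):
    # (70.13): sum of w . (h1 x h2) du1 du2 with w = e3
    ht, hp = (math.pi/2)/n, (2*math.pi)/n
    total = 0.0
    for i in range(n):
        t = (i + 0.5)*ht
        for j in range(n):
            p = (j + 0.5)*hp
            h1, h2 = partial(t, p, 0), partial(t, p, 1)
            total += cross(h1, h2)[2] * ht * hp
    return total

F = flux()   # exact value: pi
```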
Since the multiple of an $M$-form $\mathbf{Z}$ by a scalar field $f$ remains an $M$-form, we can define the integral of $f$ with respect to $\mathbf{Z}$ simply as the integral of $f\mathbf{Z}$. Using a Cartesian coordinate representation, we can extend this operation to integrals of a vector field or a tensor field relative to a differential form. The integrals defined in the preceding section are special cases of this general operation when the differential forms are the Euclidean surface area densities induced by the Euclidean metric on the underlying space $E$.
Sec. 71 • Generalized Stokes’ Theorem 507
Let $\mathbf{Z}$ be a tangential $K$-form on $S$ with the representation

$$\mathbf{Z} = \sum_{\Gamma_1 < \cdots < \Gamma_K} Z_{\Gamma_1\cdots\Gamma_K}\, \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \tag{71.1}$$

Then the exterior derivative $d\mathbf{Z}$ is given by

$$d\mathbf{Z} = \sum_{\Gamma_1 < \cdots < \Gamma_K} dZ_{\Gamma_1\cdots\Gamma_K} \wedge \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \tag{71.2}$$

where $dZ_{\Gamma_1\cdots\Gamma_K}$ is defined by

$$dZ_{\Gamma_1\cdots\Gamma_K} \equiv \frac{\partial Z_{\Gamma_1\cdots\Gamma_K}}{\partial u^\Delta}\, \mathbf{h}^\Delta \tag{71.3}$$

In this section we shall establish a general result which connects the integral of $d\mathbf{Z}$ over a $(K+1)$-dimensional domain $U$ in $S$ with the integral of $\mathbf{Z}$ over the $K$-dimensional boundary surface $\partial U$ of $U$. We begin with a preliminary lemma about a basic property of the exterior derivative.
Suppose that $S$ is contained in another hypersurface $S_0$ of dimension $P$, and let $\left(y^\alpha\right)$ be a coordinate system on $S_0$, so that on $S$

$$\mathbf{x} = \boldsymbol\eta\left(y^\alpha\right) \in S \iff y^\alpha = y^\alpha\left(u^\Gamma\right), \qquad \alpha = 1,\ldots,P \tag{71.4}$$

Now let $\mathbf{W}$ be a $K$-form on $S_0$,

$$\mathbf{W} = \sum_{\alpha_1 < \cdots < \alpha_K} W_{\alpha_1\cdots\alpha_K}\, \mathbf{g}^{\alpha_1} \wedge\cdots\wedge \mathbf{g}^{\alpha_K} \tag{71.5}$$

where $\{\mathbf{g}^\alpha, \alpha = 1,\ldots,P\}$ denotes the natural basis of $\left(y^\alpha\right)$ on $S_0$, and suppose that $\mathbf{Z}$ is the tangential projection of $\mathbf{W}$ on $S$, i.e.,

$$\mathbf{Z} = \sum_{\alpha_1 < \cdots < \alpha_K} W_{\alpha_1\cdots\alpha_K}\, \frac{\partial y^{\alpha_1}}{\partial u^{\Gamma_1}} \cdots \frac{\partial y^{\alpha_K}}{\partial u^{\Gamma_K}}\, \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \tag{71.6}$$

where $\{\mathbf{h}^\Gamma, \Gamma = 1,\ldots,M\}$ denotes the natural basis of $(u^\Gamma)$ on $S$. Then the exterior derivative of $\mathbf{Z}$ coincides with the tangential projection of the exterior derivative of $\mathbf{W}$. In other words, the operation of exterior derivative commutes with the operation of tangential projection.
We can prove this lemma by direct calculation of d W and d Z . From (71.5) and (71.2)
d W is given by
$$d\mathbf{W} = \sum_{\alpha_1 < \cdots < \alpha_K} \frac{\partial W_{\alpha_1\cdots\alpha_K}}{\partial y^\beta}\, \mathbf{g}^\beta \wedge \mathbf{g}^{\alpha_1} \wedge\cdots\wedge \mathbf{g}^{\alpha_K} \tag{71.7}$$

and the tangential projection of $d\mathbf{W}$ on $S$ is

$$\sum_{\alpha_1 < \cdots < \alpha_K} \frac{\partial W_{\alpha_1\cdots\alpha_K}}{\partial y^\beta}\, \frac{\partial y^\beta}{\partial u^\Delta} \frac{\partial y^{\alpha_1}}{\partial u^{\Gamma_1}} \cdots \frac{\partial y^{\alpha_K}}{\partial u^{\Gamma_K}}\, \mathbf{h}^\Delta \wedge \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \tag{71.8}$$

On the other hand, from (71.6) and (71.2) the exterior derivative of $\mathbf{Z}$ is

$$\begin{aligned} d\mathbf{Z} &= \sum_{\alpha_1 < \cdots < \alpha_K} \frac{\partial}{\partial u^\Delta}\left( W_{\alpha_1\cdots\alpha_K}\, \frac{\partial y^{\alpha_1}}{\partial u^{\Gamma_1}} \cdots \frac{\partial y^{\alpha_K}}{\partial u^{\Gamma_K}} \right) \mathbf{h}^\Delta \wedge \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \\ &= \sum_{\alpha_1 < \cdots < \alpha_K} \frac{\partial W_{\alpha_1\cdots\alpha_K}}{\partial y^\beta}\, \frac{\partial y^\beta}{\partial u^\Delta} \frac{\partial y^{\alpha_1}}{\partial u^{\Gamma_1}} \cdots \frac{\partial y^{\alpha_K}}{\partial u^{\Gamma_K}}\, \mathbf{h}^\Delta \wedge \mathbf{h}^{\Gamma_1} \wedge\cdots\wedge \mathbf{h}^{\Gamma_K} \end{aligned} \tag{71.9}$$
where we have used the skew symmetry of the exterior product and the symmetry of the second
derivative ∂ 2 y α / ∂u Γ∂u Δ with respect to Γ and Δ. Comparing (71.9) with (71.8), we have
completed the proof of the lemma.
The generalized Stokes theorem then asserts that

$$\int_{\partial U} \mathbf{Z} = \int_U d\mathbf{Z} \tag{71.10}$$

for any $(K+1)$-dimensional domain $U$ in $S$ with piecewise smooth boundary $\partial U$.
Before proving this theorem, we remark first that, other than the condition

$$S \subset S_0 \tag{71.11}$$

the hypersurface $S_0$ is entirely arbitrary. In applications we often take $S_0 = E$, but this choice is
not necessary. Second, by virtue of the preceding lemma it suffices to prove (71.10) for
tangential ( M − 1) − forms Z on S only. In other words, we can choose S 0 to be the same as
S without loss of generality. Indeed, as explained in the preceding section, the integrals in
(71.10) are equal to those of tangential projection of the forms Z and d Z on S . Then by virtue
of the lemma the formula (71.10) amounts to nothing but a formula for tangential forms on S .
Before proving the formula (71.10) in general, we consider first the simplest special case
when Z is a 1-form and S is a two-dimensional surface. This case corresponds to the Stokes
formula in classical vector analysis. As usual, we denote a 1-form by w since it is merely a
covariant vector field on $S$. Let the component form of $\mathbf{w}$ in $(u^\Gamma)$ be

$$\mathbf{w} = w_\Gamma\, \mathbf{h}^\Gamma \tag{71.12}$$

Then from (71.2) and (71.3)

$$d\mathbf{w} = \frac{\partial w_\Gamma}{\partial u^\Delta}\, \mathbf{h}^\Delta \wedge \mathbf{h}^\Gamma = \left( \frac{\partial w_2}{\partial u^1} - \frac{\partial w_1}{\partial u^2} \right) \mathbf{h}^1 \wedge \mathbf{h}^2 \tag{71.13}$$

Hence in this case (71.10) takes the form

$$\int_{\zeta^{-1}(\partial U)} w_\Gamma\, \dot\lambda^\Gamma\, dt = \iint_{\zeta^{-1}(U)} \left( \frac{\partial w_2}{\partial u^1} - \frac{\partial w_1}{\partial u^2} \right) du^1\, du^2 \tag{71.14}$$
Now since the integrals in (71.14) are independent of the choice of positive coordinate
system, for simplicity we consider first the case when $\zeta^{-1}(U)$ is the square $[0,1]\times[0,1]$ in $R^2$. Naturally, we use the parameters $u^1$, $u^2$, $1-u^1$, and $1-u^2$ on the boundary segments $(0,0)\to(1,0)$, $(1,0)\to(1,1)$, $(1,1)\to(0,1)$, and $(0,1)\to(0,0)$, respectively. Relative to this parameterization on $\partial U$ the left-hand side of (71.14) reduces to

$$\int_0^1 w_1\left(u^1,0\right) du^1 + \int_0^1 w_2\left(1,u^2\right) du^2 - \int_0^1 w_1\left(u^1,1\right) du^1 - \int_0^1 w_2\left(0,u^2\right) du^2 \tag{71.15}$$
Similarly, relative to the surface coordinate system ( u1 , u 2 ) the right-hand side of (71.14)
reduces to
$$\int_0^1\!\!\int_0^1 \left( \frac{\partial w_2}{\partial u^1} - \frac{\partial w_1}{\partial u^2} \right) du^1\, du^2 \tag{71.16}$$

which may be integrated by parts once with respect to one of the two variables $\left(u^1,u^2\right)$; the result is precisely the same as (71.15). Thus (71.14) is proved in this simple case.
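The equality of (71.15) and (71.16) can be observed numerically for particular components; the sketch below assumes the hypothetical choices $w_1 = \sin\left(u^1\right)\left(u^2\right)^2$ and $w_2 = \left(u^1\right)^3$ on the unit square:

```python
import math

w1 = lambda u, v: math.sin(u)*v*v   # hypothetical components of w
w2 = lambda u, v: u**3

def mid(f, n=2000):                 # midpoint rule on [0, 1]
    h = 1.0/n
    return h*sum(f((k + 0.5)*h) for k in range(n))

# Left-hand side of (71.14): the four boundary segments (71.15)
lhs = (mid(lambda u: w1(u, 0.0)) + mid(lambda v: w2(1.0, v))
       - mid(lambda u: w1(u, 1.0)) - mid(lambda v: w2(0.0, v)))

# Right-hand side of (71.14): double integral of dw2/du1 - dw1/du2 (71.16)
def rhs(n=400):
    h = 1.0/n
    tot = 0.0
    for i in range(n):
        u = (i + 0.5)*h
        for j in range(n):
            v = (j + 0.5)*h
            tot += (3*u*u - 2*math.sin(u)*v) * h*h
    return tot

R = rhs()   # both sides equal cos(1) for this choice of w
```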
In general $U$ may not be homeomorphic to the square $[0,1]\times[0,1]$, of course. Then we decompose $U$ as before by (68.28), and we assume that each $U_a$, $a = 1,\ldots,K$, can be represented by the range

$$U_a = \boldsymbol\zeta_a\left([0,1]\times[0,1]\right) \tag{71.17}$$

of a square relative to some surface coordinate system. Then by the result just proved

$$\int_{\partial U_a} \mathbf{w} = \int_{U_a} d\mathbf{w}, \qquad a = 1,\ldots,K \tag{71.18}$$

Now adding (71.18) with respect to $a$ and observing the fact that all common boundaries of pairs of $U_1,\ldots,U_K$ are oriented oppositely as shown in Figure 10, we obtain

$$\int_{\partial U} \mathbf{w} = \int_U d\mathbf{w} \tag{71.19}$$
Figure 10. A decomposition of $U$ into subdomains $U_1,\ldots,U_8$.
The proof of (71.10) for the general case with an arbitrary M is essentially the same as the
proof of the preceding special case. To illustrate the similarity of the proofs we consider next the
case M = 3. In this case Z is a 2-form on a 3-dimensional hypersurface S . Let
( u Γ , Γ = 1, 2, 3) be a positive surface coordinate system on S as usual. Then Z can be
represented by
$$\mathbf{Z} = \sum_{\Gamma < \Delta} Z_{\Gamma\Delta}\, \mathbf{h}^\Gamma \wedge \mathbf{h}^\Delta = Z_{12}\, \mathbf{h}^1 \wedge \mathbf{h}^2 + Z_{13}\, \mathbf{h}^1 \wedge \mathbf{h}^3 + Z_{23}\, \mathbf{h}^2 \wedge \mathbf{h}^3 \tag{71.20}$$

and from (71.2) and (71.3)

$$d\mathbf{Z} = \left( \frac{\partial Z_{12}}{\partial u^3} - \frac{\partial Z_{13}}{\partial u^2} + \frac{\partial Z_{23}}{\partial u^1} \right) \mathbf{h}^1 \wedge \mathbf{h}^2 \wedge \mathbf{h}^3 \tag{71.21}$$
As before, we now assume that U can be represented by the range of a cube relative to a certain
$(u^\Gamma)$, namely

$$U = \boldsymbol\zeta\left([0,1]\times[0,1]\times[0,1]\right) \tag{71.22}$$

Then the right-hand side of (71.10) reduces to

$$\int_0^1\!\!\int_0^1\!\!\int_0^1 \left( \frac{\partial Z_{12}}{\partial u^3} - \frac{\partial Z_{13}}{\partial u^2} + \frac{\partial Z_{23}}{\partial u^1} \right) du^1\, du^2\, du^3 \tag{71.23}$$

which may be integrated by parts once with respect to one of the three variables $\left(u^1,u^2,u^3\right)$. The result consists of six terms:

$$\begin{aligned} &\int_0^1\!\!\int_0^1 Z_{23}\left(1,u^2,u^3\right) du^2\, du^3 - \int_0^1\!\!\int_0^1 Z_{23}\left(0,u^2,u^3\right) du^2\, du^3 \\ -&\int_0^1\!\!\int_0^1 Z_{13}\left(u^1,1,u^3\right) du^1\, du^3 + \int_0^1\!\!\int_0^1 Z_{13}\left(u^1,0,u^3\right) du^1\, du^3 \\ +&\int_0^1\!\!\int_0^1 Z_{12}\left(u^1,u^2,1\right) du^1\, du^2 - \int_0^1\!\!\int_0^1 Z_{12}\left(u^1,u^2,0\right) du^1\, du^2 \end{aligned} \tag{71.24}$$

which are precisely the representations of the left-hand side of (71.10) on the six faces of the cube with an appropriate orientation on each face. Thus (71.10) is proved when (71.22) holds.
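The same verification can be carried out numerically for $M = 3$; the sketch below (with hypothetical components $Z_{12}$, $Z_{13}$, $Z_{23}$) compares the triple integral (71.23) with the six face terms over the unit cube:

```python
import math

# Hypothetical strict components of a 2-form Z on the unit cube
Z12 = lambda u1, u2, u3: u1 + u3*u3
Z13 = lambda u1, u2, u3: u2*u2*u3
Z23 = lambda u1, u2, u3: math.sin(u1)*u2

def vol(n=60):
    # (71.23): triple midpoint rule of dZ12/du3 - dZ13/du2 + dZ23/du1
    h = 1.0/n
    tot = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                u1, u2, u3 = (i+0.5)*h, (j+0.5)*h, (k+0.5)*h
                tot += (2*u3 - 2*u2*u3 + math.cos(u1)*u2) * h**3
    return tot

def face(n=200):
    # the six face terms with the signs of (71.24)
    h = 1.0/n
    tot = 0.0
    for a in range(n):
        for b in range(n):
            s, t = (a + 0.5)*h, (b + 0.5)*h
            tot += (Z23(1.0, s, t) - Z23(0.0, s, t)) * h*h
            tot -= (Z13(s, 1.0, t) - Z13(s, 0.0, t)) * h*h
            tot += (Z12(s, t, 1.0) - Z12(s, t, 0.0)) * h*h
    return tot

V, F = vol(), face()   # both approximate 1/2 + sin(1)/2
```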
In the general case we decompose $U$ into subdomains $U_a$ which can be represented as in (71.22); then

$$\int_{\partial U_a} \mathbf{Z} = \int_{U_a} d\mathbf{Z}, \qquad a = 1,\ldots,K \tag{71.26}$$

and summing (71.26) over $a$ yields (71.10) as before. Following exactly the same pattern, the formula (71.10) can be proved for an arbitrary $M = 4,5,6,\ldots$. Thus the theorem is proved.
The formula (71.10) reduces to two important special cases in classical vector analysis
when the underlying Euclidean manifold E is three-dimensional. First, when M = 2 and U is
two-dimensional, the formula takes the form
$$\int_{\partial U} \mathbf{w} = \int_U d\mathbf{w} \tag{71.27}$$

which can be rewritten in the classical notation as

$$\int_{\partial U} \mathbf{w} \cdot d\boldsymbol\lambda = \int_U \operatorname{curl} \mathbf{w} \cdot d\boldsymbol\sigma \tag{71.28}$$
where $d\boldsymbol\sigma$ is given by (70.14) and where $\operatorname{curl}\mathbf{w}$ is the axial vector corresponding to $d\mathbf{w}$, i.e.,

$$d\mathbf{w}\left(\mathbf{u},\mathbf{v}\right) = \operatorname{curl} \mathbf{w} \cdot \left(\mathbf{u} \times \mathbf{v}\right) \tag{71.29}$$

for any $\mathbf{u},\mathbf{v}$ in $V$. Second, when $M = 3$ and $U$ is three-dimensional the formula takes the form

$$\int_{\partial U} \mathbf{w} \cdot d\boldsymbol\sigma = \int_U \operatorname{div} \mathbf{w}\, d\upsilon \tag{71.30}$$

where the volume element $d\upsilon$ is defined by

$$d\upsilon \equiv e\, du^1\, du^2\, du^3 \tag{71.31}$$

which reduces in rectangular Cartesian coordinates to

$$d\upsilon = dx^1\, dx^2\, dx^3 \tag{71.32}$$

In this case the 2-form $\mathbf{Z}$ corresponds to the vector field $\mathbf{w}$ by

$$\mathbf{Z}\left(\mathbf{u},\mathbf{v}\right) = \mathbf{w} \cdot \left(\mathbf{u} \times \mathbf{v}\right), \qquad \mathbf{u},\mathbf{v} \in V \tag{71.34}$$

and, relative to rectangular Cartesian coordinates $\left(x^i\right)$,

$$d\mathbf{Z} = \left( \frac{\partial w_1}{\partial x^1} + \frac{\partial w_2}{\partial x^2} + \frac{\partial w_3}{\partial x^3} \right) \mathbf{e}^1 \wedge \mathbf{e}^2 \wedge \mathbf{e}^3 = \left(\operatorname{div}\mathbf{w}\right) \mathbf{E} \tag{71.35}$$
As a result, we have

$$\int_{\partial U} \mathbf{Z} = \int_{\partial U} \mathbf{w} \cdot d\boldsymbol\sigma \tag{71.36}$$

and

$$\int_U d\mathbf{Z} = \iiint_U \operatorname{div} \mathbf{w}\, d\upsilon \tag{71.37}$$
The formulas (71.28) and (71.30) are called Stokes’ theorem and Gauss’ divergence
theorem, respectively, in classical vector analysis.
Sec. 72 • Invariant Integrals on Continuous Groups 515
On an $M$-dimensional continuous group a left-invariant volume tensor field $\mathbf{Z}$ has the representation

$$\mathbf{Z} = c\, \mathbf{Z}_0 \tag{72.1}$$

where $\mathbf{Z}_0$ is any particular left-invariant volume tensor field and where $c$ is a constant. The integral of $\mathbf{Z}$ over any domain $U$ obeys the condition

$$\int_U \mathbf{Z} = \int_{L_{\mathbf{X}}(U)} \mathbf{Z} \tag{72.2}$$

If $\mathbf{Z}$ is right-invariant as well, then also

$$\int_U \mathbf{Z} = \int_{R_{\mathbf{X}}(U)} \mathbf{Z}, \qquad \int_U \mathbf{Z} = \int_{J(U)} \mathbf{Z} \tag{72.3}$$

Here we have used the fact that $J$ preserves the orientation of the group when $M$ is even, while $J$ reverses the orientation of the group when $M$ is odd.
By virtue of the representation (72.1) all left-invariant volume tensor fields differ from
one another by a constant multiple only and thus they are either all right-invariant or all not
right-invariant. As we shall see, the left-invariant volume tensor fields on GL (V ) , SL (V ) ,
O (V ) , and all continuous subgroups of O (V ) are right-invariant. Hence invariant integrals
exist on these groups. We consider first the general linear group GL (V ) .
To prove that the left-invariant volume tensor fields on $GL(V)$ are also right-invariant, we recall first from exterior algebra the transformation rule for a volume tensor under a linear map of the underlying vector space. Let $W$ be an arbitrary vector space of dimension $M$, and suppose that $\mathbf{A}$ is a linear transformation of $W$,

$$\mathbf{A} : W \to W \tag{72.4}$$

Then

$$\mathbf{A}^*\left(\mathbf{E}\right) = \left(\det \mathbf{A}\right) \mathbf{E} \tag{72.5}$$

for any volume tensor $\mathbf{E}$ on $W$, where $\mathbf{A}^*$ is defined by

$$\mathbf{A}^*\left(\mathbf{E}\right)\left(\mathbf{e}_1,\ldots,\mathbf{e}_M\right) = \mathbf{E}\left(\mathbf{A}\mathbf{e}_1,\ldots,\mathbf{A}\mathbf{e}_M\right) \tag{72.6}$$

for any $\{\mathbf{e}_1,\ldots,\mathbf{e}_M\}$ in $W$. From the representation of a left-invariant field on $GL(V)$ the value of the field at any point $\mathbf{X}$ is obtained from the value at the identity $\mathbf{I}$ by the linear map

$$\nabla L_{\mathbf{X}} : L(V;V) \to L(V;V) \tag{72.7}$$

which is defined by

$$\nabla L_{\mathbf{X}}\left(\mathbf{K}\right) = \mathbf{X}\mathbf{K}, \qquad \mathbf{K} \in L(V;V) \tag{72.8}$$

Similarly, for a right-invariant field the corresponding map is

$$\nabla R_{\mathbf{X}}\left(\mathbf{K}\right) = \mathbf{K}\mathbf{X}, \qquad \mathbf{K} \in L(V;V) \tag{72.9}$$

Then by virtue of (72.5) a left-invariant volume tensor field is also right-invariant if and only if

$$\det \nabla L_{\mathbf{X}} = \det \nabla R_{\mathbf{X}} \tag{72.10}$$

for all $\mathbf{X}$.
Since the dimension of $L(V;V)$ is $N^2$, the matrices of $\nabla R_{\mathbf{X}}$ and $\nabla L_{\mathbf{X}}$ are $N^2 \times N^2$. For simplicity we use the product basis $\{\mathbf{e}_i \otimes \mathbf{e}_j\}$ for the space $L(V;V)$; then (72.8) and (72.9) can be represented by

$$\nabla L_{\mathbf{X}}\left(\mathbf{e}_k \otimes \mathbf{e}_l\right) = \mathbf{X}\mathbf{e}_k \otimes \mathbf{e}_l = X_{ik}\, \mathbf{e}_i \otimes \mathbf{e}_l \tag{72.11}$$

and

$$\nabla R_{\mathbf{X}}\left(\mathbf{e}_k \otimes \mathbf{e}_l\right) = \mathbf{e}_k \otimes \mathbf{X}^T\mathbf{e}_l = X_{lj}\, \mathbf{e}_k \otimes \mathbf{e}_j \tag{72.12}$$

To prove (72.10), we have to show that the $N^2 \times N^2$ matrices of $\nabla L_{\mathbf{X}}$ and $\nabla R_{\mathbf{X}}$ have the same determinant. But this fact is obvious, since from (72.11) and (72.12) these matrices are $\mathbf{X} \otimes \mathbf{I}$ and $\mathbf{I} \otimes \mathbf{X}^T$, respectively, whose determinants are both $\left(\det \mathbf{X}\right)^N$. Thus the left-invariant volume tensor fields on $GL(V)$ are also right-invariant.
Next we consider the special linear group $SL(V)$. The tangent space $SL(V)_{\mathbf{I}}$ of $SL(V)$ at the identity consists of all $\mathbf{K} \in L(V;V)$ such that

$$\operatorname{tr}\left(\mathbf{K}\right) = 0 \tag{72.14}$$

This result means that the orthogonal complement of $SL(V)_{\mathbf{I}}$ relative to the inner product on $L(V;V)$ is the one-dimensional subspace

$$l = \left\{\alpha \mathbf{I},\ \alpha \in \mathbb{R}\right\} \tag{72.15}$$

Now from (72.8) and (72.9) the linear maps $\nabla R_{\mathbf{X}}$ and $\nabla L_{\mathbf{X}}$ coincide on $l$, namely

$$\nabla L_{\mathbf{X}}\left(\alpha \mathbf{I}\right) = \alpha \mathbf{X} = \nabla R_{\mathbf{X}}\left(\alpha \mathbf{I}\right) \tag{72.16}$$

By virtue of (72.16) and (72.10) we see that the restrictions of $\nabla R_{\mathbf{X}}$ and $\nabla L_{\mathbf{X}}$ on $SL(V)_{\mathbf{I}}$ give rise to the same volume tensor at $\mathbf{X}$ from any volume tensor at $\mathbf{I}$. As a result every left-invariant volume tensor field on $SL(V)$ or $UM(V)$ is also right-invariant, and thus invariant integrals exist on $SL(V)$ and $UM(V)$.
Finally, we show that invariant integrals exist on $O(V)$ and on all continuous subgroups of $O(V)$. This result is entirely obvious because both $\nabla L_{\mathbf{X}}$ and $\nabla R_{\mathbf{X}}$ preserve the inner product on $L(V;V)$ for any $\mathbf{X} \in O(V)$. Indeed, if $\mathbf{K}$ and $\mathbf{H}$ are any elements of $L(V;V)$, then

$$\mathbf{X}\mathbf{K} \cdot \mathbf{X}\mathbf{H} = \operatorname{tr}\left(\mathbf{X}\mathbf{K}\mathbf{H}^T\mathbf{X}^T\right) = \operatorname{tr}\left(\mathbf{X}\mathbf{K}\mathbf{H}^T\mathbf{X}^{-1}\right) = \operatorname{tr}\left(\mathbf{K}\mathbf{H}^T\right) = \mathbf{K} \cdot \mathbf{H} \tag{72.17}$$

and similarly

$$\mathbf{K}\mathbf{X} \cdot \mathbf{H}\mathbf{X} = \mathbf{K} \cdot \mathbf{H} \tag{72.18}$$
for any X ∈O (V ). As a result, the Euclidean volume tensor field E is invariant on O (V ) and on
all continuous subgroups of O (V ) .
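The invariance of the integral on the orthogonal group can be illustrated by Monte Carlo averaging over $O(3)$; the sketch below is an assumption-laden numerical experiment (Haar sampling via Gram-Schmidt of Gaussian matrices, and an arbitrary test function $f$) comparing the average of $f(\mathbf{Q})$ with that of $f(\mathbf{Q}_0\mathbf{Q})$, cf. (72.20) below:

```python
import math, random

random.seed(0)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def haar_rotation():
    # Gram-Schmidt (QR with positive diagonal) of a Gaussian 3x3 matrix
    # yields a Haar-distributed element of O(3).
    rows = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(3)]
    q = []
    for v in rows:
        for u in q:
            c = dot(v, u)
            v = [x - c*y for x, y in zip(v, u)]
        n = math.sqrt(dot(v, v))
        q.append([x/n for x in v])
    return q

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

f = lambda Q: Q[0][0]**2 + Q[1][2]   # an arbitrary hypothetical test function

Q0 = haar_rotation()                 # a fixed group element
N = 20000
s_plain = s_left = 0.0
for _ in range(N):
    Q = haar_rotation()
    s_plain += f(Q)
    s_left += f(matmul(Q0, Q))       # left translation by Q0
avg_plain, avg_left = s_plain/N, s_left/N
```

Both averages agree to within Monte Carlo error (and both are near $1/3$, the Haar average of $Q_{11}^2$ over $O(3)$), reflecting the invariance of the volume tensor field $\mathbf{E}$ under left translation.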
Since $O(V)$ is a bounded closed subset of the finite-dimensional space $L(V;V)$, the total volume

$$\int_{O(V)} \mathbf{E} \tag{72.19}$$

is finite. Such is not the case for $GL(V)$ or $SL(V)$, since they are both unbounded in $L(V;V)$. By virtue of (72.19) any continuous function $f$ on $O(V)$ can be integrated with respect to $\mathbf{E}$ over the entire group $O(V)$. From (72.2) and (72.3) the integral possesses the following properties:

$$\int_{O(V)} f\left(\mathbf{Q}\right) \mathbf{E}\left(\mathbf{Q}\right) = \int_{O(V)} f\left(\mathbf{Q}_0\mathbf{Q}\right) \mathbf{E}\left(\mathbf{Q}\right) = \int_{O(V)} f\left(\mathbf{Q}\mathbf{Q}_0\right) \mathbf{E}\left(\mathbf{Q}\right) \tag{72.20}$$

for any $\mathbf{Q}_0 \in O(V)$, and

$$\int_{O(V)} f\left(\mathbf{Q}\right) \mathbf{E}\left(\mathbf{Q}\right) = \int_{O(V)} f\left(\mathbf{Q}^{-1}\right) \mathbf{E}\left(\mathbf{Q}\right) \tag{72.21}$$
in addition to the standard properties of the integral relative to a differential form obtained in the
preceding section. For the unbounded groups GL (V ) , SL (V ) , and UM (V ) , some
continuous functions, such as functions which vanish identically outside some bounded domain,
can be integrated over the whole group. If the integral of f with respect to an invariant volume
tensor field exists, it also possesses properties similar to (72.20) and (72.21).
As an application of the invariant integral, we can derive representations for isotropic functions. Suppose that

$$f : L(V;V) \times\cdots\times L(V;V) \to \mathbb{R} \tag{72.22}$$

is a continuous function which satisfies the condition

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = f\left(\mathbf{Q}\mathbf{K}_1\mathbf{Q}^T,\ldots,\mathbf{Q}\mathbf{K}_P\mathbf{Q}^T\right) \tag{72.23}$$

for all $\mathbf{Q} \in O(V)$, the number of variables $P$ being arbitrary. The representation is

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = \frac{\displaystyle\int_{O(V)} g\left(\mathbf{Q}\mathbf{K}_1\mathbf{Q}^T,\ldots,\mathbf{Q}\mathbf{K}_P\mathbf{Q}^T\right) \mathbf{E}\left(\mathbf{Q}\right)}{\displaystyle\int_{O(V)} \mathbf{E}\left(\mathbf{Q}\right)} \tag{72.24}$$

where $g$ is an arbitrary continuous function

$$g : L(V;V) \times\cdots\times L(V;V) \to \mathbb{R} \tag{72.25}$$

Similarly, if $f$ satisfies the condition

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = f\left(\mathbf{Q}\mathbf{K}_1,\ldots,\mathbf{Q}\mathbf{K}_P\right) \tag{72.26}$$

for all $\mathbf{Q} \in O(V)$, then

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = \frac{\displaystyle\int_{O(V)} g\left(\mathbf{Q}\mathbf{K}_1,\ldots,\mathbf{Q}\mathbf{K}_P\right) \mathbf{E}\left(\mathbf{Q}\right)}{\displaystyle\int_{O(V)} \mathbf{E}\left(\mathbf{Q}\right)} \tag{72.27}$$

and if $f$ satisfies

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = f\left(\mathbf{K}_1\mathbf{Q},\ldots,\mathbf{K}_P\mathbf{Q}\right) \tag{72.28}$$

for all $\mathbf{Q} \in O(V)$, then

$$f\left(\mathbf{K}_1,\ldots,\mathbf{K}_P\right) = \frac{\displaystyle\int_{O(V)} g\left(\mathbf{K}_1\mathbf{Q},\ldots,\mathbf{K}_P\mathbf{Q}\right) \mathbf{E}\left(\mathbf{Q}\right)}{\displaystyle\int_{O(V)} \mathbf{E}\left(\mathbf{Q}\right)} \tag{72.29}$$

We leave the proof of these representations as exercises.

If the condition (72.23), (72.26), or (72.28) is required to hold for all $\mathbf{Q}$ belonging to a continuous subgroup $G$ of $O(V)$, the representation (72.24), (72.27), or (72.29), respectively, remains valid except that the integrals in the representations are taken over the group $G$.
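A representation of the type (72.24) can be illustrated numerically: averaging the non-isotropic seed $g(\mathbf{K}) = K_{11}$ over $O(3)$ by Monte Carlo (Haar sampling via Gram-Schmidt of Gaussian matrices is an assumption of this sketch) yields the isotropic function $\operatorname{tr}(\mathbf{K})/3$:

```python
import math, random

random.seed(1)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def haar_rotation():
    # QR (Gram-Schmidt with positive norms) of a Gaussian 3x3 matrix:
    # Haar measure on O(3)
    rows = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(3)]
    q = []
    for v in rows:
        for u in q:
            c = dot(v, u)
            v = [x - c*y for x, y in zip(v, u)]
        n = math.sqrt(dot(v, v))
        q.append([x/n for x in v])
    return q

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

g = lambda K: K[0][0]                # a non-isotropic "seed" function
K = [[2.0, 1.0, 0.0],                # an arbitrary hypothetical argument
     [0.0, -1.0, 3.0],
     [0.5, 0.0, 4.0]]

N = 20000
s = 0.0
for _ in range(N):
    Q = haar_rotation()
    QKQT = matmul(matmul(Q, K), transpose(Q))
    s += g(QKQT)                     # cf. the numerator of (72.24)
f_K = s/N
trace_third = (K[0][0] + K[1][1] + K[2][2])/3.0
```

The average converges to $\operatorname{tr}(\mathbf{K})/3$, which is invariant under $\mathbf{K} \mapsto \mathbf{Q}\mathbf{K}\mathbf{Q}^T$ as (72.23) requires.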
_______________________________________________________________________________
INDEX
The page numbers in this index refer to the online versions of Volumes 1 and 2. In addition to the information below, the search routine in Adobe Acrobat will be useful to the reader. Pages 1-294 will be found in the online edition of Volume 1, and pages 295-520 in Volume 2.
Algebraic multiplicity, 152, 159, 160, 161, 164, 189, 191, 192
Algebraically closed field, 152, 188
Angle between vectors, 65
Anholonomic basis, 332
Anholonomic components, 332-338
Associated homogeneous system of equations, 147
Basis
  dual, 204, 205, 215
  helical, 337
  natural, 316
  orthogonal, 70
  orthonormal, 70-75
  product, 221, 263
  reciprocal, 76
Lie derivative, 331, 361, 365, 412
Limit point, 332
Line integral, 505
Linear dependence of vectors, 46
Linear equations, 9, 142-143
Linear functions, 203-212
Linear independence of vectors, 46, 47
Linear transformations, 85-123
Logarithm of an endomorphism, 170
Lower triangular matrix, 6
Maps
  continuous, 302
  differentiable, 302
Matrix
  adjoint, 8, 154, 156, 279
  block form, 146
  column rank, 138
  diagonal elements, 4, 158
  identity, 6
  inverse of, 6
  nonsingular, 6
  of a linear transformation, 136
  product, 5
  row rank, 138
  skew-symmetric, 7
  square, 4
  trace of, 4
  transpose of, 7
  triangular, 6
  zero, 4
Maximal Abelian subalgebra, 487
Maximal Abelian subgroup, 486
Maximal linear independent set, 47
Member of a set, 13
Metric space, 65
Metric tensor, 244
Meunier's equation, 438
Minimal generating set, 56
Minimal polynomial, 176-181
Minimal surface, 454-460
Minor of a matrix, 8, 133, 266
Mixed tensor, 218, 276
Module over a ring, 45
Monge's representation for a vector field, 401
Multilinear functions, 218-228
Multiplicity
  algebraic, 152, 159, 160, 161, 164, 189, 191, 192
  geometric, 148
Negative definite, 167
Negative element, 34, 42
Negative semidefinite, 167
Negatively oriented vector space, 275, 291
Neighborhood, 300
Nilcyclic linear transformation, 156, 193
Nilpotent linear transformation, 192
Nonsingular linear transformation, 99, 108
Nonsingular matrix, 6, 9, 29, 82
Norm function, 66
Normal
  of a curve, 390-391
  of a hypersurface, 407
Normal linear transformation, 116
Normalized vector, 70
Normed space, 66
Range of a function, 18
Rank
Scalar addition, 41
Vector spaces
  dual, 203-217
  factor space of, 60-62
  with inner product, 63-84
  isomorphic, 99
  normed, 66
Vectors, 41
  angle between, 66, 74
  component of, 51
  difference, 42
  length of, 64
  normalized, 70
  sum of, 41
  unit, 76
Volume, 329, 497
Zero element, 24
Zero linear transformation, 93
Zero matrix, 4
Zero N-tuple, 42
Zero vector, 42