Chapter 1

SOME MATHEMATICAL TOOLS

1.1 Some definitions in algebra


Let us recall in a nutshell the definitions of some important algebraic structures, increasingly more refined than that of a group.

Ring A ring $R$ is an Abelian group (for which the additive notation is usually employed: the product is $x + y$, the identity is $0$, etc.). Moreover, on $R$ another product is defined (for which the standard multiplicative notation is used: $x \cdot y$, or simply $xy$ when no confusion is possible), which is distributive with respect to the sum:

$$\forall x, y, z \in R:\quad x(y + z) = xy + xz\,. \qquad (1.1.1)$$

With respect to this second product, $R$ is a semigroup, namely the product is associative:

$$\forall x, y, z \in R:\quad x(yz) = (xy)z\,, \qquad (1.1.2)$$

and there is an identity element $e$:

$$x\, e = e\, x = x\,. \qquad (1.1.3)$$

The existence of an inverse of each element $x$ with respect to this product is instead not required. The typical example of a ring is $\mathbb{Z}$ (apart from $\pm 1$, no other number admits a multiplicative inverse in $\mathbb{Z}$).

Field A field is a ring in which every element admits an inverse with respect to the product, except for the neutral element of the addition, $0$. Examples of fields are $\mathbb{Q}$ or $\mathbb{R}$.

Vector space A vector space $V$ over a field $F$ (which for us will always be $\mathbb{R}$ or $\mathbb{C}$) is first of all an Abelian group, whose elements $\vec v$ can therefore be summed:

$$\forall \vec v, \vec w \in V:\quad \vec v + \vec w \in V\,, \qquad (1.1.4)$$

there is a neutral element $\vec 0$ and each element $\vec v$ has an inverse $-\vec v$. Moreover, a second operation is defined on $V$, namely the multiplication by scalars:

$$\forall \lambda \in F\,,\ \forall \vec v \in V:\quad \lambda \vec v \in V\,, \qquad (1.1.5)$$

with the following properties:

$$\begin{aligned}
&\forall \lambda \in F\,,\ \forall \vec v_1, \vec v_2 \in V: && \lambda(\vec v_1 + \vec v_2) = \lambda \vec v_1 + \lambda \vec v_2\,;\\
&\forall \lambda, \mu \in F\,,\ \forall \vec v \in V: && (\lambda + \mu)\vec v = \lambda \vec v + \mu \vec v\,;\\
&\forall \lambda, \mu \in F\,,\ \forall \vec v \in V: && (\lambda\mu)\vec v = \lambda(\mu \vec v)\,;\\
&\forall \vec v \in V: && 1\,\vec v = \vec v\,.
\end{aligned} \qquad (1.1.6)$$
An example of a vector space is of course the space of ordinary vectors in, say, three dimensions.

Algebra An algebra $A$ is a vector space over a field $F$ on which an additional operation

$$\star:\ A \times A \to A \qquad (1.1.7)$$

is defined. This additional operation must be associative, and distributive and linear with respect to the vector addition:

$$\begin{aligned}
&\forall f, g, h \in A: && f \star (g \star h) = (f \star g) \star h\,;\\
&\forall f, g, h \in A\,,\ \forall \alpha, \beta \in F: && f \star (\alpha g + \beta h) = \alpha\, f \star g + \beta\, f \star h\,.
\end{aligned} \qquad (1.1.8)$$

1.2 Vector spaces and linear operators


1.2.1 Vector spaces
General linear groups and basis changes in vector spaces As already recalled, the non-singular $n \times n$ matrices $A$ with real or complex entries, i.e. the elements of $GL(n,\mathbb{R})$ or $GL(n,\mathbb{C})$, are automorphisms of real or complex vector spaces. Namely, they describe the possible changes of basis in an $n$-dimensional vector space $\mathbb{R}^n$ or $\mathbb{C}^n$: $\vec e_i^{\ \prime} = A_i{}^j\, \vec e_j$ (in matrix notation, $\vec e^{\ \prime} = A\, \vec e$). Two subsequent changes of basis, first with $A$ and then with $B$, result in a change described by the matrix product $BA$. The basis elements $\vec e_i$ are said to transform covariantly. The components $v^i$ of a vector $\vec v = v^i \vec e_i$ transform contravariantly: from $\vec v = v^{\prime i}\, \vec e_i^{\ \prime} = v^i\, \vec e_i$ it follows $v^i = v^{\prime j} A_j{}^i$ (in matrix notation, $v = v' A$, or $v' = A^{-1} v$).
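As a small numerical illustration (not part of the original text; the matrix below is an arbitrary invertible example), one can check with NumPy that when the basis transforms covariantly with $A$, the components transform contravariantly with $A^{-1}$, so that the abstract vector $v^i \vec e_i$ is unchanged:

```python
import numpy as np

# Arbitrary invertible change of basis A (an element of GL(3, R)); purely illustrative.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

e = np.eye(3)                        # rows: old basis vectors e_i
v = np.array([1.0, -2.0, 0.5])       # contravariant components v^i

e_new = A @ e                        # covariant:      e'_i = A_i^j e_j
v_new = v @ np.linalg.inv(A)         # contravariant:  v'^j = v^i (A^{-1})_i^j

# The abstract vector v^i e_i is unchanged by the basis change.
assert np.allclose(v_new @ e_new, v @ e)
```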

Tensor product spaces Starting from two vector spaces, say $V_1$ ($n$-dimensional, with basis $\vec e_i$) and $V_2$ ($m$-dimensional, with basis $\vec f_j$), it is possible to construct bigger vector spaces, for instance by taking the direct sum $V_1 \oplus V_2$ of the two spaces or their direct product $V_1 \otimes V_2$. The direct product has dimension $mn$ and basis $\vec e_i \otimes \vec f_j$. A change of basis $A$ in $V_1$ and $B$ in $V_2$ induces a change of basis in the direct product space described¹ by $(\vec e_i \otimes \vec f_j)' = A_i{}^k B_j{}^l\, \vec e_k \otimes \vec f_l$. An element of a direct product vector space is called a tensor.

¹ The subgroup of all possible changes of basis obtained in this way is the direct product $G_1 \times G_2$ (a concept we will deal with in the next section) of the general linear groups (groups of basis changes) $G_1$ and $G_2$ of $V_1$ and $V_2$.

In particular, one can take the direct product of a vector space $V$ of dimension $n$ with itself (in general, $m$ times): $V \otimes \ldots \otimes V$, which we may indicate for shortness as $V^{\otimes m}$. Its $n^m$ basis vectors are $\vec e_{i_1} \otimes \ldots \otimes \vec e_{i_m}$. An element of $V^{\otimes m}$ is called a tensor of order $m$. An element $A$ of the general linear group $G$ of $V$ (a change of basis in $V$) induces a change of basis on $V^{\otimes m}$:

$$(\vec e_{i_1} \otimes \ldots \otimes \vec e_{i_m})' = A_{i_1}{}^{j_1} \cdots A_{i_m}{}^{j_m}\ \vec e_{j_1} \otimes \ldots \otimes \vec e_{j_m}\,. \qquad (1.2.9)$$

Such changes of basis on $V^{\otimes m}$ form a group which is the direct product of $G$ with itself, $m$ times, $G^{\times m}$. Its elements are $n^m \times n^m$ matrices, and the association of an element $A$ of $G$ with
an element of $G^{\times m}$ given in Eq. (1.2.9) preserves the product; thus $G^{\times m}$ is a representation of $G$, called an $m$-th order tensor product representation. This way of constructing representations by means of tensor products is, as we will see, very important.
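A quick way to see the tensor product representation at work (a sketch with arbitrary matrices, not from the original notes) is to realize the induced change of basis on $V \otimes V$ as the Kronecker product and check that the association $A \mapsto A \otimes A$ preserves the product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # two generic elements of GL(3, R)
B = rng.normal(size=(3, 3))

# Induced action on V x V (order m = 2): the Kronecker product A x A.
rep = lambda M: np.kron(M, M)

# The association M -> M x M preserves the group product,
# so it defines a (second order) tensor product representation.
assert np.allclose(rep(A) @ rep(B), rep(A @ B))
```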

(Anti)-symmetrized product spaces Within the direct product space $V \otimes V$, we can single out two subspaces spanned respectively by symmetric and antisymmetric combinations² of the basis vectors $\vec e_i \otimes \vec e_j$:

$$\begin{aligned}
\vec e_i \vee \vec e_j &\equiv \vec e_i \otimes \vec e_j + \vec e_j \otimes \vec e_i = \big(1 + (12)\big)\,(\vec e_i \otimes \vec e_j)\,,\\
\vec e_i \wedge \vec e_j &\equiv \vec e_i \otimes \vec e_j - \vec e_j \otimes \vec e_i = \big(1 - (12)\big)\,(\vec e_i \otimes \vec e_j)\,,
\end{aligned} \qquad (1.2.10)$$

where in the second equality we denoted by $1$ and $(12)$ the identity and the exchange element of the symmetric group $S_2$ acting on the positions of the two elements in the direct product basis. The symmetric subspace has dimension $n(n+1)/2$, the antisymmetric one $n(n-1)/2$.

Similarly, within $V^{\otimes m}$ the fully symmetric and fully antisymmetric subspaces can easily be defined. Their basis elements can be written as

$$\begin{aligned}
\vec e_{i_1} \vee \ldots \vee \vec e_{i_m} &\equiv \sum_{P \in S_m} P\,(\vec e_{i_1} \otimes \ldots \otimes \vec e_{i_m})\,,\\
\vec e_{i_1} \wedge \ldots \wedge \vec e_{i_m} &\equiv \sum_{P \in S_m} (-)^{\epsilon(P)}\, P\,(\vec e_{i_1} \otimes \ldots \otimes \vec e_{i_m})\,,
\end{aligned} \qquad (1.2.11)$$

where $(-)^{\epsilon(P)}$ is $+1$ or $-1$ if the permutation $P$ is even or odd, respectively. The fully symmetric subspace has dimension $n(n+1)\cdots(n+m-1)/m!$, the fully antisymmetric one has dimension $n(n-1)\cdots(n-m+1)/m! = \binom{n}{m}$.
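The dimension formulas can be checked directly; the following short Python sketch (illustrative values of $n$ and $m$, not from the text) also verifies the $m = 2$ case by computing the ranks of the projectors $\big(1 \pm (12)\big)/2$:

```python
import numpy as np
from math import comb

n, m = 4, 3
dim_sym  = comb(n + m - 1, m)    # fully symmetric:      n(n+1)...(n+m-1)/m!  -> 20
dim_asym = comb(n, m)            # fully antisymmetric:  n(n-1)...(n-m+1)/m!  ->  4

# For m = 2, check the dimensions as ranks of the (anti)symmetrizers (1 +/- (12))/2
# acting on V x V, with the exchange (12) realized as a permutation matrix.
swap = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        swap[i * n + j, j * n + i] = 1.0
assert np.linalg.matrix_rank((np.eye(n * n) + swap) / 2) == n * (n + 1) // 2
assert np.linalg.matrix_rank((np.eye(n * n) - swap) / 2) == n * (n - 1) // 2
```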

1.2.1.1 Nilpotent and semisimple endomorphisms - Jordan decomposition


Definition 1.2.1. << A matrix $A_{ij}$ such that $A_{ij} = 0$ if $i > j$ is named upper triangular. A matrix such that $A_{ij} = 0$ if $i < j$ is named lower triangular. Finally, a matrix that is simultaneously upper and lower triangular is named diagonal. >>

Let us recall the concept of eigenvalue:


Definition 1.2.2. << Let $A \in \mathrm{Hom}(V, V)$. A complex number $\lambda \in \mathbb{C}$ is named an eigenvalue of $A$ if

$$\exists\, \vec v \in V \ \text{such that:}\quad A\,\vec v = \lambda\,\vec v \qquad (1.2.12)$$

>>

Definition 1.2.3. << Let $\lambda$ be an eigenvalue of the endomorphism $A \in \mathrm{Hom}(V, V)$; the set of vectors $\vec v \in V$ such that $A\,\vec v = \lambda\,\vec v$ is named the eigenspace $V_\lambda \subset V$ pertaining to the eigenvalue $\lambda$. It is obvious that it is a vector subspace. >>

As is known from elementary courses in Geometry and Algebra, the possible eigenvalues of $A$ are the roots of the secular equation:

$$\det\left(\lambda\,\mathbf{1} - A\right) = 0 \qquad (1.2.13)$$

where $\mathbf{1}$ is the unit matrix and $A$ is the matrix representing the endomorphism $A$ in an arbitrary basis.

² The notation $\vec e_i \vee \vec e_j$, though logical, is not much used. Often, with a bit of abuse, a symmetric combination is simply indicated as $\vec e_i\, \vec e_j$; we will sometimes do so, when no confusion is possible.
Definition 1.2.4. << An endomorphism $N \in \mathrm{Hom}(V, V)$ is named nilpotent if there exists an integer $k \in \mathbb{N}$ such that:

$$N^k = 0 \qquad (1.2.14)$$

>>

Lemma 1.2.1. << A nilpotent endomorphism always has the unique eigenvalue $0 \in \mathbb{C}$. >>

Proof 1.2.1.1. Let $\lambda$ be an eigenvalue and let $\vec v \in V$ be an eigenvector. We have:

$$N^r\, \vec v = \lambda^r\, \vec v \qquad (1.2.15)$$

Choosing $r = k$ we obtain $\lambda^k = 0$, which necessarily implies $\lambda = 0$.
Lemma 1.2.2. << Let $N \in \mathrm{Hom}(V, V)$ be a nilpotent endomorphism. In this case one can choose a basis $\{\vec e_i\}$ of $V$ such that in this basis the matrix $N_{ij}$ satisfies the condition $N_{ij} = 0$ for $i \geq j$. >>

Proof 1.2.2.1. Let $\vec e_1$ be a null eigenvector of $N$, namely $N \vec e_1 = 0$, and let $E_1$ be the subspace of $V$ generated by $\vec e_1$. From $N$ we induce an endomorphism $N_1$ acting on the space $V/E_1$, namely the vector space of equivalence classes of vectors in $V$ modulo the relation:

$$\vec v \sim \vec w \quad \Longleftrightarrow \quad \vec v - \vec w = m\, \vec e_1\,, \quad m \in \mathbb{C} \qquad (1.2.16)$$

Also the new endomorphism $N_1 : V/E_1 \to V/E_1$ is nilpotent. If $\dim V/E_1 \neq 0$, then we can find another vector $\vec e_2 \in V$ such that $(\vec e_2 + E_1) \in V/E_1$ is an eigenvector of $N_1$. Continuing this process iteratively we obtain a basis $\vec e_1, \ldots, \vec e_n$ of $V$ such that:

$$N \vec e_1 = 0\,; \qquad N \vec e_p = 0 \ \mathrm{mod}\ (\vec e_1, \ldots, \vec e_{p-1})\,; \qquad 2 \leq p \leq n \qquad (1.2.17)$$

where $(\vec e_1, \ldots, \vec e_{p-1})$ denotes the subspace of $V$ generated by the vectors $\vec e_1, \ldots, \vec e_{p-1}$. In this basis the matrix representing $N$ is triangular. Conversely, if $N_{ij}$ is triangular with $N_{ij} = 0$ for $i \geq j$, then the corresponding endomorphism is nilpotent.
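The converse statement of the lemma is easy to verify numerically; a minimal NumPy sketch (with an arbitrary strictly upper triangular matrix) follows:

```python
import numpy as np

# A generic strictly upper triangular 4x4 matrix (N_ij = 0 for i >= j).
N = np.triu(np.arange(1.0, 17.0).reshape(4, 4), k=1)

# Its powers vanish after at most dim V steps, so N is nilpotent,
# and all of its eigenvalues are 0, as stated in Lemma 1.2.1.
assert np.allclose(np.linalg.matrix_power(N, 4), 0)
assert np.allclose(np.linalg.eigvals(N), 0)
```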
Definition 1.2.5. << Let $\mathcal{S} \subset \mathrm{Hom}(V, V)$ be a subset of the ring of endomorphisms and $W \subset V$ a vector subspace. The subspace $W$ is named invariant with respect to $\mathcal{S}$ if $\forall S \in \mathcal{S}$ we have $S\, W \subset W$. The space $V$ is named irreducible if it does not contain invariant subspaces. >>

Definition 1.2.6. << A subset $\mathcal{S} \subset \mathrm{Hom}(V, V)$ is named semisimple if every invariant subspace $W \subset V$ admits an orthogonal complement which is also invariant. In that case we can write:

$$V = \bigoplus_{i=1}^{p} W_i \qquad (1.2.18)$$

where each subspace $W_i$ is invariant. >>


A fundamental and central result in Linear Algebra, essential for the further development of Lie algebra theory, is the Jordan decomposition theorem, which we quote without proof.

Theorem 1.2.1. << Let $L \in \mathrm{Hom}(V, V)$ be an endomorphism of a finite dimensional vector space $V$. Then there exists a unique Jordan decomposition:

$$L = S_L + N_L \qquad (1.2.19)$$

where $S_L$ is semisimple and $N_L$ is nilpotent. Furthermore, both $S_L$ and $N_L$ can be expressed as polynomials in $L$. >>
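For a concrete feel of the theorem, the decomposition can be computed with SymPy; the matrix below is an arbitrary example with a nontrivial Jordan block, and the semisimple part is assembled from the Jordan form (a sketch, not the general construction via polynomials in $L$):

```python
import sympy as sp

# An endomorphism with a nontrivial Jordan block (example matrix, not from the text).
L = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

P, J = L.jordan_form()                                     # L = P J P^{-1}
S = P * sp.diag(*[J[i, i] for i in range(3)]) * P.inv()    # semisimple (diagonalizable) part
N = L - S                                                  # nilpotent part

assert N ** 3 == sp.zeros(3, 3)     # N is nilpotent
assert S * N == N * S               # S and N commute, as for polynomials in L
```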

1.2.2 Some properties of matrices


Let us recall here some definitions and properties of matrices that are important in the following.
It is possible to define a matrix exponential function by means of a formal power series expansion: if $A$ is a square matrix,

$$\exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!}\, A^k\,. \qquad (1.2.20)$$

For instance, consider the following $2 \times 2$ case:

$$m(\theta) = \begin{pmatrix} 0 & \theta \\ -\theta & 0 \end{pmatrix} \quad \Longrightarrow \quad \exp[m(\theta)] = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad (1.2.21)$$

where $\theta$ is a real parameter. The above expression easily follows from Eq. (1.2.20) and from the fact that the Levi-Civita antisymmetric symbol $\epsilon_{ij}$, seen as a matrix, obeys $\epsilon^2 = -\mathbf{1}$.
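Eq. (1.2.21) can be reproduced numerically with SciPy's matrix exponential (a sketch with an arbitrary value of $\theta$):

```python
import numpy as np
from scipy.linalg import expm

theta = 0.7
m = np.array([[0.0, theta],
              [-theta, 0.0]])

R = expm(m)   # the series (1.2.20), summed by scipy
expected = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
assert np.allclose(R, expected)
```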
The matrix exponential enjoys various properties of the usual exponential, but not all of them. For instance, with $T, S$ square matrices, one has

$$\exp(T)\exp(S) = \exp(T + S) \quad \Longleftrightarrow \quad [T, S] = 0\,, \qquad (1.2.22)$$

i.e. only if the two matrices commute. Otherwise, Eq. (1.2.22) generalizes to the so-called Baker-Campbell-Hausdorff formula

$$\exp(T)\exp(S) = \exp\left( T + S + \frac{1}{2}[T, S] + \frac{1}{12}\Big([T, [T, S]] + [[T, S], S]\Big) + \ldots \right). \qquad (1.2.23)$$

Deduce directly the first terms in the expansion above. A property which is immediate to show (check it) is that

$$\exp(U^{-1}\, T\, U) = U^{-1} \exp(T)\, U\,. \qquad (1.2.24)$$
A very useful property of determinants is the following:

$$\det\left(\exp(m)\right) = \exp\left(\mathrm{tr}\, m\right)\,, \qquad (1.2.25)$$

or equivalently, by taking the logarithm (which for matrices is defined via a formal power series),

$$\det M = \exp\left(\mathrm{tr}(\ln M)\right)\,. \qquad (1.2.26)$$

These relations can be proven easily for a diagonalizable matrix; indeed, determinant and trace are invariant under a change of basis, so we can think of having diagonalized $M$. If $\lambda_i$ are the eigenvalues of $M$, we have then

$$\det M = \prod_i \lambda_i = \exp\Big( \sum_i \ln \lambda_i \Big) = \exp\left( \mathrm{tr}(\ln M) \right)\,. \qquad (1.2.27)$$

The result can then be extended to generic matrices, as it can be argued that every matrix can be approximated to any chosen accuracy by diagonalizable matrices.
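A quick numerical check of Eqs. (1.2.25)–(1.2.26) for a generic matrix (random entries, purely illustrative) can be done as follows:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
m = rng.normal(size=(4, 4))

# det(exp(m)) = exp(tr m), Eq. (1.2.25) ...
assert np.isclose(np.linalg.det(expm(m)), np.exp(np.trace(m)))

# ... and the equivalent form det M = exp(tr ln M), Eq. (1.2.26), for M = exp(m).
M = expm(m)
assert np.isclose(np.linalg.det(M), np.exp(np.trace(logm(M))))
```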
Let us note that on the space of $n \times n$ matrices (with, say, complex entries) one can define a distance by

$$d(M, N) = \left( \sum_{i,j=1}^{n} |M_{ij} - N_{ij}|^2 \right)^{1/2}\,, \qquad (1.2.28)$$

where $M, N$ are two matrices. This is nothing else than considering the space of generic $n \times n$ matrices as the $\mathbb{C}^{n^2}$ space parametrized by the $n^2$ complex entries, and endowing it with the usual topology.

Exponential maps for operators Notice that the definition Eq. (1.2.20) of the exponential map can be generalized to linear operators acting on infinite-dimensional spaces of functions, with all the properties described for the matrix case. As a simple example, the exponential of the linear operator $\frac{d}{dx}$ acting on functions of a real variable $x$ was already considered earlier; we found that it generates finite translations.
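The statement about finite translations can be checked on a polynomial, where the exponential series terminates; the following SymPy sketch (the polynomial is an arbitrary example) truncates the series for $\exp(a\, d/dx)$ and compares with $f(x+a)$:

```python
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 - 2*x + 1          # a polynomial, on which the series terminates

# Truncated exponential of the operator a*d/dx acting on f:
translated = sum(a**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(6))

# exp(a d/dx) generates a finite translation: f(x) -> f(x + a).
assert sp.simplify(translated - f.subs(x, x + a)) == 0
```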

1.3 Manifolds
Let us summarize some definitions and properties regarding differential manifolds that are important in the discussion of Lie groups.

1.3.1 Definition of a manifold


Let us briefly recall the definition of a manifold. A manifold $\mathcal{M}$ is a topological space which is locally homeomorphic to $\mathbb{R}^d$ ($d$ is then called the dimension of $\mathcal{M}$); a homeomorphism is a continuous and invertible map with continuous inverse. In Figs. 1.1, 1.2 examples of 1-dimensional spaces that can or cannot be manifolds are depicted.

Being locally homeomorphic to $\mathbb{R}^d$, $\mathcal{M}$ is covered by an atlas, i.e. a collection of charts $(U_i, \varphi_i)$, where $U_i$ is an open neighbourhood of $\mathcal{M}$ and $\varphi_i : U_i \to \mathbb{R}^d$ a homeomorphism. Each chart provides a local system of coordinates. In general, a single chart cannot cover the entire manifold (think of the sphere $S^2$). Different charts will overlap. In the overlap $U_i \cap U_j$ the transition function $\psi_{ij} = \varphi_j \circ (\varphi_i)^{-1}$ allows the comparison of the two coordinate systems. The transition functions satisfy $\psi_{ij} = (\psi_{ji})^{-1}$ and obey the cocycle condition $\psi_{ki} \circ \psi_{jk} \circ \psi_{ij} = \mathrm{id}$, where $\mathrm{id}$ is the identity map.

Figure 1.1. 1-dimensional manifolds; around each point, the space looks like $\mathbb{R}$.
Figure 1.2. This space is not a manifold: around $p$ the space does not look like $\mathbb{R}$.
Figure 1.3. An open chart is a homeomorphism of an open subset $U_i$ of the manifold $\mathcal{M}$ onto an open subset of $\mathbb{R}^m$.
Figure 1.4. A transition function between two open charts is a differentiable map from an open subset of $\mathbb{R}^m$ to another open subset of the same.

The properties of a manifold depend crucially on those of the transition functions. If it is possible to choose all the transition functions to be differentiable, smooth (i.e. infinitely differentiable, or of class $C^\infty$) or analytic (i.e., expandable in power series), then $\mathcal{M}$ is called a differentiable, smooth or analytic manifold.

On a differentiable manifold, thanks to the local homeomorphism with $\mathbb{R}^d$, the whole machinery of differential geometry can be set up. In particular, one can discuss functions, curves, tangent vectors and vector fields, and differential forms. Locally, i.e. in each chart, there is no difference with the differential geometry on an open subset of $\mathbb{R}^d$; however, one must then relate the various charts by means of transition functions appropriate for the various quantities.
Although the constructive definition of a differentiable manifold is always in terms of an atlas, in many occurrences we have other, intrinsic, global definitions of what $\mathcal{M}$ is, and the construction of an atlas of coordinate patches is an a posteriori operation. Typically this happens when the manifold admits a description as an algebraic locus. The prototype example is provided by the $S^N$ sphere, which can be defined as the locus in $\mathbb{R}^{N+1}$ of points $\{x_i\}$ at distance $r$ from the origin, namely the locus such that

$$\sum_{i=1}^{N+1} x_i^2 = r^2 \qquad (1.3.29)$$

In particular for $N = 2$ we have the familiar $S^2$, which is diffeomorphic to the compactified complex plane $\mathbb{C} \cup \{\infty\}$. An atlas of (two) open charts is for instance suggested by the stereographic projection.
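As an illustration of such an atlas (a SymPy sketch assuming the standard north- and south-pole stereographic charts of the unit sphere, which are not spelled out in the text), one can verify that the chart transition on the overlap is the smooth inversion $(u, v) \mapsto (u, v)/(u^2 + v^2)$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Inverse of the north-pole stereographic chart: (u, v) -> point on the unit S^2.
r2 = u**2 + v**2
x, y, z = 2*u/(1 + r2), 2*v/(1 + r2), (r2 - 1)/(1 + r2)
assert sp.simplify(x**2 + y**2 + z**2 - 1) == 0      # the image lies on the sphere

# South-pole chart applied to the same point: the transition function of the atlas.
transition = (sp.simplify(x/(1 + z)), sp.simplify(y/(1 + z)))
print(transition)    # (u/(u**2 + v**2), v/(u**2 + v**2)), smooth wherever r2 != 0
```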

1.3.2 Functions on manifolds


A real scalar function on a differentiable manifold $\mathcal{M}$ is a map:

$$f:\ \mathcal{M} \to \mathbb{R} \qquad (1.3.30)$$

that assigns a real number $f(p)$ to every point $p \in \mathcal{M}$ of the manifold. Similarly one can define complex functions; the following discussion is done for real functions, but adapts readily to the complex case.

The properties of a scalar function are the properties characterizing its local description in the various open charts of an atlas. As shown in Fig. 1.6, the abstract function $f : \mathcal{M} \to \mathbb{R}$ (or $\mathbb{C}$) is described locally in each chart $(U_i, \varphi_i)$ by a function

$$f_{(i)} \equiv f \circ \varphi_i^{-1}:\ U_i' \subset \mathbb{R}^m \to \mathbb{R}\ (\text{or } \mathbb{C})\,, \qquad (1.3.31)$$

which represents a coordinate presentation of $f$. We will say that $f$ is differentiable (smooth, analytic, ...) in a point $p \in U_i$ iff $f_{(i)}$ is differentiable (smooth, analytic, ...) in $P = \varphi_i(p) \in U_i'$. Moreover, $f$ is named differentiable (smooth, analytic, ...) if it is differentiable (smooth, analytic, ...) in every chart.

We need to glue together the various patches to define the function $f$ globally, or to examine its global properties. Gluing the various patches imposes some obvious consistency requirements on the coordinate presentations $f_{(i)}$. In particular, $\forall\, U_i \cap U_j \neq \emptyset$ there is the consistency condition on $f_{(i)}$ and $f_{(j)}$ that

$$f_{(j)} = f_{(i)} \circ \psi_{ji}\,. \qquad (1.3.32)$$


Figure 1.5. Local description of a scalar function on a manifold.
Figure 1.6. A germ of smooth function is the equivalence class of all locally defined functions that coincide in some neighborhood of a point.

This expresses the usual transformation of a scalar under a coordinate transformation. Indeed, let $x_{(i)}^\alpha$ (with $\alpha = 1, \ldots, \dim \mathcal{M}$) be the coordinates on $U_i$, and $x_{(j)}^\alpha$ those on $U_j$. The two sets of coordinates of a point in $U_i \cap U_j$ are related by the transition functions: $x_{(j)} = \psi_{ij}(x_{(i)})$. The relation Eq. (1.3.32) thus reads, in explicit coordinates,

$$f_{(j)}(x_{(j)}) = f_{(j)}\big(\psi_{ij}(x_{(i)})\big) = \big(f_{(j)} \circ \psi_{ij}\big)(x_{(i)}) = f_{(i)}(x_{(i)})\,. \qquad (1.3.33)$$

This is the usual rule for expressing a scalar function $s(x)$ in terms of new coordinates $x'$, namely, the new functional form $s'$ of the scalar is such that $s'(x') = s(x)$.
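A one-line illustration of this rule (an arbitrary example with a Cartesian-to-polar change of coordinates) is the following:

```python
import sympy as sp

x, y, r, t = sp.symbols('x y r t', positive=True)

s = x**2 + y**2              # coordinate presentation of a scalar in the chart (x, y)

# New coordinates (r, t) with x = r cos t, y = r sin t; the new presentation s'
# is obtained by composition with the transition function, so that s'(x') = s(x).
s_new = sp.simplify(s.subs({x: r*sp.cos(t), y: r*sp.sin(t)}))
print(s_new)                 # r**2
```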

The algebra of smooth global functions These consistency conditions are rather restrictive. Thus the space of globally defined functions of a certain type, e.g., $C^\infty$ functions, on $\mathcal{M}$ is typically very small if $\mathcal{M}$ is non-trivial, as the local presentations must be correctly related on the intersection of any two charts. For this very reason, it contains a lot of information about $\mathcal{M}$. It is in some sense possible to replace the geometric study of the manifold $\mathcal{M}$ by the algebraic study of the algebra of smooth functions on $\mathcal{M}$, which is indicated as $C^\infty(\mathcal{M})$.

Indeed, $C^\infty(\mathcal{M})$ is an algebra. It is a vector space over $\mathbb{R}$ (or over $\mathbb{C}$ for complex functions, of course):

$$(\alpha f + \beta g)(p) = \alpha f(p) + \beta g(p)\,, \quad \text{for } \alpha, \beta \in \mathbb{R} \text{ and } f, g \in C^\infty(\mathcal{M})\,, \qquad (1.3.34)$$

for every point $p$ in $\mathcal{M}$. Moreover, there is a multiplication of functions that we can define pointwise:

$$(f\, g)(p) \equiv f(p)\, g(p)\,. \qquad (1.3.35)$$

This multiplication is associative and distributive with respect to the sum:

$$f(g + h) = f g + f h\,. \qquad (1.3.36)$$

Locally defined functions In the following we shall be mostly interested in the local properties of functions. We consider thus the set $C^\infty_p(U)$ of (real or complex-valued) functions defined in a chart $(U, \varphi)$ containing a given point $p$:

$$f_U \in C^\infty_p(U) \quad \Longleftrightarrow \quad f_U:\ U \to \mathbb{R} \text{ or } \mathbb{C}\,, \text{ of class } C^\infty\,. \qquad (1.3.37)$$

Of $f_U$ we have of course a coordinate presentation as $(f_U \circ \varphi^{-1})(x^\alpha)$. Of course, $C^\infty_p(U)$ forms an algebra, exactly as $C^\infty(\mathcal{M})$ (recall that the product of functions was defined pointwise).

We could of course consider another chart $(U', \varphi')$, also containing $p$. Similarly to the principle of analytic continuation, we introduce the following equivalence relation:

$$f_U \sim f_{U'} \quad \Longleftrightarrow \quad f_U(p) = f_{U'}(p)\,, \quad \forall p \in U \cap U'\,, \qquad (1.3.38)$$

namely, $f_U$ and $f_{U'}$ are equivalent if they coincide in $U \cap U'$.

Germs of smooth functions A germ of smooth function at a point $p$ is an equivalence class, with respect to the above defined equivalence relation, of functions locally defined in $p$. Thus the set of germs at $p$, named $C^\infty_p(\mathcal{M})$, is the set of all smooth functions locally defined in $p$, having identified all those that coincide in some open neighbourhood of $p$:

$$C^\infty_p(\mathcal{M}) = \left\{ f_U \in C^\infty_p(U) \ \text{for some } U \text{ such that } p \in U \right\} / \sim\,. \qquad (1.3.39)$$

Also $C^\infty_p(\mathcal{M})$ can be naturally given the structure of an algebra.

The concept of germs of functions is useful for the definition of tangent vectors.

1.3.3 Tangent vectors and vector fields on a manifold


In elementary geometry the notion of a tangent is associated with the notion of a curve. To
introduce tangent vectors we begin therefore with the notion of curves in a manifold.

Curves on a manifold A curve $\mathcal{C}$ in a manifold $\mathcal{M}$ is a continuous and differentiable map of an interval of the real line (say $[0, 1] \subset \mathbb{R}$) into $\mathcal{M}$:

$$\mathcal{C}:\ [0, 1] \to \mathcal{M} \qquad (1.3.40)$$

In other words a curve is a one-dimensional submanifold $\mathcal{C} \subset \mathcal{M}$ (see Fig. 1.7).

Figure 1.7. A curve in a manifold is a continuous map of an interval of the real line into the manifold itself.
Figure 1.8. In a neighborhood $U_p$ of each point $p \in \mathcal{M}$ we consider the curves that go through $p$.

Tangent vectors at a point For each point $p \in \mathcal{M}$ let us fix an open neighborhood $U_p \subset \mathcal{M}$ and let us consider all possible curves $\mathcal{C}_p(t)$ that go through $p$ (see Fig. 1.8). Intuitively, the tangent in $p$ to a curve that starts from $p$ is the vector that specifies the curve's initial direction. The basic idea is that in an $m$-dimensional manifold there are as many directions in which the curve can depart as there are vectors in $\mathbb{R}^m$; furthermore, for sufficiently small neighborhoods of $p$ we cannot tell the difference between the manifold $\mathcal{M}$ and the flat vector space $\mathbb{R}^m$. Hence to each point $p \in \mathcal{M}$ of a manifold we can attach an $m$-dimensional real vector space $T_p\mathcal{M}$, which parametrizes the possible directions in which a curve starting at $p$ can depart. This vector space is named the tangent space to $\mathcal{M}$ at the point $p$ and is, by definition, isomorphic to $\mathbb{R}^m$. Fig. 1.9 depicts the case of the 2-sphere $S^2$.

Figure 1.9. The tangent space at a generic point of an $S^2$ sphere.
Figure 1.10. The composed map $f_p \circ \mathcal{C}_p$, where $f_p$ is a germ of smooth function in $p$ and $\mathcal{C}_p$ is a curve departing from $p \in \mathcal{M}$.

Let us now make this intuitive notion mathematically precise. Consider a point $p \in \mathcal{M}$ and a germ of smooth function $f \in C^\infty_p(\mathcal{M})$. In any open chart $(U_\alpha, \varphi_\alpha)$ that contains the point $p$, the germ $f$ is represented by an infinitely differentiable function $f(x_{(\alpha)})$ of $m$ variables. Let us choose an open curve $\mathcal{C}_p(t)$ that lies in $U_\alpha$ and starts at $p$ (namely, let us for definiteness set $\mathcal{C}_p(0) = p$) and consider the composed map:

$$g_p \equiv f \circ \mathcal{C}_p:\ [0, 1] \subset \mathbb{R} \to \mathbb{R}\,, \qquad (1.3.41)$$

which is a real function of the real variable $t$: $g_p(t) \in \mathbb{R}$, see Fig. 1.10. We can calculate its derivative with respect to $t$ at $t = 0$, which in the open chart $(U_\alpha, \varphi_\alpha)$ reads as follows:

$$\frac{d}{dt}\, g_p(t)\Big|_{t=0} = \frac{\partial f}{\partial x^\mu}\, \frac{dx^\mu}{dt}\Big|_{t=0}\,. \qquad (1.3.42)$$

We see from the above formula that the increment of any germ $f \in C^\infty_p(\mathcal{M})$ along a curve $\mathcal{C}_p(t)$ is defined through the $m$ real coefficients:

$$c^\mu \equiv \frac{dx^\mu}{dt}\Big|_{t=0} \in \mathbb{R}\,, \qquad (1.3.43)$$

which can be calculated whenever the parametric form $x^\mu(t)$ of the curve is given. Explicitly we have:

$$\frac{df}{dt} = c^\mu\, \frac{\partial f}{\partial x^\mu} \equiv \vec t_p\, f\,, \qquad (1.3.44)$$

where we introduced the differential operator on the space of germs of smooth functions

$$\vec t_p \equiv c^\mu\, \frac{\partial}{\partial x^\mu}:\ C^\infty_p(\mathcal{M}) \to C^\infty_p(\mathcal{M})\,, \qquad (1.3.45)$$

called a tangent vector to the manifold at the point $p$. Indeed $\vec t_p\, f$ is again a germ of smooth functions. The tangent vector $\vec t_p$ associates to any locally defined function $f$ its directional derivative along the initial direction of $\mathcal{C}_p$. Thus the tangent space $T_p\mathcal{M}$ to the manifold $\mathcal{M}$ at the point $p$ can be defined as the vector space of first order differential operators on the germs of smooth functions $C^\infty_p(\mathcal{M})$.
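The chain-rule computation in Eqs. (1.3.42)–(1.3.44) can be verified symbolically; in the sketch below the function and the curve are arbitrary examples, with $p = (1, 0)$ and $\mathcal{C}_p(0) = p$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)

# A germ of smooth function and a curve through p = (1, 0) with C_p(0) = p.
f = x**2 * y + sp.sin(y)
curve = {x: sp.cos(t), y: sp.sin(t)}

# Derivative of g_p(t) = (f o C_p)(t) at t = 0 ...
g = f.subs(curve)
lhs = sp.diff(g, t).subs(t, 0)

# ... equals c^mu df/dx^mu with c^mu = dx^mu/dt |_{t=0}, Eq. (1.3.44).
c = [sp.diff(curve[x], t).subs(t, 0), sp.diff(curve[y], t).subs(t, 0)]
rhs = (c[0]*sp.diff(f, x) + c[1]*sp.diff(f, y)).subs({x: 1, y: 0})
assert sp.simplify(lhs - rhs) == 0
```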

A more abstract definition is based on the concept of a derivation of an algebra. A derivation $D$ of an algebra $\mathcal{A}$ is a map $D : \mathcal{A} \to \mathcal{A}$ that obeys

$$\begin{aligned}
&\forall \alpha, \beta \in \mathbb{R}\,,\ \forall f, g \in \mathcal{A}: && D(\alpha f + \beta g) = \alpha\, D f + \beta\, D g && \text{(linearity)};\\
&\forall f, g \in \mathcal{A}: && D(f\, g) = (D f)\, g + f\, (D g) && \text{(Leibnitz rule)}.
\end{aligned} \qquad (1.3.46)$$

The set of derivations $\mathcal{D}[\mathcal{A}]$ of an algebra constitutes a real vector space. Indeed, a linear combination of derivations is still a derivation, having set:

$$\forall \alpha, \beta \in \mathbb{R}\,,\ \forall D_1, D_2 \in \mathcal{D}[\mathcal{A}]\,,\ \forall f \in \mathcal{A}:\quad (\alpha D_1 + \beta D_2)\, f = \alpha\, D_1 f + \beta\, D_2 f\,. \qquad (1.3.47)$$

Furthermore, provided with a Lie product given by the commutator in the linear operator sense, it forms a Lie algebra. Indeed, the commutator $[\Delta_1, \Delta_2]$ of any two derivations of $\mathcal{A}$ is again a derivation, i.e., it satisfies again the two properties in Eq. (1.3.46), linearity and the Leibnitz rule. Linearity is of course no problem; the Leibnitz rule follows because

$$\begin{aligned}
[\Delta_1, \Delta_2]\,(x\, y) &= \Delta_1\big(\Delta_2 x\, y + x\, \Delta_2 y\big) - (1 \leftrightarrow 2)\\
&= \Delta_1 \Delta_2 x\, y + \Delta_2 x\, \Delta_1 y + \Delta_1 x\, \Delta_2 y + x\, \Delta_1 \Delta_2 y - (1 \leftrightarrow 2)\\
&= [\Delta_1, \Delta_2]\, x\, y + x\, [\Delta_1, \Delta_2]\, y\,.
\end{aligned} \qquad (1.3.48)$$

A tangent vector $\vec t_p$ is clearly a derivation of the algebra $C^\infty_p$ of locally defined functions: indeed, $\forall \alpha, \beta \in \mathbb{R}$, $\forall f, g \in C^\infty_p$,

$$\begin{aligned}
\vec t_p\, (\alpha f + \beta g) &= \alpha\, \vec t_p f + \beta\, \vec t_p g\,;\\
\vec t_p\, (f\, g) &= (\vec t_p f)\, g + f\, (\vec t_p g)\,.
\end{aligned} \qquad (1.3.49)$$

The tangent space $T_p(\mathcal{M})$ can therefore be defined as the vector space of derivations of the algebra of germs of smooth functions in $p$:

$$T_p\mathcal{M} \equiv \mathcal{D}\left[ C^\infty_p(\mathcal{M}) \right]\,. \qquad (1.3.50)$$

In each coordinate patch a tangent vector is a first order differential operator which is defined independently of the coordinate choice. However, writing it explicitly as in Eq. (1.3.44), $\vec t_p = c^\mu\, \vec\partial_\mu$, we have that $\vec\partial_\mu$ depends on the coordinate choice, and so therefore does $c^\mu$. In the language of tensor calculus the tangent vector is identified with the $m$-tuple of real numbers $c^\mu$; as we just remarked, the $m$-tuple representing the same tangent vector is different in different coordinate patches. Consider two coordinate patches $(U, \varphi)$ and $(V, \psi)$ with non-vanishing intersection, and name $x^\mu$ and $y^\nu$ the respective coordinates of a point $p$ in the intersection. A tangent vector in $p$ can be expressed in both coordinate systems, and we have:

$$\vec t_p = c^\mu\, \frac{\vec\partial}{\partial x^\mu} = c^\mu\, \frac{\partial y^\nu}{\partial x^\mu}\, \frac{\vec\partial}{\partial y^\nu} = \tilde c^{\,\nu}\, \frac{\vec\partial}{\partial y^\nu}\,, \qquad (1.3.51)$$

so that

$$\tilde c^{\,\nu} = c^\mu\, \frac{\partial y^\nu}{\partial x^\mu}\,. \qquad (1.3.52)$$

Eq. (1.3.52) expresses the transformation rule for the components of a tangent vector from one coordinate patch to another one.
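Eq. (1.3.52) is easily checked with SymPy for an explicit coordinate change (here an arbitrary Cartesian-to-polar example), using the Jacobian matrix $\partial y^\nu / \partial x^\mu$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# New coordinates y^nu = (r, theta) as functions of the old ones x^mu = (x, y).
new = sp.Matrix([sp.sqrt(x**2 + y**2), sp.atan2(y, x)])
jac = new.jacobian([x, y])              # the matrix  d y^nu / d x^mu

c = sp.Matrix([1, 2])                   # components c^mu of a tangent vector in (x, y)
c_new = sp.simplify(jac * c)            # Eq. (1.3.52): c~^nu = c^mu d y^nu / d x^mu
print(c_new)
```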
Figure 1.11. The tangent bundle is obtained by gluing together all the tangent spaces.
Figure 1.12. Two local charts of the base manifold $\mathcal{M}$ yield two local trivializations of the tangent bundle $T\mathcal{M}$.

Vector fields A vector field $X$ is a map

$$X:\ p \in \mathcal{M} \mapsto \vec t_p \in T_p(\mathcal{M}) \qquad (1.3.53)$$

that specifies a tangent vector at every point.

Such a definition poses no problems locally, i.e. in an open neighborhood $U$ of $p$. Indeed, even if by definition we have a different vector space $T_p(\mathcal{M})$ at every point, by looking at the coordinate presentation $\vec t_p = c^\mu\, \vec\partial_\mu$, valid in the neighbourhood $U$, we see that we can identify all neighbouring tangent spaces with a single vector space spanned by $\{\vec\partial_\mu\}$. That is, locally we can work within a direct product space $U \times \mathbb{R}^d$, of elements $(p, \vec t_p)$, and Eq. (1.3.53) is well-defined. It consists, in a given coordinate chart $\{x^\mu\}$, in defining point-dependent tangent vectors

$$X(x) = X^\mu(x)\, \vec\partial_\mu\,. \qquad (1.3.54)$$

The components $X^\mu(x)$ of a vector field transform under coordinate changes exactly as the components of a tangent vector at a specific point, see Eq. (1.3.52), namely, by means of the Jacobian matrix:

$$X'^{\,\mu}(x') = \frac{\partial x'^{\,\mu}}{\partial x^\nu}\, X^\nu(x)\,. \qquad (1.3.55)$$

Considering the transformation $x \to x'$ as a coordinate change is a passive point of view. One could take an active point of view, in which the transformation is considered as a mapping $\phi : \mathcal{M} \to \mathcal{M}$, sending $x \mapsto x'(x)$. Such a mapping induces a mapping on the space of vector fields (which is sometimes indicated as $d\phi$, and called the differential of the map $\phi$). Requiring that $X'(x') = X(x)$, i.e., that the transformed vector field at the transformed point equals the original field at $x$, we find immediately the transformation rule Eq. (1.3.55).
The space $\mathrm{Diff}^0(\mathcal{M})$ carries an interesting algebraic structure: it forms a Lie algebra, the Lie product being defined as the commutator of the composed application of vector fields. This aspect is discussed in the main text, in sec. ??.
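The closure of vector fields under the commutator can be verified explicitly; the sketch below (two arbitrary vector fields on $\mathbb{R}^2$) checks that the second-derivative terms cancel, so that $[X, Y]$ acts again as a first-order operator with components $[X, Y]^\mu = X^\nu \partial_\nu Y^\mu - Y^\nu \partial_\nu X^\mu$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Two vector fields X = X^mu d_mu and Y = Y^mu d_mu, given by their components.
X = [y, -x]
Y = [x**2, sp.sin(y)]

def apply(V, f):
    """Vector field acting on a function: V f = V^mu df/dx^mu."""
    return V[0]*sp.diff(f, x) + V[1]*sp.diff(f, y)

# Components of the commutator [X, Y]^mu = X^nu d_nu Y^mu - Y^nu d_nu X^mu.
bracket = [sp.simplify(apply(X, Y[0]) - apply(Y, X[0])),
           sp.simplify(apply(X, Y[1]) - apply(Y, X[1]))]

# Acting on an arbitrary function, X(Y f) - Y(X f) equals the first-order
# operator with the bracket components: the commutator is again a vector field.
f = sp.Function('f')(x, y)
lhs = sp.simplify(apply(X, apply(Y, f)) - apply(Y, apply(X, f)))
rhs = sp.simplify(apply(bracket, f))
assert sp.simplify(lhs - rhs) == 0
```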
