
Linear functionals and dual spaces (Secs. 31, 32, 14, 19)

(See also Simmonds, A Brief on Tensor Analysis, Chap. 2)

Definitions: A linear functional is a linear operator whose codomain is F (a one-dimensional vector space). The set of such, V∗ ≡ L(V; F), is the dual space of V. The dimension of V∗ is equal to that of V. The elements of V∗ are represented by row matrices, those of V by column matrices.

If dim V = ∞, one usually considers only the linear functionals which are continuous
with respect to some topology. This space is called the topological dual, as opposed
to the algebraic dual. The topological dual spaces of infinite-dimensional vector spaces
are of even greater practical importance than those of finite-dimensional spaces, because
they can contain new objects of a different nature from the vectors of the original spaces.
In particular, the linear functionals on certain function spaces include distributions, or
“generalized functions”, such as the notorious Dirac δ. (Chapter 5 of Milne’s book is one
place to find an introduction to distributions.)

I will use the notation Ṽ , Ũ , . . . for elements of V ∗ , since the textbook’s notation “~v ∗ ”
could be misleading. (There is no particular ~v ∈ V to which a given Ṽ ∈ V ∗ is necessarily
associated.) Thus Ũ (~v) ∈ F . This notation is borrowed from B. Schutz, Geometrical
Methods of Mathematical Physics.

Often one wants to consider Ũ(~v) as a function of Ũ with ~v fixed. Sometimes people write

    ⟨Ũ, ~v⟩ ≡ Ũ(~v).

Thus ⟨· , ·⟩ is a function from V∗ × V to F (sometimes called a pairing). We have

    ⟨αŨ + Ṽ, ~v⟩ = α⟨Ũ, ~v⟩ + ⟨Ṽ, ~v⟩,
    ⟨Ũ, α~u + ~v⟩ = α⟨Ũ, ~u⟩ + ⟨Ũ, ~v⟩.

(Note that there is no conjugation in either formula. The pairing is bilinear, not sesquilinear.)

It will not have escaped your notice that this notation conflicts with one of the standard
notations for an inner product — in fact, the one which I promised to use in this part of
the course. For that reason, I shall not use the bracket notation for the result of applying
a linear functional to a vector; I’ll use the function notation, Ũ (~v).
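In coordinates the pairing is just matrix multiplication: a covector is a row matrix, a vector is a column matrix, and Ũ(~v) is their 1 × 1 product. A minimal numerical sketch (Python with numpy; the entries are arbitrary, chosen only for illustration):

    import numpy as np

    U = np.array([[1.0, -2.0, 3.0]])      # covector: 1 x N row matrix
    v = np.array([[4.0], [5.0], [6.0]])   # vector:   N x 1 column matrix

    print(U @ v)                          # [[12.]] -- the pairing U(v)

    # Bilinearity (no conjugation in either slot):
    W = np.array([[0.0, 1.0, 0.0]])
    a = 2.5
    assert np.allclose((a*U + W) @ v, a*(U @ v) + W @ v)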

Relation to an inner product

Suppose that V is equipped with an inner product. Let's use the notation ⟨~u, ~v⟩, with

    ⟨α~u, ~v⟩ = ᾱ ⟨~u, ~v⟩ ,   ⟨~u, α~v⟩ = α ⟨~u, ~v⟩ .

(Thus ⟨~u, ~v⟩ ≡ ~v · ~u.)

Definition: The norm of a linear functional Ũ is the number

    ‖Ũ‖_{V∗} ≡ sup_{~0 ≠ ~v ∈ V} ‖Ũ(~v)‖_F / ‖~v‖_V .

Riesz Representation Theorem (31.2). Let V be a Hilbert space. (This includes any finite-dimensional space with an inner product.) Then

(1) Every ~u ∈ V determines a Ũ_~u ∈ V∗ by

    Ũ_~u(~v) ≡ ⟨~u, ~v⟩ .

(2) Conversely, every [continuous] Ũ ∈ V∗ arises in this way from some (unique) ~u ∈ V.

(3) The correspondence G: ~u ↦ Ũ_~u ≡ G(~u) is antilinear and preserves the norm:

    Ũ_{α~u+~v} = ᾱ Ũ_~u + Ũ_~v ;   ‖Ũ_~u‖_{V∗} = ‖~u‖_V .

Thus if F = R, then G is an isometric isomorphism of V onto V∗. [Therefore, when there is an inner product, we can think of V and V∗ as essentially the same thing.]
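For V = R^N with the standard (dot) inner product, the theorem is concrete: the vector representing the functional with row matrix U is just the transposed column, and the two norms agree. A sketch under those assumptions (numpy; arbitrary entries):

    import numpy as np

    U = np.array([1.0, -2.0, 2.0])    # row matrix of a functional on R^3
    u = U.copy()                      # its Riesz vector: U(v) = <u, v>

    v = np.array([0.5, 1.0, -1.0])
    assert np.isclose(U @ v, u @ v)   # U(v) = <u, v> for this (and every) v

    # By Cauchy-Schwarz the sup defining ||U|| is attained at v = u,
    # so ||U||_{V*} = ||u||_V:
    print(np.linalg.norm(u))          # 3.0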

Proof: See Bowen & Wang, p. 206. The geometrical idea is that ~u is a normal vector to ker Ũ, and is the gradient of the function Ũ(~v); the level sets Ũ = const. are hyperplanes parallel to ker Ũ = {~v : Ũ(~v) = 0}.

[Figure: parallel level lines Ũ = 0 (through ~0) and Ũ = const., with ~u drawn from ~0 perpendicular to them.]

(Here gradient is meant in the geometrical sense of a vector whose inner product with
a unit vector yields the directional derivative of the function in that direction.)

Note: Closer spacing of the level surfaces is associated with a longer gradient vector.

Coordinate representation

Choose a basis. Recall that in an ON basis {ê_j},

    ⟨~u, ~v⟩ = Σ_{j=1}^{N} u^j v^j .

Thus Ũ_~u is the linear functional with matrix (u^1, …, u^N). (In particular, in an ON basis the gradient of a real-valued function is represented simply by the row vector of partial derivatives.)

If the basis (call it {d~_j}) is not ON, then

    ⟨~u, ~v⟩ = Σ_{j,k=1}^{N} g_{jk} u^j v^k ≡ g_{jk} u^j v^k ,

where g_{jk} ≡ ⟨d~_j, d~_k⟩ (the "metric tensor" of differential geometry and general relativity), and the second expression uses the Einstein summation convention. Note that g_{jk} is symmetric if F = R (a condition henceforth referred to briefly as "the real case"). We see that Ũ_~u now has the matrix {g_{jk} u^j} (where j is summed over, and the free index k varies from 1 to N). Thus (in the real case) {g_{jk}} is the matrix of G.

Conversely, given Ũ ∈ V∗ with matrix {U_j}, so that

    Ũ(~v) = U_j v^j ,

the corresponding ~u ∈ V is given (in the real case) by

    u^k = g^{kj} U_j ,

where {g^{jk}} is the matrix of G^{-1} — i.e., the inverse matrix of {g_{jk}}. The reason for using the same letter for two different matrices — inverse to each other — will become clear later.
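A numerical illustration of these formulas (numpy; the non-ON basis below is invented for the example): build g_{jk} as the Gram matrix of the basis, lower an index to get the matrix of Ũ_~u, and raise it again with the inverse metric.

    import numpy as np

    # Columns of D are the basis vectors d_1, d_2 in some ON frame.
    D = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
    g = D.T @ D                        # g_jk = <d_j, d_k>  (the metric)

    u = np.array([3.0, -1.0])          # contravariant components u^j
    U = g @ u                          # matrix of U~_u: U_k = g_kj u^j

    g_inv = np.linalg.inv(g)           # the matrix {g^jk} of G^{-1}
    assert np.allclose(g_inv @ U, u)   # u^k = g^kj U_j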

The dual basis

Suppose for a moment that we do not have an inner product (or ignore it, if we do). Choose a basis, {d~_j}, for V, so that

    ~v = v^j d~_j , represented by the column matrix (v^1, v^2, …, v^N)^T .

Then (definition) the dual basis, {D̃^j}, is the basis (for V∗!) consisting of those linear functionals having the matrices

    D^j = (0, 0, …, 0, 1, 0, …)   (1 in the jth place).

That is,

    D̃^j(d~_k) ≡ δ^j_k ,   ∀ j, k.

If Ũ = U_j D̃^j, then

    Ũ(~v) = U_j D̃^j(v^k d~_k) = U_j v^j = (U_1, U_2, …) (v^1, …, v^N)^T .
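In coordinates the dual basis is easy to compute: if the columns of a matrix D hold the components of the d~_j in any fixed frame, then the rows of D^{-1} are the matrices of the D̃^j, since D^{-1}D = I says exactly D̃^j(d~_k) = δ^j_k. A sketch (numpy; same invented basis as above):

    import numpy as np

    D = np.array([[1.0, 1.0],
                  [0.0, 2.0]])      # columns = basis vectors d_1, d_2
    Dual = np.linalg.inv(D)         # rows    = dual functionals D~^1, D~^2

    assert np.allclose(Dual @ D, np.eye(2))   # D~^j(d_k) = delta^j_k

    # The dual basis reads off coordinates: v^j = D~^j(v).
    v = D @ np.array([3.0, -1.0])   # the vector with coordinates (3, -1)
    print(Dual @ v)                 # [ 3. -1.]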

The reciprocal basis

If V is a real inner product space, define the reciprocal basis {d̄^j} (in V!) to be the vectors in V corresponding to the dual-basis vectors D̃^j under the Riesz isomorphism:

    d̄^j ≡ G^{-1} D̃^j .

Equivalent definition (Sec. 14): d̄^j is defined by

    ⟨d̄^j, d~_k⟩ = δ^j_k ,   ∀ j, k.

Note that the bar in this case does not indicate complex conjugation. (It covers just the symbol "d", not the superscript.) If {d~_j} is ON, then g_{jk} = δ_{jk} and hence d̄^j = d~_j for all j. We "discovered" the reciprocal basis earlier, while constructing projection operators associated with nonorthogonal bases. The reciprocal basis of the reciprocal basis is the original basis.

Given ~v ∈ V, we may expand it as

    ~v = v^j d~_j = v_j d̄^j .

Note that

    v^j = D̃^j(~v) = ⟨d̄^j, ~v⟩ ,

and similarly

    v_j = ⟨d~_j, ~v⟩ ;

to find the coordinates of a vector with respect to one basis, you take the inner products with the elements of the other basis. (Of course, if {d~_j} is ON, then the two bases are the same and all the formulas we're looking at simplify.) Now we see that

    v_j = v^k ⟨d~_j, d~_k⟩ = g_{jk} v^k ;

the counterpart equation for the other basis is

    v^j = g^{jk} v_k .

There follow

    ⟨~u, ~v⟩ = u^j v_j = u_j v^j = g_{jk} u^j v^k = g^{jk} u_j v_k .

After practice, "raising and lowering indices" with g becomes routine; g (with indices up or down) serves as "glue" connecting adjacent vector indices together to form something scalar.
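A sketch tying these formulas together (numpy; the ambient frame is ON, the basis is the invented one used above): the reciprocal vectors are the rows of D^{-1} viewed as vectors in V, and g_{jk} converts between the two kinds of components.

    import numpy as np

    D = np.array([[1.0, 1.0],
                  [0.0, 2.0]])                 # columns = d_1, d_2
    Rec = np.linalg.inv(D).T                   # columns = d-bar^1, d-bar^2
    g = D.T @ D

    assert np.allclose(Rec.T @ D, np.eye(2))   # <d-bar^j, d_k> = delta^j_k

    v = np.array([0.7, -1.3])                  # a vector, in the ambient frame
    v_up   = Rec.T @ v                         # v^j = <d-bar^j, v>
    v_down = D.T @ v                           # v_j = <d_j, v>

    assert np.allclose(v_down, g @ v_up)       # lowering: v_j = g_jk v^k
    assert np.allclose(D @ v_up, v)            # v = v^j d_j
    assert np.allclose(Rec @ v_down, v)        # v = v_j d-bar^j

    u = np.array([2.0, 1.0])
    assert np.isclose(u @ v, (Rec.T @ u) @ (D.T @ v))   # <u, v> = u^j v_j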

Geometry of the reciprocal basis

In a two-dimensional space, for instance, d̄^1 must be orthogonal to d~_2, have positive inner product with d~_1, and have length inversely proportional to ‖d~_1‖, so that

    ⟨d̄^1, d~_1⟩ = 1 .

Corresponding remarks hold for d̄^2, and we get a picture like this:

[Figure: the basis {d~_1, d~_2} and its reciprocal basis {d̄^1, d̄^2}; each d̄^j is perpendicular to the other basis vector d~_k (k ≠ j).]

Here we see the corresponding contravariant (v^j) and covariant (v_j) components of a vector ~v:

[Figure: the same vector ~v decomposed two ways: as v^1 d~_1 + v^2 d~_2 (parallelogram built on the basis) and as v_1 d̄^1 + v_2 d̄^2 (parallelogram built on the reciprocal basis).]

Application: Curvilinear coordinates. (See M. R. Spiegel, Schaum's Outline of Vector Analysis, Chaps. 7 and 8.)

Let x^j ≡ f^j(ξ^1, …, ξ^N). For example,

    x = r cos θ ,   x^1 = x ,  x^2 = y ,
    y = r sin θ ;   ξ^1 = r ,  ξ^2 = θ .

Let V = R^N be the vector space where the Cartesian coordinate vector ~x lives. It is equipped with the standard inner product which makes the natural basis ON. Associated with a coordinate system there are two sets of basis vectors at each point:

1) the normal vectors to the coordinate surfaces (ξ^j = constant):

    ∇ξ^j = (∂ξ^j/∂x^1, ∂ξ^j/∂x^2, …) ≡ d̄^j .

From a fundamental point of view, these are best thought of as vectors in V∗, or "covectors". In classical vector analysis they are regarded as members of V, however. In effect, the dual-space vectors have been mapped into V by G^{-1}; they are a reciprocal basis.

2) the tangent vectors to the coordinate lines (ξ^k = constant for k ≠ j):

    d~x/dξ^j = (∂x^1/∂ξ^j, ∂x^2/∂ξ^j, …)^T ≡ d~_j .

These are ordinary (nondual) vectors (members of V), sometimes called "contravariant vectors".

[Figure: the coordinate curves ξ^1 = const. and ξ^2 = const. through a point, with the tangent vectors d~_1, d~_2 and the normal (reciprocal) vectors d̄^1, d̄^2 drawn there.]

Note that

    ⟨∇ξ^j, d~x/dξ^k⟩ = ∂ξ^j/∂ξ^k = δ^j_k .

Thus the two sets of vectors form mutually reciprocal bases. (Another way of looking at this equation is that the inner product or pairing of a row of the Jacobian matrix of the coordinate transformation with a column of the inverse of the Jacobian matrix is the corresponding element (0 or 1) of the unit matrix.)

For polar coordinates, define r̂ and θ̂ to be the usual unit vectors. Then you will find that

    d~x/dr = r̂ ,   d~x/dθ = r θ̂ ;
    ∇r = r̂ ,   ∇θ = θ̂/r .

(The geometrical interpretation of the two θ equations is that an increment in θ changes ~x little if r is small, much if r is large; and that θ changes rapidly with ~x if r is small, slowly if r is large.) In this case the two bases are OG but not ON, hence they are distinct. Their orthogonality makes it possible to define uniquely the ON basis {r̂, θ̂} sitting halfway between them.
[Figure: on the circle r = 2, the vectors d~x/dθ (long) and ∇θ (short) are parallel to θ̂; where r = 1, d~x/dθ = θ̂ = ∇θ. In each case r̂ = ∇r points radially outward from ~0.]
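A numerical check of the polar formulas and of the mutual reciprocity of the two bases, via the Jacobian matrix and its inverse (numpy; the point (r, θ) = (2, 0.6) is arbitrary):

    import numpy as np

    r, th = 2.0, 0.6

    # Tangent vectors dx/dxi^j = columns of the Jacobian dx/dxi:
    J = np.array([[np.cos(th), -r*np.sin(th)],
                  [np.sin(th),  r*np.cos(th)]])
    # Normal vectors grad xi^j = rows of the inverse Jacobian dxi/dx:
    Jinv = np.linalg.inv(J)

    assert np.allclose(Jinv @ J, np.eye(2))   # mutually reciprocal bases

    rhat  = np.array([np.cos(th), np.sin(th)])
    thhat = np.array([-np.sin(th), np.cos(th)])
    assert np.allclose(J[:, 0], rhat)         # dx/dr      = r-hat
    assert np.allclose(J[:, 1], r*thhat)      # dx/dtheta  = r theta-hat
    assert np.allclose(Jinv[0, :], rhat)      # grad r     = r-hat
    assert np.allclose(Jinv[1, :], thhat/r)   # grad theta = theta-hat / r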

Change of basis

Again we ignore the inner product for a while and study the dual basis. Recall our earlier notation:

    {~v_j} = "old" basis,   {~w_j} = "new" basis,
    ~x = α^j ~v_j = β^k ~w_k   is an arbitrary element of V.

Suppose the transformation (old basis ↦ new basis) is

    ~w_k = R^j_k ~v_j .   (1)

(In our previous discussion of change of basis, R was called S^{-1}.) Then the transformation (old coordinates ↦ new coordinates) is

    β^k = (R^{-1})^k_j α^j .   (2)

(One matrix is "contragredient" to the other — the inverse of its transpose.)

Now look at Ũ ∈ V∗ and the two dual bases:

    Ũ = γ_j Ṽ^j = δ_k W̃^k .

Then

    Ũ(~x) = γ_j α^j = δ_k β^k = δ_k (R^{-1})^k_j α^j .

Thus

    γ_j = (R^{-1})^k_j δ_k .

This may be denoted the transformation (new* coordinates ↦ old* coordinates), the * standing for the dual space, V∗. This result is more useful to us in the inverse direction:

    δ_k = R^j_k γ_j   (3)

(old* coordinates ↦ new* coordinates). We can rewrite (3) so as to untangle the indices into a normal matrix multiplication:

    δ_k = γ_j R^j_k   or   δ_k = (R^∗)_k^j γ_j .
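A quick numerical check that (2) and (3) together preserve the invariant number Ũ(~x) = γ_j α^j (numpy; R is an arbitrary invertible matrix):

    import numpy as np

    R = np.array([[2.0, 1.0],
                  [1.0, 1.0]])          # columns: new basis in terms of old, (1)
    alpha = np.array([1.0, -3.0])       # old coordinates of x
    gamma = np.array([0.5,  2.0])       # old* coordinates of U~

    beta  = np.linalg.solve(R, alpha)   # (2): beta = R^{-1} alpha
    delta = R.T @ gamma                 # (3): delta_k = R^j_k gamma_j

    assert np.isclose(gamma @ alpha, delta @ beta)   # U~(x) is invariant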

Note that (3) "looks like" (1). Historically, vectors in the dual space V∗ were called covariant vectors, because under a change of coordinate system (basis) their coordinates transform "along with" the basis vectors in V. The vectors in the original space V were called contravariant vectors, because their coordinates transform "in the opposite direction from" the basis vectors, as shown by (2). [Two familiar examples of the latter phenomenon are (a) the result of a change of a unit of measurement, and (b) the relation between the "active" rotation of an observer and the "passive" rotation of his view of the world.] Nowadays in many quarters it is considered in poor taste to talk of vectors as "transforming" at all: vectors are abstract objects which remain the same no matter what coordinate system is used to describe them! Dual vectors are still called covectors, but the "co" just means "dual" to the "ordinary" vectors in V.

Change of basis as Leibnitz would write it

Writing β as x, α as ξ, and R^{-1} as S, we cast the linear variable change (2) into the form of the general nonlinear change of variables considered earlier:

    x^k = S^k_j ξ^j .

Note that

    ∂x^k/∂ξ^j = S^k_j .

By the inverse function theorem,

    ∂ξ^j/∂x^k = (S^{-1})^j_k = R^j_k .

The point of this remark is that a handy way to remember the respective transformation laws of vectors and covectors is through the following prototypes of each:

    contravector:  tangent vector to a curve,  d~x(t)/dt ;
    covector:      gradient of a function,  (∂f/∂x^k) .

The transformation laws then follow from the multivariable chain rule:

    dx^k/dt = (∂x^k/∂ξ^j) (dξ^j/dt)   ⇒   β^k = (∂x^k/∂ξ^j) α^j ,   (2′)

    ∂f/∂x^k = (∂f/∂ξ^j) (∂ξ^j/∂x^k)   ⇒   δ_k = γ_j (∂ξ^j/∂x^k) .   (3′)
These equations remain meaningful for nonlinear coordinate transformations — but that
is material for another course.
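For the polar example, the chain-rule prototypes can be checked symbolically; a sketch with sympy (the function f = x² + y² = r² is chosen only for illustration):

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(th), r*sp.sin(th)

    # Jacobian dx/dxi; by the inverse function theorem dxi/dx is its inverse.
    J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
                   [sp.diff(y, r), sp.diff(y, th)]])
    K = sp.simplify(J.inv())

    print(sp.simplify(K * J))            # identity matrix: delta^j_k

    # (3') for f = r^2: gamma_j = (df/dr, df/dtheta) = (2r, 0), and
    # delta_k = gamma_j dxi^j/dx^k should be the Cartesian gradient (2x, 2y).
    gamma = sp.Matrix([[2*r, 0]])
    print(sp.simplify(gamma * K))        # [2*r*cos(theta)  2*r*sin(theta)]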

The dual operator

Given A: V → U, there is a unique linear A∗: U∗ → V∗ defined by

    [A∗Ũ](~v) = Ũ(A~v) ,   ∀~v ∈ V.   (1)

That is,

    A∗Ũ ≡ Ũ ∘ A .   (2)

(This explicit formula proves the uniqueness and linearity.) In the "pairing" notation which we recently outlawed, this would be written

    ⟨A∗Ũ, ~v⟩ = ⟨Ũ, A~v⟩ .   (3)

Version (3) looks suspiciously like the definition of the adjoint operator in a Hilbert space.
Indeed, if U and V are inner-product spaces, then U ∗ is isomorphic to U and V ∗ to V (up to
conjugation in the complex case), and under these isomorphisms, A∗ : U ∗ → V ∗ coincides
with A∗ : U → V.
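In matrix terms (real case, vectors as columns and covectors as rows), A∗ acts on a row matrix by right-multiplication by the matrix of A; equivalently, its matrix is the transpose of A's. A sketch (numpy; arbitrary entries):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])   # A : V = R^3 -> U = R^2
    U_row = np.array([[4.0, -1.0]])   # a functional U~ on U (row matrix)

    AstarU = U_row @ A                # A* U~ = U~ o A, a functional on V

    v = np.array([[1.0], [0.5], [-2.0]])
    assert np.allclose(AstarU @ v, U_row @ (A @ v))   # [A* U~](v) = U~(A v)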

Practical applications of the dual operator are presented by Milne in Secs. 2.7,
2.9, 3.5(end), and 3.10. Unfortunately, we do not have time to discuss them in the course.

The double dual space

Writing Ũ(~v) as the pairing ⟨Ũ, ~v⟩ emphasizes that each ~v ∈ V defines a linear functional on V∗:

    [J~v](Ũ) ≡ Ũ(~v) .

That is, V is isomorphic to a subspace J[V] ⊂ (V∗)∗.

If dim V < ∞ (and for many infinite-dimensional spaces too), J[V] is equal to V ∗∗ —
there are no other linear functionals on V ∗ . When this is true, V is called reflexive; V and
V ∗∗ are “the same”.

Note that the isomorphism J: V ↔ V∗∗ is fixed — independent of a choice of basis or any other structure. In contrast, the isomorphism G: V ↔ V∗ depends on the inner product. If there is no inner product, V is certainly isomorphic to V∗ because they have the same dimension (speaking now of finite-dimensional spaces), but there is no preferred ("natural" or "canonical") isomorphism. (The apparently obvious mapping ~v_j ↔ Ṽ^j is basis-dependent. It disagrees with ~v_j ↔ G~v_j if the basis is not ON.)
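The natural map J requires no basis and no inner product to write down; in code it is literally "evaluate the given functional at me". A tiny sketch (Python; the sample functional is invented):

    import numpy as np

    def J(v):
        # Natural embedding V -> V**: J(v) eats a functional, evaluates it at v.
        return lambda U_tilde: U_tilde(v)

    v = np.array([1.0, -2.0])
    U_tilde = lambda w: np.array([3.0, 0.5]) @ w   # a functional on R^2

    assert np.isclose(J(v)(U_tilde), U_tilde(v))   # [Jv](U~) = U~(v)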
