Symmetry
Robert B. Howlett
Contents
Chapter 1: Symmetry
1a An example of abstract symmetry 2
1b Structure preserving transformations 3
1c The symmetries of some structured sets 7
1d Some groups of transformations of a set with four elements 10
Chapter 2: Introductory abstract group theory
2a Group axioms 15
2b Basic deductions from the axioms 18
2c Subgroups and cosets 21
2d On the number of elements in a set 26
2e Equivalence relations 30
2f Cosets revisited 34
2g Some examples 36
2h The index of a subgroup 39
Chapter 3: Homomorphisms, quotient groups and isomorphisms
3a Homomorphisms 40
3b Quotient groups 45
3c The Homomorphism Theorem 52
Chapter 4: Automorphisms, inner automorphisms and conjugacy
4a Automorphisms 57
4b Inner automorphisms 60
4c Conjugacy 63
4d On the number of elements in a conjugacy class 66
Chapter 5: Reflections
5a Inner product spaces 70
5b Dihedral groups 72
5c Higher dimensions 75
5d Some examples 81
Chapter 6: Root systems and reflection groups
6a Root systems 84
6b Positive, negative and simple roots 88
6c Diagrams 94
6d Existence and inadmissibility proofs 105
Index of notation 110
Index 111
1
Symmetry
Of the following geometrical objects, some are rather symmetrical, others
less so:
Objects exhibiting a high degree of symmetry are more special, and possibly (depending on your taste) more appealing than the less symmetrical ones. They are, therefore, natural objects of mathematical interest.
When studying symmetrical objects one can always exploit the symmetry to
limit the amount of work one has to do. It is only necessary to measure one
side of a square, since symmetry says that all other sides are the same. It is
not necessary to measure any of the angles: it is a consequence of symmetry
that they are all ninety degrees.
Although measuring a square is a trivial application of symmetry, it
gives a tiny glimpse of the importance of symmetry in mathematics. Sym-
metry occurs not only in concrete, geometrical situations, but also in highly
complex abstract situations, where exploitation of the symmetrical aspects of
the situation can provide methods for dealing with problems that otherwise
would be hopelessly intractable.
Group theory is the mathematical theory of symmetry, in which the
basic tools for utilizing symmetry are developed. The purpose of these notes
is to provide an introduction to group theory, concentrating in particular on a very important class of geometrical groups: the finite Euclidean reflection groups.
1a An example of abstract symmetry
Before launching into a theoretical discussion of symmetry, we take a brief
look at an example of the use of symmetry in an abstract situation. After
this section our examples of symmetry will be almost entirely geometrical in
character, but the example we consider here is drawn from number theory.
Consider the equation x^2 + y^2 = 1, and suppose we wish to find solutions of this which have the form x = p/q and y = r/s, where p, q, r and s are integers. It is not hard to see that this problem amounts to finding Pythagorean triples: triples (a, b, c) of integers satisfying a^2 + b^2 = c^2. The most famous of these triples, (3, 4, 5), yields the solution x = 3/5, y = 4/5.
Now, how can we bring symmetry into this?
If an equation possesses symmetry, once one solution has been found, the symmetry may present you with other solutions for free. There is an obvious symmetry of the equation x^2 + y^2 = 1, given by interchanging x and y. So of course we have the solution x = 4/5, y = 3/5 also. This is not very exciting. But there are other symmetries of the equation which you probably haven't noticed yet. For example, the transformation
x → x′ = (3/5)x + (4/5)y
y → y′ = (4/5)x − (3/5)y

can be seen to be a symmetry, since if x^2 + y^2 = 1 then

(x′)^2 + (y′)^2 = ((3/5)x + (4/5)y)^2 + ((4/5)x − (3/5)y)^2
= ((9/25)x^2 + (24/25)xy + (16/25)y^2) + ((16/25)x^2 − (24/25)xy + (9/25)y^2)
= x^2 + y^2 = 1.
So, starting with the solution x = 15/17, y = 8/17, derived from the well known Pythagorean triple (8, 15, 17), we deduce that

x′ = (3/5)(15/17) + (4/5)(8/17) = 77/85
y′ = (4/5)(15/17) − (3/5)(8/17) = 36/85

is another solution, corresponding to the Pythagorean triple (36, 77, 85).
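The symmetry above is easy to check numerically. The sketch below (plain Python with exact rational arithmetic; the function name is my own) verifies that the transformation carries one rational solution to another:

```python
from fractions import Fraction as F

def transform(x, y):
    # the symmetry (x, y) -> ((3/5)x + (4/5)y, (4/5)x - (3/5)y)
    return F(3, 5) * x + F(4, 5) * y, F(4, 5) * x - F(3, 5) * y

x, y = F(15, 17), F(8, 17)     # from the Pythagorean triple (8, 15, 17)
x, y = transform(x, y)
assert x * x + y * y == 1      # the image is again a rational solution
assert (x, y) == (F(77, 85), F(36, 85))
```

Using Fraction rather than floating point keeps the arithmetic exact, so the check x^2 + y^2 = 1 is a genuine equality rather than an approximation.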
If X_i = S for all i then X_1 × X_2 × ⋯ × X_n is called the n-fold Cartesian power of the set S, sometimes written as S^n.
1.3 Definition Let n be a nonnegative integer and S a set. An n-ary relation on S is a function from the n-fold Cartesian power S^n to the two element set {true, false}. The elements x_1, x_2, . . . , x_n of S are said to satisfy the relation ρ if ρ(x_1, x_2, . . . , x_n) = true, and not to satisfy it if ρ(x_1, x_2, . . . , x_n) = false.
Thus, for example, if S is the Euclidean plane then we can define a ternary (or 3-ary) relation Perp on S as follows: for all a, b, c ∈ S,

Perp(a, b, c) = true if ∠abc = 90°, and false otherwise.
Binary (or 2-ary) relations are the most familiar kind; in this case the symbol for the relation is commonly written between the two arguments. For example, consider the "less than" relation on the set of all real numbers: a < b is true if b − a is a positive number, false otherwise. Single argument relations, that is, unary relations, are simply properties which a single object may or may not possess. Thus there is a unary relation Green on the set of all visible objects: Green(jumper) is true if the jumper is green.
If R is an n-ary relation on S, then we will abbreviate the statement "R(x_1, x_2, . . . , x_n) is true" to "R(x_1, x_2, . . . , x_n)". For example, "a < b" has the same meaning as "a < b is true".
Virtually anything that one might want to say about sets and their elements can be reformulated in terms of relations on sets. For example, the concept of an operation on a set can be recast so that it becomes an example of a relation on the set. We could define a ternary relation Plus on the set R by the rule

Plus(a, b, c) if and only if a + b = c.
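This recasting of an operation as a relation translates directly into code. The tiny sketch below (following the text's name Plus) encodes the relation as a boolean-valued function of three arguments:

```python
def Plus(a, b, c):
    # the ternary relation on the reals: Plus(a, b, c) iff a + b = c
    return a + b == c

assert Plus(2, 3, 5)
assert not Plus(2, 3, 6)
```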
A vector space could be defined as a set equipped with a ternary relation Plus and binary relations Mult_λ, one for each scalar λ, where Mult_λ(u, v) if and only if, in the usual notation, v = λu. A transformation T preserves Plus if Plus(Tu, Tv, Tw) whenever Plus(u, v, w); in other words, Tw = Tu + Tv whenever w = u + v. More succinctly, T(u + v) = Tu + Tv, for all u and v. Similarly, T preserves Mult_λ if Mult_λ(Tu, Tv) whenever Mult_λ(u, v); in other words, Tv = λTu whenever v = λu, or, more succinctly, T(λu) = λTu for all u.

Dot_λ(u, v) if u · v = λ. A transformation T: V → V preserves Dot_λ
This is slightly less obvious than you might think. Certainly you can multiply any two elements of R*, the set of all nonzero real numbers, but it also needs to be checked that the product of two elements of R* is again an element of R*. It is true, of course: the product of two nonzero real numbers is always a nonzero real number. It now only remains to check that the axioms (i) and (ii) of 2.1 are satisfied, and this is trivial. The associative law is well known to hold for multiplication of real numbers, the number 1 has the property required of an identity element, and for each x ∈ R* the reciprocal 1/x serves as an inverse. Moreover, R* is an Abelian group, since multiplication of real numbers is commutative.

Note that everything we have just said about the set of all nonzero real numbers applies equally to the set of all nonzero elements of any field: if F is a field then F* is a group under multiplication.
{P_3, P_1} → {P_2, P_4}.

Thus p preserves R_2 as well. Similarly we find that p preserves R_1:

{P_4, P_1} → {P_2, P_3}, {P_3, P_2} → {P_1, P_4}.
x, x′, y, y′ ∈ G are such that x′ ≡ x and y′ ≡ y, then x′y′ ≡ xy.

Proof. Given x′ ≡ x and y′ ≡ y, we have x′ = xh and y′ = yk for some h, k ∈ K, whence

x′y′ = (xh)(yk) = xy(y^{-1}hy)k.

But since K is normal we know that y^{-1}hy ∈ K, and since k ∈ K it follows from closure of K under multiplication that (y^{-1}hy)k ∈ K. Thus the equation x′y′ = xy(y^{-1}hy)k says that x′y′ ≡ xy, as required.
As an almost immediate consequence of 3.8 we see that if C_1, C_2 ⊆ G are equivalence classes for the equivalence relation ≡, then the product C_1C_2 is also an equivalence class. For, let g be an arbitrary element of C_1C_2. Then we have g = xy for some x ∈ C_1 and y ∈ C_2. If g′ is also an arbitrary element of C_1C_2 then, similarly, g′ = x′y′, where x′ ∈ C_1 and y′ ∈ C_2. Since x and x′ are both in C_1 we have x′ ≡ x, and similarly y′ ≡ y; so by 3.8, g′ = x′y′ ≡ xy = g. So every element of C_1C_2 is congruent to g. Conversely, suppose that g′ is an arbitrary element of G that is congruent to g. Then g′ = gk for some k ∈ K, and so g′ = (xy)k = x(yk) ∈ C_1C_2, since x is in C_1, and yk, being congruent to y, is in C_2. So an element of G is congruent to g if and only if it is in the set C_1C_2; thus C_1C_2 is an equivalence class, as claimed.
3.9 Proposition Let K be a normal subgroup of the group G. Then (xK)(yK) = xyK for all x, y ∈ G.

Proof. As above, let ≡ be the relation of congruence modulo K. From our discussion of congruence modulo a subgroup in Chapter 2, we know that xK is the equivalence class containing x, and yK the equivalence class containing y. By the discussion above it follows that (xK)(yK) is an equivalence class; but since x ∈ xK and y ∈ yK it follows that xy ∈ (xK)(yK), whence (xK)(yK) is the equivalence class that contains xy. So (xK)(yK) = xyK, as claimed.
Continuing with the assumption that K is a normal subgroup of G and ≡ the relation of congruence modulo K, let Q be the quotient of G by ≡. That is, by Definition 2.19, Q is the set of all equivalence classes of G under ≡. We have seen that the product of two equivalence classes is an equivalence class; so subset multiplication defines an operation on Q. It turns out that this operation makes Q into a group; however, before proving this in general, let us look at an example.
Let G = Alt{1, 2, 3, 4} and K the subgroup consisting of the identity permutation i and the three permutations a = (1, 2)(3, 4), b = (1, 3)(2, 4) and c = (1, 4)(2, 3). We investigated the cosets of K in G at the end of Chapter 2, and concluded that K is normal in G, and that the cosets of K in G are, in the notation used in the multiplication table in Chapter 1,

iK = {i, a, b, c}
(1, 2, 3)K = {t_1, t_2, t_3, t_4}
(1, 3, 2)K = {s_1, s_2, s_3, s_4}.
Since when writing out the multiplication table we grouped the elements of G according to their cosets modulo K, it is easy to observe from a glance at the table that the assertion of Proposition 3.8 is indeed satisfied. The product of two elements of the coset {t_1, t_2, t_3, t_4} is an element of the coset {s_1, s_2, s_3, s_4}, the product of an element in {s_1, s_2, s_3, s_4} with one in {t_1, t_2, t_3, t_4} lies in {i, a, b, c}, and so on. If we write the cosets as I = {i, a, b, c}, T = {t_1, t_2, t_3, t_4} and S = {s_1, s_2, s_3, s_4} then we see that the multiplication table for the cosets is as follows.
I T S
I I T S
T T S I
S S I T
Since this is exactly the same as the multiplication table for a cyclic group of
order three, we conclude that in this case at least, Q, the set of all equivalence
classes for the relation congruence modulo K, is a group. Now let us prove
it in general.
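Before the general proof, the example above can be verified mechanically. The sketch below (my own code; permutations stored as 0-based tuples) generates Alt{1, 2, 3, 4}, forms the cosets of K, and checks that the subset product of any two cosets is again a coset:

```python
from itertools import permutations

def compose(p, q):
    # (pq)(i) = p(q(i)); permutations stored as tuples on {0, 1, 2, 3}
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
e = (0, 1, 2, 3)
K = [e, (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]   # {i, a, b, c}, 0-based

cosets = {frozenset(compose(g, k) for k in K) for g in A4}
assert len(cosets) == 3                             # the three cosets I, T, S

# the subset product of any two cosets is again a coset
for C1 in cosets:
    for C2 in cosets:
        product = frozenset(compose(x, y) for x in C1 for y in C2)
        assert product in cosets
```

The three cosets multiply exactly as in the table above, which is why the final loop never finds a product falling outside the list of cosets.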
3.10 Proposition Let G be a group and let K be a normal subgroup of G. Let Q = {xK | x ∈ G}, the set of all cosets of K in G, which equals the set of all equivalence classes of G under congruence modulo K. Then Q forms a group, with multiplication in Q satisfying the rule that (xK)(yK) = xyK for all x, y ∈ G. The identity element of this group is the coset K = eK (where e is the identity element of G), and for all x ∈ G the inverse in Q of the coset xK is the coset x^{-1}K.
Proof. We have seen that subset multiplication yields an operation on Q, and in 3.9 above we have seen that it satisfies (xK)(yK) = xyK for all x, y ∈ G. Furthermore, we know from Proposition 3.5 that it is associative. So all that remains is to check the group axioms (ii) (a) and (b) pertaining to the identity element and inverses (see Definition 2.1).
Let C be an arbitrary element of Q, so that C = xK for some x ∈ G. By 3.9 we have that

KC = (eK)(xK) = exK = xK = C,

and similarly

CK = (xK)(eK) = xeK = xK = C.

So the element K ∈ Q has the property that CK = KC = C for all C ∈ Q; that is, K is an identity element for Q.
Again, let C = xK, an arbitrary element of Q. By 3.9

C(x^{-1}K) = (xK)(x^{-1}K) = xx^{-1}K = eK = K,

and similarly

(x^{-1}K)C = (x^{-1}K)(xK) = x^{-1}xK = eK = K,

showing that x^{-1}K has the property required of an inverse of C. So the group axioms are satisfied, and all parts of the proposition are proved.
3.11 Definition The group Q in Proposition 3.10 is called the quotient
group, or factor group, G/K.
Surely, one might think, this same construction will work for any subgroup K of a group G; the restriction to normal subgroups must be unnecessary. For, with Q = {xK | x ∈ G} and with multiplication defined by (xK)(yK) = xyK, all the group axioms are satisfied. The associative law is trivial, since

(xKyK)zK = xyKzK = (xy)zK = x(yz)K = xKyzK = xK(yKzK),

the coset K = eK is an identity element, since by our definition of multiplication

(xK)(eK) = xeK = xK = exK = (eK)(xK),

and x^{-1}K is the inverse of xK since

(x^{-1}K)(xK) = (x^{-1}x)K = eK = (xx^{-1})K = (xK)(x^{-1}K).
The error in this reasoning comes right at the start, where we blithely asserted that multiplication could be defined on Q in such a way that the formula (xK)(yK) = xyK holds. For the subsequent reasoning to work, we need this formula for all x, y ∈ G. However, if K is not normal in G this formula is inconsistent with itself. If x, x′ ∈ G are such that xK = x′K, then xK and x′K are one and the same element of Q, C say, even though we need not have x = x′. If we multiply this element of Q by another, yK say, then on the one hand the product C(yK) must be (xK)(yK) = xyK, on the other hand it must also be (x′K)(yK) = x′yK. So we need x′yK = xyK, for all y ∈ G, whenever x′K = xK; that is, (xy)^{-1}(x′y), which equals y^{-1}(x^{-1}x′)y, must be in K whenever x^{-1}x′ ∈ K. Thus putting x′ = xk for some nonidentity k ∈ K gives xK = x′K without having x = x′.

It is therefore legitimate to define φ̄(xK) as φ(x) and as φ(x′) whenever xK = x′K; this is in fact what we did in the first paragraph of the proof.
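The inconsistency for a non-normal subgroup can be exhibited concretely. In the sketch below (my own code, 0-based tuples), K models the non-normal subgroup {i, (1, 2)} of Sym{1, 2, 3}: two representatives of the same coset give different answers under the "rule" (xK)(yK) = xyK:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))       # Sym{1,2,3}, 0-based tuples
K = [(0, 1, 2), (1, 0, 2)]             # the non-normal subgroup {i, (1,2)}

def coset(x):
    return frozenset(compose(x, k) for k in K)

x = (2, 1, 0)                          # the transposition (1,3)
x2 = compose(x, (1, 0, 2))             # a different representative, x2 K = x K
assert x != x2 and coset(x) == coset(x2)

y = (0, 2, 1)                          # the transposition (2,3)
# the "rule" (xK)(yK) = xyK gives different answers for the two representatives
assert coset(compose(x, y)) != coset(compose(x2, y))
```

So the putative product of the two cosets depends on which representatives are chosen, and the multiplication is simply not well-defined.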
Given that φ̄ is well-defined, the rest of the proof becomes easy. First of all, if C_1, C_2 ∈ G/K are arbitrary, then we have C_1 = xK and C_2 = yK for some x, y ∈ G, and now

φ̄(C_1C_2) = φ̄(xK yK) = φ̄(xyK) = φ(xy) = φ(x)φ(y) = φ̄(xK)φ̄(yK) = φ̄(C_1)φ̄(C_2).

Hence φ̄ is a homomorphism.

Let h ∈ I. Since I = im φ we must have h = φ(x) for some x ∈ G, and hence φ̄(xK) = φ(x) = h. So we have shown that every element of I, the codomain of φ̄, has the form φ̄(C) for some C ∈ G/K. That is, φ̄ is surjective.
Finally, let C_1, C_2 ∈ G/K with φ̄(C_1) = φ̄(C_2). We have C_1 = xK and C_2 = yK for some x, y ∈ G, and now

φ(x) = φ̄(xK) = φ̄(C_1) = φ̄(C_2) = φ̄(yK) = φ(y),

from which it follows that

φ(x^{-1}y) = φ(x)^{-1}φ(y) = e_H.

Thus x^{-1}y ∈ ker φ = K, and, by 2.11, xK = yK. That is, C_1 = C_2. We have thus shown that φ̄(C_1) = φ̄(C_2) implies C_1 = C_2, and therefore that φ̄ is injective.

Since φ̄ is an injective and surjective homomorphism from G/K to I, it follows that G/K ≅ I, as claimed.
Although we did not make use of Proposition 2.20 in our proof of Theorem 3.17, it is nevertheless true that Theorem 3.17 is little more than Proposition 2.20 applied to a function which happens to be a homomorphism of groups. According to Proposition 2.20, a homomorphism φ: G → H can be factorized as a surjection σ: G → Q, followed by a bijection β: Q → I, followed by an injection ι: I → H. Furthermore, examining the proof of 2.20, we find that I is the image of φ and Q the quotient of G by the equivalence relation ∼ defined by

x ∼ y if and only if φ(x) = φ(y).

We showed in the proof of 3.17 that φ(x) = φ(y) if and only if x is congruent to y modulo K = ker φ; so the equivalence relation derived from Proposition 2.20 is exactly the same as the equivalence relation used in the definition of G/K. So the Q from 2.20 is, in this context, exactly the quotient group G/K. Furthermore, looking once more at the proof of 2.20, we find that the mapping β: Q → I satisfies β(C) = φ(x) whenever x ∈ C. The mapping φ̄ in Theorem 3.17 is defined in exactly the same way. It can be checked easily that the proofs of the bijectivity of β given in the proof of 2.20 and in the proof of 3.17 are essentially the same.

It is worth noting also that the mapping σ: G → Q, defined in the proof of 2.20, is precisely the natural homomorphism G → G/K as defined in Proposition 3.12. The mapping ι: I → H is defined by ι(h) = h for all h ∈ I; so clearly ι is a homomorphism also. So the content of Theorem 3.17, given Proposition 2.20, is really that the three factors σ, β and ι are all homomorphisms, if the original function φ: G → H is a homomorphism.
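The Homomorphism Theorem is easy to illustrate computationally. The sketch below (my own code) uses the sign homomorphism from Sym{1, 2, 3} to {1, −1}: its kernel is Alt{1, 2, 3}, the induced map on cosets is well-defined, and the quotient has the same size as the image:

```python
from itertools import permutations

def sign(p):
    # the sign homomorphism Sym{1,...,n} -> {1, -1}
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

G = list(permutations(range(3)))              # a model of Sym{1,2,3}
K = [p for p in G if sign(p) == 1]            # K = ker(sign) = Alt{1,2,3}

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

cosets = {frozenset(compose(x, k) for k in K) for x in G}
assert len(cosets) == 2                       # |G/K| = |image of sign|

# the induced map on cosets is well-defined: sign is constant on each coset
phibar = {C: {sign(p) for p in C} for C in cosets}
assert all(len(v) == 1 for v in phibar.values())
```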
4
Automorphisms, inner automorphisms and conjugacy
According to the general conventions we have adopted, if we regard a group
G as a structured set then a symmetry, or automorphism, of G is a bijective
transformation of G that preserves the group structure. We start this chapter
by looking at a few examples of automorphisms of groups.
4a Automorphisms
In view of the above remarks and the denition of isomorphism given in
Chapter 3, we see that the following denition is appropriate.
4.1 Definition An automorphism of a group G is an isomorphism from
G to itself.
Recalling that an isomorphism is a bijective homomorphism, and that
a bijective function from G to itself can be called a permutation of G, the
denition can be rephrased as follows: an automorphism is a permutation
of G which is also a homomorphism.
The set of all symmetries of anything, no matter what, is always a
group; so it is certainly true that the set of all automorphisms of a group is
a group.
4.2 Definition If G is a group then Aut(G), the automorphism group of G, is the set {φ | φ: G → G is an automorphism}, with multiplication of elements of Aut(G) defined to be composition of functions.
The multiplication operation on Aut(G) is thus inherited from the mul-
tiplication operation on Sym(G), the group of all permutations of the set G.
If a separate proof were required that Aut(G) is indeed a group, the simplest
way to do it would be to use 2.10 to show that Aut(G) is a subgroup of
Sym(G). The main task then is to show that the composite of two homo-
morphisms is a homomorphism, and the inverse of an isomorphism is also
an isomorphism. We leave this as an exercise, and instead look at a few
examples.
Let G = {I, T, S} be a cyclic group of order three. The multiplication table of G is as follows:
I T S
I I T S
T T S I
S S I T
If we interchange S and T in this table we obtain the following:
I S T
I I S T
S S T I
T T I S
However, this is still a multiplication table for G: the product of two elements of G will be the same whichever table you choose. For example, both tables say that T^2 = S, and both tables say that ST = I. The same would not have worked had we chosen to interchange T and I rather than T and S.
The permutation

( I T S )
( I S T )

preserves the multiplicative structure of the group G, whereas the permutation

( I T S )
( T I S )

does not. That is, the former is an automorphism of G and the latter is not. It is easy to show that in fact the only other automorphism of G is the identity transformation.
We proved in Proposition 3.3 that if φ: G → H is any homomorphism then φ(e_G) = e_H. It follows that any automorphism of a group must take the identity element to itself. If G is the Klein 4-group then it turns out that every permutation of G which fixes the identity element is an automorphism of G. This can be seen as follows. Write G = {e, a, b, c}, where e is the identity element, and multiplication is given by the table which we wrote down in Chapter 1. The important thing to note here is that three simple rules
describe the multiplication completely. First, x^2 = e for all x ∈ G; second, xe = ex = x for all x ∈ G; third, if x and y are distinct nonidentity elements of G then xy is the third nonidentity element of G. Now let φ: G → G be any permutation which fixes e, and let x, y ∈ G. We will show, using a case-by-case analysis, that φ(x)φ(y) = φ(xy). If x = y then, using the first of the three rules,

φ(x)φ(y) = φ(x)^2 = e = φ(e) = φ(x^2) = φ(xy).

So suppose that x ≠ y. If y = e then, by the second rule, we have

φ(x)φ(y) = φ(x)φ(e) = φ(x)e = φ(x) = φ(xe) = φ(xy).

A similar argument applies if x = e. Finally, if x and y are both non-identity elements then the third rule tells us that xy = z, where z is the third non-identity element. Furthermore, since φ is bijective and φ(e) = e, the elements φ(x), φ(y) and φ(z) must be distinct nonidentity elements of G; so the third rule also tells us that φ(x)φ(y) = φ(z). So φ(x)φ(y) = φ(xy) in this case too, and so our claim is established. Since φ(x)φ(y) = φ(xy) for all x, y ∈ G, it follows that φ is a homomorphism, and hence an automorphism of G.
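The case analysis above can also be confirmed by brute force. In the sketch below (the helper mul is my own encoding of the three rules), all six permutations of {a, b, c} extended by e ↦ e turn out to be automorphisms:

```python
from itertools import permutations

ELEMS = "eabc"
def mul(x, y):
    # the Klein 4-group: x^2 = e, e is neutral, and the product of two
    # distinct non-identity elements is the third one
    if x == "e": return y
    if y == "e": return x
    if x == y: return "e"
    return next(z for z in "abc" if z not in (x, y))

count = 0
for image in permutations("abc"):
    phi = dict(zip("abc", image), e="e")   # a permutation of G fixing e
    assert all(phi[mul(x, y)] == mul(phi[x], phi[y])
               for x in ELEMS for y in ELEMS)
    count += 1
assert count == 6
```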
From our previous discussion of the cyclic group of order three, it can be seen that the cyclic group of order three also has the property that all permutations of it which fix the identity element are automorphisms, although in this case there are only two such permutations, as opposed to six in the case of the Klein group. The property trivially holds also for the cyclic groups of orders 1 and 2, where the only permutation which fixes the identity element is the identity permutation. However, there are no other groups with this property.
Let G be the cyclic group of finite order n. Expressed in terms of some fixed generating element x, the distinct elements of G are e = x^0 (the identity element), x, x^2, . . . , x^{n-2} and x^{n-1}. Subsequent powers of x give the same elements again: x^n = x^0, x^{n+1} = x, and so on. In fact, if r and s are any integers, then x^r = x^s if and only if r − s is a multiple of n.

For each integer k there is a function φ = φ_k: G → G such that φ(x^r) = x^{kr} for all integers r. To prove this we must show that x^{kr} = x^{ks} whenever x^r = x^s; otherwise φ will not be uniquely defined. But if x^r = x^s then, as we remarked above, r − s must be a multiple of n. But if r − s is a multiple of n then kr − ks, which equals k(r − s), must be a multiple of n also, and so x^{kr} = x^{ks}, as required.
Now if r and s are arbitrary integers we have that

φ(x^r)φ(x^s) = x^{kr} x^{ks} = x^{kr+ks} = x^{k(r+s)} = φ(x^{r+s}) = φ(x^r x^s).
Since all elements of G are powers of x we conclude that φ(gh) = φ(g)φ(h) for all g, h ∈ G. So φ is a homomorphism from G to G. This does not mean that φ is an automorphism: as yet we do not know whether or not φ is bijective. To determine whether φ is injective, we investigate the kernel of φ.
Suppose that x^r ∈ ker φ. Then x^{kr} = φ(x^r) = e, and so kr must be a multiple of n. Now let n = dm and k = dh, where d is the greatest common divisor of n and k. Note that m and h cannot have any common factors greater than 1, since if they did we could find a common factor of n and k greater than d. Now since kr is a multiple of n it follows, after dividing through by d, that hr is a multiple of m. Since m and h have no nontrivial common factors it follows that r is a multiple of m. The converse is also true: if r is a multiple of m then x^r ∈ ker φ. So

ker φ = { x^r | r is a multiple of m = n/gcd(n, k) }.

We see therefore that if the gcd of n and k is greater than 1 then ker φ contains at least one element x^r such that r is not a multiple of n. So ker φ ≠ {e} in this case, and hence (by Proposition 3.16) φ is not injective. On the other hand if the gcd of n and k is 1 then

ker φ = { x^r | r is a multiple of n } = {e},

and φ is injective. It is easily seen that an injective transformation of a finite set is necessarily surjective also, and so we conclude that φ is an automorphism of G if and only if the greatest common divisor of n and k is 1.
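The gcd criterion can be spot-checked computationally. The sketch below models the cyclic group of order n additively as Z_n (a modelling choice of mine, under which φ_k becomes r ↦ kr mod n) and checks bijectivity against gcd(n, k):

```python
from math import gcd

n = 12
def phi(k):
    # phi_k on a cyclic group of order n, modelled additively as Z_n
    return [(k * r) % n for r in range(n)]

for k in range(n):
    bijective = len(set(phi(k))) == n
    assert bijective == (gcd(n, k) == 1)
```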
4b Inner automorphisms
The groups whose automorphisms we discussed in 4a were all Abelian. This
section, however, is primarily concerned with groups which are not Abelian,
because the construction we present yields only the identity automorphism
in the Abelian case.
Homomorphisms from a group to itself are sometimes called endomorphisms.
4.3 Proposition Let G be a group and g ∈ G, and define φ_g: G → G by the rule φ_g(h) = ghg^{-1} for all h ∈ G. Then φ_g is an automorphism of G.
Proof. If h, k are arbitrary elements of G then

φ_g(h)φ_g(k) = (ghg^{-1})(gkg^{-1}) = ghekg^{-1} = g(hk)g^{-1} = φ_g(hk),

and so φ_g is a homomorphism.
Suppose that h, k ∈ G with φ_g(h) = φ_g(k). Then ghg^{-1} = gkg^{-1}, and applying 2.2 (i) and 2.2 (ii) we conclude that h = k. Since φ_g(h) = φ_g(k) implies that h = k, it follows that φ_g is injective.
Let h ∈ G be arbitrary. Since

φ_g(g^{-1}hg) = g(g^{-1}hg)g^{-1} = (gg^{-1})h(gg^{-1}) = ehe = h,

it follows that h ∈ im φ_g. Since h was arbitrary, this shows that φ_g is surjective.
4.4 Definition If G is a group, then an automorphism φ of G is called an inner automorphism if there exists g ∈ G such that φ(h) = ghg^{-1} for all h ∈ G.
The reason for the name inner is clear enough: inner automorphisms
of G are produced, in some sense, by elements from within G itself.
4.5 Proposition Let G be a group, and for each g ∈ G let φ_g ∈ Aut(G) be as defined in 4.3. Then the function Φ: G → Aut(G), defined by Φ(g) = φ_g for all g ∈ G, is a homomorphism.
Proof. We must show that Φ(g)Φ(h) = Φ(gh), for all g, h ∈ G. That is, we must show that φ_g φ_h = φ_{gh}, for all g, h ∈ G. Now since φ_g φ_h and φ_{gh} are functions from G to G, to say that they are equal is to say that they have the same effect on all elements of G.

So, let g, h and x be arbitrary elements of G. Since multiplication in Aut(G) is composition of functions, (φ_g φ_h)(x) is, by definition, φ_g(φ_h(x)). Therefore

(φ_g φ_h)(x) = φ_g(hxh^{-1}) = ghxh^{-1}g^{-1} = (gh)x(gh)^{-1} = φ_{gh}(x),

and since this is valid for all x ∈ G we conclude that the functions φ_g φ_h and φ_{gh} are equal, as required.
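The identity just proved can be verified exhaustively in a small group. The sketch below (my own code, over a model of Sym{1, 2, 3} as 0-based tuples) tabulates each conjugation map and checks that composing two of them gives the conjugation by the product:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = list(permutations(range(3)))   # a model of Sym{1,2,3}

def conj(g):
    # the inner automorphism h -> g h g^{-1}, tabulated as a dict
    return {h: compose(compose(g, h), inverse(g)) for h in G}

for g in G:
    for h in G:
        composite = {x: conj(g)[conj(h)[x]] for x in G}
        assert composite == conj(compose(g, h))
```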
In Chapter 3 we expended some effort proving the Homomorphism Theorem, which is applicable whenever one has a homomorphism from one group to another. In Proposition 4.5 we showed that a certain function is a homomorphism; we would be foolish not to apply the Homomorphism Theorem and see if it tells us anything interesting.

The Propositions 3.14 and 3.15 should really be regarded as part of the Homomorphism Theorem, since they have the same hypotheses as 3.17 and are needed for its statement. To apply these results to the homomorphism Φ defined in 4.5, the first task is to calculate the kernel and the image of Φ.

The kernel of Φ is the set of all elements g ∈ G such that the function φ_g: G → G is the identity function. That is, ker Φ is the set of all g ∈ G such that φ_g(h) = h for all h ∈ G.
4.6 Definition The centre of a group G is the set

Z(G) = { g ∈ G | gh = hg for all h ∈ G }.

That is, Z(G) consists of those elements which commute with all elements of G.
Multiplying the equation gh = hg on the right by g^{-1} converts it into ghg^{-1} = h. Recalling that φ_g(h) was defined as ghg^{-1}, we conclude that the centre of G is the set of all g ∈ G such that φ_g(h) = h for all h ∈ G.
4.7 Proposition The kernel of the homomorphism Φ of Proposition 4.5 is Z(G), the centre of G.
By Proposition 3.14 it follows that the centre is a normal subgroup.
4.8 Proposition The centre of a group G is always a normal subgroup
of G.
Note that Proposition 4.8 is also very easy to prove directly, using 2.10 and the definition of normality.
The image of the homomorphism Φ is Inn(G) = { φ_g | g ∈ G }, the set of all inner automorphisms of G.
4.9 Proposition The set Inn(G) is a subgroup of Aut(G).
This follows directly from 3.15, or, alternatively, can be proved readily
using 2.10. We call Inn(G) the inner automorphism group of G.
4.10 Proposition If G is any group, then the central quotient group,
G/Z(G), is isomorphic to the inner automorphism group of G.
Proof. This is immediate from 3.17.
4c Conjugacy
4.11 Definition Let G be a group and x, y ∈ G. Then x and y are said to be conjugate in G if there exists g ∈ G such that gxg^{-1} = y. In this situation we will sometimes say that g transforms x into y.

An alternative formulation of this concept is as follows: x, y ∈ G are conjugate if there exists φ ∈ Inn(G) such that φ(x) = y.
It is a straightforward exercise to show that conjugacy is an equiva-
lence relation, which therefore partitions G into mutually disjoint equivalence
classes. These are called the conjugacy classes of G.
Observe that if G is an Abelian group then gxg^{-1} = y becomes x = y; so, in an Abelian group, x cannot be conjugate to anything but itself. So the conjugacy classes of an Abelian group are just the single element subsets of the group. Accordingly, the first interesting example to look at is the smallest group which is not Abelian, namely, the symmetric group Sym{1, 2, 3}.
Note that (2, 3)(1, 2)(2, 3)^{-1} = (1, 3), and (1, 2)(1, 3)(1, 2)^{-1} = (2, 3). It follows that the elements (1, 2), (1, 3) and (2, 3) of G = Sym{1, 2, 3} are all conjugate to each other. Let p ∈ G be arbitrary, and let p(1) = a and p(2) = b. Then

(p(1, 2)p^{-1})(a) = (p(1, 2))(p^{-1}(a)) = (p(1, 2))(1) = p(2) = b,

and similarly it can be checked that p(1, 2)p^{-1} takes b to a. Since a and b are interchanged by p(1, 2)p^{-1}, the remaining element of {1, 2, 3} must be fixed.
In fact, in the literature, the conjugacy classes of a group are usually just
called the classes of the group.
Hence p(1, 2)p^{-1} = (a, b). It follows that (1, 2), (1, 3) and (2, 3) are the only elements of this conjugacy class. By similar reasoning it can be verified that (1, 2, 3) and (1, 3, 2) (which equals (1, 2)(1, 2, 3)(1, 2)) are the only elements of G conjugate to (1, 2, 3). The remaining element of G is i, the identity, which must form a class by itself. Indeed, it is obvious that i cannot be conjugate to anything but itself, since pip^{-1} = i for all p. So the partitioning of G into conjugacy classes is as follows:

G = {i} ∪ {(1, 2), (1, 3), (2, 3)} ∪ {(1, 2, 3), (1, 3, 2)}.
Next we investigate conjugacy in Sym{1, 2, . . . , n}, for arbitrary n. The key observation is that if q is an element of Sym{1, 2, . . . , n} that takes a to b, and p ∈ Sym{1, 2, . . . , n} is arbitrary, then pqp^{-1} takes p(a) to p(b). For, given q(a) = b, we find that

(pqp^{-1})(p(a)) = (pq)(p^{-1}(p(a))) = (pq)(a) = p(q(a)) = p(b).
Thus, for example, if q is the cycle (1, 4, 2, 3, 5) then pqp^{-1} is the cycle (p(1), p(4), p(2), p(3), p(5)), for since q takes 1 to 4 it follows that pqp^{-1} takes p(1) to p(4), and since q takes 4 to 2 it follows that pqp^{-1} takes p(4) to p(2), and so on. By choosing p appropriately, we can arrange for (p(1), p(4), p(2), p(3), p(5)) to be any given 5-cycle. For example, if we want pqp^{-1} to be (5, 4, 3, 2, 1), then

p = ( 1 2 3 4 5 )
    ( 5 3 2 4 1 )

will do. Similarly, if q = (2, 6)(3, 4) then pqp^{-1} = (p(2), p(6))(p(3), p(4)), and by suitable choice of p we can make pqp^{-1} equal any product of two disjoint transpositions. In general, two elements of Sym{1, 2, . . . , n} are conjugate if and only if they are of the same cycle type. That is, when written as products of disjoint cycles, they have the same number of cycles of length 1, the same number of length 2, the same number of length 3, and so on. For example, in Sym{1, 2, . . . , 24} a permutation might have three cycles of length 1, one of length two, four of length three and one of length seven. Such a permutation is

q = (3, 7)(8, 21, 17)(11, 12, 13)(16, 6, 22)(20, 24, 23)(4, 9, 19, 10, 5, 18, 15),

and another is

r = (11, 12)(1, 2, 3)(7, 15, 14)(13, 16, 17)(19, 22, 20)(5, 6, 21, 24, 23, 10, 9).
Because q and r have the same cycle type, it follows that they must be conjugate in Sym{1, 2, . . . , 24}. Here is a permutation p such that pqp^{-1} = r:

(  3  7  8 21 17 11 12 13 16  6 22 20 24 23  4  9 19 10  5 18 15  1  2 14 )
( 11 12  1  2  3  7 15 14 13 16 17 19 22 20  5  6 21 24 23 10  9  4  8 18 )

Arranging the numbers in the top row into increasing order (which is more usual) this becomes:

( 1 2  3 4  5  6  7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 )
( 4 8 11 5 23 16 12 1 6 24  7 15 14 18  9 13  3 10 21 19  2 17 20 22 )
In Sym{1, 2, 3, 4, 5}, which has 120 elements, there are exactly seven possible
cycle types, and so seven classes, as follows.
The identity element is a one element conjugacy class.
The elements conjugate to (1, 2) form a ten element class.
The elements conjugate to (1, 2, 3) form a twenty element class.
The elements conjugate to (1, 2, 3, 4) form a thirty element class.
The elements conjugate to (1, 2, 3, 4, 5) form a twenty-four element class.
The elements conjugate to (1, 2)(3, 4) form a fifteen element class.
The elements conjugate to (1, 2)(3, 4, 5) form a twenty element class.
Add these up and check that it comes to 120. Also, verify the numbers
themselves!
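One way to verify the numbers is by brute force. This short Python check (mine, not part of the text) counts the 120 elements of Sym{1, . . . , 5} by cycle type:

```python
from itertools import permutations
from collections import Counter

def cycle_type(perm):
    # perm is a tuple: position i maps to perm[i].  Return sorted cycle lengths.
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

counts = Counter(cycle_type(p) for p in permutations(range(5)))
assert counts[(1, 1, 1, 1, 1)] == 1   # identity
assert counts[(1, 1, 1, 2)] == 10     # class of (1,2)
assert counts[(1, 1, 3)] == 20        # class of (1,2,3)
assert counts[(1, 4)] == 30           # class of (1,2,3,4)
assert counts[(5,)] == 24             # class of (1,2,3,4,5)
assert counts[(1, 2, 2)] == 15        # class of (1,2)(3,4)
assert counts[(2, 3)] == 20           # class of (1,2)(3,4,5)
assert sum(counts.values()) == 120
```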
For groups of symmetries of geometrical objects, elements which are
conjugate invariably turn out to be geometrically similar kinds of
transformations. For example, let G be the group of all symmetries of a
square with vertices a, b, c and d (see Chapter 1). The clockwise rotation
through 90° and the anticlockwise rotation through 90° are geometrically
similar, and they are indeed conjugate in G: conjugating either one by a
reflection in G yields the other.

If 𝒞_1, 𝒞_2, . . . , 𝒞_s are the distinct conjugacy classes of G then, since the
classes partition G,

    |G| = Σ_{i=1}^{s} |𝒞_i|.

For each class 𝒞_i, choose a representative element g_i ∈ 𝒞_i. By
Proposition 4.15, |𝒞_i| = [G : C_G(g_i)], and so the equation above can be
rewritten as

    |G| = Σ_{i=1}^{s} [G : C_G(g_i)].
This is called the class equation of the group G.
It is useful to collect together the terms [G : C_G(g_i)] in the class
equation which are equal to 1. Now [G : C_G(g_i)] = 1 if and only if the
subgroup C_G(g_i) is the whole of G (so that C_G(g_i) itself is the one and only
coset of C_G(g_i) in G), and this means that every t ∈ G commutes with g_i.
But g_i t = t g_i for all t ∈ G if and only if g_i ∈ Z(G). In other words, the
single element conjugacy classes of G correspond to the elements of the centre
of G. The class equation now becomes

    |G| = |Z(G)| + Σ_{i=r}^{s} [G : C_G(g_i)],

where g_r, g_{r+1}, . . . , g_s are representatives of the conjugacy classes of
elements of G that lie outside the centre of G.
4.16 Proposition  Let G be a group such that |G| = pⁿ, where p is a
prime number and n > 0. Then the centre of G contains at least one non-
identity element.

Proof.  Since |G| = pⁿ and p is prime, every divisor of pⁿ is also a power
of p. We know that the index of any subgroup of G is necessarily a divisor
of |G|, and so each of the terms [G : C_G(g_i)] appearing in the class equation
is a power of p. Now we have

    |Z(G)| = |G| − Σ_{i=r}^{s} [G : C_G(g_i)],

and all the terms on the right hand side are powers of p greater than 1. So
all the terms on the right hand side of the equation are divisible by p, and it
follows that |Z(G)| is divisible by p also. In particular, |Z(G)| ≠ 1, whence
Z(G) has more elements than just the identity.
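As a concrete illustration of Proposition 4.16 (my own, not the author's), take the dihedral group of order 8 = 2³, realized as permutations of the corners 0, 1, 2, 3 of a square. Its class equation is 8 = 1 + 1 + 2 + 2 + 2, so its centre has order 2, divisible by p = 2 as the proposition predicts:

```python
# Elements are tuples: position i maps to g[i].
def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)   # rotation through 90 degrees
s = (3, 2, 1, 0)   # a reflection: the transposition pattern (0 3)(1 2)

# Generate the group by closing {r, s} under multiplication.
identity = (0, 1, 2, 3)
G = {identity}
frontier = [r, s]
while frontier:
    g = frontier.pop()
    if g not in G:
        G.add(g)
        frontier.extend(compose(g, h) for h in (r, s))
assert len(G) == 8

inv = {g: next(h for h in G if compose(g, h) == identity) for g in G}
# Conjugacy classes: orbits of G under conjugation.
classes = {frozenset(compose(compose(p, g), inv[p]) for p in G) for g in G}
sizes = sorted(len(cl) for cl in classes)
assert sizes == [1, 1, 2, 2, 2]   # class equation 8 = 1 + 1 + 2 + 2 + 2
```

The two singleton classes are the identity and the rotation through 180°, which together form the centre.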
5
Reflections
At this point we will abruptly abandon abstract group theory, although
we have not even scratched the surface of that enormous subject. Instead,
we turn our attention to n-dimensional Euclidean geometry.
There is an extremely naive viewpoint according to which, since we live
in three-dimensional space, higher dimensional geometry has no real appli-
cation, and is just some totally useless nonsense invented by mathematicians
to amuse themselves as they sit in their ivory tower. The truth is that the
mathematical analysis of very simple, concrete, physical problems immedi-
ately and inevitably involves higher dimensional spaces. To describe the po-
sition of a particle in three dimensional space requires three coordinates, but
if you need to worry about the positions of two particles simultaneously you
need six. Immediately you are working in six dimensional space. True, what
mathematicians call six dimensional geometry, the person in the street would
not recognise as geometry. It becomes hard to draw pictures to illustrate the
arguments that are used, which therefore appear more like algebra than ge-
ometry. Indeed, there is no clear distinction between algebra and geometry.
But, as we shall see, concepts from two and three dimensional geometry do
have natural generalizations in higher dimensions, and it is natural to use
the term geometry to apply to reasoning involving such concepts.
5a Inner product spaces
Let ℝⁿ be the set of all n-component column vectors:

    ℝⁿ = { (x_1, x_2, . . . , x_n)ᵀ | x_i ∈ ℝ for all i }.
If x, y ∈ ℝⁿ we define the dot product of x and y by the formula

    x · y = Σ_{i=1}^{n} x_i y_i,

where x_i, y_i are the ith coordinates of x and y respectively. Note that
x · x ≥ 0 for all x, with equality if and only if x = 0. We define

    ‖x‖ = √(x · x).

When n = 2 or 3 we can use Cartesian coordinates to identify ℝⁿ with the
space of position vectors of points, in two or three dimensional Euclidean
space, relative to a fixed origin O. It turns out that in these cases, provided
we use a rectangular coordinate system, the formula

    distance = √((x − y) · (x − y)) = ‖x − y‖

gives the distance between the points P and Q whose position vectors are
OP = x and OQ = y. Similarly, the angle θ between x and y satisfies

    cos θ = (x · y) / (‖x‖ ‖y‖),

and so we use this same formula to define the angle between x and y in
general.
With the dot product as defined above, ℝⁿ becomes an inner product
space (as defined in Chapter 1). Furthermore, if V is any inner product space
over ℝ, and if the dimension of V is n, then a basis v_1, v_2, . . . , v_n of V can
be found which is orthonormal, in the sense that

    v_i · v_j = { 0 if i ≠ j
                 1 if i = j,

and then there is an inner product preserving vector space isomorphism
between V and ℝⁿ given by

    α_1 v_1 + α_2 v_2 + · · · + α_n v_n ↦ (α_1, α_2, . . . , α_n)ᵀ.

So, effectively, ℝⁿ is the only n-dimensional inner product space over ℝ. The
corollary of this fact which will be important for us later is the following
proposition.
5.1 Proposition  If v_1, v_2, . . . , v_n are elements of a real inner product
space V, then there exist x_1, x_2, . . . , x_n in ℝⁿ with x_i · x_j = v_i · v_j
for all i, j. In particular, if θ_ij is the angle between x_i and x_j, then

    cos θ_ij = (v_i · v_j) / (‖v_i‖ ‖v_j‖).
We will also apply the concepts of distance and angle to arbitrary real
inner product spaces, defining them by the same formulas as for ℝⁿ.

It will sometimes be necessary for us to deal with spaces which are
almost inner product spaces, but not quite.

5.2 Definition  Let V be a vector space over ℝ. A bilinear form on V is
a function f : V × V → ℝ such that
(i) f(λv + μu, w) = λf(v, w) + μf(u, w) for all u, v, w ∈ V and λ, μ ∈ ℝ,
(ii) f(w, λv + μu) = λf(w, v) + μf(w, u) for all u, v, w ∈ V and λ, μ ∈ ℝ.
The bilinear form f is said to be symmetric if in addition
(iii) f(u, v) = f(v, u) for all u, v ∈ V.

Observe that a symmetric bilinear form is an inner product if it satisfies
the extra condition that f(v, v) > 0 for all nonzero v ∈ V.
5.3 Proposition  Let v_1, v_2, . . . , v_n be a basis of a vector space V over
the field ℝ, and let A = (a_ij) be an arbitrary n × n matrix over ℝ. Then
there exists a bilinear form f on V such that f(v_i, v_j) = a_ij for all i, j. The
form f is symmetric if A is a symmetric matrix.

Proof.  Every element of V can be written uniquely as a linear combination
of the basis vectors v_1, v_2, . . . , v_n. If v = Σ_{i=1}^{n} λ_i v_i and
u = Σ_{i=1}^{n} μ_i v_i, define

    f(v, u) = Σ_{i=1}^{n} Σ_{j=1}^{n} λ_i a_ij μ_j.

It is straightforward to check that f has the required properties.
5b Dihedral groups

We look first at two dimensional space, and examine the group of symmetries
of a regular n-sided polygon. Number its vertices 1, 2, . . . , n, where 1 is
adjacent to 2, 2 adjacent to 3, and so on, and n adjacent to 1. Let O be
the centre of the polygon; the perpendicular bisectors of all the sides, and
the bisectors of all the angles, all pass through O. These lines are all axes
of symmetry of the polygon. Note that n bisectors of sides plus n bisectors
of angles only makes n axes of symmetry altogether, since each is counted
twice. There is a difference between the case of even n, when the bisector of
an angle coincides with the bisector of the opposite angle, and similarly for
sides, and odd n, where the bisector of an angle is also the bisector of the
opposite side. The cases n = 5 and n = 6 are illustrated.
An axis of symmetry is a line in which the object can be reflected
without being changed. Given a line ℓ in the plane, the reflection in ℓ is the
transformation of the plane which takes each point P to its mirror image P′,
the point on the other side of ℓ such that ℓ is the perpendicular bisector of
the segment PP′.

[Figure: a diagram illustrating that a reflection σ preserves vector addition,
σ(u + v) = σ(u) + σ(v).]

A similar kind of diagram can be drawn to illustrate preservation of scalar
multiplication.
Since the reflection symmetries of the regular polygon discussed above
all correspond to lines through the centre of the polygon, if we choose the
centre as our origin of coordinates then the reflections in question are all
linear transformations. Since the product of two linear transformations is
again linear, all the elements of the dihedral group correspond to linear
transformations of the plane. Our aim in the remainder of these notes will be
to classify all finite groups of linear transformations of Euclidean space which
are generated by reflections. The dihedral groups are the simplest examples,
and they are, in several ways, of fundamental importance in the study of
the more complicated examples.
5c Higher dimensions

If σ is a reflection of n-dimensional space then, as in the 2-dimensional case
described above, σ(OP) = OP′ for every point P, where P′ is the mirror
image of P in the reflecting hyperplane H. Let a be a nonzero vector normal
to H; expressing this in terms of the dot product, u ∈ H if and only if
u · a = 0. Now if P is the point with position vector OP = a, and if P′ is
its mirror image, then σ(a) = −a, while σ(u) = u whenever u · a = 0. Since
σ is linear, these two facts determine σ(v) for all v ∈ ℝⁿ.

5.4 Proposition  Let a be a nonzero vector in ℝⁿ and let H be the
hyperplane of points perpendicular to a. Then the reflection σ in H satisfies

    σ(v) = v − 2((v · a)/(a · a)) a

for all v ∈ ℝⁿ.
(Strictly speaking, the normal direction is not unique: rather, there is a
unique pair of mutually opposite directions.)
Proof.  Let v ∈ ℝⁿ, and let λ = (v · a)/(a · a). Then

    (v − λa) · a = v · a − λ(a · a) = v · a − v · a = 0,

and therefore v − λa ∈ H. Hence σ(v − λa) = v − λa, and so

    σ(v) = σ((v − λa) + λa)
         = σ(v − λa) + λσ(a)
         = (v − λa) + λ(−a)
         = v − 2λa,

as claimed.
Proposition 5.4 was stated in terms of ℝⁿ, but it is natural to extend the
terminology to all real inner product spaces.

5.5 Definition  If V is an arbitrary real inner product space and if a ∈ V
is nonzero, then the transformation σ_a : V → V defined by

    σ_a(v) = v − 2((v · a)/(a · a)) a

is called the reflection in the hyperplane orthogonal to a.

The proof of the following easy fact is left as an exercise for the reader.

5.6 Proposition  If a, b are nonzero elements of the inner product space V,
then σ_a = σ_b if and only if a is a scalar multiple of b. In particular,
σ_a = σ_{−a}.

Recall that an orthogonal transformation of an inner product space is
a linear transformation which preserves the inner product. Geometrically,
this means that distances and angles are preserved, since distance and angle
are defined in terms of the inner product. It is straightforward to show that
reflections are orthogonal transformations.
5.7 Proposition  Let V be a real inner product space and 0 ≠ a ∈ V.
Then σ_a(u) · σ_a(v) = u · v for all u, v ∈ V.

Proof.  Given u, v ∈ V, put λ = (v · a)/(a · a) and μ = (u · a)/(a · a). Then

    σ_a(u) · σ_a(v) = (u − 2μa) · (v − 2λa)
                    = u · (v − 2λa) − 2μ(a · (v − 2λa))
                    = u · v − 2λ(u · a) − 2μ(a · v) + 4λμ(a · a)
                    = u · v − 2λ(μ(a · a)) − 2μ(λ(a · a)) + 4λμ(a · a)
                    = u · v.
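Definition 5.5 and Propositions 5.6 and 5.7 are easy to test numerically. In this small sketch (my own illustration, with arbitrary vectors), reflect(a, v) implements σ_a(v):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, v):
    # sigma_a(v) = v - 2((v.a)/(a.a)) a
    c = 2 * dot(v, a) / dot(a, a)
    return tuple(vi - c * ai for vi, ai in zip(v, a))

a = (1.0, 2.0, 2.0)
u, v = (3.0, -1.0, 4.0), (0.5, 2.0, -1.0)
ru, rv = reflect(a, u), reflect(a, v)
# sigma_a is an involution ...
assert all(abs(x - y) < 1e-12 for x, y in zip(reflect(a, ru), u))
# ... it preserves the dot product (Proposition 5.7) ...
assert abs(dot(ru, rv) - dot(u, v)) < 1e-12
# ... and it sends a to -a.
assert all(abs(x + y) < 1e-12 for x, y in zip(reflect(a, a), a))
```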
5.8 Proposition  Let V be a real inner product space, let a ∈ V be
nonzero, and let τ be an orthogonal transformation of V. Then
τσ_a τ⁻¹ = σ_{τ(a)}.

Proof.  Let v ∈ V be arbitrary, and let x = τ⁻¹(v). Then we have

    (τσ_a τ⁻¹)(v) = τ(σ_a(τ⁻¹(v)))
                  = τ(σ_a(x))
                  = τ(x − 2((x · a)/(a · a)) a)       (by definition of σ_a)
                  = τ(x) − 2((x · a)/(a · a)) τ(a)    (as τ is linear)
                  = τ(x) − 2((τ(x) · τ(a))/(τ(a) · τ(a))) τ(a)
                                     (as τ preserves the dot product)
                  = v − 2((v · τ(a))/(τ(a) · τ(a))) τ(a)
                                     (as x = τ⁻¹(v) gives v = τ(x))
                  = σ_{τ(a)}(v)      (by definition of σ_{τ(a)}).

Since τσ_a τ⁻¹ and σ_{τ(a)} are transformations of V which have the same
effect on all elements of V, it follows that they are equal.
Our next task is to investigate the product of two reflections. In the
case of the Euclidean plane, geometrical methods can be used to show that
the product of two reflections is a rotation. Alternatively, we can use linear
algebra to derive the same result, as follows. Let

    a = (x, y)ᵀ = (r cos α, r sin α)ᵀ,

the position vector of an arbitrary point in the plane. Assume that a ≠ 0,
and let σ be the reflection in the hyperplane orthogonal to a. (Of course, in
this situation, a hyperplane is simply a line.) Using the formula in 5.5 we
find that

    σ((1, 0)ᵀ) = (1, 0)ᵀ − (2r cos α / r²)(r cos α, r sin α)ᵀ
               = (1 − 2cos²α, −2 cos α sin α)ᵀ
               = (−cos 2α, −sin 2α)ᵀ
               = −cos 2α (1, 0)ᵀ − sin 2α (0, 1)ᵀ,

and similarly

    σ((0, 1)ᵀ) = (0, 1)ᵀ − (2r sin α / r²)(r cos α, r sin α)ᵀ
               = (−2 sin α cos α, 1 − 2sin²α)ᵀ
               = (−sin 2α, cos 2α)ᵀ
               = −sin 2α (1, 0)ᵀ + cos 2α (0, 1)ᵀ.

Thus the matrix of the linear transformation σ relative to the standard basis
(1, 0)ᵀ, (0, 1)ᵀ of ℝ² is

    ( −cos 2α  −sin 2α
      −sin 2α   cos 2α )

(which is the same as

    ( cos φ   sin φ
      sin φ  −cos φ ),

where φ = π + 2α). Now if σ′ is the reflection corresponding to another
vector a′ = (r′ cos β, r′ sin β)ᵀ, then σ′ has matrix

    ( −cos 2β  −sin 2β
      −sin 2β   cos 2β ),

and the product σσ′ has matrix

    ( −cos 2α  −sin 2α ) ( −cos 2β  −sin 2β )
    ( −sin 2α   cos 2α ) ( −sin 2β   cos 2β )

        = ( cos 2α cos 2β + sin 2α sin 2β   cos 2α sin 2β − sin 2α cos 2β
            sin 2α cos 2β − cos 2α sin 2β   sin 2α sin 2β + cos 2α cos 2β )

        = ( cos 2(α − β)  −sin 2(α − β)
            sin 2(α − β)   cos 2(α − β) ),

which is well known to be the matrix of an anticlockwise rotation about the
origin through an angle of 2(α − β), where α − β is the angle between a
and a′.
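The computation above can be confirmed numerically. In this sketch (my own check, with arbitrarily chosen test angles α = 1.1 and β = 0.4), reflect is the plane reflection of Definition 5.5:

```python
import math

def reflect(a, v):
    # sigma_a in the plane: v - 2((v.a)/(a.a)) a
    c = 2 * (v[0] * a[0] + v[1] * a[1]) / (a[0] ** 2 + a[1] ** 2)
    return (v[0] - c * a[0], v[1] - c * a[1])

alpha, beta = 1.1, 0.4
a = (math.cos(alpha), math.sin(alpha))
b = (math.cos(beta), math.sin(beta))
theta = 2 * (alpha - beta)

def rotate(v):
    # anticlockwise rotation through theta
    return (math.cos(theta) * v[0] - math.sin(theta) * v[1],
            math.sin(theta) * v[0] + math.cos(theta) * v[1])

# sigma sigma' (first reflect in the line orthogonal to b, then to a)
err = max(abs(w1 - r1) + abs(w2 - r2)
          for v in [(1.0, 0.0), (0.0, 1.0), (0.7, -2.0)]
          for (w1, w2), (r1, r2) in [(reflect(a, reflect(b, v)), rotate(v))])
assert err < 1e-12
```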
We move now to the general situation, and consider nonzero linearly
independent vectors a and b in an n-dimensional inner product space V.
The vectors perpendicular to both a and b form an (n − 2)-dimensional
subspace K, which is the intersection of H_a and H_b, the reflecting hyperplanes
corresponding to σ_a and σ_b. If v ∈ K then v is fixed by both reflections σ_a
and σ_b, and hence also by the product σ_a σ_b. On the other hand, if u is a
vector which is a linear combination of a and b, then σ_a(u) and σ_b(u) are also
linear combinations of a and b. This says that the 2-dimensional subspace
P spanned by a and b is both σ_a-invariant and σ_b-invariant. In fact, P is a
plane, and σ_a and σ_b act on P as reflections in the lines H_a ∩ P and H_b ∩ P
respectively. Hence σ_a σ_b acts on P as a rotation through twice the angle
between a and b.

Note that the subspaces K and P are complementary to each other,
meaning that every element of V is uniquely expressible in the form x + y
with x ∈ K and y ∈ P, and so it follows that the effect of σ_a σ_b on the whole
of V is determined by what it does on the subspaces K and P. Specifically,

    (σ_a σ_b)(x + y) = (σ_a σ_b)(x) + (σ_a σ_b)(y) = x + ρ(y),

where ρ is a rotation of P. The K-component is fixed, the P-component is
rotated.
5.9 Proposition  Suppose that a and b are nonzero elements of a real inner
product space V, and suppose that there is a finite group G of transformations
of V which contains both the reflection σ_a and the reflection σ_b. Then the
angle between a and b is (p/q)π for some integers p and q.

Proof.  Let P be the subspace spanned by a and b, and θ the angle
between a and b. If P is 1-dimensional then θ is either 0 or π, satisfying the
Proposition. Otherwise P is 2-dimensional, and σ_a σ_b acts on P as a rotation
through 2θ. It follows that (σ_a σ_b)^q acts on P as a rotation through 2qθ.
But since σ_a σ_b ∈ G, which is finite, it follows from 2.28 that (σ_a σ_b)^q = i
(the identity transformation) for some integer q which is a divisor of |G|. So
for this q the angle 2qθ must be an integral multiple of 2π. Thus 2qθ = 2pπ
for some integer p, and the desired conclusion follows.
5.10 Definition  We will say that a set S of vectors in an inner product
space V is π-commensurable if all elements of S are nonzero, and for every
pair a, b of elements of S there exist integers p, q such that the angle between
a and b is (p/q)π.

There are some trivial examples of large π-commensurable sets of
vectors in n-dimensional space. For example, it is possible to have n mutually
perpendicular vectors: the elements of the standard basis have this property.
In 2-dimensional space it is easy to find a set of 2k vectors such that the angle
between any two of them has the form (r/k)π for some r ∈ ℤ: just take the
position vectors of the vertices and the midpoints of the edges of a regular
k-sided polygon with centre at the origin. We will call π-commensurable sets
of this type polygonal. Now suppose that the inner product space U is the
orthogonal direct sum of two subspaces V and V′. Then each element of U is
uniquely expressible as a sum v + v′ with v ∈ V and v′ ∈ V′, and the inner
product of v_1 + v_1′ and v_2 + v_2′ is given by

    (v_1 + v_1′) · (v_2 + v_2′) = v_1 · v_2 + v_1′ · v_2′.

Now if S is a π-commensurable set of vectors in V, and S′ a π-commensurable
set of vectors in V′, then the subset S ∪ S′ of U will be π-commensurable
too, since anything in S will make an angle of π/2 with anything in S′.
This enables us to construct π-commensurable sets consisting of pairwise
orthogonal subsets each of which is polygonal. But it is a nontrivial task to
find examples of π-commensurable sets of vectors which are not of this kind.
5d Some examples

Let G be the group of all symmetries of a regular tetrahedron T. A
tetrahedron can be described as a pyramid on a triangular base; it has four
vertices and four triangular faces. Regularity means that the triangular faces
are equilateral, and all congruent to each other. Every permutation of the
vertices in fact corresponds to a symmetry of T, and so it follows that
|G| = 24.

For every pair of vertices of T there is an edge joining the two; thus
there are six edges altogether. For each edge ℓ there is a unique opposite
edge ℓ′, joining the two vertices which are not endpoints of ℓ. The directions
of ℓ and ℓ′ are perpendicular, and the plane containing ℓ and the midpoint
of ℓ′ is a plane of symmetry: the reflection in it interchanges the two vertices
which are the endpoints of ℓ′, and fixes the two which are the endpoints of ℓ.
Thus we see that there are exactly six reflections in the group G; each edge
of T has exactly one plane of symmetry through it.

To visualize the situation adequately, it helps to embed T in a cube. The
vertices of the cube can be coloured red and blue, so that each edge has one
red endpoint and one blue endpoint. The four red vertices of the cube are the
vertices of T, the four blue ones the vertices of a second regular tetrahedron T′,
which shares the same symmetry group G as T. The diagonals of the faces of
the cube are the edges of the tetrahedra. The cube has twelve edges, which
split into six pairs of opposite edges, and for each pair of opposite edges
there is a plane that includes both the edges, and which passes through O,
the centre of the cube, as well as including one edge of T and one edge of T′.
These are the six planes corresponding to the reflections in G. If P is one of
these planes, then the line through O perpendicular to P bisects another two
edges of the cube. So we see that if a is the position vector of the midpoint
of an edge of the cube, then the plane orthogonal to a is one of the six planes
of symmetry we have described, and so σ_a is an element of G. There are
twelve possible choices for a, corresponding to the twelve edges of the cube
(but only six reflections since σ_a = σ_{−a}). Proposition 5.9 guarantees that
this set of vectors is π-commensurable. In fact, if X and Y are midpoints of
edges of the cube then the angle between OX and OY is 0, π or π/2 if the
two edges are parallel, and π/3 or 2π/3 otherwise.
For our next example, consider the full group of symmetries of a cube.
As well as the six reflections we described above, the cube has another three
reflections. There are three pairs of opposite faces, and the plane which is
parallel to a pair of opposite faces and midway between the two is a plane of
symmetry. If a = OX, where X is the centre of one of the faces, then σ_a is
one of these three reflections.

The group of all symmetries of a regular dodecahedron provides a richer
example. It turns out that points A, B and C can be chosen, with position
vectors a = OA, b = OB and c = OC, such that

    angle(a, b) = (4/5)π, and σ_a σ_b has order 5,
    angle(b, c) = (2/3)π, and σ_b σ_c has order 3,
    angle(a, c) = (1/2)π, and σ_a σ_c has order 2.
Each of the twelve pentagonal faces has five lines of symmetry bisecting it,
and each such line determines a plane through O, the reflection in which is a
symmetry of the dodecahedron. Each of these planes bisects four faces, and so
we obtain a set H of 5 × 12/4 = 15 planes corresponding to fifteen reflection
symmetries. Each plane in H passes through a uniquely determined pair of
opposite edges (note that there are thirty edges altogether) and perpendic-
ularly bisects another pair of opposite edges. Furthermore, the line through
O normal to the plane bisects a third pair of opposite edges. In particular
the reflection in the plane orthogonal to the position vector of the midpoint
of an edge is indeed a symmetry of the dodecahedron. The thirty position
vectors of midpoints of edges of the dodecahedron form a π-commensurable
set.
To find suitable points A, B and C, proceed as follows. Let P be the
centre of one of the faces, let Q be the midpoint of one of the edges of this
face and let R be one of the vertices on this edge. The plane containing the
triangle QRO is in the set H, and so its normal through O passes through
points A and A′ of the kind described above.

6
Root systems and reflection groups

Given a set S of nonzero vectors in an inner product space V, the group G
generated by the reflections σ_a for a ∈ S consists of all products of such
reflections:

    G = { σ_{a_1} σ_{a_2} · · · σ_{a_k} | k ∈ ℤ is nonnegative, and a_i ∈ S for each i }.

Note that we do not need to mention inverses when describing G, since each
σ_a is its own inverse. Note also that the group G is necessarily a subgroup
of O(V), the group of all orthogonal transformations of V, since 5.7 shows
that σ_a ∈ O(V) for all a.
It is very likely that G will be an infinite group. For example, by
Proposition 5.9, if the set S is not π-commensurable, then the group will
certainly be infinite. (If the angle between a and b is not a rational multiple
of π then σ_a and σ_b by themselves generate an infinite group, since σ_a σ_b
has infinite order.) However, the next proposition shows that if S is a root
system then the group is finite.
6.2 Proposition  Let W be a set of bijective linear transformations of a
vector space V, and suppose that S is a finite set of vectors which spans V.
Suppose that S is preserved by all elements of W, in the sense that g(v) ∈ S
whenever g ∈ W and v ∈ S. Then W is a finite set.

Proof.  For each g ∈ W define θ_g : S → S by

    θ_g(v) = g(v)

for all v ∈ S. In other words, θ_g is simply the restriction to S of the
transformation g of V. Note that it is because of our hypothesis that
g(v) ∈ S for all v ∈ S that this definition yields a function from S to S; it is
at this point of the proof that the hypothesis is used.

Let g ∈ W. Since g is injective it follows that θ_g is injective. Since an
injective transformation of a finite set is necessarily surjective, we conclude
that θ_g ∈ Sym(S), the group of all permutations of S. So we can define a
function φ : W → Sym(S) by φ(g) = θ_g for all g ∈ W. We will show that
φ is injective.
Suppose that g, h ∈ W with φ(g) = φ(h). Since one of our hypotheses
is that S is finite, we may write S = {v_1, v_2, . . . , v_n}. Since θ_g = θ_h we
have

    g(v_i) = θ_g(v_i) = θ_h(v_i) = h(v_i)

for all i from 1 to n. Now since g and h are linear transformations it follows
that

    g(Σ_{i=1}^{n} λ_i v_i) = Σ_{i=1}^{n} λ_i g(v_i) = Σ_{i=1}^{n} λ_i h(v_i) = h(Σ_{i=1}^{n} λ_i v_i)

for all choices of the scalars λ_i. But one of our hypotheses is that S spans V,
and so for every v ∈ V there exist scalars λ_i with v = Σ_{i=1}^{n} λ_i v_i. Hence
we can conclude that g(v) = h(v) for all v ∈ V, and since by definition g
and h are transformations of V, it follows that g = h. So we have proved
that φ(g) = φ(h) implies g = h; that is, φ is injective.

Since there is an injective function from W to Sym(S) it follows that
the number of elements of W is less than or equal to the number of elements
of Sym(S). Since the number of permutations of a set of size n is n!, a finite
number, we can conclude that Sym(S) is finite, and hence W is finite, as
required.
6.3 Corollary  Let S be a root system in the inner product space V, and
G the subgroup of O(V) generated by { σ_a | a ∈ S }. Then G is finite.

Proof.  Since S is finite and spans V, all we need in order to apply
Proposition 6.2 is that g(b) ∈ S for all b ∈ S and g ∈ G. But if g ∈ G then
g = σ_{a_1} σ_{a_2} · · · σ_{a_k} for some a_i ∈ S, and so

    g(b) = σ_{a_1} σ_{a_2} · · · σ_{a_k}(b) = σ_{a_1}(σ_{a_2}(· · · (σ_{a_k}(b)) · · ·))

for all b ∈ S. An obvious induction using Part (iv) of the definition of a root
system now shows that g(b) ∈ S.
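To see Corollary 6.3 in action, one can take the root system of type B₂, the eight vectors ±e₁, ±e₂, ±(e₁ + e₂), ±(e₁ − e₂) in ℝ² (not normalized, which does not affect the reflections), and close the set of reflections under multiplication. This Python sketch of mine uses exact rational matrices, and finds a group of order 8, the dihedral group of the square:

```python
from fractions import Fraction as F

S = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]

def refl_matrix(a):
    # Matrix of sigma_a: its columns are sigma_a(e1) and sigma_a(e2).
    aa = F(a[0] ** 2 + a[1] ** 2)
    def sig(v):
        c = 2 * F(v[0] * a[0] + v[1] * a[1]) / aa
        return (F(v[0]) - c * a[0], F(v[1]) - c * a[1])
    c1, c2 = sig((1, 0)), sig((0, 1))
    return ((c1[0], c2[0]), (c1[1], c2[1]))

def mul(m, n):
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

gens = {refl_matrix(a) for a in S}   # only 4 matrices: sigma_a = sigma_{-a}
G = {((F(1), F(0)), (F(0), F(1)))}   # start from the identity
frontier = list(gens)
while frontier:
    g = frontier.pop()
    if g not in G:
        G.add(g)
        frontier.extend(mul(g, h) for h in gens)
assert len(G) == 8
```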
Corollary 6.3 shows that for every root system there is a corresponding
finite subgroup of O(V) generated by reflections. In the other direction, the
next proposition shows that every finite subgroup of O(V) gives rise to a root
system.
6.4 Proposition  Let V be an inner product space and G a finite subgroup
of O(V). Then the set

    S = { a ∈ V | a · a = 1 and σ_a ∈ G }

is a root system in the subspace of V that it spans.

Proof.  Since G is a finite group it contains only finitely many reflections.
If a, b ∈ S are such that σ_a = σ_b then, by Proposition 5.6, there exists a
scalar λ such that a = λb. But a · a = 1 = b · b (since a, b ∈ S), and so

    1 = a · a = (λb) · (λb) = λ²(b · b) = λ².

Hence a = ±b, and so there are at most two elements of S for each reflection
in G. Hence S is finite.

We have shown that S satisfies condition (i) of Definition 6.1, and so by
the remarks following 6.1 it remains to show that conditions (ii) and (iv) are
also satisfied. Now (ii) is trivial, since if a · a = 1 then a is certainly nonzero.
So it remains to show that if a, b ∈ S then σ_a(b) ∈ S.

Let a, b ∈ S, and let c = σ_a(b). By Proposition 5.7 we know that σ_a is
an orthogonal transformation; so by Proposition 5.8,

    σ_c = σ_{σ_a(b)} = σ_a σ_b σ_a⁻¹,

which is in G since σ_a and σ_b are in G. Furthermore, the fact that σ_a is
orthogonal also tells us that

    c · c = σ_a(b) · σ_a(b) = b · b = 1.

Hence c ∈ S, as required.
We will say that a root system S is normalized if a · a = 1 for all a ∈ S.
It is easily checked that if S is any root system, then the set

    { (1/‖a‖) a | a ∈ S }

is a normalized root system. Of course, the reflections corresponding to the
vectors in this normalized system are exactly the same as those corresponding
to the vectors in S itself. So as far as groups generated by reflections are
concerned, there is nothing lost by restricting attention to normalized root
systems; hence we shall usually do so.
6b Positive, negative and simple roots

Let S be a root system in the inner product space V. Observe that if a ∈ S
then, by (iv) of Definition 6.1, σ_a(a) ∈ S. But

    σ_a(a) = a − (2(a · a)/(a · a)) a = −a.

So −a ∈ S whenever a ∈ S. We now propose to divide S into two halves,
called S⁺ and S⁻, in such a way that a ∈ S⁺ if and only if −a ∈ S⁻, and
vice versa. We do this in a rather arbitrary fashion, simply choosing some
hyperplane, and declaring the elements of S on one side of it to constitute S⁺,
those on the other side to constitute S⁻.

6.6 Definition  Let S be a root system, and fix v_0 ∈ V such that a · v_0 ≠ 0
for all a ∈ S. Define

    S⁺ = { a ∈ S | a · v_0 > 0 },

the set of positive roots, and

    S⁻ = { a ∈ S | a · v_0 < 0 },

the set of negative roots.
The following technical definition will be of great use in our
investigations of root systems.

6.7 Definition  If B = {b_1, b_2, . . . , b_n} is a finite set of vectors in the
vector space V, then a vector v ∈ V is said to be a positive linear combination
of B if v ≠ 0 and

    v = λ_1 b_1 + λ_2 b_2 + · · · + λ_n b_n

for some scalars λ_i such that λ_i ≥ 0 for all i. The set of all v ∈ V which are
positive linear combinations of B will be denoted by plc(B).
6.8 Definition  Let S be a root system in V, with sets S⁺ and S⁻ of
positive and negative roots relative to some fixed v_0 ∈ V, as in Definition 6.6.
A subset B ⊆ S⁺ is called a base of S if
(i) B is a basis of V, and
(ii) S⁺ ⊆ plc(B).

The next result is the key to our analysis of root systems.

6.9 Theorem  Let S be a root system in V and S⁺ a set of positive roots
in S. Then S has a base B ⊆ S⁺.
Proof.  Let 𝒜 = { B ⊆ S⁺ | S⁺ ⊆ plc(B) }. That is, 𝒜 is the set of all
subsets B of S⁺ with the following property, which we shall call Property (P):

(P) B ⊆ S⁺ and every a ∈ S⁺ is a positive linear combination of B.
Observe that S⁺ itself has Property (P). For, let S⁺ = {a_1, a_2, . . . , a_k};
now for all a ∈ S⁺ we can achieve a = λ_1 a_1 + λ_2 a_2 + · · · + λ_k a_k with
λ_i ≥ 0 by putting λ_i = 1 if a_i = a and λ_i = 0 otherwise. Hence 𝒜 ≠ ∅.
Now choose B ∈ 𝒜 with |B| as small as possible. So B has Property (P), but
no proper subset of B has Property (P). We will prove that B is a basis of V.
Step 1. Let B = {b_1, b_2, . . . , b_n}. For all i, the vector b_i is not a positive
linear combination of B∖{b_i} = {b_1, b_2, . . . , b_{i−1}, b_{i+1}, . . . , b_n}.

Proof. Suppose that

(6.9.1)    b_i = μ_1 b_1 + · · · + μ_{i−1} b_{i−1} + μ_{i+1} b_{i+1} + · · · + μ_n b_n

with all the coefficients μ_j ≥ 0. We will prove that B∖{b_i} has Property (P),
contradicting minimality of B.

Let a ∈ S⁺. Since B has (P),

    a = λ_1 b_1 + λ_2 b_2 + · · · + λ_n b_n    for some λ_i ≥ 0.

Substituting into this the value of b_i from Equation (6.9.1) gives

    a = λ_1 b_1 + · · · + λ_i(μ_1 b_1 + · · · + μ_{i−1} b_{i−1} + μ_{i+1} b_{i+1} + · · · + μ_n b_n) + · · · + λ_n b_n,

which is clearly a positive linear combination of B∖{b_i} since all the λ_i and
μ_j are nonnegative. So we have shown that S⁺ ⊆ plc(B∖{b_i}), as required.
Step 2. Let B = {b_1, b_2, . . . , b_n}. If b_i ≠ b_j, then b_i · b_j ≤ 0.

Proof. Suppose that b_i · b_j > 0. Let a = σ_{b_i}(b_j), which is an element of
the root system S since b_i, b_j ∈ S. Now

    a = b_j − λ b_i    where λ = 2(b_i · b_j)/(b_i · b_i) > 0.

If a ∈ S⁺ then a = λ_1 b_1 + λ_2 b_2 + · · · + λ_n b_n for some coefficients
λ_i ≥ 0. So

    b_j − λ b_i = λ_1 b_1 + λ_2 b_2 + · · · + λ_n b_n,

and rearranging this gives

(6.9.2)    (1 − λ_j) b_j = λ b_i + Σ_{l≠j} λ_l b_l.

Taking the dot product with v_0 we find that

    (1 − λ_j)(b_j · v_0) = λ(b_i · v_0) + Σ_{l≠j} λ_l (b_l · v_0) ≥ λ(b_i · v_0) > 0.

Since b_j · v_0 > 0, it follows that 1 − λ_j > 0. Now by (6.9.2)

    b_j = (λ/(1 − λ_j)) b_i + Σ_{l≠j} (λ_l/(1 − λ_j)) b_l ∈ plc(B∖{b_j}),

contradicting Step 1. Hence a ∉ S⁺, and so −a ∈ S⁺. So there exist
coefficients μ_i ≥ 0 with −a = μ_1 b_1 + μ_2 b_2 + · · · + μ_n b_n, and this gives

    λ b_i − b_j = μ_1 b_1 + μ_2 b_2 + · · · + μ_n b_n,

or, rearranging,

(6.9.3)    (λ − μ_i) b_i = b_j + Σ_{l≠i} μ_l b_l.

Taking the dot product with v_0 we deduce that

    (λ − μ_i)(b_i · v_0) = b_j · v_0 + Σ_{l≠i} μ_l (b_l · v_0) ≥ b_j · v_0 > 0,

and therefore λ − μ_i > 0 (since b_i · v_0 > 0). Now dividing through by
λ − μ_i in (6.9.3) we see that b_i ∈ plc(B∖{b_i}), contradicting Step 1.
Step 3. B is a basis of V .
Proof. Let v V . Now S spans V , since it is a root system in V , and so
there exist scalar coecients
a
such that v =
aS
a
a. But if a S then
a S
+
for some sign = 1, and since B has Property (P) it follows that
a plc(B). So there exist scalars
ab
(which are nonnegative if a S
+
and
nonpositive if a S
) such that a =
bB
ab
b, and thus
v =
bB
_
aS
ab
_
b.
92 Chapter Six: Root systems and reection groups
So every element of V is a linear combination of the elements of B, and we
have shown that B spans V .
It remains to show that the elements of B are linearly independent. Sup-
pose, to the contrary, that
1
b
1
+
2
b
2
+ +
n
b
n
= 0 for some scalars
i
, at
least one of which is nonzero. Now let H be the set of indices i 1, 2, . . . , n
such that
i
0, and J the set of those such that
i
< 0 (the rest). So
1, 2, . . . , n is the disjoint union of H and J, whence
0 =
n
i=1
i
b
i
=
iH
i
b
i
+
iJ
i
b
i
=
iH
[
i
[b
i
iJ
[
i
[b
i
,
and it follows that we can dene
v =
iH
[
i
[b
i
=
iJ
[
i
[b
i
.
Now v v
0
=
iH
[
i
[(b
i
v
0
) =
iJ
[
i
[(b
i
v
0
); furthermore, b
i
v
0
> 0 for
all i (since b
i
S
+
), whence each term [
i
[(b
i
v
0
) is nonnegative, and strictly
positive if
i
,= 0. Since we have assumed that at least one
i
is nonzero, we
conclude that at least one term is strictly positive and the others nonnegative,
and hence v v
0
> 0. In particular, v ,= 0. Therefore,
0 < v v =
_
iH
[
i
[b
i
_
jJ
[
j
[b
j
_
=
iH
jJ
[
i
[ [
j
[(b
i
b
j
) 0
since b
i
b
j
0 whenever i H and j J, by Step 2.
This contradiction proves that the vectors in B are linearly independent,
and therefore form a basis of V .
Since B is a basis and has Property (P), it is a base for the root system,
and Theorem 6.9 is proved.
It can be shown that the base B is in fact uniquely determined by the set
S⁺ of positive roots. The positive roots which lie in this uniquely determined
base are called the simple or fundamental roots of the positive system. Note
that, since B is a basis for V, an arbitrary root a can be expressed as a
linear combination of simple roots (since B spans V), and furthermore the
expression is unique (since B is linearly independent). If a is positive then
all the scalar coefficients in this unique expression will be nonnegative (since
B has Property (P)), while if a is negative then all the coefficients will be
nonpositive (since −a is a positive root in this case).

We know from Proposition 5.9 that the angle between any two roots
in a root system S is necessarily a rational multiple of π, and this applies in
particular to any two simple roots. We also know, in fact, from Step 2 of the
proof of Theorem 6.9, that the angle between any two simple roots must be
obtuse or a right angle (as the cosine of the angle is nonpositive). Our next
result gives even more accurate information.
6.10 Proposition  Let b, b′ be distinct elements of a base B of a root
system S. Then there is an integer m ≥ 2 such that the angle between b
and b′ is ((m − 1)/m)π.

Proof. The vectors b and b′ span a plane, and σ_b σ_{b′} acts on this plane
as a rotation through 2θ, where θ is the angle between b and b′. Let ρ be
this rotation, and let the order of ρ be m. Applying powers of ρ to b and b′
yields 2m vectors, all of which are in the root system and are linear
combinations of b and b′. These vectors are equally spaced around the plane
at angular intervals of π/m; label them a_0, a_1, . . . , a_{2m−1} in order, so
that b = a_0 and a_m = −b. Then b′ = a_i for some i such that 1 ≤ i ≤ m − 1.
(Note that b′ ≠ a_m and b′ ≠ a_0, since b and b′ are distinct positive roots.)
A closer analysis, using the fact that b and b′ belong to the base B, shows
that the possibility b′ = a_i with 1 ≤ i ≤ m − 2 cannot be sustained: b′ has
to be a_{m−1}, whence the angle between b and b′ is ((m − 1)/m)π, as
required.
6c Diagrams

A graph Γ is a set Vert(Γ), whose elements are called the vertices of Γ, together with a binary relation ∼, which in our cases will always be assumed to be symmetric and irreflexive. That is, a ∼ b holds if and only if b ∼ a holds, and a ∼ a is always false. The pairs {a, b} of vertices such that a ∼ b holds are called the edges of Γ.
Graphs are most conveniently represented by drawings consisting of dots joined by lines. There should be one dot for each vertex, and a line joining the dots corresponding to the vertices a and b if and only if {a, b} is an edge. Here is a graph with six vertices and seven edges. [drawing not reproduced]
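The definition can be made concrete with a small sketch. The particular graph below is hypothetical (the drawing in the original is not reproduced); it simply illustrates storing a graph as a vertex set together with a symmetric, irreflexive relation.

```python
# A graph as a vertex set plus a symmetric, irreflexive binary relation.
vertices = {1, 2, 3, 4, 5, 6}

# The relation, stored as ordered pairs; both directions are present
# (symmetry) and no pair (a, a) appears (irreflexivity).
relation = {(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3),
            (4, 1), (1, 4), (4, 5), (5, 4), (5, 6), (6, 5),
            (3, 6), (6, 3)}

assert all((b, a) in relation for (a, b) in relation)   # symmetric
assert all(a != b for (a, b) in relation)               # irreflexive

# The edges are the unordered pairs {a, b} with a ~ b.
edges = {frozenset(p) for p in relation}
print(len(vertices), len(edges))   # a graph with six vertices and seven edges
```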
If B is a base for the root system S, then, by Proposition 6.10, for all distinct a, b ∈ B there exists an integer $m_{ab} \ge 2$ such that the angle between a and b is $\frac{m_{ab}-1}{m_{ab}}\pi$. We now associate to S a graph Γ, called the Coxeter diagram of the root system, as follows.
(i) Vert(Γ) is in one-to-one correspondence with B.
(ii) If a, b ∈ B are not perpendicular, then the vertices corresponding to a and b are joined by an edge labelled with the integer $m_{ab}$.
Note that a, b ∈ B are perpendicular if and only if $m_{ab} = 2$. The vertices corresponding to a and b are not joined by an edge if $m_{ab} = 2$. It is also customary to omit the label on the edge if $m_{ab} = 3$. So, for example, if B = {a, b, c, d}, with

$m_{ac} = m_{ad} = m_{bd} = 2, \qquad m_{ab} = m_{cd} = 3, \qquad m_{bc} = 4,$

then the associated Coxeter diagram is

    o-----o--4--o-----o

where, from left to right, the vertices correspond to a, b, c and d. On the other hand, no root system can give rise to the diagram consisting of a triangle with all three edges labelled 4, since, as we commented at the start of this chapter, it is impossible to arrange three vectors in Euclidean space so that the angle between any two of them is $\frac34\pi$.
It is our aim in this section to determine exactly which graphs can occur as Coxeter diagrams of root systems, for this will essentially classify finite Euclidean reflection groups.
Suppose that Γ is a graph which resembles a Coxeter diagram, in that its edges are labelled with integers greater than 2 (unlabelled edges being understood to have the label 3). Define a function $m = m_\Gamma$, to be called the labelling function, as follows: the domain of m is Vert(Γ) × Vert(Γ), and for all x, y ∈ Vert(Γ),

    m(x, y) = 1 if x = y,
              2 if x ≠ y and {x, y} is not an edge,
              l if {x, y} is an edge labelled l.

Suppose that Γ in fact has n vertices; indeed, with no loss of generality, let us assume that Vert(Γ) = {1, 2, …, n}, and (for brevity) write $m_{ij}$ for $m_\Gamma(i, j)$.
Let $V_\Gamma$ be a vector space over ℝ with basis $v_1, v_2, \ldots, v_n$, and let $f_\Gamma$ be the bilinear form defined on $V_\Gamma$ by

$f_\Gamma(v_i, v_j) = \cos\Bigl(\frac{m_{ij}-1}{m_{ij}}\pi\Bigr)$

for all i, j. Note that the existence and uniqueness of $f_\Gamma$ are guaranteed by 5.3. Note also that $f_\Gamma(v_i, v_i) = 1$ for all i. We will call $V_\Gamma$ the space associated with Γ, and $f_\Gamma$ the form associated with Γ.
If this bilinear form is positive definite, it means that one can find linearly independent vectors $b_i$ in Euclidean space such that the angle between $b_i$ and $b_j$ is $\frac{m_{ij}-1}{m_{ij}}\pi$ (see Proposition 5.1). If the form is not positive definite then it is impossible to find such vectors in Euclidean space.
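Whether the form $f_\Gamma$ of a given labelled graph is positive definite can be tested mechanically. The following sketch builds the matrix of $f_\Gamma$ from the edge labels and applies Sylvester's criterion; the function names are mine, not the text's.

```python
import math

def coxeter_form(n, labels):
    """Matrix of f_Gamma: entry (i, j) is cos((m_ij - 1) * pi / m_ij),
    where m_ii = 1, m_ij = 2 for non-adjacent pairs, and m_ij is the
    edge label for adjacent pairs.  Vertices are 0, ..., n-1 here."""
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            m = 1 if i == j else labels.get(frozenset((i, j)), 2)
            M[i][j] = math.cos((m - 1) * math.pi / m)
    return M

def is_positive_definite(M):
    """Sylvester's criterion: all leading principal minors positive."""
    def det(A):
        if len(A) == 1:
            return A[0][0]
        return sum((-1) ** k * A[0][k]
                   * det([row[:k] + row[k + 1:] for row in A[1:]])
                   for k in range(len(A)))
    n = len(M)
    return all(det([row[:k] for row in M[:k]]) > 1e-12 for k in range(1, n + 1))

# The string o---o-4-o (type B3) is admissible ...
assert is_positive_definite(coxeter_form(3, {frozenset((0, 1)): 3,
                                             frozenset((1, 2)): 4}))
# ... while o---o-6-o is not (its form is only positive semidefinite).
assert not is_positive_definite(coxeter_form(3, {frozenset((0, 1)): 3,
                                                 frozenset((1, 2)): 6}))
print("checks passed")
```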
6.11 Definition  We say that a diagram Γ is admissible if the form $f_\Gamma$ is positive definite.

For example, let Γ be the diagram  o-----o--6--o . The matrix of $f_\Gamma$ relative to the basis $v_1, v_2, v_3$, that is, the matrix whose (i, j)-entry is $f_\Gamma(v_i, v_j)$, is

$\begin{pmatrix} 1 & -1/2 & 0 \\ -1/2 & 1 & -\sqrt3/2 \\ 0 & -\sqrt3/2 & 1 \end{pmatrix}.$

It turns out that the form $f_\Gamma$ is not positive definite: there is a nonzero vector v with $f_\Gamma(v, v) = 0$, namely $v = v_1 + 2v_2 + \sqrt3\,v_3$. By the definition of $V_\Gamma$, the vectors $v_1, v_2, v_3$ form a basis, and are therefore linearly independent. So $v \ne 0$. Moreover,

$f_\Gamma(v, v) = \begin{pmatrix} 1 & 2 & \sqrt3 \end{pmatrix} \begin{pmatrix} 1 & -1/2 & 0 \\ -1/2 & 1 & -\sqrt3/2 \\ 0 & -\sqrt3/2 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ \sqrt3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ \sqrt3 \end{pmatrix} = 0.$

Hence this diagram is inadmissible.
In a similar way we can see that the diagram consisting of a triangle with all edges unlabelled is also inadmissible. In this case the matrix of the form is

$\begin{pmatrix} 1 & -1/2 & -1/2 \\ -1/2 & 1 & -1/2 \\ -1/2 & -1/2 & 1 \end{pmatrix}$

and it can be seen that

$\begin{pmatrix} 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1/2 & -1/2 \\ -1/2 & 1 & -1/2 \\ -1/2 & -1/2 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = 0.$
On the other hand, the diagram  o-----o--4--o  is admissible. To prove this one has simply to find three linearly independent vectors a, b, c in Euclidean space such that the angle between a and b is $2\pi/3$, the angle between a and c is $\pi/2$, and the angle between b and c is $3\pi/4$. Three suitable vectors are

$a = \begin{pmatrix} 0 \\ \sqrt2/2 \\ -\sqrt2/2 \end{pmatrix}, \qquad b = \begin{pmatrix} \sqrt2/2 \\ -\sqrt2/2 \\ 0 \end{pmatrix}, \qquad c = \begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix}.$
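The angle conditions can be verified directly. In the sketch below the sign choices are one consistent possibility (the signs in this copy of the text are not fully legible); only the pairwise angles matter.

```python
import math

# Three unit vectors realising the diagram o-----o--4--o.
a = (0.0, math.sqrt(2) / 2, -math.sqrt(2) / 2)
b = (math.sqrt(2) / 2, -math.sqrt(2) / 2, 0.0)
c = (-1.0, 0.0, 0.0)

def angle(u, v):
    """Angle between two vectors via the dot product formula."""
    dot = sum(x * y for x, y in zip(u, v))
    norms = math.sqrt(sum(x * x for x in u) * sum(x * x for x in v))
    return math.acos(dot / norms)

assert math.isclose(angle(a, b), 2 * math.pi / 3)   # m_ab = 3
assert math.isclose(angle(a, c), math.pi / 2)       # m_ac = 2
assert math.isclose(angle(b, c), 3 * math.pi / 4)   # m_bc = 4
print("angles check out")
```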
Note that the diagram does not have to be connected. For example, a drawing of two disjoint diagrams [not reproduced here] can perfectly well be regarded as a single diagram, the space $V_\Gamma$ being 6-dimensional. In situations like this, if the i-th and j-th vertices of Γ belong to different components, then $m_{ij} = 2$; so if $v_i$ and $v_j$ are the corresponding elements of the canonical basis of $V_\Gamma$, then $f_\Gamma(v_i, v_j) = 0$. Thus if Γ has k connected components altogether, the canonical basis splits into k mutually disjoint subsets such that $f_\Gamma(v_i, v_j) = 0$ whenever $v_i$ and $v_j$ are from different subsets. This yields a decomposition of $V_\Gamma$ as $V_1 \oplus V_2 \oplus \cdots \oplus V_k$, where $V_i$ is the subspace spanned by the basis vectors corresponding to $\Gamma_i$, the $\Gamma_i$ being the connected components of Γ, and

$f_\Gamma(x_1 + x_2 + \cdots + x_k,\; x_1 + x_2 + \cdots + x_k) = f_\Gamma(x_1, x_1) + \cdots + f_\Gamma(x_k, x_k)$

whenever $x_i \in V_i$ (since the cross terms, like $f_\Gamma(x_4, x_2)$, are all zero).
Now if Γ is admissible, then $f_\Gamma(x_1 + x_2 + \cdots + x_k,\; x_1 + x_2 + \cdots + x_k) > 0$ whenever $x_1 + x_2 + \cdots + x_k \ne 0$. In particular, if $0 \ne x_i \in V_i$, then (putting $x_j = 0$ for all $j \ne i$) it follows that $f_\Gamma(x_i, x_i) > 0$. Hence the component $\Gamma_i$ is also an admissible diagram. Conversely, if each $\Gamma_i$ is admissible then Γ must be admissible also, since if $0 \ne x \in V_\Gamma$, then $x = x_1 + x_2 + \cdots + x_k$ for some $x_i \in V_i$, and since $x_i \ne 0$ for at least one i, we conclude that

$f_\Gamma(x_1 + \cdots + x_k,\; x_1 + \cdots + x_k) = f_\Gamma(x_1, x_1) + \cdots + f_\Gamma(x_k, x_k) = \sum_{x_i \ne 0} f_\Gamma(x_i, x_i) > 0.$

We have now proved the following proposition.

6.12 Proposition  A disconnected diagram is admissible if and only if all its connected components are admissible.
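Numerically, Proposition 6.12 reflects the fact that the form of a disconnected diagram is block diagonal, so its values split as a sum over the components. The following is an illustrative sketch, not from the text; the example components are of types A2 and B2.

```python
import math

A = [[1.0, -0.5], [-0.5, 1.0]]          # form of a component of type A2
c = math.cos(3 * math.pi / 4)           # entry for an edge labelled 4
B = [[1.0, c], [c, 1.0]]                # form of a component of type B2

# Form of the disjoint union: blocks A and B, zeros elsewhere.
n = 4
F = [[0.0] * n for _ in range(n)]
for i in range(2):
    for j in range(2):
        F[i][j] = A[i][j]
        F[2 + i][2 + j] = B[i][j]

def quad(M, x):
    """The value f(x, x) for the form with matrix M."""
    return sum(x[i] * M[i][j] * x[j]
               for i in range(len(M)) for j in range(len(M)))

# f(x1 + x2, x1 + x2) = f(x1, x1) + f(x2, x2): the cross terms vanish.
x = [1.0, 2.0, -1.0, 0.5]
assert math.isclose(quad(F, x), quad(A, x[:2]) + quad(B, x[2:]))
assert quad(F, x) > 0                   # both components are admissible
print(quad(F, x))
```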
Let us give another, slightly different, proof that if the connected components of Γ are admissible then so is Γ itself.

Proof. Observe first that the set of all $v \in \mathbb{R}^{n+m}$ whose last m coordinates are all zero constitutes an n-dimensional subspace V which is isomorphic to $\mathbb{R}^n$ in an obvious way. In other words, there is a vector space isomorphism $\varphi\colon \mathbb{R}^n \to V$ which preserves angles between vectors. Similarly, the set of all $w \in \mathbb{R}^{n+m}$ whose first n coordinates are zero constitutes an m-dimensional subspace W which is isomorphic to $\mathbb{R}^m$; this gives a vector space isomorphism $\psi\colon \mathbb{R}^m \to W$ which preserves angles. Furthermore, if $v \in V$ and $w \in W$ then $v \cdot w = 0$: the spaces V and W are perpendicular to each other. Note also that $\mathbb{R}^{n+m}$ is the direct sum of V and W.
Let $\Gamma_1$ and $\Gamma_2$ be admissible diagrams, and let Γ be the disconnected diagram consisting of $\Gamma_1$ alongside $\Gamma_2$. Let $\mathrm{Vert}(\Gamma_1) = \{1, 2, \ldots, n\}$ and $\mathrm{Vert}(\Gamma_2) = \{n+1, n+2, \ldots, n+m\}$, and let m be the labelling function. Since $\Gamma_1$ is admissible there exists a basis $u_1, u_2, \ldots, u_n$ of $\mathbb{R}^n$ such that the angle between $u_i$ and $u_j$ is $\frac{m(i,j)-1}{m(i,j)}\pi$ for all $i, j \in \{1, 2, \ldots, n\}$. Similarly, since $\Gamma_2$ is admissible there exists a basis $w_{n+1}, w_{n+2}, \ldots, w_{n+m}$ of $\mathbb{R}^m$ such that the angle between $w_i$ and $w_j$ is $\frac{m(i,j)-1}{m(i,j)}\pi$ for all $i, j \in \{n+1, n+2, \ldots, n+m\}$.
Now define $v_i = \varphi(u_i)$ for all $i \in \{1, 2, \ldots, n\}$, and $v_i = \psi(w_i)$ for all $i \in \{n+1, n+2, \ldots, n+m\}$. If $i, j \in \{1, 2, \ldots, n\}$ then the angle between $v_i$ and $v_j$ equals the angle between $u_i$ and $u_j$; if $i, j \in \{n+1, n+2, \ldots, n+m\}$ then the angle between $v_i$ and $v_j$ equals the angle between $w_i$ and $w_j$; and if $i \le n$ and $j \ge n+1$ then the angle between $v_i$ and $v_j$ is $\pi/2$. Hence the angle between $v_i$ and $v_j$ is $\frac{m(i,j)-1}{m(i,j)}\pi$ in all cases. Since $v_1, v_2, \ldots, v_{n+m}$ is clearly a basis of $\mathbb{R}^{n+m}$, this proves that Γ is admissible.
Clearly, Proposition 6.12 reduces the problem of classifying admissible diagrams to the problem of classifying connected admissible diagrams.
Our strategy for obtaining the complete list of admissible diagrams is as follows. We obtain a long list of inadmissible diagrams, proved inadmissible by methods similar to those we used above for the triangle and the diagram o-----o--6--o. The key step is to prove that if Γ is any inadmissible diagram, then any diagram which (in some sense) is more complicated than Γ is also inadmissible. It is a straightforward task to describe all diagrams which are not more complicated than any of the inadmissible diagrams on our list, and which are therefore the only possible admissible diagrams. Each of these possibly admissible diagrams is proved to be admissible by explicitly finding linearly independent vectors $b_i$ in Euclidean space such that the angle between $b_i$ and $b_j$ is $\frac{m_{ij}-1}{m_{ij}}\pi$.
The denition of more complicated is as follows:
is more compli-
cated than if
is more compli-
cated than , and we write
have n, n
.
(ii) Given a numbering 1, 2, . . . , n of the vertices of , there exists a num-
bering 1, 2, . . . , n
of the vertices of
, such that m
and (respectively).
For example, if
=
8
7 6
22
and
=
4
22
,
then
.
Let us prove the key proposition forthwith.
6.14 Proposition  If Γ ≼ Γ′, and if Γ is inadmissible, then Γ′ is also inadmissible.

Proof. Let $n = |\mathrm{Vert}(\Gamma)|$ and $n' = |\mathrm{Vert}(\Gamma')|$, and write f and f′ for the associated forms $f_\Gamma$ and $f_{\Gamma'}$, the relevant bases of $V_\Gamma$ and $V_{\Gamma'}$ being $v_1, v_2, \ldots, v_n$ and $v'_1, v'_2, \ldots, v'_{n'}$, the vertices of Γ′ being numbered as in the definition of ≼.
Let $i, j \in \{1, 2, \ldots, n\}$ with $i \ne j$. Then $2 \le m(i, j) \le m'(i, j)$, and since cos is decreasing on the interval $[0, \pi/2]$,

$0 \le \cos(\pi/m(i, j)) \le \cos(\pi/m'(i, j)).$

Since $\cos(\pi - \theta) = -\cos\theta$ for all $\theta$,

$0 \ge \cos\Bigl(\frac{m(i, j) - 1}{m(i, j)}\pi\Bigr) = -\cos(\pi/m(i, j)) \ge -\cos(\pi/m'(i, j)) = \cos\Bigl(\frac{m'(i, j) - 1}{m'(i, j)}\pi\Bigr),$

and so, by the definitions of f and f′,

$0 \ge f(v_i, v_j) \ge f'(v'_i, v'_j).$

We have shown that this holds for all $i, j \in \{1, 2, \ldots, n\}$ with $i \ne j$.
Note that a disconnected diagram is more complicated than any of its connected components; so 6.14 provides another proof that every connected component is admissible if the whole diagram is.
Given that Γ is inadmissible, there must exist a nonzero $v \in V_\Gamma$ with $f(v, v) \le 0$. Since v must be expressible as a linear combination of the basis vectors $v_1, v_2, \ldots, v_n$, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be scalars such that

$v = \sum_{i=1}^n \lambda_i v_i = \lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n.$

Now define $v' \in V_{\Gamma'}$ by

$v' = \sum_{i=1}^n |\lambda_i|\, v'_i = |\lambda_1|\, v'_1 + |\lambda_2|\, v'_2 + \cdots + |\lambda_n|\, v'_n.$

(Recall that $n' = \dim V_{\Gamma'} \ge n$; the coefficients of $v'_{n+1}, \ldots, v'_{n'}$ in the expression for v′ are taken to be zero.) Expanding, we find that

$f(v, v) = f\Bigl(\sum_{i=1}^n \lambda_i v_i,\; \sum_{j=1}^n \lambda_j v_j\Bigr) = \sum_{i=1}^n \lambda_i^2\, f(v_i, v_i) + \sum_{i \ne j} \lambda_i \lambda_j\, f(v_i, v_j).$

But $f(v_i, v_i) = 1$ for all i, and since $f(v_i, v_j) \le 0$ whenever $i \ne j$ it follows that $\lambda_i \lambda_j\, f(v_i, v_j) \ge |\lambda_i|\,|\lambda_j|\, f(v_i, v_j)$. Moreover, since $f'(v'_i, v'_j) \le f(v_i, v_j)$, we have $|\lambda_i|\,|\lambda_j|\, f(v_i, v_j) \ge |\lambda_i|\,|\lambda_j|\, f'(v'_i, v'_j)$ whenever $i \ne j$. Hence

$0 \ge f(v, v) \ge \sum_{i=1}^n \lambda_i^2 + \sum_{i \ne j} |\lambda_i|\,|\lambda_j|\, f(v_i, v_j) \ge \sum_{i=1}^n |\lambda_i|^2\, f'(v'_i, v'_i) + \sum_{i \ne j} |\lambda_i|\,|\lambda_j|\, f'(v'_i, v'_j).$

But this last expression equals $f'\Bigl(\sum_{i=1}^n |\lambda_i|\, v'_i,\; \sum_{j=1}^n |\lambda_j|\, v'_j\Bigr) = f'(v', v')$, and so we have shown that the element $v'$, which is nonzero since the $|\lambda_i|$ are not all zero, satisfies $f'(v', v') \le 0$. So f′ is not positive definite; that is, Γ′ is inadmissible, as required.
We now need a long list of inadmissible diagrams. We defer the proofs for the time being, but here is the list.
(i) The simple* circuits [cycles on 3, 4, 5, … vertices; drawings not reproduced] are all inadmissible.
(ii) The following diagrams are all inadmissible. [drawings not reproduced]
(iii) So are these. [three drawings, not reproduced, each involving an edge labelled 4]
(iv) And these. [drawings not reproduced]
(v) Here are four more inadmissible diagrams. [drawings not reproduced; the visible edge labels are 6, 5, 5 and 4]
(vi) Finally, the following three diagrams are also inadmissible. [drawings not reproduced]
* By simple we mean that the edge labels are all equal to three.
Accepting that it is true that the diagrams just listed are all inadmissible, let us determine exactly which diagrams might be admissible.

6.15 Theorem  Let Γ be a connected admissible diagram. Then Γ is one of the following types.

Type A_n:  o---o--- ... ---o  (n vertices, for any n ≥ 1)
Type B_n:  o-4-o--- ... ---o  (n vertices, for any n ≥ 2)
Type D_n:  three simple branches of lengths 1, 1 and n−3 from a common vertex  (n vertices, for any n ≥ 4)
Type I_2(p):  o-p-o  (any p > 4)
Type H_3:  o-5-o---o
Type H_4:  o-5-o---o---o
Type F_4:  o---o-4-o---o
Type E_6:  three simple branches of lengths 1, 2 and 2 from a common vertex
Type E_7:  three simple branches of lengths 1, 2 and 3
Type E_8:  three simple branches of lengths 1, 2 and 4

Proof. If Γ has exactly one vertex then it is of type A_1, and if it has exactly two vertices then it is either of type A_2 or B_2, or of type I_2(p) for some p > 4. So we may assume that Γ has at least three vertices.
So we may assume that has at least three vertices.
If has an edge label which is 6 or greater, then is more complicated
than the rst diagram in (iv) of our list of inadmissible diagrams, and by
104 Chapter Six: Root systems and reection groups
Proposition 6.14 it follows that is inadmissible, contradiction. So 3, 4 and 5
are the only labels that occur.
If Γ had a circuit then it would be more complicated than one of the simple circuits in (i) of our list of inadmissible diagrams, and if Γ had a vertex of valency 4 or more, then it would be more complicated than the first diagram in (ii) of our list of inadmissible diagrams. By 6.14, this is impossible. Similarly, it is impossible for Γ to have two or more vertices of valency 3, for otherwise Γ would be more complicated than one of the other diagrams in (ii) of our list of inadmissibles. These facts combine to tell us that Γ is either a string, with various labels 3, 4 or 5 on the edges, or else consists of three branches of various lengths emanating from the only vertex of valency 3, again with variously labelled edges.
The diagrams in (iii) of the inadmissibles list show that Γ cannot have a vertex of valency 3 and an edge labelled 4 or more; so in the three-branch case Γ can have only simple edges (labelled 3). The first diagram in (vi) of the inadmissibles list shows that the three branches cannot all have length two or more; that is, at least one of the branches has length one. If two of the branches have length one then Γ is of type D_n for some n, and this is listed as a possibility in the theorem statement. So we can assume that exactly one of the branches has length one. Now if both the other branches had length three or more, then Γ would be more complicated than the second diagram in (vi) of the inadmissibles, which is impossible. So at least one of the other branches has length exactly two. The third branch can have length two, three or four, corresponding to types E_6, E_7 and E_8, but no more than that, or else Γ would be more complicated than the third diagram in (vi). So all the three-branch possibilities are covered.
Suppose, on the other hand, that Γ is a string, so that there are exactly two vertices of valency 1, the rest having valency 2. The diagrams in (iv) show that Γ cannot have two edges labelled 4 or more. That is, there is at most one non-simple edge. If all the edges are simple then Γ is of type A_n; so we may assume that there is exactly one non-simple edge. If the label on this edge is 5, then its endpoints cannot both have valency 2, or Γ would be more complicated than the second diagram in (v) of the list of inadmissible diagrams. In other words, if the label is 5 then the non-simple edge is one of the two end edges. And Γ must be either of type H_3 or H_4, since if it had five or more vertices it would be more complicated than the third diagram in (v). So it remains to deal with the cases when the non-simple edge is labelled 4. If the non-simple edge is an end edge then Γ is of type B_n. If not, then Γ must be of type F_4, since if it had five or more vertices it would be more complicated than the fourth diagram in (v). So all the string possibilities are covered too.
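The positive definiteness claims underlying this classification can be spot-checked numerically, since the form $f_\Gamma$ is computable from the labels. The sketch below (helper names are mine) confirms that the H3, H4 and F4 strings pass the test, while lengthening the H-string to five vertices fails it, in line with the argument above.

```python
import math

def form(labels, n):
    """Matrix of f_Gamma for a diagram on vertices 0..n-1 with the given
    edge labels (a dict mapping (i, j) pairs to integers m >= 3)."""
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for (i, j), m in labels.items():
        M[i][j] = M[j][i] = math.cos((m - 1) * math.pi / m)
    return M

def positive_definite(M):
    """Attempted Cholesky factorisation; a nonpositive pivot means failure."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 1e-12:
                    return False
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True

def string_labels(ms):
    """Edge labels along a string of len(ms) + 1 vertices."""
    return {(i, i + 1): m for i, m in enumerate(ms)}

assert positive_definite(form(string_labels([5, 3]), 3))        # type H3
assert positive_definite(form(string_labels([5, 3, 3]), 4))     # type H4
assert positive_definite(form(string_labels([3, 4, 3]), 4))     # type F4
assert not positive_definite(form(string_labels([5, 3, 3, 3]), 5))
print("spot checks passed")
```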
6d Existence and inadmissibility proofs

Overlooking the fact that a few proofs have been skipped, and a technicality to be mentioned in a moment, we have now completely classified the finite groups of transformations of Euclidean space which are generated by reflections. For, if G is such a group, it must have a root system, and the root system must have a base, and the base must correspond to an admissible diagram. The technicality is that, for all we have proved so far, several different groups generated by reflections might give the same diagram. In fact, it is not too difficult to prove (although we will not do it) that if G is generated by reflections, then it is also generated by the reflections corresponding to the roots in any base for its root system. This means that the diagram does determine G up to isomorphism. So the classification theorem for Euclidean reflection groups (which we have not quite proved) is as follows.

6.16 Theorem  There is a one-to-one correspondence between isomorphism classes of finite Euclidean reflection groups and diagrams whose connected components come from the list in Theorem 6.15.

We have also yet to prove that all the types listed in Theorem 6.15 are actually admissible, and correspond to finite reflection groups. To show that the diagrams are admissible simply requires finding, in each case, n vectors in Euclidean space with the right configuration of angles. Proving that there is a corresponding finite reflection group is more difficult, and requires constructing the entire root system (so that Proposition 6.2 can be applied). Finally, we have yet to prove the inadmissibility of all those diagrams.
The proofs of inadmissibility all use the same method, and so we will leave most of them as exercises. They depend on the following lemma.
6.17 Lemma  Suppose that f is a bilinear form on a vector space V over the field ℝ, and suppose that $v_1, v_2, \ldots, v_n$ is a basis of V. If there exist scalars $\lambda_i \ge 0$ that are not all zero and have the property that

$\sum_{i=1}^n \lambda_i\, f(v_i, v_j) \le 0$

for all $j \in \{1, 2, \ldots, n\}$, then f is not positive definite.
Proof. Suppose that there exist such $\lambda_i$, and put $v = \sum_{i=1}^n \lambda_i v_i$. Then clearly $v \ne 0$, since the $\lambda_i$ are not all zero and the $v_i$ are linearly independent. However,

$f(v, v) = f\Bigl(v,\; \sum_{j=1}^n \lambda_j v_j\Bigr) = \sum_{j=1}^n \lambda_j\, f(v, v_j) = \sum_{j=1}^n \lambda_j\, f\Bigl(\sum_{i=1}^n \lambda_i v_i,\; v_j\Bigr) = \sum_{j=1}^n \lambda_j \Bigl(\sum_{i=1}^n \lambda_i\, f(v_i, v_j)\Bigr) \le 0$

since $\lambda_j \bigl(\sum_{i=1}^n \lambda_i\, f(v_i, v_j)\bigr) \le 0$ for all j. Hence f is not positive definite.
In practice, the way to apply the lemma is to choose the scalars $\lambda_i$ so that $\sum_{i=1}^n \lambda_i\, f(v_i, v_j) = 0$ for all but one value of j. Since the values $f(v_i, v_j)$ are known, this involves solving a system of $n - 1$ homogeneous linear equations in the n unknowns $\lambda_i$. The solution will probably be unique up to a scalar multiple. Take any nonzero solution and see whether $\sum_{i=1}^n \lambda_i\, f(v_i, v_j) \le 0$ for the remaining value of j. It will be, in every case we need.
For example, let Γ be the third diagram in (vi), and let $v_1, v_2, \ldots, v_9$ be the canonical basis of $V_\Gamma$, the vertices being numbered so that vertex 4 is the vertex of valency 3, joined to vertices 1, 3 and 5, with the remaining edges being 2–3, 5–6, 6–7, 7–8 and 8–9. Then $f_\Gamma(v_i, v_i) = 1$ for all i, while

$f_\Gamma(v_1, v_4) = f_\Gamma(v_2, v_3) = f_\Gamma(v_3, v_4) = f_\Gamma(v_4, v_5) = -1/2,$
$f_\Gamma(v_5, v_6) = f_\Gamma(v_6, v_7) = f_\Gamma(v_7, v_8) = f_\Gamma(v_8, v_9) = -1/2,$

and $f_\Gamma(v_i, v_j) = 0$ in all other cases. If we now consider the equations $\sum_{i=1}^9 \lambda_i\, f_\Gamma(v_i, v_j) = 0$ for all $j \ne 2$, we find the requirements to be
$0 = -\tfrac12\lambda_8 + \lambda_9$
$0 = -\tfrac12\lambda_7 + \lambda_8 - \tfrac12\lambda_9$
$0 = -\tfrac12\lambda_6 + \lambda_7 - \tfrac12\lambda_8$
$0 = -\tfrac12\lambda_5 + \lambda_6 - \tfrac12\lambda_7$
$0 = -\tfrac12\lambda_4 + \lambda_5 - \tfrac12\lambda_6$
$0 = -\tfrac12\lambda_3 - \tfrac12\lambda_1 + \lambda_4 - \tfrac12\lambda_5$
$0 = \lambda_1 - \tfrac12\lambda_4$
$0 = -\tfrac12\lambda_2 + \lambda_3 - \tfrac12\lambda_4$
and if we put $\lambda_9 = c$ we quickly find that $\lambda_8 = 2c$, $\lambda_7 = 3c$, $\lambda_6 = 4c$, $\lambda_5 = 5c$, $\lambda_4 = 6c$, $\lambda_1 = 3c$, $\lambda_3 = 4c$ and $\lambda_2 = 2c$. Now, lo and behold!, we see that $\sum_{i=1}^9 \lambda_i\, f_\Gamma(v_i, v_2) = \lambda_2 - \tfrac12\lambda_3 = 0$. So the conditions of the lemma are satisfied, and the form $f_\Gamma$ is not positive definite (indeed $f_\Gamma(v, v) = 0$).
It actually works just like this in most of the other cases, and we wind up with a nonzero vector v such that $f_\Gamma(v, v_j) = 0$ for all j (which certainly gives $f_\Gamma(v, v) = 0$). Only the diagrams with edge labels of 5 give slightly more complicated calculations.
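The linear algebra in the worked example is small enough to verify mechanically. In the sketch below the vertex numbering follows the worked example (vertex 4 joined to 1, 3 and 5, with further edges 2–3, 5–6, 6–7, 7–8 and 8–9), and $c = 1$.

```python
# Gram-type matrix of the form: diagonal 1, entry -1/2 for each edge.
edges = [(1, 4), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]
F = [[1.0 if i == j else 0.0 for j in range(9)] for i in range(9)]
for (i, j) in edges:
    F[i - 1][j - 1] = F[j - 1][i - 1] = -0.5

# lambda_1, ..., lambda_9 with c = 1, as found in the text.
lam = [3, 2, 4, 6, 5, 4, 3, 2, 1]
sums = [sum(lam[i] * F[i][j] for i in range(9)) for j in range(9)]

# Every sum vanishes, including the one for j = 2, so Lemma 6.17 applies.
assert all(abs(s) < 1e-12 for s in sums)
print("Lemma 6.17 applies: the form is not positive definite")
```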
As for the existence proofs, again we will content ourselves with one example: type E_8. This is, in fact, the most difficult. We start with an orthogonal basis $e_1, e_2, \ldots, e_8$ of 8-dimensional Euclidean space such that each $e_i$ has length $1/\sqrt2$. For example, in $\mathbb{R}^8$, we could choose $e_i$ to be the 8-tuple whose i-th component is $1/\sqrt2$ and whose other components are all zero. Let S consist of all the vectors $\pm e_i \pm e_j$ with $i \ne j$, together with all the vectors

$\tfrac12\sum_{i=1}^8 \varepsilon_i e_i \quad\text{where each } \varepsilon_i = \pm 1 \text{ and } \prod_{i=1}^8 \varepsilon_i = 1.$

There are 240 vectors altogether in S, since in the first piece there are 112 (since there are $\binom82 = 28$ choices for the pair {i, j} and 4 choices for the signs) and 128 in the other piece (since the first seven signs can be chosen arbitrarily, giving $2^7$ possibilities, and then the last sign is determined uniquely).
It is not hard to check that $u\cdot u = 1$ and $u\cdot v \in \{-\tfrac12, 0, \tfrac12\}$ for all $u, v \in S$ with $v \ne \pm u$. Thus the angle between two such vectors in S is always either $\pi/3$, $\pi/2$ or $2\pi/3$, and so we have that

$\rho_u(v) = \begin{cases} v - u & \text{if the angle is } \pi/3 \\ v & \text{if the angle is } \pi/2 \\ v + u & \text{if the angle is } 2\pi/3. \end{cases}$

It needs to be checked that $\rho_u(v) \in S$ in all cases. Again, this is not hard. The point is that once it has been checked for one pair u, v, permuting the coordinates yields many other pairs which need not be checked separately.
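All of these checks are finite, so they can also be carried out exhaustively by machine. The sketch below uses the coordinate scaling above, storing each root by its doubled coefficients with respect to $e_1, \ldots, e_8$ so that everything stays in exact integers.

```python
from itertools import combinations, product

# Enumerate the root system S of type E8 described above.  Since
# e_i . e_j = (1/2) delta_ij, the inner product of coefficient vectors c, d
# is (1/2) sum c_k d_k; we store the doubled coefficients 2c_k.
roots = set()
for i, j in combinations(range(8), 2):          # the 112 roots +-e_i +- e_j
    for si, sj in product((2, -2), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.add(tuple(v))
for eps in product((1, -1), repeat=8):          # (1/2) sum eps_i e_i
    if eps.count(-1) % 2 == 0:                  # product of the eps_i is 1
        roots.add(eps)
assert len(roots) == 240

def dot(u, v):
    # u, v hold doubled coefficients, so u.v = (1/2) sum (u_k/2)(v_k/2).
    return sum(a * b for a, b in zip(u, v)) / 8

assert all(dot(u, u) == 1 for u in roots)
assert {dot(u, v) for u in roots for v in roots
        if u != v and u != tuple(-x for x in v)} == {-0.5, 0.0, 0.5}

# Closure under the reflections: rho_u(v) = v - 2(u.v)u stays in S.
assert all(tuple(b - int(2 * dot(u, v)) * a for a, b in zip(u, v)) in roots
           for u in roots for v in roots)
print("E8 root system verified:", len(roots), "roots")
```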
Define $a_1 = \tfrac12(e_1 - e_2 - e_3 - e_4 - e_5 - e_6 - e_7 + e_8)$ and $a_2 = e_1 + e_2$, and, for $3 \le i \le 8$, define $a_i = e_{i-1} - e_{i-2}$. Then if we take $B = \{a_i \mid 1 \le i \le 8\}$ it can be checked that the inner products $a_i \cdot a_j$ are as they should be for the diagram of type E_8. (That is, $a_i \cdot a_j = \cos(2\pi/3) = -\tfrac12$ if the vertices i and j are adjacent, and $a_i \cdot a_j = 0$ for nonadjacent vertices.) It is an interesting fact that when an arbitrary root is expressed as a linear combination of the simple roots $a_i$, all the coefficients turn out to be integers. A root $\sum_{i=1}^8 \lambda_i e_i \in S$ is positive if the largest i with $\lambda_i \ne 0$ has $\lambda_i > 0$.
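Since there are only finitely many inner products, the claim about B can be verified directly. In the sketch below the numbering of the E8 diagram (a chain 1–3–4–5–6–7–8 with vertex 2 attached to 4) is an assumption, chosen to be consistent with the inner products that result; `Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction

# Coefficients of the simple roots with respect to e_1, ..., e_8
# (recall e_i . e_j = (1/2) delta_ij).
half = Fraction(1, 2)
a = [None] * 9                                   # 1-based indexing
a[1] = [half, -half, -half, -half, -half, -half, -half, half]
a[2] = [1, 1, 0, 0, 0, 0, 0, 0]
for i in range(3, 9):                            # a_i = e_{i-1} - e_{i-2}
    v = [0] * 8
    v[i - 2], v[i - 3] = 1, -1
    a[i] = v

def dot(u, v):
    return sum(x * y for x, y in zip(u, v)) * half

# Assumed edges of the E8 Coxeter diagram with this numbering.
diagram = {frozenset(e) for e in
           [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]}
for i in range(1, 9):
    assert dot(a[i], a[i]) == 1
    for j in range(i + 1, 9):
        expected = -half if frozenset((i, j)) in diagram else 0
        assert dot(a[i], a[j]) == expected
print("Gram matrix matches the E8 diagram")
```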
It is possible to explicitly describe the linear transformations g in the reflection group G corresponding to this root system as matrices relative to the basis $e_1, \ldots, e_8$. Firstly, there are the 8! permutation matrices, and the $2^7$ diagonal matrices with diagonal entries $\pm1$ and determinant 1. These generate a group of order $2^7 \cdot 8! = 5160960$ which is a subgroup H of G. (In fact, H is itself a Euclidean reflection group: it is of type D_8.) The idea now is to investigate the cosets of H in G. If $v_1, \ldots, v_8$ is any orthonormal basis of the space of eight-component row vectors then there are $2^7 \cdot 8!$ orthogonal matrices of the form xg where $x \in H$ and g is the matrix whose rows are $v_1, \ldots, v_8$. (These matrices are obtained from g by permuting the rows and multiplying an even number of rows by $-1$.) We proceed to describe a large number of orthonormal bases which give rise to elements of G.
Let $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_8$ be signs ($\varepsilon_i = \pm1$) with $\prod_{i=1}^8 \varepsilon_i = 1$, and define

$v_{ij} = \begin{cases} \tfrac34\varepsilon_j & \text{if } i = j \\ -\tfrac14\varepsilon_j & \text{if } i \ne j. \end{cases}$

Let $v_i$ be the row whose j-th entry is $v_{ij}$. Then $v_1, \ldots, v_8$ form an orthonormal basis and give rise to a coset of H in G. In fact this gives 64 cosets, corresponding to the 64 choices for the signs $\varepsilon_i$. These 64 cosets give us another $64|H| = 330301440$ elements of G. (Who would have thought that there would be so many $8 \times 8$ orthogonal matrices whose entries are all plus or minus three quarters or one quarter!)
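Orthonormality of these rows is easy to confirm. The sign placement below (plus on the diagonal, minus off it) is one reading of the partially garbled formula; either consistent placement gives orthonormal rows, which is what matters here.

```python
from fractions import Fraction

def rows(eps):
    """Rows v_i with entries (3/4) eps_j on the diagonal and
    -(1/4) eps_j off it; exact arithmetic via Fraction."""
    n = len(eps)
    return [[(Fraction(3, 4) if i == j else Fraction(-1, 4)) * eps[j]
             for j in range(n)] for i in range(n)]

# A few sign choices with product +1.
for eps in [(1,) * 8, (1, -1, 1, -1, 1, -1, 1, -1),
            (-1, -1, 1, 1, 1, 1, 1, 1)]:
    v = rows(eps)
    for i in range(8):
        for j in range(8):
            d = sum(v[i][k] * v[j][k] for k in range(8))
            assert d == (1 if i == j else 0)    # orthonormal rows
print("orthonormal for each choice of signs")
```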
Choose a division of the set {1, 2, …, 8} into two subsets J and J′ of four elements each (there are thirty-five ways of doing this) and let $\varepsilon = \pm1$. Let $J = \{j_1, j_2, j_3, j_4\}$ and $J' = \{j_5, j_6, j_7, j_8\}$, and let $\xi_{ij}$ be the (i, j)-entry of the matrix

$X = \begin{pmatrix} A & 0 \\ 0 & \varepsilon B \end{pmatrix}$

[the $8 \times 8$ matrix X is only partially legible in this copy; it has the block diagonal form shown, where A and B are $4 \times 4$ matrices with entries $\pm1$ and mutually orthogonal rows]. Let $v_i$ be the vector whose k-th entry is $\tfrac12\xi_{ij_k}$; then the matrix g whose rows are the $v_i$ is in G. Since there were 35 possible partitions of {1, 2, …, 8} as $J \cup J'$ …
Index of notation

Sym(S) 7
O(V) 10
[G : H] 39
|G| 39
Aut(G) 57
Z(G) 62
Inn(G) 62
C_G(g) 66
plc(B) 89
V_Γ 96
f_Γ 96
Index
A
Abelian groups . . . . . . . . . . . . . . . . . . . . . . 16
additive group of a field . . . . . . . . . . . . . 17
admissible diagram. . . . . . . . . . . . . . . . . . 96
angle between two vectors . . . . . . . . . . . 71
automorphism of a group . . . . . . . . . . . . 57
automorphisms . . . . . . . . . . . . . . . . . . . . . . 10
of cyclic groups . . . . . . . . . . . . . . . . 58, 59
of Klein 4-group. . . . . . . . . . . . . . . . . . . 58
axis of symmetry . . . . . . . . . . . . . . . . . . . . 73
B
base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
bilinear form . . . . . . . . . . . . . . . . . . . . . . . . 72
C
canonical homomorphism. . . . . . . . . . . . 52
Cantor, Georg . . . . . . . . . . . . . . . . . . . . . . . 27
cardinality. . . . . . . . . . . . . . . . . . . . . . . 26, 29
Cartesian coordinates. . . . . . . . . . . . . . . . 71
Cartesian product . . . . . . . . . . . . . . . . . . . . 4
central quotient . . . . . . . . . . . . . . . . . . . . . 63
centralizer . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
centre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
of a p-group. . . . . . . . . . . . . . . . . . . . . . . 69
class equation . . . . . . . . . . . . . . . . . . . . . . . 69
classes of a group. . . . . . . . . . . . . . . . . . . . 63
of Sym{1, 2, 3} . . . . . . . . . . . . . . . . . . . 64
of Sym{1, 2, . . . , n} . . . . . . . . . . . . . . . 64
of Sym{1, 2, 3, 4, 5} . . . . . . . . . . . . . . . 65
of the dihedral group of order 8 . . . 66
of GL_3(ℂ) . . . . . . . . . . . . . . . . . . . . . . . 66
closure under an operation. . . . . . . . . . . 21
under inversion . . . . . . . . . . . . . . . . . 9, 22
under multiplication. . . . . . . . . . . . . . . . 8
codomain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
π-commensurable . . . . . . . . . . . . . . . . . . . . 80
conjugate elements . . . . . . . . . . . . . . . . . . 63
cosets of a subgroup . . . . . . . . . . . . . . . . . 23
of C_G(g) . . . . . . . . . . . . . . . . . . . . . . . . 67
countable set . . . . . . . . . . . . . . . . . . . . . . . . 27
Coxeter diagram. . . . . . . . . . . . . . . . . . . . . 94
cycle notation . . . . . . . . . . . . . . . . . . . . . . . . 8
cycle type . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
cyclic groups . . . . . . . . . . . . . . . . . . . . . . . . 11
D
dense subset. . . . . . . . . . . . . . . . . . . . . . . . . 27
desmic tetrahedra . . . . . . . . . . . . . . . . . . . 81
determinant homomorphism. . . . . . . . . 40
dihedral group of order 8 . . . . . . . . . . . . . 9
dihedral group of order 2n. . . . . . . . . . . 73
distance between two points . . . . . . . . . 71
domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
dot product . . . . . . . . . . . . . . . . . . . . . . . 9, 71
E
endomorphism. . . . . . . . . . . . . . . . . . . . . . . 60
enumerable set . . . . . . . . . . . . . . . . . . . . . . 27
equivalence relation. . . . . . . . . . . . . . . . . . 30
equivalence class . . . . . . . . . . . . . . . . . . . . 31
Euclidean space . . . . . . . . . . . . . . . . . . . . . . 9
even permutation. . . . . . . . . . . . . . . . . . . . 41
F
First Isomorphism Theorem . . . . . . . . . 54
foundations. . . . . . . . . . . . . . . . . . . . . . . . . . . 3
functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
fundamental roots . . . . . . . . . . . . . . . . . . . 92
G
general linear group . . . . . . . . . . . . . . . . . 18
generating a group . . . . . . . . . . . . . . . . . . 11
graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
greatest common divisor . . . . . . . . . . . . . 60
group of transformations. . . . . . . . . . . . . . 9
H
homomorphism. . . . . . . . . . . . . . . . . . 40, 45
injective . . . . . . . . . . . . . . . . . . . . . . . . . . 54
kernel of . . . . . . . . . . . . . . . . . . . . . . . . . . 52
natural . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Sym{1, 2, 3, 4} to Sym{1, 2, 3} . . . . . 42
Homomorphism Theorem. . . . . . . . . . . . 54
hyperplane . . . . . . . . . . . . . . . . . . . . . . . . . . 75
I
identity permutation. . . . . . . . . . . . . . . . . . 8
image of a homomorphism. . . . . . . . . . . 53
inadmissible diagram. . . . . . . . . . . . . . . . 96
index of a subgroup . . . . . . . . . . . . . . . . . 39
inherited operation . . . . . . . . . . . . . . . . . . 21
injectivity of a homomorphism. . . . . . . 54
inner automorphism. . . . . . . . . . . . . . . . . 61
inner product space. . . . . . . . . . . . . . . . . . . 9
invariant subspace . . . . . . . . . . . . . . . . . . . 79
isomorphic groups . . . . . . . . . . . . . . . . . . . 12
isomorphism . . . . . . . . . . . . . . . . . . . . . . . . 45
K
kernel of a homomorphism . . . . . . . . . . . . 52
Klein's four group . . . . . . . . . . . . . . . . . . . . 12
L
labelling function . . . . . . . . . . . . . . . . . . . . 95
Latin square . . . . . . . . . . . . . . . . . . . . . . . . . 20
M
map, mapping . . . . . . . . . . . . . . . . . . . . . . . . 4
modulus homomorphism. . . . . . . . . . . . . 41
multiplicative group of a field . . . . . . . . . 17
N
natural homomorphism. . . . . . . . . . . . . . 52
negative roots . . . . . . . . . . . . . . . . . . . . . . . 89
normal subgroup . . . . . . . . . . . . . . . . . . . . 46
normalized root system. . . . . . . . . . . . . . 87
O
odd permutation . . . . . . . . . . . . . . . . . . . . 41
operations as relations . . . . . . . . . . . . . . . . 5
operation on a set . . . . . . . . . . . . . . . . . . . 15
orthogonal direct sum . . . . . . . . . . . . . . . 80
orthogonal transformation . . . . . . . . . . . 10
orthogonal group . . . . . . . . . . . . . . . . . . . . 10
orthonormal basis . . . . . . . . . . . . . . . . . . . 71
P
parallelogram law . . . . . . . . . . . . . . . . . . . 74
parity of a permutation. . . . . . . . . . . . . . 41
permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 7
permutation multiplication . . . . . . . . . . . 7
perpendicularity relation. . . . . . . . . . . . . . 4
π-commensurable . . . . . . . . . . . . . . . . . . . . 80
polygonal π-commensurable set . . . . . . . 80
position vectors . . . . . . . . . . . . . . . . . . . . . 71
positive definiteness . . . . . . . . . . . . . . . . . . . 9
positive linear combination . . . . . . . . . . 89
positive roots. . . . . . . . . . . . . . . . . . . . . . . . 89
Pythagorean triples. . . . . . . . . . . . . . . . . . . 2
Q
quotient by an equivalence relation . . 32
R
rational number . . . . . . . . . . . . . . . . . . . . . 27
reflection formula . . . . . . . . . . . . . . . . . . . . 75
regular polygon. . . . . . . . . . . . . . . . . . . . . . 72
relations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
reflexive . . . . . . . . . . . . . . . . . . . . . . . . . . 30
symmetric . . . . . . . . . . . . . . . . . . . . . . . . 30
transitive . . . . . . . . . . . . . . . . . . . . . . . . . 30
root system . . . . . . . . . . . . . . . . . . . . . . . . . 84
rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
rotation matrix. . . . . . . . . . . . . . . . . . . . . . 79
S
sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
sign of a permutation. . . . . . . . . . . . . . . . 41
simple edge . . . . . . . . . . . . . . . . . . . . . . . . 101
simple roots . . . . . . . . . . . . . . . . . . . . . . . . . 92
squares as structured sets . . . . . . . . . . . . . 5
structured sets . . . . . . . . . . . . . . . . . . . . . . . 5
subgroup. . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
subset multiplication . . . . . . . . . . . . . . . . 45
surjectivity of a homomorphism. . . . . . 54
symmetric group . . . . . . . . . . . . . . . . . . . . . 7
symmetries of a square . . . . . . . . . . . . . . . 8
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
intuitive definition . . . . . . . . . . . . . . . . . . 4
precise definition . . . . . . . . . . . . . . . . . . . 6
T
tetrahedron . . . . . . . . . . . . . . . . . . . . . . . . . . 81
transformation . . . . . . . . . . . . . . . . . . . . . . . . 4
relation-preserving . . . . . . . . . . . . . . . . . 6
V
vector spaces, as structured sets . . . . . . 5