Before defining quaternion algebras, we begin with some general preliminaries on algebras.
This is largely to fix some definitions and to help us see how quaternion algebras fit into the
general picture of algebraic structures. While some of the definitions may seem unmotivated
at first, I hope this will be somewhat clarified by the examples. I urge you to think carefully
about the definitions. A good definition is as important (and often harder to come up with)
as a good theorem.
In the interest of time, many relevant details that I think of as “general algebraic facts”
are left as exercises. This is a compromise forced on me by time constraints and my goals for
the course, but I hope that I have treated things in a way that these exercises are both useful
for learning the material and not overly demanding. The reader wanting more details can
consult another reference such as [Rei03], [Pie82], [BO13], [GS06] or [MR03]. There is
also a new book [Bre14], which I haven’t looked at closely but aims to present this material
in a way requiring minimal prerequisites.
Algebras are a generalization of field extensions. For instance, if K is a number field,
then it can be regarded as a vector space over Q with a multiplication law for the vectors
that is commutative. Algebras will be vector spaces over a field F with a multiplication law
defined on the vectors, which we do not assume is commutative.
QUAINT Chapter 2: Basics of associative algebras Kimball Martin
Sometimes we will just say A is an algebra when we do not need to specify the field F .
Now let’s look at examples both of algebras with one of these extra properties and with
neither of these extra properties.
First, let’s give the second bonus property a name. We say an F -algebra A is a divi-
sion algebra if, as a ring, it is a division ring, i.e., if every nonzero element of A has a
multiplicative (necessarily 2-sided) inverse. Note division rings are sometimes called skew
fields, as the only condition lacking to be a field is commutativity of multiplication. In fact
some authors use the term field to mean division ring, e.g. in [Wei95], or corps in French,
e.g. [Vig80]. However this is not so common nowadays (at least in my circles).
Example 2.1.2. Let E/F and K/F be field extensions of degrees n and m. Let A = E ⊕ K,
the direct sum as both an F-vector space and as a ring, so addition and multiplication are
componentwise. Then A is an F-algebra (see below) of dimension n + m; it is commutative,
but not a division algebra.
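The failure of the division property in a direct sum is easy to see concretely. Here is a minimal sketch (notation and code mine, not from the notes), modeling E ⊕ K with E = K = Q for simplicity: the identity is (1, 1), and (1, 0), (0, 1) are nonzero zero divisors, so no inverse for either can exist.

```python
# Model the direct sum E ⊕ K with componentwise operations, taking
# E = K = Q (via Fraction) purely for illustration.
from fractions import Fraction

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    # componentwise multiplication: (e1, k1)(e2, k2) = (e1*e2, k1*k2)
    return (u[0] * v[0], u[1] * v[1])

one = (Fraction(1), Fraction(1))   # multiplicative identity (1, 1)
u = (Fraction(1), Fraction(0))     # nonzero, but a zero divisor
v = (Fraction(0), Fraction(1))

assert mul(u, one) == u
assert mul(u, v) == (0, 0)         # u * v = 0 with u, v both nonzero
```

Since u·v = 0 with u, v ≠ 0, the algebra has zero divisors and hence cannot be a division algebra.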
The above example is a special case of the direct sum for algebras, which we introduce
below.
Exercise 2.1.2. Prove that Hamilton's quaternions H, as given in the introduction (i.e.,
H = R[i, j, k]/⟨i² = j² = k² = ijk = −1⟩), form a noncommutative division algebra over
R.
Indeed, the examples of field extensions and matrix algebras are primary motivations for
the definition of algebras—the notion of an algebra is a structure that encompasses both of
these objects. More generally, one has algebras of functions, such as polynomial algebras,
or C*-algebras in analysis, but these are not finite-dimensional and will not enter into our
study. However, certain infinite-dimensional operator algebras called Hecke algebras play
an important role in modular forms, and we will encounter them later.
Exercise 2.1.3. Prove that Mn (F ) is not a division algebra for any n > 1.
Exercise 2.1.5. Prove that Z(Mn (F )) = F . (Hint: check what it means to commute
with the matrix Eij with a 1 in the ij-position and 0’s elsewhere.)
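The hint in Exercise 2.1.5 can be probed numerically before proving it. The following small numpy check (an illustration, not a proof; the code and names are mine) verifies that scalar matrices commute with every matrix unit E_ij, while a nonscalar diagonal matrix already fails to.

```python
# Numerical illustration of the hint for computing Z(M_n(F)):
# scalar matrices commute with all E_ij; a nonscalar one does not.
import numpy as np

n = 3

def E(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

scalar = 2.0 * np.eye(n)
assert all(np.allclose(E(i, j) @ scalar, scalar @ E(i, j))
           for i in range(n) for j in range(n))

nonscalar = np.diag([1.0, 2.0, 3.0])  # distinct diagonal entries
assert any(not np.allclose(E(i, j) @ nonscalar, nonscalar @ E(i, j))
           for i in range(n) for j in range(n))
```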
Exercise 2.1.6. Consider the subalgebra A of M_2(R) consisting of all elements of the form
\begin{pmatrix} a & b \\ -b & a \end{pmatrix},
where a, b ∈ R. Show A is isomorphic to C, as an R-algebra.
Of course, isomorphic as algebras means what you think it means. Formally, we say
a homomorphism of F-algebras φ : A → B is an F-linear map which is also a ring
homomorphism. (Recall, since we are working in the category of unital rings, this means
we need φ(1_A) = 1_B.) Further, it is an isomorphism if it is bijective, i.e., if it is both a
ring isomorphism and a vector space isomorphism.
Exercise 2.1.7. Which of the following are algebras over F ? For those that are algebras,
determine their dimension and center. For those that are not, state at least one property
that fails. Below, assume n > 1, and that the ring operations are usual matrix addition
and multiplication.
(i) The set of matrices of trace 0 in Mn (F );
(ii) The set of matrices of determinant 1 in Mn (F );
(iii) The set of diagonal matrices in Mn (F );
(iv) The set of diagonal matrices in Mn (F ) whose lower right coordinate is 0;
(v) The set of diagonal matrices in Mn (F ) whose lower right coordinate is 1;
(vi) The set of upper triangular matrices in Mn (F ).
Recall from algebraic number theory that one can represent a degree n field extension
K/F in the space of n × n matrices over F by choosing an F-basis and letting K act on
itself by left multiplication.
We can do the same for general algebras. Namely, fix a basis e_1, ..., e_n of A (as an
F-vector space). An element α ∈ A defines a linear operator L_α : A → A via left multiplication
x ↦ αx. Thus we can explicitly realize A as an algebra of n × n matrices over F with
respect to our chosen basis. To write down this matrix representation of A, it suffices to
write down matrices for e_1, ..., e_n and use linearity. This implies the following:
Proposition 2.1.1. An n-dimensional F -algebra A can be realized (or represented) as a
subalgebra of Mn (F ), i.e., there is an injective F -algebra homomorphism from A into Mn (F ).
This says that algebras are relatively nice, well-behaved structures, and we can’t get
anything too weird.
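To make the left-regular representation concrete, here is a short numpy sketch (code and names mine) computing the matrices L_α for the basis elements of H and checking that the construction really is an algebra map: the matrix relations mirror the quaternion relations.

```python
# Left-regular representation of H in M_4(R): columns of L(alpha) are the
# coordinates of alpha * e for e running over the basis {1, i, j, k}.
import numpy as np

def qmul(p, q):
    # Hamilton's product on coordinate 4-tuples (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

basis = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]  # 1, i, j, k

def L(alpha):
    # rows of the list are alpha * e_r; transpose so columns give coordinates
    return np.array([qmul(alpha, e) for e in basis], dtype=float).T

Li, Lj, Lk = L(basis[1]), L(basis[2]), L(basis[3])
I4 = np.eye(4)
assert np.allclose(Li @ Li, -I4)       # i^2 = -1 survives: L_{i}^2 = -I
assert np.allclose(Li @ Lj, Lk)        # ij = k gives L_i L_j = L_k
assert np.allclose(Li @ Lj @ Lk, -I4)  # ijk = -1
```

Since L_{αβ} = L_α L_β and L is injective (L_α(1) = α), this exhibits an embedding H ↪ M_4(R), as Proposition 2.1.1 promises.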
Exercise 2.1.8. Let F = R and take A = H. With respect to the usual basis {1, i, j, k},
write down the matrices for i, j, k acting by left multiplication. Using this, define an
explicit embedding of H into M4 (R).
Exercise 2.1.9. Let F be a field and A = M_2(F). With respect to the basis
e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, e_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, e_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
determine the matrices L_{e_i} for i = 1, 2, 3, 4. Determine the image of A in M_4(F) under the associated embedding.
Here is a better way to represent H in terms of matrices. Consider the (R-)linear map
from H to M_2(C) given by
1 ↦ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, i ↦ \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, j ↦ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, k ↦ \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}. (2.1.1)
In any case, these matrix representations (or realizations) of algebras allow us to associate
a useful set of invariants to objects in algebras. Let me briefly explain how we can do this
with our non-optimal matrix embeddings—then we will come back and improve on this in
Section 2.4.
For α ∈ A, define the non-reduced characteristic polynomial (resp. non-reduced
minimal polynomial¹) of α to be the characteristic polynomial (resp. minimal polynomial)
of L_α. The non-reduced minimal polynomial divides the non-reduced characteristic

¹Really one can just define the minimal polynomial in the usual way—it's the minimal degree monic
polynomial over F which annihilates α. I'm just doing this for symmetry's sake.
polynomial by the Cayley–Hamilton theorem, which states this for matrices. Similarly, we
define the non-reduced norm (resp. non-reduced trace) of α to be the determinant
(resp. trace) of L_α. We denote the non-reduced norm of α by N^{nr}_{A/F}(α) and the non-reduced
trace of α by tr^{nr}_{A/F}(α). From linear algebra, none of these invariants depend upon the choice
of the basis e_1, ..., e_n, which is why we call them invariants. (Put another way, they are
invariant under conjugation: e.g., N^{nr}_{A/F}(α) = N^{nr}_{A/F}(βαβ^{-1}) for any invertible β ∈ A.)
Then the following elementary properties follow from the corresponding properties of
determinant and trace:
Lemma 2.1.2. The non-reduced norm map N^{nr}_{A/F} : A → F is multiplicative and the
non-reduced trace map tr^{nr}_{A/F} : A → F is additive:
N^{nr}_{A/F}(αβ) = N^{nr}_{A/F}(α) N^{nr}_{A/F}(β) = N^{nr}_{A/F}(βα),
tr^{nr}_{A/F}(α + β) = tr^{nr}_{A/F}(α) + tr^{nr}_{A/F}(β),
for all α, β ∈ A.
We will prefer to work with reduced norm and trace maps, which in the case of H are
just given by the determinant and trace maps applied to the image of the embedding of
H into M_2(C) explained above. For H, the reduced norm will be the quaternary quadratic
form N(x + yi + zj + wk) = x^2 + y^2 + z^2 + w^2 described in the introduction.
Reduced norm and trace maps will be introduced in Section 2.4, which will generalize
the embedding of H into M2 (C) to more general algebras.
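The embedding (2.1.1) and the claim that the determinant computes the quaternion norm can both be checked numerically. Here is a sketch in numpy (the variable names are mine): we verify Hamilton's relations in M_2(C) and that det of the image is x² + y² + z² + w².

```python
# The M_2(C) images of 1, i, j, k from the embedding above, and a check
# that det recovers the quaternion norm form.
import numpy as np

I2 = np.eye(2, dtype=complex)
i2 = np.array([[1j, 0], [0, -1j]])              # image of i
j2 = np.array([[0, 1], [-1, 0]], dtype=complex)  # image of j
k2 = np.array([[0, 1j], [1j, 0]])                # image of k

# Hamilton's relations hold in M_2(C):
assert np.allclose(i2 @ i2, -I2) and np.allclose(j2 @ j2, -I2)
assert np.allclose(k2 @ k2, -I2) and np.allclose(i2 @ j2 @ k2, -I2)

# det of the image of x + yi + zj + wk is x^2 + y^2 + z^2 + w^2:
x, y, z, w = 1.0, 2.0, -3.0, 0.5
q = x * I2 + y * i2 + z * j2 + w * k2
assert np.isclose(np.linalg.det(q).real, x**2 + y**2 + z**2 + w**2)
assert np.isclose(np.linalg.det(q).imag, 0.0)
```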
Here the direct sum of algebras, a priori just an F-module (vector space), is made into
a ring with componentwise multiplication. Similarly, the tensor products, a priori just
modules, can be made into rings—e.g., for A ⊗ B, we define (a ⊗ b)(c ⊗ d) = ac ⊗ bd and
extend this multiplication to all of A ⊗ B linearly. The dimension statements fall out of the
dimension statements for direct sums and tensor products of vector spaces, and the only
thing to check is that these definitions of multiplication are valid and compatible with the
vector space structure as required in the definition of an F-algebra. This is easy and I leave
it to you.
Since we can embed A and B into matrix algebras, we can try to understand what direct
sums and tensor products do at the level of matrices. Say A ⊂ M_n(F) and B ⊂ M_m(F) are
subalgebras. Then it is easy to see that
A ⊕ B ≅ \left\{ \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} : a ∈ A, b ∈ B \right\} ⊂ M_{n+m}(F).
The tensor product can also be understood in terms of matrices. Say a = (a_{ij}) ∈ M_n(F)
and b = (b_{ij}) ∈ M_m(F). Then the Kronecker product of a and b is the block matrix
a ⊠ b = \begin{pmatrix} a_{11}b & a_{12}b & \cdots & a_{1n}b \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}b & a_{n2}b & \cdots & a_{nn}b \end{pmatrix} ∈ M_{nm}(F).
(One defines the Kronecker product for non-square matrices similarly.) Usually, the Kronecker
product is denoted with ⊗ instead of ⊠, because it is the matrix realization of the
tensor product. This is the content of the next exercise, and after doing this exercise, you
can use ⊗ to denote Kronecker products.
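The key compatibility, namely that the Kronecker product turns the tensor-product multiplication rule (a ⊗ b)(c ⊗ d) = ac ⊗ bd into honest matrix multiplication, is easy to check numerically. A quick numpy sketch (illustration only):

```python
# The Kronecker product realizes tensor-product multiplication:
# kron(a, b) @ kron(c, d) equals kron(a @ c, b @ d).
import numpy as np

rng = np.random.default_rng(0)
a, c = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
b, d = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

assert np.allclose(np.kron(a, b) @ np.kron(c, d), np.kron(a @ c, b @ d))
# dimensions multiply: M_2 "tensor" M_3 lands in M_6
assert np.kron(a, b).shape == (6, 6)
```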
Exercise 2.1.14. Show the map a ⊠ b ↦ a ⊗ b gives an algebra isomorphism M_{mn}(F) ≅
M_n(F) ⊗ M_m(F).
In particular, tensoring two matrix algebras doesn’t give us a new kind of algebra.
Similarly, tensoring extension fields doesn’t get us much new either.
Exercise 2.1.15. Let F/Q and K/Q be two quadratic extensions in C. Show F ⊗_Q K is
isomorphic to the compositum FK if F ≠ K, and F ⊗_Q K ≅ F ⊕ F if F = K.
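The F = K case of this exercise can be made quite concrete. Below is a hand-rolled sketch (the model and names are mine) of Q(i) ⊗_Q Q(i) as the 4-dimensional Q-algebra Q[u, v]/(u² + 1, v² + 1), with u = i ⊗ 1 and v = 1 ⊗ i: the elements (1 ∓ uv)/2 are orthogonal idempotents summing to 1, which is exactly how the algebra splits as a direct sum of two copies of Q(i).

```python
# Q(i) ⊗_Q Q(i) modeled as Q[u, v]/(u^2 + 1, v^2 + 1), with basis 1, u, v, uv.
from fractions import Fraction as Fr

def mul(x, y):
    # u and v commute, u^2 = v^2 = -1; coefficients in the basis 1, u, v, uv
    p1, q1, r1, s1 = x
    p2, q2, r2, s2 = y
    return (p1*p2 - q1*q2 - r1*r2 + s1*s2,
            p1*q2 + q1*p2 - r1*s2 - s1*r2,
            p1*r2 + r1*p2 - q1*s2 - s1*q2,
            p1*s2 + s1*p2 + q1*r2 + r1*q2)

half = Fr(1, 2)
e_plus  = (half, 0, 0, -half)   # (1 - uv)/2
e_minus = (half, 0, 0,  half)   # (1 + uv)/2

# Orthogonal idempotents summing to 1 cut the algebra into two fields:
assert mul(e_plus, e_plus) == e_plus
assert mul(e_minus, e_minus) == e_minus
assert mul(e_plus, e_minus) == (0, 0, 0, 0)
```

The presence of nontrivial idempotents shows F ⊗_Q F is not a field, and the two pieces e_±·(F ⊗_Q F) each turn out to be isomorphic to F, matching F ⊗_Q F ≅ F ⊕ F.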
However, the extension of scalars will be very important for us. Just like with number
fields F, we will pass to local completions F_v to apply local methods. For an algebra A,
we will want to consider the "local completions" A_v = A ⊗ F_v. This is important both for
classifying algebras over number fields as well as understanding the arithmetic of algebras.
Tensor products of two non-commutative algebras will also be useful to consider as a
tool for studying the general structure of algebras. In particular, the following will be useful.
Exercise 2.1.16. Let A and B be F-algebras. Show Z(A ⊗_F B) ≅ Z(A) ⊗_F Z(B).
(Recall a nilpotent element α of a ring is one such that α^n = 0 for some n ∈ N.)
Proof. First we note A must be commutative. Let {1, α} be a basis for A over F. From
expanding terms, one sees that (x + yα)(x′ + y′α) = (x′ + y′α)(x + yα) for any x, y, x′, y′ ∈ F.
Hence A is commutative, as claimed. If every nonzero element of A is invertible, then A/F
must be a quadratic field extension.
Suppose some x + yα is nonzero but not invertible. This implies y ≠ 0, so by a change
of basis if necessary, we can assume α itself is not invertible. From Proposition 2.1.1, we
know that we can realize A as a subalgebra of M_2(F). Write
α = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
with respect to some basis {e_1, e_2}. In fact, since the associated linear transformation L_α
has a nontrivial kernel, we can choose e_2 such that αe_2 = 0. This means b = d = 0 and
αe_1 = ae_1 + ce_2. If a = 0, then α is nilpotent.
So assume a ≠ 0. Then we can replace e_1 with e′_1 = e_1 + (c/a)e_2. Then αe′_1 = αe_1 = ae′_1,
so with respect to the basis {e′_1, e_2}, we see
α = \begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}.
algebras. Of course, over a typical field F one gets infinitely many 3-dimensional (unital)
algebras just via cubic field extensions, or quadratic field extensions direct summed with F.
We also remark that for a finite field F, one can show the number of n-dimensional
F-algebras, up to isomorphism, is at most (#F)^{n^3} (see [Pie82, Prop 1.5]).
Example 2.1.4. Any division algebra A (and thus any field) is simple (as a ring or
algebra).
To see this, suppose I is a nonzero proper two-sided ideal of A. Let α ∈ I and β ∈ A \ I
be nonzero elements. Then γ = βα^{-1} ∈ A by the division property and γα = β ∈ I by the
definition of an ideal, a contradiction. Note this argument also applies to left ideals, and
with a minor modification to right ideals, so any division algebra has no nonzero proper
one-sided ideals either.
²This is different from the notion of being simple as a module (say over itself), as we will explain in the
next section.
Example 2.1.5. Our earlier example E ⊕ K (Example 2.1.2) of the sum of two field
extensions of F is semisimple, as E and K are simple by the previous example. Note that
E ⊕ K is not itself simple—e.g., E ⊕ 0 and 0 ⊕ K are nontrivial two-sided ideals.
More generally, the direct sum of two nonzero algebras cannot be simple.
For many purposes, including ours, it suffices to consider semisimple algebras. Of course,
understanding semisimple algebras just boils down to understanding simple algebras, and
that is what we will focus on. However, semisimple algebras do play a role in the study of
simple algebras—e.g., if A is a simple F-algebra, one basic question is: does the semisimple
algebra F^n = ⊕_{i=1}^n F embed in A?
Here is one thing we can say about homomorphisms and simplicity.
Proposition 2.1.6. Suppose φ : A → B is an algebra homomorphism and A is simple.
Then φ is injective. In particular, dim_F A ≤ dim_F B.
Proof. Consider ker φ = {a ∈ A : φ(a) = 0}. This is a 2-sided ideal in A, and therefore must
be either {0} or A. However, the kernel can't be A because φ(1_A) = 1_B ≠ 0. The dimension
statement follows because φ(A) is then a subalgebra of B of dimension dim_F A.
right A-module. Of course, if A is commutative, then we can think of left modules as right
modules and vice versa just by writing the action of A on the other side of M . Note if A is
a field, then an A-module is just an A-vector space.
If a general statement is true about left modules, then the analogue is also true for right
modules, just writing things in the opposite order. One can formally show this by working
with the opposite algebra A^{opp} where multiplication is defined in reverse order, i.e., A^{opp}
is the same as A as a vector space, but the multiplication a · b is replaced by b ∗ a := a · b. One
easily checks this is also an algebra. Then left modules for A correspond to right modules
for Aopp . Note A and Aopp are not isomorphic in general. One obvious case where they are
isomorphic (in fact the same) is when A is commutative. Here is another case.
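One case where A ≅ A^{opp} that is worth keeping in mind is A = M_n(F): the transpose map reverses products, so it gives an isomorphism M_n(F) ≅ M_n(F)^{opp}. A one-line numpy check of the underlying identity (illustration only, names mine):

```python
# Transpose reverses matrix products: (ba)^T = a^T b^T, which is why
# X -> X^T gives an isomorphism M_n(F) ≅ M_n(F)^opp.
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# the opp-product of a and b is b @ a; transpose carries it to a.T @ b.T
assert np.allclose((b @ a).T, a.T @ b.T)
```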
We will work more generally with matrix algebras over division rings. In case you
haven't seen them, matrix algebras over division rings are defined just like matrix algebras
over fields. As a vector space, M_n(D) is just D^{n×n}. Matrix multiplication is defined as
usual, e.g., for n = 2,
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{pmatrix},
but now one needs to be careful about the order of terms in products like ae. In general,
scalar multiplication by D is not commutative, and there are some difficulties when trying
to define things like characteristic polynomials or determinants over D. However since we
can realize D as a subalgebra of some Mm (F ), we can identify Mn (D) with a subalgebra of
Mmn (F ) to work with more familiar matrices over commutative fields.
The next exercise generalizes Exercises 2.1.5 and 2.2.1 to matrix algebras over division
rings.
Exercise 2.2.2. For a division algebra D, show Z(M_n(D)) ≅ Z(D) and M_n(D)^{opp} ≅
M_n(D^{opp}).
In light of the meta-equivalence of left and right modules, we will by default assume
module without qualification means left module, but bear in mind analogous statements
apply to right modules after switching A with Aopp .
Example 2.2.1. A itself is a left and right A-module. More generally, the left and right
submodules of A are precisely the left and right ideals of A, so A-modules can be
viewed as a generalization of ideals.
Thus the meta-equivalence of left and right modules applies to ideals also.
Exercise 2.2.3. If A is semisimple, show that any A-module M is a semisimple module, i.e.,
a direct sum of simple modules. (Hint: prove it for free A-modules and take quotients; cf.
[Pie82, Sec 3.1] or [Rei03, Thm 7.1].)
Lemma 2.2.1. As left A-modules, we have End_A(A) ≅ A. However, as F-algebras we have
End_A(A) ≅ A^{opp}.
Recall that an A-module is simple if it has no nonzero proper submodules. Our goal will be
to study the structure of A ≅ End_A(A) (isomorphic as A-modules, but not as algebras in
general) by decomposing it into simple A-modules.
Exercise 2.2.4. Let A = M_2(F) and let
I = \left\{ \begin{pmatrix} * & 0 \\ * & 0 \end{pmatrix} \right\}
be the subset of A consisting of matrices whose second column is zero. Check that I
is a simple left A-module. In particular, A is not simple as an A-module. However, we
know M_2(F) is simple as an algebra from Exercise 2.1.17.
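The simplicity claim in this exercise can be probed numerically before proving it. Identifying the first-column ideal I with column vectors in F², simplicity amounts to the fact that any nonzero v ∈ F² generates all of F² under left multiplication by M_2(F). A small numpy check (an illustration for one v, not a proof; code mine):

```python
# Any nonzero v in F^2 generates all of F^2 as an M_2(F)-module:
# the multiples E_ij @ v already span a 2-dimensional space.
import numpy as np

def E(i, j, n=2):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

v = np.array([3.0, -1.0])  # an arbitrary nonzero element of F^2
multiples = [E(i, j) @ v for i in range(2) for j in range(2)]
assert np.linalg.matrix_rank(np.array(multiples)) == 2
```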
On the other hand, we remark that A being semisimple as an A-module is the same as A
being semisimple as an algebra (cf. Exercise 2.2.3).
It is easy to understand what the simple modules look like in terms of A. We will not
actually use the next result, but I will just mention it for your edification.
Lemma 2.2.2. Any simple left (resp. right) A-module M is isomorphic to A/m (resp. m\A)
for a maximal left (resp. right) ideal m of A.
Proof. Say M is a simple left A-module. Fix a nonzero m ∈ M. Then the map φ(a) = am
is a left A-module homomorphism from A (as a left A-module) to M. Since 1 ∈ A, the
image of φ contains m and thus is a nonzero left A-module. By simplicity, this means φ is
surjective. Note that ker φ is a left ideal, and φ defines an isomorphism of A/ker φ with M.
(The quotient A/ker φ, as an abelian group, is a left A-module and thus defines a quotient
module.) Since A/ker φ has no nontrivial left submodules, ker φ must be maximal.
We are interested in the (say left) simple A-modules which are submodules of A, i.e.,
the left ideals of A which are simple as left A-modules. This means they do not properly
contain any nonzero left ideals, i.e., they are minimal left ideals.
Here is a very simple (honestly, no pun intended) but very useful result.
Lemma 2.2.3 (Schur). Let M and N be (both left or right) A-modules and φ : M → N be
a nonzero homomorphism. If M is simple then φ is injective, and if N is simple then φ is
surjective.
Compare this with Proposition 2.1.6 and Exercise 2.1.19 about homomorphisms of simple
algebras, i.e., simple rings.
The above two lemmas are also true in the more general setting for modules over a ring
R.
Corollary 2.2.4. Let M be a (left or right) simple A-module. Then EndA (M ) is a division
algebra.
Exercise 2.2.6. Prove Schur’s lemma and deduce the above corollary.
³In algebra, there are a lot of similar notions, and one needs to keep track of their sometimes subtle
differences with careful bookkeeping so as to minimize the number of false theorems one proves. Exercise:
Find all the false theorems in these notes.
Exercise 2.2.7. Let M be a left or right A-module and N = ⊕_{i=1}^n M. Show that
End_A(N) ≅ M_n(End_A(M)^{opp}) if M is a left A-module, and End_A(N) ≅ M_n(End_A(M))
if M is a right A-module. In particular, if M is simple, conclude that End_A(N) ≅ M_n(D)
for some division algebra D.
Exercise 2.2.8. Let A be a semisimple F -algebra. Show that the minimal left ideals of
A are the same as the simple left A-modules which are submodules of A.
Lemma 2.2.5. Let A be a simple F-algebra, and I, J be minimal left ideals of A. Then
I ≅ J as A-modules and J ≅ Iα for some α ∈ A.
Now we can prove the main result of this section, which is a partial classification theorem
for simple algebras—namely, classification modulo the classification of division algebras
over F.
Theorem 2.2.6 (Wedderburn). Let A be a simple F-algebra. Then A ≅ M_n(D) where D
is a division algebra over F. Furthermore, this n and D are uniquely determined (D up to
isomorphism) by A.
Proof. Since A is semisimple as a left A-module (Exercise 2.2.3), it decomposes as a direct
sum A ≅ ⊕_{i=1}^n M_i of simple left A-modules. Identifying A with the direct sum ⊕ M_i, we
can view each M_i as a minimal left ideal I_i of A. (In the case where A = M_n(F), we can
take the minimal left ideals to be the subspaces I_i consisting of matrices which are zero off the
i-th column.) By Lemma 2.2.5, all I_i are isomorphic (as A-modules) to a single minimal
left ideal I, i.e., A ≅ ⊕_{i=1}^n I.
Then End_A(A) = End_A(⊕ I) = M_n(End_A(I)^{opp}) by Exercise 2.2.7. However End_A(I)
is a division algebra D by Corollary 2.2.4, thus
A ≅ End_A(A)^{opp} ≅ M_n(End_A(I)^{opp})^{opp} ≅ M_n(D^{opp})^{opp} ≅ M_n(D),
as F-algebras, using Lemma 2.2.1 for the first isomorphism and Exercise 2.2.2 for the last.
(We could have avoided the business with opposites if we had started with I being a minimal
right ideal and D = End_A(I) and worked with right A-modules; cf. Exercise 2.2.7.)
Now we want to prove uniqueness. Recall by Lemma 2.2.5, all minimal left ideals of A
are isomorphic. Since the isomorphism class of I determines n and the isomorphism class of D
in the above procedure to write A ≅ M_n(D), we see this procedure results in a unique n
and D (up to isomorphism) given A. However, to show that M_{n_1}(D_1) ≅ M_{n_2}(D_2) implies
n_1 = n_2 and D_1 ≅ D_2 for division algebras D_1, D_2 requires a little more. We need to know
the above procedure for M_n(D) gives back n and D, rather than some n′ and D′.
So fix a division algebra D and n ∈ N, and set A = M_n(D). Let e ∈ A be the matrix
which is 1 in the upper left entry and 0 elsewhere. It is easy to check that I = Ae must be
a minimal left ideal of A. Note I is just the set of matrices in A which are 0 off the first
column. Then A = Ie_1 ⊕ ··· ⊕ Ie_n, where e_i is the matrix with a 1 in the first entry of the
i-th column and 0 elsewhere (so e_1 = e and Ie_i is the set of matrices which are 0 off the
i-th column). Thus we recover the n we started with in the above procedure.
Now it suffices to show that the above procedure recovers D. One can show that, as
F-algebras, right multiplication defines an isomorphism of End_A(I) with D^{opp}. This is similar
in spirit to Lemma 2.2.1, and the details are Exercise 2.2.9. Thus A ≅ M_n((D^{opp})^{opp}) ≅
M_n(D) as above.
Outside of these notes, the term “Wedderburn’s theorem” sometimes refers to another
theorem, like a generalization of the above result known as the Artin–Wedderburn theorem,
or Wedderburn’s theorem on finite division rings (that they are all fields). To specify, the
above theorem (or a generalization) is sometimes called Wedderburn’s structure theorem.
The endomorphism algebra EndA (I) that came up in the proof of Wedderburn’s theorem
is easy to understand in terms of matrices. The following exercise completes the proof of
the uniqueness part of Wedderburn’s theorem, and is a generalization of Exercise 2.2.5.
Exercise 2.2.9. Let D be a division algebra over F, A = M_n(D), and I be the minimal
left ideal consisting of matrices which are zero in every entry off the first column. For
δ ∈ D, show that right multiplication by diag(δ, ..., δ) defines an endomorphism R_δ : I →
I. Show moreover that δ ↦ R_δ defines an isomorphism (as F-algebras) of D^{opp} with
End_A(I).
The following exercise gives a kind of duality of endomorphism algebras, and implicitly
arose in the proof of Wedderburn’s theorem.
Exercise 2.2.10. Let A be a simple F-algebra and I a minimal left ideal. Put D′ =
End_A(I). Viewing I as a left D′-module, show End_{D′}(I) ≅ A as F-algebras. That is, we
have End_{End_A(I)}(I) ≅ A.
The above exercises will be also important in the proof of the Skolem–Noether theorem
in the next section.
Of course, we haven’t really completed a classification of simple F -algebras (even modulo
the classification of division algebras), because we have not actually shown that Mn (D) is a
simple F -algebra! We have only done this for n = 1 from Example 2.1.4 and when D = F
in Exercise 2.1.17. But it is true.
Theorem 2.2.7. Let D be a division algebra over F. Then M_n(D) is a simple F-algebra
for any n ∈ N.
This is basically an exercise in linear algebra over division rings, generalizing Exer-
cise 2.1.17, so I will leave it to you:
Corollary 2.3.1. Let A be a simple F -algebra. Then there exists a field extension K/F of
finite degree such that A is a central simple K-algebra.
Proof. By Wedderburn's theorem, A ≅ M_n(D) for some D. Recall Z(M_n(D)) = Z(D) by
Exercise 2.2.2. Now Z(D) must be commutative, and therefore it is a field K which must
contain F (and is a finite extension by finite-dimensionality). Note the action of scalar
multiplication by K makes A into a K-linear space, and thus a K-algebra. Since A is
simple as an F-algebra, it is simple as a K-algebra (simplicity as a ring is independent of
the underlying field of the algebra), and it is central over K.
This corollary says it suffices to understand central simple algebras, and thus by Wed-
derburn’s theorem, to understand central division algebras.
The corollary says that we can extend the algebra structure of A to an algebra over a
field contained in the center. The following exercise shows we can extend the vector space
structure to fields not contained in the center.
Exercise 2.3.1. Let A be a CSA over F and K a subalgebra which is a field. Show that
A is a K-vector space but not a K-algebra.
A central simple algebra is in some sense an even more basic object than a simple
algebra, and is abbreviated CSA. Many elegant results (e.g., extension of scalars and the
Skolem–Noether theorem) are true for CSAs, but not true for arbitrary simple algebras.
Here K/F need not be of finite degree. On the other hand, even if K/F is of finite degree and
A is a simple algebra over F, A ⊗_F K need not be a simple algebra over K. For instance,
if K/F is quadratic and A ≅ K, then A ⊗_F K ≅ K ⊗_F K ≅ K ⊕ K (cf. Exercise 2.1.15),
which is semisimple but not simple (as an F-algebra or as a K-algebra).
Proof. Note that it essentially suffices to show the first part of (i), by Exercise 2.1.16. (Technically,
to deal with infinite degree extensions K/F in (ii), one should show (i) and Exercise
2.1.16 without our default assumption that B is finite dimensional. This is still
possible if B is artinian, in particular if B = K is a field—see [Rei03].) For this, roughly,
one can take a nonzero 2-sided ideal I in A ⊗ B and show it must contain an element of
the form 1 ⊗ b, and in fact 1 ⊗ 1. However, I'm not convinced the proof is particularly
enlightening, and will not give it. See, e.g., [Rei03, Sec 7b], [BO13, Sec III.1] or [MR03,
Prop 2.8.4] for details.
The main result we want to prove in this section is about embedding simple algebras
into central simple algebras. For instance, if A = M_n(D) is central over F and K/F is a
field extension, we might want to know in what ways K embeds in A. For instance,
what are the embeddings of C into Hamilton's quaternions H? The following theorem tells
us that either K does not embed in A or it embeds in an essentially unique way (unique up
to conjugation).
Theorem 2.3.3 (Skolem–Noether). Let A and B be simple F-algebras, and assume that A
is central. If φ, ψ : B → A are algebra homomorphisms, then there exists α ∈ A^× such that
ψ(β) = αφ(β)α^{-1} for all β ∈ B.
As another consequence (taking B = A), this says that any algebra automorphism of a
CSA A is inner, i.e., given by conjugation by an element of A^×.
Proof. Recall from Proposition 2.1.6 that φ, ψ must be injective, hence φ(B) and ψ(B) are
subalgebras of A isomorphic to B.
Let M be a simple A-module and D = End_A(M), which is a division algebra from
Corollary 2.2.4. For instance, we can take M = I, where I is a minimal left ideal. Explicitly,
if A = M_n(D′) then we can take I to be the ideal of matrices in A which are zero in every
entry not in the first column. Then D′ ≅ D^{opp} as F-algebras by Exercise 2.2.9.
Then we can make M a D ⊗ B = D ⊗_F B-module in two ways:
(δ ⊗ β)m = δ(φ(β)m)
and
(δ ⊗ β)m = δ(ψ(β)m),
where we extend this action to D ⊗ B linearly. Explicitly, if M = I and we identify D = D′
as sets, then the first action, say, is just
(δ ⊗ β)x = φ(β) · x · \begin{pmatrix} δ & & \\ & \ddots & \\ & & δ \end{pmatrix},
These two D ⊗ B-module structures on M are isomorphic (D ⊗ B is simple, and both
module structures have the same F-dimension), say via an F-linear bijection σ : M → M,
so that
δ(ψ(β) · σ(m)) = σ(δ(φ(β) · m)),   δ ∈ D, β ∈ B, m ∈ M.
Taking β = 1 shows σ ∈ End_D(M). But from Exercise 2.2.10, we know End_D(M) ≅ A—
explicitly, with M = I, σ is just left matrix multiplication by some α ∈ A. Thus
δ(ψ(β) · α · m) = α · δ(φ(β) · m),   δ ∈ D, β ∈ B, m ∈ M.
Taking δ = 1 gives
ψ(β) · α · m = α · φ(β) · m,   β ∈ B, m ∈ M.
Since A acts faithfully on M (recall A ≅ End_D(M), or just think in terms of matrices), this
means ψ(β) · α = α · φ(β) for all β ∈ B, as desired. Note also α is invertible because it
represents an isomorphism σ : M → M.
where a, b ∈ C, |a|² + |b|² = 1.
Here we just conjugated i by norm 1 (i.e., determinant 1) elements of H because any element
of H^× can be made norm 1 by multiplying by an element of the center R^× (and the center
does nothing for conjugation).
Mild warning: for a division algebra D over a field F we do not in general have D^× =
F^× D^1, where D^1 denotes the norm 1 elements. It happens to be true for H/R because the
image N_{H/R}(H^×) = R_{>0} = N_{H/R}(R^×), i.e., in effect because every positive real has a real
square root.
Answers to these questions will provide important insight into the structure of A. The
first question is analogous to understanding what abelian subgroups are contained in a given
group. The second question (somewhat analogous to realizing a finite group inside Sn with
n as small as possible) will be important for many things, such as defining a reduced norm
map in a way that is compatible with the determinant for matrix algebras.
As a first example, if F = R and A = H, then C embeds in H (in many ways, but all
conjugate by Skolem–Noether). On the other hand H embeds in M2 (C) (Exercise 2.1.10),
and since H is not a field itself, this is clearly the smallest dimensional matrix algebra over
a field into which H embeds.
For the first question, we may as well just ask what are the largest fields which embed
in A. Here, we need to be a little careful about what we mean by embed. Let us say a
subalgebra K ⊂ A is a subfield of A if it is also a field. Let us say a subset K ⊂ A is a
quasi-subfield⁴ if it is a field under the ring operations of A, i.e., if K is a subfield of A
in the category of rngs (not a typo—a rng is a "ring without identity," i.e., a (potentially)
non-unital ring). Note proper subfields of F (regarding F as a field, not an F-algebra) are
examples of quasi-subfields of A which are not "subfields," but we will also see below that
there are quasi-subfields containing F which are not subfields of A (i.e., not subalgebras)
because the identity elements do not coincide.
We say a subfield K of A is a maximal subfield if it is maximal among subfields with
respect to inclusion. We often abuse terminology and say an extension K/F is a subfield
or quasi-subfield or maximal subfield of A if it is isomorphic to one. Note C is the unique-
up-to-isomorphism maximal subfield of H (though as we saw in the last section, it can be
realized in H in infinitely many ways, which are all conjugate). Maximal subfields exist
(though are not unique in general) because K must contain F , and chains of subfields of
A must terminate by finite dimensionality. In fact, since A is simple, Z(A) is a field by
Corollary 2.3.1, so maximal subfields must contain Z(A). Thus, for this question, we can
replace F by Z(A) and assume A is a CSA.
In fact, the crucial case is where A is a central division algebra, because it is easy to say
what fields are contained in matrix algebras.
On the other hand, there is essentially no distinction between quasi-subfields and sub-
fields of division algebras.
Now let’s consider the second question, about finding an optimal matrix representation
of A. Again the crucial case is where A is a (central) division algebra. To see why, recall by
Wedderburn’s theorem we know A ≃ Mm(D) for a unique m and a division algebra D/F
(unique up to isomorphism). So if we have an embedding D ↪ Mn(K), then we get an
embedding of A ↪ Mmn(K). While it’s not obvious at this stage, using tools we develop
one can show that if this was an optimal matrix representation for D, the corresponding
matrix representation for A is also optimal. The way we will obtain matrix representations
is through splitting fields.
Let A be a CSA over F. We say a field K ⊇ F splits A, or is a splitting field for A,
if A ⊗F K ≃ Mn(K), as K-algebras. (We do not require K ⊂ A, or even K/F to be finite
degree.) If F itself is a splitting field for A, i.e., A ≃ Mn(F) for some n, then we simply say
A is split.
Exercise 2.4.3. Consider the Hamilton quaternion algebra H over R, with subfield C.
Show H splits over C, i.e., H ⊗R C ≃ M2(C).
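One can at least verify the surjectivity part of this exercise numerically. Under one standard embedding H ↪ M2(C) (the matrices below are my choice), the images of 1, i, j, k already span M2(C) over C, so the induced C-algebra map H ⊗R C → M2(C) is surjective, hence an isomorphism by counting dimensions:

```python
# Images of 1, i, j, k under a standard embedding H -> M_2(C):
I2 = [[1, 0], [0, 1]]
Ei = [[1j, 0], [0, -1j]]
Ej = [[0, 1], [-1, 0]]
Ek = [[0, 1j], [1j, 0]]

def comb(*pairs):
    """C-linear combination of 2x2 matrices, given as (scalar, matrix) pairs."""
    out = [[0, 0], [0, 0]]
    for s, M in pairs:
        for r in range(2):
            for c in range(2):
                out[r][c] += s * M[r][c]
    return out

# Each matrix unit is a C-combination of the images of 1, i, j, k:
E11 = comb((0.5, I2), (-0.5j, Ei))
E22 = comb((0.5, I2), (0.5j, Ei))
E12 = comb((0.5, Ej), (-0.5j, Ek))
E21 = comb((-0.5, Ej), (-0.5j, Ek))
assert E11 == [[1, 0], [0, 0]]
assert E12 == [[0, 1], [0, 0]]
assert E21 == [[0, 0], [1, 0]]
assert E22 == [[0, 0], [0, 1]]
```

Note the C-coefficients (like −i/2) are genuinely complex, which is why this spanning argument works over C but not over R.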
The connection between the two questions we posed will be that for a central division
algebra D, maximal subfields K are the same as splitting fields. In particular, maximal
subfields of division algebras must all have the same dimension, in contrast to the case of
maximal subfields of matrix algebras. We will also deduce that any division algebra has
square dimension.
Remark 2.4.2 (Terminology and connection with algebraic groups). The term split agrees
with the use of the term for algebraic groups. Namely, we can consider G = A× as an
algebraic group over F. Roughly, an algebraic group is a matrix group defined by polynomials
over F such as the general linear group GLn(F) = Mn(F)× or the special linear group
SLn(F) = {g ∈ Mn(F) : det g = 1}. Inside G = A×, we can look at a maximal (algebraic)
torus T, which is just by definition a maximal abelian algebraic subgroup of G. We say a
torus T is split if T ≃ (F×)m for some m, and G is split if it contains a split maximal torus.
Then A being split (a matrix algebra) is the same as G = A× being split as an algebraic
group. In particular, if A = Mn(F), the diagonal subgroup (F×)n is a maximal torus, which
is split. Other maximal tori in A× = GLn(F) will be the multiplicative groups of direct
sums of field extensions, where the degrees sum to n—e.g., K× is also a maximal torus in
GLn(F) for any degree n field extension K/F by Proposition 2.4.1.
For example, if we take A = H and F = R, so G = H×, then any maximal torus in
G turns out to be isomorphic to C×. Neither G nor A is split over R, and we see a torus
T = C× ≃ S1 × R× is topologically a (bi-)infinite cylinder. However, when we tensor up to
C we see A ⊗R C ≃ M2(C) so GC := (A ⊗R C)× ≃ GL2(C). Thus we can take (C×)2 to be a
maximal torus TC. If we take the real points of TC we just get TC(R) = R× × R×, i.e., we
have “split” the circle S1 and turned it into the straight line minus a point R×. Hence there
is a geometric meaning of the term split.
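To make the non-split torus C× ⊂ GL2(R) concrete, here is a small sanity check (the helper names are mine) that the regular representation φ : C× → GL2(R), a + bi ↦ [[a, −b], [b, a]], is a group homomorphism with commutative image:

```python
# phi realizes C^x as a (non-split) torus inside GL_2(R).
def phi(z):
    """Regular representation of a + bi acting on R^2 = C."""
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

z, w = complex(3, 4), complex(-1, 2)
assert phi(z * w) == mat_mul(phi(z), phi(w))                # multiplicative
assert mat_mul(phi(z), phi(w)) == mat_mul(phi(w), phi(z))   # abelian image
```

The image is {aI + bJ : (a, b) ≠ (0, 0)} with J² = −I; over R this torus is not isomorphic to (R×)², matching the remark above.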
If you haven’t seen this stuff before, you might wonder why an algebraic torus is called
a torus. In our example above, the torus wasn’t topologically a torus but an open cylinder.
However, you do get topological tori for some groups. If we instead work with
To obtain the connection between maximal subfields and splitting fields, it will be useful
to look at centralizers. Suppose A, B are F-algebras and B is a subalgebra of A. The
centralizer of B in A is
CA(B) = {a ∈ A : ab = ba for all b ∈ B}.
It is easy to check that CA(B) is also a subalgebra of A which contains Z(A). Further
B ⊂ CA(B) if and only if B is commutative. Note CA(Z(A)) = A and CA(A) = Z(A).
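Centralizers are cut out by linear conditions, so they are easy to compute in examples. Here is a numerical sketch (helper names mine): take A = M2(R) and B = span{I, J} an embedded copy of C, where J = [[0, −1], [1, 0]]. A matrix X commutes with B iff it commutes with J, so CA(B) is the null space of X ↦ JX − XJ:

```python
def mat_mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def rank(rows, tol=1e-9):
    """Row rank via Gaussian elimination with partial pivoting."""
    rows, rk = [r[:] for r in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

J = [[0, -1], [1, 0]]
units = [[[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]]
# Matrix of the linear map X -> JX - XJ on the 4-dimensional space M_2(R):
comm = []
for E in units:
    JE, EJ = mat_mul(J, E), mat_mul(E, J)
    comm.append([JE[r][c] - EJ[r][c] for r in range(2) for c in range(2)])
dim_centralizer = 4 - rank(comm)
assert dim_centralizer == 2          # C_A(B) = span{I, J} = B itself
assert 2 * dim_centralizer == 4      # dim B * dim C_A(B) = dim A (cf. the DCT)
```

So here CA(B) = B, consistent with B being commutative, and the dimensions multiply out to dim A, as the double centralizer theorem below predicts.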
This is essentially [Pie82, Lem 12.7]. When A is central and B = F, this just says
EndA(A) ≃ Aopp, which was Lemma 2.2.1.
Theorem 2.4.3 (Double centralizer theorem (DCT)). Let A be a CSA over F and B be a
simple subalgebra. Then:
(1) Z(CA(B)) = Z(B);
(2) dim A = dim B · dim CA(B);
(3) CA(CA(B)) = B.
Proof. By Proposition 2.3.2, we know C := A ⊗ Bopp is simple. The equality Z(CA(B)) =
Z(B) is straightforward, which proves (1).
Let I be a minimal left ideal of C. Then, as in the proof of the Wedderburn structure
theorem, we know C ≃ EndC(I ⊕ · · · ⊕ I)opp ≃ Mn(Dopp), where D = EndC(I) is a division
algebra. Thus
dim C = dim A · dim B = n² dim D = n dim I
(as F-vector spaces). Now A embeds in C by Exercise 2.4.4 so we can write A ≃ I ⊕ · · · ⊕ I
(m copies of I) and dim A = m dim I. Combining with the equation for dim C, we see
dim B = n/m.
On the other hand, CA(B)opp ≃ EndC(A) ≃ Mm(Dopp), using Exercise 2.4.5 for the first
isomorphism. So dim CA(B) = m² dim D = m² (dim I)/n, which gives (2).
It is straightforward from the definition that B ⊂ CA(CA(B)). Applying (2) to B′ =
CA(B) shows dim CA(CA(B)) = dim B and we get (3).
Now suppose B is a CSA. Since B and CA(B) commute, the map β ⊗ γ ↦ βγ defines
an algebra homomorphism Φ : B ⊗ CA(B) → A: it’s clearly F-linear—it respects ring
multiplication as
Φ(β ⊗ γ)Φ(β′ ⊗ γ′) = βγβ′γ′ = ββ′γγ′ = Φ((β ⊗ γ)(β′ ⊗ γ′)).
Exercise 2.4.6. Let A be a CSA over F and B a subalgebra with center K. Show
CA(K) ≃ B ⊗K CA(B) as K-algebras.
(δ ⊗ β) · (xγ) = δxγβ = δxβγ = ((δ ⊗ β) · x)γ.
That is, δ ⊗ β yields a CD(B)-linear operator on D, and in fact defines an algebra ho-
momorphism Ψ : D ⊗ B → EndCD(B)(Dopp) (viewing Dopp as a left CD(B)-module via left
multiplication). Then Ψ is injective because D ⊗ B is simple. Note CD(B) must also be a divi-
sion algebra, so as left CD(B)-modules, Dopp ≃ CD(B) ⊕ · · · ⊕ CD(B) (r copies), where
r = dim D / dim CD(B) = dim B,
using (2) of the double centralizer theorem for the last equality. Thus EndCD(B)(Dopp) ≃
Mr(CD(B)), and hence has dimension (dim CD(B)) r² = dim D · dim B over F (again by the
DCT). Looking at dimensions shows Ψ is surjective, and we get an isomorphism
Corollary 2.4.6. All maximal subfields of a division algebra D with center F have the same
degree.
Corollary 2.4.7. Let A be a CSA over F. Then dim A = n² for some n ∈ N, and there
exists a degree d field extension K/F for some d|n such that A ⊗F K ≃ Mn(K). We call
n = deg A the degree of A over F.
So the dimension of a CSA is always a square. This is not true for non-central simple
algebras A/F . Instead, we have that dimF A is a square times dimF Z(A).
Proof. By Wedderburn’s theorem, A ≃ Mr(D) for some D. Since dim D = d² for some d by
the theorem, dim A = (rd)². Since D splits over a field K/F of degree d, so does Mr(D),
as Mr(D) ⊗ K ≃ Mr(D ⊗ K) (see exercise below).
Exercise 2.4.7. Let D be a division algebra over F and K a field extension of F. Show
Mr(D ⊗ K) ≃ Mr(D) ⊗ K as K-algebras.
Exercise 2.4.8. Let K/F be a field extension. Show K is (isomorphic to) a subfield of
Mn(F) if and only if [K : F ]|n. (Suggestion: use the DCT.)
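For the “if” direction in a tiny case, the regular representation via a companion matrix does the job. Here is a sketch (names mine): K = Q(√2) embeds in M2(Q) by sending a + b√2 ↦ aI + bC, where C is the companion matrix of x² − 2, and indeed [K : Q] = 2 divides n = 2:

```python
from fractions import Fraction as Fr

def mat_mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

C = [[Fr(0), Fr(2)], [Fr(1), Fr(0)]]              # companion matrix of x^2 - 2
I2 = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
assert mat_mul(C, C) == [[Fr(2), Fr(0)], [Fr(0), Fr(2)]]   # C^2 = 2I

def emb(a, b):
    """Image of a + b*sqrt(2) in M_2(Q)."""
    return [[a * I2[r][c] + b * C[r][c] for c in range(2)] for r in range(2)]

# (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2, matching matrices:
a, b, c, d = Fr(1), Fr(3), Fr(-2), Fr(5)
assert mat_mul(emb(a, b), emb(c, d)) == emb(a * c + 2 * b * d, a * d + b * c)
```

The image span{I, C} is a 2-dimensional commutative subalgebra in which every nonzero element is invertible (the “norm” a² − 2b² never vanishes for rational (a, b) ≠ (0, 0)), so it is a subfield.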
The following exercise essentially says that for CSAs—say A = Mn (D) where D is
division of degree d—the optimal matrix representations of A are into Mnd (K) where [K :
F ] = d (cf. (2.5.1)).
Exercise 2.4.9. Let A = Mn(D) where D is a central division algebra over F of degree
d. Show that any splitting field K of A which is a subfield of A must satisfy [K : F ] ≥ d.
Remark 2.4.8. An important result is not just that splitting fields K/F exist, but that
we can take K to be a separable extension. This requires some additional work to prove.
However, since we are concerned with local and global fields F of characteristic 0, we get
separability automatically in our cases of interest.
If A is a CSA over F with A ≃ Mn(D) for D a division algebra, we call deg D the
(Schur) index of A, denoted by ind A. The above corollary says that any CSA A has a
splitting field K with [K : F ] = ind A, and we can take K to be a subfield of A. This does
not mean any subfield K of A of degree ind A over F splits A. For instance,
if A = M2(D) where D is a degree 2 division algebra, so ind A = 2, there are subfields K of
A of degree 2 over F which are not contained in D, and these need not split A.
One can show that the degree of any splitting field (subfield or not) of A must be a
multiple of the index of A, i.e., we cannot find splitting fields of smaller degree than what
we can get by using subfields. (See, e.g., [GS06, Sec 4.5].)
using Exercise 2.4.4 for the hooked arrow. To be clear, ι denotes an embedding A ↪ Mn(K).
For α ∈ A, define the reduced characteristic polynomial (resp. (reduced) mini-
mal polynomial, resp. (reduced) norm, resp. (reduced) trace) to be the characteristic
polynomial (resp. minimal polynomial, resp. determinant, resp. trace) of ι(α) ∈ Mn(K). Let
us temporarily denote these by p^ι_α, m^ι_α, N^ι(α) and tr^ι(α). A priori, these polynomials are
just polynomials defined over K and depend on ι. In fact they are defined over F and do
not depend on ι.
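For H this is easy to see by hand. Under one standard embedding ι : H ↪ M2(C) (the concrete formulas below are mine), the characteristic polynomial of ι(α) has real coefficients even though the matrix entries are complex:

```python
# For alpha = a + bi + cj + dk, take iota(alpha) = [[a+bi, c+di], [-c+di, a-bi]].
# Its characteristic polynomial is x^2 - (tr)x + (det) with
# tr = 2a and det = a^2 + b^2 + c^2 + d^2, both in R.
def iota(a, b, c, d):
    return [[complex(a, b), complex(c, d)], [complex(-c, d), complex(a, -b)]]

a, b, c, d = 1, 2, 3, 4
M = iota(a, b, c, d)
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert trace == 2 * a                          # reduced trace lies in R
assert det == a * a + b * b + c * c + d * d    # reduced norm lies in R
```

So here p_α(x) = x² − 2ax + (a² + b² + c² + d²), visibly defined over F = R.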
Note that if A = Mn (F ), then K = F so the reduced norm, trace, etc. agree with
the usual notions for matrices (over fields). The reason we make the above definitions for
general CSAs is that there are issues which arise when trying to generalize determinants
and characteristic polynomials to matrix algebras over skewfields. There are theories of
“noncommutative determinants” to address this, but we will not pursue that approach.
Lemma 2.5.1. The polynomials p^ι_α and m^ι_α are polynomials of degree at most nd defined
over F. Furthermore, the quantities p^ι_α, m^ι_α, N^ι and tr^ι do not depend on the choice of ι
or K.
just F. (If K/F is not Galois, we can just replace K by the Galois closure L and extend ι
to a map A ↪ Mn(L) without changing the characteristic polynomial.) Let σ ∈ Gal(K/F).
Set ι^σ = σ ◦ ι. If p^ι_α(x) = Σ ci x^i, then p^{ι^σ}_α(x) = Σ σ(ci) x^i. Thus p^ι_α = p^{ι^σ}_α implies the coefficients
of p^ι_α(x) are Galois invariant, and thus lie in F. The same is true for minimal polynomials.
Finally, suppose we have two embeddings ι : A ↪ Mn(K) and ι′ : A ↪ Mn(K′),
where K and K′ are maximal subfields of D. Let L = KK′ be the compositum, and we
can extend ι, ι′ to isomorphisms ιL, ι′L : A ⊗ L ≃ Mn(L). Then p^ι_α and p^{ι′}_α must agree
with the characteristic polynomials of ιL(α) and ι′L(α), which must be the same by our
previous argument. Hence the reduced characteristic polynomial is also independent of
K. This implies reduced norms and traces do not depend on K. Similarly for minimal
polynomials.
Consequently we will denote the reduced characteristic and minimal polynomials and
reduced norm and trace simply by p_α, m_α, NA/F(α) = N(α) and trA/F(α) = tr(α). I
may also drop parentheses for the reduced norm and traces, and simply call them the
norm and trace as we will work with these rather than the non-reduced ones. Note the
(reduced) minimal polynomial must be the minimum degree monic polynomial over F which
annihilates α, and therefore agrees with the “non-reduced” minimal polynomial.
Proof. We already showed the assertion about p_α, which implies that the reduced norm and
trace are F-valued, as they are, up to signs, coefficients of p_α. The first property follows
from multiplicativity of determinant and additivity of trace for matrices. The third property
follows because ι(x) = diag(x, . . . , x).
For (2), if α ∈ A×, then 1 = N(1) = N(α)N(α⁻¹) implies N(α) ≠ 0. If N(α) ≠ 0, then
the characteristic polynomial p_α(x) = x^n + c_{n−1}x^{n−1} + · · · + c_0 has nonzero constant term.
By the Cayley–Hamilton theorem, p_α(α) = 0 (and thus the minimal polynomial divides the
characteristic polynomial), i.e.,
α(α^{n−1} + c_{n−1}α^{n−2} + · · · + c_1) = −c_0.
This lemma implies that the reduced characteristic polynomial, reduced norm and re-
duced trace are different from the non-reduced versions whenever D ≠ F, because they will
be different on elements of F (e.g., for x ∈ F, N^{nr}_{A/F}(x) = x^{n²}, whereas NA/F(x) = x^n).
A consequence of (1) is that the reduced norm and trace define group homomorphisms
N : A× → F×, tr : A → F,
and thus are analogous to norms and traces for number field extensions. A consequence of
(2) is that A is a division algebra (i.e., n = 1) if and only if the norm is nonzero on all
nonzero elements.
As mentioned in the introduction, the reduced norm will provide a link between quater-
nion algebras and quadratic forms, generalizing the case of H in the exercise above.
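For H, the reduced norm is the quadratic form a² + b² + c² + d², and its multiplicativity N(xy) = N(x)N(y) is exactly Euler’s four-square identity. A quick check with my own quaternion-multiplication helper:

```python
# Quaternions as 4-tuples (a, b, c, d) = a + bi + cj + dk.
def qmul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def N(x):
    """Reduced norm on H: the sum-of-four-squares quadratic form."""
    return sum(t * t for t in x)

x, y = (1, 2, 3, 4), (5, -6, 7, 8)
assert N(qmul(x, y)) == N(x) * N(y)   # Euler's four-square identity
```

In particular a product of two sums of four (integer) squares is again a sum of four squares, a fact we get for free from the multiplicativity of the reduced norm.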
Exercise 2.5.2. Let D/F be a central division algebra of degree n, and K be a subfield
of D. Show that for any x ∈ K,
trD/F x = (n/[K : F ]) trK/F x, ND/F(x) = NK/F(x)^{n/[K:F]}.
(The quantities on the left denote reduced trace and norm on D, and on the right are
trace and norm of extensions of number fields.) In particular, if K is a maximal subfield
of D which contains x, then the reduced trace and norm agree with the trace and norm
for extensions of algebraic number fields.
Proof. Let D be a complex division algebra. Fix α ∈ D and let p be its minimal polynomial.
Then p(α) = 0 by definition of minimal polynomial. But since p factors into linear factors
over C, we have α − z = 0 for some z ∈ C (assuming α ≠ 0). Hence α ∈ C.
Corollary 2.6.2. Any simple C-algebra is isomorphic to a complex matrix algebra Mn (C).
The classification of simple algebras over R follows from the following famous result.
Theorem 2.6.3 (Frobenius). Let D be a real division algebra, i.e., a division algebra over
R. Then D is isomorphic to R, C or H.
Proof. We use the fact that the only field extensions K/R of finite degree are R and C. If
D is not central over R, then by Corollary 2.3.1 D must be central over C, hence by the
previous result D = C.
So suppose Z(D) = R and let K be a maximal subfield of D. If K = R, then D is split
so D = R. If K = C, by Theorem 2.4.5, we see D is a degree 2 division algebra containing
C. Now apply the exercise below.
There are more elementary proofs of Frobenius’s theorem,5 e.g., R.S. Palais’s
note in the Monthly (Apr 1968), but I wanted to show the utility of splitting fields.
You can do the above exercise without much theory. This exercise is also a consequence
of general structure theory of quaternion algebras we will develop later.
Corollary 2.6.4. Any simple R-algebra is isomorphic to Mn (R), Mn (C) or Mn (H) for some
n.
Proof. This follows immediately from combining Frobenius’s theorem and Wedderburn’s
theorem.
Av = A ⊗F Fv,
which is a CSA over Fv of degree n by Proposition 2.3.2. It is clear that A ≃ A′ implies
Av ≃ A′v for all v. It is not at all obvious that the converse is true.
Pierce [Pie82, Sec 18.4] calls this “The most profound result in the theory of central simple
algebras.” This is also sometimes just called the Brauer–Hasse–Noether theorem, though
Albert (an American mathematician) played a role in its proof. Had the four mathematicians
been on the same continent or lived in a time with more advanced travel and communication
options, the correspondence described in Roquette’s article leads me to believe the original
proof would have been a 4-way collaboration.
One method of proof is to reduce to the case of cyclic algebras and use Hasse’s norm
theorem. Cyclic algebras of degree n are CSAs which can be constructed in a certain concrete
way in terms of matrices over a degree n cyclic extension K/F (i.e., K/F is Galois with
cyclic Galois group). Specifically, let K/F be a cyclic extension of degree n, σ a generator
of the Galois group, and b ∈ F×. Then the cyclic algebra (K/F, σ, b) is the degree n CSA
over F generated by an element y and the extension K subject to the relations
y^n = b, αy = yα^σ, for all α ∈ K,
and embedding K in Mn(K) via α ↦ diag(α, α^σ, . . . , α^{σ^{n−1}}). One can show the index of
(K/F, σ, b) is just the order of b in F×/NK/F(K×).
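To see the defining relations in a concrete matrix model, take K/F = C/R with σ = complex conjugation. A sketch (the notation is mine): embed C as α ↦ diag(α, ᾱ) and set y = [[0, 1], [b, 0]]; then y² = b and αy = yα^σ hold on the nose:

```python
def mat_mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def emb(alpha):
    """Embed alpha in C as diag(alpha, conj(alpha))."""
    return [[alpha, 0], [0, alpha.conjugate()]]

b = -1   # b = -1 is not a norm from C (norms are >= 0); (C/R, conj, -1) is H
y = [[0, 1], [b, 0]]
alpha = complex(2, 3)
assert mat_mul(y, y) == [[b, 0], [0, b]]                             # y^2 = b
assert mat_mul(emb(alpha), y) == mat_mul(y, emb(alpha.conjugate()))  # alpha y = y alpha^sigma
```

With b = −1 this recovers the matrix model of H from earlier in the chapter; with b = 1 (a norm from C), the element [[1, 1], [1, 1]] has determinant 0, giving a zero divisor, matching the exercise below.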
Exercise 2.7.1. Let n = 2. Show the cyclic algebra (K/F, σ, b) is a division algebra if b
is not a norm from K and split if b is a norm from K.
Albert–Brauer–Hasse–Noether (or some subset) eventually proved that any CSA over a
number field or p-adic field (or R or C) is a cyclic algebra, which one deduces as a consequence
of the above local–global principle (and Roquette says that the authors really considered
this as their main result).
Hasse’s norm theorem is the following local–global principle for norms:
Theorem 2.7.2 (Hasse’s norm theorem). Let K/F be a cyclic extension of number fields.
Then x ∈ F is a norm from K if and only if it is a norm in Fv from Kv for all v.
The proof of Hasse’s norm theorem is long, but the reduction of Albert–Brauer–Hasse–
Noether to this theorem is relatively short (see, e.g., [Pie82, Sec 18.4]). One can also
prove the local–global principle by using zeta functions—e.g., Weil uses the zeta function
approach in [Wei95, Thm XI.2] to prove the above ABHN local-global principle in the case
A0 = Mn (F ).
Thus, to classify CSAs over number fields, it suffices to (i) classify CSAs over local fields,
and (ii) determine when local CSAs can be patched together to make a global CSA.
While to answer (i) it suffices to classify division algebras over local fields by Wedder-
burn’s theorem, the answer is nicer to explain in terms of CSAs. We have already explained
the classification of CSAs over archimedean fields in Section 2.6: over R, one just gets Mn (R)
and Mn/2 (H); over C, just Mn (C).
Theorem 2.7.3. Let v be a nonarchimedean valuation. Then the CSAs of degree n over Fv
are, up to isomorphism, in bijection with Z/nZ. Under this correspondence, the index of a
CSA corresponding to a ∈ Z/nZ is the order of a in Z/nZ. Thus 0 corresponds to Mn(Fv)
and the elements of (Z/nZ)× correspond to division algebras.
When n = 2, this says that the only quaternion algebras over a p-adic field are (up to
isomorphism) the unique division algebra of dimension 4 and M2 (Fv ).
This classification is generally proven using the Brauer group of Fv. The Brauer group
of a field k is the collection Br(k) of CSAs (up to isomorphism) over k modulo the equivalence
Mn(D) ∼ Mm(D), i.e., two CSAs are Brauer equivalent if they have isomorphic underlying
division algebras. The group law is tensor product.
Exercise 2.7.2. Let A be a CSA over k. Show A ⊗ Aopp ∼ k (Brauer equivalence), and
show that tensor product makes Br(k) into an abelian group.
Exercise 2.7.3. Show Br(R) ≃ Z/2Z and Br(C) ≃ {1}.
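The key computation behind Br(R) ≃ Z/2Z is that [H]² = 1, i.e., H ⊗R H is split. Here is a numerical sketch (all helper names mine): the map x ⊗ y ↦ (v ↦ x v ȳ) gives an algebra map H ⊗R H → EndR(H) = M4(R), and we check that the 16 basic operators span M4(R), so it is an isomorphism:

```python
def qmul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(x):
    return (x[0], -x[1], -x[2], -x[3])

def rank(rows, tol=1e-9):
    """Row rank via Gaussian elimination."""
    rows, rk = [list(map(float, r)) for r in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [u - f * w for u, w in zip(rows[i], rows[rk])]
        rk += 1
    return rk

basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
ops = []
for x in basis:
    for y in basis:
        # matrix of v -> x * v * conj(y), columns indexed by the basis of H
        cols = [qmul(qmul(x, v), conj(y)) for v in basis]
        ops.append([cols[k][r] for r in range(4) for k in range(4)])
assert rank(ops) == 16   # the 16 operators span M_4(R), so H (x) H ~ M_4(R)
```

So [H] has order 2 in Br(R), and since R and H are the only central division algebras over R, Br(R) ≃ Z/2Z as the exercise asserts.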
The above local classification can be deduced from the following result, which is nowadays
typically proved by cohomological methods.
Theorem 2.7.4. Let Fv be a p-adic field. Then Br(Fv) ≃ Q/Z.
It is a theorem that, over p-adic fields (or number fields), the exponent of a CSA in the
Brauer group (also called the period of the CSA) is the same as the index. (Over general
fields the period-index theorem says the exponent or period divides the index, but Brauer
constructed examples to show they need not be equal.)
Via this isomorphism with the Brauer group, each CSA Av of degree n over Fv cor-
responds to a rational number of the form a/n where 0 ≤ a < n. Then Theorem 2.7.3 is
essentially just the “degree n” part of Theorem 2.7.4.
The rational number a/n is called the (Hasse) invariant of Av and denoted inv Av.
This invariant will play an important role in the global classification. For v archimedean,
inv Av = 0 if Av ≃ Mn(R) or Av ≃ Mn(C), and inv Av = 1/2 if Av ≃ Mn/2(H). Note for
any v, inv Av = 0 if and only if Av is split, and more generally the order of inv Av in Q/Z
equals ind Av.
See, e.g., [Pie82, Chap 17] for a detailed exposition of these facts.
Now let us describe the complete classification of CSAs over F.
We say A is unramified or split at v, or Av is unramified, if Av ∼ Fv (Brauer equiva-
lence), i.e., if Av ≃ Mn(Fv) is split, i.e., if inv Av = 0. Otherwise A is ramified at v. Let
(3) Given any finite set S of places of F and av ∈ {0, 1/n, . . . , (n − 1)/n} a local Hasse invariant
for each v ∈ S such that Σv∈S av ∈ Z, there exists a CSA A/F of degree n which is
unramified at each v ∉ S such that inv Av = av for each v ∈ S.
In the third part of the theorem, it is understood that at a real place each Hasse invariant
must be 0 or 1/2 and at each complex place each Hasse invariant must be 0 (or one can just
assume S does not contain complex places).
By the Albert–Brauer–Hasse–Noether theorem, this gives a complete classification of
CSAs over number fields as the conditions in (3) determine A up to isomorphism (the local
Hasse invariants determine Av up to isomorphism). In particular, if A is a CSA which is
not split (i.e., A 6' Mn (F )), then it must be ramified at at least 2 places.
When n = deg A = 2, each inv Av is either 0 or 1/2, with the latter happening precisely
when Av is ramified, i.e., a degree 2 division algebra. The condition (2) that the invariants
must sum to an integer is simply that A is ramified at a (finite) even number of places. Part
(3) of the theorem says that given any set S consisting of an even number of non-complex
places, there is a quaternion algebra A which is ramified precisely at v ∈ S, i.e., Av is
division if and only if v ∈ S.
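In the quaternion case the integrality condition is thus just a parity check; a tiny bookkeeping sketch (function name mine):

```python
# Degree 2 (quaternion) case of part (3): local invariants lie in {0, 1/2},
# and a set of invariants is admissible iff it sums to an integer,
# i.e., iff the number of ramified places is even.
from fractions import Fraction

def admissible(invariants):
    assert all(q in (Fraction(0), Fraction(1, 2)) for q in invariants)
    return sum(invariants, Fraction(0)).denominator == 1

assert admissible([Fraction(1, 2), Fraction(1, 2)]) is True   # ramified at 2 places
assert admissible([Fraction(1, 2)]) is False                  # one ramified place: impossible
```

For example, there is a quaternion algebra over Q ramified exactly at {2, ∞} (this is the Hamilton quaternions viewed over Q), but none ramified only at ∞.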
7
Like with number fields, being ramified is something that can happen at only finitely many places, as
stated in the theorem. However, unlike for CSAs, for extensions of number fields K/F being unramified and
split at v are not the same—we have (infinitely many) inert places too. So you may not want to think of
ramification for CSAs as exactly corresponding to that for number fields now, though we will explain an
analogy between these two notions of ramification when we examine division algebras over local fields more
closely (at least in the quaternion case).