Matrix Algebra
Contents
1 Definitions
2 Matrix Operations
3 Rank of a Matrix
5 Systems of Equations
7 Quadratic Forms
8 Partitioned Matrices
10 Kronecker Product
References
Foreword
These lecture notes summarize the main results of matrix algebra as they are used in econometrics and economics. For a deeper discussion of the material, the interested reader should consult the references listed at the end.
1 Definitions
A symmetric matrix is a square matrix such that aij = aji for all i, j = 1, . . . , n.
A diagonal matrix is a square matrix such that the off-diagonal elements are all equal to zero, i.e. aij = 0 for i ≠ j.
The identity matrix is a diagonal matrix with all diagonal elements
equal to one. The identity matrix is denoted by I or In .
A square matrix is said to be upper triangular whenever aij = 0 for
i > j and lower triangular whenever aij = 0 for i < j.
Two vectors a and b are said to be linearly dependent if there exist scalars α and β, not both equal to zero, such that αa + βb = 0. Otherwise they are said to be linearly independent.
2 Matrix Operations
2.1 Equality
Two matrices or two vectors are equal if they have the same dimension and if their respective elements are all equal, i.e. A = B ⇐⇒ aij = bij for all i and j.
2.2 Transpose

The transpose of an (n × m) matrix A, denoted by A′, is the (m × n) matrix whose (i, j)-th element is aji.

2.3 Addition and Subtraction

The addition and subtraction of matrices is only defined for matrices with the same dimension.
Definition 2. The sum of two matrices A and B of the same dimension is given by the sum of their respective elements, i.e.

C = A + B ⇐⇒ cij = aij + bij for all i and j.

We have the following calculation rules if matrix dimensions agree:

• A + O = A (2.3)
• A − B = A + (−B) (2.4)
• A + B = B + A (2.5)
• (A + B) + C = A + (B + C) (2.6)
• (A + B)′ = A′ + B′ (2.7)
2.4 Product

The product of an (n × k) matrix A and a (k × m) matrix B is the (n × m) matrix C such that

C = AB ⇐⇒ cij = ∑_{s=1}^{k} ais bsj = a′i• b•j for all i and j,

where a′i• denotes the i-th row of A and b•j the j-th column of B.
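The element-by-element definition of the product can be checked numerically. A minimal sketch in Python with numpy (numpy is an assumption of this illustration, not part of the notes):

```python
import numpy as np

# c_ij = sum over s of a_is * b_sj, for A (n x k) and B (k x m).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # (2 x 3)
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])             # (3 x 2)

n, k = A.shape
_, m = B.shape
C = np.zeros((n, m))
for i in range(n):
    for j in range(m):
        C[i, j] = sum(A[i, s] * B[s, j] for s in range(k))

# The explicit loop reproduces the built-in matrix product.
assert np.allclose(C, A @ B)
```

The resulting C is (2 × 2), as the (n × k)(k × m) dimension rule requires.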
We have the following calculation rules if matrix dimensions agree:
• AI = A, IA = A (2.8)
• AO = O, OA = O (2.9)
• A(B + C) = AB + AC (2.11)
• (B + C)A = BA + CA (2.12)
• c(A + B) = cA + cB (2.13)
3 Rank of a Matrix
A set of vectors x1, x2, . . . , xn is linearly independent if ∑_{i=1}^{n} ci xi = 0 implies ci = 0 for all i = 1, . . . , n.
The column rank of a matrix is the maximal number of linearly in-
dependent columns. The row rank of a matrix is the maximal number
of linearly independent rows. A matrix is said to have full column (row)
rank if the column rank (row rank) equals the number of columns (rows).
The column rank of an n × k matrix A is equal to its row rank. We
can therefore just speak of the rank of a matrix denoted by rank(A).
For an (n × k) matrix A, a (k × m) matrix B and an (n × n) square matrix C, we have

• rank(A) ≤ min(n, k)
• rank(AB) ≤ min(rank(A), rank(B))
• rank(CA) = rank(A) whenever C is nonsingular
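That the column rank equals the row rank can be illustrated numerically; a small sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

# The second row is twice the first, so only two rows (and two columns)
# are linearly independent: rank(A) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

r_col = np.linalg.matrix_rank(A)     # rank computed from A
r_row = np.linalg.matrix_rank(A.T)   # rank computed from A' (row rank)
assert r_col == r_row == 2
```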
4.2 Determinant
        ⎛ a11 . . . a1j . . . a1n ⎞
        ⎜  .         .         .  ⎟
Aij  =  ⎜ ai1 . . . aij . . . ain ⎟
        ⎜  .         .         .  ⎟
        ⎝ an1 . . . anj . . . ann ⎠
We have the following calculation rules if both A−1 and B−1 exist and
matrix dimensions agree:
• (A−1)−1 = A (4.9)
• A is nonsingular (4.13)
• |A| ≠ 0 (4.14)
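Rules (4.9) and (4.13)-(4.14) can be checked numerically; a minimal sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Ainv = np.linalg.inv(A)
# (A^-1)^-1 = A, rule (4.9)
assert np.allclose(np.linalg.inv(Ainv), A)
# A is nonsingular <=> |A| != 0, rules (4.13)-(4.14); here |A| = 5
assert abs(np.linalg.det(A)) > 1e-12
```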
5 Systems of Equations
A system of linear equations in the unknown vector x can be written as

Ax = b.
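For a nonsingular square A the system has the unique solution x = A−1 b. A minimal sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

# Solve Ax = b for a nonsingular square A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# np.linalg.solve is numerically preferable to forming inv(A) @ b.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```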
6 Eigenvalues and Eigenvectors

An eigenvalue λ and an associated eigenvector x ≠ 0 of a square matrix A satisfy

Ax = λx = λIx ⇐⇒ (A − λI)x = 0.
H′AH = Λ, (6.3)
H′H = I,

so that A can be written as

A = HΛH′ = ∑_{i=1}^{n} λi hi h′i,

where the matrices hi h′i all have rank one. This decomposition is called the spectral decomposition or eigendecomposition of A.

The inverse of a nonsingular symmetric matrix A can be calculated as

A−1 = HΛ−1H′ = ∑_{i=1}^{n} (1/λi) hi h′i.
Remark 5. Besides symmetric matrices, many other matrices, though not all, are also diagonalizable.
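The spectral decomposition and the inverse formula above can be verified numerically; a minimal sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

# Spectral decomposition of a symmetric matrix: A = H Λ H' with H'H = I.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, H = np.linalg.eigh(A)          # eigh is tailored to symmetric matrices
Lam = np.diag(lam)

assert np.allclose(H @ Lam @ H.T, A)     # A = H Λ H'
assert np.allclose(H.T @ H, np.eye(2))   # H'H = I
# Inverse from the decomposition: A^-1 = H Λ^-1 H'
assert np.allclose(H @ np.diag(1.0 / lam) @ H.T, np.linalg.inv(A))
```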
13 Short Guides to Microeconometrics
7 Quadratic Forms
A symmetric (n × n) matrix A is called positive semidefinite if

x′Ax ≥ 0 for all x.

For a positive semidefinite matrix A we have

• |A| ≥ 0 (7.6)
• tr(A) ≥ 0 (7.7)
For an (n × m) matrix B, the matrix B′CB is positive semidefinite, where C is a positive semidefinite (n × n) matrix.
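Properties (7.6) and (7.7) can be checked on a concrete matrix. In the sketch below the matrix is built as B′B, a standard way to obtain a positive semidefinite matrix; Python with numpy is an assumption of this illustration:

```python
import numpy as np

# A = B'B is positive semidefinite: its eigenvalues are nonnegative,
# so |A| >= 0 and tr(A) >= 0 follow, as in (7.6)-(7.7).
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A = B.T @ B

eigvals = np.linalg.eigvalsh(A)
assert np.all(eigvals >= -1e-12)
assert np.linalg.det(A) >= -1e-12
assert np.trace(A) >= 0.0
```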
8 Partitioned Matrices
9 Matrix Differentiation

The derivative of the scalar y = a′x = x′a with respect to the row vector x′ is

∂y/∂x′ = ∂a′x/∂x′ = ∂x′a/∂x′ = [∂y/∂x1  ∂y/∂x2  . . .  ∂y/∂xn] = [a1  a2  . . .  an] = a′.

For the i-th element yi = a′i x of the vector y = Ax,

∂yi/∂x = ∂a′i x/∂x = ai,

where a′i denotes the i-th row of A.
Consequently the derivative of y = Ax with respect to the row vector x′ can be defined as

∂y/∂x′ = ∂Ax/∂x′ = A.

The derivative of the quadratic form x′Ax with respect to x is, for symmetric A,

∂x′Ax/∂x = 2Ax.
The derivative of the quadratic form x′Ax with respect to the matrix elements aij is given by

∂x′Ax/∂aij = xi xj.

Therefore the derivative with respect to the matrix A is given by

∂x′Ax/∂A = xx′.
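The derivative rules for the quadratic form can be checked against finite differences; a minimal sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

# Check dx'Ax/dx = 2Ax (A symmetric) against central finite differences.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -2.0])

def f(v):
    return v @ A @ v

h = 1e-6
grad = np.array([(f(x + h * e) - f(x - h * e)) / (2.0 * h)
                 for e in np.eye(2)])
assert np.allclose(grad, 2.0 * A @ x, atol=1e-5)

# dx'Ax/dA = xx': perturbing a_ij changes x'Ax at rate x_i * x_j.
assert np.allclose(np.outer(x, x), [[1.0, -2.0], [-2.0, 4.0]])
```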
Elements of Matrix Algebra 18
10 Kronecker Product
The Kronecker product of an (n × m) matrix A and a (p × q) matrix B is the (np × mq) block matrix A ⊗ B whose (i, j)-th block is aij B. We have the following calculation rules:

• (A ⊗ B) + (C ⊗ B) = (A + C) ⊗ B (10.1)
• (A ⊗ B) + (A ⊗ C) = A ⊗ (B + C) (10.2)
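Rules (10.1) and (10.2) can be verified numerically; a minimal sketch in Python with numpy (numpy is an assumption of this illustration):

```python
import numpy as np

# Distributivity rules (10.1) and (10.2) of the Kronecker product.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])

# (10.1): (A kron B) + (C kron B) = (A + C) kron B
assert np.allclose(np.kron(A, B) + np.kron(C, B), np.kron(A + C, B))
# (10.2): (A kron B) + (A kron C) = A kron (B + C)
assert np.allclose(np.kron(A, B) + np.kron(A, C), np.kron(A, B + C))
```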
References
[1] Abadir, K.M. and J.R. Magnus, Matrix Algebra, Cambridge: Cambridge University Press, 2005.
[4] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, Philadelphia: SIAM, 2000.
[5] Strang, G., Linear Algebra and its Applications, 3rd Edition, San Diego: Harcourt Brace Jovanovich, 1986.