This gives you an easy way to calculate the powers and the roots of complex numbers. Pay attention to the fact that $re^{i\theta} = re^{i(\theta + 2k\pi)}$, $k \in \mathbb{Z}$, so $(re^{i\theta})^{1/N} = r^{1/N} e^{i(\theta + 2k\pi)/N}$, $k = 0, 1, \ldots, N-1$.
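As a concrete illustration, here is a minimal Python sketch (assuming NumPy is available; the helper name nth_roots is ours) that computes all N roots directly from the polar form:

    import numpy as np

    def nth_roots(z, N):
        """Return the N distinct N-th roots of the complex number z."""
        r = abs(z)                    # modulus r
        theta = np.angle(z)           # argument theta
        k = np.arange(N)              # k = 0, 1, ..., N-1
        return r ** (1.0 / N) * np.exp(1j * (theta + 2 * np.pi * k) / N)

    # Example: the cube roots of 8 are 2, -1 + i*sqrt(3), -1 - i*sqrt(3).
    print(nth_roots(8, 3))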
Now the addition of two complex numbers is just the vector addition of two vectors, and multiplication by a fixed complex number can be seen as a simultaneous rotation and stretching.
Multiplication by $i$ corresponds to a counter-clockwise rotation by 90 degrees ($\pi/2$ radians). The geometric content of the equation $i^2 = -1$ is that a sequence of two 90-degree rotations results in a 180-degree ($\pi$ radians) rotation. Even the fact $(-1)(-1) = +1$ from arithmetic can be understood geometrically as the combination of two 180-degree turns.
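For instance (plain Python, no libraries needed), multiplying a point repeatedly by i walks it through four 90-degree rotations and back to its starting position:

    z = 3 + 4j                # an arbitrary point in the plane
    for step in range(4):
        z = 1j * z            # each multiplication by i rotates z by 90 degrees counter-clockwise
        print(step + 1, z)
    # After four rotations (i**4 = 1) we are back at 3 + 4j.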
1.4 Absolute Value, Conjugation and Distance
One can check readily that the absolute value has three important properties:
$|z| = 0$ if and only if $z = 0$    (13)
$|z + w| \le |z| + |w|$ (triangle inequality)    (14)
$|zw| = |z|\,|w|$    (15)
for all complex numbers $z$ and $w$. It then follows, for example, that $|1| = 1$ and $|z/w| = |z|/|w|$. By defining the distance function $d(z, w) = |z - w|$ we turn the complex numbers into a metric space and we can therefore talk about limits and continuity. The addition, subtraction, multiplication and division of complex numbers are then continuous operations. Unless anything else is said, this is always the metric being used on the complex numbers.
The complex conjugate of the complex number $z = a + ib$ is defined to be $a - ib$, written as $\bar{z}$ or $z^*$. $\bar{z}$ is the reflection of $z$ about the real axis. The following can be checked:
$\overline{z + w} = \bar{z} + \bar{w}$    (16)
$\overline{zw} = \bar{z}\,\bar{w}$    (17)
$\overline{(z/w)} = \bar{z}/\bar{w}$    (18)
$\bar{\bar{z}} = z$    (19)
$\bar{z} = z$ if and only if $z$ is real    (20)
$|z| = |\bar{z}|$    (21)
$|z|^2 = z\bar{z}$    (22)
$z^{-1} = \bar{z}\,|z|^{-2}$ if $z$ is nonzero.    (23)
The latter formula is the method of choice to compute the inverse of a complex number if it
is given in rectangular coordinates.
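A small Python sketch of formula (23) (the helper name complex_inverse is ours):

    def complex_inverse(z):
        """Inverse of a nonzero complex number z = a + ib, via conj(z) / |z|^2."""
        a, b = z.real, z.imag
        norm_sq = a * a + b * b       # |z|^2 = z * conj(z)
        return complex(a / norm_sq, -b / norm_sq)

    z = 3 + 4j
    print(complex_inverse(z))         # (0.12-0.16j)
    print(z * complex_inverse(z))     # equals 1 up to floating-point rounding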
That conjugation commutes with all the algebraic operations (and many functions; e.g. $\overline{\sin z} = \sin \bar{z}$) is rooted in the ambiguity in the choice of $i$ ($-1$ has two square roots).
2 Summations
Let $f$ be a function whose domain includes the integers from $n$ through $m$. We define
$\sum_{i=n}^{m} f(i) = f(n) + f(n+1) + \cdots + f(m)$    (24)
We call $i$ the index of summation, $n$ the lower limit of summation, and $m$ the upper limit of summation. One can show that:
$\sum_{k=1}^{n} c = c + c + \cdots + c = cn$    (25)
$\sum_{k=1}^{n} k = 1 + 2 + \cdots + n = \frac{n(n+1)}{2}$    (26)
$\sum_{k=1}^{n} k^2 = 1 + 4 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$    (27)
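These closed forms are easy to check numerically; a quick Python sanity check (the choices of n and c are arbitrary):

    n, c = 100, 7
    assert sum(c for k in range(1, n + 1)) == c * n                                # (25)
    assert sum(k for k in range(1, n + 1)) == n * (n + 1) // 2                     # (26)
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6   # (27)
    print("all three closed forms agree for n =", n)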
Another well-known result is the following:
$S_n = \sum_{k=0}^{n} r^k = 1 + r + r^2 + \cdots + r^n = \begin{cases} \frac{1 - r^{n+1}}{1 - r} & \text{if } r \neq 1 \\ n + 1 & \text{if } r = 1 \end{cases}$    (29)
Note that when $|r| < 1$:
$\lim_{n \to +\infty} S_n = \frac{1}{1 - r}$    (30)
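A quick numerical check of (29) and (30) in Python (the values of r and n are arbitrary):

    r, n = 0.5, 20
    S_n = sum(r ** k for k in range(n + 1))    # partial sum 1 + r + ... + r^n
    closed = (1 - r ** (n + 1)) / (1 - r)      # closed form (29), valid since r != 1
    print(S_n, closed)                         # the two agree
    print(abs(S_n - 1 / (1 - r)))              # already very close to the limit 1/(1 - r) = 2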
Additionally, be very cautious when taking squares of summations:
$\left( \sum_{i=n}^{m} f(i) \right)^2 = \left( \sum_{l=n}^{m} f(l) \right) \left( \sum_{k=n}^{m} f(k) \right) = \sum_{l=n}^{m} \sum_{k=n}^{m} f(l)\, f(k)$    (31)
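The identity (31) can be checked on a small example in Python (the choice of f is arbitrary):

    f = lambda i: i * i + 1                    # any function of the index will do
    n, m = 1, 5
    lhs = sum(f(i) for i in range(n, m + 1)) ** 2
    rhs = sum(f(l) * f(k) for l in range(n, m + 1) for k in range(n, m + 1))
    assert lhs == rhs                          # the square is a double sum ...
    print(lhs, sum(f(i) ** 2 for i in range(n, m + 1)))   # ... not the sum of squares, which differs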
Finally, let $S_N = \sum_{n=1}^{N} a_n$ and $S = \lim_{N \to +\infty} S_N$. If the sequence of partial sums is divergent (i.e. either the limit does not exist or is infinite) then we call the series divergent. If $|S| = c < \infty$, we call the series convergent and we call $S$ the sum or value of the series.
The Cauchy convergence criterion states that a series $\sum_{n=1}^{\infty} a_n$ converges if and only if the sequence of partial sums is a Cauchy sequence. This means that for every $\epsilon > 0$, there is a positive integer $N$ such that for all $n \ge m \ge N$ we have:
$\left| \sum_{k=m}^{n} a_k \right| < \epsilon$    (32)
which is equivalent to
$\lim_{n \to \infty} \sum_{k=n}^{n+m} a_k = 0$    (33)
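As an illustration in Python (the example series are ours): the partial sums of the series with terms 1/n^2 settle down to a finite value (pi^2/6), while those of the harmonic series with terms 1/n keep growing:

    import math

    def partial_sum(a, N):
        """S_N = a(1) + a(2) + ... + a(N) for a term function a."""
        return sum(a(k) for k in range(1, N + 1))

    for N in (10, 100, 10000):
        print(N, partial_sum(lambda k: 1.0 / k ** 2, N))   # approaches pi^2/6 ~ 1.6449 (convergent)
    print(math.pi ** 2 / 6)
    print(partial_sum(lambda k: 1.0 / k, 10000))           # harmonic series: grows without bound (divergent)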
3 Integration
Besides being comfortable with the basic properties of integrals and methods for integration (e.g. substitution, integration by parts), it is important to know the definition and basic properties of the convolution integral.
The convolution between two functions $f$ and $g$, both with domain $\mathbb{R}$, is itself a function, let's call it $h$, and is defined by
$h(x) := f(x) * g(x) = \int_{\mathbb{R}} f(y)\, g(x - y)\, dy.$
The following properties are easy to show:
$f(x) * g(x) = g(x) * f(x)$.
$f(x) * (g(x) * h(x)) = (f(x) * g(x)) * h(x)$.
$f(x) * (\alpha\, g(x) + \beta\, h(x)) = \alpha\, f(x) * g(x) + \beta\, f(x) * h(x)$.
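Numerically, the convolution integral can be approximated by sampling f and g on a grid and forming a Riemann sum; a rough sketch with NumPy (the grid, step size and example functions are arbitrary choices):

    import numpy as np

    dx = 0.01
    x = np.arange(-5, 5, dx)                  # sampling grid for the real line
    f = np.exp(-x ** 2)                       # two example functions
    g = np.where(np.abs(x) < 1, 0.5, 0.0)     # box of width 2 and height 1/2

    # (f * g)(x) ~ sum over y of f(y) g(x - y) dx; np.convolve performs the discrete sum.
    h = np.convolve(f, g, mode="same") * dx

    print(h[len(x) // 2])                                          # value of the convolution near x = 0
    print(np.allclose(h, np.convolve(g, f, mode="same") * dx))     # commutativity holds numerically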
4 Linear Algebra
4.1 Matrices
Let $A$ be a matrix with $n$ rows and $m$ columns of complex entries. That is, we have
$A := \begin{bmatrix} A_{1,1} & A_{1,2} & \ldots & A_{1,m} \\ A_{2,1} & A_{2,2} & \ldots & A_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \ldots & A_{n,m} \end{bmatrix}, \quad A_{i,j} \in \mathbb{C}.$
One of the basic operations on $A$ is taking the transpose, denoted by $A^T$ and defined as $(A^T)_{i,j} = A_{j,i}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. More explicitly we have
$A^T = \begin{bmatrix} A_{1,1} & A_{2,1} & \ldots & A_{n,1} \\ A_{1,2} & A_{2,2} & \ldots & A_{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1,m} & A_{2,m} & \ldots & A_{n,m} \end{bmatrix}.$
The conjugate transpose of $A$, denoted by $A^*$, is defined as $(A^*)_{i,j} = \overline{A_{j,i}}$. Besides taking the transpose of $A$ we take the complex conjugate of each element. Note that $A^*$ is also known as the Hermitian of $A$.
Based on the above operations we define symmetric and Hermitian matrices. A real matrix $A$ is symmetric if $A^T = A$; a (complex) matrix is Hermitian if $A^* = A$.
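In NumPy the transpose and conjugate transpose are A.T and A.conj().T; a brief sketch (the example matrix is arbitrary):

    import numpy as np

    A = np.array([[1 + 2j, 3 - 1j],
                  [0 + 1j, 2 + 0j]])

    print(A.T)                           # transpose: (A^T)_{i,j} = A_{j,i}
    print(A.conj().T)                    # conjugate transpose A^*

    H = A + A.conj().T                   # a standard way to build a Hermitian matrix
    print(np.allclose(H, H.conj().T))    # True: H equals its own conjugate transpose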
The matrix $A$ can be right multiplied with an $m$ by $p$ matrix, say $B$, resulting in an $n$ by $p$ matrix. Remember that matrix multiplication is defined by $(AB)_{i,j} = \sum_{k=1}^{m} A_{i,k} B_{k,j}$. Note that matrix multiplication is not commutative, i.e. in general $AB \neq BA$ (assuming $n = m$).
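A tiny NumPy check of the definition and of non-commutativity (the matrices are arbitrary):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 1],
                  [1, 0]])

    print(A @ B)                                                   # [[2, 1], [4, 3]]
    print(B @ A)                                                   # [[3, 4], [1, 2]], not the same as A @ B
    # Entry (i, j) of A @ B is the sum over k of A[i, k] * B[k, j]:
    print(sum(A[0, k] * B[k, 1] for k in range(2)), (A @ B)[0, 1])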
4.2 Vectors
Let $c$ and $d$ be vectors of length $n$ and $m$ respectively, i.e. $c = [c_1, c_2, \ldots, c_n]$ and $d = [d_1, d_2, \ldots, d_m]$. We will usually assume that vectors are column vectors. This allows us to right multiply our matrix $A$ with the vector $d$. The result $Ad$ is a length $n$ vector. Similarly $c^T A$ gives a length $m$ row vector.
The inner product between two length-$n$ vectors $a$ and $b$ is defined by
$\langle a, b \rangle := \sum_{i=1}^{n} a_i b_i = a^T b.$
Note that matrix multiplication is nothing more than taking inner products between the rows and columns of the two matrices. The most common way to define the norm of a vector is through the inner product. This gives that the norm of $x$, written $\|x\|_2$, is defined as
$\|x\|_2 = \langle x, x \rangle^{1/2}.$
A very useful relation is the Cauchy-Schwarz inequality, which states that
$|\langle x, y \rangle| \le \|x\|_2\, \|y\|_2.$
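A short NumPy sketch of the inner product, the norm and the Cauchy-Schwarz inequality (the vectors are arbitrary):

    import numpy as np

    x = np.array([1.0, 2.0, -1.0])
    y = np.array([0.5, -3.0, 2.0])

    inner = x @ y                          # <x, y> = sum_i x_i y_i = x^T y
    norm_x = np.sqrt(x @ x)                # ||x||_2 = <x, x>^{1/2}
    norm_y = np.sqrt(y @ y)

    print(inner, np.dot(x, y))             # same value
    print(norm_x, np.linalg.norm(x))       # same value
    print(abs(inner) <= norm_x * norm_y)   # Cauchy-Schwarz: always True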
4.3 Determinants
One of the most used properties of a matrix is its determinant. The determinant of a 2 by 2
matrix
$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
is given by
$\det(A) = ad - bc.$
In general, for a square $n$ by $n$ matrix $A$ we have, for any row $i = 1, \ldots, n$,
$\det(A) = \sum_{j=1}^{n} A_{i,j} (-1)^{i+j} \det\left( A_{\setminus(i,j)} \right),$
where $A_{\setminus(i,j)}$ is the resulting matrix after removing row $i$ and column $j$ from matrix $A$. We can also expand along any column $j = 1, \ldots, n$, which gives us
$\det(A) = \sum_{i=1}^{n} A_{i,j} (-1)^{i+j} \det\left( A_{\setminus(i,j)} \right).$
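The cofactor expansion translates directly into a recursive function; a Python sketch with NumPy (the name det_cofactor is ours, and this O(n!) recursion is only meant to illustrate the formula on tiny matrices, not for practical use):

    import numpy as np

    def det_cofactor(A):
        """Determinant via cofactor expansion along the first row."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # remove row 1 and column j+1
            total += A[0, j] * (-1) ** j * det_cofactor(minor)      # (-1)**j matches (-1)^{i+j} in 1-based indexing
        return total

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(det_cofactor(A), np.linalg.det(A))                        # both give 8.0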
An important result to keep in mind is that
a matrix is invertible if and only if its determinant is not equal to zero.
Finally, we note the following basic relations:
$\det(AB) = \det(A) \det(B)$.
$\det\left( A^{-1} \right) = \det(A)^{-1}$.
$\det(A^*) = \overline{\det(A)}$.
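These relations are easy to verify numerically, e.g. with random complex matrices in NumPy (a quick check, not a proof):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
    print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))
    print(np.isclose(np.linalg.det(A.conj().T), np.conj(np.linalg.det(A))))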
4.4 Eigenvalues and Eigenvectors
The eigenvalues of a matrix $A$ are the solutions for $\lambda$ in the equation
$\det(A - \lambda I) = 0,$
which is called the characteristic equation. Given that $\lambda$ is an eigenvalue of $A$, we call the vector $x$ for which
$Ax = \lambda x$
the eigenvector corresponding to $\lambda$.
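In practice eigenvalues and eigenvectors are computed numerically, e.g. with np.linalg.eig; a short sketch (the example matrix is arbitrary):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigvals, eigvecs = np.linalg.eig(A)       # the columns of eigvecs are the eigenvectors
    print(eigvals)                            # the eigenvalues 3 and 1 (order may vary)

    for lam, x in zip(eigvals, eigvecs.T):
        print(np.allclose(A @ x, lam * x))    # A x = lambda x holds for each pair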
One can verify that the eigenvalues of A and A