Error Control Coding
Text: Error Control Coding Fundamentals & Applications - Shu Lin, D.J. Costello
Chapters 1, 2 & 3
1.0 Introduction
Communication System (block diagram): Source (x) → Source Encoder (u) → Channel Encoder (v) → Modulation → Channel (+ Noise) → Demodulation → Channel Decoder → Source Decoder (x̂) → Destination.
Simplified coding-channel model: Digital Source (u) → Encoder (v) → Coding Channel (+ Noise) → (r) → Decoder (û) → Digital Sink.
* There is a duality between the problems of data compression (source coding - finding the optimal source code) and data transmission (channel coding). During compression, we remove all the redundancy in the data to form the most compressed version possible, whereas during data transmission, we add redundancy in a controlled fashion to combat errors in the channel.
1.1 Groups:
Definition: A group G is a set on which a binary operation * is defined, satisfying the following axioms:
1. Closure: for any a, b in G, a*b is also in G.
2. Associativity: for any a, b, c in G, (a*b)*c = a*(b*c).
3. Identity: there exists an element e in G such that for any a in G
a*e = e*a = a
4. Inverses: for any element a in G, there exists another element b in G such that
a*b = b*a = e
(b is called the inverse of a).
If G has a finite number of elements then it is called a finite group and the number of
elements in G is called the order of G.
Some groups satisfy the additional property that for all a,b in the group
a*b = b*a.
This is called the commutative property. These groups are called commutative groups or
abelian groups.
e.g. If p is a prime, the set G = {1, 2, ..., p-1} under modulo-p multiplication forms a group, called a multiplicative group. If p is not a prime, then the set is not a group under modulo-p multiplication.
 .   1  2  3  4
 1   1  2  3  4
 2   2  4  1  3
 3   3  1  4  2
 4   4  3  2  1

Modulo-5 Multiplication
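
As a quick illustration (not from the text), the short Python sketch below brute-forces the closure and inverse axioms for {1, 2, ..., p-1} under modulo-p multiplication; the function name is_multiplicative_group is just a label for this check.

# Sketch: check whether {1, ..., p-1} forms a group under modulo-p multiplication.

def is_multiplicative_group(p):
    elements = set(range(1, p))
    # Closure: a*b mod p must stay in {1, ..., p-1}, i.e. never become 0.
    closed = all((a * b) % p in elements for a in elements for b in elements)
    # Inverses: every a must have some b with a*b = 1 (mod p).
    has_inverses = all(any((a * b) % p == 1 for b in elements) for a in elements)
    return closed and has_inverses

print(is_multiplicative_group(5))   # True  -- 5 is prime
print(is_multiplicative_group(6))   # False -- e.g. 2*3 = 0 (mod 6), so closure fails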
e.g. Group: real numbers under addition; Subgroup: integers under addition.
A ring is an abstract set that is an abelian group and also has an additional structure.
Definition: A ring R is a set with two operations defined: the first is called addition (denoted by +); the second is called multiplication (denoted by juxtaposition); and the following axioms are satisfied:
1. R is an abelian group under addition.
2. Closure: for any a, b in R, the product ab is also in R.
3. Associative law
a(bc) = (ab)c
4. Distributive law
a(b+c) = ab + ac, (b+c)a = ba + ca
The addition operation is always commutative in a ring, but the multiplication operation
need not be commutative. A commutative ring is one in which multiplication is
commutative, i.e. ab = ba for all a,b in R.
The distributive law in the definition of a ring links the addition and multiplication
operations.
e.g.
1. The set of all n by n matrices with integer-valued elements under matrix addition and multiplication is a noncommutative ring with identity (see the sketch after these examples).
2. The set of all integers under the usual addition and multiplication is a ring with
identity. This ring is conventionally denoted by Z.
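
A minimal sketch (not from the text) illustrating example 1: two 2 x 2 integer matrices that fail to commute. The helper matmul is written only for this illustration.

# Sketch: two 2 x 2 integer matrices that do not commute under matrix multiplication.

def matmul(A, B):
    # Product of two 2 x 2 integer matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(matmul(A, B))   # [[2, 1], [1, 1]]
print(matmul(B, A))   # [[1, 1], [1, 2]]  -- AB != BA, so the ring is noncommutative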
Loosely speaking, an abelian group is a set in which one can add and subtract, and a
ring is a set in which one can add, subtract and multiply. A more powerful algebraic
structure, known as a field, is a set in which one can add, subtract, multiply and divide.
Definition: A field F is a set that has two operations defined on it: addition and multiplication, such that the following axioms are satisfied:
1. The set is an abelian group under addition.
2. The field is closed under multiplication, and the set of nonzero elements is an abelian group under multiplication.
3. The distributive law holds: for all a, b, c in F,
(a+b)c = ac + bc
It is conventional to denote the identity element under addition by 0 and to call it "zero"; to denote the additive inverse of a by -a; to denote the identity element under multiplication by 1 and to call it "one"; and to denote the multiplicative inverse of a by a^(-1).
e.g. The rational numbers, the real numbers and the complex numbers under ordinary addition and multiplication are fields.
These fields all have an infinite number of elements. We are interested in fields with a finite number of elements.
(Galois fields are named for Évariste Galois (1811-1832). Abelian groups are named for Niels Henrik Abel (1802-1829).)
What is the smallest field? It must have an element zero and an element one. In fact these two elements suffice, with the following addition and multiplication tables:

 +   0  1         .   0  1
 0   0  1         0   0  0
 1   1  0         1   0  1
This is the field GF(2). No other field exists with two elements.
Similarly, modulo-3 addition and multiplication on {0, 1, 2} give the field GF(3):

 +   0  1  2         .   0  1  2
 0   0  1  2         0   0  0  0
 1   1  2  0         1   0  1  2
 2   2  0  1         2   0  2  1

Notice, however, that multiplication in GF(4) is not modulo-4 and addition is not modulo-4 (4 is not a prime, so modulo-4 arithmetic does not give a field).
e.g. GF(2) is contained in GF(4) = GF(2^2). However, GF(2) is not contained in GF(3).
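
One standard way to realize GF(4) (a sketch, not necessarily the text's construction) is to read the elements {0, 1, 2, 3} as 2-bit polynomials over GF(2), with bit 1 the coefficient of x, and reduce products modulo the irreducible polynomial x^2 + x + 1. Addition is then bitwise XOR, which makes it plain that GF(4) arithmetic is not modulo-4 arithmetic.

# Sketch: GF(4) arithmetic with elements {0, 1, 2, 3} read as 2-bit polynomials
# over GF(2) (bit 1 = coefficient of x), reduced modulo x^2 + x + 1.

def gf4_add(a, b):
    # Coefficient-wise modulo-2 addition = bitwise XOR (not addition mod 4).
    return a ^ b

def gf4_mul(a, b):
    # Multiply the two polynomials over GF(2) ...
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # ... then reduce modulo x^2 + x + 1 (binary 0b111).
    if prod & 0b100:
        prod ^= 0b111
    return prod

print([[gf4_add(a, b) for b in range(4)] for a in range(4)])   # addition table
print([[gf4_mul(a, b) for b in range(4)] for a in range(4)])   # multiplication table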
A field has all the properties of a ring. It also has an additional important property - it is always possible to cancel. Cancellation is a weak form of division that states that if ab = ac and a is nonzero, then b = c. Some rings may also satisfy this cancellation law and yet not be fields. The ring of integers is a simple example.
In general, we can construct a code with symbols from any Galois field GF(q), where q
is either a prime p or a power of p. However, codes with symbols from the binary field GF(2) or its extension GF(2^m) are most widely used in digital data transmission and
storage systems because information in these systems is universally coded in binary
form for practical reasons.
Let V be a set of elements on which a binary operation called addition + is defined. Let F be a field. A multiplication operation, denoted by '.', between the elements in F and elements in V is also defined. The set V is called a vector space over the field F if it satisfies the following conditions:
1. V is a commutative group under addition.
2. For any element a in F and any element v in V, a.v is an element in V.
3. Distributive laws: for any u, v in V and any a, b in F,
a.(u + v) = a.u + a.v
(a+b).v = a.v + b.v
4. Associative law: for any v in V and any a, b in F,
(a.b).v = a.(b.v)
5. Let 1 be the unit element of F. Then, for any v in V, 1.v = v.
The elements of V are called vectors and the elements of the field F are called scalars. The addition on V is called vector addition, and the multiplication that combines a scalar in F and a vector in V into a vector in V is referred to as scalar multiplication (or product). The additive identity of V is denoted by 0.
Properties:
1. Let 0 be the zero element of the field F. For any vector v in V, 0.v = 0.
2. For any scalar c in F, c.0 = 0.
3. For any scalar c in F and any vector v in V, (-c).v = c.(-v) = -(c.v).

Consider an ordered sequence of n components (a0, a1, ..., an-1), where each component ai is an element from the binary field GF(2). This sequence is generally called an n-tuple over GF(2). Since each ai has two possible values, there are 2^n distinct n-tuples; let Vn denote the set of all of them.
For any u = (u0, u1, ..., un-1) and v = (v0, v1, ..., vn-1) in Vn, vector addition is defined component-wise:
u + v = (u0+v0, u1+v1, ..., un-1+vn-1),
and for any a in GF(2), scalar multiplication is defined as
a.v = (a.v0, a.v1, ..., a.vn-1),
where the component operations are carried out modulo 2.
==> The set Vn of all n-tuples over GF(2) forms a vector space over GF(2).
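
A small sketch of these component-wise operations over GF(2); the function names vadd and smul are illustrative only.

# Sketch: component-wise vector addition and scalar multiplication in Vn over GF(2).

def vadd(u, v):
    # Component-wise modulo-2 addition of two n-tuples.
    return tuple((ui + vi) % 2 for ui, vi in zip(u, v))

def smul(a, v):
    # Multiplication of an n-tuple by a scalar a in GF(2) = {0, 1}.
    return tuple((a * vi) % 2 for vi in v)

u = (1, 0, 1, 1, 0)
v = (0, 1, 0, 0, 1)
print(vadd(u, v))   # (1, 1, 1, 1, 1)
print(smul(0, u))   # (0, 0, 0, 0, 0) -- the additive identity 0 of Vn
print(vadd(u, u))   # (0, 0, 0, 0, 0) -- over GF(2) every vector is its own additive inverse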
Let v1, v2, ..., vk be k vectors in a vector space V over a field F. Let a1, a2, ..., ak be k scalars from F. The sum
a1.v1 + a2.v2 + ... + ak.vk
is called a linear combination of v1, v2, ..., vk. The sum of two linear combinations of v1, v2, ..., vk is also a linear combination of v1, v2, ..., vk, and the product of a scalar c in F and a linear combination of v1, v2, ..., vk is also a linear combination of v1, v2, ..., vk. It follows that:
Theorem: Let v1, v2, ..., vk be k vectors in a vector space V over a field F. The set of all linear combinations of v1, v2, ..., vk forms a subspace of V.
A set of vectors v1, v2, ..., vk is said to be linearly dependent if there exist k scalars a1, a2, ..., ak from F, not all zero, such that
a1.v1 + a2.v2 + ... + ak.vk = 0.
A set of vectors v1, v2, ..., vk is said to be linearly independent if it is not linearly dependent, i.e. if v1, v2, ..., vk are linearly independent, then
a1.v1 + a2.v2 + ... + ak.vk ≠ 0
unless a1 = a2 = ... = ak = 0.
e.g. The vectors (1 0 1 1 0), (0 1 0 0 1) and (1 1 1 1 1) are linearly dependent since
1.(1 0 1 1 0) + 1.(0 1 0 0 1) + 1.(1 1 1 1 1) = (0 0 0 0 0).
A set of vectors is said to span a vector space V if every vector in V is a linear combination of the vectors in the set; a set of linearly independent vectors that spans V is called a basis of V, and the number of vectors in a basis is the dimension of V.
e.g. Consider the following n n-tuples over GF(2):
e0 = (1, 0, 0, ..., 0)
e1 = (0, 1, 0, ..., 0)
e2 = (0, 0, 1, 0, ..., 0)
.
.
en-1 = (0, 0, 0, ..., 1)
where the n-tuple ei has only one nonzero component, at the ith position. Then every n-tuple (a0, a1, ..., an-1) in Vn can be expressed as a linear combination of e0, e1, ..., en-1 as follows:
(a0, a1, ..., an-1) = a0.e0 + a1.e1 + ... + an-1.en-1
Therefore, e0, e1, ..., en-1 span the vector space Vn of all n-tuples over GF(2). It is clear that e0, e1, ..., en-1 are linearly independent. Hence they form a basis for Vn, and the dimension of Vn is n. If k < n and v1, v2, ..., vk are k linearly independent vectors in Vn, then all linear combinations of v1, v2, ..., vk of the form
u = c1.v1 + c2.v2 + ... + ck.vk
form a k-dimensional subspace S of Vn. Since each ci has two possible values, 0 or 1, there are 2^k possible distinct linear combinations of v1, v2, ..., vk. Thus S consists of 2^k vectors and is a k-dimensional subspace of Vn.
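
The sketch below (illustrative only; the helper span is not from the text) enumerates all 2^k linear combinations of k = 2 linearly independent 5-tuples and confirms that the resulting subspace contains exactly 2^k = 4 vectors.

from itertools import product

def span(basis):
    # All GF(2) linear combinations c1.v1 + ... + ck.vk of the given vectors.
    n = len(basis[0])
    subspace = set()
    for coeffs in product((0, 1), repeat=len(basis)):          # every (c1, ..., ck)
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 2 for i in range(n))
        subspace.add(v)
    return subspace

basis = [(1, 0, 1, 1, 0), (0, 1, 0, 0, 1)]    # k = 2 linearly independent 5-tuples
S = span(basis)
print(len(S))      # 4 = 2^k distinct vectors
print(sorted(S))   # the all-zero 5-tuple, both basis vectors, and their sum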
Let u = (u0, u1, ..., un-1) and v = (v0, v1, ..., vn-1) be two n-tuples in Vn. The inner product (or dot product) of u and v is defined as
u.v = u0.v0 + u1.v1 + ... + un-1.vn-1,
where the multiplications ui.vi and the additions are carried out modulo 2. Hence the inner product u.v is a scalar in GF(2). If u.v = 0, u and v are said to be orthogonal to each other. The inner product has the following properties:
i) u.v = v.u
ii) u.(v + w) = u.v + u.w
iii) (a.u).v = a.(u.v)
Let S be a k-dimensional subspace of Vn and let Sd be the set of vectors in Vn such that, for any u in S and any v in Sd, u.v = 0. The set Sd contains at least the all-zero n-tuple 0 = (0, 0, ..., 0), since 0.u = 0 for any u in S. Thus Sd is nonempty. For any element a in GF(2) and any v in Sd,
a.v = 0 if a = 0
    = v if a = 1.
Therefore a.v is also in Sd. Let v and w be any two vectors in Sd. For any vector u in S,
u.(v + w) = u.v + u.w = 0 + 0 = 0.
This says that if v and w are orthogonal to u, the vector sum v + w is also orthogonal to u; hence v + w is also in Sd, and Sd is a subspace of Vn.
This subspace Sd is called the null (or dual) space of S. Conversely, S is also the null space of Sd.
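
A brute-force sketch of the dual space (illustrative only; inner and dual_space are assumed helper names): it lists every v in Vn orthogonal to all vectors of a small subspace S. For the 2-dimensional subspace of V5 used earlier, the dual space turns out to have 2^(5-2) = 8 vectors.

from itertools import product

def inner(u, v):
    # Modulo-2 inner product of two binary n-tuples.
    return sum(ui * vi for ui, vi in zip(u, v)) % 2

def dual_space(S, n):
    # All v in Vn that are orthogonal to every vector of S.
    return {v for v in product((0, 1), repeat=n)
            if all(inner(u, v) == 0 for u in S)}

# S is the 2-dimensional subspace of V5 spanned by (1 0 1 1 0) and (0 1 0 0 1).
S = {(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 1, 1, 1)}
Sd = dual_space(S, 5)
print(len(Sd))                                          # 8 = 2^(5-2)
print(all(inner(u, v) == 0 for u in S for v in Sd))     # True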
1.3 Matrices
A k x n matrix over GF(2) (or over any other field) is a rectangular array with k rows and n columns:
        g00      g01      ...  g0,n-1
        g10      g11      ...  g1,n-1
G =      .
         .
        gk-1,0   gk-1,1   ...  gk-1,n-1

where each entry gij with 0 ≤ i < k and 0 ≤ j < n is an element from the binary field GF(2).
* Each row of G is an n-tuple over GF(2) and each column is a k-tuple over GF(2). Writing the ith row as gi = (gi,0, gi,1, ..., gi,n-1), G can also be represented as

        g0
        g1
G =      .
         .
        gk-1
If the k (k < n) rows of G are linearly independent, then the 2^k linear combinations of these rows form a k-dimensional subspace of the vector space Vn of all n-tuples over GF(2). This subspace is called the row space of G.
We may interchange any two rows of G or add one row to another. These are called elementary row operations. Performing elementary row operations on G, we obtain another matrix G' over GF(2); however, both G and G' have the same row space.
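
A sketch of these elementary row operations (an assumed but standard Gaussian-elimination procedure over GF(2), not taken from the text): row_reduce_gf2 applies only row interchanges and row additions (XOR), so the matrix it returns has the same row space as G.

# Sketch: elementary row operations over GF(2) -- interchange two rows, or add
# (XOR) one row into another.  Repeated application yields a row-equivalent
# matrix with the same row space.

def row_reduce_gf2(G):
    # Bring a binary matrix into reduced row-echelon form.
    G = [row[:] for row in G]                 # work on a copy
    k, n = len(G), len(G[0])
    r = 0
    for c in range(n):
        # Find a row at or below r with a 1 in column c to use as the pivot.
        pivot = next((i for i in range(r, k) if G[i][c] == 1), None)
        if pivot is None:
            continue
        G[r], G[pivot] = G[pivot], G[r]       # row interchange
        for i in range(k):
            if i != r and G[i][c] == 1:       # add (XOR) the pivot row into row i
                G[i] = [(a + b) % 2 for a, b in zip(G[i], G[r])]
        r += 1
    return G

G = [[1, 1, 0, 1, 1, 0],
     [0, 0, 1, 1, 1, 0],
     [0, 1, 0, 0, 1, 1]]
for row in row_reduce_gf2(G):
    print(row)   # [1, 0, 0, 1, 0, 1], then [0, 1, 0, 0, 1, 1], then [0, 0, 1, 1, 1, 0]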
Let S be the row space of G and let Sd be its dual space; the dimension of Sd is n - k. Let h0, h1, ..., hn-k-1 be n - k linearly independent vectors that span Sd. We may form an (n-k) x n matrix H using h0, h1, ..., hn-k-1 as rows:

        h0
        h1
H =      .
         .
        hn-k-1
The row space of H is Sd. Since each row gi of G is a vector in S and each row hj of H is a vector in Sd, the inner product of gi and hj must be zero (i.e. gi.hj = 0). Since the row space S of G is the null space of the row space Sd of H, we call S the null space of H.
Theorem: For any k x n matrix G over GF(2) with k linearly independent rows, there exists an (n-k) x n matrix H over GF(2) with n-k linearly independent rows such that for any row gi of G and any row hj of H, gi.hj = 0. The row space of G is the null space of H, and vice versa.
e.g.
        1 1 0 1 1 0              1 0 1 1 0 0
G  =    0 0 1 1 1 0   ,   H  =   0 1 1 0 1 0
        0 1 0 0 1 1              1 1 0 0 0 1
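
The sketch below (verification only) checks that every row of this G is orthogonal, under the modulo-2 inner product, to every row of this H.

# Sketch: verify gi.hj = 0 for every row gi of G and every row hj of H above.

G = [[1, 1, 0, 1, 1, 0],
     [0, 0, 1, 1, 1, 0],
     [0, 1, 0, 0, 1, 1]]
H = [[1, 0, 1, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

ok = all(sum(gi * hj for gi, hj in zip(g, h)) % 2 == 0 for g in G for h in H)
print(ok)   # True: the row space of G is the null space of H (and vice versa)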
* A k x k matrix is called an identity matrix if it has 1's on the main diagonal and 0's elsewhere. This matrix is usually denoted by Ik.
* A submatrix of a matrix G is a matrix that is obtained by striking out given rows or columns of G.