
Low Density Parity Check Code (LDPC)

Ali Alsadi

University of Arkansas at Little Rock


Contents
Introduction to LDPC Code
Regular & Irregular LDPC Codes
Hard Decision Decoding
Soft Decision Decoding
Density Evolution & Threshold Analysis
Conclusion
References

Information Theory and Coding


University of Arkansas at Little Rock 2
Introduction to LDPC Code
History of LDPC Codes
Low Density Parity Check (LDPC) codes are a class of linear block codes.
Invented by Robert Gallager in his 1960 MIT Ph.D. dissertation.
Ignored for a long time due to:
The high computational complexity required.
The introduction of Reed-Solomon codes.
The concatenated RS and convolutional codes were considered perfectly suitable for error control coding.
Rediscovered by MacKay (1999) and Richardson/Urbanke (1998).

Features of LDPC Codes
Approaching Shannon capacity.
For example, 0.3 dB from the Shannon limit.
A closer design (Chung, 2001) is 0.04 dB away from capacity (block size of 10^7 bits, 1000 iterations).
Good block error correcting performance.
Low error floor.
Linear decoding complexity in time.
Suitable for parallel implementation.
4 dB coding gain over convolutional codes.
Performance similar to turbo codes.
[Figure: bit error probability (10^0 to 10^-4) vs. signal-to-noise ratio (0 to 8 dB), comparing the uncoded channel and a convolutional code; the 4 dB coding gain is marked.]
Features of LDPC Codes
[Timeline figure, 1960-2000: LDPC introduced in 1960; Hamming codes, BCH codes, convolutional codes, and Reed-Solomon codes dominate the following decades; turbo codes appear in the 1990s and spark renewed interest in LDPC; around 2000, practical implementations of LDPC beat turbo and convolutional codes.]
Features of LDPC Codes
Standards and applications
10 Gigabit Ethernet (10GBASE-T)
Digital Video Broadcasting
(DVB-S2, DVB-T2, DVB-C2)
Next-Gen Wired Home
Networking (G.hn)
WiMAX (802.16e)
WiFi (802.11n)
Hard disks
Deep-space satellite missions

Properties of LDPC Codes
LDPC is a class of coding and decoding schemes that
can utilize the long block lengths necessary for low
error probability without requiring excessive equipment
or computation.
What do we mean by low density?
Low-density parity-check codes are codes specified by
a matrix containing mostly 0s and only a small number
of 1s.
Why do we need LDPC?
LDPC codes are not optimum in the somewhat artificial sense of minimizing the probability of decoding error for a given block length. However, a very simple decoding scheme exists for low-density codes, and this compensates for their lack of optimality.
Properties of LDPC Codes
Two differences between an ordinary parity-check matrix and an LDPC matrix:
For large n, the number of 1s in a typical parity-check matrix H is of order n^2, while in an LDPC matrix the number of 1s is of order n.
An LDPC code is specified by a parity-check matrix in which the requirement of linear independence between the rows of H is relaxed.

Regular & Irregular LDPC Codes
Regular LDPC Codes
The code words of a parity-check code are formed by combining a block of binary information digits with a block of check digits: C = [n, k, d].
Each check digit is the modulo-2 sum of a specified set of information digits.
These formation rules for the check digits can be conveniently represented by a parity-check matrix.
This matrix represents a set of linear homogeneous modulo-2 equations called parity-check equations.

Regular LDPC Codes
An (n, wc, wr) low-density code is a code of block
length N with a matrix where each column contains a
small fixed number (wc) of ls and each row contains
a small fixed number (wr) of 1s.
this type of matrix does not have the check digits
appearing in diagonal form.

Regular LDPC Codes
The analysis of a low-density code of long block length is difficult because of the immense number of code words.
It is simpler to analyze a whole ensemble of such codes, because the statistics of an ensemble permit one to average over quantities that are not tractable in individual codes.
Two interesting results can be proven using this ensemble: the first concerns the minimum distance of the member codes, and the second concerns the probability of decoding error.
The code rate is R = k/n with 0 < R < 1, and the relative minimum distance is δ = dmin/n with 0 < δ < 1.
Regular LDPC Codes
The probability of error using maximum likelihood decoding for low-density codes clearly depends upon the particular channel on which the code is being used.
Over a reasonable range of channel transition probabilities, the low-density code has a probability of decoding error that decreases exponentially with block length, and the exponent is the same as that of the optimum code of slightly higher rate.

Regular LDPC Codes
A Sample LDPC Code
Parity check matrix with wc = 3, n = 10, and m = 5, so wr = wc x (n/m) = 3 x (10/5) = 6.
The generator matrix G can be found by Gaussian elimination.
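The structure of this sample code can be checked programmatically. A minimal sketch, using the 5 x 10 parity-check matrix whose check-node adjacency lists appear on the Tanner-graph slide later in this deck:

```python
# Check-node adjacency lists (checks A-E, variable nodes 1-10),
# taken from the Tanner-graph slide of this deck.
adjacency = [
    [1, 2, 3, 4, 6, 7],    # A
    [3, 4, 5, 6, 7, 8],    # B
    [2, 4, 6, 8, 9, 10],   # C
    [1, 3, 5, 8, 9, 10],   # D
    [1, 2, 5, 7, 9, 10],   # E
]
n, m = 10, 5

# Build H (m x n) from the adjacency lists.
H = [[1 if j + 1 in row else 0 for j in range(n)] for row in adjacency]

# Verify regularity: every row has weight wr = 6, every column weight wc = 3.
assert all(sum(row) == 6 for row in H)
assert all(sum(H[i][j] for i in range(m)) == 3 for j in range(n))

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop(0)
        if pivot == 0:
            continue            # dependent row, contributes nothing
        rank += 1
        low = pivot & -pivot    # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

masks = [sum(bit << j for j, bit in enumerate(row)) for row in H]
rank = gf2_rank(masks)
k = n - rank
print(rank, k, k / n)  # 5 5 0.5
```

Here the five rows turn out to be independent over GF(2), so k = n - rank = 5 and the rate is exactly 1/2; in general LDPC rows may be dependent, which is why the designed rate is only a lower bound.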

Irregular LDPC Codes

Encoding LDPC Codes
General encoding of systematic linear block codes.
Issues with LDPC codes:
The size of G is very large.
G is not generally sparse.
Example: a (10000, 5000) LDPC code. P is 5000 x 5000.
If we assume that the density of 1s in P is 0.5, there are 12.5 x 10^6 1s in P, so 12.5 x 10^6 addition operations are required to encode one code word.
An alternative approach to simplified encoding is to design the LDPC code via algebraic or geometric methods.
Such structured codes can be encoded with shift registers.
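The operation count above can be reproduced, and systematic encoding c = [u | uP] sketched, in a few lines. The small P below is made up purely for illustration, not derived from an actual LDPC code:

```python
# Operation count for the (10000, 5000) example: P is 5000 x 5000
# with an assumed density of 1s of 0.5.
k = m = 5000
ones_in_P = int(k * m * 0.5)
print(ones_in_P)  # 12500000 XOR additions per code word

# Systematic encoding c = [u | uP] over GF(2), on a toy 3 x 3 P.
P = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
u = [1, 0, 1]
parity = [sum(u[i] * P[i][j] for i in range(len(u))) % 2
          for j in range(len(P[0]))]
c = u + parity
print(c)  # [1, 0, 1, 1, 1, 0]
```

The cost is proportional to the number of 1s in P, which is exactly why a dense P makes direct encoding expensive while sparse, structured codes can be encoded cheaply.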
Hard Decision Decoding
Also called the Gallager A decoder (hard decision).
The first half of the first iteration (row processing):
The intrinsic information Ui,1 = ri,1, with probability (1 - p) that this estimate is correct.
The extrinsic information Ui,1 = ri,2 + ri,3 + ... + ri,wr modulo 2, with probability [1 + (1 - 2p)^(wr-1)] / 2 that it is correct.
This produces the messages U(1)i,1, U(1)i,2, ..., U(1)i,wc for 1 <= i <= n.

Hard Decision Decoding
The second half of the first iteration (column processing):
First part: making decisions.
C(1)1 = majority{r1, U(1)1,1, U(1)1,2, ..., U(1)1,wc}
Second part: preparing for the next iteration.
V(2)1,1 = the estimate of C1 used in the first row in the second iteration.
If U(1)i,2 = ... = U(1)i,wc = b, then V(2)i,1 = b; else V(2)i,1 = ri.
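The hard-decision idea can be illustrated with a simpler syndrome-based bit-flipping decoder, a relative of Gallager's Algorithm A rather than the algorithm itself: repeatedly flip the bit that participates in the most unsatisfied checks.

```python
def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit flipping: flip the bit in the most failed checks."""
    m, n = len(H), len(H[0])
    c = list(r)
    for _ in range(max_iter):
        # Syndrome: which parity checks are unsatisfied?
        syndrome = [sum(H[i][j] * c[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return c  # all checks satisfied: valid code word
        # Per bit, count the unsatisfied checks it participates in.
        fails = [sum(H[i][j] for i in range(m) if syndrome[i]) for j in range(n)]
        c[max(range(n), key=lambda j: fails[j])] ^= 1  # flip the worst bit
    return c

# The (10, 5) sample matrix from the Tanner-graph slide (check lists A-E).
adjacency = [[1, 2, 3, 4, 6, 7], [3, 4, 5, 6, 7, 8], [2, 4, 6, 8, 9, 10],
             [1, 3, 5, 8, 9, 10], [1, 2, 5, 7, 9, 10]]
H = [[1 if j + 1 in row else 0 for j in range(10)] for row in adjacency]

received = [1] + [0] * 9             # all-zero code word with bit 1 flipped
print(bit_flip_decode(H, received))  # -> all zeros
```

With a single error, the erroneous bit sits in all wc = 3 of its (unsatisfied) checks while every other bit sits in at most two, so one flip repairs the word.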

Soft Decision Decoding
Tanner Graph
Tanner showed that a parity check matrix can be represented effectively by a bipartite graph, now called a Tanner graph.
A bipartite graph is an undirected graph whose nodes may be separated into two classes, where edges only connect two nodes not residing in the same class.
A Tanner graph has two classes of nodes:
variable nodes (bit or symbol nodes)
check nodes (function nodes)
The Tanner graph is drawn according to the following rule: check node j is connected to variable node i whenever Hj,i = 1.
Tanner Graph
This graph is called a Tanner graph.
H =
1 1 1 1 0 1 1 0 0 0
0 0 1 1 1 1 1 1 0 0
0 1 0 1 0 1 0 1 1 1
1 0 1 0 1 0 0 1 1 1
1 1 0 0 1 0 1 0 1 1
Check node A connects to variable nodes 1, 2, 3, 4, 6, 7.
Check node B connects to variable nodes 3, 4, 5, 6, 7, 8.
Check node C connects to variable nodes 2, 4, 6, 8, 9, 10.
Check node D connects to variable nodes 1, 3, 5, 8, 9, 10.
Check node E connects to variable nodes 1, 2, 5, 7, 9, 10.
The designed rate of the code is >= 1/2.
Number of edges (1s) = (number of rows x wr) = (number of columns x wc).
Each variable node has degree wc = 3; each check node has degree wr = 6.
[Figure: bipartite Tanner graph with variable nodes 1-10 on one side and check nodes A-E on the other.]
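The edge-count identity above can be verified directly from the check-node lists. A small sketch:

```python
# Check-node adjacency from the slide: checks A-E -> variable nodes 1-10.
check_to_vars = {
    "A": [1, 2, 3, 4, 6, 7],
    "B": [3, 4, 5, 6, 7, 8],
    "C": [2, 4, 6, 8, 9, 10],
    "D": [1, 3, 5, 8, 9, 10],
    "E": [1, 2, 5, 7, 9, 10],
}

# Invert the lists to get, for each variable node, the checks it touches.
var_to_checks = {v: [c for c, vs in check_to_vars.items() if v in vs]
                 for v in range(1, 11)}

edges = sum(len(vs) for vs in check_to_vars.values())
print(edges)  # 30 = 5 rows x wr(6) = 10 columns x wc(3)
assert edges == 5 * 6 == 10 * 3
assert all(len(cs) == 3 for cs in var_to_checks.values())  # variable degree wc = 3
```

Counting the same set of 1s in H by rows and by columns gives m x wr = n x wc, which is the identity on the slide.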
Tanner Graph
Decoding complexity grows as O(n^2).
Even sparse matrices do not result in good performance if the block length n gets very high.
So iterative decoding algorithms are used.
Those algorithms perform local calculations and pass the local results via messages.
This step is typically repeated several times.
It was observed that iterative decoding algorithms of sparse codes perform very close to the optimal decoder.
Tanner Graph
Summary of the message passing algorithm:
Initialize nodes.
Pass the messages from bit nodes to check nodes.
Pass the messages from check nodes to bit nodes.
Approximate the code word from the probabilistic information residing in the bit nodes.
If cH^T = 0 or the maximum number of iterations is reached, then stop; otherwise continue iterating.
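Those steps can be sketched as an iterative decoder. Below is a minimal min-sum variant of soft-decision message passing (one common instance of the algorithm, not necessarily the exact one these slides have in mind), run on the sample code from the Tanner-graph slide:

```python
def minsum_decode(H, llr, max_iter=10):
    """Min-sum message passing; llr[j] > 0 means bit j is more likely 0."""
    m, n = len(H), len(H[0])
    nbr = [[j for j in range(n) if H[i][j]] for i in range(m)]  # checks' variables
    V = {(i, j): llr[j] for i in range(m) for j in nbr[i]}      # var -> check msgs
    for _ in range(max_iter):
        # Check-to-variable: sign product and minimum magnitude of the others.
        C = {}
        for i in range(m):
            for j in nbr[i]:
                others = [V[(i, jj)] for jj in nbr[i] if jj != j]
                sign = -1 if sum(v < 0 for v in others) % 2 else 1
                C[(i, j)] = sign * min(abs(v) for v in others)
        # Variable totals, hard decision, and syndrome check (cH^T = 0).
        total = [llr[j] + sum(C[(i, j)] for i in range(m) if H[i][j])
                 for j in range(n)]
        hard = [1 if t < 0 else 0 for t in total]
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0 for i in range(m)):
            return hard
        # Variable-to-check for the next iteration: exclude the edge's own input.
        V = {(i, j): total[j] - C[(i, j)] for i in range(m) for j in nbr[i]}
    return hard

adjacency = [[1, 2, 3, 4, 6, 7], [3, 4, 5, 6, 7, 8], [2, 4, 6, 8, 9, 10],
             [1, 3, 5, 8, 9, 10], [1, 2, 5, 7, 9, 10]]
H = [[1 if j + 1 in row else 0 for j in range(10)] for row in adjacency]
llr = [2.0] * 10
llr[0] = -1.0                 # one unreliable, wrongly-signed bit
print(minsum_decode(H, llr))  # -> all-zero code word recovered
```

Note that the decoder stops exactly as the slide prescribes: either the syndrome check cH^T = 0 succeeds or the iteration budget runs out.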
Tanner Graph Terminology
The total number of operations at each node is proportional to the degree of the node and to the number of iterations.
If the number of iterations is fixed and the degrees of the nodes are fixed, the total number of operations is linear in the length of the code.
If we can decode this code simply by a sequence of messages passed from left to right and vice versa, then we can decode the code with linear complexity, and that is the beauty of LDPC.
The complexity of the decoding algorithm is proportional to the number of edges in the Tanner graph and hence is linear in the block length of the code.
Tanner Graph Terminology
LDPC is called a ubiquitous error correcting code because it is popular and widely used.
The graph is called a Tanner graph after Michael Tanner, who rediscovered this code and went further: he added structure and introduced this graphical representation of the decoding algorithm.
The binary symmetric channel (BSC):
Xt = (-1)^Ut, with Ut ∈ {0, 1}
Yt = Xt · Zt, where Zt ∈ {+1, -1}, pZt(+1) = 1 - ε and pZt(-1) = ε
[Figure: BSC transition diagram. Xt = 1 -> Yt = 1 and Xt = -1 -> Yt = -1 with probability 1 - ε; crossovers with probability ε.]
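This channel model can be simulated directly. A quick sketch with an assumed crossover probability ε = 0.1:

```python
import random

random.seed(0)
eps = 0.1    # crossover probability (assumed for the demo)
N = 100_000
u = [random.randint(0, 1) for _ in range(N)]
x = [(-1) ** bit for bit in u]                    # Xt = (-1)^Ut
z = [-1 if random.random() < eps else 1 for _ in range(N)]
y = [xi * zi for xi, zi in zip(x, z)]             # Yt = Xt * Zt

flips = sum(yi != xi for xi, yi in zip(x, y))
print(flips / N)  # empirical crossover rate, close to eps
```

A flip Yt ≠ Xt occurs exactly when Zt = -1, so the empirical flip rate estimates ε.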
Message Passing Terminology
r(0): O -> M, the initial message map from the channel output.
c(l): M^(wr-1) -> M, the message passed from a check node to a variable node in iteration l (computed from the wr - 1 other incoming messages).
r(l): O x M^(wc-1) -> M, the message passed from a variable node to a check node in iteration l (computed from the channel input and the wc - 1 other incoming messages).
[Figure: message flow between the channel input, variable nodes (v), and check nodes (c).]
Message Passing Terminology
After a fixed number of iterations we need to reconstruct the correct symbols from the graph.
We look at the sign of the message sent out from each variable node: if that sign is positive, we declare the corresponding variable to be positive, taking the majority over the multiple edges.
The message along an edge is said to be in error if its sign is not the true sign of the associated code symbol.
Density Evolution & Threshold Analysis
Density Evolution
How well does this algorithm perform?
Estimate the number of incorrect messages during each iteration.
Assume the all-one code word was transmitted.
P1(0) = the probability that a message passed by a variable node during the 0th iteration is +1.
P-1(0) = the probability that a message passed by a variable node during the 0th iteration is -1.
q1(l) = the probability that in the lth iteration a check node message is +1.
q-1(l) = the probability that in the lth iteration a check node message is -1.
P1(0) = 1 - ε
P-1(0) = ε
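These definitions lead to the standard density-evolution recursion for Gallager's Algorithm A on a (wc, wr)-regular code. The sketch below iterates it for the (3, 6) code and bisects for the threshold crossover probability; the update equation is the textbook Gallager-A recursion, stated here as an assumption rather than taken from a slide:

```python
def gallager_a_update(p, eps, wc=3, wr=6):
    """One density-evolution step for Gallager A on the BSC(eps)."""
    good = (1 + (1 - 2 * p) ** (wr - 1)) / 2  # prob. a check message is correct
    bad = (1 - (1 - 2 * p) ** (wr - 1)) / 2   # prob. a check message is wrong
    # A variable flips its received value only if all wc-1 checks agree against it.
    return eps * (1 - good ** (wc - 1)) + (1 - eps) * bad ** (wc - 1)

def converges(eps, iters=500):
    """Does the message error probability go to 0 for this crossover prob.?"""
    p = eps
    for _ in range(iters):
        p = gallager_a_update(p, eps)
    return p < 1e-9

# Bisect for the threshold of the (3, 6)-regular ensemble.
lo, hi = 0.0, 0.5
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if converges(mid) else (lo, mid)
print(round(lo, 4))  # threshold, roughly 0.04 for the (3, 6) code
```

Below the threshold the error probability decays to zero with the iterations, which is exactly the behavior the Conclusion attributes to Gallager's proof for sufficiently small crossover probabilities.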
Density Evolution
[Figure: the tree-like local neighborhood used in the analysis: a variable node v of degree wc whose check nodes each connect to wr - 1 further variable nodes.]
Threshold Analysis

Conclusion
The regular parity check matrix achieves a better code rate than the ordinary one and is easier to decode.
The regular code is a special case of the irregular code; however, irregular codes achieve better code rates than regular ones.
Gallager proved that the probability of decoding error approaches 0 with an increasing number of iterations for sufficiently small crossover probabilities.

References
Gallager, R. G. (1962). Low-density parity-check codes. IRE Transactions on Information Theory, 8(1), 21-28.
Liva, G., Song, S., & Ryan, W. (n.d.). Design of LDPC codes: A survey and new results.
MacKay, D. J. C. (2015, November 26). Gallager codes. The Inference Group. http://www.inference.phy.cam.ac.uk/mackay/CodesGallager.html
Tanner, R. M. (1981, September). A recursive approach to low complexity codes. IEEE Transactions on Information Theory, 27(5), 533-547.
Ryan, W. E. (2004). An introduction to LDPC codes. In B. Vasic & E. Kurtas (Eds.), CRC Handbook for Coding and Signal Processing for Recording Systems. CRC Press.