

LDPC Codes - A Brief Tutorial

Bernhard M.J. Leiner (Stud.ID.: 53418L)
bleiner@gmail.com
April 8, 2005

1 Introduction

Low-density parity-check (LDPC) codes are a class of linear block codes. The name comes from the characteristic of their parity-check matrix, which contains only a few 1s in comparison to the amount of 0s. Their main advantage is that they provide a performance which is very close to the capacity for a lot of different channels, together with linear-time-complexity algorithms for decoding. Furthermore, they are suited for implementations that make heavy use of parallelism. They were first introduced by Gallager in his PhD thesis in 1960. But due to the computational effort in implementing encoder and decoder for such codes, and the introduction of Reed-Solomon codes, they were mostly ignored until about ten years ago.


1.1 Representations for LDPC codes

Basically there are two different possibilities to represent LDPC codes. Like all linear block codes, they can be described via matrices. The second possibility is a graphical representation.

Matrix Representation

Let's look at an example of a low-density parity-check matrix first. The matrix defined in equation (1) is a parity-check matrix of dimension m × n (here 4 × 8) for an (8, 4) code. We can now define two numbers describing this matrix: wr for the number of 1s in each row and wc for the number of 1s in each column. For a matrix to be called low-density, the two conditions wc ≪ m and wr ≪ n must be satisfied.

$$
H = \begin{pmatrix}
0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 & 0 & 1 & 0
\end{pmatrix} \qquad (1)
$$

[Figure 1: Tanner graph corresponding to the parity-check matrix in equation (1), with the four check nodes f0-f3 in the upper row and the eight variable nodes c0-c7 in the lower row. The marked path c2 -> f1 -> c5 -> f2 -> c2 is an example of a short cycle. Such cycles should usually be avoided since they are bad for decoding performance.]

In order to satisfy these conditions, the parity-check matrix should usually be very large, so the example matrix can't really be called low-density.

Graphical Representation

Tanner introduced an effective graphical representation for LDPC codes in 1981. Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm, as explained later on in this tutorial. Tanner graphs are bipartite graphs: the nodes of the graph are separated into two distinct sets, and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes). Figure 1 is an example of such a Tanner graph and represents the same code as the matrix in equation (1). The creation of such a graph is rather straightforward: it consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword). Check node fi is connected to variable node cj if the element hij of H is a 1.
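To make this mapping concrete, here is a minimal Python sketch (my own illustration, not code from the tutorial) that derives the neighbour lists of the Tanner graph from the matrix in equation (1):

```python
import numpy as np

# Parity-check matrix from equation (1): m = 4 check nodes, n = 8 variable nodes.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

# Check node f_i is connected to variable node c_j iff h_ij = 1.
tanner = {f"f{i}": [f"c{j}" for j in range(H.shape[1]) if H[i, j]]
          for i in range(H.shape[0])}
for check, neighbours in tanner.items():
    print(check, "--", neighbours)   # e.g. f0 -- ['c1', 'c3', 'c4', 'c7']
```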

1.2 Regular and irregular LDPC codes



An LDPC code is called regular if wc is constant for every column and wr = wc · (n/m) is also constant for every row. The example matrix from equation (1) is regular with wc = 2 and wr = 4. It's also possible to see the regularity of this code in the graphical representation: there is the same number of incoming edges for every v-node, and likewise for all the c-nodes. If H is low-density but the number of 1s in each row or column isn't constant, the code is called an irregular LDPC code.
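The regularity check is easy to mechanize. Continuing the sketch above, where H is the matrix from equation (1):

```python
wc = H.sum(axis=0)   # number of 1s per column
wr = H.sum(axis=1)   # number of 1s per row
# Regular iff all column weights agree and all row weights agree.
print(wc, wr)                                   # all 2s and all 4s
print(len(set(wc)) == 1 and len(set(wr)) == 1)  # True: the code is regular
```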


1.3 Constructing LDPC codes



Several different algorithms exist to construct suitable LDPC codes. Gallager himself introduced one. Furthermore, MacKay proposed one to semi-randomly generate sparse parity-check matrices. This is quite interesting, since it indicates that constructing well-performing LDPC codes is not a hard problem. In fact, completely randomly chosen codes are good with high probability. The problem that arises is that the encoding complexity of such codes is usually rather high.
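As a rough illustration of how simple such constructions can be, the following sketch builds a (wc, wr)-regular matrix Gallager-style, by stacking randomly column-permuted copies of a base block. It is my own simplified rendering of the idea; it makes no attempt to avoid short cycles, which a practical design would:

```python
import numpy as np

def random_regular_H(n, wc, wr, seed=0):
    """Stack wc randomly column-permuted copies of a base block in which
    every column has weight 1 and every row has weight wr."""
    rng = np.random.default_rng(seed)
    rows = n // wr                      # rows per block; assumes wr divides n
    base = np.zeros((rows, n), dtype=int)
    for i in range(rows):
        base[i, i * wr:(i + 1) * wr] = 1
    return np.vstack([base[:, rng.permutation(n)] for _ in range(wc)])

H_rand = random_regular_H(n=16, wc=2, wr=4)
print(H_rand.sum(axis=0))               # every column weight is wc = 2
print(H_rand.sum(axis=1))               # every row weight is wr = 4
```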

2 Performance & Complexity

Before describing decoding algorithms in section 3, I would like to explain why all this effort is needed. The ability of LDPC codes to perform near the Shannon limit¹ of a channel exists only for large block lengths. For example, there have been simulations that perform within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 with a block length of 10^7. An interesting fact is that those high-performance codes are irregular.

The large block length also results in large parity-check and generator matrices. The complexity of multiplying a codeword with a matrix depends on the number of 1s in the matrix. If we put the sparse matrix H in the form [P^T I] via Gaussian elimination, the generator matrix G can be calculated as G = [I P]. The sub-matrix P is generally not sparse, so the encoding complexity will be quite high.
¹ Shannon proved that reliable communication over a noisy channel is only possible with code rates below a certain limit, the channel capacity.
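The relation between H = [P^T I] and G = [I P] can be verified directly. The sketch below (matrices chosen only for illustration) builds both from a random P and checks that every codeword satisfies all parity checks:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 4, 4                                  # message bits, parity bits; n = k + m
P = rng.integers(0, 2, size=(k, m))          # generally dense, even if H is sparse

H = np.hstack([P.T, np.eye(m, dtype=int)])   # H = [P^T I]
G = np.hstack([np.eye(k, dtype=int), P])     # G = [I P]

assert not (G @ H.T % 2).any()               # G H^T = 0 over GF(2)

u = rng.integers(0, 2, size=k)               # a message
c = u @ G % 2                                # systematic codeword: first k bits are u
print(u, c)
```

Since P is generally dense, encoding via u · G costs on the order of k · m binary operations, which is the complexity problem discussed next.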

Since the complexity grows in O(n²), even sparse matrices don't result in good performance if the block length gets very high. So iterative decoding (and encoding) algorithms are used. Those algorithms perform local calculations and pass the local results on via messages. This step is typically repeated several times. The term "local calculations" already indicates that a divide-and-conquer strategy, which separates a complex problem into manageable sub-problems, is realized. A sparse parity-check matrix helps these algorithms in several ways. First, it keeps the local calculations simple, and it also reduces the complexity of combining the sub-problems by reducing the number of messages needed to exchange all the information. Furthermore, it was observed that iterative decoding algorithms of sparse codes perform very close to the optimal maximum-likelihood decoder.


3 Decoding LDPC codes

The algorithm used to decode LDPC codes was discovered independently several times and, as a matter of fact, comes under different names. The most common ones are the belief propagation algorithm (BPA), the message passing algorithm (MPA) and the sum-product algorithm (SPA). In order to explain this algorithm, a very simple variant which works with hard decisions will be introduced first. Later on, the algorithm will be extended to work with soft decisions, which generally leads to better decoding results. Only binary symmetric channels will be considered.


3.1 Hard-decision decoding

The algorithm will be explained on the basis of the example code already introduced in equation (1) and figure 1. An error-free received codeword would be e.g. c = [1 0 0 1 0 1 0 1]. Let's suppose that we have a BSC and received the codeword with one error: bit c1 flipped to 1.

1. In the first step, all v-nodes ci send a message to their (always 2 in our example) c-nodes fj containing the bit they believe to be the correct one for them. At this stage the only information a v-node ci has is the corresponding received i-th bit of c, yi. That means, for example, that c0 sends a message containing 1 to f1 and f3, node c1 sends messages containing y1 (1) to f0 and f1, and so on.

c-node | received                 | sent
f0     | c1→1, c3→1, c4→0, c7→1   | 0→c1, 0→c3, 1→c4, 0→c7
f1     | c0→1, c1→1, c2→0, c5→1   | 0→c0, 0→c1, 1→c2, 0→c5
f2     | c2→0, c5→1, c6→0, c7→1   | 0→c2, 1→c5, 0→c6, 1→c7
f3     | c0→1, c3→1, c4→0, c6→0   | 1→c0, 1→c3, 0→c4, 0→c6

Table 1: Overview of the messages received and sent by the c-nodes in step 2 of the message-passing algorithm.
2. In the second step, every check node fj calculates a response to every connected variable node. The response message contains the bit that fj believes to be the correct one for this v-node ci, assuming that the other v-nodes connected to fj are correct. In other words: if you look at the example, every c-node fj is connected to 4 v-nodes. So a c-node fj looks at the messages received from three v-nodes and calculates the bit that the fourth v-node should have in order to fulfill the parity-check equation. Table 1 gives an overview of this step. Importantly, this might also be the point at which the decoding algorithm terminates. This will be the case if all check equations are fulfilled. We will later see that the whole algorithm contains a loop, so another possibility to stop would be a threshold on the number of iterations.

3. Next phase: the v-nodes receive the messages from the check nodes and use this additional information to decide if their originally received bit is OK. A simple way to do this is a majority vote. Coming back to our example, that means that each v-node has three sources of information concerning its bit: the original bit received and two suggestions from the check nodes. Table 2 illustrates this step. Now the v-nodes can send another message with their (hard) decision for the correct value to the check nodes.

4. Go to step 2. In our example, the second execution of step 2 would terminate the decoding process, since c1 has voted for 0 in the last step. This corrects the transmission error and all check equations are now satisfied.

v-node | yi received | messages from check nodes | decision
c0     | 1           | f1→0, f3→1                | 1
c1     | 1           | f0→0, f1→0                | 0
c2     | 0           | f1→1, f2→0                | 0
c3     | 1           | f0→0, f3→1                | 1
c4     | 0           | f0→1, f3→0                | 0
c5     | 1           | f1→0, f2→1                | 1
c6     | 0           | f2→0, f3→0                | 0
c7     | 1           | f0→0, f2→1                | 1

Table 2: Step 3 of the described decoding algorithm. The v-nodes use the answer messages from the c-nodes to perform a majority vote on the bit value.
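The whole loop fits in a few lines. The following is a minimal Python sketch of steps 1-4 (my own implementation; function and variable names are mine), applied to the example above:

```python
import numpy as np

def hard_decision_decode(H, y, max_iter=20):
    m, n = H.shape
    decision = y.copy()
    for _ in range(max_iter):
        if not (H @ decision % 2).any():          # all checks satisfied: done
            break
        # Step 2: check f_j answers each neighbour c_i with the XOR of the
        # other neighbours, i.e. the bit that would satisfy the check.
        r = np.zeros((m, n), dtype=int)
        for j in range(m):
            idx = np.nonzero(H[j])[0]
            s = decision[idx].sum() % 2
            for i in idx:
                r[j, i] = (s - decision[i]) % 2
        # Step 3: majority vote over the received bit and the check messages.
        for i in range(n):
            votes = [y[i]] + [r[j, i] for j in np.nonzero(H[:, i])[0]]
            decision[i] = int(sum(votes) > len(votes) / 2)
    return decision

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])            # codeword with c1 flipped
print(hard_decision_decode(H, y))                 # -> [1 0 0 1 0 1 0 1]
```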

3.2 Soft-decision decoding

The above description of hard-decision decoding was mainly for educational purposes, to give an overview of the idea. Soft-decision decoding of LDPC codes, which is based on the concept of belief propagation, yields better decoding performance and is therefore the preferred method. The underlying idea is exactly the same as in hard-decision decoding. Before presenting the algorithm, let's introduce some notation:

- Pi = Pr(ci = 1 | yi)
- qij is a message sent by the variable node ci to the check node fj. Every message always contains the pair qij(0) and qij(1), which stands for the amount of belief that yi is a 0 or a 1.
- rji is a message sent by the check node fj to the variable node ci. Again there is an rji(0) and an rji(1) that indicate the (current) amount of belief that yi is a 0 or a 1.

The step numbers in the following description correspond to the hard-decision case.

1. All variable nodes send their qij messages. Since no other information is available at this step, qij(1) = Pi and qij(0) = 1 − Pi.


[Figure 2: a) illustrates the calculation of rji(b); b) illustrates the calculation of qij(b).]

2. The check nodes calculate their response messages rji:²

$$r_{ji}(0) = \frac{1}{2} + \frac{1}{2} \prod_{i' \in V_j \setminus i} \left(1 - 2 q_{i'j}(1)\right) \qquad (3)$$

and

$$r_{ji}(1) = 1 - r_{ji}(0) \qquad (4)$$

So they calculate the probability that there is an even number of 1s among the variable nodes except ci (this is exactly what Vj\i means). This probability is equal to the probability rji(0) that ci is a 0. This step and the information used to calculate the responses are illustrated in figure 2.

² Equation (3) uses the following result from Gallager: for a sequence of M independent binary digits ai with a probability pi for ai = 1, the probability that the whole sequence contains an even number of 1s is

$$\frac{1}{2} + \frac{1}{2} \prod_{i=1}^{M} (1 - 2 p_i) \qquad (2)$$

3. The variable nodes update their response messages to the check nodes. This is done according to the following equations:

$$q_{ij}(0) = K_{ij} (1 - P_i) \prod_{j' \in C_i \setminus j} r_{j'i}(0) \qquad (5)$$

$$q_{ij}(1) = K_{ij} P_i \prod_{j' \in C_i \setminus j} r_{j'i}(1) \qquad (6)$$

whereby the constants Kij are chosen in a way that ensures qij(0) + qij(1) = 1. Ci\j means all check nodes except fj. Again, figure 2 illustrates the calculation in this step.

At this point the v-nodes also update their current estimate ĉi of their variable ci. This is done by calculating the probabilities for 0 and 1 and voting for the bigger one. The equations used,

$$Q_i(0) = K_i (1 - P_i) \prod_{j \in C_i} r_{ji}(0) \qquad (7)$$

and

$$Q_i(1) = K_i P_i \prod_{j \in C_i} r_{ji}(1), \qquad (8)$$

are quite similar to the ones used to compute qij(b), but now the information from every c-node is used:

$$\hat{c}_i = \begin{cases} 1 & \text{if } Q_i(1) > Q_i(0) \\ 0 & \text{else} \end{cases} \qquad (9)$$

If the current estimated codeword now fulfills the parity-check equations, the algorithm terminates. Otherwise, termination is ensured through a maximum number of iterations.

4. Go to step 2.

The explained soft-decision decoding algorithm is a very simple variant, suited for BSC channels, and could be modified for performance improvements. Besides performance issues, there are numerical stability problems due to the many multiplications of probabilities: the results will come very close to zero for large block lengths. To prevent this, it is possible to switch to the log-domain and do additions instead of multiplications. The result is a more stable algorithm that even has performance advantages, since additions are less costly.
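For completeness, here is a compact sketch of the algorithm of equations (3)-(9) in the plain probability domain (my own illustrative implementation, not code from the tutorial; a production decoder would work in the log-domain as just described), run on the same single-bit-error example as in section 3.1:

```python
import numpy as np

def sum_product_decode(H, P1, max_iter=50):
    """P1[i] = Pr(c_i = 1 | y_i); for a BSC with crossover probability eps,
    this is eps for a received 0 and 1 - eps for a received 1."""
    m, n = H.shape
    q1 = {(j, i): P1[i] for j in range(m) for i in range(n) if H[j, i]}  # step 1
    for _ in range(max_iter):
        # Step 2, eqs. (3)/(4): r_ji(0) from the product over V_j \ i.
        r0 = {}
        for (j, i) in q1:
            others = [1 - 2 * q1[j, ii] for ii in range(n) if H[j, ii] and ii != i]
            r0[j, i] = 0.5 + 0.5 * np.prod(others)
        # Step 3, eqs. (5)-(9): update q_ij(b) and the current estimate.
        c_hat = np.zeros(n, dtype=int)
        for i in range(n):
            checks = np.nonzero(H[:, i])[0]
            Q0 = (1 - P1[i]) * np.prod([r0[j, i] for j in checks])
            Q1 = P1[i] * np.prod([1 - r0[j, i] for j in checks])
            c_hat[i] = int(Q1 > Q0)                              # eq. (9)
            for j in checks:
                q0 = (1 - P1[i]) * np.prod([r0[jj, i] for jj in checks if jj != j])
                v1 = P1[i] * np.prod([1 - r0[jj, i] for jj in checks if jj != j])
                q1[j, i] = v1 / (q0 + v1)                        # K_ij normalisation
        if not (H @ c_hat % 2).any():                            # all checks satisfied
            break
    return c_hat

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])       # c1 flipped, as in section 3.1
eps = 0.1                                    # assumed BSC crossover probability
P1 = np.where(y == 1, 1 - eps, eps)
print(sum_product_decode(H, P1))             # -> [1 0 0 1 0 1 0 1]
```

In the log-domain variant the products in equations (5)-(8) become sums of log-likelihood ratios and equation (3) turns into a tanh-based update, but the structure of the loop stays the same.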

4 Encoding

The sharp-eyed reader will have noticed that the above decoding algorithm in fact only performs error correction. This would be enough for a traditional systematic block code, since the codeword would consist

of the message bits and some parity-check bits. So far it was nowhere mentioned that it's possible to directly see the original message bits in an LDPC-encoded message. Luckily it is. Encoding LDPC codes is roughly done as follows: choose certain variable nodes to place the message bits on, and in a second step calculate the missing values of the other nodes. An obvious solution would be to solve the parity-check equations. This would involve operations with the whole parity-check matrix, and the complexity would again be quadratic in the block length. In practice, however, more clever methods are used to ensure that encoding can be done in much shorter time. Those methods can again use the sparseness of the parity-check matrix or dictate a certain structure³ for the Tanner graph.

³ It was already mentioned in section 1.3 that randomly generated LDPC codes result in high encoding complexity.
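As one concrete (hypothetical) instance of such a structure, not worked out in this tutorial: if the columns of H are arranged so that its last m columns form a lower-triangular matrix with ones on the diagonal, each check equation can be solved for exactly one parity bit by back-substitution, at a cost proportional to the number of 1s in H. A minimal sketch, assuming that structure:

```python
import numpy as np

def encode_lower_triangular(H, u):
    """Place the k = n - m message bits u on the first variable nodes and
    solve each check in turn for one parity bit (back-substitution).
    Assumes H[:, k:] is lower triangular with ones on the diagonal."""
    m, n = H.shape
    k = n - m
    c = np.concatenate([u, np.zeros(m, dtype=int)])
    for j in range(m):
        c[k + j] = H[j, :k + j] @ c[:k + j] % 2   # XOR of already-known bits
    return c

# Toy matrix with the required structure (chosen for illustration only).
H_lt = np.array([[1, 1, 0, 1, 0, 0],
                 [0, 1, 1, 1, 1, 0],
                 [1, 0, 1, 0, 1, 1]])
c = encode_lower_triangular(H_lt, np.array([1, 0, 1]))
print(c, H_lt @ c % 2)                            # [1 0 1 1 0 0], all checks 0
```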

5 Summary

Low-density parity-check codes have been studied a lot in recent years, and huge progress has been made in the understanding and ability to design iterative coding systems. The iterative decoding approach is already used in turbo codes, but the structure of LDPC codes gives even better results. In many cases they allow a higher code rate and also a lower error floor. Furthermore, they make it possible to implement parallelizable decoders. The main disadvantages are that encoders are somewhat more complex and that the code length has to be rather long to yield good results. For more information (there are a lot of things which haven't been mentioned in this tutorial), I can recommend the following web sites as a start:

http://www.csee.wvu.edu/wcrl/ldpc.htm
A collection of links to sites about LDPC codes and a list of papers about the topic.

http://www.inference.phy.cam.ac.uk/mackay/CodesFiles.html
The homepage of MacKay. Very interesting are the pictorial demonstrations of iterative decoding.

