
Minimizing the Gap-to-Capacity of a Rate 1/3 Code via Convolutional Encoding/Decoding

Romeil Sandhu
rsandhu@gatech.edu
Abstract
In this project, we seek to minimize the gap-to-capacity (given by Shannon's theoretical limit) of a rate 1/3 code. This is done via a convolutional encoder/decoder for varying numbers of memory elements, as well as for both soft and hard decoding schemes. We show that the gap-to-capacity can be reduced relative to suboptimal alternatives such as the uncoded codeword or a (3,1) repetition code. Although better schemes are available, such as LDPC and turbo codes, we have chosen the convolutional code for its simplicity and generality. That is, a generic framework can be readily developed in which multiple convolutional schemes can be implemented with minimal changes to the overall structure (see Appendix A for MATLAB code). In this paper, we present the basic concepts associated with convolutional codes, the specific encoding and decoding schemes used in this project, and results comparing the gap-to-capacity of the implemented algorithm with respect to Shannon's optimal code.
1. Introduction
Given the code rate constraint of R = 1/3 for a binary-input additive white Gaussian noise (AWGN) channel, this paper presents several convolutional encoders and decoders of varying memory element sizes in an effort to minimize the gap to capacity with respect to the Shannon limit of any R = 1/3 system. This can be seen in Figure 1, where we have also plotted the bit-error rate of an uncoded transmission. We begin by recalling the model for a binary-input AWGN channel, given below as
$$ r_l = a_l + n_l \tag{1} $$
where $a_l \in \{-1, +1\}$ is the $l$-th transmitted symbol and $r_l$ is the $l$-th measured (received) symbol, corrupted by i.i.d. zero-mean Gaussian noise $n_l$ of variance $N_0/2$. Although it is a simple approximation, the AWGN channel of Equation (1) has been a powerful instrument for modeling real-life disturbances caused by ambient heat in the transmitter/receiver hardware and in the propagation medium. For example, many satellite channels and line-of-sight terrestrial channels can be accurately modeled as AWGN channels.

Figure 1. A simulated BER (log scale) versus $E_b/N_0$ (in dB) curve highlighting both the gap to capacity with respect to the Shannon-limit curve of a rate 1/3 system and the coding gain over an uncoded transmission.
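To fix notation in code, the following is a minimal MATLAB sketch of the channel of Equation (1). The variable names, the BPSK mapping, and the $E_b/N_0$ bookkeeping are illustrative assumptions on our part, not taken from the Appendix A implementation.

    EbN0_dB = 4;                          % an example operating point in dB
    R = 1/3;                              % code rate of this project
    N0 = 1/(R*10^(EbN0_dB/10));           % with unit symbol energy, Eb = 1/R
    c = randi([0 1], 1, 300);             % coded bits entering the channel
    a = 1 - 2*c;                          % assumed BPSK map: 0 -> +1, 1 -> -1
    r = a + sqrt(N0/2)*randn(size(a));    % i.i.d. zero-mean noise, variance N0/2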
In this work, we propose to use a rate 1/n, more specifically a rate 1/3, convolutional encoder/decoder to mitigate the disturbance resulting from such a channel. Before doing so, however, let us revisit some of the classical coding techniques presented in this class and motivate our reasoning for choosing a convolutional code.
In classical coding theory, Hamming codes, Reed-Muller codes, and Reed-Solomon codes have been popular choices for implementing efficient and reliable coding schemes. However, for the present work, these codes cannot be applied directly to the problem at hand in a stand-alone fashion. For example, there exists no Hamming code that produces a binary rate 1/3 code. Similarly, since our code length is unconstrained with binary input/output, Reed-Muller and Reed-Solomon codes are not well suited; in other words, we seek a coding algorithm that performs well in the power-limited regime. However, if one were to concatenate coding algorithms (e.g., a (7,4) Hamming code with a (12,7) Reed-Solomon code), one could add flexibility given the constraints of the problem. To this end, we propose to use only a rate 1/3 convolutional code (although one could pair a rate 1/2 convolutional code with a (3,2) Reed-Solomon code). Lastly, we note that more recent constructions such as LDPC codes, turbo codes, and repeat-accumulate codes would offer better performance than the algorithm presented here, but given the limited time for the project, the rate 1/3 convolutional code was chosen.

Figure 2. The general form of a rate 1/n convolutional encoder.
The remainder of this paper is organized as follows: in the next section, we review convolutional codes, detailing the rate 1/3 convolutional encoder/decoder for a given constraint length. Numerical implementation details are given in Section 3. In Section 4, we present the bit-error rates (BER) of our convolutional code, with respect to Shannon's limit and the uncoded transmission, for varying memory element sizes. Finally, we conclude in Section 5.
2. Optimal Rate 1/3 Convolutional Codes
Binary linear convolutional codes, like binary linear block codes, are useful in the power-limited regime. The general form of a rate 1/n convolutional encoder is given in Figure 2. Here, we see that the encoder is an LTI filter with banks $g_i(D)$ that are both rational and causal. Moreover, the message $m = [m_1, m_2, \ldots, m_L]$ of length $L$ is passed in bit by bit, producing $n$ code bits $c_k^1, \ldots, c_k^n$ at each step $k$. From this, we can then form the encoded message as $c = [c_1^1, c_1^2, \ldots, c_1^n, c_2^1, \ldots, c_2^n, \ldots, c_k^1, \ldots, c_k^n]$. Moreover, if one forms the delay transform of an impulse response $g_i(D)$, the $i$-th code sequence can be formed as $c^i(D) = m(D)\,g_i(D)$. Together, the encodings of all possible messages form what is known as a convolutional code. Although the formulation of a convolutional code assumes a theoretically infinite message (as well as the space of acceptable codes), we define block codes of length $L$.
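For instance (an illustrative example, not one of the project's generators), take $g(D) = 1 + D + D^3$ and the message $m = [1, 0, 1]$, so that $m(D) = 1 + D^2$; then, working over GF(2),

$$ c(D) = m(D)\,g(D) = (1 + D^2)(1 + D + D^3) = 1 + D + D^2 + D^5, $$

since the two $D^3$ terms cancel modulo 2, i.e., the message 101 encodes to the sequence 111001.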
With this, an encoder can be viewed as the generator matrix $G(D)$ of a linear block code, and hence multiple encoding schemes can be designed to achieve a rate 1/3 system. In addition, each encoding scheme can contain a varying number $\nu$ of memory elements, adding versatility to the design of a particular convolutional code. To this end, we seek to implement optimal convolutional codes of different sizes. This is discussed next.

Figure 3. The optimal rate 1/3 convolutional encoder for K = 4.
2.1. Encoding Scheme
Using Table 11-4 of [2], we can choose an optimal encoding scheme dependent on the constraint length $K = 1 + \nu$ for a rate 1/3 convolutional code. For convenience, we recall the three optimal filters for K = 4, 6, 8 below.
K   g0 (octal/binary)   g1 (octal/binary)   g2 (octal/binary)   d_free
4   54  / 101100        64  / 110100        74  / 111100        10
6   47  / 100111        53  / 101011        75  / 111101        13
8   452 / 100101010     662 / 110110010     756 / 111101110     16

Table 1. Optimal rate 1/3 convolutional codes with maximum free distance (generators given in octal with their binary expansions).
Table 1 above shows the optimal filter design for each code generator, where each response is given in both octal and binary representation. With this, we can realize the actual encoder via a circuit diagram; this is given in Figure 3 for K = 4. Note that the MATLAB implementation presented in Appendix A uses the function conv.m to perform the encoding. After the message m is encoded into a codeword c, it is passed through the channel model given by Equation (1). We then need to be able to recover, or decode, the codeword corrupted by noise.
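As a sketch of how conv.m realizes the encoder (the variable names are illustrative; the documented code is in Appendix A), one convolves the message with each binary tap vector of Table 1, keeping the leading K = 4 bits of each octal entry, and reduces modulo 2:

    g = [1 0 1 1; 1 1 0 1; 1 1 1 1];       % K = 4 taps of g0, g1, g2 (Table 1)
    m = [1 0 1 1 0];                        % an example message
    cw = zeros(3, numel(m) + 3);            % one row of code bits per generator
    for i = 1:3
        cw(i,:) = mod(conv(m, g(i,:)), 2);  % c^i(D) = m(D) g_i(D) over GF(2)
    end
    c = cw(:).';                            % interleave: [c_1^1 c_1^2 c_1^3 c_2^1 ...]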
2.2. Decoding Scheme
We assume now that the codeword has been passed through the channel, and we must decode the (possibly) corrupted message. One popular technique is the Viterbi algorithm, in which the possible solutions are mapped onto what is known as a trellis map. For the sake of brevity, we refer the reader to [2] for how to construct a trellis map. However, we note that through this map the decoder is able to choose the maximum-likelihood (ML) estimate by labeling the nodes with values denoting the partial branch metrics and then seeking the path with minimum total cost. That is, the decoding scheme can be re-expressed as
$$ \min_{c \in C} d(r, c) = \min_{c \in C} \sum_{l=0}^{L+\nu} d(r_l, c_l) \tag{2} $$
It is important to note that we have yet to define the metric $d(\cdot,\cdot)$ in Equation (2). Depending on the chosen metric, one can produce a sub-optimal decoder by taking the metric to be the Hamming distance. In contrast, if one chooses an $L_p$ norm, specifically the $L_2$ norm, one can achieve an optimal soft decoder. Lastly, we also refer the reader to other metrics that have arisen in prediction theory and have found uses in fields such as computer vision [1].
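To make the trellis search concrete, the following is a minimal MATLAB sketch of a Viterbi decoder for the K = 4 encoder of Figure 3, built from the taps of Table 1. The function name, the variable names, and the pluggable branch metric bm are illustrative assumptions, not the Appendix A implementation; Sections 2.2.1 and 2.2.2 supply the two metrics used in this project.

    % A sketch only: decodes one block of 3L received values with a
    % caller-supplied partial branch metric bm(rl, cl).
    function mhat = viterbi_sketch(r, bm)
        G  = [1 0 1 1; 1 1 0 1; 1 1 1 1];        % K = 4 taps of g0, g1, g2
        nu = 3; S = 2^nu;                        % nu memory elements, 8 states
        L  = numel(r)/3;                         % one trellis stage per code triple
        % Enumerate the trellis: two branches leave every state.
        ns = zeros(S,2); out = zeros(3,S,2);
        for s = 0:S-1
            for b = 0:1
                reg = [b bitget(s, nu:-1:1)];    % shift register: input, then state bits
                out(:,s+1,b+1) = mod(G*reg.', 2);% the three coded bits on this branch
                ns(s+1,b+1) = b*2^(nu-1) + floor(s/2);
            end
        end
        % Forward pass: accumulate partial branch metrics, keep survivors.
        pm = [0; inf(S-1,1)];                    % encoding starts in the all-zero state
        prev = zeros(S,L); inbit = zeros(S,L);
        for l = 1:L
            rl = r(3*l-2:3*l); npm = inf(S,1);
            for s = 0:S-1
                if isinf(pm(s+1)), continue; end
                for b = 0:1
                    t = ns(s+1,b+1);
                    m = pm(s+1) + bm(rl, out(:,s+1,b+1));
                    if m < npm(t+1)
                        npm(t+1) = m; prev(t+1,l) = s; inbit(t+1,l) = b;
                    end
                end
            end
            pm = npm;
        end
        % Traceback along the minimum total cost path of Equation (2).
        [~, s] = min(pm); s = s - 1; mhat = zeros(1,L);
        for l = L:-1:1
            mhat(l) = inbit(s+1,l); s = prev(s+1,l);
        end
    end

A terminated implementation would additionally flush the encoder with $\nu$ zero bits so that the traceback starts from the all-zero state, matching the upper limit $L + \nu$ in Equation (2).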
2.2.1 Sub-optimal Decoder: Hard Decoding
As previously noted, the chosen partial branch metric $d(\cdot,\cdot)$ is crucial for the decoder. In particular, let us denote $\hat{r}_l = [\mathrm{sgn}(r_l^1), \mathrm{sgn}(r_l^2), \ldots, \mathrm{sgn}(r_l^k)]$, where $\mathrm{sgn}(\cdot)$ outputs the sign of its argument. With this newly formed estimate $\hat{r}_l$, we can define the partial branch metric as the Hamming distance

$$ d_H(\hat{r}_l, c_l) = \bigl|\{\, i \mid \hat{r}_l^i \neq c_l^i,\ i = 1, \ldots, k \,\}\bigr| \tag{3} $$

Given that we first form the estimate $\hat{r}$ by making hard decisions on the received vector $r$, we refer to this procedure as hard decoding.
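Assuming the BPSK mapping 0 to +1 and 1 to -1 used earlier, Equation (3) might be realized as follows (the function name is an illustrative assumption; it plugs into the sketch above as mhat = viterbi_sketch(r, @bm_hard)):

    function d = bm_hard(rseg, cseg)
    % Hard-decision branch metric of Equation (3) (a sketch): slice each
    % received value by its sign, then count disagreements with the branch bits.
        rhat = rseg(:) < 0;            % sgn(.) hard decision: negative -> bit 1
        d = sum(rhat ~= cseg(:));      % Hamming distance to the branch bits
    end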
2.2.2 Optimal Decoder: Soft Decoding
One major drawback of making hard decisions to form the estimate $\hat{r}$, as in Section 2.2.1, is the loss of information from the received vector. Instead, if we work with the received vector $r$ directly, we can form a measure of similarity via an $L_p$ norm. That is, if we define the partial branch metric to be

$$ d(r_l, c_l) = \left( \sum_{i=1}^{k} \bigl| r_l^i - c_l^i \bigr|^p \right)^{1/p} \tag{4} $$

where $p$ is chosen to be $p = 2$ (the Euclidean distance), then one arrives at the optimal soft decoding scheme using the square of the Euclidean distance.
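Under the same assumed BPSK convention, the $p = 2$ case compares the raw received values against the modulated branch bits; a sketch (illustrative names, usable as mhat = viterbi_sketch(r, @bm_soft)):

    function d = bm_soft(rseg, cseg)
    % Soft branch metric of Equation (4) with p = 2 (a sketch): squared
    % Euclidean distance between received values and BPSK branch symbols.
        a = 1 - 2*cseg(:);             % branch bits mapped to +/-1 symbols
        d = sum((rseg(:) - a).^2);     % no received information is discarded
    end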
3. Implementation
We have used MATLAB to implement the convolutional encoder/decoder algorithm presented in this report. More importantly, we note that, because of the exponential increase in complexity with the number of memory elements and the unoptimized MATLAB code, a major drawback is computational speed. However, from previous experience with search-based algorithms, one could invoke a k-d tree to perform fast searches. We also note the generality of the framework and refer the reader to the documented version of the MATLAB code used to implement the convolutional encoder/decoder, which can be found at the end of this report. In particular, the code is written for K = 4; however, one can easily change it to incorporate other encoders (e.g., K = 6 or K = 8). The required changes are denoted by a red box.
4. Experiments
We test the robustness of the rate 1/3 convolutional code for memory element sizes of $\nu$ = 3, 5, 7. Specifically, we measure the coding efficiency of each respective convolutional code over 10,000 trials and assume a message of L = 100 bits. Moreover, this simulation is performed over several SNR levels. Although one would ideally like to reach the theoretical coding gain given by Shannon's limit, we deem the encoder/decoder successful if it achieves roughly 4 dB of coding gain using a hard decoding scheme. This baseline can then be improved by substituting various branch metrics, such as the $L_2$ norm. To this end, we present simulation results of the algorithm for both hard and soft decoding, and refer the reader to Appendix A for how to switch between the two via trivial changes to the MATLAB code.
We begin with the K = 4 convolutional code (see Figure 3). In Figure 4a, we present the BER simulated over a series of trials along with Shannon's theoretical limit and the uncoded BPSK curve. Figure 4b shows a zoomed-in view of the simulated curve at BER = $10^{-4}$. The coding gain and gap to capacity at this BER level are 2.242 dB and 6.951 dB, respectively. Using Table 1, we see that the theoretical gain for a hard decoding scheme is $10\log_{10}(R\,d_{\text{free}}/2) = 2.2185$ dB, which falls near what is measured.
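As a quick sanity check of these asymptotic figures (a sketch of the arithmetic only, using the $d_{\text{free}}$ values of Table 1):

    R = 1/3; dfree = [10 13 16];           % K = 4, 6, 8 from Table 1
    hard = 10*log10(R*dfree/2)             % approx. [2.22 3.36 4.26] dB
    soft = 10*log10(R*dfree)               % approx. [5.23 6.37 7.27] dB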
Similarly, Figures 4(c)-(d) and 4(e)-(f) show the convolutional code results for K = 6 and K = 8. We again find that the measured coding gain of each curve falls near its theoretical coding gain: for K = 6 we expect a gain of 3.36 dB but measure 2.93 dB; likewise, for K = 8, we expect a gain of 4.26 dB but measure 3.59 dB.
Figure 4. Rate 1/3 convolutional encoder simulated results using a hard decoder. (a)-(b) Encoder (hard decoding) for K = 4. (c)-(d) Encoder (hard decoding) for K = 6. (e)-(f) Encoder (hard decoding) for K = 8.

Figure 5. Soft decoding results for the K = 4 and K = 6 rate 1/3 convolutional encoders. (a)-(b) Encoder (soft decoding) for K = 4. (c)-(d) Encoder (soft decoding) for K = 6.

Finally, Figure 5 shows simulated results for the K = 8 convolutional encoder using the soft decoding scheme discussed in this report. Interestingly, we were only able to measure a coding gain of 4.12 dB, which is far from what is theoretically expected, i.e., $10\log_{10}(R\,d_{\text{free}}) = 7.27$ dB. The disparity between the two values may be attributed to the small message length of L = 100 or to the limited number of trials (10,000), since both determine the total number of transmitted message bits. Nevertheless, we achieve a gain that is reasonable for the objective of this project.
5. Conclusion
In this report, we attempted to minimize the gap to capacity with respect to Shannon's theoretical limit for a rate 1/3 system. In particular, given the generality and flexibility provided by convolutional codes, we presented several convolutional encoders with varying memory element sizes. Using both soft and hard decoding, we then presented experimental results that, for the most part, fall within the expected theoretical gains.
References
[1] R. Sandhu, T. Georgiou, and A. Tannenbaum. A new distribution metric for image segmentation. In SPIE Medical Imaging, 2008.
[2] S. B. Wicker. Error Control Systems for Digital Communication and Storage. Prentice Hall, 1995.
6. APPENDIX A: MATLAB CODE
Please see Figures 6 through 10 for the detailed MATLAB code. The red boxes highlight the only regions of code that need to be altered to change the design of the encoder (i.e., the memory element size or a hard/soft decoding scheme). The implementation shown is for K = 4 with hard decoding.
Figure 6. The main MATLAB script file used to simulate the binary AWGN channel. To change between rate 1/3 convolutional encoders with different memory elements or soft/hard decoding schemes, modify the area inside the red box.
Figure 7. The first half of the generalized decoder for convolutional codes. Note that one can perform both soft and hard decoding via a flag input.
Figure 8. The second half of the generalized decoder for convolutional codes. To modify the optimal K = 4 convolutional encoder, modify the circuit logic function.
Figure 9. The associated helper files needed to encode the convolutional code, compute the circuit logic for a specific encoder, and compute the appropriate cost functionals for node paths.
Figure 10. This function traverses the trellis map backwards, finding the optimal (survivor) path. It then returns the message that is the ML estimate.