$$\sum_{l=0}^{L-1} d_H(r_l, c_l) \qquad (2)$$
It is important to note that we have yet to define the metric $d_H(\cdot,\cdot)$ in Equation (2). The choice of metric matters: choosing the Hamming distance produces a sub-optimal decoder, whereas choosing an $L_p$ norm, specifically the $L_2$ norm, yields an optimal soft decoder. Lastly, we also refer the reader to other metrics that have arisen in prediction theory and have found uses in fields such as computer vision [1].
2.2.1 Sub-optimal Decoder: Hard Decoding
As previously noted, the chosen partial branch metric, $d_H(\cdot,\cdot)$, is crucial for the decoder. In particular, let us denote $\hat{r}_l = [\mathrm{sgn}(r_{l,1}), \mathrm{sgn}(r_{l,2}), \ldots, \mathrm{sgn}(r_{l,k})]$, where $\mathrm{sgn}(\cdot)$ outputs the sign of its argument. With our newly formed estimate $\hat{r}_l$, we can then define the partial branch metric using the Hamming distance. This is given as
$$d_H(\hat{r}_l, c_l) = \left|\{\, i \mid \hat{r}_{l,i} \neq c_{l,i},\ i = 1, 2, \ldots, k \,\}\right| \qquad (3)$$
Given that we first formed the estimate $\hat{r}$ by making hard decisions on the received vector $r$, we denote this procedure as hard decoding.
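To make Equation (3) concrete, the following is a minimal MATLAB sketch of the hard-decision branch metric; the function name is illustrative, and the candidate codeword bits are assumed to be in bipolar (+/-1) form, which may differ from the conventions of the code in Appendix A.

function d = hardBranchMetric(r_l, c_l)
% hardBranchMetric  Hamming distance between the hard-decision
% estimate of the received sub-vector r_l and a candidate codeword
% sub-vector c_l (both assumed in bipolar +/-1 form).
    r_hat = sign(r_l);          % hard decisions, as in Equation (3)
    r_hat(r_hat == 0) = 1;      % resolve sign(0) to a valid symbol
    d = sum(r_hat ~= c_l);      % count disagreeing positions
end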
2.2.2 Optimal Decoder: Soft Decoding
One major drawback of making hard decisions in forming the estimate $\hat{r}$, as seen in Section 2.2.1, is a loss of information from the received vector. Instead, if we deal with the received vector $r$ directly, we can form a measure of similarity via the $L_p$ norm. That is, we define the partial branch metric to be
$$d_H(r_l, c_l) = \left( \sum_{i=1}^{k} |r_{l,i} - c_{l,i}|^p \right)^{1/p} \qquad (4)$$
where choosing p = 2 gives the Euclidean distance; one then arrives at the optimal soft decoding scheme using the square of the Euclidean distance.
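A companion sketch of the soft branch metric follows (again, the function name is illustrative). With p = 2, one typically accumulates the squared Euclidean distance directly, since squaring is monotone and leaves the minimizing path unchanged.

function d = softBranchMetric(r_l, c_l)
% softBranchMetric  Squared Euclidean distance between the raw
% received sub-vector r_l and a candidate codeword sub-vector c_l,
% i.e., Equation (4) with p = 2 (square root omitted, since it
% does not change which path attains the minimum).
    d = sum((r_l - c_l).^2);
end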
3. Implementation
We have used MATLAB to implement the convolutional encoder/decoder algorithm presented in this report. We should note that, because of the exponential increase in complexity with respect to the number of memory elements, together with unoptimized MATLAB code, a major drawback is computational speed. However, from previous experience with search-based algorithms, one could invoke a k-d tree to perform fast searches.
We also note the generality of the framework and refer the reader to the documented version of the MATLAB code used to implement the convolutional encoder/decoder, which can be found at the end of this report. In particular, the code is written for K = 4; however, one can easily change it to incorporate other encoders (e.g., K = 6 or K = 8). These changes are denoted by a red box.
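For readers who simply wish to cross-check our results rather than modify the code in Appendix A, the following is a rough sketch of an equivalent rate 1/3, K = 4 encoder/decoder using the MATLAB Communications Toolbox (assumed available); the octal generators [13 15 17] are a standard textbook choice for this constraint length and are not necessarily the ones realized by our circuit logic.

% Sketch of an equivalent rate 1/3, K = 4 encoder/decoder using the
% Communications Toolbox. Generator polynomials are a standard
% textbook choice and may differ from our encoder.
trellis = poly2trellis(4, [13 15 17]);   % K = 4, rate 1/3
msg     = randi([0 1], 100, 1);          % L = 100 message bits
code    = convenc(msg, trellis);         % encoded bit stream
tblen   = 5 * 4;                         % traceback depth ~ 5K
decoded = vitdec(code, trellis, tblen, 'trunc', 'hard');
isequal(msg, decoded)                    % noiseless sanity check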
4. Experiments
We test the robustness of the rate 1/3 convolution code for memory element sizes of 3, 5, and 7 (i.e., K = 4, 6, 8). Specifically, we measure the coding efficiency of each respective convolution code over 10,000 trials and assume that our message is L = 100 bits. Moreover, this simulation is run over several SNR levels. Although one would ideally like to reach the theoretical coding gain given by Shannon's limit, we deem the encoder/decoder successful if it achieves roughly 4 dB using a hard decoding scheme. This baseline can then be improved by substituting various branch metrics, such as the $L_2$ norm. To this end, we present simulation results of the algorithm for both hard and soft decoding, and refer the reader to Appendix A for information on how to switch between the two via trivial changes to the MATLAB code.
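For orientation, the simulation loop has roughly the following shape; this is a minimal Monte Carlo sketch of BPSK over an AWGN channel, where encodeConv and decodeConv are hypothetical placeholders for the Appendix A routines.

% Minimal Monte Carlo BER sketch (rate 1/3, BPSK over AWGN).
% encodeConv/decodeConv are placeholders for the Appendix A code.
R = 1/3; L = 100; nTrials = 10000;
EbN0dB = 0:0.5:8;
ber = zeros(size(EbN0dB));
for s = 1:numel(EbN0dB)
    EbN0  = 10^(EbN0dB(s)/10);
    sigma = sqrt(1/(2*R*EbN0));          % noise std dev per coded bit
    nErr = 0;
    for t = 1:nTrials
        m    = randi([0 1], 1, L);       % random L-bit message
        c    = encodeConv(m);            % placeholder encoder
        x    = 1 - 2*c;                  % BPSK: 0 -> +1, 1 -> -1
        r    = x + sigma*randn(size(x)); % AWGN channel
        mHat = decodeConv(r);            % placeholder Viterbi decoder
        nErr = nErr + sum(mHat ~= m);
    end
    ber(s) = nErr/(nTrials*L);
end
semilogy(EbN0dB, ber); grid on;
xlabel('E_b/N_0 (dB)'); ylabel('BER');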
We begin with the K = 4 convolution code (see Figure 3). In Figure 4a, we present the BER simulated over a series of trials along with Shannon's theoretical limit and the uncoded BPSK baseline. Figure 4b shows a zoomed-in plot of the value located on the simulated curve at BER = $10^{-4}$. The coding gain and gap to capacity at this BER level are 2.242 dB and 6.951 dB, respectively. Using Table 1, we see that the theoretical gain for a hard decoding scheme is $10\log_{10}(R\, d_{min}/2) = 2.1285$ dB, which falls close to what is measured.
Similarly, Figures 4(c)-(d) and 4(e)-(f) show the convolution code results for K = 6 and K = 8. We again find that the measured coding gain of each curve falls close to its theoretical coding gain. That is, for K = 6 we expect a gain of 3.358 dB but measure a coding gain of 2.93 dB. Likewise, for K = 8, we expect a gain of 4.23 dB but measure a coding gain of 3.59 dB.
Figure 4. Rate 1/3 Convolution Encoder Simulated Results using a Hard Decoder. (a)-(b) Encoder (Hard Decoding) for K = 4. (c)-(d) Encoder (Hard Decoding) for K = 6. (e)-(f) Encoder (Hard Decoding) for K = 8.

Figure 5. Soft Decoding Results for K = 4 and K = 6 Rate 1/3 Convolution Encoders. (a)-(b) Encoder (Soft Decoding) for K = 4. (c)-(d) Encoder (Soft Decoding) for K = 6.

Finally, Figure 5 shows simulated results for the K = 8 convolution encoder using the soft decoding scheme
discussed in this report. Interestingly, we were only able to measure a coding gain of 4.12 dB, which is far from what is theoretically expected, i.e., $10\log_{10}(R\, d_{min}) = 7.27$ dB. One factor that may account for the disparity between the two values is the small message length of L = 100 or the limited number of trials (10,000), since both determine the total number of transmitted message bits. Nevertheless, we do achieve a dB gain that is reasonable for the objective of this project.
5. Conclusion
In this report, we attempt to mitigate the gap to capacity of Shannon's theoretical limit for a rate 1/3 system. In particular, given the generality and flexibility provided by convolution codes, we present several convolution encoders with varying memory element sizes. Using both soft and hard decoding, we then present experimental results that, for the most part, fall within the expected theoretical gains.
References
[1] R. Sandhu, T. Georgiou, and A. Tannenbaum. A new distribution metric for image segmentation. In SPIE Medical Imaging, 2008.
[2] S. B. Wicker. Error Control Systems for Digital Communications and Storage. Prentice Hall, 1995.
6. APPENDIX A: MATLAB CODE
Please see Figure 6 through Figure 10 for the detailed MATLAB code. The red boxes highlight the only regions of code that should be altered in order to change the design of the encoder (i.e., memory element size or a hard/soft decoding scheme). The implementation shown is for K = 4 with hard decoding.
Figure 6. This is the main MATLAB script file used to simulate the binary AWGN channel. To change between different Rate 1/3 Convolution Encoders with different memory elements or soft/hard decoding schemes, modify the area inside the red box.
Figure 7. This is the first half of the generalized decoder of convolution codes. We note that one can perform both soft and hard decoding via a flag input.
Figure 8. This is the second half of the generalized decoder of convolution codes. If one needs to modify the optimal K = 4 convolution encoder, then modify the circuit logic function.
Figure 9. These are the associated helper files needed to compute the encoding of the convolution code, compute the circuit logic for a specific encoder, and compute the appropriate cost functionals for node paths.
Figure 10. This function traverses the trellis map backwards, finding the optimal or survival path. It then returns the message that is the ML estimate.