LDPC Report
Kenneth Andrews
Outline
• Code design and selection
• Code performance
• Fill data and the pseudorandomizer
• Start and tail sequence
• Impact on upper and lower protocol layers
• FPGA implementation
• Examples of LDPC-coded frames and CLTUs
• References
Report Concerning Space Data System Standards
TC SYNCHRONIZATION AND
CHANNEL CODING—
SHORT BLOCKLENGTH
LDPC CODES
CCSDS 230.1-G-x
GREEN BOOK
October 2016
EXPERIMENTAL SPECIFICATION FOR SHORT BLOCKLENGTH LDPC CODES
Traditionally, uplink communication has been used primarily for telecommand, where short
command sequences were transmitted to an unmanned spacecraft a few times per day to a
few times per month. Current and future spacecraft use uplink communication for a much
wider variety of purposes. While telecommand continues to be an essential application, both in
normal and emergency situations, there is increasing demand for transmitting larger volumes
of data to spacecraft. Flight equipment can be reprogrammed, both with software for
microprocessors and “logicware” for Field Programmable Gate Arrays (FPGAs). Manned
missions benefit from live uplink video and Internet access.
One view of this variety of uplink needs is summarized in Table 1-1, where they have been
categorized into four different applications. The different applications place different
demands on the telecommunications link, and hence the error correcting codes chosen for
each application may also be different.
The “TC Synchronization and Channel Coding” blue book [1] defines codes that are
primarily intended for Modes A and B in this table. While not standardized by CCSDS, some
missions have selected codes from the “TM Synchronization and Channel Coding” blue book
[3] for Modes C and D, and those codes are well suited to these uses because these applications
are similar to telemetry transfer.
Two binary LDPC codes were standardized, and the codes may be substituted in place of the
BCH code, with virtually no impact to other aspects of the telecommand system. Section ??
briefly describes how these codes were selected.
[1] CCSDS 231.0-B-2, “TC Synchronization and Channel Coding”, Blue Book, Issue 2,
September, 2010.
[2] CCSDS 232.0-B-2, “TC Space Data Link Protocol”, Blue Book, Issue 2, September
2010.
[3] CCSDS 131.0-B-2, “TM Synchronization and Channel Coding”, Blue Book, Issue 2,
August 2011.
1.2.1 INTRODUCTION
Initial trade studies promptly determined that codes of rate 1/2 provided an attractive balance
between encoding and decoding complexity, coding gain, and bandwidth expansion, not so
much to meet spectral requirements, but to limit the number of parity bits to be stored and
transmitted. As noted in Table 1-1, blocklengths on the order of 100 bits were of interest.
LDPC codes are well suited to such a code rate and blocklength, and in contrast to turbo
codes, LDPC codes probably provide a wider range of implementation options to trade
performance against speed and complexity.
The vast majority of LDPC coding research has studied binary codes, though some recent
results ([9],[10],[11]) show that short nonbinary LDPC codes, defined over the finite field
GF(256), can offer performance improvements of 1.0 to 1.3 dB over corresponding binary
LDPC codes. However, decoders for these are far more complex than decoders for the
binary codes.
Decoding of non-binary LDPC codes remains a field of research. Edge messages are
probability vectors over each of the field elements. Over GF(q), each edge message consists
of q probabilities, or rather q–1 degrees of freedom, because the probabilities must sum to
unity. Computation at the graph nodes is generally done with finite-field fast Fourier
transforms, with complexity on the order of q log(q). Sub-optimal decoding algorithms can
reduce this complexity considerably, but with some cost in performance. In general, the
complexity of practical decoding algorithms is not well understood.
Liva, Paolini, Matuz, Scalise and Chiani provided a structured non-binary LDPC design over
GF(256) and based on a protograph for a rate-1/2 cycle code [9],[10],[11]. The graph was
obtained by expanding the protograph targeting a large girth. In particular, whenever
possible, the resulting graph was selected in the category of cages, i.e., graphs with a
minimum number of vertices for a given girth. The choice of the coefficients in the final
parity-check matrix was performed with the aim of maximizing the minimum distance of the
binary image of each check equation.
Independently, Divsalar and Dolecek pursued protograph constructions for non-binary LDPC
codes in much the same way [14][15]. Several different techniques were explored for
expanding the protograph into a full graph, and selecting finite field coefficients for the
edges. They explored both GF(256), and also the smaller finite field GF(16) as a
compromise to gain much of the coding gain with less complexity.
For telecommand, the decoder is onboard the spacecraft, and so low decoding complexity is a
particularly important metric. Hence, binary LDPC codes were selected over the non-binary
alternatives, despite the loss in performance. Based on the experience of the research
community, a protograph plus circulant construction [6] was selected. This construction
permits fast, simple encoders, and is amenable to fast, well-structured decoders. Further
information about possible encoder and decoder implementations is given in [3].
For long blocklength codes, research shows that good designs typically use about 3.5 edges
per variable node; for short codes, this number must be increased to eliminate small trapping
sets [7] and low weight codewords. A wide variety of protographs were studied, and that
shown in Figure 1-1 yields excellent performance for short block lengths.
In this figure, circles represent variable nodes, and squares represent check nodes, or
constraint equations. To construct a full-size LDPC code, one replicates the protograph M
times, in this case yielding 8M variable nodes (one per transmitted channel symbol), and 4M
constraints on those channel symbols. Each edge in the protograph becomes a bundle of M
edges; these are cut, cyclicly permuted, and reconnected. Cyclic shifts must be chosen for
each edge in the protograph; this is done using a variant of Progressive Edge Growth (PEG)
[8], a greedy algorithm that aims to maximize loop length, among other metrics. Note that
while the protograph has parallel edges, they do not remain so when the full graph is
constructed.
This process was performed twice to generate a pair of codes with blocklengths (n=128,
k=64) and (n=512, k=256), where k is the number of information bits encoded, and n is the
number of transmitted code symbols. The resulting circulant choices are listed in Table 1-2
and Table 1-3, and the full parity check matrix for the (n=128, k=64) code is shown
in Figure 1-2. Here, dots are used to indicate 1’s in the parity check matrix, and lines are
added to clarify the block-circulant structure.
Table 1-2: Circulant choices for the (n=128, k=64) code

     A      B      C      D     E    F    G    H
A   0,7     2     14      6     0   13    0
B    6    0,15     0      1     0    0    7
C    4     1     0,15    14    11    0    3
D    0     1      9     0,13   14    1    0

Table 1-3: Circulant choices for the (n=512, k=256) code

     A      B      C      D     E    F    G    H
A  0,63    30     50     25    43   62    0
B   56    0,61    50     23     0   37   26
C   16     0     0,55    27    56    0   43
D   35    56     62     0,11   58    3    0
Some of the circulants selected are free parameters, because variable nodes and check nodes
can be relabeled without changing the performance of the code. This freedom was used to
put identity circulants on the main diagonal of the left half of the parity check matrix, and on
the first sub-diagonal of the right half of H (considered as a block-matrix). This may
simplify memory addressing in a hardware decoder implementation. Moreover, check
equations may be reordered at will, and with the last row of circulants moved to the top, the
matrix has the structure suggested by Richardson and Urbanke in [17], and this may simplify
encoding in some cases.
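To make the block-circulant structure concrete, the sketch below assembles a parity check matrix from a table of circulant shifts in the style of Table 1-2 and Table 1-3, with entries such as “0,7” treated as parallel protograph edges whose circulants are summed modulo 2. The protograph dimensions, the value of M, and the shift values used here are illustrative placeholders, not the standardized values.

import numpy as np

def circulant(M, shift):
    """M x M identity matrix cyclically shifted by 'shift' columns."""
    return np.roll(np.eye(M, dtype=np.uint8), shift, axis=1)

def build_H(M, shifts):
    """Assemble H from a dict {(check_block, variable_block): [shift, ...]}.

    Parallel protograph edges contribute the mod-2 sum (XOR) of several
    circulants in the same block position, as in the '0,7' style entries.
    """
    rows = 1 + max(r for r, _ in shifts)
    cols = 1 + max(c for _, c in shifts)
    H = np.zeros((rows * M, cols * M), dtype=np.uint8)
    for (r, c), shift_list in shifts.items():
        block = np.zeros((M, M), dtype=np.uint8)
        for s in shift_list:
            block ^= circulant(M, s)
        H[r*M:(r+1)*M, c*M:(c+1)*M] = block
    return H

# Hypothetical 2x4 protograph with M=4, purely to show the mechanics:
example_shifts = {(0, 0): [0, 3], (0, 1): [2], (0, 2): [1], (0, 3): [0],
                  (1, 0): [1],    (1, 1): [0, 2], (1, 2): [3], (1, 3): [0]}
H = build_H(4, example_shifts)
print(H.shape)          # (8, 16)
print(H.sum(axis=0))    # column (variable node) weights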
1.3 PERFORMANCE
Performance for the two codes has been determined by software simulation, and is shown in
Figure 1-3. These curves were generated using the min* decoding algorithm [16] with 100
iterations maximum. Bit Error Rate (BER) curves for the short (n=128, k=64) code are
shown in green and for the longer (n=512, k=256) code in red. For comparison, performance
of the BCH code is shown in both its triple-error-detection (TED) and single-error-correct /
double-error-detect (SEC/DEC) modes. While the BCH code in TED mode has negative
coding gain, it has error detection capability that is not shown in this graph. Also shown are
the performance of two rate-1/2 codes from [3], and the rate-1/2 capacity and uncoded curves.
Figure 1-3: Word Error Rate vs. Eb/N0 for the uncoded channel, the BCH code in TED and
SEC/DED modes, and rate-1/2 LDPC codes with k = 64, 128, 256, and 1024
The undetected error rate can be improved, at a cost in total error rate, by modifying the
decoder. One method is to decrease the maximum number of iterations performed. For the
short (n=128, k=64) code, two sets of results are shown in Figure 1-5. The red curves are
identical to those shown in Figure 1-3; the black curves are for a decoder that performs a
maximum of 10 iterations instead of 100. In the measurable region, this lowers the
undetected error rates by about an order of magnitude. Other, better, methods for building
incomplete decoders are described in [4] and [5].
Figure 1-5: Total and undetected error rates of the (128,64) code with two decoders:
red=100 iterations, black=10 iterations
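For readers unfamiliar with iterative decoding, the following is a minimal sketch of a min-sum belief propagation decoder with an iteration limit. The simulation results in this report were generated with the min* algorithm [16]; min-sum is used here only as a simpler, slightly weaker stand-in. Lowering max_iter makes the decoder less complete, trading a higher total error rate for a lower undetected error rate, as in the black curves of Figure 1-5.

import numpy as np

def decode_min_sum(H, llr, max_iter=10):
    """Return (hard decisions, success flag) for parity check matrix H and
    channel LLRs 'llr' (positive means bit 0 is more likely)."""
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    rows, cols = np.nonzero(H)
    msg_vc = llr[cols].copy()                  # variable-to-check messages
    for _ in range(max_iter):
        # Check node update: sign product and minimum magnitude over the
        # other edges of each check (leave-one-out).
        msg_cv = np.zeros_like(msg_vc)
        for chk in range(m):
            idx = np.where(rows == chk)[0]
            v = msg_vc[idx]
            sgn = np.prod(np.sign(v))
            mag = np.abs(v)
            for j, e in enumerate(idx):
                s = sgn * np.sign(v[j])        # sign of the other edges
                msg_cv[e] = s * np.delete(mag, j).min()
        # Variable node update and tentative hard decision.
        total = llr.copy()
        np.add.at(total, cols, msg_cv)
        hard = (total < 0).astype(np.uint8)
        if not np.any(H.dot(hard) % 2):        # all checks satisfied: stop
            return hard, True
        msg_vc = total[cols] - msg_cv          # extrinsic messages
    return hard, False                          # detected decoding failure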
Through computer searches, it is believed that the minimum distance of the (128,64) code is
dmin = 14, with weight enumerator A(x) = 16(x^14 + 33x^16 + 352x^18). Likewise, for the
(512,256) code, dmin = 40 and the weight enumerator is
A(x) = 64(x^40 + 11x^42 + 99x^44 + 699x^46), where the last two coefficients are
approximate [David Declercq, private communication].
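These leading weight-enumerator terms allow a back-of-envelope, truncated union bound on the maximum-likelihood word error floor over BPSK/AWGN. The numbers obtained are only indicative; they are not the iterative decoder’s measured total or undetected error rates, which are given in Figure 1-5.

from math import sqrt, erfc

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

A = {14: 16, 16: 16 * 33, 18: 16 * 352}   # leading terms of A(x) for (128,64)
R = 64 / 128                               # code rate
for EbN0_dB in (4.0, 5.0, 6.0):
    ebn0 = 10 ** (EbN0_dB / 10)
    bound = sum(Ad * Q(sqrt(2 * d * R * ebn0)) for d, Ad in A.items())
    print(f"Eb/N0 = {EbN0_dB} dB: truncated union bound ~ {bound:.2e}")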
The TC standard was developed with the expectation that the transfer frames from the data
link protocol sublayer would fill one or a few BCH codewords, and modestly larger data
volumes were accommodated with the addition of PLOP-2 to the protocol. This is the same
application profile for which the short-blocklength LDPC codes are designed (low data-rate
command and emergency communications), and so the same design remains appropriate.
The coding and synchronization sublayer of the telecommand protocol consists of four
operations, viewed from the perspective of the transmitter. The LDPC codes are a direct
substitution for the BCH code, with the single modification that the randomizer is mandatory
with the LDPC codes. In the remainder of this section, we discuss the implications more
carefully.
With either the BCH code or the LDPC codes, fill data is added if necessary to complete the
last codeword, and it may in principle be added either before or after randomization. With the
LDPC codes, it is required that the fill data be added first, for a couple of minor reasons.
First, this is the better inverse of the receive side, which always derandomizes before passing
the fill data up to the TC data link protocol sublayer; second, this reduces the minor spectral
effects that the fill data may introduce.
The randomizer is mandatory with the LDPC codes, because the codewords may be longer,
and longer transition-free runs are possible in the unrandomized symbol stream. Note that the
parity symbols of the BCH codewords are inverted to prevent long runs. When LDPC codes
are used, the randomizer is mandatory, and the parity symbols are not inverted.
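The ordering described above (fill first, then pseudo-randomize) can be made concrete with a short sketch. The 0x55 fill value is taken from the worked examples later in this report; the LFSR polynomial and seed below are placeholders only, not the pseudo-randomizer defined in the TC coding standard, for which [1] should be consulted.

def add_fill(frame: bytes, k_octets: int) -> bytes:
    """Pad with 0x55 octets so the frame fills an integer number of
    k_octets-octet information blocks (k_octets = 8 for the (128,64) code)."""
    fill = (-len(frame)) % k_octets
    return frame + bytes([0x55]) * fill

def randomize(data: bytes, tap_mask=0x1D, seed=0xFF) -> bytes:
    """XOR the data with the output of a generic 8-bit Fibonacci LFSR.
    Placeholder generator, used here only to illustrate the operation order."""
    state, out = seed, bytearray()
    for octet in data:
        rnd = 0
        for _ in range(8):
            bit = (state >> 7) & 1                      # output bit
            rnd = (rnd << 1) | bit
            fb = bin(state & tap_mask).count("1") & 1   # feedback parity
            state = ((state << 1) | fb) & 0xFF
        out.append(octet ^ rnd)
    return bytes(out)

frame = bytes(70)                       # a 70-octet frame, as in the example
coded_input = randomize(add_fill(frame, 8))
print(len(coded_input))                 # 72 octets -> nine (128,64) codewords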
CLTU generation (the addition of the start sequence and tail sequence) is performed with
LDPC codewords in exactly the same way as with BCH codewords. Note that there are no
synchronization markers between LDPC codewords, as there are in the TM standard. This is
acceptable because resynchronization is accomplished at the end of each CLTU. Markerless
codeword synchronization may be used if desired for marginally improved performance.
(Because the 16-symbol start sequence may not be detectable with an acceptably high
probability, it may be advantageous to ensure that the first LDPC codeword transmitted
contains entirely idle data.)
Because LDPC codes operate at a lower symbol SNR than BCH codes do, a longer start
sequence is required for reliable detection. The natural procedure is to scan through a
received bitstream with a correlator or some variant thereof, until a match exceeding some
threshold is found. Rather than a correlator, the optimum algorithm was derived by Massey
[??], and he noted that an approximate version of that algorithm gave similar performance
with dramatically reduced complexity. Decreasing the threshold decreases the probability of
missing a start sequence, but also increases the probability of a false alarm. Such Receiver
Operating Characteristics (ROCs) are plotted at a range of SNRs in Figure 1-6. Because the
LDPC codes have a threshold of Eb/N0 around 3 to 4 dB, the start sequence detector should
have a point on the ROC curve that allows reliable operation at this level. That is, P(miss)
should be smaller than the WER of the decoder, and P(FA), times the expected number of
symbols to be tested, should also be smaller than the decoder’s WER. Figure 1-6 shows that a
64-symbol start sequence with the Approximate Massey algorithm achieves this requirement;
similar figures would show that 16- and 32-symbol start sequences are not sufficient.
Figure 1-6 False-Alarm vs. Miss Rate for Start Sequence Detection (P(false alarm) versus
P(miss), shown for Eb/N0 = 4.0 dB among other SNRs)
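The detection procedure can be illustrated with a plain sliding correlator. The 64-bit pattern below is taken from the example CLTUs in the interoperability annex of this report; the threshold is an arbitrary illustrative value, and a practical receiver would implement the approximate Massey statistic rather than this raw correlation.

import numpy as np

START_HEX = "034776c7272895b0"          # 64-bit start sequence from the examples
start_bits = np.array([int(b) for b in bin(int(START_HEX, 16))[2:].zfill(64)])
ref = 1.0 - 2.0 * start_bits            # map bit 0 -> +1, bit 1 -> -1

def detect(soft_symbols, threshold):
    """Return candidate start positions in a stream of soft symbols
    (positive for transmitted 0, negative for transmitted 1)."""
    hits = []
    for i in range(len(soft_symbols) - 64 + 1):
        corr = float(np.dot(ref, soft_symbols[i:i + 64]))
        if corr > threshold:
            hits.append(i)
    return hits

# Toy usage: a noiseless stream with one embedded start sequence.
stream = np.concatenate([np.ones(100), ref, np.ones(100)])
print(detect(stream, threshold=48.0))   # expect a single hit, at index 100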
The standard 64-symbol start sequence has the autocorrelation function shown on the left
side of Figure 1-7, and cross correlation with the alternating …0101… sequence on the right
side. While these may not be optimum, there are no substantial defects that would cause
concern for false acquisition.
Figure 1-7 The 64-symbol sequence’s autocorrelation, and cross correlation with the
alternating …0101… idle sequence
With the BCH code, the tail sequence was chosen to be an uncorrectable vector, with a
Hamming distance at least two from each of the BCH codewords. This requires no detection
circuitry at the receiver, for the expected response is identical to that when an uncorrectably
noisy codeword is received. The possible drawback is that such a receiver does not
distinguish between a properly terminated CLTU and an invalid CLTU containing a
corrupted codeword; separating these is left as a task for the upper protocol layers.
With LDPC codes, marking the end of a CLTU with an uncorrectable codeword is less
attractive, for codewords are 128 or 512 bits long, rather than 64. One natural alternative is
to specify a binary string that can be searched for like the start sequence. As in that case, a
64-bit sequence is necessary for reliable detection when the channel noise is at the threshold
of the LDPC decoder. Another alternative is to omit the tail sequence altogether, in which
case the decoder will almost certainly fail to decode whatever symbols follow. Both
alternatives are permitted by the standard, and they have modestly different characteristics.
The receiver implementation differs between the two cases by the presence of Event E5 in
the state diagram shown in Figure 1-8 and Table 1-1.
When the tail sequence is included, the receiver must check for its presence at the end of
each codeword. If found, it is discarded, and the receiver returns to its search state. This
allows the receiver to report when a CLTU is properly terminated. Because the tail sequence
is explicitly identified, it also frees the decoder from any obligation to fail when the tail
sequence is encountered. Hence, a complete decoder could be used, one that decodes all
n-vectors to a codeword. While complete decoders are generally computationally complex,
they may provide improvements in decoding performance of 1 dB or more.
Alternatively, the tail sequence may be omitted. This alternative is attractive for it saves the
overhead of the 64-symbol tail sequence entirely, and offers minor simplifications to the
implementation of both the transmit and receive sides. However, it imposes several
restrictions on the receiver. The demodulator is obliged to produce enough symbols for the
LDPC decoder to attempt, and fail, to decode a subsequent codeword. As shown in Figure
1-10, the undetected error rate of a typical LDPC decoder is low enough at any SNR that the
probability of erroneously accepting a codeword beyond the end of a CLTU is negligible, but
an atypical LDPC decoder may not have this property. As with the BCH case, the receiver is
also unable to distinguish between an intentional decoder failure at the end of a CLTU and an
unintentional decoder failure due to channel noise. Aside from the loss of potentially valuable
statistics, it places the burden of validating a CLTU on upper protocol layers.
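As an informal illustration of the two options, the receiver’s per-codeword loop might be organized as sketched below. The function names are hypothetical; the normative behavior is that of the state diagram in Figure 1-8.

def receive_cltu(next_block, decode, is_tail_sequence, tail_present: bool):
    """Collect decoded codewords until the CLTU ends.

    next_block()  -> the next n received symbols
    decode(block) -> (codeword_bits, success_flag)
    """
    frames = []
    while True:
        block = next_block()
        if tail_present and is_tail_sequence(block):
            return frames, "terminated_by_tail"      # explicit, reportable end
        bits, ok = decode(block)
        if not ok:
            # Without a tail sequence, a decoding failure is the only end
            # marker; it is indistinguishable from a corrupted codeword.
            return frames, "terminated_by_decoder_failure"
        frames.append(bits)

# Toy usage with stand-in callables:
blocks = iter([b"cw1", b"cw2", b"TAIL"])
frames, how = receive_cltu(lambda: next(blocks),
                           lambda b: (b, b != b"TAIL"),
                           lambda b: b == b"TAIL",
                           tail_present=True)
print(frames, how)      # [b'cw1', b'cw2'] terminated_by_tail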
The LDPC codes can operate at a lower Signal to Noise Ratio (SNR) than the (63,56) BCH
code for several reasons: the code rate is reduced to 1/2 from 0.89, they are typically decoded
with a soft-decision decoder which saves about 2 dB as a rule of thumb, and all three LDPC
codes have longer codeword lengths than the BCH code. These changes need not have any
significant impact on the transmit side of an uplink communications link, but there are two
notable impacts on the receive side.
1. It will be advantageous for the spacecraft’s radio receiver to work at a lower symbol
SNR than with the current BCH code. By analogy, a communications system is only
as capable as its weakest link. The BCH code is currently a weak link, and by
replacing it with the stronger LDPC code, the system becomes more capable. The
next-weakest link may be the SNR at which the receiver can maintain symbol
synchronization, and in this case, further performance gains may be won by
improving the receiver’s tracking loops. Even if the receiver cannot be improved, though,
the use of the modern codes can only be an improvement. If one aims to
achieve a particular Word Error Rate (WER), the potential gains are shown in Figure
1-9 where the horizontal axis shows the symbol-SNR, rather than the more common
bit-SNR. If the receiver can maintain synchronization at an SNR below that required
by an LDPC code, then the full power of the code can be used; otherwise the benefits
are smaller. Similarly, if one has a constraint on the Undetected Word Error Rate
(UER), the potential gains are shown in Figure 1-10.
2. An LDPC code is generally decoded using “soft symbols”, rather than the binary
“hard symbols” typically used for a BCH code. This provides a performance
improvement of about 2 dB, but depends on a receiver that can produce soft outputs.
This modification is not mandatory, however, for a belief propagation decoder can
also operate on the hard symbols if necessary. Conversely, BCH codes are typically
decoded with an algebraic decoder that operates on binary inputs, but there are soft-
decision BCH decoding algorithms that can provide a performance improvement at a
substantial cost in complexity.
Figure 1-9 Word Error Rates for several uplink coding schemes
Figure 1-10 Undetected Error Rates for several uplink coding schemes
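A minimal sketch of the distinction drawn in item 2 above: channel LLRs computed from soft symbols, and the saturated LLRs a belief propagation decoder can fall back to when only hard symbols are available. The ±4.0 hard-decision magnitude is an arbitrary illustrative choice.

import numpy as np

def soft_llr(y, noise_var):
    """LLR of bit 0 vs bit 1 for BPSK over AWGN: y = (1 - 2*bit) + noise."""
    return 2.0 * y / noise_var

def hard_llr(y, magnitude=4.0):
    """Hard-decision input: keep only the sign, at a fixed confidence."""
    return magnitude * np.sign(y)

y = np.array([0.9, -0.2, 1.4])
print(soft_llr(y, noise_var=0.5))   # graded confidences
print(hard_llr(y))                  # signs only, fixed magnitude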
The CCSDS standards specify that telecommand uses one of two Physical Layer Operations
Procedures (PLOPs). All missions whose planning began after September 2010 are to use
PLOP-2, as shown in Figure 1-11; the older PLOP-1 differs only in that the loop returns to
CMM-1 (Carrier Modulation Mode 1) instead of to CMM-3. No modification is required to
use the PLOP-2 protocol and the short-blocklength LDPC codes together.
Figure 1-11 PLOP-2 telecommand protocol, reproduced from Figure 6-2 of [2]
The Coding and Synchronization sublayer exchanges variable length TC Transfer Frames
with the TC Space Data Link Protocol [2]. Because these are of variable length, the protocol
sublayers are almost completely isolated from each other.
The “All Frames Generation Function” is responsible for computing the Frame Error Control
Field (FECF), more commonly known as a Cyclic Redundancy Check (CRC), and passing
the resulting Transfer Frames to the coding sublayer. Its behavior, like those of the functions
above it, remains unchanged.
The “All Frames Reception Function” is responsible for stripping fill data from TC Transfer
Frames, and validating the resulting frames. Because the LDPC codewords can be up to 64
octets long, as many as 63 octets of fill may have to be stripped from the end of a TC
Transfer Frame. As with the BCH coding scheme, there is the possibility that the fill may be
mistaken for a Transfer Frame Header, but for the same reasons, it will fail the standard
frame validation check procedure.
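As an illustration only, fill stripping might be implemented as below, assuming the Transfer Frame length field occupies the low ten bits of header octets 2 and 3 and holds the frame length in octets minus one; [2] defines the normative header layout. The eight-octet hardware command example in the annex (30 1B 00 07 00 00 4C A9) is consistent with this reading.

def strip_fill(data: bytes) -> bytes:
    """Remove trailing fill octets left over from completing a codeword."""
    length_field = ((data[2] & 0x03) << 8) | data[3]
    frame_len = length_field + 1
    if not 5 <= frame_len <= len(data):
        raise ValueError("implausible frame length field")
    return data[:frame_len]              # up to 63 octets of fill may follow

frame = bytes.fromhex("301B000700004CA9") + b"\x55" * 8   # illustrative fill
print(strip_fill(frame).hex())           # -> 301b000700004ca9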
ANNEX A
FPGA IMPLEMENTATION
Encoders and decoders for these three binary LDPC codes have been implemented on a
Field Programmable Gate Array (FPGA). As with most FPGA problems, the speed of
an LDPC encoder or decoder scales essentially linearly with FPGA area. The designs
reported here were intended to be small, with good decoding performance, but not
especially fast. Each is small enough that it would use only a few percent of any FPGA
that might be used in a spacecraft design (such as the Xilinx radiation-tolerant
XQR2V1000), and will support any data rate that is anticipated for these codes. The
encoders produce one code symbol per clock cycle; the decoders use 26 clock cycles per
iteration per information bit, and implement an 8-bit max* operation at the check
nodes. Speed and area results are given in Table A-1 and Table A-2.
Table A-1: Implementation results for LDPC encoders on Xilinx Virtex-2 FPGAs
Table A-2: Implementation results for LDPC decoders on Xilinx Virtex-2 FPGAs
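For a sense of scale, the quoted 26 clock cycles per iteration per information bit translate into throughput as follows; the 50 MHz clock and the 10-iteration limit are assumed example values, not measured figures.

clock_hz = 50e6
cycles_per_info_bit = 26 * 10            # cycles/iteration/bit * iterations
print(clock_hz / cycles_per_info_bit)    # ~1.9e5 information bits per second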
ANNEX B
Just as an error correcting code will correct errors introduced by channel noise, it will also
correct some implementation errors. Hence, interoperability is best tested by providing
identical inputs to two implementations of the sending end, and verifying that their outputs
exactly match, symbol by symbol. Moreover, only the sending end is standardized.
Implementers are free to build their receiving equipment in any way they choose, particularly
with respect to LDPC decoding algorithms, and detection algorithms for the start and tail
sequences. “Good” implementations will all have very similar performance in a statistical
sense, but individual results may vary.
At the sending end, the Synchronization and Channel Coding Sublayer accepts Transfer
Frames as input. While not consistent with what might be produced by the Data Link
Protocol Sublayer, we use the 70-character ASCII text string, “Short Blocklength LDPC
Codes for TC Synchronization and Channel Coding” as an example. This sample transfer
frame is listed in hexadecimal in Table 1-5; it is converted to a binary string by taking the
hexadecimal octets in order, Most Significant Bit (MSB) first within each octet.
53 68 6f 72 74 20 42 6c 6f 63 6b 6c 65 6e 67 74
68 20 4c 44 50 43 20 43 6f 64 65 73 20 66 6f 72
20 54 43 20 53 79 6e 63 68 72 6f 6e 69 7a 61 74
69 6f 6e 20 61 6e 64 20 43 68 61 6e 6e 65 6c 20
43 6f 64 69 6e 67
For the example transfer frame above, two octets of fill data are added, and after pseudo-
randomization, nine 128-bit codewords are formed. When the 64-bit start sequence and
optional 64-bit tail sequence are added, the 1280-bit result is shown in Table 1-6 (again
displayed as octets, MSB first).
03 47 76 c7 27 28 95 b0 ac 51 f1 28 1c c9 44 99
62 66 35 57 42 04 9c 50 03 ea 44 cd 54 30 6f b4
43 db bf d7 a7 c3 d6 e6 3a 88 f7 ea 1e 81 e7 ae
ef 7f 36 a6 7a a8 34 d4 09 b8 5d a7 d8 e0 3f 4f
2e 12 88 bd 31 1e 08 48 de 27 7f 94 82 ab 63 89
60 3c c1 a1 a2 be 24 9d b1 60 30 2c 0b c6 70 f4
a7 a1 b0 0c aa 34 5b 2f cc 3e 19 7c fc eb eb fa
c1 bd a2 d3 aa 72 40 b5 8e d0 10 c7 9f 69 cc 5b
56 45 51 d6 3f 8d 22 85 bf 89 1d 00 cd c3 4e 80
23 32 9c 77 f5 61 fe ee c5 c5 c5 c5 c5 c5 c5 79
Table 1-6: Example CLTU encoded with the (128,64) LDPC code and with the optional
tail sequence included
Any sublayer implementation that produces this output from the same input is interoperable
at the sending side. While the receiving side is not standardized, an implementation should
be able to locate this CLTU when embedded in random binary data and combined with
Additive White Gaussian Noise (AWGN) with a Signal to Noise Ratio (SNR) above about
Eb/N0 = 4 dB. The result will be a Transfer Frame that appears in ASCII as “Short
Blocklength LDPC Codes for TC Synchronization and Channel CodingUU”, because this
sublayer cannot remove the two octets of fill data that have binary value 01010101, or “U” in
ASCII.
Here, we repeat the same example from Table 1-5, and use the longer (512, 256) LDPC code.
In this case, 26 octets of fill data are added, and after pseudo-randomization, three 512-bit
codewords are formed. When the 64-bit start sequence and optional 64-bit tail sequence are
added, the 1664-bit result is shown in Table 1-7 (again displayed as octets, MSB first).
03 47 76 c7 27 28 95 b0 ac 51 f1 28 1c c9 44 99
03 ea 44 cd 54 30 6f b4 3a 88 f7 ea 1e 81 e7 ae
09 b8 5d a7 d8 e0 3f 4f 43 73 db fd 55 64 c0 e5
79 89 d1 04 32 cf 79 5a 4e fd d7 5d 92 4f de 16
16 09 9e 17 79 f7 ab 49 de 27 7f 94 82 ab 63 89
b1 60 30 2c 0b c6 70 f4 cc 3e 19 7c fc eb eb fa
8e d0 10 c7 9f 69 cc 5b 54 54 48 70 10 d7 59 78
8e c1 d8 73 01 2e 87 6b cb ec 44 a4 fa 36 b6 3e
d5 d0 f2 5c 0d 06 b8 4e bf 89 1d 00 cd c3 4e 80
e7 71 eb d1 90 2d 76 54 1f f7 bb ec 6e 5e 4a e0
ce 25 b6 06 b7 4c 15 a2 b4 b7 86 ee 9f e0 08 ce
5f 18 96 6e be b9 29 1f 1c 83 05 c7 ae 18 9c c2
ec c0 12 69 17 e6 33 e4 c5 c5 c5 c5 c5 c5 c5 79
Table 1-7: Example CLTU encoded with the (512,256) LDPC code and with the optional
tail sequence included
Any sublayer implementation that produces this output from the same input is interoperable
at the sending side. While the receiving side is not standardized, an implementation should
be able to locate this CLTU when embedded in random binary data and combined with
AWGN with an SNR above about Eb/N0 = 3 dB. The result will be a Transfer Frame that
appears in ASCII as “Short Blocklength LDPC Codes for TC Synchronization and Channel
CodingUUUUUUUUUUUUUUUUUUUUUUUUUU”, because this sublayer cannot remove
the 26 octets of fill data.
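The sizes quoted in the two examples can be checked with a few lines of arithmetic; the helper below simply reproduces the fill, codeword-count, and CLTU-length bookkeeping described in the text.

def cltu_bits(frame_octets, n, k, tail=True):
    """Fill octets, codeword count, and total CLTU length in bits,
    with the 64-bit start sequence and optional 64-bit tail sequence."""
    k_octets = k // 8
    fill = (-frame_octets) % k_octets
    codewords = (frame_octets + fill) // k_octets
    return fill, codewords, 64 + codewords * n + (64 if tail else 0)

print(cltu_bits(70, 128, 64))    # -> (2, 9, 1280), matching the first example
print(cltu_bits(70, 512, 256))   # -> (26, 3, 1664), matching the second example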
As one more case, we consider Example 1 from [Annex F of the current Telecommand Green
Book]. The transfer frame is an eight-octet “hardware command” shown in Table 1-8, and it
is consistent with the transfer frames that might be generated by the TC Data Link Protocol
[2].
30 1B 00 07 00 00 4C A9
With the (128, 64) code, no fill data are required. After pseudo-randomization, one 128-bit
codeword is formed. With the 64-bit start sequence, and without the optional tail sequence, a
192-bit CLTU results. If a repetition factor of 3 is specified, the CLTU is repeated three
times, and the result is given in Table 1-9.
03 47 76 c7 27 28 95 b0 cf 22 9e 5d 68 e9 4a 5c
c7 52 ac 35 0b e0 47 52 03 47 76 c7 27 28 95 b0
cf 22 9e 5d 68 e9 4a 5c c7 52 ac 35 0b e0 47 52
03 47 76 c7 27 28 95 b0 cf 22 9e 5d 68 e9 4a 5c
c7 52 ac 35 0b e0 47 52
Table 1-9: Example CLTUs generated from a single (128,64) LDPC codeword, with no
tail sequence, and repetition factor of 3.
Performance will vary, but when embedded in random binary data and combined with
AWGN with an SNR above about Eb/N0 = 2 dB (Es/N0 = –1 dB when considering the rate ½
code), a good implementation should be able to locate and recover at least one of the three
copies of this CLTU with a probability of about 95%.
ANNEX A
INFORMATIVE REFERENCES
[4] S. Dolinar, K. Andrews, F. Pollara, and D. Divsalar, "Bounded Angle Iterative Decoding
of LDPC Codes", Milcom08 (San Diego), Nov. 17-19, 2008.
[8] X.-Y. Hu, E. Eleftheriou, and M. Arnold, “Irregular Progressive Edge-Growth (PEG)
Tanner Graphs”, International Symposium on Information Theory, July 2002, p. 480.
[9] G. Liva, E. Paolini, S. Scalise, and M. Chiani, “Turbo Codes Based on Time-Variant
Memory-1 Convolutional Codes over Fq”, International Conference on Communications
(ICC), Kyoto, June 2011.
[10] T. De Cola, E. Paolini, G. Liva, and G.P. Calzolari, “Reliability Options for Data
Communications in the Future Deep-Space Missions”, IEEE Proceedings, Nov. 2011, pp.
2056-2074.
[11] G. Liva, E. Paolini, B. Matuz, S. Scalise, and M. Chiani, "Short Turbo Codes over High
Order Fields," IEEE Transactions on Communications, June 2013, pp. 2201-2211.
[13] C. Poulliat, M. Fossorier, and D. Declercq, “Design of regular (2, dc)-LDPC codes over
GF(q) using their binary images,” IEEE Trans.Commun., vol. 56, no. 10, pp. 1626–1635,
Oct. 2008.
[14] D. Divsalar and L. Dolecek, “Graph Cover Ensembles of Non-binary Protograph LDPC
Codes”, International Symposium on Information Theory (ISIT), Boston, MA, July 2012.