
1 An Overview of Digital Communication Systems

1.1 Introduction

Until the late 1980s, analog modems were the most widely used means of data transmission over
telephone wirelines. The techniques developed in digital communications from the
1960s through the mid-1980s were mainly targeted at wireline modems. Since then,
various forms of digital transmission technology have been developed and become
popular for communication over digital subscriber loops, Ethernet, and wireless
networks. The basic principles of synchronization developed in the analog modem era
have evolved; they are still used directly, or serve as the foundation of
synchronization, in other types of digital communication systems.
Since the mid-1980s, wireless has overtaken wireline as the main form of communication
connecting our society and permeating everyday life. Because of the
widespread deployment of cellular communication systems, wireless technologies have
made impressive progress in all disciplines, including synchronization.
Another important area of digital communications is satellite communication, which
also experienced rapid progress during the same period. While satellite communication
has its unique properties, it also has many commonalities with wireline and
wireless communications, including in the area of synchronization. In this book,
we focus mainly on the theories and techniques of synchronization for wireline and
cellular-type wireless communication systems. However, what is discussed is also
applicable to satellite communications.
To achieve bidirectional communications, two communication channels are needed.
The two channels can be physically independent or can share the same physical media.
Over wirelines, communications in the two directions can be either symmetric or
asymmetric. For wireless, such as mobile communications, they are most likely
asymmetric in the two directions. Communications from wireless base stations to
mobile devices are usually called forward link communications. Communications in
the other direction, i.e., from devices to the base stations, are usually called reverse link
communications. In this book, most techniques discussed and examples given for
wireless communications are assumed for the forward link, although what is discussed
can be adapted for the reverse link as well.
In this chapter, an overview of the communication system and its main functional
blocks is provided in Section 1.2. The details of the three major components of a typical

Downloaded from https://www.cambridge.org/core. Columbia University Libraries, on 15 Aug 2017 at 13:41:55, subject to the Cambridge Core terms of use,
available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335444.003

communication system, i.e., the transmitter, the channel, and the receiver, are presented
in Section 1.3. Section 1.4 provides a high-level view of the main synchronization
functions in communication systems as an introduction to the subjects on which this
book focuses. Sections 1.5 and 1.6 describe the basics of code division multiple access
(CDMA) and orthogonal frequency-division multiplexing (OFDM) technologies to
facilitate the discussion, in later chapters, of synchronization in communication systems
that employ them. Finally, Section 1.7 summarizes what has been discussed and lists
the common parameter notations used in this book to conclude this chapter.

1.2 A High-Level View of Digital Communications Systems

A digital communication system has three major components: a transmitter (Tx),1
a receiver (Rx), and a communication channel. A high-level block diagram of such
a typical digital communication system is shown in Figure 1.1.
The objective of a digital communication system is to send information from one
entity, in which the information resides or is generated, through a communication
channel to another entity, which uses the information. A high-level description of the
process of how a communication system achieves information transfer from the source
to the destination is given below. Details of their realization in different types of
communications systems are provided later in this chapter.

A Brief Overview of Transmitter Operations


To be processed by the transmitter, the information from the source must be in binary
form, i.e., represented by bits in the form of zeros and ones. If the information exists in
nonbinary forms, such as in quantized audio or video waveforms, it is first converted to
the digital form by source coding. The source-coded information or information that is
already in the binary form may be encrypted and/or compressed. Such preprocessed
source information in binary form is referred to as information bits, which are ready for
further processing in the transmitter.
Inside the transmitter, the information bits are first processed in digital form. This
step is commonly referred to as digital baseband processing. Its output is baseband
signal samples, which are converted to analog baseband waveforms by a digital-to-
analog converter (DAC).
The generated analog baseband waveforms are modulated onto a carrier frequency fc
to become passband signals. After being filtered and amplified, the passband signal at
the carrier frequency is transmitted over a communication channel. The communication
channel can be a wireless channel through radio wave propagation, or it can be a
wireline channel such as a twisted pair of wires or cables.

1. In some references, Tx is used as a shorthand notation for transmitter; it can also describe or
specify any transmitter-related quantity. The same usage applies to Rx.


Figure 1.1 High-level block diagram of a digital communication system

A Brief Overview of Receiver Operations


After passing through the communication channel, the transmitted signal reaches the
receiver at the other end of the channel. Communication channels always introduce
various types of impairments including linear/nonlinear distortions, additive noise, and
interference. As a result, the received signal is a distorted version of the transmitted
signal. The task of the receiver is to recover the original transmitted information with
little or no loss.
The first step of the receiving process is to convert the passband signal at carrier
frequency to baseband. After performing passband filtering to reduce as much outband
interference and noise as possible, the received signal is frequency down-converted or
demodulated to become a baseband analog waveform.2 The generated baseband analog
signal can be expressed as the convolution of the transmitted baseband waveform and
the channel impulse response (CIR) of the equivalent baseband channel.
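This convolution relation can be sketched in discrete time. In the snippet below (the CIR tap values and variable names are illustrative, not taken from the text), the received baseband samples are the transmitted samples convolved with the channel impulse response:

```python
import numpy as np

# Discrete-time sketch of the baseband channel model: the received
# samples are the convolution of the transmitted samples with the
# channel impulse response (CIR), plus additive noise. The CIR taps
# below are illustrative, not from any particular channel.
tx = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # transmitted baseband samples
cir = np.array([0.9, 0.3, 0.1])              # example CIR taps
noise = np.zeros(len(tx) + len(cir) - 1)     # noiseless here, for clarity

rx = np.convolve(tx, cir) + noise            # received baseband samples
```

The spreading of each transmitted sample over several received samples is exactly the linear distortion (intersymbol interference) discussed later in this chapter.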
The baseband analog waveforms of the received signal are analog filtered and are
converted to digital samples. Then the baseband digital signal samples are processed by
the receiver digital baseband processing block to regenerate the original information bits
sent by the transmitter with few or no errors. The original source information is
recovered from the regenerated information bits to complete the operations of the
communication system.
The details of the operations performed in the blocks as just described are explained
in the following.

2. In digital communication terminology, the term demodulation commonly refers to the general operation of
recovering the original data symbols from their modulated forms. In a loose sense, it may also mean the
operations for data symbol recovery including frequency down-conversion, or carrier phase and/or
frequency offset correction.


1.3 Major Components of Typical Digital Communication Systems

As discussed above, a digital communication system consists of a source information
processing block, a transmitter, a channel, a receiver, and a source-information
recovering block. In this section, we consider its three major components, which perform
the essential operations for achieving successful digital transmission, i.e., the transmitter,
the channel, and the receiver.3
The functional blocks of the transmitter and the receiver and their operations
described below are in the context of single-carrier digital communication.4 However,
in principle, what is discussed is also applicable to other digital communication systems
with additional operations as necessary. Some digital communication technologies that
are widely used in modern communication systems will be introduced later in this
chapter. For example, details of the direct-sequence code division multiple access
(DS-CDMA), which is a type of direct-sequence spread spectrum (DSSS) communi-
cation, and OFDM will be presented in Sections 1.5 and 1.6, respectively.

1.3.1 Transmitters and Their Operations


In this section, we describe various transmitter functional blocks and their operations for
data transmission in single-carrier digital communication systems. The functional
blocks considered are channel coding and interleaving, data symbol mapping, data
packet forming and control signal insertion, spectrum/pulse shaping, digital-to-analog
conversion, and carrier modulation and filtering. The block diagram of such a transmit-
ter is shown in Figure 1.2.

1.3.1.1 Channel Coding and Interleaving


In order to improve the reliability of data transmission over the communication link, the
information bits to be transmitted are first coded by a forward error correction (FEC)
encoder. This step is called channel coding. Channel coding introduces redundancy into
the input bits. Such redundant information is used by the decoder in the receiver to
detect the information bits and to correct the errors introduced when the information
was transmitted over the communication channel. Thus, a system with FEC coding can
tolerate more distortion and noise introduced during transmission than a system
without it. In other words, a coded system can achieve more reliable communication
at a lower signal-to-noise ratio (SNR) of the received signal than an uncoded
system. The difference between the SNRs required by the two systems to attain the

3. The source information processing and recovery blocks are sometimes viewed as associated with the
transmitter and receiver, respectively. However, in order to concentrate on the aspects that are most pertinent
to the subject matter of this book, their functions will not be considered here.
4. Direct-sequence spread spectrum communication is also a type of single-carrier digital communication.
To avoid confusion, in this book, the term "single-carrier communication" generally refers to non-DSSS
single-carrier communication, unless otherwise specified.


Figure 1.2 Block diagram of a typical transmitter

same error rate is called the coding gain, with the SNR measured as the ratio of the energy
per information bit (Eb) to the noise power spectral density (N0), i.e., Eb/N0.
Due to the introduced redundancy, the number of coded bits generated by the FEC
encoder is always greater than the number of input information bits. Consequently, the
ratio of the number of the input information bits divided by the number of the output
coded bits, known as the coding rate R,

    R = (number of information bits) / (number of coded bits)        (1.1)

is always less than 1 in order to achieve positive coding gain.
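As a minimal illustration of (1.1) and of how redundancy enables error correction, consider a rate-1/3 repetition code, the simplest possible FEC (far weaker than the codes discussed below; the function names are ours):

```python
import numpy as np

def repetition_encode(bits, n=3):
    # Rate-1/n repetition code: each information bit is sent n times.
    return np.repeat(bits, n)

def repetition_decode(coded, n=3):
    # Majority vote over each group of n received bits; this corrects
    # up to floor((n-1)/2) bit errors per group.
    groups = coded.reshape(-1, n)
    return (groups.sum(axis=1) > n // 2).astype(int)

info = np.array([1, 0, 1, 1])
coded = repetition_encode(info)        # 12 coded bits from 4 info bits
R = len(info) / len(coded)             # coding rate R = 1/3 < 1, per (1.1)

corrupted = coded.copy()
corrupted[0] ^= 1                      # channel flips one coded bit
decoded = repetition_decode(corrupted) # majority vote still recovers the data
```

The redundancy (R < 1) is what lets the decoder detect and correct the flipped bit, at the cost of a threefold increase in transmitted bits.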
Channel coding is an important component in a digital communication system. Its
purpose is to improve the performance of digital communication systems so that these
systems can approach the theoretical limit given by the Shannon channel capacity [1].
In earlier years, the most commonly used FEC codes were algebraic block codes,
convolutional codes, and trellis codes. Since the mid-1990s, turbo and low-density
parity-check (LDPC) codes have become popular in communication system designs
because of their ability to approach and/or achieve the Shannon channel capacity.
FEC is one of the most important disciplines of digital communications and has a
very well developed theoretical foundation. Due to the scope of this book, it is not
possible to cover all of its details. We will only mention some of its aspects related to
synchronization when appropriate. Interested readers can find abundant information
regarding coding theory, code design, and code implementation in many
references and textbooks, including [2] and [3].
In many communication systems, especially those intended for communication over
fading channels, the coded bits are interleaved before being further processed. Briefly,
an interleaver changes the order of the input bits. As a result, the consecutive coded bits
are distributed over a longer time period when transmitted. Interleaving makes the
effects of impulsive noise and channel deep fading spread over different parts of
the coded bit stream, so that the impact of the noise and fading on the decoding in
the receiver is reduced.
Because an interleaver only changes the order of the coded bits, the numbers of its
input and output bits are the same. Thus, the interleaver can be viewed as a rate-one
encoder, which provides no coding gain over static channels.


The most popular interleavers used for digital transmissions are block interleavers [4,
5] and convolutional interleavers [6, 7]. Similar to error correction coding, interleaving
does not directly affect synchronization. We will not elaborate on its functions further.
The interested reader can find the relevant information in the available references.
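The row/column structure of a block interleaver can be sketched in a few lines (a simplified illustration with our own function names, not an implementation from the references):

```python
import numpy as np

def block_interleave(bits, rows, cols):
    # Block interleaver: write the coded bits into a rows x cols array
    # row by row, then read them out column by column.
    return np.asarray(bits).reshape(rows, cols).T.flatten()

def block_deinterleave(bits, rows, cols):
    # Inverse operation in the receiver: write column by column,
    # read row by row.
    return np.asarray(bits).reshape(cols, rows).T.flatten()

coded = np.arange(12)                    # stand-in for 12 coded bits
tx_order = block_interleave(coded, 3, 4) # adjacent coded bits now 3 apart
restored = block_deinterleave(tx_order, 3, 4)
```

Because consecutive coded bits are separated by `rows` positions in the transmitted stream, a burst of channel errors is spread across many codewords after deinterleaving, which is exactly the effect described above.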

1.3.1.2 Data Symbol Mapping


In order to be modulated and transmitted over a communication channel, the binary
coded bits are usually grouped to form data symbols. Thus, each data symbol can
represent one or more coded bits. Depending on the modulation technique used for
transmission, the data symbols may be represented by a symbol constellation, as shown
in Figure 1.3. The most popular types of modulation are amplitude modulation (AM),
binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), multiple
phase-shift keying (MPSK), and quadrature amplitude modulation (QAM).
Note that the bits mapped to the constellation points shown in the figure satisfy a
special property: any two adjacent constellation points differ by only one bit.
This is called Gray mapping or Gray coding. The property is important for achieving
good system performance because, at the receiver, a symbol error most likely occurs
when an adjacent constellation point is mistaken for the point of the actually
transmitted symbol. With Gray mapping, such a symbol error will, with high
probability, result in only a one-bit error, which is easier for the channel
decoder to correct.
Depending on the symbol constellation used, different numbers of coded bits are
mapped to each constellation point. The simplest modulation constellations are
BPSK and QPSK. The BPSK constellation is used if one bit is mapped to a data symbol:
a bit 0 is mapped to the real value 1, and a bit 1 is mapped to the real value −1. The
QPSK constellation is used if two coded bits are mapped to a data symbol corresponding
to a point in a QPSK constellation. For example, 00 is mapped to the complex
point 1 + j, 10 to −1 + j, etc.
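The QPSK mapping just described can be sketched as a small lookup table. The exact pair-to-point assignment below is one common Gray labeling consistent with the 00 → 1+j, 10 → −1+j examples; conventions vary between systems:

```python
import numpy as np

# Gray-mapped QPSK lookup table: each pair of coded bits selects one
# constellation point, and adjacent points differ in exactly one bit.
QPSK = {
    (0, 0):  1 + 1j,
    (1, 0): -1 + 1j,
    (1, 1): -1 - 1j,
    (0, 1):  1 - 1j,
}

def map_qpsk(bits):
    # Group the coded bits in pairs and map each pair to a point.
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([QPSK[p] for p in pairs])

symbols = map_qpsk([0, 0, 1, 0])   # bit pairs (0,0) and (1,0)
```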

Figure 1.3 BPSK, QPSK, 8PSK, and 16QAM symbol constellations


When each data symbol represents more than two coded bits, MPSK and QAM
constellations are often used. An MPSK constellation consists of 2^M points evenly
distributed on a circle centered at the origin, and each data symbol corresponds to
M coded bits. The 8PSK constellation is shown in Figure 1.3(c). As shown there, the
triplet 000 maps to the constellation point cos(π/8) + j sin(π/8),
001 maps to sin(π/8) + j cos(π/8), etc.
The most popular QAM constellations have 2^N constellation points on a square lattice
grid. Therefore, each constellation point represents a data symbol mapped from an
N-tuple of coded bits. The 16QAM constellation with four-bit Gray mapping, and the
coordinates of its constellation points in the complex plane, is shown in Figure 1.3(d).
So far in this section, we have assumed that the output of the channel encoder is
coded binary bits, and these bits are mapped to complex-valued data symbols. There are
also channel encoders that directly generate complex data symbols from the information
bits. One example is trellis coding, which was widely used during the 1980s and early
1990s. However, from the viewpoint of synchronization, only the form of the channel
modulation symbols is relevant. Therefore, in the rest of this book, we are interested
only in the type of data symbol constellations that is used in the transmission regardless
of how the symbols are generated.
In the literature, data symbols as described above are often referred to as data
modulation symbols. However, to distinguish the data modulation symbols from the
channel modulation symbols to be discussed below, we will refer to them simply as data
symbols. For single-carrier communications, data and channel modulation symbols are
the same. However, they are different for OFDM and CDMA communication systems.

1.3.1.3 Formation of Data Packets/Streams for Transmission


The data symbol sequence as generated above can be directly used to generate baseband
signal waveforms. However, in many digital communication systems, additional infor-
mation may need to be transmitted together with the basic data. For example, in multiple
access systems, there may be data from more than one user that need to be transmitted.
Moreover, there are control signals such as signals for synchronization that need to be
transmitted as well.
Data symbols carrying signaling/data information are generated in ways similar to
those described in Section 1.3.1.2. The data and signaling symbols that are transmitted
together may use different symbol constellations. For example, the data symbols may
have 16QAM or 64QAM constellation in order to improve the transmission spectrum
efficiency. In contrast, the signaling symbols, which are used to facilitate synchroniza-
tion and for other purposes, such as pilot or reference symbols for performing channel
estimation, are often transmitted in BPSK or QPSK. Nonetheless, their symbol rates of
transmission are usually the same.
Different types of communication systems may have different data packet/stream
transmission formats. For example, in wireline digital transmission, the control signals
for training the receiver are usually sent at the beginning of a data communication
session. Then regular data are transmitted continuously as a long symbol stream.
Because the wireline channel does not change much in a short time duration, no


additional training signals are sent during the normal data transmission/receiving
session to improve the efficiency of the system.
In contrast, for wireless communication, the channel often experiences fast fading. As
a result, the data symbols to be transmitted are usually organized in packets, which are
transmitted consecutively to form a long data stream. Each packet has a short time span
and is embedded with synchronization signals. In the receiver, these synchronization
signals, such as pilot symbols, are used to perform channel estimation for the estimation
of data symbols, as well as to achieve carrier and timing synchronization. These signals
can also be used to obtain other information from systems that the receiver needs to
communicate with.

1.3.1.4 Generation of Baseband Signal Waveform – Spectrum/Pulse Shaping


Transmitter channel modulation symbols, or simply channel symbols, are generated
from the data symbols described above and converted to analog baseband waveforms.
A channel symbol is represented by a complex number belonging to one of the
modulation symbol constellations, including BPSK, QPSK, MPSK, QAM, etc. The
baseband analog signal is generated by multiplying channel symbols with time-shifted
continuous-time pulses that satisfy certain time- and frequency-domain characteristics.
The Fourier transform of the most commonly used time pulse, denoted by gT(t), for
generating the analog baseband signal x(t) is a square-root raised cosine (SRCOS)
function, i.e.,

    GT(f) ≜ FT[gT(t)] =
        1,                                                    |f| ≤ (1 − β)/(2T)
        sqrt( 0.5 cos( (πT/β)(|f| − (1 − β)/(2T)) ) + 0.5 ),  (1 − β)/(2T) < |f| ≤ (1 + β)/(2T)
        0,                                                    |f| > (1 + β)/(2T)
                                                                                   (1.2)
The time pulse gT(t) in the transmitter is actually generated by multistage processing,
which will be described later. It has a large peak at t = 0 and decays to zero as t goes
to positive or negative infinity.
In single-carrier communication systems, the data symbols are directly used as the
channel modulation symbols to generate the analog baseband signals. Thus, no complex
conversion is needed, except maybe a single scaling operation. However, in DS-CDMA
and OFDM communication systems, additional processing steps are performed to
convert the data symbols to the suitable forms of channel symbols.
Generating baseband signal pulses with the desired spectrum from the channel
symbols is often called spectrum shaping or pulse shaping. This process is the same
for all three types of communication systems mentioned above. A block diagram of
baseband waveform generation is shown in Figure 1.4(a).
Channel modulation symbols can be viewed as complex-valued random variables.
They are converted to the analog form by digital-to-analog converters. The output of
a DAC may be analog voltage impulses or rectangular pulses. Every T seconds, two


DAC outputs are generated for every complex channel symbol, one from its real part
and the other from its imaginary part. To simplify our discussion, we will assume that
the DAC outputs are impulses. Practically, it is more common to use the “sample-and-
hold” DACs to generate output in the form of rectangular pulses with a width equal
to T. The description given below is also applicable to such DACs with minor
modifications.
The power spectrum of a channel symbol sequence has a constant magnitude as it is
assumed that the channel symbols are, by design, uncorrelated in most systems. For
T-spaced symbols, the spectrum is periodic with a period of 1/T [8] as shown in
Figure 1.4(b).
It is possible to generate the analog baseband pulses by suitable analog low-pass
filtering of the analog impulses generated by DACs directly from the symbol sequence
as indicated in Figure 1.4(b). Because the analog filter performs both spectrum/pulse
shaping and image rejection, it is quite demanding to implement. In practice, it is more
efficient to perform spectrum shaping first by digital low-pass filtering. Once converted
to the analog domain, the spectral images can be rejected by using a simple analog filter
as described below.
To perform digital spectrum shaping, the channel symbol sequence is first up-
sampled by inserting zeros between adjacent symbols. As an example, we consider
the simplest case that one zero is inserted between any two adjacent channel symbols.
The spectrum of the symbol sequence does not change after zeros are inserted. How-
ever, because the sample spacing is now equal to T/2, the spectrum of the new sequence
has a base period of 2/T as shown in Figure 1.4(c). The zero-inserted sequence is filtered
by a digital low-pass filter, which has the desired frequency response such as SRCOS
previously mentioned. Figure 1.4(d) shows the power spectrum of the sequence at the
filter output.
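The claim that zero insertion leaves the spectrum unchanged while the period becomes 2/T can be checked numerically. In this sketch (symbol values are randomly generated stand-ins), the DFT of the zero-stuffed sequence is exactly two back-to-back copies of the DFT of the original sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=64)   # stand-in channel symbols

# Insert one zero between adjacent symbols (2x upsampling). The spectrum
# values are unchanged; only the sample spacing (now T/2) changes, so
# the spectral period becomes 2/T.
up = np.zeros(2 * len(symbols))
up[0::2] = symbols

S = np.fft.fft(symbols)   # spectrum of the original sequence
U = np.fft.fft(up)        # spectrum of the zero-stuffed sequence
# The 128-point spectrum of `up` is two copies of S, one per period:
periodic = np.allclose(U[:64], S) and np.allclose(U[64:], S)
```

The digital low-pass (e.g., SRCOS) filter then keeps only the baseband copy, which is the spectrum shaping shown in Figure 1.4(d).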
DACs are used to convert the digital sample sequence at the digital low-pass filter’s
output to T/2 spaced analog impulses, which have the power spectrum shape shown in
Figure 1.4(d). Given that the spectrum has a period of 2/T, the image bands are located
at 2m/T Hz, m = ±1, ±2, . . . A simple analog low-pass filter shown in the figure filters
the impulse sequence to retain the baseband signal spectrum centered at zero frequency
and removes the image spectra. The output of the analog low-pass filter is the desired
baseband signal waveform as shown in Figure 1.4(e).
In practical implementations, sample-and-hold circuits are usually incorporated into
DACs to generate rectangular pulses. For T/2 spaced input digital samples, the widths of
the rectangular pulses are equal to T/2. Thus, the power spectrum at the DAC output is
equal to the spectrum shown in Figure 1.4(d) multiplied by a window of
sin²(πTf/2)/(πTf/2)² = sinc²(Tf/2).5 As a result, the power spectrum of the actual analog
waveform is no longer the same as the squared frequency response of the digital
spectrum shaping filter.

5. In this book, the sinc function is defined as sinc(x) = sin(πx)/(πx), as commonly used in information theory and
digital signal processing, such as in Matlab. It can also be defined as sinc(x) = sin(x)/x in some other
disciplines in mathematics.


Figure 1.4 Baseband waveform generation

Such distortions in signal spectra can be either precorrected by properly designed
spectrum shaping filters or post-corrected by appropriate analog low-pass filters.
Practically, the most convenient remedy for the distortion introduced by the rectangular
analog pulse is to increase the number of zeros between the channel symbols, i.e., to
increase the upsampling frequency. By such a design, the signal spectrum occupies only
a small portion of the main lobe of the sinc filter of the rectangular pulse, where it is
relatively flat. Therefore, very little distortion is introduced. Descriptions of the practical
baseband pulse generation using DACs with sampling-and-hold circuits can be found in
the references (e.g., [9, 10]).
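The benefit of a higher upsampling factor can be quantified. In this sketch (function name and the band-edge convention are ours), an L-times-upsampled sample-and-hold DAC holds each sample for T/L seconds, so its sinc² window is evaluated at the band edge f = 1/(2T):

```python
import numpy as np

def droop_at_band_edge(L):
    # Sample-and-hold DAC with L-times upsampling holds each sample for
    # T/L seconds, multiplying the spectrum by sinc^2(f T / L). Evaluate
    # that attenuation at the band edge f = 1/(2T).
    fT = 0.5                     # band edge in units of 1/T
    return np.sinc(fT / L) ** 2  # np.sinc(x) = sin(pi x)/(pi x)

droops = {L: droop_at_band_edge(L) for L in (2, 4, 8)}
# Higher upsampling factors keep the signal inside the flat part of the
# sinc main lobe, so the band-edge droop shrinks toward 1 (i.e., 0 dB).
```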
The generated analog baseband signal waveform can be expressed as

    xT(t) = Σ_{k = −∞}^{∞} ak gT(t − kT)                                (1.3)

where ak’s are the complex-valued channel modulation symbols at time kT, and gT(t) is
the combined impulse response of the digital low-pass filter, DAC output pulse, and the
analog low-pass filter and often has the SRCOS frequency response given by (1.2). In
other words, the overall pulse/spectrum shaping filtering is performed by the composite
functional block that consists of all of these filtering functions. It is referred to as the
transmitter filter in the rest of this book.
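The sum in (1.3) is realized in discrete time by upsampling the channel symbols and convolving with the transmitter filter taps. The closed-form time-domain SRRC expression used below is a standard formula, not taken from the text, and the parameter names are ours:

```python
import numpy as np

def srrc_taps(beta, span, sps):
    # Time-domain square-root raised-cosine taps (normalized to unit
    # energy); `span` in symbols on each side, `sps` samples per symbol.
    t = np.arange(-span * sps, span * sps + 1) / sps
    taps = np.zeros_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-12:
            taps[i] = 1 - beta + 4 * beta / np.pi
        elif abs(abs(4 * beta * ti) - 1) < 1e-12:
            # Removable singularity at t = +-1/(4 beta)
            taps[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            taps[i] = (np.sin(np.pi * ti * (1 - beta))
                       + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))) \
                      / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return taps / np.sqrt(np.sum(taps ** 2))

# x_T(t) = sum_k a_k g_T(t - kT), discretely: upsample and filter.
a = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # example QPSK symbols
sps = 8
up = np.zeros(len(a) * sps, dtype=complex)
up[0::sps] = a
xT = np.convolve(up, srrc_taps(beta=0.25, span=6, sps=sps))
```

In practice the filter is truncated to a finite span, as here, which is one reason the transmitter filter is implemented as the multistage composite described above.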

1.3.1.5 Modulation of Baseband Waveform to Carrier Frequency


In most cases, the baseband signal xT(t) given by (1.3) cannot be directly transmitted
over communication channels. The analog baseband signal is first modulated onto a
carrier frequency fc that is suitable for the application. The modulated carrier signal is
generated as
 
xc ðt Þ ¼ xT , r ðt Þ cos 2πfc t  xT , i ðt Þ sin 2πfc t ¼ Re xT ðt Þej2πfc t (1.4)

where xT,r(t) and xT,i(t) are the real and imaginary parts of xT(t). In the literature, they are
often called in-phase and quadrature components of xT(t), denoted by I(t) and Q(t),
respectively. The modulated signal can also be viewed as the real part of the product of
xT(t) and the complex sinusoid ej2πfc t . The modulator implementation and the spectrum
of the modulated carrier signal xc(t) at the carrier frequency fc are shown in Figure 1.5.
The modulated carrier signal is then amplified and filtered to meet the regulatory
requirements and is ready to be transmitted over the channel.
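The equivalence of the two forms in (1.4) is easy to verify numerically. The carrier frequency, sample rate, and baseband signal below are toy values for illustration only:

```python
import numpy as np

fc, fs = 10.0, 1000.0                 # illustrative carrier and sample rate
t = np.arange(0, 0.1, 1 / fs)
xT = (0.5 + 0.5j) * np.ones_like(t)   # toy complex baseband signal

# Two equivalent forms of (1.4): I/Q mixing with cos/sin, and the real
# part of the baseband signal times the complex carrier exp(j 2 pi fc t).
iq_form = (xT.real * np.cos(2 * np.pi * fc * t)
           - xT.imag * np.sin(2 * np.pi * fc * t))
complex_form = np.real(xT * np.exp(2j * np.pi * fc * t))
forms_agree = np.allclose(iq_form, complex_form)
```

The I/Q form is what a hardware modulator with two real mixers implements; the complex-exponential form is the notation used in the analysis throughout this book.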

1.3.2 Channel
In the field of digital communications, a channel refers to the physical medium that
connects a transmitter and a receiver. The signal sent by the transmitter propagates
through the channel before it reaches the receiver. During the transmission process,
impairments introduced by the channel distort the signal. Thus, the signal that reaches
the receiver will not be the same as its original form when it was generated at the
transmitter. The receiver regenerates the original information bits with few or no errors
from the impaired received signal.
The impairments introduced by the channel include linear and/or nonlinear distor-
tions, as well as additive noise and interference. Depending on the type of the channel,
these impairments have different forms. In most cases, the interference is from many
different and independent sources. Thus, it can be modeled as Gaussian noise according
to the central limit theorem.


Figure 1.5 Modulation of baseband signal to carrier

Based on the nature of the propagation media, communication channels can be
broadly classified as wireline and wireless channels. Within each class, they can be
further divided into different types, as shown below.

1.3.2.1 Wireline Channel


The typical wireline channels include twisted-pair transmission lines and coaxial cables.
These channels are used for transmission over telephone lines, wired local loops, Ethernet
cables, and others. Parts of such transmission channels may also include optical fibers.
Such transmission lines and cables can be viewed as two-port linear networks, which
have continuous-time CIRs. The output of the channel is the convolution of the
transmitted signal with the CIR. Hence, they may introduce linear distortions into the
received signal in the form of intersymbol interference (ISI). In addition, thermal noise
and interference from nearby electromagnetic signal sources will also corrupt the signal
passing through the wireline channel.
Except for certain types of interference, such as impulse interference, the impairments
introduced by wireline channels share the common characteristic of being slowly
time varying. This is because changes in the characteristics of the wireline impairments
are most likely caused by changes in the surrounding environment, e.g., temperature
variations. The time constant of such changes is typically on the order of minutes,
hours, or even longer. In other words, the tracking requirements for the impairments of
wirelines are usually not demanding.

1.3.2.2 Wireless Channel


A number of different types of communication systems use radio waves as the
communication medium for the transmission of information. As a result, there are many
types of wireless channels, such as satellite channels, mobile wireless channels, microwave
communication links, Wi-Fi/WLAN channels, etc. Various wireless channels have
different characteristics. For example, signals transmitted over mobile wireless channels
encounter fast fading at high vehicle speeds, whereas signals transmitted over other types
of wireless channels may encounter other types of impairments, such as high nonlinearity
in satellite communications. In this book, we will mainly consider wireless channels that
are pertinent to synchronization in mobile and Wi-Fi wireless communication systems.
Wireless communication channels also introduce linear distortions, which create ISI
in the signal at its output. In wireless systems, in which the transmitted signals are
carried by radio waves from one point to another, the linear distortion introduced by the
channel is caused by reflections of radio waves. As a result, unlike wireline channels,
which have time-continuous CIRs, the CIRs of wireless channels consist of multiple
time-discrete paths. Each of the paths is caused by the reflection from an object hit by a
ray of the radio waves. Thus, wireless channels are usually modeled as multipath
channels. Such a model is convenient to use in analyzing the behavior of the receivers
in general and the synchronization functions in particular.
Another important property of most wireless channels is that they are time varying.
This is especially true for mobile wireless communication systems, in which transmit-
ters and/or receivers could be inside fast-moving vehicles. Over such channels, the
signals received often experience different types of fading. Thus, mobile wireless
channels are often called fading channels.
Fading in mobile channels can be divided into two categories: slow fading and fast
fading. Slow fading, also called shadowing, occurs when mobiles travel through
areas with different geographical characteristics and experience different radio wave
propagation losses. Shadowing usually has a time constant of more than tens of seconds.
Such a time constant affects the overall mobile system performance but has little specific
effect on receiver synchronization.
The second type of channel fading, which usually has a fading frequency from a few
Hertz to hundreds of Hertz, is due to Doppler effects[11]. When a receiver is moving
relative to the transmitter, the frequency of the transmitted waves observed by the
receiver is different from that of the transmitted carrier. If the receiver moves toward the
transmitter, the carrier signal frequency observed by the receiver becomes higher than
the transmitted carrier frequency. Conversely, if they are moving away from each other,
the observed carrier frequency becomes lower. The change in frequency is called
Doppler frequency shift or simply Doppler frequency.
The value of the Doppler frequency is equal to the carrier frequency times the speed of
the relative movement between the transmitter and receiver divided by the radio wave
propagation speed, i.e., the speed of light. Let us assume that a mobile is moving at 100
km/hour relative to the transmitter. If the carrier frequency is 2 GHz (gigahertz), and
since the speed of light is equal to 3 × 10^8 m/sec, i.e., 1.08 × 10^9 km/hour, the Doppler
frequency shift that the mobile observes is equal to

    f_d = (100 / (1.08 × 10^9)) × 2 × 10^9 ≈ 185 Hz        (1.5)
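The arithmetic of (1.5) is easy to check with a few lines of code; the function below simply evaluates the relation f_d = (v/c) · f_c, with names and units chosen for this sketch.

```python
def doppler_shift_hz(speed_m_per_s: float, carrier_hz: float, c: float = 3e8) -> float:
    """Doppler frequency shift of (1.5): carrier frequency times the ratio
    of the relative speed to the radio propagation speed (speed of light)."""
    return carrier_hz * speed_m_per_s / c

# The example from the text: 100 km/hour relative speed, 2 GHz carrier.
fd = doppler_shift_hz(100e3 / 3600, 2e9)
print(round(fd, 1))  # 185.2 Hz, matching (1.5)
```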


If a vehicle is moving in open space, e.g., on the highway, the observed Doppler
frequency shift behaves as a single tone. Namely, the carrier has a simple frequency
change, which is equivalent to an offset of the carrier frequency that the receiver can
correct by carrier synchronization. However, when a vehicle is in complex areas, e.g., in
urban environments, the received signal’s carrier frequency, phase, and magnitude will
all be time varying. In this case, the Doppler frequency can be modeled as a random
variable. Its power spectrum, called the Doppler spectrum, is distributed between −f_{d,max}
and f_{d,max}, where f_{d,max} is computed by (1.5) at the vehicle speed. The most popular
Doppler spectrum model, called the Jakes model, shown in Section 1.2 of [11], is given by
    P(f) = C / ( π f_{d,max} √(1 − (f/f_{d,max})²) ),   |f| ≤ f_{d,max}
    P(f) = 0,                                           |f| > f_{d,max}        (1.6)

where f_{d,max} is the maximum Doppler frequency and C is a constant that normalizes the
expression.
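A direct transcription of (1.6) can serve as a sanity check. The function below is illustrative: it leaves the normalization constant C as a parameter and simply returns zero at and beyond the band edge |f| = f_{d,max}, where (1.6) is singular.

```python
import math

def jakes_psd(f: float, fd_max: float, C: float = 1.0) -> float:
    """Jakes Doppler power spectrum of (1.6). Nonzero only inside
    (-fd_max, fd_max), with the characteristic rise toward the band edges;
    here the singular edge points are simply mapped to zero."""
    if abs(f) >= fd_max:
        return 0.0
    return C / (math.pi * fd_max * math.sqrt(1.0 - (f / fd_max) ** 2))

# The spectrum is smallest near f = 0 and peaks toward +/- fd_max:
print(jakes_psd(0.0, 185.0) < jakes_psd(180.0, 185.0))  # True
```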
Above, we discussed some characteristics of wireless channels that are useful when
considering the channels’ impact on synchronization functions. More information
regarding the general characteristics of wireless and mobile channels can be found in the
literature, including the classic book [11] on this topic and other textbooks such as
[5, 12, 13].

1.3.3 Receivers and Their Operations


The modulated signal at a carrier frequency transmitted over the channel is received by
the receiver on the other end of the channel. The function of the receiver is to recover
the transmitted information bits from the signal received from the channel. Its operations
essentially undo the transmitter operations in the reverse order. The functional
blocks of the receiver are frequency down-conversion, receiver filtering, digital sample
generation, generation of the estimates of data symbols and/or decoding bit metrics,
and channel deinterleaving and decoding. After these operations, the transmitted
information bits are regenerated with no or few errors. A block diagram of a typical
receiver is shown in Figure 1.6.
The functions and operations of these receiver blocks are described below.

1.3.3.1 Frequency Down-Conversion


The transmitted passband signal at carrier frequency f_c, x_c(t), is given by (1.4) with its
spectrum shown in Figure 1.5(b). After passing through the channel, the received
passband signal can be expressed as

    r_c(t) = r_r(t) cos 2πf_c t − r_i(t) sin 2πf_c t = Re[ r(t) e^{j2πf_c t} ]        (1.7)

where r(t) = r_r(t) + j r_i(t) is the complex baseband received signal, which is the
transmitted baseband waveform convolved with the time response of the equivalent
baseband channel [5]. It can be expressed as


Figure 1.6 Block diagram of a typical receiver

    r(t) = Σ_{k=−∞}^{∞} a_k g_c(t − kT − τ) + z_c(t)        (1.8)

where g_c(t) is the impulse response of the composite channel including the transmitter
filter and the channel, τ is an unknown but constant delay, the a_k’s are the channel
modulation symbols, and z_c(t) is an additive white Gaussian noise (AWGN) process
introduced by the channel.
The real and imaginary parts of r(t), i.e., r_r(t) and r_i(t), also called the I and Q
components, can be generated as follows.
Multiplying r_c(t) by cos 2πf̂_c t, where f̂_c is the locally generated down-conversion
frequency, sometimes called the demodulation frequency, which is nominally equal to f_c,
we obtain

    r_c(t) cos 2πf̂_c t = r_r(t) cos 2πf_c t cos 2πf̂_c t − r_i(t) sin 2πf_c t cos 2πf̂_c t
                       = 0.5 r_r(t) cos 2π(f_c − f̂_c)t − 0.5 r_i(t) sin 2π(f_c − f̂_c)t
                         + terms with frequency f_c + f̂_c        (1.9)

The last terms on the right side of (1.9) are at about twice the carrier frequency and
can be removed by low-pass filtering of r_c(t) cos 2πf̂_c t. The first two terms can be
expressed as 0.5 Re[ r(t) e^{j2πΔf t} ], where Δf ≜ f_c − f̂_c is called the carrier frequency offset
of the baseband signal at the frequency down-converter output. Similarly,

    r_c(t) sin 2πf̂_c t = −0.5 Im[ r(t) e^{j2πΔf t} ] + terms with frequency f_c + f̂_c        (1.10)

Thus, Im[ r(t) e^{j2πΔf t} ] can be generated by low-pass filtering of r_c(t) sin 2πf̂_c t with a sign
change. The frequency down-converter implementation, which generates the baseband
signal based on (1.9) and (1.10) with carrier frequency offset Δf, is shown in Figure 1.7.
The sinusoids for frequency down-conversion are generated by a local oscillator (LO)
with down-conversion frequency f̂_c, which may be slightly different from the modulation
frequency f_c due to the inaccuracy of the LO. As a result, there will be a nonzero
frequency offset Δf. The offset can be corrected either by adjusting the LO frequency or


Figure 1.7 Frequency down-converter implementation

digitally later in the received signal samples. In both cases, the information from the
carrier synchronization block is used, as will be shown in Chapter 5.
The low-pass filters only need to reject the image signals at twice the carrier
frequency. Thus, their design and implementation are not very demanding. In practice,
they also serve the purpose of antialiasing filtering for the analog-to-digital converters
(ADCs) and need to meet the necessary requirements.
Thus, the frequency down-conversion process described above converts the modulated
waveform at the carrier frequency to the equivalent baseband signal.
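The mixing and low-pass filtering steps of (1.9) and (1.10) can be sketched numerically. This is a toy model, not a receiver design: the carrier and sampling frequencies are made up, and a crude moving average stands in for the low-pass/antialiasing filter.

```python
import numpy as np

# Illustrative parameters (not from the text).
fs = 1_000_000.0            # sample rate of the simulated waveform, Hz
fc = 100_000.0              # transmitter carrier frequency f_c, Hz
fc_hat = 100_050.0          # local down-conversion frequency; Delta_f = -50 Hz
t = np.arange(2048) / fs

# Passband signal of (1.7) for a constant baseband signal r(t) = 1 + 0j.
rc = np.cos(2 * np.pi * fc * t)

# Mix with the LO sinusoids as in (1.9)/(1.10); the minus sign on the
# Q branch is the "sign change" noted in the text.
i_mix = rc * np.cos(2 * np.pi * fc_hat * t)
q_mix = -rc * np.sin(2 * np.pi * fc_hat * t)

# Crude moving-average low-pass filter to reject the f_c + f_c_hat terms.
k = 64
lpf = np.ones(k) / k
baseband = np.convolve(i_mix, lpf, mode="same") + 1j * np.convolve(q_mix, lpf, mode="same")

# The result approximates 0.5 * r(t) * exp(j*2*pi*Delta_f*t): magnitude 0.5,
# rotating slowly at the residual carrier frequency offset Delta_f.
print(abs(baseband[1024]))
```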

1.3.3.2 Receiver Filtering and Channel Signal Sample Generation


The baseband waveforms generated by the frequency down-conversion block are
converted to digital samples after optional additional filtering. The characteristics of the
overall receiver front-end filtering functions, which we will collectively call the
receiver filter, directly affect the receiver’s performance. It has been shown in the
literature that the optimal receiver filter is a matched filter (MF) of the received signal.
The MF is optimal for receivers that employ equalizers for data symbol recovery in
single-carrier communication systems [14]. It is also the optimal receiver front-end for
various synchronization functions including initial acquisition, carrier synchronization,
and timing synchronization, as will be discussed in Chapters 2, 3, 5, and 6 of
this book.
The implementation of the receiver filter as an MF requires exact knowledge
of the composite channel, which includes the transmitter filter and the channel.
In practice, as this is not likely to be available, the receiver filter is usually designed
only to match the transmitter filter or the average composite channel characteristics.
The receiver performance with such an approximate MF is therefore suboptimal.
The receiver filter can be implemented by using analog, digital, or mixed signal
processing. With analog signal processing, the MF or approximate matched filtering is
performed on the analog baseband signal. The output is sampled by ADCs at 1/T at the
time instant required by the MF, or sampled at m/T, where m is an integer. When using
digital processing, the baseband signal is antialiasing filtered and sampled at a rate
higher than its Nyquist sampling rate [8, 15], such as 2/T or higher.


The samples are filtered in the digital domain to implement the MF or approximate MF.
Such a receiver filter can also be implemented by a combination of analog and digital
filtering.
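As a toy illustration of the digital implementation, a discrete-time MF is the time-reversed complex conjugate of the pulse it is matched to; the pulse shape and noise level below are invented for the example.

```python
import numpy as np

# An assumed (known) composite pulse shape; in practice the receiver often
# only matches the transmitter filter, so this is the idealized case.
pulse = np.array([0.2, 0.6, 1.0, 0.6, 0.2])

# The matched filter: time-reversed complex conjugate of the pulse.
mf = np.conj(pulse[::-1])

# Received waveform: one unit symbol shaped by the pulse, plus mild noise.
rng = np.random.default_rng(1)
rx = np.concatenate([np.zeros(10), pulse, np.zeros(10)]) + rng.normal(0, 0.05, 25)

# The MF output peaks where the pulse is fully aligned, maximizing the
# sampled signal-to-noise ratio at that instant.
y = np.convolve(rx, mf)
print(int(np.argmax(np.abs(y))))
```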
Without getting into the details of implementation, we can model the analog baseband
signal at the receiver filter output as

    y(t) = e^{j2πΔf t} Σ_{k=−∞}^{∞} a_k g(t − kT − τ) + z(t)        (1.11)

where g(t) is the impulse response of the overall channel including the transmitter filter,
channel, and the receiver filter; τ is an unknown but constant delay; Δf is the frequency
offset described in the last subsection; and z(t) is an AWGN process. If the sampling
delay is equal to τ and the sampling rate is 1/T, the sample y_n can be expressed as

    y_n ≜ y(nT + τ) = e^{j2πΔf(nT+τ)} Σ_{k=−∞}^{∞} a_k g(nT − kT) + z_n        (1.12)

where z_n is AWGN. If g(lT) = 0 for l ≠ 0, such a channel satisfies the Nyquist (ISI-free)
criterion [15, 16] and is referred to as a Nyquist channel, and we have

    y_n = a_n |g(0)| e^{j[2πΔf(nT+τ) + θ_0]} + z_n        (1.13)

where θ_0 is the phase of the overall channel gain g(0).

For example, if the transmitter filter is an SRCOS filter, the channel has a single path,
i.e., no time dispersion, and the receiver filter matches the transmitter filter, then the
overall channel has a raised-cosine (RCOS) frequency response. The CIR of such
Nyquist channels has a form similar to a sinc function with its peak at g(0) [5]. Thus,
the sample given by (1.12) is taken at the peak of the CIR and y_n is ISI free. Otherwise, there
will be ISI terms e^{j2πΔf(nT+τ)} Σ_{k=−∞, k≠n}^{∞} a_k g(nT − kT) in y_n.
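The ISI-free property of a Nyquist channel is easy to verify numerically: sampling a raised-cosine impulse response at t = kT yields zero for every k ≠ 0. The helper below uses the standard RCOS time-domain formula with an illustrative roll-off factor.

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.25):
    """Raised-cosine (RCOS) impulse response g(t). Since g(kT) = 0 for all
    integers k != 0, a channel with this overall response satisfies the
    Nyquist (ISI-free) criterion."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    near = np.isclose(denom, 0.0)
    # Removable singularity at |t| = T / (2 * beta).
    special = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return np.where(near, special, num / np.where(near, 1.0, denom))

# Sampling at t = kT: a peak of 1 at k = 0 and zeros (to machine precision)
# at every other symbol instant, i.e., no intersymbol interference.
g = raised_cosine(np.arange(-4, 5))
print(np.round(g, 12))
```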
The digital samples are often generated at a rate of 1/T or 2/T depending on the type
of processing that follows. The sampling rate should be synchronous to the remote
transmitter’s channel symbol rate, which is detected and controlled by the receiver
timing synchronization block as will be discussed in Section 1.4.2 and Chapter 6.
There are two ways to generate digital samples that are synchronous to the remote
transmitter symbol timing. Traditionally, they are generated by sampling the baseband
waveform, and the sampling is done by using ADCs with a sampling clock that is
synchronous to the remote transmitter timing. In recent years, it has become more
common to generate digital samples first by using ADCs with sampling clocks that are
free running or asynchronous to the remote transmitter timing. Samples with the desired
timing are generated from the asynchronous digital samples through digital resampling.
Digital resampling, also called digital rate conversion, is the subject of Chapter 7 of
this book.
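A minimal sketch of the resampling idea, assuming a simple linear interpolator (practical designs use polyphase or Farrow structures, as covered in Chapter 7); the function name and ratio below are illustrative.

```python
import numpy as np

def resample_linear(x, ratio, mu0=0.0):
    """Resample x so that output sample m is taken at input time
    m * ratio + mu0 (in input-sample units), via linear interpolation.
    A stand-in for the polyphase/Farrow resamplers used in practice."""
    x = np.asarray(x, dtype=float)
    m = np.arange(int((len(x) - 1 - mu0) / ratio))
    times = mu0 + ratio * m
    n = np.floor(times).astype(int)
    mu = times - n                      # fractional interval
    return (1.0 - mu) * x[n] + mu * x[n + 1]

# Example: the free-running ADC clock is 0.5% off the desired rate, so
# resampling by ratio 1.005 realigns the samples with transmitter timing.
x = np.arange(20, dtype=float)          # a ramp is reproduced exactly
y = resample_linear(x, 1.005)
print(len(y), y[1])
```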

1.3.3.3 Data Symbol Detection/Estimation


The next step of the receiver operation is to estimate the transmitted data symbols from
the digital samples of the baseband signal. This process varies depending on the type of
the digital communications under consideration, namely, whether it is single-carrier,
multicarrier (e.g., OFDM), or spread-spectrum (e.g., DS-CDMA) communications. In
this section, we consider the first case.
In single-carrier communications, data symbols are directly used as the channel
modulation symbols. If the overall channel satisfies the Nyquist criterion, as shown
by (1.13), the digital samples at the receiver filter output are the transmitted data symbols
a_n scaled by the channel gain |g(0)|e^{jθ_0} with an additional phase rotation due to the
frequency offset Δf. They are corrupted by additive noise but are ISI free. Figure 1.8(a)
shows such received signal samples with a carrier phase error e^{jθ(t)}, where θ(t) =
2πΔf(t + τ) + θ_0, introduced by the channel and the down-conversion frequency error,
and corrupted by AWGN. After the phase error is corrected by the carrier synchronization
block and with proper scaling, the samples are unbiased data symbol estimates as shown
in Figure 1.8(b). On the other hand, when the signal is transmitted over a channel with ISI,
the received signal samples are also corrupted by ISI in addition to AWGN. In the latter
case, an equalizer can be used to remove the ISI for more reliable data symbol detection.
The most widely used equalizers are the linear equalizer (LE), the decision feedback
equalizer (DFE), and the maximum likelihood sequence estimator (MLSE). The outputs
of the first two types of equalizers are the estimates of the data symbols, which have the
same form as the estimates of the symbols received from the ISI-free channels shown in
Figure 1.8(b). In both cases, the symbol estimates are used to generate the decoding
metrics for channel decoding. If MLSE is employed, decoding metrics can be directly
generated from the received signal samples without recovering the data symbols.
Equalization is another important technical area in digital communications. Again,
detailed descriptions of various equalization techniques are beyond the scope of this
book. However, they are well documented in textbooks on digital communications,
including [3, 5, 17, 18].

Figure 1.8 Samples of 16QAM signal for data symbol and decoding metric generation
The generation of data symbol estimates and the decoding metrics in CDMA and
OFDM communication systems are discussed in Sections 1.5 and 1.6.

1.3.3.4 Generation of Decoding Metrics


The output samples generated by the front-end processing of many types of receivers,
with the carrier phase error removed and with proper scaling, are unbiased estimates
of the transmitted data symbols. Such unbiased estimates can be modeled as the
transmitted data symbols, i.e., the corresponding constellation points, corrupted by
AWGN and other distortions.
Receivers that remove phase errors between the transmitted data symbols and their
estimates before further processing are called coherent receivers. Communication
systems with coherent receivers are called coherent communication systems. There are
also communication systems that employ noncoherent coding/modulation techniques,
which can operate without correction of carrier phase errors. However, the performance
of noncoherent receivers is usually inferior to that of coherent receivers. Thus, coherent
communications are preferred in system designs.
Data symbol constellations are usually in the form of BPSK, QPSK, MPSK, and
QAMs. As an example, the estimates of transmitted 16QAM symbols with carrier phase
errors are shown in Figure 1.8(a). Figure 1.8(b) shows the symbol estimates after carrier
phase error correction and the expected constellation points.
To perform coherent data symbol detection, it is necessary to correct the carrier
phase error so that the expectations of the symbol estimates align with the transmitted
symbols’ constellation points. As shown in Figure 1.8(b), square A represents the
digital sample, which is an estimate of the transmitted data symbol, generated from
the receiver filter. The gray dots in the background represent the constellation points,
one of which is the transmitted data symbol. When there is noise and distortion
introduced by the channel, the estimate will not be exactly on top of the transmitted
symbol.
It can be shown that if the error in the estimate is solely due to AWGN, the maximum
likelihood (ML) estimate of the symbol is the constellation point closest to the estimate,
e.g., B in Figure 1.8(b). However, in systems with channel coding, it will be necessary
to compute the likelihood or log-likelihood values of the coded bits that are mapped to
the transmitted symbol for performing channel decoding.
The most commonly used decoding metrics of coded bits are the log-likelihood ratios
(LLRs).6 The LLR values for BPSK and QPSK symbol constellations are relatively easy
to compute. For the BPSK constellation on the real axis shown in Figure 1.3(a), the log-
likelihood value of the transmitted bit is simply the real part of the estimate divided by
the noise variance as will be shown by the example in Section 2.1.2. For the QPSK
constellation shown in Figure 1.3(b), the real and imaginary parts of the estimate
divided by the noise variance are the LLRs of the first and the second bits that determine
the QPSK data symbol.

6 The concepts of the likelihood, log-likelihood, the maximum likelihood, and the log-likelihood ratio will be
introduced in Chapter 2.
It is quite complex to compute the exact LLR of a coded bit for high-order data
symbol constellations. In general, it is necessary to evaluate the likelihood values of the
received signal sample with respect to all of the possible transmitted data symbols, i.e.,
all of the constellation points [19]. Therefore, in practical implementations, approximate
LLRs are often used instead of the exact ones with little loss in performance.
One such approach to computing approximate LLRs is that, instead of evaluating
all of the likelihood values, only the two largest ones, one for the bit equal to one
and the other for the bit equal to minus one, are computed. Moreover, it can be shown
that maximizing the probability of the received sample with regard to the transmitted
data symbol is equivalent to minimizing the distance between the received sample and
the corresponding constellation point [5]. Therefore, only the two closest constellation
points, corresponding to the coded bit being one and minus one, need to be considered.
Let us use the received signal sample A shown in Figure 1.8(b) as an example. For
the first bit in the transmitted symbol, the closest constellation points for the first bit
being one and minus one are B and C, respectively. When the noise is stationary, the
LLR of the first bit is simply equal to the difference of the distance between A and
C minus the distance between A and B divided by the noise variance. Similarly, the
LLR of the second bit is equal to the difference of the distance between A and D minus
the distance between A and B divided by the noise variance.
There are a number of methods to compute the LLRs of the coded bits of the
transmitted data symbol for QAM constellations. The theory and the details of these
approaches can be found in [19, 20].
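The two-nearest-point approximation described above (often called the max-log approximation) can be sketched as follows. The Gray bit-to-symbol mapping and the LLR sign convention below are assumptions of this example, not taken from the text, and squared Euclidean distances are used as the AWGN metric.

```python
# Gray-mapped 16QAM: two bits select the I level, two bits the Q level.
# This particular mapping is illustrative; real systems fix it in a standard.
LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
CONSTELLATION = {
    bits_i + bits_q: complex(LEVELS[bits_i], LEVELS[bits_q])
    for bits_i in LEVELS for bits_q in LEVELS
}

def max_log_llrs(y, noise_var):
    """Approximate (max-log) LLRs of the 4 coded bits of a 16QAM symbol:
    for each bit, keep only the nearest constellation point with that bit
    equal to 0 and the nearest with it equal to 1; the LLR is the
    difference of squared distances divided by the noise variance
    (positive values favor the bit being 1 in this convention)."""
    llrs = []
    for b in range(4):
        d0 = min(abs(y - s) ** 2 for bits, s in CONSTELLATION.items() if bits[b] == 0)
        d1 = min(abs(y - s) ** 2 for bits, s in CONSTELLATION.items() if bits[b] == 1)
        llrs.append((d0 - d1) / noise_var)
    return llrs

# A received sample near the constellation point 3 - 1j gives confident LLRs.
print([round(v, 2) for v in max_log_llrs(2.9 - 1.1j, noise_var=1.0)])
```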

1.3.3.5 Deinterleaving and Decoding


The LLR values generated in the previous processing step are deinterleaved to undo the
interleaving performed in the transmitter. After deinterleaving, the correlated noise,
interference and other distortions contained in the received signal are nearly uniformly
distributed in the resulting LLRs that are used by the decoder as decoding metrics.
Under normal channel conditions, the channel decoder recovers the original transmitted
information bits with no or few errors. These recovered bits are sent to the information
sink to regenerate the original information sent by the transmitter.
The theory and operations of deinterleaving and decoding are not essential to the
understanding of synchronization and will not be explored further.
The transmitter and receiver operations continue until all of the available information
is sent and recovered.

1.4 Overview of Synchronization Functions in Digital Communication Systems

In Section 1.3, we provided a general view of the major transmitter and receiver
functions in digital communication systems. The descriptions were given in the context
of their operations in a data transmission/receiving session. In addition to what has been
discussed, there are other important transmitter and receiver functions, which are
introduced in this section. These functions, including initial acquisition, carrier
synchronization, and timing synchronization, are commonly classified as synchronization
functions.
Synchronization functions play an important role in achieving reliable and robust
communication system performance. They are the key means to establish a reliable
communication link during the initial stage of communication and to maintain the
receiver operations with the desired performance during normal data sessions.
Synchronization is especially important for communication over mobile wireless channels
because such channels change rapidly due to fast fading. As mobile/wireless
communications have become popular in recent years, a good understanding of
synchronization in digital communication systems is even more important today than before.
Synchronization in a communication system involves both the transmitter and the
receiver. While most of the synchronization tasks are performed by the receiver, the
transmitter also plays an important role. Specifically, special control signals are usually
included in the transmitted signals to facilitate achieving synchronization by the
receiver.
The theory, characteristics, and implementation of synchronization in digital com-
munication systems are the subjects of this book. The details of the synchronization
functions will be presented and discussed in later chapters. In this section, we provide
brief overviews of the three major synchronization functions and their roles during
various stages of transmission and reception in digital communication systems.

1.4.1 Initial Acquisition


Initial acquisition is the first task a device executes when it tries to establish a
communication link with the desired remote transmitter. The objectives of initial acquisition are
to establish quickly and reliably a communication link and to be ready for the data
sessions that follow. It significantly affects the overall link and system performance.
To facilitate initial acquisition by device receivers in a multiple-access
communication system, the transmitters that connect to a network constantly transmit signals that
are known to all of the devices in the area covered by the network. Receivers that need
to establish communication with the network continuously search for the known signal
that they expect to detect. Once such a signal is detected by a device, the information in
the detected signal is extracted. Such information includes various system parameters,
in particular, the timing of the start of the data frames. Thus, initial acquisition usually
also performs frame synchronization.
When starting operations for the first time, the receiver’s parameters are not accurately
known to the device itself. Thus, another task of initial acquisition is to calibrate
and initialize various receiver parameters by using the information obtained from the
detected signal. Such parameters include the accuracy of the LO frequency, the desired
timing phase and frequency of the digital samples, the receiver’s gain, etc. Once the
system information is acquired and the parameters are calibrated, the receiver is ready
to start normal data sessions.
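The search for a known transmitted signal described above can be sketched as a sliding correlator; the sequence length, noise levels, and detection threshold below are illustrative only.

```python
import numpy as np

def detect_sync(rx, seq, threshold=0.8):
    """Slide the known sequence over the received samples and report the
    first offset where the normalized correlation magnitude exceeds the
    threshold (an estimate of the frame start); None if never found."""
    n = len(seq)
    e_seq = np.sqrt(np.sum(np.abs(seq) ** 2))
    for k in range(len(rx) - n + 1):
        win = rx[k:k + n]
        e_win = np.sqrt(np.sum(np.abs(win) ** 2))
        if e_win == 0:
            continue
        corr = abs(np.vdot(seq, win)) / (e_seq * e_win)
        if corr >= threshold:
            return k
    return None

rng = np.random.default_rng(0)
seq = rng.choice([-1.0, 1.0], size=32)              # known BPSK sync sequence
rx = np.concatenate([rng.normal(0, 1.0, 100),        # noise before the frame
                     seq + rng.normal(0, 0.1, 32),   # sequence plus mild noise
                     rng.normal(0, 1.0, 100)])       # noise after the frame
print(detect_sync(rx, seq))
```

With the seeded noise above, the detector reports the offset where the embedded sequence begins (sample 100); pure-noise windows stay far below the normalized threshold.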


For point-to-point communications, the roles of the two entities on the two ends of
the channel are symmetric in most systems. However, the communication protocols
usually assign one entity as the master and the other as the slave. The master first starts
transmitting synchronization signals and the slave will listen until such signals are
detected. Then, their operations will continue according to the communication protocols.
The analysis, details of the operations, and the implementation of initial acquisition in
various communication systems will be presented in Chapter 3.

1.4.2 Timing Synchronization


Another major receiver synchronization function is timing synchronization, also often
called timing recovery in the literature.
As shown in Section 1.3.1.4, the digital transmission signal, before it is converted to
the analog form, consists of the transmitter channel symbols. These channel symbols
are multiplied by analog pulses spaced at a fixed time interval T. The reciprocal of T, i.e.,
1/T, is called the transmitter channel symbol rate. The analog baseband waveform,
which is the summation of the pulses multiplied by the transmitter channel symbols, is
modulated onto the carrier frequency and sent over the communication channel.
After being down-converted from the carrier frequency to baseband, the received baseband
signal is converted to digital samples at the received signal sampling rate for further
processing. This sampling rate is nominally equal to 1/T or m/T, where m is an integer.
For normal receiver operations, the receiver’s sampling clock should be synchronous to
the transmitter channel symbol rate of 1/T. Thus, the objective of receiver timing
synchronization is to ensure that the received signal is sampled synchronously to 1/T.
In addition, the sampling time should have a fixed delay, called the timing phase, relative
to the time at which the transmitter symbols are transmitted.
As shown by (1.13), if the samples are generated every T, the sampling time should
be at the peak of the channel impulse response to maximize the received signal sample
energy. Hence, the sampling clock must have the proper delay to compensate for the
delay introduced by the overall channel.7 In other words, the sampling clock must be
synchronous to the transmitter symbol clock in timing phase. However, if the samples
are generated at the Nyquist sampling rate of the received signal or higher, e.g., at 2/T,
there will be no information loss regardless of the timing phase. If such samples are
used by the receiver to recover the transmitted data symbols directly, an exact sampling
phase is not critical. However, it is important that the sampling phase is stable relative to
the transmitter symbol clock, so that the channel observed by the receiver after sampling
does not change with time.
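The sensitivity of T-spaced sampling to timing phase can be checked numerically. The sketch below is a minimal pure-Python illustration, assuming a raised-cosine pulse (roll-off 0.25) as a stand-in for the overall channel response; the function name `rc_pulse` and the 0.3T phase offset are illustrative choices, not from the text.

```python
import math

def rc_pulse(t, beta=0.25):
    """Raised-cosine impulse response (T = 1), evaluated at time t."""
    if abs(1 - (2 * beta * t) ** 2) < 1e-12:      # singularity at t = ±1/(2*beta)
        return (math.pi / 4) * math.sin(math.pi / (2 * beta)) / (math.pi / (2 * beta))
    return (math.sin(math.pi * t) / (math.pi * t) if t else 1.0) * \
           math.cos(math.pi * beta * t) / (1 - (2 * beta * t) ** 2)

# T-spaced samples taken exactly at the pulse peak (timing phase = 0):
on_peak = [rc_pulse(k) for k in range(-4, 5)]
# The same T-spaced grid shifted by 0.3 T (a timing-phase error):
off_peak = [rc_pulse(k + 0.3) for k in range(-4, 5)]

print(max(abs(s) for s in on_peak))    # peak sample value: 1.0
print(max(abs(s) for s in off_peak))   # reduced peak, plus ISI in the other samples
```

Sampling on the peak captures the full pulse value in a single sample; the 0.3T phase error both lowers the peak sample and leaks energy into the neighboring samples as ISI, which is why T-spaced receivers must control the timing phase.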
In both cases, it is important to maintain a fixed timing phase relative to the timing
phase of the transmitter channel symbol clock. It is equivalent to maintaining a

7
The delay introduced by the overall channel may be longer than the transmitter symbol interval $T$ and can be
expressed as $M_D T + \tau$. Only the fractional part of the delay, $\tau$, is observable and can be determined from the
received signal at the physical (PHY) layer and is thus of interest to receiver synchronization. The integer
part $M_D T$ can be determined at higher layers during system acquisition if necessary.

Downloaded from https://www.cambridge.org/core. Columbia University Libraries, on 15 Aug 2017 at 13:41:55, subject to the Cambridge Core terms of use,
available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335444.003

sampling frequency synchronous to the remote transmitter channel symbol rate. In other
words, timing synchronization needs to achieve both phase and frequency synchroniza-
tion between the receiver sampling clock and the transmitter channel symbol clock.
During initial acquisition, the signal detector searches for the signal energy peak by
correlating a known transmitted symbol sequence with the received signal samples at
different sampling times. Thus, the signal peak found can serve as the initial timing phase
estimate. Once entering the data mode, the timing phase and frequency information
contained in signal samples for data recovery are extracted and used to maintain and
fine-tune the timing phase and frequency synchronization. Timing synchronization is
usually achieved by forming a feedback loop and thus it is often called timing locked loop
(TLL) or timing control loop (TCL). Once the TLL has converged, its output is synchron-
ous to the remote transmitter timing, i.e., the remote transmitter’s channel symbol clock.
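The correlation search described above can be sketched in a few lines of pure Python. This is a simplified illustration, assuming a BPSK training sequence, a noiseless integer-sample delay, and additive Gaussian noise; the helper name `correlate_at` and all numeric values are hypothetical.

```python
import random

random.seed(1)

# Known transmitted training sequence (BPSK ±1), as used during initial acquisition.
training = [random.choice([-1.0, 1.0]) for _ in range(64)]

# Received samples: the training sequence arrives after an unknown delay, plus noise.
true_delay = 23
rx = [0.0] * true_delay + training + [0.0] * 40
rx = [s + random.gauss(0.0, 0.2) for s in rx]

def correlate_at(lag):
    """Correlation of the known sequence with rx at a candidate lag."""
    return sum(training[k] * rx[lag + k] for k in range(len(training)))

# Search all candidate lags; the correlation peak gives the initial timing estimate.
lags = range(len(rx) - len(training) + 1)
best_lag = max(lags, key=lambda lag: abs(correlate_at(lag)))
print(best_lag)   # → 23
```

The peak at the true delay is roughly the sequence length (64), while off-peak lags produce only small random sums, so the estimate is robust at moderate noise levels.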
The timing synchronization functions in receivers of different communication
systems, such as single-carrier systems with or without spreading and OFDM systems,
have their commonalities and differences. The theory and the details of timing
synchronization operations of various communication systems will be presented in
Chapter 6.

1.4.3 Carrier Synchronization


Carrier synchronization is yet another important component of synchronization in
digital communication systems. It directly affects the overall system performance
including system capacity and the robustness under adverse conditions. Specifically,
carrier synchronization concerns the estimation and compensation of the carrier fre-
quency and phase differences between the transmitted signal and the corresponding
received signal.
As discussed in Section 1.3.3, the signal received from the communication channel at
the carrier frequency is down-converted in frequency to baseband by using a locally
generated reference clock. The baseband signal is filtered by the Rx filter and converted
to digital samples by ADCs. The generated digital samples are processed to produce the
estimates of the transmitted channel symbols.
The receiver channel symbol estimates are used to recover the transmitted data
symbols and to generate the decoding metrics of the original transmitted information
bits. To reliably recover the original transmitted data, the phase of a transmitted data
symbol and the phase of its estimate should be the same. However, practically, this is
usually not the case. As shown by (1.13) and Figure 1.8(a), in addition to being
corrupted by noise and interference, a complex symbol estimate may have a phase
different from that of the transmitted data symbol. Such a phase difference is caused by
the complex channel response and the offset between the transmitter carrier frequency
and receiver down-conversion frequency.
All these impairments degrade the receiver performance. While it is impossible or
difficult to remove the additive random noise and interference, it is possible to accur-
ately estimate and correct the phase errors in the receiver. Such phase error correction is
particularly important to coherent communication systems. In coherent communication


systems, to reliably recover the transmitted data, the phase of the symbol estimate
should align with that of the original transmitter data symbols.
Hence, the receiver carrier synchronization function, also called carrier recovery,
needs to perform both carrier phase and carrier frequency synchronization. Various
schemes for carrier synchronization have been developed and implemented in digital
communication systems. Briefly, the carrier synchronization block obtains the carrier
phase and frequency offset information from the analog or digital form of the received
signal. The information is processed and used to compensate for these offsets in the
corresponding receiver blocks.
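A common way to obtain the carrier offsets from pilot symbols is to read the frequency offset off the phase increment between successive pilots and the phase offset off the derotated average. The noiseless sketch below illustrates this; the symbol interval, offset values, and variable names are illustrative assumptions, not from the text.

```python
import cmath
import math

T = 1e-6                      # symbol interval (s), illustrative value
df = 200.0                    # true carrier frequency offset (Hz)
theta = 0.7                   # true carrier phase offset (rad)

# Received pilot symbols: transmitted value +1, rotated by the frequency/phase offset.
pilots = [cmath.exp(1j * (2 * math.pi * df * n * T + theta)) for n in range(100)]

# Frequency estimate: average phase increment between successive pilot symbols.
incs = [pilots[n + 1] * pilots[n].conjugate() for n in range(len(pilots) - 1)]
df_hat = cmath.phase(sum(incs)) / (2 * math.pi * T)

# Phase estimate after removing the estimated frequency rotation.
derotated = [p * cmath.exp(-1j * 2 * math.pi * df_hat * n * T)
             for n, p in enumerate(pilots)]
theta_hat = cmath.phase(sum(derotated))

print(round(df_hat, 3), round(theta_hat, 3))   # close to (200.0, 0.7)
```

In a real receiver the pilots are noisy, so both estimates are averages over many symbols; the structure of the computation stays the same.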
Detailed descriptions of the various aspects of carrier synchronization and their
realization are given in Chapter 5.
So far, we focused our discussions on single-carrier communications. Due to the
widespread applications of DS-CDMA and OFDM technologies in recent years, a
significant portion of this book is devoted to synchronization in communication systems
that employ these technologies. To facilitate our further discussion, the basics of these
technologies are presented in the next two sections.

1.5 Basics of DSSS/DS-CDMA Communications

For a long time, commercial digital communication systems almost exclusively used
single-carrier technology without spreading to wider band. The first commercial wire-
less communication system using DSSS technology was the IS-95 cellular system
proposed in the late 1980s and standardized in a TIA standard committee in the United
States in the early 1990s. Later, IS-95 evolved into the cdma2000–1x standard, one of
the worldwide 3G wireless communication standards. They are known as DS-CDMA,
or simply CDMA, cellular communication systems [21, 22]. Another member of this
family, cdma2000 Evolution-Data Optimized, or simply EV-DO, was developed in the
mid to late 1990s specifically for wireless data communications. In the late 1990s, the
DS-CDMA technology was adopted by the European Telecommunications Standards
Institute (ETSI) in a new wireless communication standard called wideband CDMA
(WCDMA) [23]. It was later adopted by the 3GPP partnership of standard organizations,
which have standard organizations in many countries including ETSI as a member.
WCDMA is now a member of the Universal Mobile Telecommunications System
(UMTS) family. WCDMA and cdma2000 are radio interfaces of IMT-2000, the 3G
standard adopted by the International Telecommunication Union (ITU).
In a DS-CDMA system, the transmitter first generates a spreading sequence, which is
complex-valued, has a white-noise-like flat spectrum, and runs at a rate of fch, called the chip
rate. Here the term chip refers to a symbol interval of the spreading sequence and chip rate
refers to the rate at which the CDMA channel symbols are transmitted. In that sense, a
chip is equivalent to a channel modulation symbol in the conventional single-carrier
systems discussed above. The spreading sequence is divided into multiple segments,
each of which has N elements. Every data symbol for transmission is multiplied by a
segment of the sequence. The resulting sequence is modulated onto the transmitter carrier


frequency and is then transmitted. Thus, the chip rate is equal to N times the input data
symbol rate. The spectrum of such generated transmission signal is N times wider than
the spectrum of direct transmission of data symbols without spreading. In other words,
the spectrum of the transmitter signal is spread by a factor of N. Hence, N is called the
spreading factor. This is where the term spread spectrum comes from.
Since the DS-CDMA signal occupies a wider spectrum than the original signal, it is
not spectrum efficient. However, the receiver can coherently combine N received signal
samples to generate an estimate of the data symbol. The SNR of the data symbol
estimate will be N times higher than that of a single received signal sample. This process is called
despreading, which yields a processing gain of N. Due to the processing gain, the
system can operate at a much lower SNR to achieve the same performance of a
nonspread spectrum system. Essentially, the DS-CDMA technology trades spectrum
efficiency for improved SNR. DS-CDMA is particularly suitable for voice
communications.
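The spreading/despreading trade described above can be verified numerically. The sketch below is a toy pure-Python illustration, assuming a real ±1 spreading segment and Gaussian chip-level noise; all names and numeric values are illustrative.

```python
import random

random.seed(2)
N = 64                                     # spreading factor

# A ±1 spreading segment (white, noise-like), and one data symbol to transmit.
seg = [random.choice([-1.0, 1.0]) for _ in range(N)]
symbol = 1.0

# Transmitted chips: the data symbol multiplied onto the spreading segment.
chips = [symbol * c for c in seg]

# Channel adds white Gaussian noise at the chip level (low per-chip SNR).
sigma = 2.0
rx = [c + random.gauss(0.0, sigma) for c in chips]

# Despreading: correlate with the known segment and scale by N.
est = sum(r * c for r, c in zip(rx, seg)) / N

# Per-chip SNR is 1/sigma^2; coherent combining of N chips improves it by N.
snr_chip = 1.0 / sigma ** 2
snr_despread = N * snr_chip
print(round(snr_chip, 3), round(snr_despread, 3))   # 0.25 vs 16.0
```

The despread estimate `est` stays close to the transmitted symbol even though each individual chip is buried in noise, which is exactly the processing gain of N described above.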
During a conversation, one person usually speaks for only about 30 percent of the
time. In other words, the speech activity is about 30 percent on average. DS-CDMA can
effectively take advantage of this speech activity factor. Below, we consider the forward
link communication from a base station to mobile devices as an example.
In base stations of a DS-CDMA system, such as IS-95/cdma2000–1x and WCDMA,
the data from multiple users are spread by different sequences that are uncorrelated or
orthogonal to each other and added together before transmission. At the receiver, when
despreading the received signal using one user's spreading sequence for the user's data,
the interference from other users' signals, which are spread with uncorrelated or
orthogonal sequences, is suppressed. Hence, the desired user's signal is enhanced
and recovered. Due to the speech activity factor, on average, the power of the interfer-
ence from other users is not as high as when the users transmit full power all the time.
This is why the DS-CDMA can support more simultaneous users than the early
generations of TDMA and FDMA systems could.
For wireless data communications, the advantage of DS-CDMA disappears relative
to other communication technologies. For data communication it is most efficient to
totally avoid inter-user interference. As a result, most wireless data communication
systems employ transmission technologies that create no or little such interference. For
example, Long Term Evolution (LTE) and recent generations of Wi-Fi systems use the
OFDM technology. EV-DO essentially employs TDMA for handling user data signals,
even though it uses direct sequence spreading for other purposes.
Below, we describe the basic operations of DS-CDMA forward-link transmitters and
receivers to establish the foundation for the device synchronization functions described
in the later chapters.

1.5.1 DS-CDMA Transmitter Operations


DS-CDMA, or simply CDMA, is a single-carrier communication technology. Its
transmitter is essentially the same as the transmitters of generic single-carrier communi-
cation systems described in Section 1.3.1. The main difference between them is that in a


Figure 1.9 An exemplary DS-CDMA transmitter

CDMA system, multiple data and control channels are transmitted simultaneously in the
same frequency band. To distinguish the data from different channels, each data symbol
is multiplied by a channel-specific spreading code. The spread symbols from different
channels are added together before being transmitted. A typical forward-link transmitter
of a CDMA communication system is shown in Figure 1.9. Because many of its
components are the same as those in generic single-carrier transmitters, only the parts
unique to CDMA are shown.
Similar to what is described in Section 1.3.1.2, coded bits of CDMA traffic and
control channels are mapped into data symbols at the symbol rate 1/T. Each data symbol
is multiplied by a channel-specific spreading code. In order for the receiver to distinguish
the data symbols from different channels, the channel spreading codes are
uncorrelated or orthogonal to each other.
The spread spectrum signals are generated in two steps. In the first step, a symbol
from each channel is multiplied by, or "covered" with, a channel-specific orthogonal
code. If the symbols of different channels have different rates, the length of the
orthogonal code is inversely proportional to the data symbol rate of the channel. As a
result, the spreader outputs of all of the channels will have the same chip rate fch. To
simplify the discussion, in this section, we only consider the case that the symbol rates
of all channels are the same. In addition, the transmitter also generates a pilot channel
with unmodulated symbols, typically BPSK with value +1. The pilot symbols are
repeated to be at the same rate as other channels. This step is called time spreading.
In the time-spreading step, the purpose of applying the orthogonal code covering is to
eliminate, or at least to reduce, the interference between different channels. The most
popular orthogonal spreading code is the Walsh code [24]. Hence, we will use it as the
example in the discussion below.
Walsh codes, also called Walsh functions or Hadamard codes, are a family of binary
orthogonal codes with lengths of $N = 2^M$. There are N members in a group of length-N
Walsh functions. Each of the length-N Walsh functions can be expressed as an


N-dimensional vector, denoted by $w_N^{(i)}$, $i = 0, 1, \ldots, N-1$, with N elements,
$w_{N,0}^{(i)}$, $w_{N,1}^{(i)}$, \ldots, $w_{N,N-1}^{(i)}$, each of which is equal to either $-1$ or $+1$, and

$$\left(w_N^{(i)}\right)^t w_N^{(j)} = \sum_{k=0}^{N-1} w_{N,k}^{(i)} w_{N,k}^{(j)} = \begin{cases} N & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases} \tag{1.14}$$

where the superscript t denotes the transpose operation of a vector or a matrix.


The Walsh functions look like variable-width square waves that do not have white
spectrum. Because different Walsh codes are orthogonal to each other, the data symbols
spread by different Walsh codes will not interfere with each other when transmitted and
received over a single-path channel. For multipath channels, there is interference among
the signals passing through the paths with different delays. However, the interference is
reduced nonetheless. Details of Walsh codes and related topics have been well dis-
cussed in the literature, e.g., in [25].
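The length-$2^M$ Walsh codes can be generated by the standard Sylvester construction and checked against the orthogonality property of (1.14); the short sketch below does this for N = 8 (the function name `walsh` is illustrative).

```python
def walsh(M):
    """Length-N Walsh/Hadamard codes, N = 2**M, via the Sylvester construction."""
    H = [[1]]
    for _ in range(M):
        # H_{2N} = [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

N = 8
W = walsh(3)              # eight length-8 codes, elements ±1

# Verify the orthogonality property of (1.14): inner product N if i == j, else 0.
for i in range(N):
    for j in range(N):
        dot = sum(W[i][k] * W[j][k] for k in range(N))
        assert dot == (N if i == j else 0)
print("orthogonal")
```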
Although the chip rate of the time-spread sequence is higher than the symbol rate, the
spectrum of the time-spread symbol sequence is not white as desired for transmission.
To make it white-noise-like, the combiner output symbols are scrambled by
pseudo-noise (PN) sequences, such as m-sequences [26]. This is the second step, which
performs frequency spreading, in generating the spread spectrum signals. In this
way, the data symbol $S_i(n)$ of the ith channel is expanded to a vector $w_N^{(i)} S_i(n)$ with N
elements, and N is called the spreading factor. The chip sequence transmitted at $nNT_c$,
denoted by the vector $c(n)$, can be expressed as

$$c(n) \triangleq \left[c_{nN}\; c_{nN+1}\; \cdots\; c_{(n+1)N-1}\right] = P_{nN} \sum_{i \in \text{active channels}} w_N^{(i)} S_i(n) \tag{1.15}$$

where $P_{nN}$ is a diagonal matrix with elements $sp_{nN}, sp_{nN+1}, \ldots, sp_{(n+1)N-1}$, which
are the elements of the scrambling code. With no loss of generality, we will assume that
these elements have unit magnitude. The pilot sequence after scrambling is the same as
the scrambling sequence.
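The two-step construction of (1.15), Walsh covering followed by PN scrambling, can be sketched with a toy spreading factor. The example below assumes N = 4, three active channels with channel 0 as the pilot, and random unit-magnitude complex chips standing in for a real PN sequence; all names and values are illustrative.

```python
import cmath
import math
import random

random.seed(3)
N = 4                                         # spreading factor (toy value)

# Length-4 Walsh codes (Sylvester construction, written out directly).
W = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]

# One data symbol per active channel; channel 0 is the pilot (+1).
symbols = {0: 1 + 0j, 1: 1 - 1j, 2: -1 + 1j}

# Unit-magnitude complex scrambling chips sp_k (stand-in for a PN sequence).
sp = [cmath.exp(1j * 2 * math.pi * random.random()) for _ in range(N)]

# c(n) per (1.15): scramble the sum of the Walsh-covered channel symbols.
covered = [sum(symbols[i] * W[i][k] for i in symbols) for k in range(N)]
chips = [sp[k] * covered[k] for k in range(N)]

# Receiver-side check: unscramble and correlate with W[1] to recover channel 1.
recovered = sum(sp[k].conjugate() * chips[k] * W[1][k] for k in range(N)) / N
print(recovered)   # ≈ (1-1j), the channel-1 symbol
```

Because the scrambling chips have unit magnitude, unscrambling restores the Walsh-covered sum exactly, and the Walsh correlation then isolates the desired channel.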
After the CDMA chips are up-sampled and low-pass filtered, they are converted to
analog signals by the D-to-A conversion and further filtered by analog filters. Finally,
the baseband analog signal is up-converted in frequency, i.e., modulated, to carrier
frequency fc and is then transmitted. These final steps are the same as those performed
by the single-carrier transmitters discussed in Sections 1.3.1.4 and 1.3.1.5. The nominal
transmission bandwidth is equal to fch. Commonly, the transmitter filter is designed so
that the transmitted signal will have a baseband spectrum close to an SRCOS shape.

1.5.2 DS-CDMA Receiver Operations


A block diagram of a typical DS-CDMA receiver in a mobile device is shown in
Figure 1.10. Similar to the generic single-carrier receiver described in Section 1.3.3,
the DS-CDMA receiver’s radio frequency (RF) front-end down-converts the received
signal from carrier frequency fc to baseband based on a locally generated reference
clock. The generated baseband signal is low-pass filtered and converted to digital


Figure 1.10 An exemplary DS-CDMA receiver in mobile devices

samples by an ADC at a rate of mfch, where m is an integer. The low-pass filter performs
the antialiasing function. It is usually designed to match, at least approximately, the
transmitter filter characteristics. The signal spectrum after receiver filtering is close to
RCOS shape and approximately satisfies the Nyquist criterion for a single-path trans-
mission channel.
For DS-CDMA systems, the communication channel is usually modeled as a multi-
path channel due to radio wave reflections. The impulse response of such a channel is a
combination of multiple impulses with different delays and complex gains. Namely,

$$h_{ch}(t) = \sum_{l=0}^{L-1} g_l \delta(t - \tau_l) \tag{1.16}$$

where $\delta(t)$ is the Dirac delta function, and $g_l$ and $\tau_l$ are the complex gain and delay of the
lth path of the channel, respectively. These paths are assumed resolvable, i.e., the
spacing between two adjacent paths is greater than the chip duration $T_c$.
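The discrete multipath model of (1.16) can be applied to a chip sequence with a few lines of code. The sketch below assumes integer chip-spaced delays and three illustrative complex path gains; the received sequence is the superposition of delayed, scaled copies of the transmitted chips.

```python
import random

random.seed(4)

# Discrete multipath channel per (1.16): complex gains g_l at chip-spaced delays tau_l.
gains = [1.0 + 0.0j, 0.6 - 0.3j, 0.2 + 0.4j]
delays = [0, 3, 7]                  # in chip periods; spacing > Tc, so resolvable

chips = [random.choice([-1.0, 1.0]) for _ in range(20)]

# Received sequence: superposition of delayed, scaled copies of the chip sequence.
rx_len = len(chips) + max(delays)
rx = [0j] * rx_len
for g, d in zip(gains, delays):
    for n, c in enumerate(chips):
        rx[n + d] += g * c

print(rx[0], rx[3])    # first path alone, then first and second paths overlapped
```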
The digital samples generated from the ADCs are processed by a RAKE receiver [27].
The function of the RAKE receiver, which consists of multiple “fingers,” is to generate
the estimates of the transmitted data symbols. The RAKE receiver is a key component
of a DS-CDMA receiver and will be discussed in detail in the next section.
To estimate data symbols effectively, the RAKE receiver must have the precise
knowledge of the complex gain gl and time delay τl of the paths. Thus, determining
the existence of paths with sufficient signal energy and their delays and phases is the
main synchronization task of a DS-CDMA receiver.
The receiver operations in initialization and in data traffic states can be summarized
as follows.
During initial acquisition, a searcher in the receiver looks for the pilot channel signal
or special synchronization signals that are defined by system designs. Once such a
desired signal is found, the receiver also acquires the carrier phase and time delay of at
least one of the paths with sufficient energy. The searcher passes the information to at
least one RAKE finger for it to demodulate the signal. At the same time, the carrier
frequency offset value is estimated. It is then used to initialize the frequency value of the


local clock generator to reduce the offset between the local reference and the remote
transmitter carrier frequencies. The details of the DS-CDMA receiver operations in the
initial acquisition stage are discussed in Section 3.5.
After the system parameters are acquired during initial acquisition, the communi-
cation link between the base station and the device can be established, and then the
receiver enters the data mode.
In the data mode, the receiver continues improving the accuracy of the estimated
carrier and timing parameters. It also searches for new paths with sufficient energy
from the acquired base station and from other base stations. When a new path is
found, the searcher passes its information to an available RAKE finger to demodulate
its signal. If the new path’s signal is stronger than one of the existing ones, the new
path may be assigned to the weaker RAKE finger and the older path may be
deassigned. These operations are called finger management, which is one of the
most important functions of the DS-CDMA timing synchronization. It is probably
also the most difficult and challenging operation of DS-CDMA receivers. The
examples of the timing synchronization operations of DS-CDMA systems are
discussed in Section 6.5.
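The finger-management policy described above, keep the strongest paths on a fixed number of fingers and replace the weakest when a stronger new path appears, can be sketched as follows. This is a hypothetical illustration; the function name, the number of fingers, and the path list are invented for the example and do not come from any standard.

```python
NUM_FINGERS = 3

def manage_fingers(assigned, new_path):
    """assigned: list of (delay, energy) pairs; new_path: a (delay, energy) pair."""
    if len(assigned) < NUM_FINGERS:             # a free finger is available
        return sorted(assigned + [new_path], key=lambda p: -p[1])
    weakest = min(assigned, key=lambda p: p[1])
    if new_path[1] > weakest[1]:                # deassign the weaker path
        assigned = [p for p in assigned if p != weakest] + [new_path]
    return sorted(assigned, key=lambda p: -p[1])

fingers = []
for path in [(5, 1.0), (9, 0.4), (14, 0.7), (21, 0.6), (30, 0.2)]:
    fingers = manage_fingers(fingers, path)
print(fingers)   # the three strongest paths: delays 5, 14, 21
```

Real receivers add hysteresis and energy thresholds so that fingers are not reassigned on every small energy fluctuation, but the core replace-the-weakest logic is as sketched.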
After initial correction of a frequency offset, the RAKE fingers will track the carrier
phase changes of the associated paths. The residual frequency offset can be computed
from consistent phase changes. Carrier synchronization is usually less difficult to achieve
than timing synchronization in DS-CDMA systems, except in some special cases.
In the receiver, the LO inaccuracy will cause a much larger carrier frequency offset
than the timing frequency offset. Therefore, it is easier to correct the LO frequency from
the estimated carrier frequency offset than from the estimated timing frequency offset.
The estimated carrier frequency offset can then be used to correct the timing frequency
error, since the carrier and timing frequencies are most likely derived from the same
reference in the transmitter. However, it is also possible that these two frequencies are
not locked. For example, additional frequency up and down-conversion may be
involved during transmission. In such cases, it may be desirable to implement a
second-order loop to correct the residual timing frequency error.
Once the initial finger timing phases are estimated accurately, the estimation of data
symbols and combining of the outputs of multiple fingers are performed in a RAKE
receiver, which generates the estimates of transmitted data symbols. Decoding metrics
are derived from the symbol estimates and used by the decoder to recover the transmit-
ted information bits. These operations are relatively less dynamic than the timing and
carrier synchronizations. The details of recovering DS-CDMA symbols by RAKE
receivers are described below.

1.5.3 RAKE Receiver: The DS-CDMA Demodulator


Recovering the transmitted data symbols in a DS-CDMA receiver is usually performed
by the RAKE receiver, which was originally proposed in [27] and [28]. A form of the
RAKE receiver appropriate for practical DS-CDMA systems was also proposed there.
In this section, we describe the implementation of such a practical RAKE receiver and


show that it is an approximate realization of a matched filter in a discrete multipath
channel often encountered in a wireless communication environment.

Channel Model and RAKE Receiver Structure


Figure 1.11(a) shows an example of a multipath channel with four paths. The receiver
filter output generated by a transmitted chip is shown in Figure 1.11(b). If the combined
frequency response of the transmitter and receiver filters has an RCOS shape as desired,
the impulse response of each path is close to a sinc function. When the paths are
resolvable as defined above, each of the peaks can be determined at the receiver. The
signal sampled at a peak generates the highest energy received from the associated path.
Figure 1.12 shows a high-level RAKE receiver and an example of a RAKE finger.
The operations of the RAKE receiver and how its fingers operate are described below.

Overview of RAKE Receiver Operations


As shown in Figure 1.12(a), the receiver front-end processes the received signal by
converting it to baseband and generates samples at an integer multiple of the chip rate,
i.e., mfch. Depending on the implementation, the value of integer m is commonly selected
to be 2 or 8, as will be discussed in Chapters 6 and 7. The samples are sent to a number of
RAKE fingers of the receiver for recovering the data symbols sent to this device. In the
multipath channel shown in Figure 1.11, each RAKE finger demodulates the signal from
a particular path. The samples sent to the finger are delayed according to the path delay.
The output of each of the RAKE fingers is an estimate of the transmitted symbol. The
estimates of the same transmitted symbol from multiple RAKE fingers are added
together to generate a combined symbol estimate. Bit LLRs are derived from the symbol
estimates as shown in Section 1.3.3.4 to be used by a decoder for recovering the
transmitted information bits.

Figure 1.11 An example of a multipath channel at the receiver filter output


Figure 1.12 RAKE receiver block diagram

In addition to recovering the transmitted data symbols, the RAKE fingers also
generate information for adjusting the delay of the samples as needed to achieve the
best receiver performance. If the sampling rate is equal to 8fch or higher, the delay
adjustment is simply to select the sample at the path delay. If the sampling rate is equal
to 2fch, the digital samples need to be resampled to obtain a higher timing resolution by
digital interpolation, a process that will be described in Chapter 7. Moreover, the fingers
also detect the existing carrier frequency offset, which is sent to the carrier-frequency
synchronization block to reduce the offset. These operations will be described in detail
in Chapters 5 and 6.

RAKE Finger Operations


A RAKE finger has two major blocks: a synchronization block and a demodulation
block. The synchronization block performs two functions: to estimate the complex path
gain and to optimize the sample timing relative to the Tc/2-spaced input samples. These
samples are partitioned into two Tc-spaced sample sequences. Each sample of the first
sequence has the delay aligned with the peak of the combined transmitter and receiver
filters shown in Figure 1.11(b) for the path to which the finger is assigned. Thus, each


sample of the sequence is an estimate of a transmitted chip. The second sample
sequence is used for the detection and correction of timing phase offset, as will be
discussed below.
Let us denote the sample received by the lth finger, which is a scaled estimate of $c_{nN+k}$
in (1.15), by $y_{nN+k}^{(l)}$. It can be expressed as

$$y_{nN+k}^{(l)} = g_l c_{nN+k} + z_{nN+k}^{(l)} = g_l\, sp_{nN+k} \sum_{i \in \text{active channels}} w_{N,k}^{(i)} S_i(n) + z_{nN+k}^{(l)} \tag{1.17}$$

where $g_l$ is the complex gain of the lth path in the sample and $z_{nN+k}^{(l)}$ is the interference
term including both noise and ISI.
To estimate the complex channel gain $g_l$, the synchronization block collects N
samples, $y_{nN}^{(l)}$, $y_{nN+1}^{(l)}$, \ldots, $y_{(n+1)N-1}^{(l)}$, which are generated from the same data symbol
$S_i(n)$. The channel gain estimate is obtained by summing the N unscrambled samples,
i.e.,

$$\sum_{k=0}^{N-1} sp_{nN+k}^{*}\, y_{nN+k}^{(l)} = g_l \sum_{i \in \text{active channels}} \left[\sum_{k=0}^{N-1} \left|sp_{nN+k}\right|^2 w_{N,k}^{(i)}\right] S_i(n) + \sum_{k=0}^{N-1} sp_{nN+k}^{*}\, z_{nN+k}^{(l)} \tag{1.18}$$

Because the elements of the scrambling sequence have unit magnitude and
$\sum_{k=0}^{N-1} w_{N,k}^{(i)} = \sum_{k=0}^{N-1} w_{N,k}^{(0)} w_{N,k}^{(i)} = 0$ if $i \neq 0$, only the contributions of the pilot symbols
remain and we have

$$\sum_{k=0}^{N-1} sp_{nN+k}^{*}\, y_{nN+k}^{(l)} = g_l N + z_{nN}^{\prime} = N\hat{g}_l(nNT_c) \triangleq N\hat{g}_l(n) \tag{1.19}$$

which is a scaled estimate of the complex channel gain at time $nNT_c$.


The estimate $\hat{g}_l(n)$ is used to correct the carrier phase of the estimated data symbols
as described below. It can also be used to detect and correct the carrier frequency offset
as will be discussed in Chapter 5.
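The pilot-based gain estimate of (1.18)–(1.19) can be sketched numerically. For clarity the example below keeps only the pilot channel (the orthogonal data channels would drop out of the sum anyway) and uses illustrative values for the spreading factor, path gain, and noise level.

```python
import cmath
import math
import random

random.seed(5)
N = 16                                        # spreading factor

# Unit-magnitude scrambling chips; the pilot after scrambling equals this sequence.
sp = [cmath.exp(1j * 2 * math.pi * random.random()) for _ in range(N)]

g_l = 0.8 * cmath.exp(1j * 0.5)               # true complex gain of the l-th path

# Finger input samples per (1.17), pilot contribution only, plus complex noise.
y = [g_l * sp[k] + complex(random.gauss(0, 0.3), random.gauss(0, 0.3))
     for k in range(N)]

# Channel gain estimate per (1.19): unscramble, sum over N chips, divide by N.
g_hat = sum(sp[k].conjugate() * y[k] for k in range(N)) / N
print(abs(g_hat - g_l))     # small residual due to noise, reduced by the factor N
```

Averaging over the N chips suppresses the noise by the spreading factor, so the estimate tracks both the magnitude and the carrier phase of the path gain.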
The second function of the synchronization block is to adjust the sampling delay, i.e.,
the timing phase, for achieving the best possible receiver performance. This adjustment
is necessary because the estimate of the sampling time from initial acquisition is usually
coarse with an accuracy of only about Tc/4. Moreover, the sample timing may drift
during the data mode and needs to be corrected. The second Tc-spaced sample sequence
with an offset of Tc/2 from the first sequence is used for this purpose. This function will
be described in Chapter 6 when we discuss the timing synchronization of CDMA
receivers.
Based on the information provided by the synchronization block, the demodulation
block of the RAKE finger generates estimates of the transmitted symbols for the
device. The demodulation block of the lth finger receives the same Tc-spaced sample
sequences that are used for generating the path gain estimates in the synchronization
block. These samples are unscrambled and correlated with the proper Walsh function
for generating the estimates of the transmitted data symbols. Let us assume that the
device needs to recover the transmitted symbol $S_1(n)$ from the samples $y_{nN+k}^{(l)}$,


$l = 0, \ldots, L-1$ and $k = 0, \ldots, N-1$, given by (1.17). Similar to (1.18), the correlation is
performed by

$$\sum_{k=0}^{N-1} w_{N,k}^{(1)}\, sp_{nN+k}^{*}\, y_{nN+k}^{(l)} = g_l \sum_{i \in \text{active channels}} \left[\sum_{k=0}^{N-1} \left|sp_{nN+k}\right|^2 w_{N,k}^{(1)} w_{N,k}^{(i)}\right] S_i(n) + z_{nN,1}^{\prime} \tag{1.20}$$

Note that $\sum_{k=0}^{N-1} w_{N,k}^{(1)} w_{N,k}^{(i)} = 0$ if $i \neq 1$ and $\left|sp_{nN+k}\right|^2 = 1$. Only the components in the
sample due to $S_1(n)$ remain, i.e.,

$$\sum_{k=0}^{N-1} w_{N,k}^{(1)}\, sp_{nN+k}^{*}\, y_{nN+k}^{(l)} = g_l N S_1(n) + z_{nN,1}^{\prime} \tag{1.21}$$

The carrier phase in $g_l$ is removed by multiplying by the conjugate of the estimated path
gain $\hat{g}_l(n)$ obtained from the finger's synchronization block. The output is the estimate
of the transmitted symbol weighted by the magnitude-squared gain of the lth channel
path:

$$\frac{1}{N}\,\hat{g}_l^{*}(n) \sum_{k=0}^{N-1} w_{N,k}^{(1)}\, sp_{nN+k}^{*}\, y_{nN+k}^{(l)} = \left|\hat{g}_l(n)\right|^2 S_1(n) \tag{1.22}$$

The weighted symbol estimates from the active fingers are added together to generate a
combined estimate of the transmitted symbols. In CDMA systems under low SNR
conditions, the noise variances in these estimates are approximately equal to each other.
The squared path gain is proportional to the SNR of the path. Thus, the summation
approximately implements maximum ratio combining and is optimal if these noises and
interferences are spatially white.
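The finger chain of (1.19)–(1.22) and the final combining can be verified end to end in a small noiseless sketch. The example below assumes a spreading factor of 8, a pilot (all-ones Walsh code) plus one data channel, and two illustrative path gains, with one "finger" per path; all names and values are invented for the illustration.

```python
import cmath
import math
import random

random.seed(6)
N = 8
W1 = [1, -1, 1, -1, 1, -1, 1, -1]            # Walsh code of the desired channel
W0 = [1] * N                                 # pilot code (all ones)
sp = [cmath.exp(1j * 2 * math.pi * random.random()) for _ in range(N)]

S1 = 1 - 1j                                  # transmitted data symbol
gains = [0.9 * cmath.exp(1j * 0.4), 0.5 * cmath.exp(-1j * 1.1)]   # two paths

combined = 0j
for g in gains:                              # one RAKE finger per path
    # Finger input: scrambled sum of pilot (+1) and data channel, scaled by g.
    y = [g * sp[k] * (W0[k] * 1 + W1[k] * S1) for k in range(N)]
    # Pilot correlation as in (1.19) gives the path-gain estimate.
    g_hat = sum(sp[k].conjugate() * y[k] * W0[k] for k in range(N)) / N
    # Walsh correlation as in (1.21) isolates S1; weight by conj(g_hat) per (1.22).
    despread = sum(sp[k].conjugate() * y[k] * W1[k] for k in range(N)) / N
    combined += g_hat.conjugate() * despread  # maximum ratio combining

# Noiseless check: the combined output is S1 scaled by sum of |g_l|^2.
scale = sum(abs(g) ** 2 for g in gains)
print(abs(combined / scale - S1))   # ≈ 0
```

Each finger's weight is proportional to its path gain, so stronger paths contribute more to the combined estimate, which is the maximum-ratio-combining behavior noted above.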

Optimality of RAKE Receiver


Let us assume that the overall transmitter impulse response, $h_T(t)$, has a finite time
duration of $\left[-T_p, T_p\right]$.8 The composite time response of the transmitter and the
multipath channel with CIR given by (1.16) can be shown to be

$$g(t) = \sum_{l=0}^{L-1} g_l h_T(t - \tau_l) \tag{1.23}$$

In the analysis below, we consider the chip $c_{nN}$, sent at $t = nNT_c$. After passing through the channel, the signal containing $c_{nN}$ that arrives at the receiver is

$$y^{c}_{nN}(t) = c_{nN}\, g(t - nNT_c) = c_{nN} \sum_{l=0}^{L-1} g_l\, h_T(t - nNT_c - \tau_l) \tag{1.24}$$

⁸ In this book, we adopt the notations $[a,b]$, $[a,b)$, $(a,b]$, and $(a,b)$ for an interval (region) of the real numbers between $a$ and $b$, as commonly used in mathematics. A square bracket, [ or ], means the associated endpoint is included in the interval. A parenthesis, ( or ), means that the associated endpoint is excluded from the interval.


The matched filter output at the optimal sampling time for the received signal $y^{c}_{nN}(t)$ is equal to

$$\int_{T_0} \left[ \sum_{j=0}^{L-1} c_{nN}\, g_j\, h_T\big(u - nNT_c - \tau_j\big) \right] \left[ \sum_{l=0}^{L-1} g_l\, h_T\big(u - nNT_c - \tau_l\big) \right]^{*} du \tag{1.25}$$

where $T_0$ is the support region of $g(t)$, i.e., $g(t)$ is equal to zero outside this region. If the overlap between $h_T(t - \tau_i)$ and $h_T\big(t - \tau_j\big)$, for $i \neq j$, can be ignored, the matched filter output of $c_{nN}$ can be approximately expressed as

$$c_{nN} \sum_{l=0}^{L-1} |g_l|^2 \int_{nNT_c - T_p + \tau_l}^{nNT_c + T_p + \tau_l} |h_T(u - nNT_c - \tau_l)|^2\, du \tag{1.26}$$

Now let us consider the RAKE receiver. If the receiver filter matches the transmitter filter, the output of the $l$th RAKE finger sampled at $nNT_c + T_p + \tau_l$, i.e., $y_{nN,l}$, can be expressed as

$$y_{nN,l} = c_{nN}\, g_l \int_{nNT_c - T_p + \tau_l}^{nNT_c + T_p + \tau_l} |h_T(u - nNT_c - \tau_l)|^2\, du \tag{1.27}$$

It is the peak value at the receiver filter output due to the signal from the $l$th path. Weighting $y_{nN,l}$ by the conjugate of the estimated $l$th path gain and adding the weighted RAKE finger outputs together, we obtain the RAKE receiver output

$$\sum_{l=0}^{L-1} g^{*}_{l}\, y_{nN,l} = c_{nN} \sum_{l=0}^{L-1} |g_l|^2 \int_{nNT_c - T_p + \tau_l}^{nNT_c + T_p + \tau_l} |h_T(u - nNT_c - \tau_l)|^2\, du \tag{1.28}$$

This is the same as that given by (1.26).


Thus, we have shown that the RAKE receiver output generated from the peaks of the
channel paths is an approximation of the output of the MF. The main difference between
the RAKE receiver and the matched filter is due to the interference between the signals
received over different paths. When DS-CDMA systems operate under low SNR
conditions, such interpath interferences are much weaker than the additive noise and
other interference. As a result, the difference between the RAKE and the optimal MF
receiver front end will be insignificant, and the RAKE receiver can be viewed as nearly
optimal under the ML criterion.
In general, the performance of the MF is not optimal according to the minimum-
mean-square-error (MMSE) criterion when ISI exists. However, in the case that the
other noise and interference are significantly higher than the ISI, the effect of the ISI
may be ignored. Thus, the RAKE receiver can also be viewed as nearly optimal under
the MMSE criterion for DS-CDMA receivers. Similar arguments were originally
introduced in [27] to show the optimality of the RAKE receiver.
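The equality between the full matched filter output (1.26) and the RAKE output (1.28) can be checked numerically when the paths do not overlap. The short pulse, path delays, and gains below are arbitrary illustrative values chosen so that the no-overlap assumption holds exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
hT = np.array([0.5, 1.0, 0.5])          # short transmitter pulse (illustrative)
E = np.sum(np.abs(hT) ** 2)             # pulse energy
delays = [0, 10, 23]                    # well-separated path delays (no overlap)
g = rng.normal(size=3) + 1j * rng.normal(size=3)   # path gains g_l
c = -1 + 0j                             # transmitted chip value

# Received composite waveform: c * sum_l g_l * hT(t - tau_l), as in (1.24)
y = np.zeros(40, dtype=complex)
for gl, d in zip(g, delays):
    y[d:d + len(hT)] += c * gl * hT

# Full matched filter: correlate against the composite response g(t), as in (1.25)
tmpl = np.zeros(40, dtype=complex)
for gl, d in zip(g, delays):
    tmpl[d:d + len(hT)] += gl * hT
mf_out = np.sum(y * np.conj(tmpl))

# RAKE: per-finger matched-filter peak (1.27), weighted and summed as in (1.28)
rake_out = 0j
for gl, d in zip(g, delays):
    finger = np.sum(y[d:d + len(hT)] * np.conj(hT))   # = c * g_l * E here
    rake_out += np.conj(gl) * finger

print(mf_out, rake_out)                 # identical when the paths do not overlap
```

With overlapping paths the two outputs would differ by the interpath interference terms, which is exactly the approximation discussed above.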


1.6 Introduction to OFDM Communication

Earlier we described various aspects of the communication systems that are based on
single-carrier communication technologies with and without spread spectrum. In recent
years, OFDM has become the technology of choice in the latest generation of digital,
especially wireless, communications.
The first major digital communication standard employing OFDM was DVB-T, the
European digital terrestrial television standard, which was developed around the late
1990s [29]. In the United States, a variation of OFDM, called discrete multitone (DMT)
modulation, was adopted for the digital subscriber line (DSL) standards [30]. For
wireless data communications, OFDM was first standardized in HiperLAN, ETSI's wireless local area network (LAN) standard. In 1999, OFDM technology was adopted by the IEEE 802.11 standard committee as the 802.11a Wi-Fi standard for the 5 GHz band. OFDM technology was extended to the 2.4 GHz band in
the 802.11g standard ratified in 2003. Later on, new 802.11 standards have emerged that
further improved wireless LAN’s performance [31].
Another important OFDM wireless communication standard is the Long-Term Evo-
lution (LTE) [32], also commonly known as 4G LTE. It is for wireless high-speed data
communications targeted to mobile phones and other devices in wide area network
(WAN) environments. LTE is the evolution of 3G wireless standards including
cdma2000 and WCDMA. It was first proposed by NTT DoCoMo in Japan, then
developed and finalized at 3GPP. The main advantage of LTE is that it can effectively
use a wideband wireless spectrum, e.g., up to 20 MHz or even wider, in each direction
to achieve very high data throughput, e.g., 300 Mbps and higher.
The popularity of OFDM is the result of a number of its salient features as
summarized below.
First, OFDM is a type of multicarrier communication technology. The OFDM signal
transmitted from a transmitter consists of a plurality of carriers, called subcarriers.
Multiple data symbols, each of which is modulated onto a separate subcarrier, are
grouped in an OFDM symbol and transmitted. The subcarriers are orthogonal to each
other. Hence, there will be no intersubcarrier interference (ICI) and no loss of efficiency
even with overlaps in spectra between them. Its theoretical channel capacity is the same
as that of a single-carrier system.
Second, when communicating over multipath channels, with proper design, OFDM
communication can more effectively and efficiently combat ISI than single-carrier
communication. As ISI is one of the major limiting factors affecting the transmission
efficiency, OFDM has the potential to provide higher spectrum efficiency than the
single-carrier technology with a reduced implementation complexity at the same time.
Third, an OFDM receiver eliminates ISI without performing channel inversion, as equalizers in single-carrier receivers do. In theory, OFDM communication can achieve the
Shannon channel capacity more efficiently than single-carrier communications.
Finally, OFDM signaling is a natural fit for multiple input, multiple output (MIMO)
technology, which has been used to further improve the spectrum efficiency of digital
communication systems.


However, OFDM also has its shortcomings. The main deficiency of a communi-
cation system using OFDM signaling is its higher overhead than that of the single-
carrier signaling. To eliminate ISI, the first portion of an OFDM symbol, called a cyclic
prefix (CP), duplicates the last portion of the same symbol. Therefore, the CP does not
carry information that should be transmitted, i.e., it constitutes an overhead. In
addition, it is necessary for OFDM signaling to include reference symbols, or pilots,
to facilitate data symbol recovery and synchronization functions. In contrast, single-
carrier communications do not have anything equivalent to CP and may be able to
dedicate less energy on reference/pilot signals when they are necessary. Nevertheless,
after making trade-offs, the advantages of OFDM outweigh its disadvantages. As a
result, OFDM has become attractive and important in today’s wideband high-speed
data communications systems.
Another potential issue with OFDM is its sensitivity to frequency and timing errors.
To combat ISI, it is desirable to design the subcarriers to have narrow bandwidths.
However, such a design results in long OFDM symbol lengths and OFDM becomes
susceptible to carrier frequency error, Doppler effects, and, albeit to a lesser degree,
timing error. Therefore, proper design and implementation of synchronization functions
in OFDM receivers are crucial.
The theory, characteristics, and implementations of OFDM communication systems
have been thoroughly studied and presented in the literature, including [33, 34, 35]. In
this section, we consider the basics of OFDM system implementation and signaling
design to facilitate the discussion on synchronization in OFDM communication systems
later in this book.
Because of the need to reduce implementation complexity, today's OFDM systems are based, almost without exception, on a structure that was first proposed in
[36]. In such OFDM system implementations, the modulation and demodulation of data
symbols are accomplished by using inverse discrete Fourier transforms (iDFT) and
discrete Fourier transforms (DFT) that are usually implemented by using the computa-
tionally efficient inverse fast Fourier transform (iFFT) and fast Fourier transform (FFT)
algorithms. To begin, we provide an overview of the operations of such OFDM
transmitters and receivers, including their synchronization functions.

1.6.1 OFDM Transmitter Operations


The block diagram of such an OFDM transmitter is shown in Figure 1.13.
In a typical OFDM system, the binary input bits are organized into $N_D$ parallel sequences, where $N_D$ is the number of active OFDM data subcarriers and is smaller than the DFT/iDFT size $N_{DFT}$. These input bits may be from single or multiple sources, and may have been encoded by error correction codes and interleaved. To take advantage of the efficient FFT/iFFT implementation, $N_{DFT}$ is usually chosen to be equal to $2^M$, where $M$ is an integer, although a DFT/iDFT of any size can be used.
The bits in each of the ND sequences are mapped to data modulation symbols, which are
represented by complex numbers. These modulation symbols may be mapped to
different forms of symbol constellations, such as PSK and QAM. In the nth OFDM


Figure 1.13 A typical iDFT/iFFT-based OFDM transmitter

symbol, $N_D$ of such modulation symbols, one per sequence, together with $(N_{DFT} - N_D)$ zeros form an input data vector $\mathbf{X}_{N_{DFT}}(n)$ of the iDFT block.
The iDFT block performs the iDFT of the input vector $\mathbf{X}_{N_{DFT}}(n)$ to generate an output vector $\mathbf{x}(n)$, which consists of $N_{DFT}$ output samples, $x(n,l)$, $l = 0, \dots, N_{DFT} - 1$. The
OFDM transmitter treats the NDFT output samples as a time sample sequence. This step
is usually called parallel-to-serial conversion. These samples are transmitted as OFDM
channel samples at a rate of fs. The sample time interval Ts is equal to 1/fs. As shown in
[36], iDFT effectively modulates the modulation data symbols, i.e., the elements of the
input vector, onto the corresponding subcarriers.
In the frequency domain, the spectrum of such an OFDM signal is periodic with a
period of fs. It has a total of NDFT subcarriers that span the entire bandwidth of fs, which
is called the DFT bandwidth. Thus, the spacing between adjacent subcarriers, which is
called the subcarrier spacing and denoted by $f_{sc}$, is equal to $f_s/N_{DFT}$. The $k$th element of the iDFT input vector is modulated onto the $k$th subcarrier, which is a windowed complex sinusoid at frequency $kf_{sc}$ of the form

$$\nu_k(t) = \begin{cases} e^{j2\pi k f_{sc} t} & 0 \le t < 1/f_{sc} \\ 0 & \text{otherwise} \end{cases} \tag{1.29}$$

where $1/f_{sc} = N_{DFT}/f_s = N_{DFT}T_s$ is equal to the time spanned by the $N_{DFT}$ samples at the iDFT output. It can be shown that $\int_0^{1/f_{sc}} \nu^{*}_{l}(t)\,\nu_k(t)\,dt = 0$ for $l \neq k$, i.e., the subcarriers are orthogonal to each other. Due to the rectangular time windowing, the spectrum of each $\nu_k(t)$ has the shape of the sinc function.
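The orthogonality of the subcarriers is easy to verify numerically in discrete time, where $\nu_k$ becomes $e^{j2\pi kn/N_{DFT}}$, $n = 0, \dots, N_{DFT}-1$. A minimal NumPy check, with an illustrative $N_{DFT} = 64$:

```python
import numpy as np

N = 64                                   # N_DFT (illustrative)
n = np.arange(N)
nu = [np.exp(2j * np.pi * k * n / N) for k in range(N)]   # sampled subcarriers

# Inner products over one symbol duration: N when k == l, zero otherwise
gram = np.array([[np.vdot(nu[l], nu[k]) for k in range(4)] for l in range(4)])
print(np.round(np.abs(gram), 9))         # N on the diagonal, 0 elsewhere
```

The off-diagonal entries vanish because each product is a full period of a complex sinusoid summed over the symbol, which is the discrete counterpart of the integral above.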
An example of an iDFT input data vector with NDFT elements is shown in Figure 1.14.
The data vector has ND nonzero data elements including the frequency-domain multi-
plexed (FDM) pilots to be discussed later. The zero elements in the input data vector
correspond to the so-called guard subcarriers.


Figure 1.14 Structure of the iDFT input vector

Conventionally in digital signal processing, the indices of the iDFT and DFT input
vectors run from 0 through NDFT –1 as shown in Figure 1.13 and Figure 1.14. Following
this convention, the 0th element in the input vector corresponds to the DC subcarrier in
the frequency domain. The data symbols assigned to the 1st through (ND/2)th elements
are modulated onto the positive frequency subcarriers at frequencies from fsc to NDfsc/2.
The data symbols assigned to the $(N_{DFT} - N_D/2)$th through $(N_{DFT} - 1)$th elements are on the subcarriers with negative frequencies from $-N_D f_{sc}/2$ to $-f_{sc}$. The guard subcarriers are located at the most positive and most negative frequencies. It is usually desirable to allocate a null symbol to the position corresponding to the DC subcarrier in order to simplify the receiver implementation. Alternatively, the iDFT can also be performed with indices running from $-N_{DFT}/2$ to $N_{DFT}/2 - 1$.
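The subcarrier mapping just described can be sketched as follows. The sizes ($N_{DFT} = 64$, $N_D = 48$) and the QPSK data are illustrative choices, not taken from any particular standard.

```python
import numpy as np

rng = np.random.default_rng(2)
N_DFT, N_D = 64, 48                       # illustrative sizes, not from a standard
data = np.exp(2j * np.pi * rng.integers(0, 4, N_D) / 4)   # QPSK data symbols

X = np.zeros(N_DFT, dtype=complex)
X[1:N_D // 2 + 1] = data[:N_D // 2]       # elements 1 .. N_D/2: positive frequencies
X[N_DFT - N_D // 2:] = data[N_D // 2:]    # last N_D/2 elements: negative frequencies
# X[0] (the DC subcarrier) and the middle bins (guard subcarriers) remain zero

print(np.count_nonzero(X), abs(X[0]))     # N_D active subcarriers, DC is null
```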
The spectrum of the iDFT output is shown in Figure 1.15. It can be seen that the
subcarriers in frequency domain have the shape of the sinc function. At the peak
position of a nonzero subcarrier, the contributions from all other subcarriers are equal
to zero. In other words, the subcarriers do not interfere with each other. This fact also
demonstrates the orthogonality among them.
In addition to the $N_{DFT}$ samples from the iDFT output, which constitute the main portion of an OFDM symbol in this book, each OFDM symbol contains additional parts that facilitate implementation and improve the performance of OFDM communications.
In order to eliminate or, at least, to reduce the inter-symbol interference (ISI) between
the adjacent OFDM symbols, a guard-interval (GI) is added before the main portion of
the OFDM symbol. Commonly, the GI portion is the same as the last part of the main
portion and can be viewed as its cyclic extension. Therefore, such a GI is also called the
cyclic prefix, i.e., CP, of the OFDM symbol.
The CP is the key element of the OFDM signaling for effectively combating ISI. If
the CP is longer than the span of the multipath channel, the ISI from the multipath can be completely eliminated. In addition, as will be shown later, the CP can
also play a useful role in achieving synchronization in OFDM systems. However, the
CP also constitutes an overhead of transmitted signal energy. Thus, it is important to
choose the CP length carefully in order to achieve the best possible system performance.
Two complete OFDM symbols with their CPs are depicted in Figure 1.16. The
portions with the same shade indicate that they have the same data. TM is the length


Figure 1.15 Spectrum of iDFT output

Figure 1.16 OFDM symbols with CP

of the main portion of the OFDM symbol with NDFT samples, TCP is the length of
CP with NCP samples, and TSym is the duration of the entire OFDM symbol with
(NDFT+NCP) samples.
Since the time-domain OFDM channel samples are generated every $T_s$, the time duration $T_M$ of the main portion of an OFDM symbol is equal to $N_{DFT}T_s$. The length of the entire OFDM symbol is $T_{Sym} = (N_{DFT} + N_{CP})T_s$.
The baseband OFDM sample sequence with the CP extensions is converted to
analog waveforms by D-to-A conversion as shown in Figure 1.13. The generated
analog signals are modulated onto the carrier frequency, amplified to the desired
power, filtered again if necessary, and radiated from the transmitter antenna. These
operations are the same as the corresponding ones in any other form of digital
transmission, e.g., single-carrier or CDMA transmission, described previously. The
total bandwidth span of the NDFT subcarriers at the DFT output is equal to fs = 1/Ts.
Since only ND out of the NDFT subcarriers are modulated by data symbols, the
bandwidth occupied by the OFDM signal, called its data bandwidth, is only slightly
wider than (ND/NDFT)fs.


During the OFDM transmitter operations, OFDM symbols are generated and trans-
mitted consecutively. An OFDM symbol is the unit of transmission in the time domain.
Furthermore, an OFDM symbol consists of ND active subcarriers, each of which is
modulated by a data symbol. Thus, the OFDM signals can be viewed as being
transmitted over a time-frequency two-dimensional space with the time unit of an
OFDM symbol and with the frequency unit of the subcarrier spacing fsc. This concept
will be used when we discuss the properties of OFDM systems.

1.6.2 OFDM Receiver Operations


The OFDM RF signals radiated from the transmitter antenna and passing through the
communication channel are received by a receiver. As is shown in Figure 1.17,
the received OFDM signal is frequency down-converted from carrier to baseband.
The analog baseband signal is filtered by an anti-aliasing filter and then converted to digital form by A-to-D conversion. Due to the existence of guard subcarriers, a
sampling rate of 1/Ts satisfies the Nyquist sampling theorem [15] and no sub-chip
sampling is needed. These operations are also essentially the same as the corresponding
ones in the other types of digital receivers discussed previously. The digital baseband
signal samples are processed by the various blocks in the receiver as shown below.
The first step of the received signal sample processing is to determine the locations of
the OFDM symbols in the sample stream. As will be described in Chapter 3, the initial
acquisition block searches for the rough locations of the OFDM symbols in the sample
stream during the receiver's initial acquisition. Once an OFDM symbol is found, the initial
position of a DFT window, which contains NDFT samples for performing the DFT
operation, is determined. The position is refined during the further receiver operations
on the received signal. Determining the correct position of the DFT window is

Figure 1.17 A typical DFT/FFT-based OFDM receiver


equivalent to achieving timing synchronization in OFDM receivers, as no sub-chip


adjustment is needed. More details of this procedure will be provided in Chapter 6.
The initial acquisition block also estimates the frequency offset generated during
frequency down-conversion. The estimated offset is refined by the frequency synchron-
ization block as will be discussed in Chapter 5.
For each OFDM symbol, the NDFT received signal samples inside the DFT window
are grouped together as an input data vector and sent to the DFT block. This operation is
often called serial-to-parallel conversion. The NDFT elements of the DFT block output
are the estimates of the NDFT modulation symbols at the transmitter’s iDFT input with
possibly rotated phase, scaled magnitude and corrupted by additive noise/interference
due to channel impairments. The DFT block performs demodulation of OFDM symbols
as it converts the data symbols modulated on the subcarriers to baseband. As shown
below, the transmitted modulation symbols are recovered after the phase rotations and
magnitude scaling are corrected. These recovered symbols are corrupted by the additive
noise introduced in the transmission process. A decoder is used to regenerate the
information bits sent with no or few errors.

1.6.3 Demodulation of OFDM Signal Over Single- and Multipath Channels


In Section 1.6.1, the main portion of the OFDM symbol is defined as the output of the iDFT operation with the data vector $\mathbf{X}_{N_{DFT}}(n)$ as its input. To facilitate the analysis, we denote the input vector of the $n$th OFDM symbol to the iDFT block by

$$\mathbf{X}_{N_{DFT}}(n) = [X(n,0), X(n,1), \dots, X(n, N_{DFT}-1)]^{t} \tag{1.30}$$

where $X(n,k)$, $k = 0, \dots, N_{DFT}-1$, are the modulation symbols to be transmitted, with the null guard subcarrier symbols added. Each element $X(n,k)$ is a complex number representing a modulation symbol to be modulated onto the $k$th subcarrier of the $n$th OFDM symbol by the iDFT operation. The iDFT output vector, denoted by $\mathbf{x}(n)$, i.e., the main portion of the OFDM symbol, is

$$\mathbf{x}(n) = [x(n,0), x(n,1), \dots, x(n, N_{DFT}-1)]^{t} \tag{1.31}$$

To simplify the description below, we adopt matrix expressions of the DFT and iDFT operators. The length-$N_{DFT}$ DFT operator can be expressed as

$$\mathbf{W}_{N_{DFT}} = \frac{1}{\sqrt{N_{DFT}}} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & W_{N_{DFT}} & W_{N_{DFT}}^{2} & \cdots & W_{N_{DFT}}^{N_{DFT}-1} \\ 1 & W_{N_{DFT}}^{2} & W_{N_{DFT}}^{4} & \cdots & W_{N_{DFT}}^{2(N_{DFT}-1)} \\ 1 & W_{N_{DFT}}^{3} & W_{N_{DFT}}^{6} & \cdots & W_{N_{DFT}}^{3(N_{DFT}-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & W_{N_{DFT}}^{N_{DFT}-1} & W_{N_{DFT}}^{2(N_{DFT}-1)} & \cdots & W_{N_{DFT}}^{(N_{DFT}-1)^{2}} \end{pmatrix} \tag{1.32}$$

where $W_{N_{DFT}} = e^{-j2\pi/N_{DFT}}$ [8].


The iDFT operator is $\mathbf{W}^{H}_{N_{DFT}}$, where the superscript $H$ represents the complex conjugate and transpose, or Hermitian, operation. It is easy to verify that $\mathbf{W}_{N_{DFT}}$ satisfies

$$\mathbf{W}^{H}_{N_{DFT}}\mathbf{W}_{N_{DFT}} = \mathbf{W}_{N_{DFT}}\mathbf{W}^{H}_{N_{DFT}} = \mathbf{I}_{N_{DFT} \times N_{DFT}} \tag{1.33}$$

which is an identity matrix. Therefore, $\mathbf{W}_{N_{DFT}}$ is an orthonormal matrix.

Using the matrix notations, the time samples generated by the iDFT operation in the transmitter can be expressed as

$$\mathbf{x}(n) = \mathbf{W}^{H}_{N_{DFT}}\mathbf{X}(n) \tag{1.34}$$

The demodulation of OFDM symbols, i.e., recovering the elements of the data vector $\mathbf{X}(n)$, is performed by the DFT operation.
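The unitary property (1.33) is easy to verify numerically. The sketch below builds the operator of (1.32) directly for a small, illustrative $N$ and checks it against NumPy's FFT, which omits the $1/\sqrt{N}$ normalization used here.

```python
import numpy as np

N = 8                                   # small illustrative size
W = np.exp(-2j * np.pi / N)             # W_N = e^{-j 2 pi / N}
l, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W_mat = W ** (l * k) / np.sqrt(N)       # orthonormal DFT operator, as in (1.32)

# (1.33): the operator is unitary, so the iDFT is just the Hermitian transpose
print(np.allclose(W_mat.conj().T @ W_mat, np.eye(N)))       # True

# Consistency with NumPy's FFT, which uses no 1/sqrt(N) factor
x = np.arange(N, dtype=complex)
print(np.allclose(W_mat @ x, np.fft.fft(x) / np.sqrt(N)))   # True
```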

Demodulation of Signals Transmitted Over a Single-Path Channel


Let us consider that the OFDM symbols are transmitted over a single-path channel with channel coefficient $h = e^{j\theta_h}|h|$. Assume that, with proper DFT window placement, each DFT input vector consists of the received signal samples corresponding to the main portion of an OFDM symbol to be demodulated. It can be expressed as

$$\mathbf{y}(n) = [y(n,0), y(n,1), \dots, y(n, N_{DFT}-1)]^{t} = h\mathbf{x}(n) + \mathbf{z}(n) = h\mathbf{W}^{H}_{N_{DFT}}\mathbf{X}(n) + \mathbf{z}(n) \tag{1.35}$$

where $\mathbf{z}(n)$ is an $N_{DFT}$-dimensional noise vector, whose elements are AWGN with variance $\sigma^2_z$. The vector $\mathbf{y}(n)$ is processed by the receiver DFT block. The output vector resulting from the DFT operation, $\mathbf{Y}(n) = [Y(n,0), Y(n,1), \dots, Y(n, N_{DFT}-1)]^{t}$, can be shown to be

$$\mathbf{Y}(n) = \mathbf{W}_{N_{DFT}}[h\mathbf{x}(n) + \mathbf{z}(n)] = h\mathbf{W}_{N_{DFT}}\mathbf{W}^{H}_{N_{DFT}}\mathbf{X}(n) + \mathbf{W}_{N_{DFT}}\mathbf{z}(n) = h\mathbf{X}(n) + \mathbf{z}'(n) \tag{1.36}$$

where $\mathbf{z}'(n)$ is a transformed noise vector, whose elements are also AWGN with variance $\sigma^2_z$ because the operator $\mathbf{W}_{N_{DFT}}$ is orthonormal.
From (1.36), we observe that the elements $Y(n,k)$ of $\mathbf{Y}(n)$ are the estimates of the corresponding elements $X(n,k)$ of $\mathbf{X}(n)$, scaled by the complex channel gain $h$. If $h$ is known to the receiver, $Y(n,k)/h$ are
unbiased estimates of X(n,k). Moreover, it can be shown that the elements h*Y(n,k) are
the log-likelihood estimates of X(n,k). For communication over single-path channels,
only the main portions of the OFDM symbols are used and the CPs are not needed. The
CPs play an essential role in combating the ISI generated from multipath channels as
shown below.
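The single-path relation (1.36) can be demonstrated with a short NumPy loopback. The channel coefficient and BPSK data are illustrative, noise is omitted, and the `np.fft` normalization differs from the orthonormal convention only by a scale factor that cancels in the round trip.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64
h = 0.8 * np.exp(1j * 0.3)               # single-path channel coefficient (assumed)
X = np.sign(rng.standard_normal(N)) + 0j # BPSK modulation symbols

x = np.fft.ifft(X)                       # transmitter iDFT (main portion only)
y = h * x                                # single-path channel, noise omitted
Y = np.fft.fft(y)                        # receiver DFT

print(np.allclose(Y, h * X))             # True: Y(n,k) = h X(n,k), as in (1.36)
print(np.allclose(Y / h, X))             # True: unbiased estimates when h is known
```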

Demodulation of Signals Transmitted Over a Multipath Channel


When passing through a multipath, i.e., frequency selective, channel, the received signal
sample vector y(n) contains multiple copies of the transmitted signal vector x(n), each of
which is associated with a delayed path as shown in Figure 1.18.


Figure 1.18 Received samples and DFT window placement

From the figure, we observe that the (n – 1)th OFDM symbol generates ISI in the nth
symbol and the nth symbol generates ISI in the (n +1)th symbol, as they overlap with
each other. However, if the CP length is longer than the multipath span, it is possible to
place the DFT window in such a way that there is no ISI from the OFDM symbol prior
to the one to be demodulated. Moreover, the samples inside the DFT window contain
one complete circularly shifted x(n) from each path. Thus, no information in the transmitted data is lost. Note that the optimal DFT window location is not unique if
the span of the multipath channel is shorter than the CP. The receiver performance for
an OFDM signal transmitted over multipath channel is analyzed below.
The discrete-time impulse response of a multipath channel, excluding the transmitter and receiver filters, can be expressed as

$$h(t) = \sum_{m=0}^{M-1} h_m\, \delta(t - d_m T_s) \tag{1.37}$$

where $h_m$ is the complex value of the $m$th channel path coefficient, $d_m$ is its delay in units of the channel sample interval $T_s$, and $\delta(t)$ is the Dirac delta function.

With a sampling frequency equal to $1/T_s$, the continuous-time frequency response of this channel is the Fourier transform of $h(t)$, which can be expressed as

$$H(f) = \sum_{m=0}^{M-1} h_m\, e^{-j2\pi d_m (f/f_s)} \tag{1.38}$$

This is a periodic function with a period of $f_s = 1/T_s$.


Now, let us consider the $n$th OFDM symbol, which consists of the transmitter channel samples $\{x(n,k)\}$, $k = 0, 1, \dots, N_{DFT}-1$, in the form given by (1.31) with the CP extension. As can be seen from Figure 1.18, after passing through the discrete multipath channel with frequency response $H(n,f)$ given by (1.38), the received samples contain $M$ segments of the transmitted OFDM symbol with delays $\{d_m T_s\}$ and weighted

by the path coefficients $h_m$. Each such segment can be considered as a segment of samples taken from a periodic sample sequence formed by repeating $\mathbf{x}(n)$. Moreover, the $N_{DFT}$ samples inside the DFT window to be sent to the DFT processing block contain $M$ copies of the circularly shifted $\mathbf{x}(n)$, each of which is associated with one of the paths. We can express the received signal samples in the DFT window, $y(n,k)$, $k = 0, \dots, N_{DFT}-1$, as

$$y(n,k) = \sum_{m=0}^{M-1} x\big(n, (k - d_m)_{\mathrm{mod}\, N_{DFT}}\big)\, h_m + z(n,k) \tag{1.39}$$

where $(\cdot)_{\mathrm{mod}\, N_{DFT}}$ represents the modulo-$N_{DFT}$ operation.


The data vector $\mathbf{y}(n)$, which consists of the samples $y(n,k)$ given by (1.39), is processed by the DFT block to generate the output vector $\mathbf{Y}(n)$. The $k$th element, $Y(n,k)$, of $\mathbf{Y}(n)$ can be expressed as

$$Y(n,k) = \sum_{m=0}^{M-1} h_m \sum_{l=0}^{N_{DFT}-1} x\big(n, (l - d_m)_{\mathrm{mod}\, N_{DFT}}\big)\, W^{lk}_{N_{DFT}} + z'(n,k) \tag{1.40}$$

where $z'(n,k)$ is the noise term in the $k$th element of the DFT output. By using the property of the DFT of circularly shifted input sequences [37], we have

$$Y(n,k) = X(n,k) \sum_{m=0}^{M-1} h_m\, W^{d_m k}_{N_{DFT}} + z'(n,k) \tag{1.41}$$

From (1.38) and because $W^{d_m k}_{N_{DFT}} = e^{-j2\pi d_m k/N_{DFT}}$, we can rewrite (1.41) as

$$Y(n,k) = X(n,k)\, H(n,k) + z'(n,k) \tag{1.42}$$

Represented in continuous time, $H(n,k) \triangleq H(n, kf_{sc})$ is simply the value of $H(n,f)$ sampled at $kf_{sc}$. In other words, the frequency-domain channel coefficient of subcarrier $k$ is equal to the channel frequency response sampled at the frequency of this subcarrier.
Equation (1.42) shows that, when receiving the nth OFDM symbol, the kth element
of the DFT output is an estimate of the kth modulation symbol, X(n,k), times the channel
frequency response at the kth subcarrier. The modulation symbol can be recovered if the
frequency response of the channel is known or can be estimated. Denoting the estimate of $H(n,k)$ by $\hat{H}(n,k)$, the estimate of the transmitted symbol $X(n,k)$ can be computed as

$$\hat{X}(n,k) = \hat{H}^{*}(n,k)\, Y(n,k)\, \big/\, \big|\hat{H}(n,k)\big|^{2} \tag{1.43}$$

Thus, when the cyclic prefix length and the DFT window position are selected properly,
the OFDM transmission channel can be viewed as consisting of ND independent
subchannels, each of which is carried by a subcarrier. If the transmitter carrier frequency
and receiver down-conversion frequency are synchronized, there will be no or little
inter-(sub)carrier interference, or ICI, between them. The impact of the frequency offset
on OFDM receiver performance and how to achieve OFDM carrier synchronization will
be discussed in Chapter 5.
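The end-to-end multipath behavior described by (1.39) through (1.43), i.e., the CP turning linear convolution into circular convolution so that the channel becomes one multiplicative tap per subcarrier, can be demonstrated as follows. The CIR taps and sizes are illustrative, and noise is omitted so that the relations hold exactly.

```python
import numpy as np

rng = np.random.default_rng(6)
N_DFT, N_CP = 64, 16
h = np.array([1.0, 0.5 - 0.3j, 0.2j])     # CIR taps at delays 0, 1, 2 (< N_CP)

X = (np.sign(rng.standard_normal(N_DFT)) +
     1j * np.sign(rng.standard_normal(N_DFT))) / np.sqrt(2)   # QPSK symbols

x = np.fft.ifft(X)
tx = np.concatenate([x[-N_CP:], x])       # add the cyclic prefix

rx = np.convolve(tx, h)                   # linear convolution with the channel
y = rx[N_CP:N_CP + N_DFT]                 # DFT window: drop the CP

Y = np.fft.fft(y)
H = np.fft.fft(h, N_DFT)                  # channel response sampled at subcarriers
X_hat = np.conj(H) * Y / np.abs(H) ** 2   # one-tap equalization, as in (1.43)

print(np.allclose(Y, H * X))              # (1.42): Y(n,k) = X(n,k) H(n,k)
print(np.allclose(X_hat, X))              # symbols recovered exactly without noise
```

Because the channel memory is shorter than the CP, the samples in the DFT window equal the circular convolution of the main portion with the CIR, which is exactly what (1.39) expresses.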


In order to achieve reliable detection of the transmitted symbols, it is necessary to


estimate as accurately as possible the frequency-domain channel coefficient of each
subcarrier. For transmission over fading channels, the phases and amplitudes of the
subcarriers change with time, but the coefficients of adjacent subcarriers are correlated.
The subcarrier phase estimation is done as a part of overall frequency-domain channel
estimation. The channel frequency response can be estimated either directly in the
frequency domain or computed indirectly from the estimated time-domain CIR
according to (1.38). Because the accuracy of this estimate is so important, care must be taken when designing the OFDM system. One approach to facilitating this task is to send known data symbols, called pilot or reference symbols, together with the information-bearing data symbols, as described in the next section.
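One common realization of this idea is least-squares estimation at known pilot subcarriers followed by interpolation across frequency. The sketch below is only illustrative: the pilot spacing, pilot values, and the simple linear interpolation (which leaves residual error between and beyond the outermost pilots) are assumptions for this example, not a design from the text.

```python
import numpy as np

N = 64
pilot_idx = np.arange(0, N, 8)                  # a pilot every 8th subcarrier (assumed)
pilot_sym = np.ones(len(pilot_idx), dtype=complex)   # known pilot symbols (assumed)

# A smooth frequency-selective channel: the response of a short, arbitrary CIR
H = np.fft.fft(np.array([1.0, 0.4 + 0.2j, -0.1j]), N)

Yp = H[pilot_idx] * pilot_sym                   # received pilots, noise omitted
H_ls = Yp / pilot_sym                           # least-squares estimates at pilots

# Interpolate real and imaginary parts across all subcarriers
k = np.arange(N)
H_hat = (np.interp(k, pilot_idx, H_ls.real) +
         1j * np.interp(k, pilot_idx, H_ls.imag))

print(np.max(np.abs(H_hat - H)))                # residual interpolation error
```

In practice the interpolation would exploit the correlation structure of the channel (e.g., Wiener filtering across frequency and time) rather than straight lines.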

1.6.4 TDM and FDM Pilot/Reference Signals in OFDM Communication


Different types of pilot, or reference, signals are usually inserted in the signal streams of
OFDM transmission. Even though the pilot signals are known or partially known to the
receiver and thus carry no or little information for data transmission, they play a crucial
role in achieving reliable synchronization. Below we describe two main types of pilot
signals: the time-division multiplexed (TDM) pilots and the frequency-division multi-
plexed (FDM) pilots.

1.6.4.1 TDM Pilot Signals


TDM pilot signals, or TDM pilots, are transmitted as entirely known OFDM symbols, with all active subcarriers modulated by known symbols. They are mostly used to assist the
receivers during initial acquisition to detect the existence of the desired signals, to
initialize the carrier frequency and/or phase estimates, and to set the initial receiver
sample timing. To accomplish these functions, the TDM pilots are often sent at the
beginning of the transmission for establishing the communication link. They may also
be sent periodically for new users to acquire the link in multiple-access systems. We
will discuss their roles in detail later in this book. Below we provide brief descriptions
of these types of pilots with a few examples.

802.11a/g Wi-Fi Communications


Defined in 802.11a/g Wi-Fi standards, two types of TDM pilot symbols, also called
preambles, are transmitted at the beginning of each data frame before the data
transmission begins. The preamble consists of 10 short preamble symbols followed by
2 long preamble symbols. Each short preamble symbol is 16 OFDM transmitter channel
samples long, and the 10 repetitions together form the short preamble. Following the
short preamble is the long preamble, which contains 2 periods of a long preamble symbol. Each
period is 64 samples long with a CP of 32 samples. The lengths of the preambles are
different from the length of the regular data OFDM symbols, each of which is 80
samples long.
As the first step in establishing the communication link with the access points (APs),
the short preamble is mainly used by the Wi-Fi device to detect the existence of the systems

Downloaded from https://www.cambridge.org/core. Columbia University Libraries, on 15 Aug 2017 at 13:41:55, subject to the Cambridge Core terms of use,
available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335444.003

and their frame boundaries. It is also used to determine and correct the coarse
frequency offset and to establish approximate timing. The long preamble symbols
are then used to perform channel estimation. In addition to being used for demodulation
of the data symbols transmitted in the frame, the resulting channel estimate is used to
determine a more accurate DFT window position, i.e., more precise timing. The long
preamble symbols are also used to further refine the estimate of the frequency offset.
After these steps, the
receiver is ready to communicate with the Wi-Fi network.
More details of the Wi-Fi initial acquisition procedures based on the preamble TDM
pilots will be provided in Section 3.6.2.
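For illustration, the 16-sample periodicity of the short preamble suggests a simple delay-and-correlate detection metric, sketched below in Python. This is a minimal example, not the detector specified by the standard; the 48-sample correlation window and the test signals are illustrative choices.

```python
import numpy as np

def short_preamble_metric(r, period=16, window=48):
    """Delay-and-correlate metric exploiting the 16-sample periodicity of
    the 802.11a/g short preamble. Returns a value near 1 when the input is
    periodic with the given period, and a small value otherwise."""
    c = r[period:period + window] @ np.conj(r[:window])   # lag-16 correlation
    p = np.sum(np.abs(r[period:period + window]) ** 2)    # power normalization
    return float(np.abs(c) / max(p, 1e-12))

rng = np.random.default_rng(0)
one_period = rng.standard_normal(16) + 1j * rng.standard_normal(16)
periodic = np.tile(one_period, 10)        # 10 repetitions, like the short preamble
noise = rng.standard_normal(160) + 1j * rng.standard_normal(160)
print(short_preamble_metric(periodic))    # close to 1.0
print(short_preamble_metric(noise))       # much smaller
```

In a receiver, the phase of the lag-16 correlation also yields a coarse frequency-offset estimate, which connects to the frequency correction role of the short preamble mentioned above.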

LTE
The Primary Synchronization Signal (PSS) and the Secondary Synchronization Signal
(SSS) defined in the LTE standard can also be viewed as TDM pilots. Given that LTE is
a wideband system, the PSS and SSS occupy only part of the total bandwidth, both to
use the available resources efficiently and for system robustness. The PSS and
SSS are transmitted every half-frame (5 ms). They occupy a 1.4 MHz frequency band at
the center of the entire LTE channel bandwidth, which can be as wide as 20 MHz.
The PSS and SSS are used in LTE’s two-step system acquisition procedure. In the
process, the receiver also determines other system parameters, such as the slot and frame
boundaries and the cell's FDM pilot positions, and performs initial frequency offset
correction. The detailed operations of the initial acquisition using the PSS and SSS
are presented in Section 3.6.1.
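For illustration, the PSS is specified as a length-62 Zadoff-Chu sequence whose root index is selected by the physical-layer identity. The Python sketch below follows our reading of the 3GPP TS 36.211 definition; it generates only the sequence itself, omitting the mapping to subcarriers.

```python
import numpy as np

def lte_pss(n_id_2):
    """Length-62 Zadoff-Chu PSS sequence (our reading of 3GPP TS 36.211);
    the root index u is selected by the physical-layer identity N_ID^(2)."""
    u = {0: 25, 1: 29, 2: 34}[n_id_2]
    n = np.arange(62)
    return np.where(
        n < 31,
        np.exp(-1j * np.pi * u * n * (n + 1) / 63),
        np.exp(-1j * np.pi * u * (n + 1) * (n + 2) / 63),
    )

p0, p1 = lte_pss(0), lte_pss(1)
print(np.allclose(np.abs(p0), 1.0))       # True: constant-amplitude sequence
print(abs(np.vdot(p0, p1)) / 62)          # small cross-correlation between roots
```

The constant amplitude and low cross-correlation between the three roots are the properties that make such sequences suitable for timing detection and identity determination during initial acquisition.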

TDM Pilot-Assisted Channel Estimation and Data Symbol Recovery


In some systems, TDM pilots are sent periodically during data transmission after the
link is established. Their function is to assist carrier and time synchronization of the
receivers. For example, in the Chinese digital terrestrial television broadcasting
(DTMB) standard, TDM pilots are sent in the form of PN sequences used as GIs instead of
CPs [38]. These pilots are also used for channel estimation for data symbol recovery,
instead of FDM pilots. Because the TDM pilots serve a dual purpose, the system overhead is
reduced. However, it appears that such a design may cause other system deficiencies. As
a result, the overall system performance may not be better than the conventional OFDM
system designs discussed in this book.

1.6.4.2 FDM Pilot Signals


FDM pilot signals, or FDM pilots, generally refer to the known modulation symbols on
a subset of subcarriers in every one or every few OFDM symbols. Using the time-
frequency grid concept, an example of an OFDM transmission signal with embedded FDM
pilots is shown in Figure 1.19. The OFDM symbol index is shown along the x, or time,
axis and the subcarrier index in an OFDM symbol is shown along the y, or frequency,
axis. The white squares are the data symbol modulated subcarriers, and the gray squares
denote the pilot symbol modulated subcarriers.
Two types of FDM pilots are shown in the figure. The first type of pilots, shown as
the light gray boxes, is in every OFDM symbol on the same subcarrier. This type of


Figure 1.19 FDM pilots in OFDM signal

pilots is called continuous pilots; they are mainly used for carrier frequency estimation.
The second type, shown as dark gray boxes, is evenly distributed over the entire
two-dimensional grid and is called scattered pilots. These are mainly used for
channel estimation in data symbol recovery, as explained below.
The main purpose of sending FDM pilots is to enable channel and carrier phase
estimation for coherent detection of the data symbols. As shown by (1.43), to perform
coherent detection of a data symbol, we must have the estimate of the frequency-domain
channel coefficient of the subcarrier, on which the data symbol is modulated.
From (1.42), we observe that the estimate of the frequency-domain subcarrier
coefficient H(n,k) can be computed if X(n,k) is known to the receiver, i.e., if it is
an FDM pilot. For example, in Figure 1.19 the modulation symbol on subcarrier k of
OFDM symbol n is a pilot, denoted by Xp(n,k). The estimate of H(n,k) can be
computed as
Ĥ(n,k) = Xp*(n,k)Y(n,k)/|Xp(n,k)|²   (1.44)
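As a minimal numerical illustration of (1.44), the Python sketch below computes the least-squares estimate of a subcarrier coefficient from a pilot; the channel coefficient and pilot values are arbitrary assumptions for the example.

```python
import numpy as np

def ls_pilot_estimate(Y, Xp):
    """Least-squares estimate of (1.44): H_hat = Xp* Y / |Xp|^2."""
    return np.conj(Xp) * Y / np.abs(Xp) ** 2

# Toy check with arbitrary assumed values (noiseless channel):
H = 0.8 * np.exp(1j * 0.3)            # assumed channel coefficient H(n, k)
Xp = (1 + 1j) / np.sqrt(2)            # known pilot modulation symbol X_p(n, k)
Y = H * Xp                            # received frequency-domain value Y(n, k)
print(np.allclose(ls_pilot_estimate(Y, Xp), H))   # True: exact without noise
```

With additive noise on Y, the same expression yields a noisy estimate, which is why the interpolation and averaging discussed next are useful.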

If the channel coherence time is longer than the OFDM symbol interval, the
channel frequency response for OFDM symbols that are close to each other in time
is correlated. The estimate Ĥ(n,k) can then be used for recovering the data
symbols on subcarrier k of these OFDM symbols, e.g., those with time indices n−1
and n+1. Similarly, if the coherence bandwidth of the channel is wider than the
subcarrier spacing, the channel estimate Ĥ(n,k) can be used to recover the data
symbols modulated on subcarriers near the kth subcarrier of the same OFDM symbol.
Moreover, because the modulation symbol on subcarrier k+6 is also a pilot, the
channel coefficients of subcarriers k+1 through k+5 can be computed by interpolating
between Ĥ(n,k) and Ĥ(n,k+6) to obtain more accurate estimates. Theoretically, the best


results can be obtained by interpolation performed over the two-dimensional
time-frequency grid. Interpolation is widely used in frequency-domain channel
estimation in OFDM receivers. More details of various interpolation techniques
will be considered in Chapter 5.
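The interpolation between Ĥ(n,k) and Ĥ(n,k+6) described above can be illustrated with a simple linear interpolation over the subcarrier index. The pilot spacing of 6 matches Figure 1.19, while the subcarrier index and channel values are arbitrary assumptions for the sketch.

```python
import numpy as np

# Linear interpolation of channel estimates between two FDM pilots on
# subcarriers k and k+6 (the pilot spacing of Figure 1.19). The channel
# values below are arbitrary assumptions for the example.
k = 12                                    # hypothetical pilot subcarrier index
H_k = 1.0 * np.exp(1j * 0.10)             # estimate from (1.44) at subcarrier k
H_k6 = 0.9 * np.exp(1j * 0.40)            # estimate from (1.44) at subcarrier k+6

subcarriers = np.arange(k, k + 7)         # k, k+1, ..., k+6
xp = np.array([k, k + 6], dtype=float)
fp = np.array([H_k, H_k6])
# Interpolate real and imaginary parts separately.
H_interp = (np.interp(subcarriers, xp, fp.real)
            + 1j * np.interp(subcarriers, xp, fp.imag))
print(len(H_interp))                      # 7 estimates for subcarriers k..k+6
```

More elaborate schemes replace the linear rule with Wiener or spline interpolation, and extend it to the second (time) dimension, as discussed in Chapter 5.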
In this section, for demodulation of the OFDM symbols, we have assumed that the
carrier frequency and timing phase and frequency have been accurately acquired. In
practice, some or all of them may not be known accurately when the receiver operation
starts, and they may change during the data mode. Carrier and timing synchronization
techniques in OFDM systems will be presented in Chapters 5 and 6.

1.7 Summary and Notations of Parameters

In this chapter, we first provided an overview of digital communication systems. The
characteristics, components, and operations of the three main constituent blocks of
digital communication systems, i.e., the transmitter, the channel, and the receiver, were
then described. While the descriptions focused on the data mode operations of single-
carrier systems, many of their components are shared by other types of communication
systems, such as OFDM and CDMA systems. Moreover, they are also pertinent to
performing the synchronization functions in these communication systems. Thus, what
was presented in this first chapter forms the foundation for understanding and discuss-
ing synchronization functions in various communication systems, which are the sub-
jects of this book.
Since the late 1980s, mobile wireless communications have been widely deployed
and have penetrated the everyday life of our society. As a consequence, the technologies
employed in mobile communication systems, in particular CDMA and OFDM, have
attracted a great deal of attention in the digital communication technical community. In
this chapter, two sections were devoted to the overviews of these two technologies to
establish the foundation for the discussion of synchronization techniques in systems that
use these technologies.
It should be noted that what was presented in this chapter is only an overview of
communication systems and technologies. The purpose of this chapter is to facilitate the
understanding and discussion of synchronization functions to be treated later in this
book. The examples given are only for illustrating the related transmitter and receiver
operations. There are many alternative approaches to performing these tasks. In other
words, the examples should not be taken as a complete treatment of these subjects.
Readers interested in gaining a more complete knowledge and understanding of these
topics are referred to the references provided, especially the many excellent textbooks
available, such as [3, 5, 18].
In this book, we will cover the synchronization methods used in a number of different
communication systems. To facilitate the discussion, notations of the parameters
involved are summarized below. They will be used throughout this book as consistently
as possible.


Single-Carrier Communication

T: Data symbol (data modulation symbol) interval/duration


1/T: Data symbol (data modulation symbol) frequency/rate
(Note: For such systems, data symbols are the same as the channel modulation
symbols.)

DS-CDMA Communication

T: Data symbol (data modulation symbol) interval/duration


fSym: Data symbol (data modulation symbol) frequency, fSym = 1/T
Tc: Chip interval/duration, Tc = T/N, where N is the spreading factor
fch: Chip frequency/rate, fch = 1/Tc = NfSym
(Note: A chip corresponds to a channel modulation symbol in single-carrier
communications with no spreading.)

OFDM Communication

Ts: OFDM channel sample duration/interval, called the standard time unit in LTE
fs: OFDM channel sample rate, fs = 1/Ts
WDFT: OFDM DFT bandwidth, WDFT = 1/Ts = fs
fsc: OFDM subcarrier spacing, fsc = WDFT/NDFT = fs/NDFT
TSym: OFDM symbol duration/interval, TSym = (NDFT + NCP)Ts
fSym: OFDM symbol rate, fSym = 1/TSym = fs/(NDFT + NCP)
ND: Number of data (active) subcarriers
WD: OFDM data bandwidth, WD = NDfsc = NDfs/NDFT = ND/(NDFTTs)
(Note: An OFDM channel sample corresponds to a channel modulation symbol in
single-carrier communication.)
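As a numerical illustration of these relations, the Python sketch below plugs in the widely used LTE 20 MHz numerology (fs = 30.72 MHz, NDFT = 2048, NCP = 144 for a normal-CP symbol, ND = 1200); these particular values are an assumed example, not definitions from this section.

```python
# Numerical check of the OFDM parameter relations above, using the common
# LTE 20 MHz numerology as assumed example values (not defined in this section).
f_s = 30.72e6                 # channel sample rate f_s = 1/T_s
N_DFT = 2048                  # DFT size
N_CP = 144                    # CP length of a normal-CP symbol (in samples)
N_D = 1200                    # number of active (data) subcarriers

T_s = 1 / f_s                             # standard time unit
f_sc = f_s / N_DFT                        # subcarrier spacing
T_Sym = (N_DFT + N_CP) * T_s              # OFDM symbol duration
f_Sym = 1 / T_Sym                         # OFDM symbol rate
W_D = N_D * f_sc                          # occupied data bandwidth

print(f_sc)                               # 15000.0 (15 kHz spacing)
print(W_D / 1e6)                          # 18.0 (MHz of data bandwidth)
```

The familiar 15 kHz subcarrier spacing and 18 MHz occupied bandwidth fall out directly from the definitions, which is a useful sanity check on the notation.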

References

[1] C. E. Shannon, “Communication in the Presence of Noise,” Proceedings of the IRE, vol. 37,
no. 1, pp. 10–21, 1949.
[2] W. E. Ryan and S. Lin, Channel Codes – Classical and Modern, Cambridge University Press,
2009.
[3] B. Sklar, Digital Communications – Fundamentals and Applications, 2nd edn, Upper Saddle
River, NJ: Prentice Hall PTR, 2001.
[4] G. C. Clark Jr. and J. B. Cain, Error-Correction Coding for Digital Communications,
New York: Plenum Press, 1981.
[5] J. G. Proakis and M. Salehi, Digital Communications, 5th edn, New York: McGraw-Hill,
2008.
[6] J. L. Ramsey, “Realization of Optimum Interleavers,” IEEE Transactions on Information
Theory, vol. IT-16, no. 3, pp. 338–45, 1970.
[7] G. D. Forney Jr., “Burst Correction Codes for the Classic Bursty Channel,” IEEE Transactions
on Communications, vol. COM-19, no. 10, pp. 772–81, 1971.
[8] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Englewood Cliffs, NJ:
Prentice Hall, 1975.


[9] W. Kester, “Oversampling Interpolating DACs, Analog Devices Tutorial MT-017,” October
2008. [Online]. Available: www.analog.com/media/en/training-seminars/tutorials/MT-017
.pdf. [Accessed October 16, 2015].
[10] National Instruments, “Interpolation and Filtering to Improve Spectral Purity, National
Instruments White Paper 5515,” May 29, 2014. [Online]. Available: www.ni.com/white-
paper/5515/en/pdf. [Accessed October 16, 2015].
[11] W. C. Jakes, Ed., Microwave Mobile Communications, Piscataway, NJ:
IEEE Press, 1974, reissued 1993.
[12] D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University
Press, 2005.
[13] G. L. Stüber, Principles of Mobile Communication, New York: Springer-Verlag, 2012.
[14] G. D. Forney Jr., “Maximum-Likelihood Sequence Estimation of Digital Sequences in the
Presence of Intersymbol Interference,” IEEE Transactions on Information Theory, vol. 18,
no. 3, pp. 363–78, May 1972.
[15] H. Nyquist, “Certain Topics in Telegraph Transmission Theory,” AIEE Transactions, vol.
47, pp. 617–44, 1928.
[16] J. G. Proakis, Digital Communications, 3rd edn, New York: McGraw-Hill, 1995.
[17] R. F. H. Fischer, Precoding and Signal Shaping for Digital Transmission, New York: John
Wiley & Sons, 2002.
[18] R. D. Gitlin, J. F. Hayes, and S. B. Weinstein, Data Communications Principles, New York:
Plenum Press, 1992.
[19] S. Allpress, C. Luschi, and S. Felix, “Exact and Approximated Expressions of the Log-
Likelihood Ratio for 16-QAM Signals,” in Conference Record of the Thirty-Eighth Asilomar
Conference on Signals, Systems and Computers, Pacific Grove, CA, 2004.
[20] J. D. Ellis and M. B. Pursley, “Comparison of Soft-Decision Decoding Metrics in a QAM
System with Phase and Amplitude Errors,” in IEEE Military Communications Conference,
MILCOM 2009, Boston, MA, 2009.
[21] 3rd Generation Partnership Project 2, “Physical Layer Standard for cdma2000 Spread Spectrum Systems,”
2004.
[22] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Reading, MA: Addison-
Wesley, 1995.
[23] A. Toskala, H. Holma, and P. Muszynski, “ETSI WCDMA for UMTS,” in IEEE 5th
International Symposium on Spread Spectrum Techniques and Applications, Sun City,
South Africa, 1998.
[24] J. L. Walsh, “A Closed Set of Normal Orthogonal Functions,” American Journal of
Mathematics, vol. 45, no. 1, pp. 5–24, 1923.
[25] J. S. Lee and L. E. Miller, CDMA System Engineering Handbook, Norwood, MA: Artech
House, 1998.
[26] M. K. Simon, J. K. Omura, R. A. Scholtz, and B. K. Levitt, Spread Spectrum Communi-
cations Handbook, rev. edn, Boston, MA: McGraw-Hill, 1985, 1994.
[27] R. Price, “Optimal Detection of Random Signals in Noise with Application to Scatter-
Multipath Communication, I,” IRE Transactions, vol. 6, no. 4, pp. 125–35, December 1956.
[28] R. Price and P. E. Green Jr., “A Communication Technique for Multipath Channels,”
Proceedings of the IRE, vol. 46, pp. 555–70, March 1958.
[29] U. Ladebusch and C. A. Liss, “Terrestrial DVB (DVB-T): A Broadcast Technology for
Stationary Portable and Mobile Use,” Proceedings of the IEEE, vol. 94, no. 1, pp. 183–93,
January 2006.


[30] T. Starr, M. Sorbara, J. M. Cioffi, and P. J. Silverman, DSL Advances, Upper Saddle River,
NJ: Prentice Hall, 2003.
[31] IEEE LAN/MAN Standards Committee, “Part 11: Wireless LAN Medium Access Control
(MAC) and Physical Layer (PHY) Specifications,” New York, 2012.
[32] 3GPP TS 25.213, “Technical Specification Group Radio Access Network; Spreading and
Modulation (FDD),” Valbonne, France, 2010.
[33] Y. G. Li and G. L. Stüber, Eds., Orthogonal Frequency Division Multiplexing for Wireless
Communications, New York: Springer Science+Business Media, 2006.
[34] R. Prasad, OFDM for Wireless Communications Systems, Norwood, MA: Artech House,
2004.
[35] J. Heiskala and J. Terry, OFDM Wireless LANs: A Theoretical and Practical Guide,
Indianapolis, IN: SAMS, 2002.
[36] S. B. Weinstein and P. M. Ebert, “Data Transmission by Frequency Division Multiplexing
Using Discrete Fourier Transform,” IEEE Transactions on Communications, vol. COM-19,
no. 10, pp. 628–34, Oct. 1971.
[37] J. G. Proakis and D. G. Manolakis, Introduction to Digital Signal Processing, New York:
Macmillan, 1988.
[38] M. Liu, M. Crussiere, J. Helard, and O. P. Pasquero, “Analysis and Performance Compari-
son of DVB-T and DTMB Systems for Terrestrial Digital TV,” in 11th IEEE Singapore
International Conference on Communication Systems, 2008. ICCS 2008, Guangzhou,
China, 2008.

