

Part I

Signal Processing and Detection


Contents

I Signal Processing and Detection
1 Fundamentals of Discrete Data Transmission
1.1 Data Modulation and Demodulation
1.1.1 Waveform Representation by Vectors
1.1.2 Synthesis of the Modulated Waveform
1.1.3 Vector-Space Interpretation of the Modulated Waveforms
1.1.4 Demodulation
1.2 Discrete Data Detection
1.2.1 The Vector Channel Model
1.2.2 Optimum Data Detection
1.2.3 Decision Regions
1.2.4 Irrelevant Components of the Channel Output
1.3 The Additive White Gaussian Noise (AWGN) Channel
1.3.1 Conversion from the Continuous AWGN to a Vector Channel
1.3.2 Optimum Detection with the AWGN Channel
1.3.3 Signal-to-Noise Ratio (SNR) Maximization with a Matched Filter
1.4 Error Probability for the AWGN Channel
1.4.1 Invariance to Rotation and Translation
1.4.2 Union Bounding
1.4.3 The Nearest Neighbor Union Bound
1.4.4 Alternative Performance Measures
1.4.5 Block Error Measures
1.5 General Classes of Constellations and Modulation
1.5.1 Cubic Constellations
1.5.2 Orthogonal Constellations
1.5.3 Circular Constellations - M-ary Phase Shift Keying
1.6 Rectangular (and Hexagonal) Signal Constellations
1.6.1 Pulse Amplitude Modulation (PAM)
1.6.2 Quadrature Amplitude Modulation (QAM)
1.6.3 Constellation Performance Measures
1.6.4 Hexagonal Signal Constellations in 2 Dimensions
1.7 Additive Self-Correlated Noise
1.7.1 The Filtered (One-Shot) AWGN Channel
1.7.2 Optimum Detection in the Presence of Self-Correlated Noise
1.7.3 The Vector Self-Correlated Gaussian Noise Channel
1.7.4 Performance of Suboptimal Detection with Self-Correlated Noise
Chapter 1 Exercises
A Gram-Schmidt Orthonormalization Procedure
B The Q Function
Chapter 1

Fundamentals of Discrete Data Transmission
Figure 1.1 illustrates discrete data transmission, which is the transmission of one message from a finite
set of messages through a communication channel. A message sender at the transmitter communicates
with a message receiver. The sender selects one message from the finite set, and the transmitter sends a
corresponding signal (or waveform) that represents this message through the communication channel.
The receiver decides the message sent by observing the channel output. Successive transmission of
discrete data messages is known as digital communication. Based on the noisy received signal at the
channel output, the receiver uses a procedure known as detection to decide which message, or sequence
of messages, was sent. Optimum detection minimizes the probability of an erroneous receiver decision
on which message was transmitted.
This chapter characterizes and analyzes optimum detection for a single message transmission through
the channel. Dependencies between message transmissions can be important also, but the study of such
inter-message dependency is deferred to later chapters.
The messages are usually digital sequences of bits, which are usually not compatible with transmission
of physical analog signals through a communication channel. Thus the messages are converted into analog
signals that can be sent through the channel. Section 1.1 introduces both encoding and modulation to
characterize such conversion of messages into analog signals by a transmitter. Encoding is the process of
converting the messages from their innate form (typically bits) into vectors of real numbers that represent
the messages. Modulation is a procedure for converting the encoder-output real-number vectors into
analog signals for transmission through a physical channel.
Section 1.2 studies the theory of optimal detection, which depends on a probabilistic model for the
communication channel. The channel distorts the transmitted signals both deterministically and with
Figure 1.1: Discrete data transmission.
3
random noise. The noisy channel output will usually not equal the channel input and will be described
only in terms of conditional probabilities of various channel-output signals. The channel-input signals
have probabilities equal to the probabilities of the messages that they represent. The optimum detector
will depend only on the probabilistic model for the channel and the probability distribution of the
messages at the channel input. The general optimum detector specializes to many important practical
cases of interest.
This chapter develops a theory of modulation and detection that uses a discrete vector representation
for any set of continuous-time signals. This vector-channel approach was pioneered for educational
purposes by Wozencraft and Jacobs in their classic text [1] (Chapter 4). In fact, the first four sections of
this chapter closely parallel their development (with some updating and rearrangement), before diverging
in Sections 1.5–1.7 and in the remainder of this text.
The general model for modulation and demodulation leads to a discussion of the relationship between
continuous signals and their vector-channel representation, essentially allowing easier analysis of vectors
to replace the more difficult analysis of continuous signals. Section 1.2 solves the general detection
problem for the discrete vector channel. Section 1.3 shows that the most common case of a continuous
Gaussian-noise channel maps easily into the discrete vector model without loss of generality. Section
1.3 then finds the corresponding optimum detector with Gaussian noise. Given the optimum detector,
Section 1.4 shows methods to calculate and estimate the average probability of error, P_e, for a vector
channel with Additive White Gaussian Noise (AWGN). Sections 1.5 and 1.6 discuss several popular modulation
schemes and determine bounds for their probability of error with AWGN. Section 1.6 focuses in particular
on signals derived from rectangular lattices, a popular signal transmission format. Section 1.7 then
generalizes results for the case of self-correlated Gaussian noise.
1.1 Data Modulation and Demodulation
Figure 1.2 adds more detail to the basic discrete data transmission system of Figure 1.1. The messages
emanate from a message source. A vector encoder converts each message into a symbol, which is
a real vector x that represents the message. Each possible message corresponds to a distinct value
of the symbol vector x. The words symbol and message are often used interchangeably, with the
tacit understanding that the symbol actually represents the message via the action of the encoder. A
message from the set of M possible messages m
i
i = 0, ..., M 1 is sent every T seconds, where T is
the symbol period for the discrete data transmission system. Thus, messages are sent at the symbol
rate of 1/T messages per second. The number of messages that can be sent is often measured in bits so
that b = log
2
(M) bits are sent every symbol period. Thus, the data rate is R = b/T bits per second.
The message is often considered to be a real integer equal to the index i, in which case the message is
abbreviated m with possible values 0, ...M 1.
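As a quick numeric sketch of these relations (the values of M and T below are illustrative assumptions, not taken from the text):

```python
# Symbol-rate and data-rate arithmetic: b = log2(M) bits per symbol,
# R = b/T bits per second.
import math

M = 4        # number of messages (hypothetical example value)
T = 1e-6     # symbol period in seconds (hypothetical example value)

b = math.log2(M)   # bits carried by each symbol
R = b / T          # data rate in bits per second

print(b)   # 2.0
print(R)   # 2 Mbps for these example values
```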
The modulator converts the symbol vector x that represents the selected message into a continuous-time
(analog) waveform that the transmitter outputs into the channel. There is a set of M possible
signal waveforms x_i(t) that is in direct one-to-one correspondence with the set of M messages. The
demodulator converts continuous-time channel output signals back into a channel output vector y, from
which the detector tries to estimate x and thus also the message sent. The messages are then provided
by the receiver to the message sink.
In any data transmission system, the physically transmitted signals are necessarily analog and con-
tinuous time. In general, the conversion of a discrete data signal into a continuous time analog signal is
called modulation. The inverse process of converting the modulated signal back into its original discrete
form is called demodulation. Thus, the combination of encoding and modulation in the transmitter
leads to the mapping:

   discrete message m_i → x_i(t) continuous waveform .

Conversely, the combination of demodulation and detection in the receiver leads to the mapping:

   continuous waveform y(t) → m̂ discrete message .
Figure 1.2: Discrete data transmission with greater detail.
When the receiver output message is not equal to the transmitter input message, an error occurs. An
optimum receiver minimizes the probability of such errors for a given communications channel and set
of message waveforms.
EXAMPLE 1.1.1 (binary phase-shift keying) Figure 1.3 repeats Figure 1.1 with a
specific linear time-invariant channel that has the Fourier transform indicated. This channel
essentially passes signals between 100 Hz and 200 Hz, with 150 Hz having the largest gain.
Binary logic familiar to most electrical engineers transmits some positive voltage level (say
perhaps 1 volt) for a 1 and another voltage level (say 0 volts) for a 0 inside integrated circuits.
Clearly such 1/0 transmission would not pass through this channel, leaving 0 always at the
output and making receiver detection of the correct message difficult if not impossible.
Instead, the two modulated signals x_0(t) = +cos(2π·150t) and x_1(t) = −cos(2π·150t)
will easily pass through this channel and be readily distinguishable at the channel output.
This latter type of transmission is known as BPSK, for binary phase-shift keying. If the
symbol period is 1 second and successive transmission is used, the data rate would be 1
bit per second (1 bps).^1
In more detail, the engineer could recognize the trivial vector encoder that converts the
message bit of 0 or 1 into the real one-dimensional vectors x_0 = +1 and x_1 = −1. The
modulator simply multiplies this x_i value by the function cos(2π·150t).
A variety of modulation methods are applied in digital communication systems. To develop a separate
analysis for each of these formats would be an enormous task. Instead, this text uses a general vector
representation for modulated signals. This vector representation leads to a single method for the analysis
of the performance of the data transmission (or storage) system. This section describes the discrete
vector representation of any nite or countably innite set of continuous-time signals and the conversion
between the vectors and the signals.
The analysis of the detection process will simplify for an additive white Gaussian noise (AWGN)
channel through the symbol-vector approach, which was pioneered by Wozencraft and Jacobs. This
approach, indicated in Figure 1.2 by the real-valued vector symbols x_i and y, decouples the probability-
of-error analysis from the specific modulation method. Each modulation method uses a set of basis
^1 However, this chapter is mainly concerned with a single transmission. Each such successive transmission can be
treated independently, because transients at the beginning or end of any message transmission would be negligible in
time extent on such a channel and may be ignored.
Figure 1.3: Example of channel for which 1 volt and 0 volt binary transmission is inappropriate.
functions that link the vector x_i with the continuous waveform x_i(t). The choice of modulation basis
functions usually depends upon their spectral properties. This chapter investigates and enumerates a
number of different basis functions in later sections.
1.1.1 Waveform Representation by Vectors
The reader should be familiar with the infinite-series decomposition of continuous-time signals from the
basic electrical-engineering study of Fourier series in signals and systems. For the transmission and
detection of a message during a finite time interval, this text considers the set of real-valued functions
f(t) such that

   ∫_0^T f²(t) dt < ∞

(technically known as the Hilbert space of continuous-time functions and abbreviated L_2[0, T]). This
infinite-dimensional vector space has an inner product, which permits the measure of distances and
angles between two different functions f(t) and g(t),

   ⟨f(t), g(t)⟩ = ∫_0^T f(t) g(t) dt .
Any well-behaved continuous-time function x(t) defined on the interval [0, T] decomposes according
to some set of N orthonormal basis functions φ_n(t) as

   x(t) = Σ_{n=1}^{N} x_n φ_n(t) ,

where the φ_n(t) satisfy ⟨φ_n(t), φ_m(t)⟩ = 1 for n = m and 0 otherwise. The continuous function x(t) describes
the continuous-time waveform that carries the information through the communication channel. The
number of basis functions that represent all the waveforms x_i(t) for a particular communication system
may be infinite, i.e. N may equal ∞. Using the set of basis functions, the function x(t) maps to a set
of N real numbers x_n; these real-valued scalar coefficients assemble into an N-dimensional real-valued
vector

   x = [x_1 ... x_N]′ .
Thus, the function x(t) corresponds to an N-dimensional point x in a vector space with axes defined by
the φ_i(t), as illustrated for a three-dimensional point in Figure 1.4.
Similarly, a set of continuous-time functions x_i(t) corresponds to a set of discrete N-dimensional
points x_i known as a signal constellation. Such a geometric viewpoint advantageously enables the
visualization of the distance between continuous-time functions using distances between the associated
signal points in ℝ^N, the space of N-dimensional real vectors. In fact, later developments show

   ⟨x_1(t), x_2(t)⟩ = ⟨x_1, x_2⟩ ,
Figure 1.4: Vector space.
where the right-hand side is taken as the usual Euclidean inner product in ℝ^N (discussed later in
Definition 1.1.6). This decomposition of continuous-time functions extends to random processes using
what is known as a Karhunen-Loeve expansion. The basis functions also extend for all time, i.e. on the
infinite time interval (−∞, ∞), in which case the inner product becomes ⟨f(t), g(t)⟩ = ∫_{−∞}^{∞} f(t) g(t) dt.
Decomposition of random processes is fundamental to demodulation and detection in the presence
of noise. Modulation constructively assembles random signals for the communication system from a set
of basis functions φ_n(t) and a set of signal points x_i. The chosen basis functions and signal points
typically satisfy physical constraints of the system and determine performance in the presence of noise.
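The waveform-to-vector correspondence described above can be sketched numerically. The rectangular (Manchester-like) basis functions and the symbol values below are illustrative assumptions; any orthonormal set behaves the same way:

```python
import numpy as np

# Numerical sketch: represent a waveform by its vector of basis coefficients.
# Basis: two unit-energy rectangular pulses on [0, T] (chosen for simplicity).
T, K = 1.0, 10000
t = (np.arange(K) + 0.5) * (T / K)   # midpoint time grid
dt = T / K

phi1 = np.ones(K) / np.sqrt(T)                       # constant pulse
phi2 = np.where(t < T/2, 1.0, -1.0) / np.sqrt(T)     # Manchester-like pulse

# Check orthonormality numerically: <phi_m, phi_n> = delta_mn
G = np.array([[np.sum(phi1*phi1), np.sum(phi1*phi2)],
              [np.sum(phi2*phi1), np.sum(phi2*phi2)]]) * dt
assert np.allclose(G, np.eye(2))

# Synthesize x(t) from a data symbol x = [x1, x2]', then recover x by
# projecting x(t) back onto each basis function.
x = np.array([3.0, -1.0])            # hypothetical symbol values
xt = x[0]*phi1 + x[1]*phi2
x_recovered = np.array([np.sum(xt*phi1), np.sum(xt*phi2)]) * dt
assert np.allclose(x_recovered, x)

# Inner-product invariance: integral of x(t)^2 equals ||x||^2
assert np.isclose(np.sum(xt*xt)*dt, np.sum(x*x))
```

The three assertions mirror, in order, the orthonormality condition, the synthesis/projection pair, and the inner-product invariance developed in the next subsections.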
1.1.2 Synthesis of the Modulated Waveform
The description of modulation begins with the definition of a data symbol:
Definition 1.1.1 (Data Symbol) A data symbol is defined as any N-dimensional real
vector

   x ≜ [x_1 x_2 ... x_N]′ . (1.1)
The data symbol is in lower-case boldface, indicating a vector, to distinguish it from its components,
shown in lowercase Roman to indicate scalars. Unless specified otherwise, all quantities shall be real-valued
in this chapter. Extensions of the definitions to complex-valued quantities occur in succeeding
chapters as necessary. The synthesis of a modulated waveform uses a set of orthonormal basis functions.
Definition 1.1.2 (Orthonormal Basis Functions) A set of N functions φ_n(t) constitute
an N-dimensional orthonormal basis if they satisfy the following property:

   ∫_{−∞}^{∞} φ_m(t) φ_n(t) dt = δ_mn = { 1  if m = n ;  0  if m ≠ n } . (1.2)

The discrete-time function δ_mn will be called the discrete delta function^2.
The construction of a modulated waveform x(t) appears in Figure 1.5:

Definition 1.1.3 (Modulated Waveform) A modulated waveform, corresponding to
the data symbol x, for the orthonormal basis φ_n(t) is defined as

   x(t) ≜ Σ_{n=1}^{N} x_n φ_n(t) . (1.3)
^2 δ_mn is also called a Kronecker delta.
Figure 1.5: Modulator.
Thus, the modulated signal x(t) is formed by multiplying each of the components of the vector x by the
corresponding basis function and summing the continuous-time waveforms, as shown in Figure 1.5. There
are many possible choices for the basis functions φ_n(t), and correspondingly many possible modulated
waveforms x(t) for the same vector x. The specific choice of basis functions used in a communication
system depends on physical limitations of the system.
In practice, a modulator can construct a modulated waveform from any set of data symbols, leading
to the concept of a signal constellation:
Definition 1.1.4 A signal constellation is a set of M vectors, x_i, i = 0, ..., M−1. The
corresponding set of modulated waveforms x_i(t), i = 0, ..., M−1, is a signal set.
Each distinct point in the signal constellation corresponds to a different modulated waveform, but all
the waveforms share the same set of basis functions. The component of the i-th vector x_i along the
n-th basis function φ_n(t) is denoted x_in. The occurrence of a particular data symbol in the constellation
determines the probability of the i-th vector (and thus of the i-th waveform), p_x(i).
The power available in any physical communication system limits the average amount of energy
required to transmit each successive data symbol. Thus, an important concept for a signal constellation
(set) is its average energy:
Definition 1.1.5 (Average Energy) The average energy of a signal constellation is defined
by

   ℰ_x ≜ E[ ||x||² ] = Σ_{i=0}^{M−1} ||x_i||² p_x(i) , (1.4)

where ||x_i||² is the squared length of the vector x_i, ||x_i||² ≜ Σ_{n=1}^{N} x_in², and E denotes
expected or mean value. (This definition assumes there are only M possible waveforms and
Σ_{i=0}^{M−1} p_x(i) = 1.)
The average energy is also closely related to the concept of average power, which is

   P_x ≜ ℰ_x / T , (1.5)
Figure 1.6: BPSK basis functions and waveforms.
corresponding to the amount of energy per symbol period.
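A minimal sketch of the average-energy computation in (1.4) and the average-power relation (1.5), assuming a hypothetical equiprobable one-dimensional four-point constellation and symbol period:

```python
import numpy as np

# Average energy E_x = E[||x||^2] and power P_x = E_x / T for a hypothetical
# one-dimensional constellation {-3, -1, +1, +3} with equally likely symbols.
constellation = np.array([[-3.0], [-1.0], [1.0], [3.0]])   # M x N matrix of x_i
p = np.array([0.25, 0.25, 0.25, 0.25])                     # p_x(i), sums to 1
T = 1e-6                                                   # symbol period (assumed)

Ex = np.sum(p * np.sum(constellation**2, axis=1))   # Equation (1.4)
Px = Ex / T                                         # Equation (1.5)

print(Ex)   # 5.0 = (9 + 1 + 1 + 9)/4
print(Px)
```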
The minimization of ℰ_x places signal-constellation points near the origin; however, the distance
between points shall relate to the probability of correctly detecting the symbols in the presence of noise.
The geometric problem of optimally arranging points in a vector space with minimum average energy,
while maintaining at least a minimum distance between each pair of points, is the well-studied sphere-packing
problem. This geometric viewpoint of communication first appeared in Shannon's seminal 1948
work, "A Mathematical Theory of Communication" (Bell System Technical Journal).
The following example illustrates the utility of the basis-function concept:
EXAMPLE 1.1.2 A commonly used and previously discussed transmission method is Binary
Phase-Shift Keying (BPSK), used in some satellite and deep-space transmissions as well
as a number of simple transmission systems. A more general form of the basis functions,
parameterized by the variable T, is

   φ_1(t) = √(2/T) cos(2πt/T + π/4)  and  φ_2(t) = √(2/T) cos(2πt/T − π/4)

for 0 ≤ t ≤ T, and 0 elsewhere. These two basis functions (N = 2), φ_1(t) and φ_2(t), are shown
in Figure 1.6. The two basis functions are orthogonal to each other and both have unit energy,
thus satisfying the orthonormality condition. The two possible modulated waveforms transmitted
during the interval [0, T] also appear in Figure 1.6, where x_0(t) = φ_1(t) − φ_2(t) and
x_1(t) = φ_2(t) − φ_1(t). Thus, the data symbols associated with the continuous waveforms are
x_0 = [1 −1]′ and x_1 = [−1 1]′ (a prime denotes transpose). The signal constellation appears
in Figure 1.7. The resulting waveforms are x_0(t) = −(2/√T) sin(2πt/T) and x_1(t) = (2/√T) sin(2πt/T).
This type of modulation is called binary phase-shift keying because the two waveforms are
shifted in phase from each other. Since only two possible waveforms are transmitted during
each T-second time interval, the information rate is log_2(2) = 1 bit per T seconds. Thus,
to transmit at 1 Mbps, T must equal 1 μs. (Additional scaling may be used to adjust the
BPSK transmit power/energy level to some desired value, but this simply scales all possible
constellation points and transmit signals by the same constant value.)
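The claims of this example can be verified numerically: the two phase-offset cosines are orthonormal on [0, T], and their difference is a scaled sinusoid. (The value T = 1 and the midpoint integration grid are implementation choices for this sketch.)

```python
import numpy as np

# Numerical check of the BPSK example basis functions.
T, K = 1.0, 100000
t = (np.arange(K) + 0.5) * (T / K)   # midpoint time grid on [0, T]
dt = T / K

phi1 = np.sqrt(2/T) * np.cos(2*np.pi*t/T + np.pi/4)
phi2 = np.sqrt(2/T) * np.cos(2*np.pi*t/T - np.pi/4)

assert np.isclose(np.sum(phi1*phi1)*dt, 1.0)   # unit energy
assert np.isclose(np.sum(phi2*phi2)*dt, 1.0)
assert np.isclose(np.sum(phi1*phi2)*dt, 0.0)   # orthogonal

# x0 = [1, -1]' modulates to phi1 - phi2 = -(2/sqrt(T)) sin(2 pi t / T)
x0t = phi1 - phi2
assert np.allclose(x0t, -(2/np.sqrt(T))*np.sin(2*np.pi*t/T))
```

The last assertion follows from the identity cos(θ + π/4) − cos(θ − π/4) = −√2 sin θ.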
Figure 1.7: BPSK and FM/Manchester signal constellation.
Another set of basis functions is known as FM code (FM is Frequency Modulation) in the
storage industry and also as Manchester encoding in data communications. This method
is used in many commercial disk-storage products and also in what is known as 10BT or
Ethernet (commonly used in local area networks for the internet). The basis functions are
approximated in Figure 1.8; in practice, the sharp edges are somewhat smoother, depending
on the specific implementation. The two basis functions again satisfy the orthonormality
condition. The data rate equals one bit per T seconds; for a data transfer rate into the disk of
24 MBytes/s or 192 Mbps, T = 1/(192 MHz); for a data rate of 10 Mbps in Ethernet, T =
100 ns. Again for the FM/Manchester example, only two signal points are used, x_0 = [1 −1]′
and x_1 = [−1 1]′, with the same constellation shown in Figure 1.7, although the basis
functions differ from the previous example. The resulting modulated waveforms appear
in Figure 1.8 and correspond to the write currents that are applied to the head in the
storage system. (Additional scaling may be used to adjust either the FM or Ethernet transmit
power/energy level to some desired value, but this simply scales all possible constellation
points and transmit signals by the same constant value.)
The common vector space representation (i.e. signal constellation) of the Ethernet and BPSK
examples allows the performance of a detector to be analyzed for either system in the same way, despite
the gross differences in the overall systems.
In either of the systems in Example 1.1.2, a more compact representation of the signals with only one
basis function is possible. (As an exercise, the reader should conjecture what this basis function could
be and what the associated signal constellation would be.) Appendix A considers the construction of a
minimal set of basis functions for a given set of modulated waveforms.
Two more examples briefly illustrate vector components x_n that are not necessarily binary-valued.
EXAMPLE 1.1.3 (ISDN - 2B1Q)^3 ISDN digital phone-line service uses M = 4 waveforms
while the number of basis functions is N = 1. Thus, the ISDN system transmits 2 bits
of information per T seconds of channel use. ISDN uses a basis function that is roughly
approximated^4 by φ_1(t) = √(1/T) sinc(t/T), where 1/T = 80 kHz, and sinc(x) ≜ sin(πx)/(πx). This
basis function is not time-limited to the interval [0, T]. The associated signal constellation
appears in Figure 1.9. 2 bits are transmitted using one 4-level (or quaternary) symbol
every T seconds, hence the name 2B1Q.
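A vector encoder for this example can be sketched as a lookup from bit pairs to the four levels. The Gray-style level assignment below is a common 2B1Q convention and an assumption here, not taken from the text:

```python
# Sketch of a 2B1Q-style vector encoder: each pair of bits selects one of
# M = 4 one-dimensional symbols (2 bits per symbol). The specific bit-to-level
# assignment is an assumed convention, chosen so adjacent levels differ in
# one bit.
LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): +1.0, (1, 0): +3.0}

def encode_2b1q(bits):
    """Map an even-length bit sequence to 4-level symbols, 2 bits at a time."""
    assert len(bits) % 2 == 0
    return [LEVELS[(bits[i], bits[i+1])] for i in range(0, len(bits), 2)]

print(encode_2b1q([0, 0, 1, 0, 1, 1]))   # [-3.0, 3.0, 1.0]
```

At 1/T = 80 kHz, this encoder carries 2 × 80 k = 160 kbps, matching the basic-rate figures quoted for ISDN above.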
Telephone companies also often transmit the data rate 1.544 Mbps on twisted pairs (such a
signal often carries twenty-four 64 kbps digital voice signals plus overhead signaling information
of 8 kbps). A method known as HDSL (High-bit-rate Digital Subscriber Lines) uses
^3 ISDN stands for Integrated Services Digital Network, an all-digital communications standard established by the CCITT
for the public telephone network to carry voice and data services simultaneously. It has largely yielded to more sophisticated
transmission at higher rates, known as DSL, but provides a good introductory example.
^4 Actually (1/√T) sinc(t/T), or some other Nyquist pulse shape, is used; see Chapter 3 on Intersymbol Interference.
Figure 1.8: Manchester/FM (Ethernet) basis functions and waveforms.
Figure 1.9: 2B1Q signal constellation.
Figure 1.10: 32 Cross signal constellation.
2B1Q with 1/T = 392 kHz, and thus transmits a data rate of 784 kbps on each of two phone
lines for a total of 1.568 Mbps (1.544 Mbps plus 24 kbps of additional HDSL management
overhead).
EXAMPLE 1.1.4 (V.32 - 32CR)^5 Consider a signal set with 32 waveforms (M = 32)
and with 2 basis functions (N = 2) for transmission of 32 signals per channel use. The
CCITT V.32-compatible 9600 bps voiceband modems use basis functions that are equivalent
to

   φ_1(t) = √(2/T) cos(2πt/T)  and  φ_2(t) = √(2/T) sin(2πt/T)

for 0 ≤ t ≤ T, and 0 elsewhere. A raw bit rate of 12.0 kbps^6 is achieved with a symbol rate
of 1/T = 2400 Hz. The signal constellation is shown in Figure 1.10; the 32 points are
arranged in a rotated cross pattern, called 32 CR or 32 cross.
5 bits are transformed into 1 of 32 possible 2-dimensional symbols, hence the extension in
the name V.32.
The last two examples also emphasize another tacit advantage of the vector representation, namely
that the details of the rates and carrier frequencies in the modulation format are implicit in the
normalization of the basis functions, and they do not appear in the description of the signal constellation.
1.1.3 Vector-Space Interpretation of the Modulated Waveforms
A concept that arises frequently in transmission analysis is the inner product of two time functions
and/or of two N-dimensional vectors:

Definition 1.1.6 (Inner Product) The inner product of two (real) functions of time
u(t) and v(t) is defined by

   ⟨u(t), v(t)⟩ ≜ ∫_{−∞}^{∞} u(t) v(t) dt . (1.6)
^5 The CCITT has published a set of modem standards numbered V.XX.
^6 The actual user information rate is usually 9600 bps, with the extra bits used for error-correction purposes, as shown
in Chapter 8.
The inner product of two (real) vectors u and v is defined by

   ⟨u, v⟩ ≜ u′v = Σ_{n=1}^{N} u_n v_n , (1.7)

where ′ denotes vector transpose (and conjugate vector transpose in Chapter 2 and beyond).
The two inner products in the above definition are equal under the conditions in the following
theorem:

Theorem 1.1.1 (Invariance of the Inner Product) If there exists a set of basis functions
φ_n(t), n = 1, ..., N for some N such that u(t) = Σ_{n=1}^{N} u_n φ_n(t) and v(t) = Σ_{n=1}^{N} v_n φ_n(t),
then

   ⟨u(t), v(t)⟩ = ⟨u, v⟩ , (1.8)

where

   u ≜ [u_1 ... u_N]′  and  v ≜ [v_1 ... v_N]′ . (1.9)
The proof follows from

   ⟨u(t), v(t)⟩ = ∫_{−∞}^{∞} u(t) v(t) dt = ∫_{−∞}^{∞} Σ_{n=1}^{N} Σ_{m=1}^{N} u_n v_m φ_n(t) φ_m(t) dt   (1.10)

   = Σ_{n=1}^{N} Σ_{m=1}^{N} u_n v_m ∫_{−∞}^{∞} φ_n(t) φ_m(t) dt = Σ_{n=1}^{N} Σ_{m=1}^{N} u_n v_m δ_nm = Σ_{n=1}^{N} u_n v_n   (1.11)

   = ⟨u, v⟩ . QED. (1.12)
Thus the inner product is invariant to the choice of basis functions and depends only on the components
of the time functions along each of the basis functions. While the inner product is invariant
to the choice of basis functions, the component values of the data symbols do depend on the basis functions.
For example, for the V.32 example, one could recognize that the integral

   (2/T) ∫_0^T [ 2 cos(2πt/T) + sin(2πt/T) ] [ cos(2πt/T) + 2 sin(2πt/T) ] dt = 2·1 + 1·2 = 4 .
Parseval's Identity is a special case (with x = u = v) of the invariance of the inner product.
Theorem 1.1.2 (Parseval's Identity) The following relation holds true for any modulated
waveform:

   ℰ_x = E[ ||x||² ] = E[ ∫_{−∞}^{∞} x²(t) dt ] . (1.13)
The proof follows from the previous Theorem 1.1.1 with u = v = x:

   E[⟨u(t), v(t)⟩] = E[⟨x, x⟩]   (1.14)
   = E[ Σ_{n=1}^{N} x_n x_n ]   (1.15)
   = E[ ||x||² ]   (1.16)
   = ℰ_x . QED. (1.17)
Parseval's Identity implies that the average energy of a signal constellation is invariant to the choice of
basis functions, as long as they satisfy the orthonormality condition of Equation (1.2). As another V.32
example, one could recognize that the energy of the [2, 1]′ point is

   (2/T) ∫_0^T [ 2 cos(2πt/T) + sin(2πt/T) ]² dt = 2·2 + 1·1 = 5 .
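Both V.32 computations above can be confirmed numerically, assuming the unit-energy sine/cosine basis of Example 1.1.4 (the grid size and T = 1 are implementation choices for this sketch):

```python
import numpy as np

# Numerical check: the waveform inner product equals the vector inner
# product <[2,1]', [1,2]'> = 4, and the energy of the [2, 1]' point is
# ||x||^2 = 5, independent of the basis.
T, K = 1.0, 100000
t = (np.arange(K) + 0.5) * (T / K)   # midpoint time grid on [0, T]
dt = T / K

phi1 = np.sqrt(2/T) * np.cos(2*np.pi*t/T)
phi2 = np.sqrt(2/T) * np.sin(2*np.pi*t/T)

u = 2*phi1 + 1*phi2    # waveform for vector u = [2, 1]'
v = 1*phi1 + 2*phi2    # waveform for vector v = [1, 2]'

assert np.isclose(np.sum(u*v)*dt, 4.0)   # <u(t), v(t)> = 2*1 + 1*2
assert np.isclose(np.sum(u*u)*dt, 5.0)   # energy of [2, 1]' = 2*2 + 1*1
```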
Figure 1.11: The correlative demodulator.
The individual basis functions themselves have a trivial vector representation; namely, φ_n(t) is
represented by φ_n = [0 ... 0 1 0 ... 0]′, where the 1 occurs in the n-th position. Thus, the data symbol x_i
has a representation in terms of the unit basis vectors φ_n that is

   x_i = Σ_{n=1}^{N} x_in φ_n . (1.18)
The data-symbol component x_in can be determined as

   x_in = ⟨x_i, φ_n⟩ , (1.19)

which, using the invariance of the inner product, becomes

   x_in = ⟨x_i(t), φ_n(t)⟩ = ∫_{−∞}^{∞} x_i(t) φ_n(t) dt ,  n = 1, ..., N . (1.20)
Thus any set of modulated waveforms x_i(t) can be interpreted as a vector signal constellation, with
the components of any particular vector x_i given by Equation (1.20). In effect, x_in is the projection of
the i-th modulated waveform onto the n-th basis function. The Gram-Schmidt procedure can be used to
determine the minimum number of basis functions needed to represent any signal in the signal set, as
discussed in Appendix A of this chapter.
1.1.4 Demodulation
As in (1.20), the data symbol vector x can be recovered, component by component, by computing
the inner product of x(t) with each of the N basis functions. This recovery is called correlative
demodulation because the modulated signal x(t) is correlated with each of the basis functions to
determine x, as illustrated in Figure 1.11. The modulated signal x(t) is first multiplied by each of the
basis functions in parallel, and the outputs of the multipliers are then passed into a bank of N integrators
to produce the components of the data symbol vector x. Practical realization of the multipliers and
integrators may be difficult. Any physically implementable set of basis functions can only exist over a
Figure 1.12: The matched-filter demodulator.
finite interval in time, call it T, the symbol period.^7 Then the computation of x_n alternatively becomes

   x_n = ∫_0^T x(t) φ_n(t) dt . (1.21)
The computation in (1.21) is more easily implemented by noting that it is equal to

   x(t) * φ_n(T − t) |_{t=T} , (1.22)

where * indicates convolution. The component of the modulated waveform x(t) along the n-th basis
function is equivalently the convolution (filtering) of the waveform x(t) with a filter φ_n(T − t), sampled
at output time T. Such matched-filter demodulation is matched to the corresponding modulator
basis function. Matched-filter demodulation is illustrated in Figure 1.12.
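The equivalence of correlative demodulation (1.21) and matched-filter demodulation (1.22) can be demonstrated with a discrete-time approximation; the sinusoidal basis function, symbol value, and grid size below are arbitrary choices for this sketch:

```python
import numpy as np

# Correlating x(t) with phi_n(t) over [0, T] gives the same value as
# filtering x(t) with the time-reversed pulse phi_n(T - t) and sampling
# the filter output at t = T.
T, K = 1.0, 4096
dt = T / K
t = (np.arange(K) + 0.5) * dt

phi = np.sqrt(2/T) * np.sin(2*np.pi*t/T)   # one example basis function
x_t = 0.7 * phi                            # modulated waveform with x_n = 0.7

# Correlative demodulation: integral of x(t) phi(t) over [0, T]
corr = np.sum(x_t * phi) * dt

# Matched-filter demodulation: convolve with the reversed pulse and read
# the sample corresponding to time T (index K - 1 of the 'full' output).
matched = np.convolve(x_t, phi[::-1])[K - 1] * dt

assert np.isclose(corr, matched)
assert np.isclose(corr, 0.7)   # the symbol component is recovered
```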
Figure 1.12 illustrates a conversion between the data symbol and the corresponding modulated waveform
such that the modulated waveform can be represented by a finite (or countably infinite as N → ∞)
set of components along an orthonormal set of basis functions. The coming sections use this concept to
analyze the performance of a particular modulation scheme on the AWGN channel.
1.2 Discrete Data Detection
In practice, the channel output waveform y(t) is not equal to the modulated signal x(t). In many cases,
the essential information of the channel output y(t) is captured by a finite set of vector components,
i.e. a vector y generated by the demodulation described in Section 1.1. Specific important examples
appear later in this chapter, but presently the analysis presumes the existence of the vector y and
proceeds to study the detector for the channel. The detector decides which of the discrete channel-input
vectors x_i, i = 0, ..., M−1, was transmitted, based on the observation of the channel output vector y.
1.2.1 The Vector Channel Model
The vector channel model appears in Figure 1.13. This model suppresses all continuous-time waveforms,
^7 The restriction to a finite time interval is later removed with the introduction of Nyquist Pulse shapes in Chapter
3, and the term symbol period will be correspondingly reinterpreted.
Figure 1.13: Vector channel model.
and the channel produces a discrete vector output given a discrete vector input. The detector chooses a
message m̂ from among the set of M possible messages m_i, i = 0, ..., M−1, transmitted over the vector
channel. The encoder formats the messages for transmission over the vector channel by translating the
message m_i into x_i, an N-dimensional real data symbol chosen from a signal constellation. The encoders
of this text are one-to-one mappings between the message set and the signal-constellation vectors. The
channel-input vector x corresponds to a channel-output vector y, an N-dimensional real vector. (Thus,
the transformation of y(t) → y is here assumed to occur within the channel.) The conditional probability
of the output vector y given the input vector x, p_{y|x}, completely describes the discrete version of the
channel. The decision device then translates the output vector y into an estimate x̂ of the transmitted
symbol. A decoder (which is part of the decision device) reverses the process of the encoder and
converts the detector output x̂ into the message decision m̂.

The particular message vector corresponding to m_i is x_i, and its n-th component is x_{in}. The n-th
component of y is denoted y_n, n = 1, ..., N. In the vector channel, x is a random vector, with discrete
probability mass function p_x(i), i = 0, ..., M−1.

The output random vector y may have a continuous probability density or a discrete probability
mass function p_y(v), where v is a dummy variable spanning all the possible N-dimensional outputs for
y. This density is a function of the input and channel transition probability density functions:

    p_y(v) = Σ_{i=0}^{M−1} p_{y|x}(v|i) p_x(i) .   (1.23)

The average energy of the channel input symbols is

    E_x = Σ_{i=0}^{M−1} ||x_i||² p_x(i) .   (1.24)

The corresponding average energy for the channel-output vector is

    E_y = Σ_v ||v||² p_y(v) .   (1.25)

An integral replaces^8 the sum in (1.25) for the case of a continuous density function p_y(v).

As an example, consider the simple additive noise channel y = x + n. In this case p_{y|x} = p_n(y − x),
where p_n(·) is the noise density, when n is independent of the input x.
1.2.2 Optimum Data Detection
For the channel of Figure 1.13, the probability of error is defined as the probability that the decoded
message m̂ is not equal to the message that was transmitted:

Definition 1.2.1 (Probability of Error) The Probability of Error is defined as

    P_e ≜ P{ m̂ ≠ m } .   (1.26)
^8 The replacement of a continuous probability density function by a discrete probability mass function is, in strictest
mathematical terms, not advisable; however, we do so here, as this particular substitution prevents a preponderance of
additional notation, and it has long been conventional in the data transmission literature. The reader is thus forewarned to
keep the continuous or discrete nature of the probability density in mind in the analysis of any particular vector channel.
The corresponding probability of being correct is therefore

    P_c = 1 − P_e = 1 − P{m̂ ≠ m} = P{m̂ = m} .   (1.27)

The optimum data detector chooses m̂ to minimize P_e, or equivalently, to maximize P_c. The probability
of being correct is a function of the particular transmitted message, m_i.
The MAP Detector
The probability of the decision m̂ = m_i being correct, given the channel output vector y = v, is

    P_c(m̂ = m_i, y = v) = p_{m|y}(m_i|v) p_y(v) = p_{x|y}(i|v) p_y(v) .   (1.28)

Thus the optimum decision device observes the particular received output y = v and, as a function of
that output, chooses m̂ = m_i, i = 0, ..., M−1, to maximize the probability of a correct decision in (1.28).
This quantity is referred to as the a posteriori probability for the vector channel. Thus, the optimum
detector for the vector channel in Figure 1.13 is called the Maximum a Posteriori (MAP) detector:

Definition 1.2.2 (MAP Detector) The Maximum a Posteriori Detector is defined
as the detector that chooses the index i to maximize the a posteriori probability p_{x|y}(i|v)
given a received vector y = v.

The MAP detector thus simply chooses the index i with the highest conditional probability p_{x|y}(i|v).
For every possible received vector y the designer of the detector can calculate the corresponding best
index i, which depends on the input distribution p_x(i). The a posteriori probabilities can be rewritten
in terms of the a priori probabilities p_x and the channel transition probabilities p_{y|x} by recalling the
identity^9

    p_{x|y}(i|v) p_y(v) = p_{y|x}(v|i) p_x(i) .   (1.29)

Thus,

    p_{x|y}(i|v) = p_{y|x}(v|i) p_x(i) / p_y(v) = p_{y|x}(v|i) p_x(i) / [ Σ_{j=0}^{M−1} p_{y|x}(v|j) p_x(j) ] ,   (1.30)

for p_y(v) ≠ 0. If p_y(v) = 0, then that particular output does not contribute to P_e and therefore is not
of further concern. When maximizing (1.30) over i, the denominator p_y(v) is a constant that is ignored.
Thus, Rule 1.2.1 below summarizes the MAP detector rule in terms of the known probability densities
of the channel (p_{y|x}) and of the input vector (p_x):

Rule 1.2.1 (MAP Detection Rule)

    m̂ ⇒ m_i  if  p_{y|x}(v|i) p_x(i) ≥ p_{y|x}(v|j) p_x(j)  ∀ j ≠ i .   (1.31)

If equality holds in (1.31), then the decision can be assigned to either message m_i or m_j
without changing the minimized probability of error.
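Rule 1.2.1 can be exercised on a small discrete channel; the transition probabilities and priors below are hypothetical, chosen only to illustrate the computation:

```python
# MAP detection (Rule 1.2.1) for a toy discrete channel.
# p_y_given_x[i][v] holds p_{y|x}(v|i); all numbers are hypothetical.
p_y_given_x = [
    [0.8, 0.1, 0.1],   # conditional output probabilities given i = 0
    [0.1, 0.7, 0.2],   # given i = 1
    [0.2, 0.2, 0.6],   # given i = 2
]
p_x = [0.5, 0.25, 0.25]   # a priori input probabilities

def map_detect(v):
    """Choose the index i maximizing p_{y|x}(v|i) * p_x(i), as in (1.31)."""
    scores = [p_y_given_x[i][v] * p_x[i] for i in range(len(p_x))]
    return max(range(len(scores)), key=scores.__getitem__)

assert map_detect(0) == 0   # scores 0.400, 0.025, 0.050
assert map_detect(1) == 1   # scores 0.050, 0.175, 0.050
assert map_detect(2) == 2   # scores 0.050, 0.050, 0.150
```

Note that the denominator p_y(v) of (1.30) never needs to be computed, exactly as the text observes.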
The Maximum Likelihood (ML) Detector
If all transmitted messages are of equal probability, that is if

    p_x(i) = 1/M  ∀ i = 0, ..., M−1 ,   (1.32)

then the MAP Detection Rule becomes the Maximum Likelihood Detection Rule:
^9 The more general form of this identity is called Bayes' Theorem, [2].
Rule 1.2.2 (ML Detection Rule)

    m̂ ⇒ m_i  if  p_{y|x}(v|i) ≥ p_{y|x}(v|j)  ∀ j ≠ i .   (1.33)

If equality holds in (1.33), then the decision can be assigned to either message m_i or m_j
without changing the probability of error.
As with the MAP detector, the ML detector also chooses an index i for each possible received vector
y = v, but this index now only depends on the channel transition probabilities and is independent of
the input distribution (by assumption). The ML detector essentially cancels the 1/M factor on both
sides of (1.31) to get (1.33). This type of detector only minimizes P_e when the input data symbols have
equal probability of occurrence. As this requirement is often met in practice, ML detection is often used.
Even when the input distribution is not uniform, ML detection is still often employed as a detection
rule, because the input distribution may be unknown and thus assumed to be uniform. The Minimax
Theorem sometimes justifies this uniform assumption:
Theorem 1.2.1 (Minimax Theorem) The ML detector minimizes the maximum possible
average probability of error when the input distribution is unknown if the conditional
probability of error P_{e,ML/m=m_i} is independent of i.
Proof: First, if P_{e,ML/i} is independent of i, then

    P_{e,ML} = Σ_{i=0}^{M−1} p_x(i) P_{e,ML/i} = P_{e,ML/i} .

And so,

    max_{p_x} P_{e,ML} = max_{p_x} Σ_{i=0}^{M−1} p_x(i) P_{e,ML/i} = P_{e,ML} Σ_{i=0}^{M−1} p_x(i) = P_{e,ML} .

Now, let R be any receiver other than the ML receiver. Then,

    max_{p_x} P_{e,R} = max_{p_x} Σ_{i=0}^{M−1} p_x(i) P_{e,R/i}
                      ≥ Σ_{i=0}^{M−1} (1/M) P_{e,R/i}     (since max_{p_x} P_{e,R} ≥ P_{e,R} for any given p_x)
                      ≥ Σ_{i=0}^{M−1} (1/M) P_{e,ML/i}    (since the ML receiver minimizes P_e when p_x(i) = 1/M, i = 0, ..., M−1)
                      = P_{e,ML} .

So,

    max_{p_x} P_{e,R} ≥ P_{e,ML} = max_{p_x} P_{e,ML} .

The ML receiver minimizes the maximum P_e over all possible receivers. QED.
Figure 1.14: Decision regions.
The condition of symmetry imposed by the above theorem is not always satisfied in practical situations;
but applications where the inputs are both nonuniform in distribution and the ML conditional error
probabilities are asymmetric are rare. Thus, ML receivers have come to be of nearly ubiquitous use in
place of MAP receivers.
1.2.3 Decision Regions
In the case of either the MAP Rule in (1.31) or the ML Rule in (1.33), each and every possible value
for the channel output y maps into one of the M possible transmitted messages. Thus, the vector space
for y is partitioned into M regions corresponding to the M possible decisions. Simple communication
systems have well-defined boundaries (to be shown later), so the decision regions often coincide with
intuition. Nevertheless, in some well-designed communications systems, the decoding function and the
regions can be more difficult to visualize.
Definition 1.2.3 (Decision Region) The decision region using a MAP detector for each
message m_i, i = 0, ..., M−1, is defined as

    D_i ≜ { v | p_{y|x}(v|i) p_x(i) ≥ p_{y|x}(v|j) p_x(j)  ∀ j ≠ i } .   (1.34)

With uniformly distributed input messages, the decision regions reduce to

    D_i ≜ { v | p_{y|x}(v|i) ≥ p_{y|x}(v|j)  ∀ j ≠ i } .   (1.35)
In Figure 1.14, each of the four different two-dimensional transmitted vectors x_i (corresponding to
the messages m_i) has a surrounding decision region in which any received value for y = v is mapped to
the message m_i. In general, the regions need not be connected, and although such situations are rare in
practice, they can occur (see Problem 1.12). Section 1.3 illustrates several examples of decision regions
for the AWGN channel.
1.2.4 Irrelevant Components of the Channel Output
The discrete channel-output vector y may contain information that does not help determine which of
the M messages has been transmitted. These irrelevant components may be discarded without loss of
performance, i.e. the input detected and the associated probability of error remain unchanged. Let us
presume the L-dimensional channel output y can be separated into two sets of dimensions, those which
do carry useful information, y_1, and those which do not carry useful information, y_2. That is,

    y = [ y_1^T  y_2^T ]^T .   (1.36)
Theorem 1.2.2 summarizes the condition on y_2 that guarantees irrelevance [1]:

Theorem 1.2.2 (Theorem on Irrelevance) If

    p_{x|(y_1,y_2)} = p_{x|y_1}   (1.37)

or equivalently for the ML receiver,

    p_{y_2|(y_1,x)} = p_{y_2|y_1} ,   (1.38)

then y_2 is not needed in the optimum receiver, that is, y_2 is irrelevant.
Proof: For a MAP receiver, clearly the value of y_2 does not affect the maximization of
p_{x|(y_1,y_2)} if p_{x|(y_1,y_2)} = p_{x|y_1}, and thus y_2 is irrelevant to the optimum receiver's decision.
Equation (1.37) can be written as

    p_{(x,y_1,y_2)} / p_{(y_1,y_2)} = p_{(x,y_1)} / p_{y_1} ,   (1.39)

or equivalently, via cross multiplication,

    p_{(x,y_1,y_2)} / p_{(x,y_1)} = p_{(y_1,y_2)} / p_{y_1} ,   (1.40)

which is the same as (1.38). QED.
The converse of the theorem on irrelevance is not necessarily true, as can be shown by counterexamples.
Two examples (due to Wozencraft and Jacobs, [1]) reinforce the concept of irrelevance. In these
examples, the two noise signals n_1 and n_2 are independent and a uniformly distributed input is assumed:

EXAMPLE 1.2.1 (Extra Irrelevant Noise) Suppose y_1 is the noisy channel output shown
in Figure 1.15. In the first example, p_{y_2|y_1,x} = p_{n_2} = p_{y_2|y_1}, thus satisfying the condition
for y_2 to be ignored, as might be obvious upon casual inspection. The extra independent noise
signal n_2 tells the receiver nothing given y_1 about the transmitted message x. In the second
example, the irrelevance of y_2 given y_1 is not quite as obvious as the signal is present in both
the received channel output components. Nevertheless, p_{y_2|y_1,x} = p_{n_2}(v_2 − v_1) = p_{y_2|y_1}.
Of course, in some cases the output component y_2 should not be discarded. A classic example is the
following case of noise cancelation.

EXAMPLE 1.2.2 (Noise Cancelation) Suppose y_1 is the noisy channel output shown
in Figure 1.16. While y_2 may appear to contain only useless noise, it is in fact possible to
reduce the effect of n_1 in y_1 by constructing an estimate of n_1 using y_2. Correspondingly,

    p_{y_2|y_1,x} = p_{n_2}(v_2 − (v_1 − x_i)) ≠ p_{y_2|y_1} .
Reversibility
An important result in digital communication is the Reversibility Theorem, which will be used several
times over the course of this book. This theorem is, in effect, a special case of the Theorem on Irrelevance:
Figure 1.15: Extra irrelevant noise.
Figure 1.16: Noise can be partially canceled.
Figure 1.17: Reversibility theorem illustration.
Theorem 1.2.3 (Reversibility Theorem) The application of an invertible transformation
on the channel output vector y does not affect the performance of the MAP detector.

Proof: Using the Theorem on Irrelevance, if the channel output is y_2 and the result of
the invertible transformation is y_1 = G(y_2), with inverse y_2 = G^{−1}(y_1), then [y_1 y_2] =
[y_1 G^{−1}(y_1)]. Then p_{x|(y_1,y_2)} = p_{x|y_1}, which is the definition of irrelevance. Thus, either of
y_1 or y_2 is sufficient to detect x optimally. QED.
Equivalently, Figure 1.17 illustrates the reversibility theorem by constructing a MAP receiver for
the output of the invertible transformation, y_1, as the cascade of the inverse filter G^{−1} and the MAP
receiver for the input of the invertible transformation, y_2.
1.3 The Additive White Gaussian Noise (AWGN) Channel
Perhaps the most important, and certainly the most analyzed, digital communication channel is the
AWGN channel shown in Figure 1.18. This channel passes the sum of the modulated signal x(t) and an
uncorrelated Gaussian noise n(t) to the output. The Gaussian noise is assumed to be uncorrelated with
itself (or "white") for any non-zero time offset τ, that is,

    E[n(t) n(t − τ)] = (N_0/2) δ(τ) ,   (1.41)

and zero mean, E[n(t)] = 0. With these definitions, the Gaussian noise is also strict-sense stationary
(see Annex C of Chapter 2 for a discussion of stationarity types). The analysis of the AWGN channel
is a foundation for the analysis of more complicated channel models in later chapters.
The assumption of white Gaussian noise is valid in the very common situation where the noise is
predominantly determined by front-end analog receiver thermal noise. Such noise has a power spectral
density given by the Boltzmann equation:

    N(f) = hf / (e^{hf/kT} − 1) ≈ kT   for small f (f < 10^{12} Hz) ,   (1.42)

where Boltzmann's constant is k = 1.38 × 10^{−23} Joules/degree Kelvin, Planck's constant is
h = 6.63 × 10^{−34} Watt-s², and T is the temperature on the Kelvin (absolute) scale. This power
spectral density is approximately −174 dBm/Hz (10^{−17.4} mW/Hz) at room temperature (larger in
practice). The Gaussian assumption is a consequence of the fact that many small noise sources contribute
to this noise, thus invoking the Central Limit Theorem.

Figure 1.18: AWGN channel.
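A quick numerical check of the kT approximation and the −174 dBm/Hz figure (assuming the common convention T = 290 K for room temperature, which is not stated in the text):

```python
import math

k = 1.38e-23   # Boltzmann's constant, J/K
T = 290.0      # assumed room temperature in Kelvin (a common convention)
h = 6.63e-34   # Planck's constant, W-s^2

# Low-frequency limit of (1.42): N(f) ~ kT (W/Hz); convert to dBm/Hz.
psd_w_per_hz = k * T
psd_dbm_per_hz = 10 * math.log10(psd_w_per_hz * 1e3)  # 1 W = 1000 mW
assert abs(psd_dbm_per_hz - (-174)) < 0.1   # about -174 dBm/Hz

# The full Boltzmann expression stays close to kT well below ~10^12 Hz:
f = 1e9   # 1 GHz
n_f = h * f / math.expm1(h * f / (k * T))
assert abs(n_f / psd_w_per_hz - 1) < 1e-3
```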
1.3.1 Conversion from the Continuous AWGN to a Vector Channel
In the absence of additive noise in Figure 1.18, y(t) = x(t), and the demodulation process in Sub-
section 1.1.3 would exactly recover the transmitted signal. This section shows that for the AWGN
channel, this demodulation process provides sufficient information to determine optimally the transmit-
ted signal. The resulting components y_l ≜ ⟨y(t), φ_l(t)⟩, l = 1, ..., N, comprise a vector channel output,
y = [y_1, ..., y_N]^T, that is equivalent for detection purposes to y(t). The analysis can thus convert the
continuous channel y(t) = x(t) + n(t) to a discrete vector channel model,

    y = x + n ,   (1.43)

where n ≜ [n_1 n_2 ... n_N]^T and n_l ≜ ⟨n(t), φ_l(t)⟩. The vector channel output is the sum of the vector
equivalent of the modulated signal and the vector equivalent of the demodulated noise. Nevertheless,
the exact noise sample function may not be reconstructed from n:

    n(t) ≠ Σ_{l=1}^{N} n_l φ_l(t) ≜ n̂(t) ,   (1.44)

or equivalently,

    y(t) ≠ Σ_{l=1}^{N} y_l φ_l(t) ≜ ŷ(t) .   (1.45)

There may exist a component of n(t) that is orthogonal to the space spanned by the basis functions
φ_1(t), ..., φ_N(t). This unrepresented noise component is

    ñ(t) ≜ n(t) − n̂(t) = y(t) − ŷ(t) .   (1.46)

A lemma quickly follows:
Lemma 1.3.1 (Uncorrelated noise samples) The noise samples in the demodulated noise
vector are independent for AWGN and of equal variance N_0/2.

Proof: Write

    E[n_k n_l] = E[ ∬ n(t) n(s) φ_k(t) φ_l(s) dt ds ]   (1.47)
               = (N_0/2) ∫ φ_k(t) φ_l(t) dt   (1.48)
               = (N_0/2) δ_{kl} .  QED.   (1.49)
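A Monte Carlo sketch of Lemma 1.3.1 (all parameters hypothetical): projections of discrete-time white Gaussian noise onto two orthonormal basis functions come out with variance N_0/2 and essentially zero correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N0, T, fs = 2.0, 1.0, 200          # assumed noise level, symbol period, sample rate
t = np.arange(0, T, 1 / fs)
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)   # an orthonormal pair
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

trials = 20000
# Discrete white noise approximating n(t): per-sample variance (N0/2)*fs
n = rng.normal(0, np.sqrt(N0 / 2 * fs), size=(trials, len(t)))
n1 = n @ phi1 / fs    # n_k = <n(t), phi_1(t)>, approximated by a Riemann sum
n2 = n @ phi2 / fs

assert abs(np.var(n1) - N0 / 2) < 0.05    # variance N0/2 = 1, as in (1.49)
assert abs(np.mean(n1 * n2)) < 0.05       # distinct samples uncorrelated
```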
The development of the MAP detector could have replaced y by y(t) everywhere and the development
would have proceeded identically with the tacit inclusion of the time variable t in the probability densities
(and also assuming stationarity of y(t) as a random process). The Theorem of Irrelevance would hold
with [y_1 y_2] replaced by [ŷ(t) ñ(s)], as long as the relation (1.38) holds for any pair of time instants t and
s. In a non-mathematical sense, the unrepresented noise is useless to the receiver, so there is nothing of
value lost in the vector demodulator, even though some of the channel output noise is not represented.
The following algebra demonstrates that ñ(s) is irrelevant:
First,

    E[ñ(s) n̂(t)] = E[ ñ(s) Σ_{l=1}^{N} n_l φ_l(t) ] = Σ_{l=1}^{N} φ_l(t) E[ñ(s) n_l] ,   (1.50)

and

    E[ñ(s) n_l] = E[(n(s) − n̂(s)) n_l]   (1.51)
                = E[ ∫ n(s) φ_l(τ) n(τ) dτ ] − E[ Σ_{k=1}^{N} n_k n_l φ_k(s) ]   (1.52)
                = (N_0/2) ∫ δ(s − τ) φ_l(τ) dτ − (N_0/2) φ_l(s)   (1.53)
                = (N_0/2) [φ_l(s) − φ_l(s)] = 0 .   (1.54)

Second, since the Gaussian ñ(s) is uncorrelated with the samples n_l and independent of x, it is
independent of ŷ(t) = x(t) + n̂(t), so

    p_{x|ŷ(t),ñ(s)} = p_{x,ŷ(t),ñ(s)} / p_{ŷ(t),ñ(s)}   (1.55)
                    = p_{x,ŷ(t)} p_{ñ(s)} / [ p_{ŷ(t)} p_{ñ(s)} ]   (1.56)
                    = p_{x,ŷ(t)} / p_{ŷ(t)}   (1.57)
                    = p_{x|ŷ(t)} .   (1.58)

Equation (1.58) satisfies the theorem of irrelevance, and thus the receiver need only base its decision
on ŷ(t), or equivalently, only on the received vector y. The vector AWGN channel is equivalent to the
continuous-time AWGN channel.
Rule 1.3.1 (The Vector AWGN Channel) The vector AWGN channel is given by

    y = x + n   (1.59)

and is equivalent to the channel illustrated in Figure 1.18. The noise vector n is an N-
dimensional Gaussian random vector with zero mean, equal-variance, uncorrelated compo-
nents in each dimension. The noise distribution is

    p_n(u) = (πN_0)^{−N/2} e^{−||u||²/N_0} = (2πσ²)^{−N/2} e^{−||u||²/(2σ²)} .   (1.60)
Figure 1.19: Binary ML detector.
Application of y(t) to either the correlative demodulator of Figure 1.11 or to the matched-filter demod-
ulator of Figure 1.12 generates the desired vector channel output y at the demodulator output. The
following section specifies the decision process that produces an estimate of the input message, given the
output y, for the AWGN channel.
1.3.2 Optimum Detection with the AWGN Channel
For the vector AWGN channel in (1.59),

    p_{y|x}(v|i) = p_n(v − x_i) ,   (1.61)

where p_n is the vector noise distribution in (1.60). Thus for AWGN the MAP Decision Rule becomes

    m̂ ⇒ m_i  if  e^{−||v−x_i||²/N_0} p_x(i) ≥ e^{−||v−x_j||²/N_0} p_x(j)  ∀ j ≠ i ,   (1.62)

where the common factor of (πN_0)^{−N/2} has been canceled from each side of (1.62). As noted earlier, if
equality holds in (1.62), then the decision can be assigned to any of the corresponding messages without
change in minimized probability of error. The log of (1.62) is the preferred form of the MAP Decision
Rule for the AWGN channel:
Rule 1.3.2 (AWGN MAP Detection Rule)

    m̂ ⇒ m_i  if  ||v − x_i||² − N_0 ln p_x(i) ≤ ||v − x_j||² − N_0 ln p_x(j)  ∀ j ≠ i .   (1.63)
If the channel input messages are equally likely, the ln terms on both sides of (1.63) cancel, yielding the
AWGN ML Detection Rule:
Rule 1.3.3 (AWGN ML Detection Rule)

    m̂ ⇒ m_i  if  ||v − x_i||² ≤ ||v − x_j||²  ∀ j ≠ i .   (1.64)
The ML detector for the AWGN channel in (1.64) has the intuitively appealing physical interpretation
that the decision m̂ = m_i corresponds to choosing the data symbol x_i that is closest, in terms of the
Euclidean distance, to the received vector channel output y = v. Without noise, the received vector
is y = x_i, the transmitted symbol, but the additive Gaussian noise results in a received symbol most
likely in the neighborhood of x_i. The Gaussian shape of the noise implies the probability of a received
point decreases as the distance from the transmitted point increases. As an example, consider the decision
regions for binary data transmission over the AWGN channel illustrated in Figure 1.19. The ML receiver
decides x_1 if y = v ≥ 0 and x_0 if y = v < 0. (One might have guessed this answer without need for
theory.) With d defined as the distance ||x_1 − x_0||, the decision boundary is offset in the MAP detector
by (σ²/d) ln[ p_x(j)/p_x(i) ], with the boundary shifting towards the data symbol of lesser probability, as
illustrated in Figure 1.20. Unlike the ML detector, the MAP detector accounts for the a priori message
probabilities. The decision region for the more likely symbol is extended by shifting the boundary
towards the less likely symbol. Figure 1.21 illustrates the decision regions for a two-dimensional example
of the QPSK signal set, which uses the same basis functions as the V.32 example (Example 1.1.4). The
points in the signal constellation are all assumed to be equally likely.
Figure 1.20: Binary MAP detector.
Figure 1.21: QPSK decision regions.
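For the equally likely QPSK constellation of Figure 1.21, the ML rule (1.64) reduces to a per-component sign decision; a minimal sketch (symbol coordinates ±1 assumed for illustration):

```python
import numpy as np

# QPSK constellation, equally likely symbols (coordinates assumed here)
X = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

def ml_detect(v):
    """AWGN ML rule (1.64): choose the symbol closest to v in Euclidean distance."""
    return int(np.argmin(np.sum((X - v) ** 2, axis=1)))

v = np.array([0.3, -1.2])                  # a hypothetical received point
i = ml_detect(v)
assert np.array_equal(X[i], np.sign(v))    # decision region = quadrant of v
```

Because the decision regions are the four quadrants, the "max and decode" step reduces to truncating each component of v to its sign, exactly the simplification the text mentions for well-structured constellations.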
General Receiver Implementation
While the decision regions in the above examples appear simple to implement, in a digital system the
implementation may be more complex. This section investigates general receiver structures and the
detector implementation.

The MAP detector minimizes the quantity (the quantity y now replaces v, averting strict mathemat-
ical notation, because probability density functions are used less often in the subsequent analysis)

    ||y − x_i||² − N_0 ln p_x(i)   (1.65)

over the M possible messages, indexed by i. The quantity in (1.65) expands to

    ||y||² − 2⟨y, x_i⟩ + ||x_i||² − N_0 ln p_x(i) .   (1.66)
Minimization of (1.66) can ignore the ||y||² term. The MAP decision rule then becomes

    m̂ ⇒ m_i  if  ⟨y, x_i⟩ + c_i ≥ ⟨y, x_j⟩ + c_j  ∀ j ≠ i ,   (1.67)

where c_i is the constant (independent of y)

    c_i ≜ (N_0/2) ln p_x(i) − ||x_i||²/2 .   (1.68)

A system design can precompute the constants c_i from the transmitted symbols x_i and their proba-
bilities p_x(i). The detector thus only needs to implement the M inner products ⟨y, x_i⟩, i = 0, ..., M−1.
When all the data symbols have the same energy (E_x = ||x_i||² ∀ i) and are equally probable (i.e. MAP
= ML), then the constant c_i is independent of i and can be eliminated from (1.67). The ML detector
thus chooses the x_i that maximizes the inner product (or correlation) of the received value for y = v
with x_i over i.
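The equivalence of the distance form (1.63) and the correlation form (1.67) with the constants (1.68) can be sketched numerically; the constellation, priors, and N_0 below are hypothetical:

```python
import numpy as np

# Hypothetical 1-D constellation with unequal priors; N0 = 2 (sigma^2 = 1)
X = np.array([[-1.0], [1.0], [3.0]])
p = np.array([0.5, 0.3, 0.2])
N0 = 2.0

c = (N0 / 2) * np.log(p) - np.sum(X ** 2, axis=1) / 2   # constants (1.68)

def map_distance(y):
    """MAP rule in distance form (1.63)."""
    return int(np.argmin(np.sum((X - y) ** 2, axis=1) - N0 * np.log(p)))

def map_correlation(y):
    """MAP rule in correlation form (1.67): M inner products plus constants."""
    return int(np.argmax(X @ y + c))

for y in (np.array([-2.0]), np.array([0.4]), np.array([2.2])):
    assert map_distance(y) == map_correlation(y)   # the two forms agree
```

The correlation form is what the basis detector of Figure 1.22 implements: one M × N matrix multiply with y, an add of the precomputed c_i, and a maximum.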
Figure 1.22: Basis detector.
There exist two common implementations of the MAP receiver in (1.67). The first, shown in Fig-
ure 1.22, called a basis detector, computes y using a matched-filter demodulator. This MAP receiver
computes the M inner products of (1.67) digitally (an M × N matrix multiply with y), adds the constants
c_i of (1.68), and picks the index i with maximum result. Finally, a decoder translates the index i into
the desired message m_i. Often in practice, the signal constellation is such (see Section 1.6 for examples)
that the max and decode function reduces to simple truncation of each component in the received vector
y.
The second form of the demodulator eliminates the matrix multiply in Figure 1.22 by recalling the
inner-product equivalences between the discrete vectors x_i, y and the continuous-time functions x_i(t)
and y(t). That is,

    ⟨y, x_i⟩ = ∫_0^T y(t) x_i(t) dt = ⟨y(t), x_i(t)⟩ .   (1.69)

Equivalently,

    ⟨y, x_i⟩ = y(t) * x_i(T − t) |_{t=T} ,   (1.70)

where * indicates convolution. This type of detector is called a signal detector and appears in Fig-
ure 1.23.
EXAMPLE 1.3.1 (Pattern recognition as a signal detector) Pattern recognition is a
digital signal processing procedure that is used to detect whether a certain signal is present.
An example occurs when an aircraft takes electronic pictures of the ground and the corre-
sponding electrical signal is analyzed to determine the presence of certain objects. This is a
communication channel in disguise where the two inputs are the usual terrain of the ground
and the terrain of the ground including the object to be detected. A signal detector consisting
of two filters that are essentially the time reverse of each of the possible input signals, with
a comparison of the outputs (after adding any necessary constants), allows detection of the
presence of the object or pattern. There are many other examples of pattern recognition in
voice/command recognition or authentication, written character scanning, and so on.

The above example/discussion illustrates that many of the principles of digital communication theory
are common to other fields of digital signal processing and science.
1.3.3 Signal-to-Noise Ratio (SNR) Maximization with a Matched Filter
SNR is a good measure of a system's performance, describing the ratio of signal power (message) to
unwanted noise power. The SNR at the output of a filter is defined as the ratio of the modulated
signal's energy to the mean-square value of the noise. The SNR can be defined for both continuous- and
discrete-time processes; the discrete SNR is the SNR of the samples of the received and filtered waveform.

Figure 1.23: Signal detector.

Figure 1.24: SNR maximization by matched filter.
The matched filters shown in Figure 1.23 satisfy the SNR maximization property, which the following
theorem summarizes:

Theorem 1.3.1 (SNR Maximization) For the system shown in Figure 1.24, the filter
h(t) that maximizes the signal-to-noise ratio at sample time T_s is given by the matched filter
h(t) = x(T_s − t).
Proof: Compute the SNR at sample time t = T_s as follows.

    Signal Energy = [ x(t) * h(t) |_{t=T_s} ]²   (1.71)
                  = [ ∫ x(t) h(T_s − t) dt ]² = [⟨x(t), h(T_s − t)⟩]² .   (1.72)

The sampled noise at the matched-filter output has energy or mean-square

    Noise Energy = E[ ∫ n(t) h(T_s − t) dt · ∫ n(s) h(T_s − s) ds ]   (1.73)
                 = ∬ (N_0/2) δ(t − s) h(T_s − t) h(T_s − s) dt ds   (1.74)
                 = (N_0/2) ∫ h²(T_s − t) dt   (1.75)
                 = (N_0/2) ||h||² .   (1.77)

The signal-to-noise ratio, defined as the ratio of the signal power in (1.72) to the noise power
in (1.77), equals

    SNR = (2/N_0) · [⟨x(t), h(T_s − t)⟩]² / ||h||² .   (1.78)

The Cauchy-Schwarz Inequality states that

    [⟨x(t), h(T_s − t)⟩]² ≤ ||x||² ||h||²   (1.79)

with equality if and only if x(t) = k h(T_s − t), where k is some arbitrary constant. Thus, by
inspection, (1.78) is maximized over all choices for h(t) when h(t) = x(T_s − t). The filter h(t)
is matched to x(t), and the corresponding maximum SNR (for any k) is

    SNR_max = (2/N_0) ||x||² .   (1.80)
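A discrete-time numerical sketch of the theorem (pulse shape and parameters are hypothetical): evaluating (1.78) confirms that the matched filter attains the maximum (1.80) while any other filter falls short:

```python
import numpy as np

N0, fs = 2.0, 100
t = np.arange(0, 1, 1 / fs)
x = t * (1 - t)            # an arbitrary example pulse on [0, 1]

def snr(h):
    """Discrete version of (1.78): (2/N0) <x, h(Ts - t)>^2 / ||h||^2."""
    num = (np.sum(x * h[::-1]) / fs) ** 2
    return (2 / N0) * num / (np.sum(h ** 2) / fs)

snr_max = (2 / N0) * np.sum(x ** 2) / fs        # the bound (1.80)
assert abs(snr(x[::-1]) - snr_max) < 1e-12      # matched filter h(t) = x(Ts - t)
assert snr(np.ones_like(x)) < snr_max           # a mismatched filter does worse
```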
An example of the use of the SNR maximization property of the matched filter occurs in time-delay
estimation, which is used for instance in radar:

EXAMPLE 1.3.2 (Time-delay estimation) Radar systems emit electromagnetic pulses
and measure the reflection of those pulses off objects within range of the radar. The distance of
the object is determined by the delay of the reflected energy, with longer delay corresponding
to longer distance. By processing the received signal at the radar with a filter matched to the
radar pulse shape, the signal level measured in the presence of a presumably fixed background
white noise will appear largest relative to the noise. Thus, the ability to determine the exact
time instant at which the maximum pulse returned is improved by the use of the matched
filter, allowing more accurate estimation of the position of the object.
1.4 Error Probability for the AWGN Channel
This section discusses the computation of the average probability of decoding the transmitted
message incorrectly on an AWGN channel. From the previous section, the AWGN channel is equivalent
to a vector channel with output given by

    y = x + n .   (1.81)

The computation of P_e often assumes that the inputs x_i are equally likely, or p_x(i) = 1/M. Under this
assumption, the optimum detector is the ML detector, which has decision rule

    m̂ ⇒ m_i  if  ||v − x_i||² ≤ ||v − x_j||²  ∀ j ≠ i .   (1.82)

The P_e associated with this rule depends on the signal constellation {x_i} and the noise variance N_0/2.
Two general invariance theorems in Subsection 1.4.1 facilitate the computation of P_e. The exact P_e,

    P_e = (1/M) Σ_{i=0}^{M−1} P_{e/i}   (1.83)
        = 1 − (1/M) Σ_{i=0}^{M−1} P_{c/i} ,   (1.84)

may be difficult to compute, so convenient and accurate bounding procedures in Subsections 1.4.2
through 1.4.4 can alternately approximate P_e.
Figure 1.25: Rotational invariance with AWGN.
1.4.1 Invariance to Rotation and Translation
The orientation of the signal constellation with respect to the coordinate axes and to the origin does not
aect the P
e
. This result follows because (1) the error depends only on relative distances between points
in the signal constellation, and (2) AWGN is spherically symmetric in all directions. First, the probability
of error for the ML receiver is invariant to any rotation of the signal constellation, as summarized in the
following theorem:
Theorem 1.4.1 (Rotational Invariance) If all the data symbols in a signal constellation
are rotated by an orthogonal transformation, that is, x̃_i = Q x_i for all i = 0, ..., M−1 (where
Q is an N × N matrix such that Q Q^T = Q^T Q = I), then the probability of error of the ML
receiver remains unchanged on an AWGN channel.

Proof: The AWGN remains statistically equivalent after rotation by Q^T. In particular, consider
ñ = Q^T n, a rotated Gaussian random vector. (ñ is Gaussian, since a linear combination of
Gaussian random variables remains a Gaussian random variable.) A Gaussian random vector
is completely specified by its mean and covariance matrix: the mean is E[ñ] = 0, since
E[n_i] = 0, i = 0, ..., N−1; the covariance matrix is E[ñ ñ^T] = Q^T E[n n^T] Q = (N_0/2) I. Thus,
ñ is statistically equivalent to n. The channel output for the rotated signal constellation is
now ỹ = x̃ + n, as illustrated in Figure 1.25. The corresponding decision rule is based on the
distance from the received signal sample ỹ = ṽ to the rotated constellation points x̃_i:

    ||ṽ − x̃_i||² = (ṽ − x̃_i)^T (ṽ − x̃_i)   (1.85)
                = (v − x_i)^T Q^T Q (v − x_i)   (1.86)
                = ||v − x_i||² ,   (1.87)

where ṽ = Q v, so that ỹ = x̃ + Q ñ. Since ñ = Q^T n has the same distribution as n, and the distances
measured in (1.87) are the same as in the original unrotated signal constellation, the ML detector for the
rotated constellation is the same as the ML detector for the original (unrotated) constellation
in terms of all distances and noise variances. Thus, the probability of error must be identical.
QED.
An example of the QPSK constellation appears in Figure 1.21, where N = 2. With Q a 45° rotation
matrix,

    Q = [ cos(π/4)  −sin(π/4)
          sin(π/4)   cos(π/4) ] ,   (1.88)
then the rotated constellation and decision regions are shown in Figure 1.26. From Figure 1.26, clearly
the rotation has not changed the detection problem and has only changed the labeling of the axes,
effectively giving another equivalent set of orthonormal basis functions. Since rotation does not change
the squared length of any of the data symbols, the average energy remains unchanged. The invariance
does depend on the noise components being uncorrelated with one another, and of equal variance, as in
(1.49); for other noise correlations (i.e., n(t) not white, see Section 1.7) rotational invariance does not
Figure 1.26: QPSK rotated by 45°.
Figure 1.27: Rotational invariance summary.
hold. Rotational invariance is summarized in Figure 1.27. Each of the three diagrams shown in Figures
1.26 and 1.27 has identical P_e when used with identical AWGN.
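The invariance is easy to verify numerically; a sketch using the Q of (1.88) and an assumed QPSK constellation:

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q @ Q.T, np.eye(2))          # Q is orthogonal

X = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)  # assumed QPSK
v = np.array([0.2, 0.9])                        # any received point

d_orig = np.linalg.norm(X - v, axis=1)          # distances before rotation
d_rot = np.linalg.norm(X @ Q.T - Q @ v, axis=1) # distances after rotating everything
assert np.allclose(d_orig, d_rot)               # (1.85)-(1.87): distances preserved
assert np.allclose(np.sum(X ** 2), np.sum((X @ Q.T) ** 2))  # energy unchanged
```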
The probability of error is also invariant to translation of the constellation by a constant vector amount
for the AWGN channel, because again P_e depends only on relative distances and the noise remains
unchanged.
Theorem 1.4.2 (Translational Invariance) If all the data symbols in a signal constella-
tion are translated by a constant vector amount, that is, x̃_i = x_i − a for all i = 0, ..., M−1,
then the probability of error of the ML detector remains unchanged on an AWGN channel.

Proof: Note that the constant vector a is common to both y and to x, and thus subtracts
from ||(v − a) − (x_i − a)||² = ||v − x_i||², so (1.82) remains unchanged. QED.
An important use of the Theorem of Translational Invariance is the minimum energy translate of a signal constellation:

Definition 1.4.1 (Minimum Energy Translate) The minimum energy translate of a signal constellation is defined as that constellation obtained by subtracting the constant vector E[x] from each data symbol in the constellation.
To show that the minimum energy translate has the minimum energy among all possible translations of the signal constellation, write the average energy of the translated signal constellation as

ℰ_{x−a} = ∑_{i=0}^{M−1} |x_i − a|² p_x(i)   (1.89)
        = ∑_{i=0}^{M−1} [ |x_i|² − 2⟨x_i, a⟩ + |a|² ] p_x(i)
        = ℰ_x + |a|² − 2⟨E[x], a⟩ .   (1.90)

From (1.90), the energy ℰ_{x−a} is minimized over all possible translates a if and only if a = E[x], so

min_a ℰ_{x−a} = ∑_{i=0}^{M−1} |x_i − E[x]|² p_x(i) = ℰ_x − |E[x]|² .   (1.91)

Thus, as transmitter energy (or power) is often a quantity to be preserved, the engineer can always translate the signal constellation by E[x] to minimize the required energy without affecting performance. (However, there may be practical reasons, such as complexity and synchronization, why this translation is avoided in some designs.)
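The minimization in (1.89)-(1.91) can be verified numerically. The sketch below uses a small hypothetical 4-point constellation with a deliberate DC offset and equal probabilities, and checks that translating by the mean E[x] attains the energy ℰ_x − |E[x]|² and beats nearby translates.

```python
import math

def avg_energy(points, probs, a=(0.0, 0.0)):
    """Average energy of the constellation after translating every point by -a, per (1.89)."""
    return sum(p * ((x - a[0]) ** 2 + (y - a[1]) ** 2)
               for (x, y), p in zip(points, probs))

# A deliberately offset (hypothetical) 4-point constellation with equal probabilities.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
probs = [0.25] * 4
mean = (sum(p * x for (x, _), p in zip(pts, probs)),
        sum(p * y for (_, y), p in zip(pts, probs)))

e_orig = avg_energy(pts, probs)          # total average energy, E_x
e_min = avg_energy(pts, probs, a=mean)   # energy after translating by E[x]

# (1.91): the minimum energy equals E_x - |E[x]|^2, and no nearby shift does better.
assert math.isclose(e_min, e_orig - (mean[0] ** 2 + mean[1] ** 2))
assert all(e_min <= avg_energy(pts, probs, a=(mean[0] + dx, mean[1] + dy)) + 1e-12
           for dx in (-0.5, 0.5) for dy in (-0.5, 0.5))
```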
1.4.2 Union Bounding

Specific examples of calculating P_e appear in the next two subsections. This subsection illustrates this calculation for binary signaling in N dimensions for use in probability-of-error bounds.
Suppose a system has two signals in N dimensions, as illustrated for N = 1 dimension in Figure 1.19 with an AWGN channel. Then the probability of error for the ML detector is the probability that the component of the noise vector n along the line connecting the two data symbols is greater than half the distance along this line. In this case, the noisy received vector y lies in the incorrect decision region, resulting in an error. Since the noise is white Gaussian, its projection in any dimension, in particular along the segment of the line connecting the two data symbols, is of variance σ² = N_0/2, as was discussed in the proof of Theorem 1.4.1. Thus,

P_e = P( ⟨n, φ⟩ ≥ d/2 ) ,   (1.92)
where φ is a unit-norm vector along the line between x_0 and x_1, and d ≜ |x_0 − x_1|. This error probability is

P_e = ∫_{d/2}^{∞} (1/√(2πσ²)) e^{−u²/(2σ²)} du
    = ∫_{d/(2σ)}^{∞} (1/√(2π)) e^{−u²/2} du
    = Q( d/(2σ) ) .   (1.93)
The Q-function is defined in Appendix B of this chapter. As σ² = N_0/2, (1.93) can also be written

P_e = Q( d/√(2N_0) ) .   (1.94)
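A Q-function evaluated through the complementary error function suffices for all the error-probability formulas in this section; the sketch below checks that the two equivalent forms (1.93) and (1.94) agree (σ = 1 chosen for illustration).

```python
import math

def Q(x):
    """Gaussian tail function: Q(x) = P(N(0,1) > x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def binary_pe(d, N0):
    """Binary ML error probability of (1.94): P_e = Q(d / sqrt(2 N0))."""
    return Q(d / math.sqrt(2 * N0))

sigma = 1.0   # sigma^2 = N0/2, so N0 = 2 here
d = 4.0
# (1.93) and (1.94) agree: Q(d/(2 sigma)) == Q(d/sqrt(2 N0))
assert math.isclose(binary_pe(d, 2 * sigma ** 2), Q(d / (2 * sigma)))
assert math.isclose(Q(0), 0.5)
```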
Minimum Distance

Every signal constellation has an important characteristic known as the minimum distance:

Definition 1.4.2 (Minimum Distance, d_min) The minimum distance, d_min(x), is defined as the minimum distance between any two data symbols in a signal constellation x ≜ {x_i}_{i=0,...,M−1}. The argument (x) is often dropped when the specific signal constellation is obvious from the context, thus leaving

d_min ≜ min_{i≠j} |x_i − x_j|   ∀ i, j .   (1.95)
Equation (1.93) is useful in the proof of the following theorem for the probability of error of a ML detector for any signal constellation with M data symbols:

Theorem 1.4.3 (Union Bound) The probability of error for the ML detector on the AWGN channel, with an M-point signal constellation with minimum distance d_min, is bounded by

P_e ≤ (M − 1) Q( d_min/(2σ) ) .   (1.96)
The proof of the Union Bound defines an error event ε_{ij} as the event where the ML detector chooses x̂ = x_j while x_i is the correct transmitted data symbol. The conditional probability of error given that x_i was transmitted is then

P_{e/i} = P( ε_{i0} ∪ ε_{i1} ∪ ... ∪ ε_{i,i−1} ∪ ε_{i,i+1} ∪ ... ∪ ε_{i,M−1} ) = P( ∪_{j=0, j≠i}^{M−1} ε_{ij} ) .   (1.97)
Because the error events in (1.97) are mutually exclusive (meaning if one occurs, the others cannot), the probability of the union is the sum of the probabilities,

P_{e/i} = ∑_{j=0, j≠i}^{M−1} P(ε_{ij}) ≤ ∑_{j=0, j≠i}^{M−1} P_2(x_i, x_j) ,   (1.98)

where

P_2(x_i, x_j) ≜ P( y is closer to x_j than to x_i ) ,   (1.99)

because

P(ε_{ij}) ≤ P_2(x_i, x_j) .   (1.100)
As illustrated in Figure 1.28, P(ε_{ij}) is the probability the received vector y lies in the shaded decision region for x_j given the symbol x_i was transmitted. The incorrect decision region for the probability P_2(x_i, x_j) includes part (shaded red in Figure 1.28) of the region for P(ε_{ik}), which explains the inequality in Equation (1.100). Thus, the union bound overestimates P_{e/i} by integrating pairwise over overlapping half-planes.
Figure 1.28: Probability of error regions.
Figure 1.29: NNUB PSK constellation.
Figure 1.30: 8 Phase Shift Keying.
Using the result in (1.93),

P_2(x_i, x_j) = Q( |x_i − x_j|/(2σ) ) .   (1.101)

Substitution of (1.101) into (1.98) results in

P_{e/i} ≤ ∑_{j=0, j≠i}^{M−1} Q( |x_i − x_j|/(2σ) ) ,   (1.102)
and thus averaging over all transmitted symbols

P_e ≤ ∑_{i=0}^{M−1} ∑_{j=0, j≠i}^{M−1} Q( |x_i − x_j|/(2σ) ) p_x(i) .   (1.103)
Q(x) is monotonically decreasing in x, and thus since d_min ≤ |x_i − x_j|,

Q( |x_i − x_j|/(2σ) ) ≤ Q( d_min/(2σ) ) .   (1.104)
Substitution of (1.104) into (1.103), and recognizing that d_min is not a function of the indices i or j, one finds the desired result

P_e ≤ ∑_{i=0}^{M−1} (M − 1) Q( d_min/(2σ) ) p_x(i) = (M − 1) Q( d_min/(2σ) ) .   (1.105)

QED.
Since the constellation contains M points, the factor M − 1 equals the maximum number of neighboring constellation points that can be at distance d_min from any particular constellation point.
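The union bound is mechanical to compute for any constellation. The sketch below implements (1.96) and the tighter pairwise sum (1.103) for an illustrative 4-PAM constellation at ±1, ±3 with equal priors, confirming that the pairwise sum is never larger.

```python
import math
from itertools import combinations

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(points, sigma):
    """(M-1) Q(d_min / (2 sigma)), the union bound (1.96) of Theorem 1.4.3."""
    d_min = min(math.dist(p, q) for p, q in combinations(points, 2))
    return (len(points) - 1) * Q(d_min / (2 * sigma))

def pairwise_bound(points, sigma):
    """The tighter intermediate bound (1.103): sum of pairwise Q terms (equal priors)."""
    M = len(points)
    return sum(Q(math.dist(points[i], points[j]) / (2 * sigma))
               for i in range(M) for j in range(M) if j != i) / M

# 4-PAM at +-1, +-3 (illustrative scaling): d_min = 2
pam4 = [(-3,), (-1,), (1,), (3,)]
sigma = 0.5
assert pairwise_bound(pam4, sigma) <= union_bound(pam4, sigma)
```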
Examples

The union bound can be tight (or exact) in some cases, but it is not always a good approximation to the actual P_e, especially when M is large. Two examples for M = 8 show situations where the union bound is a poor approximation to the actual probability of error. These two examples also naturally lead to the nearest neighbor bound of the next subsection.
EXAMPLE 1.4.1 (8PSK) The constellation in Figure 1.30 is often called "eight phase" or 8PSK. For the maximum-likelihood detector, the 8 decision regions correspond to sectors bounded by straight lines emanating from the origin as shown in Figure 1.29. The union bound for 8PSK equals

P_e ≤ 7 Q( √ℰ_x sin(π/8) / σ ) ,   (1.106)

and d_min = 2 √ℰ_x sin(π/8).

Figure 1.31: 8PSK P_e bounding.
Figure 1.31 magnifies the detection region for one of the 8 data symbols. By symmetry the analysis would proceed identically no matter which point is chosen, so P_{e/i} = P_e. An error can occur if the component of the additive white Gaussian noise along either of the two directions shown is greater than d_min/2. These two events are not mutually exclusive, although the variance of the noise along either vector (with unit vectors along each defined as φ_1 and φ_2) is σ². Thus,

P_e = P( (⟨n, φ_1⟩ > d_min/2) ∪ (⟨n, φ_2⟩ > d_min/2) )   (1.107)
    ≤ P( ⟨n, φ_1⟩ > d_min/2 ) + P( ⟨n, φ_2⟩ > d_min/2 )   (1.108)
    = 2 Q( d_min/(2σ) ) ,   (1.109)
which is a tighter union bound on the probability of error. Also,

P( ⟨n, φ_1⟩ > d_min/2 ) ≤ P_e ,   (1.110)

yielding a lower bound on P_e; thus the upper bound in (1.109) is tight. This bound is graphically illustrated in Figure 1.29. The bound in (1.109) overestimates the P_e by integrating over the two half-planes, which overlap as clearly depicted in the doubly shaded region of Figure 1.28. The lower bound of (1.110) only integrates over one half-plane that does not completely cover the shaded region. The multiplier in front of the Q-function in (1.109) equals the number of nearest neighbors for any one data symbol in the 8PSK constellation.
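The 8PSK geometry above can be checked numerically. The sketch below builds the constellation for an illustrative ℰ_x and σ, confirms d_min = 2√ℰ_x sin(π/8), and compares the 7-term union bound (1.106), the two-nearest-neighbor bound (1.109), and the lower bound (1.110).

```python
import math
from itertools import combinations

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

Ex, sigma = 1.0, 0.2   # illustrative energy and noise choices
pts = [(math.sqrt(Ex) * math.cos(2 * math.pi * k / 8),
        math.sqrt(Ex) * math.sin(2 * math.pi * k / 8)) for k in range(8)]

d_min = min(math.dist(p, q) for p, q in combinations(pts, 2))
# d_min of 8PSK equals 2 sqrt(Ex) sin(pi/8), as stated after (1.106)
assert math.isclose(d_min, 2 * math.sqrt(Ex) * math.sin(math.pi / 8))

full_union = 7 * Q(d_min / (2 * sigma))   # (1.106), all M-1 terms
two_term = 2 * Q(d_min / (2 * sigma))     # (1.109), only the 2 nearest neighbors
lower = Q(d_min / (2 * sigma))            # (1.110), one half-plane
assert lower <= two_term <= full_union
```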
The following second example illustrates problems in applying the union bound to a two-dimensional signal constellation with 8 or more signal points on a rectangular grid (or lattice):

EXAMPLE 1.4.2 (8AMPM) Figure 1.32 illustrates an 8-point signal constellation called 8AMPM ("amplitude-modulated phase modulation"), or "8 Square". The union bound for P_e yields

P_e ≤ 7 Q( d_min/(2σ) ) .   (1.111)
By rotational invariance, the rotated 8AMPM constellation shown in Figure 1.33 has the same P_e as the unrotated constellation.

Figure 1.32: 8AMPM signal constellation.

Figure 1.33: 8AMPM rotated by 45° with decision regions.

The decision boundaries shown are pessimistic at
the corners of the constellation, so the P_e derived from them will be an upper bound. For notational brevity, let Q ≜ Q( d_min/(2σ) ). The probability of a correct decision for 8AMPM is

P_c = ∑_{i=0}^{7} P_{c/i} p_x(i) = ∑_{i≠1,4} P_{c/i} · (1/8) + ∑_{i=1,4} P_{c/i} · (1/8)   (1.112)
    > (6/8) (1 − Q)(1 − 2Q) + (2/8) (1 − 2Q)²   (1.113)
    = (3/4) (1 − 3Q + 2Q²) + (1/4) (1 − 4Q + 4Q²)   (1.114)
    = 1 − 3.25 Q + 2.5 Q² .   (1.115)
Thus P_e is upper bounded by

P_e = 1 − P_c < 3.25 Q( d_min/(2σ) ) ,   (1.116)
which is tighter than the union bound in (1.111). As M increases for constellations like 8AMPM, the accuracy of the union bound degrades, since the union bound calculates P_e by pairwise error events and thus redundantly includes the probabilities of overlapping half-planes. It is desirable to produce a tighter bound. The multiplier on the Q-function in (1.116) is the average number of nearest neighbors (or decision boundaries), (1/4)(4 + 3 + 3 + 3) = 3.25, for the constellation. This rule of thumb, the Nearest-Neighbor Union bound (NNUB), often used by practicing data transmission engineers, is formalized in the next subsection.
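The algebra of (1.112)-(1.116) is easy to confirm; the sketch below expands the lower bound on P_c and checks that 1 − P_c = 3.25Q − 2.5Q², which is tighter than both 3.25Q and the 7Q union bound (the Q-function argument is chosen arbitrarily for illustration).

```python
import math

def Q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_8ampm_upper(q):
    """Upper bound on P_e from (1.112)-(1.116):
    1 - P_c with P_c > (3/4)(1-q)(1-2q) + (1/4)(1-2q)^2."""
    pc_lower = 0.75 * (1 - q) * (1 - 2 * q) + 0.25 * (1 - 2 * q) ** 2
    return 1 - pc_lower

q = Q_func(2.0)   # q = Q(d_min / (2 sigma)) at an illustrative argument
# (1.113)-(1.115): 1 - P_c expands to 3.25 q - 2.5 q^2, which is below 3.25 q and 7 q
assert math.isclose(pe_8ampm_upper(q), 3.25 * q - 2.5 * q ** 2)
assert pe_8ampm_upper(q) < 3.25 * q < 7 * q
```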
1.4.3 The Nearest Neighbor Union Bound

The Nearest Neighbor Union Bound (NNUB) provides a tighter bound on the probability of error for a signal constellation by lowering the multiplier of the Q-function. The factor (M − 1) in the original union bound is often too large for accurate performance prediction, as in the preceding section's two examples. The NNUB requires more computation; however, it is easily approximated.

The development of this bound uses the average number of nearest neighbors:

Definition 1.4.3 (Average Number of Nearest Neighbors) The average number of neighbors, N_e, for a signal constellation is defined as

N_e = ∑_{i=0}^{M−1} N_i p_x(i) ,   (1.117)

where N_i is the number of neighboring constellation points of the point x_i, that is, the number of other signal constellation points sharing a common decision region boundary with x_i.
Often, N_e is approximated by

N_e ≈ ∑_{i=0}^{M−1} Ñ_i p_x(i) ,   (1.118)

where Ñ_i is the number of points at minimum distance from x_i, whence the often-used name "nearest neighbors." This approximation is often very tight and facilitates computation of N_e when signal constellations are complicated (i.e., coding is used - see Chapters 6, 7, and 8).
Thus, N_e also measures the average number of sides of the decision regions surrounding any point in the constellation. These decision boundaries can be at different distances from any given point and thus might best not be called "nearest." N_e is used in the following theorem:
Theorem 1.4.4 (Nearest Neighbor Union Bound) The probability of error for the ML detector on the AWGN channel, with an M-point signal constellation with minimum distance d_min, is bounded by

P_e ≤ N_e Q( d_min/(2σ) ) .   (1.119)

In the case that N_e is approximated by counting only nearest neighbors, then the NNUB becomes an approximation to the probability of symbol error, and not necessarily an upper bound.
Proof: Note that for each signal point, the distance to each decision-region boundary must be at least d_min/2. The probability of error for point x_i, P_{e/i}, is upper bounded by the union bound as

P_{e/i} ≤ N_i Q( d_min/(2σ) ) .   (1.120)

Thus,

P_e = ∑_{i=0}^{M−1} P_{e/i} p_x(i) ≤ Q( d_min/(2σ) ) ∑_{i=0}^{M−1} N_i p_x(i) = N_e Q( d_min/(2σ) ) .   (1.121)

QED.
The previous Examples 1.4.1 and 1.4.2 show that the Q-function multiplier in each case is exactly N_e for that constellation.

As signal-set design becomes more complicated in Chapters 7 and 8, the number of nearest neighbors is commonly taken as only those neighbors that are also at minimum distance, and N_e is then approximated by (1.118). With this approximation, the P_e expression in the NNUB consequently becomes only an approximation rather than a strict upper bound.
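With the nearest-neighbor approximation (1.118), the NNUB can be computed directly from constellation geometry. The sketch below counts minimum-distance neighbors for 8PSK (equal priors assumed) and recovers the multiplier N_e = 2 found in Example 1.4.1.

```python
import math
from itertools import combinations

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def nnub(points, sigma, tol=1e-9):
    """NNUB of (1.119) using the nearest-neighbor approximation (1.118) for N_e
    (equal priors assumed): count, per point, the neighbors at distance d_min."""
    d_min = min(math.dist(p, q) for p, q in combinations(points, 2))
    Ne = sum(sum(1 for other in points if other is not p and
                 abs(math.dist(p, other) - d_min) < tol)
             for p in points) / len(points)
    return Ne, Ne * Q(d_min / (2 * sigma))

# Unit-circle 8PSK (illustrative scaling)
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
Ne, bound = nnub(pts, sigma=0.3)
assert Ne == 2.0   # each 8PSK point has exactly 2 nearest neighbors, matching (1.109)
```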
1.4.4 Alternative Performance Measures

The optimum receiver design minimizes the symbol error probability P_e. Other closely related measures of performance can also be used.

An important measure used in practical system design is the Bit Error Rate. Most digital communication systems encode the message set {m_i} into bits. Thus engineers are interested in the average number of bit errors expected. The bit error probability will depend on the specific binary labeling applied to the signal points in the constellation. The quantity n_b(i, j) denotes the number of bit errors corresponding to a symbol error when the detector incorrectly chooses m_j instead of m_i, while P(ε_{ij}) denotes the probability of this symbol error. The bit error rate P_b obeys the following bound:
Definition 1.4.4 (Bit Error Rate) The bit error rate is

P_b ≜ ∑_{i=0}^{M−1} ∑_{j≠i} p_x(i) P(ε_{ij}) n_b(i, j) ,   (1.122)

where n_b(i, j) is the number of bit errors for the particular choice of encoder when symbol i is erroneously detected as symbol j. This quantity, despite the label using P, is not strictly a probability.
The bit error rate will always be approximated for the AWGN in this text by:

P_b ≈ ∑_{i=0}^{M−1} ∑_{j=1}^{N_i} p_x(i) P(ε_{ij}) n_b(i, j)   (1.123)

so that

P_b ≲ Q( d_min/(2σ) ) ∑_{i=0}^{M−1} p_x(i) ∑_{j=1}^{N_i} n_b(i, j) = Q( d_min/(2σ) ) ∑_{i=0}^{M−1} p_x(i) n_b(i) = N_b Q( d_min/(2σ) ) ,   (1.124)
where

n_b(i) ≜ ∑_{j=1}^{N_i} n_b(i, j) ,   (1.125)

and the Average Total Bit Errors per Error Event, N_b, is defined as:

N_b = ∑_{i=0}^{M−1} p_x(i) n_b(i) .   (1.126)
An expression similar to the NNUB for P_b is

P_b ≲ N_b Q( d_min/(2σ) ) ,   (1.127)
where the approximation comes from Equation (1.123), which is an approximation because of the reduction in the number of included terms in the sum over other points. The accuracy of this approximation is good as long as those terms corresponding to distant neighbors have small value in comparison to nearest neighbors, which is a reasonable assumption for good constellation designs. The bit error rate is sometimes a more uniform measure of performance because it is independent of M and N. On the other hand, P_e is a block error probability (with block length N) and can correspond to more than one bit in error (if M > 2) over N dimensions. Both P_e and P_b depend on the same distance-to-noise ratio (the argument of the Q-function). While the notation for P_b is commonly expressed with a P, the bit error rate is not a probability and could exceed unity in value in aberrant cases. A better measure that is a probability is obtained by normalizing the bit error rate by the number of bits per symbol: normalization of P_b produces a probability measure because it is the average number of bit errors divided by the number of bits over which those errors occur - this probability is the desired probability of bit error:
Lemma 1.4.1 (Probability of Bit Error, P̄_b) The probability of bit error is defined by

P̄_b = P_b / b .   (1.128)

The corresponding average total number of bit errors per bit is

N̄_b ≜ N_b / b .   (1.129)

The bit error rate can exceed one, but the probability of bit error never exceeds one.
Furthermore, comparison of values of P_e between systems of different dimensionality is not fair. (For instance, to compare a 2B1Q system operating at P_e = 10^{-7} against a multi-dimensional design consisting of 10 successive 2B1Q dimensions decoded jointly as a single symbol, also with P_e = 10^{-7}: the latter system really has 10^{-8} errors per dimension and so is better.) A more fair measure of symbol error probability normalizes the measure by the dimensionality (or number of bits per symbol) of the system to compare systems with different block lengths.
Definition 1.4.5 (Normalized Error Probability, P̄_e) The normalized error probability is defined by

P̄_e ≜ P_e / N .   (1.130)
The normalized average number of nearest neighbors is:

Definition 1.4.6 (Normalized Number of Nearest Neighbors) The normalized number of nearest neighbors, N̄_e, for a signal constellation is defined as

N̄_e = ∑_{i=0}^{M−1} (N_i/N) p_x(i) = N_e/N .   (1.131)

Thus, the NNUB is

P̄_e ≤ N̄_e Q( d_min/(2σ) ) .   (1.132)
EXAMPLE 1.4.3 (8AMPM) The average number of bit errors per error event for 8AMPM, using the octal labeling indicated by the subscripts in Figure 1.32, is computed by

N_b = ∑_{i=0}^{7} (1/8) n_b(i)
    = (1/8) [ (1+1+2) + (3+1+2+2) + (2+1+1) + (1+2+3) + (3+2+2+1) + (1+1+2) + (3+1+2) + (1+2+1) ]   (1.133)-(1.135)
    = 44/8 = 5.5 .   (1.136)

Then

P_b ≲ 5.5 Q( d_min/(2σ) ) .   (1.137)

Also,

N̄_e = 3.25/2 = 1.625 ,   (1.138)

so that

P̄_e ≤ 1.625 Q( d_min/(2σ) ) ,   (1.139)

and

P̄_b ≲ (5.5/3) Q( d_min/(2σ) ) .   (1.140)
Thus the bit error rate is somewhat higher than the normalized symbol error rate. Careful
assignment of bits to symbols can reduce the bit error rate slightly.
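The effect of bit labeling on N_b can be illustrated on a simpler constellation than Figure 1.32. The sketch below (a hypothetical 4-PAM example, not the book's 8AMPM labeling) computes N_b from (1.125)-(1.126) over nearest neighbors for a natural and a Gray labeling, showing that Gray labeling reduces N_b.

```python
def hamming(a, b):
    """Number of differing bits between two integer labels."""
    return bin(a ^ b).count("1")

def Nb(levels, labels):
    """Average total bit errors per error event, (1.125)-(1.126), counting only
    nearest (adjacent-level) neighbors and assuming equal symbol probabilities."""
    M = len(levels)
    total = 0
    for i in range(M):
        for j in range(M):
            if i != j and abs(levels[i] - levels[j]) == 2:   # d_min neighbors on the PAM line
                total += hamming(labels[i], labels[j])
    return total / M

levels = [-3, -1, 1, 3]                 # 4-PAM, d_min = 2 (illustrative)
natural = [0b00, 0b01, 0b10, 0b11]
gray = [0b00, 0b01, 0b11, 0b10]
assert Nb(levels, natural) == 2.0       # (1+3+3+1)/4
assert Nb(levels, gray) == 1.5          # (1+2+2+1)/4: Gray labeling lowers N_b
```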
1.4.5 Block Error Measures

Higher-level engineering of communication systems may desire knowledge of message errors within packets of several messages cascaded into a larger message. An entire packet may be somewhat useless if any part of it is in error. Thus, the concept of a symbol from this perspective of analysis may be the entire packet of messages. The probability of block or packet error is truly identical to the probability of symbol error already analyzed, as long as the entire packet is considered as a single symbol. If a packet contains B bits, each of which independently has probability of bit error P̄_b, then the probability of packet (or block) error is often approximated by B P̄_b. Clearly then, if one says the probability of packet error is 10^{-7} and there are 125 bytes per packet, or 1000 bits per packet, then the probability of bit error would be 10^{-10}. Low packet error rate is thus a more stringent criterion on the detector performance than is low probability of bit error. Nonetheless, analysis can proceed exactly as in this section. As B increases, the approximation above of P_e = B P̄_b can become inaccurate, as we see below.
An errored second is often used in telecommunications as a measure of performance. An errored second is any second in which any bit error occurs. Obviously, fewer errored seconds is better. A given fixed number of errored seconds translates into increasingly lower probability of bit error as the data rate of the channel increases. An error-free second is a second in which no error occurs. If a second contains B independent bits, then the exact probability of an error-free second is

P_{efs} = (1 − P̄_b)^B   (1.141)

while the exact probability of an errored second is

P_{es} = 1 − P_{efs} = ∑_{i=1}^{B} (B choose i) (1 − P̄_b)^{B−i} P̄_b^i .   (1.142)

Dependency between bits and bit errors will change the exact nature of the above formulae, but is usually ignored in calculations. More common in telecommunications is the derived concept of percentage error-free seconds, which is the percentage of seconds that are error free. Thus, if a detector has P̄_b = 10^{-7} and the data rate is 10 Mbps, then one might naively guess that almost every second contains errors according to P_{es} = B P̄_b, and the percentage of error-free seconds is thus very low. To be exact, P_{efs} = (1 − 10^{-7})^{10^7} = .368, so that the link has 36.8% error-free seconds, so actually about 63% of the seconds have errors. Typically large telecommunications networks strive for "five nines" reliability, which translates into 99.999% error-free seconds. At 10 Mbps, this means that the detector has P̄_b = 1 − e^{10^{-7} ln(.99999)} ≈ 10^{-12}. At lower data rates, five nines is less stringent on the channel error probability. Data networks today, often designed for bit error rates above 10^{-12}, operate at 10 Mbps with external error detection and retransmission protocols. Retransmission may not be acceptable for continuous signals like voice or video, so that five nines reliability is often not possible on the data network - this has become a key issue in the convergence of telecommunications networks (designed with five nines reliability normally for voice transmission over the last 5 decades) and data networks, designed with higher error rates for data transmission over the last 3 decades. (Often though, the data-network probability of error is much better than the specification, so systems may work fine without any true understanding by the designers as to exactly why. However, voice (VoIP) and video (IPTV) signals on data networks often exhibit quality issues, which is a function of the packet error rate being too high and also of the failure of retransmission approaches to restore the quality.)
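The errored-second arithmetic of (1.141) is sketched below for the 10 Mbps example: it recovers the 36.8% error-free-second figure and the roughly 10^{-12} bit-error probability required for five-nines reliability.

```python
import math

def p_error_free_second(pb, B):
    """(1.141): probability that all B independent bits in a second are correct."""
    return (1 - pb) ** B

def pb_for_efs(target_efs, B):
    """Invert (1.141): the bit-error probability that yields a given error-free-second rate."""
    return 1 - math.exp(math.log(target_efs) / B)

B = 10 ** 7                             # 10 Mbps -> 1e7 bits per second
efs = p_error_free_second(1e-7, B)
assert abs(efs - math.exp(-1)) < 1e-3   # (1 - 1e-7)^1e7 ~ e^-1 ~ 0.368

pb_5nines = pb_for_efs(0.99999, B)      # "five nines": 99.999% error-free seconds
assert 0.9e-12 < pb_5nines < 1.1e-12
```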
In any case, the probabilities of symbol and bit error are fundamental to all other measures of network performance and can be used by the serious communication engineer to evaluate carefully a system's performance.
1.5 General Classes of Constellations and Modulation

This section describes three classes of modulation that abound in digital data transmission. Each of these three classes represents a different geometric approach to constellation construction. Three successive subsections examine the choice of basis functions for each modulation class and develop corresponding general expressions for the average probability of error P_e for these modulation classes when used on the AWGN channel. Subsection 1.5.1 discusses cubic constellations (Section 1.6 also investigates some important extensions to the cubic constellations). Subsection 1.5.2 examines orthogonal constellations, while Subsection 1.5.3 studies circular constellations.

To compare constellations, some constellation measures are developed first. The cost of modulation depends upon transmitted power (energy per unit time). A unit of time translates to a number of dimensions, given a certain system bandwidth, so the energy per dimension is essentially a measure of power. Given a wider bandwidth, the same unit in time and power will correspond to proportionately more dimensions, but a lower power spectral density. While somewhat loosely defined, a system with symbol period T and bandwidth^10 W has a number of dimensions available for signal constellation construction

N = 2 W T dimensions.   (1.143)

^10 It is theoretically not possible to have finite bandwidth and finite time extent, but in practice this can be approximated closely.
The reasons for this approximation will become increasingly apparent, but all the methods of this section will follow this simple rule when the reasonable and obvious definition of bandwidth is applied. Systems in practice all follow this rule (or have fewer dimensions than this practical maximum), even though it may be possible to construct signal sets with slightly more dimensions theoretically. The number of dimensions in any case is a combined measure of the system resources of bandwidth and time - thus, performance measures and energy are often normalized by N for fair comparison. The data rate concept thus generalizes to the number of bits per dimension:

Definition 1.5.1 (Average Number of Bits Per Dimension) The average number of bits per dimension, b̄, for a signal constellation x, is

b̄ ≜ b / N .   (1.144)

The related quantity, data rate, is

R = b / T .   (1.145)

Using (1.143), one can compute that

2 b̄ = R / W ,   (1.146)

the spectral efficiency of a modulation method, which is often used by transmission engineers to describe an efficiency of transmission (how much data rate per unit of bandwidth). Spectral efficiency is often described in terms of the unit bits/second/Hz, which is really a measure of double the number of bits/dimension. Engineers often abbreviate the term bits/second/Hz to say "bits/Hz," which is an (unfortunately) often used and confusing term because the units are incorrect. Nonetheless, experienced engineers automatically translate the verbal abbreviation "bits/Hz" to the correct units and interpretation, bits-per-second/Hz, or simply double the number of bits/dimension.
The concept of power also generalizes to energy per dimension:

Definition 1.5.2 (Average Energy Per Dimension) The average energy per dimension, ℰ̄_x, for a signal constellation x, is

ℰ̄_x ≜ ℰ_x / N .   (1.147)

A related quantity is the average power,

P_x = ℰ_x / T .   (1.148)
Clearly N cannot exceed the actual number of dimensions in the constellation, but the constellation may require fewer dimensions for a complete representation. For example, the two-dimensional constellation in Figure 1.7 can be described using only one basis vector simply by rotating the constellation by 45 degrees. The average power, which was also defined earlier, is a scaled quantity, but consistently defined for all constellations. In particular, the normalization of basis functions often absorbs gain into the signal constellation definition that may tacitly conceal complicated calculations based on transmission-channel impedance, load matching, and various non-trivially calculated analog effects. These effects can also be absorbed into bandlimited channel models, as is the case in Chapters 2, 3, 4, 10 and 11.

The energy per dimension allows the comparison of constellations with different dimensionality. The smaller the ℰ̄_x for a given P̄_e and b̄, the better the design. The concatenation of two successively transmitted N-dimensional signals taken from the same N-dimensional signal constellation as a single 2N-dimensional signal causes the resulting 2N-dimensional constellation, formed as a Cartesian product of the constituent N-dimensional constellations, to have the same average energy per dimension as the N-dimensional constellation. Thus, simple concatenation of a signal set with itself does not improve the design. However, careful packing of signals in increasingly larger dimensional signal sets can lead to a reduction in the energy per dimension required to transmit a given set of messages, which will be of interest in this section and throughout this text.
The average power is the usual measure of energy per unit time and is useful when sizing the power requirements of a modulator or in determining scale constants for analog filter/driver circuits in the actual implementation. The power can be set equal to the square of the voltage over the load resistance.

The noise energy per dimension for an N-dimensional AWGN channel is

σ̄² = ( ∑_{l=1}^{N} σ² ) / N = σ² = N_0 / 2 .   (1.149)
While AWGN is inherently infinite dimensional, by the theorem of irrelevance, a computation of probability of error need only consider the noise components in the N dimensions of the signal constellation. For AWGN channels, the signal-to-noise ratio (SNR) is used often by this text to characterize the channel:

Definition 1.5.3 (SNR) The SNR is

SNR = ℰ̄_x / σ² .   (1.150)
As shown in Section 1.4, the performance of a constellation in the presence of AWGN depends on the minimum distance between any two vectors in the constellation. Increasing the distance between points in a particular constellation increases the average energy per dimension of the constellation. The Constellation Figure of Merit^11 combines the energy per dimension and the minimum distance measures:

Definition 1.5.4 (Constellation Figure of Merit - CFM) The constellation figure of merit, ζ_x, for a signal constellation x, is

ζ_x ≜ ( d_min / 2 )² / ℰ̄_x ,   (1.151)

a unit-less quantity, defined only when b̄ ≥ 1.

The CFM ζ_x will measure the quality of any constellation used with an AWGN channel. A higher CFM generally results in better performance. The CFM should only be used to compare systems with equal numbers of bits per dimension b̄ = b/N, but can be used to compare systems of different dimensionality.
A different measure, known as the energy per bit, measures performance in systems with low average bit rate of b̄ ≤ 1 (see Chapter 7).

Definition 1.5.5 (Energy Per Bit) The energy per bit, ℰ_b, in a signal constellation x is:

ℰ_b = ℰ_x / b = ℰ̄_x / b̄ .   (1.152)

This measure is only defined when b̄ ≤ 1 and has no meaning in other contexts.
Fair comparisons of modulation types consider the following parameters:

1. data rate R
2. power P_x
3. total bandwidth needed for all basis functions W
4. symbol period T
5. probability of error, P̄_b or possibly P_e.

^11 G. D. Forney, Jr., 8/89 IEEE Journal on Selected Areas in Communications.

Any 4 of these parameters may be held constant for two compared modulation methods, while the 5th varies and determines the better method. This set of 5 parameters can be reduced to 3 by normalizing to the number of dimensions:
1. bits per dimension b̄
2. energy per dimension ℰ̄_x
3. normalized probability of symbol error, P̄_e.

Any two can be fixed while the third is varied for the comparison. This is one of the advantages of the concept of normalizing to a number of dimensions. The constellation figure of merit presumes b̄ fixed and then looks at the ratio of d_min² to ℰ̄_x, essentially holding ℰ̄_x fixed and looking at P̄_e (equivalent to d_min if nearest neighbors are ignored on the AWGN). The normalization essentially prevents an excess of symbol period or bandwidth from letting one modulation method look better than another, tacitly including the third and fourth parameters (bandwidth and symbol period) from the list of parameters in a comparison.
Definition 1.5.6 (Margin) The margin of a transmission system is the amount by which the argument of the Q-function can be reduced while retaining the probability of error below a specified maximum that is associated with the margin.

Margins are often quoted in transmission design as they give a level of confidence to designers that unforeseen noise increases or signal attenuation will not cause the system performance to become unacceptable.
EXAMPLE 1.5.1 (Margin in DSL) Digital Subscriber Line systems deliver 100s of kilobits to 10s of megabits of data over telephone lines and use sophisticated adaptive modulation systems described later in Chapters 4 and 5. The two modems are located at the ends of the telephone line, at the telephone-company central office and at the customer's premises. However, they ultimately also have probability of error specified by a relation of the form N_e Q( d_min/(2σ) ). Because noise sources can be unpredictable on telephone lines, which tend to sense everything from other phone lines' signals to radio signals to refrigerator doors and fluorescent and other lights, and because customer-location additional wiring to the modem can be poor grade or long, a margin of at least 6 dB is mandated at the data rate of the service offered if the customer is to be allowed service. This 6 dB essentially allows performance to be degraded by a combined factor of 4 in increased noise before costly manual maintenance or repair service would be necessary.
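The margin definition can be made concrete. The sketch below (with an arbitrary illustrative operating point of d_min/(2σ) = 6) evaluates how the error probability grows when the Q-function argument is eroded by a 6 dB amplitude loss, i.e., halved (noise power increased by a factor of 4).

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_after_margin_loss(arg, margin_db):
    """Error probability ~ Q(arg) after the argument d_min/(2 sigma) suffers
    margin_db of amplitude loss: the argument scales by 10^(-margin_db/20)."""
    return Q(arg * 10 ** (-margin_db / 20))

arg = 6.0                     # illustrative operating point: Q(6) ~ 1e-9
six_db = 20 * math.log10(2)   # ~6.02 dB of amplitude margin = factor 2 in the argument
assert math.isclose(pe_after_margin_loss(arg, six_db), Q(3.0))
assert pe_after_margin_loss(arg, six_db) > pe_after_margin_loss(arg, 0)
```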
1.5.1 Cubic Constellations

Cubic constellations are commonly used on simple data communication channels. Some examples of cubic signal constellations are shown in Figure 1.34 for N = 1, 2, and 3. The construction of a cubic constellation directly maps a sequence of N = b bits into the components of the basis vectors in a corresponding N-dimensional signal constellation. For example, the bit stream . . . 010010 . . . may be grouped as the sequence of two-dimensional vectors . . . (01)(00)(10) . . . The resulting constellation is uniformly scaled in all dimensions, and may be translated or rotated in the N-dimensional space it occupies.

The simplest cubic constellation appears in Figure 1.34, where N = b = b̄ = 1. This constellation is known as binary signaling, since only two possible signals are transmitted using one basis function φ_1(t). Several examples of binary signaling are described next.
Binary Antipodal Signaling

In binary antipodal signaling, the two possible values for x = x_1 are equal in magnitude but opposite in sign, e.g. x_1 = ±d/2. As for all binary signaling methods, the average probability of error is

P_e = P_b = Q( d_min/(2σ) ) .   (1.153)

Figure 1.34: Cubic constellations for N = 1, 2, and 3.

The CFM for binary antipodal signaling equals ζ_x = (d/2)² / [(d/2)²] = 1.
Particular types of binary antipodal signaling differ only in the choice of the basis function φ_1(t). In practice, these basis functions may include Nyquist pulse-shaping waveforms to avoid intersymbol interference. Chapter 3 further discusses waveform shaping. Besides the time-domain shaping, the basis function φ_1(t) shapes the power spectral density of the resultant modulated waveform. Thus, different basis functions may require different bandwidths.
Binary Phase Shift Keying

Binary Phase Shift Keying (BPSK) uses a sinusoid to modulate the sequence of data symbols ±√ℰ_x:

φ_1(t) = { √(2/T) sin(2πt/T)   for 0 ≤ t ≤ T
         { 0                   elsewhere   (1.154)

This representation uses the minimum number of basis functions, N = 1, to represent BPSK, rather than N = 2 as in Example 1.1.2.
Bipolar (NRZ) Transmission

Bipolar signaling, also known as baseband binary or Non-Return-to-Zero (NRZ) signaling, uses a square pulse to modulate the sequence of data symbols ±√ℰ_x:

φ_1(t) = { 1/√T   for 0 ≤ t ≤ T
         { 0      elsewhere   (1.155)
Manchester Coding (Bi-Phase Level)
Manchester Coding, also known as biphase level (BPL) or, in magnetic and optical record-
ing, as frequency modulation, uses a sequence of two opposite-phase square pulses to
modulate each data symbol. In NRZ signaling, long runs of the same bit result in a constant
output signal with no transitions until the bit changes. Since timing recovery circuits usually
require some transitions, Manchester or BPL guarantees that a transition occurs in the middle of
each bit (or symbol) period T. The basis function is:

    \varphi_1(t) = \begin{cases} \frac{1}{\sqrt{T}} & 0 \le t < T/2 \\ -\frac{1}{\sqrt{T}} & T/2 \le t < T \\ 0 & \text{elsewhere} \end{cases}    (1.156)
The power spectral density of the modulated signal is related to the Fourier transform \Phi_1(f) of the
pulse \varphi_1(t). The Fourier transform of the NRZ square pulse is a sinc function with zero crossings spaced
at 1/T Hz. The basis function for BPL in Equation (1.156) requires approximately twice the bandwidth
of the basis function for NRZ in Equation (1.155), because the Fourier transform of the biphase pulse
is a sinc function with zero crossings spaced at 2/T Hz. Similarly, BPSK requires double the bandwidth
of NRZ. Both BPSK and BPL are referred to as rate 1/2 transmission schemes, because for the same
bandwidth they permit only half the transmission rate compared with NRZ.
On-Off Keying (OOK)
On-Off Keying, used in direct-detection optical data transmission, as well as in gate-to-gate transmis-
sion in most digital circuits, uses the same basis function as bipolar transmission.

    \varphi_1(t) = \begin{cases} \frac{1}{\sqrt{T}} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.157)

Unlike bipolar transmission, however, one of the levels for x_1 is zero, while the other is nonzero (\sqrt{2\mathcal{E}_x}).
Because of the asymmetry, this method includes a DC offset, i.e. a nonzero mean value. The CFM is
\zeta_x = .5, and thus OOK is 3 dB inferior to any type of binary antipodal transmission. The comparison
between signal constellations is 10 \log_{10}[\zeta_{x,\mathrm{OOK}} / \zeta_{x,\mathrm{NRZ}}] = 10 \log_{10}(0.5) = -3 dB.
As for any binary signaling method, OOK has

    P_e = \bar{P}_b = Q\left( \frac{d_{\min}}{2\sigma} \right) .    (1.158)
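The 3 dB comparison can be verified directly from the CFM definition (minimum distance squared over four times the average energy); a Python sketch with hypothetical one-dimensional constellations:

```python
from math import log10

def cfm(points):
    # CFM = d_min^2 / (4 * average energy per dimension)
    d_min = min(abs(a - b) for a in points for b in points if a != b)
    avg_energy = sum(p * p for p in points) / len(points)
    return d_min**2 / (4 * avg_energy)

d = 2.0
antipodal = [-d / 2, d / 2]   # binary antipodal (NRZ levels)
ook = [0.0, d]                # same d_min, but with a DC offset
gap_db = 10 * log10(cfm(ook) / cfm(antipodal))   # about -3 dB
```

Both constellations have the same d_min, but the DC offset doubles the average energy of OOK, halving its CFM.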
Vertices of a Hypercube (Block Binary)
Binary signaling in one dimension generalizes to the corners of a hypercube in N dimensions, hence the
name cubic constellations. The hypercubic constellations all transmit an average of \bar{b} = 1 bit per
dimension. For two dimensions, the most common signal set is QPSK.
Quadrature Phase Shift Keying (QPSK)
The basis functions for the two-dimensional QPSK constellation are

    \varphi_1(t) = \begin{cases} \sqrt{\frac{2}{T}} \cos\frac{2\pi t}{T} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.159)

    \varphi_2(t) = \begin{cases} \sqrt{\frac{2}{T}} \sin\frac{2\pi t}{T} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.160)

The transmitted signal is a linear combination of both an inphase (cos) component and a
quadrature (sin) component. The four possible data symbols are

    [x_1 \; x_2] = \left\{ \sqrt{\frac{\mathcal{E}_x}{2}}\,[-1\;-1] ,\; \sqrt{\frac{\mathcal{E}_x}{2}}\,[-1\;+1] ,\; \sqrt{\frac{\mathcal{E}_x}{2}}\,[+1\;-1] ,\; \sqrt{\frac{\mathcal{E}_x}{2}}\,[+1\;+1] \right\} .    (1.161)

The additional basis function does not require any extra bandwidth with respect to BPSK,
and the average energy \mathcal{E}_x remains unchanged. While the minimum distance d^2_{\min} has
decreased by a factor of two, the number of dimensions has doubled; thus the CFM for
QPSK is \zeta_x = 1 again, as with BPSK.
For performance evaluation, it is easier to compute the average probability of a correct
decision P_c rather than P_e for maximum likelihood detection on the AWGN channel with
equally probable signals. By symmetry of the signal constellation, P_{c|i} is identical for i =
0, \ldots, 3.
    P_c = \sum_{i=0}^{3} P_{c/i} \, p_x(i) = P_{c/i} = \left[ 1 - Q\left( \frac{d_{\min}}{2\sigma} \right) \right] \left[ 1 - Q\left( \frac{d_{\min}}{2\sigma} \right) \right]    (1.162)
        = 1 - 2Q\left( \frac{d_{\min}}{2\sigma} \right) + \left[ Q\left( \frac{d_{\min}}{2\sigma} \right) \right]^2 .    (1.163)

To prove the step from the first to the second line, note that the noise in the two dimensions
is independent. The probability of a correct decision requires both noise components to fall
within the decision region, which gives the product in (1.162). Thus
    P_e = 1 - P_c    (1.164)
        = 2Q\left( \frac{d_{\min}}{2\sigma} \right) - \left[ Q\left( \frac{d_{\min}}{2\sigma} \right) \right]^2 < 2Q\left( \frac{d_{\min}}{2\sigma} \right) ,    (1.165)

where d_{\min} = \sqrt{2\mathcal{E}_x} = 2\bar{\mathcal{E}}_x^{1/2}. For reasonable error rates (P_e < 10^{-2}), the [Q(\frac{d_{\min}}{2\sigma})]^2
term in (1.165) is negligible, and the bound on the right, which is also the NNUB, is tight.
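A numerical sketch (with a hypothetical operating point for d_min/(2\sigma)) showing how tight the NNUB in (1.165) is at reasonable error rates:

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

arg = 4.0                            # d_min/(2*sigma), hypothetical operating point
pe_exact = 2 * Q(arg) - Q(arg) ** 2  # equation (1.165), exact
pe_nnub = 2 * Q(arg)                 # the NNUB
rel_gap = (pe_nnub - pe_exact) / pe_exact   # tiny at reasonable error rates
```

At this operating point the relative gap between the bound and the exact value is on the order of Q(arg)/2, i.e. parts in 10^5.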
With a reasonable mapping of bits to data symbols (e.g. the Gray code 0 \to -1 and
1 \to +1), the probability of a bit error \bar{P}_b = \bar{P}_e for QPSK. P_e for QPSK is twice P_e for
BPSK, but \bar{P}_e is the same for both systems. Comparing \bar{P}_e is usually more informative.
Block Binary
For hypercubic signal constellations in three or more dimensions, N \ge 3, the signal points
are the vertices of a hypercube centered on the origin. In this case, the probability of error
generalizes to

    P_e = 1 - \left[ 1 - Q\left( \frac{d_{\min}}{2\sigma} \right) \right]^N < N \, Q\left( \frac{d_{\min}}{2\sigma} \right) ,    (1.166)

where d_{\min} = 2\bar{\mathcal{E}}_x^{1/2}. The basis functions are usually given by \varphi_n(t) = \varphi(t - nT), where \varphi(t)
is the square pulse given in (1.155). The transmission of one symbol with the hypercubic
constellation requires a time interval of length NT. Alternatively, scaling of the basis func-
tions in time can retain a symbol period of length T, but the narrower pulse will require N
times the bandwidth of the T-width pulses. For this case again \zeta_x = 1. As N \to \infty, P_e \to 1:
while the probability of any single dimension being correct remains constant and less than
one, as N increases, the probability of all dimensions being correct decreases.
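The limit P_e \to 1 in (1.166) is easy to see numerically (sketch; the value of d_min/(2\sigma) is hypothetical):

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

q = Q(4.75)   # fixed per-dimension error probability (d_min/(2*sigma) = 4.75)
pe = {N: 1 - (1 - q) ** N for N in (1, 10, 10**6)}
# pe[N] approaches 1 as N grows, even though q never changes
```

Even with a per-dimension error probability near 10^{-6}, the block error probability exceeds one half once N reaches about a million dimensions.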
Ignoring the higher-order terms Q^i, i \ge 2, the average probability of error is approximately
\bar{P}_e \approx Q(d_{\min}/(2\sigma)), which equals \bar{P}_e for binary antipodal signaling. This example illustrates
that increasing dimensionality does not always reduce the probability of error unless the
signal constellation has been carefully designed. As block binary constellations are just a
concatenation of several binary transmissions, the receiver can equivalently decode each of
the independent dimensions separately. However, with a careful selection of the transmitted
signal constellation, it is possible to drive the probability of both a message error P_e and
a bit error P_b to zero with increasing dimensionality N, as long as the average number of
transmitted bits per unit time does not exceed a fundamental rate known as the capacity
of the communication channel (Chapter 8).
1.5.2 Orthogonal Constellations
In orthogonal signal sets, the dimensionality increases linearly with the number of points M = N in
the signal constellation, which results in a decrease in the number of bits per dimension

    \bar{b} = \frac{\log_2(M)}{N} = \frac{\log_2(N)}{N} .
Figure 1.35: Block orthogonal constellations for N = 2 and 3.
Block Orthogonal
Block orthogonal signal constellations have a dimension, or basis function, for each signal point. The
block orthogonal signal set thus consists of M = N orthogonal signals x_i(t), that is

    \langle x_i(t), x_j(t) \rangle = \mathcal{E}_x \delta_{ij} .    (1.167)

Block orthogonal signal constellations appear in Figure 1.35 for N = 2 and 3. The signal constellation
vectors are, in general,

    x_i = \left[ 0 \; \ldots \; 0 \; \sqrt{\mathcal{E}_x} \; 0 \; \ldots \; 0 \right] ,    (1.168)

with the nonzero entry \sqrt{\mathcal{E}_x} in the (i+1)-th position.
The CFM should not be used on block orthogonal signal sets because \bar{b} < 1.
As examples of block orthogonal signaling, consider the following two-dimensional signal sets.
Return to Zero (RZ) Signaling
RZ uses the following two basis functions for the two-dimensional signal constellation shown
in Figure 1.35:

    \varphi_1(t) = \begin{cases} \frac{1}{\sqrt{T}} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.169)

    \varphi_2(t) = \begin{cases} \frac{1}{\sqrt{T}} & 0 \le t < T/2 \\ -\frac{1}{\sqrt{T}} & T/2 \le t < T \\ 0 & \text{elsewhere} \end{cases}    (1.170)

Return to zero indicates that the transmitted voltage (i.e. the real value of the signal
waveform) always returns to the same value at the beginning of any symbol interval.
As for any binary signal constellation,

    P_b = P_e = Q\left( \frac{d_{\min}}{2\sigma} \right) = Q\left( \sqrt{\frac{\mathcal{E}_x}{2\sigma^2}} \right) .    (1.171)
RZ is 3 dB inferior to binary antipodal signaling, and uses twice the bandwidth of NRZ.
Frequency Shift Keying (FSK)
Frequency shift keying uses the following two basis functions for the two-dimensional signal
constellation shown in Figure 1.35.

    \varphi_1(t) = \begin{cases} \sqrt{\frac{2}{T}} \sin\frac{\pi t}{T} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.172)

    \varphi_2(t) = \begin{cases} \sqrt{\frac{2}{T}} \sin\frac{2\pi t}{T} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.173)

The term frequency-shift indicates that the sequence of 1's and 0's in the transmitted
data shifts between two different frequencies, 1/(2T) and 1/T.
As for any binary signal constellation,

    P_b = P_e = Q\left( \frac{d_{\min}}{2\sigma} \right) = Q\left( \sqrt{\frac{\mathcal{E}_x}{2\sigma^2}} \right) .    (1.174)

FSK is also 3 dB inferior to binary antipodal signaling.
FSK can be extended to higher-dimensional signal sets N > 2 by adding the following basis
functions (i \ge 3):

    \varphi_i(t) = \begin{cases} \sqrt{\frac{2}{T}} \sin\frac{\pi i t}{T} & 0 \le t \le T \\ 0 & \text{elsewhere} \end{cases}    (1.175)

The bandwidth necessary to realize the additional basis functions grows linearly
with N for this FSK extension.
P_e Computation for Block Orthogonal
The computation of P_e for block orthogonal signaling returns to the discussion of the signal detector
in Figure 1.23. Because all the signals are equally likely and of equal energy, the constants c_i can be
omitted (because they are all the same constant c_i = c). In this case, the MAP receiver becomes

    \hat{m} \to m_i \text{ if } \langle y, x_i \rangle \ge \langle y, x_j \rangle \quad \forall \, j \ne i .    (1.176)

By the symmetry of the block orthogonal signal constellation, P_{e/i} = P_e or P_{c/i} = P_c for all i. For
convenience, the analysis calculates P_c = P_{c|i=0}, in which case the i-th elements of y are

    y_0 = \sqrt{\mathcal{E}_x} + n_0    (1.177)
    y_i = n_i \quad i \ne 0 .    (1.178)

If a decision is made that message 0 was sent, then \langle y, x_0 \rangle \ge \langle y, x_i \rangle, or equivalently y_0 \ge y_i \; \forall \, i \ne 0.
The probability of this decision being correct is

    P_{c/0} = P\{ y_0 \ge y_i \; \forall \, i \ne 0 \mid 0 \text{ was sent} \} .    (1.179)

If y_0 takes on a particular value v, then since y_i = n_i \; \forall \, i \ne 0 and since all the noise components are
independent,

    P_{c/0, y_0 = v} = P\{ n_i \le v, \; \forall \, i \ne 0 \}    (1.180)
        = \prod_{i=1}^{N-1} P\{ n_i \le v \}    (1.181)
        = \left[ 1 - Q(v/\sigma) \right]^{N-1} .    (1.182)

The last equation uses the fact that the n_i are independent, identically distributed Gaussian random
variables \mathcal{N}(0, \sigma^2). Finally, recalling that y_0 is also a Gaussian random variable \mathcal{N}(\sqrt{\mathcal{E}_x}, \sigma^2),

    P_c = P_{c/0} = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}\left( v - \sqrt{\mathcal{E}_x} \right)^2} \left[ 1 - Q(v/\sigma) \right]^{N-1} dv ,    (1.183)
Figure 1.36: Plot of probability of symbol error for orthogonal constellations.
yielding

    P_e = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}\left( v - \sqrt{\mathcal{E}_x} \right)^2} \left[ 1 - Q(v/\sigma) \right]^{N-1} dv .    (1.184)

This function must be evaluated numerically on a computer.
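A sketch of that numerical evaluation (a simple midpoint Riemann sum of (1.184); the step count and parameters are illustrative). For N = 2 the result can be checked against the binary closed form Q(\sqrt{\mathcal{E}_x / 2\sigma^2}):

```python
from math import erfc, sqrt, exp, pi

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

def pe_block_orthogonal(N, ex, sigma, steps=4000):
    # Midpoint Riemann sum of equation (1.184); the +/-10 sigma window
    # around sqrt(ex) captures essentially all of the Gaussian mass.
    lo, hi = sqrt(ex) - 10 * sigma, sqrt(ex) + 10 * sigma
    dv = (hi - lo) / steps
    pc = 0.0
    for k in range(steps):
        v = lo + (k + 0.5) * dv
        gauss = exp(-(v - sqrt(ex)) ** 2 / (2 * sigma**2)) / sqrt(2 * pi * sigma**2)
        pc += gauss * (1 - Q(v / sigma)) ** (N - 1) * dv
    return 1 - pc

# For N = 2 the integral reduces to the binary result Q(sqrt(ex/(2*sigma^2)))
pe2 = pe_block_orthogonal(N=2, ex=16.0, sigma=1.0)
```

The N = 2 check works because the error event reduces to n_1 - y_0 > 0, a single Gaussian tail.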
A simpler calculation yields the NNUB, which also coincides with the union bound because the
number of nearest neighbors, M - 1, equals the total number of neighbors of any point for block orthogonal
signaling. The NNUB is given by

    P_e \le (M-1) \, Q\left( \frac{d_{\min}}{2\sigma} \right) = (M-1) \, Q\left( \sqrt{\frac{\mathcal{E}_x}{2\sigma^2}} \right) .    (1.185)

A plot of performance for several values of N appears in Figure 1.36. As N gets large, performance
improves without increase of SNR, but at the expense of a lower \bar{b}.
Simplex Constellation
For block orthogonal signaling, the mean value of the signal constellation is nonzero, that is E[x] =
(\sqrt{\mathcal{E}_x}/M) [\, 1 \; 1 \; \ldots \; 1 \,]. Translation of the constellation by -E[x] minimizes the energy in the signal
constellation without changing the average error probability. The translated signal constellation, known
as the simplex constellation, is

    x_i^s = \left[ -\frac{\sqrt{\mathcal{E}_x}}{M}, \; \ldots, \; -\frac{\sqrt{\mathcal{E}_x}}{M}, \; \sqrt{\mathcal{E}_x}\left(1 - \frac{1}{M}\right), \; -\frac{\sqrt{\mathcal{E}_x}}{M}, \; \ldots, \; -\frac{\sqrt{\mathcal{E}_x}}{M} \right]^* ,    (1.186)
where the \sqrt{\mathcal{E}_x}(1 - \frac{1}{M}) occurs in the i-th position. The superscript s distinguishes the simplex constellation
x_i^s from the block orthogonal constellation x_i from which the simplex constellation is constructed.
The energy of the simplex constellation equals

    \mathcal{E}_x^s = \frac{M-1}{M} \mathcal{E}_x ,    (1.187)
Figure 1.37: Pulse Duration Modulation (PDM).
which provides significant energy savings for small M. The set of data symbols, however, is no longer
orthogonal:

    \langle x_i^s, x_j^s \rangle = (x_i - E[x])^* (x_j - E[x])    (1.188)
        = \mathcal{E}_x \delta_{ij} - \langle E[x], (x_i + x_j) \rangle + \frac{\mathcal{E}_x}{M}    (1.189)
        = \mathcal{E}_x \delta_{ij} - 2\frac{\mathcal{E}_x}{M} + \frac{\mathcal{E}_x}{M}    (1.190)
        = \mathcal{E}_x \delta_{ij} - \frac{\mathcal{E}_x}{M} .    (1.191)
By the theorem of translational invariance, P_e of the simplex signal set equals P_e of the block orthogonal
signal set given in (1.184) and bounded in (1.185).
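Equations (1.187) and (1.191) can be checked by direct construction (a sketch with M = 4 and unit energy):

```python
from math import sqrt

M = 4
ex = 1.0
# Block orthogonal vectors, then subtract the mean to form the simplex set
orth = [[sqrt(ex) if j == i else 0.0 for j in range(M)] for i in range(M)]
mean = [sqrt(ex) / M] * M
simplex = [[x - m for x, m in zip(v, mean)] for v in orth]

energy = sum(x * x for x in simplex[0])                     # (M-1)/M * ex, eq. (1.187)
cross = sum(a * b for a, b in zip(simplex[0], simplex[1]))  # -ex/M, eq. (1.191)
```

With M = 4 the energy drops to 3/4 of the block orthogonal value, and each pairwise inner product equals -1/4.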
Pulse Duration Modulation
Another signal set in which the signals are not orthogonal, as usually described, is pulse duration modula-
tion (PDM). The number of signal points in the PDM constellation increases linearly with the number of
dimensions, as for orthogonal constellations. PDM is commonly used, with some modifications, in read-
only optical data storage (i.e. compact disks and CD-ROM). In optical data storage, data is recorded
by the length of a hole or pit burned into the storage medium. The signal set can be constructed as
illustrated in Figure 1.37. The minimum width of the pit (4T in the figure) is much larger than the
separation (T) between the different PDM signal waveforms. The signal set is evidently not orthogonal.
A second performance-equivalent set of waveforms, known as Pulse Position Modulation (PPM),
appears in Figure 1.38. The PPM constellation is a block orthogonal constellation, which has the pre-
viously derived P_e. The average energy of the PDM constellation clearly exceeds that of the PPM
constellation, which in turn exceeds that of a corresponding simplex constellation. Nevertheless, con-
stellation energy minimization is usually not important when PDM is used; for example, in optical
storage, the optical channel physics mandate the minimum pit duration, and the resultant energy
increase is not of concern.
Figure 1.38: Pulse-Position Modulation (PPM).
Biorthogonal Signal Constellations
A variation on block orthogonal signaling is the biorthogonal signal set, which doubles the size of the
signal set from M = N to M = 2N by including the negative of each of the data symbol vectors in the
signal set. From this perspective, QPSK is both a biorthogonal signal set and a cubic signal set.
The probability-of-error analysis for biorthogonal constellations parallels that for block orthogonal
signal sets. As with orthogonal signaling, because all the signals are equally likely and of equal energy,
the constants c_i in the signal detector in Figure 1.23 can be omitted, and the MAP receiver becomes

    \hat{m} \to m_i \text{ if } \langle y, x_i \rangle \ge \langle y, x_j \rangle \quad \forall \, j \ne i .    (1.192)
By symmetry, P_{e/i} = P_e or P_{c/i} = P_c for all i. Let i = 0. Then

    y_0 = \sqrt{\mathcal{E}_x} + n_0    (1.193)
    y_i = n_i \quad i \ne 0 .    (1.194)

If x_0 was sent, then a correct decision is made if \langle y, x_0 \rangle \ge |\langle y, x_i \rangle|, or equivalently if y_0 \ge |y_i| \; \forall \, i \ne 0.
Thus

    P_{c/0} = P\{ y_0 \ge |y_i|, \; \forall \, i \ne 0 \mid 0 \text{ was sent} \} .    (1.195)

Suppose y_0 takes on a particular value v \in [0, \infty); then since the noise components n_i are i.i.d.,

    P_{c/0, y_0 = v} = \prod_{i=1}^{N-1} P\{ |n_i| \le v \}    (1.196)
        = \left[ 1 - 2Q(v/\sigma) \right]^{N-1} .    (1.197)

If y_0 < 0, then an incorrect decision is guaranteed if symbol zero was sent. (The reader should visualize
the decision regions for this constellation.) Thus

    P_c = P_{c/0} = \int_{0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}\left( v - \sqrt{\mathcal{E}_x} \right)^2} \left[ 1 - 2Q(v/\sigma) \right]^{N-1} dv ,    (1.198)
yielding

    P_e = 1 - \int_{0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}\left( v - \sqrt{\mathcal{E}_x} \right)^2} \left[ 1 - 2Q(v/\sigma) \right]^{N-1} dv .    (1.199)
This function can be evaluated numerically on a computer.
Using the NNUB, which is slightly tighter than the union bound because the number of nearest
neighbors is M - 2 for biorthogonal signaling,

    P_e \le (M-2) \, Q\left( \frac{d_{\min}}{2\sigma} \right) = 2(N-1) \, Q\left( \sqrt{\frac{\mathcal{E}_x}{2\sigma^2}} \right) .    (1.200)
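A numerical sketch comparing (1.199), evaluated by a midpoint Riemann sum, with the NNUB (1.200) (parameters illustrative):

```python
from math import erfc, sqrt, exp, pi

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

def pe_biorthogonal(N, ex, sigma, steps=4000):
    # Midpoint Riemann sum of equation (1.199); the integral runs over v >= 0
    hi = sqrt(ex) + 10 * sigma
    dv = hi / steps
    pc = 0.0
    for k in range(steps):
        v = (k + 0.5) * dv
        gauss = exp(-(v - sqrt(ex)) ** 2 / (2 * sigma**2)) / sqrt(2 * pi * sigma**2)
        pc += gauss * (1 - 2 * Q(v / sigma)) ** (N - 1) * dv
    return 1 - pc

N, ex, sigma = 3, 25.0, 1.0
pe = pe_biorthogonal(N, ex, sigma)
nnub = 2 * (N - 1) * Q(sqrt(ex / (2 * sigma**2)))   # equation (1.200)
```

At this operating point the exact value and the NNUB agree to within a fraction of a percent, illustrating the tightness of the bound.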
1.5.3 Circular Constellations - M-ary Phase Shift Keying
Examples of Phase Shift Keying appeared in Figures 1.21 and 1.30. In general, M-ary PSK places the
data symbol vectors at equally spaced angles (or phases) around a circle of radius \sqrt{\mathcal{E}_x} in two dimensions.
Only the phase of the signal changes with the transmitted message, while the amplitude of the signal
envelope remains constant; thus the origin of the name.
PSK is often used on channels with nonlinear amplitude distortion, where signals that include in-
formation content in the time-varying amplitude would otherwise suffer performance degradation from
nonlinear amplitude distortion. The minimum distance for M-ary PSK is given by

    d_{\min} = 2\sqrt{\mathcal{E}_x} \sin\frac{\pi}{M} = 2\sqrt{2\bar{\mathcal{E}}_x} \sin\frac{\pi}{M} .    (1.201)

The CFM is

    \zeta_x = 2 \sin^2\left( \frac{\pi}{M} \right) ,    (1.202)

which is inferior to block binary signaling for any constellation with M > 4. The NNUB on error
probability is tight and equal to

    P_e < 2Q\left( \frac{\sqrt{\mathcal{E}_x} \sin\frac{\pi}{M}}{\sigma} \right) ,    (1.203)
for all M.
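A sketch of these M-PSK quantities in Python (the function names are illustrative):

```python
from math import erfc, sqrt, sin, pi

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

def psk_dmin(M, ex):
    # Equation (1.201): chord between adjacent points on a circle of radius sqrt(ex)
    return 2 * sqrt(ex) * sin(pi / M)

def psk_nnub(M, ex, sigma):
    # Equation (1.203): NNUB on symbol error probability
    return 2 * Q(sqrt(ex) * sin(pi / M) / sigma)

# For M = 4 the CFM 2*sin^2(pi/M) equals 1, matching the QPSK result
cfm_qpsk = 2 * sin(pi / 4) ** 2
```

Doubling M shrinks sin(\pi/M) by roughly half, so each extra bit in PSK costs about 6 dB, which is why PSK is rarely used for large M.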
1.6 Rectangular (and Hexagonal) Signal Constellations
This section studies several popular signal constellations for data transmission. These constellations
use equally spaced points on a one- or two-dimensional lattice. This study introduces and uses some
basic concepts, namely SNR, the shaping gain, the continuous approximation, and the peak-to-average
power ratio. These concepts and/or their results measure the quality of signal constellations and are
fundamental to understanding performance.
Subsection 1.6.1 studies pulse amplitude modulation (PAM), while Subsection 1.6.2 studies quadra-
ture amplitude modulation (QAM). Subsection 1.6.3 discusses several measures of constellation perfor-
mance.
The Continuous Approximation
For constellations that possess a degree of geometric uniformity, the continuous approximation com-
putes the average energy of a signal constellation by replacing the discrete sum of energies with a
continuous integral. The discrete probability distribution of the signal constellation is approximated by
a continuous distribution that is uniform over a region defined by the signal points. In defining the
region of the constellation, each point in the constellation is associated with an identical fundamental
volume, \mathcal{V}_x. Each point in the constellation is centered within an N-dimensional decision region (or
Voronoi region) with volume equal to the fundamental volume. This region is the decision region for
internal points in the constellation that have a maximum number of nearest neighbors. The union of
these Voronoi regions for the points is called the Voronoi region \mathcal{D}_x of the constellation and has volume
M\mathcal{V}_x.
The discrete distribution of the constellation is approximated by a uniform distribution over the
Voronoi region with probability density p_x(u) = \frac{1}{M\mathcal{V}_x} \; \forall \, u \in \mathcal{D}_x.
Figure 1.39: PAM constellation.
Definition 1.6.1 (Continuous Approximation) The continuous approximation to the av-
erage energy equals

    \mathcal{E}_x \approx \hat{\mathcal{E}}_x = \int_{\mathcal{D}_x} \| u \|^2 \frac{1}{M\mathcal{V}_x} \, du ,    (1.204)

where the N-dimensional integral covers the Voronoi region \mathcal{D}_x.

For large-size signal sets with regular spacing between points, the error in using the continuous
approximation is small, as several examples will demonstrate in this section.
1.6.1 Pulse Amplitude Modulation (PAM)
Pulse amplitude modulation, or amplitude shift keying (ASK), is a one-dimensional modulated signal set
with M = 2^b signals in the constellation, for b \in Z^+, the set of positive integers. Figure 1.39 illustrates
the PAM constellation. The basis function can be any unit-energy function, but often \varphi_1(t) is

    \varphi_1(t) = \frac{1}{\sqrt{T}} \, \mathrm{sinc}\left( \frac{t}{T} \right)    (1.205)

or another Nyquist pulse shape (see Chapter 3). The data-symbol amplitudes are \pm\frac{d}{2}, \pm\frac{3d}{2}, \pm\frac{5d}{2}, \ldots, \pm\frac{(M-1)d}{2},
and all input levels are equally likely. The minimum distance between points in a PAM constellation
abbreviates as

    d_{\min} = d .    (1.206)
Both binary antipodal and 2B1Q are examples of PAM signals.
The average energy of a PAM constellation is

    \mathcal{E}_x = \bar{\mathcal{E}}_x = \frac{1}{M} (2) \sum_{k=1}^{M/2} \left( \frac{2k-1}{2} \right)^2 d^2    (1.207)
        = \frac{d^2}{2M} \sum_{k=1}^{M/2} (4k^2 - 4k + 1)    (1.208)
        = \frac{d^2}{2M} \left[ 4 \left( \frac{(M/2)^3}{3} + \frac{(M/2)^2}{2} + \frac{(M/2)}{6} \right) - 4 \left( \frac{(M/2)^2}{2} + \frac{(M/2)}{2} \right) + \frac{M}{2} \right]    (1.209)
        = \frac{d^2}{2M} \left[ \frac{M^3}{6} - \frac{M}{6} \right] .    (1.210)

This PAM average energy is expressed in terms of the minimum distance and constellation size as

    \mathcal{E}_x = \bar{\mathcal{E}}_x = \frac{d^2}{12} \left( M^2 - 1 \right) .    (1.211)
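Equation (1.211) can be confirmed by direct enumeration of the PAM levels (sketch):

```python
def pam_energy(M, d):
    # Direct enumeration of the levels +/-d/2, +/-3d/2, ..., +/-(M-1)d/2
    levels = [d * (2 * k - 1 - M) / 2 for k in range(1, M + 1)]
    return sum(x * x for x in levels) / M

# Matches equation (1.211): Ex = d^2 (M^2 - 1) / 12
for M in (2, 4, 8, 16):
    assert abs(pam_energy(M, 2.0) - 2.0**2 * (M * M - 1) / 12) < 1e-9
```

For example, 4-PAM with d = 2 uses levels {-3, -1, +1, +3} and has average energy 5, equal to d^2 (M^2 - 1)/12.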
The PAM minimum distance is a function of \mathcal{E}_x and M:

    d = \sqrt{ \frac{12 \mathcal{E}_x}{M^2 - 1} } .    (1.212)

Finally, given distance and average energy,

    \bar{b} = \log_2 M = \frac{1}{2} \log_2 \left( \frac{12 \bar{\mathcal{E}}_x}{d^2} + 1 \right) .    (1.213)
Figure 1.39 shows that the decision region for an interior point of PAM extends over a length-d
interval centered on that point. The Voronoi region of the constellation thus extends for an interval of
length Md over [-L, L], where L = \frac{Md}{2}. The continuous approximation for PAM assumes a uniform distribution
on this interval (-L, L), and thus approximates the average energy of the constellation as

    \mathcal{E}_x = \bar{\mathcal{E}}_x \approx \int_{-L}^{L} \frac{x^2}{2L} \, dx = \frac{L^2}{3} = \frac{M^2 d^2}{12} .    (1.214)

The approximation for the average energy does not include the constant term -\frac{d^2}{12}, which becomes
insignificant as M becomes large.
Since M = 2^{\bar{b}}, then M^2 = 4^{\bar{b}}, leaving alternative relations (\bar{b} = b for N = 1) for (1.211) and (1.212):

    \mathcal{E}_x = \bar{\mathcal{E}}_x = \frac{d^2}{12} \left( 4^b - 1 \right) = \frac{d^2}{12} \left( 4^{\bar{b}} - 1 \right) ,    (1.215)

and

    d = \sqrt{ \frac{12 \mathcal{E}_x}{4^b - 1} } .    (1.216)

The following recursion derives from increasing the number of bits, b = \bar{b}, in a PAM constellation while
maintaining constant minimum distance between signal points:

    \bar{\mathcal{E}}_x(b+1) = 4 \bar{\mathcal{E}}_x(b) + \frac{d^2}{4} .    (1.217)

Thus for moderately large b, the required signal energy increases by a factor of 4 for each additional bit
of information in the signal constellation. This corresponds to an increase of 6 dB per bit, a measure
commonly quoted by communication engineers as the required SNR increase for a transmission scheme
to support an additional bit-per-dimension of information.
The PAM probability of correct symbol detection is

    P_c = \sum_{i=0}^{M-1} P_{c|i} \, p_x(i)    (1.218)
        = \frac{M-2}{M} \left[ 1 - 2Q\left( \frac{d_{\min}}{2\sigma} \right) \right] + \frac{2}{M} \left[ 1 - Q\left( \frac{d_{\min}}{2\sigma} \right) \right]    (1.219)
        = 1 - \left( \frac{2M - 4 + 2}{M} \right) Q\left( \frac{d_{\min}}{2\sigma} \right)    (1.220)
        = 1 - 2 \left( 1 - \frac{1}{M} \right) Q\left( \frac{d_{\min}}{2\sigma} \right) .    (1.221)

Thus, the PAM probability of symbol error is

    P_e = \bar{P}_e = 2 \left( 1 - \frac{1}{2^{\bar{b}}} \right) Q\left( \frac{d_{\min}}{2\sigma} \right) < 2Q\left( \frac{d_{\min}}{2\sigma} \right) .    (1.222)

The average number of nearest neighbors for the constellation is 2(1 - 1/M); thus, the NNUB is exact
for PAM. Thus

    P_e = 2 \left( 1 - \frac{1}{M} \right) Q\left( \sqrt{ \frac{3}{M^2 - 1} \cdot \mathrm{SNR} } \right) .    (1.223)
    b = \bar{b} | M  | \frac{d^2}{4\sigma^2} for \bar{P}_e = 2Q(\frac{d_{\min}}{2\sigma}) = 10^{-6} | SNR = \frac{M^2-1}{3} \cdot 10^{1.37} | SNR increase = \frac{M^2-1}{(M/2)^2-1}
    1 | 2  | 13.7 dB | 13.7 dB |
    2 | 4  | 13.7 dB | 20.7 dB | 7 dB
    3 | 8  | 13.7 dB | 27.0 dB | 6.3 dB
    4 | 16 | 13.7 dB | 33.0 dB | 6.0 dB
    5 | 32 | 13.7 dB | 39.0 dB | 6.0 dB

Table 1.1: PAM constellation energies.
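The SNR column of Table 1.1 follows from (1.223) with (d/(2\sigma))^2 fixed at 13.7 dB; a sketch reproducing it:

```python
from math import log10

# SNR = (M^2 - 1)/3 * (d/(2*sigma))^2 from (1.223), with
# (d/(2*sigma))^2 = 10**1.37 (13.7 dB) giving Pe near 1e-6
arg_db = 13.7
snr_db = {M: 10 * log10((M * M - 1) / 3) + arg_db for M in (2, 4, 8, 16, 32)}
```

Successive entries differ by 10 log10((M^2-1)/((M/2)^2-1)), which approaches 6 dB per bit for large M.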
Figure 1.40: 56K Modem PAM example.
For \bar{P}_e = 10^{-6}, one determines that \frac{d}{2\sigma} \approx 4.75 (13.5 dB). Table 1.1 relates b = \bar{b}, M, \frac{d^2}{4\sigma^2}, the SNR, and
the required increase in SNR (or equivalently in \bar{\mathcal{E}}_x) to transmit an additional bit of information at a
probability of error \bar{P}_e = 10^{-6}. Table 1.1 shows that for b = \bar{b} > 2, the approximation of 6 dB per bit is
very accurate.
Pulse amplitude constellations with b > 2 are typically known as 3B1O (three bits per octal signal,
for 8 PAM) and 4B1H (4 bits per hexadecimal signal), but are rarely used compared with the more
popular quadrature amplitude modulation of Section 1.6.2.
EXAMPLE 1.6.1 (56K voiceband modem) The 56 kbps voiceband modem of Figure
1.40 can be viewed as a PAM modem with 128 levels, or b = 7, and a symbol rate of 8000
Hz. Thus, the data rate is R = b/T = 56,000 bits per second. The 8000 Hz is thus consistent
with the maximum allowed bandwidth of a voice-only telephone-network connection (4000
Hz) that is imposed by network switches that also sample at 8000 Hz; the clock is supplied
from the network to the internet service provider's modem as shown. The modulator is
curiously and tacitly implemented by the combination of the network connection and the
eventual DAC (digital-to-analog converter) at the beginning of the customer's analog phone
line. A special receiver structure known as a DFE (see Chapter 3) converts the telephone
line into an AWGN with an SNR sufficiently high to carry 128 PAM. This particular choice of
modulator implementation avoids the distortion that would be introduced by an extra ADC
and DAC on the Internet Service Provider's phone line (and which are superfluous, as that
Internet Service Provider most often has a digital high-speed connection by fiber or other
means to the network).
The levels are not equally spaced in most implementations, an artifact only of the fact
that the network DAC is not a uniform DAC, and instead chooses its levels for best voice
transmission, which is not the best for data transmission. Yet higher-speed DSL versions
of the same system allow a higher-speed connection through the network, and use a new
DAC in what is called a DSLAM that replaces the voice connection to the subscriber.
1.6.2 Quadrature Amplitude Modulation (QAM)
QAM is a two-dimensional generalization of PAM. The two basis functions are usually

    \varphi_1(t) = \sqrt{\frac{2}{T}} \, \mathrm{sinc}\left( \frac{t}{T} \right) \cos \omega_c t ,    (1.224)
    \varphi_2(t) = \sqrt{\frac{2}{T}} \, \mathrm{sinc}\left( \frac{t}{T} \right) \sin \omega_c t .    (1.225)

The sinc(t/T) term may be replaced by any Nyquist pulse shape as discussed in Chapter 3. The \omega_c is a
radian carrier frequency that is discussed further in Chapters 2 and 3; for now, \omega_c \ge \pi/T.
The QAM Square Constellation
Figure 1.41 illustrates QAM square constellations. These constellations are the Cartesian products^{12} of
2-PAM with itself and 4-PAM with itself, respectively. Generally, square M-QAM constellations derive
from the Cartesian product of two \sqrt{M}-PAM constellations. For \bar{b} bits per dimension, the M = 4^{\bar{b}} signal
points are placed at the coordinates \pm\frac{d}{2}, \pm\frac{3d}{2}, \pm\frac{5d}{2}, \ldots, \pm\frac{(\sqrt{M}-1)d}{2} in each dimension.
The average energy of square QAM constellations is easily computed as

    \mathcal{E}_{MQAM} = \mathcal{E}_x = 2\bar{\mathcal{E}}_x = \frac{1}{M} \sum_{i,j=1}^{\sqrt{M}} \left( x_i^2 + x_j^2 \right)    (1.226)
        = \frac{1}{M} \left[ \sqrt{M} \sum_{i=1}^{\sqrt{M}} x_i^2 + \sqrt{M} \sum_{j=1}^{\sqrt{M}} x_j^2 \right]    (1.227)
        = 2 \, \frac{1}{\sqrt{M}} \sum_{i=1}^{\sqrt{M}} x_i^2    (1.228)
        = 2 \mathcal{E}_{\sqrt{M}\text{-PAM}}    (1.229)
        = d^2 \left( \frac{M-1}{6} \right) .    (1.230)

Thus, the average energy per dimension of the M-QAM constellation,

    \bar{\mathcal{E}}_x = d^2 \left( \frac{M-1}{12} \right) ,    (1.231)

equals the average energy of the constituent \sqrt{M}-PAM constellation. The minimum distance d_{\min} = d
can be computed from \mathcal{E}_x (or \bar{\mathcal{E}}_x) and M by

    d = \sqrt{ \frac{6 \mathcal{E}_x}{M - 1} } = \sqrt{ \frac{12 \bar{\mathcal{E}}_x}{M - 1} } .    (1.232)

----
^{12} A Cartesian product, a product of two sets, is the set of all ordered pairs of coordinates, the first coordinate taken
from the first set in the Cartesian product, and the second coordinate taken from the second set in the Cartesian product.
Figure 1.41: QAM constellations.
Since M = 4^{\bar{b}}, alternative relations for (1.231) and (1.232) in terms of the average bit rate \bar{b} are

    \bar{\mathcal{E}}_x = \frac{\mathcal{E}_x}{2} = \frac{d^2}{12} \left( 4^{\bar{b}} - 1 \right) ,    (1.233)

and

    d = \sqrt{ \frac{12 \bar{\mathcal{E}}_x}{4^{\bar{b}} - 1} } .    (1.234)

Finally,

    \bar{b} = \frac{1}{2} \log_2 \left( \frac{6 \mathcal{E}_x}{d^2} + 1 \right) = \frac{1}{2} \log_2 \left( \frac{12 \bar{\mathcal{E}}_x}{d^2} + 1 \right) ,    (1.235)
the same as for a PAM constellation.
For large M, \bar{\mathcal{E}}_x \approx \frac{d^2}{12} M = \frac{d^2}{12} 4^{\bar{b}}, which is the same as that obtained by using the continuous
approximation. The continuous approximation for two-dimensional QAM uses a uniform distribution
over the square defined by [-L, L] in each dimension:

    \mathcal{E}_x \approx \int_{-L}^{L} \int_{-L}^{L} \frac{x^2 + y^2}{4L^2} \, dx \, dy = 2\frac{L^2}{3} ,    (1.236)

or L = \sqrt{1.5 \mathcal{E}_x}. Since the Voronoi region for each signal point in a QAM constellation has area d^2,

    M \approx \frac{4L^2}{d^2} = \frac{6 \mathcal{E}_x}{d^2} = \frac{12 \bar{\mathcal{E}}_x}{d^2} .    (1.237)

This result agrees with Equation (1.231) for large M.
As the number of points increases, the energy-computation error caused by using the continuous
approximation becomes negligible.
Increasing the number of bits, b, in a QAM constellation while maintaining constant minimum
distance yields the following recursion for the increase in average energy:

    \mathcal{E}_x(b+1) = 2\mathcal{E}_x(b) + \frac{d^2}{6} .    (1.238)

Asymptotically, the average energy increases by 3 dB for each added bit per two-dimensional symbol.
The probability of error can be exactly computed for QAM by noting that the conditional probability
of a correct decision falls into one of 3 categories:

1. corner points (4 points with only 2 nearest neighbors):

    P_{c|\text{corner}} = \left[ 1 - Q\left( \frac{d}{2\sigma} \right) \right]^2    (1.239)

2. inner points ((\sqrt{M} - 2)^2 points with 4 nearest neighbors):

    P_{c|\text{inner}} = \left[ 1 - 2Q\left( \frac{d}{2\sigma} \right) \right]^2    (1.240)

3. edge points (4(\sqrt{M} - 2) points with 3 nearest neighbors):

    P_{c|\text{edge}} = \left[ 1 - Q\left( \frac{d}{2\sigma} \right) \right] \left[ 1 - 2Q\left( \frac{d}{2\sigma} \right) \right] .    (1.241)
    b = 2\bar{b} | M      | \frac{d^2}{4\sigma^2} for \bar{P}_e = 2Q(\frac{d_{\min}}{2\sigma}) = 10^{-6} | SNR = \frac{M-1}{3} \cdot 10^{1.37} | SNR increase = \frac{M-1}{(M/4)-1} | dB/bit
    2  | 4      | 13.7 dB | 13.7 dB |        |
    4  | 16     | 13.7 dB | 20.7 dB | 7.0 dB | 3.5 dB
    6  | 64     | 13.7 dB | 27.0 dB | 6.3 dB | 3.15 dB
    8  | 256    | 13.7 dB | 33.0 dB | 6.0 dB | 3.0 dB
    10 | 1024   | 13.7 dB | 39.0 dB | 6.0 dB | 3.0 dB
    12 | 4096   | 13.7 dB | 45.0 dB | 6.0 dB | 3.0 dB
    14 | 16,384 | 13.7 dB | 51.0 dB | 6.0 dB | 3.0 dB

Table 1.2: QAM constellation energies.
The probability of being correct is then (abbreviating Q \equiv Q\left( \frac{d}{2\sigma} \right))

    P_c = \sum_{i=0}^{M-1} P_{c/i} \, p_x(i)    (1.242)
        = \frac{4}{M} (1 - Q)^2 + \frac{(\sqrt{M} - 2)^2}{M} (1 - 2Q)^2 + \frac{4(\sqrt{M} - 2)}{M} (1 - 2Q)(1 - Q)    (1.243)
        = \frac{1}{M} \left[ (4 - 8Q + 4Q^2) + (4\sqrt{M} - 8)(1 - 3Q + 2Q^2) \right.    (1.244)
          \left. + (M - 4\sqrt{M} + 4)(1 - 4Q + 4Q^2) \right]    (1.245)
        = \frac{1}{M} \left[ M + (4\sqrt{M} - 4M)Q + (4 - 8\sqrt{M} + 4M)Q^2 \right]    (1.246)
        = 1 + 4\left( \frac{1}{\sqrt{M}} - 1 \right) Q + 4\left( \frac{1}{\sqrt{M}} - 1 \right)^2 Q^2 .    (1.247)

Thus, the probability of symbol error is

    P_e = 4\left( 1 - \frac{1}{\sqrt{M}} \right) Q\left( \frac{d}{2\sigma} \right) - 4\left( 1 - \frac{1}{\sqrt{M}} \right)^2 \left[ Q\left( \frac{d}{2\sigma} \right) \right]^2 < 4\left( 1 - \frac{1}{\sqrt{M}} \right) Q\left( \frac{d}{2\sigma} \right) .    (1.248)
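The category count in (1.239)-(1.241) and the closed form (1.248) can be cross-checked numerically (sketch; the operating point is illustrative):

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

def qam_pe(M, arg):
    # arg = d/(2*sigma); weight the corner, inner, and edge categories
    # of equations (1.239)-(1.241) as in (1.243)
    r = int(round(sqrt(M)))
    q = Q(arg)
    pc = (4 * (1 - q) ** 2
          + (r - 2) ** 2 * (1 - 2 * q) ** 2
          + 4 * (r - 2) * (1 - q) * (1 - 2 * q)) / M
    return 1 - pc

# The category sum must match the closed form (1.248)
M, arg = 16, 4.75
closed = 4 * (1 - 1 / sqrt(M)) * Q(arg) - 4 * (1 - 1 / sqrt(M)) ** 2 * Q(arg) ** 2
```

The agreement is exact because (1.243) through (1.248) is just an algebraic rearrangement.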
The average number of nearest neighbors for the constellation equals 4(1 - 1/\sqrt{M}); thus, for QAM the
NNUB is not exact, but usually tight. The corresponding normalized NNUB is

    \bar{P}_e \le 2\left( 1 - \frac{1}{2^{\bar{b}}} \right) Q\left( \frac{d}{2\sigma} \right) = 2\left( 1 - \frac{1}{2^{\bar{b}}} \right) Q\left( \sqrt{ \frac{3}{M - 1} \cdot \mathrm{SNR} } \right) ,    (1.249)

which equals the PAM result. For \bar{P}_e = 10^{-6}, one determines that \frac{d}{2\sigma} \approx 4.75 (13.5 dB). Table 1.2 relates
\bar{b}, M, \frac{d^2}{4\sigma^2}, the SNR, and the required increase in SNR (or equivalently in \bar{\mathcal{E}}_x) to transmit an additional
bit of information.
As with PAM, for average bit rates of \bar{b} > 2, the approximation of 3 dB per additional bit per
two-dimensional symbol for the average energy increase is accurate.
The constellation figure of merit for square QAM is

    \zeta_x = \frac{3}{M - 1} = \frac{3}{4^{\bar{b}} - 1} = \frac{3}{2^b - 1} .    (1.250)

When b is odd, it is possible to define a SQ QAM constellation by taking every other point from a (b+1)
SQ QAM constellation. (See Problem 1.14.)
A couple of examples illustrate the wide use of QAM transmission.
Figure 1.42: Cross constellations.
EXAMPLE 1.6.2 (Cable Modem) Cable modems use existing cable-TV coaxial cables
for two-way transmission (presuming the cable TV provider has sent personnel to the various
unidirectional blocking points in the network and replaced them with so-called "diplex
filters"). Early cable modem conventions (i.e., DOCSIS) use 4QAM in both directions of
transmission. The downstream direction from cable-TV end to customer is typically at a
carrier frequency well above the used TV band, somewhere between 300 MHz and 500 MHz.
The upstream direction is below 50 MHz, typically between 5 and 40 MHz. The symbol rate
is typically 1/T = 2 MHz, so the data rate is 4 Mbps on any given carrier. Typically about
10 carriers can be used (so 40 Mbps maximum), and each is shared by various subgroups
of customers, each customer within a subgroup typically getting 384 kbps in fair systems
(although some customers can hog all the 4 Mbps, and systems for resolving such customer
use are evolving).

EXAMPLE 1.6.3 (Satellite TV Broadcast) Satellite television uses 4QAM for broad-
cast transmission at one of 20 carrier frequencies between 12.2 GHz and 12.7 GHz from satellite
to customer receiver for some suppliers and satellites. Corresponding carriers between 17.3
and 17.8 GHz are used to send the signals from the broadcaster to the satellite, again with
4QAM. The symbol rate is 1/T = 19.151 MHz, so the aggregate data rate is 38.302 Mbps
on any of the 20 carriers. A typical digital TV signal is compressed into about 2-3 Mbps,
allowing for 4-16 channels per carrier/QAM signal. (Some stations watched by many, for
instance sports, may get a larger allocation of bandwidth and carry a higher-quality image
than others that are not heavily watched.) A high-definition TV channel (there are only 4
presently) requires 20 Mbps if sent with full fidelity. Each carrier is transmitted in a 24
MHz transponder channel on the satellite; these 24 MHz channels were originally used to
broadcast a single analog TV channel, modulated via FM, unlike terrestrial analog broadcast
television (which uses only 6 MHz for analog TV).
QAM Cross Constellations
The QAM cross constellation allows for odd numbers of bits per symbol in QAM data transmission.
To construct a QAM cross constellation with b bits per symbol, one augments a square QAM constellation
for b - 1 bits per symbol by adding 2^{b-1} data symbols that extend the sides of the QAM square. The
corners are excluded, as shown in Figure 1.42.
Figure 1.43: 32CR constellation.
One computation of average energy of QAM cross constellations doubles the energy of the two large
rectangles ([2^{\frac{b-3}{2}} + 2^{\frac{b-1}{2}}] \times 2^{\frac{b-1}{2}}) and then subtracts the energy of the inner square (2^{\frac{b-1}{2}} \times 2^{\frac{b-1}{2}}). The
energy of the inner square is

    \mathcal{E}_x(\text{inner}) = \frac{d^2}{6} \left( 2^{b-1} - 1 \right) .    (1.251)

The total sum of energies for all the data symbols in the inner-square-plus-two-side-rectangles is (looking
only at one quadrant, and multiplying by 4 because of symmetry)

    \mathcal{E} = \frac{d^2}{4} (4) \sum_{k=1}^{2^{\frac{b-3}{2}}} \sum_{l=1}^{3 \cdot 2^{\frac{b-5}{2}}} \left[ (2k-1)^2 + (2l-1)^2 \right]    (1.252)
        = \frac{d^2}{4} (4) \left[ 3 \cdot 2^{\frac{b-5}{2}} \left( \frac{2^{\frac{3b-3}{2}} - 2^{\frac{b-1}{2}}}{6} \right) + 2^{\frac{b-3}{2}} \left( \frac{27 \cdot 2^{\frac{3b-9}{2}} - 3 \cdot 2^{\frac{b-3}{2}}}{6} \right) \right]    (1.253)
        = \frac{d^2}{4} (4) \left[ 2^{\frac{b-7}{2}} \left( 2^{\frac{3b-3}{2}} - 2^{\frac{b-1}{2}} \right) + 2^{\frac{b-5}{2}} \left( 9 \cdot 2^{\frac{3b-9}{2}} - 2^{\frac{b-3}{2}} \right) \right]    (1.254)
        = \frac{d^2}{4} \left[ 2^{2b-3} - 2^{b-2} + 9 \cdot 2^{2b-5} - 2^{b-2} \right]    (1.255)
        = \frac{d^2}{4} \left[ \frac{13}{32} 2^{2b} - 2^{b-1} \right] .    (1.256)
Then

    \mathcal{E}_x = \frac{2\mathcal{E} - 2^{b-1} \mathcal{E}_x(\text{inner})}{2^b} = \frac{d^2}{4} \left[ \frac{26}{32} 2^b - 1 - \frac{2}{3} 2^{b-2} + \frac{1}{3} \right]    (1.257)
        = \frac{d^2}{4} \left[ \left( \frac{13}{16} - \frac{1}{6} \right) 2^b - \frac{2}{3} \right]    (1.258)
        = \frac{d^2}{4} \left[ \frac{31}{48} 2^b - \frac{2}{3} \right] = \frac{d^2}{6} \left[ \frac{31}{32} M - 1 \right] .    (1.259)
The minimum distance d_{\min} = d can be computed from \mathcal{E}_x (or \bar{\mathcal{E}}_x) and M by

d = \sqrt{ \frac{6 \, \mathcal{E}_x}{\frac{31}{32} M - 1} } = \sqrt{ \frac{12 \, \bar{\mathcal{E}}_x}{\frac{31}{32} M - 1} } = \sqrt{ \frac{12 \, \bar{\mathcal{E}}_x}{\frac{31}{32} \, 4^{\bar{b}} - 1} } .   (1.260)

In (1.259), for large M, \mathcal{E}_x \approx \frac{31 d^2}{192} M = \frac{31 d^2}{192} \, 4^{\bar{b}}, the same as the continuous approximation.
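As a numeric sanity check of (1.259) (not part of the original text), the sketch below builds 32CR directly, assuming the standard cross layout of a 6×6 grid of odd coordinates with the four corners removed and d = 2, and compares the brute-force average energy to the closed form:

```python
# Sketch (assumed layout): 32CR as a 6x6 grid of odd coordinates minus the
# four corners, with minimum distance d = 2. Compare the brute-force average
# energy with Eq. (1.259): Ex = (d^2/6) * ((31/32) * M - 1).
d = 2.0
coords = [-5, -3, -1, 1, 3, 5]
pts = [(x, y) for x in coords for y in coords
       if not (abs(x) == 5 and abs(y) == 5)]   # corners excluded
M = len(pts)                                   # 32 points, b = 5 bits
Ex = sum(x * x + y * y for x, y in pts) / M    # average 2D symbol energy
Ex_formula = d ** 2 / 6 * ((31 / 32) * M - 1)
print(M, Ex, Ex_formula)                       # 32 20.0 20.0
```

Both computations agree at 20, which also matches the 32CR average energy used later in the PAR discussion.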
The following recursion derives from increasing the number of bits, b, in a QAM cross constellation
while maintaining constant minimum distance:

\mathcal{E}_x(b+1) = 2 \, \mathcal{E}_x(b) + \frac{d^2}{6} .   (1.261)

As with the square QAM constellation, asymptotically the average energy increases by 3 dB for each
added bit per two-dimensional symbol.
The probability of error can be bounded for QAM Cross by noting that a lower bound on the
conditional probability of a correct decision falls into one of two categories:

1. inner points ( 2^b - 4 \left( 3 \cdot 2^{\frac{b-3}{2}} - 2 \cdot 2^{\frac{b-5}{2}} \right) = 2^b - 4 \cdot 2^{\frac{b-1}{2}} ) with four nearest neighbors:

P_{c/\text{inner}} = \left( 1 - 2Q\!\left[ \frac{d_{\min}}{2\sigma} \right] \right)^2   (1.262)

2. side points ( 4 \left( 3 \cdot 2^{\frac{b-3}{2}} - 2 \cdot 2^{\frac{b-5}{2}} \right) = 4 \cdot 2^{\frac{b-1}{2}} ) with three nearest neighbors. (This calculation is
only a bound because some of the side points have fewer than three neighbors at distance d_{\min}.)

P_{c/\text{outer}} = \left( 1 - Q\!\left[ \frac{d_{\min}}{2\sigma} \right] \right) \left( 1 - 2Q\!\left[ \frac{d_{\min}}{2\sigma} \right] \right) .   (1.263)
The probability of a correct decision is then, abbreviating Q = Q\!\left[ \frac{d_{\min}}{2\sigma} \right],

P_c \geq \frac{1}{M} \left[ 4 \cdot 2^{\frac{b-1}{2}} \, (1 - Q)(1 - 2Q) \right]   (1.264)

  + \frac{1}{M} \left[ \left( 2^b - 4 \cdot 2^{\frac{b-1}{2}} \right) (1 - 2Q)^2 \right]   (1.265)

  = \frac{1}{M} \left[ 4 \cdot 2^{\frac{b-1}{2}} \, (1 - 3Q + 2Q^2) + \left( 2^b - 2^{\frac{b+3}{2}} \right) (1 - 4Q + 4Q^2) \right]   (1.266)

  = 1 - \left( 4 - 2^{\frac{3-b}{2}} \right) Q + \left( 4 - 2^{\frac{5-b}{2}} \right) Q^2 .   (1.267)
Thus, the probability of symbol error is bounded by

P_e \leq 4 \left( 1 - \frac{1}{\sqrt{2M}} \right) Q\!\left[ \frac{d_{\min}}{2\sigma} \right] - 4 \left( 1 - \sqrt{\frac{2}{M}} \right) \left( Q\!\left[ \frac{d_{\min}}{2\sigma} \right] \right)^2   (1.268)

  < 4 \left( 1 - \frac{1}{\sqrt{2M}} \right) Q\!\left[ \frac{d_{\min}}{2\sigma} \right] < 4 \, Q\!\left[ \frac{d_{\min}}{2\sigma} \right] .   (1.269)
The average number of nearest neighbors for the constellation is 4(1 - 1/\sqrt{2M}); thus the NNUB is again
accurate. The normalized probability of error is

\bar{P}_e \leq 2 \left( 1 - \frac{1}{2^{\bar{b}+.5}} \right) Q\!\left[ \frac{d_{\min}}{2\sigma} \right] ,   (1.270)

which agrees with the PAM result when one includes an additional bit in the constellation, or equivalently
an extra .5 bit per dimension. To evaluate (1.270), Equation (1.260) relates that

\left( \frac{d_{\min}}{2\sigma} \right)^2 = \frac{3 \, \text{SNR}}{\frac{31}{32} M - 1} .   (1.271)
Table 1.3 lists the incremental energies and required SNR for QAM cross constellations in a manner
similar to Table 1.2. There are also square constellations for odd numbers of bits that Problem 1.14
addresses.
 b      M       (d_min/2σ)² for P̄_e = 10⁻⁶    SNR = ([31/32]M − 1)·10^{1.37}/3    SNR increase    dB/bit
 5      32             13.7 dB                         23.7 dB                          —              —
 7      128            13.7 dB                         29.8 dB                        6.1 dB        3.05 dB
 9      512            13.7 dB                         35.8 dB                        6.0 dB        3.0 dB
11      2048           13.7 dB                         41.8 dB                        6.0 dB        3.0 dB
13      8192           13.7 dB                         47.8 dB                        6.0 dB        3.0 dB
15      32,768         13.7 dB                         53.8 dB                        6.0 dB        3.0 dB

Table 1.3: QAM Cross constellation energies.
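The SNR column of Table 1.3 follows from (1.271) with (d_min/2σ)² held at the 13.7 dB requirement; the short script below is an illustrative check using the rounded 13.7 dB figure, so entries can differ from the table by roughly 0.1-0.2 dB of rounding:

```python
import math

# Reproduce the SNR column of Table 1.3 from Eq. (1.271):
# SNR = ((31/32) * M - 1) * (dmin/2sigma)^2 / 3, with (dmin/2sigma)^2 = 13.7 dB.
dmin_half_sq = 10 ** 1.37                # 13.7 dB requirement for Pe ~ 1e-6
for b in (5, 7, 9, 11, 13, 15):
    M = 2 ** b
    snr_db = 10 * math.log10(((31 / 32) * M - 1) * dmin_half_sq / 3)
    print(b, M, round(snr_db, 1))        # 5 32 23.7, then ~6 dB per 2 bits
```

The roughly 6 dB step per two added bits is the 3 dB/bit behavior noted after (1.261).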
Vestigial Sideband Modulation (VSB), CAP, and OQAM
From the perspective of performance and constellation design, there are many alternate basis-function
choices for QAM that are equivalent. These choices sometimes have value from the perspective of
implementation considerations. They are all equivalent in terms of the fundamentals of this chapter
when implemented for successive transmission of messages over an AWGN. In successive transmission,
the basis functions must be orthogonal to one another for all integer translations of the symbol period.
Then successive samples at the demodulator output at integer multiples of T will be independent; then,
the one-shot optimum receiver can be used repeatedly in succession to detect successive messages without
loss of optimality on the AWGN (see Chapter 3 to see when successive transmission can degrade in the
presence of intersymbol interference on non-AWGN channels with bandwidth limitations in practice).
The PAM basis function always exhibits this desirable translation property on the AWGN, and so do
the QAM basis functions as long as \omega_c \geq \pi/T. The QAM basis functions are not unique with respect
to satisfaction of the translation property, with VSB/SSB, CAP, and OQAM all being variants:
VSB Vestigial sideband modulation (VSB) is an alternative modulation method that is equivalent to
QAM. In QAM, typically the same unit-energy basis function (\sqrt{1/T} \, \text{sinc}(t/T)) is double-side-band
modulated independently by the sine and cosine of a carrier to generate the two QAM basis functions.
In VSB, a double-bandwidth sinc function and its Hilbert transform (see Chapter 2 for a discussion of
Hilbert transforms) are single-side-band modulated. The two VSB basis functions are^{13}

\varphi_1(t) = \sqrt{\frac{1}{T}} \, \text{sinc}\!\left( \frac{2t}{T} \right) \cos \omega_c t ,   (1.272)

\varphi_2(t) = \sqrt{\frac{1}{T}} \, \text{sinc}\!\left( \frac{t}{T} \right) \sin\!\left( \frac{\pi t}{T} \right) \sin \omega_c t .   (1.273)
A natural symbol-rate choice for successive transmission with these two basis functions might appear to
be 2/T, twice the rate associated with QAM. However, successive translations of these basis functions
by integer multiples of T/2 are not orthogonal; that is, \langle \varphi_i(t), \varphi_j(t - T/2) \rangle \neq \delta_{ij}; however,
\langle \varphi_i(t), \varphi_j(t - kT) \rangle = \delta_{ij} for any integer k. Thus, the symbol rate for successive orthogonal transmissions
needs to be 1/T.
VSB designers often prefer to exploit the observation that \langle \varphi_1(t), \varphi_2(t - kT/2) \rangle = 0 for all odd
integers k to implement the VSB transmission system as one-dimensional time-varying at rate 2/T
dimensions per second. The first and second dimensions are alternately implemented at an aggregate
rate of 2/T dimensions per second. The optimum receiver consists of two matched filters to the two
basis functions, which have their outputs each sampled at rate 1/T (staggered by T/2 with respect to
one another), and these samples interleaved to form a single one-dimensional sample stream for the
detector. Nonetheless, those same designers call the VSB constellations by two-dimensional names.
^{13} This simple description is actually single-side-band (SSB), a special case of VSB. VSB uses practical realizable functions
instead of the unrealizable sinc functions that simplify fundamental developments in Chapter 1.
Thus, one may hear of 16 VSB or 64 VSB, which are equivalent to 16 QAM (or 4 PAM) and 64 QAM
(8 PAM) respectively. VSB transmission may be more convenient for upgrading existing analog systems
that are already VSB (i.e., commercial television) to digital systems that use the same bandwidths and
carrier frequencies - that is, where the carrier frequencies are not centered within the existing band. VSB
otherwise has no fundamental advantages or differences from QAM.
CAP Carrierless Amplitude/Phase (CAP) transmission systems are also very similar to QAM. The
basis functions of QAM are time-varying when \omega_c is arbitrary; that is, the basis functions on subsequent
transmissions may differ. CAP is a method that can eliminate this time variation for any choice of
carrier frequency, making the combined transmitter implementation appear carrierless and thus time-
invariant. CAP has the same one-shot basis functions as QAM, but also has a time-varying encoder
constellation when used for successive transmission of two-dimensional symbols. The time-varying CAP
encoder implements a sequence of additional two-dimensional constellation rotations that are known
and easily removed at the receiver after the demodulator and just before the detector. The time-varying
encoder usually selects the sequence of rotations so that the phase (argument of sines and cosines) of
the carrier is the same at the beginning of each symbol period, regardless of the actual carrier frequency.
Effectively, all carrier frequencies thus appear the same, hence the term carrierless. The sequence of
rotations has an angle that increases linearly with time and can often be very easily implemented (and
virtually omitted when differential encoding - see Chapter 4 - is implemented). See Section 2.4.
OQAM Offset QAM (OQAM) or staggered QAM uses the alternative basis functions

\varphi_1(t) = \sqrt{\frac{2}{T}} \, \text{sinc}\!\left( \frac{t}{T} \right) \cos\!\left( \frac{\pi t}{T} \right)   (1.274)

\varphi_2(t) = \sqrt{\frac{2}{T}} \, \text{sinc}\!\left( \frac{t - T/2}{T} \right) \sin\!\left( \frac{\pi t}{T} \right) ,   (1.275)

effectively offsetting the two dimensions by T/2. For one-shot transmission, such offset has no effect (the
receiver matched filters effectively re-align the two dimensions) and OQAM and QAM are the same. For
successive transmission, the derivative (rate of change) of x(t) is less for OQAM than for QAM, effectively
reducing the bandwidth of transmitted signals when the sinc functions cannot be perfectly implemented.
OQAM signals will never take the value x(t) = 0, while this value is instantaneously possible with QAM;
thus nonlinear transmitter/receiver amplifiers are not as stressed by OQAM. There is otherwise no
difference between OQAM and QAM.
The Gap
The gap, \Gamma, is an approximation introduced by Forney for constellations with \bar{b} \geq 1/2 that is empirically
evident in the PAM and QAM tables. Specifically, if one knows the SNR for an AWGN channel, the
number of bits that can be transmitted with PAM or QAM follows

\bar{b} = \frac{1}{2} \log_2 \left( 1 + \frac{\text{SNR}}{\Gamma} \right) .   (1.276)

At error rate \bar{P}_e = 10^{-6}, the gap is 8.8 dB. For \bar{P}_e = 10^{-7}, the gap is 9.5 dB. If the designer knows
the SNR and the desired performance level (\bar{P}_e), or equivalently the gap, then the number of bits per
dimension (and thus the achievable data rate R = b/T) is immediately computed. Chapters 8-10
will introduce more sophisticated encoder designs where the gap can be reduced, ultimately to 0 dB,
enabling a highest possible data rate of .5 log_2(1 + SNR), sometimes known as the channel capacity
of the AWGN channel. QAM and PAM are thus about 9 dB away from ultimate limits in terms of
efficient use of SNR.
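A small sketch of (1.276), using the text's 8.8 dB gap for P̄_e = 10⁻⁶ (the function name is illustrative, not from the text):

```python
import math

# Sketch of Eq. (1.276): achievable bits per dimension from SNR and the gap.
# gap_db = 8.8 dB is the text's empirical value at Pe = 1e-6; 0 dB is capacity.
def bits_per_dim(snr_db, gap_db=8.8):
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gap_db / 10)
    return 0.5 * math.log2(1 + snr / gap)

print(round(bits_per_dim(23.7), 2))       # ~2.5, matching 32CR (b = 5) in Table 1.3
print(round(bits_per_dim(23.7, 0.0), 2))  # capacity limit for the same SNR
```

At 23.7 dB of SNR the gap formula recovers the 2.5 bits/dimension of 32CR, while the 0 dB (capacity) gap shows how much more the same SNR could in principle support.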
1.6.3 Constellation Performance Measures
Having introduced many commonly used signal constellations for data transmission, this section presents
several performance measures that compare coded systems based on these constellations.
Coding Gain
Of fundamental importance to the comparison of two systems that transmit the same number of bits
per dimension is the coding gain, which specifies the improvement of one constellation over another
when used to transmit the same information.
Definition 1.6.2 (Coding Gain) The coding gain (or loss), \gamma, of a particular constella-
tion with data symbols \{x_i\}_{i=0,...,M-1} with respect to another constellation with data symbols
\{\tilde{x}_i\}_{i=0,...,M-1} is defined as

\gamma \doteq \frac{ d^2_{\min}(x) / \bar{\mathcal{E}}_x }{ d^2_{\min}(\tilde{x}) / \bar{\mathcal{E}}_{\tilde{x}} } ,   (1.277)

where both constellations are used to transmit \bar{b} bits of information per dimension.
A coding gain of \gamma = 1 (0 dB) implies that the two systems perform equally. A positive gain (in dB)
means that the constellation with data symbols x outperforms the constellation with data symbols \tilde{x}.
As an example, we compare the two constellations in Figures 1.30 and 1.32 and obtain

\gamma = \frac{ d^2_{\min}/\bar{\mathcal{E}}_x \ (\text{8AMPM}) }{ d^2_{\min}/\bar{\mathcal{E}}_x \ (\text{8PSK}) } = \frac{2}{10 \sin^2\!\left( \frac{\pi}{8} \right)} \approx 1.37 \ (1.4 \, \text{dB}) .   (1.278)
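The number in (1.278) can be checked directly:

```python
import math

# Check of Eq. (1.278): coding gain of 8AMPM over 8PSK.
gamma = 2 / (10 * math.sin(math.pi / 8) ** 2)
gamma_db = 10 * math.log10(gamma)
print(round(gamma, 2), round(gamma_db, 1))   # 1.37 1.4
```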
Signal constellations are often based on N-dimensional structures known as lattices. (A discussion
of lattices appears in Chapter 8.) A lattice is a set of vectors in N-dimensional space that is closed
under vector addition; that is, the sum of any two vectors is another vector in the set. A translation of
a lattice produces a coset of the lattice. Most good signal constellations are chosen as subsets of cosets
of lattices. The fundamental volume for a lattice measures the region around a point:

Definition 1.6.3 (Fundamental Volume) The fundamental volume V(\Lambda) of a lattice \Lambda
(from which a signal constellation is constructed) is the volume of the decision region for
any single point in the lattice. This decision region is also called a Voronoi Region of
the lattice. The Voronoi Region of a lattice, V(\Lambda), is to be distinguished from the Voronoi
Region of the constellation, V_x, the latter being the union of M of the former.

For example, an M-QAM constellation as M \to \infty is a translated subset (coset) of the two-
dimensional rectangular lattice Z^2, so M-QAM is a translation of Z^2 as M \to \infty. Similarly, as M \to \infty,
the M-PAM constellation becomes a coset of the one-dimensional lattice Z.
The coding gain of one constellation based on x, with lattice \Lambda and volume V(\Lambda), with respect to
another constellation with \tilde{x}, \tilde{\Lambda}, and V(\tilde{\Lambda}), can be rewritten as

\gamma = \frac{ d^2_{\min}(x) / V(\Lambda)^{2/N} }{ d^2_{\min}(\tilde{x}) / V(\tilde{\Lambda})^{2/N} } \cdot \frac{ V(\Lambda)^{2/N} / \bar{\mathcal{E}}_x }{ V(\tilde{\Lambda})^{2/N} / \bar{\mathcal{E}}_{\tilde{x}} }   (1.279)

\gamma \ (\text{dB}) = \gamma_f + \gamma_s \ (\text{dB}) .   (1.280)

The two quantities on the right in (1.280) are called the fundamental gain \gamma_f and the shaping
gain \gamma_s respectively.
Definition 1.6.4 (Fundamental Gain) The fundamental gain \gamma_f of a lattice, upon which
a signal constellation is based, is

\gamma_f \doteq \frac{ d^2_{\min}(x) / V(\Lambda)^{2/N} }{ d^2_{\min}(\tilde{x}) / V(\tilde{\Lambda})^{2/N} } .   (1.281)

The fundamental gain measures the efficiency of the spacing of the points within a particular
constellation per unit of fundamental volume surrounding each point.
Definition 1.6.5 (Shaping Gain) The shaping gain \gamma_s of a signal constellation is defined
as

\gamma_s \doteq \frac{ V(\Lambda)^{2/N} / \bar{\mathcal{E}}_x }{ V(\tilde{\Lambda})^{2/N} / \bar{\mathcal{E}}_{\tilde{x}} } .   (1.282)

The shaping gain measures the efficiency of the shape of the boundary of a particular con-
stellation in relation to the average energy per dimension required for the constellation.

Using a continuous approximation, the designer can extend shaping gain to constellations with different
numbers of points as

\gamma_s = \frac{ \left[ V(\Lambda)^{2/N} / \bar{\mathcal{E}}_x \right] \cdot 2^{2\bar{b}(x)} }{ \left[ V(\tilde{\Lambda})^{2/N} / \bar{\mathcal{E}}_{\tilde{x}} \right] \cdot 2^{2\bar{b}(\tilde{x})} } .   (1.283)
Peak-to-Average Power Ratio (PAR)
For practical system design, the peak power of a system may also need to be limited. This constraint
can manifest itself in several different ways. For example, if the modulator uses a Digital-to-Analog
Converter (or Analog-to-Digital Converter for the demodulator) with a finite number of bits (or finite
dynamic range), then the signal peaks cannot be arbitrarily large. In other systems the channel or
modulator/demodulator may include amplifiers or repeaters that saturate at high peak signal voltages.
Yet another way is in adjacent channels where crosstalk exists and a high peak on one channel can couple
into the other channel, causing an impulsive noise hit and an unexpected error in the adjacent system.
Thus, the Peak-to-Average Power Ratio (PAR) is a measure of immunity to these important types of
effects.
The peak energy is:

Definition 1.6.6 (Peak Energy) The N-dimensional peak energy for any signal constel-
lation is \mathcal{E}_{\text{peak}}:

\mathcal{E}_{\text{peak}} \doteq \max_i \sum_{n=1}^{N} x_{in}^2 .   (1.284)
The peak energy of a constellation should be distinguished from the peak squared energy of a signal
x(t), which is \max_{i,t} |x_i(t)|^2. This latter quantity is important in analog amplifier design, or equivalently
in however the filters \varphi_n(t) are implemented.
The peak energy of a constellation allows precise definition of the PAR:

Definition 1.6.7 (Peak-to-Average Power Ratio) The N-dimensional Peak-to-Average
Power Ratio, PAR_x, for an N-dimensional constellation is

\text{PAR}_x = \frac{\mathcal{E}_{\text{peak}}}{\mathcal{E}_x} .   (1.285)
For example, 16SQ QAM has a PAR of 1.8 in two dimensions. For each of the one-dimensional 4-PAM
constellations that constitute a 16SQ QAM constellation, the one-dimensional PAR is also 1.8. These
two ratios need not be equal, however, in general. For instance, for 32CR, the two-dimensional PAR is
34/20 = 1.7, while observation of a single dimension when 32CR is used gives a one-dimensional PAR of
25/(.75(5) + .25(25)) = 2.5. Typically, the peak squared signal energy is inevitably yet higher in QAM
constellations and depends on the choice of \varphi(t).
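The PAR figures quoted above can be reproduced from the point sets; the layouts below assume the standard odd-coordinate 16SQ and 32CR constellations:

```python
# Sketch: constellation PARs per Eq. (1.285), assuming the standard
# odd-coordinate layouts for 16SQ QAM and 32CR.
def par_2d(pts):
    energies = [x * x + y * y for x, y in pts]
    return max(energies) / (sum(energies) / len(energies))

sq16 = [(x, y) for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)]
cr32 = [(x, y) for x in (-5, -3, -1, 1, 3, 5) for y in (-5, -3, -1, 1, 3, 5)
        if not (abs(x) == 5 and abs(y) == 5)]
xs32 = [x for x, _ in cr32]                      # one dimension of 32CR
par_1d = max(v * v for v in xs32) / (sum(v * v for v in xs32) / len(xs32))
print(par_2d(sq16), par_2d(cr32), par_1d)        # 1.8 1.7 2.5
```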
Figure 1.44: Hexagonal lattice.
1.6.4 Hexagonal Signal Constellations in 2 Dimensions
The most dense packing of regularly spaced points in two dimensions is the hexagonal lattice shown in
Figure 1.44. The volume (area) of the decision region for each point is

V = 6 \left( \frac{1}{2} \right) \left( \frac{d}{2} \right) \left( \frac{d}{\sqrt{3}} \right) = d^2 \, \frac{\sqrt{3}}{2} .   (1.286)
If the minimum distance between any two points is d in both constellations, then the fundamental gain
of the hexagonal constellation with respect to the QAM constellation is

\gamma_f = \frac{d^2}{ \frac{\sqrt{3}}{2} d^2 } = \frac{2}{\sqrt{3}} = .625 \, \text{dB} .   (1.287)

The encoder/detector for constellations based on the hexagonal lattice may be more complex than
those for QAM.
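The 0.625 dB figure in (1.287) is a one-line check:

```python
import math

# Check of Eq. (1.287): fundamental gain of the hexagonal lattice over Z^2.
gain_db = 10 * math.log10(2 / math.sqrt(3))
print(round(gain_db, 3))   # 0.625
```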
1.7 Additive Self-Correlated Noise
In practice, additive noise is often Gaussian, but its power spectral density is not flat. Engineers often
call this type of noise self-correlated or colored. The noise remains independent of the message
signals but correlates with itself from time instant to time instant. The origins of colored noise are
many. Receiver filtering effects, noise generated by other communications systems (crosstalk), and
electromagnetic interference are all sources of self-correlated noise. A narrow-band transmission of a
radio signal that somehow becomes noise for an unintended channel is another common example of
self-correlated noise, called RF noise (RF is an acronym for radio frequency).

Self-correlated Gaussian noise can significantly alter the performance of a receiver designed for white
Gaussian noise. This section investigates the optimum detector for colored noise and also considers
the performance loss when using a suboptimum detector designed for the AWGN case in the presence of
Additive Correlated Gaussian Noise (ACGN).

This study is facilitated by first investigating the filtered one-shot AWGN channel in Subsection
1.7.1. Subsection 1.7.2 then finds the optimum detector for additive self-correlated Gaussian noise, by
adding a whitening filter that transforms the self-correlated noise channel into a filtered AWGN channel.
Figure 1.45: Filtered AWGN channel.
Subsection 1.7.3 studies the vector channel, for which (for some unspecified reason) the noise has not
been whitened, and describes the optimum detector given this vector channel. Finally, Subsection 1.7.4
studies the degradation that occurs when the noise correlation properties are not known for design of
the receiver, and an optimum receiver for the AWGN is used instead.
1.7.1 The Filtered (One-Shot) AWGN Channel
The filtered AWGN channel is illustrated in Figure 1.45. The modulated signal x(t) is filtered by h(t)
before the addition of the white Gaussian noise. When h(t) \neq \delta(t), the filtered signal set \{\tilde{x}_i(t)\} may
differ from the transmitted signal set \{x_i(t)\}. This transformation may change the probability of error
as well as the structure of the optimal detector. This section still considers only one use of the channel
with M possible messages for transmission. Transmission over this type of channel can incur a significant
penalty due to intersymbol interference between successively transmitted data symbols. In the one-
shot case, however, analysis need not consider this intersymbol interference. Intersymbol interference
is considered in Chapters 3, 4, and 5.
For any channel input signal x_i(t), the corresponding filtered output equals \tilde{x}_i(t) = h(t) * x_i(t).
Decomposing x_i(t) by an orthogonal basis set, \tilde{x}_i(t) becomes

\tilde{x}_i(t) = h(t) * x_i(t)   (1.288)

  = h(t) * \sum_{n=1}^{N} x_{in} \, \varphi_n(t)   (1.289)

  = \sum_{n=1}^{N} x_{in} \, [ h(t) * \varphi_n(t) ]   (1.290)

  = \sum_{n=1}^{N} x_{in} \, \tilde{\varphi}_n(t) ,   (1.291)

where

\tilde{\varphi}_n(t) \doteq h(t) * \varphi_n(t) .   (1.292)
Note that:
• The set of N functions \{\tilde{\varphi}_n(t)\}_{n=1,...,N} is not necessarily orthonormal.

• For the channel to convey any and all constellations of M messages for the signal set \{x_i(t)\}, the
basis set \{\tilde{\varphi}_n(t)\} must be linearly independent.

The first observation can be easily proven by finding a counterexample, an exercise for the interested
reader. The second observation emphasizes that if some dimensionality is lost by filtering, signals in the
original signal set that differed only along the lost dimension(s) would appear identical at the channel
output. For example, consider two signals x_k(t) and x_j(t) whose filtered versions are equal:

\tilde{x}_k(t) - \tilde{x}_j(t) = \sum_{n=1}^{N} ( x_{kn} - x_{jn} ) \, \tilde{\varphi}_n(t) = 0 .   (1.293)

If the set \{\tilde{\varphi}_n(t)\} is linearly independent, then the sum in (1.293) must be nonzero: a contradiction
to (1.293). If this set of functions is linearly dependent, then (1.293) can be satisfied, resulting in the
possibility of ambiguous transmitted signals. Failure to meet the linear-independence condition could
mandate a redesign of the modulated signal set or a rate reduction (decrease of M). The dimensionality
loss and ensuing redesign of \{x_i(t)\}_{i=0:M-1} is studied in Chapters 4 and 5. This chapter assumes such
dimensionality loss does not occur.
If the set \{\tilde{\varphi}_n(t)\} is linearly independent, then the Gram-Schmidt procedure in Appendix A generates
an orthonormal set of N basis functions \{\psi_n(t)\}_{n=1,...,N} from \{\tilde{\varphi}_n(t)\}_{n=1,...,N}. A new signal constellation
\{\tilde{x}_i\}_{i=0:M-1} can be computed from the filtered signal set \{\tilde{x}_i(t)\} using the basis set \{\psi_n(t)\}:

\tilde{x}_{in} = \int_{-\infty}^{\infty} \tilde{x}_i(t) \, \psi_n(t) \, dt = \langle \tilde{x}_i(t), \psi_n(t) \rangle .   (1.294)
Using the previous analysis for AWGN, a tight upper bound on message error probability is still given
by

P_e \leq N_e \, Q\!\left[ \frac{d_{\min}}{2\sigma} \right] ,   (1.295)

where d_{\min} is the minimum Euclidean distance between any two points in the filtered signal constellation
\{\tilde{x}_i\}_{i=0:M-1}. The matched-filter implementation of the demodulator/detector does not need to compute
\{\psi_n(t)\}_{n=1,...,N} for the signal detector, as shown in Figure 1.46. (For reference, the reader can reexamine
the detector for the unfiltered constellation in Figure 1.23.)
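A discrete-time sketch of the filtered-basis procedure in (1.288)-(1.294), with an assumed (illustrative) filter h and sampling grid; Gram-Schmidt is carried out here via a QR factorization:

```python
import numpy as np

# Discrete-time sketch of (1.288)-(1.294): filter an orthonormal basis,
# re-orthonormalize (Gram-Schmidt via QR), and project a filtered signal
# onto the new basis. The filter h and the sampling grid are illustrative.
dt = 1e-3
t = np.arange(0, 1, dt)
phi = np.stack([np.sqrt(2) * np.cos(2 * np.pi * t),
                np.sqrt(2) * np.sin(2 * np.pi * t)])        # orthonormal on [0,1]
h = np.exp(-t / 0.05)
h /= h.sum()                                                # assumed channel filter
phi_f = np.stack([np.convolve(p, h)[:t.size] for p in phi]) # filtered basis
Q, _ = np.linalg.qr(phi_f.T * np.sqrt(dt))                  # new orthonormal basis
x = np.array([1.0, 1.0])                                    # one data symbol
xf = x @ phi_f                                              # filtered waveform
x_new = Q.T @ (xf * np.sqrt(dt))                            # projection as in (1.294)
print(np.allclose(Q.T @ Q, np.eye(2)),                      # new basis is orthonormal
      np.allclose(Q @ x_new, xf * np.sqrt(dt)))             # waveform fully captured
```

Because the filtered basis here stays linearly independent, the two-coefficient projection recovers the filtered waveform exactly, illustrating why no dimensionality is lost in this case.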
In the analysis of the filtered AWGN, the transmitted average energy \mathcal{E}_x is still measured at the
channel input. Thus, while \mathcal{E}_{\tilde{x}} can be computed, its physical significance can differ from that of \mathcal{E}_x. If,
as is often the case, the energy constraint is at the input to the channel, then the comparison of various
signaling alternatives, as performed earlier in this chapter, could change depending on the specific filter
h(t).
1.7.2 Optimum Detection in the Presence of Self-Correlated Noise
The Additive Self-Correlated Gaussian Noise (ACGN) channel is illustrated in Figure 1.47. The only
change with respect to Figure 1.18 is that the autocorrelation function of the additive noise, r_n(\tau),
need not equal \frac{N_0}{2} \delta(\tau). To simplify the ensuing development, define a normalized noise
autocorrelation function

\tilde{r}_n(\tau) \doteq \frac{ r_n(\tau) }{ N_0/2 } .   (1.296)

The power spectral density of the unnormalized noise is then

S_n(f) = \frac{N_0}{2} \, \tilde{S}_n(f) ,   (1.297)

where \tilde{S}_n(f) is the Fourier Transform of \tilde{r}_n(\tau).
Figure 1.46: Modied signal detector.
Figure 1.47: ACGN channel.
The Whitening Filter

The whitening-filter analysis of the ACGN channel whitens the colored noise with a whitening filter
g(t), and then uses the results of the previous section for the filtered AWGN channel where the filter
h(t) = g(t). To ensure no loss of information when filtering the noisy received signal by g(t), the filter
g(t) should be invertible. By the reversibility theorem, the receiver can use an optimal detector for this
newly generated filtered AWGN without performance loss. Actually, the condition on invertibility of
g(t) is sufficient but not necessary. For a particular signal set, a necessary condition is that the filter
be invertible over that signal set. For the filter to be invertible on any possible signal set, g(t) must
necessarily be invertible. This subtle point is often overlooked by most works on this subject.

For g(t) to whiten the noise,

\left[ \tilde{S}_n(f) \right]^{-1} = |G(f)|^2 .   (1.298)

In general, many filters G(f) may satisfy Equation (1.298), but only some of the filters shall possess
realizable inverses.
To ensure the existence of a realizable inverse, S_n(f) must satisfy the Paley-Wiener Criterion.

Theorem 1.7.1 (Paley-Wiener Criterion) If

\int_{-\infty}^{\infty} \frac{ | \ln S_n(f) | }{ 1 + f^2 } \, df < \infty ,   (1.299)

then there exists a G(f) satisfying (1.298) with a realizable inverse. (Thus the filter g(t) is
a 1-to-1 mapping.)

If the Paley-Wiener criterion were violated by a noise signal, then it is possible to design transmission
systems with infinite data rate (that is, when S_n(f) = 0 over a given bandwidth) or to design transmission
systems for each band over which Paley-Wiener is satisfied (that is, the bands where noise is essentially of
finite energy). This subsection's analysis always assumes Equation (1.299) is satisfied.^{14} With a 1-to-1
g(t) that satisfies (1.298), the ACGN channel converts into an equivalent filtered white Gaussian noise
channel as shown in Figure 1.45, replacing h(t) with g(t). The performance analysis of ACGN is identical
to that derived for the filtered AWGN channel in Subsection 1.7.1.

A further refinement handles the filtered ACGN channel by whitening the noise and then analyzing
the filtered AWGN with h(t) replaced by h(t) * g(t).
Analytic continuation of \tilde{S}_n(s) determines an invertible g(t):

\tilde{S}_n(s) = \tilde{S}_n\!\left( f = \frac{s}{2\pi\jmath} \right) ,   (1.300)

where \tilde{S}_n(s) can be canonically (and uniquely) factored into causal (and causally invertible) and anti-
causal (and anticausally invertible) parts as

\tilde{S}_n(s) = \tilde{S}_n^{+}(s) \, \tilde{S}_n^{-}(s) ,   (1.301)

where

\tilde{S}_n^{+}(s) = \tilde{S}_n^{-}(-s) .   (1.302)

If \tilde{S}_n(s) is rational, then \tilde{S}_n^{+}(s) is minimum phase, i.e. all poles and zeros of \tilde{S}_n^{+}(s) are in the left half
plane. The filter g(t) is then given by

g(t) = \mathcal{L}^{-1}\!\left[ \frac{1}{\tilde{S}_n^{+}(s)} \right] ,   (1.303)

where \mathcal{L}^{-1} is the inverse Laplace Transform. The matched filter g(-t) is given by g(-t) = \frac{1}{2\pi\jmath} \int G(-s) e^{st} \, ds,
or equivalently by

g(-t) = \mathcal{L}^{-1}\!\left[ \frac{1}{\tilde{S}_n^{-}(s)} \right] .   (1.304)

^{14} Chapters 4 and 5 expand to the correct form of transmission that should be used when (1.299) is not satisfied.
This g(-t) is anticausal and cannot be realized. Practical receivers instead realize g(T - t), where T is
sufficiently large to ensure causality.

In general, g(t) may be difficult to implement by this method; however, the next subsection considers
a discrete equivalent of whitening that is more straightforward to implement in practice. When the noise
is complex (see Chapter 2), Equation (1.302) generalizes to

\tilde{S}_n^{+}(s) = \left[ \tilde{S}_n^{-}(s^*) \right]^* .   (1.305)
1.7.3 The Vector Self-Correlated Gaussian Noise Channel
This subsection considers a discrete equivalent of the ACGN:

y = x + n ,   (1.306)

where the autocorrelation matrix of the noise vector n is

E[nn^*] = R_n = \tilde{R}_n \, \sigma^2 .   (1.307)

Both R_n and \tilde{R}_n are positive definite matrices. This discrete ACGN channel can often be substituted
for the continuous ACGN channel. The discrete noise vector can be whitened, transforming \tilde{R}_n into
an identity matrix. The discrete equivalent to whitening y(t) by g(t) is a matrix multiplication: the
N \times N whitening matrix in the discrete case corresponds to the whitening filter g(t) in the
continuous case.
Cholesky factorization determines the invertible whitening transformation according to (see Appendix
A of Chapter 3):

\tilde{R}_n = \tilde{R}^{1/2} \, \tilde{R}^{*/2} ,   (1.308)

where \tilde{R}^{1/2} is lower triangular and \tilde{R}^{*/2} is upper triangular. These matrices constitute the matrix
equivalent of a square root, and both matrices are invertible. The inverses follow the definitions

\tilde{R}^{-1/2} \doteq \left( \tilde{R}^{1/2} \right)^{-1} ,   (1.309)

and

\tilde{R}^{-*/2} \doteq \left( \tilde{R}^{*/2} \right)^{-1} .   (1.310)
To whiten n, the receiver passes y through the matrix multiply \tilde{R}^{-1/2}:

\tilde{y} \doteq \tilde{R}^{-1/2} y = \tilde{R}^{-1/2} x + \tilde{R}^{-1/2} n = \tilde{x} + \tilde{n} .   (1.311)

The autocorrelation matrix for \tilde{n} is

E[\tilde{n}\tilde{n}^*] = \tilde{R}^{-1/2} \, E[nn^*] \, \tilde{R}^{-*/2} = \tilde{R}^{-1/2} \left( \tilde{R}^{1/2} \, \tilde{R}^{*/2} \, \sigma^2 \right) \tilde{R}^{-*/2} = \sigma^2 I .   (1.312)

Thus, the covariance matrix of the transformed noise \tilde{n} is the same as the covariance matrix of the
AWGN vector. By the theorem of reversibility, no information is lost in such a transformation.
EXAMPLE 1.7.1 (QPSK with correlated noise) For the example shown in Figure 1.21,
suppose that the noise is colored with correlation matrix

R_n = \sigma^2 \begin{bmatrix} 1 & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & 1 \end{bmatrix} .   (1.313)

Then

\tilde{R}^{1/2} = \begin{bmatrix} 1 & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}   (1.314)
Figure 1.48: Equivalent signal constellation for Example 1.7.1.
and

\tilde{R}^{*/2} = \begin{bmatrix} 1 & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{bmatrix} .   (1.315)

From (1.314),

\tilde{R}^{-1/2} = \begin{bmatrix} 1 & 0 \\ -1 & \sqrt{2} \end{bmatrix}   (1.316)

and

\tilde{R}^{-*/2} = \begin{bmatrix} 1 & -1 \\ 0 & \sqrt{2} \end{bmatrix} .   (1.317)

The signal constellation after the whitening filter becomes

\tilde{x}_0 = \tilde{R}^{-1/2} x_0 = \begin{bmatrix} 1 & 0 \\ -1 & \sqrt{2} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ \sqrt{2} - 1 \end{bmatrix} ,   (1.318)

and similarly \tilde{x}_1 = [\,-1 \ \ -(\sqrt{2}-1)\,]^*, \tilde{x}_2 = [\,-1 \ \ \sqrt{2}+1\,]^*, and \tilde{x}_3 = [\,1 \ \ -(\sqrt{2}+1)\,]^*.

This new constellation forms a parallelogram in two dimensions, where the minimum distance
is now along the shorter diagonal (between \tilde{x}_0 and \tilde{x}_1), rather than along the sides, and
d_{\min} = 2.164 > 2. This new constellation appears in Figure 1.48. Thus, the optimum
detector for this channel with self-correlated Gaussian noise has larger minimum distance
than for the white noise case, illustrating the important fact that having correlated noise is
sometimes advantageous.
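Example 1.7.1 can be replayed numerically with a Cholesky factorization (the QPSK labeling here is illustrative; only the point set matters for the minimum distance):

```python
import numpy as np

# Numerical replay of Example 1.7.1: Cholesky-factor the noise correlation,
# whiten with the inverse factor, and measure the new minimum distance.
Rn = np.array([[1.0, 1 / np.sqrt(2)], [1 / np.sqrt(2), 1.0]])  # sigma^2 = 1
L = np.linalg.cholesky(Rn)              # R^{1/2}, lower triangular
W = np.linalg.inv(L)                    # whitening matrix R^{-1/2}
X = np.array([[1, 1], [-1, -1], [-1, 1], [1, -1]], float).T    # QPSK symbols
Xw = W @ X                              # whitened constellation
dmin = min(np.linalg.norm(Xw[:, i] - Xw[:, j])
           for i in range(4) for j in range(i + 1, 4))
print(np.allclose(W @ Rn @ W.T, np.eye(2)))   # noise is now white: True
print(round(dmin, 3))                          # 2.165 (> 2, the white-noise dmin)
```

The whitened noise covariance is exactly the identity, and the minimum distance grows from 2 to about 2.16, matching the example's conclusion.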
The example shows that correlated noise may lead to improved performance measured with respect to
the same channel and signal constellation with white noise of the same average energy. Nevertheless, the
autocorrelation matrix of the noise is often not known in implementation, or it may vary from channel
use to channel use. Then, the detector is designed as if white noise were present anyway, and there is a
performance loss with respect to the optimum detector. The next subsection deals with the calculation
of this performance loss.
1.7.4 Performance of Suboptimal Detection with Self-Correlated Noise
A detector designed for the AWGN channel is obviously suboptimum for the ACGN channel, but is often
used anyway, as the correlation properties of the noise may be hard to know in the design stage. In this
case, the detector performance will be reduced with respect to optimum.
Computation of the amount by which performance is reduced uses the error event vectors

\epsilon_{ij} \doteq \frac{ x_i - x_j }{ \| x_i - x_j \| } .   (1.319)

The component of the additive noise vector along an error event vector is \langle n, \epsilon_{ij} \rangle. The variance of the
noise along this vector is \sigma_{ij}^2 \doteq E\left[ \langle n, \epsilon_{ij} \rangle^2 \right]. Then, the NNUB becomes

P_e \leq N_e \, Q\!\left[ \min_{i \neq j} \frac{ \| x_i - x_j \| }{ 2\sigma_{ij} } \right] .   (1.320)
For Example 1.7.1, the worst-case argument of the Q-function in (1.320) is 1/\sigma, which represents
a factor of (2.164/2)^2 = .7 dB loss with respect to optimum. This loss varies with rotation of the
signal set, but not translation. If the signal constellation in Example 1.7.1 were rotated by 45°, as in
Figure 1.26, then the increase in noise variance is (1 + \sqrt{1/2})/1 = 2.3 dB, but d_{\min} remains at 2 for this
sub-optimum detector, so performance is 3 dB inferior to that of the optimum detector for the unrotated
constellation. However, the optimum receiver for the rotated case would also have changed to have 3
dB worse performance for this rotation, so in this case the optimum rotated and sub-optimum rotated
receivers have the same performance.
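The 0.7 dB loss quoted above can be reproduced by evaluating the argument of (1.320) over all error events (a sketch for Example 1.7.1 with σ² = 1):

```python
import numpy as np

# Sketch of Eq. (1.320) for Example 1.7.1: project the colored noise onto each
# error-event direction and find the worst-case Q-function argument.
Rn = np.array([[1.0, 1 / np.sqrt(2)], [1 / np.sqrt(2), 1.0]])  # sigma^2 = 1
X = np.array([[1, 1], [-1, -1], [-1, 1], [1, -1]], float)
args = []
for i in range(4):
    for j in range(4):
        if i == j:
            continue
        diff = X[i] - X[j]
        eps = diff / np.linalg.norm(diff)       # error event vector, Eq. (1.319)
        sij = np.sqrt(eps @ Rn @ eps)           # noise std. dev. along eps
        args.append(np.linalg.norm(diff) / (2 * sij))
worst = min(args)                               # 1/sigma = 1.0 here
opt = 2 * np.sqrt(4 - 2 * np.sqrt(2)) / 2       # optimum argument after whitening
print(round(worst, 3), round(20 * np.log10(opt / worst), 2))
```

The worst-case argument is 1.0 (= 1/σ), about 0.69 dB below the whitened optimum of roughly 1.08, matching the text's 0.7 dB figure.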
Chapter 1 Exercises
1.1 Our First Constellation.
a. Show that the following two basis functions are orthonormal. (2 pts)

\varphi_1(t) = \sqrt{2} \cos(2\pi t) for t \in [0, 1], and 0 otherwise

\varphi_2(t) = \sqrt{2} \sin(2\pi t) for t \in [0, 1], and 0 otherwise
b. Consider the following modulated waveforms, each zero outside t \in [0, 1]:

x_0(t) = \sqrt{2} (\cos(2\pi t) + \sin(2\pi t))
x_1(t) = \sqrt{2} (\cos(2\pi t) + 3\sin(2\pi t))
x_2(t) = \sqrt{2} (3\cos(2\pi t) + \sin(2\pi t))
x_3(t) = \sqrt{2} (3\cos(2\pi t) + 3\sin(2\pi t))
x_4(t) = \sqrt{2} (\cos(2\pi t) - \sin(2\pi t))
x_5(t) = \sqrt{2} (\cos(2\pi t) - 3\sin(2\pi t))
x_6(t) = \sqrt{2} (3\cos(2\pi t) - \sin(2\pi t))
x_7(t) = \sqrt{2} (3\cos(2\pi t) - 3\sin(2\pi t))
x_{i+8}(t) = -x_i(t), \quad i = 0, \ldots, 7

Draw the constellation points for these waveforms using the basis functions of (a). (2 pts)
c. Compute \mathcal{E}_x and \bar{\mathcal{E}}_x (\bar{\mathcal{E}}_x = \mathcal{E}_x / N), where N is the number of dimensions,

(i) for the case where all signals are equally likely. (2 pts)

(ii) for the case where (2 pts)

p(x_0) = p(x_4) = p(x_8) = p(x_{12}) = \frac{1}{8}

and

p(x_i) = \frac{1}{24}, \quad i = 1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15 .
d. Let

y_i(t) = x_i(t) + 4\varphi_3(t) ,

where

\varphi_3(t) = 1 for t \in [0, 1], and 0 otherwise.

Compute \mathcal{E}_y for the case where all signals are equally likely. (2 pts)
1.2 Inner Products.
Consider the following signals, each zero outside t \in [0, T]:

x_0(t) = \sqrt{\frac{2}{T}} \cos\!\left( \frac{2\pi t}{T} + \frac{\pi}{6} \right)

x_1(t) = \sqrt{\frac{2}{T}} \cos\!\left( \frac{2\pi t}{T} + \frac{5\pi}{6} \right)

x_2(t) = \sqrt{\frac{2}{T}} \cos\!\left( \frac{2\pi t}{T} + \frac{3\pi}{2} \right)

a. Find a set of orthonormal basis functions for this signal set. Show that they are orthonormal.
Hint: Use the identity cos(a + b) = cos(a) cos(b) - sin(a) sin(b). (4 pts)
b. Find the data symbols corresponding to the signals above for the basis functions you found in (a).
(3 pts)

c. Find the following inner products: (3 pts)

(i) \langle x_0(t), x_0(t) \rangle
(ii) \langle x_0(t), x_1(t) \rangle
(iii) \langle x_0(t), x_2(t) \rangle
1.3 Multiple sets of basis functions.
Consider the following two orthonormal basis functions:
Figure 1.49: Basis functions.
a. Use the basis functions given above to find the modulated waveforms u(t) and v(t) given the data
symbols u = [1 1] and v = [2 1]. It is sufficient to draw u(t) and v(t). (2 pts)

b. For the same u(t) and v(t), a different set of two orthonormal basis functions is employed for which
u = [\sqrt{2} \ 0] produces u(t). Draw the new basis functions and find the v which produces v(t). (3 pts)
1.4 Minimal orthonormalization with MATLAB.
Each column of the matrix $A$ given below is a data symbol that is used to construct its corresponding modulated waveform from the set of orthonormal basis functions $\phi_1(t), \phi_2(t), \ldots, \phi_6(t)$. The set of modulated waveforms described by the columns of $A$ can be represented with a smaller number of basis functions.
$$A = [a_0\ a_1\ \ldots\ a_7]$$   (1.321)
$$= \begin{bmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 \\
2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 2 & 0 & 2 & 0 & 2 & 0 & 2 \\
2 & 4 & 2 & 4 & 2 & 4 & 2 & 4
\end{bmatrix}$$   (1.322)
The transmitted signals $a_i(t)$ are represented (with a superscript of * meaning matrix or vector transpose) as
$$a_i(t) = a_i^* \begin{bmatrix} \phi_1(t) \\ \phi_2(t) \\ \vdots \\ \phi_6(t) \end{bmatrix}$$   (1.323)
$$A(t) = A^*\, \Phi(t), \quad \text{where } \Phi(t) = [\phi_1(t)\ \cdots\ \phi_6(t)]^*.$$   (1.324)
Thus, each row of $A(t)$ is a possible transmitted signal.
a. Use MATLAB to find an orthonormal basis for the columns of $A$. Record the matrix of basis vectors. The MATLAB commands help and orth will be useful. In particular, if one executes Q = orth(A) in MATLAB, a $6 \times 3$ orthogonal matrix $Q$ is produced such that $Q^*Q = I$ and $A^* = [A^*Q]\, Q^*$. The columns of $Q$ can be thought of as a new basis; thus try writing $A(t)$ and interpreting it to get a new set of basis functions and a description of the 8 possible transmit waveforms. Note that help orth will give a summary of the orth command. To enter the matrix $B$ shown below in MATLAB (for example), simply type B=[1 2; 3 4]; (2 pts)
$$B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$   (1.325)
b. How many basis functions are actually needed to represent our signal set? What are the new basis functions in terms of $\phi_1(t), \phi_2(t), \ldots, \phi_6(t)$? (2 pts)
c. Find the new matrix $\tilde{A}$ which gives the data symbol representation for the original modulated waveforms using the smaller set of basis functions found in (b). $\tilde{A}$ will have 8 columns, one for each data symbol. The number of rows in $\tilde{A}$ will be the number of basis functions you found in (b). (1 pt)
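As a numerical sketch of parts (a)-(c) (Python/numpy in place of MATLAB; the SVD yields the same orthonormal column-space basis that `orth` computes, and the matrix entries are assumed to be exactly as printed in (1.322)):

```python
import numpy as np

# Matrix A from (1.322); columns a_0..a_7 are the data symbols.
A = np.array([[1, 2, 3, 4, 5, 6, 7, 8],
              [1, 3, 5, 7, 9, 11, 13, 15],
              [2, 4, 6, 8, 10, 12, 14, 16],
              [0, 1, 0, 1, 0, 1, 0, 1],
              [0, 2, 0, 2, 0, 2, 0, 2],
              [2, 4, 2, 4, 2, 4, 2, 4]], dtype=float)

# MATLAB's orth(A) returns an orthonormal basis for the column space of A;
# the SVD gives the same subspace: keep the left singular vectors whose
# singular values are (numerically) nonzero.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10 * s[0]))
Q = U[:, :rank]                  # 6 x rank, with Q'Q = I

# Smaller symbol representation (part c): A_tilde = Q' A, so A = Q A_tilde.
A_tilde = Q.T @ A

print(rank)                                  # 3 basis functions suffice
print(np.allclose(Q.T @ Q, np.eye(rank)))    # orthonormal
print(np.allclose(Q @ A_tilde, A))           # exact reconstruction
```

The rank comes out to 3 because row 3 is twice row 1, row 5 is twice row 4, and the remaining rows are combinations of rows 1 and 4 with the all-ones vector, consistent with the $6 \times 3$ matrix $Q$ described in part (a).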
1.5 Decision rules for binary channels.
a. The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It outputs each bit correctly with probability $1 - p$ and incorrectly with probability $p$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when $p < \frac{1}{2}$. How are the decision rules different when $p > \frac{1}{2}$? (5 pts)
b. The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However, there are three possible outputs. Given an input of 0, the output is 0 with probability $1 - p_1$ and 2 with probability $p_1$. Given an input of 1, the output is 1 with probability $1 - p_2$ and 2 with probability $p_2$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BEC when $p_1 < p_2 < \frac{1}{2}$. How are the decision rules different when $p_2 < p_1 < \frac{1}{2}$? (5 pts)
1.6 Minimax [Wesel 1994].
Figure 1.50: Binary Symmetric Channel (BSC).
Consider a 1-dimensional vector channel
$y = x + n$,
where $x = \pm 1$ and $n$ is Gaussian noise with $\sigma^2 = 1$. The Maximum-Likelihood (ML) receiver, which is minimax, has decision regions
$D_{ML,+1} = [0, \infty)$ and $D_{ML,-1} = (-\infty, 0)$.
So if $y$ is in $D_{ML,+1}$ we decode $y$ as $+1$; if in $D_{ML,-1}$, as $-1$.
Consider another receiver, $R$, where the decision regions are
$D_{R,+1} = [\frac{1}{2}, \infty)$ and $D_{R,-1} = (-\infty, \frac{1}{2})$.
a. Find $P_{e,ML}$ and $P_{e,R}$ as a function of $p_x(1) = p$ for values of $p$ in the interval $[0, 1]$. On the same graph, plot $P_{e,ML}$ vs. $p$ and $P_{e,R}$ vs. $p$. (2 pts)
b. Find $\max_p P_{e,ML}$ and $\max_p P_{e,R}$. Are your results consistent with the Minimax Theorem? (2 pts)
c. For what value of $p$ is $D_R$ the MAP decision rule? (1 pt)
Note: For this problem you will need to use the $Q(\cdot)$ function discussed in Appendix B. Here are some relevant values of $Q(\cdot)$:
x    Q(x)
0.5  0.3085
1.0  0.1587
1.5  0.0668
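For part (a), both error probabilities follow directly from the Q-function; a small Python sketch (using the exact Q via the standard library's `erfc`) that reproduces the tabulated values:

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function: Q(x) = P{N(0,1) > x} = 0.5 erfc(x/sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

# y = x + n, x = ±1, sigma = 1.
# ML receiver (threshold 0): error probability Q(1) for either input,
# so P_{e,ML} does not depend on p.
def Pe_ML(p):                 # p = p_x(+1)
    return p * Q(1.0) + (1 - p) * Q(1.0)

# Receiver R (threshold 1/2): given x = +1, error if n < -1/2 -> Q(1/2);
# given x = -1, error if n >= 3/2 -> Q(3/2).
def Pe_R(p):
    return p * Q(0.5) + (1 - p) * Q(1.5)

print(round(Pe_ML(0.5), 4))   # 0.1587
print(round(Pe_R(0.0), 4))    # 0.0668
print(round(Pe_R(1.0), 4))    # 0.3085
```

The constant-risk ML rule has the smaller worst-case error over $p$, which is what part (b)'s minimax comparison exhibits.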
Figure 1.51: Binary Erasure Channel (BEC).
1.7 Irrelevancy/Decision Regions. (From Wozencraft and Jacobs)
a. Consider the following channel where $x$, $n_1$, and $n_2$ are independent binary random variables. All the additions shown below are modulo two. (Equivalently, the additions may be considered XORs.)
(i) Given only $y_1$, is $y_3$ relevant? (1 pt)
(ii) Given $y_1$ and $y_2$, is $y_3$ relevant? (1 pt)
For the rest of the problem, consider the following channel.
One of the two signals $x_0 = -1$ or $x_1 = +1$ is transmitted over this channel. The noise random variables $n_1$ and $n_2$ are statistically independent of the transmitted signal $x$ and of each other. Their density functions are
$$p_{n_1}(n) = p_{n_2}(n) = \frac{1}{2}\, e^{-|n|}$$   (1.326)
b. Given $y_1$ only, is $y_2$ relevant? (1 pt)
c. Prove that the optimum decision regions for equally likely messages are as shown below. (3 pts)
d. A receiver chooses $x_1$ if and only if $(y_1 + y_2) > 0$. Is this receiver optimum for equally likely messages? What is the probability of error? (Hint: $P_e = P\{y_1 + y_2 > 0 \mid x = -1\}\, p_x(-1) + P\{y_1 + y_2 < 0 \mid x = +1\}\, p_x(+1)$, and use symmetry. Recall that the probability density function of the sum of 2 random variables is the convolution of their individual probability density functions.) (4 pts)
e. Prove that the optimum decision regions are modified as indicated below when $Pr\{X = x_1\} > 1/2$. (2 pts)
1.8 Optimum Receiver. (From Wozencraft and Jacobs)
Suppose one of $M$ equiprobable signals $x_i(t)$, $i = 0, \ldots, M-1$, is to be transmitted during a period of time $T$ over an AWGN channel. Moreover, each signal is identical to all others in the subinterval $[t_1, t_2]$, where $0 < t_1 < t_2 < T$.
a. Show that the optimum receiver may ignore the subinterval $[t_1, t_2]$. (2 pts)
b. Equivalently, show that if $x_0, \ldots, x_{M-1}$ all have the same projection in one dimension, then this dimension may be ignored. (2 pts)
c. Does this result necessarily hold true if the noise is Gaussian but not white? Explain. (2 pts)
1.9 Receiver Noise (use MATLAB for all necessary calculations; courtesy S. Li, 2005).
Each column of $A$ given below is a data symbol that is used to construct its corresponding modulated waveform from a set of orthonormal basis functions (assume all messages are equally likely):
$$\Phi(t) = \left[ \phi_1(t)\ \phi_2(t)\ \phi_3(t)\ \phi_4(t)\ \phi_5(t)\ \phi_6(t) \right].$$
The matrix $A$ is given by
$$A = \begin{bmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\
5 & 6 & 7 & 8 & 5 & 6 & 7 & 8
\end{bmatrix}$$   (1.327)
so that
$$x(t) = \Phi(t)\,A = [x_0(t)\ x_1(t)\ \ldots\ x_7(t)].$$   (1.328)
A noise vector $n = [n_1\ n_2\ n_3\ n_4\ n_5\ n_6]^*$ is added to the symbol vector $x$, such that
$$y(t) = \Phi(t)\,(x + n),$$
where $n_1, \ldots, n_6$ are independent, with $n_k = \pm 1$ with equal probability.
The transmitted waveform $y(t)$ is demodulated using the basis detector of Figure ??. This problem examines the signal-to-noise ratio of the demodulated vector $y = x + n$ with $\sigma^2 = E(n_k^2)$.
a. Find $\bar{\mathcal{E}}_x$, $\sigma^2$, and $SNR = \bar{\mathcal{E}}_x/\sigma^2$ if all messages are equally likely. (2 pts)
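Part a reduces to arithmetic on the columns of $A$; a Python sketch (entries taken as printed in (1.327) — energies depend only on squared entries, so any minus signs lost in reproduction would not change the result):

```python
import numpy as np

# Columns of A in (1.327) are the 8 equally likely symbol vectors.
A = np.array([[1, 2, 3, 4, 5, 6, 7, 8],
              [2, 4, 6, 8, 10, 12, 14, 16],
              [1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [3, 3, 3, 3, 3, 3, 3, 3],
              [5, 6, 7, 8, 5, 6, 7, 8]], dtype=float)

N = A.shape[0]                        # 6 dimensions
Ex = np.mean(np.sum(A**2, axis=0))    # average symbol energy E_x
Ex_bar = Ex / N                       # energy per dimension
sigma2 = 1.0                          # n_k = ±1  =>  E[n_k^2] = 1
SNR = Ex_bar / sigma2

print(Ex, Ex_bar, SNR)                # 181.0, ~30.17, ~30.17 (about 14.8 dB)
```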
b. Find the minimal number of basis vectors and the new matrix $\tilde{A}$ as in Problem 1.4, and calculate the new $\bar{\mathcal{E}}_x$, $\sigma^2$, and SNR. (4 pts)
c. Let the new vector be $\tilde{y} = \tilde{x} + \tilde{n}$, and discuss whether the conversion from $y$ to $\tilde{y}$ is invariant (namely, whether $P_e$ is affected by the conversion matrix). Compare the detectors for parts a and b. (1 pt)
d. Compare $\bar{b}$ and $\bar{\mathcal{E}}_x$ with the previous system. Is the new system superior? Why or why not? (2 pts)
e. The new system now has three unused dimensions; we would like to send 8 more messages by constructing a big matrix $\hat{A}$, as follows:
$$\hat{A} = \begin{bmatrix} \tilde{A} & 0 \\ 0 & \tilde{A} \end{bmatrix}$$
Compare $\bar{b}$ and $\bar{\mathcal{E}}_x$ with the original 6-dimensional system and the 3-dimensional system in (b). (4 pts)
1.10 Tilt. Consider the signal set shown in Figure 1.52 with an AWGN channel, and let $\sigma^2 = 0.1$.
Figure 1.52: A Signal Constellation
a. Does $P_e$ depend on $L$ and $\theta$? (1 pt)
b. Find the nearest neighbor union bound on $P_e$ for the ML detector assuming $p_x(i) = \frac{1}{9}\ \forall i$. (2 pts)
c. Find $P_e$ exactly using the assumptions of the previous part. How far off was the NNUB? (5 pts)
d. Suppose we have a minimum energy constraint on the signal constellation. How would we change the constellation of this problem without changing the $P_e$? How does this affect the constellation energy? (2 pts)
1.11 Parseval. Consider binary signaling on an AWGN channel with $\sigma^2 = 0.04$ and ML detection for the following signal set. (Hint: consider various ways of computing $d_{min}$.)
$x_0(t) = \mathrm{sinc}^2(t)$
$x_1(t) = \sqrt{2}\, \mathrm{sinc}^2(t) \cos(4\pi t)$
Determine the exact $P_e$ assuming that the two input signals are equally likely. (5 pts)
1.12 Disk storage channel.
Binary data storage with a thin-film disk can be approximated by an input-dependent additive white Gaussian noise channel, where the noise $n$ has a variance dependent on the transmitted (stored) input. The noise has the following input-dependent density:
$$p(n) = \begin{cases} \frac{1}{\sqrt{2\pi\sigma_1^2}}\, e^{-\frac{n^2}{2\sigma_1^2}} & \text{if } x = 1 \\[4pt] \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-\frac{n^2}{2\sigma_0^2}} & \text{if } x = 0 \end{cases}$$
and $\sigma_1^2 = 31\sigma_0^2$. The channel inputs are equally likely.
a. For either input, the output can take on any real value. On the same graph, plot the two possible output probability density functions (pdfs), i.e., plot the output pdf for $x = 0$ and the output pdf for $x = 1$. Indicate (qualitatively) the decision regions on your graph. (2 pts)
b. Determine the optimal receiver in terms of $\sigma_1$ and $\sigma_0$. (3 pts)
c. Find $\sigma_0^2$ and $\sigma_1^2$ if the SNR is 15 dB. SNR is defined as $\frac{\mathcal{E}_x}{\frac{1}{2}(\sigma_0^2 + \sigma_1^2)} = \frac{1}{\sigma_0^2 + \sigma_1^2}$. (1 pt)
d. Determine $P_e$ when SNR = 15 dB. (3 pts)
e. What happens as $\sigma_0^2/\sigma_1^2 \to 0$? You may restrict your attention to the physically reasonable case where $\sigma_1$ is a fixed finite value and $\sigma_0 \to 0$. (1 pt)
1.13 Rotation with correlated noise.
A two-dimensional vector channel $y = x + n$ has correlated Gaussian noise (that is, the noise is not white, and so not independent in each dimension) such that $E[n_1] = E[n_2] = 0$, $E[n_1^2] = E[n_2^2] = 0.1$, and $E[n_1 n_2] = 0.05$. $n_1$ is along the horizontal axis and $n_2$ is along the vertical axis.
a. Suppose we use the constellation below with $\theta = 45°$ and $d = 2\sqrt{2}$ (i.e., $x_1 = (1, 1)$ and $x_2 = (-1, -1)$). Find the mean and mean-square values of the noise projected on the line connecting the two constellation points. Note that this value more generally is a function of $\theta$ when the noise is not white. (2 pts)
Figure 1.53: Constellation
b. Note that the noise projected on the line in the previous part is Gaussian. Find $P_e$ for the ML detector. Assume your detector was designed for uncorrelated noise. (2 pts)
c. Fixing $d = 2\sqrt{2}$, find $\theta$ to minimize the ML detector $P_e$ and give the corresponding $P_e$. You may continue to assume that the receiver is designed for uncorrelated noise. (2 pts)
d. Could your detector in part a be improved by taking advantage of the fact that the noise is correlated? (1 pt)
1.14 Hybrid QAM. Consider the 64 QAM constellation with $d = 2$ (see Figure 1.54). The 32 hybrid QAM ($\star$) is obtained by taking one of every two points of the constellation. This problem investigates the properties of such a constellation. Assume all points are equally likely and the channel is an AWGN channel.
a. Compute the energy $\mathcal{E}_x$ of the 64 QAM and the 32 hybrid QAM constellations. Compare your results. (2 pts)
b. Find the NNUB for the probability of error for the 64 QAM and 32 hybrid QAM constellations. Which has lower $P_e$? Why? (3 pts)
c. What is $d_{min}$ for a 32 Cross QAM constellation having the same energy? (1 pt)
d. Find the NNUB for the probability of error for the 32 Cross QAM constellation. Compare with the 32 hybrid QAM constellation. Which one performs better? Why? (2 pts)
Figure 1.54: 32 SQ embedded in 64 SQ QAM Constellation
e. Compute the figure of merit for both 32 QAM constellations. Is your result consistent with the one of (d)? (2 pts)
1.15 Ternary Amplitude Modulation.
Consider the general case of the 3-D TAM constellation for which the data symbols are
$$(x_l, x_m, x_n) = \left( \frac{d}{2}\left(2l - 1 - M^{\frac{1}{3}}\right),\ \frac{d}{2}\left(2m - 1 - M^{\frac{1}{3}}\right),\ \frac{d}{2}\left(2n - 1 - M^{\frac{1}{3}}\right) \right)$$
with $l = 1, 2, \ldots, M^{\frac{1}{3}}$, $m = 1, 2, \ldots, M^{\frac{1}{3}}$, $n = 1, 2, \ldots, M^{\frac{1}{3}}$. Assume that $M^{\frac{1}{3}}$ is an even integer.
a. Show that the energy of this constellation is (2 pts)
$$\mathcal{E}_x = \frac{1}{M} \left[ 3M^{\frac{2}{3}} \sum_{l=1}^{M^{1/3}} x_l^2 \right]$$   (1.329)
b. Now show that (3 pts)
$$\mathcal{E}_x = \frac{d^2}{4}\left(M^{\frac{2}{3}} - 1\right)$$
c. Assuming an AWGN channel with variance $\sigma^2$, find the NNUB for $P_e$ and $\bar{P}_e$. (3 pts)
d. Find $b$ and $\bar{b}$. (1 pt)
e. Find $\bar{\mathcal{E}}_x$ and the energy per bit $\mathcal{E}_b$. (1 pt)
f. For an equal number of bits per dimension $\bar{b} = \frac{b}{N}$, find the figure of merit for PAM, QAM, and TAM constellations with appropriate sizes of $M$. Compare your results. (2 pts)
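Parts a and b can be checked by brute force; a Python sketch that enumerates the full cube constellation and verifies the closed-form energy:

```python
import itertools
from math import isclose

# Verify E_x = (d^2/4)(M^(2/3) - 1) for the 3-D TAM constellation.
d = 2.0
for M13 in (2, 4, 6):            # M^(1/3), even, as the problem assumes
    M = M13 ** 3
    # per-dimension PAM levels d/2 (2l - 1 - M^(1/3)), l = 1..M^(1/3)
    pam = [d / 2 * (2 * l - 1 - M13) for l in range(1, M13 + 1)]
    energies = [x**2 + y**2 + z**2
                for x, y, z in itertools.product(pam, repeat=3)]
    Ex = sum(energies) / M       # average energy over all M points
    assert isclose(Ex, d**2 / 4 * (M13**2 - 1))
print("formula verified")
```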
1.16 Equivalency of rectangular-lattice constellations.
Consider an AWGN system with an $SNR = \frac{\bar{\mathcal{E}}_x}{\sigma^2}$ of 22 dB, a target probability of error $\bar{P}_e = 10^{-6}$, and a symbol rate $\frac{1}{T} = 8$ kHz.
a. Find the maximum data rate $R = \frac{b}{T}$ that can be transmitted for
(i) PAM ($\frac{1}{2}$ pt)
(ii) QAM ($\frac{1}{2}$ pt)
(iii) TAM ($\frac{1}{2}$ pt)
b. What is the NNUB normalized probability of error $\bar{P}_e$ for the systems used in (a)? ($1\frac{1}{2}$ pts)
c. For the rest of the problem we will only consider QAM systems. Suppose that the desired data rate is 40 kbps. What is the transmit power needed to maintain the same probability of error? The SNR is no longer given as 22 dB. (2 pts)
d. Suppose now that the SNR is increased to 28 dB. What is the highest data rate that can be reliably sent at the same probability of error $10^{-6}$? (1 pt)
1.17 Frequency separation in FSK. (Adapted from Wozencraft & Jacobs.)
Consider the following two signals used in a frequency-shift-keyed communications system over an AWGN channel.
$x_0(t) = \sqrt{\frac{2\mathcal{E}_x}{T}} \cos(2\pi f_0 t)$ if $t \in [0, T]$, $0$ otherwise
$x_1(t) = \sqrt{\frac{2\mathcal{E}_x}{T}} \cos(2\pi (f_0 + \Delta f) t)$ if $t \in [0, T]$, $0$ otherwise
$T = 100\ \mu s$, $f_0 = 10^5$ Hz, $\sigma^2 = 0.01$, $\mathcal{E}_x = 0.32$.
a. Find $P_e$ if $\Delta f = 10^4$. (2 pts)
b. Find the smallest $|\Delta f|$ such that the same $P_e$ found in part (a) is maintained. What type of constellation is this? (3 pts)
1.18 Pattern Recognition.
In this problem, a simple pattern recognition scheme based on optimum detectors is investigated. The patterns considered consist of a square divided into four smaller squares, as shown in Figure 1.55.
Figure 1.55: Sample pattern
Each square may have two possible intensities: black or white. The class of patterns studied will consist of those having two black squares and two white squares. For example, some of these patterns are shown in Figure 1.56.
Figure 1.56: Examples of patterns considered
Each pattern can be encoded into a vector $x = [x_1\ x_2\ x_3\ x_4]$, where each component indicates the intensity of a small square according to the following rule:
Black square $\rightarrow x_i = 1$
White square $\rightarrow x_i = -1$
For a given pattern, a set of four sensors takes measurements at the center of each small square and outputs $y = [y_1\ y_2\ y_3\ y_4]$,
$$y = x + n$$   (1.330)
where $n = [n_1\ n_2\ n_3\ n_4]$ is thermal noise (white Gaussian noise) introduced by the sensors. The goal of the problem is to minimize the probability of error for this particular case of pattern recognition.
a. What is the total number of possible patterns? (1 pt)
b. Write the optimum decision rule for deciding which pattern is being observed. Draw the corresponding signal detector. Assume each pattern is equally likely. (3 pts)
c. Find the union bound for the probability of error $P_e$. (2 pts)
d. Assuming that nearest neighbours are at minimum distance, find the NNUB for the probability of error $P_e$. (2 pts)
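Part b's optimum rule for equally likely, equal-energy patterns in AWGN reduces to minimum distance, or equivalently maximum correlation; a Python sketch (the noise level 0.3 is a hypothetical value for illustration only):

```python
import itertools
import numpy as np

# The valid patterns: ±1 vectors of length 4 with exactly two +1 (black)
# and two -1 (white) entries -- C(4,2) = 6 patterns in all.
patterns = np.array([p for p in itertools.product([-1, 1], repeat=4)
                     if sum(p) == 0], dtype=float)

def detect(y):
    # Equal priors and equal-energy signals in AWGN: the MAP/ML rule
    # reduces to minimum distance, i.e. maximum correlation <y, x>.
    return patterns[np.argmax(patterns @ y)]

rng = np.random.default_rng(0)
x = patterns[2]
y = x + 0.3 * rng.standard_normal(4)   # noisy sensor outputs
print(detect(y))
```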
1.19 Shaping Gain.
Find the shaping gain for the following two-dimensional Voronoi regions (decision regions) relative to the square Voronoi region. Do this using the continuous approximation for a continuous uniform distribution of energy through the region.
a. equilateral triangle (2 pts)
b. regular hexagon (2 pts)
c. circle (2 pts)
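As a check on part (c), the continuous approximation can be evaluated by Monte Carlo; a Python sketch for the circle (the triangle and hexagon follow the same pattern with a different membership test), which should come out near $\pi/3 \approx 0.2$ dB:

```python
import numpy as np

# Shaping gain of a region = (average energy of a unit-area square) /
# (average energy of the equal-area region), both uniformly filled.
rng = np.random.default_rng(1)
N = 2_000_000

def mean_energy_square():
    # unit-area square centered at the origin (side 1); exact value is 1/6
    p = rng.uniform(-0.5, 0.5, size=(N, 2))
    return np.mean(np.sum(p**2, axis=1))

def mean_energy_circle():
    # circle of unit area: radius r with pi r^2 = 1; exact value is 1/(2 pi)
    r = np.sqrt(1 / np.pi)
    p = rng.uniform(-r, r, size=(2 * N, 2))
    p = p[np.sum(p**2, axis=1) <= r**2][:N]   # rejection sampling
    return np.mean(np.sum(p**2, axis=1))

gain = mean_energy_square() / mean_energy_circle()
print(gain, 10 * np.log10(gain))   # ~pi/3 ~ 1.047, i.e. ~0.20 dB
```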
1.20 (From Wozencraft and Jacobs). On an additive white Gaussian noise channel, determine $P_e$ for the following signal set with ML detection. Leave the answer in terms of $\sigma^2$. (Hint: plot the signals and then the signal vectors.)
$x_1(t) = 1$ if $t \in [0,1]$, $0$ otherwise
$x_2(t) = 1$ if $t \in [1,2]$, $0$ otherwise
$x_3(t) = 1$ if $t \in [0,2]$, $0$ otherwise
$x_4(t) = 1$ if $t \in [2,3]$, $0$ otherwise
$x_5(t) = 1$ if $t \in [0,1]$ or $t \in [2,3]$, $0$ otherwise
$x_6(t) = 1$ if $t \in [1,3]$, $0$ otherwise
$x_7(t) = 1$ if $t \in [0,3]$, $0$ otherwise
$x_8(t) = 0$
1.21 Comparing bounds. Consider the following signal constellation in use on an AWGN channel.
$x_0 = (1, 1)$
$x_1 = (-1, 1)$
$x_2 = (-1, -1)$
$x_3 = (1, -1)$
$x_4 = (0, 3)$
Leave answers for parts a and b in terms of $\sigma$.
a. Find the union bound on $P_e$ for the ML detector on this signal constellation.
b. Find the Nearest Neighbor Union Bound on $P_e$ for the ML detector on this signal constellation.
c. Let the SNR = 14 dB and determine a numerical value for $P_e$ using the NNUB.
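A numerical sketch of the two bounds (Python; the five points are taken as the four corners of a unit square plus (0, 3), matching the listing above):

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

# Equally likely points, AWGN with per-dimension variance sigma^2.
X = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1], [0, 3]], dtype=float)
M = len(X)

def union_bound(sigma):
    # P_e <= (1/M) sum_i sum_{j != i} Q(d_ij / (2 sigma))
    return sum(Q(np.linalg.norm(X[i] - X[j]) / (2 * sigma))
               for i in range(M) for j in range(M) if j != i) / M

def nnub(sigma):
    # NNUB: average nearest-neighbor count times Q(dmin / (2 sigma))
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    dmin = np.min(D[D > 0])
    Ne = np.mean([np.sum(np.isclose(D[i], dmin)) for i in range(M)])
    return Ne * Q(dmin / (2 * sigma)), dmin, Ne

# Part c: SNR = Ebar_x / sigma^2 = 14 dB
Ex_bar = np.mean(np.sum(X**2, axis=1)) / 2     # = 1.7
sigma = sqrt(Ex_bar / 10**1.4)
print(nnub(sigma))          # dmin = 2, average Ne = 1.6
print(union_bound(sigma))
```

The union bound sums over every signal pair, so it always sits at or above the NNUB, which keeps only the minimum-distance terms.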
1.22 Basic QAM Design - Midterm 1996
Either square or cross QAM can be used on an AWGN channel with SNR = 30.2 dB and symbol rate $1/T = 10^6$.
a. Select a QAM constellation and specify a corresponding integer number of bits per symbol, $b$, for a modem with the highest data rate such that $P_e < 10^{-6}$.
b. Compute the data rate for part a.
c. Repeat part a if $P_e < 2 \times 10^{-7}$ is the new probability of error constraint.
d. Compute the data rate for part c.
1.23 Basic Detection - One shot or Two? - Final 1996
A 2B1Q signal with $d = 2$ is sent two times in immediate succession through an AWGN channel with transmit filter $p(t)$, which is a scaled version of the basis function. At all other symbol times, a symbol value of zero is sent. The symbol period for one of the 2B1Q transmissions is $T = 1$, and the transmit filter is $p(t) = 1$ for $0 < t < 2$ and $p(t) = 0$ elsewhere. At both symbol periods, any one of the 4 messages is equally likely, and the two successive messages are independent. The WGN has power spectral density $\frac{N_0}{2} = .5$.
a. Draw an optimum (ML) basis detector and enumerate a signal constellation. (Hint: use basis functions.) (3 pts)
b. Find $d_{min}$. (2 pts)
c. Compute $\bar{N}_e$, counting only those neighbors that are $d_{min}$ away. (2 pts)
d. Approximate $P_e$ for your detector. (3 pts)
Figure 1.57: Discrete Memoryless Channel
1.24 Discrete Memoryless Channel - Midterm 1994
Given a channel with $p_{y|x}$ as shown in Figure 1.57 ($y \in \{0, 1, 2\}$ and $x \in \{0, 1, 2\}$), let $p_1 = .05$.
a. For $p_x(i) = 1/3$, find the optimum detection rule.
b. Find $P_e$ for part a.
c. Find $P_e$ for the MAP detector if $p_x(0) = p_x(1) = 1/6$ and $p_x(2) = 2/3$.
1.25 Detection with Uniform Noise - Midterm 1995
A one-dimensional additive noise channel, $y = x + n$, has uniform noise distribution
$$p_n(v) = \begin{cases} \frac{1}{L} & |v| \leq \frac{L}{2} \\ 0 & |v| > \frac{L}{2} \end{cases}$$
where $L/2$ is the maximum noise magnitude. The input $x$ has a binary antipodal constellation with equally likely input values $x = \pm 1$. The noise is independent of $x$.
a. Design an optimum detector (showing decision regions is sufficient). (2 pts)
b. For what value of $L$ is $P_e < 10^{-6}$? (1 pt)
c. Find the SNR (as a function of $L$). (2 pts)
d. Find the minimum SNR that ensures error-free transmission. (2 pts)
e. Repeat part d if 4-level PAM is used instead. (2 pts)
1.26 Can you design or just use formulae? - Midterm 1995
32 CR QAM modulation is used for transmission on an AWGN with $\frac{N_0}{2} = .001$. The symbol rate is $1/T = 400$ kHz.
a. Find the data rate $R$.
b. What SNR is required for $P_e < 10^{-7}$? (Ignore $N_e$.)
c. In actual transmitter design, the analog filter rarely is normalized and has some gain/attenuation, unlike a basis function. Thus, the average power in the constellation is calibrated to the actual power measured at the analog input to the channel. Suppose $\bar{\mathcal{E}}_x = 1$ corresponds to 0 dBm (1 milliwatt); then what is the power of the signals entering the transmission channel for the 32CR in this problem with $P_e < 10^{-7}$?
d. The engineer under stress. Without increasing transmit power or changing $\frac{N_0}{2} = .001$, design a QAM system that achieves the same $P_e$ at 3.2 Mbps on this same AWGN.
1.27 QAM Design - Midterm 1997
A QAM system with symbol rate $1/T = 10$ MHz operates on an AWGN channel. The SNR is 24.5 dB and a $P_e < 10^{-6}$ is desired.
a. Find the largest constellation with integer $b$ for which $P_e < 10^{-6}$. (2 pts)
b. What is the data rate for your design in part a? (2 pts)
c. How much more transmit power is required (with fixed symbol rate at 10 MHz), in dB, for the data rate to be increased to 60 Mbps? ($P_e < 10^{-6}$) (2 pts)
d. With SNR = 24 dB, a reduced-rate alternative mode is enabled to accommodate up to 9 dB of margin for temporary increases in the white noise amplitude. What is the data rate in this alternative 9 dB-margin mode at the same $P_e < 10^{-6}$? (2 pts)
e. What is the largest QAM (with integer $b$) data rate that can be achieved with the same power, $\mathcal{E}_x/T$, as in part d, but with $1/T$ possibly altered? (2 pts)
1.28 Basic Detection.
A vector equivalent of a channel leads to the one-dimensional real system $y = x + n$, where $n$ is exponentially distributed with probability density function
$$p_n(u) = \frac{1}{\sigma\sqrt{2}}\, e^{-\frac{\sqrt{2}\,|u|}{\sigma}} \quad \text{for all } u$$   (1.331)
with zero mean and variance $\sigma^2$. This system uses binary antipodal signaling (with equally likely inputs) with distance $d$ between the points. We define a function
$$\tilde{Q}(x) = \begin{cases} \int_x^{\infty} \frac{1}{\sqrt{2}}\, e^{-\sqrt{2}\,u}\, du = \frac{1}{2}\, e^{-\sqrt{2}\,x} & \text{for } x \geq 0 \\[4pt] 1 - \int_{|x|}^{\infty} \frac{1}{\sqrt{2}}\, e^{-\sqrt{2}\,u}\, du = 1 - \frac{1}{2}\, e^{-\sqrt{2}\,|x|} & \text{for } x \leq 0 \end{cases}$$   (1.332)
a. Find the values $\tilde{Q}(-\infty)$, $\tilde{Q}(0)$, $\tilde{Q}(\infty)$, $\tilde{Q}(\sqrt{10})$. (2 pts)
b. For what $x$ is $\tilde{Q}(x) = 10^{-6}$? (1 pt)
c. Find an expression for the probability of symbol error $P_e$ in terms of $d$, $\sigma$, and the function $\tilde{Q}$. (2 pts)
d. Defining the SNR as $SNR = \frac{\bar{\mathcal{E}}_x}{\sigma^2}$, find a new expression for $P_e$ in terms of $\tilde{Q}$ and this SNR. (2 pts)
e. Find a general expression relating $P_e$ to SNR, $M$, and $\tilde{Q}$ for PAM transmission. (2 pts)
f. What SNR is required for transmission at $\bar{b} = 1, 2,$ and 3 when $P_e = 10^{-6}$? (2 pts)
g. Would you prefer Gaussian or exponential noise if you had a choice? (1 pt)
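A minimal Python sketch of $\tilde{Q}$ as defined in (1.332), useful for checking parts a and b:

```python
from math import sqrt, exp, log

def Qt(x):
    # Tail function of the zero-mean, unit-variance Laplacian density
    # p(u) = (1/sqrt(2)) exp(-sqrt(2)|u|):  Qt(x) = P{n > x}, per (1.332).
    if x >= 0:
        return 0.5 * exp(-sqrt(2) * x)
    return 1 - 0.5 * exp(-sqrt(2) * abs(x))

print(Qt(0))               # 0.5
print(Qt(2) + Qt(-2))      # 1.0, since Qt(-x) = 1 - Qt(x)

# Part b: solve Qt(x) = 1e-6  =>  x = -ln(2e-6)/sqrt(2)
x6 = -log(2e-6) / sqrt(2)
print(x6, Qt(x6))          # ~9.28, 1e-6
```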
1.29 QAM Design - Midterm 1998
QAM transmission is to be used on an AWGN channel with SNR = 27.5 dB at a symbol rate of $1/T = 5$ MHz, used throughout this problem. You've been hired to design the transmission system. The desired probability of symbol error is $\bar{P}_e \leq 10^{-6}$.
a. (2 pts) List two basis functions that you would use for modulation.
b. (2 pts) Estimate the highest bit rate, $\bar{b}$, and data rate, $R$, that can be achieved with QAM with your design.
c. (1 pt) What signal constellation are you using?
d. (3 pts) By about how much (in dB) would $\bar{\mathcal{E}}_x$ need to be increased to have 5 Mbps more data rate at the same probability of error? Does your answer change for $\mathcal{E}_x$ or for $P_x$?
1.30 Basic Detection - Midterm 2000 - 10 pts
QAM transmission is used on an AWGN channel with $\frac{N_0}{2} = .01$. The transmitted signal constellation points for the QAM signal are given by $\begin{bmatrix} -3/2 \\ -1/2 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, with each constellation point equally likely.
a. (1 pt) Find $M$ (message-set size) and $\bar{\mathcal{E}}_x$ (energy per dimension) for this constellation.
b. (2 pts) Draw the constellation with decision regions indicated for an ML detector.
c. (2 pts) Find $N_e$ and $d_{min}$ for this constellation.
d. (2 pts) Compute a NNUB value for $\bar{P}_e$ for the ML detector of part b.
e. (1 pt) Determine $\bar{b}$ for this constellation (the value may be non-integer).
f. (2 pts) For the same $\bar{b}$ as part e, how much better in decibels is the constellation of this problem than SQ QAM?
1.31 Basic Detection - Midterm 2001 - 7 pts
The QAM radial constellation in Figure 1.58 is used for transmission on an AWGN with $\sigma^2 = .05$. All constellation points are equally likely.
Figure 1.58: Constellation for 8 points.
a. (2 pts) Find $\mathcal{E}_x$ and $\bar{\mathcal{E}}_x$ for this constellation.
b. (3 pts) Find $\bar{b}$, $d_{min}$, and $N_e$ for this constellation.
c. (2 pts) Find $P_e$ and $\bar{P}_e$ with the NNUB for an ML detector with this constellation.
Figure 1.59: Constellation for Problem 1.32.
1.32 A concatenated QAM Constellation - 2002 Midterm - 15 pts
A set of 4 orthogonal basis functions $\phi_1(t), \phi_2(t), \phi_3(t), \phi_4(t)$ uses the constellation of Figure 1.59 in both the first 2 dimensions and again in the second 2 dimensions. The constellation points are restricted in that an E (even) point may only follow an E point, and an O (odd) point can only follow an O point. For instance, the 4-dimensional point $[+1\ +1\ -1\ -1]$ is permitted to occur, but the point $[+1\ +1\ -1\ +1]$ cannot occur.
a. (2 pts) Enumerate all $M$ points as ordered 4-tuples.
b. (3 pts) Find $b$, $\bar{b}$, and the number of bits/Hz or bps/Hz.
c. (1 pt) Find $\mathcal{E}_x$ and $\bar{\mathcal{E}}_x$ (energy per dimension) for this constellation.
d. (2 pts) Find $d_{min}$ for this constellation.
e. (2 pts) Find $N_e$ and $\bar{N}_e$ for this constellation (you may elect to include only points at minimum distance in computing nearest neighbors).
f. (2 pts) Find $P_e$ and $\bar{P}_e$ for this constellation using the NNUB if used on an AWGN with $\sigma^2 = 0.1$.
g. (3 pts) Compare this 4-dimensional constellation fairly with 4QAM (a fair comparison requires increasing the number of points in the QAM constellation to 6 to get the same data rate).
1.33 Detection Fundamentals - 2003 Midterm - 15 pts
A random variable $x_1$ takes the 2 values $\pm 1$ with equal probability, independently of a second random variable $x_2$ that takes the values $\pm 2$, also with equal probability. The two random variables are summed to $x = x_1 + x_2$, and $x$ can only be observed after zero-mean Gaussian noise of variance $\sigma^2 = .1$ is added; that is, $y = x + n$ is observed, where $n$ is the noise.
a. (1 pt) What are the values that the discrete random variable $x$ takes, and what are their probabilities?
b. (1 pt) What are the means and variances of $x$ and $y$?
c. (2 pts) What is the lowest probability of error in detecting $x$ given only an observation of $y$? Draw the corresponding decision regions.
d. (1 pt) Relate the value of $x$ with a table to the values of $x_1$ and $x_2$. Explain why this is called a noisy DAC channel.
e. (1 pt) What is the (approximate) lowest probability of error in detecting $x_1$ given only an observation of $y$?
f. (1 pt) What is the (approximate) lowest probability of error in detecting $x_2$ given only an observation of $y$?
g. Suppose additional binary independent random variables are added so that the two bipolar values for $x_u$ are $\pm 2^{u-1}$, $u = 1, \ldots, U$. Which $x_u$ has the lowest probability of error for any AWGN, and what is that $P_e$? (1 pt)
h. For $U = 2$, what is the lowest probability of error in detecting $x_1$ given an observation of $y$ and a correct observation of $x_2$? (1 pt)
i. For $U = 2$, what is the lowest probability of error in detecting $x_2$ given an observation of $y$ and a correct observation of $x_1$? (1 pt)
j. What is the lowest probability of error in any of parts e through i if $\sigma^2 = 0$? What does this mean in terms of the DAC? (1 pt)
k. Derive a general expression for the probability of error for all bits $u = 1, \ldots, U$, where $x = x_1 + x_2 + \cdots + x_U$, in AWGN with variance $\sigma^2$, as in part g. (2 pts)
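Part c is 4-PAM detection in disguise; a Python sketch of the resulting error probability (using the Gaussian Q-function):

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

sigma = sqrt(0.1)
# x = x1 + x2 takes the 4-PAM values {-3, -1, 1, 3}, each with probability
# 1/4, and neighboring levels are d = 2 apart.  For minimum-distance (ML)
# detection of x, the two inner levels each have 2 nearest neighbors and
# the two outer levels have 1, giving the standard PAM result
# Pe = 2(1 - 1/M) Q(d/(2 sigma)) with M = 4, d = 2.
Pe_x = 1.5 * Q(1 / sigma)
print(Pe_x)    # about 1.2e-3 for sigma^2 = 0.1
```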
1.34 Honey Comb QAM - 2005 Midterm - 15 pts
The QAM constellation below is used for transmission on an AWGN with symbol rate 10 MHz and a carrier frequency of 100 MHz.
Figure 1.60: Constellation for Problem 1.34.
Each of the solid constellation symbol possibilities is at the center of a perfect hexagon (all sides are equal), and the distance to any of the closest sides of the hexagon is $\frac{d}{2}$. The 6 empty points represent a possible message also, but each is used only every 6 symbol instants, so that, for instance, the point labelled 0 is a potential message only on symbol instants that are integer multiples of 6. The 1 point can only be transmitted on symbol instants that are integer multiples of 6 plus one, the 2 point only on symbol instants that are integer multiples of 6 plus two, and so on. At any symbol instant, any of the points possible on that symbol are equally likely.
a. What is the number of messages that can possibly be transmitted on any single symbol? What are $b$ and $\bar{b}$? (3 pts)
b. What is the data rate? (1 pt)
c. Draw the decision boundaries for time 0 of an ML receiver. (2 pts)
d. What is $d_{min}$? (1 pt)
e. What are $\mathcal{E}_x$ and $\bar{\mathcal{E}}_x$ for this constellation in terms of $d$? (3 pts)
f. What is the average number of nearest neighbors? (1 pt)
g. Determine the NNUB expression that tightly upper bounds $\bar{P}_e$ for this constellation in terms of SNR. (2 pts)
h. Compare this constellation fairly to Cross QAM transmission. (1 pt)
i. Describe an equivalent ML receiver that uses time-invariant decision boundaries and a constant decision device with a simple preprocessor to the decision device. (1 pt)
Appendix A
Gram-Schmidt Orthonormalization Procedure
This appendix illustrates the construction of a set of orthonormal basis functions $\phi_n(t)$ from a set of modulated waveforms $x_i(t)$, $i = 0, \ldots, M-1$. The process for doing so, and achieving minimal dimensionality, is called Gram-Schmidt orthonormalization.
Step 1:
Find a signal in the set of modulated waveforms with nonzero energy and call it $x_0(t)$. Let
$$\phi_1(t) \triangleq \frac{x_0(t)}{\sqrt{\mathcal{E}_{x_0}}},$$   (A.1)
where $\mathcal{E}_x = \int_{-\infty}^{\infty} [x(t)]^2\, dt$. Then $x_0 = \left[ \sqrt{\mathcal{E}_{x_0}}\ 0\ \ldots\ 0 \right]^*$.
Step $i$, for $i = 2, \ldots, M$:
Compute $x_{i-1,n}$ for $n = 1, \ldots, i-1$ (where $x_{i-1,n} \triangleq \int_{-\infty}^{\infty} x_{i-1}(t)\, \phi_n(t)\, dt$).
Compute
$$\theta_i(t) \triangleq x_{i-1}(t) - \sum_{n=1}^{i-1} x_{i-1,n}\, \phi_n(t)$$   (A.2)
If $\theta_i(t) = 0$, then set $\phi_i(t) = 0$ and skip to step $i+1$.
If $\theta_i(t) \neq 0$, compute
$$\phi_i(t) = \frac{\theta_i(t)}{\sqrt{\mathcal{E}_i}},$$   (A.3)
where $\mathcal{E}_i = \int_{-\infty}^{\infty} [\theta_i(t)]^2\, dt$. Then $x_{i-1} = \left[ x_{i-1,1}\ \ldots\ x_{i-1,i-1}\ \sqrt{\mathcal{E}_i}\ 0\ \ldots\ 0 \right]^*$.
Final Step:
Delete all components $n$ for which $\phi_n(t) = 0$ to achieve the minimum-dimensional basis function set, and reorder indices appropriately.
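A discrete-time sketch of the procedure in Python (waveforms are represented by sample vectors, so the integrals above become dot products; zero-residual components are simply skipped rather than kept and later deleted):

```python
import numpy as np

def gram_schmidt(waveforms, tol=1e-10):
    """Rows of `waveforms` are sampled x_i(t).  Returns orthonormal basis
    rows phi_n and symbols such that waveforms = symbols @ basis."""
    basis = []
    for x in waveforms:
        theta = x.astype(float).copy()
        for phi in basis:
            # subtract the projection x_{i-1,n} phi_n(t), as in (A.2)
            theta -= (x @ phi) * phi
        energy = theta @ theta
        if energy > tol:
            # normalize the nonzero residual, as in (A.3)
            basis.append(theta / np.sqrt(energy))
    basis = np.array(basis)
    symbols = waveforms @ basis.T   # coordinates of each x_i in the basis
    return basis, symbols

# Example: three waveforms that span only 2 dimensions (row3 = row1 + row2).
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
basis, symbols = gram_schmidt(X)
print(len(basis))                          # 2 basis functions
print(np.allclose(symbols @ basis, X))     # exact reconstruction
```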
Appendix B
The Q Function
The Q function is used to evaluate probability of error in digital communication. It is the integral of the zero-mean, unit-variance Gaussian density from some specified argument to $\infty$:
Definition B.0.1 (Q Function)
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-\frac{u^2}{2}}\, du$$   (B.1)
The integral cannot be evaluated in closed form for arbitrary $x$. Instead, see Figures B.1 and B.2 for a graph of the function that can be used to get numerical values. Note that the argument is plotted in dB ($20\log_{10}(x)$). Note $Q(-x) = 1 - Q(x)$, so we need only plot $Q(x)$ for positive arguments.
We state without proof the following bounds:
$$\left(1 - \frac{1}{x^2}\right) \frac{e^{-x^2/2}}{\sqrt{2\pi x^2}} \leq Q(x) \leq \frac{e^{-x^2/2}}{\sqrt{2\pi x^2}}$$   (B.2)
The upper bound in (B.2) is easily seen to be a very close approximation for $x \geq 3$.
Computation of the probability that a Gaussian random variable $u$ with mean $m$ and variance $\sigma^2$ exceeds some value $d$ then uses the Q-function as follows:
$$P\{u \geq d\} = Q\!\left(\frac{d - m}{\sigma}\right)$$   (B.3)
The Q-function appears in Figures B.3, B.1, and B.2 for very low SNR (-10 to 0 dB), low SNR (0 to 10 dB), and high SNR (10 to 16 dB) using a very accurate approximation (less than 1% error) formula from the recent book by Leon-Garcia:
$$Q(x) \approx \left[ \frac{1}{\left(1 - \frac{1}{\pi}\right)x + \frac{1}{\pi}\sqrt{x^2 + 2\pi}} \right] \frac{e^{-x^2/2}}{\sqrt{2\pi}}.$$   (B.4)
For the mathematician at heart, $Q(x) = 0.5\,\mathrm{erfc}(x/\sqrt{2})$, where erfc is known as the complementary error function.
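A Python sketch comparing the exact Q (via the standard library's `erfc`) with approximation (B.4); the tolerance in the check is set slightly above 1% to cover arguments near 0.5, where the relative error of (B.4) peaks:

```python
from math import erfc, exp, pi, sqrt

def Q(x):
    # exact: Q(x) = 0.5 erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

def Q_approx(x):
    # Leon-Garcia approximation (B.4); exact at x = 0
    return (1.0 / ((1 - 1 / pi) * x + (1 / pi) * sqrt(x * x + 2 * pi))
            ) * exp(-x * x / 2) / sqrt(2 * pi)

for x in (0.0, 1.0, 3.0, 5.0):
    print(x, Q(x), Q_approx(x))
```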
Figure B.1: Low SNR Q-Function Values
Figure B.2: High SNR Q-Function Values
Figure B.3: Very Low SNR Q-Function Values
Bibliography
[1] J.M. Wozencraft and I.M. Jacobs. Principles of Communication Engineering. Wiley, New York,
1965.
[2] B.W. Lindgren. Statistical Theory, Third Edition. Macmillan, New York, 1968.