
DC Notes Before Mid



Schematic Diagram:

(Schematic: the input bit stream m(t), e.g., 0, 1, 1, 0, 0, 1, passes through the ENCODER and DIGITAL MODULATION blocks of the transmitter, then through the CHANNEL (which may introduce bit errors), and finally through the DEMODULATOR and DECODER of the receiver, producing the recovered bit stream m̂(t), e.g., 0, 1, 1, 0, 1, 1.)

How to design various modules? How to transmit? How to receive?

At transmitter,

1) Transmit Filter

2) Transmit Power

3) Modulation Techniques

At receiver,

1) Receive Filter

2) Probability of Error

Transmitted signal corresponds to a series of symbols.

Information bits are mapped to symbols of the digital modulation constellation:

Bit 0 → +A
Bit 1 → −A

This is BPSK (Binary Phase Shift Keying); the mapping operation is termed digital modulation.

Bit stream:                    0   1   1   0   0   1
Digitally modulated symbols:  +A  −A  −A  +A  +A  −A


Transmit the digitally modulated symbols using a pulse

pT(t) = 1 if |t| < T/2, and 0 otherwise

(Figure: the rectangular pulse pT(t), of unit height over −T/2 ≤ t ≤ T/2, and the transmitted pulse train a0 pT(t), a1 pT(t − T), a2 pT(t − 2T), a3 pT(t − 3T), a4 pT(t − 4T), with amplitudes ±A.)

Therefore, the kth symbol is transmitted as ak pT(t − kT), where ak is the kth symbol, pT is a pulse of duration T, and pT(t − kT) is that pulse shifted by kT.


The net transmitted digital communication signal is x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT). This is the structure of a typical transmitted signal in a digital communication system.

Spectrum of the transmitted signal

x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)

The kth bit can be 0 or 1, so ak is correspondingly +A or −A. Since the bits occur randomly, ak is a binary random variable.


pT(t) = 1 if |t| < T/2, and 0 otherwise

pT(t) ←F→ T sinc(FT) = T sin(πFT)/(πFT)

Since ak is random, P(ak = A) = 1/2 and P(ak = −A) = 1/2.

E{ak} = A P(ak = A) + (−A) P(ak = −A) = A/2 − A/2 = 0

Also assume that the symbols ak are i.i.d. (independent and identically distributed). Then E{ak am} = E{ak}E{am} = 0 for k ≠ m, i.e., the symbols are uncorrelated random variables.


x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)

E{x(t)} = E{ Σ_{k=−∞}^{∞} ak pT(t − kT) }
        = Σ_{k=−∞}^{∞} E{ak} pT(t − kT)
        = 0


This implies that the average value of the transmitted signal x(t) is zero. The Fourier transform of x(t), X(f), is given as

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt

E{X(f)} = E{ ∫_{−∞}^{∞} x(t) e^{−j2πft} dt }
        = ∫_{−∞}^{∞} E{x(t)} e^{−j2πft} dt
        = 0

This does not mean that the spectrum is zero, only that the average value of the spectrum is zero.


x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)

How to measure the spectral content of a random signal? “Power Spectral Density” - Spectral Distri-

bution of power of a random signal.

We have to first compute the auto-correlation Rxx (τ ) = E{x(t)x(t + τ )},


Auto-correlation ←F→ PSD

E{x(t)x(t + τ)} = E{ Σ_{k=−∞}^{∞} ak pT(t − kT) Σ_{m=−∞}^{∞} am pT(t + τ − mT) }
               = Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} E{ak am} pT(t − kT) pT(t + τ − mT)

E{ak am} = E{ak}E{am} = 0 for k ≠ m, since the symbols are zero-mean i.i.d.

For k = m,

E{x(t)x(t + τ)} = Σ_{k=−∞}^{∞} E{ak²} pT(t − kT) pT(t + τ − kT)


E{ak²} = A² P(ak = A) + (−A)² P(ak = −A) = A²/2 + A²/2 = A²

which is the power of the data symbol, denoted Pd.

E{x(t)x(t + τ)} = A² Σ_{k=−∞}^{∞} pT(t − kT) pT(t + τ − kT)

Since this depends on t (and not only on τ), x(t) is not wide-sense stationary (WSS).



To obtain a stationary model, include a random delay to:

x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT − to)

where to is a random delay, uniformly distributed over [0, T].

Taking the average (expected value) also with respect to to,

E{x(t)x(t + τ)} = A² Σ_{k=−∞}^{∞} E{ pT(t − kT − to) pT(t + τ − kT − to) }
               = A² Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} fTo(to) pT(t − kT − to) pT(t + τ − kT − to) dto
               = A² Σ_{k=−∞}^{∞} ∫_0^T (1/T) pT(t − kT − to) pT(t + τ − kT − to) dto

Substitute to + kT = t̃o, so dto = dt̃o:

E{x(t)x(t + τ)} = A² Σ_{k=−∞}^{∞} ∫_{kT}^{(k+1)T} (1/T) pT(t − t̃o) pT(t + τ − t̃o) dt̃o
               = (Pd/T) Σ_{k=−∞}^{∞} ∫_{kT}^{(k+1)T} pT(t − t̃o) pT(t + τ − t̃o) dt̃o
               = (Pd/T) ∫_{−∞}^{∞} pT(t − t̃o) pT(t + τ − t̃o) dt̃o

Substitute t + τ − t̃o = t′o, so −dt̃o = dt′o:

E{x(t)x(t + τ)} = (Pd/T) ∫_{−∞}^{∞} pT(t′o) pT(t′o − τ) dt′o

This does not depend on t; it depends only on τ. Hence, x(t) is WSS.


Rxx(τ) = E{x(t)x(t + τ)}

∫_{−∞}^{∞} pT(t′o) pT(t′o − τ) dt′o = RpTpT(τ), the auto-correlation of the pulse pT(t). Therefore,

Rxx(τ) = E{x(t)x(t + τ)} = (Pd/T) RpTpT(τ)

The autocorrelation of the signal x(t) depends on the autocorrelation of the pulse pT(t):

Rxx(τ) = (Pd/T) RpTpT(τ)  ←F→  Sxx(f) = (Pd/T) SpTpT(f)

where Sxx(f) is the power spectral density (PSD) of x(t) and SpTpT(f) is the energy spectral density of the pulse.

RpTpT(τ) = ∫_{−∞}^{∞} pT(t) pT(t − τ) dt, and SpTpT(f) = ∫_{−∞}^{∞} RpTpT(τ) e^{−j2πfτ} dτ

SpTpT(f) = |P(f)|², where P(f) = ∫_{−∞}^{∞} pT(t) e^{−j2πft} dt is the Fourier transform of the pulse pT(t).

For a wide-sense stationary process, the PSD is the Fourier transform of the auto-correlation: Sxx(f) = ∫_{−∞}^{∞} Rxx(τ) e^{−j2πfτ} dτ. Hence

Sxx(f) = (Pd/T) SpTpT(f) = (Pd/T) |P(f)|²

The power spectral density of x(t) is proportional to the energy spectral density of the pulse pT(t).



Consider p(t) = pT (t),

PT(f) = ∫_{−∞}^{∞} pT(t) e^{−j2πft} dt
      = ∫_{−T/2}^{T/2} e^{−j2πft} dt
      = T sinc(fT)
      = T sin(πfT)/(πfT)

|PT(f)|² = T² sinc²(fT), and the power spectral density of the transmitted signal x(t) is

Sxx(f) = (Pd/T) SpTpT(f) = (Pd/T) |PT(f)|² = (Pd/T) T² sinc²(fT) = Pd T sinc²(fT)
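As a quick numerical check of this result (a minimal sketch, assuming Python with NumPy/SciPy is available; the amplitude, symbol duration, and record length below are arbitrary illustration values, not from the notes), one can generate a random ±A pulse train and compare its estimated PSD with Pd T sinc²(fT):

    import numpy as np
    from scipy.signal import welch

    A, T = 1.0, 1e-3            # symbol amplitude and symbol duration (arbitrary)
    ns = 20                     # samples per symbol
    fs = ns / T                 # sampling rate
    num_syms = 20000

    # Random +/-A symbols, each held for one symbol period (rectangular pulse pT)
    rng = np.random.default_rng(0)
    a = A * (2 * rng.integers(0, 2, num_syms) - 1)
    x = np.repeat(a, ns)

    # Two-sided PSD estimate of the transmitted waveform
    f, Sxx_est = welch(x, fs=fs, nperseg=4096, return_onesided=False)

    # Analytical PSD: Sxx(f) = Pd * T * sinc^2(f T), with Pd = A^2
    Pd = A**2
    Sxx_theory = Pd * T * np.sinc(f * T)**2   # np.sinc(u) = sin(pi u)/(pi u)

    print(Sxx_est[:3])       # estimated values near f = 0
    print(Sxx_theory[:3])    # theoretical values near f = 0 (about Pd*T)

The two printed rows should agree to within the estimation noise of the Welch averaging.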

The digital communication channel is the medium through which the signal propagates from the transmitter to the receiver in a communication system. Examples: telephone lines, coaxial cable, wireless channels (AM, FM, cellular, Bluetooth, Wi-Fi).

x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)

SIMPLE CHANNEL MODEL:

AWGN - Additive White Gaussian Noise

y(t) = x(t) + n(t), where x(t): transmitted signal, y(t): received signal, and n(t): noise.

Noise adds to signal. This is termed as additive noise.

(Block diagram: the transmitted signal x(t) passes through the channel, the noise n(t) is added, and the received signal is y(t) = x(t) + n(t).)

N(t) is random and a function of time, i.e., a random process. The most popular noise model is the Gaussian noise process: noise is modeled as a Gaussian random process.

Gaussian random process: N(t) is a Gaussian random process if its statistics of all orders are jointly Gaussian.


Consider k noise samples N(t1), N(t2), ..., N(tk) taken at time instants t1, t2, ..., tk, and let F_{N(t1),...,N(tk)}(n1, n2, ..., nk) denote their joint distribution. If this joint distribution is jointly Gaussian, i.e., follows a multivariate Gaussian density, for every set {t1, t2, ..., tk} and for every k, then N(t) is termed a Gaussian random process.

Noise + Gaussian Random Process = Gaussian Noise.

Gaussian noise process: if the noise process is wide-sense stationary, it is also strict-sense stationary. In general, SSS =⇒ WSS but WSS does not imply SSS; only for a Gaussian random process does WSS =⇒ SSS.

N(t) is WSS if E{N(t)} = µ for all t, and E{N(t)N(t + τ)} = RNN(τ) for all t, i.e., the auto-correlation depends only on the time difference τ.

WHITE NOISE: Noise typically has very low correlation.

(A noise process waveform is very erratic, whereas a typical signal is smooth and has a high level of temporal correlation.)

RNN(τ) = E{N(t)N(t + τ)} = (η/2) δ(τ)

E{N(t)N(t + τ)} = 0 for τ ≠ 0: noise samples at any two different time instants t and t + τ are uncorrelated. At τ = 0 the auto-correlation is an impulse of strength η/2.

The auto-correlation RNN(τ) = (η/2) δ(τ)  ←F→  SNN(f) = η/2

(Figure: RNN(τ) is the impulse (η/2)δ(τ); the correlation is zero for τ ≠ 0. The corresponding PSD SNN(f) is flat at the level η/2.)


The power spectral density is flat, power is spread uniformly over all frequencies (power is uniformly

distributed over all frequency components similar to ‘white light’). Hence, it is termed as “White Noise”

This is one of the most popular and practically applicable noise models for a digital communication system.

(Block diagram: x(t) → channel, n(t) added → y(t).)

If the noise is additive, white, and Gaussian, the additive channel is an AWGN channel.

Digital Communication Receiver:

(Receiver block diagram: y(t) = x(t) + n(t) is passed through a receive filter, an LTI system with impulse response h(t), to give r(t), which is then sampled at t = T to give r(T).)

r(t) = y(t) ∗ h(t) = ∫_{−∞}^{∞} h(τ) y(t − τ) dτ


y(t) = x(t) + n(t), where x(t) = ao p(t), ao is the symbol, and p(t) is the pulse. Thus y(t) = ao p(t) + n(t).

r(t) = ∫_{−∞}^{∞} {ao p(t − τ) + n(t − τ)} h(τ) dτ

Sampled at t = T,

r(T) = ∫_{−∞}^{∞} {ao p(T − τ) + n(T − τ)} h(τ) dτ
     = ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ  +  ∫_{−∞}^{∞} h(τ) n(T − τ) dτ
       (signal component)                (noise component)

How do we design h(t) to maximize the signal power and minimize the noise power?

SNR = Signal Power / Noise Power, which is the fundamental quantity for analyzing the performance of a communication system.
Signal term: ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ

Signal Power = E{ | ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ |² }
             = E{|ao|²} ( ∫_{−∞}^{∞} p(T − τ) h(τ) dτ )²
             = Pd ( ∫_{−∞}^{∞} p(T − τ) h(τ) dτ )²

since p(t) and h(t) are fixed (deterministic), so the expectation acts only on ao.

Signal Power = Pd ( ∫_{−∞}^{∞} p(T − τ) h(τ) dτ )²
Noise term: ñ = ∫_{−∞}^{∞} h(τ) n(T − τ) dτ, where ñ is the noise after passing through the receive filter and sampling at t = T.

Assuming n(t) is real, E{|ñ|²} = E{ñ²}.

E{n(t)} = 0, and Rnn(τ) = E{n(t)n(t + τ)} = (η/2) δ(τ)

(Block diagram: n(t) → filter h(t) (LTI system) → ñ(t), sampled at t = T to give ñ(T) = ñ.)


ñ(t) = ∫_{−∞}^{∞} h(τ) n(t − τ) dτ, and ñ = ñ(T) is a Gaussian random variable since it is a sample of a Gaussian random process: any linear transformation of a Gaussian process is a Gaussian process, so ñ(t) is also a Gaussian random process, characterized by its mean and variance.

E{ñ} = E{ ∫_{−∞}^{∞} h(τ) n(T − τ) dτ }
     = ∫_{−∞}^{∞} h(τ) E{n(T − τ)} dτ
     = 0

since n(t) is a zero-mean Gaussian random process.

E{ñ²} = E{ ∫_{−∞}^{∞} h(τ) n(T − τ) dτ ∫_{−∞}^{∞} h(τ̃) n(T − τ̃) dτ̃ }
      = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ) h(τ̃) E{n(T − τ) n(T − τ̃)} dτ dτ̃
      = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ) h(τ̃) (η/2) δ(τ − τ̃) dτ dτ̃
      = (η/2) ∫_{−∞}^{∞} h²(τ) dτ

If h(τ) is not real, E{ñ²} = (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ.

E{ñ²} = (η/2) Eh, where Eh is the energy of the impulse response of the filter.
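A short discrete-time sketch of this result (an approximation under stated assumptions: Python with NumPy, white noise simulated with per-sample variance (η/2)·fs so that its PSD is flat at η/2, and an arbitrary exponential filter h(t)):

    import numpy as np

    rng = np.random.default_rng(0)
    eta = 2.0                        # noise PSD level is eta/2 (arbitrary)
    fs = 500.0                       # sampling rate used to approximate the integrals
    dt = 1.0 / fs
    t = np.arange(0.0, 0.5, dt)

    h = np.exp(-5.0 * t)             # an arbitrary illustrative impulse response h(t)
    Eh = np.sum(h**2) * dt           # Eh = integral of h^2(t) dt

    # Discrete white noise approximating PSD eta/2: per-sample variance (eta/2)*fs
    trials = 10000
    n = rng.normal(0.0, np.sqrt(eta / 2 * fs), size=(trials, t.size))

    # n_tilde = integral h(tau) n(T - tau) dtau, approximated as a Riemann sum
    n_tilde = n @ h * dt

    print(np.var(n_tilde))           # empirical variance of the filtered, sampled noise
    print(eta / 2 * Eh)              # theoretical value (eta/2) * Eh

The two printed numbers should agree to within a few percent for this many trials.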

(Receiver block diagram: y(t) = x(t) + n(t) → filter h(t) (LTI system) → r(t) → sampled at t = T → r(T).)

r(t) = ∫_{−∞}^{∞} h(τ) {x(t − τ) + n(t − τ)} dτ

Signal = ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ ;  Noise = ∫_{−∞}^{∞} n(T − τ) h(τ) dτ

SNR = Signal Power / Noise Power = Pd ( ∫_{−∞}^{∞} p(T − τ) h(τ) dτ )² / ( (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ )


We have to find the filter h(t) which maximizes the SNR.

Cauchy-Schwarz inequality: ( ∫_{−∞}^{∞} u(t) v(t) dt )² ≤ ∫_{−∞}^{∞} u²(t) dt × ∫_{−∞}^{∞} v²(t) dt

Using the Cauchy-Schwarz inequality,

SNR = Pd ( ∫_{−∞}^{∞} p(T − τ) h(τ) dτ )² / ( (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ )
    ≤ Pd ∫_{−∞}^{∞} p²(T − τ) dτ ∫_{−∞}^{∞} |h(τ)|² dτ / ( (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ )
    = Pd ∫_{−∞}^{∞} p²(T − τ) dτ / (η/2)

The SNR is maximized when p(T − τ) ∝ h(τ), i.e., p(T − τ) = k h(τ). Without loss of generality, let k = 1, i.e., p(T − τ) = h(τ).

For maximum SNR, the impulse response has to be matched to the pulse. Hence, the filter is termed the MATCHED FILTER.

(Figure: an example pulse p(t) = e^{−t} u(t) and its matched filter h(τ) = p(T − τ), the pulse reversed in time and shifted to T.)

The optimal filter, i.e., the one that maximizes the SNR, is the matched filter.

Substituting T − τ = τ̃, we have ∫_{−∞}^{∞} p²(T − τ) dτ = ∫_{−∞}^{∞} p²(τ̃) dτ̃. Therefore,

Maximum SNR = Pd ∫_{−∞}^{∞} |h(τ)|² dτ / (η/2) = Pd ∫_{−∞}^{∞} p²(τ) dτ / (η/2), for the matched filter h(τ) = p(T − τ).
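To see the matched-filter gain numerically, here is a small sketch (illustrative parameters only; Python/NumPy assumed, and the mismatched filter is an arbitrary choice) comparing the output SNR of h(τ) = p(T − τ) with that of a different filter, for a rectangular pulse in white noise:

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 1000.0
    dt = 1.0 / fs
    T = 0.1                               # pulse duration (arbitrary)
    t = np.arange(0.0, T, dt)
    p = np.ones_like(t)                   # rectangular pulse p(t)
    A, eta = 1.0, 0.5                     # symbol amplitude and noise level eta/2 (arbitrary)

    def output_snr(h, trials=5000):
        # Empirical SNR of r(T) = A*int p(T-tau)h(tau)dtau + int h(tau)n(T-tau)dtau
        signal = A * np.sum(p[::-1] * h) * dt          # p(T - tau) on the grid = p reversed
        noise = rng.normal(0, np.sqrt(eta / 2 * fs), size=(trials, t.size)) @ h * dt
        return signal**2 / np.var(noise)

    h_matched = p[::-1].copy()                         # h(tau) = p(T - tau)
    h_other = np.exp(-30.0 * t)                        # an arbitrary mismatched filter

    print(output_snr(h_matched))                       # close to the maximum SNR
    print(output_snr(h_other))                         # strictly smaller
    print(A**2 * np.sum(p**2) * dt / (eta / 2))        # theoretical maximum Pd*Ep/(eta/2)

The matched filter's empirical SNR should sit at the theoretical maximum Pd Ep/(η/2), while any other filter falls below it.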

Probability of Error:

We transmit a long sequence of symbols; the probability of error is an important metric to characterize the performance of a digital communication system.

Aim: to minimize the probability of error. Typical values of the probability of error are around 10⁻⁶ to 10⁻⁸.

For a given scheme, how to characterize the probability of error?


After filtering and sampling, r(T) = ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ + ñ

To maximize the SNR, h(τ) = p(T − τ), so

r(T) = ao ∫_{−∞}^{∞} p²(T − τ) dτ + ñ = ao Ep + ñ

Mean E{ñ} = 0, and variance of ñ = E{ñ²} = (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ = (η/2) ∫_{−∞}^{∞} |p(τ)|² dτ = (η/2) Ep

Noise at the output of the sampler: ñ ∼ N(0, (η/2) Ep). (X ∼ N(0, 1) denotes a standard normal random variable.)

r(T) = ao Ep + ñ, where the symbol ao must belong to the digital modulation constellation. For BPSK, ao = +A or −A, so

r(T) = ±A Ep + ñ


(Recall the BPSK mapping: bit 0 → +A and bit 1 → −A, so a bit stream 0 1 1 0 0 1 is modulated to the symbol stream +A −A −A +A +A −A.)

A. Noise

If n ∼ N(µ, σ 2 ), then n + k ∼ N(µ + k, σ 2 )

(Figure: the pdf fN(n) of N(µ, σ²) and of N(µ + k, σ²); adding a constant k shifts the mean while the spread remains unchanged.)

r(T) = AEp + ñ  ∼ N(AEp, σ²)    if ao = +A
r(T) = −AEp + ñ ∼ N(−AEp, σ²)   if ao = −A

r(T) is Gaussian with mean AEp or −AEp and the same variance σ² = (η/2) Ep.


(Figure: the two conditional pdfs of r(T), centred at −AEp for ao = −A and at +AEp for ao = +A, both with the same variance σ² = (η/2) Ep. By symmetry, they intersect at r(T) = 0.)

For a given r(T), decide in favour of the symbol whose conditional pdf is higher: if r(T) < 0 the pdf for ao = −A is higher, so decide ao = −A; if r(T) > 0 the pdf for ao = +A is higher, so decide ao = +A.

r(T) ≥ 0 =⇒ decide +A, which corresponds to information bit ‘0’; r(T) < 0 =⇒ decide −A, which corresponds to information bit ‘1’.


Probability of error:

When does error occur? Error occurs when r(T ) ≥ 0 for ao = −A or when r(T ) < 0 for ao = A

Consider the transmission of ao = −A

r(T) = −AEp + ñ = −Ã + ñ, where Ã = AEp. Error occurs if r(T) ≥ 0, i.e., ñ ≥ Ã.

Probability of error,

Pe = P(ñ ≥ Ã)
   = 1 − ∫_{−∞}^{Ã} (1/√(2πσ²)) e^{−ñ²/(2σ²)} dñ
   = ∫_{Ã}^{∞} (1/√(2πσ²)) e^{−ñ²/(2σ²)} dñ
   = ∫_{Ã/σ}^{∞} (1/√(2π)) e^{−n′²/2} dn′

The last integrand is the pdf of a standard Gaussian with mean zero and variance 1.

Q(u) = ∫_u^{∞} (1/√(2π)) e^{−x²/2} dx

Q-function: the Gaussian Q-function is the probability that a standard Gaussian random variable exceeds u.

Pe = P{ñ ≥ Ã}
   = Q(Ã/σ)
   = Q( AEp / √((η/2)Ep) )
   = Q( √(A²Ep/(η/2)) )
   = Q( √(2A²Ep/η) )

BPSK is one of the digital modulation schemes.

BPSK pulse shape: p(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T, where T is the symbol duration. With T = k/fc, the pulse contains an integer number k of carrier cycles.


Ep = ∫_{−∞}^{∞} p²(t) dt
   = ∫_0^T (2/T) cos²(2πfc t) dt
   = (1/T) ∫_0^T (1 + cos(4πfc t)) dt
   = (1/T) ∫_0^T dt + (1/T) ∫_0^T cos(4πfc t) dt
   = 1

x(t) = ao p(t), where ao ∈ {−A, A}, each ao carries one bit of information.

Eb denotes the average energy per bit, and it is kept constant across schemes for a fair comparison.

Information bit ‘0’ occurs with probability 1/2 and bit ‘1’ occurs with probability 1/2.

ao = ±A and x(t) = ao p(t)

Average energy per bit, Eb = (1/2) A² Ep + (1/2) A² Ep = A² Ep = A² =⇒ A = √Eb

Therefore, the transmitted waveforms are:

0 → ao = +A = +√Eb ; x(t) = √(2Eb/T) cos(2πfc t), 0 ≤ t ≤ T, the waveform that corresponds to bit ‘0’.
1 → ao = −A = −√Eb ; x(t) = −√(2Eb/T) cos(2πfc t), 0 ≤ t ≤ T, the waveform that corresponds to bit ‘1’.

Waveforms:

√(2Eb/T) cos(2πfc t)  and  √(2Eb/T) cos(2πfc t + π)

Each waveform is obtained from the other by a phase shift of π; shifting by π again returns the original, since √(2Eb/T) cos(2πfc t + π + π) = √(2Eb/T) cos(2πfc t).

Two binary waveforms =⇒ binary modulation scheme; the waveforms are phase-shifted versions of each other =⇒ phase shifting. Together =⇒ Binary Phase Shift Keying (BPSK).

At the receiver, the matched filter is h(τ) = p(T − τ): filter by h(t), then sample at t = T.


(Figure: p(t) is supported on [0, T]; the time-reversed pulse p(−t) is supported on [−T, 0], i.e., it is non-causal. Delaying by T gives the causal matched filter h(t) = p(T − t).)

r(T) ≥ 0 =⇒ decide ao = +A = +√Eb =⇒ bit = 0
r(T) < 0 =⇒ decide ao = −A = −√Eb =⇒ bit = 1

Optimal decision rule: probability of error = Q( AEp/√((η/2)Ep) ) = Q( √Eb/√(η/2) ) = Q( √(2Eb/η) )

This is the probability of error for BPSK with average energy per bit Eb.

Q(x) = ∫_x^{∞} (1/√(2π)) e^{−x′²/2} dx′ = P(X > x), where X ∼ N(0, 1) is a standard Gaussian random variable.
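A minimal Monte Carlo sketch of this result (assuming Python with NumPy/SciPy; the values of Eb, the noise PSD, and the number of bits are arbitrary, and the notes' η is written as N0 here) comparing the simulated BPSK bit error rate with Q(√(2Eb/η)):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        # Gaussian Q-function: P(X > x) for X ~ N(0, 1)
        return 0.5 * erfc(x / np.sqrt(2))

    rng = np.random.default_rng(2)
    Eb, N0 = 1.0, 0.5                    # arbitrary energy per bit and noise PSD (N0 = eta)
    num_bits = 1_000_000

    bits = rng.integers(0, 2, num_bits)
    a = np.where(bits == 0, +np.sqrt(Eb), -np.sqrt(Eb))   # BPSK mapping, A = sqrt(Eb), Ep = 1

    # Matched-filter output r(T) = a + n_tilde, noise variance (N0/2)*Ep = N0/2
    r = a + rng.normal(0.0, np.sqrt(N0 / 2), num_bits)

    bits_hat = np.where(r >= 0, 0, 1)    # decision: r(T) >= 0 -> bit 0, else bit 1
    print(np.mean(bits_hat != bits))     # simulated BER
    print(Q(np.sqrt(2 * Eb / N0)))       # theoretical BER, about 2.3e-2 for these values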

Amplitude Shift Keying (ASK), another digital modulation scheme


p(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T, where T is the symbol duration and T = k/fc, i.e., there are k carrier cycles in a symbol duration T.

Ep = ∫_0^T p²(t) dt = 1, and ao ∈ {0, A}

Information bit ‘0’ → A √(2/T) cos(2πfc t), 0 ≤ t ≤ T, and information bit ‘1’ → 0 × √(2/T) cos(2πfc t) = 0, 0 ≤ t ≤ T

Assume k = 2, i.e., T = 2/fc: there are two cycles in a symbol duration T.

(Figure: for the bit pattern 0, 1, 0 the transmitted carrier is ON, OFF, ON.)

ASK is also termed ON-OFF keying.

How to choose A? Bit energy = Eb


x(t) = A p(t) if ao = A (bit ‘0’), and x(t) = 0 if ao = 0 (bit ‘1’)

P(0) = P(1) = 1/2

Average energy per bit, Eb = (1/2) A² Ep + (1/2) × 0 = A²/2 =⇒ A = √(2Eb). This ensures that the average energy per bit is constant across schemes.


Both waveforms differ in amplitudes. Therefore, the digital modulation scheme is termed as amplitude

shift keying (ASK).

At the receiver, y(t) = x(t) + n(t), where n(t) is AWGN, for matched filter, h(t) = p(T − t), followed

by sampling at T .

Consider bit ‘0’: y(t) = A p(t) + n(t). After matched filtering and sampling at t = T,

r(T) = AEp + ñ, where ñ is a Gaussian random variable with zero mean and variance (η/2)Ep = η/2.

Consider bit ‘1’: y(t) = n(t). After matched filtering and sampling at t = T,

r(T) = ñ, with the same zero mean and variance (η/2)Ep = η/2.

r(T) = AEp + ñ  if ao = A
r(T) = ñ        if ao = 0

What is the optimal decision rule?

If r(T) ≥ AEp/2 = A/2, decide ao = A (bit ‘0’); if r(T) < AEp/2 = A/2, decide ao = 0 (bit ‘1’).

(Figure: the detection threshold AEp/2 lies midway between the two conditional means, 0 for bit ‘1’ and AEp for bit ‘0’.)

Therefore, decide ao = A if r(T) ≥ AEp/2, and decide ao = 0 if r(T) < AEp/2.

Define r̃(T) = r(T) − AEp/2. Then

r̃(T) = +AEp/2 + ñ  if ao = A
r̃(T) = −AEp/2 + ñ  if ao = 0

This is similar to BPSK with AEp replaced by AEp/2.

Decide ao = A if r̃(T) ≥ 0, i.e., r(T) ≥ AEp/2.
Decide ao = 0 if r̃(T) < 0, i.e., r(T) < AEp/2.

The bit error rate (probability of bit error) is therefore as in BPSK with AEp replaced by AEp/2:

Pe = Q( (AEp/2) / √((η/2)Ep) ) = Q( √(A²Ep/(2η)) )

Thus, the probability of error for amplitude shift keying (ASK) is Pe = Q( √(A²Ep/(2η)) ).

With Ep = 1 and A² = 2Eb, Pe = Q( √(Eb/η) ).

Comparing this with the probability of error of BPSK:

Pe,ASK = Q( √(Eb/η) ), and Pe,BPSK = Q( √(2Eb/η) )

Since Q(·) is a decreasing function, Q( √(2Eb/η) ) < Q( √(Eb/η) ), i.e.,

Pe,BPSK < Pe,ASK

This is a fair comparison: both modulation schemes have the same average energy per bit, Eb.

How much improvement in BER does BPSK provide?


1
For same BER, BPSK needs 2
the average energy per bit as that of ASK.

=⇒ Eb,BP SK = 1
2
× Eb,ASK

=⇒ 10log10 Eb,BP SK − 10log10 12 × Eb,ASK

=⇒ 10log10 Eb,BP SK − 10log10 Eb,ASK = 10log10 12

=⇒ 10log10 Eb,BP SK (dB) − 10log10 Eb,ASK (dB) = −10log10 2 = −3(dB)

Note: For the same BER, BPSK requires 3 dB lower average energy per bit Eb or basically, ASK

requires 3 dB higher average energy per bit Eb when compared to BPSK for the same BER.
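A short computation of this gap (a sketch using SciPy's inverse complementary error function; the target BER of 10⁻⁶ is just an example value):

    import numpy as np
    from scipy.special import erfcinv

    def Qinv(p):
        # Inverse of the Gaussian Q-function
        return np.sqrt(2) * erfcinv(2 * p)

    target_ber = 1e-6
    x = Qinv(target_ber)            # required Q-function argument

    ebn0_bpsk = x**2 / 2            # BPSK: Q(sqrt(2*Eb/N0)) = target  =>  Eb/N0 = x^2 / 2
    ebn0_ask = x**2                 # ASK:  Q(sqrt(Eb/N0))   = target  =>  Eb/N0 = x^2

    print(10 * np.log10(ebn0_ask / ebn0_bpsk))   # ~3.01 dB

The printed gap is 10 log10(2) ≈ 3.01 dB, independent of the chosen target BER.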

Signal Space:

Framework to understand digital communication systems.


∫_{−∞}^{∞} p1²(t) dt = ∫_{−∞}^{∞} p2²(t) dt = Ep = 1


Both pulses have unit energy; more importantly, ∫_{−∞}^{∞} p1(t) p2(t) dt = 0, i.e., the inner product of the two signals is zero. The pulses are orthogonal.

p1(t) and p2(t) constitute an orthonormal basis for a signal space: they are orthogonal and normalized to unit energy.

Signal space: the space formed by linear combinations of the basis signals, α1 p1(t) + α2 p2(t), where αi ∈ R; all such linear combinations are elements of the signal space.

For example, p1(t) = √(2/T) cos(2πf1 t), 0 ≤ t ≤ T, and p2(t) = √(2/T) cos(2πf2 t), 0 ≤ t ≤ T.

The pulse duration T contains an integer number of cycles of p1(t) and p2(t): f1 = k1/T = k1 fo and f2 = k2/T = k2 fo, where f1 and f2 are different multiples of the same fundamental frequency fo = 1/T.

E1 = ∫_{−∞}^{∞} p1²(t) dt
   = (2/T) ∫_0^T cos²(2πf1 t) dt
   = (2/T) ∫_0^T (1 + cos(4πf1 t))/2 dt
   = 1

Similarly, E2 = ∫_{−∞}^{∞} p2²(t) dt = 1 = Ep

Consider the inner product of p1(t) and p2(t):

∫_{−∞}^{∞} p1(t) p2(t) dt = (2/T) ∫_0^T cos(2πf1 t) cos(2πf2 t) dt
 = (1/T) ∫_0^T ( cos(2π(f1 − f2)t) + cos(2π(f1 + f2)t) ) dt
 = (1/T) ∫_0^T ( cos(2π(k1 − k2) fo t) + cos(2π(k1 + k2) fo t) ) dt
 = (1/T) [ sin(2π(k1 − k2) fo t)/(2π(k1 − k2) fo) + sin(2π(k1 + k2) fo t)/(2π(k1 + k2) fo) ]_0^T
 = (1/(2π)) { (1/(k1 − k2)) (sin(2π(k1 − k2) fo T) − sin 0) + (1/(k1 + k2)) (sin(2π(k1 + k2) fo T) − sin 0) }
 = 0

p1(t) and p2(t) are orthogonal and have unit energy; hence they form an orthonormal basis for the signal space.


α1 p1(t) + α2 p2(t) = α1 √(2/T) cos(2πf1 t) + α2 √(2/T) cos(2πf2 t)

Frequency Shift Keying (FSK)

Signal Space Concept:


p1(t) = √(2/T) cos(2πf1 t), 0 ≤ t ≤ T ;  p2(t) = √(2/T) cos(2πf2 t), 0 ≤ t ≤ T

where f1 = k1/T and f2 = k2/T, i.e., T = k1/f1 = k2/f2.

Ep = ∫_{−∞}^{∞} p1²(t) dt = ∫_{−∞}^{∞} p2²(t) dt = 1, and ∫_{−∞}^{∞} p1(t) p2(t) dt = 0. Both pulses have unit energy and they are orthogonal.

x(t) = A p1(t) or A p2(t):

Information bit ‘0’ → x(t) = A p1(t) = A √(2/T) cos(2πf1 t); information bit ‘1’ → x(t) = A p2(t) = A √(2/T) cos(2πf2 t)

Both waveforms differ in frequency: one is obtained from the other by a shift in frequency.

Assume ‘0’ and ‘1’ occur with probability 1/2 each.

Average energy per bit, Eb = (1/2) A² Ep + (1/2) A² Ep = A² Ep = A² =⇒ A = √Eb

Information bit ‘0’ → A p1(t) = √(2Eb/T) cos(2πf1 t) = x(t); information bit ‘1’ → A p2(t) = √(2Eb/T) cos(2πf2 t) = x(t)

At the receiver:

y(t) = x(t) + n(t), where n(t) is additive white Gaussian noise with Rnn(τ) = (No/2) δ(τ)

0 → y(t) = A p1(t) + n(t) ;  1 → y(t) = A p2(t) + n(t)

Matched filter: should h(τ) = p1(T − τ) or h(τ) = p2(T − τ)? Which pulse should the filter be matched to?

Subtract the known (data-independent) component A (p1(t) + p2(t))/2 from y(t):

For ‘0’: y(t) = A p1(t) + n(t) =⇒ ỹ(t) = y(t) − A (p1(t) + p2(t))/2 = A (p1(t) − p2(t))/2 + n(t) = A p̃(t) + n(t)

For ‘1’: y(t) = A p2(t) + n(t) =⇒ ỹ(t) = y(t) − A (p1(t) + p2(t))/2 = A (p2(t) − p1(t))/2 + n(t) = −A p̃(t) + n(t)

where p̃(t) = (p1(t) − p2(t))/2.

This is similar to BPSK with p(t) replaced by p̃(t) = (p1(t) − p2(t))/2. Therefore, the optimum matched filter is h(τ) = p̃(T − τ), matched to p̃(t); the factor of 1/2 is only a scaling factor and does not affect the filter.


Hence, the optimum matched filter is h(t) = p1(T − t) − p2(T − t) = √(2/T) cos(2πf1(T − t)) − √(2/T) cos(2πf2(T − t))

This is the optimum filter at the receiver, i.e., the one that maximizes the output SNR.

For ‘0’, y(t) = A p1(t) + n(t), and for ‘1’, y(t) = A p2(t) + n(t).

Optimal receive filter (maximizing output SNR): h(t) = p1(T − t) − p2(T − t)

Consider information bit ‘0’, y(t) = Ap1 (t) + n(t)

Receive filter with h(t) = p1 (T − t) − p2 (T − t) followed by sampling at t = T .


r(T) = ∫_{−∞}^{∞} y(τ) h(T − τ) dτ
     = ∫_{−∞}^{∞} {A p1(τ) + n(τ)} {p1(τ) − p2(τ)} dτ
     = ∫_{−∞}^{∞} A p1(τ) {p1(τ) − p2(τ)} dτ + ∫_{−∞}^{∞} n(τ) {p1(τ) − p2(τ)} dτ

Signal term = ∫_{−∞}^{∞} A p1(τ){p1(τ) − p2(τ)} dτ = A ∫_{−∞}^{∞} p1²(τ) dτ − A ∫_{−∞}^{∞} p1(τ) p2(τ) dτ = A Ep = A

Noise term: ñ = ∫_{−∞}^{∞} n(τ) {p1(τ) − p2(τ)} dτ

E{ñ} = 0, and

E{ñ²} = (No/2) ∫_{−∞}^{∞} |h(T − τ)|² dτ
      = (No/2) ∫_{−∞}^{∞} {p1(τ) − p2(τ)}² dτ
      = (No/2) ∫_{−∞}^{∞} {p1²(τ) + p2²(τ) − 2 p1(τ) p2(τ)} dτ
      = (No/2) Ep + (No/2) Ep − 0
      = No Ep
      = No

After filtering and sampling, r(T) = A Ep + ñ = A + ñ, with noise power No.

Corresponding to bit ‘1’, y(t) = A p2(t) + n(t); with the same matched filter h(t) = p1(T − t) − p2(T − t),

r(T) = −A Ep + ñ = −A + ñ

So r(T) = A + ñ for bit ‘0’ and r(T) = −A + ñ for bit ‘1’. This model is similar to BPSK except that σ² = No.


By symmetry, the optimal decision rule is: r(T) ≥ 0 =⇒ decide bit ‘0’, and r(T) < 0 =⇒ decide bit ‘1’.

This is the optimal threshold-based decision rule for frequency shift keying.

Bit error rate, BER = Q(A/σ) = Q(A/√No) = Q(√(A²/No))

Substituting A = √Eb, where Eb is the average energy per bit, the probability of error (bit error rate) for frequency shift keying (FSK) is Pe = Q(√(Eb/No)).

BER for BPSK = Q(√(2Eb/No)), and BER for ASK and FSK = Q(√(Eb/No))

BER of ASK = BER of FSK for equal average energy per bit, Eb.

BER of ASK, FSK > BER of BPSK, because Q(·) is a decreasing function. ASK and FSK are 3 dB less efficient than BPSK: for the same BER, ASK/FSK need 3 dB more power.

Quadrature Phase Shift Keying (QPSK)

Consider the following pulses p1(t) and p2(t), which are unit-energy signals:

p1(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T, and p2(t) = √(2/T) sin(2πfc t), 0 ≤ t ≤ T

where T is the duration of a symbol and T = k/fc. Observe that p2(t) is a phase-shifted version of p1(t), since p1(t) = √(2/T) cos(2πfc t) = √(2/T) sin(2πfc t + π/2), 0 ≤ t ≤ T.

Both pulses have unit energy:

∫_{−∞}^{∞} p2²(t) dt = (2/T) ∫_0^T sin²(2πfc t) dt
 = (2/T) ∫_0^T (1 − cos(4πfc t))/2 dt
 = (2/T) [ T/2 − sin(4πfc t)/(4πfc) |_0^T ]
 = 1

i.e., Ep = 1.


Inner product:

∫_{−∞}^{∞} p1(t) p2(t) dt = (2/T) ∫_0^T cos(2πfc t) sin(2πfc t) dt
 = (1/T) ∫_0^T sin(4πfc t) dt
 = (1/T) [ −cos(4πfc t)/(4πfc) ]_0^T
 = 0

These pulses, p1 (t) and p2 (t) have unit energy and their inner product is zero, so they are unit energy

+ orthogonal =⇒ orthonormal basis of signal space. Note that these two signals are similar to those

in Quadrature Carrier Multiplexing (QCM), in which the signals are modulated over m1 cos(2πfc t) −

m2 sin(2πfc t)

Two separate message signals m1 (t) and m2 (t) can be modulated on orthogonal carriers.

We generate a signal, x(t), where x(t) = a1 p1 (t) + a2 p2 (t), each of the two pulses carries one symbol

or one bit of information.



For example, a1 = ±A and a2 = ±A, with average energy per bit Eb = A², i.e., A = √Eb.

We can have a set of four possible signals






x(t) ∈ { A p1(t) + A p2(t),  A p1(t) − A p2(t),  −A p1(t) + A p2(t),  −A p1(t) − A p2(t) }

Here each signal is a combination of two orthogonal pulses, so each signal carries 2 bits of information per symbol period, whereas the previous schemes carry only one bit per symbol period.

Waveforms of Quadrature Phase Shift Keying

In QPSK the signal is x(t) = a1 p1(t) + a2 p2(t), where ai ∈ {−A, +A}, so there are four possible waveforms.




x(t) ∈ { A p1(t) + A p2(t),  A p1(t) − A p2(t),  −A p1(t) + A p2(t),  −A p1(t) − A p2(t) }

Consider a1 = a2 = A:
x(t) = A √(2/T) cos(2πfc t) + A √(2/T) sin(2πfc t)
     = A (2/√T) { (1/√2) cos(2πfc t) + (1/√2) sin(2πfc t) }
     = (2A/√T) cos(2πfc t − π/4)

Consider a1 = A and a2 = −A:
x(t) = A √(2/T) cos(2πfc t) − A √(2/T) sin(2πfc t)
     = (2A/√T) cos(2πfc t + π/4)

Similarly, −A p1(t) + A p2(t) = (2A/√T) cos(2πfc t + 5π/4), and −A p1(t) − A p2(t) = (2A/√T) cos(2πfc t + 3π/4).


x(t) ∈ { (2A/√T) cos(2πfc t − π/4),  (2A/√T) cos(2πfc t + π/4),  (2A/√T) cos(2πfc t + 3π/4),  (2A/√T) cos(2πfc t + 5π/4) }

Note that each waveform is shifted from its neighbouring waveform by a phase difference of π/2, i.e., 90° or a quadrature (and the last is likewise π/2 away from the first). This is why the scheme is called ‘Quadrature Phase Shift Keying’.


Phase difference between neighbouring waveforms = π/2 = 90°. These waveforms can be represented using a phasor diagram: the four signals lie on a circle of radius 2A/√T, each shifted from the next by π/2, at angles π/4, 3π/4, 5π/4 (= −3π/4), and 7π/4 (= −π/4).

We can also represent these waveforms using a 2-dimensional constellation diagram, with the four points (A, A), (A, −A), (−A, −A), and (−A, A).


(Constellation diagram: the four QPSK points (A, A), (−A, A), (−A, −A), and (A, −A) at angles ±π/4 and ±3π/4 in the p1–p2 plane.)

QPSK receiver, Matched Filter, BER, and Symbol error rate.

x(t) = a1 p1(t) + a2 p2(t), where p1(t) and p2(t) are orthogonal pulses. Thus, the optimal receive filters for the two symbols are h1(t) = p1(T − t) and h2(t) = p2(T − t).

(Receiver: y(t) is applied to the two matched filters h1(t) = p1(T − t) and h2(t) = p2(T − t), whose sampled outputs are a1 Ep + ñ1 and a2 Ep + ñ2 respectively.)

Matched filter h1(t) = p1(T − t):

y(t) = x(t) + n(t), where n(t) is zero-mean Gaussian noise with Rnn(τ) = (No/2) δ(τ), so y(t) = a1 p1(t) + a2 p2(t) + n(t). The sampled filter output is r1(T) = x(t) ∗ h1(t)|_{t=T} + n(t) ∗ h1(t)|_{t=T}.

(a1 p1(t) + a2 p2(t)) ∗ h1(t) = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) h1(t − τ) dτ
                             = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p1(T + τ − t) dτ


Sampling at t = T:

x(t) ∗ h1(t)|_{t=T} = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p1(T + τ − T) dτ
                    = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p1(τ) dτ
                    = a1 ∫_{−∞}^{∞} p1²(τ) dτ + a2 ∫_{−∞}^{∞} p2(τ) p1(τ) dτ
                    = a1 Ep = a1

r1(T) = a1 + n(t) ∗ h1(t)|_{t=T} = a1 + ñ1, where ñ1 is zero-mean Gaussian with variance (No/2) Ep = No/2.

The final output of the first sampler is r1(T) = a1 + ñ1, where a1 = ±A and A = √Eb (since Ep = 1); here Eb is the average energy per bit.

Note this output is similar to BPSK. By symmetry, the optimal decision rule is: if r1(T) ≥ 0, decide a1 = A; otherwise (r1(T) < 0) decide a1 = −A.

Similarly for a2, with the matched filter impulse response h2(t) = p2(T − t):

(a1 p1(t) + a2 p2(t)) ∗ h2(t) = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) h2(t − τ) dτ
                             = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p2(T + τ − t) dτ

Sampling at t = T:

x(t) ∗ h2(t)|_{t=T} = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p2(τ) dτ
                    = a1 ∫_{−∞}^{∞} p1(τ) p2(τ) dτ + a2 ∫_{−∞}^{∞} p2²(τ) dτ
                    = a2 Ep = a2

r2(T) = a2 + n(t) ∗ h2(t)|_{t=T} = a2 + ñ2, where ñ2 is zero-mean Gaussian with variance (No/2) Ep = No/2.


The final output of the second sampler is r2(T) = a2 + ñ2, where a2 = ±A and A = √Eb; here Eb is the average energy per bit.

Note this output is also similar to BPSK. By symmetry, the optimal decision rule is: if r2(T) ≥ 0, decide a2 = A; otherwise (r2(T) < 0) decide a2 = −A.


(Schematic for QPSK detection: y(t) is fed to two branches, each with a matched filter, h1(t) = p1(T − t) and h2(t) = p2(T − t), sampled at t = T to give r1(T) and r2(T), followed by hard thresholding.)

The output of each sampler is similar to BPSK. Both a1 and a2 can be ±A

r1 (T ) = a1 + ñ1 ; r2 (T ) = a2 + ñ2 , where ai ∈ {−A, +A}

The bit error rate (BER) of each branch is the same as for BPSK:

BER of the 1st branch = Q( A/√(No/2) ) = Q( √(2Eb/No) )

Similarly, BER of the 2nd branch = Q( A/√(No/2) ) = Q( √(2Eb/No) )

Thus, the BER of both branches is Pe1 = Pe2 = Pe = Q( √(2Eb/No) ).

B. Overall symbol error rate:

A QPSK symbol is in error if either bit a1 or bit a2 is in error.

Probability of error in the first branch = Pe; probability that a1 is received correctly, Pc = 1 − Pe.

Similarly, probability of error in the second branch = Pe; probability that a2 is received correctly, Pc = 1 − Pe.

Assuming the events ‘a1 is received correctly’ and ‘a2 is received correctly’ are independent, the probability that both bits {a1, a2} are received correctly is (1 − Pe)(1 − Pe) = (1 − Pe)².

Probability that the symbol is in error = 1 − P(both bits received correctly) = 1 − (1 − Pe)² = 2Pe − Pe² = 2Q(√(2Eb/No)) − Q²(√(2Eb/No))

For high SNR Eb/No, Q(√(2Eb/No)) << 1, so Q²(√(2Eb/No)) << Q(√(2Eb/No)).

The overall probability of symbol error (symbol error rate) is therefore Pe,sym ≈ 2Q(√(2Eb/No)).
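As a numerical illustration (a sketch with an arbitrary Eb/N0 grid; Python/SciPy assumed), the exact QPSK symbol error rate 1 − (1 − Pe)² and the high-SNR approximation 2Pe are essentially indistinguishable once Eb/N0 is moderately large:

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    ebn0_db = np.array([0.0, 4.0, 8.0, 12.0])     # example Eb/N0 values in dB
    ebn0 = 10**(ebn0_db / 10)

    pe_bit = Q(np.sqrt(2 * ebn0))                 # per-branch (bit) error probability
    ser_exact = 1 - (1 - pe_bit)**2               # symbol errs if either branch errs
    ser_approx = 2 * pe_bit                       # high-SNR approximation

    for db, e, a in zip(ebn0_db, ser_exact, ser_approx):
        print(f"Eb/N0 = {db:4.1f} dB: exact = {e:.3e}, approx = {a:.3e}")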


M-ary Pulse Amplitude Modulation (M-ary PAM)

x(t) = ao p(t), where ao can take one of M different levels.

Number of bits for M levels = log2 M bits per symbol.

For example, for 4 levels, i.e. M = 4, we need 2 bits for representation, each level can be represented

as below:

‘00’ → Level 0
‘01’ → Level 1
‘10’ → Level 2
‘11’ → Level 3

For another example, M = 8: number of bits = log2 8 = 3. The eight possible values are ao ∈ {±A, ±3A, ±5A, ±7A}. All the levels in the constellation are separated by 2A. This is an M-ary (here, 8-ary) pulse amplitude modulation (PAM) constellation.

General M-ary PAM:

ao = ±(2i + 1)A, i = 0, 1, 2, ..., M/2 − 1, where M is even.

The number of bits per symbol is log2 M. Let the average symbol energy (average energy per symbol) be Es, given as follows:


Es = (1/(M/2)) Σ_{i=0}^{M/2−1} (2i + 1)² A²

By symmetry, −(2i + 1)A and +(2i + 1)A have the same energy, so the average can be computed over the positive levels only.

Es = (2A²/M) Σ_{i=0}^{M/2−1} (4i² + 4i + 1)

Using Σ_{i=0}^{n−1} i² = (n − 1)n(2n − 1)/6 and Σ_{i=0}^{n−1} i = (n − 1)n/2 with n = M/2,

Es = (2A²/M) [ 4 (M/2 − 1)(M/2)(M − 1)/6 + 4 (M/2 − 1)(M/2)/2 + M/2 ]
   = (2A²/M) (M/2)(M² − 1)/3
   = (A²/3)(M² − 1)

The average energy per symbol is Es = (A²/3)(M² − 1); it is a function of M and A. Note that the spacing between the M-ary levels is 2A.
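A quick numerical check of this closed form (a sketch; the values of M and A are arbitrary) by averaging (2i + 1)²A² over the positive levels:

    import numpy as np

    A = 1.0
    for M in (2, 4, 8, 16):
        i = np.arange(M // 2)
        Es_direct = np.mean(((2 * i + 1) * A)**2)    # average over positive levels (symmetry)
        Es_formula = A**2 * (M**2 - 1) / 3
        print(M, Es_direct, Es_formula)              # the two columns match for every M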


Choosing A = √(3Es/(M² − 1)) ensures that the average energy per symbol is Es.

x(t) = ao p(t), where ao = ±(2i + 1)A, 0 ≤ i ≤ M/2 − 1.

Choose the pulse p(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T; the energy of the pulse is Ep = 1.

C. Receiver for M-ary PAM

Matched filter h(t) = p(T − t)

We can say this because for the matched filter principle, we did not assume any specific modulation

scheme or number of levels, only the pulse shape. Thus, for M-ary scheme the matched filter will be

given by the above expression.

After matched filtering and sampling at t = T, r(T) = ao Ep + ñ = ao + ñ, where ao = ±(2i + 1)A, 0 ≤ i ≤ M/2 − 1, and ñ is zero-mean Gaussian noise with variance (No/2) Ep = No/2.


(Figure: M = 8 PAM constellation with levels −7A, −5A, −3A, −A, A, 3A, 5A, 7A; the decision region for ao = A is 0 ≤ r(T) < 2A.)

By symmetry, if 0 ≤ r(T) < 2A decide ao = A, and if 2A ≤ r(T) < 4A decide ao = 3A. Similarly, if −2A ≤ r(T) < 0 decide ao = −A, and if −4A ≤ r(T) < −2A decide ao = −3A.

This is a combination of M cases, one for each constellation point. Such a decision rule is known as the nearest-neighbour rule, since the decision is the level nearest to the sampled (received) value r(T).

The boundary points, however, have a slightly different rule as they only have one neighbour: for example, if r(T) < −6A decide ao = −7A, and if r(T) ≥ 6A decide ao = 7A.

M-ary PAM (Pulse Amplitude Modulation) - Part - II, Optimal Decision Rule, Probability of error.
(Figure: M-ary PAM decision examples; a received value near 5A gives the decision ao = 5A, and a value between 0 and 2A gives ao = A. The rule: ao is the closest neighbour of r(T).)

The nearest-neighbour rule means ao is decided to be the constellation level nearest to the sampled filter output r(T).

At the two end points the situation is slightly different: the end points −(M − 1)A and (M − 1)A have only one neighbour each. The decision rule for the end points is:

Decide ao = −(M − 1)A if r(T) < −(M − 2)A, and decide ao = (M − 1)A if r(T) ≥ (M − 2)A.
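The nearest-neighbour rule, including the end-point behaviour, can be written compactly as below (a minimal sketch; the helper names and the sample received values are illustrative):

    import numpy as np

    def pam_levels(M, A=1.0):
        # M-ary PAM constellation: -(M-1)A, ..., -3A, -A, A, 3A, ..., (M-1)A
        return A * np.arange(-(M - 1), M, 2)

    def nearest_neighbour_decision(r, M, A=1.0):
        # Decide, for each received sample r(T), the closest constellation level
        levels = pam_levels(M, A)
        idx = np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)
        return levels[idx]

    r = np.array([0.4, 2.7, -6.9, 10.0])           # example received samples
    print(nearest_neighbour_decision(r, M=8))      # -> [ 1.  3. -7.  7.]

Values beyond ±(M − 2)A automatically map to the end points ±(M − 1)A, matching the end-point rule above.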


(Figure: the largest level is (2i + 1)A = (M − 1)A for i = M/2 − 1, and the smallest is −(2i + 1)A = −(M − 1)A for i = M/2 − 1. Decide ao = −(M − 1)A when r(T) < −(M − 2)A, and ao = (M − 1)A when r(T) ≥ (M − 2)A.)

D. Probability of error:

Consider the transmission of ao = A

(Figure: for ao = A, the decision region is 0 ≤ r(T) < 2A, between the neighbouring levels −A and 3A.)

Decide ao = A if 0 ≤ r(T) < 2A. An error occurs if either r(T) ≥ 2A or r(T) < 0.

Pe = P(r(T) < 0 ∪ r(T) ≥ 2A), where the events φ1 = {r(T) < 0} and φ2 = {r(T) ≥ 2A} are disjoint (mutually exclusive) events, so P{φ1 ∪ φ2} = P{φ1} + P{φ2}.

Since r(T) = ao + ñ, where ñ is zero-mean Gaussian noise with variance No/2, an error occurs if r(T) ≥ 2A =⇒ A + ñ ≥ 2A =⇒ ñ ≥ A, or if r(T) < 0 =⇒ A + ñ < 0 =⇒ ñ < −A.

Pe = P{ñ ≥ A} + P{ñ < −A}


(Figure: the pdf of ñ is symmetric about 0, so the two tail probabilities beyond +A and −A are equal.)

By symmetry, P{ñ ≥ A} = P{ñ < −A}. Therefore,

Pe = P{ñ ≥ A} + P{ñ < −A} = 2 P{ñ ≥ A} = 2 P{ ñ/√(No/2) ≥ A/√(No/2) } = 2 Q( √(A²/(No/2)) ) = 2 Q( √(2A²/No) )

The same holds for all other interior points. Now consider the exterior points, e.g., transmission of ao = (M − 1)A, for which r(T) = (M − 1)A + ñ. An error occurs if r(T) < (M − 2)A =⇒ (M − 1)A + ñ < (M − 2)A =⇒ ñ < −A. Thus, by symmetry,

Pe = P{ñ < −A} = P{ñ > A} = P{ ñ/√(No/2) > A/√(No/2) } = Q( √(2A²/No) )

Assume all the constellation points are equi-probable.


P{ao = ±(2i + 1)A} = 1/M. Then the probability of an end point is 2/M, and the probability of an internal point is (M − 2)/M.

Average probability of error:

Pe = P{error | end point} P{end point} + P{error | internal point} P{internal point}
   = Q( √(2A²/No) ) × (2/M) + 2 Q( √(2A²/No) ) × (M − 2)/M
   = ((2 + 2(M − 2))/M) Q( √(2A²/No) )
   = 2 (1 − 1/M) Q( √(2A²/No) )

where A = √(3Es/(M² − 1)).

Hence, the average probability of error for M-ary PAM is Pe = 2 (1 − 1/M) Q( √(6Es/((M² − 1)No)) ).

For M = 2, Pe = Q( √(2Es/No) ), and the number of bits per symbol is log2 M.
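A small helper (a sketch; the Es/N0 value is an example) for evaluating this M-PAM symbol error rate formula, including the M = 2 special case:

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    def ser_mpam(M, es_n0):
        # Pe = 2 (1 - 1/M) Q( sqrt( 6 Es / ((M^2 - 1) N0) ) )
        return 2 * (1 - 1 / M) * Q(np.sqrt(6 * es_n0 / (M**2 - 1)))

    for M in (2, 4, 8):
        print(M, ser_mpam(M, es_n0=10**(12 / 10)))   # Es/N0 = 12 dB, for example

For M = 2 the expression reduces to Q(√(2Es/N0)), as stated above.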

M-ary QAM (Quadrature Amplitude Modulation) - part-I

Similar to QPSK (Quadrature Phase Shift Keying), there are two pulses, p1 (t) and p2 (t), and the


transmitted signal, x(t) = a1 p1 (t) + a2 p2 (t), where a1 and a2 are two independent and separate symbols

from a PAM constellation.


p1(t) = √(2/T) cos(2πfc t), and p2(t) = √(2/T) sin(2πfc t), where T = k/fc. These pulses are identical to those of QPSK: they have unit energy and are shifted by a phase difference of π/2.

For M-ary QAM, the total number of symbols is M, and we have two constituent symbol sets for a1 and a2. Both a1 and a2 are drawn from a √M-ary PAM constellation.

Total number of symbols in QAM = √M × √M = M. Similar to PAM, we now use √M-ary PAM, i.e., ai ∈ √M-ary PAM for i ∈ {1, 2}:

a1 = ±(2i + 1)A, 0 ≤ i ≤ √M/2 − 1, and a2 = ±(2j + 1)A, 0 ≤ j ≤ √M/2 − 1

This naturally means that √M has to be an integer (M is a perfect square).

For example, for 64-QAM, M = 64, √M = 8, so √M/2 = 4 and each constituent PAM constellation comprises 8 symbols.

Total number of bits per symbol = log2 M = log2 64 = 6, i.e., 3 bits for each constituent PAM (2³ = 8).

Let the average symbol energy be Es; then the average symbol energy carried by a1 is Es/2 and that carried by a2 is Es/2, i.e., each constituent PAM has half the symbol energy.

Each constituent PAM is a √M-ary PAM with distance 2A between levels, thus Es/2 = (A²/3)((√M)² − 1) = (A²/3)(M − 1) =⇒ A = √(3Es/(2(M − 1))).

M-ary QAM (Quadrature Amplitude Modulation)-Part-II Optimal Decision Rule, Probability of Error,

Constellation Diagram

Receiver for M-ary QAM:

The transmitted signal is x(t) = a1 p1(t) + a2 p2(t), where p1(t) and p2(t) are orthonormal pulses. The received signal is

y(t) = x(t) + n(t) = a1 p1(t) + a2 p2(t) + n(t)

where n(t) is AWGN with autocorrelation Rnn(τ) = (No/2) δ(τ).

The receive filters are similar to those of QPSK, i.e., two matched filters h1(t) = p1(T − t) and h2(t) = p2(T − t).

(Receiver: branch 1 gives r1(T) = a1 Ep + ñ1 = a1 + ñ1; branch 2 gives r2(T) = a2 Ep + ñ2 = a2 + ñ2.)

The sampled filter outputs are

r1(T) = a1 + ñ1
r2(T) = a2 + ñ2

where Ep = 1, and ñ1 and ñ2 are zero-mean Gaussian noise with variance No/2.

Decision rule: decide a1 from r1(T) and a2 from r2(T). Since a1 and a2 are independent PAM symbols, each decision is the same as for PAM, but for a √M-ary PAM constellation.

(Figure: √M-ary PAM decision regions along each axis; e.g., if 0 ≤ ri(T) < 2A decide ai = A. End points: if ri(T) ≥ (√M − 2)A decide ai = (√M − 1)A, and if ri(T) < −(√M − 2)A decide ai = −(√M − 1)A.)

Similarly, one can formulate the rest of the cases: choose ai as the closest constellation point in the √M-ary PAM to ri(T).

Recall the probability of error for M-ary PAM:

Pe = 2 (1 − 1/M) Q( A/√(No/2) )

Hence, the probability of error for √M-ary PAM is Pe = 2 (1 − 1/√M) Q( A/√(No/2) ).

This is the probability of error for a1 and a2, which belong to a √M-ary PAM, and Q(·) is the complementary CDF of the standard Gaussian random variable.

M-ary QAM:


Given the symbol energy Es, each individual √M-ary PAM has average energy Es/2, and A = √(3Es/(2(M − 1))) ensures an average symbol energy of Es/2 for each √M-ary PAM, hence an average energy of Es for the overall M-ary QAM.

Average symbol error rate for each constituent √M-ary PAM:

Pe,PAM = 2 (1 − 1/√M) Q( √(3Es/(2(M − 1))) / √(No/2) ) = 2 (1 − 1/√M) Q( √(3Es/(No(M − 1))) )

The error rate of the overall M-ary QAM constellation is

Pe,QAM = 1 − (1 − Pe,PAM)² = 1 − (1 + Pe,PAM² − 2Pe,PAM) = 2Pe,PAM − Pe,PAM² ≈ 2Pe,PAM

since at high SNR Pe,PAM² is very small in comparison to Pe,PAM. Therefore,

Pe,QAM ≈ 2 Pe,PAM = 4 (1 − 1/√M) Q( √(3Es/(No(M − 1))) )
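A small evaluation sketch (example Es/N0 value only; Python/SciPy assumed) of this approximate M-QAM symbol error rate:

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    def ser_mqam(M, es_n0):
        # Pe,QAM ~= 2 * Pe,PAM = 4 (1 - 1/sqrt(M)) Q( sqrt( 3 Es / ((M - 1) N0) ) )
        return 4 * (1 - 1 / np.sqrt(M)) * Q(np.sqrt(3 * es_n0 / (M - 1)))

    for M in (4, 16, 64):
        print(M, ser_mqam(M, es_n0=10**(15 / 10)))   # Es/N0 = 15 dB, for example

For M = 4 this reduces to 2Q(√(Es/N0)) = 2Q(√(2Eb/N0)), consistent with the QPSK result.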

Constellation Diagram for M-ary QAM:



For example, M = 16: a1, a2 ∈ √M = 4-ary PAM.

a1 ∈ {−3A, −A, A, 3A} and a2 ∈ {−3A, −A, A, 3A}; these are the levels of the constituent 4-PAM of 16-QAM.

(Figure: 16-QAM constellation in the a1–a2 plane, a 4 × 4 grid of points with coordinates in {−3A, −A, A, 3A}; for instance, the point a1 = 3A, a2 = −A. There are a total of 16 points in the 16-QAM constellation.)

QAM is a square constellation.

Number of bits per symbol = log2 M , for M = 16, number of bits = log2 16 = 4 bits, i.e., 2 bits on

each constituent PAM.

In the QAM scheme, we can increase M in order to increase the bit rate without increasing the symbol

transfer rate.

For M = 4 QAM there are 4 symbols, i.e., 2 bits per symbol, with one bit on each phase (in-phase and quadrature phase).

Higher modulation and adaptive modulation are important to achieve higher data rates in 3G, 4G, and

5G.


If you have a symbol rate of 10M symbols per second, and if we use 4-QAM, then there are 2 bits

per symbol. Thus, the data rate is 20Mbps.

If we are able to use 1024-QAM, then there are 10 bits per symbol. The data rate is 100Mbps. By

using higher modulation, data rate is increased by a factor of 5. This implies that higher data rate depends

on the modulation scheme along with several other technologies.
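For completeness, the bit-rate arithmetic in this example (a trivial sketch):

    import math

    symbol_rate = 10e6                        # 10 Msymbols per second, from the example
    for M in (4, 1024):
        bits_per_symbol = int(math.log2(M))   # log2(M) bits per QAM symbol
        rate_mbps = symbol_rate * bits_per_symbol / 1e6
        print(f"{M}-QAM: {bits_per_symbol} bits/symbol -> {rate_mbps:.0f} Mbps")

This prints 20 Mbps for 4-QAM and 100 Mbps for 1024-QAM, the factor-of-5 increase mentioned above.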

The choice of modulation scheme depends on the user requirement and also on the channel condition, i.e., whether the wireless channel can support such a data rate. So one can adaptively choose an appropriate modulation to meet the data requirement and, of course, to deliver the maximum data rate to each user.

QAM is thus the modulation scheme that increases the data rate significantly as we go from 3G to 4G to 5G wireless technologies, and it is a very general and one of the most widely used modulation schemes.

M-ary PSK (Phase Shift Keying) - Part-I (Introduction, transmitted waveform, constellation diagram)

M-ary PSK (phase shift keying) has M symbols and is a phase-shift-keying-based digital modulation scheme; it generalizes binary phase shift keying, which has 2 phases. In M-ary PSK there are M possible phases, the constellation has M symbols, and there are log2 M bits per symbol.


For M-ary PSK, 360° is divided into M components; the M possible phases are 0, 2π/M, 4π/M, ..., 2π(M − 1)/M, i.e., the ith phase is i(2π/M), 0 ≤ i ≤ M − 1.

For M = 2, M-ary PSK reduces to BPSK, where phases are 0 and π.


(Constellation diagram: the M phases lie on a circle of radius A; the point for i = 0 is on the x-axis, i = 1 at angle 2π/M, i = 2 at 4π/M, and so on.)

The M phases can be represented on a circle of radius A. The first point (i = 0) lies on the x-axis, the second at angle 2π/M, then 4π/M, and so on.

The ith point has projection A cos(i·2π/M) on the x-axis and A sin(i·2π/M) on the y-axis. So we can define two separate symbols, a1 = A cos(i·2π/M) and a2 = A sin(i·2π/M).

The transmitted signal is x(t) = a1 p1(t) + a2 p2(t), where p1(t) and p2(t) are the orthonormal pulses

p1(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T, and p2(t) = √(2/T) sin(2πfc t), 0 ≤ t ≤ T

with T = k/fc, and a1, a2 belong to the constellation a1 ∈ {A cos(k·2π/M)}, a2 ∈ {A sin(k·2π/M)}, k = 0, 1, 2, ..., M − 1.

The kth constellation point is sk = ( A cos(k·2π/M), A sin(k·2π/M) ).


The average symbol energy is

Es = E{ ∫_{−∞}^{∞} x²(t) dt }
   = E{ ∫_{−∞}^{∞} (a1 p1(t) + a2 p2(t))² dt }
   = E{ ∫_{−∞}^{∞} ( a1² p1²(t) + a2² p2²(t) + 2 a1 a2 p1(t) p2(t) ) dt }
   = E{ a1² ∫_{−∞}^{∞} p1²(t) dt + a2² ∫_{−∞}^{∞} p2²(t) dt + 2 a1 a2 ∫_{−∞}^{∞} p1(t) p2(t) dt }
   = E{ a1² + a2² }
   = E{ (A cos(k·2π/M))² + (A sin(k·2π/M))² }
   = A² E{ cos²(k·2π/M) + sin²(k·2π/M) }
   = A²

This implies that A = √Es. Thus the radius of the circle, for a fixed average symbol energy Es, is √Es.

M-ary PSK (Phase Shift Keying) Part-II, Optimal decision rule, Nearest Neighbour Criterion, Approx-

imate Probability of error.

Each symbol is given by sk = ( A cos(k·2π/M), A sin(k·2π/M) ).

Receiver processing for M-ary PSK: the transmitted signal is x(t) = a1 p1(t) + a2 p2(t), and the received signal is y(t) = x(t) + n(t), where n(t) is additive white Gaussian noise.


y(t) = a1 p1(t) + a2 p2(t) + n(t)

(Receiver: two matched-filter branches, h1(t) = p1(T − t) and h2(t) = p2(T − t), with sampled outputs r1(T) = a1 Ep + ñ1 = a1 + ñ1 and r2(T) = a2 Ep + ñ2 = a2 + ñ2, where the additive Gaussian noise samples have variance No/2.)

The decision rule is to choose the constellation point nearest to r(T) = (r1(T), r2(T)), i.e., the point sk that minimizes the distance ‖r(T) − sk‖:

min_k ‖r(T) − sk‖, where ‖r(T) − sk‖ = √( (r1(T) − √Es cos(k·2π/M))² + (r2(T) − √Es sin(k·2π/M))² )

This minimization does not lead to an exact closed-form analytical expression for the error rate, so we employ an indirect (approximate) approach.

Consider QPSK: the four points (±√Eb, ±√Eb) form a square, and the distance between adjacent points is dmin = 2√Eb. Each constellation point has α = 2 nearest neighbours.

Approximate symbol error rate (SER) = α Q( dmin/√(2No) ), where dmin is the distance to the nearest neighbour.


This is a very useful expression for computing the error rate of a digital communication scheme when an exact expression is difficult to obtain.

M-ary PSK:

In M-ary PSK, each constellation point has two closest neighbours.

(Figure: two adjacent points on the circle of radius √Es subtend an angle 2π/M at the origin; half the chord satisfies dmin/2 = √Es sin(π/M).)

In M-ary PSK a constellation point therefore has two nearest neighbours, at distance dmin = 2√Es sin(π/M).

The probability of error is

Pe ≈ 2 Q( 2√Es sin(π/M) / √(2No) ) = 2 Q( √( 2Es sin²(π/M) / No ) )

This is an approximate expression for the SER, but it is a very tight approximation (very close to the actual value) for high values of Es/No.
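Finally, a short sketch (example Es/N0 value only) of this nearest-neighbour SER approximation, Pe ≈ α Q(dmin/√(2N0)) with α = 2 and dmin = 2√Es sin(π/M):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    def ser_mpsk_approx(M, es_n0):
        # Pe ~= 2 Q( dmin / sqrt(2 N0) ) = 2 Q( sqrt( 2 Es sin^2(pi/M) / N0 ) )
        return 2 * Q(np.sqrt(2 * es_n0) * np.sin(np.pi / M))

    for M in (4, 8, 16):
        print(M, ser_mpsk_approx(M, es_n0=10**(18 / 10)))   # Es/N0 = 18 dB, for example

For M = 4 this reduces to 2Q(√(Es/N0)), matching the QPSK approximation derived earlier.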
