DC Notes Before Mid
Schematic Diagram: the message bit stream m(t) (e.g. 0, 1, 1, 0, 0, 1) passes through an encoder and a digital modulation block at the transmitter, propagates over the channel, and is processed at the receiver; bit errors may occur in the channel (e.g. the stream is received as 0, 1, 1, 0, 1, 1).
At transmitter,
1) Transmit Filter
2) Transmit Power
3) Modulation Techniques
At receiver,
1) Receive Filter
2) Probability of Error
Information symbols map to the two symbols of the digital modulation constellation: bit 0 → +A, bit 1 → −A.
Bit stream: 0 1 1 0 0 1
Stream of digitally modulated symbols: +A −A −A +A +A −A
The pulse pT(t) is a rectangular pulse of width T centred at the origin:
pT(t) = 1, if |t| < T/2; 0, otherwise.
[Figure: the pulse train a0 pT(t), a1 pT(t − T), a2 pT(t − 2T), a3 pT(t − 3T), a4 pT(t − 4T), taking values ±A over successive intervals of width T.]
Therefore, the k-th symbol contributes ak pT(t − kT), where ak is the k-th symbol and pT is any pulse. The k-th bit can be 0 or 1, so ak is correspondingly +A or −A; since the occurrence of each bit is random, ak is a random variable.
Recall the rectangular pulse
pT(t) = 1, if |t| < T/2; 0, otherwise,
with Fourier transform pair
pT(t) ⟷ T sinc(FT) = T sin(πFT)/(πFT)
Since ak is random, P(ak = A) = 1/2 and P(ak = −A) = 1/2, so
E{ak} = A·P(ak = A) + (−A)·P(ak = −A) = A/2 − A/2 = 0
x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)
E{x(t)} = E{Σ_{k=−∞}^{∞} ak pT(t − kT)} = Σ_{k=−∞}^{∞} E{ak} pT(t − kT) = 0
This implies that the average value of transmitted signal, x(t) is zero. The Fourier transform of x(t),
X(f ) is given as
X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
E{X(f)} = E{∫_{−∞}^{∞} x(t) e^{−j2πft} dt} = ∫_{−∞}^{∞} E{x(t)} e^{−j2πft} dt = 0
This does not mean that spectrum is zero, but the average value of the spectrum is zero.
x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)
How do we measure the spectral content of a random signal? Through the "Power Spectral Density", which describes the spectral distribution of power.
E{x(t)x(t + τ)} = E{Σ_{k=−∞}^{∞} ak pT(t − kT) Σ_{m=−∞}^{∞} am pT(t + τ − mT)}
= E{Σ_k Σ_m ak pT(t − kT) am pT(t + τ − mT)}
= Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} E{ak am} pT(t − kT) pT(t + τ − mT)
Since the symbols are independent and zero mean, E{ak am} = 0 for k ≠ m, and only the terms with k = m survive:
E{x(t)x(t + τ)} = Σ_{k=−∞}^{∞} E{ak²} pT(t − kT) pT(t + τ − kT)
E{ak²} = A²·P(ak = A) + (−A)²·P(ak = −A) = A²/2 + A²/2 = A²,
which is the power of the data symbol, Pd = A².
E{x(t)x(t + τ)} = A² Σ_{k=−∞}^{∞} pT(t − kT) pT(t + τ − kT)
Introducing a random timing offset to, uniformly distributed in [0, T), and averaging over it:
= A² Σ_{k=−∞}^{∞} (1/T) ∫_0^T pT(t − kT − to) pT(t + τ − kT − to) dto
= A² Σ_{k=−∞}^{∞} (1/T) ∫_{kT}^{(k+1)T} pT(t − t̃o) pT(t + τ − t̃o) dt̃o
= (Pd/T) ∫_{−∞}^{∞} pT(t′o) pT(t′o − τ) dt′o
This does not depend on t, only on τ. Hence, x(t) is WSS.
Rxx(τ) = E{x(t)x(t + τ)}
∫_{−∞}^{∞} pT(t′o) pT(t′o − τ) dt′o = R_{pT pT}(τ), the auto-correlation of the pulse pT(t). Therefore,
Rxx(τ) = (Pd/T) R_{pT pT}(τ)
The autocorrelation of the signal x(t) depends on the autocorrelation of the pulse pT(t).
Rxx(τ) = (Pd/T) R_{pT pT}(τ) ⟷ Sxx(f) = (Pd/T) S_{pT pT}(f), where Sxx(f) is the Power Spectral Density of x(t) and S_{pT pT}(f) is the energy spectral density of the pulse.
E{x(t)x(t + τ)} = Rxx(τ) = (Pd/T) R_{pT pT}(τ)
R_{pT pT}(τ) = ∫_{−∞}^{∞} pT(t) pT(t − τ) dt, and S_{pT pT}(f) = ∫_{−∞}^{∞} R_{pT pT}(τ) e^{−j2πfτ} dτ
S_{pT pT}(f) = |P(f)|², where P(f) = ∫_{−∞}^{∞} pT(t) e^{−j2πft} dt is the Fourier transform of the pulse pT(t).
Rxx(τ) = (Pd/T) R_{pT pT}(τ), and Sxx(f) = ∫_{−∞}^{∞} Rxx(τ) e^{−j2πfτ} dτ
For a wide sense stationary process, psd is given by the Fourier transform of the auto-correlation.
Sxx(f) = (Pd/T) S_{pT pT}(f) = (Pd/T) |P(f)|²
The power spectral density is proportional to the energy spectral density of the pulse pT(t).
For the rectangular pulse pT(t) of width T centred at the origin:
PT(f) = ∫_{−∞}^{∞} pT(t) e^{−j2πft} dt = ∫_{−T/2}^{T/2} e^{−j2πft} dt = T sinc(fT) = T sin(πfT)/(πfT)
|PT(f)|² = T² sinc²(fT), and the power spectral density of the transmitted signal x(t) is
Sxx(f) = (Pd/T) S_{pT pT}(f) = (Pd/T) |PT(f)|² = (Pd/T) T² sinc²(fT) = Pd T sinc²(fT)
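The closed form Sxx(f) = Pd T sinc²(fT) can be checked numerically; a minimal sketch, where the amplitude A and symbol duration T below are illustrative assumptions:

```python
import numpy as np

# Sketch: evaluate S_xx(f) = Pd * T * sinc^2(fT) for a rectangular pulse of
# width T carrying equiprobable +/-A symbols, so Pd = E{a_k^2} = A^2.
A = 1.0          # symbol amplitude (assumed value)
T = 1e-3         # symbol duration in seconds (assumed value)
Pd = A**2        # power of the data symbol

def Sxx(f):
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(f*T) = sin(pi*f*T)/(pi*f*T)
    return Pd * T * np.sinc(f * T)**2

print(Sxx(0.0))        # peak value Pd*T at f = 0
print(Sxx(1.0 / T))    # spectral null at f = 1/T
```

The nulls at multiples of 1/T and the peak Pd·T at f = 0 match the sinc² shape derived above.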
The digital communication channel is the medium through which the signal propagates from the transmitter to the receiver in a communication system, for example telephone lines, coaxial cable, or the wireless channel.
x(t) = Σ_{k=−∞}^{∞} ak pT(t − kT)
y(t) = x(t) + n(t), where x(t) is the transmitted signal, y(t) the received signal, and n(t) the noise added by the channel.
N(t) is a random function of time, i.e. a random process; the most popular noise model is Gaussian noise.
Gaussian Random Process, N (t) is a Gaussian random process if the statistics of all orders are jointly
Gaussian.
Consider k noise samples N(t1), N(t2), ..., N(tk) taken at times t1, t2, ..., tk, with joint distribution F_{N(t1),...,N(tk)}(n1, n2, ..., nk). If this joint distribution is jointly Gaussian, i.e. follows a multivariate Gaussian density, for all sets {t1, t2, ..., tk} and for any value of k, then N(t) is a Gaussian random process.
Gaussian Noise Process: If noise process is wide sense stationary =⇒ it is also strict sense stationary.
Typically, SSS =⇒ WSS but WSS ̸ =⇒ SSS, only for a Gaussian random process WSS =⇒ SSS.
N(t) is WSS if E{N(t)} = µ for all t, and E{N(t)N(t + τ)} = R_NN(τ) for all t, i.e. the autocorrelation depends only on the time difference τ.
A typical signal is very smooth and has a high level of temporal correlation. White noise, in contrast, satisfies E{N(t)N(t + τ)} = 0 if τ ≠ 0: noise samples at any two different time instants t and t + τ are uncorrelated.
R_NN(τ) = (η/2) δ(τ), i.e. the correlation is zero for τ ≠ 0.
The power spectral density is flat, S_NN(f) = η/2: power is spread uniformly over all frequencies (power is uniformly distributed over all frequency components, similar to 'white light'). Hence, it is termed "White Noise".
One of the most popular model + practically applicable for a digital communication system.
If the noise is white and Gaussian, the additive channel =⇒ AWGN Channel. Noise is additive +
white + Gaussian.
y(t) = x(t) + n(t), where x(t) = ao p(t), ao is the symbol, and p(t) is the pulse.
How do we design h(t) to maximize the signal power and minimize the noise power?
SNR = Signal Power / Noise Power, which is the fundamental quantity for analysing the performance of a communication system.
Signal term: ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ
Signal Power = E{|ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ|²}
= E{|ao|²} (∫_{−∞}^{∞} p(T − τ) h(τ) dτ)²
= Pd (∫_{−∞}^{∞} p(T − τ) h(τ) dτ)²
Since p(t) and h(t) are fixed, the integral is deterministic, and the expectation applies only to ao.
Noise term: ñ = ∫_{−∞}^{∞} h(τ) n(T − τ) dτ, the noise after passing through the receive filter h(t) (an LTI system) and sampling at t = T.
ñ(t) = ∫_{−∞}^{∞} h(τ) n(t − τ) dτ, and ñ = ñ(T) is a Gaussian random variable since it is a sample of a Gaussian random process: any linear transformation of a Gaussian process is again a Gaussian process, so ñ(t) is also a Gaussian random process.
E{ñ} = E{∫_{−∞}^{∞} h(τ) n(T − τ) dτ} = ∫_{−∞}^{∞} h(τ) E{n(T − τ)} dτ = 0
E{ñ²} = E{ññ}
= E{∫_{−∞}^{∞} h(τ) n(T − τ) dτ ∫_{−∞}^{∞} h(τ̃) n(T − τ̃) dτ̃}
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ) h(τ̃) E{n(T − τ) n(T − τ̃)} dτ dτ̃
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ) h(τ̃) (η/2) δ(τ − τ̃) dτ dτ̃
= (η/2) ∫_{−∞}^{∞} h²(τ) dτ
If h(τ) is not real, E{|ñ|²} = (η/2) ∫_{−∞}^{∞} |h(τ)|² dτ.
r(t) = ∫_{−∞}^{∞} h(τ){x(t − τ) + n(t − τ)} dτ
Signal = ao ∫_{−∞}^{∞} p(T − τ) h(τ) dτ; Noise = ∫_{−∞}^{∞} n(T − τ) h(τ) dτ
SNR = Signal Power / Noise Power = Pd (∫_{−∞}^{∞} p(T − τ) h(τ) dτ)² / ((η/2) ∫_{−∞}^{∞} |h(τ)|² dτ)
For maximum SNR, by the Cauchy–Schwarz inequality, the impulse response has to be matched to the pulse; hence the filter is termed the matched filter:
h(τ) = p(T − τ)
For example, for p(t) = e^{−t} u(t), the matched filter is h(τ) = p(T − τ) = e^{−(T−τ)} u(T − τ).
Substituting T − τ = τ̃, we have ∫_{−∞}^{∞} p²(T − τ) dτ = ∫_{−∞}^{∞} p²(τ̃) dτ̃. Therefore,
Maximum SNR = Pd ∫_{−∞}^{∞} |h(τ)|² dτ / (η/2) = Pd ∫_{−∞}^{∞} p²(τ) dτ / (η/2), for the matched filter h(τ) = p(T − τ).
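The Cauchy–Schwarz argument above can be verified in discrete time: the SNR (p·h)² / (‖h‖²·η/2), scaled by Pd, is never exceeded by any filter other than one proportional to the pulse. A minimal sketch; the sampled pulse, Pd, and η values are illustrative assumptions:

```python
import numpy as np

# Sketch: discrete-time check that h proportional to the pulse p maximizes
# SNR = Pd * (p . h)^2 / ((eta/2) * (h . h)), per the Cauchy-Schwarz bound.
rng = np.random.default_rng(0)
p = np.exp(-np.linspace(0, 3, 50))   # samples of a decaying pulse e^{-t}u(t)
Pd, eta = 1.0, 0.1                   # assumed symbol power and noise level

def snr(h):
    return Pd * np.dot(p, h)**2 / ((eta / 2) * np.dot(h, h))

best = snr(p)                        # matched choice: h aligned with p
for _ in range(1000):                # random competing filters never beat it
    h = rng.standard_normal(50)
    assert snr(h) <= best + 1e-9
print(best, 2 * Pd * np.dot(p, p) / eta)   # both equal Pd*Ep/(eta/2)
```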
Probability of Error:
We transmit several symbols. The probability of error is an important metric to characterize the performance of a digital communication system. Aim: to minimize the probability of error; typical values are 10^{−6} to 10^{−8}.
r(T) = ao Ep + ñ, where the symbol ao must belong to the digital modulation constellation.
ao = +A or −A
r(T ) = ±AEp + ñ
Information symbols map to the two symbols of the digital modulation constellation: bit 0 → +A, bit 1 → −A.
Bit stream: 0 1 1 0 0 1
Stream of digitally modulated symbols: +A −A −A +A +A −A
A. Noise: the noise pdf f_N(n) is a Gaussian bell curve; adding a constant shifts its mean (from µ to µ + k) while the spread σ² remains unchanged. Hence
r(T) = AEp + ñ ~ N(AEp, σ²), if ao = +A
r(T) = −AEp + ñ ~ N(−AEp, σ²), if ao = −A
Both conditional densities have the same variance, σ² = (η/2) Ep, and by symmetry their point of intersection is at 0.
r(T) = AEp + ñ or −AEp + ñ, and the decision threshold is 0:
if r(T) < 0, decide ao = −A; if r(T) > 0, decide ao = A.
r(T) ≥ 0 ⟹ decide A, which corresponds to information bit '0'; r(T) < 0 ⟹ decide −A, which corresponds to information bit '1'.
Probability of error:
When does an error occur? Error occurs when r(T) ≥ 0 for ao = −A, or when r(T) < 0 for ao = A. Writing Ã = AEp, the probability of error is
Pe = P(ñ ≥ Ã)
= 1 − ∫_{−∞}^{Ã} (1/√(2πσ²)) e^{−ñ²/(2σ²)} dñ
= ∫_{Ã}^{∞} (1/√(2πσ²)) e^{−ñ²/(2σ²)} dñ
= ∫_{Ã/σ}^{∞} (1/√(2π)) e^{−n′²/2} dn′
The integrand is the pdf of the standard Gaussian with mean zero and variance 1.
Q(u) = ∫_u^{∞} (1/√(2π)) e^{−x²/2} dx
The Gaussian Q-function Q(u) denotes the probability that a standard Gaussian random variable exceeds u. Hence
Pe = Q(Ã/σ)
= Q(AEp / √((η/2) Ep))
= Q(√(A² Ep / (η/2)))
= Q(√(2A² Ep / η))
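The Q-function has no closed form, but it can be computed from the complementary error function via Q(u) = erfc(u/√2)/2. A minimal sketch of the BPSK error probability Pe = Q(√(2A²Ep/η)); the parameter values are illustrative assumptions:

```python
import math

# Sketch: Gaussian Q-function via erfc, Q(u) = 0.5*erfc(u/sqrt(2)),
# applied to the BPSK error probability Pe = Q(sqrt(2*A^2*Ep/eta)).
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

A, Ep, eta = 1.0, 1.0, 0.2      # assumed amplitude, pulse energy, noise level
Pe = Q(math.sqrt(2 * A**2 * Ep / eta))
print(Pe)
```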
For the pulse p(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T:
Ep = ∫_{−∞}^{∞} p²(t) dt = ∫_0^T (2/T) cos²(2πfc t) dt = (1/T) ∫_0^T (1 + cos(4πfc t)) dt
= (1/T) ∫_0^T dt + (1/T) ∫_0^T cos(4πfc t) dt = 1
x(t) = ao p(t), where ao ∈ {−A, A}, each ao carries one bit of information.
Waveforms: √(2Eb/T) cos(2πfc t) for one bit and √(2Eb/T) cos(2πfc t + π) for the other; the second is the first shifted by π, and shifting by π again returns the original: √(2Eb/T) cos(2πfc t + π + π) = √(2Eb/T) cos(2πfc t).
r(T) ≥ 0 ⟹ decide ao = A = √Eb ⟹ bit = 0
r(T) < 0 ⟹ decide ao = −A = −√Eb ⟹ bit = 1
Optimal decision rule: probability of error = Q(√(A² Ep / (η/2))) = Q(√(Eb / (η/2))) = Q(√(2Eb/η))
This is the probability of error for BPSK with average energy per bit Eb.
Q(x) = ∫_x^{∞} (1/√(2π)) e^{−x′²/2} dx′ = P(X > x), where X ~ N(0, 1) is a standard Gaussian random variable.
For on–off keying, x(t) = A p(t) for bit '0' (ON) and x(t) = 0, 0 ≤ t ≤ T, for bit '1' (OFF). Both waveforms differ only in amplitude; therefore, this digital modulation scheme is termed amplitude shift keying (ASK).
At the receiver, y(t) = x(t) + n(t), where n(t) is AWGN, for matched filter, h(t) = p(T − t), followed
by sampling at T .
If ao = A: r(T) = AEp + ñ; if ao = 0: r(T) = ñ, where ñ is a Gaussian random variable with zero mean and variance (η/2) Ep = η/2 (since Ep = 1).
The two conditional means are 0 and AEp, so the 'detection threshold' lies midway, at AEp/2.
Therefore, decide ao = A if r(T) ≥ AEp/2, and decide ao = 0 if r(T) < AEp/2.
Define the shifted statistic r̃(T) = r(T) − AEp/2:
r̃(T) = AEp/2 + ñ, if ao = A
r̃(T) = −AEp/2 + ñ, if ao = 0
This is similar to BPSK with AEp replaced by AEp/2.
Decide ao = A if r̃(T) ≥ 0 ⟺ r(T) − AEp/2 ≥ 0 ⟺ r(T) ≥ AEp/2
Decide ao = 0 if r̃(T) < 0 ⟺ r(T) − AEp/2 < 0 ⟺ r(T) < AEp/2
The bit error rate (probability of bit error) is the same as for BPSK with AEp replaced by AEp/2:
Pe = Q((AEp/2) / √((η/2) Ep)) = Q(√(A² Ep / (2η)))
Thus, the probability of error for amplitude shift keying (ASK) is Pe = Q(√(A² Ep / (2η))).
With Ep = 1 and A² = 2Eb, Pe = Q(√(Eb/η))
For a fair comparison, both modulation schemes should have the same average energy per bit, Eb. Equating the BPSK and ASK error probabilities, Q(√(2Eb,BPSK/η)) = Q(√(Eb,ASK/η)) ⟹ Eb,BPSK = (1/2) × Eb,ASK.
Note: for the same BER, BPSK requires 3 dB lower average energy per bit Eb, or equivalently, ASK requires 3 dB higher average energy per bit Eb than BPSK.
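The 3 dB gap can be checked numerically: at equal Eb, BPSK gives Q(√(2Eb/η)) while ASK gives Q(√(Eb/η)), and doubling the ASK energy (+3 dB) recovers the BPSK performance. A minimal sketch; the Eb and η values are illustrative assumptions:

```python
import math

# Sketch: BPSK vs on-off ASK error probability at the same Eb, and ASK
# with Eb doubled (i.e. +3 dB), which matches BPSK.
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

Eb, eta = 4.0, 1.0                           # assumed values
pe_bpsk = Q(math.sqrt(2 * Eb / eta))
pe_ask = Q(math.sqrt(Eb / eta))
pe_ask_double = Q(math.sqrt(2 * Eb / eta))   # ASK with Eb -> 2*Eb
print(pe_bpsk, pe_ask, pe_ask_double)
```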
Signal Space:
Both pulses have unit energy; further, and more importantly, ∫_{−∞}^{∞} p1(t) p2(t) dt = 0, which is the inner product of the two signals. Since it is zero, the pulses are orthogonal.
p1(t) and p2(t) constitute an orthogonal basis for a signal space: all linear combinations α1 p1(t) + α2 p2(t), where αi ∈ R, are the elements of the signal space.
For example, p1(t) = √(2/T) cos(2πf1 t), 0 ≤ t ≤ T, and p2(t) = √(2/T) cos(2πf2 t), 0 ≤ t ≤ T.
The pulse duration T contains an integer number of cycles of p1(t) and p2(t): f1 = k1/T = k1 fo and f2 = k2/T = k2 fo.
E1 = ∫_{−∞}^{∞} p1²(t) dt = (2/T) ∫_0^T cos²(2πf1 t) dt = (2/T) ∫_0^T (1 + cos(4πf1 t))/2 dt = 1
Similarly, E2 = ∫_{−∞}^{∞} p2²(t) dt = 1 = Ep
∫_{−∞}^{∞} p1(t) p2(t) dt = (2/T) ∫_0^T cos(2πf1 t) cos(2πf2 t) dt
= (1/T) ∫_0^T (cos(2π(f1 − f2)t) + cos(2π(f1 + f2)t)) dt
= (1/T) ∫_0^T (cos(2π(k1 − k2)fo t) + cos(2π(k1 + k2)fo t)) dt
= (1/T) [sin(2π(k1 − k2)fo t) / (2π(k1 − k2)fo)]_0^T + (1/T) [sin(2π(k1 + k2)fo t) / (2π(k1 + k2)fo)]_0^T
= (1/2π) { (1/(k1 − k2)) (sin(2π(k1 − k2)fo T) − sin 0) + (1/(k1 + k2)) (sin(2π(k1 + k2)fo T) − sin 0) }
= 0, since fo T = 1 and the sine of any integer multiple of 2π vanishes.
p1 (t) and p2 (t) are orthogonal and unit energy, hence orthonormal basis for the signal space.
Any element of the signal space is x(t) = α1 p1(t) + α2 p2(t) = α1 √(2/T) cos(2πf1 t) + α2 √(2/T) cos(2πf2 t).
The two waveforms differ in frequency: one is obtained from the other by a shift in frequency.
Assuming '0' and '1' occur with probability 1/2 each, the average energy per bit is
Eb = (1/2) A² Ep + (1/2) A² Ep = A² Ep = A² ⟹ A = √Eb
Information bit '0' → A p1(t) = A √(2/T) cos(2πf1 t) = √(2Eb/T) cos(2πf1 t) = x(t)
Information bit '1' → A p2(t) = A √(2/T) cos(2πf2 t) = √(2Eb/T) cos(2πf2 t) = x(t)
At the Receiver:
y(t) = x(t) + n(t), where n(t) is Additive White Gaussian Noise with Rnn(τ) = (No/2) δ(τ)
For '0', we have y(t) = A p1(t) + n(t) ⟹ ỹ(t) = y(t) − A (p1(t) + p2(t))/2 = A (p1(t) − p2(t))/2 + n(t) = A p̃(t) + n(t)
For '1', we have y(t) = A p2(t) + n(t) ⟹ ỹ(t) = y(t) − A (p1(t) + p2(t))/2 = A (p2(t) − p1(t))/2 + n(t) = −A p̃(t) + n(t)
This is similar to BPSK, obtained by replacing p(t) with p̃(t) = (p1(t) − p2(t))/2.
Therefore, the optimum matched filter is h(t) = p̃(T − t), matched to p̃(t) = (p1(t) − p2(t))/2. The factor of 1/2 is a scaling factor which does not affect the filter, so
h(t) = p1(T − t) − p2(T − t) = √(2/T) cos(2πf1(T − t)) − √(2/T) cos(2πf2(T − t))
For ‘0’, y(t) = Ap1 (t) + n(t), and for ‘1’, y(t) = Ap2 (t) + n(t)
E{ñ} = 0, and
E{ñ²} = (No/2) ∫_{−∞}^{∞} h²(τ) dτ
= (No/2) ∫_{−∞}^{∞} {p1(τ) − p2(τ)}² dτ
= (No/2) ∫_{−∞}^{∞} {p1²(τ) + p2²(τ) − 2 p1(τ) p2(τ)} dτ
= (No/2) Ep + (No/2) Ep − 0
= No Ep = No
Corresponding to bit '0', y(t) = A p1(t) + n(t), and with the matched filter h(t) = p1(T − t) − p2(T − t), r(T) = AEp + ñ = A + ñ. Corresponding to bit '1', y(t) = A p2(t) + n(t), and r(T) = −AEp + ñ = −A + ñ.
This model is similar to BPSK except that σ² = No. By symmetry, the optimal decision rule is: r(T) ≥ 0 ⟹ decide ao = 0, and r(T) < 0 ⟹ decide ao = 1.
BER of ASK, FSK > BER of BPSK, because Q(·) is a decreasing function and the argument of the Q-function is smaller for ASK and FSK at the same Eb; ASK and FSK require 3 dB higher Eb than BPSK for the same BER.
Consider the following pulses p1(t) and p2(t), which are unit-energy signals:
p1(t) = √(2/T) cos(2πfc t), 0 ≤ t ≤ T, and p2(t) = √(2/T) sin(2πfc t), 0 ≤ t ≤ T,
where T is the duration of a symbol and T = k/fc. Observe that p2(t) is a phase-shifted version of p1(t), since p1(t) = √(2/T) cos(2πfc t) = √(2/T) sin(2πfc t + π/2), 0 ≤ t ≤ T.
The energy of p2(t) is
∫ p2²(t) dt = (2/T) ∫_0^T (1 − cos(4πfc t))/2 dt = (1/T) ∫_0^T dt − (1/T) [sin(4πfc t)/(4πfc)]_0^T = 1 − 0 = 1,
i.e. Ep = 1.
Inner Product:
∫_{−∞}^{∞} p1(t) p2(t) dt = (2/T) ∫_0^T cos(2πfc t) sin(2πfc t) dt
= (1/T) ∫_0^T sin(4πfc t) dt
= −(1/T) [cos(4πfc t)/(4πfc)]_0^T
= 0
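The unit-energy and orthogonality properties above can be checked numerically when T holds an integer number of carrier cycles. A minimal sketch; the carrier frequency, cycle count, and grid size are illustrative assumptions:

```python
import numpy as np

# Sketch: numerical check that p1(t) = sqrt(2/T)cos(2*pi*fc*t) and
# p2(t) = sqrt(2/T)sin(2*pi*fc*t) on [0, T], with T = k/fc, have unit
# energy and zero inner product.
fc, k = 1000.0, 4            # assumed carrier frequency and cycles per symbol
T = k / fc
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]
p1 = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
p2 = np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)

E1 = np.sum(p1 * p1) * dt    # ~1: unit energy of p1
E2 = np.sum(p2 * p2) * dt    # ~1: unit energy of p2
ip = np.sum(p1 * p2) * dt    # ~0: orthogonality
print(E1, E2, ip)
```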
These pulses, p1 (t) and p2 (t) have unit energy and their inner product is zero, so they are unit energy
+ orthogonal =⇒ orthonormal basis of signal space. Note that these two signals are similar to those
in Quadrature Carrier Multiplexing (QCM), in which the signals are modulated over m1 cos(2πfc t) −
m2 sin(2πfc t)
Two separate message signals m1 (t) and m2 (t) can be modulated on orthogonal carriers.
We generate a signal, x(t), where x(t) = a1 p1 (t) + a2 p2 (t), each of the two pulses carries one symbol
Here, we see that a signal has two orthogonal pulses, thus each signal carries 2 bits of information in
one symbol cycle, whereas in previous scheme, only one bit per symbol cycle can be transmitted.
In QPSK, the signal is given by x(t) = a1 p1(t) + a2 p2(t), where ai ∈ {−A, +A}, so there are four possible waveforms.
x(t) ∈ { A p1(t) + A p2(t), A p1(t) − A p2(t), −A p1(t) + A p2(t), −A p1(t) − A p2(t) }
Consider a1 = a2 = A: x(t) = A √(2/T) cos(2πfc t) + A √(2/T) sin(2πfc t) = A √(2/T) {cos(2πfc t) + sin(2πfc t)} = (2A/√T) cos(2πfc t − π/4).
In general, the four waveforms are
x(t) ∈ { (2A/√T) cos(2πfc t − π/4), (2A/√T) cos(2πfc t + π/4), (2A/√T) cos(2πfc t + 3π/4), (2A/√T) cos(2πfc t + 5π/4) }
Note that each consecutive waveform is shifted from the previous one by a phase difference of π/2, and the last waveform is likewise shifted from the first by π/2. Each waveform is shifted from its neighbours by a phase of π/2, or 90°, i.e. a quadrature. This is why this scheme is called Quadrature Phase Shift Keying (QPSK).
We can represent these waveforms using a 2-dimensional constellation diagram. [Constellation diagram: the four points (A, A), (−A, A), (−A, −A), (A, −A) at phase angles π/4, 3π/4, 5π/4, and −π/4.]
x(t) = a1 p1(t) + a2 p2(t), where p1(t) and p2(t) are orthonormal pulses. Thus, the optimal receiver uses two matched filters, h1(t) = p1(T − t) and h2(t) = p2(T − t), whose sampled outputs are a1 Ep + ñ1 and a2 Ep + ñ2 respectively.
y(t) = a1 p1(t) + a2 p2(t) + n(t), with the matched filter impulse response h1(t) = p1(T − t):
(a1 p1(t) + a2 p2(t)) ∗ h1(t) = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) h1(t − τ) dτ = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p1(T + τ − t) dτ
Sampling at t = T, the signal part is
∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p1(τ) dτ = a1 ∫_{−∞}^{∞} p1²(τ) dτ + a2 ∫_{−∞}^{∞} p2(τ) p1(τ) dτ = a1 Ep = a1
r1(T) = a1 + n(t) ∗ h1(t)|_{t=T} = a1 + ñ1, where ñ1 is zero-mean Gaussian with variance (No/2) Ep = No/2.
The final output of the sampler is r1(T) = a1 + ñ1, where a1 = ±A and A = √Eb.
This output is similar to BPSK. By symmetry, the optimal decision rule is: if r1(T) ≥ 0, decide a1 = A; otherwise decide a1 = −A.
Similarly for a2:
y(t) = a1 p1(t) + a2 p2(t) + n(t), with the matched filter impulse response h2(t) = p2(T − t):
(a1 p1(t) + a2 p2(t)) ∗ h2(t) = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) h2(t − τ) dτ = ∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p2(T + τ − t) dτ
Sampling at t = T, the signal part is
∫_{−∞}^{∞} (a1 p1(τ) + a2 p2(τ)) p2(τ) dτ = a1 ∫_{−∞}^{∞} p1(τ) p2(τ) dτ + a2 ∫_{−∞}^{∞} p2²(τ) dτ = a2 Ep = a2
r2(T) = a2 + n(t) ∗ h2(t)|_{t=T} = a2 + ñ2, where ñ2 is zero-mean Gaussian with variance (No/2) Ep = No/2.
The final output of the sampler is r2(T) = a2 + ñ2, where a2 = ±A and A = √Eb.
This output is similar to BPSK. By symmetry, the optimal decision rule is: if r2(T) ≥ 0, decide a2 = A; otherwise decide a2 = −A.
[Receiver block diagram: y(t) is passed through two matched filters, h1(t) = p1(T − t) and h2(t) = p2(T − t), each sampled at t = T to give r1(T) and r2(T), followed by hard thresholding.]
The bit error rate (BER) of each component will be similar to BPSK.
BER of the 1st channel = Q(A / √(No/2)) = Q(√(2Eb/No))
Similarly, BER of the 2nd channel = Q(A / √(No/2)) = Q(√(2Eb/No))
Thus, the BER for both channels is Pe1 = Pe2 = Pe = Q(√(2Eb/No))
The probability of error of the first channel is Pe; the probability that a1 is received correctly in the first channel is Pc = 1 − Pe. Similarly, the probability of error of the second channel is Pe, and the probability that a2 is received correctly is 1 − Pe.
Assume that the events 'a1 is received correctly' and 'a2 is received correctly' are independent. Then
P(symbol error) = 1 − P(both bits received correctly) = 1 − (1 − Pe)² = 2Pe − Pe² = 2Q(√(2Eb/No)) − Q²(√(2Eb/No))
For high SNR Eb/No, Q(√(2Eb/No)) << 1 ⟹ Q²(√(2Eb/No)) << Q(√(2Eb/No)).
Hence the overall probability of symbol error, or symbol error rate, is Pe ≈ 2Q(√(2Eb/No)).
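The accuracy of the high-SNR approximation can be checked numerically by comparing the exact QPSK symbol error rate 2q − q² with 2q, where q = Q(√(2Eb/No)). A minimal sketch; the Eb/No value is an illustrative assumption:

```python
import math

# Sketch: exact vs approximate QPSK symbol error rate with
# q = Q(sqrt(2*Eb/No)): exact = 2q - q^2, approx = 2q.
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

EbNo = 10 ** (8 / 10)          # Eb/No = 8 dB (assumed value)
q = Q(math.sqrt(2 * EbNo))
ser_exact = 2 * q - q * q
ser_approx = 2 * q
print(ser_exact, ser_approx)
```

At this SNR the two values agree to within a small fraction of a percent, confirming the approximation is tight.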
For example, for 4 levels, i.e. M = 4, we need 2 bits for representation, each level can be represented
as below:
'00' → Level 0
'01' → Level 1
'10' → Level 2
'11' → Level 3
For another example, M = 8: the number of bits is log2 8 = 3, and the eight possible values are ao ∈ {±A, ±3A, ±5A, ±7A}. All levels in the constellation are separated by 2A, so this is an M-ary (here 8-ary) PAM, pulse amplitude modulation, constellation.
The number of bits per symbol is log2 M. Let the average symbol energy, or average energy per symbol, be Es.
By symmetry, −A and A will have the same energy so average can be calculated by taking one side
itself.
Es = (1/(M/2)) Σ_{i=0}^{M/2−1} (2i + 1)² A²
= (2A²/M) Σ_{i=0}^{M/2−1} (4i² + 4i + 1)
= (2A²/M) { 4 · (M/2 − 1)(M/2)(M − 1)/6 + 4 · (M/2 − 1)(M/2)/2 + M/2 }
= (2A²/M) · M(M² − 1)/6
= (A²/3)(M² − 1)
The average energy per symbol, Es = (A²/3)(M² − 1), is a function of M and A. Note that the matched filter receiver remains the same as before.
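The closed form Es = (A²/3)(M² − 1) can be verified by brute force against a direct average over the levels {±A, ±3A, ..., ±(M − 1)A}. A minimal sketch; the A value is an illustrative assumption:

```python
# Sketch: brute-force check of Es = (A^2/3)*(M^2 - 1) for M-ary PAM
# levels {+/-A, +/-3A, ..., +/-(M-1)A}, assumed equiprobable.
A = 2.0                                    # assumed level spacing parameter
for M in (2, 4, 8, 16):
    levels = [(2 * i + 1) * A * s for i in range(M // 2) for s in (+1, -1)]
    Es_avg = sum(a * a for a in levels) / M      # direct average energy
    Es_formula = (A * A / 3) * (M * M - 1)       # closed form
    print(M, Es_avg, Es_formula)
```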
We can say this because the matched filter principle did not assume any specific modulation scheme or number of levels, only the pulse shape. Thus, for the M-ary scheme the matched filter is still h(t) = p(T − t).
After matched filtering and sampling at t = T, r(T) = ao Ep + ñ = ao + ñ, where ao = ±(2i + 1)A, 0 ≤ i ≤ M/2 − 1, and ñ is zero-mean Gaussian noise with variance (No/2) Ep.
[Figure: M = 8 PAM constellation with levels spaced 2A apart; e.g. decide ao = −3A when r(T) is nearest to −3A.]
The decision rule is a combination of M cases, one for each point. Such a decision rule is also known as the nearest neighbour rule, as the decision for a symbol is based on the level nearest to the sampled (received) signal. The boundary points, however, have a slightly different rule, as they only have one neighbour.
M-ary PAM (Pulse Amplitude Modulation) - Part - II, Optimal Decision Rule, Probability of error.
[Figure: M-ary PAM decision regions with levels spaced 2A apart; e.g. decide ao = 5A when r(T) falls in the interval around 5A, and ao = A when it falls around A.]
Nearest neighbour decision rule: decide ao to be the constellation point closest to r(T), i.e. the nearest neighbour of the detected (matched filter output) sample.
At the two end points the situation is slightly different. The end points −(M − 1)A and (M − 1)A, i.e. ±(2i + 1)A with i = M/2 − 1, have only one neighbour. Thus, the decision rule for the end points is:
Decide ao = −(M − 1)A if r(T) < −(M − 2)A, and decide ao = (M − 1)A if r(T) ≥ (M − 2)A.
D. Probability of error:
Consider transmission of an interior point, e.g. ao = A, whose decision region is 0 ≤ r(T) < 2A (its neighbours are −A and 3A). Then r(T) = A + ñ, and
Pe = P(r(T) < 0 ∪ r(T) ≥ 2A), where the events ϕ1 = {r(T) < 0} and ϕ2 = {r(T) ≥ 2A} are mutually exclusive.
r(T) < 0 ⟹ ñ < −A, and r(T) ≥ 2A ⟹ ñ ≥ A.
By symmetry, these two tail probabilities are equal: P{ñ ≥ A} = P{ñ < −A}. Hence
Pe = P{ñ ≥ A} + P{ñ < −A} = 2P{ñ ≥ A} = 2P{ñ/√(No/2) ≥ A/√(No/2)} = 2Q(√(A²/(No/2))) = 2Q(√(2A²/No))
The same can be said for all other interior points. Now consider the exterior points, e.g. transmission of the point ao = (M − 1)A, so r(T) = (M − 1)A + ñ. An error occurs if r(T) < (M − 2)A ⟹ (M − 1)A + ñ < (M − 2)A ⟹ ñ < −A. Thus, due to symmetry,
Pe = P{ñ < −A} = P{ñ > A} = P{ñ/√(No/2) > A/√(No/2)} = Q(√(A²/(No/2)))
Averaging over the M − 2 interior points (error probability 2Q(√(2A²/No)) each) and the 2 exterior points (Q(√(2A²/No)) each), and substituting A² = 3Es/(M² − 1), the average probability of error for M-PAM is
Pe = 2(1 − 1/M) Q(√(6Es/((M² − 1)No)))
For M = 2, Pe = Q(√(2Es/No)), and the number of bits per symbol is log2 M.
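The M-PAM symbol error rate formula above is easy to evaluate; a minimal sketch, where the Es/No value is an illustrative assumption:

```python
import math

# Sketch: average symbol error rate of M-ary PAM,
# Pe = 2*(1 - 1/M)*Q(sqrt(6*Es/((M^2 - 1)*No))).
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

def pam_ser(M, EsNo):
    return 2 * (1 - 1 / M) * Q(math.sqrt(6 * EsNo / (M * M - 1)))

EsNo = 10 ** (20 / 10)        # Es/No = 20 dB (assumed value)
print([pam_ser(M, EsNo) for M in (2, 4, 8)])
```

As expected, larger M packs the levels closer together for the same Es, so the error rate grows with M.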
Similar to QPSK (Quadrature Phase Shift Keying), there are two pulses, p1 (t) and p2 (t), and the
transmitted signal, x(t) = a1 p1 (t) + a2 p2 (t), where a1 and a2 are two independent and separate symbols
and they have unit energy, and are shifted by a phase difference of π2 .
For M-ary QAM, the total number of symbols is M, and we have two sets of levels, one for a1 and one for a2.
Both a1 and a2 are drawn from a √M-ary PAM constellation.
Total number of symbols in QAM = √M × √M = M. Similar to PAM, we now use √M-ary PAM, i.e. ai ∈ √M-ary PAM for i ∈ {1, 2}:
a1 = ±(2i + 1)A, 0 ≤ i ≤ √M/2 − 1, and a2 = ±(2j + 1)A, 0 ≤ j ≤ √M/2 − 1
This naturally means that √M has to be an integer.
For example, for 64-QAM, M = 64, √M = 8 ⟹ √M/2 = 4.
Let the average symbol energy be Es; then the average symbol energy per a1 is Es/2 and the average symbol energy per a2 is Es/2, i.e. each constituent PAM has half the symbol energy.
Each PAM is a √M-ary PAM with distance 2A between levels, thus Es/2 = (A²/3)(M − 1) ⟹ A = √(3Es/(2(M − 1)))
M-ary QAM (Quadrature Amplitude Modulation)-Part-II Optimal Decision Rule, Probability of Error,
Constellation Diagram
The received signal is given by y(t) = a1 p1(t) + a2 p2(t) + n(t), where p1(t) and p2(t) are orthonormal functions (pulses).
The receive filters are similar to those of QPSK, i.e. two matched filters h1(t) = p1(T − t) and h2(t) = p2(T − t). The outputs of the filters after sampling are
r1(T) = a1 + ñ1
r2(T) = a2 + ñ2
for the case Ep = 1, where ñ1 and ñ2 are zero-mean Gaussian noise with variance No/2.
Decision Rule: decide a1 from r1(T) and a2 from r2(T). Since a1 and a2 are independent PAM symbols, each decision is the same nearest neighbour rule as for PAM, but applied to √M-ary PAM:
the interior levels, spaced 2A apart, use the nearest level; decide ai = −(√M − 1)A if ri(T) < −(√M − 2)A, and decide ai = (√M − 1)A if ri(T) ≥ (√M − 2)A.
The probability of error for √M-ary PAM is Pe = 2(1 − 1/√M) Q(A/√(No/2)).
This is the probability of error for a1 and a2, each belonging to √M-ary PAM, where Q(·) is the complementary CDF of the standard Gaussian.
M-ary QAM:
Given symbol energy Es, each individual √M-ary PAM has average energy Es/2, and A = √(3Es/(2(M − 1))) ensures average symbol energy Es/2 for each √M-ary PAM, i.e. average energy Es for the overall M-ary QAM.
The average symbol error rate for each constituent √M-ary PAM is
Pe,PAM = 2(1 − 1/√M) Q(√(3Es/(2(M − 1))) / √(No/2)) = 2(1 − 1/√M) Q(√(3Es/((M − 1)No)))
The error rate for the overall M-ary QAM constellation is
Pe,QAM = 1 − (1 − Pe,PAM)² = 1 − (1 + Pe,PAM² − 2Pe,PAM) = 2Pe,PAM − Pe,PAM² ≈ 2Pe,PAM,
since at high SNR Pe,PAM² is very small in comparison to Pe,PAM. Hence
Pe,QAM ≈ 4(1 − 1/√M) Q(√(3Es/((M − 1)No)))
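The relation between the QAM symbol error rate and its constituent PAM error rate can be checked numerically. A minimal sketch; the M and Es/No values are illustrative assumptions:

```python
import math

# Sketch: M-ary QAM symbol error rate from its two sqrt(M)-ary PAM
# components, Pe_QAM = 1 - (1 - Pe_PAM)^2 ~= 2*Pe_PAM at high SNR, with
# Pe_PAM = 2*(1 - 1/sqrt(M))*Q(sqrt(3*Es/((M-1)*No))).
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

M = 16                              # assumed constellation size
EsNo = 10 ** (18 / 10)              # Es/No = 18 dB (assumed value)
pe_pam = 2 * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * EsNo / (M - 1)))
pe_qam_exact = 1 - (1 - pe_pam) ** 2
pe_qam_approx = 2 * pe_pam
print(pe_qam_exact, pe_qam_approx)
```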
[Constellation diagram: 16-QAM with a1, a2 ∈ {−3A, −A, A, 3A}; e.g. the point a1 = 3A, a2 = −A.]
Number of bits per symbol = log2 M; for M = 16, number of bits = log2 16 = 4 bits, i.e., 2 bits on each of the in-phase and quadrature components.
In the QAM scheme, we can increase M in order to increase the bit rate without increasing the symbol
transfer rate.
For M = 4 QAM, there are 4 symbols, i.e 2 bits, where one bit on each phase (in-phase and quadrature
phase).
Higher modulation and adaptive modulation are important to achieve higher data rates in 3G, 4G, and
5G.
If the symbol rate is 10M symbols per second and we use 4-QAM, then there are 2 bits per symbol and the data rate is 20 Mbps. If we are able to use 1024-QAM, then there are 10 bits per symbol and the data rate is 100 Mbps. By using higher-order modulation, the data rate is increased by a factor of 5; a higher data rate thus depends on the modulation order.
The choice of modulation scheme depends on the user requirement and also on the channel condition, i.e. whether the wireless channel is able to support such a data rate. So one can adaptively choose an appropriate modulation to meet the data requirement and, of course, deliver the maximum data rate to each user.
So QAM is the modulation scheme that increases the data rate significantly as we go from 3G to 4G to 5G wireless technologies, and it is a very general and one of the most widely used modulation schemes.
M-ary PSK (Phase Shift Keying) - Part-I (Introduction, transmitted waveform, constellation diagram)
M-ary PSK, or phase shift keying, has M symbols and is a phase-shift-based digital modulation scheme. It generalizes binary phase shift keying, which has 2 phases, to M possible phases: its constellation has M symbols, and there are log2 M bits per symbol.
[Constellation diagram: M points on a circle of radius A, at angles 0 (i = 0), 2π/M (i = 1), 4π/M (i = 2), and so on.]
The M phases can be represented on a circle of radius A: the first phase (i = 0) is on the x-axis, the second at 2π/M, then 4π/M, and so on.
The transmitted signal is given by x(t) = a1 p1(t) + a2 p2(t), where p1(t) and p2(t) are orthonormal pulses, and
a1 ∈ {A cos(k 2π/M)}, a2 ∈ {A sin(k 2π/M)}, k = 0, 1, 2, ..., M − 1
The average symbol energy is
Es = E{a1² + a2²} = E{(A cos(k 2π/M))² + (A sin(k 2π/M))²} = A² E{(cos(k 2π/M))² + (sin(k 2π/M))²} = A²
This implies that A = √Es: for a fixed average symbol energy Es, the radius of the circle is √Es.
M-ary PSK (Phase Shift Keying) Part-II, Optimal decision rule, Nearest Neighbour Criterion, Approximate probability of error.
The transmitted signal is x(t) = a1 p1(t) + a2 p2(t), and the received signal is y(t) = x(t) + n(t), where n(t) is additive Gaussian noise with variance No/2. The matched filters h1(t) = p1(T − t) and h2(t) = p2(T − t) give
r1(T) = a1 Ep + ñ1 = a1 + ñ1, and r2(T) = a2 Ep + ñ2 = a2 + ñ2
The decision rule is to look for the constellation point sk nearest to r(T) = (r1(T), r2(T)), i.e. choose min_k ∥r(T) − sk∥, where
∥r(T) − sk∥ = √((r1(T) − √Es cos(k 2π/M))² + (r2(T) − √Es sin(k 2π/M))²)
This expression cannot be manipulated into an exact analytical expression for the BER, so we use a nearest-neighbour approximation.
Consider QPSK: the constellation points are (√Eb, √Eb), (−√Eb, √Eb), (−√Eb, −√Eb), (√Eb, −√Eb), so dmin = 2√Eb.
This is a very useful expression for computing the error rate of a digital communication system when the exact expression is difficult to obtain.
M-ary PSK:
[Figure: two adjacent constellation points on the circle of radius √Es, separated by angle 2π/M; the half-chord satisfies dmin/2 = √Es sin(π/M).]
In M-ary PSK, a constellation point has two nearest neighbours, and the distance to each is dmin = 2√Es sin(π/M).
The probability of error is approximately
Pe ≈ 2Q(dmin / (2√(No/2))) = 2Q(2√Es sin(π/M) / √(2No)) = 2Q(√(2Es sin²(π/M) / No))
This is an approximate expression for the SER, but it is a very tight approximation (very close to the actual value) for high values of Es/No.
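The nearest-neighbour approximation above is straightforward to evaluate; a minimal sketch, where the Es/No value is an illustrative assumption:

```python
import math

# Sketch: high-SNR approximation of the M-ary PSK symbol error rate,
# Pe ~= 2*Q(sqrt(2*Es/No) * sin(pi/M)), from dmin = 2*sqrt(Es)*sin(pi/M).
def Q(u):
    return 0.5 * math.erfc(u / math.sqrt(2))

def psk_ser(M, EsNo):
    return 2 * Q(math.sqrt(2 * EsNo) * math.sin(math.pi / M))

EsNo = 10 ** (15 / 10)        # Es/No = 15 dB (assumed value)
print([psk_ser(M, EsNo) for M in (4, 8, 16)])
```

Increasing M shrinks sin(π/M), i.e. the points move closer on the circle, so the error rate grows with M for fixed Es.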