4.2.4 Orthogonal, Biorthogonal and Simplex Signals

In PAM, QAM and PSK, we had only one basis function. For orthogonal, biorthogonal and simplex signals, however, we use more than one orthogonal basis function, so the signal space is N-dimensional. Examples of basis sets: the Fourier basis; time-translated pulses; the Walsh-Hadamard basis.
o We've become used to the SER getting worse quickly as we add bits to the symbol, but with these orthogonal signal sets it actually gets better.
o The drawback is bandwidth occupancy; the number of dimensions available in bandwidth W and symbol time $T_s$ is
$$N \approx 2 W T_s$$
o So we use these signal sets when the power budget is tight but there's plenty of bandwidth.
Orthogonal Signals

With orthogonal signals, we select only one of the orthogonal basis functions for transmission: $\mathbf{s}_m = \sqrt{E_s}\,\boldsymbol{\phi}_m$, $m = 1, \ldots, M$, so the number of signals equals the number of dimensions, $M = N$.
Examples of orthogonal signal sets are frequency-shift keying (FSK), pulse position modulation (PPM), and a choice of Walsh-Hadamard functions (note that with the Fourier basis, it's FSK, not OFDM).
o The signals are equidistant, as can be seen from the sketch or from
$$\|\mathbf{s}_i - \mathbf{s}_j\|^2 = 2E_s \quad \text{for } i \ne j,$$
since $\mathbf{s}_i - \mathbf{s}_j$ has $+\sqrt{E_s}$ in location $i$ and $-\sqrt{E_s}$ in location $j$.
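As a quick numerical check of this equidistance, here is a minimal numpy sketch (the values of M and Es are illustrative, not from the notes):

```python
import numpy as np

Es = 4.0          # symbol energy (illustrative value)
M = 8             # number of signals = number of dimensions N

# Orthogonal signal vectors: sqrt(Es) along each coordinate axis; row m is s_m
S = np.sqrt(Es) * np.eye(M)

# Pairwise distances between all signals
d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
off_diag = d[~np.eye(M, dtype=bool)]

print(np.allclose(off_diag, np.sqrt(2 * Es)))   # True: every pair is sqrt(2*Es) apart
```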
o For a fixed energy per bit, adding more bits increases the minimum distance, since $d_{\min} = \sqrt{2E_s} = \sqrt{2 k E_b}$ grows with $k$: a strong contrast to PAM, QAM and PSK.
o But adding each bit doubles the number of signals M, which equals the number of dimensions N, and that doubles the bandwidth!

Error analysis for orthogonal signals. Equal-energy, equiprobable signals, so the optimum receiver is the maximum-correlation detector: choose the $m$ that maximizes the correlation $c_m$ of $\mathbf{r}$ with $\boldsymbol{\phi}_m$.
o The error probability is the same for all signals. If $\mathbf{s}_1$ was sent, then the correlation vector is
$$\mathbf{c} = \begin{bmatrix} \sqrt{E_s} + n_1 \\ n_2 \\ \vdots \\ n_M \end{bmatrix}$$
with real components, where the $n_m$ are i.i.d. Gaussian with zero mean and variance $N_0/2$.
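For intuition, here is a minimal Monte Carlo sketch of this receiver (numpy assumed; the values of M, Es and N0 are illustrative, not from the notes): generate the correlation vector above, decide on the largest component, and estimate the SER.

```python
import numpy as np

rng = np.random.default_rng(0)

M, Es, N0 = 16, 6.0, 1.0        # illustrative values
n_trials = 200_000

# Without loss of generality s_1 is sent: c = (sqrt(Es)+n1, n2, ..., nM)
noise = rng.normal(0.0, np.sqrt(N0 / 2), size=(n_trials, M))
c = noise.copy()
c[:, 0] += np.sqrt(Es)

decisions = np.argmax(c, axis=1)        # max-correlation detector
ser = np.mean(decisions != 0)           # symbol error whenever index 0 does not win
print(f"estimated SER = {ser:.4f}")
```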
o Assume $\mathbf{s}_1$ was sent. The probability of a correct symbol decision, conditioned on the value of the received $c_1$, is
$$P(\text{correct} \mid c_1) = P(c_2 < c_1, \ldots, c_M < c_1 \mid c_1) = \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1}$$
Averaging over the pdf of $c_1$,
$$P_c = \int_{-\infty}^{\infty} \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1} p_{c_1}(c_1)\, dc_1 = \int_{-\infty}^{\infty} \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1} \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{(c_1 - \sqrt{E_s})^2}{N_0}\right) dc_1$$
and with the substitution $u = c_1/\sqrt{N_0/2}$,
$$P_c = \int_{-\infty}^{\infty} \left(1 - Q(u)\right)^{M-1} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}\left(u - \sqrt{\frac{2E_s}{N_0}}\right)^2\right) du$$
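A minimal sketch that evaluates this integral numerically for the symbol error probability $P_s = 1 - P_c$ (scipy assumed available; the function name and the Eb/N0 value are illustrative choices, not from the notes):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def ser_orthogonal(M, es_over_n0):
    """Symbol error probability of M-ary orthogonal signaling via the P_c integral."""
    Q = norm.sf                              # Q(x) = P(N(0,1) > x)
    shift = np.sqrt(2.0 * es_over_n0)        # sqrt(2 Es / N0)

    def integrand(u):
        # [1 - Q(u)]^(M-1) times a unit-variance Gaussian pdf centred at shift
        return (1.0 - Q(u)) ** (M - 1) * norm.pdf(u - shift)

    Pc, _ = integrate.quad(integrand, shift - 12.0, shift + 12.0)
    return 1.0 - Pc

# Example: fix Eb/N0 and watch the SER fall as bits are added to the symbol
eb_over_n0 = 10 ** (4.0 / 10.0)              # 4 dB, illustrative
for k in (1, 2, 4, 6):
    M = 2 ** k
    print(M, ser_orthogonal(M, k * eb_over_n0))
```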
When a symbol error occurs, all $M-1$ wrong symbols are equally likely, so about half the bits are in error in the average symbol error: $P_b = \frac{2^{k-1}}{2^k - 1}\, P_s \approx \frac{P_s}{2}$. The SER for orthogonal signals is striking:
[Figure: Symbol Error Prob, Orthogonal Signals. Vertical axis: probability of symbol error on a log scale, 1 down to 1e-6.]
o The number of signals $M = 2^k$ grows exponentially with $k$, while $E_s = k E_b$ increases only linearly with $k = \log_2(M)$; the pairwise error probability (between two specific points) nevertheless decreases exponentially with $k$, the number of bits per symbol.
o Which effect wins? We can't get much analytical traction from the exact expression for $P_c$, so we turn to the union bound.
o Without loss of generality, assume $\mathbf{s}_1$ was transmitted.
o Define $E_m$, $m = 2, \ldots, M$, as the event that $\mathbf{r}$ is closer to $\mathbf{s}_m$ than to $\mathbf{s}_1$.
o Note that $E_m$ is a pairwise error. It does not imply that the receiver's decision is $m$. In fact, we have events $E_2, \ldots, E_M$ and they are not mutually exclusive, since $\mathbf{r}$ could lie closer to two or more points than to $\mathbf{s}_1$:
$$P_s = P\!\left[\bigcup_{m=2}^{M} E_m\right] \le \sum_{m=2}^{M} P[E_m] = (M-1)\, Q\!\left(\sqrt{\frac{E_s}{N_0}}\right) < e^{-\frac{k}{2}\left(\gamma_b - 2\ln 2\right)}$$
where $E_s/N_0 = \log_2(M)\,\gamma_b = k\,\gamma_b$ and $\gamma_b = E_b/N_0$.
A general expansion is
$$P[A_1 \cup A_2 \cup \cdots \cup A_M] = \sum_{i=1}^{M} P[A_i] - \sum_{i=1}^{M} \sum_{j > i} P[A_i \cap A_j] + \sum_{i=1}^{M} \sum_{j > i} \sum_{k > j} P[A_i \cap A_j \cap A_k] - \text{etc.}$$
Keeping only the first sum always overestimates the probability of the union, which gives the union bound used above.
o From the bound, if $\gamma_b > 2\ln(2) = 1.386$ (about 1.4 dB), then loading more bits onto a symbol drives the SER toward zero exponentially in $k$; but the number of dimensions, and hence the bandwidth, also increase exponentially with $k$. This scheme is called block orthogonal coding. Thresholds are characteristic of coded systems. Remember that this analysis is based on bounds.
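To see the threshold, here is a small sketch (parameter values are illustrative, and the helper name is mine) that evaluates the bound $e^{-\frac{k}{2}(\gamma_b - 2\ln 2)}$ for a $\gamma_b$ above and below $2\ln 2$:

```python
import numpy as np

def union_bound(k, gamma_b):
    """Exponential upper bound on SER for M = 2**k orthogonal signals."""
    return np.exp(-0.5 * k * (gamma_b - 2 * np.log(2)))

for gamma_b in (2.0, 1.0):           # above and below 2*ln(2) = 1.386
    bounds = [union_bound(k, gamma_b) for k in (2, 5, 10, 20)]
    print(f"gamma_b = {gamma_b}: {bounds}")
# Above threshold the bound collapses toward 0 as k grows;
# below threshold it grows with k and the bound is useless.
```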
[Figure: SER and Bound, Orthogonal Signals. Probability of symbol error together with its upper bound, log scale from 10 down to 1e-6.]
The upper bound is useful to show the decreasing SER, but it is too loose for a good approximation.
Biorthogonal Signals
If you have coherent detection, why waste the other side of the axis? Double the number of signal points (add one bit) by transmitting $\pm\sqrt{E_s}\,\boldsymbol{\phi}_n$, still with energy $E_s$. Now $M = 2N$, with no increase in bandwidth. Or keep the same $M$ and cut the bandwidth in half.
The exact probability of error is messy, but the union bound is easy. A signal is at distance $\sqrt{2E_s}$ from all other signals except its own complement, which is at distance $2\sqrt{E_s}$, so
$$P_s \le (M-2)\, Q\!\left(\sqrt{\frac{E_s}{N_0}}\right) + Q\!\left(\sqrt{\frac{2E_s}{N_0}}\right)$$
which is slightly less than for orthogonal signaling with the same $M$ and the same $\gamma_b$. The major benefit is that biorthogonal needs only half the bandwidth of orthogonal, since it has half the number of dimensions.
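A minimal sketch (illustrative values; the Q-function is taken from scipy) comparing the two union bounds for the same $M$ and the same $\gamma_b = E_b/N_0$:

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf                                  # Q(x) = P(N(0,1) > x)

def ub_orthogonal(M, gamma_b):
    es_n0 = np.log2(M) * gamma_b
    return (M - 1) * Q(np.sqrt(es_n0))

def ub_biorthogonal(M, gamma_b):
    es_n0 = np.log2(M) * gamma_b
    return (M - 2) * Q(np.sqrt(es_n0)) + Q(np.sqrt(2 * es_n0))

M, gamma_b = 16, 10 ** (6.0 / 10.0)          # illustrative: M = 16, Eb/N0 = 6 dB
print(ub_orthogonal(M, gamma_b), ub_biorthogonal(M, gamma_b))
# Biorthogonal is slightly smaller, and it uses only N = M/2 dimensions.
```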
Simplex Signals
The orthogonal signals can be seen as a mean value, shared by all, plus signal-dependent increments:
$$\mathbf{s}_m = \bar{\mathbf{s}} + \mathbf{s}'_m, \qquad \text{where } \bar{\mathbf{s}} = \frac{1}{M}\sum_{m=1}^{M} \mathbf{s}_m .$$
The simplex signals are the increments alone, $\mathbf{s}'_m = \mathbf{s}_m - \bar{\mathbf{s}}$; transmitting them saves the energy spent on the common mean.
They still have equal energy (though they are no longer orthogonal). That energy is
$$E'_s = \|\mathbf{s}'_m\|^2 = \|\mathbf{s}_m - \bar{\mathbf{s}}\|^2 = E_s\left(1 - \frac{2}{M} + \frac{1}{M}\right) = E_s\left(1 - \frac{1}{M}\right)$$
The normalized cross-correlations are
$$\frac{\langle \mathbf{s}'_m, \mathbf{s}'_n \rangle}{E'_s} = -\frac{1}{M-1} \qquad \text{for } m \ne n,$$
a uniform negative correlation. The SER is easy: translation doesn't change the error rate, so the SER is that of orthogonal signals with an SNR boost of $\frac{M}{M-1}$.
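A minimal numpy sketch (M and Es are illustrative values) that builds the simplex set from the orthogonal set and verifies the energy saving and the uniform correlation:

```python
import numpy as np

M, Es = 8, 4.0                                  # illustrative values
S = np.sqrt(Es) * np.eye(M)                     # orthogonal signals, one per row
S_simplex = S - S.mean(axis=0)                  # subtract the common mean

energies = np.sum(S_simplex ** 2, axis=1)
print(np.allclose(energies, Es * (1 - 1 / M)))  # True: E's = Es (1 - 1/M)

G = S_simplex @ S_simplex.T                     # Gram matrix of the simplex set
rho = G[0, 1] / energies[0]                     # normalized cross-correlation
print(np.isclose(rho, -1 / (M - 1)))            # True: uniform -1/(M-1)
```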
More signal sets defined on a multidimensional space. Unlike orthogonal signaling, we will now allow more than one dimension to be used in a symbol. The vertices of a hypercube give a straightforward construction: two-level PAM (binary antipodal) on each of the N dimensions (with the Fourier basis, this is simple OFDM).
o Signal vectors:
$$\mathbf{s}_m = \begin{bmatrix} s_{m1} \\ s_{m2} \\ \vdots \\ s_{mN} \end{bmatrix} \quad \text{with } s_{mn} = \pm\sqrt{\frac{E_s}{N}}$$
for $m = 1, \ldots, M$ and $M = 2^N$.
o Every point has the same distance from the origin, hence the same energy:
$$\|\mathbf{s}_m\|^2 = E_s = \log_2(M)\, E_b = N\, E_b$$
o For two signals $\mathbf{s}_m$ and $\mathbf{s}_n$ that differ in a single coordinate,
$$d_{\min} = \|\mathbf{s}_m - \mathbf{s}_n\| = 2\sqrt{\frac{E_s}{N}} = 2\sqrt{E_b}$$
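A minimal numpy sketch (illustrative N and Eb) that enumerates the hypercube vertices and checks the energy and the minimum distance:

```python
import numpy as np
from itertools import product

N, Eb = 4, 1.0                                   # illustrative values
Es = N * Eb
amp = np.sqrt(Es / N)                            # per-dimension amplitude = sqrt(Eb)

# All M = 2^N sign patterns give the hypercube vertices
S = amp * np.array(list(product([-1, 1], repeat=N)), dtype=float)

print(np.allclose(np.sum(S ** 2, axis=1), Es))   # equal energy Es = N*Eb

# Minimum distance over all distinct pairs
d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
d_min = d[d > 0].min()
print(np.isclose(d_min, 2 * np.sqrt(Eb)))        # d_min = 2*sqrt(Eb)
```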
o What are the effects on $d_{\min}$ and bandwidth if we increase the number of bits, keeping $E_b$ fixed? It's easy to generalize from binary to PAM, PSK, QAM, etc. on each of the dimensions:
If the bit labeling is independent across dimensions, then the independence of the noise causes the detector to decompose into N independent per-dimension detectors. So the following structure,
$$P_s \approx N\, Q\!\left(\sqrt{2\gamma_b}\right) = \log_2(M)\, Q\!\left(\sqrt{2\gamma_b}\right), \qquad P_b \text{ depends on the labels,}$$
becomes
$$P_b = Q\!\left(\sqrt{2\gamma_b}\right),$$
just binary antipodal on each dimension; $P_s$ is essentially meaningless, since the per-dimension errors are independent.
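A minimal Monte Carlo sketch of the per-dimension detector (illustrative N and Eb/N0; scipy's norm.sf plays the role of Q), checking that the bit error rate matches $Q(\sqrt{2\gamma_b})$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N, gamma_b, n_sym = 8, 10 ** (4.0 / 10.0), 100_000   # Eb/N0 = 4 dB, illustrative

Eb, N0 = 1.0, 1.0 / gamma_b
bits = rng.integers(0, 2, size=(n_sym, N))
tx = np.sqrt(Eb) * (2 * bits - 1)                     # binary antipodal per dimension
rx = tx + rng.normal(0.0, np.sqrt(N0 / 2), size=tx.shape)

bits_hat = (rx > 0).astype(int)                       # independent sign detectors
ber = np.mean(bits_hat != bits)
print(ber, norm.sf(np.sqrt(2 * gamma_b)))             # simulated vs Q(sqrt(2*gamma_b))
```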
Generalization: use multilevel signals (PAM) in each dimension. Again, if the labeling is independent by dimension, then it's just independent and parallel use, like QAM. Not too exciting. Yet. These signals form a finite lattice.
Generalization: use a subset of the cube vertices, with points selected for greater minimum distance. This is a binary block code.