On The Mutual Information and Precoding For SM With Finite Alphabet
in the N-point modulation constellation. In other words, the source information bits are mapped into two information-carrying units: one is the index of the selected transmit antenna, i.e., $m$, and the other is the chosen symbol $x_i$. With precoding, the signal $x_i$ transmitted by the $m$th antenna is multiplied by the precoding coefficient $\alpha_{m,i}$. Each antenna will be selected … and (4), $I(\mathbf{h};y|\mathbf{x})$ and $I(\mathbf{x};y)$ are given by (6) and (7):

$$
I(\mathbf{h};y|\mathbf{x}) = \log_2 M - \frac{1}{MN}\sum_{m=1}^{M}\sum_{i=1}^{N} E_v\!\left[\log_2 \sum_{m_2=1}^{M} \exp\!\left(-\frac{\left|d_m^{m_2}+v\right|^2-|v|^2}{\sigma^2}\right)\right] \tag{6}
$$

$$
I(\mathbf{x};y) = \log_2 N - \frac{1}{MN}\sum_{m=1}^{M}\sum_{i=1}^{N} E_v\!\left[\log_2 \frac{\sum_{m_2=1}^{M}\sum_{i_2=1}^{N} \exp\!\left(-\left|d_{m,i}^{m_2,i_2}+v\right|^2/\sigma^2\right)}{\sum_{m_2=1}^{M} \exp\!\left(-\left|d_m^{m_2}+v\right|^2/\sigma^2\right)}\right] \tag{7}
$$

in which $d_{m,i}^{m_2,i_2} = h_m x_i - h_{m_2} x_{i_2}$, $d_m^{m_2} = (h_m - h_{m_2})\,x_i$, and $E_v(\cdot)$ denotes the expectation operator with respect to $v$. By substituting (6) and (7) into (5), the mutual information for SM with finite alphabet can be calculated by

$$
I(\mathbf{x},\mathbf{h};y) = \log_2 MN - \frac{1}{MN}\sum_{m=1}^{M}\sum_{i=1}^{N} E_v\!\left[\log_2 \sum_{m_2=1}^{M}\sum_{i_2=1}^{N} \exp\!\left(-\frac{\left|d_{m,i}^{m_2,i_2}+v\right|^2-|v|^2}{\sigma^2}\right)\right] \tag{8}
$$

When the channel coefficients are identical, $d_m^{m_2}=0$ for every $m_2$, and (6) degenerates to

$$
I(\mathbf{h};y|\mathbf{x}) = \log_2 M - E_v\!\left[\log_2 \sum_{m_2=1}^{M}\exp(0)\right] = 0 \tag{10}
$$

Correspondingly, the maximum value of $I(\mathbf{x},\mathbf{h};y)$ is no longer $\log_2 MN$ but $\log_2 N$. This also implies that the information bits conveyed on the index of the antenna are no longer uniquely decodable, and the multiplexing gain introduced by SM vanishes.

To avoid degrading the performance of SM so badly, the distinguishability of the channels should be enhanced. This inspires us to develop a precoding scheme with the assistance of CSI. With precoding, the $i$th symbol transmitted by the $m$th antenna is multiplied by the coefficient $\alpha_{m,i}$, so the process of SM can be viewed as a mapping $\psi$ given by

$$
\psi: \mathcal{H}\times\mathcal{X} \to \mathcal{R}, \quad \text{where } \mathcal{R}=\{\alpha_{m,i}\, h_m x_i \mid \forall m, i\} \tag{11}
$$
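The expression (8) lends itself to a direct Monte Carlo evaluation over the noise $v$. The sketch below uses assumed toy values (M = N = 2, BPSK symbols, an arbitrary fixed channel, and an illustrative noise variance); it also reproduces the degenerate behavior noted around (10): with identical channel coefficients the mutual information saturates at log2 N instead of log2 MN.

```python
import numpy as np

def sm_mutual_info(h, symbols, sigma2, num_noise=4000, seed=0):
    """Monte Carlo estimate of the SM mutual information in (8).

    h: complex channel gains, one per transmit antenna (M entries).
    symbols: complex constellation points (N entries).
    Returns an estimate of I(x, h; y) in bits.
    """
    rng = np.random.default_rng(seed)
    M, N = len(h), len(symbols)
    # Noise-free received points h_m * x_i, flattened over (m, i).
    s = (np.asarray(h)[:, None] * np.asarray(symbols)[None, :]).ravel()
    # Complex Gaussian noise samples v with variance sigma2.
    v = np.sqrt(sigma2 / 2) * (rng.standard_normal(num_noise)
                               + 1j * rng.standard_normal(num_noise))
    total = 0.0
    for sk in s:
        d = sk - s  # differences d_{m,i}^{m2,i2} to every signal point
        arg = -(np.abs(d[None, :] + v[:, None]) ** 2
                - np.abs(v[:, None]) ** 2) / sigma2
        total += np.mean(np.log2(np.exp(arg).sum(axis=1)))
    return np.log2(M * N) - total / (M * N)

bpsk = np.array([1.0, -1.0])                      # N = 2 symbols
h_distinct = np.array([1.0 + 0.2j, 0.3 - 1.1j])   # assumed 2x1 channel
h_identical = np.array([1.0 + 0.2j, 1.0 + 0.2j])  # degenerate channel

hi_snr = 0.01  # small sigma^2, i.e., high SNR
print(sm_mutual_info(h_distinct, bpsk, hi_snr))   # close to log2(MN) = 2
print(sm_mutual_info(h_identical, bpsk, hi_snr))  # close to log2(N) = 1
```

At high SNR only the $m_2 = m$, $i_2 = i$ term survives inside the logarithm when the points are distinct, so the estimate approaches $\log_2 MN$; duplicated points from identical channels leave a residual $\log_2 M$ penalty, matching (10).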
GUAN et al.: ON THE MUTUAL INFORMATION AND PRECODING FOR SPATIAL MODULATION WITH FINITE ALPHABET 385
where $\mathcal{R}$ can be seen as the received signal space of the noise-free channel, in which there are $MN$ points in total. The Euclidean distance between any two of them is denoted by $d_{m,i}^{m_2,i_2}$ … to give more insight into how the objective function for the precoding is derived and how the precoding coefficient $\alpha_{m,i}$ is designed. It should be noticed that the motivation of the finally proposed precoding scheme is to improve the lower bound $I(\mathbf{x},\mathbf{h};y)_{Low}$, and the numerical results demonstrate that this is consistent with the objective of improving $I(\mathbf{x},\mathbf{h};y)$.

[Fig. 2. Signal space diagrams for SM, without precoding and with precoding, respectively. (a) without precoding; (b) with precoding; legend: from antenna 1–4.]

Lower bound: The mutual information of SM with finite alphabet can be lower bounded by

$$
I(\mathbf{x},\mathbf{h};y)_{Low} = \log_2 MN - (\log_2 e - 1) - \frac{1}{MN}\sum_{m=1}^{M}\sum_{i=1}^{N} \log_2 \sum_{m_2=1}^{M}\sum_{i_2=1}^{N} \exp\!\left(-\frac{\left|d_{m,i}^{m_2,i_2}\right|^2}{2\sigma^2}\right) \tag{12}
$$

Proof: The mutual information in (8) can be rewritten as

$$
I(\mathbf{x},\mathbf{h};y) = \log_2 MN - E_v\!\left[\log_2 \exp\!\left(\frac{|v|^2}{\sigma^2}\right)\right] - \frac{1}{MN}\sum_{m=1}^{M}\sum_{i=1}^{N} E_v\!\left[\log_2 \sum_{m_2=1}^{M}\sum_{i_2=1}^{N} \exp\!\left(-\frac{\left|d_{m,i}^{m_2,i_2}+v\right|^2}{\sigma^2}\right)\right]
$$

Since $E_v[|v|^2]=\sigma^2$, the second term equals $\log_2 e$. Applying Jensen's inequality to the expectation in the last term and using $E_v[\exp(-|d+v|^2/\sigma^2)] = \tfrac{1}{2}\exp(-|d|^2/2\sigma^2)$ for $v\sim\mathcal{CN}(0,\sigma^2)$ yields (12).

Maximizing the lower bound in (12) over the precoding coefficients is then equivalent to

$$
\min_{\alpha_{m,i}} \sum_{m=1}^{M}\sum_{i=1}^{N} \log_2 \sum_{m_2=1}^{M}\sum_{i_2=1}^{N} \exp\!\left(-\frac{\left|d_{m,i}^{m_2,i_2}\right|^2}{2\sigma^2}\right), \quad \text{s.t. } E\!\left(|r|^2\right)=\|\mathbf{h}\|^2/M \tag{15}
$$

where $r\in\mathcal{R}$, and $\|\cdot\|$ denotes the Frobenius norm. Without precoding, the average power of the signal points in $\mathcal{R}$ equals $\|\mathbf{h}\|^2/M$. To guarantee that the average power remains the same, we impose the constraint $E(|r|^2)=\|\mathbf{h}\|^2/M$. Close observation shows that the minimum distance, i.e., $d_{min} = \min_{(m_2,i_2)\neq(m,i)} d_{m,i}^{m_2,i_2}$, is the dominant term in (15) and thus has an important impact on the performance of SM. Therefore, we intend to design $\alpha_{m,i}$ to provide the maximum possible minimum distance, i.e., the optimizing problem in (15) can be rewritten as

$$
\max_{\alpha_{m,i}} \; \min_{(m_2,i_2)\neq(m,i)} d_{m,i}^{m_2,i_2}, \quad \text{s.t. } E\!\left(|r|^2\right)=\|\mathbf{h}\|^2/M \tag{16}
$$

This can be considered as a traditional constellation design problem. Among PAM, PSK and QAM constellations, QAM provides the maximum $d_{min}$ under the same power constraint [10, Table 3.2-1]. Accordingly, we rearrange the $MN$ points in the space $\mathcal{R}$ to form an $MN$-QAM constellation. We denote an $MN$-QAM constellation by $Q=\{q_1, q_2, \ldots, q_{MN}\}$, with $E(|q|^2)=1$. Based on the above …

… Monte Carlo simulation over 10000 independent channel realizations. Fig. 3 depicts the mutual information comparison between N-PSK inputs SM and Gaussian inputs SM [6]. In the Gaussian inputs case, the mutual information always increases as the SNR increases, whereas finite discrete inputs lead to a loss of mutual information. This performance gap, however, becomes smaller as the modulation order and the number of antennas increase. Moreover, it is very interesting to observe that the mutual information with Gaussian inputs varies little as $M$ increases. On the contrary, it increases by 1 bit/s/Hz as $M$ varies from 2 to 4 in the finite alphabet inputs scenario.

The mutual information for BF, CBC and SM versus the SNR with BPSK and QPSK inputs is depicted in Fig. 4.
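The constellation-design claim above, that QAM yields the largest minimum distance among PAM, PSK and QAM at equal average power, can be checked directly. A small sketch, with the constellation size fixed at 16 purely for illustration:

```python
import itertools

import numpy as np

def min_distance(points):
    """Smallest pairwise Euclidean distance within a constellation."""
    return min(abs(a - b) for a, b in itertools.combinations(points, 2))

n = 16  # constellation size MN (illustrative)

# 16-PSK: points on the unit circle, unit average power by construction.
psk = np.exp(2j * np.pi * np.arange(n) / n)

# 16-QAM on the (+-1, +-3) grid, normalized so that E(|q|^2) = 1.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam = np.array([a + 1j * b for a in levels for b in levels])
qam /= np.sqrt(np.mean(np.abs(qam) ** 2))

# 16-PAM, also normalized to unit average power.
pam = np.arange(-15, 16, 2, dtype=float)
pam /= np.sqrt(np.mean(pam ** 2))

# Expected ordering: PAM < PSK < QAM.
print(min_distance(pam), min_distance(psk), min_distance(qam))
```

With unit average power the minimum distances are $2/\sqrt{85}\approx 0.22$ (PAM), $2\sin(\pi/16)\approx 0.39$ (PSK) and $2/\sqrt{10}\approx 0.63$ (QAM), consistent with [10, Table 3.2-1].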
386 IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. 2, NO. 4, AUGUST 2013
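As a numerical sanity check, the lower bound (12) can be compared against a Monte Carlo estimate of the exact mutual information (8) at several noise levels. The sketch below uses an assumed toy setup (2 x 1 channel, BPSK, illustrative noise variances); by the Jensen argument in the proof, the bound should never exceed the exact value.

```python
import numpy as np

def signal_points(h, symbols):
    """Noise-free received points h_m * x_i, flattened over (m, i)."""
    return (np.asarray(h)[:, None] * np.asarray(symbols)[None, :]).ravel()

def lower_bound(h, symbols, sigma2):
    """Closed-form lower bound I(x,h;y)_Low from (12)."""
    s = signal_points(h, symbols)
    acc = sum(np.log2(np.exp(-np.abs(sk - s) ** 2 / (2 * sigma2)).sum())
              for sk in s)
    return np.log2(len(s)) - (np.log2(np.e) - 1) - acc / len(s)

def exact_mi(h, symbols, sigma2, num_noise=20000, seed=1):
    """Monte Carlo estimate of I(x,h;y) from (8)."""
    rng = np.random.default_rng(seed)
    s = signal_points(h, symbols)
    v = np.sqrt(sigma2 / 2) * (rng.standard_normal(num_noise)
                               + 1j * rng.standard_normal(num_noise))
    total = 0.0
    for sk in s:
        arg = -(np.abs((sk - s)[None, :] + v[:, None]) ** 2
                - np.abs(v[:, None]) ** 2) / sigma2
        total += np.mean(np.log2(np.exp(arg).sum(axis=1)))
    return np.log2(len(s)) - total / len(s)

h = np.array([1.0 + 0.2j, 0.3 - 1.1j])  # assumed 2x1 channel
bpsk = np.array([1.0, -1.0])
for sigma2 in (0.25, 1.0, 4.0):
    print(sigma2, lower_bound(h, bpsk, sigma2), exact_mi(h, bpsk, sigma2))
```

At high SNR the bound is tight up to the constant $\log_2 e - 1$; at low SNR it can even go negative while the exact mutual information stays non-negative, so the gap widens.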
[Fig. 3. Mutual information comparison between N-PSK (N = 4, 16) inputs SM and Gaussian inputs SM for M × 1 (M = 2, 4, 8) channels. Axes: SNR (1/σ², dB) vs. mutual information (bits/s/Hz).]

[Fig. 4. Mutual information comparison between BF, CBC, and SM for M × 1 (M = 2, 4) channels, with BPSK and QPSK inputs. Axes: SNR (1/σ², dB) vs. mutual information (bits/s/Hz).]

[Figure: curves for the mutual information, the lower bound, and the case without precoding.]

It is interesting to observe that there exists an intersection of the curves for BF, CBC and SM. For low SNR, due to the obtained spatial