
EFFICIENT MAXIMUM LIKELIHOOD DETECTION

FOR COMMUNICATION OVER


MULTIPLE INPUT MULTIPLE OUTPUT CHANNELS

by

Karen Su
(Trinity Hall)

Laboratory for Communication Engineering


Department of Engineering
University of Cambridge

Copyright © 2005 by Karen Su.
All Rights Reserved.
Efficient Maximum Likelihood detection
for communication over
Multiple Input Multiple Output channels
Laboratory for Communication Engineering
Cambridge University Engineering Department
University of Cambridge

by Karen Su
February 2005

Abstract

The Maximum Likelihood (ML) detection of signals transmitted over Multiple Input Mul-
tiple Output (MIMO) channels is an important problem in modern communications that is
well-known to be NP-complete. However, recent advances in signal processing techniques
have led to the development of the Sphere Decoder (SD), which offers ML detection for
MIMO channels at an average case polynomial time complexity. Even so, existing SDs are
not without their weaknesses: For most current proposals, the decoder performance is highly
sensitive to the value chosen for the search radius parameter. The complexity coefficient can
also become very large when the Signal-to-Noise Ratio (SNR) is low or when the problem
dimension is high, e.g., at high spectral efficiencies.
We report here on two novel contributions designed to address these key weaknesses exhib-
ited by existing schemes. First we present a new decoder, dubbed the Automatic Sphere
Decoder (ASD) because of its ability to perform sphere decoding without a search radius,
thereby eliminating any sensitivity to this parameter. We also define the notion of sphere
decoder efficiency and prove that the ASD achieves the optimal efficiency over the set of
known SDs. Secondly, we propose a pre-processing stage that dramatically improves the
efficiency of sphere decoding. The pre-processing is itself computationally efficient and is
shown to be effective when applied before existing SDs. The advantage offered is particu-
larly great at low SNRs and at high spectral efficiencies, two important operating regions
where current SD algorithms are prohibitively complex.
Combined, the ASD and the proposed pre-processing stage make ML detection of signals
transmitted over MIMO channels feasible in practice, even at low SNRs and at high spectral
efficiencies. Therefore we believe that these contributions will play an important role in the
next generation of wireless communication systems.

Preface

The work detailed in this report was undertaken as a component of my approved course of
research for the degree of PhD at the University of Cambridge. My dissertation is tenta-
tively entitled On Detection and Coding for Communication over Linear MIMO Channels.
This report is concerned with the detection aspects of the greater investigation. Previous
work by other researchers is appropriately cited in the text; all major results and theorems
presented are original contributions. Parts of this work have been accepted for presentation
at the IEEE International Conference on Communications in May 2005 [15].

I would like to thank my PhD supervisor Dr Ian Wassell for his support of my research at
the Laboratory for Communication Engineering. My thanks also to Dr Miguel Rodrigues,
for the many insightful discussions that have so greatly enriched my overall research expe-
rience, and to Mr Colin Jones, for yet more discussions that, among other things, led to
the formulation of the core ideas behind the Automatic Sphere Decoder. Finally, I most
gratefully acknowledge the generous assistance of Universities UK, the Cambridge Com-
monwealth Trust, the Natural Sciences and Engineering Research Council of Canada, and
Trinity Hall in providing financial support for my studies.

Karen Su
Cambridge, 2005.

Contents

Abstract
Preface
1 Introduction
2 Maximum Likelihood detection
  2.1 Mathematical preliminaries
  2.2 Sphere decoding fundamentals
  2.3 The Fincke-Pohst and Schnorr-Euchner enumerations
3 Automatic sphere decoding
  3.1 A novel approach
  3.2 The computational efficiency of sphere decoding
  3.3 Complexity analysis and results
4 Ordering for computational efficiency
  4.1 Case study: Orderings and sphere decoding
  4.2 An enhanced ordering scheme
  4.3 Performance evaluation
5 Conclusions
Bibliography
A The geometry of sphere decoding
B Proof of the optimality of the automatic sphere decoder
C Description of the experimental setup
D Proof of the optimality of the enhanced ordering when M = B = 2

Chapter 1

Introduction

Driven by the demand for increasingly sophisticated connectivity anytime, anywhere, wire-
less communications has emerged as one of the largest and most rapidly growing sectors
of the global telecommunications industry. One of the most significant technological devel-
opments of the last decade, which promises to play a key role in fuelling this tremendous
growth, is communication using Multiple Input Multiple Output (MIMO) antenna archi-
tectures [11].
In a MIMO system, multiple element antenna arrays are deployed at both the transmitter
and the receiver. The communications challenge lies in designing the sets of signals simul-
taneously sent by the transmit antennas and the algorithms for processing those observed
by the receive antennas, so that the quality of the transmission (i.e., bit error probability)
and/or its data rate are superior to those supported by traditional single antenna systems.
These gains can then provide increased reliability, reduced power requirements and higher
composite data rates. What is especially exciting about the benefits offered by MIMO tech-
nology is that they can be attained without the need for additional spectral resources, which
are not only expensive but also extremely scarce.
Over the last ten years, the greatly enhanced performance that is possible over realistic
wireless MIMO channels has been both shown theoretically and demonstrated in experi-
mental laboratory settings [2, 3, 8, 16, 17]. Hence the recent explosion of interest from both
academic and industrial researchers in the area of signal processing techniques for MIMO
systems. Particularly, at the receiver end of the MIMO channel, signal detection has been
the subject of intensive study. One of the most important and industrially relevant algo-
rithms to emerge from these efforts is the Sphere Decoder (SD) [7, 9, 18].
The Maximum Likelihood (ML) or optimal detection of signals transmitted over MIMO
channels is well-known to be an NP-complete problem. However, the SD has been shown to
offer ML detection at a computational complexity that is polynomial in the average case [9].


So bright are its prospects for application in future communication systems that hardware
implementations have already been reported in the literature (e.g., [5]).
Even so, existing SDs exhibit two major weaknesses: First, the performance of most
current proposals is highly sensitive to the value chosen for the search radius parameter.
The successful termination of the algorithm, i.e., returning an optimal solution, as well as
its time complexity, are heavily dependent on the search radius [18]. Secondly, although its
time complexity is polynomial in the average case, the complexity coefficient can become
very large when the SNR is low, or when the problem dimension is high, e.g., at the high
spectral efficiencies required to support higher communication rates.
We have developed two novel contributions that successfully tackle these critical chal-
lenges. Our presentation begins in Chapter 2 with a formal definition of the ML detection
problem. Algebraic tools that are instrumental in the study of sphere decoding are then
overviewed; geometric tools are included for the interested reader in Appendix A. We also
introduce two generic SDs, based respectively on the Fincke-Pohst and Schnorr-Euchner
enumerations. These schemes can be considered as representative of the majority of exist-
ing decoders, which have been built around either of these two enumeration strategies, and
are therefore used as our benchmarks.
The first of our contributions addresses the sensitivity of the sphere decoder's performance
to its radius parameter. Chapter 3 details a new decoding algorithm, dubbed the
Automatic Sphere Decoder (ASD) because of its ability to find an ML solution without
invoking any notion of search radius, thereby eliminating any sensitivity to this parameter.
The concept of sphere decoder computational efficiency is also introduced and we prove
theoretically that the ASD achieves the optimal efficiency over the set of known SDs.
The SD detects multiple transmitted symbols given multiple observations in a sequential
manner. In Chapter 4 we show that the order of symbol detection has a significant effect
both on the average computational efficiency of a SD, as well as on the variance of its
computation time. Our analysis, distinguished from other works by its geometric approach,
reveals a previously unreported property that we find to be an indicator of sphere decoding
efficiency. Based on this study, we propose a heuristic ordering rule designed to maximize
the computational efficiency of the ASD and demonstrate via simulation that a dramatic
reduction in computation time is achievable. The improvement is particularly great at low
SNRs and at high spectral efficiencies, two important operating regions where SD algorithms
have traditionally been considered to be prohibitively complex.
Chapter 5 concludes this report on our work, which puts forward effective solutions
to the key issues facing state-of-the-art sphere decoding proposals, in some cases at a
reduced time complexity. These contributions make the sphere decoder an even stronger
candidate MIMO detector, one that is viable over a larger range of channel conditions.
Chapter 2

Maximum Likelihood detection

Consider the linear MIMO system diagram shown in Fig. 2.1.¹ To communicate over this
channel, we are faced with the task of detecting a set of M transmitted symbols from a
set of N observed signals. Our observations are corrupted by the non-ideal communication
channel, typically modelled as a linear system followed by an additive noise vector.

[Figure 2.1 omitted: block diagram in which s passes through H, noise n is added to give v, and the Detector outputs the estimate.]

Figure 2.1: A simplified linear MIMO communication system diagram showing the following
discrete-time signals: transmitted symbol vector $s \in \mathcal{X}^M$, channel matrix $H \in \mathbb{R}^{N \times M}$,
additive noise vector $n \in \mathbb{R}^N$, received vector $v \in \mathbb{R}^N$, and detected symbol vector $\hat{s} \in \mathbb{R}^M$.

To assist us in achieving our goal, we draw the transmitted symbols from a known finite
alphabet X = {x1 , . . . , xB } of size B. The detectors role is then to choose one of the B M
possible transmitted symbol vectors based on the available data. Our intuition correctly
suggests that an optimal detector should return b
s = s , the symbol vector whose (posterior)
probability of having been sent, given the observed signal vector v, is the largest:

s , argmax P (s was sent | v is observed) (2.1)


sX M

P (v is observed | s was sent)P (s was sent)


= argmax . (2.2)
sX M P (v is observed)

Equation (2.1) is known as the Maximum A posteriori Probability (MAP) detection rule.
Making the standard assumption that the symbol vectors $s \in \mathcal{X}^M$ are equiprobable, i.e.,

¹Note that all of the signals and coefficients used in our theoretical derivations are represented as real
numbers. This mathematical convenience does not limit our results, since the complex case, where $s \in (\mathcal{X}^2)^M$
is a vector of M QAM-modulated signals, $v \in \mathbb{C}^N$ and $H \in \mathbb{C}^{N \times M}$, can be written as an equivalent problem
in twice the number of real dimensions, i.e., with $v \in \mathbb{R}^{2N}$ and $H \in \mathbb{R}^{2N \times 2M}$, as shown in Appendix C.
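The complex-to-real equivalence invoked in the footnote is the standard block construction; the following NumPy sketch illustrates it (the exact element ordering used in Appendix C may differ):

```python
import numpy as np

def complex_to_real(v, H):
    """Map a complex model v = H s + n to an equivalent real-valued
    model in twice the number of real dimensions."""
    # Stack real and imaginary parts of the observation: v_r in R^{2N}
    v_r = np.concatenate([v.real, v.imag])
    # Block structure mimics complex multiplication: H_r in R^{2N x 2M}
    H_r = np.block([[H.real, -H.imag],
                    [H.imag,  H.real]])
    return v_r, H_r

# Check: the real-valued model reproduces H @ s for a QPSK-like vector
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
s = np.array([1 + 1j, -1 + 1j])
v = H @ s
v_r, H_r = complex_to_real(v, H)
s_r = np.concatenate([s.real, s.imag])
assert np.allclose(H_r @ s_r, v_r)
```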


that $P(s \text{ was sent})$ is constant, the optimal MAP detection rule can be written as:

$$s^\star = \arg\max_{s \in \mathcal{X}^M} P(v \text{ is observed} \mid s \text{ was sent}). \tag{2.3}$$

A detector that always returns an optimal solution satisfying (2.3) is called a Maximum
Likelihood (ML) detector. If we further assume that the additive noise n is white and
Gaussian, then we can express the ML detection problem of Fig. 2.1 as the minimization
of the squared Euclidean distance metric to a target vector v over an M-dimensional finite
discrete search set:

$$s^\star = \arg\min_{s \in \mathcal{X}^M} \|v - Hs\|^2, \tag{2.4}$$

where, borrowing terminology from the optimization literature, we call the elements of s
optimization variables and $\|v - Hs\|^2$ the objective function.
Examples of wireless communications problems that can be modelled in this way, by
appropriately defining the entries of channel matrix H, include ML detection of lattice
coded signals and QAM-modulated signals transmitted over MIMO flat fading channels
and frequency selective fading channels, as well as multi-user channels.
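For small M, minimization (2.4) can be solved by exhaustive search, which makes the complexity issue concrete: the candidate set grows as $B^M$. The sketch below (function name is ours, for illustration only) enumerates all candidates:

```python
import numpy as np
from itertools import product

def ml_detect_brute_force(v, H, alphabet):
    """Exhaustive ML detection: minimize ||v - H s||^2 over all B^M
    candidate symbol vectors in X^M (exponential in M)."""
    M = H.shape[1]
    best, best_metric = None, np.inf
    for s in product(alphabet, repeat=M):
        s = np.array(s, dtype=float)
        metric = np.sum((v - H @ s) ** 2)   # squared Euclidean distance
        if metric < best_metric:
            best, best_metric = s, metric
    return best, best_metric

# Tiny example: M = N = 2, BPSK alphabet X = {-1, +1}
H = np.array([[1.0, 0.5], [0.2, 1.0]])
s_true = np.array([1.0, -1.0])
v = H @ s_true + np.array([0.05, -0.02])   # lightly perturbed observation
s_hat, _ = ml_detect_brute_force(v, H, [-1.0, 1.0])
assert np.array_equal(s_hat, s_true)
```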

2.1 Mathematical preliminaries


In our derivations, we assume an overdetermined problem, i.e., that $M \le N$, and that H is
of full rank M. For communication over MIMO flat fading channels, this assumption means
that there are at least as many receive antennas (N) as transmit antennas (M).
We make use of the following notational conveniences: Given a square $M \times M$ matrix
A, let $a_i$ denote the $i$th column vector, $a_{ii}$ the element in the $i$th row and column position,
$A_{\backslash i}$ the tall submatrix comprised of all columns but the $i$th, and $A_{\backslash ii}$ the square submatrix
formed by removing the $i$th row and column. Given a vector z, let $z_i$ denote the $i$th element
and $z_{\backslash i}$ the vector comprised of all elements but the $i$th.

We also denote by 0 an appropriately sized vector or matrix of zeros, by $I_M$ the square
$M \times M$ identity matrix, by $e_i$ the $i$th elementary vector of appropriate length, by $\mathcal{I}$ the
index set $\{1, \ldots, M\}$, and by the superscript T the vector or matrix transpose operation.
Unless otherwise indicated, transposition takes precedence over column selection, i.e., given
an $M \times M$ matrix A, $A^T_i$ denotes the $i$th row of A written as a column vector and $(a_i)^T$ the
$i$th column of A written as a row vector. When it is necessary to distinguish between the
optimization variables themselves and the values that they may take, we use the underline
notation $\underline{s}_i$ or $\underline{s}$ to refer to the actual variables, and $s_i \in \mathcal{X}$ or $s \in \mathcal{X}^M$ to indicate particular
values taken.

2.2 Sphere decoding fundamentals


Sphere decoding is based on the enumeration of points in the search set that are located
within a sphere of some radius centered at a target, e.g., the received signal point. The
Fincke-Pohst (F-P) and Schnorr-Euchner (S-E) techniques are two computationally efficient
means of realizing this enumeration [7], and so they have come to form the foundation of
most existing sphere decoders [6, 9].
Underlying both the F-P and S-E enumerations, and in fact all known SDs, is the QR
factorization of the channel matrix: Every $N \times M$ ($M \le N$) matrix H with linearly independent
columns can be factored into

$$H = Q \begin{bmatrix} R \\ 0 \end{bmatrix}, \tag{2.5}$$

where Q is $N \times N$ and orthogonal, R is $M \times M$, upper triangular and invertible, and 0 is
an $(N - M) \times M$ matrix of zeros.
Since the objective function is invariant under orthogonal transformation, minimization
problem (2.4) can be written as

$$\arg\min_{s \in \mathcal{X}^M} \|v - Hs\|^2 = \arg\min_{s \in \mathcal{X}^M} \left\| Q^T v - \begin{bmatrix} R \\ 0 \end{bmatrix} s \right\|^2 \tag{2.6}$$

$$= \arg\min_{s \in \mathcal{X}^M} \|\tilde{v} - Rs\|^2, \tag{2.7}$$

where $\tilde{v} = \left[Q^T v\right]_1^M$ extracts the first M elements of the orthogonally transformed target.
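The invariance argument behind (2.6) and (2.7) can be checked numerically with NumPy's QR factorization in 'complete' mode; the components of $Q^T v$ beyond the first M contribute a constant that does not depend on s:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2
H = rng.normal(size=(N, M))
v = rng.normal(size=N)
s = np.array([1.0, -1.0])                      # an arbitrary candidate point

# QR factorization H = Q [R; 0] ('complete' mode gives a square Q)
Q, R_full = np.linalg.qr(H, mode='complete')   # R_full is N x M
R = R_full[:M, :]                              # M x M upper triangular block
v_tilde = (Q.T @ v)[:M]                        # first M transformed components
const = np.sum((Q.T @ v)[M:] ** 2)             # residual independent of s

lhs = np.sum((v - H @ s) ** 2)
rhs = np.sum((v_tilde - R @ s) ** 2) + const
assert np.allclose(lhs, rhs)
```

Because the constant term is the same for every candidate s, minimizing $\|\tilde v - Rs\|^2$ is equivalent to minimizing the original objective.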
The upper triangular structure of the factored matrix then enables the decoder to decompose
the equivalent objective function (2.7) recursively as follows:

$$\|\tilde{v} - Rs\|^2 = d^2(\tilde{v}_M, r_{MM}\, s_M) + \left\| (\tilde{v} - r_M s_M)_{\backslash M} - R_{\backslash MM}\, s_1^{M-1} \right\|^2 \tag{2.8}$$

$$= d^2(\tilde{v}_M, r_{MM}\, s_M) + \left\| \tilde{y}(s_M) - R_{\backslash MM}\, s_1^{M-1} \right\|^2 \tag{2.9}$$

$$= \sum_{D=M}^{1} d^2\!\left( \tilde{y}\!\left(s^M_{D+1}\right)_D,\; r_{DD}\, s_D \right), \tag{2.10}$$

where $d^2(\cdot)$ is the squared Euclidean distance metric and we call

$$\tilde{y}\!\left(s^M_{D+1}\right) = \begin{cases} \tilde{v}, & D = M \\ \tilde{y}\!\left(s^M_{D+2}\right)_{\backslash (D+1)} - r_{D+1}\, s_{D+1}, & D = M-1, \ldots, 0 \end{cases} \tag{2.11}$$

a residual target.² Residual targets are parameterized by a set of $L = M - D$ constraint
values applied to optimization variables $\underline{s}_{D+1}, \ldots, \underline{s}_M$. We overmark $\tilde{y}$ with a tilde to
indicate that it resides in the same orthogonally transformed space as $\tilde{v}$. As shown in
Appendix A, this space is D-dimensional.
Thus the QR factorization provides a means of evaluating the objective function more
efficiently, through decomposition into the summations of (2.10), which contain many shared
terms. For instance, the first term of (2.9) is involved in computing the values of the
objective function for all $B^{M-1}$ points in the search set satisfying $\underline{s}_M = s_M$. We can
therefore associate the constraint $\underline{s}_M = s_M$ with this term.

The summation in (2.10) lends itself naturally to a weighted B-ary tree representation,
as shown in Fig. 2.2 for the case where $M = B = 2$ and $\mathcal{X} = \{-1, 1\}$. In this diagram,
each of the terms in the summations of (2.10) is associated with a constraint as well as
with a branch. Each node then encapsulates a set of constraints $\underline{s}^M_{D+1} = s^M_{D+1}$ that have
been applied, as specified by the branches traversed along its path from the root node. By
computing (2.11), a residual target can also be associated with each node. We observe that
because of the underlying QR factorization, the variables must be constrained in order from
$\underline{s}_M$ to $\underline{s}_1$.

[Figure 2.2 omitted: a weighted binary tree rooted at $n_0$ with $w(n_0) = 0$; first-level branches $\underline{s}_2 = \pm 1$ carry weights $d^2(\tilde{y}()_2, \pm r_{22})$, second-level branches $\underline{s}_1 = \pm 1$ carry weights $d^2(\tilde{y}(s_2)_1, \pm r_{11})$, and each node weight is the sum of its parent's weight and the connecting branch weight, so that the four leaf weights equal $\|\tilde{v} - Rs\|^2$ at the four search points.]

Figure 2.2: A weighted B-ary tree for computing (2.10) with $M = 2$ and $\mathcal{X} = \{-1, 1\}$.

The tree shown in Fig. 2.2 is uniquely determined by problem parameters v, H, and
$\mathcal{X}$. We highlight here a few properties of the tree that are important in the study of sphere
decoding algorithms. The interested reader is referred to Appendix B for a more formal
graph theoretic treatment.

1. The nodes are distributed over $M + 1$ levels, numbered from the root node $n_0$ at level
0 to the leaf nodes at level M. Non-leaf nodes are those at levels 0 through $M - 1$.

²Making a slight abuse of notation, we take only the non-zero elements of $r_i$ in (2.11), thus making it an
appropriately-sized vector of length K. We also make the logical interpretation $s^M_{M+1} = \emptyset$.

2. All branches between nodes at levels L and $L + 1$ ($L = 0, \ldots, M - 1$) are associated
with variable $\underline{s}_D$, where we let $D = M - L$ in the statements to follow.

3. The tree is B-ary, i.e., each non-leaf node is the parent of exactly B child nodes. Each
child branch corresponds to one of the B values in $\mathcal{X}$ that the associated variable can
take. Therefore there are $B^L$ nodes at level L, and each is associated with a set of L
constraints $\underline{s}^M_{D+1} = s^M_{D+1}$. In particular, each leaf node is associated with a full vector
of constraints $\underline{s} = s$, where $s \in \mathcal{X}^M$ specifies a point in the search set.

4. The tree is a weighted tree; non-negative weights $w(b_j)$ and $w(n_k)$ are associated with
the branches and nodes, respectively. We assign to the root node $n_0$ the weight 0.

5. Branches from nodes at level L to $L + 1$ are assigned weights $d^2\!\left(\tilde{y}\!\left(s^M_{D+1}\right)_D, r_{DD}\, s_D\right)$,
where the constraints $\underline{s}^M_{D+1} = s^M_{D+1}$ are associated with the parent node at level L
(Property 3), and $\underline{s}_D = s_D$ is the constraint corresponding to the branch itself.

6. Each node weight is the sum of the branch weights along its path from the root, or
equivalently the sum of the weights of its parent node and the connecting branch.

7. Along any path from the root to a leaf node, the node weights are non-decreasing.

8. The leaf node weights are precisely equal to the values of summation (2.10), i.e., the
values of the objective function evaluated at each of the points in the search set.

Properties 3 and 8 imply that the ML solution is specified by the point in the search set
associated with the smallest weight leaf node in the tree of Fig. 2.2. However, there remains
an exponential number of leaf nodes to consider, and a comparable number of non-leaf
nodes whose weights must all be computed in order to determine those of the leaf nodes. In
the next section we discuss how existing sphere decoders are able to reduce the number of
computations from an exponential to an average case polynomial number.
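Property 8 can be verified numerically: accumulating branch weights down any root-to-leaf path reproduces the objective at that search point. The helper below (our illustration, using 0-based indices) implements the recursion of (2.10) and (2.11):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
M = 2
R = np.triu(rng.normal(size=(M, M))) + 2 * np.eye(M)  # upper triangular
v_tilde = rng.normal(size=M)
alphabet = [-1.0, 1.0]

def leaf_weight(R, v_tilde, s):
    """Accumulate the branch weights of (2.10): constrain s_M first,
    shrinking the residual target one dimension per level as in (2.11)."""
    y = v_tilde.copy()
    w = 0.0
    for D in range(len(s) - 1, -1, -1):      # 0-based: D = M-1 down to 0
        w += (y[D] - R[D, D] * s[D]) ** 2    # branch weight d^2(y_D, r_DD s_D)
        y = y[:D] - R[:D, D] * s[D]          # residual target, one dim shorter
    return w

# Every leaf weight equals the objective ||v_tilde - R s||^2 (Property 8)
for s in product(alphabet, repeat=M):
    s = np.array(s)
    assert np.isclose(leaf_weight(R, v_tilde, s),
                      np.sum((v_tilde - R @ s) ** 2))
```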

2.3 The Fincke-Pohst and Schnorr-Euchner enumerations


A Sphere Decoder (SD) searches for the smallest weight leaf node starting from the root.
Because of the recursive definition of the node weights, it must begin at the root and can only
expand its knowledge by computing the weights of connected branches and nodes. Through
clever pruning of the tree based on intermediate node weights, it is able to declare an ML
solution after computing only a polynomial number of weights, in the average case [9]. This
pruning is made possible by Property 7.
Recall that geometrically, the weight of each leaf node corresponds to the squared Euclidean
distance from a point in the search set to the target. Then given $C \in \mathbb{R}$, we can
enumerate the points located within a sphere of radius C centered at the target by exploring
from the root along all branches until a node $n_k$ is encountered such that $w(n_k) > C^2$.
Because of Property 7, all descendants of node $n_k$ have weights at least as great as $w(n_k)$.
Therefore the points associated with leaf nodes that are descendants of node $n_k$ must lie
outside of the search sphere and are not of interest.
The computation time of the tree-based search can then be reduced by pruning at node
$n_k$, i.e., the weights of branches and nodes that are descendants of node $n_k$ need not be
computed. The search continues, traversing the tree depth-first [13, Ch. 29], from left to
right, until all nodes having weights not greater than $C^2$ are discovered. It then returns a list
of leaf nodes that correspond to points located within the search sphere. This description
summarizes the behaviour of the Fincke-Pohst (F-P) enumeration with respect to the tree
in Fig. 2.2. More details on its implementation can be found in works such as [1, 9].
An important characteristic of the F-P strategy is that a search radius must be specified.
However, if C is too large, many node weights will have to be computed and a large number
of leaf nodes may also be returned. If it is too small, no leaf nodes will be found and
the decoder must then be restarted with a larger search radius. Both of these factors
negatively impact the overall computation time, and thus it is well-known that one of the
main weaknesses of the F-P decoder is the sensitivity of its performance to the choice of C.
A typical choice is the distance to the Babai point [1], which is a point in the search set and
therefore we can be assured that at least one leaf node will be found. Our first benchmark
decoder, referred to as the FPB, uses this value of C and the F-P enumeration.
The Schnorr-Euchner (S-E) enumeration adds a small but significant refinement to the
F-P approach. In the F-P strategy, the tree is traversed depth-first and from left to right,
i.e., the children of a node are considered in order of increasing $s_i \in \mathcal{X}$, where $i$ is the level
of the parent node and we recall that each of its children is associated with applying the
additional constraint $\underline{s}_i = s_i$. The S-E strategy also advocates traversing the tree depth-first,
but instead of considering child nodes from left to right, it computes their connecting
branch weights and then explores them in increasing order of these weights.

As a result, it has been shown that the S-E enumeration discovers eligible leaf nodes
more quickly than the F-P enumeration [1]. If this were the only refinement, the S-E
enumeration would still have to compute the same number of branch and node weights as
the F-P strategy. However, observe that once a leaf node $n_l$ is discovered, the search radius
can be adaptively reduced to $C = \sqrt{w(n_l)}$. In other words, after having discovered a point
in the search set, we become interested only in locating those points that are even closer to
the target than that point. By adaptively adjusting the search radius, this decoder based
on the S-E enumeration has become the current state-of-the-art [5-7]. Thus it is our second
benchmark decoder, referred to as the SEA.
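The combination of depth-first traversal, child ordering by branch weight, and adaptive radius reduction can be sketched as follows. This is a simplified illustration of an S-E-style search (names and structure are ours), not a reproduction of the published implementations:

```python
import numpy as np
from itertools import product

def se_sphere_decode(v_tilde, R, alphabet):
    """Depth-first Schnorr-Euchner-style search on the weighted tree:
    children are visited in increasing branch-weight order, and the
    squared search radius shrinks to the best leaf weight found so far."""
    M = len(v_tilde)
    best = {'s': None, 'w': np.inf}              # squared radius = best['w']

    def descend(y, w, partial):                  # partial holds s_M, ..., s_{D+1}
        D = M - len(partial)                     # unconstrained dimensions left
        if D == 0:                               # leaf reached: shrink radius
            if w < best['w']:
                best['s'], best['w'] = partial[::-1], w
            return
        d = D - 1                                # 0-based index of s_D
        for x in sorted(alphabet, key=lambda x: (y[d] - R[d, d] * x) ** 2):
            bw = (y[d] - R[d, d] * x) ** 2       # connecting branch weight
            if w + bw >= best['w']:              # prune: outside current sphere;
                break                            # sorted order prunes the rest too
            descend(y[:d] - R[:d, d] * x, w + bw, partial + [x])

    descend(v_tilde.copy(), 0.0, [])
    return np.array(best['s']), best['w']

# Sanity check against exhaustive search on a small random instance
rng = np.random.default_rng(3)
M = 3
R = np.triu(rng.normal(size=(M, M))) + 2 * np.eye(M)
v_tilde = rng.normal(size=M)
alphabet = [-1.0, 1.0]
s_sd, w_sd = se_sphere_decode(v_tilde, R, alphabet)
w_bf = min(np.sum((v_tilde - R @ np.array(s)) ** 2)
           for s in product(alphabet, repeat=M))
assert np.isclose(w_sd, w_bf)
```

Note that before any leaf is found the radius is effectively infinite, so the first leaf is always reached; every subsequent leaf discovery can only tighten the pruning threshold.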
Chapter 3

Automatic sphere decoding

In Section 2.2 we saw that sphere decoding amounts to searching for the smallest weight leaf
node in a B-ary tree with M + 1 levels. We have also seen that one of the main weaknesses
of current proposals is the sensitivity of their computation time to the value chosen for the
search radius parameter C. To tackle this problem, algorithms have been proposed that
adaptively adjust the search radius during execution of the decoder [1, 7] and are shown
to be less sensitive to its value [6], or that apply preprocessing to find an optimal search
radius before starting the decoding stage [10].
In our work, we take a fresh approach to the problem. Recall that a Sphere Decoder
(SD) starts from the root node and expands its knowledge of other nodes in the tree by
computing the weights of connected branches and nodes. However, its purpose is not to
completely explore the tree. Instead it seeks to explore just enough of the tree in order to
find a point in the search set that specifies an ML solution.
To develop this idea more formally, let a node n be a 4-tuple $(w, L, \tilde{y}, s^M_{D+1})$, where the
weight $w$ is in $\mathbb{R}_{\ge 0}$, the level L is in $\{0, \ldots, M\}$, the associated residual target $\tilde{y}(s^M_{D+1})$ is
in $\mathbb{R}^D$, the associated vector of applied constraint values $s^M_{D+1}$ is in $\mathcal{X}^L$, and we recall that
$D = M - L$ as before. We can then define the action of expanding a node:

Definition 3.1 Given upper triangular matrix R, alphabet $\mathcal{X} = \{x_1, \ldots, x_B\}$ of size B and
non-leaf node $n = (w, L, \tilde{y}, s^M_{D+1})$, let expanding n be defined as processing $w$, $L$, $\tilde{y}$, and $s^M_{D+1}$
to generate its B children $\{n_{c_1}, \ldots, n_{c_B}\}$ as follows:

$$n_{c_j} = \left( w + d^2(\tilde{y}_D, r_{DD}\, x_j),\; L + 1,\; (\tilde{y} - r_D\, x_j)_{\backslash D},\; \begin{bmatrix} x_j \\ s^M_{D+1} \end{bmatrix} \right). \tag{3.2}$$
D+1

We begin by asking the following question: Assuming a tree-based search is applied to
the problem, what is the smallest number of nodes in Fig. 2.2 that must be expanded in
order to obtain an ML solution? Given the weight of the smallest weight leaf node, denoted
$w^\star$, it is clear that no more than all nodes having weights less than $w^\star$ must be expanded.


Expanding any non-leaf node whose weight is greater than or equal to $w^\star$ can only lead to
the discovery of leaf nodes whose weights are also greater than or equal to $w^\star$.

Therefore, both the FPB and the SEA decoders may expand more nodes than necessary,
because they rely on a search radius to dictate whether or not a node should be expanded
during the enumeration. Even though it may be adaptively reduced, if the squared search
radius is ever larger than $w^\star$, then it is possible for nodes having weights greater than or
equal to $w^\star$ to be expanded. In contrast, the Automatic Sphere Decoder (ASD) is designed
to expand precisely the number of nodes that is needed to establish an ML solution. It is
also able to do so without prior knowledge of the optimal leaf node weight $w^\star$.

3.1 A novel approach


The ASD makes use of the tree structure in Fig. 2.2 to eliminate the need for a radius
parameter. It efficiently searches for the smallest weight leaf node by maintaining a list of
nodes $N_b$ that define the border between the explored and unexplored parts of the tree.
Initially this list contains only the root node, to which is assigned the 4-tuple $(0, 0, \tilde{v}, \emptyset)$.
In each iteration, the ASD selects and expands the border node with the smallest weight.¹
The expanded node is then deleted from $N_b$, since it is no longer on the border, and replaced
by its B children. Note that although the decoder knows about the border nodes, they are
considered to be unexplored. The first time a leaf node $n_{l_0}$ is selected for expansion, the
decoder returns the associated vector of applied constraint values $s(n_{l_0})$, along with its
weight $w(n_{l_0})$, and terminates. Pseudocode is given in Algorithm 1:

Algorithm 1 Automatic Sphere Decoder ASD(v, H, $\mathcal{X}$)

1: Initialize $N_b$ as $\{n_0\}$  ▷ Initialize border nodelist
2: n ← Get&DeleteMin($N_b$)  ▷ Select root node
3: while n is not a leaf node do  ▷ Until leaf node selected
4:   $\{n_{c_1}, \ldots, n_{c_B}\}$ ← Expand(n)  ▷ Expand selected node (3.2)
5:   Insert $n_{c_j}$ into $N_b$, $j = 1, \ldots, B$  ▷ Insert children into nodelist
6:   n ← Get&DeleteMin($N_b$)  ▷ Select smallest weight node
7: end while
8: Return $s^\star = s(n)$, $C^\star = \sqrt{w(n)}$  ▷ Report optimal solution and search radius

Intuitively, the strategy employed by the ASD ensures that whenever a node is selected
for expansion, all nodes in the tree having weights less than that node will already have been
explored. Consequently, all unexplored nodes must have weights that are at least as great as
that of the selected node. Thus, when the first leaf node $n_{l_0}$ is selected for expansion, it must
be the case that the weights of the other leaf nodes, none of which have yet been explored
(although they may also be on the border), are greater than or equal to $w(n_{l_0})$. Since these
weights correspond to values of the objective function evaluated for a given constraint value
vector, $s(n_{l_0})$ can be declared an ML solution.

¹In the event of a tie, the border node with the smallest weight and the lowest level is selected.
The reader is referred to Appendix B for proofs that the constraint value vector $s^\star$
returned by Algorithm 1 is an ML solution to minimization problem (2.4), and that the
square root of the returned weight is the optimal search radius, denoted $C^\star$.
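Algorithm 1 is a best-first search, so the border nodelist maps naturally onto a min-heap. The following sketch (our illustration using Python's heapq; 0-based indices, with the heap key ordering weight first and then level, matching the tie-break in the footnote) returns the ML point, its weight, and the number of expansions:

```python
import heapq
import numpy as np
from itertools import product

def asd(v_tilde, R, alphabet):
    """Automatic Sphere Decoder sketch: best-first search over the weighted
    tree, with the border nodelist N_b kept as a min-heap. No search radius
    is needed; the first leaf node selected yields an ML solution, and the
    square root of its weight is the optimal radius C*."""
    M = len(v_tilde)
    counter = 0                              # unique tie-breaker for the heap
    # Border node: (weight, level, counter, residual target, constraints)
    border = [(0.0, 0, counter, v_tilde.copy(), [])]
    expansions = 0                           # number of nodes expanded
    while True:
        w, L, _, y, partial = heapq.heappop(border)  # smallest-weight node
        if L == M:                           # first leaf selected: terminate
            return np.array(partial[::-1]), w, expansions
        expansions += 1
        d = M - L - 1                        # 0-based index of variable s_D
        for x in alphabet:                   # expand into B children, per (3.2)
            counter += 1
            bw = (y[d] - R[d, d] * x) ** 2
            heapq.heappush(border, (w + bw, L + 1, counter,
                                    y[:d] - R[:d, d] * x, partial + [x]))

# The returned weight matches the exhaustive minimum of objective (2.7),
# and the expansion count respects the lower bound of Proposition 3.3
rng = np.random.default_rng(4)
M = 3
R = np.triu(rng.normal(size=(M, M))) + 2 * np.eye(M)
v_tilde = rng.normal(size=M)
s_ml, w_ml, phi = asd(v_tilde, R, [-1.0, 1.0])
w_bf = min(np.sum((v_tilde - R @ np.array(s)) ** 2)
           for s in product([-1.0, 1.0], repeat=M))
assert np.isclose(w_ml, w_bf)
assert phi >= M
```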

3.2 The computational efficiency of sphere decoding


One of the main difficulties encountered when directly comparing the computation times of
different sphere decoders, e.g., in terms of floating-point operations, is the implementation-dependent
nature of such a comparison. In our work, we break down the operations
performed by SDs into three categories: expanding nodes, determining the next node to
expand, and maintaining a nodelist, if necessary. We claim that all computations involved
in sphere decoding can be grouped into one of these categories. Further, we argue that in order
to provide a fair comparison, the computation time required for a single node expansion
must be fixed, i.e., equally optimized for all of the decoders being compared.

Therefore, we propose to evaluate and compare the computational performance of different
SDs in a theoretical framework, by considering the number of nodes expanded during
their execution, denoted $\phi$. We believe that $\phi$ is an important characteristic for distinguishing
between different decoding algorithms, and begin our study of this quantity by providing
a lower bound on its value:

Proposition 3.3 The number of nodes expanded by a sphere decoder satisfies M .

Proof We recall that the tree structure shown in Fig. 2.2 underlies the operation of sphere
decoders. Because all of the leaf nodes are at level M , when exploring from the root, the
smallest number of expansions required to reach the first leaf node is M .
The lower bound of M on the number of nodes expanded by a sphere decoder suggests
the following definition of efficiency:

Definition 3.4 Given target v, channel matrix H, finite alphabet X, and search radius C,
let the number of nodes expanded by sphere decoder SD(v, H, X, C) be denoted φ_SD(v, H, X, C).
The computational efficiency of algorithm SD(v, H, X, C) is then defined as

    η_SD(v, H, X, C) ≜ M / φ_SD(v, H, X, C).    (3.5)

Definition 3.4 ensures that 0 < η_SD ≤ 1 and that η_SD = 1 if and only if φ_SD = φ_min = M,
thus providing an intuitively satisfying description of the efficiency of a sphere decoding
algorithm. Armed with this definition, we can then present the following result:

Theorem 3.6 Given target v, channel matrix H, and finite alphabet X, the computational
efficiency of Algorithm 1 satisfies

    η_ASD(v, H, X) ≥ η_SD(v, H, X, C),    (3.7)

for all other optimal sphere decoding algorithms SD(v, H, X, C).

Proof Corollary B.9 asserts that every node n expanded by Algorithm 1 satisfies w(n) ≤ C∘².
Since C∘ is the optimal search radius, C∘ ≤ C for the radius C of any sphere decoder that ter-
minates successfully. Therefore n is also expanded by these decoders and φ_ASD(v, H, X) ≤
φ_SD(v, H, X, C) for any other optimal sphere decoding algorithm. The result follows from
the inverse relationship between η and φ given in (3.5).
Thm. 3.6 demonstrates that there is no known SD that expands fewer nodes than the
ASD during the decoding process. Thus the ASD achieves the optimal computational
efficiency. However, note that this result does not necessarily imply that the overall com-
putation time of the ASD is lower than that of all known decoders, since we have not taken
the node selection and nodelist maintenance operations into consideration.

3.3 Complexity analysis and results


Two key advantages of the automatic approach are that the search radius parameter is no
longer needed and that the number of nodes expanded by the proposed decoder is always less
than or equal to that expanded by existing sphere decoders. Since the time complexity of a
sphere decoding algorithm is dominated by the product of the number of nodes expanded φ
and the per-node processing time τ, both of these factors serve to improve the time
complexity of the automatic decoder with respect to that of existing sphere decoders.
In Fig. 3.1 we compare the average number of nodes expanded by the ASD to that
expanded by the FPB and SEA benchmark decoders over a wide range of SNRs. The simu-
lations were conducted over a (complex) MIMO Rayleigh flat fading channel with Additive
White Gaussian Noise (AWGN) and four transmit and four receive antennas. The decoders
were tested using the QPSK, 16-QAM and 64-QAM symbol alphabets, which correspond
to spectral efficiencies of 8, 16 and 24 bits/s/Hz, respectively.² The experimental setup is
described in more detail in Appendix C.
Fig. 3.1 shows that the SEA and ASD decoders significantly outperform the FPB with
respect to the average number of nodes expanded during the decoding stage. As predicted
by Thm. 3.6, φ_ASD ≤ φ_SEA regardless of the SNR. For all of the decoders, there is a lower
bound of φ ≥ 2M, where we recall that M is the number of transmit antennas; the factor
of two arises because there are two real dimensions per complex dimension.

² Results for the FPB decoder are only shown for the QPSK case, as even its average performance was
found to degrade very quickly with increasing search space dimension.

Figure 3.1: Average number of expanded nodes (φ) vs. average received SNR per bit for the
FPB, SEA and proposed automatic sphere decoders over a 4×4 MIMO flat fading channel
using QPSK, 16-QAM and 64-QAM modulations.
In addition to φ, the overall computation time of an SD depends on the per-node processing
time τ. This parameter is difficult to quantify because of its strong implementation-
dependence. In [9], it is argued that the FPB decoder performs a number of elementary
operations per expansion that is linear in i, where i = 1, . . . , M is the level of the node
being expanded. Therefore the per-node processing time τ_FPB is O(M). This base set of
operations required per expansion persists in the SEA and ASD algorithms. On top of these, the
SEA incurs a constant complexity penalty due to the S-E enumeration, which imposes an
expansion order on the nodes at each level. Therefore, τ_SEA is also linear in M.
The ASD decoder incurs a slightly greater penalty because it imposes an expansion order
on all nodes across all levels. In our implementation of the ASD, the nodelist N is stored as a
(binary) heap [13, Ch. 11]. Using this efficient data structure, the Get&DeleteMin and Insert
functions can each be completed in at most O(M log₂ B) time. Thus the complexity penalty
is linear, and like τ_SEA, τ_ASD is still linear in M, albeit with a larger coefficient. In practice
we have found that the time complexity penalty is not significant, as long as appropriate
data structures are used in the implementation.
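To make the preceding discussion concrete, the best-first search that underlies the ASD can be sketched in a few lines using a binary heap for the nodelist. The following is our own illustrative Python sketch, not the report's implementation: it assumes a real-valued square channel matrix, and because nodes are popped in order of nondecreasing weight, the first leaf removed from the heap is an ML solution, so no search radius is required.

```python
import heapq
import numpy as np

def asd_detect(v, H, alphabet):
    """Best-first sphere decoding sketch: returns an ML solution to
    min |v - H s|^2 over s in alphabet^M, without a search radius."""
    M = H.shape[1]
    Q, R = np.linalg.qr(H)              # H = QR, R upper triangular
    y = Q.T @ v                         # orthogonally transformed target
    # Heap entries: (node weight, symbols fixed so far, deepest variable last).
    heap = [(0.0, ())]
    while heap:
        w, tail = heapq.heappop(heap)   # Get&DeleteMin
        if len(tail) == M:              # first leaf popped is an ML solution
            return np.array(tail), w
        i = M - 1 - len(tail)           # next variable to constrain (0-based)
        for x in alphabet:              # expand all B children of this node
            s_part = np.array((x,) + tail)
            r = y[i] - R[i, i:] @ s_part        # branch residual
            heapq.heappush(heap, (w + r * r, (x,) + tail))  # Insert
```

Since branch weights are nonnegative, every node popped has weight at most C∘², matching the behaviour claimed for Algorithm 1, and the heap keeps each Insert and Get&DeleteMin logarithmic in the nodelist size.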
Chapter 4

Ordering for computational efficiency

Even when using the more efficient S-E enumeration, it is known that the computational cost
of sphere decoding is highly sensitive to the ordering of the columns of the channel matrix
[7]. In this chapter, we investigate the impact of different orderings on the computational
efficiency of SD algorithms. Unlike previous proposals, where ordering decisions are based
only on the channel matrix [7], we demonstrate that the optimal ordering for sphere decoding
depends on the channel matrix and also on the target. Drawing on our findings, we develop
an efficient algorithm for computing an enhanced ordering and evaluate its effectiveness in
reducing the computational cost incurred by various sphere decoders.

4.1 Case study: Orderings and sphere decoding


To study their effect on sphere decoders, we begin by defining an ordering π as a permutation
of the index set I and write its elements as the ordered set

    π = {i_M, i_{M−1}, . . . , i_1},    (4.1)

where the mapping from I to π is a bijection. π also implicitly defines an M × M permuta-
tion matrix P_π ≜ [ e_{i_M} e_{i_{M−1}} · · · e_{i_1} ] that can be used to permute the columns of channel
matrix H through right multiplication:

    H P_π = [ h_{i_M} · · · h_{i_1} ].    (4.2)

Recall from Section 2.2 that the underlying QR factorization forces sphere decoders to
constrain the optimization variables in order from s_M to s_1, i.e., in the reverse order of
the columns of H. By applying ordering π to the channel matrix before computing its QR
factorization, we allow the variables to be constrained in a strategic rather than random
order s_{i_1}, . . . , s_{i_M}.
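As a quick illustration of (4.1)–(4.2), the permutation matrix can be built from columns of the identity and applied by right multiplication. This is our own NumPy sketch using 0-based indices, not notation from the report:

```python
import numpy as np

def permutation_matrix(order):
    """P_pi = [ e_{i_M} e_{i_{M-1}} ... e_{i_1} ] for order = (i_M, ..., i_1)."""
    return np.eye(len(order))[:, list(order)]

H = np.array([[1.13, 5.65],
              [6.78, 2.20]])
P = permutation_matrix((1, 0))   # pi = {2, 1} in the report's 1-based notation
HP = H @ P                       # right multiplication permutes the columns of H
```

Sphere decoding then operates on HP, so the QR factorization of the permuted matrix constrains s_{i_1} first.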
Since the ASD achieves the optimal sphere decoding efficiency over the set of known SDs
(Thm. 3.6), we propose to use η_ASD(v, H P_π, X), or simply η_ASD(π), to quantify
the computational efficiency of ordering π. Because it expands those nodes contained in the
search sphere of optimal radius C∘ centered at the target v (Corollary B.9), η_ASD captures
an essential property of sphere decoder geometry. Therefore, unlike other decoders, when
it is invoked with different orderings applied to the channel matrix, any variations in η_ASD
can be attributed solely to changes in the decoder geometry induced by the ordered channel
matrix. This approach enables us to decouple the ordering part of the problem from the
sphere decoder itself.
Next, we consider how the ASD decodes a received signal transmitted using BPSK
modulation (X = {−1, 1}) over a sample 2 × 2 channel

    H = [ 1.13  5.65
          6.78  2.20 ].    (4.3)

Fig. 4.1 shows a map of the optimal ordering regions of the columns of sample channel
matrix H over the domain of possible targets v, where we call ordering π∘ and permuted
channel matrix H∘ = H P_π∘ optimal if

    η_ASD(v, H∘, X) ≥ η_ASD(v, H P, X)    (4.4)

for all M! possible permutation matrices P.

Given the two-column channel matrix in (4.3), there are two possible orderings of H. On
the diagram in Fig. 4.1, light shading indicates points v for which η_ASD(v, H[ e_1 e_2 ], X) =
η_ASD(v, H[ e_2 e_1 ], X), i.e., where both orderings result in equivalent decoder efficiencies,
medium shading shows where the decoder using ordering π = {1, 2} expands fewer nodes
than the alternative, and dark shading, where ordering π = {2, 1} is favoured.
The optimal ordering map in Fig. 4.1 illustrates an important property of ordering
for sphere decoding that has been identified through our work: The optimal ordering for
efficient sphere decoding depends not only on the channel matrix H, but also on the target
v. This characterization is different from that underlying previous proposals, e.g., applying
the Vertical Bell Labs Layered Space-Time (V-BLAST) ordering borrowed from the spatial
multiplexing literature [19] to sphere decoding [7], since only H is considered in such designs.

Figure 4.1: Optimal ordering of the columns of channel matrix H as a function of target
v: Light shading indicates that both orderings are equivalent, medium shading that H∘ =
H[ e_1 e_2 ], and dark shading that H∘ = H[ e_2 e_1 ].

4.2 An enhanced ordering scheme


To develop an ordering that promotes sphere decoder efficiency, we consider the tree struc-
ture in Fig. 2.2. Recall that one optimization variable is associated with all of the branches
originating from the nodes at each level. Each of the M! possible orderings associates differ-
ent variables with these sets of branches and therefore defines a distinct set of branch and
node weights. Because the ASD expands those nodes having weights less than the squared
optimal search radius C∘² (Corollary B.9), it is this difference in the node weights that so
greatly affects the computational performance of SDs under different orderings.
Our strategy is to start at the root node and descend to a leaf, possibly not the one with
the smallest weight, in M steps. The resulting path then implicitly defines an ordering.
Initially, we are armed only with the structure of the tree; no optimization variables are
associated with any of the branches. At each level, we choose the best variable to constrain
given the available information. Once a variable s_{i′} is chosen, it is removed from contention
in subsequent steps by applying one of the B constraints s_{i′} = x_j associated with the chosen
variable, i.e., traversing one of the outgoing branches to a child node.
To choose the next variable to constrain, the following decision criterion is proposed:

    i′ = argmax_{i ∈ remaining} w( b(s_i = x_[2]) ),    (4.5)


where we recall that w(·) is a branch weight. Its parameter b(s_i = x_[2]) is the second branch
associated with variable s_i, taken in order of ascending weight, i.e., x_[1], . . . , x_[B] is a
permutation of alphabet X such that w(b(s_i = x_[1])) ≤ · · · ≤ w(b(s_i = x_[B])).
The rationale behind decision criterion (4.5) is to select at each level the variable whose
resulting branch weights may be expected to most encourage heavy pruning of the search
tree. To achieve this goal, we recall that B adjacent branches originate from each non-leaf
node. Observe that the branches with the smallest weights w(b(s_i = x_[1])) are by definition the
most likely to be explored by the ASD, i.e., they are not likely to be pruned. Therefore our
heuristic ordering rule is designed to choose the optimization variable such that the weight
of the adjacent branch having the second smallest weight, w(b(s_i = x_[2])), is maximized,
thus making it more likely that all B − 1 adjacent nodes are pruned by the ASD.

Having chosen variable i′ as the next to constrain, we then traverse branch b(s_{i′} = x_[1]),
i.e., the one that we also expect the ASD to explore. This process is repeated until all M
selections are made. Pseudocode to compute the proposed ordering π̂ is given in Algorithm 2,
where the function Child(n, b) returns the child of node n along branch b:

Algorithm 2 An Enhanced Ordering π̂(v, H, X)

1: I ← {1, 2, . . . , M}    ▷ Initialize index set
2: n ← n_0    ▷ Select root node
3: for each level L from 1 to M do
4:    i_L ← argmax_{i∈I} w(b(s_i = x_[2]))    ▷ Assign Lth variable to constrain
5:    I ← I \ {i_L}    ▷ Remove i_L from index set
6:    n ← Child(n, b(s_{i_L} = x_[1]))    ▷ Traverse smallest-weight branch to next node
7: end for
8: Return π̂ = {i_M, . . . , i_1}
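Algorithm 2 relies only on branch weights, which are squared orthogonal distances to affine sets (cf. (A.7)–(A.8) in Appendix A) and can therefore be computed by ordinary least squares. The sketch below is our own geometric rendition in Python, assuming a square real channel matrix and 0-based indices; the name `enhanced_ordering` and the least-squares formulation are ours, not taken from the report's implementation.

```python
import numpy as np

def _branch(w, offset, H, active, i, x):
    """Weight of branch s_i = x, and the projection of the residual target w
    onto the affine set span{h_j : j in active, j != i} + offset + h_i * x."""
    others = [j for j in active if j != i]
    r = w - offset - H[:, i] * x
    if others:
        a = np.linalg.lstsq(H[:, others], r, rcond=None)[0]
        r = r - H[:, others] @ a        # orthogonal residual
    return float(r @ r), w - r          # (branch weight, projection)

def enhanced_ordering(v, H, alphabet):
    """Greedy ordering: at each level pick the variable maximizing the
    second-smallest branch weight, then descend the smallest-weight branch."""
    M = H.shape[1]
    active = list(range(M))
    w = np.asarray(v, dtype=float)
    offset = np.zeros_like(w)
    constrained = []                    # i_1, i_2, ... in order of constraint
    for _ in range(M):
        best = None
        for i in active:
            cand = [_branch(w, offset, H, active, i, x) + (x,) for x in alphabet]
            cand.sort(key=lambda t: t[0])   # ascending weight: x_[1], ..., x_[B]
            if best is None or cand[1][0] > best[0]:
                best = (cand[1][0], i, cand[0][1], cand[0][2])
        _, i, proj, x1 = best
        constrained.append(i)
        active.remove(i)
        w = proj                        # traverse smallest-weight branch
        offset = offset + H[:, i] * x1
    return constrained[::-1]            # pi_hat = {i_M, ..., i_1}
```

The returned list follows the report's convention of listing π̂ from i_M down to i_1, i.e., reversed relative to the order in which the variables were chosen.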

4.3 Performance evaluation


We present three evaluations of the performance of the enhanced ordering scheme. First we
consider how closely the orderings of Algorithm 2 correspond to the optimal ones confirmed
by simulation (Fig. 4.1). Fig. 4.2 depicts a map of the proposed ordering decisions Ĥ = H P_π̂
over the domain of possible targets v, given the same sample channel as before (4.3).
In this two-dimensional example, the match between the orderings computed by Algo-
rithm 2 and the optimal orderings can easily be verified graphically. A similar correspon-
dence is evidenced when using higher order modulations such as 16-QAM; graphical results
for this case are shown in Fig. 4.3. We also provide a proof of the optimality of the proposed
ordering for all channel matrices H and targets v when M = B = 2 in Appendix D.
One of the key hypotheses underlying our work is that an ordering effective in enhancing
the performance of the ASD should also enhance that of other sphere decoding algorithms.

Figure 4.2: Proposed ordering of the columns of a sample two-dimensional (real) channel
matrix H as a function of target v: Medium shading indicates where Ĥ = H[ e_1 e_2 ], and
dark shading where Ĥ = H[ e_2 e_1 ].

Thus another important evaluation of the performance of the enhanced ordering scheme is given
by how successful it is in reducing the number of nodes expanded by a standard sphere
decoder. Fig. 4.4 shows the average number of nodes expanded by the state-of-the-art SEA
benchmark decoder described in Section 2.3 as a function of the SNR. The performance
curves obtained under three orderings are reported: random ordering, that used in V-
BLAST detection [19], and that proposed by Algorithm 2.
A vast improvement is realized by the new ordering scheme, especially at low SNRs,
where the complexity of existing sphere decoders is not considered to be competitive. What
is particularly noteworthy is that the improvement remains significant even for higher mod-
ulation orders. Although the enhanced ordering was derived using the ASD, simulation
results demonstrate that the advantage is transferable to other sphere decoders as well.
Finally, we consider how the enhanced ordering fares over different realizations of the
channel matrix H: Is it the case that there are a few easy-to-order channels, or is it
generally able to provide some benefit? Figs. 4.5(a)-4.5(c) present histograms showing esti-
mates of the probability density functions f_SEA(φ_SEA) of the number of nodes expanded by
the SEA decoder when using random ordering, that used in V-BLAST detection, and the
proposed approach. In Fig. 4.5(d), the density function f_ASD(φ_ASD) obtained when com-
bining the enhanced ordering with the ASD is also shown for comparison. 10,000 different
channel realizations were generated for this experiment, which was conducted at an SNR

(a) Optimal ordering map. (b) Proposed ordering map.

Figure 4.3: Optimal and proposed orderings of the columns of channel matrix H as a
function of target v: Light shading indicates that both orderings are equivalent, medium
shading where H∘ or Ĥ = H[ e_1 e_2 ], and dark shading where H∘ or Ĥ = H[ e_2 e_1 ].

Figure 4.4: Average number of nodes expanded by the SEA decoder (φ_SEA) vs. average
received SNR per bit under the random, V-BLAST and proposed orderings over a 4×4
MIMO flat fading channel using QPSK, 16-QAM and 64-QAM modulations.

of 0 dB, i.e., under extremely poor channel conditions. We see that not only is the average
number of expanded nodes dramatically reduced by the enhanced ordering, but the variance
and maximum values of φ are likewise greatly reduced. These performance metrics are even
further improved when the pre-processing stage is followed by the ASD.
Thus we have shown that the proposed ordering scheme is an extremely and consistently
effective pre-processing stage that can be readily combined with existing sphere decoders.
The time complexity of Algorithm 2 is O(M³), roughly comparable to a matrix inversion
and a few QR factorizations. In practice, we have observed immensely reduced decoding
times at low SNRs using unoptimized implementations of the proposed ordering. As before,
we see in Fig. 4.4 that as the SNR exceeds a certain modulation-dependent level, the number
of nodes expanded by all sphere decoders, even under random ordering, converges to the
lower bound of 2M. Thus at such SNRs it may not be efficient to apply any pre-processing.

(a) SEA with random ordering: avg(φ_SEA) = 25.575, var(φ_SEA) = 241.164, max(φ_SEA) = 177.
(b) SEA with V-BLAST ordering: avg(φ_SEA) = 24.955, var(φ_SEA) = 265.895, max(φ_SEA) = 159.
(c) SEA with proposed ordering: avg(φ_SEA) = 10.745, var(φ_SEA) = 25.883, max(φ_SEA) = 70.
(d) ASD with proposed ordering: avg(φ_ASD) = 10.554, var(φ_ASD) = 20.834, max(φ_ASD) = 65.

Figure 4.5: Empirical estimates of the probability density functions f_SEA(φ_SEA) and
f_ASD(φ_ASD) under the random, V-BLAST and proposed orderings over a 4×4 MIMO flat
fading channel at an SNR of 0 dB using QPSK modulation. The average value of φ, as well
as its variance and maximum value over 10,000 channel realizations, are also indicated on
each figure.
Chapter 5

Conclusions

In this report we discuss two contributions that we have put forward to the wireless com-
munications research community on the subject of efficient Maximum Likelihood (ML)
detection for communication over linear Multiple Input Multiple Output (MIMO) chan-
nels. Our solutions are both effective in addressing critical weaknesses of current proposals
and timely, as the sphere decoder vies for a central role in current 4G standardization efforts.
The new Automatic Sphere Decoder (ASD) is capable of ML detection at a competitive
complexity without the need for specifying a radius parameter. The removal of this param-
eter leads to reduced pre-processing times, since it is no longer necessary to compute an
initial search radius before invoking the decoder. We also show that the philosophy behind
the ASD means that it never expands more nodes, i.e., it achieves a computational efficiency
at least as high as that of any known sphere decoder, over all SNRs and modulation orders.
Next, applying the theoretical framework developed in the design of the ASD, we show
that the order of detection of the transmitted symbols can have an extremely significant
impact on the computational efficiency of sphere decoding algorithms. Therefore, we pro-
pose a heuristic ordering rule designed to maximize sphere decoding efficiency. Simulations
demonstrate that our enhanced ordering not only offers an immense reduction in the com-
putational cost of sphere decoding, but it is also compatible with existing decoders.
Of particular note is its effectiveness in the low SNR regime, which has traditionally been
a prohibitively expensive one for sphere decoders. Another strength of the new ordering is
that it is of increasing influence at higher spectral efficiencies, thus enabling sphere decoders
to achieve ML detection at a competitive computational cost over a wider range of operating
parameters.
Because they facilitate communication over MIMO channels at low SNRs and at higher
spectral efficiencies, offering great computational improvements over current proposals, we
believe that these contributions and their underlying ideas will play a key role in the design
of future wireless systems.

Bibliography

[1] Erik Agrell, Thomas Eriksson, Alexander Vardy, and Kenneth Zeger. Closest point
search in lattices. IEEE Transactions on Information Theory, 48(8):2201–2214, August
2002.

[2] Siavash M. Alamouti. A simple transmit diversity technique for wireless communica-
tions. IEEE Journal on Selected Areas in Communications, 16(8):1451–1458, October
1998.

[3] Helmut Bolcskei, David Gesbert, and Arogyaswami J. Paulraj. On the capacity of
OFDM-based spatial multiplexing systems. IEEE Transactions on Communications,
50(2):225–234, February 2002.

[4] Béla Bollobás. Graph Theory. Springer-Verlag, 1979.

[5] Andreas Burg, Moritz Borgmann, Claude Simon, Markus Wenk, Martin Zellweger,
and Wolfgang Fichtner. Performance tradeoffs in the VLSI implementation of the
sphere decoding algorithm. In IEE 3G Mobile Communication Technologies Conference,
October 2004.

[6] Albert M. Chan and Inkyu Lee. A new reduced-complexity sphere decoder for multiple
antenna systems. In IEEE International Conference on Communications, volume 1,
pages 460–464, April 2002.

[7] Mohamed Oussama Damen, Hesham El Gamal, and Giuseppe Caire. On maximum-
likelihood detection and the search for the closest lattice point. IEEE Transactions on
Information Theory, 49(10):2389–2402, October 2003.

[8] Gerard J. Foschini. Layered space-time architecture for wireless communication in
a fading environment when using multiple antennas. Bell Labs Technical Journal,
1(2):41–59, September 1996.

[9] Babak Hassibi and Haris Vikalo. On the sphere decoding algorithm: Part I, The
expected complexity. To appear in IEEE Transactions on Signal Processing, 2004.


[10] Wai Ho Mow. Universal lattice decoding: Principle and recent advances. Wireless
Communications and Mobile Computing, 3(5):553–569, August 2003.

[11] Theodore S. Rappaport, A. Annamalai, R. M. Buehrer, and William H. Tranter. Wire-
less communications: Past events and a future perspective. IEEE Communications
Magazine, 40(5):148–161, May 2002.

[12] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 1970.

[13] Robert Sedgewick. Algorithms. Addison-Wesley, 1988.

[14] Gilbert Strang. Linear algebra and its applications. Harcourt, 1988.

[15] Karen Su and Ian J. Wassell. An enhanced ordering for efficient sphere decoding. In
IEEE International Conference on Communications (to appear), May 2005.

[16] Vahid Tarokh, Nambi Seshadri, and A. Robert Calderbank. Space-time codes for high
data rate wireless communication: Performance criterion and code construction. IEEE
Transactions on Information Theory, 44(2):744–765, March 1998.

[17] I. Emre Telatar. Capacity of multi-antenna Gaussian channels. Technical report, Oc-
tober 1995.

[18] Emanuele Viterbo and Joseph Boutros. A universal lattice code decoder for fading
channels. IEEE Transactions on Information Theory, 45(5):1639–1642, July 1999.

[19] Peter W. Wolniansky, Gerard J. Foschini, Glen D. Golden, and Reinaldo A. Valenzuela.
V-BLAST: An architecture for realizing very high data rates over the rich-scattering
wireless channel. In International Symposium on Signals, Systems, and Electronics,
pages 295–300, September 1998.
Appendix A

The geometry of sphere decoding

Chapter 2 considers sphere decoding from an algebraic perspective. By applying the QR
factorization and decomposing the objective function into the summations of (2.11), we
arrive at the underlying tree structure depicted in Fig. 2.2. This appendix presents a
geometric interpretation of sphere decoding, leading to an equivalent tree structure that
likewise facilitates efficient decoding. First we review some important geometric notions.¹

Definition A.1 Given matrix H of full rank M, let the linear subspace spanned by the
columns of H be defined as

    S(H) ≜ { z | z = Ha, a ∈ R^M }.    (A.2)

The ML detection problem involves working within this space to find the point in the
search set that is closest to a target v.

Definition A.3 Given matrix H of full rank M and alphabet X of size B, let the finite
lattice of points in the search set be defined as

    L(H) ≜ { z | z = Hs, s ∈ X^M }.    (A.4)

There are B^M lattice points in L(H), as shown in Fig. A.1(a) for the case where B =
M = 2. It can be partitioned in M ways into B sets of B^{M−1} lattice points, where each
set of points is embedded in one of B parallel affine sets forming a collection. For instance,
given i ∈ I, the ith collection {F_i(s̄_i) | s̄_i ∈ X} contains B affine sets defined as

    F_i(s̄_i) ≜ { z | ⟨z − h_i s̄_i, (H^{−1})_i^T⟩ = 0 },    (A.5)

where ⟨·, ·⟩ denotes the inner product.


1
For more details on linear algebraic structures and their geometry, please see [14].


(a) A finite lattice with M = 2 and X = {−1, 1}. (b) Projections of a point y onto the
affine sets in collection i = 1.

Figure A.1: Some geometric entities useful in the analysis of sphere decoding.

We note that an affine set is generally defined as a set F = S + a for some linear subspace
S and offset a [12, Sec. 1]. Thus an equivalent definition of the affine sets in collection i is

    F_i(s̄_i) ≜ S(H_{\i}) + h_i s̄_i.    (A.6)

Equation (A.5) defines the affine sets in terms of their shared null space, which is spanned by
normal vector (H^{−1})_i^T, whereas (A.6) is concerned instead with their column space, which
is also shared and is spanned by the columns of H_{\i}. The orthogonal projection of a vector
y onto affine set F_i(s̄_i) can then be defined as

    proj_{F_i(s̄_i)}(y) ≜ y − ( ⟨y − h_i s̄_i, (H^{−1})_i^T⟩ / |(H^{−1})_i^T|² ) (H^{−1})_i^T,    (A.7)

and the corresponding orthogonal distance as

    d(y, F_i(s̄_i)) ≜ | y − proj_{F_i(s̄_i)}(y) |.    (A.8)

It should be clear that proj_{F_i(s̄_i)}(y) is the point in the affine set that is closest in Euclidean
distance to y, as depicted in Fig. A.1(b). Algebraically, F_i(s̄_i) contains the feasible set of
lattice points satisfying constraint s_i = s̄_i, which we define as

    L_i(s̄_i) ≜ L(H_{\i}) + h_i s̄_i,    (A.9)

where we refer to h_i and s̄_i as the offset vector and offset coefficient, respectively, of L_i(s̄_i)
and F_i(s̄_i). In particular, observe that L_i(s̄_i) − h_i s̄_i is nothing more than a finite lattice
having one less dimension than the original lattice L(H). The objective function |v − Hs|²
can then be decomposed as follows:

    |v − Hs|² = |v − z|²,  z ∈ L(H)    (A.10)
              = d²(v, F_{i_1}(s̄_{i_1})) + | proj_{F_{i_1}(s̄_{i_1})}(v) − z′ |²,  z′ ∈ L_{i_1}(s̄_{i_1}).    (A.11)
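The decomposition (A.10)–(A.11) is an instance of Pythagoras' theorem: the segment from v to its projection is normal to the affine set, while the segment from the projection to any lattice point in the set lies within it. This can be checked numerically with a short sketch of (A.7); the code below is ours, assuming a square invertible real H and 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 2
H = rng.standard_normal((M, M))
v = rng.standard_normal(M)
Hinv = np.linalg.inv(H)

def proj_affine(y, i, x):
    """Orthogonal projection of y onto F_i(x), following (A.7)."""
    g = Hinv[i]                         # normal vector (H^{-1})_i^T
    return y - ((y - H[:, i] * x) @ g) / (g @ g) * g

s = np.array([1.0, -1.0])               # a lattice point with first symbol fixed
p = proj_affine(v, 0, s[0])             # project v onto the affine set containing Hs
lhs = np.sum((v - H @ s) ** 2)          # |v - Hs|^2, as in (A.10)
rhs = np.sum((v - p) ** 2) + np.sum((p - H @ s) ** 2)   # the two terms of (A.11)
```

Here `lhs` and `rhs` agree to machine precision, since v − p is parallel to the normal vector while p − Hs is orthogonal to it.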

Before completing this recursive decomposition, it will be helpful to observe some simi-
larities and differences between the geometric step in (A.11) and the algebraic one in (2.8).
There is clearly a strong structural resemblance between these two expressions. As before,
the first term of (2.8) is associated with a constraint, s_{i_1} = s̄_{i_1}, which is applied in the
geometric framework by projecting the target v onto affine set F_{i_1}(s̄_{i_1}). It is involved in
computing the values of the objective function for all B^{M−1} points where s_{i_1} = s̄_{i_1}. We
emphasize that these points are all contained in the affine set.
Because the geometric interpretation is not restricted by the QR factorization, we may
choose any collection i_1 ∈ I onto which to project the target in the first step, i.e., we
may choose to constrain any of the M optimization variables. In contrast, the QR-based
algebraic decomposition (2.8) must extract the Mth variable s_M in the first step.
Geometrically, (A.11) expresses the objective function as the sum of the squared orthog-
onal distance to an affine set, i.e., from the target v to a projection proj_{F_{i_1}(s̄_{i_1})}(v), and a
squared distance from this projection to a point in the search set satisfying s_{i_1} = s̄_{i_1}. Since
all such points are in the same affine set as the projection, it should be clear that subsequent
steps of the recursion take place within that affine set F_{i_1}(s̄_{i_1}).
To capture this notion in our recursive expressions, we extend the definitions of the affine
sets of interest to cases where L > 1 constraints are satisfied:

    F_{i_L,...,i_1}(s̄_{i_L,...,i_1}) ≜ S(H_{\i_1,...,i_L}) + H_{i_L,...,i_1} s̄_{i_L,...,i_1}.    (A.12)

We can then complete the recursive decomposition as follows:

    |v − Hs|² = d²(v, F_{i_1}(s̄_{i_1})) + | w_{i_1}(s̄_{i_1}) − z′ |²,  z′ ∈ L_{i_1}(s̄_{i_1})    (A.13)
              = Σ_{L=1}^{M} d²( w_{i_{L−1},...,i_1}(s̄_{i_{L−1},...,i_1}), F_{i_L,...,i_1}(s̄_{i_L,...,i_1}) ),    (A.14)

where, making the logical interpretations {i_0, . . . , i_1} = ∅ and {s̄_{i_0,...,i_1}} = ∅, the affine
residual targets are defined as

    w(∅) = v,  L = 0,
    w_{i_L,...,i_1}(s̄_{i_L,...,i_1}) ≜ proj_{F_{i_L,...,i_1}(s̄_{i_L,...,i_1})}( w_{i_{L−1},...,i_1}(s̄_{i_{L−1},...,i_1}) ),  L = 1, . . . , M.    (A.15)

We can see immediately that the summations of (A.14) have exactly the same recursive
structure as those of (2.10). Therefore this geometrically-oriented decomposition also en-
ables us to perform sphere decoding by exploring nodes of a B-ary tree with M + 1 levels, as
depicted in Fig. 2.2 and studied in Section 2.2. In addition, L gives the level of the residuals
defined according to (A.15).
As expected from their name, the affine residual targets w are closely related to the
orthogonally transformed (linear) residuals ỹ previously defined in Section 2.2. Because
they are projections onto affine sets as given by (A.12), it should be clear that each level-L
residual resides in a space of dimension D = M − L. In fact, if the variables are constrained
in the same order as that enforced by the QR factorization, we can write the following
explicit relationship between these entities via the untransformed linear residual targets y:

Proposition A.16 Given target v, channel matrix H, and alphabet X, let D = M − L.
Then the linear residual targets can be expressed as

    y(s̄^M_{D+1}) = w_{D+1,...,M}(s̄^M_{D+1}) − H^M_{D+1} s̄^M_{D+1},  or    (A.17)

    Q^T y(s̄^M_{D+1}) = [ ỹ(s̄^M_{D+1}) ; 0 ],    (A.18)

where Q is obtained from the QR factorization of H and the orthogonally transformed resid-
ual targets are defined as previously in (2.11).

Proof In the trivial case where L = 0, {M + 1, . . . , M} = ∅, {s̄^M_{M+1}} = ∅, and H^M_{M+1} s̄^M_{M+1} = 0.
Therefore y(∅) = w(∅) − 0 = v and Q^T y(∅) = Q^T v = ṽ = ỹ(∅).
In the more general case, we consider first (A.17): The affine residual is defined in (A.15)
as a projection onto affine set F_{D,...,M}(s̄^M_D), which has underlying linear subspace S(H^{D−1}_1)
and offset H^M_D s̄^M_D. Therefore, to compute the projection of w_{D+1,...,M}(s̄^M_{D+1}) onto this set,
we first remove the offset and then subtract away any remaining components that lie in the
null space of S(H^{D−1}_1). From the QR factorization, we have that the null space of S(H^{D−1}_1)
is spanned by the columns of Q^M_D and so the orthogonal projection can be written as

    w_{D,...,M}(s̄^M_D) = w_{D+1,...,M}(s̄^M_{D+1}) − Q^M_D (Q^M_D)^T ( w_{D+1,...,M}(s̄^M_{D+1}) − H^M_D s̄^M_D )    (A.19)
                      = w_{D+1,...,M}(s̄^M_{D+1}) − Q^M_D (Q^M_D)^T ( y(s̄^M_{D+1}) − h_D s̄_D ).    (A.20)

Then removing the offset and applying orthogonal transform Q^T to (A.19) and (A.20)
we obtain the following:

    Q^T y(s̄^M_D) = Q^T [ I − Q^M_D (Q^M_D)^T ] ( w_{D+1,...,M}(s̄^M_{D+1}) − H^M_D s̄^M_D )    (A.21)
                 = [ (Q^{D−1}_1)^T ; 0 ] ( y(s̄^M_{D+1}) − h_D s̄_D ).    (A.22)

The proof then proceeds by induction on L: For the base case where L = 0, D = M and
(A.22) becomes

    Q^T y(s̄_M) = [ (Q^{M−1}_1)^T ; 0 ] ( v − h_M s̄_M )    (A.23)
               = [ (ṽ − r_M s̄_M)_{\M} ; 0 ]    (A.24)
               = [ ỹ(s̄_M) ; 0 ].    (A.25)

Finally, making the assumptions that y(s̄^M_{D+1}) = w_{D+1,...,M}(s̄^M_{D+1}) − H^M_{D+1} s̄^M_{D+1} and
that Q^T y(s̄^M_{D+1}) = [ ỹ(s̄^M_{D+1}) ; 0 ], the inductive step can be shown in a similar manner:

    Q^T y(s̄^M_D) = [ (Q^{D−1}_1)^T ; 0 ] ( y(s̄^M_{D+1}) − h_D s̄_D )    (A.26)
                 = [ ( ỹ(s̄^M_{D+1}) − r_D s̄_D )_{\D} ; 0 ]    (A.27)
                 = [ ỹ(s̄^M_D) ; 0 ].    (A.28)

Proposition A.16 confirms that the orthogonally transformed linear residual targets
(2.11) are related to the affine residuals (A.15) by the appropriate affine offset and the
orthogonal matrix Q from the QR factorization. Therefore, if the optimization variables are
constrained in the same order as that enforced by the QR factorization, the summations in
(A.14) and (2.10) become identical.

Corollary A.29 Given target v, channel matrix H, and alphabet X, let D = M − L. Then
the terms in summations (2.10) and (A.14) are identical, i.e.,

    d\left( \tilde{y}\left(s^M_{D+1}\right)_D,\; r_{DD}\, s_D \right) = d\left( w_{D+1,\dots,M}\left(s^M_{D+1}\right),\; F_{D,\dots,M}\left(s^M_D\right) \right)    (A.30)

Proof Combining (A.15) and (A.20) with Proposition A.16 gives

    d\left( w_{D+1,\dots,M}\left(s^M_{D+1}\right),\; F_{D,\dots,M}\left(s^M_D\right) \right) = d\left( w_{D+1,\dots,M}\left(s^M_{D+1}\right),\; w_{D,\dots,M}\left(s^M_D\right) \right)    (A.31)

    = d\left( Q^M_D \left(Q^M_D\right)^T y\left(s^M_{D+1}\right),\; Q^M_D \left(Q^M_D\right)^T h_D\, s_D \right)    (A.32)

    = d\left( \begin{bmatrix} 0 \\ \left(Q^M_D\right)^T y\left(s^M_{D+1}\right) \end{bmatrix},\; \begin{bmatrix} 0 \\ \left(Q^M_D\right)^T h_D\, s_D \end{bmatrix} \right)    (A.33)

    = d\left( \tilde{y}\left(s^M_{D+1}\right)_D,\; \left( r_D\, s_D \right)_D \right).    (A.34)

Fig. A.2 provides a graphical illustration of the entities discussed in this Appendix. The
left hand side considers the decomposition (2.9) based on the QR factorization of a channel
matrix H = [h_1 h_2] (M = 2). It also shows the transformed residual targets computed
when performing the first step of the recursion given a target v. The right hand side shows
the relationship between these residuals and those computed in the geometric decomposition
(A.11) with i_1 = M. This link enables us to study the problem from a geometric perspective,
as in Chapter 4, while still performing computations using the efficient upper triangular
structure of R.
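As a concrete sanity check, the recursion can be exercised numerically. The sketch below is illustrative only (it assumes a real-valued channel and uses NumPy's thin QR factorization; the variable names are ours): it accumulates the per-level branch terms of (2.10) and confirms that, up to the component of v orthogonal to the column space of H (a constant independent of s), they sum to the objective |v − Hs|².

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 5
H = rng.standard_normal((N, M))          # real channel matrix, full column rank
v = rng.standard_normal(N)               # target
s = rng.choice([-1.0, 1.0], size=M)      # a candidate symbol vector from X^M

Q, R = np.linalg.qr(H)                   # thin QR: Q is N x M, R is M x M upper triangular
y_tilde = Q.T @ v                        # orthogonally transformed target

# Constrain s_M, s_{M-1}, ..., s_1 in turn, accumulating branch weights
# d^2(y_tilde_D, r_DD * s_D) and shrinking the residual target by one dimension.
total = 0.0
for D in range(M - 1, -1, -1):           # 0-based column index D
    total += (y_tilde[D] - R[D, D] * s[D]) ** 2
    y_tilde = y_tilde[:D] - R[:D, D] * s[D]

# The accumulated weight equals |Q^T v - R s|^2; it differs from |v - H s|^2
# only by |v - Q Q^T v|^2, which does not depend on s.
offset = np.sum(v**2) - np.sum((Q.T @ v) ** 2)
assert np.isclose(total + offset, np.sum((v - H @ s) ** 2))
```

Because the leftover term is constant over the search set, minimizing the accumulated weight is equivalent to minimizing |v − Hs|².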

Figure A.2: An illustration of the relationship between some of the algebraic and geometric
entities involved in sphere decoding for the case where M = 2 and X = {−1, +1}. Observe
that the problem dimension, i.e., the dimension of the space in which the residual targets reside,
is reduced by one at each step of the recursion.
Appendix B

Proof of the optimality of the automatic sphere decoder

The Automatic Sphere Decoder (ASD) makes use of the tree structure induced by the QR
factorization to find an ML solution without the need to specify a radius parameter. A
sample tree is shown in Fig. B.1 for the case where M = B = 2. The ASD efficiently
explores the tree in search of the smallest weight leaf node. It starts from the root and
expands the smallest number of nodes necessary to obtain a solution while guaranteeing its
optimality. To prove that it returns an ML solution, we begin by recalling and formalizing
some properties of this tree that were previously introduced in Section 2.2 and Chapter 3.

[Figure B.1 depicts the decoding tree: the root n_0 has weight ψ(n_0) = 0; its two branches carry the
constraints s_2 = −1 and s_2 = +1, with weights ψ(b_1) = d²(ỹ(∅)_2, −r_22) and ψ(b_2) = d²(ỹ(∅)_2, +r_22),
leading to nodes of weight ψ(n_1) = ψ(n_0) + ψ(b_1) and ψ(n_2) = ψ(n_0) + ψ(b_2). The four leaf branches
carry s_1 = ∓1, with weights ψ(b_3) = d²(ỹ(−1)_1, −r_11), ψ(b_4) = d²(ỹ(−1)_1, +r_11),
ψ(b_5) = d²(ỹ(+1)_1, −r_11) and ψ(b_6) = d²(ỹ(+1)_1, +r_11); the resulting leaf weights
ψ(n_3) = ψ(n_1) + ψ(b_3), ..., ψ(n_6) = ψ(n_2) + ψ(b_6) equal |ṽ − Rs̄|² evaluated at
s̄ = [−1, −1]^T, [+1, −1]^T, [−1, +1]^T and [+1, +1]^T, respectively.]

Figure B.1: A weighted B-ary tree for sphere decoding with M = 2 and X = {−1, +1}.

Using standard graph theoretic notation, we define the tree T(v, H, X) as the ordered
pair (N, B). Then given two distinct nodes n_i, n_j ∈ N there is a unique path from n_i to n_j,
denoted P(n_i, n_j).¹ Observe that P(n_i, n_j) is nothing more than a connected subgraph of

¹ Recall that a path is a walk over distinct nodes, i.e., an alternating sequence of nodes and branches
n_i ... n_j in which no node appears more than once. See [4] for more details.


 
T, which is in turn defined by the ordered pair (N_{P(n_i,n_j)}, B_{P(n_i,n_j)}) comprising the sets of
nodes and branches along the path. Next, we choose one of the nodes of T to be the root
n_0, and define a rooted path as a path having n_0 as one of its endnodes. Therefore for all
n ∈ N \ {n_0}, there exists a unique rooted path P(n_0, n).
The nodes of tree T(v, H, X) are distributed over M + 1 levels numbered from the leaf
nodes at level M to the root at level 0. We associate optimization variable s_D with all
branches bridging levels L and L + 1. Since s_D takes values s̄_D ∈ X, B branches originate
at each non-leaf node, each associated with a constraint of the form s_D = s̄_D, as shown in
Fig. B.1. Based on (2.10), the branch weight of a branch b bridging levels L and L + 1 is
then computed by applying the associated constraint:

    \psi(b) = d^2\left( \tilde{y}\left(\bar{s}^M_{D+1}\right)_D,\; r_{DD}\, \bar{s}_D \right).    (B.1)

Therefore, by definition branch weights are non-negative. Note that the weight of a branch
bridging levels L and L + 1 also depends on the set of constraints s^M_{D+1} = s̄^M_{D+1} associated
with its parent node at level L.
From the recursive form of (2.10) and the branch weights (B.1), the node weight of node
n is computed as

    \psi(n) = \sum_{b \in B_{P(n_0,n)}} \psi(b),    (B.2)

i.e., the sum of the weights of the branches in its rooted path. Similarly, the set of constraints
associated with n is given by those associated with the branches in its rooted path.

The node level L corresponds to the number of constrained variables; the dimension
of the residual target vector is then D = M − L. In particular, at a leaf node n_l, all M
optimization variables are constrained, i.e., s̄(n_l) ∈ X^M, and ψ(n_l) represents one of the
summations in (2.10), i.e., the value of the objective function evaluated at s̄(n_l).
Although the structure of T is fixed, initially only the properties of the root node are
known. The proposed decoder maintains a list of nodes N_b defining the border between the
explored and unexplored parts of the tree. At the start of decoding it contains only the root
node n_0. In each iteration, it selects and expands the border node with the smallest weight.
Recall that in the case of a tie, the border node having the smallest weight and the lowest
level is selected for expansion. The expanded node is then deleted from N_b, since it is no
longer on the border, and replaced by its B children. When the smallest weight border node
is a leaf, the decoder returns its associated constraint value vector s̄*, its weight ψ*, and then
terminates. We now show that s̄* is an ML solution to minimization problem (2.4).
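The border-list procedure just described is essentially a best-first (Dijkstra-like) search, and can be sketched in a few lines of Python. This is an illustrative reimplementation under our own naming, not the thesis's Algorithm 1 itself: the border list is kept as a min-heap keyed on (weight, level), so ties are broken in favour of the lower level as described above, and the residual target is recomputed from R at each expansion for clarity rather than speed.

```python
import heapq
import numpy as np

def asd(v, H, alphabet):
    """Best-first sphere decoding without a radius parameter (a sketch)."""
    Q, R = np.linalg.qr(H)               # thin QR factorization of the channel
    M = R.shape[1]
    y_root = Q.T @ v                     # orthogonally transformed target
    border = [(0.0, 0, ())]              # (node weight, level L, constraints s_M ... s_{M-L+1})
    while border:
        w, L, s = heapq.heappop(border)  # smallest weight; ties favour the lower level
        if L == M:                       # smallest-weight border node is a leaf: done
            return np.array(s[::-1]), w  # symbols reordered as s_1, ..., s_M
        # Rebuild the residual target under the constraints applied so far.
        yt = y_root
        for i, sym in enumerate(s):
            d = M - i
            yt = yt[:d - 1] - R[:d - 1, d - 1] * sym
        D = M - L                        # 1-based index of the variable constrained next
        for sym in alphabet:             # expand the node: one child per alphabet symbol
            bw = (yt[D - 1] - R[D - 1, D - 1] * sym) ** 2
            heapq.heappush(border, (w + bw, L + 1, s + (sym,)))
```

On random instances the returned symbol vector attains the same objective value as an exhaustive search over X^M, consistent with the optimality argument that follows.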

In the derivations to follow, let N_e(k), N_u(k) and N_b(k) denote the lists of expanded,
unexpanded and border nodes, respectively, at the beginning of iteration k of Algorithm 1.
It should be clear from their descriptions that N_u(k) = N \ N_e(k) and N_e(k) ∩ N_b(k) = ∅.

Lemma B.3 Given N_e(k), N_b(k), and n ∈ N_u(k), there exists a node n′ in rooted path
P(n_0, n) such that n′ ∈ N_b(k).

Proof (By induction) When k = 1, N_e(1) = ∅ and N_b(1) = {n_0}. Since n′ = n_0 ∈ N_{P(n_0,n)}
for all n, the condition is trivially met in the base case.

Next, suppose that for all n ∈ N_u(k) such a node n′ ∈ N_b(k) exists at the beginning
of iteration k. If n′ is not expanded in iteration k, then n′ ∈ N_b(k + 1). Otherwise, if
n′ is expanded, then either n ≠ n′, in which case one of the child nodes of n′ is in path
P(n_0, n), or n = n′. In the latter case n′ ∉ N_b(k + 1) but also n ∉ N_u(k + 1). Therefore the
condition still holds at the end of iteration k. Verification of the inductive step completes
the proof.

The purpose of Lemma B.3 is to show that N_b encloses N_e, and so we can explore the
larger set N_u by considering only nodes in the smaller border set N_b.

Lemma B.4 Given rooted path P(n_0, n), the weights of the nodes n′ ∈ N_{P(n_0,n)} satisfy
0 = ψ(n_0) ≤ ψ(n′) ≤ ψ(n).

Proof The result follows from the non-negativity of branch weights (B.1) and the definition
of node weights given by (B.2).

Lemma B.4 establishes that node weights are monotonically non-decreasing along rooted
paths, enabling us to show the following:

Lemma B.5 Given N_e(k) and N_b(k), the next node selected by Algorithm 1, i.e., n* =
argmin_{n′ ∈ N_b(k)} ψ(n′), also satisfies n* = argmin_{n′ ∈ N_u(k)} ψ(n′).

Proof Let n* = argmin_{n′ ∈ N_b(k)} ψ(n′) and suppose that there exists n′ ∈ N_u(k) \ N_b(k)
such that ψ(n′) < ψ(n*). Then by Lemmas B.3 and B.4, there also exists a node n″ such
that n″ ∈ N_b(k), ψ(n″) ≤ ψ(n′), and hence ψ(n″) < min_{n′ ∈ N_b(k)} ψ(n′). This contradiction
completes the proof.

In other words, until its termination, Algorithm 1 expands the nodes of N in order of
increasing weight. Next, we show the optimality of the solution returned by the automatic
sphere decoder.

Theorem B.6 Given target v, QR factored channel matrix H = QR, and finite alphabet
X, Algorithm 1 returns an optimal solution s̄* ∈ X^M such that |ṽ − Rs̄*|² ≤ |ṽ − Rs̄|², and
hence |v − Hs̄*|² ≤ |v − Hs̄|², ∀ s̄ ∈ X^M.

Proof When Algorithm 1 terminates, L = M, s̄* ∈ X^M, and ψ* = |ṽ − Rs̄*|². Since no
leaf nodes will have been expanded, N_l ⊆ N_u. Therefore by Lemma B.5, we have that
ψ* ≤ ψ(n) ∀ n ∈ N_l. Recalling that the leaf node weights are equal to the values of the
orthogonally transformed objective function evaluated at each point in the search set, and that
the objective function is invariant under such transformation, we obtain the desired result.

A few useful corollaries follow immediately:

Corollary B.7 The weight ψ* returned by Algorithm 1 is the square of the optimal search
radius, which is defined as

    C^* \triangleq \min_{\bar{s} \in X^M} |v - H\bar{s}|.    (B.8)

Corollary B.9 A node n is expanded by Algorithm 1 if and only if it satisfies ψ(n) ≤ C*²,
or equivalently its associated affine residual target w_{D+1,...,M}(s̄^M_{D+1}(n)) is contained in the
(closed) optimal search sphere

    S(v, C^*) \triangleq \{ z : |z - v| \le C^* \}.    (B.10)
Appendix C

Description of the experimental setup

In this appendix we describe the experimental setup used in our work. A block diagram is
presented in Fig. C.1. It is parameterized by the size of the (real) transmission alphabet
B = |X|, the number of transmit antennas M, and the number of receive antennas N.
Across the top of the figure are shown the four key quantities that are required to simulate
each channel use: a sequence of data bits b ∈ {0, 1}^{M log₂ B²}, a power scaling factor α ∈ R,
a complex channel matrix H ∈ C^{N×M} and a complex noise vector n ∈ C^N.


Figure C.1: Block diagram of experimental setup in complex baseband representation.

Each channel use begins by drawing M log₂ B² bits from the uniform distribution. These
are then taken log₂ B² bits at a time and mapped to one of B² complex symbols, giving s ∈
{x_R + j x_I | x_R, x_I ∈ X}^M, depending on the modulation scheme being simulated. Figs. C.2(a)
to C.2(c) depict the symbol maps or constellations for QPSK (B = 2), 16-QAM (B = 4)
and 64-QAM (B = 8) modulation. The power of the constellations is normalized, with the
dotted circle of unit radius in each of the figures below indicating its average power, i.e.,
before power scaling, E(|s_i|²) = 1 and E(|s|²) = M. Therefore the symbol alphabet can be
written as

    X = \sqrt{\frac{3}{2(B^2 - 1)}}\, \{-B+1, \dots, -1, +1, \dots, B-1\}.    (C.1)
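The normalization in (C.1) is easy to confirm in code by constructing the alphabet and checking the unit average symbol energy. The helper below is an illustrative sketch (the function name is ours, not part of the setup):

```python
import numpy as np

def real_alphabet(B):
    """Normalized real alphabet of (C.1) underlying a B^2-point QAM constellation."""
    scale = np.sqrt(3.0 / (2.0 * (B**2 - 1)))
    return scale * np.arange(-(B - 1), B, 2)   # odd integers -B+1, ..., -1, +1, ..., B-1

# E(|s_i|^2) = E(x_R^2) + E(x_I^2) = 2 E(x^2) = 1 for QPSK, 16-QAM and 64-QAM.
for B in (2, 4, 8):
    X = real_alphabet(B)
    assert len(X) == B
    assert np.isclose(2.0 * np.mean(X**2), 1.0)
```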

(a) QPSK modulation. (b) 16-QAM modulation. (c) 64-QAM modulation.

Figure C.2: Symbol constellations for various complex modulation schemes of interest.

The purpose of the power scaling factor α is to give the desired Signal-to-Noise Ratio
(SNR). The total energy transmitted per bit is

    E_b = \frac{\alpha^2\, E(|s|^2)}{M \log_2 B^2}    (C.2)

    = \frac{\alpha^2}{\log_2 B^2}.    (C.3)

Then to simulate a received SNR (in dB) of γ, we set the scaling factor as follows:

    \gamma = 10 \log_{10} \frac{N E_b}{N_0}    (C.4)

    E_b = \frac{N_0}{N}\, 10^{\gamma/10}    (C.5)

    \alpha = \sqrt{\frac{N_0 \log_2 B^2\, 10^{\gamma/10}}{N}},    (C.6)

where for numerical convenience, we may arbitrarily choose N₀ = 1. The factor of N arises
because each receive antenna observes a transmitted energy per bit of E_b. Therefore the
overall SNR seen across the receive array is given by (C.4).
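In code, the scaling computation of (C.4)–(C.6) round-trips as expected; the function below is a sketch with our own naming:

```python
import numpy as np

def power_scaling(snr_db, B, N, N0=1.0):
    """Scaling factor alpha of (C.6) for a received SNR of snr_db (in dB)."""
    return np.sqrt(N0 * np.log2(B**2) * 10.0**(snr_db / 10.0) / N)

# Round trip: alpha -> E_b -> SNR recovers the requested value, per (C.2)-(C.4).
snr_db, B, N, N0 = 12.0, 4, 4, 1.0
alpha = power_scaling(snr_db, B, N, N0)
E_b = alpha**2 / np.log2(B**2)                             # (C.3)
assert np.isclose(10.0 * np.log10(N * E_b / N0), snr_db)   # (C.4)
```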

The transmission channel is modelled as a complex Multiple Input Multiple Output
(MIMO) linear system with complex Additive White Gaussian Noise (AWGN). As discussed
in [7], a number of important channels can be simulated in this manner, including MIMO
flat fading wireless channels, frequency selective fading channels, and multi-user channels.
Of interest in our work is the multiple antenna Rayleigh spatially independent flat fading
wireless channel, for which the coefficients of H are drawn independently from the circularly
symmetric complex Gaussian distribution of zero mean and unit variance CN(0, 1).

The N × M channel matrix represents the case where there are M transmit antennas
and N receive antennas. In addition to the distortion caused by H, additive noise is seen at
the output of the channel, encapsulated by the complex random vector n. The noise sam-
ples are independently drawn from the circularly symmetric complex Gaussian distribution
CN(0, N₀), where recall that we assigned N₀ = 1 in computing the power scaling factor.
The vector of complex signals seen at the receiver is then given by

    v = Hs + n.    (C.7)

Observe from Fig. C.1 that we also make the common assumption of perfect channel state
information at the receiver, i.e., that an uncorrupted copy of H is available at the detector.
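One channel use of the model (C.7) can be simulated directly; the sketch below draws H and n with the stated statistics (the function name and example symbol vector are ours):

```python
import numpy as np

def draw_channel_use(M, N, N0, rng):
    """Draw H with CN(0,1) entries and n with CN(0,N0) entries, as in (C.7)."""
    H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2.0)
    n = np.sqrt(N0 / 2.0) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return H, n

rng = np.random.default_rng(7)
H, n = draw_channel_use(2, 2, 1.0, rng)
s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2.0)   # an example scaled QPSK vector
v = H @ s + n                                    # received target of (C.7)
```

Note the division by √2 in each real/imaginary pair, which makes the total per-entry variance equal to 1 (for H) and N₀ (for n).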
Next, in order to perform (real) sphere decoding, we expand the complex signals into
real ones as follows:

    \bar{v} = \begin{bmatrix} \mathrm{Re}\{v\} \\ \mathrm{Im}\{v\} \end{bmatrix}    (C.8)

    = \begin{bmatrix} \mathrm{Re}\{H\} & -\mathrm{Im}\{H\} \\ \mathrm{Im}\{H\} & \mathrm{Re}\{H\} \end{bmatrix} \begin{bmatrix} \mathrm{Re}\{s\} \\ \mathrm{Im}\{s\} \end{bmatrix} + \begin{bmatrix} \mathrm{Re}\{n\} \\ \mathrm{Im}\{n\} \end{bmatrix}    (C.9)

    = \bar{H}\bar{s} + \bar{n}.    (C.10)

Received vector or target v̄, channel matrix H̄ and symbol alphabet X are then passed to
the appropriate detector algorithms, depending on the test being conducted.
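The real expansion (C.8)–(C.10) preserves the linear model exactly, as the following sketch verifies (the helper name is ours):

```python
import numpy as np

def realify(v, H):
    """Real expansion of (C.8)-(C.9): stack real and imaginary parts."""
    v_bar = np.concatenate([v.real, v.imag])
    H_bar = np.block([[H.real, -H.imag],
                      [H.imag,  H.real]])
    return v_bar, H_bar

rng = np.random.default_rng(11)
H = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
s = np.array([1 - 1j, -1 + 1j])
v = H @ s                                        # noiseless for the check
v_bar, H_bar = realify(v, H)
s_bar = np.concatenate([s.real, s.imag])
assert np.allclose(H_bar @ s_bar, v_bar)         # v_bar = H_bar s_bar, per (C.10)
```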
For the results presented in this report, we have been concerned primarily with averaging
η, the number of nodes expanded by a sphere decoder, over a large number of channel uses.
These experiments were repeated at a wide range of SNRs, using the three symbol constel-
lations shown in Fig. C.2, combined with a variety of sphere decoders and pre-processing
schemes. Generally speaking, the number of channel uses must be large enough to ensure
that the variance of the reported value is sufficiently low, so that we can feel confident
that it provides a reliable representation of the average performance. To this end, thousands
to hundreds of thousands of realizations were generated for each point on the final curves.
Fig. C.1 also illustrates how the error rate performance p can be assessed.
Appendix D

Proof of the optimality of the enhanced ordering when M = B = 2

In this appendix we show the optimality of the proposed ordering with respect to the
computational efficiency of a subsequent application of the Automatic Sphere Decoder (ASD)
when M = B = 2. The direct proof is surprisingly straightforward and based on the simplified
search tree depicted in Fig. D.1.

[Figure D.1 depicts the tree: root weight ψ(n_0) = 0; level-1 node weights ψ(n_1) and ψ(n_2);
leaf weights ψ(n_3) = |ṽ − Rs̄*|², ψ(n_4), ψ(n_5) and ψ(n_6).]

Figure D.1: A weighted binary tree for sphere decoding with M = B = 2, where the ML
solution is associated with node n_3, and hence the optimal search radius is C* = √ψ(n_3).

We begin by deriving an upper bound on the number of nodes expanded by a SD:

Proposition D.1 Given target v, channel matrix H, and finite alphabet X of size B, the
problem dimension is M = rank(H). Then the number η of nodes expanded by a sphere
decoder satisfies

    \eta \le \sum_{i=0}^{M-1} B^i.    (D.2)

Proof Recalling the (M + 1)-level B-ary tree structure that underlies the operation of
sphere decoders, the maximum number of nodes expanded is clearly the number of non-leaf
nodes, an expression for which is provided in (D.2).

Theorem D.3 Given a matrix H ∈ R^{N×2} and finite alphabet X of size 2, the ordering p
computed by Algorithm 2 is optimal in the sense of (4.4) for all targets v ∈ R^N.

Proof Without loss of generality, we assume that the ML solution is associated with node
n_3, and hence also that the optimal search radius is C* = √ψ(n_3). Then it is clear that
nodes n_0 and n_1 are expanded by all sphere decoders. Proposition D.1 states that the
number of nodes expanded is at most 3. Therefore in order to evaluate the computational
efficiency induced by an ordering, it remains to consider whether node n_2 is expanded.
Since M = 2, there are two possible orderings of the columns of H. For both orderings,
the structure of the tree in Fig. D.1 applies, and the weights of the root and leaf nodes
are also the same by definition. Thus the only difference between the performance achieved
under the two orderings arises from the weight of node n_2; specifically, η_ASD = 3 if and only
if ψ(n_2) < C*², otherwise η_ASD = 2.

Let the weight of node n_2 and the number of nodes expanded by the ASD under ordering
{2, 1} be denoted as ψ_1 and η_ASD,1, respectively, and those under ordering {1, 2} as ψ_2 and
η_ASD,2. Then based on decision criterion (4.5), Algorithm 2 proposes ordering {1, 2} if
ψ_1 < ψ_2, and ordering {2, 1} if ψ_2 < ψ_1. If ψ_2 = ψ_1, it makes an arbitrary choice. All of the
possible cases are enumerated in Table D.1:
possible cases are enumerated in Table D.1:

1 ? 2 ? < C ? C ASD,1 ASD,2 p


1 < 2 1 , 2 C 2 2 either {1, 2}
1 < C 2 C 3 2 {1, 2} {1, 2}
1 , 2 < C 3 3 either {1, 2}
1 = 2 1 , 2 C 2 2 either either
1 , 2 < C 3 3 either either
1 > 2 2 , 1 C 2 2 either {2, 1}
2 < C 1 C 2 3 {2, 1} {2, 1}
2 , 1 < C 3 3 either {2, 1}

Table D.1: Enumeration of possible relationships between 1 , 2 and C , as well as the


resulting optimal and proposed orderings (Algorithm 2).

It is clear that in all cases the proposed ordering agrees with the optimal choice.
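The case analysis of Table D.1 can also be checked mechanically. The sketch below is our own encoding of the table's logic (with ψ_1 and ψ_2 the n_2 weights under orderings {2, 1} and {1, 2}, and function names of our choosing): it confirms that preferring the ordering with the larger n_2 weight never expands more nodes than the alternative.

```python
import itertools

def nodes_expanded(psi_n2, C2):
    """eta_ASD for the M = B = 2 tree: n0 and n1 always; n2 iff psi(n2) < C*^2."""
    return 3 if psi_n2 < C2 else 2

def proposed_ordering(psi1, psi2):
    """Decision criterion (4.5): prefer the ordering whose n2 weight is larger."""
    return (1, 2) if psi1 < psi2 else (2, 1)    # arbitrary choice on ties

# Sweep a grid of weights and squared radii; the proposal always attains the
# minimum node count, reproducing the 'optimal' column of Table D.1.
for psi1, psi2, C2 in itertools.product((0.5, 1.0, 1.5, 2.0), repeat=3):
    eta = {(2, 1): nodes_expanded(psi1, C2), (1, 2): nodes_expanded(psi2, C2)}
    assert eta[proposed_ordering(psi1, psi2)] == min(eta.values())
```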
