
AD-A189 329
RADC-TR-86-159
Final Technical Report
July 1987

ADAPTIVE ALGORITHMS FOR HF ANTENNA ARRAYS

University of Kansas

D. Haegert, R. Spohn, V. Frost, K. Shanmugan, G. Sargent,


R. Yarbro, B. McClendon and J. Holtzman

APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED

This report has been reviewed by the RADC Public Affairs Office (PA)
and is releasable to the National Technical Information Service (NTIS). At
NTIS it will be releasable to the general public, including foreign nations.

RADC-TR-86-159 has been reviewed and is approved for publication.

APPROVED:

          PETER J. RITCHIE
          Project Engineer

APPROVED:

BRUNO BEEK, Technical Director


Directorate of Communications

FOR THE COMMANDER:

JOHN A. RITZ
Directorate of Plans & Programs

If your address has changed or if you wish to be removed from the RADC mailing
list, or if the addressee is no longer employed by your organization, please
notify RADC (DCCL) Griffiss AFB NY 13441-5700. This will assist us in
maintaining a current mailing list.

Do not return copies of this report unless contractual obligations or notices
on a specific document require that it be returned.

UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE

                        REPORT DOCUMENTATION PAGE                     Form Approved

1a. REPORT SECURITY CLASSIFICATION: UNCLASSIFIED
1b. RESTRICTIVE MARKINGS: N/A
2a. SECURITY CLASSIFICATION AUTHORITY: N/A
2b. DECLASSIFICATION/DOWNGRADING SCHEDULE: N/A
3.  DISTRIBUTION/AVAILABILITY OF REPORT: Approved for public release; distribution unlimited.
4.  PERFORMING ORGANIZATION REPORT NUMBER(S): TISL-5481
5.  MONITORING ORGANIZATION REPORT NUMBER(S): RADC-TR-86-159
6a. NAME OF PERFORMING ORGANIZATION: University of Kansas
6c. ADDRESS: Telecommunications and Information Sciences Laboratory,
    224 Nichols Hall, Campus West, Lawrence KS 66045
7a. NAME OF MONITORING ORGANIZATION: Rome Air Development Center (DCCL)
7b. ADDRESS: Griffiss AFB NY 13441-5700
8a. NAME OF FUNDING/SPONSORING ORGANIZATION: Rome Air Development Center
8b. OFFICE SYMBOL: DCCL
8c. ADDRESS: Griffiss AFB NY 13441-5700
9.  PROCUREMENT INSTRUMENT IDENTIFICATION NUMBER: F30602-81-C-0205
10. SOURCE OF FUNDING NUMBERS: Program Element 62702F, Project 4519, Task 61, Work Unit P2
11. TITLE (Include Security Classification): ADAPTIVE ALGORITHMS FOR HF ANTENNA ARRAYS
12. PERSONAL AUTHOR(S): D. Haegert, R. Spohn, V. Frost, K. Shanmugan, G. Sargent,
    R. Yarbro, B. McClendon, and J. Holtzman
13a. TYPE OF REPORT: Final
14. DATE OF REPORT: July 1987
16. SUPPLEMENTARY NOTATION: N/A
18. SUBJECT TERMS: HF Adaptive Arrays; HF Communications Systems; HF Channel Simulation;
    HF Adaptive Algorithms
19. ABSTRACT: The performance of an adaptive array system will not only be dependent upon
    the controlling algorithm, but also upon the specific environment in which the array
    is used.  Isolating the contributions made solely by the control algorithm represents
    a formidable task, considering that adaptive array systems are inherently used in
    situations in which the interference environment is initially unknown, time-variant,
    or both.  If an effective algorithm choice is to be made, it is important to be able
    to compare array performances in identical environments.  The major focus of this
    study is to compare and contrast the performance of various control algorithms under
    identical conditions in a computer-simulated adaptive HF antenna array system.  After
    examining adaptive array systems and the role played by the controlling algorithms,
    this study concentrates on the selection and simulation of the specific algorithms.
    The simulation results are then presented in the form of a comparative evaluation of
    the selected control algorithms, and relevant conclusions are drawn.
20. DISTRIBUTION/AVAILABILITY OF ABSTRACT: Unclassified/Unlimited
21. ABSTRACT SECURITY CLASSIFICATION: UNCLASSIFIED
22a. NAME OF RESPONSIBLE INDIVIDUAL: Peter J. Ritchie
22b. TELEPHONE (Include Area Code): (315) 330-3077
22c. OFFICE SYMBOL: RADC (DCCL)

DD Form 1473, JUN 86          Previous editions are obsolete.
                              SECURITY CLASSIFICATION OF THIS PAGE



TABLE OF CONTENTS

List of Figures ................................................... v

List of Tables ................................................. viii

List of Symbols .................................................. ix

1.0 Introduction ................................................. 1

2.0 Adaptive Array System Model................................... 5

2.1 Sensor Element Array ........................................ 5


2.2 Pattern-Forming Network ............................. 7
2.3 Adaptive Processor ................................... 8

3.0 Least Mean Square Algorithm..................................10

3.1 Motivation for Selection of LMS Algorithm .............10


3.2 Mean Square Error (MSE) Performance Criterion .........11
3.3 MSE Performance Criterion Derivation .................13
3.4 LMS Algorithm Description ............................16
3.5 LMS Software Modules................................19

4.0 Constrained LMS Algorithm....................................21

4.1 Motivation for Selection............................ 21


4.2 Constrained LMS Algorithm Description ................21
4.3 Derivation of Optimum Constrained Weight Solution..25
4.4 Derivation of Constrained LMS Algorithm...............28
4.5 Constrained LMS Simulation Model .....................29
4.5.1 TAKAO Implementation ..........................31
4.6 Constrained LMS Software Modules .....................32
5.0 Update Covariance Algorithm .................................... 35

5.1 Motivation for Selection .............................. 35


5.2 Update Covariance Algorithm Description ............... 36
5.3 Update Covariance Software Modules .................... 38

6.0 Description of the HF Adaptive Array Simulation Model .......... 40

6.1 User-Definition of Array System and Environment ....... 40


6.2 Performance Evaluation ................................ 40
6.3 Simulation Software Description ....................... 44
6.3.1 Desired Signal Model ........................... 44
6.3.2 Interference Model ............................. 46
6.3.3 Thermal Noise Model ............................ 46
6.3.4 Correlation Matrices ........................... 46
6.3.5 Simulation Program Operation ................... 48

7.0 Simulation Results and Conclusions ............................. 50

7.1 TEST1: Negligible Channel Effects .................... 54


7.2 TEST2: HF Channel with Moderate Delay Characteristics
and Poor Attenuation Characteristics .................. 69
7.3 TEST3: HF Channel with Poor Delay Characteristics
and Poor Attenuation Characteristics .................. 84
7.4 TEST4: Dependence of Convergence of Constrained
LMS Algorithm on Signal Presence ...................... 98
7.5 TEST5: Dependence of Adapted Antenna Patterns on
Number of Required Iterations ........................ 107

e7.6 TEST6: Investigation of an Alternate Antenna


Geometry ............................................. 118
7.7 Conclusions and Recommendations ...................... 141

References ..................................................... 143



Appendix A: Complex Lowpass Equivalent Representation ............. 144

Appendix B: User Manual for HF Adaptive Antenna Array
Evaluation Facility .................................. 146

LIST OF FIGURES

2.1 N-Element Adaptive Array System Model ....................... 6

3.1 LMS-Controlled Adaptive Array System ....................... 12


3.2 MSE Performance Surface .................................... 15
3.3 LMS Software Modules ....................................... 20

4.1 Signal-Aligned Broadband Adaptive Array System ............. 24


4.2 Narrowband Signal-Aligned Array System ..................... 30
4.3 Constrained LMS Software Modules ........................... 33

5.1 Update Covariance Software Modules ......................... 39

6.1 User-Entered Parameters ................................... 41

6.2 Three-Path HF Channel Model ................................ 43


6.3 Flow Diagram of Program Operation .......................... 49

7.1 Constant Test Parameters ................................... 51


7.2 Description of Elevation Plots ............................. 52

7.3 Description of Azimuthal Plots .............................53


7.4 Convergence Histogram of LMS Algorithm in Ideal Channel ....55
7.5 Convergence Histogram of Constrained LMS Algorithm
in Ideal Channel ........................................... 56
7.6 Convergence Histogram of Update Covariance Algorithm

in Ideal Channel ........................................... 57


7.7 Convergence Histogram of LMS Algorithm in Channel 2 ........ 70

7.8 Convergence Histogram of Constrained LMS Algorithm
     in Channel 2 ............................................... 71
7.9 Convergence Histogram of Update Covariance Algorithm
     in Channel 2 ............................................... 72
7.10 Convergence Histogram of LMS Algorithm in Channel 3 ........ 85
7.11 Convergence Histogram of Constrained LMS Algorithm
in Channel 3 ............................................... 86

7.12 Convergence Histogram of Update Covariance Algorithm

in Channel 3 ............................................... 87
7.13 Convergence Histogram of Constrained LMS Algorithm
in ideal Channel (no signal present) ....................... 99
7.14 Convergence Histogram of Constrained LMS Algorithm
in Channel 2 (no signal present) .......................... 100
7.15 Convergence Histogram of Constrained LMS Algorithm
in Channel 3 (no signal present) .......................... 101
7.16 Convergence Summary for LMS Algorithms.................... 114
7.17 Convergence Summary for Constrained LMS Algorithm ......... 115
7.18 Convergence Summary for Update Covariance Algorithm ....... 116
7.19 Convergence Summary for Constrained LMS Algorithm
(no signal present) ....................................... 117
7.20 Alternate Antenna Geometry for Rhomboid Geometry .......... 119
7.21 Convergence Histogram of LMS Algorithm in Channel 2 for

Rhomboid Geometry ......................................... 120


7.22 Convergence Histogram of Constrained Algorithm in
Channel 2 for Rhomboid Geometry ........................... 121
7.23 Convergence Histogram of Update Covariance Algorithm in
Channel 2 for Rhomboid Geometry ........................... 122
7.24 Convergence Histogram of LMS Algorithm in Channel 3
for Rhomboid Geometry ..................................... 123
7.25 Convergence Histogram of Constrained LMS Algorithm in

Channel 3 for Rhomboid Geometry ........................... 124


7.26 Convergence Histogram of Update Covariance Algorithm in
Channel 3 for Rhomboid Geometry ........................... 125

T1.1-8 Unadapted and Adapted Antenna Plots at the Arrival


Angles of the Jamming Signals for TEST1 ................. 61-68

T2.1-8 Unadapted and Adapted Antenna Plots at the Arrival

Angles of the Jamming Signals for TEST2 ................. 76-83

T3.1-8 Unadapted and Adapted Antenna Plots at the Arrival

Angles of the Jamming Signals for TEST3 ................. 90-97



T4.1-4 Unadapted and Adapted Antenna Plots at the Arrival
Angles of the Jamming Signals for TEST4 ............... 103-106

T5.1-4 Comparison of Adapted Antenna Patterns of Fastest-/


Slowest Convergence ................................... 108-111

T6.1-14 Unadapted and Adapted Antenna Plots with Rhomboid


Geometry for TEST2 and TEST3 .......................... 127-140

. . . . . . -.... . . . . . . . . . .,.. . . .. .. . ... . .... . . . . . . -. . . . . '
LIST OF TABLES

Table 3.1 Features of Adaptive Algorithms .......................... 22

Table 7.1 TEST1 Summary............................................58


Table 7.2 TEST2 Summary ............................................73
Table 7.3 TEST3 Summary ............................................88
Table 7.4 TEST4 Summary ...........................................102
Table 7.5 Comparison of Convergence Properties for the
Adaptive Algorithms ..................................... 113

LIST OF SYMBOLS

N       number of antenna elements
xi(t)   output of ith antenna element
si(t)   signal component of output of ith antenna element
ni(t)   noise (deliberate and natural) component of output of ith antenna
        element
y(t)    overall array output
wi      ith antenna weight
d(t)    reference signal
e(t)    error signal
μ       convergence constant
R       sample covariance matrix
I       identity matrix
C       constraint matrix
P       projection matrix for constraint re-establishment
δ       orthogonal vector for constraint re-establishment
α       data de-weighting constant
λ       vector of Lagrange multipliers
f       response vector
ξ       mean square error

1.0 INTRODUCTION

Conventional antenna receiving systems are susceptible to performance


degradation due to the presence of undesired noise signals (deliberate or
natural) that enter the system. Extensive research has been conducted in the
area of adaptive antenna arrays as a means of compensating for the inevitable
presence of these interference signals.

Adaptive arrays can provide a vital element of flexibility to a


communication system. They can respond to changes in the interference
environment by steering nulls and reducing sidelobes in the directions of the
interferences, while maintaining an acceptable level of response in the
direction of the desired signal. More importantly, they can do so without
prior information pertaining to the interfering signals. These features make

adaptive array systems very attractive for applications in which the


environment is changing or unknown. For example, it is quite possible that
very little descriptive information concerning an intentional jamming signal
will be available. Any system requiring such information could not
effectively compensate for the interference.

Adaptive arrays also have a reliability advantage over their conventional


counterparts. In a conventional array system, if a single sensor element
becomes inoperable, the characteristic gain pattern may be noticeably
affected, depending on the location of the non-working element and the total
number of sensor elements. In contrast, an adaptive array system positions
nulls and reduces sidelobes according to the monitored external
signal/interference environment. Therefore, if a sensor element became
incapacitated, the remaining operable elements would be adjusted to produce a
pattern that is similar to the original pattern. Simply stated, adaptive
array systems fail more gracefully than conventional receiving systems.

The heart of the adaptive array system is the controlling algorithm. It


determines not only the method that is used to adapt, but also the speed of
adaptation and the amount and complexity of hardware necessary to implement
the system. Some of the early pioneering work in the area of adaptive control
algorithms began in the 1960's. B. Widrow and others developed a self-
training, self-optimizing control algorithm known as the Least Mean Square

(LMS) algorithm [1]. This algorithm uses gradient techniques to
asymptotically approach an optimal solution. At approximately the same time,
Howells and Applebaum were developing a sidelobe cancelling algorithm for
radar applications [2]. This algorithm exploited the fact that the signal of
interest was normally absent, and attempted to maximize a generalized signal-
to-noise ratio. Numerous algorithms were developed shortly thereafter. Among
the more common types are the Differential Steepest Descent algorithm,
constrained algorithms such as Griffiths' P-vector and Frost's Constrained
LMS, and random search algorithms [3-5].

The direct matrix inversion and recursive processors represent a


significant departure from the above algorithms, and they were developed more
recently. Although their heavy computational load renders them impractical
for many applications, the advancements in cheap, fast digital hardware have
spurred an increasing interest in these methods. Both methods involve finding
the inverse of the sample covariance matrix to determine an optimum weight
solution [7-9].

As mentioned, a wide variety of adaptive algorithms have been developed


and utilized. This fact suggests that there is no "clearly superior" adaptive

control algorithm that should be implemented without regard to the


application. In fact, the performances of adaptive control algorithms are
very application dependent. The selection of a specific algorithm is based on
many factors including the quantity of a priori information, the convergence
speed requirements, implementation cost considerations, and the transmission
medium, among others. The process of selecting a control algorithm is further
complicated by the fact that adaptive array systems are generally placed in
changing surroundings. These natural uncertainties often make it impossible
to analytically predetermine how well a specific algorithm will perform.
This, combined with the fact that it is often economically unreasonable to
build a system in order to test it, makes computer simulation a useful tool
for array evaluation.

The importance of application considerations has been stressed. This


study is concerned with the simulation of a communication system operating in
the HF frequency band. The HF band, which spans the 3-30 MHz range, is
commonly used in military communication systems, and has been modeled as a

slowly varying channel. Other factors that played a significant role in the
algorithm selection process include:

- Knowledge of interleaved code is periodically available for reference


signal generation.

- The angle of arrival of the desired communication signal is known a priori.

- Antenna array geometry.

Three adaptive control algorithms have been selected for computer


simulation in this study. They were selected on the basis of their usefulness
and applicability in the system of interest as well as the fact that they
comprise a fairly representative set of adaptive algorithms in general. These
algorithms are:

1. Least Mean Square Algorithm


2. Constrained LMS Algorithm
3. Update Covariance Algorithm (recursive)

The motivation for their selection as well as their attributes and


operation will be discussed in detail later in the report.

Once the control algorithms had been selected, attention was turned to
the development of the computer simulation models. The overall adaptive array
system model was defined first, followed by the software implementation of the
chosen control algorithms. In order to obtain relevant results regarding
algorithm performance, it was imperative that the control algorithm be
isolated in the overall system model. In this way, identical environments
could be reproduced, and discrepancies in array performance could be
attributed solely to the differences in algorithms. It was also necessary to
define performance measures and develop a scheme to monitor the results in
order to make a comparative evaluation of the selected control algorithms.

The first section of this report is devoted to the definition of the


basic adaptive array system model. The principal system elements and their

operation are explained, and common notation which will be used in following
discussions is presented.

Once these fundamental aspects have been examined, each of the selected
algorithms is discussed in detail. Each discussion will include the
motivations for that algorithm's selection, and a description of how the
algorithm operates. It will also contain an explanation of the simulation
model that has been used.

The next section gives a presentation of the testing procedure and all
parameters that must be specified. The signal models, performance measures,
and simulation processes will be explained. The final section presents the
results in the form of graphs and offers conclusions concerning the algorithm
recommendations.


2.0 ADAPTIVE ARRAY SYSTEM MODEL

The purpose of this section is to introduce the adaptive array system


model that is used throughout the study. Although such a model is explicitly
required for a computer simulation, it is primarily intended to aid in the
understanding of the qualities and objectives of the simulated array system. A
common representation of an adaptive array system is shown in Figure 2.1.
This basic model, while not possessing an abundance of detail, can be used to
examine the principal system elements and their functions.

Virtually all adaptive antenna array systems consist of three principal


components as depicted in Figure 2.1. These components include: 1) An array
of sensor elements; 2) A pattern-forming network; 3) An adaptive processor.
Although not explicitly shown, it is also important to realize that the system
is located in an environment in which the desired and deliberate jamming
signals impinge on the array at various angles of azimuth and elevation in the
presence of thermal noise. Each of the components will now be discussed.

2.1 Sensor Element Array

The array consists of N spatially separated sensor elements. Their


function is to monitor the external signal and interference environment and
distribute this information to the other system components. The sensor
elements should be chosen according to their ability to perform in the
specified medium. These elements can then be physically located in a planar
configuration to produce a suitable gain pattern over the desired region.
Care should be taken when choosing the type of sensor element and their
locations, as both factors place fundamental limitations on the ultimate
capability of the adaptive array system.

The output of each sensor element, xi(t), is simply the sum of the signal
components that arrive at that element.

xi(t) = si(t) + ni(t)

Figure 2.1  N-Element Adaptive Array System Model

where
si(t) is the desired signal component at element i

ni(t) is the combined noise components at element i from both deliberate


and natural sources.

Also note that components of the same signal will differ from element to
element due to phase delay caused by the spatial separation of the sensors.
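As a concrete illustration of the phase delay just described, the short sketch
below computes the relative phase of an incoming plane wave at each element of a
planar array. It is a minimal Python/NumPy sketch, not the report's FORTRAN
implementation (Appendix C); the coordinate and angle conventions, variable
names, and example values are assumptions made purely for illustration.

    import numpy as np

    def element_phases(x, y, phi_deg, theta_deg, freq_hz, c=3.0e8):
        # Relative phase (radians) of an incoming plane wave at each element,
        # measured with respect to an element at the origin of the array plane.
        lam = c / freq_hz                      # wavelength
        phi = np.deg2rad(phi_deg)              # assumed azimuth angle of arrival
        theta = np.deg2rad(theta_deg)          # assumed elevation angle of arrival
        ux = np.cos(theta) * np.cos(phi)       # projection of the arrival direction
        uy = np.cos(theta) * np.sin(phi)       # onto the plane of the array
        return 2.0 * np.pi / lam * (np.asarray(x) * ux + np.asarray(y) * uy)

    # Hypothetical example: four elements a quarter wavelength apart at 15 MHz.
    lam = 3.0e8 / 15.0e6
    x = np.arange(4) * lam / 4.0
    y = np.zeros(4)
    omega = element_phases(x, y, phi_deg=30.0, theta_deg=10.0, freq_hz=15.0e6)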

The sensor element array model that has been utilized in this study
differs in some respects from the models used in much of the related
literature. The most notable departure from virtually every simulation
conducted is the use of a dipole model for the sensor elements. The use of
isotropic elements is almost universal among simulated studies. Factors such
as frequency dependence and directional gain of the elements themselves can
then be ignored. Although using dipole elements complicates the model,
"reality" is not compromised as severely. Also, many simulations are limited
to linear array alignments. Such alignments make phase calculations almost
trivial, but may inhibit the array's ability to distinguish signals on the
basis of elevation arrival angle.

2.2 Pattern-Forming Network

The pattern-forming network consists of N variable complex weights and a


summing device. The function of the network is very straightforward. It
accepts a set of weights from the adaptive processor and multiplies each of
them with their corresponding sensor element output. This multiplication can
be thought of as an amplitude and phase adjustment of the element outputs.
The complex weights can be written as

     wi = Ai e^(jθi)                                                (2.1)

where

     Ai = [ (Re{wi})^2 + (Im{wi})^2 ]^(1/2)

and

     θi = tan^-1( Im{wi} / Re{wi} )

Thus, the output of sensor element i is assigned a gain, Ai, and delayed
by θi radians at the discretion of the adaptive processor.

The sensor output/weight products are summed to produce the overall array
output signal, y(t), which can be written as

     y(t) = Σ (i = 1 to N) wi xi(t) = w^T x                         (2.2)

where the column vectors x and w are defined as

     x = [x1(t)  x2(t)  ...  xN(t)]^T        w = [w1  w2  ...  wN]^T        (2.3)

Throughout the report, matrices will be denoted with an underscored capital


letter. Vectors will be denoted with an underscored lower case letter.
Complex conjugates will be assigned a superscript asterisk (*), and
transpositions will be denoted with a superscript "T". Scalar quantities will

be written as a lower case letter with no special notation.
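To make the pattern-forming operation concrete, the sketch below evaluates
equations (2.1) and (2.2) for a small hypothetical example. It is written in
Python/NumPy for illustration only; the variable names do not correspond to the
report's FORTRAN routines.

    import numpy as np

    def pattern_forming_output(w, x):
        # Overall array output y(k) = w^T x(k) for complex weights w and outputs x.
        return np.dot(w, x)

    # Each complex weight applies a gain Ai and a phase shift theta_i (eq. 2.1).
    w = np.array([0.5 * np.exp(1j * 0.3), 1.0 * np.exp(-1j * 1.2)])
    gains = np.abs(w)                 # Ai = [(Re{wi})^2 + (Im{wi})^2]^(1/2)
    phases = np.angle(w)              # theta_i = tan^-1(Im{wi} / Re{wi})

    x = np.array([1.0 + 0.2j, -0.4 + 0.9j])   # hypothetical sensor element outputs
    y = pattern_forming_output(w, x)           # overall array output y(k)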

2.3 Adaptive Processor

The adaptive processor controls the operation of the array system. It is

comprised of the control algorithm and any supplemental signal processing


components required by that particular algorithm. The adaptive processor uses
the sensor element and array system outputs to compute a new set of complex
weights. These weights are passed to the pattern-forming network to adjust
the amplitude and phase of the sensor element outputs. The resulting array
system response, which is hopefully acquiring a greater resemblance to the

desired communication signal, is then used by the adaptive processor to help


compute the next weighting vector.

The variable complex weights are updated by the processor in such a way

that the array system output is optimized according to some specified


performance criterion. This criterion governs the control algorithm by
defining the parameters that are to be optimized through adaptation.
Different control algorithms have different performance criteria. In other
words, despite the fact that they are all trying to accomplish similar tasks,
algorithms strive for optimization in different ways. The following three
sections are devoted to the selected algorithms (processors), and include
explanations of what the algorithms are trying to accomplish and how they
achieve their respective goals.

Before leaving this section, it should be mentioned that although the


array system model was depicted using continuous time representation, computer

simulations require discrete samples. In the discussions and derivations that
follow, signals will be represented as sampled values. Also, in order to
represent bandpass signals, complex lowpass equivalent representation (CLPE)
has been utilized. A discussion of CLPE is given in Appendix A.
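For readers who want a concrete picture of the CLPE representation before
turning to Appendix A, the sketch below shows the standard relationship between
a complex lowpass sample stream and the real bandpass waveform it represents,
x(t) = Re{x_lp(t) e^(j 2 pi fc t)}. This is only an assumed illustration of the
usual convention; the carrier frequency, sampling rate, and signal are
hypothetical.

    import numpy as np

    fc = 15.0e6                    # hypothetical HF carrier frequency
    fs = 100.0e6                   # hypothetical sampling rate, for illustration only
    t = np.arange(512) / fs

    # A narrowband complex lowpass signal: slowly varying amplitude and phase.
    x_lp = (1.0 + 0.1 * np.cos(2 * np.pi * 1.0e3 * t)) \
           * np.exp(1j * 2 * np.pi * 5.0e3 * t)

    # The real bandpass waveform represented by the complex samples.  The
    # simulation works directly with x_lp; the bandpass signal is never formed.
    x_bp = np.real(x_lp * np.exp(1j * 2 * np.pi * fc * t))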

3.0 LEAST MEAN SQUARE (LMS) ALGORITHM

The LMS algorithm was introduced and developed by Widrow in the mid-
1960's [1]. It uses gradient estimation techniques to arrive at an optimal
solution. The LMS algorithm is probably the most popular of all adaptive
algorithms. It has been used in a variety of adaptive applications including
channel equalization, noise cancellation, and antenna array systems. It has
become somewhat of a standard and is frequently used as a performance
barometer for other adaptive algorithm studies.

It is important to note that the LMS algorithm requires the generation of

m"/2 an error signal, which in turn requires the generation of a reference


signal. This reference signal represents the signal that one desires to
receive, and it is generated using some approximation technique. For some
applications, the reference signal requirement is unattainable, and the LMS
algorithm cannot be used. Communication systems in general, however, lend
themselves to reference generation, and systems that involve the use of known
codes have been shown to be particularly conducive to the generation of a
satisfactory reference signal [11-13].

3.1 Motivation for Selection of LMS Algorithm

As its popularity suggests, the LMS algorithm has many attractive


qualities. Probably its most attractive quality is its overall simplicity.
Few computations must be performed in order to update the complex weights. It
takes on the order of 2N computations, where N is the number of sensor
elements, to update the weights. Also, the adjustment of one weight does not
affect the adjustment of another. Therefore, the weights can be updated
simultaneously. This keeps processing time to a minimum.

A natural outgrowth of the LMS algorithm's computational simplicity is


the relatively small hardware requirements. It can be easily implemented in
either analog or digital form. For many applications, the LMS algorithm
represents a good trade off between speed of convergence* and implementational

* The speed of convergence is measured relative to the number of times the


complex weighting vector must be updated before the antenna pattern has
converged.


feasibility. As a general rule, algorithms with rapid convergence rates are


very complex. The LMS algorithm, while not possessing convergence rates as
rapid as those offered by recursive processors, has rates that are
satisfactory for most slowly varying environments.

Finally, the operation of the LMS algorithm is straightforward and easily


understood. The algorithmic steps are clear and well defined. This, along
with the other factors mentioned, makes the LMS algorithm a prime candidate for
our study.

The operation of the LMS algorithm is governed by the Mean Square Error
performance criterion. Before discussing the inner workings of the LMS
algorithm, this criterion will be explained, as it adds valuable insight into
how the algorithm operates.

3.2 Mean Square Error (MSE) Performance Criterion

The LMS-controlled adaptive array system is shown in Figure 3.1 and will
be used to present the fundamental manner in which the MSE criterion is
used. For the moment, assume that the reference signal, d(k), is the actual
value that is sent by the desired communicator. An error signal, e(k), is
defined as the difference between what was sent and what was received.

     e(k) = d(k) - y(k) = d(k) - w^T x(k)                           (3.1)

The LMS algorithm uses this error signal, along with the sensor element output
information, to calculate a new set of complex weight values. The weights are
computed such that the resulting error signal, and thus the MSE is reduced.
As this process is repeated, the mean square error approaches zero, signifying
that the value that was received was approximately equal to the value that was
sent.

Although correct in principle, the preceding discussion has one blatant


flaw. If the actual value sent by the communicator was truly known at the
receiver, no information would be conveyed and there would certainly be no
need to perform complicated adaptive techniques. The desired signal cannot be
known with certainty at the receiver and therefore must be estimated. As


mentioned, systems using known codes, such as the current system of interest,
have been shown to be particularly adept at reference signal generation.

Figure 3.1  LMS-Controlled Adaptive Array System

3.3 MSE Performance Criterion Derivation

The goal of the LMS algorithm is to adjust the complex weights in order

to minimize mean square error. An expression of mean square error as a


function of the weight values will now be derived.

The required error signal is defined as the difference between the


generated reference signal and the adaptive array output signal.
     e(k) = d(k) - y(k) = d(k) - w^T x(k)

The square of the error signal can then be written as

     e^2(k) = d^2(k) - 2 d(k) x^T(k) w + w^T x(k) x^T(k) w          (3.2)

Taking the expected value of both sides yields

     E{e^2(k)} = E{d^2(k)} - 2 E{d(k) x^T(k)} w + w^T E{x(k) x^T(k)} w     (3.3)

The above equation represents the mean square error as a function of the
complex weights.

In order to simplify the notation, define the vector p as the cross


correlation between the desired response and the sensor element output and the
matrix R as the input correlation matrix.

     p = E{ [d(k)x1(k)  d(k)x2(k)  ...  d(k)xN(k)]^T }              (3.4)

where d(k) is the scalar desired response and xi(k) is the output of sensor
element i.

               | x1(k)x1(k)   x1(k)x2(k)   ...   x1(k)xN(k) |
     R = E{    |     .             .                  .     |  }    (3.5)
               | xN(k)x1(k)   xN(k)x2(k)   ...   xN(k)xN(k) |

Once again, xi(k) is the output of sensor element i. The mean square error
can then be defined as

     MSE = E{e^2(k)} = w^T R w - 2 p^T w + E{d^2(k)}                (3.6)

It is very important to note that the MSE is a positive quadratic function


of the weights. This function is a concave hyperparaboloidal (bowl-shaped)
surface that contains no local minima. This is a performance surface and is
depicted in Figure 3.2.

Notice that the minimum mean square error corresponds to the global
minimum of the performance function. It is also known that the gradient of


the function is zero at this minimum. Therefore, in order to find the optimal
weight settings (the weights which produce minimum mean square error), the
gradient of the function is found and set equal to zero.

The gradient is obtained by differentiating the MSE function with respect
to the weights.

     ∇ = [ ∂E{e^2(k)}/∂w1, ..., ∂E{e^2(k)}/∂wN ]^T = -2p + 2Rw      (3.7)

Setting the gradient equal to zero, the optimal weights are found.

Figure 3.2  MSE Performance Surface

     -2p + 2Rw = 0

     w_OPT = R^-1 p                                                 (3.8)

This is an extremely important result and represents the matrix form of the
Wiener-Hopf equation.

The Wiener-Hopf equation defines the optimal weight settings in the mean
square sense. Intuitively, it may seem unreasonable to prescribe a

complicated adaptive technique when an optimal solution is known. As will be


shown, however, the solution is the problem, and the LMS algorithm is a method

to avoid actually computing the Wiener-Hopf equation.
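To make this result concrete, the sketch below forms sample estimates of R and
p from hypothetical data and solves the Wiener-Hopf equation directly. This is
exactly the computation that the LMS algorithm of the next section is designed
to avoid; the data model, reference signal, and variable names here are
assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 4, 2000                            # elements and snapshots (hypothetical)

    # Rows of X hold the sampled element outputs x^T(k); d holds the reference d(k).
    X = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
    d = X[:, 0] + 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

    R = X.conj().T @ X / K                    # sample estimate of R
    p = X.conj().T @ d / K                    # sample estimate of p

    w_opt = np.linalg.solve(R, p)             # Wiener-Hopf solution, eq. (3.8)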

3.4 LMS Algorithm Description

An actual computation of the Wiener-Hopf equation would require the


explicit measurement of all correlation functions, since it is unreasonable to
assume that the correlations of deliberate interferences will be known. This
is a plausible task, but it would require large amounts of hardware. The

requirement of matrix inversion, which for most arrays of practical size is


computationally intensive, is more prohibitive. In fact, for most
applications of reasonable magnitude, the process of directly inverting the
correlation matrix is entirely infeasible. The LMS algorithm is simply a
practical method of finding close approximations to the Wiener-Hopf equation.

The LMS algorithm is an implementation of the Method of Steepest


Descent. Using this method, the updated weight vector is equal to the past
weight vector plus a change that is proportional to the negative gradient.

I," w(k + 1) - w(k) - pV (3.9)

The parameter U is a factor that controls stability and convergence rate.

.. Updating the weights can be thought of as descending along the aforementioned


performance surface in an attempt to reach the "bottom of the bowl."

The LMS algorithm avoids explicit correlation function measurement and
matrix inversion by utilizing a crude but effective gradient estimate. Recall
that the gradient of the MSE function is given by

     ∇ = [ ∂E{e^2(k)}/∂w1, ..., ∂E{e^2(k)}/∂wN ]^T = -2p + 2Rw

The LMS algorithm estimates the gradient by using the square of a single error
sample instead of the MSE and differentiating with respect to the weights.

     ∇̂ = [ ∂e^2(k)/∂w1, ..., ∂e^2(k)/∂wN ]^T = -2 e(k) x*(k)        (3.10)

Using this gradient estimate in place of the true gradient in (3.9) yields the

LMS algorithm.

     w(k + 1) = w(k) + 2μ e(k) x*(k)                                (3.11)

Notice that this algorithm does not require squaring, averaging or


differentiation. The gradient estimate can be shown to be an unbiased
estimator of the true gradient.

     E{∇̂} = E{-2 e(k) x*(k)}
           = E{-2 [d(k) - y(k)] x*(k)}
           = -2 E{d(k) x*(k) - x*(k) x^T(k) w(k)}
           = 2(Rw - p) = ∇

The LMS algorithm does not require the angle of arrival of the desired signal
to be known a priori. If it is unknown, the weights are normally initialized
to an arbitrary value of 1 < 00.

i < 00
i < 00
w(0) (3.12)

i < 00

If the angle of arrival of the desired signal is known, however, the initial
weights can be chosen such that the initial antenna pattern effectively
"looks" directly at the desired signal.

     w(0) = [e^(-jΩ1)  e^(-jΩ2)  ...  e^(-jΩN)]^T                   (3.13)
where Ωi is a phase value that exactly compensates for the phase delay due to
the spatial separation of the sensor elements. These values can be easily
calculated if the angle of arrival and element locations are known.
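A minimal sketch of the complete recursion is given below: the weights are
initialized as in (3.13) and then updated with (3.11). It is written in
Python/NumPy for illustration only (the report's routines, WEIGHTINIT and LMS,
are FORTRAN and are listed in Appendix C); the element phases, step size, and
signal model are hypothetical.

    import numpy as np

    def steered_initial_weights(omega):
        # wi(0) = exp(-j Omega_i): the initial pattern "looks" at the desired signal.
        return np.exp(-1j * np.asarray(omega))

    def lms_step(w, x, d, mu):
        # One LMS iteration: e(k) = d(k) - w^T x(k), w(k+1) = w(k) + 2 mu e(k) x*(k).
        e = d - np.dot(w, x)
        return w + 2.0 * mu * e * np.conj(x), e

    # Hypothetical usage with assumed desired-signal phases Omega_i at each element.
    omega = np.array([0.0, 0.8, 1.6, 2.4])
    w = steered_initial_weights(omega)
    mu = 0.01
    rng = np.random.default_rng(1)
    for k in range(500):
        x = np.exp(1j * omega) + 0.3 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
        d = 1.0 + 0.0j                        # reference sample from the known code
        w, e = lms_step(w, x, d, mu)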

3.5 LMS Software Modules

Simulation of the LMS algorithm requires two routines. The weights are
initialized using the routine WEIGHTINIT. The initial weights are given as

     wi(0) = exp(-jΩi)          i = 1, 2, ..., N

where Ωi is the phase of the desired signal at element i.

The routine LMS updates the weights according to the update equations
that have been given. It requires both the sensor element outputs and the
overall array system output to adjust the weights. It also requires a
reference signal. In this model, it has been assumed that a known code is
available. It is assumed that the code at the receiver is synchronized with
the code (preamble) that is being sent. The channel model introduces delay,

however, so the reference signal must be delayed accordingly if the


synchronization assumption is to be satisfied.

The FORTRAN source code listings are given in Appendix C. Figure 3.3

depicts the modules discussed and illustrates the primary input and output
parameters. The variable names used in the program are shown in parentheses.

The LMS algorithm is not without its drawbacks. It does not converge
terribly fast, but more importantly, it requires the presence of a reference
signal to adapt. In cases such as this one where the reference signal is not
always present, the weight values would have to be effectively frozen during

the signal's absence. Environmental changes in this period could not be


tracked.

The severity of this problem will be dependent upon the rapidity of


change of the application medium. A slowly varying medium should pose no
prohibitive difficulties. An adaptive algorithm which does not encounter the
problem of a required reference signal will now be examined.

Figure 3.3  LMS Software Modules

    WEIGHT INITIALIZATION -- SUBROUTINE WEIGHTINIT
      inputs:  type of algorithm (ALGTYP), element locations (X,Y),
               arrival angle of communication signal (PHI, THETA)
      output:  initial weights (COMPWT)

    LMS ALGORITHM -- SUBROUTINE LMS
      inputs:  weights (COMPWT), sensor outputs (OUTPUT), array output (WTDSUM)
      output:  updated weights (COMPWT)
4.0 CONSTRAINED LMS ALGORITHM

The second control algorithm selected for this study is the Constrained
LMS algorithm. It was developed by O. L. Frost [5]. The name is somewhat
misleading as it suggests that this algorithm is simply a permutation of the

LMS algorithm. This is definitely not the case, and some of their contrasting
features are illustrated below in Table 3.1. The name may have been derived

from the fact that, like the LMS algorithm, the Constrained LMS algorithm uses
a gradient approach.

4.1 Motivation for Selection

The primary advantage of the Constrained LMS algorithm is the elimination


of the reference signal requirement. No reference signal must be present for
the algorithm to reduce the effects of interferers or respond to changes in
the environment. Complicated reference generation techniques can be
avoided. It is, however, required that the arrival angle of the desired
signal be known a priori. Secondly, the Constrained LMS algorithm requires
relatively few computations in order to update the variable weights. Finally,
many adaptive processors tend to degrade their own mainbeam response when
attempting to place nulls in the directions of interferences.
By explicitly
constraining the response, the Constrained LMS algorithm prevents this from ""

occurring.

4.2 Constrained LMS Algorithm Description

As mentioned, the Constrained LMS algorithm places fundamental limits on


the adaptive array's response in the direction of interest (look direction).
Throughout the adaptation process, the response in the direction of arrival of
the desired signal will remain unchanged. The individual variable weights are
allowed to take on any values in order to null out interferences provided that
the look direction response is maintained. This type of algorithm allows the
*1 array to look in the direction of the desired signal (whether it is present or
not) while ignoring signals arriving from other directions. It is true that
an interfering signal that enters the array system at virtually the same angle
as the desired signal will disturb the system. The same could be said for all
algorithms because they rely on spatial separation to discriminate.

Table 3.1
Features of Adaptive Algorithms

  Feature                      LMS                  Constrained LMS

  Reference Signal             Required             Not Required

  Angle of Arrival of
  Communication Signal         Not Required         Required

  Performance Criterion        M.S.E.               Maximum-Likelihood

  Optimization Technique       Minimize Error       Minimize output power
                                                    according to constraints
The adaptive array system model which will be used to explain the
operation of the Constrained LMS algorithm was presented in [5] and is shown
in Figure 4.1. Although this simulation deals with narrowband signals, the

original broadband processor model will be used in this discussion. The model
consists of N elements and J taps per element. When narrowband signals are
used, a simplified model results; it will be described later. Also shown in
Figure 4.1 is an "equivalent processor" which aids in the understanding of how

the Constrained LMS algorithm operates.

From Figure 4.1, it is evident that the Constrained LMS processor


contains an additional component. This component, known as a spatial
correction filter, performs a task that is often regarded as preprocessing.
This filter compensates for the physical misalignment of the sensor elements
by introducing individual delays so that the desired signal effectively
arrives at the same time at each element. In other words, the spatial
correction filter guarantees that the communication signal component is
identical at each element output. The delays can be calculated from the array
geometry and the arrival angle of the desired signal. Noise components
arriving at the sensors at other angles will not produce equal components at
the element outputs.

From the desired signal's vantage point, the processors in Figure 4.1 are
equivalent. Each adaptive weight in the equivalent processor is simply equal
to the sum of the weights in the vertical column above it. With these values,
the signal components at the respective processor outputs are identical. By
assigning a value to these equivalent weights, a desired frequency response in
the look direction is selected. This introduces J constraint conditions.
Since there are N X J adjustable weights, the remaining N X J - J degrees of
freedom can be used to minimize the non-look direction noise power.
Minimizing non-look direction noise power is equivalent to minimizing total
output power because, regardless of how the weights are adjusted, the
constraints guarantee that the response in the look direction will not be

degraded.

The basic manner in which the Constrained LMS algorithm operates has been
discussed. For the purpose of clarity, the primary steps taken by the
algorithm will now be re-emphasized. Delays in the spatial correction filter

Figure 4.1  Signal-Aligned Broadband Adaptive Array System
are calculated to align the communication signal components on the sensors. A
desired response in the look direction is selected by assigning weight values
to the equivalent processor (the sum-on-column constraints are determined).
Once these tasks have been completed, adaptation begins and the processor
strives to minimize the total output power. The constraints guarantee that
there is no possibility of reducing power contributions made by the
communication signal. Mathematical derivations of the optimum constrained
weight solution and the Constrained LMS algorithm will now be presented.

4.3 Derivation of Optimum Constrained Weight Solution

The assumptions and definitions will be discussed first. Recall that the
signals at the sensor element outputs can be written as the combination of the
signal component and noise components

x(k) = s(k) + n(k)

It is assumed that both the signal and noises can be modeled as zero-mean
random processes with unknown second-order statistics. The covariance
matrices are defined as follows:

     E{x(k) x^T(k)} = Rxx

     E{s(k) s^T(k)} = Rss

     E{n(k) n^T(k)} = Rnn

It is also assumed that the signal component is uncorrelated with the noise
components.

     E{n(k) s^T(k)} = 0
Finally, the expected value of the array output power is given by

     E{y^2(k)} = E{w^T x(k) x^T(k) w} = w^T Rxx w                   (4.1)
Recall that the adaptive weights in the equivalent processor dictated the
frequency response characteristic in the look direction. Define a J-
dimensional vector that guarantees the desired frequency response and
represents the summed weight values of the J vertical columns as

     f = [f1  f2  ...  fJ]^T                                        (4.2)

The weights in the jth vertical column must sum to the selected number fj.
This constraint condition can be expressed as

     cj^T w = fj          j = 1, 2, 3, ..., J                       (4.3)

where cj is an NJ-dimensional vector consisting of all zeros and N ones, given by

     cj^T = [0 0 ... 0  ...  1 1 ... 1  ...  0 0 ... 0]             (4.4)

(the N ones occupy the jth group of N positions)

A constraint matrix can then be defined that satisfies all J equations given
by (4.3) as

     C = [c1 ... cj ... cJ]                                         (4.5)

The full set of constraints can then be written as

     C^T w = f                                                      (4.6)

Although it seems like a complicated process, the constraint matrix C simply


guarantees that the sum of the weights in the vertical columns is equal to the
weights in the equivalent processor.

The constrained optimization problem statement can now be formulated.
The array output power, w^T Rxx w, must be minimized subject to the constraint
condition C^T w = f.

The optimum weight vector is found by using Lagrange multipliers. A cost
function, similar in purpose to the MSE function of the LMS algorithm, is
formed by adjoining the constraint equation, scaled by a J-dimensional vector of
undetermined Lagrange multipliers λ, to the output power. This cost function is
then minimized with respect to the weights.

     Cost(w) = 1/2 w^T Rxx w + λ^T [C^T w - f]                      (4.7)

(a factor of 1/2 is added to simplify the arithmetic)

Once again, notice that the cost function is a quadratic function of the
weights. It is known that the gradient of this function is zero at the

minimum point. The optimum weights are then found by finding the gradient of
the function and setting it equal to zero.

The gradient of the cost function is found by differentiating with


respect to the weights.

     ∇_COST = Rxx w + C λ                                           (4.8)

Setting this result equal to zero yields the optimal weight solution.

     Rxx w_OPT + C λ = 0

     w_OPT = -Rxx^-1 C λ                                            (4.9)

The Lagrange multipliers are yet to be determined. They can be found by


realizing that the optimal weight solution must satisfy the constraint
condition.

     C^T w_OPT = f = C^T [-Rxx^-1 C λ]

     λ = -[C^T Rxx^-1 C]^-1 f                                       (4.10)

The optimum constrained weight vector can now be expressed as

     w_OPT = Rxx^-1 C [C^T Rxx^-1 C]^-1 f                           (4.11)
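When Rxx is known or estimated, (4.11) can be evaluated directly. The sketch
below is a hypothetical Python/NumPy illustration using a randomly generated
positive-definite covariance and a single sum-to-unity constraint; it is not one
of the report's test configurations.

    import numpy as np

    def optimum_constrained_weights(R, C, f):
        # w_opt = R^-1 C [C^T R^-1 C]^-1 f   (eq. 4.11)
        Rinv_C = np.linalg.solve(R, C)
        return Rinv_C @ np.linalg.solve(C.T @ Rinv_C, f)

    # Hypothetical example: N = 4 weights and one (J = 1) constraint that forces
    # the weights to sum to unity.
    N = 4
    rng = np.random.default_rng(2)
    A = rng.standard_normal((N, N))
    R = A @ A.T + N * np.eye(N)               # positive-definite stand-in for Rxx
    C = np.ones((N, 1))
    f = np.array([1.0])

    w_opt = optimum_constrained_weights(R, C, f)
    assert np.allclose(C.T @ w_opt, f)        # the constraint is satisfied exactly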

4.4 Derivation of Constrained LMS Algorithm

As in the LMS algorithm, this algorithm uses the Method of Steepest


Descent. Recall that this method states that the new weight vector is equal
to the previous weight vector plus a change proportional to the negative
gradient.

     w(k + 1) = w(k) - μ ∇_COST

In this case, the update equation can be expressed as

     w(k + 1) = w(k) - μ[Rxx w(k) + C λ(k)]                         (4.12)

The initial weight vector, w(O), must satisfy the constraint condition.
It is chosen as

     w(0) = C(C^T C)^-1 f                                           (4.13)

The updated weight vector must satisfy the constraint condition as well. This
can be written as

     f = C^T w(k + 1) = C^T [w(k) - μ(Rxx w(k) + C λ(k))]

The Lagrange multipliers, λ(k), are then given by

     λ(k) = -[C^T C]^-1 C^T Rxx w(k) - (1/μ)[C^T C]^-1 [f - C^T w(k)]          (4.14)

and the iterative relation for the update equation is expressed as

     w(k + 1) = w(k) - μ[I - C(C^T C)^-1 C^T] Rxx w(k) + C(C^T C)^-1 [f - C^T w(k)]     (4.15)
For the sake of convenience, two definitions are made. Define the NJ-
dimensional vector δ as

     δ = C(C^T C)^-1 f                                              (4.16)

and the NJ x NJ matrix P as

     P = I - C(C^T C)^-1 C^T                                        (4.17)

where I is the identity matrix.

The update equation can then be rewritten as

     w(k + 1) = P[w(k) - μ Rxx w(k)] + δ

The covariance matrix Rxx is unknown, however, so an approximation of Rxx at
the kth iteration, x(k) x^T(k), is used. Recognizing the fact that x^T(k) w(k) =
y(k), the final update equation becomes

     w(k + 1) = P[w(k) - μ y(k) x(k)] + δ                           (4.18)
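The sketch below walks through one realization of this recursion for the
broadband processor: the constraint matrix of (4.4) is built, P and δ are
precomputed from (4.16)-(4.17), the weights are initialized with (4.13), and
each iteration applies (4.18). It is an illustrative Python/NumPy sketch with
hypothetical real-valued data and parameter values, not the report's FORTRAN
implementation.

    import numpy as np

    def constraint_matrix(N, J):
        # C = [c1 ... cJ]: column cj holds ones over the N weights in the jth
        # vertical column of the processor (eq. 4.4).
        C = np.zeros((N * J, J))
        for j in range(J):
            C[j * N:(j + 1) * N, j] = 1.0
        return C

    N, J = 4, 3
    C = constraint_matrix(N, J)
    f = np.array([1.0, 0.0, 0.0])             # assumed look-direction tap response

    CtC_inv = np.linalg.inv(C.T @ C)
    delta = C @ CtC_inv @ f                   # delta = C (C^T C)^-1 f        (4.16)
    P = np.eye(N * J) - C @ CtC_inv @ C.T     # P = I - C (C^T C)^-1 C^T      (4.17)

    w = delta.copy()                          # w(0) = C (C^T C)^-1 f         (4.13)
    mu = 0.005
    rng = np.random.default_rng(3)
    for k in range(1000):
        x = rng.standard_normal(N * J)        # hypothetical stacked tap outputs x(k)
        y = w @ x                             # array output y(k) = w^T x(k)
        w = P @ (w - mu * y * x) + delta      # constrained LMS update        (4.18)

Because P projects onto the subspace in which C^T w = 0 and δ re-establishes the
constraint, C^T w remains equal to f at every iteration of this loop.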

4.5 Constrained LMS Simulation Model

The tapped delay line in the broadband processor enables the user to
select a desired frequency response in the look direction. This study is not
concerned with such filtering because narrowband signal models are being
used. For the purposes of this simulation, it is only necessary that the
response of the adaptive array in the look direction be equal to unity. This
response can be achieved in the broadband model by setting one weight in the
equivalent processor equal to one and the remaining weights equal to zero.

This is somewhat wasteful, however, as the same response can be obtained using

the simplified model shown in Figure 4.2. The explanations and definitions
that have been presented are still valid, but the tapped-delay line in the
broadband model now consists of a single tap (weight). The weight in the
equivalent processor is assigned a value of 1 ∠ 0° so that the adaptive array
has an all-pass distortionless response in the look direction. The sum of the

Figure 4.2  Narrowband Signal-Aligned Array System


weights in the single vertical column of the original processor are still
required to equal that of the equivalent processor.

4.5.1 TAKAO Implementation

The Constrained LMS algorithm requires a spatial correction filter to

compensate for the misalignment of the sensor elements. A method proposed by


Takao et al. [6] merges the misalignment compensation and the weight
computation into a single process. The direction of arrival of the
communication signal is used to generate a directional constraint to govern
the weights. This method has been used in this simulation. An additional
benefit of this implementation is that it helps to isolate the Constrained
LMS processor from the other array system components, which was desired from

the outset.

A description of the Takao implementation will now be presented. Note


the similarities between these and the original equations [5], as they are for
the most part equivalent.

f = I < 0O (only one weight in equivalent processor)

(c*)T w - f constraint equation

= (wl + jw2 , w 3 + jw4 W2N - i + JW2N)

T 1 J2 N
c = (e , e , "', e )

where 2i the phase of the desired signal at sensor element i

P I - (c c)IN"

=c/N

w(k + i) = P [w(k) - j y(k) x (k)] + a update equation

-31- C
%.%%

.....................................
S--.

I . . . . . . . . . .- . . .I - *x .- -- .- o.,II." I
ft,,

4.6 Constrained LMS Software Modules

The weights are initialized using the WEIGHTINIT routine. They are
assigned a value of
     wi(0) = (steering delay i) / (number of elements)

The WEIGHTINIT subroutine is also responsible for calculating the individual


steering delays that compensate for misalignment. These are given as

     steering delay i = exp(-jΩi)

where Ωi is the phase of the desired signal at element i.

Several quantities are calculated from directional information the first


time the weight update routine CONLMS is called. These include

     P = I - (c c*^T)/N

     δ = c/N

     c^T = [e^(jΩ1), e^(jΩ2), ..., e^(jΩN)]

Each time the routine is called, a new weight is calculated using

     w(k + 1) = P[w(k) - μ y(k) x*(k)] + δ

These updated weights are then passed to the pattern-forming network.

The FORTRAN source code listings are given in Appendix C. Figure 4.3
depicts the modules discussed and illustrates the primary input and output
parameters. The variable names used in the program are shown in parenthesis.

The Constrained LMS algorithm also has some limitations. Although it


does not require a reference signal, it does require prior information
pertaining to the communication signal. It is vulnerable to steering error.
Steering error arises if the supposedly known angle of arrival of the

Figure 4.3  Constrained LMS Software Modules

    WEIGHT INITIALIZATION -- SUBROUTINE WEIGHTINIT
      inputs:  type of algorithm (ALGTYP), element locations (X,Y),
               arrival angle of communication signal (PHI, THETA)
      output:  initial weights (COMPWT)

    CONSTRAINED LMS ALGORITHM -- SUBROUTINE CONLMS
      inputs:  weights (COMPWT), array output (ARYOUT), sensor outputs (OUTPUT),
               steering delays (STRDEL)
      output:  updated weights (COMPWT)
information signal changes appreciably. In that event, the algorithm treats
the signal as it would any other non-look direction signal - by placing a
deep null in its direction of arrival. The algorithm would have no way of
knowing that the "interference" it is trying to ignore carries desired
information. Secondly, its convergence rate is comparable to that of the LMS
algorithm. An algorithm that possesses a faster response will now be
examined.
5.0 UPDATE COVARIANCE ALGORITHM

Both the LMS and Constrained LMS algorithms circumvent computational

problems associated with the direct calculation of a set of weights by using

effective estimates. The simpler calculations that result allow them to


frequently update the weights in order to compensate for the time-varying

environment. Recursive processors such as the Update Covariance algorithm

presented by Monzingo and Miller [7] can also be used to avoid these

computational difficulties. These algorithms recursively update the matrix
inverse so that direct matrix inversion is never required. Although they
also avoid direct matrix inversion, recursive processors represent a

significant departure from the algorithms previously discussed.

Recall that the optimum weight solution given by the Wiener-Hopf equation can be
expressed as

w_OPT = R_xx^{-1} p

The Update Covariance algorithm and other recursive processors recursively

estimate the sample covariance matrix rather than rely on gradient methods
that asymptotically approach an optimal solution. These algorithms calculate

the optimal set of weights at each sampling instant based on a least-squares

fit to the received data.

5.1 Motivation for Selection

The primary reason for the selection of the Update Covariance algorithm

is its speed of convergence. This characteristic, which is common among

recursive processors, allows the algorithm to respond to changes more rapidly

than other methods of adaptation. It is simply a more direct approach to the

problem of computing an optimal weight solution. Updating the complex weights

does not involve descending along a performance or cost surface at a limited

rate.

Another beneficial quality of the Update Covariance algorithm is that no

reference signal is required for adaptation. Once again, this eliminates the
need for complicated reference generation techniques. It does, however,


require initial directional information pertaining to the communication
signal.

Finally, recursive processors hold good promise for the future. They
require a digital implementation, and this has been, and still is, their
primary disadvantage. The great technological strides made in the production
of very fast, inexpensive, and compact digital hardware, however, have
resulted in the consideration of recursive processors for applications that
were previously out of the question. The improved convergence rates (measured
in terms of iterations) offered by recursive processors have been documented
[8] and, if technological trends continue, it may become implementationally
feasible to exploit this advantage.

5.2 Update Covariance Algorithm Description

As mentioned, the Update Covariance processor estimates the sample


covariance matrix in order to calculate an optimal weight solution. Unlike
the other algorithms described, the operation of this algorithm can be
described as a series of complex computations solely intended to calculate the
optimal weight solution.

As the name implies, the Update Covariance algorithm uses the sample
covariance estimate, R, to summarize the effect of de-emphasizing past
data. The new sample covariance matrix estimate is given by

R_xx(k+1) = α R_xx(k) + x*(k+1) x^T(k+1)          (5.1)

The new estimate is equal to the newly computed value x*(k+1) x^T(k+1) plus the
past estimate scaled by a factor of α. α is a number between 0 and 1 that is
used to determine the significance of past data. The inverse estimate then
becomes

R_xx^{-1}(k+1) = (1/α) [R_xx(k) + (1/α) x*(k+1) x^T(k+1)]^{-1}          (5.2)

Note that calculating the inverse in this manner, however, would require matrix
inversion, which is exactly what the algorithm is trying to avoid. Therefore
it is useful to invoke the following matrix identity


[P^{-1} + M* α^{-1} M^T]^{-1} = P - P M* [M^T P M* + α]^{-1} M^T P          (5.3)

This identity is applied to equation 5.2 to obtain R_xx^{-1}(k+1) in the form

R_xx^{-1}(k+1) = (1/α) [ R_xx^{-1}(k) - (R_xx^{-1}(k) x*(k+1) x^T(k+1) R_xx^{-1}(k)) / (α + x^T(k+1) R_xx^{-1}(k) x*(k+1)) ]          (5.4)

The optimum weight solution can then be found by utilizing the Wiener-Hopf
equation

w_OPT = R_xx^{-1} p

Multiplying both sides of equation 5.4 by the vector p yields the Update
Covariance weight update equation

w(k+1) = w(k) - (R_xx^{-1}(k) x*(k+1) x^T(k+1) w(k)) / (α + x^T(k+1) R_xx^{-1}(k) x*(k+1))          (5.5)

The Update Covariance algorithm consists of the following two steps:

1. The inverse sample covariance estimate is formed using equation 5.4.

2. The weight solution is calculated using equation 5.5.

Due to the fact that the Update Covariance algorithm can be thought of as an
entirely mathematical process, the simulation model will be dispensed with.
The software associated with this algorithm simply performs the computations
outlined in equations 5.4 and 5.5.
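A minimal NumPy sketch of these two steps follows (an illustration, not the Appendix C FORTRAN; the value of alpha and the identity-matrix initialization noted in the comment follow the description in Section 5.3):

    import numpy as np

    def updcovar(w, R_inv, x_new, alpha=0.99):
        """One Update Covariance iteration: equation 5.4 followed by equation 5.5.
        R_inv is the running inverse covariance estimate; alpha (0 < alpha < 1)
        controls how quickly past data are de-emphasized."""
        v = R_inv @ np.conj(x_new)                 # R_xx^{-1}(k) x*(k+1)
        denom = alpha + x_new @ v                  # alpha + x^T(k+1) R_xx^{-1}(k) x*(k+1)
        R_inv = (R_inv - np.outer(v, x_new @ R_inv) / denom) / alpha   # equation 5.4
        w = w - v * (x_new @ w) / denom            # equation 5.5
        return w, R_inv

    # First call: R_inv = np.eye(N, dtype=complex), with w from WEIGHTINIT

Here alpha = 0.99 is only a representative choice; the report treats the de-emphasis factor simply as a number between 0 and 1.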

5.3 Update Covariance Software Modules

The weights are initialized using the WEIGHTINIT subroutine. These
weights contain directional information which initially steers the antenna in
the direction of the desired signal.

The first time the weight update routine UPDCOVAR is called, the inverse
sample covariance estimate is initialized to the identity matrix. After the
first call, a new sample covariance estimate is formed by performing the
necessary computations. This result is then used to calculate the new weight
vector, which is then passed to the pattern-forming network.

The FORTRAN source code is given in Appendix C. Figure 5.1 depicts the
modules that have been discussed and illustrates the primary input and output
parameters. The variable names used in the routines are shown in parentheses.

By examining the update equations, it becomes quite evident that the
Update Covariance algorithm, or any recursive processor for that matter, is
computationally intensive. Although recursive estimation produces significant
processing savings when compared to direct matrix inversion, it still
represents somewhat of a quantum leap in terms of complexity when compared to
the other algorithms presented.

Recall that the LMS algorithm required on the order of 2N computations to
update the weights, where N is the number of sensor elements. The Update
Covariance processor requires on the order of 5N² computations to update the
weights. Therefore, although the recursive processors require fewer
iterations to converge, it may take a great deal of time to complete each
iteration. For large arrays (those having many elements), recursive
processing may offer no improvement in actual convergence time over
the gradient-based algorithms.
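To put rough numbers on this trade-off, consider the nine-element array used in the tests of Section 7 and assume that every complex operation costs the same (an assumption, since multiplies and adds differ in practice):

    N = 9                              # sensor elements in the 3 x 3 test array
    lms_per_iter = 2 * N               # about 18 complex computations per LMS iteration
    updcov_per_iter = 5 * N ** 2       # about 405 per Update Covariance iteration
    print(updcov_per_iter / lms_per_iter)   # 22.5

Under this assumption the Update Covariance processor must converge in roughly twenty times fewer iterations than the LMS algorithm before it wins in total computation, a point the Test 2 results of Section 7.2 bear out.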

[Figure 5.1: WEIGHTINIT takes the type of algorithm (ALGTYP), the element locations (X,Y), and the arrival angles of the communication signal (PHI, THETA) and returns the initial weights (COMPWT). UPDCOVAR takes the weights (COMPWT) and the sensor outputs (OUTPUT) and returns the updated weights (COMPWT).]

Figure 5.1 Update Covariance Software Modules
6.0 DESCRIPTION OF THE HF ADAPTIVE ARRAY SIMULATION MODEL

The purpose of this section is to explain the computer simulation program
that has been developed to study adaptive algorithms for HF antenna arrays.
All parameters that the user must specify will be discussed first. These
parameters define the signal/interference environment as well as the array
system to be tested. Secondly, the method used to evaluate the performance of
the algorithms will be examined. Finally, the simulation software, including
signal models and the operation of the program, will be presented.

6.1 User-Definition of Array System and Environment

Before an adaptive array system can be evaluated, the user must define
the array system and environment to be studied. The parameters that must be
specified can be divided into the following classes:

1. Signal characteristics and environment

2. Interference environment

3. Array system

4. HF channel characteristics

5. Convergence characteristics

A complete listing of the user-specified parameters is given in Figure 6.1.


Appendix B contains the user manual for the simulation, giving complete
descriptions of the above parameters, as well as the actual user interface.
In addition to a description of the input process, the manual also defines
user options for viewing the simulation output.

6.2 Performance Evaluation

The performance of the adaptive algorithms will be evaluated using a


maximum signal-to-noise criterion. This criterion will now be examined.

Recall that the output of the array can be expressed as

y(t) = w^T x(t)

ARRAY SYSTEM                                             UNITS
  Number of sensor elements                              ---
  Length of antennas                                     meters
  Location of elements in rectangular coordinates        meters
  Adaptive algorithm                                     ---

SIGNAL CHARACTERISTICS
  Arrival angles, azimuth and elevation                  degrees
  Number of samples/bit                                  ---

INTERFERENCE CHARACTERISTICS
  Number of jamming signals                              ---
  Arrival angles, azimuth and elevation                  degrees
  J/S ratio                                              dB
  S/N (thermal) ratio                                    dB

HF CHANNEL CHARACTERISTICS
  Number of signal paths                                 ---
  Delay of each path                                     msecs
  Attenuation of paths                                   dB

CONVERGENCE CHARACTERISTICS
  Number of convergences                                 ---
  SNR tolerance                                          dB

Figure 6.1 User-entered parameters

where x(t) contains both signal and noise components.

The array output can then be divided into signal and noise components:

y_s(t) = w^T s(t)          y_n(t) = w^T n(t)

The expected signal and noise power at the array output is given as

E{|y_s(t)|²} = w*^T E[s*(t) s^T(t)] w = w*^T R_ss w

E{|y_n(t)|²} = w*^T E[n*(t) n^T(t)] w = w*^T R_nn w          (6.1)

Therefore, the signal-to-noise (noise + interference) ratio can be calculated
as

SNR = (w*^T R_ss w) / (w*^T R_nn w)          (6.2)

The optimum SNR can be computed using a matrix transformation as given by
Monzingo and Miller [7]. The optimum SNR is given as

SNR_OPT = s^T R_nn^{-1} s*          (6.3)

As the name implies, the maximum achievable SNR is used to evaluate algorithm
performance. The goal (optimum SNR) is known, and by computing the current
SNR, it is possible to observe the degree of success that the algorithm is
having in attaining this goal. The model is executed until it has reached a
user-set SNR goal.

This study is primarily interested in finding the average number of times
that a particular algorithm must update the weights (iterations) before it has
converged, as a function of the HF channel. It is useful to first consider a
three-path HF channel model as depicted by Figure 6.2. The HF channel is
characterized by the delays between the paths and the attenuation of each path
(determined by the variances of the random complex numbers, g_i(t)). By
selecting a set of delays and attenuations, an HF channel model is defined.

-42-

v:.'.. ............-... ..... ,.-.-.......... ..........-.-.. ..-.. ,--. -_..'.... '

04% . . .
. . ..."."..
.- . .". ."... . . . "-"
"':-." . . . . . . . . ..-. ....,. .-. .. ".,... '/- .'.- .-' .- .".
....-.-...- . .... " • . ... " "-
•"'
[Figure 6.2: the received signal is the sum of three delayed paths, each multiplied by a tap gain g_i(t). g_i(t) is a complex Gaussian zero-mean random process; the variance of g_i(t) determines the attenuation of the respective path.]

Figure 6.2 Three-path HF Channel Model
Notice, however, that it is possible to get different channel "realizations"
using the same characteristic channel model (same delays and variances) by
simply using different random numbers. This plays an important role in the
performance evaluation.

Due to the fact that the HF channel is slowly varying, a single channel
realization is selected for each adaptation. The optimum SNR, which is
channel dependent, is then calculated and adaptation begins. The current SNR
is periodically calculated, and if this value is not sufficiently close to the
optimum value (proximity to optimum entered by user and known as SNR
tolerance), the adaptation process is continued. When the SNR does approach
the optimum SNR, the algorithm is said to be "converged" and the number of
required iterations is recorded. A new HF channel realization (a new set of
random numbers for the tap weights) is then selected. The entire process is
repeated until the algorithm has converged the specified number of times
(user-entered). At that time, the average number of iterations is
calculated.

By performing this simulation for each of the selected algorithms, it


will be possible to determine, on the average, which algorithm converges in
the fewest number of iterations in an HF environment.
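The evaluation procedure just described can be summarized by the following skeleton (a hedged paraphrase of the simulation's outer loop; the helper functions named here are placeholders for the signal, channel, and algorithm models of Section 6.3):

    import numpy as np

    def average_convergence(num_convergences, snr_tol_db, check_every=10):
        """Average the iteration count over independent HF channel realizations."""
        counts = []
        for _ in range(num_convergences):
            channel = draw_channel_realization()       # new random tap gains g_i(t)
            snr_opt_db = optimum_snr_db(channel)        # s^T R_nn^{-1} s*, equation 6.3
            w = initialize_weights()
            k = 0
            while True:
                x = element_outputs(channel)            # signal + jammers + thermal noise
                w = update_weights(w, x)                # the selected adaptive algorithm
                k += 1
                if k % check_every == 0 and current_snr_db(w) >= snr_opt_db - snr_tol_db:
                    break                               # "converged" within the SNR tolerance
            counts.append(k)
        return np.mean(counts), np.std(counts)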

6.3 Simulation Software Description

The purpose of this section is to explain the signal models and the
operation of the simulation program used in the study.

6.3.1 Desired Signal Model

The signal that is to be received is assumed to be a BPSK waveform at a
carrier frequency f_c. This can be written in complex form as

s(t) = A(t) √P_s exp(jω_c t)

where

P_s = desired signal power, and
A(t) = +1, -1

The signal component at element i can be written as

s_i(t) = A(t) √P_s exp(jω_c t) exp(jΩ_i)          (6.4)

where Ω_i is the phase of the desired signal at element i.

Ω_i is calculated using the arrival angle information and the location of
the elements:

Ω_i = 2π f_c xrot(i) sin(θ) / c          (6.5)

where

xrot(i) = y(i) sin(φ) + x(i) cos(φ)
x(i), y(i) = rectangular coordinates of element i
φ = azimuthal arrival angle of desired signal
θ = elevation arrival angle of desired signal
c = speed of light

Using complex envelope representation, the carrier component can be
dropped from the notation. The jammer power and thermal noise power are
defined as ratios relative to the signal power, P_s. For the sake of
simplicity, the signal power is assigned a value of 1.

Equation 6.4 can be written for discrete sampling as

s_i(k) = A(k) √P_s exp(jΩ_i)

The signal vector then becomes

s(k) = A(k) √P_s [e^{jΩ_1}, e^{jΩ_2}, ..., e^{jΩ_N}]^T
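A short sketch of this signal model (illustrative values only: the element coordinates are those of the 3 x 3 test array of Section 7, the arrival direction is arbitrary, and fc is an assumed HF carrier frequency):

    import numpy as np

    def element_phases(x, y, phi_deg, theta_deg, fc=30e6, c=3e8):
        """Omega_i = 2*pi*fc * xrot(i) * sin(theta) / c,  equation 6.5."""
        phi, theta = np.radians(phi_deg), np.radians(theta_deg)
        xrot = y * np.sin(phi) + x * np.cos(phi)
        return 2 * np.pi * fc * xrot * np.sin(theta) / c

    def desired_signal(A_k, omega, Ps=1.0):
        """Discrete signal vector s(k) = A(k) * sqrt(Ps) * exp(j*Omega_i)."""
        return A_k * np.sqrt(Ps) * np.exp(1j * omega)

    # Example: one BPSK symbol, A(k) = +1, arriving from phi = 90 deg, theta = 80 deg
    x = np.array([0., 5., 10., 0., 5., 10., 0., 5., 10.])
    y = np.array([10., 10., 10., 5., 5., 5., 0., 0., 0.])
    s_k = desired_signal(+1, element_phases(x, y, 90.0, 80.0))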

6.3.2 Interference Model

The jamming signals in this study have been modeled as complex Gaussian
noise. This can be expressed as

n_J(t) = √(P_J/2) [E(t) + j F(t)]          (6.6)

where E(t) and F(t) are zero-mean random processes with a Gaussian
distribution and a variance of 1. The power of the jammer is calculated from
the user-specified jammer-to-signal ratio.

The jamming signal component at each element is defined as

n_Ji(t) = n_J(t) exp(jΩ_i)          (6.7)

where Ω_i is the phase of the jamming signal at element i. It is calculated in
the same manner previously discussed, using directional and element location
information.

6.3.3 Thermal Noise Model

In this study, thermal noise has been modeled by adding complex Gaussian
noise to each element. This can be expressed as

n_t(t) = √(P_n/2) [E(t) + j F(t)]          (6.8)

where E(t) and F(t) are zero-mean processes with a Gaussian distribution and
a variance of 1.

The power, P_n, is calculated from the user-specified signal-to-noise
ratio.
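The jammer and thermal-noise generators of equations 6.6 through 6.8 are equally simple; a sketch follows, with the J/S and S/N ratios shown as assumed example values:

    import numpy as np
    rng = np.random.default_rng()

    def complex_gaussian(power, size=None):
        """sqrt(P/2) * [E + jF] with E, F zero-mean, unit-variance Gaussian (eqs. 6.6, 6.8)."""
        return np.sqrt(power / 2) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

    Ps = 1.0                           # signal power reference (Section 6.3.1)
    Pj = Ps * 10 ** (20 / 10)          # jammer power for an assumed J/S of 20 dB
    Pn = Ps / 10 ** (10 / 10)          # thermal power for an assumed S/N of 10 dB

    omega_j = np.zeros(9)              # jammer phase at each element, computed as in eq. 6.5
    n_j = complex_gaussian(Pj)                  # common jammer waveform, equation 6.6
    n_ji = n_j * np.exp(1j * omega_j)           # per-element jammer component, equation 6.7
    n_t = complex_gaussian(Pn, size=9)          # independent thermal noise per element, eq. 6.8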

6.3.4 Correlation Matrices

In order to evaluate the performance using a maximum signal-to-noise


criterion, it is necessary to compute correlation matrices. These will now be
defined.

Recall that the desired signal can be expressed as

s_i(k) = A(k) √P_s exp(jΩ_i)

The signal correlation matrix can then be written as the N x N matrix whose
(i,k) entry is

[R_ss]_{i,k} = P_s e^{-j(Ω_i - Ω_k)}          (6.9)

so that every diagonal entry equals P_s. The noise correlation matrix can be
written as the sum of the individual noise matrices

R_nn = R_Jammer + R_THERMAL          (6.10)

where R_Jammer is formed from the jammer phases in the same way,

[R_Jammer]_{i,k} = P_J e^{-j(Ω_i - Ω_k)}          (6.11)

with Ω_i now the phase of the jamming signal at element i, and

R_THERMAL = σ_n² I          (6.12)

where σ_n² is the thermal noise power P_n. As previously mentioned, the
optimum SNR is given by

SNR_opt = s^T R_nn^{-1} s*

The current SNR can be written as

SNR = (w*^T R_ss w) / (w*^T R_nn w)

The optimal value is calculated first. The simulation is started and

adaptation begins. The SNR is periodically checked, and when it is


sufficiently close to the optimum value the simulation is stopped.
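A brief sketch showing how the correlation matrices and both SNR quantities can be formed from the element phases (single-jammer, single-path case for clarity; the thermal power Pn plays the role of sigma_n squared):

    import numpy as np

    def correlation_matrices(omega_s, omega_j, Ps, Pj, Pn):
        """R_ss (eq. 6.9) and R_nn = R_Jammer + R_THERMAL (eqs. 6.10-6.12)."""
        s0 = np.exp(1j * omega_s)
        j0 = np.exp(1j * omega_j)
        R_ss = Ps * np.outer(np.conj(s0), s0)          # entries Ps * exp(-j(Omega_i - Omega_k))
        R_nn = Pj * np.outer(np.conj(j0), j0) + Pn * np.eye(len(omega_s))
        return R_ss, R_nn

    def snr_optimum(omega_s, R_nn, Ps=1.0):
        """SNR_opt = s^T R_nn^{-1} s*, equation 6.3."""
        s = np.sqrt(Ps) * np.exp(1j * omega_s)
        return np.real(s @ np.linalg.solve(R_nn, np.conj(s)))

    def snr_current(w, R_ss, R_nn):
        """Equation 6.2: adapted-output signal power over noise-plus-interference power."""
        return np.real(np.conj(w) @ R_ss @ w) / np.real(np.conj(w) @ R_nn @ w)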

6.3.5 Simulation Program Operation

The flow chart of Figure 6.3 depicts the order in which the described
operations are performed. The following is a brief discussion of how the
program operates.

The simulation begins with the user specifying the defining parameters
such as the element locations, selected algorithm, arrival angles, and
relative signal strengths, among others. The program must then calculate the
optimum SNR for a particular channel realization. This value will be used to
determine when the algorithm has converged. The antenna weights are then
initialized and the adaptation process begins.

The desired signal and jamming signals are determined first using the
models described. These signals are then passed to the HF channel model.
These "channelized" signals are then passed to the antenna routine which
calculates the contributions of all signals at each antenna element. The
output of each sensor element is multiplied by its corresponding weight, and
these products are summed to form an overall array system output. This
overall output and the output of each element is then passed to the selected
algorithm. The algorithm uses this information to adjust the weights.

The weights are updated by the selected algorithm and the current SNR is
periodically computed. If this SNR is not close to the optimum value,
adaptation continues. If it is, however, the converged weights and the number
of convergence iterations are stored in a file. A new channel realization is

selected, the parameters are re-initialized, and the process is repeated.

When the algorithm has converged a sufficient number of times (user-


specified), overall statistics are gathered and the simulation is complete.

[Figure 6.3: flow of the simulation program. User parameter specification; calculation of R_ss, R_nn, and the optimum SNR; weight initialization; generation of the signals and passage through the HF channel; weight update by the selected algorithm; periodic comparison of the current SNR with the optimum; once converged, the weights and the number of iterations are stored, a new channel realization is selected, and the process repeats until the requested number of convergences is reached.]

Figure 6.3 Flow-diagram of Program Operation
7.0 SIMULATION RESULTS AND CONCLUSIONS

The purpose of this section is to illustrate the results that have been
obtained using the simulation model described in the previous section. From
these results, conclusions concerning the applicability of the particular

algorithm to the problem of interest can be drawn.

In all of the tests to be conducted, certain parameters will remain
constant. These include arrival angles and signal powers and are summarized
in Figure 7.1. For each test, the average number of required convergence
iterations will be given to demonstrate how quickly the algorithms approach
optimality in the maximum-SNR sense. Also, polar plots will be presented in
order to pictorially demonstrate the nulling capabilities of the algorithms.
These plots consist of two "slices" of each antenna pattern at the azimuth and
elevation angles for each incoming signal (see Figures 7.2 and 7.3 for a
definition of the geometry).

SIGNAL               AZIMUTHAL ANGLE    ELEVATION ANGLE    POWER
Desired signal            90.0               10.0            0 dB
Jammer 1                  90.0               80.0           20 dB
Jammer 2                 150.0               75.0           20 dB

S/N (thermal) = 10 dB; carrier frequency = 30 MHz
SENSOR ELEMENT LOCATIONS (x, y in meters)

(0,10)   (5,10)   (10,10)
(0,5)    (5,5)    (10,5)
(0,0)    (5,0)    (10,0)

Figure 7.1 Constant Test Parameters
Figure 7.2 Description of Elevation Plots

[Figure 7.3: companion definition of the azimuthal plot geometry]
7.1 Test 1: Negligible Channel Effects

It may be enlightening to first consider the performance of the algorithm


in an unperturbed channel environment. In this way, the degrading effects of
the HF channel can be monitored as well. Although it is not a terribly
realistic case, it may become so if some efficient channel compensation
technique could be employed. Recall that in this study the array model
consists of one weight per element. This allows for beam steering, but the
array cannot compensate for the smearing effects of the channel.

This case, although utilizing an ideal channel, is far from a trivial


one. In fact, simulation studies frequently limit themselves to an ideal
channel, because the task of nulling out strong jamming signals is a difficult
one even in this setting.

The results for all algorithms were averaged over 100 convergences. The
number of iterations for each convergence was recorded to produce the
histograms of Figures 7.4 through 7.6. In all cases, the SNR tolerance (the
proximity to the optimum SNR that determined convergence) was set to 1 dB. In
other words, the algorithms "converged" when the SNR was within 1 dB of the
optimum value. The results are tabulated in Table 7.1. The polar antenna
plots that follow verify that all of the algorithms did an excellent job in
placing nulls in the directions of the interferences.
Probably the most striking result obtained was the remarkably small number of
iterations required by the Update Covariance algorithm to converge. This
result can be misleading, however, due to the number of computations it
requires per iteration. Recall that the Update Covariance algorithm requires
about 5N² complex computations for each iteration, where N is the number of
antenna elements. The LMS and Constrained LMS algorithms require on the order
of 2N and 5N² computations per iteration, respectively. Therefore the
difference in the actual number of required computations is not that great.

The Constrained LMS algorithm converged the slowest of the three


algorithms. It is interesting to note that the weight changes of the LMS
algorithm approach zero as the weights approach their optimal value. This can
be seen from the LMS weight update equation listed below as

[Figure 7.4 Convergence Histogram of LMS Algorithm in Ideal Channel]

[Figure 7.5 Convergence Histogram of Constrained LMS Algorithm in Ideal Channel]

Figure 7.6 Convergence Histogram of Update Covariance Algorithm in Ideal Channel

Table 7.1 TEST1 Summary

Adaptive        Number of       Average Number    Standard
Algorithm       Convergences    of Iterations     Deviation
LMS                 100             318.0           103.5
Constr. LMS         100             536.5            81.4
Update Cov.         100               9.9             6.7
w(k+1) = w(k) + μ e(k) x*(k)

As the error e(k) decreases, so does the amount that the weights can change.

The weight changes of the Constrained LMS algorithm, however, will never
approach zero because the beam is constrained. Signal power will always
appear at the output, even if contributions of interferences are negligible.
Therefore, the weights will always change an appreciable amount, provided that
the signal is present:

w(k+1) = P [w(k) - μ y(k) x*(k)] + F

It therefore seems reasonable that the Constrained LMS algorithm will perform
better when the signal power is low or when it is absent altogether. This
illustrates an important advantage possessed by the Constrained LMS
algorithm. Unlike the others, the Constrained LMS algorithm could optimally
adjust the weights before actual signal transmission (including preamble)
begins. This is the major focus of TEST 4.

SUMMARY OF PLOTS FOR TEST1

Figure T1.1: Unadapted antenna plot at φ = 90°, θ = 80°
    Purpose: To indicate initial gain in direction of Jammer 1

Figure T1.2: LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T1.3: Constrained LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T1.4: Update Covariance-adapted antenna plot at φ = 90°, θ = 80°
    Purpose: To demonstrate nulling of Jammer 1 by each algorithm

Figure T1.5: Unadapted antenna plot at φ = 150°, θ = 75°
    Purpose: To indicate initial gain in direction of Jammer 2

Figure T1.6: LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T1.7: Constrained LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T1.8: Update Covariance-adapted antenna plot at φ = 150°, θ = 75°
    Purpose: To demonstrate nulling of Jammer 2 by each algorithm

[Polar antenna patterns for TEST1. Each figure consists of an elevation cut (Figure A) and an azimuthal cut (Figure B); arrows in the unadapted plots indicate the jammer arrival angles.]

Figure T1.1 Unadapted Antenna Pattern in Direction of Jammer 1
Figure T1.2 LMS-Adapted Pattern in Direction of Jammer 1
Figure T1.3 Constrained LMS-Adapted Pattern in Direction of Jammer 1
Figure T1.4 Update Covariance-Adapted Pattern in Direction of Jammer 1
Figure T1.5 Unadapted Antenna Pattern in Direction of Jammer 2
Figure T1.6 LMS-Adapted Pattern in Direction of Jammer 2
Figure T1.7 Constrained LMS-Adapted Pattern in Direction of Jammer 2
Figure T1.8 Update Covariance-Adapted Pattern in Direction of Jammer 2
7.2 Test 2: HF Channel With Moderate Delay Characteristics and Poor
Attenuation Characteristics

The second test represents a significant departure from an ideal
channel. The channel consists of three paths separated by 0.83333 msecs.
Each of the paths is given equal weighting. In other words, signal components
will not be attenuated more in one path than in another.

Once again, results for all algorithms were averaged over 100
convergences. Histograms depicting when the algorithms converged were
generated and given as Figures 7.7 - 7.9. The results are tabulated in Table
7.2, and as the plots will verify, all algorithms did an effective job in
nulling out the jammers.

The number of iterations required by the Update Covariance algorithm was
very small. In this case, however, it is interesting to note the difference
in actual computations between this algorithm and the LMS algorithm (9,846 for
LMS, 10,651 for Update Covariance). If the computations all required the same
amount of time to complete, the actual "convergence time" for the LMS would be
shorter. This is why it is important to take the computational complexity of
each iteration for the algorithms into account.
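The quoted totals follow directly from the per-iteration estimates of Section 5.3 and the average iteration counts of Table 7.2 (a back-of-the-envelope check; the small mismatch is rounding):

    N = 9                                   # elements in the test array
    lms_total = 547.0 * (2 * N)             # 9,846 complex computations
    updcov_total = 26.3 * (5 * N ** 2)      # 10,651.5, i.e. about 10,651
    print(lms_total, updcov_total)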

Once again, the Constrained LMS algorithm was the slowest in average
convergence. The results also indicate that the algorithm had a wide
fluctuation in the required number of iterations. As can be seen from Figure
7.8, the Constrained LMS algorithm had difficulty in attaining the "1 dB"
threshold once the SNR had approached the optimal value. This problem can
quite possibly be attributed to the weight update problem previously
discussed. Even after contributions of interferences were reduced from the
array output, the weights will change an appreciable amount due to the signal
power (which now may be magnified by the existence of three paths). Once
again, this will be studied further in TEST4.

As expected, the LMS algorithm also had more difficulty approaching an
optimal value in this test than in the previous one. It should also be noted
that in the event that the preamble was severely distorted by a channel,
convergence of any kind would be severely hampered. In other words, if the
received signal did not look much at all like the reference signal, the
-69- -

. ., .
.. I~.. ~ * * ., a - . . ~~~~~~~~~~~%
.. %-~
.14.

0~~~' a"noM lw S

a% W V

7AU
"0

-A +. ".°

I -

L

V
E
r

Ut

- A "

I.* .4

" '2 .,Figure 7.B Convergence Histogram of Constrained LMS Algorithm

. in Channel 2

%71
-4
-4

A-

-- 4$

4$

* 4,

ow~ a

p.

---

ft
C
I
ft
'A, T
44
U
C .1~
F -
1
C
ft
U
C
'I
C h.
V

a.
-A.
-4

(.

N,

N
0 Se I.. isa ~. ~.

* N.
4 iT~TZOs6

'-'A

-4
A--

S
A'

Figure 7.9 Convergence Histogram of Update Covariance Algorithm


4..

in Channel 2
-72-
4.

4.
'A,

0
-................. ~ -- -- *

- ~ ~ . - -
-. .-
-........... * .. -- -. . .V. ''A~" N: -4 ~,AA '~ 4% 'A -
4
'A ~~44'A~'A "'4 '4%
V 1

"S

I,'

Table 7.2 TEST2 Summary

Adaptive        Number of       Average Number    Standard
Algorithm       Convergences    of Iterations     Deviation
LMS                 100             547.0           171.8
Constr. LMS         100            1155              699
Update Cov.         100              26.3            38.3
algorithm would have an extremely difficult task. After all, algorithms
requiring references operate by forcing the array output to be equal to the
reference signal. If this is attained, it is assumed that contributions of
jammers are negligible. Consider the case, however, when the received signal
does not strongly resemble the reference signal. Even if the weights were
optimally adjusted, the output of the array would not look like the reference
signal, and the algorithm would continue to adjust the weights in an attempt
to force the output to be equal to the reference signal. In effect, it would
be trying to compensate for the channel effects, although it has no true means
of doing so. This is an inherent problem with algorithms that require
references and is worth mentioning. It is also worth mentioning that the LMS
algorithm had no problem converging in this case, and the channel produced
some fairly bad smearing effects. Therefore, it can be assumed that the
channel must get significantly worse than this to prevent convergence.

SUMMARY OF PLOTS FOR TEST2

Figure T2.1: Unadapted antenna plot at φ = 90°, θ = 80°
    Purpose: To indicate initial gain in direction of Jammer 1

Figure T2.2: LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T2.3: Constrained LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T2.4: Update Covariance-adapted antenna plot at φ = 90°, θ = 80°
    Purpose: To demonstrate nulling of Jammer 1 by each algorithm

Figure T2.5: Unadapted antenna plot at φ = 150°, θ = 75°
    Purpose: To indicate initial gain in direction of Jammer 2

Figure T2.6: LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T2.7: Constrained LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T2.8: Update Covariance-adapted antenna plot at φ = 150°, θ = 75°
    Purpose: To demonstrate nulling of Jammer 2 by each algorithm

[Polar antenna patterns for TEST2 (Channel 2). Each figure consists of an elevation cut (Figure A) and an azimuthal cut (Figure B).]

Figure T2.1 Unadapted Antenna Pattern in Direction of Jammer 1
Figure T2.2 LMS-Adapted Pattern in Direction of Jammer 1
Figure T2.3 Constrained LMS-Adapted Pattern in Direction of Jammer 1
Figure T2.4 Update Covariance-Adapted Pattern in Direction of Jammer 1
Figure T2.5 Unadapted Antenna Pattern in Direction of Jammer 2
Figure T2.6 LMS-Adapted Pattern in Direction of Jammer 2
Figure T2.7 Constrained LMS-Adapted Pattern in Direction of Jammer 2
Figure T2.8 Update Covariance-Adapted Pattern in Direction of Jammer 2
7.3 TEST3: HF Channel with Poor Delay Characteristics and Poor Attenuation

Characteristics

This test differs from TEST2 in that the delay between signals arriving
from different paths is now doubled to 1.6666 msecs.

Once again, results for all algorithms were averaged over 100
convergences. The convergences were monitored and histograms were generated
which indicate when the algorithms converged. The SNR tolerance was set to
1 dB for all trials. All of the algorithms placed deep nulls in the directions
of the jamming signals, as the plots that follow will verify. The results are
shown in Table 7.3.

The Update Covariance algorithm again required the fewest number of
iterations by a wide margin. It is also important to note that the LMS
algorithm actually required fewer computations on the average to converge.
Therefore, in real time, the LMS algorithm converged faster, under the
assumption that all computations take the same amount of time.

The Constrained LMS algorithm was the "loser" again, but the average
number of convergence iterations did not change significantly from TEST2.
This seems to indicate that the convergence difficulties are more related to
the magnification of the signal (the existence of three paths) than to the
delay between the paths.

The results indicate that the delay between the paths did not seriously
affect the performance of the LMS algorithm. Again, this fact suggests that
the existence of three distinct paths plays a more significant role in
affecting algorithm performance. The LMS algorithm still converged in a
modest number of iterations on the average.

Figure 7.10 Convergence Histogram of LMS Algorithm in Channel 3

Figure 7.11 Convergence Histogram of Constrained LMS Algorithm in Channel 3

[Figure 7.12 Convergence Histogram of Update Covariance Algorithm in Channel 3]
Table 7.3 TEST3 Summary

Adaptive        Number of       Average Number    Standard
Algorithm       Convergences    of Iterations     Deviation
LMS                 100             542.5           172.6
Constr. LMS         100            1143.5           542.6
Update Cov.         100              28.8            43.1
SUMMARY OF PLOTS FOR TEST3

Figure T3.1: Unadapted antenna plot at φ = 90°, θ = 80°
    Purpose: To indicate initial gain in direction of Jammer 1

Figure T3.2: LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T3.3: Constrained LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T3.4: Update Covariance-adapted antenna plot at φ = 90°, θ = 80°
    Purpose: To demonstrate nulling of Jammer 1 by each algorithm

Figure T3.5: Unadapted antenna plot at φ = 150°, θ = 75°
    Purpose: To indicate initial gain in direction of Jammer 2

Figure T3.6: LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T3.7: Constrained LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T3.8: Update Covariance-adapted antenna plot at φ = 150°, θ = 75°
    Purpose: To demonstrate nulling of Jammer 2 by each algorithm
[Polar antenna patterns for TEST3 (Channel 3). Each figure consists of an elevation cut (Figure A) and an azimuthal cut (Figure B).]

Figure T3.1 Unadapted Antenna Pattern in Direction of Jammer 1
Figure T3.2 LMS-Adapted Pattern in Direction of Jammer 1
Figure T3.3 Constrained LMS-Adapted Pattern in Direction of Jammer 1
Figure T3.4 Update Covariance-Adapted Pattern in Direction of Jammer 1
Figure T3.5 Unadapted Antenna Pattern in Direction of Jammer 2
Figure T3.6 LMS-Adapted Pattern in Direction of Jammer 2
Figure T3.7 Constrained LMS-Adapted Pattern in Direction of Jammer 2
Figure T3.8 Update Covariance-Adapted Pattern in Direction of Jammer 2
7.4 TEST4: Dependence of Convergence of Constrained LMS Algorithm on the
Presence of the Signal

It has been mentioned that the Constrained LMS algorithm will perform
better when the signal is absent than when it is present. In other words, if
the Constrained LMS algorithm were to be selected, sending a preamble would not
only be wasteful, it would be detrimental. Tests 1, 2 and 3 will now be
repeated for the Constrained LMS with the "desired" signal (preamble)
absent.

In order to perform this test, the simulation had to be effectively
"fooled." If the signal power is zero, the SNR is undefined. Therefore, the
optimum values will be calculated as if the signal were present (signal power =
1). In other words, the Constrained LMS algorithm will have to attain the
same value of SNR as it did before.

The results are tabulated in Table 7.4. The polar antenna plots that
follow verify that the Constrained LMS algorithm places deep nulls in the
directions of the jammers without the benefit of any preamble whatsoever. The
small average number of iterations required also validates the assumption that
the algorithm will perform better in the absence of the signal.

Notice that the average number of required iterations is now virtually
identical to that of the LMS algorithm. Also recall that the Constrained LMS
algorithm (with the signal present) suffered large fluctuations in required
iterations for channels 2 and 3. When the signal is absent, however, the
algorithm actually had the smallest percentage deviation of all the algorithms
tested. This is graphically illustrated by the histograms of Figures 7.13 -
7.15. This fact only strengthens the claim that the existence of the three
paths (and the signal magnification that results) is the primary culprit behind
the convergence problems suffered by the Constrained LMS algorithm. When this
effect was nullified by eliminating the signal, convergence was fairly rapid
and quite consistent.

Figure 7.13 Convergence Histogram of Constrained LMS Algorithm in Ideal Channel (no signal present)

Figure 7.14 Convergence Histogram of Constrained LMS Algorithm in Channel 2 (no signal present)

[Figure 7.15 Convergence Histogram of Constrained LMS Algorithm in Channel 3 (no signal present)]
Table 7.4 TEST4 Summary

Channel       Number of       Average Number    Standard
              Convergences    of Iterations     Deviation
Ideal              100             337.0            81.1
Channel 2          100             593.0            88.1
Channel 3          100             598.5           105.7
[Polar antenna patterns for TEST4 (Constrained LMS with no signal present). Each figure consists of an elevation cut (Figure A) and an azimuthal cut (Figure B).]

Figure T4.1 Unadapted Antenna Pattern in Direction of Jammer 1
Figure T4.2 Adapted Pattern in Direction of Jammer 1
Figure T4.3 Unadapted Antenna Pattern in Direction of Jammer 2
Figure T4.4 Adapted Pattern in Direction of Jammer 2
7.5 TEST5: Dependence of Adapted Antenna Pattern on Number of Required
Iterations

The histograms that have been presented graphically demonstrate that
quite often there is a considerable difference in the number of required
iterations for an algorithm to converge. The question may arise whether or
not the adapted antenna patterns produced by widely separated convergences are
significantly different.

The polar antenna plots that follow were generated using the fastest and
slowest convergences of the LMS algorithm of TEST2. It can be seen that the
antenna plots are virtually indistinguishable. This indicates that there is
not a marked difference in the final adapted antenna patterns produced by the
widely separated convergences. This is certainly not a surprising result, as
the performance measure that has been utilized requires that the pattern be
close to optimal before the algorithm is said to be converged.

[Adapted antenna patterns for the fastest and slowest LMS convergences of TEST2; each figure shows the fastest convergence as Figure A and the slowest as Figure B.]

Figure T5.1 Adapted Elevation Patterns at 90° Azimuth
Figure T5.2 Adapted Azimuthal Patterns at 80° Elevation
Figure T5.3 Adapted Elevation Patterns at 150° Azimuth
Figure T5.4 Adapted Azimuthal Patterns at 75° Elevation
TEST SUMMARY

For convenience, the results from Tests 1-4 are re-tabulated as shown in
Table 7.5. Also, Figures 7.16 - 7.19 track the performance of each algorithm
for identical convergences in each of the three channels tested.

Table 7.5 Comparison of Convergence Properties for the Adaptive Algorithms
(average number of iterations, with standard deviation in parentheses)

Adaptive Algorithm                Ideal Channel    Channel 2      Channel 3
LMS (with signal)                   318 (104)      547 (172)      543 (173)
Constr. LMS (with signal)           537 (81)      1155 (699)     1144 (543)
Constr. LMS (no signal present)     337 (81)       593 (88)       598 (106)
Update Covariance                    10 (7)         26 (38)        29 (43)

Figure 7.16 Convergence Summary for LMS Algorithm

[Figures 7.17 through 7.19: corresponding convergence summaries for the Constrained LMS algorithm (with and without the signal present) and the Update Covariance algorithm]
7.6 TEST6: Investigation of an Alternate Antenna Geometry

In an effort to understand the algorithm performance which results from a
change in the antenna geometry, Tests 2 and 3 were repeated for the geometry
shown in Figure 7.20. All other parameters from the tests were held fixed so
that any differences may be directly attributed to the new geometry.

This rhomboid pattern was chosen somewhat at random, and is not in any
way special. It was chosen simply to provide the simulation with a geometry
different from the rectangular array used elsewhere.

The antenna plots for this case do show a difference in shape as compared
to those for the rectangular geometry. This is simply due to the change in
the antenna array factor caused by the alternate geometry. Notice, however,
that the general trend of the patterns is the same for both geometries. We
see only slight variations in the patterns as functions of both the
controlling algorithm and the channel characteristics.


[Figure 7.20: alternate (rhomboid) antenna geometry. Sensor element locations (x, y in meters): (0,10) (5,10) (10,10); (5,5) (10,5) (15,5); (10,0) (15,0) (20,0).]

Figure 7.20 Alternate (Rhomboid) Antenna Geometry
Figure 7.21 Convergence Histogram of LMS Algorithm in Channel 2 for Rhomboid Geometry

Figure 7.22 Convergence Histogram of Constrained LMS Algorithm in Channel 2 for Rhomboid Geometry

[Figure 7.23 Convergence Histogram of Update Covariance Algorithm in Channel 2 for Rhomboid Geometry]

Figure 7.24 Convergence Histogram of LMS Algorithm in Channel 3 for Rhomboid Geometry

Figure 7.25 Convergence Histogram of Constrained LMS Algorithm in Channel 3 for Rhomboid Geometry

Figure 7.26 Convergence Histogram of Update Covariance Algorithm in Channel 3 for Rhomboid Geometry
SUMMARY OF PLOTS FOR TEST6 - RHOMBOID GEOMETRY

Figure T6.1: Unadapted antenna plot at φ = 90°, θ = 80°
    Purpose: To indicate initial gain in direction of Jammer 1

Figure T6.2: LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T6.3: Constrained LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T6.4: Update Covariance-adapted antenna plot at φ = 90°, θ = 80°
    Purpose: To demonstrate nulling of Jammer 1 (in Channel 2) by each algorithm with the new antenna geometry

Figure T6.5: LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T6.6: Constrained LMS-adapted antenna plot at φ = 90°, θ = 80°
Figure T6.7: Update Covariance-adapted antenna plot at φ = 90°, θ = 80°
    Purpose: To demonstrate nulling of Jammer 1 (in Channel 3) by each algorithm with the new antenna geometry

Figure T6.8: Unadapted antenna plot at φ = 150°, θ = 75°
    Purpose: To indicate initial gain in direction of Jammer 2

Figure T6.9: LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T6.10: Constrained LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T6.11: Update Covariance-adapted antenna plot at φ = 150°, θ = 75°
    Purpose: To demonstrate nulling of Jammer 2 (in Channel 2) by each algorithm with the new antenna geometry

Figure T6.12: LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T6.13: Constrained LMS-adapted antenna plot at φ = 150°, θ = 75°
Figure T6.14: Update Covariance-adapted antenna plot at φ = 150°, θ = 75°
    Purpose: To demonstrate nulling of Jammer 2 (in Channel 3) by each algorithm with the new antenna geometry

[Polar antenna patterns for TEST6 (rhomboid geometry). Each figure consists of an elevation cut (Figure A) and an azimuthal cut (Figure B).]

Figure T6.1 Unadapted Antenna Pattern in Direction of Jammer 1
Figure T6.2 LMS-Adapted Pattern in Direction of Jammer 1
Figure T6.3 Constrained LMS-Adapted Pattern in Direction of Jammer 1
Figure T6.4 Update Covariance-Adapted Pattern in Direction of Jammer 1
Figure T6.5 LMS-Adapted Pattern in Direction of Jammer 1
Figure T6.6 Constrained LMS-Adapted Pattern in Direction of Jammer 1
Figure T6.7 Update Covariance-Adapted Pattern in Direction of Jammer 1
Figure T6.8 Unadapted Antenna Pattern in Direction of Jammer 2
Figure T6.9 LMS-Adapted Pattern in Direction of Jammer 2
Figure T6.10 Constrained LMS-Adapted Pattern in Direction of Jammer 2
Figure T6.11 Update Covariance-Adapted Pattern in Direction of Jammer 2
Figure T6.12 LMS-Adapted Pattern in Direction of Jammer 2
Figure T6.13 Constrained LMS-Adapted Pattern in Direction of Jammer 2
Figure T6.14 Update Covariance-Adapted Pattern in Direction of Jammer 2
7.7 Conclusions and Recommendations

In all of the tests that were conducted, the Update Covariance algorithm consistently outperformed the others in terms of the number of iterations required for convergence. It has also been shown that this can be an extremely misleading quantity, as the algorithm may require more actual computations to converge. Although the Update Covariance algorithm will converge in fewer iterations, it will undoubtedly require significantly more time to complete each iteration. As an example, assume that it is desired to build an array consisting of 36 elements, and that the algorithm must converge on a preamble 100 msec in duration. The Update Covariance algorithm would then require N³ + 3N² + 3N, or approximately 50,652, complex computations for each iteration. If the algorithm needs 100 iterations to converge, a grand total of 5,065,200 complex computations must be completed in 100 msec. This allows approximately 19.74 nanoseconds to perform each complex computation. So despite the comparatively few iterations required, implementing the Update Covariance algorithm may place rigorous, if not unreasonable, demands on the hardware. Even if these demands can be met, it seems unreasonable to implement such a system when other algorithms can offer similar convergence times with much less complicated hardware. It was shown that the LMS algorithm could converge with fewer computations. And unlike the Update Covariance algorithm (which involves matrix arithmetic), the computations of the LMS algorithm can be performed simultaneously. In other words, the required 2N computations can be performed N at a time, as the update of one weight is independent of the update of another. The required time for each iteration is therefore equal to the time it takes to perform 2 complex computations. This means that the actual real-time convergence of the LMS algorithm is much less than that of the Update Covariance algorithm in all cases that have been presented.
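The timing arithmetic above is easy to reproduce. The sketch below is only a back-of-the-envelope check using the operation counts quoted in this section (N³ + 3N² + 3N complex operations per Update Covariance iteration, 2N per LMS iteration); it is not code from the simulation facility.

    # Back-of-the-envelope budget for the 36-element, 100 msec preamble example.
    N = 36                   # number of array elements
    iterations = 100         # assumed iterations to converge
    preamble = 100e-3        # preamble duration in seconds

    # Update Covariance: N**3 + 3*N**2 + 3*N complex operations per iteration,
    # executed serially because of the matrix arithmetic involved.
    uc_ops_per_iter = N**3 + 3 * N**2 + 3 * N          # 50,652
    uc_total_ops = uc_ops_per_iter * iterations        # 5,065,200
    uc_time_per_op = preamble / uc_total_ops           # about 19.74 ns

    # LMS: 2*N complex operations per iteration, but the N weight updates are
    # independent, so each iteration costs only the time of 2 operations.
    lms_ops_per_iter = 2 * N
    lms_effective_ops_per_iter = 2

    print("Update Covariance: %d ops/iteration, %.2f ns per complex operation"
          % (uc_ops_per_iter, uc_time_per_op * 1e9))
    print("LMS: %d ops/iteration (%d in parallel time)"
          % (lms_ops_per_iter, lms_effective_ops_per_iter))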

The LMS algorithm is not without its drawbacks, however, as has been mentioned. Probably the most notable is the reference requirement that has been discussed. The channel model in this simulation posed no prohibitive difficulties for the algorithm and, if it is, as believed, a good representation of what goes on in the HF channel, the reference requirement should not be a major obstacle. The reference signal generation will, however, require precise synchronization between the arrival of the transmitted preamble and the preamble used as the reference. This may be a difficult task. If it is not, the simulation study indicates that the LMS algorithm would be an effective choice.

The simulation results also indicate that if a preamble is to be sent, the Constrained LMS algorithm, although capable of meeting the convergence requirements, would not be the best choice in light of its performance relative to the LMS algorithm in the presence of a signal. It also requires more computations for each iteration, although many of these computations can be performed simultaneously as well.

The performance of the Constrained LMS algorithm in the absence of the signal, however, introduces many interesting possibilities. TEST4 demonstrated that if a preamble was not sent, the Constrained LMS algorithm would converge in approximately the same number of iterations as the LMS algorithm. Complicated synchronization and reference generation techniques could be avoided by simply not sending a preamble and instead initiating the algorithm prior to any transmission. It may also be that the transmitted data has naturally occurring "breaks" in it. During either natural or deliberate breaks, the algorithm could be re-initiated, minimizing the effects of interference. This would require some type of directional power sensing device. There are many interesting possibilities that could and should be considered. If the preamble and related synchronization devices are being generated solely for the benefit of the algorithm, one would have to consider the advantages of implementing the Constrained LMS algorithm and dispensing with the preamble. This would not seriously degrade the convergence performance, as evidenced by TEST4, and could possibly simplify the overall system.

APPENDIX A

COMPLEX LOWPASS EQUIVALENT REPRESENTATION

Digital information signals are often transmitted using some type of


carrier modulation. Signals which have a bandwidth that is much smaller than
the carrier frequency are known as narrowband bandpass signals. For
convenience, it is desirable to reduce the bandpass signals to equivalent
lowpass signals. The carrier component can be dispensed with because it
carries no information.

A real-valued signal s(t) whose frequency content is concentrated in a narrow band of frequencies about the carrier frequency f_c can be expressed as

    s(t) = a(t) cos[2πf_c t + φ(t)]

where

    a(t) = amplitude of s(t)
    φ(t) = phase of s(t)

By expanding the cosine function in the above expression, a second representation is obtained. This is written as

    s(t) = a(t) cos φ(t) cos 2πf_c t - a(t) sin φ(t) sin 2πf_c t
         = I(t) cos 2πf_c t - Q(t) sin 2πf_c t

where

    I(t) = a(t) cos φ(t)
    Q(t) = a(t) sin φ(t)

The frequency content of I(t) and Q(t) is concentrated at low frequencies. These are lowpass signals.

Finally, define the complex envelope u(t) as

    u(t) = I(t) + jQ(t)

so that

    s(t) = Re[u(t) exp(j2πf_c t)]

Therefore, a real bandpass signal s(t) is completely specified by its complex envelope, u(t), if its carrier frequency is known.
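A quick numerical check of the representation above is shown below. The tone parameters are arbitrary choices for illustration, not values from the report; the snippet simply confirms that building s(t) from the complex envelope reproduces the direct amplitude/phase form.

    import numpy as np

    fc = 1.8e6                       # example carrier frequency, Hz
    fs = 20e6                        # example sample rate, Hz
    t = np.arange(0, 1e-4, 1 / fs)   # time axis

    a = 1.0 + 0.3 * np.cos(2 * np.pi * 1e3 * t)   # amplitude a(t)
    phi = 0.5 * np.sin(2 * np.pi * 2e3 * t)       # phase phi(t)

    # Bandpass signal and its complex lowpass equivalent u(t) = I(t) + jQ(t)
    s_bandpass = a * np.cos(2 * np.pi * fc * t + phi)
    u = a * np.cos(phi) + 1j * a * np.sin(phi)
    s_from_envelope = np.real(u * np.exp(1j * 2 * np.pi * fc * t))

    # The two constructions agree to machine precision.
    print(np.max(np.abs(s_bandpass - s_from_envelope)))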

APPENDIX B

USER MANUAL FOR HF ADAPTIVE ANTENNA ARRAY EVALUATION FACILITY

The following is a brief description of the parameters and procedures


needed to operate the simulation. The operations are divided into an input
and an output phase, and will be described separately.

B1.0 Input Procedures

As shown in Figure B-1, the input phase is initiated with the command @INPUT. This command file asks the user to input the name of the experiment to be performed, and in this case we have chosen TEST1. The result of this is the creation of a new subdirectory which is given the name EXPERIMENT_TEST1. This subdirectory, then, will provide a location for the simulation run, and will contain all important output files produced. This, however, is completely transparent to the user.

Again referring to Figure B-1, we see that as soon as the user determines the name of the experiment, he or she is immediately introduced to the first menu of the actual input program. It is this routine which creates the data file which is subsequently read by the simulation mainline. Before explaining the actual variables appearing in this and the remaining menus, we first consider the methods by which variables are input and new menus acquired.

Concentrating again on the first menu, we see that 4 variables appear (numbered 1 through 4). Assume we wish to change the number of desired convergences from the default value of 5 to a value of 20. As shown in the figure, this is accomplished simply by typing the variable index (4) followed by a space and the new value (20). The program then returns the instruction <NEXT ENTRY PLEASE (0 TO REVIEW PAGE, -1 TO LEAVE PAGE)> and, as shown, a 0 redisplays the menu with the new value inserted. A -1 at this point (instead of a 0) simply moves to the next menu without displaying the change. Of course, if it is desired to change nothing, it is possible to traverse the entire program simply by typing a -1 after each menu. The data file, then, will simply contain the default values. It should be noted that this program continuously updates the default file. That is to say, if a variable is changed on a run through the input program, it becomes the new default value for the next run.

Now that we may move through the input program, the variables themselves will be explained. It will be expedient to cover the variables one menu at a time since, in many cases, the variables within a menu are closely related. For the following discussion, refer to Figures B-1 through B-10.

B1.1 Menu 1: Simulation Parameters

These parameters are general and serve to set up the simulation at its
most basic level. Here, many of the variables are self-explanatory.

1. Number of samples per bit. Sets up the sample rate (see the short sketch following this list).

2. Signal to Noise Ratio (dB). This is the thermal S/N ratio which is used to simulate random electrical noise.

3. Simulation Bit Rate. Used to determine the Nyquist bandwidth of the information signal.

4. Number of simulation convergences desired. To arrive at a statistical evaluation of the simulation, the system is forced to converge a number of times. This variable simply determines the number of loops (value = 100).

B1.2 Menu 2: Adaptive Algorithm Options

Here the user is asked to choose the type of algorithm which will control the simulation.

1. Least Mean Squared Algorithm

2. Constrained Least Mean Squared Algorithm

3. Update Covariance Algorithm
B1.3 Menu 3: Antenna Array Parameters

This menu gives the user several options to construct the antenna array to be used in the simulation. The array elements are dipole antennas (whip above ground).

1. Number of antenna elements (maximum = 9 elements)

2. Number of incoming signals. This parameter is simply the total number of impinging signals, including both the desired signal as well as one or two jammers.

3. Dipole equivalent element length (meters). The length of an equivalent dipole has been assumed to be twice the length of the actual whip antennas.

4. Carrier frequency of friendly communicator (kHz). This is simply the frequency, in kilohertz, of the desired signal. The frequency is used to determine the receiving characteristics of the dipoles.

B1.4 Menu 4: Jammer to Signal Ratios (dB)

Menu 4 is variable in length as dictated by the number of signals entered earlier. Each jammer to signal ratio may be specified independently and is used to adjust the corresponding jammer power.
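For reference, a jammer-to-signal ratio in dB maps to a linear power scale factor in the usual way. The fragment below is only an illustration of that conversion; it is not the scaling code used inside the simulation.

    import math

    def jammer_power_scale(js_ratio_db):
        # Linear power scale factor corresponding to a J/S ratio in dB.
        return 10.0 ** (js_ratio_db / 10.0)

    # A 20 dB jammer-to-signal ratio means the jammer carries 100 times the
    # desired signal's power; the amplitude scale is the square root of that.
    power_scale = jammer_power_scale(20.0)        # 100.0
    amplitude_scale = math.sqrt(power_scale)      # 10.0
    print(power_scale, amplitude_scale)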

B1.5 Menu 5: Relative Antenna Element Locations (meters)

Here, the user is allowed to define the actual geometry of the array by specifying the x and y coordinates of each element. This menu, again, is variable in length depending on the number of elements entered earlier.

B1.6 Menu 6: Azimuth (PHI) and Elevation (THETA) Coordinates of Incoming Signals

The user is prompted for the arrival angles of the incoming signals (in degrees). It is always assumed that the friendly communicator is signal #1. The azimuth and elevation angles are specified in terms of the spherical coordinates PHI and THETA. PHI determines the azimuthal coordinate and is defined to be 0 degrees on the x-axis, increasing toward the y-axis. THETA, on the other hand, determines the elevation and is defined to be 0 degrees on the z-axis, increasing toward the x-y plane. Note that, in our case, only values of THETA in the range from 0° to 90° are meaningful, as this range corresponds to the space above ground.
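The PHI/THETA convention just described matches the standard spherical-coordinate mapping to Cartesian directions. The short sketch below is offered only as an aid for checking geometries by hand; it is not part of the simulation facility.

    import math

    def arrival_direction(phi_deg, theta_deg):
        # Unit direction vector for an arrival angle in the report's convention:
        # PHI measured from the x-axis toward the y-axis (azimuth),
        # THETA measured from the z-axis toward the x-y plane (elevation).
        phi = math.radians(phi_deg)
        theta = math.radians(theta_deg)
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))

    # Jammer 1 in TEST6 arrives from PHI = 90 degrees, THETA = 80 degrees.
    print(arrival_direction(90.0, 80.0))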

B1.7 Menu 7: HF Channel Parameters

This is just the number of different paths to be simulated with the HF channel. It essentially chooses the number of taps in the tapped delay line model employed by the channel.

B1.8 Menu 8: Delays of Each Propagation Mode (ms)

Menu 8 is variable in length depending on the number of channel modes entered in Menu 7. Here the delay of each path is given in milliseconds and is used in the HF channel model to simulate dispersiveness. Note that the first delay is assumed to be zero, and the others are simply relative to the first. Also, it should be noted that the path delays should be integer multiples of (1/Nyquist rate). This is necessary, again, due to the implementation of the HF channel model.
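Because the channel is realized as a tapped delay line, each path delay must land on a tap. The check below illustrates the integer-multiple constraint stated above; the tap spacing value is only an example, since the facility derives it from the bit rate entered in Menu 1.

    # Verify that candidate relative path delays fall on tap spacings of the
    # tapped delay line, i.e. are integer multiples of (1 / Nyquist rate).
    tap_spacing_ms = 3.333                    # example tap spacing in milliseconds
    delays_ms = [0.0, 3.333, 6.667, 5.0]      # candidate relative path delays

    for d in delays_ms:
        taps = d / tap_spacing_ms
        acceptable = abs(taps - round(taps)) < 1e-2
        print("delay %6.3f ms -> %6.3f taps, acceptable: %s" % (d, taps, acceptable))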

B1.9 Menu 9: Attenuation of Each Propagation Mode (dB)

The length of this menu is determined, again, by the number of modes. The HF channel model contains multipliers which allow signals emerging from different paths to be attenuated separately. This menu lets the user assign a different attenuation to each path.

"N B.10 Menu 10: Adaptive Algorithm Convergence Constant

The convergence constant is used by the LMS and constrained LMS r


algorithms to dictate the update step sizes as convergence is taking place.
This constant is quite crucial to the convergence time and overall character

of the convergence, but is also very difficult to obtain. For the most part,
6 only a trial and error type search will produce the optimum value. A rough

estimated value is calculated by the program, but it should not be trusted too
far. The default value may be repeatedly changed until the simulation model
is executed. Note that only 5 decimal places are provided by the input

g program. The convergence value may be specified out to 9 decimal locations,

but unfortunately only 5 are displayed. The actual value may be viewed simply
* by listing the parameter file.
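As a starting point for the trial-and-error search recommended above, a common rule of thumb bounds the LMS step size by a small fraction of the reciprocal of the total input power. The sketch below expresses that rule; it is only a heuristic and is not the estimate computed by the input program.

    import numpy as np

    def step_size_guess(snapshots, safety=0.1):
        # Rule-of-thumb LMS step size: a small fraction of 1 / tr(R), where R is
        # the sample covariance of the array snapshots
        # (rows = time samples, columns = array elements).
        r = snapshots.conj().T @ snapshots / snapshots.shape[0]
        total_power = np.real(np.trace(r))
        return safety / total_power

    # Example with random complex snapshots for a 5-element array.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((1000, 5)) + 1j * rng.standard_normal((1000, 5))
    print(step_size_guess(x))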

At this point, as can be seen from Figure B-10, once the convergence constant is specified, the input program automatically submits the simulation in batch. The important files generated by the mainline are stored, as was mentioned earlier, in the user-defined directory EXPERIMENT_TEST1. When the simulation is complete, it is then possible to use the output programs to view the resulting data. The output phase is discussed next.

B2.0 Output Procedures

The purpose of this section is to explain the use of the output programs, which allow a review of both the input parameters and the results of any test.

To invoke the output procedures, simply type @OUTPUT while stationed at a graphics terminal. The program responds with the question, "For which experiment do you wish to see the output?" After the response is given, which in our case is TEST1, the program returns the menu shown in Figure B-11. We see that there are 8 choices of output, the first 3 simply being a review of the input parameters of the test, while the remaining choices are actual data output from the run. To explain the use of this menu, we will step through it one option at a time.

B2.1 Option 1: Review Antenna Parameters

To invoke this or any other option, simply type the corresponding number. Thus, in this case, after a 1 is typed, a screen similar to Figure B-12 will appear. This screen reminds us of the antenna geometry which was used in the test, as well as the carrier frequency and equivalent dipole lengths of the antennas. Also displayed are the amplitude and phase distributions for each element resulting from the adaptation. To return to the main menu from this option, simply type <return>, and Figure B-11 will again appear.

B2.2 Option 2: Review Channel Information

The second screen which may be viewed, shown in Figure B-13, is also a review of parameters, but here it is the channel information which is displayed. Several other input quantities are also listed such as data rate, type of algorithm, number of convergences, and the convergence constant. Again, to return to the main menu, just type <return>.

B2.3 Option 3: Review Incoming Signal Information

As shown in Figure B-14, this screen lets the user review the arrival angles of the friendly signal as well as those of the jammers. Also, the jammer to signal ratios are displayed for each of the jammers.

B2.4 Option 4: View Calculated Results

This screen, arrived at by typing 4 at the main menu level, is shown in Figure B-15. This is the first screen that actually returns data from the simulation run. The values which appear here are the average number of iterations to converge, as well as the average final signal/interference ratio. The variance of each of these quantities is also given, to show the user the amount of spread in the resulting values.

B2.5 Option 5: View Antenna Plot

Typing 5 at the main menu level allows the user to examine the antenna plots which have resulted from the simulation. This is probably the most useful portion of the output program, as it allows immediate confirmation of the algorithm performance. When Option 5 is chosen in the main menu, a new menu appears, as shown in Figure B-16. With this menu at hand, the user is given the capability of examining any cut of the antenna pattern simply by changing the menu entries. To better understand the capabilities, we will step through each option of the sub-menu.

B2.5.1 Sub-Option 1: 1) Min. Convergence Time 2) Avg. 3) Max.

When the simulation is finished, the convergence iterations of all convergences are examined and the corresponding antenna weights of the slowest, average, and fastest convergences are written to a file. With this option, it is possible to choose which set of weights will be used in computation of the antenna patterns. These are very similar, however, and usually indistinguishable in graph form. The default value of this parameter is set to the average.

B2.5.2 Sub-Option 2: Fixed Angle PHI for Elevation Plot

Here, the user is allowed to choose a fixed azimuth angle in order to produce an elevation plot. In other words, PHI is held fixed and THETA is allowed to vary over the range 0 to 180 degrees. For example, if PHI is equal to 0, then the resulting elevation plot will exist in the x-z plane.

B2.5.3 Sub-Option 3: Fixed Angle THETA for Azimuth Plot

This option is very similar to the previous one in this menu, but here it is THETA which we fix instead of PHI. This value is used to produce an azimuth plot at some fixed elevation. For example, if THETA is 90 degrees, the resulting azimuth plot would exist in the x-y plane. For values other than 90 degrees, it should be realized that the resulting pattern does not exist in a plane, but is simply the projection onto a plane of the field values.

B2.5.4 Sub-Option 4: Slice Type 1) Elevation 2) Azimuth

Finally, the user must make a choice to see either an elevation or an azimuth plot. If one chooses, for example, to see an elevation plot, the PHI coordinate is set to the value dictated by Option 2, while THETA is varied to form the pattern. Thus, once elevation or azimuth is chosen, either the Option 2 or the Option 3 value (respectively) is used; never both.

To actually see an antenna plot, this menu is exited by typing -1, as in the other menu programs. For the default values of Figure B-16, this produces the plot shown in Figure B-17. Note that for both azimuth and elevation, 0 degrees is to the right on the plot. Thus for an azimuthal plot, right implies the x-axis, while for elevation, it implies THETA = 0 (z-axis).

At the bottom of the screen on each antenna plot, the program asks whether one wishes to see more antenna plots. If the response is yes, then control is returned to the menu of Figure B-16. On the other hand, if the answer is no, the user is returned to the main menu (Figure B-11).

B2.6 Option 6: View Histogram of Convergence Counts

Upon choosing this option, one is able to look at the distribution of convergence iterations in histogram form. This is shown in Figure B-18. The bins are fixed in size and equal to 50. This value works very well for the LMS and Constrained LMS, but is slightly large for the Update Covariance algorithm. The fixed value of 50 was chosen simply to provide ease of use.

B2.7 Option 7: View S/N Ratio vs. Time Plot

This option produces a plot of the signal to noise ratio versus the number of iterations. An example of this is shown in Figure B-19. This plot is always stopped at 6250 iterations in order to display a reasonable number of convergences while still providing an acceptable resolution. The purpose of the graph is simply to show the convergence characteristics of the test.

B2.8 Option 8: View Error Signal vs. Time Plot

The final available plot is shown in Figure B-20, and represents the magnitude of the error signal used by the LMS algorithm for adaptation. This option may only be invoked if the controlling algorithm of the test is the LMS, however. It is the only algorithm which makes use of an error signal, and consequently the mainline only produces the output file for that case. This plot is also stopped at 6250 iterations to get a reasonable number of convergences while still providing space for clarity.

[Terminal screen captures of the input and output programs, Figures B-1 through B-20; only the figure captions are reproduced here.]

Figure B-1 Menu #1 of input (Simulation Parameters)
Figure B-2 Menu #2 of input (Adaptive Algorithm Options)
Figure B-3 Menu #3 of input (Antenna Array Parameters)
Figure B-4 Menu #4 of input (Jammer to Signal Ratios)
Figure B-5 Menu #5 of input (Relative Antenna Element Locations)
Figure B-6 Menu #6 of input (Azimuth and Elevation Coordinates of Incoming Signals)
Figure B-7 Menu #7 of input (HF Channel Parameters)
Figure B-8 Menu #8 of input (Delays of Each Propagation Mode)
Figure B-9 Menu #9 of input (Attenuation of Each Propagation Mode)
Figure B-10 Menu #10 of input (Adaptive Algorithm Convergence Constant)
Figure B-11 Main menu of output
Figure B-12 Antenna parameters
Figure B-13 Channel and other assorted information
Figure B-14 Signal information
Figure B-15 Calculated results of output
Figure B-16 Antenna pattern menu
Figure B-17 Antenna plot from Menu 5
Figure B-18 Convergence histogram
Figure B-19 Signal to noise plot
Figure B-20 Error signal plot