1. Introduction
Localization is a fundamental issue for many applications, such as radar detection [1], acoustic sensing [2], and wireless communication [3]. This paper considers the problem of using spatially distributed passive antenna arrays to locate an unknown number of emitters in the presence of sensor errors. Passive joint emitter localization has received much attention and is based on parameters including the angle of arrival (AOA) [4], time of arrival (TOA) [5], and time difference of arrival (TDOA) [6].
In practice, mobile and/or loosely synchronized systems are often used. We assume that emitters and sensors are non-collaborative and, hence, TOA information is absent. We also assume that sensor arrays are loosely synchronized in the sense that the synchronization error is much smaller than the time scale needed for relevant movements of emitters and sensors, i.e., the locations of emitters and sensors remain unchanged during the measuring process, but can be larger than that required for accurate TDOA measurements. Hence, we focus on using AOA information for localization.
It is widely accepted that sensor errors can have a large impact on localization performance. Many studies [7,8,9,10,11,12,13,14,15,16,17] have considered joint localization and calibration. The focus of calibration is on the non-ideal characteristics of a single antenna array, including mutual coupling and gain and phase errors across antenna elements (antenna element placement errors can be modeled as element phase errors). Notably, these errors are typically caused by hardware imperfections, remain constant over the long term, and typically do not require online calibration during operation.
In this work, we focus on possible errors in the array directions. We assume that each sensor array in the system is well calibrated before operation so that mutual coupling, gain, and phase errors of antenna elements are negligible [18,19]. However, operating conditions can cause errors in the assumed array directions, for example, a sharp turn of an autonomous vehicle [20], the unstable motion of aerial vehicles [21], the drifting of buoys [22], strong winds, and other unpredictable changes in the operating environment. Therefore, joint localization and calibration with array direction errors is significant for source localization on mobile platforms, such as the internet of vehicles, shipborne radars, and underwater sonars. To the best of our knowledge, this problem has not been formally studied in the literature despite its clear applicability.
In many early works, localization was achieved using a two-step procedure: first, some intermediate parameters were estimated, e.g., AOA, TOA, and/or TDOA; then, emitters were located by fusing these parameters. Following this two-step procedure, self-calibration can be performed either in the first step [7,8,9] or in the second step [10,11,12]. Two issues limit performance: data association and error propagation. For example, consider localization using AOA information. Each sensor acquires multiple AOA parameters, which must be associated with the multiple emitters and then fused to calculate the emitter locations. It is well known that data association is NP-hard in general [23], and errors in parameter estimation and/or data association can severely degrade the localization performance.
To address the limitations of the two-step procedure, direct localization [24,25,26] has become the main approach in more recent works. The key idea is to build a mapping directly from locations to sensor measurements and then to search for the locations that best match the measured data. In the context of joint localization and calibration, the mapping from locations to measurements is an operator specified by the parameters to be calibrated [13,14,15,16,17]. In [14,15,16], the authors studied the problem of positioning a single emitter with multiple snapshots. The authors of [17] studied the array calibration and direction estimation problems from a Bayesian perspective and provided a unified method to handle the single-array errors of mutual coupling, gain, and phase. Recent work [13] extends the localization model to multiple emitters using a single moving array with gain and phase errors.
In this paper, we also follow the idea of direct localization. The main contributions are as follows:
The signal model is built on a gridless mapping directly from emitter locations and array directional errors to received signals. Many localization [17] or imaging [27,28,29] methods discretize the parameter space into grids and assume that the true parameters lie on the grid. However, in our setting, multiple arrays are involved, and there is nearly always a grid mismatch problem. For example, consider a system with three arrays positioned at different locations in different directions. The intersection of the discrete (angular) grids of any two arrays is a set of discrete, irregular points, most of which are off the grid of the third array. This grid mismatch can result in severe performance degradation [30]. To avoid this problem, both emitter locations and array directional errors are modeled as real numbers (hence, the parameter space is continuous) in our work. Similar modeling for joint localization and calibration also appeared in recent work [13].
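The grid mismatch described above is easy to observe numerically. The following Python sketch (the geometry and 1-degree grid spacing are illustrative choices of ours, not values from the paper) intersects the angular grid lines of two sensors and measures how far those intersection points fall from the angular grid of a third sensor:

```python
import numpy as np

# Hypothetical geometry: three sensors, each with a uniform 1-degree angular grid.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
grid = np.deg2rad(np.arange(0.0, 180.0, 1.0))  # angular grid of each array

def intersect(p1, a1, p2, a2):
    """Intersection of the lines p1 + t*u(a1) and p2 + s*u(a2)."""
    u1 = np.array([np.cos(a1), np.sin(a1)])
    u2 = np.array([np.cos(a2), np.sin(a2)])
    A = np.column_stack([u1, -u2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # (nearly) parallel lines
    t, s = np.linalg.solve(A, p2 - p1)
    return p1 + t * u1

# Angles at which sensor 3 sees grid-line intersections of sensors 1 and 2:
rng = np.random.default_rng(0)
offsets = []
for a1 in rng.choice(grid, 50):
    for a2 in rng.choice(grid, 50):
        pt = intersect(sensors[0], a1, sensors[1], a2)
        if pt is None:
            continue
        a3 = np.arctan2(pt[1] - sensors[2][1], pt[0] - sensors[2][0]) % np.pi
        offsets.append(np.min(np.abs(a3 - grid)))  # distance to sensor 3's grid

# Almost all intersections fall strictly between sensor 3's grid lines.
print(np.mean(np.array(offsets) > 1e-6))
```

The printed fraction is essentially 1: the intersection points of the first two grids almost never land on the third grid, which is the mismatch that motivates the continuous (gridless) parameterization.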
Atomic norm minimization (ANM) [31,32] is used to estimate the unknown number of emitters. The majority of existing works assume prior knowledge of the number of emitters; in practice, such prior knowledge may not always be available. To handle this challenge, it is reasonable to assume that the number of emitters is relatively small, yielding a sparse model. The atomic norm can be viewed as a generalization of the ℓ1-norm and promotes sparsity in a continuous parameter space.
We apply group sparsity [33] to enforce geometric consistency. The feasibility of exploiting group sparsity lies in the fact that we consider multiple-measurement scenarios with a small number of emitters, where sparsity is generally exploited for performance improvement. Note that there are multiple sensor arrays for joint localization, and the received signals from the same emitter should be geometrically consistent if these sensor arrays are perfectly calibrated. This means that source signals that originate from the same position but are received at different sensor arrays should be either all zero (no emitter at the location) or simultaneously nonzero (an emitter at the location). This consistency is naturally described by group sparsity.
Group sparsity helps eliminate 'ghost' emitter locations resulting from calibration errors. Due to calibration errors, source signals from the same location may be treated as coming from different locations from the viewpoints of different antenna arrays. Group sparsity promotes consensus on the locations where signal components originate and eliminates possible 'ghost' locations caused by calibration errors.
In this work, we apply the concept of group sparsity to source signal locations. This is similar to, but different from, its common usage in the multiple measurement vector (MMV) model. In a typical MMV model, the multiple measurement vectors, which are linear maps of the sources, share the same sparsity pattern across directions. Here, locations rather than directions (as seen from different arrays) share the same sparsity pattern, and the mappings from locations to measurement vectors are nonlinear. In fact, our model and simulations show that our scheme works in the single-snapshot case, where there is only a single measurement vector at each sensor array.
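A common way to encode this kind of group sparsity is the mixed ℓ2,1 norm: the coefficients of one candidate location across all M arrays form a group, and the penalty sums the ℓ2 norms of the groups. A minimal sketch (the matrix sizes and values are illustrative, not from the paper):

```python
import numpy as np

def group_l21_norm(C):
    """Mixed l2,1 norm of a coefficient matrix C (locations x arrays).

    Each row collects the coefficients of one candidate location across
    all arrays; the penalty sums the rows' l2 norms, so a row tends to be
    either entirely zero (no emitter) or entirely active (an emitter).
    """
    return np.sum(np.linalg.norm(C, axis=1))

# Two candidate locations observed by three arrays.
C = np.array([[3.0 + 4.0j, 0.0, 0.0],   # group energy concentrated in one array
              [1.0 + 0.0j, 2.0, 2.0]])  # group energy spread across arrays
print(group_l21_norm(C))  # 5.0 + 3.0 = 8.0
```

Minimizing this penalty drives whole rows (locations) to zero while leaving the distribution of energy inside an active group unconstrained, which matches the incoherent amplitudes across arrays.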
A tailored optimization procedure is developed for joint localization and calibration. There are several typical approaches for solving an ANM problem in a continuous parameter space. When the atomic norm is defined on the set of steering vectors, it is possible to reformulate ANM as a semi-definite programming (SDP) problem [32,34] and solve it efficiently using off-the-shelf convex toolboxes, such as CVX [35]. In the more general case, the alternating descent conditional gradient (ADCG) method [36] can be adopted.
The optimization problem for joint localization and calibration has a different nature. For a given calibration error, the problem is a standard ANM problem and can be solved by ADCG. However, in the problem at hand, the optimization is also with respect to the calibration error. As a consequence, we designed our own optimization procedure, inspired by the spirit of ADCG but with significant modifications to simplify the process. Specifically, the optimization alternates between updating the source signal locations and coefficients with the calibration errors fixed, and updating the calibration errors with the locations fixed. The objective function decreases monotonically during the optimization and, hence, convergence is guaranteed.
Other results presented in this paper include (1) simulations verifying the effectiveness of the proposed scheme; (2) the Cramér-Rao lower bound, which we numerically compare with the simulation results; and (3) a necessary condition that serves as a rule of thumb for deciding the feasibility of joint localization and calibration.
The rest of the paper is organized as follows. Section 2 describes the signal model. Section 3 derives a necessary condition for the feasibility of joint localization and calibration. Section 4 briefly reviews the basic concepts of group sparse recovery and ANM, followed by an optimization formulation of the joint localization and calibration problem. Section 5 develops a tailored optimization method to solve the formulated optimization problem with a convergence guarantee. Numerical simulation results are shown in Section 6. We draw our conclusions in Section 7.
Notations used in this paper are as follows. We use ℝ and ℂ to denote the sets of real and complex numbers, respectively. Re(·) and Im(·) denote the real and imaginary parts of a complex-valued argument. The amplitude of a scalar a is represented by |a|. The largest integer not exceeding the real number x is written as ⌊x⌋. Uppercase boldface letters denote matrices (e.g., A) and lowercase boldface letters denote vectors (e.g., a). The element of matrix A in the m-th row and n-th column is written as [A]_{m,n}, and [a]_n denotes the n-th entry of vector a. The complex conjugate, transpose, and complex conjugate-transpose operators are denoted by (·)*, (·)^T, and (·)^H, respectively. The Hadamard product is denoted by ⊙ and matrix column vectorization by vec(·). We use ‖·‖ for the norm of an argument. The imaginary unit for complex numbers is denoted by j.
2. Signal Model
In this section, the signal model of passive joint emitter localization with array directional errors is given.
We consider a two-dimensional (2D) plane, denoted by ℝ². Assume that there are L static emitters closely spaced near the origin point O, continuously transmitting signals at frequency f = c/λ, where c denotes the speed of light and λ denotes the wavelength. We denote the l-th emitter position by p_l ∈ 𝒫, where 𝒫 ⊂ ℝ² denotes the region of emitter positions. There are M sensors with passive uniform linear arrays (ULAs) receiving the transmitted signals from the emitters. We assume that the manifold of each array is exactly known and there is no array element location perturbation; such errors are usually calibrated by offline or online [37] methods. Here, we use ULAs for simplicity. Note that our optimization formulation and method (developed in Sections 4 and 5, respectively) do not rely on structured matrices tied to uniform arrays; they can be extended to other array geometries. Denote by N_m the number of array elements and by d_m the interval between adjacent elements in the m-th sensor. We use q_m and φ_m to represent the position of the reference element of the m-th sensor and the angle of the m-th array surface versus the X axis, respectively, which determine the deployment of the array. The angle of the l-th emitter w.r.t. the normal direction of the m-th array, i.e., the AOA, is denoted by θ_{m,l} and expressed as

  θ_{m,l} = ∠(p_l − q_m) − φ_m − π/2,  (1)

for l = 1, …, L and m = 1, …, M, where ∠(·) denotes the angle of a 2D vector versus the X axis.
In practice, perturbations in the sensor deployment are inevitable, so the actual array directions φ_m differ from their pre-assumed counterparts, denoted by φ̂_m. We denote the array directional errors by η_m = φ_m − φ̂_m ∈ Ω, where Ω is a rough range of η_m. The AOA measurements, which are crucial in passive joint emitter localization, are affected by the array directional errors. In particular, in terms of the pre-assumed direction of the m-th sensor, the AOA of the l-th emitter is expressed as

  θ_{m,l} = ∠(p_l − q_m) − φ̂_m − η_m − π/2,  (2)

for l = 1, …, L and m = 1, …, M. Note that the array directional error is highly structured and prevalent on mobile platforms, but it has not been widely studied yet. The effect of this kind of error and how to achieve good self-calibration are still open problems, which are the focus of this paper. The geometry of the arrays and emitters is depicted in Figure 1.
Before introducing the signal model, we list some preconditions: (1) the baseband signals transmitted by the emitters are assumed to be narrow-band and wide-sense stationary; (2) the emitters are in the far field [38]; (3) in each array, one sample is taken at the same time as a single snapshot. Here, we do not assume that time synchronization is accurate enough for TDOA measurements; however, we assume a loosely synchronized system such that the locations of emitters and sensors remain unchanged during the measuring process. (We leave the multiple-snapshot case for future investigation.) Then, the received signal of the m-th array is given by

  y_m = Σ_{l=1}^{L} s_{m,l} a_m(f_{m,l}) + w_m,  (3)

where f_{m,l} = (d_m/λ) sin θ_{m,l}, s_{m,l} ∈ ℂ is a complex coefficient characterizing the unknown envelope of the transmitted signal from the l-th emitter to the m-th array, f_{m,l} is often referred to as the spatial frequency, w_m denotes the additive noise, and the steering vector a_m(f) is denoted by

  a_m(f) = [1, e^{j2πf}, …, e^{j2π(N_m−1)f}]^T,  (4)

where j is the imaginary unit and f ∈ [−1/2, 1/2).
By substituting (2) into (3), we recast the received signals as

  y_m = Σ_{l=1}^{L} s_{m,l} b_m(p_l, η_m) + w_m,  (5)

where the steering vector b_m(p, η) is denoted by

  b_m(p, η) = a_m((d_m/λ) sin(∠(p − q_m) − φ̂_m − η − π/2)).  (6)

For abbreviation, we use η = [η_1, …, η_M]^T, P = [p_1, …, p_L], S with [S]_{m,l} = s_{m,l}, and Y = {y_1, …, y_M} to represent the directional error vector, position matrix, complex coefficient matrix, and measurement set, respectively. In (5), we note that y_m, M, N_m, d_m, q_m, φ̂_m, and the AOA relation (2) are known. The unknown parameters η, P, S, and L are to be estimated.

Note that (5) is an incoherent model, which means that the intensity s_{m,l} of the l-th emitter varies in amplitude and phase across the sensors, and the relationship between them is unknown. Therefore, we do not require the time synchronization assumption that would guarantee a specific relationship of the intensities between the sensor signals. Moreover, since only single-snapshot measurements are used, the signal statistics assumptions needed in multiple-snapshot cases are also not required. The challenge of recovering the emitter positions P from (5) lies in the highly non-linear dependence of the frequencies f_{m,l} on the positions p_l and the unknown array directional errors η_m, for l = 1, …, L and m = 1, …, M.
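To make the single-snapshot model concrete, the following Python sketch simulates the received signal of one ULA under a directional error. All names and values are illustrative, and the AOA convention (angle w.r.t. the array normal) is one common choice; the paper's exact expression may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                      # wavelength (arbitrary units)
d = 0.5 * lam                  # half-wavelength element spacing
N = 8                          # elements in this array

def steering(f, n):
    """ULA steering vector at spatial frequency f."""
    return np.exp(2j * np.pi * f * np.arange(n))

def received(p_emitters, q, phi_assumed, eta, s):
    """Single-snapshot signal of one array whose direction is off by eta."""
    y = np.zeros(N, dtype=complex)
    for p, amp in zip(p_emitters, s):
        bearing = np.arctan2(p[1] - q[1], p[0] - q[0])
        aoa = bearing - (phi_assumed + eta) - np.pi / 2  # AOA w.r.t. actual normal
        f = d / lam * np.sin(aoa)                        # spatial frequency
        y += amp * steering(f, N)
    return y + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

emitters = [np.array([10.0, 40.0]), np.array([-5.0, 35.0])]
s = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # incoherent envelopes
y = received(emitters, q=np.array([0.0, -60.0]), phi_assumed=0.0, eta=0.05, s=s)
print(y.shape)  # one snapshot: N complex samples
```

Note that the envelopes `s` are drawn independently, reflecting the incoherent model: no phase relationship across arrays or emitters is assumed.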
3. Necessary Conditions
In this section, we discuss the feasibility of self-calibrating array directional errors in passive joint emitter localization from the perspective of necessary conditions.
First, we derive a necessary condition for our scenario by comparing the numbers of equations and unknowns. For the equations, note that the principle of passive joint emitter localization is to use distributed arrays to measure the emitters from different directions; the intersections of the direction lines indicate the emitter positions. Each direction line corresponds to an equation of the form (2). Since there are L emitters and M arrays, i.e., LM direction lines in total, the number of such equations is LM. For the unknowns, there are L emitter positions in the 2D plane, i.e., 2L real parameters, and M array directional errors to estimate, i.e., 2L + M real-valued unknown parameters in total. A necessary condition for the joint estimation of (P, η) is that the number of equations be at least the number of unknowns, i.e.,

  LM ≥ 2L + M.  (7)

From (7), it is not hard to see that when L ≥ 2, sufficiently many sensors M guarantee (7), yielding the potential of self-calibrating the array directional errors. However, in practice, due to the impact of noise, sensor system settings, sensor and emitter spatial geometries, and so on, more sensors than the threshold in (7) are generally needed.
When L = 1, (7) never holds, no matter what M is, yielding the inevitable failure of the joint estimation of (P, η). This implies that, in passive joint emitter localization, we are not able to calibrate all directional errors when L = 1. We further explain this phenomenon as follows: if L = 1, for any true values (p_1, η) and any false emitter position p'_1, it is always possible to choose directional errors η' such that the following equations are satisfied:

  ∠(p'_1 − q_m) − φ̂_m − η'_m − π/2 = θ_{m,1},  m = 1, …, M,  (8)

i.e., under η' every array sees p'_1 at exactly the same AOA as it sees p_1 under η. Therefore, (p'_1, η') yields the same signals as (p_1, η) according to (5) when S and the noise are fixed. This indicates that (p_1, η) and (p'_1, η') cannot be distinguished from the received signals Y alone. Correspondingly, the joint estimate can be either one of them and will eventually fail.
To avoid estimation failure when L = 1, on the one hand, we may assume that some arrays are precise, i.e., error-free. Denote the number of arrays with errors by M_e, M_e ≤ M. A necessary condition in this case is M ≥ 2 + M_e. On the other hand, we consider another scenario in which the directional errors of multiple sensors are identical. Similar assumptions on the errors can be seen in [13], where distributed sensor observations are obtained by sampling different moments of a single moving array with errors, and the position relationship of the array at different samplings is assumed to be exactly known. Under this condition, there are 3 unknowns (2 for the emitter position and 1 for the common directional error) and M equations, yielding the necessary condition M ≥ 3.
In summary, we have explained the potential of self-calibrating the directional errors in terms of necessary conditions when L ≥ 2. For the L = 1 case, where the counting condition (7) never holds, we have also given additional conditions under which self-calibration remains feasible. Note that, in the sequel, our proposed method is suitable for both cases.
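The counting argument above is easy to check numerically. A small helper (our own illustration) tests whether a configuration of L emitters, M arrays, and M_e biased arrays passes the necessary condition:

```python
def feasible(L, M, M_e=None):
    """Necessary (not sufficient) condition: #equations >= #unknowns.

    Each of the L*M direction lines gives one equation; the unknowns are
    2 coordinates per emitter plus one directional error per biased array.
    If M_e is None, all M arrays are assumed biased.
    """
    if M_e is None:
        M_e = M
    return L * M >= 2 * L + M_e

# All arrays biased: a single emitter can never satisfy the condition...
print([feasible(1, M) for M in (2, 10, 100)])  # [False, False, False]
# ...but two emitters do once M >= 4:
print([feasible(2, M) for M in (3, 4, 5)])     # [False, True, True]
# A single emitter becomes feasible when only M_e = M - 2 arrays are biased:
print(feasible(1, 5, M_e=3))                   # True
```

As the text stresses, passing this count only rules out trivially infeasible setups; noise and geometry generally demand more sensors than the threshold.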
5. Solving Optimization Problems
In this section, we propose a method to solve (14), yielding estimates of the unknown parameters (P, S, η). Overall, we implement a rough position estimation emitter-by-emitter, while locally improving these estimates by a two-loop alternating gradient descent over both the emitter positions and the sensor errors. The gradient descent procedure guarantees a decrease of the non-negative cost function at each step, and hence convergence is achieved. To introduce the proposed method, called group sparsity exploitation for self-calibration (GSE-SC), the framework is shown in Section 5.1 and the local improvement is detailed in Section 5.2.
5.1. Framework
Our proposed method, GSE-SC, is based on the following scheme: the emitters are added to the support set one by one; then, all the unknown variables are improved locally by alternating gradient descent and support-set pruning. Before introducing the algorithm framework, we denote by t the step counter. Assume that after t − 1 steps, we have obtained the position matrix P^{t−1} containing t − 1 emitter position estimates, the corresponding complex intensity matrix S^{t−1} with (m,l)-th entry s_{m,l}^{t−1}, as well as the directional errors η^{t−1}.
It is well known that initialization plays an important role in iterative methods. In GSE-SC, we initialize the emitter position in the t-th step by using a grid-based group sparse recovery method, in particular by solving the following ℓ2,1-norm-regularized lasso problem:

  min_C  Σ_{m=1}^{M} ‖r_m^{t−1} − D_m c_m‖² + τ ‖C‖_{2,1},  (15)

where τ > 0 is the regularization parameter, the ℓ2,1 norm is defined as

  ‖C‖_{2,1} = Σ_{g=1}^{G} ‖[C]_{g,:}‖,  (16)

and D_m is the dictionary matrix constructed by uniformly dividing the region of emitter positions 𝒫 into G grid points p̄_1, …, p̄_G and concatenating the corresponding atoms, i.e., D_m = [b_m(p̄_1, η_m^{t−1}), …, b_m(p̄_G, η_m^{t−1})]. Here, r_m^{t−1} is the residual signal of the m-th sensor in the (t−1)-th step, given by

  r_m^{t−1} = y_m − Σ_{l=1}^{t−1} s_{m,l}^{t−1} b_m(p_l^{t−1}, η_m^{t−1}).  (17)

The t-th initialization of the emitter position, p_t^{t}, is then derived from the optimal solution C* by selecting the grid point with the largest intensity:

  p_t^{t} = p̄_{g*},  g* = arg max_g ‖[C*]_{g,:}‖.  (18)

The t-th intensity matrix S^{t} is then initialized by minimizing the residual

  min_{s_{m,1},…,s_{m,t}} ‖y_m − Σ_{l=1}^{t} s_{m,l} b_m(p_l^{t}, η_m^{t−1})‖²  (19)

for m = 1, …, M, which is solved directly via the least squares method, yielding the closed-form solution

  s_m^{t} = (B_m^H B_m)^{−1} B_m^H y_m,  (20)

where B_m = [b_m(p_1^{t}, η_m^{t−1}), …, b_m(p_t^{t}, η_m^{t−1})]. Without prior knowledge, the directional error η^{0} is initialized as 0.
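The closed-form least-squares update for the intensities is a standard complex least-squares solve, one array at a time. A minimal sketch with illustrative dimensions (the dictionary here is a random stand-in for the steering vectors b_m(p_l, η_m)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, t = 8, 3                      # elements per array, emitters found so far

# B collects the steering vectors of the t current position estimates.
B = rng.standard_normal((N, t)) + 1j * rng.standard_normal((N, t))
s_true = rng.standard_normal(t) + 1j * rng.standard_normal(t)
y = B @ s_true                   # noiseless snapshot of the m-th array

# Closed-form LS solution s = (B^H B)^{-1} B^H y, via lstsq for stability.
s_hat, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.allclose(s_hat, s_true))  # True in the noiseless case
```

Using `lstsq` rather than forming (B^H B)^{-1} explicitly is the usual numerically safer route when B is ill-conditioned.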
The initialized (P^{t}, S^{t}, η^{t−1}) is then improved locally by an alternating gradient descent method, which guarantees the decrease of the cost function in (14) and helps self-calibrate the errors; this is shown in the next subsection. Consequently, we obtain (P^{t}, S^{t}, η^{t}), which yields the residuals r_m^{t}. We then repeat the procedure from (15) to (20) to continue the refinement, and this iteration terminates when t reaches the maximum number of steps, T. Empirically, T is set to be larger than the emitter number L.
Lastly, for the identification of the emitter number L, we assume that the true L can be exactly estimated by our method, as introduced in the next subsection. There are dedicated studies on this issue, and existing principles, such as AIC [45] and BIC [46], have been widely used. Since this is not our main contribution, previous related works are available for reference.
5.2. Joint Estimation of (P, S, η)
In this subsection, we detail the local improvement procedure, which aims to bring down the cost function in (14) by an alternating gradient descent method.
The difficulty of solving the optimization problem (14) is reflected in two aspects. On the one hand, (14) is non-convex; therefore, a globally optimal solution is hardly guaranteed for most optimization methods. On the other hand, the array directional error η is the key source of the non-convexity and complexity of (14), and self-calibrating η well is important for precise emitter localization.
In response to these difficulties, our strategy is an alternating gradient descent method with a two-loop structure containing an outer loop and an inner loop. In particular, we improve the estimates of (P, S) by gradient descent in the inner loop and the estimate of η (jointly with (P, S)) in the outer loop. In between, we prune the support set of P by removing weak elements, a typical procedure commonly seen in ANM methods such as the alternating descent conditional gradient (ADCG) [36] and the greedy CoGEnT [47]. The proposed structure benefits from at least the following three points:
- (A1) The outer loop with gradient descent w.r.t. (P, S, η) guarantees bringing the cost function in (14) down.
- (A2) The inner loop inside each outer loop refines (P, S) first. In this way, we reduce the probability of converging to a local minimum in the outer loop due to the imprecise initialization of (P, S).
- (A3) The two-loop structure is designed for self-calibrating η precisely. This is because, after the inner loop, (P, S) reach a local minimum for the fixed η. Then, the iteration direction and step size of the gradient descent w.r.t. η in the outer loop mainly depend on η itself, which forces η to move to a better estimate.
In particular, we now introduce the details of the two-loop structure. As a preliminary, we use k as the iteration counter of the outer loop, denote intermediate estimates by (·)^{(k)}, and initialize (·)^{(0)} by (·)^{t}, where · belongs to the set {P, S, η}. Moreover, denote by J(P, S, η) the cost function in (14) and by ∇_∘ J the partial derivative of the cost function w.r.t. ∘, where ∘ belongs to the set {P, S, η}. For convenience of presentation, the calculation of these derivatives is deferred to Appendix A.
In the k-th outer loop iteration, the inner loop is carried out first, i.e., (P, S) are renewed by gradient descent repeatedly until the maximum number of repetitions I, which is large enough to guarantee convergence. Note that we use this criterion for brevity of exposition; other criteria (such as stopping when the gradients are small enough) are also applicable. Let i count the inner loop repetitions and initialize (P, S)^{(k,0)} by (P, S)^{(k−1)}. Then, we perform

  (P, S)^{(k,i)} = (P, S)^{(k,i−1)} − α_i ∇_{(P,S)} J(P^{(k,i−1)}, S^{(k,i−1)}, η^{(k−1)}),  (21)

where the step size α_i in the i-th repetition is determined via the Armijo line search [48]. When the inner loop ends, we obtain the updated parameters (P^{(k)}, S^{(k)}). After the 'prune support' procedure, the array directional error η is then updated by gradient descent together with the intermediate parameters (P, S):

  (P, S, η)^{(k)} = (P^{(k)}, S^{(k)}, η^{(k−1)}) − β_k ∇_{(P,S,η)} J(P^{(k)}, S^{(k)}, η^{(k−1)}),  (22)

where the k-th step size β_k is also determined via the Armijo line search. The alternating iterations (21) and (22) continue until k reaches the maximum number of steps K, yielding the end of the outer loop. The specific GSE-SC method is summarized in Algorithm 1.
Algorithm 1 GSE-SC
Input: signals y_m, m = 1, …, M, and parameters T, K, I, G, τ.
Initialization: P^{0} = [ ], S^{0} = [ ], η^{0} = 0.
Set t from 1 to T:
- (1) Localize the next emitter with (18), yielding p_t^{t}.
- (2) Update the support as P^{t} = [P^{t−1}, p_t^{t}], the corresponding intensities via (20), and η^{t} = η^{t−1}.
- (3) Alternating gradient descent. Set k from 1 to K (outer loop):
  - 1) Refine (P, S) using (21) while i ≤ I (inner loop).
  - 2) Prune the support by removing weak elements.
  - 3) Locally improve (P, S, η) together using (22).
Output: P^{T}, S^{T}, and η^{T}.
Note that Algorithm 1 is proposed for the case of independent directional errors, but it is also applicable to the particular case mentioned in Section 3, where all errors are identical, by simply viewing η_m for m = 1, …, M as a single unknown parameter and updating it iteratively, similarly to (22).
Here, we analyze the complexity of the proposed method. The complexity depends on the number of iterations, and in each iteration the calculation of the gradient contributes most of the computational burden; the overall cost is therefore dominated by the gradient computations over all refinement iterations. Compared with grid-based methods, our proposed method has additional complexity in the refinement iterations, yielding a longer convergence time. This is a trade-off between performance improvement and complexity.
6. Numerical Simulation
In this section, we perform numerical simulations to compare our proposed GSE-SC method with some existing methods, including matched filtering (MF), grid-based group compressed sensing (GCS), and ADCG [36], as well as with the Cramér-Rao lower bound (CRB). In particular, we examine the influence of noise, the array directional errors η, the total number of sensors M, and the number of biased sensors M_e on the recovery of the emitter positions P. Meanwhile, the estimation results of the array directional errors η are also presented. Finally, for the GSE-SC method, we consider the multipath effect on the performance, as well as the necessary conditions for precise localization w.r.t. M and M_e.
6.1. Simulation Setting
We consider using M sensors to passively receive signals from the L emitters. Each sensor is equipped with a ULA, and all arrays have the same number of elements. The intervals between adjacent elements are set to half the wavelength, i.e., d_m = λ/2. The positions of these sensors are set randomly and uniformly in a circle with a fixed center and a radius of 120 m. The pre-assumed angles φ̂_m are set to 0. Among these M sensors, M_e sensors have unknown array directional errors and are called imprecise sensors; the rest of the sensors are exactly located. It is assumed to be known which sensors have errors and which are exactly located. In the simulations, each entry of η is a random variable, uniformly distributed in Ω. Emitter positions are set randomly and uniformly in a circle with a fixed center and a radius of 40 m. The amplitude matrix S is set as a standard complex Gaussian random matrix. We assume that the additive noise is i.i.d. white Gaussian with zero mean and variance σ², and the signal-to-noise ratio (SNR) is defined as the ratio of the average signal power to σ².
We compare our proposed GSE-SC method with the MF, GCS, and ADCG methods and with the CRB. In particular, the MF method estimates a position by maximizing the matched-filter output in (23); note that in this way only one emitter is estimated. The GCS method solves the lasso problem (15), and the position estimates of the L emitters correspond to the L largest grid intensities. The ADCG method is detailed in [36]. The CRBs are calculated in Appendix B.
The parameter settings of these methods are as follows. The region of interest 𝒫 is given in meters as described above. The regularization parameter τ is set by borrowing the corresponding idea from [49]. In the GCS method, we set the grid number G accordingly. In the GSE-SC method, we set the total number of steps T, the number of outer loop iterations K, and the number of inner loop iterations I as in Section 5.
We use the root mean square error (RMSE), on a logarithmic scale, to measure the recovery performance. In particular, we carry out N_mc Monte Carlo trials and define the RMSE of an estimate by

  RMSE(·) = sqrt( (1/N_mc) Σ_{n=1}^{N_mc} ‖(·)_true − (·)_n‖² ),

where (·)_true and (·)_n represent the true value and the estimate in the n-th trial, respectively.
To further study the feasibility of the GSE-SC method in certain scenarios, such as small numbers of sensors, we also use the hit rate as an evaluation index. In particular, a trial is recognized as a successful hit if the RMSE of the emitter positions is less than a specified threshold. The hit rate is then defined as the probability of a successful hit.
6.2. Convergence Behavior
Here, we use one numerical example to illustrate the convergence behavior of GSE-SC. We denote the iteration index by k, fix L, M, and M_e at representative values, and set SNR = 25 dB. Denote the estimation errors of P and η by e_P and e_η, respectively, computed between the true values and the current estimates at each iteration. The convergence behavior of GSE-SC in a single trial is shown in Figure 2. From Figure 2a, we see that the cost function gradually approaches its true value over the iterations. From Figure 2b, we find that the estimation errors of P and η gradually approach 0. There is a zigzag in the curve of e_η because GSE-SC only updates η in the outer loop and keeps it unchanged in the inner loop. Both results show that GSE-SC converges over the iterations.
6.3. RMSE versus SNR
In this subsection, we compare the performance of the MF, GCS, ADCG, and GSE-SC methods with the CRB in terms of RMSE under different noise levels.
In particular, we fix the total sensor number M and the imprecise sensor number M_e. We vary the SNR from 0 to 40 dB and perform Monte Carlo trials for each SNR and each method. We also plot the CRBs of P and η. Figure 3a,b show the RMSEs of P for the tested methods and the RMSEs of η for the GSE-SC method, respectively.
From Figure 3a, we find that, at the same SNR, the RMSE of P using the GSE-SC method is significantly lower than that of the MF, GCS, and ADCG methods, indicating that GSE-SC outperforms the other methods in the estimation accuracy of emitter positions. Moreover, the RMSE of P for GSE-SC decreases much faster with increasing SNR than for the MF, GCS, and ADCG methods. This is because the array directional errors play a more important role than the noise here, and GSE-SC efficiently alleviates the influence of the errors, while the MF, GCS, and ADCG methods do not. Meanwhile, the RMSE of P using GSE-SC is close to the CRB when the SNR is large enough. Figure 3b demonstrates that the RMSE of η using GSE-SC decreases with increasing SNR and is also close to the CRB.
6.4. RMSE versus η
In this subsection, we quantify the effect of the array directional errors η on the performance. In particular, we fix M and M_e and test η at several levels. We take the ADCG method [36] as a benchmark and compare its performance with GSE-SC for different η. The simulation results are shown in Figure 4.
There are 12 and 8 lines in Figure 4a,b, respectively. We find that the CRB lines for different η are very close. From Figure 4a, we first observe that GSE-SC is not sensitive to different η, and its RMSEs are all close to the CRB when the SNR is reasonably large. Second, we find that when η is small, the performances of ADCG and GSE-SC are close; as η increases, the performance of ADCG degrades gradually. These results demonstrate that array directional errors affect the localization performance significantly and that our proposed method self-calibrates the errors well. From Figure 4b, we find that GSE-SC achieves a good estimate of η when the SNR is reasonably large.
6.5. RMSE versus M
In this subsection, we investigate the impact of the number of sensors M on the estimation performance. Here, the SNR is fixed at 25 dB and M varies from 2 to 10, with the level of the array directional errors fixed. We then carry out Monte Carlo trials for each value of M. The simulation results are given in Figure 5.
We observe from Figure 5 that the RMSEs of the emitter position and array directional error estimates using the GSE−SC method first decrease as M increases and then tend to level off. Remarkably, there is a sharp drop between the two smallest values of M, reflecting a necessary condition on M for the GSE−SC method. In Figure 5a, we find that the RMSE of the GSE−SC method is much smaller than those of the other tested methods and close to the CRB when M is large enough. However, as M increases further, the RMSE and the CRB both decrease slowly. This observation helps in the effective use of multiple arrays in practice. Figure 5b shows that the RMSE curve of the array directional error estimates behaves similarly to that of the position estimates. This is because more sensors are beneficial to the self-calibration of sensor errors, which further improves the localization accuracy.
Note that in this setting, the number of unknown array directional errors changes along with M, and more biased sensors result in a higher CRB. Although a larger M provides more measurements, it can also bring more biased sensors to calibrate, which leads to a zigzag in the CRB curve.
6.6. RMSE versus the Number of Sensors with Errors
In this subsection, with the total number of sensors M and the SNR fixed, we analyze the effect of the number of sensors with errors on the localization performance. This reflects the performance bound of the multiple-array system in self-calibrating directional errors. In particular, we set SNR = 25 dB and vary the number of sensors with errors from 1 to 8. The simulation results are shown in Figure 6.
From Figure 6, we find that the RMSEs of the emitter position and array directional error estimates using the GSE−SC method are close to the CRB when the number of sensors with errors is small and increase noticeably when it grows large. This reflects a performance bound of the GSE−SC method in self-calibrating array directional errors in this scenario. However, in most cases, the GSE−SC method still performs better than the other tested methods.
6.7. RMSE versus Multipath Effect
In this subsection, we consider the effect of multipath on the performance of the GSE−SC method. We assume that multipath results in spurious emitters that are located at different positions from the true emitters but emit the same, i.e., coherent, signals. The simulation setting is almost the same as in Figure 3. In particular, we consider a true emitter and a spurious emitter caused by multipath at a different position, and we set the signals of the spurious emitter to be identical to those of the true emitter, so that the signals of the two emitters are coherent. We compare this multipath case with the results of Figure 3 in Figure 7.
From Figure 7, we find that the two sets of results are close, indicating that the multipath scenario has little effect on the estimation performance of the GSE−SC method. This shows the feasibility of our proposed method in the case of coherent signals.
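The coherence of the true and spurious signals described above can be made concrete with a short sketch. Assuming a hypothetical baseband waveform and a hypothetical complex path gain (both invented for illustration), the spurious emitter re-emits a scaled copy of the true signal, so the magnitude of the normalized cross-correlation between the two is exactly 1:

```python
import numpy as np

rng = np.random.default_rng(1)

n_snapshots = 200
# Baseband signal of the true emitter (random QPSK-like symbols, unit power).
s_true = (rng.choice([1.0, -1.0], n_snapshots)
          + 1j * rng.choice([1.0, -1.0], n_snapshots)) / np.sqrt(2)

# The spurious emitter re-emits the *same* waveform from another position,
# with a complex path gain (hypothetical value), so the signals are coherent.
path_gain = 0.8 * np.exp(1j * np.pi / 5)
s_spurious = path_gain * s_true

# Coherence check: normalized cross-correlation magnitude equals 1.
rho = np.abs(np.vdot(s_true, s_spurious)) / (
    np.linalg.norm(s_true) * np.linalg.norm(s_spurious))
```

Fully coherent sources like this are what defeat many classical subspace methods, which is why the multipath robustness shown in Figure 7 is worth checking.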
6.8. Hit Rate versus M
In this subsection, we examine how many sensors are required to achieve precise localization when using the GSE−SC method. The corresponding results are compared with the necessary conditions.
Here, we fix the SNR at 25 dB. Based on the simulation results in Figure 3 and Figure 5, the threshold of the hit rate is set empirically. We focus on several representative scenarios, perform Monte Carlo trials in each case, and show the hit rate results, together with the corresponding necessary conditions (distinguished by markers), in Figure 8a. Then, assuming the same setting as in Section 3, we carry out further trials and show the hit rate results in Figure 8b.
From Figure 8a, we find that the hit rates tend to 1 as M increases in most scenarios. Moreover, the smallest numbers of sensors yielding hit rates larger than 0.9 are 5 and 7 in two of the scenarios, respectively, which are larger than the necessary conditions. This indicates that the necessary conditions in our scenario are not very tight and that more sensors are required in practice. In the scenario with more unknowns to be estimated, the hit rates grow more slowly as M increases. In the scenario where, as discussed in Section 3, the joint estimation is bound to fail, the hit rate is close to 0; therefore, we do not mark its necessary condition. Figure 8b indicates that a large enough M guarantees a hit rate close to 1.
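The hit rate used in this subsection is simply the fraction of Monte Carlo trials whose localization error falls below the chosen threshold. A minimal sketch, with hypothetical per-trial errors and a hypothetical threshold value:

```python
import numpy as np

def hit_rate(errors, threshold):
    """Fraction of Monte Carlo trials whose localization error is below a threshold."""
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors < threshold))

# Example: per-trial position errors (invented numbers) and a threshold of 0.1;
# 4 of the 5 trials fall below the threshold.
trial_errors = [0.02, 0.05, 1.3, 0.01, 0.04]
rate = hit_rate(trial_errors, threshold=0.1)
```

Sweeping this computation over M reproduces curves of the shape shown in Figure 8.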