1. Introduction
Satellite image matching is an essential stage in the production of photogrammetric products, such as large scale maps and digital surface models (DSMs). Image matching is defined as finding correspondences between two or more images: after the matching primitives are determined, a similarity criterion is evaluated between them and the corresponding features are detected.
There are different categorizations for image matching techniques. Some techniques are designed specifically for stereo images and some for multiple-view images [1]. Methods of stereo image matching are divided into two groups, namely, global and local methods [2]. The local or window-based methods operate on local windows around each point of the stereo images, while the global methods generate a depth map from the entire images by defining and optimizing an energy function [1]. The global methods demonstrate better performance than local methods in dense matching, but their computational complexity is higher [3]. Moreover, semi-global methods reduce this computational complexity by introducing simplifications into the optimization algorithms of the global methods; examples include semi-global matching (SGM) [4], tSGM [3], and SGM-Nets [5].
Local methods are divided into area-based matching (ABM) and feature-based matching (FBM). In ABM methods, for a specific point in the first image, the search for the same pattern in the second image relies only on the grey level pattern in a neighborhood around this point. Cross-correlation and least-squares matching (LSM) are the most commonly used ABM techniques. FBM methods perform image matching based on the extraction of features from the two images; a correspondence is then established between the extracted features according to their similar attributes. The Scale-Invariant Feature Transform (SIFT) [6] and Speeded-Up Robust Features (SURF) [7] techniques are among the known examples of the FBM method [8]. Additionally, wavelet-based methods help to detect features in the scale space [9,10]. The precision of the FBM methods is limited, similar to the global methods. Therefore, the results of these methods are often used as seed points for precise matching methods [11]. In addition, the final density of the matching results depends entirely on the success of the feature extraction step.
In this paper, the focus is on the definition of an image matching method for satellite stereo images with high precision and reliability that is capable of generating dense matching results. Least squares matching, as an ABM method, has the potential to achieve high precision [8,12], and its mathematical model allows evaluation of the quality of the results. However, it requires seed points within a small pull-in range, and it may converge to a wrong point in regions with poor texture, even with an appropriate seed point [1]. These properties reduce the reliability of the matching technique. Adding geometric constraints can help achieve more reliable matching results [13]. An advantageous characteristic of the LSM technique is its flexibility in incorporating geometric constraint equations into the matching equations. In this regard, the LSM can be combined with the geometric constraint [14,15] or, separately, used to improve the precision of provided seed points [16].
The epipolar geometry constraint is one such constraint, and it can be defined using the orientation information (in the form of a rigorous or rational polynomial camera (RPC) model) of satellite stereo imagery. The well-known epipolar line pairs of stereo images acquired by frame cameras can be assumed locally for matching of satellite linear array stereo images [16]. Even though the epipolar geometry of linear array images is not a straight line [17], approximate epipolar lines can be defined using the Shuttle Radar Topography Mission (SRTM) elevation model and the RPC model [18]. The epipolar constraint is capable of increasing the convergence radius and rate of the matching [12], reducing the number of false matches [19], and significantly reducing the dimensions of the search space [20].
Here, in the image matching algorithm, the RPC models are used to provide the seed points in the coarse matching step and to restrict the search space in the form of a geometric constraint. As a result, the reliability of the matching is increased by employing the known orientation parameters of the stereo images. Additionally, a new weighting approach is proposed in this study in order to combine the RPC intersection of corresponding rays, as the geometric constraint, with the LSM method. The primary prerequisite of this combination is the assignment of appropriate weights to the different types of observations. Better results can be expected if observations of different types have equal accuracy and reliability, which can be achieved by improving the definition of the weight matrices. In this context, an optimal weighting technique for the second-order design (SoD) of geodetic networks has been proposed [21]. Since different types of observations (angles and distances) are involved in the SoD stage, the authors of [21] proposed assigning weights to the observations such that the same average redundancy numbers are obtained for all types of observations.
In this manuscript, with the aim of increasing the reliability of satellite image matching, we utilize this technique for combining the geometric constraint equations and the least-squares image matching equations. This means that the point coordinate (geometric) observations of the geometric constraint are combined with the grey level (radiometric) observations of LSM in an optimal manner. In comparison with unit weighting, the proposed method significantly improves the precision of the space intersection of the corresponding rays. Additionally, this method, as a purely statistical technique, was compared with a conventional weighting method that uses the radiometric content of the images.
2. Geometrically-Constrained LSM
As stated in the introduction, the main problem addressed in this study is finding an accurate corresponding point in the second image given a fixed point in the first image. Most precise satellite image matching methods use the LSM technique. An image matching method guided by the object space is used to reduce the inherent ambiguity of this problem. The LSM technique can be combined with a large variety of geometric constraints. Such constraints must first be linearized and organized in the least squares framework and then added to the LSM equations. Here, relying on the known sensor orientation parameters, the space intersection equations are used as a geometric constraint. The framework of the constrained image matching strategy is illustrated in Figure 1.
The space intersection of corresponding rays is written based on the stereo viewing geometry. Here, the point in the first image and the initial position of the point in the second image are known. Each point contributes two observations, whose weights should be assigned so as to achieve appropriate redundancy numbers relative to the average redundancy number of the LSM observations. A higher weight leads to a smaller redundancy number, which means reduced reliability and freedom for the observation, and vice versa.
2.1. Least Squares Image Matching (LSM)
The LSM equation, based on an affine geometric transformation and a linear (drift and offset) radiometric transformation, is formed as follows:

$$f(x, y) - e(x, y) = h_0 + h_1 \, g(x', y') \quad (1)$$

where $f(x, y)$ is the grey value of the template window, which is formed centered on the point in the first image, $g(x', y')$ is the grey value of the search window, which is formed centered on the seed point in the second image, $e(x, y)$ is the true error function, $(x, y)$ are the pixel coordinates of the point in the template window, and $(x', y')$ are the pixel coordinates of the corresponding point in the search window. Additionally, $h_0 + h_1 \, g$ is the radiometric transformation, and $x'$ and $y'$ are 2D affine transformations as follows:

$$x' = a_0 + a_1 x + a_2 y \quad (2)$$

$$y' = b_0 + b_1 x + b_2 y \quad (3)$$

where $a_0, a_1, a_2$ and $b_0, b_1, b_2$ are the affine transformation parameters, and $h_0$ and $h_1$ are the parameters of the linear radiometric transformation. Substituting $x'$ from Equation (2) and $y'$ from Equation (3) into Equation (1) yields:

$$f(x, y) - e(x, y) = h_0 + h_1 \, g(a_0 + a_1 x + a_2 y,\; b_0 + b_1 x + b_2 y) \quad (4)$$
By expanding Equation (4), the grey level matching equation, written for each pixel pair in the two homologous windows, is given by:

$$-e_i = g_x \,\mathrm{d}x' + g_y \,\mathrm{d}y' + \mathrm{d}h_0 + g^0 \,\mathrm{d}h_1 - \big(f - g^0\big)_i \quad (5)$$

with $\mathrm{d}x' = \mathrm{d}a_0 + x\,\mathrm{d}a_1 + y\,\mathrm{d}a_2$ and $\mathrm{d}y' = \mathrm{d}b_0 + x\,\mathrm{d}b_1 + y\,\mathrm{d}b_2$, where $g_x$ and $g_y$ are the grey level derivatives of the $i$-th pixel in the two directions, and $g^0$ is the grey value of the search window in each iteration, which must be interpolated from the search image. By adding three zero columns to the end of the design matrix in Equation (5), we reach the following expression, which readily combines with the constraint equations:

$$\ell_1 + v_1 = A_1 \hat{x} \quad (6)$$
where $\ell_1$ contains the grey level differences between the pixels of the two image windows, $v_1$ is the residual vector, and $A_1$ is the design matrix of LSM with the added zero columns. The vector of unknown parameters of the combined system can be written as:

$$\hat{x} = \begin{bmatrix} \mathrm{d}a_0 & \mathrm{d}a_1 & \mathrm{d}a_2 & \mathrm{d}b_0 & \mathrm{d}b_1 & \mathrm{d}b_2 & \mathrm{d}h_0 & \mathrm{d}h_1 & \mathrm{d}\varphi & \mathrm{d}\lambda & \mathrm{d}h \end{bmatrix}^T \quad (7)$$

where $(\mathrm{d}\varphi, \mathrm{d}\lambda, \mathrm{d}h)$ are the differentials of the geodetic coordinates of the point, added to the LSM unknown vector from the geometric constraint (Section 2.2). As can be seen from the vector $\hat{x}$, the geodetic coordinates of the intersection point, along with the resulting error ellipsoid of each point, can be estimated during the matching process. The geodetic systems, especially WGS84 (World Geodetic System 1984), establish a geocentric terrestrial reference system, in which each point on the Earth's surface is defined by geodetic latitude $\varphi$, geodetic longitude $\lambda$, and ellipsoid height $h$ [22].
In general, Equation (6), without adding any constraint equations, can be used only to improve the precision of the initial corresponding (seed) points, provided that the coarse matching step was performed with high reliability, e.g., using manual matching [23].
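As a concrete illustration of the linearized LSM system of Equations (4) and (5), the following Python sketch builds the design matrix for the eight LSM parameters and solves one iteration by least squares. This is a minimal sketch under stated assumptions (flattened window arrays, linearization around $h_0 = 0$, $h_1 = 1$); the function and variable names are our own, not the authors' implementation:

```python
import numpy as np

def lsm_iteration(f_win, g_win, gx, gy, x, y, h0=0.0, h1=1.0):
    """One linearized LSM iteration: solve for the differential
    affine (da0..db2) and radiometric (dh0, dh1) parameters.
    All inputs are flattened, equal-length window arrays."""
    A = np.column_stack([
        gx, gx * x, gx * y,    # columns for da0, da1, da2
        gy, gy * x, gy * y,    # columns for db0, db1, db2
        np.ones_like(g_win),   # column for dh0 (radiometric offset)
        g_win,                 # column for dh1 (radiometric drift)
    ])
    l = f_win - (h0 + h1 * g_win)  # grey level differences
    dx, *_ = np.linalg.lstsq(A, l, rcond=None)
    return dx
```

In practice this step is repeated: the estimated differentials update the affine and radiometric parameters, a new search window is interpolated, and the system is rebuilt until the shift parameters fall below a threshold.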
2.2. Geometric Constraint Based on the RPC Space Intersection
Space intersection equations for satellite stereo images are generally formed based on the RPC model. The forward rational functions are as follows [24,25]:

$$\bar{l}_j = \frac{P_{1j}(\bar{\varphi}, \bar{\lambda}, \bar{h})}{P_{2j}(\bar{\varphi}, \bar{\lambda}, \bar{h})}, \qquad \bar{s}_j = \frac{P_{3j}(\bar{\varphi}, \bar{\lambda}, \bar{h})}{P_{4j}(\bar{\varphi}, \bar{\lambda}, \bar{h})} \quad (8)$$

where $(\bar{l}_j, \bar{s}_j)$ are the normalized line and sample coordinates of a point in the $j$-th image, $(\bar{\varphi}, \bar{\lambda}, \bar{h})$ are the normalized geodetic coordinates of the point in ground space, and $P_{1j}$–$P_{4j}$ are the third-order polynomials related to the $j$-th image. The coefficients of these polynomials are calculated by the image provider and are included in the RPC files of the stereo images.
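The forward rational functions can be evaluated directly once the polynomial coefficients are read from the RPC file. The sketch below evaluates one such ratio in Python; note that the 20-term, third-order monomial ordering shown here is an assumption for illustration (the RPC file format fixes the exact coefficient order):

```python
import numpy as np

def rpc_ratio(num_coef, den_coef, lat_n, lon_n, h_n):
    """Evaluate one normalized RPC ratio P_num/P_den at the
    normalized geodetic coordinates (lat_n, lon_n, h_n).
    num_coef and den_coef each hold 20 polynomial coefficients;
    the monomial ordering below is illustrative only."""
    P, L, H = lat_n, lon_n, h_n
    terms = np.array([
        1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L ** 3, L * P * P, L * H * H, L * L * P,
        P ** 3, P * H * H, L * L * H, P * P * H, H ** 3,
    ])
    return float(np.dot(num_coef, terms) / np.dot(den_coef, terms))
```

The same routine serves for the line and sample ratios of each image by swapping in the corresponding numerator and denominator coefficient sets.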
Each pair of corresponding points extracted from the stereo images is related to a specific point in the ground space, whose coordinates are the common parameters between the equations. Thus, the space intersection consists of four equations, in which the two relations of Equation (8) are written for each of the corresponding points individually. The equations of the RPC space intersection are written based on the following expressions:

$$l_j = F_j(\varphi, \lambda, h) \cdot L_{sj} + L_{oj}, \qquad s_j = G_j(\varphi, \lambda, h) \cdot S_{sj} + S_{oj} \quad (9)$$

where $F_j$ and $G_j$ are the rational polynomial functions defined in Equation (8), $L_{sj}$ and $S_{sj}$ denote the scale values for the two image point coordinates, and $L_{oj}$ and $S_{oj}$ are the offset values for the image point coordinates. Similarly, the normalization values (scale and offset) for the geodetic coordinates of the ground point will be used in the linearized model in Equation (10). These normalization values are provided in the RPC files of the stereo images.
The linearized model of the space intersection equations for two corresponding rays can be written in matrix form as:

$$\begin{bmatrix} v_{l_1} \\ v_{s_1} \\ v_{l_2} \\ v_{s_2} \end{bmatrix} = \begin{bmatrix} \partial l_1/\partial \varphi & \partial l_1/\partial \lambda & \partial l_1/\partial h \\ \partial s_1/\partial \varphi & \partial s_1/\partial \lambda & \partial s_1/\partial h \\ \partial l_2/\partial \varphi & \partial l_2/\partial \lambda & \partial l_2/\partial h \\ \partial s_2/\partial \varphi & \partial s_2/\partial \lambda & \partial s_2/\partial h \end{bmatrix} \begin{bmatrix} \mathrm{d}\varphi \\ \mathrm{d}\lambda \\ \mathrm{d}h \end{bmatrix} - \begin{bmatrix} l_1 - l_1^0 \\ s_1 - s_1^0 \\ l_2 - l_2^0 \\ s_2 - s_2^0 \end{bmatrix} \quad (10)$$

where $(l_1, s_1)$ are the center pixel coordinates of the template window in the first image, and $(l_2, s_2)$ are the center pixel coordinates of the moving search window in the second image. It should be noted that all coordinates in Equation (10) are denormalized. The relationship between the vector of unknown parameters and the observations of the geometric constraint, according to Equation (10), can be established as:

$$\ell_2 + v_2 = A_2 \hat{x} \quad (11)$$

where $\ell_2$ contains the observations of the geometric constraint, $v_2$ is the residual vector, and $A_2$ is the design matrix of the geometric constraint with added zero columns.
2.3. Combined System of Constrained Matching Equations
While LSM uses all the pixels of the template window, the geometric constraint is written only for one point of this window, usually the center point. Therefore, selecting the appropriate image coordinate system for combining the equations is important. For each known object point, the rational functions provide the coordinates of the point in the pixel coordinate system defined for the image. On the other hand, the LSM equations are defined for each matching point such that the origin coincides with the matching point position in the search window. These two coordinate systems differ only in the location of their origins (Figure 2).
The translation parameters of the corresponding point in the search image are the common terms between the LSM and the geometric constraint equations; these parameters are independent of the origin of the coordinate system. According to Equations (5) and (10), the common unknown parameters between the LSM equations and the geometric constraint equations are the differentials of these translation parameters, which appear both in the shift terms of the affine transformation and in the image coordinates of the corresponding point in the geometric constraint.
Each observation set, with its assigned weight matrix, is joined into the equation system. The unknown parameters of the equation system are estimated based on the least-squares method:

$$\hat{x} = \left(A_1^T P_1 A_1 + A_2^T P_2 A_2\right)^{-1} \left(A_1^T P_1 \ell_1 + A_2^T P_2 \ell_2\right) \quad (14)$$

where $P_1$ and $P_2$ are the weight matrices of the LSM and geometric constraint observation equations, respectively. These two sets of equations have different types of observations and, as a result, the definition of the weight matrices is a critical step.
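Numerically, the combined solution stacks the two weighted observation sets into one set of normal equations. A minimal numpy sketch (assuming the design matrices already carry their added zero columns so both refer to the same unknown vector):

```python
import numpy as np

def combined_lsq(A1, l1, P1, A2, l2, P2):
    """Weighted least-squares solution of the combined system
    l1 + v1 = A1 x and l2 + v2 = A2 x with weights P1 and P2."""
    N = A1.T @ P1 @ A1 + A2.T @ P2 @ A2   # combined normal matrix
    t = A1.T @ P1 @ l1 + A2.T @ P2 @ l2   # combined right-hand side
    return np.linalg.solve(N, t)
```

Raising the weight of one observation set pulls the solution toward satisfying that set more tightly, which is exactly the behavior the weighting design in the next section controls.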
2.3.1. Proposed Weighting Method
Assigning appropriate weights should achieve the following goals: the point in the first image must be kept fixed during the matching iterations; the corresponding point must have sufficient freedom to satisfy the geometric constraint; and the precision of the estimated unknown parameters should be improved.
Application of the Redundancy Matrix
One way of determining the effectiveness of different types of observations in the final results is to compare the redundancy numbers of these observations in the redundancy (reliability) matrix. The redundancy matrix relates the observations vector to the estimated residual vector, which is expressed by the following equation:

$$\hat{v} = -R\,\ell, \qquad R = I - A \left(A^T P A\right)^{-1} A^T P \quad (15)$$

The geometric interpretation of this matrix in the least-squares approximation is given in Figure 3. The least squares estimation $\hat{x} = (A^T P A)^{-1} A^T P \ell$ projects the observation vector $\ell$ onto the subspace M, which is spanned by the columns of the matrix $A$ [26]. In this way, the redundancy matrix $R$ is a projection matrix that projects the observations vector onto the orthogonal complement of the subspace M.

The elements $r_{ij}$ of the redundancy matrix indicate the effect of observation $j$ on the estimated residual of observation $i$. All elements of this matrix are between zero and one. The redundancy numbers are the diagonal elements $r_{ii}$ of the redundancy matrix, which can be used as a reliability measure: large values (close to one) make it possible to discover gross errors [27].
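The redundancy matrix and its diagonal can be computed directly from the design and weight matrices. The sketch below also exposes the textbook property that the redundancy numbers sum to the degrees of freedom $n - u$:

```python
import numpy as np

def redundancy_matrix(A, P):
    """R = I - A (A^T P A)^{-1} A^T P.
    The diagonal elements r_ii are the redundancy numbers;
    their sum equals the degrees of freedom n - u."""
    n = A.shape[0]
    N_inv = np.linalg.inv(A.T @ P @ A)
    return np.eye(n) - A @ N_inv @ A.T @ P
```

Because R annihilates the column space of A, an observation whose row lies almost entirely in that space receives a redundancy number near zero, meaning its residual is uncontrolled, which is the situation the proposed weighting is designed to detect and correct.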
Weight Design Using the Redundancy Matrix
In the proposed approach, the weight of the constraint observations was considered to be relatively small, since the constraints can only be applied to one pixel, while all of the pixels in the image window were involved in the least squares matching equations.
Assuming unit weights for the LSM observations, the weights of the constraint observations are assigned as follows:

Weight of $s_2$: By assigning a unit weight, the redundancy number of this observation is about half of the average redundancy number of the LSM observations (see Section 3.4). Thus, we must provide more freedom for this observation in order to direct the LSM window toward a more precise position. Therefore, a small weight should be assigned to this component (relative to the unit weights of the LSM observations).

Weight of $l_2$: The initial value of this component is obtained from the coarse matching step, in which the elevation model of the study area as well as the RPCs of the image pair were utilized, minimizing the along-track error [28]. Thus, the initial value of this component is very close to the desired final value and, therefore, its weight should be large enough relative to the weight of $s_2$. This condition is met automatically by maintaining the unit weight of this component.

Weight of $l_1$ and $s_1$: Owing to the presence of $v_2$ in Equation (11), the estimated values of $(l_1, s_1)$ would differ from their initial values, which contradicts one of the main goals of this research, namely that the point in the first image is assumed fixed during the image matching procedure. The common approach to deal with this issue is raising the weight of these observations. However, after updating the weights of the observations, the obtained redundancy numbers for these two observations were reduced to approximately zero; thus, similar to the $l_2$ component, no additional scale factor was needed to fix the position of the point in the first image.
According to the above discussion, and taking inspiration from the work of [21], we divided the observations into two groups. One of the constraint equations formed a single-member group, while the other three constraint equations, along with the LSM equations (1225 equations, assuming a 35 × 35 window), formed the second group. The member of the single-member group is a point coordinate, whereas most members of the other group are grey levels; in other words, the two groups contain different types of observations. We tried to equalize the average redundancy numbers of these two groups, which led to higher freedom for $s_2$. By analyzing the diagonal values of the redundancy matrix, a scaling factor for the weight of the single-member group was calculated. In this manner, the weight of each group of observations was assigned properly to achieve better matching results.
Thus, the matrix form of the equations in Equations (6) and (11) should be rewritten as follows:

$$\ell_I + v_I = A_I \hat{x}, \qquad \ell_{II} + v_{II} = A_{II} \hat{x} \quad (16)$$

where the indices denote the group number, $\ell_I$ includes $\ell_1$ and the first three elements of the $\ell_2$ vector, $\ell_{II}$ is a single-element vector, $A_I$ and $A_{II}$ are the new design matrices, and $v_I$ and $v_{II}$ denote the new residual vectors after grouping. The proposed weighting approach was formulated on the basis of two general criteria:

The average redundancy number of the two groups of observations must be equalized (uniform redundancy).

For higher reliability, the average redundancy number of each group should be as close as possible to one (high redundancy).
In order to satisfy both criteria simultaneously, the average redundancy number of each group must be equal to the fixed value of the average redundancy number of all observations:

$$\frac{\operatorname{tr}(R_i)}{n_i} = \frac{\operatorname{tr}(R)}{n}, \qquad i = I, II \quad (17)$$

where $i$ is the index of the groups, $n_i$ is the number of observations of group $i$, and $n$ denotes the total number of observations. In this situation, each group will have the largest possible redundancy numbers and, at the same time, the uniformity of the redundancy numbers will be maintained. Combining these equations yields:

$$\frac{\operatorname{tr}(R_I)}{n_I} = \frac{\operatorname{tr}(R_{II})}{n_{II}} = \frac{r}{n} \quad (18)$$

where $r$, the degree of freedom of the system, equals the sum of the diagonal elements of the redundancy matrix. As a result, we assigned unit weights to the first group, and an optimal technique was then adopted for estimating the weight of the second group. Given that the observations were divided into two groups, the equality in Equation (18) depends on the weight of the second group of observations, which is obtained by estimating the scale factor $k$ for its weight matrix. This scale factor is applied only to the second group, which is related to the $s_2$ observation:

$$A = \begin{bmatrix} A_I \\ A_{II} \end{bmatrix}, \qquad P = \begin{bmatrix} P_I & 0 \\ 0 & k\,P_{II} \end{bmatrix} \quad (19)$$
where $P_I$ and $P_{II}$ are the weight matrices of the two groups of observations, and $A$ and $P$ are the design matrix and weight matrix of the combined system of equations. Using the rule $\operatorname{tr}(AB) = \operatorname{tr}(BA)$:

where $\bar{P}$ is the modified weight matrix of the system of equations:

Substituting $\bar{P}$ from Equation (21) into Equation (20) and using the normal equation notation gives:
Referring to [21], after expanding Equation (23) by a Taylor series, a quadratic equation is obtained:

where $a$, $b$, and $c$ are its coefficients. Of the two roots of Equation (24), the one with a minus sign before the square root of the discriminant is accepted. Solving for $k$ was continued until it converged to one; practically, this convergence occurs in fewer than 10 iterations. Multiplying the outputs of all iterations gives the desired scaling factor $k$:

$$k = \prod_{t} k_t \quad (25)$$

where $t$ is the iteration index. A scheme of the proposed weighting and unit weighting approaches is presented in Figure 4. In each case, the combined system was solved assuming unit weights for all observations, and then the variance factors of the two observation sets were calculated using the Helmert VCE method [29,30]. The system was then solved again with the updated weights, applying the estimated variance factors.
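The effect of Equations (17) and (18) can be reproduced numerically without the closed-form expansion: scale the weight of the single-member group until its redundancy number matches the overall average. The bisection sketch below is a hypothetical stand-in for the quadratic iteration of Equation (24), not the authors' algorithm; it assumes unit weights for the first group:

```python
import numpy as np

def group_scale_factor(A, idx2, k_lo=1e-6, k_hi=1e6, iters=80):
    """Find the scale factor k for the weights of the observations
    in idx2 (the second group) so that their average redundancy
    number equals the overall average (n - u) / n."""
    n, u = A.shape
    target = (n - u) / n

    def mean_r2(k):
        P = np.eye(n)
        P[idx2, idx2] = k               # scale only the second group
        R = np.eye(n) - A @ np.linalg.inv(A.T @ P @ A) @ A.T @ P
        return float(np.mean(np.diag(R)[idx2]))

    for _ in range(iters):
        k = np.sqrt(k_lo * k_hi)        # bisection in log space
        if mean_r2(k) > target:         # too much freedom -> raise weight
            k_lo = k
        else:
            k_hi = k
    return np.sqrt(k_lo * k_hi)
```

Since raising a weight monotonically lowers the corresponding redundancy number, the bisection converges to the unique scale factor that equalizes the group averages.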
2.3.2. Precision Estimation
After calculating the least-squares residuals, the a posteriori variance factor is given by:

where $\bar{P}_2$ is the modified weight matrix of the geometric constraint observations, as seen in Equation (28). At the beginning, due to the lack of preliminary information, unit weights are assumed for all observations. After estimating the variance components, the scaling factors for each weight matrix are obtained, and the image matching procedure is then repeated based on the new set of weights. As a result, by applying the estimated scale factors, the final form of the covariance matrix of the combined system is as follows:

After estimating these factors, the constrained matching was repeated with the new weights. In the proposed approach, before performing the variance estimation, the scaling factor $k$ was introduced into the weight matrix of the geometric constraint according to the following equation:

Considering the change in one of the weight matrices, a new estimate of the covariance matrix of the observations is obtained, which should be close to the previous covariance matrix. This covariance matrix can be estimated as follows:

where the scale factors of the two observation sets are estimated using the Helmert VCE method. Then, the covariance matrix of the unknown parameters is obtained as:
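From the covariance matrix of the unknown parameters, the planimetric error ellipse follows from an eigen-decomposition of its 2 × 2 horizontal block. A small sketch; the 95% scale factor $\sqrt{\chi^2_{0.95,\,2}} \approx 2.4477$ matches the confidence level used later in the experiments:

```python
import numpy as np

def error_ellipse(cov2x2, scale=2.4477):
    """Semi-axes (major, minor) of the planimetric error ellipse
    from the 2x2 horizontal block of the parameter covariance.
    scale = sqrt(chi2_{0.95, 2}) for a 95% confidence ellipse;
    use scale = 1.0 for the standard (1-sigma) ellipse."""
    vals = np.linalg.eigvalsh(cov2x2)   # eigenvalues in ascending order
    semi_minor = scale * np.sqrt(vals[0])
    semi_major = scale * np.sqrt(vals[1])
    return semi_major, semi_minor
```

The eigenvectors of the same block give the orientation of the ellipse axes; a near-circular ellipse (eigenvalues of similar size) indicates isotropically distributed horizontal error.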
2.4. Discussion on the Execution Speed
The LSM technique is usually employed to increase the precision of the matching results. However, the nonlinearity of the mathematical model and the use of a neighborhood around each point directly affect the processing time of the image matching step. This mathematical model is linearized around an approximate position of the seed point and is solved iteratively. In each iteration, after the geometric transformation, a new window should be interpolated from the second image. The larger the size of the window, the longer the interpolation process takes.
Here, the coarse matching method performs a 1D search and calculates the correlation coefficient pixel by pixel. Because the search is performed along the line direction of the image, the search interval length depends on the elevation range of the study area. For CARTOSAT-1 stereo images, according to their spatial resolution, a one-meter increment in the elevation range adds four pixels to the search interval length in the line direction.
The presence of the constraint equations does not have a significant effect on the execution speed, because the number of constraint equations is much lower than the number of LSM equations. Additionally, these equations are written based on rational functions, which operate only on the coordinates of the points and do not require complicated processes such as grey level interpolation.
In summary, the desired precision of the image matching, the size of the matching window, the quality of the image texture in the neighborhood of the seed point, the elevation variations of the study area, and the spatial resolution of the stereo images all affect the convergence speed of the utilized image matching algorithm.
3. Experimental Results
In order to test the proposed method, 16 points on a regular grid in the first image were considered. The image matching step, through the unit weighting and proposed weighting methods, was implemented. Relying on the redundancy matrix, we controlled the amount of displacement of the points on both images during image matching iterations. Estimated residual vectors of the observations demonstrated the success rate of the proposed method. Additionally, the performance of the method was assessed through analysis of the error ellipses in the object space.
3.1. Study Area and Data
The experimental data used in this study are a part of the ISPRS benchmark dataset, consisting of a pair of CARTOSAT-1 sub-images of the LaMola region of Catalonia, Spain, captured on 5 February 2008 and supplied in Orthokit format (Figure 5) [31]. The improved RPCs of the images are also provided. We chose this dataset because the adopted image matching strategy requires precise RPCs of the stereo images. Additionally, the one-arc-second SRTM file of the study area was available from the USGS website [32]. Some specifications of the dataset and test area are presented in Table 1.
3.2. Coarse Matching Step
The RPC model can be used to find the approximate corresponding point. By transferring the point of the first image to multiple elevation levels in the object space, and then transferring all intersection points to the second image, the search space for matching is limited to a straight line. Using the RPCs of the image pair, each point was first transferred to the SRTM elevation model. Then, the intersection point was projected onto the second image; due to the low resolution of the SRTM elevation model, this position serves only as the center of a 1D search space. The search was performed along the line direction (along-track direction) of the second image to find the initial corresponding point. At each position of the search space, a normalized cross-correlation (NCC) value was calculated, and the position with the maximal NCC coefficient was introduced as the seed point to the precise matching step.
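The coarse matching step described above can be sketched as a 1D search along the line direction, evaluating the NCC at each candidate row. Window sizes and variable names below are illustrative, not the authors' implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def coarse_match_1d(template, image, center_row, col, half_len, half_win=17):
    """1D coarse search along the line (row) direction around the
    projected position; returns the row with the maximal NCC."""
    best_row, best_val = None, -2.0
    for r in range(center_row - half_len, center_row + half_len + 1):
        win = image[r - half_win: r + half_win + 1,
                    col - half_win: col + half_win + 1]
        val = ncc(template, win)
        if val > best_val:
            best_row, best_val = r, val
    return best_row, best_val
```

In the experiments, `half_len` would follow from the elevation range of the study area, since each meter of elevation uncertainty widens the 1D search interval.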
3.3. Precise Matching Step
In order to extract highly-accurate corresponding points between the stereo images, strict thresholds must be applied, especially to the bidirectional matching shifts ($\mathrm{d}a_0$ and $\mathrm{d}b_0$). Although a value of 0.1 pixels is sufficient for CARTOSAT-1 stereo images [33], in this experiment a value of 0.05 pixels was selected for the shift threshold. Moreover, in order to avoid false matching, the number of iterations was limited to 20, and the normalized correlation coefficient of the two windows in the last iteration was required to be greater than 0.8.
Optimum Weight Estimation
The iterative solution of the $k$ coefficients for a sample point is given in Table 2. The product of the values over all iterations gives the final value of this scaling factor. Considering that an almost identical value was obtained at the other adopted points all over the test area, this value was used to update the weight matrix of the second group of observations.

For comparison with other methods, the scaling factors at all points were also computed using Zhang's method [15], which was originally proposed by the authors of [34]:

where the scale factor is equal to the average of the squared grey level derivatives in the image window. The computed scale factors are given in Table 3. As can be seen, these scaling factors for the weight matrix do not differ greatly from the results of the proposed approach. The differences are due to the impact of the radiometric content of the image windows, which is not considered in our approach. It should be noted that in this method both points are displaced and, hence, the scaling factors were applied to all observations of the geometric constraint equations.
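A texture-driven factor of the kind used in this comparison, an average of squared grey level derivatives over the window, can be computed as follows. This is a sketch of the idea only; the exact normalization used in [15,34] may differ:

```python
import numpy as np

def avg_squared_gradient(window):
    """Average squared grey level derivative over an image window,
    used as a texture-driven weight scale factor: flat windows
    give 0, strongly textured windows give large values."""
    gy, gx = np.gradient(window.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))
```

Such a factor ties the constraint weight to the radiometric content of the window, in contrast to the purely statistical redundancy-based factor proposed here.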
3.4. Analysis of the Results
In order to evaluate the effect of the proposed approach on the results of the adjustment, three of the 16 points were selected. Statistical information on the redundancy numbers of all observations, collected in both cases, is presented in Table 4 and Table 5.

The matching window, with a size of 35 × 35 pixels, introduces 1225 grey level observations (one observation per pixel, as described in Equation (5)) into the system of equations. Adding the four observations of the geometric constraint brings the number of observations to 1229, whereas the number of unknown parameters is 11 (Equation (7)). Therefore, the degree of freedom and the average redundancy number of the combined system will be $r = 1229 - 11 = 1218$ and $\bar{r} = 1218/1229 \approx 0.991$, respectively.
As is clear in Table 4 and Table 5, in the proposed approach, relative to the unit weighting approach, the redundancy number of the $s_2$ observation is much closer to the average redundancy of the system (marked in bold).
In both cases, after completing the least squares adjustment, the covariance matrix of the unknown parameters was calculated. The estimated standard deviations of the coordinates of the intersection point were calculated from the covariance matrix elements. These standard deviations represent error estimates along the reference axes directions. The estimated error can then be shown through the orientation and lengths of the semi-axes of the standard (68% confidence level) error ellipse [35]. Therefore, to complete the comparison, the planimetric error ellipses depicted around the estimated intersection points resulting from the two cases are shown in Figure 6. For more confidence, all of the error ellipses were plotted at the 95% confidence level.
For a better understanding of the dimensions of the error ellipses, which were obtained in arc-degrees, they were converted to lengths on the reference ellipsoid in meters as follows [36]:

$$M = \frac{a(1 - e^2)}{(1 - e^2 \sin^2 \varphi)^{3/2}}, \qquad N = \frac{a}{\sqrt{1 - e^2 \sin^2 \varphi}}$$

$$d_{\varphi}^{(m)} = \frac{\pi}{180}\, M \, d_{\varphi}, \qquad d_{\lambda}^{(m)} = \frac{\pi}{180}\, N \cos\varphi \, d_{\lambda}$$

where $M$ and $N$ are the meridian and prime-vertical radii of curvature, and $a$ and $e$ are the semi-major axis and eccentricity of the WGS84 ellipsoid, respectively. Additionally, $d_{\varphi}^{(m)}$ and $d_{\lambda}^{(m)}$ denote the dimensions of the error ellipses in meters, while $d_{\varphi}$ and $d_{\lambda}$ are estimated in degrees. This effect should be taken into account in the estimated covariance matrix of the unknowns in order to obtain the error ellipsoid dimensions in meters.
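The arc-degree to meter conversion can be sketched directly from the WGS84 constants (function and variable names are our own):

```python
import numpy as np

WGS84_A = 6378137.0           # semi-major axis [m]
WGS84_E2 = 0.00669437999014   # first eccentricity squared

def ellipse_axes_to_meters(dphi_deg, dlam_deg, lat_deg):
    """Convert error-ellipse dimensions from arc-degrees to meters
    on the WGS84 ellipsoid, using the meridian radius M and the
    prime-vertical radius N at the point latitude."""
    phi = np.radians(lat_deg)
    w2 = 1.0 - WGS84_E2 * np.sin(phi) ** 2
    M = WGS84_A * (1.0 - WGS84_E2) / w2 ** 1.5   # meridian radius
    N = WGS84_A / np.sqrt(w2)                    # prime-vertical radius
    d_north = np.radians(dphi_deg) * M
    d_east = np.radians(dlam_deg) * N * np.cos(phi)
    return d_north, d_east
```

At the equator this gives roughly 110.6 km per degree of latitude and 111.3 km per degree of longitude, shrinking in the east direction with the cosine of latitude.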
As can be seen in Figure 6, the size of the error ellipses is not uniform, even when only the unit weighting approach is considered. To clarify the main reason for this discrepancy, we should consider the texture of the template window (Figure 7). Area-based matching methods have the potential to achieve highly-precise results if the template window is well-textured. The uniqueness of each template window within the search area can be evaluated using an autocorrelation function [37]. The existence of a sharp autocorrelation peak at the center of the window is essential for the good performance of local (intensity-based) image matching techniques.
Error ellipsoids are the general way to illustrate the confidence region of estimated parameters in 3D. However, at our computed points, one of the axes of each error ellipsoid was almost vertical, implying that the correlation between the vertical and horizontal components is small and insignificant. For this reason, we treated them as independent components, and the confidence regions of the vertical and horizontal components are presented individually: the error ellipses of the horizontal components are shown in Figure 6, and the intervals of the vertical components are shown in Figure 8.
At each of the 16 points, the intersection position of the corresponding rays (the center of the error ellipse) changed after applying the proposed weighting approach relative to the unit weighting case. However, to make a fair comparison, the error ellipses resulting from the two weighting approaches are illustrated as concentric ellipses in Figure 9. The lengths of the error ellipse axes decreased significantly at all 16 points; Table 6 shows their numerical values. As can be seen, with the proposed method the shapes of the error ellipses are almost circular (smaller eccentricity), which implies that the error variances are distributed isotropically in the horizontal plane.
4. Discussion
An analysis of the results and a comparison with the unit weighting method show that the proposed approach achieves the predetermined goals. In
Figure 10, the computed residuals of the image point observations are illustrated. As can be seen in
Figure 10, with the proposed method the residuals of the first-image observations are close to zero. As expected, assigning small redundancy numbers to the observations of the given point in the first image makes it possible to effectively fix the position of this point during the image matching iterations. Conversely, increasing the redundancy number of the sample component in the second image increased the residual of this observation at all points. In other words, the point in the second image was moved toward the epipolar plane with more freedom.
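This behavior can be reproduced with a minimal Gauss-Markov sketch: an observation given a very large weight receives a near-zero redundancy number and, consequently, a near-zero residual, analogous to the fixed first-image observations. The design matrix and noise level below are arbitrary stand-ins, not the paper's LSM/intersection system:

```python
import numpy as np

# Linearized Gauss-Markov model: l = A x + noise, residuals v = A x_hat - l.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))                   # stand-in design matrix
l = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(8)

P = np.eye(8)
P[0, 0] = 1e6          # heavily weighted observation, like a first-image point

N = A.T @ P @ A
R = np.eye(8) - A @ np.linalg.solve(N, A.T @ P)   # redundancy matrix
r = np.diag(R)                                    # redundancy numbers

x_hat = np.linalg.solve(N, A.T @ P @ l)
v = A @ x_hat - l                                 # residuals; identically -R @ l
```

The identity v = -R l makes the link explicit: an observation with a redundancy number near zero cannot develop a residual, no matter how large its actual error, while the trace of R always equals the overall redundancy n - u.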
It is noteworthy that two of the four intersection observations, namely the coordinates of the corresponding point on the search image, take new values in each iteration. This is because some unknown LSM parameters influence these two observations: after the differential values of the shift parameters of the affine transformation model are computed, they are used to update the coordinates of the corresponding point. This is why applying a scale factor to the weight of the corresponding point's sample component changes the image matching result and improves the intersection precision. Clearly, if these observations were assigned constant values, adding a scale factor would have no effect on the precision estimates of the unknowns.
5. Conclusions
A more reliable image matching strategy leads to a more accurate scene reconstruction, and such reliability is usually achieved by introducing geometric constraints. Before adding any geometric equations to the radiometric equations of LSM, the optimal weights of the different types of observations should be determined. Following a previous study that used the redundancy matrix to determine optimal weights in the second-order design of geodetic networks, in this study the weights were determined through redundancy matrix analysis. Conventionally, the redundancy matrix has been used for internal reliability testing after performing the least squares adjustment.
Starting from unit weights for all observations and estimating the resulting redundancy matrix, we updated the weight of just one observation (the sample component of the corresponding point in the second image). In this way, the redundancy number of this observation approximately doubled and became equal to the average redundancy number of the other elements of the observation vector. A higher redundancy number increases the internal reliability on the one hand and, on the other, gives the geometric constraint more flexibility to improve the image matching performance. It should be noted that in the image matching procedure, the coordinates of the point in the second image were treated both as observations of the geometric constraint and as unknown parameters of the LSM.
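The weight update summarized above can be sketched numerically: starting from unit weights, the weight of a single observation is rescaled until its redundancy number reaches the average value. The design matrix below is a random stand-in, and bisection is used here merely as a simple way to find the scale factor; it is not necessarily the procedure used in the paper:

```python
import numpy as np

def redundancy_numbers(A, P):
    """Diagonal of the redundancy matrix R = I - A (A'PA)^-1 A'P."""
    N = A.T @ P @ A
    return np.diag(np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P))

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))       # stand-in design matrix
P = np.eye(10)                          # unit weighting

r_unit = redundancy_numbers(A, P)
i = int(np.argmin(r_unit))              # the single re-weighted observation
target = r_unit.mean()                  # average redundancy number (n - u) / n

# Down-weighting observation i raises its redundancy number; bisection on
# the scale factor drives r_i to the average value.
lo_s, hi_s = 1e-4, 1.0
for _ in range(60):
    s = 0.5 * (lo_s + hi_s)
    P_new = P.copy()
    P_new[i, i] = s
    if redundancy_numbers(A, P_new)[i] < target:
        hi_s = s                        # r_i still too small: lower the weight
    else:
        lo_s = s
```

Because the redundancy number of an observation decreases monotonically as its weight grows, the bisection always converges, and the re-weighted observation ends up with the average redundancy number, mirroring the equalization described above.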
The proposed weighting approach changes the estimated values of the unknown parameters as well as the estimated covariance matrix. As a result, the sizes of the planimetric error ellipses of the intersection points in object space decreased significantly: the geometric mean of the two axes of these ellipses under the proposed weighting was reduced to 0.17 times that of the unit weighting approach.
A comparison with a similar approach was also performed; the two approaches produced identical results at half of the points. At the remaining points, however, different scale factors were obtained because the proposed method neglects the grey levels of the pixels within the matching window. In future research, the proposed method, which is based purely on statistical analysis, could be combined with the radiometric content of the matching windows.