Abstract
Multiplicative errors, in addition to spatially referenced observations, often arise in geodetic applications, particularly with light detection and ranging (LiDAR) measurements. However, regression involving multiplicative errors remains relatively unexplored in such applications. We therefore present a penalized modified least squares estimator that handles the complexities of a multiplicative error structure while identifying significant variables in spatially dependent observations. The proposed estimator can also be applied to classical additive-error spatial regression. By establishing asymptotic properties of the proposed estimator under increasing-domain asymptotics with a stochastic sampling design, we provide a rigorous foundation for its effectiveness. A comprehensive simulation study confirms the superior performance of our estimator in accurately estimating and selecting parameters, outperforming existing approaches. To demonstrate its real-world applicability, we employ the proposed method, along with alternative techniques, to estimate a rotational landslide surface from LiDAR measurements. The results highlight the efficacy and potential of our approach for complex spatial regression problems involving multiplicative errors.
Data availability
The dataset used in this article can be made available upon request.
References
Bhattacharyya, B., Khoshgoftaar, T., & Richardson, G. (1992). Inconsistent m-estimators: Nonlinear regression with multiplicative error. Statistics & Probability Letters, 14, 407–411.
Breheny, P., & Huang, J. (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5(1), 232.
Cai, L., & Maiti, T. (2020). Variable selection and estimation for high-dimensional spatial autoregressive models. Scandinavian Journal of Statistics, 47(2), 587–607.
Candes, E., & Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics, 35(6), 2313–2351.
Cheng, L., Guo, Z., Li, Y., et al. (2023). Two-stream multiplicative heavy-tail noise despeckling network with truncation loss. IEEE Transactions on Geoscience and Remote Sensing. https://doi.org/10.1109/TGRS.2023.3302953
Chu, T., Zhu, J., & Wang, H. (2011). Penalized maximum likelihood estimation and variable selection in geostatistics. The Annals of Statistics, 39(5), 2607–2625.
Cui, M., Zhu, Y., Liu, Y., et al. (2022). Dense depth-map estimation based on fusion of event camera and sparse lidar. IEEE Transactions on Instrumentation and Measurement, 71, 1–11.
Doukhan, P. (1994). Mixing: Properties and examples (pp. 15–23). New York: Springer.
Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
Fan, Y., & Li, R. (2012). Variable selection in linear mixed effects models. The Annals of Statistics, 40(4), 2043.
Fan, J., & Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics, 32(3), 928–961.
Feng, W., Sarkar, A., Lim, C. Y., et al. (2016). Variable selection for binary spatial regression: Penalized quasi-likelihood approach. Biometrics, 72(4), 1164–1172.
Feng, C., Wang, H., Han, Y., et al. (2013). The mean value theorem and taylor’s expansion in statistics. The American Statistician, 67(4), 245–248.
Hladik, C., & Alber, M. (2012). Accuracy assessment and correction of a lidar-derived salt marsh digital elevation model. Remote Sensing of Environment, 121, 224–235.
Iyaniwura, J., Adepoju, A. A., & Adesina, O. A. (2019). Parameter estimation of Cobb–Douglas production function with multiplicative and additive errors using the frequentist and Bayesian approaches. Annals. Computer Science Series, 17(1).
Lahiri, S. (2003). Central limit theorems for weighted sums of a spatial process under a class of stochastic and fixed designs. Sankhyā: The Indian Journal of Statistics. https://doi.org/10.2307/25053269
Lahiri, S., & Zhu, J. (2006). Resampling methods for spatial regression models under a class of stochastic designs. The Annals of Statistics, 34(4), 1774–1813.
Li, J., Luo, C., & Yang, X. (2023). PillarNeXt: Rethinking network designs for 3D object detection in LiDAR point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 17567–17576).
Li, Y., Yu, A. W., Meng, T., et al. (2022). DeepFusion: LiDAR-camera deep fusion for multi-modal 3D object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 17182–17191).
Lim, C., Meerschaert, M., & Scheffler, H. P. (2014). Parameter estimation for operator scaling random fields. Journal of Multivariate Analysis, 123, 172–183.
Liu, X., Chen, J., & Cheng, S. (2018). A penalized quasi-maximum likelihood method for variable selection in the spatial autoregressive model. Spatial Statistics, 25, 86–104.
Mauro, F., Monleon, V. J., Temesgen, H., et al. (2017). Analysis of spatial correlation in predictive models of forest variables that use lidar auxiliary information. Canadian Journal of Forest Research, 47(6), 788–799.
McRoberts, R. E., Næsset, E., Gobakken, T., et al. (2018). Assessing components of the model-based mean square error estimator for remote sensing assisted forest applications. Canadian Journal of Forest Research, 48(6), 642–649.
Ribeiro, P. J., Jr., & Diggle, P. J. (2007). The geoR package. R News, 1(2), 14–18.
Shi, Y., & Xu, P. (2020). Adjustment of measurements with multiplicative random errors and trends. IEEE Geoscience and Remote Sensing Letters, 18(11), 1916–1920.
Shi, Y., Xu, P., Peng, J., et al. (2014). Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from lidar-type digital elevation models. Sensors, 14(1), 1249–1266.
Stein, M. L. (1999). Interpolation of spatial data: Some theory for kriging. New York: Springer.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B (Methodological), 58, 267–288.
Uss, M. L., Vozel, B., Lukin, V. V., et al. (2019). Estimation of variance and spatial correlation width for fine-scale measurement error in digital elevation model. IEEE Transactions on Geoscience and Remote Sensing, 58(3), 1941–1956.
Viggh, H. E., & Staelin, D. H. (2007). Surface reflectance estimation using prior spatial and spectral information. IEEE Transactions on Geoscience and Remote Sensing, 45(9), 2928–2939.
Wang, L., & Chen, T. (2021). Ridge estimation iterative solution of ill-posed mixed additive and multiplicative random error model with equality constraints. Geodesy and Geodynamics, 12(5), 336–346.
Wang, L., & Chen, T. (2021). Virtual observation iteration solution and a-optimal design method for ill-posed mixed additive and multiplicative random error model in geodetic measurement. Journal of Surveying Engineering, 147(4), 04021016.
Wang, H., Li, R., & Tsai, C. L. (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika, 94(3), 553–568.
Wang, H., & Zhu, J. (2009). Variable selection in spatial regression via penalized least squares. The Canadian Journal of Statistics, 37(4), 607–624.
Wu, T. T., & Lange, K. (2008). Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2(1), 224–244.
Xu, P., & Shimada, S. (2000). Least squares parameter estimation in multiplicative noise models. Communications in Statistics-Simulation and Computation, 29(1), 83–96.
Xu, P., Shi, Y., Peng, J., et al. (2013). Adjustment of geodetic measurements with mixed multiplicative and additive random errors. Journal of Geodesy, 87(7), 629–643.
Yao, W., Guo, Z., Sun, J., et al. (2019). Multiplicative noise removal for texture images based on adaptive anisotropic fractional diffusion equations. SIAM Journal on Imaging Sciences, 12(2), 839–873.
You, H., Yoon, K., Wu, W. Y., et al. (2024). Regularized nonlinear regression with dependent errors and its application to a biomechanical model. Annals of the Institute of Statistical Mathematics, 76, 1–30.
Yuan, M., & Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49–67.
Zhao, L., Ma, X., Xiang, Z., et al. (2022). Landslide deformation extraction from terrestrial laser scanning data with weighted least squares regularization iteration solution. Remote Sensing, 14(12), 2897.
Zhou, Y., & Tuzel, O. (2018). VoxelNet: End-to-end learning for point cloud based 3D object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4490–4499).
Zhu, J., Huang, H. C., & Reyes, P. E. (2010). On selection of spatial linear models for lattice data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3), 389–402.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
Acknowledgements
Wu was supported by National Science and Technology Council (NSTC) of Taiwan under grant No. 111-2118-M-259-002. The works of Lim and Choi were supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C1002213, 2020R1A4A1018207, RS-2024-00335033, RS-2023-00221762, RS-2024-00344732). This work of Yoon was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha Womans University)).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no conflict of interest. Chae Young Lim is a co-Editor of the Journal of the Korean Statistical Society. The co-Editor status has no bearing on editorial consideration.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A Proofs
We state a lemma together with a reference in which its proof is provided; the proof is therefore omitted here.
Lemma 1
[Theorem 3.2 from Lahiri (2003)]
Let \(\{\xi (\varvec{s}): \varvec{s} \in {\mathcal {R}}^d\}\) be a stationary random field with \(\text{ E }\xi (\varvec{0}) =0\) and \(\text{ E }|\xi (\varvec{0})|^{2+\delta } < \infty\) for some \(\delta >0\). If the conditions below are fulfilled:
(a)
There exists a function \(H_1(\cdot )\) such that for all \(\varvec{h} \in {\mathcal {R}}^d\), as \(n \rightarrow \infty\),
$$\begin{aligned} \left[ \int w_n^2(\eta _n \varvec{u}) \phi (\varvec{u}) d\varvec{u}\right] ^{-1}\int w_n(\eta _n \varvec{u}+\varvec{h})w_n(\eta _n \varvec{u}) \phi ^2(\varvec{u})d\varvec{u} \rightarrow H_1(\varvec{h}) \end{aligned}$$
(b)
\(\gamma _{1n}^2 = O(n^a)\) and \(\gamma _{1n}^2 = o\left( [\log n]^{-2}\eta _n^{\frac{\tau -d}{4\tau }}\right)\) for some \(a \in [0, 1/8)\) and \(\tau >d\), where
$$\begin{aligned} \gamma _{1n}^2 = \frac{\sup \{|w_n(\varvec{s})|^2:\varvec{s}\in \eta _n R_0\}}{\text{ E }[w_n^2(\eta _n \varvec{U}_1)]} \end{aligned}$$
(c)
\(\phi (\varvec{u})\) is continuous and everywhere positive with support \({\bar{R}}_0\)
(d)
For \(\alpha _1(\cdot )\) from the assumption,
1. \(\int _1^{\infty } t^{d-1} \alpha _1(t)^{\frac{\delta }{2+\delta }}\,dt<\infty\)
2. \(\alpha _1(t) \sim t^{-\tau }L(t)\) as \(t \rightarrow \infty\) for some slowly varying function \(L(\cdot )\)
(e)
For \(d\ge 2\) and \(\alpha _2(\cdot )\) from the assumption, as \(t \rightarrow \infty\),
$$\begin{aligned} \alpha _2(t) = o\left( [L(t)^{\frac{\tau +d}{4d\tau }}]^{-1}t^{\frac{\tau -d}{4d}} \right) \end{aligned}$$
then, with \(n/\eta _n^d \rightarrow C_1 \in (0, \infty )\), as \(n\rightarrow \infty\)
Proof of Theorem 1
For a large enough n, we have
where \(t \in (0,1)\). For the term \({\mathbb {A}}\),
We follow a proof similar to that of You et al. (2024) and evaluate \({\text {Var}}({\mathbb {A}})\) since \({\mathbb {A}}-\text{ E }({\mathbb {A}}) = O_p\left( \sqrt{{\text {Var}}({\mathbb {A}})}\right)\).
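For reference, this \(O_p\) bound is simply Chebyshev's inequality, a standard step not specific to this paper:
$$\begin{aligned} P\left( |{\mathbb {A}}-\text{ E }({\mathbb {A}})| > M\sqrt{{\text {Var}}({\mathbb {A}})}\right) \le \frac{1}{M^2} \quad \text {for any } M>0, \end{aligned}$$
so that \({\mathbb {A}}-\text{ E }({\mathbb {A}})\) is stochastically bounded at the scale \(\sqrt{{\text {Var}}({\mathbb {A}})}\).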
where \(\lambda _{max}(\varvec{M})\) refers to the maximum eigenvalue of a matrix \(\varvec{M}\). The second equality holds since the products \(\varvec{A}\varvec{B}\) and \(\varvec{B}\varvec{A}\) of two matrices share the same non-zero eigenvalues and \(\varvec{\Sigma }_n^2 = \varvec{\Sigma }_n\). The second inequality holds because the eigenvalues of \(\varvec{\Sigma }_n\) are 0's and 1's. The last equality comes from the compactness of the domain \({\mathcal {D}}\times \Theta\) by Assumption 1-(1). Thus, \({\text {Var}}({\mathbb {A}}) = O\left( n a_n^2 \lambda _\varepsilon \right) \Vert \varvec{v} \Vert ^2\), which leads to
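For completeness, the eigenvalue bound built from these two matrix facts can be sketched as the standard chain (assuming, as in the text, that \(\lambda _\varepsilon\) bounds the maximum eigenvalue of \(\varvec{\Sigma }_\varepsilon\)):
$$\begin{aligned} \lambda _{max}(\varvec{\Sigma }_n \varvec{\Sigma }_\varepsilon \varvec{\Sigma }_n) = \lambda _{max}(\varvec{\Sigma }_\varepsilon \varvec{\Sigma }_n \varvec{\Sigma }_n) = \lambda _{max}(\varvec{\Sigma }_\varepsilon \varvec{\Sigma }_n) = \lambda _{max}(\varvec{\Sigma }_\varepsilon ^{1/2} \varvec{\Sigma }_n \varvec{\Sigma }_\varepsilon ^{1/2}) \le \lambda _{max}(\varvec{\Sigma }_\varepsilon ) = \lambda _\varepsilon , \end{aligned}$$
where the first and third equalities use that \(\varvec{A}\varvec{B}\) and \(\varvec{B}\varvec{A}\) share non-zero eigenvalues, the second uses \(\varvec{\Sigma }_n^2 = \varvec{\Sigma }_n\), and the inequality uses that the eigenvalues of \(\varvec{\Sigma }_n\) lie in \([0,1]\).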
Now, we decompose \({\mathbb {B}}\) into four terms as follows.
where \(\varvec{\theta }_n = \varvec{\theta }_0 + a_n \varvec{v} t\) and \(\varvec{d}(\varvec{\theta }, \varvec{\theta }') = (d_1(\varvec{\theta }, \varvec{\theta }'), \ldots , d_n(\varvec{\theta }, \varvec{\theta }'))^T\) with \(d_i(\varvec{\theta }, \varvec{\theta }') = g(\varvec{x}(\varvec{s}_i); \varvec{\theta })-g(\varvec{x}(\varvec{s}_i); \varvec{\theta }')\). Since \(n^{-1}{\varvec{\dot{G}}}({\varvec{\theta }})^{T}\varvec{\Sigma }_n {\varvec{\dot{G}}}({\varvec{\theta }})\) is a uniformly continuous function of \(\varvec{\theta }\) and converges to \(\varvec{\Sigma }_g = \varvec{\Delta }_{g^2}-\varvec{\Delta }_g\varvec{\Delta }_g^T\) at \(\varvec{\theta }_0\) in probability by Assumptions 1-(4) and (5),
By construction, \({\mathbb {B}}_2 = na_n^2 \varvec{v}^T \varvec{\Sigma }_g \varvec{v} (1+o(1))\). Note that \({\mathbb {B}}_2\) is positive with order \(na_n^2 = \lambda _\varepsilon\), and \(\lambda _\varepsilon\) does not vanish to zero. Next,
where \([a_{kl}]_{k,l=1, \ldots , p}\) denotes the matrix whose component at the k-th row and l-th column equals \(a_{kl}\).
The first inequality is the Cauchy–Schwarz inequality with a semi-norm, and the second inequality holds since the maximum eigenvalue of \(\varvec{\Sigma }_n\) is 1. The compactness of the domain explains the first and second equalities. To sum up,
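Spelled out, the semi-norm Cauchy–Schwarz step has the generic form (with \(\varvec{x}, \varvec{y}\) standing for the vectors involved; \(\varvec{x} \mapsto (\varvec{x}^T \varvec{\Sigma }_n \varvec{x})^{1/2}\) is a semi-norm because \(\varvec{\Sigma }_n\) is symmetric positive semi-definite):
$$\begin{aligned} |\varvec{x}^T \varvec{\Sigma }_n \varvec{y}| \le (\varvec{x}^T \varvec{\Sigma }_n \varvec{x})^{1/2} (\varvec{y}^T \varvec{\Sigma }_n \varvec{y})^{1/2} \le \Vert \varvec{x}\Vert \, \Vert \varvec{y}\Vert , \end{aligned}$$
where the second inequality uses \(\lambda _{max}(\varvec{\Sigma }_n) \le 1\).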
Similarly, to deal with \({\mathbb {B}}_4\) we evaluate \(\left[ \varvec{g}_{kl}(\varvec{\theta }_n)^T \varvec{\Sigma }_n \varvec{\xi }\right]\) for all \(k,l =1,\ldots , p\).
Recall that the maximum eigenvalue of \(\varvec{\Sigma }_n \varvec{\Sigma }_\varepsilon \varvec{\Sigma }_n\) is not greater than \(\lambda _\varepsilon\). Since \(\text{ E }(\varvec{g}_{kl}(\varvec{\theta }_n)^T \varvec{\Sigma }_n \varvec{\xi })=0\), \(\varvec{g}_{kl}(\varvec{\theta }_n)^T \varvec{\Sigma }_n \varvec{\xi }=O_p(n^{1/2}\lambda _\varepsilon ^{1/2})\) and
Therefore, conditional on \(\varvec{U}\),
The last equality holds for \(\lambda _\varepsilon =o(n)\) in Assumption 2-(1). Finally, by (A1) and (A5), we attain
With large enough \(\Vert \varvec{v}\Vert\), (A6) stays positive with probability tending to 1, which completes the proof.
Proof of Theorem 2
Let \(b_n = a_n+c_n\),
where \({\mathbb {A}}'\) and \({\mathbb {B}}'\) are defined similarly to \({\mathbb {A}}\) and \({\mathbb {B}}\) in the proof of Theorem 1, with \(a_n\) replaced by \(b_n\), \({\mathbb {C}}=n\sum _{i=1}^{s} q_{\lambda _{n}}(|\theta _{0i}|)sgn(\theta _{0i})b_n v_i\), and \({\mathbb {D}}=(n/2)\sum _{i=1}^{s}q_{\lambda _n}'(|\theta _{0i}|)b_n^2 v_i^2(1+o(1))\). The expression of \({\mathbb {D}}\) comes from Assumption 3-(2). Then, by (A1) and (A5),
For the desired result, it is enough to show that \({\mathbb {C}}=O(nb_n^2)\Vert \varvec{v}\Vert\) and \({\mathbb {D}}=o(nb_n^2)\Vert \varvec{v}\Vert ^2\). By the definition of \(b_n\) and Assumption 3-(1),
Putting all the results above together, we obtain that \({\mathbb {B}}'\) dominates \({\mathbb {C}}\) and \({\mathbb {D}}\) for large enough \(\Vert \varvec{v}\Vert\). By Assumption 3-(1), \(b_n=O(n^{-1/2}\lambda _\varepsilon ^{1/2})\), so with probability tending to 1, (A7) is positive.
Proof of Theorem 3
By the mean-value theorem of a vector-valued function (Feng et al., 2013),
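For reference, the integral form of the vector-valued mean-value theorem, applied to the score \(\nabla S_n\) (the same form reappears in the proof of Theorem 4), reads:
$$\begin{aligned} \nabla S_n(\hat{\varvec{\theta }}) = \nabla S_n(\varvec{\theta }_0) + \left( \int _0^1 \nabla ^2 S_n(\varvec{\theta }_0 + t(\hat{\varvec{\theta }}-\varvec{\theta }_0))\,dt \right) (\hat{\varvec{\theta }}-\varvec{\theta }_0). \end{aligned}$$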
For an arbitrary vector \(\varvec{v} \in {\mathcal {R}}^p\), we consider \(n^{-1/2} \varvec{v}^T \nabla S_n(\varvec{\theta }_0)\) as follows.
Since the \(\varvec{x}(\varvec{s}_i) = \varvec{x}(\eta _n\varvec{u}_i)\) are independent, by the WLLN \(n^{-1}\varvec{v}^{T}\varvec{\dot{G}}(\varvec{\theta }_0)^{T}\varvec{1}=\varvec{v}^T \varvec{\Delta }_g (1+o_p(1))\). Thus, conditional on \(\varvec{U}\),
By Assumptions 1 and 2 and Lemma 1 with \(w_n(\varvec{s}_i) = \varvec{v}^{T}\left( \frac{\partial g(\varvec{x}(\varvec{s}_i);\varvec{\theta }_0)}{\partial \varvec{\theta }}-\varvec{\Delta }_g \right)\),
where \(\varvec{\Sigma }_g = \varvec{\Delta }_{g^2}-\varvec{\Delta }_g\varvec{\Delta }_g^T,\ \varvec{r}(\varvec{h}) = \varvec{r}_2(\varvec{h})-\varvec{\Delta }_g(\varvec{r}_1(\varvec{0})+\varvec{r}_1(\varvec{h}))^T+\text{ E }[\phi (\varvec{U}_1)]\varvec{\Delta }_g\varvec{\Delta }_g^T\), and \(\sigma (\varvec{h}) = \text{ E }\left[ \xi (\varvec{0})\xi (\varvec{h})\right]\). The variance expression follows from Lemma 1 since
Therefore, by the conditional version of the Cramér–Wold device,
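Recall that the Cramér–Wold device (here in its conditional form, applied given \(\varvec{U}\)) reduces the multivariate limit to the univariate limits established above:
$$\begin{aligned} \varvec{X}_n \overset{d}{\rightarrow } N(\varvec{0}, \varvec{\Sigma }) \iff \varvec{v}^T \varvec{X}_n \overset{d}{\rightarrow } N(0, \varvec{v}^T \varvec{\Sigma } \varvec{v}) \ \text { for all fixed vectors } \varvec{v}. \end{aligned}$$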
where \(\varvec{\Sigma }_\theta =\sigma (0)\varvec{\Sigma }_g + \int \sigma (\varvec{h}) \varvec{r}(\varvec{h}) d\varvec{h}\). The proof is completed by Slutsky’s theorem if we attain
where \(\varvec{\Sigma }_g = \varvec{\Delta }_{g^2}-\varvec{\Delta }_{g}\varvec{\Delta }_{g}^T\). For (A10), it is enough to show that
For (A11),
The first term converges to \(2\varvec{\Sigma }_g\) by Assumptions 1-(4) and (5), and the second term is \(O_p(n^{-1/2}\lambda _\varepsilon ^{1/2})\) by a procedure similar to (A4). Thus, by Assumption 2-(1), we achieve (A11). Next, since \(\hat{\varvec{\theta }}^{(s)}\) is root-n-consistent under Assumption 2-(2),
The above term can be decomposed into three terms as follows.
The first term is o(1) conditional on \(\varvec{U}\) by a procedure similar to (A2). The second part converges to 0 by
The first and third inequalities hold since p is finite and g is bounded, respectively. For the last term, similarly to (A4), we evaluate the variance of the (k, l)-th component.
This indicates that the last term is \(O_p(n^{-1/2})\). To sum up,
Lastly, we attain (A12) and, therefore, (A10). Bringing (A9) and (A10) together with Slutsky’s theorem, we obtain
Proof of Theorem 4
Proof of (i) We show that for any \(\epsilon >0\) and \(i \in \{s+1, \ldots , p\}\), \(P({\hat{\theta }}_i \ne 0) <\epsilon\). First, we break \(({\hat{\theta }}_i\ne 0)\) into two sets:
Then, it is enough to show for any \(\epsilon > 0\), \(P(E_n) <\epsilon /2\) and \(P(F_n) < \epsilon /2\). For any \(\epsilon >0\), we can show \(P(E_n)<\epsilon /2\) for large enough n because \(\Vert \varvec{{{\hat{\theta }}}}-\varvec{\theta }_0\Vert =O_p(n^{-1/2})\) under Assumption 2-(2). To verify \(P(F_n)<\epsilon /2\) for large enough n, we first show \(n^{1/2} q_{\lambda _{n}}(|{{\hat{\theta }}}_i|)=O_p(1)\) on the set \(F_n\). By the mean-value theorem again,
Recall that \(n^{-1/2}\nabla S_n(\varvec{\theta }_0) = O_p(1)\) from (A9). In addition, with \(\Vert \varvec{\theta }_0-\varvec{\theta }\Vert =O(n^{-1/2})\), results similar to (A10) give
Then, we have
Since \(\hat{\varvec{\theta }}\) is the local minimizer of \(Q_n(\varvec{\theta })\) with \(\Vert \hat{\varvec{\theta }}-\varvec{\theta }_0\Vert =O_p(n^{-1/2})\), we attain
from
Therefore, there exists \(M^\prime\) such that \(P\{n^{1/2} q_{\lambda _n}(|{\hat{\theta }}_i|)>M^{\prime }\}<\epsilon /2\) for large enough n, which implies \(P\{{\hat{\theta }}_i\ne 0, |{\hat{\theta }}_i|<Cn^{-1/2}, n^{1/2} q_{\lambda _n}(|{\hat{\theta }}_i|)>M^{\prime }\}<\epsilon /2\). Finally, by Assumptions 3-(3) and (4), for large enough n,
which leads to \(P(F_n)<\epsilon /2\).
Proof of (ii) By Taylor expansion,
where \(\varvec{q}_{\lambda _n}(\hat{\varvec{\theta }}) \cdot sgn(\hat{\varvec{\theta }})=\left( q_{\lambda _n}(|{\hat{\theta }}_1|)sgn({{\hat{\theta }}}_1),\ldots ,q_{\lambda _n}(|{\hat{\theta }}_p|)sgn({{\hat{\theta }}}_p)\right) ^{T}.\) Since \(\hat{\varvec{\theta }}\) is the local minimizer of \(Q_n({\varvec{\theta }})\), \(\nabla Q_n(\hat{\varvec{\theta }})=0\), which implies
Since \(n^{-1/2}\nabla S_n(\varvec{\theta }_0) \overset{d}{\rightarrow }N(\varvec{0}, 4\varvec{\Sigma }_\theta )\) by (A9) and \(n^{-1}\left( \int ^{1}_{0}\nabla ^2 S_n(\varvec{\theta }_0+(\varvec{{{\hat{\theta }}}}-\varvec{\theta }_0)t)dt \right) \overset{p}{\rightarrow }\ 2\varvec{\Sigma }_g\) by results similar to (A10),
Finally,
where \((\varvec{\Sigma }_g)_{11}^{-1}\) is the \(s\times s\) upper-left matrix of \(\varvec{\Sigma }_g^{-1}\).
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
You, H., Wu, WY., Lim, C.Y. et al. Spatial regression with multiplicative errors, and its application with LiDAR measurements. J. Korean Stat. Soc. 53, 1177–1204 (2024). https://doi.org/10.1007/s42952-024-00282-3