This paper is a selective review of the regularization methods scattered in the statistics literature. We introduce a general conceptual approach to regularization and fit most existing methods into it. We have tried to focus on the importance of regularization when dealing with today's high-dimensional objects: data and models. A wide range of examples is discussed, including nonparametric regression, boosting, covariance matrix estimation, principal component estimation, and subsampling.
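As a minimal, hypothetical illustration of the kind of method such a review covers (not code from the paper), ridge regression adds a penalty λ‖β‖² to least squares, which stabilizes estimation when the design matrix has many, possibly collinear, columns:

```python
import numpy as np

# Hypothetical sketch of regularization via ridge regression.
# The penalty lam * ||beta||^2 shrinks the estimate toward zero and
# makes the normal equations well conditioned in high dimensions.
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]          # only three active coefficients
y = X @ beta_true + rng.normal(scale=0.1, size=n)

lam = 1.0
# Closed-form ridge estimate: (X'X + lam * I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

The estimate is slightly shrunk relative to ordinary least squares but remains close to the active coefficients when the signal dominates the penalty.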
An algorithm for estimating the parameters of linear models under several robust conditions is presented. With respect to the robust conditions, firstly, the dependent variables may be either non-grouped or grouped. Secondly, the distribution of the errors may vary ...
This work presents an EM approach for nonlinear regression with incomplete data. Radial Basis Function (RBF) Neural Networks are employed since their architecture is appropriate for efficient parameter estimation. The expectation (E) step of the training algorithm takes the censoring of the data into account, and the maximization (M) step can be implemented in several ways. The results guarantee the convergence of the algorithm within the GEM (Generalized EM) framework.
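The E/M alternation described in the abstract can be sketched in a few lines. The sketch below is a deliberately simplified, hypothetical analogue rather than the paper's method: it fits a straight line (not an RBF network) to right-censored responses with a known error scale, imputing each censored value by its conditional expectation in the E step and refitting by least squares in the M step:

```python
import math
import random

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_sf(z):
    # survival function 1 - Phi(z) of the standard normal
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical simulated data: y = a + b*x + noise, right-censored at c.
random.seed(1)
a_true, b_true, sigma, c = 1.0, 2.0, 0.5, 3.0
xs = [i / 20 for i in range(40)]
latent = [a_true + b_true * x + random.gauss(0, sigma) for x in xs]
ys = [min(y, c) for y in latent]           # observed responses
cens = [y >= c for y in latent]            # censoring indicators

a, b = 0.0, 0.0
for _ in range(50):
    # E step: replace each censored y by E[y* | y* > c] under the current fit
    yhat = []
    for x, y, is_c in zip(xs, ys, cens):
        mu = a + b * x
        if is_c:
            z = (c - mu) / sigma
            yhat.append(mu + sigma * normal_pdf(z) / max(normal_sf(z), 1e-12))
        else:
            yhat.append(y)
    # M step: ordinary least squares on the completed data
    n = len(xs)
    mx, my = sum(xs) / n, sum(yhat) / n
    b = sum((x - mx) * (yv - my) for x, yv in zip(xs, yhat)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
```

Because each M step increases the complete-data likelihood, this alternation falls under the GEM umbrella the abstract refers to, even when the M step is only an improvement rather than an exact maximization.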
In this paper we introduce an iterative estimation procedure based on conditional modes, suitable for fitting linear models when the errors are known to be unimodal and the dependent data stem from different sources and may therefore be either non-grouped or grouped under different classification criteria. The procedure requires, at each step, the imputation of the exact values of the grouped data and runs by means of a process similar to the EM algorithm with normal errors. The expectation step has been replaced with a mode step, which avoids awkward integration under general error distributions, and the maximisation step with a natural one that coincides with it only when the error distribution is normal. Notwithstanding these modifications, we prove that, on the one hand, the iterative estimating algorithm converges to a point that is unique and independent of the starting values and, on the other hand, our final estimate, being an M-estimator, may enjoy good stochastic asymptotic properties such as consistency, boundedness in L2, and limit normality.
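A toy sketch of the mode-step idea, under the assumption of normal errors (the data and interval choices here are hypothetical, not from the paper): when a response is known only to lie in an interval, the mode of a normal density centred at the current fitted value and truncated to that interval is the fitted value clipped to the interval, so the imputation reduces to a clip followed by ordinary least squares:

```python
# Mode-imputation iteration for grouped responses, assuming normal
# errors (hypothetical simplification of the procedure described above).
def clip(v, lo, hi):
    return max(lo, min(hi, v))

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
# Each response is known only up to an interval (grouped data);
# the intervals below contain the true line y = 1 + 2x.
intervals = [(0.5, 1.5), (1.5, 2.5), (2.5, 3.5),
             (3.5, 4.5), (4.5, 5.5), (5.5, 6.5)]

a, b = 0.0, 0.0
for _ in range(100):
    # Mode step: impute each grouped response by the truncated-normal mode,
    # i.e. the current prediction clipped into its interval.
    yimp = [clip(a + b * x, lo, hi) for x, (lo, hi) in zip(xs, intervals)]
    # Estimation step: ordinary least squares on the imputed responses
    # (this coincides with the M step only under normal errors).
    n = len(xs)
    mx, my = sum(xs) / n, sum(yimp) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, yimp)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
```

The iteration reaches a fixed point regardless of the starting values of `a` and `b`, mirroring the uniqueness and starting-value independence claimed in the abstract.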
Papers by Carlos Rivero