We present an energy-based approach to estimate a dense disparity map between two images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it easily from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. To reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. We prove the existence and uniqueness of a solution of the underlying parabolic PDE. Experimental results ...
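The minimization strategy described above can be illustrated on a toy problem. The sketch below is a simplified stand-in, not the authors' disparity model, Nagel-Enkelmann operator, or scale-space: it applies plain gradient descent to a discretized quadratic energy with a data term and a smoothness (regularization) term; the parameters `lam` and `lr` and the periodic boundary handling are illustrative choices.

```python
import numpy as np

def minimize_energy(data, lam=1.0, lr=0.05, steps=2000):
    """Gradient descent on E(d) = ||d - data||^2 + lam * ||grad d||^2."""
    d = np.zeros_like(data)
    for _ in range(steps):
        grad_data = 2.0 * (d - data)           # gradient of the data term
        # discrete Laplacian (periodic boundary); its negative is the
        # gradient of the smoothness term
        lap = np.roll(d, 1) + np.roll(d, -1) - 2.0 * d
        d = d - lr * (grad_data - 2.0 * lam * lap)
    return d
```

Larger `lam` favors smoother solutions, mimicking how the regularization term trades data fidelity against smoothness in the full model.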
In this paper the problem of recovering a regularized solution of Fredholm integral equations of the first kind with Hermitian and square-integrable kernels, and with data corrupted by additive noise, is considered. Instead of using a variational regularization of Tikhonov type, based on a priori global bounds, we propose a method of truncation of eigenfunction expansions that can be proved to converge asymptotically, in the sense of the L^2-norm, in the limit of vanishing noise. Here we extend the probabilistic counterpart of this procedure by constructing a probabilistically regularized solution without assuming any order structure on the sequence of the Fourier coefficients of the data. This probabilistic approach allows us to use the statistical tools of time-series analysis, and in this way we attain a new regularizing algorithm, which is illustrated by some numerical examples. Finally, a comparison with solutions obtained by means of the variational regularization ...
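For the discretized (matrix) case, the truncation idea can be sketched in a few lines. This is an illustration of plain spectral truncation, not the authors' probabilistic criterion; the threshold choice `noise_level` is an assumption.

```python
import numpy as np

def truncated_eigen_solution(A, y, noise_level):
    """Regularize A x = y by truncating the eigen-expansion of Hermitian A."""
    w, V = np.linalg.eigh(A)             # A = V diag(w) V^T
    c = V.T @ y                          # Fourier coefficients of the data
    keep = np.abs(w) > noise_level       # drop noise-dominated modes
    return V[:, keep] @ (c[keep] / w[keep])
```

Components whose eigenvalues fall below the noise level would amplify the data noise if inverted, so they are simply discarded.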
Portfolio optimization means building a portfolio that maximizes potential returns from investments while not exceeding the amount of risk you are willing to carry. Creating a balanced portfolio with many different investments, such as stocks, bonds, mutual funds, etc., is the best way to spread out assets to maintain a risk-to-reward ratio. The portfolio optimization model has limited impact in practice due to estimation issues when applied with real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution towards one associated with less estimation error in the performance. We consider PBR for both mean-variance and mean-CVaR problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on a rank-1 approximation and another based on a convex quadratic approximation.
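PBR itself requires conic optimization machinery; as a simpler stand-in for the estimation-error problem it targets, the sketch below shows how shrinking the sample covariance (a classic regularization, not PBR) stabilizes a minimum-variance portfolio when the sample covariance is singular or noisy. The shrinkage weight `delta` is an illustrative parameter.

```python
import numpy as np

def min_variance_weights(returns, delta=0.5):
    """Min-variance weights with covariance shrinkage toward a scaled identity."""
    S = np.cov(returns, rowvar=False)          # sample covariance (may be singular)
    p = S.shape[0]
    S_reg = (1 - delta) * S + delta * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(S_reg, np.ones(p))     # unnormalized min-variance direction
    return w / w.sum()                         # fully-invested weights
```

With fewer observations than assets the raw sample covariance cannot be inverted at all; the shrinkage term makes the problem well-posed.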
Three issues concerning the iterative solution of the nonlinear equations governing the flows and heads in a water distribution system network are considered. Zero flows cause a computation failure (division by zero) when the Global Gradient Algorithm of Todini and Pilati is used to solve for the steady state of a system in which the head loss is modeled by the Hazen-Williams formula. As a solution to this first issue, a regularization technique that overcomes this failure is proposed. The second issue relates to zero flows in the Darcy-Weisbach formulation. This work explains for the first time why zero flows do not lead to a division by zero when the head loss is modeled by the Darcy-Weisbach formula. The authors show how to handle the computation appropriately in the case of laminar flow (the only instance in which zero flows may occur). However, as is shown, a significant loss of accuracy can result if the Jacobian matrix, necessary for the solution process, becomes poorly conditioned, and so it is recommended that the regularization technique be used for the Darcy-Weisbach case also. Only a modest extra computational cost is incurred when the technique is applied. The third issue relates to a new convergence stopping criterion for the iterative process based on the infinity-norm of the vector of nodal head differences between one iteration and the next. This test is recommended because it has a more natural physical interpretation than the relative discharge stopping criterion currently used in standard software packages such as EPANET. In addition, it is recommended to check the infinity norms of the residuals once iteration has stopped. The residuals test ensures that inaccurate solutions are not accepted.
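The two computational ideas above can be sketched briefly. For Hazen-Williams head loss h(q) = r·q·|q|^(n-1), the derivative n·r·|q|^(n-1) vanishes at q = 0, which is what breaks the Global Gradient Algorithm; a common regularization (an assumed form, not necessarily the authors' exact scheme) linearizes the curve inside a small threshold so the slope never reaches zero. The infinity-norm head test follows directly.

```python
import numpy as np

def hw_derivative_regularized(q, r, n=1.852, t=1e-4):
    """Derivative of Hazen-Williams head loss, linearized for |q| < t."""
    if abs(q) >= t:
        return n * r * abs(q) ** (n - 1)
    return n * r * t ** (n - 1)   # constant positive slope near zero flow

def heads_converged(h_new, h_old, tol=1e-4):
    """Infinity-norm test on nodal head changes between iterations."""
    return np.max(np.abs(h_new - h_old)) < tol
```

Outside the threshold the regularized derivative coincides with the exact one, so the fix only affects the problematic neighborhood of zero flow.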
Online speech recognition is crucial for developing natural human-machine interfaces. This modality, however, is significantly more challenging than off-line ASR, since real-time/low-latency constraints inevitably hinder the use of future information, which is known to be very helpful for robust predictions. A popular solution to mitigate this issue consists of feeding neural acoustic models with context windows that gather some future frames. This introduces a latency which depends on the number of employed look-ahead features. This paper explores a different approach, based on estimating the future rather than waiting for it. Our technique encourages the hidden representations of a unidirectional recurrent network to embed some useful information about the future. Inspired by a recently proposed technique called Twin Networks, we add a regularization term that forces forward hidden states to be as close as possible to cotemporal backward ones, computed by a "twin" neural network running backwards in time. The experiments, conducted on a number of datasets, recurrent architectures, input features, and acoustic conditions, have shown the effectiveness of this approach. One important advantage is that our method does not introduce any additional computation at test time compared to standard unidirectional recurrent networks.
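The regularization term itself is simple: an L2 penalty pulling each forward hidden state toward the cotemporal backward state of the twin network. A minimal numpy sketch (the hidden states here are placeholder arrays; in the paper they come from the recurrent networks):

```python
import numpy as np

def twin_loss(h_fwd, h_bwd, lam=0.1):
    """L2 penalty between cotemporal forward and backward hidden states.

    h_fwd, h_bwd: (time, hidden) arrays; lam scales the regularizer.
    """
    return lam * np.mean(np.sum((h_fwd - h_bwd) ** 2, axis=1))
```

At training time this loss is added to the main objective; at test time the backward twin is discarded, which is why no extra inference cost is incurred.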
The term “super-resolution” is typically used for a high-resolution image produced from several low-resolution noisy observations. In this paper, we consider the problem of high-quality interpolation of a single noise-free image. Several aspects of the corresponding super-resolution algorithm are investigated: the choice of regularization term, the dependence of the result on the initial approximation, convergence speed, and heuristics to facilitate convergence and improve the visual quality of the resulting image.
The prediction of ply delamination in laminated composites is modeled by using interface elements. The numerical approach is based on the cohesive zone model, which is shown to provide an efficient description of delamination growth. The theoretical study is performed in the quasi-static regime and an implicit finite element scheme is used. A comprehensive 1D example shows that cohesive elements may induce numerical instability and that the use of a viscous regularization is relevant to suppress the instability and to obtain reasonable numerical convergence. This paper describes a technique that can be used to introduce damping into cohesive zone finite element simulations of crack nucleation and growth, with a view to avoiding convergence difficulties in quasi-static finite element simulations. A new kind of viscous regularization is proposed, which applies to the bilinear debonding model and has a limited rate dependency. An experimental study is performed by using DCB (double cantilever beam) samples for the mode I interlaminar fracture toughness test; good agreement between numerical predictions and experimental results is obtained.
This study considers the robust identification of the parameters describing a Sugeno-type fuzzy inference system with uncertain data. The objective is to minimize the worst-case residual error using a numerically efficient algorithm. Sugeno-type fuzzy systems are linear in the consequent parameters but nonlinear in the antecedent parameters. The robust consequent-parameter identification problem can be formulated as a second-order cone programming problem. The optimal solution of this second-order cone problem can be interpreted as the solution of a Tikhonov regularization problem with a special choice of regularization parameter which is optimal for robustness (Ghaoui and Lebret (1997). SIAM Journal on Matrix Analysis and Applications 18, 1035–1064). The final regularized nonlinear optimization problem, allowing simultaneous identification of antecedent and consequent parameters, is solved iteratively using a generalized Gauss–Newton-like method. To illustrate the approach, several simulation studies on numerical examples, including the modelling of a spectral data function (a one-dimensional benchmark example), are provided. The proposed robust fuzzy identification scheme has been applied to approximate the physical fitness of patients with a fuzzy expert system. The identified fuzzy expert system is shown to be capable of capturing the decisions (experiences) of a medical expert.
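The Tikhonov connection mentioned above is easy to state for the linear-in-parameters (consequent) part: the regularized least-squares solution is theta = (AᵀA + mu·I)⁻¹ Aᵀb. A minimal sketch (generic Tikhonov, not the paper's robustness-optimal choice of mu):

```python
import numpy as np

def tikhonov(A, b, mu):
    """Tikhonov-regularized least squares: minimize ||A x - b||^2 + mu ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
```

As mu → 0 this recovers the ordinary least-squares solution; the robustness result cited in the abstract says the worst-case-optimal mu can be computed from the uncertainty bound.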
Spectral unmixing is an important tool in hyperspectral data analysis for estimating endmembers and abundance fractions in a mixed pixel. This paper examines the applicability of a recently developed algorithm called graph regularized nonnegative matrix factorization (GNMF) for this purpose. The proposed approach exploits the intrinsic geometrical structure of the data, besides considering positivity and full additivity constraints. Simulated data based on measured spectral signatures are used to evaluate the proposed algorithm. Results in terms of abundance angle distance (AAD) and spectral angle distance (SAD) show that this method can effectively unmix hyperspectral data.
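A sketch of GNMF-style multiplicative updates, following the standard form of graph-regularized NMF (the factorization X ≈ WH, adjacency A, degree matrix D, and strength `lam` are the usual GNMF ingredients, not this paper's exact notation, and the full-additivity constraint on abundances is omitted):

```python
import numpy as np

def gnmf(X, k, A, lam=0.01, iters=200, seed=0):
    """Graph-regularized NMF: X ~ W H with Tr(H L H^T) smoothness, L = D - A."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    D = np.diag(A.sum(axis=1))
    eps = 1e-9                              # avoid division by zero
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

The multiplicative form keeps both factors nonnegative; the graph term pulls the representations of neighboring samples (columns of H) toward each other, which is how the intrinsic geometry of the data is exploited.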
In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and the accuracy of the numerical solution to the problem, we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good and the techniques seem to be very promising.
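The combination of conjugate gradients with a stopping rule can be sketched for a generic discretized least-squares problem. This is a plain CG-on-normal-equations loop with a discrepancy-style stop (iterate until the residual falls to the noise level), not the paper's adjoint-based gradient or splitting scheme:

```python
import numpy as np

def cg_with_stop(A, b, noise_level, max_iter=100):
    """CG on the normal equations, stopped when ||b - A x|| <= noise_level."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)      # gradient-direction residual
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) <= noise_level:
            break              # early stopping acts as the regularizer
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```

Stopping early prevents the iteration from fitting the noise, which is the standard way an iterative method regularizes an ill-posed problem.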
Although the various European governments advocate managing legal and orderly migration flows, the presence of irregular immigrants on European territory is an evident fact. The countries of southern Europe are, for various reasons, those hosting the largest numbers of irregular immigrants, but the rest of Europe must also confront this phenomenon. This article aims to analyze the mechanisms and instruments implemented over recent years to respond to the challenge of irregularity in the different European countries. Special attention is paid to the turning point represented by 2005, the year from which the European Union began to build a common framework for debating and reflecting on the policy responses needed to manage irregular immigration on the European stage.
My work concerns the analysis of brain dynamics from functional neuroimaging data acquired in functional Magnetic Resonance Imaging (fMRI) examinations. It covers both the study of the dynamics evoked by a brain-activation paradigm and that arising from spontaneous or "background" activity when the subject is at rest (resting state). The algorithms I have developed rely largely on explicit knowledge of the experimental paradigm designed by the experimenter, but also belong, to a lesser extent, to the family of exploratory methods, which do not exploit this paradigm information. This research theme embraces both low-level problems related to image reconstruction in MRI and higher-level aspects concerning the estimation and selection of non-parametric regional hemodynamic models, capable of ...
The Extreme Learning Machine (ELM) algorithm, based on single-hidden-layer feedforward neural networks, has been shown to be one of the best time series prediction techniques. Furthermore, the algorithm has good generalization performance with extremely fast learning speed. However, ELM faces an overfitting problem that can affect model quality, due to its implementation using an empirical risk minimization scheme. Therefore, this study aims to improve ELM by introducing an activation-function regularization in ELM, called RAF-ELM. The experiment was conducted in two phases. First, we investigated the performance of the modified RAF-ELM using four types of activation functions: Sigmoid, Sine, Tribas, and Hardlim. In this study, the input weights and biases for hidden layers are randomly selected, whereas the best number of hidden-layer neurons is determined from 5 to 100. This experiment used UCI benchmark datasets. The Sigmoid activation function with 99 neurons showed the best performance. The proposed method improved the accuracy and learning speed by up to 0.016205 MAE and 0.007 seconds of processing time, respectively, compared with conventional ELM, and improved accuracy by up to 0.0354 MSE compared with the state-of-the-art algorithm. The second experiment validates the proposed RAF-ELM using 15 regression benchmark datasets. RAF-ELM has been compared with four neural network techniques, namely conventional ELM, Back Propagation, Radial Basis Function, and Elman networks. The results show that the RAF-ELM technique obtains the best performance compared to the other techniques in terms of accuracy for various time series data drawn from various domains.
This study uses remote sensing technology, which can provide information about the condition of the earth's surface rapidly and spatially. The study area was in Karawang District, lying in the northern part of West Java, Indonesia. We address the classification of paddy growth stages using LANDSAT 8 image data obtained from multi-sensor remote sensing images taken from October 2015 to August 2016. This study pursues a fast and accurate classification of paddy growth stages by employing multiple-regularization learning on deep learning methods such as DNN (Deep Neural Networks) and 1-D CNN (1-D Convolutional Neural Networks). The regularizations used are Fast Dropout, Dropout, and Batch Normalization. To evaluate the effectiveness, we also compared our method with other machine learning methods (Logistic Regression, SVM, Random Forest, and XGBoost). The data used are seven bands of LANDSAT-8 spectral data samples that correspond to paddy growth stage data obtained from the i-Sky (eye in the sky) Innovation system. The growth stages are determined based on the paddy crop phenology profile from a time series of LANDSAT-8 images. The classification results show that the MLP using the multiple regularizations Dropout and Batch Normalization achieves the highest accuracy for this dataset.
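Of the regularizers compared above, dropout is the simplest to show in isolation. The sketch below implements the standard "inverted dropout" mask (framework-independent numpy; the deep-learning training loop around it is omitted):

```python
import numpy as np

def dropout(x, rate, rng):
    """Inverted dropout: zero out activations with probability `rate`."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)   # rescale so the expected activation is unchanged
```

The rescaling by 1/(1 - rate) means nothing special is needed at test time: the layer is simply left out, as in the standard formulation.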
LPR (License Plate Recognition) is a main component of modern transportation management systems. It uses a set of computer image-processing technologies to identify vehicles by their license plates. We propose a novel super resolution (SR) reconstruction algorithm to handle license plate text in real traffic videos. To make license plate numbers more legible, a generalized discontinuity-adaptive Markov random field (DAMRF) model is proposed based on the recently reported bilateral filtering, which not only preserves edges but is also robust to noise. Moreover, instead of looking for a fixed value for the regularization parameter, a method for automatically estimating it from the input images is applied to the proposed model. The information needed to determine the regularization parameter is updated at each iteration step, based on the currently available reconstructed image. Character recognition is the core of LPR.
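The edge-preserving behavior the DAMRF model borrows from bilateral filtering can be illustrated in 1D: each neighbor is weighted by both spatial distance and intensity difference, so smoothing is suppressed across edges. This sketch is a generic bilateral filter, not the paper's MRF prior; the parameters are illustrative.

```python
import numpy as np

def bilateral_filter_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Bilateral filter: spatial Gaussian times intensity (range) Gaussian."""
    out = np.empty_like(signal)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)
                   - ((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

Across a sharp edge the intensity difference makes the range weight collapse to zero, so the edge survives while small fluctuations within flat regions are averaged out.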