Conference Presentations by Valeria Ruggiero
Typical applications in signal and image processing involve the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m ≫ n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and A^T can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, gradient projection methods can be improved by suitable steplength rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored to the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and on real data arising from a problem of fibre orientation estimation in diffusion MRI.
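As a hedged illustration of the kind of scheme described above, the sketch below applies one possible diagonal variable metric to the nonnegative least squares problem; with a unit steplength this particular scaling reduces to the classical ISRA update, and all names and choices are assumptions for illustration, not the authors' construction.

```python
# Minimal sketch: scaled gradient projection for
#   min 0.5*||A x - b||^2   subject to  x >= 0,
# with a diagonal variable metric D_k = diag(x_k / (A^T A x_k)).
import numpy as np

def scaled_gradient_projection(A, b, x0, iters=200, eps=1e-12):
    x = np.maximum(x0, 0.0)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # gradient of the LS term
        d = x / np.maximum(A.T @ (A @ x), eps)    # diagonal metric entries
        x = np.maximum(x - 1.0 * d * grad, 0.0)   # scaled step + projection
    return x

# usage on random nonnegative data
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((100, 30)))
b = A @ np.abs(rng.standard_normal(30))
x = scaled_gradient_projection(A, b, np.ones(30))
```

In a full implementation the fixed unit steplength would be replaced by the steplength rules or the extrapolation/inertial step mentioned above.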
Papers by Valeria Ruggiero
Applied Mathematics in Science and Engineering
Computational Methods for Inverse Problems in Imaging, 2019
Minimization problems often occur in modeling phenomena dealing with real-life applications that nowadays handle large-scale data and require real-time solutions. For these reasons, among all possible iterative schemes, first-order algorithms represent a powerful tool for solving such optimization problems, since they admit a relatively simple implementation and avoid onerous computations during the iterations. On the other hand, a well-known drawback of these methods is a possibly poor convergence rate, especially when a highly accurate solution is required. Consequently, the acceleration of first-order approaches is a widely studied field which has attracted considerable research effort in the last decades. The possibility of considering a variable underlying metric, changing at each iteration and aimed at capturing local properties of the original problem, has proved effective in speeding up first-order methods. In this work we analyze in depth a possible way to include a variable metric in first-order methods for the minimization of a functional which can be expressed as the sum of a differentiable term and a nondifferentiable one. In particular, the strategy discussed can be realized by means of a suitable sequence of symmetric and positive definite matrices belonging to a compact set, together with an Armijo-like linesearch procedure to select the steplength along the descent direction, ensuring a sufficient decrease of the objective function.
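A minimal sketch of such a variable-metric forward-backward step with an Armijo-like linesearch, assuming for simplicity a diagonal metric and an l1 nondifferentiable term; the parameter names and these choices are illustrative, not the paper's setting in full generality.

```python
# Sketch: variable-metric forward-backward with Armijo-like linesearch
# for F(x) = f(x) + lam*||x||_1 (an illustrative nondifferentiable term).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def vm_forward_backward(f, grad_f, lam, x0, alpha=1.0, beta=1e-4,
                        delta=0.5, iters=100):
    F = lambda v: f(v) + lam * np.abs(v).sum()
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        # Diagonal s.p.d. metric D_k; the theory only requires the D_k to
        # lie in a compact set of s.p.d. matrices (identity used here).
        d_metric = np.ones_like(x)
        # Forward-backward point in the metric D_k (scaled soft-threshold)
        z = soft_threshold(x - alpha * g / d_metric, lam * alpha / d_metric)
        direction = z - x
        # Predicted decrease used by the Armijo-like condition
        h = g @ direction + lam * (np.abs(z).sum() - np.abs(x).sum())
        gamma = 1.0
        for _ in range(50):            # backtrack until sufficient decrease
            if F(x + gamma * direction) <= F(x) + beta * gamma * h:
                break
            gamma *= delta
        x = x + gamma * direction
    return x

# usage: f(x) = 0.5*||x - c||^2, whose solution is soft_threshold(c, lam)
c = np.array([3.0, -0.2, 0.0, 1.5])
x_star = vm_forward_backward(lambda v: 0.5 * ((v - c) ** 2).sum(),
                             lambda v: v - c, lam=1.0, x0=np.zeros(4))
```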
Applied Mathematics and Computation, 2017
SIAM Journal on Optimization, 2016
Concluding conference on the activities carried out within the Progetto Lauree Scientifiche (PLS) with the teachers and students who took part in the project in the years 2005-2009. It was held on Thursday 28 May in the Aula Magna of the Università di Ferrara, at 15:00. The conference was an occasion both to take stock of the activities carried out within the PLS project in the three disciplines involved (Chemistry, Physics and Mathematics) and to discuss future prospects, involving students and teachers of the schools that participated in the various activities. Numerous teachers attended, together with a large group of the students who had taken part in the project activities (laboratories and the excellence course).
The aim of this paper is to show that the theorem on the global convergence of the Newton interior-point (IP) method presented in Ref. 1 can be proved under weaker assumptions. Indeed, we assume the boundedness of the sequences of multipliers related to nontrivial constraints, instead of the hypothesis that the gradients of the inequality constraints corresponding to slack variables not bounded away from zero are linearly independent. By numerical examples, we show that, in the implementation of the Newton IP method, loss of boundedness in the iteration sequence of the multipliers detects when the algorithm does not converge from the chosen starting point.
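A hedged sketch of how this observation can be exploited in practice: inside a Newton IP loop one can monitor the size of the multiplier iterates and stop when they blow up, since unbounded multipliers flag that the method will not converge from that starting point. The loop body, names and threshold below are placeholders, not the paper's algorithm.

```python
# Illustrative divergence check: flag the run as non-convergent when the
# multiplier iterates grow without bound.
import numpy as np

def check_multipliers(lmbda, threshold=1e8):
    if np.linalg.norm(lmbda, np.inf) > threshold:
        raise RuntimeError("multipliers unbounded: IP method is not "
                           "converging from this starting point")

# inside a (hypothetical) Newton IP loop:
# for k in range(max_iter):
#     x, lmbda, s = newton_ip_step(x, lmbda, s)   # placeholder step
#     check_multipliers(lmbda)
```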
Computational Optimization and Applications
The aim of the project is to develop image reconstruction methods for applications in microscopy, astronomy and medicine, and to release software especially tailored to each specific application. The image acquisition process produces a degradation of the information with both deterministic and statistical features. For this reason, image enhancement is needed in order to remove the noise and the blurring effects. The underlying mathematical model belongs to the class of inverse problems, which are very difficult to solve, especially when the specific features of real applications are included in the model. Reliable reconstruction methods have to take into account the statistics of the image formation process, several physical constraints and important features to be preserved in the restored image, such as edges and details. The proposed reconstruction methods are based on constrained optimization methods, which are well suited to the processing of large-size images and also to the 3D case. The research group has long-term experience in optimization methods for this kind of application and in the implementation of algorithms on parallel and distributed systems, such as GPUs.
ANNALI DELL'UNIVERSITA' DI FERRARA
Computational Optimization and Applications
Gradient projection methods represent effective tools for solving large-scale constrained optimization problems thanks to their simple implementation and low computational cost per iteration. Despite these good properties, a slow convergence rate can affect gradient projection schemes, especially when highly accurate solutions are needed. A strategy to mitigate this drawback consists in properly selecting the values for the steplength along the negative gradient. In this paper, we consider the class of gradient projection methods with line search along the projected arc for box-constrained minimization problems, and we analyse different strategies to define the steplength. It is well known in the literature that steplength selection rules able to approximate, at each iteration, the eigenvalues of the inverse of a suitable submatrix of the Hessian of the objective function can improve the performance of gradient projection methods. In this perspective, we propose an automatic hybrid steplength selection technique.
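As a rough illustration of this class of rules, the sketch below combines a linesearch along the projected arc with the Barzilai-Borwein (BB1) steplength, whose reciprocal is a Rayleigh-quotient approximation of a Hessian eigenvalue; it is a simplified stand-in, not the hybrid rule proposed in the paper, and all names are illustrative.

```python
# Gradient projection with Armijo linesearch along the projected arc
# x(a) = P(x - a*g) for box constraints [lo, up], plus a BB1 steplength.
import numpy as np

def project(x, lo, up):
    return np.clip(x, lo, up)

def gp_projected_arc(f, grad_f, x0, lo, up, iters=100,
                     beta=1e-4, delta=0.5, a_min=1e-10, a_max=1e10):
    x = project(x0, lo, up)
    g = grad_f(x)
    alpha = 1.0
    for _ in range(iters):
        a = alpha
        while True:                      # backtrack along the projected arc
            x_new = project(x - a * g, lo, up)
            if f(x_new) <= f(x) + beta * (g @ (x_new - x)) or a < a_min:
                break
            a *= delta
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        # BB1 rule: 1/alpha approximates an eigenvalue of the Hessian
        alpha = np.clip(s @ s / (s @ y), a_min, a_max) if s @ y > 0 else a_max
        x, g = x_new, g_new
    return x
```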
Lecture Notes in Computer Science, 2020
This paper deals with the steplength selection in stochastic gradient methods for large-scale optimization problems arising in machine learning. We introduce an adaptive steplength selection derived by tailoring a limited-memory steplength rule, recently developed in the deterministic context, to the stochastic gradient approach. The proposed steplength rule provides values within an interval whose bounds need to be prefixed by the user. A suitable choice of the interval bounds allows the method to perform similarly to the standard stochastic gradient method equipped with the best-tuned steplength. Since the setting of the bounds only slightly affects the performance, the new rule makes the tuning of the parameters less expensive than the choice of the optimal prefixed steplength in the standard stochastic gradient method. We evaluate the behaviour of the proposed steplength selection in training binary classifiers on well-known data sets and with different loss functions.
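A minimal sketch of the interface this implies: a stochastic gradient loop whose per-iteration steplength, however it is produced, is clipped to the user-prefixed interval [alpha_min, alpha_max]. The decaying placeholder rule 1/sqrt(k) stands in for the limited-memory rule of the paper, and all names are illustrative.

```python
# SGD with an adaptive steplength confined to a prefixed interval.
import numpy as np

def sgd_bounded_step(grad_batch, w0, n_samples, alpha_min, alpha_max,
                     batch=32, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    w = w0.copy()
    k = 0
    for _ in range(epochs):
        for _ in range(n_samples // batch):
            idx = rng.choice(n_samples, batch, replace=False)
            k += 1
            # placeholder adaptive value, clipped to the user's interval
            alpha = np.clip(1.0 / np.sqrt(k), alpha_min, alpha_max)
            w -= alpha * grad_batch(w, idx)   # mini-batch gradient step
    return w
```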
Machine Learning, Optimization, and Data Science, 2022
Machine Learning, Optimization, and Data Science, 2020
The steplength selection is a crucial issue for the effectiveness of stochastic gradient methods for large-scale optimization problems arising in machine learning. In a recent paper, Bollapragada et al. [1] propose to include an adaptive subsampling strategy into a stochastic gradient scheme. We propose to combine this approach with a selection rule for the steplength, borrowed from the full-gradient scheme known as the Limited Memory Steepest Descent (LMSD) method [4] and suitably tailored to the stochastic framework. This strategy, based on the Ritz-like values of a suitable matrix, provides an estimate of the local Lipschitz constant of the gradient of the objective function without introducing line-search techniques, while the possible increase of the subsample size used to compute the stochastic gradient makes it possible to control the variance of this direction. An extensive numerical experimentation on convex and non-convex loss functions highlights that the new rule makes the tuning of the parameters less expensive than the selection of a suitable constant steplength in standard and mini-batch stochastic gradient methods. The proposed procedure has also been compared with the Momentum and ADAM methods.
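The sketch below shows only the Ritz-value machinery (Fletcher's limited-memory idea) applied to a window of stored stochastic gradients; the adaptive subsampling test of Bollapragada et al. is omitted, and the function name and safeguards are illustrative assumptions.

```python
# Ritz-like values from m stored gradients, LMSD-style.
import numpy as np

def ritz_steplengths(G, g_next, alphas, eps=1e-12):
    """G: n x m matrix of the last m (stochastic) gradients; g_next: the
    following gradient; alphas: the m steplengths used. Returns the
    reciprocals of the Ritz-like values as the next sweep of steplengths."""
    m = G.shape[1]
    Q, R = np.linalg.qr(G)                   # thin QR, G = Q R
    # Bidiagonal J encodes g_{j+1} = g_j - alpha_j * A g_j on a quadratic
    J = np.zeros((m + 1, m))
    for j in range(m):
        J[j, j] = 1.0 / alphas[j]
        J[j + 1, j] = -1.0 / alphas[j]
    # T = Q^T A Q = [R, Q^T g_next] J R^{-1}   (m x m)
    T = np.hstack([R, (Q.T @ g_next)[:, None]]) @ J @ np.linalg.inv(R)
    theta = np.linalg.eigvals(T).real        # Ritz-like values
    theta = theta[theta > eps]               # keep positive estimates only
    return np.sort(1.0 / theta)[::-1]        # steplengths, largest first
```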
Inverse Imaging with Poisson Data is an invaluable resource for graduate students, postdocs and researchers interested in the application of inverse problems to applied sciences such as microscopy, medical imaging and astronomy. The purpose of the book is to provide a comprehensive account of the theoretical results, methods and algorithms related to the problem of image reconstruction from Poisson data within the framework of the maximum likelihood approach introduced by Shepp and Vardi.
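For reference, the Shepp-Vardi maximum-likelihood framework leads to the classical expectation-maximization (Richardson-Lucy) update. A minimal dense-matrix sketch is given below; the imaging operator A as a plain matrix is an assumption for illustration (real codes use convolutions, background terms and regularization).

```python
# MLEM / Richardson-Lucy iteration for Poisson data y ~ Poisson(A x):
#   x_{k+1} = (x_k / A^T 1) * A^T( y / (A x_k) ),  elementwise.
import numpy as np

def mlem(A, y, x0, iters=50, eps=1e-12):
    x = x0.copy()
    at1 = A.T @ np.ones(A.shape[0])          # normalization A^T 1
    for _ in range(iters):
        x = x / at1 * (A.T @ (y / np.maximum(A @ x, eps)))
    return x
```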
Applied Mathematics and Computation, 2017
Optimization Methods and Software, 1993
This paper is concerned with the development of the method of multipliers combined with the conjugate gradient algorithm for solving the special linear system

  A x + B y = b,
  B^T x = c,

where A is a real symmetric nonnegative definite (n × n) matrix, B is a real (n × m) matrix with full column rank (m < n), and A and B^T have no nontrivial null vector in common. We assume that A and B^T are large and not extremely sparse. The proposed method is well suited for parallel implementation on a multiprocessor system that can execute different tasks concurrently on a few vector processors with shared central memory, such as the CRAY Y-MP. The results of an extensive computer experimentation, aimed at evaluating the effectiveness of the method, are reported.
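A hedged sketch of the overall structure under the stated assumptions (A symmetric positive semidefinite, B of full column rank, no common null vector, so A + rho*B*B^T is symmetric positive definite): an outer multipliers iteration with conjugate-gradient inner solves. SciPy's CG is used for brevity, and parameter names are illustrative.

```python
# Method of multipliers + CG for:  A x + B y = b,  B^T x = c.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def multipliers_cg(A, B, b, c, rho=1.0, outer=50, tol=1e-8):
    n, m = B.shape
    y = np.zeros(m)
    x = np.zeros(n)
    # s.p.d. operator of the inner systems: v -> (A + rho*B*B^T) v
    M = LinearOperator((n, n), matvec=lambda v: A @ v + rho * (B @ (B.T @ v)))
    for _ in range(outer):
        rhs = b - B @ y + rho * (B @ c)
        x, _ = cg(M, rhs, x0=x)          # inner CG solve
        r = B.T @ x - c                  # constraint residual
        if np.linalg.norm(r) < tol:
            break
        y = y + rho * r                  # multiplier update
    return x, y
```

At convergence B^T x = c, and the inner optimality condition then reduces to A x + B y = b, so the pair (x, y) solves the original system.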