Papers by Sajad Fathi-Hafshejani
Algorithms, Jul 8, 2024
Optimization and Engineering
Communications in computer and information science, 2023
We give an improved non-monotone line search algorithm for stochastic gradient descent (SGD) for functions that satisfy interpolation conditions. We establish theoretical convergence guarantees for the algorithm for strongly convex, convex, and non-convex functions. We conduct a detailed empirical evaluation to validate the theoretical results.
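The non-monotone stochastic line search described above can be sketched generically as follows. This is an illustration, not the paper's exact algorithm: the choice of reference value (the maximum loss over a small window of past iterates), the constants c and beta, and the window length are all assumptions.

```python
import numpy as np

def sgd_nonmonotone_armijo(grad_f, loss_f, w0, n_iters=100,
                           eta_max=1.0, c=0.5, beta=0.7, window=5):
    """SGD with a non-monotone backtracking line search (illustrative sketch).

    The Armijo sufficient-decrease condition is checked against the
    maximum loss over the last `window` iterates rather than the current
    loss, which is the usual way a line search is made non-monotone.
    """
    w = w0.copy()
    history = [loss_f(w)]
    for _ in range(n_iters):
        g = grad_f(w)                   # stochastic gradient at w
        eta = eta_max
        ref = max(history[-window:])    # non-monotone reference value
        # backtrack until the (non-monotone) Armijo test holds
        while loss_f(w - eta * g) > ref - c * eta * np.dot(g, g):
            eta *= beta
            if eta < 1e-12:
                break
        w = w - eta * g
        history.append(loss_f(w))
    return w

# usage: minimize a simple quadratic; the full gradient stands in for a
# stochastic one here, which is exact under interpolation
A = np.diag([1.0, 10.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_star = sgd_nonmonotone_armijo(grad, loss, np.array([5.0, -3.0]))
```

Under interpolation, the stochastic gradient at a minimizer vanishes for every sample, which is what lets a line search of this kind use large steps safely.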
Cornell University - arXiv, Oct 16, 2022
In this paper, we propose a new non-monotone conjugate gradient method for solving unconstrained nonlinear optimization problems. We first modify the non-monotone line search method by introducing a new trigonometric function to calculate the non-monotone parameter, which plays an essential role in the algorithm's efficiency. Then, we apply a convex combination of the Barzilai-Borwein step sizes to calculate the step size at each iteration. Under some suitable assumptions, we prove that the new algorithm has the global convergence property. The efficiency and effectiveness of the proposed method are demonstrated in practice by applying the algorithm to some standard test problems and to non-negative matrix factorization problems.
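The convex combination of the two classical Barzilai-Borwein step sizes mentioned above might look like the sketch below. The fixed weight `eta` and the curvature safeguard are illustrative assumptions; the paper's own rule for choosing the weight at each iteration is not reproduced here.

```python
import numpy as np

def bb_convex_step(s, y, eta=0.5, eps=1e-12):
    """Convex combination of the two Barzilai-Borwein step sizes.

    s = x_k - x_{k-1}, y = g_k - g_{k-1}.  alpha_bb1 is the classical
    "long" BB step, alpha_bb2 the "short" one; a convex combination
    interpolates between them.  `eta` is a fixed illustrative weight.
    """
    sy = float(s @ y)
    if sy <= eps:                        # curvature safeguard
        return 1.0
    alpha_bb1 = float(s @ s) / sy        # long BB step
    alpha_bb2 = sy / float(y @ y)        # short BB step
    return eta * alpha_bb1 + (1.0 - eta) * alpha_bb2

# gradient descent with BB steps on a quadratic f(x) = 0.5 x'Ax
A = np.diag([1.0, 4.0, 9.0])
x = np.array([1.0, 1.0, 1.0])
g = A @ x
alpha = 0.1                              # initial step before BB info exists
for _ in range(50):
    x_new = x - alpha * g
    g_new = A @ x_new
    alpha = bb_convex_step(x_new - x, g_new - g)
    x, g = x_new, g_new
```

Because BB steps are inherently non-monotone, pairing them with a non-monotone line search, as the paper does, is a natural fit.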
Non-negative matrix factorization (NMF) has become a popular method for representing meaningful data by extracting a non-negative basis from an observed non-negative data matrix. Its ability to identify hidden structure in data places it among the powerful methods in machine learning. NMF is a known non-convex optimization problem, and the initial point has a significant effect on finding an efficient local solution. In this paper, we survey the most popular initialization procedures proposed for NMF so far. We describe each method and discuss some of its advantages and disadvantages. Finally, we present numerical results illustrating the performance of each algorithm.
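As a concrete illustration of why the starting point matters, the sketch below runs the standard Lee-Seung multiplicative updates (a common NMF solver, used here only for illustration, not one of the paper's surveyed initializations) from two different random initializations of the same problem; the matrix sizes and rank are arbitrary assumptions.

```python
import numpy as np

def nmf_multiplicative(V, W, H, n_iters=200, eps=1e-10):
    """Lee-Seung multiplicative updates for min ||V - WH||_F with W, H >= 0.

    Each update preserves non-negativity and does not increase the
    Frobenius-norm reconstruction error.
    """
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two random initializations of the same factorization problem can
# converge to different local solutions with different final errors.
rng = np.random.default_rng(0)
V = rng.random((20, 15)) @ rng.random((15, 20))   # non-negative data matrix
r = 5                                             # factorization rank
errors = []
for seed in (1, 2):
    rng_i = np.random.default_rng(seed)
    W0, H0 = rng_i.random((20, r)), rng_i.random((r, 20))
    W, H = nmf_multiplicative(V, W0, H0)
    errors.append(np.linalg.norm(V - W @ H))
```

Structured initializations (e.g. SVD-based schemes) aim to start this iteration closer to a good local solution than plain random sampling does.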
Journal of Computational and Applied Mathematics, 2014
ABSTRACT In this paper, we propose a new kernel function with a trigonometric barrier term for primal-dual interior-point methods in linear optimization. Using an elegant and simple analysis, and under some easy-to-check conditions, we derive the worst-case complexity result for large-update primal-dual interior-point methods. We obtain a worst-case iteration bound of O(n^{2/3} log(n/ε)) for large-update primal-dual interior-point methods, which significantly improves the complexity results obtained so far for trigonometric kernel functions in [M. El Ghami, Z.A. Guennoun, S. Boula, T. Steihaug, Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term, Journal of Computational and Applied Mathematics 236 (2012) 3613-3623].
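The kernel-function machinery behind such bounds can be sketched generically: the barrier is Ψ(v) = Σ_i ψ(v_i) for a kernel ψ satisfying ψ(1) = ψ'(1) = 0, so Ψ vanishes exactly on the central path. The snippet below uses the classical logarithmic kernel ψ(t) = (t² − 1)/2 − log t purely as a stand-in, since the abstract does not reproduce the paper's trigonometric kernel.

```python
import numpy as np

def psi_log(t):
    """Classical logarithmic kernel psi(t) = (t^2 - 1)/2 - log t.

    This is the standard kernel that recovers the usual log barrier; the
    paper's kernel replaces the -log t term with a trigonometric barrier
    (its exact form is given in the paper, not in this abstract).
    """
    return (t**2 - 1.0) / 2.0 - np.log(t)

def proximity(v):
    """Barrier/proximity measure Psi(v) = sum_i psi(v_i).

    v collects the scaled variables v_i = sqrt(x_i * s_i / mu); at the
    mu-center v = e (the all-ones vector) and Psi(e) = 0, so Psi measures
    distance to the central path.
    """
    return float(np.sum(psi_log(v)))

v_center = np.ones(4)
p_center = proximity(v_center)            # zero at the mu-center
v_off = np.array([0.5, 1.0, 1.5, 2.0])
p_off = proximity(v_off)                  # positive away from it
```

The complexity analysis of a kernel-based interior-point method bounds the number of inner iterations needed to drive Ψ back below a threshold after each barrier-parameter update, which is where properties of ψ determine the O(·) bound.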
In this paper, an interior-point algorithm for the P∗(κ)-Linear Complementarity Problem (LCP) based on a new parametric trigonometric kernel function is proposed. By applying a strictly feasible starting-point condition and using some simple analysis tools, we prove that our algorithm has an O((1 + 2κ) √n log n log(n/ε)) iteration bound for large-update methods, which coincides with the best known complexity bound. Moreover, numerical results confirm that our newly proposed kernel function performs well in practice in comparison with some existing kernel functions in the literature.
Journal of Nonlinear Functional Analysis
Journal of Applied Mathematics and Computing
Journal of Optimization Theory and Applications
Low-rank matrix factorization problems such as non-negative matrix factorization (NMF) can be categorized as clustering or dimension-reduction techniques. The latter denotes techniques designed to find representations of a high-dimensional dataset in a lower-dimensional manifold without significant loss of information. If such a representation exists, it ought to retain the most relevant features of the dataset. Many linear dimensionality-reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai and Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing or storing the matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size together with a proper non-monotone strategy, governed by the convex combination parameter $\eta_k$, results in a new algorithm that can significantly improve…
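A minimal Hessian-free sketch of the idea: on the H-subproblem of NMF, only the small matrices WᵀV and WᵀW are ever formed, and a BB step size drives a projected gradient iteration. This is an illustrative stand-in, not the paper's BB-scaled CG method; the CG direction and the non-monotone strategy with η_k are omitted.

```python
import numpy as np

def nmf_subproblem_bb(V, W, H0, n_iters=100):
    """Projected BB gradient steps on the NMF subproblem
    min_{H >= 0} 0.5 * ||V - W H||_F^2   (W held fixed).

    Only W'V and W'W are formed; no Hessian matrix is ever computed
    or stored, which is the Hessian-free structure noted above.
    """
    WtV, WtW = W.T @ V, W.T @ W
    H = H0.copy()
    G = WtW @ H - WtV                    # gradient of the subproblem
    alpha = 1e-3                         # small first step before BB info exists
    for _ in range(n_iters):
        H_new = np.maximum(H - alpha * G, 0.0)   # projection onto H >= 0
        G_new = WtW @ H_new - WtV
        s, y = H_new - H, G_new - G
        sy = np.sum(s * y)
        alpha = np.sum(s * s) / sy if sy > 1e-12 else 1e-3   # BB1 step
        H, G = H_new, G_new
    return H

rng = np.random.default_rng(0)
W = rng.random((30, 5))
H_true = rng.random((5, 8))
V = W @ H_true                           # exactly factorizable test data
H = nmf_subproblem_bb(V, W, rng.random((5, 8)))
```

Alternating such subproblem solves over W and H yields a full NMF algorithm; the paper's contribution is a better search direction and step-size rule for these solves.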
Recently, El Ghami (Optim Theory Decis Mak Oper Res Appl 31:331-349, 2013) proposed a primal-dual interior-point method for the P∗(κ)-Linear Complementarity Problem (LCP) based on a trigonometric barrier term and obtained a worst-case iteration complexity of O((1 + 2κ) n^{3/4} log(n/ε)) for large-update methods. In this paper, we present a large-update primal-dual interior-point algorithm for P∗(κ)-LCP based on a new trigonometric kernel function. By a simple analysis, we show that our algorithm based on the new kernel function enjoys a worst-case O((1 + 2κ) √n log n log(n/ε)) iteration bound for solving P∗(κ)-LCP. This result significantly improves the worst-case iteration bound obtained by El Ghami for P∗(κ)-LCP based on trigonometric kernel functions.
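To see why the second bound improves the first, compare their dependence on n: √n log n grows strictly slower than n^{3/4}, since

```latex
\frac{\sqrt{n}\,\log n}{n^{3/4}} \;=\; \frac{\log n}{n^{1/4}} \;\longrightarrow\; 0
\qquad \text{as } n \to \infty,
```

so for any fixed κ and ε the bound O((1 + 2κ) √n log n log(n/ε)) is asymptotically smaller than O((1 + 2κ) n^{3/4} log(n/ε)).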
Kernel functions play an important role in the complexity analysis of interior-point methods for linear optimization. In this paper, we present a primal-dual interior-point method for linear optimization based on a new kernel function with a trigonometric function in its barrier term. By a simple analysis, we show that feasible primal-dual interior-point methods based on the newly proposed kernel function enjoy an O(√n (log n)^2 log(n/ε)) worst-case complexity result, which improves the results obtained by El Ghami et al. (J Comput Appl Math 236:3613-3623, 2012) for kernel functions with trigonometric barrier terms.
An efficient primal-dual interior point method for … by Sajad Fathi-Hafshejani