Abstract
In this paper, we consider the Extended Kalman Filter (EKF) for solving nonlinear least squares problems. The EKF is an incremental iterative method based on the Gauss-Newton method that has favorable convergence properties. Although the EKF converges globally under certain conditions, its convergence rate under those same conditions is only sublinear. One reason for this slow convergence is the lack of an explicit stepsize. In this paper, we propose a stepsize rule for the EKF and establish global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function. A notable feature of the stepsize rule is that the stepsize is kept greater than or equal to 1 at each iteration and, under an additional condition, grows linearly in the iteration count k. We can therefore expect the proposed method to converge faster than the original EKF. We report numerical results demonstrating that the proposed method is promising.
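To make the setting concrete, the following is a minimal sketch of one incremental pass of an EKF-type (incremental Gauss-Newton) update for a sum-of-squares objective f(x) = (1/2) sum_i ||g_i(x)||^2, written in Python with NumPy. The helper names (residuals, jacobians), the damping parameter lam, and the fixed step parameter are illustrative assumptions and not taken from the paper; the paper's actual stepsize rule and the conditions for convergence are given in the full text.

import numpy as np

def ekf_pass(x, residuals, jacobians, lam=1e-3, step=1.0):
    """One incremental pass over the residual blocks g_1, ..., g_m.

    x          : current iterate (n-vector)
    residuals  : list of callables, residuals[i](x) returns the block g_i(x)
    jacobians  : list of callables, jacobians[i](x) returns the Jacobian of g_i
    lam        : small damping so the accumulated matrix H stays invertible
                 (illustrative assumption)
    step       : stepsize applied to each incremental update; the paper's rule
                 keeps it >= 1, here it is just a fixed parameter (assumption)
    """
    n = x.size
    H = lam * np.eye(n)              # accumulated Gauss-Newton matrix
    for g_i, J_i in zip(residuals, jacobians):
        r = g_i(x)                   # residual block at the current sub-iterate
        J = J_i(x)                   # its Jacobian at the current sub-iterate
        H += J.T @ J                 # rank update of the Gauss-Newton matrix
        # incremental Gauss-Newton step using only the data processed so far
        x = x - step * np.linalg.solve(H, J.T @ r)
    return x

if __name__ == "__main__":
    # Tiny illustration: two linear residual blocks g_i(x) = A_i x - b_i,
    # so repeated passes should approach the linear least squares solution.
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((3, 2)) for _ in range(2)]
    b = [rng.standard_normal(3) for _ in range(2)]
    res = [lambda x, Ai=Ai, bi=bi: Ai @ x - bi for Ai, bi in zip(A, b)]
    jac = [lambda x, Ai=Ai: Ai for Ai in A]
    x = np.zeros(2)
    for _ in range(5):               # repeat the incremental pass a few times
        x = ekf_pass(x, res, jac)
    print(x)

The sketch accumulates the Gauss-Newton matrix across blocks rather than recomputing it from scratch, which is the incremental character the abstract refers to; the proposed method differs in how the stepsize is chosen at each iteration.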
Cite this article
Moriyama, H., Yamashita, N. & Fukushima, M. The Incremental Gauss-Newton Algorithm with Adaptive Stepsize Rule. Computational Optimization and Applications 26, 107–141 (2003). https://doi.org/10.1023/A:1025703629626