
The Incremental Gauss-Newton Algorithm with Adaptive Stepsize Rule

Published in: Computational Optimization and Applications 26, 107–141 (2003)

Abstract

In this paper, we consider the Extended Kalman Filter (EKF) for solving nonlinear least squares problems. The EKF is an incremental iterative method based on the Gauss-Newton method that has nice convergence properties. Although the EKF is globally convergent under certain conditions, its convergence rate is only sublinear under those same conditions. One reason the EKF converges slowly is the lack of an explicit stepsize. In this paper, we propose a stepsize rule for the EKF and establish global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function. A notable feature of the stepsize rule is that the stepsize is kept greater than or equal to 1 at each iteration and, under an additional condition, increases linearly in the iteration count k. We can therefore expect the proposed method to converge faster than the original EKF. We report numerical results which demonstrate that the proposed method is promising.
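To make the incremental structure concrete, the following is a minimal Python sketch of an EKF-style incremental Gauss-Newton pass with a growing stepsize. Everything in it is an illustrative assumption: the function name ekf_gn_sketch, the regularization parameter delta, the constant c, and the simple rule lam = max(1, c*k) merely stand in for the paper's actual adaptive, safeguarded stepsize rule, which is given only in the full text.

import numpy as np

def ekf_gn_sketch(residuals, jacobians, x0, passes=30, delta=1e-3, c=0.1):
    """Hypothetical sketch: cycle through the residual blocks, updating x after
    each block with a Gauss-Newton step built from the curvature accumulated
    over every block seen so far (the EKF structure), scaled by a stepsize
    kept >= 1 that grows linearly in the update count k (assumed form).

    residuals[i](x) -> 1-D residual vector of block i
    jacobians[i](x) -> 2-D Jacobian of block i at x
    """
    x = np.asarray(x0, dtype=float)
    H = delta * np.eye(x.size)   # regularized curvature accumulator
    k = 0
    for _ in range(passes):
        for r, J in zip(residuals, jacobians):
            k += 1
            Jk, rk = J(x), r(x)
            H = H + Jk.T @ Jk    # H grows roughly linearly in k, so the raw
                                 # EKF step H^{-1} Jk.T rk shrinks like 1/k
            lam = max(1.0, c * k)            # stepsize >= 1, growing with k
            x = x - lam * np.linalg.solve(H, Jk.T @ rk)
    return x

The point of the growing stepsize is visible in the sketch: since H accumulates a J^T J term at every update, the plain EKF step H^{-1} J^T r shrinks roughly like 1/k, which is consistent with the sublinear rate mentioned above, while a stepsize of order k restores a nondegenerate effective step and so makes a faster rate plausible.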




Cite this article

Moriyama, H., Yamashita, N. & Fukushima, M. The Incremental Gauss-Newton Algorithm with Adaptive Stepsize Rule. Computational Optimization and Applications 26, 107–141 (2003). https://doi.org/10.1023/A:1025703629626
