Abstract
The pocket algorithm is generally held to provide, for any classification problem, the weight vector that satisfies the maximum number of input-output relations contained in the training set. A convergence theorem ensures that an optimal configuration is reached with probability one as the number of iterations grows indefinitely. In the present paper a new formulation of this theorem is given; a rigorous proof corrects some formal and substantial errors that invalidate previous theoretical results. In particular, it is shown that the optimality of the asymptotic solution is ensured only if the number of permanences for the pocket vector lies in a proper interval of the real axis whose bounds depend on the number of iterations.
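For orientation, the following is a minimal Python sketch of the pocket algorithm (in the ratchet variant popularized by Gallant), not the paper's own formulation. The bipolar labels y ∈ {−1, +1}, an implicit bias component in X, and the names `pocket_algorithm` and `max_iter` are illustrative assumptions. The `run` counter corresponds to the "number of permanences" discussed in the abstract: the pocket vector is replaced only when the current weights survive a longer run of consecutive correct classifications.

```python
import numpy as np

def pocket_algorithm(X, y, max_iter=10_000, rng=None):
    """Sketch of the pocket algorithm with ratchet.

    X : (n, d) array of training inputs (include a constant column for the bias).
    y : (n,) array of bipolar targets in {-1, +1}.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)          # current perceptron weight vector
    pocket_w = w.copy()      # best vector found so far ("in the pocket")
    run = best_run = 0       # consecutive correct classifications (permanences)

    def n_correct(v):
        # number of training relations satisfied by weight vector v
        return int(np.sum(np.sign(X @ v) == y))

    pocket_score = n_correct(pocket_w)
    for _ in range(max_iter):
        i = rng.integers(n)              # draw a training example at random
        if np.sign(X[i] @ w) == y[i]:
            run += 1
            # ratchet: swap the pocket vector only when the current run
            # beats the best run AND the training score actually improves
            if run > best_run and (score := n_correct(w)) > pocket_score:
                pocket_w, pocket_score, best_run = w.copy(), score, run
        else:
            w = w + y[i] * X[i]          # standard perceptron correction
            run = 0
    return pocket_w
```

The theorem discussed in the paper concerns the asymptotic behaviour of exactly this kind of loop: whether `pocket_w` converges, with probability one as `max_iter` grows, to a vector satisfying the maximum number of training relations.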
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Muselli, M. (1995). Is pocket algorithm optimal? In: Vitányi, P. (ed.) Computational Learning Theory. EuroCOLT 1995. Lecture Notes in Computer Science, vol 904. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59119-2_185
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-59119-1
Online ISBN: 978-3-540-49195-8