DOI:10.1145/167088.167193

Bounds for the computational power and learning complexity of analog neural nets

Published: 01 June 1993



Published In

STOC '93: Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing
June 1993
812 pages
ISBN:0897915917
DOI:10.1145/167088
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Conference

STOC93: 25th Annual ACM Symposium on the Theory of Computing
May 16 - 18, 1993
San Diego, California, USA

Acceptance Rates

Overall Acceptance Rate 1,469 of 4,586 submissions, 32%


Cited By

  • (2009) "Designing neural networks for tackling hard classification problems", WSEAS Transactions on Systems, 8(6), 743-752. DOI:10.5555/1639350.1639358. Online publication date: 1-Jun-2009
  • (2007) "Estimating the size of neural networks from the number of available training data", Proceedings of the 17th International Conference on Artificial Neural Networks, 68-77. DOI:10.5555/1776814.1776823. Online publication date: 9-Sep-2007
  • (2006) "Learning pattern classification - a survey", IEEE Transactions on Information Theory, 44(6), 2178-2206. DOI:10.1109/18.720536. Online publication date: 1-Sep-2006
  • (2006) "Machines Over the Reals and Non-Uniformity", Mathematical Logic Quarterly, 43(2), 143-157. DOI:10.1002/malq.19970430202. Online publication date: 13-Nov-2006
  • (2005) "Approximating the volume of general Pfaffian bodies", Structures in Logic and Computer Science, 162-173. DOI:10.1007/3-540-63246-8_10. Online publication date: 7-Jun-2005
  • (2005) "Optimal simulation of automata by neural nets", STACS 95, 337-348. DOI:10.1007/3-540-59042-0_85. Online publication date: 1-Jun-2005
  • (2005) "On the VC-dimension of depth four threshold circuits and the complexity of Boolean-valued functions", Algorithmic Learning Theory, 251-264. DOI:10.1007/3-540-57370-4_52. Online publication date: 2-Jun-2005
  • (2002) "Neural networks with local receptive fields and superlinear VC Dimension", Neural Computation, 14(4), 919-956. DOI:10.1162/089976602317319018. Online publication date: 1-Apr-2002
  • (1997) "On the Computation of Boolean Functions by Analog Circuits of Bounded Fan-In", Journal of Computer and System Sciences, 54(1), 199-212. DOI:10.1006/jcss.1997.1480. Online publication date: 1-Feb-1997
  • (1997) "Neural Networks with Quadratic VC Dimension", Journal of Computer and System Sciences, 54(1), 190-198. DOI:10.1006/jcss.1997.1479. Online publication date: 1-Feb-1997
