Preview
Unable to display preview. Download preview PDF.
References
C. L. Giles and C. W. Omlin, “Extraction, insertion and refinement of symbolic rules in dynamically-driven recurrent neural networks,” Connection Science, vol. 5, no. 3/4, p. 307, 1993. Special Issue on Architectures for Integrating Symbolic and Neural Processes.
R. L. Watrous and G. M. Kuhn, “Induction of finite-state languages using second-order recurrent networks,” Neural Computation, vol. 4, pp. 406–414, May 1992.
H. Siegelmann and E. Sontag, “On the computational power of neural nets,” in Proceedings of the Fifth ACM Workshop on Computational Learning Theory, (New York NY), pp. 440–449, ACM, 1992.
P. Frasconi, M. Gori, M. Maggini, and G. Soda, “Representation of finite state automata in recurrent radial basis function networks,” Machine Learning, vol. 23, pp. 5–32, 1996.
J. F. Kolen, “Recurrent networks: State machines or iterated function systems?,” in Proceedings of the 1993 Connectionist Models Summer School (M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, and A. S. Weigend, eds.), (Hillsdale NJ), pp. 203–210, Erlbaum, 1994.
P. Frasconi and M. Gori, “Computational capabilities of local-feedback recurrent networks acting as finite-state machines,” IEEE Transactions on Neural Networks, vol. 7, pp. 1521–1525, November 1996.
H. T. Siegelmann, B. G. Horne, and C. L. Giles, “Computational capabilities of recurrent narx neural networks,” IEEE Trans. on Systems, Man and Cybernetics-Part B: Cybernetics, vol. 27, no. 2, p. 208, 1997.
H. Siegelmann and E. Sontag, “Turing computability with neural nets,” Applied Mathematics Letters, vol. 4, no. 6, pp. 77–80, 1991.
G. Z. Sun, H. H. Chen, Y. C. Lee, and C. L. Giles, “Turing equivalence of neural networks with second order connection weights,” in Proceedings of the International Joint Conference on Neural Networks, vol. II, pp. 357–362, 1991.
W. S. McCulloch and W. Pitts, “A logical calculus of ideas immanent in nervous activity,” Bullettin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
E. M. Gold, “Complexity of automaton identification from given data,” Information and Control, vol. 37, pp. 302–320, 1978.
Y. Bengio, P. Frasconi, and P. Simard, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, pp. 157–166, March 1994. Special Issue on Dynamic Recurrent Neural Networks.
M. F. Barnsley, Fractals Everywhere. Academic Press Professional, 1993. second edition.
C. W. Omlin and C. L. Giles, “Training second-order recurrent neural networks using hints,” in Proceedings of the Ninth International Conference on Machine Learning (D. Sleeman and P. Edwards, eds.), (San Mateo CA), pp. 363–368, Morgan Kaufmann Publishers, 1992.
S. Das and M. C. Mozer, “A unified gradient-descent/clustering architecture for finite state machine induction,” in Neural Information Processing Systems 6 (S. Cowan, G. Tesauro, and J. Alspector, eds.), pp. 19–26, 1994.
Z. Zeng, R. Goodman, and P. Smyth, “Discrete recurrent neural networks for grammatical inference,” IEEE Transactions on Neural Networks, vol. 5, pp. 320–330, March 1994. Special Issue on Dynamic Recurrent Neural Networks.
M. Gori, M. Maggini, and G. Soda, “Inductive inference from noisy examples: The rule-noise dilemma and the hybrid finite state filter,” in Proceedings of ECAI96 Workshop 16 (Neural Networks and Structural Knowledge), (Budapest (Hungary)), pp. 53–58, August 1996.
E. M. Gold, “Language identification in the limit,” Information and Control, vol. 10, pp. 447–474, 1967.
J. E. Hopcroft and J. D. Ullman, Introduction to Automata Theory, Languages and Computation. Reading MA: Addison-Wesley, 1979.
Z. Kohavi, Switching and Finite Automata Theory. New York, NY: McGraw-Hill, Inc., 1978. second edition.
J. Feldman, “Some decidability results on grammatical inference and complexity,” Information and Control, vol. 20, pp. 244–262, 1972.
D. Angluin, “On the complexity of minimum inference of regular sets,” Information and Control, vol. 39, pp. 337–350, 1978.
B. A. Trakhtenbrot and J. M. Barzdin, Finite Automata. Amsterdam: North-Holland, 1973.
K. S. Fu and T. L. Booth, “Grammatical inference: Introduction and survey — part I,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-5, pp. 95–111, January 1975.
S. Porat and J. A. Feldman, “Learning automata from ordered examples,” Machine Learning, vol. 7, no. 2/3, pp. 5–34, 1991. Special issue on Connectionist Approaches to Language Learning.
D. Angluin, “Inferring regular sets from queries and counterexamples,” Information and Computation, vol. 75, pp. 87–106, 1987.
R. L. Rivest and R. E. Schapire, “Inference of finite automata using homing sequences,” in Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, (Seattle, WA), pp. 411–420, 1989.
M. Tomita, “Dynamic construction of finite-state automata from examples using hill-climbing,” in Proceedings of the Fourth Annual Cognitive Science Conference, (Ann Arbor MI), pp. 105–108, 1982.
J. R. Koza, Genetic Programming. MIT Press, 1992.
P. Frasconi, M. Gori, M. Maggini, and G. Soda, “Unified integration of explicit rules and learning by example in recurrent networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 7, April 1995.
J. L. Elman, “Finding structure in time,” Cognitive Sciences, vol. 14, pp. 179–211, 1990.
J. L. McClelland and D. E. Rumelhart, Explorations in Parallel Distributed Processing. Cambridge: MIT Press, 1988.
D. Servan-Schreiber, A. Cleeremans, and J. L. McClelland, “Graded state machines: the representation of temporal contingencies in simple recurrent networks,” Machine Learning, vol. 7, no. 2/3, pp. 161–194, 1991. Special issue on Connectionist Approaches to Language Learning.
M. W. Goudreau, C. L. Giles, S. T. Chakradhar, and D. Chen, “First-order vs. second-order single layer recurrent neural networks,” IEEE Transactions on Neural Networks, vol. 5, no. 3, p. 511, 1994.
M. L. Minsky, Computation: Finite and Infinite Machines, ch. 3, pp. 32–66. Englewood Cliffs (NJ): Prentice Hall, Inc., 1967.
P. Frasconi, M. Gori, and G. Soda, “Recurrent neural networks and prior knowledge for sequence processing: a constrained nondeterministic approach,” Knowledge Based Systems, vol. 8, no. 6, pp. 313–332, 1995.
C. B. Miller and C. L. Giles, “Experimental comparison of the effect of order in recurrent neural networks,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 7, no. 4, pp. 849–872, 1993. Special Issue on Applications of Neural Networks to Pattern Recognition.
P. Manolios and R. Fanelli, “First-order recurrent neural networks and deterministic finite state automata,” Neural Computation, vol. 6, pp. 1155–1173, November 1994.
K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, pp. 359–366, 1989.
B. G. Horne and C. L. Giles, “An experimental comparison of recurrent neural networks,” in Advances in Neural Information Processing Systems (G. Tesauro, D. Touretzky, and T. Leen, eds.), vol. 7, pp. 697–704, The MIT Press, 1995.
P. Frasconi, M. Gori, and G. Soda, “Local feedback multilayered networks,” Neural Computation, vol. 4, no. 1, pp. 120–130, 1992.
A. C. Tsoi and A. D. Back, “Locally recurrent globally feedforward networks: A critical review of architectures,” IEEE Transactions on Neural Networks, vol. 5, pp. 229–239, Mar. 1994.
B. Y. M. Gori, and R. De Mori, “Learning the dynamic nature of speech with back-propagation for sequences,” Pattern Recognition Letters, vol. 13, pp. 375–385, May 1992. Special issue on Artificial Neural Networks.
K. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
C. L. Giles, B. G. Horne, and T. Lin, “Learning a class of large finite stste machines with a recurrent neural network,” Tech. Rep. UMIACS-TR-94-94 and CS-TR-3328, Institute for Advanced Computer Studies, University of Maryland, College Park MD, August 1994.
C. W. Omlin and C. L. Giles, “Constructing deterministic finite-state automata in sparse recurrent neural networks,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN'94), pp. 1732–1737, 1994.
J. B. Pollack, “The induction of dynamical recognizers,” Machine Learning, vol. 7, no. 2/3, pp. 196–227, 1991. Special issue on Connectionist Approaches to Language Learning.
R. L. Watrous and G. M. Kuhn, “Induction of finite-state automata using second-order recurrent networks,” in Advances in Neural Information Processing Systems 4, pp. 317–324, 1992.
C. L. Giles, C. B. Miller, D. Chen, G. Z. S. H. H. Chen, and Y. C. Lee, “Extracting and learning an unknown grammar with recurrent neural networks,” in Advances in Neural Information Processing Systems 4 (J. Moody, S. Hanson, and R. Lippmann, eds.), (San Mateo CA), pp. 317–324, Morgan Kauffmann Publishers, 1992.
C. L. Giles, C. B. Miller, D. Chen, G. Z. Sun, H. H. Chen, and Y. C. Lee, “Learning and extracting finite state automata with second-order recurrent neural networks,” Neural Computation, vol. 4, no. 3, pp. 393–405, 1992.
C. L. Giles and C. W. Omlin, “Rule refinement with recurrent neural networks,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN'93), vol. II, pp. 801–806, 1993.
M. Forcada and R. Carrasco, “Learning the initial state of a second order recurrent neural network during regular-language inference,” Neural Computation, vol. 7, no. 5, pp. 923–930, 1995.
Z. Zeng, R. Goodman, and P. Smyth, “Learning finite state machines with self-clustering recurrent networks,” Neural Computation, vol. 5, no. 6, pp. 976–990, 1993.
M. Casey, “The dynamics of discrete-time computation, with the application to recurrent neural networks and finite state machine extraction,” Neural Computation, vol. 8, no. 6, pp. 1135–1178, 1996.
D. Ron and R. Rubinfeld, “Learning fallible deterministic finite automata,” Machine Learning, vol. 18, pp. 149–185, 1995.
M. Gori, M. Maggini, and G. Soda, “Learning regular grammars from noisy examples using recurrent neural networks,” in Proceedings of the NEURAP 96, (Marseille (France)), pp. 207–214, March 20–22 1996.
R. C. Carrasco and M. L. Forcada, “Second-order recurrent neural networks can learn regular grammars from noisy strings.,” in From Natural to Artificial Neural Computatiion: Proceedings of IWANN'95 (June 7–9, 1995). (J. Mira and F. Sandoval, eds.), vol. 930 of Lecture Notes in Computer Science, pp. 605–610, Springer-Verlag, 1995.
J. Thatcher, “Tree automata: An informal survey,” in Current Trends in the Theory of Computing (A. Aho, ed.), pp. 143–172, Prentice-Hall, Inc., 1973.
Author information
Authors and Affiliations
Editor information
Rights and permissions
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Maggini, M. (1998). Recursive neural networks and automata. In: Giles, C.L., Gori, M. (eds) Adaptive Processing of Sequences and Data Structures. NN 1997. Lecture Notes in Computer Science, vol 1387. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0054002
Download citation
DOI: https://doi.org/10.1007/BFb0054002
Published:
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64341-8
Online ISBN: 978-3-540-69752-7
eBook Packages: Springer Book Archive