Article

A Recurrent Neural Network for Identifying Multiple Chaotic Systems

by José Luis Echenausía-Monroy 1,†, Jonatan Pena Ramirez 1, Joaquín Álvarez 1,*, Raúl Rivera-Rodríguez 2, Luis Javier Ontañón-García 3,*,† and Daniel Alejandro Magallón-García 3,4,*,†

1 Applied Physics Division, Department of Electronics and Telecommunications, CICESE, Carr. Ensenada-Tijuana 3918, Zona Playitas, Ensenada 22860, Mexico
2 Telematics Division, CICESE, Carr. Ensenada-Tijuana 3918, Zona Playitas, Ensenada 22860, Mexico
3 Coordinación Académica Región Altiplano Oeste, Universidad Autónoma de San Luis Potosí, Carretera a Santo Domingo 200, Salinas de Hidalgo 78600, Mexico
4 Preparatoria Regional de Lagos de Moreno, Universidad de Guadalajara, Lagos de Moreno 47476, Mexico
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(12), 1835; https://doi.org/10.3390/math12121835
Submission received: 13 May 2024 / Revised: 4 June 2024 / Accepted: 11 June 2024 / Published: 13 June 2024
(This article belongs to the Special Issue Nonlinear Dynamics, Chaos and Complex Systems)

Abstract: This paper presents a First-Order Recurrent Neural Network activated by a wavelet function, in particular a Morlet wavelet, with a fixed set of parameters and capable of identifying multiple chaotic systems. By maintaining a fixed structure for the neural network and using the same activation function, the network can successfully identify the three state variables of several different chaotic systems, including the Chua, PWL-Rössler, Anishchenko–Astakhov, Álvarez-Curiel, Aizawa, and Rucklidge models. The performance of this approach was validated by numerical simulations in which the accuracy of the state estimation was evaluated using the Mean Square Error (MSE) and the coefficient of determination (r²), which indicates how well the neural network identifies the behavior of the individual oscillators. In contrast to the methods found in the literature, where a neural network is optimized to identify a single system and its application to another model requires recalibration of the neural algorithm parameters, the proposed model uses a fixed set of parameters to efficiently identify seven chaotic systems. These results build on previously published work by the authors and advance the development of robust and generic neural network structures for the identification of multiple chaotic oscillators.
MSC:
68T07; 37M05; 37M10; 37N30; 65P20

1. Introduction

Almost all types of phenomena can be described by differential equations that make it possible to capture the studied entity’s changes over time. In nature, these behaviors are usually nonlinear, giving rise to hybrid systems in which linear components coexist with nonlinear elements, producing more complex phenomena and interesting geometric patterns associated with chaotic behavior [1]. A vital feature of this type of dynamics is a high sensitivity to initial conditions, which implies a rapid divergence between trajectories starting at virtually identical points, leading to poor predictions [2,3]. However, various behaviors in nature (e.g., the weather), in the human body (neuronal transmission), in mechanical systems, and in chemical reactions, to name a few, exhibit chaotic properties. Due to their universality, they are a recurring theme in scientific research [4,5,6,7].
Machine learning-based methods are being increasingly explored for their accurate processing, cataloging, and classification of large amounts of information. This type of deep learning algorithm has its roots in the early 1980s in the work of John Hopfield, who presented a model capable of storing and retrieving patterns of information [8]. Since then, the research and the development of more robust and efficient algorithms have intensified, and they are currently used to solve ordinal or temporal problems, such as language translation, speech recognition, and image captioning, among others.
Since chaotic systems and their high sensitivity to initial conditions permeate almost all areas of nature, predicting and identifying their trajectories over time is not trivial; this is why different approaches have been used in designing and applying neural algorithms, yielding extensive results [9]. As representative examples, in [10] the authors compared three techniques for predicting chaotic time series (artificial neural networks, adaptive neuro-fuzzy inference systems, and least squares support vector machines) to determine which performs best in the short-term estimation and prediction of temporal behaviors with different Lyapunov exponents. In [11], an echo state network to improve the prediction horizon of chaotic behaviors was discussed, where the system values were optimized using metaheuristics. Building on that work, an improvement in the prediction horizon and a reduction in the number of neurons used was reported in [12]. Finally, in [13], the authors used Recurrent Neural Networks (RNN) and Deep Recurrent Neural Networks (DRNN) to estimate the states of chaotic systems with optimal results in terms of loss and accuracy.
An essential characteristic of these RNN algorithms is that their nucleus is based on two main fronts: On the one hand are the algorithms that are structured from pre-training based on epochs, as presented in [13,14]. These epochs relate to the number of times the data are processed through the algorithm to adjust the weight parameters and improve the performance and convergence to a network solution. On the other hand are the algorithms that perform their training and identification online while the system to be studied is evolving [15,16]. Therefore, the weight parameter estimation is adjusted online due to training processes, such as the filtered error algorithm [17] and the Extended Kalman Filter (EKF) [18].
In particular, regarding the online algorithms, Recurrent Wavelet First-Order Neural Networks (RWFONNs) have mainly been used to implement neural control of electrical machines and emulated energy storage systems, to identify and control electrical systems, robotic manipulators, and uncrewed aerial vehicles [19,20], and for the identification and control of multi-stable systems [21]. Magallón and coworkers, in turn, have presented an RWFONN that can identify systems based on the jerk equation (an Unstable Dissipative System of type I (UDS I) [22] and a memristive system without equilibrium points [23]), where the values and structure of the network are the same for both systems [17]. In that work, the authors report a Mean Square Error between the time series that is practically zero. One of the main advantages of this technique is that the network can recognize complex trajectories as the systems evolve; this is possible because the training and validation processes are online, unlike most neural algorithms. In addition, the authors have recently presented a real-time identification of chaotic trajectories using an RWFONN and a Field-Programmable Analog Array (FPAA), which physically implements the chaotic dynamics of a third-order jerky system with reconfigurable analog electronics [24].
Considering that the development of algorithms based on neural networks seems to require a constant calculation of gains, the development of new identification structures, and the optimization of parameters, this work extends the results shown in [17] for the development of an online general-purpose neural network, addressing the question: Is it possible to have a global neural network that can identify different chaotic systems without changing the neural algorithm (topology and/or hyperparameters)? The identification presented here by the studied network has Mean Square Error values of practically zero in all the studied cases. This measure is complemented by the coefficient of determination ( r 2 ), which indicates the neural network’s ability to reproduce the observed phenomenon. The results contribute to developing robust structures capable of identifying multiple chaotic systems with no common topology.
The rest of the article has the following structure: The description of the neural network used and the chaotic systems investigated can be found in Section 2. The methodology used and the results obtained are described in Section 3. The last part of the article deals with the discussions and preliminary conclusions.

2. Preliminaries

2.1. Recurrent Wavelet First-Order Neural Network

There are recent works in which artificial Recurrent Wavelet First-Order Neural Networks (RWFONN) have been used to identify and control different dynamical systems. This is due to their simple structure, good identification capability, and low computational cost [17,21]. The general structure of this kind of RWFONN is given by
$$\dot{y}_j^i = -\alpha_j^i y_j^i + w_{jk}^i \phi_{jk}^i, \qquad (1)$$

where $y_j^i$ is the state of the i-th neuron, $\alpha_j^i > 0$ (for $i = 1, 2, \ldots, n$) is part of the underlying network architecture and is determined during the training process, $w_{jk}^i$ is the k-th adjustable synaptic weight connecting the j-th state to the i-th neuron, and $\phi_{jk}^i$ is a Morlet wavelet activation function. The systems are identified online using an RWFONN, where the synaptic weights are adjusted using the filtered error algorithm.
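As a minimal sketch of the neuron dynamics of Equation (1), the following snippet implements a single wavelet-activated first-order neuron; the parameter values here are illustrative placeholders, not the trained values used in the paper:

```python
import numpy as np

def morlet(chi, beta=1.0, lam=1.0):
    """Morlet wavelet activation: exp(-chi^2 / beta) * cos(lam * chi)."""
    return np.exp(-chi**2 / beta) * np.cos(lam * chi)

def neuron_rhs(y, w, chi, alpha=1.0):
    """Right-hand side of one RWFONN neuron: dy/dt = -alpha*y + w*phi(chi)."""
    return -alpha * y + w * morlet(chi)

# One explicit Euler step of the neuron state, driven by an external signal chi
tau = 0.01
y, w, chi = 0.0, 1.0, 0.0
y = y + tau * neuron_rhs(y, w, chi)
```

The weight w is held fixed here; in the identification scheme it is adjusted online by the filtered error algorithm described next.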

2.2. Filtered Error Algorithm

The identification scheme starts from the differential equation that describes the unknown system (see Equation (2)); then, the identifier can be chosen as described by Equation (3):
$$\dot{\chi}_j^i = -\alpha_j^i \chi_j^i + (w_{jk}^i)^{*} \phi_{jk}^i. \qquad (2)$$

$$\dot{y}_j^i = -\alpha_j^i y_j^i + w_{jk}^i \phi_{jk}^i. \qquad (3)$$

Defining the identification error as $\xi_j^i = y_j^i - \chi_j^i$ leads to

$$\begin{aligned} \dot{\xi}_j^i &= \dot{y}_j^i - \dot{\chi}_j^i \\ &= -\alpha_j^i y_j^i + w_{jk}^i \phi_{jk}^i - \left( -\alpha_j^i \chi_j^i + (w_{jk}^i)^{*} \phi_{jk}^i \right) \\ &= -\alpha_j^i y_j^i + w_{jk}^i \phi_{jk}^i + \alpha_j^i \chi_j^i - (w_{jk}^i)^{*} \phi_{jk}^i \\ &= -\alpha_j^i \left( y_j^i - \chi_j^i \right) + \left( w_{jk}^i - (w_{jk}^i)^{*} \right) \phi_{jk}^i, \end{aligned} \qquad (4)$$

where $\chi_j^i$ denotes each of the state variables of the oscillators to be identified. The model of the identification error, Equation (4), can be rewritten as

$$\dot{\xi}_j^i = -\alpha_j^i \xi_j^i + \tilde{w}_j^i \phi_{jk}^i, \qquad (5)$$

where $\tilde{w}_j^i = w_j^i - (w_j^i)^{*}$. The synaptic weights $w_j^i$ (for $i = 1, 2, \ldots, n$) are adjusted according to the learning law defined in [25], also called “filtered error”, described by Equation (6):

$$\dot{w}_j^i = -\gamma_j^i \phi_{jk}^i \xi_j^i. \qquad (6)$$
It is worth noting that the synaptic weights of the neural network for identification are bounded; further details are available in [17].
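To illustrate the convergence of this scheme, the following scalar toy example (with assumed gains, not the values of [17]) simulates a plant of the form of Equation (2) with a single "true" weight w* and the identifier of Equation (3), adapting w with the filtered error law of Equation (6); both the identification error ξ = y - χ and the weight error w - w* vanish:

```python
import numpy as np

# Scalar toy example of the filtered error algorithm (illustrative values).
alpha, gamma = 1.0, 50.0          # neuron pole and learning gain
beta, lam = 10.0, 0.5             # Morlet expansion/dilation terms
w_star = 2.0                      # "true" weight of the unknown plant, Eq. (2)

def phi(c):
    return np.exp(-c**2 / beta) * np.cos(lam * c)

tau, steps = 1e-3, 200_000        # Euler step and horizon (200 s)
chi, y, w = 0.5, 0.0, 0.0
for _ in range(steps):
    p = phi(chi)
    xi = y - chi                                # identification error
    chi += tau * (-alpha * chi + w_star * p)    # unknown plant,   Eq. (2)
    y   += tau * (-alpha * y + w * p)           # identifier,      Eq. (3)
    w   += tau * (-gamma * p * xi)              # filtered error,  Eq. (6)
```

Here the plant settles at a point where the activation is non-zero, so the adaptation does not stall and w converges to w*, consistent with Theorem 1 below.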
Theorem 1.
Consider the RWFONN model (1) whose weights are adjusted according to Equation (6) for each $i = 1, 2, \ldots, n$. Then:
  • $\xi_j^i, w_j^i \in \mathcal{L}_\infty$ (i.e., $\xi_j^i$ and $w_j^i$ are uniformly bounded);
  • $\lim_{t \to \infty} \xi^i(t) = 0$.
Proof. 
The proof of Theorem 1 is available in [25], and is also developed by the authors in [17]. □
Theorem 2.
Suppose that any system of Table 1 and model (1) are initially at the same state $x(0) = y(0)$; then, for any $\epsilon > 0$ and any finite $T > 0$, an integer $L$ and a matrix $w^{*} \in \mathbb{R}^{L \times n}$ exist such that the state $y(t)$ of the RWFONN model (1) with the weight values $w = w^{*}$ satisfies:

$$\sup_{0 \le t \le T} \| x(t) - y(t) \| \le \epsilon. \qquad (7)$$

Then, using the Bellman–Gronwall Lemma [26], we ascertain that the identification error $\xi = x - y$ is bounded by:
$$\| \xi \| \le \epsilon/2.$$
Proof. 
See reference [27]. □

2.3. Chaotic Systems

The chaotic oscillators investigated in this work are as follows:
  • Multiscroll: System described by a monoparametric family (Figure 1a,b), used to model diffusion processes [28,29].
  • Rucklidge: This system models double convection processes where the motion is confined to coils resembling convection in an applied vertical magnetic field and a slightly rotating fluid layer [30] (see Figure 1c,d).
  • A–C: The Álvarez–Curiel system is derived from classical control schemes with typical nonlinearities where chaotic behavior emerges [31] (shown in Figure 1e,f).
  • A–A: The Anishchenko–Astakhov system (shown in Figure 1g,h) presents a modified oscillator with inertial nonlinearity that describes a radiophysical generator via three coupled equations and is considered a fundamental model of deterministic chaos [32].
  • Chua: One of the classical attractors of chaos theory, composed of a Piecewise (PW) function [33] (Figure 1i,j).
  • Rössler-PWL: A piecewise linear version of the system introduced by the German biochemist Otto Rössler, presented in [34] (Figure 1k,l).
  • Aizawa: A chaotic three-dimensional system, whose dynamics are described in [35,36] (Figure 1m,n).
The equations of the oscillators mentioned above are described in Table 1, and the values for the parameters used in this work are listed in Table 2.

3. Methodology and Results

Considering the neural network described by Equation (3) and the filtered error model of Equation (6), this paper used the neural network structure (RWFONN) described by Equation (8), under the assumption that i = j = k (for i = 1 , 2 , 3 ):
$$\begin{aligned} \dot{y}_1 &= -\alpha_1 y_1 + \delta_1 w_1 \phi_1(\chi_1) + y_2, \\ \dot{y}_2 &= -\alpha_2 y_2 + \delta_2 w_2 \phi_2(\chi_2) + y_3, \\ \dot{y}_3 &= -\alpha_3 y_3 + \delta_3 w_3 \phi_3(\chi_3) + u, \end{aligned} \qquad (8)$$
where u is the control input of the system to be identified. In this case, the function f(x) and the parameters $\alpha_i$ and $\delta_i$ of the network are described in Table 3 (see [17] for further details of the procedure for determining the neural network values), and $\chi_1 = x$, $\chi_2 = y$, and $\chi_3 = z$. In addition, derived from Equation (6), the algorithm for filtered errors is defined as follows:
$$\dot{w}_1 = \gamma_1 \phi_1 \xi_1, \quad \dot{w}_2 = \gamma_2 \phi_2 \xi_2, \quad \dot{w}_3 = \gamma_3 \phi_3 \xi_3, \qquad (9)$$

with $\xi_i = \chi_i - y_i$.
Finally, the Morlet wavelet activation function implemented by the neuron has the following form:
$$\phi_i(\chi_i) = \exp\left( -\frac{\chi_i^2}{\beta_i} \right) \cos(\lambda_i \chi_i), \quad i = 1, 2, 3, \qquad (10)$$

where $\beta_i$ and $\lambda_i$ are expansion and dilation terms.
Remark 1.
In [17], it is stated that the input “u” should contain information about the system, denoted u = f(x) in Equation (8), where f(x) is a function of the system to be identified. However, we found that a more straightforward input can be used, since most of the benchmark systems utilized do not present an input function. To illustrate this, during the identification process of the Aizawa and Rucklidge models (see Table 3), we considered u = x. For the remaining systems, the control input is u = f(x). Although the use of u = y or even u = z could be studied, such cases are not considered in the rest of this work.
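To make the wiring of Equations (8)–(10) concrete, the sketch below connects the network to a Rucklidge oscillator with input u = x, as in Remark 1. The oscillator parameters (κ = 2, λ = 6.7) are common literature values and the network hyperparameters are illustrative placeholders; the values actually used in the paper belong to Tables 2 and 3:

```python
import numpy as np

# Rucklidge oscillator; kappa = 2, lam_r = 6.7 are common literature values
# (the parameters actually used in the paper are listed in Table 2).
kappa, lam_r = 2.0, 6.7
def rucklidge(s):
    x, y, z = s
    return np.array([-kappa * x + lam_r * y - y * z, x, -z + y**2])

# RWFONN hyperparameters: illustrative placeholders, NOT the Table 3 values.
alpha = np.array([5.0, 5.0, 5.0])        # alpha_i
delta = np.array([1.0, 1.0, 1.0])        # delta_i
gamma = np.array([200.0, 200.0, 200.0])  # gamma_i (filtered error gains)
beta, lam_w = 1e4, 0.01                  # Morlet expansion/dilation terms

def phi(chi):
    """Morlet wavelet activation, Eq. (10)."""
    return np.exp(-chi**2 / beta) * np.cos(lam_w * chi)

def network_rhs(yn, w, chi, u):
    """RWFONN right-hand side, Eq. (8), with cascade couplings and input u."""
    p = phi(chi)
    dy = -alpha * yn + delta * w * p
    dy[0] += yn[1]                     # + y_2
    dy[1] += yn[2]                     # + y_3
    dy[2] += u                         # + u (here u = x, per Remark 1)
    return dy, p

tau, steps = 0.01, 150_000             # 1500 s, as in the paper
s = np.array([1.0, 0.0, 4.5])          # oscillator initial condition
yn, w = np.zeros(3), np.zeros(3)       # network initialized away from the plant
for _ in range(steps):
    dy, p = network_rhs(yn, w, s, s[0])
    xi = s - yn                        # xi_i = chi_i - y_i
    s = s + tau * rucklidge(s)         # plant step (the paper uses RK4; Euler here)
    yn = yn + tau * dy                 # Eq. (8)
    w = w + tau * (gamma * p * xi)     # Eq. (9)
```

The placeholder gains only illustrate how the plant, the network, and the weight law are interconnected; with the tuned hyperparameters reported in the paper, the identification error settles in under a second.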

3.1. Metrics Used for the Identification Quantification

The degree of identification between the variables of the neural network ($y_1$, $y_2$, $y_3$) and the state variables of the chaotic oscillators (x, y, z) was quantified using the following two statistical metrics:
  • The Mean Square Error (MSE): Also known as Mean Squared Deviation (MSD), it measures the average of the squares of the errors between the estimated values and the actual values of any phenomenon. MSE is a risk function that corresponds to the expected value of the squared error loss and is defined as:
    $$MSE = \frac{1}{N} \sum_{k=1}^{N} \left[ (x_k - y_{1,k})^2 + (y_k - y_{2,k})^2 + (z_k - y_{3,k})^2 \right], \qquad (11)$$

    where N is the number of samples in the time series.
  • r² (r-squared): This value quantifies the amount of information that a model can reproduce from the original system, i.e., it indicates how well the neuron reproduces the behavior of the analyzed oscillator and is defined by:

    $$r^2 = \left( \frac{\sigma_{xy}}{\sigma_x \sigma_y} \right)^2, \qquad (12)$$

    where $\sigma_{xy}$ is the covariance between the original data (x, y, z) and the information obtained by the neural system ($y_1$, $y_2$, $y_3$), $\sigma_x$ is the standard deviation of the time series from the chaotic system, and $\sigma_y$ is the standard deviation of the temporal behavior of the neuron. The closer the value of r² is to 1, the greater the amount of information the neuron reproduces from the analyzed oscillator.
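Both metrics can be computed directly from the stored time series; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def mse(chi, y_est):
    """Mean squared identification error between a state series and its estimate."""
    chi, y_est = np.asarray(chi), np.asarray(y_est)
    return np.mean((chi - y_est) ** 2)

def r_squared(chi, y_est):
    """(sigma_xy / (sigma_x * sigma_y))^2, the squared normalized covariance."""
    chi, y_est = np.asarray(chi), np.asarray(y_est)
    sigma_xy = np.cov(chi, y_est, ddof=1)[0, 1]
    return (sigma_xy / (np.std(chi, ddof=1) * np.std(y_est, ddof=1))) ** 2

# Perfect identification gives MSE = 0 and r^2 = 1
t = np.linspace(0.0, 10.0, 1000)
x = np.sin(t)
print(mse(x, x), r_squared(x, x))
```

Note that r² is invariant under affine rescaling of the estimate, so it measures shape reproduction rather than absolute agreement; the MSE captures the latter.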

3.2. Results

The identification of the chaotic systems was performed as follows:
  • The oscillator to be identified was “connected” to the RWFONN considering $\chi_1 = x$, $\chi_2 = y$, and $\chi_3 = z$, as described in Equation (4), with the same structure and values in every case.
  • The neural network identified the three state variables of the oscillator.
  • For each state variable, a time series was obtained from the neural network with the same length as the series injected into it by the oscillator.
  • The MSE and r 2 were calculated for each oscillator, comparing a time series of each state variable of the oscillator with those estimated by the neural network. For the comparison, the transient state of the neural network was not removed.
Remark 2.
The expected values in the identification metrics were MSE ≃ 0 and r² ≃ 1; this guaranteed that the neuron could correctly identify the states of the connected chaotic system.
All the results in this paper were calculated with a fourth-order Runge–Kutta integrator (RK4) implemented in MATLAB 2024 and a step size of τ = 0.01, with a total observation time of 1500 s, which implied a total of 150,000 samples in each time series.
The initial conditions from the basin of attraction of each chaotic system were used. The neural network and the filtered error system were initialized with values similar but not identical to the oscillator under study.
Figure 2 shows a simulation time window for the identification of each of the analyzed systems. The x-variable time series of the oscillator is shown in blue, while the y 1 -series estimated by the neural network is shown in red. To clarify: The variable y 1 of the neural network estimated the variable x of the oscillator connected to the RWFONN. Different initial conditions were used to initialize the oscillator and the neural network; see Table 4. Theorem 2 states that the filtered error bounds are calculated when the systems are initialized with identical initial conditions. However, the performance tests were performed with different initial conditions in the oscillator and neural network to ensure correct identification (without bias) and robustness to disturbances and noise. Using identical initial conditions would only shorten the identification time, resulting in a shorter transient state. The same figure shows how the dynamics of the neural network stabilized in less than a second and perfectly identified the states of the oscillator. The results based on the MSE value for identifying the oscillators described in Table 2 are shown in Table 5. Table 6 shows the convergence times of the neural identification of each variable of the chaotic systems. On the other hand, the values of the determination coefficients are listed in Table 7.

4. Discussion

The presented online Recurrent Neural Network could identify all the proposed chaotic systems. First, Mean Square Error values very close to zero were obtained (as Table 5 depicts). The Rucklidge oscillator, which presented “the worst” values (an MSE of 3 × 10⁻³ for the state variable z), was nevertheless the only one that presented values of absolute zero in the identification of the other two state variables. A similar case occurred when the identification properties of the neural network were evaluated under the determination coefficient (Table 7). All the oscillators, except for the Rucklidge case, had values equal to 1, indicating that the neural network used can fully reproduce the behavior of each state variable of the oscillators studied. Although the Rucklidge system was not perfectly identified, the values obtained met our criteria as they were close enough to the objective value (MSE ≃ 0 and r² ≃ 1 for Δ = 0.001), which is why they were considered a satisfactory identification of the system.
On the other hand, the time the neural network requires to identify the chaotic system unambiguously was very short. A correct identification was possible in less than one second in all cases, as shown in Table 6 and Figure 2. As mentioned in Section 3.2, the neural network and the oscillator were initialized with different but close initial conditions. It is important to emphasize that using completely different initial conditions does not decrease the identification capacity of the neural network but increases the time the network needs to stabilize the identification. As with any other dynamical system, initializing with values far from the resulting attractor will increase the transient state. Still, it will not affect the identification properties of the neural algorithm.
Although the neural network studied was robust enough to identify nine oscillators (those studied in this work plus those published in [17]), its applicability is not universal. Tests were carried out with the Lorenz (MSE = [0.4494, 3.3921, 325.4273]; r² = [0.9887, 0.9309, 0]) and Rössler (MSE = [0.0001, 0.0002, 1.7080]; r² = [1, 1, 0.7648]) systems (see Figure 3), where satisfactory state identification could not be achieved, regardless of which state was fed back to the neural network. It is worth noting that this study was conducted in the same way as presented in this paper, with the same structure and values as those described here for the neural network.
Considering the results obtained and the methodology proposed in [37], where a system of equations based on polynomials can be reconstructed by sampling time series to produce a dynamic behavior similar to that of the original system, it would be worthwhile to consider both the inclusion of this methodology in the identification of systems by neural networks and a possible performance evaluation by calculating indices such as MSE and r 2 for the behaviors identified by the neural network as well as the behaviors generated by the polynomial-based system of equations. However, it should be noted that the computational time and complexity between the two methods are different, with RWFONN being more straightforward to implement.

RWFONN vs. RHONN

Since the RWFONN has a unique structure, we considered it valuable to compare the performance of the proposed neural network with a neural network that uses the same type of supervised training for system identification. The structure of a Recurrent High-Order Neural Network (RHONN) is similar to the structure presented by Jurado et al. (see Equation (2.1) in [20]), adjusted with the same hyperparameters given for the RWFONN in Table 3. Contrary to expectations, the results show that our neural network outperformed the RHONN in precision (highlighted in blue in Table 8), reflected in lower MSE values for most of the tests performed. This observation suggests that despite the greater complexity of higher-order networks and their theoretical ability to capture complicated dynamics, the RWFONN studied here is more effective for this particular type of task.

5. Conclusions

In this paper, the applicability of the First-Order Neural Network proposed in [17], with online training and a Morlet wavelet activation function, was tested for the identification of multiple chaotic systems under a fixed set of parameters in the neural algorithm. The state variables of seven chaotic oscillators were successfully identified using a fixed structure and parameter set for the RWFONN. It should be emphasized that the dynamical systems studied in this work have no common structure in the vector field, which confirms that the approach discussed here is robust enough to adequately identify the state variables of different dynamical systems. The Mean Square Error (MSE) and the coefficient of determination (r²) were used as measurement tools to quantify the performance of the neural network. In all cases, the errors obtained were very close to zero, with r-squared values very close to one (see Table 5 and Table 7). The neural algorithm provided a good identification of the analyzed chaotic systems in a very short time, converging to identify the three state variables of each analyzed oscillator in less than a second.
However, when extending the test bench of the proposed neural network and its synaptic values, we found two chaotic systems (the Lorenz system and the Rössler model) that the neural algorithm could not identify correctly (see Equations (8)–(10)) with the same values as described in Table 3. Consequently, the authors identified two questions that need to be addressed in future work: (i) What properties do the systems reported here have in common so that the neural network can adequately identify them? (ii) Is there a combination of values in the synaptic weights of the studied network that allows correct identification of the oscillators described in Table 1 as well as the Lorenz and Rössler models?
The results obtained and reported here contribute to the study and development of neural algorithms for state identification that are robust enough for their application in multiple systems and allow a structure similar to a “plug-and-play”, where the parameters and network configurations remain static for each oscillator studied. Finally, this work contributes to designing neural controllers for chaotic dynamical systems by using an RWFONN, due to its simple structure and low computational cost.

Author Contributions

J.L.E.-M.: conceptualization, writing—original draft, writing—review and editing, methodology, software, validation, data curation, visualization, project administration; D.A.M.-G.: writing—review and editing, conceptualization, formal analysis, methodology, software, validation; L.J.O.-G.: writing—review and editing, conceptualization; J.P.R.: formal analysis, funding acquisition, writing—review and editing; R.R.-R.: funding acquisition, writing—review and editing; J.Á.: supervision, writing—review and editing, conceptualization, resources, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

Partially supported by CONAHCYT México under project A1-S-26123.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

J.P.R., D.A.M.-G. and J.L.E.-M. thank CONAHCYT (Project: A1-S-26123, Grant: 2290436, and CVU: 706850). L.J.O.-G. acknowledges COPOCYT (Project:DG-522/2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Goebel, R.; Sanfelice, R.G.; Teel, A.R. Hybrid dynamical systems. IEEE Control Syst. Mag. 2009, 29, 28–93. [Google Scholar] [CrossRef]
  2. Lorenz, E. The butterfly effect. World Sci. Ser. Nonlinear Sci. Ser. A 2000, 39, 91–94. [Google Scholar]
  3. Ambika, G. Ed Lorenz: Father of the ‘butterfly effect’. Resonance 2015, 20, 198–205. [Google Scholar] [CrossRef]
  4. Ottino, J.M. Complex systems. Am. Inst. Chem. Eng. AIChE J. 2003, 49, 292. [Google Scholar] [CrossRef]
  5. Larsen-Freeman, D.; Cameron, L. Complex Systems and Applied Linguistics; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  6. Korotkov, A.G.; Kazakov, A.O.; Levanova, T.A.; Osipov, G.V. Chaotic regimes in the ensemble of FitzHugh-Nagumo elements with weak couplings. IFAC-PapersOnLine 2018, 51, 241–245. [Google Scholar] [CrossRef]
  7. Buscarino, A.; Belhamel, L.; Bucolo, M.; Palumbo, P.; Manes, C. Modeling a population of switches via chaotic dynamics. IFAC-PapersOnLine 2020, 53, 16791–16795. [Google Scholar] [CrossRef]
  8. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  9. Nagarajan, R. Deciphering Dynamical Nonlinearities in Short Time Series Using Recurrent Neural Networks. Sci. Rep. 2019, 9, 14158. [Google Scholar] [CrossRef]
  10. Pano-Azucena, A.D.; Tlelo-Cuautle, E.; Tan, S.X.D.; Ovilla-Martinez, B.; De la Fraga, L.G. FPGA-based implementation of a multilayer perceptron suitable for chaotic time series prediction. Technologies 2018, 6, 90. [Google Scholar] [CrossRef]
  11. González-Zapata, A.M.; Tlelo-Cuautle, E.; Cruz-Vega, I. On the Optimization of Machine Learning Techniques for Chaotic Time Series Prediction. Electronics 2022, 11, 3612. [Google Scholar] [CrossRef]
  12. González-Zapata, A.M.; Tlelo-Cuautle, E.; Ovilla-Martinez, B.; Cruz-Vega, I.; De la Fraga, L.G. Optimizing echo state networks for enhancing large prediction horizons of chaotic time series. Mathematics 2022, 10, 3886. [Google Scholar] [CrossRef]
  13. Serrano-Pérez, J.d.J.; Fernández-Anaya, G.; Carrillo-Moreno, S.; Yu, W. New results for prediction of chaotic systems using deep recurrent neural networks. Neural Process. Lett. 2021, 53, 1579–1596. [Google Scholar] [CrossRef]
  14. Fan, H.; Jiang, J.; Zhang, C.; Wang, X.; Lai, Y.C. Long-term prediction of chaotic systems with machine learning. Phys. Rev. Res. 2020, 2, 012080. [Google Scholar] [CrossRef]
  15. Huang, J.; Xu, D.; Li, Y.; Ma, Y. Near-Optimal Tracking Control of Partially Unknown Discrete-Time Nonlinear Systems Based on Radial Basis Function Neural Network. Mathematics 2024, 12, 1146. [Google Scholar] [CrossRef]
  16. Jia, K.; Lin, S.; Du, Y.; Zou, C.; Lu, M. Research on route tracking controller of Quadrotor UAV based on fuzzy logic and RBF neural network. IEEE Access 2023, 11, 111433–111447. [Google Scholar] [CrossRef]
  17. Magallón-García, D.A.; Ontanon-Garcia, L.J.; García-López, J.H.; Huerta-Cuéllar, G.; Soubervielle-Montalvo, C. Identification of Chaotic Dynamics in Jerky-Based Systems by Recurrent Wavelet First-Order Neural Networks with a Morlet Wavelet Activation Function. Axioms 2023, 12, 200. [Google Scholar] [CrossRef]
  18. Alanis, A.; Rios, J.; Gomez-Avila, J.; Zuniga, P.; Jurado, F. Discrete-time neural control of quantized nonlinear systems with delays: Applied to a three-phase linear induction motor. Electronics 2020, 9, 1274. [Google Scholar] [CrossRef]
  19. Vázquez, L.A.; Jurado, F. Continuous-time decentralized wavelet neural control for a 2 DOF robot manipulator. In Proceedings of the 2014 11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Campeche, Mexico, 29 September–3 October 2014; pp. 1–6. [Google Scholar]
  20. Jurado, F.; Lopez, S. A wavelet neural control scheme for a quadrotor unmanned aerial vehicle. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20170248. [Google Scholar] [CrossRef] [PubMed]
  21. Magallón, D.A.; Jaimes-Reátegui, R.; García-López, J.H.; Huerta-Cuellar, G.; López-Mancilla, D.; Pisarchik, A.N. Control of multistability in an erbium-doped fiber laser by an artificial neural network: A numerical approach. Mathematics 2022, 10, 3140. [Google Scholar] [CrossRef]
  22. Campos-Cantón, E.; Barajas-Ramirez, J.G.; Solis-Perales, G.; Femat, R. Multiscroll attractors by switching systems. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 013116. [Google Scholar] [CrossRef]
  23. Li, C.; Sprott, J.C.; Joo-Chen Thio, W.; Gu, Z. A simple memristive jerk system. IET Circuits Devices Syst. 2021, 15, 388–392. [Google Scholar] [CrossRef]
  24. Magallón-García, D.A.; García-López, J.H.; Huerta-Cuellar, G.; Jaimes-Reátegui, R.; Diaz-Diaz, I.A.; Ontanon-Garcia, L.J. Real-time neural identification using a recurrent wavelet first-order neural network of a chaotic system implemented in an FPAA. Integration 2024, 96, 102134. [Google Scholar] [CrossRef]
  25. Kosmatopoulos, E.; Polycarpou, M.; Christodoulou, M.; Ioannou, P. High-order neural network structures for identification of dynamical systems. IEEE Trans Neural Netw. 1995, 6, 422–431. [Google Scholar] [CrossRef]
  26. Hale, J.K. Ordinary Differential Equations; Wiley InterScience: New York, NY, USA, 1969. [Google Scholar]
  27. Rovithakis, G.A.; Christodoulou, M.A. Adaptive Control with Recurrent High-order Neural Networks, Theory and Industrial Applications; Springer: London, UK, 2000. [Google Scholar]
  28. Huerta-Cuéllar, G.; Jiménez-López, E.; Campos-Cantón, E.; Pisarchik, A.N. An approach to generate deterministic Brownian motion. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 2740–2746. [Google Scholar] [CrossRef]
  29. Gilardi-Velázquez, H.E.; Campos-Cantón, E. Nonclassical point of view of the Brownian motion generation via fractional deterministic model. Int. J. Mod. Phys. C 2018, 29, 1850020. [Google Scholar] [CrossRef]
  30. Keleş, Z.; Sonugür, G.; Alçin, M. The Modeling of the Rucklidge Chaotic System with Artificial Neural Networks. Chaos Theory Appl. 2023, 5, 59–64. [Google Scholar] [CrossRef]
  31. Alvarez, J.; Curiel, E.; Verduzco, F. Complex dynamics in classical control systems. Syst. Control Lett. 1997, 31, 277–285. [Google Scholar] [CrossRef]
  32. Anishchenko, V.S.; Astakhov, V.; Neiman, A.; Vadivasova, T.; Schimansky-Geier, L. Nonlinear Dynamics of Chaotic and Stochastic Systems: Tutorial and Modern Developments; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  33. Chua, L.O. Chua’s circuit 10 years later. Int. J. Circuit Theory Appl. 1994, 22, 279–305. [Google Scholar] [CrossRef]
  34. Pisarchik, A.; Jaimes-Reátegui, R. Homoclinic orbits in a piecewise linear Rössler-like circuit. J. Phys. Conf. Ser. 2005, 23, 122. [Google Scholar] [CrossRef]
  35. Arneodo, A.; Coullet, P.H.; Spiegel, E.A.; Tresser, C. Asymptotic chaos. Phys. D Nonlinear Phenom. 1985, 14, 327–347. [Google Scholar] [CrossRef]
  36. Varan, M.; Ulusoy, B.; Pehlivan, I.; Gurevin, B.; Akgul, A. Nonlinear Analysis and Circuit Realization of Chaotic Aizawa System. In Proceedings of the International Conference on Applied Mathematics in Engineering (ICAME), Balikesir, Turkey, 27–29 June 2018. [Google Scholar]
  37. Karimov, A.; Nepomuceno, E.G.; Tutueva, A.; Butusov, D. Algebraic method for the reconstruction of partially observed nonlinear systems using differential and integral embedding. Mathematics 2020, 8, 300. [Google Scholar] [CrossRef]
Figure 1. Projections onto the state space and the characteristic plane of the (a,b) Multiscroll system, (c,d) Rucklidge model, (e,f) Álvarez–Curiel attractor, (g,h) Anishchenko–Astakhov system, (i,j) Chua model, (k,l) PWL-Rössler system, and (m,n) Aizawa's model, with the parameter values given in Table 1.
Figure 2. Time evolution of the x state variable (blue line) and the state y₁ identified by the neural network (red line). The third graph in each panel shows the overlap of x and y₁ for (a) the Multiscroll system, (b) the Rucklidge model, (c) the Álvarez–Curiel attractor, (d) the Anishchenko–Astakhov system, (e) the Chua model, (f) the PWL-Rössler system, and (g) Aizawa's chaotic attractor.
Figure 3. The attractors of the Lorenz (top) and Rössler (bottom) systems, drawn in shades of blue, together with the attractor identified by the neural network in red tones.
Table 1. Equations of the chaotic systems analyzed in this work. The attractors of each of the systems are shown in Figure 1.
System         Equations
Multiscroll    ẋ = y,  ẏ = z,  ż = −a[x + y + z − f(x)],
               f(x) = −2 if x < −1;  0 if −1 ≤ x < 1;  2 if x ≥ 1.
Rucklidge      ẋ = −ax + by − yz,  ẏ = x,  ż = y² − z.
A–A            ẋ = ax + y − xz,  ẏ = −x,  ż = −bz + b f(x) x²,
               f(x) = 1 if x > 0;  0 if x ≤ 0.
A–C            ẋ = y,  ẏ = −x − 2ay + b f(z),  ż = −cx,  f(z) = z(z² − 1).
Chua           ẋ = c[y − x − f(x)],  ẏ = x − y + z,  ż = −py,
               f(x) = bx + (1/2)(a − b)(|x + 1| − |x − 1|).
Rössler-PWL    ẋ = −ax − by − z,  ẏ = x + cy − py,  ż = q f(x) − z,
               f(x) = 0 if x ≤ 3;  x − 3 if x > 3.
Aizawa         ẋ = x(z − b) − py,  ẏ = px + y(z − b),
               ż = c + az − z³/3 − (x² + y²)(1 + qz) + rzx³.
Table 2. Parameters of the chaotic systems described in Table 1.
System         Parameters
Multiscroll    a = 0.45
Rucklidge      a = 2,  b = 6.7
A–A            a = 1.2,  b = 0.5
A–C            a = 0.5,  b = 1.4,  c = 1
Chua           a = −1.27,  b = −0.68,  c = 10,  p = 14.87
Rössler-PWL    a = 0.05,  b = 0.5,  c = 0.133,  p = 0.02,  q = 15
Aizawa         a = 0.95,  b = 0.7,  c = 0.6,  p = 3.5,  q = 0.25,  r = 0.1
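As a quick numerical check, any model in Table 1 can be integrated with a fixed-step scheme. The sketch below integrates the Rucklidge model with a = 2, b = 6.7 from Table 2 using a classical fourth-order Runge–Kutta step; the step size h = 0.01 and the initial condition (0.2, 0.1, 0.1) are taken from Tables 3 and 4. This is an illustrative sketch following the standard Rucklidge sign convention, not the identification algorithm of the paper.

```python
import numpy as np

def rucklidge(state, a=2.0, b=6.7):
    # Rucklidge model (standard sign convention):
    # x' = -a x + b y - y z,  y' = x,  z' = y^2 - z
    x, y, z = state
    return np.array([-a * x + b * y - y * z, x, y ** 2 - z])

def rk4_step(f, state, h):
    # One classical fourth-order Runge-Kutta update.
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, h=0.01, steps=50_000):
    # Integrate and return the full trajectory (steps + 1 rows).
    traj = np.empty((steps + 1, len(x0)))
    traj[0] = x0
    for i in range(steps):
        traj[i + 1] = rk4_step(f, traj[i], h)
    return traj

# Initial condition of the system state from Table 4.
traj = simulate(rucklidge, np.array([0.2, 0.1, 0.1]))
```

With these parameters the trajectory stays on a bounded chaotic attractor, so the resulting arrays are well suited as training/identification signals for the network.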
Table 3. Parameters of the neural network described by Equations (8)–(10) for the identification of chaotic systems.
Parameter           Value
h                   0.01
α₁ = α₂ = α₃        5
δ₁ = δ₂ = δ₃        5
β₁ = β₂ = β₃        100
λ₁ = λ₂ = λ₃        0.01
γ₁ = γ₂ = γ₃        5000
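Equations (8)–(10) of the network are not reproduced in this excerpt, so the snippet below only illustrates the kind of activation the paper names: a real-valued Morlet wavelet, commonly written ψ(τ) = cos(ω₀τ) exp(−τ²/2), with a dilation and translation applied to the neuron input. The function names and the choice ω₀ = 5 are illustrative assumptions, not parameters taken from Table 3.

```python
import numpy as np

def morlet(tau, omega0=5.0):
    # Real-valued Morlet mother wavelet: a cosine carrier under a
    # Gaussian envelope (omega0 = 5 is a common illustrative choice).
    return np.cos(omega0 * tau) * np.exp(-0.5 * tau ** 2)

def morlet_activation(v, dilation=1.0, translation=0.0, omega0=5.0):
    # Dilated and translated wavelet applied to the neuron input v.
    return morlet((v - translation) / dilation, omega0)

# The activation peaks at the translation point and decays rapidly
# away from it, which gives the neuron its local receptive field.
v = np.linspace(-4.0, 4.0, 801)
phi = morlet_activation(v)
```

The rapid Gaussian decay is what makes wavelet activations attractive for identifying localized, oscillatory dynamics such as chaotic state trajectories.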
Table 4. Initial conditions used for the identification of the chaotic systems.
System         x      y₁     y      y₂     z      y₃
Multiscroll    0.2    0.1    0.1    0.2    0.1    0.1
Rucklidge      0.2    0.1    0.1    0.2    0.1    0.1
A–A            0.1    0.1    0.1    0.2    0.1    0.1
A–C            0.1    0.1    0.1    0.2    0.1    0.1
Chua           0.1    0.1    0.1    0.9    0.1    0.1
Rössler-PWL    2.5    0.1    0.1    2.0    0.1    0.1
Aizawa         0.1    0.1    0.1    0.2    0.1    0.1
Table 5. MSE values obtained for the identification of the chaotic systems described in Table 1 and Table 2.
System         x vs y₁          y vs y₂          z vs y₃
Multiscroll    6.157 × 10⁻⁷     4.7 × 10⁻⁹       9.02 × 10⁻⁸
Rucklidge      0                0                3 × 10⁻³
A–A            5.932 × 10⁻⁷     1.9 × 10⁻⁸       8.3 × 10⁻⁹
A–C            3.736 × 10⁻⁵     1 × 10⁻⁷         5.825 × 10⁻⁵
Chua           4.48 × 10⁻⁷      2.8 × 10⁻⁸       1.008 × 10⁻⁶
Rössler-PWL    6.756 × 10⁻⁷     1.732 × 10⁻⁷     1.52 × 10⁻⁸
Aizawa         8.982 × 10⁻⁷     2.606 × 10⁻⁷     1.95 × 10⁻⁸
Table 6. Convergence times of the neural identification of the chaotic systems.
System         x vs y₁    y vs y₂    z vs y₃
Multiscroll    0.4 s      0.2 s      0.3 s
Rucklidge      0.3 s      0.25 s     0.4 s
A–A            0.45 s     0.35 s     0.6 s
A–C            0.55 s     0.4 s      0.6 s
Chua           0.25 s     0.35 s     0.4 s
Rössler-PWL    0.55 s     0.2 s      0.25 s
Aizawa         0.55 s     0.4 s      0.5 s
Table 7. r² values obtained for the identification of the chaotic systems described in Table 1 and Table 2.
System         x vs y₁    y vs y₂    z vs y₃
Multiscroll    1          1          1
Rucklidge      1          1          0.9997
A–A            1          1          1
A–C            1          1          1
Chua           1          1          1
Rössler-PWL    1          1          1
Aizawa         1          1          1
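The two figures of merit reported in Tables 5 and 7 follow their standard definitions: the MSE is the mean of the squared identification error, and the coefficient of determination is r² = 1 − SS_res/SS_tot. The sketch below applies both to a state trace x and an estimate y1; the sine-wave data are synthetic stand-ins, not output of the network.

```python
import numpy as np

def mse(x, x_hat):
    # Mean squared error between a true state trace and its estimate.
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.mean((x - x_hat) ** 2)

def r_squared(x, x_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    ss_res = np.sum((x - x_hat) ** 2)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: a near-perfect estimate gives a tiny MSE
# and a coefficient of determination very close to 1.
t = np.linspace(0.0, 10.0, 1000)
x = np.sin(t)
y1 = x + 1e-4 * np.cos(3 * t)   # small identification error
```

Note that r² is scale-aware (it normalizes by the variance of the true signal), which is why it complements the raw MSE when the state amplitudes of the seven systems differ.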
Table 8. MSE values for the identification of the chaotic systems described in Table 1 using the RWFONN from this work, described by Equations (8)–(10), and the Recurrent High-Order Neural Network (RHONN) presented in [20].
               MSE-RWFONN                                       MSE-RHONN
System         x vs y₁        y vs y₂        z vs y₃            x vs y₁         y vs y₂        z vs y₃
Multiscroll    6.157 × 10⁻⁷   4.7 × 10⁻⁹     9.02 × 10⁻⁸        8.177 × 10⁻⁸    8.363 × 10⁻⁶   0.0041
Rucklidge      0              0              3 × 10⁻³           0.0068          0.0061         0.2355
A–A            5.932 × 10⁻⁷   1.9 × 10⁻⁸     8.3 × 10⁻⁹         9.4250 × 10⁻⁷   0.0021         0.0104
A–C            3.736 × 10⁻⁵   1 × 10⁻⁷       5.825 × 10⁻⁵       9.6063 × 10⁻⁷   0.0011         0.0148
Chua           4.48 × 10⁻⁷    2.8 × 10⁻⁸     1.008 × 10⁻⁶       8.4028 × 10⁻⁴   0.0098         0.0108
Rössler-PWL    6.756 × 10⁻⁷   1.732 × 10⁻⁷   1.52 × 10⁻⁸        0.0060          7.8632 × 10⁻⁴  1.8730 × 10⁻⁴
Aizawa         8.982 × 10⁻⁷   2.606 × 10⁻⁷   1.95 × 10⁻⁸        0.0066          0.0053         0.0436
Share and Cite

MDPI and ACS Style

Echenausía-Monroy, J.L.; Pena Ramirez, J.; Álvarez, J.; Rivera-Rodríguez, R.; Ontañón-García, L.J.; Magallón-García, D.A. A Recurrent Neural Network for Identifying Multiple Chaotic Systems. Mathematics 2024, 12, 1835. https://doi.org/10.3390/math12121835
