Article

Achieving Optimal Order in a Novel Family of Numerical Methods: Insights from Convergence and Dynamical Analysis Results

by Marlon Moscoso-Martínez 1,2,3, Francisco I. Chicharro 1, Alicia Cordero 1,*, Juan R. Torregrosa 1 and Gabriela Ureña-Callay 2,3
1 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
2 Faculty of Sciences, Escuela Superior Politécnica de Chimborazo (ESPOCH), Panamericana Sur km 1 1/2, Riobamba 060106, Ecuador
3 Higher School of Engineering and Technology, Universidad Internacional de la Rioja (UNIR), Avda. de la Paz 137, 26006 Logroño, Spain
* Author to whom correspondence should be addressed.
Axioms 2024, 13(7), 458; https://doi.org/10.3390/axioms13070458
Submission received: 20 May 2024 / Revised: 25 June 2024 / Accepted: 2 July 2024 / Published: 7 July 2024

Abstract:
In this manuscript, we introduce a novel parametric family of multistep iterative methods designed to solve nonlinear equations. This family is derived from a damped Newton’s scheme but includes an additional Newton step with a weight function and a “frozen” derivative, that is, the same derivative as in the previous step. Initially, we develop a quad-parametric class with a first-order convergence rate. Subsequently, by restricting one of its parameters, we accelerate the convergence to achieve a third-order uni-parametric family. We thoroughly investigate the convergence properties of this final class of iterative methods, assess its stability through dynamical tools, and evaluate its performance on a set of test problems. We conclude that there exists one optimal fourth-order member of this class, in the sense of Kung–Traub’s conjecture. Our analysis includes stability surfaces and dynamical planes, revealing the intricate nature of this family. Notably, our exploration of stability surfaces enables the identification of specific family members suitable for scalar functions with a challenging convergence behavior, as they may exhibit periodic orbits and fixed points with attracting behavior in their corresponding dynamical planes. Furthermore, our dynamical study finds members of the family of iterative methods with exceptional stability. This property allows us to converge to the solution of practical problem-solving applications even from initial estimations very far from the solution. We confirm our findings with various numerical tests, demonstrating the efficiency and reliability of the presented family of iterative methods.

1. Introduction

A multitude of challenges in Computational Sciences and other fields in Science and Technology can be effectively represented as nonlinear equations through mathematical modeling; see, for example, [1,2,3]. Finding solutions ξ of nonlinear equations of the form f(x) = 0 stands as a classical yet formidable problem in the realms of Numerical Analysis, Applied Mathematics, and Engineering. Here, the function f : I ⊆ ℝ → ℝ is assumed to be differentiable enough within the open interval I. Extensive overviews of iterative methods for solving nonlinear equations published in recent years can be found in [4,5,6] and their associated references.
In recent years, many iterative methods have been developed to solve nonlinear equations. The essence of these methods is as follows: if one knows a sufficiently small domain that contains only one root ξ of the equation f(x) = 0, and we select a sufficiently close initial estimate x_0 of the root, we generate a sequence of iterates x_1, x_2, …, x_k, … by means of a fixed point function g(x), which under certain conditions converges to ξ. The convergence of the sequence is guaranteed, among other elements, by the appropriate choice of the function g and the initial approximation x_0.
The method described by the iteration function g : I ⊆ ℝ → ℝ such that
x_{k+1} = g(x_k), k = 0, 1, …,
starting from a given initial estimate x_0, includes a large number of iterative schemes. These differ from each other by the way the iteration function g is defined.
Among these methods, Newton’s scheme is widely acknowledged as the most renowned approach for locating a solution ξ I . This scheme is defined by the iterative formula:
x_{k+1} = x_k − f(x_k)/f′(x_k),
where k = 0, 1, 2, …, and f′(x_k) denotes the derivative of the function f evaluated at the kth iterate.
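As a plain illustration (ours, not part of the paper), Newton's scheme (2) can be sketched in a few lines; the stopping rule and the test function f(x) = x² − 2 are illustrative choices:

```python
# Minimal sketch of Newton's scheme (2); the stopping rule and the
# test function f(x) = x^2 - 2 are illustrative choices, not the paper's.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / df(x)          # x_{k+1} = x_k - f(x_k)/f'(x_k)
        if abs(f(x)) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```

Starting from x_0 = 1.5, the iterates converge rapidly to √2, the positive root of the test function.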
A very important concept of iterative methods is their order of convergence, which provides a measure of the speed of convergence of the iterates. Let {x_k}_{k≥0} be a sequence of real numbers such that lim_{k→∞} x_k = ξ. The convergence is called (see [7]):
(a)
Linear, if there exist C, 0 < C < 1, and k_0 ∈ ℕ such that
|x_k − ξ| / |x_{k−1} − ξ| ≤ C, for all k > k_0;
(b)
Of order p, if there exist C > 0 and k_0 ∈ ℕ such that
|x_k − ξ| / |x_{k−1} − ξ|^p ≤ C, for all k > k_0.
We denote by e_k = x_k − ξ the error of the kth iteration. Moreover, the equation e_{k+1} = C e_k^p + O(e_k^{p+1}) is called the error equation of the iterative method, where p is its order of convergence and C is the asymptotic error constant.
It is known (see, for example, [4]) that if x_{k+1} = g(x_k) is an iterative point-to-point method with d functional evaluations per step, then the order of convergence of the method is, at most, p = d. On the other hand, Traub proves in [4] that to design a point-to-point method of order p, the iterative expression must contain derivatives of the nonlinear function whose zero we are looking for, at least up to order p − 1. This is why point-to-point methods are not efficient if we seek to simultaneously increase the order of convergence and the computational efficiency.
These restrictions of point-to-point methods are the starting point for the growing interest of researchers in multipoint methods, see for example [4,5,6]. In such schemes, also called predictor–corrector, the  ( k + 1 ) -th iterate is obtained by using functional evaluations of the k-th iterate and also of other intermediate points. For example, a two-step multipoint method has the expression
y_k = Ψ(x_k), x_{k+1} = Φ(x_k, y_k), k = 0, 1, 2, …
Thus, the main motivation for designing new iterative schemes is to increase the order of convergence without adding many functional evaluations. The first multipoint schemes were designed by Traub in [4]. At that time the concept of optimality had not yet been defined and the fact of designing multipoint schemes with the same order as classical schemes such as Halley or Chebyshev, but with a much simpler iterative expression and without using second derivatives, was of great importance. The techniques used then have been the seed of those that allowed the appearance of higher-order methods.
In recent years, different authors have developed a large number of optimal schemes for solving nonlinear equations [6,8]. A common way to increase the convergence order of an iterative scheme is to use the composition of methods, based on the following result (see [4]).
Theorem 1.
Let g_1(x) and g_2(x) be fixed-point functions of orders p_1 and p_2, respectively. Then, the iterative method resulting from composing them, x_{k+1} = g_1(g_2(x_k)), k = 0, 1, 2, …, has an order of convergence p_1 p_2.
However, this composition necessarily increases the number of functional evaluations. So, to preserve optimality, the number of evaluations must be reduced. There are many techniques used for this purpose by different authors, such as approximating some of the evaluations that have appeared with the composition by means of interpolation polynomials, Padé approximants, inverse interpolation, Adomian polynomials, etc. (see, for example, [6,9,10]). If after the reduction of functional evaluations the resulting method is not optimal, the weight function technique, introduced by Chun in [11], can be used to increase its order of convergence.
There are also other ways in the literature to compare different iterative methods with each other. Traub in [4] defined the information efficiency of an iterative method as
I(M) = p/d,
where p is the order of convergence and d is the number of functional evaluations per iteration. On the other hand, Ostrowski in [12] introduced the so-called efficiency index,
EI(M) = p^{1/d},
which, in turn, gives rise to the concept of optimality of an iterative method.
Regarding the order of convergence, Kung and Traub in their conjecture (see [13]) establish the highest order that a multipoint iterative scheme without memory can reach. Schemes that attain this limit are called optimal methods. The conjecture states that the order of convergence of any multistep method without memory cannot exceed 2^{d−1} (called the optimal order), where d is the number of functional evaluations per iteration, with efficiency index 2^{(d−1)/d} (called the optimal index). In this sense, Newton's scheme is an optimal method.
Furthermore, in order to numerically test the behavior of the different iterative methods, Weerakoon and Fernando in [14] introduced the so-called computational order of convergence (COC),
p_COC ≈ ln(|x_{k+1} − ξ| / |x_k − ξ|) / ln(|x_k − ξ| / |x_{k−1} − ξ|), k = 1, 2, …,
where x k + 1 , x k and x k 1 are three consecutive approximations of the root of the nonlinear equation, obtained in the iterative process. However, the value of the zero ξ is not known in practice, which motivated the definition in [15] of the approximate computational convergence order ACOC,
p_ACOC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|), k = 2, 3, ….
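As a quick illustration (our own example), the ACOC just defined can be computed from four consecutive iterates. Here the sequence is generated by Newton's method on f(x) = x² − 2, so the estimate should be close to 2:

```python
import math

# Estimate the ACOC defined above from four consecutive iterates.
# The generating scheme (Newton on f(x) = x^2 - 2) is an illustrative choice.
def acoc(x_km2, x_km1, x_k, x_kp1):
    num = math.log(abs(x_kp1 - x_k) / abs(x_k - x_km1))
    den = math.log(abs(x_k - x_km1) / abs(x_km1 - x_km2))
    return num / den

xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))   # Newton step

p = acoc(xs[1], xs[2], xs[3], xs[4])
# p approximates the order of Newton's method, i.e. 2
```

The advantage of the ACOC over the COC is visible in the code: only the iterates themselves are needed, not the exact root ξ.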
On the other hand, the dynamical analysis of rational operators derived from iterative schemes, particularly when applied to low-degree nonlinear polynomial equations, has emerged as a valuable tool for assessing the stability and reliability of these numerical methods. This approach is detailed, for instance, in Refs. [16,17,18,19,20] and their associated references.
Using the tools of complex discrete dynamics, it is possible to compare different algorithms in terms of their basins of attraction, the dynamical behavior of the rational functions associated with the iterative method on low-degree polynomials, etc. Varona [21], Amat et al. [22], Neta et al. [23], Cordero et al. [24], Magreñán [25], Geum et al. [26], among others, have analyzed many schemes and parametric families of methods under this point of view, obtaining interesting results about their stability and reliability.
The dynamical analysis of an iterative method focuses on the study of the asymptotic behavior of the fixed points (roots, or not, of the equation) of the operator, as well as on the basins of attraction associated with them. In the case of parametric families of iterative methods, the analysis of the free critical points (points where the derivative of the operator cancels out that are not roots of the nonlinear function) and stability functions of the fixed points allows us to select the most stable members of these families. Some of the existing works in the literature related to this approach are Refs. [27,28], among others.
In this paper, we introduce a novel parametric family of multistep iterative methods tailored for solving nonlinear equations. This family is constructed by enhancing the traditional Newton’s scheme, incorporating an additional Newton step with a weight function and a frozen derivative. As a result, the family is characterized by a two-step iterative expression that relies on four arbitrary parameters.
Our approach yields a third-order uni-parametric family and a fourth-order member. However, in the course of developing these iterative schemes, we initially start with a first-order quad-parametric family. By selectively setting just one parameter, we manage to accelerate its convergence to a third-order scheme, and for a specific value of this parameter, we achieve an optimal member. To substantiate these claims, we conduct a comprehensive convergence analysis for all classes.
The stability of this newly introduced family is rigorously examined using dynamical tools. We construct stability surfaces and dynamical planes to illustrate the intricate behavior of this class. These stability surfaces help us to identify specific family members with exceptional behavior, making them well-suited for practical problem-solving applications. To further demonstrate the efficiency and reliability of these iterative schemes, we conduct several numerical tests.
The rest of the paper is organized as follows. In Section 2, we present the proposed class of iterative methods depending on several parameters, which is step-by-step modified in order to achieve the highest order of convergence. Section 3 is devoted to the dynamical study of the uni-parametric family; by means of this analysis, we find the most stable members, less dependent from their initial estimation. In Section 4, the previous theoretical results are checked by means of numerical tests on several nonlinear problems, using a wide variety of initial guesses and parameter values. Finally, some conclusions are presented.

2. Convergence Analysis of the Family

In this section, we conduct a convergence analysis of the newly introduced quad-parametric iterative family, with the following iterative expression:
y_k = x_k − α f(x_k)/f′(x_k),
x_{k+1} = y_k − [β + γ (f(y_k)/f(x_k)) + δ (f(y_k)/f(x_k))^2] · f(x_k)/f′(x_k),
where α , β , γ , δ are arbitrary parameters and k = 0 , 1 , 2 , .
Additionally, we present a strategy for simplifying it into a uni-parametric class to enhance convergence speed. Consequently, even though the quad-parametric family has a first-order convergence rate, we employ higher-order Taylor expansions in our proof, as they are instrumental in establishing the convergence rate of the uni-parametric subfamily. In Appendix A, the Mathematica code used for checking it is available.
Theorem 2 (quad-parametric family).
Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f′(x) is continuous and nonsingular at ξ, and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using expression (3) converges to ξ with order one, its error equation being
e_{k+1} = (−α^2 δ + α(γ + 2δ − 1) − β − γ − δ + 1) e_k + O(e_k^2),
where e k = x k ξ , and α, β, γ, and δ are free parameters.
Proof. 
Let us consider ξ as the simple root of the nonlinear function f(x) and x_k = ξ + e_k. Calculating the Taylor expansions of f(x_k) and f′(x_k) around the root ξ, we get
f(x_k) = f(ξ) + f′(ξ)e_k + (1/2!)f″(ξ)e_k^2 + (1/3!)f‴(ξ)e_k^3 + (1/4!)f⁗(ξ)e_k^4 + O(e_k^5)
= f′(ξ)[e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4] + O(e_k^5),
and
f′(x_k) = f′(ξ) + f″(ξ)e_k + (1/2!)f‴(ξ)e_k^2 + (1/3!)f⁗(ξ)e_k^3 + O(e_k^4)
= f′(ξ)[1 + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3] + O(e_k^4),
where C_p = (1/p!) f^(p)(ξ)/f′(ξ), p = 2, 3, …
By a direct division of (4) and (5),
f(x_k)/f′(x_k) = e_k − C_2 e_k^2 + 2(C_2^2 − C_3) e_k^3 − (4C_2^3 − 7C_2C_3 + 3C_4) e_k^4 + O(e_k^5).
Replacing (6) in (3), we have
y_k = ξ + (1 − α)e_k + αC_2 e_k^2 − 2α(C_2^2 − C_3) e_k^3 + α(4C_2^3 − 7C_2C_3 + 3C_4) e_k^4 + O(e_k^5).
Again a Taylor expansion of f ( y k ) around ξ allows us to get
f(y_k) = f′(ξ)[(1 − α)e_k + (α^2 − α + 1)C_2 e_k^2 − (2α^2 C_2^2 + (α^3 − 3α^2 + α − 1)C_3) e_k^3 + (5α^2 C_2^3 + α^2(3α − 10)C_2C_3 + (α^4 − 4α^3 + 6α^2 − α + 1)C_4) e_k^4] + O(e_k^5).
Dividing (8) by (4), we obtain
f(y_k)/f(x_k) = (1 − α) + α^2 C_2 e_k − α^2((α − 3)C_3 + 3C_2^2) e_k^2 + α^2((α^2 − 4α + 6)C_4 + 2(2α − 7)C_2C_3 + 8C_2^3) e_k^3 + O(e_k^4).
Finally, substituting (6), (7) and (9), in the second step of family (3), we have
x_{k+1} = ξ + A_1 e_k + A_2 e_k^2 + A_3 e_k^3 + A_4 e_k^4 + O(e_k^5),
where
A_1 = −α^2 δ + α(γ + 2δ − 1) − β − γ − δ + 1,
A_2 = (2α^3 δ − α^2(γ + δ) − α(γ + 2δ − 1) + β + γ + δ) C_2,
A_3 = (−2α^4 δ + α^3(γ + 8δ) − α^2(3γ + 4δ) − 2α(γ + 2δ − 1) + 2(β + γ + δ)) C_3 − (α^4 δ + 8α^3 δ − 2α^2(2γ + 3δ) − 2α(γ + 2δ − 1) + 2(β + γ + δ)) C_2^2,
A_4 = (7α^4 δ + 26α^3 δ − α^2(13γ + 22δ) − 4α(γ + 2δ − 1) + 4(β + γ + δ)) C_2^3 + (2α^5 δ + 4α^4 δ − α^3(5γ + 48δ) + α^2(19γ + 31δ) + 7α(γ + 2δ − 1) − 7(β + γ + δ)) C_2C_3 + (2α^5 δ − α^4(γ + 10δ) + 4α^3(γ + 5δ) − 3α^2(2γ + 3δ) − 3α(γ + 2δ − 1) + 3(β + γ + δ)) C_4,
being the error equation
e_{k+1} = A_1 e_k + A_2 e_k^2 + A_3 e_k^3 + A_4 e_k^4 + O(e_k^5) = (−α^2 δ + α(γ + 2δ − 1) − β − γ − δ + 1) e_k + O(e_k^2),
and the proof is finished.    □
From Theorem 2, it is evident that the newly introduced quad-parametric family exhibits a convergence order of one, irrespective of the values assigned to α, β, γ, and δ. Nevertheless, we can expedite convergence by constraining two of the parameters, effectively reducing the family to a bi-parametric iterative scheme. In Appendix B, the Mathematica code used for checking it is available.
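The linear error coefficient of Theorem 2 can also be checked numerically; the sketch below (our own check, using the error equation as displayed above) compares e_{k+1}/e_k with the predicted coefficient for arbitrary parameter values:

```python
import math

# One step of the quad-parametric family (3) on f(x) = x^2 - 2;
# parameter values and the perturbation size are illustrative.
def family_step(f, df, x, a, b, g, d):
    y = x - a * f(x) / df(x)
    u = f(y) / f(x)
    return y - (b + g * u + d * u * u) * f(x) / df(x)

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
root = math.sqrt(2.0)

a, b, g, d = 0.5, 0.3, 0.2, 0.1   # alpha, beta, gamma, delta
# linear coefficient of the error equation in Theorem 2
A1 = -a**2 * d + a * (g + 2 * d - 1) - b - g - d + 1

e0 = 1e-5
x1 = family_step(f, df, root + e0, a, b, g, d)
ratio = (x1 - root) / e0          # approaches A1 as e0 -> 0
```

For these parameter values A1 = 0.075, and the computed ratio agrees with it up to terms of order e_0.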
Theorem 3 (bi-parametric family).
Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f′(x) is continuous and nonsingular at ξ, and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using expression (3) converges to ξ with order three, provided that β = (α − 1)^2(α^2 δ − α − 1)/α^2 and γ = (2α^3 δ − 2α^2 δ + 1)/α^2, its error equation being
e_{k+1} = (−(α^4 δ − 2)C_2^2 + (α − 1)C_3) e_k^3 + O(e_k^4),
where e_k = x_k − ξ, C_q = (1/q!) f^(q)(ξ)/f′(ξ), q = 2, 3, …, and α, δ are arbitrary parameters.
Proof. 
Using the results of Theorem 2 to cancel the terms A_1 and A_2 accompanying e_k and e_k^2 in (12), respectively, it must be satisfied that
−α^2 δ + α(γ + 2δ − 1) − β − γ − δ + 1 = 0,
2α^3 δ − α^2(γ + δ) − α(γ + 2δ − 1) + β + γ + δ = 0.
It is clear that system (13) has infinitely many solutions, given by
β = (α − 1)^2(α^2 δ − α − 1)/α^2 and γ = (2α^3 δ − 2α^2 δ + 1)/α^2,
where α and δ are free parameters. Therefore, replacing (14) in (11), we obtain
A_1 = 0,
A_2 = 0,
A_3 = −(α^4 δ − 2)C_2^2 + (α − 1)C_3,
A_4 = (7α^4 δ − 9)C_2^3 + (2(α − 3)α^4 δ − 5α + 12)C_2C_3 − (α − 3)(α − 1)C_4,
the error equation being
e_{k+1} = A_3 e_k^3 + O(e_k^4) = (−(α^4 δ − 2)C_2^2 + (α − 1)C_3) e_k^3 + O(e_k^4),
and the proof is finished.    □
According to the findings in Theorem 3, it is evident that the newly introduced bi-parametric family
y_k = x_k − α f(x_k)/f′(x_k),
x_{k+1} = y_k − [β + γ (f(y_k)/f(x_k)) + δ (f(y_k)/f(x_k))^2] · f(x_k)/f′(x_k),
where k = 0, 1, 2, …, β = (α − 1)^2(α^2 δ − α − 1)/α^2 and γ = (2α^3 δ − 2α^2 δ + 1)/α^2, consistently exhibits third-order convergence for all values of α and δ. Nevertheless, it is noteworthy that by restricting one of the parameters while transitioning to a uni-parametric iterative scheme, not only can we sustain convergence, but we can also enhance performance. This improvement arises from the reduction in the error equation complexity, resulting in more efficient computations.
Corollary 1 (uni-parametric family).
Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f′(x) is continuous and nonsingular at ξ and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using expression (17) converges to ξ with order three, provided that ϵ = α^4 δ = 2, its error equation being
e_{k+1} = (α − 1)C_3 e_k^3 + O(e_k^4),
where e_k = x_k − ξ, C_q = (1/q!) f^(q)(ξ)/f′(ξ), q = 2, 3, …, and α is an arbitrary parameter. Indeed, α = 1 (and, therefore, δ = ϵ = 2) provides the only member of the family with optimal fourth order of convergence.
Proof. 
Using the results of Theorem 3 to reduce the expression of A_3 accompanying e_k^3 in (15), it must be satisfied that α^4 δ − 2 = 0 and/or α − 1 = 0. It is easy to show that the first equation has infinitely many solutions, characterized by
ϵ = α^4 δ = 2.
Therefore, replacing (18) in (15), we obtain
e_{k+1} = A_3 e_k^3 + O(e_k^4) = (α − 1)C_3 e_k^3 + O(e_k^4),
and the proof is finished.    □
Based on the outcomes derived from Corollary 1, it becomes apparent that the recently introduced uni-parametric family, which we will call MCCTU( α ),
y_k = x_k − α f(x_k)/f′(x_k),
x_{k+1} = y_k − [β + γ (f(y_k)/f(x_k)) + δ (f(y_k)/f(x_k))^2] · f(x_k)/f′(x_k),
where k = 0, 1, 2, …, β = (α − 1)^2(α^2 δ − α − 1)/α^2, γ = (2α^3 δ − 2α^2 δ + 1)/α^2 and δ = 2/α^4, consistently exhibits a convergence order of three, regardless of the chosen value of α. Nevertheless, a remarkable observation emerges when α = 1: in such a case, this member of the family attains an optimal convergence order of four.
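A compact implementation of MCCTU(α) under the parameter choices above can look as follows (a sketch under our own choices: the classical test equation x³ − 2x − 5 = 0 and the optimal member α = 1):

```python
# Sketch of the MCCTU(alpha) scheme (20), with beta, gamma as in (14)
# and delta = 2/alpha^4; the test equation x^3 - 2x - 5 = 0 is illustrative.
def mcctu_step(f, df, x, alpha):
    delta = 2.0 / alpha**4
    beta = (alpha - 1.0) ** 2 * (alpha**2 * delta - alpha - 1.0) / alpha**2
    gamma = (2.0 * alpha**3 * delta - 2.0 * alpha**2 * delta + 1.0) / alpha**2
    y = x - alpha * f(x) / df(x)
    u = f(y) / f(x)
    return y - (beta + gamma * u + delta * u * u) * f(x) / df(x)

f = lambda x: x**3 - 2.0 * x - 5.0
df = lambda x: 3.0 * x**2 - 2.0

x = 3.0
for _ in range(4):
    if abs(f(x)) < 1e-14:      # stop once the residual is negligible
        break
    x = mcctu_step(f, df, x, alpha=1.0)   # alpha = 1: the optimal member
```

With α = 1 the iterates reach machine precision in very few steps, consistent with fourth-order convergence; the root of this test equation is approximately 2.0945514815.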
Due to the previous results, we have chosen to concentrate our efforts solely on the MCCTU( α ) class of iterative schemes moving forward. To pinpoint the most effective members within this family, we will utilize dynamical techniques outlined in Section 3.

3. Stability Analysis

This section delves into the examination of the dynamical characteristics of the rational operator linked to the iterative schemes within the MCCTU( α ) family. This exploration provides crucial insights into the stability and dependence of the members of the family with respect to the initial estimations used. To shed light on the performance, we create rational operators and visualize their dynamical planes. These visualizations enable us to discern the behavior of specific methods in terms of the attraction basins of periodic orbits, fixed points, and other relevant dynamics.
Now, we introduce the basic concepts of complex dynamics used in the dynamical analysis of iterative methods. The texts [29,30], among others, provide extensive and detailed information on this topic.
Given a rational function R : Ĉ → Ĉ, where Ĉ is the Riemann sphere, the orbit of a point z_0 ∈ Ĉ is defined as
{z_0, R(z_0), R^2(z_0), …, R^n(z_0), …}.
We are interested in the study of the asymptotic behavior of the orbits depending on the initial estimate z 0 , analyzed in the dynamical plane of the rational function R defined by the different iterative methods.
To obtain these dynamical planes, we must first classify the fixed or periodic points of the rational operator R. A point z_0 ∈ Ĉ is called a fixed point if it satisfies R(z_0) = z_0. If a fixed point is not a solution of the equation, it is called a strange fixed point. z_0 is said to be a periodic point of period p > 1 if R^p(z_0) = z_0 and R^k(z_0) ≠ z_0 for k < p. A critical point z_C is a point where R′(z_C) = 0.
On the other hand, a fixed point z_0 is called attracting if |R′(z_0)| < 1, superattracting if |R′(z_0)| = 0, repulsive if |R′(z_0)| > 1, and parabolic if |R′(z_0)| = 1.
The basin of attraction of an attractor z ¯ is defined as the set of pre-images of any order:
A(z̄) = {z_0 ∈ Ĉ : R^n(z_0) → z̄ as n → ∞}.
The Fatou set consists of the points whose orbits tend to an attractor (fixed point, periodic orbit, or infinity). Its complement in Ĉ is the Julia set, J. Therefore, the Julia set includes all the repulsive fixed points and periodic orbits, together with their pre-images. So, the basin of attraction of any fixed point belongs to the Fatou set and, conversely, the boundaries of the basins of attraction compose the Julia set.
The following classical result, which is due to Fatou [31] and Julia [32], includes both periodic points (of any period) and fixed points, considered as periodic points of the unit period.
Theorem 4. ([31,32]).
Let R be a rational function. The immediate basins of attraction of each attracting periodic point contain at least one critical point.
By means of this key result, all the attracting behavior can be found using the critical points as a seed.

3.1. Rational Operator

While the fixed-point operator can be formulated for any nonlinear function, our focus here lies on constructing this operator for low-degree nonlinear polynomial equations, in order to get a rational function. This choice stems from the fact that the stability or instability criteria applied to methods on these equations can often be extended to other cases. Therefore, we introduce the following nonlinear equation represented by f ( x ) :
f(x) = (x − a)(x − b) = 0,
where a, b ∈ ℝ are the roots of the polynomial.
Let us remark that when MCCTU(α) is directly applied to f(x), the parameter α disappears in the resulting rational expression, so no dynamical analysis can be made in terms of it. However, if we use the parameter ϵ = α^4 δ appearing in Corollary 1, the same class of iterative methods can be expressed as MCCTU(ϵ) and the dynamical analysis can be made depending on ϵ.
Proposition 1 (rational operator R f ).
Let f(x) be the polynomial equation given in (21), for a, b ∈ ℂ. The rational operator R_f related to the MCCTU(ϵ) family given in (20) on f(x) is
R_f(x, ϵ) = x^3 (x^3 + 4x^2 + 5x + 2 − ϵ) / ((2 − ϵ)x^3 + 5x^2 + 4x + 1),
with ϵ C being an arbitrary parameter.
Proof. 
Let f(x) be a generic quadratic polynomial with roots a, b ∈ ℂ. We apply the iterative scheme MCCTU(ϵ) given in (20) on f(x) and obtain a rational function A_f(x, ϵ) that depends on the roots a, b and the parameter ϵ ∈ ℂ. Then, by using a Möbius transformation (see [22,33,34]) on A_f(x, ϵ) with
h(w) = (w − a)/(w − b),
satisfying h(∞) = 1, h(a) = 0 and h(b) = ∞, we get
R_f(x, ϵ) = (h ∘ A_f ∘ h^{−1})(x) = x^3 (x^3 + 4x^2 + 5x + 2 − ϵ) / ((2 − ϵ)x^3 + 5x^2 + 4x + 1),
which depends only on the arbitrary parameter ϵ ∈ ℂ, thus completing the proof.    □
From Proposition 1, if we set ϵ − 2 = 0, that is,
δ = 2/α^4,
then it is easy to show that the rational operator R_f(x, ϵ) simplifies to the expression
R_f(x) = x^4 (x^2 + 4x + 5) / (5x^2 + 4x + 1),
which does not depend on any free parameters.
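As a consistency check (ours, assuming the two expressions displayed above), one can verify numerically that the general operator reduces to the parameter-free one at ϵ = 2, and that x = 1 is fixed for any ϵ:

```python
# R_f(x, eps) as displayed in (22) and its epsilon = 2 simplification (25).
def R_general(x, eps):
    num = x**3 * (x**3 + 4 * x**2 + 5 * x + 2 - eps)
    den = (2 - eps) * x**3 + 5 * x**2 + 4 * x + 1
    return num / den

def R_simplified(x):
    return x**4 * (x**2 + 4 * x + 5) / (5 * x**2 + 4 * x + 1)

samples = [0.3, -0.7, 1.9, 2.5]
diffs = [abs(R_general(x, 2.0) - R_simplified(x)) for x in samples]
fixed = R_general(1.0, 7.0)        # x = 1 is fixed for every eps
```

The sample points are arbitrary; the denominator 5x² + 4x + 1 has no real zeros, so the evaluations are well defined.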

3.2. Fixed Points

Now, we calculate all the fixed points of R_f(x, ϵ) given by (22), to afterwards analyze their character (attracting, repulsive, or neutral/parabolic).
Proposition 2.
The fixed points of R_f(x, ϵ) are x = 0, x = ∞, and also five strange fixed points:
  • ex_1 = 1, and
  • ex_{2,3}(ϵ) and ex_{4,5}(ϵ), the four roots of the palindromic quartic x^4 + 5x^3 + (8 + ϵ)x^2 + 5x + 1 = 0, given by ex = t/2 ± √(t^2/4 − 1) with t = (−5 ± √(1 − 4ϵ))/2.
When Equation (24) holds, that is, ϵ = 2, the strange fixed points ex_{2,3}(ϵ) and ex_{4,5}(ϵ) do not depend on any free parameter:
  • ex_{2,3}(2) = −2.1943 ± 1.5370i, and
  • ex_{4,5}(2) = −0.3057 ± 0.2142i.
Moreover, the strange fixed points depending on ϵ appear in conjugate pairs, ex_{2,3}(ϵ) and ex_{4,5}(ϵ). If ϵ = 1/4, ex_2(ϵ) = ex_3(ϵ) and ex_4(ϵ) = ex_5(ϵ), so the number of distinct strange fixed points is three. Indeed, ex_3(−20) = ex_4(−20) = 1 and ex_3(0) = ex_4(0) = −1.
From Proposition 2, we establish that there are seven fixed points. Among these, 0 and ∞ come from the roots a and b of f(x), while ex_1 = 1 comes from the divergence of the original scheme, prior to the Möbius transformation.
Proposition 3.
The strange fixed point ex_1 = 1, ϵ ∈ ℂ, has the following character:
(i) If |ϵ − 12| > 32, then ex_1 is an attractor.
(ii) If |ϵ − 12| < 32, then ex_1 is a repulsor.
(iii) If |ϵ − 12| = 32, then ex_1 is parabolic.
Moreover, ex_1 can be attracting but never superattracting. The superattracting fixed points of R_f are x = 0, x = ∞, and the strange fixed points ex_{4,5}(ϵ) for ϵ = −(5√97 + 47)/9 and ϵ = (5√97 − 47)/9.
In the particular case of ϵ = 2 (using the Equation (24)), all the strange fixed points are repulsive.
Proof. 
We prove this result by analyzing the stability of the fixed points found in Proposition 2. This is done by evaluating |R_f′(x, ϵ)| at each fixed point: if it is lower than, equal to, or greater than one, the point is called attracting, neutral, or repulsive, respectively.
The cases of x = 0 and x = ∞ are straightforward from the expression of R_f(x, ϵ). When ex_1(ϵ) is studied, then
R_f′(1, ϵ) = 32/(12 − ϵ),
so it is attracting, repelling or neutral if |ϵ − 12| is greater than, lower than, or equal to 32, respectively. It can be graphically viewed in Figure 1.
By a graphical and numerical study of |R_f′(ex_i(ϵ), ϵ)|, i = 2, 3, 4, 5, it can be deduced that ex_{2,3}(ϵ) are repulsive for all ϵ, while ex_{4,5}(ϵ) are superattracting for ϵ = −(5√97 + 47)/9 ≈ −10.6938 or ϵ = (5√97 − 47)/9 ≈ 0.249365. Their stability function is presented in Figure 2a,b. Moreover, ex_1 cannot be a superattractor, as R_f′(1, ϵ) ≠ 0.    □
It is clear that 0 and are always superattracting fixed points, but the stability of the remaining fixed points depends on the values of ϵ . According to Proposition 3, two strange fixed points can become superattractors. This implies that there would exist basins of attraction for them, potentially causing the method to fail to converge to the solution. However, even when they are only attracting (that can be the case of e x 1 ), these basins of attraction exist.
As we have stated previously, Figure 1 represents the stability function of the strange fixed point ex_1. In this figure, the zone of attraction is the yellow area and the repulsion zone corresponds to the grey area. For values of ϵ within the grey disk, ex_1 is repulsive, whereas for values of ϵ outside it, ex_1 becomes attracting. So, it is natural to select values within the grey disk, since a repulsive divergence point improves the performance of the iterative scheme.
Similar conclusions can be stated from the stability region of strange fixed points e x 4 , 5 ( ϵ ) , appearing in Figure 2. When a value of parameter ϵ is taken in the yellow area of Figure 2, both points are simultaneously attracting, so there are at least four different basins of attraction.
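Under the derivative value obtained in the proof above, R_f′(1, ϵ) = 32/(12 − ϵ), the character of ex_1 can be tested directly; the sample values of ϵ below are our own:

```python
# ex1 = 1 is attracting iff |R_f'(1, eps)| = |32/(12 - eps)| < 1,
# i.e. iff |eps - 12| > 32 (Proposition 3). Sample values are illustrative.
def ex1_is_attracting(eps):
    return abs(32.0 / (12.0 - eps)) < 1.0

inside_disk = ex1_is_attracting(2.0)    # |2 - 12| = 10 < 32 -> repulsive
outside_disk = ex1_is_attracting(60.0)  # |60 - 12| = 48 > 32 -> attracting
```

This reproduces the disk of radius 32 centered at ϵ = 12 that delimits the grey and yellow areas of Figure 1.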
However, the basins of attraction also appear when there exist attracting periodic orbits of any period. To detect this kind of behavior, the role of critical points is crucial.

3.3. Critical Points

Now, we obtain the critical points of R f ( x , ϵ ) .
Proposition 4.
The critical points of R_f(x, ϵ) are x = 0, x = ∞, and also:
  • cr_1 = −1, and
  • cr_{2,3}(ϵ) = (−2(ϵ + 3) ± √(5ϵ(12 − ϵ))) / (3(2 − ϵ)).
Moreover, if ϵ = 2, the critical points cr_{2,3} are not free, since cr_{2,3}(2) = 0. In any other case, cr_{2,3}(ϵ) are conjugated free critical points.
From Proposition 4, we establish that, in general, there are five critical points. The free critical point cr_1 = −1 is a pre-image of the strange fixed point ex_1 = 1. Therefore, the stability of cr_1 corresponds to the stability of ex_1 (see Section 3.2). Note that if Equation (24) is satisfied, the only remaining free critical point is cr_1, and since cr_1 is the pre-image of ex_1, it would be a repulsor.
Then, we use the only independent free critical point cr_2(ϵ) (or, conversely, cr_3(ϵ), as they are conjugates) to generate the parameter plane. This is a graphical representation of the global stability performance of the members of the class of iterative methods. In a definite area of the complex plane, a mesh of 500 × 500 points is generated. Each one of these points is used as a value of the parameter ϵ, i.e., we get a particular element of the family. For each one of these values, we take as initial guess the critical point cr_2(ϵ) and calculate its orbit. If it converges to x = 0 or x = ∞, then the point corresponding to this value of ϵ is represented in red; otherwise, it is left in black. So, schemes convergent to the original roots of the quadratic equation appear in the red (stable) area, and the black area corresponds to members of the class that are not able to converge to them, by reason of an attracting strange fixed point or periodic orbit. This performance can be seen in Figure 3, representing the domain D_1 = [−30, 50] × [−40, 40], where a wide area of stable performance can be found around the origin, D_2 = [−5, 15] × [−10, 10] (Figure 3b).
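A reduced sketch of this construction (our own coarse version: the 500 × 500 mesh and red/black rendering are replaced by a single convergence test per value of ϵ, and the operator and critical-point formulas are the ones displayed above) could be:

```python
import cmath

# Orbit of the free critical point cr2(eps) under R_f(., eps); a value of
# eps is "red" (stable) when this orbit reaches x = 0 or x = infinity.
# Thresholds and iteration counts are illustrative choices.
def R(x, eps):
    num = x**3 * (x**3 + 4 * x**2 + 5 * x + 2 - eps)
    den = (2 - eps) * x**3 + 5 * x**2 + 4 * x + 1
    return num / den

def critical_point(eps):
    # cr2(eps), one of the conjugate pair of free critical points
    return (-2 * (eps + 3) + cmath.sqrt(5 * eps * (12 - eps))) / (3 * (2 - eps))

def converges_to_roots(eps, max_iter=200):
    z = critical_point(eps)
    for _ in range(max_iter):
        if abs(z) < 1e-6 or abs(z) > 1e6:   # reached 0 or infinity
            return True
        try:
            z = R(z, eps)
        except ZeroDivisionError:
            return False
    return False

is_red = converges_to_roots(5.0)   # eps = 5 lies in the stable region
```

Sweeping ϵ over a complex mesh and coloring each point by the returned flag reproduces, at low resolution, the red/black parameter plane of Figure 3.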

3.4. Dynamical Planes

A dynamical plane is defined as a mesh in a limited domain of the complex plane, where each point corresponds to a different initial estimate x_0. The graphical representation shows the method's convergence starting from x_0, within a maximum of 80 iterations and with a tolerance of 10^−3. Fixed points appear as a white circle '○', critical points as a white square '□', and a white asterisk '∗' symbolizes an attracting point. Additionally, the basins of attraction are depicted in different colors. To generate this graph, we use MATLAB R2020b with a resolution of 400 × 400 pixels.
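The same mesh idea, now sweeping initial estimates for a fixed method, produces a dynamical plane. The sketch below is generic; as a stand-in iteration we use Newton's method for p(z) = z² − 1, since the rational operator of the MCCTU family is not reproduced in this section:

```python
def dynamical_plane(step, roots, re_range, im_range, n=400,
                    max_iter=80, tol=1e-3):
    """Label each initial estimate of an n-by-n mesh with the index of
    the root its orbit approaches (-1 if no convergence), using the
    80-iteration / 1e-3 tolerance setup described in the text."""
    (a, b), (c, d) = re_range, im_range
    labels = []
    for j in range(n):
        row = []
        for i in range(n):
            z = complex(a + (b - a) * i / (n - 1),
                        c + (d - c) * j / (n - 1))
            hit = -1
            for _ in range(max_iter):
                z = step(z)
                matches = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if matches:
                    hit = matches[0]
                    break
            row.append(hit)
        labels.append(row)
    return labels

# Stand-in iteration: Newton's method for p(z) = z^2 - 1.
newton = lambda z: z - (z * z - 1) / (2 * z) if z != 0 else complex(1e6)
```

For this stand-in, the basins of z = ±1 are the right and left half-planes; rendering `labels` on a 400 × 400 mesh with one color per root gives pictures analogous to Figure 4.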
Here, we analyze the stability of various MCCTU( ϵ ) methods using dynamical planes. We consider methods with ϵ values both inside and outside the stability surface of e x 1 , specifically, in the red and black areas of the parameter plane represented in Figure 3a.
Firstly, examples of methods within the stability region are provided for ϵ { 1 , 2 , 10 , 5 + 5 i } . Their dynamical planes, along with their respective basins of attraction, are shown in Figure 4. Let us remark that all selected values of ϵ lie in the red area of the parameter plane and have only two basins of attraction, corresponding to x = 0 (in orange color in the figures) and x = (blue in the figures).
Secondly, some schemes outside the stability region (in black in the parameter plane) are provided for ϵ ∈ {−100, −15, 15, 30}. Their dynamical planes are shown in Figure 5. Each of these members has specific characteristics: in Figure 5a, the widest basin of attraction (in green) corresponds to ex_1 = 1, which is attracting for this value of ϵ, while the basin of x = 0 is a very narrow area around the point; for ϵ = −15, we observe in Figure 5b three different basins of attraction, the third one belonging to an attracting periodic orbit of period 2 (one of these orbits is plotted in yellow in the figure); Figure 5c corresponds to ϵ = 15, inside the stability area of ex_{4,5}(ϵ) (see Figure 2), where both are simultaneously attracting; finally, for ϵ = 30, the widest basin of attraction corresponds to an attracting periodic orbit of period 2; see Figure 5d.

4. Numerical Results

In this section, we conduct several numerical tests to validate the theoretical convergence and stability results of the MCCTU( α ) family obtained in previous sections. We use both stable and unstable methods from (20) and apply them to ten nonlinear test equations, with their expressions and corresponding roots provided in Table 1.
We aim to demonstrate the theoretical results by testing the MCCTU(α) family. Specifically, we evaluate three representative members of the family, with δ = 2/α^4 and α = 1, α = 2, and α = 100. Therefore, in all cases, ϵ = α^4 δ = 2.
We conduct two experiments. In the first experiment, we analyze the stability of the MCCTU( α ) family using two of its methods, chosen based on stable and unstable values of the parameter α . In the second experiment, we perform an efficiency analysis of the MCCTU( α ) family through a comparative study between its optimal stable member and fifteen different fourth-order methods from the literature: Ostrowski (OS) in [12,35], King (KI) in [35,36], Jarratt (JA) in [35,37], Özban and Kaya (OK1, OK2, OK3) in [8], Chun (CH) in [38], Maheshwari (MA) in [39], Behl et al. (BMM) in [40], Chun et al. (CLND1, CLND2) in [41], Artidiello et al. (ACCT1, ACCT2) in [42], Ghanbari (GH) in [43], and Kou et al. (KLW) in [44].
While performing these numerical tests, we start the iterations from different initial estimates: close (x_0 ≈ ξ), far (x_0 ≈ 3ξ), and very far (x_0 ≈ 10ξ) from the root ξ. This approach allows us to evaluate how sensitive the methods are to the initial estimation when finding a solution.
The calculations are performed using the MATLAB R2020b programming package with variable precision arithmetic set to 200 digits of mantissa (in Appendix C, an example with double-precision arithmetic is included). For each method, we analyze the number of iterations (iter) required to converge to the solution, with stopping criteria defined as |x_{k+1} − x_k| < 10^−100 or |f(x_{k+1})| < 10^−100. Here, |x_{k+1} − x_k| represents the error estimate between two consecutive iterations, and |f(x_{k+1})| is the residual error of the nonlinear test function.
To check the theoretical order of convergence (p), we calculate the approximate computational order of convergence (ACOC) as described by Cordero and Torregrosa in [15]. In the numerical results, if the ACOC values do not stabilize throughout the iterative process, it is marked as ‘-’; and if any method fails to converge within a maximum of 50 iterations, it is marked as ‘nc’.
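The ACOC of [15] is computed from three consecutive increments, ACOC ≈ ln(|x_{k+1} − x_k|/|x_k − x_{k−1}|) / ln(|x_k − x_{k−1}|/|x_{k−1} − x_{k−2}|), which is why at least three iterations are needed before it can be evaluated. A minimal sketch (ours, not the authors' MATLAB code):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four
    iterates (three increments), following Cordero and Torregrosa [15]."""
    if len(xs) < 4:
        raise ValueError("ACOC needs at least four iterates")
    e1 = abs(xs[-1] - xs[-2])   # |x_{k+1} - x_k|
    e2 = abs(xs[-2] - xs[-3])   # |x_k - x_{k-1}|
    e3 = abs(xs[-3] - xs[-4])   # |x_{k-1} - x_{k-2}|
    return math.log(e1 / e2) / math.log(e2 / e3)
```

For instance, feeding it the iterates of Newton's method on x² − 2 from x_0 = 1.5 returns a value close to 2, the expected quadratic order.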

4.1. First Experiment: Stability Analysis of MCCTU( α ) Family

In this experiment, we conducted a stability analysis of the MCCTU(α) family by considering values of α both within the stability regions (α = 2) and outside of them (α = 100), setting δ = 2/α^4. The methods analyzed are of order 3, consistent with the theoretical convergence results. A special case occurs when α = 0: the associated method is not applicable, because the denominator in the relation δ = 2/α^4 becomes zero and δ grows indefinitely as α approaches zero.
The numerical performance of the iterative methods MCCTU(2) and MCCTU(100) is presented in Table 2 and Table 3, using initial estimates that are close, far, and very far from the root. This approach enables us to assess the stability and reliability of the methods under various initial conditions.
From the analysis of the first experiment, it is evident that the MCCTU(2) method exhibits robust performance. For initial estimates close to the root (x_0 ≈ ξ), the method consistently converges to the solution with very low errors, achieving convergence in three or four iterations, and the ACOC value stabilizes at 3. For initial estimates that are far (x_0 ≈ 3ξ), the number of iterations increases, but the method still converges to the solution in nine out of ten cases. For initial estimates that are very far (x_0 ≈ 10ξ), the method maintains a similar performance, converging to the solution in eight out of ten cases. It is notable that, as the initial condition moves further away, the method shows a slight difficulty in finding the solution. This slight dependence is understandable given the complexity of the nonlinear functions f_4 and f_7. Nonetheless, the method is shown to be stable and robust, with a convergence order of 3, verifying the theoretical results.
On the other hand, the MCCTU(100) method encounters significant difficulties in finding the solution. As the initial conditions move further away, the number of iterations increases. Despite lacking good stability properties, the method converges to the solution for initial estimates close to the root. However, for initial estimates that are far and very far from the root, it fails to converge in four out of ten cases. Additionally, the ACOC value never stabilizes in any case. These results confirm the theoretical instability of the method, as α = 100 lies outside the stability surface studied in Section 3.

4.2. Second Experiment: Efficiency Analysis of MCCTU( α ) Family

In this experiment, we conduct a comparative study between an optimal method of the MCCTU(α) family and the fifteen fourth-order methods mentioned in the introduction of Section 4, to contrast their numerical performance on nonlinear equations. We consider the method associated with α = 1 and δ = 2, denoted by MCCTU(1), as the optimal stable member of the MCCTU(α) family, with fourth-order convergence.
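For reproducibility, the MCCTU(1) iteration can be read off from the Mathematica code in the Appendices: y_k = x_k − α f(x_k)/f'(x_k), followed by x_{k+1} = y_k − (β + γt + δt²) f(x_k)/f'(x_k) with t = f(y_k)/f(x_k), where β and γ are the closed forms obtained by Solve in Appendix B; for α = 1 and δ = 2 they reduce to β = 0 and γ = 1. The sketch below is our reconstruction under that reading, not the authors' reference implementation:

```python
def mcctu1(f, df, x0, tol=1e-12, max_iter=50):
    """MCCTU(1) two-step scheme as reconstructed from Appendix B:
    alpha = 1, delta = 2, which forces beta = 0 and gamma = 1."""
    x = x0
    for k in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                         # Newton step (alpha = 1)
        t = f(y) / fx                            # weight variable t = f(y)/f(x)
        x_new = y - (t + 2 * t * t) * fx / dfx   # frozen-derivative correction
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```

Applied to f_5(x) = x³ + 4x² − 10 from x_0 = 1.3, the scheme reaches ξ ≈ 1.36523 in very few double-precision iterations.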
Thus, in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14, we present the numerical results for the sixteen known methods, considering initial estimates that are close, far, and very far from the root, as well as the ten test equations.
In Table 4, Table 5, Table 6 and Table 7, we observe that MCCTU(1) consistently converges to the solution for initial estimates close to the root (x_0 ≈ ξ), with a similar number of iterations as the other methods across all equations. The theoretical order of convergence is confirmed by the ACOC, which is close to 4. However, what about the dependence of MCCTU(1) on the initial estimate? To answer this, we analyze the method for initial estimates far and very far from the solution, specifically for x_0 ≈ 3ξ and x_0 ≈ 10ξ, respectively. The results are shown in Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15.
The results presented in Table 8, Table 9, Table 10 and Table 11 are promising. MCCTU(1) converges to the solution in nine out of the ten nonlinear equations, even when the initial estimate is far from the root (x_0 ≈ 3ξ). In these cases, the ACOC consistently stabilizes and approaches 4. Only in one instance, for the function f_4, does MCCTU(1) fail to converge, as do thirteen of the other methods. For this particular equation, only two methods successfully approximate the root. In the remaining equations, MCCTU(1) converges to the solution with a number of iterations comparable to the other methods, and even requires fewer iterations than Ostrowski's method, as seen with function f_10. Therefore, we confirm that this method is robust, consistent with the stability results shown in previous sections.
The results presented in Table 12, Table 13, Table 14 and Table 15 confirm the exceptional robustness of the MCCTU(1) method for initial estimates that are very far from the root (x_0 ≈ 10ξ), as the method converges in eight out of ten cases. A slight dependence on the initial estimate is observed for functions f_4 and f_7, where the method does not converge; however, in these two cases, the other methods also fail to approximate the solution, except for the ACCT2 method, which converges to the root of function f_7 in 50 iterations. The complexity of the nonlinear equations plays a significant role in finding their solutions. Moreover, in the cases where the MCCTU(1) method converges to the roots, it does so with a number of iterations comparable to, and often lower than, that of the other methods, as seen in function f_2. Additionally, for these cases, the ACOC consistently stabilizes at values close to 4.
Therefore, based on the results of the second experiment, we conclude that the MCCTU( α ) family demonstrates impressive numerical performance when using the optimal stable member with α = 1 as a representative, highlighting its robustness and efficiency even with challenging initial conditions. Overall, the selected MCCTU(1) method exhibits low errors and requires a similar or fewer number of iterations compared to other methods. In certain cases, as the complexity of the nonlinear equation increases, the MCCTU(1) method outperforms Ostrowski’s method and others. The theoretical convergence order is also confirmed by the ACOC, which is always close to 4.

5. Conclusions

The development of the parametric family of multistep iterative schemes MCCTU( α ) based on the damped Newton scheme has proven to be an effective strategy for solving nonlinear equations. The inclusion of an additional Newton step with a weight function and a “frozen” derivative significantly improved the convergence speed from a first-order class to a uniparametric third-order family.
The numerical results confirm the robustness of the MCCTU(2) method for initial estimates close to the root (x_0 ≈ ξ), with very low errors and convergence in three or four iterations. As the initial estimates move further away (x_0 ≈ 3ξ and x_0 ≈ 10ξ), the method continues to show solid performance, converging in most cases and confirming its theoretical stability and robustness.
Through the analysis of stability surfaces and dynamical planes, specific members of the MCCTU(α) family with exceptional stability were identified. These members are particularly suitable for scalar functions with challenging convergence behavior, whose dynamical planes may exhibit attracting periodic orbits and strange fixed points. The MCCTU(1) member stood out for its optimal and stable performance.
In the comparative analysis, the MCCTU(1) method demonstrated superior numerical performance in many cases, requiring a similar or fewer number of iterations compared to well-established fourth-order methods such as Ostrowski’s method. This superior performance is especially notable in more complex nonlinear equations, where MCCTU(1) outperforms several alternative methods.
The theoretical convergence order of the MCCTU( α ) family was confirmed by calculating the approximate computational order of convergence (ACOC). In most cases, the ACOC value stabilized close to 3, validating the effectiveness and accuracy of the proposed methods both theoretically and practically. Additionally, it was confirmed that the convergence order of the method associated with α = 1 is optimal, achieving a fourth-order convergence.
Finally, the analysis revealed that certain members of the MCCTU( α ) family, particularly those with α values outside the stability surface, exhibited significant instability. These methods struggled to converge to the solution, especially when initial estimates were far or very far from the root. For instance, the method with α = 100 failed to stabilize and did not meet the convergence criteria in four out of ten cases. Additionally, the ACOC values for this method did not stabilize, confirming its theoretical instability. This highlights the importance of selecting appropriate parameter values within the stability regions to ensure reliable performance.

Author Contributions

Conceptualization, A.C. and J.R.T.; methodology, G.U.-C. and M.M.-M.; software, M.M.-M. and G.U.-C.; validation, M.M.-M.; formal analysis, J.R.T.; investigation, A.C.; writing—original draft preparation, M.M.-M.; writing—review and editing, A.C. and F.I.C.; supervision, J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

Funded with Ayuda a Primeros Proyectos de Investigación (PAID-06-23), Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV).

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Detailed Computation of Theorem 2

The comprehensive proof of Theorem 2, methodically detailed step-by-step in Section 2, is further validated in Wolfram Mathematica software v13.2 using the following code:
fx = dFa SeriesData[Subscript[e, k], 0, {0, 1, Subscript[C, 2], Subscript[C, 3], Subscript[C, 4], Subscript[C, 5]}, 0, 5, 1];
dfx = D[fx, Subscript[e, k]];
fx/dfx // Simplify;
(*Error in the first step*)
Subscript[y, e] = Simplify[Subscript[e, k] - \[Alpha]*fx/dfx];
fy = fx /. Subscript[e, k] -> Subscript[y, e] // Simplify;
(*Error in the second step*)
Subscript[x, e] = Subscript[y, e] - (\[Beta] + \[Gamma]*fy/fx + \[Delta]*(fy/fx)^2)*(fx/dfx) // Simplify

Appendix B. Detailed Computation of Theorem 3

The comprehensive proof of Theorem 3, methodically detailed step-by-step in Section 2, is further validated in Wolfram Mathematica software v13.2 using the following code:
fx = dFa SeriesData[Subscript[e, k], 0, {0, 1, Subscript[C, 2], Subscript[C, 3], Subscript[C, 4], Subscript[C, 5]}, 0, 5, 1];
dfx = D[fx, Subscript[e, k]];
fx/dfx // Simplify;
(*Error in the first step*)
Subscript[y, e] = Simplify[Subscript[e, k] - \[Alpha]*fx/dfx];
fy = fx /. Subscript[e, k] -> Subscript[y, e] // Simplify;
(*Error in the second step*)
Subscript[x, e] = Subscript[y, e] - (\[Beta] + \[Gamma]*fy/fx + \[Delta]*(fy/fx)^2)*(fx/dfx) // Simplify;
Solve[1 - \[Beta] - \[Gamma] - \[Delta] - \[Alpha]^2 \[Delta] + \[Alpha] (-1 + \[Gamma] + 2 \[Delta]) == 0 && \[Beta] + \[Gamma] + \[Delta] + 2 \[Alpha]^3 \[Delta] - \[Alpha]^2 (\[Gamma] + \[Delta]) - \[Alpha] (-1 + \[Gamma] + 2 \[Delta]) == 0, {\[Alpha], \[Beta], \[Gamma], \[Delta]}];
Subscript[x, e] = FullSimplify[Subscript[x, e] /. {\[Beta] -> ((-1 + \[Alpha])^2 (-1 - \[Alpha] + \[Alpha]^2 \[Delta]))/\[Alpha]^2, \[Gamma] -> (1 - 2 \[Alpha]^2 \[Delta] + 2 \[Alpha]^3 \[Delta])/\[Alpha]^2}]
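As a Mathematica-independent cross-check, the closed forms for β and γ substituted in the FullSimplify call above can be verified numerically: plugging them into the two conditions passed to Solve must return zero for any α ≠ 0 and any δ. The following sketch performs this check:

```python
def beta(a, d):
    # beta = ((alpha - 1)^2 (-1 - alpha + alpha^2 delta)) / alpha^2
    return ((a - 1) ** 2) * (-1 - a + a ** 2 * d) / a ** 2

def gamma(a, d):
    # gamma = (1 - 2 alpha^2 delta + 2 alpha^3 delta) / alpha^2
    return (1 - 2 * a ** 2 * d + 2 * a ** 3 * d) / a ** 2

def conditions(a, b, g, d):
    # The two equations passed to Solve in the Mathematica code above.
    c1 = 1 - b - g - d - a ** 2 * d + a * (-1 + g + 2 * d)
    c2 = (b + g + d + 2 * a ** 3 * d - a ** 2 * (g + d)
          - a * (-1 + g + 2 * d))
    return c1, c2

# Both conditions vanish for arbitrary samples of (alpha, delta).
for a, d in [(1.0, 2.0), (2.0, 0.125), (0.5, -3.0)]:
    c1, c2 = conditions(a, beta(a, d), gamma(a, d), d)
    assert abs(c1) < 1e-12 and abs(c2) < 1e-12
```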

Appendix C. Additional Experiment Focused on Practical Calculations

In this comprehensive experiment, we conduct an in-depth efficiency analysis of the MCCTU(1) method, set with ϵ = α^4 δ = 2, specifically tailored for practical calculations. This analysis begins with initial estimates that closely approximate the roots (x_0 ≈ ξ). All computations are carried out using the MATLAB R2020b software package with standard double-precision floating-point arithmetic. We assess the number of iterations (iter) each method requires to reach the solution, with the stopping criterion |x_{k+1} − x_k| < 10^−10. We also calculate the approximate computational order of convergence (ACOC) to verify the theoretical order of convergence (p). Fluctuating ACOC values are marked with '-', and methods that do not converge within 50 iterations are labeled 'nc'. Additionally, this study aims to examine how the convergence order is influenced by the number of digits of the variable precision arithmetic employed in the experiments, using the same ten nonlinear test equations listed in Table 1. The numerical results are presented in Table A1.
Table A1. Numerical results of MCCTU(1) in practical calculations for x 0 close to ξ .
Function | x_0 | |x_{k+1} − x_k| | Iter | ACOC | ξ
f_1 | −0.6 | 8.4069 × 10^−27 | 3 | 4.0111 | −0.6367
f_2 | 0.2 | 4.0915 × 10^−36 | 3 | 3.9624 | 0.2575
f_3 | 0.6 | 1.8066 × 10^−21 | 3 | 4.0121 | 0.6392
f_4 | −14.1 | 3.6467 × 10^−15 | 2 | - | −14.1013
f_5 | 1.3 | 1.2827 × 10^−20 | 3 | 4.0226 | 1.3652
f_6 | 0.1 | 1.1439 × 10^−32 | 3 | 3.9969 | 0.1281
f_7 | −1.2 | 8.1090 × 10^−29 | 3 | 4.0025 | −1.2076
f_8 | 2.3 | 3.2363 × 10^−36 | 3 | 4.0010 | 2.3320
f_9 | −1.4 | 1.2504 × 10^−28 | 3 | 3.9982 | −1.4142
f_10 | −0.9 | 1.3096 × 10^−27 | 3 | 4.0263 | −0.9060
From the analysis of this experiment, it is confirmed that convergence to the solution is achieved in all cases, with errors smaller than the set threshold, reaching convergence within 2 or 3 iterations. The value of the ACOC stabilizes at 4, thus verifying the theoretical results. Furthermore, it is clear that the convergence order is not affected by the number of digits of the arithmetic used. The number of digits plays a crucial role when higher precision is required, particularly for smaller errors, preventing divisions by zero in this case. Additionally, it is noted that the ACOC for function f_4 cannot be calculated, due to convergence to the solution in just 2 iterations, while expression (2) requires at least 3 iterations to estimate the approximate order of convergence.

References

  1. Danchick, R. Gauss meets Newton again: How to make Gauss orbit determination from two position vectors more efficient and robust with Newton–Raphson iterations. Appl. Math. Comput. 2008, 195, 364–375. [Google Scholar] [CrossRef]
  2. Tostado-Véliz, M.; Kamel, S.; Jurado, F.; Ruiz-Rodriguez, F.J. On the Applicability of Two Families of Cubic Techniques for Power Flow Analysis. Energies 2021, 14, 4108. [Google Scholar] [CrossRef]
  3. Arroyo, V.; Cordero, A.; Torregrosa, J.R. Approximation of artificial satellites’ preliminary orbits: The efficiency challenge. Math. Comput. Model. 2011, 54, 1802–1807. [Google Scholar] [CrossRef]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Boston, MA, USA, 2013. [Google Scholar]
  6. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; Springer: Cham, Switzerland, 2017. [Google Scholar]
  7. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  8. Özban, A.Y.; Kaya, B. A new family of optimal fourth-order iterative methods for nonlinear equations. Results Control Optim. 2022, 8, 1–11. [Google Scholar] [CrossRef]
  9. Adomian, G. Solving Frontier Problem of Physics: The Decomposition Method; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994. [Google Scholar]
  10. Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  11. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  12. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  13. Kung, H.T.; Traub, J.F. Optimal Order of One-Point and Multipoint Iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  14. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  15. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  16. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth- and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060. [Google Scholar] [CrossRef]
  17. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
  18. Cordero, A.; Moscoso-Martínez, M.; Torregrosa, J.R. Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations. Algorithms 2021, 14, 101. [Google Scholar] [CrossRef]
  19. Husain, A.; Nanda, M.N.; Chowdary, M.S.; Sajid, M. Fractals: An Eclectic Survey, Part I. Fractal Fract. 2022, 6, 89. [Google Scholar] [CrossRef]
  20. Husain, A.; Nanda, M.N.; Chowdary, M.S.; Sajid, M. Fractals: An Eclectic Survey, Part II. Fractal Fract. 2022, 6, 379. [Google Scholar] [CrossRef]
  21. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
  22. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Sci. Ser. A Math. Sci. 2004, 10, 3–35. [Google Scholar]
  23. Neta, B.; Chun, C.; Scott, M. Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2014, 227, 567–592. [Google Scholar] [CrossRef]
  24. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef]
  25. Magreñán, A.; Argyros, I. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
  26. Geum, Y.H.; Kim, Y.I. Long–term orbit dynamics viewed through the yellow main component in the parameter space of a family of optimal fourth-order multiple-root finders. Discrete Contin. Dyn. Syst. B 2020, 25, 3087–3109. [Google Scholar] [CrossRef]
  27. Cordero, A.; Torregrosa, J.; Vindel, P. Dynamics of a family of Chebyshev-Halley type method. Appl. Math. Comput. 2012, 219, 8568–8583. [Google Scholar] [CrossRef]
  28. Magreñán, A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  29. Devaney, R. An Introduction to Chaotic Dynamical Systems; Addison-Wesley Publishing Company: Boston, MA, USA, 1989. [Google Scholar]
  30. Beardon, A. Iteration of Rational Functions; Graduate Texts in Mathematics; Springer: New York, NY, USA, 1991. [Google Scholar]
  31. Fatou, P. Sur les équations fonctionnelles. Bull. Soc. Math. Fr. 1919, 47, 161–271. [Google Scholar] [CrossRef]
  32. Julia, G. Mémoire sur l'itération des fonctions rationnelles. J. Math. Pures Appl. 1918, 8, 47–245. [Google Scholar]
  33. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  34. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. Am. Math. Soc. 1984, 11, 85–141. [Google Scholar] [CrossRef]
  35. Hueso, J.L.; Martínez, E.; Teruel, C. Multipoint efficient iterative methods and the dynamics of Ostrowski's method. Int. J. Comput. Math. 2019, 96, 1687–1701. [Google Scholar] [CrossRef]
  36. King, R.F. A Family of Fourth Order Methods for Nonlinear Equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  37. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  38. Chun, C. Construction of Newton-like iteration methods for solving nonlinear equations. Numer. Math. 2006, 104, 297–315. [Google Scholar] [CrossRef]
  39. Maheshwari, A.K. A fourth order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391. [Google Scholar] [CrossRef]
  40. Behl, R.; Maroju, P.; Motsa, S. A family of second derivative free fourth order continuation method for solving nonlinear equations. J. Comput. Appl. Math. 2017, 318, 38–46. [Google Scholar] [CrossRef]
  41. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
  42. Artidiello, S.; Chicharro, F.; Cordero, A.; Torregrosa, J.R. Local convergence and dynamical analysis of a new family of optimal fourth-order iterative methods. Int. J. Comput. Math. 2013, 90, 2049–2060. [Google Scholar] [CrossRef]
  43. Ghanbari, B. A new general fourth-order family of methods for finding simple roots of nonlinear equations. J. King Saud Univ. Sci. 2011, 23, 395–398. [Google Scholar] [CrossRef]
  44. Kou, J.; Li, Y.; Wang, X. A composite fourth-order iterative method for solving non-linear equations. Appl. Math. Comput. 2007, 184, 471–475. [Google Scholar] [CrossRef]
Figure 1. Stability function of e x 1 = 1 , R f ( 1 , ϵ ) for a complex ϵ .
Figure 2. Stability surfaces of e x 4 , 5 ( ϵ ) for different complex regions.
Figure 3. Parameter plane of c r 2 ( ϵ ) on domain D 1 and a detail on D 2 .
Figure 4. Dynamical planes for some stable methods.
Figure 5. Unstable dynamical planes.
Table 1. Nonlinear test equations and corresponding roots.
Nonlinear Test Equation | Root
f_1(x) = sin(x) − x² + 1 = 0 | ξ ≈ −0.63673
f_2(x) = x² − e^x − 3x + 2 = 0 | ξ ≈ 0.25753
f_3(x) = cos(x) − x e^x + x² = 0 | ξ ≈ 0.63915
f_4(x) = e^x − 1.5 − arctan(x) = 0 | ξ ≈ −14.10127
f_5(x) = x³ + 4x² − 10 = 0 | ξ ≈ 1.36523
f_6(x) = 8x − cos(x) − 2x² = 0 | ξ ≈ 0.12808
f_7(x) = x e^(x²) − sin²(x) + 3 cos(x) + 5 = 0 | ξ ≈ −1.20765
f_8(x) = √(x² + 2x + 5) − 2 sin(x) − x² + 3 = 0 | ξ ≈ 2.33197
f_9(x) = x⁴ + sin(π/x²) − 5 = 0 | ξ ≈ −1.41421
f_10(x) = x⁴ + sin(π/x²) − 3/16 = 0 | ξ ≈ −0.90599
Table 2. Numerical performance of MCCTU(2) method on nonlinear equations (“nc” means non-convergence).
Function | x_0 | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
Close to ξ
f_1 | −0.6 | 2.2252 × 10^−54 | 1.4765 × 10^−162 | 4 | 3
f_2 | 0.2 | 1.8447 × 10^−50 | 1.3536 × 10^−150 | 4 | 3
f_3 | 0.6 | 2.3846 × 10^−44 | 1.4235 × 10^−131 | 4 | 3
f_4 | −14.1 | 5.1414 × 10^−36 | 3.3633 × 10^−111 | 3 | 3
f_5 | 1.3 | 1.6295 × 10^−53 | 4.3267 × 10^−159 | 4 | 3
f_6 | 0.1 | 4.6096 × 10^−78 | 2.4334 × 10^−208 | 4 | 3
f_7 | −1.2 | 3.6237 × 10^−54 | 1.9349 × 10^−159 | 4 | 3
f_8 | 2.3 | 3.0861 × 10^−54 | 6.9791 × 10^−162 | 4 | 3
f_9 | −1.4 | 7.0858 × 10^−51 | 3.8746 × 10^−150 | 4 | 3
f_10 | −0.9 | 8.9456 × 10^−45 | 9.7874 × 10^−131 | 4 | 3
Far from ξ
f_1 | −1.8 | 1.5223 × 10^−92 | 0 | 5 | 3
f_2 | 0.6 | 6.6012 × 10^−87 | 0 | 5 | 3
f_3 | 1.8 | 3.8851 × 10^−45 | 6.1565 × 10^−134 | 6 | 3
f_4 | −42.3 | nc | nc | nc | nc
f_5 | 3.9 | 1.0792 × 10^−59 | 1.2569 × 10^−177 | 6 | 3
f_6 | 0.3 | 1.0805 × 10^−48 | 2.6855 × 10^−146 | 4 | 3
f_7 | −3.6 | 2.2394 × 10^−55 | 4.5662 × 10^−163 | 14 | 3
f_8 | 6.9 | 1.1722 × 10^−41 | 3.8248 × 10^−124 | 6 | 3
f_9 | −4.2 | 1.3408 × 10^−101 | 0 | 8 | 3
f_10 | −2.7 | 4.3149 × 10^−78 | 3.1147 × 10^−207 | 8 | 3
Very far from ξ
f_1 | −6.0 | 1.5491 × 10^−52 | 4.9812 × 10^−157 | 6 | 3
f_2 | 2.0 | 1.6192 × 10^−89 | 0 | 6 | 3
f_3 | 6.0 | 7.1447 × 10^−57 | 3.8290 × 10^−169 | 10 | 3
f_4 | −141.0 | nc | nc | nc | nc
f_5 | 13.0 | 1.6531 × 10^−82 | 0 | 8 | 3
f_6 | 1.0 | 1.6423 × 10^−56 | 9.4291 × 10^−170 | 5 | 3
f_7 | −12.0 | nc | nc | nc | nc
f_8 | 23.0 | 1.2648 × 10^−44 | 4.8043 × 10^−133 | 7 | 3
f_9 | −14.0 | 2.3358 × 10^−43 | 1.3880 × 10^−127 | 10 | 3
f_10 | −9.0 | 3.0298 × 10^−44 | 1.2080 × 10^−128 | 6 | 3
Table 3. Numerical performance of MCCTU(100) method on nonlinear equations (“nc” means non-convergence).
Function | x_0 | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
Close to ξ
f_1 | −0.6 | 6.1808 × 10^−99 | 7.6768 × 10^−113 | 9 | -
f_2 | 0.2 | 2.1827 × 10^−88 | 4.9309 × 10^−102 | 9 | -
f_3 | 0.6 | 6.0791 × 10^−94 | 8.8104 × 10^−108 | 9 | -
f_4 | −14.1 | 4.5379 × 10^−95 | 1.3573 × 10^−111 | 8 | -
f_5 | 1.3 | 4.9631 × 10^−94 | 4.8998 × 10^−107 | 9 | -
f_6 | 0.1 | 3.0953 × 10^−100 | 1.4092 × 10^−113 | 9 | -
f_7 | −1.2 | 8.7126 × 10^−95 | 1.0578 × 10^−107 | 9 | -
f_8 | 2.3 | 2.1622 × 10^−95 | 3.1373 × 10^−109 | 9 | -
f_9 | −1.4 | 4.0458 × 10^−95 | 2.7366 × 10^−108 | 9 | -
f_10 | −0.9 | 6.2830 × 10^−95 | 3.1368 × 10^−108 | 9 | -
Far from ξ
f_1 | −1.8 | 2.7746 × 10^−92 | 3.4462 × 10^−106 | 10 | -
f_2 | 0.6 | 6.8191 × 10^−99 | 1.5405 × 10^−112 | 10 | -
f_3 | 1.8 | 8.0835 × 10^−90 | 1.1715 × 10^−103 | 12 | -
f_4 | −42.3 | nc | nc | nc | nc
f_5 | 3.9 | nc | nc | nc | nc
f_6 | 0.3 | 4.0669 × 10^−95 | 1.8516 × 10^−108 | 9 | -
f_7 | −3.6 | nc | nc | nc | nc
f_8 | 6.9 | 1.5980 × 10^−88 | 2.3186 × 10^−102 | 11 | -
f_9 | −4.2 | nc | nc | nc | nc
f_10 | −2.7 | 1.5127 × 10^−97 | 3.0929 × 10^−110 | 11 | -
Very far from ξ
f_1 | −6.0 | 1.2947 × 10^−94 | 1.6081 × 10^−108 | 11 | -
f_2 | 2.0 | 3.5429 × 10^−94 | 8.0036 × 10^−108 | 11 | -
f_3 | 6.0 | 4.5426 × 10^−97 | 6.5836 × 10^−111 | 18 | -
f_4 | −141.0 | nc | nc | nc | nc
f_5 | 13.0 | nc | nc | nc | nc
f_6 | 1.0 | 1.4843 × 10^−94 | 6.7580 × 10^−108 | 10 | -
f_7 | −12.0 | nc | nc | nc | nc
f_8 | 23.0 | 7.4725 × 10^−92 | 1.0842 × 10^−105 | 12 | -
f_9 | −14.0 | nc | nc | nc | nc
f_10 | −9.0 | 6.5629 × 10^−95 | 3.2765 × 10^−108 | 12 | -
Table 4. Numerical performance of iterative methods on nonlinear equations for x 0 close to ξ (1/4).
Function / Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f_1, x_0 = −0.6
MCCTU(1) | 8.4069 × 10^−27 | 2.2344 × 10^−105 | 3 | 4.0111
OS | 1.2193 × 10^−29 | 2.7787 × 10^−117 | 3 | 4.0062
KI | 3.9435 × 10^−29 | 3.8183 × 10^−115 | 3 | 4.0070
JA | 1.3498 × 10^−29 | 4.2651 × 10^−117 | 3 | 4.0061
OK1 | 5.0547 × 10^−32 | 2.9443 × 10^−127 | 3 | 3.9991
OK2 | 4.0266 × 10^−30 | 2.6729 × 10^−119 | 3 | 4.0052
OK3 | 2.5735 × 10^−30 | 4.5908 × 10^−120 | 3 | 3.9937
CH | 1.6691 × 10^−28 | 1.6213 × 10^−112 | 3 | 4.0081
MA | 3.0371 × 10^−27 | 3.1217 × 10^−107 | 3 | 4.0103
BMM | 1.2299 × 10^−28 | 4.4824 × 10^−113 | 3 | 4.0084
CLND1 | 8.643 × 10^−27 | 2.5116 × 10^−105 | 3 | 4.0110
CLND2 | 1.6691 × 10^−28 | 1.6213 × 10^−112 | 3 | 4.0081
ACCT1 | 8.4069 × 10^−27 | 2.2344 × 10^−105 | 3 | 4.0111
ACCT2 | 7.4417 × 10^−32 | 1.0756 × 10^−126 | 3 | 4.0294
GH | 1.9739 × 10^−26 | 8.0112 × 10^−104 | 3 | 4.0119
KLW | 8.4441 × 10^−28 | 1.4567 × 10^−109 | 3 | 4.0092
f_2, x_0 = 0.2
MCCTU(1) | 4.0916 × 10^−36 | 1.3257 × 10^−144 | 3 | 3.9624
OS | 2.6718 × 10^−32 | 8.6963 × 10^−129 | 3 | 3.9998
KI | 1.7333 × 10^−32 | 1.4291 × 10^−129 | 3 | 3.9987
JA | 1.1553 × 10^−31 | 4.1074 × 10^−126 | 3 | 3.9990
OK1 | 2.4295 × 10^−31 | 9.1464 × 10^−125 | 3 | 4.0008
OK2 | 1.4863 × 10^−31 | 1.1754 × 10^−125 | 3 | 3.9997
OK3 | 1.3844 × 10^−31 | 8.8054 × 10^−126 | 3 | 3.9988
CH | 5.002 × 10^−32 | 1.2502 × 10^−127 | 3 | 3.9969
MA | 2.1425 × 10^−34 | 1.6464 × 10^−137 | 3 | 3.9844
BMM | 5.5838 × 10^−31 | 2.8585 × 10^−123 | 3 | 4.0057
CLND1 | 9.7338 × 10^−34 | 9.6229 × 10^−135 | 3 | 3.9830
CLND2 | 5.002 × 10^−32 | 1.2502 × 10^−127 | 3 | 3.9969
ACCT1 | 4.0916 × 10^−36 | 1.3257 × 10^−144 | 3 | 3.9624
ACCT2 | 1.4832 × 10^−31 | 1.1243 × 10^−125 | 3 | 4.0029
GH | 2.5248 × 10^−38 | 6.6868 × 10^−154 | 3 | 3.9675
KLW | 1.8553 × 10^−33 | 1.2914 × 10^−133 | 3 | 3.9925
f_3, x_0 = 0.6
MCCTU(1) | 2.2096 × 10^−83 | 0 | 4 | 4
OS | 1.7622 × 10^−27 | 3.3439 × 10^−108 | 3 | 3.9992
KI | 1.6297 × 10^−100 | 0 | 4 | 4
JA | 2.9708 × 10^−27 | 2.989 × 10^−107 | 3 | 3.9996
OK1 | 6.1743 × 10^−100 | 1.9467 × 10^−208 | 4 | 4
OK2 | 1.0137 × 10^−33 | 6.7493 × 10^−135 | 3 | 4.0975
OK3 | 1.0148 × 10^−27 | 3.9188 × 10^−110 | 3 | 4.2357
CH | 2.1262 × 10^−94 | 0 | 4 | 4
MA | 6.9765 × 10^−86 | 0 | 4 | 4
BMM | 1.6076 × 10^−85 | 1.9467 × 10^−208 | 4 | 4
CLND1 | 2.5512 × 10^−83 | 0 | 4 | 4
CLND2 | 2.1262 × 10^−94 | 0 | 4 | 4
ACCT1 | 2.2096 × 10^−83 | 6.8135 × 10^−208 | 4 | 4
ACCT2 | 2.5202 × 10^−91 | 0 | 4 | 4
GH | 2.2217 × 10^−81 | 0 | 4 | 4
KLW | 2.4336 × 10^−89 | 0 | 4 | 4
Table 5. Numerical performance of iterative methods on nonlinear equations for x0 close to ξ (2/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f4, x0 = −14.1:
MCCTU(1) | 2.4812 × 10^−61 | 0 | 3 | 4
OS | 5.7494 × 10^−76 | 0 | 3 | 4
KI | 2.6178 × 10^−66 | 0 | 3 | 4
JA | 4.5662 × 10^−69 | 3.8934 × 10^−208 | 3 | 4
OK1 | 1.6181 × 10^−64 | 0 | 3 | 4
OK2 | 1.2341 × 10^−67 | 0 | 3 | 4
OK3 | 4.782 × 10^−68 | 3.8934 × 10^−208 | 3 | 3.9998
CH | 4.1273 × 10^−64 | 0 | 3 | 4
MA | 5.9003 × 10^−62 | 0 | 3 | 4
BMM | 2.4555 × 10^−61 | 3.8934 × 10^−208 | 3 | 4
CLND1 | 2.8374 × 10^−61 | 0 | 3 | 4
CLND2 | 4.1273 × 10^−64 | 0 | 3 | 4
ACCT1 | 2.4812 × 10^−61 | 0 | 3 | 4
ACCT2 | 7.6144 × 10^−63 | 0 | 3 | 4
GH | 7.562 × 10^−61 | 0 | 3 | 4
KLW | 7.8025 × 10^−63 | 0 | 3 | 4
f5, x0 = 1.3:
MCCTU(1) | 1.5146 × 10^−80 | 0 | 4 | 4
OS | 4.0399 × 10^−98 | 0 | 4 | 4
KI | 4.6142 × 10^−94 | 0 | 4 | 4
JA | 4.0399 × 10^−98 | 0 | 4 | 4
OK1 | 1.6263 × 10^−26 | 3.9339 × 10^−104 | 3 | 4.0265
OK2 | 3.7251 × 10^−26 | 1.5538 × 10^−102 | 3 | 4.0049
OK3 | 2.6244 × 10^−29 | 4.1697 × 10^−115 | 3 | 3.8563
CH | 5.0966 × 10^−90 | 0 | 4 | 4
MA | 8.4188 × 10^−83 | 0 | 4 | 4
BMM | 8.6757 × 10^−85 | 0 | 4 | 4
CLND1 | 1.5146 × 10^−80 | 0 | 4 | 4
CLND2 | 5.0966 × 10^−90 | 0 | 4 | 4
ACCT1 | 1.5146 × 10^−80 | 0 | 4 | 4
ACCT2 | 1.0557 × 10^−91 | 0 | 4 | 4
GH | 1.0682 × 10^−78 | 0 | 4 | 4
KLW | 8.3547 × 10^−86 | 0 | 4 | 4
f6, x0 = 0.1:
MCCTU(1) | 1.1439 × 10^−32 | 5.0948 × 10^−129 | 3 | 3.9969
OS | 5.058 × 10^−36 | 4.1154 × 10^−143 | 3 | 3.9980
KI | 2.4554 × 10^−35 | 3.1386 × 10^−140 | 3 | 3.9979
JA | 7.2379 × 10^−36 | 1.8516 × 10^−142 | 3 | 3.9981
OK1 | 8.6178 × 10^−41 | 3.6529 × 10^−163 | 3 | 4.0021
OK2 | 1.3158 × 10^−36 | 1.4361 × 10^−145 | 3 | 3.9982
OK3 | 2.2299 × 10^−36 | 1.2384 × 10^−144 | 3 | 4.0031
CH | 1.6189 × 10^−34 | 8.6635 × 10^−137 | 3 | 3.9977
MA | 3.8478 × 10^−33 | 5.2367 × 10^−131 | 3 | 3.9971
BMM | 7.9902 × 10^−34 | 7.003 × 10^−134 | 3 | 3.9985
CLND1 | 1.2375 × 10^−32 | 7.0855 × 10^−129 | 3 | 3.9970
CLND2 | 1.6189 × 10^−34 | 8.6635 × 10^−137 | 3 | 3.9977
ACCT1 | 1.1439 × 10^−32 | 5.0948 × 10^−129 | 3 | 3.9969
ACCT2 | 2.0595 × 10^−36 | 9.7992 × 10^−145 | 3 | 3.9952
GH | 2.7869 × 10^−32 | 2.1488 × 10^−127 | 3 | 3.9967
KLW | 9.5356 × 10^−34 | 1.49 × 10^−133 | 3 | 3.9974
Table 6. Numerical performance of iterative methods on nonlinear equations for x0 close to ξ (3/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f7, x0 = −1.2:
MCCTU(1) | 8.109 × 10^−29 | 1.224 × 10^−110 | 3 | 4.0025
OS | 1.0259 × 10^−36 | 8.588 × 10^−144 | 3 | 3.9987
KI | 2.2 × 10^−33 | 8.266 × 10^−130 | 3 | 4.0003
JA | 1.3275 × 10^−35 | 3.987 × 10^−139 | 3 | 3.9993
OK1 | 2.8995 × 10^−32 | 4.1375 × 10^−125 | 3 | 4.0011
OK2 | 4.5559 × 10^−36 | 4.3528 × 10^−141 | 3 | 4.0014
OK3 | 3.3899 × 10^−35 | 9.9763 × 10^−138 | 3 | 4.0602
CH | 1.5282 × 10^−31 | 4.4541 × 10^−122 | 3 | 4.0011
MA | 1.9806 × 10^−29 | 3.2969 × 10^−113 | 3 | 4.0021
BMM | 5.13 × 10^−29 | 1.8531 × 10^−111 | 3 | 3.9988
CLND1 | 8.8475 × 10^−29 | 1.7657 × 10^−110 | 3 | 4.0024
CLND2 | 1.5282 × 10^−31 | 4.4541 × 10^−122 | 3 | 4.0011
ACCT1 | 8.109 × 10^−29 | 1.224 × 10^−110 | 3 | 4.0025
ACCT2 | 1.7685 × 10^−30 | 1.2708 × 10^−117 | 3 | 4.0037
GH | 2.4542 × 10^−28 | 1.2766 × 10^−108 | 3 | 4.0029
KLW | 2.7616 × 10^−30 | 8.4579 × 10^−117 | 3 | 4.0014
f8, x0 = 2.3:
MCCTU(1) | 3.2362 × 10^−36 | 1.278 × 10^−144 | 3 | 4.0010
OS | 4.7781 × 10^−35 | 1.1082 × 10^−139 | 3 | 3.9959
KI | 3.867 × 10^−35 | 4.5395 × 10^−140 | 3 | 3.9962
JA | 6.4886 × 10^−36 | 2.6103 × 10^−143 | 3 | 3.9934
OK1 | 1.3631 × 10^−35 | 5.9439 × 10^−142 | 3 | 3.9927
OK2 | 8.3354 × 10^−36 | 7.4954 × 10^−143 | 3 | 3.9931
OK3 | 8.2958 × 10^−36 | 7.3117 × 10^−143 | 3 | 3.9935
CH | 2.8017 × 10^−36 | 7.5934 × 10^−145 | 3 | 3.9943
MA | 7.3689 × 10^−36 | 4.1437 × 10^−143 | 3 | 3.9992
BMM | 2.6822 × 10^−34 | 1.5979 × 10^−136 | 3 | 3.9934
CLND1 | 5.1248 × 10^−38 | 3.528 × 10^−152 | 3 | 4.0005
CLND2 | 2.8017 × 10^−36 | 7.5934 × 10^−145 | 3 | 3.9943
ACCT1 | 3.2362 × 10^−36 | 1.278 × 10^−144 | 3 | 4.0010
ACCT2 | 1.2316 × 10^−34 | 5.9984 × 10^−138 | 3 | 3.9946
GH | 1.2035 × 10^−36 | 1.9404 × 10^−146 | 3 | 4.0036
KLW | 1.4928 × 10^−35 | 8.1719 × 10^−142 | 3 | 3.9978
f9, x0 = −1.4:
MCCTU(1) | 1.2504 × 10^−28 | 6.0286 × 10^−111 | 3 | 3.9982
OS | 2.2297 × 10^−33 | 5.9539 × 10^−131 | 3 | 4.0107
KI | 3.571 × 10^−39 | 4.8453 × 10^−155 | 3 | 3.9663
JA | 6.6365 × 10^−33 | 5.9006 × 10^−129 | 3 | 4.0095
OK1 | 1.7043 × 10^−30 | 8.4881 × 10^−119 | 3 | 4.0019
OK2 | 8.1242 × 10^−32 | 2.3078 × 10^−124 | 3 | 4.0049
OK3 | 1.3061 × 10^−31 | 1.4689 × 10^−123 | 3 | 4.0184
CH | 5.6961 × 10^−33 | 3.922 × 10^−129 | 3 | 3.9887
MA | 2.4063 × 10^−29 | 5.9988 × 10^−114 | 3 | 3.9973
BMM | 2.911 × 10^−28 | 2.1169 × 10^−109 | 3 | 3.9971
CLND1 | 1.0887 × 10^−28 | 3.3751 × 10^−111 | 3 | 3.9980
CLND2 | 5.6961 × 10^−33 | 3.922 × 10^−129 | 3 | 3.9887
ACCT1 | 1.2504 × 10^−28 | 6.0286 × 10^−111 | 3 | 3.9982
ACCT2 | 1.7434 × 10^−29 | 1.473 × 10^−114 | 3 | 4.0025
GH | 4.3546 × 10^−28 | 1.1301 × 10^−108 | 3 | 3.9989
KLW | 2.0248 × 10^−30 | 1.8702 × 10^−118 | 3 | 3.9955
Table 7. Numerical performance of iterative methods on nonlinear equations for x0 close to ξ (4/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f10, x0 = −0.9:
MCCTU(1) | 1.3096 × 10^−27 | 1.0557 × 10^−105 | 3 | 4.0263
OS | 1.2157 × 10^−28 | 5.2236 × 10^−110 | 3 | 4.0178
KI | 1.6268 × 10^−28 | 1.7588 × 10^−109 | 3 | 4.0189
JA | 2.5808 × 10^−28 | 1.2592 × 10^−108 | 3 | 4.0158
OK1 | 1.2566 × 10^−28 | 6.3023 × 10^−110 | 3 | 4.0126
OK2 | 2.0733 × 10^−28 | 5.0608 × 10^−109 | 3 | 4.0149
OK3 | 1.9638 × 10^−28 | 4.0898 × 10^−109 | 3 | 4.0133
CH | 4.7545 × 10^−28 | 1.6033 × 10^−107 | 3 | 4.0184
MA | 7.9356 × 10^−28 | 1.3045 × 10^−106 | 3 | 4.0246
BMM | 1.3934 × 10^−30 | 4.5027 × 10^−118 | 3 | 3.9969
CLND1 | 2.1208 × 10^−27 | 8.164 × 10^−105 | 3 | 4.0242
CLND2 | 4.7545 × 10^−28 | 1.6033 × 10^−107 | 3 | 4.0184
ACCT1 | 1.3096 × 10^−27 | 1.0557 × 10^−105 | 3 | 4.0263
ACCT2 | 1.9256 × 10^−29 | 2.4651 × 10^−113 | 3 | 4.0090
GH | 2.0723 × 10^−27 | 7.171 × 10^−105 | 3 | 4.0278
KLW | 4.546 × 10^−28 | 1.2771 × 10^−107 | 3 | 4.0226
Table 8. Numerical performance of iterative methods on nonlinear equations for x0 far from ξ (“nc” means non-convergence) (1/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f1, x0 = −1.8:
MCCTU(1) | 3.15 × 10^−28 | 4.4044 × 10^−111 | 4 | 3.9913
OS | 5.8375 × 10^−36 | 1.46 × 10^−142 | 4 | 3.9979
KI | 1.5765 × 10^−34 | 9.7538 × 10^−137 | 4 | 3.9972
JA | 5.3832 × 10^−35 | 1.0789 × 10^−138 | 4 | 3.9976
OK1 | 9.9392 × 10^−40 | 4.4016 × 10^−158 | 4 | 4.0001
OK2 | 2.4525 × 10^−36 | 3.6785 × 10^−144 | 4 | 3.9982
OK3 | 1.1878 × 10^−33 | 2.0829 × 10^−133 | 4 | 4.0017
CH | 2.7866 × 10^−32 | 1.2595 × 10^−127 | 4 | 3.9958
MA | 2.0068 × 10^−29 | 5.9514 × 10^−116 | 4 | 3.9929
BMM | 1.3126 × 10^−31 | 5.814 × 10^−125 | 4 | 4.0050
CLND1 | 7.4349 × 10^−28 | 1.3753 × 10^−109 | 4 | 3.9907
CLND2 | 2.7866 × 10^−32 | 1.2595 × 10^−127 | 4 | 3.9958
ACCT1 | 3.15 × 10^−28 | 4.4044 × 10^−111 | 4 | 3.9913
ACCT2 | 6.7276 × 10^−44 | 7.1842 × 10^−175 | 4 | 3.9954
GH | 2.7189 × 10^−27 | 2.8837 × 10^−107 | 4 | 3.9896
KLW | 7.9363 × 10^−31 | 1.1367 × 10^−121 | 4 | 3.9946
f2, x0 = 0.6:
MCCTU(1) | 6.8509 × 10^−86 | 0 | 4 | 4
OS | 7.8707 × 10^−82 | 0 | 4 | 4
KI | 4.1628 × 10^−82 | 0 | 4 | 4
JA | 5.9451 × 10^−78 | 7.7869 × 10^−208 | 4 | 4
OK1 | 1.7391 × 10^−77 | 7.7869 × 10^−208 | 4 | 4
OK2 | 8.4717 × 10^−78 | 0 | 4 | 4
OK3 | 8.9827 × 10^−78 | 0 | 4 | 4
CH | 1.8951 × 10^−78 | 0 | 4 | 4
MA | 1.8733 × 10^−84 | 0 | 4 | 4
BMM | 1.2206 × 10^−79 | 0 | 4 | 4
CLND1 | 2.1212 × 10^−80 | 0 | 4 | 4
CLND2 | 1.8951 × 10^−78 | 0 | 4 | 4
ACCT1 | 6.8509 × 10^−86 | 0 | 4 | 4
ACCT2 | 1.366 × 10^−80 | 0 | 4 | 4
GH | 1.4879 × 10^−88 | 0 | 4 | 4
KLW | 2.1154 × 10^−83 | 0 | 4 | 4
f3, x0 = 1.8:
MCCTU(1) | 6.0868 × 10^−31 | 6.9016 × 10^−121 | 5 | 3.9978
OS | 7.2812 × 10^−73 | 0 | 5 | 4
KI | 8.2846 × 10^−59 | 0 | 5 | 4
JA | 6.0259 × 10^−71 | 0 | 5 | 4
OK1 | 1.1879 × 10^−82 | 0 | 5 | 4
OK2 | 6.2111 × 10^−27 | 9.5138 × 10^−108 | 4 | 4.1522
OK3 | 7.5783 × 10^−53 | 0 | 5 | 4.0205
CH | 9.6923 × 10^−49 | 1.3715 × 10^−192 | 5 | 3.9999
MA | 1.3275 × 10^−34 | 1.1979 × 10^−135 | 5 | 3.9989
BMM | nc | nc | nc | nc
CLND1 | 9.5034 × 10^−31 | 4.1315 × 10^−120 | 5 | 3.9978
CLND2 | 9.6923 × 10^−49 | 1.3715 × 10^−192 | 5 | 3.9999
ACCT1 | 6.0868 × 10^−31 | 6.9016 × 10^−121 | 5 | 3.9978
ACCT2 | 6.1271 × 10^−31 | 2.8102 × 10^−121 | 4 | 3.9953
GH | 1.4039 × 10^−28 | 2.4077 × 10^−111 | 5 | 3.9965
KLW | 4.5965 × 10^−39 | 1.1996 × 10^−153 | 5 | 3.9996
Table 9. Numerical performance of iterative methods on nonlinear equations for x0 far from ξ (“nc” means non-convergence) (2/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f4, x0 = −42.3:
MCCTU(1) | nc | nc | nc | nc
OS | 2.602 × 10^−54 | 0 | 6 | 4.0004
KI | nc | nc | nc | nc
JA | 1.0645 × 10^−51 | 0 | 6 | 4
OK1 | nc | nc | nc | nc
OK2 | nc | nc | nc | nc
OK3 | nc | nc | nc | nc
CH | nc | nc | nc | nc
MA | nc | nc | nc | nc
BMM | nc | nc | nc | nc
CLND1 | nc | nc | nc | nc
CLND2 | nc | nc | nc | nc
ACCT1 | nc | nc | nc | nc
ACCT2 | nc | nc | nc | nc
GH | nc | nc | nc | nc
KLW | nc | nc | nc | nc
f5, x0 = 3.9:
MCCTU(1) | 5.1192 × 10^−33 | 6.3445 × 10^−129 | 5 | 3.9976
OS | 1.8922 × 10^−60 | 0 | 5 | 4
KI | 1.6925 × 10^−53 | 0 | 5 | 3.9999
JA | 1.8922 × 10^−60 | 0 | 5 | 4
OK1 | 4.3746 × 10^−85 | 0 | 5 | 4
OK2 | 8.1261 × 10^−70 | 0 | 5 | 4
OK3 | 8.7491 × 10^−49 | 5.1503 × 10^−193 | 5 | 4.0015
CH | 1.68 × 10^−47 | 2.7094 × 10^−187 | 5 | 3.9998
MA | 3.351 × 10^−36 | 9.1961 × 10^−142 | 5 | 3.9986
BMM | nc | nc | nc | nc
CLND1 | 5.1192 × 10^−33 | 6.3445 × 10^−129 | 5 | 3.9976
CLND2 | 1.68 × 10^−47 | 2.7094 × 10^−187 | 5 | 3.9998
ACCT1 | 5.1192 × 10^−33 | 6.3445 × 10^−129 | 5 | 3.9976
ACCT2 | 1.7037 × 10^−75 | 0 | 5 | 4
GH | 8.0066 × 10^−31 | 4.5963 × 10^−120 | 5 | 3.9964
KLW | 5.3477 × 10^−40 | 4.373 × 10^−157 | 5 | 3.9993
f6, x0 = 0.3:
MCCTU(1) | 2.8249 × 10^−77 | 1.2167 × 10^−208 | 4 | 4
OS | 4.615 × 10^−92 | 1.2167 × 10^−208 | 4 | 4
KI | 4.375 × 10^−89 | 1.2167 × 10^−208 | 4 | 4
JA | 1.7544 × 10^−91 | 1.2167 × 10^−208 | 4 | 4
OK1 | 3.5822 × 10^−29 | 1.0907 × 10^−116 | 3 | 3.9593
OK2 | 1.0602 × 10^−94 | 1.2167 × 10^−208 | 4 | 4
OK3 | 1.3907 × 10^−101 | 1.2167 × 10^−208 | 4 | 4
CH | 1.6778 × 10^−85 | 1.2167 × 10^−208 | 4 | 4
MA | 2.2127 × 10^−79 | 1.2167 × 10^−208 | 4 | 4
BMM | 3.3893 × 10^−83 | 1.2167 × 10^−208 | 4 | 4
CLND1 | 3.6933 × 10^−77 | 1.2167 × 10^−208 | 4 | 4
CLND2 | 1.6778 × 10^−85 | 1.2167 × 10^−208 | 4 | 4
ACCT1 | 2.8249 × 10^−77 | 2.4334 × 10^−208 | 4 | 4
ACCT2 | 3.1138 × 10^−91 | 1.2167 × 10^−208 | 4 | 4
GH | 1.6004 × 10^−75 | 1.2167 × 10^−208 | 4 | 4
KLW | 3.9889 × 10^−82 | 1.2167 × 10^−208 | 4 | 4
Table 10. Numerical performance of iterative methods on nonlinear equations for x0 far from ξ (“nc” means non-convergence) (3/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f7, x0 = −3.6:
MCCTU(1) | 2.1695 × 10^−40 | 6.2709 × 10^−157 | 12 | 3.9997
OS | 8.7445 × 10^−42 | 4.5328 × 10^−164 | 9 | 4.0005
KI | 3.5832 × 10^−59 | 0 | 10 | 4
JA | 1.445 × 10^−33 | 5.5984 × 10^−131 | 9 | 4.0010
OK1 | 4.5904 × 10^−56 | 0 | 9 | 4
OK2 | 2.2993 × 10^−100 | 0 | 9 | 4
OK3 | 4.4822 × 10^−55 | 0 | 11 | 3.9955
CH | 2.5759 × 10^−93 | 0 | 11 | 4
MA | 1.0752 × 10^−70 | 0 | 12 | 4
BMM | nc | nc | nc | nc
CLND1 | 5.1735 × 10^−38 | 2.0643 × 10^−147 | 12 | 3.9995
CLND2 | 2.5759 × 10^−93 | 0 | 11 | 4
ACCT1 | 2.1695 × 10^−40 | 6.2709 × 10^−157 | 12 | 3.9997
ACCT2 | 7.4341 × 10^−57 | 0 | 8 | 4
GH | 2.2938 × 10^−28 | 9.7412 × 10^−109 | 12 | 3.9971
KLW | 6.4489 × 10^−31 | 2.515 × 10^−119 | 11 | 3.9988
f8, x0 = 6.9:
MCCTU(1) | 8.9717 × 10^−34 | 7.5485 × 10^−135 | 5 | 3.9964
OS | 3.5465 × 10^−42 | 3.3638 × 10^−168 | 5 | 3.9988
KI | 3.0788 × 10^−46 | 1.8241 × 10^−184 | 5 | 3.9994
JA | 3.8134 × 10^−44 | 3.1142 × 10^−176 | 5 | 3.9984
OK1 | 4.9365 × 10^−41 | 1.0225 × 10^−163 | 5 | 3.9972
OK2 | 1.6379 × 10^−42 | 1.1175 × 10^−169 | 5 | 3.9979
OK3 | 9.2522 × 10^−52 | 1.0123 × 10^−206 | 5 | 4.0004
CH | 6.7803 × 10^−74 | 1.5574 × 10^−207 | 5 | 4
MA | 4.1752 × 10^−36 | 4.2707 × 10^−144 | 5 | 4.0002
BMM | 1.4528 × 10^−98 | 1.5574 × 10^−207 | 5 | 4
CLND1 | 2.6243 × 10^−36 | 2.4259 × 10^−145 | 5 | 3.9960
CLND2 | 6.7803 × 10^−74 | 1.5574 × 10^−207 | 5 | 4
ACCT1 | 8.9717 × 10^−34 | 7.5485 × 10^−135 | 5 | 3.9964
ACCT2 | 7.0924 × 10^−38 | 6.5962 × 10^−151 | 5 | 3.9970
GH | 9.3051 × 10^−33 | 6.9333 × 10^−131 | 5 | 3.9867
KLW | 1.1619 × 10^−39 | 2.9996 × 10^−158 | 5 | 4.0009
f9, x0 = −4.2:
MCCTU(1) | 1.9461 × 10^−27 | 3.5371 × 10^−106 | 6 | 4.0014
OS | 4.9716 × 10^−100 | 0 | 6 | 4
KI | 2.3907 × 10^−73 | 0 | 6 | 4.0003
JA | 1.5785 × 10^−91 | 0 | 6 | 4
OK1 | 7.3314 × 10^−101 | 0 | 6 | 4
OK2 | 5.2967 × 10^−94 | 0 | 6 | 4
OK3 | 3.3512 × 10^−48 | 6.3655 × 10^−190 | 6 | 3.9987
CH | 1.4014 × 10^−54 | 0 | 6 | 4.0003
MA | 2.5559 × 10^−32 | 7.6354 × 10^−126 | 6 | 4.0012
BMM | nc | nc | nc | nc
CLND1 | 1.866 × 10^−27 | 2.9131 × 10^−106 | 6 | 4.0016
CLND2 | 1.4014 × 10^−54 | 0 | 6 | 4.0003
ACCT1 | 1.9461 × 10^−27 | 3.5371 × 10^−106 | 6 | 4.0014
ACCT2 | 7.4372 × 10^−40 | 4.8779 × 10^−156 | 5 | 3.9995
GH | 3.5153 × 10^−95 | 0 | 7 | 4
KLW | 1.7919 × 10^−38 | 1.147 × 10^−150 | 6 | 4.0009
Table 11. Numerical performance of iterative methods on nonlinear equations for x0 far from ξ (4/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f10, x0 = −2.7:
MCCTU(1) | 1.4573 × 10^−79 | 1.0707 × 10^−207 | 6 | 3.9998
OS | 5.9731 × 10^−31 | 3.7644 × 10^−111 | 10 | 4.0223
KI | 3.5019 × 10^−84 | 3.3094 × 10^−207 | 6 | 4.0006
JA | 2.7724 × 10^−41 | 1.3193 × 10^−154 | 5 | 3.9916
OK1 | 4.672 × 10^−37 | 3.3675 × 10^−141 | 5 | 4.0067
OK2 | 6.9502 × 10^−38 | 2.3984 × 10^−144 | 5 | 3.9545
OK3 | 4.1808 × 10^−48 | 2.2923 × 10^−179 | 5 | 4.0014
CH | 1.8136 × 10^−97 | 3.8389 × 10^−205 | 6 | 4
MA | 2.2689 × 10^−93 | 3.8934 × 10^−208 | 5 | 4
BMM | 2.6823 × 10^−41 | 6.1829 × 10^−161 | 6 | 3.9999
CLND1 | 8.2625 × 10^−101 | 2.7254 × 10^−207 | 5 | 4
CLND2 | 1.8136 × 10^−97 | 3.8389 × 10^−205 | 6 | 4
ACCT1 | 1.4573 × 10^−79 | 1.0707 × 10^−207 | 6 | 3.9998
ACCT2 | 3.3917 × 10^−58 | 2.3908 × 10^−204 | 6 | 4.0003
GH | 2.2337 × 10^−72 | 1.0707 × 10^−207 | 5 | 3.9994
KLW | 2.4673 × 10^−32 | 6.302 × 10^−122 | 4 | 3.8791
Table 12. Numerical performance of iterative methods on nonlinear equations for x0 very far from ξ (“nc” means non-convergence) (1/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f1, x0 = −6.0:
MCCTU(1) | 3.9494 × 10^−95 | 0 | 6 | 4
OS | 7.5454 × 10^−40 | 4.0753 × 10^−158 | 5 | 3.9989
KI | 2.2846 × 10^−36 | 4.3008 × 10^−144 | 5 | 3.9980
JA | 4.2789 × 10^−41 | 4.3067 × 10^−163 | 5 | 3.9992
OK1 | 2.8437 × 10^−53 | 0 | 5 | 4
OK2 | 2.0962 × 10^−46 | 1.9632 × 10^−184 | 5 | 3.9997
OK3 | 1.9553 × 10^−33 | 1.5296 × 10^−132 | 5 | 4.0018
CH | 1.6161 × 10^−33 | 1.4248 × 10^−132 | 5 | 3.9966
MA | 2.632 × 10^−26 | 1.7609 × 10^−103 | 5 | 3.9875
BMM | nc | nc | nc | nc
CLND1 | 5.8255 × 10^−95 | 0 | 6 | 4
CLND2 | 1.6161 × 10^−33 | 1.4248 × 10^−132 | 5 | 3.9966
ACCT1 | 3.9494 × 10^−95 | 0 | 6 | 4
ACCT2 | 5.5395 × 10^−58 | 0 | 5 | 3.9996
GH | 6.2374 × 10^−89 | 0 | 6 | 4
KLW | 1.0186 × 10^−28 | 3.0849 × 10^−113 | 5 | 3.9921
f2, x0 = 2.0:
MCCTU(1) | 1.0368 × 10^−34 | 5.4646 × 10^−139 | 4 | 4.0222
OS | 2.3862 × 10^−93 | 0 | 5 | 4
KI | 1.2873 × 10^−25 | 4.3485 × 10^−102 | 4 | 3.9933
JA | 4.1797 × 10^−95 | 0 | 5 | 4
OK1 | 8.2892 × 10^−82 | 0 | 5 | 4
OK2 | 5.051 × 10^−87 | 0 | 5 | 4
OK3 | 4.2138 × 10^−33 | 7.557 × 10^−132 | 4 | 3.9991
CH | 1.639 × 10^−29 | 1.4412 × 10^−117 | 4 | 3.9949
MA | 9.1807 × 10^−43 | 5.5512 × 10^−171 | 4 | 4.0028
BMM | 2.9675 × 10^−52 | 0 | 6 | 3.9998
CLND1 | 9.0974 × 10^−32 | 7.3425 × 10^−127 | 4 | 4.0155
CLND2 | 1.639 × 10^−29 | 1.4412 × 10^−117 | 4 | 3.9949
ACCT1 | 1.0368 × 10^−34 | 5.4646 × 10^−139 | 4 | 4.0222
ACCT2 | 1.9358 × 10^−73 | 0 | 5 | 4
GH | 6.4242 × 10^−33 | 2.803 × 10^−132 | 4 | 4.0791
KLW | 1.6753 × 10^−39 | 8.5848 × 10^−158 | 4 | 3.9976
f3, x0 = 6.0:
MCCTU(1) | 1.0037 × 10^−74 | 0 | 9 | 4
OS | 1.4888 × 10^−45 | 1.7033 × 10^−180 | 7 | 4
KI | 3.4193 × 10^−27 | 1.1139 × 10^−106 | 7 | 3.9978
JA | 2.4788 × 10^−42 | 1.4488 × 10^−167 | 7 | 4
OK1 | 7.5531 × 10^−64 | 0 | 7 | 4
OK2 | 5.2399 × 10^−91 | 0 | 7 | 4
OK3 | 2.024 × 10^−56 | 0 | 8 | 4.0130
CH | 3.2307 × 10^−66 | 6.8135 × 10^−208 | 8 | 4
MA | 2.7511 × 10^−26 | 2.21 × 10^−102 | 8 | 3.9951
BMM | nc | nc | nc | nc
CLND1 | 5.7924 × 10^−73 | 0 | 9 | 4
CLND2 | 3.2307 × 10^−66 | 6.8135 × 10^−208 | 8 | 4
ACCT1 | 1.0037 × 10^−74 | 0 | 9 | 4
ACCT2 | 1.643 × 10^−31 | 1.4531 × 10^−123 | 6 | 4.0042
GH | 1.8548 × 10^−60 | 0 | 9 | 4
KLW | 1.1156 × 10^−35 | 4.1633 × 10^−140 | 8 | 3.9992
Table 13. Numerical performance of iterative methods on nonlinear equations for x0 very far from ξ (“nc” means non-convergence) (2/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f4, x0 = −141.0:
MCCTU(1) | nc | nc | nc | nc
OS | nc | nc | nc | nc
KI | nc | nc | nc | nc
JA | nc | nc | nc | nc
OK1 | nc | nc | nc | nc
OK2 | nc | nc | nc | nc
OK3 | nc | nc | nc | nc
CH | nc | nc | nc | nc
MA | nc | nc | nc | nc
BMM | nc | nc | nc | nc
CLND1 | nc | nc | nc | nc
CLND2 | nc | nc | nc | nc
ACCT1 | nc | nc | nc | nc
ACCT2 | nc | nc | nc | nc
GH | nc | nc | nc | nc
KLW | nc | nc | nc | nc
f5, x0 = 13.0:
MCCTU(1) | 1.2254 × 10^−58 | 0 | 7 | 4
OS | 4.3174 × 10^−43 | 5.0572 × 10^−170 | 6 | 3.9996
KI | 5.4113 × 10^−35 | 1.9154 × 10^−137 | 6 | 3.9985
JA | 4.3174 × 10^−43 | 5.0572 × 10^−170 | 6 | 3.9996
OK1 | 1.2884 × 10^−87 | 0 | 6 | 4
OK2 | 1.1488 × 10^−54 | 0 | 6 | 4
OK3 | 1.3547 × 10^−27 | 2.9602 × 10^−108 | 6 | 4.0338
CH | 1.1569 × 10^−28 | 6.0929 × 10^−112 | 6 | 3.9948
MA | 2.1036 × 10^−69 | 6.2295 × 10^−207 | 7 | 4
BMM | nc | nc | nc | nc
CLND1 | 1.2254 × 10^−58 | 0 | 7 | 4
CLND2 | 1.1569 × 10^−28 | 6.0929 × 10^−112 | 6 | 3.9948
ACCT1 | 1.2254 × 10^−58 | 0 | 7 | 4
ACCT2 | 7.7193 × 10^−68 | 0 | 6 | 4
GH | 6.6848 × 10^−52 | 2.2302 × 10^−204 | 7 | 3.9999
KLW | 1.4904 × 10^−82 | 6.2295 × 10^−207 | 7 | 4
f6, x0 = 1.0:
MCCTU(1) | 1.4106 × 10^−84 | 1.2167 × 10^−208 | 5 | 4
OS | 1.0011 × 10^−40 | 6.3155 × 10^−162 | 4 | 3.9991
KI | 6.4033 × 10^−37 | 1.4516 × 10^−146 | 4 | 3.9984
JA | 2.3288 × 10^−40 | 1.9843 × 10^−160 | 4 | 3.9991
OK1 | 1.8287 × 10^−53 | 1.2167 × 10^−208 | 4 | 3.9997
OK2 | 1.2544 × 10^−44 | 1.1864 × 10^−177 | 4 | 3.9995
OK3 | 7.2892 × 10^−32 | 1.4139 × 10^−126 | 4 | 3.9897
CH | 9.2308 × 10^−32 | 9.1584 × 10^−126 | 4 | 3.9962
MA | 1.5346 × 10^−95 | 1.2167 × 10^−208 | 5 | 4
BMM | 2.958 × 10^−31 | 1.3155 × 10^−123 | 4 | 4.0024
CLND1 | 1.5451 × 10^−84 | 1.2167 × 10^−208 | 5 | 4
CLND2 | 9.2308 × 10^−32 | 9.1584 × 10^−126 | 4 | 3.9962
ACCT1 | 1.4106 × 10^−84 | 1.2167 × 10^−208 | 5 | 4
ACCT2 | 5.8091 × 10^−73 | 1.2167 × 10^−208 | 5 | 4
GH | 4.0743 × 10^−77 | 1.2167 × 10^−208 | 5 | 4
KLW | 3.7745 × 10^−28 | 3.658 × 10^−111 | 4 | 3.9931
Table 14. Numerical performance of iterative methods on nonlinear equations for x0 very far from ξ (“nc” means non-convergence) (3/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f7, x0 = −12.0:
MCCTU(1) | nc | nc | nc | nc
OS | nc | nc | nc | nc
KI | nc | nc | nc | nc
JA | nc | nc | nc | nc
OK1 | nc | nc | nc | nc
OK2 | nc | nc | nc | nc
OK3 | nc | nc | nc | nc
CH | nc | nc | nc | nc
MA | nc | nc | nc | nc
BMM | nc | nc | nc | nc
CLND1 | nc | nc | nc | nc
CLND2 | nc | nc | nc | nc
ACCT1 | nc | nc | nc | nc
ACCT2 | 1.2624 × 10^−39 | 3.2997 × 10^−154 | 50 | 4.0007
GH | nc | nc | nc | nc
KLW | nc | nc | nc | nc
f8, x0 = 23.0:
MCCTU(1) | 7.1071 × 10^−32 | 2.9726 × 10^−127 | 6 | 3.9944
OS | 3.9961 × 10^−44 | 5.4218 × 10^−176 | 6 | 3.9991
KI | 1.9961 × 10^−46 | 3.223 × 10^−185 | 6 | 3.9995
JA | 2.8208 × 10^−93 | 1.5574 × 10^−207 | 6 | 4
OK1 | 1.2245 × 10^−30 | 3.8716 × 10^−122 | 5 | 3.9812
OK2 | 1.3604 × 10^−44 | 5.3174 × 10^−178 | 5 | 3.9985
OK3 | 1.477 × 10^−56 | 1.5574 × 10^−207 | 6 | 3.9998
CH | 3.6894 × 10^−61 | 1.5574 × 10^−207 | 6 | 3.9999
MA | 5.2575 × 10^−34 | 1.0738 × 10^−135 | 6 | 4.0001
BMM | 3.7259 × 10^−45 | 3.1549 × 10^−179 | 7 | 4.0007
CLND1 | 1.0076 × 10^−36 | 5.271 × 10^−147 | 6 | 3.9964
CLND2 | 3.6894 × 10^−61 | 1.5574 × 10^−207 | 6 | 3.9999
ACCT1 | 7.1071 × 10^−32 | 2.9726 × 10^−127 | 6 | 3.9944
ACCT2 | 1.1468 × 10^−52 | 1.5574 × 10^−207 | 6 | 3.9997
GH | 3.8246 × 10^−31 | 1.9787 × 10^−124 | 6 | 3.9800
KLW | 1.5819 × 10^−37 | 1.0306 × 10^−149 | 6 | 4.0012
f9, x0 = −14.0:
MCCTU(1) | 7.6712 × 10^−52 | 8.5422 × 10^−204 | 9 | 4
OS | 2.3748 × 10^−99 | 0 | 8 | 4
KI | 7.9344 × 10^−69 | 0 | 8 | 4.0006
JA | 1.0139 × 10^−94 | 0 | 8 | 4
OK1 | 3.2135 × 10^−50 | 1.0728 × 10^−197 | 7 | 3.9999
OK2 | 1.5049 × 10^−32 | 2.7173 × 10^−127 | 7 | 3.9952
OK3 | 7.1691 × 10^−34 | 1.3332 × 10^−132 | 8 | 3.9822
CH | 8.1205 × 10^−44 | 1.62 × 10^−172 | 8 | 4.0015
MA | 5.7604 × 10^−70 | 0 | 9 | 4
BMM | nc | nc | nc | nc
CLND1 | 9.8978 × 10^−52 | 2.306 × 10^−203 | 9 | 4
CLND2 | 8.1205 × 10^−44 | 1.62 × 10^−172 | 8 | 4.0015
ACCT1 | 7.6712 × 10^−52 | 8.5422 × 10^−204 | 9 | 4
ACCT2 | 7.9723 × 10^−43 | 6.4405 × 10^−168 | 7 | 4.0003
GH | 3.8751 × 10^−42 | 7.0871 × 10^−165 | 9 | 4.0001
KLW | 1.1099 × 10^−94 | 0 | 9 | 4
Table 15. Numerical performance of iterative methods on nonlinear equations for x0 very far from ξ (“nc” means non-convergence) (4/4).
Method | |x_{k+1} − x_k| | |f(x_{k+1})| | Iter | ACOC
f10, x0 = −9.0:
MCCTU(1) | 1.2776 × 10^−70 | 1.0707 × 10^−207 | 6 | 4.0008
OS | 2.9225 × 10^−29 | 1.7446 × 10^−112 | 5 | 3.9821
KI | 7.9476 × 10^−94 | 1.5574 × 10^−207 | 8 | 4
JA | 6.1519 × 10^−29 | 1.4803 × 10^−108 | 8 | 4.0703
OK1 | 2.3282 × 10^−63 | 9.7336 × 10^−208 | 6 | 4
OK2 | 2.4314 × 10^−75 | 1.9467 × 10^−208 | 7 | 4
OK3 | 1.9369 × 10^−93 | 4.9203 × 10^−206 | 6 | 4
CH | 1.0491 × 10^−37 | 1.9596 × 10^−135 | 8 | 4.0092
MA | 9.5564 × 10^−89 | 3.8934 × 10^−208 | 6 | 4
BMM | nc | nc | nc | nc
CLND1 | 1.5067 × 10^−28 | 4.9584 × 10^−107 | 12 | 3.7072
CLND2 | 1.0491 × 10^−37 | 1.9596 × 10^−135 | 8 | 4.0092
ACCT1 | 1.2776 × 10^−70 | 1.0707 × 10^−207 | 6 | 4.0008
ACCT2 | 1.463 × 10^−49 | 5.2801 × 10^−191 | 5 | 4.0594
GH | 2.4513 × 10^−45 | 6.3226 × 10^−174 | 7 | 4.0044
KLW | 5.7956 × 10^−39 | 1.4646 × 10^−150 | 6 | 4.0153
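The ACOC column in the tables above is the approximated computational order of convergence, estimated from three consecutive differences of iterates as ACOC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|). The following is a minimal double-precision sketch of that estimate; it uses classical Newton's method on f(x) = x² − 2 as a stand-in iteration (the compared schemes and the multiprecision arithmetic of the experiments are not reproduced here), so the names `acoc` and `newton` are illustrative, not from the paper's code.

```python
import math

def acoc(xs):
    # ACOC estimate from the last four iterates:
    # ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}), with e_k = |x_k - x_{k-1}|.
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Classical Newton iteration; all iterates are stored so the
    # ACOC can be estimated after convergence.
    xs = [x0]
    for _ in range(max_iter):
        x = xs[-1]
        x_new = x - f(x) / df(x)
        xs.append(x_new)
        if abs(x_new - x) < tol:
            break
    return xs

iterates = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
print(acoc(iterates))  # close to 2, Newton's theoretical order
```

The stopping tolerance is kept well above machine precision so the last three differences are nonzero; in the article's experiments the same idea is applied with many more significant digits, which is why ACOC values close to 4 can be resolved.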
Share and Cite

Moscoso-Martínez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Ureña-Callay, G. Achieving Optimal Order in a Novel Family of Numerical Methods: Insights from Convergence and Dynamical Analysis Results. Axioms 2024, 13, 458. https://doi.org/10.3390/axioms13070458