Article

Approximation of the Fractional SDEs with Stochastic Forcing

by
Kęstutis Kubilius
Faculty of Mathematics and Informatics, Vilnius University, Akademijos g. 4, LT-03225 Vilnius, Lithuania
Mathematics 2024, 12(24), 3875; https://doi.org/10.3390/math12243875
Submission received: 5 November 2024 / Revised: 2 December 2024 / Accepted: 4 December 2024 / Published: 10 December 2024
(This article belongs to the Special Issue Probabilistic Models in Insurance and Finance)

Abstract

Using the implicit Euler and Milstein approximation schemes, conditions for the pathwise convergence rate of these approximations to the solution of fractional SDEs with stochastic forcing are found.

1. Introduction

The mathematical literature has extensively analyzed stochastic differential equations (SDEs) driven by fractional Brownian motion. Most of these efforts have been motivated by problems arising in the financial applications of SDEs, such as option pricing, stochastic volatility, and interest rate modeling. However, there are few results concerning SDEs with boundary conditions. Typically, only SDEs involving reflection at the boundary are considered (see [1,2]). Here, we consider a new class of SDEs with stochastic forcing. This class allows us to consider boundaries of a new type.
We consider stochastic differential equations of the following form:

$X_t = X_0 + \Phi(X_t) - \Phi(X_0) + \int_0^t f(s, X_s)\,ds + \int_0^t g(s, X_s)\,dB^H_s, \quad t \in [0, T],$   (1)
where $\Phi\colon \mathbb{R}\to\mathbb{R}$ is a continuous function, $f, g\colon [0,T]\times\mathbb{R}\to\mathbb{R}$ are measurable functions, and $B^H = (B^H_t)_{t\ge 0}$, $1/2 < H < 1$, is a fractional Brownian motion. The stochastic integral in Equation (1) is a pathwise generalized Lebesgue–Stieltjes integral. Thus, we can use the pathwise approach to study these fractional stochastic differential equations (FSDEs). We call Equation (1) an FSDE with stochastic forcing term $\Phi$. Examining such a model can be interpreted as studying the environment's influence on the behavior of a process. Such equations can be used to consider FSDEs with a permeable wall. The permeable-wall model describes a process that can cross the wall, but where the force does not allow the process to move far from the wall. In [3,4], the fractional Vasicek process with a soft wall was considered as a modeling example; this example explains what a fractional SDE with a permeable wall is. Processes of this type can be applied in the natural sciences. In particular, they can be used in financial mathematics as models for stochastic volatility. Indeed, it has been convincingly argued that financial markets have a memory that is best interpreted in the framework of stochastic volatility, and a stochastic differential equation involving fractional Brownian motion is a natural model with memory [5,6,7]. On the other hand, volatility should have certain limits and reasonable sizes and should not deviate infinitely far; otherwise, such a market model would not admit equivalent martingale measures and would poorly describe real financial processes. Thus, the presence of a permeable wall allows us to construct a model of stochastic volatility with reasonable behavior.
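Since the driver is an fBm with $H \in (1/2, 1)$, a path of $B^H$ can be sampled for illustration via Cholesky factorization of its exact covariance (a minimal sketch; the function name and grid choice are mine, not from the paper):

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=None):
    """Sample B^H on the uniform grid kT/n, k = 0..n, via Cholesky
    factorization of the exact covariance
    E[B^H_s B^H_t] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                      # grid without t = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], path))              # B^H_0 = 0

# H > 1/2: increments are positively correlated, i.e., the path has memory
b = fbm_cholesky(256, H=0.7, seed=0)
```

The Cholesky method is exact but costs $O(n^3)$; for long paths, circulant-embedding samplers are the usual alternative.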
In general, SDEs rarely possess closed-form analytic solutions; therefore, both in general and in our case, it is important to consider numerical methods for their solution. The existence and uniqueness of the solution of Equation (1) were obtained in [8]. A special case of Equation (1) with a constant and strictly positive diffusion coefficient was considered in [3]. In this article, we are interested in pathwise numerical approximations of the solution to Equation (1).
Much of the literature is devoted to numerical methods for SDEs driven by fBm or a combination of Brownian motion and fBm. Strong SDE approximation schemes are usually considered in the literature. Euler, modified Euler, and other higher-order approximation schemes should be mentioned here (see, e.g., [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26] and references therein).
The rate of convergence of Euler approximations $Y^n$ to solutions of pathwise SDEs driven by fBm with Hurst index $H > 1/2$ was obtained in [17]. It was proved that for any $\varepsilon > 0$ there exists a random variable $C_\varepsilon$ such that, almost surely,

$\sup_{0 \le t \le T} |X_t - Y^n_t| \le C_\varepsilon\, \Delta_n^{2H - 1 - \varepsilon}.$
Here, we apply the implicit Euler- and implicit Milstein-type approximations to the solution of Equation (1) and find the pathwise convergence rate. These results were obtained for the first time under fairly general coefficient restrictions. For simplicity, we consider an implicit Milstein-type approximation for the time-homogeneous Equation (1).
The paper is organized in the following way. In Section 2, we present the paper's main results. Section 3 contains definitions of the considered spaces of functions and a priori estimates for the Lebesgue–Stieltjes integral. Section 4 defines a deterministic differential equation corresponding to FSDE (1) and considers the properties of its implicit Euler approximation. Some results are taken from [8]. In Section 5, we obtain a convergence rate for the implicit Euler approximation of the deterministic differential equation corresponding to FSDE (1). Section 6 presents the implicit Milstein-type approximation and auxiliary results. In Section 7, the convergence rate of the Milstein-type approximation is obtained. Finally, Section 8 considers the fractional Pearson diffusion process as an example.

2. Main Result

We assume that the coefficients f , g satisfy the following conditions with some nonrandom constants:
$(A_1)$ $g(t,x)$ is differentiable in $x$, there exist constants $0 < \beta, \delta \le 1$, and for every $N \ge 0$ there exists $M_N > 0$ such that the following properties hold:
(i) Lipschitz continuity in $x$:
$|g(t,x) - g(t,y)| \le M_0 |x-y|, \quad x, y \in \mathbb{R},\ t \in [0,T].$
(ii) Local Hölder continuity of the derivative in $x$:
$|g'_x(t,x) - g'_x(t,y)| \le M_N |x-y|^\delta, \quad x, y \in [-N, N],\ t \in [0,T].$
(iii) Hölder continuity in $t$:
$|g(s,x) - g(t,x)| + |g'_x(s,x) - g'_x(t,x)| \le M_0 |t-s|^\beta, \quad x \in \mathbb{R},\ t, s \in [0,T].$
$(A_2)$ There exists a constant $0 < \beta \le 1$, and for every $N \ge 0$ there exists $L_N > 0$, such that the following properties hold:
(i) Local Lipschitz continuity in $x$:
$|f(t,x) - f(t,y)| \le L_N |x-y|, \quad x, y \in [-N,N],\ t \in [0,T].$
(ii) Linear growth condition:
$|f(t,x)| \le L_0 (1 + |x|), \quad x \in \mathbb{R},\ t \in [0,T].$
(iii) Hölder continuity in $t$:
$|f(s,x) - f(t,x)| \le L_0 |t-s|^\beta, \quad x \in \mathbb{R},\ t, s \in [0,T].$
$(A_3)$ The function $\Phi\colon \mathbb{R}\to\mathbb{R}$ is differentiable, there exist constants $0 < c < 1$ and $0 < \rho \le 1$, and for every $N \ge 0$ there exists $K_N > 0$, such that the following properties hold:
(i) $|\Phi'(x)| \le c$ for all $x \in \mathbb{R}$.
(ii) Local Hölder continuity of the derivative:
$|\Phi'_x(x) - \Phi'_x(y)| \le K_N |x-y|^\rho, \quad x, y \in [-N,N].$
$(A_4)$ The function $D\colon \mathbb{R}\to\mathbb{R}$, where $D(x) := x - \Phi(x)$, has the following properties:
(i) It is strictly monotonic and surjective.
(ii) There is a constant $d > 0$ such that
$|D(x) - D(y)| \ge d\, |x-y|.$
Remark 1
(see Remark 8 in [3]). Under Assumption $(A_3)$, the function $D$ satisfies Assumption $(A_4)$ with $d = 1 - c$.
For the time-homogeneous version of Equation (1), we assume that the coefficients $f, g$ satisfy the following conditions:
$(B)$ There exist constants $M_0, L_0 > 0$ such that the following properties hold:
(i) $|f(x) - f(y)| \le L_0 |x-y|$ and $|gg'(x) - gg'(y)| \le M_0 |x-y|$ for $x, y \in \mathbb{R}$;
(ii) $|f(x)| \le L_0 (1+|x|)$, $|g(x)| \le M_0 (1+|x|)$, and $|gg'(x)| \le M_0 (1+|x|)$ for $x \in \mathbb{R}$;
(iii) $|g(x)| \le M_0$ and $|g'(x)| \le M_0$ for $x \in \mathbb{R}$;
where we write $gg'(\cdot)$ instead of $g(\cdot)\, g'(\cdot)$ to shorten the notation.
Remark 2.
Linear growth conditions for the functions $f$, $g$, and $gg'$ are unnecessary, but they simplify the notation in what follows.
For simplicity of presentation, we consider uniform partitions of the interval $[0,T]$. Let $\pi_n = \{ t^n_k = \frac{k}{n} T :\ 0 \le k \le n \}$ be a sequence of uniform partitions of $[0,T]$, and let $\Delta_n = t^n_k - t^n_{k-1} = T/n$, $1 \le k \le n$, with $\Delta_n < 1$. We define the time-continuous interpolation of the implicit Euler approximation for Equation (1) as

$Y^n_t - \Phi(Y^n_t) = X_0 - \Phi(X_0) + \int_0^t f\big(\tau^n_s, Y^n(\tau^n_s)\big)\,ds + \int_0^t g\big(\tau^n_s, Y^n(\tau^n_s)\big)\,dB^H_s$

and the time-continuous interpolation of the implicit Milstein-type approximation for the time-homogeneous Equation (1) as

$Y^n_t - \Phi(Y^n_t) = X_0 - \Phi(X_0) + \int_0^t f\big(Y^n(\tau^n_s)\big)\,ds + \int_0^t g\big(Y^n(\tau^n_s)\big)\,dB^H_s + \int_0^t \int_{\tau^n_s}^s g'_x g\big(Y^n(\tau^n_s)\big)\,dB^H_u\,dB^H_s,$

where $\tau^n_s = t^n_{k-1}$ and $Y^n(\tau^n_s) = Y^n(t^n_{k-1})$ if $s \in [t^n_{k-1}, t^n_k)$, $1 \le k \le n$.
We introduce the symbol $O_\omega$ for simplicity of notation. Let $(\xi_n)$ be a sequence of r.v.s, let $\varsigma$ be an a.s. nonnegative r.v., and let $(a_n) \subset (0, \infty)$ be a vanishing sequence. Then, $\xi_n = O_\omega(a_n)$ means that $|\xi_n| \le \varsigma \cdot a_n$ for all $n$.
Set

$\gamma_0 = 1 - \alpha_0, \qquad \alpha_0 = \min\Big\{ \frac12,\ \beta,\ \frac{\delta}{1+\delta},\ \frac{\rho}{1+\rho} \Big\}$   (5)

and denote by $Y^{E,n}$ and $Y^{M,n}$ the implicit Euler- and Milstein-type approximations. The norm $\|\cdot\|_{1-\gamma,\infty}$ is defined in Section 3.1.
Theorem 1
(See Theorem 1 [8]). Suppose that the functions $f(t,x)$ and $g(t,x)$ satisfy Assumptions $(A_1)$ and $(A_2)$ with $\frac{1}{H} - 1 < \delta, \rho \le 1$ and $1 - H < \beta \le 1$. If Assumption $(A_3)$ is satisfied and $\gamma \in (\gamma_0, H)$, then there exists a unique stochastic process $X \in C^\gamma(0,T)$ satisfying FSDE (1), where $C^\gamma(0,T)$ is the space of $\gamma$-Hölder continuous functions.
Theorem 2.
Under the hypotheses of Theorem 1 with $1 - H < \beta \le 1$ replaced by $H \le \beta \le 1$, we have

$\| X - Y^{E,n} \|_{1-\gamma,\infty} = O_\omega\big( \Delta_n^{2\gamma - 1} \big).$
Set

$\hat\gamma_0 = 1 - \hat\alpha_0, \qquad \hat\alpha_0 = \min\Big\{ \frac12,\ \frac{\rho}{1+\rho} \Big\}.$   (7)
Theorem 3.
Suppose that the functions $f(x)$ and $g(x)$ satisfy Assumption $(B)$ with $\frac{1}{H} - 1 < \rho \le 1$. If Assumption $(A_3)$ is satisfied and $\gamma \in (\hat\gamma_0, H)$, then there exists a unique stochastic process $X \in C^\gamma(0,T)$ and

$\| X - Y^{M,n} \|_{1-\gamma,\infty} = O_\omega\big( \Delta_n^{\gamma} \big).$
The statements of Theorems 2 and 3 follow directly from the results for deterministic differential equations, as we can apply the pathwise approach for FSDE (1).
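The interplay of the parameters in Theorems 1–3 can be made concrete with a small sketch (the helper names are mine) evaluating $\alpha_0$, $\gamma_0$ from (5) and $\hat\gamma_0$ from (7):

```python
def alpha0(beta, delta, rho):
    """alpha_0 from (5)."""
    return min(0.5, beta, delta / (1 + delta), rho / (1 + rho))

def gamma0(beta, delta, rho):
    """gamma_0 = 1 - alpha_0; Theorems 1-2 require gamma in (gamma_0, H)."""
    return 1.0 - alpha0(beta, delta, rho)

def gamma0_hat(rho):
    """hat gamma_0 = 1 - min{1/2, rho/(1+rho)} from (7)."""
    return 1.0 - min(0.5, rho / (1 + rho))

# smoothest case beta = delta = rho = 1: gamma must exceed 1/2, so a
# nonempty admissible range (gamma_0, H) forces H > 1/2
print(gamma0(1, 1, 1), gamma0_hat(1))
```

Rougher coefficients (smaller $\delta$, $\rho$) push $\gamma_0$ up and thus shrink both the admissible range $(\gamma_0, H)$ and the Euler rate $2\gamma - 1$.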

3. Preliminaries

3.1. Spaces of Functions and Norms

Let us recall some function spaces that will be used in what follows.
We use $W_0^{\alpha,\infty}(0,T)$, where $0 < \alpha < 1/2$, to denote the space of real-valued measurable functions $f\colon [0,T] \to \mathbb{R}$ such that

$\|f\|_{\alpha,\infty} := \sup_{s \in [0,T]} \Big( |f(s)| + \int_0^s \frac{|f(s) - f(u)|}{(s-u)^{1+\alpha}}\,du \Big) < \infty.$

The space $W_0^{\alpha,\infty}(0,T)$ is a Banach space with respect to the norm $\|f\|_{\alpha,\infty}$; for $\lambda \ge 0$, an equivalent norm is defined by

$\|f\|_{\alpha,\lambda} := \sup_{t \in [0,T]} e^{-\lambda t} \Big( |f(t)| + \int_0^t \frac{|f(t) - f(s)|}{(t-s)^{1+\alpha}}\,ds \Big).$
For any $\mu \in (0,1]$, we denote by $C^\mu(0,T)$ the space of $\mu$-Hölder continuous functions $f\colon [0,T] \to \mathbb{R}$ equipped with the norm $\|f\|_\mu := |f|_\infty + |f|_\mu$, where

$|f|_\mu := \sup_{0 \le s < t \le T} \frac{|f(t) - f(s)|}{|t-s|^\mu}, \qquad |f|_\infty := \sup_{t \in [0,T]} |f(t)|.$

Clearly, $C^{1-\alpha}(0,T) \subset W_0^{\alpha,\infty}(0,T)$ for $0 < \alpha < 1/2$ and

$\|f\|_{\alpha,\lambda} \le \|f\|_{1-\alpha} \Big( 1 + \frac{T^{1-2\alpha}}{1-2\alpha} \Big).$   (9)
We denote by $W_T^{1-\alpha,\infty}(0,T)$, $0 < \alpha < 1/2$, the space of measurable functions $g\colon [0,T] \to \mathbb{R}$ such that

$\|g\|_{1-\alpha,\infty,T} := \sup_{0 < s < t < T} \Big( \frac{|g(t) - g(s)|}{(t-s)^{1-\alpha}} + \int_s^t \frac{|g(y) - g(s)|}{(y-s)^{2-\alpha}}\,dy \Big) < \infty;$

moreover, $W_T^{1-\alpha,\infty}(0,T) \subset C^{1-\alpha}(0,T)$ (see [21]).
In addition, we denote by $W_0^{\alpha,1}(0,T)$, $0 < \alpha < 1/2$, the space of measurable functions $f$ on $[0,T]$ such that

$\|f\|_{\alpha,1} := \int_0^T \frac{|f(s)|}{s^\alpha}\,ds + \int_0^T \int_0^s \frac{|f(s) - f(y)|}{(s-y)^{1+\alpha}}\,dy\,ds < \infty.$
Fixing $p \in (0, \infty)$ and letting $\varkappa = \{ \{t_0, \dots, t_n\} : 0 = t_0 < \dots < t_n = T,\ n \ge 1 \}$ denote the set of all possible partitions of $[0,T]$, for any $f\colon [0,T] \to \mathbb{R}$ we define

$v_p\big(f; [0,T]\big) = \sup_{\varkappa} \sum_{k=1}^n |f(t_k) - f(t_{k-1})|^p, \qquad V_p\big(f; [0,T]\big) = v_p^{1/p}\big(f; [0,T]\big).$

Recall that $v_p$ is called the $p$-variation of $f$ on $[0,T]$. We denote by $W_p([a,b])$ (resp., $CW_p([a,b])$) the class of functions (resp., continuous functions) on $[a,b]$ with bounded $p$-variation, $p \in (0,\infty)$.
We define $V_p(f) := V_p(f; [0,T])$, which is a seminorm on $W_p([0,T])$; in addition, $V_p(f) = 0$ if and only if $f$ is constant. For each $f$, $V_p(f)$ is a non-increasing function of $p \ge 1$; i.e., if $1 \le q < p$, then $V_p(f) \le V_q(f)$. Thus, $W_q([0,T]) \subset W_p([0,T])$ if $1 \le q < p < \infty$. If $f \in W_p([a,b])$, then $f$ is bounded.
Let $p \ge 1$ and $V_{p,\infty}(f) := V_{p,\infty}(f; [0,T]) = V_p(f) + |f|_\infty$. Then, $V_{p,\infty}(f)$ is a norm, and $W_p([0,T])$ equipped with the $p$-variation norm is a Banach space.
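As an illustration of these definitions (the function names are mine), the $p$-variation sum over one fixed partition — a lower bound for the supremum $v_p$ — already exhibits the monotonicity $V_p(f) \le V_q(f)$ for $1 \le q < p$:

```python
def v_p(vals, p):
    """p-variation sum over the sampled partition only: a lower bound
    for v_p(f; [0, T]), which is a supremum over all partitions."""
    return sum(abs(b - a)**p for a, b in zip(vals, vals[1:]))

def V_p(vals, p):
    """V_p = v_p^{1/p} over the same fixed partition."""
    return v_p(vals, p)**(1.0 / p)

f = [0.0, 1.0, 0.5, 1.5]          # values of f at t_0 < t_1 < t_2 < t_3
print(V_p(f, 1.0), V_p(f, 2.0))   # V_p is non-increasing in p >= 1
```

For a monotone $f$ the partition sum with $p = 1$ telescopes to $|f(T) - f(0)|$, matching the classical total variation.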

3.2. Riemann–Stieltjes Integral

Assume that $f \in W_0^{\alpha,1}(0,T)$ and $h \in W_T^{1-\alpha,\infty}(0,T)$, where $0 < \alpha < 1/2$. The generalized Lebesgue–Stieltjes integral (see [21]) $\int_0^t f\,dh$ exists for all $t \in [0,T]$, and for any $0 \le s < t \le T$,

$\Big| \int_s^t f\,dh \Big| \le \Lambda_\alpha(h) \int_s^t \Big( \frac{|f(r)|}{(r-s)^\alpha} + \alpha \int_s^r \frac{|f(r) - f(y)|}{(r-y)^{\alpha+1}}\,dy \Big)\,dr,$   (11)

where

$\Lambda_\alpha(h) := \frac{1}{\Gamma(1-\alpha)} \sup_{0 < s < t < T} \big| D_{t-}^{1-\alpha} h_{t-}(s) \big| \le \frac{1}{\Gamma(1-\alpha)\,\Gamma(\alpha)} \|h\|_{1-\alpha,\infty,T},$

$D_{t-}^{1-\alpha} h_{t-}(\cdot)$ is the Weyl derivative, and $\Gamma(\cdot)$ is the Gamma function. Furthermore, the integral $\int_0^t f\,dh$ exists if $f \in W_0^{\alpha,\infty}(0,T)$.
If $f \in C^\nu(a,b)$ and $h \in C^\mu(a,b)$ with $\nu + \mu > 1$, then the generalized Lebesgue–Stieltjes integral exists and coincides with the Riemann–Stieltjes integral (see [22]).
From Young's Stieltjes integrability theorem [23] (p. 264), the Riemann–Stieltjes integral $\int_0^t f\,dh$ can be defined for functions having bounded $p$-variation on $[0,T]$ (see [24]).
Let $f \in W_q([a,b])$ and $h \in W_p([a,b])$ with $p > 0$, $q > 0$, $1/p + 1/q > 1$. If $f$ and $h$ have no common discontinuities, then the extended Riemann–Stieltjes integral $\int_a^b f\,dh$ exists and the Love–Young inequality

$\Big| \int_a^b f\,dh - f(y)\big[ h(b) - h(a) \big] \Big| \le C_{p,q}\, V_q\big(f; [a,b]\big)\, V_p\big(h; [a,b]\big)$

holds for any $y \in [a,b]$, where $C_{p,q} = \zeta(p^{-1} + q^{-1})$ and $\zeta(s)$ denotes the Riemann zeta function, i.e., $\zeta(s) = \sum_{n \ge 1} n^{-s}$.
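The Love–Young constant is explicit; here is a sketch that truncates the Dirichlet series (so only a few digits are reliable when $1/p + 1/q$ is close to 1; the function names are mine):

```python
def zeta(s, terms=200000):
    """Truncated Dirichlet series for the Riemann zeta function, s > 1."""
    return sum(n**-s for n in range(1, terms + 1))

def love_young_constant(p, q):
    """C_{p,q} = zeta(1/p + 1/q); the Love-Young inequality needs
    1/p + 1/q > 1, which is exactly the condition for convergence."""
    s = 1.0 / p + 1.0 / q
    if s <= 1.0:
        raise ValueError("Love-Young requires 1/p + 1/q > 1")
    return zeta(s)
```

As $1/p + 1/q \downarrow 1$, the constant blows up, which is why Young's condition in the theorem is strict.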
Proposition 1
(Chain rule [22] (comment on Theorem 4.3.1)). Let $f \in C^\lambda([a,b])$ and $F \in C^1(\mathbb{R} \times [a,b])$ be real-valued functions such that $F(f(\cdot), \cdot) \in C^\mu([a,b])$ with $\lambda + \mu > 1$. Then, for any $y \in (a,b)$,

$F(f(y), y) - F(f(a), a) = \int_a^y F_1(f(x), x)\,df(x) + \int_a^y F_2(f(x), x)\,dx,$
where F 1 and F 2 are the partial derivatives of F with respect to the first and second variables, respectively.

3.3. Estimation of the Generalized Lebesgue–Stieltjes Integrals

From now on, we fix $0 < \alpha < 1/2$. For any function $u \in W_0^{\alpha,\infty}(0,T)$, we define

$F_t(f)(u) = \int_0^t f(s, u_s)\,ds,$

where $f$ satisfies Assumptions $(A_2)$ (i), (ii).
Proposition 2
(See Proposition 4.4 [21]). If $u \in W_0^{\alpha,\infty}(0,T)$, then $F(f)(u) \in C^{1-\alpha}(0,T)$.
If $u, v \in W_0^{\alpha,\infty}(0,T)$ are such that $|u|_\infty \le N$ and $|v|_\infty \le N$, then

$\| F(f)(u) - F(f)(v) \|_{\alpha,\lambda} \le \frac{c_N}{\lambda^{1-\alpha}}\, \| u - v \|_{\alpha,\lambda}$

for all $\lambda \ge 1$, where $c_N = C_{\alpha,T} L_N \Gamma(1-\alpha)$, $C_{\alpha,T} = T^\alpha + \alpha^{-1}$, and $L_N$ is taken from $(A_2)$.
Given two functions $h \in W_T^{1-\alpha,\infty}(0,T)$ and $u \in W_0^{\alpha,\infty}(0,T)$, we denote

$G_t(u) = \int_0^t u_s\,dh_s, \qquad G_t(g)(u) = \int_0^t g(s, u_s)\,dh_s,$

where $g$ satisfies Assumption $(A_1)$ with constant $\beta > \alpha$.
Proposition 3
(See Proposition 4.1 [21]). Let $u \in W_0^{\alpha,1}(0,T)$; then, the following estimates hold for $s < t$:

$| G_t(u) - G_s(u) | \le \Lambda_\alpha(h) \int_s^t \Big( \frac{|u(r)|}{(r-s)^\alpha} + \alpha \int_s^r \frac{|u(r) - u(v)|}{(r-v)^{1+\alpha}}\,dv \Big)\,dr$

and

$| G_t(u) | + \int_0^t \frac{| G_t(u) - G_s(u) |}{(t-s)^{1+\alpha}}\,ds \le \Lambda_\alpha(h) \Big[ \int_0^t \Big( \frac{c_\alpha^{(1)}}{(t-r)^{2\alpha}} + \frac{1}{r^\alpha} \Big) |u(r)|\,dr + \int_0^t \int_0^r \frac{|u(r) - u(v)|}{(r-v)^{1+\alpha}} \big( (t-v)^{-\alpha} + \alpha \big)\,dv\,dr \Big],$

where $c_\alpha^{(1)} = B(2\alpha, 1-\alpha)$ and $B(\cdot,\cdot)$ is the Beta function.
Proposition 4
(See Proposition 4.2 [21]). If $u \in W_0^{\alpha,\infty}(0,T)$, then $G(g)(u) \in C^{1-\alpha}(0,T)$.
If $u, v \in W_0^{\alpha,\infty}(0,T)$ are such that $|u|_\infty \le N$ and $|v|_\infty \le N$, then

$\| G(g)(u) - G(g)(v) \|_{\alpha,\lambda} \le \frac{\Lambda_\alpha(h)\, C_N^{(1)}}{\lambda^{1-2\alpha}} \big( 1 + \Delta(u) + \Delta(v) \big)\, \| u - v \|_{\alpha,\lambda}$

for all $\lambda \ge 1$, where

$\Delta(u) = \sup_{r \in [0,T]} \int_0^r \frac{|u_r - u_s|^\delta}{|r-s|^{1+\alpha}}\,ds$

and

$C_N^{(1)} = c_{\alpha,T}^{(1)}\, c_\alpha\, \frac{T^{\beta-\alpha}}{\beta-\alpha}\, (M_0 + M_N), \qquad c_{\alpha,T}^{(1)} = \max\{c_\alpha, 1\} + T^\alpha, \qquad c_\alpha = \int_0^\infty e^{-y} y^{-2\alpha}\,dy + \sup_{z>0} \int_0^z e^{-y} (z-y)^{-\alpha}\,dy, \qquad c_\alpha \le \frac{1}{1-2\alpha} + 4.$
Remark 3.
If $u \in C^{1-\alpha}(0,T)$ and $\frac{\delta}{1+\delta} > \alpha$, then

$\Delta(u) \le |u|_{1-\alpha}^\delta \sup_{r \in [0,T]} \int_0^r \frac{|r-s|^{(1-\alpha)\delta}}{|r-s|^{1+\alpha}}\,ds = \frac{T^{\delta - \alpha(1+\delta)}}{\delta - \alpha(1+\delta)}\, |u|_{1-\alpha}^\delta.$

4. The Implicit Euler Approximation and Auxiliary Results

Let $1 - H < \alpha < 1/2$. Recall that almost all trajectories of fBm $B^H$ belong to $W_T^{1-\alpha,\infty}(0,T)$. Instead of considering Equation (1), we consider the deterministic differential equation on $\mathbb{R}$:

$x_t = x_0 + \Phi(x_t) - \Phi(x_0) + \int_0^t f(s, x_s)\,ds + \int_0^t g(s, x_s)\,dh_s, \quad t \in [0,T],$   (17)

where $x_0 \in \mathbb{R}$, $h \in W_T^{1-\alpha,\infty}$, and $h_0 = 0$.
Theorem 4.
Suppose that the functions $f(t,x)$ and $g(t,x)$ satisfy Assumptions $(A_1)$ and $(A_2)$ with $\frac{1}{H} - 1 < \delta, \rho \le 1$ and $1 - H < \beta \le 1$. If Assumption $(A_3)$ is satisfied, then Equation (17) has a unique solution $x \in C^{1-\alpha}(0,T)$, where $\alpha \in (1-H, \alpha_0)$ and $\alpha_0$ is defined in (5).
Proof. 
The theorem statement follows directly from Theorem 1. Set α = 1 γ . It is sufficient to note that α ( 1 H , α 0 ) if γ ( γ 0 , H ) . □
We define the implicit Euler approximations for Equation (17) as

$y^{E,n}(t^n_{k+1}) - \Phi\big(y^{E,n}(t^n_{k+1})\big) = y^{E,n}(t^n_k) - \Phi\big(y^{E,n}(t^n_k)\big) + f\big(t^n_k, y^{E,n}(t^n_k)\big) \Delta_n + g\big(t^n_k, y^{E,n}(t^n_k)\big) \big[ h(t^n_{k+1}) - h(t^n_k) \big], \quad y^{E,n}(0) = x_0,$   (18)

and their continuous interpolations as

$y^{E,n}_t - \Phi\big(y^{E,n}_t\big) = x_0 - \Phi(x_0) + \int_0^t f\big(\tau^n_s, y^{E,n}(\tau^n_s)\big)\,ds + \int_0^t g\big(\tau^n_s, y^{E,n}(\tau^n_s)\big)\,dh_s,$   (19)

where $\tau^n_s = t^n_{k-1}$ and $y^{E,n}(\tau^n_s) = y^{E,n}(t^n_{k-1})$ if $s \in [t^n_{k-1}, t^n_k)$, $1 \le k \le n$, and where $t^n_k \in \pi_n$. For abbreviation, let $y^n$ stand for $y^{E,n}$.
We can rewrite the implicit Euler approximations (18) and (19) in a more compact way:

$D\big(y^n(t^n_{k+1})\big) = D\big(y^n(t^n_k)\big) + f\big(t^n_k, y^n(t^n_k)\big) \Delta_n + g\big(t^n_k, y^n(t^n_k)\big) \big[ h(t^n_{k+1}) - h(t^n_k) \big]$   (20)

with $y^n(0) = x_0$, and

$D\big(y^n_t\big) = D(x_0) + F_t(f, \tau_n)(y^n) + G_t(g, \tau_n)(y^n), \quad y^n(0) = x_0,$   (21)

where

$F_t(f, \tau_n)(y^n) = \int_0^t f\big(\tau^n_s, y^n(\tau^n_s)\big)\,ds, \qquad G_t(g, \tau_n)(y^n) = \int_0^t g\big(\tau^n_s, y^n(\tau^n_s)\big)\,dh_s.$

The implicit Euler approximation scheme (20) is well defined: from the recursive expression (20) we compute $D\big(y^n(t^n_{k+1})\big)$, and the properties of the function $D$ yield a single value of $y^n(t^n_{k+1})$. Moreover, because $D^{-1}$ and $D(y^n_t)$ are continuous functions, $y^n$ is a continuous function.
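Numerically, one step of the compact scheme (20) is a scalar root-find: since $D(x) = x - \Phi(x)$ is strictly increasing with slope at least $1 - c$ under $(A_3)$, bisection over an expanding bracket recovers the unique $y^n(t^n_{k+1})$. A minimal sketch with hypothetical coefficients $\Phi$, $f$, $g$ (illustrative only, not taken from the paper):

```python
import math

def implicit_euler_step(y_prev, t_prev, dt, dh, f, g, Phi, tol=1e-12):
    """Solve D(y_next) = D(y_prev) + f(t_prev, y_prev)*dt + g(t_prev, y_prev)*dh
    with D(x) = x - Phi(x), strictly increasing under (A3)."""
    D = lambda x: x - Phi(x)
    v = D(y_prev) + f(t_prev, y_prev) * dt + g(t_prev, y_prev) * dh
    lo, hi = y_prev - 1.0, y_prev + 1.0
    while D(lo) > v:              # expand the bracket if needed
        lo -= 1.0
    while D(hi) < v:
        hi += 1.0
    while hi - lo > tol:          # bisection: D is monotone, root is unique
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if D(mid) < v else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical coefficients: Phi'(x) = 0.5*cos(x), so c = 0.5 < 1 in (A3)
Phi = lambda x: 0.5 * math.sin(x)
f = lambda t, x: -x
g = lambda t, x: 0.3
y1 = implicit_euler_step(1.0, 0.0, dt=0.01, dh=0.05, f=f, g=g, Phi=Phi)
```

Because the slope of $D$ is bounded below by $1 - c$, a bisection residual of $\varepsilon$ in $D$ translates into at most $\varepsilon/(1-c)$ in $y$.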
The following properties hold for the implicit Euler approximation:
Proposition 5
(See Propositions 4 and 5 [8]). Under the assumptions of Theorem 4, we obtain

$\sup_n \| y^n \|_{\alpha,\infty} < \infty \quad \text{and} \quad \sup_n \| y^n \|_{1-\alpha} < \infty.$

5. Rate of Convergence of the Implicit Euler Approximation

Lemma 1
(See Lemma 7.1 in [21]; see also Lemma 3 [8]). Let $\Phi$ be a function satisfying Assumption $(A_3)$. Then, for all $N > 0$ and $|x_r|, |x_v|, |\tilde x_r|, |\tilde x_v| \le N$:

$\big| \big( \Phi(x_r) - \Phi(x_v) \big) - \big( \Phi(\tilde x_r) - \Phi(\tilde x_v) \big) \big| \le c\, \big| (x_r - \tilde x_r) - (x_v - \tilde x_v) \big| + K_N\, |x_v - \tilde x_v| \cdot \big[ |x_r - x_v|^\rho + |\tilde x_r - \tilde x_v|^\rho \big].$
Theorem 5.
Under the hypotheses of Theorem 1 with $1 - H < \beta \le 1$ replaced by $H \le \beta \le 1$, we have

$\| x - y^{E,n} \|_{1-\gamma,\infty} = O\big( \Delta_n^{2\gamma-1} \big),$
where γ ( γ 0 , H ) and where x is the solution of Equation (17).
Proof. 
We denote $\alpha = 1 - \gamma$. Because $x$ is an element of the space $C^{1-\alpha}(0,T)$ and because $\sup_n \|y^n\|_{1-\alpha} < \infty$, there exists $N$ such that $\|x\|_{1-\alpha} \le N$ and $\|y^n\|_{1-\alpha} \le N$ for all $n$. Furthermore, $x, y^n \in W_0^{\alpha,\infty}(0,T)$; $F(f)(x), F(f)(y^n), G(g)(x), G(g)(y^n) \in W_0^{\alpha,\infty}(0,T)$ (see Propositions 2 and 4); and $\Phi(x), \Phi(y^n) \in W_0^{\alpha,\infty}(0,T)$. From Lemma 1 in [8], we have $F(f,\tau_n)(y^n), G(g,\tau_n)(y^n) \in W_0^{\alpha,\infty}(0,T)$ for any fixed $n \ge 1$.
Recall that elements of the space $W_0^{\alpha,\infty}(0,T)$ have a finite norm $\|\cdot\|_{\alpha,\lambda}$ with $\lambda \ge 0$. Thus,

$\| x - y^n \|_{\alpha,\lambda} \le \| \Phi(x) - \Phi(y^n) \|_{\alpha,\lambda} + \| F(f)(x) - F(f)(y^n) \|_{\alpha,\lambda} + \| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{\alpha,\lambda} + \| G(g)(x) - G(g)(y^n) \|_{\alpha,\lambda} + \| G(g)(y^n) - G(g,\tau_n)(y^n) \|_{\alpha,\lambda}.$   (23)
Now, we can evaluate the terms on the right-hand side of (23). If $\rho > \frac{\alpha}{1-\alpha}$, then the estimate

$\| \Phi(x) - \Phi(y^n) \|_{\alpha,\lambda} \le c\, \| x - y^n \|_{\alpha,\lambda} + 2 N K_N\, \lambda^{\alpha - \rho(1-\alpha)}\, \Gamma\big( \rho(1-\alpha) - \alpha \big)\, \| x - y^n \|_{\alpha,\lambda}$

follows from Lemma 1 and the arguments used to prove the uniqueness of the solution in [8]. The estimates of the second and fourth terms follow from Propositions 2 and 4 if $\delta > \alpha(1+\delta)$. Because $\gamma \in (\gamma_0, H)$, we have $\delta > \alpha(1+\delta)$ and $\rho(1-\alpha) > \alpha$, so our restrictions on $\delta$ and $\rho$ are satisfied. Thus,

$\| x - y^n \|_{\alpha,\lambda} \le c\, \| x - y^n \|_{\alpha,\lambda} + 2 N K_N\, \lambda^{\alpha - \rho(1-\alpha)}\, \Gamma\big( \rho(1-\alpha) - \alpha \big)\, \| x - y^n \|_{\alpha,\lambda} + \frac{c_N}{\lambda^{1-\alpha}}\, \| x - y^n \|_{\alpha,\lambda} + \frac{\Lambda_\alpha(h)\, C_N^{(1)}}{\lambda^{1-2\alpha}} \big( 1 + C_N^{(2)} \big)\, \| x - y^n \|_{\alpha,\lambda} + \| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{\alpha,\lambda} + \| G(g)(y^n) - G(g,\tau_n)(y^n) \|_{\alpha,\lambda},$

where

$C_N^{(2)} := \frac{T^{\delta - \alpha(1+\delta)}}{\delta - \alpha(1+\delta)} \Big( \sup_n |y^n|_{1-\alpha}^\delta + |x|_{1-\alpha}^\delta \Big) \le \frac{2 N^\delta\, T^{\delta - \alpha(1+\delta)}}{\delta - \alpha(1+\delta)}.$

For any $c < b < 1$, we can choose a sufficiently large $\lambda_1$ such that

$c + 2 N K_N\, \lambda_1^{\alpha - \rho(1-\alpha)}\, \Gamma\big( \rho(1-\alpha) - \alpha \big) + \frac{c_N}{\lambda_1^{1-\alpha}} + \frac{\Lambda_\alpha(h)\, C_N^{(1)}}{\lambda_1^{1-2\alpha}} \big( 1 + C_N^{(2)} \big) < b.$
Thus,

$\| x - y^n \|_{\alpha,\lambda_1} \le (1-b)^{-1} \Big( \| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{\alpha,\lambda_1} + \| G(g)(y^n) - G(g,\tau_n)(y^n) \|_{\alpha,\lambda_1} \Big).$

To complete the estimate of $\| x - y^n \|_{\alpha,\lambda_1}$, it remains to estimate the right-hand side of the above inequality. From (9), it follows that instead of the norm $\|\cdot\|_{\alpha,\lambda_1}$ it is sufficient to estimate the norm $\|\cdot\|_{1-\alpha}$.
We first estimate the norm $\| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{1-\alpha}$. Combining Assumptions $(A_1)$–$(A_2)$, we obtain

$\big| g\big(\tau^n_r, y^n(\tau^n_r)\big) \big| \le \big| g\big(\tau^n_r, y^n(\tau^n_r)\big) - g(0,0) \big| + |g(0,0)| \le \big| g\big(\tau^n_r, y^n(\tau^n_r)\big) - g\big(0, y^n(\tau^n_r)\big) \big| + \big| g\big(0, y^n(\tau^n_r)\big) - g(0,0) \big| + |g(0,0)| \le M_0 T^\beta + M_0 |y^n|_{\infty,r} + |g(0,0)|$

and

$\big| y^n_r - y^n(\tau^n_r) \big| \le (1-c)^{-1} \big| D(y^n_r) - D\big(y^n(\tau^n_r)\big) \big| = (1-c)^{-1} \big| f\big(\tau^n_r, y^n(\tau^n_r)\big) (r - \tau^n_r) + g\big(\tau^n_r, y^n(\tau^n_r)\big) \big( h(r) - h(\tau^n_r) \big) \big| \le \lambda(\alpha) \big( 1 + |y^n|_\infty \big) (r - \tau^n_r)^{1-\alpha},$   (27)

where

$\lambda(\alpha) = (1-c)^{-1} \max\big\{ L_0 T^\alpha + \big( M_0 T^\beta + |g(0,0)| \big) |h|_{1-\alpha},\ L_0 T^\alpha + M_0 |h|_{1-\alpha} \big\}.$
Thus,

$\big| \big( F_t(f)(y^n) - F_t(f,\tau_n)(y^n) \big) - \big( F_s(f)(y^n) - F_s(f,\tau_n)(y^n) \big) \big| \le \int_s^t \big| f(u, y^n_u) - f\big(\tau^n_u, y^n(\tau^n_u)\big) \big|\,du \le L_0 \int_s^t |u - \tau^n_u|^\beta\,du + L_N \int_s^t \big| y^n_u - y^n(\tau^n_u) \big|\,du \le L_0 \Delta_n^\beta (t-s) + L_N \lambda(\alpha) \big( 1 + |y^n|_\infty \big) \Delta_n^{1-\alpha} (t-s).$

Because $\beta \ge H$ and $\|y^n\|_{1-\alpha} \le N$, it is the case that $\beta > \gamma = 1-\alpha$ and

$\| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{1-\alpha} = O\big( \Delta_n^{1-\alpha} \big).$

Moreover, from (9) and the inequality $\|\cdot\|_{\alpha,\infty} \le e^{\lambda T} \|\cdot\|_{\alpha,\lambda}$, which is valid for all $\lambda \ge 0$, we obtain

$\| F(f)(y^n) - F(f,\tau_n)(y^n) \|_{\alpha,\infty} = O\big( \Delta_n^{1-\alpha} \big).$
Now, we turn to the estimate of the norm $\| G(g)(y^n) - G(g,\tau_n)(y^n) \|_{1-\alpha}$. Because $y^n \in W_0^{\alpha,\infty}(0,T)$, we have $g(\cdot, y^n) \in W_0^{\alpha,\infty}(0,T)$ (see Proposition 4.2 in [21]). From Lemma 2 in [8], it follows that $g\big(\tau_n, y^n(\tau_n)\big) \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$ if $y^n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$. Note that $W_0^{\alpha,\infty}(0,T) \subset W_0^{\alpha,1}(0,T)$.
From the above and Proposition 3, it follows that

$\big| \big[ G_t(g)(y^n) - G_t(g,\tau_n)(y^n) \big] - \big[ G_s(g)(y^n) - G_s(g,\tau_n)(y^n) \big] \big| = \Big| \int_s^t \big[ g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big]\,dh_r \Big| \le \Lambda_\alpha(h) \Big( \int_s^t \frac{\big| g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big|}{(r-s)^\alpha}\,dr + \alpha \int_s^t \int_s^r \frac{\big| \big[ g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big] - \big[ g(u, y^n_u) - g\big(\tau^n_u, y^n(\tau^n_u)\big) \big] \big|}{|r-u|^{\alpha+1}}\,du\,dr \Big).$   (29)
Applying (27), we have

$\big| g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big| \le M_0 (r - \tau^n_r)^\beta + M_0 \big| y^n_r - y^n(\tau^n_r) \big| \le M_0 \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) (r - \tau^n_r)^{1-\alpha}$

and

$\int_s^t \frac{\big| g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big|}{(r-s)^\alpha}\,dr \le 2 M_0 \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) (t-s)^{1-\alpha} \Delta_n^{1-\alpha}.$
To estimate the second term in (29), we note that

$\big| \big[ g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big] - \big[ g(u, y^n_u) - g\big(\tau^n_u, y^n(\tau^n_u)\big) \big] \big| \le M_0 \big[ (r - \tau^n_r)^\beta + (u - \tau^n_u)^\beta \big] + M_0 \lambda(\alpha)(1 + |y^n|_\infty) \big[ (r - \tau^n_r)^{1-\alpha} + (u - \tau^n_u)^{1-\alpha} \big]$

and (see Proposition 5 in [8])

$\int_s^t (r - \tau^n_r)^{-\alpha}\,dr \le (1-\alpha)^{-1} \big[ 2 (t-s)^{1-\alpha} + (t-s)\, \Delta_n^{-\alpha} \big] \le 2 \big[ 2 + (T \vee 1)^\alpha \big] (t-s)^{1-\alpha} \Delta_n^{-\alpha}.$

Consequently,

$\int_s^t \int_s^{\tau^n_r} \frac{\big| \big[ g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big] - \big[ g(u, y^n_u) - g\big(\tau^n_u, y^n(\tau^n_u)\big) \big] \big|}{|r-u|^{\alpha+1}}\,du\,dr \le M_0 \int_s^t \int_s^{\tau^n_r} \frac{(r - \tau^n_r)^\beta + (u - \tau^n_u)^\beta}{|r-u|^{\alpha+1}}\,du\,dr + M_0 \lambda(\alpha)(1 + |y^n|_\infty) \int_s^t \int_s^{\tau^n_r} \frac{(r - \tau^n_r)^{1-\alpha} + (u - \tau^n_u)^{1-\alpha}}{|r-u|^{\alpha+1}}\,du\,dr \le 4 M_0\, \alpha^{-1} \big[ 2 + (T \vee 1)^\alpha \big] \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) \Delta_n^{1-2\alpha} (t-s)^{1-\alpha}.$
It is easy to check that

$\int_s^t \int_{\tau^n_r}^r \frac{\big| \big[ g(r, y^n_r) - g\big(\tau^n_r, y^n(\tau^n_r)\big) \big] - \big[ g(u, y^n_u) - g\big(\tau^n_u, y^n(\tau^n_u)\big) \big] \big|}{|r-u|^{\alpha+1}}\,du\,dr = \int_s^t \int_{\tau^n_r}^r \frac{| g(r, y^n_r) - g(u, y^n_u) |}{|r-u|^{\alpha+1}}\,du\,dr,$

as $\tau^n_r = \tau^n_u$ for $\tau^n_r \le u < r$, and

$| g(r, y^n_r) - g(u, y^n_u) | \le M_0 \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) (r-u)^{1-\alpha}.$

Thus,

$\int_s^t \int_{\tau^n_r}^r \frac{| g(r, y^n_r) - g(u, y^n_u) |}{|r-u|^{\alpha+1}}\,du\,dr \le M_0 \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) \int_s^t \int_{\tau^n_r}^r (r-u)^{-2\alpha}\,du\,dr \le M_0 \big( 1 + \lambda(\alpha)(1 + |y^n|_\infty) \big) \Delta_n^{1-2\alpha} (t-s).$
Therefore,

$\| G(g)(y^n) - G(g,\tau_n)(y^n) \|_{1-\alpha} = O\big( \Delta_n^{1-2\alpha} \big)$

and

$\| x - y^n \|_{1-\gamma,\infty} = O\big( \Delta_n^{2\gamma-1} \big),$

as $2\gamma - 1 = 1 - 2\alpha$. □

6. The Implicit Milstein-Type Approximation and Auxiliary Results

For a given partition $\pi_n$, we define the implicit Milstein-type approximations for the time-homogeneous equation

$x_t = x_0 + \Phi(x_t) - \Phi(x_0) + \int_0^t f(x_s)\,ds + \int_0^t g(x_s)\,dh_s, \quad t \in [0,T],$   (33)

as

$y(t^n_{k+1}) = y(t^n_k) + f\big(y(t^n_k)\big) \Delta_n + g\big(y(t^n_k)\big) \big[ h(t^n_{k+1}) - h(t^n_k) \big] + gg'\big(y(t^n_k)\big) \int_{t^n_k}^{t^n_{k+1}} \big( h(s) - h(t^n_k) \big)\,dh_s,$   (34)

where $x_0 \in \mathbb{R}$, $h \in W_T^{1-\alpha,\infty}$, $h_0 = 0$, and $1 - H < \alpha < 1/2$.
Theorem 6.
Let $1 - H < \alpha < \hat\alpha_0$ and let the functions $f(x)$ and $g(x)$ satisfy Assumptions $(B)$ and $(A_3)$, where $\hat\alpha_0$ is defined in (7). Then, Equation (33) has a unique solution $x \in C^{1-\alpha}(0,T)$, where $\alpha \in (1-H, \hat\alpha_0)$.
Proof. 
From Assumption $(B)$, we have $\delta = 1$. Because $\alpha \in (1-H, \hat\alpha_0)$, it follows that $\frac{\rho}{1+\rho} > 1 - H$; consequently, $\rho > \frac{1}{H} - 1$. Therefore, the conditions of Theorem 4 are satisfied and the theorem's statement holds. □
Applying the chain rule, we can rewrite (34) as

$y^n(t^n_{k+1}) = y^n(t^n_k) + f\big(y^n(t^n_k)\big) \Delta_n + g\big(y^n(t^n_k)\big) \big[ h(t^n_{k+1}) - h(t^n_k) \big] + \frac12\, gg'\big(y^n(t^n_k)\big) \big[ h(t^n_{k+1}) - h(t^n_k) \big]^2.$   (35)

The continuous-time interpolation of the Milstein scheme is defined by

$y^n_t = x_0 + F_t(f, \tau_n)(y^n) + G_t(g, \tau_n)(y^n) + G_t(gg', \tau_n)(y^n),$

where

$G_t(gg', \tau_n)(y^n) = \int_0^t \int_{\tau^n_s}^s gg'\big(y^n(\tau^n_s)\big)\,dh_u\,dh_s.$

Because $h \in W_T^{1-\alpha,\infty}(0,T) \subset C^{1-\alpha}(0,T)$, we have $h \in CW_p([0,T])$ and

$V_p\big(h; [s,t]\big) \le |h|_{1-\alpha} (t-s)^{1-\alpha},$   (36)

where $p = (1-\alpha)^{-1}$. From now on, we assume that $p = (1-\alpha)^{-1}$.
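For a scalar driver, the explicit one-step form of the Milstein-type scheme is a single line; here is a sketch with $\Phi \equiv 0$, so no implicit solve is needed, and with hypothetical coefficients (they satisfy the Lipschitz part of $(B)$ only locally):

```python
def milstein_step(y, dt, dh, f, g, dg):
    """One explicit Milstein-type step: the iterated integral of
    (h(s) - h(t_k)) dh(s) over one subinterval collapses to (dh)^2 / 2
    by the chain rule, giving the correction term g g' (dh)^2 / 2."""
    return y + f(y) * dt + g(y) * dh + 0.5 * g(y) * dg(y) * dh**2

# hypothetical linear coefficients (illustrative only)
f = lambda x: -x
g = lambda x: 0.1 * x
dg = lambda x: 0.1
y1 = milstein_step(1.0, dt=0.01, dh=0.05, f=f, g=g, dg=dg)
```

With a nontrivial $\Phi$, the same right-hand side would feed the monotone root-find for $D(x) = x - \Phi(x)$, exactly as in the implicit Euler step.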
The method of proving the convergence of the implicit Milstein approximation to the solution of Equation (1) repeats the idea of the proof for the implicit Euler approximation.
Lemma 2.
Let Assumption $(A_3)$ be satisfied. For any fixed $n \ge 1$, the functions $y^n$, $F(f,\tau_n)(y^n)$, $G(g,\tau_n)(y^n)$, and $G(gg',\tau_n)(y^n)$ belong to $C^{1-\alpha}(0,T)$.
Proof. 
From Lemma 1 in [8], we have $F(f,\tau_n)(y^n), G(g,\tau_n)(y^n) \in C^{1-\alpha}(0,T)$; indeed, the proof does not change when the Euler approximation is replaced with the Milstein one. Now, we can consider $G(gg',\tau_n)(y^n)$.
We first note that the function $gg'\big(y^n(\tau_n)\big)$ has bounded variation on $[0,T]$ for any fixed $n$; thus, it is bounded and has bounded $p$-variation. Because $h, h(\tau_n) \in W_p([0,T])$, it is the case that

$gg'\big(y^n(\tau_n)\big) \big[ h(\cdot) - h(\tau_n) \big] \in W_p([0,T]).$   (37)

To simplify the notation, we write $\ell_n(s)$ instead of $gg'\big(y^n(\tau^n_s)\big) \big[ h(s) - h(\tau^n_s) \big]$.
Now, it remains to prove that $G(gg',\tau_n)(y^n) \in C^{1-\alpha}(0,T)$ for fixed $n \ge 1$. Assume that $s \in [t^n_k, t^n_{k+1})$ for some $0 \le k \le n-1$ and $t \le t^n_{k+1} \le T$. Then,

$\Big| \int_s^t \ell_n(u)\,dh_u \Big| = \Big| gg'\big(y^n(t^n_k)\big) \int_s^t \big[ h(u) - h(\tau^n_u) \big]\,dh_u \Big| \le \big| gg'\big(y^n(\tau_n)\big) \big|_\infty\, |h|^2_{1-\alpha}\, (t-s)^{1-\alpha},$   (38)

as the chain rule implies the inequality

$\Big| \int_s^t \big( h(u) - h(\tau^n_u) \big)\,dh_u \Big| = \big| 2^{-1} \big( h^2(t) - h^2(s) \big) - h(\tau^n_s) \big( h(t) - h(s) \big) \big| = 2^{-1} \big| h(t) - h(s) \big| \cdot \big| \big( h(t) - h(\tau^n_s) \big) + \big( h(s) - h(\tau^n_s) \big) \big| \le |h|^2_{1-\alpha}\, \Delta_n^{1-\alpha} (t-s)^{1-\alpha}.$

If $t > t^n_{k+1}$, then from (38), the Love–Young inequality, and (36), it follows that

$\big| G_t(gg',\tau_n)(y^n) - G_s(gg',\tau_n)(y^n) \big| \le \Big| \int_s^{t^n_{k+1}} \ell_n(u)\,dh_u \Big| + \Big| \int_{t^n_{k+1}}^t \ell_n(u)\,dh_u \Big| \le \Big( \big| gg'\big(y^n(\tau_n)\big) \big|_\infty\, |h|_{1-\alpha} + C_{p,p}\, V_{p,\infty}\big( \ell_n; [0,T] \big) \Big) |h|_{1-\alpha} (t-s)^{1-\alpha}.$

The boundedness of the last term in the above inequality follows from (37). Consequently, $G(gg',\tau_n)(y^n) \in C^{1-\alpha}(0,T)$.
From Assumption $(A_3)$, it follows that

$\big| y^n(t) - y^n(s) \big| \le (1-c)^{-1} \Big( \big| F_t(f,\tau_n)(y^n) - F_s(f,\tau_n)(y^n) \big| + \big| G_t(g,\tau_n)(y^n) - G_s(g,\tau_n)(y^n) \big| + \big| G_t(gg',\tau_n)(y^n) - G_s(gg',\tau_n)(y^n) \big| \Big).$

Thus, $y^n \in C^{1-\alpha}(0,T)$ for any fixed $n \ge 1$. □
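The chain-rule identity used in the proof — $\int_s^t (h(u) - h(\tau))\,dh(u) = \frac12(h^2(t) - h^2(s)) - h(\tau)(h(t) - h(s))$ when $\tau_u \equiv \tau$ is frozen on $[s,t]$ — can be checked numerically for a smooth $h$ by left-point Riemann–Stieltjes sums (the helper is mine):

```python
import numpy as np

def stieltjes(f, h, s, t, m=100000):
    """Left-point Riemann-Stieltjes sum approximating int_s^t f(u) dh(u)."""
    u = np.linspace(s, t, m + 1)
    return float(np.sum(f(u[:-1]) * np.diff(h(u))))

h = lambda u: u**2                    # a smooth test path
tau, s, t = 0.2, 0.2, 0.5             # tau frozen on [s, t]
lhs = stieltjes(lambda u: h(u) - h(tau), h, s, t)
rhs = 0.5 * (h(t)**2 - h(s)**2) - h(tau) * (h(t) - h(s))
```

For a rough path the same identity holds pathwise once $h$ has finite $p$-variation with $p < 2$, which is exactly the regime $p = (1-\alpha)^{-1}$, $\alpha < 1/2$, used above.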
The next lemma allows us to apply the estimate (11) to the integral $G(gg',\tau_n)(y^n)$.
Lemma 3.
Let Assumptions $(B)$ and $(A_3)$ be satisfied. If $y^n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$, then $g\big(y^n(\tau_n)\big), \ell_n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$.
Proof. 
It follows from Lemma 2 that $y^n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$ and that there exists a constant $C_n$ depending on $n$ such that

$\big| y^n(\tau^n_s) - y^n(\tau^n_u) \big| \le C_n\, \big| \tau^n_s - \tau^n_u \big|^{1-\alpha}.$
First, we note that $g\big(y^n(\tau_n)\big) \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$ (see Lemma 2 in [8]).
Now, we prove that $\ell_n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$. It is clear that

$|\ell_n(s)| \le |h|_{1-\alpha}\, \big| gg'\big(y^n(\tau^n_s)\big) \big| \le M_0 |h|_{1-\alpha} \big( 1 + |y^n(\tau^n_s)| \big)$   (40)

and

$|\ell_n(s) - \ell_n(u)| \le \big| gg'\big(y^n(\tau^n_s)\big) - gg'\big(y^n(\tau^n_u)\big) \big| \cdot \big| h(s) - h(\tau^n_s) \big| + \big| \big[ h(s) - h(\tau^n_s) \big] - \big[ h(u) - h(\tau^n_u) \big] \big| \cdot \big| gg'\big(y^n(\tau^n_u)\big) \big| \le M_0 |h|_{1-\alpha} (s - \tau^n_s)^{1-\alpha} \big| y^n(\tau^n_s) - y^n(\tau^n_u) \big| + |h|_{1-\alpha} \big( |s-u|^{1-\alpha} + |\tau^n_s - \tau^n_u|^{1-\alpha} \big) \big| gg'\big(y^n(\tau^n_u)\big) \big|.$   (41)
Thus,

$\int_0^t \frac{|\ell_n(s)|}{s^\alpha}\,ds \le |h|_{1-\alpha}\, \big| gg'\big(y^n(\tau_n)\big) \big|_\infty\, (1-\alpha)^{-1}\, t^{1-\alpha}$

and

$\int_0^t \int_0^s \frac{|\ell_n(s) - \ell_n(u)|}{(s-u)^{1+\alpha}}\,du\,ds \le M_0 |h|_{1-\alpha} \int_0^t \int_0^{\tau^n_s} \frac{\big| y^n(\tau^n_s) - y^n(\tau^n_u) \big|}{(s-u)^{1+\alpha}}\,du\,ds + |h|_{1-\alpha}\, \big| gg'\big(y^n(\tau_n)\big) \big|_\infty \int_0^t \int_0^s \Big( \frac{1}{(s-u)^{2\alpha}} + \frac{|\tau^n_s - \tau^n_u|^{1-\alpha}}{(s-u)^{1+\alpha}} \Big)\,du\,ds \le |h|_{1-\alpha} \Big( M_0 C_n + 2 \big| gg'\big(y^n(\tau_n)\big) \big|_\infty \Big) \int_0^t \int_0^s \frac{du\,ds}{(s-u)^{2\alpha}} + |h|_{1-\alpha} \Big( M_0 C_n + \big| gg'\big(y^n(\tau_n)\big) \big|_\infty \Big) \Delta_n^{1-\alpha} \int_0^t \int_0^{\tau^n_s} \frac{du\,ds}{(s-u)^{1+\alpha}} \le |h|_{1-\alpha} \Big( M_0 C_n + 2 \big| gg'\big(y^n(\tau_n)\big) \big|_\infty \Big) (1-2\alpha)^{-1} t^{2-2\alpha} + |h|_{1-\alpha} \Big( M_0 C_n + \big| gg'\big(y^n(\tau_n)\big) \big|_\infty \Big)\, 2\alpha^{-1}\, t,$

as

$|\tau^n_s - \tau^n_u| = \big| (s-u) + (u - \tau^n_u) - (s - \tau^n_s) \big| \le (s-u) + (u - \tau^n_u)$

and

$\int_0^t (s - \tau^n_s)^{-\alpha}\,ds \le (1-\alpha)^{-1}\, t\, \Delta_n^{-\alpha}.$

The above inequality was proved in Lemma 2 in [8].
Consequently, it follows that $\ell_n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$. □

Boundedness of the Norm $\|y^n\|_{\alpha,\infty}$

Proposition 6.
Let $1 - H < \alpha < \hat\alpha_0$ and let the functions $f(x)$ and $g(x)$ satisfy Assumptions $(B)$ and $(A_3)$. Then, there exists a constant $C$ such that

$\sup_n \| y^n \|_{\alpha,\infty} \le C.$
Proof. 
Set
u , α , t : = sup s [ 0 , t ] u α , s , u α , s : = | u ( s ) | + | u | α , s , | u | α , s : = 0 s | u ( s ) u ( r ) | ( s r ) 1 + α d r , u , t : = sup s [ 0 , t ] | u ( s ) | .
It is easy to check (see Proposition 4 in [8]) that
$$\|y^n\|_{\alpha,t} \le |x_0| + (1-c)^{-1}\Big(\|F(f,\tau^n)(y^n)\|_{\alpha,t} + \|G(g,\tau^n)(y^n)\|_{\alpha,t} + \|G(g'g,\tau^n)(y^n)\|_{\alpha,t}\Big),$$
where the constant c is taken from Assumption ( A 3 ) . From Lemma 2, we know that the norms mentioned above are finite for any fixed n 1 .
To obtain the statement of the proposition, we repeat the proof of Proposition 4 in [8]. First, we note that from Assumption ( B ) it follows that
$$\begin{aligned} |y^n(s)-y^n(\tau_s^n)| &\le \big|f(y^n(\tau_s^n))\big|(s-\tau_s^n) + \big|g(y^n(\tau_s^n))\big|\,\big|h(s)-h(\tau_s^n)\big| + \tfrac12\big|g'g(y^n(\tau_s^n))\big|\,\big|h(s)-h(\tau_s^n)\big|^2 \\ &\le L_0\big(1+|y^n(\tau_s^n)|\big)(s-\tau_s^n) + M_0|h|_{1-\alpha}\big(1+|y^n(\tau_s^n)|\big)(s-\tau_s^n)^{1-\alpha} + M_0|h|_{1-\alpha}^2\big(1+|y^n(\tau_s^n)|\big)(s-\tau_s^n)^{2(1-\alpha)} \\ &\le \lambda(\alpha)\big(1+|y^n(\tau_s^n)|\big)(s-\tau_s^n)^{1-\alpha}, \end{aligned}$$
where $\lambda(\alpha) = L_0 + M_0\big(1+|h|_{1-\alpha}\big)|h|_{1-\alpha}$.
Based on the above, we can apply the estimates obtained in the proof of Proposition 4 in [8]. Thus, we obtain
$$\|F(f,\tau^n)(y^n)\|_{\alpha,t} \le C_0 + C_1\int_0^t \frac{\|y^n\|_{\infty,\alpha,s}}{(t-s)^{\alpha}}\,ds,$$
$$\|G(g,\tau^n)(y^n)\|_{\alpha,t} \le C_2 + C_3\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,\alpha,r}\,dr$$
with certain constants $C_i$, $0\le i\le 3$.
Now, we estimate $\|G(g'g,\tau^n)(y^n)\|_{\alpha,t}$. From Lemma 3, it follows that we can apply Proposition 3. Thus, we obtain the inequality
G ( g g , τ n ) ( y n ) α , t Λ α ( h ) ( 0 t c α ( 1 ) ( t r ) 2 α + 1 r α | n ( r ) | d r + 0 t 0 r | n ( r ) n ( v ) | ( r v ) 1 + α ( t v ) α + α d v d r ) .
From inequalities (40) and (41) and the inequality
$$\tau_r^n-\tau_v^n = (r-v)+(v-\tau_v^n)-(r-\tau_r^n) \le (r-v)+(v-\tau_v^n),$$
we have
G t ( g g , τ n ) ( y n ) + 0 t G t ( g g , τ n ) ( y n ) G s ( g g , τ n ) ( y n ) ( t s ) 1 + α d s Λ α ( h ) M | h | 1 α [ 0 t c α ( 1 ) ( t r ) 2 α + r α ( 1 + | y n ( τ r n ) | ) d r + 0 t 0 τ r n 2 | r v | 1 α + | v τ v n | 1 α ( r v ) 1 + α ( t v ) α + α ( 1 + | y n ( τ v n ) | ) d v d r + T α 0 t 0 τ r n y n ( τ r n ) y r n + | y r n y v n | + y v n y n ( τ v n ) ( r v ) 1 + α ( t v ) 2 α + r α d v d r ] .
Thus,
G ( g g , τ n ) ( y n ) α , t Λ α ( h ) M | h | 1 α [ 0 t 0 τ r n | v τ v n | 1 α ( r v ) 1 + α ( t v ) α + α ( 1 + | y n ( τ v n ) | ) d v d r + c α , T ( 2 ) 0 t r α + ( t r ) 2 α ( ( 1 + | y n ( τ r n ) | ) + 0 τ r n | r v | 1 α ( r v ) 1 + α ( 1 + | y n ( τ v n ) | ) d v + 0 τ r n y n ( τ r n ) y r n + | y r n y v n | + y v n y n ( τ v n ) ( r v ) 1 + α d v ) d r ] ,
where c α , T ( 2 ) = max { c α ( 1 ) 1 , T α 2 } .
Now, let us move on to estimating the norm $\|G(g'g,\tau^n)(y^n)\|_{\alpha,t}$. We divide the first integral into two parts and estimate each separately.
Let $t_m^n \le t < t_{m+1}^n$. Changing the order of integration and noting that $v < \tau_r^n$ implies $\tau_v^n + \Delta_n \le r$ (as in [18], p. 3497), we obtain
α 0 t 0 τ r n ( v τ v n ) 1 α ( r v ) 1 + α ( 1 + | y n ( τ v n ) | ) d v d r = α 0 τ t n ( 1 + | y n ( τ v n ) | ) ( v τ v n ) 1 α τ v n + Δ n t 1 ( r v ) 1 + α d r d v k = 1 m 1 + | y n ( t k 1 n ) | t k 1 n t k n ( v t k 1 n ) 1 α ( t k n v ) α d v = k = 1 m 1 + | y n ( t k 1 n ) | Δ n · Δ n 1 2 α 0 1 x 1 α ( 1 x ) α d x B ( 2 α , 1 α ) 0 t ( 1 + y n , r ) d r .
By changing the order of integration in the first part of the integral (as in [18], p. 3497), we obtain
0 t 0 τ r n ( v τ v n ) 1 α ( r v ) 1 + α ( 1 + | y n ( τ v n ) | ) ( t v ) α d v d r = 0 τ t n ( 1 + | y n ( τ v n ) | ) ( t v ) α ( v τ v n ) 1 α τ v n + Δ n t 1 ( r v ) 1 + α d r d v α 1 0 τ t n ( 1 + | y n ( τ v n ) | ) ( t v ) α ( v τ v n ) 1 α ( τ v n + Δ n v ) α d v α 1 k = 1 m 1 1 + | y n ( t k 1 n ) | t k 1 n t k n ( t v ) α ( v t k 1 n ) 1 α ( t k n v ) α d v + α 1 1 + | y n ( t m 1 n ) | t m 1 n t m n ( t v ) α ( v t m 1 n ) 1 α ( t m n v ) α d v α 1 k = 1 m 1 1 + | y n ( t k 1 n ) | ( t t k n ) α t k 1 n t k n ( v t k 1 n ) 1 α ( t k n v ) α d v + α 1 1 + | y n ( t m 1 n ) | Δ n 1 α t m 1 n t m n ( t v ) α ( t m n v ) α d v α 1 B ( 2 α , 1 α ) k = 1 m 1 1 + | y n ( t k 1 n ) | ( t t k n ) α Δ n + α 1 ( 1 2 α ) 1 1 + | y n ( t m 1 n ) | ( t m n t m 1 n ) 1 2 α Δ n α 1 ( B ( 2 α , 1 α ) 2 ) ( T 1 ) α k = 1 m 1 + y n , t k 1 n ( t t k n ) 2 α Δ n α 1 ( B ( 2 α , 1 α ) 2 ) ( T 1 ) α 0 t 1 + y n , s ( t s ) 2 α d s .
Note that for the second term in (49) we have
$$\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\big(1+|y^n(\tau_r^n)|\big)\,dr \le c_{\alpha,T}^{(3)}\bigg(1+\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,r}\,dr\bigg),$$
where
$$c_{\alpha,T}^{(3)} = \max\big\{2(T\vee 1)^{1-2\alpha},\,1\big\}.$$
For the third term, it is evident that
$$\int_0^t\!\!\int_0^{\tau_r^n} \frac{|r-v|^{1-\alpha}}{(r-v)^{1+\alpha}}\big(1+\|y^n\|_{\infty,r}\big)\,dv\,dr \le (1-2\alpha)^{-1}(T\vee 1)^{2-2\alpha}\bigg(1+\int_0^t \|y^n\|_{\infty,r}\,dr\bigg).$$
The estimation of the fourth term was proved in [8], and we repeat it below:
$$\int_0^t\!\!\int_0^{\tau_r^n} \frac{\big|y^n(\tau_r^n)-y_r^n\big| + \big|y_r^n-y_v^n\big| + \big|y_v^n-y^n(\tau_v^n)\big|}{(r-v)^{1+\alpha}}\,dv\,dr \le 2c_{\alpha,T}^{(4)}\bigg(1+\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,r}\,dr\bigg) + \int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)|y^n|_{\alpha,r}\,dr
$$
where
$$c_{\alpha,T}^{(4)} = \lambda(\alpha)\,\alpha^{-1}\max\big\{2(T\vee 1)^{1-2\alpha},\,1\big\}$$
with λ ( α ) as defined in (44).
Consequently, for certain constants $C_i$, $4\le i\le 8$, we obtain
$$\|G(g'g,\tau^n)(y^n)\|_{\alpha,t} \le C_4 + C_5\int_0^t \|y^n\|_{\infty,r}\,dr + C_6\int_0^t \frac{\|y^n\|_{\infty,r}}{(t-r)^{2\alpha}}\,dr + C_7\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,\alpha,r}\,dr \le C_4 + C_8\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,\alpha,r}\,dr.$$
Obviously, from (43), (45), (46), and (50), we have
$$\|y^n\|_{\infty,\alpha,t} \le |x_0| + (1-c)^{-1}\bigg((C_0+C_2+C_4) + C_1\int_0^t \frac{\|y^n\|_{\infty,\alpha,r}}{(t-r)^{\alpha}}\,dr + (C_3+C_8)\int_0^t \big((t-r)^{-2\alpha}+r^{-\alpha}\big)\|y^n\|_{\infty,\alpha,r}\,dr\bigg).$$
Note that for r < t , we have
$$(t-r)^{-\alpha} \le \frac{t^{2\alpha}(t-r)^{\alpha}}{r^{2\alpha}(t-r)^{2\alpha}} \le \frac{t^{3\alpha}}{r^{2\alpha}(t-r)^{2\alpha}} \le T^{\alpha}\,\frac{t^{2\alpha}}{r^{2\alpha}(t-r)^{2\alpha}},\qquad r^{-\alpha}+(t-r)^{-2\alpha} \le \frac{r^{\alpha}t^{2\alpha}}{r^{2\alpha}(t-r)^{2\alpha}} + \frac{r^{2\alpha}}{r^{2\alpha}(t-r)^{2\alpha}} \le (T^{\alpha}+1)\,\frac{t^{2\alpha}}{r^{2\alpha}(t-r)^{2\alpha}}.$$
Thus,
$$\|y^n\|_{\infty,\alpha,t} \le |x_0| + (1-c)^{-1}(C_0+C_2+C_4) + (1-c)^{-1}(C_1+C_3+C_8)(T^{\alpha}+1)\,t^{2\alpha}\int_0^t \frac{\|y^n\|_{\infty,\alpha,r}}{r^{2\alpha}(t-r)^{2\alpha}}\,dr,$$
and from Lemma 7.6 in [21] it follows that
$$\|y^n\|_{\infty,\alpha,t} \le a\,d_\alpha \exp\big\{k_\alpha\,t\,b^{1/(1-2\alpha)}\big\},$$
where $k_\alpha$ and $d_\alpha$ are positive constants depending only on $\alpha$, and
$$a = |x_0| + (1-c)^{-1}(C_0+C_2+C_4),\qquad b = (1-c)^{-1}(C_1+C_3+C_8)(T^{\alpha}+1).$$
□
Now, we can strengthen the result of Lemma 2.
Proposition 7.
Under the assumptions of Proposition 6, we obtain $\sup_n \|y^n\|_{1-\alpha} < \infty$.
Proof. 
Recall that from Lemma 2 we have $y^n,\,F(f,\tau^n)(y^n),\,G(g,\tau^n)(y^n),\,G(g'g,\tau^n)(y^n) \in C^{1-\alpha}(0,T)$ for any fixed $n \ge 1$. Thus, for any fixed $n \ge 1$, we have the following:
$$\|y^n\|_{1-\alpha} \le |x_0| + (1-c)^{-1}\Big(\|F(f,\tau^n)y^n\|_{1-\alpha} + \|G(g,\tau^n)y^n\|_{1-\alpha} + \|G(g'g,\tau^n)y^n\|_{1-\alpha}\Big).$$
The proof repeats the arguments of the proof of Proposition 5 in [8]. The terms $\|F(f,\tau^n)y^n\|_{1-\alpha}$ and $\|G(g,\tau^n)y^n\|_{1-\alpha}$ are bounded for all $n$; this follows from Proposition 6 and the proof of Proposition 5 in [8].
The boundedness of the norm $\|G(g'g,\tau^n)y^n\|_{1-\alpha}$ can be proved in much the same way as for the norm $\|G(g,\tau^n)y^n\|_{1-\alpha}$. From (11), it follows that
$$\big|G_t(g'g,\tau^n)(y^n) - G_s(g'g,\tau^n)(y^n)\big| \le \Lambda_\alpha(h)\bigg(\int_s^t \frac{|\ell^n(r)|}{(r-s)^{\alpha}}\,dr + \int_s^t\!\!\int_s^r \frac{|\ell^n(r)-\ell^n(v)|}{(r-v)^{1+\alpha}}\,dv\,dr\bigg).$$
Applying (40), we obtain
$$\int_s^t \frac{|\ell^n(r)|}{(r-s)^{\alpha}}\,dr \le M_0|h|_{1-\alpha}\big(1+\|y^n\|_{\infty}\big)(1-\alpha)^{-1}(t-s)^{1-\alpha}.$$
From (41), (48), and (31), it is obvious that
$$\begin{aligned} \int_s^t\!\!\int_s^r \frac{|\ell^n(r)-\ell^n(v)|}{(r-v)^{1+\alpha}}\,dv\,dr &\le M_0|h|_{1-\alpha}\bigg(\int_s^t\!\!\int_s^{\tau_r^n} \frac{|r-\tau_r^n|^{1-\alpha}\,\big|y^n(\tau_r^n)-y^n(\tau_v^n)\big|}{(r-v)^{1+\alpha}}\,dv\,dr \\ &\qquad + \int_s^t\!\!\int_s^r \frac{|r-v|^{1-\alpha}}{(r-v)^{1+\alpha}}\big(1+|y^n(\tau_v^n)|\big)\,dv\,dr + \int_s^t\!\!\int_s^{\tau_r^n} \frac{|\tau_r^n-\tau_v^n|^{1-\alpha}}{(r-v)^{1+\alpha}}\big(1+|y^n(\tau_v^n)|\big)\,dv\,dr\bigg) \\ &\le 2M_0|h|_{1-\alpha}\big(1+\|y^n\|_{\infty}\big)\bigg(\int_s^t\!\!\int_s^{\tau_r^n} (r-v)^{-(1+\alpha)}\,dv\,dr + \int_s^t\!\!\int_s^r (r-v)^{-2\alpha}\,dv\,dr\bigg) \\ &\le 2M_0|h|_{1-\alpha}\big(1+\|y^n\|_{\infty}\big)\bigg(\alpha^{-1}\int_s^t (r-\tau_r^n)^{-\alpha}\,dr + (1-2\alpha)^{-1}\int_s^t (r-s)^{1-2\alpha}\,dr\bigg) \\ &\le 2M_0|h|_{1-\alpha}\big(1+\|y^n\|_{\infty}\big)\Big(2\alpha^{-1}\big[2+(T\vee 1)^{\alpha}\big](t-s)^{1-\alpha} + (1-2\alpha)^{-1}(t-s)^{2-2\alpha}\Big).\end{aligned}$$
Thus, the norm $\|G(g'g,\tau^n)y^n\|_{1-\alpha}$ is bounded for all $n$, and the proof is complete. □

7. Rate of Convergence of the Implicit Milstein-Type Approximation

Theorem 7.
Under the conditions of Theorem 6,
$$\|x - y^{M,n}\|_{1-\gamma,\infty} = O\big(\Delta_n^{\gamma}\big),$$
where $\gamma \in (\widehat{\gamma}_0, H)$ and $x$ is the solution of Equation (33).
Proof. 
The proof is similar in spirit to that of the rate of convergence of the implicit Euler approximation. For brevity, let $y^n$ stand for $y^{M,n}$.
Because $x$ and $y^n$ are elements of the space $C^{1-\alpha}(0,T)$, there exists $N$ such that $\|x\|_{1-\alpha} \le N$ and $\|y^n\|_{1-\alpha} \le N$ for all $n$. It is obvious that
$$\begin{aligned} \|x-y^n\|_{\alpha,\lambda} &\le \|\Phi(x)-\Phi(y^n)\|_{\alpha,\lambda} + \|F(f)(x)-F(f)(y^n)\|_{\alpha,\lambda} + \|F(f)(y^n)-F(f,\tau^n)(y^n)\|_{\alpha,\lambda} \\ &\quad + \|G(g)(x)-G(g)(y^n)\|_{\alpha,\lambda} + \|G(g)(y^n)-G(g,\tau^n)(y^n)-G(g'g,\tau^n)(y^n)\|_{\alpha,\lambda}. \end{aligned}$$
We divide the proof of the convergence rate of the norm x y n α , λ into two steps. We estimate the first, second, and fourth terms in the first step. The estimation of the first term is provided in (24). Estimates for the second and fourth terms follow from Propositions 2 and 4.
As in Section 4, for sufficiently large $\lambda_1$ and $b$ satisfying inequality (25), we obtain
$$\|x-y^n\|_{\alpha,\lambda_1} \le (1-b)^{-1}\Big(\|F(f)(y^n)-F(f,\tau^n)(y^n)\|_{\alpha,\lambda_1} + \|G(g)(y^n)-G(g,\tau^n)(y^n)-G(g'g,\tau^n)(y^n)\|_{\alpha,\lambda_1}\Big).$$
In the second step, we estimate the right-hand side of the above inequality. The estimate of the first term follows immediately from (28). From (9), it follows that to estimate the norm of the second term, it suffices to estimate the norm $\|G(g)(y^n)-G(g,\tau^n)(y^n)-G(g'g,\tau^n)(y^n)\|_{1-\alpha}$.
From Proposition 6, Proposition 4.2 in [21], and Lemma 3, we obtain $g(y^n) - g(y^n(\tau^n)) - \ell^n \in W_0^{\alpha,1}(0,T)$ for any fixed $n \ge 1$. Proposition 3 shows that
$$\begin{aligned} &\big[G_t(g)(y^n) - G_t(g,\tau^n)(y^n) - G_t(g'g,\tau^n)(y^n)\big] - \big[G_s(g)(y^n) - G_s(g,\tau^n)(y^n) - G_s(g'g,\tau^n)(y^n)\big] \\ &\quad = \int_s^t \big[g(y_r^n) - g(y^n(\tau_r^n)) - \ell^n(r)\big]\,dh_r \\ &\quad \le \Lambda_\alpha(h)\bigg(\int_s^t \frac{\big|g(y_r^n) - g(y^n(\tau_r^n)) - \ell^n(r)\big|}{(r-s)^{\alpha}}\,dr \\ &\qquad + \alpha\int_s^t\!\!\int_s^r \frac{\big|[g(y_r^n) - g(y^n(\tau_r^n)) - \ell^n(r)] - [g(y_u^n) - g(y^n(\tau_u^n)) - \ell^n(u)]\big|}{|r-u|^{\alpha+1}}\,du\,dr\bigg). \end{aligned}$$
Assume that $\tau_r^n \le u < r < \tau_r^n + \Delta_n$. First, observe that
$$\begin{aligned} &g(y^n(r)) - g(y^n(u)) - g'(y^n(u))\,g(y^n(\tau_r^n))\big(h(r)-h(u)\big) \\ &\quad = \Big[g\big(y^n(u) + [y^n(r)-y^n(u)]\big) - g\big(y^n(u) + f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big)\Big] \\ &\qquad + \Big[g\big(y^n(u) + f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big) - g(y^n(u)) \\ &\qquad\qquad - g'(y^n(u))\big[f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big]\Big] \\ &\qquad + \Big[g'(y^n(u))\big[f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big] - g'(y^n(u))\,g(y^n(\tau_r^n))(h(r)-h(u))\Big] \\ &\quad := J_1(r) + J_2(r) + J_3(r). \end{aligned}$$
From Assumption ( B ) and (44), we obtain
$$|J_1(r)| \le M_0\big|\big(y^n(r)-y^n(u)\big) - f(y^n(\tau_r^n))(r-u) - g(y^n(\tau_r^n))(h(r)-h(u))\big| \le M_0\big|g'g(y^n(\tau_r^n))\big|\big(h(r)-h(u)\big)^2 \le M_0^2|h|_{1-\alpha}^2\big(1+|y^n(\tau_r^n)|\big)(r-u)^{2(1-\alpha)}.$$
Further,
$$|J_3(r)| = \big|g'(y^n(u))\,f(y^n(\tau_r^n))\big|\,(r-u) \le M_0L_0\big(1+|y^n(\tau_r^n)|\big)(r-u)$$
and
$$\begin{aligned} |J_2(r)| &\le \int_0^1 \Big|g'\big(y^n(u) + \theta\big[f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big]\big) - g'(y^n(u))\Big|\,d\theta \\ &\qquad \times \big|f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big| \\ &\le \|g''\|_{\infty}\,\big|f(y^n(\tau_r^n))(r-u) + g(y^n(\tau_r^n))(h(r)-h(u))\big|^2 \\ &\le 2M_0\big(L_0^2(r-u)^2 + |h|_{1-\alpha}^2M_0^2(r-u)^{2(1-\alpha)}\big)\big(1+|y^n(\tau_r^n)|\big)^2 \\ &\le 2M_0\big(L_0^2 + |h|_{1-\alpha}^2M_0^2\big)\big(1+\|y^n\|_{\infty}\big)^2(r-u)^{2(1-\alpha)}. \end{aligned}$$
From (53)–(55) and the fact that $\sup_n\|y^n\|_{\infty} \le N$, we can conclude that there exists a constant $C$ independent of $n$ such that
$$\big|g(y^n(r)) - g(y^n(u)) - g'(y^n(u))\,g(y^n(\tau_r^n))(h(r)-h(u))\big| \le C(r-u).$$
Set $u=\tau_r^n$. For the first term in (52), we obtain
$$\int_s^t \frac{\big|g(y_r^n)-g(y^n(\tau_r^n))-\ell^n(r)\big|}{(r-s)^{\alpha}}\,dr \le C\,\Delta_n\,(1-\alpha)^{-1}(t-s)^{1-\alpha}.$$
We can rewrite the second integral in (52) as the sum of two integrals:
$$\begin{aligned} &\int_s^t\!\!\int_s^{\tau_r^n} \frac{\big|[g(y_r^n)-g(y^n(\tau_r^n))-\ell^n(r)] - [g(y_u^n)-g(y^n(\tau_u^n))-\ell^n(u)]\big|}{|r-u|^{\alpha+1}}\,du\,dr \\ &\qquad + \int_s^t\!\!\int_{\tau_r^n}^r \frac{\big|[g(y_r^n)-g(y^n(\tau_r^n))-\ell^n(r)] - [g(y_u^n)-g(y^n(\tau_u^n))-\ell^n(u)]\big|}{|r-u|^{\alpha+1}}\,du\,dr := I_1 + I_2 \end{aligned}$$
and evaluate each of them separately.
Applying (56) and (31), we obtain
$$I_1 \le C\int_s^t\!\!\int_s^{\tau_r^n} \frac{(r-\tau_r^n)+(u-\tau_u^n)}{|r-u|^{\alpha+1}}\,du\,dr \le 2C\alpha^{-1}\Delta_n\int_s^t (r-\tau_r^n)^{-\alpha}\,dr \le 4C\alpha^{-1}\big[2+(T\vee 1)^{\alpha}\big]\Delta_n^{1-\alpha}(t-s)^{1-\alpha}.$$
Note that
$$I_2 = \int_s^t\!\!\int_{\tau_r^n}^r \frac{\big|g(y_r^n) - g(y_u^n) - g'g(y^n(\tau_r^n))[h(r)-h(u)]\big|}{|r-u|^{\alpha+1}}\,du\,dr$$
and that for $\tau_r^n \le u < r < \tau_r^n + \Delta_n$ we have
$$\begin{aligned} \big|g(y^n(r)) - g(y^n(u)) - g'g(y^n(\tau_r^n))[h(r)-h(u)]\big| &\le \big|g(y^n(r)) - g(y^n(u)) - g'(y^n(u))\,g(y^n(\tau_r^n))(h(r)-h(u))\big| \\ &\quad + \big|g'(y^n(u))\,g(y^n(\tau_r^n)) - g'g(y^n(\tau_r^n))\big|\,\big|h(r)-h(u)\big| =: V_1(r,u) + V_2(r,u). \end{aligned}$$
We can divide the integral I 2 into two parts and estimate each separately. From (56), it is evident that
$$\int_s^t\!\!\int_{\tau_r^n}^r \frac{V_1(r,u)}{|r-u|^{\alpha+1}}\,du\,dr \le C\int_s^t\!\!\int_{\tau_r^n}^r |r-u|^{-\alpha}\,du\,dr \le C(1-\alpha)^{-1}\int_s^t |r-\tau_r^n|^{1-\alpha}\,dr \le 2C\,\Delta_n^{1-\alpha}(t-s).$$
From Assumption ( B ) and (44), we obtain
$$V_2(r,u) \le M_0^2\,\big|y^n(u)-y^n(\tau_r^n)\big|\big(1+|y^n(\tau_r^n)|\big)\,|h|_{1-\alpha}(r-u)^{1-\alpha} \le M_0^2\lambda(\alpha)\,|h|_{1-\alpha}\big(1+\|y^n\|_{\infty}\big)^2(u-\tau_r^n)^{1-\alpha}(r-u)^{1-\alpha}.$$
Thus,
$$\begin{aligned} \int_s^t\!\!\int_{\tau_r^n}^r \frac{V_2(r,u)}{|r-u|^{\alpha+1}}\,du\,dr &\le M_0^2\lambda(\alpha)\big(1+\|y^n\|_{\infty}\big)^2|h|_{1-\alpha}\int_s^t\!\!\int_{\tau_r^n}^r \frac{(u-\tau_r^n)^{1-\alpha}}{|r-u|^{2\alpha}}\,du\,dr \\ &\le M_0^2\lambda(\alpha)\big(1+\|y^n\|_{\infty}\big)^2|h|_{1-\alpha}\,\Delta_n^{1-\alpha}(1-2\alpha)^{-1}\int_s^t (r-\tau_r^n)^{1-2\alpha}\,dr \\ &\le M_0^2\lambda(\alpha)\big(1+\|y^n\|_{\infty}\big)^2|h|_{1-\alpha}\,(1-2\alpha)^{-1}\Delta_n^{1-\alpha}(t-s). \end{aligned}$$
Consequently,
$$\big\|G(g)(y^n) - G(g,\tau^n)(y^n) - G(g'g,\tau^n)(y^n)\big\|_{1-\alpha} = O\big(\Delta_n^{1-\alpha}\big)$$
and
$$\|x-y^n\|_{\alpha,\infty} = O\big(\Delta_n^{1-\alpha}\big).$$
The statement of the theorem follows by taking $\gamma = 1-\alpha$. □

8. Example: Fractional Pearson Diffusion with a Stochastic Force

Consider the Pearson diffusion process with a stochastic force
$$D(X_t) = D(x_0) + \int_0^t \alpha(X_s)\,ds + \int_0^t \sigma(X_s)\,dB_s^H,\qquad t \ge 0,$$
where
$$D(x) := x - \Phi(x),\qquad \alpha(x) = b - ax,\qquad \sigma(x) = \sqrt{\sigma_0 + \sigma_1 x + \sigma_2 x^2},$$
and the function $\Phi$ satisfies Assumption $(A_3)$. Assume that the coefficients $\sigma_i$, $i=0,1,2$, are such that $\sigma_2>0$ and $\sigma_1^2 - 4\sigma_2\sigma_0 < 0$. Then, $\sigma(x)>0$.
For the existence of a unique solution to problem (57), it is necessary to check the conditions of Theorem 1. Note that
$$|\alpha'(x)| \le |a|,\qquad \sigma'(x) = \frac{\sigma_1+2\sigma_2 x}{2\sigma(x)},\qquad 0 < \sigma''(x) = \frac{4\sigma_2\sigma_0-\sigma_1^2}{4\sigma^3(x)} \le \frac{4\sigma_2\sigma_0-\sigma_1^2}{4\sigma^3(x_0)},$$
where $x_0 = -\sigma_1/(2\sigma_2)$ is the critical point of the function $\sigma(x)$.
Straightforward computation shows that
$$\sigma^2(x) \ge \sigma_2\Big(x+\frac{\sigma_1}{2\sigma_2}\Big)^2 = \frac{1}{4\sigma_2}\big(2\sigma_2 x+\sigma_1\big)^2$$
and
$$\big(\sigma'(x)\big)^2 \le \frac{4\sigma_2\big(\sigma_1+2\sigma_2 x\big)^2}{4\big(2\sigma_2 x+\sigma_1\big)^2} = \sigma_2,\qquad\text{i.e.,}\quad |\sigma'(x)| \le \sqrt{\sigma_2}.$$
Thus, the Pearson diffusion process with a stochastic force has a unique solution under the above conditions.
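The positivity of $\sigma$ and the bound $|\sigma'(x)| \le \sqrt{\sigma_2}$ derived above are easy to check numerically. The following sketch uses hypothetical coefficient values (not taken from the paper) satisfying $\sigma_2 > 0$ and $\sigma_1^2 - 4\sigma_2\sigma_0 < 0$:

```python
import math

# Hypothetical coefficients with sigma2 > 0 and a negative
# discriminant: sigma1^2 - 4*sigma2*sigma0 = 1 - 8 = -7 < 0.
sigma0, sigma1, sigma2 = 2.0, 1.0, 1.0

def sigma(x):
    # sigma(x) = sqrt(sigma0 + sigma1*x + sigma2*x^2); the radicand is
    # strictly positive because its discriminant is negative.
    return math.sqrt(sigma0 + sigma1 * x + sigma2 * x * x)

def sigma_prime(x):
    # sigma'(x) = (sigma1 + 2*sigma2*x) / (2*sigma(x))
    return (sigma1 + 2.0 * sigma2 * x) / (2.0 * sigma(x))

# Verify sigma(x) > 0 and |sigma'(x)| <= sqrt(sigma2) on a wide grid.
for k in range(-2000, 2001):
    x = k / 10.0
    assert sigma(x) > 0.0
    assert abs(sigma_prime(x)) <= math.sqrt(sigma2) + 1e-12
```

The derivative bound becomes sharp as $|x|\to\infty$, where $\sigma'(x)\to\pm\sqrt{\sigma_2}$.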
Note that
$$\sigma(x)\,\sigma'(x) = \sigma(x)\,\frac{\sigma_1+2\sigma_2 x}{2\sigma(x)} = \frac{1}{2}\big(\sigma_1+2\sigma_2 x\big).$$
Thus, Assumption ( B ) is satisfied, and for an implicit Milstein-type approximation we have the following:
$$\begin{aligned} y^n(t_{k+1}^n) - \Phi\big(y^n(t_{k+1}^n)\big) &= y^n(t_k^n) - \Phi\big(y^n(t_k^n)\big) + \big(b - a\,y^n(t_k^n)\big)\Delta_n + \sigma\big(y^n(t_k^n)\big)\big(h(t_{k+1}^n)-h(t_k^n)\big) \\ &\quad + \tfrac12\,\sigma\big(y^n(t_k^n)\big)\,\sigma'\big(y^n(t_k^n)\big)\big(h(t_{k+1}^n)-h(t_k^n)\big)^2 \\ &= y^n(t_k^n) - \Phi\big(y^n(t_k^n)\big) + \big(b - a\,y^n(t_k^n)\big)\Delta_n + \sigma\big(y^n(t_k^n)\big)\big(h(t_{k+1}^n)-h(t_k^n)\big) \\ &\quad + \tfrac14\big(\sigma_1 + 2\sigma_2\,y^n(t_k^n)\big)\big(h(t_{k+1}^n)-h(t_k^n)\big)^2, \end{aligned}$$
where the rate of convergence is $O(\Delta_n^{\gamma})$. The convergence rate for the Euler approximation is $O(\Delta_n^{1-\gamma})$.
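As an illustration, the recursion above can be simulated directly. The sketch below is a minimal implementation under several assumptions not fixed by the text: a hypothetical soft wall $\Phi(x)=\tfrac12\tanh x$ (so that Assumption $(A_3)$ holds with $c=\tfrac12$), hypothetical Pearson coefficients, and an fBm path $h$ generated exactly by Cholesky factorization of the fBm covariance. Each implicit step $y-\Phi(y)=R$ is solved by the fixed-point iteration $y\mapsto\Phi(y)+R$, which converges because $|\Phi'|\le\tfrac12$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soft wall with |Phi'(x)| <= 1/2, so Assumption (A3)
# holds with c = 1/2 (illustrative choice, not from the paper).
def Phi(x):
    return 0.5 * math.tanh(x)

# Hypothetical Pearson coefficients: sigma2 > 0, sigma1^2 - 4*sigma2*sigma0 < 0.
a, b = 1.0, 0.5
sigma0, sigma1, sigma2 = 2.0, 1.0, 1.0

def sigma(x):
    return math.sqrt(sigma0 + sigma1 * x + sigma2 * x * x)

# sigma(x) * sigma'(x) = (sigma1 + 2*sigma2*x) / 2, as computed above.
def sigma_sigma_prime(x):
    return 0.5 * (sigma1 + 2.0 * sigma2 * x)

def fbm_path(n, H, T=1.0):
    """Exact fBm sample on a uniform grid via Cholesky factorization
    of the fBm covariance matrix (adequate for moderate n)."""
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)))

def implicit_milstein(x0, n, H, T=1.0):
    h = fbm_path(n, H, T)
    dt = T / n
    y = np.empty(n + 1)
    y[0] = x0
    for k in range(n):
        dh = h[k + 1] - h[k]
        # Right-hand side of the implicit Milstein recursion.
        rhs = (y[k] - Phi(y[k]) + (b - a * y[k]) * dt
               + sigma(y[k]) * dh + 0.5 * sigma_sigma_prime(y[k]) * dh ** 2)
        # Implicit step: solve z - Phi(z) = rhs by contraction iteration.
        z = y[k]
        for _ in range(60):
            z = Phi(z) + rhs
        y[k + 1] = z
    return y

path = implicit_milstein(x0=0.0, n=256, H=0.7)
```

Per Theorem 7, the pathwise error of this scheme is $O(\Delta_n^{\gamma})$; on a refinement of the same fBm path, halving $\Delta_n$ should roughly shrink the error by the factor $2^{\gamma}$.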

9. Conclusions

The mathematical literature has extensively analyzed stochastic differential equations driven by a fractional Brownian motion. Most of these efforts have been motivated by problems arising in the financial applications of SDEs, such as option pricing, stochastic volatility, and interest rate modeling. Our attention is focused on approximating the solutions of SDEs with stochastic forcing, where the forcing can be interpreted as an environmental influence on the behavior of the process. Processes of this type can be applied in the natural sciences. We have presented and investigated two pathwise approximation schemes, namely, the implicit Euler and Milstein schemes. The Milstein scheme has a better convergence rate than the Euler scheme. Our results represent a new and original addition to the field of fractional SDEs and may have broad application prospects in various fields.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Falkowski, A.; Słomiński, L. Sweeping processes with stochastic perturbations generated by a fractional Brownian motion. arXiv 2015, arXiv:1505.01315. [Google Scholar]
  2. Falkowski, A.; Słomiński, L. Mean reflected stochastic differential equations with two constraints. Stoch. Process. Appl. 2021, 141, 172–196. [Google Scholar] [CrossRef]
  3. Kubilius, K.; Medžiūnas, A. A class of the fractional stochastic differential equations with a soft wall. Fractal Fract. 2023, 7, 110. [Google Scholar] [CrossRef]
  4. Kubilius, K. Fractional SDEs with stochastic forcing: Existence, uniqueness, and approximation. Nonlinear Anal. Model. Control 2023, 28, 1196–1225. [Google Scholar] [CrossRef]
  5. Bollerslev, T.; Mikkelsen, H.O. Modeling and pricing long memory in stock market volatility. J. Econom. 1996, 73, 151–184. [Google Scholar] [CrossRef]
  6. Di Nunno, G.; Kubilius, K.; Mishura, Y.; Yurchenko-Tytarenko, A. From Constant to Rough: A Survey of Continuous Volatility Modeling. Mathematics 2023, 11, 4201. [Google Scholar] [CrossRef]
  7. Fischer, T.; Krauss, C. Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res. 2018, 270, 654–669. [Google Scholar] [CrossRef]
  8. Kubilius, K. The implicit Euler scheme for FSDEs with stochastic forcing: Existence and uniqueness of the solution. Mathematics 2024, 12, 2436. [Google Scholar] [CrossRef]
  9. Jamshidi, N.; Kamrani, M. Convergence of a numerical scheme associated to stochastic differential equations with fractional Brownian motion. Appl. Numer. Math. 2021, 167, 108–118. [Google Scholar] [CrossRef]
  10. Hong, J.; Huang, C.; Wang, X. Optimal rate of convergence for two classes of schemes to stochastic differential equations driven by fractional Brownian motions. IMA J. Numer. Anal. 2021, 41, 1608–1638. [Google Scholar] [CrossRef]
  11. Hu, H.; Liu, Y.; Nualart, D. Rate of convergence and asymptotic error distribution of Euler approximation schemes for fractional diffusions. Ann. Appl. Probab. 2016, 26, 1147–1207. [Google Scholar] [CrossRef]
  12. Hu, Y.; Liu, Y.; Nualart, D. Taylor schemes for rough differential equations and fractional diffusions. Discrete Contin. Dyn. Syst. Ser. B 2016, 21, 3115–3162. [Google Scholar]
  13. Hu, Y.; Liu, Y.; Nualart, D. Crank–Nicolson scheme for stochastic differential equations driven by fractional Brownian motions. Ann. Appl. Probab. 2021, 31, 39–83. [Google Scholar] [CrossRef]
  14. Kubilius, K.; Medžiūnas, A. Pathwise convergent approximation for the fractional SDEs. Mathematics 2022, 10, 669. [Google Scholar] [CrossRef]
  15. Liu, W.; Luo, J. Modified Euler approximation of stochastic differential equation driven by Brownian motion and fractional Brownian motion. Commun. Stat. Theory Methods 2017, 46, 7427–7443. [Google Scholar] [CrossRef]
  16. Liu, W.; Jiang, Y.; Li, Z. Rate of convergence of Euler approximation of time-dependent mixed SDEs driven by Brownian motions and fractional Brownian motions. AIMS Math. 2020, 5, 2163–2195. [Google Scholar] [CrossRef]
  17. Mishura, Y.; Shevchenko, G. The rate of convergence for Euler approximations of solutions of stochastic differential equations driven by fractional Brownian motion. Stochastics 2008, 80, 489–511. [Google Scholar]
  18. Mishura, Y.; Shevchenko, G. Existence and Uniqueness of the Solution of Stochastic Differential Equation Involving Wiener Process and Fractional Brownian Motion with Hurst Index H>1/2. Commun. Stat. Theory Methods 2011, 40, 3492–3508. [Google Scholar]
  19. Mishura, Y.; Shevchenko, G. Rate of convergence of Euler approximations of solution to mixed stochastic differential equation involving Brownian motion and fractional Brownian motion. Random Oper. Stoch. Equations 2011, 19, 387–406. [Google Scholar]
  20. Neuenkirch, A.; Nourdin, I. Exact rate of convergence of some approximation schemes associated to SDEs driven by a fractional Brownian motion. J. Theor. Probab. 2007, 20, 871–899. [Google Scholar]
  21. Nualart, D.; Răşcanu, A. Differential equations driven by fractional Brownian motion. Collect. Math. 2002, 53, 55–81. [Google Scholar]
  22. Zähle, M. Integration with respect to fractal functions and stochastic calculus, I. Probab. Theory Relat. Fields 1998, 111, 333–374. [Google Scholar] [CrossRef]
  23. Young, L.C. An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 1936, 67, 251–282. [Google Scholar] [CrossRef]
  24. Dudley, R.M.; Norvaiša, R. Differentiability of Six Operators on Nonsmooth Functions and p-Variation; Lecture Notes in Mathematics; Springer: New York, NY, USA, 1999; Volume 1703. [Google Scholar]
  25. Butkovsky, O.; Dareiotis, K.; Gerencsér, M. Approximation of SDEs: A stochastic sewing approach. Probab. Theory Relat. Fields 2021, 181, 975–1034. [Google Scholar] [CrossRef]
  26. Zhang, S.Q.; Yuan, C. Stochastic differential equations driven by fractional Brownian motion with locally Lipschitz drift and their Euler approximation. Proc. R. Soc. Edinb. A 2020, 150, 1–27. [Google Scholar] [CrossRef]