J Indian Soc Probab Stat
https://doi.org/10.1007/s41096-018-0055-y

RESEARCH ARTICLE

Normal Approximation of Posterior Distribution in GI/G/1 Queue

Saroja Kumar Singh (sarojasngh@gmail.com) · Sarat Kumar Acharya (acharya_sarat@yahoo.co.in)
P. G. Department of Statistics, Sambalpur University, Sambalpur, Odisha, India

Accepted: 10 October 2018
© The Indian Society for Probability and Statistics (ISPS) 2018

Abstract  The paper deals with the asymptotic joint posterior distribution of (θ, φ) in a GI/G/1 queueing system observed over a continuous time interval (0, T], where θ and φ are the unknown parameters of the arrival process and the departure process respectively and T is a suitable stopping time.

Keywords  GI/G/1 queue · Exponential families · Maximum likelihood estimator · Posterior distribution · Asymptotic normality

Mathematics Subject Classification  60K25 · 68M20 · 62F12

1 Introduction

Although statistical inference plays a major role in any use of queueing models, the study of asymptotic inference problems for queueing systems can hardly be traced back beyond the works of Basawa and Prabhu (1981, 1988), who discussed the maximum likelihood (ML) estimators of the parameters in single server queues. Basawa et al. (1996) studied the consistency and asymptotic normality of the parameter estimators in a GI/G/1 queue based on information on waiting times. Acharya (1999) studied the rate of convergence of the distribution of the maximum likelihood estimators of the arrival and service rates in a single server queue. Acharya and Mishra (2007) proved the Bernstein–von Mises theorem for the arrival process in an M/M/1 queue.

From a Bayesian outlook, inferences about the parameters are based on their posterior distribution. The study of asymptotic posterior normality can be traced back to the time of Laplace and has attracted the attention of many authors.
A conventional approach to such problems starts from a Taylor series expansion of the log-likelihood function around the maximum likelihood estimator (MLE) and proceeds to develop expansions that have the standard normal as leading term and that hold in probability or almost surely, given the data. This type of study has not been carried out for queueing systems. In the general setting, the earlier works in this direction appear to be those of Walker (1969) and Johnston (1970) for i.i.d. observations, and of Hyde and Johnston (1979), Basawa and Prakasa Rao (1980), Chen (1985) and Sweeting and Adekola (1987) for stochastic processes. More recently, Kim (1998) provided a set of conditions under which asymptotic normality holds in quite general, possibly nonstationary time series models, and Weng and Tsai (2008) studied asymptotic posterior normality for multiparameter problems.

In this paper our aim is to prove that the joint posterior distribution of (θ, φ) is asymptotically normal for the GI/G/1 queueing model in the context of exponential families. In Sect. 2 we introduce the model of interest and recall the relevant elements of the maximum likelihood and Bayesian procedures. In Sect. 3 we prove our main result. For illustration we provide an example in Sect. 4. Section 5 deals with a simulation study, while Sect. 6 contains concluding remarks.

2 GI/G/1 Queueing Model

Consider a single server queueing system in which the interarrival times {u_k, k ≥ 1} and the service times {v_k, k ≥ 1} are two independent sequences of independent and identically distributed nonnegative random variables with densities f(u; θ) and g(v; φ), respectively, where θ and φ are unknown parameters. We assume that f and g belong to the continuous exponential families

  f(u; θ) = a_1(u) exp{θ h_1(u) − k_1(θ)},   (2.1)
  g(v; φ) = a_2(v) exp{φ h_2(v) − k_2(φ)},   (2.2)

with f(u; θ) = g(v; φ) = 0 on (−∞, 0), where Θ_1 = {θ > 0 : k_1(θ) < ∞} and Θ_2 = {φ > 0 : k_2(φ) < ∞} are open subsets of ℝ. It is easy to see that

  E_θ(h_1(u)) = k_1′(θ),  var_θ(h_1(u)) = k_1″(θ),
  E_φ(h_2(v)) = k_2′(φ),  var_φ(h_2(v)) = k_2″(φ),

all of which are assumed to be finite.

For simplicity we assume that the initial customer arrives at time t = 0. Our sampling scheme is to observe the system over a continuous time interval (0, T], where T is a suitable stopping time. The sample data consist of

  {A(T), D(T), u_1, u_2, …, u_{A(T)}, v_1, v_2, …, v_{D(T)}},   (2.3)

where A(T) is the number of arrivals and D(T) is the number of departures during (0, T]. Obviously no arrivals occur during [Σ_{i=1}^{A(T)} u_i, T] and no departures occur during [γ(T) + Σ_{i=1}^{D(T)} v_i, T], where γ(T) is the total idle period in (0, T]. The likelihood function based on the data (2.3) is given by

  L_T(θ, φ) = ∏_{i=1}^{A(T)} f(u_i; θ) ∏_{i=1}^{D(T)} g(v_i; φ)
      × [1 − F_θ(T − Σ_{i=1}^{A(T)} u_i)] [1 − G_φ(T − γ(T) − Σ_{i=1}^{D(T)} v_i)],   (2.4)

where F and G are the distribution functions corresponding to the densities f and g respectively. The approximate likelihood L_T^{(a)}(θ, φ) is defined as

  L_T^{(a)}(θ, φ) = ∏_{i=1}^{A(T)} f(u_i; θ) ∏_{i=1}^{D(T)} g(v_i; φ) = L_T^{(a)}(θ) L_T^{(a)}(φ),   (2.5)

where

  L_T^{(a)}(θ) = [∏_{i=1}^{A(T)} a_1(u_i)] exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]}   (2.6)

and

  L_T^{(a)}(φ) = [∏_{i=1}^{D(T)} a_2(v_i)] exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]}.   (2.7)

The maximum likelihood estimates obtained from (2.5) are asymptotically equivalent to those obtained from (2.4) provided the following two conditions are satisfied as T → ∞:

  (A(T))^{−1/2} (∂/∂θ) log[1 − F_θ(T − Σ_{i=1}^{A(T)} u_i)] → 0 in probability   (2.8)

and

  (D(T))^{−1/2} (∂/∂φ) log[1 − G_φ(T − γ(T) − Σ_{i=1}^{D(T)} v_i)] → 0 in probability.   (2.9)

The implications of these conditions have been explained by Basawa and Prabhu (1988).
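To make the representation (2.1) concrete, the following sketch (illustrative Python, not part of the paper) writes the exponential density θe^{−θu} in the exponential-family form with the choice a_1(u) = 1, h_1(u) = −u, k_1(θ) = −log θ (an assumption for illustration), and checks the moment identity E_θ(h_1(u)) = k_1′(θ) = −1/θ by Monte Carlo.

```python
import math
import random

# Exponential interarrival density in the form (2.1):
#   f(u; theta) = a1(u) * exp(theta * h1(u) - k1(theta)).
# For f(u; theta) = theta * exp(-theta * u), one valid choice (used here
# purely for illustration) is a1(u) = 1, h1(u) = -u, k1(theta) = -log(theta).
def a1(u): return 1.0
def h1(u): return -u
def k1(theta): return -math.log(theta)

def f(u, theta):
    return a1(u) * math.exp(theta * h1(u) - k1(theta))

theta = 1.7
# The representation reproduces theta * exp(-theta * u) exactly:
for u in (0.1, 0.5, 2.0):
    assert abs(f(u, theta) - theta * math.exp(-theta * u)) < 1e-12

# Moment identity E[h1(u)] = k1'(theta) = -1/theta, checked by Monte Carlo.
random.seed(0)
us = [random.expovariate(theta) for _ in range(200_000)]
mean_h = sum(h1(u) for u in us) / len(us)
print(mean_h, -1 / theta)  # close for a large sample
```

The same check applies to the service-time family (2.2) with h_2, k_2 in place of h_1, k_1.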
Basawa and Prabhu (1988) have shown that the maximum likelihood estimators of θ and φ are given by

  θ̂_T = η_1^{−1}((A(T))^{−1} Σ_{i=1}^{A(T)} h_1(u_i)),   (2.10)
  φ̂_T = η_2^{−1}((D(T))^{−1} Σ_{i=1}^{D(T)} h_2(v_i)),   (2.11)

where η_i^{−1}(·) denotes the inverse function of η_i(·) for i = 1, 2, and

  η_1(θ) = E_θ(h_1(u)) = k_1′(θ)  and  η_2(φ) = E_φ(h_2(v)) = k_2′(φ).

The Fisher information matrix is given by

  I(θ, φ) = [ I(θ)   0
              0     I(φ) ] = [ k_1″(θ)E(A(T))   0
                               0               k_2″(φ)E(D(T)) ].   (2.12)

Under suitable stability conditions on the stopping times, Basawa and Prabhu (1988) proved that the estimators θ̂_T and φ̂_T are consistent, i.e.,

  θ̂_T → θ_0 a.s.  and  φ̂_T → φ_0 a.s.  as T → ∞,   (2.13)

and

  I(θ_0, φ_0)^{1/2} (θ̂_T − θ_0, φ̂_T − φ_0)′ ⇒ N(0, I_2),   (2.14)

where θ_0 and φ_0 denote the true values of θ and φ respectively, I_2 is the 2 × 2 identity matrix, and ⇒ denotes convergence in distribution.

From Eq. (2.5) we have the loglikelihood function

  ℓ_T(θ, φ) = log L_T^{(a)}(θ, φ) = ℓ_T(θ) + ℓ_T(φ),   (2.15)

where

  ℓ_T(θ) = log L_T^{(a)}(θ) = Σ_{i=1}^{A(T)} log a_1(u_i) + θ Σ_{i=1}^{A(T)} h_1(u_i) − A(T) k_1(θ)   (2.16)

and

  ℓ_T(φ) = log L_T^{(a)}(φ) = Σ_{i=1}^{D(T)} log a_2(v_i) + φ Σ_{i=1}^{D(T)} h_2(v_i) − D(T) k_2(φ).   (2.17)

Let

  ℓ_T′(θ_0) = (∂/∂θ) ℓ_T(θ, φ)|_{θ=θ_0} = (∂/∂θ) ℓ_T(θ)|_{θ=θ_0},
  ℓ_T″(θ_0) = (∂²/∂θ²) ℓ_T(θ, φ)|_{θ=θ_0} = (∂²/∂θ²) ℓ_T(θ)|_{θ=θ_0}.

The quantities ℓ_T′(θ̂_T), ℓ_T′(φ_0), ℓ_T′(φ̂_T), ℓ_T″(θ̂_T), ℓ_T″(φ_0) and ℓ_T″(φ̂_T) are defined similarly.

Let π_1(θ) and π_2(φ) be the prior densities of θ and φ respectively, and let the joint prior density of (θ, φ) be π(θ, φ). Since the interarrival time and service time distributions are independent, we have π(θ, φ) = π_1(θ) π_2(φ). The joint posterior density of (θ, φ) is then

  π(θ, φ | (u_i, v_i); i ≥ 1) = π_1(θ | u_i; i = 1, …, A(T)) π_2(φ | v_i; i = 1, …, D(T)).   (2.18)
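Returning to the estimators (2.10) and (2.11): for a general exponential family the inversion of η_1 = k_1′ need not be available in closed form. The sketch below (an illustration with assumed helper names, not the paper's code) inverts η_1 by bisection and, for the exponential case where η_1(θ) = −1/θ, checks the result against the closed-form MLE A(T)/Σ u_i.

```python
import random

# Numerical sketch of the MLE (2.10): theta_hat = eta1^{-1}(mean of h1(u_i)),
# inverting eta1(theta) = k1'(theta) by bisection.  For illustration we take
# the exponential density, where h1(u) = -u and eta1(theta) = -1/theta, so
# the answer can be checked against the closed form theta_hat = A / sum(u_i).
def eta1(theta):
    return -1.0 / theta  # k1'(theta) for k1(theta) = -log(theta)

def eta1_inv(m, lo=1e-9, hi=1e9, tol=1e-12):
    # eta1 is strictly increasing on (0, inf), so plain bisection suffices.
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if eta1(mid) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(1)
theta0 = 2.0
us = [random.expovariate(theta0) for _ in range(50_000)]
m = sum(-u for u in us) / len(us)      # sample mean of h1(u_i)
theta_hat = eta1_inv(m)
closed_form = len(us) / sum(us)
print(theta_hat, closed_form)          # agree to numerical tolerance
```

The same scheme applies verbatim to φ̂_T in (2.11) with η_2 and the service times.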
Here the marginal posterior densities of θ and φ are

  π_1(θ | u_i; i = 1, …, A(T)) = L_T^{(a)}(θ) π_1(θ) / ∫_{Θ_1} L_T^{(a)}(θ) π_1(θ) dθ
    = exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) / ∫_{Θ_1} exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) dθ   (2.19)

and

  π_2(φ | v_i; i = 1, …, D(T)) = exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) / ∫_{Θ_2} exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) dφ.   (2.20)

Let θ̃_T and φ̃_T be the Bayes estimators of θ and φ respectively. In the next section we state and prove our main result.

3 Main Result

Theorem 3.1  Let (θ_0, φ_0) ∈ Θ_1 × Θ_2. If the prior densities π_1(θ) and π_2(φ) are continuous and positive at θ_0 and φ_0 respectively then, for any α_i, β_i such that −∞ ≤ α_i ≤ β_i ≤ ∞, i = 1, 2, the posterior probability that (θ̂_T + α_1 σ_T ≤ θ ≤ θ̂_T + β_1 σ_T, φ̂_T + α_2 τ_T ≤ φ ≤ φ̂_T + β_2 τ_T), namely

  ∫_{θ̂_T+α_1σ_T}^{θ̂_T+β_1σ_T} ∫_{φ̂_T+α_2τ_T}^{φ̂_T+β_2τ_T} π(θ, φ | (u_i, v_i), i ≥ 1) dφ dθ,

tends in [P_{(θ_0,φ_0)}] probability to

  (2π)^{−1} ∫_{α_1}^{β_1} ∫_{α_2}^{β_2} e^{−(x²+y²)/2} dy dx

as T → ∞, where σ_T and τ_T are the positive square roots of [−ℓ_T″(θ̂_T)]^{−1} and [−ℓ_T″(φ̂_T)]^{−1} respectively.

Proof of Theorem 3.1  The first integral above can be written as the product of the integrals of the marginal posterior densities, i.e.,

  ∫_{θ̂_T+α_1σ_T}^{θ̂_T+β_1σ_T} ∫_{φ̂_T+α_2τ_T}^{φ̂_T+β_2τ_T} π(θ, φ | (u_i, v_i), i ≥ 1) dφ dθ
  = ∫_{θ̂_T+α_1σ_T}^{θ̂_T+β_1σ_T} [exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) / ∫_{Θ_1} exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) dθ] dθ
  × ∫_{φ̂_T+α_2τ_T}^{φ̂_T+β_2τ_T} [exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) / ∫_{Θ_2} exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) dφ] dφ,   (3.1)

and the convergence of the two factors can be established separately. For any δ > 0, write N(ϱ, δ) = (ϱ − δ, ϱ + δ) with ϱ ∈ Θ_1, and let

  J_B = ∫_B L_T^{(a)}(θ) π_1(θ) dθ,  B ⊆ Θ_1.
Hence

  ∫_{θ̂_T+α_1σ_T}^{θ̂_T+β_1σ_T} [exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) / ∫_{Θ_1} exp{Σ_{i=1}^{A(T)} [θ h_1(u_i) − k_1(θ)]} π_1(θ) dθ] dθ = (J_{Θ_1})^{−1} J_{N(θ_T,δ_T)}   (3.2)

with δ_T = σ_T(β_1 − α_1)/2 and θ_T = θ̂_T + σ_T(α_1 + β_1)/2. We want to prove that

  (J_{Θ_1})^{−1} J_{N(θ_T,δ_T)} → Φ(β_1) − Φ(α_1) = (1/√(2π)) ∫_{α_1}^{β_1} e^{−x²/2} dx   (3.3)

in probability [P_{θ_0}], where Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−s²/2} ds.

Let us split J_{Θ_1} into J_{Θ_1\N(θ_0,δ)} and J_{N(θ_0,δ)}. To obtain the above result it then suffices to prove that the following statements hold in probability [P_{θ_0}]: for some δ > 0,

  (a) lim_{T→∞} [L_T^{(a)}(θ̂_T) σ_T]^{−1} J_{Θ_1\N(θ_0,δ)} = 0,
  (b) lim_{T→∞} [L_T^{(a)}(θ̂_T) σ_T]^{−1} J_{N(θ_0,δ)} = (2π)^{1/2} π_1(θ_0),
  (c) lim_{T→∞} [L_T^{(a)}(θ̂_T) σ_T]^{−1} J_{N(θ_T,δ_T)} = (2π)^{1/2} π_1(θ_0) (Φ(β_1) − Φ(α_1)).

Define

  r_T(θ) = −(ℓ_T″(θ) − ℓ_T″(θ̂_T)) / ℓ_T″(θ̂_T) = 1 − (ℓ_T″(θ)/ℓ_T″(θ_0)) / (ℓ_T″(θ̂_T)/ℓ_T″(θ_0)).   (3.4)

If θ belongs to N(θ_0, δ) for some δ > 0, ℓ_T″(θ)/ℓ_T″(θ_0) is close enough to 1 and, since θ̂_T → θ_0 almost surely, ℓ_T″(θ̂_T)/ℓ_T″(θ_0) is almost surely close to 1 for T sufficiently large. Therefore, for given ε > 0 we can take δ such that, if T is large enough,

  sup_{θ∈N(θ_0,δ)} |r_T(θ)| < ε  [P_{θ_0}].   (3.5)

Consider also

  q_T(θ) = (ℓ_T(θ) − ℓ_T(θ̂_T)) / (−ℓ_T″(θ_0)) = [(θ − θ̂_T) Σ_{i=1}^{A(T)} h_1(u_i) − A(T)(k_1(θ) − k_1(θ̂_T))] / (A(T) k_1″(θ_0)).

Since ℓ_T(·) has a strict maximum at θ̂_T, it is clear that q_T(·) is negative on Θ_1 \ N(θ_0, δ) for T large enough. Moreover, since θ̂_T → θ_0 almost surely, it can be shown that there exists a positive constant κ(δ) such that

  sup_{θ∈Θ_1\N(θ_0,δ)} q_T(θ) < −κ(δ)  [P_{θ_0}].   (3.6)
Now,

  [L_T^{(a)}(θ̂_T) σ_T]^{−1} J_{Θ_1\N(θ_0,δ)}
  = [L_T^{(a)}(θ̂_T) σ_T]^{−1} ∫_{Θ_1\N(θ_0,δ)} L_T^{(a)}(θ) π_1(θ) dθ
  = σ_T^{−1} ∫_{Θ_1\N(θ_0,δ)} π_1(θ) exp{ℓ_T(θ) − ℓ_T(θ̂_T)} dθ
  = (−ℓ_T″(θ̂_T))^{1/2} ∫_{Θ_1\N(θ_0,δ)} π_1(θ) exp{q_T(θ)(−ℓ_T″(θ_0))} dθ
  ≤ (−ℓ_T″(θ̂_T))^{1/2} exp{−κ(δ)(−ℓ_T″(θ_0))}   (using Eq. (3.6))
  = [(−ℓ_T″(θ̂_T))^{1/2} / (−ℓ_T″(θ_0))^{1/2}] (−ℓ_T″(θ_0))^{1/2} exp{−κ(δ)(−ℓ_T″(θ_0))}  [P_{θ_0}].

Now −ℓ_T″(θ_0) = A(T) k_1″(θ_0) diverges to ∞ almost surely as T → ∞, so in the above expression

  (−ℓ_T″(θ_0))^{1/2} exp{−κ(δ)(−ℓ_T″(θ_0))} → 0

in probability and, using Eq. (3.5), for some constant M and T large enough,

  (−ℓ_T″(θ̂_T))^{1/2} / (−ℓ_T″(θ_0))^{1/2} = [1/(1 − r_T(θ_0))]^{1/2} < M

in probability; consequently (a) holds.

Let us prove (b). Write

  L_T^{(a)}(θ) = L_T^{(a)}(θ̂_T) exp{ℓ_T(θ) − ℓ_T(θ̂_T)}.   (3.7)

Using a Taylor expansion around θ̂_T,

  ℓ_T(θ) = ℓ_T(θ̂_T) + ½ (θ − θ̂_T)² ℓ_T″(θ̄_T)   (3.8)

for θ̄_T = θ + ξ(θ̂_T − θ) with 0 < ξ < 1. Thus, letting R_T = R_T(θ) = σ_T² {ℓ_T″(θ̄_T) − ℓ_T″(θ̂_T)}, we have

  −(1 − R_T)/σ_T² = ℓ_T″(θ̄_T).   (3.9)

Using Eqs. (3.8) and (3.9) in Eq. (3.7), for some δ > 0 and T large enough that θ̂_T ∈ N(θ_0, δ), we have, for every θ ∈ N(θ_0, δ),

  L_T^{(a)}(θ) = L_T^{(a)}(θ̂_T) exp{−(θ − θ̂_T)²(1 − R_T)/(2σ_T²)}  [P_{θ_0}]   (3.10)

and consequently

  [L_T^{(a)}(θ̂_T) π_1(θ_0)]^{−1} J_{N(θ_0,δ)} = ∫_{N(θ_0,δ)} (π_1(θ)/π_1(θ_0)) exp{−(θ − θ̂_T)²(1 − R_T)/(2σ_T²)} dθ  [P_{θ_0}].   (3.11)

Since π_1(θ) is continuous and positive at θ = θ_0, for given 0 < ε < 1 we can choose δ small enough that

  1 − ε < inf_{θ∈N(θ_0,δ)} π_1(θ)/π_1(θ_0) ≤ sup_{θ∈N(θ_0,δ)} π_1(θ)/π_1(θ_0) < 1 + ε.   (3.12)

Denote

  J̃_B = ∫_B exp{−(θ − θ̂_T)²(1 − R_T)/(2σ_T²)} dθ,  B ⊆ Θ_1.

Then from Eq. (3.12) we get

  (1 − ε) J̃_{N(θ_0,δ)} < [L_T^{(a)}(θ̂_T) π_1(θ_0)]^{−1} J_{N(θ_0,δ)} < (1 + ε) J̃_{N(θ_0,δ)}.   (3.13)
If sup_{θ∈N(θ_0,δ)} |R_T| < ε < 1, then

  ∫_{N(θ_0,δ)} exp{−(θ − θ̂_T)²(1 + ε)/(2σ_T²)} dθ < J̃_{N(θ_0,δ)} < ∫_{N(θ_0,δ)} exp{−(θ − θ̂_T)²(1 − ε)/(2σ_T²)} dθ

and, for η = +ε or −ε, making a change of variable,

  ∫_{N(θ_0,δ)} exp{−(θ − θ̂_T)²(1 + η)/(2σ_T²)} dθ
  = σ_T (1 + η)^{−1/2} ∫_{(θ_0−δ−θ̂_T)(1+η)^{1/2}σ_T^{−1}}^{(θ_0+δ−θ̂_T)(1+η)^{1/2}σ_T^{−1}} e^{−x²/2} dx
  = (2π)^{1/2} σ_T (1 + η)^{−1/2} [Φ(σ_T^{−1}(θ_0 + δ − θ̂_T)(1 + η)^{1/2}) − Φ(σ_T^{−1}(θ_0 − δ − θ̂_T)(1 + η)^{1/2})].   (3.14)

Since σ_T^{−1} → ∞ and θ̂_T → θ_0 almost surely, the limits (θ_0 − δ − θ̂_T)(1 + η)^{1/2} σ_T^{−1} and (θ_0 + δ − θ̂_T)(1 + η)^{1/2} σ_T^{−1} of the integral above converge to −∞ and ∞ respectively. Therefore the term in square brackets in Eq. (3.14) converges to 1. Thus, using an appropriate bound on R_T, it follows that

  (2π)^{1/2} (1 + ε)^{−1/2} < σ_T^{−1} J̃_{N(θ_0,δ)} < (2π)^{1/2} (1 − ε)^{−1/2}

in probability as T → ∞ and, combining this with Eq. (3.13), we obtain the following bounds for J_{N(θ_0,δ)}:

  (1 + ε)^{−1/2} (1 − ε) < [L_T^{(a)}(θ̂_T) π_1(θ_0) (2π)^{1/2} σ_T]^{−1} J_{N(θ_0,δ)} < (1 − ε)^{−1/2} (1 + ε)  [P_{θ_0}].

Hence (b) holds.

Finally, let us show (c). Using the same arguments and notation as above, given ε > 0 there exists δ such that N(θ_T, δ_T) ⊆ N(θ_0, δ) for T large enough, and then

  (1 − ε) J̃_{N(θ_T,δ_T)} < [L_T^{(a)}(θ̂_T) π_1(θ_0)]^{−1} J_{N(θ_T,δ_T)} < (1 + ε) J̃_{N(θ_T,δ_T)}  [P_{θ_0}],

while the last term in Eq. (3.14) becomes

  (2π)^{1/2} σ_T (1 + η)^{−1/2} [Φ(β_1 (1 + η)^{1/2}) − Φ(α_1 (1 + η)^{1/2})].

Therefore we obtain

  [L_T^{(a)}(θ̂_T) σ_T]^{−1} J_{N(θ_T,δ_T)} → (2π)^{1/2} π_1(θ_0) [Φ(β_1) − Φ(α_1)]  [P_{θ_0}],

and together with (a) and (b) this establishes (3.3). Similarly, using the same arguments, it can be shown that

  ∫_{φ̂_T+α_2τ_T}^{φ̂_T+β_2τ_T} [exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) / ∫_{Θ_2} exp{Σ_{i=1}^{D(T)} [φ h_2(v_i) − k_2(φ)]} π_2(φ) dφ] dφ → (1/√(2π)) ∫_{α_2}^{β_2} e^{−y²/2} dy

in probability [P_{φ_0}], and the proof is complete. □

4 Example

Let us consider an M/M/1 queueing system.
Under the Markovian set-up we have f(u; θ) = θ e^{−θu} and g(v; φ) = φ e^{−φv}. The loglikelihood function is

  ℓ_T(θ, φ) = A(T) log θ − θ Σ_{i=1}^{A(T)} u_i + D(T) log φ − φ Σ_{i=1}^{D(T)} v_i

and the MLEs are given by

  θ̂_T = [Σ_{i=1}^{A(T)} u_i / A(T)]^{−1}  and  φ̂_T = [Σ_{i=1}^{D(T)} v_i / D(T)]^{−1}.

Here

  σ_T = [−ℓ_T″(θ̂_T)]^{−1/2} = √(A(T)) / Σ_{i=1}^{A(T)} u_i  and  τ_T = [−ℓ_T″(φ̂_T)]^{−1/2} = √(D(T)) / Σ_{i=1}^{D(T)} v_i.

Let us assume that the conjugate prior distributions of θ and φ are gamma distributions with hyper-parameters (a_1, b_1) and (a_2, b_2), that is,

  π_1(θ) = (b_1^{a_1}/Γ(a_1)) θ^{a_1−1} e^{−b_1θ}  and  π_2(φ) = (b_2^{a_2}/Γ(a_2)) φ^{a_2−1} e^{−b_2φ},

where a_i, b_i > 0 for i = 1, 2. Then the posterior distribution of θ can be computed as

  π_1(θ | u_i; i = 1, 2, …, A(T)) = L_T^{(a)}(θ) π_1(θ) / ∫_{Θ_1} L_T^{(a)}(θ) π_1(θ) dθ
  = θ^{A(T)+a_1−1} e^{−(Σ_{i=1}^{A(T)} u_i + b_1)θ} / ∫_0^∞ θ^{A(T)+a_1−1} e^{−(Σ_{i=1}^{A(T)} u_i + b_1)θ} dθ
  = ((Σ_{i=1}^{A(T)} u_i + b_1)^{A(T)+a_1} / Γ(A(T) + a_1)) θ^{A(T)+a_1−1} e^{−(Σ_{i=1}^{A(T)} u_i + b_1)θ}.

Similarly,

  π_2(φ | v_i; i = 1, 2, …, D(T)) = ((Σ_{i=1}^{D(T)} v_i + b_2)^{D(T)+a_2} / Γ(D(T) + a_2)) φ^{D(T)+a_2−1} e^{−(Σ_{i=1}^{D(T)} v_i + b_2)φ}.

It is easy to see that

  θ̃_T = (A(T) + a_1) / (Σ_{i=1}^{A(T)} u_i + b_1)  and  φ̃_T = (D(T) + a_2) / (Σ_{i=1}^{D(T)} v_i + b_2).

Here the posterior distributions of θ and φ are seen to be gamma distributions, namely Gamma(A(T) + a_1, Σ_{i=1}^{A(T)} u_i + b_1) and Gamma(D(T) + a_2, Σ_{i=1}^{D(T)} v_i + b_2). Hence, by the Central Limit Theorem (CLT), the joint posterior distribution converges to a normal distribution as T → ∞.

5 Simulation

To examine the feasibility of the main result of Sect. 3, a simulation study was conducted for the M/M/1 queueing system. For given values of the true parameters θ_0 and φ_0, the MLEs (θ̂_T and φ̂_T) are computed over different time intervals (0, T].
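Before turning to the tables, the conjugate setting of Sect. 4 allows a quick numerical check of Theorem 3.1: the posterior of θ is Gamma(A(T) + a_1, Σ u_i + b_1), so the posterior probability of the interval (θ̂_T + α σ_T, θ̂_T + β σ_T) should approach Φ(β) − Φ(α). The sketch below (assumed sample sizes and seed; Monte Carlo sampling stands in for an exact gamma CDF, and it is not the authors' code) illustrates this.

```python
import math
import random

# Numerical illustration of Theorem 3.1 in the M/M/1 example: compare the
# gamma-posterior probability of (theta_hat + alpha*sigma_T,
# theta_hat + beta*sigma_T) with Phi(beta) - Phi(alpha).
def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = random.Random(4)
theta0, a1, b1 = 1.0, 1.5, 2.5
us = [rng.expovariate(theta0) for _ in range(20_000)]
A, S = len(us), sum(us)

theta_hat = A / S                       # MLE
sigma_T = math.sqrt(A) / S              # [-l_T''(theta_hat)]^{-1/2}
shape, rate = A + a1, S + b1            # gamma posterior parameters

alpha, beta = -1.0, 1.0
draws = (rng.gammavariate(shape, 1.0 / rate) for _ in range(100_000))
post_prob = sum(theta_hat + alpha * sigma_T <= d <= theta_hat + beta * sigma_T
                for d in draws) / 100_000
# Both quantities should be close to Phi(1) - Phi(-1) ~ 0.68:
print(post_prob, std_normal_cdf(beta) - std_normal_cdf(alpha))
```

The same check applies to φ with the Gamma(D(T) + a_2, Σ v_i + b_2) posterior and τ_T.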
Also, by choosing different values of the hyper-parameters of the gamma priors, we compute the Bayes estimators (θ̃_T and φ̃_T) of θ and φ. Here we consider two pairs of true parameter values (θ_0, φ_0), namely (1, 2) and (2, 3). The hyper-parameters are taken as (a_1, b_1) = (1.5, 2.5), (a_2, b_2) = (3, 3.5) and (a_1, b_1) = (3, 5), (a_2, b_2) = (4, 5.5). The simulation procedure is repeated 10000 times to estimate the parameters.

Table 1  For (θ_0, φ_0) = (1, 2), (a_1, b_1) = (1.5, 2.5) and (a_2, b_2) = (3, 3.5): MLEs, Bayes estimators and their standard errors (in parentheses)

  (0, T]    θ̂_T, φ̂_T               θ̃_T, φ̃_T
  (0, 10]   1.1093, 2.1080          1.0392, 2.0640
            (0.0119, 0.0116)        (0.0015, 0.0041)
  (0, 20]   1.0507, 2.0584          1.0917, 2.1175
            (0.0025, 0.0034)        (0.0084, 0.0138)
  (0, 30]   1.0414, 2.0149          1.0938, 1.9102
            (0.0017, 0.0002)        (0.0087, 0.0081)
  (0, 40]   1.0148, 2.0304          1.0502, 1.9866
            (0.0087, 0.0002)        (0.0025, 0.0002)
  (0, 50]   1.0144, 2.0239          1.0722, 2.0243
            (0.0002, 0.0006)        (0.0052, 0.0006)
  (0, 60]   1.0146, 2.0199          1.1026, 2.0856
            (0.0002, 0.0004)        (0.0111, 0.0073)
  (0, 70]   1.0081, 2.0181          1.0563, 2.0965
            (0.0001, 0.0003)        (0.0031, 0.0093)
  (0, 80]   1.0090, 2.0173          1.1066, 2.0206
            (0.0001, 0.0003)        (0.0001, 0.0004)

Table 2  For (θ_0, φ_0) = (2, 3), (a_1, b_1) = (1.5, 2.5) and (a_2, b_2) = (3, 3.5): MLEs, Bayes estimators and their standard errors (in parentheses)

  (0, T]    θ̂_T, φ̂_T               θ̃_T, φ̃_T
  (0, 10]   2.1267, 3.0669          1.9663, 3.0188
            (0.0161, 0.0044)        (0.0012, 0.0004)
  (0, 20]   2.0663, 3.0271          2.0351, 2.8563
            (0.0043, 0.0007)        (0.0012, 0.0207)
  (0, 30]   2.0513, 3.0262          1.9825, 3.0906
            (0.0026, 0.0007)        (0.0003, 0.0082)
  (0, 40]   2.0158, 3.0152          1.9966, 3.1323
            (0.0003, 0.0002)        (0.0001, 0.0175)
  (0, 50]   2.0200, 3.0240          1.9671, 2.9936
            (0.0004, 0.0006)        (0.0010, 0.0001)
  (0, 60]   2.0072, 3.0104          1.9643, 2.9299
            (0.0001, 0.0002)        (0.0013, 0.0049)
  (0, 70]   2.0119, 2.9944          2.0410, 2.9256
            (0.0002, 0.0001)        (0.0017, 0.0055)
  (0, 80]   2.0198, 3.0088          2.0941, 3.0678
            (0.0004, 0.0001)        (0.0088, 0.0046)
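The simulation scheme described above can be sketched as follows (a minimal illustration with assumed details: far fewer replications than the paper's 10000, and the first arrival drawn after t = 0 rather than an initial customer at t = 0):

```python
import random

# Minimal sketch of the Sect. 5 simulation: simulate an M/M/1 queue on
# (0, T], then compute the MLEs A(T)/sum(u_i), D(T)/sum(v_i) and the Bayes
# estimators under gamma priors, averaging over replications.
def simulate_mm1(theta0, phi0, T, rng):
    # Interarrival times u_i accumulate into arrival epochs a_i; departure
    # epochs follow the single-server recursion d_i = max(a_i, d_{i-1}) + v_i.
    us, vs = [], []
    a, d_prev = 0.0, 0.0
    while True:
        u = rng.expovariate(theta0)
        if a + u > T:              # next arrival falls outside (0, T]
            break
        a += u
        us.append(u)
        v = rng.expovariate(phi0)
        d_prev = max(a, d_prev) + v
        if d_prev <= T:            # service completed inside (0, T]
            vs.append(v)
    return us, vs

def estimators(us, vs, a1, b1, a2, b2):
    theta_mle = len(us) / sum(us)
    phi_mle = len(vs) / sum(vs)
    theta_bayes = (len(us) + a1) / (sum(us) + b1)
    phi_bayes = (len(vs) + a2) / (sum(vs) + b2)
    return theta_mle, phi_mle, theta_bayes, phi_bayes

rng = random.Random(3)
reps = [estimators(*simulate_mm1(1.0, 2.0, 80.0, rng), 1.5, 2.5, 3, 3.5)
        for _ in range(200)]
means = [sum(r[i] for r in reps) / len(reps) for i in range(4)]
print(means)   # rough agreement with (theta0, phi0, theta0, phi0)
```

With T = 80 and these priors, the averaged estimators land close to the true values, in line with Tables 1–3; the Bayes estimators carry a visible prior pull at moderate T, which fades as T grows.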
The computed values of the estimators and their respective standard errors are presented in Tables 1, 2 and 3. The values in parentheses indicate the standard errors.

Table 3  For (θ_0, φ_0) = (1, 2), (a_1, b_1) = (3, 5) and (a_2, b_2) = (4, 5.5): MLEs, Bayes estimators and their standard errors (in parentheses)

  (0, T]    θ̂_T, φ̂_T               θ̃_T, φ̃_T
  (0, 10]   1.0171, 2.1420          1.0164, 2.0274
            (0.0002, 0.0201)        (0.0003, 0.0007)
  (0, 20]   1.0616, 2.0595          1.0566, 1.9658
            (0.0037, 0.0035)        (0.0032, 0.0012)
  (0, 30]   1.0462, 2.0415          1.1018, 2.0134
            (0.0021, 0.0017)        (0.0104, 0.0002)
  (0, 40]   1.0134, 2.0216          1.0536, 2.0560
            (0.0002, 0.0005)        (0.0029, 0.0031)
  (0, 50]   1.0162, 2.0220          0.9989, 2.1331
            (0.0002, 0.0005)        (0.0029, 0.0031)
  (0, 60]   1.0153, 2.0163          1.0809, 2.0087
            (0.0002, 0.0002)        (0.0065, 0.0001)
  (0, 70]   1.0135, 2.0172          0.9642, 1.9908
            (0.0001, 0.0003)        (0.0012, 0.0001)
  (0, 80]   1.0182, 2.0120          1.0099, 1.9962
            (0.0117, 0.0001)        (0.0001, 0.0001)

6 Concluding Remarks

In the simulation study we have presented the estimates obtained by the proposed methods. The estimators are quite close to the true parameter values and their standard errors are negligible.

Acknowledgements  The authors are thankful to the referees for their comments and useful suggestions.

References

Acharya SK (1999) On normal approximation for maximum likelihood estimation from single server queues. Queueing Syst 31:207–216
Acharya SK, Mishra MN (2007) Bernstein–von Mises theorem for M|M|1 queue. Commun Stat Theory Methods 36:207–209
Basawa IV, Prabhu NU (1981) Estimation in single server queues. Naval Res Logist Q 28:475–487
Basawa IV, Prabhu NU (1988) Large sample inference from single server queues. Queueing Syst 3(4):289–304
Basawa IV, Prakasa Rao BLS (1980) Statistical inference for stochastic processes. Academic, London
Basawa IV, Prabhu NU, Lund R (1996) Maximum likelihood estimation for single server queues from waiting time data.
Queueing Syst 24:155–167
Chen CF (1985) On asymptotic normality of limiting density functions with Bayesian implications. J R Stat Soc Ser B 47:540–546
Hyde CC, Johnston IM (1979) On asymptotic posterior normality for stochastic processes. J R Stat Soc Ser B 41:184–189
Johnston R (1970) Asymptotic expansions associated with posterior distributions. Ann Math Stat 41:851–864
Kim JY (1998) Large sample properties of posterior densities, Bayesian information criterion and the likelihood principle in nonstationary time series models. Econometrica 66(2):359–380
Sweeting TJ, Adekola AO (1987) Asymptotic posterior normality for stochastic processes revisited. J R Stat Soc Ser B 49:215–222
Walker AM (1969) On the asymptotic behaviour of posterior distributions. J R Stat Soc Ser B 31:80–88
Weng RC, Tsai WC (2008) Asymptotic posterior normality for multiparameter problems. J Stat Plan Inference 138:4068–4080