Queueing Theory


Author: Ethan Schreiber - ethan@cs.ucla.edu
Tot Prob: $P(B) = P(A_1 \cap B) + \dots + P(A_n \cap B) = P(A_1)P(B|A_1) + \dots + P(A_n)P(B|A_n)$

Bayes: $P(A_i|B) = \dfrac{P(B|A_i)P(A_i)}{P(B)} = \dfrac{P(B|A_i)P(A_i)}{\sum_{k=1}^{n} P(B|A_k)P(A_k)}$
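A quick numeric check of the two formulas above, in Python (the partition {A1, A2, A3} and all probability values are made-up illustration numbers):

    # Hypothetical partition A1..A3 with priors P(Ai) and likelihoods P(B|Ai)
    priors = [0.5, 0.3, 0.2]        # P(A_i), must sum to 1
    likelihoods = [0.9, 0.5, 0.1]   # P(B | A_i)

    # Total probability: P(B) = sum_i P(A_i) P(B|A_i)
    p_b = sum(p * l for p, l in zip(priors, likelihoods))

    # Bayes: P(A_i | B) = P(B|A_i) P(A_i) / P(B)
    posteriors = [p * l / p_b for p, l in zip(priors, likelihoods)]
    print(p_b, posteriors, sum(posteriors))   # posteriors sum to 1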

Counting            Order                    No Order
Replacement         $n^r$
No Replacement      $\frac{n!}{(n-r)!}$      $\binom{n}{r} = \frac{n!}{(n-r)!\,r!}$

Multinomial: $f(x_1, \dots, x_k; n, p_1, \dots, p_k) = P(X_1 = x_1, \dots, X_k = x_k) = \frac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k}$
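The multinomial pmf transcribes directly into Python with only the standard library; the counts and probabilities below are illustrative:

    from math import factorial

    def multinomial_pmf(xs, ps):
        # P(X1=x1,...,Xk=xk) = n!/(x1!...xk!) * p1^x1 ... pk^xk, with n = sum(xs)
        n = sum(xs)
        coeff = factorial(n)
        for x in xs:
            coeff //= factorial(x)
        prob = 1.0
        for x, p in zip(xs, ps):
            prob *= p ** x
        return coeff * prob

    # Example: n = 6 trials over 3 categories with probabilities 0.5, 0.3, 0.2
    print(multinomial_pmf([3, 2, 1], [0.5, 0.3, 0.2]))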

Name / Definition / E[X] / var(X) / notes:

Bernoulli: $f_X(x) = \begin{cases} p, & x = 1 \\ 1-p, & x = 0 \end{cases}$   $E[X] = p$   $var(X) = p(1-p)$
Binomial: $p_X(k) = \binom{n}{k} p^k (1-p)^{n-k}$   $E[X] = np$   $var(X) = np(1-p)$   $P(X \le k) = \sum_{i=0}^{k} \binom{n}{i} p^i (1-p)^{n-i}$
Geometric: $p_X(k) = (1-p)^{k-1} p$   $E[X] = \frac{1}{p}$   $var(X) = \frac{1-p}{p^2}$   $P(X > k) = (1-p)^k$, $P(X \le k) = 1 - (1-p)^k$, $P(X = t) = P(X = k+t-1 \mid X \ge k) = P(X = k+t \mid X > k)$
Exponential: $f_X(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0 \\ 0, & \text{otherwise} \end{cases}$   $E[X] = \frac{1}{\lambda}$   $var(X) = \frac{1}{\lambda^2}$   $P(X \ge a) = e^{-\lambda a}$
Poisson: $p_X(k) = e^{-\lambda} \frac{\lambda^k}{k!}$   $E[X] = \lambda$   $var(X) = \lambda$   $\lambda = np$ for large $n$ and small $p$
Conditional: $p_{X|A}(x) = P(X = x \mid A) = \frac{P(\{X = x\} \cap A)}{P(A)}$   Memoryless: $P(X = k+t-1 \mid X \ge k) = P(X = t)$
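A small sanity check of the geometric tail and memoryless entries above (p, k, t are arbitrary example values):

    p = 0.3
    pmf = lambda k: (1 - p) ** (k - 1) * p    # p_X(k), k = 1, 2, ...
    tail = lambda k: (1 - p) ** k             # P(X > k)

    k, t = 4, 2
    # Memoryless: P(X = k+t-1 | X >= k) = P(X = t); note P(X >= k) = (1-p)^(k-1)
    lhs = pmf(k + t - 1) / tail(k - 1)
    print(lhs, pmf(t))                        # the two values agree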
$\sum_{i=0}^{n} c^i = \frac{c^{n+1}-1}{c-1}$,  $\sum_{i=0}^{\infty} c^i = \frac{1}{1-c}$,  $\sum_{i=0}^{n} i c^i = \frac{n c^{n+2} - (n+1) c^{n+1} + c}{(c-1)^2}$,  $\sum_{i=0}^{\infty} i c^i = \frac{c}{(1-c)^2}$,  for $|c| < 1$ (infinite sums)
$\Lambda_i = \sum_j \Lambda_{j,i}$   $\sum_j \Lambda_{j,i} = \sum_j \Lambda_{i,j}$   $\Lambda_i = \lambda_i \pi_i = \frac{\pi_i}{T_i}$

Flow Balance Equations ($f_{j,i}$ = fraction of departures from $s_j$ to $s_i$):

$\Lambda_i^O = \Lambda_i^I$   $\pi_i \sum_j \lambda_{i,j} = \sum_j \pi_j \lambda_{j,i}$   $\pi_i \lambda_i = \sum_j \pi_j \lambda_j f_{j,i}$   $\pi_i \frac{1}{T_i} = \sum_j \pi_j \frac{1}{T_j} f_{j,i}$   $\pi_i = \sum_j \pi_j f_{j,i}$ (all $T_i$ equal)

CTMC: $Q = \begin{pmatrix} -\lambda_A - \lambda_B & \lambda_A & \lambda_B \\ \lambda_{RA} & -\lambda_{RA} & 0 \\ \lambda_{RB} & 0 & -\lambda_{RB} \end{pmatrix}$

$0 = (-\lambda_A - \lambda_B)\pi_1 + \lambda_{RA}\pi_2 + \lambda_{RB}\pi_3$
$0 = \lambda_A \pi_1 - \lambda_{RA}\pi_2$
$0 = \lambda_B \pi_1 - \lambda_{RB}\pi_3$
$\pi_1 + \pi_2 + \pi_3 = 1$
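The stationary distribution of this CTMC can be computed numerically by solving $\pi Q = 0$ with $\sum_i \pi_i = 1$; a minimal numpy sketch, using made-up rate values:

    import numpy as np

    # Illustrative rates (not from the example): lambda_A, lambda_B, lambda_RA, lambda_RB
    lA, lB, lRA, lRB = 1.0, 2.0, 3.0, 4.0
    Q = np.array([[-lA - lB, lA,   lB  ],
                  [ lRA,    -lRA,  0.0 ],
                  [ lRB,     0.0, -lRB ]])

    # pi Q = 0 with sum(pi) = 1: keep two balance equations (rows of Q^T),
    # replace the third by the normalization row, and solve the linear system.
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print(pi, pi @ Q)                  # pi @ Q should be ~0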

Little’s Result - $\Lambda T = \bar{n}$; $\Lambda$ = mean arrival rate to box, $\bar{n}$ = mean entities in box, $T$ = mean time in box
M/M/1 - $\pi_{n-1}\lambda = \pi_n\mu,\ \forall n \ge 1$   $\pi_0 = \frac{1}{\sum_{n=0}^{\infty}\rho^n} = 1-\rho$   $\pi_n = (1-\rho)\rho^n$   $\rho = \frac{\lambda}{\mu}$   $\bar{n} = \frac{\rho}{1-\rho}$   $T = \frac{\bar{n}}{\lambda} = \frac{1}{\mu(1-\rho)}$
$P(\text{no wait}) = \pi_0$   $P(\text{leave empty system}) = \frac{\pi_1\mu}{\lambda} = \frac{\pi_0\lambda}{\lambda} = \pi_0$
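A sketch evaluating the M/M/1 formulas for example rates ($\lambda$ and $\mu$ are arbitrary values with $\rho < 1$):

    lam, mu = 2.0, 5.0                    # example arrival and service rates
    rho = lam / mu

    pi0 = 1 - rho                         # P(empty system) = P(no wait)
    pi = lambda n: (1 - rho) * rho ** n   # stationary distribution
    n_bar = rho / (1 - rho)               # mean number in system
    T = n_bar / lam                       # mean time in system (Little's result)

    print(pi0, pi(3), n_bar, T, 1 / (mu * (1 - rho)))   # last two values agree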
M/M/1 (state dependent) - $\pi_n\mu_n = \pi_{n-1}\lambda_{n-1},\ \forall n \ge 1$   $\pi_n = \pi_0\prod_{k=1}^{n}\frac{\lambda_{k-1}}{\mu_k}$   $\pi_0 = \frac{1}{1 + \sum_{n=1}^{\infty}\prod_{k=1}^{n}\frac{\lambda_{k-1}}{\mu_k}}$
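The product-form solution can be evaluated by truncating the infinite sum for $\pi_0$; a sketch assuming generic rate functions lam(n), mu(n) (the particular rates below are just an example, chosen so they reproduce the discouraged-arrivals case that follows):

    # Hypothetical state-dependent rates
    lam = lambda n: 2.0 / (n + 1)    # e.g. discouraged arrivals: lambda/(n+1)
    mu  = lambda n: 5.0              # constant service rate

    def unnormalized(n):
        # prod_{k=1..n} lam(k-1)/mu(k); equals 1 for n = 0
        prod = 1.0
        for k in range(1, n + 1):
            prod *= lam(k - 1) / mu(k)
        return prod

    N_TRUNC = 200                    # truncation point for the infinite sum
    norm = sum(unnormalized(n) for n in range(N_TRUNC + 1))
    pi = [unnormalized(n) / norm for n in range(N_TRUNC + 1)]
    print(pi[0], sum(pi))            # pi_0 (~exp(-0.4) here) and total ~1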
Finite Population N - Same except now $\sum_{n=0}^{N}\pi_n = 1$
Discouraged Arrivals - $\lambda_n = \frac{\lambda}{n+1}$, $\mu_n$ is constant: $\pi_n = \pi_0\frac{\lambda^n}{n!\,\mu^n} = \pi_0\frac{\rho^n}{n!}$   $\pi_0 = \frac{1}{\sum_{n=0}^{\infty}\rho^n/n!} = \frac{1}{e^{\rho}} = e^{-\rho}$
Combined: $\pi_n = \frac{\rho^n e^{-\rho}}{n!}$ (Poisson)   $\bar{n} = \rho$   $\bar{\lambda} = \mu(1-e^{-\rho}) = \mu(1-\pi_0)$   $T = \frac{\bar{n}}{\bar{\lambda}} = \frac{\rho}{\mu(1-e^{-\rho})}$   $\lim_{\rho\to 0} T = \frac{1}{\mu}$
Infinite Servers: $\mu_n = n\mu$: $\pi_n = \pi_0\frac{\rho^n}{n!}$   $T = \frac{\bar{n}}{\lambda} = \frac{\rho}{\lambda} = \frac{1}{\mu}$
Finite Population: mean arrival rate: $n\lambda$, mean departure rate: $\mu$ (except in state $N$, when it is 0)
Finite Buffer Space: $\pi_n = \frac{\lambda_{n-1}}{\mu}\pi_{n-1}$   mean conditional rate of loss: $\pi_N\lambda_N$   total arrival rate of customers: $\sum_{n=0}^{N}\pi_n\lambda_n$   fraction of lost customers: $\frac{\pi_N\lambda_N}{\sum_{n=0}^{N}\pi_n\lambda_n}$.   $\lambda_n = \lambda,\ \forall n \implies$ this reduces to $\pi_N$
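A sketch of the loss calculation for constant $\lambda$ and $\mu$, confirming that the fraction lost reduces to $\pi_N$ (the rates and buffer size are example values):

    lam, mu, N = 2.0, 5.0, 4          # example arrival rate, service rate, buffer size

    # For constant rates, pi_n is proportional to (lam/mu)^n, n = 0..N
    weights = [(lam / mu) ** n for n in range(N + 1)]
    total = sum(weights)
    pi = [w / total for w in weights]

    total_arrival_rate = sum(p * lam for p in pi)   # sum_n pi_n * lambda_n
    loss_rate = pi[N] * lam                         # arrivals that find the buffer full
    print(loss_rate / total_arrival_rate, pi[N])    # equal for constant lambda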
Transient Analysis

$Q^{(0)} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$   $P = \begin{pmatrix} \frac{3}{9} & \frac{2}{9} & \frac{4}{9} \\ 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 1 \end{pmatrix}$

$Q^{(1)} = Q^{(0)} P = \begin{bmatrix} \frac{3}{9} & \frac{2}{9} & \frac{4}{9} \end{bmatrix}$   $Q^{(2)} = Q^{(1)} P = \begin{bmatrix} \frac{1}{9} & \frac{5}{27} & \frac{19}{27} \end{bmatrix}$   $P[N > 2] = 1 - P[N \le 2] = 1 - \frac{19}{27} = \frac{8}{27}$
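The same transient calculation reproduces directly in numpy ($P$ and $Q^{(0)}$ are exactly the values from the example above):

    import numpy as np

    P = np.array([[3/9, 2/9, 4/9],
                  [0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0]])
    Q0 = np.array([1.0, 0.0, 0.0])

    Q1 = Q0 @ P                  # [3/9, 2/9, 4/9]
    Q2 = Q1 @ P                  # [1/9, 5/27, 19/27]
    print(Q2, 1 - Q2[2])         # P[N > 2] = 1 - 19/27 = 8/27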

HMM - $N$: # states, $s_i$: $i$th state, $q_t$: state at time $t$; $v_k$, $1 \le k \le M$: distinct observation values; $O_j$: observed value at time $j$.

$b_j(k) = P(O_t = v_k \mid q_t = s_j)$ (conditional distribution for the observed value given the state of the underlying MC), $B = \{b_j(k)\}$
$\alpha_t(O_t, q_t = s_i \mid M) = P[O_t, q_t = s_i \mid M]$   $\alpha_{t+1}(O_{t+1}, q_{t+1} = s_j \mid M) = \sum_{i=1}^{N}\alpha_t(O_t, q_t = s_i \mid M)\, a_{i,j}\, b_j(O_{t+1})$
i.e. $\alpha_1(o_1 = a, q_1 = s_1 \mid M) = \pi[1] \cdot b_1[a] = \pi[1] \cdot P(a \mid 1)$
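A minimal sketch of the forward recursion; $A$, $B$, $\pi$, and the observation sequence are illustrative placeholders (observation symbols indexed 0..M-1):

    import numpy as np

    # Hypothetical 2-state HMM with 2 observation symbols
    A  = np.array([[0.7, 0.3],
                   [0.4, 0.6]])    # a_{i,j} = P(q_{t+1} = s_j | q_t = s_i)
    B  = np.array([[0.9, 0.1],
                   [0.2, 0.8]])    # b_j(k)  = P(O_t = v_k | q_t = s_j)
    pi = np.array([0.6, 0.4])      # initial state distribution

    obs = [0, 1, 1]                # observed sequence O_1..O_T as symbol indices

    # alpha_1(i) = pi[i] b_i(O_1);  alpha_{t+1}(j) = (sum_i alpha_t(i) a_{i,j}) b_j(O_{t+1})
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]

    print(alpha, alpha.sum())      # alpha_T(i) = P[O_1..O_T, q_T = s_i]; sum = P[O_1..O_T]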
Semi Markov 1

$P_{idle \to request} = P(t \le T_0) = F_X(T_0)$
$P_{idle \to sleep} = P(t > T_0) = 1 - F_X(T_0)$

Mean Sojourn Time In:
Request: $E[R] = \int_0^{\infty} r f_R(r)\,dr$
Idle: $P[X \le T_0] \cdot E[X \mid X \le T_0] + P[X > T_0] \cdot T_0 = \int_0^{T_0} x f_X(x)\,dx + (1 - F_X(T_0))\,T_0$
Sleep (remaining time in idle): $f_Y(y) = \frac{f_X(y + T_0)}{1 - F_X(T_0)}$   $E[Y] = \int_0^{\infty} y f_Y(y)\,dy$

Solve $V = VP$ for the visit ratios of the Markov chain; use $V$ to solve for $\pi$, the fraction of time spent in each state:
$\pi_i = \frac{v_i T_i}{\sum_{i=1}^{4} v_i T_i}$
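A sketch of this recipe with an illustrative 3-state embedded chain $P$ and mean sojourn times $T$ (all values made up):

    import numpy as np

    P = np.array([[0.0, 0.6, 0.4],
                  [1.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])   # embedded transition matrix
    T = np.array([2.0, 1.0, 5.0])     # mean sojourn time in each state

    # Visit ratios: left eigenvector V = V P, normalized to sum to 1
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    v = v / v.sum()

    # Fraction of time in each state: pi_i = v_i T_i / sum_j v_j T_j
    pi = v * T / (v * T).sum()
    print(v, pi)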

Semi Markov 2

Time between arrivals: $f_X(t) = \begin{cases} \lambda_a e^{-\lambda_a t} & \text{with probability } p \\ \lambda_b e^{-\lambda_b t} & \text{with probability } 1-p \end{cases}$
Service rate is $\mu$. After a request is done, if the server remains idle for $T$ (a constant), it goes to sleep.

Sojourn Times
$P[(B, a)] = \frac{1}{\mu + \lambda_b}$
$P[(B, b)] = \frac{1}{\mu + \lambda_a}$
$P[(I, a)] = \int_0^{T} \lambda_a e^{-\lambda_a t}\, t\, dt + T e^{-\lambda_a T}$
$P[(I, b)] = \int_0^{T} \lambda_b e^{-\lambda_b t}\, t\, dt + T e^{-\lambda_b T}$
$P[(S, a)] = \frac{1}{\lambda_a}$
$P[(S, b)] = \frac{1}{\lambda_b}$

Probabilities:
$B,a \to I,a = p[1 - e^{-\lambda_a T}]$   $B,a \to B,b = \frac{\lambda_b}{\lambda_b + \mu}$   $B,b \to B,a = \frac{\lambda_a}{\lambda_a + \mu}$
$B,b \to I,b = \frac{\mu}{\lambda_a + \mu}$   $I,a \to B,a = p[1 - e^{-\lambda_a T}]$   $I,a \to B,b = (1-p)[1 - e^{-\lambda_a T}]$
$I,a \to S,a = e^{-\lambda_a T}$   $I,b \to B,a = p[1 - e^{-\lambda_b T}]$   $I,b \to B,b = (1-p)[1 - e^{-\lambda_b T}]$
$I,b \to S,b = e^{-\lambda_b T}$   $S,a \to B,a = p$   $S,a \to B,b = 1-p$
$S,b \to B,a = p$   $S,b \to B,b = 1-p$

PDF:
$P(X \in B) = \int_B f_X(x)\,dx$   $P(a \le X \le b) = \int_a^b f_X(x)\,dx$   $P(-\infty < X < \infty) = \int_{-\infty}^{\infty} f_X(x)\,dx = 1$
$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx$   $E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$   $Var(X) = E[(X - E[X])^2] = \int_{-\infty}^{\infty} (x - E[X])^2 f_X(x)\,dx$

$0 \le Var(X) = E[X^2] - (E[X])^2$

$Y = aX + b \implies E[Y] = aE[X] + b$   $Var(Y) = a^2 Var(X)$
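A quick Monte Carlo check of the last two identities, with $X$ exponential and arbitrary $a$, $b$ (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    lam, a, b = 2.0, 3.0, 1.0

    x = rng.exponential(scale=1 / lam, size=1_000_000)   # E[X] = 1/lam, Var(X) = 1/lam^2
    y = a * x + b

    print(y.mean(), a / lam + b)        # E[Y] vs a E[X] + b
    print(y.var(),  a ** 2 / lam ** 2)  # Var(Y) vs a^2 Var(X)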


$f_{X|A}(x) = \begin{cases} \frac{f_X(x)}{P(X \in A)} & \text{if } x \in A \\ 0 & \text{otherwise} \end{cases}$   $P(X \in B \mid A) = \int_B f_{X|A}(x)\,dx$   $P(X \in B \mid X \in A) = \int_B f_{X|A}(x)\,dx$
$E[X \mid A] = \int_{-\infty}^{\infty} x f_{X|A}(x)\,dx$   $E[g(X) \mid A] = \int_{-\infty}^{\infty} g(x) f_{X|A}(x)\,dx$
disjoint $\{A_1 \dots A_n\} \implies f_X(x) = \sum_{i=1}^{n} P(A_i) f_{X|A_i}(x)$   $E[X] = \sum_{i=1}^{n} P(A_i) E[X \mid A_i]$   $E[g(X)] = \sum_{i=1}^{n} P(A_i) E[g(X) \mid A_i]$
$F_X(x) = P(X \le x),\ \forall x$   $x \le y \implies F_X(x) \le F_X(y)$   $F_X(k) = \sum_{i=-\infty}^{k} p_X(i)$
$p_X(k) = P(X \le k) - P(X \le k-1) = F_X(k) - F_X(k-1)$


CDF:
$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt$   $f_X(x) = \frac{dF_X}{dx}(x)$

$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$   $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$
$E[X] = \sum_x x \cdot p_X(x)$   $Y = g(X) \implies E[Y] = \sum_x g(x) \cdot p_X(x)$
