Lecture 20
David Aldous
14 October 2015

Continuous-time Markov chains [PK] section 6.6

In discrete time $t = 0, 1, 2, \ldots$ we specify a Markov chain by specifying the matrix $P$ of transition probabilities
$$p_{ij} = P(X(t+1) = j \mid X(t) = i, \text{past}).$$
In continuous time $0 \le t < \infty$ we specify transition rates
$$q_{ij} = \lim_{\delta \downarrow 0} \delta^{-1} P(X(t+\delta) = j \mid X(t) = i, \text{past})$$
or informally
$$P(X(t+dt) = j \mid X(t) = i) = q_{ij}\, dt$$
but note these are defined only for $j \ne i$. Then note that
$$P(X(t+dt) \ne i \mid X(t) = i) = \sum_{j \ne i} P(X(t+dt) = j \mid X(t) = i) = \sum_{j \ne i} q_{ij}\, dt = q_i\, dt$$
where $q_i = \sum_{j \ne i} q_{ij}$.

In discrete time the time-$t$ distribution $\pi(t) = (\pi_i(t)) = (P(X(t) = i))$ evolves as $\pi(t+1) = \pi(t)P$. In continuous time we have [board]
$$\tfrac{d}{dt}\pi_j(t) = \sum_{i \ne j} \pi_i(t)\, q_{ij} - \pi_j(t)\, q_j, \qquad q_j := \sum_{k \ne j} q_{jk}.$$
We can re-write this in vector-matrix notation as
$$\tfrac{d}{dt}\pi(t) = \pi(t)\, Q$$
where $Q$ is the matrix with off-diagonal entries $(q_{ij})$ and with diagonal entries defined by
$$q_{ii} = -q_i = -\sum_{j \ne i} q_{ij}.$$
Note this implies that the condition for a probability distribution $\pi$ to be a stationary distribution is $\pi Q = 0$ (the zero vector). Note [PK] write $A$ instead of $Q$.

Starting at state $i$, $S_i = \min\{t : X(t) \ne i\}$ is called the sojourn time in $i$. It is the time spent at $i$ before jumping to another state. The fact
$$P(X(t+dt) \ne i \mid X(t) = i) = q_i\, dt$$
is the fact
$$P(S_i \in [t, t+dt] \mid S_i > t) = q_i\, dt$$
which shows that $S_i$ has Exponential($q_i$) distribution. At time $S_i$ the process jumps to another state: the probability it jumps to state $j$ is [board]
$$\hat{p}_{ij} = q_{ij}/q_i.$$

This leads to a "jump and hold" description of a continuous-time Markov chain. After jumping into a state $i$, the process remains in state $i$ for a random time with Exponential($q_i$) distribution. Then it jumps to some other state, to state $j \ne i$ with probability $\hat{p}_{ij} = q_{ij}/q_i$. So the matrix $\hat{P} = (\hat{p}_{ij})$, where $\hat{p}_{ii} = 0$, is the transition matrix for the discrete-time jump chain $\hat{X}(0), \hat{X}(1), \ldots$ that records the successive states visited.

The relationship between the stationary distributions (where they exist) $\pi$ and $\hat{\pi}$ can be seen using a long-run argument [board] or algebraically from the equations $\hat{\pi}\hat{P} = \hat{\pi}$, $\pi Q = 0$:
$$\pi_i = c\, \hat{\pi}_i / q_i \ \text{ for } c = \frac{1}{\sum_j \hat{\pi}_j/q_j}; \qquad \hat{\pi}_i = \frac{q_i \pi_i}{\sum_j q_j \pi_j}.$$

In very special cases we can solve the differential equations $\tfrac{d}{dt}\pi(t) = \pi(t)Q$, where $Q$ is the matrix with off-diagonal entries $(q_{ij})$ and with diagonal entries defined by $q_{ii} = -q_i = -\sum_{j \ne i} q_{ij}$.

Example: For the rate-$\lambda$ PPP on $[0, \infty)$ the counting process $N(t)$ is the continuous-time chain with $q_{i,i+1} = \lambda$.

Example: 2-state chain: $q_{01} = \lambda$, $q_{10} = \mu$. [board]
$$P_0(X(t) = 0) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\, \exp(-(\lambda+\mu)t).$$

Example: Yule process, parameter $\beta > 0$, states $1, 2, 3, \ldots$, transition rates $q_{i,i+1} = \beta i$, $X(0) = 1$. The differential equations are
$$\tfrac{d}{dt}\pi_j(t) = \beta[(j-1)\pi_{j-1}(t) - j\pi_j(t)].$$
One can solve these equations – see [PK] section 6.1.3:
$$\pi_j(t) = P(X(t) = j) = e^{-\beta t}(1 - e^{-\beta t})^{j-1}, \quad j = 1, 2, \ldots$$
In other words $X(t)$ has Geometric($e^{-\beta t}$) distribution, so $EX(t) = e^{\beta t}$.

The Yule process is a basic example of a continuous-time branching process [picture on board]. The Yule process is also an example of a "pure birth" process, meaning the only transitions are $i \to i+1$. For such processes the distribution of $X(t)$ can be related to the sum of independent Exponential RVs – see [PK] section 6.1.2.
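The stationary condition $\pi Q = 0$ together with $\sum_i \pi_i = 1$ is just a linear system. As an illustration only (not part of the lecture, and using a made-up 3-state generator $Q$), here is a minimal Python/numpy sketch that replaces one redundant equation of $Q^{\mathsf T}\pi^{\mathsf T} = 0$ by the normalization constraint.

```python
import numpy as np

# Hypothetical 3-state generator: off-diagonal entries are the rates q_ij,
# each diagonal entry is q_ii = -q_i (minus the sum of the rest of its row).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  1.0, -2.0]])

# Solve pi Q = 0 with sum(pi) = 1: transpose the system, then overwrite
# one (redundant) equation with the normalization constraint.
A = Q.T.copy()
A[-1, :] = 1.0                       # replace last equation by sum(pi) = 1
b = np.zeros(Q.shape[0])
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("stationary pi:", pi)
print("check pi Q   :", pi @ Q)      # should be (numerically) the zero vector
```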
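The jump-and-hold description translates directly into a simulation scheme: hold in state $i$ for an Exponential($q_i$) time, then jump according to $\hat{p}_{ij} = q_{ij}/q_i$. The sketch below is a hypothetical helper (Python/numpy, not from [PK]) implementing exactly that for a finite-state chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctmc(Q, i0, t_end):
    """Jump-and-hold simulation of a finite-state CTMC with generator Q.

    Returns the jump times and the states visited up to time t_end.
    """
    n = Q.shape[0]
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        qi = -Q[i, i]                        # total rate of leaving state i
        if qi == 0.0:                        # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / qi)       # sojourn time ~ Exponential(q_i)
        if t > t_end:
            break
        p_hat = Q[i].copy()                  # jump-chain row: q_ij / q_i, zero diagonal
        p_hat[i] = 0.0
        p_hat /= qi
        i = rng.choice(n, p=p_hat)
        times.append(t)
        states.append(i)
    return times, states
```

Running this with the generator $Q$ from the previous sketch and a long horizon, the fraction of time spent in each state should approach the stationary $\pi$ found from $\pi Q = 0$.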
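To see the conversion $\pi_i = c\,\hat{\pi}_i/q_i$ numerically, one can build the jump matrix $\hat{P}$, compute its stationary distribution $\hat{\pi}$, and renormalize. The sketch below (again numpy, with the same made-up $Q$) uses a left eigenvector of $\hat{P}$ for eigenvalue 1; this is one standard way to get $\hat{\pi}$, not the only one.

```python
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  1.0, -2.0]])
q = -np.diag(Q)                          # exit rates q_i

# Jump-chain transition matrix P_hat = q_ij / q_i with zero diagonal.
P_hat = Q / q[:, None]
np.fill_diagonal(P_hat, 0.0)

# Stationary distribution of the jump chain: left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi_hat = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi_hat /= pi_hat.sum()

# Convert: pi_i is proportional to pi_hat_i / q_i, then renormalize.
pi = pi_hat / q
pi /= pi.sum()
print("pi via jump chain:", pi)          # should match the solution of pi Q = 0
```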
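For a finite chain the solution of $\tfrac{d}{dt}\pi(t) = \pi(t)Q$ is $\pi(t) = \pi(0)e^{Qt}$, so the displayed 2-state formula can be checked against a matrix exponential. A small sketch using scipy.linalg.expm (scipy and the particular parameter values are assumptions for illustration, not part of the lecture):

```python
import numpy as np
from scipy.linalg import expm

lam, mu, t = 2.0, 3.0, 0.7               # illustrative rates and time

Q = np.array([[-lam,  lam],
              [  mu,  -mu]])             # q_01 = lambda, q_10 = mu

pi0 = np.array([1.0, 0.0])               # start in state 0
pi_t = pi0 @ expm(Q * t)                 # pi(t) = pi(0) exp(Qt)

formula = mu / (lam + mu) + (lam / (lam + mu)) * np.exp(-(lam + mu) * t)
print(pi_t[0], formula)                  # the two numbers should agree
```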
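The Yule process can also be simulated by jump-and-hold, since the sojourn in state $i$ is Exponential($\beta i$) and the only possible jump is to $i+1$. The sketch below (illustrative parameter values, not from the lecture) compares the empirical mean of $X(t)$ with $e^{\beta t}$, the mean of the Geometric($e^{-\beta t}$) distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def yule(beta, t_end):
    """One sample of X(t_end) for the Yule process started at X(0) = 1."""
    t, x = 0.0, 1
    while True:
        t += rng.exponential(1.0 / (beta * x))   # sojourn in state x ~ Exponential(beta * x)
        if t > t_end:
            return x
        x += 1                                   # pure-birth jump x -> x + 1

beta, t_end, n_runs = 0.8, 2.0, 20000
samples = np.array([yule(beta, t_end) for _ in range(n_runs)])
print("empirical mean EX(t):", samples.mean())
print("exp(beta * t)       :", np.exp(beta * t_end))
```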