# Transient Analysis of Markov Processes

Lecture 2: Algorithmic Methods for Transient Analysis of Continuous-Time Markov Chains.
Dr. Ahmad Al Hanbali, Industrial Engineering Department, University of Twente ([email protected]).

This lecture deals with continuous-time Markov processes, as opposed to the discrete-time Markov chains of Lecture 1.

## Objectives

1. Find the equilibrium distribution.
2. Find transient probabilities, via (a) matrix decomposition and (b) the uniformization method.
3. Find transient measures.

## Background (1)

Let $\{X(t),\, t \geq 0\}$ denote a continuous-time stochastic process with state space $S$. It is a Markov chain if the conditional transition probability satisfies
$$P(X(t+s) = j \mid X(s) = i,\ X(u),\, 0 \leq u < s) = P(X(t+s) = j \mid X(s) = i)$$
for every $s, t \geq 0$ and $i, j \in S$. It is homogeneous (or stationary) if $P(X(t+s) = j \mid X(s) = i)$ does not depend on $s$, and irreducible if all states can communicate.

## Background (2)

Define $q_{ij}$, the (infinitesimal) transition rate from state $i$ to state $j$ of a Markov process. Let $T_1, T_2, \ldots$ denote the epochs of transition of the CTMC (by convention $T_0 = 0$). Then, for $j \neq i$,
$$P\big(X(T_{n+1}) = j \mid X(T_n) = i\big) = \frac{q_{ij}}{q_i},$$
where $q_i = \sum_{j \neq i} q_{ij}$ is the total outgoing rate of state $i$.

## Background (3)

Let $q_{ii} = -q_i$. The matrix $Q = (q_{ij})$ is called the generator of the continuous-time Markov chain (CTMC). Note: let $\pi = (\pi_i)$ denote the equilibrium probabilities. The equilibrium equations of the CTMC give, in matrix form,
$$\pi Q = 0, \qquad \pi e = 1,$$
where $e$ is the column vector of ones. Idea: take advantage of the methods developed for discrete-time Markov chains in Lecture 1.
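As a concrete illustration (not from the slides), here is a minimal pure-Python sketch for a hypothetical two-state machine: state 0 is "up", state 1 is "down", with assumed failure rate $\alpha = 2$ and repair rate $\beta = 3$. It builds the generator $Q$ and checks the equilibrium equations.

```python
# Hypothetical 2-state chain: rate 0 -> 1 is alpha, rate 1 -> 0 is beta.
alpha, beta = 2.0, 3.0

# Generator: off-diagonal entries are the rates q_ij,
# diagonal entries are q_ii = -q_i (minus the total outgoing rate).
Q = [[-alpha, alpha],
     [beta, -beta]]

# Each row of a generator sums to zero.
for row in Q:
    assert abs(sum(row)) < 1e-12

# For a 2-state chain the equilibrium distribution has the closed form
# pi = (beta, alpha) / (alpha + beta).
pi = [beta / (alpha + beta), alpha / (alpha + beta)]

# Verify the equilibrium equations pi Q = 0 and pi e = 1.
piQ = [sum(pi[i] * Q[i][j] for i in range(2)) for j in range(2)]
print(piQ, sum(pi))
```

For larger chains one would solve the linear system $\pi Q = 0$, $\pi e = 1$ numerically instead of using a closed form.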

## An equivalent discrete time Markov chain

The equilibrium distribution can be obtained from an equivalent Markov chain via an elementary transformation. Let $\Delta t$ be a real number such that $\Delta t \leq 1 / \max_i q_i$. Then $P = I + \Delta t\, Q$ is a stochastic matrix, i.e., its entries are between 0 and 1, and its rows sum to 1. Further, $\pi P = \pi$ if and only if $\pi Q = 0$. The Markov chain with transition probability matrix $P$ is a discretization of the Markov process with generator $Q$, with time step $\Delta t$.
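The discretization $P = I + \Delta t\, Q$ can be sketched as follows, reusing the hypothetical two-state generator from above (rates $\alpha = 2$, $\beta = 3$ are assumptions for illustration):

```python
alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]

# The time step must satisfy dt <= 1 / max_i q_i; here max_i q_i = 3.
dt = 1.0 / max(alpha, beta)

# P = I + dt * Q
P = [[(1.0 if i == j else 0.0) + dt * Q[i][j] for j in range(2)]
     for i in range(2)]

# P is stochastic: entries in [0, 1], rows summing to 1.
for row in P:
    assert all(0.0 <= p <= 1.0 for p in row)
    assert abs(sum(row) - 1.0) < 1e-12

# pi P = pi holds precisely because pi Q = 0.
pi = [beta / (alpha + beta), alpha / (alpha + beta)]
piP = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
print(piP)  # close to pi = [0.6, 0.4]
```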

## Uniformization of CTMC

To have the same mean sojourn time in all states per visit, the uniformization of a CTMC introduces fictitious transitions from states to themselves. Let $\lambda \geq \max_i q_i$, and introduce a fictitious transition from each state $i$ to itself with rate $\lambda - q_i$. This yields:

1. The equilibrium distribution of $Q$ does not change.
2. The outgoing rate of state $i$ becomes $\lambda$, the same for all states.
3. The equilibrium distribution of the uniformized Markov process of $Q$ is the same as that of the Markov chain with transition matrix $P = I + Q/\lambda$, embedded at the epochs of transitions (jumps).

The transitions of the uniformized process take place according to a Poisson process with rate $\lambda$.

## Equilibrium distribution

All methods developed for solving the equilibrium equation of a discrete-time Markov chain can be applied to the uniformized Markov chain with transition matrix $P$.
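For instance, the power method from the discrete-time setting applies directly to the uniformized chain. A minimal sketch, again assuming the hypothetical two-state chain with rates $\alpha = 2$, $\beta = 3$:

```python
alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]

# Uniformization: any lam >= max_i q_i works; take lam = max_i q_i = 3.
lam = max(alpha, beta)
P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(2)]
     for i in range(2)]

# Power iteration: repeatedly apply x <- x P; x converges to the
# equilibrium distribution of the uniformized chain (and hence of Q).
x = [1.0, 0.0]  # arbitrary starting distribution
for _ in range(200):
    x = [sum(x[i] * P[i][j] for i in range(2)) for j in range(2)]

print(x)  # approaches (beta, alpha)/(alpha + beta) = (0.6, 0.4)
```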

## Transient Behavior of CTMC

Kolmogorov's equations are needed for the transient analysis. Define the transient probability
$$p_{ij}(t) = P\big(X(t) = j \mid X(0) = i\big).$$
Then, for $t \geq 0$, Kolmogorov's equations are a set of differential equations for the $p_{ij}(t)$.
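Before turning to the algorithmic methods below, it may help to see the differential equations solved numerically. This sketch (not from the slides) integrates the forward equations $P'(t) = P(t)\,Q$ with explicit Euler for the hypothetical two-state chain, and compares against the known closed form $p_{00}(t) = 0.6 + 0.4\,e^{-5t}$ for rates $\alpha = 2$, $\beta = 3$:

```python
import math

alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]

def forward_euler(t, steps=20000):
    """Integrate Kolmogorov's forward equations P'(t) = P(t) Q from P(0) = I."""
    h = t / steps
    P = [[1.0, 0.0], [0.0, 1.0]]  # P(0) is the identity matrix
    for _ in range(steps):
        PQ = [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        P = [[P[i][j] + h * PQ[i][j] for j in range(2)] for i in range(2)]
    return P

P1 = forward_euler(1.0)
# Closed form for this chain: p_00(t) = 0.6 + 0.4 * exp(-(alpha+beta) t).
exact = 0.6 + 0.4 * math.exp(-5.0)
print(P1[0][0], exact)
```

A crude ODE solver like this works for small examples; the decomposition and uniformization methods below are the structured alternatives.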

Let $P(t)$ be the matrix with entries $p_{ij}(t)$. Kolmogorov's forward equations, $P'(t) = P(t)\,Q$, are derived by letting $s$ approach $t$ from below; the backward equations are $P'(t) = Q\,P(t)$. Hence,
$$P(t) = e^{Qt} = \sum_{n=0}^{\infty} \frac{(Qt)^n}{n!}.$$
Truncating this infinite sum directly is inefficient, since $Q$ has positive and negative elements, which causes cancellation errors.

## Matrix decomposition method

Let $\lambda_1, \ldots, \lambda_N$ be the eigenvalues of $Q$. Let $l_k$ and $r_k$ be the left and right eigenvectors corresponding to $\lambda_k$, normalized such that $l_k r_k = 1$ and $l_k r_m = 0$ for $k \neq m$. The matrix $Q$ then reads
$$Q = R\, D\, L,$$

where $L$ is the matrix whose rows are the $l_k$, $D$ is the diagonal matrix with entries $\lambda_k$, and $R$ is the matrix whose columns are the $r_k$. The transient probability matrix then reads
$$P(t) = e^{Qt} = R\, e^{Dt}\, L = \sum_{k} e^{\lambda_k t}\, r_k l_k.$$
What is the interpretation of the terms $e^{\lambda_k t}$?
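For the hypothetical two-state chain with rates $\alpha = 2$, $\beta = 3$, the decomposition is available in closed form (eigenvalues $0$ and $-(\alpha+\beta)$), so the method can be sketched without a numerical eigensolver:

```python
import math

# Assumed 2-state chain with generator Q = [[-a, a], [b, -b]].
a, b = 2.0, 3.0
s = a + b

# Eigenvalues of Q: 0 and -(a + b).
eigvals = [0.0, -s]
# Right eigenvectors as columns of R, left eigenvectors as rows of L,
# normalized so that l_k r_k = 1 and l_k r_m = 0 for k != m.
R = [[1.0, a],
     [1.0, -b]]
L = [[b / s, a / s],
     [1.0 / s, -1.0 / s]]

def transient(t):
    """P(t) = sum_k exp(eigval_k * t) * r_k l_k."""
    P = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(2):
        w = math.exp(eigvals[k] * t)
        for i in range(2):
            for j in range(2):
                P[i][j] += w * R[i][k] * L[k][j]
    return P

P1 = transient(1.0)
print(P1[0][0])  # closed form: 0.6 + 0.4 * exp(-5) ≈ 0.6027
```

For general chains the eigenvalues and eigenvectors must be computed numerically, which is where the practical disadvantages of this method appear.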

What conditions should the eigenvalues $\lambda_k$ satisfy when $t$ tends to infinity? What is the disadvantage of matrix decomposition?

## Uniformization method

Let $\lambda = \max_i q_i$ and $P = I + Q/\lambda$. Conditioning on $N(t)$, the number of transitions in $[0, t]$, which is Poisson distributed with mean $\lambda t$, gives
$$P(t) = \sum_{n=0}^{\infty} e^{-\lambda t} \frac{(\lambda t)^n}{n!}\, P^n.$$
Truncating the latter sum to its first $M$ terms gives a good approximation, since all terms are nonnegative (no cancellation, in contrast to the direct expansion of $e^{Qt}$). It is better to take the smallest admissible value of $\lambda$ (equivalently, the largest possible time step $1/\lambda$), i.e., $\lambda = \max_i q_i$, so that fewer terms are needed for a given accuracy.
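The truncated uniformization sum can be sketched as follows, again for the assumed two-state chain with rates $\alpha = 2$, $\beta = 3$, using the recursion for the Poisson weights:

```python
import math

a, b = 2.0, 3.0
Q = [[-a, a], [b, -b]]
lam = max(a, b)  # smallest admissible uniformization rate
P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(2)]
     for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transient(t, terms=60):
    """P(t) ≈ sum_{n < terms} e^{-lam t} (lam t)^n / n! * P^n."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    Pn = [[1.0, 0.0], [0.0, 1.0]]  # P^0 = I
    weight = math.exp(-lam * t)    # Poisson(lam * t) pmf at n = 0
    for n in range(terms):
        for i in range(2):
            for j in range(2):
                out[i][j] += weight * Pn[i][j]
        Pn = matmul(Pn, P)
        weight *= lam * t / (n + 1)  # pmf recursion: P(N = n+1)
    return out

P1 = transient(1.0)
print(P1[0][0])  # closed form: 0.6 + 0.4 * exp(-5) ≈ 0.6027
```

The truncation level `terms` should be chosen so that the Poisson tail beyond it is below the desired tolerance.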

## Occupancy Time: mean

The occupancy time of a state $j$ is the sojourn time in that state during $[0, T]$. Note that it depends on the state at time 0. Let $m_{ij}(T)$ denote the mean occupancy time in state $j$ during $[0, T]$ given initial state $i$. Then,
$$m_{ij}(T) = \int_0^T p_{ij}(t)\, dt.$$
In matrix form, $M(T) = \int_0^T P(t)\, dt$.

Using the uniformized process $(\lambda, P)$,
$$M(T) = \sum_{n=0}^{\infty} \left( \int_0^T e^{-\lambda t} \frac{(\lambda t)^n}{n!}\, dt \right) P^n.$$
Note that $\int_0^T e^{-\lambda t} \frac{(\lambda t)^n}{n!}\, dt = \frac{1}{\lambda} P(Y > n)$, where $Y$ is a Poisson random variable with mean $\lambda T$.

We find that
$$M(T) = \frac{1}{\lambda} \sum_{n=0}^{\infty} P(Y > n)\, P^n.$$

## Cumulative distribution of occupancy time

Let $O(T)$ denote the total sojourn time during $[0, T]$ in a subset of states $A \subset S$. Then, for $0 \leq x < T$,
$$P\big(O(T) \leq x\big) = \sum_{n=0}^{\infty} e^{-\lambda T} \frac{(\lambda T)^n}{n!} \sum_{k=0}^{n} \beta_{k,n} \sum_{j=k}^{n} \binom{n}{j} \left(\frac{x}{T}\right)^{j} \left(1 - \frac{x}{T}\right)^{n-j},$$
where $\beta_{k,n}$ is the probability that the uniformized process visits $A$ $k$ times during $[0, T]$ given that it makes $n$ transitions. Proof sketch (see Tijms 2003 for details):

1. Condition on the number of transitions of the uniformized process being $n$ (Poisson distributed).
2. The occupancy time is smaller than $x$ if the uniformized process visits $A$ $k$ times out of the $n$ visits and at least $k$ of these transitions happen before $x$. The former probability is $\beta_{k,n}$; the latter is a tail probability of a binomial distribution, since given $n$ transitions the transition epochs are uniformly distributed on $[0, T]$.

## Moments of occupancy time

Proposition: the $m$-th moment of $O(T)$ is given by
$$E\big[O(T)^m\big] = m! \int_{0 < t_1 < \cdots < t_m < T} P\big(X(t_1) \in A, \ldots, X(t_m) \in A\big)\, dt_1 \cdots dt_m.$$

Proposition: given that the chain starts in equilibrium, the second moment of the occupancy time in the subset $A$ during $[0, T]$ is
$$E\big[O(T)^2\big] = 2 \int_0^T (T - u)\, \pi_A\, P(u)\, e_A\, du,$$
where $\pi_i$ is the steady-state probability of the Markov chain in state $i$, $\pi_A$ is the vector with $i$-th entry equal to $\pi_i$ if $i \in A$ and zero otherwise, and $e_A$ is the column vector with $i$-th entry equal to 1 if $i \in A$ and zero otherwise.
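The mean occupancy time formula $M(T) = \frac{1}{\lambda} \sum_n P(Y > n)\, P^n$ can be sketched numerically for the assumed two-state chain (rates $\alpha = 2$, $\beta = 3$), and checked against the closed form $m_{00}(T) = 0.6\,T + 0.08\,(1 - e^{-5T})$ obtained by integrating $p_{00}(t)$:

```python
import math

a, b = 2.0, 3.0
Q = [[-a, a], [b, -b]]
lam = max(a, b)
P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(2)]
     for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mean_occupancy(T, terms=80):
    """M(T) = (1/lam) * sum_n P(Y > n) P^n, with Y ~ Poisson(lam * T)."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    Pn = [[1.0, 0.0], [0.0, 1.0]]
    pmf = math.exp(-lam * T)   # P(Y = 0)
    tail = 1.0 - pmf           # P(Y > 0)
    for n in range(terms):
        for i in range(2):
            for j in range(2):
                out[i][j] += tail * Pn[i][j] / lam
        Pn = matmul(Pn, P)
        pmf *= lam * T / (n + 1)  # P(Y = n + 1)
        tail -= pmf               # P(Y > n + 1)
    return out

M1 = mean_occupancy(1.0)
# Each row of M(T) sums to T, since the chain is always in some state.
print(M1[0][0])  # closed form: 0.6 + 0.08 * (1 - exp(-5)) ≈ 0.6795
```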

## References

- V.G. Kulkarni, *Modeling, Analysis, Design, and Control of Stochastic Systems*. Springer, New York, 1999.
- H.C. Tijms, *A First Course in Stochastic Models*. Wiley, New York, 2003.
- http://www.win.tue.nl/~iadan/algoritme/
