Random Process

Markov chains are stochastic processes in which the probability of moving to the next state depends only on the current state, not on past states. The state transition matrix collects the transition probabilities P_{ij} between each pair of states i and j: P_{ij} is the probability that a process in state i will transition to state j at the next time step.


Define Markov chains. What is a state transition matrix? What is a transition probability matrix? Explain with a suitable example.
We consider a stochastic process {X_n, n = 0, 1, 2, ...} that takes on a finite or countable number of possible values. Unless otherwise mentioned, this set of possible values of the process will be denoted by the set of nonnegative integers {0, 1, 2, ...}. If X_n = i, then the process is said to be in state i at time n. We suppose that whenever the process is in state i, there is a fixed probability P_{ij} that it will next be in state j. That is, we suppose that

P\{X_{n+1} = j \mid X_n = i,\ X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0\} = P_{ij} \qquad (4.1)

for all states i_0, i_1, ..., i_{n-1}, i, j and all n ≥ 0. Such a stochastic process is known as a Markov chain. Equation (4.1) may be interpreted as stating that, for a Markov chain, the conditional distribution of any future state X_{n+1}, given the past states X_0, X_1, ..., X_{n-1} and the present state X_n, is independent of the past states and depends only on the present state.
The value P_{ij} represents the probability that the process will, when in state i, next make a transition into state j. Since probabilities are nonnegative and since the process must make a transition into some state, we have

P_{ij} \ge 0, \quad i, j \ge 0; \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, 2, \dots

Let P denote the matrix of one-step transition probabilities P_{ij}:

P = \begin{pmatrix}
P_{00} & P_{01} & P_{02} & \cdots \\
P_{10} & P_{11} & P_{12} & \cdots \\
\vdots & \vdots & \vdots & \\
P_{i0} & P_{i1} & P_{i2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}

This matrix is the transition probability matrix (also called the state transition matrix) of the chain: row i lists the probabilities of moving from state i to each possible next state, and every row sums to 1.
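As a suitable example, consider a hypothetical two-state weather chain (the states and probabilities below are illustrative assumptions, not taken from the original text): state 0 = rain, state 1 = no rain. A minimal Python sketch, assuming NumPy is available, that checks the row sums, computes n-step transition probabilities via matrix powers, and simulates a sample path:

```python
import numpy as np

# Hypothetical two-state weather chain (illustrative values only):
# state 0 = rain, state 1 = no rain.
P = np.array([[0.7, 0.3],   # P_00, P_01: transitions out of state 0
              [0.4, 0.6]])  # P_10, P_11: transitions out of state 1

# Every row of a transition probability matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# n-step transition probabilities are given by the n-th matrix power:
# P(X_{m+n} = j | X_m = i) = (P^n)_{ij}.
P4 = np.linalg.matrix_power(P, 4)
print("P(rain in 4 days | rain today) =", P4[0, 0])

# Simulate one trajectory: the next state is drawn using only the
# current state's row of P, which is exactly the Markov property (4.1).
rng = np.random.default_rng(0)
state = 0
path = [state]
for _ in range(10):
    state = rng.choice(2, p=P[state])
    path.append(state)
print("sample path:", path)
```

Raising P to higher powers shows its rows converging toward a common vector, which leads into the stability question below.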
Stability of a Markov system:
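For a finite, irreducible, aperiodic Markov chain, stability refers to the existence of a stationary (steady-state) distribution π satisfying πP = π with Σ_i π_i = 1; the distribution of X_n then converges to π regardless of the initial state. A minimal sketch computing π for the hypothetical weather matrix above (an illustrative assumption, not the original author's example):

```python
import numpy as np

# Hypothetical two-state weather chain from the sketch above.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# The stationary distribution pi solves pi P = pi, i.e. it is a left
# eigenvector of P for eigenvalue 1, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))   # index of the eigenvalue 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()
print("stationary distribution:", pi)  # ~ [0.5714, 0.4286]

# Sanity check: pi is unchanged by one transition step.
assert np.allclose(pi @ P, pi)
```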
