Controlling state transitions using a transition probability matrix in a Markov chain?

Hello all, I am working on a Markov chain, and I would like to control the state transitions using a transition probability matrix (TPM).
The TPM in my case is 6x6 and is given as TPM = [0.6,0.4,0,0,0,0;0.3,0.4,0.3,0,0,0;0,0.3,0.4,0.3,0,0;0,0,0.3,0.4,0.3,0;0,0,0,0.3,0.4,0.3;0,0,0,0,0.4,0.6]. For clarity, let the row denote the state at time t and the column denote the state at time t+1.
I understood that if my current state is 2 and the transition probability is 0.4, then according to the TPM my next state will also be 2.
But my query is: how is this condition of a transition probability of value 0.4 generated?
Any help in this regard will be highly appreciated.

Answers (1)

Torsten
Torsten on 6 Dec 2022
Edited: Torsten on 6 Dec 2022
I understood that if my current state is 2 and the transition probability is 0.4, then according to the TPM my next state will also be 2.
That's wrong. If you are in state 2, the probability of moving to state 1 is 0.3, of remaining in state 2 is 0.4, and of moving to state 3 is 0.3.
But my query is how this condition of transition probability of value 0.4 is generated ?
The transition probabilities are not generated at run time; they are fixed quantities, determined in advance by whatever your Markov chain is meant to model.
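To see how the 0.4 plays out in practice: when you simulate the chain, the next state is drawn at random according to the current row of the TPM. A minimal MATLAB sketch (variable names and the number of steps are my own choices, not from the thread):

```matlab
% Transition probability matrix from the question
TPM = [0.6 0.4 0   0   0   0;
       0.3 0.4 0.3 0   0   0;
       0   0.3 0.4 0.3 0   0;
       0   0   0.3 0.4 0.3 0;
       0   0   0   0.3 0.4 0.3;
       0   0   0   0   0.4 0.6];

rng(0);                 % fix the random seed for reproducibility
nSteps = 10;
state  = 2;             % starting state
path   = zeros(1, nSteps);
for k = 1:nSteps
    % Partition [0,1] by the cumulative probabilities of the current
    % row and pick the first slice that a uniform draw falls into
    u = rand;
    state = find(u <= cumsum(TPM(state, :)), 1, 'first');
    path(k) = state;
end
disp(path)              % one realization of the chain
```

Starting from state 2, the cumulative row is [0.3 0.7 1 1 1 1], so a draw in [0, 0.3) goes to state 1, a draw in [0.3, 0.7) stays in state 2, and a draw in [0.7, 1] goes to state 3. Over many steps, the chain stays in state 2 roughly 40% of the time, which is all that the 0.4 means.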
  6 Comments
chaaru datta
chaaru datta on 7 Dec 2022
Ok, thank you sir. But that again creates a little confusion for me about Q-learning with a Markov chain.
chaaru datta
chaaru datta on 8 Dec 2022
Edited: chaaru datta on 8 Dec 2022
My query is: what value of the next state should we put in the Bellman equation (used in Q-learning) if the current state is , where the Bellman equation is:
Q(s_t, a_t) <- Q(s_t, a_t) + alpha * [ r_{t+1} + gamma * max_a Q(s_{t+1}, a) - Q(s_t, a_t) ]

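One way to read this question: in simulation, the next state s_{t+1} that goes into the update is simply the state sampled from row s_t of the TPM, exactly as in the transition-sampling loop. The chain as given has no actions, so the sketch below is a TD(0)-style value update rather than full Q-learning, and alpha, gamma, and the reward function are made-up assumptions for illustration; with actions a you would replace Q(sNext) by the maximum of Q(sNext, a') over a':

```matlab
% Transition matrix from the question (compact form)
TPM = [0.6 0.4 0 0 0 0; 0.3 0.4 0.3 0 0 0; 0 0.3 0.4 0.3 0 0; ...
       0 0 0.3 0.4 0.3 0; 0 0 0 0.3 0.4 0.3; 0 0 0 0 0.4 0.6];

alpha = 0.1;  gamma = 0.9;   % learning rate and discount (assumed values)
Q = zeros(6, 1);             % one value per state; the chain has no actions
s = 2;                       % current state
for k = 1:1000
    % s(t+1) is sampled from row s of the TPM -- this is the "next state"
    % that enters the update below
    sNext = find(rand <= cumsum(TPM(s, :)), 1, 'first');
    r = -abs(sNext - s);     % made-up reward, purely for illustration
    Q(s) = Q(s) + alpha * (r + gamma * Q(sNext) - Q(s));
    s = sNext;
end
```

The key point is that s_{t+1} is not chosen by you; it is whatever state the TPM sampling step produces.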
