Problem 37


Show that the stationary probabilities for the Markov chain having transition probabilities \(P_{i, j}\) are also the stationary probabilities for the Markov chain whose transition probabilities \(Q_{i, j}\) are given by $$ Q_{i, j}=P_{i, j}^{k} $$ for any specified positive integer \(k\).

Short Answer

If \(\pi\) is a stationary distribution for the Markov chain with transition matrix \(P\), then \(\pi P = \pi\), and multiplying repeatedly by \(P\) gives \(\pi P^{k} = \pi\) for every positive integer \(k\). Since \(Q_{i,j} = P_{i,j}^{k}\) denotes the \(k\)-step transition probability, i.e. \(Q = P^{k}\) as a matrix power, it follows that \(\pi Q = \pi\): the stationary probabilities for the original chain are also stationary probabilities for the chain with transition matrix \(Q\).

Step by step solution

01

Define stationary probabilities and Markov chains

A stationary probability distribution (also known as a stationary distribution or steady-state distribution) for a Markov chain is a probability distribution over the states that is left unchanged by one step of the chain. Consider a Markov chain with transition probability matrix \(P\), and let \(Q\) be the matrix of its \(k\)-step transition probabilities: \(Q_{i,j} = P_{i,j}^{k}\), where \(P_{i,j}^{k}\) denotes the probability of moving from state \(i\) to state \(j\) in \(k\) steps, i.e., the \((i,j)\) entry of the matrix power \(P^{k}\). (Note that \(P_{i,j}^{k}\) is not the number \(P_{i,j}\) raised to the \(k\)th power; entrywise powering would not in general produce a valid transition matrix.) Let \(\pi = (\pi_1, \pi_2, \ldots, \pi_n)\) denote the stationary probability distribution for the chain with transition probability matrix \(P\), where \(n\) is the number of states.
02

Apply the definition of stationary probabilities

According to the definition of stationary probabilities, \(\pi\) satisfies the following equation for the Markov chain with transition probability matrix \(P\): \[ \pi P = \pi, \qquad \sum_{i=1}^{n} \pi_i = 1 \] We need to prove that the same stationary probability distribution \(\pi\) also satisfies this equation for the Markov chain with transition probability matrix \(Q\): \[ \pi Q = \pi \]
03

Express \(Q\) as the \(k\)-step transition matrix \(P^k\)

By the Chapman–Kolmogorov equations, the \(k\)-step transition probabilities are obtained by multiplying \(P\) by itself \(k\) times: \[ Q = P^{k} = \underbrace{P \cdot P \cdots P}_{k \text{ factors}} \] Componentwise, \[ Q_{i,j} = \sum_{i_1, \ldots, i_{k-1}} P_{i,i_1} P_{i_1,i_2} \cdots P_{i_{k-1},j} \] where the sum runs over all intermediate states \(i_1, \ldots, i_{k-1}\). Therefore the condition \(\pi Q = \pi\) is equivalent to \[ \pi P^{k} = \pi \] and it suffices to verify this identity.
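As a numerical illustration of this reading of the notation (a sketch using NumPy and an arbitrary two-state chain chosen for illustration, not taken from the textbook), the \(k\)-step matrix \(P^k\) is itself a valid stochastic matrix, whereas raising the entries of \(P\) to the \(k\)th power is not:

```python
import numpy as np

# Arbitrary two-state transition matrix (assumed example).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
k = 3

# Correct reading: Q_{i,j} = P^k_{i,j} is the (i,j) entry of the matrix power P^k.
Q = np.linalg.matrix_power(P, k)

# Incorrect reading: entrywise k-th power of the matrix P.
entrywise = P ** k

print(Q.sum(axis=1))          # each row of P^k sums to 1 (up to float error)
print(entrywise.sum(axis=1))  # rows of the entrywise power sum to less than 1
```

Only the matrix power preserves the row sums, which is why \(Q = P^k\) is again a transition matrix of a Markov chain.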
04

Verify the stationary probability conditions for Q

We know that \(\pi P = \pi\). Multiplying both sides of this equation on the right by \(P\) gives \[ \pi P^{2} = (\pi P) P = \pi P = \pi \] Repeating the argument, if \(\pi P^{m} = \pi\) for some positive integer \(m\), then \[ \pi P^{m+1} = (\pi P^{m}) P = \pi P = \pi \] By induction, \(\pi P^{k} = \pi\) for every positive integer \(k\). Since \(Q = P^{k}\), this is exactly the condition \(\pi Q = \pi\). This means that the stationary probability distribution \(\pi\) for the Markov chain with transition probability matrix \(P\) also satisfies the stationary probability conditions for the Markov chain with transition probability matrix \(Q\).
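The conclusion \(\pi P^k = \pi\) can be checked numerically; below is a minimal sketch (again with a three-state chain assumed for illustration), where the stationary vector is computed as the left eigenvector of \(P\) for eigenvalue 1:

```python
import numpy as np

# Assumed three-state transition matrix for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
# (i.e., eigenvector of P transpose), normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()

# pi is stationary for P ...
assert np.allclose(pi @ P, pi)

# ... and therefore also for Q = P^k, for every positive integer k.
for k in range(1, 6):
    Q = np.linalg.matrix_power(P, k)
    assert np.allclose(pi @ Q, pi)

print("pi =", pi, "is stationary for P^k, k = 1..5")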
05

Conclusion

Hence, the stationary probabilities for a Markov chain with transition probabilities \(P_{i,j}\) are also the stationary probabilities for the Markov chain whose transition probabilities are the \(k\)-step probabilities \(Q_{i,j} = P_{i,j}^{k}\), for any specified positive integer \(k\).


Most popular questions from this chapter

Consider a Markov chain with states \(0,1,2,3,4\). Suppose \(P_{0,4}=1\); and suppose that when the chain is in state \(i, i>0\), the next state is equally likely to be any of the states \(0,1, \ldots, i-1\). Find the limiting probabilities of this Markov chain.

Let \(P^{(1)}\) and \(P^{(2)}\) denote transition probability matrices for ergodic Markov chains having the same state space. Let \(\pi^{1}\) and \(\pi^{2}\) denote the stationary (limiting) probability vectors for the two chains. Consider a process defined as follows: (i) \(X_{0}=1\). A coin is then flipped and if it comes up heads, then the remaining states \(X_{1}, \ldots\) are obtained from the transition probability matrix \(P^{(1)}\) and if tails from the matrix \(P^{(2)}\). Is \(\{X_{n}, n \geqslant 0\}\) a Markov chain? If \(p = P\{\text{coin comes up heads}\}\), what is \(\lim_{n \rightarrow \infty} P(X_{n}=i)\)? (ii) \(X_{0}=1\). At each stage the coin is flipped and if it comes up heads, then the next state is chosen according to \(P^{(1)}\) and if tails comes up, then it is chosen according to \(P^{(2)}\). In this case do the successive states constitute a Markov chain? If so, determine the transition probabilities. Show by a counterexample that the limiting probabilities are not the same as in part (i).

Each day, one of \(n\) possible elements is requested, the \(i\) th one with probability \(P_{i}, i \geqslant 1, \sum_{1}^{n} P_{i}=1 .\) These elements are at all times arranged in an ordered list which is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n !\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(1,2, \ldots, n\) ), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non- \(i_{1}\) request for \(i_{2}\), the last non- \(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n-1}}}{1-P_{i_{1}}-\cdots-P_{i_{n-2}}} $$ Verify when \(n=3\) that the preceding are indeed the limiting probabilities.

For a time reversible Markov chain, argue that the rate at which transitions from \(i\) to \(j\) to \(k\) occur must equal the rate at which transitions from \(k\) to \(j\) to \(i\) occur.

It follows from the argument made in Exercise 38 that state \(i\) is null recurrent if it is recurrent and \(\pi_{i}=0 .\) Consider the one-dimensional symmetric random walk of Example \(4.15\). (a) Argue that \(\pi_{i}=\pi_{0}\) for all \(i\). (b) Argue that all states are null recurrent.
