Problem 22


Show that if \(\mathbf{P}\) is the transition matrix of a regular Markov chain, and \(\mathbf{W}\) is the matrix each of whose rows is the fixed probability vector corresponding to \(\mathbf{P},\) then \(\mathbf{P} \mathbf{W}=\mathbf{W},\) and \(\mathbf{W}^{k}=\mathbf{W}\) for all positive integers \(k\).

Short Answer

Expert verified
\( \mathbf{P} \mathbf{W} = \mathbf{W} \) because each row of the stochastic matrix \( \mathbf{P} \) sums to 1, and \( \mathbf{W}^k = \mathbf{W} \) because each row of \( \mathbf{W} \) is the stationary vector \( \mathbf{w} \), whose entries sum to 1.

Step by step solution

01

Understanding Regular Markov Chains

A Markov chain is considered regular if some power of its transition matrix has all positive elements. This means that from any state, there is a non-zero probability of reaching any other state. This ensures that the chain is ergodic and has a unique stationary distribution.
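As an illustration (not part of the original solution), regularity can be checked numerically. The 2-state matrix below is a hypothetical example: it has a zero entry, but its square is strictly positive, so the chain is regular.

```python
import numpy as np

# Hypothetical 2-state chain: state 0 always jumps to state 1;
# state 1 moves to either state with probability 1/2.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

# P itself has a zero entry, but P^2 has all positive entries,
# so the chain is regular.
P2 = np.linalg.matrix_power(P, 2)
print(np.all(P > 0))   # False
print(np.all(P2 > 0))  # True
```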
02

Define the Fixed Probability Vector

The fixed probability vector, often called the stationary distribution, is a row vector \( \mathbf{w} \) such that when multiplied by the transition matrix \( \mathbf{P} \), results in the same vector: \( \mathbf{w} \mathbf{P} = \mathbf{w} \). The matrix \( \mathbf{W} \) consists of repeated rows of this stationary distribution vector \( \mathbf{w} \).
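As a quick numerical sketch (using the same hypothetical 2-state matrix as above, not a matrix from the text), \( \mathbf{w} \) can be computed as a left eigenvector of \( \mathbf{P} \) for eigenvalue 1, i.e. a right eigenvector of \( \mathbf{P}^T \), normalized to sum to 1:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

# Solve w P = w: take the eigenvector of P^T for the eigenvalue closest to 1
# and normalize its entries to sum to 1.
vals, vecs = np.linalg.eig(P.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w = w / w.sum()
print(w)                      # approximately [1/3, 2/3]
print(np.allclose(w @ P, w))  # True: w is fixed under P
```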
03

Show \( \mathbf{P} \mathbf{W} = \mathbf{W} \)

To prove \( \mathbf{P} \mathbf{W} = \mathbf{W} \), compute row \( i \) of the product: it is \( \sum_j P_{ij} \mathbf{w} = \left( \sum_j P_{ij} \right) \mathbf{w} = \mathbf{w} \), since every row of \( \mathbf{W} \) equals \( \mathbf{w} \) and each row of the stochastic matrix \( \mathbf{P} \) sums to 1. Hence every row of \( \mathbf{P} \mathbf{W} \) is \( \mathbf{w} \), so \( \mathbf{P} \mathbf{W} = \mathbf{W} \). (The fixed-vector property \( \mathbf{w} \mathbf{P} = \mathbf{w} \) gives the companion identity \( \mathbf{W} \mathbf{P} = \mathbf{W} \).)
04

Prove \( \mathbf{W}^{k} = \mathbf{W} \)

Row \( i \) of \( \mathbf{W}^{2} \) is \( \sum_j W_{ij} \mathbf{w} = \left( \sum_j w_j \right) \mathbf{w} = \mathbf{w} \), because the entries of the probability vector \( \mathbf{w} \) sum to 1. Hence \( \mathbf{W}^{2} = \mathbf{W} \), and by induction \( \mathbf{W}^{k} = \mathbf{W}^{k-1} \mathbf{W} = \mathbf{W} \mathbf{W} = \mathbf{W} \) for all positive integers \( k \).
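Both identities can be sanity-checked numerically. This sketch reuses the hypothetical 2-state matrix from the earlier examples, with its stationary vector \( \mathbf{w} = (1/3, 2/3) \):

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
w = np.array([1/3, 2/3])   # stationary vector: w P = w
W = np.tile(w, (2, 1))     # each row of W is w

print(np.allclose(P @ W, W))                         # True: P W = W
print(np.allclose(np.linalg.matrix_power(W, 5), W))  # True: W^5 = W
```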
05

Conclusion and Summarization

We have shown that the transition matrix of a regular Markov chain, when multiplied by the matrix \( \mathbf{W} \) of its stationary distribution, results in the matrix \( \mathbf{W} \) itself: \( \mathbf{P} \mathbf{W} = \mathbf{W} \). Moreover, the matrix \( \mathbf{W} \) raised to any positive integer power remains unchanged: \( \mathbf{W}^{k} = \mathbf{W} \).


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Transition Matrix
In a regular Markov chain, the transition matrix plays a critical role in determining how the system progresses from one state to another. A transition matrix, often denoted by \( \mathbf{P} \), is a square matrix where each entry \( P_{ij} \) represents the probability of transitioning from state \( i \) to state \( j \). This matrix is fundamental to analyzing Markov chains.

To qualify as a regular transition matrix, a specific power of this matrix must have all positive entries. This means that there is a positive probability of reaching any state from any other state in the chain, no matter where you start. Such a quality ensures that the Markov chain is ergodic, implying the chain is irreducible and aperiodic. This condition gives rise to a unique stationary distribution, satisfying certain stable properties as time progresses.

The transition matrix is essential, as it not only describes the immediate behavior of the chain but also determines critical long-term properties, like the stationary distribution.
Fixed Probability Vector
The fixed probability vector, more formally known as the stationary distribution, is a key concept in Markov chains and is necessary for understanding their long-term behavior. Represented as \( \mathbf{w} \), it is a row vector that remains unchanged when multiplied by the transition matrix \( \mathbf{P} \). Mathematically, it satisfies the equation: \( \mathbf{w} \mathbf{P} = \mathbf{w} \).

This means that once the chain reaches this distribution, the probabilities of being in each state no longer change as steps proceed. The stationary distribution represents a kind of equilibrium where the relative proportions of time the system spends in each state become constant.

For a regular Markov chain, this stationary distribution is unique, which is a direct consequence of the matrix \( \mathbf{P} \) being regular. The repeated rows of this vector form the matrix \( \mathbf{W} \), which when multiplied by \( \mathbf{P} \) gives back \( \mathbf{W} \), demonstrating that \( \mathbf{W} \) captures the enduring nature of the system.
Stationary Distribution
The stationary distribution of a Markov chain is essentially what gives the chain its long-term characteristics. It tells us the proportion of time that the chain will spend in each state in the long run, and it is therefore crucial for understanding how the chain behaves over an extended period. In a regular chain, all states communicate with one another, which guarantees that the stationary distribution is unique.

When represented as a matrix \( \mathbf{W} \), composed of identical rows of the stationary distribution \( \mathbf{w} \), it has an interesting property: multiplying \( \mathbf{W} \) by itself repeatedly (i.e., \( \mathbf{W}^{k} \) for any positive integer \( k \)) still results in \( \mathbf{W} \). This equality signifies that the equilibrium proportions remain unchanged through the progression.

Understanding stationary distributions helps in various practical scenarios, like predicting steady states or equilibrium in systems modeled by regular Markov chains. It gives insights into the likelihood of finding the system in any particular state, thereby serving as a powerful tool in fields like stochastic processes, queuing theory, and economics.
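A closely related fact (for regular chains) is that the powers \( \mathbf{P}^{n} \) themselves converge to \( \mathbf{W} \): every row of \( \mathbf{P}^{n} \) approaches the stationary distribution, whatever the starting state. A minimal sketch, again using the hypothetical 2-state matrix from the earlier examples:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
w = np.array([1/3, 2/3])
W = np.tile(w, (2, 1))

# For a regular chain, P^n approaches W as n grows: each row of P^n
# converges to the stationary distribution w.
Pn = np.linalg.matrix_power(P, 50)
print(np.allclose(Pn, W))   # True (to numerical precision)
```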


Most popular questions from this chapter

Consider a game played as follows: You are given a regular Markov chain with transition matrix \(\mathbf{P},\) fixed probability vector \(\mathbf{w},\) and a payoff function \(\mathbf{f}\) which assigns to each state \(s_{i}\) an amount \(f_{i}\) which may be positive or negative. Assume that \(\mathbf{w} \mathbf{f}=0\). You watch this Markov chain as it evolves, and every time you are in state \(s_{i}\) you receive an amount \(f_{i} .\) Show that your expected winning after \(n\) steps can be represented by a column vector \(\mathbf{g}^{(n)}\), with $$ \mathbf{g}^{(n)}=\left(\mathbf{I}+\mathbf{P}+\mathbf{P}^{2}+\cdots+\mathbf{P}^{n}\right) \mathbf{f} $$ Show that as \(n \rightarrow \infty, \mathbf{g}^{(n)} \rightarrow \mathbf{g}\) with \(\mathbf{g}=\mathbf{Z f}\).

A process moves on the integers \(1,2,3,4,\) and \(5 .\) It starts at 1 and, on each successive step, moves to an integer greater than its present position, moving with equal probability to each of the remaining larger integers. State five is an absorbing state. Find the expected number of steps to reach state five.

Consider an absorbing Markov chain with state space \(S .\) Let \(f\) be a function defined on \(S\) with the property that $$ f(i)=\sum_{j \in S} p_{i j} f(j) $$ or in vector form $$ \mathbf{f}=\mathbf{P f} $$ Then \(f\) is called a harmonic function for \(\mathbf{P}\). If you imagine a game in which your fortune is \(f(i)\) when you are in state \(i,\) then the harmonic condition means that the game is fair in the sense that your expected fortune after one step is the same as it was before the step. (a) Show that for \(f\) harmonic $$ \mathbf{f}=\mathbf{P}^{n} \mathbf{f} $$ for all \(n\). (b) Show, using (a), that for \(f\) harmonic $$ \mathbf{f}=\mathbf{P}^{\infty} \mathbf{f} $$ where $$ \mathbf{P}^{\infty}=\lim _{n \rightarrow \infty} \mathbf{P}^{n}=\left(\begin{array}{c|c} \mathbf{0} & \mathbf{B} \\ \hline \mathbf{0} & \mathbf{I} \end{array}\right) $$ (c) Using (b), prove that when you start in a transient state \(i\) your expected final fortune $$ \sum_{k} b_{i k} f(k) $$ is equal to your starting fortune \(f(i)\). In other words, a fair game on a finite state space remains fair to the end. (Fair games in general are called martingales. Fair games on infinite state spaces need not remain fair with an unlimited number of plays allowed. For example, consider the game of Heads or Tails (see Example 1.4). Let Peter start with 1 penny and play until he has 2. Then Peter will be sure to end up 1 penny ahead.)

A study of the strengths of Ivy League football teams shows that if a school has a strong team one year it is equally likely to have a strong team or average team next year; if it has an average team, half the time it is average next year, and if it changes it is just as likely to become strong as weak; if it is weak it has \(2 / 3\) probability of remaining so and \(1 / 3\) of becoming average. (a) A school has a strong team. On the average, how long will it be before it has another strong team? (b) A school has a weak team; how long (on the average) must the alumni wait for a strong team?

Assume that an experiment has \(m\) equally probable outcomes. Show that the expected number of independent trials before the first occurrence of \(k\) consecutive occurrences of one of these outcomes is \(\left(m^{k}-1\right) /(m-1).\) Hint: Form an absorbing Markov chain with states \(1,2, \ldots, k\) with state \(i\) representing the length of the current run. The expected time until a run of \(k\) is 1 more than the expected time until absorption for the chain started in state \(1.\) It has been found that, in the decimal expansion of pi, starting with the 24,658,601st digit, there is a run of nine 7's. What would your result say about the expected number of digits necessary to find such a run if the digits are produced randomly?
