Problem 10


Show that if state \(i\) is recurrent and state \(i\) does not communicate with state \(j\), then \(P_{ij}=0.\) This implies that once a process enters a recurrent class of states it can never leave that class. For this reason, a recurrent class is often referred to as a closed class.

Short Answer

In a Markov chain, if state \(i\) is recurrent and does not communicate with state \(j\), then the transition probability \(P_{ij}\) must be 0. The reason: if the chain could move from \(i\) to \(j\), then, since \(j\) cannot lead back to \(i\), there would be a positive probability of never returning to \(i\), contradicting the recurrence of \(i\). Consequently, once a process enters a recurrent class of states (a closed class), it can never leave that class.

Step by step solution

01

Definition of Recurrent State

A state \(i\) in a Markov chain is recurrent if, starting from \(i\), the process returns to \(i\) with probability 1. Equivalently, state \(i\) is recurrent if and only if \(\displaystyle \sum_{n=1}^{\infty}p_{i,i}^{(n)}=\infty\), where \(p_{i,i}^{(n)}\) is the probability of being back at state \(i\) after exactly \(n\) steps. This sum equals the expected number of returns to state \(i\); for a recurrent state it is infinite, because the chain returns infinitely often.
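The divergence of the return-probability sum can be illustrated numerically. The following is a minimal sketch (the two-state chain and the helper `return_prob_sum` are illustrative, not part of the exercise) that accumulates \(p_{i,i}^{(n)}\) from powers of the transition matrix:

```python
import numpy as np

# Two-state chain in which both states are recurrent:
# from either state, probability 0.5 of staying and 0.5 of switching.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def return_prob_sum(P, i, horizon):
    """Partial sum of the n-step return probabilities p_ii^(n), n = 1..horizon."""
    total = 0.0
    Pn = np.eye(P.shape[0])
    for _ in range(horizon):
        Pn = Pn @ P          # Pn now holds the n-step transition matrix P^n
        total += Pn[i, i]    # add p_ii^(n)
    return total

# For this chain p_00^(n) = 0.5 for every n >= 1, so the partial sums
# grow without bound -- the hallmark of a recurrent state.
print(return_prob_sum(P, 0, 100))   # 50.0
print(return_prob_sum(P, 0, 1000))  # 500.0
```

Doubling the horizon doubles the partial sum, so the series diverges, matching the recurrence criterion above.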
02

Definition of Communication

States \(i\) and \(j\) are said to communicate with each other if there exist integers \(m, n \geq 1\) such that: 1. \(p_{i,j}^{(m)}>0\), meaning there is a positive probability of transitioning from state \(i\) to state \(j\) in exactly \(m\) steps; and 2. \(p_{j,i}^{(n)}>0\), meaning there is a positive probability of transitioning from state \(j\) to state \(i\) in exactly \(n\) steps. If state \(i\) does not communicate with state \(j\), then no such pair \(m, n\) exists: either \(j\) is unreachable from \(i\), or \(i\) is unreachable from \(j\) (or both).
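Communication can be checked by a reachability search on the transition graph: \(p_{i,j}^{(m)}>0\) for some \(m \geq 1\) exactly when there is a directed path from \(i\) to \(j\) through positive-probability edges. A minimal sketch (the helpers `reaches` and `communicate` and the example matrix are illustrative assumptions):

```python
import numpy as np

def reaches(P, i, j):
    """True if p_ij^(m) > 0 for some m >= 1, i.e. j is reachable from i."""
    n = P.shape[0]
    visited = set()
    frontier = {k for k in range(n) if P[i, k] > 0}  # one-step successors of i
    while frontier:
        if j in frontier:
            return True
        visited |= frontier
        frontier = {k for s in frontier for k in range(n) if P[s, k] > 0} - visited
    return False

def communicate(P, i, j):
    """States communicate when each is reachable from the other."""
    return reaches(P, i, j) and reaches(P, j, i)

# Chain where {0, 1} is a closed class and state 2 can enter it but not return.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.7, 0.0],
              [0.2, 0.3, 0.5]])

print(communicate(P, 0, 1))  # True: 0 and 1 reach each other
print(communicate(P, 0, 2))  # False: 2 reaches 0, but 0 never reaches 2
```

Note that non-communication here is one-directional: state 2 *can* reach state 0, but not vice versa, which is exactly the situation the exercise exploits.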
03

Proof

We need to prove that if state \(i\) is recurrent and state \(i\) does not communicate with state \(j\), then \(P_{ij} = 0\), where \(P_{ij}\) is the one-step probability of moving from \(i\) to \(j\). Suppose, for contradiction, that \(P_{ij} > 0\). Then \(j\) is reachable from \(i\), so, since \(i\) and \(j\) do not communicate, \(i\) must be unreachable from \(j\): \(p_{j,i}^{(n)} = 0\) for all \(n \geq 1\). Now start the chain at \(i\). With probability \(P_{ij} > 0\) the first step takes it to \(j\), and from \(j\) it can never return to \(i\). Hence the probability of never returning to \(i\) is at least \(P_{ij} > 0\), so the probability of returning to \(i\) is strictly less than 1. This contradicts the recurrence of \(i\), which requires return with probability 1. Therefore \(P_{ij} = 0\). The same argument applies with any number of steps in place of one: \(p_{i,j}^{(n)} = 0\) for every \(n \geq 1\), so the chain can never reach \(j\) from \(i\). Since \(j\) was an arbitrary state not communicating with \(i\), once a process enters a recurrent class of states it can never leave that class. This is why a recurrent class is often referred to as a closed class.
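The conclusion \(p_{i,j}^{(n)} = 0\) for all \(n\) can be observed directly on matrix powers. A minimal sketch, assuming an illustrative three-state chain in which \(\{0, 1\}\) is a recurrent closed class and state 2 is transient:

```python
import numpy as np

# States 0 and 1 form a recurrent (closed) class; state 2 is transient.
# Rows 0 and 1 place zero probability on column 2, so the class cannot be left.
P = np.array([[0.6, 0.4, 0.0],
              [0.5, 0.5, 0.0],
              [0.1, 0.2, 0.7]])

Pn = np.eye(3)
for n in range(1, 51):
    Pn = Pn @ P
    # p_{0,2}^(n): probability of reaching state 2 from recurrent state 0
    # in n steps. It stays exactly zero for every n, as the proof requires.
    assert Pn[0, 2] == 0.0

print(Pn[0, 2])  # 0.0
print(Pn[2, 2])  # tiny: the transient state 2 is almost surely absorbed by {0, 1}
```

The second printed value is \(0.7^{50}\), since staying at state 2 for \(n\) steps means never leaving it, and once the chain enters \(\{0, 1\}\) it never returns.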


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Recurrent State
One of the fundamental concepts in the study of Markov chains is the idea of a recurrent state. A recurrent state, also known as a persistent state, is one to which the process, having visited it once, is guaranteed to return with probability 1.

To understand this in a simple way, imagine walking in a city with interconnected streets. If a particular street (representing a state) is recurrent, no matter how far you walk away from it, you're certain to find your way back to it eventually. Mathematically, state \(i\) is recurrent exactly when

\(\displaystyle \sum_{n=1}^{\infty}p_{i,i}^{(n)}=\infty\)

This equation sums the probabilities of returning to street \(i\) after exactly \(n\) steps (or transitions), over all possible values of \(n\). For a recurrent state this sum is infinite, which captures the idea that the walk comes back not just once but infinitely often; you simply cannot avoid returning to a recurrent state in a Markov chain.
State Communication in Markov Chains
States in Markov Chains have a significant property known as communication. Essentially, two states communicate with each other if it is possible to go from one to the other and vice versa, not necessarily in one step but across a series of transitions.

For instance, think of two towns (states) with roads (transitions) connecting them directly or indirectly through other towns. These towns communicate if you can travel from one town to the other and back through any number of road segments. The mathematical definition requires that there be some positive number of steps, let's call them 'm' and 'n', such that:

From state i to j:

\(p_{i,j}^{(m)}>0\)

And from state j to i:

\(p_{j,i}^{(n)}>0\)

When states do not communicate, at least one direction of travel is impossible: either no sequence of transitions leads from \(i\) to \(j\), or none leads from \(j\) to \(i\). In the towns scenario, it is as if the roads in at least one direction do not exist. The exercise shows that when state \(i\) is in addition recurrent, it is the forward direction that must fail: no sequence of transitions can lead from \(i\) to \(j\), making every such transition probability equal to zero.
Closed Class of States
A closed class of states in a Markov Chain is a group of states that, once entered, cannot be left. In simpler terms, once you're in a closed network of streets (states), you can only travel within them and can never wander away onto new streets. A closed class can consist of a single state if it is recurrent and does not communicate with any other state.

The reason for this is that the probabilities of transitioning from any state within the class to any state outside the class are zero. Considering our city streets analogy, imagine a gated community with its own network of streets that has no exits -- this is a real-life reflection of a closed class of states in Markov Chains.

The exercise proved this concept by showing that if state \(i\) is recurrent and does not communicate with state \(j\), then the probability of ever reaching \(j\) from \(i\) is zero, symbolically, \(P_{ij} = 0\). This mathematical proof supports the intuitive understanding that in a closed class of states, once you enter, you are contained within it for the duration of the process, just as you would be confined within the boundaries of a gated community with no exits to the outside world.
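The "gated community" picture can also be checked by simulation. Below is a minimal sketch (the three-state chain, the `step` helper, and the seed are illustrative assumptions, not part of the exercise): a long random walk started inside the closed class \(\{0, 1\}\) never sets foot outside it.

```python
import random

# Transition probabilities as adjacency lists; {0, 1} is a closed class
# because neither state 0 nor state 1 places any probability on state 2.
P = {
    0: [(0, 0.6), (1, 0.4)],
    1: [(0, 0.5), (1, 0.5)],
    2: [(0, 0.1), (1, 0.2), (2, 0.7)],
}

def step(state, rng):
    """Sample the next state by inverting the cumulative distribution of the row."""
    r = rng.random()
    acc = 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]  # guard against floating-point round-off

rng = random.Random(42)    # fixed seed so the run is reproducible
state = 0                  # start inside the closed class
visited = {state}
for _ in range(100_000):
    state = step(state, rng)
    visited.add(state)

print(visited)  # {0, 1}: the walk never leaves the closed class
```

No matter how long the walk runs, state 2 never appears, because every transition probability out of the class is zero.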


Most popular questions from this chapter

A transition probability matrix \(\mathbf{P}\) is said to be doubly stochastic if the sum over each column equals one; that is, $$ \sum_{i} P_{ij}=1, \quad \text{for all } j $$ If such a chain is irreducible and aperiodic and consists of \(M+1\) states \(0,1, \ldots, M\), show that the limiting probabilities are given by $$ \pi_{j}=\frac{1}{M+1}, \quad j=0,1, \ldots, M $$

Each day, one of \(n\) possible elements is requested, the \(i\)th one with probability \(P_{i}\), \(i \geq 1\), \(\sum_{1}^{n} P_{i}=1.\) These elements are at all times arranged in an ordered list which is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n!\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(1,2, \ldots, n\)), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non-\(i_{1}\) request for \(i_{2}\), the last non-\(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n}}}{1-P_{i_{1}}-\cdots-P_{i_{n-1}}} $$ Verify when \(n=3\) that the above are indeed the limiting probabilities.

A professor continually gives exams to her students. She can give three possible types of exams, and her class is graded as either having done well or badly. Let \(p_{i}\) denote the probability that the class does well on a type \(i\) exam, and suppose that \(p_{1}=0.3, p_{2}=0.6\), and \(p_{3}=0.9\). If the class does well on an exam, then the next exam is equally likely to be any of the three types. If the class does badly, then the next exam is always type 1. What proportion of exams are type \(i, i=1,2,3 ?\)

Consider a Markov chain with states \(0,1,2,3,4\). Suppose \(P_{0,4}=1\); and suppose that when the chain is in state \(i\), \(i>0\), the next state is equally likely to be any of the states \(0,1, \ldots, i-1\). Find the limiting probabilities of this Markov chain.

Consider an irreducible finite Markov chain with states \(0,1, \ldots, N\). (a) Starting in state \(i\), what is the probability the process will ever visit state \(j\)? Explain! (b) Let \(x_{i}=P\left[\text{visit state } N \text{ before state } 0 \mid \text{start in } i\right]\). Compute a set of linear equations which the \(x_{i}\) satisfy, \(i=0,1, \ldots, N\). (c) If \(\sum_{j} j P_{ij}=i\) for \(i=1, \ldots, N-1\), show that \(x_{i}=i / N\) is a solution to the equations in part (b).
