Problem 45


Consider an irreducible finite Markov chain with states \(0,1, \ldots, N\). (a) Starting in state \(i\), what is the probability the process will ever visit state \(j\)? Explain! (b) Let \(x_{i}=P\{\text{visit state } N \text{ before state } 0 \mid \text{start in } i\}\). Compute a set of linear equations which the \(x_{i}\) satisfy, \(i=0,1, \ldots, N\). (c) If \(\sum_{j} j P_{i j}=i\) for \(i=1, \ldots, N-1\), show that \(x_{i}=i / N\) is a solution to the equations in part (b).

Short Answer

Expert verified
The short answer: (a) Because the Markov chain is irreducible and has finitely many states, every state is recurrent, so the probability of ever visiting state \(j\) from state \(i\) is 1. (b) The \(x_i\) satisfy \( x_0 = 0 \), \( x_N = 1 \), and \( x_i = \sum_{j=0}^{N} P_{ij} x_j \) for \( 1 \leq i \leq N - 1 \). (c) If \( \sum_{j} j P_{i j} = i \) for \( i = 1, \ldots, N-1 \), then \( x_i = \frac{i}{N} \) satisfies the equations in part (b).

Step by step solution

01

Find the probability of visiting state j from state i

Since the Markov chain is irreducible, every state can be reached from every other state, and because the state space is finite, every state is recurrent. Starting from state i, the process therefore visits state j with probability 1. (Irreducibility alone would not suffice: an irreducible chain on an infinite state space can be transient, but a finite irreducible chain cannot.)
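This conclusion can be checked empirically. Below is a minimal simulation sketch; the three-state chain and its transition matrix are purely illustrative (not part of the exercise), but from any start state the observed frequency of ever reaching another state is essentially 1, as irreducibility predicts.

```python
import random

# A small irreducible chain on states {0, 1, 2} (hypothetical example):
# every state can reach every other, so each state is recurrent.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

def ever_visits(i, j, max_steps=1000, rng=random.Random(0)):
    """Simulate one path from i; report whether it hits j within max_steps."""
    state = i
    for _ in range(max_steps):
        if state == j:
            return True
        state = rng.choices(range(3), weights=P[state])[0]
    return False

# The empirical hit frequency should be (essentially) 1.
hits = sum(ever_visits(0, 2) for _ in range(2000))
print(hits / 2000)
```

The `max_steps` cutoff only matters in principle: paths that fail to hit the target within 1000 steps have vanishingly small probability here.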
02

Define x_i and create a set of linear equations

Let \( x_i \) be the probability of visiting state N before state 0, starting in state i. The boundary conditions are \( x_0 = 0 \) (starting at 0, the chain has already hit 0 before N) and \( x_N = 1 \) (starting at N, it is already there). For \( 1 \leq i \leq N - 1 \), conditioning on the first transition gives \( x_i = \sum_{j=0}^{N} P_{ij} x_j \), where \( P_{ij} \) is the probability of moving from state i to state j.
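As an illustration, this linear system can be solved numerically. The sketch below assumes a hypothetical birth-death chain on \(\{0,\ldots,4\}\) with up-probability 0.6 and down-probability 0.4; only the boundary conditions and the one-step conditioning equation come from the step above.

```python
# Sketch: solve x_i = sum_j P_ij * x_j with x_0 = 0, x_N = 1 by fixed-point
# iteration, for an assumed birth-death chain on {0,...,4}.
N = 4

def step_probs(i):
    # From interior state i: down with prob 0.4, up with prob 0.6 (assumed).
    p = [0.0] * (N + 1)
    p[i - 1], p[i + 1] = 0.4, 0.6
    return p

P = [step_probs(i) for i in range(1, N)]  # rows for interior states 1..N-1

x = [0.0] * (N + 1)
x[N] = 1.0
for _ in range(10000):                    # iterate until convergence
    for i in range(1, N):
        x[i] = sum(P[i - 1][j] * x[j] for j in range(N + 1))

print([round(v, 4) for v in x])
```

For this chain the classical gambler's-ruin formula \(x_i = \frac{1-(q/p)^i}{1-(q/p)^N}\) with \(q/p = 2/3\) gives the same values, which is a useful sanity check on the iteration.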
03

Show that x_i = i/N is a solution under the given conditions

We are given \( \sum_{j} j P_{i j} = i \) for \( i = 1, \ldots, N-1 \). Consider \( x_i = \frac{i}{N} \) as a proposed solution and check each equation from Step 2:

1. \( x_0 = \frac{0}{N} = 0 \), consistent with the boundary condition \( x_0 = 0 \).

2. \( x_N = \frac{N}{N} = 1 \), consistent with the boundary condition \( x_N = 1 \).

3. For \( 1 \leq i \leq N-1 \), substituting the proposed solution into \( x_i = \sum_{j=0}^{N} P_{ij} x_j \) gives \( \frac{i}{N} = \sum_{j=0}^{N} P_{ij} \frac{j}{N} \). Multiplying both sides by \( N \) yields \( i = \sum_{j=0}^{N} j P_{ij} \), which is exactly the given condition.

Since the proposed solution \( x_i = \frac{i}{N} \) satisfies all the equations, it is a valid solution to the system in Step 2.
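This verification can also be carried out mechanically with exact arithmetic. The sketch below uses a symmetric random walk with absorbing boundaries as an assumed example of a chain satisfying \( \sum_j j P_{ij} = i \), and confirms that \( x_i = i/N \) satisfies every equation.

```python
from fractions import Fraction

# Exact check that x_i = i/N solves the equations when sum_j j*P_ij = i.
# The chain here is a symmetric walk on {0,...,N} with absorbing ends
# (an assumed example, not the only chain with this property).
N = 6
half = Fraction(1, 2)

def P(i, j):
    if 1 <= i <= N - 1:
        return half if j in (i - 1, i + 1) else Fraction(0)
    return Fraction(j == i)  # boundaries 0 and N are absorbing

# The mean condition sum_j j*P_ij = i holds for every interior i ...
assert all(sum(j * P(i, j) for j in range(N + 1)) == i for i in range(1, N))

# ... and x_i = i/N satisfies x_i = sum_j P_ij x_j with x_0 = 0, x_N = 1.
x = [Fraction(i, N) for i in range(N + 1)]
ok = all(x[i] == sum(P(i, j) * x[j] for j in range(N + 1))
         for i in range(1, N))
print(ok)
```

Using `Fraction` avoids floating-point round-off, so the equalities are checked exactly rather than to a tolerance.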


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Irreducible Markov Chain
An irreducible Markov chain is a special type of Markov chain where it's possible to reach any state from any other state. This characteristic ensures that the process isn’t stuck inside certain subsets of states. Instead, given enough time, it can move freely throughout the entire set of states. This property is significant because it implies that every state communicates with all others; no state is isolated.
In this exercise, the concept of irreducibility tells us that starting from any given state, the chain will eventually reach any other state, including state \( j \) when starting from state \( i \). Hence, for an irreducible finite Markov chain, the probability of eventually visiting state \( j \) from state \( i \) is always 1, or 100%.
In applied settings such as population-genetics models, irreducibility guarantees that every modeled genetic state remains reachable over generations. Note, however, that chains with absorbing states (for example, fixation of a genetic type) are not irreducible, and this guarantee does not apply to them.
Linear Equations
Linear equations play an essential role in determining the behavior of Markov chains, especially when calculating transition probabilities between states. In this exercise, we use linear equations to determine the probability of reaching state \( N \) before state \( 0 \) from any given state \( i \).
The set of linear equations is established by defining \( x_i \) as this probability for each state \( i \). The boundary values are \( x_0 = 0 \) and \( x_N = 1 \): starting at state 0 means state 0 has been reached before \( N \), while starting at \( N \) means the target has already been reached.
For \( 1 \leq i \leq N-1 \), the equation must account for all possible direct transitions from state \( i \) to any other state \( j \), resulting in the equation:
\( x_i = \sum_{j=0}^{N} P_{ij} x_j \)
where \( P_{ij} \) is the transition probability from state \( i \) to state \( j \). This equation allows us to understand and calculate the probability of reaching desired states within the chain's dynamics.
Transition Probabilities
Transition probabilities define the likelihood of moving from one state to another in a Markov chain. In this problem, they are denoted \( P_{ij} \), the probability of moving from state \( i \) to state \( j \) in one step.
Understanding these transitions is key, as they form the foundation for calculating complex probabilities, such as the probability of reaching state \( N \) before state 0 starting from state \( i \). Such a calculation involves summing the probabilities of moving from state \( i \) to all other states \( j \), weighted by the probability of then reaching state \( N \) from state \( j \).
The essence of this task is to appreciate how each transition impacts the likelihood of particular outcomes. In an irreducible Markov chain, these transitions ensure eventual movement to every state, thus influencing the overall dynamics and probabilities within the system. Accurate modeling of these transitions facilitates robust predictions about future states.
Finite State Space
A finite state space in the context of Markov chains implies that there is a limited number of possible states the system can be in. All the states from 0 to \( N \) in the exercise are part of this well-defined and limited set.
Understanding finite state spaces is crucial, as they simplify the analysis and computation of probabilities compared to infinite state spaces. In finite state Markov chains, all possible outcomes are contained within a manageable set, allowing for a complete matrix representation of transition probabilities.
Such finite constraints also enhance predictability and feasibility in practical applications. For instance, when examining genetic traits in a fixed population, one can rely on finite state models to compute transition probabilities accurately, facilitating practical predictions about future distributions and developments in the population's genetic makeup.
This enables clear mapping and understanding of eventual outcomes within the limited confines of the established states, thus allowing deeper insights and straightforward analysis.
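To make the matrix representation concrete, here is a minimal sketch (the 3×3 matrix is illustrative only): a finite chain is fully described by one row-stochastic matrix, and multi-step transition probabilities come from matrix powers.

```python
# A finite state space lets the whole chain be stored as one stochastic
# matrix: P[i][j] is the chance of moving i -> j, and each row sums to 1.
# (The numbers below are illustrative only.)
P = [
    [1.0, 0.0, 0.0],   # state 0 absorbing
    [0.3, 0.4, 0.3],   # interior state
    [0.0, 0.0, 1.0],   # state 2 absorbing
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# Two-step transition probabilities are the entries of P squared.
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(P2[1])  # row of two-step probabilities out of the interior state
```

With an infinite state space no such complete matrix exists, which is why finite chains are so much more tractable computationally.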

Most popular questions from this chapter

In the gambler's ruin problem of Section 4.5.1, suppose the gambler's fortune is presently \(i\), and suppose that we know that the gambler's fortune will eventually reach \(N\) (before it goes to 0). Given this information, show that the probability he wins the next gamble is $$ \begin{array}{ll} \frac{p\left[1-(q / p)^{i+1}\right]}{1-(q / p)^{i}}, & \text { if } p \neq \frac{1}{2} \\ \frac{i+1}{2 i}, & \text { if } p=\frac{1}{2} \end{array} $$ Hint: The probability we want is $$ \begin{aligned} &P\left\{X_{n+1}=i+1 \mid X_{n}=i, \lim _{m \rightarrow \infty} X_{m}=N\right\} \\ &\quad=\frac{P\left\{X_{n+1}=i+1, \lim _{m} X_{m}=N \mid X_{n}=i\right\}}{P\left\{\lim _{m} X_{m}=N \mid X_{n}=i\right\}} \end{aligned} $$

Consider three urns, one colored red, one white, and one blue. The red urn contains 1 red and 4 blue balls; the white urn contains 3 white balls, 2 red balls, and 2 blue balls; the blue urn contains 4 white balls, 3 red balls, and 2 blue balls. At the initial stage, a ball is randomly selected from the red urn and then returned to that urn. At every subsequent stage, a ball is randomly selected from the urn whose color is the same as that of the ball previously selected and is then returned to that urn. In the long run, what proportion of the selected balls are red? What proportion are white? What proportion are blue?

In a Markov decision problem, another criterion often used, different than the expected average return per unit time, is that of the expected discounted return. In this criterion we choose a number \(\alpha\), \(0<\alpha<1\), and try to choose a policy so as to maximize \(E\left[\sum_{i=0}^{\infty} \alpha^{i} R\left(X_{i}, a_{i}\right)\right]\) (that is, rewards at time \(n\) are discounted at rate \(\alpha^{n}\)). Suppose that the initial state is chosen according to the probabilities \(b_{i}\). That is, $$ P\left\{X_{0}=i\right\}=b_{i}, \quad i=1, \ldots, n $$ For a given policy \(\beta\) let \(y_{j a}\) denote the expected discounted time that the process is in state \(j\) and action \(a\) is chosen. That is, $$ y_{j a}=E_{\beta}\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\left\{X_{n}=j,\, a_{n}=a\right\}}\right] $$ where for any event \(A\) the indicator variable \(I_{A}\) is defined by $$ I_{A}= \begin{cases} 1, & \text { if } A \text { occurs } \\ 0, & \text { otherwise } \end{cases} $$ (a) Show that $$ \sum_{a} y_{j a}=E\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\left\{X_{n}=j\right\}}\right] $$ or, in other words, \(\sum_{a} y_{j a}\) is the expected discounted time in state \(j\) under \(\beta\). 
(b) Show that $$ \begin{aligned} \sum_{j} \sum_{a} y_{j a} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{j a} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a) \end{aligned} $$ Hint: For the second equation, use the identity $$ I_{\left\{X_{n+1}=j\right\}}=\sum_{i} \sum_{a} I_{\left\{X_{n}=i,\, a_{n}=a\right\}} I_{\left\{X_{n+1}=j\right\}} $$ Take expectations of the preceding to obtain $$ E\left[I_{\left\{X_{n+1}=j\right\}}\right]=\sum_{i} \sum_{a} E\left[I_{\left\{X_{n}=i,\, a_{n}=a\right\}}\right] P_{i j}(a) $$ (c) Let \(\left\{y_{j a}\right\}\) be a set of numbers satisfying $$ \begin{aligned} \sum_{j} \sum_{a} y_{j a} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{j a} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a) \end{aligned} $$ Argue that \(y_{j a}\) can be interpreted as the expected discounted time that the process is in state \(j\) and action \(a\) is chosen when the initial state is chosen according to the probabilities \(b_{j}\) and the policy \(\beta\), given by $$ \beta_{i}(a)=\frac{y_{i a}}{\sum_{a} y_{i a}} $$ is employed. Hint: Derive a set of equations for the expected discounted times when policy \(\beta\) is used and show that they are equivalent to Equation (4.35). (d) Argue that an optimal policy with respect to the expected discounted return criterion can be obtained by first solving the linear program $$ \begin{array}{ll} \text { maximize } & \sum_{j} \sum_{a} y_{j a} R(j, a), \\ \text { such that } & \sum_{j} \sum_{a} y_{j a}=\frac{1}{1-\alpha}, \\ & \sum_{a} y_{j a}=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a), \\ & y_{j a} \geqslant 0, \quad \text { all } j, a \end{array} $$ and then defining the policy \(\beta^{*}\) by $$ \beta_{i}^{*}(a)=\frac{y_{i a}^{*}}{\sum_{a} y_{i a}^{*}} $$ where the \(y_{j a}^{*}\) are the solutions of the linear program.

Each day, one of \(n\) possible elements is requested, the \(i\) th one with probability \(P_{i}, i \geqslant 1, \sum_{1}^{n} P_{i}=1\). These elements are at all times arranged in an ordered list which is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n !\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(1,2, \ldots, n\) ), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non- \(i_{1}\) request for \(i_{2}\), the last non- \(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n-1}}}{1-P_{i_{1}}-\cdots-P_{i_{n-2}}} $$ Verify when \(n=3\) that the preceding are indeed the limiting probabilities.

For a Markov chain \(\left\{X_{n}, n \geqslant 0\right\}\) with transition probabilities \(P_{i, j}\), consider the conditional probability that \(X_{n}=m\) given that the chain started at time 0 in state \(i\) and has not yet entered state \(r\) by time \(n\), where \(r\) is a specified state not equal to either \(i\) or \(m\). We are interested in whether this conditional probability is equal to the \(n\)-stage transition probability of a Markov chain whose state space does not include state \(r\) and whose transition probabilities are $$ Q_{i, j}=\frac{P_{i, j}}{1-P_{i, r}}, \quad i, j \neq r $$ Either prove the equality $$ P\left\{X_{n}=m \mid X_{0}=i, X_{k} \neq r, k=1, \ldots, n\right\}=Q_{i, m}^{n} $$ or construct a counterexample.
