Problem 40

It follows from the argument made in Exercise 38 that state \(i\) is null recurrent if it is recurrent and \(\pi_{i}=0\). Consider the one-dimensional symmetric random walk of Example 4.18. (a) Argue that \(\pi_{i}=\pi_{0}\) for all \(i\). (b) Argue that all states are null recurrent.

Short Answer

In a one-dimensional symmetric random walk, any stationary vector must be constant: \(\pi_i = \pi_0\) for all states \(i\). Since all states are recurrent and there are infinitely many of them, the normalization condition \(\sum_i \pi_i = 1\) can hold only if \(\pi_0 = 0\). Thus all states are null recurrent: they are recurrent with \(\pi_i = 0\).

Step by step solution

01

Understanding One-Dimensional Symmetric Random Walk

In a one-dimensional symmetric random walk, a particle moves one step to the left or one step to the right at each time step, each direction with probability 0.5.
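To make the definition concrete, here is a minimal Python sketch that simulates such a walk; the function name `simulate_walk`, the seed, and the step count are illustrative choices, not part of the original problem.

```python
import random

def simulate_walk(n_steps, seed=0):
    """Simulate a one-dimensional symmetric random walk started at state 0."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        # Each step is +1 or -1, each with probability 1/2.
        position += 1 if rng.random() < 0.5 else -1
        path.append(position)
    return path

print(simulate_walk(10))  # e.g. [0, 1, 0, -1, ...] depending on the seed
```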
02

Part (a): Argue that \(\pi_i = \pi_0\) for all \(i\)

For the symmetric random walk, a stationary vector \(\pi\) must satisfy the balance equations \[\pi_i = \frac{1}{2} \pi_{i-1} + \frac{1}{2} \pi_{i+1}\] for every integer \(i\), since state \(i\) can only be entered from its two neighbors, each with probability \(\frac{1}{2}\). Rearranging gives \(\pi_{i+1} - \pi_i = \pi_i - \pi_{i-1}\), so the successive differences are all equal and \(\pi\) is a linear function of \(i\). A linear function with nonzero slope eventually becomes negative in one direction, which is impossible for probabilities, so the slope must be zero and \(\pi\) is constant. Thus \[\pi_i = \pi_0\] for all states \(i\).
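As a quick numerical sanity check (an illustrative sketch, not part of the textbook solution), one can verify on a finite window of states that a constant vector satisfies the balance equations:

```python
# Check that a constant vector satisfies pi_i = 0.5*pi_{i-1} + 0.5*pi_{i+1}
# on a window of states; the window size and the constant 0.7 are arbitrary.
pi = {i: 0.7 for i in range(-50, 51)}

ok = all(
    abs(pi[i] - 0.5 * (pi[i - 1] + pi[i + 1])) < 1e-12
    for i in range(-49, 50)
)
print(ok)  # True: the balance condition holds for any constant solution
```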
03

Part (b): Argue that all states are null recurrent

To argue that all states are null recurrent, we must show that they are recurrent and that \(\pi_i = 0\) for all \(i\). By Example 4.18, the one-dimensional symmetric random walk is recurrent, so the particle returns to each state infinitely often. From part (a), all states share the same value \(\pi_i = \pi_0\). The normalization condition requires \(\sum_i \pi_i = 1\), but the walk has infinitely many states, so if \(\pi_0 > 0\) the sum \(\sum_i \pi_0\) is infinite, violating normalization. Therefore we must have \[\pi_0 = 0\] Thus all states are recurrent with \(\pi_i = 0\); that is, every state of the one-dimensional symmetric random walk is null recurrent.
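A Monte Carlo illustration of null recurrence (a sketch with arbitrary step counts and seed): the fraction of time the walk spends at state 0 shrinks toward zero as the horizon grows, consistent with \(\pi_0 = 0\).

```python
import random

def fraction_at_origin(n_steps, seed=1):
    """Fraction of the first n_steps at which the walk sits at state 0."""
    rng = random.Random(seed)
    position, visits = 0, 0
    for _ in range(n_steps):
        position += 1 if rng.random() < 0.5 else -1
        visits += (position == 0)
    return visits / n_steps

for n in (10_000, 100_000, 1_000_000):
    print(n, fraction_at_origin(n))  # decreases roughly like 1/sqrt(n)
```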


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

One-dimensional symmetric random walk
In a one-dimensional symmetric random walk, a particle moves along a line, either to the left or to the right, with equal probability at each step. This probability is typically 0.5 for either direction. Imagine the particle starting at position 0: its first step takes it to 1 or \(-1\), its next step moves it one unit again from wherever it landed, and so on.
This model is an essential concept in probability theory and helps illustrate random processes. It's often visualized as a coin toss; heads lead the particle one way, while tails lead it the other.
  • Each step is independent of the previous steps.
  • The walk is unbounded, meaning the particle can potentially move infinitely in either direction.
  • Because the probabilities are identical, the system is symmetric.
This randomness makes symmetric random walks fascinating to study in statistical and probability analyses.
Stationary distribution
The stationary distribution of a Markov chain is a probability distribution that remains unchanged as time progresses. It represents a kind of balance where the probabilities of being in each state no longer change.
In a one-dimensional symmetric random walk, this distribution can be tricky because of its infinite nature. In this specific example, each position has the same chance according to the balance condition:
  • For state \(i\), we have \(\pi_i = \frac{1}{2} \pi_{i-1} + \frac{1}{2} \pi_{i+1}\).
  • This implies the stationary distribution \(\pi_i\) is constant across all states.
However, because there are infinitely many states, a constant solution sums either to infinity (if the constant is positive) or to zero (if it is zero); in neither case does it sum to 1, so no stationary probability distribution exists. This is what leads to the concept of null recurrence.
Recurrent states
A state in a Markov chain is recurrent if there is a certainty of returning to it. In the context of our random walk, every position or state is considered recurrent because the particle will inevitably return to any given state an infinite number of times almost surely.
Here's why recurrence matters:
  • If a state is recurrent, the process returns to it infinitely often with probability 1.
  • Recurrence alone, however, says nothing about how long those returns take: the expected time between visits can be infinite.
Recurrence in this infinite-state setting therefore does not imply a positive stationary probability; as the simulation sketch below illustrates, every state here is in fact null recurrent, since its stationary value is zero.
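The following simulation sketch (the horizons and seed are assumptions) illustrates both halves of this statement: returns to state 0 keep occurring, yet the sample mean of the return times keeps growing with the observation window, as expected when the true mean return time is infinite.

```python
import random

def return_times_to_origin(max_steps, seed=2):
    """Return times to state 0 observed within the first max_steps steps."""
    rng = random.Random(seed)
    position, since_last, times = 0, 0, []
    for _ in range(max_steps):
        position += 1 if rng.random() < 0.5 else -1
        since_last += 1
        if position == 0:
            times.append(since_last)
            since_last = 0
    return times

for horizon in (10_000, 1_000_000):
    times = return_times_to_origin(horizon)
    # Returns keep happening, but the sample mean return time grows.
    print(horizon, len(times), sum(times) / len(times))
```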
Normalization condition
The normalization condition is a crucial requirement for any probability distribution. It states that the total probability must add up to one. In our Markov chain, this condition affects how we interpret the stationary distribution.
For the random walk:
  • The sum of all stationary distribution values \(\pi_i\) across states must be one.
  • If \(\pi_0 > 0\), the sum is infinite, violating the normalization condition.
Thus no positive constant value of \(\pi_0\) can satisfy the normalization condition in this infinite state system, and the only remaining nonnegative constant solution, \(\pi_i = 0\), sums to zero rather than one; the chain therefore has no stationary probability distribution, and its recurrent states are 'null recurrent'. It is a paradox of sorts: every state is visited infinitely often, yet none carries positive probability in the stationary sense. The short computation below spells the sum out.
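A one-line worked version of the argument, mirroring the solution's notation: if \(\pi_i = \pi_0\) for every integer \(i\), then \[ \sum_{i=-\infty}^{\infty} \pi_i = \begin{cases} +\infty, & \text{if } \pi_0 > 0, \\ 0, & \text{if } \pi_0 = 0, \end{cases} \] and neither value equals 1, so the normalization condition fails for every constant choice and the states are null recurrent.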


Most popular questions from this chapter

Consider a population of individuals each of whom possesses two genes that can be either type \(A\) or type \(a\). Suppose that in outward appearance type \(A\) is dominant and type \(a\) is recessive. (That is, an individual will have only the outward characteristics of the recessive gene if its gene pair is \(aa\).) Suppose that the population has stabilized, and the percentages of individuals having respective gene pairs \(AA\), \(aa\), and \(Aa\) are \(p\), \(q\), and \(r\). Call an individual dominant or recessive depending on the outward characteristics it exhibits. Let \(S_{11}\) denote the probability that an offspring of two dominant parents will be recessive, and let \(S_{10}\) denote the probability that the offspring of one dominant and one recessive parent will be recessive. Compute \(S_{11}\) and \(S_{10}\) to show that \(S_{11}=S_{10}^{2}\). (The quantities \(S_{10}\) and \(S_{11}\) are known in the genetics literature as Snyder's ratios.)
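A small numeric sketch of the claimed identity (the allele frequency \(\theta\), the Hardy-Weinberg form of the stabilized proportions, and all variable names are assumptions for illustration, not given in the problem):

```python
# With stabilized proportions p = P(AA), q = P(aa), r = P(Aa), a dominant
# individual is Aa with probability r/(p+r) and passes gene a to a child
# with probability 1/2; a recessive (aa) parent passes a with probability 1.
# A child is recessive exactly when it receives a from both parents.
theta = 0.3                                   # assumed frequency of gene a
p, q, r = (1 - theta) ** 2, theta ** 2, 2 * theta * (1 - theta)

pass_a_given_dominant = (r / (p + r)) * 0.5
pass_a_given_recessive = 1.0

S11 = pass_a_given_dominant ** 2                        # two dominant parents
S10 = pass_a_given_dominant * pass_a_given_recessive    # dominant x recessive

print(S11, S10 ** 2)  # the two printed numbers agree: S11 = S10^2
```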

In a Markov decision problem, another criterion often used, different than the expected average return per unit time, is that of the expected discounted return. In this criterion we choose a number \(\alpha\), \(0<\alpha<1\), and try to choose a policy so as to maximize \(E\left[\sum_{i=0}^{\infty} \alpha^{i} R(X_{i}, a_{i})\right]\) (that is, rewards at time \(n\) are discounted at rate \(\alpha^{n}\)). Suppose that the initial state is chosen according to the probabilities \(b_{i}\). That is, \[ P\{X_{0}=i\}=b_{i}, \quad i=1, \ldots, n \] For a given policy \(\beta\) let \(y_{ja}\) denote the expected discounted time that the process is in state \(j\) and action \(a\) is chosen. That is, \[ y_{ja}=E_{\beta}\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\{X_{n}=j,\, a_{n}=a\}}\right] \] where for any event \(A\) the indicator variable \(I_{A}\) is defined by \[ I_{A}=\begin{cases} 1, & \text{if } A \text{ occurs} \\ 0, & \text{otherwise} \end{cases} \] (a) Show that \[ \sum_{a} y_{ja}=E\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\{X_{n}=j\}}\right] \] or, in other words, \(\sum_{a} y_{ja}\) is the expected discounted time in state \(j\) under \(\beta\). (b) Show that \[ \sum_{j} \sum_{a} y_{ja}=\frac{1}{1-\alpha}, \qquad \sum_{a} y_{ja}=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a) \] Hint: For the second equation, use the identity \[ I_{\{X_{n+1}=j\}}=\sum_{i} \sum_{a} I_{\{X_{n}=i,\, a_{n}=a\}} I_{\{X_{n+1}=j\}} \] Take expectations of the preceding to obtain \[ E\left[I_{\{X_{n+1}=j\}}\right]=\sum_{i} \sum_{a} E\left[I_{\{X_{n}=i,\, a_{n}=a\}}\right] P_{ij}(a) \] (c) Let \(\{y_{ja}\}\) be a set of numbers satisfying \[ \sum_{j} \sum_{a} y_{ja}=\frac{1}{1-\alpha}, \qquad \sum_{a} y_{ja}=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a) \] Argue that \(y_{ja}\) can be interpreted as the expected discounted time that the process is in state \(j\) and action \(a\) is chosen when the initial state is chosen according to the probabilities \(b_{j}\) and the policy \(\beta\), given by \[ \beta_{i}(a)=\frac{y_{ia}}{\sum_{a} y_{ia}} \] is employed. Hint: Derive a set of equations for the expected discounted times when policy \(\beta\) is used and show that they are equivalent to Equation (4.38). (d) Argue that an optimal policy with respect to the expected discounted return criterion can be obtained by first solving the linear program \[ \begin{array}{ll} \text{maximize} & \sum_{j} \sum_{a} y_{ja} R(j, a), \\ \text{such that} & \sum_{j} \sum_{a} y_{ja}=\frac{1}{1-\alpha}, \\ & \sum_{a} y_{ja}=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a), \\ & y_{ja} \geqslant 0, \quad \text{all } j, a, \end{array} \] and then defining the policy \(\beta^{*}\) by \[ \beta_{i}^{*}(a)=\frac{y_{ia}^{*}}{\sum_{a} y_{ia}^{*}} \] where the \(y_{ja}^{*}\) are the solutions of the linear program.
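As an illustration of the linear program in part (d), here is a sketch for a made-up two-state, two-action problem using `scipy.optimize.linprog`; the transition matrices, rewards, \(b\), and \(\alpha\) are all invented, and the constraint \(\sum_j \sum_a y_{ja} = 1/(1-\alpha)\) is omitted because it follows from summing the flow constraints over \(j\).

```python
import numpy as np
from scipy.optimize import linprog

alpha = 0.9
n_states, n_actions = 2, 2
b = np.array([0.5, 0.5])                       # assumed initial distribution
P = np.array([[[0.8, 0.2],                     # P[a][i][j]: i -> j under a
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])
R = np.array([[1.0, 0.0],                      # R[j][a]: made-up rewards
              [2.0, 0.5]])

# Variables y_{ja} flattened with index j * n_actions + a; linprog
# minimizes, so we negate the objective sum_{j,a} y_{ja} R(j, a).
c = -R.reshape(-1)
A_eq = np.zeros((n_states, n_states * n_actions))
for j in range(n_states):
    for a in range(n_actions):
        A_eq[j, j * n_actions + a] += 1.0      # sum_a y_{ja}
    for i in range(n_states):
        for a in range(n_actions):
            A_eq[j, i * n_actions + a] -= alpha * P[a, i, j]
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n_states * n_actions))

y = res.x.reshape(n_states, n_actions)
beta = y / y.sum(axis=1, keepdims=True)        # beta*_i(a) = y*_ia / sum_a y*_ia
print("total discounted time:", y.sum())       # equals 1 / (1 - alpha) = 10
print("optimal policy beta*:\n", beta)
```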

For the random walk of Example 4.18, use the strong law of large numbers to give another proof that the Markov chain is transient when \(p \neq \frac{1}{2}\). Hint: Note that the state at time \(n\) can be written as \(\sum_{i=1}^{n} Y_{i}\), where the \(Y_{i}\)s are independent and \(P\{Y_{i}=1\}=p=1-P\{Y_{i}=-1\}\). Argue that if \(p>\frac{1}{2}\), then, by the strong law of large numbers, \(\sum_{i=1}^{n} Y_{i} \rightarrow \infty\) as \(n \rightarrow \infty\), and hence the initial state 0 can be visited only finitely often and must be transient. A similar argument holds when \(p<\frac{1}{2}\).
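A quick simulation of the hint (step counts, seed, and \(p = 0.6\) are arbitrary): the averages \(\frac{1}{n}\sum_{i=1}^{n} Y_i\) settle near \(2p - 1 > 0\), so the position drifts to \(+\infty\) and state 0 is visited only finitely often.

```python
import random

def average_step(n_steps, p, seed=3):
    """(1/n) * sum of n i.i.d. steps with P(Y = +1) = p, P(Y = -1) = 1 - p."""
    rng = random.Random(seed)
    total = sum(1 if rng.random() < p else -1 for _ in range(n_steps))
    return total / n_steps

for n in (1_000, 10_000, 100_000):
    print(n, average_step(n, p=0.6))  # approaches 2p - 1 = 0.2
```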

A flea moves around the vertices of a triangle in the following manner: Whenever it is at vertex \(i\) it moves to its clockwise neighbor vertex with probability \(p_{i}\) and to the counterclockwise neighbor with probability \(q_{i}=1-p_{i}, i=1,2,3\). (a) Find the proportion of time that the flea is at each of the vertices. (b) How often does the flea make a counterclockwise move that is then followed by five consecutive clockwise moves?
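For part (a), the long-run proportions are the stationary distribution of a three-state chain and can be found numerically; here is a sketch with invented probabilities \(p_1, p_2, p_3\), taking the clockwise order of vertices as \(1 \to 2 \to 3 \to 1\) (also an assumption):

```python
import numpy as np

p = np.array([0.6, 0.3, 0.8])          # assumed clockwise probabilities
q = 1 - p
# Row i is the distribution of the next vertex when at vertex i + 1.
P = np.array([[0.0,  p[0], q[0]],
              [q[1], 0.0,  p[1]],
              [p[2], q[2], 0.0]])

# Solve pi P = pi together with sum(pi) = 1 as a least-squares system.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("long-run proportions at vertices 1, 2, 3:", pi)
```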

Suppose that a population consists of a fixed number, say \(m\), of genes in any generation. Each gene is one of two possible genetic types. If exactly \(i\) (of the \(m\)) genes of any generation are of type 1, then the next generation will have \(j\) type 1 (and \(m-j\) type 2) genes with probability \[ \binom{m}{j}\left(\frac{i}{m}\right)^{j}\left(\frac{m-i}{m}\right)^{m-j}, \quad j=0,1, \ldots, m \] Let \(X_{n}\) denote the number of type 1 genes in the \(n\)th generation, and assume that \(X_{0}=i\). (a) Find \(E[X_{n}]\). (b) What is the probability that eventually all the genes will be type 1?
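A simulation sketch of this chain (population size, starting count, horizon, run count, and seed are all invented) illustrating the two answers one expects: \(E[X_n] = i\) for every \(n\) since the chain is a martingale, and eventual fixation at type 1 with probability \(i/m\).

```python
import numpy as np

rng = np.random.default_rng(4)
m, i, runs, n_gens = 10, 3, 100_000, 200
x = np.full(runs, i)
for _ in range(n_gens):
    x = rng.binomial(m, x / m)      # X_{n+1} | X_n ~ Binomial(m, X_n / m)

print("mean of X_n:", x.mean())                        # approx i = 3
print("fraction fixed at type 1:", (x == m).mean())    # approx i/m = 0.3
```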
