Problem 1


Three white and three black balls are distributed in two urns in such a way that each contains three balls. We say that the system is in state \(i, i=0,1,2,3\), if the first urn contains \(i\) white balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn into the second, and conversely with the ball from the second urn. Let \(X_{n}\) denote the state of the system after the \(n\)th step. Explain why \(\left\{X_{n}, n=0,1,2,\ldots\right\}\) is a Markov chain and calculate its transition probability matrix.

Short Answer

Expert verified
The sequence \(\{X_n, n=0,1,2,\ldots\}\) is a Markov chain because the composition of both urns is completely determined by the current state, so the probability of each transition depends only on the current state. The transition probability matrix \(P\) for the Markov chain is: \[ P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ \frac{1}{9} & \frac{4}{9} & \frac{4}{9} & 0 \\ 0 & \frac{4}{9} & \frac{4}{9} & \frac{1}{9} \\ 0 & 0 & 1 & 0 \end{pmatrix} \] This matrix gives the probabilities of transitioning from one state to another in the given system of two urns with white and black balls; note that every row sums to 1.

Step by step solution

01

Identify the possible transitions between states

First, let's identify the possible transitions from each state \(i\).

- State 0: urn 1 contains 0 white (3 black) balls and urn 2 contains 3 white balls. The ball drawn from urn 1 is necessarily black and the ball drawn from urn 2 is necessarily white, so the system moves to state 1 with certainty.
- State 1: urn 1 contains 1 white and 2 black balls; urn 2 contains 2 white and 1 black ball. Drawing the white ball from urn 1 and a black ball from urn 2 moves the system to state 0; drawing a black ball from urn 1 and a white ball from urn 2 moves it to state 2; drawing balls of the same colour from both urns leaves it in state 1.
- State 2: urn 1 contains 2 white and 1 black ball; urn 2 contains 1 white and 2 black balls. By symmetry with state 1, the system can move to state 1, move to state 3, or remain in state 2.
- State 3: urn 1 contains 3 white balls and urn 2 contains 3 black balls. The ball drawn from urn 1 is necessarily white and the ball drawn from urn 2 is necessarily black, so the system moves to state 2 with certainty.

02

Verify that the system is a Markov chain

The contents of both urns after the \(n\)th step are completely determined by \(X_n\): urn 1 contains \(X_n\) white and \(3-X_n\) black balls, and urn 2 the reverse. The probability of each transition therefore depends only on the current state of the system, not on how it got there, so the sequence \(\{X_n, n=0,1,2,\ldots\}\) is a Markov chain.
03

Calculate the transition probabilities

Now we will calculate the probability of each possible transition. In state \(i\), the ball drawn from urn 1 is white with probability \(i/3\), and the ball drawn from urn 2 is white with probability \((3-i)/3\).

- From state 0 to state 1: both draws are forced (black from urn 1, white from urn 2), so \(P_{01} = 1\).
- From state 1 to state 0: we must draw the white ball from urn 1 and a black ball from urn 2, so \(P_{10} = \frac{1}{3}\times\frac{1}{3} = \frac{1}{9}\).
- From state 1 to state 2: we must draw a black ball from urn 1 and a white ball from urn 2, so \(P_{12} = \frac{2}{3}\times\frac{2}{3} = \frac{4}{9}\).
- From state 1 to state 1: the two drawn balls have the same colour, so \(P_{11} = \frac{1}{3}\times\frac{2}{3} + \frac{2}{3}\times\frac{1}{3} = \frac{4}{9}\).
- From state 2: by symmetry with state 1, \(P_{21} = \frac{4}{9}\), \(P_{22} = \frac{4}{9}\), and \(P_{23} = \frac{1}{9}\).
- From state 3 to state 2: both draws are forced (white from urn 1, black from urn 2), so \(P_{32} = 1\).

Each row of transition probabilities sums to 1, as it must.

04

Form the transition probability matrix

Using the transition probabilities calculated in the previous step, we can now form the transition probability matrix \(P\) for the Markov chain. The matrix has dimensions \(4\times 4\), since there are 4 possible states. \[ P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ \frac{1}{9} & \frac{4}{9} & \frac{4}{9} & 0 \\ 0 & \frac{4}{9} & \frac{4}{9} & \frac{1}{9} \\ 0 & 0 & 1 & 0 \end{pmatrix} \] This matrix represents the Markov chain describing the system, where each element \(P_{ij}\) is the probability of transitioning from state \(i\) to state \(j\) in one step.
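As a sanity check, the matrix can also be derived programmatically from first principles. The sketch below (not part of the original solution) builds each row with exact fractions, using the rule that in state \(i\) the ball drawn from urn 1 is white with probability \(i/3\) and the ball drawn from urn 2 is white with probability \((3-i)/3\):

```python
from fractions import Fraction

# State i = number of white balls in urn 1, so urn 1 holds i white and
# 3 - i black balls, and urn 2 holds the reverse.
def transition_matrix(n=3):
    P = [[Fraction(0) for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        w1 = Fraction(i, n)      # P(white ball drawn from urn 1)
        w2 = Fraction(n - i, n)  # P(white ball drawn from urn 2)
        if i > 0:                # white out of urn 1, black in from urn 2
            P[i][i - 1] = w1 * (1 - w2)
        if i < n:                # black out of urn 1, white in from urn 2
            P[i][i + 1] = (1 - w1) * w2
        # same colour swapped in both directions: state unchanged
        P[i][i] = w1 * w2 + (1 - w1) * (1 - w2)
    return P

P = transition_matrix()
assert all(sum(row) == 1 for row in P)  # each row is a distribution
assert P[1] == [Fraction(1, 9), Fraction(4, 9), Fraction(4, 9), Fraction(0)]
```

The same function works for the general case of \(n\) balls of each colour by passing a different \(n\).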


Most popular questions from this chapter

Each day, one of \(n\) possible elements is requested, the \(i\) th one with probability \(P_{i}, i \geqslant 1, \sum_{1}^{n} P_{i}=1 .\) These elements are at all times arranged in an ordered list which is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n !\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(1,2, \ldots, n\) ), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non- \(i_{1}\) request for \(i_{2}\), the last non- \(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n-1}}}{1-P_{i_{1}}-\cdots-P_{i_{n-2}}} $$ Verify when \(n=3\) that the preceding are indeed the limiting probabilities.
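The \(n=3\) verification asked for in part (b) can also be spot-checked numerically. The sketch below is an illustration rather than the requested algebraic proof: it uses assumed request probabilities \(P_1=0.5\), \(P_2=0.3\), \(P_3=0.2\), builds the move-to-front chain on the \(3!=6\) orderings, and checks that the conjectured \(\pi\) satisfies \(\pi P = \pi\):

```python
from itertools import permutations

# Assumed request probabilities for illustration (they must sum to 1).
p = {1: 0.5, 2: 0.3, 3: 0.2}
states = list(permutations([1, 2, 3]))

def step(state, k):
    """Requested element k moves to the front; relative order is kept."""
    return (k,) + tuple(e for e in state if e != k)

# Conjectured limiting probabilities: pi(i1, i2, i3) = P_i1 * P_i2 / (1 - P_i1)
pi = {s: p[s[0]] * p[s[1]] / (1 - p[s[0]]) for s in states}
assert abs(sum(pi.values()) - 1.0) < 1e-12

# Stationarity check pi P = pi: inflow into each ordering equals its mass.
for t in states:
    inflow = sum(pi[s] * p[k] for s in states for k in p if step(s, k) == t)
    assert abs(inflow - pi[t]) < 1e-12
```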

Consider an irreducible finite Markov chain with states \(0,1, \ldots, N\). (a) Starting in state \(i\), what is the probability the process will ever visit state \(j\)? Explain! (b) Let \(x_{i}=P\{\text{visit state } N \text{ before state } 0 \mid \text{start in } i\}\). Compute a set of linear equations which the \(x_{i}\) satisfy, \(i=0,1, \ldots, N\). (c) If \(\sum_{j} j P_{ij}=i\) for \(i=1, \ldots, N-1\), show that \(x_{i}=i / N\) is a solution to the equations in part (b).
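A concrete instance may help for part (c). The sketch below assumes the symmetric simple random walk on \(\{0,\ldots,N\}\) with absorbing barriers (chosen only to illustrate the drift condition, not taken from the exercise) and checks with exact arithmetic that \(x_i = i/N\) solves the equations \(x_i = \sum_j P_{ij} x_j\) from part (b):

```python
from fractions import Fraction

# Assumed illustrative chain: symmetric simple random walk on {0, ..., N}
# with absorbing barriers; it satisfies the drift condition of part (c).
N = 5
P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
P[0][0] = P[N][N] = Fraction(1)
for i in range(1, N):
    P[i][i - 1] = P[i][i + 1] = Fraction(1, 2)

x = [Fraction(i, N) for i in range(N + 1)]  # candidate solution x_i = i/N
for i in range(1, N):
    assert sum(j * P[i][j] for j in range(N + 1)) == i        # sum_j j P_ij = i
    assert sum(P[i][j] * x[j] for j in range(N + 1)) == x[i]  # x_i = sum_j P_ij x_j
assert x[0] == 0 and x[N] == 1  # boundary conditions
```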

A DNA nucleotide has any of 4 values. A standard model for a mutational change of the nucleotide at a specific location is a Markov chain model that supposes that in going from period to period the nucleotide does not change with probability \(1-3 \alpha\), and if it does change then it is equally likely to change to any of the other 3 values, for some \(0<\alpha<\frac{1}{3}\). (a) Show that \(P_{1,1}^{n}=\frac{1}{4}+\frac{3}{4}(1-4 \alpha)^{n}\). (b) What is the long run proportion of time the chain is in each state?
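The closed form in part (a) can be verified exactly for small \(n\). The sketch below uses an assumed value \(\alpha = 1/10\) (any \(0<\alpha<\frac{1}{3}\) works) and checks \(P_{1,1}^{n}=\frac{1}{4}+\frac{3}{4}(1-4\alpha)^{n}\) at each power; since \((1-4\alpha)^n \to 0\), the rows converge to \(\frac{1}{4}\) each, which answers part (b):

```python
from fractions import Fraction

alpha = Fraction(1, 10)  # assumed example value with 0 < alpha < 1/3
# Nucleotide chain: stay with prob 1 - 3*alpha, else equally likely to
# change to any of the other 3 values.
P = [[1 - 3 * alpha if i == j else alpha for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

Pn = [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]  # P^0 = I
for n in range(1, 8):
    Pn = matmul(Pn, P)
    # closed form of part (a), checked exactly at each power
    assert Pn[0][0] == Fraction(1, 4) + Fraction(3, 4) * (1 - 4 * alpha) ** n
```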

In a Markov decision problem, another criterion often used, different from the expected average return per unit time, is that of the expected discounted return. In this criterion we choose a number \(\alpha, 0<\alpha<1\), and try to choose a policy so as to maximize \(E\left[\sum_{i=0}^{\infty} \alpha^{i} R\left(X_{i}, a_{i}\right)\right]\) (that is, rewards at time \(n\) are discounted at rate \(\alpha^{n}\)). Suppose that the initial state is chosen according to the probabilities \(b_{i}\). That is, $$ P\left\{X_{0}=i\right\}=b_{i}, \quad i=1, \ldots, n $$ For a given policy \(\beta\) let \(y_{ja}\) denote the expected discounted time that the process is in state \(j\) and action \(a\) is chosen. That is, $$ y_{ja}=E_{\beta}\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\left\{X_{n}=j, a_{n}=a\right\}}\right] $$ where for any event \(A\) the indicator variable \(I_{A}\) is defined by $$ I_{A}=\begin{cases} 1, & \text{if } A \text{ occurs,} \\ 0, & \text{otherwise.} \end{cases} $$ (a) Show that $$ \sum_{a} y_{ja}=E\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\left\{X_{n}=j\right\}}\right] $$ or, in other words, \(\sum_{a} y_{ja}\) is the expected discounted time in state \(j\) under \(\beta\). (b) Show that $$ \begin{aligned} \sum_{j} \sum_{a} y_{ja} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{ja} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a) \end{aligned} $$ (c) Let \(\left\{y_{ja}\right\}\) be a set of numbers satisfying $$ \begin{aligned} \sum_{j} \sum_{a} y_{ja} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{ja} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a) \end{aligned} $$ Argue that \(y_{ja}\) can be interpreted as the expected discounted time that the process is in state \(j\) and action \(a\) is chosen when the initial state is chosen according to the probabilities \(b_{j}\) and the policy \(\beta\), given by $$ \beta_{i}(a)=\frac{y_{ia}}{\sum_{a} y_{ia}} $$ is employed.
Hint: Derive a set of equations for the expected discounted times when policy \(\beta\) is used and show that they are equivalent to Equation (4.38). (d) Argue that an optimal policy with respect to the expected discounted return criterion can be obtained by first solving the linear program $$ \begin{array}{ll} \text{maximize} & \sum_{j} \sum_{a} y_{ja} R(j, a), \\ \text{such that} & \sum_{j} \sum_{a} y_{ja}=\frac{1}{1-\alpha}, \\ & \sum_{a} y_{ja}=b_{j}+\alpha \sum_{i} \sum_{a} y_{ia} P_{ij}(a), \\ & y_{ja} \geqslant 0, \quad \text{all } j, a, \end{array} $$ and then defining the policy \(\beta^{*}\) by $$ \beta_{i}^{*}(a)=\frac{y_{ia}^{*}}{\sum_{a} y_{ia}^{*}} $$ where the \(y_{ia}^{*}\) are the solutions of the linear program.
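The first identity of part (b) follows because at every time \(n\) exactly one (state, action) pair is occupied, so the double sum of indicators is identically 1 and the expectation collapses to a geometric series. A minimal numerical check of that series, with an assumed discount factor \(\alpha = 0.9\):

```python
# At every time n exactly one (state, action) pair occurs, so the double sum
# of indicators equals 1 and E[sum_n alpha^n * 1] is a geometric series.
alpha = 0.9  # assumed example discount factor
total = sum(alpha ** n for n in range(10_000))  # truncation of the series
assert abs(total - 1 / (1 - alpha)) < 1e-9      # equals 1/(1 - alpha) = 10
```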

Suppose that coin 1 has probability \(0.7\) of coming up heads, and coin 2 has probability \(0.6\) of coming up heads. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow, and if it comes up tails, then we select coin 2 to flip tomorrow. If the coin initially flipped is equally likely to be coin 1 or coin 2, then what is the probability that the coin flipped on the third day after the initial flip is coin 1?
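This can be worked out by iterating the two-state chain over which coin is flipped. The sketch below (with the assumed labelling state 0 = coin 1, state 1 = coin 2) applies three transitions to the uniform initial distribution, which gives approximately \(0.6665\):

```python
# State 0 = coin 1, state 1 = coin 2 (an assumed labelling).
P = [[0.7, 0.3],   # coin 1 today: heads (0.7) -> coin 1, tails -> coin 2
     [0.6, 0.4]]   # coin 2 today: heads (0.6) -> coin 1, tails -> coin 2
dist = [0.5, 0.5]  # the initial coin is equally likely to be either

for _ in range(3):  # three transitions reach the third day after the start
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

print(dist[0])  # probability the third-day coin is coin 1, about 0.6665
```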
