Problem 52


\(M\) balls are initially distributed among \(m\) urns. At each stage one of the balls is selected at random, taken from whichever urn it is in, and then placed, at random, in one of the other \(m-1\) urns. Consider the Markov chain whose state at any time is the vector \(\left(n_{1}, \ldots, n_{m}\right)\), where \(n_{i}\) denotes the number of balls in urn \(i\). Guess at the limiting probabilities for this Markov chain and then verify your guess, showing at the same time that the Markov chain is time reversible.

Short Answer

The limiting probabilities for this Markov chain are the multinomial probabilities \[\pi\left(n_{1}, \ldots, n_{m}\right)=\binom{M}{n_{1}, \ldots, n_{m}}\left(\frac{1}{m}\right)^{M}=\frac{M !}{n_{1} ! \cdots n_{m} !}\, m^{-M},\] i.e., in equilibrium each of the \(M\) balls is independently equally likely to be in any of the \(m\) urns. The guess is confirmed by showing that the detailed balance equations hold for these probabilities, which at the same time shows that the Markov chain is time reversible.

Step by step solution

01

Define the Markov Chain

We have a Markov chain whose state at any given time is represented by a vector \(\left(n_1, \ldots, n_m\right)\), where \(n_i\) denotes the number of balls in urn \(i\). The state space is given by \(S = \{(n_1,\ldots, n_m) \mid \sum_{i=1}^m n_i = M\text{ and }0 \le n_i \le M \text{ for all }i\}\).
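For small values of \(M\) and \(m\) the state space can be enumerated directly. A minimal Python sketch (the function name is my own, not from the text) confirming the stars-and-bars count \(\binom{M+m-1}{m-1}\) of states:

```python
from itertools import product
from math import comb

def states(M, m):
    """All vectors (n_1, ..., n_m) of nonnegative integers summing to M."""
    return [s for s in product(range(M + 1), repeat=m) if sum(s) == M]

M, m = 4, 3
S = states(M, m)
# Stars and bars: there are C(M + m - 1, m - 1) compositions of M into m parts.
print(len(S), comb(M + m - 1, m - 1))  # -> 15 15
```

This brute-force enumeration is exponential in \(m\), which is fine for a sanity check but not for large instances.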
02

Propose Limiting Probabilities

A natural guess is that, in the long run, each ball is independently equally likely to be in any of the \(m\) urns. Under that guess the limiting probabilities are multinomial: \[\pi\left(n_{1}, \ldots, n_{m}\right)=\binom{M}{n_{1}, \ldots, n_{m}}\left(\frac{1}{m}\right)^{M}=\frac{M !}{n_{1} ! \cdots n_{m} !}\, m^{-M}.\] Note that this is not the uniform distribution over the \(\binom{M+m-1}{m-1}\) possible states: a configuration that can be realized in many ways (balls spread evenly) carries more probability than an extreme one (all balls in one urn), matching the intuition that the dynamics push toward balance. By the multinomial theorem these probabilities sum to \(1\).
03

Show Detailed Balance Equations Hold

To confirm our guess, we show that the detailed balance equations hold for the proposed limiting probabilities: \[\pi\left(n_{1}, \ldots, n_{m}\right) Q\left(\left(n_{1}, \ldots, n_{m}\right),\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right)\right)=\pi\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right) Q\left(\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right),\left(n_{1}, \ldots, n_{m}\right)\right).\] The only possible one-step transitions move a single ball. Suppose \(\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right)\) is obtained from \(\left(n_{1}, \ldots, n_{m}\right)\) by moving a ball from urn \(i\) to urn \(j\) (where \(i \neq j\)), so that \(n_i' = n_i - 1\), \(n_j' = n_j + 1\), and \(n_k' = n_k\) for \(k \neq i, j\). The selected ball lies in urn \(i\) with probability \(n_i / M\), and it is then placed in urn \(j\) with probability \(1/(m-1)\), so \[Q\left(\left(n_{1}, \ldots, n_{m}\right),\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right)\right)=\frac{n_{i}}{M}\cdot\frac{1}{m-1}=\frac{n_{i}}{M(m-1)}.\] The reverse transition moves a ball from urn \(j\), which in state \(\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right)\) contains \(n_j + 1\) balls, back to urn \(i\): \[Q\left(\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right),\left(n_{1}, \ldots, n_{m}\right)\right)=\frac{n_{j}+1}{M(m-1)}.\] Now plug in the multinomial limiting probabilities: \[\begin{aligned} &\pi\left(n_{1}, \ldots, n_{m}\right) Q\left(\left(n_{1}, \ldots, n_{m}\right),\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right)\right)\\ =&\frac{M !}{n_{1} ! \cdots n_{m} !}\, m^{-M} \cdot \frac{n_{i}}{M(m-1)}\end{aligned}\] and \[\begin{aligned} &\pi\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right) Q\left(\left(n_{1}^{\prime}, \ldots, n_{m}^{\prime}\right),\left(n_{1}, \ldots, n_{m}\right)\right)\\ =&\frac{M !}{n_{1} ! \cdots\left(n_{i}-1\right) ! \cdots\left(n_{j}+1\right) ! \cdots n_{m} !}\, m^{-M} \cdot \frac{n_{j}+1}{M(m-1)}. \end{aligned}\] Since \(n_i / n_i ! = 1/\left(n_i - 1\right)!\) and \(\left(n_j + 1\right)/\left(n_j + 1\right)! = 1/n_j!\), both sides equal \[\frac{M !\, m^{-M}}{M(m-1)} \cdot \frac{1}{\left(n_{i}-1\right) !\, n_{j} ! \prod_{k \neq i, j} n_{k} !},\] so the detailed balance equations hold.
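The algebra above can be cross-checked by brute force for small \(M\) and \(m\). The following Python sketch (helper names are mine, not from the text) verifies detailed balance over every pair of states:

```python
from itertools import product
from math import factorial, isclose

def pi(x, m):
    """Multinomial weight M!/(n_1!...n_m!) * (1/m)^M for state x."""
    M = sum(x)
    w = factorial(M)
    for n in x:
        w //= factorial(n)
    return w / m ** M

def q(x, y, m):
    """One-step probability of x -> y: a uniformly chosen ball
    moves to one of the other m - 1 urns."""
    M = sum(x)
    diff = [b - a for a, b in zip(x, y)]
    if sorted(diff) != [-1] + [0] * (m - 2) + [1]:
        return 0.0                # not a single-ball move
    i = diff.index(-1)            # the urn that loses a ball
    return x[i] / (M * (m - 1))   # pick that ball, then pick urn j

M, m = 3, 3
S = [s for s in product(range(M + 1), repeat=m) if sum(s) == M]
assert all(isclose(pi(x, m) * q(x, y, m), pi(y, m) * q(y, x, m))
           for x in S for y in S)
print("detailed balance holds for all", len(S) ** 2, "state pairs")
```

Pairs that are not one ball-move apart contribute zero to both sides, so the check over all pairs also confirms that no other transitions are possible.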
04

Verify Time-Reversible Condition

Since the detailed balance equations \(\pi(x) Q(x, y)=\pi(y) Q(y, x)\) hold for every pair of states, the multinomial probabilities are the stationary (limiting) probabilities, and the chain in equilibrium is time reversible: the sequence of states viewed backward in time is statistically identical to the sequence viewed forward. In words, in steady state the \(M\) balls behave as if each had been placed independently and uniformly at random into one of the \(m\) urns.
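Independently of the detailed balance argument, the stationary distribution of a small instance can be computed numerically and compared to the multinomial guess. A Python sketch (names are mine; \(M = m = 3\) is chosen because the chain is then irreducible and aperiodic, so power iteration converges):

```python
from itertools import product
from math import factorial, isclose

M, m = 3, 3
S = [s for s in product(range(M + 1), repeat=m) if sum(s) == M]
idx = {s: k for k, s in enumerate(S)}

# Transition matrix: a uniformly chosen ball moves to one of the other m-1 urns.
P = [[0.0] * len(S) for _ in S]
for x in S:
    for i in range(m):
        if x[i] == 0:
            continue
        for j in range(m):
            if j != i:
                y = list(x); y[i] -= 1; y[j] += 1
                P[idx[x]][idx[tuple(y)]] += x[i] / (M * (m - 1))

# Power iteration from the uniform vector toward the stationary distribution.
v = [1.0 / len(S)] * len(S)
for _ in range(2000):
    v = [sum(v[a] * P[a][b] for a in range(len(S))) for b in range(len(S))]

def multinomial(x):
    w = factorial(M)
    for n in x:
        w //= factorial(n)
    return w / m ** M

assert all(isclose(v[idx[x]], multinomial(x), abs_tol=1e-9) for x in S)
print("power iteration matches the multinomial distribution")
```

Note that for \(m = 2\) the chain has period 2 (the count in urn 1 alternates in parity), so for that case one would compare long-run time averages rather than iterates of \(P\).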

