Problem 16


Let \(Y_{n}\) be the sum of \(n\) independent rolls of a fair die. Find \(\lim _{n \rightarrow \infty} P\{Y_{n} \text{ is a multiple of } 13\}\). Hint: Define an appropriate Markov chain and apply the results of Exercise 14.

Short Answer

The limiting probability that the sum of \(n\) independent rolls of a fair die is a multiple of 13 is \(1/13 \approx 0.0769\). This follows by defining a Markov chain on the 13 possible remainders modulo 13: its transition matrix is doubly stochastic, so the stationary (limiting) distribution is uniform, and the limiting probability of the remainder-0 state is \(1/13\).

Step by step solution

01

Define the Markov chain

Let's define the state space of the Markov chain as the remainder of the sum when divided by 13. We have 13 states: \(S = \{0, 1, 2, ... , 12\}\) Let \(X_n\) be the remainder of the sum \(Y_n\) after dividing by 13: \(X_n = Y_n \pmod{13}\) Clearly, \(0 \leq X_n \leq 12\). The chain starts at state 0 since, with no die rolls, the sum is 0.
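The remainder chain can be simulated directly. The following is a minimal sketch (the function name `remainder_chain` is illustrative, not from the text):

```python
import random

def remainder_chain(n_rolls, seed=0):
    """Simulate X_k = Y_k mod 13 for k = 1..n_rolls, where Y_k is the
    running sum of fair die rolls. Returns the list of remainders."""
    rng = random.Random(seed)
    x, path = 0, []          # the chain starts in state 0 (empty sum)
    for _ in range(n_rolls):
        x = (x + rng.randint(1, 6)) % 13  # one step: add a roll, reduce mod 13
        path.append(x)
    return path

path = remainder_chain(10)
assert all(0 <= x <= 12 for x in path)  # every state lies in {0, ..., 12}
```

Each simulated step depends only on the current remainder, which is exactly the Markov property used in the solution.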
02

Calculate the transition probabilities

The transition probabilities can be written as \(P_{ij} = P(X_{n+1} = j \mid X_n = i)\). Since the die is fair, each roll takes one of the values 1 through 6 with probability \(1/6\). Thus \(P_{ij} = \frac{1}{6}\) if \(j \equiv i + k \pmod{13}\) for some \(k \in \{1, \ldots, 6\}\), and \(P_{ij} = 0\) otherwise. The transition matrix is $$ P = \begin{pmatrix} 0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} $$ Note that every row of \(P\) sums to 1 (it is stochastic), and every column also sums to 1: each state \(j\) is reachable in one step from exactly the six states \(j-1, j-2, \ldots, j-6 \pmod{13}\), each with probability \(1/6\). Hence \(P\) is doubly stochastic.
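The matrix and the doubly stochastic property can be checked exactly with rational arithmetic, as in this short sketch:

```python
from fractions import Fraction

# Build the 13x13 transition matrix: from state i, a roll of k
# (k = 1..6, each with probability 1/6) moves the chain to (i + k) mod 13.
P = [[Fraction(0)] * 13 for _ in range(13)]
for i in range(13):
    for k in range(1, 7):
        P[i][(i + k) % 13] += Fraction(1, 6)

# Every row sums to 1 (stochastic) and every column sums to 1 as well
# (doubly stochastic), which is what makes the uniform distribution stationary.
assert all(sum(row) == 1 for row in P)
assert all(sum(P[i][j] for i in range(13)) == 1 for j in range(13))
```

Using `Fraction` avoids floating-point round-off, so the row and column sums are verified exactly.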
03

Compute the limiting probabilities

As \(n \to \infty\), the limiting probabilities are given by the stationary distribution of the Markov chain. Denote it by the row vector \(v = (v_0, v_1, \ldots, v_{12})\); it satisfies $$ vP = v, \qquad \sum_{i=0}^{12} v_i = 1, \qquad v_i \geq 0. $$ Because \(P\) is doubly stochastic, the uniform vector \(v = (1/13, 1/13, \ldots, 1/13)\) satisfies \(vP = v\): each entry of \(vP\) equals \(\frac{1}{13}\) times a column sum of \(P\), which is 1. The chain is irreducible (steps of size \(1, \ldots, 6\) reach every residue modulo 13) and aperiodic (state 0 returns to itself in 3 steps, e.g. rolls \(6+6+1 = 13\), and in 13 steps, e.g. thirteen rolls of 1, and \(\gcd(3, 13) = 1\)). Hence the stationary distribution is unique, and as \(n \to \infty\) the matrix power \(P^n\) converges to a matrix whose identical rows are the stationary distribution: $$ P^n \to \begin{pmatrix} v\\ v\\ \vdots\\ v \end{pmatrix}, \qquad v = \left(\tfrac{1}{13}, \tfrac{1}{13}, \ldots, \tfrac{1}{13}\right). $$ This is exactly the situation of Exercise 14: a finite, irreducible, aperiodic Markov chain with a doubly stochastic transition matrix has the uniform distribution as its limiting distribution.
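The convergence of \(P^n\) to the uniform-row matrix can be confirmed numerically, for instance with this sketch using NumPy:

```python
import numpy as np

# Transition matrix of the remainder chain: uniform steps 1..6 modulo 13.
P = np.zeros((13, 13))
for i in range(13):
    for k in range(1, 7):
        P[i, (i + k) % 13] += 1 / 6

Pn = np.linalg.matrix_power(P, 100)  # P^100

# Every row of P^100 agrees (to numerical precision) with the uniform
# stationary distribution (1/13, ..., 1/13).
assert np.allclose(Pn, 1 / 13, atol=1e-9)
```

The second-largest eigenvalue of \(P\) has modulus well below 1, so by \(n = 100\) the rows of \(P^n\) are uniform to many digits.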
04

Calculating the probability

Finally, we are interested in the limiting probability of \(Y_n\) being a multiple of 13, which is the limiting probability of state 0: $$ \lim_{n \to \infty} P(Y_n \equiv 0 \pmod{13}) = \lim_{n \to \infty} P(X_n = 0) = v_0 = \frac{1}{13} \approx 0.0769. $$
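As a final sanity check, a Monte Carlo simulation of the original problem should give an estimate near \(1/13\). This is a sketch; the function name `multiple_of_13_rate` is my own:

```python
import random

def multiple_of_13_rate(n_rolls, trials, seed=1):
    """Estimate P(Y_n is a multiple of 13) by direct simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = sum(rng.randint(1, 6) for _ in range(n_rolls))  # Y_n for one trial
        hits += (y % 13 == 0)
    return hits / trials

est = multiple_of_13_rate(n_rolls=50, trials=100_000)
print(est, 1 / 13)  # the estimate should be close to 0.0769
```

With \(n = 50\) rolls the chain has essentially mixed, so the estimate lands within sampling error of \(1/13\).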


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Stationary Distribution
When studying Markov chains, the concept of a stationary distribution is fundamental. It's a probability distribution that remains unchanged as the system evolves over time. In simpler terms, once a Markov chain reaches its stationary distribution, the probabilities of being in each state no longer change with further steps.

In the case of the exercise, we define a stationary distribution for the sum of independent rolls of a fair die as it relates to multiples of 13. The chain comprises 13 states, representing the possible remainders when the sum is divided by 13. After a large number of rolls, the distribution of probabilities over these states stabilizes, and this is what we call the stationary distribution.

To find this stationary distribution, one typically solves the equation \[vP = v\] where \(v\) is the stationary distribution and \(P\) is the transition matrix. In practice, we look for a vector \(v\) that is unchanged by the transformation described by \(P\), has non-negative entries, and sums to 1. Its components are the long-run probabilities that the sum of die rolls leaves a remainder of \(0, 1, 2, \ldots,\) or \(12\) upon division by 13. When, as here, \(P\) is doubly stochastic (every column also sums to 1), the uniform vector \(v = (1/13, \ldots, 1/13)\) solves \(vP = v\), so each remainder has limiting probability \(1/13\).
Transition Probabilities
In Markov chains, transition probabilities are all about chances: specifically, the chance of moving from one state to another in a single step. They encapsulate the essence of Markovian systems, their memorylessness: the future state depends only on the current state, not on the sequence of events that preceded it.

In our dice-rolling example, the transition probabilities are the chances that the remainder of the sum of die rolls modulo 13 changes from one value to another with an additional roll. Mathematically, this is represented as \[P_{ij} = P(X_{n+1} = j\,|\,X_n = i)\] where \(i\) is the current state and \(j\) is the next state. With a fair die, every face has an equal chance of appearing, so each state has exactly six possible successors, each reached with probability \(1/6\). The transition matrix \(P\) holds all these probabilities and is the cornerstone for calculating the stationary distribution as well as other long-term behaviours of the Markov chain.
Limiting Probabilities
Understanding limiting probabilities bridges the gap between short-term randomness and long-term predictability in Markov chains. As the name suggests, limiting probabilities are what the transition probabilities approach as the number of steps goes to infinity: the final set of probabilities once the process has stabilized and time is no longer a factor.

Leveraging this concept, for \(n\) independent rolls of a die, we are looking to determine \[\lim_{n \rightarrow \infty} P(Y_n\, \text{is a multiple of}\, 13)\] This is essentially the probability that, after many rolls, the sum will be a multiple of 13. We arrive at it through the stationary distribution, into which the system has virtually settled for large \(n\). If you were to continue rolling the die indefinitely, the limiting probability tells you what to expect in the long run. In the exercise, because the stationary distribution is uniform over the 13 remainder states, the limiting probability of the sum being a multiple of 13 is \(1/13 \approx 0.0769\).


Most popular questions from this chapter

Let \(P^{(1)}\) and \(P^{(2)}\) denote transition probability matrices for ergodic Markov chains having the same state space. Let \(\pi^{1}\) and \(\pi^{2}\) denote the stationary (limiting) probability vectors for the two chains. Consider a process defined as follows: (i) \(X_{0}=1\). A coin is then flipped and if it comes up heads, then the remaining states \(X_{1}, \ldots\) are obtained from the transition probability matrix \(P^{(1)}\) and if tails from the matrix \(P^{(2)}\). Is \(\left\{X_{n}, n \geq 0\right\}\) a Markov chain? If \(p = P[\text{coin comes up heads}]\), what is \(\lim _{n \rightarrow \infty} P(X_{n}=i)\)? (ii) \(X_{0}=1\). At each stage the coin is flipped and if it comes up heads, then the next state is chosen according to \(P^{(1)}\) and if tails comes up, then it is chosen according to \(P^{(2)}\). In this case do the successive states constitute a Markov chain? If so, determine the transition probabilities. Show by a counterexample that the limiting probabilities are not the same as in part (i).

On a chessboard compute the expected number of plays it takes a knight, starting in one of the four corners of the chessboard, to return to its initial position if we assume that at each play it is equally likely to choose any of its legal moves. (No other pieces are on the board.)

A group of \(n\) processors are arranged in an ordered list. When a job arrives, the first processor in line attempts it; if it is unsuccessful, then the next in line tries it; if it too is unsuccessful, then the next in line tries it, and so on. When the job is successfully processed or after all processors have been unsuccessful, the job leaves the system. At this point we are allowed to reorder the processors, and a new job appears. Suppose that we use the one- closer reordering rule, which moves the processor that was successful one closer to the front of the line by interchanging its position with the one in front of it. If all processors were unsuccessful (or if the processor in the first position was successful), then the ordering remains the same. Suppose that each time processor \(i\) attempts a job then, independently of anything else, it is successful with probability \(p_{i}\). (a) Define an appropriate Markov chain to analyze this model. (b) Show that this Markov chain is time reversible. (c) Find the long run probabilities.

Each day, one of \(n\) possible elements is requested, the \(i\)th one with probability \(P_{i}, i \geq 1, \Sigma_{1}^{n} P_{i}=1\). These elements are at all times arranged in an ordered list which is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n!\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(1,2, \ldots, n\)), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non-\(i_{1}\) request for \(i_{2}\), the last non-\(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that \[\pi\left(i_{1}, \ldots, i_{n}\right) = P_{i_1} \cdot \frac{P_{i_2}}{1 - P_{i_1}} \cdot \frac{P_{i_3}}{1 - P_{i_1} - P_{i_2}} \cdots \frac{P_{i_n}}{1 - P_{i_1} - \cdots - P_{i_{n-1}}}\] Verify when \(n=3\) that the above are indeed the limiting probabilities.

Prove that if the number of states in a Markov chain is \(M\), and if state \(j\) can be reached from state \(i\), then it can be reached in \(M\) steps or less.
