Problem 20


Assume that an experiment has \(m\) equally probable outcomes. Show that the expected number of independent trials before the first occurrence of \(k\) consecutive occurrences of one of these outcomes is \(\left(m^{k}-1\right)/(m-1)\). Hint: Form an absorbing Markov chain with states \(1, 2, \ldots, k\) with state \(i\) representing the length of the current run. The expected time until a run of \(k\) is 1 more than the expected time until absorption for the chain started in state \(1\). It has been found that, in the decimal expansion of pi, starting with the 24,658,601st digit, there is a run of nine 7's. What would your result say about the expected number of digits necessary to find such a run if the digits are produced randomly?

Short Answer

For random pi digits, expect about 111,111,111 digits before nine consecutive 7s.

Step by step solution

01

Understanding the Markov Chain

First, create a Markov chain with states labeled \(1, 2, \ldots, k\), where state \(i\) represents a current run of \(i\) consecutive occurrences of the same outcome. State \(k\) corresponds to having achieved \(k\) consecutive occurrences and is absorbing.
02

Identifying Transition Probabilities

From state \(i\) the chain moves to state \(i+1\) if the next trial repeats the current outcome; any other outcome starts a new run of length 1, sending the chain back to state 1. The transition probabilities are:

* Advance from state \(i\) to state \(i+1\) (the same outcome occurs again): \(1/m\).
* Return to state \(1\) (a different outcome occurs): \((m-1)/m\).
* State \(k\) is absorbing: the process stops once a run of length \(k\) is reached.
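As an illustrative sketch (ours, not part of the original solution), the transition matrix of this run-length chain can be written out explicitly for small \(m\) and \(k\); the function name `transition_matrix` is our own:

```python
import numpy as np

def transition_matrix(m, k):
    """Run-length chain on states 1..k (stored 0-indexed as 0..k-1).

    From a transient state i the run extends (i -> i+1) with
    probability 1/m; any other outcome starts a fresh run of
    length 1 (i -> state 1) with probability (m-1)/m.
    State k is absorbing.
    """
    P = np.zeros((k, k))
    for i in range(k - 1):
        P[i, i + 1] = 1 / m        # same outcome again: run grows
        P[i, 0] = (m - 1) / m      # different outcome: back to a run of 1
    P[k - 1, k - 1] = 1.0          # state k absorbs
    return P

P = transition_matrix(m=2, k=3)
print(P)
```

Each row sums to 1, and the last row has a single 1 on the diagonal, as required of an absorbing state.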
03

Setting up Equations for Expectation

Define \(E_i\) as the expected number of trials until absorption, starting from state \(i\). For the absorbing state, \(E_k = 0\); for \(1 \le i < k\): \[E_i = 1 + \frac{1}{m}E_{i+1} + \frac{m-1}{m}E_1\]
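These equations form a small linear system that can also be solved numerically. A minimal check (our own sketch, using the standard fundamental-matrix identity \(t = (I-Q)^{-1}\mathbf{1}\) for expected absorption times):

```python
import numpy as np

def expected_trials(m, k):
    """Expected trials until a run of k, via expected absorption times."""
    # Q: transitions among the transient states 1..k-1
    Q = np.zeros((k - 1, k - 1))
    for i in range(k - 1):
        Q[i, 0] = (m - 1) / m        # interruption: back to state 1
        if i + 1 < k - 1:
            Q[i, i + 1] = 1 / m      # run extends (the step into state k leaves Q)
    # expected absorption times t solve (I - Q) t = 1
    t = np.linalg.solve(np.eye(k - 1) - Q, np.ones(k - 1))
    return 1 + t[0]                  # +1: one trial is needed to start the first run

print(expected_trials(2, 2))   # 3.0, matching (2**2 - 1)/(2 - 1)
```

The `1 +` in the return value reflects the hint: the expected number of trials is one more than the expected time until absorption for the chain started in state 1.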
04

Equation Simplification

For the initial state, \(E_1\) satisfies \[E_1 = 1 + \frac{1}{m}E_2 + \frac{m-1}{m}E_1.\] Multiplying through by \(m\) gives \[mE_1 = m + E_2 + (m-1)E_1,\] which simplifies to \(E_1 = m + E_2\).
05

Construct Equation for E_k

Follow similar logic for state \(k-1\); since \(E_k = 0\), \[E_{k-1} = 1 + \frac{1}{m}E_k + \frac{m-1}{m}E_1 = 1 + \frac{m-1}{m}E_1.\]
06

Deriving General Formula

By back-substitution through the states \(k-1, k-2, \ldots, 1\), the system yields \[E_1 = \frac{m^k - m}{m-1}.\] Following the hint, the expected number of trials is one more than the expected time until absorption starting in state 1 (one trial is needed to begin the first run), so \[1 + E_1 = \frac{m^k - 1}{m-1},\] as required.
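The back-substitution can be checked with exact rational arithmetic. In this sketch (ours, not the textbook's), each \(E_i\) is carried in the form \(a + b\,E_1\) while substituting backward from \(E_k = 0\), and the resulting fixed-point equation \(E_1 = a + b\,E_1\) is solved:

```python
from fractions import Fraction

def E1(m, k):
    """Expected absorption time from state 1 for the run-length chain.

    Back-substitute E_i = 1 + E_{i+1}/m + (m-1)E_1/m with E_k = 0,
    carrying E_i = a + b*E_1 symbolically, then solve E_1 = a + b*E_1.
    """
    a, b = Fraction(0), Fraction(0)      # start from E_k = 0
    for _ in range(k - 1):               # states k-1, k-2, ..., 1
        a = 1 + a / m
        b = b / m + Fraction(m - 1, m)
    return a / (1 - b)

m, k = 10, 9
print(1 + E1(m, k))   # 111111111 = (10**9 - 1)/(10 - 1)
```

Because `Fraction` arithmetic is exact, the agreement with \((m^k-1)/(m-1)\) is an identity here, not a floating-point approximation.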
07

Applying Solution to the Pi Problem

Using \(m = 10\) and \(k = 9\) for the digits of pi: \[\frac{10^9 - 1}{9} = 111{,}111{,}111.\] If the digits were produced randomly, we would thus expect to examine about 111,111,111 digits before finding a run of nine consecutive 7's; the observed run beginning at the 24,658,601st digit appears well before this.
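The final arithmetic is a one-liner; a quick computation (our own):

```python
m, k = 10, 9                        # ten decimal digits, run of nine 7's
expected = (m**k - 1) // (m - 1)    # the division is exact here
print(f"{expected:,}")              # 111,111,111
```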


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Absorbing Markov Chain
In a Markov chain, an absorbing state is one that, once entered, cannot be left. An *absorbing Markov chain* consists of transient states and absorbing states; the process stops once an absorbing state is reached. To apply this to our exercise, we imagine a chain in which each consecutive occurrence of the same outcome moves us closer to absorption, which here means achieving \(k\) consecutive outcomes.

The states, labeled as \(1, 2, \ldots, k\), represent the run of consecutive outcomes. The absorbing state is state \(k\), where the run is successfully completed. All states before \(k\) are transient, meaning that there is potential to stay in them but also to eventually transition out until the sequence is interrupted, or the goal is met.
Expected Number of Trials
The expected number of trials needed before achieving a desired result in an experiment is a measurement of how long we expect to continue the process. In this context, it refers to the trials required to reach \(k\) consecutive occurrences of an outcome. The formula given, \(\left(m^{k} - 1\right)/(m-1)\), computes this expected number.

This formula derives from setting up expectations for each state, from absorbing Markov chain theory. The process starts at state \(1\) and expected time is computed for each stage until absorption, or achieving the desired \(k\) consecutive outcomes.
Transition Probabilities
Transition probabilities in a Markov chain determine the likelihood of moving from one state to another. In our exercise, these probabilities arise based upon the occurrence or non-occurrence of an outcome.

* If the next outcome matches the last one, the run extends and the chain advances to the next state, with probability \(1/m\).
* If a different outcome occurs, the chain resets to state \(1\), with probability \((m-1)/m\).

These transition probabilities determine the expected number of trials, since they govern how quickly the run grows and how often it is interrupted. Understanding them is essential for setting up the Markov chain model.
Consecutive Outcomes
Consecutive outcomes refer to the repeated occurrence of the same event multiple times in sequence. In many probabilistic experiments, especially those modeled by Markov chains, the focus often lies on achieving such a sequence.

The exercise problem deals with this by setting the goal of reaching \(k\) consecutive outcomes. Each added consecutive occurrence is progress toward the absorbing state in our Markov chain. Each trial is independent and each outcome has the same probability \(1/m\) on every trial; achieving a run of the desired length, however, requires the same outcome to occur repeatedly until state \(k\) is reached.
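A small Monte Carlo check (our own sketch, with arbitrarily chosen parameters \(m = 3\), \(k = 3\)) illustrates that the empirical mean of the trial count approaches \((m^k - 1)/(m - 1)\):

```python
import random

def trials_until_run(m, k, rng):
    """Number of i.i.d. uniform trials until some outcome occurs k times in a row."""
    count, run, last = 0, 0, None
    while run < k:
        x = rng.randrange(m)
        run = run + 1 if x == last else 1   # extend the run or start a new one
        last = x
        count += 1
    return count

rng = random.Random(0)                      # fixed seed for reproducibility
m, k, n = 3, 3, 20000
avg = sum(trials_until_run(m, k, rng) for _ in range(n)) / n
exact = (m**k - 1) / (m - 1)                # exact value is 13
print(avg, exact)                           # avg should be close to 13
```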


Most popular questions from this chapter

Consider the Markov chain with general \(2 \times 2\) transition matrix $$ \mathbf{P}=\left(\begin{array}{cc} 1-a & a \\ b & 1-b \end{array}\right) $$ (a) Under what conditions is \(\mathbf{P}\) absorbing? (b) Under what conditions is \(\mathbf{P}\) ergodic but not regular? (c) Under what conditions is \(\mathbf{P}\) regular?

In his book, Wahrscheinlichkeitsrechnung und Statistik, \(^{15}\) A. Engle proposes an algorithm for finding the fixed vector for an ergodic Markov chain when the transition probabilities are rational numbers. Here is his algorithm: For each state \(i\), let \(a_{i}\) be the least common multiple of the denominators of the non-zero entries in the \(i\) th row. Engle describes his algorithm in terms of moving chips around on the states-indeed, for small examples, he recommends implementing the algorithm this way. Start by putting \(a_{i}\) chips on state \(i\) for all \(i\). Then, at each state, redistribute the \(a_{i}\) chips, sending \(a_{i} p_{i j}\) to state \(j\). The number of chips at state \(i\) after this redistribution need not be a multiple of \(a_{i} .\) For each state \(i\), add just enough chips to bring the number of chips at state \(i\) up to a multiple of \(a_{i}\). Then redistribute the chips in the same manner. This process will eventually reach a point where the number of chips at each state, after the redistribution, is the same as before redistribution. At this point, we have found a fixed vector. Here is an example (rows and columns indexed by the states \(1, 2, 3\)): $$ \mathbf{P}=\left(\begin{array}{ccc} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/2 & 1/4 & 1/4 \end{array}\right) $$ We start with \(\mathbf{a}=(4,2,4)\). The chips after successive redistributions are shown in Table 11.4 . We find that \(\mathbf{a}=(20,8,12)\) is a fixed vector. (a) Write a computer program to implement this algorithm. (b) Prove that the algorithm will stop. Hint: Let \(\mathbf{b}\) be a vector with integer components that is a fixed vector for \(\mathbf{P}\) and such that each coordinate of the starting vector \(\mathbf{a}\) is less than or equal to the corresponding component of b. Show that, in the iteration, the components of the vectors are always increasing, and always less than or equal to the corresponding component of \(\mathbf{b}\).

Consider the Markov chain with transition matrix $$ \mathbf{P}=\left(\begin{array}{ccc} 1 / 2 & 1 / 3 & 1 / 6 \\ 3 / 4 & 0 & 1 / 4 \\ 0 & 1 & 0 \end{array}\right) $$ (a) Show that this is a regular Markov chain. (b) The process is started in state \(1 ;\) find the probability that it is in state 3 after two steps. (c) Find the limiting probability vector w.

Define \(\mathbf{P}\) and \(\mathbf{y}\) by $$ \mathbf{P}=\left(\begin{array}{cc} .5 & .5 \\ .25 & .75 \end{array}\right), \quad \mathbf{y}=\left(\begin{array}{l} 1 \\ 0 \end{array}\right) $$ Compute \(\mathbf{P y}, \mathbf{P}^{2} \mathbf{y},\) and \(\mathbf{P}^{4} \mathbf{y}\) and show that the results are approaching a constant vector. What is this vector?

Write a program to simulate the outcomes of a Markov chain after \(n\) steps, given the initial starting state and the transition matrix \(\mathbf{P}\) as data (see Example 11.12). Keep this program for use in later problems.
