Problem 67

A coin having probability \(p\) of coming up heads is continually flipped. Let \(P_{j}(n)\) denote the probability that a run of \(j\) successive heads occurs within the first \(n\) flips. (a) Argue that $$ P_{j}(n)=P_{j}(n-1)+p^{j}(1-p)\left[1-P_{j}(n-j-1)\right] $$ (b) By conditioning on the first non-head to appear, derive another equation relating \(P_{j}(n)\) to the quantities \(P_{j}(n-k), k=1, \ldots, j\)

Short Answer

Expert verified
In part (a), we proved that \(P_{j}(n) = P_{j}(n-1) + p^{j}(1-p)\left[1-P_{j}(n-j-1)\right]\) by splitting the event into two disjoint cases: either a run of j heads occurs within the first n-1 flips, or no run occurs within the first n-1 flips and a run is completed exactly at flip n (which forces flip n-j to be a tail and flips n-j+1 through n to be heads). In part (b), by conditioning on the position of the first non-head, we derived another equation relating \(P_{j}(n)\) to the quantities \(P_{j}(n-k), k=1, \ldots, j\): \(P_{j}(n) = p^{j} + \sum_{k=1}^{j} p^{k-1}(1-p)P_{j}(n-k)\), where the term \(p^{j}\) accounts for the event that the first j flips are all heads.

Step by step solution

01

Prove the given equation for Part (a)

To prove the given equation, partition the event that a run of j successive heads occurs within the first n flips into two disjoint cases:

1. A run of j heads occurs within the first n-1 flips. This happens with probability \(P_{j}(n-1)\).

2. No run of j heads occurs within the first n-1 flips, but a run is completed exactly at flip n. For this to happen, flips n-j+1 through n must all be heads, flip n-j must be a tail (if flip n-j were a head, then flips n-j through n-1 would already form a run of j heads within the first n-1 flips), and no run of j heads may occur within the first n-j-1 flips. Since the flips are independent, this case has probability \(\left[1-P_{j}(n-j-1)\right](1-p)p^{j}\).

The two cases are disjoint and together exhaust the event, so adding their probabilities gives \(P_{j}(n) = P_{j}(n-1) + p^{j}(1-p)\left[1-P_{j}(n-j-1)\right]\). This proves the equation for part (a).
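As a quick numerical sanity check (not part of the textbook solution), the part (a) recursion can be compared against a brute-force enumeration of all \(2^n\) flip sequences. The function names and the sample parameters j = 3, p = 0.6 are illustrative choices; the boundary conditions \(P_{j}(m) = 0\) for \(m < j\) and \(P_{j}(j) = p^{j}\) follow from the definition of \(P_{j}(n)\).

```python
from itertools import product

def p_run_bruteforce(j, n, p):
    """Exact P_j(n): sum the probabilities of all 2^n flip
    sequences (1 = head, 0 = tail) that contain a run of at
    least j consecutive heads."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        prob, run, longest = 1.0, 0, 0
        for b in seq:
            prob *= p if b else 1 - p
            run = run + 1 if b else 0
            longest = max(longest, run)
        if longest >= j:
            total += prob
    return total

def p_run_recursion_a(j, n, p):
    """P_j(n) via P_j(n) = P_j(n-1) + p^j (1-p)[1 - P_j(n-j-1)]
    for n > j, with P_j(m) = 0 for m < j and P_j(j) = p^j."""
    P = {m: 0.0 for m in range(j)}
    P[j] = p ** j
    for m in range(j + 1, n + 1):
        P[m] = P[m - 1] + p ** j * (1 - p) * (1 - P[m - j - 1])
    return P[n]

j, p = 3, 0.6
for n in range(j, 13):
    assert abs(p_run_recursion_a(j, n, p) - p_run_bruteforce(j, n, p)) < 1e-12
```

The brute force is exponential in n, so it only serves to validate the linear-time recursion on small cases.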
02

Derive the equation for Part (b)

In part (b), we are asked to derive another equation relating \(P_{j}(n)\) to \(P_{j}(n-k), k=1, \ldots, j\) by conditioning on the first non-head to appear. Suppose the first non-head (the first tail) occurs at flip k, for some \(k = 1, \ldots, j\): the first k-1 flips are heads and the kth flip is a tail, an event with probability \(p^{k-1}(1-p)\). Since the opening run of heads has length \(k-1 < j\), a run of j heads must now occur among the remaining n-k flips, which happens with probability \(P_{j}(n-k)\). So this case contributes \(p^{k-1}(1-p)P_{j}(n-k)\). One further case remains: if no tail appears among the first j flips, an event with probability \(p^{j}\), then the first j flips are all heads and a run of j heads has already occurred. Summing over these disjoint cases, we get: \(P_{j}(n) = p^{j} + \sum_{k=1}^{j} p^{k-1}(1-p)P_{j}(n-k)\). This gives us another equation relating \(P_{j}(n)\) to the quantities \(P_{j}(n-k), k=1, \ldots, j\), as asked in part (b).
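The part (b) recursion can be sanity-checked numerically in the same way. Note the leading \(p^{j}\) term for the event that no tail appears among the first j flips; without it the recursion would give \(P_{j}(j) = 0\) instead of \(p^{j}\). The function names and sample parameters below are illustrative choices, not part of the original solution.

```python
from itertools import product

def p_run_bruteforce(j, n, p):
    """Exact P_j(n) by enumerating all 2^n flip sequences (1 = head)."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        prob, run, longest = 1.0, 0, 0
        for b in seq:
            prob *= p if b else 1 - p
            run = run + 1 if b else 0
            longest = max(longest, run)
        if longest >= j:
            total += prob
    return total

def p_run_recursion_b(j, n, p):
    """P_j(n) via the part (b) recursion
    P_j(n) = p^j + sum_{k=1}^{j} p^{k-1} (1-p) P_j(n-k),
    with P_j(m) = 0 for m < j."""
    P = {m: 0.0 for m in range(j)}
    for m in range(j, n + 1):
        P[m] = p ** j + sum(p ** (k - 1) * (1 - p) * P[m - k]
                            for k in range(1, j + 1))
    return P[n]

j, p = 3, 0.6
for n in range(j, 13):
    assert abs(p_run_recursion_b(j, n, p) - p_run_bruteforce(j, n, p)) < 1e-12
```

For example, with j = 3 and n = 4 both methods give \(p^{3} + (1-p)p^{3}\), the probability of a run in HHH?, THHH, or the overlap-free completion at flip 4.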


