Q17E We defined a negative binomial ...


We defined a negative binomial \({\rm{rv}}\) \({\rm{X}}\) as the number of failures that occur before the \({\rm{r}}\)th success in a sequence of independent and identical success/failure trials. The probability mass function (\({\rm{pmf}}\)) of \({\rm{X}}\) is

\({\rm{nb(x;r,p) = }}\left( {\begin{array}{*{20}{c}}{{\rm{x + r - 1}}}\\{\rm{x}}\end{array}} \right){{\rm{p}}^{\rm{r}}}{{\rm{(1 - p)}}^{\rm{x}}}\quad {\rm{x = 0,1,2, \ldots }}\)

a. Suppose that \({\rm{r}} \ge {\rm{2}}\). Show that \({\rm{\hat p = (r - 1)/(X + r - 1)}}\) is an unbiased estimator for \({\rm{p}}\). (Hint: Write out \({\rm{E(\hat p)}}\) and cancel \({\rm{x + r - 1}}\) inside the sum.)

b. A reporter wishing to interview five individuals who support a certain candidate begins asking people whether \({\rm{(S)}}\) or not \({\rm{(F)}}\) they support the candidate. If the sequence of responses is SFFSFFFSSS, estimate \({\rm{p = }}\) the true proportion who support the candidate.

Short Answer


a) \({\rm{E(\hat p) = p}}\), so \({\rm{\hat p}}\) is unbiased; the key step is that the rewritten sum is the total probability of a valid negative binomial distribution, which equals \({\rm{1}}\).

b) The estimate of the true proportion is \({\rm{\hat p = 4/9}} \approx {\rm{0}}{\rm{.4444}}\).

Step by step solution

Step 01: Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. The rule (the estimator), the quantity being estimated (the estimand), and the resulting value (the estimate) are all distinct.

Step 02: Explanation (a)

Given: The pmf of \({\rm{X}}\) is:

\({\rm{nb(x;r,p) = }}\left( {\begin{array}{*{20}{c}}{{\rm{x + r - 1}}}\\{\rm{x}}\end{array}} \right){{\rm{p}}^{\rm{r}}}{{\rm{(1 - p)}}^{\rm{x}}}\)

The expected value of \({\rm{\hat p}}\) is the sum, over all possible values of \({\rm{X}}\), of \(\frac{{{\rm{r - 1}}}}{{{\rm{X + r - 1}}}}\) weighted by its probability \({\rm{nb(x;r,p)}}\):

\(\begin{aligned}E(\hat p) &= \sum\limits_{x = 0}^{ + \infty } {\frac{{r - 1}}{{x + r - 1}}} nb(x;r,p)\\ &= \sum\limits_{x = 0}^{ + \infty } {\frac{{r - 1}}{{x + r - 1}}} \left( {\begin{array}{*{20}{c}}{x + r - 1}\\x\end{array}} \right){p^r}{(1 - p)^x}\\ &= \sum\limits_{x = 0}^{ + \infty } {\frac{{r - 1}}{{x + r - 1}}} \frac{{(x + r - 1)!}}{{x!(r - 1)!}}{p^r}{(1 - p)^x}\\ &= \sum\limits_{x = 0}^{ + \infty } {(r - 1)} \frac{{(x + r - 2)!}}{{x!(r - 1)!}}{p^r}{(1 - p)^x}\\ &= \sum\limits_{x = 0}^{ + \infty } {\frac{{(x + r - 2)!}}{{x!(r - 2)!}}} {p^r}{(1 - p)^x}\\ &= \sum\limits_{x = 0}^{ + \infty } {\left( {\begin{array}{*{20}{c}}{x + r - 2}\\x\end{array}} \right)} {p^r}{(1 - p)^x}\\ &= p\sum\limits_{x = 0}^{ + \infty } {\left( {\begin{array}{*{20}{c}}{x + r - 2}\\x\end{array}} \right)} {p^{r - 1}}{(1 - p)^x}\\ &= p \cdot 1 = p\end{aligned}\)

Note:

\(\sum\limits_{x = 0}^{ + \infty } {\left( {\begin{array}{*{20}{c}}{x + r - 2}\\x\end{array}} \right)} {p^{r - 1}}{(1 - p)^x} = 1\), because the summand is the pmf of a negative binomial distribution counting failures before the \({\rm{(r - 1)}}\)th success (instead of the \({\rm{r}}\)th success), and the total probability of a valid probability distribution is equal to \({\rm{1}}{\rm{.}}\)
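As a quick numerical sanity check (not part of the textbook solution), the unbiasedness can be verified by truncating the infinite sum; the values \(r = 5\), \(p = 0.3\) below are arbitrary illustrative choices.

```python
from math import comb

def nb_pmf(x, r, p):
    # pmf of X = number of failures before the r-th success
    return comb(x + r - 1, x) * p**r * (1 - p)**x

r, p = 5, 0.3  # arbitrary illustrative values (any r >= 2, 0 < p < 1)
# E[(r-1)/(X+r-1)], truncated where the geometric tail is negligible
e_phat = sum((r - 1) / (x + r - 1) * nb_pmf(x, r, p) for x in range(500))
print(round(e_phat, 6))  # 0.3, matching p
```

Any other choice of \(r \ge 2\) and \(0 < p < 1\) reproduces \(p\) in the same way.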

Step 03: Explanation (b)

Considering the given information:

\({\rm{r = 5}}\)

SFFSFFFSSS

SFFSFFFSSS contains \({\rm{5}}\) successes and \({\rm{5}}\) failures, all of which occur before the \({\rm{5}}\)th success; \({\rm{x}}\) is the number of failures.

\({\rm{x = 5}}\)

Determine the estimator of part (a):

\(\begin{aligned}\hat p &= \frac{{{\rm{r - 1}}}}{{{\rm{x + r - 1}}}}\\ &= \frac{{{\rm{5 - 1}}}}{{{\rm{5 + 5 - 1}}}}\\ &= \frac{{\rm{4}}}{{\rm{9}}}\\ &\approx {\rm{0}}{\rm{.4444}}\end{aligned}\)

Therefore, the estimate is \({\rm{\hat p = 4/9}} \approx {\rm{0}}{\rm{.4444}}\).
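The failure count can also be read off programmatically. This short sketch (illustrative, not from the text) extracts \(x\) from the response sequence and applies the estimator from part (a).

```python
responses = "SFFSFFFSSS"
r = 5  # interviewing stops at the 5th supporter (success)
# position of the r-th success; x = failures observed up to that point
stop = [i for i, c in enumerate(responses) if c == "S"][r - 1]
x = responses[: stop + 1].count("F")
p_hat = (r - 1) / (x + r - 1)
print(x, round(p_hat, 4))  # 5 0.4444
```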


Most popular questions from this chapter

Let \({{\rm{X}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) represent a random sample from a Rayleigh distribution with pdf

\({\rm{f(x;\theta ) = }}\frac{{\rm{x}}}{{\rm{\theta }}}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}^{\rm{2}}}{\rm{/(2\theta )}}}}\quad {\rm{x > 0}}\)

a. It can be shown that \({\rm{E}}\left( {{{\rm{X}}^{\rm{2}}}} \right){\rm{ = 2\theta }}\). Use this fact to construct an unbiased estimator of \({\rm{\theta }}\) based on \({\rm{\Sigma X}}_{\rm{i}}^{\rm{2}}\) (and use rules of expected value to show that it is unbiased).

b. Estimate\({\rm{\theta }}\)from the following\({\rm{n = 10}}\)observations on vibratory stress of a turbine blade under specified conditions:

\(\begin{array}{*{20}{l}}{{\rm{16}}{\rm{.88}}}&{{\rm{10}}{\rm{.23}}}&{{\rm{4}}{\rm{.59}}}&{{\rm{6}}{\rm{.66}}}&{{\rm{13}}{\rm{.68}}}\\{{\rm{14}}{\rm{.23}}}&{{\rm{19}}{\rm{.87}}}&{{\rm{9}}{\rm{.40}}}&{{\rm{6}}{\rm{.51}}}&{{\rm{10}}{\rm{.95}}}\end{array}\)
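Since \({\rm{E}}\left( {{{\rm{X}}^{\rm{2}}}} \right){\rm{ = 2\theta }}\), the estimator \(\hat \theta = \Sigma X_i^2/(2n)\) has expectation \(\theta\) and is therefore unbiased. The sketch below (assuming that estimator) applies it to the ten observations.

```python
data = [16.88, 10.23, 4.59, 6.66, 13.68,
        14.23, 19.87, 9.40, 6.51, 10.95]
n = len(data)
# E(X^2) = 2θ, so Σx_i²/(2n) has expectation θ (unbiased)
theta_hat = sum(x**2 for x in data) / (2 * n)
print(round(theta_hat, 3))  # 74.505
```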

Let \({\rm{X}}\) represent the error in making a measurement of a physical characteristic or property (e.g., the boiling point of a particular liquid). It is often reasonable to assume that \({\rm{E(X) = 0}}\) and that \({\rm{X}}\) has a normal distribution. Thus, the pdf of any particular measurement error is

\({\rm{f(x;\theta ) = }}\frac{{\rm{1}}}{{\sqrt {{\rm{2\pi \theta }}} }}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}^{\rm{2}}}{\rm{/2\theta }}}}\quad {\rm{ - \infty < x < \infty }}\)

(where we have used \({\rm{\theta }}\) in place of \({{\rm{\sigma }}^{\rm{2}}}\)). Now suppose that \({\rm{n}}\) independent measurements are made, resulting in measurement errors \({{\rm{X}}_{\rm{1}}}{\rm{ = }}{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{ = }}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}{\rm{ = }}{{\rm{x}}_{\rm{n}}}{\rm{.}}\) Obtain the mle of \({\rm{\theta }}\).

Consider a random sample \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) from the pdf

\({\rm{f(x;\theta ) = 0}}{\rm{.5(1 + \theta x)}}\quad {\rm{ - 1}} \le {\rm{x}} \le {\rm{1}}\)

where \({\rm{ - 1}} \le {\rm{\theta }} \le {\rm{1}}\) (this distribution arises in particle physics). Show that \({\rm{\hat \theta = 3\bar X}}\) is an unbiased estimator of \({\rm{\theta }}\). (Hint: First determine \({\rm{\mu = E(X) = E(\bar X)}}\).)
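Here \(E(X) = \int_{-1}^{1} x \cdot 0.5(1 + \theta x)\,dx = \theta/3\), which is why \(3\bar X\) is unbiased. A midpoint-rule integration (an illustrative check, with \(\theta = 0.6\) chosen arbitrarily) confirms the mean.

```python
theta = 0.6  # arbitrary illustrative value in [-1, 1]
N = 200_000  # midpoint-rule subintervals on [-1, 1]
h = 2 / N

def integrand(x):
    # x · f(x;θ) with f(x;θ) = 0.5(1 + θx) on [-1, 1]
    return x * 0.5 * (1 + theta * x)

mean = sum(integrand(-1 + (i + 0.5) * h) * h for i in range(N))
print(round(mean, 6))  # 0.2, i.e. θ/3, so E(3·X̄) = θ
```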

a. Let \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) be a random sample from a uniform distribution on \({\rm{(0,\theta )}}\). Then the mle of \({\rm{\theta }}\) is \({\rm{\hat \theta = Y = max}}\left( {{{\rm{X}}_{\rm{i}}}} \right)\). Use the fact that \({\rm{Y}} \le {\rm{y}}\) if each \({{\rm{X}}_{\rm{i}}} \le {\rm{y}}\) to derive the cdf of Y. Then show that the pdf of \({\rm{Y = max}}\left( {{{\rm{X}}_{\rm{i}}}} \right)\) is \({{\rm{f}}_{\rm{Y}}}{\rm{(y) = }}\left\{ {\begin{array}{*{20}{c}}{\frac{{{\rm{n}}{{\rm{y}}^{{\rm{n - 1}}}}}}{{{{\rm{\theta }}^{\rm{n}}}}}}&{{\rm{0}} \le {\rm{y}} \le {\rm{\theta }}}\\{\rm{0}}&{{\rm{ otherwise }}}\end{array}} \right.\)

b. Use the result of part (a) to show that the mle is biased but that \({\rm{(n + 1)}}\)\({\rm{max}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{/n}}\) is unbiased.
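A quick Monte Carlo sketch (with illustrative values \(\theta = 2\), \(n = 5\), not from the text) shows the downward bias of the mle \(Y = \max(X_i)\), whose mean is \(n\theta/(n+1)\), and the effect of the \((n+1)/n\) correction.

```python
import random

random.seed(1)
theta, n, reps = 2.0, 5, 200_000
total = 0.0
for _ in range(reps):
    total += max(random.uniform(0, theta) for _ in range(n))  # the mle Y
mean_y = total / reps
print(mean_y)                # close to E(Y) = nθ/(n+1) ≈ 1.667: biased low
print((n + 1) * mean_y / n)  # close to θ = 2.0 after correction
```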

When the sample standard deviation S is based on a random sample from a normal population distribution, it can be shown that \({\rm{E(S) = }}\sqrt {{\rm{2/(n - 1)}}} {\rm{\Gamma (n/2)\sigma /\Gamma ((n - 1)/2)}}\)

Use this to obtain an unbiased estimator for \({\rm{\sigma }}\) of the form \({\rm{cS}}\). What is \({\rm{c}}\) when \({\rm{n = 20}}\)?
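Solving \(E(cS) = \sigma\) with the stated formula for \(E(S)\) gives \(c = \Gamma((n-1)/2)\big/\left(\sqrt{2/(n-1)}\,\Gamma(n/2)\right)\). A short computation (a sketch using `math.gamma`) evaluates it at \(n = 20\).

```python
from math import gamma, sqrt

def c_unbiased(n):
    # E(S) = sqrt(2/(n-1)) · Γ(n/2)/Γ((n-1)/2) · σ, so c is the
    # reciprocal of that factor, making E(cS) = σ
    return gamma((n - 1) / 2) / (sqrt(2 / (n - 1)) * gamma(n / 2))

print(round(c_unbiased(20), 4))  # 1.0132
```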
