Q20E: A diagnostic test for a certain disease


A diagnostic test for a certain disease is applied to \({\rm{n}}\) individuals known not to have the disease. Let \({\rm{X = }}\) the number among the \({\rm{n}}\) test results that are positive (indicating presence of the disease, so \({\rm{X}}\) is the number of false positives) and \({\rm{p = }}\) the probability that a disease-free individual's test result is positive (i.e., \({\rm{p}}\) is the true proportion of test results from disease-free individuals that are positive). Assume that only \({\rm{X}}\) is available rather than the actual sequence of test results.

a. Derive the maximum likelihood estimator of \({\rm{p}}\). If \({\rm{n = 20}}\) and \({\rm{x = 3}}\), what is the estimate?

b. Is the estimator of part (a) unbiased?

c. If \({\rm{n = 20}}\) and \({\rm{x = 3}}\), what is the mle of the probability \({{\rm{(1 - p)}}^{\rm{5}}}\) that none of the next five tests done on disease-free individuals are positive?

Short Answer

a) The maximum likelihood estimator is \({\rm{\hat p = }}\frac{{\rm{X}}}{{\rm{n}}}\); for \({\rm{n = 20}}\) and \({\rm{x = 3}}\), the estimate is \({\rm{\hat p = 0}}{\rm{.15}}\).

b) The estimator is unbiased.

c) The probability is \({\rm{h(\hat p) = 0}}{\rm{.4437}}\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a quantity of interest from observed data. Three things are distinct here: the rule itself (the estimator), the quantity being estimated (the estimand), and the computed value (the estimate).

02

Explanation

a)

Let random variables \({{\rm{X}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) have joint pdf or pmf

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right){\rm{,}}\quad {\rm{n,m}} \in \mathbb{N}\)

where the parameters \({{\rm{\theta }}_{\rm{i}}}{\rm{, i = 1,2, \ldots ,m}}\) are unknown. When \({\rm{f}}\) is regarded as a function of the parameters \({{\rm{\theta }}_{\rm{i}}}{\rm{, i = 1,2, \ldots ,m}}\), with the observed values held fixed, it is called the

likelihood function

Values \({{\rm{\hat \theta }}_{\rm{i}}}\)that maximize the likelihood function are the maximum likelihood estimates (mle's), or equally values \({{\rm{\hat \theta }}_{\rm{i}}}\)for which

\(f\left( {{x_1},{x_2}, \ldots ,{x_n};{{\hat \theta }_1},{{\hat \theta }_2}, \ldots ,{{\hat \theta }_m}} \right) \ge f\left( {{x_1},{x_2}, \ldots ,{x_n};{\theta _1},{\theta _2}, \ldots ,{\theta _m}} \right),\)

for every \({{\rm{\theta }}_{\rm{i}}}{\rm{, i = 1,2, \ldots ,m}}\). By replacing the observed values \({{\rm{x}}_{\rm{i}}}\) with the random variables \({{\rm{X}}_{\rm{i}}}\) in the estimates, the

maximum likelihood estimators

are obtained.

First, notice that the random variable \({\rm{X}}\) has a Binomial distribution with pmf given in the theorem below.

Theorem:

\({\rm{b(x;n,p) = }}\left\{ {\begin{array}{*{20}{l}}{\left( {\begin{array}{*{20}{l}}{\rm{n}}\\{\rm{x}}\end{array}} \right){{\rm{p}}^{\rm{x}}}{{{\rm{(1 - p)}}}^{{\rm{n - x}}}}}&{{\rm{,x = 0,1,2, \ldots ,n}}}\\{\rm{0}}&{{\rm{, otherwise }}}\end{array}} \right.\)

In order to obtain the maximum likelihood estimator, one needs to find the value of \({\rm{p}}\) that maximizes the pmf. It is easier to work with the natural logarithm of the pmf. By finding the maximum of

\({\rm{ln}}\left( {\left( {\begin{array}{*{20}{l}}{\rm{n}}\\{\rm{x}}\end{array}} \right){{\rm{p}}^{\rm{x}}}{{{\rm{(1 - p)}}}^{{\rm{n - x}}}}} \right)\)

one also finds the maximum of \({{\rm{p}}^{\rm{x}}}{{\rm{(1 - p)}}^{{\rm{n - x}}}}\), because the natural logarithm is strictly increasing and therefore does not change where the maximum occurs. To find the maximum of \({\rm{ln\,b(x;n,p)}}\), take the derivative with respect to \({\rm{p}}\), set it equal to zero (the classic method for locating a maximum), and solve for \({\rm{p}}\).

03

Calculation

(a)

The derivative is:

\(\begin{aligned}\frac{{\rm{d}}}{{{\rm{dp}}}}\left( {{\rm{ln}}\left( {\left( {\begin{array}{*{20}{l}}{\rm{n}}\\{\rm{x}}\end{array}} \right){{\rm{p}}^{\rm{x}}}{{{\rm{(1 - p)}}}^{{\rm{n - x}}}}} \right)} \right)\\ &= \frac{{\rm{d}}}{{{\rm{dp}}}}\left( {{\rm{ln}}\left( {\begin{array}{*{20}{l}}{\rm{n}}\\{\rm{x}}\end{array}} \right){\rm{ + xlnp + (n - x)ln(1 - p)}}} \right)\\ & = 0 + x \times \frac{{\rm{1}}}{{\rm{p}}}{\rm{ + (n - x) \times }}\frac{{\rm{1}}}{{{\rm{1 - p}}}}{\rm{ \times ( - 1)}}\\ &= \frac{{\rm{x}}}{{\rm{p}}}{\rm{ - }}\frac{{{\rm{n - x}}}}{{{\rm{1 - p}}}}\end{aligned}\)

Set the derivative equal to zero to find the maximum

\(\frac{{\rm{x}}}{{\rm{p}}}{\rm{ - }}\frac{{{\rm{n - x}}}}{{{\rm{1 - p}}}}{\rm{ = 0}}\)

and now solve for \({\rm{p}}\):

\(\begin{aligned}\frac{{\rm{x}}}{{{\rm{\hat p}}}} - \frac{{{\rm{n - x}}}}{{{\rm{1 - \hat p}}}} &= 0\\ \frac{{\rm{x}}}{{{\rm{\hat p}}}} &= \frac{{{\rm{n - x}}}}{{{\rm{1 - \hat p}}}}\\ \frac{{{\rm{1 - \hat p}}}}{{{\rm{\hat p}}}} &= \frac{{{\rm{n - x}}}}{{\rm{x}}}\\ \frac{{\rm{1}}}{{{\rm{\hat p}}}} - 1 &= \frac{{\rm{n}}}{{\rm{x}}} - 1\\ {\rm{\hat p}} &= \frac{{\rm{x}}}{{\rm{n}}}\end{aligned}\)

Finally, the estimator is

\({\rm{\hat p = }}\frac{{\rm{X}}}{{\rm{n}}}{\rm{.}}\)

For \({\rm{n = 20}}\) and \({\rm{x = 3}}\), the estimate is \({\rm{\hat p = }}\frac{{\rm{3}}}{{{\rm{20}}}}{\rm{ = 0}}{\rm{.15}}\).

Thus, the maximum likelihood estimator is \({\rm{\hat p = }}\frac{{\rm{X}}}{{\rm{n}}}\), and the estimate is \({\rm{\hat p = 0}}{\rm{.15}}\).
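As a numerical sanity check (a minimal Python sketch, not part of the original solution), one can maximize the binomial log-likelihood over a fine grid of \({\rm{p}}\) values; the argmax lands exactly at the closed-form answer \({\rm{x/n}}\):

```python
import math

def log_likelihood(p, n, x):
    """Binomial log-likelihood ln b(x; n, p) for 0 < p < 1."""
    return math.log(math.comb(n, x)) + x * math.log(p) + (n - x) * math.log(1 - p)

n, x = 20, 3
# Grid over p in (0, 1), fine enough to contain x/n = 0.15 exactly.
grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, n, x))
print(p_hat)  # 0.15, matching x/n
```

A grid search is crude but makes the point: no matter how the maximization is carried out, the peak of the likelihood sits at \({\rm{x/n}}\).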

04

Explanation

b)

The estimator is unbiased if the expected value of the estimator is \({\rm{p}}\). The following holds

\(\begin{aligned} E(\hat p) &= E\left( {\frac{{\rm{X}}}{{\rm{n}}}} \right)\\ &= \frac{{\rm{1}}}{{\rm{n}}} \times {\rm{E(X)}}\\ &= \frac{{\rm{1}}}{{\rm{n}}} \times {\rm{np}}\\ &= p \end{aligned}\)

The step \({\rm{E(X) = np}}\) uses the proposition below.

Proposition: For a binomial random variable \({\rm{X}}\)with parameters\({\rm{n,p}}\), and\({\rm{q = 1 - p}}\), the following is true

\(\begin{aligned} E(X) &= np\\ V(X) &= np(1 - p) = npq\\ {{\rm{\sigma }}_{\rm{X}}} &= \sqrt {{\rm{npq}}} \end{aligned}\)

Since \({\rm{E(\hat p) = p}}\), the estimator is unbiased.
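The unbiasedness can also be checked empirically. The sketch below (illustrative only, standard-library Python) simulates many binomial experiments with a chosen true \({\rm{p}}\) and averages the resulting estimates \({\rm{\hat p = X/n}}\); the average comes out very close to \({\rm{p}}\):

```python
import random

random.seed(0)

def binomial_draw(n, p):
    """One Binomial(n, p) draw, built as a sum of n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

n, p, reps = 20, 0.10, 100_000
# Average of X/n over many replications estimates E(p-hat).
mean_p_hat = sum(binomial_draw(n, p) / n for _ in range(reps)) / reps
print(mean_p_hat)  # close to the true p = 0.10
```

With \(10^5\) replications the Monte Carlo error is on the order of \(10^{-4}\), so any visible gap between the average and \({\rm{p}}\) would indicate bias; none appears.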

05

Explanation

c)

The Invariance Principle:

Let \({{\rm{\hat \theta }}_{\rm{i}}}{\rm{, i = 1,2, \ldots ,m}}\) be maximum likelihood estimates of the parameters \({{\rm{\theta }}_{\rm{i}}}{\rm{, i = 1,2, \ldots ,m}}\). The mle of any function of the parameters \({{\rm{\theta }}_{\rm{i}}}\) is that same function of the mle's \({{\rm{\hat \theta }}_{\rm{i}}}\).

The mle of the function \({\rm{h(p) = (1 - p}}{{\rm{)}}^{\rm{5}}}\) is therefore the same function evaluated at \({\rm{\hat p}}\),

\(\begin{array}{l}{\rm{h(\hat p) = (1 - \hat p}}{{\rm{)}}^{\rm{5}}}\\{\rm{ = (1 - 0}}{\rm{.15}}{{\rm{)}}^{\rm{5}}}{\rm{ = 0}}{\rm{.4437}}{\rm{.}}\end{array}\)

Therefore, the required probability estimate is \({\rm{h(\hat p) = 0}}{\rm{.4437}}\).
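For completeness, the invariance computation of part (c) takes only a couple of lines of Python (the values \({\rm{n = 20}}\), \({\rm{x = 3}}\) come from the problem statement):

```python
n, x = 20, 3
p_hat = x / n                # mle of p from part (a)
h_hat = (1 - p_hat) ** 5     # mle of (1 - p)^5 by the invariance principle
print(round(h_hat, 4))       # 0.4437
```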

