Q32E


a. Let \(X_1, \ldots, X_n\) be a random sample from a uniform distribution on \((0,\theta)\). Then the mle of \(\theta\) is \(\hat\theta = Y = \max(X_i)\). Use the fact that \(Y \le y\) if and only if each \(X_i \le y\) to derive the cdf of \(Y\). Then show that the pdf of \(Y = \max(X_i)\) is \(f_Y(y) = \begin{cases} \dfrac{n y^{n-1}}{\theta^n} & 0 \le y \le \theta \\ 0 & \text{otherwise} \end{cases}\)

b. Use the result of part (a) to show that the mle is biased but that \((n+1)\max(X_i)/n\) is unbiased.

Short Answer


(a) It is shown that \(f_Y(y) = F_Y'(y) = \dfrac{n y^{n-1}}{\theta^n}\) for \(0 \le y \le \theta\).

(b) It is shown that \(E(\tilde Y) = \theta\), where \(\tilde Y = (n+1)Y/n\).

Step by step solution

01

Define uniform distribution

A uniform distribution is a probability distribution in which every value within a given interval is equally likely to occur; its density is constant on that interval and zero elsewhere.
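As a concrete illustration, the pdf and cdf of a uniform distribution on \((0,\theta)\) are \(1/\theta\) and \(y/\theta\) respectively; a minimal sketch (the choice \(\theta = 2\) is arbitrary, for illustration only):

```python
# Uniform(0, theta): constant density 1/theta on (0, theta), cdf rising linearly.
theta = 2.0  # illustrative choice of theta

def uniform_pdf(x, theta):
    # density is 1/theta inside (0, theta), zero elsewhere
    return 1.0 / theta if 0 <= x <= theta else 0.0

def uniform_cdf(y, theta):
    # P(X <= y) = y/theta for 0 <= y <= theta, clamped to [0, 1] outside
    if y < 0:
        return 0.0
    if y > theta:
        return 1.0
    return y / theta

print(uniform_pdf(1.0, theta))  # 0.5
print(uniform_cdf(1.0, theta))  # 0.5
```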

02

Explanation

(a) The cdf of random variable Y can be calculated in the following manner.

\(\begin{array}{l} F_Y(y) = P(Y \le y) \\ = P(\max(X_i) \le y) \\ \mathop{=}\limits^{(1)} P(X_1 \le y, X_2 \le y, \ldots, X_n \le y) \\ \mathop{=}\limits^{(2)} P(X_1 \le y) \times P(X_2 \le y) \times \cdots \times P(X_n \le y) \\ \mathop{=}\limits^{(3)} \left(\dfrac{y}{\theta}\right)^n, \quad 0 \le y \le \theta, \end{array}\)

(1): the maximum is at most \(y\) exactly when every \(X_i\) is at most \(y\);

(2): by independence of the \(X_i\);

(3): the cdf of a uniform distribution on \((0,\theta)\) is \(y/\theta\).

Given the cdf, the pdf follows by differentiation:

\(f_Y(y) = F_Y'(y) = \dfrac{n y^{n-1}}{\theta^n}, \quad 0 \le y \le \theta\)

Otherwise, it is zero.
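The closed form \(F_Y(y) = (y/\theta)^n\) can be checked by Monte Carlo simulation; a sketch assuming the illustrative values \(\theta = 1\), \(n = 5\), \(y = 0.8\):

```python
import random

random.seed(0)
theta, n, reps = 1.0, 5, 100_000
y = 0.8

# empirical P(max(X_i) <= y) over many simulated samples of size n
hits = sum(
    max(random.uniform(0, theta) for _ in range(n)) <= y
    for _ in range(reps)
)
empirical = hits / reps
exact = (y / theta) ** n  # 0.8**5 = 0.32768

print(round(empirical, 3), exact)
```

The empirical frequency should agree with \((y/\theta)^n\) to within simulation noise.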

03

Explanation

(b) The estimator would be unbiased if \(E(Y) = \theta\); however,

\(\begin{array}{c} E(Y) = \displaystyle\int_0^\theta y \cdot \dfrac{n y^{n-1}}{\theta^n}\,dy \\ = \left. \dfrac{n}{\theta^n} \dfrac{y^{n+1}}{n+1} \right|_0^\theta \\ = \dfrac{n}{n+1}\theta \ne \theta \end{array}\)

This shows that the mle \(Y\) is biased. However, the estimator

\(\tilde Y = \dfrac{n+1}{n} Y\)

is unbiased, because

\(\begin{array}{c} E(\tilde Y) = E\left(\dfrac{n+1}{n} Y\right) \\ = \dfrac{n+1}{n} E(Y) \\ = \dfrac{n+1}{n} \cdot \dfrac{n}{n+1}\theta \\ = \theta \end{array}\)

This indicates that the estimator \(\tilde Y\) is unbiased.
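The bias of \(Y\) and its correction can also be verified numerically; a sketch assuming the illustrative values \(\theta = 10\), \(n = 5\), where theory predicts \(E(Y) = n\theta/(n+1) \approx 8.33\):

```python
import random

random.seed(1)
theta, n, reps = 10.0, 5, 100_000

# average the sample maximum over many simulated samples
total_max = 0.0
for _ in range(reps):
    total_max += max(random.uniform(0, theta) for _ in range(n))

mean_max = total_max / reps           # approximates n*theta/(n+1), i.e. biased low
corrected = (n + 1) / n * mean_max    # approximates theta itself

print(round(mean_max, 2), round(corrected, 2))
```

The raw average of the maxima falls systematically below \(\theta\), while the \((n+1)/n\) correction recovers it.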

