Q13E

Consider a random sample \(X_1, \ldots, X_n\) from the pdf

\(f(x; \theta) = 0.5\,(1 + \theta x), \quad -1 \le x \le 1\)

where \(-1 \le \theta \le 1\) (this distribution arises in particle physics). Show that \(\hat\theta = 3\bar X\) is an unbiased estimator of \(\theta\). (Hint: First determine \(\mu = E(X) = E(\bar X)\).)

Short Answer


The estimator is unbiased.

Step by step solution

01

Definition

An estimator is a rule for computing an estimate of a quantity from observable data. Three things are distinguished: the rule itself (the estimator), the quantity being estimated (the estimand), and the result of applying the rule to the data (the estimate).

02

Showing the estimator is unbiased

The expected value of the random variable \(X_1\) (or of any other \(X_i\)) is

\(\begin{array}{l}\mu = E(X) = \int_{-1}^{1} x \cdot 0.5\,(1 + \theta x)\,dx\\ = \left. 0.5\,\frac{x^{2}}{2} \right|_{-1}^{1} + \left. 0.5\,\theta\,\frac{x^{3}}{3} \right|_{-1}^{1}\\ = 0 + 0.5\,\theta \cdot \frac{2}{3} = \frac{1}{3}\theta.\end{array}\)
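As a quick check on this integral, it can be evaluated symbolically. The following is a minimal sketch assuming SymPy is available; it is only a verification aid, not part of the textbook solution.

```python
# Symbolic check of mu = E(X) for f(x; theta) = 0.5*(1 + theta*x) on [-1, 1].
import sympy as sp

x, theta = sp.symbols('x theta')
mu = sp.integrate(x * sp.Rational(1, 2) * (1 + theta * x), (x, -1, 1))
print(mu)  # prints theta/3, matching the computation above
```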

If the expected value of the estimator \(\hat\theta\) equals \(\theta\), then the estimator is unbiased. Therefore, the expected value is

\(\begin{array}{l}E(\hat\theta) = E(3\bar X) = 3\,E(\bar X) = 3\,E\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right)\\ = \frac{3}{n} \sum_{i=1}^{n} E(X_i) \mathop{=}\limits^{(1)} \frac{3}{n} \cdot n \cdot E(X_1)\\ = 3\mu = 3 \cdot \frac{1}{3} \cdot \theta\\ = \theta\end{array}\)

(1): the \(X_i\) all have the same distribution, so \(E(X_i) = E(X_1)\) for every \(i\).

The estimator is unbiased.
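The algebra above can also be checked by simulation. Below is a minimal Monte Carlo sketch (the values of \(\theta\), \(n\), and the replication count are illustrative choices, not from the text) that draws from \(f(x;\theta)\) by rejection sampling and averages \(\hat\theta = 3\bar X\) over many replications:

```python
# Monte Carlo check that E(3*Xbar) is close to theta.
import numpy as np

rng = np.random.default_rng(0)

def sample_f(theta, size):
    """Rejection sampler for f(x; theta) = 0.5*(1 + theta*x) on [-1, 1].
    Proposals are uniform on [-1, 1]; the density is bounded above by
    0.5*(1 + abs(theta))."""
    accepted, total = [], 0
    bound = 0.5 * (1 + abs(theta))
    while total < size:
        x = rng.uniform(-1.0, 1.0, size)
        u = rng.uniform(0.0, bound, size)
        keep = x[u <= 0.5 * (1 + theta * x)]
        accepted.append(keep)
        total += keep.size
    return np.concatenate(accepted)[:size]

theta, n, reps = 0.4, 25, 10_000
draws = sample_f(theta, n * reps).reshape(reps, n)
theta_hat = 3 * draws.mean(axis=1)   # one estimate of theta per replication
print(theta_hat.mean())              # close to theta = 0.4
```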


Most popular questions from this chapter

Let \(X_1, \ldots, X_n\) be a random sample from a pdf that is symmetric about \(\mu\). An estimator for \(\mu\) that has been found to perform well for a variety of underlying distributions is the Hodges–Lehmann estimator. To define it, first compute for each \(i \le j\) and each \(j = 1, 2, \ldots, n\) the pairwise average \(\bar X_{i,j} = (X_i + X_j)/2\). Then the estimator is \(\hat\mu =\) the median of the \(\bar X_{i,j}\)'s. Compute the value of this estimate using the data of Exercise 44 of Chapter 1. (Hint: Construct a square table with the \(x_i\)'s listed on the left margin and on top. Then compute averages on and above the diagonal.)
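For a concrete reading of this definition, here is a small sketch of the Hodges–Lehmann computation; the data vector is a placeholder, not the Exercise 44 values:

```python
# Median of all pairwise averages (X_i + X_j)/2 with i <= j.
import numpy as np

def hodges_lehmann(x):
    x = np.asarray(x, dtype=float)
    # indices on and above the diagonal of the n-by-n table of averages
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2.0)

print(hodges_lehmann([2.1, 3.5, 1.8, 4.0, 2.9]))  # illustrative data only
```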

Let \(X_1, \ldots, X_n\) be a random sample from a gamma distribution with parameters \(\alpha\) and \(\beta\).

a. Derive the equations whose solutions yield the maximum likelihood estimators of \(\alpha\) and \(\beta\). Do you think they can be solved explicitly?

b. Show that the mle of \(\mu = \alpha\beta\) is \(\hat\mu = \bar X\).
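For part (a), one common route (shown here as a hedged sketch, not the textbook's worked solution) is to note that \(\partial\ell/\partial\beta = 0\) gives \(\beta = \bar x/\alpha\); substituting into \(\partial\ell/\partial\alpha = 0\) leaves the single equation \(\ln\alpha - \psi(\alpha) = \ln\bar x - \overline{\ln x}\), which has no closed-form solution and must be solved numerically. Assuming SciPy, with illustrative data:

```python
# Numerical solution of the gamma MLE equation in alpha; beta follows
# as xbar / alpha. Data are placeholders for illustration.
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

x = np.array([1.2, 0.8, 2.5, 1.9, 0.6, 1.4])
rhs = np.log(x.mean()) - np.log(x).mean()   # always >= 0 by Jensen

alpha_hat = brentq(lambda a: np.log(a) - digamma(a) - rhs, 1e-6, 1e6)
beta_hat = x.mean() / alpha_hat
print(alpha_hat, beta_hat, alpha_hat * beta_hat)  # mu_hat = xbar, as in (b)
```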

A sample of \(n\) captured Pandemonium jet fighters results in serial numbers \(x_1, x_2, x_3, \ldots, x_n\). The CIA knows that the aircraft were numbered consecutively at the factory starting with \(\alpha\) and ending with \(\beta\), so that the total number of planes manufactured is \(\beta - \alpha + 1\) (e.g., if \(\alpha = 17\) and \(\beta = 29\), then \(29 - 17 + 1 = 13\) planes having serial numbers \(17, 18, 19, \ldots, 28, 29\) were manufactured). However, the CIA does not know the values of \(\alpha\) or \(\beta\). A CIA statistician suggests using the estimator \(\max(X_i) - \min(X_i) + 1\) to estimate the total number of planes manufactured.

a. If \(n = 5\), \(x_1 = 237\), \(x_2 = 375\), \(x_3 = 202\), \(x_4 = 525\), and \(x_5 = 418\), what is the corresponding estimate?

b. Under what conditions on the sample will the value of the estimate be exactly equal to the true total number of planes? Will the estimate ever be larger than the true total? Do you think the estimator is unbiased for estimating \(\beta - \alpha + 1\)? Explain in one or two sentences.
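For part (a), the estimate follows by plugging the observed serial numbers into the suggested estimator; a one-line check:

```python
# Part (a): plug the observed serial numbers into max(x_i) - min(x_i) + 1.
x = [237, 375, 202, 525, 418]
print(max(x) - min(x) + 1)  # 525 - 202 + 1 = 324
```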

a. Let \(X_1, \ldots, X_n\) be a random sample from a uniform distribution on \((0, \theta)\). Then the mle of \(\theta\) is \(\hat\theta = Y = \max(X_i)\). Use the fact that \(Y \le y\) if and only if each \(X_i \le y\) to derive the cdf of \(Y\). Then show that the pdf of \(Y = \max(X_i)\) is \(f_Y(y) = \begin{cases} \frac{n y^{n-1}}{\theta^{n}} & 0 \le y \le \theta \\ 0 & \text{otherwise} \end{cases}\)

b. Use the result of part (a) to show that the mle is biased but that \((n + 1)\max(X_i)/n\) is unbiased.
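A simulation sketch of part (b), with illustrative parameter values: from the pdf in part (a), \(E(Y) = n\theta/(n+1)\), so \(Y\) is biased low, while \((n+1)Y/n\) centers on \(\theta\):

```python
# Y = max(X_i) underestimates theta on average; (n+1)*Y/n corrects this.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 100_000

y = rng.uniform(0, theta, (reps, n)).max(axis=1)
print(y.mean())                  # about n*theta/(n+1) = 1.818..., biased
print(((n + 1) * y / n).mean())  # about theta = 2.0, unbiased
```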

Let \(X_1, X_2, \ldots, X_n\) be a random sample from a pdf \(f(x)\) that is symmetric about \(\mu\), so that the sample median \(\tilde X\) is an unbiased estimator of \(\mu\). If \(n\) is large, it can be shown that \(V(\tilde X) \approx 1/\left( 4n(f(\mu))^{2} \right)\).

a. Compare \(V(\tilde X)\) to \(V(\bar X)\) when the underlying distribution is normal.

b. When the underlying pdf is Cauchy (see Example 6.7), \(V(\bar X) = \infty\), so \(\bar X\) is a terrible estimator. What is \(V(\tilde X)\) in this case when \(n\) is large?
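For part (b), plugging the standard Cauchy density value \(f(\mu) = f(0) = 1/\pi\) into the large-sample formula above gives \(V(\tilde X) \approx \pi^{2}/(4n)\). A simulation sketch with illustrative sizes:

```python
# For Cauchy data the sample mean has infinite variance, but the sample
# median's variance is close to the large-sample value pi^2 / (4n).
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 50_000

medians = np.median(rng.standard_cauchy((reps, n)), axis=1)
print(medians.var())          # near pi^2 / (4 * 100) ~= 0.0247
print(np.pi**2 / (4 * n))
```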
