Q18E

Let \(X_1, X_2, \ldots, X_n\) be a random sample from a pdf \(f(x)\) that is symmetric about \(\mu\), so that \(\tilde{X}\) is an unbiased estimator of \(\mu\). If \(n\) is large, it can be shown that \(V(\tilde{X}) \approx 1/\left(4n(f(\mu))^2\right)\).

a. Compare \(V(\tilde{X})\) to \(V(\bar{X})\) when the underlying distribution is normal.

b. When the underlying pdf is Cauchy (see Example 6.7), \(V(\bar{X}) = \infty\), so \(\bar{X}\) is a terrible estimator. What is \(V(\tilde{X})\) in this case when \(n\) is large?

Short Answer


a) Under normality, \(V(\tilde{X}) > V(\bar{X})\).

b) For large \(n\), the variance is \(V(\tilde{X}) \approx \frac{\pi^2\beta^2}{4n}\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. The rule (the estimator), the quantity of interest (the estimand), and the output (the estimate) are all distinct.

02

Explanation: part (a)

The pdf \(f(x)\) of a normally distributed random variable with parameters \(\mu\) and \(\sigma\) is

\(f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}, \quad x \in \mathbb{R}.\)

As given in the exercise, it can be shown that

\(V(\tilde{X}) \approx \frac{1}{4n(f(\mu))^2},\)

where \(f(\mu)\) is

\(\begin{aligned} f(\mu) &= \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{(\mu-\mu)^2}{2\sigma^2}\right\} \\ &= \frac{1}{\sqrt{2\pi}\,\sigma}. \end{aligned}\)

Then, the variance is

\(V(\tilde{X}) \approx \frac{1}{4n(f(\mu))^2} = \frac{2\pi\sigma^2}{4n} = \frac{\pi}{2} \cdot \frac{\sigma^2}{n}.\)

Also, the variance of the sample mean \(\bar{X}\) is

\(\begin{aligned} V(\bar{X}) &= V\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) \\ &\overset{(1)}{=} \frac{1}{n^2}\sum_{i=1}^{n} V(X_i) \\ &= \frac{1}{n^2} \cdot n \cdot V(X_1) \\ &= \frac{\sigma^2}{n}. \end{aligned}\)

(1): the \(X_i\) are independent and identically distributed.

It is obvious that

\(V(\tilde{X}) \approx \frac{\pi}{2} \cdot \frac{\sigma^2}{n} > \frac{\sigma^2}{n} = V(\bar{X})\)

because

\(\frac{\pi}{2} > 1.\)

Therefore, \(V(\tilde{X}) > V(\bar{X})\): for normal data, the sample median is less efficient than the sample mean.
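As a quick numerical check (not part of the original solution; the sample size, replication count, and seed below are arbitrary choices), a short Monte Carlo sketch in Python compares the empirical variances of the mean and the median for normal data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma = 100, 20_000, 1.0

# Draw `reps` independent normal samples, each of size n.
samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))

var_mean = samples.mean(axis=1).var()          # empirical V(X-bar)
var_median = np.median(samples, axis=1).var()  # empirical V(X-tilde)

print(var_mean, sigma**2 / n)                  # both ~ 0.0100
print(var_median, np.pi / 2 * sigma**2 / n)    # both ~ 0.0157
print(var_median / var_mean)                   # ratio ~ pi/2 ~ 1.57
```

The ratio of the two empirical variances should settle near \(\pi/2 \approx 1.57\), in line with the comparison above.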

03

Explanation: part (b)

The pdf \(f(x)\) of a Cauchy-distributed random variable with location \(\mu\) and scale \(\beta\) is

\(f(x) = \frac{1}{\pi\beta\left(1 + ((x-\mu)/\beta)^2\right)}, \quad x \in \mathbb{R}.\)

As given in the exercise, it can be shown that

\(V(\tilde{X}) \approx \frac{1}{4n(f(\mu))^2},\)

where \(f(\mu)\) is

\(\begin{aligned} f(\mu) &= \frac{1}{\pi\beta\left(1 + ((\mu-\mu)/\beta)^2\right)} \\ &= \frac{1}{\pi\beta}. \end{aligned}\)

Therefore, the variance now becomes \(V(\tilde{X}) \approx \frac{1}{4n(f(\mu))^2} = \frac{\pi^2\beta^2}{4n},\) which is finite: unlike \(\bar{X}\), the sample median remains a reasonable estimator for the Cauchy distribution.
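The same kind of sketch (again an illustrative assumption, not part of the text) shows the contrast for Cauchy data: the median's variance settles near \(\pi^2\beta^2/(4n)\), while the mean's empirical variance never stabilizes because \(V(\bar{X}) = \infty\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 100, 20_000, 1.0

# standard_cauchy is the mu = 0, beta = 1 case; scale by beta.
samples = beta * rng.standard_cauchy(size=(reps, n))

var_median = np.median(samples, axis=1).var()
print(var_median, np.pi**2 * beta**2 / (4 * n))  # both ~ 0.025 for n = 100

# The sample mean of a Cauchy sample is itself Cauchy, so this
# empirical "variance" is huge and keeps changing with the seed
# instead of converging to anything.
print(samples.mean(axis=1).var())
```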


Most popular questions from this chapter

Let \(X\) represent the error in making a measurement of a physical characteristic or property (e.g., the boiling point of a particular liquid). It is often reasonable to assume that \(E(X) = 0\) and that \(X\) has a normal distribution. Thus, the pdf of any particular measurement error is

\(f(x;\theta) = \frac{1}{\sqrt{2\pi\theta}}e^{-x^2/2\theta}, \quad -\infty < x < \infty\)

(where we have used \(\theta\) in place of \(\sigma^2\)). Now suppose that \(n\) independent measurements are made, resulting in measurement errors \(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n\). Obtain the MLE of \(\theta\).

An estimator \(\hat{\theta}\) is said to be consistent if for any \(\varepsilon > 0\), \(P(|\hat{\theta} - \theta| \ge \varepsilon) \to 0\) as \(n \to \infty\). That is, \(\hat{\theta}\) is consistent if, as the sample size gets larger, it is less and less likely that \(\hat{\theta}\) will be further than \(\varepsilon\) from the true value of \(\theta\). Show that \(\bar{X}\) is a consistent estimator of \(\mu\) when \(\sigma^2 < \infty\), by using Chebyshev's inequality from Exercise 44 of Chapter 3. (Hint: The inequality can be rewritten in the form \(P(|Y - \mu_Y| \ge \varepsilon) \le \sigma_Y^2/\varepsilon^2\). Now identify \(Y\) with \(\bar{X}\).)

Let \(X\) have a Weibull distribution with parameters \(\alpha\) and \(\beta\), so

\(\begin{array}{l} E(X) = \beta \cdot \Gamma(1 + 1/\alpha) \\ V(X) = \beta^2\left\{\Gamma(1 + 2/\alpha) - (\Gamma(1 + 1/\alpha))^2\right\} \end{array}\)

a. Based on a random sample \(X_1, \ldots, X_n\), write equations for the method of moments estimators of \(\beta\) and \(\alpha\). Show that, once the estimate of \(\alpha\) has been obtained, the estimate of \(\beta\) can be found from a table of the gamma function and that the estimate of \(\alpha\) is the solution to a complicated equation involving the gamma function.

b. If \(n = 20\), \(\bar{x} = 28.0\), and \(\Sigma x_i^2 = 16{,}500\), compute the estimates. (Hint: \((\Gamma(1.2))^2/\Gamma(1.4) = .95\).)
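For part (b) of this Weibull question, here is a hedged numerical sketch of solving the moment equations (the SciPy root-finding approach and the bracket \([1.1, 20]\) are my own choices, not from the text):

```python
from scipy.special import gamma
from scipy.optimize import brentq

n, xbar, sum_x2 = 20, 28.0, 16_500.0
ratio = xbar**2 / (sum_x2 / n)  # 784/825 ~ .9503, cf. the hint's .95

# Moment equations:  xbar = beta * Gamma(1 + 1/alpha)
#                    sum_x2 / n = beta^2 * Gamma(1 + 2/alpha)
# Dividing the square of the first by the second eliminates beta,
# leaving a single equation in alpha.
def g(alpha):
    return gamma(1 + 1 / alpha) ** 2 / gamma(1 + 2 / alpha) - ratio

alpha_hat = brentq(g, 1.1, 20.0)            # bracket chosen by inspection
beta_hat = xbar / gamma(1 + 1 / alpha_hat)
print(alpha_hat, beta_hat)                  # roughly 5.0 and 30.5
```

This agrees with the hint: \((\Gamma(1.2))^2/\Gamma(1.4) = .95\) corresponds to \(1/\alpha = .2\), i.e. \(\hat{\alpha} = 5\).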

Suppose a certain type of fertilizer has an expected yield per acre of \(\mu_1\) with variance \(\sigma^2\), whereas the expected yield for a second type of fertilizer is \(\mu_2\) with the same variance \(\sigma^2\). Let \(S_1^2\) and \(S_2^2\) denote the sample variances of yields based on sample sizes \(n_1\) and \(n_2\), respectively, of the two fertilizers. Show that the pooled (combined) estimator

\(\hat{\sigma}^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}\)

is an unbiased estimator of \(\sigma^2\).

Let \(X\) denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of \(X\) is

\(f(x;\theta) = \begin{cases} (\theta + 1)x^{\theta} & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}\)

where \(\theta > -1\). A random sample of ten students yields data \(x_1 = .92\), \(x_2 = .79\), \(x_3 = .90\), \(x_4 = .65\), \(x_5 = .86\), \(x_6 = .47\), \(x_7 = .73\), \(x_8 = .97\), \(x_9 = .94\), \(x_{10} = .77\).

a. Use the method of moments to obtain an estimator of \(\theta\), and then compute the estimate for this data.

b. Obtain the maximum likelihood estimator of \(\theta\), and then compute the estimate for the given data.
