Let \(X\) denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of \(X\) is

\(f(x;\theta) = \begin{cases} (\theta+1)x^{\theta} & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}\)

where \(-1 < \theta\). A random sample of ten students yields data \(x_1 = .92\), \(x_2 = .79\), \(x_3 = .90\), \(x_4 = .65\), \(x_5 = .86\), \(x_6 = .47\), \(x_7 = .73\), \(x_8 = .97\), \(x_9 = .94\), \(x_{10} = .77\).

a. Use the method of moments to obtain an estimator of \(\theta\), and then compute the estimate for this data.

b. Obtain the maximum likelihood estimator of \(\theta\), and then compute the estimate for the given data.

Short Answer

a) The moment estimator is \(\hat\theta = \frac{1}{1 - \bar X} - 2\); for this data, \(\hat\theta = 3\).

b) The maximum likelihood estimator is \(\hat\theta = -\frac{n}{\sum_{i=1}^{n} \ln X_i} - 1\); for this data, \(\hat\theta = 3.12\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity based on observed data: the rule (the estimator), the quantity of interest (the estimand), and the output (the estimate) are all distinct.

02

Explanation

(a)

Let random variables \(X_1, X_2, \ldots, X_n\) have the same distribution with pmf or pdf \(f(x; \theta_1, \theta_2, \ldots, \theta_m)\), \(m \in \mathbb{N}\), where the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown. The moment estimators

\(\hat\theta_1, \hat\theta_2, \ldots, \hat\theta_m\) can be obtained by equating the sample moments to the corresponding population moments and solving for the unknown parameters \(\theta_i\), \(i = 1, 2, \ldots, m\).

There is only one unknown parameter \(\theta\); therefore, by solving the equation

\(\bar X = E(X)\) for \(\theta\), the moment estimator \(\hat\theta\) is obtained. Recall that \(\bar X\) is the first-order sample moment and \(E(X)\) is the first-order population moment.

The following holds for the expected value:

\(\begin{aligned} E(X) &= \int_0^1 x(\theta+1)x^{\theta}\,dx \\ &= \left.(\theta+1)\times\frac{x^{\theta+2}}{\theta+2}\right|_0^1 \\ &= \frac{\theta+1}{\theta+2}. \end{aligned}\)
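As a quick sanity check (not part of the original solution), the closed form \(E(X) = (\theta+1)/(\theta+2)\) can be compared against numerical integration of \(x\,f(x;\theta)\). The sketch below assumes Python with SciPy available and picks \(\theta = 3\) arbitrarily; any \(\theta > -1\) would do.

```python
# Minimal sketch: compare the closed-form mean (theta+1)/(theta+2)
# with numerical integration of x * f(x; theta) over [0, 1].
# theta = 3 is an arbitrary test value, not part of the exercise.
from scipy.integrate import quad

theta = 3.0
numeric, _ = quad(lambda x: x * (theta + 1) * x**theta, 0.0, 1.0)
closed_form = (theta + 1) / (theta + 2)
print(numeric, closed_form)  # both print 0.8
```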

The solution of the equation \(\bar X = E(X)\) is:

\(\begin{aligned} \bar X &= \frac{\hat\theta + 1}{\hat\theta + 2} \\ \bar X &= \frac{(\hat\theta + 2) - 1}{\hat\theta + 2} \\ \bar X &= 1 - \frac{1}{\hat\theta + 2} \\ \bar X - 1 &= -\frac{1}{\hat\theta + 2} \\ \hat\theta + 2 &= \frac{1}{1 - \bar X}, \end{aligned}\)

which yields the moment estimator \(\hat\theta\):

\(\hat\theta = \frac{1}{1 - \bar X} - 2\)

The sample mean \(\bar x\) of observations \(x_1, x_2, \ldots, x_n\) is given by:

\(\bar x = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i\)

Using this, the first-order sample moment is

\(\bar x = \frac{1}{10}(0.92 + 0.79 + \cdots + 0.77) = 0.8\)

Therefore, the estimate is \(\hat\theta = \frac{1}{1 - 0.8} - 2 = 3\).
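For readers who want to reproduce this number, a minimal Python sketch (illustrative, not from the textbook) computes the sample mean and plugs it into the moment estimator:

```python
# Method-of-moments estimate for the ten observations in the exercise.
data = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

xbar = sum(data) / len(data)     # sample mean: 0.80
theta_mom = 1 / (1 - xbar) - 2   # moment estimate: 3.0
print(xbar, theta_mom)
```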

03

Explanation

(b)

Let random variables \(X_1, X_2, \ldots, X_n\) have joint pdf or pmf

\(f\left(x_1, x_2, \ldots, x_n; \theta_1, \theta_2, \ldots, \theta_m\right), \quad n, m \in \mathbb{N},\)

where the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown. When \(f\) is regarded as a function of the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), with the observations held fixed, it is called the likelihood function.

Values \(\hat\theta_i\) that maximize the likelihood function are the maximum likelihood estimates (mle's); equivalently, they are the values \(\hat\theta_i\) for which

\(f\left(x_1, x_2, \ldots, x_n; \hat\theta_1, \hat\theta_2, \ldots, \hat\theta_m\right) \ge f\left(x_1, x_2, \ldots, x_n; \theta_1, \theta_2, \ldots, \theta_m\right)\)

for every \(\theta_i\), \(i = 1, 2, \ldots, m\). By substituting the random variables \(X_i\) for the observed \(x_i\), the maximum likelihood estimators are obtained.

The pdf is given in the exercise. The likelihood function (assuming independence) becomes

\(\begin{aligned} f\left(x_1, x_2, \ldots, x_n; \theta\right) &= (\theta+1)x_1^{\theta} \times (\theta+1)x_2^{\theta} \times \cdots \times (\theta+1)x_n^{\theta} \\ &= (\theta+1)^{n} \times \left(x_1 \times x_2 \times \cdots \times x_n\right)^{\theta} \end{aligned}\)

To find the maximum, look at the log-likelihood function

\(\begin{aligned} \ln f\left(x_1, x_2, \ldots, x_n; \theta\right) &= \ln\left((\theta+1)^{n} \times \left(x_1 \times x_2 \times \cdots \times x_n\right)^{\theta}\right) \\ &= n\ln(\theta+1) + \theta\sum_{i=1}^{n}\ln x_i. \end{aligned}\)
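Before differentiating, one can confirm numerically that this log-likelihood has an interior maximum. The sketch below (an illustrative grid search, not part of the original solution) evaluates \(n\ln(\theta+1) + \theta\sum_{i}\ln x_i\) over a grid of \(\theta\) values and reports the peak.

```python
import math

data = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]
S = sum(math.log(x) for x in data)  # sum of log-observations, ~ -2.4295

def log_lik(theta):
    # n*ln(theta+1) + theta*sum(ln x_i); defined for theta > -1
    return len(data) * math.log(theta + 1) + theta * S

# Grid search over theta in [-0.5, 10) in steps of 0.01.
best_val, best_theta = max((log_lik(t / 100), t / 100) for t in range(-50, 1000))
print(best_theta)  # peaks near 3.12, matching the closed-form mle derived below
```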

By taking the derivative of the log-likelihood function with respect to \(\theta\) and equating it to \(0\), the maximum likelihood estimator is obtained. The derivative is

\(\begin{aligned} \frac{d}{d\theta}\ln f\left(x_1, x_2, \ldots, x_n; \theta\right) &= \frac{d}{d\theta}\left(n\ln(\theta+1) + \theta\sum_{i=1}^{n}\ln x_i\right) \\ &= n \times \frac{1}{\theta+1} + \sum_{i=1}^{n}\ln x_i. \end{aligned}\)

Therefore, the maximum likelihood estimator is obtained by solving the equation

\(n \times \frac{1}{\hat\theta + 1} + \sum_{i=1}^{n}\ln x_i = 0\)

for \(\hat\theta\). The solution is

\(\hat\theta = -\frac{n}{\sum_{i=1}^{n}\ln X_i} - 1,\)

which is the maximum likelihood estimator.

By taking \(\ln x_i\) for every \(i = 1, 2, \ldots, 10\) and summing the values, the maximum likelihood estimate is obtained as

\(\hat\theta = -\frac{10}{-2.4295} - 1 = 3.12\)
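The arithmetic can be reproduced with a minimal Python sketch (illustrative, using the same ten observations as above):

```python
import math

data = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

sum_logs = sum(math.log(x) for x in data)  # ~ -2.4295
theta_mle = -len(data) / sum_logs - 1      # ~ 3.116, i.e. 3.12 rounded
print(sum_logs, theta_mle)
```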

Most popular questions from this chapter

Suppose a certain type of fertilizer has an expected yield per acre of \(\mu_1\) with variance \(\sigma^2\), whereas the expected yield for a second type of fertilizer is \(\mu_2\) with the same variance \(\sigma^2\). Let \(S_1^2\) and \(S_2^2\) denote the sample variances of yields based on sample sizes \(n_1\) and \(n_2\), respectively, of the two fertilizers. Show that the pooled (combined) estimator

\(\hat\sigma^2 = \frac{\left(n_1 - 1\right)S_1^2 + \left(n_2 - 1\right)S_2^2}{n_1 + n_2 - 2}\)

is an unbiased estimator of \(\sigma^2\).

Consider randomly selecting \(n\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \(Y_1, \ldots, Y_n\). The article "A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains" (Reliability Engr. and System Safety, 2013: 270-279) proposes a linear corrosion model: \(Y_i = t_i R\), where \(t_i\) is the age of the pipe and \(R\), the corrosion rate, is exponentially distributed with parameter \(\lambda\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \(c > 0\) and \(X\) has an exponential distribution, so does \(cX\).)

A sample of \(n\) captured Pandemonium jet fighters results in serial numbers \(x_1, x_2, x_3, \ldots, x_n\). The CIA knows that the aircraft were numbered consecutively at the factory starting with \(\alpha\) and ending with \(\beta\), so that the total number of planes manufactured is \(\beta - \alpha + 1\) (e.g., if \(\alpha = 17\) and \(\beta = 29\), then \(29 - 17 + 1 = 13\) planes having serial numbers \(17, 18, 19, \ldots, 28, 29\) were manufactured). However, the CIA does not know the values of \(\alpha\) or \(\beta\). A CIA statistician suggests using the estimator \(\max\left(X_i\right) - \min\left(X_i\right) + 1\) to estimate the total number of planes manufactured.

a. If \(n = 5\), \(x_1 = 237\), \(x_2 = 375\), \(x_3 = 202\), \(x_4 = 525\), and \(x_5 = 418\), what is the corresponding estimate?

b. Under what conditions on the sample will the value of the estimate be exactly equal to the true total number of planes? Will the estimate ever be larger than the true total? Do you think the estimator is unbiased for estimating \(\beta - \alpha + 1\)? Explain in one or two sentences.

The shear strength of each of ten test spot welds is determined, yielding the following data (psi):

\(392 \quad 376 \quad 401 \quad 367 \quad 389 \quad 362 \quad 409 \quad 415 \quad 358 \quad 375\)

a. Assuming that shear strength is normally distributed, estimate the true average shear strength and standard deviation of shear strength using the method of maximum likelihood.

b. Again assuming a normal distribution, estimate the strength value below which \(95\%\) of all welds will have their strengths. (Hint: What is the \(95\)th percentile in terms of \(\mu\) and \(\sigma\)? Now use the invariance principle.)

c. Suppose we decide to examine another test spot weld. Let \(X =\) shear strength of the weld. Use the given data to obtain the mle of \(P(X \le 400)\). (Hint: \(P(X \le 400) = \Phi((400 - \mu)/\sigma)\).)

Consider a random sample \(X_1, X_2, \ldots, X_n\) from the shifted exponential pdf

\(f(x;\lambda,\theta) = \begin{cases} \lambda e^{-\lambda(x-\theta)} & x \ge \theta \\ 0 & \text{otherwise} \end{cases}\)

Taking \(\theta = 0\) gives the pdf of the exponential distribution considered previously (with positive density to the right of zero). An example of the shifted exponential distribution appeared in Example 4.5, in which the variable of interest was time headway in traffic flow and \(\theta = .5\) was the minimum possible time headway.

a. Obtain the maximum likelihood estimators of \(\theta\) and \(\lambda\).

b. If \(n = 10\) time headway observations are made, resulting in the values \(3.11, .64, 2.55, 2.20, 5.44, 3.42, 10.39, 8.93, 17.82\), and \(1.30\), calculate the estimates of \(\theta\) and \(\lambda\).
