Q21E Let \(X\) have a Weibull ...


Let \(X\) have a Weibull distribution with parameters \(\alpha\) and \(\beta\), so

\(\begin{array}{l}E(X) = \beta \cdot \Gamma (1 + 1/\alpha )\\V(X) = \beta ^2\left\{ \Gamma (1 + 2/\alpha ) - (\Gamma (1 + 1/\alpha ))^2 \right\}\end{array}\)

a. Based on a random sample \(X_1, \ldots, X_n\), write equations for the method of moments estimators of \(\beta\) and \(\alpha\). Show that, once the estimate of \(\alpha\) has been obtained, the estimate of \(\beta\) can be found from a table of the gamma function and that the estimate of \(\alpha\) is the solution to a complicated equation involving the gamma function.

b. If \(n = 20\), \(\bar x = 28.0\), and \(\Sigma x_i^2 = 16{,}500\), compute the estimates. (Hint: \((\Gamma (1.2))^2/\Gamma (1.4) = .95\).)

Short Answer

Expert verified

a) The method of moments estimators satisfy \(\bar X = \hat \beta \,\Gamma (1 + 1/\hat \alpha )\) and \(\frac{1}{n}\Sigma X_i^2 = \hat \beta ^2\,\Gamma (1 + 2/\hat \alpha )\); \(\hat \alpha \) is the solution of the complicated equation \(\frac{1}{n}\Sigma X_i^2/\bar X^2 = \Gamma (1 + 2/\hat \alpha )/\Gamma ^2(1 + 1/\hat \alpha )\), after which \(\hat \beta = \bar X/\Gamma (1 + 1/\hat \alpha )\) can be found from a table of the gamma function.

b) The estimates are \(\hat \alpha = 5\) and \(\hat \beta = \frac{28}{\Gamma (1.2)} \approx 30.5\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. The three are distinct: the rule is the estimator, the quantity of interest is the estimand, and the output of applying the rule to the data is the estimate.

02

Explanation

a)

Let the random variables \(X_1, X_2, \ldots, X_n\) be a random sample from a distribution with pmf or pdf \(f\left( x;\theta _1,\theta _2, \ldots ,\theta _m \right)\), \(m \in \mathbb{N}\), with unknown parameters \(\theta _i\), \(i = 1,2, \ldots ,m\).

By equating the sample moments to the corresponding population moments and solving for the unknown parameters, the moment estimators \(\hat \theta _1, \hat \theta _2, \ldots, \hat \theta _m\) may be obtained.

The distribution specified in this exercise is the Weibull distribution with parameters \(\alpha\) and \(\beta\), for which moment estimators must be developed.

The sample moment of first order is

\({\rm{\bar X = }}\frac{{\rm{1}}}{{\rm{n}}}\left( {{{\rm{X}}_{\rm{1}}}{\rm{ + }}{{\rm{X}}_{\rm{2}}}{\rm{ + \ldots + }}{{\rm{X}}_{\rm{n}}}} \right)\)

and the population moment of first order is

\({\rm{E(X) = \beta \times \Gamma }}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{\rm{\alpha }}}} \right)\)

The first equation in the system of equations from which the moment estimators are obtained is

\({\rm{\bar X = E(X)}}\)

The sample moment of second order is:

\(\frac{{\rm{1}}}{{\rm{n}}}\left( {{\rm{X}}_{\rm{1}}^{\rm{2}}{\rm{ + X}}_{\rm{2}}^{\rm{2}}{\rm{ + \ldots + X}}_{\rm{n}}^{\rm{2}}} \right)\)

And the population moment of second order is:

\(E\left( X^2 \right) = V(X) + (E(X))^2 = \beta ^2\left\{ \Gamma \left( 1 + \frac{2}{\alpha } \right) - \left( \Gamma \left( 1 + \frac{1}{\alpha } \right) \right)^2 \right\} + \left\{ \beta \cdot \Gamma \left( 1 + \frac{1}{\alpha } \right) \right\}^2 = \beta ^2\,\Gamma \left( 1 + \frac{2}{\alpha } \right)\)

The second equation in the system from which the moment estimators are derived is

\(\frac{1}{n}\sum\limits_{i = 1}^n X_i^2 = E\left( X^2 \right)\)

As a result, the system of equations that must be solved for\({\rm{\hat \alpha }}\) and \({\rm{\hat \beta }}\) is

\(\begin{array}{l}\bar X = \hat \beta \cdot \Gamma \left( 1 + \frac{1}{\hat \alpha } \right),\\\frac{1}{n}\sum\limits_{i = 1}^n X_i^2 = \hat \beta ^2\,\Gamma \left( 1 + \frac{2}{\hat \alpha } \right).\end{array}\)

Hence, \(\hat \beta \) can be computed from the first equation as

\(\hat \beta = \frac{\bar X}{\Gamma \left( 1 + \frac{1}{\hat \alpha } \right)}.\)

In order to compute \(\hat \beta \), first \(\hat \alpha \) needs to be determined and the gamma function evaluated.

Squaring both sides of the first equation gives

\({{\rm{\bar X}}^{\rm{2}}}{\rm{ = }}{{\rm{\hat \beta }}^{\rm{2}}}{{\rm{\Gamma }}^{\rm{2}}}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{{\rm{\hat \alpha }}}}} \right){\rm{.}}\)

or equally

\({{\rm{\hat \beta }}^{\rm{2}}}{\rm{ = }}\frac{{{{{\rm{\bar X}}}^{\rm{2}}}}}{{{{\rm{\Gamma }}^{\rm{2}}}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{{\rm{\hat \alpha }}}}} \right)}}{\rm{.}}\)

Now plug this into the second equation:

\(\frac{1}{n}\sum\limits_{i = 1}^n X_i^2 = \frac{\bar X^2}{\Gamma ^2\left( 1 + \frac{1}{\hat \alpha } \right)} \cdot \Gamma \left( 1 + \frac{2}{\hat \alpha } \right)\)

or equally

\(\frac{1}{n}\frac{\sum\limits_{i = 1}^n X_i^2}{\bar X^2} = \frac{\Gamma \left( 1 + \frac{2}{\hat \alpha } \right)}{\Gamma ^2\left( 1 + \frac{1}{\hat \alpha } \right)}.\)

The only unknown in the last equation is \(\hat \alpha \), so the moment estimator of \(\alpha \) may be obtained by solving it. Once \(\hat \alpha \) is found, the moment estimator \(\hat \beta \) follows from the first equation.

As the exercise notes, this equation is complicated because it involves the gamma function, and part (a) only asks that the equations be derived, not solved in closed form. The goal was to show how the method of moments estimators \(\hat \alpha \) and \(\hat \beta \) are obtained.
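Although the exercise does not require solving the equation analytically, it can be solved numerically. The sketch below is illustrative and not part of the textbook solution: it assumes Python's `math.gamma` and uses bisection, relying on the fact that \(\Gamma (1 + 2/\alpha )/\Gamma ^2(1 + 1/\alpha )\) is decreasing in \(\alpha \); the names `mom_ratio`, `solve_alpha`, and `mom_estimates` are invented for this example.

```python
from math import gamma

def mom_ratio(alpha):
    """Right-hand side of the moment equation:
    Gamma(1 + 2/alpha) / Gamma(1 + 1/alpha)^2."""
    return gamma(1 + 2 / alpha) / gamma(1 + 1 / alpha) ** 2

def solve_alpha(r, lo=1.0, hi=50.0):
    """Solve mom_ratio(alpha) = r by bisection.  mom_ratio is
    decreasing in alpha, so the bracket must satisfy
    mom_ratio(lo) > r > mom_ratio(hi)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if mom_ratio(mid) > r:
            lo = mid  # solution lies to the right of mid
        else:
            hi = mid
    return (lo + hi) / 2

def mom_estimates(n, xbar, sum_sq):
    """Method of moments estimates (alpha_hat, beta_hat) from the
    sample size, the sample mean, and the sum of squares."""
    r = sum_sq / (n * xbar ** 2)
    alpha_hat = solve_alpha(r)
    beta_hat = xbar / gamma(1 + 1 / alpha_hat)
    return alpha_hat, beta_hat
```

With the part (b) data, `mom_estimates(20, 28.0, 16500)` gives \(\hat \alpha \) close to 5 and \(\hat \beta \) close to 30.5, consistent with the table-based calculation in part (b).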

03

Explanation

b)

Consider the given information,

\(\frac{1}{n}\frac{\sum\limits_{i = 1}^n X_i^2}{\bar X^2} = \frac{\Gamma \left( 1 + \frac{2}{\hat \alpha } \right)}{\Gamma ^2\left( 1 + \frac{1}{\hat \alpha } \right)}\)

For the given \(n = 20\), \(\bar x = 28.0\), and \(\sum x_i^2 = 16{,}500\), \(\hat \alpha \) needs to be found.

The following is true:

\(\frac{1}{20} \times \left( \frac{16{,}500}{28^2} \right) \approx 1.05\)

therefore,

\(\frac{\Gamma \left( 1 + \frac{2}{\hat \alpha } \right)}{\Gamma ^2\left( 1 + \frac{1}{\hat \alpha } \right)} = 1.05.\)

From the hint

\(\frac{{{{{\rm{(\Gamma (1}}{\rm{.2))}}}^{\rm{2}}}}}{{{\rm{\Gamma (1}}{\rm{.4)}}}}{\rm{ = 0}}{\rm{.95}}\)

or equally

\(\begin{array}{l}\frac{\Gamma (1 + 0.4)}{\Gamma ^2(1 + 0.2)} = \frac{1}{0.95}\\\frac{\Gamma (1 + 0.4)}{\Gamma ^2(1 + 0.2)} \approx 1.05\end{array}\)

which means that

\(\begin{array}{l}\frac{\Gamma \left( 1 + \frac{2}{\hat \alpha } \right)}{\Gamma ^2\left( 1 + \frac{1}{\hat \alpha } \right)} = 1.05\\ = \frac{\Gamma (1 + 0.4)}{\Gamma ^2(1 + 0.2)}\end{array}\)

Therefore, because of this equality, the following must hold

\(\frac{{\rm{2}}}{{{\rm{\hat \alpha }}}}{\rm{ = 0}}{\rm{.4}}\)

hence,

\({\rm{\hat \alpha = 5}}{\rm{.}}\)
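As a quick numerical check, not part of the original solution and assuming Python's `math.gamma`, the hint value and the moment ratio at \(\hat \alpha = 5\) can be evaluated directly:

```python
from math import gamma

# The hint: (Gamma(1.2))^2 / Gamma(1.4) is about .95 ...
hint = gamma(1.2) ** 2 / gamma(1.4)
# ... so the moment ratio at alpha_hat = 5 is about 1/.95 = 1.05
ratio_at_5 = gamma(1 + 2 / 5) / gamma(1 + 1 / 5) ** 2
print(round(hint, 2), round(ratio_at_5, 2))  # 0.95 1.05
```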

From the estimator

\(\hat \beta = \frac{\bar X}{\Gamma \left( 1 + \frac{1}{\hat \alpha } \right)}\)

The estimate is computed as follows: \(\hat \beta = \frac{28}{\Gamma (1.2)} \approx 30.5\)
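Evaluating this numerically, a check that assumes Python's `math.gamma`, with \(\Gamma (1.2) \approx 0.9182\):

```python
from math import gamma

# Plug alpha_hat = 5 into beta_hat = xbar / Gamma(1 + 1/alpha_hat)
alpha_hat = 5
beta_hat = 28.0 / gamma(1 + 1 / alpha_hat)  # 28 / Gamma(1.2)
print(round(beta_hat, 2))  # about 30.5
```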



