Q25E: The shear strength of each of ten test spot welds

The shear strength of each of ten test spot welds is determined, yielding the following data (psi):

\(392 \quad 376 \quad 401 \quad 367 \quad 389 \quad 362 \quad 409 \quad 415 \quad 358 \quad 375\)

a. Assuming that shear strength is normally distributed, estimate the true average shear strength and standard deviation of shear strength using the method of maximum likelihood.

b. Again assuming a normal distribution, estimate the strength value below which 95% of all welds will have their strengths. (Hint: What is the 95th percentile in terms of \(\mu\) and \(\sigma\)? Now use the invariance principle.)

c. Suppose we decide to examine another test spot weld. Let \(X =\) shear strength of the weld. Use the given data to obtain the mle of \(P(X \le 400)\). (Hint: \(P(X \le 400) = \Phi((400 - \mu)/\sigma)\).)

Short Answer


a) The maximum likelihood estimates of the mean and the standard deviation are \(\hat{\mu} = 384.4\) and \(\hat{\sigma} = 18.86\).

b) By the invariance principle, the estimate is \(\hat{\mu} + 1.645\,\hat{\sigma} = 415.42\).

c) The mle is \(P(X \le 400) = 0.7967\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data: the rule (the estimator), the quantity of interest (the estimand), and the output (the estimate) are all distinct.

02

Explanation

a)

The maximum likelihood estimators for \(\mu\) and \(\sigma^2\), assuming normality, are

\(\hat{\mu} = \bar{X}\)

and

\(\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2\)

Therefore, the maximum likelihood estimate of the mean is

\(\hat{\mu} = \frac{1}{10}(392 + 376 + \ldots + 375) = 384.4\)

and the maximum likelihood estimate of the variance is

\(\begin{array}{c}\hat{\sigma}^2 = \frac{1}{10}\left((392 - 384.4)^2 + (376 - 384.4)^2 + \ldots + (375 - 384.4)^2\right)\\ = \frac{1}{10}(57.76 + 70.56 + \ldots + 88.36)\\ = 355.64\end{array}\)

so the maximum likelihood estimate of the standard deviation is

\(\hat{\sigma} = \sqrt{355.64} = 18.86\)

Note that the mle of the standard deviation differs from the sample standard deviation \(s\), which divides by \(n - 1\) rather than \(n\): here \(s = \sqrt{3556.4/9} = 19.88\).
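For readers who want to check the arithmetic, here is a minimal sketch in Python (standard library only; the variable names are my own):

import math

data = [392, 376, 401, 367, 389, 362, 409, 415, 358, 375]
n = len(data)

mu_hat = sum(data) / n                              # mle of mu: 384.4
var_hat = sum((x - mu_hat) ** 2 for x in data) / n  # mle of sigma^2 (divides by n): 355.64
sigma_hat = math.sqrt(var_hat)                      # mle of sigma: about 18.86

# The sample standard deviation s divides by n - 1 instead, hence the difference:
s = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / (n - 1))  # about 19.88

print(mu_hat, sigma_hat, s)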

03

Explanation

b)

The 95th percentile in terms of \(\mu\) and \(\sigma\) is

\(\mu + z_{0.95}\,\sigma\)

where \(z_{0.95}\) is the \(z\) critical value of the standard normal distribution, found in the appendix table:

\(z_{0.95} = 1.645\)

The Invariance Principle:

Let \(\hat{\theta}_i,\; i = 1, 2, \ldots, n\) be the maximum likelihood estimates of the parameters \(\theta_i,\; i = 1, 2, \ldots, n\). The mle of any function \(h(\theta_1, \ldots, \theta_n)\) of the parameters is the same function \(h(\hat{\theta}_1, \ldots, \hat{\theta}_n)\) of the mles.

Since \(\mu + 1.645\,\sigma\) is a function of the parameters, its mle is the same function of the estimates:

\(\hat{\mu} + 1.645 \times \hat{\sigma} = 384.4 + 1.645 \times 18.86 = 415.42\)

where \(\hat{\mu}\) and \(\hat{\sigma}\) are the maximum likelihood estimates computed in part (a).
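Continuing the sketch from part (a), the same percentile can be checked with the standard library's NormalDist (Python 3.8+), whose inv_cdf gives the 0.95 quantile used here as 1.645:

from statistics import NormalDist

z_95 = NormalDist().inv_cdf(0.95)          # standard normal 0.95 quantile, about 1.645
percentile_95 = mu_hat + z_95 * sigma_hat  # 384.4 + 1.645 * 18.86 = 415.42
print(round(percentile_95, 2))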

04

Explanation

c)

By the invariance principle stated in part (b), the maximum likelihood estimate of \(P(X \le 400)\) is

\(\begin{array}{c}P(X \le 400) = P\left(\dfrac{X - \hat{\mu}}{\hat{\sigma}} \le \dfrac{400 - 384.4}{18.86}\right)\\ = P(Z \le 0.83)\\ = \Phi(0.83)\\ \overset{(1)}{=} 0.7967\end{array}\)

(1): From the normal probability table in the appendix; software can also be used to calculate the probability.
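As a software check on (1), NormalDist().cdf plays the role of \(\Phi\) in the hint (again reusing mu_hat and sigma_hat from the part (a) sketch):

from statistics import NormalDist

z = (400 - mu_hat) / sigma_hat  # (400 - 384.4)/18.86, about 0.83
p_hat = NormalDist().cdf(z)     # Phi(z), about 0.796; the table answer 0.7967 rounds z to 0.83 first
print(round(p_hat, 4))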


Most popular questions from this chapter

The mean squared error of an estimator \(\hat{\theta}\) is \(MSE(\hat{\theta}) = E(\hat{\theta} - \theta)^2\). If \(\hat{\theta}\) is unbiased, then \(MSE(\hat{\theta}) = V(\hat{\theta})\), but in general \(MSE(\hat{\theta}) = V(\hat{\theta}) + (\text{bias})^2\). Consider the estimator \(\hat{\sigma}^2 = KS^2\), where \(S^2 =\) sample variance. What value of \(K\) minimizes the mean squared error of this estimator when the population distribution is normal? (Hint: It can be shown that \(E\left((S^2)^2\right) = (n + 1)\sigma^4/(n - 1)\). In general, it is difficult to find \(\hat{\theta}\) to minimize \(MSE(\hat{\theta})\), which is why we look only at unbiased estimators and minimize \(V(\hat{\theta})\).)

Let \(X_1, X_2, \ldots, X_n\) be a random sample from a pdf \(f(x)\) that is symmetric about \(\mu\), so that \(\tilde{X}\) is an unbiased estimator of \(\mu\). If \(n\) is large, it can be shown that \(V(\tilde{X}) \approx 1/\left(4n(f(\mu))^2\right)\).

a. Compare \(V(\tilde{X})\) to \(V(\bar{X})\) when the underlying distribution is normal.

b. When the underlying pdf is Cauchy (see Example 6.7), \(V(\bar{X}) = \infty\), so \(\bar{X}\) is a terrible estimator. What is \(V(\tilde{X})\) in this case when \(n\) is large?

Let \(X\) have a Weibull distribution with parameters \(\alpha\) and \(\beta\), so

\(E(X) = \beta \cdot \Gamma(1 + 1/\alpha)\)

\(V(X) = \beta^2\left\{\Gamma(1 + 2/\alpha) - \left(\Gamma(1 + 1/\alpha)\right)^2\right\}\)

a. Based on a random sample \(X_1, \ldots, X_n\), write equations for the method of moments estimators of \(\beta\) and \(\alpha\). Show that, once the estimate of \(\alpha\) has been obtained, the estimate of \(\beta\) can be found from a table of the gamma function and that the estimate of \(\alpha\) is the solution to a complicated equation involving the gamma function.

b. If \(n = 20\), \(\bar{x} = 28.0\), and \(\Sigma x_i^2 = 16{,}500\), compute the estimates. (Hint: \((\Gamma(1.2))^2/\Gamma(1.4) = .95\).)

Let \(X_1, \ldots, X_n\) be a random sample from a gamma distribution with parameters \(\alpha\) and \(\beta\).

a. Derive the equations whose solutions yield the maximum likelihood estimators of \(\alpha\) and \(\beta\). Do you think they can be solved explicitly?

b. Show that the mle of \(\mu = \alpha\beta\) is \(\hat{\mu} = \bar{X}\).

An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of \(n\) students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of 100 cards, of which 50 are of type I and 50 are of type II.

Type I: Have you violated the honor code (yes or no)?

Type II: Is the last digit of your telephone number a 0, 1, or 2 (yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let \(p\) denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let \(\lambda = P(\text{yes response})\). Then \(\lambda\) and \(p\) are related by \(\lambda = .5p + (.5)(.3)\).

a. Let \(Y\) denote the number of yes responses, so \(Y \sim \text{Bin}(n, \lambda)\). Thus \(Y/n\) is an unbiased estimator of \(\lambda\). Derive an estimator for \(p\) based on \(Y\). If \(n = 80\) and \(y = 20\), what is your estimate? (Hint: Solve \(\lambda = .5p + .15\) for \(p\) and then substitute \(Y/n\) for \(\lambda\); a numeric sketch of this hint follows part c below.)

b. Use the fact that \(E(Y/n) = \lambda\) to show that your estimator \(\hat{p}\) is unbiased.

c. If there were 70 type I and 30 type II cards, what would be your estimator for \(p\)?
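Following the hint in part (a), here is a minimal numeric sketch (my own variable names; \(\hat{p} = 2(Y/n) - .3\) comes from solving \(\lambda = .5p + .15\) for \(p\)):

n, y = 80, 20
lam_hat = y / n            # unbiased estimate of lambda: 0.25
p_hat = 2 * lam_hat - 0.3  # solving lambda = .5p + .15 gives p = 2*lambda - .3; here 0.2
print(p_hat)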
