Q26E


Consider randomly selecting \({\rm{n}}\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \({{\rm{Y}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{Y}}_{\rm{n}}}\). The article "A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains" (Reliability Engr. and System Safety, \({\rm{2013:270 - 279}}\)) proposes a linear corrosion model: \({{\rm{Y}}_{\rm{i}}}{\rm{ = }}{{\rm{t}}_{\rm{i}}}{\rm{R}}\), where \({{\rm{t}}_{\rm{i}}}\) is the age of the pipe and \({\rm{R}}\), the corrosion rate, is exponentially distributed with parameter \({\rm{\lambda }}\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \({\rm{c > 0}}\) and \({\rm{X}}\) has an exponential distribution, so does \({\rm{cX}}\).)

Short Answer

Expert verified

The maximum likelihood estimator is \({\rm{\hat \lambda = }}\frac{{\rm{n}}}{{\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {{{\rm{Y}}_{\rm{j}}}} {\rm{/}}{{\rm{t}}_{\rm{j}}}}}\).

Step by step solution

01

Define exponential distribution

A random variable \({\rm{X}}\) has an exponential distribution with parameter \({\rm{\lambda > 0}}\) if its density is \({\rm{f(x;\lambda ) = \lambda }}{{\rm{e}}^{{\rm{ - \lambda x}}}}\) for \({\rm{x}} \ge {\rm{0}}\) and zero otherwise; its cdf is \({\rm{F(x) = 1 - }}{{\rm{e}}^{{\rm{ - \lambda x}}}}\) for \({\rm{x}} \ge {\rm{0}}\).

02

Explanation

Given a set of random variables,

\({{\rm{Y}}_{\rm{i}}}{\rm{ = }}{{\rm{t}}_{\rm{i}}}{\rm{R}}\)

where \({{\rm{t}}_{\rm{i}}}\) is the known age of the \({\rm{i}}\)-th pipe and \({\rm{R}}\) is an exponentially distributed random variable with parameter \({\rm{\lambda }}\). The cdf of \({{\rm{Y}}_{\rm{i}}}\) is,

\(\begin{array}{c}{{\rm{F}}_{{{\rm{Y}}_{\rm{i}}}}}{\rm{(y) = P}}\left( {{{\rm{Y}}_{\rm{i}}} \le {\rm{y}}} \right)\\{\rm{ = P}}\left( {{{\rm{t}}_{\rm{i}}}{\rm{R}} \le {\rm{y}}} \right)\\{\rm{ = P}}\left( {{\rm{R}} \le \frac{{\rm{y}}}{{{{\rm{t}}_{\rm{i}}}}}} \right)\\{\rm{ = }}{{\rm{F}}_{\rm{R}}}\left( {\frac{{\rm{y}}}{{{{\rm{t}}_{\rm{i}}}}}} \right)\\{\rm{ = 1 - exp}}\left\{ {{\rm{ - \lambda }}\frac{{\rm{y}}}{{{{\rm{t}}_{\rm{i}}}}}} \right\}\\{\rm{ = 1 - exp}}\left\{ {{\rm{ - }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{i}}}}}{\rm{y}}} \right\}{\rm{,y}} \ge {\rm{0}}\end{array}\)

For \({\rm{y < 0}}\) the cdf is zero. As a result, \({{\rm{Y}}_{\rm{i}}}\) has an exponential distribution with parameter

\({{\rm{\lambda }}_{\rm{i}}}{\rm{ = }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{i}}}}}\)
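The hint behind this step, that a positive constant times an exponential variable is again exponential, can be checked numerically. A minimal simulation sketch (the values of `lam` and `t` and all variable names are illustrative, not from the article): the mean of \({\rm{tR}}\) should match the mean \({\rm{t/\lambda }}\) of an exponential variable with rate \({\rm{\lambda /t}}\).

```python
import random

# If R ~ Exponential(lam), then Y = t * R should behave like an
# Exponential(lam / t) variable, whose mean is t / lam.
random.seed(0)

lam, t, n = 2.0, 5.0, 200_000
samples = [t * random.expovariate(lam) for _ in range(n)]

empirical_mean = sum(samples) / n
theoretical_mean = t / lam  # mean of Exp(lam/t)

print(round(empirical_mean, 3), theoretical_mean)
```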

Let \({{\rm{X}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) be random variables with joint pdf or pmf

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right){\rm{, n,m}} \in {\rm{N}}\)

where \({{\rm{\theta }}_{\rm{i}}}{\rm{,i = 1,2, \ldots ,m}}\) are unknown parameters. The likelihood function is this joint pdf or pmf regarded as a function of the parameters \({{\rm{\theta }}_{\rm{i}}}\), with the observations held fixed. The maximum likelihood estimates (mle's) are the values \(\widehat {{{\rm{\theta }}_{\rm{i}}}}\) for which the likelihood function is maximised, so that

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{{\rm{\hat \theta }}}_{\rm{1}}}{\rm{,}}{{{\rm{\hat \theta }}}_{\rm{2}}}{\rm{, \ldots ,}}{{{\rm{\hat \theta }}}_{\rm{m}}}} \right) \ge {\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right)\)

for every \({{\rm{\theta }}_{\rm{i}}}{\rm{,i = 1,2, \ldots ,m}}\). Maximum likelihood estimators are obtained by replacing the observed values \({{\rm{x}}_{\rm{i}}}\) with the random variables \({{\rm{X}}_{\rm{i}}}\).

03

Evaluating the maximum likelihood estimators

Because of the independence, the likelihood function becomes,

\(\begin{array}{c}{\rm{f}}\left( {{{\rm{y}}_{\rm{1}}}{\rm{,}}{{\rm{y}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{y}}_{\rm{n}}}{\rm{;\lambda }}} \right){\rm{ = }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{1}}}}}{\rm{exp}}\left\{ {{\rm{ - }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{1}}}}}{{\rm{y}}_{\rm{1}}}} \right\}{\rm{ \times }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{2}}}}}{\rm{exp}}\left\{ {{\rm{ - }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{2}}}}}{{\rm{y}}_{\rm{2}}}} \right\}{\rm{ \times \ldots \times }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{n}}}}}{\rm{exp}}\left\{ {{\rm{ - }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{n}}}}}{{\rm{y}}_{\rm{n}}}} \right\}\\{\rm{ = }}{{\rm{\lambda }}^{\rm{n}}}\left( {\prod\limits_{{\rm{i = 1}}}^{\rm{n}} {\frac{{\rm{1}}}{{{{\rm{t}}_{\rm{i}}}}}} } \right){\rm{ \times }}\prod\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{exp}}\left\{ {{\rm{ - }}\frac{{\rm{\lambda }}}{{{{\rm{t}}_{\rm{i}}}}}{{\rm{y}}_{\rm{i}}}} \right\}} \\{\rm{ = }}{{\rm{\lambda }}^{\rm{n}}}\left( {\prod\limits_{{\rm{i = 1}}}^{\rm{n}} {\frac{{\rm{1}}}{{{{\rm{t}}_{\rm{i}}}}}} } \right){\rm{ \times exp}}\left\{ {{\rm{ - \lambda }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} } \right\}\\{\rm{ = }}{{\rm{\lambda }}^{\rm{n}}}{\rm{ \times p \times exp}}\left\{ {{\rm{ - \lambda }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} } \right\}\end{array}\)

where \({\rm{p = }}\prod\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{\rm{1}}}{{{{\rm{t}}_{\rm{j}}}}}} \) does not depend on \({\rm{\lambda }}\).

Look at the log likelihood function to determine the maximum.

\(\begin{array}{c}{\rm{lnf}}\left( {{{\rm{y}}_{\rm{1}}}{\rm{,}}{{\rm{y}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{y}}_{\rm{n}}}{\rm{;\lambda }}} \right){\rm{ = ln}}\left( {{{\rm{\lambda }}^{\rm{n}}}{\rm{ \times p \times exp}}\left\{ {{\rm{ - \lambda }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} } \right\}} \right)\\{\rm{ = n \times ln\lambda + lnp - \lambda }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} \end{array}\)

The maximum likelihood estimator is obtained by taking the derivative of the log likelihood function with respect to \({\rm{\lambda }}\) and setting it equal to \({\rm{0}}\).

As a result, the derivative,

\(\begin{array}{c}\frac{{\rm{d}}}{{{\rm{d\lambda }}}}{\rm{lnf}}\left( {{{\rm{y}}_{\rm{1}}}{\rm{,}}{{\rm{y}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{y}}_{\rm{n}}}{\rm{;\lambda }}} \right){\rm{ = }}\frac{{\rm{d}}}{{{\rm{d\lambda }}}}\left( {{\rm{n \times ln\lambda + lnp - \lambda }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} } \right)\\{\rm{ = }}\frac{{\rm{n}}}{{\rm{\lambda }}}{\rm{ + 0 - }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} \\{\rm{ = }}\frac{{\rm{n}}}{{\rm{\lambda }}}{\rm{ - }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} \end{array}\)

As a result, solving the equation

\(\begin{array}{c}\frac{{\rm{n}}}{{{\rm{\hat \lambda }}}}{\rm{ - }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} {\rm{ = 0}}\\\frac{{\rm{n}}}{{{\rm{\hat \lambda }}}}{\rm{ = }}\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{j}}}}}{{{{\rm{t}}_{\rm{j}}}}}} \end{array}\)

for\({\rm{\hat \lambda }}\). Hence, the maximum likelihood estimator is,

\({\rm{\hat \lambda = }}\frac{{\rm{n}}}{{\sum\limits_{{\rm{j = 1}}}^{\rm{n}} {{{\rm{Y}}_{\rm{j}}}} {\rm{/}}{{\rm{t}}_{\rm{j}}}}}\).
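The closed-form estimator is easy to exercise on simulated data. A minimal numerical sketch (`lam_true` and the pipe ages are illustrative assumptions, not values from the cited article): since each \({{\rm{y}}_{\rm{j}}}{\rm{/}}{{\rm{t}}_{\rm{j}}}\) recovers a draw of \({\rm{R}}\), the estimate should land near the true rate.

```python
import random

# Simulate corrosion losses y_j = t_j * R_j with R_j ~ Exponential(lam_true),
# then apply the closed-form MLE derived above: lam_hat = n / sum(y_j / t_j).
random.seed(1)

lam_true = 0.8
ages = [3.0, 7.0, 12.0, 20.0, 35.0] * 2000          # pipe ages t_j (illustrative)
losses = [t * random.expovariate(lam_true) for t in ages]

n = len(losses)
lam_hat = n / sum(y / t for y, t in zip(losses, ages))

print(round(lam_hat, 3))
```

Note that each ratio \({\rm{y/t}}\) is just an observation of \({\rm{R}}\), so the estimator reduces to the familiar exponential MLE \({\rm{n/}}\sum {{\rm{R}}_{\rm{j}}}\).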


Most popular questions from this chapter

An estimator \({\rm{\hat \theta }}\) is said to be consistent if for any \({\rm{\varepsilon > 0}}\), \({\rm{P(|\hat \theta - \theta |}} \ge {\rm{\varepsilon )}} \to {\rm{0}}\) as \({\rm{n}} \to \infty \). That is, \({\rm{\hat \theta }}\) is consistent if, as the sample size gets larger, it is less and less likely that \({\rm{\hat \theta }}\) will be further than \({\rm{\varepsilon }}\) from the true value of \({\rm{\theta }}\). Show that \({\rm{\bar X}}\) is a consistent estimator of \({\rm{\mu }}\) when \({{\rm{\sigma }}^{\rm{2}}}{\rm{ < }}\infty \), by using Chebyshev's inequality from Exercise \({\rm{44}}\) of Chapter \({\rm{3}}\). (Hint: The inequality can be rewritten in the form \({\rm{P}}\left( {\left| {{\rm{Y - }}{{\rm{\mu }}_{\rm{Y}}}} \right| \ge {\rm{\varepsilon }}} \right) \le {\rm{\sigma }}_{\rm{Y}}^{\rm{2}}{\rm{/}}{{\rm{\varepsilon }}^{\rm{2}}}\). Now identify \({\rm{Y}}\) with \({\rm{\bar X}}\).)
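A simulation can illustrate (though not prove) this consistency: the probability that the sample mean deviates from the true mean by at least ε shrinks as the sample size grows. All numbers below are illustrative choices, not part of the exercise.

```python
import random

# Estimate P(|Xbar - mu| >= eps) by Monte Carlo for two sample sizes;
# the larger sample should make large deviations much rarer.
random.seed(2)

mu, sigma, eps, reps = 10.0, 4.0, 0.5, 2000

def deviation_prob(n):
    count = 0
    for _ in range(reps):
        xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if abs(xbar - mu) >= eps:
            count += 1
    return count / reps

p_small, p_large = deviation_prob(20), deviation_prob(500)
print(p_small, p_large)
```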

Each of 150 newly manufactured items is examined and the number of scratches per item is recorded (the items are supposed to be free of scratches), yielding the following data:

Assume that X, the number of scratches on a randomly picked item, has a Poisson distribution with parameter \({\bf{\mu }}\).

a. Calculate the estimate for the data using an unbiased estimator of \({\bf{\mu }}\). (Hint: for X Poisson, \({\rm{E(X) = \mu }}\), therefore \({\rm{E(\bar X) = ?}}\))

c. What is your estimator's standard deviation (standard error)? Calculate the standard error estimate. (Hint: \({\rm{\sigma }}_{\rm{X}}^{\rm{2}}{\rm{ = \mu }}\) for \({\rm{X}}\) Poisson.)
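The data table for this exercise did not survive extraction, so the sketch below uses made-up scratch counts purely to show the mechanics of parts (a) and (c): the unbiased estimator of \({\rm{\mu }}\) is \({\rm{\bar X}}\), and since \({\rm{\sigma }}_{\rm{X}}^{\rm{2}}{\rm{ = \mu }}\), the estimated standard error is \(\sqrt {{\rm{\bar x/n}}} \).

```python
import math

# Hypothetical scratch counts (NOT the textbook's data): maps number of
# scratches -> number of items observed with that many scratches.
counts = {0: 80, 1: 40, 2: 20, 3: 10}   # 150 items total

n = sum(counts.values())
xbar = sum(k * c for k, c in counts.items()) / n   # unbiased estimate of mu

# Var(X) = mu for Poisson data, so Var(Xbar) = mu/n and the plug-in
# standard error estimate is sqrt(xbar / n).
se_hat = math.sqrt(xbar / n)

print(round(xbar, 3), round(se_hat, 4))
```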

Of \({{\rm{n}}_{\rm{1}}}\)randomly selected male smokers, \({{\rm{X}}_{\rm{1}}}\) smoked filter cigarettes, whereas of \({{\rm{n}}_{\rm{2}}}\) randomly selected female smokers, \({{\rm{X}}_{\rm{2}}}\) smoked filter cigarettes. Let \({{\rm{p}}_{\rm{1}}}\) and \({{\rm{p}}_{\rm{2}}}\) denote the probabilities that a randomly selected male and female, respectively, smoke filter cigarettes.

a. Show that \({\rm{(}}{{\rm{X}}_{\rm{1}}}{\rm{/}}{{\rm{n}}_{\rm{1}}}{\rm{) - (}}{{\rm{X}}_{\rm{2}}}{\rm{/}}{{\rm{n}}_{\rm{2}}}{\rm{)}}\) is an unbiased estimator for \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\). (Hint: \({\rm{E(}}{{\rm{X}}_{\rm{i}}}{\rm{) = }}{{\rm{n}}_{\rm{i}}}{{\rm{p}}_{\rm{i}}}\) for \({\rm{i = 1,2}}\).)

b. What is the standard error of the estimator in part (a)?

c. How would you use the observed values \({{\rm{x}}_{\rm{1}}}\) and \({{\rm{x}}_{\rm{2}}}\) to estimate the standard error of your estimator?

d. If \({{\rm{n}}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{\rm{ = 200, }}{{\rm{x}}_{\rm{1}}}{\rm{ = 127}}\), and \({{\rm{x}}_{\rm{2}}}{\rm{ = 176}}\), use the estimator of part (a) to obtain an estimate of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\).

e. Use the result of part (c) and the data of part (d) to estimate the standard error of the estimator.
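Parts (d) and (e) reduce to plugging the given numbers into the estimator from part (a) and the plug-in standard error from part (c). A short sketch of that arithmetic:

```python
import math

# Part (d): point estimate of p1 - p2 from the observed counts.
# Part (e): estimated standard error sqrt(p1_hat*q1/n1 + p2_hat*q2/n2).
n1, n2, x1, x2 = 200, 200, 127, 176

p1_hat, p2_hat = x1 / n1, x2 / n2
diff = p1_hat - p2_hat

se = math.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

print(round(diff, 3), round(se, 4))   # -> -0.245 0.0411
```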

A sample of \({\rm{n}}\) captured Pandemonium jet fighters results in serial numbers \({{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{,}}{{\rm{x}}_{\rm{3}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}\). The CIA knows that the aircraft were numbered consecutively at the factory starting with \({\rm{\alpha }}\) and ending with \({\rm{\beta }}\), so that the total number of planes manufactured is \({\rm{\beta - \alpha + 1}}\) (e.g., if \({\rm{\alpha = 17}}\) and \({\rm{\beta = 29}}\), then \({\rm{29 - 17 + 1 = 13}}\) planes having serial numbers \({\rm{17,18,19, \ldots ,28,29}}\) were manufactured). However, the CIA does not know the values of \({\rm{\alpha }}\) or \({\rm{\beta }}\). A CIA statistician suggests using the estimator \({\rm{max}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ - min}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ + 1}}\) to estimate the total number of planes manufactured.

a. If \({\rm{n = 5}}\), \({{\rm{x}}_{\rm{1}}}{\rm{ = 237}}\), \({{\rm{x}}_{\rm{2}}}{\rm{ = 375}}\), \({{\rm{x}}_{\rm{3}}}{\rm{ = 202}}\), \({{\rm{x}}_{\rm{4}}}{\rm{ = 525}}\), and \({{\rm{x}}_{\rm{5}}}{\rm{ = 418}}\), what is the corresponding estimate?

b. Under what conditions on the sample will the value of the estimate be exactly equal to the true total number of planes? Will the estimate ever be larger than the true total? Do you think the estimator is unbiased for estimating\({\rm{\beta - \alpha + 1}}\)? Explain in one or two sentences.
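For part (a), the statistician's estimator is a one-liner on the five observed serial numbers:

```python
# Apply the suggested estimator max(x_i) - min(x_i) + 1 to the sample.
serials = [237, 375, 202, 525, 418]

estimate = max(serials) - min(serials) + 1
print(estimate)   # -> 324
```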

An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of \({\rm{n}}\) students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of \({\rm{100}}\) cards, of which \({\rm{50}}\) are of type I and \({\rm{50}}\) are of type II.

Type I: Have you violated the honor code (yes or no)?

Type II: Is the last digit of your telephone number a \({\rm{0}}\), \({\rm{1}}\), or \({\rm{2}}\) (yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let \({\rm{p}}\) denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let \({\rm{\lambda = P}}\)(yes response). Then \({\rm{\lambda }}\) and \({\rm{p}}\) are related by \({\rm{\lambda = }}{\rm{.5p + (}}{\rm{.5)(}}{\rm{.3)}}\).

a. Let \({\rm{Y}}\) denote the number of yes responses, so \({\rm{Y\sim}}\)Bin\({\rm{(n,\lambda )}}\). Thus \({\rm{Y/n}}\) is an unbiased estimator of \({\rm{\lambda }}\). Derive an estimator for \({\rm{p}}\) based on \({\rm{Y}}\). If \({\rm{n = 80}}\) and \({\rm{y = 20}}\), what is your estimate? (Hint: Solve \({\rm{\lambda = }}{\rm{.5p + }}{\rm{.15}}\) for \({\rm{p}}\) and then substitute \({\rm{Y/n}}\) for \({\rm{\lambda }}\).)

b. Use the fact that \({\rm{E(Y/n) = \lambda }}\) to show that your estimator \({\rm{\hat p}}\) is unbiased.

c. If there were \({\rm{70}}\) type I and \({\rm{30}}\) type II cards, what would be your estimator for \({\rm{p}}\)?
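The estimators asked for in parts (a) and (c) follow by solving the stated relation for \({\rm{p}}\) and substituting \({\rm{Y/n}}\). A sketch of both (the function names are mine):

```python
# With 50/50 cards: lambda = .5p + (.5)(.3), so p_hat = 2*(Y/n) - 0.3.
def p_hat_50_50(y, n):
    return 2 * (y / n) - 0.3

# With 70/30 cards: lambda = .7p + (.3)(.3), so p_hat = (Y/n - 0.09) / 0.7.
def p_hat_70_30(y, n):
    return (y / n - 0.09) / 0.7

# Numbers from part (a): n = 80 students, y = 20 yes responses.
est = p_hat_50_50(20, 80)
print(round(est, 2))   # -> 0.2
```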
