Q29E

Consider a random sample \(X_1, X_2, \ldots, X_n\) from the shifted exponential pdf

\(f(x; \lambda, \theta) = \begin{cases} \lambda e^{-\lambda(x - \theta)} & x \ge \theta \\ 0 & \text{otherwise} \end{cases}\)

Taking \(\theta = 0\) gives the pdf of the exponential distribution considered previously (with positive density to the right of zero). An example of the shifted exponential distribution appeared in Example 4.5, in which the variable of interest was time headway in traffic flow and \(\theta = .5\) was the minimum possible time headway.

a. Obtain the maximum likelihood estimators of \(\theta\) and \(\lambda\).

b. If \(n = 10\) time headway observations are made, resulting in the values 3.11, .64, 2.55, 2.20, 5.44, 3.42, 10.39, 8.93, 17.82, and 1.30, calculate the estimates of \(\theta\) and \(\lambda\).

Short Answer

Expert verified

(a) The maximum likelihood estimators are \(\hat\lambda = \frac{n}{\sum_{i=1}^{n}\left(X_i - \hat\theta\right)}\) and \(\hat\theta = \min\left(X_i\right)\).

(b) The estimates are \(\hat\theta = 0.64\) and \(\hat\lambda = 0.202\).

Step by step solution

01

Define exponential function

An exponential function is one that grows or decays at a rate proportional to its current value.

02

Explanation

(a) Consider the joint pdf or pmf of the random variables \(X_1, X_2, \ldots, X_n\):

\(f\left(x_1, x_2, \ldots, x_n; \theta_1, \theta_2, \ldots, \theta_m\right), \quad n, m \in \mathbb{N},\)

where \(\theta_i, i = 1, 2, \ldots, m\) are unknown parameters. Regarded as a function of those parameters, with the observed values \(x_1, \ldots, x_n\) held fixed, \(f\) is called the likelihood function. The maximum likelihood estimates (mle's) are the values \(\hat\theta_1, \ldots, \hat\theta_m\) that maximize the likelihood function, so that

\(f\left(x_1, x_2, \ldots, x_n; \hat\theta_1, \hat\theta_2, \ldots, \hat\theta_m\right) \ge f\left(x_1, x_2, \ldots, x_n; \theta_1, \theta_2, \ldots, \theta_m\right)\)

for every \(\theta_i, i = 1, 2, \ldots, m\). The maximum likelihood estimators are then obtained by replacing the observed values \(x_i\) with the random variables \(X_i\).

Because the observations are independent, the likelihood function is the product of the individual densities:

\(\begin{aligned} f\left(x_1, x_2, \ldots, x_n; \lambda, \theta\right) &= \lambda e^{-\lambda\left(x_1 - \theta\right)} \times \lambda e^{-\lambda\left(x_2 - \theta\right)} \times \cdots \times \lambda e^{-\lambda\left(x_n - \theta\right)} \\ &= \lambda^n e^{-\lambda \sum_{i=1}^{n}\left(x_i - \theta\right)} \end{aligned}\)

To locate the maximum, work with the log-likelihood function:

\(\begin{aligned} \ln f\left(x_1, x_2, \ldots, x_n; \lambda, \theta\right) &= \ln\left(\lambda^n e^{-\lambda \sum_{i=1}^{n}\left(x_i - \theta\right)}\right) \\ &= n \ln \lambda - \lambda \sum_{i=1}^{n}\left(x_i - \theta\right) \end{aligned}\)
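
As a sanity check, the log-likelihood is easy to evaluate numerically. Below is a minimal Python sketch (the function name log_likelihood is ours, not from the text); it returns negative infinity whenever some observation falls below \(\theta\) or \(\lambda \le 0\), where the likelihood is zero.

```python
import math

def log_likelihood(x, lam, theta):
    """Shifted-exponential log-likelihood: n*ln(lam) - lam*sum(x_i - theta).

    The likelihood is zero (log-likelihood -inf) if lam <= 0 or if
    any observation lies below the shift parameter theta.
    """
    if lam <= 0 or min(x) < theta:
        return float("-inf")
    n = len(x)
    return n * math.log(lam) - lam * sum(xi - theta for xi in x)
```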

03

Evaluating the maximum likelihood estimators

The maximum likelihood estimator of \(\lambda\) is obtained by differentiating the log-likelihood function with respect to \(\lambda\) and setting the result equal to \(0\).

The derivative is

\(\begin{aligned} \frac{d}{d\lambda} \ln f\left(x_1, x_2, \ldots, x_n; \lambda, \theta\right) &= \frac{d}{d\lambda}\left(n \ln \lambda - \lambda \sum_{i=1}^{n}\left(x_i - \theta\right)\right) \\ &= \frac{n}{\lambda} - \sum_{i=1}^{n}\left(x_i - \theta\right) \end{aligned}\)

Setting this derivative equal to zero and solving for \(\hat\lambda\),

\(\begin{aligned} \frac{n}{\hat\lambda} - \sum_{i=1}^{n}\left(x_i - \hat\theta\right) &= 0 \\ \frac{1}{\hat\lambda} &= \frac{1}{n} \sum_{i=1}^{n}\left(x_i - \hat\theta\right) \end{aligned}\)

Hence, the maximum likelihood estimator is

\(\hat\lambda = \frac{n}{\sum_{i=1}^{n}\left(X_i - \hat\theta\right)}\)

The maximum likelihood estimator of the parameter \(\theta\) is denoted \(\hat\theta\) and is found as follows. To maximize the likelihood function with respect to \(\theta\), factor it as

\(\lambda^n e^{-\lambda \sum_{i=1}^{n}\left(x_i - \theta\right)} = \lambda^n e^{-\lambda \sum_{i=1}^{n} x_i} \times e^{n\lambda\theta}\)

Note that \(\theta\) appears only in the factor

\(e^{n\lambda\theta}\)

Furthermore, the likelihood function is nonzero only when every \(x_i\) is greater than or equal to \(\theta\), that is, only when \(\theta \le \min\left(x_i\right)\).

Because \(\theta\) appears only in this factor, the exponent \(n\lambda\theta\) increases with \(\theta\), and the likelihood is zero for \(\theta\) greater than \(\min\left(x_i\right)\), the highest value is attained at

\(\hat\theta = \min\left(X_i\right)\)
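
The two closed-form estimators fit in a few lines of code. Here is a minimal Python sketch (the helper name shifted_exp_mle is illustrative): first set \(\hat\theta\) to the sample minimum, then plug it into the formula for \(\hat\lambda\).

```python
def shifted_exp_mle(x):
    """MLEs for the shifted exponential:
    theta_hat = min(x_i), lam_hat = n / sum(x_i - theta_hat)."""
    theta_hat = min(x)
    lam_hat = len(x) / sum(xi - theta_hat for xi in x)
    return theta_hat, lam_hat
```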

04

Explanation

(b) The maximum likelihood estimate of \(\theta\) is the minimum of the observed values:

\(\hat\theta = 0.64\)

With \(n = 10\) and \(\hat\theta = 0.64\), the sum of the observations is

\(\sum_{i=1}^{n} x_i = 3.11 + 0.64 + \cdots + 1.3 = 55.8\)

The maximum likelihood estimate of \(\lambda\) is

\(\begin{aligned} \hat\lambda &= \frac{n}{\sum_{i=1}^{n}\left(x_i - \hat\theta\right)} \\ &= \frac{n}{\sum_{i=1}^{n} x_i - n\hat\theta} \\ &= \frac{10}{55.8 - 6.4} \\ &= 0.202 \end{aligned}\)

Therefore, \(\hat\lambda = 0.202\) and \(\hat\theta = 0.64\).
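
These values can be reproduced with a few lines of Python (a quick check on the same data; variable names are ours):

```python
headways = [3.11, 0.64, 2.55, 2.20, 5.44, 3.42, 10.39, 8.93, 17.82, 1.30]

theta_hat = min(headways)                                   # 0.64
lam_hat = len(headways) / (sum(headways) - len(headways) * theta_hat)
print(theta_hat, round(lam_hat, 3))                         # 0.64 0.202
```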


