
Question: Suppose that \({X_1}, \ldots ,{X_n}\) form a random sample from a normal distribution for which both the mean and the variance are unknown. Find the M.L.E. of the 0.95 quantile of the distribution, that is, of the point \(\theta \) such that

\(\Pr \left( {X < \theta } \right) = 0.95\)

Short Answer


The M.L.E. of the 0.95 quantile of X is \({\theta _0}\sqrt {\frac{1}{n}\sum {{{\left( {{X_i} - \overline X } \right)}^2}} } + \overline X \), where \({\theta _0} = {\Phi ^{ - 1}}\left( {0.95} \right) \approx 1.645\).

Step by step solution

01

Define the pdf

Let \({X_1}, \ldots ,{X_n}\) be a random sample from a normal distribution in which both the mean and the variance are unknown.

Let the mean be \(\mu \) and the variance \({\sigma ^2}\).

The probability density function is:

\(f\left( x \right) = \frac{1}{{\sigma \sqrt {2\pi } }}{e^{ - \frac{{{{\left( {x - \mu } \right)}^2}}}{{2{\sigma ^2}}}}}\)
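This density can be evaluated directly in code; a minimal sketch in Python (the function name is illustrative, not part of the textbook solution):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard
    deviation sigma, matching the formula above."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# At x = mu the density is 1 / (sigma * sqrt(2*pi)); for the standard
# normal this is about 0.3989.
print(normal_pdf(0.0, 0.0, 1.0))
```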

02

Defining the quantile

The 0.95 quantile of the distribution is the value \(\theta = \theta \left( {\mu ,\sigma } \right)\) satisfying the condition

\(\begin{array}{c}P\left( {X < \theta } \right) = P\left( {\frac{{X - \mu }}{\sigma } < \frac{{\theta - \mu }}{\sigma }} \right)\\ = \Phi \left( {\frac{{\theta - \mu }}{\sigma }} \right) = 0.95\end{array}\)

Let,

\(\begin{array}{l}{\theta _0} = \frac{{\theta - \mu }}{\sigma } = {\Phi ^{ - 1}}\left( {0.95} \right)\\ \Rightarrow \theta = \sigma {\theta _0} + \mu \ldots \left( 1 \right)\end{array}\)
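Since \({\theta _0} = {\Phi ^{ - 1}}\left( {0.95} \right)\) is the 0.95 quantile of the standard normal distribution, its numerical value can be checked with Python's standard library (a quick verification sketch, not part of the original solution):

```python
from statistics import NormalDist

# theta_0 = Phi^{-1}(0.95): the 0.95 quantile of the standard normal
theta0 = NormalDist().inv_cdf(0.95)
print(round(theta0, 4))  # 1.6449
```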

03

Calculating the log-likelihood and differentiating it with respect to the parameters

\(L\left( {\mu ,{\sigma ^2}} \right) = \frac{{ - n}}{2}\ln \left( {2\pi {\sigma ^2}} \right) - \sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{{2{\sigma ^2}}}} \)

One needs to find the parameter values that maximize the log-likelihood function.

Differentiating with respect to \(\mu \)

\(\begin{array}{c}\frac{{\partial L}}{{\partial \mu }} = \frac{\partial }{{\partial \mu }}\left( {\frac{{ - n}}{2}\ln \left( {2\pi {\sigma ^2}} \right) - \sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{{2{\sigma ^2}}}} } \right)\\ = 0 + \sum {\frac{{{X_i} - \mu }}{{{\sigma ^2}}}} \\ = \sum {\frac{{{X_i} - \mu }}{{{\sigma ^2}}}} \end{array}\)

Setting the derivative equal to zero,

\(\sum {\frac{{{X_i} - \mu }}{{{\sigma ^2}}}} = 0\)

Therefore, for any \({\sigma ^2}\), the likelihood function is maximized when

\(\begin{array}{c}\mu = \overline X \\ = \frac{1}{n}\sum {{X_i}} \end{array}\)

This is the sample mean.

Differentiating with respect to \({\sigma ^2}\)

\(\begin{array}{c}\frac{{\partial L}}{{\partial {\sigma ^2}}} = \frac{\partial }{{\partial {\sigma ^2}}}\left( {\frac{{ - n}}{2}\ln \left( {2\pi {\sigma ^2}} \right) - \sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{{2{\sigma ^2}}}} } \right)\\ = \frac{{ - n}}{{2{\sigma ^2}}} + \left( {\sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{2}} } \right)\frac{1}{{{{\left( {{\sigma ^2}} \right)}^2}}}\end{array}\)

Setting this derivative equal to zero,

\(\begin{array}{c}\frac{{ - n}}{{2{\sigma ^2}}} + \left( {\sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{2}} } \right)\frac{1}{{{{\left( {{\sigma ^2}} \right)}^2}}} = 0\\\left( {\sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{2}} } \right)\frac{1}{{{{\left( {{\sigma ^2}} \right)}^2}}} = \frac{n}{{2{\sigma ^2}}}\\\sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{2}} = \frac{{n \times \left( {{\sigma ^4}} \right)}}{{2{\sigma ^2}}}\\{\sigma ^2} = \sum {\frac{{{{\left( {{X_i} - \mu } \right)}^2}}}{n}} \end{array}\)

For \(\mu = \overline X \), the likelihood function is maximized when

\({\sigma ^2} = \frac{1}{n}\sum {{{\left( {{X_i} - \overline X } \right)}^2}} \)
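The two maximizers derived above (the sample mean, and the average squared deviation with divisor \(n\) rather than \(n - 1\)) can be computed as follows; a minimal sketch in Python with an illustrative function name:

```python
from statistics import fmean

def normal_mle(sample):
    """M.L.E.s of (mu, sigma^2) for a normal sample: the sample mean,
    and the average squared deviation (divisor n, not n - 1)."""
    n = len(sample)
    mu_hat = fmean(sample)
    sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n
    return mu_hat, sigma2_hat

print(normal_mle([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```

Note that the divisor-\(n\) variance is the M.L.E. even though the unbiased estimator divides by \(n - 1\).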

This is the sample variance (with divisor \(n\)).

04

Substitute the values in the quantile function

Substituting the M.L.E.s of \(\mu \) and \({\sigma ^2}\) into expression (1) (by the invariance property of M.L.E.s), the M.L.E. of the 0.95 quantile of X is:

\(\begin{array}{c}\hat \theta = \hat \sigma {\theta _0} + \hat \mu \\ = {\theta _0}\sqrt {\frac{1}{n}\sum {{{\left( {{X_i} - \overline X } \right)}^2}} } + \overline X \end{array}\)

The M.L.E. of the 0.95 quantile of X is \({\theta _0}\sqrt {\frac{1}{n}\sum {{{\left( {{X_i} - \overline X } \right)}^2}} } + \overline X \), where \({\theta _0} = {\Phi ^{ - 1}}\left( {0.95} \right) \approx 1.645\).
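Putting the pieces together, the estimator \(\hat \theta = \overline X + {\theta _0}\hat \sigma \) can be sketched and sanity-checked by simulation (the function name, sample size, and seed are illustrative assumptions):

```python
from statistics import NormalDist, fmean

def mle_95_quantile(sample):
    """M.L.E. of the 0.95 quantile: mu_hat + Phi^{-1}(0.95) * sigma_hat,
    where sigma_hat is the divisor-n standard deviation."""
    n = len(sample)
    mu_hat = fmean(sample)
    sigma_hat = (sum((x - mu_hat) ** 2 for x in sample) / n) ** 0.5
    return mu_hat + NormalDist().inv_cdf(0.95) * sigma_hat

# Sanity check: for a large sample from N(10, 2^2), the estimate should
# be close to the true 0.95 quantile, 10 + 1.6449 * 2 ≈ 13.29.
data = NormalDist(10.0, 2.0).samples(100_000, seed=1)
print(mle_95_quantile(data))
```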


Most popular questions from this chapter

The Pareto distribution with parameters \({x_0}\) and \(\alpha \) \(\left( {{x_0} > 0\;{\rm{and}}\;\alpha > 0} \right)\) is defined in Exercise 16 of Sec. 5.7. Show that the family of Pareto distributions is a conjugate family of prior distributions for samples from a uniform distribution on the interval (0, θ), where the value of the endpoint θ is unknown.

Question: Let \({X_1}, \ldots ,{X_n}\) be a random sample from the uniform distribution on the interval \(\left( {0,\theta } \right)\).

a. Find the method of moments estimator of \({\bf{\theta }}\).

b. Show that the method of moments estimator is not the M.L.E.

Suppose that \({X_1}...{X_n}\) form a random sample from a distribution for which the p.d.f. is \(f\left( {x|\theta } \right)\), the value of \(\theta \) is unknown, and the prior p.d.f. of\(\theta \) is. Show that the posterior p.d.f. \(\xi \left( \theta \right)\) is the same regardless of whether it is calculated directly by using Eq. (7.2.7) or sequentially by using Eqs. (7.2.14), (7.2.15), and (7.2.16).

Suppose that the proportion \(\theta \) of defective items in a large manufactured lot is unknown, and the prior distribution of \(\theta \) is the uniform distribution on the interval \(\left[ {0,1} \right]\). When eight items are selected at random from the lot, it is found that exactly three of them are defective. Determine the posterior distribution of \(\theta \).

Question: Consider the conditions of Exercise 2 again. Suppose that the prior distribution of θ is as given in Exercise 2, and suppose again that 20 items are selected at random from the shipment.

a. For what number of defective items in the sample will the mean squared error of the Bayes estimate be a maximum?

b. For what number of defective items in the sample will the mean squared error of the Bayes estimate be a minimum?
