Q13E


Question: Suppose that \({X_1}, \ldots ,{X_n}\) form a random sample from a distribution for which the p.d.f. is as specified in Exercise 9 of Section 7.5. Show that the sequence of M.L.E.'s of \(\theta \) is a consistent sequence.

Short Answer

The M.L.E. is \({\widehat \theta _n} = \frac{n}{{\sum\limits_{i = 1}^n {\left( { - \ln {X_i}} \right)} }}\). Because \(E\left( { - \ln {X_i}} \right) = \frac{1}{\theta }\), the law of large numbers and the continuous mapping theorem give \({\widehat \theta _n}\buildrel p \over \longrightarrow \theta \), so the sequence of M.L.E.'s is consistent.

Step by step solution

01

Given information

The given p.d.f. is that of the \(Beta\left( {\theta ,1} \right)\) distribution:

\(f\left( x \right) = \left\{ \begin{array}{l}\theta {x^{\theta - 1}},\,\,0 < x < 1\\0,\,\,{\rm{otherwise}}\end{array} \right.\)

02

Calculate the M.L.E. of \({\bf{\theta }}\)

Defining the likelihood function:

\(\begin{array}{c}L\left( \theta  \right) = \prod\limits_{i = 1}^n {f\left( {{x_i}} \right)} \\ = \prod\limits_{i = 1}^n {\theta x_i^{\theta  - 1}} \\ = {\theta ^n}{\left( {\prod\limits_{i = 1}^n {{x_i}} } \right)^{\theta  - 1}}\end{array}\)

The log-likelihood is:

\(\ln L\left( \theta  \right) = n\ln \theta  + \left( {\theta  - 1} \right)\sum\limits_{i = 1}^n {\ln {x_i}} \)

Setting its derivative with respect to \(\theta \) equal to 0:

\(\frac{d}{{d\theta }}\ln L\left( \theta  \right) = \frac{n}{\theta } + \sum\limits_{i = 1}^n {\ln {x_i}}  = 0\)

Solving for \(\theta \):

\(\widehat \theta  = \frac{{ - n}}{{\sum\limits_{i = 1}^n {\ln {x_i}} }} = \frac{n}{{\sum\limits_{i = 1}^n {\left( { - \ln {x_i}} \right)} }}\)

Since \(0 < {x_i} < 1\), each \(\ln {x_i} < 0\), so \(\widehat \theta  > 0\). Moreover, \(\frac{{{d^2}}}{{d{\theta ^2}}}\ln L\left( \theta  \right) =  - \frac{n}{{{\theta ^2}}} < 0\), so the log-likelihood is strictly concave and \(\widehat \theta \) is its unique maximizer.
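As a quick numerical sanity check (our illustration, not part of the textbook solution), the closed form above can be compared against a brute-force maximization of the log-likelihood on simulated data. The sketch below assumes NumPy is available; the value theta_true = 2.5, the random seed, and the grid bounds are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.5                        # arbitrary illustrative value
n = 200
x = rng.beta(theta_true, 1.0, size=n)   # sample from Beta(theta, 1)

s = np.log(x).sum()    # sum of ln x_i; strictly negative since 0 < x_i < 1
theta_hat = -n / s     # closed-form M.L.E. derived above

# Brute-force check: the log-likelihood n*ln(theta) + (theta - 1)*s
# should peak at (approximately) theta_hat on a fine grid.
grid = np.linspace(0.1, 10.0, 100_000)
loglik = n * np.log(grid) + (grid - 1.0) * s
print(theta_hat, grid[np.argmax(loglik)])   # the two values agree closely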

03

Show that the sequence of M.L.E.'s is consistent

If \(X \sim Beta\left( {\theta ,1} \right)\), then for \(y > 0\), \(P\left( { - \ln X > y} \right) = P\left( {X < {e^{ - y}}} \right) = {e^{ - \theta y}}\), so \( - \ln {X_i}\) has the exponential distribution with parameter \(\theta \) and \(E\left( { - \ln {X_i}} \right) = \frac{1}{\theta }\).

By the law of large numbers,

\(\frac{1}{n}\sum\limits_{i = 1}^n {\left( { - \ln {X_i}} \right)} \buildrel p \over \longrightarrow \frac{1}{\theta }\)

Since \(g\left( t \right) = \frac{1}{t}\) is continuous at \(t = \frac{1}{\theta } > 0\), the continuous mapping theorem gives

\({\widehat \theta _n} = {\left( {\frac{1}{n}\sum\limits_{i = 1}^n {\left( { - \ln {X_i}} \right)} } \right)^{ - 1}}\buildrel p \over \longrightarrow \theta \)

Hence \({\widehat \theta _n}\buildrel p \over \longrightarrow \theta \), so the sequence of M.L.E.'s of \(\theta \) is a consistent sequence.
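A small simulation sketch (again our illustration, under the same assumptions as the check in Step 02) makes this convergence visible: re-using the closed-form estimator on ever larger samples, the estimates settle near the true parameter. theta_true = 2.5 and the sample sizes are arbitrary choices:

import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.5   # arbitrary illustrative value

for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.beta(theta_true, 1.0, size=n)   # Beta(theta, 1) sample
    theta_hat = n / (-np.log(x)).sum()      # M.L.E. from Step 02
    print(f"n = {n:>6}: theta_hat = {theta_hat:.4f}")

# The printed estimates approach theta_true as n grows, illustrating
# that the sequence of M.L.E.'s is consistent.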


