
Question: Prove that the method of moments estimator for the parameter of a Bernoulli distribution is the M.L.E.

Short Answer


Both the method of moments estimator and the maximum likelihood estimator of the Bernoulli parameter p equal the sample mean, so the method of moments estimator is the M.L.E.

Step by step solution

01

Define the PMF of the Bernoulli distribution

Let the random variables \({X_1}, \ldots ,{X_n}\) be i.i.d., each drawn from the same Bernoulli distribution with the same parameter \(p\), that is, \({X_i} \sim \mathrm{Ber}\left( p \right)\).

The PMF of X is:

\(f\left( x \right) = {p^x}{\left( {1 - p} \right)^{1 - x}},\quad x = 0,1\)
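As a quick numerical check, here is a minimal Python sketch of this PMF (the value p = 0.3 is an arbitrary illustration, not taken from the problem):

def bernoulli_pmf(x, p):
    # Bernoulli PMF: f(x) = p^x * (1 - p)^(1 - x) for x in {0, 1}
    return p ** x * (1 - p) ** (1 - x)

p = 0.3  # illustrative value only
print(bernoulli_pmf(1, p))                        # 0.3 -> P(X = 1) = p
print(bernoulli_pmf(0, p))                        # 0.7 -> P(X = 0) = 1 - p
print(bernoulli_pmf(0, p) + bernoulli_pmf(1, p))  # 1.0 -> probabilities sum to 1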

02

Calculate the method of moments estimator of the Bernoulli distribution

In the method of moments, the population moments are equated to the corresponding sample moments.

Let the j-th population moment be defined as follows:

\({\mu _j}\left( \theta \right) = E\left( {X_i^j} \right)\)

Here j is the order of the moment.

Therefore, for j=1,

\(\begin{array}{c}{\mu _1}\left( \theta \right) = E\left( {X_i^1} \right)\\ = E\left( {{X_1}} \right) \quad \ldots \left( 1 \right)\end{array}\)

This shows that \({\mu _1}\left( \theta \right)\) equals the population mean \(E\left( {{X_1}} \right)\); for a Bernoulli random variable, \(E\left( {{X_1}} \right) = p\).

Let the j-th sample moment be defined as follows:

\({m_j} = \frac{1}{n}\sum\limits_{i = 1}^n {X_i^j} \)

Here again j is the order of the moment.

Therefore, for j=1,

\(\begin{array}{c}{m_1} = \frac{1}{n}\sum\limits_{i = 1}^n {X_i^1} \\ = \frac{1}{n}\sum\limits_{i = 1}^n {{X_i}} \quad \ldots \left( 2 \right)\end{array}\)

This shows that \({m_1}\) equals the sample mean \({\bar X_n} = \frac{1}{n}\sum\limits_{i = 1}^n {{X_i}} \).

The method of moments estimator is obtained by equating (1) and (2):

\({\mu _1}\left( \theta \right) = {m_1} \Rightarrow E\left( {{X_1}} \right) = \frac{1}{n}\sum\limits_{i = 1}^n {{X_i}} \)

Since \(E\left( {{X_1}} \right) = p\) for a Bernoulli random variable, this gives

\({\hat p_{{\rm{MOM}}}} = \frac{1}{n}\sum\limits_{i = 1}^n {{X_i}} \)

Therefore, the method of moments estimate of \(p\) is the sample mean.
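A short simulation sketch in Python (assuming numpy is available; the sample size and true parameter are illustrative choices, not part of the problem) confirms that the method of moments estimate is just the observed sample mean:

import numpy as np

rng = np.random.default_rng(seed=0)
p_true = 0.3                            # illustrative true parameter
x = rng.binomial(1, p_true, size=1000)  # 1,000 i.i.d. Bernoulli(p_true) draws

# First sample moment m_1 = (1/n) * sum(X_i), i.e., the sample mean
p_mom = x.mean()
print(p_mom)  # close to 0.3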

03

Calculate the MLE of the Bernoulli distribution

The likelihood function for the parameter p of a Bernoulli distribution, given observed values \({x_1}, \ldots ,{x_n}\), is:

\(\begin{array}{c}L\left( p \right) = {p^{{x_1}}}{\left( {1 - p} \right)^{1 - {x_1}}} \times {p^{{x_2}}}{\left( {1 - p} \right)^{1 - {x_2}}} \times \ldots \times {p^{{x_n}}}{\left( {1 - p} \right)^{1 - {x_n}}}\\ = \prod\limits_{i = 1}^n {{p^{{x_i}}}{{\left( {1 - p} \right)}^{1 - {x_i}}}} \end{array}\)

Apply log to the likelihood function.

\(\log L\left( p \right) = \left( {\sum\limits_{i = 1}^n {{x_i}} } \right)\log \left( p \right) + \left( {n - \sum\limits_{i = 1}^n {{x_i}} } \right)\log \left( {1 - p} \right)\)

To find the value of p that maximizes the log-likelihood, we take the first derivative and set it equal to 0.

\(\frac{\partial }{{\partial p}}\log L\left( p \right) = \frac{{\sum\limits_{i = 1}^n {{x_i}} }}{p} - \frac{{n - \sum\limits_{i = 1}^n {{x_i}} }}{{1 - p}}\)

Setting the derivative equal to 0 and solving for p:

\(\begin{array}{c}\left( {1 - p} \right)\sum\limits_{i = 1}^n {{x_i}} = p\left( {n - \sum\limits_{i = 1}^n {{x_i}} } \right)\\ \hat p = \frac{1}{n}\sum\limits_{i = 1}^n {{x_i}} \end{array}\)

Therefore, the MLE of \(p\) is also the sample mean.
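To see this agreement numerically, the following sketch (again assuming numpy; the grid search over p is purely illustrative, since the closed form was derived above) maximizes the log-likelihood and compares the maximizer with the sample mean:

import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.binomial(1, 0.3, size=1000)  # same illustrative sample as above
s, n = x.sum(), x.size

# log L(p) = (sum x_i) * log(p) + (n - sum x_i) * log(1 - p)
grid = np.linspace(0.001, 0.999, 999)
loglik = s * np.log(grid) + (n - s) * np.log(1 - grid)
p_mle = grid[np.argmax(loglik)]

print(p_mle, x.mean())  # the two values agree (up to grid resolution)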

Hence, the method of moments estimator for the parameter of a Bernoulli distribution is the M.L.E.


