Problem 7

Consider two Bernoulli distributions with unknown parameters \(p_{1}\) and \(p_{2}\). If \(Y\) and \(Z\) equal the numbers of successes in two independent random samples, each of size \(n\), from the respective distributions, determine the mles of \(p_{1}\) and \(p_{2}\) if we know that \(0 \leq p_{1} \leq p_{2} \leq 1\).

Short Answer

The unconstrained maximum likelihood estimates (MLEs) are \(\hat{p}_{1} = \frac{Y}{n}\) and \(\hat{p}_{2} = \frac{Z}{n}\); these are the answer whenever \(\frac{Y}{n} \leq \frac{Z}{n}\), since they already satisfy the constraint. If \(\frac{Y}{n} > \frac{Z}{n}\), the constrained maximum lies on the boundary \(p_{1} = p_{2}\), and the common MLE is the pooled estimate \(\hat{p}_{1} = \hat{p}_{2} = \frac{Y+Z}{2n}\).

Step by step solution

01

Write down the likelihood function

For a Bernoulli distribution, the likelihood of observing \(y\) successes in a sample of size \(n\) is proportional to \(p^{y}(1-p)^{n-y}\). Since the two samples are independent, the joint likelihood for \(Y\) and \(Z\) is the product \(p_{1}^{Y}(1-p_{1})^{n-Y}\,p_{2}^{Z}(1-p_{2})^{n-Z}\).
02

Log transformation

To simplify the maximization, we take the logarithm of the likelihood function. Ignoring additive constants, which do not affect the location of the maximum, the log-likelihood is \(Y \log(p_{1}) + (n - Y)\log(1 - p_{1}) + Z \log(p_{2}) + (n - Z)\log(1 - p_{2})\).
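The log-likelihood above is easy to evaluate numerically, which lets us sanity-check where it peaks. The sketch below (the function name and the specific values of \(Y\), \(Z\), \(n\) are illustrative, not from the original problem) confirms that it is largest at the unconstrained estimates \(Y/n\) and \(Z/n\):

```python
import math

def log_likelihood(p1, p2, Y, Z, n):
    """Joint log-likelihood of two independent Bernoulli samples of size n,
    with Y successes in the first sample and Z in the second."""
    return (Y * math.log(p1) + (n - Y) * math.log(1 - p1)
            + Z * math.log(p2) + (n - Z) * math.log(1 - p2))

# Illustrative data: Y = 3, Z = 7 successes out of n = 10 each.
Y, Z, n = 3, 7, 10
at_mle = log_likelihood(Y / n, Z / n, Y, Z, n)

# The value at (Y/n, Z/n) should dominate a coarse grid of alternatives.
grid = [(a / 10, b / 10) for a in range(1, 10) for b in range(1, 10)]
assert all(at_mle >= log_likelihood(a, b, Y, Z, n) for a, b in grid)
```

Because the log-likelihood separates into a term in \(p_{1}\) alone plus a term in \(p_{2}\) alone, each piece can be maximized independently, which is why the grid check succeeds.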
03

Solve for MLEs

Setting the derivatives of the log-likelihood with respect to \(p_{1}\) and \(p_{2}\) equal to zero gives \[ Y = n p_{1} \quad \text{and} \quad Z = n p_{2}, \] so the unconstrained MLEs are \(\hat{p}_{1} = \frac{Y}{n}\) and \(\hat{p}_{2} = \frac{Z}{n}\). These are acceptable whenever they satisfy the constraint \(0 \leq \hat{p}_{1} \leq \hat{p}_{2} \leq 1\). If instead \(\frac{Y}{n} > \frac{Z}{n}\), the constrained maximum must occur on the boundary \(p_{1} = p_{2} = p\). The likelihood then reduces to \(p^{Y+Z}(1-p)^{2n-(Y+Z)}\), a single Bernoulli likelihood based on the pooled sample of size \(2n\), and its maximizer is \(\hat{p}_{1} = \hat{p}_{2} = \frac{Y+Z}{2n}\).


Most popular questions from this chapter

Consider a location model $$X_{i}=\theta+e_{i}, \quad i=1, \ldots, n$$ where \(e_{1}, e_{2}, \ldots, e_{n}\) are iid with pdf \(f(z)\). There is a nice geometric interpretation for estimating \(\theta\). Let \(\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)^{\prime}\) and \(\mathbf{e}=\left(e_{1}, \ldots, e_{n}\right)^{\prime}\) be the vectors of observations and random error, respectively, and let \(\boldsymbol{\mu}=\theta \mathbf{1}\), where \(\mathbf{1}\) is a vector with all components equal to one. Let \(V\) be the subspace of vectors of the form \(\boldsymbol{\mu}\); i.e., \(V=\{\mathbf{v}: \mathbf{v}=a \mathbf{1}\), for some \(a \in R\}\). Then in vector notation we can write the model as $$\mathbf{X}=\boldsymbol{\mu}+\mathbf{e}, \quad \boldsymbol{\mu} \in V$$

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a Bernoulli distribution with parameter \(p .\) If \(p\) is restricted so that we know that \(\frac{1}{2} \leq p \leq 1\), find the mle of this parameter.

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from \(N\left(\mu, \sigma^{2}\right)\). (a) If the constant \(b\) is defined by the equation \(\operatorname{Pr}(X \leq b)=0.90\), find the mle of \(b\). (b) If \(c\) is a given constant, find the mle of \(\operatorname{Pr}(X \leq c)\).

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a distribution with pmf \(p(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), where \(0<\theta<1 .\) We wish to test \(H_{0}: \theta=1 / 3\) versus \(H_{1}: \theta \neq 1 / 3\) (a) Find \(\Lambda\) and \(-2 \log \Lambda\). (b) Determine the Wald-type test. (c) What is Rao's score statistic?

A machine shop that manufactures toggle levers has both a day and a night shift. A toggle lever is defective if a standard nut cannot be screwed onto the threads. Let \(p_{1}\) and \(p_{2}\) be the proportion of defective levers among those manufactured by the day and night shifts, respectively. We shall test the null hypothesis, \(H_{0}: p_{1}=p_{2}\), against a two-sided alternative hypothesis based on two random samples, each of 1000 levers taken from the production of the respective shifts. Use the test statistic \(Z^{*}\) given in Example 6.5.3. (a) Sketch a standard normal pdf illustrating the critical region having \(\alpha=0.05\). (b) If \(y_{1}=37\) and \(y_{2}=53\) defectives were observed for the day and night shifts, respectively, calculate the value of the test statistic and the approximate \(p\)-value (note that this is a two-sided test). Locate the calculated test statistic on your figure in Part (a) and state your conclusion. Obtain the approximate \(p\)-value of the test.
