Problem 8


Let \(X\) be \(N(0, \theta), 0<\theta<\infty\). (a) Find the Fisher information \(I(\theta)\). (b) If \(X_{1}, X_{2}, \ldots, X_{n}\) is a random sample from this distribution, show that the mle of \(\theta\) is an efficient estimator of \(\theta\). (c) What is the asymptotic distribution of \(\sqrt{n}(\widehat{\theta}-\theta) ?\)

Short Answer

Expert verified
The Fisher information is \(I(\theta)=\frac{1}{2\theta^2}\); the MLE \(\widehat{\theta}=\frac{1}{n}\sum_{i=1}^{n} X_i^2\) attains the Cramér-Rao lower bound \(\frac{2\theta^2}{n}\) and is therefore efficient; and the asymptotic distribution of \(\sqrt{n}(\widehat{\theta}-\theta)\) is \(N\left(0,2\theta^2\right)\).

Step by step solution

01

Fisher Information

First, let's find the Fisher information. The probability density function of \(X\) is \(f(x; \theta) = \frac{1}{\sqrt{2 \pi \theta}} e^{-\frac{x^2}{2\theta}}\); note that here \(\theta\) plays the role of the variance. Since \(\log f(x;\theta) = -\frac{1}{2}\log(2\pi\theta) - \frac{x^2}{2\theta}\), the derivative of the log-likelihood with respect to \(\theta\) is \(\frac{d}{d\theta} \log f(x;\theta) = -\frac{1}{2\theta} + \frac{x^2}{2\theta^2} = \frac{x^2-\theta}{2\theta^2}\). The Fisher information \(I(\theta)\) is the variance of this score. Because \(X^2/\theta \sim \chi^2(1)\), we have \(E[X^2] = \theta\) and \(\operatorname{Var}(X^2) = 2\theta^2\), so \(I(\theta) = \operatorname{Var}\left(\frac{X^2-\theta}{2\theta^2}\right) = \frac{2\theta^2}{4\theta^4} = \frac{1}{2\theta^2}\).
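As a quick numeric sanity check, the variance of the score can be estimated by Monte Carlo and compared with \(1/(2\theta^2)\). This is a minimal sketch; the variable names (`theta`, `score`) are illustrative, not from the text.

```python
import numpy as np

# Monte Carlo check that Var(score) = I(theta) = 1/(2*theta^2) for X ~ N(0, theta),
# where theta is the VARIANCE of X.
rng = np.random.default_rng(0)
theta = 2.0
x = rng.normal(0.0, np.sqrt(theta), size=1_000_000)

# Score: d/dtheta log f(x; theta) = -1/(2*theta) + x^2/(2*theta^2)
score = -1.0 / (2 * theta) + x**2 / (2 * theta**2)

fisher_mc = score.var()              # Monte Carlo estimate of I(theta)
fisher_exact = 1.0 / (2 * theta**2)  # = 0.125 for theta = 2
print(fisher_mc, fisher_exact)
```

With a million draws the two values agree to a few decimal places, and the score's sample mean is near zero, as the theory requires.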
02

Efficiency of the MLE

The log-likelihood of the sample is \(\ell(\theta) = -\frac{n}{2}\log(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^{n} X_i^2\). Setting \(\ell'(\theta) = -\frac{n}{2\theta} + \frac{1}{2\theta^2}\sum_{i=1}^{n} X_i^2 = 0\) and solving gives the MLE \( \widehat{\theta} = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \). Since \(E[X_i^2] = \theta\), this estimator is unbiased, and since \(\operatorname{Var}(X_i^2) = 2\theta^2\), its variance is \(\operatorname{Var}(\widehat{\theta}) = \frac{2\theta^2}{n}\). The Cramér-Rao lower bound is \(\frac{1}{nI(\theta)} = \frac{2\theta^2}{n}\), which \(\widehat{\theta}\) attains exactly. Thus, the MLE \( \widehat{\theta} \) is an efficient estimator of \( \theta \).
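The equality \(\operatorname{Var}(\widehat{\theta}) = 2\theta^2/n\) can be checked by simulating many samples. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

# Check that Var(theta_hat) for theta_hat = mean(X_i^2) matches the
# Cramer-Rao lower bound 2*theta^2/n.
rng = np.random.default_rng(1)
theta, n, reps = 3.0, 50, 200_000

x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
theta_hat = (x**2).mean(axis=1)   # MLE computed on each simulated sample

crlb = 2 * theta**2 / n           # = 1 / (n * I(theta)) = 0.36 here
print(theta_hat.mean(), theta_hat.var(), crlb)
```

The simulated mean of \(\widehat{\theta}\) sits at \(\theta\) (unbiasedness) and its simulated variance sits at the bound, confirming efficiency numerically.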
03

Asymptotic Distribution

Because \( \widehat{\theta} = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \) is a sample mean of i.i.d. terms with mean \(\theta\) and variance \(2\theta^2\), the Central Limit Theorem applies directly: \(\sqrt{n}(\widehat{\theta} - \theta)\) converges in distribution to \(N(0, 2\theta^2)\). Equivalently, this is the general result that for an efficient MLE, \(\sqrt{n}(\widehat{\theta} - \theta)\) converges in distribution to \(N(0, I(\theta)^{-1})\); substituting \(I(\theta) = \frac{1}{2\theta^2}\) gives the asymptotic distribution \(N\left(0,2\theta^2\right)\).
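The asymptotic claim can be visualized by simulating \(\sqrt{n}(\widehat{\theta}-\theta)\) for a moderately large \(n\) and comparing its spread with \(\sqrt{2}\,\theta\). A sketch, with parameter values chosen purely for illustration:

```python
import numpy as np

# sqrt(n)*(theta_hat - theta) should be approximately N(0, 2*theta^2)
# for large n; compare its empirical standard deviation with sqrt(2)*theta.
rng = np.random.default_rng(2)
theta, n, reps = 1.5, 400, 100_000

x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
z = np.sqrt(n) * ((x**2).mean(axis=1) - theta)

asymp_sd = np.sqrt(2.0) * theta   # sqrt of I(theta)^{-1} = sqrt(2*theta^2)
print(z.mean(), z.std(), asymp_sd)
```

The empirical standard deviation of `z` matches \(\sqrt{2}\theta \approx 2.12\), and a histogram of `z` would look close to a normal curve, up to a small skew that vanishes as \(n\) grows.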


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Maximum Likelihood Estimation (MLE)
When trying to estimate a parameter, one powerful tool statisticians use is Maximum Likelihood Estimation (MLE). The goal of MLE is to find the value of the parameter that maximizes the likelihood function, which represents the probability of observing the given data. For instance, if we consider the exercise, where we have observations from a normal distribution with variance \( \theta \), we aim to find \( \theta \) such that the observed data is most probable.
To compute MLE, we first write down the likelihood function for the data. Next, we take the natural logarithm of the likelihood — because it simplifies calculations — leading to what is termed the log-likelihood function. Differentiating the log-likelihood with respect to \( \theta \), we solve the resulting equation to find the MLE \( \widehat{\theta} \).
This estimate is helpful because it uses the entire sample, and under standard regularity conditions the MLE is consistent, asymptotically normal, and asymptotically efficient as the sample size increases (it need not be unbiased in finite samples, although in this exercise it happens to be).
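The recipe above (write the log-likelihood, then maximize) can be carried out numerically and compared against the closed-form answer for this model. A minimal sketch; the data are simulated for illustration and the grid bounds are an assumption:

```python
import numpy as np

# MLE recipe for X ~ N(0, theta): maximize the log-likelihood numerically
# and compare with the closed-form MLE mean(x^2).
rng = np.random.default_rng(3)
x = rng.normal(0.0, np.sqrt(2.0), size=500)   # simulated data, true theta = 2

def loglik(theta):
    # sum of log f(x_i; theta) with f the N(0, theta) density
    return -0.5 * len(x) * np.log(2 * np.pi * theta) - (x**2).sum() / (2 * theta)

grid = np.linspace(0.5, 5.0, 20_000)
theta_numeric = grid[np.argmax(loglik(grid))]  # grid-search maximizer
theta_closed = (x**2).mean()                   # analytic MLE
print(theta_numeric, theta_closed)
```

The two values agree up to the grid spacing, which is the point: the calculus in the text is just an exact shortcut for this maximization.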
Efficient Estimators
An estimator is considered efficient if it achieves the lowest possible variance among all unbiased estimators, given by the Cramér-Rao Lower Bound (CRLB). This efficiency ensures that the estimator makes optimal use of the data available. In practical terms, an efficient estimator will vary less around the true parameter value compared to any other unbiased estimator of that parameter.
In the exercise, the MLE \( \widehat{\theta} = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \) is shown to be efficient. The variance derived for \( \widehat{\theta} \), namely \(2\theta^2/n\), achieves the CRLB, which confirms its efficiency. Achieving this lower bound is desirable because it indicates the estimator uses the Fisher information fully, making it reliable and precise for inference.
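Efficiency is easiest to see by comparison. The sketch below contrasts the MLE with an unbiased but deliberately wasteful competitor that discards half the sample; the competitor's variance is roughly double the CRLB. The setup is illustrative only:

```python
import numpy as np

# Compare the efficient MLE mean(X^2) with an unbiased estimator that
# only uses the first half of the sample (so its variance is ~2x larger).
rng = np.random.default_rng(4)
theta, n, reps = 2.0, 100, 100_000

x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
efficient = (x**2).mean(axis=1)                 # attains the CRLB 2*theta^2/n
wasteful = (x[:, : n // 2] ** 2).mean(axis=1)   # unbiased, but ignores half the data

print(efficient.var(), wasteful.var(), 2 * theta**2 / n)
```

Both estimators center on \(\theta\), but only the MLE's variance matches the bound \(2\theta^2/n = 0.08\); the wasteful one sits near \(0.16\).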
Asymptotic Distribution
The concept of asymptotic distribution comes into play when dealing with large sample sizes. It describes the distribution of an estimator as the sample size tends to infinity. One key point is that many estimators, under certain regularity conditions, become normally distributed as the sample size grows large.
In the given exercise, the asymptotic distribution of \( \sqrt{n}(\widehat{\theta} - \theta) \) is used to describe how the MLE behaves with a growing sample size. We leverage the Central Limit Theorem, which suggests that, for large \( n \), this quantity will follow a normal distribution with mean 0 and variance \( I(\theta)^{-1} \). Here, \( I(\theta) \) is the Fisher Information, playing a pivotal role in determining the spread of the distribution.
This distribution enables statisticians to construct confidence intervals and perform hypothesis tests for \( \theta \), even when the original distribution is not normal.
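As a concrete use of the asymptotic distribution, one can form a Wald-type 95% interval \(\widehat{\theta} \pm 1.96\sqrt{2}\,\widehat{\theta}/\sqrt{n}\) (plugging \(\widehat{\theta}\) into the standard error) and check its coverage by simulation. A sketch under illustrative parameter choices:

```python
import numpy as np

# Coverage check for the asymptotic 95% Wald interval for theta:
# theta_hat +/- 1.96 * sqrt(2) * theta_hat / sqrt(n).
rng = np.random.default_rng(5)
theta, n, reps = 2.0, 200, 50_000

x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
theta_hat = (x**2).mean(axis=1)
half_width = 1.96 * np.sqrt(2.0) * theta_hat / np.sqrt(n)

covered = (theta_hat - half_width <= theta) & (theta <= theta_hat + half_width)
coverage = covered.mean()
print(coverage)   # should be close to the nominal 0.95
```

For \(n = 200\) the empirical coverage lands near 0.95, with the small shortfall attributable to the finite-sample skew of \(\widehat{\theta}\).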


Most popular questions from this chapter

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a distribution with pmf \(p(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), where \(0<\theta<1 .\) We wish to test \(H_{0}: \theta=1 / 3\) versus \(H_{1}: \theta \neq 1 / 3\). (a) Find \(\Lambda\) and \(-2 \log \Lambda\). (b) Determine the Wald-type test. (c) What is Rao's score statistic?
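For the Bernoulli testing question, the three statistics have simple closed forms once \(\widehat{\theta} = \overline{X}\) is computed. A sketch writing them directly from their textbook definitions; the data are simulated purely for illustration:

```python
import numpy as np

# H0: theta = 1/3 for Bernoulli data; compute -2 log Lambda, the Wald
# statistic, and Rao's score statistic from their standard formulas.
rng = np.random.default_rng(6)
theta0 = 1.0 / 3.0
x = rng.binomial(1, 0.4, size=100)   # illustrative sample
n, s = x.size, x.sum()
theta_hat = s / n                    # MLE of theta

# Likelihood ratio: -2 log Lambda
lrt = 2 * (s * np.log(theta_hat / theta0)
           + (n - s) * np.log((1 - theta_hat) / (1 - theta0)))
# Wald: information evaluated at theta_hat
wald = n * (theta_hat - theta0) ** 2 / (theta_hat * (1 - theta_hat))
# Rao score: information evaluated at theta0
score = n * (theta_hat - theta0) ** 2 / (theta0 * (1 - theta0))
print(lrt, wald, score)
```

All three are asymptotically \(\chi^2(1)\) under \(H_0\) and give similar values on the same data; they differ only in where the information is evaluated.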

Recall that \(\widehat{\theta}=-n / \sum_{i=1}^{n} \log X_{i}\) is the mle of \(\theta\) for a beta \((\theta, 1)\) distribution. Also, \(W=-\sum_{i=1}^{n} \log X_{i}\) has the gamma distribution \(\Gamma(n, 1 / \theta)\). (a) Show that \(2 \theta W\) has a \(\chi^{2}(2 n)\) distribution. (b) Using part (a), find \(c_{1}\) and \(c_{2}\) so that $$ P\left(c_{1}<\frac{2 \theta n}{\hat{\theta}}<c_{2}\right)=0.95 $$
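Part (a) of this question can be checked by simulation: draw beta\((\theta, 1)\) samples via the inverse CDF \(x = u^{1/\theta}\) and verify that \(2\theta W\) has the \(\chi^2(2n)\) mean \(2n\) and variance \(4n\). A sketch with illustrative parameter values:

```python
import numpy as np

# For a beta(theta, 1) sample, W = -sum(log X_i) and 2*theta*W should be
# chi-square(2n). Check the first two moments by Monte Carlo.
rng = np.random.default_rng(7)
theta, n, reps = 2.5, 10, 200_000

u = rng.random(size=(reps, n))
x = u ** (1.0 / theta)        # inverse-CDF draw: beta(theta, 1) has CDF x^theta
w = -np.log(x).sum(axis=1)
q = 2 * theta * w             # claimed chi-square(2n) quantity

print(q.mean(), 2 * n)        # chi-square(2n) mean is 2n = 20
print(q.var(), 4 * n)         # chi-square(2n) variance is 4n = 40
```

Both moments match, which is consistent with (though of course weaker than) the exact distributional claim.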

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(\Gamma(\alpha=4, \beta=\theta)\) distribution, where \(0 \leq \theta<\infty\). (a) Show that the likelihood ratio test of \(H_{0}: \theta=\theta_{0}\) versus \(H_{1}: \theta \neq \theta_{0}\) is based upon the statistic \(W=\sum_{i=1}^{n} X_{i} .\) Obtain the null distribution of \(2 W / \theta_{0}\). (b) For \(\theta_{0}=3\) and \(n=5\), find \(c_{1}\) and \(c_{2}\) so that the test that rejects \(H_{0}\) when \(W \leq c_{1}\) or \(W \geq c_{2}\) has significance level \(0.05 .\)
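For part (b) of the gamma question, \(2W/\theta_0\) is a sum of \(n\) independent \(\Gamma(4, \theta_0)\) variables rescaled to \(\chi^2(8n)\), so the cutoffs follow by inverting the chi-square CDF. A sketch; the equal-tail split is an assumption, since the exercise only asks for some \(c_1, c_2\) with total level 0.05:

```python
from scipy.stats import chi2

# Under H0, 2W/theta0 ~ chi-square(8n); equal-tail cutoffs at level 0.05.
theta0, n, alpha = 3.0, 5, 0.05
df = 8 * n                    # 2W/theta0 ~ chi2(40) when n = 5

c1 = theta0 / 2 * chi2.ppf(alpha / 2, df)        # lower cutoff for W
c2 = theta0 / 2 * chi2.ppf(1 - alpha / 2, df)    # upper cutoff for W
print(c1, c2)
```

Rejecting when \(W \le c_1\) or \(W \ge c_2\) then has size 0.05 under \(H_0: \theta = 3\).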

Rao (page 368,1973 ) considers a problem in the estimation of linkages in genetics. McLachlan and Krishnan (1997) also discuss this problem and we present their model. For our purposes, it can be described as a multinomial model with the four categories \(C_{1}, C_{2}, C_{3}\), and \(C_{4}\). For a sample of size \(n\), let \(\mathbf{X}=\left(X_{1}, X_{2}, X_{3}, X_{4}\right)^{\prime}\) denote the observed frequencies of the four categories. Hence, \(n=\sum_{i=1}^{4} X_{i} .\) The probability model is $$ \begin{array}{|c|c|c|c|} \hline C_{1} & C_{2} & C_{3} & C_{4} \\ \hline \frac{1}{2}+\frac{1}{4} \theta & \frac{1}{4}-\frac{1}{4} \theta & \frac{1}{4}-\frac{1}{4} \theta & \frac{1}{4} \theta \\ \hline \end{array} $$ where the parameter \(\theta\) satisfies \(0 \leq \theta \leq 1\). In this exercise, we obtain the mle of \(\theta\). (a) Show that likelihood function is given by $$ L(\theta \mid \mathbf{x})=\frac{n !}{x_{1} ! x_{2} ! x_{3} ! x_{4} !}\left[\frac{1}{2}+\frac{1}{4} \theta\right]^{x_{1}}\left[\frac{1}{4}-\frac{1}{4} \theta\right]^{x_{2}+x_{3}}\left[\frac{1}{4} \theta\right]^{x_{4}} $$ (b) Show that the log of the likelihood function can be expressed as a constant (not involving parameters) plus the term $$ x_{1} \log [2+\theta]+\left[x_{2}+x_{3}\right] \log [1-\theta]+x_{4} \log \theta $$ (c) Obtain the partial derivative with respect to \(\theta\) of the last expression, set the result to 0 , and solve for the mle. (This will result in a quadratic equation that has one positive and one negative root.)
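For part (c) of the linkage question, clearing denominators in the score equation \(x_1/(2+\theta) - (x_2+x_3)/(1-\theta) + x_4/\theta = 0\) gives the quadratic \(n\theta^2 + (2(x_2+x_3) + x_4 - x_1)\theta - 2x_4 = 0\), whose positive root is the mle. A sketch using the classic counts \((125, 18, 20, 34)\) from Rao's linkage example, as discussed by McLachlan and Krishnan:

```python
import numpy as np

# Solve the score equation for the multinomial linkage model via its quadratic.
x1, x2, x3, x4 = 125, 18, 20, 34   # classic counts from Rao's example
n = x1 + x2 + x3 + x4              # = 197

a, b, c = n, 2 * (x2 + x3) + x4 - x1, -2 * x4
theta_mle = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root

# Check: the score should vanish at the root
score = x1 / (2 + theta_mle) - (x2 + x3) / (1 - theta_mle) + x4 / theta_mle
print(theta_mle, score)
```

The positive root is about 0.6268 (the other root is negative, as the exercise notes), and the score evaluates to zero there up to floating-point error.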

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(\Gamma(\alpha, \beta)\) distribution where \(\alpha\) is known and \(\beta>0\). Determine the likelihood ratio test for \(H_{0}: \beta=\beta_{0}\) against \(H_{1}: \beta \neq \beta_{0}\)
