Problem 10

Suppose the pdf of \(X\) is of a location and scale family as defined in Example 6.4.4. Show that if \(f(z)=f(-z)\), then the entry \(I_{12}\) of the information matrix is 0. Then argue that in this case the mles of \(a\) and \(b\) are asymptotically independent.

Short Answer

Under a symmetric location-scale family, the off-diagonal entry \(I_{12}\) of the Fisher information matrix is 0, so the asymptotic covariance between the maximum likelihood estimators of the location \(a\) and the scale \(b\) vanishes. Because the two estimators are jointly asymptotically normal, zero asymptotic covariance implies asymptotic independence: for large samples, \(\hat{a}\) and \(\hat{b}\) behave as independent estimators.

Step by step solution

01

Define the Probability Density Function

Let the probability density function (pdf) of \(X\) be $$f_X(x; a, b) = \frac{1}{b}\, f\!\left(\frac{x-a}{b}\right), \qquad z = \frac{x-a}{b}, \quad b > 0.$$ This is a location-scale family with location parameter \(a\) and scale parameter \(b\).
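As a concrete illustration (an added example, not part of the original problem), taking the base density \(f\) to be the standard normal gives the familiar \(N(a, b^{2})\) family, $$f_X(x; a, b) = \frac{1}{b\sqrt{2\pi}} \exp\left\{-\frac{1}{2}\left(\frac{x-a}{b}\right)^{2}\right\},$$ and the standard normal density satisfies \(f(z) = f(-z)\), so the symmetry hypothesis of the problem applies to this member of the family.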
02

Evaluate the Fisher Information Matrix elements

The Fisher information matrix \(I(a,b)\) has the form $$I(a, b) = \left[\begin{array}{cc} I_{11} & I_{12} \\ I_{21} & I_{22} \end{array}\right],$$ where $$I_{11} = E\!\left[\left(\frac{\partial}{\partial a}\log f_X(X; a, b)\right)^{2}\right], \qquad I_{22} = E\!\left[\left(\frac{\partial}{\partial b}\log f_X(X; a, b)\right)^{2}\right],$$ $$I_{12} = I_{21} = E\!\left[\frac{\partial}{\partial a}\log f_X(X; a, b)\cdot\frac{\partial}{\partial b}\log f_X(X; a, b)\right].$$ Here, the expectation is taken over the distribution of \(X\). We are interested in \(I_{12}\).
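As a supplementary numerical check (not part of the textbook solution), the sketch below estimates these three entries by Monte Carlo for the normal member of the family, using the score formulas derived in Step 03 below; the variable names and the choice of base density are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch: estimate the Fisher information matrix of the
# location-scale family f_X(x; a, b) = f((x - a)/b)/b when the base
# density f is standard normal (an illustrative assumption).

rng = np.random.default_rng(0)
a, b = 2.0, 1.5                      # true location and scale
n = 1_000_000                        # Monte Carlo sample size
x = a + b * rng.standard_normal(n)   # draws from f_X

z = (x - a) / b
# For the normal base density, f'(z)/f(z) = -z, so the scores are:
score_a = z / b                      # d/da log f_X = -(1/b) f'(z)/f(z)
score_b = (z**2 - 1) / b             # d/db log f_X = -(1/b)(1 + z f'(z)/f(z))

# Information entries as Monte Carlo averages of score products
I11 = np.mean(score_a**2)
I12 = np.mean(score_a * score_b)
I22 = np.mean(score_b**2)
print(f"I11 = {I11:.4f}, I12 = {I12:.4f}, I22 = {I22:.4f}")
# Normal case: I11 = 1/b^2 = 0.444..., I12 = 0, I22 = 2/b^2 = 0.888...
```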
03

Derive \(I_{12}\)

Write \(\log f_X(x; a, b) = -\log b + \log f(z)\) with \(z = (x-a)/b\). Explicit differentiation gives $$\frac{\partial}{\partial a} \log f_X = -\frac{1}{b}\,\frac{f'(z)}{f(z)}, \qquad \frac{\partial}{\partial b} \log f_X = -\frac{1}{b}\left(1 + z\,\frac{f'(z)}{f(z)}\right).$$ Multiplying these and changing variables from \(x\) to \(z\) (so that \(f_X(x)\,dx = f(z)\,dz\)) yields $$I_{12} = \frac{1}{b^{2}} \int_{-\infty}^{\infty} \left[\frac{f'(z)}{f(z)} + z\left(\frac{f'(z)}{f(z)}\right)^{2}\right] f(z)\, dz.$$
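A quick worked check of this formula (supplementary to the solution): for the standard normal base density, \(f'(z)/f(z) = -z\), so $$I_{12} = \frac{1}{b^{2}}\, E\!\left[-Z + Z \cdot Z^{2}\right] = \frac{1}{b^{2}}\left(E[Z^{3}] - E[Z]\right) = 0,$$ since the odd moments of the standard normal vanish. The next step shows that only the symmetry of \(f\), not normality, is needed.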
04

Apply Symmetry Condition

Given that \(f(z) = f(-z)\), differentiating both sides gives \(f'(-z) = -f'(z)\), so the score \(f'(z)/f(z)\) is an odd function of \(z\). Consequently both terms of the bracketed integrand in Step 03, \(f'(z)/f(z)\) and \(z\,(f'(z)/f(z))^{2}\), are odd, while the density \(f(z)\) is even, so each product integrates to 0 over \((-\infty, \infty)\). (For the first term this can also be seen directly: \(\int f'(z)\,dz = 0\).) Hence \(I_{12} = 0\).
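To illustrate that this argument uses only symmetry, here is a small numerical sketch (an added illustration, assuming NumPy and SciPy are available) that evaluates the Step 03 integral for the standard logistic base density, which is symmetric but not normal:

```python
import numpy as np
from scipy.integrate import quad

# Check numerically that I12 = 0 for a non-normal symmetric base
# density: the standard logistic (an illustrative choice).

def f(z):
    """Standard logistic density; satisfies f(z) = f(-z)."""
    return 1.0 / (4.0 * np.cosh(z / 2.0) ** 2)

def score(z):
    """d/dz log f(z) = -tanh(z/2), an odd function."""
    return -np.tanh(z / 2.0)

b = 1.0  # scale parameter; it only rescales I12 by 1/b^2

# b^2 * I12 = integral of [f'/f + z (f'/f)^2] f(z) dz; the integrand
# is odd, so the result should be 0 up to quadrature error.
integrand = lambda z: (score(z) + z * score(z) ** 2) * f(z)
val, err = quad(integrand, -50, 50)  # tails beyond |z| = 50 are negligible
print(f"I12 = {val / b**2:.2e} (quadrature error about {err:.1e})")
```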
05

Discuss Implications on Asymptotic Independence

A zero off-diagonal entry makes the Fisher information matrix, and hence its inverse, diagonal. Under the usual regularity conditions, \(\sqrt{n}\,(\hat{a} - a,\, \hat{b} - b)\) converges in distribution to a bivariate normal with covariance matrix \(I(a,b)^{-1}\), so the limiting covariance between \(\hat{a}\) and \(\hat{b}\) is 0. For a bivariate normal distribution, zero covariance implies independence; therefore, for large samples, the maximum likelihood estimators of the location \(a\) and the scale \(b\) are asymptotically independent.
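Finally, a simulation sketch (an added illustration, again assuming the normal member of the family) that draws many samples, computes the mles \(\hat{a} = \bar{x}\) and \(\hat{b} = \sqrt{n^{-1}\sum_i (x_i - \bar{x})^{2}}\), and checks that their sample correlation is near zero:

```python
import numpy as np

# Simulate repeated samples from N(a, b^2) and check that the mles of
# the location and the scale are (nearly) uncorrelated, as the zero
# off-diagonal information entry predicts.

rng = np.random.default_rng(1)
a, b = 2.0, 1.5
n_samples = 100       # sample size per replication
n_reps = 20_000       # number of replications

x = a + b * rng.standard_normal((n_reps, n_samples))
a_hat = x.mean(axis=1)                                     # mle of a
b_hat = np.sqrt(((x - a_hat[:, None]) ** 2).mean(axis=1))  # mle of b

corr = np.corrcoef(a_hat, b_hat)[0, 1]
print(f"empirical corr(a_hat, b_hat) = {corr:.3f}")  # close to 0
```

For the normal family, \(\bar{X}\) and \(\hat{b}\) are in fact exactly independent, so a near-zero correlation is expected at any sample size; for other symmetric base densities the correlation vanishes only in the large-sample limit.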

Most popular questions from this chapter

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(\Gamma(\alpha=3, \beta=\theta)\) distribution, where \(0 \leq \theta<\infty\). (a) Show that the likelihood ratio test of \(H_{0}: \theta=\theta_{0}\) versus \(H_{1}: \theta \neq \theta_{0}\) is based upon the statistic \(W=\sum_{i=1}^{n} X_{i}\). Obtain the null distribution of \(2 W / \theta_{0}\). (b) For \(\theta_{0}=3\) and \(n=5\), find \(c_{1}\) and \(c_{2}\) so that the test that rejects \(H_{0}\) when \(W \leq c_{1}\) or \(W \geq c_{2}\) has significance level \(0.05\).

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(\Gamma(\alpha, \beta)\) distribution where \(\alpha\) is known and \(\beta>0\). Determine the likelihood ratio test for \(H_{0}: \beta=\beta_{0}\) against \(H_{1}: \beta \neq \beta_{0}\).

Rao (page 368, 1973) considers a problem in the estimation of linkages in genetics. McLachlan and Krishnan (1997) also discuss this problem and we present their model. For our purposes, it can be described as a multinomial model with the four categories \(C_{1}, C_{2}, C_{3}\), and \(C_{4}\). For a sample of size \(n\), let \(\mathbf{X}=\left(X_{1}, X_{2}, X_{3}, X_{4}\right)^{\prime}\) denote the observed frequencies of the four categories. Hence, \(n=\sum_{i=1}^{4} X_{i}\). The probability model is $$\begin{array}{|c|c|c|c|}\hline C_{1} & C_{2} & C_{3} & C_{4} \\ \hline \frac{1}{2}+\frac{1}{4} \theta & \frac{1}{4}-\frac{1}{4} \theta & \frac{1}{4}-\frac{1}{4} \theta & \frac{1}{4} \theta \\ \hline\end{array}$$ where the parameter \(\theta\) satisfies \(0 \leq \theta \leq 1\). In this exercise, we obtain the mle of \(\theta\). (a) Show that the likelihood function is given by $$L(\theta \mid \mathbf{x})=\frac{n !}{x_{1} ! x_{2} ! x_{3} ! x_{4} !}\left[\frac{1}{2}+\frac{1}{4} \theta\right]^{x_{1}}\left[\frac{1}{4}-\frac{1}{4} \theta\right]^{x_{2}+x_{3}}\left[\frac{1}{4} \theta\right]^{x_{4}}$$ (b) Show that the log of the likelihood function can be expressed as a constant (not involving parameters) plus the term $$x_{1} \log [2+\theta]+\left[x_{2}+x_{3}\right] \log [1-\theta]+x_{4} \log \theta$$ (c) Obtain the partial derivative with respect to \(\theta\) of the last expression, set the result to 0, and solve for the mle. (This will result in a quadratic equation which has one positive and one negative root.)

Let \(X\) and \(Y\) be two independent random variables with respective pdfs $$f\left(x ; \theta_{i}\right)=\left\{\begin{array}{ll} \frac{1}{\theta_{i}} e^{-x / \theta_{i}} & 0<x<\infty \\ 0 & \text{elsewhere,} \end{array}\right.$$ for \(i=1,2\).

Suppose \(X_{1}, X_{2}, \ldots, X_{n_{1}}\) is a random sample from a \(N(\theta, 1)\) distribution. Besides these \(n_{1}\) observable items, suppose there are \(n_{2}\) missing items, which we denote by \(Z_{1}, Z_{2}, \ldots, Z_{n_{2}}\). Show that the first-step EM estimate is $$\widehat{\theta}^{(1)}=\frac{n_{1} \bar{x}+n_{2} \hat{\theta}^{(0)}}{n},$$ where \(\hat{\theta}^{(0)}\) is an initial estimate of \(\theta\) and \(n=n_{1}+n_{2}\). Note that if \(\hat{\theta}^{(0)}=\bar{x}\), then \(\widehat{\theta}^{(k)}=\bar{x}\) for all \(k\).
