Problem 55


Each of \(n\) specimens is to be weighed twice on the same scale. Let \(X_{i}\) and \(Y_{i}\) denote the two observed weights for the \(i\)th specimen. Suppose \(X_{i}\) and \(Y_{i}\) are independent of each other, each normally distributed with mean value \(\mu_{i}\) (the true weight of specimen \(i\)) and variance \(\sigma^{2}\).

a. Show that the maximum likelihood estimator of \(\sigma^{2}\) is \(\hat{\sigma}^{2}=\sum\left(X_{i}-Y_{i}\right)^{2} /(4 n)\). [Hint: If \(\bar{z}=\left(z_{1}+z_{2}\right) / 2\), then \(\sum\left(z_{i}-\bar{z}\right)^{2}=\left(z_{1}-z_{2}\right)^{2} / 2\).]

b. Is the mle \(\hat{\sigma}^{2}\) an unbiased estimator of \(\sigma^{2}\)? Find an unbiased estimator of \(\sigma^{2}\). [Hint: For any rv \(Z\), \(E\left(Z^{2}\right)=V(Z)+[E(Z)]^{2}\). Apply this to \(Z=X_{i}-Y_{i}\).]

Short Answer

MLE \(\hat{\sigma}^2 = \sum (X_i - Y_i)^2 /(4n)\) is biased. Unbiased: \(\tilde{\sigma}^2 = \sum (X_i - Y_i)^2 /(2n)\).

Step by step solution

01

Understanding the Probability Distribution

Each specimen's weight is measured twice: once as \(X_i\) and once as \(Y_i\). Both measurements are normally distributed with a mean \(\mu_i\) and a variance \(\sigma^2\). The weights \(X_i\) and \(Y_i\) are independent.
02

Finding the Variance of the Difference

Calculate the variance of the difference \(Z_i = X_i - Y_i\). Apply the property of variance \(V(X - Y) = V(X) + V(Y)\) (since \(X\) and \(Y\) are independent): \( V(Z_i) = \sigma^2 + \sigma^2 = 2\sigma^2 \).
03

Defining the Maximum Likelihood Estimator

Treat each pair \((X_i, Y_i)\) as a sample of size 2 from the \(N(\mu_i, \sigma^2)\) distribution. Maximizing the likelihood over each \(\mu_i\) gives \(\hat{\mu}_i = (X_i + Y_i)/2\), and the usual normal MLE of \(\sigma^2\) is the average squared deviation of all \(2n\) observations from their estimated means: \(\hat{\sigma}^2 = \frac{1}{2n}\sum_i \left[(X_i - \hat{\mu}_i)^2 + (Y_i - \hat{\mu}_i)^2\right]\). By the hint with \(z_1 = X_i\) and \(z_2 = Y_i\), each inner sum equals \((X_i - Y_i)^2/2\), so \(\hat{\sigma}^2 = \frac{1}{2n}\sum \frac{(X_i - Y_i)^2}{2} = \frac{1}{4n}\sum (X_i - Y_i)^2\).
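As a sanity check, the estimator can be computed directly from the paired weighings. This is a minimal sketch; the weighing data below are hypothetical:

```python
def mle_sigma2(pairs):
    """MLE of sigma^2 from n duplicate weighings: sum (X_i - Y_i)^2 / (4n)."""
    n = len(pairs)
    return sum((x - y) ** 2 for x, y in pairs) / (4 * n)

# Hypothetical duplicate weighings (X_i, Y_i) of three specimens.
pairs = [(10.2, 10.0), (25.1, 24.9), (7.4, 7.6)]
print(mle_sigma2(pairs))  # sum of squared differences is 0.12, so approximately 0.12/12 = 0.01
```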
04

Checking Unbiasedness of the MLE

Compute the expected value of the MLE: \( E(\hat{\sigma}^2) = E\left( \frac{1}{4n} \sum (X_i - Y_i)^2 \right) = \frac{1}{4n} \sum E[(X_i - Y_i)^2]\). Since \(E(Z_i) = E(X_i) - E(Y_i) = \mu_i - \mu_i = 0\), the hint \(E(Z^2) = V(Z) + [E(Z)]^2\) gives \(E(Z_i^2) = 2\sigma^2 + 0^2 = 2\sigma^2\). Hence \(E(\hat{\sigma}^2) = \frac{1}{4n} \cdot n \cdot 2\sigma^2 = \frac{\sigma^2}{2}\). This shows \(\hat{\sigma}^2\) is biased: on average it underestimates \(\sigma^2\) by a factor of 2.
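The bias can be checked by Monte Carlo: averaging the MLE over many simulated datasets should land near \(\sigma^2/2\), not \(\sigma^2\). The sample size, trial count, true weights, and true \(\sigma\) below are all arbitrary choices for the simulation:

```python
import random

random.seed(0)
TRUE_SIGMA = 2.0                                   # assumed measurement SD, so sigma^2 = 4
n, trials = 50, 2000
mus = [random.uniform(5, 50) for _ in range(n)]    # arbitrary true specimen weights

total = 0.0
for _ in range(trials):
    ss = 0.0
    for mu in mus:
        x = random.gauss(mu, TRUE_SIGMA)           # first weighing
        y = random.gauss(mu, TRUE_SIGMA)           # second weighing
        ss += (x - y) ** 2
    total += ss / (4 * n)                          # MLE for this simulated dataset

avg = total / trials
print(avg)   # close to sigma^2 / 2 = 2.0, not sigma^2 = 4.0
```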
05

Finding an Unbiased Estimator

To correct the bias, multiply the MLE by a factor of \(2\) so that the expectation equals \(\sigma^2\): \(\tilde{\sigma}^2 = 2\cdot \frac{1}{4n} \sum (X_i - Y_i)^2 = \frac{1}{2n} \sum (X_i - Y_i)^2\).
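Doubling the MLE gives the corrected estimator in one line; the sample pairs here are hypothetical:

```python
def unbiased_sigma2(pairs):
    """Unbiased estimator of sigma^2: sum (X_i - Y_i)^2 / (2n), i.e. twice the MLE."""
    n = len(pairs)
    return sum((x - y) ** 2 for x, y in pairs) / (2 * n)

pairs = [(10.2, 10.0), (25.1, 24.9), (7.4, 7.6)]
print(unbiased_sigma2(pairs))  # approximately 0.12/6 = 0.02, twice the MLE value
```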

Unlock Step-by-Step Solutions & Ace Your Exams!

  • Full Textbook Solutions

    Get detailed explanations and key concepts

  • Unlimited Al creation

    Al flashcards, explanations, exams and more...

  • Ads-free access

    To over 500 millions flashcards

  • Money-back guarantee

    We refund you if you fail your exam.

Over 30 million students worldwide already upgrade their learning with 91Ó°ÊÓ!

Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Normal Distribution
The normal distribution, a fundamental concept in statistics, serves as the basis for many statistical methods and analyses. It is a continuous probability distribution that characterizes many natural phenomena. A variable that follows a normal distribution has a bell-shaped density curve that is symmetric about its mean, denoted \(\mu\), which sits at the center of the distribution.

Two parameters define a normal distribution:
  • Mean (\(\mu\)): The central or average value of the distribution.
  • Variance (\(\sigma^2\)): Measures the spread of the distribution, providing insight into data variability.
In the context of our problem, the weights \(X_i\) and \(Y_i\) adhere to this normal distribution, each sharing the mean \(\mu_i\) and variance \(\sigma^2\). These two weights, taken independently for each specimen, help establish a foundation for variance estimation using maximum likelihood.
Estimator Bias
In statistics, estimator bias refers to the systematic deviation between the expected value of an estimator and the actual value it intends to estimate. An unbiased estimator is such that its expected value equals the true parameter value. In our problem, the MLE for variance, \(\hat{\sigma}^2\), is initially calculated as a biased estimator.

Here, the bias arises because the expected value of \(\hat{\sigma}^2\) does not equal the actual variance \(\sigma^2\) but rather half of it: \(\frac{\sigma^2}{2}\). This discrepancy means the MLE systematically underestimates the true variance, which is why biased estimators are often rescaled so that they represent the parameter accurately.
Variance Calculation
Variance is a measure of the dispersion or spread within a set of data points. For normally distributed variables, variance quantifies how far individual values fall from the mean. In the context of this problem, we calculate the variance of the difference between two weights, \(Z_i = X_i - Y_i\).

Since \(X\) and \(Y\) are independent, the variance of \(Z_i\) follows the sum of the variances: \(V(Z_i) = V(X_i) + V(Y_i) = 2\sigma^2\). Thus, variance calculation becomes a critical step in determining the properties of the Maximum Likelihood Estimator. Recognizing that the variance of differences aligns with the sum of individual variances offers clarity in analyzing where potential biases in estimates may arise.
  • The computation begins with identifying variance of differences: \(V(X-Y) = V(X) + V(Y)\).
  • The independence of values ensures this relationship holds accurately, simplifying the estimation process.
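The relation \(V(X - Y) = V(X) + V(Y) = 2\sigma^2\) for independent measurements can be illustrated with a quick simulation; the values of \(\mu\) and \(\sigma\) are arbitrary:

```python
import random
import statistics

random.seed(1)
sigma = 3.0    # assumed common measurement SD, so 2 * sigma^2 = 18
mu = 20.0      # one specimen's true weight

# Differences of two independent weighings of the same specimen.
diffs = [random.gauss(mu, sigma) - random.gauss(mu, sigma) for _ in range(200_000)]
print(statistics.variance(diffs))   # close to 2 * sigma**2 = 18
```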
Unbiased Estimator
An unbiased estimator is a statistic that estimates a parameter without systematic error. For variance estimates, achieving unbiasedness means the expected value of the estimator equals the true parameter value.

In the given exercise, the MLE \(\hat{\sigma}^2\) was shown to underestimate the true variance \(\sigma^2\). To create an unbiased estimator, a correction factor was applied. By multiplying the MLE by a factor of 2 (i.e., \(\tilde{\sigma}^2 = 2\cdot \frac{1}{4n} \sum (X_i - Y_i)^2\)), it successfully adjusts the expectation to \(\sigma^2\). This adjustment emphasizes the estimator's reliability and precision by eliminating systematic bias.
  • Correcting biased estimators ensures accuracy in parameter representation.
  • The adjusted unbiased estimator builds foundation for further statistical analyses.


