Problem 8


(a) Let \(Y_{1}, \ldots, Y_{n}\) be a random sample from the exponential density \(\lambda e^{-\lambda y}\), \(y>0\), \(\lambda>0\). Say why an unbiased estimator \(W\) for \(\lambda\) should have the form \(a / S\), and hence find \(a\). Find the Fisher information for \(\lambda\) and show that \(\mathrm{E}\left(W^{2}\right)=(n-1) \lambda^{2} /(n-2)\). Deduce that no unbiased estimator of \(\lambda\) attains the Cramér-Rao lower bound, although \(W\) does so asymptotically. (b) Let \(\psi=\operatorname{Pr}(Y>a)=e^{-\lambda a}\), for some constant \(a\). Show that $$ I\left(Y_{1}>a\right)= \begin{cases}1, & Y_{1}>a \\ 0, & \text { otherwise }\end{cases} $$ is an unbiased estimator of \(\psi\), and hence obtain the minimum variance unbiased estimator. Does this attain the Cramér-Rao lower bound for \(\psi\)?

Short Answer

Expert verified
(a) The unbiased estimator is \(W = \frac{n-1}{S}\); its variance \(\lambda^2/(n-2)\) exceeds the CRLB \(\lambda^2/n\), but \(W\) is asymptotically efficient. (b) \(I(Y_1 > a)\) is unbiased for \(\psi\), and Rao-Blackwellisation gives the MVUE \((1 - a/S)^{n-1}\) (for \(S > a\)); it does not attain the CRLB for \(\psi\).

Step by step solution

01

Understanding the Unbiased Estimator Form

For the exponential distribution, the sample sum \( S = \sum_{i=1}^{n} Y_{i} \) follows a gamma distribution with parameters \( n \) and \( \lambda \) (i.e., \( S \sim \text{Gamma}(n,\lambda) \)), and \( S \) is a complete sufficient statistic for \( \lambda \). By the Rao-Blackwell theorem, a minimum variance unbiased estimator must be a function of \( S \); since \( \mathbb{E}(1/S) \) is proportional to \( \lambda \), the natural candidate is \( W = \frac{a}{S} \), with \( a \) chosen so that \( \mathbb{E}\left(\frac{a}{S}\right) = \lambda \).
02

Calculating the Constant 'a'

Matching the expectation, we need \( \mathbb{E}\left(\frac{a}{S}\right) = a\, \mathbb{E}\left(\frac{1}{S}\right) = \lambda \). Since \( S \sim \text{Gamma}(n,\lambda) \), the gamma moments give \( \mathbb{E}\left(\frac{1}{S}\right) = \frac{\lambda}{n-1} \) (valid for \( n > 1 \)). Solving \( a\frac{\lambda}{n-1} = \lambda \) gives \( a = n-1 \). Thus, the unbiased estimator is \( W = \frac{n-1}{S} \).
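As a quick sanity check (not part of the textbook solution), a short Monte Carlo simulation with NumPy can confirm that \( W = (n-1)/S \) is unbiased; the values of \( \lambda \), \( n \) and the replication count below are illustrative choices. Note that NumPy's exponential sampler is parameterised by the scale \( 1/\lambda \), not the rate.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 200_000  # illustrative parameter choices

# Each row is one sample of size n from Exp(lam); S is its sum.
S = rng.exponential(scale=1 / lam, size=(reps, n)).sum(axis=1)
W = (n - 1) / S  # the proposed unbiased estimator

print(np.mean(W))  # close to lam = 2.0
```

The average of \( W \) over replicates should sit within Monte Carlo error of \( \lambda \), whereas the naive estimator \( n/S \) would overshoot by the factor \( n/(n-1) \).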
03

Calculating Fisher Information

The Fisher information from a single observation of the exponential distribution is \( \mathcal{I}_1(\lambda) = \frac{1}{\lambda^{2}} \). Information is additive over independent observations, so for a sample of size \( n \), \( \mathcal{I}(\lambda) = \frac{n}{\lambda^{2}} \).
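For completeness, this follows from differentiating the log-density twice: with \( \ell(\lambda; y) = \log \lambda - \lambda y \), $$ \frac{\partial \ell}{\partial \lambda} = \frac{1}{\lambda} - y, \qquad \frac{\partial^{2} \ell}{\partial \lambda^{2}} = -\frac{1}{\lambda^{2}}, $$ so \( \mathcal{I}_1(\lambda) = -\mathbb{E}\left(\partial^{2} \ell / \partial \lambda^{2}\right) = 1/\lambda^{2} \), a constant that requires no expectation over \( Y \).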
04

Showing Expected Value of \(W^2\)

For \( W = \frac{n-1}{S} \), we compute \( \mathbb{E}(W^2) = (n-1)^2\, \mathbb{E}\left(S^{-2}\right) \). The gamma moments give \( \mathbb{E}\left(S^{-2}\right) = \frac{\lambda^2}{(n-1)(n-2)} \) (valid for \( n > 2 \)). Thus, \( \mathbb{E}(W^2) = \frac{(n-1)^2\lambda^2}{(n-1)(n-2)} = \frac{(n-1)\lambda^2}{n-2} \).
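The second-moment formula can also be checked by simulation (again an illustrative sketch, not part of the textbook solution; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 1.5, 12, 400_000  # illustrative parameter choices

S = rng.exponential(scale=1 / lam, size=(reps, n)).sum(axis=1)
W2 = ((n - 1) / S) ** 2  # W^2 for each replicate

theory = (n - 1) * lam**2 / (n - 2)  # (n-1) lambda^2 / (n-2) = 2.475
print(np.mean(W2), theory)
```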
05

Cramér-Rao Lower Bound

The Cramér-Rao Lower Bound (CRLB) for an unbiased estimator of \( \lambda \) is \( \frac{1}{\mathcal{I}(\lambda)} = \frac{\lambda^2}{n} \). From Step 4, \( \operatorname{var}(W) = \mathbb{E}(W^2) - \lambda^2 = \frac{(n-1)\lambda^2}{n-2} - \lambda^2 = \frac{\lambda^2}{n-2} \), which is strictly greater than \( \frac{\lambda^2}{n} \) for every finite \( n \). Since \( S \) is complete and sufficient, \( W \) has the smallest variance among unbiased estimators of \( \lambda \), so no unbiased estimator attains the CRLB. However, \( \operatorname{var}(W)/\text{CRLB} = \frac{n}{n-2} \to 1 \) as \( n \to \infty \), so \( W \) is asymptotically efficient.
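A few exact evaluations make the finite-sample gap and its asymptotic disappearance concrete (the sample sizes below are arbitrary illustrations):

```python
# Exact comparison of var(W) = lambda^2/(n-2) with the CRLB lambda^2/n.
lam = 1.0
for n in (5, 20, 100, 1000):
    var_w = lam**2 / (n - 2)
    crlb = lam**2 / n
    print(n, round(var_w / crlb, 4))  # ratio n/(n-2) -> 1 as n grows
```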
06

Unbiased Estimator for \( \psi \)

Given \( \psi = \operatorname{Pr}(Y>a) \), the indicator function \( I(Y_1 > a) \) is one if \( Y_1 > a \) and zero otherwise. It is an unbiased estimator of \( \psi \) since \( \mathbb{E}[I(Y_1 > a)] = \operatorname{Pr}(Y_1 > a) = \psi \).
07

Minimum Variance Unbiased Estimator (MVUE)

Since \( S \) is a complete sufficient statistic for \( \lambda \), the Rao-Blackwell and Lehmann-Scheffé theorems give the MVUE of \( \psi = e^{-\lambda a} \) as \( \mathbb{E}\{I(Y_1 > a) \mid S\} = \operatorname{Pr}(Y_1 > a \mid S) \). Given \( S = s \), the ratio \( Y_1/S \) follows a \( \text{Beta}(1, n-1) \) distribution, so \( \operatorname{Pr}(Y_1 > a \mid S = s) = (1 - a/s)^{n-1} \) for \( s > a \). Hence the MVUE is \( \hat{\psi} = (1 - a/S)^{n-1} \) when \( S > a \), and \( 0 \) otherwise. It does not attain the Cramér-Rao lower bound for \( \psi \): the bound is attained only by estimators that are linear functions of the score, and \( \hat{\psi} \) is not of that form.
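A Monte Carlo sketch (illustrative parameter values, not part of the textbook solution) shows the Rao-Blackwellised estimator keeping the unbiasedness of the crude indicator while shrinking its variance, yet still sitting above the CRLB \( (a\lambda e^{-\lambda a})^2 / n \):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, a, n, reps = 1.0, 1.0, 15, 200_000  # illustrative parameter choices
psi = np.exp(-lam * a)

Y = rng.exponential(scale=1 / lam, size=(reps, n))
S = Y.sum(axis=1)

naive = (Y[:, 0] > a).astype(float)                  # I(Y_1 > a)
mvue = np.where(S > a, (1 - a / S) ** (n - 1), 0.0)  # Rao-Blackwellised

crlb = (a * lam * psi) ** 2 / n  # (d psi / d lambda)^2 / {n I_1(lambda)}
print(np.mean(naive), np.mean(mvue), psi)  # all close to e^{-1}
print(naive.var(), mvue.var(), crlb)       # variance drops, stays above CRLB
```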


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Exponential Distribution
The exponential distribution is a continuous probability distribution commonly used to model the time between events in a Poisson process. It is characterized by a single parameter, \( \lambda \), which is the rate parameter, essentially describing the frequency at which the events occur. The probability density function (PDF) of an exponential distribution is given by:\[ f(y; \lambda) = \lambda e^{-\lambda y}, \quad y > 0 \]Here are some key points about the exponential distribution:
  • It is memoryless, meaning the probability of an event occurring in the next interval is independent of any prior intervals.
  • The mean or expected value of the exponential distribution is \( 1/\lambda \), and the variance is \( 1/\lambda^{2} \).
  • It is a special case of the gamma distribution, with the shape parameter equal to 1.
This distribution is especially useful for modeling waiting times and life testing scenarios.
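The mean, variance, and memoryless property listed above are easy to verify empirically; the rate below is an arbitrary illustrative choice (recall NumPy's sampler takes the scale \( 1/\lambda \)):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 0.5
y = rng.exponential(scale=1 / lam, size=1_000_000)

print(y.mean(), 1 / lam)    # sample mean ~ 1/lam = 2.0
print(y.var(), 1 / lam**2)  # sample variance ~ 1/lam^2 = 4.0

# Memorylessness: Pr(Y > s + t | Y > s) = Pr(Y > t)
s, t = 1.0, 2.0
lhs = np.mean(y[y > s] > s + t)
rhs = np.mean(y > t)
print(lhs, rhs)  # both close to exp(-lam * t)
```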
Fisher Information
The Fisher information describes the amount of information that an observable random variable carries about an unknown parameter upon which the likelihood function depends. For the exponential distribution, the Fisher information from a single observation is:\[ \mathcal{I}_1(\lambda) = \frac{1}{\lambda^2} \]This value indicates that as the rate \( \lambda \) increases, the information carried about the parameter by each observation decreases. For a sample of size \( n \), the total Fisher information becomes:\[ \mathcal{I}(\lambda) = \frac{n}{\lambda^2} \]Understanding Fisher information helps in assessing the efficiency of an estimator, since it sets the scale for how accurately the parameter can be estimated as more data become available.
Cramér-Rao Lower Bound
The Cramér-Rao Lower Bound (CRLB) provides a lower bound on the variance of unbiased estimators of a parameter. It is important because it sets a benchmark for how good an estimator can theoretically be. For an unbiased estimator of \( \lambda \), the CRLB is expressed as:\[ \text{CRLB} = \frac{1}{n \mathcal{I}_1(\lambda)} = \frac{\lambda^2}{n} \]This means that any unbiased estimator of \( \lambda \) has variance at least as large as this bound. In our problem, the estimator \( W = \frac{n-1}{S} \) has variance \( \lambda^2/(n-2) \), so it does not attain the CRLB; it is nevertheless asymptotically efficient, because the ratio of its variance to the bound, \( n/(n-2) \), tends to 1 as the sample size \( n \) grows. This behavior illustrates that while some estimators are not efficient in finite samples, they can still perform well as the dataset grows.
Unbiased Estimator
An unbiased estimator is a statistical estimate where the expected value of the estimate equals the actual parameter value being estimated. In simple terms, an unbiased estimator, on average, hits the true parameter value. For example, in this exercise, the estimator \( W = \frac{n-1}{S} \) is unbiased for the parameter \( \lambda \) of an exponential distribution because:\[ \mathbb{E}\left(W\right) = \lambda \]An unbiased estimator is crucial because it ensures no systematic deviation from the true parameter value. Finding such estimators often involves leveraging properties of probability distributions and known parameters, so the expected value of the estimator aligns perfectly with the parameter in question.


