Problem 12

Jeffreys' prior distributions: suppose \(y \mid \theta \sim \operatorname{Poisson}(\theta)\). Find Jeffreys' prior density for \(\theta\), and then find \(\alpha\) and \(\beta\) for which the \(\operatorname{Gamma}(\alpha, \beta)\) density is a close match to Jeffreys' density.

Short Answer

The Jeffreys' prior density for \(\theta\) in the Poisson model is \(p(\theta) \propto \theta^{-1/2}\), which matches a \(\operatorname{Gamma}(\alpha = 1/2, \beta = 0)\) density (an improper limiting case).

Step by step solution

01

Recall the Fisher Information

First, we need to find the Fisher information for the Poisson distribution. For a Poisson random variable, the likelihood is \[L(\theta) = e^{-\theta} \frac{\theta^y}{y!},\] giving us the log-likelihood \[\ell(\theta) = -\theta + y\log\theta - \log(y!).\] Taking the derivative of the log-likelihood with respect to \(\theta\), we find the score \[\frac{\partial}{\partial \theta}\ell(\theta) = -1 + \frac{y}{\theta},\] and the second derivative is \[\frac{\partial^2}{\partial \theta^2}\ell(\theta) = -\frac{y}{\theta^2}.\] Since \(\mathbb{E}[y] = \theta\), the expected value of the negative of this second derivative gives the Fisher information: \[I(\theta) = \mathbb{E}\left[\frac{y}{\theta^2}\right] = \frac{\theta}{\theta^2} = \frac{1}{\theta}.\]
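The identity \(I(\theta) = \mathbb{E}[y]/\theta^2 = 1/\theta\) is easy to check numerically. The sketch below (illustrative values chosen here, not part of the original solution) estimates the expectation by Monte Carlo and compares it to the closed form:

```python
import numpy as np

# Estimate the Fisher information E[-d^2 l / d theta^2] = E[y] / theta^2
# by simulation, using an arbitrary illustrative rate theta = 4.
rng = np.random.default_rng(0)
theta = 4.0
y = rng.poisson(theta, size=200_000)

# The second derivative of the log-likelihood is -y / theta^2,
# so its negated expectation is estimated by mean(y) / theta^2.
fisher_mc = np.mean(y) / theta**2

print(fisher_mc, 1 / theta)  # the two values should agree closely
```

With 200,000 draws the Monte Carlo estimate typically agrees with \(1/\theta = 0.25\) to two or three decimal places.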
02

Determine the Jeffreys' Prior

The Jeffreys' prior is proportional to the square root of the Fisher Information:\[p(\theta) \propto \sqrt{I(\theta)} = \sqrt{\frac{1}{\theta}} = \theta^{-1/2}.\]Thus, the Jeffreys' prior for \(\theta\) in the Poisson distribution is\[p(\theta) \propto \theta^{-1/2}.\]
03

Matching the Gamma Distribution

The Gamma distribution is given by \[p(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)} \theta^{\alpha - 1} e^{-\beta \theta}.\] To find values of \(\alpha\) and \(\beta\) that match the Jeffreys' prior \(p(\theta) \propto \theta^{-1/2}\), we need \(\alpha - 1 = -1/2\), hence \(\alpha = 1/2\). Since the Jeffreys' prior contains no exponential decay factor \(e^{-\beta\theta}\), we take \(\beta = 0\) (an improper limiting case; in practice one can use a very small positive \(\beta\)). Thus, the matching Gamma parameters are \(\alpha = 1/2\) and \(\beta = 0\).
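The "small \(\beta\)" reading can be verified directly: for tiny \(\beta\), the \(\operatorname{Gamma}(1/2, \beta)\) density is proportional to \(\theta^{-1/2} e^{-\beta\theta} \approx \theta^{-1/2}\), so the ratio of the pdf to \(\theta^{-1/2}\) is nearly constant. A quick check (with arbitrary evaluation points chosen here for illustration):

```python
import numpy as np
from scipy.stats import gamma

# For small beta, Gamma(alpha=1/2, rate=beta) is proportional to
# theta^{-1/2} e^{-beta*theta}, which is nearly theta^{-1/2}.
beta = 1e-6
thetas = np.array([0.5, 1.0, 2.0, 5.0])

# scipy parameterizes the Gamma by shape a and scale = 1 / rate.
pdf = gamma.pdf(thetas, a=0.5, scale=1.0 / beta)
ratio = pdf / thetas**-0.5

print(ratio / ratio[0])  # approximately all ones: the shapes match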


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Jeffreys' Prior
Jeffreys' Prior is a critical concept in Bayesian statistics. Named after the famous statistician Sir Harold Jeffreys, this prior is often used for objective Bayesian analysis. Its purpose is to be non-informative, which means it doesn't impose much subjective bias when estimating parameters based on observed data. This is particularly useful when you don't have much prior information about the parameter you are studying.

To find Jeffreys' Prior for a parameter, you start by calculating the Fisher Information. This measures how much information an observable random variable carries about an unknown parameter. In our example, we have a Poisson distribution where the parameter is \(\theta\).

The Fisher Information for a Poisson-distributed variable can be computed and it turns out to be \(I(\theta) = \frac{1}{\theta}\). Jeffreys' Prior is then proportional to the square root of this Fisher Information: \(\theta^{-1/2}\). This prior is especially suited for situations where you need a minimally informative prior to avoid skewing your results.
Poisson Distribution
The Poisson Distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space, provided these events happen with a known constant mean rate, and are independent of the time since the last event.

A Poisson-distributed random variable \(y\) can be described with the parameter \(\theta\), which represents the average number of events in the given interval. The probability mass function of the Poisson distribution is given by \[P(Y = y \mid \theta) = \frac{e^{-\theta} \theta^y}{y!}\]
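The pmf formula can be confirmed against a library implementation; the values \(\theta = 3\) and \(y = 2\) below are arbitrary illustrative choices:

```python
import math
from scipy.stats import poisson

# Evaluate P(Y = y | theta) = e^{-theta} theta^y / y! two ways.
theta, y = 3.0, 2

by_hand = math.exp(-theta) * theta**y / math.factorial(y)
by_scipy = poisson.pmf(y, mu=theta)

print(by_hand, by_scipy)  # both approximately 0.224
```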

This distribution is very useful in real-world scenarios such as modeling the number of emails received in an hour or the number of cars passing through a toll booth in a day. In Bayesian analysis, the Poisson distribution can be paired with conjugate priors like the Gamma distribution to simplify inference and calculations.
Gamma Distribution
The Gamma Distribution is continuous and often employed in Bayesian statistics as a conjugate prior for various distributions. It is specifically used as the prior for the Poisson distribution in modeling processes where the Poisson rate parameter \(\theta\) is unknown. This helps in computational simplification and tractability of the Bayesian solutions.

The Gamma distribution is defined by two parameters: \(\alpha\) and \(\beta\). These parameters are shape and rate parameters, respectively. The probability density function for a Gamma distribution is given by:
\[p(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)} \theta^{\alpha - 1} e^{-\beta \theta}\]

In the context of our exercise, the parameters \(\alpha\) and \(\beta\) are chosen to match Jeffreys' prior. We found \(\alpha = 1/2\); taking \(\beta \to 0\) removes the exponential decay term \(e^{-\beta\theta}\) and leaves the (improper) density \(p(\theta) \propto \theta^{-1/2}\), which is exactly Jeffreys' non-informative prior for the Poisson rate. This choice allows the data to dominate the inference rather than the prior.
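Because the Gamma is conjugate to the Poisson, the Jeffreys' prior written as \(\operatorname{Gamma}(1/2, 0)\) updates in closed form: given counts \(y_1, \dots, y_n\), the posterior is \(\operatorname{Gamma}(\alpha + \sum y_i,\; \beta + n)\). A minimal sketch with made-up counts for illustration:

```python
# Standard Gamma-Poisson conjugate update, starting from the
# Jeffreys' prior expressed as the improper Gamma(1/2, 0).
alpha0, beta0 = 0.5, 0.0
y = [3, 5, 4, 2, 6]  # made-up Poisson counts for illustration

# Posterior: Gamma(alpha0 + sum(y), beta0 + n)
alpha_post = alpha0 + sum(y)
beta_post = beta0 + len(y)

posterior_mean = alpha_post / beta_post
print(alpha_post, beta_post, posterior_mean)  # 20.5 5 4.1
```

Note that even though the prior is improper, the posterior is a proper Gamma distribution as soon as any data are observed.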


Most popular questions from this chapter

Exponential model with conjugate prior distribution: (a) Show that if \(y \mid \theta\) is exponentially distributed with rate \(\theta\), then the gamma prior distribution is conjugate for inferences about \(\theta\) given an iid sample of \(y\) values. (b) Show that the equivalent prior specification for the mean, \(\phi=1 / \theta\), is inverse-gamma. (That is, derive the latter density function.)

Predictive distributions: let \(y\) be the number of 6 's in 1000 independent rolls of a particular real die, which may be unfair. Let \(\theta\) be the probability that the die lands on '6.' Suppose your prior distribution for \(\theta\) is as follows: $$ \begin{aligned} \operatorname{Pr}(\theta=1 / 12) &=0.25 \\ \operatorname{Pr}(\theta=1 / 6) &=0.5 \\ \operatorname{Pr}(\theta=1 / 4) &=0.25 \end{aligned} $$ (a) Using the normal approximation for the conditional distributions, \(p(y \mid \theta)\), sketch your approximate prior predictive distribution for \(y\). (b) Give approximate \(5 \%, 25 \%, 50 \%, 75 \%\), and \(95 \%\) points for the distribution of \(y .\) (Be careful here: \(y\) does not have a normal distribution, but you can still use the normal distribution as part of your analysis.)

Posterior inference: suppose there is \(\operatorname{Beta}(4,4)\) prior distribution on the probability \(\theta\) that a coin will yield a 'head' when spun in a specified manner. The coin is independently spun ten times, and 'heads' appear fewer than 3 times. You are not told how many heads were seen, only that the number is less than 3. Calculate your exact posterior density (up to a proportionality constant) for \(\theta\) and sketch it.

Informative prior distribution: as a modern-day Laplace, you have more definite beliefs about the ratio of male to female births than reflected by his uniform prior distribution. In particular, if \(\theta\) represents the proportion of female births in a given population, you are willing to place a Beta(100, 100) prior distribution on \(\theta\). (a) Show that this means you are more than \(95 \%\) sure that \(\theta\) is between \(0.4\) and \(0.6\), although you are ambivalent as to whether it is greater or less than \(0.5\). (b) Now you observe that out of a random sample of 1,000 births, 511 are boys. What is your posterior probability that \(\theta>0.5 ?\) Compute using the exact beta distribution or the normal approximation.

Noninformative prior densities: (a) For the binomial likelihood, \(y \sim \operatorname{Bin}(n, \theta)\), show that \(p(\theta) \propto \theta^{-1}(1-\) \(\theta)^{-1}\) is the uniform prior distribution for the natural parameter of the exponential family. (b) Show that if \(y=0\) or \(n\), the resulting posterior distribution is improper.
