Q15E

Let X1, …, Xn+m be a random sample from the exponential distribution with parameter θ. Suppose that θ has the prior gamma distribution with known parameters α and β. Assume that we get to observe X1, …, Xn, but Xn+1, …, Xn+m are censored.

  1. First, suppose that the censoring works as follows: for i = 1, …, m, if Xn+i ≤ c, then we learn only that Xn+i ≤ c, but not the precise value of Xn+i. Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of θ in spite of the censoring.
  2. Next, suppose that the censoring works as follows: for i = 1, …, m, if Xn+i ≥ c, then we learn only that Xn+i ≥ c, but not the precise value of Xn+i. Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of θ in spite of the censoring.

Short Answer

Expert verified

The Gibbs Sampling Algorithm: the algorithm alternates two steps:

\(\left( a \right)\) Simulate each censored \({X_{n + i}}\) from the exponential distribution truncated to \(\left( {0,c} \right]\) (part 1) or \(\left[ {c,\infty } \right)\) (part 2), given the current \(\theta \);

\(\left( b \right)\) Simulate \(\theta \) from the gamma distribution with parameters \(\alpha + n + m\) and \(\beta + \sum\nolimits_{i = 1}^{n + m} {{x_i}} \), given all \(n + m\) values.

Step by step solution

01

Gibbs sampling

Gibbs sampling is a Markov chain Monte Carlo method for simulating from complex joint distributions: it iteratively draws each variable from its conditional distribution given the current values of the other variables.

The Gibbs Sampling Algorithm: The steps of the algorithm are

\(\left( 1 \right)\) Pick a starting value \({x_2}^{\left( 0 \right)}\) for \({x_2}\), and let \(i = 0\).

\(\left( 2 \right)\) Let \({x_1}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({X_1}\) given that \({X_2} = {x_2}^{\left( i \right)}\).

\(\left( 3 \right)\) Let \({x_2}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({X_2}\) given that \({X_1} = {x_1}^{\left( {i + 1} \right)}\).

\(\left( 4 \right)\) Repeat steps \(\left( 2 \right)\) and \(\left( 3 \right)\), replacing \(i\) by \(i + 1\).
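As a minimal illustration of these four steps (not part of the textbook solution), consider a standard bivariate normal with correlation \(\rho \), whose two conditionals are normal; the function name and default values below are assumptions for the sketch:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Run the four Gibbs steps on a standard bivariate normal with
    correlation rho, where X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2)
    and symmetrically for X2 | X1."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    x2 = 0.0                          # step (1): starting value x2^(0)
    draws = []
    for _ in range(n_iter):
        x1 = rng.gauss(rho * x2, sd)  # step (2): x1^(i+1) ~ X1 | X2 = x2^(i)
        x2 = rng.gauss(rho * x1, sd)  # step (3): x2^(i+1) ~ X2 | X1 = x1^(i+1)
        draws.append((x1, x2))        # step (4): repeat with i replaced by i+1
    return draws

draws = gibbs_bivariate_normal(0.8)
```

After a burn-in, the pairs behave like draws from the joint distribution, so their sample correlation should be near \(\rho \).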

There is a total of \(n + m\) random variables from the exponential distribution with parameter \(\theta \), of which the first \(n\) are observed. For the censored \(m\), we learn only that \({X_{n + i}} \le c,\;i = 1,2,...,m\). It is true that

\(\Pr \left( {{X_{n + i}} \le c} \right) = 1 - {e^{ - c\theta }},\)

because the \({X_{n + i}}\) follow the exponential distribution with parameter \(\theta \).

Let \({x_1},{x_2},...,{x_n}\) be i.i.d. random variables from an exponential distribution with an unknown parameter \(\theta > 0\). If the prior distribution of \(\theta \) is the gamma distribution with parameters \(\alpha ,\beta > 0\), then the posterior distribution of \(\theta \) given \({X_1} = {x_1},{X_2} = {x_2},...,{X_n} = {x_n}\) is the gamma distribution where

\({\alpha _1} = \alpha + n\)

And

\({\beta _1} = \beta + \sum\limits_{i = 1}^n {{x_i}.} \)
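This conjugate update is a one-line computation; a sketch (the function name is illustrative, not from the text):

```python
def posterior_params(x, alpha, beta):
    """Gamma-exponential conjugate update: with an Exponential(theta) sample
    x[0..n-1] and a Gamma(alpha, beta) prior on theta, the posterior of
    theta is Gamma(alpha + n, beta + sum(x))."""
    return alpha + len(x), beta + sum(x)

# toy data, purely for illustration
a1, b1 = posterior_params([0.5, 1.2, 0.3], alpha=2.0, beta=1.0)
```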

Next, the posterior distribution is proportional to the likelihood function times the prior distribution, which is

\({\xi _{nm}}\left( \theta \right) \propto {\theta ^{n + \alpha - 1}}{\left( {1 - {e^{ - c\theta }}} \right)^m}\,{e^{ - \theta \left( {\beta + \sum\nolimits_{i = 1}^n {{x_i}} } \right)}}\)

For \(0 < x < c\), the conditional distribution of \({X_{n + i}}\) given \({X_{n + i}} \le c\) is obtained as the following quotient:

\({f_1}\,\left( {x\mid \theta ,{X_{n + i}} \le c} \right) = \frac{{\theta {e^{ - \theta x}}}}{{1 - {e^{ - \theta c}}}},\quad 0 < x < c.\)
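Sampling from this truncated density is straightforward by inverting its CDF, \(F\left( x \right) = \frac{{1 - {e^{ - \theta x}}}}{{1 - {e^{ - \theta c}}}}\). A sketch (the function name is an assumption, not from the text):

```python
import math
import random

def sample_exp_given_below_c(theta, c, rng):
    """Inverse-CDF draw from f1(x | theta, X <= c) on (0, c):
    solve u = (1 - exp(-theta*x)) / (1 - exp(-theta*c)) for x."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-theta * c))) / theta

rng = random.Random(1)
xs = [sample_exp_given_below_c(2.0, 1.5, rng) for _ in range(10000)]
```

Every draw lands in \(\left( {0,c} \right)\), and the sample mean should approach \(\frac{1}{\theta } - \frac{{c\,{e^{ - \theta c}}}}{{1 - {e^{ - \theta c}}}}\), the mean of the truncated exponential.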

Once values are imputed for the censored observations, the product of the full likelihood function times the prior distribution, for \(\theta > 0\), becomes

\({\theta ^{n + m + \alpha - 1}}\,{e^{ - \theta \left( {\beta + \sum\nolimits_{i = 1}^{n + m} {{x_i}} } \right)}}\)

As a function of \(\theta \), this is proportional to the p.d.f. of the gamma distribution with parameters

\(\begin{aligned}{l}{\alpha _g} = n + m + \alpha \\{\beta _g} = \beta + \sum\nolimits_{i = 1}^{n + m} {{x_i}} \end{aligned}\)

so the conditional distribution of \(\theta \) given all \(n + m\) values is this gamma distribution.
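Putting the two conditionals together gives the part-(1) Gibbs sampler. The sketch below assumes all \(m\) censored observations satisfy \({X_{n + i}} \le c\); function names, the starting value, and the toy data are illustrative assumptions:

```python
import math
import random

def gibbs_part_a(x_obs, m, c, alpha, beta, n_iter=2000, seed=42):
    """Gibbs sampler for part (1): censored values known only to satisfy X <= c.

    Alternates (i) imputing each censored value from the exponential
    distribution truncated to (0, c] and (ii) drawing theta from its
    conditional Gamma(alpha + n + m, beta + sum of all n + m values)."""
    rng = random.Random(seed)
    n = len(x_obs)
    s_obs = sum(x_obs)
    # starting value for theta: posterior mean based on the observed values alone
    theta = (alpha + n) / (beta + s_obs)
    thetas = []
    for _ in range(n_iter):
        # (i) impute the m censored values by inverse-CDF on the truncated exponential
        s_cens = 0.0
        for _ in range(m):
            u = rng.random()
            s_cens += -math.log(1.0 - u * (1.0 - math.exp(-theta * c))) / theta
        # (ii) draw theta ~ Gamma(shape, rate); gammavariate takes a scale = 1/rate
        theta = rng.gammavariate(alpha + n + m, 1.0 / (beta + s_obs + s_cens))
        thetas.append(theta)
    return thetas

# illustrative run with made-up data (values are assumptions, not from the text)
thetas = gibbs_part_a([1.0] * 50, m=20, c=0.5, alpha=2.0, beta=1.0)
```

Discarding an initial burn-in and averaging the remaining draws estimates the posterior mean of \(\theta \).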

02

Random variable

A random variable is a function that assigns a number to each outcome of an experiment.

There is a total of \(n + m\) random variables from the exponential distribution with parameter \(\theta \), of which the first \(n\) are observed. For the censored \(m\), we learn only that \({X_{n + i}} \ge c,\;i = 1,2,...,m\). It is true that

\(\Pr \left( {{X_{n + i}} \ge c} \right) = {e^{ - c\theta }},\)

because the \({X_{n + i}}\) follow the exponential distribution with parameter \(\theta ,\;i = 1,2,...,m\).

Next, the posterior distribution is proportional to the likelihood function times the prior distribution, which is

\({\xi _{nm}}\left( \theta \right) \propto {\theta ^{n + \alpha - 1}}{\left( {{e^{ - c\theta }}} \right)^m}\,{e^{ - \theta \left( {\beta + \sum\nolimits_{i = 1}^n {{x_i}} } \right)}} = {\theta ^{n + \alpha - 1}}\,{e^{ - \theta \left( {\beta + mc + \sum\nolimits_{i = 1}^n {{x_i}} } \right)}}\)

For \(x \ge c\), the conditional distribution of \({X_{n + i}}\) given \({X_{n + i}} \ge c\) is obtained as the following quotient:

\({f_1}\,\left( {x\mid \theta ,{X_{n + i}} \ge c} \right) = \frac{{\theta {e^{ - \theta x}}}}{{{e^{ - \theta c}}}} = \theta {e^{ - \theta \left( {x - c} \right)}},\quad x \ge c.\)

Once values are imputed for the censored observations, the product of the full likelihood times the prior is again

\({\xi _{nm}}\left( \theta \right) \propto {\theta ^{n + m + \alpha - 1}}\,{e^{ - \theta \left( {\beta + \sum\nolimits_{i = 1}^{n + m} {{x_i}} } \right)}}\)

This is the p.d.f. of the gamma distribution with parameters

\(\begin{aligned}{l}{\alpha _g} = n + m + \alpha \\{\beta _g} = \beta + \sum\limits_{i = 1}^{n + m} {{x_i}} ,\end{aligned}\)

as a function of \(\theta \), so the conditional distribution of \(\theta \) given all \(n + m\) values is this gamma distribution.

The Gibbs sampling algorithm starts by picking a starting value for \(\theta \), e.g., the M.L.E., and then simulates the censored observations from the p.d.f. \({f_1}\). The quantile function, for \(0 < q < 1\), is obtained by solving

\(q = 1 - {e^{ - \theta \left( {x - c} \right)}}\)

which yields

\({F^{ - 1}}\left( q \right) = c - \frac{{\log \left( {1 - q} \right)}}{\theta }\)

Use the quantile function to simulate observations from \({f_1}\) as

\(x = c - \frac{{\log \left( {1 - U} \right)}}{\theta }\)

where \(U\) is a random variable with the uniform distribution on the interval \(\left( {0,1} \right)\).
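In code, this inverse-CDF draw for part (2) is a one-liner (the function name is an assumption for the sketch):

```python
import math
import random

def sample_exp_given_above_c(theta, c, rng):
    """Draw from f1(x | theta, X >= c) = theta * exp(-theta * (x - c)), x >= c,
    using the quantile function x = c - log(1 - u) / theta."""
    u = rng.random()
    return c - math.log(1.0 - u) / theta

rng = random.Random(7)
xs = [sample_exp_given_above_c(2.0, 1.0, rng) for _ in range(10000)]
```

By the memoryless property, the excess \(X - c\) is again exponential with parameter \(\theta \), so the sample mean should approach \(c + \frac{1}{\theta }\).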

Finally, simulate a new value of \(\theta \) from the gamma distribution with the parameters \({\alpha _g}\) and \({\beta _g}\) given above, and repeat.

For part (2) there is an easier way to simulate \(\theta \) without the Gibbs sampling algorithm: each censored observation contributes the factor \(\Pr \left( {{X_{n + i}} \ge c} \right) = {e^{ - c\theta }}\) to the likelihood, so the posterior of \(\theta \) is the gamma distribution with parameters \(\alpha + n\) and \(\beta + mc + \sum\nolimits_{i = 1}^n {{x_i}} \), from which one can draw directly.
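A sketch of this direct route (no Gibbs loop is needed for part (2); the function name and toy data are illustrative assumptions):

```python
import random

def sample_theta_part_b_direct(x_obs, m, c, alpha, beta, n_draws=2000, seed=3):
    """Each censored observation contributes Pr(X >= c) = exp(-c * theta) to
    the likelihood, so theta | data ~ Gamma(alpha + n, beta + m*c + sum(x_obs)).
    Draw from that gamma directly (gammavariate takes a scale, i.e. 1/rate)."""
    rng = random.Random(seed)
    shape = alpha + len(x_obs)
    rate = beta + m * c + sum(x_obs)
    return [rng.gammavariate(shape, 1.0 / rate) for _ in range(n_draws)]

# illustrative run with made-up data (values are assumptions, not from the text)
thetas_b = sample_theta_part_b_direct([1.0] * 50, m=20, c=1.0, alpha=2.0, beta=1.0)
```

These are i.i.d. posterior draws, so their average should be close to the exact posterior mean \(\frac{{\alpha + n}}{{\beta + mc + \sum {{x_i}} }}\).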

Hence, in both parts the Gibbs sampler alternates between (a) imputing each censored \({X_{n + i}}\) from the appropriate truncated exponential distribution and (b) drawing \(\theta \) from the gamma distribution with parameters \({\alpha _g} = \alpha + n + m\) and \({\beta _g} = \beta + \sum\nolimits_{i = 1}^{n + m} {{x_i}} \).
