
In Example 12.5.6, we used a hierarchical model. In that model, the parameters \(\mu_1, \ldots, \mu_p\) were independent random variables, with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda_0 \tau_i\) conditional on \(\psi\) and \(\tau_1, \ldots, \tau_p\). To make the model more general, we could also replace \(\lambda_0\) with an unknown parameter \(\lambda\). That is, let the \(\mu_i\)'s be independent, with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda \tau_i\) conditional on \(\psi\), \(\lambda\), and \(\tau_1, \ldots, \tau_p\). Let \(\lambda\) have the gamma distribution with parameters \(\gamma_0\) and \(\delta_0\), and let \(\lambda\) be independent of \(\psi\) and \(\tau_1, \ldots, \tau_p\). The remaining parameters have the prior distributions stated in Example 12.5.6.

a. Write the product of the likelihood and the prior as a function of the parameters \(\mu_1, \ldots, \mu_p\), \(\tau_1, \ldots, \tau_p\), \(\psi\), and \(\lambda\).

b. Find the conditional distribution of each parameter given all of the others. Hint: For all the parameters besides \(\lambda\), the distributions should be almost identical to those given in Example 12.5.6. Wherever \(\lambda_0\) appears, of course, something will have to change.

c. Use a prior distribution in which \(\alpha_0 = 1\), \(\beta_0 = 0.1\), \(u_0 = 0.001\), \(\gamma_0 = \delta_0 = 1\), and \(\psi_0 = 170\). Fit the model to the hot dog calorie data from Example 11.6.2. Compute the posterior means of the four \(\mu_i\)'s and \(1/\tau_i\)'s.

Short Answer

  1. The product of the likelihood and the prior is the product of the two expressions given in Step 1; it involves the additional parameter \(\lambda\).
  2. The conditional distribution of \(\lambda\) given all other parameters is a gamma distribution.
  3. The estimated posterior means of \(\mu_i\), \(i = 1, 2, 3, 4\), are, respectively, 157, 158.6, 118.9, and 160.4.
  4. The estimated posterior means of \(1/\tau_i\), \(i = 1, 2, 3, 4\), are, respectively, 487, 598.9, 479.44, and 548.1.

Step by step solution

01

Normal distribution and gamma distribution

Recall what is stated in the mentioned example, together with the probability density functions of the normal and gamma distributions. The required product of the likelihood and the prior is then the product of

\(\exp \left\{ - \frac{u_0 \left( \psi - \psi_0 \right)^2}{2} - \sum\limits_{i = 1}^{p} \tau_i \left( \beta_0 + \frac{n_i \left( \mu_i - \bar{y}_i \right)^2 + w_i + \lambda \left( \mu_i - \psi \right)^2}{2} \right) \right\}\)

and the following expression

\(\lambda^{p/2 + \gamma_0 - 1} \exp \left( - \lambda \delta_0 \right) \prod\limits_{i = 1}^{p} \tau_i^{\alpha_0 + \left( n_i + 1 \right)/2 - 1},\)

where \(w_i\) is defined in the same way as in the example:

\(w_i = \sum\limits_{j = 1}^{n_i} \left( y_{ij} - \bar{y}_i \right)^2, \quad i = 1, 2, \ldots, p.\)

The second expression collects the remaining powers of \(\lambda\) and the \(\tau_i\)'s, including the factors contributed by the prior distributions.
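Written out in full, the answer to part (a) is therefore that the product of the likelihood and the prior is proportional to

\(\lambda^{p/2 + \gamma_0 - 1} e^{- \lambda \delta_0} \left( \prod\limits_{i = 1}^{p} \tau_i^{\alpha_0 + \left( n_i + 1 \right)/2 - 1} \right) \exp \left\{ - \frac{u_0 \left( \psi - \psi_0 \right)^2}{2} - \sum\limits_{i = 1}^{p} \tau_i \left( \beta_0 + \frac{n_i \left( \mu_i - \bar{y}_i \right)^2 + w_i + \lambda \left( \mu_i - \psi \right)^2}{2} \right) \right\}\)

as a function of \(\mu_1, \ldots, \mu_p\), \(\tau_1, \ldots, \tau_p\), \(\psi\), and \(\lambda\).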

02

Conditional distributions

As mentioned in the hint, the conditional distributions stay practically the same as in the example.

Viewing the product as a function of \(\tau_i\), one gets a gamma distribution with parameters

\(\alpha_0 + \frac{n_i + 1}{2}\) and \(\beta_0 + \frac{n_i \left( \mu_i - \bar{y}_i \right)^2 + w_i + \lambda \left( \mu_i - \psi \right)^2}{2}.\)

Next, viewing the product as a function of \(\psi\) and completing the square, one gets a normal distribution with mean and precision

\(\frac{u_0 \psi_0 + \lambda \sum\nolimits_{i = 1}^{p} \tau_i \mu_i}{u_0 + \lambda \sum\nolimits_{i = 1}^{p} \tau_i}\) and \(u_0 + \lambda \sum\limits_{i = 1}^{p} \tau_i.\)

Before turning to the distribution of \(\lambda\) given all the others, notice that, for the same reason, the product looks like the probability density function of a normal distribution when viewed as a function of \(\mu_i\). The mean and precision are

\(\frac{n_i \bar{y}_i + \lambda \psi}{n_i + \lambda}\) and \(\tau_i \left( n_i + \lambda \right).\)

Finally, if the product is viewed as a function of \(\lambda\), it is proportional to the probability density function of a gamma distribution with parameters

\(\frac{p}{2} + \gamma_0\) and \(\delta_0 + \frac{1}{2} \sum\limits_{i = 1}^{p} \tau_i \left( \mu_i - \psi \right)^2.\)
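These four conditionals can be combined directly into a Gibbs sampler. Below is a minimal sketch in Python (NumPy only), not the textbook's own code: the function name `gibbs_hierarchical`, the starting values, and the burn-in choice are illustrative assumptions; the default hyperparameters are those of part (c).

```python
import numpy as np

def gibbs_hierarchical(groups, alpha0=1.0, beta0=0.1, u0=0.001,
                       psi0=170.0, gamma0=1.0, delta0=1.0,
                       n_iter=10_000, burn_in=1_000, seed=0):
    """Gibbs sampler for the hierarchical model with unknown lambda.

    groups : list of 1-D NumPy arrays, one array of observations per group.
    Returns arrays of post-burn-in draws of the mu_i's and tau_i's.
    """
    rng = np.random.default_rng(seed)
    p = len(groups)
    n = np.array([len(g) for g in groups])
    ybar = np.array([g.mean() for g in groups])
    w = np.array([((g - g.mean()) ** 2).sum() for g in groups])

    # Illustrative starting values: sample means, unit precisions, prior mean.
    mu, tau, psi, lam = ybar.copy(), np.ones(p), psi0, 1.0
    mu_draws, tau_draws = [], []

    for t in range(n_iter):
        # mu_i | rest ~ Normal((n_i*ybar_i + lam*psi)/(n_i + lam),
        #                      precision tau_i*(n_i + lam))
        mean_mu = (n * ybar + lam * psi) / (n + lam)
        mu = rng.normal(mean_mu, 1.0 / np.sqrt(tau * (n + lam)))

        # tau_i | rest ~ Gamma(alpha0 + (n_i + 1)/2,
        #     rate = beta0 + [n_i*(mu_i - ybar_i)^2 + w_i + lam*(mu_i - psi)^2]/2)
        rate_tau = beta0 + (n * (mu - ybar) ** 2 + w + lam * (mu - psi) ** 2) / 2
        tau = rng.gamma(alpha0 + (n + 1) / 2, 1.0 / rate_tau)

        # psi | rest ~ Normal(mean, precision = u0 + lam*sum(tau_i))
        prec_psi = u0 + lam * tau.sum()
        psi = rng.normal((u0 * psi0 + lam * (tau * mu).sum()) / prec_psi,
                         1.0 / np.sqrt(prec_psi))

        # lambda | rest ~ Gamma(p/2 + gamma0,
        #                       rate = delta0 + sum(tau_i*(mu_i - psi)^2)/2)
        rate_lam = delta0 + 0.5 * (tau * (mu - psi) ** 2).sum()
        lam = rng.gamma(p / 2 + gamma0, 1.0 / rate_lam)

        if t >= burn_in:
            mu_draws.append(mu)
            tau_draws.append(tau)

    return np.array(mu_draws), np.array(tau_draws)
```

Each update draws from exactly the gamma and normal conditionals derived above, with \(\lambda_0\) replaced by the current draw of \(\lambda\).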

03

Estimated posterior means

The prior hyperparameters to be used are given in the exercise; the group sample sizes from the hot dog data are

\({n_1} = 20\) for beef,

\({n_2} = 17\) for meat,

\({n_3} = 17\) for poultry,

\({n_4} = 9\) for specialty.

The means \(\mu_i\), \(i = 1, 2, 3, 4\), correspond to the same group indices as the \(n_i\), \(i = 1, 2, 3, 4\).

The code used for this simulation uses N = 20,000 Markov chains with I = 100,000 iterations/steps, so there are a total of I = 100,000 parameter vectors from which the result is obtained. Note that the code from the example must be modified because the prior parameters differ from those used there. The estimated posterior means of \(\mu_i\), \(i = 1, 2, 3, 4\), are, respectively, 157, 158.6, 118.9, and 160.4. Similarly, the estimated posterior means of \(1/\tau_i\), \(i = 1, 2, 3, 4\), are, respectively, 487, 598.9, 479.44, and 548.1.
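For illustration, these posterior means could be computed from the sketch above roughly as follows. The arrays `beef`, `meat`, `poultry`, and `specialty` are placeholders that must hold the calorie counts from Example 11.6.2 (not reproduced here), and the burn-in length is an illustrative assumption.

```python
# Placeholder arrays: fill in the calorie counts from Example 11.6.2
# (n = 20, 17, 17, and 9 observations, respectively).
groups = [beef, meat, poultry, specialty]

mu_draws, tau_draws = gibbs_hierarchical(groups, n_iter=100_000, burn_in=10_000)

print("posterior means of mu_i:   ", mu_draws.mean(axis=0))
print("posterior means of 1/tau_i:", (1.0 / tau_draws).mean(axis=0))
```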



Most popular questions from this chapter

Test the standard normal pseudo-random number generator on your computer by generating a sample of size 10,000 and drawing a normal quantile plot. How straight does the plot appear to be?
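A possible sketch of this check in Python, using SciPy's `probplot` for the normal quantile plot (the seed and titles are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

sample = np.random.default_rng(1).standard_normal(10_000)  # 10,000 pseudo-random normals
stats.probplot(sample, dist="norm", plot=plt)              # normal quantile (Q-Q) plot
plt.title("Normal quantile plot, n = 10,000")
plt.show()  # a nearly straight line suggests the generator behaves as expected
```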

Let \(X_1, \ldots, X_{n+m}\) be a random sample from the exponential distribution with parameter \(\theta\). Suppose that \(\theta\) has the prior gamma distribution with known parameters \(\alpha\) and \(\beta\). Assume that we get to observe \(X_1, \ldots, X_n\), but \(X_{n+1}, \ldots, X_{n+m}\) are censored.

  1. First, suppose that the censoring works as follows: For \(i = 1, \ldots, m\), if \(X_{n+i} \le c\), then we learn only that \(X_{n+i} \le c\), but not the precise value of \(X_{n+i}\). Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of \(\theta\) in spite of the censoring.
  2. Next, suppose that the censoring works as follows: For \(i = 1, \ldots, m\), if \(X_{n+i} \ge c\), then we learn only that \(X_{n+i} \ge c\), but not the precise value of \(X_{n+i}\). Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of \(\theta\) in spite of the censoring. (One possible sampler is sketched after this list.)
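One way such a sampler could look, as a sketch in Python: the censored values are treated as latent variables and imputed from their truncated exponential conditionals, after which \(\theta\) has a gamma conditional by conjugacy. The function name, defaults, and starting value are illustrative assumptions, not the textbook's own code.

```python
import numpy as np

def gibbs_censored_exp(x_obs, m, c, alpha, beta, right_censored=False,
                       n_iter=5_000, seed=0):
    """Gibbs sampler for Exponential(theta) data with m censored values.

    right_censored=False: we only know the censored X's are <= c (part 1).
    right_censored=True : we only know the censored X's are >= c (part 2).
    """
    rng = np.random.default_rng(seed)
    n = len(x_obs)
    theta = alpha / beta            # illustrative starting value: prior mean
    theta_draws = np.empty(n_iter)
    for t in range(n_iter):
        # Impute the censored observations from their truncated exponential
        # conditional distribution given theta.
        if right_censored:
            # X | X >= c is c + Exponential(theta) by memorylessness.
            x_cens = c + rng.exponential(1.0 / theta, m)
        else:
            # Inverse-c.d.f. draw from Exponential(theta) truncated to (0, c].
            u = rng.uniform(size=m)
            x_cens = -np.log(1.0 - u * (1.0 - np.exp(-theta * c))) / theta
        # theta | all data ~ Gamma(alpha + n + m, rate = beta + sum of all X's)
        theta = rng.gamma(alpha + n + m,
                          1.0 / (beta + x_obs.sum() + x_cens.sum()))
        theta_draws[t] = theta
    return theta_draws
```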

Use the blood pressure data in Table 9.2 that was described in Exercise 10 of Sec. 9.6. Suppose now that we are not confident that the variances are the same for the two treatment groups. Perform a parametric bootstrap analysis of the sort done in Example 12.6.10. Use v=10,000 bootstrap simulations.

a. Estimate the probability of type I error for a two-sample t-test whose nominal level is \({\alpha _0} = 0.1.\)

b. Correct the level of the two-sample t-test by computing the appropriate quantile of the bootstrap distribution of \(\left| {{U^{(i)}}} \right|.\)

c. Compute the standard simulation error for the quantile in part (b).
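A sketch of how parts (a) and (b) could be carried out in Python, under the assumption (as in a parametric bootstrap of the Example 12.6.10 type) that bootstrap samples are drawn under the null hypothesis of equal means with the two separately estimated variances. The arrays `x` and `y` must hold the Table 9.2 blood pressure data, which are not reproduced here; names and defaults are illustrative.

```python
import numpy as np
from scipy import stats

def bootstrap_t_level(x, y, v=10_000, alpha0=0.1, seed=0):
    """Parametric bootstrap assessment of the two-sample t-test.

    Returns the estimated type I error of the nominal level-alpha0 test
    (part a) and the corrected critical value, i.e. the 1 - alpha0 quantile
    of the bootstrap distribution of |U| (part b).
    """
    rng = np.random.default_rng(seed)
    m, n = len(x), len(y)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    u = np.empty(v)
    for i in range(v):
        # Simulate under H0 (equal means) with the separately estimated variances.
        xb = rng.normal(0.0, sx, m)
        yb = rng.normal(0.0, sy, n)
        # Usual two-sample t statistic with the pooled variance estimate.
        sp2 = ((m - 1) * xb.var(ddof=1) + (n - 1) * yb.var(ddof=1)) / (m + n - 2)
        u[i] = (xb.mean() - yb.mean()) / np.sqrt(sp2 * (1.0 / m + 1.0 / n))
    t_crit = stats.t.ppf(1.0 - alpha0 / 2, m + n - 2)       # nominal critical value
    type1_error = np.mean(np.abs(u) >= t_crit)              # part (a)
    corrected_crit = np.quantile(np.abs(u), 1.0 - alpha0)   # part (b)
    return type1_error, corrected_crit
```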

The method of antithetic variates is a technique for reducing the variance of simulation estimators. Antithetic variates are negatively correlated random variables with the same mean and variance. The variance of the average of two antithetic variates is smaller than the variance of the average of two i.i.d. variables. In this exercise, we shall see how to use antithetic variates for importance sampling, but the method is very general. Suppose that we wish to compute \(\smallint g\left( x \right) dx\), and we wish to use the importance function f. Suppose that we generate pseudo-random variables with the p.d.f. f using the probability integral transformation. For \(i = 1, 2, \ldots, \nu\), let \(X^{(i)} = F^{-1}\left( U^{(i)} \right)\), where \(U^{(i)}\) has the uniform distribution on the interval (0, 1) and F is the c.d.f. corresponding to the p.d.f. f. For each \(i = 1, 2, \ldots, \nu\), define

\(\begin{aligned} T^{(i)} &= F^{-1}\left( 1 - U^{(i)} \right), \\ W^{(i)} &= \frac{g\left( X^{(i)} \right)}{f\left( X^{(i)} \right)}, \\ V^{(i)} &= \frac{g\left( T^{(i)} \right)}{f\left( T^{(i)} \right)}, \\ Y^{(i)} &= 0.5\left( W^{(i)} + V^{(i)} \right). \end{aligned}\)

Our estimator of \(\smallint g\left( x \right) dx\) is then \(Z = \frac{1}{\nu} \sum\nolimits_{i = 1}^{\nu} Y^{(i)}.\)

a. Prove that \(T^{(i)}\) has the same distribution as \(X^{(i)}\).

b. Prove that \(E\left( Z \right) = \smallint g\left( x \right) dx\).

c. If \(g\left( x \right)/f\left( x \right)\) is a monotone function, explain why we expect \(W^{(i)}\) and \(V^{(i)}\) to be negatively correlated.

d. If \(W^{(i)}\) and \(V^{(i)}\) are negatively correlated, show that Var(Z) is less than the variance one would get with \(2\nu\) simulations without antithetic variates.
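As a concrete illustration of the estimator Z, here is a minimal sketch with an arbitrary choice of \(g(x) = e^x\) on (0, 1) and f the uniform p.d.f., so that \(F^{-1}(u) = u\) and the true integral is \(e - 1\); the seed and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 10_000
u = rng.uniform(size=nu)
x = u                      # X^(i) = F^{-1}(U^(i))
t = 1.0 - u                # T^(i) = F^{-1}(1 - U^(i)), the antithetic draw
w = np.exp(x)              # W^(i) = g(X^(i)) / f(X^(i))
v = np.exp(t)              # V^(i) = g(T^(i)) / f(T^(i)); g/f = exp is monotone,
                           # so W and V are negatively correlated (part c)
y = 0.5 * (w + v)          # Y^(i)
z = y.mean()               # estimator Z of the integral (true value e - 1)
print(z, y.var(ddof=1) / nu)   # estimate and its simulation variance
```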

The \({\chi ^2}\) goodness-of-fit test (see Chapter 10) is based on an asymptotic approximation to the distribution of the test statistic. For small to medium samples, the asymptotic approximation might not be very good. Simulation can be used to assess how good the approximation is. Simulation can also be used to estimate the power function of a goodness-of-fit test. For this exercise, assume that we are performing the test that was done in Example 10.1.6. The idea illustrated in this exercise applies to all such problems.

a. Simulate \(v = 10,000\) samples of size \(n = 23\) from the normal distribution with a mean of 3.912 and variance of 0.25. For each sample, compute the \({\chi ^2}\) goodness-of-fit statistic Q using the same four intervals that were used in Example 10.1.6. Use the simulations to estimate the probability that Q is greater than or equal to the 0.9, 0.95, and 0.99 quantiles of the \({\chi ^2}\) distribution with three degrees of freedom.

b. Suppose that we are interested in the power function of a \({\chi ^2}\) goodness-of-fit test when the actual distribution of the data is the normal distribution with a mean of 4.2 and variance of 0.8. Use simulation to estimate the power function of the level 0.1, 0.05, and 0.01 tests at the alternative specified.
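A sketch of how part (a) could be simulated in Python, under the assumption that the four intervals of Example 10.1.6 have null probabilities computed from the hypothesized normal distribution; the three interior cut points `bounds` must be taken from that example and are not reproduced here.

```python
import numpy as np
from scipy import stats

def estimate_q_exceedance(bounds, v=10_000, n=23, mean=3.912, sd=0.5, seed=0):
    """Estimate P(Q >= chi-square quantiles) by simulation.

    bounds : the three interior cut points of the four intervals used in
             Example 10.1.6 (must be supplied from the textbook).
    """
    rng = np.random.default_rng(seed)
    # Null-hypothesis probabilities of the four intervals.
    cdf = stats.norm.cdf(np.asarray(bounds), loc=mean, scale=sd)
    p0 = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    q = np.empty(v)
    for i in range(v):
        sample = rng.normal(mean, sd, n)
        counts = np.bincount(np.searchsorted(bounds, sample), minlength=4)
        q[i] = ((counts - n * p0) ** 2 / (n * p0)).sum()
    # Compare with the 0.9, 0.95, and 0.99 quantiles of chi-square(3).
    return {prob: np.mean(q >= stats.chi2.ppf(prob, df=3))
            for prob in (0.9, 0.95, 0.99)}
```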
