Q2E

Let \(f\left( {{x_1},{x_2}} \right)\) be a joint p.d.f. Suppose that \(\left( {{x_1}^{\left( i \right)},{x_2}^{\left( i \right)}} \right)\) has the joint p.d.f. \(f\). Let \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( {i + 1} \right)}} \right)\) be the result of applying steps 2 and 3 of the Gibbs sampling algorithm on page 824. Prove that \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( i \right)}} \right)\) and \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( {i + 1} \right)}} \right)\) also have the joint p.d.f. \(f\).

Short Answer


The Gibbs sampling algorithm:

\(\left( {1.} \right)\) Pick a starting value \({x_2}^{\left( 0 \right)}\) for \({x_2}\), and let \(i = 0\).

\(\left( {2.} \right)\) Let \({x_1}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({x_1}\) given that \({X_2} = {x_2}^{\left( i \right)}\).

\(\left( {3.} \right)\) Let \({x_2}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({x_2}\) given that \({X_1} = {x_1}^{\left( {i + 1} \right)}\).

Use the Gibbs Sampling Algorithm

Step by step solution

01

Definition of Gibbs Sampling Algorithm

Gibbs sampling is a Markov chain Monte Carlo method for sampling from a complex joint distribution: it iteratively draws each variable from its conditional distribution given the current values of the other variables.

The Gibbs Sampling Algorithm:

The steps of the algorithm are:

\(\left( {1.} \right)\) Pick a starting value \({x_2}^{\left( 0 \right)}\) for \({x_2}\), and let \(i = 0\).

\(\left( {2.} \right)\) Let \({x_1}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({x_1}\) given that \({X_2} = {x_2}^{\left( i \right)}\).

\(\left( {3.} \right)\) Let \({x_2}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({x_2}\) given that \({X_1} = {x_1}^{\left( {i + 1} \right)}\).

\(\left( {4.} \right)\) Replace \(i\) with \(i + 1\) and repeat steps 2 and 3.
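These steps can be sketched in code. The example below is a hypothetical illustration, not from the textbook: it assumes a standard bivariate normal target with correlation \(\rho\), for which both conditional distributions are known normals, \({X_1}\mid {X_2} = {x_2} \sim N\left( {\rho {x_2},1 - {\rho ^2}} \right)\) and symmetrically for \({X_2}\mid {X_1}\).

```python
import random

def gibbs_bivariate_normal(rho, n_iter, x2_start=0.0, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Hypothetical example: both conditionals are normal, so steps 2 and 3
    reduce to single normal draws."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5       # conditional standard deviation
    x2 = x2_start                       # step 1: starting value for x2
    samples = []
    for _ in range(n_iter):
        x1 = rng.gauss(rho * x2, sd)    # step 2: draw X1 | X2 = x2
        x2 = rng.gauss(rho * x1, sd)    # step 3: draw X2 | X1 = x1
        samples.append((x1, x2))        # step 4: repeat with i -> i+1
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
burned = draws[2000:]                   # discard a burn-in period
mean_x1 = sum(x1 for x1, _ in burned) / len(burned)
cross = sum(x1 * x2 for x1, x2 in burned) / len(burned)
# mean_x1 should be near 0 and cross near rho = 0.8, up to Monte Carlo error
```

After burn-in, the sample moments approximate those of the target bivariate normal, which is the stationarity property the exercise asks us to prove.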

Let \(f\left( {{x_1},{x_2}} \right)\) be the joint p.d.f. of \(\left( {{x_1}^{\left( i \right)},{x_2}^{\left( i \right)}} \right)\). The conditional p.d.f. of \({x_1}\) given that \({X_2} = {x_2}\), denoted \({g_1}\), is

\({g_1}\left( {{x_1}\mid {x_2}} \right) = \frac{{f\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}}\)

02

Marginal probability density function

In the case of a pair of random variables (X, Y), the density function of random variable X (or Y) considered alone is known as the marginal density function.

Here \({f_2}\) is the marginal probability density function of \({x_2}^{\left( i \right)}\).

In step 2, \({x_1}^{\left( {i + 1} \right)}\) is a simulated value from the conditional distribution of \({x_1}\) given that \({X_2} = {x_2}^{\left( i \right)}\), so the joint p.d.f. of \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( i \right)}} \right)\) is the product of \({g_1}\) and \({f_2}\):

\({g_1}\left( {{x_1}\mid {x_2}} \right)\,{f_2}\left( {{x_2}} \right) = f\left( {{x_1},{x_2}} \right).\)

Next, since \(f\) is also the joint p.d.f. of \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( i \right)}} \right)\), the marginal distribution of \({x_1}^{\left( {i + 1} \right)}\) must be the same as the marginal distribution \({f_1}\) of \({x_1}^{\left( i \right)}\), because integrating \(f\) over all \({x_2}\) gives the same function.

Similarly, in step 3 of the algorithm, \({x_2}^{\left( {i + 1} \right)}\) is a simulated value from the conditional distribution of \({x_2}\) given that \({X_1} = {x_1}^{\left( {i + 1} \right)}\). By the same argument, the joint p.d.f. of \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( {i + 1} \right)}} \right)\) is

\({g_2}\left( {{x_2}\mid {x_1}} \right)\,{f_1}\left( {{x_1}} \right) = f\left( {{x_1},{x_2}} \right),\)

so \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( {i + 1} \right)}} \right)\) has the same joint distribution as \(\left( {{x_1}^{\left( i \right)},{x_2}^{\left( i \right)}} \right)\).

Hence, both \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( i \right)}} \right)\) and \(\left( {{x_1}^{\left( {i + 1} \right)},{x_2}^{\left( {i + 1} \right)}} \right)\) have the joint p.d.f. \(f\).
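The preservation property proved above can also be checked empirically. The sketch below is a hypothetical bivariate normal example (not from the textbook): it draws many pairs exactly from \(f\), applies one full Gibbs update (steps 2 and 3) to each, and compares sample moments before and after; both sets should agree up to Monte Carlo error.

```python
import random

rho = 0.6
rng = random.Random(1)
sd = (1.0 - rho * rho) ** 0.5           # conditional standard deviation

# Draw (x1^(i), x2^(i)) exactly from the bivariate normal f
# via the factorization f(x1, x2) = f1(x1) g2(x2 | x1).
pairs = []
for _ in range(50000):
    x1 = rng.gauss(0.0, 1.0)
    x2 = rng.gauss(rho * x1, sd)
    pairs.append((x1, x2))

# Apply one full Gibbs update (steps 2 and 3) to every pair.
updated = []
for _, x2 in pairs:
    x1_new = rng.gauss(rho * x2, sd)    # step 2: X1 | X2 = x2
    x2_new = rng.gauss(rho * x1_new, sd)  # step 3: X2 | X1 = x1_new
    updated.append((x1_new, x2_new))

def moments(ps):
    """Sample means of x1 and x2, and sample mean of x1*x2."""
    n = len(ps)
    m1 = sum(a for a, _ in ps) / n
    m2 = sum(b for _, b in ps) / n
    c = sum(a * b for a, b in ps) / n
    return m1, m2, c

before = moments(pairs)
after = moments(updated)
# 'before' and 'after' should agree: the update preserves f.
```

Agreement of the moments is consistent with the proof: one Gibbs cycle maps the distribution \(f\) to itself.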


Most popular questions from this chapter

Use the data in Table 11.19 on page 762. This time, fit the model developed in Example 12.5.6. Use the prior hyperparameters \({\lambda _0} = {\alpha _0} = 1\), \({\beta _0} = 0.1\), \({\mu _0} = 0.001\), and \({\psi _0} = 800\). Obtain a sample of 10,000 from the posterior joint distribution of the parameters. Estimate the posterior means of the three parameters \({\mu _1},{\mu _2},{\mu _3}\).

In Sec. 10.2, we discussed \({\chi ^2}\) goodness-of-fit tests for composite hypotheses. These tests required computing M.L.E.'s based on the numbers of observations that fell into the different intervals used for the test. Suppose instead that we use the M.L.E.'s based on the original observations. In this case, we claimed that the asymptotic distribution of the \({\chi ^2}\) test statistic was somewhere between two different \({\chi ^2}\) distributions. We can use simulation to better approximate the distribution of the test statistic. In this exercise, assume that we are trying to test the same hypotheses as in Example 10.2.5, although the methods will apply in all such cases.

a. Simulate \(v = 1000\) samples of size \(n = 23\) from each of 10 different normal distributions. Let the normal distributions have means of \(3.8, 3.9, 4.0, 4.1,\) and \(4.2\). Let the distributions have variances of 0.25 and 0.8. Use all 10 combinations of mean and variance. For each simulated sample, compute the \({\chi ^2}\) statistic Q using the usual M.L.E.'s of \(\mu \) and \({\sigma ^2}\). For each of the 10 normal distributions, estimate the 0.9, 0.95, and 0.99 quantiles of the distribution of Q.

b. Do the quantiles change much as the distribution of the data changes?

c. Consider the test that rejects the null hypothesis if \(Q \ge 5.2.\) Use simulation to estimate the power function of this test at the following alternative: For each \(i,\left( {{X_i} - 3.912} \right)/0.5\) has the t distribution with five degrees of freedom.

Suppose that \({X_1},...,{X_n}\) form a random sample from an exponential distribution with parameter \(\theta \). Explain how to use the parametric bootstrap to estimate the variance of the sample average \(\overline X \). (No simulation is required.)

Test the gamma pseudo-random number generator on your computer. Simulate 10,000 gamma pseudo-random variables with parameters \(a\) and 1, for \(a = 0.5, 1, 1.5, 2, 5,\) and 10. Then draw gamma quantile plots.

Use the data in Exercise 16 of Sec. 10.7.

a. Use the nonparametric bootstrap to estimate the variance of the sample median.

b. How many bootstrap samples does it appear that you need to estimate the variance to within .005 with a probability of 0.95?
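For part (a), the nonparametric bootstrap procedure can be sketched as follows. The data of Exercise 16 of Sec. 10.7 are not reproduced on this page, so a synthetic placeholder sample stands in for them; only the resampling mechanics are illustrated.

```python
import random
import statistics

rng = random.Random(42)
# Placeholder data (assumption): the real values come from
# Exercise 16 of Sec. 10.7, which is not shown here.
data = [rng.gauss(10.0, 2.0) for _ in range(25)]

def bootstrap_median_variance(sample, n_boot, rng):
    """Nonparametric bootstrap: resample with replacement n_boot times,
    compute the median of each resample, and return the sample variance
    of those bootstrap medians."""
    n = len(sample)
    medians = [
        statistics.median(rng.choices(sample, k=n))
        for _ in range(n_boot)
    ]
    return statistics.variance(medians)

est = bootstrap_median_variance(data, n_boot=5000, rng=rng)
# est approximates the variance of the sample median
```

For part (b), one would repeat this estimate with increasing `n_boot` until its Monte Carlo variability (e.g., across repeated runs) falls below the stated tolerance.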
