Q8E

Suppose that we wish to approximate the integral \(\int g(x)\,dx\). Suppose that we have a p.d.f. \(f\) that we shall use as an importance function, and that \(g(x)/f(x)\) is bounded. Prove that the importance sampling estimator has finite variance.

Short Answer


Because the quotient \(g(x)/f(x)\) is bounded, each term of the estimator is a bounded random variable, and a bounded random variable has finite variance; the finite variance of the estimator follows directly.

Step by step solution

01

Definition of importance sampling

  • Many integrals can be advantageously rewritten as expectations of functions of random variables.
  • If we can simulate a large number of random variables with appropriate distributions, we can estimate integrals that would otherwise be impossible to compute in closed form.
02

Determine the importance sampling estimator

Assume that the quotient \(g(x)/f(x)\) is bounded, say \(\left| {g(x)/f(x)} \right| \le c\) for all \(x\) with \(f(x) > 0\).

Since \(f\) is the importance function, we simulate \({X^{(1)}}, \ldots ,{X^{(\nu )}}\) i.i.d. with p.d.f. \(f\) and form the random variables

\({Y^{(i)}} = \frac{{g\left( {{X^{(i)}}} \right)}}{{f\left( {{X^{(i)}}} \right)}},\)

from which the importance sampling estimator is obtained:

\(Z = \frac{1}{\nu }\sum\limits_{i = 1}^\nu {{Y^{(i)}}} .\)

Because \(\left| {{Y^{(i)}}} \right| \le c\), we have \(E\left[ {{{\left( {{Y^{(i)}}} \right)}^2}} \right] \le {c^2} < \infty \), so \({\mathop{\rm Var}} \left( {{Y^{(i)}}} \right) \le {c^2}\). The \({Y^{(i)}}\) are independent, hence

\({\mathop{\rm Var}} (Z) = \frac{1}{{{\nu ^2}}}\sum\limits_{i = 1}^\nu {{\mathop{\rm Var}} \left( {{Y^{(i)}}} \right)} \le \frac{{{c^2}}}{\nu } < \infty ,\)

so the variance of \(Z\) is finite.
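The argument above can be illustrated numerically. The sketch below (not part of the textbook solution; the choice of \(g\) and \(f\) is ours) approximates \(\int e^{-x^2/2}\,dx = \sqrt{2\pi}\) using a standard Cauchy importance density. Here \(Y = g(X)/f(X) = \pi(1+X^2)e^{-X^2/2}\) is bounded (its maximum, at \(x = \pm 1\), is \(2\pi e^{-1/2} \approx 3.81\)), so the estimator's variance is finite and can be estimated from the sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Integrand: the true integral is sqrt(2*pi) ~ 2.5066
    return np.exp(-x**2 / 2)

def f(x):
    # Importance function: standard Cauchy p.d.f.
    return 1.0 / (np.pi * (1.0 + x**2))

nu = 100_000
x = rng.standard_cauchy(nu)   # X^(i) drawn i.i.d. from f
y = g(x) / f(x)               # Y^(i), bounded by 2*pi*exp(-1/2)

z = y.mean()                  # importance sampling estimator Z
var_z = y.var(ddof=1) / nu    # estimated Var(Z), finite because Y is bounded

print(z)                      # close to sqrt(2*pi)
print(var_z)
```

Because the ratio \(Y\) never exceeds its bound, the sample variance of \(Y\) cannot blow up, which is exactly the mechanism the proof formalizes.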


Most popular questions from this chapter

Use the data in Table 11.5 on page 699. Suppose that \({y_i}\) is the logarithm of pressure and \({x_i}\) is the boiling point for the \(i\)th observation, \(i = 1, \ldots ,17\). Use the robust regression scheme described in Exercise 8 with \(a = 5\), \(b = 0.1\), and \(f = 0.1\). Estimate the posterior means and standard deviations of the parameters \({\beta _0}\), \({\beta _1}\), and \(\sigma \).

Use the blood pressure data in Table 9.2 that was described in Exercise 10 of Sec. 9.6. Suppose now that we are not confident that the variances are the same for the two treatment groups. Perform a parametric bootstrap analysis of the sort done in Example 12.6.10. Use \(\nu = 10{,}000\) bootstrap simulations.

a. Estimate the probability of type I error for a two-sample t-test whose nominal level is \({\alpha _0} = 0.1.\)

b. Correct the level of the two-sample t-test by computing the appropriate quantile of the bootstrap distribution of \(\left| {{U^{(i)}}} \right|.\)

c. Compute the standard simulation error for the quantile in part (b).

Consider, once again, the model described in Example 7.5.10. Assume that \(n = 10\) and the observed values of \({X_1}, \ldots ,{X_{10}}\) are

\( - 0.92,\;\, - 0.33,\;\, - 0.09,\;\,0.27,\;\,0.50,\;\, - 0.60,\;\,1.66,\;\, - 1.86,\;\,3.29,\;\,2.30.\)

a. Fit the model to the observed data using the Gibbs sampling algorithm developed in Exercise. Use the following prior hyperparameters: \({\alpha _0} = 1\), \({\beta _0} = 1\), \({\mu _0} = 0\), and \({\lambda _0} = 1\).

b. For each \(i\), estimate the posterior probability that \({x_i}\) came from the normal distribution with unknown mean and variance.

Suppose that \(\left( {{X_1},{Y_1}} \right), \ldots ,\left( {{X_n},{Y_n}} \right)\) form a random sample from a bivariate normal distribution with means \({\mu _x}\) and \({\mu _y}\), variances \(\sigma _x^2\) and \(\sigma _y^2\), and correlation \(\rho \). Let \(R\) be the sample correlation. Prove that the distribution of \(R\) depends only on \(\rho \), not on \({\mu _x}\), \({\mu _y}\), \(\sigma _x^2\), or \(\sigma _y^2\).

Let \({X_1}, \ldots ,{X_n}\) be i.i.d. with the normal distribution having mean \(\mu \) and precision \(\tau \). Gibbs sampling allows one to use a prior distribution for \(\left( {\mu ,\tau } \right)\) in which \(\mu \) and \(\tau \) are independent. Let the prior distribution of \(\mu \) be the normal distribution with mean \({\mu _0}\) and variance \({\gamma _0}\), and let the prior distribution of \(\tau \) be the gamma distribution with parameters \({\alpha _0}\) and \({\beta _0}\).

a. Show that Table 12.8 specifies the appropriate conditional distribution for each parameter given the other.

b. Use the New Mexico nursing home data (Examples 12.5.2 and 12.5.3). Let the prior hyperparameters be \({\alpha _0} = 2\), \({\beta _0} = 6300\), \({\mu _0} = 200\), and \({\gamma _0} = 6.35 \times {10^{ - 4}}\). Implement a Gibbs sampler to find the posterior distribution of \(\left( {\mu ,\tau } \right)\). In particular, calculate an interval containing 95 percent of the posterior distribution of \(\mu \).
