Q15E The method of antithetic variates


The method of antithetic variates is a technique for reducing the variance of simulation estimators. Antithetic variates are negatively correlated random variables that have the same mean and variance. The variance of the average of two antithetic variates is smaller than the variance of the average of two i.i.d. variables. In this exercise, we shall see how to use antithetic variates for importance sampling, but the method is very general. Suppose that we wish to compute \(\int g(x)\,dx\), and we wish to use the importance function f. Suppose that we generate pseudo-random variables with the p.d.f. f using the probability integral transformation. For \(i = 1, 2, \ldots, \nu\), let \(X^{(i)} = F^{-1}\left(U^{(i)}\right)\), where \(U^{(i)}\) has the uniform distribution on the interval (0, 1) and F is the c.d.f. corresponding to the p.d.f. f. For each \(i = 1, 2, \ldots, \nu\), define

\(\begin{aligned} T^{(i)} &= F^{-1}\left(1 - U^{(i)}\right),\\ W^{(i)} &= \frac{g\left(X^{(i)}\right)}{f\left(X^{(i)}\right)},\\ V^{(i)} &= \frac{g\left(T^{(i)}\right)}{f\left(T^{(i)}\right)},\\ Y^{(i)} &= 0.5\left(W^{(i)} + V^{(i)}\right).\end{aligned}\)

Our estimator of \(\int g(x)\,dx\) is then \(Z = \frac{1}{\nu}\sum\nolimits_{i=1}^{\nu} Y^{(i)}.\)

a. Prove that \(T^{(i)}\) has the same distribution as \(X^{(i)}\).

b. Prove that \(E(Z) = \int g(x)\,dx\).

c. If \(g(x)/f(x)\) is a monotone function, explain why we expect \(W^{(i)}\) and \(V^{(i)}\) to be negatively correlated.

d. If \(W^{(i)}\) and \(V^{(i)}\) are negatively correlated, show that Var(Z) is less than the variance one would get with \(2\nu\) simulations without antithetic variates.
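Before turning to the proofs, it may help to see the estimator in action. The following is a minimal Python sketch, not part of the original exercise: the integrand \(g(x) = e^{-1.5x}\) on \((0, \infty)\) (true integral 2/3) and the importance density \(f(x) = e^{-x}\), with \(F^{-1}(u) = -\log(1-u)\), are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: estimate the integral of g(x) = exp(-1.5 x) over (0, inf),
# whose true value is 2/3, using importance density f(x) = exp(-x) and
# F^{-1}(u) = -log(1 - u), the Exponential(1) quantile function.
def g(x):
    return np.exp(-1.5 * x)

def f(x):
    return np.exp(-x)

def F_inv(u):
    return -np.log1p(-u)

nu = 100_000
U = rng.uniform(size=nu)
X = F_inv(U)          # X^(i) = F^{-1}(U^(i))
T = F_inv(1 - U)      # antithetic partner T^(i) = F^{-1}(1 - U^(i))

W = g(X) / f(X)       # importance-sampling weights from X
V = g(T) / f(T)       # importance-sampling weights from T
Y = 0.5 * (W + V)     # Y^(i) = 0.5 (W^(i) + V^(i))

Z = Y.mean()          # the estimator of the integral
print(Z)              # close to 2/3
```

Each uniform draw is used twice, once directly and once reflected, so the \(\nu\) pairs cost about the same as \(2\nu\) plain importance-sampling draws.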

Short Answer

Expert verified
  1. \(F^{-1}\) is a monotone increasing function.
  2. Use the equality in distribution of \(X^{(i)}\) and \(T^{(i)}\):

\(E(Z) \mathop{=}\limits^{(1)} \frac{1}{2}\cdot 2E\left(W^{(i)}\right) = \int g(x)\,dx\)

  3. See how \(X^{(i)}\) and \(T^{(i)}\) move in opposite directions.
  4. Compare the two variances, using the fact that \(\rho < 0\) for negatively correlated variables (recall \(-1 \le \rho \le 1\)).

Step by step solution

01

(a) Uniform distribution

This follows directly from the fact that \(U^{(i)}\) and \(1 - U^{(i)}\) both have the uniform distribution on (0, 1). Applying the same function \(F^{-1}\) to identically distributed inputs yields identically distributed outputs, so \(T^{(i)} = F^{-1}\left(1 - U^{(i)}\right)\) has the same distribution as \(X^{(i)} = F^{-1}\left(U^{(i)}\right)\).

In addition, \(F^{-1}\) is a monotone increasing function, a fact that will be used again in part (c).
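This equality of distributions can be checked numerically. A sketch, assuming the Exponential(1) quantile function \(F^{-1}(u) = -\log(1-u)\) as an illustrative choice of F, compares empirical quantiles of the two samples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: F^{-1}(u) = -log(1 - u), the Exponential(1) quantile function.
# Since U and 1 - U are both uniform on (0, 1), X = F^{-1}(U) and
# T = F^{-1}(1 - U) should have identical distributions.
def F_inv(u):
    return -np.log1p(-u)

U = rng.uniform(size=200_000)
X = F_inv(U)
T = F_inv(1 - U)

# Empirical quantiles of X and T agree up to sampling error.
q = np.linspace(0.05, 0.95, 19)
max_gap = np.max(np.abs(np.quantile(X, q) - np.quantile(T, q)))
print(max_gap)  # small (sampling noise only)
```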

02

(b) Find the value of \(E(Z)\)

It follows from the importance sampling method that

\(\begin{aligned} E(Z) &= E\left(\frac{1}{\nu}\sum\limits_{i=1}^{\nu} Y^{(i)}\right) = 0.5\,\frac{1}{\nu}\cdot\nu\, E\left(W^{(i)}\right) + 0.5\,\frac{1}{\nu}\cdot\nu\, E\left(V^{(i)}\right)\\ &\mathop{=}\limits^{(1)} \frac{1}{2}\cdot 2E\left(W^{(i)}\right) = \int g(x)\,dx.\end{aligned}\)

(1): from (a) \({X^{\left( i \right)}}\) and \({T^{\left( i \right)}}\) have the same distribution, which implies that \({W^{\left( i \right)}}\) and \({V^{\left( i \right)}}\) have the same distribution.

03

(c) Definitions of \(X^{(i)}\) and \(T^{(i)}\)

From the definitions of \(X^{(i)}\) and \(T^{(i)}\): because \(F^{-1}\) is a monotone increasing function, when \(U^{(i)}\) increases, \(X^{(i)} = F^{-1}\left(U^{(i)}\right)\) increases while \(T^{(i)} = F^{-1}\left(1 - U^{(i)}\right)\) decreases, and vice versa.

By the assumption that \(g(x)/f(x)\) is a monotone function, \(W^{(i)}\) and \(V^{(i)}\) inherit this opposite movement from \(X^{(i)}\) and \(T^{(i)}\): when one increases, the other decreases, and vice versa. This indicates that the random variables should be negatively correlated.
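The negative correlation shows up clearly in simulation. A sketch, where \(f(x) = e^{-x}\) and \(g(x) = e^{-1.5x}\) are assumed purely as an example (so the ratio \(g(x)/f(x) = e^{-0.5x}\) is monotone decreasing):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example: f(x) = exp(-x), g(x) = exp(-1.5 x), so the ratio
# g(x)/f(x) = exp(-0.5 x) is monotone decreasing in x.
def F_inv(u):
    return -np.log1p(-u)   # Exponential(1) quantile function

def ratio(x):
    return np.exp(-0.5 * x)  # g(x) / f(x)

U = rng.uniform(size=100_000)
W = ratio(F_inv(U))          # W^(i), computed from X^(i) = F^{-1}(U^(i))
V = ratio(F_inv(1 - U))      # V^(i), computed from T^(i) = F^{-1}(1 - U^(i))

rho = np.corrcoef(W, V)[0, 1]
print(rho)                   # clearly negative
```

For this particular choice \(W^{(i)} = \sqrt{1 - U^{(i)}}\) and \(V^{(i)} = \sqrt{U^{(i)}}\), so the correlation is strongly negative, close to \(-1\).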

04

(d) Find the value of \(Var(Z)\)

Compare the following two variances. The first variance is

\(\begin{aligned} Var(Z) &= Var\left(\frac{1}{\nu}\sum\limits_{i=1}^{\nu} Y^{(i)}\right) = \frac{1}{\nu^2}\cdot\nu\cdot 0.5^2\, Var\left(W^{(i)} + V^{(i)}\right)\\ &= \frac{0.25}{\nu}\left(Var\left(W^{(i)}\right) + Var\left(V^{(i)}\right) + 2\rho\, Var\left(W^{(i)}\right)\right)\\ &= \frac{1}{\nu}\cdot 0.5\,(1+\rho)\,Var\left(W^{(i)}\right),\end{aligned}\)

where \(\rho\) is the correlation of \(W^{(i)}\) and \(V^{(i)}\). Here \(Var\left(W^{(i)}\right) = Var\left(V^{(i)}\right)\) because the two variables have the same distribution by part (a), so \(Cov\left(W^{(i)}, V^{(i)}\right) = \rho\,Var\left(W^{(i)}\right)\). The second variance to compare, from \(2\nu\) independent simulations, is

\(Var\left(\frac{1}{2\nu}\sum\limits_{i=1}^{2\nu} W^{(i)}\right) = \frac{1}{2\nu}\,Var\left(W^{(i)}\right).\)

The following is true

\(\frac{1}{\nu}\cdot 0.5\,(1+\rho)\,Var\left(W^{(i)}\right) < \frac{1}{2\nu}\,Var\left(W^{(i)}\right)\)

if and only if

\(1 + \rho < 1,\) that is, \(\rho < 0.\)

This is exactly the assumption that \(W^{(i)}\) and \(V^{(i)}\) are negatively correlated (recall that \(-1 \le \rho \le 1\)), so the antithetic estimator indeed has the smaller variance.
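The variance comparison can be confirmed empirically by repeating both estimators many times. A sketch, again assuming the illustrative choices \(f(x) = e^{-x}\) and \(g(x) = e^{-1.5x}\) (not from the original exercise):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed example: f(x) = exp(-x), g(x) = exp(-1.5 x).  Compare the variance
# of the antithetic estimator Z built from nu pairs with that of the plain
# importance-sampling estimator built from 2*nu independent draws.
def F_inv(u):
    return -np.log1p(-u)     # Exponential(1) quantile function

def ratio(x):
    return np.exp(-0.5 * x)  # g(x) / f(x)

nu, reps = 1_000, 2_000
Z_anti = np.empty(reps)
Z_iid = np.empty(reps)
for r in range(reps):
    U = rng.uniform(size=nu)
    Z_anti[r] = (0.5 * (ratio(F_inv(U)) + ratio(F_inv(1 - U)))).mean()
    U2 = rng.uniform(size=2 * nu)
    Z_iid[r] = ratio(F_inv(U2)).mean()

var_anti, var_iid = Z_anti.var(), Z_iid.var()
print(var_anti, var_iid)  # the antithetic variance is much smaller
```

Both estimators use the same number of function evaluations per replication (2ν), so the comparison is fair in cost as well as in sample size.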


