Q4E


Let \({X_1}, \ldots ,{X_n}\) be uncorrelated, each with variance \({\sigma ^2}\). Let \({Y_1}, \ldots ,{Y_n}\) be positively correlated, each with variance \({\sigma ^2}\). Prove that the variance of \(\overline X \) is smaller than the variance of \(\overline Y \).

Short Answer


Let \({X_1}, \ldots ,{X_n}\) be uncorrelated with variance \({\sigma ^2}\), and let \({Y_1}, \ldots ,{Y_n}\) be positively correlated with the same variance.

The variance of \(\overline X \) is

\({\mathop{\rm var}} \left( {\overline X } \right) = \frac{1}{{{n^2}}} \cdot n{\sigma ^2} = \frac{{{\sigma ^2}}}{n}.\)

Since \({\mathop{\rm cov}} \left( {{Y_i},{Y_j}} \right) > 0\) for every pair \(i \ne j\), it follows that \({\mathop{\rm var}} \left( {\overline Y } \right) > {\mathop{\rm var}} \left( {\overline X } \right).\)

Step by step solution

01

Definition of variance

A statistical measurement of the spread between numbers in a data set is called variance.

Because the \({X_i}\) are uncorrelated, every covariance term vanishes, so the variance of \(\overline X \) is

\({\mathop{\rm var}} \left( {\overline X } \right) = \frac{1}{{{n^2}}}\sum\limits_{i = 1}^n {{\mathop{\rm var}} \left( {{X_i}} \right)} = \frac{1}{{{n^2}}} \cdot n{\sigma ^2} = \frac{{{\sigma ^2}}}{n}.\)

The variance of \(\overline Y \) is

\({\mathop{\rm var}} \left( {\overline Y } \right) = \frac{1}{{{n^2}}}\left( {\sum\limits_{i = 1}^n {{\mathop{\rm var}} \left( {{Y_i}} \right)} + \sum\limits_{i \ne j} {{\mathop{\rm cov}} \left( {{Y_i},{Y_j}} \right)} } \right) > \frac{1}{{{n^2}}} \cdot n{\sigma ^2} = {\mathop{\rm var}} \left( {\overline X } \right)\)

02

Apply the positive correlation

Since \({\mathop{\rm cov}} \left( {{Y_i},{Y_j}} \right) > 0\) for every pair \(i \ne j\), the double sum of covariances is strictly positive.

Hence,

\({\mathop{\rm var}} \left( {\overline Y } \right) > \frac{{{\sigma ^2}}}{n} = {\mathop{\rm var}} \left( {\overline X } \right).\)
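The inequality above can be checked numerically. The sketch below is a minimal Monte Carlo illustration, not part of the proof; the choices \(n = 5\), \({\sigma ^2} = 1\), and an equicorrelated \(Y\) with \(\rho = 0.5\) are assumptions made for the example. For the equicorrelated case the exact values are \({\mathop{\rm var}}(\overline X) = {\sigma ^2}/n = 0.2\) and \({\mathop{\rm var}}(\overline Y) = {\sigma ^2}\left( {1 + (n - 1)\rho } \right)/n = 0.6\).

```python
import numpy as np

# Monte Carlo check of var(Xbar) < var(Ybar).
# Hypothetical setup: n = 5 variables, sigma^2 = 1, Y equicorrelated with rho = 0.5.
rng = np.random.default_rng(0)
n, sigma2, rho, reps = 5, 1.0, 0.5, 200_000

# Uncorrelated X_i: covariance matrix is sigma^2 times the identity.
x = rng.multivariate_normal(np.zeros(n), sigma2 * np.eye(n), size=reps)

# Positively correlated Y_i: cov(Y_i, Y_j) = rho * sigma^2 for i != j.
cov_y = sigma2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))
y = rng.multivariate_normal(np.zeros(n), cov_y, size=reps)

var_xbar = x.mean(axis=1).var()  # should be close to sigma^2 / n = 0.2
var_ybar = y.mean(axis=1).var()  # should be close to sigma^2 (1 + (n-1) rho) / n = 0.6
print(var_xbar, var_ybar)
```

With 200,000 replications the two sample variances land near 0.2 and 0.6, and the strict inequality \({\mathop{\rm var}}(\overline Y) > {\mathop{\rm var}}(\overline X)\) is clearly visible.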


Most popular questions from this chapter

Suppose that we wish to approximate the integral \(\int {g(x)dx} \). Suppose that we have a p.d.f. \(f\) that we shall use as an importance function. Suppose that \(g(x)/f(x)\) is bounded. Prove that the importance sampling estimator has finite variance.

Let \({x_1}, \ldots ,{x_n}\) be the observed values of a random sample \(X = \left( {{X_1}, \ldots ,{X_n}} \right)\). Let \({F_n}\) be the sample c.d.f. Let \({J_1}, \ldots ,{J_n}\) be a random sample with replacement from the numbers \(\left\{ {1, \ldots ,n} \right\}\). Define \(X_i^* = {x_{{J_i}}}\) for \(i = 1, \ldots ,n\).

a. Show that \({X^*} = \left( {X_1^*, \ldots ,X_n^*} \right)\) is an i.i.d. sample from the distribution \({F_n}\).

Use the data on fish prices in Table 11.6 on page 707. Suppose that we assume only that the distribution of fish prices in 1970 and 1980 is a continuous joint distribution with finite variances. We are interested in the properties of the sample correlation coefficient. Construct 1000 nonparametric bootstrap samples for solving this exercise.

a. Approximate the bootstrap estimate of the variance of the sample correlation.

b. Approximate the bootstrap estimate of the bias of the sample correlation.

c. Compute simulation standard errors of each of the above bootstrap estimates.

In Sec. 10.2, we discussed \({\chi ^2}\) goodness-of-fit tests for composite hypotheses. These tests required computing M.L.E.'s based on the numbers of observations that fell into the different intervals used for the test. Suppose instead that we use the M.L.E.'s based on the original observations. In this case, we claimed that the asymptotic distribution of the \({\chi ^2}\) test statistic was somewhere between two different \({\chi ^2}\) distributions. We can use simulation to better approximate the distribution of the test statistic. In this exercise, assume that we are trying to test the same hypotheses as in Example 10.2.5, although the methods will apply in all such cases.

a. Simulate \(v = 1000\) samples of size \(n = 23\) from each of 10 different normal distributions. Let the normal distributions have means of \(3.8\), \(3.9\), \(4.0\), \(4.1\), and \(4.2\). Let the distributions have variances of 0.25 and 0.8. Use all 10 combinations of mean and variance. For each simulated sample, compute the \({\chi ^2}\) statistic Q using the usual M.L.E.'s of \(\mu \) and \({\sigma ^2}\). For each of the 10 normal distributions, estimate the 0.9, 0.95, and 0.99 quantiles of the distribution of Q.

b. Do the quantiles change much as the distribution of the data changes?

c. Consider the test that rejects the null hypothesis if \(Q \ge 5.2\). Use simulation to estimate the power function of this test at the following alternative: for each \(i\), \(\left( {{X_i} - 3.912} \right)/0.5\) has the t distribution with five degrees of freedom.

Show how to simulate Cauchy random variables using the probability integral transformation.
