Q2E

Let \(x_1, \dots, x_n\) be the observed values of a random sample \(X = (X_1, \dots, X_n)\). Let \(F_n\) be the sample c.d.f. Let \(J_1, \dots, J_n\) be a random sample with replacement from the numbers \(\{1, \dots, n\}\). Define \(X_i^* = x_{J_i}\) for \(i = 1, \dots, n\).

a. Show that \(X^* = (X_1^*, \dots, X_n^*)\) is an i.i.d. sample from the distribution \(F_n\).

Short Answer


Because \(J_1, \dots, J_n\) are independent and each is uniformly distributed on \(\{1, \dots, n\}\), the joint distribution of the bootstrap sample factors: for any \(i_1, \dots, i_n \in \{1, \dots, n\}\),

\(\Pr\left(X_1^* = x_{i_1}, X_2^* = x_{i_2}, \dots, X_n^* = x_{i_n}\right) = \prod_{j=1}^{n} \Pr\left(X_j^* = x_{i_j}\right),\)

and each \(X_j^*\) has c.d.f. \(F_n\); hence \(X^*\) is an i.i.d. sample from \(F_n\).

Step by step solution

01

Definition of a random variable

A random variable is a variable with an unknown value, or a function that assigns a value to each outcome of an experiment.

To prove that the bootstrap sample \(X^*\) is i.i.d., it suffices to show that its joint distribution factors into a product of identical marginals:

\(\Pr\left(X_1^* = x_{i_1}, X_2^* = x_{i_2}, \dots, X_n^* = x_{i_n}\right) = \prod_{j=1}^{n} \Pr\left(X_j^* = x_{i_j}\right)\)

Let \(i_1, i_2, \dots, i_n \in \{1, 2, \dots, n\}\), and let \(x_i\), \(i = 1, 2, \dots, n\), be the observed values, so that \(F_n\) places probability \(1/n\) on each of them. The random variable \(X_j^*\) takes the value \(x_{i_j}\) exactly when the random index \(J_j\) takes the value \(i_j\), and \(J_1, J_2, \dots, J_n\) is a random sample with replacement from \(\{1, \dots, n\}\).

02

The indices are independent

Because the indices \(J_1, \dots, J_n\) are independent and each is uniformly distributed on \(\{1, \dots, n\}\), it follows directly that

\(\begin{aligned} \Pr\left(X_1^* = x_{i_1}, X_2^* = x_{i_2}, \dots, X_n^* = x_{i_n}\right) &= \Pr\left(J_1 = i_1, J_2 = i_2, \dots, J_n = i_n\right) \\ &= \Pr\left(J_1 = i_1\right)\Pr\left(J_2 = i_2\right) \cdots \Pr\left(J_n = i_n\right) \\ &= \prod_{j=1}^{n} \Pr\left(X_j^* = x_{i_j}\right) \end{aligned}\)

Moreover, each \(X_j^*\) has the distribution \(F_n\): since \(J_j\) is uniform on \(\{1, \dots, n\}\),

\(\Pr\left(X_j^* \le t\right) = \frac{\#\{i : x_i \le t\}}{n} = F_n(t).\)

Hence

\(\Pr\left(X_1^* = x_{i_1}, X_2^* = x_{i_2}, \dots, X_n^* = x_{i_n}\right) = \prod_{j=1}^{n} \Pr\left(X_j^* = x_{i_j}\right),\)

so the coordinates of \(X^*\) are independent and identically distributed with c.d.f. \(F_n\).
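As a quick numerical check (not part of the textbook solution), here is a minimal Python sketch, assuming NumPy, that draws bootstrap samples exactly as in the exercise and compares the empirical marginal distribution of \(X_1^*\) with \(F_n\); the data values and the evaluation point t are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Observed values x_1, ..., x_n (arbitrary placeholder data).
    x = np.array([2.1, 3.5, 3.5, 4.0, 5.2])
    n = len(x)

    def bootstrap_sample(x, rng):
        # Draw indices J_1, ..., J_n with replacement from {0, ..., n-1}
        # and form X*_i = x[J_i].
        j = rng.integers(0, len(x), size=len(x))
        return x[j]

    # Pr(X*_1 <= t) should equal F_n(t), the fraction of observed values <= t.
    t = 3.5
    draws = np.array([bootstrap_sample(x, rng)[0] for _ in range(100_000)])
    print("empirical Pr(X*_1 <= t):", np.mean(draws <= t))
    print("F_n(t):", np.mean(x <= t))

Both printed values should agree up to Monte Carlo error (here, about 0.6).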


Most popular questions from this chapter

In Sec. 10.2, we discussed \(\chi^2\) goodness-of-fit tests for composite hypotheses. These tests required computing M.L.E.'s based on the numbers of observations that fell into the different intervals used for the test. Suppose instead that we use the M.L.E.'s based on the original observations. In this case, we claimed that the asymptotic distribution of the \(\chi^2\) test statistic was somewhere between two different \(\chi^2\) distributions. We can use simulation to better approximate the distribution of the test statistic. In this exercise, assume that we are trying to test the same hypotheses as in Example 10.2.5, although the methods will apply in all such cases.

a. Simulate \(v = 1000\) samples of size \(n = 23\) from each of 10 different normal distributions. Let the normal distributions have means of 3.8, 3.9, 4.0, 4.1, and 4.2. Let the distributions have variances of 0.25 and 0.8. Use all 10 combinations of mean and variance. For each simulated sample, compute the \(\chi^2\) statistic \(Q\) using the usual M.L.E.'s of \(\mu\) and \(\sigma^2\). For each of the 10 normal distributions, estimate the 0.9, 0.95, and 0.99 quantiles of the distribution of \(Q\) (a simulation sketch for this part follows part c below).

b. Do the quantiles change much as the distribution of the data changes?

c. Consider the test that rejects the null hypothesis if \(Q \ge 5.2\). Use simulation to estimate the power function of this test at the following alternative: for each \(i\), \((X_i - 3.912)/0.5\) has the \(t\) distribution with five degrees of freedom.
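A sketch of part (a) in Python, assuming NumPy and SciPy: the interval partition of Example 10.2.5 is not reproduced on this page, so the cell edges below are placeholder assumptions, and only the overall simulation recipe carries over.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Placeholder cell edges; Example 10.2.5 defines its own partition.
    edges = np.array([-np.inf, 3.575, 3.912, 4.249, np.inf])

    def chi_sq_stat(sample, edges):
        # M.L.E.'s computed from the raw observations, not the cell counts.
        mu_hat = sample.mean()
        sigma_hat = sample.std()  # divisor n, i.e., the M.L.E.
        probs = np.diff(norm.cdf(edges, loc=mu_hat, scale=sigma_hat))
        counts, _ = np.histogram(sample, bins=edges)
        expected = len(sample) * probs
        return np.sum((counts - expected) ** 2 / expected)

    n, v = 23, 1000
    for mean in [3.8, 3.9, 4.0, 4.1, 4.2]:
        for var in [0.25, 0.8]:
            q = np.array([chi_sq_stat(rng.normal(mean, np.sqrt(var), n), edges)
                          for _ in range(v)])
            print(mean, var, np.quantile(q, [0.9, 0.95, 0.99]))

Comparing the printed quantile rows across the 10 distributions is exactly the comparison that part (b) asks about.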

Use the data in Table 11.19 on page 762. This time, fit the model developed in Example 12.5.6. Use the prior hyperparameters \(\lambda_0 = \alpha_0 = 1\), \(\beta_0 = 0.1\), \(\mu_0 = 0.001\), and \(\psi_0 = 800\). Obtain a sample of 10,000 from the joint posterior distribution of the parameters. Estimate the posterior means of the three parameters \(\mu_1, \mu_2, \mu_3\).
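Neither Table 11.19 nor the hierarchical model of Example 12.5.6 is reproduced on this page, so no faithful implementation is possible here. Purely to illustrate the Gibbs recipe the exercise calls for (alternately sampling each parameter from its full conditional until 10,000 draws are collected), here is a sketch for a much simpler stand-in: a single normal sample with a normal-gamma prior; the data are synthetic placeholders and the hyperparameter names are reused only for flavor.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic placeholder data (Table 11.19 is not reproduced here).
    y = rng.normal(150.0, 25.0, size=20)
    n, ybar = len(y), y.mean()

    lam0, alpha0, beta0, mu0 = 1.0, 1.0, 0.1, 0.001

    mu, tau = ybar, 1.0  # initial values
    draws = np.empty((10_000, 2))
    for s in range(10_000):
        # Full conditional of tau given mu: a gamma distribution.
        shape = alpha0 + (n + 1) / 2
        rate = beta0 + 0.5 * np.sum((y - mu) ** 2) + 0.5 * lam0 * (mu - mu0) ** 2
        tau = rng.gamma(shape, 1.0 / rate)
        # Full conditional of mu given tau: a normal distribution.
        m = (lam0 * mu0 + n * ybar) / (lam0 + n)
        mu = rng.normal(m, 1.0 / np.sqrt((lam0 + n) * tau))
        draws[s] = mu, tau

    print("posterior mean of mu:", draws[:, 0].mean())

For the actual exercise, the same loop would cycle through the full conditionals of \(\mu_1, \mu_2, \mu_3\) and the remaining parameters of Example 12.5.6, and the posterior means are estimated by averaging the retained draws.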

The skewness of a random variable was defined in Definition 4.4.1. Suppose that \({X_1},...,{X_n}\) form a random sample from a distribution \(F\). The sample skewness is defined as

\(M_3 = \dfrac{\frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^3}{\left(\frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2\right)^{3/2}}\)

One might use \(M_3\) as an estimator of the skewness \(\theta\) of the distribution \(F\). The bootstrap can be used to estimate the bias and standard deviation of the sample skewness as an estimator of \(\theta\).

a. Prove that \(M_3\) is the skewness of the sample distribution \(F_n\).

b. Use the 1970 fish price data in Table 11.6 on page 707. Compute the sample skewness, and then simulate 1000 bootstrap samples. Use the bootstrap samples to estimate the bias and standard deviation of the sample skewness (a bootstrap sketch follows below).
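A minimal sketch of part (b) in Python, assuming NumPy; the fish-price data of Table 11.6 are not reproduced here, so a synthetic placeholder sample stands in for them.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_skewness(x):
        # M_3: third central moment over the 3/2 power of the second.
        d = x - x.mean()
        return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

    # Placeholder data standing in for the 1970 fish prices of Table 11.6.
    x = rng.lognormal(0.0, 0.5, size=14)

    m3 = sample_skewness(x)
    boot = np.array([sample_skewness(rng.choice(x, size=len(x), replace=True))
                     for _ in range(1000)])
    print("sample skewness:", m3)
    print("bootstrap bias estimate:", boot.mean() - m3)
    print("bootstrap std estimate:", boot.std())

The bias estimate is the mean bootstrap skewness minus the observed \(M_3\), and the standard deviation estimate is the standard deviation of the bootstrap skewness values.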

Test the standard normal pseudo-random number generator on your computer by generating a sample of size 10,000 and drawing a normal quantile plot. How straight does the plot appear to be?
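One way to carry this out, assuming NumPy, SciPy, and Matplotlib are available:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import probplot

    rng = np.random.default_rng(0)
    z = rng.standard_normal(10_000)

    # Normal quantile (Q-Q) plot: sample quantiles against standard normal
    # quantiles; a straight line suggests the generator output looks normal.
    probplot(z, dist="norm", plot=plt)
    plt.show()

Mild wiggles in the extreme tails are expected even for a good generator; systematic curvature in the body of the plot would be a red flag.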

Consider, once again, the model described in Example 7.5.10. Assume that \(n = 10\) and the observed values of \(X_1, \dots, X_{10}\) are

\(-0.92,\ -0.33,\ -0.09,\ 0.27,\ 0.50,\ -0.60,\ 1.66,\ -1.86,\ 3.29,\ 2.30.\)

a. Fit the model to the observed data using the Gibbs sampling algorithm developed in Exercise. Use the following prior hyperparameters: \(\alpha_0 = 1\), \(\beta_0 = 1\), \(\mu_0 = 0\), and \(\lambda_0 = 1\).

b. For each \(i\), estimate the posterior probability that \(x_i\) came from the normal distribution with unknown mean and variance (a sketch follows below).
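Example 7.5.10's model is not reproduced on this page, so the sketch below is only an illustration of the general data-augmentation Gibbs approach. It assumes, hypothetically, a two-component mixture in which each observation comes with equal prior probability from a standard normal or from a normal distribution with unknown mean \(\mu\) and precision \(\tau\) under a normal-gamma prior with part (a)'s hyperparameters; part (b)'s membership probabilities are then estimated by averaging the sampled indicators.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = np.array([-0.92, -0.33, -0.09, 0.27, 0.50,
                  -0.60, 1.66, -1.86, 3.29, 2.30])
    n = len(x)

    alpha0, beta0, mu0, lam0 = 1.0, 1.0, 0.0, 1.0

    mu, tau = 0.0, 1.0  # initial values; burn-in omitted for brevity
    z_sum = np.zeros(n)
    S = 10_000
    for s in range(S):
        # 1) Indicators: z_i = 1 assigns x_i to the unknown-parameter component.
        p1 = norm.pdf(x, loc=mu, scale=1.0 / np.sqrt(tau))
        p0 = norm.pdf(x, loc=0.0, scale=1.0)
        z = rng.random(n) < p1 / (p0 + p1)
        z_sum += z
        # 2) Conjugate normal-gamma update of (mu, tau) from the assigned points.
        xs = x[z]
        m = len(xs)
        lam_n = lam0 + m
        mu_n = (lam0 * mu0 + xs.sum()) / lam_n
        if m > 0:
            xbar = xs.mean()
            ss = np.sum((xs - xbar) ** 2)
            extra = lam0 * m * (xbar - mu0) ** 2 / (2.0 * lam_n)
        else:
            ss, extra = 0.0, 0.0
        tau = rng.gamma(alpha0 + m / 2.0, 1.0 / (beta0 + 0.5 * ss + extra))
        mu = rng.normal(mu_n, 1.0 / np.sqrt(lam_n * tau))

    print("estimated Pr(x_i from the unknown-mean component):", z_sum / S)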
