
In Sec. 10.2, we discussed \({\chi ^2}\) goodness-of-fit tests for composite hypotheses. These tests required computing M.L.E.'s based on the numbers of observations that fell into the different intervals used for the test. Suppose instead that we use the M.L.E.'s based on the original observations. In this case, we claimed that the asymptotic distribution of the \({\chi ^2}\) test statistic was somewhere between two different \({\chi ^2}\) distributions. We can use simulation to better approximate the distribution of the test statistic. In this exercise, assume that we are trying to test the same hypotheses as in Example 10.2.5, although the methods will apply in all such cases.

a. Simulate \(v = 1000\) samples of size \(n = 23\) from each of 10 different normal distributions. Let the normal distributions have means of \(3.8, 3.9, 4.0, 4.1,\) and \(4.2\). Let the distributions have variances of 0.25 and 0.8. Use all 10 combinations of mean and variance. For each simulated sample, compute the \({\chi ^2}\) statistic Q using the usual M.L.E.'s of \(\mu \) and \({\sigma ^2}\). For each of the 10 normal distributions, estimate the 0.9, 0.95, and 0.99 quantiles of the distribution of Q.

b. Do the quantiles change much as the distribution of the data changes?

c. Consider the test that rejects the null hypothesis if \(Q \ge 5.2.\) Use simulation to estimate the power function of this test at the following alternative: For each \(i,\left( {{X_i} - 3.912} \right)/0.5\) has the t distribution with five degrees of freedom.

Short Answer


(a) There are a total of 30 estimates.

(b) There are changes, but they do not seem to be large.

(c) The estimated power varies around 0.0357.

Step by step solution

01

Simulating the samples and the distribution of Q

Follow the assumptions in the exercise. First, note that there should be \(v = 1000\) samples of size \(n = 23\) from each normal distribution, and that under the null hypothesis the data come from a normal distribution with mean \(\log (50)\) and standard deviation \(\sqrt {0.25} \). Hence, \(\mu = \log (50) \approx 3.912\) and \({\sigma ^2} = 0.25\).

Another assumption is that the generated samples are taken from the 10 different normal distributions formed by all combinations of the means and variances given in the exercise. For each of the 10 combinations, there are 3 quantile estimates, that is, \(3 \times 10 = 30\) estimates in total.

The number of intervals should not exceed 5, so \(5 - 1 = 4\) intervals will be used in this simulation. The intervals depend on the probabilities of the distribution under the null hypothesis, which is the normal distribution above. The upper limit of the first interval is

\(U{L_1} = \mu + 0.5 \times {\Phi ^{ - 1}}(0.25) = 3.912 + 0.5 \times ( - 0.674) = 3.575\)

where\(P\left( {Z < - 0.674} \right) = 0.25.\)

This way, the probability that an observation falls in the interval \(( - \infty ,3.575)\) is 0.25 under the null hypothesis. The remaining intervals have the same probability; their upper limits are computed using \({\Phi ^{ - 1}}(0.5)\) and \({\Phi ^{ - 1}}(0.75)\) instead of \({\Phi ^{ - 1}}(0.25)\):

\(\begin{aligned}U{L_2} &= \mu + 0.5 \times {\Phi ^{ - 1}}(0.5) = 3.912 + 0.5 \times 0 = 3.912\\U{L_3} &= \mu + 0.5 \times {\Phi ^{ - 1}}(0.75) = 3.912 + 0.5 \times 0.674 = 4.249\end{aligned}\)
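As a quick check of the arithmetic above, the three interval limits can be computed directly. This is a minimal sketch in Python (the original solution used R; `numpy` and `scipy` are assumed available):

```python
import numpy as np
from scipy.stats import norm

# Null-hypothesis parameters: mu = log(50) ~= 3.912, sigma = 0.5
mu0, sigma0 = np.log(50), 0.5

# Upper limits of the first three intervals, each cut at probability 0.25
upper_limits = mu0 + sigma0 * norm.ppf([0.25, 0.5, 0.75])
print(np.round(upper_limits, 3))  # [3.575 3.912 4.249]
```

The three printed values match \(U{L_1}\), \(U{L_2}\), and \(U{L_3}\) computed by hand above.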

Then, the\({\chi ^2}\)statistic value is computed using

\(Q = \sum\limits_{i = 1}^k {\frac{{{{\left( {{N_i} - np_i^0} \right)}^2}}}{{np_i^0}}} \)

where, in this case, \(k = 4\), the \({N_i}\), \(i = 1,2,3,4\), are the numbers of observations that fall in the \(i\)th interval, \(n = 23\), and the \(p_i^0\) are estimated using the maximum likelihood estimates computed from the generated samples. For example, if \(\hat \mu \) and \({\hat \sigma ^2}\) are the M.L.E.'s of the parameters \(\mu \) and \({\sigma ^2}\), then the probabilities are estimated by

\(\begin{aligned}\hat p_1^0 &= \Phi \left( {\frac{{U{L_1} - \hat \mu }}{{\hat \sigma }}} \right)\\\hat p_2^0 &= \Phi \left( {\frac{{U{L_2} - \hat \mu }}{{\hat \sigma }}} \right) - \Phi \left( {\frac{{U{L_1} - \hat \mu }}{{\hat \sigma }}} \right)\\\hat p_3^0 &= \Phi \left( {\frac{{U{L_3} - \hat \mu }}{{\hat \sigma }}} \right) - \Phi \left( {\frac{{U{L_2} - \hat \mu }}{{\hat \sigma }}} \right)\\\hat p_4^0 &= 1 - \Phi \left( {\frac{{U{L_3} - \hat \mu }}{{\hat \sigma }}} \right)\end{aligned}\)

Then, use the \({\chi ^2}\) statistic given above. After computing the \(v = 1000\) values of Q for a given distribution, sort them and take the \((v \times p)\)th ordered value, where \(p \in \{ 0.9,0.95,0.99\} \), as the estimate of the \(p\) quantile of the distribution.

The resulting estimates form a \(10 \times 3\) quantile matrix, one row per distribution and one column per quantile level. As mentioned, there are a total of 30 estimates.
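The original R code is not reproduced on this page, so here is an illustrative sketch of the part (a) simulation in Python (all names are my own; `numpy` and `scipy` are assumed):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Interval edges are fixed under H0: mu = log(50) ~= 3.912, sigma = 0.5
mu0, sigma0 = np.log(50), 0.5
edges = np.concatenate(([-np.inf],
                        mu0 + sigma0 * norm.ppf([0.25, 0.5, 0.75]),
                        [np.inf]))

def chi_sq_stat(x):
    """Q with cell probabilities evaluated at the usual MLEs."""
    n = len(x)
    mu_hat = x.mean()
    sigma_hat = x.std()                      # ddof=0, i.e., the MLE of sigma
    counts = np.histogram(x, bins=edges)[0]  # N_1, ..., N_4
    p_hat = np.diff(norm.cdf(edges, loc=mu_hat, scale=sigma_hat))
    return np.sum((counts - n * p_hat) ** 2 / (n * p_hat))

v, n = 1000, 23
quantiles = {}
for mean in (3.8, 3.9, 4.0, 4.1, 4.2):
    for var in (0.25, 0.8):
        q_vals = np.sort([chi_sq_stat(rng.normal(mean, np.sqrt(var), n))
                          for _ in range(v)])
        # (v * p)-th ordered value as the estimate of the p quantile
        quantiles[(mean, var)] = [q_vals[int(v * p) - 1]
                                  for p in (0.9, 0.95, 0.99)]
```

The dictionary `quantiles` holds the 30 estimates: 10 (mean, variance) combinations, each with its estimated 0.9, 0.95, and 0.99 quantiles of Q.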


02

Comparing the quantiles across distributions

The quantiles do change as the mean and variance of the data change, but the changes do not appear to be large.

03

Estimating the power at the t alternative

Generate samples from the alternative given in the exercise: each \({X_i} = 3.912 + 0.5\,{T_i}\), where \({T_i}\) has the t distribution with five degrees of freedom. Apply the test that rejects when \(Q \ge 5.2\) to each simulated sample; the estimated power is around \(0.0357\).
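The power estimate for part (c) can be sketched in the same way (Python again as an illustrative stand-in for the missing R code; `numpy` and `scipy` assumed):

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)

# Fixed interval edges under H0 (mu = log(50) ~= 3.912, sigma = 0.5)
mu0, sigma0 = np.log(50), 0.5
edges = np.concatenate(([-np.inf],
                        mu0 + sigma0 * norm.ppf([0.25, 0.5, 0.75]),
                        [np.inf]))

def chi_sq_stat(x):
    """Q with cell probabilities from the usual (1/n-variance) MLEs."""
    n = len(x)
    mu_hat, sigma_hat = x.mean(), x.std()    # np.std uses ddof=0, the MLE
    counts = np.histogram(x, bins=edges)[0]
    p_hat = np.diff(norm.cdf(edges, loc=mu_hat, scale=sigma_hat))
    return np.sum((counts - n * p_hat) ** 2 / (n * p_hat))

v, n = 1000, 23
# Alternative: (X_i - 3.912)/0.5 has the t distribution with 5 d.f.
x = 3.912 + 0.5 * t.rvs(df=5, size=(v, n), random_state=rng)
power = np.mean([chi_sq_stat(row) >= 5.2 for row in x])
print(round(power, 4))
```

The printed proportion of rejections estimates the power; it varies with the seed but stays near the 0.0357 reported above.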
