
In this problem, we shall outline a Bayesian solution to the problem described in Example 7.5.10 on page 423. Let \(\tau = 1/\sigma^2\) and use a proper normal-gamma prior of the form described in Sec. 8.6. In addition to the two parameters \(\mu\) and \(\tau\), introduce \(n\) additional parameters. For \(i = 1, \ldots, n\), let \(Y_i = 1\) if \(X_i\) came from the normal distribution with mean \(\mu\) and precision \(\tau\), and let \(Y_i = 0\) if \(X_i\) came from the standard normal distribution.

a. Find the conditional distribution of \(\mu\) given \(\tau\); \(Y_1, \ldots, Y_n\); and \(X_1, \ldots, X_n\).

b. Find the conditional distribution of \(\tau\) given \(\mu\); \(Y_1, \ldots, Y_n\); and \(X_1, \ldots, X_n\).

c. Find the conditional distribution of \(Y_i\) given \(\mu\); \(\tau\); \(X_1, \ldots, X_n\); and the other \(Y_j\)'s.

d. Describe how to find the posterior distribution of \(\mu\) and \(\tau\) using Gibbs sampling.

e. Prove that the posterior mean of \(Y_i\) is the posterior probability that \(X_i\) came from the normal distribution with unknown mean and variance.

Short Answer


The conditional distributions needed for the Gibbs sampler are summarized below; for example, the conditional mean of \(\mu\) is

\(\mu_{\mathrm{cond}} = \frac{\lambda_0\mu_0 + \sum_{i=1}^n y_i x_i}{\lambda_0 + \sum_{i=1}^n y_i}\)

(a) Normal distribution;

(b) Gamma distribution;

(c) \(y_i\) can take only the values \(0\) and \(1\);

(d) Sample \(\mu\), \(\tau\), and the \(y_i\) iteratively from the conditional distributions in (a)-(c);

(e) The posterior mean of \(Y_i\) is the posterior probability that \(Y_i = 1\).

Step by step solution

01

Definition of a random variable

A random variable is a quantity whose value is determined by the outcome of a random experiment; formally, it is a function that assigns a number to each of the experiment's outcomes.

Suppose that \(\mu\) and \(\tau\) are random variables. Let the conditional distribution of \(\mu\) given \(\tau\) be the normal distribution with mean \(\mu_0\) and precision \(\lambda_0\tau\), and let the marginal distribution of \(\tau\) be the gamma distribution with parameters \(\alpha_0\) and \(\beta_0\). Then the joint distribution of \((\mu, \tau)\) is called the normal-gamma distribution with hyperparameters \(\mu_0, \lambda_0, \alpha_0, \beta_0\).
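Explicitly, the joint prior density implied by this definition is, up to a normalizing constant,

\(\xi(\mu, \tau) \propto \tau^{1/2} \exp\left(-\frac{\lambda_0\tau(\mu - \mu_0)^2}{2}\right) \cdot \tau^{\alpha_0 - 1} e^{-\beta_0\tau}, \quad \tau > 0.\)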

a. First, the conditional distribution of \(\mu\) given \(\tau\) and \(x_i, y_i, i = 1, 2, \ldots, n\) is normal. The mean of the conditional distribution is the following quotient:

\(\mu_{\mathrm{cond}} = \frac{\lambda_0\mu_0 + \sum_{i=1}^n y_i x_i}{\lambda_0 + \sum_{i=1}^n y_i}\)

This form follows from how the random variables \(y_i\) are defined: only observations with \(y_i = 1\) contribute to the likelihood for \(\mu\). The conditional precision is \(\tau\left(\lambda_0 + \sum_{i=1}^n y_i\right)\), so the variance of the conditional distribution is

\(\sigma^2_{\mathrm{cond}} = \frac{1}{\tau\left(\lambda_0 + \sum_{i=1}^n y_i\right)}.\)
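A minimal sketch of this conditional draw in Python with numpy (the hyperparameter names mu0 and lam0 are illustrative assumptions, not notation from the text):

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_mu(tau, x, y, mu0, lam0):
        """Draw mu from its conditional: normal with the mean and variance above."""
        s = lam0 + y.sum()                        # lambda_0 + sum of y_i
        mean = (lam0 * mu0 + (y * x).sum()) / s   # conditional mean
        sd = 1.0 / np.sqrt(tau * s)               # conditional standard deviation
        return rng.normal(mean, sd)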

b. It follows again from the definition that the conditional distribution of \(\tau\) given \(\mu\) and \(x_i, y_i, i = 1, 2, \ldots, n\) is a gamma distribution. The parameters are

\(\alpha_{\mathrm{cond}} = \alpha_0 + \frac{1}{2}\sum_{i=1}^n y_i + \frac{1}{2}\)

and

\(\beta_{\mathrm{cond}} = \beta_0 + \frac{\lambda_0\left(\mu - \mu_0\right)^2}{2} + \frac{1}{2}\sum_{i=1}^n y_i\left(x_i - \mu\right)^2\)
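The corresponding draw, sketched with numpy (note that numpy's gamma sampler takes shape and scale, so the rate \(\beta_{\mathrm{cond}}\) enters as its reciprocal):

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_tau(mu, x, y, mu0, lam0, a0, b0):
        """Draw tau from its conditional gamma distribution."""
        a = a0 + 0.5 * y.sum() + 0.5
        b = b0 + 0.5 * lam0 * (mu - mu0) ** 2 + 0.5 * (y * (x - mu) ** 2).sum()
        return rng.gamma(shape=a, scale=1.0 / b)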

c. Each random variable \(y_i\) can take only the value 1 or 0. Hence, the conditional distribution of \(y_i\) given \(\mu, \tau, X_i, i = 1, 2, \ldots, n\) and the other \(y_j\)'s is entirely determined by finding, e.g., \(\Pr(y_i = 1)\).

Use the fact that \(y_i = 1\) when \(X_i\) comes from the normal distribution with mean \(\mu\) and precision \(\tau\), and \(y_i = 0\) when \(X_i\) comes from the standard normal distribution.

The constant factor \(1/\sqrt{2\pi}\) cancels from both p.d.f.'s; thus, the probability is

\(\Pr\left(y_i = 1\right) = \frac{\sqrt{\tau}\exp\left(-\tau\left(x_i - \mu\right)^2/2\right)}{\sqrt{\tau}\exp\left(-\tau\left(x_i - \mu\right)^2/2\right) + \exp\left(-x_i^2/2\right)}.\)
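A sketch of this step in numpy; computing the ratio through the log-odds, as below, is a common numerically safer variant of the same formula (the direct exponential terms can underflow for extreme \(x_i\)):

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_y(mu, tau, x):
        """Draw each y_i as a Bernoulli variable with the probability above."""
        log_p1 = 0.5 * np.log(tau) - 0.5 * tau * (x - mu) ** 2  # log numerator
        log_p0 = -0.5 * x ** 2                          # log standard-normal kernel
        p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))      # same ratio, stable form
        return (rng.uniform(size=x.shape) < p1).astype(float)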

02

Gibbs sampling

Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for sampling from a complex joint distribution: it iteratively draws each variable from its conditional distribution given the current values of the other variables.

d. The Gibbs sampling algorithm (stated for two variables): the steps of the algorithm are

(1) Pick a starting value \(x_2^{(0)}\) for \(X_2\), and let \(i = 0\).

(2) Let \(x_1^{(i+1)}\) be a simulated value from the conditional distribution of \(X_1\) given that \(X_2 = x_2^{(i)}\).

(3) Let \(x_2^{(i+1)}\) be a simulated value from the conditional distribution of \(X_2\) given that \(X_1 = x_1^{(i+1)}\).

(4) Replace \(i\) by \(i + 1\) and repeat steps (2) and (3).

Estimating the parameters of the posterior distribution requires a large simulation run, and the conditional distributions from parts (a) to (c) are the ones to sample from. Start by splitting the data set into two subsets, preferably of equal size; one could do that by assigning each observation to one of the two distributions with probability 1/2, which gives the starting values of the \(y_i\). Next, start \(\mu\) and \(\tau\) at their posterior means, given that the observations with \(y_i = 1\) came from the distribution with unknown parameters. Then obtain the samples by cycling through the conditional distributions (a) to (c), and estimate the posterior by averaging them. Note that one should do this only after a burn-in period and, as mentioned earlier, over an extensive simulation run; a sketch follows below.
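Putting the three conditional draws together, a minimal self-contained Gibbs sampler sketch in Python; the hyperparameter values and the synthetic data set are illustrative assumptions, not part of the exercise:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical hyperparameters and synthetic data (for illustration only).
    mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0
    n = 50
    from_unknown = rng.uniform(size=n) < 0.5
    x = np.where(from_unknown, rng.normal(2.0, 0.7, n), rng.normal(0.0, 1.0, n))

    # Starting values: random 50/50 split, then mu from the flagged subset.
    y = (rng.uniform(size=n) < 0.5).astype(float)
    mu = (y * x).sum() / max(y.sum(), 1.0)
    tau = 1.0

    burn_in, n_iter = 1000, 5000
    mu_draws, tau_draws, y_draws = [], [], []

    for it in range(burn_in + n_iter):
        # (a) mu | tau, y, x : normal.
        s = lam0 + y.sum()
        mu = rng.normal((lam0 * mu0 + (y * x).sum()) / s, 1.0 / np.sqrt(tau * s))
        # (b) tau | mu, y, x : gamma (numpy takes shape/scale, hence 1/b).
        a = a0 + 0.5 * y.sum() + 0.5
        b = b0 + 0.5 * lam0 * (mu - mu0) ** 2 + 0.5 * (y * (x - mu) ** 2).sum()
        tau = rng.gamma(a, 1.0 / b)
        # (c) y_i | mu, tau, x : Bernoulli with the membership probability.
        log_p1 = 0.5 * np.log(tau) - 0.5 * tau * (x - mu) ** 2
        log_p0 = -0.5 * x ** 2
        p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        y = (rng.uniform(size=n) < p1).astype(float)
        if it >= burn_in:
            mu_draws.append(mu)
            tau_draws.append(tau)
            y_draws.append(y.copy())

    print("posterior mean of mu :", np.mean(mu_draws))
    print("posterior mean of tau:", np.mean(tau_draws))
    # Per part (e), the average of the y_i draws estimates the posterior
    # probability that each X_i came from the unknown-parameter distribution.
    print("membership probabilities:", np.mean(y_draws, axis=0)[:5])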

e. The random variables \(y_i\) can take only the values 1 and 0, and from the exercise, \(y_i = 1\) when \(x_i\) comes from the normal distribution with mean \(\mu\) and precision \(\tau\) unknown. The posterior mean of \(Y_i\) is therefore

\(E\left(Y_i \mid \boldsymbol{x}\right) = 1 \cdot \Pr\left(Y_i = 1 \mid \boldsymbol{x}\right) + 0 \cdot \Pr\left(Y_i = 0 \mid \boldsymbol{x}\right) = \Pr\left(Y_i = 1 \mid \boldsymbol{x}\right),\)

because the term with \(y_i = 0\) vanishes from the definition of the mean. By the first sentence, this is exactly the posterior probability that \(X_i\) came from the normal distribution with unknown mean and variance.

Hence,

(a) Normal distribution;

(b) Gamma distribution;

(c) \(y_i\) can take only the values \(0\) and \(1\);

(d) Sample \(\mu\), \(\tau\), and the \(y_i\) iteratively from the conditional distributions in (a)-(c);

(e) The posterior mean of \(Y_i\) is the posterior probability that \(Y_i = 1\).

