Q11E

Consider, once again, the model described in Example 7.5.10. Assume that \(n = 10\) and that the observed values of \(X_1, \ldots, X_{10}\) are

\(-0.92,\; -0.33,\; -0.09,\; 0.27,\; 0.50,\; -0.60,\; 1.66,\; -1.86,\; 3.29,\; 2.30\).

a. Fit the model to the observed data using the Gibbs sampling algorithm developed in the previous exercise. Use the following prior hyperparameters: \(\alpha_0 = 1,\ \beta_0 = 1,\ \mu_0 = 0,\) and \(\lambda_0 = 1\).

b. For each \(i\), estimate the posterior probability that \(x_i\) came from the normal distribution with unknown mean and variance.

Short Answer


a) Apply the Gibbs sampling algorithm: pick starting values, then repeatedly draw each quantity from its conditional distribution given the current values of the others.

b) Ten estimates of the posterior probability, one for each \(i = 1, 2, \ldots, 10\).

(a) Use the Gibbs sampling algorithm.

(b) \(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)

Step by step solution

01

(a) Definition of Gibbs sampling

Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for approximating a complex joint distribution: it iteratively draws each variable from its conditional distribution given the current values of the other variables.

The Gibbs Sampling Algorithm: The steps of the algorithm are

\((1.)\) Pick a starting value \(x_2^{(0)}\) for \(x_2\), and let \(i = 0\).

\((2.)\) Let \(x_1^{(i+1)}\) be a simulated value from the conditional distribution of \(x_1\) given that \(X_2 = x_2^{(i)}\).

\((3.)\) Let \(x_2^{(i+1)}\) be a simulated value from the conditional distribution of \(x_2\) given that \(X_1 = x_1^{(i+1)}\).

\((4.)\) Repeat steps 2. and 3., replacing \(i\) with \(i + 1\).
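The two-variable steps (1.)–(4.) can be sketched as follows. The bivariate normal target here is an illustrative assumption chosen because its conditional distributions are known in closed form; it is not the model of part (a).

```python
import numpy as np

# Gibbs sampling sketch for a bivariate standard normal with
# correlation rho, where each conditional is
#   X1 | X2 = x2  ~  N(rho * x2, 1 - rho^2)
#   X2 | X1 = x1  ~  N(rho * x1, 1 - rho^2)
rng = np.random.default_rng(0)
rho = 0.8
n_iter = 20_000

x2 = 0.0                          # step (1.): starting value x2^(0)
samples = np.empty((n_iter, 2))
for i in range(n_iter):
    # step (2.): draw x1^(i+1) from its conditional given X2 = x2^(i)
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))
    # step (3.): draw x2^(i+1) from its conditional given X1 = x1^(i+1)
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))
    samples[i] = (x1, x2)         # step (4.): repeat with i -> i + 1

# After a burn-in, the draws behave like samples from the joint
# distribution; their empirical correlation should be close to rho.
print(np.corrcoef(samples[5000:].T)[0, 1])
```

Discarding an initial burn-in segment, as above, is standard practice so that the retained draws are less affected by the arbitrary starting value.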

Given the data, the prior hyperparameters, and the Gibbs algorithm, one runs \(10\) Markov chains with a total of \(N = 100{,}000\) samples.

From these samples one may obtain the values required to estimate the part (b) probabilities.

First, one may use a for loop to fit the model described in the previous exercise.
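A minimal single-chain sketch of that loop follows. It assumes the two-component model of Example 7.5.10 — each \(X_i\) drawn with probability \(1/2\) from the normal distribution with unknown mean \(\mu\) and precision \(\tau\), and otherwise from a standard normal — with the normal-gamma prior \(\tau \sim \text{Gamma}(\alpha_0, \beta_0)\), \(\mu \mid \tau \sim N(\mu_0, (\lambda_0\tau)^{-1})\). The single chain of 20,000 iterations is a simplification of the 10-chain, \(N = 100{,}000\) run described above, so the resulting estimates will only approximate the reported values.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([-0.92, -0.33, -0.09, 0.27, 0.50,
              -0.60, 1.66, -1.86, 3.29, 2.30])
a0, b0, mu0, lam0 = 1.0, 1.0, 0.0, 1.0   # prior hyperparameters
n_iter, burn = 20_000, 2_000

def npdf(z, mean, prec):
    """Normal density with given mean and precision."""
    return np.sqrt(prec / (2 * np.pi)) * np.exp(-0.5 * prec * (z - mean) ** 2)

mu, tau = 0.0, 1.0                        # starting values
prob_sum = np.zeros_like(x)
for it in range(n_iter):
    # Draw the indicators given (mu, tau): y_i = 1 means x_i is
    # assigned to the unknown-(mu, tau) component.  The equal 1/2
    # mixing weights cancel in the ratio.
    p1 = npdf(x, mu, tau)
    p0 = npdf(x, 0.0, 1.0)
    p = p1 / (p1 + p0)
    y = rng.random(x.size) < p
    # Draw (mu, tau) given the indicators: a normal-gamma update
    # using only the n1 observations currently assigned y_i = 1.
    n1 = y.sum()
    xbar = x[y].mean() if n1 > 0 else 0.0
    ssq = ((x[y] - xbar) ** 2).sum() if n1 > 0 else 0.0
    lam1 = lam0 + n1
    mu1 = (lam0 * mu0 + n1 * xbar) / lam1
    a1 = a0 + n1 / 2.0
    b1 = b0 + 0.5 * ssq + lam0 * n1 * (xbar - mu0) ** 2 / (2.0 * lam1)
    tau = rng.gamma(a1, 1.0 / b1)         # numpy uses shape/scale, so scale = 1/rate
    mu = rng.normal(mu1, 1.0 / np.sqrt(lam1 * tau))
    if it >= burn:
        prob_sum += p                     # average P(y_i = 1 | mu, tau) over draws

probs = prob_sum / (n_iter - burn)        # part (b) estimates
print(np.round(probs, 3))
```

Averaging the conditional probabilities \(p\) rather than the binary draws \(y\) gives a lower-variance (Rao–Blackwellized) estimate of each posterior membership probability.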

02

(b) Estimation probabilities


There are ten estimates of the posterior probability, one for each \(i = 1, 2, \ldots, 10\).

Using the method from the previous exercise, the ten estimated probabilities are as follows:

\(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)

Hence,

(a) Use the Gibbs sampling algorithm.

(b) \(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)


Most popular questions from this chapter

Use the data in Table 10.6 on page 640. We are interested in the bias of the sample median as an estimator of the median of the distribution.

a. Use the non-parametric bootstrap to estimate this bias.

b. How many bootstrap samples does it appear that you need in order to estimate the bias to within 0.05 with probability 0.99?

Assume that one can simulate as many i.i.d. exponential random variables with parameter \(1\) as one wishes. Explain how one could use simulation to approximate the mean of the exponential distribution with parameter \(1\).

Suppose that we wish to approximate the integral \(\int g(x)\,dx\). Suppose that we have a p.d.f. \(f\) that we shall use as an importance function. Suppose that \(g(x)/f(x)\) is bounded. Prove that the importance sampling estimator has finite variance.

Let \(X_1, \ldots, X_{n+m}\) be a random sample from the exponential distribution with parameter \(\theta\). Suppose that \(\theta\) has the prior gamma distribution with known parameters \(\alpha\) and \(\beta\). Assume that we get to observe \(X_1, \ldots, X_n\), but \(X_{n+1}, \ldots, X_{n+m}\) are censored.

  1. First, suppose that the censoring works as follows: for \(i = 1, \ldots, m\), if \(X_{n+i} \le c\), then we learn only that \(X_{n+i} \le c\), but not the precise value of \(X_{n+i}\). Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of \(\theta\) in spite of the censoring.
  2. Next, suppose that the censoring works as follows: for \(i = 1, \ldots, m\), if \(X_{n+i} \ge c\), then we learn only that \(X_{n+i} \ge c\), but not the precise value of \(X_{n+i}\). Set up a Gibbs sampling algorithm that will allow us to simulate the posterior distribution of \(\theta\) in spite of the censoring.

Use the data consisting of 30 lactic acid concentrations in cheese, 10 from Example 8.5.4 and 20 from Exercise 16 in Sec. 8.6. Fit the same model used in Example 8.6.2 with the same prior distribution, but this time use the Gibbs sampling algorithm in Example 12.5.1. Simulate 10,000 pairs of \((\mu, \tau)\) parameters. Estimate the posterior mean of \(\left(\sqrt{\tau\mu}\right)^{-1}\), and compute the standard simulation error of the estimator.
