
Let \({z_1},{z_2}, \ldots \) form a Markov chain, and assume that the distribution of \({z_1}\) is the stationary distribution. Show that the joint distribution of \(\left( {{z_1},{z_2}} \right)\) is the same as the joint distribution of \(\left( {{z_i},{z_{i + 1}}} \right)\) for all \(i > 1\). For convenience, you may assume that the Markov chain has finite state space, but the result holds in general.

Short Answer

Expert verified

For a Markov chain \({z_1},{z_2}, \ldots \), assume that the distribution of \({z_1}\) is the stationary distribution.

The joint probability mass function of \(\left( {{z_1},{z_2}} \right)\) is

\({g_{1,2}}\left( {{z_1},{z_2}} \right) = g\left( {{z_1}} \right)h\left( {{z_2}\mid {z_1}} \right)\)

Use the fact that \({Z_1}\) has the stationary distribution, so every \({Z_i}\) does, and the joint distributions coincide.

Step by step solution

01

Definition of the stationary distribution

A stationary distribution of a Markov chain is a probability distribution over the states that is left unchanged by the transition matrix (or transition operator): if \(\pi \) is stationary and \(P\) is the transition matrix, then \(\pi P = \pi \).

The distribution of \({z_1}\) is the stationary distribution. By showing that \({z_i}\) has the stationary distribution for every \(i\), it follows that \(\left( {{z_1},{z_2}} \right)\) has the same distribution as \(\left( {{z_i},{z_{i + 1}}} \right)\).

The joint probability mass function of \(\left( {{z_1},{z_2}} \right)\) is

\({g_{1,2}}\left( {{z_1},{z_2}} \right) = g\left( {{z_1}} \right)h\left( {{z_2}\mid {z_1}} \right)\)
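This factorization can be checked numerically for a finite-state chain. The sketch below uses a small hypothetical 3-state transition matrix (an arbitrary choice, not from the text): it computes the stationary distribution as the left eigenvector of \(P\) with eigenvalue 1, forms the joint distribution of \(\left( {{z_1},{z_2}} \right)\) as \(g\left( {{z_1}} \right)h\left( {{z_2}\mid {z_1}} \right)\), and verifies that the joint distribution of \(\left( {{z_2},{z_3}} \right)\) is the same matrix.

```python
import numpy as np

# Hypothetical 3-state transition matrix P (rows sum to 1); any ergodic
# chain would do for this check.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# If z_1 ~ pi, the joint p.f. of (z_1, z_2) is g(z1) h(z2 | z1) = pi[z1] * P[z1, z2].
joint_12 = pi[:, None] * P

# Marginal of z_2 is pi again (pi is invariant under P), so the joint of
# (z_2, z_3) is built from the same marginal -- the induction step of the proof.
pi_2 = joint_12.sum(axis=0)            # distribution of z_2
joint_23 = pi_2[:, None] * P

print(np.allclose(pi_2, pi))           # stationarity propagates
print(np.allclose(joint_23, joint_12)) # the two joints agree
```

The same two lines of reasoning, repeated, give the claim for every \(i\).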

02

Proof by simple induction

where \(g\) is the p.f. or p.d.f. of \({z_1}\),

and \(h\) is the conditional p.f. or p.d.f. of \({z_2}\) given that \({Z_1} = {z_1}\). Because the chain is time-homogeneous, \(h\) is the same conditional distribution at every step.

That \({z_i}\) has the stationary distribution for all \(i\) is proven by simple induction: the base case is that \({Z_1}\) is stationary, and if \({z_i}\) has the stationary distribution, then so does \({z_{i + 1}}\), because the stationary distribution is invariant under the transition. Since \({z_i}\) has the stationary distribution, it follows that

\({g_{i,i + 1}}\left( {{z_i},{z_{i + 1}}} \right) = g\left( {{z_i}} \right)h\left( {{z_{i + 1}}\mid {z_i}} \right) = {g_{1,2}}\left( {{z_i},{z_{i + 1}}} \right)\)

for arbitrary \(i\), which is what was to be shown.

Hence, using that \({Z_1}\) has the stationary distribution, \(\left( {{z_1},{z_2}} \right)\) and \(\left( {{z_i},{z_{i + 1}}} \right)\) have the same joint distribution for all \(i > 1\).
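The result can also be seen by simulation. The sketch below uses a hypothetical 2-state chain (the matrix, stationary vector, and sample sizes are arbitrary choices): it starts many short chains from the stationary distribution and compares the empirical joint distribution of \(\left( {{z_1},{z_2}} \right)\) with that of \(\left( {{z_5},{z_6}} \right)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state chain; pi solves pi P = pi (0.8*0.9 + 0.2*0.4 = 0.8).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])

# Simulate many short chains started from pi, then compare the empirical
# joint distribution of (z_1, z_2) with that of (z_5, z_6).
n_rep, length = 50_000, 6
counts_12 = np.zeros((2, 2))
counts_56 = np.zeros((2, 2))
for _ in range(n_rep):
    z = [rng.choice(2, p=pi)]
    for _ in range(length - 1):
        z.append(rng.choice(2, p=P[z[-1]]))
    counts_12[z[0], z[1]] += 1
    counts_56[z[4], z[5]] += 1

max_gap = np.abs(counts_12 / n_rep - counts_56 / n_rep).max()
print(max_gap)  # small: the two joints agree up to sampling noise
```

The largest cell-wise gap between the two empirical joint distributions is on the order of the Monte Carlo noise, consistent with the theorem.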


Most popular questions from this chapter

Consider, once again, the model described in Example \(7.5.10\). Assume that \(n = 10\) and the observed values of \({X_1}, \ldots ,{X_{10}}\) are

\( - 0.92,\,\, - 0.33,\,\, - 0.09,\,\,\,0.27,\,\,\,0.50, - 0.60,\,1.66,\, - 1.86,\,\,\,3.29,\,\,\,2.30\).

a. Fit the model to the observed data using the Gibbs sampling algorithm developed in Exercise. Use the following prior hyperparameters: \({\alpha _0} = 1,\,{\beta _0} = 1,\,{\mu _0} = 0\,\,{\rm{and}}\,\,{\lambda _0} = 1\).

b. For each \(i\), estimate the posterior probability that \({x_i}\) came from the normal distribution with unknown mean and variance.

Use the data in Table \(11.5\) on page \(699\). Suppose that \({y_i}\) is the logarithm of pressure and \({x_i}\) is the boiling point for the \(i\)th observation, \(i = 1, \ldots ,17\). Use the robust regression scheme described in Exercise \(8\) with \(a = 5\), \(b = 0.1\), and \(f = 0.1\). Estimate the posterior means and standard deviations of the parameters \({\beta _0},{\beta _1}\), and \(n\).

Test the standard normal pseudo-random number generator on your computer by generating a sample of size 10,000 and drawing a normal quantile plot. How straight does the plot appear to be?
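One way to carry out this check without a plotting library is to compare sorted sample values against theoretical standard-normal quantiles and measure straightness by their correlation (a sketch; the seed and plotting positions are arbitrary choices, and `statistics.NormalDist` from the standard library supplies the inverse normal c.d.f.):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
sample = np.sort(rng.standard_normal(10_000))

# Plotting positions (k - 0.5)/n and the matching theoretical quantiles.
nd = NormalDist()
probs = (np.arange(1, sample.size + 1) - 0.5) / sample.size
theoretical = np.array([nd.inv_cdf(p) for p in probs])

# Correlation near 1 means the quantile plot is close to a straight line.
r = np.corrcoef(theoretical, sample)[0, 1]
print(round(r, 4))
```

A sound generator gives a correlation extremely close to 1; visible curvature in the tails would pull it down.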

Let \(f\left( {{x_1},{x_2}} \right)\) be a joint p.d.f. Suppose that \(\left( {x_1^{\left( i \right)},x_2^{\left( i \right)}} \right)\) has the joint p.d.f. \(f\). Let \(\left( {x_1^{\left( {i + 1} \right)},x_2^{\left( {i + 1} \right)}} \right)\) be the result of applying steps \(2\) and \(3\) of the Gibbs sampling algorithm on page \(824\). Prove that \(\left( {x_1^{\left( {i + 1} \right)},x_2^{\left( i \right)}} \right)\) and \(\left( {x_1^{\left( {i + 1} \right)},x_2^{\left( {i + 1} \right)}} \right)\) also have the joint p.d.f. \(f\).

If \(X\) has the p.d.f. \(1/{x^2}\) for \(x > 1\), the mean of \(X\) is infinite. What would you expect to happen if you simulated a large number of random variables with this p.d.f. and computed their average?
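This distribution is easy to simulate by the inverse-c.d.f. method: its c.d.f. is \(F\left( x \right) = 1 - 1/x\), so \(X = 1/\left( {1 - U} \right)\) with \(U \sim {\rm{Uniform}}\left( {0,1} \right)\) has p.d.f. \(1/{x^2}\). The sketch below (seed and sample size are arbitrary choices) tracks the running average, which fails to settle down because the mean is infinite:

```python
import numpy as np

rng = np.random.default_rng(2)

# Inverse-c.d.f. method: F(x) = 1 - 1/x on x > 1, so X = 1/(1 - U) >= 1.
x = 1.0 / (1.0 - rng.uniform(size=1_000_000))

# The running averages are repeatedly dragged upward by rare, enormous
# observations and never converge to a limit.
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
print(running_mean[999], running_mean[-1])
```

Rerunning with different seeds gives wildly different trajectories, which is exactly the behavior the question asks you to anticipate.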
