Problem 9

Suppose \(p\) is irreducible and has a stationary measure \(\mu\) with \(\sum_{x} \mu(x)=\infty\). Then \(p\) is not positive recurrent.

Short Answer

For an irreducible recurrent chain, the stationary measure is unique up to constant multiples. If \(p\) were positive recurrent, \(\mu\) would therefore be a constant multiple of the stationary distribution and would have finite total mass. Since \(\sum_{x} \mu(x)=\infty\), the irreducible Markov chain \(p\) cannot be positive recurrent.

Step by step solution

01

Understanding the Notions

Let's start by understanding each term clearly. A Markov chain is irreducible if every state can be reached from every other state. A stationary measure \(\mu\) of a Markov chain is a nonnegative measure satisfying \(\mu p = \mu\), that is, \(\sum_{x} \mu(x)\, p(x,y) = \mu(y)\) for all \(y\); it is left unchanged by the transition dynamics. Positive recurrence means that the expected return time to a given state is finite. Now, let's move on to the problem.
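To make these definitions concrete, consider the simple random walk on the integers (a hypothetical illustration, not part of the original exercise): the constant measure \(\mu(x)=1\) is stationary yet has infinite total mass, which is exactly the situation described in the problem. A quick numerical check of stationarity:

```python
# Hypothetical illustration: simple random walk on the integers,
# p(x, x+1) = p(x, x-1) = 1/2. The constant measure mu(x) = 1 satisfies
# mu p = mu, but sum_x mu(x) is infinite -- the situation in the problem.

def mu(x):
    # Constant stationary measure; its total mass over all integers is infinite.
    return 1.0

def p(y, x):
    # One-step transition probability of the simple random walk.
    return 0.5 if abs(y - x) == 1 else 0.0

# Check stationarity mu(x) = sum_y mu(y) p(y, x) on a window of states.
for x in range(-5, 6):
    incoming = sum(mu(y) * p(y, x) for y in range(x - 2, x + 3))
    assert abs(incoming - mu(x)) < 1e-12
print("mu(x) = 1 is stationary for the simple random walk")
```

The walk is recurrent but null recurrent: its stationary measures are exactly the constant multiples of \(\mu\), none of which can be normalized into a probability distribution.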
02

Applying the Definition of Positive Recurrence

A state in a Markov chain is positive recurrent if the expected time to return to that state is finite. Suppose, for contradiction, that \(p\) were positive recurrent. An irreducible positive recurrent chain has a stationary distribution \(\pi\) with \(\pi(x) = 1/E_{x} T_{x} > 0\) and \(\sum_{x} \pi(x) = 1\). Moreover, for an irreducible recurrent chain the stationary measure is unique up to constant multiples, so \(\mu = c\pi\) for some constant \(c > 0\), which gives \(\sum_{x} \mu(x) = c < \infty\). This contradicts the hypothesis \(\sum_{x} \mu(x) = \infty\). Therefore \(p\) is not positive recurrent.
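A complementary way to see why positive recurrence is impossible: for an irreducible chain, the solutions of \(\mu p = \mu\) form a one-dimensional ray, so if the chain had a stationary distribution (as every irreducible positive recurrent chain does), \(\mu\) would be a finite constant multiple of it, contradicting \(\sum_{x}\mu(x)=\infty\). A minimal numerical sketch of the one-dimensionality, assuming a hypothetical 3-state chain:

```python
import numpy as np

# Hypothetical 3-state irreducible chain. The solution space of mu P = mu
# is one-dimensional, so every stationary measure is a scalar multiple of
# the stationary distribution pi and therefore has finite total mass.
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])

# Stationary measures are left eigenvectors of P for eigenvalue 1,
# i.e. eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
null_dim = int(np.sum(np.isclose(eigvals, 1.0)))
assert null_dim == 1  # unique up to constant multiples

mu = np.real(eigvecs[:, np.isclose(eigvals, 1.0)].ravel())
pi = mu / mu.sum()            # normalizable: total mass is finite
assert np.allclose(pi @ P, pi)
print("pi =", pi)
```

Because the ray of stationary measures is one-dimensional, an infinite-mass \(\mu\) and a stationary probability distribution cannot coexist for the same irreducible chain.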


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Irreducible Markov Chains
An irreducible Markov chain represents a system where every state can be reached from every other state, possibly over multiple steps. This is an essential concept in the study of such chains because it ensures that the system is interconnected and that there are no isolated states.

In the context of our exercise, we consider a Markov chain to be irreducible, which reflects a scenario where, irrespective of one's initial position, it is theoretically possible to visit all the states of the system in the long run. This property is important when considering the long-term behavior and stability of a Markov chain. For students, it's crucial to visualize this concept; think of it as a game board where you can eventually move from any square to any other square, given enough moves.
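For a finite chain, irreducibility can be checked mechanically. Here is a sketch (the helper name `is_irreducible` and the example matrices are hypothetical) using the standard fact that a chain on \(n\) states is irreducible exactly when \((I + A)^{n-1}\) has no zero entries, where \(A\) is the zero-one pattern of the transition matrix:

```python
import numpy as np

# A chain on n states is irreducible iff every state reaches every other
# state; equivalently, (I + A)^(n-1) has no zero entries, where A marks
# which one-step transitions have positive probability.
def is_irreducible(P):
    n = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + (P > 0), n - 1)
    return bool(np.all(reach > 0))

# An irreducible 3-cycle versus a chain with an absorbing state.
cycle = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
absorbing = np.array([[1.0, 0.0],
                      [0.5, 0.5]])
print(is_irreducible(cycle))      # True
print(is_irreducible(absorbing))  # False: state 0 never reaches state 1
```

On the "game board" picture from the text: `cycle` is a board where every square is eventually reachable, while `absorbing` has a square you can never leave.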
Stationary Measure
A stationary measure, denoted by \(\mu\), is a remarkably stable characteristic of a Markov chain. When a Markov chain has a stationary measure, it means that the distribution of states does not change over time, even as the process moves from state to state.

This can be thought of as a balance where, over time, the proportion of a system's presence in each state remains constant, even as transitions occur. In our problem, the condition that \(\sum_{x}\mu(x)=\infty\) indicates an 'infinite' stationary measure, suggesting that the system does not reach a normalized steady-state distribution in the long run. Students should understand that in practical terms, a stationary measure provides a snapshot of the system that can be expected to look the same at any future time, given that the initial conditions remain undisturbed.
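By contrast with the infinite-mass case in this problem, a finite irreducible chain always has a normalizable stationary measure. A minimal sketch, assuming a hypothetical 2-state chain, that finds the stationary distribution by power iteration and checks that one more transition leaves it unchanged:

```python
import numpy as np

# Hypothetical 2-state chain. A stationary distribution pi satisfies
# pi P = pi, so the distribution over states stops changing over time.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])     # arbitrary starting distribution
for _ in range(200):          # power iteration: pi_{n+1} = pi_n P
    pi = pi @ P

assert np.allclose(pi @ P, pi)  # stationary: one more step changes nothing
print("pi =", pi)               # approximately [5/6, 1/6]
```

Power iteration converges here because the chain is aperiodic; solving \(\pi P = \pi\) with \(\sum_x \pi(x) = 1\) directly gives the same answer, \(\pi = (5/6,\, 1/6)\).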
Expected Return Time
The expected return time to a specific state in a Markov chain is the average number of steps it takes to return to that state after leaving it. If this time is finite for all states, the Markov chain is classified as positive recurrent; if it is infinite for at least one state, the chain is not positive recurrent.

In the problem provided, having an infinite stationary measure means that the average long-term behavior of the system involves potentially infinitely long durations before the system returns to certain states. This idea is crucial for students to grasp: a positive recurrent state is like having a guarantee that you'll eventually return home after a trip within a predictable timeframe. The absence of positive recurrence is akin to embarking on a journey with no definite time of return to your starting point.
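The precise link between return times and the stationary distribution is Kac's formula: for an irreducible positive recurrent chain, \(E_{x} T_{x} = 1/\pi(x)\). A simulation sketch on a hypothetical 2-state chain (all names here are illustrative):

```python
import random

# Hypothetical 2-state chain with stationary distribution pi = (5/6, 1/6).
# Kac's formula: E_x[T_x] = 1 / pi(x), so return times are finite in mean
# exactly when the stationary measure can be normalized.
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.5), (1, 0.5)]}

def step(x):
    # Sample the next state from the transition probabilities out of x.
    r, acc = random.random(), 0.0
    for y, prob in P[x]:
        acc += prob
        if r < acc:
            return y
    return y

def mean_return_time(x, trials=20000):
    # Monte Carlo estimate of the expected return time to state x.
    total = 0
    for _ in range(trials):
        state, t = step(x), 1
        while state != x:
            state, t = step(state), t + 1
        total += t
    return total / trials

random.seed(0)
print(mean_return_time(0))  # close to 1/pi(0) = 6/5 = 1.2
print(mean_return_time(1))  # close to 1/pi(1) = 6
```

When \(\sum_x \mu(x) = \infty\), no such \(\pi\) exists, and this reciprocal relationship is exactly what fails: some expected return time must be infinite.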


Most popular questions from this chapter

Let \(\xi_{1}, \xi_{2}, \ldots\) be i.i.d., taking each value in \(\{1,2, \ldots, N\}\) with probability \(1 / N\). Show that \(X_{n}=\left|\left\{\xi_{1}, \ldots, \xi_{n}\right\}\right|\) is a Markov chain and compute its transition probability.

For any transition matrix \(p\), define $$ \alpha_{n}=\sup _{i, j} \frac{1}{2} \sum_{k}\left|p^{n}(i, k)-p^{n}(j, k)\right| $$ The \(1 / 2\) is there because for any \(i\) and \(j\) we can define r.v.'s \(X\) and \(Y\) so that \(P(X=k)=p^{n}(i, k), P(Y=k)=p^{n}(j, k)\), and $$ P(X \neq Y)=(1 / 2) \sum_{k}\left|p^{n}(i, k)-p^{n}(j, k)\right| $$ Show that \(\alpha_{m+n} \leq \alpha_{n} \alpha_{m}.\) Here the coupling interpretation may keep you from getting lost in the algebra. Remark. Using \((9.1)\) in Chapter 1, we can conclude that $$ \frac{1}{n} \log \alpha_{n} \rightarrow \inf _{m \geq 1} \frac{1}{m} \log \alpha_{m} $$ so if \(\alpha_{m}<1\) for some \(m\), it approaches 0 exponentially fast.

Strong law for additive functionals. Suppose \(p\) is irreducible and has stationary distribution \(\pi\). Let \(f\) be a function with \(\sum_{y}|f(y)| \pi(y)<\infty\). Let \(T_{x}^{k}\) be the time of the \(k\)th return to \(x\). (i) Show that $$ V_{k}^{f}=f\left(X\left(T_{x}^{k}\right)\right)+\cdots+f\left(X\left(T_{x}^{k+1}-1\right)\right), \quad k \geq 1 \text { are i.i.d. } $$ with \(E\left|V_{k}^{f}\right|<\infty\). (ii) Let \(K_{n}=\inf \left\{k: T_{x}^{k} \geq n\right\}\) and show that $$ \frac{1}{n} \sum_{m=1}^{K_{n}} V_{m}^{f} \rightarrow \frac{E V_{1}^{f}}{E_{x} T_{x}^{1}}=\sum_{y} f(y) \pi(y) \quad P_{\mu}\text{-a.s.} $$ (iii) Show that \(\max _{1 \leq m \leq n} V_{m}^{|f|} / n \rightarrow 0\) and conclude $$ \frac{1}{n} \sum_{m=1}^{n} f\left(X_{m}\right) \rightarrow \sum_{y} f(y) \pi(y) \quad P_{\mu}\text{-a.s.} $$ for any initial distribution \(\mu\).

If \(X_{n}\) is a recurrent Harris chain on a countable state space, then \(S\) can only have one irreducible set of recurrent states but may have a nonempty set of transient states. For a concrete example, consider a branching process in which the probability of no children \(p_{0}>0\) and set \(A=B=\{0\}\).

A gambler is playing roulette and betting \(\$ 1\) on black each time. The probability she wins \(\$ 1\) is \(18 / 38\), and the probability she loses \(\$ 1\) is \(20 / 38\). (i) Calculate the probability that starting with \(\$ 20\) she reaches \(\$ 40\) before losing her money. (ii) Use the fact that \(X_{n}+2 n / 38\) is a martingale to calculate \(E\left(T_{40} \wedge T_{0}\right)\).
