Problem 27


Consider an absorbing Markov chain with state space \(S .\) Let \(f\) be a function defined on \(S\) with the property that $$ f(i)=\sum_{j \in S} p_{i j} f(j) $$ or in vector form $$ \mathbf{f}=\mathbf{P f} $$ Then \(f\) is called a harmonic function for \(\mathbf{P}\). If you imagine a game in which your fortune is \(f(i)\) when you are in state \(i,\) then the harmonic condition means that the game is fair in the sense that your expected fortune after one step is the same as it was before the step. (a) Show that for \(f\) harmonic $$ \mathbf{f}=\mathbf{P}^{n} \mathbf{f} $$ for all \(n\). (b) Show, using (a), that for \(f\) harmonic $$ \mathbf{f}=\mathbf{P}^{\infty} \mathbf{f} $$ where $$ \mathbf{P}^{\infty}=\lim _{n \rightarrow \infty} \mathbf{P}^{n}=\left(\begin{array}{c|c} \mathbf{0} & \mathbf{B} \\ \hline \mathbf{0} & \mathbf{I} \end{array}\right) $$ (c) Using (b), prove that when you start in a transient state \(i\) your expected final fortune $$ \sum_{k} b_{i k} f(k) $$ is equal to your starting fortune \(f(i)\). In other words, a fair game on a finite state space remains fair to the end. (Fair games in general are called martingales. Fair games on infinite state spaces need not remain fair with an unlimited number of plays allowed. For example, consider the game of Heads or Tails (see Example 1.4). Let Peter start with 1 penny and play until he has 2. Then Peter will be sure to end up 1 penny ahead.)

Short Answer

Expert verified
For harmonic \( f \), \( \mathbf{f} = \mathbf{P}^n \mathbf{f} \) for every \( n \), and in the limit \( \mathbf{f} = \mathbf{P}^\infty \mathbf{f} \). Hence, starting from a transient state \( i \), the expected final fortune \( \sum_k b_{ik} f(k) \) equals the starting fortune \( f(i) \): the fair game stays fair to the end.

Step by step solution

01

Define Harmonic Function Condition

A function \( f \) is harmonic for a Markov chain with transition matrix \( \mathbf{P} \) if it satisfies \( \mathbf{f} = \mathbf{P} \mathbf{f} \). This means that the expected value of \( f \) after one step remains unchanged, implying a fair game.
02

Prove General Form \( \mathbf{f} = \mathbf{P}^n \mathbf{f} \)

To show \( \mathbf{f} = \mathbf{P}^n \mathbf{f} \) for all \( n \), use induction on \( n \). Base case: for \( n = 1 \), \( \mathbf{f} = \mathbf{P} \mathbf{f} \) is exactly the harmonic condition. Inductive step: assume \( \mathbf{f} = \mathbf{P}^k \mathbf{f} \). Then \( \mathbf{P}^{k+1} \mathbf{f} = \mathbf{P} ( \mathbf{P}^k \mathbf{f} ) = \mathbf{P} \mathbf{f} = \mathbf{f} \), so the identity holds for \( n = k + 1 \) and therefore for all \( n \).
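The induction above can be checked numerically. A minimal sketch, assuming a hypothetical fair gambler's-ruin chain on \( \{0, 1, 2, 3\} \) (my example, not the textbook's), where \( f(i) = i \) is harmonic:

```python
import numpy as np

# Canonical order: transient states {1, 2} first, then absorbing {0, 3}.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # from 1: to 2 or ruin at 0, each w.p. 1/2
    [0.5, 0.0, 0.0, 0.5],   # from 2: to 1 or win at 3, each w.p. 1/2
    [0.0, 0.0, 1.0, 0.0],   # 0 is absorbing
    [0.0, 0.0, 0.0, 1.0],   # 3 is absorbing
])
f = np.array([1.0, 2.0, 0.0, 3.0])   # f(i) = i, the gambler's fortune

assert np.allclose(P @ f, f)         # harmonic: f = P f
# Induction in action: f = P^n f for every n.
for n in range(1, 20):
    assert np.allclose(np.linalg.matrix_power(P, n) @ f, f)
print("f = P^n f holds for n = 1..19")
```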
03

Show Limit Behavior \( \mathbf{f} = \mathbf{P}^\infty \mathbf{f} \)

From Step 2, \( \mathbf{f} = \mathbf{P}^n \mathbf{f} \) for all \( n \). Since each entry of \( \mathbf{P}^n \) converges for an absorbing chain, taking the limit \( n \to \infty \) gives \( \mathbf{f} = \lim_{n \to \infty} \mathbf{P}^n \mathbf{f} = \mathbf{P}^\infty \mathbf{f} \), where, in canonical form (transient states first, then absorbing states), \( \mathbf{P}^\infty = \left(\begin{array}{c|c} \mathbf{0} & \mathbf{B} \\ \hline \mathbf{0} & \mathbf{I} \end{array}\right) \).
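The limit matrix can be approximated by a large power of \( \mathbf{P} \). A numerical sketch of part (b), again assuming a hypothetical fair gambler's-ruin chain (not the textbook's example) in canonical order:

```python
import numpy as np

# Transient states {1, 2} first, then absorbing states {0, 3}.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
f = np.array([1.0, 2.0, 0.0, 3.0])   # harmonic: f(i) = i

# P^n for large n approximates P^inf: the transient-to-transient block
# tends to 0 and the transient-to-absorbing block becomes B.
P_inf = np.linalg.matrix_power(P, 200)
assert np.allclose(P_inf[:2, :2], 0)              # the 0 block
assert np.allclose(P_inf[:2, 2:], [[2/3, 1/3],
                                   [1/3, 2/3]])   # the B block
assert np.allclose(P_inf @ f, f)                  # f = P^inf f
print("f = P^inf f verified")
```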
04

Evaluate Expected Final Fortune

From Step 3, if \( i \) is a transient state, row \( i \) of \( \mathbf{P}^\infty \) is \( (\mathbf{0} \mid b_{i \cdot}) \), so \( f(i) = (\mathbf{P}^\infty \mathbf{f})_i = \sum_{k} b_{i k} f(k) \), where the sum runs over the absorbing states \( k \) and \( b_{ik} \) is the probability of being absorbed in \( k \) when starting from \( i \). Thus the expected final fortune equals the starting fortune \( f(i) \), preserving the fairness of the game to the end.
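Part (c) can also be verified through the fundamental-matrix identity \( \mathbf{B} = \mathbf{N}\mathbf{R} = (\mathbf{I} - \mathbf{Q})^{-1} \mathbf{R} \). A sketch using the same hypothetical gambler's-ruin blocks (an assumption, not the textbook's example):

```python
import numpy as np

# Canonical blocks: transient states {1, 2}, absorbing states {0, 3}.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])          # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])          # transient -> absorbing (to 0, to 3)
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
B = N @ R                           # absorption probabilities b_{ik}

f_abs = np.array([0.0, 3.0])        # fortunes at absorbing states 0 and 3
f_trans = np.array([1.0, 2.0])      # starting fortunes at states 1 and 2

# Expected final fortune sum_k b_{ik} f(k) equals starting fortune f(i):
assert np.allclose(B @ f_abs, f_trans)
print(B @ f_abs)   # -> [1. 2.]
```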


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Harmonic Function
In the context of absorbing Markov chains, a harmonic function is a fascinating concept. Imagine you have a system where each state corresponds to a different fortune. A function \( f \) is defined as harmonic for a Markov chain if it satisfies the equation \( \mathbf{f} = \mathbf{P} \mathbf{f} \). This means that applying the transition matrix \( \mathbf{P} \) to the function does not change the expected value.

In simpler terms, if you're in any state with a certain fortune, the average fortune after a transition remains unchanged.
- This property is crucial as it captures the essence of fair play in probabilistic games.
- Mathematically: if you are in state \( i \), your fortune is \( f(i) \), and your expected fortune after one transition, \( \sum_j p_{ij} f(j) \), still equals \( f(i) \).

Understanding harmonic functions allows us to explore fair games deeply, revealing the underpinnings of stability and fairness in these systems.
Fair Game
The concept of a fair game is central to the study of harmonic functions within Markov processes. A game is considered fair if the expected value of your fortune does not change over time.

In relation to absorbing Markov chains, fairness is derived from the harmonic condition:
- Your fortune in state \( i \) is represented by a harmonic function, \( f(i) \).
- The expected fortune after a transition does not deviate from \( f(i) \).

This unchanging nature signifies that the game doesn’t inherently favor or disadvantage the player over time. It implies that any state-based gambling or decision-making based on these functions is unbiased.
- Such fairness guarantees that in transient states, your expected final outcome mirrors your initial state fortune, maintaining the equilibrium of the game.

By analyzing how this fairness is preserved in a structured way, students can grasp the broader implications of harmonic functions on financial or probabilistic decision making.
Transition Matrix
A transition matrix, \( \mathbf{P} \), is a fundamental component of Markov chains and plays a key role in defining harmonic functions. It encapsulates the probabilities of moving from one state to another within a system.

- Each element \( p_{ij} \) of this matrix represents the probability of transitioning from state \( i \) to state \( j \).
- The transition matrix \( \mathbf{P} \) is often partitioned into sub-blocks when dealing with absorbing and transient states.

In the context of harmonic functions, the equation \( \mathbf{f} = \mathbf{P} \mathbf{f} \) ensures that no matter how many steps \( n \) you take, your expected fortune remains the same.
- This is exemplified by \( \mathbf{P}^n \), which gives the transition probabilities over \( n \) steps.
- In the limit, as \( n \) increases, the matrix converges to \( \mathbf{P}^\infty \), a definitive, stable form.

Thus, the transition matrix not only dictates probabilities but also ensures that the characteristic of a fair game persists through its harmonic nature.
Expected Value
Expected value is a statistical measure that is pivotal in evaluating the fairness of a game. It represents the average outcome if an experiment is repeated many times.

In the framework of harmonic functions and Markov chains, expected value helps affirm the game's fairness by demonstrating consistent outcomes.
- When a function \( f \) is harmonic, your expected future value remains what it initially was.
- The expected value formalizes the idea that you only risk what you start with, with no hidden gains or losses.

This is particularly evident when considering the relation \( \sum_{k} b_{ik} f(k) = f(i) \).
- This expression affirms that, even in transient states, the expected final fortune correlates perfectly with the starting position.
- It offers a robust structure to calculate average outcomes and make informed predictions about possible results.

Harnessing the power of expected values within this framework solidifies understanding of how fairness operates within stochastic processes.


Most popular questions from this chapter

Here is an elegant method due to Guibas and Odlyzko \(^{10}\) to obtain the expected time to reach a pattern, say \(\mathrm{HTH},\) for the first time. Let \(f(n)\) be the number of sequences of length \(n\) which do not have the pattern HTH. Let \(f_{p}(n)\) be the number of sequences that have the pattern for the first time after \(n\) tosses. To each element of \(f(n),\) add the pattern HTH. Then divide the resulting sequences into three subsets: the set where HTH occurs for the first time at time \(n+1\) (for this, the original sequence must have ended with \(\mathrm{HT}\) ); the set where HTH occurs for the first time at time \(n+2\) (cannot happen for this pattern); and the set where the sequence HTH occurs for the first time at time \(n+3\) (the original sequence ended with anything except \(\mathrm{HT}\) ). Doing this, we have $$ f(n)=f_{p}(n+1)+f_{p}(n+3) $$ Thus, $$ \frac{f(n)}{2^{n}}=\frac{2 f_{p}(n+1)}{2^{n+1}}+\frac{2^{3} f_{p}(n+3)}{2^{n+3}} $$ If \(T\) is the time that the pattern occurs for the first time, this equality states that $$ P(T>n)=2 P(T=n+1)+8 P(T=n+3) $$ Show that if you sum this equality over all \(n\) you obtain $$ \sum_{n=0}^{\infty} P(T>n)=2+8=10 $$ Show that for any integer-valued random variable $$ E(T)=\sum_{n=0}^{\infty} P(T>n) $$ and conclude that \(E(T)=10 .\) Note that this method of proof makes very clear that \(E(T)\) is, in general, equal to the expected amount the casino pays out and avoids the martingale system theorem used by Li.
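The conclusion \( E(T) = 10 \) for the pattern HTH can be sanity-checked by simulation. A Monte Carlo sketch (my own check, not the Guibas-Odlyzko argument itself):

```python
import random

def time_to_pattern(rng, pattern="HTH"):
    """Toss a fair coin until `pattern` first appears; return the toss count."""
    window = ""
    n = 0
    while True:
        window = (window + rng.choice("HT"))[-len(pattern):]
        n += 1
        if window == pattern:
            return n

rng = random.Random(0)
trials = 200_000
avg = sum(time_to_pattern(rng) for _ in range(trials)) / trials
print(avg)   # close to the exact value E(T) = 10
```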

Consider a game played as follows: You are given a regular Markov chain with transition matrix \(\mathbf{P},\) fixed probability vector \(\mathbf{w},\) and a payoff function \(\mathbf{f}\) which assigns to each state \(s_{i}\) an amount \(f_{i}\) which may be positive or negative. Assume that \(\mathbf{w} \mathbf{f}=0\). You watch this Markov chain as it evolves, and every time you are in state \(s_{i}\) you receive an amount \(f_{i} .\) Show that your expected winning after \(n\) steps can be represented by a column vector \(\mathbf{g}^{(n)}\), with $$ \mathbf{g}^{(n)}=\left(\mathbf{I}+\mathbf{P}+\mathbf{P}^{2}+\cdots+\mathbf{P}^{n}\right) \mathbf{f} $$ Show that as \(n \rightarrow \infty, \mathbf{g}^{(n)} \rightarrow \mathbf{g}\) with \(\mathbf{g}=\mathbf{Z f}\).
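The convergence \( \mathbf{g}^{(n)} \to \mathbf{Z} \mathbf{f} \) can be checked numerically. A sketch assuming a small regular chain and a payoff \( \mathbf{f} \) with \( \mathbf{w}\mathbf{f} = 0 \) (both my choices, not given in the exercise):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
w = np.array([1/3, 2/3])             # fixed probability vector: w P = w
f = np.array([2.0, -1.0])            # payoff with w f = 0
assert np.isclose(w @ f, 0)

# g^(n) = (I + P + ... + P^n) f, accumulated term by term.
g = f.copy()
Pk_f = f.copy()
for _ in range(100):
    Pk_f = P @ Pk_f
    g += Pk_f

# Fundamental matrix Z = (I - P + W)^{-1}, where every row of W is w.
W = np.tile(w, (2, 1))
Z = np.linalg.inv(np.eye(2) - P + W)
assert np.allclose(g, Z @ f)         # g^(n) has converged to Z f
print(g)
```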

Let \(\mathbf{P}\) be the transition matrix of an ergodic Markov chain. Let \(\mathbf{w}\) be a fixed probability vector (i.e., \(\mathbf{w}\) is a row vector with \(\mathbf{w} \mathbf{P}=\mathbf{w}) .\) Show that if \(w_{i}=0\) and \(p_{j i}>0\) then \(w_{j}=0\). Use this to show that the fixed probability vector for an ergodic chain cannot have any 0 entries.

Define \(\mathbf{P}\) and \(\mathbf{y}\) by $$ \mathbf{P}=\left(\begin{array}{cc} .5 & .5 \\ .25 & .75 \end{array}\right), \quad \mathbf{y}=\left(\begin{array}{l} 1 \\ 0 \end{array}\right) $$ Compute \(\mathbf{P y}, \mathbf{P}^{2} \mathbf{y},\) and \(\mathbf{P}^{4} \mathbf{y}\) and show that the results are approaching a constant vector. What is this vector?
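A numerical sketch of the iteration in the exercise above (the identification of the limit is left to the reader):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
y = np.array([1.0, 0.0])

# Compute P^n y for increasing n and watch the entries equalize.
for n in (1, 2, 4, 50):
    print(n, np.linalg.matrix_power(P, n) @ y)
# The iterates approach a constant vector: each entry tends to w y,
# where w is the fixed probability vector of P.
```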

Consider the following process. We have two coins, one of which is fair, and the other of which has heads on both sides. We give these two coins to our friend, who chooses one of them at random (each with probability \(1 / 2\) ). During the rest of the process, she uses only the coin that she chose. She now proceeds to toss the coin many times, reporting the results. We consider this process to consist solely of what she reports to us. (a) Given that she reports a head on the \(n\) th toss, what is the probability that a head is thrown on the \((n+1)\) st toss? (b) Consider this process as having two states, heads and tails. By computing the other three transition probabilities analogous to the one in part (a), write down a "transition matrix" for this process. (c) Now assume that the process is in state "heads" on both the \((n-1)\) st and the \(n\) th toss. Find the probability that a head comes up on the \((n+1)\) st toss. (d) Is this process a Markov chain?
