Problem 16

An erratic bishop starts at the bottom left of a chess board and performs random moves. At each stage, she picks one of the available legal moves with equal probability, independently of earlier moves. Let \(X_{n}\) be her position after \(n\) moves. Show that \(\left(X_{n}: n \geq 0\right)\) is a reversible Markov chain, and find its invariant distribution. What is the mean number of moves before she returns to her starting square?

Short Answer

The chain is a random walk on the graph whose vertices are the 32 squares of the starting square's colour and whose edges are the legal bishop moves, so it is reversible with invariant distribution \( \pi(v) = d(v)/280 \), where \( d(v) \) is the number of legal moves from square \( v \). The mean return time to the starting corner is \( 1/\pi(\text{starting square}) = 280/7 = 40 \) moves.

Step by step solution

Step 01: Understand the Problem

We need to prove that the bishop's movement on the chessboard forms a reversible Markov chain, find its invariant distribution, and determine the expected number of moves before she returns to her starting square.
Step 02: Define the State Space

The bishop moves only diagonally, so she never leaves the colour of her starting square. The state space is therefore the 32 squares of the same colour as the bottom-left corner; she can reach every one of them, so the chain is irreducible on this set.
Step 03: Identify Transition Probabilities

From square \( i \) with \( d(i) \) available diagonal moves, the bishop moves to each of those squares with probability \( 1/d(i) \); transitions to all other squares have probability zero. These probabilities are what we need to verify the reversibility condition.
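
In symbols, writing \( d(i) \) for the number of legal moves available from square \( i \), $$ P(i, j)= \begin{cases}1 / d(i) & \text { if the bishop can move from } i \text { to } j \text { in one move, } \\ 0 & \text { otherwise. }\end{cases} $$ The value \( d(i) \) depends on the square: it is smallest at the corners and largest near the centre of the board.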
Step 04: Verify Reversibility Condition

A Markov chain is reversible if there exists an invariant probability distribution \( \pi \) such that for any states \( i \) and \( j \), the detailed balance equation \( \pi(i) P(i, j) = \pi(j) P(j, i) \) holds. Compute this for the bishop's movement.
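
Since the chain is a simple random walk on the graph of bishop moves, a natural candidate is \( \pi(i) \propto d(i) \), where \( d(i) \) is the number of legal moves from square \( i \). Writing \( D=\sum_{v} d(v) \), for any two squares \( i \) and \( j \) joined by a legal move, $$ \pi(i) P(i, j)=\frac{d(i)}{D} \cdot \frac{1}{d(i)}=\frac{1}{D}=\frac{d(j)}{D} \cdot \frac{1}{d(j)}=\pi(j) P(j, i), $$ while both sides are zero when \( i \) and \( j \) are not joined by a move, so detailed balance holds and the chain is reversible.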
Step 05: Determine Invariant Distribution

Because the bishop moves uniformly over the available squares, the detailed balance equations point to a distribution proportional to the number of legal moves from each square: take \( \pi(v) \propto d(v) \) and normalise so that the probabilities sum to one. Any distribution satisfying detailed balance automatically satisfies \( \pi P = \pi \).
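
Each diagonal of length \( L \) contributes \( L(L-1) \) ordered moves to the degree sum. The diagonals covering the squares of the corner's colour have lengths \( 8,6,6,4,4,2,2 \) in one direction and \( 7,7,5,5,3,3,1,1 \) in the other, so $$ D=\sum_{v} d(v)=(56+2 \cdot 30+2 \cdot 12+2 \cdot 2)+(2 \cdot 42+2 \cdot 20+2 \cdot 6)=144+136=280, $$ and therefore \( \pi(v)=d(v) / 280 \) for each reachable square \( v \).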
Step 06: Calculate Expected Return Time

The expected number of moves to return to the original square is the reciprocal of the probability of being in the original state in the invariant distribution: \( E(T) = 1/\pi(\text{starting square}) \).
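
From the bottom-left corner only the long diagonal is available, so \( d(\text{starting square}) = 7 \), and with \( D = 280 \) from the previous step, $$ E(T)=\frac{1}{\pi(\text {starting square})}=\frac{D}{d(\text {starting square})}=\frac{280}{7}=40 . $$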
Step 07: Solve the Equations

Combining Steps 5 and 6: the invariant distribution weights each reachable square in proportion to its number of legal moves, \( \pi(v)=d(v)/280 \), and the expected return time to the bottom-left corner is \( 280/7 = 40 \) moves. A quick numerical check of the degree count is sketched below.
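
As a sanity check (not part of the original solution), here is a minimal Python sketch that enumerates the bishop's legal moves from every square of the starting colour and recovers \( D = 280 \), \( d(\text{starting square}) = 7 \), and a mean return time of 40:

    def degree(x, y, n=8):
        """Number of legal bishop moves from square (x, y) on an n-by-n board."""
        count = 0
        for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            cx, cy = x + dx, y + dy
            while 0 <= cx < n and 0 <= cy < n:
                count += 1
                cx, cy = cx + dx, cy + dy
        return count

    # Squares of the same colour as the bottom-left corner (0, 0).
    squares = [(x, y) for x in range(8) for y in range(8) if (x + y) % 2 == 0]
    D = sum(degree(x, y) for x, y in squares)
    print(D, degree(0, 0), D / degree(0, 0))  # prints: 280 7 40.0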


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Invariant Distribution
An invariant distribution for a Markov chain is a probability distribution that remains unchanged as the system evolves over time. In simpler terms, if the Markov chain is started in this distribution, it stays in that distribution at every later time. It gives us a way to understand the long-term behavior of the Markov chain.

In the context of our erratic bishop on a chessboard, we look for an invariant distribution by determining probabilities for each board position. These probabilities reflect the bishop's long-term chance of being on each square as she moves unpredictably. The key property is that the sum of all these probabilities must equal 1, satisfying the normalization condition.

To find this invariant distribution, you look at the transition probabilities and apply the equation \( \pi P = \pi \), where \( \pi \) is the invariant distribution and \( P \) is the transition matrix of the Markov chain. This involves solving for \( \pi \) such that this equation holds true, ensuring the system remains in equilibrium over time.
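
For a random walk on a graph such as the bishop's, this check can be carried out coordinate by coordinate: for every square \( j \), $$ \sum_{i} \pi(i) P(i, j)=\sum_{i \sim j} \frac{d(i)}{D} \cdot \frac{1}{d(i)}=\frac{d(j)}{D}=\pi(j), $$ where \( i \sim j \) means that \( i \) and \( j \) are joined by a legal move, \( d(v) \) is the number of legal moves from square \( v \), and \( D=\sum_{v} d(v) \).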
Transition Probabilities
Transition probabilities define the likelihood of moving from one state to another within a Markov chain. They are fundamental for predicting the future behavior of the chain and are represented by the matrix \( P \). Each entry \( P(i, j) \) in this matrix represents the probability of transitioning from state \( i \) to state \( j \).

For our bishop on the chessboard, the transition probabilities are uniform, meaning that the bishop has an equal chance of moving to any of the legal diagonal squares from its current position. This uniformity simplifies calculations and is vital for checking the reversibility of the chain. It helps set up the conditions necessary for both verifying the reversible nature and finding the invariant distribution.

These probabilities are particularly used in the detailed balance equation, which is crucial for proving that the Markov chain is reversible. By understanding and calculating these probabilities, we gain insight into how the bishop's movement patterns evolve over time.
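
For example, from the bottom-left corner the bishop has seven legal moves (the other squares of the long diagonal), so each is chosen with probability \( 1/7 \); from the square diagonally adjacent to that corner there are nine legal moves, each chosen with probability \( 1/9 \).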
Detailed Balance Equation
The detailed balance equation is a fundamental principle for determining whether a Markov chain is reversible. It states that for a chain to be reversible, there needs to be a balance between transitions in both directions between any two states. Mathematically, it is represented as \( \pi(i) P(i, j) = \pi(j) P(j, i) \).

In our scenario with the bishop, we need to check this equation for all pairs of states, i.e., chessboard positions. This ensures that irrespective of where the bishop moves from or to, the flow of probability from state \( i \) to state \( j \) is balanced by the flow from state \( j \) to state \( i \).

This condition guarantees that, in equilibrium, the chain looks statistically the same whether it is run forwards or backwards in time. It also gives a practical shortcut for finding the invariant distribution, since any probability distribution satisfying detailed balance is automatically invariant.
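
As a concrete check for two neighbouring squares: the bottom-left corner has 7 legal moves and its diagonal neighbour has 9, so $$ \pi(\text {corner}) P(\text {corner}, \text {neighbour})=\frac{7}{280} \cdot \frac{1}{7}=\frac{1}{280}=\frac{9}{280} \cdot \frac{1}{9}=\pi(\text {neighbour}) P(\text {neighbour}, \text {corner}) . $$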
Expected Return Time
Expected return time is an important concept that helps us understand how often a state will recur within a Markov chain. Specifically, it tells us the average number of steps the chain will take to return to a given starting point.

For our erratic bishop, the expected return time to her starting square can be calculated using the formula \( E(T) = \frac{1}{\pi(\text{starting square})} \), where \( \pi(\text{starting square}) \) is the invariant probability for the initial square. Using the invariant distribution found above, the corner square has \( \pi = 7/280 \), so \( E(T) = 280/7 = 40 \) moves.

This measure tells us how long, on average, the bishop wanders over the accessible squares before returning home. It also illustrates a general fact about positive recurrent chains: mean return times are determined entirely by the invariant distribution, so understanding that distribution gives a direct handle on the chain's recurrent behavior.


Most popular questions from this chapter

Consider a collection of \(N\) books arranged in a line along a bookshelf. At successive units of time, a book is selected randomly from the collection. After the book has been consulted, it is replaced on the shelf one position to the left of its original position, with the book in that position moved to the right by one. That is, the selected book and its neighbour to the left swap positions. If the selected book is already in the leftmost position, it is returned there. All but one of the books have plain covers and are equally likely to be selected. The other book has a red cover. At each time unit, the red book will be selected with probability \(p\), where \(

Markov chain Monte Carlo. We wish to simulate a discrete random variable \(Z\) with mass function satisfying \(\mathbb{P}(Z=i) \propto \pi_{i}\), for \(i \in S\) and \(S\) countable. Let \(\mathbf{X}\) be an irreducible Markov chain with state space \(S\) and transition matrix \(P=\left(p_{i, j}\right)\). Let \(Q=\left(q_{i, j}\right)\) be given by $$ q_{i, j}= \begin{cases}\min \left\{p_{i, j},\left(\pi_{j} / \pi_{i}\right) p_{j, i}\right\} & \text { if } i \neq j \\ 1-\sum_{k: k \neq i} q_{i, k} & \text { if } i=j\end{cases} $$ Show that \(Q\) is the transition matrix of a Markov chain which is reversible in equilibrium, and has invariant distribution equal to the mass function of \(Z\).

A transition matrix is called doubly stochastic if its column sums equal 1, that is, if \(\sum_{i \in S} p_{i, j}=1\) for \(j \in S\). Suppose an irreducible chain with \(N(<\infty)\) states has a doubly stochastic transition matrix. Find its invariant distribution. Deduce that all states are positive recurrent and that, if the chain is aperiodic, then \(p_{i, j}(n) \rightarrow 1 / N\) as \(n \rightarrow \infty\).

Each morning, a student takes one of three books (labelled 1,2, and 3 ) from her shelf. She chooses book \(i\) with probability \(\alpha_{i}\), and choices on successive days are independent. In the evening, she replaces the book at the left-hand end of the shelf. If \(p_{n}\) denotes the probability that on day \(n\) she finds the books in the order \(1,2,3\) from left to right, show that \(p_{n}\) converges as \(n \rightarrow \infty\), and find the limit.

Let \(i\) be a state of an irreducible, positive recurrent Markov chain \(\mathbf{X}\), and let \(V_{n}\) be the number of visits to \(i\) between times 1 and \(n\). Let \(\mu=\mathbb{E}_{i}\left(T_{i}\right)\) and \(\sigma^{2}=\mathbb{E}_{i}\left(\left[T_{i}-\mu\right]^{2}\right)\) be the mean and variance of the first return time to the starting state \(i\), and assume \(0<\sigma^{2}<\infty\). Suppose \(X_{0}=i\). Show that $$ U_{n}=\frac{V_{n}-(n / \mu)}{\sqrt{n \sigma^{2} / \mu^{3}}} $$ converges in distribution to the normal distribution \(\mathrm{N}(0,1)\) as \(n \rightarrow \infty\).
