Problem 56


A group of \(n\) processors are arranged in an ordered list. When a job arrives, the first processor in line attempts it; if it is unsuccessful, then the next in line tries it; if it too is unsuccessful, then the next in line tries it, and so on. When the job is successfully processed or after all processors have been unsuccessful, the job leaves the system. At this point we are allowed to reorder the processors, and a new job appears. Suppose that we use the one-closer reordering rule, which moves the processor that was successful one closer to the front of the line by interchanging its position with the one in front of it. If all processors were unsuccessful (or if the processor in the first position was successful), then the ordering remains the same. Suppose that each time processor \(i\) attempts a job then, independently of anything else, it is successful with probability \(p_{i}\). (a) Define an appropriate Markov chain to analyze this model. (b) Show that this Markov chain is time reversible. (c) Find the long run probabilities.

Short Answer

In this problem, we analyze a system of \(n\) processors under the one-closer reordering rule. (a) The appropriate Markov chain has as its state the current ordering of the processors, so the state space consists of all \(n!\) permutations of \(\{1, 2, \ldots, n\}\). (b) The chain is time reversible: for any two orderings that differ by interchanging two adjacent positions, the detailed balance equations are satisfied by a product-form distribution. (c) The long-run probabilities are \[ \pi(i_1, \ldots, i_n) = C \prod_{j=1}^{n} \left( \frac{p_{i_j}}{1-p_{i_j}} \right)^{n-j}, \] where \(C\) is the normalizing constant that makes the probabilities sum to \(1\) over all \(n!\) permutations.

Step by step solution

01

Defining a Markov Chain

Let's represent the state of the system by the current ordering of the processors, i.e., a permutation \((i_1, i_2, \ldots, i_n)\) of \(\{1, 2, \ldots, n\}\), where \(i_j\) denotes the processor in the \(j^{th}\) position of the line. The Markov chain then has as its state space the set of all \(n!\) permutations. From a given ordering, one job is processed and the chain moves according to the one-closer rule: if the processor in position \(j > 1\) is the first to succeed, it is interchanged with the processor in position \(j-1\); if the front processor succeeds, or if all processors fail, the ordering is unchanged.
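As a sanity check on this state space, here is a minimal Python sketch (not part of the original solution; the function and variable names are my own) that builds the transition matrix of the ordering chain for small \(n\):

```python
from itertools import permutations

def one_closer_chain(p):
    """Transition matrix of the ordering chain under the one-closer rule.

    p[k] is the success probability of processor k (0-indexed). States are
    permutations of range(len(p)); P[x][y] is the probability that a single
    job moves ordering x to ordering y.
    """
    n = len(p)
    states = list(permutations(range(n)))
    index = {s: i for i, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for x in states:
        fail = 1.0  # probability that every processor tried so far has failed
        for pos in range(n):
            succ = fail * p[x[pos]]  # processor at `pos` is the first to succeed
            if pos == 0:
                P[index[x]][index[x]] += succ  # front success: order unchanged
            else:
                y = list(x)
                y[pos - 1], y[pos] = y[pos], y[pos - 1]  # winner moves up one
                P[index[x]][index[tuple(y)]] += succ
            fail *= 1.0 - p[x[pos]]
        P[index[x]][index[x]] += fail  # all processors failed: order unchanged
    return states, P
```

Each row of the resulting matrix sums to \(1\), since exactly one of the events "processor in position \(j\) succeeds first" or "all fail" occurs per job.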
02

Time Reversibility

First, let's consider two orderings \(x\) and \(y\) that differ by interchanging positions \(i-1\) and \(i\): say \(x = (\ldots, a, b, \ldots)\) with processor \(a\) in position \(i-1\) and processor \(b\) in position \(i\), and \(y = (\ldots, b, a, \ldots)\). The Markov chain is time reversible if there exists a distribution \(\pi\) satisfying the detailed balance equations \[ \pi(x)P(x,y) = \pi(y)P(y,x) \] for all pairs of states. The transition from \(x\) to \(y\) occurs when the processors in positions \(1, \ldots, i-1\) all fail and the processor in position \(i\) succeeds: \[ P(x,y) = \left[ \prod_{j=1}^{i-2} (1-p_{x_j}) \right] (1-p_a)\, p_b. \] Similarly, the reverse transition from \(y\) to \(x\) occurs when \(a\), now in position \(i\), is the first to succeed: \[ P(y,x) = \left[ \prod_{j=1}^{i-2} (1-p_{y_j}) \right] (1-p_b)\, p_a. \] Since \(x\) and \(y\) agree in positions \(1, \ldots, i-2\), the leading products are identical and detailed balance reduces to \[ \frac{\pi(y)}{\pi(x)} = \frac{(1-p_a)\,p_b}{(1-p_b)\,p_a} = \frac{p_b/(1-p_b)}{p_a/(1-p_a)}. \] The required ratio thus depends on \(a\) and \(b\) only through the odds \(p/(1-p)\) of the two interchanged processors, and self-transitions satisfy balance trivially, so a single distribution can satisfy every balance equation simultaneously: the chain is time reversible.
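The cancellation of the common prefix can be checked numerically. The sketch below uses illustrative success probabilities of my own choosing (\(n=3\), processors \(a, b, c\)) and compares the forward and backward transition probabilities for one adjacent swap:

```python
# Verify the detailed-balance ratio for a single adjacent swap (n = 3).
# Orderings x = (a, b, c) and y = (a, c, b) differ by swapping positions 2 and 3.
p = {"a": 0.3, "b": 0.6, "c": 0.8}  # illustrative values, not from the problem

# P(x, y): positions 1 and 2 fail, position 3 succeeds, moving c up one.
P_xy = (1 - p["a"]) * (1 - p["b"]) * p["c"]
# P(y, x): same pattern starting from y, with b now in position 3.
P_yx = (1 - p["a"]) * (1 - p["c"]) * p["b"]

# The ratio depends only on the two swapped processors through p/(1-p),
# exactly as detailed balance with a product-form pi requires.
ratio = P_xy / P_yx
expected = (p["c"] / (1 - p["c"])) / (p["b"] / (1 - p["b"]))
print(ratio, expected)  # both are approximately 2.6667
```

The common factor \((1-p_a)\) from the untouched prefix cancels, which is why the ratio involves only the two interchanged processors.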
03

Long Run Probabilities

Since the Markov chain is time reversible, the long-run probabilities solve the detailed balance equations \[ \pi(x)P(x,y) = \pi(y)P(y,x). \] From Step 2, interchanging processors \(a\) (position \(i-1\)) and \(b\) (position \(i\)) requires \[ \frac{\pi(y)}{\pi(x)} = \frac{p_b/(1-p_b)}{p_a/(1-p_a)}, \] which suggests a product-form guess in the odds \(p/(1-p)\): \[ \pi(i_1, \ldots, i_n) = C \prod_{j=1}^{n} \left( \frac{p_{i_j}}{1-p_{i_j}} \right)^{n-j}. \] Indeed, moving processor \(b\) from position \(i\) to position \(i-1\) raises its exponent from \(n-i\) to \(n-i+1\) while lowering \(a\)'s exponent from \(n-i+1\) to \(n-i\), so the ratio \(\pi(y)/\pi(x)\) is exactly \(\frac{p_b/(1-p_b)}{p_a/(1-p_a)}\) and every detailed balance equation holds. The constant \(C\) normalizes the probabilities to sum to \(1\) over all \(n!\) permutations. These are the long-run probabilities of the orderings in the processors' system.
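The product-form answer can be verified numerically for a small instance. The sketch below (my own code and probability values) computes \(\pi\) from the formula and checks stationarity, \(\pi P = \pi\), by pushing \(\pi\) through one transition of the chain:

```python
from itertools import permutations

def transition(p, x):
    """Distribution over next orderings after one job, starting from ordering x."""
    n, out, fail = len(p), {}, 1.0
    for pos in range(n):
        y = list(x)
        if pos > 0:
            y[pos - 1], y[pos] = y[pos], y[pos - 1]  # first success moves up one
        y = tuple(y)
        out[y] = out.get(y, 0.0) + fail * p[x[pos]]
        fail *= 1.0 - p[x[pos]]
    out[x] = out.get(x, 0.0) + fail  # all processors failed: ordering unchanged
    return out

def stationary(p):
    """pi(x) proportional to prod over positions j of (p/(1-p))^(n-1-j), 0-indexed."""
    states = list(permutations(range(len(p))))
    w = [1.0] * len(states)
    for i, x in enumerate(states):
        for pos, proc in enumerate(x):
            w[i] *= (p[proc] / (1 - p[proc])) ** (len(p) - 1 - pos)
    total = sum(w)
    return {x: wi / total for x, wi in zip(states, w)}

p = [0.3, 0.6, 0.8]  # illustrative success probabilities
pi = stationary(p)

# Check stationarity: (pi P)(y) should equal pi(y) for every ordering y.
piP = {}
for x, px in pi.items():
    for y, pr in transition(p, x).items():
        piP[y] = piP.get(y, 0.0) + px * pr
assert all(abs(piP[y] - pi[y]) < 1e-12 for y in pi)
```

Because every transition is either a self-loop or an adjacent interchange, the pairwise detailed balance equations checked in Step 2 are exactly what makes this global stationarity test pass.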

