Chapter 11: Problem 7
Give an algorithm for simulating a random variable having density function
$$
f(x)=30\left(x^{2}-2 x^{3}+x^{4}\right), \quad 0<x<1.
$$
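Here \(f\) is the Beta\((3,3)\) density \(30x^{2}(1-x)^{2}\), which is bounded on \((0,1)\) by \(c=f(1/2)=30/16\), so the rejection method with a uniform proposal applies. A minimal Python sketch of that approach (the function name `sample_f` is illustrative, not from the text):

```python
import random

def sample_f():
    """Rejection sampling from f(x) = 30*x^2*(1-x)^2 on (0, 1).

    Proposal g is Uniform(0, 1); the bound is c = f(1/2) = 30/16,
    since f attains its maximum at x = 1/2.
    Accept a candidate Y when an independent uniform U <= f(Y)/c.
    """
    c = 30.0 / 16.0
    while True:
        y = random.random()        # candidate from g = Uniform(0, 1)
        u = random.random()        # independent acceptance test
        f_y = 30.0 * y**2 * (1.0 - y)**2
        if u <= f_y / c:
            return y
```

On average each returned value costs \(c = 1.875\) iterations, the area between \(f\) and the uniform bound.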
Suppose \(n\) balls having weights \(w_{1}, w_{2}, \ldots, w_{n}\) are in an urn. These balls are sequentially removed in the following manner: At each selection, a given ball in the urn is chosen with a probability equal to its weight divided by the sum of the weights of the balls that are still in the urn. Let \(I_{1}, I_{2}, \ldots, I_{n}\) denote the order in which the balls are removed; thus \(I_{1}, \ldots, I_{n}\) is a random permutation with weights. (a) Give a method for simulating \(I_{1}, \ldots, I_{n}\). (b) Let \(X_{i}\) be independent exponentials with rates \(w_{i}, i=1, \ldots, n\). Explain how the \(X_{i}\) can be used to simulate \(I_{1}, \ldots, I_{n}\).
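Part (b) rests on the exponential-race property: if \(X_{i}\) is exponential with rate \(w_{i}\), then \(X_{i}\) is the smallest with probability \(w_{i}/\sum_{j} w_{j}\), and by memorylessness the full ordering of the \(X_{i}\) has exactly the distribution of \(I_{1}, \ldots, I_{n}\). A sketch assuming that characterization:

```python
import random

def weighted_permutation(weights):
    """Simulate I_1, ..., I_n by the exponential-race method of part (b).

    Draw X_i ~ Exponential(rate w_i) independently and report the
    indices in increasing order of X_i.  The minimum is index i with
    probability w_i / sum(w), and memorylessness makes each subsequent
    rank behave like a fresh race among the remaining balls.
    """
    x = [random.expovariate(w) for w in weights]
    return sorted(range(len(weights)), key=lambda i: x[i])
```

Note that this needs only \(n\) exponential draws, versus the sequential discrete sampling that a direct implementation of part (a) would require.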
The Discrete Rejection Method: Suppose we want to simulate \(X\) having probability mass function \(P\{X=i\}=P_{i}, i=1, \ldots, n\), and suppose we can easily simulate from the probability mass function \(Q_{i}\), where \(\sum_{i} Q_{i}=1\) and \(Q_{i} \geqslant 0\). Let \(C\) be such that \(P_{i} \leqslant C Q_{i}, i=1, \ldots, n\). Show that the following algorithm generates the desired random variable: Step 1: Generate \(Y\) having mass function \(Q\) and \(U\), an independent random number. Step 2: If \(U \leqslant P_{Y} /\left(C Q_{Y}\right)\), set \(X=Y\); otherwise return to Step 1.
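A direct transcription of the two steps, assuming `sample_q` is any routine that returns an index distributed according to \(Q\):

```python
import random

def discrete_rejection(p, q, sample_q, c):
    """Discrete rejection method.

    p, q  : mass-function lists satisfying p[i] <= c * q[i] for all i.
    sample_q : callable returning an index distributed according to q.
    c     : the bounding constant C.
    """
    while True:
        y = sample_q()                  # Step 1: Y ~ Q
        u = random.random()             # ... and an independent uniform U
        if u <= p[y] / (c * q[y]):      # Step 2: accept with prob P_Y/(C Q_Y)
            return y

# Illustrative usage: target p with a uniform proposal q.
p = [0.5, 0.3, 0.2]
q = [1 / 3, 1 / 3, 1 / 3]
c = 1.5          # max p_i / q_i = 0.5 / (1/3) = 1.5
x = discrete_rejection(p, q, lambda: random.randrange(3), c)
```

Each pass accepts with probability \(1/C\), so the number of iterations is geometric with mean \(C\); a tighter \(C\) gives a faster sampler.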
Order Statistics: Let \(X_{1}, \ldots, X_{n}\) be i.i.d. from a continuous distribution \(F\), and let \(X_{(i)}\) denote the \(i\)th smallest of \(X_{1}, \ldots, X_{n}\), \(i=1, \ldots, n\). Suppose we want to simulate \(X_{(1)} < X_{(2)} < \cdots < X_{(n)}\).
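For the smallest order statistic in particular, a single uniform suffices: since \(P\{X_{(1)}>x\}=(1-F(x))^{n}\), we can set \(X_{(1)}=F^{-1}\left(1-U^{1/n}\right)\). A sketch under that identity (the helper name and the exponential example are illustrative, not from the text):

```python
import math
import random

def min_order_statistic(inv_F, n):
    """Simulate X_(1), the minimum of n i.i.d. draws from F.

    Uses P(X_(1) > x) = (1 - F(x))^n, so that
    X_(1) = F^{-1}(1 - U^(1/n)) for one uniform U.
    inv_F must be the inverse of the cdf F.
    """
    u = random.random()
    return inv_F(1.0 - u ** (1.0 / n))

# Example: F exponential with rate 1, so F^{-1}(p) = -ln(1 - p).
# The minimum of n rate-1 exponentials is itself exponential with rate n.
sample = min_order_statistic(lambda p: -math.log(1.0 - p), 10)
```

The same idea applied repeatedly (peeling off maxima or minima) generates all \(n\) order statistics without an explicit sort.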
Suppose we want to simulate a large number \(n\) of independent exponentials
with rate \(1\); call them \(X_{1}, X_{2}, \ldots, X_{n}\). If we were to employ
the inverse transform technique we would require one logarithmic computation
for each exponential generated. One way to avoid this is to first simulate
\(S_{n}\), a gamma random variable with parameters \((n, 1)\) (say, by the method
of Section 11.3.3). Now interpret \(S_{n}\) as the time of the \(n\) th event of a
Poisson process with rate 1 and use the result that given \(S_{n}\) the set of
the first \(n-1\) event times is distributed as the set of \(n-1\) independent
uniform \(\left(0, S_{n}\right)\) random variables. Based on this, explain why
the following algorithm simulates \(n\) independent exponentials:
Step 1: Generate \(S_{n}\), a gamma random variable with parameters \((n, 1)\).
Step 2: Generate \(n-1\) random numbers \(U_{1}, U_{2}, \ldots, U_{n-1}\).
Step 3: Order the \(U_{i}, i=1, \ldots, n-1\), to obtain \(U_{(1)} < U_{(2)} < \cdots < U_{(n-1)}\).
Step 4: With \(U_{(0)}=0\) and \(U_{(n)}=1\), set \(X_{i}=S_{n}\left(U_{(i)}-U_{(i-1)}\right), i=1, \ldots, n\).
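The steps above (ending with the successive gaps between the ordered, scaled uniforms) can be sketched as follows; `random.gammavariate` stands in for the Section 11.3.3 gamma generator:

```python
import random

def n_exponentials(n):
    """Generate n independent rate-1 exponentials from one gamma draw.

    Step 1: S_n ~ Gamma(n, 1), interpreted as the time of the n-th
            event of a rate-1 Poisson process.
    Step 2: n-1 uniforms scaled to (0, S_n) play the role of the
            first n-1 event times, given S_n.
    Step 3: sort them; the successive gaps (including the gap from 0
            and the gap up to S_n) are the n interarrival times,
            i.e. the desired independent exponentials.
    """
    s_n = random.gammavariate(n, 1.0)                            # Step 1
    times = sorted(random.random() * s_n for _ in range(n - 1))  # Steps 2-3
    times = [0.0] + times + [s_n]
    return [b - a for a, b in zip(times, times[1:])]             # gaps
```

Only one logarithm-free gamma draw and one sort are needed, versus \(n\) logarithms under the inverse transform method.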
In Example 11.4 we simulated the absolute value of a standard normal by using the von Neumann rejection procedure on exponential random variables with rate \(1\). This raises the question of whether we could obtain a more efficient algorithm by using a different exponential density; that is, we could use the density \(g(x)=\lambda e^{-\lambda x}\). Show that the mean number of iterations needed in the rejection scheme is minimized when \(\lambda=1\).
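A sketch of the required calculation, with \(f(x)=\sqrt{2 / \pi}\, e^{-x^{2} / 2}\) the density of \(|Z|\): the mean number of iterations equals the bounding constant
$$
c(\lambda)=\max _{x \geqslant 0} \frac{f(x)}{g(x)}=\sqrt{\frac{2}{\pi}}\, \frac{1}{\lambda} \max _{x \geqslant 0} e^{\lambda x-x^{2} / 2}=\sqrt{\frac{2}{\pi}}\, \frac{e^{\lambda^{2} / 2}}{\lambda},
$$
since the exponent \(\lambda x-x^{2} / 2\) is maximized at \(x=\lambda\). Minimizing \(\ln c(\lambda)=\frac{1}{2} \ln (2 / \pi)+\lambda^{2} / 2-\ln \lambda\), the derivative \(\lambda-1 / \lambda\) vanishes at \(\lambda=1\), and the second derivative \(1+1 / \lambda^{2}>0\) confirms this is the minimum.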