Chapter 11: Problem 3
Give a method for simulating a hypergeometric random variable.
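One standard method (a minimal sketch, not necessarily the one the text intends): view a hypergeometric random variable as the number of white balls obtained in \(n\) draws without replacement from an urn containing \(N\) balls, \(m\) of them white, and simulate the draws sequentially. The parameter names below are my own.

```python
import random

def hypergeometric(n, m, N, rng=random):
    """Number of white balls in n draws without replacement
    from an urn with m white and N - m black balls."""
    white, total, count = m, N, 0
    for _ in range(n):
        # The current draw is white with probability white / total.
        if rng.random() < white / total:
            count += 1
            white -= 1
        total -= 1
    return count
```

Each draw needs only one random number and one comparison, so the method runs in \(O(n)\) time.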
Suppose we want to simulate a large number \(n\) of independent exponentials
with rate \(1\); call them \(X_{1}, X_{2}, \ldots, X_{n}\). If we were to employ
the inverse transform technique we would require one logarithmic computation
for each exponential generated. One way to avoid this is to first simulate
\(S_{n}\), a gamma random variable with parameters \((n, 1)\) (say, by the method
of Section 11.3.3). Now interpret \(S_{n}\) as the time of the \(n\) th event of a
Poisson process with rate 1 and use the result that given \(S_{n}\) the set of
the first \(n-1\) event times is distributed as the set of \(n-1\) independent
uniform \(\left(0, S_{n}\right)\) random variables. Based on this, explain why
the following algorithm simulates \(n\) independent exponentials:
Step 1: Generate \(S_{n}\), a gamma random variable with parameters \((n, 1)\).
Step 2: Generate \(n-1\) random numbers \(U_{1}, U_{2}, \ldots, U_{n-1}\).
Step 3: Order the \(U_{i}\), \(i=1, \ldots, n-1\), to obtain
\(U_{(1)}<U_{(2)}<\cdots<U_{(n-1)}\).
Step 4: Let \(U_{(0)}=0\), \(U_{(n)}=1\), and set
\(X_{i}=S_{n}\left(U_{(i)}-U_{(i-1)}\right)\), \(i=1, \ldots, n\).
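The steps above can be sketched in code. This assumes the final step sets \(X_{i}=S_{n}(U_{(i)}-U_{(i-1)})\) with \(U_{(0)}=0\) and \(U_{(n)}=1\), and uses the standard library's gamma generator for Step 1.

```python
import random

def exponentials_via_gamma(n, rng=random):
    """Sketch: generate n independent rate-1 exponentials from a
    single gamma(n, 1) variate and n - 1 uniforms."""
    s_n = rng.gammavariate(n, 1.0)                   # Step 1: S_n ~ gamma(n, 1)
    u = sorted(rng.random() for _ in range(n - 1))   # Steps 2-3: ordered uniforms
    points = [0.0] + u + [1.0]                       # U_(0) = 0, U_(n) = 1
    # Step 4: the scaled gaps are the interarrival times of a rate-1
    # Poisson process given its nth event time, hence i.i.d. exponentials.
    return [s_n * (points[i] - points[i - 1]) for i in range(1, n + 1)]
```

Only one logarithm-free gamma variate replaces the \(n\) logarithms the inverse transform method would need.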
Consider the following algorithm for generating a random permutation of the elements \(1,2, \ldots, n\). In this algorithm, \(P(i)\) can be interpreted as the element in position \(i\).
Step 1: Set \(k=1\).
Step 2: Set \(P(1)=1\).
Step 3: If \(k=n\), stop. Otherwise, let \(k=k+1\).
Step 4: Generate a random number \(U\), and let
$$
\begin{aligned}
P(k) &=P([k U]+1), \\
P([k U]+1) &=k .
\end{aligned}
$$
Go to step 3.
(a) Explain in words what the algorithm is doing.
(b) Show that at iteration \(k\), that is, when the value of \(P(k)\) is initially set, \(P(1), P(2), \ldots, P(k)\) is a random permutation of \(1,2, \ldots, k\).
Hint: Use induction and argue that
$$
\begin{aligned}
P_{k}\left\{i_{1}, i_{2}, \ldots, i_{j-1}, k, i_{j}, \ldots, i_{k-2}, i\right\}
&=P_{k-1}\left\{i_{1}, i_{2}, \ldots, i_{j-1}, i, i_{j}, \ldots, i_{k-2}\right\} \frac{1}{k} \\
&=\frac{1}{k !} \quad \text{by the induction hypothesis.}
\end{aligned}
$$
The preceding algorithm can be used even if \(n\) is not initially known.
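A minimal sketch of the algorithm, taking \([kU]\) to mean the integer part of \(kU\) and using a dummy entry at index 0 so that \(P\) is 1-indexed as in the problem:

```python
import random

def random_permutation(n, rng=random):
    """Sketch of the swapping algorithm above: at step k, element k
    trades places with the element in a uniformly chosen position."""
    P = [0] * (n + 1)
    P[1] = 1                       # Steps 1-2
    for k in range(2, n + 1):      # Step 3
        u = rng.random()           # Step 4
        j = int(k * u) + 1         # position [kU] + 1, uniform on {1, ..., k}
        P[k] = P[j]
        P[j] = k
    return P[1:]
```

Note that when \([kU]+1=k\), the two assignments simply place \(k\) in position \(k\), so the code handles that case with no special branch.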
If \(f\) is the density function of a normal random variable with mean \(\mu\) and variance \(\sigma^{2}\), show that the tilted density \(f_{t}\) is the density of a normal random variable with mean \(\mu+\sigma^{2} t\) and variance \(\sigma^{2}\).
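The computation behind this exercise can be sketched by completing the square in the exponent, where \(M(t)\) denotes the moment generating function of \(f\):

```latex
\begin{aligned}
f_{t}(x) = \frac{e^{t x} f(x)}{M(t)}
&\propto \exp\!\left( t x-\frac{(x-\mu)^{2}}{2 \sigma^{2}} \right)
 = \exp\!\left( -\frac{x^{2}-2\left(\mu+\sigma^{2} t\right) x+\mu^{2}}{2 \sigma^{2}} \right) \\
&\propto \exp\!\left( -\frac{\left(x-\left(\mu+\sigma^{2} t\right)\right)^{2}}{2 \sigma^{2}} \right),
\end{aligned}
```

which is the kernel of a normal density with mean \(\mu+\sigma^{2} t\) and variance \(\sigma^{2}\).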
Give a method for simulating a negative binomial random variable.
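One standard approach (a sketch, not necessarily the method the text intends): a negative binomial with parameters \(r\) and \(p\) counts the number of Bernoulli(\(p\)) trials needed for \(r\) successes, so it can be simulated as a sum of \(r\) independent geometrics. The parameter names below are my own.

```python
import random

def negative_binomial(r, p, rng=random):
    """Number of Bernoulli(p) trials needed to obtain r successes,
    simulated trial by trial (a sum of r independent geometrics)."""
    trials, successes = 0, 0
    while successes < r:
        trials += 1
        if rng.random() < p:   # each trial succeeds with probability p
            successes += 1
    return trials
```

For small \(p\) this loop can be long; each geometric piece could instead be drawn directly by inverse transform.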
Suppose \(n\) balls having weights \(w_{1}, w_{2}, \ldots, w_{n}\) are in an urn. These balls are sequentially removed in the following manner: at each selection, a given ball in the urn is chosen with a probability equal to its weight divided by the sum of the weights of the balls that are still in the urn. Let \(I_{1}, I_{2}, \ldots, I_{n}\) denote the order in which the balls are removed; thus \(I_{1}, \ldots, I_{n}\) is a weighted random permutation. (a) Give a method for simulating \(I_{1}, \ldots, I_{n}\). (b) Let \(X_{i}\), \(i=1, \ldots, n\), be independent exponentials with rates \(w_{i}\). Explain how the \(X_{i}\) can be utilized to simulate \(I_{1}, \ldots, I_{n}\).
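A sketch along the lines of part (b): since \(\min_i X_i\) equals \(X_j\) with probability \(w_j / \sum_i w_i\), and by memorylessness the same argument applies to the remaining balls, sorting the \(X_i\) produces the removal order in one pass.

```python
import random

def weighted_removal_order(weights, rng=random):
    """Part (b) sketch: ball i leaves first iff X_i ~ exponential(w_i)
    is smallest, so sorting the X_i yields I_1, ..., I_n."""
    x = [rng.expovariate(w) for w in weights]
    # Balls are indexed from 1, ordered by their exponential "lifetimes".
    return sorted(range(1, len(weights) + 1), key=lambda i: x[i - 1])
```

This needs only \(n\) exponentials and one sort, versus the \(n\) sequential weighted selections of the direct method in part (a).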