Chapter 11: Problem 2
Give a method for simulating a negative binomial random variable.
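One standard method (a sketch, not necessarily the text's intended one; the parameters \(r\) and \(p\) are assumed): a negative binomial random variable with parameters \((r, p)\) counts the number of trials needed for the \(r\)th success, so it is the sum of \(r\) independent geometric\((p)\) random variables, each of which can be generated by inverse transform as \(\lceil \log U / \log(1-p) \rceil\).

```python
import math
import random

def geometric(p):
    # Inverse transform: number of Bernoulli(p) trials up to and
    # including the first success, i.e. ceil(log U / log(1 - p)).
    u = random.random()
    return math.ceil(math.log(u) / math.log(1.0 - p))

def negative_binomial(r, p):
    # Trials needed for r successes = sum of r independent geometrics.
    return sum(geometric(p) for _ in range(r))
```

The sample mean should be close to \(r/p\), which gives a quick sanity check on the generator.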
If \(U_{1}, U_{2}, U_{3}\) are independent uniform \((0,1)\) random variables, find \(P\left(\prod_{i=1}^{3} U_{i}>0.1\right)\). Hint: Relate the desired probability to one about a Poisson process.
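Following the hint (a sketch): the variables \(-\log U_i\) are independent exponentials with rate 1, so \(\prod_{i=1}^{3} U_i > 0.1\) exactly when the sum of three rate-1 exponentials is below \(\log 10\), i.e. when a rate-1 Poisson process has at least 3 events by time \(\log 10\). The closed form and a Monte Carlo check:

```python
import math
import random

def p_product_exceeds(threshold=0.1, n=3):
    # Product of n uniforms exceeds the threshold iff the sum of n
    # independent exp(1) variables is below t = -log(threshold), i.e.
    # iff a rate-1 Poisson process has at least n events by time t.
    t = -math.log(threshold)
    # P(N(t) >= n) = 1 - sum_{k < n} e^{-t} t^k / k!
    return 1.0 - sum(math.exp(-t) * t ** k / math.factorial(k) for k in range(n))

def monte_carlo(trials=100000):
    # Direct simulation of the product of three uniforms, for comparison.
    hits = sum(
        1 for _ in range(trials)
        if random.random() * random.random() * random.random() > 0.1
    )
    return hits / trials
```

The closed form evaluates to about \(0.4046\).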
Let \(R\) denote a region in the two-dimensional plane. Show that for a two-dimensional Poisson process, given that there are \(n\) points located in \(R\), the points are independently and uniformly distributed in \(R\) - that is, their density is \(f(x, y)=c,(x, y) \in R\), where \(c\) is the inverse of the area of \(R\).
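The stated result also gives the usual recipe for simulating a two-dimensional Poisson process on a bounded region: draw the point count from a Poisson with mean \(\lambda \times \text{area}(R)\), then place that many i.i.d. uniform points in \(R\). A minimal sketch, assuming \(R\) is an axis-aligned rectangle:

```python
import math
import random

def poisson_variate(lam):
    # Knuth-style Poisson generator: multiply uniforms until the
    # running product drops below e^{-lam}.
    limit = math.exp(-lam)
    n, prod = 0, random.random()
    while prod > limit:
        prod *= random.random()
        n += 1
    return n

def poisson_process_rectangle(lam, x0, x1, y0, y1):
    # Number of points is Poisson(lam * area); given that count, the
    # points are i.i.d. uniform over the rectangle (the stated result).
    area = (x1 - x0) * (y1 - y0)
    n = poisson_variate(lam * area)
    return [(random.uniform(x0, x1), random.uniform(y0, y1)) for _ in range(n)]
```

For a general region \(R\), one can enclose it in a rectangle and keep only the generated points that fall inside \(R\).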
Give an algorithm for simulating a random variable having density function
$$
f(x)=30\left(x^{2}-2 x^{3}+x^{4}\right), \quad 0<x<1
$$
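Noting that \(30(x^{2}-2x^{3}+x^{4}) = 30x^{2}(1-x)^{2}\) is maximized at \(x=1/2\) with value \(30/16 = 1.875\), one natural algorithm (a sketch, assuming a uniform proposal) is rejection sampling:

```python
import random

def sample_density():
    # Rejection sampling: f(x) = 30 x^2 (1-x)^2 on (0, 1), with
    # proposal g = Uniform(0, 1) and constant c = max f = f(1/2) = 1.875.
    c = 1.875
    while True:
        y = random.random()          # candidate from g
        u = random.random()          # acceptance uniform
        fy = 30.0 * y * y * (1.0 - y) ** 2
        if u <= fy / c:
            return y
```

Since \(c = 1.875\), the expected number of iterations per accepted sample is \(1.875\); the density is symmetric about \(1/2\), so the sample mean should be close to \(0.5\).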
Suppose we want to simulate a large number \(n\) of independent exponentials
with rate 1; call them \(X_{1}, X_{2}, \ldots, X_{n}\). If we were to employ
the inverse transform technique we would require one logarithmic computation
for each exponential generated. One way to avoid this is to first simulate
\(S_{n}\), a gamma random variable with parameters \((n, 1)\) (say, by the method
of Section 11.3.3). Now interpret \(S_{n}\) as the time of the \(n\) th event of a
Poisson process with rate 1 and use the result that given \(S_{n}\) the set of
the first \(n-1\) event times is distributed as the set of \(n-1\) independent
uniform \(\left(0, S_{n}\right)\) random variables. Based on this, explain why
the following algorithm simulates \(n\) independent exponentials:
Step 1: Generate \(S_{n}\), a gamma random variable with parameters \((n, 1)\).
Step 2: Generate \(n-1\) random numbers \(U_{1}, U_{2}, \ldots, U_{n-1}\).
Step 3: Order the \(U_{i}, i=1, \ldots, n-1\), to obtain \(U_{(1)}<U_{(2)}<\cdots<U_{(n-1)}\).
Step 4: Set \(X_{i}=S_{n}\left(U_{(i)}-U_{(i-1)}\right), i=1, \ldots, n\), where \(U_{(0)}=0\) and \(U_{(n)}=1\).
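The steps above can be sketched as follows, with Python's `random.gammavariate` standing in for the gamma method of Section 11.3.3 (an assumption; any gamma \((n, 1)\) generator works). Scaling the ordered uniforms by \(S_{n}\) gives the first \(n-1\) event times of a rate-1 Poisson process conditioned on its \(n\)th event time, so the spacings are \(n\) independent rate-1 exponentials:

```python
import random

def n_exponentials(n):
    # Step 1: S_n ~ gamma(n, 1); gammavariate stands in for the
    # method of Section 11.3.3 (assumption).
    s_n = random.gammavariate(n, 1.0)
    # Steps 2-3: n-1 uniforms, sorted.
    u = sorted(random.random() for _ in range(n - 1))
    # Step 4: given S_n, the scaled ordered uniforms are distributed as
    # the first n-1 event times of a rate-1 Poisson process, so the
    # successive spacings are n independent exponentials with rate 1.
    times = [0.0] + [ui * s_n for ui in u] + [s_n]
    return [b - a for a, b in zip(times, times[1:])]
```

Only one gamma draw and \(n-1\) uniforms are needed, avoiding one logarithm per exponential.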
Consider the technique of simulating a gamma \((n, \lambda)\) random variable by using the rejection method with \(g\) being an exponential density with rate \(\lambda / n\).
(a) Show that the average number of iterations of the algorithm needed to generate a gamma is \(n^{n} e^{1-n} /(n-1)!\).
(b) Use Stirling's approximation to show that for large \(n\) the answer to part (a) is approximately equal to \(e[(n-1) /(2 \pi)]^{1 / 2}\).
(c) Show that the procedure is equivalent to the following:
Step 1: Generate \(Y_{1}\) and \(Y_{2}\), independent exponentials with rate 1.
Step 2: If \(Y_{1}<(n-1)\left[Y_{2}-\log \left(Y_{2}\right)-1\right]\), return to step 1.
Step 3: Set \(X=n Y_{2} / \lambda\).
(d) Explain how to obtain an independent exponential along with a gamma from the preceding algorithm.
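The procedure of part (c) can be sketched as follows (`random.expovariate` supplies the rate-1 exponentials):

```python
import math
import random

def gamma_rejection(n, lam):
    # Part (c): accept Y2 ~ exp(1) with probability [Y2 e^(1-Y2)]^(n-1),
    # implemented by comparing a second exponential Y1 to the log of
    # the acceptance probability.
    while True:
        y1 = random.expovariate(1.0)
        y2 = random.expovariate(1.0)
        if y1 >= (n - 1) * (y2 - math.log(y2) - 1.0):
            # Part (d): on acceptance, y1 - (n-1)[y2 - log(y2) - 1] is,
            # by the memoryless property, an independent rate-1 exponential.
            return n * y2 / lam
```

A sanity check: for \(n=1\) the test always passes and the output is exponential with rate \(\lambda\), as it should be; in general the sample mean should be close to \(n/\lambda\).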