Chapter 11: Problem 3
Give a method for simulating a hypergeometric random variable.
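One standard method (a sketch, not necessarily the textbook's intended solution) is to simulate the sample draws directly: if a sample of size \(n\) is drawn without replacement from \(N\) items of which \(m\) are "special", draw the items one at a time, where each draw is special with probability (special items remaining)/(items remaining). The function name `simulate_hypergeometric` and the parameterization \((N, m, n)\) below are illustrative:

```python
import random

def simulate_hypergeometric(N, m, n, rng=random):
    """Simulate X = number of special items in a sample of size n drawn
    without replacement from N items, m of which are special. Each draw
    is special with probability special/total given the remaining items."""
    special, total, count = m, N, 0
    for _ in range(n):
        if rng.random() < special / total:
            count += 1
            special -= 1
        total -= 1
    return count

# Sanity check: E[X] = n*m/N, so the sample mean should be near 10*20/50 = 4.
samples = [simulate_hypergeometric(50, 20, 10) for _ in range(100_000)]
print(sum(samples) / len(samples))  # roughly 4.0
```

This costs \(n\) uniform random numbers per variate; for large \(n\) one could instead invert the hypergeometric mass function, but the sequential method is the most direct.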
If \(f\) is the density function of a normal random variable with mean \(\mu\) and variance \(\sigma^{2}\), show that the tilted density \(f_{t}\) is the density of a normal random variable with mean \(\mu+\sigma^{2} t\) and variance \(\sigma^{2}\).
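A sketch of the computation (assuming the standard definition of the tilted density, \(f_t(x)=e^{tx}f(x)/M(t)\), where \(M(t)=E[e^{tX}]=e^{\mu t+\sigma^2 t^2/2}\) for the normal):

```latex
f_t(x) = \frac{e^{tx}f(x)}{M(t)}
       = \frac{1}{\sqrt{2\pi}\,\sigma\,M(t)}
         \exp\Bigl\{ tx - \frac{(x-\mu)^2}{2\sigma^2} \Bigr\}.
% Completing the square in the exponent:
tx - \frac{(x-\mu)^2}{2\sigma^2}
   = -\frac{\bigl(x-(\mu+\sigma^2 t)\bigr)^2}{2\sigma^2}
     + \mu t + \frac{\sigma^2 t^2}{2}.
% The term \mu t + \sigma^2 t^2/2 exactly cancels \log M(t), leaving
f_t(x) = \frac{1}{\sqrt{2\pi}\,\sigma}
         \exp\Bigl\{ -\frac{\bigl(x-(\mu+\sigma^2 t)\bigr)^2}{2\sigma^2} \Bigr\},
```

which is the \(N(\mu+\sigma^2 t,\, \sigma^2)\) density, as claimed.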
The Discrete Rejection Method: Suppose we want to simulate \(X\) having probability mass function \(P\{X=i\}=P_{i}, i=1, \ldots, n\), and suppose we can easily simulate from the probability mass function \(Q_{i}\), \(\sum_{i} Q_{i}=1\), \(Q_{i} \geqslant 0\). Let \(C\) be such that \(P_{i} \leqslant C Q_{i}, i=1, \ldots, n\). Show that the following algorithm generates the desired random variable: Step 1: Generate \(Y\) having mass function \(Q\) and \(U\) an independent random number. Step 2: If \(U \leqslant P_{Y} /\left(C Q_{Y}\right)\), set \(X=Y\). Otherwise return to Step 1.
The Discrete Hazard Rate Method: Let \(X\) denote a nonnegative integer valued random variable. The function \(\lambda(n)=P\{X=n \mid X \geqslant n\}, n \geqslant 0\), is called the discrete hazard rate function. (a) Show that \(P\{X=n\}=\lambda(n) \prod_{i=0}^{n-1}(1-\lambda(i))\). (b) Show that we can simulate \(X\) by generating random numbers \(U_{1}, U_{2}, \ldots\), stopping at $$ X=\min \left\{n: U_{n} \leqslant \lambda(n)\right\} $$ (c) Apply this method to simulating a geometric random variable. Explain, intuitively, why it works. (d) Suppose that \(\lambda(n) \leqslant p<1\) for all \(n\). Consider the following algorithm for simulating \(X\) and explain why it works: Simulate \(X_{i}, U_{i}, i \geqslant 1\), where \(X_{i}\) is geometric with mean \(1 / p\) and \(U_{i}\) is a random number. Set \(S_{k}=X_{1}+\cdots+X_{k}\)
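Part (b) of this problem can be sketched in a few lines; the helper name `hazard_rate_sim` is illustrative, and the index starts at \(n=0\) to match the support of \(X\). With a constant hazard \(\lambda(n)=p\), each \(U_n \leqslant p\) check is an independent success-probability-\(p\) trial, so \(X\) counts the failures before the first success, a geometric with mean \((1-p)/p\) on \(\{0,1,\ldots\}\), which is the intuition behind part (c):

```python
import random

def hazard_rate_sim(hazard, rng=random, max_n=10**6):
    """Simulate a nonnegative integer X with discrete hazard rate
    hazard(n) = P{X = n | X >= n}: generate U_0, U_1, ... and return
    the first n with U_n <= hazard(n)."""
    for n in range(max_n):
        if rng.random() <= hazard(n):
            return n
    raise RuntimeError("no success within max_n trials")

# Constant hazard lambda(n) = p: X = number of failures before the first
# success, so E[X] = (1-p)/p = 3 for p = 0.25.
p = 0.25
vals = [hazard_rate_sim(lambda n: p) for _ in range(100_000)]
print(sum(vals) / len(vals))  # roughly 3.0
```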
Let \(X_{1}, \ldots, X_{n}\) be independent random variables with \(E\left[X_{i}\right]=\theta\), \(\operatorname{Var}\left(X_{i}\right)=\sigma_{i}^{2}\), \(i=1, \ldots, n\), and consider estimates of \(\theta\) of the form \(\sum_{i=1}^{n} \lambda_{i} X_{i}\) where \(\sum_{i=1}^{n} \lambda_{i}=1\). Show that \(\operatorname{Var}\left(\sum_{i=1}^{n} \lambda_{i} X_{i}\right)\) is minimized when $$ \lambda_{i}=\left(1 / \sigma_{i}^{2}\right) /\left(\sum_{j=1}^{n} 1 / \sigma_{j}^{2}\right), \quad i=1, \ldots, n $$ Possible Hint: If you cannot do this for general \(n\), try it first when \(n=2\). The following two problems are concerned with the estimation of \(\int_{0}^{1} g(x) d x=E[g(U)]\) where \(U\) is uniform \((0,1)\).
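A sketch of the derivation via a Lagrange multiplier (one route among several; the \(n=2\) case from the hint can also be done by direct substitution \(\lambda_2 = 1-\lambda_1\)):

```latex
\operatorname{Var}\Bigl(\sum_{i=1}^{n}\lambda_i X_i\Bigr)
  = \sum_{i=1}^{n}\lambda_i^2\sigma_i^2
  \quad\text{(by independence).}
% Minimize subject to \sum_i \lambda_i = 1 with multiplier c:
\frac{\partial}{\partial\lambda_i}
  \Bigl(\sum_j \lambda_j^2\sigma_j^2 - c\bigl(\textstyle\sum_j \lambda_j - 1\bigr)\Bigr)
  = 2\lambda_i\sigma_i^2 - c = 0
  \;\Longrightarrow\; \lambda_i = \frac{c}{2\sigma_i^2}.
% The constraint \sum_i \lambda_i = 1 then determines c, giving
\lambda_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2},
\qquad i = 1, \ldots, n.
```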
Show that if \(X\) and \(Y\) have the same distribution then $$ \operatorname{Var}((X+Y) / 2) \leqslant \operatorname{Var}(X) $$ Hence, conclude that the use of antithetic variables can never increase variance (though it need not be as efficient as generating an independent set of random numbers).
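A quick numerical illustration of the antithetic-variables conclusion, estimating \(\int_0^1 e^x\,dx = e-1\) (the function names `mc_plain` and `mc_antithetic` are illustrative; because \(e^x\) is monotone, \(g(U)\) and \(g(1-U)\) are negatively correlated, so the antithetic estimator should have smaller variance at the same number of function evaluations):

```python
import math
import random

def mc_plain(g, n, rng):
    """Plain Monte Carlo estimate of E[g(U)], U ~ Uniform(0,1)."""
    return sum(g(rng.random()) for _ in range(n)) / n

def mc_antithetic(g, n, rng):
    """Antithetic estimate: pair each U with 1-U, using n/2 pairs, i.e.
    the same total number of g-evaluations as mc_plain."""
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += g(u) + g(1 - u)
    return total / n

rng = random.Random(0)
plain = [mc_plain(math.exp, 100, rng) for _ in range(2000)]
anti = [mc_antithetic(math.exp, 100, rng) for _ in range(2000)]
var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
print(var(plain), var(anti))  # antithetic variance is noticeably smaller
```

Both estimators are unbiased for \(e-1 \approx 1.718\); the problem's inequality guarantees the antithetic variance can never exceed the plain one.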