Chapter 12: Problem 9
Suppose the random variable \(e\) has cdf \(F(t)\). Let \(\varphi(u)=\sqrt{12}[u-(1/2)]\), \(0<u<1\).
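The statement breaks off after the definition of \(\varphi\), but \(\varphi\) here is the Wilcoxon score function, and a standard fact used with it is that it is *standardized*: \(\int_0^1 \varphi(u)\,du = 0\) and \(\int_0^1 \varphi(u)^2\,du = 1\) (so \(\varphi(F(e))\) has mean 0 and variance 1 when \(F\) is continuous, since then \(F(e)\) is uniform on \((0,1)\)). A quick midpoint-rule check of both integrals (this snippet is mine, not part of the problem):

```python
import numpy as np

# The Wilcoxon score function phi(u) = sqrt(12)*(u - 1/2), 0 < u < 1,
# is standardized: its integral over (0, 1) is 0 and the integral of
# phi^2 over (0, 1) is 1.  Midpoint-rule approximation of both:
n = 100_000
u = (np.arange(n) + 0.5) / n          # midpoints of a grid on (0, 1)
phi = np.sqrt(12.0) * (u - 0.5)

mean_phi = phi.mean()                 # approximates ∫ phi(u) du  ≈ 0
second_moment = (phi**2).mean()       # approximates ∫ phi(u)^2 du ≈ 1
print(mean_phi, second_moment)
```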
In Exercise 12.1.5, the influence function of the variance functional was derived directly. Assuming that the mean of \(Y\) is 0, note that the variance functional, \(V\left(F_{Y}\right)\), also solves the equation $$ 0=\int_{-\infty}^{\infty}\left[t^{2}-V\left(F_{Y}\right)\right] f_{Y}(t)\, d t. $$ (a) Determine the natural estimator of the variance by writing the defining equation at the empirical cdf \(F_{n}(t)\), for \(Y_{1}-\bar{Y}, \ldots, Y_{n}-\bar{Y}\) iid with cdf \(F_{Y}(t)\), and solving for \(V\left(F_{n}\right)\).
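At \(F_n\) the integral becomes an average, \(0=\frac{1}{n}\sum_i\left[(y_i-\bar y)^2-V\right]\), whose solution is the divide-by-\(n\) sample variance. A small sketch of that calculation (the function name is mine):

```python
import numpy as np

# Sketch: at the empirical cdf F_n, the defining equation
#   0 = ∫ [t^2 - V] dF(t)
# becomes 0 = (1/n) Σ [(y_i - ȳ)^2 - V], whose solution is the
# "natural" divide-by-n variance estimator V(F_n).
def variance_functional(y):
    y = np.asarray(y, dtype=float)
    resid = y - y.mean()              # the Y_i - Ȳ of part (a)
    return np.mean(resid**2)          # V(F_n) = (1/n) Σ (y_i - ȳ)^2

y = np.array([-9.0, 58.0, 12.0, -1.0, -37.0, 0.0, 11.0, 21.0])
print(np.isclose(variance_functional(y), np.var(y)))  # numpy's default var is also /n
```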
Assume that \(\mathbf{X}\) is an \(n \times p\) matrix. Then the kernel of \(\mathbf{X}\) is defined to be the space \(\operatorname{ker}(\mathbf{X})=\{\mathbf{b}: \mathbf{X} \mathbf{b}=\mathbf{0}\}\). (a) Show that \(\operatorname{ker}(\mathbf{X})\) is a subspace of \(R^{p}\). (b) The dimension of \(\operatorname{ker}(\mathbf{X})\) is called the nullity of \(\mathbf{X}\) and is denoted by \(\nu(\mathbf{X})\). Let \(\rho(\mathbf{X})\) denote the rank of \(\mathbf{X}\). A fundamental theorem of linear algebra says that \(\rho(\mathbf{X})+\nu(\mathbf{X})=p\). Use this to show that if \(\mathbf{X}\) has full column rank, then \(\operatorname{ker}(\mathbf{X})=\{\mathbf{0}\}\).
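The rank–nullity relation in part (b) is easy to check numerically. A sketch (the example matrix and tolerance are my own choices), using a \(5 \times 3\) matrix whose third column is the sum of the first two, so \(\rho(\mathbf{X})=2\) and \(\nu(\mathbf{X})=1\):

```python
import numpy as np

# Sketch: verify rho(X) + nu(X) = p on a 5 x 3 matrix with a forced
# rank deficiency (third column = sum of the first two columns).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
X = np.column_stack([A, A[:, 0] + A[:, 1]])
n, p = X.shape

rank = np.linalg.matrix_rank(X)                       # rho(X)
s = np.linalg.svd(X, compute_uv=False)
nullity = p - np.count_nonzero(s > 1e-10 * s.max())   # nu(X)

b = np.array([1.0, 1.0, -1.0])     # X b = 0, so b lies in ker(X)
assert np.allclose(X @ b, 0.0)
print(rank, nullity, rank + nullity == p)
```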
Let \(\mathbf{H}_{c}=\mathbf{X}_{c}\left(\mathbf{X}_{c}^{\prime} \mathbf{X}_{c}\right)^{-1} \mathbf{X}_{c}^{\prime}\) denote the projection matrix onto \(V_{c}\). Let \(h_{c,il}\) denote the \((i, l)\) entry of \(\mathbf{H}_{c}\) and let \(h_{c,ii}\) denote its \(i\)th diagonal entry. (a) Because \(\mathbf{H}_{c}\) is idempotent (and symmetric), show that the following inequality is true: $$ h_{c,ii}=\sum_{j=1}^{n} h_{c,ij}^{2} \geq h_{c,il}^{2}, \quad \text{for all } i, l=1, \ldots, n. $$ (b) Based on this last result, show that if (d) of Assumptions \(12.4.1\) is true, then all design points get uniformly small. Hint: The range of the design matrix \(\mathbf{X}_{c}\) is the same as the range of \(\mathbf{H}_{c}\).
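The identity and inequality in part (a) can be checked numerically on any centered, full-column-rank design. A sketch (the design matrix is an arbitrary example of mine):

```python
import numpy as np

# Sketch: check h_{c,ii} = sum_j h_{c,ij}^2 >= h_{c,il}^2 for a
# centered design matrix X_c with full column rank.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2))
Xc = X - X.mean(axis=0)                      # centered design
Hc = Xc @ np.linalg.inv(Xc.T @ Xc) @ Xc.T    # projection onto V_c

# Idempotent and symmetric, so H = H H' and h_ii = sum_j h_ij^2.
assert np.allclose(Hc, Hc @ Hc.T)
diag = np.diag(Hc)
assert np.allclose(diag, np.sum(Hc**2, axis=1))
# Each diagonal entry dominates the square of every entry in its row.
assert np.all(diag[:, None] >= Hc**2 - 1e-12)
print("ok")
```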
Establish the identity $$ \|\mathbf{v}\|_{W}=\frac{\sqrt{3}}{2(n+1)} \sum_{i=1}^{n} \sum_{j=1}^{n}\left|v_{i}-v_{j}\right| $$ for all \(\mathbf{v} \in R^{n}\). Thus we have shown that $$ \widehat{\beta}_{W}=\operatorname{Argmin} \sum_{i=1}^{n} \sum_{j=1}^{n}\left|\left(y_{i}-y_{j}\right)-\beta\left(x_{c i}-x_{c j}\right)\right|. $$ Note that the formulation of \(\widehat{\beta}_{W}\) given in expression \((12.2.29)\) allows an easy way to compute the Wilcoxon estimate of slope by using an \(L_{1}\) (least absolute deviations) routine. This was used in the cited article by Terpstra et al. for their R or S-PLUS functions which compute the Wilcoxon fit.
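The identity itself is easy to verify numerically, writing the Wilcoxon pseudo-norm in its score form \(\|\mathbf{v}\|_W=\sum_i a(R(v_i))\,v_i\) with Wilcoxon scores \(a(i)=\sqrt{12}\,[i/(n+1)-1/2]\) (the standard definition in this chapter; the sample vector below is my own example, with no ties):

```python
import numpy as np

# Sketch: numerically verify
#   ||v||_W = sqrt(3)/(2(n+1)) * sum_i sum_j |v_i - v_j|
# where ||v||_W = sum_i a(R(v_i)) v_i, a(i) = sqrt(12)*(i/(n+1) - 1/2),
# and R(v_i) is the rank of v_i among v_1, ..., v_n.
rng = np.random.default_rng(2)
v = rng.standard_normal(7)
n = v.size

ranks = np.argsort(np.argsort(v)) + 1        # ranks 1..n (no ties here)
scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)
norm_scores = np.sum(scores * v)             # score form of ||v||_W

pairwise = np.sum(np.abs(v[:, None] - v[None, :]))
norm_pairwise = np.sqrt(3.0) / (2 * (n + 1)) * pairwise

print(np.isclose(norm_scores, norm_pairwise))
```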
Obtain the sensitivity curves for the sample mean and median for the following data set. Evaluate the curves at the values \(-300\) to 300 in increments of 10 and graph the curves on the same plot. Compare the sensitivity curves. $$ \begin{array}{rrrrrrrr} -9 & 58 & 12 & -1 & -37 & 0 & 11 & 21 \\ 18 & -24 & -4 & -53 & -9 & 9 & 8 & \end{array} $$
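A sketch of the requested computation, under one common definition of the sensitivity curve, \(SC(x)=(n+1)\left[T(x_1,\ldots,x_n,x)-T(x_1,\ldots,x_n)\right]\) with the \(n=15\) sample points held fixed (the helper name `sc` is mine; plotting is left as a comment):

```python
import numpy as np

# Sensitivity curves of the mean and median for the exercise's data,
# evaluated at x = -300, -290, ..., 300:
#   SC(x) = (n + 1) * [ T(x_1, ..., x_n, x) - T(x_1, ..., x_n) ]
data = np.array([-9, 58, 12, -1, -37, 0, 11, 21,
                 18, -24, -4, -53, -9, 9, 8], dtype=float)
n = data.size
xs = np.arange(-300, 301, 10)

def sc(stat, x):
    return (n + 1) * (stat(np.append(data, x)) - stat(data))

sc_mean = np.array([sc(np.mean, x) for x in xs])
sc_median = np.array([sc(np.median, x) for x in xs])

# The mean's curve is unbounded (linear in x); the median's is bounded,
# illustrating the median's robustness to a single outlier.
print(sc_mean[0], sc_mean[-1], sc_median[0], sc_median[-1])
# To graph both on the same plot, e.g.:
#   import matplotlib.pyplot as plt
#   plt.plot(xs, sc_mean); plt.plot(xs, sc_median); plt.show()
```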