Chapter 6: Q7E (page 331)
Compute the least-squares error associated with the least-squares solution found in Exercise 3.
Short Answer
The least-squares error is \(2\sqrt 5 \).
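Since the matrix \(A\) and vector \({\bf{b}}\) from Exercise 3 are not reproduced on this page, here is a general NumPy sketch of the computation: the least-squares error is \(\left\| {{\bf{b}} - A{\bf{\hat x}}} \right\|\), where \({\bf{\hat x}}\) is a least-squares solution of \(A{\bf{x}} = {\bf{b}}\). The data below are placeholders for illustration, not the matrices of Exercise 3.

```python
import numpy as np

def least_squares_error(A, b):
    """Return ||b - A x_hat||, where x_hat is a least-squares solution of Ax = b."""
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(b - A @ x_hat)

# Placeholder data (NOT the Exercise 3 matrices, which are not shown on this page):
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

err = least_squares_error(A, b)
print(err)  # sqrt(1/3) for this placeholder data
```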
Suppose \(A = QR\), where \(R\) is an invertible matrix. Show that \(A\) and \(Q\) have the same column space.
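A quick numerical illustration of this fact (not a proof): for a full-column-rank \(A\) with reduced QR factorization \(A = QR\), the orthogonal projectors onto \({\rm{Col}}\,A\) and \({\rm{Col}}\,Q\) coincide, which happens exactly when the two column spaces are equal.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # full column rank with probability 1

Q, R = np.linalg.qr(A)  # reduced QR: A = QR with R a 3x3 invertible upper triangle

# Orthogonal projectors onto Col A and Col Q:
P_A = A @ np.linalg.inv(A.T @ A) @ A.T
P_Q = Q @ Q.T

# Equal projectors <=> equal column spaces
print(np.allclose(P_A, P_Q))  # True
```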
In Exercises 5 and 6, describe all least squares solutions of the equation \(A{\bf{x}} = {\bf{b}}\).
5. \(A = \left( {\begin{array}{*{20}{c}}1&1&0\\1&1&0\\1&0&1\\1&0&1\end{array}} \right)\), \({\bf{b}} = \left( {\begin{array}{*{20}{c}}1\\3\\8\\2\end{array}} \right)\)
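For the data of Exercise 5, a NumPy sketch can confirm that \(A\) is rank-deficient (its first column is the sum of the other two, so the least-squares solution is not unique) and that the returned solution satisfies the normal equations \({A^T}A{\bf{x}} = {A^T}{\bf{b}}\), which every least-squares solution must satisfy.

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 1]], dtype=float)
b = np.array([1, 3, 8, 2], dtype=float)

# Columns of A are linearly dependent (col0 = col1 + col2), so there are
# infinitely many least-squares solutions; lstsq returns the minimum-norm one.
x_hat, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(rank)  # 2

# Every least-squares solution satisfies the normal equations A^T A x = A^T b:
print(np.allclose(A.T @ A @ x_hat, A.T @ b))  # True
```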
In Exercises 17 and 18, all vectors and subspaces are in \({\mathbb{R}^n}\). Mark each statement True or False. Justify each answer.
17. a. If \(\left\{ {{{\bf{v}}_1},{{\bf{v}}_2},{{\bf{v}}_3}} \right\}\) is an orthogonal basis for \(W\), then multiplying \({{\bf{v}}_3}\) by a scalar \(c\) gives a new orthogonal basis \(\left\{ {{{\bf{v}}_1},{{\bf{v}}_2},c{{\bf{v}}_3}} \right\}\).
b. The Gram-Schmidt process produces from a linearly independent set \(\left\{ {{{\bf{x}}_1}, \ldots ,{{\bf{x}}_p}} \right\}\) an orthogonal set \(\left\{ {{{\bf{v}}_1}, \ldots ,{{\bf{v}}_p}} \right\}\) with the property that for each \(k\), the vectors \({{\bf{v}}_1}, \ldots ,{{\bf{v}}_k}\) span the same subspace as that spanned by \({{\bf{x}}_1}, \ldots ,{{\bf{x}}_k}\).
c. If \(A = QR\), where \(Q\) has orthonormal columns, then \(R = {Q^T}A\).
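Statements (b) and (c) can be checked numerically. The sketch below implements the Gram-Schmidt process on a small example (the helper name `gram_schmidt` is my own), then normalizes the columns to get \(Q\) and verifies that \(R = {Q^T}A\) is upper triangular with \(A = QR\).

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the columns of X (assumed linearly independent)."""
    V = []
    for x in X.T:
        v = x - sum((x @ u) / (u @ u) * u for u in V)  # subtract projections
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
V = gram_schmidt(X)
print(np.isclose(V[:, 0] @ V[:, 1], 0.0))  # columns are orthogonal

# Normalize to get Q; then statement (c) gives R = Q^T A:
Q = V / np.linalg.norm(V, axis=0)
R = Q.T @ X
print(np.allclose(Q @ R, X))           # A = QR
print(np.allclose(np.tril(R, -1), 0))  # R is upper triangular
```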
In Exercises 3-6, verify that \(\left\{ {{{\bf{u}}_1},{{\bf{u}}_2}} \right\}\) is an orthogonal set, and then find the orthogonal projection of \({\bf{y}}\) onto \({\rm{Span}}\left\{ {{{\bf{u}}_1},{{\bf{u}}_2}} \right\}\).
5. \({\bf{y}} = \left( {\begin{array}{*{20}{c}}{ - 1}\\2\\6\end{array}} \right)\), \({{\bf{u}}_1} = \left( {\begin{array}{*{20}{c}}3\\{ - 1}\\2\end{array}} \right)\), \({{\bf{u}}_2} = \left( {\begin{array}{*{20}{c}}1\\{ - 1}\\{ - 2}\end{array}} \right)\)
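For the data of Exercise 5 just above, the projection formula \({\bf{\hat y}} = \frac{{{\bf{y}} \cdot {{\bf{u}}_1}}}{{{{\bf{u}}_1} \cdot {{\bf{u}}_1}}}{{\bf{u}}_1} + \frac{{{\bf{y}} \cdot {{\bf{u}}_2}}}{{{{\bf{u}}_2} \cdot {{\bf{u}}_2}}}{{\bf{u}}_2}\) can be checked directly:

```python
import numpy as np

y  = np.array([-1.0,  2.0,  6.0])
u1 = np.array([ 3.0, -1.0,  2.0])
u2 = np.array([ 1.0, -1.0, -2.0])

print(u1 @ u2)  # 0.0, so {u1, u2} is an orthogonal set

# Orthogonal projection of y onto Span{u1, u2}:
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
print(y_hat)  # equals y itself, so y already lies in Span{u1, u2}
```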
Exercises 19 and 20 involve a design matrix \(X\) with two or more columns and a least-squares solution \(\hat \beta \) of \({\bf{y}} = X\beta \). Consider the following numbers.
(i) \({\left\| {X\hat \beta } \right\|^2}\), the sum of squares of the "regression term." Denote this number by \(SS\left( R \right)\).
(ii) \({\left\| {{\bf{y}} - X\hat \beta } \right\|^2}\), the sum of squares of the error term. Denote this number by \(SS\left( E \right)\).
(iii) \({\left\| {\bf{y}} \right\|^2}\), the "total" sum of squares of the \(y\)-values. Denote this number by \(SS\left( T \right)\).
Every statistics text that discusses regression and the linear model \({\bf{y}} = X\beta  + \varepsilon \) introduces these numbers, though terminology and notation vary somewhat. To simplify matters, assume that the mean of the \(y\)-values is zero. In this case, \(SS\left( T \right)\) is proportional to what is called the variance of the set of \(y\)-values.
20. Show that \({\left\| {X\hat \beta } \right\|^2} = {\hat \beta ^T}{X^T}{\bf{y}}\). (Hint: Rewrite the left side and use the fact that \(\hat \beta \) satisfies the normal equations.) This formula for \(SS\left( R \right)\) is used in statistics. From this and from Exercise 19, obtain the standard formula for \(SS\left( E \right)\):
\(SS\left( E \right) = {{\bf{y}}^T}{\bf{y}} - {\hat \beta ^T}{X^T}{\bf{y}}\)
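The identities in Exercise 20 can be spot-checked with random data. Because \(\hat \beta \) satisfies the normal equations \({X^T}X\hat \beta  = {X^T}{\bf{y}}\), we get \({\left\| {X\hat \beta } \right\|^2} = {\hat \beta ^T}{X^T}X\hat \beta  = {\hat \beta ^T}{X^T}{\bf{y}}\), and both equalities below hold up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))
y = rng.standard_normal(10)

# Least-squares solution of y = X beta:
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

SSR = np.linalg.norm(X @ beta_hat) ** 2       # SS(R)
SSE = np.linalg.norm(y - X @ beta_hat) ** 2   # SS(E)

# Exercise 20: ||X beta_hat||^2 = beta_hat^T X^T y, hence SS(E) = y^T y - beta_hat^T X^T y
print(np.isclose(SSR, beta_hat @ X.T @ y))          # True
print(np.isclose(SSE, y @ y - beta_hat @ X.T @ y))  # True
```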