Q6.3-21E

21. Question: In Exercises 21 and 22, all vectors are in \(\mathbb{R}^n\). Mark each statement True or False. Justify each answer.

  a. If z is orthogonal to \(\mathbf{u}_1\) and to \(\mathbf{u}_2\) and if \(W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}\), then z must be in \(W^\perp\).
  b. For each y and each subspace \(W\), the vector \(\mathbf{y} - \operatorname{proj}_W \mathbf{y}\) is orthogonal to \(W\).
  c. The orthogonal projection \(\hat{\mathbf{y}}\) of \(\mathbf{y}\) onto a subspace \(W\) can sometimes depend on the orthogonal basis for \(W\) used to compute \(\hat{\mathbf{y}}\).
  d. If y is in a subspace \(W\), then the orthogonal projection of y onto \(W\) is y itself.
  e. If the columns of an \(n \times p\) matrix \(U\) are orthonormal, then \(UU^T\mathbf{y}\) is the orthogonal projection of y onto the column space of \(U\).

Short Answer

  a. The given statement is true.
  b. The given statement is true.
  c. The given statement is false.
  d. The given statement is true.
  e. The given statement is true.

Step by step solution

01

Check whether the statement is true or false

a)

A vector is in \(W^\perp\) if it is orthogonal to every vector in a set that spans \(W\). Here z is orthogonal to \(\mathbf{u}_1\) and \(\mathbf{u}_2\), which span \(W\), so z is orthogonal to every linear combination of \(\mathbf{u}_1\) and \(\mathbf{u}_2\), that is, to every vector in \(W\).

Also, \(W^\perp\) is a subspace of \(\mathbb{R}^n\).

Thus, the given statement (a) is true.
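
As a quick numerical sanity check (not part of the textbook solution), the sketch below uses illustrative vectors of my own choosing: a z orthogonal to both spanning vectors \(\mathbf{u}_1, \mathbf{u}_2\) is confirmed to be orthogonal to arbitrary vectors of \(W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}\).

```python
import numpy as np

# u1 and u2 span W; z is chosen orthogonal to both (illustrative values).
u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 0.0])
z = np.array([1.0, 0.0, -1.0])          # z . u1 = 0 and z . u2 = 0

assert np.isclose(z @ u1, 0) and np.isclose(z @ u2, 0)

# Any w in W is a combination c1*u1 + c2*u2; z is orthogonal to all of them.
rng = np.random.default_rng(0)
for _ in range(5):
    c1, c2 = rng.standard_normal(2)
    w = c1 * u1 + c2 * u2
    assert np.isclose(z @ w, 0)          # z . w = c1*(z . u1) + c2*(z . u2) = 0
print("z is orthogonal to every tested vector of W, so z is in W-perp")
```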

02

Check whether the statement is true or false

b)

The Orthogonal Decomposition theorem states that if \(W\) is a subspace of \(\mathbb{R}^n\), then each \(\mathbf{y}\) in \(\mathbb{R}^n\) can be written uniquely in the form

\(\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z}\) … (1)

with \(\hat{\mathbf{y}}\) in \(W\) and \(\mathbf{z}\) in \(W^\perp\). In particular, when \(\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}\) is an orthogonal basis of \(W\), then

\(\hat{\mathbf{y}} = \frac{\mathbf{y} \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\mathbf{u}_1 + \cdots + \frac{\mathbf{y} \cdot \mathbf{u}_p}{\mathbf{u}_p \cdot \mathbf{u}_p}\mathbf{u}_p\) … (2)

and \(\mathbf{z} = \mathbf{y} - \hat{\mathbf{y}}\). Since \(\hat{\mathbf{y}} = \operatorname{proj}_W \mathbf{y}\), the vector \(\mathbf{y} - \operatorname{proj}_W \mathbf{y} = \mathbf{z}\) lies in \(W^\perp\) and is therefore orthogonal to \(W\).

Thus, the given statement (b) is true.
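
A small NumPy sketch (again with illustrative vectors, not from the text) applies formula (2) to an orthogonal basis of a plane \(W\) and checks that \(\mathbf{z} = \mathbf{y} - \hat{\mathbf{y}}\) is orthogonal to both basis vectors, hence to \(W\).

```python
import numpy as np

# Orthogonal (not unit) basis of a plane W in R^3 -- illustrative values.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
assert np.isclose(u1 @ u2, 0)            # the basis is orthogonal

y = np.array([2.0, 3.0, 4.0])

# Formula (2): y_hat = (y.u1 / u1.u1) u1 + (y.u2 / u2.u2) u2
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
z = y - y_hat

# z lies in W-perp: it is orthogonal to both spanning vectors of W.
assert np.isclose(z @ u1, 0) and np.isclose(z @ u2, 0)
print("y_hat =", y_hat, " z =", z)       # y_hat = [2, 3, 0], z = [0, 0, 4]
```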

03

Check whether the statement is true or false

c)

The uniqueness of the decomposition \(\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z}\) shows that the orthogonal projection \(\hat{\mathbf{y}}\) depends only on \(W\), not on the particular orthogonal basis \(\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}\) used to compute it in formula (2).

Thus, the given statement (c) is false.
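
To illustrate (with made-up vectors), the sketch below projects the same \(\mathbf{y}\) onto the same plane \(W\) using two different orthogonal bases and obtains the same \(\hat{\mathbf{y}}\) both times.

```python
import numpy as np

def proj(y, basis):
    """Orthogonal projection of y onto Span(basis) via formula (2);
    assumes the basis vectors are mutually orthogonal."""
    return sum((y @ u) / (u @ u) * u for u in basis)

y = np.array([2.0, 3.0, 4.0])

# Two different orthogonal bases of the same plane W (the x1x2-plane).
basis_a = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
basis_b = [np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])]

p_a = proj(y, basis_a)
p_b = proj(y, basis_b)
assert np.allclose(p_a, p_b)             # same projection from either basis
print(p_a)                               # [2. 3. 0.]
```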

04

Check whether the statement is true or false

d)

When \(\mathbf{y}\) is in \(W = \operatorname{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}\), the decomposition \(\mathbf{y} = \mathbf{y} + \mathbf{0}\), with \(\mathbf{y}\) in \(W\) and \(\mathbf{0}\) in \(W^\perp\), is the orthogonal decomposition of \(\mathbf{y}\); by its uniqueness, \(\operatorname{proj}_W \mathbf{y} = \mathbf{y}\).

Thus, the given statement (d) is true.
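
One more numeric sanity check under the same illustrative setup: a vector built inside \(W\) projects to itself.

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])

y = 3 * u1 - 2 * u2                      # y is in W = Span{u1, u2} by construction
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
assert np.allclose(y_hat, y)             # proj_W y = y when y is already in W
```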

05

Check whether the statement is true or false

e)

Theorem 4 holds for the column space \(W\) of \(U\), since the columns of \(U\) are orthonormal, hence linearly independent, and therefore constitute an orthonormal basis for \(W\). It follows that \(UU^T\mathbf{y} = \operatorname{proj}_W \mathbf{y}\).

Thus, the given statement (e) is true.
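
A final sketch (illustrative matrix, not from the exercise) builds a \(U\) with orthonormal columns via a QR factorization and checks that \(UU^T\mathbf{y}\) matches the projection computed from formula (2).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))          # random 4x2 matrix, full column rank
U, _ = np.linalg.qr(A)                   # U has orthonormal columns, Col U = Col A
assert np.allclose(U.T @ U, np.eye(2))   # orthonormal columns: U^T U = I

y = rng.standard_normal(4)

# Projection of y onto Col U from formula (2): the columns are orthonormal,
# so each denominator u.u is 1. Compare with the matrix form U U^T y.
p_formula = sum((y @ u) * u for u in U.T)
p_matrix = U @ U.T @ y
assert np.allclose(p_formula, p_matrix)
print(p_matrix)
```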

