Q2.4-4: In Exercises 1–9, assume that ... [FREE SOLUTION] | 91影视

In Exercises 1–9, assume that the matrices are partitioned conformably for block multiplication. Compute the products shown in Exercises 1–4.

4. \[\left[ {\begin{array}{*{20}{c}}I&0\\{ - X}&I\end{array}} \right]\left[ {\begin{array}{*{20}{c}}A&B\\C&D\end{array}} \right]\]

Short Answer

Expert verified

The product is \(\left[ {\begin{array}{*{20}{c}}A&B\\{ - XA + C}&{ - XB + D}\end{array}} \right]\).

Step by step solution

01

State the row-column rule

By the row-column rule, when the product AB is defined, the entry in row \(i\) and column \(j\) of AB is the sum of the products of corresponding entries from row \(i\) of matrix A and column \(j\) of matrix B.

The product is shown below:

\({\left( {AB} \right)_{ij}} = {a_{i1}}{b_{1j}} + {a_{i2}}{b_{2j}} + ... + {a_{in}}{b_{nj}}\)
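As an illustration (not part of the textbook solution), the row-column rule \({\left( {AB} \right)_{ij}} = {a_{i1}}{b_{1j}} + {a_{i2}}{b_{2j}} + ... + {a_{in}}{b_{nj}}\) can be implemented directly for matrices stored as nested lists; the function name `matmul` here is just a label for this sketch.

```python
def matmul(A, B):
    """Row-column rule: (AB)_ij = sum_k a_ik * b_kj."""
    n = len(B)  # inner dimension: columns of A must equal rows of B
    assert all(len(row) == n for row in A), "sizes must be conformable"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Block multiplication works the same way, with the scalar products \(a_{ik}b_{kj}\) replaced by matrix products of conformable blocks.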

02

Obtain the product

Compute the product by using the row-column rule, as shown below:

\(\begin{array}{c}\left[ {\begin{array}{*{20}{c}}I&0\\{ - X}&I\end{array}} \right]\left[ {\begin{array}{*{20}{c}}A&B\\C&D\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}{I\left( A \right) + 0\left( C \right)}&{I\left( B \right) + 0\left( D \right)}\\{ - X\left( A \right) + I\left( C \right)}&{ - X\left( B \right) + I\left( D \right)}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{IA}&{IB}\\{ - XA + IC}&{ - XB + ID}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}A&B\\{ - XA + C}&{ - XB + D}\end{array}} \right]\end{array}\)

Thus, \(\left[ {\begin{array}{*{20}{c}}I&0\\{ - X}&I\end{array}} \right]\left[ {\begin{array}{*{20}{c}}A&B\\C&D\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}A&B\\{ - XA + C}&{ - XB + D}\end{array}} \right]\).
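The block identity above can be checked numerically (a sanity-check sketch, not part of the textbook solution) by filling the blocks with random \(2 \times 2\) matrices and comparing both sides entrywise:

```python
import numpy as np

# Random 2x2 blocks standing in for A, B, C, D, X (illustrative choices).
rng = np.random.default_rng(0)
A, B, C, D, X = (rng.standard_normal((2, 2)) for _ in range(5))
I, Z = np.eye(2), np.zeros((2, 2))

# Left side: the full 4x4 product of the two partitioned matrices.
left = np.block([[I, Z], [-X, I]]) @ np.block([[A, B], [C, D]])
# Right side: the block formula derived above.
right = np.block([[A, B], [-X @ A + C, -X @ B + D]])

assert np.allclose(left, right)
```

Because the blocks are conformable, multiplying the assembled \(4 \times 4\) matrices agrees with multiplying block by block.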


Most popular questions from this chapter

Suppose A is an \(n \times n\) matrix with the property that the equation \(A{\bf{x}} = {\bf{b}}\) has at least one solution for each \({\bf{b}}\) in \({\mathbb{R}^n}\). Without using Theorem 5 or 8, explain why each equation \(A{\bf{x}} = {\bf{b}}\) has in fact exactly one solution.

Let \(X\) be an \(m \times n\) data matrix such that \({X^T}X\) is invertible, and let \(M = {I_m} - X{\left( {{X^T}X} \right)^{ - {\bf{1}}}}{X^T}\). Add a column \({x_{\bf{0}}}\) to the data and form

\(W = \left[ {\begin{array}{*{20}{c}}X&{{x_{\bf{0}}}}\end{array}} \right]\)

Compute \({W^T}W\). The \(\left( {{\bf{1}},{\bf{1}}} \right)\) entry is \({X^T}X\). Show that the Schur complement (Exercise 15) of \({X^T}X\) can be written in the form \({\bf{x}}_{\bf{0}}^TM{{\bf{x}}_{\bf{0}}}\). It can be shown that the quantity \({\left( {{\bf{x}}_{\bf{0}}^TM{{\bf{x}}_{\bf{0}}}} \right)^{ - {\bf{1}}}}\) is the \(\left( {{\bf{2}},{\bf{2}}} \right)\)-entry in \({\left( {{W^T}W} \right)^{ - {\bf{1}}}}\). This entry has a useful statistical interpretation, under appropriate hypotheses.

In the study of engineering control of physical systems, a standard set of differential equations is transformed by Laplace transforms into the following system of linear equations:

\(\left[ {\begin{array}{*{20}{c}}{A - s{I_n}}&B\\C&{{I_m}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{\bf{x}}\\{\bf{u}}\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}{\bf{0}}\\{\bf{y}}\end{array}} \right]\)

where \(A\) is \(n \times n\), \(B\) is \(n \times m\), \(C\) is \(m \times n\), and \(s\) is a variable. The vector \({\bf{u}}\) in \({\mathbb{R}^m}\) is the "input" to the system, \({\bf{y}}\) in \({\mathbb{R}^m}\) is the "output", and \({\bf{x}}\) in \({\mathbb{R}^n}\) is the "state" vector. (Actually, the vectors \({\bf{x}}\), \({\bf{u}}\), and \({\bf{y}}\) are functions of \(s\), but we suppress this fact because it does not affect the algebraic calculations in Exercises 19 and 20.)

Suppose a linear transformation \(T:{\mathbb{R}^n} \to {\mathbb{R}^n}\) has the property that \(T\left( {\mathop{\rm u}\nolimits} \right) = T\left( {\mathop{\rm v}\nolimits} \right)\) for some pair of distinct vectors u and v in \({\mathbb{R}^n}\). Can T map \({\mathbb{R}^n}\) onto \({\mathbb{R}^n}\)? Why or why not?

Use matrix algebra to show that if A is invertible and D satisfies \(AD = I\) then \(D = {A^{ - {\bf{1}}}}\).

Let A be an invertible \(n \times n\) matrix, and let \(B\) be an \(n \times p\) matrix. Explain why \({A^{ - 1}}B\) can be computed by row reduction: If \(\left( {\begin{aligned}{*{20}{c}}A&B\end{aligned}} \right) \sim ... \sim \left( {\begin{aligned}{*{20}{c}}I&X\end{aligned}} \right)\), then \(X = {A^{ - 1}}B\).

If A is larger than \(2 \times 2\), then row reduction of \(\left( {\begin{aligned}{*{20}{c}}A&B\end{aligned}} \right)\) is much faster than computing both \({A^{ - 1}}\) and \({A^{ - 1}}B\).
