Chapter 5: Problem 12
Show by a suitable example that in general \((A B)^{+} \neq B^{+} A^{+} .\)
Short Answer
Take \( A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \) and \( B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \). Then \( (AB)^{+} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \) while \( B^{+} A^{+} = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix} \), so \( (AB)^{+} \neq B^{+} A^{+} \).
Step by step solution
01
Define Matrices A and B
Let \( A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \) and \( B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \). Each matrix has a zero row, so both are singular: neither has an ordinary inverse, and we must work with Moore–Penrose pseudoinverses.
02
Calculate Product AB
Calculate the product of matrices \( A \) and \( B \): \( AB = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \).
03
Find the Pseudoinverse of AB
The product \( AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \) is diagonal, and the pseudoinverse of a diagonal matrix is obtained by inverting each non-zero diagonal entry and leaving the zeros in place. Since the only non-zero entry is \( 1 \), \( (AB)^{+} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \).
04
Find the Pseudoinverse of B
\( B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \) is likewise diagonal, so the same rule gives \( B^{+} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = B \).
05
Find the Pseudoinverse of A
\( A \) has rank 1, so we use the identity \( A^{+} = A^{T} (A A^{T})^{+} \), valid for any matrix. Here \( A A^{T} = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \), whose pseudoinverse is \( \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix} \), so \( A^{+} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1/2 & 0 \\ 1/2 & 0 \end{pmatrix} \). One can verify directly that this matrix satisfies all four Penrose conditions.
06
Calculate the Product of Pseudoinverses
Calculate the product \( B^{+} A^{+} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1/2 & 0 \\ 1/2 & 0 \end{pmatrix} = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix} \).
07
Compare Results
Compare \( (AB)^{+} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \) with \( B^{+} A^{+} = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix} \). They differ, so \( (AB)^{+} \neq B^{+} A^{+} \) for this pair, which establishes that the reverse-order law fails in general. It does hold under extra hypotheses, for example when \( A \) has full column rank and \( B \) has full row rank.
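As a numerical sanity check, one concrete pair of singular matrices for which the reverse-order law fails can be verified with NumPy's `numpy.linalg.pinv` (a sketch; assumes NumPy is installed):

```python
import numpy as np

# A pair of singular matrices (each has a zero row) for which
# the reverse-order law for pseudoinverses fails.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

pinv_AB = np.linalg.pinv(A @ B)               # (AB)^+
rev = np.linalg.pinv(B) @ np.linalg.pinv(A)   # B^+ A^+

print(pinv_AB)
print(rev)
print(np.allclose(pinv_AB, rev))  # False: (AB)^+ != B^+ A^+
```

Comparing with `np.allclose` rather than `==` avoids spurious mismatches from floating-point round-off in the SVD that `pinv` uses internally.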
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Matrix Multiplication
Matrix multiplication is a fundamental operation in linear algebra. It involves finding the product of two matrices, often denoted as \( C = AB \). To multiply two matrices, it is important to follow specific rules:
- Dimensions must align: The number of columns in the first matrix \( A \) must equal the number of rows in the second matrix \( B \). If \( A \) is an \( m \times n \) matrix and \( B \) is an \( n \times p \) matrix, their product \( AB \) will be an \( m \times p \) matrix.
- Entry calculation: Each entry \( c_{ij} \) of the resulting matrix \( C \) is computed by taking the dot product of the \( i \text{th} \) row of \( A \) and the \( j \text{th} \) column of \( B \).
In the exercise, the product \( AB \) is computed entry by entry with exactly this rule; because \( A \) and \( B \) each contain a zero row, several entries of the product vanish.
Matrix multiplication is associative and distributive but not commutative, meaning \( AB \neq BA \) in general.
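The entry rule and the non-commutativity noted above can be illustrated with a short NumPy sketch (the matrices here are illustrative):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 0]])   # 2 x 2
B = np.array([[1, 0],
              [0, 0]])   # 2 x 2, so A @ B is defined

# Entry rule: c_ij is the dot product of row i of A with column j of B.
C = np.empty((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        C[i, j] = A[i, :] @ B[:, j]

print(np.array_equal(C, A @ B))      # True: matches the built-in product
print(np.array_equal(A @ B, B @ A))  # False: multiplication is not commutative
```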
Singular Matrices
In linear algebra, a matrix is called singular if it has no inverse. For a square matrix \( A \), this means there is no matrix \( B \) such that \( AB = BA = I \), where \( I \) is the identity matrix.
- Zero determinant: A matrix is singular when its determinant is zero, indicating that it cannot be inverted.
- Non-full rank: Singular matrices do not have full rank, meaning not all of their rows or columns are linearly independent.
In the exercise, both \( A \) and \( B \) are singular: each has a zero row, so neither has full rank and neither can be inverted. Their singularity is precisely why the pseudoinverse, which exists for every matrix, is computed instead of an ordinary inverse. This concept is central when working with non-invertible matrices in applications such as least-squares solutions of linear systems and optimization.
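Both tests for singularity, a zero determinant and a deficient rank, are easy to check numerically; a small sketch assuming NumPy:

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # singular 2 x 2 matrix (zero second row)

det = np.linalg.det(M)           # 0.0 -> no ordinary inverse exists
rank = np.linalg.matrix_rank(M)  # 1 < 2 -> not full rank

# np.linalg.inv(M) would raise LinAlgError here,
# but the Moore-Penrose pseudoinverse always exists:
M_plus = np.linalg.pinv(M)

print(det, rank)
print(M_plus)   # approximately [[0.5, 0], [0.5, 0]]
```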
Diagonal Matrices
Diagonal matrices are a simple yet powerful type of matrix characterized by having non-zero entries only on their main diagonal. All other positions outside this diagonal are zeros. This unique structure gives diagonal matrices several useful properties:
- Easier calculations: Operations like addition, subtraction, and finding powers of the matrix become simpler due to their zero off-diagonal entries.
- Diagonal pseudoinverses: The pseudoinverse of a diagonal matrix is particularly straightforward: each non-zero diagonal entry is replaced by its reciprocal, and zero entries stay zero, as seen with matrix \( B \) in the exercise.
- Eigensystem simplification: The eigenvalues of a diagonal matrix are simply the entries on the main diagonal, and their eigenvectors correspond to the standard basis vectors.
In this exercise, matrix \( B \) is diagonal, so its pseudoinverse can be written down by inspection using this rule. That efficiency is one reason diagonal matrices appear so frequently in both theoretical and applied problems.
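The diagonal pseudoinverse rule can be written out directly and checked against `numpy.linalg.pinv` (the diagonal entries below are illustrative):

```python
import numpy as np

d = np.array([2.0, 0.0, 5.0])   # diagonal entries, including a zero
D = np.diag(d)

# Rule: invert each non-zero diagonal entry; zero entries stay zero.
d_plus = np.zeros_like(d)
nonzero = d != 0
d_plus[nonzero] = 1.0 / d[nonzero]
D_plus = np.diag(d_plus)

print(D_plus.diagonal())                       # diagonal entries 0.5, 0.0, 0.2
print(np.allclose(D_plus, np.linalg.pinv(D)))  # True: agrees with NumPy's pinv
```

Masking with `d != 0` before dividing avoids a division-by-zero warning while implementing exactly the invert-nonzero-entries rule.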