Problem 2


Suppose \(A\) and \(B\) are in \(\mathbf{R}^{n \times n}\). Give an algorithm for computing orthogonal \(Q\) and \(Z\) such that \(Q^{T} A Z\) is upper Hessenberg and \(Z^{T} B Q\) is upper triangular.

Short Answer

Expert verified
Choose the orthogonal \(Q\) that reduces the product \(AB\) to upper Hessenberg form, then take \(Z\) from the QR factorization \(BQ = ZT\); then \(Z^T B Q = T\) is upper triangular and \(Q^T A Z = (Q^T AB Q)T^{-1}\) is upper Hessenberg.

Step by step solution

01

Understand the Task

Our goal is to find orthogonal matrices \(Q\) and \(Z\) such that \(Q^T A Z\) is upper Hessenberg and \(Z^T B Q\) is upper triangular. The key observation: if both conditions hold, then \(Q^T(AB)Q = (Q^T A Z)(Z^T B Q)\) is the product of an upper Hessenberg matrix and an upper triangular matrix, which is again upper Hessenberg. So \(Q\) must be an orthogonal matrix that reduces the product \(AB\) to upper Hessenberg form.
02

Use the QR Factorization

To approach the problem, we use the QR factorization. Recall that any square matrix \(M\) can be decomposed as \(M = ZT\), where \(Z\) is an orthogonal matrix and \(T\) is an upper triangular matrix. We will apply this factorization to the matrix \(BQ\) once \(Q\) has been determined, since \(BQ = ZT\) immediately gives \(Z^T B Q = T\).
03

Transform Matrix A

Begin by forming the product \(C = AB\) and applying a sequence of Householder reflections to build an orthogonal matrix \(Q\) such that \(H_C = Q^T C Q\) is upper Hessenberg. This is the standard Hessenberg reduction: the \(k\)-th reflection zeros the entries below the subdiagonal in column \(k\), and applying it from both sides keeps the transformation a similarity.
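
Below is a minimal NumPy sketch of this reduction; the helper name `hessenberg_reduce` is illustrative, and in practice `scipy.linalg.hessenberg(C, calc_q=True)` computes the same factorization:

```python
import numpy as np

def hessenberg_reduce(C):
    """Reduce C to upper Hessenberg form H = Q.T @ C @ Q using
    Householder reflections; returns (H, Q) with Q orthogonal."""
    n = C.shape[0]
    H = C.astype(float).copy()
    Q = np.eye(n)
    for k in range(n - 2):
        # Householder vector that zeros H[k+2:, k].
        x = H[k + 1:, k]
        v = x.copy()
        # Shift v[0] by +/-||x||, sign chosen to avoid cancellation.
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue  # column already has the required zeros
        v /= nv
        # Similarity transform with P = I - 2 v v^T on rows/columns k+1 onward.
        H[k + 1:, :] -= 2.0 * np.outer(v, v @ H[k + 1:, :])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
        Q[:, k + 1:] -= 2.0 * np.outer(Q[:, k + 1:] @ v, v)  # accumulate Q
    return H, Q
```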
04

Transform Matrix B

Next, consider the matrix \(BQ\). Compute its QR factorization \(BQ = ZT\), using Householder reflections or Givens rotations; this defines the orthogonal matrix \(Z\) and an upper triangular \(T\). By construction, \(Z^T B Q = T\) is upper triangular, which is exactly the second required condition.
05

Combining Transformations

It remains to verify that the same pair \(Q, Z\) also satisfies the first condition. Assuming \(B\) is nonsingular, \(T\) is nonsingular and \(Z = BQT^{-1}\), so \(Q^T A Z = Q^T AB Q T^{-1} = H_C T^{-1}\). An upper Hessenberg matrix times an upper triangular matrix is upper Hessenberg, so \(Q^T A Z\) is upper Hessenberg, as required. (If \(B\) is singular, the same conclusion follows by a continuity argument or a more careful column-by-column construction.) A sketch of the complete procedure appears below.
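
The two steps combine into a short procedure. Below is a minimal NumPy sketch, assuming \(B\) is nonsingular (as in the argument above) and reusing the `hessenberg_reduce` helper sketched in Step 03; the function name `hessenberg_triangular_pair` is illustrative:

```python
import numpy as np

def hessenberg_triangular_pair(A, B):
    """Return orthogonal Q, Z such that Q.T @ A @ Z is upper Hessenberg
    and Z.T @ B @ Q is upper triangular (B assumed nonsingular)."""
    # Step 1: choose Q so that Q^T (AB) Q is upper Hessenberg.
    _, Q = hessenberg_reduce(A @ B)
    # Step 2: take Z from the QR factorization B Q = Z T.
    Z, _ = np.linalg.qr(B @ Q)
    return Q, Z
```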
06

Verification

Finally, confirm that \(Q\) and \(Z\) are both orthogonal and that the transformed matrices have the desired structure: \(Q^T A Z\) is upper Hessenberg and \(Z^T B Q\) is upper triangular. The checks below illustrate this on random test matrices.
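
A quick numerical check, as a sketch reusing the helpers from the previous steps; the `assert` statements encode the four requirements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))  # a random B is nonsingular with probability 1

Q, Z = hessenberg_triangular_pair(A, B)
H = Q.T @ A @ Z
T = Z.T @ B @ Q

assert np.allclose(Q.T @ Q, np.eye(n))   # Q is orthogonal
assert np.allclose(Z.T @ Z, np.eye(n))   # Z is orthogonal
assert np.allclose(np.tril(H, -2), 0.0)  # H is upper Hessenberg
assert np.allclose(np.tril(T, -1), 0.0)  # T is upper triangular
```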


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Orthogonal Matrices
Orthogonal matrices are a foundational concept in linear algebra and matrix decompositions. An orthogonal matrix is a square matrix whose columns (and rows) are orthonormal: mutually perpendicular unit vectors.
This property ensures that multiplying a vector by an orthogonal matrix preserves lengths and the angles between vectors. Some important properties of orthogonal matrices include:
  • Inverse is its transpose: For an orthogonal matrix, the inverse is simply its transpose, i.e., if a matrix is orthogonal, then Q^{-1} = Q^T.
  • Determinant: The determinant of an orthogonal matrix is always either +1 or -1. This property helps maintain stable numerical behavior during computations.
  • Preservation of norms: Orthogonal matrices preserve vector norms, meaning that the length of a vector remains unchanged after transformation.
These properties make orthogonal matrices very useful in numerical methods and applications such as QR factorization, where stability and efficiency are crucial.
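These properties are straightforward to confirm numerically. The sketch below builds a random orthogonal matrix from a QR factorization and checks each one in turn:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a random orthogonal matrix

print(np.allclose(np.linalg.inv(Q), Q.T))      # inverse is the transpose
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # determinant is +1 or -1
x = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # norm preserved
```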
QR Factorization
QR factorization is a technique used to decompose a matrix into two components: an orthogonal matrix Q and an upper triangular matrix R.
This decomposition is particularly useful for solving linear systems, eigenvalue problems, and least squares solutions. To perform QR factorization on a matrix A, follow these basic steps:
  • Find an orthogonal matrix Q and an upper triangular matrix R such that A = QR; equivalently, Q^T A = R.
  • To find Q, one can use either Householder reflections or Givens rotations, both of which are techniques to create orthogonal transformations.
    • Householder Reflections: used to zero out all the elements below the diagonal of a column at once, forming the upper triangular matrix R.
    • Givens Rotations: a sequence of plane rotations that zero out one element at a time, similarly producing an upper triangular matrix.
This factorization is pivotal when transforming matrices into simpler forms for analysis or computation, particularly ensuring that the transformations can be reversed efficiently due to the properties of orthogonal matrices.
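As a concrete illustration, NumPy's `np.linalg.qr` (which calls Householder-based LAPACK routines) produces the two factors directly, and the defining properties are easy to verify:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)  # reduced factorization: Q is 3x2, R is 2x2

print(np.allclose(A, Q @ R))             # A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))   # columns of Q are orthonormal
print(np.allclose(np.tril(R, -1), 0.0))  # R is upper triangular
```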
Householder Reflections
Householder reflections are a powerful method used to compute QR factorizations by using reflection matrices to zero out elements below the diagonal of a given matrix. A Householder reflection operates by reflecting the vector being transformed about a hyperplane. Here is how you can understand Householder reflections:
  • Define a unit reflection vector u such that the reflection maps a target vector x onto a multiple of the first coordinate axis, producing the desired zeros below the diagonal.
  • The Householder matrix H is defined as H = I - 2uu^T, where I is the identity matrix and u has unit length (for a general nonzero u, the formula is H = I - 2uu^T/(u^T u)).
  • Applying H to the target matrix creates zeros beneath the diagonal while maintaining orthogonality.
Householder transformations are beneficial due to their numerical stability and efficiency. They provide a reliable method for obtaining the orthogonal and upper triangular components needed when decomposing matrices.
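A small sketch (the helper name `householder_vector` is illustrative) showing how a single reflection maps a vector onto a multiple of the first coordinate axis, zeroing everything below the first entry:

```python
import numpy as np

def householder_vector(x):
    """Unit vector u such that (I - 2 u u^T) x is a multiple of e_1."""
    v = x.astype(float).copy()
    # Shift v[0] by +/-||x||, with the sign chosen to avoid cancellation.
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    return v / np.linalg.norm(v)

x = np.array([3.0, 1.0, 5.0, 1.0])
u = householder_vector(x)
H = np.eye(4) - 2.0 * np.outer(u, u)  # the Householder matrix
print(H @ x)  # approximately [-6, 0, 0, 0]
```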
Givens Rotations
Givens rotations are another method to achieve QR factorization, particularly useful when working with sparse matrices or when incremental updates are required.
Unlike Householder reflections, Givens rotations focus on zeroing out one element at a time, using rotations in a plane defined by two coordinates. The essential steps for applying Givens rotations are:
  • Select two indices, i and j, representing the plane of elements to be transformed.
  • Calculate the necessary trigonometric parameters, cosine (c) and sine (s), to create the rotation matrix G.
  • Apply the rotation matrix G to zero out the chosen element, effectively maintaining the orthogonal nature of the transformation.
Givens rotations are particularly advantageous for their ability to handle matrix computations incrementally and are suited for numerical applications requiring high precision.
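A minimal sketch (the `givens` helper is illustrative), computing c and s so that a single rotation annihilates one entry:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

# Zero out entry (1, 0) of A by rotating rows 0 and 1.
A = np.array([[6.0, 5.0],
              [8.0, 4.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s],
              [-s, c]])
print(G @ A)  # entry (1, 0) is now zero
```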

One App. One Place for Learning.

All the tools & learning materials you need for study success - in one app.

Get started for free

Most popular questions from this chapter

Suppose $$ A=\left[\begin{array}{rr} w & x \\ y & z \end{array}\right] $$ is a real matrix having eigenvalues \(\lambda \pm i \mu\), where \(\mu\) is nonzero. Give an algorithm that stably determines \(c=\cos (\theta)\) and \(s=\sin (\theta)\) such that $$ \left[\begin{array}{rr} c & s \\ -s & c \end{array}\right]^{T}\left[\begin{array}{ll} w & x \\ y & z \end{array}\right]\left[\begin{array}{rr} c & s \\ -s & c \end{array}\right]=\left[\begin{array}{ll} \lambda & \beta \\ \alpha & \lambda \end{array}\right] $$ where \(\alpha \beta=-\mu^{2}\).

Suppose \(A_{k} \rightarrow A\) and that \(Q_{k}^{H} A_{k} Q_{k}=T_{k}\) is a Schur decomposition of \(A_{k}\). Show that \(\left\{Q_{k}\right\}\) has a converging subsequence \(\left\{Q_{k_{i}}\right\}\) with the property that $$ \lim _{i \rightarrow \infty} Q_{k_{i}}=Q $$ where \(Q^{H} A Q=T\) is upper triangular. This shows that the eigenvalues of a matrix are continuous functions of its entries.

A nonnegative matrix \(P \in \mathbf{R}^{n \times n}\) is stochastic if the entries in each column sum to 1. A vector \(v \in \mathbf{R}^{n}\) is a probability vector if its entries are nonnegative and sum to 1. (a) Show that if \(P \in \mathbf{R}^{n \times n}\) is stochastic and \(v \in \mathbf{R}^{n}\) is a probability vector, then \(w=P v\) is also a probability vector. (b) The entries in a stochastic matrix \(P \in \mathbf{R}^{n \times n}\) can be regarded as the transition probabilities associated with an \(n\)-state Markov chain. Let \(v_{j}\) be the probability of being in state \(j\) at time \(t=t_{\text {current }}\). In the Markov model, the probability of being in state \(i\) at time \(t=t_{\text {next }}\) is given by $$ w_{i}=\sum_{j=1}^{n} p_{i j} v_{j}, \quad i=1: n, $$ i.e., \(w=P v\). With the help of a biased coin, a surfer on the World Wide Web randomly jumps from page to page. Assume that the surfer is currently viewing web page \(j\) and that the coin comes up heads with probability \(\alpha\). Here is how the surfer determines the next page to visit: Step 1. A coin is tossed. Step 2. If it comes up heads and web page \(j\) has at least one outlink, then the next page to visit is randomly selected from the list of outlink pages. Step 3. Otherwise, the next page to visit is randomly selected from the list of all possible pages. Let \(P \in \mathbf{R}^{n \times n}\) be the matrix of transition probabilities that define this random process. Specify \(P\) in terms of \(\alpha\), the vector of ones \(e\), and the link matrix \(H \in \mathbf{R}^{n \times n}\) defined by $$ h_{i j}= \begin{cases}1 & \text { if there is a link on web page } j \text { to web page } i \\ 0 & \text { otherwise }\end{cases} $$

Suppose \(H \in \mathbf{R}^{n \times n}\) is upper Hessenberg with a complex eigenvalue \(\lambda+i \mu\). How could inverse iteration be used to compute \(x, y \in \mathbf{R}^{n}\) so that \(H(x+i y)=(\lambda+i \mu)(x+i y)\)? Hint: Compare real and imaginary parts in this equation and obtain a \(2n\)-by-\(2n\) real system.

Assume that \(T \in \mathbf{R}^{n \times n}\) is block upper triangular and partitioned as follows: $$ T=\left[\begin{array}{rrr} T_{11} & T_{12} & T_{13} \\ 0 & T_{22} & T_{23} \\ 0 & 0 & T_{33} \end{array}\right] $$ Suppose that the diagonal block \(T_{22}\) is 2-by-2 with complex eigenvalues that are disjoint from \(\lambda\left(T_{11}\right)\) and \(\lambda\left(T_{33}\right)\). Give an algorithm for computing the 2-dimensional real invariant subspace associated with \(T_{22}\)'s eigenvalues.
