Problem 1


As another example, consider the case \(\Omega=\mathcal{L}_{m}^{2}(\sigma, \tau)\), \(X=\mathbb{K}^{n}\), and $$ L \omega:=\int_{\sigma}^{\tau} k(t)^{*} \omega(t)\, d t, $$ where \(k\) is a fixed \(m \times n\) matrix of elements of \(\mathcal{L}^{2}(\sigma, \tau)\). Thus, if we denote by \(k_{i}\) the \(i\)th column of \(k\), \(i=1, \ldots, n\), then each \(k_{i}\) is in \(\mathcal{L}_{m}^{2}(\sigma, \tau)\) and $$ L \omega=\left(\begin{array}{c} \left\langle k_{1}, \omega\right\rangle \\ \vdots \\ \left\langle k_{n}, \omega\right\rangle \end{array}\right). $$

We now compute the adjoint of \(L\). Pick any \(\omega \in \Omega\) and \(x \in X\). Since \(\langle L \omega, x\rangle\) equals $$ \left(\int_{\sigma}^{\tau} k(t)^{*} \omega(t) d t\right)^{*} x=\left(\int_{\sigma}^{\tau} \omega(t)^{*} k(t) d t\right) x=\int_{\sigma}^{\tau} \omega(t)^{*}(k(t) x) d t, $$ we have that \(L^{*} x\) is the element of \(\mathcal{L}_{m}^{2}\) given by the function \(\left(L^{*} x\right)(t)=k(t) x\).

From now on we assume that \(X\) is finite dimensional. (Most of what we do generalizes easily to the case of arbitrary \(X\) with \(L\) of closed range; the latter property is automatic in the finite-dimensional case.) The following Lemma is basic. Here "im" denotes image, "ker" kernel or nullspace, and "\(\perp\)" indicates orthogonal complement: $$ S^{\perp}=\{z \mid\langle x, z\rangle=0 \text { for all } x \in S\} . $$ Since \(X\) was assumed finite dimensional, it holds that \(\left(S^{\perp}\right)^{\perp}=S\) for all subspaces \(S\) of \(X\).

Short Answer

In short, the adjoint of the linear operator \(L\) is given by \((L^*x)(t) = k(t)x\), where \(L\omega = \int_{\sigma}^{\tau} k(t)^{*} \omega(t)\, dt\) and \(k\) is an \(m \times n\) matrix with elements in \(\mathcal{L}^{2}(\sigma, \tau)\). The adjoint \(L^*\) thus maps a vector in \(X=\mathbb{K}^n\) to a function in \(\Omega=\mathcal{L}_{m}^{2}(\sigma,\tau)\). Moreover, since \(X\) is finite dimensional, every subspace \(S\) of \(X\) satisfies \(\left(S^{\perp}\right)^{\perp}=S\), a fact that underlies the analysis of the image and kernel of \(L\) and \(L^*\).

Step by step solution

01

Understanding the operator L

Given the operator \(L\) defined as: $$L \omega:=\int_{\sigma}^{\tau} k(t)^{*} \omega(t) d t$$ where \(k\) is an \(m \times n\) matrix with elements in \(\mathcal{L}^{2}(\sigma, \tau)\), we can express it in terms of inner products with the columns \(k_i\) of the matrix \(k\): $$L \omega=\left(\begin{array}{c} \left\langle k_{1}, \omega\right\rangle \\ \vdots \\ \left\langle k_{n}, \omega\right\rangle \end{array}\right)$$
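The equality between the integral form of \(L\omega\) and the stacked inner products can be checked numerically by discretizing the interval. The grid, the sample kernel \(k\), the sample \(\omega\), and the restriction to the real field below are all illustrative assumptions, not data from the text:

```python
import numpy as np

# Discretize [sigma, tau] and approximate L(omega) = ∫ k(t)* omega(t) dt
# by a Riemann sum. Over the reals, the adjoint * is just the transpose.
sigma, tau, N = 0.0, 1.0, 1000
t = np.linspace(sigma, tau, N, endpoint=False)
dt = (tau - sigma) / N
m, n = 2, 3

# Hypothetical k(t): an m x n matrix of L^2 functions; omega(t): an m-vector.
k = lambda s: np.array([[np.sin(s), s, 1.0],
                        [np.cos(s), s**2, s]])
omega = lambda s: np.array([np.exp(-s), s + 1.0])

# Integral form: L omega = ∫ k(t)^T omega(t) dt
L_omega = sum(k(s).T @ omega(s) for s in t) * dt

# Stacked form: i-th entry is the inner product <k_i, omega> in L^2_m
stacked = np.array([sum(k(s)[:, i] @ omega(s) for s in t) * dt
                    for i in range(n)])

assert np.allclose(L_omega, stacked)
```

Both computations reduce to the same Riemann sum, so the agreement is exact up to floating-point rounding.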
02

Find the adjoint of L

The adjoint of an operator \(L\) is the operator \(L^{*}\) that satisfies $$\langle L \omega, x\rangle=\langle \omega, L^{*} x\rangle.$$ Considering the expression for \(\langle L \omega, x\rangle\): $$ \left(\int_{\sigma}^{\tau} k(t)^{*} \omega(t) d t\right)^{*} x=\left(\int_{\sigma}^{\tau} \omega(t)^{*} k(t) d t\right) x=\int_{\sigma}^{\tau} \omega(t)^{*}(k(t) x) d t, $$ we want to find the operator \(L^{*}\) for which the right-hand side equals \(\langle \omega, L^{*} x\rangle = \int_{\sigma}^{\tau} \omega(t)^{*}\left(L^{*} x\right)(t)\, d t\). Comparing integrands, we identify $$ \left(L^{*} x\right)(t) = k(t) x. $$
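The defining identity \(\langle L\omega, x\rangle = \langle \omega, L^* x\rangle\) with \((L^*x)(t) = k(t)x\) can be sanity-checked by discretizing both integrals. The specific \(k\), \(\omega\), grid, and random \(x\) below are illustrative assumptions:

```python
import numpy as np

sigma, tau, N = 0.0, 1.0, 2000
t = np.linspace(sigma, tau, N, endpoint=False)
dt = (tau - sigma) / N
m, n = 2, 3
rng = np.random.default_rng(0)

# Hypothetical data for the check (real case: * is transpose).
k = lambda s: np.array([[np.sin(s), s, 1.0],
                        [np.cos(s), s**2, s]])
omega = lambda s: np.array([np.exp(-s), s + 1.0])
x = rng.standard_normal(n)

# Left side: <L omega, x>, an inner product in K^n.
L_omega = sum(k(s).T @ omega(s) for s in t) * dt
lhs = L_omega @ x

# Right side: <omega, L* x> in L^2_m, using (L* x)(t) = k(t) x.
rhs = sum(omega(s) @ (k(s) @ x) for s in t) * dt

assert abs(lhs - rhs) < 1e-9
```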
03

Understanding the properties of adjoint

Since \((L^{*} x)(t) = k(t) x\) defines an element of \(\mathcal{L}_{m}^{2}\), the adjoint operator \(L^*\) maps a vector in \(X=\mathbb{K}^n\) to a function in \(\Omega=\mathcal{L}_{m}^{2}(\sigma,\tau)\). The following notions are needed:
  • "im" denotes the image: the set of all outputs obtained by applying an operator to elements of its domain.
  • "ker" denotes the kernel or nullspace: the set of all elements of the domain mapped to zero.
  • "\(\perp\)" indicates the orthogonal complement: the set of all elements whose inner product with every element of the given set is zero.
In this exercise we assume that \(X\) is finite-dimensional.
04

Lemma to identify properties

The Lemma uses the orthogonal complement: $$ S^{\perp}=\{z \mid\langle x, z\rangle=0 \text { for all } x \in S\} . $$ If \(X\) is finite dimensional, then \(\left(S^{\perp}\right)^{\perp}=S\) for all subspaces \(S\) of \(X\). This is the key fact for relating the image of \(L\) to the kernel of its adjoint \(L^{*}\).
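In a concrete finite-dimensional setting, \((S^\perp)^\perp = S\) can be verified by computing the complement twice. The example subspace and the SVD-based helper below are hypothetical choices for illustration:

```python
import numpy as np

def orth_complement(basis):
    """Basis of the orthogonal complement of span(columns of `basis`) in R^d."""
    # z is in the complement iff <x, z> = 0 for every column x, i.e.
    # basis^T z = 0; read the null space off the SVD.
    _, s, Vt = np.linalg.svd(basis.T)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:].T  # columns span the complement

# S = span of two vectors in R^3 (an arbitrary example subspace).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S_perp = orth_complement(A)
S_perp_perp = orth_complement(S_perp)

# Two bases span the same subspace iff their orthogonal projectors agree.
P = lambda B: B @ np.linalg.pinv(B)
assert np.allclose(P(A), P(S_perp_perp))   # (S^perp)^perp = S
```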


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Adjoint Operator
The concept of an adjoint operator is central in the study of linear operators within inner product spaces. If we have an operator \(L\) that maps elements from space \(\Omega\) into another space \(X\), then the adjoint operator, denoted \(L^*\), is the unique operator satisfying the relation:
  • \(\langle L \omega, x \rangle = \langle \omega, L^* x \rangle\), for all \(\omega \in \Omega\) and \(x \in X\).
In simple terms, the adjoint \(L^*\) allows you to "reverse" the action of \(L\) while respecting the inner product structure of the spaces.
This means that \(L^*\) transforms vectors in \(X\) into elements of \(\Omega\), akin to \(L\) transforming elements of \(\Omega\) into vectors in \(X\).
Understanding the adjoint gives insight into the symmetry and reversibility of linear transformations, particularly in physics and engineering applications.
Inner Product Space
An inner product space is a vector space equipped with an additional structure called an 'inner product'. This inner product is a way to multiply vectors together to get a scalar (a simple number).
In mathematical terms, an inner product \(\langle u, v \rangle\) of vectors \(u\) and \(v\) satisfies several key properties:
  • Conjugate Symmetry: \(\langle u, v \rangle = \overline{\langle v, u \rangle}\).
  • Linearity in the first argument: \(\langle au+bv, w \rangle = a\langle u, w \rangle + b\langle v, w \rangle\) for scalars \(a\), \(b\).
  • Positivity: \(\langle u, u \rangle \geq 0\), with equality if and only if \(u = 0\).
An inner product provides a rich geometry on the space — it introduces concepts like orthogonality (vectors being at a perfect right angle to each other) and norm (a measure of vector length).
In our exercise, these properties are used to help define the operator \(L\) and its adjoint \(L^*\), and help in understanding their actions and relationships.
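The three axioms can be checked on a concrete inner product. The sketch below uses \(\langle u, v\rangle = \sum_i u_i \overline{v_i}\) on \(\mathbb{C}^4\) (linear in the first argument, matching the convention above); the randomly drawn vectors and scalars are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = (rng.standard_normal(4) + 1j * rng.standard_normal(4)
           for _ in range(3))
a, b = 2.0 - 1.0j, 0.5 + 3.0j

# <p, q> = sum_i p_i * conj(q_i): linear in the FIRST argument.
ip = lambda p, q: np.sum(p * np.conj(q))

assert np.isclose(ip(u, v), np.conj(ip(v, u)))                 # conjugate symmetry
assert np.isclose(ip(a*u + b*v, w), a*ip(u, w) + b*ip(v, w))   # linearity, 1st arg
assert ip(u, u).real >= 0 and np.isclose(ip(u, u).imag, 0.0)   # positivity
```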
Finite Dimensional Spaces
Finite-dimensional spaces are vector spaces that have a basis consisting of a finite number of vectors. This means any vector in the space can be represented as a linear combination of the basis vectors.
The dimension of a space is simply the number of vectors in any basis. Finite dimensions simplify many problems, as they provide a framework where every vector is "reachable" with a finite number of steps.
Key aspects to note about finite-dimensional spaces are:
  • Linear maps are easier to analyze in these spaces because they can be represented by matrices.
  • Properties such as orthogonality and completeness are straightforward due to the finite number of vectors involved.
  • In a finite-dimensional inner product space, every subspace \(S\) can be related to its orthogonal complement \(S^\perp\), giving \(\left(S^\perp\right)^\perp = S\).
In the solution to our problem, working within finite-dimensional spaces aids in utilizing concepts like image, kernel, and orthogonal complement easily.
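For a matrix representing a linear map between finite-dimensional spaces, the standard relation \(\operatorname{im} L = (\ker L^*)^\perp\) behind such image/kernel arguments can be checked directly. The random matrix below is an arbitrary example, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((4, 2))   # a map R^2 -> R^4; rank 2 with probability 1

# ker(L^*) = ker(L^T) in the real case, read off the SVD of L^T.
_, s, Vt = np.linalg.svd(L.T)
null_LT = Vt[int(np.sum(s > 1e-10)):].T   # columns span ker(L^T) in R^4

# Every vector in ker(L^T) is orthogonal to every column of L, i.e. to im(L).
assert np.allclose(null_LT.T @ L, 0.0)
# Dimensions are complementary: dim ker(L^T) + dim im(L) = 4.
assert null_LT.shape[1] + np.linalg.matrix_rank(L) == 4
```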


Most popular questions from this chapter

Let \(\Sigma\) be a continuous-time system as in Definition \(2.6.7\) and let \(\sigma<\tau\). With the present terminology, Lemma \(2.6.8\) says that \((x, \sigma) \sim(z, \tau)\) for the system \(\Sigma\) iff \((z, \sigma) \sim(x, \tau)\) for the system \(\Sigma_{\sigma+\tau}^{-}\). This remark is sometimes useful in reducing many questions of control to a given state to analogous questions (for the time-reversed system) of control from that same state, and vice versa. Recall that a linear system is one that is either as in Definition 2.4.1 or as in Definition 2.7.2.

Let \(\Sigma\) be a linear system, and pick any \(\sigma, \tau, T \in \mathcal{T}\). (a) \(\Sigma\) is controllable on \([\sigma, \tau]\) iff \((0, \sigma) \sim(y, \tau)\) for all \(y \in X\). (b) If \(\Sigma\) is time-invariant, then \(\Sigma\) is controllable in time \(T\) iff \(0 \sim_{T} y\) for all \(y \in X\). (c) If \(\Sigma\) is continuous-time, then it is controllable on \([\sigma, \tau]\) iff \((x, \sigma) \sim(0, \tau)\) for all \(x \in X\). (d) If \(\Sigma\) is time-invariant and continuous-time, then \(\Sigma\) is controllable in time \(T\) iff \(x \sim_{T} 0\) for all \(x \in X\). (e) The conclusions in (c) and (d) hold also if "continuous-time" is replaced by "discrete-time and \(A(k)\) is invertible for all \(k \in[\sigma, \tau]\)" (in (d), "\(A\) is invertible").

Assume that \((A, B)\) is controllable and \(\mathcal{U} \subseteq \mathbb{R}^{m}\) is a convex and bounded neighborhood of 0. Then there exists a set \(\mathcal{B}\) such that \(\mathcal{R}_{u}(0)=\mathcal{B}+L\) and \(\mathcal{B}\) is bounded, convex, and open relative to \(M\).

The following statements are equivalent for \(L, W\) as above: (a) \(L\) is onto. (b) \(L^{*}\) is one-to-one. (c) \(W\) is onto. (d) \(\operatorname{det} W \neq 0\). (e) \(W\) is positive definite. Consider again the situation in Example 3.5.1. Here \(L\) is onto iff the matrix $$ W=\int_{\sigma}^{\tau} k(t)^{*} k(t)\, d t>0 . $$ Equivalently, \(L\) is onto iff \(L^{*}\) is one-to-one, i.e., there is no \(p \neq 0\) in \(X\) with $$ k(t) p=0 \text { for almost all } t \in[\sigma, \tau) \quad (3.18) $$ or, with a slight rewrite and \(k_{i}:=\) the \(i\)th column of \(k^{*}\): $$ \left\langle p, k_{i}\right\rangle=0 \text { for all } i \text { and almost all } t \Rightarrow p=0 . \quad (3.19) $$
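The link between positive definiteness of \(W\) and condition (3.18) can be illustrated numerically. The two sample kernels below (one with linearly independent columns, one with a dependent column) are illustrative assumptions:

```python
import numpy as np

sigma, tau, N = 0.0, 1.0, 1000
t = np.linspace(sigma, tau, N, endpoint=False)
dt = (tau - sigma) / N

# Case 1: columns of k are independent as L^2 functions -> W > 0, L onto.
k_good = lambda s: np.array([[1.0, s]])          # m = 1, n = 2
W = sum(k_good(s).T @ k_good(s) for s in t) * dt
assert np.all(np.linalg.eigvalsh(W) > 0)         # W positive definite

# Case 2: second column = 2 * first column, so k(t) p = 0 for p = (2, -1):
# L* is not one-to-one and W is singular, so L is not onto.
k_bad = lambda s: np.array([[1.0, 2.0]])
W_bad = sum(k_bad(s).T @ k_bad(s) for s in t) * dt
p = np.array([2.0, -1.0])
assert np.allclose(W_bad @ p, 0.0)               # p is in ker(W_bad)
```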

One could also define a class of systems as in (3.28) with other choices of \(\theta\). Theorem 8 may not be true for such other choices. For instance, the theorem fails for \(\theta=\) identity (why?). It also fails for \(\theta=\arctan\) : Show that the 4-dimensional, single-input system $$ \begin{aligned} \dot{x}_{1} &=\arctan \left(x_{1}+x_{2}+x_{3}+x_{4}+2 u\right) \\ \dot{x}_{2} &=\arctan \left(x_{1}+x_{2}+x_{3}+x_{4}+12 u\right) \\ \dot{x}_{3} &=\arctan (-3 u) \\ \dot{x}_{4} &=\arctan (-4 u) \end{aligned} $$ satisfies that \(B \in \mathbf{B}_{n, m}\) but is not controllable. Explain exactly where the argument given for \(\theta=\tanh\) breaks down.
