Problem 4

If \(C\) is an open convex subset of \(\mathbb{R}^{n}\) and \(L\) is a subspace of \(\mathbb{R}^{n}\) contained in \(C\), then \(C+L=C\).

Short Answer

Expert verified
To prove that if \(C\) is an open convex subset of \(\mathbb{R}^{n}\) and \(L\) is a subspace of \(\mathbb{R}^{n}\) contained in \(C\), then \(C+L=C\), we proceed in two steps. First, we show that \(C + L \subset C\): given \(c \in C\) and \(l \in L\), openness of \(C\) lets us choose \(t \in (0,1)\) small enough that \(\frac{1}{1-t}c \in C\); since \(\frac{1}{t}l \in L \subset C\), convexity gives \(c + l = (1-t)\frac{c}{1-t} + t\frac{l}{t} \in C\). Then, we show that \(C \subset C + L\) by writing each \(c \in C\) as \(c + 0\) with \(0 \in L\). Consequently, we conclude that \(C + L = C\).

Step by step solution

01

Recall the definitions of convex sets, subspaces, and the Minkowski sum

Recall that a set \(C \subset \mathbb{R}^n\) is convex if for any two points \(x, y \in C\) and any \(t \in [0, 1]\), the convex combination \(tx + (1 - t)y\) is also in \(C\). A subspace \(L\) of \(\mathbb{R}^{n}\) is a nonempty subset of \(\mathbb{R}^{n}\) such that for all \(l_{1}, l_{2} \in L\) and any scalar \(\alpha\), the following properties hold: 1. \(l_{1} + l_{2} \in L\) 2. \(\alpha l_{1} \in L\) In particular, every subspace contains the zero vector (take \(\alpha = 0\)). The Minkowski sum of two sets \(A\) and \(B\), denoted \(A + B\), is defined as the set of all possible sums of elements of \(A\) and \(B\). In other words, \(A + B = \lbrace a + b \mid a \in A, b \in B \rbrace\).
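The definitions above can be made concrete for finite point sets. The short Python sketch below (the sets \(A\), \(B\) and the sample points are illustrative choices, not from the text) computes a Minkowski sum and a convex combination directly from their definitions.

```python
# Minkowski sum of two finite point sets in R^n, and a convex combination.
# The example sets A and B below are illustrative, not from the text.

def minkowski_sum(A, B):
    """All pairwise sums a + b of points (tuples) from A and B."""
    return {tuple(ai + bi for ai, bi in zip(a, b)) for a in A for b in B}

def convex_combination(x, y, t):
    """The point t*x + (1 - t)*y for t in [0, 1]."""
    return tuple(t * xi + (1 - t) * yi for xi, yi in zip(x, y))

A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 1)}
print(sorted(minkowski_sum(A, B)))              # corners of the unit square
print(convex_combination((0, 0), (2, 2), 0.5))  # midpoint of the segment
```

Note that the Minkowski sum of two finite sets with \(|A|\) and \(|B|\) elements has at most \(|A|\cdot|B|\) points; for the infinite sets in this problem the same pairwise-sum definition applies, but membership has to be argued, not enumerated.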
02

Prove \(C + L \subset C\)

Let's start by showing that \(C + L \subset C\). Take any \(c \in C\) and \(l \in L\). Because \(C\) is open and \(c \in C\), we may pick \(t \in (0, 1)\) small enough that \(\frac{1}{1-t}c \in C\); indeed, \(\frac{1}{1-t}c \to c\) as \(t \to 0^{+}\), so for small \(t\) this point lies in the open set \(C\). Since \(L\) is a subspace, \(\frac{1}{t}l \in L \subset C\). Now write \(c + l\) as a convex combination of these two points of \(C\): \((1-t)\cdot\frac{1}{1-t}c + t\cdot\frac{1}{t}l = c + l\). By the convexity of \(C\), this point lies in \(C\). Therefore every element of \(C + L\) is in \(C\), i.e., \(C + L \subset C\). Note that the openness of \(C\) is genuinely needed here: it is what allows us to dilate \(c\) slightly while remaining inside \(C\).
03

Prove \(C \subset C + L\)

Now, let's prove that \(C \subset C + L\). Take any element \(c \in C\). Since \(L\) is a subspace of \(\mathbb{R}^{n}\) and, by definition, contains the zero vector, \(0 \in L\). So, we can express \(c\) as \(c + 0\) (with \(0 \in L\)). This means that \(c\) is an element of the Minkowski sum \(C + L\), and thus, any element in \(C\) must be in \(C + L\). Therefore, \(C \subset C + L\).
04

Conclusion

Since we have shown that both \(C + L \subset C\) and \(C \subset C + L\), we can conclude that \(C + L = C\) for an open convex subset \(C\) of \(\mathbb{R}^{n}\) and a subspace \(L\) of \(\mathbb{R}^{n}\) contained in \(C\).


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Convex Sets
Understanding convex sets is crucial for grasping various concepts in linear algebra and optimization. A set of points in \( \mathbb{R}^n \) is called a convex set if, when you pick any two points within it, the whole line segment connecting these points also lies inside the set. Mathematically, we say a set \( C \) is convex if for any two points \( x, y \in C \) and any \( t \in [0, 1] \)—which represents any point between \( x \) and \( y \) along the line segment—the combination \( tx + (1 - t)y \) is also in \( C \).

In simpler terms, if you can 'stretch' a rubber band between any two points of the set and it never leaves the set along the way, the set is convex. This property is fundamental to understanding how different mathematical concepts interact in higher-dimensional spaces, and it is especially important when discussing functions that arise in optimization problems.
Subspaces of R^n
Subspaces are a fundamental concept in the study of linear algebra and vector spaces. A subspace is basically a smaller vector space that sits inside a larger one. In the case of \( \mathbb{R}^n \)—which represents the space of all n-dimensional vectors—a subspace, denoted by \( L \) here, is any set of vectors that also satisfies two main rules:
  1. For any two vectors \( l_1, l_2 \in L \) you can pick, their sum should also be a member of \( L \) (Closed under addition).
  2. For any vector \( l_1 \in L \) and any scalar \( \alpha \) you can multiply, the result should also be part of \( L \) (Closed under scalar multiplication).
These properties ensure that the subspace carries the same algebraic structure as the larger space while being limited in extent. The simplest example of a subspace of \( \mathbb{R}^n \) is \( \{0\} \), the set containing only the origin: it satisfies both closure properties without containing any other points, and every subspace must contain it.
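The two closure rules can be checked mechanically for a concrete subspace. The sketch below uses an illustrative choice (not from the text): \(L = \{(x, 0)\}\), the x-axis in \(\mathbb{R}^2\), with example vectors picked for the check.

```python
# Illustrative check (example vectors chosen here, not from the text) that
# the x-axis L = {(x, 0)} in R^2 satisfies the two subspace closure rules.

def in_L(v):
    """Membership in L: second coordinate must be zero."""
    return v[1] == 0

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a * v[0], a * v[1])

l1, l2 = (3.0, 0.0), (-1.5, 0.0)
assert in_L(add(l1, l2))      # 1. closed under addition
assert in_L(scale(7.0, l1))   # 2. closed under scalar multiplication
assert in_L(scale(0.0, l1))   # scaling by 0 shows the origin is in L
print("closure properties hold for the sampled vectors")
```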
Minkowski Sum
Moving on to a slightly more complex concept, the Minkowski sum is a way of combining two sets to form a new set. Given two sets \( A \) and \( B \) in the same space, their Minkowski sum, denoted by \( A + B \), is created by adding every element \( a \) from set \( A \) to every element \( b \) from set \( B \). The resulting set \( A + B = \{ a + b \mid a \in A, b \in B \} \) consists of all possible sums of pairs of elements from the two sets.

This method of combination is especially powerful in geometrical contexts because it can describe how shapes and spaces expand and interact by translation. In our example with convex sets and subspaces, we observed the Minkowski sum of an open convex set \( C \) and a subspace \( L \) to validate that \( C + L \) equals \( C \) itself when \( L \) is entirely contained in \( C \). This illustrates that the Minkowski sum can also be used to understand deeper properties of mathematical structures in \( \mathbb{R}^n \) space.


