Problem 11: Derive a dual problem


Derive a dual problem for $$ \text{minimize } \sum_{i=1}^{N}\left\|A_{i} x+b_{i}\right\|_{2}+(1/2)\left\|x-x_{0}\right\|_{2}^{2} $$ The problem data are \(A_{i} \in \mathbf{R}^{m_{i} \times n}\), \(b_{i} \in \mathbf{R}^{m_{i}}\), and \(x_{0} \in \mathbf{R}^{n}\). First introduce new variables \(y_{i} \in \mathbf{R}^{m_{i}}\) and equality constraints \(y_{i}=A_{i} x+b_{i}\).

Short Answer

To derive the dual problem, first introduce new variables \(y_{i}\) and equality constraints \(y_{i} = A_{i} x + b_{i}\). The Lagrangian is then: $$ L(x, y_{1},\dots,y_{N}, p_{1},\dots,p_{N}) = \sum_{i=1}^{N}\left\|y_{i}\right\|_{2} + (1/2) \|x-x_{0}\|^2_2 + \sum_{i=1}^{N}p^{T}_{i}(y_{i} - A_{i}x-b_{i}) $$ Minimizing over each \(y_{i}\) gives \(0\) when \(\|p_{i}\|_{2} \leq 1\) and \(-\infty\) otherwise; minimizing over \(x\) gives \(x = x_{0}+\sum_{i=1}^{N}A^{T}_{i}p_{i}\). The dual problem is therefore \[ \begin{array}{c l} \text{maximize} & -(1/2)\left\|\sum_{i=1}^{N}A^{T}_{i}p_{i}\right\|^2_2 - \sum_{i=1}^{N}p^{T}_{i}\left(A_{i}x_{0}+b_{i}\right) \\ \text{subject to} & \|p_{i}\|_{2} \leq 1, \quad i = 1, \dots, N \end{array} \]

Step by step solution

01

Rewrite the optimization problem with new variables

First, rewrite the given optimization problem by introducing new variables \(y_{i}\) and equality constraints \(y_{i}=A_{i} x+b_{i}\) for \(i=1, ..., N\): \[ \begin{array}{c l} \text{minimize} & \sum_{i=1}^{N}\left\|y_{i}\right\|_{2}+(1/2) \|x-x_{0}\|_{2}^{2} \\ \text{subject to} & y_{i}=A_{i} x+b_{i} \quad i = 1, ..., N \end{array} \]
02

Write the Lagrangian of the optimization problem

Now, write the Lagrangian of the optimization problem. For this, introduce a multiplier vector \(p_{i} \in \mathbf{R}^{m_{i}}\) for each equality constraint, \(i = 1,\dots,N\). The Lagrangian is: $$ L(x, y_{1},\dots,y_{N}, p_{1},\dots,p_{N}) = \sum_{i=1}^{N}\left\|y_{i}\right\|_{2} + (1/2) \|x-x_{0}\|^2_2 + \sum_{i=1}^{N}p^{T}_{i}(y_{i} - A_{i}x-b_{i}) $$
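One property worth checking here: whenever \((x, y)\) is primal feasible (\(y_i = A_i x + b_i\)), the penalty terms vanish and the Lagrangian equals the primal objective for every choice of \(p_i\); this is what makes the infimum of the Lagrangian a lower bound on the optimal value. A minimal numerical sketch, using a small hypothetical instance (\(N = 2\), arbitrary dimensions):

```python
import numpy as np

# Hypothetical problem data; sizes m_i and n are arbitrary.
rng = np.random.default_rng(0)
N, n = 2, 3
A = [rng.standard_normal((4, n)), rng.standard_normal((2, n))]
b = [rng.standard_normal(4), rng.standard_normal(2)]
x0 = rng.standard_normal(n)

def lagrangian(x, y, p):
    # sum ||y_i|| + (1/2)||x - x0||^2 + sum p_i^T (y_i - A_i x - b_i)
    return (sum(np.linalg.norm(yi) for yi in y)
            + 0.5 * np.linalg.norm(x - x0) ** 2
            + sum(pi @ (yi - Ai @ x - bi)
                  for pi, yi, Ai, bi in zip(p, y, A, b)))

def primal_objective(x):
    return sum(np.linalg.norm(Ai @ x + bi) for Ai, bi in zip(A, b)) \
           + 0.5 * np.linalg.norm(x - x0) ** 2

x = rng.standard_normal(n)
y = [Ai @ x + bi for Ai, bi in zip(A, b)]           # primal feasible point
p = [rng.standard_normal(Ai.shape[0]) for Ai in A]  # arbitrary multipliers
# At a feasible point the penalty terms vanish, so L equals the objective.
assert np.isclose(lagrangian(x, y, p), primal_objective(x))
```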
03

Apply the infimal convolution property

The dual function is the infimum of the Lagrangian over the primal variables: $$ g(p_{1}, \dots, p_{N}) = \inf_{x, y_{1}, \dots, y_{N}} L(x, y_{1},\dots,y_{N}, p_{1},\dots,p_{N}) $$ Because the Lagrangian separates, the infimum can be taken with respect to each variable \(x, y_{1},\dots, y_{N}\) separately: - For \(x\), take the gradient of \(L\) with respect to \(x\) and equate it to zero: $$ \nabla_{x}L = x - x_{0} - \sum_{i=1}^{N}A^{T}_{i}p_{i} = 0 \Rightarrow x = x_{0}+\sum_{i=1}^{N}A^{T}_{i}p_{i} $$ - For each \(y_{i}\), \(1\leq i\leq N\), the infimum is the negative conjugate of the Euclidean norm, which is the indicator of the unit ball of the (self-dual) Euclidean norm: $$ \inf_{y_{i}}\left(\|y_{i}\|_{2} + p^{T}_{i}y_{i}\right) = \begin{cases} 0 & \|p_{i}\|_{2} \leq 1 \\ -\infty & \text{otherwise} \end{cases} $$ Now plug the minimizing \(x\) back into the Lagrangian and write \(z = \sum_{i=1}^{N}A^{T}_{i}p_{i}\): $$ g(p_{1}, \dots, p_{N}) = \begin{cases} -(1/2)\|z\|_{2}^{2} - \sum_{i=1}^{N}p^{T}_{i}\left(A_{i}x_{0}+b_{i}\right) & \|p_{i}\|_{2} \leq 1, \; i = 1, \dots, N \\ -\infty & \text{otherwise} \end{cases} $$ This expression constitutes the dual function.
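The key infimum over \(y_i\) can be probed numerically: for \(\|p\|_2 \le 1\), Cauchy-Schwarz gives \(\|y\|_2 + p^T y \ge \|y\|_2(1 - \|p\|_2) \ge 0\) with the value \(0\) attained at \(y = 0\), while for \(\|p\|_2 > 1\) the direction \(y = -tp\) drives the value to \(-\infty\). A sketch with an arbitrary dimension \(m\):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4

def h(y, p):
    # Inner objective ||y||_2 + p^T y, minimized over y in the dual derivation.
    return np.linalg.norm(y) + p @ y

# Case ||p||_2 <= 1: the infimum over y is 0, attained at y = 0.
p_small = rng.standard_normal(m)
p_small *= 0.9 / np.linalg.norm(p_small)          # scale so ||p|| = 0.9
samples = [h(rng.standard_normal(m), p_small) for _ in range(10000)]
assert min(samples) >= -1e-9                       # never dips below 0
assert np.isclose(h(np.zeros(m), p_small), 0.0)    # 0 is attained

# Case ||p||_2 > 1: along y = -t p the value is t||p||(1 - ||p||) -> -infinity.
p_big = rng.standard_normal(m)
p_big *= 1.5 / np.linalg.norm(p_big)               # scale so ||p|| = 1.5
t = 1e6
assert h(-t * p_big, p_big) < -1e5                 # unbounded below
```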
04

Derive the dual problem

With the dual function established, the dual problem is given by: \[ \begin{array}{c l} \text{maximize} & -(1/2)\left\|\sum_{i=1}^{N}A^{T}_{i}p_{i}\right\|^2_2 - \sum_{i=1}^{N}p^{T}_{i}\left(A_{i}x_{0}+b_{i}\right) \\ \text{subject to} & \|p_{i}\|_{2} \leq 1, \quad i = 1, \dots, N \end{array} \] The implicit constraint \(g(p_{1},\dots,p_{N}) > -\infty\) has been made explicit as the norm bounds \(\|p_{i}\|_{2} \leq 1\). This is the dual problem for the given optimization problem.
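Weak duality provides an immediate consistency check on this result: every dual-feasible \((p_1,\dots,p_N)\) must produce a dual objective value no larger than the primal objective at any \(x\). The sketch below samples random points on both sides of the inequality, using a small hypothetical instance:

```python
import numpy as np

# Hypothetical problem data (N = 2); dimensions are arbitrary.
rng = np.random.default_rng(2)
N, n = 2, 3
A = [rng.standard_normal((4, n)), rng.standard_normal((2, n))]
b = [rng.standard_normal(4), rng.standard_normal(2)]
x0 = rng.standard_normal(n)

def primal(x):
    return sum(np.linalg.norm(Ai @ x + bi) for Ai, bi in zip(A, b)) \
           + 0.5 * np.linalg.norm(x - x0) ** 2

def dual(p):
    # Dual objective; requires ||p_i||_2 <= 1 for dual feasibility.
    z = sum(Ai.T @ pi for Ai, pi in zip(A, p))
    return -0.5 * np.linalg.norm(z) ** 2 - x0 @ z \
           - sum(bi @ pi for bi, pi in zip(b, p))

for _ in range(100):
    x = rng.standard_normal(n)
    p = []
    for Ai in A:
        pi = rng.standard_normal(Ai.shape[0])
        pi *= rng.uniform(0, 1) / max(np.linalg.norm(pi), 1e-12)  # ||p_i|| <= 1
        p.append(pi)
    # Weak duality: dual value never exceeds any primal value.
    assert dual(p) <= primal(x) + 1e-9
```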


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Convex Optimization
Convex optimization is a subfield of optimization that deals with problems where the objective function is convex and the feasible region is a convex set: for any two points within the region, the line segment connecting them lies entirely within the region. A twice-differentiable function of one variable is convex when its second derivative is nonnegative (a 'U'-shaped curve); more generally, a function is convex when every chord lies on or above its graph, a definition that also covers nondifferentiable examples such as norms. These problems are particularly useful because their structure allows efficient solving techniques.

For example, unlike non-convex optimization problems, a local minimum of a convex optimization problem is also a global minimum. This guarantees that if we find a point that satisfies the constraints and the optimality conditions, we can be certain that it is the best possible solution. In this exercise, the objective is convex: it is a sum of Euclidean norms of affine expressions (each convex) plus a convex quadratic, so the machinery of convex duality applies directly once the problem is reformulated with additional variables and constraints.
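Convexity of this exercise's objective can be spot-checked numerically with the midpoint inequality \(f\big((u+v)/2\big) \le \big(f(u)+f(v)\big)/2\). The instance below (one norm term plus the quadratic) is arbitrary:

```python
import numpy as np

# Arbitrary data for a single-term instance of the objective.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x0 = rng.standard_normal(3)

def f(x):
    # ||A x + b||_2 + (1/2) ||x - x0||_2^2, convex as a sum of convex terms.
    return np.linalg.norm(A @ x + b) + 0.5 * np.linalg.norm(x - x0) ** 2

# Midpoint convexity check at many random pairs of points.
for _ in range(1000):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    assert f(0.5 * (u + v)) <= 0.5 * f(u) + 0.5 * f(v) + 1e-9
```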
Lagrangian
The Lagrangian is a mathematical device that combines both the objective function and the constraints of the optimization problem into one expression. It introduces new variables, known as Lagrange multipliers, which are associated with each constraint. These multipliers have a physical interpretation as the 'price' you need to pay for a marginal increase in the constraint and they play a central role when transforming a constrained problem into one that can be analyzed using optimization techniques.

In our exercise, the Lagrangian was created by adding to the objective, for each constraint, the product of the transposed Lagrange multiplier \(p_{i}\) with the residual \(y_{i} - A_{i}x - b_{i}\), the difference between the new variable \(y_{i}\) and the affine expression \(A_{i}x + b_{i}\) in the decision variable \(x\). This construction allows us to pass to the dual problem, thereby providing an alternative and sometimes easier way to approach the original problem.
Infimal Convolution
Infimal convolution is an operation used in convex analysis and optimization that combines two convex functions into one: its value at a point is the infimum of the sum of the two functions over all ways of splitting the point between them. In the context of dual problem formulation, closely related separability arguments allow us to decompose the Lagrangian of a primal problem and find the infimum with respect to each variable separately.

In the solution provided, the minimization splits across the variables \(x\) and \(y_i\). Taking the gradient of the Lagrangian with respect to \(x\) and setting it to zero yields the minimizing \(x\); for each \(y_i\), the infimum of \(\|y_i\|_2 + p_i^T y_i\) is finite (equal to zero) precisely when \(\|p_i\|_2 \leq 1\), and this condition reappears as a constraint of the dual problem. This decomposition is important because it reduces the dual function derivation to a few one-variable computations, converting a possibly complex problem into a more tractable one.
Equality Constraints
Equality constraints are conditions that must be exactly met for a solution to be considered feasible in an optimization problem. These constraints are represented as equations that the decision variables must satisfy. In the Lagrangian formulation of an optimization problem, these equality constraints are incorporated using Lagrange multipliers, which adjust the objective function based on how severely a constraint would impact the solution if it were tightened or loosened.

In the provided exercise, the new variables \(y_i\) are related to the decision variable \(x\) through the equality constraints \(y_i = A_i x + b_i\). The presence of these constraints enables us to transform the original minimization problem into an equivalent problem that can be tackled using dual optimization techniques. Moreover, adhering to these constraints ensures that the dual function correctly represents the original problem's feasible region, crucial for finding accurate solutions.
