Dual LP & Sensitivity Analysis#
Revised
18 Apr 2023
Recall the optimality condition for an LP: a feasible point \(\mathbf{x}\) is a solution if
\( \begin{aligned} \exists\mathbf{y}\ge\mathbf{0}\ni\nabla z =\sum_i y_i\nabla g_i \,\,\,\text{and}\,\,\, \mathbf{y\cdot s}=0 \end{aligned} \)
There are two ways to find such a point.
Method 1, Brute Force: check each extreme point until a solution is found
Method 2, Dual LP: drive the objective of the dual LP to the desired solution
\(\text{Dual LP}\)#
First Standard Form: \(\min\) and \(\ge\)#
\( \boxed{ \text{LP}= \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \max w&=\mathbf{yb}+d \\ \mathbf{yA}&\le\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* } \)
PROOF
\( \text{LP}= \begin{aligned} \max w&=\mathbf{yb}+d \\ \mathbf{yA}&\le\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} -\min -w&=\mathbf{y(-b)}+(-d) \\ \mathbf{y(-A)}&\ge\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} -\max -z&=\mathbf{(-c)x}+(-d) \\ \mathbf{(-A)x}&\le\mathbf{(-b)} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} =\text{LP}^* \)
\(\blacksquare\)
\( \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \)
has a solution at \(\mathbf{x}\) if \(\exists\mathbf{y}\ge\mathbf{0}\ni\sum y_i\nabla g_i=\nabla z\) and \(\mathbf{y\cdot s}=0\)
\(\min\) and \(\le\)#
\( \boxed{ \text{LP}= \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \max w&=\mathbf{y(-b)}+d \\ \mathbf{y(-A)}&\le\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* } \)
Note that this is just the first standard form where \(\mathbf{A}\) and \(\mathbf{b}\) are negated.
PROOF
\( \text{LP}= \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{(-A)x}&\ge\mathbf{(-b)} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} -\max -z&=\mathbf{(-c)x}+(-d) \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} -\min -w&=\mathbf{yb}+(-d) \\ \mathbf{yA}&\ge\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \max w&=\mathbf{y(-b)}+d \\ \mathbf{yA}&\ge\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \max w&=\mathbf{y(-b)}+d \\ \mathbf{y(-A)}&\le\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* \)
\(\blacksquare\)
\( \begin{aligned} \min z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \)
has a solution at \(\mathbf{x}\) if \(\exists\mathbf{y}\ge\mathbf{0}\ni\sum y_i(-\nabla g_i)=-\mathbf{yA}=\nabla z=\mathbf{c}\) and \(\mathbf{y\cdot s}=0\)
The sign change in the objective gradient as a linear combination of the constraint gradients is just the sign change that arises in the dual LP when the primal is not given in one of the two standard forms.
Second Standard Form: \(\max\) and \(\le\)#
\( \boxed{ \text{LP}= \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \min w&=\mathbf{yb}+d \\ \mathbf{yA}&\ge\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* } \)
PROOF
\( \text{LP}= \begin{aligned} \min w&=\mathbf{yb}+d \\ \mathbf{yA}&\ge\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} -\max -w&=\mathbf{y(-b)}+(-d) \\ \mathbf{y(-A)}&\le\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} -\min -z&=\mathbf{(-c)x}+(-d) \\ \mathbf{(-A)x}&\ge\mathbf{(-b)} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} =\text{LP}^* \)
\(\blacksquare\)
\( \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\le\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \)
has a solution at \(\mathbf{x}\) if \(\exists\mathbf{y}\ge\mathbf{0}\ni\sum y_i\nabla g_i=\mathbf{yA}=\nabla z=\mathbf{c}\) and \(\mathbf{y\cdot s}=0\)
\(\max\) and \(\ge\)#
\( \boxed{ \text{LP}= \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \min w&=\mathbf{y(-b)}+d \\ \mathbf{y(-A)}&\ge\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* } \)
Note that this is just the second standard form where \(\mathbf{A}\) and \(\mathbf{b}\) are negated.
PROOF
\( \text{LP}= \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{(-A)x}&\le\mathbf{(-b)} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} -\min -z&=\mathbf{(-c)x}+(-d) \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} -\max -w&=\mathbf{yb}+(-d) \\ \mathbf{yA}&\le\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \min w&=\mathbf{y(-b)}+d \\ \mathbf{yA}&\le\mathbf{(-c)} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} \iff \begin{aligned} \min w&=\mathbf{y(-b)}+d \\ \mathbf{y(-A)}&\ge\mathbf{c} \\ \mathbf{y}&\ge\mathbf{0} \end{aligned} =\text{LP}^* \)
\(\blacksquare\)
\( \begin{aligned} \max z&=\mathbf{cx}+d \\ \mathbf{Ax}&\ge\mathbf{b} \\ \mathbf{x}&\ge\mathbf{0} \end{aligned} \)
has a solution at \(\mathbf{x}\) if \(\exists\mathbf{y}\ge\mathbf{0}\ni\sum y_i\nabla g_i=\mathbf{yA}=-\nabla z=-\mathbf{c}\) and \(\mathbf{y\cdot s}=0\)
The sign change in the objective gradient as a linear combination of the constraint gradients is just the sign change that arises in the dual LP when the primal is not given in one of the two standard forms.
[PROPOSITION]
The dual of the dual is the primal.
[PROPOSITION]
If the primal and the dual are both feasible, then they both have optimal solutions and
\(z^*=\min z=\max w=w^*\)
PROOF
Suppose \(\mathbf{x}\) is feasible for the primal and \(\mathbf{y}\) is feasible for the dual.
\( \begin{aligned} \text{feasible}\,\,\,\mathbf{y} \implies \mathbf{c}\ge\mathbf{yA} \,\,\,\text{and}\,\,\, \mathbf{y}\ge\mathbf{0} \\ \text{feasible}\,\,\,\mathbf{x} \implies \mathbf{Ax}\ge\mathbf{b} \,\,\,\text{and}\,\,\, \mathbf{x}\ge\mathbf{0} \\ \mathbf{cx}\ge\mathbf{(yA)x}=\mathbf{y(Ax)}\ge\mathbf{yb} \\ \mathbf{cx}+d\ge\mathbf{(yA)x}+d=\mathbf{y(Ax)}+d\ge\mathbf{yb}+d \end{aligned} \)
Every primal feasible \(\mathbf{x}\) creates an upper bound on the objective function \(w=\mathbf{yb}+d\) maximized by the dual LP.
Every dual feasible \(\mathbf{y}\) creates a lower bound on the objective function \(z=\mathbf{cx}+d\) minimized by the primal LP.
Each LP is feasible and bounded and therefore has an optimal solution.
Since the primal has an optimal solution, then
\( \begin{aligned} \exists\mathbf{y}\ge\mathbf{0}\ni \mathbf{c} =\nabla z =\sum_i y_i\nabla g_i =\mathbf{yA} \,\,\,\text{and}\,\,\, \mathbf{y\cdot s}=0 \end{aligned} \)
where \(g_i(\mathbf{x})\) is the \(i\)-th row of \(\mathbf{Ax}\) (so the \(i\)-th row of \(\mathbf{A}\) is \(\nabla g_i\))
Since the dual has an optimal solution, then
\( \begin{aligned} \exists\mathbf{x}\ge\mathbf{0}\ni \mathbf{b} =\nabla w =\sum_j x_j\nabla h_j =\mathbf{Ax} \,\,\,\text{and}\,\,\, \mathbf{x\cdot v}=0 \end{aligned} \)
where \(h_j(\mathbf{y})\) is the \(j\)-th column of \(\mathbf{yA}\) (so the \(j\)-th column of \(\mathbf{A}\) is \(\nabla h_j\))
These \(\mathbf{x}\) and \(\mathbf{y}\) yield equality and are therefore optimal solutions to the primal and dual
\( z^*=\min\mathbf{cx}+d=\mathbf{(yA)x}+d=\mathbf{y(Ax)}+d=\max\mathbf{yb}+d=w^* \)
\(\blacksquare\)
[COROLLARY]
If an LP has a solution, then so does its dual.
[THEOREM]
For every primal-dual pair, exactly one of the following holds:
Both the primal and dual have optimal solutions and their optimal values are equal.
The dual is infeasible and the primal is unbounded.
The primal is infeasible and the dual is unbounded.
Both the primal and dual are infeasible.
PROOF
If both are feasible, the preceding proposition gives optimal solutions with equal optimal values. By weak duality, any feasible point for one problem bounds the objective of the other, so an unbounded primal forces an infeasible dual, and an unbounded dual forces an infeasible primal. The remaining case, both infeasible, does occur: for example, \(\min z=x_1-2x_2\) subject to \(x_1-x_2\ge1\), \(-x_1+x_2\ge1\), \(\mathbf{x}\ge\mathbf{0}\) is infeasible (summing the constraints gives \(0\ge2\)), and its dual \(\max w=y_1+y_2\) subject to \(y_1-y_2\le1\), \(-y_1+y_2\le-2\), \(\mathbf{y}\ge\mathbf{0}\) is also infeasible (summing the constraints gives \(0\le-1\)).
\(\blacksquare\)
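The infeasible/unbounded case can be exhibited concretely. A minimal sketch using `scipy.optimize.linprog` (the LP pair below is a hypothetical one chosen for illustration, not from the notes): a primal with the contradictory constraint \(0x\ge1\) is infeasible, and its dual is unbounded.

```python
from scipy.optimize import linprog

# Primal (first standard form): min z = x  s.t.  0x >= 1,  x >= 0  -- infeasible.
# linprog uses A_ub @ x <= b_ub, so 0x >= 1 becomes 0x <= -1.
primal = linprog(c=[1], A_ub=[[0]], b_ub=[-1], bounds=[(0, None)])

# Dual: max w = y  s.t.  0y <= 1,  y >= 0  -- unbounded.
# linprog minimizes, so maximize y by minimizing -y.
dual = linprog(c=[-1], A_ub=[[0]], b_ub=[1], bounds=[(0, None)])

print(primal.status)  # 2: infeasible
print(dual.status)    # 3: unbounded
```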
If the size of the feasible set increases, then the optimum either improves (increases for a maximization problem, decreases for a minimization problem) or, in the worst case scenario, remains the same.
If the size of the feasible set decreases, then the optimum either degrades (decreases for a maximization problem, increases for a minimization problem) or, in the best case scenario, remains the same.
To relax a constraint is to change the constraint to increase the size of the feasible set, the set of points satisfying the constraint.
To tighten a constraint is to change the constraint to decrease the size of the feasible set, the set of points satisfying the constraint.
Given a constraint \(g_i(\mathbf{x})\le b_i\), increasing \(b_i\) relaxes the constraint and decreasing \(b_i\) tightens the constraint.
Given a constraint \(g_i(\mathbf{x})\ge b_i\), increasing \(b_i\) tightens the constraint and decreasing \(b_i\) relaxes the constraint.
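A quick numerical illustration of relaxing and tightening, sketched with `scipy.optimize.linprog` on a hypothetical one-constraint LP whose optimum is easy to see by hand (\(\max x_1+x_2\) subject to \(x_1+x_2\le b\) has optimal value \(b\)):

```python
from scipy.optimize import linprog

# Hypothetical LP: max z = x1 + x2  s.t.  x1 + x2 <= b,  x >= 0.
# The optimal value is b, so changing b moves the optimum one-for-one.
def opt(b):
    # linprog minimizes, so negate the objective and the result
    return -linprog(c=[-1, -1], A_ub=[[1, 1]], b_ub=[b], bounds=[(0, None)] * 2).fun

print(opt(10))  # 10.0 -- original feasible set
print(opt(12))  # 12.0 -- relaxed: the optimum improves
print(opt(8))   # 8.0  -- tightened: the optimum degrades
```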
[DEFINITION]
A regular point of an LP is a feasible point at which the gradients of the active constraints are linearly independent.
A feasible point is not regular if there is at least one redundant active constraint at that point.
For each constraint \(i\) in an LP, how much does the optimum change if \(b_i\) is increased?
If \(z^*\) denotes the optimal value of the LP, can \(\begin{aligned}\frac{\partial z^*}{\partial b_i}\end{aligned}\) be computed? YES!
[PROPOSITION]
If the primal LP is \(\max z=\mathbf{cx}\) such that \(g_1(\mathbf{x})\le b_1,...,g_k(\mathbf{x})\le b_k\), \(\mathbf{x^*}\) is a regular optimal point for the primal, and \(\mathbf{y^*}\) is an optimal point for the dual, then
\( \begin{aligned} y_i^*=\frac{\partial z^*}{\partial b_i} \end{aligned} \)
where \(z^*\) is the optimal value of the primal LP.
PROOF
Starting at a regular optimal point \(\mathbf{x^*}\), let \(\mathbf{x}\) be the new optimal point when the bound \(b_i\) of a single active constraint is increased by \(\Delta b_i=1\).
Assume that the same constraints are active at \(\mathbf{x}\) as at \(\mathbf{x^*}\) and set \(\Delta\mathbf{x=x-x^*}\)
\( \begin{aligned} \frac{\partial z^*}{\partial b_i} =z(\mathbf{x})-z(\mathbf{x^*}) =\nabla z\cdot\frac{\Delta\mathbf{x}}{\|\Delta\mathbf{x}\|}\|\Delta\mathbf{x}\| =\nabla z\cdot\Delta\mathbf{x} =\left(\sum_{s_j=0}y_j\nabla g_j\right)\cdot\Delta\mathbf{x} =\sum_{s_j=0}\left(y_j\nabla g_j\cdot\Delta\mathbf{x}\right) =\sum_{s_j=0}y_j\left(\nabla g_j\cdot\Delta\mathbf{x}\right) \end{aligned} \)
The assumption that all the same constraints are active at \(\mathbf{x}\) and that only constraint \(i\) changed from
\(g_i(\mathbf{x})\le b_i\)
to
\(g_i(\mathbf{x})\le b_i+\Delta b_i\)
implies that
\(\forall j\ne i,g_j(\mathbf{x})=b_j\)
the original bound on \(g_j\), so
\( \Delta g_j=0 \implies \nabla g_j\cdot\Delta\mathbf{x}=0 \implies \sum_{s_j=0}y_j(\nabla g_j\cdot\Delta\mathbf{x}) =y_i(\nabla g_i\cdot\Delta\mathbf{x}) =y_i\Delta g_i =y_i\Delta b_i =y_i \)
Complementary slackness stipulates that if the \(i\)-th constraint of the primal LP is not active, then neither relaxing it nor tightening it changes the optimal value of the primal objective:
\( \begin{aligned} \frac{\partial z^*}{\partial b_i} =y_i^* =0 \end{aligned} \)
\(\blacksquare\)
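The proposition can be checked by finite differences. A sketch on a hypothetical LP (\(\max 3x_1+2x_2\) subject to \(x_1+x_2\le b_1\), \(x_1+3x_2\le b_2\), \(\mathbf{x}\ge\mathbf{0}\)): at \(\mathbf{b}=(4,6)\) the optimum is \(\mathbf{x^*}=(4,0)\), where only the first constraint is active, so the second multiplier is zero.

```python
from scipy.optimize import linprog

# Hypothetical primal: max z = 3x1 + 2x2  s.t.  x1 + x2 <= b1,  x1 + 3x2 <= b2,  x >= 0.
# At b = (4, 6) the optimum is x* = (4, 0): constraint 1 is active, constraint 2 is not.
def zstar(b1, b2):
    res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[b1, b2], bounds=[(0, None)] * 2)
    return -res.fun  # negate: linprog minimizes

eps = 1e-6
print((zstar(4 + eps, 6) - zstar(4, 6)) / eps)  # ~3: y1* for the active constraint
print((zstar(4, 6 + eps) - zstar(4, 6)) / eps)  # ~0: y2* = 0 by complementary slackness
```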
Example#
\( \text{LP} =\begin{aligned} \min z&=2x \\ 5x&\ge3 \\ x&\ge0 \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \max w&=3y \\ 5y&\le2 \\ y&\ge0 \end{aligned} =\text{LP}^* \)
\( \begin{aligned} x^* &=\frac{3}{5} \\ z^* &=2\left(\frac{3}{5}\right) =\frac{6}{5} \\ y^* &=\frac{2}{5} \\ w^* &=3\left(\frac{2}{5}\right) =\frac{6}{5} \end{aligned} \)
\( \begin{aligned} z^* &=z\left(\frac{b_1}{5}\right) =2\left(\frac{b_1}{5}\right) =\frac{2}{5}b_1 \\ \frac{\partial}{\partial b_1}z^* &=\frac{2}{5} =y^* \\ w^* &=w\left(\frac{c_1}{5}\right) =3\left(\frac{c_1}{5}\right) =\frac{3}{5}c_1 \\ \frac{\partial}{\partial c_1}w^* &=\frac{3}{5} =x^* \end{aligned} \)
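The sensitivity \(\frac{\partial z^*}{\partial b_1}=y^*\) can be confirmed numerically. A sketch using `scipy.optimize.linprog` (which handles the \(\ge\) constraint by negating it into \(\le\) form):

```python
from scipy.optimize import linprog

# z*(b1) for the primal: min z = 2x  s.t.  5x >= b1,  x >= 0
# (linprog needs <=, so 5x >= b1 becomes -5x <= -b1)
def zstar(b1):
    return linprog(c=[2], A_ub=[[-5]], b_ub=[-b1], bounds=[(0, None)]).fun

eps = 1e-6
print(zstar(3))                           # 1.2 = 6/5 = z*
print((zstar(3 + eps) - zstar(3)) / eps)  # ~0.4 = 2/5 = y*
```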
Example#
\( \text{LP} =\begin{aligned} \min z&=5x_1+5x_2 \\ g_1&=x_1+3x_2\ge6 \\ g_2&=2x_1+x_2\ge4 \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \max w&=6y_1+4y_2 \\ h_1&=y_1+2y_2\le5 \\ h_2&=3y_1+y_2\le5 \end{aligned} =\text{LP}^* \)
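Neither optimum is worked out above; a sketch using `scipy.optimize.linprog` finds both and confirms that the two optimal values agree:

```python
from scipy.optimize import linprog

# Primal: min z = 5x1 + 5x2  s.t.  x1 + 3x2 >= 6,  2x1 + x2 >= 4,  x >= 0
# (>= rows are negated into linprog's <= form)
primal = linprog(c=[5, 5], A_ub=[[-1, -3], [-2, -1]], b_ub=[-6, -4], bounds=[(0, None)] * 2)

# Dual: max w = 6y1 + 4y2  s.t.  y1 + 2y2 <= 5,  3y1 + y2 <= 5,  y >= 0
# (linprog minimizes, so negate the objective and flip the sign of the result)
dual = linprog(c=[-6, -4], A_ub=[[1, 2], [3, 1]], b_ub=[5, 5], bounds=[(0, None)] * 2)

print(primal.x, primal.fun)  # x* = (6/5, 8/5), z* = 14
print(dual.x, -dual.fun)     # y* = (1, 2),     w* = 14
```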
[EXAMPLE]
You have some amount of raw materials in the form of wood and metal. Maximize your income either by producing and selling tables and chairs or by selling the raw materials themselves.
You have \(60\,\text{ft}\) of wood and \(36\,\text{ft}\) of metal.
A table requires \(10\,\text{ft}\) of wood and \(2\,\text{ft}\) of metal. A chair requires \(3\,\text{ft}\) of wood and \(6\,\text{ft}\) of metal.
You sell a table for \(\$20.00\) and a chair for \(\$10.00\).
\( \begin{aligned} \$20.00=c_1 & \,\,\,\text{sale price of a table} \\ \$10.00=c_2 & \,\,\,\text{sale price of a chair} \\ 60\,\text{ft}=b_1 & \,\,\,\text{quantity of wood} \\ 36\,\text{ft}=b_2 & \,\,\,\text{quantity of metal} \\ -\$30.00=d & \,\,\,\text{one-time expense} \end{aligned} \)
\( \begin{aligned} x_1 & \,\,\,\text{number of tables} \\ x_2 & \,\,\,\text{number of chairs} \\ \end{aligned} \)
\( \text{LP}= \begin{aligned} \max z=20x_1+10x_2-30 \\ 10x_1+3x_2\le60 &\,\,\,\text{wood constraint} \\ 2x_1+6x_2\le36 &\,\,\,\text{metal constraint} \\ x\ge0 &\,\,\,\text{no negative construction} \\ \end{aligned} \)
\( \text{LP}= \begin{aligned} \max z=20x_1+10x_2-30\\ 10x_1+3x_2\le60\\ 2x_1+6x_2\le36\\ x\ge0\\ \end{aligned} \underset{\text{dual to}}{\iff} \begin{aligned} \min w=60y_1+36y_2-30\\ 10y_1+2y_2\ge20\\ 3y_1+6y_2\ge10\\ y\ge0\\ \end{aligned} =\text{LP}^* \)
\( \begin{aligned} y_1 & \,\,\,\text{sale price on a unit of wood} \\ y_2 & \,\,\,\text{sale price on a unit of metal} \\ \end{aligned} \)
\( \text{LP}^*= \begin{aligned} \min w=60y_1+36y_2-30 \\ 10y_1+2y_2\ge20 &\,\,\,\text{table constraint} \\ 3y_1+6y_2\ge10 &\,\,\,\text{chair constraint} \\ y\ge0 &\,\,\,\text{you do not pay people to take the raw materials off your hands} \\ \end{aligned} \)
The multipliers come from the optimality condition \(\nabla z=\sum_i y_i\nabla g_i\):
\(y_1(10,3)+y_2(2,6)=(20,10)=\nabla z\)
\( \begin{aligned} 10y_1+2y_2=20\\ 3y_1+6y_2=10\\ \end{aligned} \)
\(y_1=\frac{50}{27},y_2=\frac{20}{27}\)
Check: \(10y_1+2y_2=\frac{500}{27}+\frac{40}{27}=\frac{540}{27}=20\) and \(3y_1+6y_2=\frac{150}{27}+\frac{120}{27}=\frac{270}{27}=10\)
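The full primal-dual pair for the furniture problem can be checked the same way. A sketch using `scipy.optimize.linprog` (linprog minimizes, so the max problem's objective is negated; the constant \(d=-30\) is added back by hand):

```python
from scipy.optimize import linprog

# Primal: max z = 20x1 + 10x2 - 30  s.t.  10x1 + 3x2 <= 60,  2x1 + 6x2 <= 36,  x >= 0
primal = linprog(c=[-20, -10], A_ub=[[10, 3], [2, 6]], b_ub=[60, 36], bounds=[(0, None)] * 2)

# Dual: min w = 60y1 + 36y2 - 30  s.t.  10y1 + 2y2 >= 20,  3y1 + 6y2 >= 10,  y >= 0
# (>= rows are negated into linprog's <= form)
dual = linprog(c=[60, 36], A_ub=[[-10, -2], [-3, -6]], b_ub=[-20, -10], bounds=[(0, None)] * 2)

print(primal.x, -primal.fun - 30)  # x* = (14/3, 40/9),   z* = 970/9 ~ 107.78
print(dual.x, dual.fun - 30)       # y* = (50/27, 20/27), w* = 970/9
```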