DDA5002 Lecture 12 Annotated
Yuang Chen
March 5, 2025
3 LP Duality
Definition
If a basic solution x also satisfies all constraints, then we call it a basic
feasible solution (BFS).
To find a BFS
First find a basic solution x
Then check if it satisfies all constraints
Theorem
For a non-empty polyhedron, the following are equivalent:
1 x is a vertex
2 x is an extreme point
3 x is a basic feasible solution
(P contains a line if there exist x ∈ P and a direction d ≠ 0 such that x + ωd ∈ P for all ω.)
Theorem
Let P = {x ∈ Rⁿ | Ax ≥ b} ≠ ∅. Then P has a BFS if and only if P does not contain
a line.
Corollary
A polyhedron in standard form P = {x ∈ Rⁿ | Ax = b, x ≥ 0} always has a BFS.
A bounded polyhedron always has a BFS.
min  cᵀx
s.t. x ∈ P = {x | Ax ≥ b}
Theorem
Suppose P has at least one extreme point. Then the LP is either unbounded or
there exists an extreme point which is optimal.
3 LP Duality
min  cᵀx
s.t. Ax = b
     x ≥ 0
Split the columns of A (which has m rows) into A = [B  N], where
  B : basis matrix (m × m, invertible)      N : non-basis matrix
  x_B : basic variables                     x_N : non-basic variables
Then the m equations Ax = b read B x_B + N x_N = b. Setting x_N = 0 gives B x_B = b, i.e.
  x_B = B⁻¹ b,   x_N = 0.
Note: x_B = B⁻¹ b, x_N = 0 may not be feasible! We need to make sure that B⁻¹ b ≥ 0.
Quiz!
How many BFS can you find?
Anywhere from 1 to C(n, m).
Quiz
How many non-zeros could one have in a basic solution (assuming there
are m constraints)?
No more than m
It could be anything between 0 and m, but typically it is m
How many basic solutions can one have for a linear program with m
constraints and n variables?
At most C(n, m) = n! / (m!(n − m)!) (the combination number)
Therefore for a finite number of linear constraints, there can only be a
finite number of basic solutions
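As a quick sanity check of this count (the numbers n = 5 and m = 3 anticipate the example that follows; the snippet is purely illustrative):

import math

# Upper bound on the number of basic solutions with n = 5 variables and m = 3 constraints.
print(math.comb(5, 3))   # 10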
maximize   x1 + 2x2
subject to x1 ≤ 100
           2x2 ≤ 200
           x1 + x2 ≤ 150
           x1, x2 ≥ 0
After adding slack variables s1, s2, s3, the constraints become
  x1 + s1 = 100
  2x2 + s2 = 200
  x1 + x2 + s3 = 150
  x1, x2, s1, s2, s3 ≥ 0
We can write the feasible set as {x : Ax = b, x ≥ 0}, where
      [ 1  0  1  0  0 ]          [ 100 ]
  A = [ 0  2  0  1  0 ]      b = [ 200 ]
      [ 1  1  0  0  1 ]          [ 150 ]
Choosing three independent columns of A, e.g., the first three, the
corresponding basic solution is
                 [ 1  0  1 ]⁻¹ [ 100 ]   [  50 ]
  x_B = B⁻¹ b =  [ 0  2  0 ]   [ 200 ] = [ 100 ]      (feasible!)
                 [ 1  1  0 ]   [ 150 ]   [  50 ]
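This computation is easy to reproduce numerically. Below is a minimal NumPy sketch using the A, b and basis above (the variable names are my own, not from the slides):

import numpy as np

# Constraint data of the example (with slack variables s1, s2, s3).
A = np.array([[1, 0, 1, 0, 0],
              [0, 2, 0, 1, 0],
              [1, 1, 0, 0, 1]], dtype=float)
b = np.array([100, 200, 150], dtype=float)

B = A[:, [0, 1, 2]]               # basis: the first three columns
x_B = np.linalg.solve(B, b)       # basic solution: x_B = B^{-1} b, non-basic entries are 0
print(x_B)                        # [ 50. 100.  50.]
print(bool(np.all(x_B >= 0)))     # True -> this basic solution is feasible (a BFS)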
Example Continued
The other two choices {1, 3, 5} and {2, 4, 5} lead to linearly dependent
columns (therefore no basic solutions can be obtained)
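To make the quiz answer concrete, the sketch below (again illustrative NumPy/itertools code with my own names) enumerates all C(5, 3) = 10 column choices for this A and classifies each one:

import itertools
import numpy as np

A = np.array([[1, 0, 1, 0, 0],
              [0, 2, 0, 1, 0],
              [1, 1, 0, 0, 1]], dtype=float)
b = np.array([100, 200, 150], dtype=float)
m, n = A.shape

for cols in itertools.combinations(range(n), m):   # at most C(n, m) candidate bases
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-9:               # dependent columns: no basic solution
        print(cols, "dependent columns")
        continue
    x_B = np.linalg.solve(B, b)                    # basic solution (non-basic entries are 0)
    tag = "BFS" if np.all(x_B >= -1e-9) else "basic solution, infeasible"
    print(cols, np.round(x_B, 1), tag)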
3 LP Duality
(P)  min  cᵀx
     s.t. Ax ≥ b
Any feasible solution of a minimization LP (P) provides an upper bound on its
optimal cost. The dual problem (D) of (P) is to find a lower bound (the
best lower bound) on the optimal cost of (P).
This lower bound is useful, because if the lower bound is very close to
an upper bound, we have a good estimate of the true optimal.
However, to get a lower bound, we need to modify the original LP. In
particular, we need to relax the problem.
The process of formulating the dual, which gives the best lower bound on the
optimal cost of (P), is based on relaxation. Relaxing a minimization problem
(P) involves three principal steps:
Relax the objective function: construct a new objective function that is
always smaller than or equal to the original objective function of (P)
over the feasible region of (P).
Relax the feasible region of (P): by enlarging the feasible region of the
original problem (P).
Maximize the lower bound over Lagrangian multipliers so that we get
the best lower bound.
This final maximization problem is called the Lagrangian dual problem, or
the dual problem for short.
     min   Σ_{j=1}^n c_j x_j
     s.t.  Σ_{j=1}^n a_ij x_j ≥ b_i     ∀ i = 1, ..., m1
           Σ_{j=1}^n a_ij x_j = b_i     ∀ i = m1+1, ..., m2
           Σ_{j=1}^n a_ij x_j ≤ b_i     ∀ i = m2+1, ..., m
           x_j ≥ 0      ∀ j = 1, ..., n1
           x_j free     ∀ j = n1+1, ..., n2
           x_j ≤ 0      ∀ j = n2+1, ..., n
Designate the sign of each Lagrangian multiplier y_i according to the type of its constraint:
  Σ_{j=1}^n a_ij x_j ≥ b_i    ⟹    y_i ≥ 0
  Σ_{j=1}^n a_ij x_j = b_i    ⟹    y_i free
  Σ_{j=1}^n a_ij x_j ≤ b_i    ⟹    y_i ≤ 0
Attach a Lagrangian multiplier y_i to each constraint and form the new objective
  Σ_j c_j x_j + Σ_i y_i (b_i − Σ_j a_ij x_j).
With the signs above, each added term y_i (b_i − Σ_j a_ij x_j) is ≤ 0 for every feasible x:
if Σ_j a_ij x_j ≥ b_i then y_i ≥ 0; if Σ_j a_ij x_j = b_i then y_i is free;
if Σ_j a_ij x_j ≤ b_i then y_i ≤ 0. Hence the new objective never exceeds the
original objective on the feasible region of (P).
Example
Consider the primal problem
  min  x1 + 2x2 + 3x3
  s.t. x1 + 5x2 + 4x3 ≥ 6
       2x1 + 3x2 − x3 = 3
       x1 + x2 − 2x3 ≤ 4
       x1 ≥ 0, x2 ≤ 0, x3 free.
The new objective is
  x1 + 2x2 + 3x3 + y1 · (6 − x1 − 5x2 − 4x3) + y2 · (3 − 2x1 − 3x2 + x3) + y3 · (4 − x1 − x2 + 2x3)
We need
  x1 + 5x2 + 4x3 ≥ 6    ⟹    y1 ≥ 0
  2x1 + 3x2 − x3 = 3    ⟹    y2 free
  x1 + x2 − 2x3 ≤ 4     ⟹    y3 ≤ 0
In the relaxed problem, only the sign constraints on x are kept:
  x_j ≥ 0 (j = 1, ..., n1),  x_j free (j = n1+1, ..., n2),  x_j ≤ 0 (j = n2+1, ..., n).
Lagrangian relaxation problem
(LR)  Z(y) = min_x  Σ_{j=1}^n c_j x_j + Σ_{i=1}^m y_i (b_i − Σ_{j=1}^n a_ij x_j)
      s.t.  x_j ≥ 0      ∀ j = 1, ..., n1
            x_j free     ∀ j = n1+1, ..., n2
            x_j ≤ 0      ∀ j = n2+1, ..., n
Equivalently, pulling the constant term out,

(LR)  Z(y) = Σ_{i=1}^m y_i b_i + min_x ( Σ_{j=1}^n c_j x_j − Σ_{i=1}^m Σ_{j=1}^n y_i a_ij x_j )
      s.t.  x_j ≥ 0      ∀ j = 1, ..., n1
            x_j free     ∀ j = n1+1, ..., n2
            x_j ≤ 0      ∀ j = n2+1, ..., n
Write d_j = c_j − Σ_{i=1}^m y_i a_ij for the reduced cost of x_j. The inner minimization
separates across the variables:
  min_x Σ_j d_j x_j = Σ_j min_{x_j} d_j x_j,
so we only need to solve min_{x_j} d_j x_j for each j, by cases on the sign constraint
of x_j (x_j ≥ 0, x_j free, or x_j ≤ 0).
Separable Lagrangian Relaxation Problem
For each j we have

  min_{x_j ≥ 0}  (c_j − Σ_{i=1}^m y_i a_ij) x_j  =  0    if c_j − Σ_{i=1}^m y_i a_ij ≥ 0   (a)
                                                 = −∞    otherwise

  min_{x_j free} (c_j − Σ_{i=1}^m y_i a_ij) x_j  =  0    if c_j − Σ_{i=1}^m y_i a_ij = 0   (b)
                                                 = −∞    otherwise

  min_{x_j ≤ 0}  (c_j − Σ_{i=1}^m y_i a_ij) x_j  =  0    if c_j − Σ_{i=1}^m y_i a_ij ≤ 0   (c)
                                                 = −∞    otherwise

Therefore

  Z(y) = Σ_{i=1}^m y_i b_i + min_x ( Σ_{j=1}^n c_j x_j − Σ_{i=1}^m Σ_{j=1}^n y_i a_ij x_j )
       = Σ_{i=1}^m y_i b_i     if (a), (b), (c) hold for all j
       = −∞                    otherwise
With designated y's, Z(y) is a lower bound on Z_P, i.e. Σ_{i=1}^m y_i b_i ≤ Z_P.
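The closed form above translates directly into a small evaluator. This is only a sketch (plain NumPy; the function name Z and the sign-pattern encoding x_sign are my own conventions, not from the slides):

import numpy as np

def Z(y, c, A, b, x_sign, tol=1e-9):
    """Return sum_i y_i b_i if conditions (a), (b), (c) hold for every j, else -inf.

    x_sign[j] is '>=0', 'free', or '<=0', describing the sign constraint on x_j."""
    reduced = np.asarray(c, float) - A.T @ np.asarray(y, float)   # c_j - sum_i y_i a_ij
    for d, sign in zip(reduced, x_sign):
        if sign == '>=0' and d < -tol:         # condition (a) fails
            return -np.inf
        if sign == 'free' and abs(d) > tol:    # condition (b) fails
            return -np.inf
        if sign == '<=0' and d > tol:          # condition (c) fails
            return -np.inf
    return float(np.asarray(y, float) @ np.asarray(b, float))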
Example (continued). For the primal above, with x1 ≥ 0, x2 ≤ 0, x3 free:
  min_{x1 ≥ 0}  { (1 − (y1 + 2y2 + y3)) x1 }  =  0    if 1 − (y1 + 2y2 + y3) ≥ 0   (aa)
                                              = −∞    otherwise

  min_{x2 ≤ 0}  { (2 − (5y1 + 3y2 + y3)) x2 } =  0    if 2 − (5y1 + 3y2 + y3) ≤ 0   (bb)
                                              = −∞    otherwise

  min_{x3 free} { (3 − (4y1 − y2 − 2y3)) x3 } =  0    if 3 − (4y1 − y2 − 2y3) = 0   (cc)
                                              = −∞    otherwise

  Z(y) = 6y1 + 3y2 + 4y3    if (aa), (bb), (cc) hold
       = −∞                 otherwise
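For instance, conditions (aa), (bb), (cc) can be checked directly on this example's data. The particular y below is one multiplier vector I verified by hand, so treat it only as an illustration:

import numpy as np

c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 5.0, 4.0],     # x1 + 5x2 + 4x3 >= 6
              [2.0, 3.0, -1.0],    # 2x1 + 3x2 -  x3  = 3
              [1.0, 1.0, -2.0]])   # x1 +  x2 - 2x3 <= 4
b = np.array([6.0, 3.0, 4.0])

y = np.array([0.7, -0.2, 0.0])     # satisfies y1 >= 0, y2 free, y3 <= 0
d = c - A.T @ y                    # reduced costs (d1, d2, d3)

ok = (d[0] >= 0) and (d[1] <= 0) and np.isclose(d[2], 0.0)   # (aa), (bb), (cc)
print(bool(ok), float(b @ y))      # True 3.6 -> 3.6 is a valid lower bound on Z_P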
The best lower bound means the lower bound that is the closest to ZP .
The following LP finds the best lower bound by maximizing Z (y ) over the
constraints (a), (b), (c), and the sign constraints on y’s.
  Z_D = max_y  Z(y)
  s.t. y_i ≥ 0      ∀ i = 1, ..., m1
       y_i free     ∀ i = m1+1, ..., m2
       y_i ≤ 0      ∀ i = m2+1, ..., m
Dual problem
(D)  Z_D = max_y  Σ_{i=1}^m b_i y_i
     s.t. Σ_{i=1}^m a_ij y_i ≤ c_j     j = 1, ..., n1
          Σ_{i=1}^m a_ij y_i = c_j     j = n1+1, ..., n2
          Σ_{i=1}^m a_ij y_i ≥ c_j     j = n2+1, ..., n
          y_i ≥ 0      ∀ i = 1, ..., m1
          y_i free     ∀ i = m1+1, ..., m2
          y_i ≤ 0      ∀ i = m2+1, ..., m
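As a numerical illustration of this primal-dual pair (a sketch using scipy.optimize.linprog; I rewrite ≥ rows as ≤ rows by negation because linprog only accepts ≤ and = constraints, and turn the max into a min by negating the objective):

from scipy.optimize import linprog

# Primal of the running example: min x1 + 2x2 + 3x3
#   s.t. x1 + 5x2 + 4x3 >= 6,  2x1 + 3x2 - x3 = 3,  x1 + x2 - 2x3 <= 4,
#        x1 >= 0, x2 <= 0, x3 free.
primal = linprog(c=[1, 2, 3],
                 A_ub=[[-1, -5, -4], [1, 1, -2]], b_ub=[-6, 4],
                 A_eq=[[2, 3, -1]], b_eq=[3],
                 bounds=[(0, None), (None, 0), (None, None)])

# Its dual: max 6y1 + 3y2 + 4y3
#   s.t. y1 + 2y2 + y3 <= 1,  5y1 + 3y2 + y3 >= 2,  4y1 - y2 - 2y3 = 3,
#        y1 >= 0, y2 free, y3 <= 0.
dual = linprog(c=[-6, -3, -4],
               A_ub=[[1, 2, 1], [-5, -3, -1]], b_ub=[1, -2],
               A_eq=[[4, -1, -2]], b_eq=[3],
               bounds=[(0, None), (None, None), (None, 0)])

print(primal.fun, -dual.fun)   # the two values coincide (5.0 by my hand check), as strong duality predicts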
Primal                                       Dual
minimize   cᵀx                               maximize   bᵀy
subject to a_iᵀ x ≥ b_i,  i ∈ M1             subject to y_i ≥ 0,        i ∈ M1
           a_iᵀ x ≤ b_i,  i ∈ M2                        y_i ≤ 0,        i ∈ M2
           a_iᵀ x = b_i,  i ∈ M3                        y_i free,       i ∈ M3
           x_j ≥ 0,       j ∈ N1                        A_jᵀ y ≤ c_j,   j ∈ N1
           x_j ≤ 0,       j ∈ N2                        A_jᵀ y ≥ c_j,   j ∈ N2
           x_j free,      j ∈ N3                        A_jᵀ y = c_j,   j ∈ N3
Each primal constraint corresponds to a dual variable
Each primal variable corresponds to a dual constraint
Equality constraints always correspond to free variables
Primal (minimize)             Dual (maximize)
Constraints   ≥ b_i           Variables     ≥ 0
              ≤ b_i                         ≤ 0
              = b_i                         free
Variables     ≥ 0             Constraints   ≤ c_j
              ≤ 0                           ≥ c_j
              free                          = c_j
max  x1 + 2x2 + x3 + x4
s.t. x1 + 2x2 + 3x3 ≤ 2    (y1)
     x2 + x4 ≤ 1           (y2)
     x1 + 2x3 ≥ 1          (y3)
     x1, x3 ≥ 0,  x2, x4 free
(D)  min  2y1 + y2 + y3
     s.t. y1 + y3 ≥ 1
          2y1 + y2 = 2
          3y1 + 2y3 ≥ 1
          y2 = 1
          y1 ≥ 0,  y2 ≥ 0,  y3 ≤ 0
Primal and Dual Pair in Standard Form
Theorem
If we transform a linear program into an equivalent one (such as by replacing
free variables, adding slack variables, etc.), then the duals of the two
problems will be equivalent.
Theorem
If we take the dual of the primal, and then take the dual of that dual, we
obtain a problem equivalent to the primal problem; that is, the dual of the
dual is the primal.
Primal:  min  cᵀx    s.t.  Ax = b,  x ≥ 0
Dual:    max  bᵀy    s.t.  Aᵀy ≤ c
Weak duality: for any primal feasible x and dual feasible y, bᵀy ≤ cᵀx.
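A concrete check of this inequality, reusing the earlier slack-variable example written as a minimization (c = (−1, −2, 0, 0, 0)); the dual-feasible y below is one point I picked by hand, so it is only an illustration:

import numpy as np

c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])     # min c^T x is the earlier max problem, negated
A = np.array([[1, 0, 1, 0, 0],
              [0, 2, 0, 1, 0],
              [1, 1, 0, 0, 1]], dtype=float)
b = np.array([100.0, 200.0, 150.0])

x = np.array([50.0, 100.0, 50.0, 0.0, 0.0])   # primal feasible (the BFS computed earlier)
y = np.array([-1.0, -1.0, 0.0])               # dual feasible: A^T y <= c componentwise

assert np.allclose(A @ x, b) and np.all(x >= 0)
assert np.all(A.T @ y <= c + 1e-9)
print(float(b @ y), "<=", float(c @ x))       # -300.0 <= -250.0, as weak duality promises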
Based on the strong duality theorem, we know that (x, y ) is optimal to the
primal and dual respectively if and only if
x is primal feasible
y is dual feasible
They achieve the same objective value
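These three conditions can be verified mechanically. In the sketch below, the candidate pair x* = (2, 0, 1), y* = (7/9, 1/9, 0) for the relaxation example comes from my own hand calculation, so it is an assumption being checked rather than something stated on the slides:

import numpy as np

A = np.array([[1.0, 5.0, 4.0],    # >= 6   (multiplier y1 >= 0)
              [2.0, 3.0, -1.0],   # =  3   (multiplier y2 free)
              [1.0, 1.0, -2.0]])  # <= 4   (multiplier y3 <= 0)
b = np.array([6.0, 3.0, 4.0])
c = np.array([1.0, 2.0, 3.0])

x = np.array([2.0, 0.0, 1.0])     # candidate primal solution
y = np.array([7/9, 1/9, 0.0])     # candidate dual solution

Ax, ATy = A @ x, A.T @ y
primal_feasible = Ax[0] >= 6 and np.isclose(Ax[1], 3) and Ax[2] <= 4 and x[0] >= 0 and x[1] <= 0
dual_feasible = (ATy[0] <= c[0] + 1e-9 and ATy[1] >= c[1] and np.isclose(ATy[2], c[2])
                 and y[0] >= 0 and y[2] <= 0)
same_value = np.isclose(c @ x, b @ y)               # both objective values equal 5

print(primal_feasible, dual_feasible, same_value)   # True True True -> x and y are optimal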
3 LP Duality
Theorem
Let x and y be feasible solutions to the primal and dual problems,
respectively. Then x and y are optimal if and only if
Proof in homework!