
CHAPTER 4

DYNAMIC PROGRAMMING

4.1 Introduction

In most practical problems, decisions have to be made sequentially at different points in time and at different levels, say, for a component, for a subsystem, and/or for a system. The problems in which the decisions are to be made sequentially are called sequential decision problems. Since these decisions are to be made at a number of stages, they are also referred to as multistage decision problems.

Dynamic programming is a mathematical technique well suited for the optimization of multistage decision problems. The founding father of dynamic programming, and the man primarily responsible for the development of its concepts, is Richard Bellman. Bellman first developed the concepts of dynamic programming in the late 1940s and early 1950s while working as a researcher at the Rand Corporation.

The dynamic programming technique, when applicable, represents or decomposes a multistage decision problem as a sequence of single-stage decision problems. Thus an N-variable problem is represented as a sequence of N single-variable problems that are solved successively. In most cases, these N subproblems are easier to solve than the original problem. The decomposition into N subproblems is done in such a manner that the optimal solution of the original N-variable problem can be obtained from the optimal solutions of the N one-dimensional subproblems.

It is important to note that the particular optimization technique used for the optimization of the N single-variable problems is irrelevant. It may range from a simple enumeration process to a differential calculus or a nonlinear programming technique.

Multistage decision problems can also be solved by classical optimization techniques or linear programming techniques. But their application requires the functions and variables involved to be continuous and continuously differentiable, etc. On the other hand, dynamic programming can deal with discrete variables and with nonconvex, noncontinuous and nondifferentiable functions.
4.2 0 Optimiz stion Techniq ues Dynam ic Progra mming

the input throug h a stage


programmmg technique suffers from a ntnJor drawba
ck known us the 1s re Iate d t o
·c The dynamic !·or a single stage decision process, the output
3 ts comput ationnll y, the presenc e ofmult1 pl~ state vanublcs denoted by
urse of<limension li1y· that s. ll' one considers trunsformal1on function (I)
1cs m the solutton of dynami c program mmg problem .
createbslmaJor ditlicult
1 · T = t(x, s)
a pro em wi1 1 5 "'late ,anab l" , I1 \\it · I JO ents the amount of information . ns we.ma • ke the return functio n
t:ac l compon dec1s1o
' Since the mput state of the system mfluences the
. d t:;,
be astronom1cat. This dramatic tncrease
~eq~ire for bot~ to be computed and stored ,rnuld can be represented as
t e amount
01
"ork {and storage ) require d to reach an optnnal solution has been called (2)
i~ R = r (x, s)
er, despite this disadvantage, it is very
t te "curse of dime'ls onality" ~y Bellmann. Howe,
the solutio n of a wide range of comp le, problems in several areas of dec1s1on
su~t- able for Example:
maiemg . particu lar pomt m time (~e stage). He
Consider a manager c the decision makers at a
t of money to mvest (state variabl e? m one of 10
ammin g has at her disposal a certain amoun
4.2 The Devel opme nt Of Dynamic Progr will yield a return on ms m~eSl ment
possible projects (decision variables). Each project
.Multis tage Decisio n Proces ses: a function of both the state vanabl e and
(return function). Hence, each possible return is
age decision process is one in which a the feasible decision variable.
In lhe theof) of dynamic programmrng a multist
in series so that the output of one stage is nted schem2 ncally as shown below.
nur~ber of smgle -stage process are connected A serial multistage decision process can be represe
should be called a serial multistage i, 2, l are labeled in decrea smg order.
lhe mput of the succeedmg stage. This type of process For the sake of convenience, the stages n, n-1, ....
n process . srnce the mdivid ual stages are connected head to tail with no recycle.
decisio
types of practical problems.
Sena! multistage dec1s1on problems arise in many
stage decisio n process, which is a component
'.\lathematicaJ repres entatio n: A single -
a rectangular box, shown below:
of the mulnst age problem can be represented as
Decision X

Stage stage (n-1) stage i sta;e 2 stage 1


stage n
IAputS transform ation Output T
T = t(X,s)
Multis tage dec!s'o n proble m
bys .. , and the output state vector as
For the i' stage, the mput state wctor is denoted
h

one, the output stage i .... I must 'be equal to the input to
Return R = r (X,S) s,. Since the system is a serial
and return functio ns can~ represe nted as
,Singl e - stage decisio n proble m stage i. Hence the state transformation
(S . '\) (3)
S, t.
input parameters S (or data}, certain (4)
A decision process can be characterized by certain R r,(S_, ,'\)
ters (T) representing as a result of making I The state trJnsfo rmatio n
dec1s1on variables (x) and certain output parame Where '\ dl'nOt\'S the , ector of <lee1s1 on, anat>le-~ 3t stage
input state vanabl cs, and lhc oulput
the dec1s1on. The input parame ters are called cquullons (3) ari> ahu c,tlkd design cquatwm~.
-
, there is a return or objective function
parame ters are called out put state variables. Finally
The ob1cct1vc ofu multistage <lcds1on ph,hk" ' is

measur es the effectiv eness of the decisio ns made and the output that results
R, which stage returns , say,
from these decisio ns . Find,, ,,, '• so as tlw optimtZl' ~\,ll\C iuit-'tt\"~ \,: th<" tndi, 1Juul
n is a stage, and input und outp111 f(R 1, R,, R.) and satist\ l qu;tti\,n~ . U) und (4)
Formal ly, the point at which we make a decisio ~c'-111110 ::.ubvptim1za11on problems
are state variabl es. And the decisio n itself governed by some sort of equn11on In gcm•1 ill. 1111y optn11i;:\tion \lh'bkm th:it \':\\\ l--.· J~"\,m~x
parame ters n, l'l\'grJm mm~. l'hcrefore, the nature of
stage we arc forced to muke 11 decision s 1s a c11nd1duit· for \'l't1\'ll'llt ~llhllw n, 1:1 \w namic-
or rule, called a transfo rmatio n. Also, at each n g1wn multistage problem can be
Every decisio n that we make has a relative worth ,,r
benefit rcfleclcd by a dcc1s1011 lhc n st,,gl' rct111n l\111ct11,n •f" d\'l\"tll\l llt'~ \\hl."th.-
1
r
"orb as a 1.kcornposition techniq ue, it
\I
benefit equatio n. This equatio n can be rcpre1u :ntcd a11 a n:turn fu11ct1011 Hince f'ur CVL'IY solvl'd by Dv1111i11ie p111~r.1mm111~ . ~llh'l' th\· ll\\'th\
c function .
Cllch clcci,1011 lh1H rL·l11111 f11nctJl)11, 111 rcq111n·s thl' S\'1wrn\lility illld 11\\1111\lll\\l\'tty \,fthl' \'hJl'l'tn
set of decisio ns we make, we get a return on
the dcc1·,1on made al lhat Hllll{c,
genera l, depend on both the state variable and
On theory,~ ~---=....,.-- ,......-0:,.....- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -- -
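As a concrete illustration of the single-stage transformation and return functions in equations (1) and (2), the manager's investment decision can be modelled directly. This is a sketch only: the project list and rates below are invented for illustration, not taken from the text.

```python
# A minimal sketch of a single-stage decision process.
# State s: money available to invest.  Decision x: which project to pick.
# Return function R = r(x, s); here each (hypothetical) project pays a
# fixed rate on the invested state s.

rates = {1: 0.05, 2: 0.08, 3: 0.03}      # hypothetical projects -> return rate

def r(x, s):
    """Return function R = r(x, s): profit from investing state s in project x."""
    return rates[x] * s

def best_decision(s):
    """Optimize the single-stage return over the feasible decisions."""
    return max(rates, key=lambda x: r(x, s))

s = 1000.0                               # state: money at this stage
x_star = best_decision(s)
print(x_star, r(x_star, s))              # project 2 yields the largest return
```

With a single stage there is nothing to decompose; the later sections show how chaining such stages through the design equations (3) turns this into a genuine dynamic programming problem.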

To have separability of the objective function, we must be able to represent the objective function as the composition of the individual stage returns. This requirement is satisfied for additive objective functions,

    f = sum_{i=1}^{n} R_i = sum_{i=1}^{n} R_i(s_{i+1}, x_i)            (5)

and for multiplicative objective functions,

    f = prod_{i=1}^{n} R_i = prod_{i=1}^{n} R_i(s_{i+1}, x_i)          (6)

The objective function is said to be monotonic if, for all values of a and b that make

    R_i(x_i = a, s_{i+1}) >= R_i(x_i = b, s_{i+1})

the following inequality is satisfied:

    f(x_n, x_{n-1}, ..., x_{i+1}, x_i = a, x_{i-1}, ..., x_1, s_{n+1})
        >= f(x_n, x_{n-1}, ..., x_{i+1}, x_i = b, x_{i-1}, ..., x_1, s_{n+1}),   i = 1 to n.

The serial multistage decision problems can be classified into three categories as follows:

1. Initial value problem: If the value of the initial state variable, s_{n+1}, is prescribed, the problem is called an initial value problem.

2. Final value problem: If the value of the final state variable, s_1, is prescribed, the problem is called a final value problem. Note that a final value problem can be transformed into an initial value problem by reversing the directions of s_i, i = 1, 2, ..., n+1.

3. Boundary value problem: If the values of both the input and the output state variables are specified, the problem is called a boundary value problem.

[Figure: the three categories shown on a chain of stages n, n-1, ..., 1, with a circled arrow marking each prescribed state variable.]

Now, a dynamic programming problem can be stated as follows:

    Find x_1, x_2, ..., x_n which optimize
        f(x_1, x_2, ..., x_n) = sum_{i=1}^{n} R_i = sum_{i=1}^{n} r_i(s_{i+1}, x_i)
    where the x_i are real and satisfy the design equations
        s_i = t_i(s_{i+1}, x_i),   i = 1, 2, ..., n

In solving the above problem, dynamic programming makes use of the concept of suboptimization and the principle of optimality.

4.3 Suboptimization and Principle of Optimality

The process of suboptimization was stated by Richard Bellman as the principle of optimality.

Bellman's principle of optimality: An optimal policy (or set of decisions) has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

It implies that, given the initial state of a system, an optimal policy for the subsequent stages does not depend on the policy adopted at the preceding stages. (OR) Future decisions for the remaining stages will constitute an optimal policy regardless of the policy adopted in previous stages.

Explanation (suboptimization): In a problem of optimization related to a complex system, it would be desirable to break the system into components which could be optimized more or less individually, instead of trying to optimize the complete system as a single unit. For this breaking up and component suboptimization, a logical procedure is to be used; otherwise, the procedure might result in a poor solution.

Now, for instance, consider that a given system can be broken down into three components: component i, component j, and component k.

[Figure: the original complete system shown as three boxes, component k, component j and component i, connected in series.]

Consider the suboptimization of component j without a consideration of the other components. But the suboptimization of any interior component affects the subsequent components. As such, it cannot be suboptimized without considering its effect on the downstream (subsequent) components. Then, the following mode of suboptimization can be adopted as
a rational optimization strategy. Since the last component in a serial system influences no other component, it can be suboptimized independently. Then the last two components can be considered together as a single (larger) component and can be suboptimized without adversely influencing any of the subsequent components. This process can be continued to group any number of end components as a single (larger) end component and suboptimize them. This process of suboptimization is shown below. Since the suboptimizations are to be done in the reverse order, the components of the system are also numbered in the same manner for convenience.

[Figure: suboptimize component i; then suboptimize components j and i together; then suboptimize components k, j and i (the complete system).]

Note: A problem which does not satisfy the principle of optimality cannot be solved by dynamic programming.

4.4 Computations in Dynamic Programming (DP)

Computations in DP are done recursively, in the sense that the optimum solution of one subproblem is used as an input to the next subproblem. By the time we solve the last subproblem, we will have at hand the optimum solution for the complete problem. The manner in which the recursive computations are carried out depends on how we decompose the original problem. In particular, the subproblems are normally linked together by some common constraints. As we move from one subproblem to the next, we must account for the feasibility of these constraints.

Example (shortest path problem): Suppose that we want to select the shortest highway route (path) between two cities. The following route network provides the possible routes between the starting city at node 1 and the destination city at node 7. The routes pass through intermediate cities designated by nodes 2 to 6.

[Figure: route network with arc lengths, in miles: 1-2 = 7, 1-3 = 8, 1-4 = 5, 2-5 = 12, 3-5 = 8, 4-5 = 7, 3-6 = 9, 4-6 = 13, 5-7 = 9 and 6-7 = 6; node 7 is the destination.]

Solution: We can solve this problem by exhaustively enumerating all the routes between nodes 1 and 7 (there are five such routes). However, in a large network, exhaustive enumeration is not efficient computationally.

To solve this problem by DP, we first decompose it into stages; the vertical dashed lines in the figure show the three stages of the problem.

[Figure: the same network partitioned by dashed lines into stage 1 (arcs out of node 1), stage 2 (arcs into nodes 5 and 6) and stage 3 (arcs into node 7).]

Next, we carry out the computation for each stage separately. The general idea is to compute the shortest (cumulative) distances to all the terminal nodes of a stage and then use these distances as input data to the immediately succeeding stage. Starting with stage 1, we can see that nodes 2, 3 and 4 are each connected to the starting node 1 by a single arc. Hence, for stage 1 we have:

Stage 1 summary results
    Shortest distance to node 2 = 7 miles (from node 1)
    Shortest distance to node 3 = 8 miles (from node 1)
    Shortest distance to node 4 = 5 miles (from node 1)
Example: (Shortest path problem) Next, we move the stage 2 to determine the shortest {cumulative) distances to nodes 5
Suppose that we want to select the shortest highway rote (path) between two cities. The and 6.
following route network provides the possible routes between the starting ci ty and node I Considering node 5 first, from network, there are three possible routes to reach node
and the destination city at nod 7. The routes pass through intermediate cities designated 5: name ly (2, 5), (3, 5) and (4, 5). This information together with the shortest distances to
by nodes 2 to 6. nodes 2, 3 and 4 determines the shortest (cumulative) distance to node 5 as
0 4.9
4.8 0 Dyna mic Prog ramm ing
Opti miza tion Tec/Jniques

tion of the stag e 1.


of the stage 1 (or) objective func
+ (Distance from } R, (x,, s;,,) is the return function p\e ofop uma lity
= Min { (shortest distance ical formulation ofBe llma n'sp
rmci
(Sho rtest distance to node S) to node i) node i to node S) To understand the above mathemat
1 = 2, 3, 4
on:
consider the following explanati e obJective function f
7+1 2=1 9} ctive 1s to optimize the n -stag
Suppose that the desired obje
= Mm { 8+ 8 = 16 = 12 (from node 4) which is given by the sum of the
ind1V1dual stage retur ns.
5 + 7 = 12
Simi larly for node 6 we have Optimize (I)

= Min { (shortest dis~ance + (Distance from }


bles are related as
(Sho rtest distance to node 6) to node 1) node i to node 6) where the state and decision vana (2)
1 = 3, 4
s, = t, (s,." x,) , i = I, 2, .... n this stag e
If the mpu t to
_ . { 8 +9 = 17 } starting at the final stage i = 1
-Mm s+ 13 = 18 =17 (fro
mno de3) consider the first sub problem by optim ality . x must be selec ted to optimize
the principle of
s2 is specified, then according to
1
selected such that R, (x,.
to the other stages x 1 must be
Stag e 2 Sum mary Results Rr Irrespective of what happens num is deno ted as f 1, we have
t s If the ophr
Shor test distance to node S
= 12 miles (from node 4) s2) is an optimum for the inpu 2

Opt (3)
Shor test dista nce to node 6 = 17 miles (from node 3) qs) = xi [R, (x 1, s 2)
reached from either
st 3. The destination node 7 can be 1s specified, the opnm al
The la step is to consider stage y. Since once the mput state s 2
node s S or 6. This is called a one - stage polic (3) is a parametric equation
ly defined. Thus Equation
values ofR " x 1 and s 1 are complete t para mete r· s 2• Next, consider the second
+ (Distance from } tion of the mpu
= Min { (shortest distance giving the optimum f 1 as a func tes the optimum objective
(Sho rtest distance to node 7) node i to node 7) two stages together. If f2 deno
i = 5, 6 to node i) subproblem by grouping the last valu e of the mpm sJ we have
for a specified
value of the second sub problem
Stag e 3 Sum mary Results (-t)
21 miles (from node 5)
Shor test distance to node 7 =
een nodes I and 7 is 21 optimize R, for a given
that the Shortest distance betw nes that x, be selected so as to
The given computations show The principle of optimality requ
miles. x and SJ are spec ified . Equa uon (J) can be wntt en as
) 1s : I - 4 - S - 7. Sr Since S2 can be obtained once 2
Thus , the optimum route (path Opt (5)
putations can be on f2 (sJ) = xi [R 2 (x 2 , sJ) + f
(s~))
math ematically, the DP recursive com
From the above problem,
ws: problem It can be seen
pres sed as follo policy for the two - stage sub
Thus f2 represents the optimal ty of the problem from two [in
f, (s,.,) = Min {R, (x,, s,, 1) + f,_1 (s,)} redu ced the d1m ens1 onali
x, that the pnnc1ple of optimality ly by rewnttng Equation
(5)). This can be seen more dear
value of the objective function
corresponding to the last Equation (4)] to one [m Equation
where f,_ 1 denote the optimal stage 1 - I. (5) usmg Equation (2) as
t to
i-1 stag es, and s, as the mpu as
(6)
1ple of optimality can be wntten
Mathematically, Bell man 's pnnc
um 1s determined
Opt f, (s,)) for a specified mput Si, the optim
f, (s,") = x; (R, (x,, s,) + 1 In this form it can be seen that bk x. Thus the optimization problem
the dec1s 1on vana
ion corresponding to the solely by a suitable choice of 2
ously vaned to produce
al value of the objective funct both ,, and , 1 an.' to be simultane
where f,(s,+ 1) denotes the optim stage ~lated m Equation (4), tn which by Equa tion (3) and Equation (S).
t to the 1.
o sub problems defin ed
last i stages and s,. 1 is the inpu the optunum f2, 1s reduced tot\\ ble,
function corresponding to invo lves only a smgle dec1s1on vana
optimal value of the objective Since the opt1m1zot1on of each of these subp roble ms
Similarly, f,_ 1 (s,) denotes the -1; x, deno tes the vector of decisior ler; gcnc raliz mg, the 1th sub problem defined by
input to the stage i much simp
the last i -J stages and s, is the the opt1m1zat1on 1s, tn general,
can be wntten as
vari abk at stage i.
r, (s ,,, ) - x,~P\1 [R, (x,. s,t1) + R,_, (x.,.,, s,) 4- ••• + R, (x,, s)]
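The single-variable recursion derived in equations (3) to (6) can be sketched generically in code. The stage data below (the return R, the transformation t, and the decision set) are invented purely to illustrate the recursion on a three-stage additive problem; they are not from the text.

```python
from functools import lru_cache

# Generic sketch of f_i(s_{i+1}) = opt_{x_i} [R_i(x_i, s_{i+1}) + f_{i-1}(s_i)]
# for an additive n-stage problem (here: minimization).
n = 3
X = [0, 1, 2]                 # feasible decisions at every stage (assumed)

def R(i, x, s):               # return of stage i (illustrative choice)
    return (s - x) ** 2

def t(i, s, x):               # design equation s_i = t_i(s_{i+1}, x_i)
    return s - x

@lru_cache(maxsize=None)
def f(i, s):
    """Optimal value of the last i stages for input state s."""
    if i == 0:
        return 0
    # Each call optimizes over a SINGLE decision variable, as in Eq. (5).
    return min(R(i, x, s) + f(i - 1, t(i, s, x)) for x in X)

print(f(n, 5))                # -> 10 for this illustrative data
```

The memoization (`lru_cache`) is what makes the reduction pay off: each state is solved once and reused, instead of re-optimizing all upstream decisions as in equation (4).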
[Figure: serial system of stages n, n-1, ..., 1, with state inputs s_{n+1}, s_n, ..., s_1 and stage optima f_n, ..., f_1.]

Characteristics of dynamic programming

The basic features which characterize dynamic programming are presented below.

1. Stages: A dynamic programming problem can be divided into several subproblems; each subproblem is called a stage.

2. States: States are the several possible conditions in which the system may be at that stage of the problem. Thus each stage has a number of states associated with it. The essence of dynamic programming is the selection of the best combination of states at the various stages to arrive at an optimal solution.

3. Policy: A policy is the decision at each stage to transform the current state into a state associated with the next stage. It is natural that a dynamic programming problem can be viewed as a network, with each node corresponding to a state and each column corresponding to a stage. The value assigned to a branch connecting two nodes would be the cost or distance contributing to the objective function.

4. Optimal policy: Given the current state, an optimal policy for the remaining stages is independent of the policy adopted in previous stages. That is, at each stage the current state conveys all the information about previous behaviour necessary for determining the optimal policy from now on. This is called the principle of optimality: the optimal policy from a stage to the end is independent of how one actually arrives at that stage.

5. Recursive relation: A relationship which links the optimal policy at stage n to the (n-1) stages that follow. By using this recursive relation, the solution procedure moves from stage to stage, each time finding an optimal policy for each state at that stage, until the optimal policy for the last stage is found.

   Backward recursive relation/function: Here computation begins from the last stage/subproblem, and this stage will be numbered as the last stage. Since the recursion proceeds in a backward direction, this type of recursive function is known as a backward recursive function. In other words, a recursive procedure in which s_i is regarded as the input and s_{i+1} as the output for the ith stage is called a backward recursive procedure. This method is convenient when the problem involves optimization with respect to given outputs, because the output s_1 is naturally left out of consideration.

   Forward recursive relation/function: While defining the stages of the original problem, the first subproblem will be numbered as stage 1, and the last subproblem will be numbered as the last stage. Then the recursive function will be defined as per this assumption. This type of recursive function is known as a forward recursive function. In other words, a recursive procedure in which s_{i+1} is regarded as the output and s_i as the input for the ith stage is called a forward recursive procedure.

6. Deterministic model: A deterministic model of DP assumes that the next stage and state of the system can be completely determined by the state and policy at the current stage.

Remark: Both the forward and backward recursions yield the same solution. Although the forward recursion procedure appears more logical, DP literature invariably uses backward recursion. The reason for this is that, in general, backward recursion may be more efficient computationally.

Dynamic programming algorithm

The solution of a multistage problem by DP involves the following steps.

Step 1: Identify the decision variables and specify the objective function to be optimized.

Step 2: Decompose the given problem into a number of smaller subproblems (stages). Identify the state variables at each stage and write down the transformation function as a function of the state variable and the decision variable at the next stage.

Step 3: Write down a general recursive relationship for computing the optimal policy. Decide whether the forward or the backward recursive method is to be used to solve the problem.

Step 4: Construct appropriate stages to show the required values of the return function at each stage.

Step 5: Determine the overall optimal decision or policy and its value at each stage.

Example 1

Find the shortest path from city 1 to city 7 in the following route network by DP (using forward recursion).

[Figure: route network with cities 1 to 7; the arc lengths are those used in the tables below.]

Solution: Here the problem is to find a path between city 1 and city 7 which is minimum among all the other paths between city 1 and city 7. First, decompose the problem into subproblems/stages as follows:

[Figure: the network divided into stage 1, stage 2 and stage 3.]
Now s_1 is the state in which node 1 lies. Also, s_1 has only one state value, s_1 = 1. State s_2 has only three possible values, say 2, 3, 4, corresponding to stage 1, and so on. Possible alternative paths from one stage to the other will be called decision variables, denoted by x_i, the decision which takes us from s_i to s_{i+1}. The return or gain, which obviously is a function of the decision, will be denoted by R_i(x_i, s_{i+1}). Here R_i(x_i, s_{i+1}) can be identified with the distance or length of the corresponding arc.

Now we use the forward recursive procedure with the recursive relation

    f_i(s_{i+1}) = min_{x_i} {R_i(x_i, s_{i+1}) + f_{i-1}(s_i)}

Initially, for i = 0, f_0(s_1) = f_0(1) = 0.

For i = 1 (stage 1):

    f_1(s_2) = min_{x_1} [R_1(x_1, s_2)]

Tabulating the data for f_1(s_2):

    s_1   s_2   x_1    R_1(x_1, s_2)   f_1(s_2)   Optimal policy
    1     2     1-2    7               7          1-2
    1     3     1-3    8               8          1-3
    1     4     1-4    5               5          1-4

For stage 2 (i = 2):

    f_2(s_3) = min_{x_2} {R_2(x_2, s_3) + f_1(s_2)}

    s_2   s_3   x_2    R_2(x_2, s_3)   R_2 + f_1   f_2(s_3)   Optimal policy
    2     5     2-5    12              19
    3     5     3-5    8               16
    4     5     4-5    7               12*         12         4-5
    3     6     3-6    9               17*         17         3-6
    4     6     4-6    13              18

For the last stage 3 (i = 3):

    f_3(s_4) = min_{x_3} {R_3(x_3, s_4) + f_2(s_3)}

    s_3   s_4   x_3    R_3(x_3, s_4)   R_3 + f_2   f_3(s_4)   Optimal policy
    5     7     5-7    9               21*         21         5-7
    6     7     6-7    6               23

Therefore, the path with minimum distance from city 1 to city 7 is 1 - 4 - 5 - 7, with 21 miles.

Example 2

Solve the previous problem by DP, using the backward recursive procedure.

Solution: The backward recursive equation is given as

    f_i(s_i) = min_{x_i} {R_i(x_i, s_i) + f_{i+1}(s_{i+1})}

Here f_4(s_4) = f_4(7) = 0 initially, when starting backwards.

Stage 3: Because node 7 (the destination city) is connected to cities 5 and 6 with exactly one route each, there are no alternatives to choose from, and the stage 3 results can be tabulated as

    f_3(s_3) = min_{x_3} {R_3(x_3, s_3) + f_4(s_4)}

    s_4   s_3   x_3    R_3(x_3, s_3)   R_3 + f_4   f_3(s_3)   Optimal policy
    7     5     7-5    9               9           9          7-5
    7     6     7-6    6               6           6          7-6

Stage 2: Route (2, 6) is not a feasible alternative because it does not exist. Given f_3(s_3) from stage 3, we can then compare the feasible alternatives as shown in the following table:

    f_2(s_2) = min_{x_2} {R_2(x_2, s_2) + f_3(s_3)}

    s_2   x_2    R_2(x_2, s_2)   R_2 + f_3   f_2(s_2)   Optimal policy
    2     5-2    12              21*         21         5-2
    3     5-3    8               17
    3     6-3    9               15*         15         6-3
    4     5-4    7               16*         16         5-4
    4     6-4    13              19

Stage 1: Using f_2(s_2) from stage 2,

    f_1(s_1) = min_{x_1} {R_1(x_1, s_1) + f_2(s_2)}

we can have the following table:

    s_1   x_1    R_1(x_1, s_1)   R_1 + f_2   f_1(s_1)   Optimal policy
    1     2-1    7               28
    1     3-1    8               23
    1     4-1    5               21*         21         4-1

The optimum solution at stage 1 shows that city 1 is linked to city 4. Next, the optimum solution at stage 2 shows that city 4 is linked to city 5. Finally, the optimum solution at stage 3 indicates that city 5 is linked to city 7. Thus the complete optimum (minimum) path is 1 - 4 - 5 - 7, with 21 miles.

Note: The problem could equally have been worked by reversing the recursion procedure.

Note: Any subproblem in a DP can also be optimized by the calculus method (Problem 3 in the solved problems is an example of the calculus method of solving along with the DP approach).

Note: In DP, the decisions can be tabulated at each stage (the previous minimal path problems are examples of the tabulated method of solving along with the DP procedure).
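As a check on the remark that forward and backward recursion yield the same solution, both recursions for the network of Examples 1 and 2 can be sketched as follows, using the arc lengths given in the text:

```python
# Forward and backward recursions for the route network of Examples 1 and 2.
# Both must yield the same shortest distance (21 miles), as the Remark states.
arcs = {(1, 2): 7, (1, 3): 8, (1, 4): 5, (2, 5): 12, (3, 5): 8,
        (4, 5): 7, (3, 6): 9, (4, 6): 13, (5, 7): 9, (6, 7): 6}
layers = [[1], [2, 3, 4], [5, 6], [7]]       # cities grouped by stage

def forward():
    # f[j] = shortest distance from city 1 to city j, built stage by stage.
    f = {1: 0}
    for layer in layers[1:]:
        for j in layer:
            f[j] = min(f[i] + d for (i, jj), d in arcs.items()
                       if jj == j and i in f)
    return f[7]

def backward():
    # g[i] = shortest distance from city i to city 7, built from the last stage.
    g = {7: 0}
    for layer in reversed(layers[:-1]):
        for i in layer:
            g[i] = min(g[j] + d for (ii, j), d in arcs.items()
                       if ii == i and j in g)
    return g[1]

print(forward(), backward())                 # both print 21
```

The intermediate values reproduce the tables above: the forward pass gives f = 12 and 17 at cities 5 and 6, while the backward pass gives 15 and 16 at cities 3 and 4.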

4.5 Linear Programming as a Case of Dynamic Programming


Find r.• = r.• (b 1
, b,_. .. h ) =-
"'
~t.•,"
\1, , , \r
(+- -
JI
c )
'i
A linear programming problem with n dec1s1on variable~ and m constraints can be considered
n
as an n - stage dynamic programming problem with m str1tc variables. llccall that n
typical LPP can be stated as: SubJCC!l:d to L ,l
0
\ b1 , 1 1, 2, :. m
JI
,
1
' 0, J I, 2, .. n
~7 ~- - - - - - - - - - - - - - - - ~ - - - - -
~ l
'.....,
0 4.17

"'l 4.16 0 Optimization Techniques


Dynamic Programming

)1
'e ,-~
~ Max [ , (2500 - 5x2 2000 - IOx2 450 I .5
The

recurrence rdat1onsh1 p ( accord mg DP), when applied to this problem yields max f2 (13,, 132, 13,) = o ~ x1 ~ ""P IOOx2 + r, __1_0_· _ _4___ - io.2

r-3 ~ (13,, t\. · · l3.,) = 0 ~Max

where 13 13 . 13
X + f-
x, ~ ~ , ,
rec
• , (13, - a,. x,, 13 2 - a,. x,, .... j3m - a.. x~

i = 2,3, ... n
where a_1i x_i, a_2i x_i, ..., a_mi x_i are the resources allocated to activity i and β1 - a_1i x_i, β2 - a_2i x_i, ..., βm - a_mi x_i are the resources available for allocation at stage i - 1. Here β indicates the maximum value that x_i can take without violating any of the constraints, and is given by

    β = min (β1/a_1i, β2/a_2i, ..., βm/a_mi)

since any value larger than β would violate at least one constraint. Thus the recurrence relation (*) can be restated as

    f_i (β1, β2, ..., βm) = max over 0 ≤ x_i ≤ β of [R_i(x_i) + f_(i-1) (β1 - a_1i x_i, ..., βm - a_mi x_i)]

Thus at the i-th stage, the optimal values x_i* and f_i* can be determined as functions of β1, β2, ..., βm. Finally, at the n-th stage, since the values of β1, β2, ..., βm are known to be b1, b2, ..., bm respectively, we can determine x_n* and f_n*. Once x_n* is known, the remaining values x_(n-1)*, ..., x_1* can be determined by retracing the suboptimization steps.

Example: Maximize f(x1, x2) = 50 x1 + 100 x2

    subject to  10 x1 + 5 x2 ≤ 2500
                 4 x1 + 10 x2 ≤ 2000
                   x1 + 1.5 x2 ≤ 450
                   x1 ≥ 0, x2 ≥ 0

Solution: Since n = 2 and m = 3, this problem can be considered as a two-stage dynamic programming problem with three state parameters.

The first stage problem is to find the maximum value of f1:

    Max f1 (β1, β2, β3, x1) = max over 0 ≤ x1 ≤ β of (50 x1)

where β1, β2 and β3 are the resources available for allocation at stage 1 and x1 is a nonnegative value that satisfies the side constraints 10 x1 ≤ β1, 4 x1 ≤ β2 and x1 ≤ β3. Here β1 = 2500 - 5 x2, β2 = 2000 - 10 x2 and β3 = 450 - 1.5 x2, and hence the maximum value β that x1 can assume is given by

    β = x1* = min ( (2500 - 5 x2)/10, (2000 - 10 x2)/4, 450 - 1.5 x2 )

Thus

    f1* = 50 x1* = 50 min ( (2500 - 5 x2)/10, (2000 - 10 x2)/4, 450 - 1.5 x2 )

The second stage problem is to find the maximum value of f2:

    Max f2 (2500, 2000, 450) = max over 0 ≤ x2 ≤ β of [100 x2 + f1*]

where β1, β2, β3 are the resources available for allocation at stage 2, which are equal to 2500, 2000 and 450 respectively. The maximum value that x2 can assume without violating any constraint is given by

    β = min (2500/5, 2000/10, 450/1.5) = min (500, 200, 300) = 200

Thus the second stage problem becomes

    Max f2 = max over 0 ≤ x2 ≤ 200 of [100 x2 + 50 min ( (2500 - 5 x2)/10, (2000 - 10 x2)/4, 450 - 1.5 x2 )]

Since

    min ( (2500 - 5 x2)/10, (2000 - 10 x2)/4, 450 - 1.5 x2 )
        = (2500 - 5 x2)/10   if 0 ≤ x2 ≤ 125
        = (2000 - 10 x2)/4   if 125 ≤ x2 ≤ 200

we obtain

    Max f2 = max { max over 0 ≤ x2 ≤ 125 of (75 x2 + 12500), max over 125 ≤ x2 ≤ 200 of (25000 - 25 x2) }

Now

    max (75 x2 + 12500) = 21875 at x2 = 125
    max (25000 - 25 x2) = 21875 at x2 = 125

Hence

    f2* = 21875 at x2* = 125

and, retracing stage 1,

    x1* = min ( (2500 - 625)/10, (2000 - 1250)/4, 450 - 187.5 ) = min (187.5, 187.5, 262.5) = 187.5

Thus the optimum solution is x1* = 187.5, x2* = 125 with f_max = 21875.
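The two-stage computation of this example can be reproduced numerically. In the sketch below the function names are ours (not from the text), and a fine grid over x2 stands in for the piecewise analysis of the minimum; stage 1 uses the closed-form optimum x1* = min(β1/10, β2/4, β3).

```python
# Sketch of the two-stage procedure above (helper names are ours; a fine
# grid over x2 replaces the piecewise case analysis).

def stage1(x2):
    """Stage 1: best x1 for a given x2, i.e. x1* = min(b1/10, b2/4, b3)."""
    b1, b2, b3 = 2500 - 5 * x2, 2000 - 10 * x2, 450 - 1.5 * x2
    x1 = min(b1 / 10, b2 / 4, b3)
    return x1, 50 * x1                       # (x1*, f1*)

def stage2(steps=4000):
    """Stage 2: search 0 <= x2 <= 200 for the largest 100*x2 + f1*(x2)."""
    f_best, x2_best = max((100 * x2 + stage1(x2)[1], x2)
                          for x2 in (200 * k / steps for k in range(steps + 1)))
    return stage1(x2_best)[0], x2_best, f_best

print(stage2())   # -> (187.5, 125.0, 21875.0)
```

The grid contains the exact optimum x2 = 125, so the search reproduces the analytical answer (187.5, 125) with f_max = 21875.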


4.18 Optimization Techniques | Dynamic Programming 4.19

Solved Problems

Problem 1: Use dynamic programming to solve the following problem:

    Min z = y1^2 + y2^2 + y3^2
    subject to the constraint y1 + y2 + y3 ≥ 15 and y1, y2, y3 ≥ 0

Solution: Since the decision variables are y1, y2, y3, the given problem is a three-stage problem. The state variables are defined as follows:

    s3 = y1 + y2 + y3 ≥ 15
    s2 = s3 - y3 = y1 + y2
    s1 = s2 - y2 = y1

The recurrence relations are

    f1(s1) = y1^2 = (s2 - y2)^2
    f2(s2) = min over y2 of (y1^2 + y2^2) = min over y2 of { y2^2 + f1(s1) } = min over y2 of { y2^2 + (s2 - y2)^2 }
    f3(s3) = min over y3 of (y1^2 + y2^2 + y3^2) = min over y3 of { y3^2 + f2(s2) }

Since the function y2^2 + (s2 - y2)^2 attains its minimum when y2 = s2/2, we get

    f2(s2) = s2^2/2

By Bellman's principle of optimality,

    f3(s3) = min over y3 of { y3^2 + (s3 - y3)^2/2 }

With s3 = 15, the minimum value of the function y3^2 + (15 - y3)^2/2 occurs at y3 = 5, so that

    f3(15) = 25 + 50 = 75

Retracing the suboptimization steps,

    s3 = 15  gives  y3* = 5
    s2 = s3 - y3* = 15 - 5 = 10  gives  y2* = s2/2 = 5
    s1 = s2 - y2* = 10 - 5 = 5  gives  y1* = s1 = 5

Hence the optimal value is (5, 5, 5) with f_min(15) = 75.

Problem 2: Find the value of Max (y1 y2 y3) subject to y1 + y2 + y3 = 5, y1, y2, y3 ≥ 0.

Solution: Here the state variables are

    s3 = y1 + y2 + y3 = 5
    s2 = s3 - y3 = y1 + y2
    s1 = s2 - y2 = y1

Also f1(s1) = y1 = s2 - y2, and

    f2(s2) = max over y2 of [y2 f1(s2 - y2)] = max over y2 of [y2 (s2 - y2)]
    f3(s3) = max over y3 of [y3 f2(s3 - y3)]

Using differential calculus, we get that y2 (s2 - y2) is maximum when y2 = s2/2, so that

    f2(s2) = (s2/2)(s2 - s2/2) = s2^2/4

By Bellman's principle of optimality,

    f3(s3) = max over y3 of [y3 (s3 - y3)^2/4]

Again, using calculus, we get y3 = s3/3 = 5/3. Also y2 = 5/3 and y1 = 5/3, and

    max y1 y2 y3 = (5/3)^3 = 125/27
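Both recurrences can be checked numerically. In the sketch below the helper names are ours (not from the text), and a grid search over each stage variable replaces the calculus argument; each call builds f_i(s) from f_(i-1) exactly as in the recurrence relations above.

```python
# Numerical check of Problems 1 and 2 (helper names are ours; a grid search
# over each stage variable replaces the calculus argument).

STEPS = 600

def backward(prev, combine, pick):
    """One recursion step: f_i(s) = pick over y in [0, s] of combine(y, f_(i-1)(s - y))."""
    def f(s):
        return pick(combine(s * k / STEPS, prev(s - s * k / STEPS))
                    for k in range(STEPS + 1))
    return f

# Problem 1: min y1^2 + y2^2 + y3^2 with y1 + y2 + y3 = 15 (the bound binds)
add_sq = lambda y, v: y * y + v
f3 = backward(backward(lambda s: s * s, add_sq, min), add_sq, min)
print(f3(15.0))            # -> 75.0, attained at y1 = y2 = y3 = 5

# Problem 2: max y1*y2*y3 with y1 + y2 + y3 = 5
times = lambda y, v: y * v
g3 = backward(backward(lambda s: s, times, max), times, max)
print(round(g3(5.0), 4))   # -> 4.6296, i.e. 125/27 at y1 = y2 = y3 = 5/3
```

The grid happens to contain the exact optima (y = 5 and y = 5/3), so the printed values agree with the analytical answers 75 and 125/27.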


Problem 3: A small machine tool manufacturing company entered into a contract to supply 80 drilling machines at the end of the first month and 120 at the end of the second month. The cost of manufacturing x drilling machines in any month is given by Rs. (50x + 0.2x^2), where x denotes the number of drilling machines manufactured in that month. If the company manufactures more units than needed in the first month, there is an inventory carrying cost of Rs. 8 for each unit carried to the next month. Find the number of drilling machines to be manufactured in each month to minimize the total cost. Assume that the company has enough facilities to manufacture up to 200 drilling machines per month and that there is no initial inventory. Solve the problem by dynamic programming.

Solution: The problem can be stated as follows:

    Minimize f(x1, x2) = (50 x1 + 0.2 x1^2) + (50 x2 + 0.2 x2^2) + 8 (x1 - 80)
    subject to  x1 ≥ 80
                x1 + x2 = 200
                x1 ≥ 0, x2 ≥ 0

where x1 and x2 indicate the number of drilling machines manufactured in the first month and the second month, respectively. To solve this problem, start from the second month and go backwards. If I2 is the inventory at the beginning of the second month, the optimum number of drilling machines to be manufactured in the second month is given by

    x2* = 120 - I2                                                (1)

and the cost incurred in the second month by

    f2 (x2*, I2) = 8 I2 + 50 x2* + 0.2 x2*^2                      (2)

Using (1), equation (2) becomes

    f2 (x2*, I2) = 8 I2 + 50 (120 - I2) + 0.2 (120 - I2)^2
                 = 0.2 I2^2 - 90 I2 + 8880

Since the inventory at the beginning of the first month is zero, the cost involved in the first month is given by

    R1(x1) = 50 x1 + 0.2 x1^2

Thus the total cost involved is given by

    f2 (I2, x1) = (50 x1 + 0.2 x1^2) + (0.2 I2^2 - 90 I2 + 8880)  (3)

But the inventory at the beginning of the second month is related to x1 as

    I2 = x1 - 80                                                  (4)

Equations (3) and (4) lead to

    f = (50 x1 + 0.2 x1^2) + 0.2 (x1 - 80)^2 - 90 (x1 - 80) + 8880
      = 0.4 x1^2 - 72 x1 + 17360

Since f is a function of x1 only, the optimum value of x1 can be obtained from

    df/dx1 = 0.8 x1 - 72 = 0, or x1* = 90

As d^2f/dx1^2 = 0.8 > 0, the value x1* = 90 corresponds to the minimum of f. Thus the optimum solution is given by x1* = 90 and x2* = 110, with a minimum total cost of 0.4 (90)^2 - 72 (90) + 17360 = Rs. 14,120.

Problem 4: Maximize p1 p2 p3 subject to p1 + p2 + p3 = 10 and p1, p2, p3 > 0.

Solution: Since the decision variables are p1, p2, p3, the given problem is a three-stage problem, defined as follows:

    y3 = p1 + p2 + p3 = 10
    y2 = p1 + p2 = y3 - p3
    y1 = p1 = y2 - p2

Therefore the recursive functions are obtained as follows:

    f1(y1) = max p1 = y1                                          (1)
    f2(y2) = max over p2 of [p2 f1(y2 - p2)]                      (2)
    f3(y3) = max over p3 of [p3 f2(y3 - p3)]                      (3)

By using (1), f2 can be expressed as

    f2(y2) = max over p2 of [p2 (y2 - p2)]

By calculus, p2 (y2 - p2) is maximum when p2 = y2/2, so that

    f2(y2) = (y2/2)(y2 - y2/2) = y2^2/4

Considering y3 = 10,

    f3(10) = max over p3 of [p3 f2(10 - p3)] = max over p3 of [p3 (10 - p3)^2/4]

which is maximum when p3 = 10/3. Then, working back,

    p1 = p2 = p3 = 10/3

Hence the maximum of the objective function is (10/3)^3 = 1000/27.
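Both answers can be verified by direct search. In the sketch below all names are ours (not from the text): Problem 3 is a one-variable scan over the first-month quantity x1 (x2 = 200 - x1 is fixed by the total demand), and Problem 4 is a grid over p1 and p2 with p3 implied by the equality constraint.

```python
# Direct numerical checks of Problems 3 and 4 (all names here are ours).

def month_cost(x):
    """Cost of manufacturing x drilling machines in one month: Rs. 50x + 0.2x^2."""
    return 50 * x + 0.2 * x * x

def total_cost(x1):
    """Problem 3: both months' manufacturing cost plus Rs. 8 per carried unit."""
    return month_cost(x1) + 8 * (x1 - 80) + month_cost(200 - x1)

x1_best = min(range(80, 201), key=total_cost)
print(x1_best, 200 - x1_best, round(total_cost(x1_best), 2))  # 90 110 14120.0

# Problem 4: grid search over p1, p2 (p3 = 10 - p1 - p2 is implied)
best = max(p1 * p2 * (10 - p1 - p2)
           for p1 in (k / 30 for k in range(301))
           for p2 in (k / 30 for k in range(301))
           if p1 + p2 <= 10)
print(round(best, 2))                                         # 37.04 ~ (10/3)^3
```

The scan confirms the production plan (90, 110) with total cost Rs. 14,120, and the product check confirms the maximum 1000/27 ≈ 37.04 at p1 = p2 = p3 = 10/3.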
Problem 5: Maximize f(x1, x2) = x1 + 9 x2

    subject to  2 x1 + x2 ≤ 25
                x2 ≤ 11
                x1 ≥ 0, x2 ≥ 0

Solution: Since n = 2 and m = 2, this problem can be considered as a two-stage dynamic programming problem with two state parameters. The first stage problem is to find the maximum of f1:

    Max f1 (β1, β2, x1) = max over 0 ≤ x1 ≤ β of (x1)

where β1 and β2 are the resources available for allocation at stage 1 and x1 is a nonnegative value that satisfies the side constraint 2 x1 ≤ β1 (the second constraint does not involve x1). Here β1 = 25 - x2 and β2 = 11 - x2, and hence the maximum value β that x1 can assume is given by

    β = x1* = (25 - x2)/2                                         (1)

Thus f1* = x1* = (25 - x2)/2.

The second stage problem is to find the maximum of f2:

    Max f2 (β1, β2) = max over 0 ≤ x2 ≤ β of [9 x2 + (25 - x2)/2]  (2)

where β1, β2 are the resources available for allocation at stage 2, which are equal to 25 and 11 respectively. The maximum value that x2 can assume without violating any constraint is given by

    β = min (25/1, 11) = 11

Thus equation (2) can be restated as

    Max f2 (25, 11) = max over 0 ≤ x2 ≤ 11 of [9 x2 + (25 - x2)/2] = max over 0 ≤ x2 ≤ 11 of [12.5 + 8.5 x2]

Since 12.5 + 8.5 x2 is increasing in x2, the maximum occurs at x2* = 11:

    Max f2 (25, 11) = 12.5 + 8.5 (11) = 106

From (1), x1* = (25 - 11)/2 = 7.

Thus the optimal solution is x1* = 7, x2* = 11 and f_max = 106.

Problem 6: Maximize f(x1, x2) = 3 x1 + 4 x2

    subject to  2 x1 + x2 ≤ 40
                2 x1 + 5 x2 ≤ 180
                x1 ≥ 0, x2 ≥ 0

Solution: Since n = 2 and m = 2, this problem can be considered as a two-stage dynamic programming problem with two state parameters. The first stage problem is to find the maximum of f1:

    Max f1 (β1, β2, x1) = max over 0 ≤ x1 ≤ β of (3 x1)

where β1 and β2 are the resources available for allocation at stage 1 and x1 is a nonnegative value that satisfies the side constraints 2 x1 ≤ β1 and 2 x1 ≤ β2. Here β1 = 40 - x2 and β2 = 180 - 5 x2, and hence the maximum value β that x1 can assume is given by

    β = x1* = min ( (40 - x2)/2, (180 - 5 x2)/2 )                 (1)

Thus

    f1* = 3 x1* = 3 min ( (40 - x2)/2, (180 - 5 x2)/2 )

The second stage problem is to find the maximum of f2:

    Max f2 (β1, β2) = max over 0 ≤ x2 ≤ β of [4 x2 + 3 min ( (40 - x2)/2, (180 - 5 x2)/2 )]  (2)

where β1, β2 are the resources available for allocation at stage 2, which are equal to 40 and 180 respectively. The maximum value that x2 can assume without violating any constraint is given by

    β = min (40/1, 180/5) = 36

Clearly

    min ( (40 - x2)/2, (180 - 5 x2)/2 ) = (40 - x2)/2     if 0 ≤ x2 ≤ 35
                                        = (180 - 5 x2)/2  if 35 ≤ x2 ≤ 36

so that

    Max f2 (40, 180) = max { max over 0 ≤ x2 ≤ 35 of (60 + 2.5 x2), max over 35 ≤ x2 ≤ 36 of (270 - 3.5 x2) }
                     = 147.5 at x2* = 35

From equation (1),

    x1* = min ( (40 - 35)/2, (180 - 175)/2 ) = 2.5

Thus the optimal solution is x1* = 2.5, x2* = 35 and f_max = 147.5.
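Problems 5 and 6 follow the same two-stage pattern, which can be captured in a single routine. In the sketch below the function and argument names are ours (not from the text), and a grid over x2 stands in for the algebraic case analysis; stage 1 uses the closed-form x1* = min over binding rows of (r - b x2)/a.

```python
# Generic two-variable allocation check (names are ours): maximize
# c1*x1 + c2*x2 subject to a*x1 + b*x2 <= r for each row (a, b, r).

def two_stage_max(c1, c2, rows, steps=3600):
    """Two-stage search: closed-form stage 1 in x1, grid search over x2."""
    x2_hi = min(r / b for a, b, r in rows if b > 0)   # largest feasible x2

    def x1_star(x2):
        # stage 1: largest x1 allowed by the resources remaining after x2
        return min((r - b * x2) / a for a, b, r in rows if a > 0)

    candidates = []
    for k in range(steps + 1):
        x2 = x2_hi * k / steps
        candidates.append((c1 * x1_star(x2) + c2 * x2, x2))
    f, x2 = max(candidates)
    return x1_star(x2), x2, f

print(two_stage_max(1, 9, [(2, 1, 25), (0, 1, 11)]))   # -> (7.0, 11.0, 106.0)
print(two_stage_max(3, 4, [(2, 1, 40), (2, 5, 180)]))  # -> (2.5, 35.0, 147.5)
```

With 3600 steps the grid contains the exact optima x2 = 11 and x2 = 35, so both calls reproduce the analytical solutions of Problems 5 and 6.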

Problem 7: Maximize z = 3 x1 + 5 x2

    subject to the constraints
                x1 ≤ 4
                x2 ≤ 6
                3 x1 + 2 x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0

Solution: Since n = 2 and m = 3, this problem can be considered as a two-stage dynamic programming problem with three state parameters. The first stage problem is to find the maximum value of f1:

    Max f1 (β1, β2, β3, x1) = max over 0 ≤ x1 ≤ β of (3 x1)

where β1, β2 and β3 are the resources available for allocation at stage 1 and x1 is a nonnegative value that satisfies the side constraints x1 ≤ β1, 0·x1 ≤ β2 and 3 x1 ≤ β3. Here β1 = 4, β2 = 6 - x2 and β3 = 18 - 2 x2, and hence the maximum value β that x1 can assume is given by

    β = x1* = min ( 4, (18 - 2 x2)/3 )                            (1)

The second stage problem is to find the maximum value of f2:

    Max f2 (4, 6, 18) = max over 0 ≤ x2 ≤ 6 of [5 x2 + 3 min ( 4, (18 - 2 x2)/3 )]

where 4, 6 and 18 are the resources available for allocation at stage 2. Clearly

    min ( 4, (18 - 2 x2)/3 ) = 4               if 0 ≤ x2 ≤ 3
                             = (18 - 2 x2)/3   if 3 ≤ x2 ≤ 6

so that

    Max f2 (4, 6, 18) = max { max over 0 ≤ x2 ≤ 3 of (5 x2 + 12), max over 3 ≤ x2 ≤ 6 of (3 x2 + 18) }
                      = 5 (6) + 3 (2)
                      = 36 at x2* = 6

From (1), x1* = min ( 4, (18 - 12)/3 ) = 2.

The optimal solution is x1* = 2, x2* = 6 and f_max = 36.

Summary

This chapter explains the optimization technique of dynamic programming (DP). The successful application of DP often depends more on the modeling of the problem than on the mathematical aspects of its solution, and DP is not the panacea of all solution procedures; like any other method, it complements the other optimization techniques. In general, any optimization problem whose decisions can be made in stages is a candidate for DP, and the principle of optimality provides the framework for decomposing such problems. Although the principle of optimality governs how each stage is optimized, DP does not possess a standard mathematical formulation: the methods vary with the problem, from simple tabular calculations to calculus applications, although all rest on the same recursive principle. DP is very useful for problems that require a sequence of interrelated decisions, but it requires considerable ingenuity in formulating each individual problem.