03 - LP Introduction and Graphical Method MOO

The document provides an overview of Linear Programming (LP) and its applications in operations research, focusing on methods for optimizing resource allocation. It details the formulation of LP problems, including objective functions and constraints, and presents a graphical method for finding optimal solutions. Additionally, it discusses multi-objective optimization techniques and the concept of Pareto optimality in decision-making.


Operations Research and Optimization Techniques

Introduction to Linear Programming – Graphical method

Nagaraj Shanbhog, M.Tech. Ph.D.


Mechanical Engineering Department
Faculty of Technology and Engineering
The M. S. University of Baroda
2023

Operations Research – Linear Programming
What is Linear Programming?

Linear – all the mathematical functions are linear.
Programming – involves the planning of activities.
Linear Programming provides methods for allocating limited resources among competing activities in an optimal way.
Linear Programming
LP review: Definitions
Linear programming problem:
– the problem of maximizing or minimizing a linear function of a finite number of variables
– subject to a finite number of linear constraints: ≤, ≥, or = constraints

Generic statement of the LP problem:

  max/min f(x) = c1x1 + c2x2 + … + cnxn        (objective function, linear)
  subject to  ai1x1 + ai2x2 + … + ainxn {≤, =, ≥} bi,  ∀ i = 1, …, m   (m constraints, all linear)

Feasible point: x ∈ Rn such that x satisfies all constraints.
Feasible region: the set of all feasible points,
  P = {x ∈ Rn : x satisfies all constraints}
which is called a polyhedron.
Linear Programming
LP: more definitions
Decision variables: should completely describe all decisions to be made.

  max/min f(x) = c1x1 + c2x2 + … + cnxn        (objective function)

  subject to  a11x1 + a12x2 + … + a1nxn ≤ b1
              a21x1 + a22x2 + … + a2nxn ≥ b2     (set of constraints)
              a31x1 + a32x2 + … + a3nxn = b3

Optimal solution: a feasible solution with the best (max/min) objective-function value.
Optimal value: the objective-function value at an optimal solution.
Example (The Reddy Mikks Company) (Ref.: Taha, solved example)
- Reddy Mikks produces both interior and exterior paints from two raw materials, M1 and M2.

                            Tons of raw material per ton of
                            Exterior paint   Interior paint   Maximum daily availability (tons)
  Raw material M1                 6                4                        24
  Raw material M2                 1                2                         6
  Profit per ton (Rs.'000)        5                4

- Daily demand for interior paint cannot exceed that of exterior paint by more than 1 ton.
- Maximum daily demand of interior paint is 2 tons.
- Reddy Mikks wants to determine the optimum product mix of interior and exterior paints that maximizes the total daily profit.
Formulation – Mathematical model
Let x1 = tons produced daily of exterior paint
    x2 = tons produced daily of interior paint
Let z represent the total daily profit (in thousands of rupees).
Objective function: Maximize z = f(x1, x2) = 5x1 + 4x2
(Usage of a raw material by both paints) ≤ (Maximum raw material availability)
Usage of raw material M1 per day = 6x1 + 4x2 tons
Usage of raw material M2 per day = x1 + 2x2 tons
- Daily availability of raw material M1 is 24 tons
- Daily availability of raw material M2 is 6 tons
  6x1 + 4x2 ≤ 24   (raw material M1)
  x1 + 2x2 ≤ 6     (raw material M2)
- The daily demand of interior paint (x2) cannot exceed that of exterior paint (x1) by more than 1 ton:
  x2 - x1 ≤ 1
- Maximum daily demand of interior paint is 2 tons:
  x2 ≤ 2
- Decision variables x1 and x2 cannot take negative values, so x1 ≥ 0, x2 ≥ 0
  (implicit non-negativity constraints).
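The formulation above can be checked numerically. A minimal sketch using `scipy.optimize.linprog` (assuming SciPy is available; `linprog` minimizes, so the profit coefficients are negated):

```python
from scipy.optimize import linprog

# Reddy Mikks model: maximize z = 5*x1 + 4*x2; linprog minimizes, so negate c.
c = [-5, -4]
A_ub = [
    [6, 4],   # 6x1 + 4x2 <= 24  (raw material M1)
    [1, 2],   # x1 + 2x2 <= 6    (raw material M2)
    [-1, 1],  # x2 - x1 <= 1     (demand difference)
    [0, 1],   # x2 <= 2          (max interior-paint demand)
]
b_ub = [24, 6, 1, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal mix (3.0, 1.5) with z = 21.0
```

This reproduces the optimum derived graphically later in these slides: 3 tons of exterior and 1.5 tons of interior paint, with z = 21 (Rs. 21,000).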
Formulation – Mathematical model
Assumptions of LP
In short, an LP must have a linear objective function and linear constraints. Implicitly this translates to:
• Proportionality – the contribution of each activity to Z or to a constraint is proportional to the level of the activity xj.
• Additivity – every function is the sum of the individual contributions of the activities.
• Divisibility – decision variables are allowed to take any value, including non-integer values.
• Certainty – the value assigned to each parameter is assumed to be a known constant.
GRAPHICAL LP SOLUTION
The graphical procedure includes two steps:
1) Determination of the feasible solution space.
2) Determination of the optimum solution from among all the feasible points in the solution space.
Step 1: Determination of the feasible solution space – change all inequalities to equalities:

  6x1 + 4x2 ≤ 24   →   6x1 + 4x2 = 24   (1)
  x1 + 2x2 ≤ 6     →   x1 + 2x2 = 6     (2)
  x2 - x1 ≤ 1      →   x2 - x1 = 1      (3)
  x2 ≤ 2           →   x2 = 2           (4)
  x1 ≥ 0           →   x1 = 0           (5)
  x2 ≥ 0           →   x2 = 0           (6)
GRAPHICAL LP SOLUTION
• Plot the graphs of x1 = 0 and x2 = 0 (the axes).
• Plot the graph of 6x1 + 4x2 = 24 using two points on the line (e.g., its intercepts).
• Plot the graph of x1 + 2x2 = 6 in the same way.
• Plot the graph of x2 - x1 = 1 in the same way.
• Plot the graph of x2 = 2 in the same way.
- Now include the inequality for each of the 6 equations.
- Each inequality divides the (x1, x2) plane into two half-spaces, one on each side of the graphed line.
- Only one of these two halves satisfies the inequality.
- To determine the correct side, choose (0, 0) as a reference point.
- If (0, 0) satisfies the inequality, the side containing (0, 0) is the feasible half-space; otherwise, the other side is.
- If the line happens to pass through the origin (0, 0), any other point can be used to find the feasible half-space.
• Feasible solution: a solution for which all constraints are satisfied. The set of all feasible solutions forms the feasible region.
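The reference-point test can be sketched as a one-line check (an illustrative helper, not from the slides):

```python
# Reference-point test for a constraint a1*x1 + a2*x2 <= b:
# the side of the boundary line containing the origin is feasible
# exactly when (0, 0) itself satisfies the inequality.
def feasible_side_contains_origin(a1, a2, b):
    return a1 * 0 + a2 * 0 <= b   # i.e. simply 0 <= b

print(feasible_side_contains_origin(6, 4, 24))  # 6x1 + 4x2 <= 24: origin side feasible
print(feasible_side_contains_origin(-1, 1, 1))  # x2 - x1 <= 1: origin side feasible
```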
GRAPHICAL LP SOLUTION
Step 2: Determination of the optimum solution from among all the feasible points in the solution space:
- After finding the feasible half-spaces of all 6 constraints, the feasible space is the region bounded by the line segments joining the corner points A, B, C, D, E and F.
- Any point within or on the boundary of the solution space ABCDEF is feasible, as it satisfies all the constraints.
- The feasible space ABCDEF consists of an infinite number of feasible points.
Convex polyhedron: a set of points S is a convex set if the line segment joining any pair of points in S is wholly contained in S. The feasible region is such a set.
The optimal feasible solution, if it exists, will occur at one or more of the corner points.
GRAPHICAL LP SOLUTION
- To find the optimum solution, identify the direction in which the profit z = 5x1 + 4x2 increases.
- Assign increasing trial values to z, e.g. z = 10 and z = 15:
  5x1 + 4x2 = 10
  5x1 + 4x2 = 15
- Plot the graphs of these two equations (parallel isoprofit lines).
- Sliding the isoprofit line in the increasing direction, the optimum occurs at corner point C of the solution space.
- Any further increase in z beyond corner point C would put points outside the boundaries of the feasible space ABCDEF.
- The values of x1 and x2 at the optimum corner point C are determined by solving equations (1) and (2) simultaneously:
  6x1 + 4x2 = 24   (1)
  x1 + 2x2 = 6     (2)
  x1 = 3 and x2 = 1.5, with z = 5(3) + 4(1.5) = 21
- So the daily product mix of 3 tons of exterior paint and 1.5 tons of interior paint produces a daily profit of Rs. 21,000.
A binding constraint is one where some optimal solution lies on the line for the constraint; if this constraint were changed slightly (in a certain direction), that optimal solution would no longer be feasible. A non-binding constraint is one where no optimal solution lies on the line for the constraint.
GRAPHICAL LP SOLUTION
• An important characteristic of the optimum LP solution is that it is always associated with a corner point of the solution space (where two lines intersect).
• This is true even if the objective function happens to be parallel to a constraint.
- For example, if the objective function is z = 6x1 + 4x2, it is parallel to constraint (1).
- The optimum then occurs at either corner point B or corner point C.
- In fact, any point on the line segment BC is an alternative optimum.
- The line segment BC is totally defined by the corner points B and C.
GRAPHICAL LP SOLUTION
A second approach to solving LP problems employs the corner-point method. It involves evaluating the profit at every corner point of the feasible region. The mathematical theory behind LP is that the optimal solution must lie at one of the corner points (extreme points) of the feasible region.
Since the optimum LP solution is always associated with a corner point of the solution space, the optimum can be found by enumerating all the corner points:

  Corner point   (x1, x2)    z
  A              (0, 0)      0
  B              (4, 0)     20
  C              (3, 1.5)   21   ← optimum solution
  D              (2, 2)     18
  E              (1, 2)     13
  F              (0, 1)      4

- As the number of constraints and variables increases, the number of corner points also increases.
Limitations of the graphical method:
- Limited to two variables, at most three.
- For more than three variables, visualizing and plotting is not possible.
- Analytical methods are available instead to solve such LP problems.
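The corner-point enumeration above can be automated for the two-variable case: intersect every pair of constraint boundary lines, keep the feasible intersections, and evaluate z at each. A stdlib-only sketch (helper names are illustrative):

```python
from itertools import combinations

# Constraints of the Reddy Mikks LP, stored as (a1, a2, b) meaning a1*x1 + a2*x2 <= b.
constraints = [
    (6, 4, 24),    # raw material M1
    (1, 2, 6),     # raw material M2
    (-1, 1, 1),    # x2 - x1 <= 1
    (0, 1, 2),     # x2 <= 2
    (-1, 0, 0),    # x1 >= 0
    (0, -1, 0),    # x2 >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system given by two constraint boundary lines (Cramer's rule)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:            # parallel lines: no unique intersection
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in constraints)

# All feasible pairwise intersections are exactly the corner points A..F.
corners = {p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) and feasible(p)}
best = max(corners, key=lambda p: 5 * p[0] + 4 * p[1])
print(sorted(corners))                       # the six corner points of ABCDEF
print(best, 5 * best[0] + 4 * best[1])       # (3.0, 1.5) with z = 21.0
```

This recovers the six corners A–F of the table and the optimum C = (3, 1.5), z = 21.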
Multi-objective Optimization (MOO) problem solution approaches
• Traditional approaches (non-Pareto techniques) – a priori:
  The preferences (relative importance of the objectives) are known before the solution.
• Aggregating approaches:
  Aggregate the objectives into a single, parameterized objective function and perform several runs with different parameter settings to obtain a set of solutions.
  – Weighted-sum method:
    • Convert multiple objectives into one single objective using weights and summation.
    • Determine the importance of each objective function by assigning it a weight (w).
    • Add up all the functions:
      min Z = w1 z1 + w2 z2 + … + wn zn
  – Goal programming approach:
    Set a target level for each objective rather than a maximized or minimized function. Goals are soft constraints.
    The objective is then to minimize the deviation from the goals.
Multi-objective Optimization (MOO) problem solution approaches
Posterior approach solutions
For conflicting objectives, the classical optimality concept may be inappropriate.
So, what does "optimal" mean in this case?
A concept that can measure solutions against multiple, conflicting objectives is the non-inferiority concept.
Terms used:
  Dominance (mathematical programmers)
  Efficiency (statisticians and economists)
  Pareto optimality (welfare economists)
A solution is non-inferior, or non-dominated, if there exists no other feasible solution with better performance with respect to any objective without having worse performance in at least one other objective.
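The non-inferiority test can be written directly from this definition. A short sketch for minimization objectives (the example points are the cost/time values used later in these slides):

```python
def dominates(a, b):
    """True if solution a dominates b (minimization): a is no worse than b in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Points in (cost, time) space; the last one is dominated by (4000, 16).
pts = [(4000, 16), (5500, 15), (7000, 14), (8500, 13), (9000, 16)]
print(non_dominated(pts))  # the first four points: the non-dominated trade-off set
```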
Multi-objective Optimization
• Optimization of multiple (more than one) objective functions at the same time.
• The optimal solutions corresponding to each objective differ, because the objective functions are often conflicting (competing) with each other.
• No one solution can be considered better than any other with respect to all objective functions.
• This leads to the non-dominated solution concept.
• We obtain a set of trade-off optimal solutions instead of one optimal solution, generally known as "Pareto-optimal" solutions (named after the Italian economist Vilfredo Pareto, 1848–1923).
  Pareto noticed that many economic solutions helped some people while hurting others. He was interested in finding solutions that helped some people without hurting anyone else. Solutions like this are now called "Pareto improvements."
• The evaluation of solutions in MOO is different: in seeking an optimal or best overall solution, the goal is to quantify the degree of conflict (trade-off).
Multi-objective Optimization – Posterior approach solutions
Pareto-optimal solutions
• Multi-objective optimization leads to a set of optimal solutions (known as Pareto-optimal solutions), since no solution can be considered better than any other regarding all the objectives.
• Pareto front: the curve formed by joining all Pareto-optimal solutions in function space (objective space, also called criterion space). The corresponding points in variable space (decision space, also called design space) map onto this front.
• Find all the Pareto-optimal solutions (a set of non-dominated solutions).
• The decision maker then has the option to explore this set to select a suitable solution.
Decision space vs. objective space.
Multi-objective Optimization – Which solutions are optimal?
Posteriori approach solutions
Concept of domination: dominated and non-dominated solutions
• Non-dominated: given two objectives, a solution is non-dominated when no other solution is better with respect to both objectives.
• Dominated: a solution is dominated when it can be improved in at least one objective without compromising the other objective.
A solution x(1) is said to dominate another solution x(2) if both conditions (a) and (b) are true:
(a) The solution x(1) is no worse than x(2) in all objectives.
(b) The solution x(1) is strictly better than x(2) in at least one objective.
Examples (from the figure): solution 3 dominates 2 and 1; solution 5 dominates 2 and 4; solution 3 does not dominate 5.
We seek the non-dominated solutions.
Multi-objective Optimization
The Pareto-optimal set consists of the solutions that cannot be improved with respect to all objective functions at the same time.
• The set depends on the type of objectives (minimization or maximization).
• The definition of domination takes care of the possibilities.
• The Pareto front always lies on the boundary of the feasible region.
• Evolutionary Multi-objective Optimization (EMO) algorithms are used to find Pareto-optimal fronts.
Multi-objective Linear Programming problem – Pareto-optimal solutions
  Total cost:  Minimize Z1 = f1(x) = 2000x1 + 500x2
  Total time:  Minimize Z2 = f2(x) = x1 + 2x2
  Subject to:  2x1 + 2x2 ≥ 16   (1)
               x1 + 5x2 ≥ 28    (2)
If there were only one point that minimized both cost and time, that would obviously be the solution; but often there are different optimal points for each objective.
In variable space (decision/design space), the optimal-cost point is A(0, 8) with Z1 = cost = Rs. 4000 and time = 16 h; the optimal-time point is B(3, 5) with Z2 = time = 13 h and cost = Rs. 8500.

  Point   x1   x2   f1 (cost)   f2 (time)
  A       0    8    4000        16
  C       ?    ?    5500        15
  D       ?    ?    7000        14
  B       3    5    8500        13

The line segment AB between the two optimal points represents the non-dominated solutions. In function space (objective/criterion space), this segment is the 'Pareto front', running from A(4000, 16) to B(8500, 13) through C and D. Any point on this front is considered "Pareto optimal": by moving along the curve you can reduce cost at the expense of time, or reduce time at the expense of cost, but you cannot improve both at once.
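A weighted-sum sweep over this cost/time LP can be sketched with `scipy.optimize.linprog` (assuming SciPy is available; the 500× scaling of the time objective is an illustrative normalization choice, not from the slides — without it the cost term swamps the weighted sum):

```python
from scipy.optimize import linprog

# min Z1 = 2000*x1 + 500*x2 (cost), min Z2 = x1 + 2*x2 (time);
# >= constraints are negated for linprog's A_ub x <= b_ub convention.
A_ub = [[-2, -2], [-1, -5]]
b_ub = [-16, -28]
bounds = [(0, None), (0, None)]

def solve(w):
    # Combined objective: w * cost + (1 - w) * 500 * time (500 = scale factor).
    c = [w * 2000 + (1 - w) * 500 * 1, w * 500 + (1 - w) * 500 * 2]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return tuple(round(v, 3) for v in res.x)

print(solve(1.0))  # pure cost  -> point A (0, 8)
print(solve(0.0))  # pure time  -> point B (3, 5)
```

The two extreme weights recover the single-objective optima A and B; intermediate weights land on corner points of the Pareto front between them.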
Example: Single-objective NL constrained optimization – GRAPHICAL SOLUTION
  Subject to  -x1 - x2 + 10 ≤ 0    (1)
              -2x1 + 3x2 - 10 ≤ 0  (2)
Solution:
The figure is a graphical representation of the problem. The feasible set S is convex, as shown in the figure.
A few objective-function contours are also shown. It is seen that the problem has a distinct minimum at the point A(4, 6) with the objective-function value f1(4, 6) = 5. At the minimum point, both constraints are active.
Point A represents the unique global minimum for f1.
Multi-objective Optimization problem – Graphical solution
  Subject to  -x1 - x2 + 10 ≤ 0    (1)
              -2x1 + 3x2 - 10 ≤ 0  (2)
The figure is a modification of Figure 17.1, where the contours of the second objective function are also shown.
The minimum value of f2 is 3.25 at B(5.5, 7). Point B is the unique global minimum for f2.
The minimum points for the two objective functions are different.
Therefore, if one wishes to minimize f1 and f2 simultaneously, pinpointing a single optimum point is not possible.
In fact, there are infinitely many possible solution points, called the Pareto-optimal set, explained below. The challenge is to find a solution that suits the requirements of the designer.
Multi-objective Optimization problem
Variable space (decision/design space) and function space (objective/criterion space):

  Point   x1    x2   f1      f2
  A       4     6    5       6.5
  C       ?     ?    ?       ?
  D       ?     ?    ?       ?
  B       5.5   7    16.25   3.25

(Figure: in variable space, the contours f1 = 5 and f2 = 6.5 pass through A(4, 6), and f1 = 16.25 and f2 = 3.25 pass through B(5.5, 7). In function space, A maps to (5, 6.5) and B maps to (16.25, 3.25), with C and D lying between them along the Pareto front.)
Multi-objective Optimization – Posteriori approach solutions
• Non-dominated: given two objectives, a solution is non-dominated when neither of two solutions is better than the other with respect to both objectives.
• Dominated: when solution a is no worse than b in all objectives, and solution a is strictly better than b in at least one objective, then solution a dominates solution b.
• Weakly dominated: solution a weakly dominates b when a is no worse than b in all objectives.
GRAPHICAL LP SOLUTION
Example:
The Local Bag Manufacturing Company manufactures standard school and office bags.
The products require (1) cutting of material (outer and inner) and (2) stitching and finishing.
The formulated product-mix LP model is:
  Maximize profit z = 50X1 + 120X2
  subject to the cutting-hours and stitching-hours constraints plotted in the figure
  (2X1 + 4X2 ≤ 80 cutting hours and 3X1 + 1X2 ≤ 60 stitching hours, the limits used in the sensitivity slides that follow).
Optimal solution at point A(0, 20):
  X1 = 0 school bags
  X2 = 20 office bags
  Profit = Rs. 2,400
The profit line for 50X1 + 120X2 passes through point A; B and C are the other corner points of the feasible region.
Sensitivity Analysis
• Optimal solutions to LP problems have so far been found under what are called deterministic assumptions.
• This means we assume complete certainty in the data and relationships of a problem.
• But in the real world, conditions are dynamic and changing.
• We can analyze how sensitive a deterministic solution is to changes in the assumptions of the model.
• This is called sensitivity analysis, postoptimality analysis, or parametric programming.
Sensitivity analysis often involves a series of what-if questions concerning changes in constraints, variable coefficients, and the objective-function coefficients. The answer is to use an analytic postoptimality analysis.
Sensitivity analysis means determining the effects of changes in parameters on the solution. It is also called what-if analysis, parametric analysis, post-optimality analysis, etc. It is not restricted to LP problems.
After a problem has been solved, we determine the range of changes in problem parameters that will not affect the optimal solution, or that will change the variables in the solution, without re-solving the entire problem.
Sensitivity analysis can be used to deal with errors in estimating input parameters to the LP model, and also with management's experiments with possible future changes in the firm that may affect profits.
Sensitivity Analysis
Changes in the Objective-Function Coefficients
• In real-life problems, contribution rates in the objective functions fluctuate periodically.
• Graphically, this means that although the feasible solution region remains exactly the same, the slope of the isoprofit or isocost line changes.
• We can often make modest increases or decreases in the objective-function coefficient of any variable without changing the current optimal corner point.
• We need to know how much an objective-function coefficient can change before the optimal solution moves to a different corner point.
For the bag example (maximize profit = 50X1 + 120X2):
Q: How much can the unit profit of office bags go up or down from 120 without changing the current optimal production quantities?
A: As long as the slope of the objective-function isoprofit line stays within the slopes of the binding constraints. In the figure, the profit line for 50X1 + 80X2 passes through point B, while the lines for 50X1 + 120X2 and 50X1 + 150X2 both pass through point A.
Sensitivity-analysis software also reports the allowable ranges of changes in objective-function coefficients.
Sensitivity Analysis
Changes in the Constraint Coefficients
• If the amount of resources needed to produce a product changes, the coefficients in the constraint equations change.
• This does not change the objective function, but it can produce a significant change in the shape of the feasible region.
• This may cause a change in the optimal solution.
(Figure, three panels: (a) the original problem with 3X1 + 1X2 ≤ 60 and 2X1 + 4X2 ≤ 80; (b) changing the circled coefficient to give 2X1 + 1X2 ≤ 60 leaves the optimal solution unchanged; (c) changing the circled coefficient to give 2X1 + 5X2 ≤ 80 changes the optimal solution, since the X2-intercept drops from 20 to 16.)
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints
• The right-hand-side values of the constraints often represent resources available.
• If additional resources were available, a higher total profit could be realized.
• Sensitivity analysis about resources helps answer questions such as:
  - How much should the company be willing to pay for additional hours?
  - Is it profitable to have some cutting work done in overtime?
  - Should we be willing to pay for more stitching and finishing time?
• If the right-hand side of a constraint is changed, the feasible region will change (unless the constraint is redundant), and often the optimal solution will change.
• The amount of change (increase or decrease) in the objective-function value that results from a unit change in one of the resources available is called the dual price or dual value.
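The dual price can be illustrated by a finite-difference check on the bag LP (a sketch assuming SciPy; the constraint data 2X1 + 4X2 ≤ 80 and 3X1 + 1X2 ≤ 60 are taken from the sensitivity slides):

```python
from scipy.optimize import linprog

# Bag LP: maximize 50*X1 + 120*X2
# s.t. 2X1 + 4X2 <= cutting hours, 3X1 + X2 <= stitching hours.
c = [-50, -120]          # negated: linprog minimizes
A = [[2, 4], [3, 1]]

def profit(cut_hours, stitch_hours=60):
    res = linprog(c, A_ub=A, b_ub=[cut_hours, stitch_hours],
                  bounds=[(0, None), (0, None)])
    return -res.fun

base = profit(80)
print(base)               # 2400.0, at the optimum (0, 20)
print(profit(81) - base)  # 30.0: the dual price of one extra cutting hour
```

Re-solving with one extra unit of the binding cutting resource raises the profit by exactly the dual price of 30 per hour, matching the 80 → 100 hour example on the next slides (extra 20 hours, extra profit 600).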
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints (continued)
• However, the amount of possible increase in the right-hand side of a resource is limited.
• If the number of hours increases beyond the upper bound (or decreases below the lower bound), the objective function no longer changes by the dual price:
  - There may be excess (slack) hours of a resource, or the objective function may change by an amount different from the dual price.
  - Thus, the dual price is relevant only within limits.
• If the dual value of a constraint is zero:
  - The slack is positive, indicating unused resource.
  - An additional amount of the resource will simply increase the amount of slack.
  - An upper limit of infinity indicates that adding more hours would simply increase the amount of slack.
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints (continued)
• Cutting is a binding constraint: increasing the resource enlarges the solution space and moves the optimal point.
• Stitching is a non-binding constraint: increasing the resource enlarges the solution space but does not move the optimal point.
• If the cutting hours are changed from 80 to 100, the new optimal solution is (0, 25) with a profit of 3,000. The extra 20 hours result in an increase in profit of 600, i.e. 30 per hour.
• Shadow price: the change in the objective-function value per one-unit increase in the right-hand side (RHS) of a constraint. In a business application, the shadow price is the maximum price we should pay for an extra unit of a given limited resource.
(Figure: the original cutting constraint through A(0, 20) and the changed cutting constraint, 100 hours, through A(0, 25); the profit line for 50X1 + 120X2 passes through point A in each case.)
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints (continued)
• If the cutting hours are decreased to 60, the new solution is (0, 15) and the profit is 1,800. Reducing the hours by 20 results in a 600 decrease in profit, again 30 per hour.
(Figure: the original problem with optimal point A(0, 20), and the changed cutting constraint, 60 hours, with new optimal point A(0, 15); the profit line for 50X1 + 120X2 passes through point A in each case.)
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints (continued)
• If the total cutting time were increased to 240, the optimal solution would be (0, 60) with a profit of 7,200. This is 2,400 (the original profit) + 30 (dual price) × 160 hours (240 − 80).
• If the hours increase beyond 240, the optimal solution remains (0, 60) and the profit does not increase further. The extra time is slack, during which the cutting resources are not working.
(Figure: the changed cutting constraint, 240 hours, through A(0, 60), the stitching constraint representing 60 hours, and the profit line for 50X1 + 120X2 through point A.)
Linear Programming problem
General characteristics and Terminology
• In an n-dimensional space described by the variables x1, …, xn, we have a "feasible region," which is a "polytope": a region whose boundaries are defined by linear constraints.
• In two dimensions, a polytope is a polygon (which can be unbounded), and linear constraints taken as equalities are straight lines.
• The points in n-dimensional space that obey a linear constraint as an equality define a hyperplane. In three dimensions a hyperplane is just a plane; in fact, a hyperplane is the generalization of a plane in 3D to higher dimensions. A hyperplane that actually forms part of the boundary of the feasible region is called an (n−1)-face of that region.
• The points that obey two constraints lie on the intersection of the hyperplanes, the (n−2)-dimensional region of the boundary of the feasible region; they form an (n−2)-face of it. Similar definitions hold for the intersection of k of the constraints that form part of the boundary of the feasible region: these are called (n−k)-faces of it.
• A zero-face of the feasible region, also called a vertex, is a point that lies on the intersection of n (linearly independent) constraints and is on the boundary of the feasible region. For two vertices sharing n−1 constraints, the intersection of the shared constraints is a 1-face called the edge joining them.
Linear Programming problem
General characteristics of the LP problem
• The feasible region is a convex set.
  – Convex region: given any two points from the region, the line segment between them is part of the region.
  – Each constraint generates at most one edge (face) of the feasible region.
• General problem definition: n variables and m constraints.
Linear Programming problem
General characteristics and Solution
• Extreme Point Theorem:
  If there exists an optimal solution to an LP, then an optimal solution exists at an extreme point (vertex).
  – So we only need to consider finitely many candidate solutions.
• Solution of the LP problem by enumeration:
  – For n variables satisfying m constraints, each vertex is the intersection of n constraint boundaries.
  – This gives a total of at most "m choose n" corners
    => complexity O(m^n) (exponential in n).
• For LP, we already have very efficient algorithms.
• Meta-heuristics are not needed.
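The "m choose n" bound can be evaluated with the stdlib to see how quickly enumeration blows up (the (n, m) pairs below are illustrative):

```python
from math import comb

# Upper bound on candidate corner points: C(m, n) for n variables, m constraints.
# For the Reddy Mikks problem (n = 2, m = 6) this is C(6, 2) = 15 pairwise
# intersections, of which only 6 turned out to be feasible corners.
for n, m in [(2, 6), (10, 50), (20, 100)]:
    print(f"n={n}, m={m}: C(m, n) = {comb(m, n)}")
```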
Polynomial and Exponential run time Algorithms
Big "O" notation: a theoretical measure of the execution of an algorithm given the problem size n, e.g. run time O(n^3). The growth of functions with input size n is usually described using big-O notation.

  f(n)     n=5      n=10          n=100             n=1000
  n        5        10            100               1000
  n^2      25       100           10000             1000000
  n^3      125      1000          1000000           10^9
  2^n      32       1024          1.27 x 10^30      1.07 x 10^301
  n!       120      3.6 x 10^6    9.33 x 10^157     4.02 x 10^2567

• Polynomial-time algorithm: the run time increases polynomially with the size of the problem.
• In exponential-time algorithms, the worst-case run time grows exponentially with the size of the input parameters.
• Why is a polynomial-time algorithm better than an exponential-time one? It is strongly preferred because it can handle arbitrarily large data; exponential-time algorithms have an explosive growth rate in terms of computations.
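The table above can be re-derived with exact integer arithmetic; counting decimal digits avoids float overflow for the largest entries (an illustrative sketch):

```python
import math

def digits(x):
    """Number of decimal digits of a positive integer."""
    return len(str(x))

# Digit counts of n^2, n^3, 2^n, and n! for the table's input sizes.
# E.g. 2^1000 has 302 digits (about 1.07 x 10^301), 1000! has 2568 digits.
for n in (5, 10, 100, 1000):
    print(n, digits(n**2), digits(n**3), digits(2**n), digits(math.factorial(n)))
```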
Polynomial and Exponential run time Algorithms
• Let's see some common functions, ordered by how fast they grow.
• Computer scientists divide these functions into two classes:
  Polynomial functions: any function that is O(n^k), i.e. bounded from above by n^k for some constant k.
  E.g. O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3).
  Exponential functions: the remaining functions, e.g. O(2^n), O(n!), O(n^n).
The growth of functions is usually described using big-O notation.
This is really a different definition of the word 'polynomial'. We defined a 'polynomial' to be any function of the form a_k n^k + a_(k-1) n^(k-1) + … + a_1 n + a_0.
But here the word 'polynomial' is used to lump together functions that are bounded from above by polynomials. So log n and n log n, which are not polynomials in the original sense, are polynomials by the alternative definition, because they are bounded from above by, e.g., n and n^2 respectively.
• Why have we lumped functions together into these two broad classes? The next slides explain.
Polynomial and Exponential run time Algorithms
On the basis of this classification of functions into polynomial and exponential, we can
classify algorithms:
 Polynomial-Time Algorithm: an algorithm whose order-of-magnitude time performance is
bounded from above by a polynomial function of n, where n is the size of its inputs.
 Exponential Algorithm: an algorithm whose order-of-magnitude time performance is not
bounded from above by a polynomial function of n.
• Why do we divide algorithms into these two broad classes? The next table, which
assumes that one instruction can be executed every microsecond, shows how quickly
exponential run times become impractical.
The efficiency of an algorithm depends on its use of resources, such as (1) the time it
takes the algorithm to execute; (2) the memory it uses for its variables.
Linear Programming problem
Algorithms to solve LP problems:
Simplex Algorithm (George B. Dantzig, 1914–2005; algorithm published 1947)
 Developed shortly after WW II in response to logistical problems.
 Moves from a BFS (basic feasible solution) to an adjacent BFS,
without decreasing the objective function (replacing one basic variable
with another).
 A BFS is optimal if no adjacent BFS is better.
 A practical solution method that moves from one
extreme point to a neighboring extreme point. Finds the
optimum (because the feasible region is a convex set).
 Challenge! The number of BFS can be exponential!
 Runtime: average-case time is polynomial (in practice,
roughly linear in the number of constraints); worst-case
time is exponential (visiting all feasible corners).
 Inefficient for very large problems.
Linear Programming problem
Algorithms to solve LP problems:
Ellipsoid algorithm (Leonid Khachiyan, 1979)
– Developed by Soviet mathematicians.
– Khachiyan was the first to apply it to LP, in 1979.
– The first polynomial-time algorithm discovered for linear
programming (until then, no polynomial-time algorithm
was known for LP).
– Runs in polynomial time: O(n^4 L) bit operations
(n = # variables, L = # bits in input).
– Of theoretical value; not used in practice.
Linear Programming problem Algorithms
• Karmarkar’s algorithm: interior point method
(Narendra Karmarkar, born 1956; AT&T Bell Labs, 1984)
Karmarkar invented one of the first polynomial-time algorithms for
linear programming, generally referred to as an interior point
method.
The algorithm is a cornerstone in the field of Linear Programming.
• Uses an iterative approach starting with a feasible trial solution
– Trial solutions are interior points, inside the boundary of the feasible
region
Runtime: polynomial time - O(n^3.5 L).
Of practical value; efficient for very large scale problems.
Simplex Vs Interior point
Linear Programming problem
Simplex Vs Interior point Algorithm: Wyndor Glass problem
(Hillier and Lieberman book)
Successive trial solutions of the interior point method get closer and
closer to the optimal solution, but never literally reach it. However,
the deviation becomes infinitesimally small, so the final trial solution
can be taken as the optimal solution for practical purposes.
Linear Programming problem: Simplex Algorithm
LP model in equation form
The simplex algorithm computations are facilitated by imposing
two requirements on the LP model:
1. All the constraints are equations with a nonnegative right-
hand side.
2. All the variables are nonnegative.
Converting inequalities into equations with a nonnegative
right-hand side:
To convert a ( ≤ ) inequality to an equation, a nonnegative slack
variable is added to the left-hand side of the constraint.
Simplex Algorithm: LP model in equation form
The Reddy Mikks problem (formulated) in standard LP form (augmented form):
1. All the constraints are made equations by introducing slack variables.
2. The R.H.S. of each equation is nonnegative.
3. All the variables are nonnegative.
Simplex Algorithm: Methodology of working
The standard LP form is a system of equations: ‘m’ equations in ‘n’ variables
(where m < n; the number of variables is always more than the number of equations).
If the number of variables equals the number of independent equations,
we get a unique solution (the values of the variables, which is the answer).
But here, since the number of variables is more than the number of equations,
to get a unique solution we need to make the number of variables equal to the number of
independent equations.
So, out of the ‘n’ variables, only ‘m’ are selected and the rest are set to zero. Then the
number of variables equals the number of equations, and we solve.
• ‘m’ equations, ‘n’ variables (where m < n):
– The ‘m’ variables that take part in the solution are called Basic Variables.
– The remaining ‘n − m’ variables, forced to zero, are called Non Basic Variables.
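This selection can be sketched in code: enumerate every choice of m = 4 basic variables out of n = 6, solve the resulting 4 x 4 system, and keep the solutions in which all variables are nonnegative (the basic feasible solutions). A pure-Python sketch using the Reddy Mikks equation form; the Gaussian-elimination helper is my own, not from the slides:

```python
from itertools import combinations

# Reddy Mikks in equation form: 4 equations, 6 variables (x1, x2, s1..s4).
A = [[6, 4, 1, 0, 0, 0],
     [1, 2, 0, 1, 0, 0],
     [-1, 1, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
b = [24, 6, 1, 2]

def solve(M, rhs):
    """Gauss-Jordan elimination with partial pivoting; None if singular."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[r][-1] / M[r][r] for r in range(n)]

bfs = []
for basic in combinations(range(6), 4):       # choose m = 4 of n = 6 variables
    B = [[A[i][j] for j in basic] for i in range(4)]
    v = solve(B, b)                           # nonbasic variables forced to 0
    if v is not None and all(x >= -1e-9 for x in v):
        bfs.append({j: x for j, x in zip(basic, v)})

best = max(bfs, key=lambda s: 5 * s.get(0, 0) + 4 * s.get(1, 0))
print(len(bfs), best.get(0, 0), best.get(1, 0))
```

Of the 15 (= 6 choose 4) basic solutions, only 6 are feasible; they correspond exactly to the corner points of the feasible region, and the best of them is (x1, x2) = (3, 1.5).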
Simplex Algorithm: Terminology and starting basic solution
Basic Variables:
Take part in finding the solution. The variables introduced as slacks take the role of
basic variables for their respective equations in the initial solution (zeroth iteration).
Non basic Variables:
Forced to zero. The decision variables are non basic in the initial solution (zeroth iteration).
The standard LP form (set of equations) is written in table form (zeroth-iteration tableau).
Simplex Algorithm applied to the Reddy Mikks problem
Is this solution optimal? What is the optimality condition, the test for optimality?
For a maximization problem: if, in the objective function row (z-row), the coefficients
of all the non basic variables are nonnegative, then the solution is optimal.
(For a minimization problem: if, in the objective function row (z-row), the coefficients
of all the non basic variables are nonpositive, then the solution is optimal.)
If this solution is not optimal, one variable from the non basic set is selected and
moved into the basic set, and in exchange one variable from the basic set is
selected and moved into the non basic set.
Simplex Algorithm applied to the Reddy Mikks problem
Which non basic variable to select as the entering variable?
The one with the most negative z-row coefficient.
This rule is known as the simplex optimality condition.
Which basic variable to select as the leaving variable?
Compute the ratios of the Solution column to the corresponding
positive constraint coefficients in the entering-variable column.
The row with the smallest (minimum) nonnegative ratio gives the
leaving variable.
This rule is known as the simplex feasibility condition.
The new solution is determined by swapping the leaving
variable S1 and the entering variable X1 in the next simplex
tableau (solution by the Gauss-Jordan method).
Simplex Algorithm applied to the Reddy Mikks problem

Entering variable: X1 (most negative z-row coefficient)
Leaving variable: S1 (smallest nonnegative ratio)

Basis  Z   X1   X2   S1   S2   S3   S4   Sol.   Ratio
Z      1   -5   -4   0    0    0    0    0
S1     0   6    4    1    0    0    0    24     24/6 = 4   (select: minimum nonnegative)
S2     0   1    2    0    1    0    0    6      6/1 = 6
S3     0   -1   1    0    0    1    0    1      1/-1 = -1  (negative: ignore)
S4     0   0    1    0    0    0    1    2      2/0 = ∞    (division by zero: ignore)

‘m’ equations, ‘n’ variables (where m < n):
– The entering variable is selected first, from the ‘n − m’ non basic variables.
– The leaving variable is selected next, from the ‘m’ basic variables, based on the ratio.
– The two variables are swapped, and the new tableau is solved by the Gauss-Jordan method.
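The whole procedure on this tableau, entering variable by the optimality condition, leaving variable by the feasibility condition, then a Gauss-Jordan pivot, can be sketched in a few lines of pure Python. This is a teaching sketch for this example only; it assumes a bounded, nondegenerate maximization problem and is not a general-purpose implementation:

```python
# Tableau layout matches the slides: columns x1, x2, s1..s4, Solution;
# the z-row stores the negated objective coefficients (Z - 5x1 - 4x2 = 0).
T = [
    [-5, -4, 0, 0, 0, 0,  0],   # z-row
    [ 6,  4, 1, 0, 0, 0, 24],   # s1
    [ 1,  2, 0, 1, 0, 0,  6],   # s2
    [-1,  1, 0, 0, 1, 0,  1],   # s3
    [ 0,  1, 0, 0, 0, 1,  2],   # s4
]
basis = [2, 3, 4, 5]  # column indices of the current basic variables

while True:
    # Optimality condition: entering variable = most negative z-row coeff.
    col = min(range(6), key=lambda j: T[0][j])
    if T[0][col] >= 0:
        break                      # all coefficients nonnegative: optimal
    # Feasibility condition: smallest nonnegative ratio among rows with a
    # positive coefficient in the entering column.
    row = min((i for i in range(1, 5) if T[i][col] > 0),
              key=lambda i: T[i][-1] / T[i][col])
    basis[row - 1] = col           # swap entering and leaving variables
    # Gauss-Jordan pivot: normalize the pivot row, clear the column elsewhere.
    piv = T[row][col]
    T[row] = [v / piv for v in T[row]]
    for i in range(5):
        if i != row:
            f = T[i][col]
            T[i] = [a - f * p for a, p in zip(T[i], T[row])]

x = [0.0] * 6
for i, j in enumerate(basis):
    x[j] = T[i + 1][-1]
print("optimal z =", T[0][-1], "at x1 =", x[0], ", x2 =", x[1])
```

Two pivots (S1 out / X1 in, then S2 out / X2 in) reach z = 21 at (x1, x2) = (3, 1.5), reproducing the iterations of the slides that follow.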
Simplex Algorithm applied to the Reddy Mikks problem
The new solution by the Gauss-Jordan method: (successive tableaus shown on the slides)
Simplex Algorithm applied to the Reddy Mikks problem
Based on the optimality condition, none of the z-row coefficients is negative,
so this is an optimal solution tableau.
The optimal solution can be read directly from the simplex tableau.

Geometric concepts of the simplex method:
 The six points A, B, C, D, E and F are the corner-point feasible
solutions (CPF solutions).
 Each iteration evaluates a corner point, starting from the
origin, then moves to an adjacent corner point, and so on…

Iteration      Corner point   (x1,x2)   z
0 (Starting)   A (Origin)     (0,0)     0
1              B              (4,0)     20
2              C              (3,1.5)   21

The algorithm STOPS once the optimum is reached!
Simplex Algorithm
Simplex algorithm for minimization problems
Only the optimality condition (for the entering variable) needs to change:
the non basic variable with the most positive z-row coefficient is the
entering variable for a minimization problem,
and when all non basic variables have nonpositive coefficients, the optimal
solution is reached.
The feasibility condition (for the leaving variable) is the same: smallest nonnegative ratio.
Some special cases in Linear Programming problems
Four special cases arise in LPP:
1. Degeneracy
2. Alternative optima
3. Unbounded solution
4. Nonexisting (infeasible) solution
Some special cases in Linear Programming problems
Tie for the entering basic variable
The first step of each iteration selects the non basic variable having the most
negative coefficient in the z-row as the entering variable. Now suppose two or more
non basic variables have the same largest negative value in the z-row.
Say in the Reddy Mikks example the objective function is changed to Z = 5X1 + 5X2.
Now the z-row will be Z - 5X1 - 5X2 = 0.
There is a tie for the entering variable. How is this tie broken?
The answer is that the selection between these contenders may be made
arbitrarily.
The optimal solution will be reached eventually, regardless of the tied
variable chosen.
Some special cases in Linear Programming problems
Tie breaking in the simplex method
Tie for the leaving variable - Degeneracy
If two or more basic variables tie for being the leaving basic variable, choose any
one of the tied basic variables to be the leaving basic variable. The tied basic
variables not chosen to be the leaving basic variable will have a value of zero.
If a basic variable has a value of zero, it is called degenerate.
For a degenerate problem, a perpetual loop in computation is theoretically
possible, but it has rarely been known to occur in practical problems. If a loop
were to occur, one could always get out of it by changing the choice of the leaving
basic variable.
Special cases in LP problems: Degeneracy example
Max. z = 3x1 + 9x2            Standard form:
s.t.                          Max. z = 3x1 + 9x2
X1 + 4X2 ≤ 8                  s.t. X1 + 4X2 + S1 = 8
X1 + 2X2 ≤ 4                       X1 + 2X2 + S2 = 4
X1, X2 ≥ 0                         X1, X2, S1, S2 ≥ 0

Graphical solution: the constraint X1 + 4X2 ≤ 8 is redundant; removing it does
not change the solution space. The optimum is at (0,2).

Basic  X1    X2  S1    S2    Sol.  Ratio
Z      -3    -9  0     0     0     --
S1     1     4   1     0     8     2   (tie)
S2     1     2   0     1     4     2   (tie)

S1 and S2 tie for the leaving variable, leading to degeneracy in the next
iteration, because the basic variable S2 takes the value zero.

Z      -3/4  0   9/4   0     18
X2     1/4   1   1/4   0     2     8
S2     1/2   0   -1/2  1     0     0   (zero: degenerate)

The optimum is reached in one additional iteration. From a theoretical
standpoint, degeneracy can lead to cycling, where the solution does not improve.

Z      0     0   3/2   3/2   18    Optimal (degenerate)
X2     0     1   1/2   -1/2  2
X1     1     0   -1    2     0
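The tie can be seen directly from the first tableau's ratio test; a tiny sketch of that single step:

```python
# Ratio test for the first tableau of the degeneracy example:
# the entering variable is X2 (most negative z-row coefficient, -9).
column = {"S1": 4, "S2": 2}       # X2-column coefficients
solution = {"S1": 8, "S2": 4}     # Solution column
ratios = {r: solution[r] / column[r] for r in column}
print(ratios)                     # both ratios equal 2: a tie

# Whichever tied row is chosen to leave, the other basic variable drops to 0.
tied = [r for r, v in ratios.items() if v == min(ratios.values())]
print(tied)
```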
Special cases in LP problems: Unbounded solution example
No leaving basic variable - unbounded Z
If every coefficient in the pivot column of the simplex tableau is either
negative or zero, there is no leaving basic variable. This case has an
unbounded objective function Z.

Max Z = X1 + 2X2              Standard form:
s.t.                          Max Z = X1 + 2X2
X1 – X2 ≤ 10                  s.t. X1 – X2 + S1 = 10
2X1 ≤ 40                           2X1 + S2 = 40
X1, X2 ≥ 0                         X1, X2, S1, S2 ≥ 0

Basic  X1  X2  S1  S2  Sol.  Ratio
Z      -1  -2  0   0   0     --
S1     1   -1  1   0   10    no row qualifies: all X2-column
S2     2   0   0   1   40    coefficients are zero or negative

Graphical solution: the value of Z can be increased indefinitely in the
feasible region, so Z is unbounded.
Here the feasible solution region is unbounded and the optimal solution is
unbounded too (whether the optimum is bounded depends on the objective
function and the direction in which it optimizes).
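The detection is a one-line test on the entering column; a sketch of this single check:

```python
# Unboundedness check for the Max Z = X1 + 2X2 example: after X2 is chosen
# as the entering variable, its constraint-column coefficients are (-1, 0).
entering_column = [-1, 0]          # X2 coefficients in rows S1, S2
qualifies = [a for a in entering_column if a > 0]
if not qualifies:
    print("no row qualifies for the ratio test: Z is unbounded")
```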
Special cases in LP problems: Unbounded solution space
Max Z = 2X1 – X2              Standard form:
s.t.                          Max Z = 2X1 – X2
X1 – X2 ≤ 10                  s.t. X1 – X2 + S1 = 10
2X1 ≤ 40                           2X1 + S2 = 40
X1, X2 ≥ 0                         X1, X2, S1, S2 ≥ 0

Basic  X1  X2  S1  S2   Sol.  Ratio
Z      -2  1   0   0    0     --
S1     1   -1  1   0    10    10
S2     2   0   0   1    40    20

Z      0   -1  2   0    20
X1     1   -1  1   0    10    --
S2     0   2   -2  1    20    10

Z      0   0   1   1/2  30    Optimal (X1 = 20, X2 = 10)
X1     1   0   0   1/2  20
X2     0   1   -1  1/2  10

Graphical solution: the feasible solution region is unbounded, yet the
optimal solution (20, 10) is reached as usual.
NOTE: An unbounded feasible solution region does not imply an unbounded
solution; whether the optimum is bounded depends on the objective function
and the direction in which it optimizes.
Special cases in LP problems: Unboundedness of the solution space
NOTE: An unbounded feasible solution region does not imply an unbounded
solution; whether the optimum is bounded depends on the objective function
and the direction in which it optimizes.
(Graphs on the slide: one case with an unbounded solution, one with a
bounded optimal solution, over the same unbounded region.)
Special cases in LP problems: Alternate / multiple optimal solutions
Max. z = 2x1 + 4x2            Standard form:
s.t.                          Max. z = 2x1 + 4x2
X1 + 2X2 ≤ 5                  s.t. X1 + 2X2 + S1 = 5
X1 + X2 ≤ 4                        X1 + X2 + S2 = 4
X1, X2 ≥ 0                         X1, X2, S1, S2 ≥ 0

Infinite optimal solutions: B (0, 5/2) and C (3, 1) are two CPF solutions,
and both are optimal, so every point on the line segment BC is optimal.

Basic  X1   X2  S1    S2  Sol.  Ratio
Z      -2   -4  0     0   0     --
S1     1    2   1     0   5     5/2
S2     1    1   0     1   4     4

Z      0    0   2     0   10    Optimal (X1 = 0, X2 = 5/2)
X2     1/2  1   1/2   0   5/2   5
S2     1/2  0   -1/2  1   3/2   3

Z      0    0   2     0   10    Alternate optimal (X1 = 3, X2 = 1)
X2     0    1   1     -1  1
X1     1    0   -1    2   3

A z-row value of 0 (zero) for one or more non basic variables in the optimal
tableau indicates that alternate optimal solutions exist.
Special cases in LP problems
Alternate / multiple optimal solutions
Infinite optimal solutions
Any linear programming problem with multiple optimal solutions has at
least two CPF solutions that are optimal. All optimal solutions are
weighted averages of these optimal CPF solutions.
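The weighted-average claim is easy to verify numerically for the example above, with B = (0, 5/2) and C = (3, 1):

```python
# Every weighted average of the two optimal corner points of
# Max z = 2x1 + 4x2 stays feasible and keeps the optimal value z = 10.
B, C = (0.0, 2.5), (3.0, 1.0)
zs = []
for t in [k / 10 for k in range(11)]:       # weights 0.0, 0.1, ..., 1.0
    x1 = (1 - t) * B[0] + t * C[0]
    x2 = (1 - t) * B[1] + t * C[1]
    assert x1 + 2 * x2 <= 5 + 1e-9          # constraint 1 still holds
    assert x1 + x2 <= 4 + 1e-9              # constraint 2 still holds
    zs.append(2 * x1 + 4 * x2)
print(zs)   # every value is (numerically) 10: the whole segment is optimal
```

Feasibility of the combinations follows from convexity of the feasible region; constancy of z follows from linearity of the objective.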
Special cases in LP problems
Alternate / multiple optimal solutions
Infinite optimal solutions
C (2,6) and E (4,3) are two CPF solutions, and both
are optimal, so every point on the line segment CE
is optimal.
All optimal solutions are weighted averages of
these two optimal CPF solutions.
