LINEAR PROGRAMMING
Linear programming (LP) is an optimization approach that deals
with meeting a desired objective such as maximizing profit or
minimizing cost in the presence of constraints such as limited
resources. The term linear connotes that the mathematical functions
representing both the objective and the constraints are linear. The
term programming does not mean “computer programming,” but
rather, connotes “scheduling” or “setting an agenda.”
Standard Form
The basic linear programming problem consists of two major
parts: the objective function and a set of constraints. For a
maximization problem, the objective function is generally expressed
as

Maximize Z = Σ_{j=1}^{n} c_j x_j

where c_j = payoff of each unit of the jth activity that is
undertaken and x_j = magnitude of the jth activity. Thus, the value of
the objective function, Z, is the total payoff due to the total number of
activities, n.
The constraints can be represented generally as

Σ_{j=1}^{n} a_{ij} x_j ≤ b_i    for i = 1, 2, …, m

where a_{ij} = amount of the ith resource that is consumed for
each unit of the jth activity and b_i = amount of the ith resource that is
available. That is, the resources are limited.
The second general type of constraint specifies that all activities
must have a nonnegative value,

x_j ≥ 0    for j = 1, 2, …, n
In the present context, this expresses the realistic notion that,
for some problems, negative activity is physically impossible (for
example, we cannot produce negative goods).
Together, the objective function and the constraints specify the
linear programming problem: we seek to maximize the payoff for a
number of activities under the constraint that these activities utilize
finite amounts of resources.
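The standard form above maps directly onto off-the-shelf LP solvers. As a minimal sketch (not part of the text; the payoffs, coefficients, and resource limits below are hypothetical), SciPy's `linprog` can be used, remembering that it minimizes, so the payoffs c_j are negated:

```python
from scipy.optimize import linprog

# Hypothetical two-activity example: maximize Z = 3x1 + 2x2
# subject to 2x1 + x2 <= 10, x1 + 3x2 <= 15, x1, x2 >= 0.
c = [-3, -2]               # linprog minimizes, so negate the payoffs c_j
A_ub = [[2, 1], [1, 3]]    # a_ij: resource i consumed per unit of activity j
b_ub = [10, 15]            # b_i: available amount of resource i
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)     # optimal activity levels and maximum payoff Z
```

The sign flip on `res.fun` at the end recovers the maximized Z from the minimized objective.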
Graphical Solution
Because they are limited to two or three dimensions, graphical
solutions have limited practical utility. However, they are very useful
for demonstrating some basic concepts that underlie the general
algebraic techniques used to solve higher-dimensional problems with
the computer.
For a two-dimensional problem, the solution space is defined as a
plane with x_1 measured along the abscissa and x_2 along the ordinate.
Because they are linear, the constraints can be plotted on this plane as
straight lines. If the LP problem was formulated properly (that is, it has
a solution), these constraint lines will delineate a region, called the
feasible solution space, encompassing all possible combinations of
x_1 and x_2 that obey the constraints and hence represent feasible
solutions. The objective function for a particular value of Z can then be
plotted as another straight line and superimposed on this space. The
value of Z can then be adjusted until it is at the maximum value while
still touching the feasible space. This value of Z represents the optimal
solution. The corresponding values of x_1 and x_2, where Z touches the
feasible solution space, represent the optimal values for the activities.
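The graphical argument implies that the optimum sits at a corner of the feasible region. That idea can be sketched numerically (a hypothetical two-variable problem; NumPy is my choice, not the text's): enumerate pairwise intersections of the constraint lines, discard the infeasible ones, and evaluate Z at the rest:

```python
import itertools
import numpy as np

# Hypothetical problem: maximize Z = 3x1 + 2x2 subject to
# 2x1 + x2 <= 10, x1 + 3x2 <= 15, x1 >= 0, x2 >= 0.
# Every constraint (bounds included) is a row of A x <= b.
A = np.array([[2, 1], [1, 3], [-1, 0], [0, -1]], float)
b = np.array([10, 15, 0, 0], float)

best = None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                      # parallel lines: no intersection
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):     # keep only feasible corner points
        z = 3 * x[0] + 2 * x[1]
        if best is None or z > best[0]:
            best = (z, x)
print(best)   # the maximum of Z occurs at an extreme point
```

Brute-force corner enumeration only scales to small problems, which is exactly why the algebraic simplex method of the next section matters.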
The Simplex Method
The simplex method is predicated on the assumption that the
optimal solution will be an extreme point. Thus, the approach must be
able to detect when an extreme point occurs during the solution
process. To do this, the constraint equations are reformulated as
equalities by introducing what are called slack variables.
Slack Variables. As the name implies, a slack variable measures
how much of a constrained resource is available, that is, how much
“slack” of the resource is available.
We can define a slack variable S_1 as the amount of raw gas that is
not used for a particular production level (x_1, x_2). If this quantity is
added to the left side of the constraint, it makes the relationship exact:

7x_1 + 11x_2 + S_1 = 77
Now recognize what the slack variable tells us. If it is positive, it
means that we have some “slack” for this constraint. That is, we have
some surplus resource that is not being fully utilized. If it is negative, it
tells us that we have exceeded the constraint. Finally, if it is zero, we
exactly meet the constraint. That is, we have used up all the allowable
resource. Since this is exactly the condition where constraint lines
intersect, the slack variable provides a means to detect extreme
points.
A different slack variable is developed for each constraint
equation, resulting in what is called the fully augmented version.
Maximize Z = 150x_1 + 175x_2

Subject to

 7x_1 + 11x_2 + S_1                   = 77
10x_1 +  8x_2       + S_2             = 80
   x_1                    + S_3       =  9
          x_2                   + S_4 =  6
Notice how we have set up the four equality equations so that
the unknowns are aligned in columns. We did this to underscore that
we are now dealing with a system of linear algebraic equations.
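The augmentation step can be sketched in code (the coefficients below are hypothetical, not the text's example): appending an identity matrix to A turns the inequalities A x ≤ b into the equality system [A | I][x; S] = b, and evaluating S = b − Ax at a trial point shows which constraints are slack, binding, or violated:

```python
import numpy as np

# Hypothetical constraints: 2x1 + x2 <= 10 and x1 + 3x2 <= 15.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, 15.0])
A_aug = np.hstack([A, np.eye(len(b))])   # columns: x1, x2, S1, S2

# At a trial point, the slacks diagnose each constraint:
x = np.array([3.0, 4.0])
S = b - A @ x       # S_i > 0: surplus; S_i = 0: binding; S_i < 0: violated
print(S)            # both zero here, so both constraints bind: an extreme point
```

Because both slacks vanish at this trial point, two constraint lines intersect there, which is precisely the extreme-point condition the simplex method exploits.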
NONLINEAR CONSTRAINED OPTIMIZATION
There are a number of approaches for handling nonlinear
optimization problems in the presence of constraints. These can
generally be divided into indirect and direct approaches. A typical
indirect approach uses so-called penalty functions. These involve
placing additional expressions to make the objective function less
optimal as the solution approaches a constraint. Thus, the solution will
be discouraged from violating constraints. Although such methods can
be useful in some problems, they can become arduous when the
problem involves many constraints.
The generalized reduced gradient (GRG) search method is one of
the more popular of the direct methods. It is, in fact, the nonlinear
method used within the Excel Solver.
It first “reduces” the problem to an unconstrained optimization
problem. It does this by solving a set of nonlinear equations for the
basic variables in terms of the nonbasic variables. The resulting
unconstrained problem is then solved with standard gradient-based
approaches. First, a search
direction is chosen along which an improvement in the objective
function is sought. The default choice is a quasi-Newton approach
(BFGS) that requires storage of an approximation of the Hessian
matrix. This approach performs very well for most cases. The
conjugate gradient approach is also available in Excel as an alternative
for large problems. The Excel Solver has the nice feature that it
automatically switches to the conjugate gradient method, depending
on available storage. Once the search direction is established, a one-
dimensional search is carried out along that direction using a variable
step-size approach.
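The reduction idea can be illustrated with a toy problem (a hypothetical example of mine, using SciPy's BFGS rather than Excel's GRG implementation): solve the equality constraint for the basic variable, substitute it into the objective, and run an unconstrained quasi-Newton search on what remains:

```python
from scipy.optimize import minimize

# Hypothetical problem: minimize f(x, y) = x**2 + y**2
# subject to the nonlinear equality constraint y = (x - 1)**2.
# Solving the constraint for the basic variable y in terms of the
# nonbasic variable x leaves an unconstrained problem in x alone.
def reduced_objective(v):
    x = v[0]
    y = (x - 1.0) ** 2      # basic variable eliminated via the constraint
    return x**2 + y**2

res = minimize(reduced_objective, x0=[0.0], method="BFGS")
print(res.x, res.fun)
```

The quasi-Newton (BFGS) search here plays the same role as the default search-direction choice described above for the Solver.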
OPTIMIZATION WITH PACKAGES
Software libraries and packages have great capabilities for
optimization.
Excel for Linear Programming
There are a variety of software packages expressly designed to
implement linear programming. However, because of its broad
availability, we will focus on the Excel spreadsheet. This involves using
the Solver option previously employed for root location.
The manner in which Solver is used for linear programming is
similar to our previous applications in that the data is entered into
spreadsheet cells. The basic strategy is to arrive at a single cell that is
to be optimized as a function of variations of other cells on the
spreadsheet.
Excel for Nonlinear Optimization
The manner in which Solver is used for nonlinear optimization is
similar to our previous applications in that the data is entered into
spreadsheet cells. Once again, the basic strategy is to arrive at a single
cell that is to be optimized as a function of variations of other cells on
the spreadsheet.
IMSL
IMSL has several Fortran subroutines for optimization. We will
focus on the UVMID routine. This routine locates the minimum point of a
smooth function of a single variable using function and first-derivative
evaluations.
UVMID is implemented by the following CALL statement:
CALL UVMID (F, G, XGUESS, ERREL, GTOL, MAXFN, A, B, X, FX,
GX)
Where
F = User-supplied FUNCTION to compute the value of the
function to be minimized. The form is F(X), where X = the
point at which the function is evaluated. (Input). X should
not be changed by F. F = the computed function value at
the point X. (Output).
G = User-supplied FUNCTION to compute the derivative of the
function. The form is G(X), where G = the computed
derivative value at the point X. (Output).
F and G must be declared EXTERNAL in the calling program.
XGUESS = An initial guess of the minimum point of F. (Input).
ERREL = Required relative accuracy of the final value of X.
(Input).
GTOL = The derivative tolerance used to decide if the current
point is a minimum. (Input).
MAXFN = Maximum number of function evaluations allowed.
(Input).
A = Lower endpoint of interval in which the minimum is to be
located. (Input).
B = Upper endpoint of interval in which the minimum is to be
located. (Input).
X = The point at which the minimum occurs. (Output).
FX = Function value at X. (Output).
GX = Derivative value at X. (Output).
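For readers without IMSL, a rough Python analogue of UVMID's job (a sketch, not the IMSL API) is SciPy's bounded scalar minimizer; unlike UVMID, it uses only function values, not the user-supplied derivative G:

```python
from scipy.optimize import minimize_scalar

# Hypothetical smooth function of a single variable with its
# minimum inside the interval [A, B] = [0, 5].
f = lambda x: (x - 2.0) ** 2 + 1.0

res = minimize_scalar(f, bounds=(0.0, 5.0), method="bounded")
print(res.x, res.fun)     # X and FX in UVMID's terminology
```

As with UVMID's A and B arguments, the `bounds` pair brackets the search, and the returned pair corresponds to UVMID's X and FX outputs.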
Problems
1. The Activity Analysis Problem. There are n activities, A1, . . . ,An ,
that a company may employ, using the available supply of m
resources, R1, . . . , Rm (labor hours, steel, etc.). Let bi be the
available supply of resource Ri. Let aij be the amount of resource Ri
used in operating activity Aj at unit intensity. Let cj be the net value
to the company of operating activity Aj at unit intensity. The
problem is to choose the intensities at which the various activities are
to be operated so as to maximize the value of the output to the
company subject to the given resources.
Let xj be the intensity at which Aj is to be operated. The value of
such an activity allocation is

Σ_{j=1}^{n} c_j x_j    (8)
The amount of resource Ri used in this activity allocation must be
no greater than the supply, b_i; that is,

Σ_{j=1}^{n} a_{ij} x_j ≤ b_i    for i = 1, …, m    (9)
It is assumed that we cannot operate an activity at negative
intensity; that is,

x_j ≥ 0    for j = 1, …, n    (10)
Our problem is: maximize (8) subject to (9) and (10). This is
exactly the standard maximum problem.
2. The Optimal Assignment Problem. There are I persons available
for J jobs. The value of person i working 1 day at job j is aij , for i = 1,
. . . , I, and j = 1, . . . , J . The problem is to choose an assignment of
persons to jobs to maximize the total value.
An assignment is a choice of numbers, xij , for i = 1, . . . , I, and j
= 1, . . . , J, where xij represents the proportion of person i ’s time that
is to be spent on job j. Thus,

Σ_{j=1}^{J} x_{ij} ≤ 1    for i = 1, …, I    (11)

Σ_{i=1}^{I} x_{ij} ≤ 1    for j = 1, …, J    (12)

and

x_{ij} ≥ 0    for i = 1, …, I and j = 1, …, J    (13)
Equation (11) reflects the fact that a person cannot spend more
than 100% of his time working, (12) means that only one person is
allowed on a job at a time, and (13) says that no one can work a
negative amount of time on any job. Subject to (11), (12) and (13), we
wish to maximize the total value,

Σ_{i=1}^{I} Σ_{j=1}^{J} a_{ij} x_{ij}

This is a standard maximum problem with m = I + J and n = IJ.
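Although the problem above is posed as an LP, the assignment LP is known to have an integral optimal solution, so it can also be solved combinatorially. A sketch using SciPy's Hungarian-algorithm routine (the values a_ij below are hypothetical):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical values a_ij: value of person i working a day at job j.
a = np.array([[4, 1, 3],
              [2, 0, 5],
              [3, 2, 2]])

# linear_sum_assignment minimizes cost by default; pass maximize=True
# to maximize total value instead.
rows, cols = linear_sum_assignment(a, maximize=True)
print(list(zip(rows, cols)), a[rows, cols].sum())
```

Each returned (i, j) pair assigns person i fully to job j, i.e., x_ij = 1, and the sum is the maximized total value.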