Air Cargo Terminal Workload Optimization
XU ZHIYONG
A THESIS SUBMITTED
2004
Acknowledgement
I'd like to express the full measure of my gratitude to my two supervisors, Dr. Lee
Chulung and Prof. Huang Huei Chuen. It is under their supportive and challenging
guidance that I have been tirelessly dedicated to my study and research at NUS.
In addition, three professors from the Georgia Institute of Technology also made valuable
contributions to my thesis with stimulating advice, including Prof. Ellis Johnson.
I also would like to extend my gratitude to all my friends who made my stay at NUS an
unforgettable experience. Special thanks to Liu Bin and Zeng Yifeng, who are my study
partners and always give me unforgettable help. Thanks to Leong Chun How, Li Dong,
Ng Tsan Sheng Adam and Huang Peng, all of whom helped my research much; thanks to
all other friends in the Logistics Lab where I spent most of the time in the past two years.
They are Lin Wei, Yang Guiyu, Peng Ji, Zhang Lifang, Liu Rujing, Wang Xuan, and Hu
Qingpei.
Finally, thanks to my family and old friends in China for their support and understanding.
Table of Contents
Acknowledgement
Table of Contents
Summary
1 Introduction
2 Literature Review
4 Solution Methodology
6.1 Conclusions
References
Summary
As revenue from cargo transportation has substantially increased over the last decade,
competition has become fierce, and airlines and airports strive to streamline their cargo
handling operations. Airport cargo terminals play a crucial role in air freight operations,
performing activities such as receiving, order-picking and shipping, and manpower
planning for the handling employees is particularly critical for achieving cost reductions
as well as maintaining customer service levels. No matter how experienced the planners
are, manpower shortages are often worsened by the fact that completely idle time arises
from time to time during working hours. This dilemma is termed the self-contradiction of
hands shortage and idleness. The chief reason for this self-contradiction lies in the
unbalanced workload distribution that arises from the special operations of air cargo
terminals.
This thesis discusses the idea that airline re-assignment is a potential and feasible
approach to balancing the workload, and proposes an optimization model for it. In this
model, two parallel measurements of imbalance are chosen to be minimized. Thus, two
objectives are achieved: even distribution of workload along the time line, and balanced
overall workload between the two identical sub-terminals.
Since workloads cannot be estimated accurately, and vary seasonally, uncertainty is
considered and a stochastic model is formulated. It is solved by sample average
approximation (SAA), which is well studied and broadly applied in the literature. This
method generates some scenarios, which are realizations of the random parameters. Each
scenario can be tackled as a deterministic problem, the average objective value of which
is deemed to be the approximation of the original stochastic problem. However, even the
approximated problem remains computationally demanding, so improved solution
methods are developed.
Computational results obtained from actual data of a large airport cargo terminal are
presented. Different formulations and different algorithms are compared. A suitable
sample size for SAA is found with good solution quality. The efficiency of the two-stage
approximation is also investigated.
List of Figures
Figure 5-5: Average cycle time with the number of cargo retrievals ............................... 75
List of Tables
Table 5.7: Comparison between the two-stage approximation and the optimal solution. 69
Table 5.8: Case 1 of partial reassignment: Terminate the service of a big airline............ 71
Table 5.10: Case 3 of partial reassignment: Terminate the service of a small airline ...... 72
Table 5.11: Case 4 of partial reassignment: Insert a new small airline ............................ 72
Table 5.12: Imbalance measurements under the current and optimal assignment ........... 74
1 Introduction
The efficiency of airport operations has received considerable research interest, but most
of the work focuses on passenger terminals (see, for example, Baron 1969, De Neufville
and Rusconi-Clerici 1978, Wirasinghe and Bandara 1990, Haghani and Chen 1998).
However, as the revenue from cargo transportation has substantially increased in the last
decade, competition has become fierce, and airlines and airports strive to streamline
their cargo handling operations. Air cargo terminals play a crucial role in air freight
operations, handling all freight at airports, where they function as a type of warehouse.
Nevertheless, as will be seen later, very few publications have discussed the operations
at air cargo terminals.
This chapter first describes the characteristics of air cargo terminals. It then presents
the motivations of the research and the organization of the thesis.
The logistics industry is characterized by demanding customer markets with high service
and quality requirements. Short lead-times and high delivery frequency are in demand,
and demand is highly variable; this calls for an efficient warehousing system to help
accomplish the desired service level at the lowest possible cost. Warehouse operations
involve all movements of goods within warehouses and distribution centers (DCs),
namely receiving, storage, order-picking, accumulation, sorting and shipping. The aim is
to perform these activities at the lowest possible cost while satisfying customer
requirements. The key to achieving these objectives is planning and control by warehouse
management systems. We may establish high-quality solutions by decomposing the
overall problem into subproblems. A well-defined hierarchy avoids solutions that are
locally optimal without considering the global context. The hierarchy of management
decisions can be broadly classified into the strategic level, tactical level and
operational level.
Strategic management decisions are long-term decisions that concern the determination
of broad policies and plans for using the resources of a company to support its long-term
competitive strategy. These decisions face high uncertainties. Typical methods for
solving such problems are stochastic modeling or simulation, based on demand estimates.
In the present context, the main strategic decisions are warehouse design and layout, and
capacity planning to determine the best design parameters. The best warehouse layout,
which maximizes space utilization and minimizes traveling time, can achieve effective
use of floor space and equipment. Capacity planning involves estimating the future
capacity requirements due to expansion, while maintaining the customer service level.
On the tactical decision level, a number of medium-term decisions are to be made, based
on the outcomes of the strategic decisions. The tactical decisions have a lower impact
than the strategic decisions, but still require some investments and should therefore not be
reconsidered too often. Planning problems concern an existing situation and the
algorithms are based on historical data. Tactical decisions typically concern how to
efficiently schedule material and labor. Products are assigned to warehouse storage
locations to reduce the mean transaction times for retrieval to satisfy customer orders.
The labor scheduling problem consists of daily shift and days-off scheduling in each
week, to minimize the number of employee hours required to satisfy the hourly demand
requirement over the week in service delivery systems. Workload balancing should also
be considered at this level.
Operational decisions are narrow and short-term by comparison and act under the
operating constraints set out by the strategic and tactical management decisions. Control
algorithms are based on actual data, and attempt to find solutions with high-quality
performance. Combinatorial optimization techniques are well suited for solving planning
and control problems. Control problems concern the actual sequencing, scheduling and
routing of the daily operations.
Air cargo terminals serve air freight at airports, including receiving, order-picking,
checking-packing, shipping, loading and unloading, etc. Whether for import or export, all
air freight needs to pass air cargo terminals. Generally, a large air cargo terminal is
divided into several sub-terminals, each of which undertakes the function of import or
export. In addition, import and export terminals may themselves be separated into
multiple sub-terminals. In this research, for example, a large air cargo terminal is
considered, which is separated into four sub-terminals, two of which are for import and
two for export.
The typical flow of air shipments starts when freight is delivered to the airport cargo
terminal by shippers at the origin. The freight may be delivered in containers, on pallets
or as loose freight. Freight can also be picked up at the shipper's door by freight
forwarders and consolidated into containers before delivery to the cargo terminal. Then,
freight is unloaded from the delivery trucks and verified against the accompanying
information, such as the quantity and type of freight. The shippers determine tariffs and
prepare a waybill. The air waybill
accompanies the freight and is used to verify it in subsequent handling operations. Next,
freight is sorted according to its destination and consolidated into containers. It is then
taken to a loading area where it awaits the next planned aircraft departure along with
other containers that were delivered to the terminal by freight forwarders. At the
destination terminal, freight is unloaded, verified, tagged, and sorted into racks where it
awaits pickup by the consignees. (One should note that the aforenamed shippers,
forwarders and consignees are all called cargo agents at air cargo terminals; this term
is used hereafter.)
It is clear that air cargo terminals possess the common properties of warehouses. To
achieve the objective of desired service level with the lowest cost, we still need to
decompose planning and control at strategic, tactical and operational levels. In this thesis,
we consider strategic and tactical decisions. However, air cargo terminals have certain
special characteristics that differentiate them from the traditional warehouses. These
require more attention when we are studying how to improve the performance of air
cargo terminals. Three differences that we will start the research with are described as
follows:
Firstly, cargo storage duration is relatively short. Customer service requires air
transportation to be fast and efficient, which in turn requires the air cargo terminal to
be highly efficient.
Secondly, although more and more automated equipment and warehouse management
systems (WMS) have been introduced, most of the manual operations in air cargo
terminals must still be kept. Thus, the number of workers, which has become the major
cost driver, must be planned carefully.
Thirdly, the workload at air cargo terminals can be measured by the quantity of air
cargo, which completely depends on the business contracts between airlines and
cargo agents. As a ground handling agent, the air cargo terminal follows the workload
demand in a timely manner, without control over the quantity of cargo or the cargo
arrival time. This uncertainty in workload demand strongly increases the difficulty of
execution planning, which is very important for productivity and cost reduction.
Due to the time-sensitive nature of the air cargo trade, it is imperative that work needs to
be done swiftly once it arises. Consequentially there is often a surge of manpower and
equipment demand when multiple flights are scheduled to depart or arrive within the
same time-window. In addition the workload is stochastic over time, as cargo volume
varies from day to day and week to week. Still, for achieving cost reductions, the number
of workers is kept as small as possible, and it is a challenge to find a cost-efficient
method to schedule a regular workforce to meet this highly varied workload. In practice,
no matter how experienced the planners are, manpower shortages are often worsened by
the fact that completely idle time arises from time to time during working hours. This
dilemma is termed the self-contradiction of hands shortage and idleness.
The chief cause of this problem is the unbalanced workload distribution, since the quality
and cost of the manpower schedule hinge very much on the workload. At air cargo
terminals, the workload distribution depends on flight schedules, which vary in intensity
throughout the day; however, the manpower schedule comprises a regular eight-hour shift
for each employee. Under these circumstances, when the workload surges, employees may
become very busy and overloaded, and when it slows, they may become underutilized or
even idle. In summary, the relatively regular manpower schedule does not match the
highly varied workload distribution.
The ideal situation would be for the manpower schedule to always match the workload
distribution. Some techniques attempt this matching, such as splitting shifts, but such
approaches are generally not practical due to workforce constraints. Balancing the
workload itself, on the other hand, may be a practical approach. If the workload can be
evenly distributed along the time line, then manpower scheduling becomes easy, and the
aforementioned self-contradiction will not occur. Even if absolutely even workload
distribution cannot be achieved, we may still improve the balance, which is useful and
beneficial in improving other operational behaviors. More details are discussed in later
chapters.
In this research, workload balancing is achieved by assigning the service for each airline
to one of two sub-terminals. We consider a real-world example in which the cargo
terminal has two identical sub-terminals for the same function (import or export);
otherwise, if all airlines are served by the same terminal, the workload is absolutely
determined by the fixed flight schedule, and terminal operators can do nothing to improve
the workload balance. When two such sub-terminals exist, one can alter the workload
distribution for the sub-terminals by changing the airline assignment; the goal is to
achieve the optimal airline assignment with respect to workload balancing objectives.
For a large air cargo terminal with four sub-terminals, two of which are for import and
two for export, we consider two sub-terminals with the same function; the method can be
applied to either the import terminals or the export terminals. We assume these two
sub-terminals have identical resources and capacities.
Given a number of airlines, each of which includes a number of flights, the problem is to
assign some of these airlines to one sub-terminal and the others to another (identical)
sub-terminal. The objective of this assignment is to balance the workload between the
sub-terminals and along the time line.
For a sub-terminal, workload can be measured by the total quantity of cargo that needs to
be handled, summed over all the flights that it serves. Each flight has its own workload
distribution along the time line, which can be estimated from aircraft type and historical
experience. More details can be found in Section 5.1. Since the flight schedule is
determined by the airlines, and is relatively fixed for the long term, we assume that the
weekly flight schedule is fixed and given. In practice, all flights belonging to the same
airline are served by the same sub-terminal; that is why we directly assign airlines,
rather than flights.
Workload balancing should achieve two objectives. First, the overall workload is
balanced between the two identical sub-terminals, since they are assumed to have the
same resources. Second, the workload is evenly distributed along the time line at each
sub-terminal. This objective is the key to ameliorating the manpower scheduling problem
once the varying workload is factored in.
Based on the model that re-optimizes the overall airline assignment, an extended model
adjusts the assignment only partially, which is more practical for industrial
implementation. By running a simulation model, the average cycle times under the current
assignment and the proposed optimal assignment are compared. The results show that the
average cycle time improves under the proposed assignment.
The rest of this thesis consists of five chapters. In Chapter 2, we present a literature
review, in which one can find both the history of research achievements and suggestions
for research directions. In Chapter 3, the deterministic model is formulated; stochastic
programming is then introduced and applied to extend the deterministic model to a
stochastic model. The sample average approximation (SAA) method is introduced and
employed to handle this stochastic model. Chapter 4 presents a series of methodologies to
improve the solution. First the model is reformulated; then Benders decomposition is
reviewed and applied to solve it, and a two-stage approximation is presented, which
reduces the computational time with good solution quality. For more practical use, a
partial re-assignment model is derived from the overall re-assignment that we have
previously discussed; the chapter ends with a summary. Chapter 5 describes parameter
estimation and lists two experimental data instances. Next, the main computational
results are summarized and discussed. Finally, the proposed optimal assignment is
evaluated by simulation.
Chapter 6 summarizes the main results obtained from this research project, and discusses
directions for future research.
2 Literature Review
We first outline related work on the warehouse operations problem in Section 2.1, then
focus on the workload balancing problem in Section 2.2. In Section 2.3 we present a
review of stochastic programming. Warehouse operations research concerns planning and
control. Ever since Hausman et al. (1976) and Graves et al. (1977) introduced these
new research topics in the 1970s, the operations of warehousing systems have received
considerable interest, and by now there is a fairly large literature, which we cannot hope
to review thoroughly. Since some authors have written excellent reviews in different
epochs, here we attempt to present the major conclusions on warehouse operations from
these surveys.
Matson and White (1982) performed a general survey of material handling research, with
a portion devoted to warehousing. Ashayeri and Gelders (1985) addressed warehousing
specifically, and they concluded that the most practical approach to studying the
complexities of a total warehousing system is a combination of analytical and simulation
modeling.
Cormier and Gunn (1992) discussed the literature on the optimization of warehouse
design and operation. After reviewing the most important definitions encountered in this
research area, they described throughput capacity models and briefly discussed
warehouse design models (internal arrangement and overall design issues). Attention was
also paid to models concerning the maximization of space utilization. The authors
concluded with suggestions for future research.
Berg (1999) presented a literature survey on methods and techniques for the planning and
control of warehousing systems, and remarked that few papers provided algorithms to
obtain the optimal solution and that their formulations used simplistic objective
functions, ignoring other considerations and trade-offs. The author concluded with
suggestions for further research, noting that flexibility of labor is an important issue
in warehouses, and resources may be redistributed among activities. In particular, this
survey pointed out that although workload balancing may contribute to improving the
system throughput capacity, no publications addressing this issue were found.
In a more recent survey of warehousing systems, the finding was that the majority of
papers are primarily analysis-oriented and do not pursue a synthesis of models and
techniques as a basis for warehouse design. Thus the authors emphasized the need for
design-oriented studies, as opposed to the strong analysis orientation of the existing
literature.
They also concluded that future studies should examine the trade-offs between costs and
service levels.
From this short review of the literature, we can see that warehouse operations have been
well studied. However, a limitation of the research so far is that most studies have
addressed isolated subproblems. Further research is suggested in areas of integrated
systems, such as labor cost and integrated models.
Workload balancing has been studied for a long time; however, as Berg (1999) pointed
out, very few publications have discussed this issue in the context of warehousing. Here
we review the literature from two viewpoints: First is the benefit of workload balancing;
second is the definition of balance (or imbalance), which determines the balancing
objective function.
Stecke and Solberg (1985) and Dallery and Stecke (1990) showed that throughput is
maximized by balancing the workloads among equal size machine groups. Shanthikumar
and Stecke (1986) demonstrated that balanced workloads minimize the work-in-process
inventory over time stochastically, and Ammons et al. (1985) showed similar results in the
context of bottleneck formation. Balancing may also make it easier to handle machine
breakdowns and workload fluctuations.
Kumar (1998, 2001) argued that throughput, utilization, material handling and due dates
are more focused objectives than workload balancing. However, considering any one of
these would neglect the others; on the other hand, even though a workload balancing
objective may not directly represent the other objectives, by virtue of distributing the load
equally, it can quickly give a solution that may be good with respect to many criteria
simultaneously.
These criteria were called systems objectives by Kim (1993). He explored the possibility
of using workload balancing as a surrogate for them. The value of the balancing objective
was verified by his results, which showed that performance measures like throughput,
makespan, mean flow time and mean tardiness (after scheduling) are significantly
correlated with the workload balance achieved at the loading stage. In his paper, a
queuing model and some heuristics were used to compute these performance measures.
Potts and Whitehead (2001) integrated scheduling and machine layout problems, and they
derived a three-phase integer programming formulation in which the first phase was used
to balance workloads, the second phase minimized inter-machine travel, and the third
phase assigned machines to positions in the loop layout so that the total number of
circuits made by the products was minimized. The hierarchical nature of the model meant
that the solutions obtained principally satisfy the workload balancing objective of the
first phase. The authors concluded that workload balance is the governing factor in
determining the overall solution quality.
Workload balancing also benefits parallel distributed computing. Lan et al. (2002)
showed that good load balancing substantially reduces execution time. Similar results
were reported by Boukerche and Das (2004), who indicated that their dynamic load
balancing schemes significantly reduced the running time. Liu and Righter (1998) also
reported related benefits.
In light of the above, it is not surprising that workload balancing is a commonly and
widely used objective. Employing a balancing objective requires a clear definition of the
measurement of balance. However, there is no single accepted definition: different
functions have been used to measure the deviation of assigned workloads from
ideal workloads.
ideal workloads. Berrada and Stecke (1986) used the objective function of minimizing
the maximum workload among the machines or machine groups; Moreno and Ding (1989)
aimed at the minimum sum of overload and underload of the machines, while Ammons et
al. (1985) tried to minimize a function of the maximum deviation from the ideal
workloads. Similar work can also be found in Shanker and Tzen (1985). Kim (1993)
tested six objective functions as measures of balance which can be easily handled, and
linked these commonly used loading objectives with systems objectives in practical
settings.
Regarding balancing effectiveness, Guerrero et al. (1999) stated that, after thoroughly
testing eight different linear imbalance measures, they concluded that the simplest and
most effective measure is the difference between the maximum and the minimum. However,
they did not detail which eight measures they tested, nor the testing methodology or the
magnitude of improvement. Kumar (2001) summarized the most popular objective
functions from previous work and compared the effectiveness of different balancing
objectives. The most significant finding is that min [average pairwise difference] is the
best objective for workload balancing. These conclusions are explained by sensitivity:
by virtue of depending on every workload in the system, the pairwise difference is
extra-sensitive to the distribution of workloads. While the statistic maximum consists of
just one pairwise difference, the statistic average incorporates all the pairwise
differences.
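As a concrete illustration (our own, not from the cited papers), the three statistics discussed above (the maximum, the max-min difference, and the average pairwise difference) can be computed for a small workload vector:

```python
from itertools import combinations

def max_workload(w):
    # min-max criterion: the single largest workload
    return max(w)

def max_min_difference(w):
    # the measure recommended by Guerrero et al.: max minus min
    return max(w) - min(w)

def avg_pairwise_difference(w):
    # the measure recommended by Kumar: mean of |w_a - w_b| over all pairs
    pairs = list(combinations(w, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

balanced = [10, 10, 10, 10]
skewed = [4, 10, 10, 16]
print(max_min_difference(balanced), max_min_difference(skewed))          # 0 12
print(avg_pairwise_difference(balanced), avg_pairwise_difference(skewed))  # 0.0 6.0
```

Note how both measures vanish for the perfectly balanced vector, while the average pairwise difference reacts to every workload in the vector, which is exactly the sensitivity argument made above.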
Over the past several decades, linear programming (LP) and mixed integer linear
programming (MILP) have become the fundamental planning tools. They assume that the
parameters for the given problem are known accurately; however, in the real world, the
data cannot be known accurately for a variety of reasons. The first reason is simple
measurement error. The second and more fundamental reason is that some data represent
information about the future (e.g., product demand or price for a future time period) and
simply cannot be known with certainty. When one or more of the data elements in a
model must be represented by random variables, stochastic programming (SP) is
introduced.
SP incorporates such uncertainty within an optimization model. Such models are
appropriate when data evolve over time and decisions need to be made prior to observing
the entire data stream. Furthermore, SP allows recourse decisions to be taken after the
uncertain data are observed.
SP models have been studied since the mid-1950s by Dantzig (1955), Beale (1955),
Charnes and Cooper (1959), and others. They proposed a stochastic view to replace the
deterministic one, where the unknown coefficients or parameters are random with
assumed probability distributions that are independent of the decision variables. A variety
of applications have been reported, including network planning (Sen et al. 1994) and
supply chain management (Fisher et al. 1997).
For stochastic linear programs (SLP), the main challenges are due to the need for multi-
dimensional integration within the optimization algorithm. Even in cases where the
random variables are discrete, the total calculations associated with the summations may
be far too demanding. Hence methods that are based on approximation become
paramount. The most popular approximation methods are based on sampling: sampled
instances are solved and a statistical analysis of the output is performed, as suggested
by the work of Romisch and others.
include integer restrictions, then the remaining problem inherits the properties of SLP;
otherwise, when integer variables appear in future stages, the S-MILP is much more
challenging. In this thesis, we consider the problems with first stage integer variables.
For this class of problems, decomposition-based algorithms are widely employed. This
genre of methods traces back to the L-shaped method of Van Slyke and Wets (1969),
which builds on arguments similar to Benders decomposition (Benders 1962) for two-
stage problems. Birge (1985, 1988, 1996, 1997) conducted extensive research on this
topic. Also
Higle and Sen (1991), Mulvey and Ruszczynski (1995) and Linderoth (2002) made
further contributions, and researchers continue to improve the computational efficiency
of the basic Benders decomposition. Birge proposed a nested decomposition and a
piecewise linear partition method for multi-stage problems in 1985, with further
algorithmic development for such problems in 1988. Santoso et al. (2003) developed
several acceleration techniques for Benders decomposition. In addition, parallel
computing has been broadly developed and combined with decomposition methods: the
subproblems are solved on separate processors, and all the sub-solutions are
communicated by a host computer, which controls the whole process. Mulvey and
Ruszczynski (1995), Birge (1996) and Linderoth (2002) proposed such parallel schemes.
3 Mathematical Formulations
This chapter describes the mathematical formulations for the workload balancing
problem. Section 3.1 presents the deterministic model under the assumption that all
parameters are known with certainty; this assumption is then relaxed and the stochastic
model is formulated. The sample average approximation (SAA) method is then
introduced to handle the stochastic model.
Under the ideal situation of perfect balance, the workload is uniformly distributed along
the time line, so that the workload is equal in any two equal time periods. Hence, for the
definition of workload balance, equal time periods are chosen, and the workload
imbalance is measured by the differences among the workloads of these periods.
Assuming a continuous time line, the target time horizon is one week, which is the flight
schedule cycle. Without loss of generality, the cycle is considered to begin at 0:00 on
Monday and end at 24:00 on Sunday, a total of 168 hours. For convenience, time is
denoted by integers from 0 to 168 (refer to Figure 3-1), where 168 is identified with
time 0 of the following week.
An appropriate length for a unit period is necessary for the problem formulations, since
different lengths affect the variation that will serve as the imbalance measurement. In
addition, a shorter time period means a larger number of periods, which would
increase the problem size as well as the computational complexity. Therefore, as a
trade-off, an integer number of hours is chosen as the time period length. Under this
consideration, one hour is the shortest time period, and thus represents the most accurate
measurement. As shown in Figure 3-1, 168 one-hour time periods are discretized along
the weekly time line.
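The thesis estimates each flight's hourly workload from aircraft type and historical experience (Section 5.1). Purely to illustrate the 168-period discretization, the sketch below spreads a hypothetical flight's workload uniformly over the hours preceding its scheduled hour; the uniform spread, the function name and the numbers are our assumptions, not the thesis's estimation method:

```python
def hourly_buckets(flight_hour, total_workload, spread_hours):
    """Spread a flight's workload evenly over `spread_hours` one-hour
    periods ending at the scheduled hour, within a 168-hour week.
    The uniform spread is an illustrative assumption only."""
    buckets = [0.0] * 168
    share = total_workload / spread_hours
    for h in range(flight_hour - spread_hours, flight_hour):
        buckets[h % 168] += share  # wrap around the weekly cycle
    return buckets

# A flight departing at hour 5 with 12 units of work handled over 3 hours
b = hourly_buckets(5, 12.0, 3)
print(b[2], b[3], b[4], sum(b))  # 4.0 4.0 4.0 12.0
```

The modulo wrap-around reflects the identification of time 168 with time 0 of the following week.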
Denote the index set of time periods by J = {1, 2, ..., 168}, indexed by j, and the airline
index set by I = {airlines}, indexed by i. Let a_{ij} denote the estimated workload of
airline i at time period j; a_{ij} varies with time and cannot be estimated with certainty.
For convenience, we begin by assuming it to be certain and accurate; the stochastic
model is presented later in this chapter.
Denote the two identical terminals by T1 and T2, and define the decision variable x_i:

    x_i = 1 if airline i is served by T1;  x_i = 0 if airline i is served by T2.

The workloads assigned to the two terminals at time period j are then

    w_j^1 = \sum_{i \in I} a_{ij} x_i ,    w_j^2 = \sum_{i \in I} a_{ij} (1 - x_i) = A_j - w_j^1 ,
where A_j = \sum_{i \in I} a_{ij} = w_j^1 + w_j^2 is the total workload at time period j, a
given constant.
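The definitions of w_j^1, w_j^2 and A_j translate directly into code. A minimal sketch with invented toy numbers (three airlines, four periods):

```python
# a[i][j]: estimated workload of airline i in period j (toy data, 4 periods)
a = [
    [5, 0, 3, 2],   # airline 0
    [1, 4, 0, 2],   # airline 1
    [2, 2, 2, 2],   # airline 2
]
x = [1, 0, 1]  # airlines 0 and 2 assigned to T1, airline 1 to T2

J = range(len(a[0]))
w1 = [sum(a[i][j] * x[i] for i in range(len(a))) for j in J]  # T1 workload
A = [sum(a[i][j] for i in range(len(a))) for j in J]          # total per period
w2 = [A[j] - w1[j] for j in J]                                # w2_j = A_j - w1_j

print(w1)  # [7, 2, 5, 4]
print(w2)  # [1, 4, 0, 2]
```

As expected, w2 here equals airline 1's workload profile, the only airline assigned to T2.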
The balancing goals are twofold:

1) Distribute the overall workload equally between the two terminals (inter-balance).
2) Evenly distribute the workload along the time line at each terminal (intra-balance).

For inter-balance, the overall workload should be distributed approximately equally to
the two terminals, and the paired workloads at each time period should differ as little as
possible within the pair. This requirement is expressed as the constraint

    \sum_{j \in J} w_j^1 \approx \sum_{j \in J} w_j^2     (3.2)
Replacing the approximate sign in constraint (3.2) by a quantitative expression, the lower
and upper bounds for each terminal are given as fractions (0.5 - \epsilon) and
(0.5 + \epsilon) of the total workload, for a tolerance parameter \epsilon.
For example, when \epsilon = 0.05, the lower and upper bounds are 45% and 55% of the
total workload. It may not be possible to satisfy this constraint for some data sets.
However, for the data sets used in this study, which we obtained from real terminal
operations, infeasibility has not occurred. In case of infeasibility, we can adjust the value
of \epsilon; alternatively, \epsilon can assume different values for different data sets,
fixed according to the practical situation.
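For a given data set, whether the bounds can be met at all is a small subset-sum question. A brute-force feasibility check (an illustrative sketch; the function name and numbers are invented) is:

```python
from itertools import product

def bound_feasible(l, eps=0.05):
    """Return True if some 0/1 assignment x puts terminal T1's total
    workload within (0.5 - eps, 0.5 + eps) of the overall total L."""
    L = sum(l)
    lo, hi = (0.5 - eps) * L, (0.5 + eps) * L
    for x in product([0, 1], repeat=len(l)):
        if lo <= sum(li * xi for li, xi in zip(l, x)) <= hi:
            return True
    return False

print(bound_feasible([40, 30, 20, 10]))          # True: e.g. {40, 10} = 50
print(bound_feasible([90, 5, 3, 2], eps=0.05))   # False: no subset near 50
```

The second instance shows how one dominant airline can make the 45-55% split infeasible, which is when relaxing the tolerance becomes necessary.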
Let l_i = \sum_{j \in J} a_{ij} denote the total weekly workload of airline i, and
L = \sum_{j \in J} A_j the total workload. Since the weekly workload of T1 is

    \sum_{j \in J} w_j^1 = \sum_{i \in I} l_i x_i ,    x_i \in \{0, 1\} ,

the inter-balance constraint with \epsilon = 0.05 becomes

    0.45 L \le \sum_{i \in I} l_i x_i \le 0.55 L     (3.3)
For the intra-balance problem at each individual terminal, the objective function could be
chosen from the popular imbalance measures found in the literature (refer to Kumar
2001). Following the recommended min [average pairwise difference], the objective
function is

    Minimize  (1/|J|) \sum_{j \in J} | w_j^1 - w_j^2 |     (3.4)

The constant 1/|J| is dropped, and the objective function becomes min [sum pairwise
difference]:

    Minimize  \sum_{j \in J} | w_j^1 - w_j^2 |     (3.5)
    s.t.      w_j^2 = A_j - w_j^1   for all j \in J
              x \in \{0, 1\}
Lastly, the inter-balance formulation (3.3) and the intra-balance formulation (3.5) are
combined:

    Minimize  C \max_{j \in J} \max( w_j^1, w_j^2 ) + \sum_{j \in J} | w_j^1 - w_j^2 |
    s.t.      0.45 L \le \sum_{i \in I} l_i x_i \le 0.55 L     (3.6)
              x \in \{0, 1\} ,  w \ge 0

where C is a constant, defined as the weight of this dual-objective function. The value of
C affects the optimal solution and the computing time, and we performed a computational
experiment to observe this effect; based on the results, we will give a recommendation for
choosing C.
To transform formulation (3.6) into a linear form, new auxiliary decision variables z and
y_j are introduced:

    z \ge w_j^1 = \sum_{i \in I} a_{ij} x_i                      for all j \in J
    z \ge w_j^2 = A_j - \sum_{i \in I} a_{ij} x_i                for all j \in J
    y_j \ge w_j^1 - w_j^2 = 2 \sum_{i \in I} a_{ij} x_i - A_j    for all j \in J
    y_j \ge w_j^2 - w_j^1 = A_j - 2 \sum_{i \in I} a_{ij} x_i    for all j \in J
    y, z \ge 0
Additionally, considering the symmetry of the airline assignment between the two
terminals, any one of the airlines can be pre-specified to either one of the two terminals.
x1 = 1
which can reduce the running time by nearly 50%, as the search space is shrunk by half.
Next, the formulation (3.6) is rewritten as a mixed integer linear program (MILP). We
$$y, z \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
For clarity, all related notations are collected and listed as follows:
Parameters:
Decision variables:
$x_i$ : 1 if airline $i$ is assigned to terminal T1 (0 if assigned to terminal T2)
w1j , w 2j : Overall workload assigned at time period j , for terminal T1 and T2,
respectively
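To make the notation concrete, the following sketch evaluates the [OP] objective for a candidate assignment; the workload matrix and the weight C below are invented toy numbers, not the thesis data.

```python
# Toy evaluation of the [OP] objective for a fixed assignment x.
# a[i][j] = estimated workload of airline i in period j (invented numbers).
a = [
    [4, 1, 0, 2],   # airline 0
    [1, 3, 2, 0],   # airline 1
    [0, 2, 3, 1],   # airline 2
    [2, 0, 1, 3],   # airline 3
]
I, J = range(len(a)), range(len(a[0]))
A = [sum(a[i][j] for i in I) for j in J]       # total workload per period
l = [sum(a[i]) for i in I]                     # total workload per airline
L = sum(l)

def evaluate(x, C):
    """Return (objective, feasible) for assignment x (x[i] = 1 -> terminal T1)."""
    w1 = [sum(a[i][j] * x[i] for i in I) for j in J]
    w2 = [A[j] - w1[j] for j in J]
    z = max(max(w1), max(w2))                  # peak workload over both terminals
    y = [abs(w1[j] - w2[j]) for j in J]        # pairwise imbalance per period
    assigned = sum(l[i] * x[i] for i in I)
    feasible = 0.45 * L <= assigned <= 0.55 * L  # inter-balance constraint
    return C * z + sum(y), feasible

obj, ok = evaluate([1, 0, 0, 1], C=len(A))     # C = |J| weights the peak term
```

For this toy matrix, the assignment x = (1, 0, 0, 1) keeps the assigned workload within the 45–55% band and yields objective C·z + Σ_j y_j = 41 with C = 4.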
In [OP], the average workload is used as an estimate for $a_{ij}$. It is doubtful that the optimal solution remains valid when the workload varies randomly over time. To model this randomness, the workload is treated as a random variable with a distribution estimated from the historical data. Define $\xi^k := \{a_{ij}^k, \text{ for all } i, j\}$ as a certain realization, in which $a_{ij}^k$ is one possible value of the workload estimate $a_{ij}$. Define $P$ as the probability distribution of $\xi$. The stochastic problem is then formulated as follows:

$$\min_x \big\{ f(x) := \mathbb{E}_{\xi\sim P}\,[\,h(x, \xi)\,] \big\} \qquad (3.11)$$
$$h(x, \xi^k) = \min_{y^k, z^k}\ C z^k + \sum_{j\in J} y_j^k$$
$$\text{s.t.}\quad z^k \ge \sum_{i\in I} a_{ij}^k x_i \quad\text{for all } j$$
$$z^k \ge A_j^k - \sum_{i\in I} a_{ij}^k x_i \quad\text{for all } j$$
$$y_j^k \ge 2\sum_{i\in I} a_{ij}^k x_i - A_j^k \quad\text{for all } j$$
$$y_j^k \ge A_j^k - 2\sum_{i\in I} a_{ij}^k x_i \quad\text{for all } j$$
$$0.45 L^k \le \sum_{i\in I} l_i^k x_i \le 0.55 L^k$$
$$y^k, z^k \ge 0$$
It is very difficult, if not impossible, to solve this optimization problem. First, for a given decision vector $x$, computing the objective function requires the expected value of a random function over multiple dimensions, and there is no closed form to express it. Second, even if the expectation could be evaluated, it would still be computationally intractable to find the best assignment from among all the possible assignments. Monte Carlo sampling provides a practical alternative (refer to Shapiro and Mello 1998, Verweij et al. 2003, and Kleywegt et al. 2001). The basic idea is simple: a random sample is generated and the expectation is approximated by the corresponding sample average value. The idea of using sample
average approximations (SAA) for solving stochastic programs is a natural one and has
$$\min_x \Big\{ \hat f_N(x) := \frac{1}{N}\sum_{k=1}^{N} h(x, \xi^k) \Big\} \qquad (3.13)$$
Combining (3.12) and (3.13), the assignment problem can be modeled deterministically
again for a given sample. To differentiate this from the original model [OP], we call it
[SP].
$$[SP]\quad \text{Minimize}\quad \frac{1}{N}\sum_{k=1}^{N}\Big( C z^k + \sum_{j\in J} y_j^k \Big)$$
$$x \in \{0,1\}, \qquad x_1 = 1, \qquad y^k, z^k \ge 0$$
Let $\hat v_N$ and $\hat x_N$ be the optimal value and the optimal solution vector, respectively, of the model [SP]. Because the parameters are fixed from a random sample, the corresponding optimal solution is not the actual optimal solution of the original problem (3.11); it is only an approximation. Thus we would like to know to what extent this approximate solution is close to the true optimal solution.
It is apparent that the approximate optimal solution $\hat x_N$ converges to the exact optimal solution as the sample size grows. What matters in practice is the rate of convergence, and there are numerous publications discussing various aspects of convergence properties. Central limit theorem-type results give estimates of order $O(N^{-1/2})$ (e.g., King and Rockafellar (1993)). That is, in order to improve the accuracy of the estimator $\hat f_N(x)$ by one digit, the sample size needs to increase 100 times.
However, this pessimistic picture changes considerably once exponential convergence was found and proved theoretically. Shapiro and Mello (2001) and Kleywegt et al. (2001) showed that if the involved probability distributions are discrete, the asymptotic behavior is completely different from that of continuous distributions. They proved that with discrete distributions, as the sample size N increases, the SAA optimal value and solution $\hat v_N$ and $\hat x_N$ from (3.13) converge to those of the true stochastic problem (3.11) exponentially fast. The
surprising fact is that one does not need a very large sample size to find a good quality
solution. It suggests that a fairly good approximate solution to the true problem (3.11) can
be obtained by solving the SAA problem (3.14) with a modest sample size N.
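The sample-average idea can be sketched in a few lines; the uniform scenario generator and the fixed weight C below are simplified stand-ins for the thesis's estimated workload distributions.

```python
import random

def h(x, scenario, C=4):
    """Second-stage cost of assignment x under one sampled workload matrix."""
    J = range(len(scenario[0]))
    w1 = [sum(row[j] for row, xi in zip(scenario, x) if xi) for j in J]
    A = [sum(row[j] for row in scenario) for j in J]
    z = max(max(w1), max(A[j] - w1[j] for j in J))   # peak workload
    y = sum(abs(2 * w1[j] - A[j]) for j in J)        # total imbalance
    return C * z + y

def saa_estimate(x, n_airlines, n_periods, N, seed=0):
    """Approximate E[h(x, xi)] by a sample average over N random scenarios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        scenario = [[rng.uniform(0, 5) for _ in range(n_periods)]
                    for _ in range(n_airlines)]
        total += h(x, scenario)
    return total / N
```

Re-running `saa_estimate` with growing N illustrates how the sample average stabilizes around the true expectation.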
With larger N, the objective function of the SAA problem tends to be a more accurate estimate of the true stochastic objective, and the optimal solution of the SAA problem tends to the true optimal solution. However, the computational effort grows at least linearly, and often exponentially, with the sample size. Thus, in the choice of the
sample size, the trade-off between the solution quality and the computational effort must be taken into account.
In practice, the SAA method involves finding solutions of (3.14) with independent
samples for any given sample size. Santoso et al. (2003) developed such a procedure, in
which ways to compute the statistical lower and upper bounds are further suggested.
Using this procedure, we try out the problem [SP] on a data set which comprises 50
airlines (see, in Section 5.1, the data set ImD). The computational results are reported in
Table 3.1, which reveal that the gap decreases very quickly as sample size increases.
Noticeably, when the sample size is greater than 30, the gap becomes less than 1%; when
the size is greater than 60, the gap is less than 0.1%. Based on these observations, we set the sample size N to 100.
4 Solution Methodology
The true stochastic problem (3.11) is prohibitively difficult, and the SAA method is employed to give an approximation. For the target problem [SP], there are $|I|$ binary decision variables, and the number of constraints is $(4|J|+2)N$, so the problem size may increase rapidly with the sample size N. It is well known that the efficiency of solving a mixed integer program depends largely on how good an approximation its underlying linear program (LP) relaxation provides. A bad
formulation in which the LP provides a very loose bound would make a mixed integer
program very difficult to solve and very often the problem becomes computationally
intractable. By exploiting the problem structure, Sections 4.1-4.3 introduce some solution
methodologies, to deal with these two considerations. Section 4.4 introduces an extension
of the model for more practical application in industry. The software CPLEX is briefly described at the end of the chapter.
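As a quick arithmetic check of this growth, the constraint count (4|J|+2)N can be tabulated for a weekly hourly timeline (|J| = 168, as for the ExD data set; the sample sizes below are illustrative).

```python
# Number of constraints in [SP]: four scenario constraints per period
# plus the two balance bounds, repeated for each of the N scenarios.
def sp_constraints(n_periods, n_samples):
    return (4 * n_periods + 2) * n_samples

J = 24 * 7  # 168 hourly periods in a week
sizes = {N: sp_constraints(J, N) for N in (1, 10, 100)}
# 674 constraints per scenario, so 67,400 at the N = 100 used later.
```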
The mathematical models have been formulated as mixed integer programs [OP] and
[SP]. In this section, we reformulate the models in order to reduce the solution time. The
reformulations change the structure of the original models to make the computation faster,
while maintaining the original relationships between the objective function and the
constraints. Recall [SP] is an extension of [OP] that adds the stochastic scenarios. For
convenience, we reformulate [OP] first, then the two reformulations of [SP] are listed
The structure of the original model [OP] is somewhat special. The constraints (3.8a) and
(3.8b) are a pair of restrictions on the decision variable z to capture the maximum
workload at all time periods. The constraints (3.9a) and (3.9b) are another pair of
restrictions on the decision variable y to capture the absolute value of a difference. Each
For constraints (3.8a) and (3.8b), auxiliary variables $s_j$ and $t_j$ are introduced, for all $j$:

$$z - s_j = \sum_{i\in I} a_{ij} x_i, \quad s_j \ge 0 \qquad (4.1a)$$
$$z - t_j = A_j - \sum_{i\in I} a_{ij} x_i, \quad t_j \ge 0 \qquad (4.1b)$$

Subtracting (4.1b) from (4.1a):

$$t_j - s_j = 2\sum_{i\in I} a_{ij} x_i - A_j$$
$$\Rightarrow\quad t_j = s_j + 2\sum_{i\in I} a_{ij} x_i - A_j \ge 0$$
$$\Rightarrow\quad s_j + 2\sum_{i\in I} a_{ij} x_i \ge A_j \qquad (4.1)$$
$$z = s_j + \sum_{i\in I} a_{ij} x_i \qquad (4.2)$$
Similarly for constraints (3.9a) and (3.9b), auxiliary variables p j and q j are introduced,
for all j :
$$y_j - p_j = 2\sum_{i\in I} a_{ij} x_i - A_j, \quad p_j \ge 0 \qquad (4.3a)$$
$$y_j - q_j = A_j - 2\sum_{i\in I} a_{ij} x_i, \quad q_j \ge 0 \qquad (4.3b)$$

Subtracting (4.3b) from (4.3a):

$$q_j - p_j = 4\sum_{i\in I} a_{ij} x_i - 2A_j$$
$$\Rightarrow\quad q_j = p_j + 4\sum_{i\in I} a_{ij} x_i - 2A_j \ge 0$$
$$\Rightarrow\quad p_j + 4\sum_{i\in I} a_{ij} x_i \ge 2A_j \qquad (4.3)$$
$$y_j = p_j + 2\sum_{i\in I} a_{ij} x_i - A_j \qquad (4.4)$$
iI
Substituting the decision variables $z$ and $y$ by (4.2) and (4.4), respectively, in the objective function (3.7), and replacing the constraints (3.8a)-(3.9b) by (4.1) and (4.3), the remaining constraints are

$$0.45L \le \sum_{i\in I} l_i x_i \le 0.55L$$
$$s, p \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
The objective function becomes

$$C\Big(s_1 + \sum_{i\in I} a_{i1} x_i\Big) + \sum_{j\in J}\Big(p_j + 2\sum_{i\in I} a_{ij} x_i - A_j\Big) = C s_1 + \sum_{j\in J} p_j + \sum_{i\in I} \big(C a_{i1} + 2 l_i\big) x_i - L$$

Denote $b_i = C a_{i1} + 2 l_i$; dropping the constant $L$ from the objective function,

$$[Hy]\quad \text{Minimize}\quad \sum_{i\in I} b_i x_i + C s_1 + \sum_{j\in J} p_j$$
$$\text{s.t.}\quad (4.1),\ (4.3), \qquad 0.45L \le \sum_{i\in I} l_i x_i \le 0.55L$$
$$s, p \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
In this formulation, the decision variable $z$ is replaced by the new variable $s$, and the constraints are rewritten, but the number of constraints does not decrease. The decision variable $y$ is replaced by the new variable $p$, and the number of constraints is reduced by half.
Recall that the constraints for $z$ and $y$ both involve the term $\sum_{i\in I} a_{ij} x_i$, the parameter $A_j$, and their combinations. Starting from (3.8a),
$$z \ge \sum_{i\in I} a_{ij} x_i \quad\text{for all } j$$
$$\Rightarrow\quad 2z \ge 2\sum_{i\in I} a_{ij} x_i$$
$$\Rightarrow\quad 2z - A_j \ge 2\sum_{i\in I} a_{ij} x_i - A_j \qquad (4.5a)$$

and similarly, from (3.8b),

$$2z - A_j \ge A_j - 2\sum_{i\in I} a_{ij} x_i \qquad (4.5b)$$
Comparing (4.5a) to (3.9a) and (4.5b) to (3.9b), their right-hand sides (RHS) are exactly the same. Thus, instead of (3.8a) and (3.8b), we can tighten the restrictions on $z$. At the same time, $z$ is minimized in the objective function (3.7); therefore the constraints (4.5) are equivalent to (3.8a) and (3.8b), without any relaxation. We replace constraints (3.8a) and (3.8b) with (4.5). This reformulation is denoted as [Sz].
$$[Sz]\quad \text{Minimize}\quad 2\sum_{i\in I} l_i x_i + C z + \sum_{j\in J} p_j$$
$$\text{s.t.}\quad 0.45L \le \sum_{i\in I} l_i x_i \le 0.55L$$
$$z, p \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
We have obtained three different formulations so far: [OP], [Hy] and [Sz]. Considering the stochastic parameters, these three formulations are extended to SAA formulations. [SP] was already presented in the previous section, and the SAA extensions of [Hy] and [Sz] are listed as follows, again denoted [Hy] and [Sz], respectively.
$$[Hy]\quad \text{Minimize}\quad \sum_{i\in I} f_i x_i + \frac{1}{N}\sum_{k=1}^{N}\Big( C s_1^k + \sum_{j\in J} p_j^k \Big)$$
$$s, p \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$

where $f_i = \dfrac{1}{N}\displaystyle\sum_{k=1}^{N} b_i^k = \dfrac{1}{N}\displaystyle\sum_{k=1}^{N} \big( C a_{i1}^k + 2 l_i^k \big)$.
$$[Sz]\quad \text{Minimize}\quad \frac{1}{N}\sum_{k=1}^{N}\Big( 2\sum_{i\in I} l_i^k x_i + C z^k + \sum_{j\in J} p_j^k \Big)$$
$$z, p \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
The sample average approximation method transforms the true problem (3.11) into the deterministic problem [SP]. Following a popular approach in the literature, we employ Benders decomposition to solve this two-stage problem [SP]. However, the computational results show that the convergence is slow, so an accelerated variant is developed. The classical Benders decomposition is reviewed first in Section 4.2.1, and the general application framework is presented in Section 4.2.2. In Section 4.2.3, a new Benders feasibility cut is developed.
Consider a mixed integer program of the general form

$$\text{Minimize}\quad cx + hy$$
$$\text{s.t.}\quad Ax = b$$
$$Tx + Wy = q \qquad (4.6)$$
$$x \ge 0, \quad y \ge 0$$
Generally, the computational difficulty arises from the integer variables $x$. In some applications, if such complicating variables are fixed, the resulting problem becomes a linear program (LP), which is relatively easy. The Benders decomposition method assigns trial values to these variables and finds the corresponding best solution. This solution is then examined; if it is an optimal solution of the original problem, the algorithm stops, and otherwise a new trial solution is generated.
Benders decomposition begins by reformulating (4.6) as a problem that includes only a subset of variables, projecting out the others. Fixing $x$ at $\bar x$, which satisfies $A\bar x = b$, the problem becomes:

$$\text{Minimize}\quad z(\bar x) = hy + c\bar x$$
$$\text{s.t.}\quad Wy = q - T\bar x \qquad (4.7)$$
$$y \ge 0$$

The feasibility of the above problem is considered first. The objective function is left out:

$$\text{Minimize}\quad 0$$
$$\text{s.t.}\quad Wy = q - T\bar x \qquad (4.8)$$
$$y \ge 0$$

Its dual is

$$\text{Maximize}\quad \pi (q - T\bar x) \qquad (4.9)$$
$$\text{s.t.}\quad \pi W \le 0$$
The primal problem (4.8) is feasible if and only if the dual objective value is less than or equal to 0 for all feasible $\pi$, with equality for at least one $\pi$. (Note that the dual is always feasible, since $\pi = 0$ satisfies $\pi W \le 0$.) That is,

$$\pi (q - T\bar x) \le 0 \quad\text{for all } \pi : \pi W \le 0 \qquad (4.10)$$

Equivalently, using the extreme rays $\{\pi_f^*, f \in F\}$ of the cone $C = \{\pi : \pi W \le 0\}$,

$$\pi_f^* (q - Tx) \le 0, \quad f \in F \qquad (4.11)$$

The specific $\bar x$ is replaced by the general $x$ variables because the extreme rays of $C$ are independent of $x$. Condition (4.11) is then a sufficient condition for the primal to be feasible. Multiplying the constraint in (4.8) by any extreme ray $\pi_f^*$, with $y \ge 0$,

$$\pi_f^* (q - Tx) = \pi_f^* W y \le 0, \quad f \in F,$$

so (4.11) is also a necessary condition for the primal to be feasible. For the optimality part, consider the problem with $x$ fixed at a feasible $\bar x$:
$$\text{Minimize}\quad z(\bar x) = hy$$
$$\text{s.t.}\quad Wy = q - T\bar x \qquad (4.12)$$
$$y \ge 0$$

Its dual is

$$\text{Maximize}\quad \pi (q - T\bar x) \qquad (4.13)$$
$$\text{s.t.}\quad \pi W \le h$$
Since the primal problem (4.12) is feasible for the given $\bar x$, the dual (4.13) is either infeasible or has an optimal solution. In the case of dual infeasibility, (4.12) is unbounded ($-\infty$), and so is the original problem (4.6). If the dual has an optimal solution, then by strong duality the primal has the same optimal value as the dual. That is,

$$z \ge \pi_o^* (q - T\bar x), \quad o \in O \qquad (4.14)$$

where $\{\pi_o^*, o \in O\}$ is the collection of extreme points of $Q = \{\pi : \pi W \le h\}$, and the equality holds for at least one extreme point. Since the extreme points of $Q$ do not depend on the value of $x$, $\bar x$ is relaxed to the general $x$, and (4.14) should be satisfied at all times.
Given the above results, the original problem (4.6) can be reformulated to include the feasibility conditions (4.11) and the optimality conditions (4.14), written as follows:

$$\min_{x,\eta}\quad z = cx + \eta$$
$$\text{s.t.}\quad Ax = b$$
$$\eta \ge \pi_o^* (q - Tx), \quad o \in O \qquad (4.15)$$
$$\pi_f^* (q - Tx) \le 0, \quad f \in F$$
$$x \ge 0, \quad \eta \in \mathbb{R}$$
Compared to the original problem, the reformulation (4.15) eliminates all the non-integer
variables y , but we have also added an extremely large number of constraints. We call
this problem (4.15) the full master problem. However, only a subset of these constraints is binding at the optimal solution, and the Benders decomposition algorithm generates them one by one from an initial relaxed master problem, until all active constraints are added and the optimal solution is achieved.
The Benders decomposition algorithm proceeds as follows. At each iteration, the Benders
cuts generated so far are added to the following Benders relaxed master problem (4.16).
$$\text{Minimize}\quad cx + \eta$$
$$\text{s.t.}\quad Ax = b$$
$$\eta \ge \pi_o^* (q - Tx), \quad \text{some } o \in O \qquad (4.16)$$
$$\pi_f^* (q - Tx) \le 0, \quad \text{some } f \in F$$
$$x \ge 0, \quad \eta \in \mathbb{R}$$
The solution of the relaxed master problem provides a trial solution $\bar x$ and a trial value $\bar\eta$. The Benders subproblem is then solved:

$$\text{Minimize}\quad z = hy$$
$$\text{s.t.}\quad Wy = q - T\bar x \qquad (4.17)$$
$$y \in \mathbb{R}_+$$
If (4.17) is infeasible, then a new feasibility cut (4.11) is obtained and added to (4.16). If (4.17) has an optimal value $z^* \le \bar\eta$, the optimal solution of the original problem is reached. Otherwise $z^* > \bar\eta$, and a new optimality cut (4.14) is generated and added to (4.16). At this point the procedure repeats. More details can be found in the literature.
As shown in Figure 4-1, the problem [SP] has two outstanding characteristics. One is the block structure, which is common in scenario-based stochastic problems: each block corresponds to one scenario and can be treated separately once a decomposition method is employed. The other characteristic is that the block structure also exists within every block. Thus, this problem can be decomposed further than general scenario-based problems. We should note that the binary variables $x$ are the only complicating decision variables; $y$ and $z$ are continuous recourse variables. It was shown in the previous section that, when the complicating variables are fixed, the resulting problem becomes a much easier LP. Especially for [SP], when $x$ is fixed, the resulting problem is trivial, since $y$ and $z$ can be easily determined numerically (shown in the closed forms (4.18) and (4.19) below).
These two characteristics show that when the Benders decomposition is employed to
solve [SP], the decomposed subproblem is extremely easy. The complete algorithm is
described as follows:
Initialization step: Set the lower and upper bounds $lb = -\infty$ and $ub = +\infty$, respectively.

Step 1: Solve the relaxed master problem

$$[MP]\quad \text{Minimize}\quad \eta$$
$$\text{s.t.}\quad [\text{Benders Cuts}]$$
$$\eta \in \mathbb{R}, \qquad x \in \{0,1\}, \qquad x_1 = 1$$

Assume the optimal solutions are $\hat\eta$ and $\hat x$. Update the lower bound $lb = \hat\eta$.

Step 2: For $k = 1, 2, \dots, N$, subproblems [SPk], corresponding to $\hat x$, are set up as:

$$[SP_k]\quad \text{Minimize}\quad \theta^k = C z^k + \sum_{j\in J} y_j^k$$
$$\text{s.t.}\quad z^k \ge \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j \qquad :\ \mu_j^{1k}$$
$$z^k \ge A_j^k - \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j \qquad :\ \nu_j^{1k}$$
$$y_j^k \ge 2\sum_{i\in I} a_{ij}^k \hat x_i - A_j^k \quad\text{for all } j \qquad :\ \mu_j^{2k}$$
$$y_j^k \ge A_j^k - 2\sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j \qquad :\ \nu_j^{2k}$$
$$y^k, z^k \ge 0$$
The subproblems [SPk], $k = 1, 2, \dots, N$, are always feasible, and the optimal solutions are

$$z^{k*} = \max_{j\in J}\ \Big\{ \sum_{i\in I} a_{ij}^k \hat x_i,\ \ A_j^k - \sum_{i\in I} a_{ij}^k \hat x_i \Big\} \qquad (4.18)$$
$$y_j^{k*} = \Big|\, 2\sum_{i\in I} a_{ij}^k \hat x_i - A_j^k \,\Big| \qquad (4.19)$$

Assume the corresponding values of the dual variables are $\mu_j^{1k}, \nu_j^{1k}, \mu_j^{2k}, \nu_j^{2k}$ and the objective value is $\theta^k$:

$$\theta = \frac{1}{N}\sum_{k=1}^{N} \theta^k$$

Step 3: If $ub > \theta$, update the upper bound $ub = \theta$ and the incumbent solution $\tilde x = \hat x$. If $ub - lb$ is small enough, stop; otherwise go to step 4.

Step 4: Generate the optimality cut

$$\eta \ \ge\ \frac{1}{N}\sum_{k=1}^{N} \sum_{j\in J} \Big( \mu_j^{1k} \sum_{i\in I} a_{ij}^k x_i + \nu_j^{1k} \big( A_j^k - \sum_{i\in I} a_{ij}^k x_i \big) + \mu_j^{2k} \big( 2\sum_{i\in I} a_{ij}^k x_i - A_j^k \big) + \nu_j^{2k} \big( A_j^k - 2\sum_{i\in I} a_{ij}^k x_i \big) \Big)$$

add it to [MP], and return to Step 1.
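The loop can be illustrated on a toy problem whose subproblem has the same absolute-value structure and is therefore solvable in closed form together with its dual multipliers, in the spirit of (4.18)-(4.19). This is only a sketch of the mechanics, not the thesis implementation: the master problem is solved by brute-force enumeration over the binary x, which is viable only for tiny instances (the thesis uses CPLEX).

```python
from itertools import product

# Toy Benders: minimise h(x) = sum_j |q_j - (T x)_j| over binary x.
# The subproblem min{ y+ + y- : y+ - y- = r, y >= 0 } has value sum_j |r_j|,
# with dual pi_j = +1 if r_j >= 0 else -1 (closed form, as in (4.18)-(4.19)).
T = [[3, 1, 0], [1, 2, 2], [0, 1, 3]]
q = [2, 3, 1]

def residual(x):
    return [q[j] - sum(T[j][i] * x[i] for i in range(len(x)))
            for j in range(len(q))]

def benders(tol=1e-9):
    cuts, ub, best = [], float("inf"), None
    while True:
        # Master: pick the binary x minimising the cut-based bound on eta.
        lb, xhat = min(
            (max([0.0] + [sum(p[j] * r for j, r in enumerate(residual(x)))
                          for p in cuts]), x)
            for x in product((0, 1), repeat=3))
        # Subproblem in closed form: optimal value and dual multipliers.
        r = residual(xhat)
        pi = [1.0 if rj >= 0 else -1.0 for rj in r]
        val = sum(abs(rj) for rj in r)
        if val < ub:
            ub, best = val, xhat        # new incumbent
        if ub - lb <= tol:
            return best, ub             # bounds met: optimal
        cuts.append(pi)                 # optimality cut: eta >= pi . r(x)
```

Each iteration either improves the incumbent or adds a cut that tightens the master's lower bound, so the gap closes after finitely many steps.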
While the Benders decomposition algorithm is a finite scheme, the number of iterations required may be too large in practice. For [SP], for example, the convergence is unsatisfactory. To accelerate it, aggregate variables are introduced:
$$\bar z = \frac{1}{N}\sum_{k=1}^{N} z^k, \qquad \bar y_j = \frac{1}{N}\sum_{k=1}^{N} y_j^k \quad\text{for all } j$$

These averaging relations enter the formulation as constraints:

$$\bar z = \frac{1}{N}\sum_{k=1}^{N} z^k \qquad (SP2.2)$$
$$\bar y_j = \frac{1}{N}\sum_{k=1}^{N} y_j^k \quad\text{for all } j \qquad (SP2.3)$$
$$\bar y, y, \bar z, z \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$
Summing the constraints on $z^k$ over all scenarios gives

$$\sum_{k=1}^{N} z^k \ \ge\ \sum_{k=1}^{N} \sum_{i\in I} a_{ij}^k x_i \quad\text{for all } j \qquad (SP2.4a)$$

Dividing both sides by $N$ and switching the order of summation on the right-hand side:

$$\frac{1}{N}\sum_{k=1}^{N} z^k \ \ge\ \sum_{i\in I} \Big( \frac{1}{N}\sum_{k=1}^{N} a_{ij}^k \Big) x_i \quad\text{for all } j$$

Denote $\bar a_{ij} = \frac{1}{N}\sum_{k=1}^{N} a_{ij}^k$; we have the inequalities for $\bar z$:

$$\bar z \ \ge\ \sum_{i\in I} \bar a_{ij} x_i \quad\text{for all } j \qquad (SP2.4a^*)$$
Putting all the constraints (SP2.4a*), (SP2.4b*), (SP2.5a*) and (SP2.5b*) into the formulation, together with

$$\bar z = \frac{1}{N}\sum_{k=1}^{N} z^k \qquad (SP3.4)$$
$$\bar y_j = \frac{1}{N}\sum_{k=1}^{N} y_j^k \quad\text{for all } j \qquad (SP3.5)$$
$$\bar y, y, \bar z, z \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$

gives the formulation [SP3].
Comparing [SP3] to the original [SP], the objective function (SP3.1) with the constraints (SP3.4) and (SP3.5) gives the same value as the original objective function. The constraints (SP3.6a)-(SP3.8) are exactly the same as the constraints in [SP]. The only difference lies in the new constraints (SP3.2a)-(SP3.3b), which are relaxations of (SP3.6a)-(SP3.7b). Thus, the objective (SP3.1) and the constraints (SP3.2a)-(SP3.3b) form a deterministic model like [OP], in which the parameter $\bar a_{ij}$ is the mean workload. Hence, a two-stage split is applied: the decision variables $x$ and $\bar z, \bar y$ are put in the first stage, and $z, y$ in the second stage.
$$[MP^*]\quad \text{Minimize}\quad C \bar z + \sum_{j\in J} \bar y_j$$
$$\bar y, \bar z \ge 0, \qquad x \in \{0,1\}, \qquad x_1 = 1$$

For given first-stage values $\hat{\bar z}, \hat{\bar y}, \hat x$, the subproblem becomes:
$$[SP^*]\quad \text{Minimize}\quad \frac{1}{N}\sum_{k=1}^{N}\Big( C z^k + \sum_{j\in J} y_j^k \Big) \qquad (SP^*.1)$$
$$\text{s.t.}\quad \sum_{k=1}^{N} z^k = N \hat{\bar z} \qquad (SP^*.2)$$
$$\sum_{k=1}^{N} y_j^k = N \hat{\bar y}_j \quad\text{for all } j \qquad (SP^*.3)$$
$$z^k \ge \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad (SP^*.4a)$$
$$z^k \ge A_j^k - \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad (SP^*.4b)$$
$$y_j^k \ge 2\sum_{i\in I} a_{ij}^k \hat x_i - A_j^k \quad\text{for all } j, k \qquad (SP^*.5a)$$
$$y_j^k \ge A_j^k - 2\sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad (SP^*.5b)$$
$$y, z \ge 0$$
In constraints (SP*.2), $\hat{\bar z}$ represents the maximal value of the average workload, but the corresponding $z^k$ represent the absolute maximal values of the individual scenarios, so the equations (SP*.2) cannot be satisfied unless the solution is optimal, and the subproblem [SP*] is generally infeasible. In this case, a Benders feasibility cut needs to be developed. Consider the feasibility problem, with the dual variables listed after the colons:
$$[FP]\quad \text{Minimize}\quad 0$$
$$\text{s.t.}\quad \sum_{k=1}^{N} z^k = N \hat{\bar z} \qquad :\ \lambda$$
$$\sum_{k=1}^{N} y_j^k = N \hat{\bar y}_j \quad\text{for all } j \qquad :\ \lambda_j$$
$$z^k \ge \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad :\ \mu_j^{k}$$
$$z^k \ge A_j^k - \sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad :\ \nu_j^{k}$$
$$y_j^k \ge 2\sum_{i\in I} a_{ij}^k \hat x_i - A_j^k \quad\text{for all } j, k \qquad :\ \rho_j^{k}$$
$$y_j^k \ge A_j^k - 2\sum_{i\in I} a_{ij}^k \hat x_i \quad\text{for all } j, k \qquad :\ \sigma_j^{k}$$
$$y, z \ge 0$$
$$[DP]\quad \text{Maximize}\quad N \hat{\bar z} \lambda + \sum_{j\in J} N \hat{\bar y}_j \lambda_j + \sum_{k=1}^{N} \sum_{j\in J} \Big( \mu_j^{k} \sum_{i\in I} a_{ij}^k \hat x_i + \nu_j^{k} \big( A_j^k - \sum_{i\in I} a_{ij}^k \hat x_i \big) + \rho_j^{k} \big( 2\sum_{i\in I} a_{ij}^k \hat x_i - A_j^k \big) + \sigma_j^{k} \big( A_j^k - 2\sum_{i\in I} a_{ij}^k \hat x_i \big) \Big)$$
$$\text{s.t.}\quad (z^k):\quad \lambda + \sum_{j\in J} \mu_j^{k} + \sum_{j\in J} \nu_j^{k} \le 0 \quad\text{for all } k$$
$$(y_j^k):\quad \lambda_j + \rho_j^{k} + \sigma_j^{k} \le 0 \quad\text{for all } j, k$$
$$\lambda, \lambda_j \ \text{unrestricted}, \qquad \mu, \nu, \rho, \sigma \ge 0$$
Before reaching optimality, [FP] is infeasible, and the objective function of [DP] is unbounded along an extreme ray; the corresponding Benders feasibility cut, with $\hat x$ relaxed to the general $x$, is

$$0 \ \ge\ N \bar z \lambda + \sum_{j\in J} N \bar y_j \lambda_j + \sum_{k=1}^{N} \sum_{j\in J} \Big( \mu_j^{k} \sum_{i\in I} a_{ij}^k x_i + \nu_j^{k} \big( A_j^k - \sum_{i\in I} a_{ij}^k x_i \big) + \rho_j^{k} \big( 2\sum_{i\in I} a_{ij}^k x_i - A_j^k \big) + \sigma_j^{k} \big( A_j^k - 2\sum_{i\in I} a_{ij}^k x_i \big) \Big)$$

To identify dual variables that form an extreme ray, we note that the optimal values (4.18) and (4.19) can easily be found, and we choose their corresponding dual variables to be one, with the others zero, so that

$$\sum_{j\in J} \big( \mu_j^{k} + \nu_j^{k} \big) = 1 \quad\text{for all } k, \qquad \rho_j^{k} + \sigma_j^{k} = 1 \quad\text{for all } j, k.$$

We therefore choose $\lambda = -1$ and $\lambda_j = -1$ (for all $j$) to satisfy the constraints of [DP].
It is interesting to note that although the master problem [MP*] is more difficult to solve
than the original [MP] due to the increased number of variables and constraints, the
convergence is accelerated considerably and the total solution time is reduced (refer to
Section 5.2.3).
To improve the computational efficiency of solving [SP], reformulations and the Benders decomposition-based algorithms were presented in the previous sections. However, for large problem instances, an approximation method is suggested, rather than finding the exact optimal solution; the advantage is a substantial reduction in solution time.
The main idea for this approximation is to separate all airlines into two groups. One
group is assigned optimally first by solving a small size [SP], then based on the results,
the other group of airlines is assigned by solving [SP] again, which is still a small-size problem.
Of the two groups, one is called the big airlines, which carries much more workload than the other group, called the small airlines. The workload discrepancy among the airlines
originates from several factors, such as the number of flights, the function type
(passenger flight or cargo flight), and the aircraft type, etc. Generally more flights mean
more workload; a freighter flight contains more cargo than a passenger flight; and
different aircraft types have different cargo capacities. All these factors have been taken into account in the workload estimation.
The idea of such airline separation comes from the ABC analysis in inventory
management. For an airport cargo terminal, airlines are its customers. The workload,
which is measured by the cargo quantity, is the demand. 80% of workload may come
from a few big airlines. Figure 4-2 shows an example, in which the top 20 airlines account for more than 80% of the workload, and the remaining 30 small airlines account for less than 20%.
[Figure 4-2: cumulative percent of total workload versus the 50 airlines, sorted in descending order of workload]
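The separation itself is simple to sketch (the workload numbers and the 80% threshold below are illustrative):

```python
def split_airlines(workloads, threshold=0.8):
    """Split airline indices into (big, small) by cumulative workload share.

    Airlines are sorted in descending order of estimated average workload;
    the big group is the smallest prefix covering `threshold` of the total,
    in the style of an ABC analysis.
    """
    order = sorted(range(len(workloads)), key=lambda i: -workloads[i])
    total, cum, big = sum(workloads), 0.0, []
    for i in order:
        big.append(i)
        cum += workloads[i]
        if cum >= threshold * total:
            break
    small = [i for i in order if i not in big]
    return big, small

big, small = split_airlines([50, 30, 8, 6, 4, 2])
```

Here the two largest airlines already cover 80% of the total, so they form the big group and the remaining four are the small group.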
Step 0: Separate the airlines into two groups: sort all the airlines according to the estimated average workload in descending order; draw the ABC analysis graph like Figure 4-2; the big airlines $BI$ and small airlines $SI$ are separated at the chosen cutoff.

Step 1: Solve [SP] for the big airlines $BI$ only, obtaining the solution $\hat x^b$; the induced assignment is $T1 = \{i \mid \hat x_i^b = 1, i \in BI\}$ and $T2 = \{i \mid \hat x_i^b = 0, i \in BI\}$.
Step 2: For $SI$, solve [SP] again by pre-fixing the assignment on $BI$. There is some pre-assigned workload at each time period, $\hat w_j^k = \sum_{i\in T1} a_{ij}^k \hat x_i^b$, for all $j$. The second-stage model is:
$$[SP\text{-}s]\quad \text{Minimize}\quad \frac{1}{N}\sum_{k=1}^{N}\Big( C z^k + \sum_{j\in J} y_j^k \Big)$$
$$\text{s.t.}\quad z^k \ge \sum_{i\in SI} a_{ij}^k x_i^s + \hat w_j^k \quad\text{for all } j, k$$
$$z^k \ge A_j^k - \sum_{i\in SI} a_{ij}^k x_i^s + \big( \hat A_j^k - \hat w_j^k \big) \quad\text{for all } j, k$$
$$y_j^k \ge 2\sum_{i\in SI} a_{ij}^k x_i^s - A_j^k + \big( 2\hat w_j^k - \hat A_j^k \big) \quad\text{for all } j, k$$
$$y_j^k \ge A_j^k - 2\sum_{i\in SI} a_{ij}^k x_i^s + \big( \hat A_j^k - 2\hat w_j^k \big) \quad\text{for all } j, k$$
$$0.45\big( L^k + \hat L^k \big) \le \sum_{i\in SI} l_i^k x_i^s + \hat l^k \le 0.55\big( L^k + \hat L^k \big) \quad\text{for all } k$$
$$y, z \ge 0, \qquad x^s \in \{0,1\}$$

where the hatted quantities $\hat A_j^k$, $\hat L^k$ and $\hat l^k$ denote the big-airline workloads fixed in Step 1 (per period, overall, and pre-assigned to terminal T1, respectively).
Differing from [SP], the constraint $x_1^s = 1$ is eliminated in [SP-s], as no airline can be pre-specified in the second stage. The assignment of the big airlines in Step 1 is the basis for the assignment of the small airlines in Step 2. Hence, if the first-stage solution coincides with the exact optimal assignment of the big airlines, this two-stage procedure returns the exact optimal solution. The computational results in Section 5.2.4 show the good solution quality of such an approximation method.
The airline assignment is a decision made at the strategic level, and it is generally fixed for the long term once made. In practice, however, air cargo terminals
have to face some unexpected incidents. For example, they may seek new service
contracts with new airlines, or the current airlines may increase their flights. Or adversely,
some airlines may end the service contract, or reduce their flights. When these incidents
occur, the designed workload balancing may not be valid anymore. To handle these
situations, the seemingly best approach is to reshuffle the airlines and to re-optimize the
workload balancing problem, but it is not practical, since the airline assignment should
not be changed too often. In this section, we develop an extended solution from the
previous workload balancing problem. This solution is practical to handle the unexpected
incidents.
When the incidents happen, we only adjust some part of the airlines, rather than all the
airlines. This approach is termed as the partial re-assignment. It should be easier to solve
and simpler to operate than the overall re-optimization. An intuitive idea for this partial re-assignment is to restrict the number of airlines whose assignment can be changed, so that the current schedule is not changed too much. Hence a partial re-assignment can be capped by a decision made at the tactical level.
Assume the current assignment is represented by the solution x 0 , which has been
obtained from the previous workload balancing problem. After some changes occur, such
as when airlines (flights) are added or removed, the airline assignment is re-examined,
and the balancing problem is re-optimized. The same idea also applies to similar cases. The number of changed assignments is restricted by the constraint
$$\sum_{i\in I'} \big| \tilde x_i - x_i^0 \big| \ \le\ P\, |I'| \qquad (*)$$
where the set $I'$ is the set of airlines that remain from the previous assignment to the new assignment, and the parameter $P$ is defined as the proportion of the total number of airlines whose original assignment can be changed, $P \in [0, 100\%]$. For the two extreme cases, $P = 0$ keeps the current assignment unchanged, while $P = 100\%$ allows a complete re-optimization.

Based on the original overall workload balancing problem [SP], the new constraint (*) is added, and the model is denoted [TP]:
$$[TP]\quad \text{Minimize}\quad \frac{1}{N}\sum_{k=1}^{N}\Big( C z^k + \sum_{j\in J} y_j^k \Big)$$
$$\text{s.t.}\quad z^k \ge \sum_{i\in I'} a_{ij}^k \tilde x_i \quad\text{for all } j, k$$
$$z^k \ge A_j^k - \sum_{i\in I'} a_{ij}^k \tilde x_i \quad\text{for all } j, k$$
$$y_j^k \ge 2\sum_{i\in I'} a_{ij}^k \tilde x_i - A_j^k \quad\text{for all } j, k$$
$$y_j^k \ge A_j^k - 2\sum_{i\in I'} a_{ij}^k \tilde x_i \quad\text{for all } j, k$$
$$\sum_{i\in I'} \big| \tilde x_i - x_i^0 \big| \le P\, |I'| \qquad (*)$$
$$y, z \ge 0, \qquad \tilde x \in \{0,1\}$$
$x^0$ and $\tilde x$ are both binary and $x^0$ is known. If $x_i^0 = 1$, then

$$\big| \tilde x_i - x_i^0 \big| = \big| \tilde x_i - 1 \big| = 1 - \tilde x_i;$$

if $x_i^0 = 0$, then

$$\big| \tilde x_i - x_i^0 \big| = \big| \tilde x_i - 0 \big| = \tilde x_i.$$
Therefore, the model [TP] is always a mixed integer linear program after transforming
the absolute terms. It can employ all the methodologies presented in Chapter 4.
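The case split above collapses to a single linear expression for binary values, $|\tilde x_i - x_i^0| = x_i^0 + (1 - 2 x_i^0)\,\tilde x_i$. A small sketch with invented assignments uses it to check the budget constraint (*):

```python
def changes(x_new, x_old):
    """Linearised count of reassigned airlines: sum_i |x_new_i - x_old_i|.

    For binary values, |x_new - x_old| = x_old + (1 - 2*x_old)*x_new,
    which is how the absolute terms in constraint (*) become linear.
    """
    return sum(x0 + (1 - 2 * x0) * xn for xn, x0 in zip(x_new, x_old))

def within_budget(x_new, x_old, P):
    """Check constraint (*): at most a fraction P of airlines may switch."""
    return changes(x_new, x_old) <= P * len(x_old)

x_old = [1, 0, 1, 1, 0]
x_new = [1, 1, 1, 0, 0]   # airlines 1 and 3 switch terminals
```

With P = 0.4 the two switches fit the budget of 0.4 × 5 = 2 airlines, while P = 0.2 would forbid them.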
With the increasing value of P, more and more airlines can be reassigned, so that the solution approaches that of a full re-optimization.
As shown in the computational results in Section 5.2.5, a small P value makes the problem easier to solve.
The software packages ILOG CPLEX 8.1 and Concert Technology 1.3 are employed to model and solve all related mixed integer programs. The mixed integer programs are solved by the
mixed integer optimizer that uses the Branch-and-Cut method. The cuts generated by
CPLEX include clique cuts, cover cuts, disjunctive cuts, Gomory fractional cuts, etc. The
dual simplex algorithm with steepest edge pricing is employed to solve the LP relaxation
at each node in the Branch-and-Bound tree. The linear subproblem is solved by the
primal simplex optimizer. All CPLEX parameters are set at their default values. All
computations are carried out on a Dell Pentium IV computer with a 2.4 GHz CPU and
512 MB of RAM.
5 Computational Results
In this chapter we present results of the computational experiments. Section 5.1 describes
the experimental data instances, then some computational results are presented in Section
5.2. Discussions are also included with these numerical results. In Section 5.3, the
proposed optimal airline assignment is compared with the current airline assignment.
The data come from a large airport cargo terminal operator, which serves more than 50
international airlines. The weekly flight schedule contains around 800 flights.
The workloads $a_{ij}$ for all $i$ and $j$ are the parameters to be estimated. They are obtained from the analysis of historical data, when enough data are available; otherwise, the workload is estimated from aircraft capacities and experience. Either way, the workload estimate of a flight at the export terminals is a distribution over the hours before its departure. The total area is the total workload of this flight, and it is distributed along the time line, discretized into hourly periods. The shape of the workload distribution resembles
a reversed lognormal distribution. For an airline, all its flights are estimated independently, as in Figure 5-1, and the workloads are then summed at all corresponding time periods. Figure 5-2 illustrates an example for a certain airline, which operates 63 flights weekly. Similarly, Figures 5-3 and 5-4 illustrate the examples of a flight and an airline, respectively, at the import terminals, where the shape of the distribution follows the hours after arrival.
[Figure 5-1: estimated workload (#ULD) of a single export flight over the hours before its scheduled time of departure (STD)]
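The per-flight estimation can be sketched as follows; the lognormal shape parameters, the 24-hour spreading horizon, and the ULD totals are invented for illustration (the thesis fits the shape to historical data):

```python
import math

def flight_profile(total_uld, std_hour, sigma=0.6, mu=1.0, horizon=24):
    """Spread a flight's workload over the hours before its departure (STD).

    The share of the hour t hours before std_hour follows a lognormal
    density, mirrored so that most cargo arrives shortly before departure.
    """
    weights = []
    for t in range(1, horizon + 1):
        weights.append(math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2))
                       / (t * sigma * math.sqrt(2 * math.pi)))
    scale = total_uld / sum(weights)           # normalise to the flight total
    profile = {}
    for t, w in enumerate(weights, start=1):
        hour = (std_hour - t) % 168            # weekly timeline, hourly periods
        profile[hour] = profile.get(hour, 0.0) + scale * w
    return profile

def airline_profile(flights):
    """Sum independent per-flight profiles into one weekly airline profile."""
    total = {}
    for uld, std in flights:
        for hour, load in flight_profile(uld, std).items():
            total[hour] = total.get(hour, 0.0) + load
    return total
```

Summing per-flight profiles hour by hour mirrors how the airline curves of Figures 5-2 and 5-4 are built from individual flights.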
[Figure 5-2: estimated weekly workload (#ULD) of an airline over the 168 hours of the week]
[Figure 5-3: estimated workload (#ULD) of a single import flight over the hours after its scheduled time of arrival (STA)]
Table 5.1 describes our test data instances. The first instance ExD is a small size data set,
which contains 17 airlines, a subset of the export airlines. It is designed for convenience
of computational experiments, and we mainly use ExD to test the solution methodologies.
Instance ImD is a complete actual data set. It contains 50 airlines at two import terminals.
The reason for employing this data set of import terminals is to evaluate the proposed
optimal solution by a well-built simulation model, which is designed only for import
terminals.
comparison of the three formulations in Section 5.2.2. All of these models are solved by the CPLEX mixed integer optimizer directly. Then the results from the Benders decomposition algorithms are presented in Section 5.2.3, followed by the two-stage approximation in Section 5.2.4. Four typical examples of the partial re-assignment are reported in Section 5.2.5.

As noted in Chapter 3, the value of the weight C may affect the optimal solution. To investigate this effect, we tried out some examples, which employed
the data set ImD. The sample size is set to 1, which means a simple deterministic problem.
Table 5.2 shows the results, which reveal that the optimal solution may change with the
change of C. As the motivation of this research is to balance the workload along the
timeline, the workload variance along the timeline is a good measurement of the
performance of the different solutions. Therefore, Table 5.2 also reports the variance
$C = 0$ and $C = 10^6$ are the two extreme cases that consider only one objective. The
results reveal that different C values may change the optimal solution, but not all the time.
The results show that the best solution is obtained when both objectives are substantially weighted. Thus, we propose a C value equal to the total number of time periods (e.g., 168 for ExD), so that both objectives are considered with equal importance.
Table 5.3 reports the computational statistics of the three different formulations. The first
column records the sample size, and the next two columns report the sizes of
the problem. The column of Pre-solve time reports the pre-processing time in the
CPLEX integer optimizer. Root relaxation solution time reflects the complexity of the underlying LP relaxation.
Table 5.4 compares the solution time of the three different formulations. The results show
that the solution time grows with the sample size.
Both formulations [Hy] and [Sz] require less solution time than [SP]. They shorten the total solution time by around 8.69% to 55.44%, depending on the sample size. The
reason is that [Hy] and [Sz] simplify the formulation structure, so that their LP relaxation
becomes easier to solve. It is reflected in column 5 in Table 5.3, from which we can see
the root relaxation solution time of [Hy] and [Sz] is less than that of [SP].
When the sample size is small, less than 50, the reduction of solution time is substantial;
however, when the size becomes large, the reduction is limited. This phenomenon can be
explained by the nature of the Branch-and-Cut algorithm. The solution time depends a lot
on the number of LPs resolved, which is shown in the column # of Nodes. The new
formulations reduce the number of constraints; however, they increase the number of nodes,
which weakens the reduction of the solution time. This weakening effect becomes more
evident, especially when the sample size is large. That is why the solution time reduces
only 8% and 18% when the sample size is 100, while it is more than 30% when the sample size is small.
The finding is that the new formulations [Hy] and [Sz] are helpful when the sample size
is not too large, while they can only reduce the solution time marginally when the sample
size becomes large. Notwithstanding, the new formulations simplify the model structure,
and they may help in other algorithms, such as the Benders decomposition-based
algorithms.
Three algorithms were employed to solve the problem using the small data set ExD, with
varying numbers of scenarios. Table 5.5 reports the numbers of iterations of the general and the accelerated Benders decomposition algorithms, to compare their convergence performance. The relative difference between these two algorithms is computed. Table
5.6 reports the computational time of the three algorithms and the relative time reduction
is also reported.
The results show that the general Benders decomposition reduces the computation time
considerably when the sample size is greater than 50. The reduction is more than 90%
when the size is 100, which is the sample size determined in Section 3.2, and it reaches
96% when the sample size is 200; that is, the solution time is 1/25th of the original.
The accelerated Benders decomposition further reduces the number of iterations by more
than 60% relative to the general Benders decomposition. The time per iteration
increases in the accelerated algorithm, but the convergence rate improves considerably;
the net effect of these two factors still reduces the solution time when the sample size
becomes large. When the sample size is 50 or less, this reduction may be negative;
when the sample size is 100, the reduction is 15%. From Table 5.6, when the
sample size is very small, less than 20, the MIP Optimizer is the most efficient algorithm;
as the sample size increases, the general Benders decomposition becomes more efficient;
and as the sample size increases further, the accelerated Benders decomposition becomes
the fastest of the three.
Table 5.7 reports the computational results of the two-stage approximation on the data
instance ImD. The exact optimal solution is also reported, and the relative gap between
the approximate and optimal solutions is calculated, together with the reduction of
solution time.
Table 5.7: Comparison between the two-stage approximation and the optimal solution

Sample | Two-stage approximation      | Optimal solution             | Relative | Reduction of
size   | objective value | time (s)   | objective value | time (s)   | gap      | solution time
10     | 6802.91         | 33         | 6770.83         | 125        | 0.474%   | 73.600%
20     | 6850.40         | 121        | 6850.40         | 667        | 0.000%   | 81.859%
30     | 6852.56         | 345        | 6849.05         | 1153       | 0.051%   | 70.078%
40     | 6865.25         | 582        | 6860.08         | 2014       | 0.075%   | 71.102%
50     | 6866.14         | 952        | 6865.92         | 4141       | 0.003%   | 77.010%
60     | 6862.56         | 1111       | 6862.56         | 4837       | 0.000%   | 77.031%
70     | 6857.05         | 2099       | 6857.05         | 6218       | 0.000%   | 66.243%
In these results, the maximal gap is 0.474%, and most of the gaps are below 0.1%.
Notably, 3 of the 7 instances achieve the exact optimal solution. This illustrates the
point made in Section 4.4: if the big airlines are assigned optimally in the first stage,
then this approximation gives the exact optimal solution.
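The relative-gap and time-reduction columns of Table 5.7 follow directly from the raw objective values and times, with gap = (z_approx - z_opt) / z_opt and reduction = 1 - t_approx / t_opt. The short script below recomputes them from the table entries:

```python
# Recompute the "Relative gap" and "Reduction of solution time" columns of
# Table 5.7 from its raw entries.
rows = [  # (sample size, approx obj, approx time, optimal obj, optimal time)
    (10, 6802.91, 33, 6770.83, 125),
    (20, 6850.40, 121, 6850.40, 667),
    (30, 6852.56, 345, 6849.05, 1153),
    (40, 6865.25, 582, 6860.08, 2014),
    (50, 6866.14, 952, 6865.92, 4141),
    (60, 6862.56, 1111, 6862.56, 4837),
    (70, 6857.05, 2099, 6857.05, 6218),
]
for n, z_a, t_a, z_o, t_o in rows:
    gap = (z_a - z_o) / z_o             # relative optimality gap
    reduction = 1.0 - t_a / t_o         # fraction of solution time saved
    print(f"N={n:3d}  gap={gap:6.3%}  time reduction={reduction:7.3%}")
```

Running it reproduces the reported columns, e.g. 0.474% gap and 73.600% time reduction for the 10-sample instance.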
Overall, the solution time is reduced substantially while good solution quality is
maintained: the reduction in solution time is around 70% of the original time, and the
gap in objective values is less than 0.5%. In addition, this two-stage approach separates
the airlines into two groups in terms of their workload. This separation makes it easy to
rank the importance of the airlines, providing the terminal operator with a priority
ranking for handling different airlines; more attention should be focused on the
assignment of the big airlines.
Case 1: From the original 50 airlines, service for the second biggest airline is terminated.
Case 3: From the original 50 airlines, service for the second smallest airline is terminated.
Given a value of P, constraint (*) in Section 4.4 restricts the number of airlines whose
assignment can be changed, so the complexity of the problem and the achievable
workload balance vary with P. To show the effect of the parameter P, a series of
values is investigated. Results are given in the following tables and figures. The P values
and the number of airlines whose assignment can be changed are reported in the first two
columns, and the number of airlines whose assignment is actually changed is reported in
the fourth column. The objective value and solution time are also reported.
Table 5.8: Case 1 of partial reassignment: Terminate the service of a big airline
Table 5.10: Case 3 of partial reassignment: Terminate the service of a small airline
Any change of airline assignment disturbs the overall balance, whether the airline is
big or small, but big airlines influence the balance more than small ones.
Tables 5.8 and 5.9 show that if we do not re-optimize, that is, P = 0,
removing the big airline increases the objective value by 19% and 17%, respectively,
compared with the optimum, whereas the small airline affects it by only 1.7% and 1.2%
in Tables 5.10 and 5.11. Therefore, if an airline is small enough, its effect on the
workload balance can be ignored. Even for the big airlines whose assignments are
changed, only a small P is needed for re-optimization: the gap decreases from 19% to
6% when only 1 airline is permitted to change, and permitting 4 airlines to change
brings the gap from 19% to 1.9%. This permission is less than 10% of the total number
of airlines.
Concerning the solution time, a small value of P makes the re-optimization problem
easy to solve. For Case 1 in Table 5.8, for example, the solution time is 11 seconds when
P = 10% and the gap is 1.98%, whereas at P = 100%, which yields the optimal
solution, the solution time is 199 seconds. This example shows that a good-quality
solution can be obtained in little computing time. The other three cases show similar
results.
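A sketch of how the partial-reassignment budget can be checked in code. Constraint (*) itself is not restated in this chapter, so the cap of ceil(P * n) changed airlines and the assignment vectors below are illustrative assumptions, not the exact thesis formulation:

```python
# Minimal sketch of the partial-reassignment budget implied by constraint (*):
# with parameter P (fraction of airlines allowed to move), we assume at most
# ceil(P * n) airlines may receive a terminal different from their current one.
# In the thesis this is a MIP constraint; here we only verify a candidate plan.
import math

def within_budget(current, candidate, P):
    """True if the candidate assignment changes at most ceil(P*n) airlines."""
    assert len(current) == len(candidate)
    changed = sum(c != d for c, d in zip(current, candidate))
    return changed <= math.ceil(P * len(current))

current   = [1, 1, 2, 2, 1, 2, 1, 2, 1, 2]   # terminal of each of 10 airlines
candidate = [2, 1, 2, 2, 1, 2, 1, 2, 1, 1]   # moves airlines 0 and 9
print(within_budget(current, candidate, P=0.10))  # budget 1 -> False
print(within_budget(current, candidate, P=0.20))  # budget 2 -> True
```

In the re-optimization model, only assignments passing this budget check are feasible, which is what keeps the problem easy for small P.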
For industrial applications, the above findings suggest a tactical operational policy.
The optimal airline assignment is obtained by solving [SP] with N = 100, which is a
sufficiently large sample size (see Section 3.3). The performance of the optimal solution
versus the current
assignment is examined. Section 5.3.1 presents the numerical results, and Section 5.3.2
presents the simulation results.
The mean value of the estimated workload is chosen as a realized scenario. The
maximum workload over all time periods is computed, as well as the variance and the sum
of differences, all of which are popular imbalance measurements (refer to Section 2.2.2).
Table 5.12 reports these results. The maximum workload over all time periods is reduced
by 24.68%, and the variance by 26.27%. The sum of differences, which is the
recommended measurement, is reduced by 85.10%. These data show that the optimal
solution balances the workload far better than the current assignment does.
Table 5.12: Imbalance measurements under the current and optimal assignment
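For reference, the three imbalance measurements of Table 5.12 can be sketched as follows. The exact definition of the sum of differences in Section 2.2.2 is not restated here, so the version below (total absolute per-period workload difference between the two terminals) is an assumption, and the workload numbers are hypothetical:

```python
# Sketch of the three imbalance measurements compared in Table 5.12, applied to
# per-period workload series.  The "sum of differences" here is ASSUMED to be
# the total absolute workload gap between the two terminals over all periods;
# adjust if Section 2.2.2 defines it differently.  Workloads are hypothetical.
def max_workload(w):
    """Peak workload over all time periods."""
    return max(w)

def variance(w):
    """Population variance of the workload series."""
    mean = sum(w) / len(w)
    return sum((x - mean) ** 2 for x in w) / len(w)

def sum_of_differences(w1, w2):
    """Assumed measure: total absolute imbalance between the two terminals."""
    return sum(abs(a - b) for a, b in zip(w1, w2))

term1 = [40, 55, 70, 50]   # illustrative workloads, terminal 1
term2 = [60, 45, 30, 50]   # illustrative workloads, terminal 2
print(max_workload(term1 + term2))        # peak over both terminals: 70
print(variance(term1 + term2))            # spread of combined series: 131.25
print(sum_of_differences(term1, term2))   # 20 + 10 + 40 + 0 = 70
```

A more balanced assignment lowers all three numbers, which is the pattern Table 5.12 reports for the optimal solution.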
5.3.2 Simulation
The simulation experiment is conducted using a simulator developed by Leong (2004).
The simulator visualizes the cargo handling process in an import terminal. The average
cycle time per cargo retrieval is the performance measure. The simulation covers several
weeks of operations using the actual flight schedule, and the first week is not analyzed,
so as to exclude the warm-up period.
- 74 -
Chapter5 Computational Results
In this experiment, we use the simulation model only as a tool, without considering the
original author's research results. Keeping all parameters of the model unchanged, we
change only the airline assignments. Results are reported in Figure 5.5 and Table 5.13.
Figure 5.5: Average cycle time versus the number of cargo retrievals
The results show that the proposed optimal airline assignment improves the operational
efficiency by 1.34%. The improvement is marginal because of the assumption that each
terminal has enough capacity (in order to ensure a steady-state simulation) and that this
capacity is constant over time (an assumption in Leong's model). However, in real life,
the cargo handling capacity, particularly the manpower resource, often becomes a
bottleneck, which is the motivation of this research. Therefore, it is natural that this
improvement
from the simulation can be deemed a very conservative lower bound on the actual
improvement.
Chapter 6 Conclusions and Further Research
This chapter concludes the thesis in Section 6.1 and proposes potential directions for
further research in Section 6.2.
6.1 Conclusions
In this thesis we study the problem of distributing the workload evenly over time and
between two identical terminals; the objective is realized by re-optimizing the airline
assignment between the two terminals. In addition, the workload of an airline cannot be
split between the two terminals.
We develop a mixed integer programming model for the workload balancing problem.
Two parallel balance measurements are chosen to be minimized: one is for inter-balance,
which balances the workload between the two terminals; the other is for intra-balance,
which balances the workload over time at each terminal. After the deterministic model is
formulated, it is extended to a stochastic model to handle the uncertainty in the workload,
and the sample average approximation (SAA) method is employed to handle the
stochastic program. These models optimize the overall airline assignment; in addition, a
partial re-assignment strategy is proposed for tactical adjustments.
Two actual data instances from a big air cargo terminal operator are investigated in our
computational experiments. The results show that the reformulations can reduce the
solution time and that the accelerated Benders decomposition improves the convergence
rate considerably. SAA needs only a small sample size to achieve good solution quality,
and the two-stage approximation solves the problem much faster while obtaining good-
quality solutions. The results also reveal that the strategy of partial re-assignment is
practical.
Finally, the proposed optimal airline assignment is evaluated by both numerical study and
simulation.
In summary, this thesis provides mathematical models for the airline assignment decision
of a big air cargo terminal operator at both the strategic and tactical levels; it also
provides both exact and approximation algorithms to solve these models.
Recall that the workload balancing problem arises from the manpower scheduling
problem. In this research, however, we do not consider the manpower scheduling
problem directly, nor do we validate the effectiveness of workload balancing on the
manpower scheduling. Therefore, further research can focus on the manpower
scheduling problem, or on an integrated model of these two problems; the resulting
model would simultaneously assign airlines to the terminals and schedule the manpower.
Also recall that workload balancing can benefit the system objectives. In this research we
use simulation to test only the average retrieval cycle time; other objectives could also be
incorporated into this workload balancing problem. Considering the complexity this
would add to the model, approximate solution methods could be employed.
In addition, this thesis does not claim that the newly developed accelerated Benders
decomposition is effective in general; it was designed and experimented with based on
the special model structure and data sets of this problem. Therefore, it is worthwhile to
validate the effectiveness of the method on more general models and data.

Furthermore, in this problem we make the strong assumption that all airlines are assigned
between only two terminals and that both terminals are identical with the same capacity.
This is not always true in industry. Therefore, the models can be extended to consider
more general settings, such as more than two terminals or terminals with different
capacities.
References
Ammons, J.C., Lofgren, C.B. and McGinnis, L.F. (1985). A large scale machine loading
problem in flexible assembly. Annals of Operations Research 3, pp.319-332.
Ashayeri, J., and Gelders, L.F. (1985). Warehouse design optimization. European Journal
of Operational Research 21, pp.285-294.
Berg, J.P. van den (1999). A literature survey on planning and control of warehousing
systems. IIE Transactions 31/8, pp.751-763.
Berrada, M., Stecke, K.E. (1986). A branch and bound approach for machine load
balancing in flexible manufacturing systems. Management Science 32/10, pp.1316.
Birge, J.R. (1985). Decomposition and partitioning methods for multistage linear
programs. Operations Research 33, pp.1089-1107.
Birge, J.R., and Louveaux, F.V. (1988). A multicut algorithm for two-stage linear
programs. European Journal of Operational Research 34, pp.384-392.
Birge, J.R., Donohue, C.J., Holmes, D.F., and Svintsiski, O.G. (1996). A parallel
implementation of the nested decomposition algorithm for multistage stochastic linear
programs. Mathematical Programming 75, pp.327-352.
Birge, J.R., and Louveaux, F.V. (1997). Introduction to stochastic programming. Springer,
New York.
Boukerche, A. and Das, S.K. (2004). Reducing null messages overhead through load
balancing in conservative distributed simulation systems. Journal of Parallel and
Distributed Computing 64/3, pp.330-344.
Carino, D.R., Kent, T., Meyers, D.H., Stacy, C., Sylvanus, M., Turner, A.L., Watanabe,
K., and Ziemba, W.T. (1994). The Russell-Yasuda Kasai Model: An asset/liability model
for a Japanese insurance company using multistage stochastic programming. Interfaces 24.
Cormier, G., and Gunn, E.A. (1992). A review of warehouse models. European Journal
of Operational Research 58, pp.3-13.
Dallery, Y., and Stecke, K.E. (1990). On the optimal allocation of servers and workloads
in closed queuing networks. Operations Research 38, pp.694-703.
Fisher, M., Hammond, J., Obermeyer, W. and Raman, A. (1997). Configuring a supply
chain to reduce the cost of demand uncertainty. Production and Operations Management
6, pp.211-225.
Graves, S.C., Hausman, W.H. and Schwarz, L.B. (1977). Storage retrieval interleaving in
automatic warehousing systems. Management Science 23/9, pp.935-945.
Guerro, F., Lozano, S. and Koltai, T.J. (1999). Machine loading and part type selection in
flexible manufacturing systems. International Journal of Production Research 37,
pp.1303-1317.
Hausman, W.H., Schwarz, L.B. and Graves, S.C. (1976). Optimal storage assignment in
automatic warehousing systems. Management Science 22/6, pp.629-638.
Higle, J.L. and Sen, S. (1991). Stochastic decomposition: an algorithm for two-stage
linear programs with recourse. Mathematics of Operations Research 16, pp.650-669.
Kim, Y.-D. (1993). A study on surrogate objectives for loading a certain type of flexible
manufacturing system. International Journal of Production Research 31, pp.381-392.
King, A.J. and Rockafellar, R.T. (1993). Asymptotic theory for solutions in statistical
estimation and stochastic programming. Mathematics of Operations Research 18,
pp.148-162.
Lan, Z-L, Taylor, V.E. and Bryan, G. (2002). A novel dynamic load balancing scheme
for parallel systems. Journal of Parallel and Distributed Computing 62/12, pp.1763-1781.
Leong C.H. (2004). A Simulation Model of an Air Cargo Terminal. Unpublished Master's
dissertation, Department of Industrial and Systems Engineering, National University of
Singapore.
Matson, J.O. and White, J.A. (1982). Operational Research and material handling.
European Journal of Operational Research 11, pp.309-318.
Moreno, A.A. and Ding, F.-Y. (1989). Goal oriented heuristics for the FMS loading (and
part type selection) problems. Proceedings of the Third ORSA/TIMS Conference on
Flexible Manufacturing Systems, Cambridge, MA, pp.105-110.
Mulvey, J.M. and Ruszczynski, A. (1995). A new scenario decomposition method for
large-scale stochastic optimization. Operations Research 43/3, pp.477-490.
Murphy, F.H., Sen, S. and Soyster, A.L. (1982). Electric utility capacity expansion
planning with uncertain load forecasts. AIIE Transactions 14, pp.52-59.
Potts, C.N. and Whitehead, J.D. (2001). Workload balancing and loop layout in the
design of a flexible manufacturing system. European Journal of Operational Research
129/2, pp.326-336.
Rouwenhorst, B., Reuter, B., Stockrahm, V., Houtum, G.J., Mantel, R.J., and Zijm,
W.H.M. (2000). Warehouse design and control: Framework and literature review.
European Journal of Operational Research 122, pp.515-533.
Sen, S., Doverspike, R.D. and Cosares, S. (1994). Network planning with random
demand. Telecommunication Systems 3, pp.11-30.
Shanker, K. and Tzen, Y.-J.J. (1985). A loading and dispatching problem in a random
flexible manufacturing system. International Journal of Production Research 23, pp.579-595.
Stecke, K.E. and Solberg, J.J. (1985). The optimality of unbalancing both workloads and
machine group sizes in closed queuing networks of multi-server queues. Operations
Research 33, pp.882-910.
Stecke, K.E. (1989). Algorithms for efficient planning and operation of a particular FMS.
International Journal of Flexible Manufacturing Systems 1, pp.287-324.
Van Slyke, R. and Wets, R.J-B. (1969). L-shaped linear programs with applications to
control and stochastic programming. SIAM Journal on Applied Mathematics 17, pp.638-
663.
Verweij, B., Ahmed, S., Kleywegt, A.J., Nemhauser, G. and Shapiro, A. (2003). The
sample average approximation method applied to stochastic routing problems: a
computational study. Computational Optimization and Applications 24, pp.289-333.
Wirasinghe, S.E., and Bandara, S. (1990). Airport gate position estimation for minimum
total costs - approximate closed form solution. Transportation Research B 24B, pp. 287-
297.