• The call center manager has a mandate to cut costs by at least 20%. Give at least
two ideas to achieve this cut without reducing the salaries of the call center agents
and while keeping an average waiting time below or close to 1 min.

7.2.3 Limitations of Basic Queueing Theory

The basic queueing analysis techniques presented above allow us to estimate waiting
times and queue lengths based on the assumptions that inter-arrival times and
processing times follow an exponential distribution. When these parameters follow
different distributions, one needs to use different queueing models. Fortunately,
queueing theory tools nowadays support a broad range of queueing models and of
course they can do the calculations for us. The discussion above must be seen as an
overview of single-queue models, with the aim of providing a starting point from
where you can learn more about this family of techniques.
A more fundamental limitation of the techniques introduced in this section is that
they only deal with one task at a time. When we have to analyze an entire process
that involves several tasks, events, and resources, these basic techniques are not
sufficient. There are many other queueing analysis techniques that could be used for this purpose, such as queueing networks. Essentially, queueing networks
are systems consisting of multiple inter-connected queues. However, the maths
behind queueing networks can become quite complex, especially when the process
includes concurrent tasks. A more popular approach for quantitative analysis of
process models under varying levels of resource contention is process simulation,
as discussed below.

7.3 Simulation

Process simulation is arguably the most popular and widely supported technique for
quantitative analysis of process models. The essential idea underpinning process
simulation is to use a process simulator to generate a large number of hypothetical instances of a process, to execute these instances step by step, and to record each step of this execution. The output of a simulator then includes the
logs of the simulation as well as statistics of cycle times, average waiting times, and
average resource utilization.

7.3.1 Anatomy of a Process Simulation

During a process simulation, the tasks in the process are not actually executed.
Instead, the simulation of a task proceeds as follows. When a task is ready to be
executed, a so-called work item is created and the simulator first tries to find a
resource to which it can assign this work item. If no resource able to perform the
work item is available, the simulator puts the work item in waiting mode until a
suitable resource becomes available. Once a resource is assigned to a work item, the
simulator determines the duration of the work item by drawing a random number
according to the probability distribution of the task processing time. This probability
distribution and the corresponding parameters need to be defined in the simulation
model.
Once the simulator has determined the duration of a work item, it puts the work
item in sleeping mode for that duration. This sleeping mode simulates the fact that
the task is being executed. Once the time interval has passed (according to the
simulation clock), the work item is declared to be completed and the resource that
was assigned to it becomes available.
In reality, the simulator does not actually wait for work items to come back from their sleeping mode. For example, if the simulator determines that the duration of a work item is 2 days and 2 h, it will not wait for this amount of time to pass by. You can imagine how long a simulation would take if that was the case. Instead, simulators use smart algorithms to complete the simulation as fast as possible. Modern business process simulators can simulate thousands of process instances and tens of thousands of work items in a matter of seconds.
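For illustration, the following Python sketch simulates a single task served by a small resource pool, advancing a simulated clock from event to event instead of waiting in real time. It is a minimal sketch, not the algorithm of any particular simulator, and all parameter values (times in minutes) are assumptions made for the example.

    import heapq
    import random

    def simulate_single_task(num_cases=10000, mean_interarrival=20.0,
                             mean_duration=15.0, num_resources=2, seed=42):
        # Illustrative single-task simulation; parameter values are assumptions.
        rng = random.Random(seed)
        # 1. Draw the times at which work items become ready (exponential inter-arrivals).
        clock, ready_times = 0.0, []
        for _ in range(num_cases):
            clock += rng.expovariate(1.0 / mean_interarrival)
            ready_times.append(clock)
        # 2. Keep, for each resource, the time at which it next becomes available.
        free_at = [0.0] * num_resources
        heapq.heapify(free_at)
        total_wait, busy_time = 0.0, 0.0
        for ready in ready_times:
            earliest_free = heapq.heappop(free_at)
            start = max(ready, earliest_free)                 # wait until a resource is free
            duration = rng.expovariate(1.0 / mean_duration)   # the "sleeping mode" duration
            heapq.heappush(free_at, start + duration)         # resource is busy until then
            total_wait += start - ready
            busy_time += duration
        horizon = max(free_at)                                # end of the simulated period
        return total_wait / num_cases, busy_time / (num_resources * horizon)

    avg_wait, utilization = simulate_single_task()
    print(f"average waiting time: {avg_wait:.1f} min, resource utilization: {utilization:.0%}")

Because the clock simply jumps from one event to the next, simulating ten thousand work items in this sketch takes a fraction of a second.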
For each work item created during a simulation, the simulator records the
identifier of the resource that was assigned to this instance as well as three
timestamps:
• The time when the task was ready to be executed.
• The time when the task was started, meaning that it was assigned to a resource.
• The time when the task completed.
Using the collected data, the simulator can compute the average waiting time for
each task. These measures are quite important when we try to identify bottlenecks
in the process. Indeed, if a task has a high average waiting time, it means that there
is a bottleneck at the level of this task. The analyst can then consider several options
for addressing this bottleneck.
Additionally, since the simulator records which resources perform which work
items and it knows how long each work item takes, the simulator can find out the
total amount of time during which a given resource is busy handling work items. By
dividing the amount of time that a resource was busy during a simulation by the total
duration of the simulation, we obtain the resource utilization, that is, the percentage
of time that the resource is busy on average.
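The post-processing of these recorded timestamps can be sketched as follows. The sketch assumes a simple list of work item records carrying the three timestamps and the assigned resource; the field names are illustrative and do not correspond to the log format of any specific tool.

    from collections import defaultdict

    def waiting_times_and_utilization(work_items, simulated_duration):
        # work_items: list of dicts with keys "task", "resource", "ready",
        # "start", and "complete" (all times in the same unit, e.g. minutes).
        waits = defaultdict(list)
        busy = defaultdict(float)
        for item in work_items:
            waits[item["task"]].append(item["start"] - item["ready"])   # waiting time
            busy[item["resource"]] += item["complete"] - item["start"]  # busy time
        avg_wait = {task: sum(w) / len(w) for task, w in waits.items()}
        utilization = {res: b / simulated_duration for res, b in busy.items()}
        return avg_wait, utilization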

7.3.2 Input for Process Simulation

From the above description of how a simulation works, we can see that the following
information needs to be specified for each task in the process model in order to
simulate it:
• The probability distribution for the processing time of each task.
• Other performance attributes for the task such as cost and added-value produced
by the task.
• The resource pool that is responsible for performing the task. In the loan
application process, there are three resource pools: the claims handlers, the clerks, and the managers. For each resource pool, we need to specify its size (e.g., the number of claims handlers or the number of clerks) and optionally its cost per time unit (e.g., the hourly cost of a claims handler). If we specify the cost per
time unit for every resource pool, the simulation will calculate the mean labor
cost per case in addition to calculating cycle times and waiting times.
Common probability distributions for task durations in the context of process
simulation include:
• Fixed. This is the case where the processing time of the task is the same for
all executions of this task. It is rare to find such tasks because most tasks,
especially those involving human resources, would exhibit some variability in
their processing time. Examples of tasks with fixed processing time can be found
among automated tasks, such as a task that generates a report from a database. Such a task would take a relatively constant amount of time, say 5 s.
• Exponential distribution. As discussed in Section 7.2, the exponential distri-
bution may be applicable when the processing time of the task is most often
around a given mean value, but sometimes it is considerably longer. For example,
consider a task “Assess insurance claims” in an insurance claims handling
process. For normal cases, the claim is assessed in an hour, or perhaps less.
However, some insurance claims require special treatment, for example because
the assessor considers that there is a risk that the claim is fraudulent. In this
case, the assessor might spend several hours or even an entire day assessing a
single claim. A similar observation can be made of diagnostics tasks, such as
diagnosing a problem in an IT infrastructure or diagnosing a problem during a
car repair process.
• Normal distribution. This distribution is used when the processing time of the
task is around a given average and the deviation around this value is symmetric,
which means that the actual processing time can be above or below the mean with
the same probability. Simple checks, such as checking whether or not a paper form has been fully completed, might follow this distribution. Indeed, it generally takes about 3 min to make such a check. In some cases, this time can be lower, for example because the form is clearly incomplete or clearly complete.
In other cases, it can take a bit longer, because a couple of fields have been
left empty and it is unclear if these fields are relevant or not for the specific
customer who submitted the form. Some simulators also support the half-normal
distribution, which is similar to the normal distribution but it only allows for
positive values. Negative values do not make sense when applied to processing
times or costs.
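As an illustration of how a simulator might draw durations from the distributions discussed above, the following sketch samples one processing time per call. It is a sketch only; the distribution parameters and example values are assumptions, not taken from any of the book's examples.

    import numpy as np

    def draw_duration(kind, rng, **params):
        # Draw one task duration (in minutes) from the chosen distribution.
        if kind == "fixed":
            return params["value"]                    # e.g. an automated report-generation task
        if kind == "exponential":
            return rng.exponential(params["mean"])    # occasionally much longer than the mean
        if kind == "normal":
            return max(0.0, rng.normal(params["mean"], params["sd"]))  # clip negative draws
        if kind == "half-normal":
            return abs(rng.normal(0.0, params["sd"])) # only positive values
        raise ValueError(f"unknown distribution: {kind}")

    rng = np.random.default_rng(2024)
    print(draw_duration("exponential", rng, mean=60))    # claim assessment, mean 1 h
    print(draw_duration("normal", rng, mean=3, sd=0.5))  # form completeness check, about 3 min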
When assigning an exponential distribution to a task duration, the analyst has to specify the mean value. Meanwhile, when assigning a normal distribution, the analyst has to specify two parameters: the mean value and the standard deviation. These values may be determined via an informed guess (based on interviews with the relevant stakeholders), but preferably by means of sampling (the analyst collects data for a sample of task executions) or by analyzing execution logs of relevant
information systems. Some simulation tools allow the analyst to import logs into the
simulation tool and assist the analyst in selecting the right probability distribution
for task durations based on these logs. This functionality is called simulation input
analysis.
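The idea behind simulation input analysis can be sketched as follows: given a sample of observed task durations, fit a few candidate distributions and pick the one with the smallest Kolmogorov-Smirnov distance. This is only a rough illustration of the idea, not the selection procedure of any particular tool, and the sample values are made up.

    import numpy as np
    from scipy import stats

    def suggest_distribution(durations):
        durations = np.asarray(durations, dtype=float)
        candidates = {}
        # Exponential: fix the location at 0 so that the scale equals the mean.
        loc, scale = stats.expon.fit(durations, floc=0)
        candidates[f"exponential (mean={scale:.1f})"] = stats.kstest(
            durations, "expon", args=(loc, scale)).statistic
        # Normal: estimate mean and standard deviation from the sample.
        mu, sigma = stats.norm.fit(durations)
        candidates[f"normal (mean={mu:.1f}, sd={sigma:.1f})"] = stats.kstest(
            durations, "norm", args=(mu, sigma)).statistic
        return min(candidates, key=candidates.get)    # smallest KS distance wins

    # e.g. durations (in minutes) collected for a sample of task executions
    print(suggest_distribution([9.1, 10.4, 11.0, 8.7, 10.2, 9.8, 10.9, 9.5]))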
In addition to the above per-task simulation data, a branching probability needs
to be specified for every flow coming out of a decision gateway. These probabilities
may be determined by interviewing relevant stakeholders, observing the process
during a period of time, or collecting logs from relevant information systems.
Finally, in order to run a simulation, the analyst additionally needs to specify at
least the following:
• The mean inter-arrival time and its associated probability distribution. As
explained above, a very frequent distribution of inter-arrival times is the exponen-
tial distribution and this is usually the default distribution supported by business
process simulators. It may happen, however, that the inter-arrival times follow a different distribution, such as a normal distribution. By feeding a
sample of inter-arrival times during a certain period of time to a statistical tool, we
can find out which distribution best matches the data. Some simulators provide a
module for selecting a distribution for the inter-arrival times and for computing
the mean inter-arrival time from a data sample.
• The starting date and time of the simulation (e.g., “11 Nov. 2017 at 8:00”).
• One of the following:
– The end date and time of the simulation. If this option is selected, the
simulation will stop producing more process instances once the simulation
clock reaches the end time.
– The real-time duration of the simulation (e.g., 7 days, 14 days). In this way,
the end time of the simulation can be derived by adding this duration to the
starting time.
– The required number of process instances to be simulated (e.g., 1,000). If
this option is selected, the simulator generates process instances according to
the arrival rate until it reaches the required number of process instances. At
this point, the simulation stops. Some simulators will not stop immediately,
but will allow the active process instances to complete before stopping the
simulation.
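As an illustration of the arrival settings and stopping criteria listed above, the following sketch generates case arrival times with exponentially distributed inter-arrival times and stops either at a given simulated end time or after a required number of instances, whichever comes first. The parameter names are illustrative, not those of a specific simulator.

    import random

    def generate_arrivals(mean_interarrival, start_time=0.0,
                          end_time=None, max_instances=None, seed=1):
        rng = random.Random(seed)
        clock, arrivals = start_time, []
        while True:
            clock += rng.expovariate(1.0 / mean_interarrival)
            if end_time is not None and clock > end_time:
                break                              # stop: simulation end time reached
            arrivals.append(clock)
            if max_instances is not None and len(arrivals) >= max_instances:
                break                              # stop: required number of instances reached
        return arrivals

    # e.g. 7 simulated days (in minutes) with a mean inter-arrival time of 20 minutes
    print(len(generate_arrivals(20.0, end_time=7 * 24 * 60)))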
Example 7.10 We consider the process for loan application approval modeled in
Figure 4.2 (page 118). We simulate this model using the BIMP simulator (http://bimp.cs.ut.ee). This simulator takes as input a BPMN process model. We provide the following inputs
for the simulation.
• Three loan applications arrive per hour on average, meaning an inter-arrival time
of 20 min. Loan applications arrive only from 9 a.m. to 5 p.m. during weekdays.
• The tasks “Check credit history” and “Check income sources” are performed by
clerks.
• The tasks “Notify rejection”, “Make credit offer”, and “Assess application” are
performed by credit officers.
• The task “Receive customer feedback” is in fact an event. It takes zero time and
it only involves the credit information system (no human resources involved). To
capture this, the task is assigned to a special “System” role.
• There are two clerks and two credit officers. The hourly cost of a clerk is € 25 while that of a credit officer is € 50.
• Clerks and credit officers work from 9 a.m. to 5 p.m. during weekdays.
• The cycle time of the task “Assess application” follows an exponential distribu-
tion with a mean of 20 min.
• Cycle times of all other tasks follow a normal distribution. The tasks “Check
credit history”, “Notify rejection”, and “Make credit offer” have a mean cycle
time of 10 min with a 20% standard deviation, while “Check income sources”
has a cycle time of 20 min with a 20% standard deviation as well.
• The probability that an application is accepted is 80%.
• The probability that a customer, whose application was rejected, asks that the
application be re-assessed is 20%.
We run a simulation with 2,400 instances, which means 100 working days given
that 24 loan applications arrive per day. The simulation gives an average cycle time
of around 7.5 h if we count the time outside working hours (cycle time including
off-timetable hours in BIMP). If we count only working hours, the cycle time is 2 h.
The latter is called the cycle time excluding off-timetable hours in BIMP. These cycle
time measurements may vary by about ± 10% when we run the simulation multiple
times. These variations are expected due to the stochastic nature of the simulation.
For this reason, we recommend running the simulation multiple times and taking averages of the simulation results.
Figure 7.14 shows the histograms produced by BIMP for the process cycle times (both including and excluding off-timetable hours), the process waiting times (excluding off-timetable hours), and the process costs in EUR. It can be seen that the waiting times are relatively low. This is because the resource utilization of clerks and credit officers is around 76–80%.
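As a sanity check on these figures, we can estimate the clerks' utilization by hand (assuming that both checks are performed for every application): each application requires about 10 min of "Check credit history" plus 20 min of "Check income sources", that is, 30 min of clerk time. With 24 applications per day, this amounts to 12 h of clerk work per day against a capacity of 2 clerks × 8 h = 16 h, i.e., an expected utilization of 12/16 = 75%, which is consistent with the utilization reported by the simulator.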


Exercise 7.10 The insurance company called Cetera is facing the following prob-
lem: Whenever there is a major event (e.g., a storm), their claim-to-resolution
process is unable to cope with the ensuing spike in demand. During normal times,
the insurance company receives about 9,000 calls per week, but during a storm
scenario the number of calls per week doubles.
The claim-to-resolution process model of Cetera is presented in Figure 7.15. The
process starts when a call related to lodging a claim is received. The call is routed
to one of two call centers depending on the location of the caller. Each call center
receives approximately the same number of calls (50–50) and has the same number
of operators (40 per call center). The process for handling calls is identical across
both call centers. When a call is received at a call center, the call is picked up by
a call center operator. The call center operator starts by asking a standard set of
questions to the customer to determine if the customer has the minimum information
required to lodge a claim (e.g., insurance policy number). If the customer has enough
information, the operator then goes through a questionnaire with the customer,
enters all relevant details, checks the completeness of the claim, and registers the
claim.
Once a claim has been registered, it is routed to the claims handling office, where
all remaining steps are performed. There is one single claims handling office, so
regardless of the call center agent where the claim is registered, the claim is routed
to the same office. In this office, the claim goes through a two-stage evaluation
process. First of all, the liability of the customer is determined. Secondly, the claim
is assessed in order to determine if the insurance company has to cover this liability
and to what extent. If the claim is accepted, payment is initiated and the customer is
advised of the amount to be paid. The tasks of the claims handling department are
performed by claims handlers. There are 150 claims handlers in total.
The mean cycle time of each task (in seconds) is indicated in Figure 7.15. For
every task, the cycle time follows an exponential distribution. The hourly cost of a call center agent is € 30, while the hourly cost of a claims handler is € 50.
Describe the input that should be given to a simulator in order to simulate this
process in the normal scenario and in the storm scenario. Using a simulation tool,
encode the normal and the storm scenarios, and run a simulation in order to compare
these two scenarios.
Fig. 7.15 Cetera's claim-to-resolution process. The model shows the following tasks and mean cycle times: in each call center, "Check if customer has all required information" (60 s; in 10% of cases information is missing and the call ends) followed by "Register claim" (540 s); in the claims handling office, "Determine likelihood of the claim" (120 s; in 15% of cases the insured could not be liable and the case is closed), "Assess claim" (1,200 s; in 20% of cases the claim is rejected and the process ends), "Initiate payment" (120 s), "Advise claimant of reimbursement" (240 s), and "Close claim" (60 s).


7.3.3 Simulation Tools

Nowadays, most business process modeling tools provide simulation capabilities. Examples of such tools with simulation support include Appian, ARIS, IBM BPM,
Logizian, Oracle Business Process Analysis Suite, and Signavio Process Manager.
The landscape of tools evolves continuously. Thus, it is important to understand
the fundamental concepts of process simulation before trying to grasp the specific
features of a given tool.
In general, the provided functionality varies from one tool to another. For
example, some tools offer the functionality to specify that resources do not work
continuously, but only during specific periods of time. This is specified by attaching
a calendar to each resource pool. Some tools additionally allow one to specify that
new process instances are created only during certain periods of time, for example
only during business hours. Again, this is specified by means of a calendar.
Some of the more sophisticated tools capture not only branching conditions, but
also actual boolean expressions that use attributes attached to data objects in the
process model. In this way, we can specify, for example, that a branch coming out
of an XOR-split should be taken when the attribute loanAmount of a data object
called “loan application” is greater than € 10,000, whereas another branch should be taken when this amount is up to € 10,000. When the simulator generates objects
of type loan, it gives them a value according to a probability distribution attached to
this attribute.
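A minimal sketch of this data-driven branching mechanism is shown below; the log-normal distribution assumed for loanAmount and the € 10,000 threshold are purely illustrative assumptions.

    import random

    def choose_branch(rng):
        # Draw the loanAmount attribute of a simulated "loan application" object
        # from an assumed log-normal distribution, then evaluate the XOR-split condition.
        loan_amount = rng.lognormvariate(9.0, 0.6)
        return "large-loan branch" if loan_amount > 10_000 else "small-loan branch"

    rng = random.Random(7)
    print(choose_branch(rng))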
There are minor differences in the way parameters are specified across simulation
tools. Some tools require one to specify the mean arrival rate, that is the number
of cases that start during one time unit (e.g., 50 cases per day), while other tools
require one to specify the mean inter-arrival time between cases (e.g., 2 min until a
new case arrives). Recall that the distinction between mean arrival rate (written λ in
queueing theory) and mean inter-arrival time (1/λ) was discussed in Section 7.2.1.
Other tools go further by allowing one to specify not only the inter-arrival time, but
how many cases are created every time. By default, cases arrive one by one, but in
some business processes, cases may arrive in batches.
Example 7.11 An example of a process with batch arrivals is an archival process
at the Macau Historical Archives. At the beginning of each year, transfer lists are
sent to the Historical Archives by various organizations. Each transfer list contains
approximately 225 historical records. On average, two transfer lists are received
each year. Each record in a transfer list needs to go through a process that includes
appraisal, classification, annotation, backup, and re-binding. If we consider that each
record is a case of this archival process, then we can say that cases arrive in batches
of 225 × 2 = 450 cases. Moreover, these batches arrive at a fixed inter-arrival time
of one year. 

Finally, process simulation tools typically differ in terms of how resource pools
and resource costs are specified. Some tools restrict the specification to a resource
pool and its number of resources. A single cost per time unit is then attached to the
7.3 Simulation 287

entire resource pool. Other tools support a more fine-grained specification of the
resources of a pool one by one with specific cost rates for each created resource
(e.g., create 10 clerks one by one, each with their own name and hourly cost).
The above discussion illustrates some of the nuances found across simulation
tools. In order to avoid diving straight away into the numerous details of a tool,
it may be useful for beginners to take their first steps using the BIMP simulator
referred to in Example 7.10. BIMP is a rather simple BPMN process model
simulator that provides the core functionality found in commercial business process
simulation tools.

7.3.4 A Word of Caution

One should keep in mind that the quantitative analysis techniques we have seen in
this chapter, and simulation in particular, are based on models and on simplifying
assumptions. The reliability of the output produced by these techniques largely
depends on the accuracy of the numbers that are given as input. Additionally,
simulation assumes that process participants work mechanically. However, process
participants are not robots. They are subject to unforeseen interruptions, they display
varying performance depending on various factors, and they may adapt differently
to new ways of working.
It is good practice whenever possible to derive the input parameters of a
simulation from actual observations, meaning from historical process execution
data. This is possible when simulating an as-is process that is being executed in the
company, but not necessarily when simulating a to-be process. In a similar spirit, it
is recommended to cross-check simulation outputs against expert advice. This can
be achieved by presenting the simulation results to process stakeholders (including
process participants). The process stakeholders are usually able to provide feedback
on the credibility of the resource utilization levels calculated via simulation and
the bottlenecks put into evidence by the simulation. For instance, if the simulation
points to a bottleneck in a given task, while the stakeholders and participants
perceive this task to be uncritical, there is an indication that incorrect assumptions
have been made. Feedback from stakeholders and participants helps to reconfigure
the parameters such that the results are closer to matching the actual behavior. In
other words, process simulation is an iterative analysis technique.
Finally, it is advisable to perform sensitivity analysis of the simulation. Con-
cretely, this means observing how the output of the simulation changes when adding
one resource to or removing one resource from a resource pool, or when changing
the processing times by ± 10%, for example. If such small changes in the simulation
input parameters significantly affect the conclusions drawn from the simulation
outputs, one must be careful when interpreting the simulation results.
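A simple way to organize such a sensitivity analysis is sketched below: rerun the simulation once per perturbed parameter and report the relative change in the output. The toy cycle-time function merely stands in for a full simulation run and is an assumption made for illustration only.

    def sensitivity(run_simulation, base_params, perturbations):
        # run_simulation: any function returning a performance measure
        # (e.g. average cycle time) for a given set of input parameters.
        base = run_simulation(**base_params)
        report = {}
        for name, relative_change in perturbations.items():
            perturbed = dict(base_params, **{name: base_params[name] * (1 + relative_change)})
            report[name] = (run_simulation(**perturbed) - base) / base
        return report

    # Toy stand-in: M/M/1-style cycle time = processing_time / (1 - utilization).
    toy = lambda processing_time, interarrival: processing_time / (1 - processing_time / interarrival)
    print(sensitivity(toy, {"processing_time": 15.0, "interarrival": 20.0},
                      {"processing_time": 0.10, "interarrival": -0.10}))

In this toy setting, a 10% change in either parameter changes the cycle time by 50% or more, which is exactly the kind of situation in which simulation results should be interpreted with caution.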
