Section 9
The iterative loops between simulation and assignment have already been noted
in Section 3.1 and are illustrated again in Figure 9.1 below. Thus the assignment
sub-stage supplies the link flows which are needed by the simulation which in turn
supplies the flow-delay curves for simulated turning movements which the
assignment requires.
This loop may exist either as a loop between two distinct programs SATEASY and
SATSIM or, in versions of SATURN from 9.1 onwards, within a single combined
program SATALL. In both cases the basic principles are the same although, in
certain respects, there is greater potential for flexibility within the one program
SATALL. The use of SATALL is now strongly recommended.
The loops are necessary essentially because the turn-based flow-delay
curves used by the assignment are only approximations, in that they
ignore the “interactions” between links in the determination of delays. To
illustrate a very simple case, consider a T-junction with one minor (give-way) arm
and one major arm. Here the delay on the minor arm will depend both on the flow
along the minor arm as well as the flow on the major arm (and indeed the latter
effect will probably dominate). However the flow-delay relationship set by
simulation only includes the effect of the minor arm flow - the implicit assumption
is that the major flow is fixed throughout the assignment. If the flow does remain
constant on two successive assignments then the assumption is correct; if it does
not then the routes generated by the assignment are, to a greater or lesser extent,
inconsistent with the simulated delays.
In order to deal with the interactions SATURN loops between assignment and
simulation until (reasonably) steady flows are obtained, at which point the model is
judged to be “self-consistent” or “in equilibrium”; i.e., the flows that go into the
simulation produce delays which in turn reproduce the same flows on assignment.
(Technically this approach is known as the “diagonalisation method”).
The (main) parameter used to monitor the rate of convergence is the percentage
of link flows which vary by less than, say, 5% (the parameter PCNEAR) between
loop n and loop n-1. If this exceeds the parameter ISTOP then the process is
judged to have converged satisfactorily. If this criterion is satisfied on NISTOP
successive loops then the process is terminated (where, by default and in all
versions prior to 10.3, NISTOP = 1 so that the loops stop as soon as the ISTOP
criterion is met).
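The loop and its stopping rule can be sketched as follows. This is illustrative pseudocode, not SATURN source: assign() and simulate() are hypothetical placeholders for the two sub-stages, and the stopping test is the PCNEAR / ISTOP / NISTOP rule described above.

```python
def converged_loop(assign, simulate, max_loops=20,
                   pcnear=5.0, istop=90.0, nistop=1):
    """Alternate assignment and simulation until the percentage of
    links whose flows change by less than PCNEAR% exceeds ISTOP%
    on NISTOP successive loops."""
    flows, hits = None, 0
    curves = simulate(flows)              # initial flow-delay curves
    for loop in range(1, max_loops + 1):
        new_flows = assign(curves)        # routes from current curves
        if flows is not None:
            near = sum(1 for f, g in zip(new_flows, flows)
                       if abs(f - g) <= pcnear / 100.0 * max(g, 1e-9))
            pct_flows = 100.0 * near / len(new_flows)
            hits = hits + 1 if pct_flows >= istop else 0
            if hits >= nistop:            # ISTOP met NISTOP times
                return new_flows, loop
        flows = new_flows
        curves = simulate(flows)          # update flow-delay curves
    return flows, max_loops
```

In this sketch a second identical assignment immediately satisfies the criterion, mirroring the "self-consistency" test described in the text.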
Clearly good convergence is highly desirable; poor convergence means that the
results are imprecise / “in error”. A more precise description of the statistics used
to monitor convergence is given in 9.2 and advice on how to achieve good
convergence (or, conversely, how to avoid poor convergence) is given in 9.5. On
the other hand, poor convergence is not the only source of error in the model
outputs (cf. data input errors, section 2.1) and good convergence generally
requires longer run times so that, in practice, the target level of convergence will
represent a compromise between accuracy and run times.
The program SATALL, first introduced with SATURN 9.1, in effect combines the
programs SATASS/SATEASY and SATSIM into a single program and carries out
a full assignment/simulation convergence loop internally. Thus, as shown in
Figure 3.1, taking as input a .ufn network file built by SATNET, it both assigns and
simulates so as to produce an output .ufs file containing converged assigned flows
plus the corresponding simulated delays.
In addition to the final flows, travel times etc., the output .ufs files also contain a
wide range of convergence plus selected aggregate statistics from each individual
assignment/simulation loop which may be accessed via SATLOOK and/or P1X. The
aggregate statistics from several loops may also be averaged; see 17.9.2.
By combining two programs into one SATALL should be both faster and,
ultimately, “more clever” in terms of the steps that can be introduced in order to
improve the rates of convergence. For example it can combine the DIDDLE
option with elastic assignment which is otherwise impossible; see 9.10. All the
various distinct options for assignment and/or simulation that can be invoked with
the separate programs SATEASY and SATSIM may now be carried out within
SATALL. Indeed, as with the elastic DIDDLE (what a great name!) mentioned
above, there are extra options only available with SATALL; see 9.12.
The traditional “control procedure” to automatically run the loops between the
distinct assignment and simulation programs, SATURN8, is described in Section
14.3. The equivalent and current standard procedure, SATURN, which runs
SATALL is described in Section 9.14.
9.2.1 Table 1
The contents of Table 1 may vary depending on the precise assignment algorithm
being applied. In the case of standard Wardrop Equilibrium the output plus
interpretation is as given below:
Thus column 1 monitors the “delta function” which is internal to the assignment
and measures the degree to which the routes to which traffic is assigned are
minimum cost routes; see Section 7.1.4. If, as a rule of thumb, these figures are much in excess
of 1% then you should consider increasing the parameter NITA to obtain better
internal convergence. Otherwise the “uncertainty” in each assignment may be
adversely affecting the overall convergence. (For an alternative point of view see
9.5.4.)
Columns 1 and 2 also give the numbers of assignment and simulation internal
iterations undertaken (for which NITA and NITS set upper limits).
Firstly “%FLOWS” reports the fraction of assignment links whose assigned flows
change by less than 5% (or, strictly speaking, less than PCNEAR%) from one
simulation-assignment loop to the next. This is a somewhat arbitrary function but
one which has been used in SATURN from the beginning and has thereby
acquired a certain historical momentum. It is (by default, see 9.2.3) the “main”
convergence parameter in that it is used to stop the loops once the figure exceeds
the parameter ISTOP; typical values for ISTOP are 90 or 95%. ISTOP is set in
&PARAM; see 6.3.2. See also 9.2.3 for alternative stopping criteria.
These two measures, while generally similar, can present different viewpoints
under certain circumstances. Thus in highly congested networks, where delays
are a very sensitive function of flows, it is quite possible for the flows to settle
down (high %FLOWS) but for the delays to fluctuate wildly (low %DELAYS).
Alternatively if the delay-flow relationship is very “flat” it is possible that the delays
will be stable (high %DELAYS) but for the flows to wander (low %FLOWS). Thus if
only one of these measures is high it probably implies that your overall convergence
is acceptable even though either the flows or the delays remain uncertain.
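%FLOWS and %DELAYS are both instances of the same "percentage within PCNEAR%" calculation applied to different quantities. A minimal sketch (the exact SATURN definitions may differ in detail):

```python
def pct_near(prev, curr, pcnear=5.0):
    """Percentage of links whose value changed by less than pcnear%
    between loop n-1 (prev) and loop n (curr)."""
    near = sum(1 for p, c in zip(prev, curr)
               if abs(c - p) <= pcnear / 100.0 * max(abs(p), 1e-9))
    return 100.0 * near / len(prev)

flows_prev = [500.0, 800.0, 1200.0, 300.0]
flows_curr = [510.0, 795.0, 1400.0, 301.0]
print(pct_near(flows_prev, flows_curr))  # 3 of 4 links within 5% -> 75.0
```

Applied to link flows this gives a %FLOWS-style figure; applied to link delays, a %DELAYS-style figure.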
If %VI goes strongly negative on, say, loop 6 then that is a strong argument for
setting KOMBI to 6. We return to this question in Section 9.3 below.
Generally speaking the GAP values will be greater than the DELTA values since
the routes chosen based on the assignment cost estimates will tend to be slightly
worse when the costs are further changed by the simulation. The difference
between GAP and DELTA is therefore an indication as to how far the assignment
and simulation stages “disagree” on the calculation of delays; for further statistics
on the level of disagreement and the turning movements which may be causing it
please see Section 9.9.1.
As with delta (7.1.4) a gap of under 1% may be regarded – at least for some
purposes – as satisfactory. For example it is much better than the ability of
drivers in real life to choose “true” minimum cost routes which may be categorised
as around 5%. However, for other modelling purposes, significantly lower GAP
values need to be achieved and, indeed, should be achieved. A GAP value of
under 0.1% or even 0.01% should be regarded as a suitable target. See 9.2.4 for
a more complete discussion and 9.5 for advice on improving convergence.
Note that the latter two measures assume that the assignment is trying to assign
all trips to minimum cost routes. As this is not true under stochastic assignment,
%VI and %GAP are not reported there.
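To make the GAP idea concrete, a GAP-style statistic can be sketched as the excess of the cost actually incurred over the cost if every trip used its current minimum-cost route, expressed as a percentage. This is a hypothetical simplification for illustration, not SATURN's exact formula:

```python
def gap_percent(od_routes):
    """od_routes: for each O-D pair, a list of (flow, cost) tuples for
    the routes currently carrying flow. Returns the percentage excess
    of actual cost over the cost of putting all flow on the cheapest
    currently-known route."""
    actual = sum(f * c for routes in od_routes for f, c in routes)
    ideal = sum(sum(f for f, _ in routes) * min(c for _, c in routes)
                for routes in od_routes)
    return 100.0 * (actual - ideal) / ideal

# One O-D pair: 80 pcu on a 10.0-unit route, 20 pcu on a 10.5-unit route
print(gap_percent([[(80.0, 10.0), (20.0, 10.5)]]))  # -> 1.0 (% gap)
```

A DELTA-style measure is the same construction evaluated with the assignment's own cost estimates, which is why GAP, computed with the post-simulation costs, is generally the larger of the two.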
9.2.2 Table 2
The second table contains extra convergence statistics relating to the progress of
the assignment-simulation loops:
Certain of the above measures are those recommended by the DfT for monitoring
the degree of convergence of any “congested assignment model” as set out in
4.4.19-4.4.28, Chapter 4, Section 2, Volume 12 of DMRB.
The choice between conditions 2(a), (b) and (c) is set by the parameter KONSTP
= 0, 1 or 2 respectively. Note that in versions of SATURN prior to 10.6 KONSTP
did not exist and the second condition was only based on %FLOWS. All three
parameters KONSTP, STPGAP and STPCPU are set under &PARAM in SATNET
(or SATALL) and default to 0, 1.0 and 1000.0 respectively.
Note as well that tests based on GAP are not always available, depending on the
exact form of assignment algorithm used. Thus GAP is not calculated under
Stochastic Assignment or Multiple User Class Elastic Assignment.
Historically SATURN has always used %FLOWS as its stopping criterion (in
addition to MASL) although it is a fairly simplistic measure with very little
theoretical pedigree. For example, having a fixed cut-off between “OK” and “not
OK” means that it fails to distinguish links that fail marginally and links that fail by
a large amount. Essentially it was introduced at a very early stage of program
development when a rule was needed and it just hung about until it was so deeply
entrenched it was difficult to get rid of! On the other hand, in its favour, it is easy to
calculate and understand and works in all possible situations, not just Wardrop-
based models.
A further problem with the use of %FLOWS as the stopping criterion is that it may
depend on the “accuracy” of the assignment method used. Thus if one uses an
extremely accurate assignment such as OBA (see Section 21) the true difference
in link flows between loops n-1 and n will be obtained (to a good approximation)
whereas with a less accurate technique, such as the default Frank-Wolfe
algorithm, there is less scope for the assignment in loop n to move away from the
assignment in loop n-1 (which is used as its starting point, assuming of course
that DIDDLE = T as it should be); hence the differences in link flows tend to be
reduced and the %FLOWS measure increased. Hence, despite being a better
assignment method with better convergence properties, OBA may perversely
appear less convergent than Frank-Wolfe in terms of %FLOWS.
On the other hand, GAP does have a definite theoretical interpretation, does
differentially weight “good” and “bad” fits and is easy to compare between
networks of wildly different shapes and sizes. It is also more “neutral” with respect
to the problem of assignment accuracy discussed above.
On balance, therefore, our current “best buy” for a stopping criterion is the GAP
(set KONSTP = 1 or 4) although we recognise that there is a strong case for
carrying on with %FLOWS for historical continuity and the default continues to be
%FLOWS (KONSTP = 0).
However, whichever stopping criterion users choose, they should always view
GAP as their most important single indicator of overall convergence.
We consider here the question of what sort of values are “acceptable” for the
various stopping criteria used not only in the assignment-simulation loops (ISTOP
etc.) but also in the assignment and simulation sub-stages themselves.
Such questions are intimately connected with the purposes for which the model is
being run. For example, if you wish to do a very broad-brush “quick and dirty”
estimate of what traffic conditions may look like in 20 years' time, the results will, of
necessity, be very inexact and there is no point in imposing very strict
convergence criteria. On the other hand calibrating a present-day network where
you have extremely good data may justify very strict criteria.
One particular case where very good convergence – and therefore very strict
criteria – is absolutely required is the comparison of “with” and “without” scheme
networks where the differences are likely to be relatively small and can only be
accurately measured if both sets of results are extremely accurate. Otherwise the
differences will simply get lost in the “noise”.
It can be argued that, as a very general rule of thumb, the reduction in total
vehicle-costs due to the “scheme” should be ten times larger than the “noise” in
the model (as outlined in WebTAG Variable Demand Modelling, Section 3.10.4).
This implies that if a scheme reduces total travel cost by, say, 1%, then you
require a GAP value of 0.1% or better to achieve a satisfactory evaluation.
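The arithmetic behind this rule of thumb, using purely illustrative numbers and the stated assumption that the scheme benefit should be around ten times the convergence "noise":

```python
# Illustrative only: a scheme saving 1% of a total vehicle-cost of
# 1,000,000 units needs the convergence "noise" (roughly, the GAP)
# to be about a tenth of that saving.
total_cost = 1_000_000.0
scheme_saving_pct = 1.0
required_gap_pct = scheme_saving_pct / 10.0
noise_budget = total_cost * required_gap_pct / 100.0
print(required_gap_pct)  # 0.1 -> target GAP of 0.1%
print(noise_budget)      # 1000.0 cost units of tolerable noise
```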
The current advice on model convergence was set out in DMRB 12.2.1 Table 4.1
(Chapter 4, Vol 12, Section 2, Part 1) and reproduced below. The advice was
originally issued in 1997 and has not been updated since - it is widely recognised
that the convergence targets are set substantially below the level required in order
to produce robust estimates of traffic flows and costs for model development and
appraisal.
The latest, emerging guidance from transport practitioners suggests a more
stringent set of standards, described below for illustrative purposes only. The
emerging guidance notes that different levels of convergence may be required
through the course of the study. For example, a more relaxed convergence level
may be appropriate to ensure that a sufficient number of matrix estimation loops
may be undertaken. In all circumstances, the onus remains on the SATURN user
to ensure that their assignment is converged to an appropriate level.
Clearly non-convergent flows are undesirable. One way of trying to deal with the
problem is to average the assigned flows over successive loops. Thus if after n
assignment-simulation loops the link flows are Va(n) and we carry out a further
assignment (using whatever assignment method -- Wardrop, Stochastic, etc) to
obtain flows Fa(n+1) then we take a strict 50:50 average of the two flows to obtain:
Equation 9.1
Va(n+1) = [ Fa(n+1) + Va(n) ] / 2
or (post-SATURN 10.4) a Λ-weighted average:
Equation 9.2
Va(n+1) = Λ Fa(n+1) + (1 − Λ) Va(n)
The first method is associated with the KOMBI parameter and the second with
AUTOK as explained below. If neither is used then:
Equation 9.3
Va(n+1) = Fa(n+1)
The loop at which 50:50 averaging first occurs is set by the parameter KOMBI;
thus setting KOMBI = 3 allows 3 assignments with the “normal” method before
averaging is introduced. If, of course, convergence (relative to ISTOP) is
achieved within KOMBI loops then no averaging takes place.
Provided that convergence does occur naturally then there are strong reasons for
not invoking KOMBI (see below). Our advice to users is to first test networks with
KOMBI = 99 (or 0, the effect is the same) and if convergence is seen to be
proceeding happily enough (see 9.2) then leave well enough alone. If however
the flow-convergence is seen to decrease, say, after loop 5 then consider setting
KOMBI to 5.
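The two averaging schemes of equations (9.1) and (9.2) differ only in the weight given to the latest assignment; a minimal sketch:

```python
def average_flows(v_n, f_n1, lam=0.5):
    """Lambda-weighted average (eq 9.2) of the previous loop's flows
    v_n and the latest assigned flows f_n1; lam = 0.5 reproduces the
    strict 50:50 KOMBI average of eq 9.1."""
    return [lam * f + (1.0 - lam) * v for v, f in zip(v_n, f_n1)]

print(average_flows([1000.0], [1200.0]))        # KOMBI 50:50 -> [1100.0]
print(average_flows([1000.0], [1200.0], 0.25))  # Lambda = 0.25 -> [1050.0]
```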
The AUTOK facility (AUTOmatic Kombi) was first added in SATURN 10.4 as a
method for automatically determining (a) at which point averaging should be
introduced and (b) the appropriate Λ-weights so that the user would not need to
make such decisions as to appropriate values of KOMBI via a process of trial and
error. Its theory is very similar to that used under the ROSIE option, 7.1.3.
Thus if AUTOK = T (the default is F) then after each assignment a full simulation
is carried out using the latest assigned flows (i.e., without any averaging), at which
point a test is carried out on Φ(1) (described qualitatively as %VI in 9.2.1 and defined
more precisely below, equation (9.5)) to test whether averaging would improve
convergence. If Φ(1) is positive then no further action is taken; if, however, Φ(1) is
(significantly) negative then the flows are averaged as per equation (9.2) with an
“optimum” value of Λ.
The optimum value of Λ is derived from the same optimising
rule as applied within the Frank-Wolfe algorithm for networks with “separable”
cost-flow curves; see 7.1.2. This in turn is based on a 1987 paper by one Dirck
Van Vliet (“The Frank-Wolfe Algorithm for Equilibrium Traffic Assignment Viewed
as a Variational Inequality”, Transportation Research 21B, 87-89) which in turn
was based on “Viewing Wardrop Equilibrium as a Variational Inequality” (Smith,
1979). Thus the Frank-Wolfe rule for combining the current assigned flows Va with
the latest all-or-nothing assigned flows Fa may be written as the solution to:
Equation 9.4
Σa ca(λ) ( Va − Fa ) = 0
where ca(λ) represent the link costs with the flows averaged as per equation (7.2).
We may think of the (negative) costs as representing a “Social Force Field” which
is pushing the current solution Va in the direction of the cheaper routes
represented by Fa. The solution (9.4) represents the point at which the social force
field has shifted such that the all-or-nothing routes are no longer the cheapest
routes (because they have been allocated extra traffic and the original routes less)
and the force field is now at right angles (“normal”) to the direction of flow change.
The equivalent rule when applied to two successive sets of fully assigned flows
from assignments n and n+1 would be to require that:
Equation 9.5
Φ(Λ) ≡ Σa ca(Λ) ( Va(n) − Fa(n+1) ) = 0
where now the costs ca(Λ) are those derived from simulation n+1 based on Λ-
averaged flows. Again we may think of the costs as a force field pushing the
solution from Va(n) towards Fa(n+1) and that the simulation is changing the direction
of the force field by calculating different costs.
If, as mentioned above, Φ(1) > 0 then a step-size of 1 is used and no averaging
takes place. N.B. This is the most frequent result; see below. Alternatively, if Φ(1)
< 0, then equation (9.5) is solved by a heuristic method of successive
interpolations.
Thus we first calculate Φ(0); this may be done, in fact, immediately after
simulation n where the simulated costs are, in effect, ca(0). In theory Φ(0) > 0
(although it is possible that lack of assignment convergence and/or rounding
errors might drive it marginally negative) so that our first estimate of the optimum
Λ1 is simply based on a linear interpolation between Φ(0) and Φ(1):
Equation 9.6
Λ1 = Φ(0) / [ Φ(0) − Φ(1) ]
and we carry out another simulation with weighted flows and obtain the “correct”
value for Φ(Λ1).
If Φ(Λ1) is zero, or very near zero, then we terminate. If, however, Φ(Λ1) is
significantly different from zero we may estimate an improved value Λ2 by
approximating Φ(Λ) as a quadratic function using the three points we have already
simulated at Λ = 0, Λ1 and 1. If, having carried out a further simulation with Λ =
Λ2, Φ(Λ2) is not sufficiently near zero then the process of quadratic approximations
and re-simulation continues using the last three estimated points until the zero-
point is obtained with sufficient accuracy. Empirically it would appear that the
solution is obtained “most of the time” with a simple linear interpolation or a small
number of quadratic steps.
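The interpolation procedure can be sketched as below. This is an illustrative reconstruction, not SATURN code: phi stands for the step-length function whose zero is sought in equation (9.5), each evaluation of which would in practice require a full simulation.

```python
def quad_root(pts, lo=0.0, hi=1.0):
    """Root of the quadratic through three (x, y) points, choosing the
    root inside [lo, hi]."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    d = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / d
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / d
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / d
    if abs(a) < 1e-12:                     # points nearly collinear
        return -c / b
    disc = max(b * b - 4.0 * a * c, 0.0) ** 0.5
    r1, r2 = (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)
    return r1 if lo <= r1 <= hi else r2

def solve_lambda(phi, tol=1e-6, max_iter=10):
    """Find Lambda with phi(Lambda) ~ 0: linear interpolation first
    (eq 9.6), then repeated quadratic fits through the last three
    evaluated points."""
    y0, y1 = phi(0.0), phi(1.0)
    if y1 >= 0.0:
        return 1.0                         # no averaging needed
    lam = y0 / (y0 - y1)                   # eq (9.6)
    pts = [(0.0, y0), (1.0, y1)]
    for _ in range(max_iter):
        y = phi(lam)
        if abs(y) < tol:
            return lam
        pts = (pts + [(lam, y)])[-3:]      # keep last three points
        lam = quad_root(pts)
    return lam

# Example: phi(L) = 0.25 - L**2 has its zero at L = 0.5; the linear
# step gives 0.25, and one quadratic fit then lands on 0.5 exactly.
print(solve_lambda(lambda L: 0.25 - L * L))
```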
If an optimum value of Λ is not obtained after, say, 2 or 3 iterations and the same
behaviour is noted consistently over several assignment-simulation loops then it
may be possible to “accelerate” the estimation of Λ by introducing an additional
empirical factor based on the ratio of the final Λ value to the initial linear
interpolation given by equation 9.6. Thus if we consistently observe that the final
value is 0.5 times the initial value then it may well save time (by reducing the
number of repeated simulation steps) if on the next application of AUTOK we
reduce the initial estimate by a factor of somewhere between 0.5 and 1.0.
This factor is referred to as the “AUTOK AVERAGE STEPS FACTOR” in the .LPT
output files and is not a constant but varies throughout based on the most recent
experiences.
This “fix” was first introduced in 10.9.17 and has been found to marginally reduce
the number of repeated simulations and therefore reduce CPU time.
Information on the Λ-weights used to average assigned flows, and on the number
of internal loops used to calculate those weights on each loop, is displayed within
the standard table of assignment-simulation convergence statistics (see 9.2.1) as
printed within the .lpt files or as displayed on request by P1X or SATLOOK.
One minor reason for NOT using KOMBI is that, if you are using link-based
assignment (the default), any SAVEIT-based analysis of the route pattern AFTER
the end of the loops, for example a PIJA analysis, printing a forest or cordoning a
trip matrix, becomes approximate although, see 15.23, it should be a very good
approximation. However if using KOMBI is the price that has to be paid for
achieving good convergence then it is a price well worth paying - the problems
introduced by an approximate “SAVEIT” are relatively minor.
The same problem does not arise with AUTOK as long as averaging was not
applied on the final assignment - simulation loop.
However this objection does not apply if you are using either path-based or origin-
based assignment where the path information is preserved exactly.
DIDDLE is a relatively new SATURN option but it has proved to be highly effective
both in terms of reducing the number of internal assignment iterations and
improving the assignment/simulation convergence loops. With version 9.3 (and
later) DIDDLE also works with elastic assignment when the loops are within
SATALL (but not with SATEASY). It will not however function with stochastic
assignment.
An earlier problem that SAVEIT could not be used in conjunction with DIDDLE no
longer applies as (see 15.23) the route flows are estimated at the end of a full
assignment whenever DIDDLE is in effect. As with KOMBI (last paragraph in 9.3),
the use of DIDDLE introduces an element of approximation into the saved routes
but these problems are minor compared to problems of convergence.
Given the intrinsic advantages of using DIDDLE its default value is set to .TRUE.
Unfortunately there are no precise rules for what to do if your network does not
converge as well as you might expect or require (see 9.2.4). “All happy networks
are alike but an unhappy network is unhappy after its own fashion”. The following
are therefore only suggestions as to what you might try; if they work, fine - if they
don't, keep on trying!
A typical symptom is that %FLOWS reaches 90% but then fails to progress much
further. The next two sections provide advice on what to do with (a) really badly
converged networks and (b) very well-converged networks.
4) Check that the global levels of network congestion “look right”. Highly
congested networks tend to converge erratically so if, for example, there
are too many trips in the trip matrix the resulting congestion may
adversely affect convergence. (So, possibly, you need to consider elastic
assignment.)
8) When using DIDDLE there is a strong case for reducing the maximum
number of internal iterations per assignment, NITA, but increasing the
number of assignment/simulation loops, MASL, since the total number of
all-or-nothing assignments used in the final solution is the product of the
two. Introducing more frequent updates of the cost-flow curves via the
simulation may possibly reduce the total cpu time required to converge.
See 9.5.4 below and also see NITA_S, 15.23.3. On the other hand it may
sometimes happen with DIDDLE that the overall convergence is impeded.
9) Use the Convergence options within P1X (11.15) to help to identify those
points in the network where convergence is an issue. For example, the
tables of the “ten worst” nodes/delays/flows may indicate critical nodes,
although without giving a particular cause for the poor convergence at
that point. However you should certainly look at all error messages at
such nodes.
10) Increase the dependence of cost on distance (i.e. increase PPK) since
distance (clearly) is fixed and does not vary between loops as does time
so that the assigned routes are less sensitive to time changes. (Although
your choice of PPM and PPK may well be constrained by other factors
such as evaluation.)
11) Make sure that LTRP > 0 as this helps to reduce “discontinuities” in cost-
flow curves, particularly at V = C for major arms at priority junctions
where there are very few other causes of below-capacity delays; see
8.6.1. Equally set RTP108 = T; see 8.6.3.
12) The basic reason why networks do not converge is because of the
interaction effects between flows and delays; reducing the level of
interaction may therefore help (but on the other hand it may make your
network representation less precise!). Therefore you could try to:
Reduce the critical gap values (the default values are, in all likelihood,
too high);
Do not use blocking back (set ALEX to zero; not generally recommended);
13) (Repeat of 1!). Check the error messages in your .lpn file and/or use the
Highlight facility in P1X – most of the time that's where the problem lies!
The next section illustrates one or two possible extreme cases.
Inevitably there are certain networks whose convergence behaviour can only be
described as “terrible”. For example, their %FLOWS figures get to the mid-80s
and then suddenly shoot back to the 70s.
The main cause for such instabilities is, almost always, blocking back, aided and
abetted by coding “peculiarities”. For example, if a simulation link has been given
a link distance of, say, 10 metres and one lane then its default stacking capacity
will be less than 2 pcus. In this case if the exit turns go from V/C ratios of 0.99 to
1.01 then the link will start to block back, the blocking back factor may be
extremely low in order to reduce the queue to under 2 pcus and the resulting
queue may extend backwards through a long series of links. If V/C drops back
from 1.01 to 0.99 then the queues disappear. In this case the modeller has to ask
whether or not a link distance of 10 metres is realistic. (In some cases, of course,
it may be, but it may also well be that the link was simply added as some sort of
pseudo-link, 10 might be a typo for 100, etc., etc.).
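The arithmetic of this example can be checked with a back-of-envelope calculation. The per-pcu stacking length used here (5.75 m) is an assumed illustrative value, not necessarily SATURN's default; the point is only that a 10 m, one-lane link stacks fewer than 2 pcus.

```python
def stacking_capacity(length_m, lanes, m_per_pcu=5.75):
    """Rough stacking capacity (in pcus) of a link, assuming each
    queued pcu occupies m_per_pcu metres of lane (assumed value)."""
    return length_m * lanes / m_per_pcu

print(round(stacking_capacity(10.0, 1), 2))   # -> 1.74 pcus, under 2
print(round(stacking_capacity(100.0, 1), 2))  # -> 17.39 if 10 was a typo for 100
```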
Instabilities in blocking back may be detected from the table which displays the 10
links whose blocking back factors change most over a single assignment-
simulation loop. In particular look for any links whose stacking capacity is low.
Any link with a stacking capacity of less than, say, 5 pcus is an accident waiting to
happen and should be carefully vetted.
The converse situation may also occur, i.e., the link distance and stacking
capacity are correct but the queue is unrealistically long due to coding errors at
the junction leading to very low capacities. To illustrate the point from one
(anonymous!) network: a priority T-junction on a major road into town was coded
such that the X-marker which was meant to indicate that right-turning traffic
leaving town had to give way to traffic entering town was inadvertently coded on
the opposite arm which meant that the major traffic into town had to give way to
right-turning traffic from the other direction. As a result the capacity for traffic into
town was reduced from what should have been its saturation flow of 1800 down to
100 or 200, the resulting queue stretched back for 10 km and the whole model
became highly unstable. (The coding error, by the way, is now detected as
Serious Warning 137 in SATNET).
The moral of the story is that even very small errors in network coding can lead
to very large network problems and, unfortunately, users have to: (a) be very,
very careful in their coding and (b) look very, very carefully at their outputs.
Most networks, provided that most of the coding “funnies” have been removed,
should be capable of converging to a very high degree; e.g., gap values of 0.01%
or better, %FLOWS of 100%, etc., etc. However, in order to achieve such
convergence levels, three conditions have to be satisfied: the assignment
sub-stage, the simulation sub-stage and the assignment-simulation loops
themselves must each be very well converged.
The first may be achieved most easily using origin-based assignment (OBA,
Section 21) which can reduce the assignment convergence (i.e., Delta 7.1.4) to
(effectively) zero, the second is generally just a question of setting sufficiently tight
simulation convergence criteria (described next), while the third is best obtained
by making use of AUTOK (9.3.2).
In addition there is almost always a fourth condition which is that the network must
be well coded such that, even if there are no fatal or semi-fatal errors detected by
SATNET, certain Serious Warnings and/or Non-Fatal Errors (mostly involving X-
turns at signals) are removed.
It may also be useful, if not using OBA, to set NITA_M, the minimum number of
assignment iterations to, say, 3 or 4, otherwise the assignment may stop after a
single iteration which does not allow a sufficient improvement in the assigned
flows to take place.
Currently, it is probably safe to say, most networks are not run to anywhere near
their potential convergence levels – all it needs is a bit of ambition, confidence and
application of the above rules. Go for it!
Various “tricks” exist to try to minimise the cpu time required by SATALL to reach
the required level of assignment-simulation convergence, particularly for “large”
networks where run times become a practical consideration. Most of these
methods are based on the empirical observations that, certainly for large
networks, the assignments take much more cpu than the simulations (since the
number of assignment calculations is roughly speaking proportional to zones
times links whereas the simulation is proportional to links only).
(1) Increased MASL, decreased NITA - The first suggestion relies on the
DIDDLE = T option to continue each assignment from the end point of the
previous assignment loop (see 9.4), so that the full assignment is made up of (in
effect) NITA * MASL all-or-nothing iterations with the simulation introduced
after every NITA iterations in order to update the cost-flow curves. By
decreasing NITA and increasing MASL in proportion the same number of total
iterations may be achieved in roughly the same cpu time (any increases in
cpu will be due only to the extra simulations) and with - hopefully! - the same
overall convergence. Thus instead of setting, say, NITA = 50 and MASL = 20
we would recommend using NITA = 10 (or even 5) and MASL = 100 in the
hope that convergence would be achieved in far fewer than 100 loops.
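The iteration budget underlying suggestion (1) is easily tabulated; each setting below gives the same total of all-or-nothing iterations but more frequent simulation updates of the cost-flow curves:

```python
# With DIDDLE the final solution is built from roughly NITA * MASL
# all-or-nothing iterations, with one simulation per loop (MASL total).
for nita, masl in [(50, 20), (10, 100), (5, 200)]:
    print(f"NITA={nita:3d} MASL={masl:3d} "
          f"AON iterations={nita * masl} simulations={masl}")
```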
In addition the final re-calculated value of UNCRTS (which must also be its
minimum value) is carried through to the final SAVEIT assignment (15.23.1) - if
one is carried out - so that the convergence of the SAVEIT assignment should be
comparable to the convergence of the final loop assignment and therefore to the
GAP value.
The same principles of “relaxed convergence” are also used by CASSINI in order
to minimise CPU time for supply-demand feedback loops; see 15.54.
Note that AUTONA cannot be used with any form of elastic assignment.
Whether or not an elastic loop will converge better than an inelastic loop is difficult
to say in advance; it is probably a question of “horses for courses”. Thus including
trip matrix variability probably complicates matters; however the lower level of
congestion to be expected with elastic assignment probably simplifies matters.
Experience to date is limited.
[Figure: the DIY elastic assignment/simulation loop - net.dat -> SATNET -> net.ufn;
reference matrices Cij0.ufm and Tij0.ufm plus control.dat -> SATEASY -> Tij.ufm
and net.ufa; net.ufa and Tij.ufm -> SATSIM -> net.ufs]
Thus starting from a network data file net.dat fed through SATNET the process
starts with an initial elastic assignment using SATEASY. This requires input
reference matrices cij0 and Tij0 in order to define the demand function (best defined
within net.dat; see 7.12.3). An initial estimate of the road trip matrix, Tij', may or
may not be available. However on subsequent iterative loops the output trip
matrix Tij.ufm may be “re-cycled” back to the subsequent elastic assignment with
REDMEN = T.
SATSIM is run in order to update the link flow-delay curves based on the elastic
link flows, and the process loops back through the elastic assignment. Parameters MASL, ROSIE
and KOMBI (but not DIDDLE or AUTOK) may be used to control convergence.
The DIY nature of this approach is further reflected in the fact that there are no
dos-style bat procedures available to simplify the loops. This is left to your
ingenuity!
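For example, a DIY loop might be scripted along the following lines (an illustrative Python sketch only; the program names follow the figure above, but the exact command invocations, file handling and convergence test are assumptions left to the user):

```python
# A minimal sketch of a DIY assignment/simulation loop, assuming SATNET,
# SATEASY and SATSIM can be invoked as commands taking a network name.
# The convergence test is user-supplied; by default the loop simply runs
# max_loops times.
import subprocess

def run_elastic_loop(net, runner=subprocess.run, max_loops=20,
                     converged=lambda n: False):
    """Build the network once, then loop assignment/simulation."""
    runner(["SATNET", net], check=True)           # net.dat  -> net.ufn
    for loop in range(1, max_loops + 1):
        runner(["SATEASY", net], check=True)      # assignment -> net.ufa
        runner(["SATSIM", net], check=True)       # simulation -> net.ufs
        if converged(net):                        # e.g. compare successive flows
            return loop
    return max_loops
```

The `runner` argument is only there so the loop logic can be exercised without the SATURN executables installed; in real use the default `subprocess.run` would invoke the programs directly.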
The program SATALL, first introduced with SATURN 9.1, in effect combines the
programs SATASS/SATEASY and SATSIM into a single program and carries out
a full assignment/simulation convergence loop internally. Thus, as shown in
Figure 3.1, taking as input a .ufn network file built by SATNET, it both assigns and
simulates so as to produce an output file containing converged assigned flows
plus the corresponding simulated delays.
By combining two programs into one SATALL should be both faster and,
ultimately, “more clever” in terms of the steps that can be introduced in order to
improve the rates of convergence. For example it can combine the DIDDLE
option with elastic assignment which is otherwise impossible; see 9.10. All the
various distinct options for assignment and/or simulation that can be invoked with
the separate programs SATEASY and SATSIM may now be carried out within
SATALL. Indeed, as with the elastic DIDDLE (what a great name!) mentioned
above, there are extra options only available with SATALL; see 9.12.
(ii) The percentage of links whose flows differ by less than 5% (or, strictly
speaking, PCNEAR %) between successive assignments; see 9.2.
(iii) The average GEH parameter (see 15.6) indicating the differences in
demand flows per link between successive assignment loops.
Note that both measures i) and ii) are used to determine stopping criteria; iii) is
provided for information only. Convergence is reached as ii) goes to 100% and iii)
goes to zero.
The statistics displayed depend on the precise assignment option used. For
Wardrop-based assignment the window displays:
Of these only i) and ii) determine stopping conditions; iii) goes to zero as one
approaches convergence.
Only i) is used as a stopping criterion; ii) stabilizes and iii) goes to zero at
convergence.
(ii) the average absolute change in OUT profiles in pcu/hr (see 8.3);
where, under iii), junctions are only simulated on a particular iteration if there has
been a “significant” change in the flow profiles into that junction. A decreasing
number of simulated junctions implies convergence.
Statistics i) and ii) determine the simulation stopping criteria; see 8.3. At
convergence ii) approaches zero.
The line printer (.LPT) output file contains fully comprehensive statistics illustrating
the convergence of both the individual assignment and simulation stages plus the
loops and includes all the “window” data described in 9.8. To reduce the size of
the file these are given in tables as far as possible.
loop number is given at the end of the complete set of loops and may also be
obtained as part of the convergence statistics output by SATLOOK (11.11.8)
and/or P1X.
1. The percentage of links “PCOK” whose (demand) flows differ by less than 5%
(or, strictly speaking, PCNEAR%) between successive assignments; see
9.2.1. The distribution of PCOK for a standard set of (in effect) PCNEAR
values from 0.5% up to 50% is also given. In addition Table L(9) – see below -
lists the 10 “worst” links.
2. Further comparison statistics of flow differences per link between the last two
assignments, e.g.:
MEAN GEH STATISTIC = 0.83
MEAN ABSOLUTE DIFFERENCE = 15.96 %
RELATIVE MEAN ABS DIFFERENCE = 3.65 %
RELATIVE MEAN STANDARD DEVIATION = 7.37 %
3. A comparison of the turn delays as estimated by the assignment with the
"correct" delays calculated by the simulation; at convergence the two should
be identical, e.g.:
MEAN SIMULATED TURN DELAY = 31.75 secs
MEAN ABS. DIFF. IN ASS/SIM DELAYS = 19.38 secs
RELATIVELY = 61.03%
NUMBER DIFFERING BY < 5.0% OR 1 SECOND = 96
RELATIVELY (PCOK) = 78.05%
Note that whereas certain global statistics such as the pcu-distance may
appear to stabilize rapidly others, such as those related to delays and
stops, are considerably more variable.
7. List of the (up to) 10 largest changes in Blocking Back Factors used on the
current and previous loops plus an r-squared value comparing the same
factors.
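The link-flow comparison statistics under (1) and (2) above may be sketched as follows (illustrative Python only, assuming the standard GEH formula sqrt(2(v1 - v2)^2/(v1 + v2)); the precise normalisations used internally by SATURN may differ):

```python
# Illustrative computation of link-flow comparison statistics between two
# successive assignments. Flows are in pcu/hr; names are not SATURN's own.
import math

def geh(v1: float, v2: float) -> float:
    """Standard GEH statistic for a pair of flows."""
    if v1 + v2 == 0.0:
        return 0.0
    return math.sqrt(2.0 * (v1 - v2) ** 2 / (v1 + v2))

def flow_comparison(prev, curr, pcnear=5.0):
    """Compare link flows from two successive assignments.

    PCOK is taken here as the percentage of links whose flows differ by
    less than pcnear % (relative to the larger flow) - an assumption.
    """
    gehs = [geh(a, b) for a, b in zip(prev, curr)]
    abs_diffs = [abs(a - b) for a, b in zip(prev, curr)]
    ok = sum(1 for a, b in zip(prev, curr)
             if abs(a - b) <= pcnear / 100.0 * max(a, b))
    return {
        "mean_geh": sum(gehs) / len(gehs),
        "mean_abs_diff": sum(abs_diffs) / len(abs_diffs),
        "relative_mean_abs_%": 100.0 * sum(abs_diffs) / sum(curr),
        "pcok_%": 100.0 * ok / len(curr),
    }
```

At convergence the mean GEH and the absolute differences go to zero while PCOK goes to 100%.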
Further disaggregate measures include “gap”, capacity and flow differences. The
“Gap” is defined to be the difference between the assigned and simulated delays (as
above) multiplied by the demand flow. It is therefore similar, but not strictly
identical, to the route-based contributions to the assignment delta values (Eqn.
7.3, section 7.1.4) and/or simulation-assignment gap function (9.2.1). Capacity
differences are simply the differences in turn capacity as calculated on two
successive simulations (with an assignment stage in between) while flow
differences are the differences between two successive assignments.
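As an illustration, the turn-level gap and the ranking used by the “10 worst” tables might be computed as follows (hypothetical Python; the field names and units are illustrative, not SATURN's):

```python
# Illustrative gap measure for one turning movement: the absolute
# difference between assigned and simulated delay, weighted by demand flow.
def turn_gap(assigned_delay_s, simulated_delay_s, demand_flow_pcu):
    """Gap contribution of one turn (delay in seconds, flow in pcu/hr)."""
    return abs(assigned_delay_s - simulated_delay_s) * demand_flow_pcu

def worst_turns(turns, n=10):
    """Rank turns by gap, largest first.

    turns: iterable of (id, assigned_delay, simulated_delay, demand_flow).
    Returns the ids of the n worst turns, mimicking a "10 worst" table.
    """
    scored = [(turn_gap(a, s, f), tid) for tid, a, s, f in turns]
    return [tid for gap, tid in sorted(scored, reverse=True)[:n]]
```

At convergence every turn's assigned and simulated delays coincide, so all gap contributions go to zero.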
Firstly, at the end of each simulation Table L(8) lists, in decreasing order, the 10
turning movements whose current simulated delays differ most from those
calculated from the time-flow curves used in the previous assignment (as given in
aggregate terms under (3) above). These differences indicate where the
assignment and the simulation “disagree” in terms of how to define delays even
though the demand flows are identical; this “disagreement” is the main cause of
“gap” values which exceed the “delta” values; see 9.2.1. To help in identifying the
likely cause of these differences the current and previous capacities plus the
(demand) flows are listed.
Secondly, at the end of each assignment stage, Table L(9) lists the 10
(assignment) links whose (demand) flows differ most in terms of GEH statistics
between the latest assignment and that on the previous loop.
The same two tables may also be displayed interactively within P1X (see 11.15)
which also optionally offers two additional “10 worst” tables in terms of (a) “gaps”
and (b) capacities.
In addition the various turn-based convergence statistics, e.g., delays, gaps, etc.
may be generated and displayed within P1X as either turn or link annotation data
which may, in turn, be saved as SATDB data columns. In the latter format they
may be displayed in a window with the data ordered by, say, decreasing absolute
value so that it becomes possible to view not just the 10 worst examples but the
full list of worst examples.
A summary table is given at the end of each assignment, the precise contents of
which depend on the assignment technique used (e.g. Wardrop vs Stochastic,
elastic vs fixed trip matrix, etc). For the simplest case of Wardrop Equilibrium the
following measures are given for each iteration:
FRACTION: The ultimate fraction of trips assigned to this iteration; see 7.1.2.
C1: The total cost associated with the current flows and the current costs.
C2: The total cost if all trips could be assigned to the current minimum cost
routes.
FDZ = DZ/Z (as a %), the fractional improvement in the objective function.
ZULB: The updated lower bound on the objective function; see 7.1.5.
EPS: Epsilon, the current “uncertainty” in the objective function; see 7.1.5.
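To illustrate the C1/C2 measures, the following sketch (illustrative Python; the exact normalisation SATURN uses may differ) computes the total cost of the current flows at current costs, the cost if all trips used their current minimum-cost routes, and a normalised difference which goes to zero at Wardrop equilibrium:

```python
# Illustrative computation of C1 and C2 for one assignment iteration.
def c1_c2(link_flows, link_costs, od_demands, od_min_costs):
    """C1: total cost of current flows at current costs.
    C2: total cost if every trip used its current minimum-cost route."""
    c1 = sum(v * c for v, c in zip(link_flows, link_costs))
    c2 = sum(t * c for t, c in zip(od_demands, od_min_costs))
    return c1, c2

def delta_percent(c1, c2):
    """A normalised convergence measure: zero at equilibrium, since C2 <= C1."""
    return 100.0 * (c1 - c2) / c1
```

Since C2 can never exceed C1, the difference between the two brackets the distance from equilibrium in much the same way as EPS brackets the objective function.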
A standard summary table is given at the end of each simulation, listing for each
iteration:
Elastic assignment within SATALL is carried out in much the same way that it is
carried out in SATEASY with the obvious proviso that it is part of the outer
simulation/assignment loop, not an isolated elastic assignment.
Thus the same control parameters are used; e.g., MCGILL > 0 designates the form
of elastic demand function, BETA and POWER are elasticity-related
parameters, etc. Similarly the share-based demand functions, the extended logit
models (7.6) and the distribution models (7.10) may all be invoked within
SATALL. All these and the necessary file names may be set either in the original
.dat file - highly recommended, 7.12.3 - or input/re-defined in the SATALL control
file.
On loops after the first, SATALL always uses the REDMEN option of starting the
latest elastic assignment with the estimate of the trip matrix from the previous
assignment. The reasons for doing so are (a) it almost certainly helps
convergence and (b) the previous trip matrix is already stored internally so no user
intervention to define the matrix is required.
Hence if REDMEN = T and an estimated trip matrix is set in the original .dat file
(or otherwise) this is only used on the very first elastic assignment. (NB. This is
not a reason not to invoke REDMEN since using a good estimate of Tij on the first
assignment is still a very good thing).
SATALL can use the DIDDLE option such that, if DIDDLE = T, any inelastic
assignment after the first will commence with the initial set of link flows equal to
the flows from the previous assignment. This is very similar to the REDMEN
option, the difference being that REDMEN specifies, in effect, the initial flows on
the pseudo links while DIDDLE specifies the flows on the real links. Empirically
using DIDDLE appears to improve convergence significantly.
Multiple user class assignment within SATALL is essentially no different from that with
the separate assignment steps within SATEASY. Follow the instructions in 7.3.
The same applies for elastic multiple user class assignment; see 7.9.
It is a frequent problem that having run a network through SATALL over, say, 20
assignment - simulation loops, you find that what you really wanted was to do 21
loops. An obvious solution is to change the convergence parameters in the
original file, e.g. MASL to 21, and re-run but on large networks this is potentially
very time consuming.
will take the file net.ufs (which has been through 20 loops, say) and carry out one
more simulation-assignment loop. The output file will also be named net.ufs and
therefore over-writes the input file.
Using a parameter MASL 5 will run (up to) 5 extra loops; the actual number may
be less if the loops terminate on the ISTOP criteria rather than MASL. It will,
however, always carry out one additional loop even if the original ISTOP criterion
was satisfied. One may circumvent this “problem” by using both a MASL
parameter and the KR parameter to define a new control file which increases the
ISTOP value. Alternatively one can edit the network .ufs using P1X (11.9.11) to
change parameter values such as ISTOP prior to the continuation run.
Strictly speaking the MASL n option increments the existing value of MASL by
n; it does not guarantee that exactly n extra loops are run since, as mentioned
above, the number of loops may be terminated by other criteria such as ISTOP.
For example, if the original value of MASL was 20 but the loops stopped after 10
due to ISTOP, then using MASL 5 on the command line and resetting ISTOP to a
higher value may actually result in 15 extra loops since the new value of MASL
will be 25. In principle it should be possible to set up the continuation option such
that “MASL 5” implies “run exactly 5 extra loops” or that it means “set MASL equal
to the current number of loops plus 5”. But it doesn't do that – it does what it says
on the tin!
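The continuation arithmetic can be made explicit as follows (illustrative Python only, ignoring stopping criteria other than the loop count):

```python
# Illustrative "MASL n" continuation arithmetic: the command-line value
# increments the stored MASL, so the number of extra loops actually run
# depends on how many loops were completed previously.
def extra_loops(original_masl, loops_done, masl_increment):
    """Upper bound on additional loops after a 'MASL n' continuation
    (assuming no other criterion such as ISTOP intervenes). At least one
    extra loop is always carried out."""
    new_masl = original_masl + masl_increment
    return max(new_masl - loops_done, 1)

assert extra_loops(20, 10, 5) == 15  # the example in the text above
assert extra_loops(20, 20, 5) == 5   # run to completion, then 5 more
```
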
Note that in this case the output network file, net.ufs, has the same name as the
input network file, i.e. it overwrites it. This creates a problem since both versions
of the file need to co-exist at the same time so, to avoid this, the input file is first
copied into net.ufn and that file is used as input. A consequence of using this
option is that an existing net.ufn file will itself be over-written.
Prior to version 10.5 signal settings (either stage times if SIGOPT = T or offsets if
SATOFF = T) could be continuously optimised during the assignment / simulation
loops; i.e., once per loop. However, as explained in 15.31.1, this is almost
certainly not a very realistic procedure for setting green times, since it is likely
to lead to an overly optimistic view of network performance, nor is it very
efficient in terms of cpu time.
Thus in 10.5 an option has been introduced such that the optimisations only occur
at the end of a fully converged simulation / assignment sequence, e.g., at the end
of MASL loops. At this stage the stage times and/or offsets are optimised and the
simulation /assignment loops re-started until convergence is again achieved. The
“outer-outer” loop is repeated NIPS times, where NIPS is a parameter set under
&PARAM in the original network .dat file.
We note that, if the optimisation changes are relatively small (as is generally to be
expected) then subsequent loops should converge very quickly. There is a good case,
therefore, for reducing MASL in proportion to NIPS (or, strictly, NIPS+1). For
example, to carry out a maximum of 60 simulation/assignments with NIPS = 2,
then set MASL = 20 as that will give 20 loops followed by the first optimisation, 20
more followed by the second and 20 more to follow – a total of 60.
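The arithmetic is simply MASL * (NIPS + 1), as the sketch below confirms (illustrative Python):

```python
# With NIPS signal optimisations, the simulation/assignment loops run
# (NIPS + 1) times, so the maximum total number of loops is:
def max_total_loops(masl: int, nips: int) -> int:
    return masl * (nips + 1)

assert max_total_loops(20, 2) == 60  # the example given in the text
```
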
A new option, first included in version 10.6, allows the user to run SATALL
without actually assigning any trips. In this case the only network flows will be
those included as “fixed flows”, e.g., bus routes, pre-load flows, etc.
To request this option set the parameter ZILCH = T either within &PARAM in the
network .dat file or within the SATALL control file.
At first glance this may seem a bit of a silly option – why have an assignment
model that doesn't actually do any assignment? However there are a number of
circumstances in which it could be useful.
For example, one might wish to cordon off a segment of a network and run the
simulation with the identical flows as in the “master” network since extracting and
re-assigning a cordoned trip matrix (a) takes time and (b) is not guaranteed to give
identically assigned flows. To do this the master network must be introduced as a
pre-load network to the cordoned network (with some care being taken that any
bus flows etc. are not double-counted – more on that one later).
Equally one might wish to simulate, entirely within SATURN, traffic flows extracted
from a different suite of programs and transferred as a text file. (Recall the use
of text files to define pre-load flows, not just .ufs files; 15.5.4.) Or, similarly, one
might wish to test the effect of “pre-scheme” flows on a “with-scheme” network so
that the network may have been altered but the link flows themselves remain
unchanged.
Note that because no assignment takes place certain options such as SAVEIT
etc. are ignored and there will be no route information output. Equally any form of
elastic assignment is ignored. On the other hand, if there are multiple user classes,
the flows (if any are input via pre-load) are retained.
SATALL network trips KR control COST cost REDMEN tij1 TIJ tij2
       FREEZE ice MASL n RESTART
where:
If the KR option is not invoked on the command line, then SATALL expects to find
the parameters in the default control file SATALL0.DAT. (See 9.15.2.)
Note that the input file has the “new” extension .UFN and that the output file is
always .UFS, whether or not the network is pure buffer. If network.UFN
does not exist but network.UFS does then it is renamed.
The special DOS .bat file, SATURN (also known as SATURN9), has been
provided to run the complete set of network building (SATNET) plus
assignment/simulation (SATALL) operations in sequence.
In the simplest case, where the trip matrix and all other file names have been
defined within the network .dat file (recommended), use:
SATURN network
Files:
network.DAT Input SATURN network data file
Further details of the SATURN .bat procedure are given in Section 14.3 and
special extensions thereto are described in Section 14.4.
(MCUBC=1) (Optional)
20 Input cost matrix “CKLFIL” used at the lower level of a nested logit
model (MCGILL=5) (Optional)
21 Input “reference” cost UFM matrix cij0 used under Elastic Assignment
(Mandatory under Elastic Assignment)
22 Input initial trip matrix estimate used under Elastic Assignment
(Optional under Elastic Assignment (REDMEN = T))
23 Output road trip UFM matrix used under Elastic Assignment: trips by
road (Mandatory under Elastic Assignment)
24 Input “freeze” matrix indicating which cells in an elastic assignment
are to be frozen (ICING=T); see 7.5.5. (Optional)
28 A scratch UF file used under OBA plus AUTOK or KOMBI
29 Input update .UFS file used under Warm Starts
Input consists only of a set of Namelist Parameters associated with the name
&PARAM and an optional new network title.
Network Title
A new descriptive network title may be set by either:
Default File
The default control file SATALL0.DAT is as follows:
&PARAM
&END