Section 8
The simulation may be carried out either within a separate program SATSIM (in
which case it would need to be run in conjunction with the separate assignment
program SATEASY) or, much more usually, within SATALL. The discussion in
this chapter applies to both. Some technical information on SATSIM specifically is
given in 8.10; more general information on SATALL is given in Section 9.
The simulation rests on two basic assumptions:
That traffic flows are approximately constant over time periods of the order of 30 minutes (or the value set by LTP);
That traffic signals operating with fixed cycle times of the order of, say, 90 seconds impose a pattern of “cyclic flow profiles” within the longer time frame.
Thus the main building block in SATURN is the cyclic flow profile (CFP), the flow
of traffic past a certain point as a function of time. For example, we assume that if
one were to stand downstream from a set of traffic signals operating at a fixed
cycle time of 75 seconds one might observe a pattern of flow as illustrated in
Figure 8.1.
In other words there would be cyclical surges of traffic corresponding to the green
period at the lights and periods of minimal flow corresponding to the reds. Each
75-second CFP would be identical to every other over the full 30-minute simulation period, thus enabling us to simulate only a single 75-second cycle rather than the full 30 minutes, with consequent reductions in computation time.
The basic principles of cyclic flow profiles are well tried and tested, for example in
the TRANSYT program. (Robertson, D.I. (1974) Cyclic flow profiles. T.E.C. 15
pp.640-1).
In order to simulate flows within this framework the user must define several
parameters. Firstly the length of the period of constant flow (e.g., 30 minutes) is
set by LTP (in units of minutes). Secondly the length of the shorter cycle is set by
the parameter LCY (in seconds). Finally each cycle is sub-divided into a number
of time units (as specified by the user-input parameter NUC, see 15.15.2),
typically 10 to 20, so a CFP can be represented in the computer as an array, the
elements of which represent the flow in each discrete time unit of length LCY/NUC
seconds (as opposed to being a continuous function such as that illustrated in Fig. 8.1).
A typical time unit duration is around 5 seconds and represents the time resolution
of flow within SATURN [Note however that traffic signals in SATURN are
effectively defined with a resolution of one second since signal phases may begin
or end in the “middle” of individual time units.]
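By way of illustration, the short sketch below (not SATURN code; the parameter names LTP, LCY and NUC are simply those defined above) shows how a single CFP might be held as an array of NUC time units:

```python
# Illustrative sketch only: a cyclic flow profile (CFP) held as an array of
# NUC discrete time units, using the parameters described in the text.
LTP = 30      # period of constant flow (minutes)
LCY = 75      # cycle length (seconds)
NUC = 15      # time units per cycle (user input, typically 10 to 20)

unit_length = LCY / NUC              # seconds per time unit (5 s here)
cfp = [0.0] * NUC                    # flow (pcus) in each discrete time unit

cycles_in_period = 60 * LTP / LCY    # identical cycles, so only one is simulated
print(unit_length, cycles_in_period) # 5.0, 24.0
```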
CFP’s in SATURN are based on turning movements, thus allowing for banned
turns, separate turn phases at signals, etc. and also for the fact that, for example,
the expected delay to a right turner is usually greater than that for a left turner.
Each turn from link i to j has associated with it four CFP’s, as illustrated in Figure 8.2; viz:
the IN pattern, the pattern of traffic entering the upstream end of the link;
the ARRIVE pattern, the pattern of traffic arriving at the downstream stop line;
the ACCEPT pattern, the pattern of traffic which can actually make the turn;
the OUT pattern, the pattern of traffic which actually makes the turn and leaves the link.
Note that the IN and ARRIVE profiles are similar in shape (but not absolute
magnitude) for all turning movements whereas ACCEPT and OUT are turn
specific. The profiles are related as follows:
The IN profile is set, essentially, by the assigned flows at the upstream end of the link, although its precise size/shape/profile is determined by the OUT profiles at the upstream node and may be less than the assigned flows (in toto) by virtue of flows queued upstream (see 8.2.8).
The ARRIVE pattern is derived from the IN using platoon dispersion, a well-
established traffic engineering technique (and also allowing for traffic that departs
at the upstream end of the link, e.g., to zones or terminating bus flows, and/or
joins at the downstream end, e.g., from zones or buses).
The OUT pattern of a turn is based on the ARRIVE and ACCEPT patterns
whereby the ACCEPT pattern functions essentially as a “filter” on the ARRIVE’s to
determine how much traffic can cross the stop line at each point during the cycle.
OUT profiles then combine to determine the total IN pattern of succeeding turns.
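A minimal sketch of this filtering step is given below; it is purely illustrative (the function name and the treatment of the initial queue are assumptions, not SATURN’s internal algorithm), but it shows how an OUT profile and a queue could be built time unit by time unit from the ARRIVE and ACCEPT profiles:

```python
def out_from_arrive_and_accept(arrive, accept, initial_queue=0.0):
    """Illustrative only: the ACCEPT profile acts as a filter on the ARRIVE
    profile to give the OUT profile; whatever cannot cross the stop line in a
    time unit remains in the queue."""
    out, queue = [], []
    q = initial_queue
    for arrivals, can_accept in zip(arrive, accept):
        q += arrivals                 # arrivals join the stop-line queue
        served = min(q, can_accept)   # at most ACCEPT pcus may cross
        q -= served
        out.append(served)
        queue.append(q)
    return out, queue
```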
In addition there is a 5th CFP, the QUEUE profile, representing the number of vehicles (pcus) queued at the stop line at any point in the cycle, which is important in that it allows one to calculate the average delay per vehicle; see 8.4.1 and 8.4.8.
Thus, at a signalised junction, the QUEUE profile might resemble the classic “saw
tooth” pattern as the queue grows during the red periods and declines in the
green. At priority junctions or roundabouts it might be virtually flat representing the
average number of vehicles waiting for a suitable gap to appear. For further
details see Section 8.4.8.
The best way to appreciate the definition of and the interaction between the
various cyclical flow profiles is to use the node analysis functions within PIX
(and/or SATLOOK) to examine actual simulation nodes; see 11.1.1 and 11.12.
As the OUT pattern from one junction generates the IN profiles for the next junction, traffic, in effect, “moves” through the network.
Since the IN profile at the upstream end of link (A,B) is derived from the various OUT profiles at A, node B cannot be fully simulated until A (and all of B’s other upstream nodes) have been simulated. Similarly A cannot be fully simulated until all its upstream nodes have been simulated as well. Thus, starting at node B and following these dependencies upstream, we will generally wind up at node B again. Simulation is therefore an iterative process in which we iterate over each node in turn (generally in numerical order but see 8.3.4 for alternative strategies) until convergence is achieved as explained further in 8.3.
The parameter NITS sets the maximum number of internal simulation iterations
(default of 20) while NITS_M (which defaults to five) sets the minimum.
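The overall iterative structure might be sketched as below (an illustration only: simulate_node, the tolerance and the node ordering are placeholders, while NITS and NITS_M are the parameters just described):

```python
def iterate_simulation(nodes, simulate_node, NITS=20, NITS_M=5, tol=1.0):
    """Illustrative loop: re-simulate every node until the average change in
    OUT profiles (pcu/hr) is effectively zero, subject to NITS_M/NITS limits."""
    average_change = float("inf")
    for iteration in range(1, NITS + 1):
        changes = [simulate_node(n) for n in nodes]   # numerical or topological order
        average_change = sum(changes) / len(changes)
        if iteration >= NITS_M and average_change < tol:
            break                                     # converged
    return average_change
```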
Note that the average level of cyclical flow is essentially determined by the
assignment but the shape of the profile is determined by the simulation.
Since the ACCEPT profile is particularly important in that the sum of the ACCEPT
values per time unit establishes the capacity for the turn, we describe in
somewhat greater detail how it is obtained.
In essence the process starts by assuming saturation flow and then reduces it in a
sequential manner to obtain the final ACCEPT profile. More specifically:
Set the ACCEPT value for each time unit equal to its turn saturation flow; see
6.4.6.
If the exit link is blocking back (i.e. the queue on that link exceeds its stacking
capacity) reduce the value per time unit by a uniform “blocking back factor”;
see Section 8.5.
At traffic signals, reduce it to zero during the red phases (if a phase change occurs in the middle of a time unit an appropriate adjustment is made).
Reduce it further to allow for lanes shared with other turning movements.
See 8.9.2 for interactions with traffic on the same arm and 8.8.3.3 for the
interactions with traffic on other arms (which only occurs with merging
movements).
For X-turns at signals (i.e., right turners which are opposed by straight ahead
vehicles in the opposite direction during the green stages) allow a maximum
of TAX vehicles to clear at the end of each green stage from the centre of the
junction (where TAX is user defined and may be link-specific (6.13); default
2). See 8.2.4.
In the case of “blocked” and “unblocked” movements sharing the same lane
at signals (e.g. a lane where straight ahead vehicles can go on green but
right-turners have to wait) a certain number of unblocked vehicles are allowed
to go at the “head of the queue”, their number being determined by their
relative proportion (based on a probabilistic argument as to where in the
queue the first blocked vehicle will occur). See 8.2.5.
If flared lanes have been explicitly coded add extra capacity for those turning
movements which may use them directly and/or for turns in lanes which have
been “relieved” by the flares. See 8.2.6. (N.B. New in 10.9.22)
If the link has been coded as a “capacity restrained” simulation link and the
total capacity at the junction exceeds that of the link then the ACCEPT values
of every turn are factored down in order to equal the mid-link capacity. See
6.4.12 and 8.4.4.
The turn capacity is then determined by summing the individual ACCEPT's per
time unit.
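The sequence of reductions listed above might be summarised by the following sketch (purely illustrative; here the various factors are simple inputs, whereas SATURN derives them from the give-way, signal, lane-sharing, flare and mid-link capacity models referenced in the steps):

```python
def build_accept_profile(sat_per_unit, green, bbf=1.0, lane_share=1.0, choke=1.0):
    """Illustrative only: start each time unit at the turn saturation flow and
    apply successive reduction factors; the turn capacity is the sum."""
    accept = [sat_per_unit
              * bbf                        # blocking back on the exit link
              * (1.0 if g else 0.0)        # zero during red at signals
              * lane_share                 # lanes shared with other movements
              * choke                      # mid-link "capacity restraint"
              for g in green]              # green: True/False per time unit
    return accept, sum(accept)             # profile plus capacity per cycle
```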
We further note the point made in 6.4.6 that if there are additional effects which
influence the capacity but which are not included in the above set (e.g. the effect
of pedestrian crossings) then it is essential that the user incorporates these
additional effects within the input data, e.g. by reducing the saturation flow.
The capacity of a give-way turn depends upon the flows and saturation flows of the major movements to which it must give way (i.e., those movements which are either crossed or share the same exit as determined by the junction geometry). The basic equation may be written as:

Equation 8.1

C = S P1 P2 ...

where C is the capacity of the give-way turn, S is its saturation flow and Pi is the probability of finding an acceptable gap in conflicting major movement i, given in the basic case by:

Equation 8.2

Pi = (1 - Vi / Si)^(G Si)

where Vi and Si are the flow and saturation flow of major movement i and G is the gap-acceptance parameter.
However variations are used to represent other less clear-cut situations. For
example, consider two minor arms at a priority junction, both of which feed into the
same exit or else cross and for which there is no well-established priority rule. In
such cases SATURN assumes that the two movements “share” and that the
probability of one finding a gap in the other is given by:
Equation 8.3

Pi = 0.5 + 0.5 (1 - Vi / Si)^(G Si)
and vice versa. Hence each movement receives at least 50% of the available
time and must look for gaps in the remaining 50%.
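A hedged numerical sketch of equations (8.1) to (8.3) follows; the function names are illustrative, but the variables mirror the text (Vi, Si, the power G·Si and the 50/50 “shared” variant):

```python
def gap_probability(v_major, s_major, g, shared=False):
    """Equation (8.2): probability of a gap in a major movement with flow
    v_major and saturation flow s_major; equation (8.3) if 'shared' is True.
    g is the gap-acceptance parameter in units consistent with the flows."""
    p = (1.0 - v_major / s_major) ** (g * s_major)
    return 0.5 + 0.5 * p if shared else p

def give_way_capacity(sat_flow, major_movements, g):
    """Equation (8.1): C = S * P1 * P2 * ... over all conflicting movements."""
    c = sat_flow
    for v_major, s_major in major_movements:
        c *= gap_probability(v_major, s_major, g)
    return c
```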
Another variation can occur with merge-style situations where the SATURN rule is
that the merging traffic needs to find a gap in one lane only of the major turn (as
opposed to traffic crossing a major road which needs gaps in all lanes).
For “simple” merges, e.g., an entry ramp onto a motorway, equation (8.2) is still
applied but the flow and saturation flows are those in one lane only, i.e., on the
inner lane of the motorway. Pragmatically this means that Vi/Si is the same but
the power G Si is reduced by a factor equal to the number of lanes, hence giving a
higher probability of a gap. See 6.4.2.3 for the rules for defining the “major” turn
and the single lane within which the merge takes place and 8.8.3.1 for the lane
choice rules and further possible reductions in the ACCEPT patterns.
For “Y-merges”, i.e., situations where an M-marker appears for two turns which
share an exit as with two motorways merging, equation 8.3 is applied to both
turns. Hence, in effect, both turns have a “guarantee” of 50% of the available
capacity and have to “fight” for the remaining 50%. Lane choice rules are
described in 8.8.3.2.
Note that further capacity restrictions may be applied to both types of merges to
account for the limited physical capacity of the exit lane; see 8.8.3.3 and 8.8.3.4.
8.2.3 CAPMIN
(N.B. Prior to 10.6 CAPMIN was applied at the level of the cycle; i.e., if the total
capacity as summed over each individual time unit were less than CAPMIN then
they were factored up to equal CAPMIN. In 10.6 the rule is applied at the level of
the time unit; i.e., if the capacity per time unit were less than CAPMIN it would be
factored up. If the flows are relatively flat, i.e., if there is no pronounced platooning
due to traffic signals in the vicinity, then the differences are very small; but, if there
is significant platooning, the overall capacity may be somewhat greater than
CAPMIN.)
8.2.4 Stacked Vehicles Clearing at the end of Green; Link-Specific TAX Values
As mentioned above (note 6, 8.2.1) up to ‘TAX’ vehicles can clear at the end of
each green phase for X-coded turns at signals in order to represent the behaviour
of vehicles which “stack” in the centre of the junction and only clear once the
green phase has ended.
Originally TAX was a universal parameter; with SATURN 9.4 it became a link-
specific parameter which may be set either using the X-file facility (6.13) or (post
10.9) on individual link data records (see 6.4.14 and 6.4.15). Thus links with very
wide junctions or with multiple stacking lanes may be assigned a much larger TAX
value than more constricted junctions. Users will probably find that, post 10.9, the
“extra line convention” described in 6.4.14 will be the most convenient method to
define link-specific TAX values.
Note that TAX refers to the total number of stackable pcu’s, not the number per
lane.
TAX plays a further role in allowing unblocked vehicles to pass in a lane shared
with an X-turn; see 6.4.9.5 and 8.2.5.1 below.
The most common, but not the only, example of this occurs with X-turns where the
(right) turning traffic coded X is blocked by opposing flow but the straight-ahead
traffic in the same lane is unblocked.
In these cases if the vehicle at the head of the queue is “unblocked” it will
proceed; if blocked, it will not and all vehicles in the queue behind will have to wait
until the head vehicle can go. Clearly if the head vehicle does go then the same
situation occurs with the next vehicle in the queue, etc. etc.
It may be shown that the average number of unblocked vehicles at the head of the
queue is given by:
n = p / (1 - p)
where p is the probability (fraction) of unblocked vehicles. Thus if 3/4 of vehicles
in the queue are unblocked on average 3 will proceed.
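The result follows from a simple geometric argument (a sketch of the reasoning, on the assumption that each queued vehicle is independently unblocked with probability p): the probability that exactly k unblocked vehicles are followed by the first blocked one is p^k (1 - p), so that

n = Σ k p^k (1 - p) = p / (1 - p)

and with p = 3/4 this gives n = 3, as in the example above.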
A variation of the above rule has been introduced in 10.8 whereby, if a parameter
MONACO (think tax-free!) in &PARAM is set to .TRUE., then the number of
blocked vehicles required to completely block the lane becomes TAX+1, not 1,
where TAX is the number of pcus which are able to clear at the end of a green
phase by queuing in the centre of the junction (8.2.4). In effect, the assumption
becomes that there is physical space for TAX pcus to wait in the middle of the
junction and that, while the blocked vehicles are waiting, other traffic can pass by
on the inside, and will continue to do so until the (TAX+1)th pcu arrives and finally
blocks the head of the lane.
The basic concept of MONACO, i.e., an increase in the number of initial straight
ahead vehicles at the start of a green phase, may be extended to include the
impact of an explicit offside flared lane. In this situation potentially queued X-
turners must fully occupy both the centre of the junction (represented by TAX)
and the flared lane space (represented by FLAREX) before they prevent a straight
ahead vehicle with green from proceeding.
Thus if FLAREX pcus can queue in a flared lane plus TAX in the middle of the junction then the average number of non-blocked pcus that can pass is (TAX + FLAREX + 1) p / (1 - p).
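As an illustrative calculation using this formula: with TAX = 2, FLAREX = 3 and p = 3/4, on average (2 + 3 + 1) * 0.75 / 0.25 = 18 unblocked pcus could pass before the lane becomes fully blocked.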
Once the initial “head of queue” ACCEPT profile has been reached then the
capacity of the straight ahead vehicles may be reduced by lane sharing with the
X-turners as per normal.
Note that the contribution of a flared lane to the “head of queue” capacity is only included if MONACO = T.
We may also note that the presence of a flared lane will also clearly increase the capacity for X-turners but this effect is discussed in the next section; the above contribution applies only to straight ahead capacities.
The main difference between TAX and FLAREX, as far as the X-turners are
concerned, is that the TAX pcus are allowed to clear at the end of a green phase
whereas any vehicles remaining queued in the flared lane must wait until the next
green phase to clear.
Post release 10.9.20 flared lanes may be added on a link at either a signalised or
priority junction in order to increase the capacities of either the extreme nearside
or offside turn which can use the flare as well as to increase the capacities of
those turns which share the inside and/or outside lanes and which are “relieved”
by the flares. The approach differs depending on whether the flare diverges from a
shared or an unshared lane as well as on junction type.
The geometry of a flared lane and the (shared) lane from which it diverges is
illustrated below:
(Note: the above layout represents the use of the right turn FLAREX lane but also
equally applies to the left-turn FLAREF lane).
A represents the ahead movement in the “main lane” while F represents the flared
movement which diverges from the main lane into the flared lane. The length of
the flare can stack up to X PCUs. Let QA and QF represent the length of the
queues at the stop line at any point in time for movements A and F respectively.
Priority Junctions
We consider first the case of priority junctions which are somewhat simpler than
traffic signals although most of the modelling principles apply to both. Extra rules
for signals are dealt with below.
Four cases may be distinguished according to whether each of the queues QA and QF is shorter or longer than the flare (of length X PCUs). Thus in case (1) both lane queues are shorter than the flare and there is no direct interference between either turn and they should both experience their “normal” accept profiles at the stop line (i.e., allowing for saturation flows, give-ways, red/green signals etc. etc.) subject to the constraint that their total capacity cannot exceed the normal combined capacity from a single lane.
In case (2), where both queues stretch the full length of the flare and beyond the
point of divergence, the capacity is determined primarily at the point of
divergence. Thus, if there are just two turning movements, ahead and flare, we
calculate both their “downstream” or “stopline” capacities, where they each occupy
a single lane and do not interfere with one another, plus their “upstream”
capacities at the entry point to the flare where they do restrict one another. The
movement with the maximum ratio of upstream to downstream capacities is
judged to be the “most restricted” or “major” flow and its capacity equals its
downstream capacity. The capacity of the other or “minor” turn is then equal to
its upstream capacity factored down by the downstream to upstream ratio of the
“major” flow.
For more than 2 turning movements the same principles apply although the
equations are slightly more complicated.
In case (3) the accept profile for A is set by what happens at the stop line but the
accept profile for F is limited by the amount of traffic that can enter the flared lane
upstream. This in turn equals the OUT profile for movement A factored down by
the ratio VF / (VF + VA); i.e., each time an A vehicle moves up what is the
probability that the next vehicle in the queue is an F which can freely enter the
flared lane?
Case (2) gives minimum capacity, case (1) the maximum with cases (3) and (4)
intermediate. The final accept profile / capacity is a probabilistically weighted sum
of the four cases such that as the flare length increases the best-case scenario (1)
becomes increasingly more likely relative to the worst, (2), and the overall
capacity increases.
In order to calculate probabilities per case the model is sub-divided into signalised
junctions and priority junctions, the distinction being that in the former we model
queues deterministically so that we “know” at any time unit during the cycle which
“case” is active, whereas in the latter we use a probabilistic model to determine
the probability distributions of queues at any point during the cycle and therefore
the probability splits between the 4 cases.
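In outline, the weighting might be sketched as follows (illustrative only; the per-case capacities and probabilities are whatever the deterministic or probabilistic queue models described above supply):

```python
def weighted_flare_capacity(case_capacities, case_probabilities):
    """Illustrative only: final capacity as the probability-weighted sum of
    the capacities calculated under each of the four flare cases."""
    assert abs(sum(case_probabilities) - 1.0) < 1e-6
    return sum(p * c for p, c in zip(case_probabilities, case_capacities))
```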
Signalised Junctions
Signalised junctions differ primarily in that the queue profiles are “deterministic”
rather than “stochastic” although the general principle of determining whether the
“shoe pinches” either upstream or downstream still applies.
An additional factor, however, at signals is that during the red phase it is assumed
that both A and F vehicles will build up queues in their distinct lanes and that there
is therefore a fixed “reservoir” of traffic which can exit at the stopline capacity once
the lights go green independent of any restrictions at the point of entry to the flare.
The stopline capacity continues to apply as long as there are vehicles left in the reservoir, which is continuously updated by subtracting those vehicles which exit at the stopline while adding the maximum (restricted) entry flow at the flare entry.
The situation where the flared lane diverges from an unshared lane for which only
a single turning movement is allowed – and must therefore be the turn which also
uses the flared lane – is considerably simpler in modelling terms. Thus if a single unshared lane flares into two, its accept profiles and capacity are (normally but not always – see next) doubled; two unshared lanes flaring into three are factored by 1.5, etc. etc.
But, for example, if a give-way turn at a priority junction has its per lane capacity
reduced from its saturation flow by a factor of 0.3 to account for give-ways its total
capacity would be increased to 0.6 times its saturation flow if it has a flare. On the
other hand if the reduction factor were 0.6 then the combined normal lane plus
flare would not have a capacity of 1.2 times its saturation flow, only its saturation
flow as set by the maximum throughput from a single lane at the point of
divergence into the flare.
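The worked example above may be expressed as a small sketch (illustrative only): the flared capacity is double the reduced per-lane capacity but can never exceed the single-lane throughput at the point of divergence.

```python
def unshared_lane_plus_flare_capacity(sat_flow, reduction_factor):
    """Single unshared lane plus a flare: double the reduced per-lane
    capacity, capped at the saturation flow of the lane feeding the flare."""
    per_lane = reduction_factor * sat_flow       # e.g. 0.3 * S or 0.6 * S
    return min(2.0 * per_lane, sat_flow)         # 0.6 * S and 1.0 * S respectively
```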
One of the advantages of using cyclical flow profiles as the basic building block
within SATURN is that it enables one to model the effect of co-ordinated signals.
For example, let us suppose that the CFP depicted in Fig 8.1 represents the
arrival pattern of traffic at a signalised junction with the peak of the arrivals during
the middle of each 75-second cycle; this profile will presumably have resulted
from the timing of a signalised junction upstream which releases traffic at fixed
times within the same 75-second cycle. If, furthermore, the green phase of the
current junction is also timed to occur during the middle of the cycle that junction
will be co-ordinated with those upstream which generated the arrival profile. On
the other hand if the green phase were set to occur nearer to the start/end of the
cycle when arrivals fall off then the signals would not be co-ordinated.
Clearly the resulting delays - and queues - would be less with good co-ordination
and more with poor (although, in general, the co-ordination would not affect the
actual capacity).
Note that co-ordination can only be modelled within SATURN if the cycle times of
the upstream and downstream junctions are identical. This is explained further in
Section 15.15. If they are not SATURN assumes that the arrival profile is flat, in
which case the delays are independent of exactly when during the cycle the green
phases occur. Thus in order to model co-ordinated signals within an area it is
essential that all junctions within that area (including non-signalised junctions and
dummies) are coded with a common cycle time. This may be achieved by
ensuring that the global parameter LCY (upon which the ‘cycle times’ of non-
signalised junctions are universally set) reflects the cycle time of the co-ordinated
traffic signals.
Section 15.31 discusses the various options available within SATURN to optimise
signal settings (stage times and offsets). See also Section 12.2 for information on
SATOFF for offset optimisation on its own.
Another feature of the use of CFP’s is that it enables the simulation to cope with
the effects of “flow metering” whereby the flow downstream of an over capacity
junction or other pinch point is reduced accordingly. A detailed description is given
in Sections 17.1 and 17.2 where we differentiate between “demand flow” and
“actual flow”.
Basically the phenomenon arises since the OUT CFP can never exceed the
ACCEPT CFP so that if a turning movement is in excess of capacity the total of
the OUT flow profile will equal the capacity and be less than the sum of the
ARRIVE’s. Hence the subsequent IN profile (and all downstream flow profiles) will
be based on actual rather than demand flows.
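In its simplest terms the principle may be sketched as below (an illustration of the idea rather than of the profile-by-profile calculation):

```python
def metered_flow(demand, capacity):
    """Flow metering in essence: the actual (OUT) flow passed downstream of an
    over-capacity turn can never exceed the turn capacity."""
    return min(demand, capacity)
```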
At any point where traffic streams meet there is usually some interaction; for
example, at a simple crossroads right turners from the south are restricted by
ahead traffic from the north, which in turn may be limited by right turners from the
north whose flow is dependent on the ahead flow from the south, and so on.
Rather than modelling this in detail, the same cycle is simulated iteratively, the
ACCEPT patterns on the n-th iteration being derived from the conflicting flows in
the (n-1)-th. Convergence is considered to have been reached when all OUT
patterns are effectively unchanged by further iterations.
The convergence of the simulated OUT profiles can be thought of as arising from
both “between-” and “within-junction” effects. The between convergence arises
primarily as the IN profiles at each junction are affected by the OUT profiles of the
previous (upstream) junction; as these change so too does the simulation of the
next junction and hence its OUT patterns. Similar effects can also occur in the
opposite direction; for example, if the blocking back characteristics of a
downstream exit link change (see 8.5), then the ACCEPT profiles at the upstream
simulation node will also change.
On the other hand the “within” or “internal” convergence arises from the fact that
OUT profiles of one turn can affect the ACCEPT profiles of other turns at the
same junction, e.g., via giving way or lane sharing.
The next two sections describe parameters which are used to monitor either both
effects combined (8.3.2) or the between effects on their own (8.3.3).
The basic parameter used to monitor the convergence of OUT profiles per node is
the average change in individual components of all OUT patterns expressed in
units of pcu’s per hour. These are then averaged to give a “global” convergence at
the end of each simulation iteration.
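The per-node statistic might be pictured as follows (an illustrative sketch; the function name is not SATURN’s and the profiles are assumed to be in pcu/hr per time unit):

```python
def average_out_change(out_previous, out_current):
    """Average absolute change, in pcu/hr, over all components of all OUT
    profiles at a node between successive simulation iterations."""
    diffs = [abs(a - b)
             for prev, curr in zip(out_previous, out_current)   # one pair per turn
             for a, b in zip(prev, curr)]                       # one pair per time unit
    return sum(diffs) / len(diffs)
```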
Thus at the end of each iteration over all nodes the line printer file contains a
message such as:
LPT and LPS files list the internal convergence statistics (defined as above in
terms of OUT profiles) for each simulation node in order to identify possible “badly
behaved” nodes; e.g.:
would indicate major convergence problems at node 21, possibly minor problems
at node 47, but none elsewhere.
Note that the IN and OUT convergence measures are evaluated differently (the
former is averaged over links, the latter over turns) which means that they are not
strictly “additive”; i.e., one cannot obtain a precise measure of the within-junction
convergence by subtracting IN from OUT. Nonetheless if the IN convergence is
zero and the OUT is positive then all the convergence problems are internal.
Equally if both are significantly greater than zero it probably means that the main
source of convergence problems arises from between-junction effects (e.g.,
changes in blocking back) rather than internal.
The IN convergence values are printed along with the OUT’s in a table that lists
the 10 worst converged nodes in order of their OUT values. Both may also be
obtained and printed for individual nodes in a variety of ways (e.g. within P1X
Information/Nodes and Convergence).
Traditionally SATURN has adopted a very simple iterative strategy for the order in
which individual nodes are simulated by simply starting with the lowest numbered
node and working numerically upwards through all nodes. However we note that if
a node B has only one entry arm from node A then the IN profiles at B are
determined only by the OUT profiles at A and therefore it makes good sense to
simulate B after A, in which case the IN profiles are always up to date. Thus the
numerical strategy above “works” if B has a higher node number than A but not
the other way round. Of course most nodes have several input arms and it is not
strictly possible to simulate all the upstream nodes prior to simulating B but it may
be possible to identify the “major” contributors to flow into B.
The choice between the original numerical ordering and the topological ordering is
controlled by a parameter SIM109 set in the network .dat file; if T the newer
topological order is used. Default = T.
A second rule was also introduced in 10.9.17 whereby certain simulated nodes
were designated as “moons” relative to neighbouring “suns” and node simulations
could be separated into individual link simulations.
Thus, if node N has 3 entry arms from A, B and C but has no “internal between-
arm” interactions, i.e., the simulation of traffic from A is not affected by entry traffic
from B or C (e.g., there are no give-ways for turns out of A), then the only input that determines the simulation of turns from link A-N will be the IN profiles at the upstream node A.
There is one qualification to this which is that if one or more arms at a node are blocking back then the simulation is also influenced by effects “downstream” and therefore that node may no longer be considered as a “moon” and must be included within the main topological ordering. Thus the order of node simulation is not necessarily fixed over simulation iterations.
Having calculated the profiles for each turn as explained above the program may
now use the queuing profile to calculate the average delay per vehicle based on
the average number of vehicles (pcus) in the queue (see 8.4.8).
Delays are sub-divided into two basic categories, transient delays and (over-capacity) queuing delays, where, for example at traffic signals, the transient delays correspond to the time spent queuing during the red phase by vehicles which then depart during the green phase, whereas the queuing delays only occur for turning movements in excess of capacity where a permanent queue builds up which is unable to clear in a single cycle.
In addition a “geometric delay” term TDEL is applied to all give-ways at priority junctions and roundabout turns.
Equation 8.4

Average queuing delay = (LTP / 2) (V - C) / C
Note that a vehicle which arrives at the start of the time period would suffer no
extra delay (since the queue is zero) whereas a vehicle arriving at the end of the
time period (i.e., LTP) suffers the maximum delay equal to twice that given by
(8.4).
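As an illustrative calculation: with LTP = 30 minutes, V = 1,200 pcu/hr and C = 1,000 pcu/hr, equation (8.4) gives an average queuing delay of (30/2) * (1,200 - 1,000)/1,000 = 3 minutes, while a vehicle arriving at the very end of the time period would suffer twice that, i.e., 6 minutes.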
Clearly LTP is a critical parameter in determining delays (in particular for over-capacity turns where the queuing delays can far exceed the transient delays). Its role is discussed further in 8.4.5.
(The case of multiple time periods is more complicated since the initial queue
need not be zero and permanent queues may either increase or decrease but the
distinction between transient and queue delays still holds; see 17.6.)
Note that with queuing delays (assuming as above that the queue increases over
the time period) part of the time the arriving vehicles spend in a permanent queue
will be outside (later than) the time period simulated. Normally this component of
delay is included in the definition of the average delay per turn. However in
certain simulation summary statistics a distinction is made between the vehicle-
hours spent in permanent queues within the time period and in later time periods;
see Section 17.8.
See section 8.4.6 for a discussion of how extremely long simulated delays are dealt with.
Equation 8.5

t = t0 + A V^n (V ≤ C) (a)
t = t0 + A C^n + B (V - C) / C (V > C) (b)
Where:
B is a constant worked out by the program equal to one half the time period being
modelled. (Numerically B = 30*LTP where LTP is in minutes and B in seconds.)
(See equation (8.11) for a more complex form of the same basic equation
appropriate to turns which share lanes.)
Note that the mathematical form of the equation is identical to that used by buffer
links (see 5.4) so that we can refer to it as the “standard form”.
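The standard form might be sketched as follows (illustrative only; delays in seconds, flows and capacity in pcu/hr and B = 30 * LTP as defined above):

```python
def standard_flow_delay(v, t0, a, n, c, ltp_minutes):
    """Equation (8.5): power-law delay below capacity (a) plus a linear
    over-capacity queuing term above it (b)."""
    b = 30.0 * ltp_minutes                     # seconds; half the modelled period
    if v <= c:
        return t0 + a * v ** n                 # (8.5a)
    return t0 + a * c ** n + b * (v - c) / c   # (8.5b)
```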
For turning movements, A and n are calculated by the program by simulating three different delays: that at zero flow, that at the current assignment flow and that at capacity, as illustrated in Figure 8.3. If the current flow is either too high (over capacity) or too low (near zero) then a suitable arbitrary flow is used to determine the middle point (20% or 80% of capacity).
Warnings are given if problems occur, for example if the simulated delays decrease with increases in flow or if the best-fit value of n exceeds an upper limit. However, even if problems do occur, the flow-delay curve always goes through the “actual” point in order to ensure consistency between assignment and simulation.
Note that the parameter A not only has rather strange and indeterminate units
since it must convert pcu/hr raised to a power n into seconds but it may also,
potentially, take on rather extreme numerical values. To take an improbable but
not impossible example of an “upper limit”, if a link has a capacity of 10,000
pcu/hr, a power n = 10 and a capacity delay of 10 seconds then A must equal 10 to the power -39. At the lower limit, say capacity = 1 pcu/hr, A equals the time at capacity independent of n. This can create computational problems, particularly underflow, in calculating and storing A.
The final term in the equation (8.5b) for V>C corresponds to the “permanent
queuing delay” mentioned in 8.4.1 and 17.6. (And, since flows appear in both the
numerator and denominator, the question of the units used to define V and C is
not relevant.) See section 8.4.6 for restrictions on the upper limits of queued
delays for simulation turns.
It is possible, see 15.38, to make the equation (8.5a) extend over the full range of
turning volumes from zero to infinity via the parameter KINKY.
An alternative form of flow-delay curve is also calculated under the ROSIE option.
This has the same algebraic form as equations (8.5) but in this case the quantity V
is the total weighted flow of all turning movements which share lanes with one
another. Once again the parameters A, n and C are estimated by calculating
delays at three different flows and fitting the curve to pass through these points.
Note as well that the parameters are turn-specific so that delays need not be
equal for a given total flow V (although the delays for turns sharing the same lanes
will be very similar to one another).
Equation 8.6

V = Σ ai Vi
Where:
ai is a weight proportional to the inverse of the capacity of that turn “at the
stop line”.
Hence heavily impeded turns (e.g. due to give-ways) are assigned a greater
weight than unimpeded turns.
Generally speaking, simulation links have fixed travel times, assumed equal to
their “free flow travel times”, and, in effect, zero additional delays. However it is
possible to define link speed-flow curves for simulation links in the same way that
one may define link speed-flow curves for buffer links. Equally, explicit capacities
may be defined on simulation links themselves (as opposed to capacities set by
downstream junctions). This information is included within the network .dat file as
detailed in Section 6.4.12.
This facility is extremely useful for modelling relatively long motorway-style links
where delays tend to be dictated by conditions on the link itself as opposed to
junction properties. Indeed speed-flow curves may be the only way to adequately
model such links. For short urban links speed-flow curves are generally not
required on the presumption that the limiting capacity and the main delays occur
directly at the junction stopline.
Equation 8.7

t(V) = t0 + A V^n (V ≤ C) (a)

where the parameters t0, A etc. are defined via the .dat file. C is interpreted as the “link capacity”.
For flows in excess of capacity it is assumed that the link travel time remains fixed at tC = t0 + A C^n. The reasoning behind this assumption is as follows.
The link capacity C should be interpreted as the limiting capacity of the link itself,
generally a function of the minimum road width along the link, as opposed to the
“junction capacity” Cj set at the downstream end. Generally speaking on most
urban roads, “case (a)”, Cj < C and therefore the question of what happens to the
link speeds when flows are in excess of Cj is not highly relevant since the travel
times on the links are almost certain to be “swamped” by the queuing delays at
the junction.
However for some sorts of links, in particular motorway-style links, “case (b)”, the
above reasoning does not hold since the “junction” and “link” capacities are
effectively one and the same thing. Flows in excess of capacity, in a very strict
sense, cannot exist, although in SATURN flows in excess of capacity are allowed
to enter a link if there is sufficient capacity upstream. We would, however wish
them to be simulated as having travel times in excess of tC. How this is done is
explained next.
In either case an extra condition is imposed on the capacities of the turns at the
downstream end of speed-flow links which is that their “total” capacity Cj must be
less than or equal to the link capacity C. If it is less - case (a) above - no further
action is taken; if greater - case (b) - the individual turn capacities are factored
down such that Cj = C.
In the event of V > C and C < Cj the turn capacities will be reduced (“choked”)
such that queuing occurs at the turns. As flows increase beyond C there will be
no increase in their “link” travel times, fixed at tC, but there will be a rapid increase
in their junction queuing times. Hence on motorway-style links the total travel time
- link plus turn -- is an increasing function of flow, even though one component,
the link time, has an upper limit. In effect the extra link travel time is now
associated with a queue at the junction.
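The combined behaviour described above might be sketched as follows (illustrative only; the capped link time tC and the choke factor applied to the turn ACCEPT profiles follow the definitions in the text):

```python
def link_travel_time(v, t0, a, n, c_link):
    """Speed-flow link time, held at tC = t0 + A * C^n for flows above C."""
    t_capacity = t0 + a * c_link ** n
    return min(t0 + a * v ** n, t_capacity)

def choke_factor(c_link, c_junction_total):
    """Factor (<= 1) applied to all turn ACCEPT profiles when the summed
    junction capacity exceeds the mid-link capacity (case (b) in the text)."""
    return min(1.0, c_link / c_junction_total)
```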
Finally we note that any “delays” on the link due to capacity-restraint curves are
distinct from the queuing delays “at the stop-line” and are modelled quite distinctly.
One obvious difference is that the capacity-restraint delays are the same for all
traffic on the link whereas the stop-line delays may differ for individual turning
movements.
If a junction turn capacity is reduced due to link capacity restraint it is done via a
“choke factor” < 1.0 which is applied to all ACCEPT CFP’s to reduce the total link
capacities to equal the link capacity. Choke factors may be viewed (post 10.7) in a
number of ways:
(1) The SATLOOK table of flows and delays indicates by a % those turns
whose capacity is reduced but the value of the choke factor is not given.
(3) The SATLOOK master option 12 which prints all over-capacity links indicates by a ‘b’ when a link’s capacity is set by its speed-flow curve.
(4) P1X link annotation options include “active” mid-link capacities; i.e., those
instances of mid-link capacities which “choke” the stop-line.
LTP (the length of the time period modelled) plays a crucial role in determining the
over-capacity queues and delays and its chosen value needs to be carefully
considered by the user.
Thus we note from equation (8.4) that the over-capacity delay is directly
proportional to LTP: double LTP and (assuming that the flows are unchanged, see
below) you double the delays and the length of queues.
Figure 8.4 - The “rectangular” simulation demand profile vs the “true” profile
The fundamental assumption within SATURN is that flows remain constant (i.e.,
the profile is flat) over the time period LTP at the rate specified by the trip matrix.
Clearly this must be seen as an approximation to what happens in real life where
flow rates change in a continuous fashion over time. As illustrated in Figure 8.4
SATURN approximates the (real life) continuous profile by a flat profile such that
the total flow within the time period modelled is the same (or should be the same)
in both cases. In this example SATURN slightly underestimates the flow during
the “peak of the peak” and correspondingly overestimates the flow during the
shoulder; clearly the “flatter” the true profile is, the better the approximation.
If the “structure” of the observed flow profile does not fit nicely into the rectangular
assumption illustrated in Figure 8.4 then the user may need to consider modelling
multiple time periods as discussed in Section 17.3.
If flows are in excess of capacity in a large number of links across the network the
over-all time spent in over-capacity queues can be a significant component of the
total travel time and it becomes highly important for scheme evaluation to get
those numbers correct. This, in turn, means that it is highly important to get LTP
“right” so that taking simple arbitrary values such as 30 or 60 minutes may not be
all that useful. Indeed LTP should be regarded as a base-year “calibration”
parameter which needs to be carefully considered.
It should also be pointed out that the traditional default value used in SATURN for LTP, i.e., 30 minutes, is probably not a very sensible default for modelling peak conditions since in real life most peaks persist for longer than 30 minutes. You have been warned! A warning message is generated in SATNET if LTP is not explicitly set in the network .dat file. (On the other hand 30 minutes may be quite a reasonable default for use with multiple time periods.)
In addition users may need to consider changing the value of LTP in future year
forecasts if it is felt that growth in the future is not going to be uniform over time of
day but that some form of “peak spreading” is likely to occur whereby peak flows
do not so much increase “vertically” but tend to spread “horizontally”. Thus an
alternative to assuming that the trip matrix grows by, say, 2% per annum may be
to assume that LTP grows by 2% per annum. Clearly this may introduce a
number of other problems such as how to evaluate and compare flows over 60
minutes in the base year with flows over 70 minutes in the future; in practice
SATURN users are unlikely to vary LTP from base-year values, but the possibility
needs to be considered as do the base-year calibration issues.
We may also note that the simple rule that delays are proportional to LTP is not
necessarily strictly correct since, as you increase LTP and therefore increase the
delays on over-capacity links/routes, more traffic will divert to alternative routes.
The rate of increase may therefore be less than linear. Hence the choice of LTP
also has an impact on assigned flows as well as delays and equally has an impact
on blocking back (8.5).
Basically unrealistic delays may occur when a procedure that is valid for “normal” flows is applied under highly “abnormal” flow conditions such as a straight-ahead movement of 1,000 pcu/hr sharing a lane with an assigned right-turning flow of 0.001 pcu/hr which is totally blocked by opposing traffic. As long as the long delays are coupled with very low flows, as is virtually guaranteed by the assignment finding alternative routes, the effect on total vehicle hours is likely to be small.
From version 10.5 onwards MAXQCT is also used by the assignment model.
Thus the over-capacity queued component of the flow-delay curves as used in the
assignment, i.e., the final term in equation (8.5b), has been re-defined so as to
incorporate an upper limit of MAXQCT (in minutes). The revised curve is
illustrated in Figure 8.5. At a delay equal to half MAXQCT the revised curve starts to diverge from the linear curve and approaches the upper limit of MAXQCT (as represented by the dashed horizontal line) exponentially.
Figure 8.5 - Revised queued delays as a function of flow with an upper limit of
MAXQCT
The upper time limit MAXQCT may also be interpreted in terms of an upper V/C limit. Setting the over-capacity delay term B (V - C) / C from equation (8.4) equal to MAXQCT and solving gives V/C = 1 + 2 MAXQCT / LTP (with MAXQCT and LTP both in minutes).
Thus if MAXQCT = LTP then the upper limit in Fig 8.5 corresponds to V/C = 3; if
we take the default values of MAXQCT = 60 and LTP = 30 then V/C = 5.
We may therefore see that MAXQCT only “kicks in” at V/C ratios which are much higher than would normally be expected in most networks. However, given that
initial stages of the assignment very often generate extreme solutions with
abnormally high V/C ratios, it is possible that the upper delay limits will at least
have some bearing on the pattern of assignment convergence.
Extremely long and unrealistic delays were found to arise under extreme conditions such as:
Very high PASSQ flows (i.e., initial queues well in excess of the capacity of
the subsequent time period);
Very high V/C ratios (such that MAXQCT would come into play);
Lane sharing;
Insufficient capacity for traffic from external zones to enter the simulation
network.
As a result the methods used to calculate flow-delay curves and/or lane sharing
were “tightened up” to remove the problems and to provide much more robust
solutions.
It needs to be stressed that these changes only really affect networks which are
very heavily overloaded for one reason or another so that, for the vast majority of
tested networks, the effects will be minimal. Therefore, in order to assist users
who wish to replicate previous results using SATURN 10.5, a new logical
parameter Q105 (set under &PARAM in network .dat files) has been introduced. If
Q105 = T (its default) the new rules are used; if Q105 = F the old rules are used.
Users are strongly advised to keep Q105 = T. Even setting Q105 = F does not guarantee that old results will be replicated since, inevitably, there are a number of other new features in 10.5 which cannot be removed and which will also give potentially slightly different results.
N.B. From release 10.9 onwards Q105 is always assumed to be T, i.e., setting to
F has no effect.
This section describes in somewhat greater detail how queue profiles, average
and maximum queues are calculated and displayed within SATURN.
Queues are sub-divided into two basic categories, transient queues and over-capacity queues, where, for example at traffic signals, the transient queues correspond to the vehicles which queue during the red phase and then depart during the green phase, whereas the over-capacity queues only occur for turning movements in excess of capacity where a permanent queue builds up which is unable to clear in a single cycle.
Transient queue cyclical flow profiles (CFP) for each turning movement are calculated by the simulation as part of the process whereby the ARRIVE profiles are converted into OUT profiles (see 8.1). Each represents the number of pcu’s which are estimated to be queued at the downstream stop line during each of the NUC time units into which the basic cycle is divided.
In the simplest situation if there are 2 “ARRIVE” pcu’s and 1 “OUT” pcu (e.g.,
during a green phase at signals) then the QUEUE increases by 1. Equally during
a red phase at signals the queue increases with every pcu that arrives since there
can be no departures.
The basic principle is that if the (permanent) queue at the start of the modelled
simulation period (LTP) is zero and during each cycle (LCY) the number of pcu’s
that arrives exceeds the number that can depart by x then by the end of the time
period the permanent queue will have increased linearly from 0 up to x * LTP/LCY (or, strictly, x * 60 * LTP/LCY since LTP is defined in minutes and LCY in seconds).
(Note the linear growth model is based on the fundamental SATURN assumption
that flows are constant throughout the time period modelled.)
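As an illustrative calculation: if arrivals exceed departures by x = 2 pcus per cycle, with LCY = 75 seconds and LTP = 30 minutes, there are 60 * 30 / 75 = 24 cycles in the period and the permanent queue at the end of the period would be 2 * 24 = 48 pcus.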
If there is a permanent queue at the start of the time period associated with
PASSQ (see 17.3) then the linear increase is added on to that initial queue.
Although, clearly, if you start with an initial queue but the capacity exceeds the
arrival flow in that time period (V<C) then the permanent queue will decrease
linearly and might, potentially, be reduced to zero at some point before the end of
the time period.
Average queue lengths may be calculated either as the transient average (i.e.,
averaged over the cycle) or the over-capacity queue averaged over the time
period (generally the average of the initial and final queues unless, using PASSQ,
the permanent queue goes to zero part way through the time period) or as the
sum of the two.
We may define maximum queues for either transient or permanent (V>C) queues
separately or as their sum.
Thus the maximum transient queue length is the maximum reached during a
single cycle (which, generally speaking, only differs significantly from the average
transient queue at signalised junctions where the maximum queue tends to occur
at the start of a green stage).
The maximum over-capacity queue will occur either (most commonly) at the end
of the time period if V > C and the queue is increasing or, otherwise, at the start of
the time period as determined by PASSQ. In the simplest case of V>C and no
PASSQ the maximum (final) queue is twice the average.
Note that the “maximum transient queue” is more like the “average maximum
queue per cycle” than the “longest queue length reached during the time period” –
where the latter is likely to occur when, at random, a particularly heavy surge of
traffic arrives some time during the time period and/or there is a particularly heavy
surge of traffic on a major arm which reduces the available gaps. In real life it is
probably more realistic to think in terms of a probability distribution of maximum
queue lengths; however this is the sort of detail that only a true micro-simulation
model is capable of providing whereas SATURN can only work in terms of
averages. Clearly one would expect that the SATURN average would be an
under-estimate of the “worst case scenario”.
The (transient) queue profiles per turn may be displayed numerically within the
SATLOOK node display menu (option 4). In addition SATLOOK node display
may also display the maximum and the average transient queues, the average
over-capacity queue and the total average queue, all disaggregated by either link,
turn or lane (option 17).
The total (i.e., transient plus over-capacity) average queue length per simulation
link is included as an array (DA code 1433) within every .ufs file and may be
displayed as a link property using P1X (along with various other definitions of
queues such as the V>C queue at the end of the time period). As mentioned
above this array is in units of pcu’s aggregated over all turns and all lanes.
Similarly maximum transient queues per turn and per link are also stored as DA
codes.
8.4.8.7 Comments
Finally we note that the queuing model used in the SATURN simulation is
somewhat simplified. For example, it is basically a “vertical” queue model in which
vehicles arriving at the downstream stop line are added to the queue. It
therefore ignores the fact that the vehicles actually arrive at the end of the queue
a bit further back on the link and therefore spend a bit longer in the queue than in
“cruise mode” (although the total travel time on the link is unaffected). It also
ignores the phenomenon whereby, at traffic signals, the queue continues to
progress backwards at the start of a green phase until the “start wave” meets up
with the “arrival wave” (in terms of shock wave theory).
This, along with other sources of error in calculating flows and capacities, means
that any queue lengths calculated by SATURN need to be taken with a large
pinch of salt. For example, it is not very realistic to expect the modelled queue
lengths to closely reproduce observed queue lengths (however they are defined);
at best one might hope to classify a link into very broad bands such as “well below
capacity”, “roughly at capacity”, etc., etc. and hope that at this level modelled and
observed queue lengths match.
“Blocking back” refers to the situation where the queue of vehicles on a link from A
to B extends back as far as the upstream junction A and reduces the flow of
vehicles entering A-B. Blocking back is modelled in the simulation network but not
in the buffer network. More precisely it occurs on a simulation link if either the
average or the maximum (see 8.5.2 (g) below) queue over the time period
simulated exceeds the link stacking capacity; if so, a “blocking-back factor” - BBF -
is applied as a uniform reduction factor to the capacities of ALL turns entering A-B
at A, and (optionally - see below) to all centroid connectors entering that link. The
size of BBF is chosen such that the average/maximum queue (see 8.5.2 (f))
resulting from the reduced entry exactly equals the stacking capacity. (N.B. Note
that the flows in question are always actual, not demand.)
The blocking-back factor is applied to capacities, not to actual entry flows, and its
value is in fact primarily determined by the V/C ratios of the feeder turns, not the
degree of over-saturation of the link itself. For example, imagine a link which is fed
by a single turn with an (actual) flow of 500 pcu/hr and a capacity of 1,000 pcu/hr,
and that the entry flow of 500 pcu/hr causes a queue which marginally exceeds
the stacking capacity S but that reducing it by 1 pcu/hr would yield a queue
exactly equal to the stacking capacity (i.e., Q(499) = S, Q(500) > S). Here BBF
would need to be 0.499 in order to reduce the capacity and the flow to 499 pcu/hr,
despite the fact that the queue only marginally exceeded the stacking capacity. Had the capacity been 500 the blocking back factor would only have been 499/500 = 0.998.
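The choice of BBF might be pictured as in the sketch below (illustrative only; the mapping from entry flow to queue length is whatever the simulation provides, and the 1 pcu/hr search step simply mirrors the example above):

```python
def blocking_back_factor(feeder_capacity, queue_for_flow, stacking_capacity):
    """Illustrative only: find the largest entry flow whose queue just fits
    the stacking capacity and express it as a factor on the feeder capacity."""
    flow = feeder_capacity
    while flow > 0 and queue_for_flow(flow) > stacking_capacity:
        flow -= 1.0                        # crude 1 pcu/hr steps
    return flow / feeder_capacity          # e.g. 499 / 1000 = 0.499 in the example
```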
The following specific points regarding the operation of blocking back deserve
mention:
Note that this restriction does not apply to blocking back upstream from
roundabouts so that a long queue which builds up at the entry to a
roundabout can block the upstream junction (unless, of course, the
upstream junction is also a roundabout). However a long queue back from,
say, a signalised junction to a roundabout would have no effect.
Blocking back also applies to any centroid connector flows entering the
blocked link. For example, if a link A-B has an upstream entry flow of 500 and a centroid entry flow of 500 as well, and the capacity and stacking capacity of A-B are such that blocking back limits the total entry flow to 600, then both the upstream and the zonal entry flows are reduced pro rata to 300. (Normally simulation centroid connectors are assumed to have
an “infinite” capacity – numerically 99999; this is the one exception.)
The queued traffic on the centroid connector is treated in the same manner
as any other permanent queues at turns; i.e. the actual flows downstream
would be reduced and, in the case of running a subsequent time period
under PASSQ, the “missing” flows would be “passed over” as fixed flows.
The existence of blocking back on a centroid connector is indicated by a
finite capacity (which is related indirectly to BBF) as stored in DA code 1383.
(Previously (pre 9.3) centroid connector flows were, in effect, given priority
on blocked links. Hence, in the above example, the centroid entry flow of
500 was allowed in full but the BBF would have been set so as to reduce the
upstream entry flow to 100. This could lead to potentially extreme situations
where, if the zonal flow exceeded the maximum permissible flow on its own,
the upstream entries would be reduced to near zero.)
b) Blocking back can extend over several junctions; e.g., a queue at A can
cause blocking back at B, the reduction in capacity at B can cause queues
which block back to C, etc., etc. (Note however that if B were a dummy link
in the middle of link C-A the queue would only progress as far as B and
there would be no queue on C-B; in such cases it would be preferable to
represent B as a priority junction where the effect of blocking back would be
included.)
e) The queue length used to determine whether blocking back occurs includes
both the (average) transient queue length and the over-capacity component
(for which two options exist as explained next). However, a further condition
is that for blocking back to be applied from a link there must be at least one
over-capacity turn out of that link. This is to avoid possible problems with
very short links (e.g., where a pedestrian crossing node has been included a
few metres away from a signalised junction) where the average transient
queue on its own may exceed the stacking capacity.
However for multiple time periods there are distinct advantages in setting
QUEEN=TRUE, maximum queues, since one might expect that the
maximum queue, once formed, would continue to block back at the same
level over several time periods until it dissipated. This also minimises
problems of oscillating queue lengths. However, we note that this option is
very rarely used with user feedback suggesting that the resulting levels of
congestion in the subsequent time periods were too high. Further feedback
from users’ applications would be welcome.
It also implies that, for traffic which does have to give way, there will be gaps in
the movement A-B-C. For example, traffic from C-B which turns across and gives
way to A-B-C (right turners for drive on the left, left turners for drive on the right)
will not be totally blocked. (These turns would be assigned a priority marker X.)
Thus equations such as (8.2) will continue to apply with the “major” flow V1 (i.e., A-
B-C) being the restricted flow due to blocking back and therefore less than the
saturation flow S1.
However, prior to SATURN 10.4, the yellow-box give-way rules only applied to
priority junctions, not when junction B was signalised. Thus, at signals, if the
turning traffic out of C-B were coded as priority marker X, it was assumed that
(during those stages when both movements had green) the “major” flow across A-
B-C would be continuous and that there would be no gaps available. Thus the
only way that X-turners could cross A-B-C would be as “TAX” vehicles at the end
of each green phase (or, clearly, during any stages when the crossing traffic had
its own exclusive green signal).
In 10.4, however, the yellow-box rules were extended to also cover the above
situation. This meant that signalised X-turners were given a greater capacity at
blocked back junctions than they would have had in earlier versions. N.B. The old
situation may be retained by setting NFT = 103 or less (although in the initially
released version 10.4.10 this facility was not included and the new rule was
effectively “hard-wired in”).
Blocking back rules were extended in version 10.5 as follows. (N.B. Some of
these rules have been modified under 10.9 with BB109 = T; see Section 8.5.5
below. Read both together!)
Firstly, if, in a simulation link (A,B), A is not an external node (in which case there
would be no blocking back on the link for the reasons as explained above in 8.5.1)
but is connected directly to an external node by one or more intermediate two-arm
nodes then there is no blocking back on (A,B). In other words, as illustrated in
Figure 8.6a, if A is some form of “artificial” node that has been inserted in what is
essentially a single road from an external node X to B, then the full set of links
between X and B is modelled as a single link.
Figure 8.6 (a) - A 2-arm node A connecting internal and external simulation nodes
(B and X)
However, there is one exception to the second rule which is when link (A,B) has
more lanes than (S,A), e.g., there is a flared segment at the end of (S,A). In that
case link (A,B) may block back directly into (S,A) and we treat A as a “genuine”
node.
N.B. This rule does not apply post 10.9 under BB109 = T (see 8.5.5 below) where
alternative rules are introduced to identify genuine mid-link nodes which “break”
chains.
In release 10.8 the above rule was extended to work in a “downstream” sense as
well as “upstream”. Thus, in the above diagram, assuming that A-B did not have a
sufficient queue to block back on its own but S-A did (e.g., A was very near to S
rather than near to B) then the stacking capacity considered for S-A would be the
sum of S-A and A-B.
The situation depicted in Fig. 8.6(b) has been extended in SATURN Version 10.9
to apply to all “link chains” (see 5.1.12) where a “real” road between two “real”
junctions has apparently been coded, for good reasons or bad, by including one or
more intermediate “artificial” nodes. Thus links S-A and A-B constitute a chain as
would B-A plus A-S. The new rules apply if parameter BB109 = T as set under
&PARAM (default = F for the moment, although it shall certainly become T in the
future).
Consider a chain of links running from A-B at the upstream end to Y-Z at the
downstream end. The total stacking capacity of the chain is taken as the sum of
the individual link stacking capacities from A-B to Y-Z, and the queue length is
calculated primarily at Z (although contributions from intermediate links are
sometimes included). If the total queue exceeds the total stacking capacity then a
blocking back factor is calculated for the upstream link A-B in order to restrict
entry flow into A-B. The objective, therefore, is to create a (downstream) queue at
Y-Z which will stretch back precisely to (upstream) node A.
In addition, if the chain blocks back, each internal link within the chain blocks back
into its upstream feeder link so that queues on internal links within the chain
should equal their individual stacking capacities. Thus Y-Z blocks back into X-Y,
X-Y into W-X etc. etc. until the upstream link A-B has been reached.
If, on the other hand, the total queue along the chain does not exceed the total
chain stacking capacity then there is no “internal” or “intra-chain” blocking back
modelled either. For example, referring to Fig. 8.6(b), if QAB + QSA < SAB + SSA
then (a) there is no blocking back at S but (b) neither is there blocking back at A
even if QAB > SAB.
The purpose of this rule is to reduce the overall application of blocking back to
“major” junctions so as to minimise any convergence problems which may be
created by blocking back on very short “internal” links.
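By way of illustration, the following Python fragment sketches the chain-level test just described; it is purely illustrative (the function name and the flow/stack values are invented for the example) and is not SATURN code:

def chain_blocks_back(queues, stacks):
    """queues, stacks: per-link queue and stacking capacity (pcu), listed from
    the upstream link to the downstream link of the chain."""
    # the chain blocks back only if the summed queue exceeds the summed stack;
    # when it does, each internal link also blocks back into its upstream feeder
    # so that internal queues fill their own stacking capacities
    return sum(queues) > sum(stacks)

# Fig. 8.6(b) style example with links S-A (upstream) and A-B (downstream):
print(chain_blocks_back(queues=[15.0, 40.0], stacks=[35.0, 30.0]))   # 55 < 65 -> False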
We note that the definition of when a series of links constitutes a chain is set in
the network build stage by SATNET whereas, pre 10.9, the identification of
intermediate nodes was carried out entirely within the simulation. 10.9 has also
extended the definition of a chain to several alternative configurations as depicted,
for example, in Figure 5.2 (b), (c) and (d).
The definition of a chain in 10.9 for the purposes of adding stacking capacities
also differs from that applied previously (i.e., as described in 8.5.4) in that a 10.9
chain allows 2-arm dummy nodes (junction type 4) to be part of a chain whereas
previously they were excluded. For example, in Figure 8.6(b), if A were a dummy
node then the stacking capacities of B-A and A-S would not have been added
together while in 10.9 they would be. Therefore, for the purposes of upstream
blocking back, queues are allowed to “jump over” dummy nodes.
On the other hand there is no explicit blocking back through dummy nodes, either
in 10.9 or 10.5. Thus, again referring to Fig. 8.6(b), turning movements entering S-
A would be restricted by blocking back if the (total) queue on S-B were greater
than its (total) stacking capacity but there can never be any restrictions on traffic
passing from S-A to A-B through dummy node A. (The basic property of a dummy
node is that it effectively has infinite capacities and traffic entering on one arm
passes unimpeded to its exit arm.)
In addition, any restrictions in 10.5 associated with the number of lanes increasing
or decreasing along a “chain” (see 8.5.4) have now been dropped with BB109 = T.
However users may explicitly allow for the effect of lane reductions etc. by the use
of certain coding “tricks” as explained next.
Post 10.9.18 it is possible to “break” a chain by including a negative value for the
stacking capacity defined in field 1, columns 1-5, on a simulation link data record.
Thus, referring to Fig. 8.6.(b) above, if the stacking capacity of link S-A were input
with a negative value then the chain from S to B would not be extended through A
and links S-A and A-B would be treated as independent links as far as blocking
back is concerned. (The same as would have happened under 10.5 if A-B had
more lanes than S-A.)
Thus if the queue on A-B exceeded the stacking capacity on A-B then the capacity
for turn S-A-B would be suitably reduced. Similarly, if the queue on S-A exceeded
its stacking capacity, entry traffic into S-A would be reduced. So neither, one, or
both could block back depending on the Q vrs. S values on each individual link.
This situation has been found to occur on entry ramps onto a motorway where a
signalised junction at A controls the flow entering the motorway at B by the choice
of percentage green time at A.
Due to the way in which “intra-chain” blocking back is or is not modelled (see
8.5.5.4 above) it is possible, when the chain as a whole does not block back, that
the queue at the most downstream link in the chain may exceed its own individual
link stacking capacity without any blocking back mechanism being introduced in
order to bring its ratio down to 1.0. In such cases for reporting purposes we treat
the stack for the individual link as though it were the summed stack for the chain
so that the reported Q/S ratio will be less than 1.0, consistent with there being no
blocking back.
Version 10.9 introduced a new – and hopefully very useful – concept of “phased in
blocking back” whereby, if the queue on a link is “almost” equal to its stacking
capacity, then a blocking back factor is introduced but reduced in scale depending
on the degree of under-saturation.
The new rules are applied if &PARAM namelist parameters (a) BB109 = T (default
= F) and (b) BBKING < 1.0. Thus BBKING (Blocking Back Kicks IN Geddit?) = 0.8
implies that if the queue is less than 80% of the stacking capacity – Q/S < 0.8 -
then no blocking back factor is applied; however if BBKING < Q/S < 1.0 then a
blocking back factor is calculated as though S = Q but then increased towards 1.0
in proportion to (1.0 - Q/S) / (1.0 – BBKING). So if, for example, Q/S = 0.95,
BBKING = 0.8 and the initial blocking back factor with S = Q were 0.6 then it
would be increased to 0.7. If Q/S = 0.81 it would be 0.98. (Recall that a blocking
back factor of 1.0 implies no blocking back capacity reduction upstream.)
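A minimal sketch of this interpolation (assuming the factor is scaled linearly between its full value at Q/S = 1.0 and a value of 1.0 at Q/S = BBKING; the names are illustrative, not SATURN code):

def phased_bb_factor(q_over_s, raw_factor, bbking=0.8):
    """q_over_s: queue divided by stacking capacity; raw_factor: the blocking
    back factor calculated as though S = Q; bbking: the BBKING parameter."""
    if q_over_s < bbking:
        return 1.0                       # no blocking back applied
    if q_over_s >= 1.0:
        return raw_factor                # blocking back applied in full
    # increase towards 1.0 in proportion to (1 - Q/S) / (1 - BBKING)
    return raw_factor + (1.0 - raw_factor) * (1.0 - q_over_s) / (1.0 - bbking)

print(round(phased_bb_factor(0.95, 0.6), 2))   # 0.7,  as in the example above
print(round(phased_bb_factor(0.81, 0.6), 2))   # 0.98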
The worst cases very often arise on very short links where the stacking capacity
(particularly if it is calculated by default using ALEX and the link length; see
6.4.11) is very small and a very small change in assigned and/or simulated flow
can have a very large impact on blocking back factors. A not infrequent example
is a pedestrian crossing slightly displaced from a signalised junction which is
coded as a separate node with a connecting link of, say, 2 metres. In this case the
stacking capacity may be less than 1 pcu and almost any over-capacity queue will
create severe blocking back. Other examples occur at signalised roundabouts
which are coded (quite legitimately) as a series of separate signals with very short
links and, again, very small stacking capacities.
In principle, the “sum of stacks” rules described in 8.5.4 and 8.5.5 may adequately
deal with the problems; however, this may not always be the case.
Tables showing the (up to) 10 worst links in terms of changes in their blocking-
back factors from one simulation to the next are given in the SATALL .lpt files
(see 9.9.1) and may also be viewed interactively within P1X.
The basic theory of cyclical flow profiles described in 8.1 assumes that the same
pattern of movement is repeated every, say, 90 seconds so that the same number
of vehicles arrive at a junction during each 90 seconds. In reality this is not of
course true and some variations about the mean are to be expected, not only from
one cycle to the next but also from one day to the next or from one week to the
next.
The effects of random arrivals are explicitly accounted for in the SATURN
treatment of delays to give-ways at roundabouts and minor arms at priority
junctions. However at major arms at priority junctions and all arms at traffic
signals delays are explicitly divided into two components - a “uniform delay”
calculated from the queuing cyclical flow profile (Section 8.1 above) plus a
“random delay” component which is calculated from the following formula as used
in the traffic simulation model TRANSYT:
Equation 8.8

<d> = (T / 4q) [ (q - s) + ( (q - s)^2 + 4q / T )^0.5 ]

where <d> is the average random delay per vehicle, q is the (demand) flow, s is
the capacity (or, for give-way movements with RAGS = T, the saturation flow; see
below) and T is the length of the time period as set by the parameter LRTP (see
below). For flows exactly at capacity, q = s, the expression reduces to:

<d> = (T / 4q)^0.5
For over-capacity turns, q> s, random delays are “capped” at the maximum value
calculated for q = s. In addition the definitions of q and/or s may be affected by
lane choice and/or blocking back; see 8.6.3 to 8.6.5 below.
As an example, for a flow equal to capacity at 1800 pcu/hr with LRTP = 30
(minutes), <d> would be 30 seconds. For LRTP = 60 minutes it would be 42.4
seconds, or with flow and capacity doubled it would reduce to 21.2 seconds. At
90% of capacity (q = 1620, s = 1800, LRTP = 30) <d> would be 9.16 seconds.
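These values may be reproduced with a short Python sketch of Equation 8.8 (illustrative only; the function name and unit conventions are ours, not SATURN's):

from math import sqrt

def random_delay_seconds(q, s, lrtp_minutes):
    """q = flow and s = capacity in pcu/hr; LRTP in minutes; returns <d> in seconds."""
    T = lrtp_minutes / 60.0                        # time period in hours
    q = min(q, s)                                  # cap at q = s for over-capacity turns
    d = (T / (4.0 * q)) * ((q - s) + sqrt((q - s) ** 2 + 4.0 * q / T))
    return 3600.0 * d

print(round(random_delay_seconds(1800, 1800, 30), 1))   # 30.0
print(round(random_delay_seconds(1800, 1800, 60), 1))   # 42.4
print(round(random_delay_seconds(3600, 3600, 30), 1))   # 21.2
print(round(random_delay_seconds(1620, 1800, 30), 2))   # 9.16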
Hence the contributions from the random delay components are not particularly
large and, in terms of modelling “realism”, are probably preferable to assuming
zero or near zero delay for flows right up to capacity. The latter effect occurs in
particular with major arms at priority junctions where, unlike signals, there is no
cause of delays (apart from possibly small perturbations in the arrive profiles due
to platooning from traffic signals which can cause temporary periods of over-
saturation to occur during the simulated cycle). The introduction of a random
delay component is therefore generally to be recommended by setting LRTP > 0.
Note that in calculating <d> in SATURN the time period T need NOT be identical
to the time period simulated - in fact the value of T used is given by the parameter
LRTP (Length of the Random Time Period) as opposed to LTP. There are two
essential reasons for this:
By setting LRTP = 0 the user can effectively suppress all random delays since
in this case <d> = 0 (not recommended; see below);
The random delays calculated for, say, two successive 15 minute time
periods will not be the same as the delay calculated for a single 30 minute
time period. Thus LRTP should be chosen to approximately equal the length
of time since the flow became equal to q, e.g., the beginning of a peak period.
This distinction, however, is probably only critical to users who are using
SATURN to look at successive time periods, not those who are modelling a
single time period.
As a rule of thumb we would recommend that, for single time period models,
LRTP should be set equal to LTP. For multiple time periods LRTP should probably
be longer than the LTP-values for single time periods but possibly less than the
total time period modelled.
The advantages of including a random delay component are that: (a) the additional
delay contribution is probably realistic and (b) by making delay a “smoother”
function of flow it makes both assignment and assignment-simulation convergence
easier.
From version 10.5 onwards random delays may also be applied to turning
movements which are “give-way” (i.e., from minor arms at priority junctions or all
entries at roundabouts) if the parameter RAGS = T, using equation (8.8) but with
one important distinction.
Thus with give-way movements the quantity “s” in (8.8) is interpreted as the
saturation flow rather than the capacity. Generally speaking at give-way turns the
capacity will be less than - possibly considerably less than - the saturation flow
due, in particular, to the reduction due to gap acceptance in major arms’ cross
traffic. Equally delays are high. However, if (unusually) there is zero cross traffic
then capacity C equals saturation flow S and delays will be virtually zero for all
flows up until C (i.e., S). Setting RAGS = T introduces extra delays as V
approaches C (i.e., S) which, it could be argued, are more realistic.
On the other hand, if the cross traffic is significant and the capacity C is (much)
less than S then the delays calculated by equation (8.8) will be small and the
additional delays generated by setting RAGS = T will equally be insignificant.
In most situations the latter case is most common and it is therefore expected that
setting RAGS = T will have a relatively small impact on total network performance.
Although the default value of RAGS is .FALSE. (see 6.3.1) this is largely for
reasons of compatibility with previous versions and a value of .TRUE. is generally
recommended.
Random delays calculated for turning movements which share lanes with other
turning movements are calculated using flows and capacities (i.e., q and s in
equation 8.8) aggregated over all movements with common lanes, generally
referred to as “rivers”; see 8.8.2. Thus all turns within the river will experience an
identical random delay.
However, the definition of which precise turns are combined together into a
particular river depends on the modelled lane choice and is not necessarily fixed;
this may lead to certain problems of “discontinuity” in calculating random delays.
For example, consider a 2-lane road in which left turns may only use lane 1, right
turns may only use lane 2 and straight-ahead traffic can use either lane 1 or lane
2. If the straight-aheads actually use both lanes then all three turns will be in the
same single river and they will have the same random delay. However, if the
straight-aheads only use lane 1 then there will be two separate rivers with distinct
random delays.
The problem which this introduces is that the random delays may jump in a
discontinuous fashion with changes in flow as the lane choice shifts between
shared lanes and a single lane. This, in turn, creates problems of convergence
between assignment and simulation.
This problem may be overcome if the random delays are based on “estuaries”
rather than “rivers” where an “estuary” is defined to be a river assuming that all
available lanes are used by all turns. Thus, in the above example, all three turns /
both lanes would always be in the same estuary independent of the actual lane
choice. If, therefore, we always use the estuary to define the aggregate flows and
capacities for random delay calculations there cannot be any discontinuities in the
calculation.
To use the estuary definition rather than the river definition a logical parameter
RTP108 must be set to .TRUE. under &PARAM in the network .dat file. N.B. The
default value for RTP108 was changed from F to T in release 10.9.
The concept of “link chains” has been described in section 5.1.12 whereby a
single “real” link is (presumably artificially) subdivided into a continuous set of sub-
links, e.g., with intermediate 2-arm nodes. In such cases we presume that the
queue should form primarily on the (sub-) link at the (downstream) head of the
chain and, therefore, that it is only appropriate to associate a random delay with just
that one link. Thus, in version 10.9 and beyond, an extra rule has been introduced
such that any links which are part of a chain upstream do not have any random
delay calculated but the downstream “end of chain” link will (provided, of course,
that the other necessary conditions described above are satisfied).
The one exception to this rule is where an intermediate node is signalised, e.g., if
it is a pelican crossing, in which case random delays are applied as per normal.
N.B. There is no parameter provided to turn this new rule “off” or “on”: it is always
on.
Post 10.9 the “capacity” used in equation (8.8) to calculate random delay does not
include any reduction from any blocking back on the upstream exit link. The
rationale behind this change is that, since the turn being blocked is by definition
already over capacity, there is no need to model extra delays due to the turn going
temporarily over capacity.
It also removes a possible discontinuity when the link downstream goes from a
state of blocking back to not blocking back or vice versa.
Equation 8.9
C = Sm (1 - VM / SM)^(G·SM)
where:
Sm is the saturation flow for that entry;
VM is the on-roundabout flow crossing that entry;
SM is the maximum roundabout capacity as defined on the node data record
RSAT (6.4.7);
G is the gap parameter (GAPR).
(N.B. strictly speaking (8.9) is applied to each individual time unit so that the
ACCEPT or capacity profile may vary over the basic cycle time as the circulating
flows vary.)
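As an illustration, Equation 8.9 for a single time unit might be coded as follows (a sketch only; the gap parameter must be in units such that G·SM is dimensionless, e.g., hours when SM is in pcu/hr):

def roundabout_entry_capacity(sm_entry, vm_circ, sm_max, gapr):
    """sm_entry = Sm, vm_circ = VM, sm_max = SM (RSAT), gapr = G (GAPR)."""
    spare = max(0.0, 1.0 - vm_circ / sm_max)       # fraction of circulating capacity free
    return sm_entry * spare ** (gapr * sm_max)     # Equation 8.9

# e.g. a 1.2 second gap expressed in hours, SM = 3600 pcu/hr, VM = 1200 pcu/hr:
print(round(roundabout_entry_capacity(1800.0, 1200.0, 3600.0, 1.2 / 3600.0)))   # ~1107 pcu/hr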
This implies that in fact the values of the first and last lanes used per turn on
roundabouts (see 6.4.1) are ignored and set equal to 1 and the number of lanes
respectively, and equally that the saturation flows per turn must all be equal to the
total saturation flow for the arm as a whole. No turn priority markers are necessary
- the give-way rules are implicit.
The maximum roundabout capacity is also the same for all entry arms and
logically should be greater than or equal to the saturation flow for any input arm
(or, strictly speaking, the saturation flow per arm should be less than or equal to
the circulating capacity).
Note that an extra delay is also added to each turning movement at roundabouts
to reflect the proportion of the total circulating time on the roundabout as defined
in columns 16-20 of the 11111 node data records (6.4.1). Thus if the circulating
time on a 4-arm roundabout is 3 seconds then the circulating time to the second
exit arm would be 1.5 seconds.
As shown in equation (8.9) entry capacity from an arm is a function only of the
circulating traffic at that point. It is possible to extend that definition such that the
total opposing flow VM in (8.9) may be written as:
VM = VC + Ks·Ei

where VC is the circulating flow crossing the entry and Ei is the exit flow on that
arm (and which therefore exits before traffic enters). Clearly for a one-way inbound
arm Ei is zero.
The Ks factors may only be defined on a link-by-link basis using the network X-file
facility; see 6.13. There is therefore no universal parameter which may be set as,
for example, with TAX. (In effect the universal default value is zero).
The correction, introduced in 10.6, involves applying CAPMIN at each time unit
rather than over a full cycle.
Pre 10.9 the new check could be removed by setting RB106 = F in &PARAM; post
10.9 RB106 is ignored and the new methods are always used.
The choice of a lane is very often of critical significance in the modelling of turn
capacities. For example, consider the case of a single lane at a priority junction
which is shared between straight ahead traffic and right turners (assuming drive
on the left) where the straight aheads are otherwise unimpeded, but the turning
traffic must find gaps in the opposing traffic. If there is heavy opposing traffic and
few gaps the turning capacity will be low and equally, since a straight ahead
vehicle cannot go if there is a stationary turner ahead, the straight ahead capacity
will be low as well. Clearly in this case if there were alternative lanes available to
the straight ahead traffic then they would use them in preference to the shared
lane but even so they would still lose the saturation flow from the unused (and
blocked) lane.
Lane choice in SATURN begins with the definition of available lanes per turn on
the network link data records (see 6.4.9). If there is no lane sharing then traffic is
divided equally amongst its allocated lanes. If two or more movements share then
the lane choice is determined on the basis of, in effect, a Wardrop Equilibrium
Principle (see 7.1.1) whereby:
All lanes used by a particular turning movement have equal ‘stop line delays’
and all unused lanes have equal or greater stop line delays.
Stop line delay is defined in a very similar way to normal delay except that it
includes an element of “clearance time” at the stop line equal to the inverse of the
saturation flow. It is a function of the total flow within each lane.
Further details of the allocation process may be found in SATURN Notes; for the
moment it is sufficient to say that it uses an algorithm very similar to but much
simpler than the Frank-Wolfe algorithm for solving Wardrop Equilibrium
assignment (7.1.2).
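Purely to illustrate the flavour of such an algorithm, the following sketch iteratively shifts each turn's flow towards its cheapest permitted lane; the delay function, names and values are invented for the example and do not correspond to the SATURN internals:

def stop_line_delay(lane_flow, sat_flow, base=5.0):
    """Hypothetical stop-line delay (seconds): a base delay plus clearance time
    (1/saturation flow) which rises steeply as the lane approaches saturation."""
    x = min(lane_flow / sat_flow, 0.99)
    return base + 3600.0 / sat_flow + 60.0 * x / (1.0 - x)

def allocate_lanes(turn_flows, permitted, sat_flow, iters=200, step=0.05):
    """turn_flows: {turn: pcu/hr}; permitted: {turn: [lanes]}; sat_flow per lane."""
    lanes = sorted({l for ls in permitted.values() for l in ls})
    # start by splitting each turn equally over its permitted lanes
    alloc = {(t, l): turn_flows[t] / len(permitted[t])
             for t in turn_flows for l in permitted[t]}
    for _ in range(iters):
        lane_flow = {l: sum(alloc.get((t, l), 0.0) for t in turn_flows) for l in lanes}
        for t in turn_flows:
            delays = {l: stop_line_delay(lane_flow[l], sat_flow) for l in permitted[t]}
            best = min(delays, key=delays.get)
            for l in permitted[t]:      # shift a small step towards the cheapest lane
                target = turn_flows[t] if l == best else 0.0
                alloc[(t, l)] += step * (target - alloc[(t, l)])
    return alloc

# left turns in lane 1 only, right turns in lane 2 only, ahead traffic in either:
flows = {"left": 300.0, "ahead": 900.0, "right": 400.0}
perm = {"left": [1], "ahead": [1, 2], "right": [2]}
for (t, l), v in sorted(allocate_lanes(flows, perm, 1800.0).items()):
    print(t, "lane", l, round(v), "pcu/hr")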
We note that lane choice is not fixed but is flow dependent, not only directly on the
arrival rates per turn on one junction arm but also, potentially and more indirectly,
on the flows on other arms which (via give-ways) affect the stop-line delays on
other arms. It is therefore one of the sources of problems for internal simulation
convergence (8.3).
8.8.2 Rivers
For certain modelling purposes turning movements and/or lanes are aggregated
together into “rivers” where a river consists of a set of movements where each
member shares lanes with at least one other member. For example, if a left turn
uses lane 1, straight ahead turns use lanes 1 and 2 and right turns use lane 2
then the left and right turns are in the same river even though they do not have a
lane in common.
Note that the definition of a river is based on “usage”, not purely on lane markings
so that, in the above example, if straight ahead traffic were allowed to use lanes 1
and 2 but (due, say, to heavy right turning traffic) only used lane 1 then the left
and straight ahead traffic would be in one river and the right turners in another.
If one member of a river is over capacity then all members are and they have
equal V/C ratios and discharge, in effect, as a single queue. The same principle
applies equally to individual lanes. This has implications for the way in which the
over-capacity delays are calculated, particularly under the ROSIE option (8.4.3).
Information on actual lane choice, the grouping of turns into rivers, various
definitions of capacities etc may be found using the numerical output tables option
within SATLOOK (11.11.1). An explanation of the tables used is given in the next
section.
There is, however, one exception to this rule which is that traffic which merges
(turn priority marker M) from a “minor” arm onto a “major” arm can affect the lane
choice on the major arm based on the (sometimes highly dubious!) concept that
drivers on the major arm will shift away from the lane(s) where merging takes
place in order to make life easier for the merging traffic. The same effect is also
considered on “Y-merges” or “double-M merges” between two equal-priority turns.
The following two sub-sections describe the principles of lane choice for single-M
and double-M merges; section 8.8.4 describes further possible capacity
reductions due to limited space on the exit lanes.
For example, Figure 8.7 illustrates a situation typical of an entry ramp onto a
motorway or other major road where junction B is coded as a priority junction and
turn D-B-C only is assigned a turn priority marker M (i.e., A-B-C has no priority
marker). Assume 2 lanes on A-B, one on D-B. Prior to 10.3, traffic on A-B would
be equally divided between lanes 1 and 2 and the merging traffic would have to
find a gap in the single inside lane carrying 50% of the major traffic. With 10.3
more traffic on A-B would be allocated to the outside lane (i.e., the bottom lane in
the diagram) in order to make the merge easier.
The amount of traffic transferred between lanes is calculated using the (assumed)
principle that each lane of the “major” arm A-B carries equal flow including the
flow D-B-C allocated to the merging lane (in this case lane 1) but factored by a
parameter APRESV (as in “apres vous”) which indicates the willingness of drivers
to accommodate merges. Thus if APRESV = 0 no preference is given to merging
traffic whatsoever, if APRESV = 1.0 they are effectively given equal weight.
Mathematically we could write:
Equation 8.10
VAB(lane 1) + APRESV·VDBC = VAB(lane 2)

where VAB(lane 1) and VAB(lane 2) are the A-B flows allocated to lanes 1 and 2
respectively and VDBC is the merging flow.
This allocation is independent of: (a) how many lanes there are on the merging
arm D-B and (b) how many lanes there are downstream of B (except in so far as if
there were 3 lanes downstream of B then D-B-C should probably not have been
coded as a merge in the first place). Similarly, if there were three lanes on A-B
then the same basic principle would apply with equal total (weighted) flow on all
three lanes.
In some circumstances, e.g., heavy merging traffic, light major traffic, it may not be
possible to exactly balance the flows, in which case there might be no major traffic
allocated to the merge lane and the (weighted) traffic in the merging lane would
exceed that in the other major lanes. (It follows that there would therefore be no
traffic for the minor flow to merge with and hence no delay or capacity loss to the
merging traffic).
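A minimal sketch of this allocation for a two-lane major arm (illustrative only; the names are not SATURN code):

def major_arm_lane_split(v_ab, v_dbc, apresv):
    """Return (lane 1, lane 2) flows for A-B such that
    lane 1 + APRESV * V(D-B-C) = lane 2, with lane 1 never negative."""
    lane1 = (v_ab - apresv * v_dbc) / 2.0
    if lane1 < 0.0:      # heavy merging / light major traffic: no balance possible
        return 0.0, v_ab
    return lane1, v_ab - lane1

print(major_arm_lane_split(1200.0, 400.0, 1.0))   # (400.0, 800.0)
print(major_arm_lane_split(1200.0, 400.0, 0.0))   # (600.0, 600.0) - no preference given
print(major_arm_lane_split(300.0, 800.0, 1.0))    # (0.0, 300.0)  - unbalanced case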
Clearly the end effect of the new lane choice mechanism is to increase the
capacity for the merging traffic and to reduce its delay with the effect being greater
with increasing values of APRESV.
Note that, with merges (including Y-merges described next), an extra line is added
to lane-choice tables such as 8.1a or 8.2a (see 8.9.1) to indicate the flow of
merging traffic which contributed to the final lane choice.
The same principle also applies in the case of a “Y-merge” (Figure 8.8) where two
streams of traffic, both coded M, merge into one with equal priority, as happens
most commonly when two motorways merge together. In this case both arms will
have their lane choices adjusted such that for both arms the underlying principle
of equal flow per lane (including the “other” merging traffic in the central lane) will
be established.
Note that in the case of a Y-merge the implicit assumption is that there is always
one and only one central shared merging lane independent of the number of lanes
on the two merging arms and on the number of arms downstream. Thus our
model is one of 2 + 2 lanes into 3 or 3 + 3 into 5 even if the downstream arm had,
say, 4 lanes in both cases. (The presumption is that 2 + 2 into 4 would not have
been coded as a Y-merge in the first place.)
There is therefore a strong implication that both arms will have more than one
lane, most likely the same number. Indeed the situation where one arm has one
lane and the other has multiple lanes leads to a “Serious Warning” since it is
possible for the multi-lane approach to totally block off any capacity for the single
lane; this situation, presumably an entry ramp onto a motorway, is best handled
by coding the entry ramp as M and the motorway entry without any priority
marker.
Note that the case of a single lane on both entry arms of the Y is accepted
although possibly unusual in practice.
We may further note that in the case of a Y-merge the parameter APRESV is not
used as both merging streams are given equal weight with the flows into which
they are merging: in effect APRESV = 1.0 here.
Physically, two distinct merging situations may be distinguished:
(1) The two lanes merge or “funnel” into a single lane which therefore
restricts the total number of vehicles which may enter from both streams;
(2) The merging operation is more a question of two parallel streams being
brought into lateral contact without any significant restriction on total exit
flow.
For example, referring to Figure 8.7, if the motorway has 2 lanes on A-B and 2
lanes on B-C and there is no elongated slip road area for the single lane entry
from D then the merge operation would be a “funnel”. If, on the other hand, there
were a definite slip road that eventually leads to a section of 3 lanes on B-C then
the merging would be more “lateral”. And there will, of course, be intermediate
layouts.
In both cases SATURN models the capacity loss due to the “minor” arm seeking
gaps in the “major” arm via the same equation. Thus the capacity Cm for a “minor”
turn coded as M is, in either situation, given by:
Equation 8.11
Cm = Sm (1 - VM / SM)^(G·SM)

where here VM and SM refer to the flow and saturation flow of the “major” arm, Sm
to the saturation flow of the minor (M-marked) turn and G to the merge gap
parameter GAPM.
We note the critical role played by G·SM. Figure 8.9 illustrates graphs of Cm vrs
VM under three possible situations where: (a) G·SM < 1, (b) G·SM = 1 and (c) G·SM
> 1. Thus, under (a), Cm is relatively insensitive to the major flow VM until it
approaches capacity whereas with (c) Cm decreases much more rapidly as a
function of VM.
Figure 8.9 - Minor arm capacity Cm as a function of VM / SM for the cases
(a) G·SM < 1, (b) G·SM = 1 and (c) G·SM > 1
However there may be certain conditions under which the explicit capacity of the
exit lane implied by a funnel-merge can further restrict capacities; the various
possibilities are discussed in the following three sub-sections.
We consider here the possibility of further capacity reductions above and beyond
equation (Equation 8.11) in the case of a single-M merge, as illustrated in Figure
8.7, where the merge is judged to “funnel”. In this case the capacity restrictions on
the exit link may be written as:
Equation 8.12
(Vm + VM) / SX <= 1
where SX is the saturation flow (assumed equal to capacity) of the exit lane. In
general, within SATURN, we will not have the value of SX specified, particularly if it
is at the upstream end of a simulation-coded link. However, if we assume that the
physical characteristics of the exit lane will not be that different from those of the
two entry lanes, we can approximate (Equation 8.12) as:
Equation 8.13
Vm / Sm + VM / SM <= 1
or
Equation 8.14
Vm <= Cm = Sm (1 - VM / SM)
Equation 8.14 therefore imposes a second restriction on Cm such that the final
capacity will be the minimum of Equation 8.11 and Equation 8.14. Note that
Equation 8.14 is identical to that represented by case (b) in Figure 8.9 where G·SM
= 1. Thus Equation 8.14 only sets a lower capacity Cm in case (a) in Figure 8.9,
when G·SM < 1, i.e., G < 1 / SM. In other words the capacity restriction due to
funnelling effectively places a lower limit of 1 / SM on the value of the gap for
merging, G (i.e., GAPM).
If the critical gap is greater than 1 / SM or the merge is not considered to funnel
then no further capacity restrictions above and beyond Equation 8.11 are
considered.
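Putting Equation 8.11 and Equation 8.14 together, the minor-arm capacity at a funnel merge could be sketched as follows (illustrative only; consistent units are assumed so that G·SM is dimensionless):

def minor_merge_capacity(sm, vM, sM, g, funnel=True):
    """sm = minor turn saturation flow, vM/sM = major arm flow and saturation flow, g = GAPM."""
    spare = max(0.0, 1.0 - vM / sM)
    cap = sm * spare ** (g * sM)            # Equation 8.11 (gap acceptance)
    if funnel:
        cap = min(cap, sm * spare)          # Equation 8.14 (funnelling limit, the G·SM = 1 line)
    return cap

# With G·SM = 0.5 the funnelling limit is the binding one:
print(round(minor_merge_capacity(1800.0, 1800.0, 3600.0, 0.5 / 3600.0)))   # 900 rather than ~1273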
More specifically, the capacity is removed from the lane on the major arm where
merging occurs, i.e., the inside lane for a “normal” near-side merge. The “post-
squeeze” capacity of the merging lane on the major arm CM is given by:
Equation 8.15
CM = CM’ (1 - Vm / Sm)
where Vm and Sm are the (actual) flow and the saturation flow of the minor
merging turn and CM’ is the “pre-squeeze” capacity of the major arm lane (subject
to the restriction that CM > CAPMIN).
Note that the actual minor-arm flow Vm is restricted to be less than its capacity
calculated after gap acceptance and/or funnelling, i.e. Equation 8.11 and
Equation 8.14, which, in turn, depend on the flow on the major arm. Hence there
is a state of “equilibrium” between the two movements.
In general this reduction in capacity should not be sufficient to convert the major
movement A-B-C from under capacity to over capacity on the basis that, if A-B-C
were near to capacity, then there would be relatively few gaps available to D-B-C
to enter and the traffic that would be allowed to enter would be less than VABC –
VDBC. However, it may be possible with very small GAPM values for this to
happen.
Note that the reduction in capacity takes place whether or not APRESV equals
zero.
The principles of “funnelling” described above for “single-M” merges are equally
applied to Y-merges where it applies to both arms. Thus, if the minimum gap
(GAPM) is very small (< 1/S) and the total V/S ratio for the two exit flows after gap
acceptance has been applied initially exceeds 1, then the capacities of both arms
need to be further restricted. There is, however, an added proviso that both arms
are “guaranteed” 50% sat flow capacities as per “shared” movements (see
equation (8.3)).
In addition the principles of allocating any “space” unused by one turn to the other
as applied for shared lanes (see 8.9.2) are also applied to Y-merges.
N.B. Versions 10.8 and beyond always apply the principles of funnelling to Y-
merges whether or not the parameter Funnel = T and/or clear exit priority
modifiers are used (see 8.8.4.5). The thinking is that “funnelling” (i.e., setting an
upper limit on the amount of traffic that can enter a single exit lane at a Y-merge) is a
fundamental property of that configuration and that a “lateral” Y-merge is not a
good modelling concept.
Note that the new “rules” for Y-merges introduced in 10.8 and applied when M108
= T (see 8.8.4.5) produce different (possibly significantly different) results from
those prior to 10.8, although the general principles applied are broadly similar. It is
felt that the new rules are more realistic and are therefore recommended although
it is appreciated that users may wish to preserve the old results from well-
calibrated networks by setting M108 = F (i.e., not the default value).
Thus if M108 = F then the rules for (and interpretation of) merges are those
applied prior to the release of version 10.8. In particular this means that the
possible capacity restrictions described under 8.8.4.2 do not apply, i.e. the
“funnelling” effect of the exit lane is ignored. On the other hand the capacity
reductions to major arms at single-M merges and to both arms at Y-merges are
still applied as they were in 10.7.
On the other hand, for Y-merges, as noted in 8.8.4.4 above, funnelling restrictions
are always applied independent of whether FUNNEL = T or F and/or whether a
priority marker C is used as long as M108 = T.
Finally, it should be pointed out that if GAPM is sufficiently large, i.e., > 1/SM in all
possible cases, then none of the capacity restrictions mentioned above due to
funnelling affect the results and the values used for M108, FUNNEL and C-
modifiers are irrelevant.
In summary, capacity restrictions on a single-M exit arm only come into play when
M108 = T, FUNNEL = T, a C-modifier has not been used and GAPM < 1/SM.
Capacities at the lowest level of disaggregation can be displayed using the lane
distribution option in SATLOOK as illustrated in Tables 8.1a and 8.1b for link 68 to
18, taken from the standard test run. We first describe how these numbers are to
be interpreted before describing how, in general terms, they are calculated below.
Table 8.1 (a) - Lane distribution of stop line arrivals for traffic on link 68 to 18

Turn To        Lane 1        Lane 2        Total
19             248 (241)     0 (-1)        248 (241)
45             130 (126)     459 (448)     589 (574)
Total          378 (367)     459 (448)     837 (815)

Note: All flows in pcus per hour. Figures in brackets are capacities
Thus we see that lane 1 above is shared by two turns with a flow of 248 pcu/hr
turning to node 19 and 130 pcu/hr to node 45. Their respective capacities within
this lane are 241 and 126 pcu/hr. Lane 2 on the other hand is reserved for the
second turn only with a flow of 459 pcu/hr and a capacity of 448. The -1 under
lane 2 for the turn capacity into 19 indicates a banned movement from that lane.
A value of 0 would indicate an allowed movement but no capacity.
The numbers on the far right give the total flow and total capacity for each turning
movement. Those at the bottom give the total flows and capacities per lane.
Totals for the link as a whole are given lower right. V/C ratios are also given by
lane and in total.
Note that in this case the capacities per turn and per lane are additive but this is
solely due to the fact that the lanes are over-capacity and not a general rule. See
8.9.2.
Table 8.1 (b) - Comparison of turn capacities with lane sharing included vrs. turn
capacities with lane sharing excluded
The second table illustrates how much capacity is lost through the presence of
other vehicles in shared lanes. Thus the column headed CAPACITY EXC gives
the capacity for each turn calculated as though there were no other movements
present on the link, while the CAPACITY INCL gives the actual capacity with such
effects included. Thus the presence of vehicles making turn 68-18-45 reduces the
capacity of turn 68-18-19 from 336 to 241 pcu/hr, a ratio of 0.718.
Let us now consider how capacities such as those above are calculated in the
presence of lane sharing. If turns have exclusive lanes their capacity is assumed
to be equally shared between the permitted lanes (but see 8.8.3 for an exception)
and the problems of defining and calculating lane capacities do not arise; lane and
turn capacities are identical.
We start at the most “disaggregate” level of turn capacities per shared lane. These
are calculated in two different ways, depending upon whether the lane is under
capacity or over capacity (as in Table 8.1).
Under-Capacity Lanes
For under capacity lanes the rule is that each turning movement in that lane has a
capacity equal to its actual flow plus an additional capacity reflecting the spare
capacity in that lane.
The problem which emerges here is that the “spare capacity” may differ for
different turning movements if they have different saturation flow rates. If, for
example, we had a left and straight ahead movement in the same lane they might
have been assigned saturation capacities of 1200 and 1500 pcu/hr per lane. (The
saturation capacity of a lane for a given turn is assumed to be the user-input turn
saturation flow divided by the number of lanes; e.g., the ahead movement above
might have been coded with a capacity of 3000 pcu/hr and 2 lanes.) If the lane is
judged to carry, say, 400 lefts and 500 aheads (one third of their respective
saturation flows in both cases) then we assume that, in total, the lane is two thirds
full, and that the remaining one third capacity could accommodate either an
additional 400 lefts or 500 aheads. Hence their total capacities in that lane would
be 800 and 1,000 respectively.
To assign a total lane capacity we cannot simply add up the constituent turn
capacities since in the above case this would give us the unrealistic figure of
1,800 pcu/hr; we would be, in effect, counting the spare capacity twice. We
therefore define the lane capacity to equal the total flow plus a “weighted” average
of any spare capacity. In the above case the actual flow is 900 pcu/hr, the “spare”
capacity is one third of a weighted average of 1200 and 1500 pcu/hr, with the
weights determined by the relative flows. The weighted average saturation flow is
therefore (400/900) x 1200 + (500/900) x 1500 = 1,367 pcu/hr (approximately), so
that the lane capacity equals 900 + 1,367/3, i.e., roughly 1,356 pcu/hr.
These capacity calculations are in fact carried out separately for each individual
time unit and summed, whereas the above example applied strictly speaking only
to the case of uniform flows over the time period simulated (i.e., “flat” profiles).
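For under-capacity lanes the rule just described might be sketched as follows (illustrative only, reproducing the 400/500 worked example above):

def under_capacity_lane(flows, sats):
    """flows/sats: per-turn flow and per-lane saturation flow in pcu/hr.
    Returns (per-turn capacities in this lane, lane capacity)."""
    usage = sum(v / s for v, s in zip(flows, sats))            # fraction of the lane used
    spare = max(0.0, 1.0 - usage)
    turn_caps = [v + spare * s for v, s in zip(flows, sats)]   # flow + own share of the spare
    total_flow = sum(flows)
    weighted_sat = sum(v * s for v, s in zip(flows, sats)) / total_flow
    return turn_caps, total_flow + spare * weighted_sat        # flow + weighted spare capacity

caps, lane_cap = under_capacity_lane([400.0, 500.0], [1200.0, 1500.0])
print([round(c) for c in caps], round(lane_cap))               # [800, 1000] 1356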
Over-Capacity Lanes
If however the lane is over capacity then the individual turn capacities are
proportional to their flows. Thus if one turn contributes 75% of the flow to an over-
capacity lane it is allocated 75% of its capacity for that lane. This implies that all
turns in a lane have equal V/C ratios.
In this case the total capacity for a lane is simply the sum of its individual turn
capacities per lane since the complication of spare capacity does not, by
definition, arise.
Total Capacities
The total capacity for a turn is the sum of its individual capacities in each lane,
while the total capacity of the link is the sum of its individual lane capacities
calculated as above. Therefore the total link capacity is not equal to the total of its
individual turn capacities, again because the latter sum may “double count” any
shared spare capacity.
Finally the total capacity at a node is obtained by summing up the link capacities
of its entry arms.
Examples
To further illustrate the principles involved Tables 8.2a and 8.2b show data for link
68 to 18 with the flows reduced by 50% so that the link is now under capacity.
Table 8.2 (a) - Lane distribution of stop line arrivals for traffic on link 68 to 18.
Lanes
Turn To 1 2 Total
Note: All flows in pcus per hour. Figures in brackets are capacities
Table 8.2 (b) - Comparison of turn capacities with lane sharing included vrs.
turn capacities with lane sharing excluded
We may note several differences from the previous results in Table 8.1:
1) While the turn to 45 still uses both lanes 1 and 2 the V/C ratios per lane are
not equal (although their stop line delays are; see 8.8.1).
3) The lane capacity in lane 1 – 375 - is not the sum of the two turning
capacities due to the fact that the spare capacity has not been double
counted. Thus the spare capacity in lane 1 as seen by the turn to node 19 is
273 - 120 = 153, to node 45 it is 292 - 86 = 206 while combined it is 375 -
206 = 169. The differences per turn are explained by different saturation
flow across the stop line.
4) Note as before, however, that turn capacities are additive by lane; e.g. 740 =
292 + 448, as are the totals: 375 + 448 = 823.
The implications of comparing Tables 8.1b and 8.2b are discussed below.
Yet further complications arise with the capacities C as used in the simulation-set
time-volume relationships (8.5) in the presence of shared and over-capacity lanes.
What is required in equation (8.5) is that the point of transition from “under
capacity” flow, (8.5a), to “over capacity” flow (8.5b) should occur at that flow for a
particular turning movement at which queues should start to form assuming that
all other turning movements in that link remain at their current level. Defining a
“queue capacity” in this manner can lead to different values than those defined
above and, in certain situations, as we shall demonstrate below, the “queue
capacity” may actually be zero.
Consider a lane with two movements 1 and 2 with saturation flows of 1800 and
1200 pcu/hr respectively and no further restrictions. If, case (i), V1 = 900 and V2 =
600 then the lane has a V/C ratio of 1.0 and the queue capacities would be 900
and 600. These equal the “normal” capacities as defined above.
If, case (ii), V1 = 1200 and V2 = 800 (i.e. both equal to 2/3 of their saturation flows)
then the combined V/C ratio would be 4/3 or 1.33. Hence the “normal” capacities
would be 50% of the saturation flows, again 900 and 600. (50% since both flows
have equal V/S ratios). However, if V1 is fixed at 1200, 2/3 of S1, then queues
would form whenever V2 exceeds 1/3 of S2; hence the queued capacity of
movement 2 would be 400, and similarly that for movement 1 would be 600.
Finally, case (iii), consider the case where either movement on its own would be
over capacity; e.g. V1 = 2000, V2 = 1500. Here, the queue capacity, by definition,
would be zero for both movements. [If, say, V1 = 2000 but V2 = 600 then only
movement 2 would have zero queue capacity, not both.]
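The three cases above can be reproduced by a simple sketch in which the queue capacity of each turn is the flow it could carry with all other shared flows held fixed (illustrative only):

def queue_capacities(flows, sats):
    """flows/sats per movement sharing one lane (pcu/hr); returns queue capacities."""
    caps = []
    for i, s_i in enumerate(sats):
        # capacity left over once the other movements' shares of the lane are fixed
        others = sum(v / s for j, (v, s) in enumerate(zip(flows, sats)) if j != i)
        caps.append(max(0.0, s_i * (1.0 - others)))
    return caps

print([round(c) for c in queue_capacities([900.0, 600.0],   [1800.0, 1200.0])])   # case (i):   [900, 600]
print([round(c) for c in queue_capacities([1200.0, 800.0],  [1800.0, 1200.0])])   # case (ii):  [600, 400]
print([round(c) for c in queue_capacities([2000.0, 1500.0], [1800.0, 1200.0])])   # case (iii): [0, 0]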
The example shown in Table 8.1 fits into case (ii) above where neither turn is over
capacity in lane 1 on its own but together they are. Hence the capacities listed
under QCAP are lower than the normal capacities. Note that this only arises from
the shared movements in lane 1; in lane 2, which has only one turn, the
contributions to QCAP and to the normal capacity are identical.
A problem related to that of determining the flow (by lane by movement) at which
permanent queues start to form is the rate at which the queue disperses. In the
simple case of a single unshared movement if, say, the capacity is 1,000 pcu/hr
then if the arrival flow exceeds 1,000 then a permanent queue begins to form
which disperses at a rate of 1,000 pcu/hr from the stop line. With shared over-
capacity movements the rate at which the queue disperses is more complicated.
Thus it may be shown that the rate Sd at which a shared queue disperses is given
by the weighted harmonic mean:
Equation 8.16
Sd = 1 / [ Σi ( Pi / Si ) ]

where Pi is the proportion of the total queued flow contributed by movement i and
Si is its saturation flow.
For example in all three cases given in 8.9.3.1 above the queue dispersion rate
would be 1500 pcu/hr (equal to the average of 1200 and 1800 but only by
coincidence, not a general rule) which exceeds any of the individual turn
capacities. Note equally, in Tables 8.1b and 8.2b, that the dispersion capacities
also exceed the individual capacities.
Note that (8.10) refers to the full queue made up of all turning movements which
share; the rate at which individual movements disperse may differ.
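For example, the dispersion rate of Equation 8.16 can be checked with a one-line calculation (illustrative only):

def dispersion_rate(props, sats):
    """props: proportion of the shared queue per movement; sats: saturation flows (pcu/hr)."""
    return 1.0 / sum(p / s for p, s in zip(props, sats))

print(round(dispersion_rate([0.6, 0.4], [1800.0, 1200.0])))   # 1500 pcu/hr, as in the cases above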
The above considerations affect the way in which we must define assignment
cost-flow curves for each simulation turn in order to retain the fundamental
principle that the predicted delay for any specified flow for that turn must assume
that the flows for all other turns with which it shares are fixed. This requires that:
the “transition point” C in equations (8.5a) and (8.5b) should be the queue
capacity CQ;
the C in the numerator of the second term in (8.5b) should also be the queue
capacity CQ;
the C in the denominator of the second term in (8.5b) should be the queue
dispersion capacity CD; giving:
Equation 8.17
t = t0 + A·V^n                                  V <= CQ     (a)
t = t0 + A·CQ^n + B (V - CQ) / CD               V >= CQ     (b)
where B as before is half the time period which is being modelled. The
parameters t0, A and n are again evaluated by the simulation. Figure 8.10
sketches the qualitative differences between the two sets of equations
(assuming queued conditions with the current flows). Note that the delay at
the current flow V is identical with both equations.
Figure 8.10 - The basic and modified cost-flow curves for over-capacity simulation
turns.
Note that since CQ may be zero in heavily loaded cases equation (8.11a) may
never apply and the minimum delay term t0 will need to include both a transient
delay to vehicles once they reach the stop line plus a normal over-capacity delay
component in order to reach the stop line.
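As a sketch, the modified curve could be evaluated as follows (illustrative parameter values only; t0, A and n would come from the simulation):

def turn_time(v, t0, a, n, cq, cd, b):
    """Flows in pcu/hr, times in seconds; b = half the modelled time period in seconds."""
    if cq > 0.0 and v <= cq:
        return t0 + a * v ** n                        # Equation 8.17 (a)
    return t0 + a * cq ** n + b * (v - cq) / cd       # Equation 8.17 (b)

# e.g. t0 = 10 s, A = 1e-6, n = 2, CQ = 400, CD = 1500, half-hour period (b = 900 s):
print(round(turn_time(500.0, 10.0, 1e-6, 2.0, 400.0, 1500.0, 900.0), 1))   # about 70 s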
We may further note that equations (8.5) (assuming C = the “normal” turn
capacity) and 8.11 both go through the same point defined by the current turn flow
and simulated delay, the difference being that the modified curve has a lower
slope reflecting the fact that mathematically the dispersion capacity must be
greater than any individual turning capacity. This is illustrated in Figure 8.10.
Our definition of the assignment cost (or time) vrs flow curve does not therefore
have any impact on the simulation outputs (e.g. total vehicle hours) but is only
introduced in order to make the calculated delays in the assignment into better
predictors of what a subsequent simulation would produce. Their function is
therefore primarily to improve the convergence between assignment and
simulation sub-models. We should reach the same answers whether we use
equations (8.5) or (8.11) but (8.11) should get us there more quickly.
In fact life is even more complicated than that described in 8.9.3.3 which, strictly
speaking, is couched in terms of a single lane. The concept of a “river” has been
noted in 8.8.2 where a particular property of a river is that, if one turn which is a
member of a river is over capacity, then all turns in that river must be over
capacity and indeed they must have the same V/C ratio.
Note that it is possible for a link as a whole to have a V/C ratio less than 1
(indicating a lack of queue) but for individual turns on that link to have V/C > 1.
This arises if the spare capacity of one turn is unavailable to another (and may be
indicative of poor engineering design).
Similar considerations apply to total node capacities where the node as a whole
may have less flow than capacity but individual links and/or turns may be over-
capacity.
We list here the codes for those DA arrays in .ufs network files which refer to
capacities and explain their differences. Section 17.10 performs a similar function
for time-based arrays.
1393 Capacity for entry flows from simulation centroid connectors to the
upstream end of simulation links (see 15.16) N.B. Only relevant if
blocking back on a link extends to entering flows from a zone (see
8.5, note (b))
1473 Simulation link capacities as per 8.9.4.
1643 Simulation turn capacities as per 8.9.1 and tables 8.1a and 8.2a.
1833 Capacity of an assignment link as used in the cost-flow curves; thus,
for simulation turns, CQ as defined by 8.9.3.3.
1843 The “reverse capacity” of an assignment link; thus, for simulation
turns, 1/CD as defined in equation (8.11b).
1863 The “practical capacity” of an assignment link: for simulation links and
turns identical to 1473 and 1643 and thus as defined in 8.9.1; for
buffer links it is the “normal” capacity.
with default values taken from the input UFA file, plus a single additional logical
parameter TITLE, default value FALSE, such that if TITLE is TRUE record 2 - see
below - is to be input.
End of Input.
In order to run SATSIM as part of the normal iterative sequence shown in Figure
3.1a a “dummy file” SATSIM0.DAT is required, the “standard” version of which is
as follows:
&PARAM
&END