R R Weber
  • Statistical Laboratory
    Centre for Mathematical Sciences
    Wilberforce Road
    Cambridge CB3 0WB
  • +44 1223 337944
  • I am interested in the mathematics of systems that are large, complex and subject to uncertainty. By developing mathe...
The mean queueing time in a G/GI/m queue is shown to be a nonincreasing and convex function of the number of servers, m. This means that the marginal decrease in mean queueing time brought about by the addition of two extra servers is always less than twice the decrease brought about by the addition of one extra server. As a consequence, a method of marginal analysis is optimal for allocating a number of servers amongst several service facilities so as to minimize the sum of the mean queueing times at the facilities.
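As a rough illustration (not from the paper), the convexity property is exactly what makes greedy marginal analysis work: give each additional server to the facility whose mean queueing time would fall the most. The sketch below assumes hypothetical M/M/m facilities and uses the Erlang C formula as a stand-in for the per-facility mean queueing time; the paper's result covers the more general G/GI/m case.

import math

def erlang_c_wait(lam, mu, m):
    """Mean queueing time in an M/M/m queue (an illustrative stand-in model);
    returns inf if the queue is unstable with m servers."""
    rho = lam / (m * mu)
    if rho >= 1.0:
        return math.inf
    a = lam / mu
    tail = a**m / math.factorial(m) / (1.0 - rho)
    p_wait = tail / (sum(a**k / math.factorial(k) for k in range(m)) + tail)
    return p_wait / (m * mu - lam)

def marginal_allocation(facilities, total_servers):
    """Greedy marginal analysis: repeatedly give the next server to the facility
    whose mean queueing time would drop the most.  Optimal because each facility's
    mean queueing time is nonincreasing and convex in its number of servers."""
    alloc = [1] * len(facilities)                  # start with one server each
    for _ in range(total_servers - len(facilities)):
        gains = []
        for (lam, mu), m in zip(facilities, alloc):
            now, nxt = erlang_c_wait(lam, mu, m), erlang_c_wait(lam, mu, m + 1)
            gains.append(math.inf if math.isinf(now) else now - nxt)
        best = max(range(len(facilities)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc

# Example: split 8 servers among three facilities with different loads.
print(marginal_allocation([(0.8, 1.0), (1.5, 1.0), (0.5, 1.0)], 8))
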
A number of multi-priority jobs are to be processed on two heterogeneous processors. Of the jobs waiting in the buffer, jobs with the highest priority have the first option of being dispatched for processing when a processor becomes available. On each processor, the processing times of the jobs within each priority class are stochastic, but have known distributions with decreasing mean residual (remaining) processing times. Processors are heterogeneous in the sense that, for each priority class, one has a lesser average processing time than the other. It is shown that the non-preemptive scheduling strategy for each priority class to minimize its expected flowtime is of threshold type. For each class, the threshold values, which specify when the slower processor is utilized, may be readily computed. It is also shown that social and individual optimality coincide.
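A hypothetical sketch of how a threshold rule of this kind could be applied at dispatch time for a single priority class; the threshold value is a placeholder, and the paper's formulas for computing the thresholds are not reproduced here.

def dispatch(waiting, fast_free, slow_free, threshold):
    """Non-preemptive threshold dispatch for one priority class.

    waiting    -- number of jobs of this class currently in the buffer
    fast_free  -- True if the faster processor is idle
    slow_free  -- True if the slower processor is idle
    threshold  -- class-specific threshold (assumed precomputed); the slower
                  processor is used only when the queue is at least this long

    Returns 'fast', 'slow' or None (hold the jobs back for now).
    """
    if waiting == 0:
        return None
    if fast_free:
        return 'fast'          # the faster processor is always worth using
    if slow_free and waiting >= threshold:
        return 'slow'          # long queue: accept the slower processor
    return None                # short queue: wait for the fast processor

# Example: with a threshold of 3, a single waiting job declines the slow processor.
print(dispatch(waiting=1, fast_free=False, slow_free=True, threshold=3))   # None
print(dispatch(waiting=4, fast_free=False, slow_free=True, threshold=3))   # 'slow'
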
A number of items are arranged in a line. At each unit of time one of the items is requested, the ith being requested with probability pi. We consider rules which reorder the items between successive requests in a fashion which depends only on the position in which the most recently requested item was found. It has been conjectured that the rule which always moves the requested item one closer to the front of the line minimizes the average position of the requested item. An example with six items shows that the conjecture is false.
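For concreteness, a small Monte Carlo sketch (not the paper's six-item counterexample) that estimates the long-run average position of the requested item under the 'move one closer' rule for any given request probabilities.

import random

def transposition_average_position(p, steps=200_000, seed=1):
    """Monte Carlo estimate of the long-run average position of the requested
    item when the line is reordered by the 'move one closer' rule.
    p[i] is the probability that item i is requested; positions are 1-based."""
    rng = random.Random(seed)
    line = list(range(len(p)))          # current left-to-right order of the items
    total = 0
    for _ in range(steps):
        item = rng.choices(range(len(p)), weights=p)[0]
        pos = line.index(item)          # 0-based position at which it is found
        total += pos + 1
        if pos > 0:                     # move the requested item one closer to the front
            line[pos - 1], line[pos] = line[pos], line[pos - 1]
    return total / steps

# Example with six items and skewed request probabilities (illustrative only;
# not the specific counterexample constructed in the paper).
print(transposition_average_position([0.4, 0.25, 0.15, 0.1, 0.06, 0.04]))
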
We show that the fluid approximation to Whittle's index policy for restless bandits has a globally asymptotically stable equilibrium point when the bandits move on just three states. It follows that in this case the index policy is asymptotically optimal.
We model a selection process arising in certain storage problems. A sequence (X1, ..., Xn) of non-negative, independent and identically distributed random variables is given. F(x) denotes the common distribution of the Xi's. With F(x) given we seek a decision rule for selecting a maximum number of the Xi's subject to the following constraints: (1) the sum of the elements selected must not exceed a given constant c > 0, and (2) the Xi's must be inspected in strict sequence, with the decision to accept or reject an element being final at the time it is inspected. We prove first that there exists such a rule of threshold type, i.e. the ith element inspected is accepted if and only if it is no larger than a threshold which depends only on i and the sum of the elements already accepted. Next, we prove that if F(x) ~ Ax^α as x → 0 for some A, α > 0, then for fixed c the expected number, E_n(c), selected by an optimal threshold is characterized by asymptotics as c → ∞ and n → ∞ with c/n he...
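A toy sketch of how a threshold-type rule operates; the threshold function used here is an arbitrary placeholder, not the optimal threshold characterized in the paper, and the Uniform(0,1) input is only an example of a distribution with F(x) ~ x near 0.

import random

def run_threshold_rule(xs, c, threshold):
    """Apply a threshold-type selection rule to the sequence xs with budget c.
    threshold(i, accepted_sum) gives the acceptance threshold for the i-th
    element (1-based) given the sum of the elements already accepted."""
    accepted_sum, count = 0.0, 0
    for i, x in enumerate(xs, start=1):
        if x <= threshold(i, accepted_sum) and accepted_sum + x <= c:
            accepted_sum += x
            count += 1
    return count

# Illustrative placeholder threshold: spread the remaining budget over the
# remaining inspections (an assumption, not the paper's optimal rule).
n, c = 1000, 10.0
rng = random.Random(0)
xs = [rng.random() for _ in range(n)]          # Uniform(0,1) draws
rule = lambda i, s: 2 * (c - s) / (n - i + 1)
print(run_threshold_rule(xs, c, rule))
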
We consider a flexible manufacturing facility that can be operated in any of m different modes. While running in mode k certain intermediate products are consumed and other intermediate or finished products are created. There may be variability in the manufacturing process, as well as random arrivals of raw materials and orders for finished products. We establish conditions that ensure demands can be satisfied while maintaining bounded levels of inventories. These results may be viewed as generalizing to the flexible manufacturing context the notion of stability for a queueing system.
We consider a model in which wireless LANs are to be provided in a number of locations. The owners of these WLANs have decided to peer with one another so that they can roam in locations other than their own. We consider the question of designing a ...
Nowadays, in the markets of broadband access services, traditional contracts are of "static" type. Customers buy the right to use a specific amount of resources for a specific period of time. On the other hand, modern services and applications render the demand for bandwidth highly variable and bursty. New types of contracts emerge ("dynamic contracts") which allow customers to dynamically adjust their bandwidth demand. In such an environment, we study the case of a price competition situation between two providers of static and dynamic contracts. We investigate the resulting reaction curves, search for the existence of an equilibrium point and examine if and how the market is segmented between the two providers. Our first model considers simple, constant provision costs. We then extend the model to include costs that depend on the multiplexing capabilities that the contracts offer to the providers, taking into consideration the size of the market. We base ou...
We study usage-sensitive charging schemes for broadband communications networks. We argue that a connection's “effective bandwidth” is a good proxy for the quantity of network resource that the connection consumes and can be the basis for a usage charge. The determination of effective bandwidth can be problematic, however, since it involves the moment-generating function of the cell arrival process, which may be difficult to model or measure. This article describes methods of computing usage charges from simple measurements and relating these to bounds on the effective bandwidth. Thus we show that charging for usage on the basis of effective bandwidths can be approximated well by charges based on simple measurements.
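As a hedged illustration of the measurement side, the usual empirical estimate of an effective bandwidth from a measured trace is shown below; the space parameter s is assumed to be supplied by the operating context, and the paper's specific bounds based on simple measurements are not reproduced. The resulting value lies between the trace's mean and peak rates.

import math

def empirical_effective_bandwidth(counts, s, t=1.0):
    """Empirical effective bandwidth alpha(s, t) = (1/(s*t)) * log(mean of exp(s*X)),
    estimated from measured amounts of traffic X arriving in windows of length t.
    The space parameter s is assumed given (e.g. from the buffer/QoS context)."""
    mgf = sum(math.exp(s * x) for x in counts) / len(counts)
    return math.log(mgf) / (s * t)

# Example with a hypothetical measured trace of cells per window.
trace = [12, 30, 7, 45, 22, 18, 60, 9, 33, 25]
print(empirical_effective_bandwidth(trace, s=0.05))
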
We show that in any single server queue having a FCFS discipline, the queueing and waiting times of every customer are nonincreasing convex functions of the service rate. An example shows that this need not be the case if the queue has more than one server.
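A quick numerical illustration (assumed M/M/1 input, not part of the paper) of the monotone-convex behaviour, using Lindley's recursion with common random numbers so that only the service rate changes between runs.

import random

def mean_wait(mu, n=100_000, seed=42):
    """Simulate an FCFS single-server queue via Lindley's recursion and return
    the average waiting time when the server works at rate mu.
    Interarrival times are Exp(1) and service requirements Exp(1) here (an
    illustrative choice); served at rate mu, a requirement S takes time S/mu."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(1.0) / mu         # service time at rate mu
        a = rng.expovariate(1.0)              # next interarrival time
        w = max(0.0, w + s - a)               # Lindley recursion
        total += w
    return total / n

# Increasing the service rate: the estimated mean wait decreases, and the
# successive decreases themselves shrink, consistent with convexity in mu.
for mu in (1.5, 2.0, 2.5, 3.0):
    print(mu, round(mean_wait(mu), 3))
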
We consider scheduling problems with m machines in parallel and n jobs. The machines are subject to breakdown and repair. Jobs have exponentially distributed processing times and possibly random release dates. For cost functions that only depend on the set of uncompleted jobs at time t we provide necessary and sufficient conditions for the LEPT rule to minimize the expected cost at all t within the class of preemptive policies. This encompasses results that are known for makespan, and provides new results for the work remaining at time t. An application is that if the cµ rule has the same priority assignment as the LEPT rule then it minimizes the expected weighted number of jobs in the system for all t. Given appropriate conditions, we also show that the cµ rule minimizes the expected value of other objective functions, such as weighted sum of job completion times, weighted number of late jobs, or weighted sum of job tardinesses, when jobs have a common random due date.
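A minimal sketch of the cµ priority ordering itself, with assumed job data: jobs are ranked by the product of their weight c and exponential service rate µ, highest product first.

def cmu_order(jobs):
    """Order jobs by the c-mu rule: highest c_i * mu_i first.
    Each job is (name, c, mu) with c a holding/weight cost and mu the
    exponential service rate (mean processing time 1/mu)."""
    return sorted(jobs, key=lambda job: job[1] * job[2], reverse=True)

# Example: job B has the largest c*mu product, so it gets highest priority.
jobs = [("A", 1.0, 0.5), ("B", 3.0, 0.4), ("C", 2.0, 0.3)]
print([name for name, c, mu in cmu_order(jobs)])   # ['B', 'C', 'A']
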
A series of queues consists of a number of ·/M/1 queues arranged in a series order. Each queue has an infinite waiting room and a single exponential server. The rates of the servers may differ. Initially the system is empty. Customers enter the first queue according to an arbitrary stochastic input process and then pass through the queues in order: a customer leaving the first queue immediately enters the second queue, and so on. We are concerned with the stochastic output process of customer departures from the final queue. We show that the queues are interchangeable, in the sense that the output process has the same distribution for all series arrangements of the queues. The 'output theorem' for the M/M/1 queue is a corollary of this result.
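A Monte Carlo sketch (an illustration, not a proof) that simulates the tandem system for both orderings of two exponential servers and compares one statistic of the output process; the estimates agree up to simulation noise, as the interchangeability result predicts.

import random

def departure_times(arrivals, rates, seed=None):
    """Simulate tandem ./M/1 queues with infinite buffers, FCFS, initially empty.
    arrivals: sorted arrival times at the first queue; rates: service rate of each
    exponential server in series order.  Departures follow the recursion
    D[j][i] = max(D[j-1][i], D[j][i-1]) + S[j][i]."""
    rng = random.Random(seed)
    prev_dep = [0.0] * len(rates)     # previous customer's departure time from each queue
    out = []
    for t in arrivals:
        d = t
        for i, r in enumerate(rates):
            d = max(d, prev_dep[i]) + rng.expovariate(r)
            prev_dep[i] = d
        out.append(d)
    return out

# Compare the mean departure time of the last customer for both server orderings.
arrivals = [0.2 * j for j in range(50)]
for rates in ([1.0, 2.0], [2.0, 1.0]):
    last = [departure_times(arrivals, rates, seed=k)[-1] for k in range(10_000)]
    print(rates, round(sum(last) / len(last), 2))
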
A number of identical machines operating in parallel are to be used to complete the processing of a collection of jobs so as to minimize either the jobs' makespan or flowtime. The total processing required to complete each job has the same probability distribution, but some jobs may have received differing amounts of processing prior to the start. When the distribution has a monotone hazard rate the expected value of the makespan (flowtime) is minimized by a strategy which always processes those jobs with the least (greatest) hazard rates. When the distribution has a density whose logarithm is concave or convex these strategies minimize the makespan and flowtime in distribution. These results are also true when the processing requirements are distributed as exponential random variables with different parameters.
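A minimal sketch of the resulting priority orders in the exponential special case, where a job's hazard rate is simply its completion rate: smallest rate first (LEPT) for makespan, largest rate first (SEPT) for flowtime. Whenever a machine becomes free it starts the next uncompleted job in the returned order.

def priority_order(rates, objective):
    """Priority order for exponential jobs on identical parallel machines.
    objective='makespan' -> LEPT: smallest hazard rate (longest expected time) first
    objective='flowtime' -> SEPT: largest hazard rate (shortest expected time) first."""
    return sorted(rates, key=rates.get, reverse=(objective == 'flowtime'))

rates = {'a': 0.5, 'b': 2.0, 'c': 1.0}    # exponential completion rates (hazard rates)
print(priority_order(rates, 'makespan'))  # ['a', 'c', 'b']  (LEPT)
print(priority_order(rates, 'flowtime'))  # ['b', 'c', 'a']  (SEPT)
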
Consider m queueing stations in tandem, with infinite buffers between stations, all initially empty, and an arbitrary arrival process at the first station. The service time of customer j at station i is geometrically distributed with parameter pi, but this is conditioned on the fact that the sum of the m service times for customer j is cj. Service times of distinct customers are independent. We show that for any arrival process to the first station the departure process from the last station is statistically unaltered by interchanging any of the pi's. This remains true for two stations in tandem even if there is only a buffer of finite size between them. The well-known interchangeability of ·/M/1 queues is a special case of this result. Other special cases provide interesting new results.
In recent years, overlay networks have proven a popular way of disseminating potentially large files from a single server S to a potentially large group of N end users via the Internet. A number of algorithms and protocols have been suggested, implemented and studied. In particular, much attention has been given to peer-to-peer (P2P) systems such as BitTorrent [5], Slurpie [20], SplitStream [4], Bullet [11] and Avalanche [6]. The key idea is that the file is divided into M parts of equal size and that a given user may download any one of these - or, for Avalanche, linear combinations of these - either from the server or from a peer who has previously downloaded it.
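A toy round-based sketch of the key idea (not any of the cited protocols): the file is split into parts, and in each round every incomplete user contacts the server or a random peer and downloads one part it lacks.

import random

def rounds_to_disseminate(n_users, m_parts, seed=0):
    """Toy round-based dissemination model.  Returns the number of rounds until
    every user holds all m_parts, when each incomplete user fetches one missing
    part per round from the server or from a randomly chosen peer."""
    rng = random.Random(seed)
    server = set(range(m_parts))
    users = [set() for _ in range(n_users)]
    rounds = 0
    while any(len(u) < m_parts for u in users):
        rounds += 1
        snapshot = [set(u) for u in users]            # parts held at the start of the round
        for u in users:
            if len(u) == m_parts:
                continue
            source = rng.choice([server] + snapshot)  # server or a random peer
            wanted = list(source - u)
            if wanted:
                u.add(rng.choice(wanted))
    return rounds

print(rounds_to_disseminate(n_users=100, m_parts=20))
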
A number of data items (1,2,…,n) are to be maintained in a structure which consists of several linear lists. Successive requests to access items are independent random variables, and the probability that a particular request is for item i is pi. The cost of accessing the jth item from the front of a list is j. For a single list, the move-to-front rule (MF) has been extensively studied and has been shown to provide good performance. In some actual circumstances, MF is the only physically realizable or convenient policy. We extend the study of move-to-front by examining the case where items are kept in several lists. Following its access, an item must be replaced at the front of one of the lists. In certain cases, assuming the pi's are known, the policy which minimizes the average retrieval cost takes a particularly simple form: no item is ever moved from the list in which it is placed initially.
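A small Monte Carlo sketch, with assumed access probabilities, of the multi-list setting under the simple policy mentioned at the end of the abstract: each accessed item is returned to the front of the list it already occupies and is never moved between lists.

import random

def average_cost_stay_in_list(lists, p, steps=200_000, seed=3):
    """Estimate the average retrieval cost when items are kept in several linear
    lists, each accessed item goes back to the front of its own list, and the
    cost of accessing the j-th item from the front of a list is j.
    lists: initial partition of items into lists; p: dict of access probabilities."""
    rng = random.Random(seed)
    lists = [list(l) for l in lists]
    items, weights = list(p), [p[i] for i in p]
    total = 0
    for _ in range(steps):
        item = rng.choices(items, weights=weights)[0]
        for l in lists:
            if item in l:
                j = l.index(item)
                total += j + 1                 # cost of accessing position j+1
                l.insert(0, l.pop(j))          # move-to-front within its own list
                break
    return total / steps

p = {1: 0.4, 2: 0.25, 3: 0.15, 4: 0.1, 5: 0.06, 6: 0.04}
print(average_cost_stay_in_list([[1, 2, 3], [4, 5, 6]], p))
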
At a buffered switch in an ATM (asynchronous transfer mode) network it is important to know what combinations of different types of traffic can be carried simultaneously without risking more than a very small probability of overflowing the buffer. We show that a simple and serviceable measure of effective bandwidths may be computed for stationary traffic sources. For large buffers the effective bandwidth of a source is a function only of its mean rate, index of dispersion, and the size of the buffer.
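A hedged sketch of the kind of approximation described: the effective bandwidth is taken to be the mean rate plus a correction involving the index of dispersion of counts and a space parameter set by the buffer size and an overflow-probability target (the usual small-s expansion of the log moment generating function). The exact form and constants should be taken from the paper, and the numbers below are purely illustrative.

import math

def approx_effective_bandwidth(mean_rate, idc, buffer_size, overflow_prob):
    """Large-buffer effective-bandwidth sketch: alpha(s) ~ m + s * m * IDC / 2,
    with the space parameter s set from the buffer size and the target
    overflow probability.  An assumption for illustration, not the paper's formula."""
    s = -math.log(overflow_prob) / buffer_size
    return mean_rate + s * mean_rate * idc / 2.0

# Example: mean rate 2 Mb/s, IDC 10, buffer 100 Mb, overflow target 1e-6
# (all numbers illustrative).
print(approx_effective_bandwidth(mean_rate=2.0, idc=10.0, buffer_size=100.0, overflow_prob=1e-6))
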
