CN113946455A - Multi-stage feedback queue flow scheduling method based on bottleneck perception - Google Patents
- Publication number
- CN113946455A (application CN202111202127.8A)
- Authority
- CN
- China
- Prior art keywords
- flow
- coflow
- queue
- bottleneck
- optimization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Abstract
The invention discloses a bottleneck-perception-based multi-level feedback queue Coflow scheduling method, which comprises the following steps: after the master node generates a Coflow, it initially assigns queue priorities according to the Coflow information and monitors the link state; as Coflow traffic continues to increase, the Coflow scheduling problem is modeled, and the Coflow flow rate and throughput are increased by solving the model through Lagrange dual optimization; to counter the network congestion caused by the increased throughput, a multi-level queue feedback mechanism is designed, and a bottleneck factor is constructed from the sent size, width, and flow-rate information of each Coflow; taking minimization of the Coflow completion time as the optimization objective, the priorities of the multi-level feedback queues are dynamically adjusted according to the bottleneck factor through Lyapunov optimization. The invention can fully use the link bandwidth and reduce the CCT; it keeps the queues stable overall, reduces congestion, increases throughput while reducing the average CCT, and improves link utilization.
Description
Technical Field
The invention relates to the technical field of cloud computing communication, and in particular to a bottleneck-perception-based multi-level feedback queue Coflow scheduling method.
Background
With the rapid development of cloud data centers, their internal traffic volume has grown explosively, posing great challenges to traditional traffic scheduling mechanisms. Processing massive data requires a large number of computing nodes working cooperatively, so data-parallel computing frameworks such as Apache Spark, Dryad, MapReduce, and CIEL are widely used in cloud computing data centers. In such a framework, a task produces results through a series of computation and communication phases, and each communication phase typically ends with a rule: the following computation phase can start only after all input data has been received. Since the communication phase may account for more than 50% of the task completion time, application performance is significantly affected; this shows that the data center network has become the bottleneck of data-parallel frameworks. Managing and optimizing network resources in a data center is therefore important for shortening task completion times. However, conventional flow-level optimization cannot meet applications' requirements for low latency and high throughput, and is thus not efficient enough for application-level optimization. The concept of the Coflow emerged to bridge this gap between applications and frameworks.
In existing research, a Coflow is defined as a group of semantically correlated communication data flows. It captures the communication structure of a parallel application, and using the Coflow as the basic unit of network resource allocation or scheduling makes the optimization objective easier to achieve. Reducing the Coflow completion time (CCT) is generally called the Coflow scheduling problem, and Coflow scheduling is widely applied in data center transport design. To effectively reduce the average CCT, a number of Coflow scheduling methods have emerged, such as Varys, Aalo, and Baraat, all of which reduce the average CCT to some extent.
However, existing Coflow scheduling methods have certain shortcomings: they abstract the network as one non-blocking giant switch, ignore network resource limitations, and mostly do not consider network bottlenecks. In a cloud data center, link failures, unbalanced routing, and similar effects may cause congestion on core links, transferring the traffic bottleneck into the interior of the network. Bottlenecks inside the network determine how much bandwidth the flows can actually obtain, directly affecting the CCT. Therefore, to achieve the desired CCT performance, bottlenecks inside the network must be considered. Because existing research suffers poor CCT performance due to in-network bottlenecks, the present method starts from the viewpoint of bottleneck perception, analyzes typical Coflow scheduling scenarios under the cloud data center architecture, and proposes a bottleneck-perception-based multi-level feedback queue Coflow scheduling mechanism.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a bottleneck-perception-based multi-level feedback queue Coflow scheduling method. First, queue priorities are initially assigned according to the Coflow information and the link state is monitored; then, through Lagrange dual optimization modeling, the Coflow flow rate and throughput are increased. To counter the network congestion caused by the increased throughput, a multi-level queue feedback mechanism is designed, and a bottleneck factor is constructed from the sent size, width, and flow-rate information of each Coflow. The priorities of the multi-level feedback queues are dynamically adjusted according to the bottleneck factor, reducing link contention, improving link utilization, and realizing congestion perception, thereby reducing the CCT and enhancing Coflow scheduling performance.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-stage feedback queue flow scheduling method based on bottleneck perception comprises the following steps:
s1, after the main node generates the Coflow flow, the main node initially distributes queue priority and monitors the link state according to the Coflow information;
s2, modeling a Coflow scheduling problem along with the continuous increase of the Coflow flow, and increasing the flow rate and throughput by modeling the Coflow scheduling problem through Lagrange dual optimization; aiming at the problem of network congestion caused by increasing throughput, a multi-stage queue feedback mechanism is designed, and a bottleneck factor is constructed according to the size, width and flow rate information of sent flow; and taking the minimized flow completion time as an optimization target, and dynamically adjusting the priority of the multistage feedback queue according to the size of the bottleneck factor through Lyapunov optimization.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, in step S1, the initial allocation of the queue priority according to the flow information means:
the bandwidth of each flow is evenly distributed, and the waiting queue adopts a FIFO strategy.
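This initial policy can be sketched as follows (a toy illustration; the function and variable names are not from the patent):

```python
from collections import deque

def fair_share(flows, link_bandwidth_bps):
    """Step S1's initial policy: split the link bandwidth evenly
    among all active flows."""
    if not flows:
        return {}
    share = link_bandwidth_bps / len(flows)
    return {f: share for f in flows}

# The waiting queue starts as plain FIFO: first Coflow in, first out.
waiting = deque(["coflow-1", "coflow-2", "coflow-3"])
first = waiting.popleft()

rates = fair_share(["f1", "f2", "f3", "f4"], 10e9)  # a 10 Gbps link
```

Each flow here receives 2.5 Gbps; the later steps of the method replace both the even split and the FIFO order.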
Further, in step S2, the process of modeling the Coflow scheduling problem includes the following steps:
Assume a cloud computing data center with N hosts and M switches in an arbitrary topology, forming L links. Suppose Coflow C_i has full information about its flows and starts transmission immediately upon arriving at the network at time T_i; for t ≥ T_i, the flows of C_i are assigned rates on paths whose link bandwidth is B_l. The Coflow scheduling problem is modeled as follows:
In the formula, S_k(t) is the amount of data Coflow k has transmitted by time t and d_f(t) is the remaining data of flow f at time t; 1/(S_k(t) + d_f(t)) serves as the weight of the rate of flow f, and the weighted sum is minimized. r_f^l is the desired bandwidth rate allocated to flow f on link l, and t_k^w is the completion time of the w-th flow of Coflow k, where K is the total number of Coflows and k indexes the k-th Coflow; since Coflow k contains W_k flows internally, w denotes its w-th flow.
Further, the process of increasing the Coflow flow rate and throughput by solving the Coflow scheduling problem through Lagrange dual optimization includes the following steps:
The Coflow scheduling problem model is converted into:
For the triple (r, λ, ν), r denotes the rate of a Coflow flow and D the feasible domain of r; r is solved to obtain the rate r* that minimizes the dual function. L(r, λ, ν) denotes the Lagrangian of the Coflow scheduling problem, in which λ multiplies the inequality constraints and ν the equality constraints; (λ, ν) are the Lagrange dual parameters, and g(λ, ν) is the dual problem of the original Coflow scheduling problem. (λ_i, ν_j) are the dual parameters of the i-th inequality constraint and the j-th equality constraint, respectively. L denotes the total number of links, j the j-th link, and B_j the bandwidth of the j-th link; p_kl ∈ {0, 1} indicates whether flow k occupies link l; d_f(t) is the remaining amount of data of flow f at time t.
Solve (r, λ, ν) by gradient descent, and let r* and (λ*, ν*) be a pair of optimal solutions of the primal and dual problems, respectively, with zero duality gap.
Solving the Coflow scheduling problem through Lagrange optimization converts the original problem into a convex problem, from which the maximum Coflow flow rate is obtained.
Further, in step S2, the process of dynamically adjusting the priorities of the multi-level feedback queues according to the bottleneck factor through Lyapunov optimization, with minimization of the Coflow completion time as the optimization objective, includes the following steps:
the transmit queue model is built as follows:
in the formula (I), the compound is shown in the specification,is the maximum flow rate, beta represents the ratio of the sent size to the width of the flow of Coflow,is a strictly convex function of the intensity of the light,is the bottleneck factor of the flow waiting queue within time t; the priority of the flow in the virtual queue is related to beta, the larger the data sent by the flow is, the larger the size of the data is, and the smaller the width of the data is, the higher the priority is;is the bandwidth rate allocated by stream f in link i during time t,represents the maximum flow benefit (throughput is maximum);
the virtual queue is:
in the formula, Ql(t) represents the length of the virtual queue over time t, which is related to the flow rate of the distributed flow of CoflowAnd link bandwidth BlDirect correlation, l (l) denotes the total link set;
solving the stability of the queue by adopting Lyapunov drift, and obtaining the expected flow rate as follows:
solving the optimal solution of each linkTo minimize the flow completion time, where Ul(t) represents the total length of the virtual queue, r, over time tmaxRepresenting the maximum flow rate achieved by Lagrange optimization, where V is a non-negative parameter representing a penalty factor, affects the performance tradeoff between the optimal objective function value and the convergence time required to meet the desired constraints.
The invention has the beneficial effects that:
the invention provides a multi-level feedback queue flow scheduling mechanism (BamHI) based on bottleneck perception; firstly, analyzing the network bottleneck problem in the cloud data center flow scheduling process, and aiming at the bottleneck problem in the network, optimizing the flow velocity by Lagrange dual, fully using link bandwidth and reducing CCT; then, for the bottleneck problem between networks, according to the bottleneck factor, network congestion is sensed, a multi-level feedback queue model is improved, Lyapunov optimization is utilized, the overall stability of a queue is ensured, congestion is reduced, and when the average CCT is reduced, the throughput is increased and the link utilization rate is improved by adopting a BamH scheduling mechanism.
Drawings
Fig. 1 is a diagram of a multi-stage feedback queue flow scheduling structure based on bottleneck sensing according to an embodiment of the present invention.
Fig. 2 is a flowchart of a bottleneck-aware-based multi-stage feedback queue flow scheduling method according to an embodiment of the present invention.
Fig. 3 is a comparison diagram of completion times of a multi-stage feedback queue flow scheduling method based on bottleneck sensing according to an embodiment of the present invention and a conventional method.
Fig. 4 is a graph comparing throughput of a multi-stage feedback queue flow scheduling method based on bottleneck perception and the prior art, provided by the embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
It should be noted that terms such as "upper", "lower", "left", "right", "front", and "back" used in the present invention are for clarity of description only and are not intended to limit the scope of the invention; changes to their relative relationships, without substantive alteration of the technical content, are also within the scope of the invention.
As shown in FIG. 1, the invention discloses a bottleneck-perception-based multi-level feedback queue Coflow scheduling method, whose scheduling model comprises a master node, work nodes, a daemon, and a monitor. The master node is the brain of the system, acting as a controller. The daemon, acting as the global scheduler, cooperatively processes the optimization information of the master node and the work nodes, collects Coflow information from the work nodes, and executes Coflow priority scheduling. The monitoring module collects Coflow information, monitors the link state, and judges congestion conditions. The flow chart of the invention is shown in FIG. 2; the scheduling process includes the following steps:
s1, information collection
Since a Coflow is a group of semantically correlated communication data flows, before a group of Coflow flows has been transmitted, the number of bytes each flow will send, the relationships between flows, and the flow widths cannot be obtained in advance. The monitoring module both collects Coflow-related information and monitors the state of the whole network. When there are many flows, link contention arises and causes network congestion; the information collection gathers network information and passes it to the scheduler for network optimization.
S2, preprocessing data
When a group of Coflow flows enters the cloud computing data center, the daemon initializes Coflow scheduling with a simple mechanism: it distributes bandwidth evenly among the flows and uses a FIFO policy for the waiting queue, so that network bandwidth is allocated fairly and every flow can be sent.
S3 performance optimization
As Coflow traffic continues to increase, newly arriving Coflows must wait for earlier ones to finish transmitting, causing queuing in the virtual queue; and since the initialized scheduling allocates bandwidth evenly, a large amount of capacity is left unused, which greatly degrades Coflow scheduling performance. To address this, the monitoring node watches the link condition in real time and, when congestion is detected, immediately notifies the scheduler on the master node to optimize further. Lagrange optimization is used to increase the rates of some flows, thereby shortening the CCT; however, as throughput increases, link contention arises and some flows enter the waiting queue, hurting CCT performance. Lyapunov optimization is then adopted: flows in the waiting queue are assigned different priorities according to their bottleneck factors and scheduled preferentially, reaching a steady state that increases throughput and reduces congestion, and finally reduces the CCT.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, step S3 includes:
s31, modeling a flow scheduling process: assuming a cloud computing data center, which is provided with N hosts and M exchangers and has an arbitrary topological structure, to form L links; and then, abstracting an undirected graph G (V, E), where | V | ═ N + M and | E | ═ L, and for the undirected graph G, a node V corresponds to a node of the cloud computing data center, and each edge E represents a full-duplex link. Without loss of generality, Coflow C is assumediHas all information about its stream and at time TiThe transmission is started immediately when the network is reached, and the time T is more than or equal to TiWhile, flow CiIs assigned to a ratePath B oflThe above. The basic notation used in this section is as follows.
On this basis, the scheduling problem is represented as an optimization problem in time t:
the constraint conditions that should be satisfied are as follows:
transforming the formula into:
the constraint equation reflects the link state, which means that the aggregate flow bandwidth cannot exceed the link capacity (i.e., the bottleneck in the network) at any time; to simplify the model, the bandwidth allocated for the Coflow per unit time is considered as the sum of the flow rates allocated for the Coflow. On the one hand, to minimize CCT, the problem can be seen as a linear programming problem. 1/Sk(t)+df(t) flow velocity which can be regarded as flow fThen the weighted sum is minimized. To minimize the weighted sum, i.e. the flow rate needs to be increased; on the other hand, increasing the flow rate can cause link contention, create an inter-link bottleneck, cause congestion, and can instead damage the overall CCT. Therefore, there is a need to find a balance between increasing throughput and reducing bottlenecks.
Furthermore, minimizing the CCT is an NP-hard problem. In real production, multiple Coflows in the network may compete for link resources at the same time, which makes Coflow scheduling even more complicated. Convex optimization is widely applied in machine learning and related fields, and by identifying the convex structure inside non-convex problems, many non-convex optimization problems have been solved. For the NP-hard problem of minimizing the CCT, converting the original problem into a convex problem can greatly increase the solution rate.
S32, Lagrange optimization scheduling
For the NP-hard problem of minimizing the CCT, converting the original problem into a convex problem can greatly increase the solution rate. The idea of Lagrange duality is to incorporate the constraints of the problem into the objective function, and the method solves the problem through Lagrange duality. The Lagrange dual function of the original function is then:
For the triple (r, λ, ν), r represents the rate of a Coflow flow, (λ, ν) are the Lagrange dual parameters, and g(λ, ν) is the dual problem of the original problem. Since the dual function is the pointwise infimum of a family of affine functions of (λ, ν), it is concave. Moreover, the dual function is a lower bound on the optimal value p* of the original problem; that is, for any λ ≥ 0 and any ν, the following holds:
g(λ,ν) ≤ p*
Solve (r, λ, ν) by gradient descent, and let r* and (λ*, ν*) be a pair of optimal solutions of the primal and dual problems, respectively, with zero duality gap. Because r* minimizes the Lagrangian L(r, λ*, ν*) over r, the gradient of the Lagrangian at r* is zero, i.e.:
Obviously, there must exist (λ*, ν*) such that the following holds:
the Lagrange optimization is used for converting the original problem into a convex optimization problem to be solved, so that the solving speed is increased; by solving an optimal solution for each linkEnsuring that the CCT is taken to a minimum. Since the CCT is reduced by increasing the flow rate of the Coflow flow, link contention is objectively caused, and network congestion is formed.
S33, Lyapunov optimization scheduling
Since increasing the flow rates occupies as much link bandwidth as possible, flows back up in the waiting queue, resulting in network congestion. BamH designs a brand-new bottleneck factor, ensures queue stability, and further optimizes the multi-level feedback queue Coflow scheduling. Here, global stability means that the network always pursues an ideal state in which all arriving data leaves the buffers within finite time, thereby ensuring throughput, fairness, and load balancing. In reality the network is rarely in equilibrium, but global stability enables the congestion control algorithm to guarantee the desired equilibrium characteristics.
The transmit queue model is built as follows:
Here, r_max is the maximum flow rate, the model involves a strictly convex function, and the bottleneck factor appears in the queue expression. On the one hand, the priority of a Coflow in the virtual queue is related to β: the more data the Coflow has sent, the larger its own size, and the smaller its width, the higher its priority. On the other hand, the rate of a flow directly reflects the congestion condition of its link, so the flow rate's impact on the priority of the Coflow in the virtual queue is critical.
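The stated monotonicity (more data sent, larger size, smaller width means higher priority) can be captured by a toy score; the exact bottleneck-factor formula is not reproduced in this text, so this combination is an assumption:

```python
def bottleneck_score(sent_bytes, total_size_bytes, width):
    """Toy priority score: increases with data already sent and with the
    Coflow's own size, decreases with its width (number of parallel flows).
    The real bottleneck factor in the text also folds in the flow rate."""
    return sent_bytes * total_size_bytes / max(width, 1)

def order_waiting_queue(coflows):
    """Sort waiting Coflows so the highest-score one is served first.
    `coflows` maps a name to a (sent, size, width) tuple."""
    return sorted(coflows, key=lambda n: bottleneck_score(*coflows[n]), reverse=True)

queue = {
    "narrow-big": (800, 1000, 2),   # much sent, large, narrow -> high priority
    "wide-big":   (800, 1000, 16),  # same but wide -> lower
    "narrow-new": (10, 1000, 2),    # little sent yet -> lower still
}
ranked = order_waiting_queue(queue)
```

The ranking can then be mapped onto the levels of the multi-level feedback queue, with the top entries placed in the highest-priority level.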
Virtual queue:
Q_l(t) represents the length of the virtual queue, which is directly related to the rate allocated to the Coflow's flows and to the link bandwidth B_l.
Due to the randomness of arriving computation tasks, queue stability is a key constraint for guaranteeing strict service delays. For the discrete-time queueing process Q_l(t), reaching queue steady state is defined as follows:
the Lyapunov function is defined as follows:
the conditional Lyapunov drift per unit time can be expressed as
Δ(Q(t))=E{L(Q(t+1))-L(Q(t))|Q(t)}
The queue stability was solved using the Lyapunov drift Δ (q (t)).
Next, the DPP expression of the virtual queue becomes:
Here V is a penalty factor, and network stability can be derived by minimizing this expression. However, the random and non-linear Lyapunov drift is difficult to minimize directly; instead, an upper bound on the original expression is derived, and minimizing that upper bound is taken as the objective. Under the heuristic search algorithm based on Lyapunov optimization, the following inequality holds for the DPP:
defining α as an upper bound for the worst case value. The value α is limited because of the aggregationχIs assumed to be continuous, with for each time slotGiven a time gap t, by searchingSo thatAnd minimum. Minimization of useTo update the queue value Q of the next time slot1(t+1),...,QL(t + 1). Obtaining a special solution:
since the flow scheduling is a method that approximates actual network dynamic queuing, this method is a good approximation of network performance, given that all arrivals are placed immediately on all links of a path.
The invention provides a bottleneck-perception-based multi-level feedback queue Coflow scheduling method (BamH). Using a real Facebook data set, the Coflow scheduling mechanism is verified on a small-scale flow-level simulator and an open-source network simulator, and its effectiveness is verified by comparing scheduling mechanisms in scenarios with and without prior knowledge. A comparison of the Coflow completion times of the existing methods and BamH is shown in FIG. 3. As can be seen from FIG. 3, the average CCT of BamH improves on that of Baraat, Varys, and Aalo by 9.2%, 13.5%, and 39.1%, respectively. This is because BamH considers bottlenecks inside the network and thus assigns flow priorities and bandwidth more accurately than Varys, and uses a more efficient priority scheduling method than Baraat (FIFO). Another reason is that the link utilization of Varys is not high, while BamH improves link utilization. FIG. 4 compares the throughput of the four mechanisms under different workloads, which also partly explains the better CCT performance of BamH in FIG. 3. Throughput is the total data transmitted through the entire network divided by the maximum completion time over all Coflows. At lower loads the four algorithms have similar throughput, while at higher loads the throughput of BamH is significantly higher than that of Varys (by 21.3% at 100% load) and slightly lower than that of Baraat (by 4.2% at 100% load); BamH has a convergence process relative to Baraat, so the throughput loss is small.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111202127.8A CN113946455A (en) | 2021-10-15 | 2021-10-15 | Multi-stage feedback queue flow scheduling method based on bottleneck perception |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111202127.8A CN113946455A (en) | 2021-10-15 | 2021-10-15 | Multi-stage feedback queue flow scheduling method based on bottleneck perception |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113946455A true CN113946455A (en) | 2022-01-18 |
Family
ID=79330671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111202127.8A Pending CN113946455A (en) | 2021-10-15 | 2021-10-15 | Multi-stage feedback queue flow scheduling method based on bottleneck perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113946455A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114666283A (en) * | 2022-03-07 | 2022-06-24 | 国家电网有限公司信息通信分公司 | An application-aware multi-tenant Coflow scheduling method and system |
CN114666283B (en) * | 2022-03-07 | 2023-11-24 | 国家电网有限公司信息通信分公司 | An application-aware multi-tenant Coflow scheduling method and system |
CN117221126A (en) * | 2023-11-09 | 2023-12-12 | 之江实验室 | Network collaboration flow-oriented route scheduling method and system |
CN117221126B (en) * | 2023-11-09 | 2024-02-13 | 之江实验室 | A routing scheduling method and system for network collaborative traffic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||