CN117714381B - Fair congestion control method and device with flow perception under SDN data center network - Google Patents
Fair congestion control method and device with flow perception under SDN data center network
- Publication number
- CN117714381B (application CN202311727381.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- flow
- congestion control
- link
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04L47/2483—Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
- H04L47/52—Queue scheduling by attributing bandwidth to queues
- H04L47/62—Queue scheduling characterised by scheduling criteria
- Y02D30/50—Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The invention discloses a flow-aware fair congestion control method and device for an SDN data center network, comprising: S1, designing a fine-granularity data scheduling algorithm; S2, carrying out congestion control on data flows based on the data packet scheduling realized by the data scheduling algorithm; S3, adjusting the sending rates of different data flows according to the current network environment, i.e., calculating sub-flow rate weights and allocating bandwidth within the congestion control algorithm of step S2; and S4, finally executing the optimal congestion control algorithm so that network performance reaches its optimum. When designing the scheduling algorithm, the method fully considers the attributes of different services and distinguishes large flows from small flows, avoiding out-of-order buffering of data packets at the receiving end. Built on the BBR congestion control algorithm, the proposed algorithm is a flow-aware fair congestion control algorithm that improves network performance while guaranteeing fairness toward traditional single flows.
Description
Technical Field
The invention relates to the technical field of communication, in particular to a fair congestion control method, device and storage medium with flow perception under an SDN data center network.
Background
Network congestion is unavoidable during communication in a data center, mainly because of the limited buffer capacity of network devices, the limited bandwidth of data links, and the limited processing power of network nodes. Network congestion increases the packet loss rate and end-to-end delay, reduces resource utilization, and can even bring great losses to large-scale Internet enterprises.
Software Defined Networking (SDN) is a new network paradigm for managing the transmission of data flows in a computer network: it performs unified network resource scheduling and flexible bandwidth allocation through a logically centralized control plane, thereby improving network resource utilization. By exploiting SDN's technical advantages of centralized management, centralized control and flexible scheduling, an efficient and safe multipath transmission mechanism can be deployed in the data center network, further improving data transmission performance and comprehensively improving the network's quality of service. Nevertheless, the scatter/aggregate workflow communication pattern at data center switches means that congestion remains inevitable. In addition, there are often multiple transmission paths between a source and a destination in a data center, which easily leads to uneven traffic load distribution if load balancing is not performed properly.
Disclosure of Invention
The fair congestion control method, device and storage medium with flow perception under an SDN data center network provided by the invention can solve at least one of the technical problems described in the background.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A fair congestion control method with flow perception under an SDN data center network comprises the following steps:
S1, designing a fine-granularity data scheduling algorithm;
S2, carrying out congestion control on the data flow based on the data packet scheduling realized by the data scheduling algorithm;
S3, adjusting the sending rates of different data flows based on the current network environment, i.e., calculating the sub-flow rate weights and realizing bandwidth allocation within the congestion control algorithm of step S2;
and S4, finally executing the optimal congestion control algorithm so that network performance reaches its optimum.
In yet another aspect, the invention also discloses a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described above.
In yet another aspect, the invention also discloses a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method as above.
According to the technical scheme, the flow-aware fair congestion control method and device under an SDN data center network fully consider the attributes of different services when designing the scheduling algorithm and distinguish large flows from small flows, avoiding out-of-order buffering of data packets at the receiving end. In addition, the invention designs a flow-aware fair congestion control algorithm based on the BBR congestion control algorithm, which improves network performance while guaranteeing fairness toward traditional single flows.
Drawings
Fig. 1 is a schematic diagram of a BBR congestion control model based on flow awareness according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a fat tree topology;
Fig. 3 shows the experimental results of the group 1 experiment using data throughput as the performance index;
Fig. 4 shows the experimental results using flow completion time as the performance index;
Fig. 5 compares the packet loss rate of DCTCP and the network optimization scheme of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
The embodiment of the invention proposes a new scheduling algorithm on top of coupled BBR congestion control, because the existing predictive scheduling algorithm is based on the window growth rule and is no longer applicable once the underlying congestion control algorithm is changed to the BBR congestion control algorithm. A new predictive scheduling algorithm is therefore needed.
Congestion control algorithms commonly used in SDN are typically loss-based or delay-based; they avoid congestion by triggering window reduction, which in a multipath transmission mechanism can cause load migration and reduce overall transmission performance. Google proposed the hybrid congestion control algorithm BBR, which has received wide attention since its release and has been applied to various network architectures to improve transmission performance. BBR adjusts the sending rate by measuring the network bottleneck bandwidth and the minimum delay in real time: it regards the maximum transfer rate within the last 10 round-trip times (RTTs) as the bottleneck bandwidth (BW) and the minimum delay measured over the past 10 seconds as RTprop, and then controls its sending behavior through a pacing gain and a window gain. The pacing rate (sending rate) and congestion window (CWND) are respectively:
pacing_rate (sending_rate) = pacing_gain × BW (6)
CWND = cwnd_gain × BW × RTprop (7)
where pacing_gain and cwnd_gain are the pacing gain and the window gain, respectively.
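For illustration only, the following Python sketch shows how equations (6) and (7) can be evaluated from windowed measurements, using the window lengths stated above (maximum delivery rate over the last 10 RTT samples, minimum delay over the past 10 seconds); all class and method names are assumptions of this sketch, not part of the patent.

```python
from collections import deque
import time

class BBRState:
    """Minimal sketch of the BBR rate computation in eqs. (6)-(7) above.
    Window lengths follow the text: the delivery-rate maximum is taken over
    the last 10 RTT samples, the delay minimum over the past 10 seconds."""

    def __init__(self, pacing_gain: float = 1.25, cwnd_gain: float = 2.0):
        self.pacing_gain = pacing_gain      # gain applied to the pacing rate
        self.cwnd_gain = cwnd_gain          # gain applied to the window
        self.bw_samples = deque(maxlen=10)  # one delivery-rate sample per RTT
        self.rtt_samples = deque()          # (timestamp, rtt_seconds) pairs

    def on_ack(self, delivery_rate_bps: float, rtt_s: float) -> None:
        """Record one per-RTT measurement and age out old delay samples."""
        now = time.monotonic()
        self.bw_samples.append(delivery_rate_bps)
        self.rtt_samples.append((now, rtt_s))
        while self.rtt_samples and now - self.rtt_samples[0][0] > 10.0:
            self.rtt_samples.popleft()      # keep only the last 10 seconds

    def bw(self) -> float:
        """Bottleneck bandwidth BW: max delivery rate over the last 10 RTTs."""
        return max(self.bw_samples) if self.bw_samples else 0.0

    def rtprop(self) -> float:
        """RTprop: minimum delay measured over the past 10 seconds."""
        return min(r for _, r in self.rtt_samples) if self.rtt_samples else float("inf")

    def pacing_rate(self) -> float:
        """Eq. (6): pacing_rate = pacing_gain x BW (bits per second)."""
        return self.pacing_gain * self.bw()

    def cwnd_bytes(self) -> float:
        """Eq. (7): CWND = cwnd_gain x BW x RTprop, converted to bytes."""
        return self.cwnd_gain * self.bw() / 8.0 * self.rtprop()
```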
Owing to the excellent performance of BBR, the embodiment extends it to the multipath congestion control scenario in SDN and proposes a flow-aware BBR congestion control scheme, as shown in Fig. 1, comprising the following steps:
S1, designing a fine-granularity data scheduling algorithm;
S2, carrying out congestion control on the data flow based on data packet scheduling realized by a data scheduling algorithm;
S3, adjusting the sending rates of different data flows based on the current network environment, i.e., the calculation of sub-flow rate weights and the bandwidth allocation within the congestion control algorithm of step S2;
and S4, finally executing the optimal congestion control algorithm so that network performance reaches its optimum.
Each step is described in detail below.
Before the coupled BBR algorithm is executed, the embodiment of the invention designs the fine-grained data scheduling algorithm of step S1, with the following specific steps:
S1.1, BW and RTT of each sub-flow in the initial link are respectively obtained.
S1.2, for each data packet in the queue, selecting the path with the smaller link load as the route according to the load condition of each link, and selecting a suitable sub-flow for transmission in FIFO order according to the packet's arrival time.
S1.3, in order to avoid excessively long packet queuing delays, when the sending end detects a packet loss event, it updates the packet loss rate and re-predicts the arrival time at the receiving end.
S1.4, carrying out data packet scheduling again:
S1.4.1, periodically detecting the variation in transmitted bytes, according to the characteristics of large and small data flows in the data center network, so as to distinguish large flows from small flows, and carrying out data packet scheduling.
S1.4.2, selecting the large flow most suitable for transfer. Specifically, when a connection-level data packet is to be scheduled, the arrival time of the packet on each sub-flow is estimated, and the sub-flow with the earliest estimated arrival time is selected each time (a sketch of steps S1.1 to S1.4.2 follows).
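The following Python sketch is a minimal, hypothetical rendering of steps S1.1 to S1.4.2: it classifies flows as large or small by bytes sent and assigns each packet to the sub-flow with the earliest estimated arrival time. The threshold value, the arrival-time model and all identifiers are assumptions of this sketch, not taken from the patent.

```python
from dataclasses import dataclass

LARGE_FLOW_BYTES = 100 * 1024        # assumed threshold for "large" flows (S1.4.1)

@dataclass
class SubFlow:
    name: str
    bw_bps: float                    # sub-flow bandwidth obtained in S1.1
    rtt_s: float                     # sub-flow RTT obtained in S1.1
    queued_bytes: int = 0            # bytes already scheduled on this sub-flow
    loss_rate: float = 0.0           # updated on loss events (S1.3)

    def eta(self, pkt_bytes: int) -> float:
        """Assumed estimate of a packet's arrival time on this sub-flow:
        serialization of the queue plus half an RTT, inflated by losses."""
        serialize_s = (self.queued_bytes + pkt_bytes) * 8 / self.bw_bps
        return (serialize_s + self.rtt_s / 2) * (1 + self.loss_rate)

@dataclass
class FlowStats:
    bytes_sent: int = 0

    def is_large(self) -> bool:      # S1.4.1: distinguish large from small flows
        return self.bytes_sent >= LARGE_FLOW_BYTES

def schedule_packet(flow: FlowStats, pkt_bytes: int,
                    subflows: list[SubFlow]) -> SubFlow:
    """S1.2 / S1.4.2: route small flows to the least-loaded path; for large
    flows, pick the sub-flow with the earliest estimated arrival time."""
    if flow.is_large():
        best = min(subflows, key=lambda s: s.eta(pkt_bytes))
    else:
        best = min(subflows, key=lambda s: s.queued_bytes)
    best.queued_bytes += pkt_bytes
    flow.bytes_sent += pkt_bytes
    return best
```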
S2, coupling BBR congestion control, with the following specific steps:
S2.1, the design must ensure fairness on the bottleneck link while improving performance, so BBR sub-flows cannot simply be superimposed. Specifically, further analysis of the data-flow sending-rate model assumes a bottleneck link with bandwidth C through which n flows with different RTTs are passing. Let flow_i, i ∈ [1, n], denote flow i, and let d_i(t) denote the transfer rate of flow_i; in the ideal case d_1 + d_2 + … + d_n = C. Let I_i(t) denote the amount of in-flight data of flow_i, i.e. the upper bound of the bottleneck link, calculated by equation (3):
Combining formulas (1) and (3), flow_i can obtain a 1.25-fold gain in its first upward probing stage, so the maximum transfer rate at time t is:
where d_i(t) represents the transfer rate of flow_i and RTprop_i is the minimum round-trip delay of flow_i. Since the probing period of the BBR flow is 8 RTprop, the new round of bandwidth estimation of flow_i is updated as follows:
As can be seen from equation (5), the actual gain of the data flows on the same bottleneck link is RTT-dependent, and since the RTTs of the data flows are not exactly the same, their initial bandwidths can hardly achieve fair sharing.
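To illustrate the unfairness argued above, the deliberately simplified toy model below lets two BBR-like flows with different RTprop values probe a shared bottleneck with a 1.25× gain once per 8×RTprop cycle; because the short-RTT flow probes more often, the two estimates never converge to an even split. This is a sketch under strong assumptions and does not reproduce full BBR dynamics (inflight caps, ProbeRTT, drain phases); every constant in it is an assumption.

```python
# Toy model of the unfairness described above: two BBR-like flows share a
# bottleneck of capacity C. Each flow probes at 1.25x once per 8*RTprop
# cycle; the delivered rate is capped by whatever the other flow leaves
# free, so the short-RTT flow, probing more often, captures more bandwidth.
C = 1000.0                                             # bottleneck capacity, Mbps (assumed)
flows = [
    {"name": "flow_1", "rtprop": 0.010, "bw": 100.0},  # 10 ms minimum RTT
    {"name": "flow_2", "rtprop": 0.040, "bw": 100.0},  # 40 ms minimum RTT
]
next_probe = [8 * f["rtprop"] for f in flows]          # next probe time per flow

t, dt = 0.0, 0.001
while t < 5.0:                                         # simulate 5 seconds
    for i, f in enumerate(flows):
        if t >= next_probe[i]:
            leftover = C - sum(g["bw"] for g in flows if g is not f)
            f["bw"] = min(1.25 * f["bw"], max(leftover, f["bw"]))
            next_probe[i] += 8 * f["rtprop"]
    t += dt

for f in flows:
    print(f"{f['name']}: RTprop={f['rtprop']*1e3:.0f} ms -> bw ≈ {f['bw']:.0f} Mbps")
```

Running this prints roughly flow_1 ≈ 844 Mbps and flow_2 ≈ 156 Mbps, far from the even 500/500 split, which is the qualitative point being made about equation (5).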
S2.2, fairness optimization between BBR data flows comprises the following specific steps:
S2.2.1, the link utilization ω at the bottleneck can be quantified from the RTT; it is expressed as the percentage of the RTT of flow_i relative to the maximum link delay Tmax.
S2.2.2, determining congested links. When the bandwidth utilization of a link reaches 90%, the link is determined to be congested, and the data flows on it need to be rerouted to relieve its congestion (see the sketches following step S2.2.3.3 below).
S2.2.3, partitioning the bottleneck set. If the RTTs of two or more sub-flows increase linearly at the same time, those sub-flows are preliminarily judged to share the same bottleneck. The sub-flow rates within the same bottleneck are then coupled, the pacing gains of the different sub-flows are adjusted according to the link state, and the transmission rates among the data flows are balanced. Step S2.2.3 specifically includes:
S2.2.3.1, according to the characteristics of large and small data flows in a data center network, small flows are delay-sensitive, so rate limiting is typically applied only to large flows.
S2.2.3.2, calculating the sub-flow rate weights. The rate of each sub-flow is allocated in proportion to the currently measured bandwidth. The parameter α is calculated as follows:
S2.2.3.3, equalizing the bandwidth. When multiple sub-flows compete with each other on a bottleneck link, the bandwidth obtained by each sub-flow is proportional to the buffer space it occupies in the link queue. The parameter α is used to regulate each sub-flow's competition for bandwidth, and, combining formula (1), the sending rate is updated as:
pacing_rate (sending_rate) = α × pacing_gain × BW (12)
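A minimal sketch of the detection logic in steps S2.2.1 to S2.2.3, assuming ω is computed as the ratio of RTT to Tmax described in S2.2.1 and using the 90% threshold from S2.2.2; the "linear RTT increase" test is reduced here to a simple monotonicity check, and every name is hypothetical.

```python
CONGESTION_THRESHOLD = 0.90          # S2.2.2: congested at 90% utilization

def link_utilization(rtt_s: float, t_max_s: float) -> float:
    """S2.2.1 (assumed form): utilization w as the percentage of the flow's
    RTT relative to the link's maximum delay Tmax."""
    return min(rtt_s / t_max_s, 1.0)

def is_congested(bw_used_bps: float, bw_capacity_bps: float) -> bool:
    """S2.2.2: a link whose bandwidth utilization reaches 90% is judged
    congested; its data flows should then be rerouted."""
    return bw_used_bps / bw_capacity_bps >= CONGESTION_THRESHOLD

def share_bottleneck(rtts_a: list[float], rtts_b: list[float],
                     eps: float = 1e-4) -> bool:
    """S2.2.3 (simplified): two sub-flows are preliminarily judged to share
    a bottleneck if their RTTs rise at the same time; the linear-increase
    test is reduced here to a sample-over-sample monotonicity check."""
    def rising(series: list[float]) -> bool:
        return all(b - a > eps for a, b in zip(series, series[1:]))
    return rising(rtts_a) and rising(rtts_b)
```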
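The α formula itself (labelled (6) in the claims) is not reproduced in this text; the sketch below therefore assumes the proportional form the prose describes, α_i = BW_i / Σ_j BW_j over the sub-flows coupled on one bottleneck, and applies equation (12). Treat the formula and all identifiers as assumptions.

```python
def subflow_alpha(bw_estimates: dict[str, float]) -> dict[str, float]:
    """S2.2.3.2 (assumed form): weight each coupled sub-flow by its share of
    the currently measured aggregate bandwidth. The patent's exact alpha
    formula is not reproduced in this text; this proportional form only
    follows the prose description."""
    total = sum(bw_estimates.values())
    if total == 0:
        return {name: 1.0 for name in bw_estimates}
    return {name: bw / total for name, bw in bw_estimates.items()}

def coupled_pacing_rate(bw_estimates: dict[str, float],
                        pacing_gain: float = 1.25) -> dict[str, float]:
    """Eq. (12): pacing_rate_i = alpha_i * pacing_gain * BW_i for each
    sub-flow coupled on the same bottleneck."""
    alpha = subflow_alpha(bw_estimates)
    return {name: alpha[name] * pacing_gain * bw
            for name, bw in bw_estimates.items()}

# Example: two coupled sub-flows on one bottleneck (illustrative numbers)
rates = coupled_pacing_rate({"sub_1": 600e6, "sub_2": 200e6})
print(rates)   # sub_1 weighted 0.75, sub_2 weighted 0.25
```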
And S3, adjusting the sending rates of different data flows based on the current network environment.
And S4, executing the optimal congestion control algorithm so that network performance reaches its optimum.
The following illustrates the advantages of embodiments of the present invention:
The proposed optimization scheme is evaluated with the NS3 simulation tool, and the simulation code is based on the source code provided by the Google group. The optimization scheme is then implemented in the Linux kernel and tested. The fat-tree topology shown in Fig. 2 is adopted. For comparison, the optimization scheme is evaluated against DCTCP, a congestion control algorithm commonly used in SDN networks.
To avoid network congestion, a congestion control algorithm should be able to overcome the degradation of overall network throughput. Fig. 3 shows the experimental results of the group 1 experiment, which uses data throughput as the performance index. In this experiment, 23 hosts (host1 to host23) each transmit more than 2 GB of data to the same host (host24) over links with 1 Gbps bandwidth. By monitoring and sampling the Ethernet data throughput from host24 to its edge switch, the throughput performance of DCTCP and the optimization scheme is compared.
Fig. 3 shows the fluctuation of transmitted-data throughput during the first 200 seconds of the transmission, where the throughput of the optimization scheme is significantly higher than that of DCTCP. The throughput of DCTCP fluctuates frequently and drastically, with an average of around 757 Mbps; this means that when network congestion occurs, DCTCP does not fully occupy the bandwidth resources and cannot effectively eliminate the congestion. The optimization scheme, after a slight initial drop, keeps its throughput stable at about 806 Mbps, an improvement of 6.47% over DCTCP; in other words, its bandwidth throughput remains stably high even when network congestion occurs.
Flow completion time (FCT) is an important indicator for applications, directly affecting application performance. Fig. 4 shows the experimental results using flow completion time as the performance index. In this experiment, the client sends flows of different sizes to the server in parallel, including small flows (< 100 KB), medium flows (100 KB to 10 MB) and large flows (> 10 MB), and the average flow completion time of DCTCP and the optimization scheme is measured under different concurrent flow sizes.
As can be seen from Fig. 4, for small flow scales (< 1 MB), DCTCP and the optimization scheme do not differ much in completion time under congestion, with the optimization scheme improving by 19.3%. For medium flows (100 KB to 10 MB), and especially for flow sizes greater than 1 MB, the FCT with the optimization scheme is about 22.5% lower than that of DCTCP. For large flows (> 10 MB), the FCT of the optimization scheme grows by a larger margin, but still remains lower than that of DCTCP. Overall, the optimization scheme achieves lower completion times than DCTCP and provides better bandwidth for the data flows.
Fig. 5 shows the experimental results using the packet loss rate as the performance index. The experiment induces network congestion by introducing bursty traffic on top of steady background flows: during seconds 10 to 60 of the experiment, iperf generates background streams in the network, with each host transmitting a data stream to a designated target host.
As can be seen from Fig. 5, which compares the network packet loss rates, when there are fewer than 2 burst flows the packet loss rates of DCTCP and the optimization scheme are 0.78% and 0.85%, respectively. As the number of burst flows grows, the advantage of the optimization scheme in resisting packet loss becomes more obvious, and its packet loss rate is 16.3% lower than that of DCTCP. The experiment shows that the optimization scheme effectively reduces the packet loss rate, remains stable when coping with bursty traffic, and delivers better network performance.
In summary, when designing the scheduling algorithm the invention fully considers the attributes of different services, distinguishes large flows from small flows, and avoids out-of-order buffering of data packets at the receiving end.
In yet another aspect, the invention also discloses a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described above.
In yet another aspect, the invention also discloses a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method as above.
In yet another embodiment of the present application, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the flow-aware fair congestion control method under an SDN data center network of any of the above embodiments.
It may be understood that the system provided by the embodiment of the present invention corresponds to the method provided by the embodiment of the present invention; for explanations, examples and beneficial effects of the related content, refer to the corresponding parts of the method description.
The embodiment of the application also provides an electronic device, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus,
A memory for storing a computer program;
And the processor is used for realizing the fair congestion control method with flow perception under the SDN data center network when executing the program stored in the memory.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, and the like.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In the above embodiments, the implementation may be wholly or partly in software, hardware, firmware or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions which, when loaded and executed on a computer, produce the flows or functions according to the embodiments of the present application in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising" and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing embodiments are merely illustrative of the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents, and that such modifications or substitutions do not depart in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (5)
1. A method for fair congestion control with flow perception in an SDN data center network, characterized by comprising the following steps:
S1, designing a fine-granularity data scheduling algorithm;
S2, carrying out congestion control on the data flow based on data packet scheduling realized by a data scheduling algorithm;
S3, adjusting the sending rates of different data flows based on the current network environment, i.e., calculating the sub-flow rate weights and realizing bandwidth allocation within the congestion control algorithm of step S2;
S4, finally executing the optimal congestion control algorithm so that network performance reaches its optimum;
S2, carrying out congestion control on the data flow based on the data packet scheduling realized by the data scheduling algorithm, wherein the congestion control comprises the following steps:
S2.1, further analysis through the data-flow sending-rate model: assuming a bottleneck link with bandwidth C through which n flows with different RTTs are passing, flow_i, i ∈ [1, n], denotes flow i, and d_i(t) denotes the transfer rate of flow_i; in the ideal case d_1 + d_2 + … + d_n = C; I_i(t) denotes the amount of in-flight data of flow_i, i.e. the upper bound of the bottleneck link, calculated by equation (3):
(3)
The flow flow_i can obtain a 1.25-fold gain in the first upward probing stage, so the maximum transfer rate at time t is:
(4)
where d_i(t) represents the transfer rate of flow_i and RTprop_i is the RTprop of flow_i; since the probing period of the BBR flow is 8 RTprop, the bandwidth estimation of flow_i in the new round is updated as follows:
(5)
as can be seen from equation (5), the actual gain of the data flows on the same bottleneck link is RTT-dependent; BBR regards the maximum transfer rate within the last 10 RTTs as BtlBw and the minimum delay measured in the last 10 seconds as RTprop;
S2.2, fairness optimization between BBR data flows, including in particular,
S2.2.1, quantifying the link utilization ω at the bottleneck from the RTT, expressed as the percentage of the RTT of flow_i relative to the maximum link delay Tmax;
S2.2.2, judging congested links: when the bandwidth utilization of a link reaches 90%, the link is determined to be congested, and the data flows on the link need to be rerouted to relieve the congestion of the link;
S2.2.3, dividing the bottleneck set: if the RTTs of two or more sub-flows increase linearly at the same time, the sub-flows are preliminarily judged to share the same bottleneck;
Step S2.2.3 specifically includes:
S2.2.3.1, according to the characteristics of large and small data flows in the data center network, small flows are delay-sensitive, so only large flows are rate-limited;
S2.2.3.2, calculating the sub-flow rate weight: the rate of each sub-flow is allocated in proportion to the currently measured bandwidth, and the parameter α is calculated as follows:
(6)
S2.2.3.3, equalizing the bandwidths: when multiple sub-flows compete with each other on the bottleneck link, the bandwidth obtained by each sub-flow is proportional to the buffer size it occupies in the link queue, and the parameter α is used to regulate each sub-flow's competition for the bandwidth; the sending rate is updated as follows:
(7).
2. The method for fair congestion control with flow awareness under an SDN data center network of claim 1, wherein the step of designing a fine-grained data scheduling algorithm includes,
S1.1, respectively acquiring the bandwidth BW and the RTT of each sub-flow on the initial link;
S1.2, for each data packet in the queue, selecting the path with the smaller link load as the route according to the load condition of each link, and selecting a suitable sub-flow for transmission in FIFO order according to the arrival time;
S1.3, when the sending end detects a packet loss event, updating the packet loss rate and re-predicting the time to reach the receiving end;
s1.4, carrying out data packet scheduling again.
3. The method for fair congestion control with flow awareness under an SDN datacenter network of claim 2, wherein the step of S1.4 re-scheduling packets includes,
S1.4.1, periodically detecting the variation in transmitted bytes, according to the characteristics of large and small data flows in the data center network, so as to distinguish large flows from small flows, and carrying out data packet scheduling;
S1.4.2, selecting the large flow most suitable for transfer; specifically, when a connection-level data packet is to be scheduled, estimating the arrival time of the data packet on each sub-flow and selecting the sub-flow with the earliest arrival time each time.
4. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 3.
5. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311727381.9A CN117714381B (en) | 2023-12-14 | 2023-12-14 | Fair congestion control method and device with flow perception under SDN data center network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311727381.9A CN117714381B (en) | 2023-12-14 | 2023-12-14 | Fair congestion control method and device with flow perception under SDN data center network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117714381A CN117714381A (en) | 2024-03-15 |
CN117714381B (en) | 2024-12-03
Family
ID=90147504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311727381.9A Active CN117714381B (en) | 2023-12-14 | 2023-12-14 | Fair congestion control method and device with flow perception under SDN data center network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117714381B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118041798B (en) * | 2024-03-29 | 2024-10-18 | Tsinghua University | Performance evaluation method and device for data center network congestion control algorithm |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11929939B2 (en) * | 2020-07-28 | 2024-03-12 | The Board Of Trustees Of The University Of Illinois | Remote bandwidth allocation |
CN113518040B (en) * | 2021-04-30 | 2022-12-09 | 东北大学 | Multipath coupling congestion control method for delay sensitive service |
Non-Patent Citations (3)
Title |
---|
RTT fairness optimization of the BBR congestion control algorithm (BBR拥塞控制算法的RTT公平性优化); Pan Wansu et al.; Journal of Harbin Institute of Technology; 2022-11-30; Section 1 *
Fairness of the TCP-BBR congestion control algorithm (TCP-BBR拥塞控制算法的公平性); Pan Wansu; Ph.D. dissertation, University of Science and Technology of China; 2023-03-15; Sections 2.2.1 and 4.2.1 *
Also Published As
Publication number | Publication date |
---|---|
CN117714381A (en) | 2024-03-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||