
CN104836750B: Data center network flow scheduling method based on round-robin time slices

Info

Publication number
CN104836750B
Authority
CN
China
Prior art keywords
flow
sdn switch
priority queue
data
low
Prior art date
Legal status
Active
Application number
CN201510222086.7A
Other languages
Chinese (zh)
Other versions
CN104836750A (en)
Inventor
李克秋
盛佩
齐恒
李文信
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201510222086.7A
Publication of CN104836750A
Application granted
Publication of CN104836750B
Legal status: Active


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a data center network flow scheduling method based on round-robin time slices, belonging to the field of data center networks. Using SDN technology in a data center network with redundant links, the method considers the distribution of both long and short flows, dynamically controls how flows are forwarded through the network, and dynamically adjusts how flows are queued in the SDN switches according to flow size. The invention achieves load balancing of data flows across the data center network and reduces the latency of short flows in the interconnection network.

Description

A data center network flow scheduling method based on round-robin time slices

Technical Field

The invention relates to a data center network flow scheduling method based on round-robin time slices, and belongs to the field of data center networks.

Background

As the key infrastructure behind high-revenue online services (web search, social networking, advertising, and recommendation systems), data centers face growing attention to their network performance. These applications place low-latency requirements on the data center network. User experience is strongly affected by application response time; even a delay of a few hundred milliseconds can noticeably degrade it. Amazon, for example, found that every additional 100 ms of latency caused a one-percent drop in revenue.

Web search is a typical data center application and follows the partition-aggregate pattern. In this pattern, different machines play different roles: top-level aggregators (TLAs), mid-level aggregators (MLAs), and worker nodes. A TLA receives a request and splits the computation required to answer it among the MLAs. Each MLA further divides the computation into appropriately sized pieces and hands them to worker nodes. The worker nodes execute the computation in parallel and return their results to the MLA; each MLA merges the results it receives and forwards them to the corresponding TLA. This pattern produces short flows and long flows in the data center network at the same time. Short flows arise mainly from interactions between aggregators at different levels, or between MLAs and worker nodes; long flows come mainly from large-scale background parallel computation on the worker nodes. For interactive applications, reducing the completion time of short flows significantly improves response time, but that does not mean the long flows produced by background computation can be ignored entirely. In other words, a long background flow only needs enough resources to transfer all of its data and is therefore throughput-sensitive, whereas short flows are latency-sensitive. When latency-insensitive long flows and latency-sensitive short flows share the same queue, short flows experience long delays because of the backlog of long-flow packets in the queue. Moreover, ECMP, the typical flow scheduling algorithm, does not handle congestion well; once a link becomes congested the network delay grows further, and short flows take even longer to complete. To improve network performance, an efficient scheduling algorithm must guarantee the latency requirements of short flows without significantly degrading the transfer performance of long flows.

Summary of the Invention

To this end, the invention provides an SDN-based network flow scheduling method that considers long and short flows together. First, to make full use of the redundant paths in the network, flows are spread across links through dynamic routing so as to avoid congestion. Second, to reduce the transfer delay of short flows, two FIFO queues with different priorities are configured at each egress port of the switching device: packets of short flows are placed in the high-priority queue and packets of long flows in the low-priority queue, so that short flows are transmitted ahead of long flows and their delay is reduced.

The technical scheme adopted by the invention is as follows:

(1) Build the interconnection network with SDN switches that support the OpenFlow protocol.

(2) Use the OpenFlow protocol to obtain link usage between switches: the controller sends ofp_port_stats_request messages to the OpenFlow-capable SDN switches to obtain link information between them. The specific steps are as follows (a controller-side sketch is given after these steps):

a. Every interval T1, the controller sends an ofp_port_stats_request message to all SDN switches in the network and waits for their responses;

b. On receiving an ofp_port_stats reply, the controller triggers a PortStatsReceived event; the event handler reads the total number of bytes transmitted by the port so far;

c. Compute the difference between the current total transmitted bytes and the total collected last time, and divide it by the interval T1; this value approximates the bandwidth currently occupied on that port;

d. Store the computed bandwidth in the structure {dpid, port, bandwidth}, where bandwidth is the transmission bandwidth of the port.
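For illustration only, the following is a minimal sketch of steps a to d written as a POX controller module (POX is the controller used in the embodiment described later). The polling interval value, the dictionary names and the module structure are assumptions for the sketch, not part of the patent.

```python
# Minimal POX sketch (assumption: a recent POX branch, OpenFlow 1.0).
# Polls per-port byte counters every T1 seconds and derives an approximate
# per-port bandwidth, stored per (dpid, port) as in {dpid, port, bandwidth}.
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

T1 = 5.0           # polling interval in seconds (assumed value)
last_bytes = {}    # (dpid, port) -> tx_bytes seen at the previous poll
bw_table = {}      # (dpid, port) -> estimated occupied bandwidth (bytes/s)

def _poll_ports():
    # Step a: ask every connected switch for its port statistics.
    for con in core.openflow.connections:
        con.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_port_stats(event):
    # Steps b to d: on each reply, compute the byte delta and divide by T1.
    dpid = event.connection.dpid
    for ps in event.stats:
        key = (dpid, ps.port_no)
        prev = last_bytes.get(key, ps.tx_bytes)
        bw_table[key] = (ps.tx_bytes - prev) / T1
        last_bytes[key] = ps.tx_bytes

def launch():
    core.openflow.addListenerByName("PortStatsReceived", _handle_port_stats)
    Timer(T1, _poll_ports, recurring=True)
```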

(3) Dynamic routing strategy

Redundant paths in the network are used to reduce the likelihood of congestion. The specific steps are as follows (a controller-side sketch is given after these steps):

a. When the controller receives a packet-in message from an OpenFlow switch, a PACKETIN event is triggered; in the event handler the controller parses out the flow identifier <srcip, dstip, srcport, dstport, proto> and records it;

b. Using the bandwidth usage of the different links obtained in (2), select the link with the lowest bandwidth utilization as the forwarding path of the flow;

c. Add the selected path to the flow tables of the corresponding SDN switches and instruct the switches to forward the flow.
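As a sketch of the dynamic routing steps on a POX controller, the code below reacts to packet-in events and installs the least-loaded candidate path. The PATHS table, the (dpid, out_port) hop representation and the reuse of bw_table from the previous sketch are assumptions; a real deployment would fill PATHS from a topology or discovery module.

```python
# Dynamic routing sketch (assumption: POX controller, OpenFlow 1.0).
from pox.core import core
import pox.openflow.libopenflow_01 as of

PATHS = {}       # placeholder: (srcip, dstip) -> list of candidate paths,
                 # each path given as a list of (dpid, out_port) hops
bw_table = {}    # per-(dpid, port) bandwidth estimates from step (2)

def _install_path(path, ip_pkt, l4_pkt):
    # Step c: add a flow entry on every switch along the chosen path.
    for dpid, out_port in path:
        fm = of.ofp_flow_mod()
        fm.match = of.ofp_match(dl_type=0x0800,
                                nw_src=ip_pkt.srcip, nw_dst=ip_pkt.dstip,
                                nw_proto=ip_pkt.protocol,
                                tp_src=l4_pkt.srcport, tp_dst=l4_pkt.dstport)
        fm.actions.append(of.ofp_action_output(port=out_port))
        core.openflow.sendToDPID(dpid, fm)

def _handle_packet_in(event):
    packet = event.parsed
    ip = packet.find('ipv4')
    l4 = packet.find('tcp') or packet.find('udp')
    if ip is None or l4 is None:
        return
    # Step a: the flow identifier <srcip, dstip, srcport, dstport, proto> is
    # available here. Step b: pick the candidate path whose busiest hop has
    # the lowest estimated utilization.
    paths = PATHS.get((ip.srcip, ip.dstip), [])
    if not paths:
        return
    best = min(paths, key=lambda p: max(bw_table.get(hop, 0) for hop in p))
    _install_path(best, ip, l4)

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_packet_in)
```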

(4) Use the OpenFlow protocol to obtain the number of bytes each flow has transmitted: the controller sends ofp_flow_stats_request messages to the OpenFlow-capable SDN switches. The specific steps are as follows (a sketch is given after these steps):

a. Every interval T2, the controller sends an ofp_flow_stats_request message to all SDN switches in the network and waits for their responses;

b. On receiving an ofp_flow_stats reply, the controller triggers a FlowStatsReceived event; the event handler reads the total number of bytes the flow has transmitted;

c. Store the total transmitted bytes of the flow in the structure <srcip, dstip, srcport, dstport, proto, transmitted>, where transmitted is the total number of bytes transmitted.
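A matching sketch of the per-flow statistics collection, again assuming a POX controller; T2, the flow_bytes dictionary and the key layout are illustrative assumptions.

```python
# Per-flow byte-count collection sketch (assumption: POX controller).
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

T2 = 5.0          # polling interval in seconds (assumed value)
flow_bytes = {}   # (srcip, dstip, srcport, dstport, proto) -> transmitted bytes

def _poll_flows():
    # Step a: request flow statistics from all switches.
    for con in core.openflow.connections:
        con.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

def _handle_flow_stats(event):
    # Steps b and c: record byte_count per reported flow entry.
    for fs in event.stats:
        m = fs.match
        key = (m.nw_src, m.nw_dst, m.tp_src, m.tp_dst, m.nw_proto)
        flow_bytes[key] = fs.byte_count

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(T2, _poll_flows, recurring=True)
```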

(5) Queue scheduling strategy

The goal is to significantly reduce the transfer delay of high-priority flows without significantly degrading the output performance of low-priority flows. The specific steps are as follows (a behavioral sketch is given after these steps):

a. Two queues with different priorities are configured at each output port of the switching device; the switching device schedules the data in these two queues in time slices of length T3;

b. The switching device dequeues and transmits data from the high-priority queue first;

c. If at some point within a time slice the high-priority queue becomes empty, the scheduler switches to the low-priority queue and marks that the low-priority queue has been served in this slice, until the slice is used up;

d. If the high-priority queue never becomes empty within a time slice, the low-priority queue is not scheduled until the slice is used up;

e. When a time slice ends, check whether the low-priority queue has gone unserved for two consecutive slices; if so, dedicate the next slice to the low-priority queue, mark it as served, and return to step b;

f. Otherwise, return to step b.
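The following is a behavioral model of the time-slice discipline in steps a to f, written as plain Python for illustration; it is not switch firmware, and the slice length, the queue objects and the send() callback are assumptions.

```python
# Behavioral sketch of the two-queue, time-sliced scheduling discipline.
import time
from collections import deque

T3 = 0.001                         # length of one time slice in seconds (assumed)
high_q, low_q = deque(), deque()   # high- and low-priority FIFO queues

def run_scheduler(send):
    missed = 0   # consecutive slices in which the low-priority queue was not served
    while True:
        slice_end = time.monotonic() + T3
        if missed >= 2:
            # Step e: dedicate this slice to the low-priority queue, then
            # return to step b for the next slice.
            while low_q and time.monotonic() < slice_end:
                send(low_q.popleft())
            missed = 0
            continue
        served_low = False
        while time.monotonic() < slice_end:
            if high_q:
                send(high_q.popleft())     # steps b and d: high priority first
            elif low_q:
                send(low_q.popleft())      # step c: high queue empty, serve low
                served_low = True
            else:
                time.sleep(T3 / 100)       # both queues empty; idle out the slice
        missed = 0 if served_low else missed + 1   # bookkeeping for steps e and f
```

The two-consecutive-slice rule bounds how long the low-priority (long-flow) queue can be starved, which is how the method favors short flows without significantly penalizing long ones.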

(6) Packet enqueue strategy. The specific steps are as follows (a sketch is given after these steps):

a. Initially, all flows are queued in the high-priority queue, waiting for output;

b. When step (4) finds that the number of bytes a flow has transmitted reaches a given threshold, that flow is treated as a long flow;

c. The controller sends an ofp_flow_mod message to the switch, adding a flow entry whose match is the flow identifier and whose action is an enqueue action, instructing the switch to place the long flow in the low-priority queue;

d. When the switch receives packets matching that flow, it places them in the low-priority queue;

e. At the output port, the switch schedules the two queues using the method of (5).
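A sketch of the demotion performed in steps b to d, assuming a POX controller; the threshold value, the queue id of the low-priority queue and the output port argument are assumptions rather than values fixed by the patent.

```python
# Demotion sketch: once a flow's transmitted bytes (from step (4)) cross the
# threshold, install a flow entry whose action enqueues matching packets into
# the low-priority queue of the given output port.
from pox.core import core
import pox.openflow.libopenflow_01 as of

THRESHOLD = 1024 * 1024   # bytes after which a flow counts as long (assumed)
LOW_PRIORITY_QUEUE = 1    # queue id of the low-priority queue (assumed)

def demote_if_long(dpid, out_port, flow_key, transmitted):
    srcip, dstip, srcport, dstport, proto = flow_key
    if transmitted < THRESHOLD:
        return
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match(dl_type=0x0800, nw_src=srcip, nw_dst=dstip,
                            nw_proto=proto, tp_src=srcport, tp_dst=dstport)
    # ofp_action_enqueue steers matching packets into the low-priority queue
    # instead of the default (high-priority) queue on that port.
    fm.actions.append(of.ofp_action_enqueue(port=out_port,
                                            queue_id=LOW_PRIORITY_QUEUE))
    core.openflow.sendToDPID(dpid, fm)
```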

The data center network flow scheduling method of the invention considers the scheduling of both long and short flows, dynamically routing flows to balance the load across links. The method also provides a QoS mechanism that transmits short flows first, reducing their queueing delay without significantly penalizing long flows.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the test network topology of the invention.

Figure 2 is the system architecture diagram of the invention.

Figure 3 is a flowchart of data flow scheduling in the invention.

Figure 4 is a flowchart of queue scheduling in the invention.

Detailed Description

Specific embodiments of the invention are further described below in conjunction with the drawings and the technical scheme.

As shown in Figure 1, SDN switches that support the OpenFlow protocol form a test network in which every link has a bandwidth of 100 Mbps, and each port of every SDN switch is configured with two queues of different priorities. By default, flows wait in the high-priority queue to be sent.

As shown in Figure 2, POX is used as the controller; a scheduling module is installed, and two periodic tasks collect statistics for all links in the network and for all existing flows.

As shown in Figure 3, when host H1 in Figure 1 sends a flow f1 to host H2 and a switch in the network cannot forward the flow, the switch reports to the POX controller and asks it to route the flow. On receiving the request, POX computes a forwarding path for f1.

From the network topology it is known that there are four equal-cost paths between H1 and H2. Based on the link usage collected by periodic task T1, the path whose links are currently most idle is selected for flow f1, for example H1-E1-A1-C1-A3-E2-H2. Flow entries are pushed to switches E1, A1, C1, A3 and E2, and E1 is instructed to forward the flow.

Periodic task T2 collects each flow's transmitted byte count every T2 seconds. When a flow's transmitted bytes exceed a given threshold, the POX controller classifies it as a long flow. The POX controller then sends an ofp_flow_mod message to the switch, adding a flow entry whose match is the flow identifier and whose action is an enqueue action, instructing the switch to place the long flow in the low-priority queue. When the switch receives subsequent matching packets, it places them in the low-priority queue to await scheduling.

As shown in Figure 4, the switch schedules the data in the output-port queues in cycles of length T3; one cycle is called a time slice. Initially the flag Flag is set to 1 and the T3 timer is started. The timer task has three main parts:

1. While the high-priority queue is not empty, repeatedly dequeue data from the high-priority queue.

2. While scheduling the high-priority queue, if it is found to be empty, switch to scheduling the low-priority queue until the timer expires.

3. When the low-priority queue has not been scheduled for two consecutive time slices, allocate the next time slice to the low-priority queue.

Claims (1)

1. A data center network flow scheduling method based on round-robin time slices, characterized in that:

(1) The interconnection network is built with SDN switches that support the OpenFlow protocol;

(2) The OpenFlow protocol is used to obtain link usage between the SDN switches: the controller sends ofp_port_stats_request messages to the OpenFlow-capable SDN switches to obtain link information between them; the specific steps are as follows:
a. Every interval T1, the controller sends an ofp_port_stats_request message to all SDN switches in the network and waits for their responses;
b. On receiving an ofp_port_stats reply, the controller triggers a PortStatsReceived event; the event handler reads the total number of bytes transmitted by the port so far;
c. The difference between the current total transmitted bytes and the total collected last time is computed and divided by the interval T1; this value is taken as the bandwidth currently occupied on that port;
d. The computed bandwidth is stored in the structure {dpid, port, bandwidth}, where bandwidth is the transmission bandwidth of the port;

(3) Dynamic routing strategy:
a. When the controller receives a packet-in message from an OpenFlow-capable SDN switch, a PACKETIN event is triggered; in the event handler the controller parses out the flow identifier <srcip, dstip, srcport, dstport, proto> and records it;
b. Using the bandwidth usage of the different links obtained in (2), the link with the lowest bandwidth utilization is selected as the forwarding path of the flow;
c. The selected path is added to the flow tables of the corresponding SDN switches, and the SDN switches are instructed to forward the flow;

(4) The OpenFlow protocol is used to obtain the number of bytes each flow has transmitted: the controller sends ofp_flow_stats_request messages to the OpenFlow-capable SDN switches; the specific steps are as follows:
a. Every interval T2, the controller sends an ofp_flow_stats_request message to all SDN switches in the network and waits for their responses;
b. On receiving an ofp_flow_stats reply, the controller triggers a FlowStatsReceived event; the event handler reads the total number of bytes the flow has transmitted;
c. The total transmitted bytes of the flow are stored in the structure <srcip, dstip, srcport, dstport, proto, transmitted>, where transmitted is the total number of bytes transmitted;

(5) Queue scheduling strategy: the transfer delay of high-priority flows is significantly reduced without significantly degrading the output performance of low-priority flows; the specific steps are as follows:
a. Two queues with different priorities are configured at each output port of the SDN switch; the SDN switch schedules the data in these two queues in time slices of length T3;
b. The SDN switch dequeues and transmits data from the high-priority queue first;
c. If at some point within a time slice the high-priority queue becomes empty, the scheduler switches to the low-priority queue and marks that the low-priority queue has been served in this slice, until the slice is used up;
d. If the high-priority queue never becomes empty within a time slice, the low-priority queue is not scheduled until the slice is used up;
e. When a time slice ends, it is checked whether the low-priority queue has gone unserved for two consecutive slices; if so, the next slice is dedicated to the low-priority queue, which is marked as served, and the process returns to step b;
f. If the low-priority queue has not gone unserved for two consecutive slices, the process returns to step b;

(6) Packet enqueue strategy:
a. Initially, all flows are queued in the high-priority queue, waiting for output;
b. When step (4) finds that the number of bytes a flow has transmitted reaches a given threshold, that flow is treated as a long flow;
c. The controller sends an ofp_flow_mod message to the SDN switch, adding a flow entry whose match is the flow identifier and whose action is an enqueue action, instructing the SDN switch to place the long flow in the low-priority queue;
d. When the SDN switch receives packets matching that flow, it places them in the low-priority queue;
e. At the output port, the SDN switch schedules the two queues using the method of (5).
CN201510222086.7A 2015-05-04 2015-05-04 A kind of data center network stream scheduling method based on round-robin Active CN104836750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510222086.7A CN104836750B (en) 2015-05-04 2015-05-04 A kind of data center network stream scheduling method based on round-robin


Publications (2)

Publication Number Publication Date
CN104836750A CN104836750A (en) 2015-08-12
CN104836750B 2017-12-05

Family

ID=53814394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510222086.7A Active CN104836750B (en) 2015-05-04 2015-05-04 A kind of data center network stream scheduling method based on round-robin

Country Status (1)

Country Link
CN (1) CN104836750B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106911593B (en) * 2015-12-23 2019-09-13 中国科学院沈阳自动化研究所 A Queue Scheduling Method for Industrial Control Networks Based on SDN Architecture
CN107087280A (en) * 2016-02-16 2017-08-22 中兴通讯股份有限公司 A kind of data transmission method and device
CN105827547B (en) * 2016-03-10 2019-02-05 中国人民解放军理工大学 A Stream Scheduling Method to Reduce Streaming Completion Time in Data Center Networks
CN107040605B (en) * 2017-05-10 2020-05-01 安徽大学 Cloud platform resource scheduling and management system based on SDN and application method thereof
CN107332786B (en) * 2017-06-16 2019-08-13 大连理工大学 A kind of dispatching method ensureing data flow deadline under service chaining environment
CN107154897B (en) * 2017-07-20 2019-08-13 中南大学 Isomery stream partition method based on packet scattering in DCN
CN109861923B (en) 2017-11-30 2022-05-17 华为技术有限公司 Data scheduling method and TOR switch
CN108777697A (en) * 2018-04-09 2018-11-09 中国电信股份有限公司上海分公司 A method of slow down SDN switch to controller network-impacting load
CN108390820B (en) 2018-04-13 2021-09-14 华为技术有限公司 Load balancing method, equipment and system
CN110191061A (en) * 2019-05-07 2019-08-30 荆楚理工学院 A campus network management system based on SDN technology
CN110166372B (en) * 2019-05-27 2022-04-19 中国科学技术大学 Method for online scheduling of co-flows in optical circuit switch-based data centers
CN111580886A (en) * 2020-05-11 2020-08-25 上海英方软件股份有限公司 Method and device for loading mass data through time slice rotation


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7142513B2 (en) * 2002-05-23 2006-11-28 Yea-Li Sun Method and multi-queue packet scheduling system for managing network packet traffic with minimum performance guarantees and maximum service rate control

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179487A (en) * 2006-11-10 2008-05-14 中兴通讯股份有限公司 Computer network data packet forwarding queue management method
CN101179486A (en) * 2006-11-10 2008-05-14 中兴通讯股份有限公司 Computer network data packet forwarding CAR queue management method
CN101188547A (en) * 2006-11-17 2008-05-28 中兴通讯股份有限公司 Router for improving forward efficiency based on virtual monitoring group and CAR rate limit
CN101193051A (en) * 2006-11-20 2008-06-04 中兴通讯股份有限公司 Router for improving forward speed and efficiency based on virtual monitoring group

Also Published As

Publication number Publication date
CN104836750A (en) 2015-08-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant