Disclosure of Invention
The invention provides an Ethernet multi-queue traffic scheduling method, device, computer readable storage medium and electronic device, which flexibly adjust Ethernet traffic flows of different priorities through real-time adjustment of the scheduling method and of the weight value set for each queue, thereby improving the quality of data transmission service.
Other objects and advantages of the present invention will be further appreciated from the technical features disclosed in the present invention.
In order to achieve one, some or all of the above purposes or other purposes, the invention provides an Ethernet multi-queue traffic scheduling method, which comprises the steps of: classifying data; confirming the queue sequence numbers corresponding to the classified data; confirming the remaining buffer space in each queue and storing the classified data into the corresponding queue when the remaining buffer space in the queue is sufficient; selecting a scheduling method according to the traffic types of the data to be transmitted in the queues and the network state, and performing the current round of data scheduling according to the priority of each traffic class and the weight corresponding to each traffic class as set by the selected scheduling method; and, after the current round of data scheduling is completed, adjusting the scheduling method, the traffic class priority or the traffic class weight according to the monitoring feedback information of the register control and state monitoring module. This technical scheme has the advantage that the Ethernet traffic scheduling method uses multiple queues to transmit data and classifies them accurately; the scheduling method is selected according to the traffic types in the queues and the network state, and the current round of data scheduling is performed according to the scheduling method, the priority of each traffic class and the corresponding weight, so that the bandwidth utilization efficiency and the reliability of data transmission are improved, and the transmission requirements of different traffic types are met flexibly.
The optional scheduling methods comprise: scheduling according to the priority order set for the traffic classes; scheduling according to the priority order of the traffic classes and the bandwidth weight of each traffic class; scheduling according to the priority order of the traffic classes and the length of the data packets in each traffic class; and scheduling according to the priority order of the traffic classes and the time required to transmit the head data packet in each traffic class.
A scheduling-method configuration register stores a corresponding value, and the system selects the corresponding scheduling method according to the value in the register.
Scheduling strictly according to the priority order set for the traffic classes comprises sending the data packets of each traffic class in turn, within a scheduling period, according to the set priority order of the traffic.
When a data packet of a high-priority traffic class arrives at the queue for scheduling while data packets of a low-priority traffic class are being scheduled, the high-priority data packet is scheduled preferentially; when data packets of the same priority arrive at the queue for scheduling, they are scheduled in the order in which the data were sent to the queue.
When the selected scheduling method is scheduling according to the priority order set for the traffic classes, the register in which the weights are set has no effect.
Scheduling according to the priority order of the traffic classes and the bandwidth weight of each traffic class comprises: configuring a weight value for each traffic class according to the percentage of the total bandwidth allocated to that traffic class; polling each traffic class in turn according to the priority order of the traffic classes; the polled traffic class outputs a data packet and its weight value is reduced accordingly, until the weight value of the polled traffic class reaches 0, after which scheduling continues with the next traffic class.
A weighted counter is set for each traffic class, and the initial value of the weighted counter is set according to the corresponding weight. During scheduling, each time a traffic class is polled it outputs a data packet and the value of its weighted counter is reduced by 1; when the counter value of any traffic class reaches zero, scheduling of that traffic class is suspended and the next traffic class is scheduled; when the weighted counters of all traffic classes are zero, the system continues with the next round of scheduling.
Scheduling according to the priority order of the traffic classes and the length of the data packets in the traffic classes comprises: setting a deficit counter for each traffic class, the initial value of which is the maximum number of bytes allowed to be transmitted in one round of scheduling; polling each traffic class in turn according to the priority order of the traffic classes; when a traffic class is polled and the length of its head data packet is less than or equal to the value of its deficit counter, the traffic class outputs the data packet and the data packet length is subtracted from the deficit counter; when the length of the head data packet is greater than the value of the deficit counter, the data packet is not transmitted, the value of the deficit counter is unchanged, and scheduling continues with the next traffic class; when all data packets of a traffic class have been forwarded, its deficit counter is cleared and scheduling of that traffic class is suspended; and when the head data packet length of every traffic class exceeds its deficit counter, the maximum number of bytes is added to each deficit counter and a new round of scheduling begins.
Scheduling according to the priority order of the traffic classes and the time required to transmit the head data packet in each traffic class comprises: in each round of scheduling, forwarding the head data packet of the traffic class whose head data packet requires the shortest transmission time and which has the highest priority; and, if data still exist in the traffic classes after one round of scheduling is completed, recalculating the transmission time of the head data packet in each traffic class and continuing with the next round of data scheduling.
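As a purely illustrative aid (not part of the claimed method), the per-round flow described above can be sketched in Python as follows; the register codes, function names and the monitoring interface are assumptions made for this example, and the four per-method routines are sketched in the detailed description of the first embodiment.

```python
# Illustrative sketch only: the register codes and the monitor interface
# are assumptions, not definitions from the patent.
from enum import IntEnum
from typing import Callable, Dict, List

class SchedMethod(IntEnum):
    STRICT_PRIORITY = 0        # schedule purely by traffic-class priority
    WEIGHTED_ROUND_ROBIN = 1   # priority order + per-class packet budget
    DEFICIT_WRR = 2            # priority order + per-class byte budget
    WEIGHTED_FAIR_QUEUING = 3  # priority order + head-packet transmit time

def run_one_round(method_register: int,
                  schedulers: Dict[SchedMethod, Callable],
                  traffic_classes: List,
                  monitor) -> None:
    """One round of scheduling followed by the monitoring feedback check."""
    method = SchedMethod(method_register)   # value written into the register
    schedulers[method](traffic_classes)     # current round of data scheduling
    # After the round, the register control and state monitoring module may
    # change the method, the traffic-class priorities or the weights.
    monitor.apply_feedback()
```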
The user classifies the data according to fields extracted from the data frame to be transmitted, such as the destination address, frame type, operation code and VLAN ID.
The queue sequence number to which the classified data should be transmitted is determined according to the buffer queue sequence numbers mapped, via register configuration, to the different data types.
When the remaining buffer space in the queue is insufficient or the queue is about to reach its threshold, the data to be transmitted are temporarily buffered in the data classification routing module and stored into the corresponding buffer queue after buffer space in the queue is released;
if the buffer space of the data classification routing module is also insufficient, the data to be transmitted are discarded and the register control and state monitoring module is notified, so that the data scheduling method, the traffic class priority or the traffic class weight can be adjusted.
After the previous round of data scheduling is completed, the scheduling method, the traffic class priority or the traffic class weight is adjusted according to the monitoring feedback information of the register control and state monitoring module.
The Ethernet multi-queue traffic scheduling device provided by another technical scheme of the invention is used for executing the above Ethernet multi-queue traffic scheduling method. The device comprises a transmit path and a receive path whose data transmission directions are opposite. The transmit path and the receive path each comprise a data classification routing module, a buffer module and a traffic scheduling module which are connected in sequence. The data classification routing module receives data sent by upstream equipment and classifies the data; the classified data are stored in the buffer queues of the buffer module; and the traffic scheduling module schedules the data in the buffer queues according to the configured scheduling method and the priority and weight of each traffic class.
The device further comprises a register control and state monitoring module which monitors the data classification routing module, the buffer module and the traffic scheduling module on the transmit path and the receive path. The register control and state monitoring module is used for setting configuration information and for monitoring and collecting statistics on the operation and scheduling condition of the device.
Another embodiment of the present invention provides a computer readable storage medium in which program code is stored, the program code being called by a processor to execute the Ethernet multi-queue traffic scheduling method described above.
Another aspect of the present invention provides an electronic device comprising one or more processors, a memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the Ethernet multi-queue traffic scheduling method described above.
Compared with the prior art, the Ethernet traffic scheduling method mainly has the following advantages: 1. The Ethernet traffic scheduling method uses multiple queues to transmit data and classifies the queues accurately; the scheduling method is selected according to the traffic types in the queues and the network state, and the current round of data scheduling is performed according to the scheduling method, the priority of each traffic class and the corresponding weight, so that the bandwidth utilization efficiency and the reliability of data transmission are improved, and the transmission requirements of different traffic types are met flexibly.
2. According to the Ethernet traffic scheduling method, the register control and state monitoring module arranged in the system monitors the traffic scheduling and transmission process in the system, and the scheduling method, the traffic class priority or the traffic class weight can be adjusted according to the monitoring feedback information, so that the system is more flexible and the possibility of data congestion is reduced.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments, as illustrated in the accompanying drawings.
Detailed Description
The foregoing and other features, aspects, and advantages of the present invention will become more apparent from the following detailed description of a preferred embodiment, which proceeds with reference to the accompanying drawings. The directional terms mentioned in the following embodiments, such as up, down, left, right, front or rear, refer only to the directions in the attached drawings. Thus, the directional terminology is used for purposes of illustration and is not intended to limit the invention.
Embodiment 1
The first embodiment provides an Ethernet multi-queue traffic scheduling method, which comprises the steps of: classifying data; confirming the queue sequence numbers corresponding to the classified data; confirming the remaining buffer space in the queues and storing the classified data into the corresponding queues when the remaining buffer space is sufficient; and selecting a scheduling method according to the traffic types of the data to be transmitted in the queues and the network state, and performing the current round of data scheduling according to the selected scheduling method combined with the priority of each traffic class and the weight corresponding to each traffic class. By presetting several scheduling methods, the invention can select and adjust the scheduling method in real time according to the scheduling demands, thereby improving the scheduling efficiency.
The implementation process of the invention is shown in fig. 1, and specifically comprises the following steps:
Step 1, classify the data: extract the destination address, type, operation code, VLAN ID (a numeric label identifying different VLANs) and other fields of the data frame to be transmitted, and judge the type of the data to be transmitted. For example, the data to be transmitted can be divided into frames whose destination address matches the priority-transmission destination address set in the register and frames whose destination address does not match it. The classification can also be based on the frame type: a frame type of 0x8808 indicates a pause frame (802.3x pause frame) or a PFC frame (Priority Flow Control), and a frame type of 0x8100 indicates VLAN-tagged data. The user can adjust the data classification according to the data traffic to be scheduled for transmission, so that the multiple virtual channels of PFC are fully utilized and the bandwidth utilization is effectively improved.
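For illustration only, a minimal Python sketch of this classification step is given below. The 0x8808 and 0x8100 frame types come from the description above; the priority-destination address argument and the returned category labels are assumptions.

```python
# Minimal classification sketch; returned labels are illustrative only.
ETHERTYPE_FLOW_CTRL = 0x8808   # 802.3x pause frame or PFC frame
ETHERTYPE_VLAN_TAG = 0x8100    # frame carries a VLAN tag

def classify_frame(frame: bytes, priority_dest_mac: bytes) -> str:
    dest_mac = frame[0:6]
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == ETHERTYPE_FLOW_CTRL:
        return "flow-control"                 # pause / PFC frame
    if ethertype == ETHERTYPE_VLAN_TAG:
        vlan_id = int.from_bytes(frame[14:16], "big") & 0x0FFF
        return f"vlan-{vlan_id}"              # classify by VLAN ID
    if dest_mac == priority_dest_mac:
        return "priority-destination"         # matches register-set address
    return "default"
```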
Step 2, confirm the queue sequence number corresponding to the classified data: judge to which queue the current data need to be transmitted according to the buffer queue sequence numbers that the register configuration maps to the different data types.
Step 3, confirm the remaining buffer space in the corresponding queue: judge according to the length of the data currently to be transmitted and the remaining buffer space in the queue. This mainly includes the following cases (see the sketch after this list):
if the remaining space in the queue is sufficient, the data are stored into the corresponding buffer queue;
if the buffer space in the queue is insufficient or about to reach the threshold value, the data are temporarily buffered in the data classification routing module and stored into the corresponding buffer queue once buffer space in the queue is released;
if the buffer space in the data classification routing module is also insufficient, this group of data is discarded and the register control and state monitoring module is notified in time, prompting the user to adjust the classification strategy or the scheduling method in time.
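The three cases above can be sketched as follows; the queue, routing-module buffer and monitoring objects and their methods are hypothetical and serve only to illustrate the decision flow.

```python
# Illustrative decision flow for step 3; the objects and their methods
# (free_space, push, hold, report_drop) are assumptions.
def enqueue(frame: bytes, queue, router_buffer, monitor) -> None:
    if queue.free_space() >= len(frame):               # case 1: queue has room
        queue.push(frame)
    elif router_buffer.free_space() >= len(frame):     # case 2: hold in the
        router_buffer.hold(frame, target_queue=queue)  #   classification module
    else:                                              # case 3: drop and notify
        monitor.report_drop(queue_index=queue.index, length=len(frame))
```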
Step 4, select the current scheduling method according to the register configuration. Different scheduling methods suit different working scenarios, and the user can select the scheduling method according to the type of data traffic to be transmitted in the queues and the network state.
Step 5, perform the current round of data scheduling according to the selected scheduling method, combined with the set priority of each traffic class and the weight corresponding to each traffic class.
The number of scheduling methods available in the Ethernet multi-queue traffic scheduling method can be set as needed. The first embodiment takes four scheduling methods as an example: a strict priority scheduling method that schedules according to the priority order set for the traffic classes, a weighted round robin scheduling method that schedules according to the bandwidth weight of each traffic class, a deficit weighted round robin scheduling method that schedules according to the data packet length in each traffic class, and a weighted fair queuing scheduling method that schedules according to the time required for data transmission in each traffic class. The specific flow charts of the four scheduling methods are shown in figures 3, 4, 5 and 6.
Referring to fig. 3, the optional strict priority scheduling method of the first embodiment schedules strictly according to the priority order set for each traffic class. Within a round of the scheduling period, the data packets of the high-priority traffic class are sent first until that class is empty, then the data packets of the medium-priority traffic class are sent, and finally those of the low-priority traffic class. When a low-priority traffic class is being scheduled and a data packet of a high-priority traffic class arrives, the high-priority data packet is scheduled preferentially. When data packets of the same priority as the traffic class currently being scheduled arrive, the data of traffic classes with the same priority follow the first-come-first-served rule and are forwarded in sequence. In the strict priority scheduling method, the scheduling order of traffic class data is related only to the priority of the traffic class and is independent of the weights; the register used to set the weights has no effect when the system selects the strict priority scheduling method.
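A minimal sketch of this strict priority behaviour, assuming each traffic class is represented as a FIFO queue ordered by arrival (index 0 = highest priority) and that an illustrative `transmit` callable hands one data packet to the output; new arrivals between rounds are not modeled.

```python
from collections import deque
from typing import Callable, List

# Strict priority sketch: drain classes in priority order; within a class,
# packets leave in arrival (first-come-first-served) order. Weights unused.
def schedule_strict_priority(traffic_classes: List[deque],
                             transmit: Callable) -> None:
    for tc in traffic_classes:          # walk classes in priority order
        while tc:                       # drain this class before moving on
            transmit(tc.popleft())
```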
Referring to fig. 4, the optional weighted round robin scheduling method of the first embodiment first determines the priority order of the traffic classes and allocates a weight to each traffic class according to the percentage of the total bandwidth configured for it. During scheduling, a counter is set for each traffic class, and the initial value of each counter is set according to the corresponding weight. Each traffic class is then polled in turn according to its priority: when a traffic class is polled it outputs a data packet and its counter value is reduced by one, until the counter value of the traffic class reaches zero, at which point the system suspends scheduling of that traffic class and schedules the next traffic class.
After one round of scheduling, if data still exist in the traffic classes, the weight counter of each traffic class is reloaded with its weight value and data scheduling continues according to the weighted round robin method, until no data remain in any traffic class and scheduling is complete.
The weight in the weighted round robin scheduling method actually represents the number of data packets that each traffic class can transmit in one round of scheduling.
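A minimal weighted round robin sketch under the same illustrative assumptions (FIFO queues per traffic class, a `transmit` callable, positive weights). Here each polled class sends packets until its counter is exhausted or its queue is empty, which is one reading of the description above; as stated, the weight is the number of data packets a class may send per round.

```python
from collections import deque
from typing import Callable, List

# Weighted round robin sketch: weights[i] = packets class i may send per round,
# derived from its configured share of the total bandwidth.
def schedule_wrr(traffic_classes: List[deque], weights: List[int],
                 transmit: Callable) -> None:
    while any(traffic_classes):                 # repeat rounds until all empty
        for tc, weight in zip(traffic_classes, weights):  # priority order
            counter = weight                    # reload the weighted counter
            while counter > 0 and tc:           # send until quota or queue
                transmit(tc.popleft())          #   is exhausted
                counter -= 1
```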
Referring to fig. 5, the deficit weighted round robin scheduling method of the first embodiment is similar to the weighted round robin scheduling method, except that weighted round robin schedules according to the number of data packets in each traffic class, whereas deficit weighted round robin schedules according to the length of the data packets in each traffic class. The method first determines the priority order of the traffic classes and then sets a deficit counter Deficit for each traffic class, whose initial value is the maximum number of bytes allowed to be transmitted in one round of scheduling (denoted Quantum and configured by a register). In one round of scheduling, each traffic class is polled in turn according to its priority. When a traffic class is polled, if the length of its head data packet is less than or equal to the value of the deficit counter Deficit, the traffic class forwards the head data packet and the data packet length is subtracted from the deficit counter, until the head data packet length is greater than the value of the deficit counter; that data packet is then not sent, the value of the deficit counter remains unchanged, and the system continues scheduling the next traffic class. When all data packets of a traffic class have been forwarded, its counter value is cleared, scheduling of that traffic class is stopped, and the traffic classes whose counter values are not 0 continue to be scheduled. When the head data packet length of every traffic class exceeds its deficit counter, the module adds Quantum to the deficit counter of each traffic class and starts a new round of scheduling.
When deficit weighted round robin scheduling is selected, the weight represents the maximum number of bytes that each traffic class can transmit in one round of scheduling.
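A minimal deficit weighted round robin sketch under the same illustrative assumptions; packets are represented as byte strings so that `len()` gives the packet length, and the per-class quantum values stand in for the register-configured maximum bytes per round.

```python
from collections import deque
from typing import Callable, List

# Deficit weighted round robin sketch: quantum[i] = max bytes class i may
# transmit per round; deficits start at the quantum and are topped up when
# every remaining head packet exceeds its budget.
def schedule_dwrr(traffic_classes: List[deque], quantum: List[int],
                  transmit: Callable) -> None:
    deficit = list(quantum)                         # initial value = Quantum
    while any(traffic_classes):
        progress = False
        for i, tc in enumerate(traffic_classes):    # poll in priority order
            while tc and len(tc[0]) <= deficit[i]:  # head packet fits budget
                packet = tc.popleft()
                deficit[i] -= len(packet)           # spend the byte budget
                transmit(packet)
                progress = True
            if not tc:
                deficit[i] = 0                      # class emptied: clear counter
        if not progress:                            # every head packet too long:
            deficit = [d + q for d, q in zip(deficit, quantum)]  # new round
```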
Referring to fig. 6, the weighted fair queuing scheduling method of the first embodiment schedules on a shortest-time basis: the time required to transmit the head data packet of each traffic class is calculated from the size of that head data packet and the weight of the corresponding traffic class, so that the traffic class whose head data packet requires the shortest time, and which has the highest priority, is transmitted preferentially.
The corresponding bandwidth is allocated to each traffic class according to the weight values configured in the register, the priority order of the traffic classes is determined, and the transmission time of the head data packet in each traffic class is calculated. In one round of scheduling, the traffic class whose head data packet requires the shortest transmission time and which has the highest priority forwards its head data packet. After one round of data scheduling is finished, if data packets still exist in the traffic classes, the transmission time of the head data packet in each traffic class is recalculated and the next round of scheduling is performed, until no data remain in any traffic class.
When weighted fair queuing scheduling is selected, the weight represents the share of the bandwidth occupied by each traffic class.
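A minimal weighted fair queuing sketch under the same illustrative assumptions: the transmission time of a head data packet is approximated as its length divided by the bandwidth weight of its traffic class, and ties are broken by priority (lower index = higher priority).

```python
from collections import deque
from typing import Callable, List

# Weighted fair queuing sketch: each step, the class whose head packet needs
# the least transmission time sends it; ties go to the higher-priority class.
def schedule_wfq(traffic_classes: List[deque], bandwidth: List[float],
                 transmit: Callable) -> None:
    while any(traffic_classes):
        # (transmit time, priority index) for every class that still has data
        candidates = [(len(tc[0]) / bandwidth[i], i)
                      for i, tc in enumerate(traffic_classes) if tc]
        _, winner = min(candidates)          # shortest time, then highest prio
        transmit(traffic_classes[winner].popleft())
```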
Among the four scheduling methods, the strict priority scheduling method is the default traffic scheduling method. When a large amount of data exists in the high-priority traffic classes, that is, when high-priority traffic is always present during the scheduling request period, low-priority traffic cannot be forwarded, causing starvation of the low-priority traffic classes. At this point a switch to one of the other three scheduling methods is required.
The weighted round robin method allocates bandwidth flexibly and improves the resource utilization of the network. However, the weight allocated to each priority traffic class may not match the actual requirement of each traffic class, and large data packets may occupy a larger share of bandwidth, making the network bandwidth allocation unfair; in that case a switch to deficit weighted round robin scheduling is required.
Deficit weighted round robin scheduling effectively solves the problem, possible in strict priority scheduling, that low-priority traffic class data packets cannot be served for a long time, and at the same time overcomes the shortcoming of weighted round robin scheduling that bandwidth resources cannot be allocated in the preset proportion when the data packet lengths differ greatly or change frequently.
The weighted fair queuing scheduling method has the advantage of providing fairness at the flow level. If some flows occupy most of the bandwidth and affect other flows, this method can be selected so that high-priority data flows are transmitted faster.
The scheduling method is selected according to the type of data traffic to be transmitted in the queues and the network state, and can also be adjusted according to whether congestion or other abnormal conditions are observed in the data forwarding of each queue monitored in the system.
Step 6, after the data of the previous round have been scheduled, judge and analyse the monitoring feedback information of the register control and state monitoring module and determine whether the scheduling method, the traffic class priority or the traffic class weight needs to be adjusted. The register control and state monitoring module monitors whether congestion or other abnormal conditions occur in the data forwarding of each queue in the system and adjusts the scheduling method, the traffic class priority or the traffic class weight according to the information fed back about the abnormal conditions.
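For illustration only, the feedback-driven adjustment of step 6 might look as follows; the statistics fields, thresholds and configuration interface used here are hypothetical and are not defined by the patent.

```python
# Hypothetical sketch of step 6: react to monitoring feedback after a round.
def adjust_after_round(monitor, config):
    stats = monitor.read_queue_stats()           # per-queue depth / drop counts
    if config.method == "strict_priority" and stats.low_priority_starved:
        config.method = "weighted_round_robin"   # avoid low-priority starvation
    elif stats.unfair_bandwidth_split:
        config.method = "deficit_wrr"            # fairness despite long packets
    elif stats.congested_queues:
        config.raise_weight(stats.congested_queues)  # give congested classes more
    return config
```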
Embodiment 2
The second embodiment provides an Ethernet multi-queue traffic scheduling device. Referring to fig. 1, the device comprises a transmit path and a receive path whose data transmission directions are opposite but whose internal compositions are the same;
the transmit path and the receive path each comprise a data classification routing module, a buffer module and a traffic scheduling module which are connected in sequence.
The data classification routing module has two functions. First, it classifies the data transmitted by the upstream equipment: it judges the type of the data according to the destination address, type, operation code, VLAN ID and other fields of the data frame, and judges, according to the register configuration, which buffer queue the data should enter, preparing for the subsequent scheduling. Second, it confirms the remaining space in the buffer queue and judges whether the length of the buffer queue would exceed the threshold after the current data are transmitted; if the remaining space meets the requirement, the data are routed to the corresponding queue, and if not, the module waits for buffer space to be released, preventing the data from being truncated or lost.
The main function of the buffer module is to buffer the data to be scheduled. The number of internal buffer queues can be configured through a register before data transmission, with a configurable upper limit of 16, and the buffer spaces of the queues are independent of one another. Since PFC supports at most 8 virtual channels, the module also maps the 16 queues to 8 traffic classes (i.e., virtual channels): one traffic class may map to one or more queues, but one queue corresponds to only one traffic class.
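The mapping constraint can be illustrated with a short check; the dictionary representation (queue index to traffic class index) is an assumption made for this example.

```python
from typing import Dict

# Illustrative check of the queue-to-traffic-class mapping constraint.
def check_queue_mapping(queue_to_tc: Dict[int, int]) -> None:
    assert len(queue_to_tc) <= 16, "at most 16 buffer queues"
    assert all(0 <= tc < 8 for tc in queue_to_tc.values()), \
        "PFC supports at most 8 traffic classes (virtual channels)"
    # One traffic class may own several queues; each queue maps to exactly
    # one traffic class, which the dictionary form already guarantees.

# e.g. queues 0 and 1 share traffic class 7, queue 2 uses class 3:
check_queue_mapping({0: 7, 1: 7, 2: 3})
```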
The main function of the traffic scheduling module is to schedule the data to be transmitted in the traffic classes of different priorities according to the currently configured scheduling method and the priority and weight value of each traffic class; it is the core part of the device. The module supports four methods in total: strict priority scheduling, weighted round robin scheduling, deficit weighted round robin scheduling and weighted fair queuing scheduling, and can perform accurate and flexible scheduling according to the characteristics and requirements of different traffic classes. In the strict priority scheduling mode, high-priority traffic always obtains the opportunity to be transmitted first, which guarantees the real-time performance and reliability of key service data. In the weighted round robin and deficit weighted round robin scheduling modes, the module allocates transmission resources reasonably according to the weight setting of each traffic class, achieving more balanced bandwidth utilization. The weighted fair queuing scheduling method provides an effective solution for scenarios requiring fair allocation of bandwidth.
The device also comprises a register control and state monitoring module which monitors the data classification routing module, the buffer module and the traffic scheduling module on the transmit path and the receive path.
The register control and state monitoring module has two functions. First, it sets the various configuration information of the device, such as the mapping relation between different data types and queues, the number and length of the buffer queues, the mapping relation between the queues and the traffic classes, the selection of the scheduling method, and the priority and weight of each traffic class. Second, it monitors and collects statistics on the operation and scheduling condition of the device and reports the data externally. The user can evaluate and adjust the scheduling method in real time according to the monitored information, such as key indicators including bandwidth utilization, data packet queue length and transmission delay, so that traffic scheduling is more accurate and resource utilization is improved.
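As an illustration of the configuration items listed above, they could be grouped as follows; the field names are illustrative and are not register names from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical grouping of the configuration items set by the register
# control and state monitoring module.
@dataclass
class SchedulerConfig:
    type_to_queue: Dict[str, int] = field(default_factory=dict)  # data type -> queue
    queue_lengths: List[int] = field(default_factory=list)       # per-queue depth
    queue_to_tc: Dict[int, int] = field(default_factory=dict)    # queue -> traffic class
    method: str = "strict_priority"                               # selected scheduler
    priorities: List[int] = field(default_factory=list)          # per-class priority
    weights: List[int] = field(default_factory=list)             # per-class weight
```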
Embodiment 3
The third embodiment provides a computer readable storage medium in which program code is stored, the program code being called by a processor to execute the Ethernet multi-queue traffic scheduling method of the first embodiment.
Embodiment 4
The fourth embodiment provides an electronic device comprising one or more processors, a memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the Ethernet multi-queue traffic scheduling method of the first embodiment.
The method, device, computer readable storage medium and electronic device for Ethernet multi-queue traffic scheduling provided by the invention have been described in detail above, and specific examples have been used to illustrate the structure and working principle of the invention; the description of the above embodiments is only intended to help understand the method of the invention and its core idea. It should be noted that various improvements and modifications can be made to the invention by those skilled in the art without departing from the principles of the invention, and such improvements and modifications fall within the scope of the appended claims.