
CN119520416A - Ethernet multi-queue traffic scheduling method, device, computer-readable storage medium and electronic device - Google Patents


Info

Publication number
CN119520416A
CN119520416A (application CN202510059628.7A)
Authority
CN
China
Prior art keywords
traffic
scheduling
queue
data
traffic class
Prior art date
Legal status
Granted
Application number
CN202510059628.7A
Other languages
Chinese (zh)
Other versions
CN119520416B (en)
Inventor
盛迪
魏育成
蔡刚
徐维涛
Current Assignee
Ehiway Microelectronic Science And Technology Suzhou Co ltd
Original Assignee
Ehiway Microelectronic Science And Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Ehiway Microelectronic Science And Technology Suzhou Co ltd
Priority to CN202510059628.7A
Publication of CN119520416A
Application granted
Publication of CN119520416B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)

Abstract


The present invention provides an Ethernet multi-queue traffic scheduling method, including: classifying data and confirming the queue number corresponding to the classified data; confirming the remaining buffer space in the queue and, when that space is sufficient, storing the classified data in the corresponding queue; and selecting a scheduling method according to the type of traffic to be transmitted in the queue and the network state, then performing the current round of data scheduling according to the selected method together with the configured priority of each traffic class and the weight corresponding to each traffic class. The benefit of this technical solution is that the scheduling method is chosen from the traffic type in the queue and the network state, and different scheduling methods can be used for the current round of scheduling, which improves bandwidth utilization and the reliability of data transmission. The present invention also provides an Ethernet multi-queue traffic scheduling device, a computer-readable storage medium and an electronic device.

Description

Ethernet multi-queue traffic scheduling method and device, computer readable storage medium and electronic equipment
Technical Field
The present invention belongs to the field of communication technologies, and in particular relates to an Ethernet multi-queue traffic scheduling method and apparatus, a computer-readable storage medium, and an electronic device.
Background
With the rapid development of modern network technology, Ethernet has become the core of data transmission and carries ever-increasing transmission demands. With the rapid spread of technologies such as cloud computing, big data and the Internet of Things, however, the global data volume has grown explosively, causing a series of problems such as network congestion, degraded quality of service and poor user experience. To address these issues, improving the traffic scheduling capability of Ethernet networks has become particularly important.
Traditional Ethernet traffic scheduling relies mainly on a single queue for data processing and forwarding. This approach could meet basic communication requirements in the early days, when network traffic was light. With the significant increase in network size and complexity, however, single-queue processing has gradually exposed serious drawbacks. For example, under a large traffic burst a single queue is prone to congestion, leading to increased transmission delay and even packet loss, which seriously degrades communication quality and user experience.
To address these challenges, Priority-based Flow Control (PFC) was developed. PFC divides an Ethernet link into multiple virtual channels with different priorities and assigns a priority level to each channel, where each channel may carry multiple data queues. When the link is congested, transmission on the link is not suspended as a whole; instead, each channel can be paused or resumed individually, and the channels are independent of one another. In theory, this makes better use of network bandwidth resources and improves overall network performance.
In practice, however, existing PFC multi-queue scheduling methods still lack flexibility and scalability. They typically support only one or a few fixed scheduling strategies, which are difficult to adapt to changing network environments and diverse application requirements. In addition, when network traffic changes dynamically, existing methods cannot respond in time, so some queues become overloaded while others sit idle, which wastes network resources and degrades overall network performance.
More importantly, existing PFC multi-queue scheduling methods struggle to balance fairness against efficiency. Some approaches sacrifice the overall efficiency of the network in pursuit of fairness, whereas others ignore fairness to increase efficiency, leaving some queues unserved for long periods.
Disclosure of Invention
The invention provides an Ethernet multi-queue traffic scheduling method and device, a computer-readable storage medium and an electronic device, which flexibly adjust Ethernet traffic of different priorities through real-time adjustment of the scheduling method and of the weight setting of each queue, thereby improving the quality of service of data transmission.
Other objects and advantages of the present invention will be further appreciated from the technical features disclosed in the present invention.
To achieve one, some or all of the above purposes or other purposes, the invention provides an Ethernet multi-queue traffic scheduling method, which comprises: classifying data and confirming the queue numbers corresponding to the classified data; confirming the remaining buffer space in the queue and storing the classified data into the corresponding queue when that space is sufficient; selecting a scheduling method according to the traffic type of the data to be transmitted in the queue and the network state, and performing the current round of data scheduling according to the selected scheduling method together with the configured priority of each traffic class and the weight corresponding to each traffic class; and, after the current round of data scheduling is completed, adjusting the scheduling method, the traffic class priority or the traffic class weight based on the monitoring feedback of a register control and state monitoring module. The advantage of this scheme is that the Ethernet traffic scheduling method transmits data over multiple queues, classifies them precisely, selects the scheduling method according to the traffic types in the queues and the network state, and performs the current round of data scheduling according to that method, the priority of each traffic class and the corresponding weights, which improves bandwidth utilization and the reliability of data transmission and flexibly meets the transmission requirements of different traffic types.
The selectable scheduling methods include: scheduling strictly in the priority order set for the traffic classes; scheduling according to the priority order of each traffic class and the weight of the traffic class bandwidth; scheduling according to the priority order of the traffic classes and the length of the data packets in the traffic classes; and scheduling according to the priority order of the traffic classes and the time required to transmit the head data packet of each traffic class.
The register that configures the scheduling method stores a corresponding value, and the system selects the scheduling method according to the value in that register.
Scheduling strictly according to the priority order set for the traffic classes comprises, within one scheduling period, sending the data packets of the traffic classes in sequence according to the configured priority order.
While data packets of a low-priority traffic class are being scheduled, if data packets of a high-priority traffic class arrive in a queue and wait for scheduling, the high-priority packets are scheduled first; data packets of the same priority waiting in a queue are scheduled in the order in which they entered the queue.
When the selected scheduling method is strict scheduling by the priority order of the traffic classes, the register that sets the weights has no effect.
Scheduling according to the priority order of each traffic class and the weight of the traffic class bandwidth comprises: configuring a weight value for each traffic class according to the percentage of the total bandwidth allocated to it; polling each traffic class in turn in priority order; each polled traffic class outputs one data packet and its weight value is decremented accordingly; and when the weight value of a polled traffic class reaches 0, scheduling continues with the next traffic class.
A weighted counter is set for each traffic class, with its initial value set according to the corresponding weight. During scheduling, each time a traffic class is polled it outputs one data packet and the value of its weighted counter is decremented by 1; when the counter of any traffic class reaches zero, scheduling of that traffic class is suspended and the system continues with the next traffic class; when the weighted counters of all traffic classes have reached zero, the system starts the next round of scheduling.
Scheduling according to the priority order of the traffic classes and the length of the data packets in the traffic classes comprises: setting a deficit counter for each traffic class, with its initial value set to the maximum number of bytes the traffic class is allowed to transmit in one scheduling; and polling and scheduling each traffic class in turn in priority order. In one scheduling, if a traffic class is polled and the length of its head data packet is less than or equal to the value of its deficit counter, the traffic class outputs that data packet and the packet length is subtracted from the deficit counter; if the length of the head data packet is greater than the value of the deficit counter, the packet is not sent, the deficit counter remains unchanged, and scheduling moves to the next traffic class whose deficit counter is not zero. When all data packets of a traffic class have been sent, its deficit counter is cleared to zero and scheduling moves to the next traffic class whose deficit counter is not zero. When the head packet lengths of all traffic classes exceed their deficit counters, the initial value is added to the deficit counter of each traffic class and a new round of scheduling begins.
Scheduling according to the priority order of the traffic classes and the time required to transmit the head data packet of each traffic class comprises: in one round of scheduling, the traffic class whose head packet requires the shortest transmission time and has the highest priority forwards its head data packet; after the round is completed, if data still remain in the traffic classes, the transmission time of the head data packet of each traffic class is recalculated and the next round of data scheduling continues.
The user classifies the data according to the destination address, type, operation code and VLAN ID fields extracted from the data frames to be transmitted.
The queue number for transmitting the classified data is determined according to the buffer queue numbers that the register configuration maps to the different data types.
When the buffer space in the queue is insufficient but the threshold has not been reached, the data to be transmitted are temporarily buffered in the data classification and routing module and stored into the corresponding buffer queue once buffer space in the queue has been released;
if the space of the classification and routing module is also insufficient, the data to be transmitted are discarded, and the register control and state monitoring module is notified so that the data scheduling method, the traffic priority or the traffic class weights can be adjusted.
After a round of data scheduling is completed, the scheduling method, the traffic class priority or the traffic class weights are adjusted based on the monitoring feedback information of the register control and state monitoring module.
Another technical scheme of the invention provides an Ethernet multi-queue traffic scheduling device for executing the above Ethernet multi-queue traffic scheduling method. The device comprises a transmit path and a receive path whose data transfer directions are opposite; each of the transmit path and the receive path comprises a data classification and routing module, a buffer module and a traffic scheduling module connected in sequence. The data classification and routing module receives data sent by upstream equipment and classifies them; the classified data are stored in the buffer queues of the buffer module; and the traffic scheduling module schedules the data in the buffer queues according to the configured scheduling method and the priority and weight of the traffic classes.
The device further comprises a register control and state monitoring module that monitors the data classification and routing modules, the buffer modules and the traffic scheduling modules on the transmit path and the receive path; the register control and state monitoring module is used to set configuration information and to monitor and gather statistics on the scheduling behaviour of the device.
Another aspect of the present invention provides a computer-readable storage medium storing program code that is called by a processor to execute the Ethernet multi-queue traffic scheduling method described above.
Another aspect of the present invention provides an electronic device comprising one or more processors, memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the ethernet multi-queue traffic scheduling method described above.
Compared with the prior art, the Ethernet traffic scheduling method of the invention mainly has the following advantages: 1. The method transmits data over multiple queues while classifying the queues precisely, selects the scheduling method according to the traffic types in the queues and the network state, and performs the current round of data scheduling according to that method, the priority of each traffic class and the corresponding weights, which improves bandwidth utilization and the reliability of data transmission and flexibly meets the transmission requirements of different traffic types.
2. In the Ethernet traffic scheduling method, the register control and state monitoring module in the system monitors the traffic scheduling and transmission process, and the scheduling method, traffic priority or traffic weights can be adjusted according to the monitoring feedback, making the system more flexible and reducing the likelihood of data congestion.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments, as illustrated in the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of specific embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 shows the Ethernet multi-queue traffic scheduling device of the present invention.
Fig. 2 is a flow chart of the Ethernet multi-queue traffic scheduling method of the present invention.
Fig. 3 is a flow chart of the strict priority scheduling method of the present invention.
Fig. 4 is a flow chart of a weighted round robin scheduling method of the present invention.
Fig. 5 is a flow chart of the deficit weighted round robin scheduling method of the present invention.
Fig. 6 is a flow chart of a weighted fair queuing scheduling method of the present invention.
Detailed Description
The foregoing and other features, aspects, and advantages of the present invention will become more apparent from the following detailed description of a preferred embodiment, which proceeds with reference to the accompanying drawings. The directional terms mentioned in the following embodiments, such as up, down, left, right, front or rear, etc., are only referring to the directions of the attached drawings. Thus, the directional terminology is used for purposes of illustration and is not intended to be limiting of the invention.
Example 1
The first embodiment provides an Ethernet multi-queue traffic scheduling method, which comprises: classifying data and confirming the queue numbers corresponding to the classified data; confirming the remaining buffer space in the queues and storing the classified data into the corresponding queues when that space is sufficient; and selecting a scheduling method according to the traffic types of the data to be transmitted in the queues and the network state, then performing the current round of data scheduling according to the selected scheduling method together with the priority of each traffic class and the weight corresponding to each traffic class. By presetting several scheduling methods, the invention can select and adjust the scheduling method in real time according to scheduling demands, thereby improving scheduling efficiency.
The implementation process of the invention is shown in Fig. 2 and specifically comprises the following steps:
Step 1: classify the data. Extract the destination address, type, operation code, VLAN ID (the numeric tag identifying different VLANs) and similar fields of the data frame to be transmitted, and determine its type. For example, the data to be transmitted can be divided into frames whose destination address matches a priority transmission address configured in the register and frames whose destination address does not. Classification can also be based on the frame type: a type of 0x8808 indicates a pause frame (an 802.3x pause frame) or a PFC (Priority Flow Control) frame, and a type of 0x8100 indicates VLAN-tagged data. The user can adjust the classification to the traffic to be scheduled, so that the multiple virtual channels of PFC are fully used and bandwidth utilization is effectively improved. A minimal classification sketch is given below.
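For illustration only, and not as part of the claimed implementation, the following Python sketch shows one way step 1 could sort frames by destination MAC address, EtherType and VLAN ID. The field names, the category labels and the register-configured priority address are assumptions.

```python
# Illustrative classification sketch for step 1 (assumed field names and values).
PRIORITY_DEST = {"01:80:c2:00:00:01"}   # hypothetical register-configured priority destination
ETH_PAUSE_PFC = 0x8808                  # 802.3x pause / PFC frames
ETH_VLAN = 0x8100                       # VLAN-tagged frames

def classify_frame(dest_mac: str, ether_type: int, vlan_id: int | None) -> str:
    """Return a coarse traffic category for a frame to be transmitted."""
    if ether_type == ETH_PAUSE_PFC:
        return "flow_control"           # pause / PFC frame
    if dest_mac in PRIORITY_DEST:
        return "priority_destination"   # matches a register-set priority address
    if ether_type == ETH_VLAN and vlan_id is not None:
        return f"vlan_{vlan_id}"        # classified per VLAN tag
    return "best_effort"
```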
Step 2: confirm the queue number corresponding to the classified data, and determine which queue the current data should be sent to according to the buffer queue numbers that the register configuration maps to the different data types.
Step 3: confirm the remaining buffer space in the corresponding queue, judged from the length of the data to be transmitted and the remaining buffer space in the queue (an admission-check sketch follows this list). The main cases are:
If the remaining space in the queue is sufficient, the data are stored in the corresponding buffer queue;
if the buffer space in the queue is insufficient or about to reach the threshold, the data are temporarily buffered in the data classification and routing module and stored in the corresponding buffer queue once buffer space in the queue has been released;
if the buffer space in the classification and routing module is also insufficient, the data are discarded and the register control and state monitoring module is notified promptly, prompting the user to adjust the classification strategy or the scheduling method.
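The sketch below illustrates this three-way admission decision under assumed data structures; it is not the device's actual buffer logic, and the capacity checks and monitor callback are hypothetical.

```python
# Illustrative admission check for step 3: queue, stage, or drop and notify.
from collections import deque

def admit(frame: bytes, queue: deque, queue_capacity: int,
          staging: deque, staging_capacity: int, notify_monitor) -> str:
    if len(queue) < queue_capacity:            # remaining queue space is sufficient
        queue.append(frame)
        return "queued"
    if len(staging) < staging_capacity:        # queue full: hold in the routing module
        staging.append(frame)                  # wait for queue space to be released
        return "staged"
    notify_monitor("buffer_overflow")          # prompt adjustment of classification/scheduling
    return "dropped"
```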
Step 4: select the current scheduling method according to the register configuration. Different scheduling methods suit different operating scenarios, and the user can choose according to the type of data traffic to be transmitted in the queues and the network state;
Step 5: according to the selected scheduling method, combined with the configured priority of each traffic class and the weight corresponding to each traffic class, perform the current round of data scheduling;
Any number of scheduling methods can be provided for the Ethernet multi-queue traffic scheduling method; the first embodiment takes four as an example: a strict priority scheduling method that schedules by the configured traffic priority order, a weighted round robin scheduling method that schedules by the bandwidth weight of each traffic class, a deficit weighted round robin scheduling method that schedules by the packet length within the traffic classes, and a weighted fair queuing scheduling method that schedules by the time required to transmit data in the traffic classes. The flow charts of the four scheduling methods are shown in Figs. 3, 4, 5 and 6.
Referring to Fig. 3, in the strict priority scheduling method selectable in the first embodiment, scheduling strictly follows the priority order set for each traffic class: within one scheduling period, data packets of the high-priority traffic class are sent first until that class is empty, then packets of the medium-priority traffic class, and finally those of the low-priority traffic class. If a packet of a high-priority traffic class arrives while a low-priority traffic class is being scheduled, the high-priority packet is scheduled first. Packets of traffic classes with the same priority obey a first-come-first-served rule and are forwarded in arrival order. In the strict priority scheduling method the order of scheduling depends only on the priority of the traffic classes and not on the weights; the register that sets the weights has no effect when the system selects this method.
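As a minimal sketch of this behaviour (assumed data structures, not the module's hardware logic), the selector below always drains the highest-priority non-empty traffic class; same-priority packets are served first-come-first-served because each class is itself a FIFO.

```python
# Strict priority (SP) selection sketch corresponding to Fig. 3.
from collections import deque

def strict_priority_pick(classes: dict[int, deque]):
    """classes maps priority (lower number = higher priority) to a FIFO of packets."""
    for prio in sorted(classes):               # highest priority first
        if classes[prio]:
            return classes[prio].popleft()     # weights are ignored in SP mode
    return None                                # nothing to send
```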
Referring to Fig. 4, the weighted round robin scheduling method selectable in the first embodiment first determines the priority order of the traffic classes and allocates a weight to each traffic class according to the percentage of the total bandwidth configured for it. During scheduling, a counter is set for each traffic class, with its initial value set according to the corresponding weight. The traffic classes are polled in turn in priority order; when a traffic class is polled it outputs one data packet and its counter is decremented by one; once the counter of a traffic class reaches zero the system suspends scheduling of that class and moves on to the next traffic class.
After this round, if data still remain in the traffic classes, the weighted counter of each traffic class is reloaded with its initial value and data scheduling continues according to the weighted round robin method until no data remain in any traffic class, at which point scheduling is complete.
The weight in the weighted round robin scheduling method actually represents the number of data packets that each traffic class can transmit in one round of scheduling.
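For illustration only, the sketch below runs one weighted round robin round over assumed per-class FIFOs: each counter starts at the configured weight, one packet is sent per poll, and the round ends when every counter is exhausted or every queue is empty, after which the counters would be reloaded.

```python
# Weighted round robin (WRR) round sketch corresponding to Fig. 4.
from collections import deque

def wrr_round(classes: list[deque], weights: list[int]) -> list:
    sent, counters = [], list(weights)         # counters start at the configured weights
    while any(q and c > 0 for q, c in zip(classes, counters)):
        for i, q in enumerate(classes):        # poll classes in priority order
            if q and counters[i] > 0:
                sent.append(q.popleft())       # the polled class outputs one packet
                counters[i] -= 1               # and its counter is decremented
    return sent                                # counters are reloaded for the next round
```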
Referring to Fig. 5, the deficit weighted round robin scheduling method of the first embodiment is similar to weighted round robin, except that weighted round robin schedules by the number of data packets in a traffic class whereas deficit weighted round robin schedules by the length of the data packets. The method first determines the priority order of the traffic classes and then sets a deficit counter (Deficit) for each traffic class, whose initial value is the maximum number of bytes allowed to be transmitted in one scheduling (a quantum, configured by register). In one scheduling round the traffic classes are polled in turn in priority order; when a traffic class is polled, if the length of its head data packet is less than or equal to the value of its deficit counter, the traffic class forwards the head packet and the packet length is subtracted from the deficit counter; this repeats until the head packet length is greater than the deficit counter, in which case the packet is not sent, the deficit counter is left unchanged, and the system continues scheduling the next traffic class. When all data packets of a traffic class have been forwarded, its counter is cleared, scheduling of that class stops, and the classes whose counters are not 0 continue to be scheduled. When the head packet length in every traffic class exceeds its deficit counter, the module adds the quantum to the deficit counter of each traffic class and starts a new round of scheduling.
When deficit weighted round robin scheduling is selected, the weight represents the maximum number of bytes each traffic class can transmit in one round of scheduling.
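A minimal sketch of this byte-based variant is given below, with packets modelled simply as byte lengths and the quantum values assumed to come from register configuration; it follows the description above rather than the classic textbook DRR, in that the quantum is topped up only when no class can send.

```python
# Deficit weighted round robin (DWRR) round sketch corresponding to Fig. 5.
from collections import deque

def dwrr_round(classes: list[deque], quantum: list[int], deficits: list[int]) -> list[int]:
    sent = []
    for i, q in enumerate(classes):            # visit classes in priority order
        while q and q[0] <= deficits[i]:       # head packet fits in the remaining deficit
            pkt_len = q.popleft()
            deficits[i] -= pkt_len             # charge the packet length to the deficit
            sent.append(pkt_len)
        if not q:
            deficits[i] = 0                    # empty class: clear its deficit counter
    if any(classes) and all((not q) or q[0] > deficits[i] for i, q in enumerate(classes)):
        for i in range(len(classes)):          # no class can send: top up, start a new round
            deficits[i] += quantum[i]
    return sent
```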
Referring to Fig. 6, the weighted fair queuing scheduling method of the first embodiment schedules on a shortest-time basis: the time required to transmit the head data packet is calculated from the size of the head packet waiting in each traffic class and the weight of the corresponding traffic class, so that the traffic class whose head packet takes the shortest time and which has the highest priority is transmitted first.
Corresponding bandwidth is allocated to each traffic class according to the weight configured in the register, the priority order of the traffic classes is determined, and the transmission time of the head data packet in each traffic class is calculated. In one scheduling round, the traffic class whose head packet requires the shortest transmission time and which has the highest priority forwards its head packet. After a round of data scheduling, if data packets remain in the traffic classes, the transmission time of each traffic class's head data packet is recalculated and the next round of scheduling is performed, until no data remain in any traffic class.
When weighted fair queuing scheduling is selected, the weight represents the proportion of bandwidth allocated to each traffic class.
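The sketch below illustrates one such selection step under assumed structures: the head packet's transmission time is taken as its length divided by the bandwidth share implied by the configured weight, and ties are broken by priority. The units and the tie-breaking rule are assumptions consistent with the description above.

```python
# Weighted fair queuing (WFQ) selection sketch corresponding to Fig. 6.
from collections import deque

def wfq_pick(classes: list[deque], weights: list[float], total_bw: float):
    best, best_key = None, None
    for prio, (q, w) in enumerate(zip(classes, weights)):   # lower index = higher priority
        if not q:
            continue
        tx_time = q[0] / (w * total_bw)        # time to send the head packet on its share
        key = (tx_time, prio)                  # shortest time first, then higher priority
        if best_key is None or key < best_key:
            best, best_key = prio, key
    return None if best is None else classes[best].popleft()
```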
Among the four scheduling methods, strict priority is the default. When the high-priority traffic class always has data during the scheduling period, low-priority traffic may never be forwarded, causing starvation of the low-priority traffic classes. In that case a switch to one of the other three scheduling methods is required.
Weighted round robin allocates bandwidth flexibly and improves the network's resource utilization. However, the weight allocated to each priority traffic class may not match its actual needs, and large packets may grab a larger share of bandwidth, making the bandwidth allocation unfair; in that case a switch to deficit weighted round robin scheduling is required.
Deficit weighted round robin scheduling effectively solves the problem, possible under strict priority scheduling, of low-priority traffic class packets not being served for a long time, and also remedies the weakness of weighted round robin that bandwidth cannot be allocated in the preset proportion when packet lengths differ greatly or change frequently.
The advantage of weighted fair queuing is that it provides fairness at the flow level; it should be selected when some flows occupy most of the bandwidth and affect other flows, so that high-priority data flows can be transmitted faster.
The scheduling method is selected according to the type of data traffic to be transmitted in the queues and the network state, and it can also be adjusted according to whether congestion or other abnormal conditions are observed in the data forwarding of each queue in the monitored system.
Step 6: after the current round of data is scheduled, analyse the monitoring feedback information of the register control and state monitoring module and decide whether the scheduling method, the traffic class priority or the traffic class weights need to be adjusted. The register control and state monitoring module monitors whether the data forwarding of each queue in the system shows congestion or other abnormal conditions and adjusts the scheduling method, the traffic class priority or the traffic class weights according to the information fed back. An illustrative selection policy is sketched below.
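The following sketch is purely an assumption about how such feedback might drive a method switch; the statistic names, thresholds and mapping to methods are hypothetical and are not specified by the invention.

```python
# Hypothetical feedback-driven selection policy for step 6.
def choose_method(stats: dict) -> str:
    """stats: {'low_prio_starved': bool, 'pkt_len_variance': float, 'bw_skew': float}"""
    if stats["low_prio_starved"]:
        return "WRR"       # strict priority is starving low-priority classes
    if stats["pkt_len_variance"] > 0.5:
        return "DWRR"      # large packet-length spread: byte-based fairness
    if stats["bw_skew"] > 0.7:
        return "WFQ"       # a few flows hog bandwidth: flow-level fairness
    return "SP"            # default strict priority
```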
Example two
The second embodiment provides an Ethernet multi-queue traffic scheduling device, shown in Fig. 1, comprising a transmit path and a receive path whose data transfer directions are opposite but whose internal composition is identical;
each of the transmit path and the receive path comprises a data classification and routing module, a buffer module and a traffic scheduling module connected in sequence;
The data classification and routing module has two functions. First, it classifies the data received from upstream devices, judging the data type from the destination address, type, operation code, VLAN ID and similar fields of the data frame, determining from the register configuration which buffer queue the data should enter, and so preparing for the subsequent scheduling. Second, it confirms the remaining space in the buffer queue and judges whether the queue length would exceed the threshold after the current data are written; if the remaining space is sufficient, the data are routed to the corresponding queue, and if not, the module waits for buffer space to be released, so that data are neither truncated nor lost.
The main function of the buffer module is to buffer the data to be scheduled. The number of internal buffer queues can be configured by register before data transmission, with a configurable upper limit of 16, and the buffer space of each queue is independent. Since PFC supports at most 8 virtual channels, the module also maps the 16 queues onto the 8 traffic classes (i.e. virtual channels): one traffic class may map to one or more queues, but one queue may correspond to only one traffic class.
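The sketch below only illustrates this mapping constraint; the concrete mapping values and the validation helper are assumptions, not a register layout from the invention.

```python
# Queue-to-traffic-class mapping sketch: up to 16 queues onto at most 8 PFC traffic
# classes, each queue belonging to exactly one traffic class.
QUEUE_TO_TC = {q: q % 8 for q in range(16)}    # hypothetical mapping: queue i -> TC i mod 8

def validate_mapping(mapping: dict[int, int]) -> None:
    assert len(mapping) <= 16, "at most 16 buffer queues"
    assert all(0 <= tc < 8 for tc in mapping.values()), "PFC supports at most 8 traffic classes"
    # each queue key maps to exactly one TC by construction of the dict

validate_mapping(QUEUE_TO_TC)
```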
The main function of the traffic scheduling module, the core of the device, is to schedule the data waiting in the traffic classes of different priorities according to the currently configured scheduling method and the priority and weight of each traffic class. The module supports the four methods of strict priority scheduling, weighted round robin scheduling, deficit weighted round robin scheduling and weighted fair queuing scheduling, and can schedule accurately and flexibly according to the characteristics and requirements of the different traffic classes. In strict priority mode, high-priority traffic always gets the chance to be transmitted first, ensuring the real-time performance and reliability of critical service data. In weighted round robin and deficit weighted round robin modes, the module allocates transmission resources reasonably according to the weight setting of each traffic class, achieving more balanced bandwidth utilization. The weighted fair queuing method provides an effective solution for scenarios that require fair allocation of bandwidth.
The device further comprises a register control and state monitoring module that monitors the data classification and routing modules, the buffer modules and the traffic scheduling modules on the transmit path and the receive path.
The register control and state monitoring module has two functions. First, it sets the various configuration items of the device, such as the mapping between data types and queues, the number and length of buffer queues, the mapping between queues and traffic classes, the selection of the scheduling method, and the priority and weight of each traffic class. Second, it monitors and gathers statistics on the scheduling behaviour of the device and exports that data. The user can evaluate and adjust the scheduling method in real time from the monitored information, such as bandwidth utilization, packet queueing length, transmission delay and other key indicators, making traffic scheduling more accurate and improving resource utilization.
Example III
The third embodiment provides a computer-readable storage medium storing program code that is called by a processor to execute the Ethernet multi-queue traffic scheduling method of the first embodiment.
Example IV
The fourth embodiment provides an electronic device comprising one or more processors, a memory and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the Ethernet multi-queue traffic scheduling method of the first embodiment.
The Ethernet multi-queue traffic scheduling method and device, computer-readable storage medium and electronic device provided by the invention have been described in detail above, and specific examples have been used to illustrate the structure and working principle of the invention; the description of the above embodiments is only intended to help understand the method and its core idea. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principles, and such improvements and modifications fall within the scope of the appended claims.

Claims (18)

1. An Ethernet multi-queue traffic scheduling method, characterized by comprising: classifying data and confirming the queue number corresponding to the classified data; confirming the remaining buffer space in the queue, and storing the classified data into the corresponding queue when the remaining buffer space in the queue is sufficient; and selecting a scheduling method according to the type of data traffic to be transmitted in the queue and the network state, and performing the current round of data scheduling according to the selected scheduling method combined with the set priority of each traffic class and the weight corresponding to the traffic class.
2. The Ethernet multi-queue traffic scheduling method according to claim 1, characterized in that the selectable scheduling methods include: scheduling strictly in the priority order set for the traffic classes; scheduling according to the priority order of each traffic class and the weight of the traffic class bandwidth; scheduling according to the priority order of the traffic classes and the length of the data packets in the traffic classes; and scheduling according to the priority order of the traffic classes and the time required to send the head data packet of each traffic class.
3. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that the register that configures the scheduling method stores a corresponding value, and the system selects the corresponding scheduling method according to the value in the register.
4. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that scheduling strictly in the priority order set for the traffic classes comprises sending the data packets of the traffic classes in sequence, within one scheduling period, according to the set priority order of the traffic.
5. The Ethernet multi-queue traffic scheduling method according to claim 4, characterized in that, when data packets of a low-priority traffic class are being scheduled and data packets of a high-priority traffic class are sent to a queue to wait for scheduling, the data packets of the high-priority traffic class are scheduled first; data packets of the same priority sent to a queue to wait for scheduling are scheduled in the order in which they were sent to the queue.
6. The Ethernet multi-queue traffic scheduling method according to claim 4, characterized in that, when the selected scheduling method is scheduling strictly in the priority order set for the traffic classes, the register that sets the weights does not take effect.
7. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that scheduling according to the priority order of each traffic class and the weight of the traffic class bandwidth comprises: configuring a weight value for each traffic class according to the percentage of the total bandwidth allocated to it; polling each traffic class in turn in priority order; each polled traffic class outputs one data packet and its weight value is reduced accordingly; and when the weight value of the polled traffic class reaches 0, continuing with the scheduling of the next traffic class.
8. The Ethernet multi-queue traffic scheduling method according to claim 7, characterized in that a weighted counter is set for each traffic class, with its initial value set according to the corresponding weight; during scheduling, each time a traffic class is polled it outputs one data packet and the value of its weighted counter is reduced by 1; when the counter value of any traffic class reaches zero, scheduling of the traffic class corresponding to that weighted counter is suspended and the system continues with the scheduling of the next traffic class; and when the weighted counters of all the traffic classes have reached zero, the system starts the next round of scheduling.
9. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that scheduling according to the priority order of the traffic classes and the length of the data packets in the traffic classes comprises: setting a deficit counter for each traffic class, the initial value of the deficit counter being set to the maximum number of bytes of data each traffic class is allowed to transmit in one scheduling; and polling and scheduling each traffic class in turn in priority order, wherein, in one scheduling: when a traffic class is polled and the length of its data packet is less than or equal to the value of the deficit counter, the traffic class outputs a data packet and the length of the data packet is subtracted from the deficit counter; when a traffic class is polled and the length of its data packet is greater than the value of the deficit counter, the traffic class does not send, the deficit counter value is unchanged, and scheduling is transferred to the next traffic class whose deficit counter value is not zero; when all the data packets of any traffic class have been sent, the value of the deficit counter of that traffic class is cleared to zero, and scheduling is transferred to the next traffic class whose deficit counter value is not zero; and when the data packet lengths of all traffic classes exceed the values of their corresponding deficit counters, the corresponding initial value is added to the deficit counter of each traffic class and a new round of scheduling is performed.
10. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that scheduling according to the priority order of the traffic classes and the time required to send the head data packet of each traffic class comprises: the traffic classes queue for scheduling in the order of the time required to send their head data packets and of their priority; in one round of scheduling, the traffic class whose head data packet requires the shortest sending time and which has the highest priority forwards the head data packet of that traffic class; and, after one round of scheduling is completed, if data still remain in the traffic classes, the transmission time of the head data packet of each traffic class after that round is calculated and the next round of data scheduling continues.
11. The Ethernet multi-queue traffic scheduling method according to claim 1, characterized in that the user classifies the data according to the destination address, type, operation code and VLAN ID fields extracted from the data frames to be transmitted.
12. The Ethernet multi-queue traffic scheduling method according to claim 1, characterized in that the queue number corresponding to the transmission of the classified data is determined according to the buffer queue numbers that the register configuration maps to the different data types.
13. The Ethernet multi-queue traffic scheduling method according to claim 1, characterized in that, when the buffer space in the queue is insufficient but the threshold has not been reached, the data to be transmitted are temporarily buffered in the data classification and routing module and stored into the corresponding buffer queue after the buffer space in the queue has been released; and if the space of the classification and routing module is also insufficient, the data to be transmitted are discarded and the register control and state monitoring module is notified to adjust the data scheduling method, the traffic priority or the traffic class weight.
14. The Ethernet multi-queue traffic scheduling method according to claim 2, characterized in that, after the current round of data scheduling is completed, the scheduling method, the traffic class priority or the traffic class weight is adjusted according to the monitoring feedback information of the register control and state monitoring module.
15. An Ethernet multi-queue traffic scheduling device, characterized in that it is used to execute the Ethernet multi-queue traffic scheduling method according to any one of claims 1-14 and comprises a transmit path and a receive path, the data transfer directions of the transmit path and the receive path being opposite; the transmit path and the receive path each comprise a data classification and routing module, a buffer module and a traffic scheduling module connected in sequence; and the data classification and routing module receives and classifies data sent by upstream equipment, the classified data are stored in the buffer queues in the buffer module, and the traffic scheduling module schedules the data in the buffer queues according to the configured scheduling method and the priority and weight of the traffic classes.
16. The Ethernet multi-queue traffic scheduling device according to claim 15, characterized in that it further comprises a register control and state monitoring module that monitors the data classification and routing modules, the buffer modules and the traffic scheduling modules on the transmit path and the receive path, the register control and state monitoring module being used to set configuration information and to monitor and gather statistics on the scheduling operation of the device.
17. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, and the program code is called by a processor to execute the Ethernet multi-queue traffic scheduling method according to any one of claims 1-14.
18. An electronic device, characterized in that it comprises one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to execute the Ethernet multi-queue traffic scheduling method according to any one of claims 1-14.
CN202510059628.7A 2025-01-15 2025-01-15 Ethernet multi-queue traffic scheduling method and device, computer readable storage medium and electronic equipment Active CN119520416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510059628.7A CN119520416B (en) 2025-01-15 2025-01-15 Ethernet multi-queue traffic scheduling method and device, computer readable storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN119520416A true CN119520416A (en) 2025-02-25
CN119520416B CN119520416B (en) 2025-04-15

Family

ID=94655660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510059628.7A Active CN119520416B (en) 2025-01-15 2025-01-15 Ethernet multi-queue traffic scheduling method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN119520416B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610198A (en) * 2008-06-17 2009-12-23 大唐移动通信设备有限公司 A kind of dispatching method of Packet Service and dispatching device
CN101860916A (en) * 2009-04-08 2010-10-13 大唐移动通信设备有限公司 Resource scheduling method and device
CN104079501A (en) * 2014-06-05 2014-10-01 深圳市邦彦信息技术有限公司 Queue scheduling method based on multiple priorities
US20170351549A1 (en) * 2016-06-03 2017-12-07 International Business Machines Corporation Task queuing and dispatching mechanisms in a computational device
JP2020521251A (en) * 2017-10-17 2020-07-16 広東工業大学Guangdong University Of Technology Virtual product change method of electronic product production line
CN111966513A (en) * 2020-08-31 2020-11-20 国网上海市电力公司 Priori-knowledge-free Coflow multi-stage queue scheduling method and device and scheduling equipment thereof
US20230336486A1 (en) * 2020-12-24 2023-10-19 Huawei Technologies Co., Ltd. Service flow scheduling method and apparatus, and system
CN113747597A (en) * 2021-08-30 2021-12-03 上海智能网联汽车技术中心有限公司 Network data packet scheduling method and system based on mobile 5G network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Wenjing: "Research on Queue Scheduling Algorithms in Differentiated Services", China Master's Theses Full-text Database (Electronic Journal), 15 May 2016 (2016-05-15), page 3 *

Also Published As

Publication number Publication date
CN119520416B (en) 2025-04-15

Similar Documents

Publication Publication Date Title
US7027457B1 (en) Method and apparatus for providing differentiated Quality-of-Service guarantees in scalable packet switches
EP3955550B1 (en) Flow-based management of shared buffer resources
KR100323258B1 (en) Rate guarantees through buffer management
KR100933917B1 (en) Bandwidth guarantee and overload protection method in network switch
US7016366B2 Packet switch that converts variable length packets to fixed length packets and uses fewer QOS categories in the input queues than in the output queues
AU752188B2 (en) System and method for scheduling message transmission and processing in a digital data network
JP3715098B2 (en) Packet distribution apparatus and method in communication network
US8064344B2 (en) Flow-based queuing of network traffic
US6993041B2 (en) Packet transmitting apparatus
US7619969B2 (en) Hardware self-sorting scheduling queue
US20070070895A1 (en) Scaleable channel scheduler system and method
US8553543B2 (en) Traffic shaping method and device
JP3306705B2 (en) Packet transfer control device and scheduling method thereof
CN104579962A (en) A method and device for distinguishing QoS policies of different packets
JP2002519910A (en) Policy-based quality of service
US20140317220A1 (en) Device for efficient use of packet buffering and bandwidth resources at the network edge
CN115473855B (en) Network system and data transmission method
CN106921586B (en) Data stream shaping method, data scheduling method and device
US20230022037A1 (en) Flow-based management of shared buffer resources
CN102594669A (en) Data message processing method, device and equipment
WO2002091757A1 (en) A scheduling method of realizing the quality of service of router in integrated service
CN106899514B (en) A Queue Scheduling Method to Guarantee Service Quality of Multimedia Service
CN119520416B (en) Ethernet multi-queue traffic scheduling method and device, computer readable storage medium and electronic equipment
Tong et al. Quantum varying deficit round robin scheduling over priority queues
KR100588001B1 (en) Weighted Packet Scheduling System and Its Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant