CN111917666A - Data frame preemptive cache management method based on service level agreement - Google Patents
- Publication number: CN111917666A (application CN202010732501.4A)
- Authority
- CN
- China
- Prior art keywords: user, queue, data frame, priority, execute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6215—Individual queue per QOS, rate or priority
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/29—Flow control; Congestion control using a combination of thresholds
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
Abstract
The invention discloses a data frame preemptive cache management method based on a service level agreement (SLA), which mainly solves the problems that the prior art does not consider the QoS of users with high service levels, that high-priority data frames suffer a high loss rate, and that cache space is poorly utilized. The scheme is: 1) set the queue thresholds and the minimum threshold of each priority based on the service level agreement, and initialize the cache management parameters; 2) when a data frame arrives at the cache, judge whether this frame or some already stored frame must be discarded, then execute 3); if no data frame arrives, end the service; 3) enqueue into the cache space any frame that need not be discarded and return to 2); for frames that require a discard, drop a frame satisfying the conditions set by the SLA-based queue thresholds, then return to 2). The invention guarantees the QoS of users with high service levels, reduces the loss rate of high-priority data frames, and improves cache space utilization; it can be used in switching equipment in star networks.
Description
Technical Field
The invention belongs to the field of communication technology and more particularly relates to a data frame queue buffer management method that can be used in switching equipment in star networks.
Background
In local area networks and access networks based on a shared medium, a star topology is often used, usually consisting of one central node and several user nodes. In such a network, two-way data transmission takes place between the central node and each user node, and traffic between user nodes must be forwarded through the central node. The central node therefore has to manage and coordinate the data frames destined for different user nodes, which in turn requires it to use its buffer space effectively so that frames destined for different nodes can be enqueued successfully.
In a real network system, network resources are limited. To make full use of them, service providers usually offer different service levels, so different user nodes receive different service. When a network switching device manages its buffer, the central node must, as far as possible, consider the quality of service (QoS) of users with high service levels and the utilization of buffer space, reduce the overall frame loss rate, and in particular keep the loss rate of high-priority data frames low. The buffer management methods most widely used at present are the static threshold (ST) policy, the dynamic threshold (DT) policy, the push-out (PO) policy, and multi-priority policies, where:
The static threshold policy ST includes complete partitioning, complete sharing, and partial sharing with partial partitioning.
Complete partitioning divides the entire buffer space equally into n parts according to the number of user queues, each queue occupying a fixed buffer size. Once the fixed buffer space of a queue is full, further arriving data frames for that queue must be discarded. This management method guarantees fairness among users well, but it cannot guarantee the quality of service of users with high service levels and its buffer utilization is low.
Complete sharing lets all user queues share the entire buffer space, and arriving data frames are discarded only when the buffer is full. This method makes full use of the buffer and improves its utilization, but allocating buffer space purely by data volume cannot ensure that users with high service levels enjoy the QoS they are due, nor can it guarantee a lower loss rate for high-priority data.
Partial sharing with partial partitioning divides the buffer into a shared region and a partitioned region, distributing the partitioned region equally among the user queues to guarantee a lower bound of buffer space for each queue. When a queue carries a large amount of data and fills its partitioned space, its frames can be stored in the shared region, and arriving frames are discarded only when the shared region is full. This method guarantees fairness among users and correspondingly reduces the frame loss rate, but it neither guarantees a lower loss rate for users with high service levels nor considers the loss rate of high-priority frames, and it cannot guarantee buffer utilization under bursty traffic.
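The three static-threshold variants differ only in their admission test. A minimal sketch (the function signature and scheme names are ours, for illustration only):

```python
def st_admit(scheme, q_len, partition, shared_used, shared_cap):
    """Would an arriving frame be admitted under a static threshold variant?

    q_len       : current length of the frame's own queue
    partition   : fixed per-queue slice (for the partitioned variants)
    shared_used : frames currently in the shared region
    shared_cap  : capacity of the shared region
    """
    if scheme == "complete_partitioning":
        return q_len < partition           # only the queue's own slice counts
    if scheme == "complete_sharing":
        return shared_used < shared_cap    # any free space in the common pool
    # partial sharing / partial partitioning: own slice first, then the pool
    return q_len < partition or shared_used < shared_cap
```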
In the dynamic threshold policy DT, the queue length threshold at any moment is proportional to the unused buffer size. Representative dynamic threshold policies are the typical dynamic threshold policy and the optimal DT algorithm.
The idea of the typical dynamic threshold policy is to give each queue an independent threshold proportional to the currently unused buffer space; an arriving data frame is discarded only when the actual queue length exceeds the computed threshold. Although this policy guarantees buffer utilization under bursty traffic, the computed dynamic threshold depends only on the overall buffer state and ignores the length of the queue itself, so it can hardly guarantee the quality of service of users with high service levels or the loss rate of high-priority frames.
The optimal DT algorithm builds on the typical dynamic threshold policy: the port queue threshold varies with the output service rate. The policy itself requires no reserved buffer and allows overloaded queues to occupy more space while the buffer is not full, improving buffer utilization compared with the typical dynamic threshold policy, but it shares the same drawbacks.
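The typical dynamic threshold rule can be sketched in a few lines (alpha is the usual proportionality constant; the function names are ours):

```python
def dt_threshold(total_buffer, queue_lengths, alpha=1.0):
    # Threshold proportional to the unused buffer: T = alpha * (B_total - sum(q))
    return alpha * (total_buffer - sum(queue_lengths))

def dt_admit(total_buffer, queue_lengths, i, alpha=1.0):
    # Admit the frame arriving at queue i only if the buffer has room and
    # queue i is still below the current dynamic threshold.
    if sum(queue_lengths) >= total_buffer:
        return False
    return queue_lengths[i] < dt_threshold(total_buffer, queue_lengths, alpha)
```

Note that the threshold shrinks as the buffer fills, which is what protects utilization under bursts, but the test never looks at which user or priority the queue serves, which is exactly the drawback described above.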
In the push-out policy PO, a data frame can be stored in the buffer at any time as long as the buffer is not full. The most representative push-out policies are the delayed decision policy (DRP) and drop-on-demand (DoD).
The DRP policy decides whether to discard a packet only when no buffer space is left, the victim being chosen by buffer occupancy or by priority; the DoD policy discards a packet from the longest queue in the buffer when the buffer is full.
Both policies achieve higher buffer utilization than the static and dynamic threshold policies and take priority into account to different degrees. However, neither can provide guarantees for users with high service levels; they aim purely at improving space utilization and thus lack practicality, and the DoD idea of simply discarding a packet from the longest queue also ignores fairness among users.
Multi-priority policies were proposed to provide QoS guarantees of different priorities to different types of traffic flows; the most representative are multi-priority optimal DT and the pipelined-section policy.
Multi-priority optimal DT extends the optimal DT policy to the multi-priority case. It inherits the advantages of optimal DT, namely high buffer utilization and a low packet loss rate, and also remedies the drawback that optimal DT provides no guarantee for priorities.
The pipelined-section policy divides the buffer space into sections of different priorities. The transport stream that arrives first enters the last section and has the highest priority; other sections are used only after the last section is exhausted, and the output port can take packets only from the last section. After the last section has been transmitted, each lower-priority section moves up to the adjacent higher-priority section. This policy guarantees the QoS of high-priority traffic well, but its buffer utilization cannot be kept high.
Both of these policies consider guarantees for priorities, but neither considers how to provide QoS guarantees to users with high service levels, and the buffer utilization of the pipelined-section policy is also low.
Summary of the Invention
The purpose of the present invention is to address the deficiencies of the existing methods described above by proposing a data frame preemptive cache management method based on a service level agreement, so that users with a high service level enjoy a quality of service with a lower loss rate, each user's high-priority queue has a high enqueue success rate, and buffer space utilization is improved under different traffic conditions.
To achieve the above goal, the technical scheme of the present invention includes the following:
(1) Initialize the cache management parameters:
(1a) Let there be k users N_i to whom the cache management service is provided, with i ranging from 1 to k;
(1b) Let the total number of data frames the whole cache space can hold be M, and let m be the number of data frames it holds in real time;
(1c) Let the service level agreement SLA be the amount of bandwidth purchased by a user;
(1d) Let the total bandwidth be B and the bandwidth purchased by user N_i be B_i; the bandwidths purchased by the k users satisfy B_1 + B_2 + ... + B_k ≤ B, where B_1 + B_2 + ... + B_k is the sum of the bandwidth purchased by the k users;
(1e) Let the data frames of user N_i be divided into three priorities, high, medium and low, all stored in the user queue Q_i;
(1f) Let the queue threshold M_i of user queue Q_i be: M_i = M · B_i / B;
(1g) Let the high-priority minimum threshold of user queue Q_i be L_i_H = (a/(a+b+c)) · (M_i/n), the medium-priority minimum threshold be L_i_M = (b/(a+b+c)) · (M_i/n), and the low-priority minimum threshold be L_i_L = (c/(a+b+c)) · (M_i/n), where 1/n is the ratio of the sum of the high, medium and low minimum thresholds to the queue threshold M_i, with n ≥ 1, and a, b, c are the proportions of the high-, medium- and low-priority minimum thresholds, with a, b, c ≥ 0;
(1h) Let the real-time length m_i of user queue Q_i be: m_i = m_i_H + m_i_M + m_i_L, where m_i_H, m_i_M and m_i_L are the numbers of high-, medium- and low-priority data frames;
(1i) Let the set of users whose real-time queue length exceeds the queue threshold be: C = {N_i | m_i > M_i, 1 ≤ i ≤ k};
(1j) Let the user queue that exceeds its queue threshold by the most be: Q_max = {Q_i | m_i - M_i = B}, where B = max{m_i - M_i | N_i ∈ C} is the maximum difference between a real-time queue length and the corresponding queue threshold;
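The threshold initialization can be sketched as follows. This is a reconstruction, not text from the patent: it assumes the queue threshold is proportional to the purchased bandwidth (M_i = M · B_i / B) and that the three per-priority floors split M_i / n in the ratio a : b : c, which is consistent with the numbers quoted later in the embodiment:

```python
def sla_thresholds(M, B, B_i, n, a, b, c):
    """Queue threshold and per-priority minimum thresholds for one user.

    M       : total number of frames the cache can hold
    B       : total bandwidth; B_i : bandwidth purchased by user N_i
    n       : M_i / n is the sum of the three minimum thresholds (n >= 1)
    a, b, c : proportions of the high / medium / low minimum thresholds
    """
    M_i = M * B_i / B              # queue threshold proportional to the SLA
    floor_sum = M_i / n            # sum of the three priority floors
    s = a + b + c
    L_H = a / s * floor_sum        # high-priority minimum threshold
    L_M = b / s * floor_sum        # medium-priority minimum threshold
    L_L = c / s * floor_sum        # low-priority minimum threshold
    return M_i, L_H, L_M, L_L
```

With the embodiment's M = 2400 and B = 120 MHz and, as an assumed illustration, a user who purchased 20 MHz with n = 4 and a : b : c = 6 : 3 : 1, this yields a queue threshold of 400 and floors of 60, 30 and 10 frames, matching the values quoted for user N_7 in the embodiment.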
(2) When a data frame of user N_i arrives at the cache, compare the number m of data frames held in real time with the total number M the cache can hold:
If m < M, the whole cache space is not full; execute (11);
Otherwise, the cache space is full; execute (3);
(3) Compare the real-time length m_i of the user queue Q_i to which the arriving data frame belongs with the queue threshold M_i:
If m_i < M_i, the real-time length of user queue Q_i has not exceeded the queue threshold; find the j-th user N_j, j ≠ i, to whom Q_max belongs, and execute (4);
Otherwise, the real-time length of user queue Q_i has exceeded the queue threshold; execute (6);
(4) Compare the number m_j_L of low-priority data frames of user N_j with the low-priority minimum threshold L_j_L of the j-th user queue Q_j:
If m_j_L > L_j_L, discard any low-priority data frame in user queue Q_j and execute (11);
Otherwise, execute (5);
(5) Compare the number m_j_M of medium-priority data frames of user N_j with the medium-priority minimum threshold L_j_M of user queue Q_j:
If m_j_M > L_j_M, discard any medium-priority data frame in user queue Q_j and execute (11);
Otherwise, it must hold at this point that m_j_H > L_j_H; discard any high-priority data frame in user queue Q_j and execute (11);
(6) Judge whether the data frame of user N_i arriving at the cache is a low-priority frame: if so, discard it directly and execute (12); otherwise, execute (7);
(7) Judge whether the data frame of user N_i arriving at the cache is a medium-priority frame: if so, execute (8); otherwise, execute (9);
(8) Judge whether a low-priority data frame exists in the user queue Q_i to which the arriving frame belongs:
If so, discard any low-priority data frame in Q_i and execute (11);
Otherwise, discard the arriving data frame and execute (12);
(9) Compare the number of low-priority data frames of the user N_i to whom the arriving frame belongs with the low-priority minimum threshold of user queue Q_i:
If m_i_L > L_i_L, discard any low-priority data frame in Q_i and execute (11); otherwise, execute (10);
(10) Compare the number of medium-priority data frames of the user N_i to whom the arriving frame belongs with the medium-priority minimum threshold of user queue Q_i:
If m_i_M > L_i_M, discard any medium-priority data frame in Q_i and execute (11); otherwise, discard the arriving frame directly and execute (12);
(11) Enqueue the arriving data frame into the user queue to which it belongs;
(12) Judge whether another data frame arrives at the cache: if so, return to (2); otherwise, stop the management service.
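Steps (2) to (12) can be sketched as a single admission routine. The class below is a minimal illustration and not the patent's implementation: the queue representation and names are ours, and it assumes the per-user thresholds M_i are sized so that a queue over its threshold always exists when the cache is full but the arriving user's queue is below its own threshold (which holds when the M_i sum to at most M):

```python
HIGH, MID, LOW = 2, 1, 0  # frame priorities

class SLABuffer:
    def __init__(self, M, M_i, floors):
        self.M = M                   # total frames the cache can hold
        self.M_i = M_i               # per-user queue thresholds
        self.floors = floors         # floors[i] = (L_H, L_M, L_L) for user i
        self.q = [[] for _ in M_i]   # per-user queues of frame priorities

    def _count(self, i, prio):
        return sum(1 for p in self.q[i] if p == prio)

    def arrive(self, i, prio):
        """Steps (2)-(12): True if the frame is enqueued, False if discarded."""
        if sum(len(qi) for qi in self.q) < self.M:   # (2) cache not full
            self.q[i].append(prio)                   # (11)
            return True
        if len(self.q[i]) < self.M_i[i]:             # (3) below own threshold
            # (4)-(5): preempt the user furthest over its threshold, dropping
            # low, then medium, then high, while respecting the floors
            j = max((k for k in range(len(self.q))
                     if len(self.q[k]) > self.M_i[k]),
                    key=lambda k: len(self.q[k]) - self.M_i[k])
            L_H, L_M, L_L = self.floors[j]
            if self._count(j, LOW) > L_L:
                self.q[j].remove(LOW)
            elif self._count(j, MID) > L_M:
                self.q[j].remove(MID)
            else:                                    # then m_j_H > L_j_H
                self.q[j].remove(HIGH)
            self.q[i].append(prio)                   # (11)
            return True
        if prio == LOW:                              # (6) discard arrival
            return False
        if prio == MID:                              # (7)-(8)
            if self._count(i, LOW) > 0:
                self.q[i].remove(LOW)
                self.q[i].append(prio)
                return True
            return False
        L_H, L_M, L_L = self.floors[i]               # (9)-(10) high priority
        if self._count(i, LOW) > L_L:
            self.q[i].remove(LOW)
        elif self._count(i, MID) > L_M:
            self.q[i].remove(MID)
        else:
            return False                             # discard arrival (12)
        self.q[i].append(prio)
        return True
```

In a toy run with two users sharing a cache of two frames, a high-priority arrival for an empty queue preempts a low-priority frame from the over-threshold queue, while a low-priority arrival for an over-threshold queue is simply dropped, which is the preemptive behaviour the claims describe.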
Compared with the prior art, the present invention has the following advantages:
First, because the present invention sets the queue thresholds based on the service level agreement, users with a high service level are guaranteed a higher quality of service even when all users burst traffic simultaneously, while under non-bursty traffic users with a low service level can temporarily be offered a higher quality of service as well.
Second, because the present invention also sets the minimum thresholds of the high-, medium- and low-priority queues when setting the SLA-based queue thresholds, high-priority traffic is guaranteed a higher enqueue success rate than low-priority traffic.
Brief Description of the Drawings
Fig. 1 is a flow chart of the implementation of the present invention;
Fig. 2 is the network topology used by the present invention;
Fig. 3 is a schematic diagram of SLA-based cache allocation in the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 2, this embodiment uses a star network topology consisting of one central node and eight user nodes.
Referring to Fig. 3, this embodiment provides three different service levels for the eight users, the bandwidth values of the three levels being in the ratio 1:2:3. The service levels purchased by the eight users are shown in Table 1.
Table 1. Service levels purchased by the users
Referring to Fig. 1, the implementation steps of this embodiment are as follows:
Step 1: initialize the cache management parameters.
The cache management parameters are the conditions by which the cache governs the enqueueing of data frames, including but not limited to the number of users served, the number of data frames the cache space can hold, and the various thresholds of the user queues. Based on these parameters, the cache can decide whether an arriving data frame is enqueued, thereby managing the cache space effectively.
1.1) Let there be 8 users N_i to whom the cache management service is provided, with i from 1 to 8;
1.2) Let the total cache space hold M = 2400 data frames, with m the number held in real time, m ≤ M;
1.3) Let the total bandwidth provided by the central device be B = 120 MHz and the bandwidth purchased by user N_i be B_i; the bandwidth purchased by each of the 8 users is shown in Table 2;
Table 2. Bandwidth purchased by the users
1.4) Let the data frames of user N_i be divided into three priorities, high, medium and low, all stored in the user queue Q_i;
1.5) Let the queue threshold of user queue Q_i be M_i, with high-priority minimum threshold L_i_H, medium-priority minimum threshold L_i_M and low-priority minimum threshold L_i_L, where 1/n is the ratio of the sum of the high, medium and low minimum thresholds to the queue threshold M_i, with n ≥ 1, and a, b, c are the proportions of the high-, medium- and low-priority minimum thresholds, with a, b, c ≥ 0. These proportions can be set according to the actual priority situation; this embodiment sets, without limitation, a = 6, b = 3, c = 1. The queue threshold and the per-priority minimum thresholds of each of the 8 users are computed from the service level agreement SLA by the formulas of step (1); the resulting threshold values are shown in Table 3;
Table 3. SLA-based queue thresholds
1.6) Let the real-time length m_i of user queue Q_i be: m_i = m_i_H + m_i_M + m_i_L, where m_i_H, m_i_M and m_i_L are the numbers of high-, medium- and low-priority data frames;
1.7) Let the set of users whose real-time queue length exceeds the queue threshold be: C = {N_i | m_i > M_i, 1 ≤ i ≤ 8};
1.8) Let the user queue that exceeds its queue threshold by the most be: Q_max = {Q_i | m_i - M_i = B}, where B = max{m_i - M_i | N_i ∈ C} is the maximum difference between a real-time queue length and the corresponding queue threshold.
Step 2: judge whether the data frame of user N_7 arriving at the cache can enter the cache space.
Compare the number m of data frames held in real time with the total number M the cache can hold:
If the number m of data frames held in real time is less than 2400, the arriving data frame can enter the cache space; execute step 8;
If the number m of data frames held in real time equals 2400, execute step 3.
In this embodiment, it is assumed that the arriving data frame belongs to user N_7.
Step 3: judge whether to discard the data frame of user N_7 that has arrived.
Compare the real-time length m_7 of user queue Q_7 with its queue threshold M_7:
If the real-time length m_7 of user queue Q_7 is less than 400 at this moment, some other user queue must exceed its corresponding threshold: the cache space is full, i.e. m = 2400, yet m_7 is less than 400, so there must exist another user N_j, j ≠ 7, that has preempted cache space originally belonging to N_7, i.e. m_j > M_j, j ≠ 7; therefore another user is selected to discard a data frame, and step 4 is executed;
If the real-time length m_7 of user queue Q_7 is greater than or equal to 400 at this moment, a data frame of the user N_7 to whom the arriving frame belongs must be discarded; execute step 7.
Step 4: select the user that must discard a data frame and the corresponding user queue.
4.1) Traverse all users other than N_7, i.e. among N_1, N_2, N_3, N_4, N_5, N_6 and N_8, find the set C of users whose corresponding queue's real-time length exceeds the queue threshold:
C = {N_j | m_j > M_j, 1 ≤ j ≤ 8, j ≠ 7}.
In this embodiment, it is assumed that the users whose real-time queue length exceeds the queue threshold at this moment are N_1, N_3 and N_5, i.e.:
C = {N_j | m_j > M_j, 1 ≤ j ≤ 8, j ≠ 7} = {N_1, N_3, N_5}.
4.2) Traverse the users N_1, N_3 and N_5 in the set C and find the user queue Q_max corresponding to the user whose real-time queue length exceeds the queue threshold by the most:
Q_max = {Q_i | m_i - M_i = B}, where B = max{m_i - M_i | N_i ∈ C}.
In this embodiment, it is assumed that the user whose real-time queue length exceeds the queue threshold by the most is N_3, i.e.:
Q_max = {Q_i | m_i - M_i = B} = Q_3.
Therefore the user queue Q_3 corresponding to user N_3 must discard a data frame.
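Steps 4.1) and 4.2) amount to a single selection over the queue lengths. The sketch below uses made-up lengths and thresholds (the real values would come from Table 3) purely to illustrate the selection:

```python
def pick_preempt_queue(lengths, thresholds, exclude):
    """Steps 4.1)-4.2): among users other than `exclude`, build the set C of
    queues over their thresholds and return the one that exceeds by the most."""
    C = [j for j in range(len(lengths))
         if j != exclude and lengths[j] > thresholds[j]]
    # Q_max: the queue with the largest excess m_j - M_j
    return max(C, key=lambda j: lengths[j] - thresholds[j])
```

For example, with hypothetical lengths [210, 150, 260, 180, 230, 100, 380, 290] against thresholds [200, 200, 200, 200, 200, 200, 400, 400] and the arriving user at index 6, the set C is the queues at indices 0, 2 and 4 (excesses 10, 60, 30), so the queue at index 2 is the one that must drop a frame, mirroring the N_1/N_3/N_5 example above.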
Step 5: judge whether to discard a low-priority data frame from user queue Q_3.
Compare the number of low-priority data frames in the user queue Q_3 corresponding to user N_3 with its low-priority minimum threshold:
If the number m_3_L of low-priority data frames in user queue Q_3 exceeds 5, discard any low-priority data frame in Q_3 and execute step 8;
If the number m_3_L of low-priority data frames in user queue Q_3 does not exceed 5, a medium- or high-priority data frame in Q_3 must be discarded; execute step 6.
Step 6: judge whether to discard a medium-priority data frame from user queue Q_3.
Compare the number of medium-priority data frames in the user queue Q_3 corresponding to user N_3 with its medium-priority minimum threshold:
If the number m_3_M of medium-priority data frames in user queue Q_3 exceeds 10, discard any medium-priority data frame in Q_3 and execute step 8;
If the number m_3_M of medium-priority data frames in user queue Q_3 does not exceed 10, discard any high-priority data frame in Q_3 and execute step 8.
Step 7: select a data frame of user N7 to discard.
7.1) Determine whether the frame arriving from user N7 is of low priority:
If the arriving frame is low priority, discard the arriving frame and go to step 9;
if not, go to step 7.2).
7.2) Determine whether the arriving frame is of medium priority:
If it is medium priority, go to step 7.3);
if not, it must be high priority; go to step 7.4).
7.3) Determine whether user queue Q7 contains any low-priority frame:
If it does, discard any one low-priority frame in Q7 and go to step 8;
if not, discard the arriving frame and go to step 9.
7.4) Determine whether the low-priority frame count in queue Q7 exceeds the low-priority minimum threshold:
If the low-priority frame count m7_L exceeds 10, discard any one low-priority frame in Q7 and go to step 8;
if m7_L does not exceed 10, go to step 7.5).
7.5) Determine whether the medium-priority frame count in queue Q7 of user N7 exceeds the medium-priority minimum threshold:
If the medium-priority frame count m7_M does not exceed 30, discard any one medium-priority frame in Q7 and go to step 8;
if m7_M exceeds 30, discard the arriving frame directly and go to step 9.
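The branching of steps 7.1-7.5 can be condensed into a single function. This is a sketch, not the patent's implementation; the function name and the (where, priority) tuple convention are assumptions, while the default thresholds 10 and 30 come from the embodiment.

```python
# Sketch of step 7: decide whether to drop a queued frame of some priority
# ("queue", prio) or the newly arriving frame itself ("arrival", prio).
def step7_drop(arriving_prio, n_low, n_mid, low_min=10, mid_min=30):
    """arriving_prio: priority of the frame arriving from user N7.
    n_low / n_mid: low/medium-priority frame counts in queue Q7."""
    if arriving_prio == "low":                           # 7.1
        return ("arrival", "low")
    if arriving_prio == "medium":                        # 7.2 -> 7.3
        return ("queue", "low") if n_low > 0 else ("arrival", "medium")
    # arriving frame is high priority                    # 7.4
    if n_low > low_min:
        return ("queue", "low")
    if n_mid <= mid_min:                                 # 7.5
        return ("queue", "medium")
    return ("arrival", "high")
```

Note the asymmetry: a higher-priority arrival preempts queued lower-priority frames, but only down to the per-class minimum thresholds; past those floors the arrival itself is sacrificed.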
Step 8: insert the arriving data frame into the queue of the user it belongs to.
Step 9: determine whether any further data frames arrive.
If further frames arrive, return to step 2;
if no frame arrives, stop the management service.
The above description is only one specific example of the present invention and does not limit it in any way. It is obvious that those skilled in the art, after understanding the content and principles of the invention, may make modifications and changes in form and detail without departing from the principles and structure of the invention, but such modifications and changes based on the idea of the invention remain within the protection scope of the claims of the invention.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010732501.4A CN111917666A (en) | 2020-07-27 | 2020-07-27 | Data frame preemptive cache management method based on service level agreement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111917666A true CN111917666A (en) | 2020-11-10 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130343398A1 (en) * | 2012-06-20 | 2013-12-26 | Redline Communications Inc. | Packet-based communication system with traffic prioritization |
CN110493145A (en) * | 2019-08-01 | 2019-11-22 | 新华三大数据技术有限公司 | A kind of caching method and device |
CN111400206A (en) * | 2020-03-13 | 2020-07-10 | 西安电子科技大学 | Cache Management Method Based on Dynamic Virtual Threshold |
CN113206800A (en) * | 2021-03-15 | 2021-08-03 | 新华三信息安全技术有限公司 | Message caching method and device and network equipment |
Non-Patent Citations (1)
Title |
---|
Bie Yuxia et al., "Priority-based dual-queue buffer management algorithm for satellite terminals", Computer Simulation (《计算机仿真》) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112751788A (en) * | 2020-12-30 | 2021-05-04 | 西安云维智联科技有限公司 | Double-plane switching method supporting multi-type frame mixed transmission |
CN112751788B (en) * | 2020-12-30 | 2023-11-14 | 西安云维智联科技有限公司 | Double-plane switching method supporting multi-type frame mixed transmission |
Legal Events
Date | Code | Title
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| WD01 | Invention patent application deemed withdrawn after publication
Application publication date: 20201110