
CN102480430B - Method and device for realizing message order preservation - Google Patents

Method and device for realizing message order preservation

Info

Publication number
CN102480430B
CN102480430B (application CN201010570221.4A)
Authority
CN
China
Prior art keywords
forwarding
queue
queue area
core
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010570221.4A
Other languages
Chinese (zh)
Other versions
CN102480430A (en)
Inventor
曹淋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maipu Communication Technology Co Ltd
Original Assignee
Maipu Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maipu Communication Technology Co Ltd filed Critical Maipu Communication Technology Co Ltd
Priority to CN201010570221.4A priority Critical patent/CN102480430B/en
Publication of CN102480430A publication Critical patent/CN102480430A/en
Application granted granted Critical
Publication of CN102480430B publication Critical patent/CN102480430B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method and a device for preserving packet order. The method includes: A, a queue-area control unit on a network communication device stores received packets into a shared-queue-area input queue and provides that input queue to the first forwarding core on the device, i.e. the core that performs forwarding processing first; the first forwarding core polls the shared-queue-area input queue and fetches and processes each shared queue area in it in turn; B, after the first forwarding core has processed all packets of its corresponding packet queue in the current shared queue area, it judges whether it is the last forwarding core to perform forwarding processing; if not, it hands the processed shared queue area to the next forwarding core that performs forwarding processing and step C is executed; if so, it hands the processed shared queue area to an order-preserving processing unit on the network communication device and step D is executed; C, the next forwarding core that performs forwarding processing performs the operations performed by the first forwarding core in step B; D, the order-preserving processing unit fetches the packets in the shared queue area and serializes them in the order of reception for transmission. With the invention, problems such as buffering and aging timers are avoided while packet order is preserved.

Description

Method and device for realizing packet order preservation

Technical Field

The present invention relates to data communication technology, and in particular to a method and a device for preserving packet order.

Background Art

As physical interface rates keep increasing, the number of forwarding cores of the CPU in a network communication device increases accordingly. At present, the forwarding cores of the CPU are usually made to process packet flows in parallel in order to improve forwarding performance, as shown in FIG. 1.

Although this parallel working mode makes full use of the multi-core parallel forwarding capability and does improve forwarding performance in most cases, for a given forwarding core, when the traffic of a packet flow (essentially a set of packets sharing the same protocol key fields) exceeds the current load of that core, some packets of the flow must be distributed to other forwarding cores to avoid being dropped; that is, packets belonging to the same flow end up being processed by different forwarding cores. If the processing times of the different cores then do not match, the packets of that flow are not sent in the order in which they were received. Since a data communication system requires the transmission order of packets to be consistent with their reception order, the network communication device must perform an order-preserving operation on the packets of the same flow before they are sent.

In the prior art, packet order preservation inevitably brings with it buffering and aging-timer problems. Packets are distributed to different forwarding cores only when the traffic of a flow exceeds the current load of some core, so neither the distribution of a flow's packets among the cores nor their processing delays on those cores can be known in advance; as a result, the buffer capacity required for order preservation cannot be determined. If the buffer capacity is set too small, packets of the flow are discarded prematurely; if it is set too large, buffer resources are over-consumed, which in turn affects the system (for example, the receiving side cannot obtain packet buffers). The aging timer suffers from the same problem: because the distribution of a flow's packets among the cores and their processing delays cannot be known in advance, the required aging time cannot be determined either; setting it too short causes premature discarding, while setting it too long increases the buffering burden.

In summary, a packet order-preservation method that avoids the buffering and aging-timer problems is a technical problem that urgently needs to be solved.

Summary of the Invention

The present invention provides a method and a device for preserving packet order, so as to avoid buffering and aging-timer problems while preserving packet order.

A method for preserving packet order comprises:

A, a queue-area control unit on a network communication device stores received packets, in order, into a shared-queue-area input queue and provides that input queue to the first forwarding core, i.e. the core that performs forwarding processing first; the first forwarding core polls the shared-queue-area input queue and fetches and processes each shared queue area in it in turn;

B, after the first forwarding core has processed all packets of its corresponding packet queue in the current shared queue area, it judges whether it is the last forwarding core to perform forwarding processing; if not, it hands the processed shared queue area to the next forwarding core that performs forwarding processing, and step C is executed; if so, it hands the processed shared queue area to an order-preserving processing unit on the network communication device, and step D is executed;

C, the next forwarding core that performs forwarding processing performs the operations performed by the first forwarding core in step B;

D, the order-preserving processing unit fetches the packets in the shared queue area and serializes them in the order of reception for transmission.

A device for preserving packet order comprises at least one forwarding core participating in forwarding processing, a packet receive queue and a packet send queue, and is characterized in that it further comprises a queue-area control unit and an order-preserving processing unit, wherein:

the queue-area control unit is configured to store received packets into a shared-queue-area input queue and provide that input queue to the first forwarding core on the device, i.e. the core that performs forwarding processing first;

the first forwarding core polls the shared-queue-area input queue and fetches and processes the shared queue areas in it in turn; each time it has processed all packets of its corresponding packet queue in a shared queue area, it judges whether it is the last forwarding core to perform forwarding processing; if not, it hands the processed shared queue area to the next forwarding core that performs forwarding processing, which then performs the operations performed by the first forwarding core; if so, it hands the processed shared queue area to the order-preserving processing unit;

the order-preserving processing unit is configured to obtain the shared queue area and serialize the packets in it in the order of reception, so that the processed packets can be sent.

As can be seen from the above technical solution, in the present invention the first forwarding core, i.e. the core that performs forwarding processing first, processes the shared queue areas in the shared-queue-area input queue, where a shared queue area contains one packet queue for each forwarding core participating in forwarding processing. Each forwarding core hands the processed shared queue area to the next adjacent forwarding core that performs forwarding processing, and so on, until the last forwarding core hands the processed shared queue area to the order-preserving processing unit, which serializes the packets in the obtained shared queue area in the order of reception. Packets are therefore sent in the order in which they were received, i.e. packet order is preserved. With the order preservation provided by the invention, the shared queue areas in the shared-queue-area input queue are processed by the forwarding cores one after another in order; a given shared queue area is processed serially by the cores; and the processed shared queue areas enter the shared-queue-area output queue in the same order in which they entered the input queue, where the ordering processing is performed. No buffering or aging-timer problems are therefore involved.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the parallel working mode in the prior art;

FIG. 2 is a basic flowchart provided by an embodiment of the present invention;

FIG. 3 is a detailed flowchart provided by an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a shared queue area provided by an embodiment of the present invention;

FIG. 5 is a flowchart of an implementation of step 312 provided by an embodiment of the present invention;

FIG. 6 is a schematic diagram corresponding to the flow of FIG. 3 provided by an embodiment of the present invention;

FIG. 7 is a structural diagram of a device provided by an embodiment of the present invention;

FIG. 8 is a schematic diagram of the access channel between two adjacent forwarding cores of the present invention.

Detailed Description of the Embodiments

To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The method provided by the present invention mainly avoids the buffering and aging-timer problems involved in packet order preservation in the prior art; FIG. 2 describes the method provided by an embodiment of the present invention.

Referring to FIG. 2, FIG. 2 is a basic flowchart provided by an embodiment of the present invention. The flow is mainly applied to a network communication device which, in a specific implementation, may be a router or another device with routing functions, such as a switch; this is not specifically limited in the embodiments of the present invention. On this basis, as shown in FIG. 2, the flow may include the following steps:

Step 201: the queue-area control unit on the network communication device stores the received packets, in order, into the shared-queue-area input queue and provides that input queue to the first forwarding core on the network communication device, i.e. the core that performs forwarding processing first; the first forwarding core polls the shared-queue-area input queue and fetches and processes each shared queue area in it in turn.

In step 201, for the operation of the queue-area control unit storing received packets into the shared-queue-area input queue, refer to the description of steps 302 to 305 in FIG. 3.

Step 202: after the first forwarding core has processed all packets of its corresponding packet queue in the current shared queue area, it judges whether it is the last forwarding core to perform forwarding processing; if not, it hands the processed shared queue area to the next forwarding core that performs forwarding processing and step 203 is executed; if so, it hands the processed shared queue area to the order-preserving processing unit on the network communication device and step 204 is executed.

For details of the operations performed in step 202, refer to steps 306 to 309 in FIG. 3.

In addition, in this embodiment, the first forwarding core is also the last forwarding core to perform forwarding processing only in the case where the network communication device contains a single forwarding core used for forwarding processing.

Step 203: the next forwarding core that performs forwarding processing performs the operations performed by the first forwarding core in step 202.

From steps 202 and 203 it can be seen that the first forwarding core processes a shared queue area first and hands the processed shared queue area to the next forwarding core, and so on, until the last forwarding core to perform forwarding processing hands the processed shared queue area to the order-preserving processing unit, which then executes step 204.

Step 204: the order-preserving processing unit fetches the packets in the shared queue area and serializes them in the order of reception for transmission.

For step 204, refer to the description of steps 311 and 312 in FIG. 3.

The method provided by the embodiment of the present invention has been briefly described above.

In the above flow, all forwarding cores participating in forwarding on the network communication device may be numbered in sequence in advance. On this basis, the first forwarding core in this embodiment may be the core with the smallest number, in which case the last forwarding core to perform forwarding processing is the core with the largest number; alternatively, the first forwarding core is the core with the largest number and the last forwarding core is the core with the smallest number. In the following, the case where the first forwarding core is the core with the smallest number and the last forwarding core is the core with the largest number is taken as an example; the other case works on the same principle. The flow shown in FIG. 2 is described in detail with reference to FIG. 3.

Referring to FIG. 3, FIG. 3 is a detailed flowchart provided by an embodiment of the present invention. In this embodiment, all forwarding cores participating in forwarding on the network communication device have exactly the same function, namely forwarding processing of packets. The cores are numbered in sequence as core-1, core-2, ..., core-N, where the first forwarding core, the one that performs forwarding processing first, is the core with the smallest number, core-1, and the last forwarding core to perform forwarding processing is the core with the largest number, core-N.

On this basis, as shown in FIG. 3, the flow may include the following steps:

Step 301: the packet receiver on the network communication device stores received packets into the packet receive queue in first-in-first-out (FIFO) order.

Step 302: when the set time arrives, the queue-area control unit on the network communication device polls the packet receive queue; if the receive queue is empty, the current flow ends; otherwise, step 303 is executed.

In step 302, the queue-area control unit polls whether the packet receive queue is empty either in real time or periodically at a fixed interval.

Step 303: the queue-area control unit fetches all packets from the packet receive queue.

Step 304: the queue-area control unit creates a shared queue area and, starting from the packet queue corresponding to core-1 in the shared queue area, puts packets into the packet queues corresponding to the forwarding cores in order.

In this embodiment, a shared queue area contains one packet queue for each forwarding core participating in forwarding processing, where the packet queue of each core is stored through a queue list head created for that core, in FIFO order. Thus, by the time step 304 is executed, every forwarding core participating in forwarding processing on the network communication device, i.e. core-1, core-2, ..., core-N, has a queue list head used to store its corresponding packet queue.

In a specific implementation of this embodiment, the shared queue area is the packet chain block formed by the packet queues stored side by side in order, starting from the packet queue corresponding to core-1, as shown in FIG. 4.
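
For illustration only, a minimal C sketch of one possible in-memory layout of such a shared queue area is given below; the type and field names (pkt, pkt_queue, shared_queue_area) are illustrative assumptions and not taken from the patent, which only requires one FIFO packet queue per forwarding core plus a way to pass the whole area along as a unit.

```c
#include <stddef.h>

#define MAX_FWD_CORES 8          /* assumed upper bound on forwarding cores */

/* One received packet; 'next' chains packets inside a per-core queue. */
struct pkt {
    struct pkt *next;
    void       *data;
    size_t      len;
};

/* FIFO packet queue owned by one forwarding core (its "queue list head"). */
struct pkt_queue {
    struct pkt *head;
    struct pkt *tail;
    unsigned    count;
};

/* Shared queue area: one packet queue per forwarding core, laid out in core
 * order (core-1 first), plus a link used when the area itself is enqueued
 * into a shared-queue-area input or output queue. */
struct shared_queue_area {
    struct shared_queue_area *next;
    unsigned                  num_cores;
    struct pkt_queue          queues[MAX_FWD_CORES];
};
```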

In addition, in step 304 the queue-area control unit may put packets into the per-core packet queues on the principle that the packets are evenly distributed among the forwarding cores. Specifically, the queue-area control unit determines the number M of packets to be put into each packet queue, then divides all fetched packets into groups of M in order and, starting from the packet queue corresponding to core-1, puts M packets into the packet queue of each forwarding core in turn. For example, suppose the network communication device has three forwarding cores, core-1 to core-3, and the queue-area control unit fetches 12 packets, packet 1 to packet 12, from the packet receive queue; then each core's packet queue holds 4 packets, i.e. core-1's queue holds packets 1 to 4, core-2's queue holds packets 5 to 8, and core-3's queue holds packets 9 to 12.
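
The even-distribution rule of step 304 can be sketched as follows, reusing the illustrative types above; M is rounded up here so that every packet is placed even when the total does not divide evenly (the last core then simply receives fewer), which reduces to the patent's example of 12 packets over 3 cores with M = 4.

```c
/* Append one packet to a per-core FIFO queue. */
static void pkt_queue_push(struct pkt_queue *q, struct pkt *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
    q->count++;
}

/* Even distribution (step 304): M packets per core, filled in core order
 * starting from core-1 (index 0). */
static void distribute_evenly(struct shared_queue_area *sqa,
                              struct pkt **pkts, unsigned total)
{
    unsigned m = (total + sqa->num_cores - 1) / sqa->num_cores;  /* M, rounded up */

    for (unsigned i = 0; i < total; i++)
        pkt_queue_push(&sqa->queues[i / m], pkts[i]);
}
```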

Of course, as another implementation of the embodiment of the present invention, in step 304 the queue-area control unit may instead put packets into the per-core packet queues according to the current load of each forwarding core and the number of packets fetched. Specifically: step 1, the queue-area control unit takes core-1 as the current core; step 2, the queue-area control unit determines, according to the current load of the current core and the number of fetched packets, the number X of packets to be put into the packet queue of the current core; step 3, the queue-area control unit puts X packets into the packet queue of the current core in order, then sets the next forwarding core as the current core and returns to step 2. Again take the case of three forwarding cores, core-1 to core-3, with the queue-area control unit fetching 12 packets, packet 1 to packet 12, from the receive queue. If core-1 is heavily loaded and can currently carry only 2 packets, core-2 can currently carry only 4 packets, and core-3 is lightly loaded and can carry at least 6 packets, then, following steps 1 to 3 above, packets 1 and 2 are stored in core-1's packet queue, packets 3 to 6 in core-2's packet queue, and packets 7 to 12 in core-3's packet queue. Of course, if core-2 were lightly loaded and could carry at least 10 packets, packets 3 to 12 could be stored directly in core-2's packet queue, with no packets stored in core-3's queue; this depends on the specific situation and is not specifically limited in the embodiments of the present invention.
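
The load-based variant can be sketched in the same style; core_free_capacity() is a placeholder for whatever per-core load metric an implementation actually exposes, and letting the last core absorb the remainder is an assumption made so that no packet is dropped at this stage.

```c
/* Assumed hook: how many more packets core 'idx' can accept right now. */
extern unsigned core_free_capacity(unsigned idx);

/* Load-based distribution (steps 1 to 3 above): walk the cores in order,
 * giving each core at most its currently reported capacity. */
static void distribute_by_load(struct shared_queue_area *sqa,
                               struct pkt **pkts, unsigned total)
{
    unsigned i = 0;

    for (unsigned core = 0; core < sqa->num_cores && i < total; core++) {
        unsigned x = core_free_capacity(core);          /* X for this core */

        if (core == sqa->num_cores - 1 || x > total - i)
            x = total - i;                              /* cap at, or absorb, the remainder */
        while (x-- && i < total)
            pkt_queue_push(&sqa->queues[core], pkts[i++]);
    }
}
```

With capacities 2, 4 and 6 for core-1 to core-3 and 12 packets, this reproduces the split into packets 1 to 2, 3 to 6 and 7 to 12 described above.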

Step 305: the queue-area control unit puts the shared queue areas holding packets into the shared-queue-area input queue in order.

The shared-queue-area input queue in step 305 is essentially a packet transmission channel between the queue-area control unit and core-1, used to store, in FIFO order, the shared queue areas placed into it.

It should be noted that, in this embodiment, before step 305 is executed, a corresponding packet queue must be created in each shared queue area for every forwarding core that performs forwarding processing.
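
One pass of the queue-area control unit (steps 302 to 305) could then be tied together roughly as below; recv_queue_drain(), sqa_alloc() and sqa_input_enqueue() are assumed helpers standing in for the packet receive queue, shared-queue-area creation and the shared-queue-area input queue, and the batch bound is arbitrary.

```c
extern unsigned recv_queue_drain(struct pkt **pkts, unsigned max);  /* steps 302-303 */
extern struct shared_queue_area *sqa_alloc(unsigned num_cores);     /* step 304: create area */
extern void sqa_input_enqueue(struct shared_queue_area *sqa);       /* step 305: FIFO to core-1 */

#define CTRL_BATCH 256          /* assumed upper bound on packets drained per pass */

/* One polling pass of the queue-area control unit. */
static void queue_area_control_pass(unsigned num_cores)
{
    struct pkt *pkts[CTRL_BATCH];
    unsigned total = recv_queue_drain(pkts, CTRL_BATCH);

    if (total == 0)
        return;                                   /* receive queue empty: nothing to do */

    struct shared_queue_area *sqa = sqa_alloc(num_cores);
    distribute_evenly(sqa, pkts, total);          /* or distribute_by_load(sqa, pkts, total) */
    sqa_input_enqueue(sqa);                       /* hand the filled area to core-1 */
}
```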

Step 306: core-1 polls the shared-queue-area input queue and fetches each shared queue area in it in turn.

Step 307: for each fetched shared queue area (referred to as the current shared queue area), core-1 judges whether its own packet queue in the current shared queue area is empty; if so, step 308 is executed; otherwise, core-1 processes all packets in its own packet queue and then executes step 309.

If, in step 304, the queue-area control unit put packets into the per-core packet queues according to the current load of each forwarding core and the number of fetched packets, and core-1 was so heavily loaded when the current shared queue area was formed that it could carry no further packets, then, when step 307 is reached, core-1's packet queue in the current shared queue area will be empty. To keep the embodiment general, the judgment in step 307 is therefore needed.

In step 307, the packets in the packet queue are processed according to whatever processing operations those packets require; this is not limited here.

Step 308: core-1 judges whether it is the forwarding core with the largest number; if so, it stores the current shared queue area into the previously created shared-queue-area output queue and executes step 311; otherwise, it executes step 309.

The reason core-1 judges in step 308 whether it is the core with the largest number is to determine whether it is the last forwarding core to perform forwarding processing. Core-1 is the last forwarding core only in the case where it is the only forwarding core of the network communication device.

The shared-queue-area output queue in step 308 is the packet transmission channel between the last forwarding core to perform forwarding processing and the order-preserving processing unit; the shared queue areas placed into it are stored in FIFO order. Since all packets in a shared queue area have been completely processed once the area has passed through the last forwarding core, enqueueing the processed shared queue area into the output queue allows the order-preserving processing unit to perform order preservation on the processed packets.

Step 309: core-1 stores the processed shared queue area into the inter-core access channel corresponding to the next forwarding core; that inter-core access channel can be organized as a shared-queue-area input queue.

In this embodiment, the inter-core access channel corresponding to the next forwarding core is the packet transmission channel between that core and its immediately preceding forwarding core; it stores, in FIFO order, the shared queue areas processed by the preceding core and can therefore be organized as a shared-queue-area input queue, as shown in FIG. 8. Taking core-2 as the next forwarding core in step 309 as an example, step 309 is specifically: core-1 stores the processed shared queue area into the inter-core access channel corresponding to core-2, where that channel is the packet transmission channel between core-1 and core-2; it stores, in FIFO order, the shared queue areas processed by core-1 and can be organized as a shared-queue-area input queue.
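
The per-core loop of steps 306 to 309 might then look as sketched below. The sqa_fifo type with sqa_fifo_pop() and sqa_fifo_push() is an assumed stand-in for the shared-queue-area input queue, the inter-core access channels and the output queue, all of which the patent describes as FIFO transmission channels; process_packet() is a placeholder for whatever forwarding work a packet needs.

```c
/* Assumed FIFO of shared queue areas; used alike for the input queue, the
 * inter-core access channels and the output queue. */
struct sqa_fifo;
extern struct shared_queue_area *sqa_fifo_pop(struct sqa_fifo *f);   /* NULL if empty */
extern void sqa_fifo_push(struct sqa_fifo *f, struct shared_queue_area *sqa);
extern void process_packet(struct pkt *p);    /* forwarding work, not specified here */

/* Loop run by forwarding core 'my_idx' (0-based, so core-1 is index 0).
 * 'in' is the core's own input channel; 'out' is either the next core's
 * inter-core access channel or, for the last core, the output queue. */
static void forwarding_core_loop(unsigned my_idx,
                                 struct sqa_fifo *in, struct sqa_fifo *out)
{
    for (;;) {
        struct shared_queue_area *sqa = sqa_fifo_pop(in);
        if (!sqa)
            continue;                        /* keep polling (note before step 310) */

        struct pkt_queue *q = &sqa->queues[my_idx];
        for (struct pkt *p = q->head; p; p = p->next)   /* step 307; an empty queue is skipped */
            process_packet(p);

        sqa_fifo_push(out, sqa);             /* step 308 or 309: emit or hand on */
    }
}
```

Passing the same shared queue area pointer from core to core is what serializes the cores with respect to one shared queue area while still letting different cores work on different areas at the same time.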

Step 310: the next forwarding core performs operations similar to those performed by core-1 in steps 306 to 309.

It should be noted that, because the packet queues of the different forwarding cores in each shared queue area may contain different numbers of packets, the cores do not process the shared queue areas in lockstep. Therefore, preferably, before the next forwarding core performs the operation performed by core-1 in step 306, it may check in real time whether its own shared-queue-area input queue is empty; if so, it keeps performing this check; otherwise, it performs the operation performed by core-1 in step 306.

Step 311: the order-preserving processing unit polls the shared-queue-area output queue and fetches the shared queue areas in turn.

It should be noted that, because the operations performed by the order-preserving processing unit and by the last forwarding core do not follow a fixed chronological order, preferably, before step 311 is executed, the order-preserving processing unit may check in real time whether the shared-queue-area output queue is empty; if so, it keeps performing this check; otherwise, it executes step 311.

Step 312: for each fetched shared queue area, the order-preserving processing unit accesses the packet queue corresponding to each forwarding core in order and stores the packets of each accessed packet queue, in order, into the packet send queue of the network communication device.

Step 312 can be implemented in a number of ways; FIG. 5 shows one of them.

Referring to FIG. 5, FIG. 5 is a flowchart of an implementation of step 312 provided by an embodiment of the present invention. As shown in FIG. 5, the flow may include the following steps:

Step 501: create a packet chain L whose initial value is empty.

That is, when step 501 is executed, the packet chain L holds no packets.

Step 502: the order-preserving processing unit accesses the packet queue Q corresponding to core-S in the fetched shared queue area.

Since the forwarding cores in this embodiment are numbered starting from 1, S is 1 in the initial stage.

In this embodiment, because the packet queue of each forwarding core is stored through that core's queue list head, step 502 can access the packet queue Q corresponding to core-S via core-S's queue list head.

Step 503: the order-preserving processing unit judges whether the accessed packet queue Q is empty; if so, step 504 is executed; otherwise, the packets of packet queue Q are appended to the packet chain L in order, after which step 504 is executed.

Step 504: the order-preserving processing unit judges whether core-S is the forwarding core with the largest number; if not, step 505 is executed; if so, step 506 is executed.

Step 505: set S = S + 1 and return to step 502.

Step 506: put the packet chain L into the packet send queue; the packet sender on the network communication device sends the packet chain L in the packet send queue.

At this point, the operation of step 312 has been realized through the flow shown in FIG. 5.
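
The walk of FIG. 5 (steps 501 to 506) can be sketched as follows, again reusing the illustrative types above; send_queue_push() stands in for handing the assembled packet chain L to the packet send queue.

```c
extern void send_queue_push(struct pkt *chain);   /* assumed hand-off to the packet sender */

/* Steps 501 to 506: visit the per-core queues of one fully processed shared
 * queue area in core order and splice their packets into a single chain L,
 * which is then placed on the packet send queue in reception order. */
static void serialize_sqa(struct shared_queue_area *sqa)
{
    struct pkt *l_head = NULL, *l_tail = NULL;      /* packet chain L, initially empty */

    for (unsigned s = 0; s < sqa->num_cores; s++) { /* core-1 .. core-N */
        struct pkt_queue *q = &sqa->queues[s];
        if (!q->head)
            continue;                               /* step 503: empty queue, next core */
        if (l_tail)
            l_tail->next = q->head;                 /* append queue Q to L in order */
        else
            l_head = q->head;
        l_tail = q->tail;
    }

    if (l_head)
        send_queue_push(l_head);                    /* step 506 */
}
```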

From the description of steps 301 to 312 it can be seen that, in this embodiment, the forwarding cores on the network communication device do not process the packets of one batch truly in parallel; instead, core-1 first processes the packets in a shared queue area and hands the processed shared queue area to core-2, and so on, until core-N, the last forwarding core to perform forwarding processing, hands the processed shared queue area to the order-preserving processing unit, which serializes the packets in the obtained shared queue area in the order of reception, i.e. packets are sent in the order in which they were received. At any given time, however, each forwarding core is processing a different shared queue area, so in a sense every core is still working in parallel.

To make the flow shown in FIG. 3 clearer, the embodiment of the present invention provides a specific schematic diagram corresponding to the flow of FIG. 3, as shown in FIG. 6.

The method provided by the embodiments of the present invention has been described above; the device provided by the embodiments of the present invention is described below.

Referring to FIG. 7, FIG. 7 is a structural diagram of a device provided by an embodiment of the present invention. In a specific implementation the device may be a router or another device with routing functions, such as a switch; this is not specifically limited in the embodiments of the present invention. As shown in FIG. 7, the device may include at least one forwarding core 701 participating in forwarding processing, a packet receive queue 702 and a packet send queue 703, and further includes a queue-area control unit 704, an order-preserving processing unit 705 and a shared-queue-area input queue 706.

The queue-area control unit 704 is configured to store received packets into the shared-queue-area input queue 706 and provide that input queue 706 to the first forwarding core on the device, i.e. the core that performs forwarding processing first.

The first forwarding core polls the shared-queue-area input queue 706, fetches and processes the shared queue areas in it in turn and, each time it has processed all packets of its corresponding packet queue in a shared queue area, judges whether it is the last forwarding core to perform forwarding processing; if not, it hands the processed shared queue area to the next forwarding core that performs forwarding processing, which then performs the operations performed by the first forwarding core; if so, it hands the processed shared queue area to the order-preserving processing unit 705. In this embodiment, when processing a shared queue area, the first forwarding core first judges whether its own packet queue in the fetched current shared queue area is empty; if it is not empty, it processes the current shared queue area; if it is empty, it directly judges whether it is the last forwarding core to perform forwarding processing, and if so hands the current shared queue area to the order-preserving processing unit 705, otherwise hands it to the next forwarding core that performs forwarding processing, which then performs the operations performed by the first forwarding core.

The order-preserving processing unit 705 is configured to obtain a shared queue area and serialize the packets in it in the order of reception, so that the processed packets can be sent.

In this embodiment, all forwarding cores on the device that participate in forwarding processing are numbered in sequence. On this basis, the first forwarding core is the core with the smallest number and the last forwarding core to perform forwarding processing is the core with the largest number; or the first forwarding core is the core with the largest number and the last forwarding core to perform forwarding processing is the core with the smallest number.

In this embodiment, if at least one forwarding core participating in forwarding processing exists on the device, then, as shown in FIG. 7, the device further includes at least one inter-core access channel 707.

Taking the case where the forwarding cores participating in forwarding processing on the device are, in order, the first forwarding core, the second forwarding core, the third forwarding core, ..., the N-th forwarding core as an example, the relationship between the shared-queue-area input queue 706 and the inter-core access channels is shown in FIG. 8. Each inter-core access channel is used to store the shared queue areas processed by the immediately preceding forwarding core, so it too can be organized as a shared-queue-area input queue.

On this basis, when the first forwarding core determines that it is not the last forwarding core to perform forwarding processing, it hands the processed shared queue area to the inter-core access channel corresponding to the next forwarding core that performs forwarding processing, for example the second forwarding core, and that next forwarding core performs operations similar to those performed by the first forwarding core.
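
Because each inter-core access channel has exactly one producer (the preceding forwarding core, or the queue-area control unit for the channel feeding the first core) and one consumer (the following core), it could be realized as a simple single-producer single-consumer ring of pointers to shared queue areas. The sketch below is one possible shape under that assumption, not an implementation prescribed by the patent.

```c
#include <stdatomic.h>

#define SQA_RING_SLOTS 64u       /* assumed channel depth */

/* SPSC ring carrying pointers to shared queue areas between two adjacent
 * forwarding cores (or between the control unit and the first core). */
struct sqa_channel {
    struct shared_queue_area *slot[SQA_RING_SLOTS];
    _Atomic unsigned head;       /* advanced by the consumer */
    _Atomic unsigned tail;       /* advanced by the producer */
};

/* Producer side: returns 0 if the channel is currently full. */
static int sqa_channel_push(struct sqa_channel *c, struct shared_queue_area *sqa)
{
    unsigned t = atomic_load_explicit(&c->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&c->head, memory_order_acquire);

    if (t - h == SQA_RING_SLOTS)
        return 0;
    c->slot[t % SQA_RING_SLOTS] = sqa;
    atomic_store_explicit(&c->tail, t + 1, memory_order_release);
    return 1;
}

/* Consumer side: returns NULL if the channel is currently empty. */
static struct shared_queue_area *sqa_channel_pop(struct sqa_channel *c)
{
    unsigned h = atomic_load_explicit(&c->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&c->tail, memory_order_acquire);

    if (h == t)
        return NULL;
    struct shared_queue_area *sqa = c->slot[h % SQA_RING_SLOTS];
    atomic_store_explicit(&c->head, h + 1, memory_order_release);
    return sqa;
}
```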

Preferably, as shown in FIG. 7, the queue-area control unit 704 may include:

an acquiring subunit 7041, configured to poll the packet receive queue and, once it finds that packets exist in the receive queue, fetch all packets from it;

a creating subunit 7042, configured to create a shared queue area, the shared queue area including a packet queue corresponding to each forwarding core participating in forwarding processing;

a processing subunit 7043, configured to put packets, in order, into the packet queues corresponding to the forwarding cores, starting from the packet queue corresponding to the first forwarding core in the shared queue area;

a constructing subunit 7044, configured to put the shared queue areas holding packets into the shared-queue-area input queue 706 in order.

When putting packets into the per-core packet queues, the processing subunit 7043 may do so on the principle that packets are evenly distributed among the forwarding cores, or according to the current load of each forwarding core and the number of packets fetched.

Preferably, as shown in FIG. 7, the device further includes a shared-queue-area output queue 708.

The shared-queue-area output queue 708 is the packet transmission channel between the last forwarding core to perform forwarding processing and the order-preserving processing unit 705, and is used to store the shared queue areas processed by the last forwarding core so that the order-preserving processing unit 705 can perform order preservation on the processed packets in them. On this basis, the last forwarding core handing the processed shared queue area to the order-preserving processing unit includes: handing the processed shared queue area to the shared-queue-area output queue, from which the order-preserving processing unit 705 obtains the processed packets in the shared queue area by polling.
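
Combining the earlier sketches, the loop of the order-preserving processing unit around the shared-queue-area output queue could look roughly like this; how an emptied shared queue area is recycled afterwards is left open, since the patent does not specify it.

```c
/* Loop of the order-preserving processing unit: poll the output queue and
 * serialize each fully processed shared queue area in reception order. */
static void order_preserving_unit_loop(struct sqa_fifo *output_queue)
{
    for (;;) {
        struct shared_queue_area *sqa = sqa_fifo_pop(output_queue);
        if (!sqa)
            continue;             /* keep polling, as noted before step 311 */
        serialize_sqa(sqa);       /* walk the per-core queues, push chain L to the send queue */
        /* the emptied shared queue area could be freed or recycled here */
    }
}
```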

In the embodiment of the present invention, as shown in FIG. 7, the order-preserving processing unit 705 may specifically include:

a polling subunit 7051, configured to poll the shared-queue-area output queue 708 and hand the shared queue areas in it to a packet processing subunit 7052;

a packet processing subunit 7052, configured to access the packet queue corresponding to each forwarding core, in order, from the obtained shared queue area, and to store the packets of each accessed packet queue, in order, into the packet send queue.

The device provided by the embodiments of the present invention has been described above.

As can be seen from the above technical solution, in the present invention the first forwarding core, i.e. the core that performs forwarding processing first, processes the shared queue areas in the shared-queue-area input queue, where a shared queue area contains one packet queue for each forwarding core participating in forwarding processing. Each forwarding core hands the processed shared queue area to the next adjacent forwarding core that performs forwarding processing, and so on, until the last forwarding core hands the processed shared queue area to the order-preserving processing unit, which serializes the packets in the obtained shared queue area in the order of reception. Packets are therefore sent in the order in which they were received, i.e. packet order is preserved. With the order preservation provided by the invention, the shared queue areas in the shared-queue-area input queue are processed by the forwarding cores one after another in order; a given shared queue area is processed serially by the cores; and the processed shared queue areas enter the shared-queue-area output queue in the same order in which they entered the input queue, where the ordering processing is performed. No buffering or aging-timer problems are therefore involved.

The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for realizing message order preservation is characterized in that the method comprises the following steps:
a, a queue area control unit on the network communication equipment sequentially stores received messages into a shared queue area input queue, and provides the shared queue area input queue to a first forwarding core which firstly executes forwarding processing, and the first forwarding core polls the shared queue area input queue, and sequentially acquires and processes each shared queue area in the shared queue area input queue;
b, after the first forwarding core finishes processing all the messages of the corresponding message queue in the current shared queue area, judging whether the first forwarding core is the last forwarding core for executing forwarding processing, if not, providing the processed shared queue area for the next forwarding core for executing forwarding processing, and executing the step C, if so, providing the processed shared queue area for an order-preserving processing unit on the network communication equipment, and executing the step D;
c, the next forwarding core executing the forwarding processing executes the operation executed by the first forwarding core in the step B;
d, the order-preserving processing unit acquires the messages in the shared queue area and carries out serialization processing according to the receiving sequence for sending the messages;
in step a, the step of sequentially storing the received messages into the input queue of the shared queue area by the queue area control unit includes:
a1, when the queue area control unit polls the message receiving queue each time, obtaining all messages from the message receiving queue, where the message receiving queue is used to store the messages received by the network communication device;
a2, the queue area control unit creates a shared queue area, the shared queue area includes a message queue corresponding to each forwarding core participating in forwarding processing, and messages are sequentially put into the message queues corresponding to the forwarding cores in sequence from the message queue corresponding to the first forwarding core in the shared queue area;
a3, the queue area control unit puts the shared queue areas storing messages into the shared queue area input queue in sequence.
2. The method of claim 1, further comprising, prior to execution of the method: numbering all forwarding cores participating in forwarding processing on the network communication equipment according to a sequence;
the first forwarding core is the forwarding core with the smallest number, and the last forwarding core executing forwarding processing is the forwarding core with the largest number; or,
the first forwarding core is the forwarding core with the largest number, and the last forwarding core executing the forwarding processing is the forwarding core with the smallest number.
3. The method according to claim 1, wherein in step a2, the queue area control unit puts packets into the packet queues corresponding to the forwarding cores according to a principle that packets are evenly distributed by the forwarding cores; or,
and the queue area control unit puts the messages into the message queues corresponding to the forwarding cores according to the current loads of the forwarding cores and the obtained number of all the messages.
4. The method of claim 1, further comprising, prior to performing step B: establishing a shared queue area output queue, wherein the shared queue area output queue is used for storing a shared queue area processed by a last forwarding core executing forwarding processing so that an order-preserving processing unit can carry out order-preserving processing on the processed messages in the shared queue area;
in step B, providing the processed shared queue area to the order preserving processing unit includes: and providing the processed shared queue area to the output queue of the shared queue area, and acquiring the processed message in the shared queue area by the order-preserving processing unit through polling the output queue of the shared queue area.
5. The method of claim 4, wherein step D comprises:
d1, polling the output queue of the shared queue area by the order-preserving processing unit to sequentially obtain the shared queue area;
and D2, the order-preserving processing unit sequentially accesses the message queues corresponding to each forwarding core from the obtained shared queue area in sequence, and sequentially stores the messages in the accessed message queues into the message sending queues of the network communication equipment in sequence.
6. An apparatus for implementing packet order preservation, comprising at least one forwarding core participating in forwarding processing, a packet receiving queue and a packet sending queue, the apparatus further comprising: a queue area control unit and an order preserving processing unit; wherein,
the queue area control unit is used for storing the received message into a shared queue area input queue and providing the shared queue area input queue to a first forwarding core which firstly executes forwarding processing on the device;
the first forwarding core is used for polling the input queue of the shared queue area, sequentially acquiring and processing the shared queue area in the input queue of the shared queue area, judging whether the first forwarding core is the last forwarding core for executing forwarding processing after processing all messages of a corresponding message queue in one shared queue area, if not, providing the processed shared queue area to the next forwarding core for executing forwarding processing, executing the operation executed by the first forwarding core by the next forwarding core for executing forwarding processing, and if so, providing the processed shared queue area to the order-preserving processing unit;
the order-preserving processing unit is used for acquiring the shared queue area and serializing the acquired messages in the shared queue area according to the receiving sequence so as to send the processed messages;
wherein the queue area control unit comprises:
an acquiring subunit, configured to poll the packet receiving queue and, upon finding that messages exist in the packet receiving queue, acquire all of those messages at once;
a creating subunit, configured to create a shared queue area, the shared queue area containing one message queue for each forwarding core participating in forwarding processing;
a processing subunit, configured to place the acquired messages, in order and starting from the message queue corresponding to the first forwarding core, into the message queues of the forwarding cores in the shared queue area, either distributing the messages evenly among the forwarding cores or distributing them according to the current load of the forwarding cores and the total number of acquired messages;
a construction subunit, configured to place the shared queue areas holding the messages into the shared queue area input queue in order.
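
A companion sketch of the queue area control unit of claim 6 follows, continuing with the types from the sketch above. Only the even-distribution branch of the processing subunit is shown; rx_queue_drain, sqa_alloc and sqa_input_queue_push are assumed helpers standing in for the acquiring, creating and construction subunits.

```c
/* Assumed helpers standing in for the subunits of the queue area control unit. */
size_t rx_queue_drain(struct pkt **buf, size_t max);        /* acquiring subunit: take all queued packets */
struct shared_queue_area *sqa_alloc(void);                  /* creating subunit: one message queue per core */
void sqa_input_queue_push(struct shared_queue_area *sqa);   /* construction subunit */

static void pkt_queue_push(struct pkt_queue *q, struct pkt *p)
{
    q->slots[q->tail++ % 256] = p;        /* capacity handling omitted for brevity */
}

void queue_area_control_cycle(void)
{
    struct pkt *batch[1024];
    size_t n = rx_queue_drain(batch, 1024);
    if (n == 0)
        return;

    struct shared_queue_area *sqa = sqa_alloc();

    /* Processing subunit, even-split case: packets are assigned in arrival
     * order, starting with core 0, so draining the queues in core order later
     * reproduces the arrival order. */
    size_t per_core = (n + NUM_FWD_CORES - 1) / NUM_FWD_CORES;
    for (size_t i = 0; i < n; i++)
        pkt_queue_push(&sqa->per_core_q[i / per_core], batch[i]);

    sqa_input_queue_push(sqa);            /* hand the filled area to the first forwarding core */
}
```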
7. The apparatus of claim 6, wherein all forwarding cores on the apparatus participating in forwarding processing are numbered in order; and
the first forwarding core is the forwarding core with the smallest number and the last forwarding core executing forwarding processing is the forwarding core with the largest number; or
the first forwarding core is the forwarding core with the largest number and the last forwarding core executing forwarding processing is the forwarding core with the smallest number.
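
The numbering in claim 7 fixes the order in which the cores see a shared queue area. The sketch below shows one forwarding core's loop under the "smallest number first" variant, again reusing the earlier types; core_input_pop, core_output_push and forward_packet are assumed stand-ins for the inter-core hand-off and the actual forwarding work, not identifiers from the patent.

```c
/* Assumed hand-off helpers: for core 0 the input is the shared queue area
 * input queue; for core N it is the channel fed by core N-1. The output of
 * the largest-numbered core is the shared queue area output queue. */
bool core_input_pop(int core_id, struct shared_queue_area **out);
void core_output_push(int core_id, struct shared_queue_area *sqa);
void forward_packet(int core_id, struct pkt *p);

void forwarding_core_loop(int core_id)        /* core_id in 0 .. NUM_FWD_CORES-1 */
{
    struct shared_queue_area *sqa;

    for (;;) {
        if (!core_input_pop(core_id, &sqa))
            continue;

        /* Process only the message queue assigned to this core, in place,
         * so the packets stay queued for the order-preserving unit. */
        struct pkt_queue *q = &sqa->per_core_q[core_id];
        for (size_t i = q->head; i != q->tail; i++)
            forward_packet(core_id, q->slots[i % 256]);

        core_output_push(core_id, sqa);       /* next core, or the output queue if this core is last */
    }
}
```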
8. The apparatus of claim 6 or 7, further comprising:
a shared queue area output queue, configured to store the shared queue area processed by the last forwarding core executing forwarding processing, so that the order-preserving processing unit can perform order-preserving processing on the processed messages in the shared queue area;
wherein the last forwarding core executing forwarding processing provides the processed shared queue area to the order-preserving processing unit by placing the processed shared queue area into the shared queue area output queue, and the order-preserving processing unit acquires the processed messages in the shared queue area by polling the shared queue area output queue.
9. The apparatus of claim 8, wherein the order-preserving processing unit comprises:
a polling subunit, configured to poll the shared queue area output queue and deliver each shared queue area in the output queue to a message processing subunit; and
the message processing subunit, configured to access, in order, the message queue corresponding to each forwarding core in the acquired shared queue area, and to store the messages of each accessed message queue, in order, into the packet sending queue.
10. The apparatus of claim 9, further comprising: at least one inter-core access channel, configured to store the shared queue area processed by the preceding forwarding core adjacent in the processing sequence, the inter-core access channel being capable of being organized as a shared queue area input queue.
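
One plausible realization of the inter-core access channel of claim 10 is a single-producer/single-consumer ring of shared-queue-area pointers between adjacent cores, which the downstream core can poll exactly like a shared queue area input queue. The layout and names below are assumptions for illustration only; the patent does not prescribe this structure.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define CHAN_SLOTS 64                         /* assumed channel depth */

struct shared_queue_area;                     /* as defined in the earlier sketches */

struct sqa_channel {
    struct shared_queue_area *slots[CHAN_SLOTS];
    _Atomic size_t head;                      /* advanced by the consumer (next core) */
    _Atomic size_t tail;                      /* advanced by the producer (previous core) */
};

/* Producer side: the preceding core publishes a processed shared queue area. */
bool sqa_channel_push(struct sqa_channel *ch, struct shared_queue_area *sqa)
{
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&ch->head, memory_order_acquire);
    if (tail - head == CHAN_SLOTS)
        return false;                         /* channel full */
    ch->slots[tail % CHAN_SLOTS] = sqa;
    atomic_store_explicit(&ch->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side: the next core polls the channel as its input queue. */
bool sqa_channel_pop(struct sqa_channel *ch, struct shared_queue_area **out)
{
    size_t head = atomic_load_explicit(&ch->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_acquire);
    if (head == tail)
        return false;                         /* channel empty */
    *out = ch->slots[head % CHAN_SLOTS];
    atomic_store_explicit(&ch->head, head + 1, memory_order_release);
    return true;
}
```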
CN201010570221.4A 2010-11-24 2010-11-24 Method and device for realizing message order preservation Expired - Fee Related CN102480430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010570221.4A CN102480430B (en) 2010-11-24 2010-11-24 Method and device for realizing message order preservation

Publications (2)

Publication Number Publication Date
CN102480430A CN102480430A (en) 2012-05-30
CN102480430B true CN102480430B (en) 2014-07-09

Family

ID=46092914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010570221.4A Expired - Fee Related CN102480430B (en) 2010-11-24 2010-11-24 Method and device for realizing message order preservation

Country Status (1)

Country Link
CN (1) CN102480430B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9313148B2 (en) 2013-04-26 2016-04-12 Mediatek Inc. Output queue of multi-plane network device and related method of managing output queue having multiple packet linked lists
CN105511954B (en) * 2014-09-23 2020-07-07 华为技术有限公司 Message processing method and device
CN104994032B (en) * 2015-05-15 2018-09-25 京信通信系统(中国)有限公司 A kind of method and apparatus of information processing
CN106685854B (en) * 2016-12-09 2020-02-14 浙江大华技术股份有限公司 Data sending method and system
EP3535956B1 (en) 2016-12-09 2021-02-17 Zhejiang Dahua Technology Co., Ltd Methods and systems for data transmission
CN109218119B (en) * 2017-06-30 2020-11-27 迈普通信技术股份有限公司 Network packet loss diagnosis method and network equipment
CN109218226A (en) * 2017-07-03 2019-01-15 迈普通信技术股份有限公司 Message processing method and the network equipment
CN109327405B (en) * 2017-07-31 2022-08-12 迈普通信技术股份有限公司 Message order-preserving method and network equipment
CN108259369B (en) * 2018-01-26 2022-04-05 迈普通信技术股份有限公司 Method and device for forwarding data message
CN108667730B (en) * 2018-04-17 2021-02-12 东软集团股份有限公司 Message forwarding method, device, storage medium and equipment based on load balancing
CN114090274A (en) * 2020-07-31 2022-02-25 华为技术有限公司 Network interface card, storage device, message receiving method and message sending method
CN113055403B (en) * 2021-04-02 2022-06-17 电信科学技术第五研究所有限公司 Line speed order preserving method
CN115118686B (en) * 2022-06-23 2024-08-09 中国民航信息网络股份有限公司 Processing system, method, equipment, medium and product of passenger message

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101013383A (en) * 2007-02-13 2007-08-08 杭州华为三康技术有限公司 System and method for implementing packet combined treatment by multi-core CPU
CN101217467A (en) * 2007-12-28 2008-07-09 杭州华三通信技术有限公司 An inter-core load dispensing device and method
CN101442513A (en) * 2007-11-20 2009-05-27 杭州华三通信技术有限公司 Method for implementing various service treatment function and multi-nuclear processor equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080259797A1 (en) * 2007-04-18 2008-10-23 Aladdin Knowledge Systems Ltd. Load-Balancing Bridge Cluster For Network Nodes

Also Published As

Publication number Publication date
CN102480430A (en) 2012-05-30

Similar Documents

Publication Publication Date Title
CN102480430B (en) Method and device for realizing message order preservation
TWI510030B (en) System and method for performing packet queuing on a client device using packet service classifications
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US8514700B2 (en) MLPPP occupancy based round robin
Hua et al. Scheduling heterogeneous flows with delay-aware deduplication for avionics applications
WO2016202158A1 (en) Message transmission method and device, and computer-readable storage medium
CN107733813B (en) Message forwarding method and device
US10869227B2 (en) Message cache management in a mesh network
CN108270687A (en) A kind of load balance process method and device
CN107347039A (en) A kind of management method and device in shared buffer memory space
CN106533954A (en) Message scheduling method and device
WO2016082603A1 (en) Scheduler and dynamic multiplexing method for scheduler
CN109525518B (en) IP message network address conversion method and device based on FPGA
CN102984089B (en) Traffic management dispatching method and device
CN104780178A (en) Connection management method for preventing TCP attack
CN107733812A (en) A kind of data packet dispatching method, device and equipment
WO2021101640A1 (en) Method and apparatus of packet wash for in-time packet delivery
JP6101114B2 (en) Packet transmission apparatus and program thereof
CN109039934A (en) A kind of space DTN method for controlling network congestion and system
US8441953B1 (en) Reordering with fast time out
US9128785B2 (en) System and method for efficient shared buffer management
CN115361346B (en) Explicit packet loss notification mechanism
CN114221916B (en) A method and system for flushing forwarding queue of switching chip
US20250007846A1 (en) Hardware device for automatic detection and deployment of qos policies
JP2015536063A (en) Method of Random Access Message Retrieval from First-In-First-Out Transport Mechanism

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140709