CN102739550A - Multi-memory flow routing architecture based on random duplication allocation - Google Patents
- Publication number
- CN102739550A CN102739550A CN201210248200XA CN201210248200A CN102739550A CN 102739550 A CN102739550 A CN 102739550A CN 201210248200X A CN201210248200X A CN 201210248200XA CN 201210248200 A CN201210248200 A CN 201210248200A CN 102739550 A CN102739550 A CN 102739550A
- Authority
- CN
- China
- Prior art keywords
- routing
- memory
- data packets
- data packet
- tree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a multi-memory pipeline routing architecture based on random duplicate allocation. It comprises a routing buffer that caches data packets, a packet filter that prevents packets with identical destination addresses from appearing in the routing buffer, a scheduling unit that controls read/write access between the queued packets and the memories, an index-table storage unit, and a memory bank. The routing-table prefix tree is split into one main tree and multiple subtrees: the main tree contains the low-level nodes of the prefix tree and is converted into an index table stored in the index-table storage unit, while each subtree contains high-level nodes of the prefix tree. Every subtree has two copies, each of which is mapped independently into the memory bank. Each arriving packet generates a routing-information lookup request consisting of one or more memory accesses; the target of each access is the memory holding a particular prefix-tree node, and the target nodes in order form a lookup path starting at the root node and ending at some leaf node.
Description
Technical Field
The present invention relates to a routing architecture, and in particular to a multi-memory pipeline routing architecture based on random duplicate allocation, which uses the random-duplicate-allocation technique and multiple memories to pipeline and parallelize the routing-information lookups of multiple data packets, thereby effectively improving the throughput of the routing system.
Background Art
An IP router implements two basic functions: routing and switching. When a data packet arrives at an input port of the router, it passes through the routing stage and then the switching stage. In the routing stage, the system obtains the packet's destination address and looks it up in the routing table to obtain the corresponding destination output port number; in the switching stage, the system schedules the packet to the designated output port, forwarding it to the next-hop site. Current prefix-tree-based routing algorithms include Binary Trie (BT), Prefix Trie (PT) [1], Fixed-Stride Trie (FST) [2], and Multi-Prefix Trie (MPT) [3], among others.
The following terms are used in this description.
Destination address: under the IPv4/IPv6 Internet protocols, the destination address of a data packet can be expressed as a binary string of length 32/128.
Binary string: a binary string of length n is an array S[0..n-1] formed by arranging n characters from the alphabet Σ = {0, 1} in order of position.
Binary prefix substring: the prefix substring S[0..i] of a binary string S is the segment of S from position 0 to position i, i.e., the string formed by S[0], S[1], ..., S[i].
Routing table: a routing table consists of multiple records. Each record consists of a prefix entry and an output port number; a prefix entry is a binary string, and the prefix entries of all records form a set Π of binary strings.
Longest matching prefix: for any two binary strings S1[0..m-1] and S2[0..n-1] with m < n, if S1 is a prefix substring of S2, we call S1 a matching prefix of S2 of length m, written S1 ⊑ S2. Given a set Π of binary strings and a string S[0..n-1], if S′[0..i] ∈ Π and S′ ⊑ S, then S′ is a matching prefix of S. If, moreover, every matching prefix S″[0..j] ∈ Π of S satisfies j ≤ i, then S′ is called the longest matching prefix of S with respect to Π.
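As an illustration only (the function name and sample prefixes are ours, not taken from the patent), the definition above can be realized by a direct linear scan over the prefix set Π:

```python
def longest_matching_prefix(prefixes, address):
    """Return the longest p in `prefixes` that is a prefix substring of
    `address`, or None if no prefix matches (naive linear scan)."""
    best = None
    for p in prefixes:
        if address.startswith(p) and (best is None or len(p) > len(best)):
            best = p
    return best

# With both 10* and 1010* in the set, an address starting 10101010
# matches both; by the definition, the longer one wins.
print(longest_matching_prefix({"10", "1010", "0", "111"}, "10101010"))  # → 1010
```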
Prefix-tree-based routing algorithms: in the routing stage, the system searches the routing table for the longest prefix matching the packet's destination address. Naively, the system could compare the destination address against the prefix entry of every record in turn to obtain the longest matching prefix; clearly this approach yields poor lookup efficiency. Instead, the strategy of converting the routing table into a prefix tree to support fast route lookup has been widely studied and adopted in industry. Without loss of generality, in a prefix tree a single prefix node may represent one or more routing-table records, and prefix nodes are connected by pointers. A packet's route lookup starts at the root of the prefix tree and ends when it reaches a leaf node. Along the way, the system matches the packet's destination address against the prefixes of all nodes on the lookup path and records the longest matching prefix found. Finally, the system returns the output port number corresponding to that longest matching prefix, and the packet proceeds to the switching stage.
Using the above terminology, we give an example of prefix-tree-based route lookup. For convenience of description we use the BT routing algorithm to organize the prefixes of the routing table, as shown in Figures 1 and 2:
Based on the routing table in Figure 1, Figure 2 shows the prefix tree constructed with the BT routing algorithm; each prefix node of this tree contains at most one prefix. Suppose the high 8 bits of the destination address of an arriving packet are 10101010. The route lookup then proceeds as follows: the packet visits the root of the prefix tree; the root's prefix is empty, so the match fails, and the search branches on bit 0 of the destination address. Bit 0 is 1, so the packet visits the root's right child; that node's prefix is empty, the match fails, and the search branches on bit 1. Bit 1 is 0, so the packet visits the current node's left child; that node's prefix is 10*, the match succeeds, the longest matching prefix is updated to 10*, and the search branches on bit 2. Repeating these steps, the routing-table prefixes matching the destination address are found to be 10* and 1010*; by the definition of the longest matching prefix, the system returns the output port number R7 corresponding to 1010*.
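The walk just described can be sketched as a binary trie in which each node holds at most one prefix. This is our own minimal sketch of a BT lookup, not the patent's implementation; the port R3 assigned to 10* is a hypothetical placeholder, since only R7 (for 1010*) is named in the text.

```python
class Node:
    def __init__(self):
        self.prefix = None            # routing-table prefix stored here, if any
        self.port = None              # output port for that prefix
        self.child = [None, None]     # 0-branch (left) and 1-branch (right)

def insert(root, prefix, port):
    """Place `prefix` (a 0/1 string) at depth len(prefix) in the trie."""
    node = root
    for bit in prefix:
        i = int(bit)
        if node.child[i] is None:
            node.child[i] = Node()
        node = node.child[i]
    node.prefix, node.port = prefix, port

def lookup(root, address):
    """Branch on successive address bits, remembering the last matching prefix."""
    node, best_port = root, None
    for bit in address:
        node = node.child[int(bit)]
        if node is None:
            break                      # fell off the trie; best match stands
        if node.prefix is not None:
            best_port = node.port      # longer matching prefix found
    return best_port

root = Node()
insert(root, "10", "R3")      # hypothetical port for 10*
insert(root, "1010", "R7")    # port named in the example
print(lookup(root, "10101010"))  # → R7
```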
Many prefix-tree-based routing algorithms exist; see [1-15] for details. They can be further divided into serial and parallel prefix-tree routing algorithms. In a serial algorithm, the route lookups of individual packets execute strictly one after another, so the system's processing speed is slow and its throughput is low. Consequently, much research has been devoted to designing efficient multi-memory pipeline routing architectures, which schedule and arrange memory accesses so that multiple route lookups execute in parallel, greatly improving overall system performance [12-15].
References
[1] M. Berger, "IP lookup with low memory requirement and fast update," in Proc. IEEE HPSR, Jun. 2003, pp. 287-291.
[2] S. Sahni and K. Kim, "Efficient construction of multibit tries for IP lookup," IEEE/ACM Trans. on Networking, pp. 650-662, Aug. 2003.
[3] S. Hsieh, Y. Huang, and Y. Yang, "A novel dynamic router-tables design for IP lookup and update," in IEEE Int. Conf. on Future Information Technology, May 2010, pp. 1-6.
[4] V. Srinivasan and G. Varghese, "Faster IP lookups using controlled prefix expansion," ACM Trans. on Computer Systems, pp. 1-40, Feb. 1999.
[5] S. Nilsson and G. Karlsson, "IP-address lookup using LC-tries," IEEE J. on Selected Areas in Communications, pp. 1083-1092, Jun. 1999.
[6] M. Degermark, A. Brodnik, S. Carlsson, and S. Pink, "Small forwarding tables for fast routing lookups," in Proc. ACM SIGCOMM, 1997, pp. 3-14.
[7] V. Ravikumar, R. Mahapatra, and J. Liu, "Modified LC-trie based efficient routing lookup," in Proc. IEEE Int. Symp. on MASCOTS, Jan. 2003, pp. 177-182.
[8] S. Sahni and K. Kim, "Efficient construction of fixed-stride multibit tries for IP lookup," in IEEE Workshop on Future Trends of Distributed Computing Systems, Nov. 2001, pp. 178-184.
[9] ——, "Efficient construction of variable-stride multibit tries for IP lookup," in Proc. Symp. on Applications and the Internet, Feb. 2002, pp. 220-227.
[10] S. Sahni and H. Lu, "Dynamic tree bitmap for IP lookup and update," in IEEE Int. Conf. on Networking, Apr. 2007, pp. 79-84.
[11] Y. Chang, Y. Lin, and C. Su, "Dynamic multiway segment tree for IP lookups and the fast pipelined search engine," IEEE Trans. on Computers, pp. 492-506, Feb. 2010.
[12] K. Kim and S. Sahni, "Efficient construction of pipelined multibit-trie router-tables," IEEE Trans. on Computers, pp. 32-43, Jan. 2007.
[13] M. Bando and H. J. Chao, "FlashTrie: hash-based prefix compressed trie for IP route lookup beyond 100 Gbps," in Proc. IEEE INFOCOM, Mar. 2010, pp. 1-9.
[14] W. Jiang and V. Prasanna, "A memory-balanced linear pipeline architecture for trie-based IP lookup," in IEEE Symp. on High-Performance Interconnects, Aug. 2007, pp. 83-90.
[15] S. Kumar, M. Becchi, P. Crowley, and J. Turner, "CAMP: fast and efficient IP lookup architecture," in ACM/IEEE Symp. on Architecture for Networking and Communications Systems, 2006, pp. 51-60.
[16] T. Anderson, S. Owicki, J. Saxe, and C. Thacker, "High-speed switch scheduling for local area networks," ACM Trans. Comput. Syst., pp. 319-352, Nov. 1993.
[17] N. McKeown, "The iSLIP scheduling algorithm for input-queued switches," IEEE/ACM Trans. on Networking, pp. 188-201, Apr. 1999.
[18] P. Sanders, S. Egner, and J. Korst, "Fast concurrent access to parallel disks," in Proc. 11th Annual ACM-SIAM Symposium on Discrete Algorithms, 2000, pp. 849-858.
[19] J. Vitter and E. Shriver, "Algorithms for parallel memory I: two-level memories," pp. 110-147, 1994.
[20] R. Barve, E. Grove, and J. Vitter, "Simple randomized mergesort on parallel disks," Paral. Comput., pp. 601-631, 1997.
Summary of the Invention
To address the above shortcomings, the present invention provides a multi-memory pipeline routing architecture based on random duplicate allocation, which uses the random-duplicate-allocation technique and multiple memories to pipeline and parallelize the routing-information lookups of multiple data packets, thereby effectively improving the throughput of the routing system. It comprises a routing buffer for caching data packets, a scheduling unit for controlling read/write access between the queued packets and the memories, and a bank of memories for storing the nodes of the routing-table prefix tree.
The nodes of the routing-table prefix tree are mapped to the memories using the random-duplicate-allocation storage scheme.
The architecture further includes an index-table storage unit. The routing-table prefix tree is split into one main tree and multiple subtrees. The main tree contains the low-level nodes of the prefix tree and is converted into an index table stored in the index-table storage unit; each subtree contains high-level nodes of the prefix tree. Every subtree has two copies, each mapped independently into the memory bank.
The index table contains a number of entries, each consisting of a prefix field and one or more node-pointer fields.
The architecture further includes a packet filter that inspects every packet arriving at the routing architecture and prevents packets with identical destination addresses from appearing in the routing buffer.
The queuing policy for data packets is: 1) system time is divided into fixed-size time slots, each defined as one memory-access cycle and equally divided into M clock cycles; 2) every packet arriving at the routing buffer has a fixed length, and at most one packet arrives per clock cycle, i.e., at most M packets reach the routing architecture per time slot; 3) at the end of each time slot, the scheduling unit selects a conflict-free set of packets from those queued in the routing buffer, and all selected packets access the memory bank in parallel to look up the routing information stored in the memories; a selected packet that completes its routing-information lookup leaves the routing buffer immediately; 4) each memory may be accessed at most once per time slot.
Each arriving packet generates a routing-information lookup request comprising one or more memory accesses. The target of each memory access is the memory holding a particular node of the routing-table prefix tree, and the target nodes in order form a lookup path starting at the root node and ending at some leaf node.
The scheduling policy of the scheduling unit is: step 1) Request: every queued packet not yet matched sends a request to each of its root memories not yet matched; step 2) Grant: every requested memory selects one of the packets requesting it and grants it access; step 3) Accept: every granted packet selects one of the memories that granted it and accepts that grant. Here, a root memory is the memory holding the root node of a subtree.
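The three steps above can be sketched as a single request/grant/accept round. This is our own simplified illustration, assuming one iteration and deterministic tie-breaking (the patent does not specify how a memory or packet breaks ties among candidates):

```python
def schedule(packets):
    """One request/grant/accept round.
    `packets` maps a packet id to the set of its (up to two) root memories.
    Returns a conflict-free {packet: memory} matching."""
    # Step 1) Request: every unmatched packet asks all of its root memories.
    requests = {}
    for pkt, mems in packets.items():
        for m in mems:
            requests.setdefault(m, []).append(pkt)
    # Step 2) Grant: each requested memory grants exactly one requester.
    grants = {}
    for m, pkts in requests.items():
        grants.setdefault(min(pkts), []).append(m)   # deterministic tie-break
    # Step 3) Accept: each granted packet accepts exactly one memory.
    return {pkt: min(mems) for pkt, mems in grants.items()}

# Packets A and B both have root memory 1; because RDA gives B a second
# copy rooted at memory 2, both packets can be served in the same slot.
print(schedule({"A": {1}, "B": {1, 2}}))  # → {'A': 1, 'B': 2}
```

In a full scheduler this round would be iterated until no further matches are added, in the spirit of iterative matching schemes such as iSLIP [17].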
Beneficial effects of the invention: first, the routing architecture provides a routing buffer to absorb arriving packets, supporting the ever-growing bandwidth demands of backbone networks; if the packet-loss rate of the routing buffer is lower than that of the switching buffer, the performance of the proposed routing architecture matches that of a routing system without a routing buffer. Second, the packet filter not only avoids routing redundancy, since any two packets with the same destination address would be routed to the same output port, but also keeps packets with the same destination address in first-in-first-out order. Third, the random-duplicate-allocation technique uses additional storage space to reduce the frequency of memory-access conflicts in the pipeline routing architecture, making the design more practical.
Brief Description of the Drawings
Figure 1 is an example routing table;
Figure 2 is a schematic diagram of the BT tree corresponding to the routing table;
Figure 3 is a schematic diagram of the routing architecture of the invention;
Figure 4 is a structural diagram of the packet queuing policy of the invention;
Figure 5 is a schematic diagram of the bipartite-graph matching model;
Figure 6 is a schematic diagram of the initial state and the arriving packets in the invention;
Figure 7 is a schematic diagram of the result of pipelined parallel scheduling of the given packets by the invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings.
As shown in Figure 3, the multi-memory pipeline routing architecture based on random duplicate allocation comprises a routing buffer, a scheduling unit, and a memory bank. The routing buffer caches packets entering the routing architecture; the scheduling unit controls read/write access between the queued packets in the routing buffer and the memories; the memory bank stores the nodes of the routing-table prefix tree. A design assumption of traditional routers is that the router can provide real-time routing service for packets arriving at any possible rate, so the router need not provide a buffer to cache arriving packets during the routing stage. To improve system throughput, the present routing architecture instead provides a routing buffer to absorb arriving packets, supporting the ever-growing bandwidth demands of backbone networks; if the packet-loss rate of the routing buffer is lower than that of the switching buffer, the performance of the proposed architecture matches that of a routing system without a routing buffer. In addition, a packet filter may be added. The packet filter inspects every packet arriving at the routing architecture and prevents packets with identical destination addresses from appearing in the routing buffer. Using the packet filter serves two purposes: first, to avoid routing redundancy, since any two packets with the same destination address will be routed to the same output port; second, to keep packets with the same destination address in first-in-first-out order.
In addition, the invention maps the routing-table prefix-tree nodes to the memories using a storage-allocation technique called Random Duplicate Allocation (RDA), which can map the prefix-tree nodes of any prefix-tree-based routing algorithm uniformly across all memories of the pipeline system. Compared with other existing pipeline routing designs, RDA uses additional storage space to reduce the frequency of memory-access conflicts in the pipeline system; the technique was first used for access management of parallel disk data [18-20]. In detail, any routing-table prefix tree is split into one main tree and multiple subtrees. The main tree contains the low-level nodes of the prefix tree and is converted into an index table; each subtree contains part of the high-level nodes and is mapped into the memory bank. Further, every subtree has two copies, each mapped independently into the memory bank. Thus a packet's route lookup can proceed as long as it can access either of the two replica trees. We choose RDA for the following reason: when building a high-performance routing system, increasing the capacity of a single memory is simpler, cheaper, and more feasible than increasing the number of memories.
For convenience of description, we first define several terms. When we speak of a packet's subtree, we mean the target subtree the system must traverse to obtain the packet's routing information. For each subtree, the memory holding its root node is called the subtree's root memory; accordingly, given a packet, the root memory of its corresponding subtree can also be regarded as the packet's root memory. Since the RDA allocation provides two independent copies of each subtree in the memory bank, each packet has two subtrees and two root memories.
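The two-copies-per-subtree allocation can be sketched as follows. This is a hedged illustration, assuming each copy's root memory is drawn independently and uniformly at random (the patent only states that the copies are mapped independently); the two draws may therefore occasionally coincide.

```python
import random

def rda_allocate(num_subtrees, M, seed=0):
    """Assign each subtree two independently drawn root memories,
    modelling the two copies RDA stores in the memory bank."""
    rng = random.Random(seed)
    return {t: (rng.randrange(M), rng.randrange(M))
            for t in range(num_subtrees)}

copies = rda_allocate(num_subtrees=8, M=4)
# A lookup for subtree t may start at either copy, so the scheduler
# can pick whichever of the two root memories is free in this slot.
for t, (m1, m2) in copies.items():
    assert 0 <= m1 < 4 and 0 <= m2 < 4
```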
As shown in Figure 3, the invention adds an index-table storage unit for the index table; this unit is typically a small, fast memory (e.g., SRAM). That is, the main tree of the routing-table prefix tree is represented as an index table stored in a small, fast memory. Each subtree is mapped into the memory bank layer by layer in a circular fashion starting from its root: without loss of generality, if the root node is stored in memory number i, then the nodes of layer j are stored in memory number (i + j) mod M. Each packet newly arriving at the routing architecture goes through two processing stages to obtain the longest matching prefix of its destination address: a single-step matching stage that consults the index table, and a lookup stage involving multiple memory accesses. The index table contains 2^r entries, each consisting of a prefix field and one or more node-pointer fields. A packet's routing-information lookup thus proceeds as follows: first, the system takes the top r bits of the packet's destination address as the index i into the index table; if the node pointer of entry i is empty, the route lookup is complete; otherwise, the system starts from the root node pointed to by that node pointer and traverses the subtree to complete the routing-information lookup request.
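The two-stage flow and the (i + j) mod M layer mapping can be sketched as below. This is our own illustration: representing an index-table entry as a (root_memory, depth) pair is an assumption made to keep the sketch small, not the patent's entry format.

```python
def lookup_path(dest_bits, r, index_table, M):
    """Return the sequence of memory numbers a lookup visits.
    `index_table[i]` holds (root_memory, depth) for the subtree reached
    via top-r-bit value i, or None when the node pointer is empty and the
    single-step index-table match already completes the lookup."""
    i = int(dest_bits[:r], 2)              # stage 1: single-step index match
    entry = index_table[i]
    if entry is None:
        return []                          # resolved in the index table
    root_mem, depth = entry
    # Stage 2: layer j of the subtree lives in memory (root_mem + j) mod M.
    return [(root_mem + j) % M for j in range(depth)]

table = {i: None for i in range(2 ** 3)}
table[int("101", 2)] = (2, 4)              # depth-4 subtree rooted at memory 2
print(lookup_path("10101010", r=3, index_table=table, M=4))  # → [2, 3, 0, 1]
```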
To keep the aggregate memory throughput high, the system's scheduling goal is to select from the queued packets a set as large as possible in which any two packets have different root memories; packets with different root memories can perform their memory accesses simultaneously. For each subtree, the root node is stored in a randomly chosen memory, while its descendant nodes are mapped layer by layer, circularly, into the memories following the root memory. Hence, during a packet's routing-information lookup, the traversal of the target subtree begins with an access to the root memory and ends upon reaching the memory holding some leaf node; along the way the lookup request visits one node per layer and one layer per memory. Consequently, the subtree accesses of the selected packet set can be executed in parallel as a batch. Here the term batch means that the traversals of all selected subtrees start at the same moment, and the batch ends only when all traversals have completed.
We now state the following assumptions and parameters to establish the packet queuing policy of the routing architecture:
1) System time is divided into fixed-size time slots; each time slot is defined as one memory-access cycle and is further equally divided into M clock cycles.
2) Every packet arriving at the routing buffer has a fixed length, and at most one packet arrives per clock cycle; in other words, at most M packets reach the routing architecture per time slot.
3) At the end of each time slot, the scheduling unit selects a conflict-free set of packets from the queued packets; all selected packets access the memory bank in parallel to look up the routing information stored in the memories. A selected packet that completes its routing-information lookup leaves the routing buffer immediately.
4) Each memory may be accessed at most once per time slot.
For convenience, we introduce the following two parameters:
N: the number of packets queued in the routing buffer; the buffer's capacity is L packets.
M:可用存储器的数量。我们用符号m1,m2,...,mM表示存储器,为了表述方便,我们假定M为2的整数次幂。M: Amount of available memory. We use the symbols m 1 , m 2 ,...,m M to denote the memory, and for the convenience of expression, we assume that M is an integer power of 2.
Based on the above assumptions and parameter definitions, Figure 4 presents the queuing policy of the routing architecture under consideration. In this model, each arriving packet generates a routing-information lookup request consisting of one or more memory access operations; the target of each access operation is the memory holding some node of the prefix tree, and the target nodes, taken in order, form a lookup path that starts at the root node and ends at a leaf node.
Scheduling policy
For each newly arrived packet, the system queries the index table with the first r bits of the packet's destination address to locate the root node of its subtree in the memory bank. The system then builds a routing-information lookup request for the packet; the request traverses the target subtree level by level to obtain the routing information. As before, we represent a lookup request by a multi-dimensional tuple of the form R(s1, s2, ..., si, ..., st).
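As an illustration of the index-table stage, here is a minimal Python sketch; the table contents and helper names are assumptions for the example, not taken from the patent:

```python
def index_of(address: int, r: int, addr_bits: int = 32) -> int:
    """Return the first r bits of the destination address; they serve
    as the offset into the index table that locates the subtree root."""
    return address >> (addr_bits - r)

# Hypothetical index table: r-bit prefix -> (root memory id, node offset)
index_table = {0b1010: (2, 17)}

# A packet whose destination address begins with bits 1010 (r = 4):
root_memory, node_offset = index_table[index_of(0xA0000000, r=4)]
# the lookup request then starts its subtree traversal at root_memory
```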
The routing-information lookup of a packet proceeds in two phases: a scheduling phase and an access phase. In the former, the system selects a set of packets whose root memories are pairwise distinct; in the latter, the system traverses the subtrees of the selected packets simultaneously to obtain their routing information. Because the root memories of the packets chosen in the scheduling phase are pairwise distinct, the corresponding lookup requests can be executed in parallel. In the access phase, each selected packet traverses its target subtree. More precisely, without loss of generality, let Ri and Rj be the lookup requests of any two selected packets, with target root memories Mi and Mj respectively. During the access phase, request Ri cyclically visits the memory sequence Mi, M(i+1) mod M, M(i+2) mod M, ... until it reaches a leaf node, and likewise for Rj. Thus Ri and Rj remain conflict-free throughout the access phase, and each of their lookup steps can be executed in parallel.
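The conflict-freedom claim can be checked mechanically: two cyclic access sequences that begin in distinct memories never coincide at any step. A small Python sketch, with assumed function names:

```python
def access_sequence(root_memory, depth, num_memories):
    """Memories visited by a lookup request: one memory per level,
    advancing cyclically from the root memory."""
    return [(root_memory + step) % num_memories for step in range(depth)]

M = 4
seq_i = access_sequence(1, depth=6, num_memories=M)  # [1, 2, 3, 0, 1, 2]
seq_j = access_sequence(3, depth=6, num_memories=M)  # [3, 0, 1, 2, 3, 0]

# Distinct root memories keep the offset (1 - 3) mod 4 != 0 at every
# step, so the two requests never target the same memory simultaneously.
assert all(a != b for a, b in zip(seq_i, seq_j))
```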
As shown in Figure 5, the arbitration problem of the scheduling phase can be cast as a bipartite graph matching problem, where the two vertex sets are the N queued packets and the M memories. An edge between a packet and a memory indicates that the root node of the packet's subtree resides in that memory. Note that each subtree has two replica trees distributed independently over the memory bank, and therefore two root nodes located in different memories; the required routing information can be obtained by visiting either replica. Consequently, every packet vertex in the graph has two edges. In general, any bipartite matching algorithm can be used to find a maximum matching between the memory bank and the queued packets, e.g. FCFS (First Come First Served), PIM (Parallel Iterative Matching) [16], or iSLIP [17]. The goal of scheduling is to find a maximum matching between the queued packets eligible for scheduling and all available memories; an algorithmic framework for this scheduling problem is given below. Note that at the start of each scheduling round, all queued packets and memories are unmatched; the system iterates the following three steps until a maximum matching is found.
Step 1) Request: every as-yet-unmatched queued packet sends a request to its as-yet-unmatched root memories;
Step 2) Grant: each requested memory selects one of the packets requesting it and grants it access permission;
Step 3) Accept: each granted packet selects one of the memories that granted it and accepts that access permission.
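The three-step framework above is PIM-like; the following Python sketch is one possible rendering. The data layout and the function name `rga_schedule` are assumptions for illustration, and ties are broken randomly as in PIM:

```python
import random

def rga_schedule(packets, seed=None):
    """Iterative request/grant/accept matching (PIM-style sketch).
    packets: dict pkt_id -> set of candidate root memories (the two
    replicas give each packet two candidates).
    Returns pkt_id -> chosen memory for a conflict-free packet set."""
    rng = random.Random(seed)
    matching, used = {}, set()
    while True:
        # Step 1 - request: each unmatched packet asks its unmatched root memories
        requests = {}
        for pkt, roots in packets.items():
            if pkt not in matching:
                for mem in roots - used:
                    requests.setdefault(mem, []).append(pkt)
        if not requests:
            return matching
        # Step 2 - grant: each requested memory grants one of its requesters
        grants = {}
        for mem, reqs in requests.items():
            grants.setdefault(rng.choice(reqs), []).append(mem)
        # Step 3 - accept: each granted packet accepts one granting memory
        for pkt, mems in grants.items():
            chosen = rng.choice(mems)
            matching[pkt] = chosen
            used.add(chosen)

# Example: three packets, each with two replica root memories
packets = {'P0': {0, 1}, 'P1': {0, 1}, 'P2': {2, 3}}
matching = rga_schedule(packets, seed=42)
```

Each round matches at least one packet whenever any request is outstanding, so the loop terminates; in the example all three packets obtain distinct memories because the two replicas give every packet two chances to find a free root memory.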
To aid understanding of the routing-information lookup process under this scheduling framework, Figures 6 and 7 give a simple scheduling example. Note that the scheduling unit uses a first-come-first-served policy to control the access operations between the memory bank and the packet set. Here we assume M = 4 available memories and an initially empty buffer (N = 0); system time is divided into equal-sized slots, each further subdivided into M clock cycles, and at most one packet arrives at the routing buffer per clock cycle. For convenience of exposition, we assume that each as-yet-unmatched queued packet randomly selects one of its as-yet-unmatched root memories and sends it an access request.
In time slot 0, packets P0-P3 arrive at the routing buffer, and the system generates lookup requests R0-R3 for them, with R0 corresponding to P0, R1 to P1, and so on. Because R2 and R3 both request access to m2, a memory access conflict occurs. Under the first-come-first-served arbitration policy, m2 is granted to the earlier-arriving P2. As a result, the lookup requests {R0, R1, R2} of packets {P0, P1, P2} are selected. The set {R0, R1, R2} is a conflict-free access set that the system can execute in parallel. As shown in the table, this batch job consumes time slots 1-3 consecutively to perform its memory access operations. Clearly, the duration of the access phase is determined by R0, the request requiring the largest number of accesses.
The system launches the next scheduling round in time slot 4, selecting lookup requests {R3, R5, R6, R8}, and then consumes three consecutive time slots 5-7 to perform their memory access operations. Similarly, in time slots 8 and 12, the lookup requests {R7, R9, R10} and {R11} are selected, respectively. Note that no lookup request R4 need be created, because the routing information for P4 was determined directly in the fast-matching stage of the index table.
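The arbitration in this example (m2 granting the earlier-arrived P2 over P3) follows a simple rule that can be sketched in Python; `fcfs_grant` and the arrival-time bookkeeping are illustrative assumptions:

```python
def fcfs_grant(requesters, arrival_time):
    """FCFS arbitration sketch: among the packets requesting the same
    root memory, grant the one that arrived earliest."""
    return min(requesters, key=lambda pkt: arrival_time[pkt])

# Clock cycles at which the conflicting packets arrived in slot 0:
arrival = {'P2': 2, 'P3': 3}
winner = fcfs_grant(['P2', 'P3'], arrival)  # m2 grants P2
```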
From this example, we observe that the routing-information lookup of a packet comprises two processing phases: a scheduling phase followed immediately by an access phase. The former lasts one time slot, the latter one or more. In the scheduling phase, the system selects a set of queued packets to perform routing-information lookups; the lookup request of each selected packet then consumes several consecutive time slots. We can also see that for any two consecutively executed lookups, the access phase of the earlier lookup can be overlapped with the scheduling phase of the later one to further improve system efficiency. For simplicity of description, however, this example assumes that the lookups are executed sequentially without overlap.
The above describes only preferred embodiments of the invention; the invention is not limited to these embodiments, and minor local structural changes may arise in implementation. Any changes or variations that do not depart from the spirit and scope of the invention and that fall within its claims and equivalent technical scope are likewise intended to be covered by the invention.
Publications (2)
Publication Number | Publication Date |
---|---|
CN102739550A true CN102739550A (en) | 2012-10-17 |
CN102739550B CN102739550B (en) | 2015-11-25 |