
CN1595910A - A data packet receiving interface component of network processor and storage management method thereof - Google Patents


Info

Publication number
CN1595910A
CN1595910A
Authority
CN
China
Prior art keywords
data
storage
pointer
data packet
dram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2004100500047A
Other languages
Chinese (zh)
Other versions
CN100440854C (en)
Inventor
宫曙光
李华伟
徐宇峰
刘彤
李晓维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
G Cloud Technology Co Ltd
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB2004100500047A priority Critical patent/CN100440854C/en
Publication of CN1595910A publication Critical patent/CN1595910A/en
Application granted granted Critical
Publication of CN100440854C publication Critical patent/CN100440854C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


The invention relates to the technical field of data communication, and in particular to a data packet receiving interface component for a network processor and a storage management method thereof. The component comprises: a data receiving buffer device; a pointer storage area management device; a DRAM (dynamic random access memory) storage controller; an SRAM (static random access memory) storage controller; a queue management device; and a data storage area. The method includes: using a queue table, packet pointers, and storage block pointers to organize and manage the data storage area effectively; aligning storage block pointers with storage block positions to save storage space and improve operating efficiency; and storing packet headers in SRAM and payload data in DRAM to raise data transfer speed. The invention further improves data access speed by improving the DRAM storage controller, helping a high-speed network processor overcome its storage bottleneck and realize high-speed data transmission and processing.

Figure 200410050004

Description

A data packet receiving interface component of a network processor and a storage management method thereof
Technical field
The present invention relates to the field of data communication technology, and in particular to a data packet receiving interface component for a network processor and a storage management method thereof.
Background technology
With the rapid development of network technology, the network bandwidth has risen from 2 Gbps a few years ago to 40 Gbps today, which requires switches and routers to provide faster data processing capability. In addition, to adapt to constantly changing network protocols and quality-of-service (QoS) requirements, network switching equipment must also be more flexibly extensible and programmable, and neither traditional GPPs (general-purpose processors) nor ASICs (application-specific integrated circuits) can satisfy both requirements at once. Therefore, a new type of processor, the network processor, which combines high-speed data processing capability with flexible programmability, is being applied more and more widely in switches and routers.
Quantitative analysis of a typical network processor shows that, in the process from reception to forwarding of a packet, nearly two thirds of the time is spent on receiving, storing, scheduling, and transmitting the data. Although a network processor design can provide high-speed data processing capability by using multiple dedicated RISC CPUs (reduced instruction set processors), the low transmission speed of the memory components still hinders further improvement of network processor performance, and the storage subsystem has become the bottleneck of the network processor. Therefore, only by reasonably designing the receiving and transmitting interface components, improving the parallelism of data reception, storage, and queuing, improving the storage subsystem, and adopting a reasonable and efficient storage management method that maximizes memory transfer speed, can the performance of the network processor be improved effectively.
At present, network processor designs improve the storage subsystem mainly by two methods. One method is distributed storage: packets of different types are placed in different memories, and concurrent access to the memories is exploited to improve transmission speed; however, this method does not improve the transmission speed for packets of the same type.
The other method is to improve the storage controller: exploiting the particular access characteristics of DRAM, memory access instructions are buffered, predicted, and reordered, hiding some read/write delays and increasing the number of burst transfers, thereby improving memory transfer speed. However, existing implementations of this method must take the specific network protocol and scheduling strategy into account, which leads to complex hardware prediction logic and is unsuitable for frequently changing network environments.
Summary of the invention
The object of the present invention is to provide a data packet receiving interface component of a network processor and a storage management method thereof. The component is composed of multiple circuit devices that can execute in parallel; by adopting this interface component, the parallelism of packet reception, storage, and scheduling at the receiving end of the network processor is improved, effectively raising the data transmission speed of the receiving end.
Another object of the present invention is to realize a method that improves the parallel processing of packet reception, flow control, storage, and scheduling.
Another object of the present invention is to provide a storage area organization and management method for a network processor that improves the flexibility and speed of storage block allocation and packet queuing, enabling the network processor to manage its storage area effectively.
Another object of the present invention is to provide an improved design method for a DRAM storage controller that raises the transfer rate of the DRAM memory and, compared with previous methods, is simpler and more adaptable.
Description of drawings
The objects of the present invention have been briefly described above; the main contents of the present invention are described below with reference to the accompanying drawings, which mainly comprise:
Fig. 1 is the organization diagram of the storage area used in the present invention.
Fig. 2 is the timing diagram of a burst read operation of an unimproved DRAM memory.
Fig. 3 is the timing diagram of a burst read operation of the improved DRAM memory used in the present invention.
Fig. 4 is the system architecture diagram of the receiving interface component of the present invention.
Fig. 5 is the state transition diagram of the interface component of the present invention in operation.
Fig. 6 is the flow chart of the storage area management method used in the present invention.
Embodiment
Since the overall design of this interface component is closely related to the organization and management method of the storage area, the storage area organization and management method used in the present invention is described first.
In a network processor, for convenient queuing, the storage area is generally divided into blocks of fixed size and managed as linked lists. By tracing backbone network traffic and collecting statistics on captured packet sizes, it has been found that roughly 40% of Ethernet packets are smaller than or close to 64 bytes. Both theory and experience show that, in network switching equipment, managing storage in 64-byte data blocks helps reduce storage fragmentation and the number of memory accesses, so this block size is generally adopted by network processors. To realize chained queue management, the basic idea of most storage area management methods is: inside each storage block, a pointer to the next storage block belonging to the same packet is provided, and this pointer links the blocks of one packet together; at the same time, dedicated space is reserved in the storage area for storage block address pointers and the various queue head and tail pointers used to link packets into different queues. However, the specific implementation chosen affects both the access efficiency and the flexibility of storage area management. Fig. 1 shows the storage area organization diagram of the storage area organization and management method used here.
In this storage area organization and management method, a block pointer storage area is provided. Each pointer storage area entry is composed of a position field, a packet pointer, and a storage block pointer; the position field indicates the position of the data block within the packet to which it belongs, and the packet pointer is used to organize packets into queues. The whole storage organization is composed of three parts: the queue table, the pointer storage area, and the data storage area. Each entry in the queue table corresponds to one queue and is composed of a queue head pointer field and a queue tail pointer field, used to organize packets into queues; the pointers of the two fields indicate, respectively, the starting positions in the pointer storage area of the first and the last packet in the queue. Each entry in the pointer storage area mainly comprises three fields: the data block position flag field, the packet pointer field, and the storage block pointer field. The position field, composed of two bits, indicates the position of the current data block within its packet, with the following meanings:
11: the first data block of the packet.
10: a data block in the middle of the packet.
00: the last data block of the packet.
01: the first and also the last data block of the packet (the packet contains only one data block).
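The two-bit position codes can be captured in a small sketch; the constant and function names below are illustrative, not taken from the patent.

```python
# Two-bit position codes from the text; the names are hypothetical.
FIRST  = 0b11  # first data block of a packet
MIDDLE = 0b10  # a data block in the middle of the packet
LAST   = 0b00  # last data block of a packet
ONLY   = 0b01  # first and also last block (single-block packet)

def position_code(index, total_blocks):
    """Return the position code for the block at 0-based `index`
    of a packet occupying `total_blocks` storage blocks."""
    if total_blocks == 1:
        return ONLY
    if index == 0:
        return FIRST
    if index == total_blocks - 1:
        return LAST
    return MIDDLE
```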
The storage block pointers are aligned one-to-one with the data storage blocks and are used to organize the data blocks belonging to the same packet into a linked list.
The packet pointer field indicates the starting position in the pointer storage area of the next packet belonging to the same queue, and is used to link different packets into one queue.
The method comprises the partitioning of the storage area, the organization of the storage blocks, and the organization of the queues.
As can be seen in Fig. 1, the storage area comprises one DRAM and several SRAMs. Both the DRAM and the SRAM that store data are divided into 64-byte blocks, and each DRAM block is arranged within a single row of the DRAM memory chip. The payload of a packet is stored in the DRAM, while the packet header is stored in the SRAM; the data sent for each processing step is always the header data in the SRAM. The pointer storage area and the queue table are also placed in SRAM, which helps speed up storage block allocation, lookup, modification, and release.
Each entry of the pointer storage area corresponds one-to-one, by position, to a storage block of the data storage area: each pointer storage area entry uniquely corresponds to a fixed data storage block, and each data storage block uniquely corresponds to a fixed pointer storage area entry. Supposing the pointer storage area entries are numbered from 1, the storage block size is 64 bytes, and the starting address of the whole buffer area is start_addr, then the starting address of the data storage block corresponding to entry N is (in bytes): block_addr = start_addr + N × 64.
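The address computation follows directly from the formula; the sketch below implements it exactly as stated (with entries numbered from 1, as in the translation).

```python
BLOCK_SIZE = 64  # bytes per storage block, per the text

def block_addr(start_addr, n):
    """Starting address of the data storage block for pointer-table
    entry n, per the formula block_addr = start_addr + N * 64."""
    return start_addr + n * BLOCK_SIZE
```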
By adopting this position-alignment scheme, no pointer entry needs to be kept inside each data storage block; the block pointer field indicates the position in the pointer storage area corresponding to the next data block of the same packet, so the blocks belonging to one packet can be linked together through the block pointer field alone, saving both storage space and access operations.
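A minimal sketch of walking one packet's blocks through the block pointer field, assuming each pointer-table entry is a (position, packet_ptr, block_ptr) tuple; this layout is a hypothetical rendering of the three fields described, not the patent's exact encoding.

```python
def packet_blocks(pointer_table, first_entry):
    """Collect the pointer-table entry indices of one packet by following
    the block pointer field; position codes 00 and 01 mark the last block."""
    blocks, entry = [], first_entry
    while True:
        position, _packet_ptr, block_ptr = pointer_table[entry]
        blocks.append(entry)
        if position in (0b00, 0b01):  # last (or only) block reached
            return blocks
        entry = block_ptr
```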
The data storage area is divided into a DRAM part and an SRAM part: the DRAM is mainly used to store packet payloads and the SRAM to store packet headers. Since the information processed by the network processor is mainly header data, while the payload part is generally accessed only once on reception and once on forwarding, storing the header part in SRAM helps improve processing speed.
This storage area organization and management method easily realizes allocation and release of data blocks, as well as lookup, modification, and deletion of packets and data blocks within queues: finding a packet or a data block in a queue requires only one index step and one sequential search, with a time complexity of only O(n). Moreover, the queue table, the pointer storage area, and the packet headers are all placed in SRAM, further improving the speed of data block allocation, lookup, release, and data processing.
The improvement to the DRAM storage controller design consists mainly in hiding the precharge time of the DRAM and using repeated burst transfers to raise memory transfer speed. Because SDRAM addressing is exclusive, after each read or write operation completes, if another row is to be addressed, the currently active row must be closed and the row/column address resent. Closing the active row in preparation for opening a new one is precharge. Since the storage capacitors of a DRAM bank are disturbed by row gating, precharge is a process of rewriting the data of all cells in the active row; it repairs the affected data signals but also introduces a certain delay.
Fig. 2 shows the timing of a DRAM read operation, from which the effect of the precharge delay can be seen.
Ordinarily a DRAM memory performs a precharge after every read/write operation, but if consecutive reads and writes address the same row, a precharge is not needed every time; it suffices to perform one precharge after the last read/write of that row completes.
Fig. 3 shows the timing diagram of two consecutive reads of the same row followed by a single precharge.
Exploiting exactly this characteristic of DRAM chips, most memory improvement methods buffer and predict consecutive memory access instructions and, by reordering, concentrate consecutive accesses within the same row as far as possible, thereby reducing the number of precharges and raising memory access speed. Fig. 3 can be regarded as the read timing diagram of an improved memory with prediction and instruction reordering; compared with the operation shown in Fig. 2, the advantage is notable. However, such prediction must take the specific network protocol and scheduling strategy into account, and arbitrary reordering may sometimes affect the correctness of data processing and scheduling.
In our implementation, we observe that the data storage area is partitioned into 64-byte blocks, that most protocol and scheduling operations are carried out in units of data blocks and packets, and that the capacity of one row of a DRAM chip is generally an integral multiple of 64 bytes, so each data block can be arranged entirely within one row when the storage area is partitioned. At the same time, instruction buffering and analysis functions are added to the storage controller: consecutive memory access instructions are reordered taking accesses to the same block as the unit, so that consecutive accesses are concentrated within the blocks of one row, reducing precharge time; and for consecutive accesses that cross blocks, precharge operations can be further avoided by judging whether the blocks belong to the same row. This improved storage controller design needs no particularly complicated decision logic and is simple to realize. For example, consider consecutive accesses A1, A2, A3, A4, A5, where A1, A3, A5 access one block and A2, A4 access another: the access sequence can be rearranged as A1, A3, A5, A2, A4. In the original sequence, supposing the two blocks are not in the same row, at least four precharges are needed; in the new sequence, only one precharge is needed, between A5 and A2.
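The same-block reordering can be sketched as a stable grouping of buffered accesses; this is an illustrative model of the controller's reordering rule, not the hardware logic itself, and `block_of` is a hypothetical helper mapping an access to its block id.

```python
from collections import OrderedDict

def reorder_by_block(accesses, block_of):
    """Stable-group buffered access instructions by the storage block they
    target, so accesses to the same block (hence the same DRAM row) become
    consecutive and fewer precharges are needed. Relative order of accesses
    within one block is preserved."""
    groups = OrderedDict()
    for a in accesses:
        groups.setdefault(block_of(a), []).append(a)
    return [a for group in groups.values() for a in group]
```

Applied to the A1..A5 example in the text, this yields the rearranged sequence A1, A3, A5, A2, A4.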
DRAM memories provide both non-burst and burst data transfer modes. Burst transfer means that the chip transfers data between adjacent storage cells of one row continuously: once the starting column address and the burst length are specified, addressing and data access proceed automatically, and as long as the gap between two successive burst access commands is controlled well, continuous burst transfer can be achieved. The number of storage cells (columns) involved in each burst transfer is the burst length (BL). Figs. 2 and 3 show a burst transfer process with burst length 4. The non-burst continuous transfer mode addresses each cell separately instead of using burst transfer, which is equivalent to BL = 1; although data can still be transferred continuously, the column address and command information must be sent every time, occupying considerable control resources, so the burst transfer mode should be used in memory design as far as possible.
However, a longer burst is not always better: if the valid data of each transfer is small but BL (BL = 1, 2, 4, 8) is set too large, relatively more time is spent transferring invalid data, and transfer efficiency actually drops. Since the storage bottleneck of a network processor is concentrated mainly in storing received data and reading data for transmission, the setting of BL should chiefly consider the unit data length of the receiving or transmitting component: if each operation handles N bits and the data line width of the storage controller is L bits, then BL should be set to the closest valid value less than or equal to N/L. For example, if N = 128 and L = 32, then BL is suitably taken as 4. The length of BL can be configured through a dedicated register.
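The BL selection rule can be written out directly; a minimal sketch, assuming the usual BL candidates of 1, 2, 4, and 8 given in the text.

```python
VALID_BL = (8, 4, 2, 1)  # candidate burst lengths, largest first

def choose_burst_length(unit_bits, bus_bits):
    """Pick BL as the largest valid value not exceeding N/L, where N is the
    receive/transmit unit width in bits and L the controller data-line width."""
    ratio = unit_bits // bus_bits
    for bl in VALID_BL:
        if bl <= ratio:
            return bl
    return 1  # unit narrower than the bus: fall back to BL = 1
```

For the example in the text, N = 128 and L = 32 gives BL = 4.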
Combining the above memory organization and management scheme, the structural block diagram of the specific implementation of the interface component is shown in Fig. 4. The interface component is mainly composed of the following circuit devices: data receiving buffer device 1, pointer storage area management device 2, DRAM controller 3, SRAM controller 4, queue management device 5, SRAM pointer storage area 6, SRAM data storage area 7, and queue table 8; an external DRAM memory can be attached through DRAM controller 3.
The data receiving buffer device 1 is connected to multiple devices. It is responsible for receiving and buffering data, tracking the interface state, and counting the received data; the receive data line is 128 bits wide, and reception is processed in units of one packet. In addition, the receiving buffer device has interfaces to the pointer storage area management device, the queue management device, the DRAM storage controller, and the SRAM storage controller, through which it requests those devices to process the received data concurrently.
The pointer storage area management device 2 is used for the organization and management of the storage area; it completes the management and maintenance of the storage block pointers and the allocation of data storage blocks, and is also responsible for certain flow control functions.
The DRAM storage controller 3 and the SRAM storage controller 4 provide access interfaces to the DRAM data storage area and the SRAM data storage area, and realize, through arbitration logic, the response to multiple data access requests.
The queue management device 5 is used for the organization and management of the queues and the distribution of data for processing; it maintains the different queue linked lists and is also responsible for sending the header data required by the microprocessor. The queue table 8 stores queue information, and the SRAM data storage area 7 stores the header data.
The DRAM storage controller provides an arbitration mechanism and three data access interfaces: one interface for internal memory access by the data receiving component, one for the microprocessor, and one for the data forwarding component. Requests from the interfaces are assigned different priorities: the data receiving interface component has the highest priority, the microprocessor the next highest, and the forwarding component the lowest, and access is granted by an absolute-priority policy. The DRAM storage controller also provides instruction caching, instruction analysis, and reordering functions: multiple access instructions can be cached at once, and the cached instructions can be analyzed and reordered, taking whether two accesses fall in the same block as the ordering criterion.
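The absolute-priority arbitration over the three interfaces can be modeled in a few lines; the interface names below are illustrative labels, not identifiers from the patent.

```python
PRIORITY = ("receive", "microprocessor", "forward")  # highest to lowest

def grant(requests):
    """Absolute-priority arbitration over the three access interfaces:
    the data receiving component always wins over the microprocessor,
    which always wins over the forwarding component."""
    for iface in PRIORITY:
        if iface in requests:
            return iface
    return None  # no interface is requesting
```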
Besides their internal interfaces, the storage controllers, the pointer storage area management device, and the queue management device also provide external access interfaces, realizing data access by the microprocessor and the forwarding component.
In its specific implementation, this interface component uses the storage area organization and management method and the improved storage controller design described above.
Fig. 5 is the state transition diagram of this interface component in operation. After device initialization completes, when the receiving interface component has received enough data, it requests the pointer storage area management device to allocate a data block and, according to the reception conditions and the pre-configuration, requests the queue management device to queue the packet. Provided the flow control check passes, the packet data is stored through the storage controllers: the payload of the packet is stored in the DRAM memory through the DRAM controller, and the header of the packet is stored in the SRAM memory through the SRAM controller. The queue management device is responsible, on the one hand, for enqueuing packets and maintaining the queues; on the other hand, it responds to requests from the microprocessor: when the microprocessor requests new packet data to process, it reads the header data through the SRAM storage controller and sends it to the microprocessor.
While completing the above actions, the storage controllers allow external units, mainly the microprocessor and the forwarding component, to access the data storage area. The storage controllers provide an arbitration function, assigning different priorities to the different accesses: internal access by the data receiving interface component has the highest priority, the microprocessor the next highest, and the forwarding component the lowest, and access is granted by an absolute-priority policy.
The pointer storage area management component and the queue management component likewise allow simultaneous access by external components; they also contain arbitration logic and use the same absolute-priority policy as the storage controllers. Through access to these two components, external components can apply for and allocate free storage blocks and perform operations such as query, modification, and deletion of data blocks and packets, while the management components themselves guarantee that the operations are valid and correct.
As can be seen from the above operations, this interface component can receive and buffer data in real time and can direct the other component devices to operate in parallel; the parallelizable operations mainly include: allocation of storage blocks, storage of data, execution of flow control, organization of queues, and distribution of data for processing.
Fig. 6 is the flow chart of the packet storage management method of the network processor; its operating steps are as follows:
(1) Pre-allocate a data storage block: apply to the free queue manager for a free storage block and its corresponding pointer.
(2) Receive and buffer data.
(3) If a new packet begins, go to (4); otherwise go to (5).
(4) Write the data into the newly allocated SRAM data storage area, modify the queue pointer and the storage block pointer, and at the same time enqueue the previously received packet in its queue; go to (8).
(5) Judge whether the data is header data; if so, go to (6), otherwise go to (7).
(6) Write the buffered data into the SRAM data storage area.
(7) Write the buffered data into the DRAM data storage area.
(8) Judge whether the current storage block is full; if full, continue, otherwise go to (2).
(9) Modify the storage block pointer to add the full storage block to the packet linked list, apply to the free queue manager for new storage space and a corresponding pointer, and go to (2).
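The storage path of these steps can be sketched as a bounded simulation. The input shape (packets as dicts with 'header' and 'payload' byte strings) is a hypothetical simplification for illustration: headers go to the SRAM area, payloads are split into 64-byte DRAM blocks, and each packet is linked into the queue.

```python
BLOCK_SIZE = 64  # bytes per storage block, as in the text

def store_packets(stream):
    """Bounded simulation of the storage path: for each packet, enqueue it,
    store its header in the SRAM area, and split its payload into 64-byte
    DRAM blocks. Returns (sram_blocks, dram_blocks, queue of packet ids)."""
    sram, dram, queue = [], [], []
    for pid, pkt in enumerate(stream):
        queue.append(pid)                   # steps (3)/(4): enqueue the packet
        sram.append(pkt["header"])          # step (6): header data into SRAM
        payload = pkt["payload"]
        for off in range(0, len(payload), BLOCK_SIZE):
            dram.append(payload[off:off + BLOCK_SIZE])  # steps (7)-(9)
    return sram, dram, queue
```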

Claims (9)

1.一种用于网络处理器的数据包接收接口部件,接收接口部件主要包括:1. A data packet receiving interface part for a network processor, the receiving interface part mainly includes: 一个数据接收缓冲装置,它与多个装置相连,用于数据的接收、缓冲、接口状态跟踪以及发出数据的存储与排队请求;A data receiving buffer device, which is connected to multiple devices, is used for data receiving, buffering, interface state tracking, and data storage and queuing requests; 一个指针存储区管理装置,用于存储区的组织管理,完成存储区指针的管理和维护以及数据存储块的分配;A pointer storage area management device, used for the organization and management of the storage area, to complete the management and maintenance of the storage area pointer and the allocation of data storage blocks; 一个DRAM存储控制器,提供对DRAM数据存储区的访问接口;A DRAM storage controller, providing an access interface to the DRAM data storage area; 一个SRAM存储控制器,提供对SRAM数据存储区的访问接口;An SRAM storage controller, providing an access interface to the SRAM data storage area; 一个队列管理装置,用于队列的组织管理和处理数据的分发;A queue management device for organizing and managing queues and distributing processing data; 一片数据存储区,该存储区主要是指一片内部的SRAM,用于存储数据包数据。A piece of data storage area, the storage area mainly refers to a piece of internal SRAM, which is used to store data packet data. 2.根据权利要求1所述的网络处理器的数据包接收接口部件,其特征在于,可以对数据进行实时接收和缓冲,并可以控制其它各组成装置进行并行操作,可并行的操作主要包括:存储块的分配,数据的存储,流控的执行,队列的组织与处理数据的分发。2. the data packet receiving interface part of network processor according to claim 1, it is characterized in that, data can be received and buffered in real time, and can control other each component device to carry out parallel operation, parallel operation mainly comprises: Allocation of storage blocks, storage of data, execution of flow control, organization of queues and distribution of processed data. 3.根据权利要求1所述的网络处理器的数据包接收接口部件,其特征在于,DRAM存储控制器提供仲裁机制和三个数据访问接口,一个接口供数据接收部件内部对存储器的访问,一个接口提供给微处理器,一个接口提供给数据转发部件,对于来自多个接口的请求,设置不同的优先级,数据接收接口部件优先级最高,微处理器优先级次高,转发部件优先级最低,并按绝对优先级策略进行授权访问。3. 
the data packet receiving interface part of network processor according to claim 1, it is characterized in that, DRAM memory controller provides arbitration mechanism and three data access interfaces, one interface is for the visit of internal memory of data receiving part, one The interface is provided to the microprocessor, and one interface is provided to the data forwarding component. For requests from multiple interfaces, different priorities are set. The data receiving interface component has the highest priority, the microprocessor has the second highest priority, and the forwarding component has the lowest priority. , and authorize access according to the absolute priority policy. 4.根据权利要求1所述的网络处理器的数据包接收接口部件,其特征在于,DRAM存储控制器提供指令缓存、指令分析和重排序功能,可以一次缓存多条访问指令,可以对缓存的指令进行分析和重排序,对指令的重新排序是以是否在同一块内为排序条件的。4. the data packet receiving interface part of network processor according to claim 1, it is characterized in that, DRAM storage controller provides instruction cache, instruction analysis and reordering function, can once cache a plurality of access instructions, can cache the Instructions are analyzed and reordered, and the reordering of instructions is based on whether they are in the same block or not. 5.一种网络处理器的数据包存储管理方法,其特征在于,设置一块指针存储区,指针存储区表项有位置域、数据包指针和存储块指针组成,位置域用于指出数据块在所属数据包中的位置,数据包指针用于将数据包组织成队列,存储块指针和数据存储块一一对齐,用于将属于同一数据包的数据块组织成链表。5. a data packet storage management method of a network processor, characterized in that, a pointer storage area is set, and the pointer storage area entry is made up of a location field, a data packet pointer and a storage block pointer, and the location field is used to point out that the data block is in The position in the belonging data packet, the data packet pointer is used to organize the data packet into a queue, and the storage block pointer and the data storage block are aligned one by one, and is used to organize the data blocks belonging to the same data packet into a linked list. 
6.根据权利要求5所述的网络处理器的数据包存储管理方法,其特征在于,设置一个队列表,队列表项有队列头指针和队列尾指针组成,用于将数据包组织成队列。6. the data packet storage management method of network processor according to claim 5, is characterized in that, a queue table is set, and queue table item is made up of queue head pointer and queue tail pointer, is used for organizing data packet into queue. 7.根据权利要求5所述的网络处理器的数据包存储管理方法,其特征在于,包括存储块的划分,存储块的组织以及队列的组织方法。7. The data packet storage management method of the network processor according to claim 5, characterized in that, comprising division of storage blocks, organization of storage blocks and organization methods of queues. 8.根据权利要求5所述的网络处理器的数据包存储管理方法,其特征在于,存储区包括一块DRAM和多块SRAM,存储数据的DRAM和SRAM均以64字节为单位分块,并且DRAM的每一块都被安排在DRAM存储芯片的一行内,数据包的净荷存放在DRAM中,而数据包头存放在SRAM中,每次发送的处理数据都是SRAM中的数据包包头数据,指针存储区和队列表均放在SRAM中,有助于加快存储块的分配、查找,修改及释放操作。8. the packet storage management method of network processor according to claim 5, is characterized in that, storage area comprises a piece of DRAM and a plurality of pieces of SRAM, and the DRAM of storing data and SRAM all take 64 bytes as the unit block, and Each piece of DRAM is arranged in a row of the DRAM memory chip, the payload of the data packet is stored in the DRAM, and the data packet header is stored in the SRAM, and the processing data sent each time is the packet header data in the SRAM, the pointer Both the storage area and the queue table are placed in the SRAM, which helps to speed up the allocation, search, modification and release operations of the storage block. 9.根据权利要求5所述的网络处理器的数据包存储管理方法,其操作步骤如下:9. 
The data packet storage management method of claim 5, the operating steps of which are as follows:
(1) Pre-allocate data storage blocks: request free storage blocks and the corresponding pointers from the free-queue manager.
(2) Receive and buffer data.
(3) If the data starts a new data packet, go to (4); otherwise go to (5).
(4) Write the data into the newly allocated SRAM data storage area, update the queue pointer and the storage block pointer, enqueue the previously received packet into its queue, and go to (8).
(5) Determine whether the data is packet header data; if so, go to (6), otherwise go to (7).
(6) Write the buffered data into the SRAM data storage area.
(7) Write the buffered data into the DRAM data storage area.
(8) Check whether the current storage block is full; if it is full, continue, otherwise go to (2).
(9) Update the storage block pointer to append the full block to the packet's linked list, request a new storage block and the corresponding pointer from the free-queue manager, and go to (2).
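The net effect of steps (1) through (9) is that each packet's first 64-byte block (the header) lands in SRAM and all remaining blocks land in DRAM. A minimal sketch of that split, with illustrative names and the queue/pointer bookkeeping omitted:

```python
BLOCK_SIZE = 64  # claim 8: DRAM and SRAM are divided into 64-byte blocks

def receive(packets):
    """Process a list of (queue_id, data) packets per the split of
    claim 8: the first 64-byte block of each packet (the header) goes
    to SRAM, the remaining payload blocks go to DRAM.  Returns the two
    storage areas as lists of (queue_id, block) tuples."""
    sram, dram = [], []
    for qid, data in packets:
        # Steps (2), (8)-(9): consume data one 64-byte block at a time.
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            if off == 0:
                sram.append((qid, block))  # steps (4), (6): header block
            else:
                dram.append((qid, block))  # step (7): payload block
    return sram, dram

# A 200-byte packet occupies four blocks: one header + three payload.
sram, dram = receive([(0, b"x" * 200)])
assert len(sram) == 1 and len(dram) == 3
```

Since only the SRAM-resident header is handed to the processing path, the DRAM is touched once on receive and once on forward, which is the bandwidth saving the block split is designed for.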
CNB2004100500047A 2004-06-25 2004-06-25 A data packet receiving interface part of a network processor and its storage management method Expired - Fee Related CN100440854C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100500047A CN100440854C (en) 2004-06-25 2004-06-25 A data packet receiving interface part of a network processor and its storage management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100500047A CN100440854C (en) 2004-06-25 2004-06-25 A data packet receiving interface part of a network processor and its storage management method

Publications (2)

Publication Number Publication Date
CN1595910A true CN1595910A (en) 2005-03-16
CN100440854C CN100440854C (en) 2008-12-03

Family

ID=34665885

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100500047A Expired - Fee Related CN100440854C (en) 2004-06-25 2004-06-25 A data packet receiving interface part of a network processor and its storage management method

Country Status (1)

Country Link
CN (1) CN100440854C (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100376099C (en) * 2005-07-04 2008-03-19 清华大学 Integrated Queue Management Method Based on Network Processor Platform
CN100386752C (en) * 2006-06-20 2008-05-07 北京飞天诚信科技有限公司 Online updating method for USB device when communication protocol constrained
CN101075930B (en) * 2006-05-16 2011-07-27 汤姆森许可贸易公司 Network storage device
CN101605100B (en) * 2009-07-15 2012-04-25 华为技术有限公司 Method and apparatus for managing queue storage space
CN102567241A (en) * 2010-12-27 2012-07-11 北京国睿中数科技股份有限公司 Memory controller and memory access control method
WO2013020429A1 (en) * 2011-08-11 2013-02-14 中兴通讯股份有限公司 Network processor mirror implementation method and network processor
CN101808029B (en) * 2009-02-13 2013-03-13 雷凌科技股份有限公司 Method and device for preloading packet header and system using method
CN103314362A (en) * 2010-12-17 2013-09-18 意法爱立信有限公司 Vector-based matching circuit for data streams
CN103490939A (en) * 2012-06-11 2014-01-01 中兴通讯股份有限公司 Data packet processing method and data packet processing device
WO2014101192A1 (en) * 2012-12-31 2014-07-03 华为技术有限公司 Network device and message processing method
CN104811495A (en) * 2015-04-27 2015-07-29 北京交通大学 Method and module for content storage of network component of smart and cooperative network
WO2016019554A1 (en) * 2014-08-07 2016-02-11 华为技术有限公司 Queue management method and apparatus
CN107369473A (en) * 2016-05-13 2017-11-21 爱思开海力士有限公司 Storage system and its operating method
WO2018040600A1 (en) * 2016-08-31 2018-03-08 深圳市中兴微电子技术有限公司 Forwarding table-based information processing method and apparatus, and computer readable storage medium
CN109413122A (en) * 2017-08-16 2019-03-01 深圳市中兴微电子技术有限公司 Data processing method, network processor and computer storage medium
CN113779019A (en) * 2021-01-14 2021-12-10 北京沃东天骏信息技术有限公司 Current limiting method and device based on annular linked list

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0150072B1 (en) * 1995-11-30 1998-10-15 양승택 Memory data path controller in parallel processing computer system
US6983350B1 (en) * 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US6754795B2 (en) * 2001-12-21 2004-06-22 Agere Systems Inc. Methods and apparatus for forming linked list queue using chunk-based structure
CN1214541C (en) * 2002-02-04 2005-08-10 华为技术有限公司 Communication method between inner core and microengine inside network processor

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100376099C (en) * 2005-07-04 2008-03-19 清华大学 Integrated Queue Management Method Based on Network Processor Platform
CN101075930B (en) * 2006-05-16 2011-07-27 汤姆森许可贸易公司 Network storage device
CN100386752C (en) * 2006-06-20 2008-05-07 北京飞天诚信科技有限公司 Online updating method for USB device when communication protocol constrained
CN101808029B (en) * 2009-02-13 2013-03-13 雷凌科技股份有限公司 Method and device for preloading packet header and system using method
CN101605100B (en) * 2009-07-15 2012-04-25 华为技术有限公司 Method and apparatus for managing queue storage space
CN103314362B (en) * 2010-12-17 2016-09-21 瑞典爱立信有限公司 Match circuit based on vector for data stream
CN103314362A (en) * 2010-12-17 2013-09-18 意法爱立信有限公司 Vector-based matching circuit for data streams
CN102567241A (en) * 2010-12-27 2012-07-11 北京国睿中数科技股份有限公司 Memory controller and memory access control method
WO2013020429A1 (en) * 2011-08-11 2013-02-14 中兴通讯股份有限公司 Network processor mirror implementation method and network processor
CN103490939A (en) * 2012-06-11 2014-01-01 中兴通讯股份有限公司 Data packet processing method and data packet processing device
WO2014101192A1 (en) * 2012-12-31 2014-07-03 华为技术有限公司 Network device and message processing method
WO2016019554A1 (en) * 2014-08-07 2016-02-11 华为技术有限公司 Queue management method and apparatus
CN106537858A (en) * 2014-08-07 2017-03-22 华为技术有限公司 Queue management method and apparatus
CN106537858B (en) * 2014-08-07 2019-07-19 华为技术有限公司 A method and device for queue management
US10248350B2 (en) 2014-08-07 2019-04-02 Huawei Technologies Co., Ltd Queue management method and apparatus
CN104811495B (en) * 2015-04-27 2018-06-08 北京交通大学 A kind of networking component content storage method and module for wisdom contract network
CN104811495A (en) * 2015-04-27 2015-07-29 北京交通大学 Method and module for content storage of network component of smart and cooperative network
CN107369473A (en) * 2016-05-13 2017-11-21 爱思开海力士有限公司 Storage system and its operating method
CN107797942A (en) * 2016-08-31 2018-03-13 深圳市中兴微电子技术有限公司 Reduce the method and its device of Large Copacity forward table access times
WO2018040600A1 (en) * 2016-08-31 2018-03-08 深圳市中兴微电子技术有限公司 Forwarding table-based information processing method and apparatus, and computer readable storage medium
CN107797942B (en) * 2016-08-31 2020-11-20 深圳市中兴微电子技术有限公司 Method and device for reducing access times of large-capacity forwarding table
CN109413122A (en) * 2017-08-16 2019-03-01 深圳市中兴微电子技术有限公司 Data processing method, network processor and computer storage medium
CN109413122B (en) * 2017-08-16 2022-05-13 深圳市中兴微电子技术有限公司 A data processing method, network processor and computer storage medium
CN113779019A (en) * 2021-01-14 2021-12-10 北京沃东天骏信息技术有限公司 Current limiting method and device based on annular linked list
CN113779019B (en) * 2021-01-14 2024-05-17 北京沃东天骏信息技术有限公司 Circular linked list-based current limiting method and device

Also Published As

Publication number Publication date
CN100440854C (en) 2008-12-03

Similar Documents

Publication Publication Date Title
CN1595910A (en) A data packet receiving interface component of network processor and storage management method thereof
US7724735B2 (en) On-chip bandwidth allocator
WO2007004159A2 (en) Method and apparatus for bandwidth efficient and bounded latency packet buffering
US20050219564A1 (en) Image forming device, pattern formation method and storage medium storing its program
US20020124149A1 (en) Efficient optimization algorithm in memory utilization for network applications
US8677075B2 (en) Memory manager for a network communications processor architecture
US20050220112A1 (en) Distributed packet processing with ordered locks to maintain requisite packet orderings
CN101499956B (en) Hierarchical buffer zone management system and method
US7529224B2 (en) Scheduler, network processor, and methods for weighted best effort scheduling
US12068972B1 (en) Shared traffic manager
JP2004536515A (en) Switch fabric with dual port memory emulation
CN103345451A (en) Data buffering method in multi-core processor
Hasan et al. Efficient use of memory bandwidth to improve network processor throughput
US6697923B2 (en) Buffer management method and a controller thereof
US20160085477A1 (en) Addressless merge command with data item identifier
WO2016019554A1 (en) Queue management method and apparatus
CN1677958A (en) A compact packet-switched node memory architecture using double-rate synchronous dynamic RAM
CN1829200A (en) Systems and methods for implementing counters in a network processor
US10846225B1 (en) Buffer read optimizations in a network device
JP2003228461A (en) Disk cache management method for disk array device
US10742558B1 (en) Traffic manager resource sharing
EP1471430B1 (en) Stream memory manager
US9804959B2 (en) In-flight packet processing
CN1781079A (en) Maintaining entity order with gate managers
CN101464839B (en) Access buffering mechanism and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: G-CLOUD TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES

Effective date: 20140514

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 523808 DONGGUAN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140514

Address after: 523808 Guangdong province Dongguan City Songshan Lake Science and Technology Industrial Park Building No. 14 Keyuan pine

Patentee after: G-CLOUD TECHNOLOGY Co.,Ltd.

Address before: 100080 No. 6 South Road, Zhongguancun Academy of Sciences, Beijing

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Li Xiaowei

Inventor after: Li Huawei

Inventor after: Gong Shuguang

Inventor after: Xu Yufeng

Inventor after: Liu Tong

Inventor before: Gong Shuguang

Inventor before: Li Huawei

Inventor before: Xu Yufeng

Inventor before: Liu Tong

Inventor before: Li Xiaowei

CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 523808 19th Floor, Cloud Computing Center, Chinese Academy of Sciences, No. 1 Kehui Road, Songshan Lake Hi-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: G-CLOUD TECHNOLOGY Co.,Ltd.

Address before: 523808 No. 14 Building, Songke Garden, Songshan Lake Science and Technology Industrial Park, Dongguan City, Guangdong Province

Patentee before: G-CLOUD TECHNOLOGY Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081203