CN101494636B - Method and apparatus for ordering data based on rapid IO interconnection technology
- Publication number
- CN101494636B
- Authority
- CN
- China
- Prior art keywords
- groups
- response
- package
- request
- buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
本发明公开了一种基于快速IO互连技术的数据排序方法,预先设定三组先入先出队列,用于存储不同优先级的响应包,所述方法包括:以请求包组为单位发送请求包;从目的端接收与所述请求包组对应的响应包组,并按照响应包的优先级和到达源端的先后顺序,依次将所述响应包组的响应包存入三组先入先出队列中;当接收完所述响应包组后,按照传输标识的顺序,依次从所述三组先入先出队列中读取并发送响应包。本发明在使用较少缓冲区资源及耗费较小时延的前提下,有效地校正Rapid IO接口的响应包传输顺序,解决了现有高速数据处理系统中响应包乱序的问题,提高了高速数据处理系统的性能。
The invention discloses a data sorting method based on the Rapid IO interconnection technology. Three groups of first-in-first-out queues are preset for storing response packets of different priorities. The method includes: sending request packets in units of request packet groups; receiving, from the destination, the response packet group corresponding to the request packet group, and storing the response packets of the group into the three first-in-first-out queues in sequence, according to the priority of each response packet and the order in which it arrives at the source; and, after the whole response packet group has been received, reading the response packets from the three first-in-first-out queues and sending them in the order of their transmission identifiers. On the premise of using fewer buffer resources and incurring a small delay, the invention effectively corrects the response packet transmission order of the Rapid IO interface, solves the problem of out-of-order response packets in existing high-speed data processing systems, and improves their performance.
Description
技术领域 Technical Field
本发明涉及嵌入式系统的互连技术,尤其涉及一种基于快速IO互连技术的数据排序方法及装置。The invention relates to the interconnection technology of embedded systems, in particular to a data sorting method and device based on fast IO interconnection technology.
背景技术 Background Art
Rapid IO(快速IO架构)是由Rapid IO Trade Association(Rapid IO行业协会)于2001年12月开发制定的一套应用于芯片级和板级互连的公开的高带宽全双工级联方案,其性能能够达到10Gb/s或者更高。它是低迟延、基于存储器地址的协议,可升级、可靠、支持多重处理并对应用软件透明。Rapid IO (Rapid IO Architecture) is a set of open high-bandwidth full-duplex cascading solutions for chip-level and board-level interconnection developed by the Rapid IO Trade Association (Rapid IO Industry Association) in December 2001. Its performance can reach 10Gb/s or higher. It is a low-latency, memory-address-based protocol that is scalable, reliable, multiprocessing capable, and transparent to application software.
Rapid IO协议的包类型可分为维护包读写、读请求、读响应、写操作等类型。Rapid IO读操作模式:源端发起请求,目的端响应,即读请求以包交换的方式先行由源端发至目的端,目的端将响应包送至源端,从而实现一次完整的操作。The packet types of the Rapid IO protocol can be divided into maintenance packet read and write, read request, read response, and write operations. Rapid IO read operation mode: the source initiates a request, and the destination responds, that is, the read request is first sent from the source to the destination in the form of packet exchange, and the destination sends the response packet to the source, thus realizing a complete operation.
在目前主流的带有Rapid IO接口的CPU中,在Rapid IO接口的发送端通常有4组Buffer(缓冲区),分别对应于发送4种优先级的数据包。Rapid IO协议死锁预防规则规定,“携带响应的包的优先级应至少比相应请求包的优先级高一级”,所以响应包的优先级只能有3种级别:1、2、3。In the current mainstream CPU with Rapid IO interface, there are usually 4 sets of Buffers (buffers) at the sending end of the Rapid IO interface, corresponding to sending data packets with 4 priority levels. The deadlock prevention rules of the Rapid IO protocol stipulate that "the priority of the packet carrying the response should be at least one level higher than that of the corresponding request packet", so the priority of the response packet can only have three levels: 1, 2, and 3.
目的端CPU对于请求包的响应机制是:根据请求包请求的地址和数据长度，将相应数据进行打包生成相应的响应包。由于请求包的字节少，包传送速度高于相应响应包的发送速度，所以在响应包的打包过程中，若低优先级的Buffer填满后，新生成的响应包将存入较高优先级的Buffer中，这将导致顺序在后的响应包的优先级高于顺序在前的响应包。而Rapid IO协议中事务与包传送排序规则规定，"端点处理部件端口的物理层应该保证从处理部件的物理层收到的高优先级的请求事务在低优先级的请求事务之前转发，低优先级的包不能超过高优先级的包"。这样，在响应包传送过程中就会出现如下问题：CPU会先发送顺序在后但优先级高的数据包，再发送顺序在前但优先级低的数据包。The destination CPU responds to a request packet as follows: according to the address and data length requested, it packages the corresponding data into a response packet. Because request packets are short, they travel faster than the corresponding response packets can be sent out. Consequently, during response packaging, once a low-priority Buffer fills up, newly generated response packets are stored in a higher-priority Buffer, so a later response packet may end up with a higher priority than an earlier one. The transaction and packet ordering rules of the Rapid IO protocol stipulate that "the physical layer of an endpoint processing element port shall ensure that higher-priority request transactions received from the processing element's physical layer are forwarded before lower-priority ones; a lower-priority packet must not pass a higher-priority packet". As a result, the following problem arises during response transmission: the CPU sends the later but higher-priority packets first, and the earlier but lower-priority packets afterwards.
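The buffer-escalation behaviour described above can be illustrated with a small model. This is a hedged sketch, not the patent's or any CPU's actual implementation: the function name, the fill policy (lowest non-full priority buffer wins), and the strict priority drain are illustrative assumptions.

```python
from collections import deque

def simulate_response_ordering(num_responses, buffer_depth):
    """Illustrative model: responses are generated in request order, but
    each one lands in the lowest-priority buffer (1 < 2 < 3) that still
    has room, so later responses can end up with a higher priority."""
    buffers = {1: deque(), 2: deque(), 3: deque()}
    for resp_id in range(num_responses):
        for prio in (1, 2, 3):
            if len(buffers[prio]) < buffer_depth:
                buffers[prio].append(resp_id)
                break
    # The ordering rule forces higher-priority packets onto the wire
    # first, so the send order drains priority 3, then 2, then 1.
    wire_order = []
    for prio in (3, 2, 1):
        wire_order.extend(buffers[prio])
    return wire_order

# Six responses into buffers of depth 2: responses 4 and 5 overtake
# responses 0 and 1 on the wire.
print(simulate_response_ordering(6, 2))  # → [4, 5, 2, 3, 0, 1]
```

In this model the inversion only appears once a buffer overflows, which is consistent with the patent's choice to bound each group and re-sort the responses at the source.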
目前，解决这种问题的方法通常是由目的端CPU降低数据打包速度，减缓发送速率，但是在对数据顺序敏感的高速数据处理系统中，发送速率的降低将导致系统性能下降，而速率过快则会出现乱序现象，导致系统出错，从而严重影响Rapid IO接口在高速系统中的应用。At present, this problem is usually addressed by having the destination CPU slow down data packaging and thus reduce the sending rate. In a high-speed data processing system that is sensitive to data order, however, lowering the sending rate degrades system performance, while too high a rate causes out-of-order delivery and hence system errors, severely limiting the use of the Rapid IO interface in high-speed systems.
发明内容 Summary of the Invention
鉴于上述的分析,本发明旨在提供一种基于快速IO互连技术的数据排序方法及装置,用以解决现有技术中存在的高速数据处理系统中响应包乱序的问题。In view of the above analysis, the present invention aims to provide a data sorting method and device based on fast IO interconnection technology to solve the problem of out-of-order response packets in high-speed data processing systems in the prior art.
本发明的目的是通过以下技术方案实现的。The purpose of the present invention is achieved through the following technical solutions.
本发明提供了一种基于快速IO互连技术的数据排序方法,预先设定三组先入先出队列,用于存储不同优先级的响应包,所述方法包括:The present invention provides a data sorting method based on fast IO interconnection technology. Three sets of first-in-first-out queues are preset for storing response packets with different priorities. The method includes:
步骤A:以请求包组为单位发送请求包;Step A: sending request packets in units of request packet groups;
步骤B：从目的端接收与所述请求包组对应的响应包组，并按照响应包的优先级和到达源端的先后顺序，依次将所述响应包组的响应包存入三组先入先出队列中；Step B: receiving, from the destination, the response packet group corresponding to the request packet group, and storing the response packets of the group into the three first-in-first-out queues in sequence, according to the priority of each response packet and the order in which it arrives at the source;
步骤C:当接收完所述响应包组后,按照传输标识的顺序,依次从所述三组先入先出队列中读取并发送响应包。Step C: After receiving the response packet groups, read and send the response packets from the three groups of first-in-first-out queues sequentially according to the order of the transmission identifiers.
进一步地,所述步骤A具体包括:Further, the step A specifically includes:
根据系统的需求,将请求包以组为单位进行划分;According to the requirements of the system, the request package is divided into groups;
先发送请求包组中的一部分请求包;First send a part of the request packets in the request packet group;
当接收到目的端反馈的首个响应包后,将剩余的请求包发送出去。After receiving the first response packet fed back by the destination, send the remaining request packets.
进一步地,当预设定两组缓冲区单元,每组缓冲区单元包括三组先入先出队列时,所述步骤B具体包括:Further, when two groups of buffer units are preset, and each group of buffer units includes three groups of first-in-first-out queues, the step B specifically includes:
对从目的端接收到的与所述请求包组对应的响应包组进行奇偶计数，并采用乒乓操作的方式选择存储所述响应包组用的缓冲区单元；performing parity counting on the response packet group received from the destination that corresponds to the request packet group, and selecting, in a ping-pong manner, the buffer unit used to store the response packet group;
按照响应包的优先级和到达源端的先后顺序,依次将所述响应包组的响应包存入三组先入先出队列中,同时清空另外一组缓冲区单元。According to the priority of the response packets and the order in which they arrive at the source, the response packets of the response packet group are sequentially stored in three groups of first-in-first-out queues, and at the same time, another group of buffer units is cleared.
所述先入先出队列的长度等于请求包组的个数。The length of each first-in-first-out queue equals the number of request packets in a request packet group.
本发明还提供了一种基于快速IO互连技术的数据排序装置,包括:The present invention also provides a data sorting device based on fast IO interconnection technology, including:
发送模块,用于根据系统的需求,将请求包以组为单位进行划分,并以请求包组为单位发送请求包;The sending module is used to divide the request packets into groups according to the requirements of the system, and send the request packets in units of request packet groups;
排序模块，用于接收与所述请求包组对应的响应包组，并按照响应包的优先级和到达源端的先后顺序，依次将所述响应包组的响应包存入三组先入先出队列中；a sorting module, configured to receive the response packet group corresponding to the request packet group, and to store the response packets of the group into the three first-in-first-out queues in sequence, according to the priority of each response packet and the order in which it arrives at the source;
读取模块,用于按照传输标识的顺序,依次从所述三组先入先出队列中读取并发送响应包。The reading module is configured to sequentially read and send response packets from the three groups of first-in-first-out queues according to the order of the transmission identifiers.
进一步地,所述排序模块具体包括:Further, the sorting module specifically includes:
奇偶计数器,用于对从目的端接收到的与所述请求包组对应的响应包组进行奇偶计数;a parity counter, configured to perform parity counting on the response packet group corresponding to the request packet group received from the destination;
第一选择器,用于根据奇偶计数,选择第一缓冲区单元或第二缓冲区单元来存储所述响应包组;The first selector is used to select the first buffer unit or the second buffer unit to store the response packet group according to the parity count;
第一缓冲区单元,用于根据第一选择器的选择结果,与第二缓冲区单元轮流缓存所述响应包组;The first buffer unit is configured to cache the response packet group in turn with the second buffer unit according to the selection result of the first selector;
第二缓冲区单元,用于根据第一选择器的选择结果,与第一缓冲区单元轮流缓存所述响应包组;The second buffer unit is configured to cache the response packet group in turn with the first buffer unit according to the selection result of the first selector;
第二选择器,用于将从第一缓冲区单元或第二缓冲区单元得到的响应包组发送给下级数据处理单元。The second selector is used for sending the response packet group obtained from the first buffer unit or the second buffer unit to the lower-level data processing unit.
综上所述，本发明提供了一种基于快速IO互连技术的数据排序方法及装置，在使用较少缓冲区资源及耗费较小时延的前提下，有效地校正Rapid IO接口的响应包传输顺序，解决了现有高速数据处理系统中响应包乱序的问题，提高了高速数据处理系统的性能。In summary, the present invention provides a data sorting method and device based on the Rapid IO interconnection technology which, on the premise of using fewer buffer resources and incurring a small delay, effectively correct the response packet transmission order of the Rapid IO interface, solve the problem of out-of-order response packets in existing high-speed data processing systems, and improve their performance.
附图说明 Description of drawings
图1为本发明实施例所述方法的流程示意图;Fig. 1 is a schematic flow chart of the method described in the embodiment of the present invention;
图2为本发明实施例中所述排序模块的结构示意图。Fig. 2 is a schematic structural diagram of the sorting module in the embodiment of the present invention.
具体实施方式 Detailed Description of the Embodiments
下面结合附图来具体描述本发明的优先实施例,其中,附图构成本申请一部分,并与本发明的实施例一起用于阐释本发明的原理。Preferred embodiments of the present invention will be specifically described below in conjunction with the accompanying drawings, wherein the accompanying drawings constitute a part of the application and are used together with the embodiments of the present invention to explain the principles of the present invention.
首先结合附图1对本发明实施例所述方法进行详细阐述。Firstly, the method described in the embodiment of the present invention will be described in detail with reference to FIG. 1.
步骤100、预先在源端开辟三组FIFO(First Input First Output,先入先出队列)空间,用于存储三种不同优先级的响应包,每个FIFO的深度为M。Step 100: Open up three groups of FIFO (First Input First Output) spaces at the source in advance for storing response packets of three different priorities, and the depth of each FIFO is M.
步骤101、源端以组为单位发送请求包；具体的说就是，源端根据系统的需求将请求包以组为单位划分成多个请求包组，每组包含M个请求包。在向目的端发送请求包时，每单位时间内发送一组请求包。发送的过程为：先发送M-N个包，当接收到该组首个响应包后，将剩余的N个请求包发送出去。Step 101: the source sends request packets in units of groups. Specifically, according to system requirements the source divides the request packets into a number of request packet groups, each containing M packets. When sending to the destination, one group of request packets is sent per unit time. The sending process is: first send M-N packets; after the first response packet of the group is received, send the remaining N request packets.
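Step 101's two-phase transmission can be sketched as follows. This is a hedged illustration: `send_request_group` and its callback parameters are hypothetical names standing in for Rapid IO driver operations that the patent does not name.

```python
def send_request_group(requests, n_held_back, send, wait_first_response):
    """Send a group of M requests in two bursts: M - N immediately,
    then the remaining N once the group's first response arrives.
    'send' and 'wait_first_response' are placeholder callbacks."""
    m = len(requests)
    for req in requests[: m - n_held_back]:
        send(req)
    wait_first_response()  # blocks until the first response of the group
    for req in requests[m - n_held_back :]:
        send(req)

# Demo with M = 5, N = 2; the log shows where the wait occurs.
log = []
send_request_group(list(range(5)), 2, log.append, lambda: log.append("wait"))
print(log)  # → [0, 1, 2, 'wait', 3, 4]
```

Holding back N requests throttles the source so that the destination's response buffers are less likely to overflow into higher priorities mid-group.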
步骤102、目的端接收到请求包后,发送相应的响应包。Step 102: After receiving the request packet, the destination end sends a corresponding response packet.
步骤103、当源端将接收到与其发送的请求包对应的响应包后,按照接收优先级的不同和到达的先后顺序,依次将所述响应包存入不同的FIFO空间中。Step 103: After the source end receives the response packets corresponding to the request packets it sent, it stores the response packets in different FIFO spaces sequentially according to the difference in receiving priority and the order of arrival.
步骤104、在源端接收到该组的最后一个响应包时，以优先级为1的FIFO的第一个包的Transaction ID(传输标识)值为基准，按照Transaction ID的顺序，从3个FIFO中读取响应包，发送给下一级数据处理单元进行相应处理。Step 104: when the source receives the last response packet of the group, it takes the Transaction ID (transmission identifier) of the first packet in the priority-1 FIFO as the reference and, following the order of Transaction IDs, reads the response packets out of the three FIFOs and sends them to the next-level data processing unit for further processing.
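Because packets of the same priority keep their relative order, each FIFO already holds a subsequence sorted by Transaction ID, so step 104 amounts to a three-way merge. The sketch below is an illustrative Python model, not the patent's hardware; the `(transaction_id, payload)` packet format and the function name are assumptions.

```python
import heapq
from collections import deque

def reorder_by_transaction_id(fifo1, fifo2, fifo3):
    """Merge three FIFOs of (transaction_id, payload) packets, each
    already in increasing Transaction ID order, back into the original
    request order."""
    fifos = [deque(f) for f in (fifo1, fifo2, fifo3)]
    merged = []
    # Seed a heap with the head Transaction ID of each non-empty FIFO.
    heap = [(f[0][0], i) for i, f in enumerate(fifos) if f]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        merged.append(fifos[i].popleft())
        if fifos[i]:
            heapq.heappush(heap, (fifos[i][0][0], i))
    return merged
```

The patent anchors the merge on the Transaction ID of the first packet in the priority-1 FIFO; the sketch generalizes that by always emitting the smallest outstanding ID.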
在上述处理过程中,为了提高效率,本发明实施例采取了乒乓操作的方式。具体的说就是,预先在源端开辟两组缓冲区单元,同时在每组缓冲区单元内开辟三组FIFO空间,用于存储不同的优先级的响应包,每个FIFO的深度为M。During the above processing, in order to improve efficiency, the embodiment of the present invention adopts a ping-pong operation. Specifically, two sets of buffer units are opened at the source in advance, and three sets of FIFO spaces are opened in each set of buffer units to store response packets with different priorities, and the depth of each FIFO is M.
在源端处，对接收到的响应包组进行奇偶计数，以确定乒乓操作时使用的缓冲区单元。比如，当接收到第一组响应包时，记为奇数，并将其存储在第一缓冲区单元；当接收到第二组响应包时，记为偶数，并将其存储在第二缓冲区单元；当接收到第三组响应包时，记为奇数，并将其存储在第一缓冲区单元；以此类推。然后按照响应包的优先级和到达源端的先后顺序，依次将所述响应包组的响应包存入三组先入先出队列中，同时清空另外一组缓冲区单元，并重复执行步骤103和104。本发明通过采用乒乓操作的方式实现了响应包的无缝缓冲和处理。At the source, parity counting is performed on the received response packet groups to determine which buffer unit the ping-pong operation uses. For example, the first group of response packets is counted as odd and stored in the first buffer unit; the second group is counted as even and stored in the second buffer unit; the third group is counted as odd again and stored in the first buffer unit; and so on. The response packets of each group are then stored into the three first-in-first-out queues in sequence, according to their priority and the order in which they arrive at the source, while the other buffer unit is emptied, and steps 103 and 104 are repeated. By adopting the ping-pong operation, the present invention achieves seamless buffering and processing of response packets.
接下来,对本发明实施例所述装置进行详细阐述。Next, the device described in the embodiment of the present invention is described in detail.
本发明实施例所述装置具体包括:The device described in the embodiment of the present invention specifically includes:
发送模块，根据系统的需求，将请求包以组为单位进行划分，并以请求包组为单位发送请求包给目的端；目的端根据接收到的请求包，进行打包生成相应的响应包，并依次将生成的响应包发送出去。A sending module, which divides the request packets into groups according to the requirements of the system and sends them to the destination in units of request packet groups; the destination packages the received request packets into corresponding response packets and sends the generated response packets out in turn.
排序模块,接收与所述请求包对应的响应包,并按照响应包的优先级和到达源端的先后顺序,依次将所述响应包组的响应包存入三组先入先出队列中;The sorting module receives the response packets corresponding to the request packets, and according to the priority of the response packets and the order of arrival at the source, sequentially stores the response packets of the response packet groups into three groups of first-in-first-out queues;
读取模块，当接收完一组响应包组后，所述读取模块按照响应包的传输标识的顺序，即依次从所述三组先入先出队列中读取并发送响应包。A reading module: after a whole response packet group has been received, the reading module reads the response packets from the three groups of first-in-first-out queues and sends them in the order of their transmission identifiers.
这里，为了提高效率，本发明实施例采取了乒乓操作的方式。具体的说就是，预先在源端开辟两组缓冲区单元，同时在每组缓冲区单元内开辟三组FIFO空间，用于存储不同的优先级的响应包，每个FIFO的深度为M，则所述排序模块的结构如图2所示，具体可以包括：Here, in order to improve efficiency, the embodiment of the present invention adopts a ping-pong operation. Specifically, two groups of buffer units are opened at the source in advance, and three groups of FIFO space are opened in each buffer unit to store response packets of different priorities; each FIFO has a depth of M. The structure of the sorting module is shown in FIG. 2 and may specifically include:
奇偶计数器，用于对从目的端接收到的与所述请求包组对应的响应包组进行奇偶计数；每接收到一组响应包，奇偶计数器进行一次计数。比如，当接收到第一组响应包时，记为奇数，并将其存储在第一缓冲区单元；当接收到第二组响应包时，记为偶数，并将其存储在第二缓冲区单元；当接收到第三组响应包时，记为奇数，并将其存储在第一缓冲区单元；以此类推。A parity counter, configured to perform parity counting on the response packet groups received from the destination that correspond to the request packet groups; the counter increments once for each group received. For example, the first group of response packets is counted as odd and stored in the first buffer unit; the second group is counted as even and stored in the second buffer unit; the third group is counted as odd again and stored in the first buffer unit; and so on.
第一选择器,用于根据奇偶计数,选择第一缓冲区单元或第二缓冲区单元来存储所述响应包组;The first selector is used to select the first buffer unit or the second buffer unit to store the response packet group according to the parity count;
第一缓冲区单元,用于根据第一选择器的选择结果,与第二缓冲区单元配合工作,轮流缓存所述响应包组;The first buffer unit is configured to cooperate with the second buffer unit according to the selection result of the first selector to cache the response packet groups in turn;
第二缓冲区单元,用于根据第一选择器的选择结果,与第一缓冲区单元配合工作,轮流缓存所述响应包组;The second buffer unit is configured to cooperate with the first buffer unit according to the selection result of the first selector to cache the response packet group in turn;
第二选择器,用于将从第一缓冲区单元或第二缓冲区单元得到的响应包组发送给下级数据处理单元。The second selector is used for sending the response packet group obtained from the first buffer unit or the second buffer unit to the lower-level data processing unit.
所述乒乓操作的具体过程可以为：通过第一选择器将响应包等时分配到两组缓冲区单元，当奇偶计数器为奇数时，将接收到的第一组响应包缓存到第一缓冲区单元；当奇偶计数器由奇数变为偶数时，通过第一选择器的切换，并将接收到的第二组响应包缓存到第二缓冲区单元，同时将第一缓冲区单元缓存的第一组响应包通过第二选择器的选择，送到下级数据处理单元进行相关处理；当奇偶计数器由偶数变为奇数时，通过第一选择器的再次切换，将接收到的第三组响应包缓存到第一缓冲区单元，同时将第二缓冲区单元缓存的第二组响应包通过第二选择器的切换，送到下级数据处理单元进行相关处理；以此类推。A concrete ping-pong sequence may be as follows: the first selector distributes the response packet groups alternately between the two buffer units. When the parity counter is odd, the first group of response packets received is buffered in the first buffer unit. When the counter changes from odd to even, the first selector switches, the second group of response packets is buffered in the second buffer unit, and at the same time the first group, held in the first buffer unit, is routed through the second selector to the lower-level data processing unit for processing. When the counter changes from even back to odd, the first selector switches again, the third group is buffered in the first buffer unit, and the second group, held in the second buffer unit, is routed through the second selector to the lower-level data processing unit; and so on.
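The parity-counter/selector scheme above can be modelled in a few lines of software. This is an illustrative sketch of the described hardware, with assumed names (`PingPongBuffer`, `drain`): the write-side index choice plays the role of the first selector, and the `drain` callback plays the role of the second selector.

```python
class PingPongBuffer:
    """Model of the ping-pong scheme: odd-numbered groups go to buffer
    0, even-numbered groups to buffer 1, and writing one buffer drains
    the other to the next processing stage."""

    def __init__(self, drain):
        self.buffers = [[], []]
        self.count = 0      # parity counter
        self.drain = drain  # stands in for the second selector

    def receive_group(self, packets):
        self.count += 1
        write = (self.count - 1) % 2  # first selector: odd -> 0, even -> 1
        other = 1 - write
        if self.buffers[other]:       # previous group goes downstream
            self.drain(self.buffers[other])
        self.buffers[other] = []
        self.buffers[write] = list(packets)

    def flush(self):
        for i, buf in enumerate(self.buffers):
            if buf:
                self.drain(buf)
                self.buffers[i] = []
```

Feeding three groups and flushing delivers them downstream in arrival order, while each incoming group always has an empty buffer to land in.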
当然,本发明并不限于一组或两组缓冲区单元,为了达到快速进行排序的目的,也可以多设置几组缓冲区单元,原理与乒乓操作类似,此处就不一一举例。Of course, the present invention is not limited to one or two groups of buffer units. In order to achieve the purpose of fast sorting, several groups of buffer units can also be set up. The principle is similar to the ping-pong operation, and examples are not given here.
综上所述，本发明实施例提供了一种基于快速IO互连技术的数据排序方法及装置，根据Rapid IO规范中"具有相同源ID、相同目的ID、相同优先级且ftype!=8(ftype表示数据包的类型)的包在传递过程中顺序不变"的原则，将各个优先级的响应包分别排序，然后根据Transaction ID的顺序，向下级处理单元送出排序后的响应包。本发明实施例在处理过程中，为提高效率，采用乒乓操作的方式来完成数据的无缝缓冲与处理。In summary, the embodiment of the present invention provides a data sorting method and device based on the Rapid IO interconnection technology. Relying on the Rapid IO specification's rule that "packets with the same source ID, the same destination ID, the same priority, and ftype != 8 (ftype denotes the packet type) keep their order during transmission", it sorts the response packets of each priority separately and then delivers the sorted response packets to the lower-level processing unit in Transaction ID order. To improve efficiency, the embodiment uses a ping-pong operation to achieve seamless buffering and processing of the data.
本发明实施例在使用较少缓冲区资源及耗费较小时延的前提下，有效地校正Rapid IO接口的响应包传输顺序，解决了现有高速数据处理系统中响应包乱序的问题，提高了高速数据处理系统的性能。On the premise of using fewer buffer resources and incurring a small delay, the embodiment of the present invention effectively corrects the response packet transmission order of the Rapid IO interface, solves the problem of out-of-order response packets in existing high-speed data processing systems, and improves the performance of high-speed data processing systems.
以上所述，仅为本发明较佳的具体实施方式，但本发明的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，可轻易想到的变化或替换，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应该以权利要求书的保护范围为准。The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 200810000587 CN101494636B (en) | 2008-01-23 | 2008-01-23 | Method and apparatus for ordering data based on rapid IO interconnection technology |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN101494636A CN101494636A (en) | 2009-07-29 |
| CN101494636B true CN101494636B (en) | 2013-01-16 |
Family
ID=40925043
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 200810000587 Expired - Fee Related CN101494636B (en) | 2008-01-23 | 2008-01-23 | Method and apparatus for ordering data based on rapid IO interconnection technology |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN101494636B (en) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102929801B (en) * | 2012-10-25 | 2016-06-22 | 华为技术有限公司 | A kind of method and apparatus for disk addressing |
| CN104734873B (en) * | 2013-12-20 | 2018-04-13 | 深圳市国微电子有限公司 | Management method, the system of buffer in a kind of exchange system and its switching equipment |
| CN104266657B (en) * | 2014-09-12 | 2017-08-04 | 海华电子企业(中国)有限公司 | A Parallel Method for Shortest Path Planning Based on Cooperative Computing of CPU and MIC |
| CN105867844B (en) * | 2016-03-28 | 2019-01-25 | 北京联想核芯科技有限公司 | A kind of order control method and storage equipment |
| CN106413000A (en) * | 2016-09-23 | 2017-02-15 | 东南大学 | Energy efficiency data flow transmission method based on packet sequence at arbitrary cut-off moment |
| US11023275B2 (en) * | 2017-02-09 | 2021-06-01 | Intel Corporation | Technologies for queue management by a host fabric interface |
| CN112328520B (en) * | 2020-09-30 | 2022-02-11 | 郑州信大捷安信息技术股份有限公司 | PCIE equipment, and data transmission method and system based on PCIE equipment |
| CN113259267B (en) * | 2021-06-28 | 2021-11-12 | 江苏省质量和标准化研究院 | System and method for transmitting associated information of social credit code |
| CN116055422B (en) * | 2022-06-29 | 2024-11-26 | 海光信息技术股份有限公司 | A device and method for controlling the order of sending data packets |
| CN119201838B (en) * | 2024-11-26 | 2025-02-25 | 成都旋极历通信息技术有限公司 | A method for managing out-of-order reception of RapidIO Message transactions based on FPGA |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1585373A (en) * | 2004-05-28 | 2005-02-23 | 中兴通讯股份有限公司 | Ping pong buffer device |
| EP1804159A1 (en) * | 2005-12-30 | 2007-07-04 | STMicroelectronics Belgium N.V. | Serial in random out memory |
| CN101069161A (en) * | 2004-12-01 | 2007-11-07 | 索尼计算机娱乐公司 | Scheduling method, scheduling device and multiprocessor system |
Non-Patent Citations (3)
| Title |
|---|
| RapidIO Trade Association. RapidIO™ Interconnect Specification Part 4: Physical Layer 8/16 LP-LVDS Specification, Rev. 1.3, 06/2005, pp. 17-30, Figs. 2-1 to 2-5. |
Also Published As
| Publication number | Publication date |
|---|---|
| CN101494636A (en) | 2009-07-29 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130116 Termination date: 20190123 |