CN102567120B - Method and device for determining node scheduling priority - Google Patents
- Publication number
- CN102567120B (application CN201210031763.3A)
- Authority
- CN
- China
- Prior art keywords
- node
- pipeline
- scheduling priority
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Multi Processors (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Description
Technical Field
The present invention relates to the field of embedded computer technology, and in particular to a method and device for determining node scheduling priority.
Background Art
At present, merely raising the operating frequency of the single central processing unit (CPU) core in a single-core processor can no longer meet users' performance requirements for network devices. To increase the speed at which network devices process data streams, multi-core processors have emerged. Multi-core processors overcome the frequency-scaling bottleneck of single-core processors: by having multiple CPU cores work in parallel, the time needed to execute a task can be greatly shortened.
To use a multi-core processor efficiently, a system task can be divided into multiple threads (or multiple subtasks), and each thread can be further divided into multiple stage execution points. A stage execution point is the smallest execution granule onto which a core can be scheduled, so that every core can be kept fully utilized. At any given time, a core can execute only one stage execution point of one thread (that is, one smallest execution granule).
Time, stage execution point and core are in a one-to-one relationship and cannot overlap. The combination of a time slot and a stage execution point is usually defined as a minimum execution unit, and each core can freely acquire an idle execution unit and be scheduled onto it. Specifically, the relationship among them can be represented by the coordinate diagram shown in FIG. 1.
For a network device, how many threads a system task should be divided into, how many stage execution points each thread should contain, and how to balance the load among these threads are the key factors that determine the performance of the whole device. At present, most network devices use only a single thread, divided into multiple stage execution points, so load balancing is not involved. With the development of CPU technology, however, "multi-core" no longer means dual-core or quad-core; a processor can now integrate more than ten or even dozens of cores. Simply distributing many cores over a single thread can no longer exploit the full system performance; the cores must be distributed over multiple threads with multiple stage execution points, and the load must then be balanced.
The contention for critical resources that may arise at a stage execution point is especially prominent in multi-core processors. In a multi-core processor, several cores may execute the same stage execution point; if that stage execution point operates on a critical resource, those cores have to queue up to acquire it. A stage execution point that contains a critical resource is usually protected with a spin lock: if one core grabs the critical resource first, any other core that reaches the spin lock spins and waits until the resource is released, which results in a critical resource interlock. If multiple cores that have entered a stage execution point all end up polling and waiting for the critical resource, multi-core contention occurs and the performance benefit of the multiple cores is greatly reduced.
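A minimal C sketch of the spin-lock protection described above, using POSIX spin locks; the node_stage() function and the shared counter are hypothetical stand-ins for a stage execution point and its critical resource. When several cores enter the same stage, they serialize on the lock:
```c
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t res_lock;   /* protects the critical resource */
static long shared_counter;           /* the critical resource itself   */

/* One stage execution point: every core that enters it must take the lock. */
static void *node_stage(void *arg)
{
    long core_id = (long)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_spin_lock(&res_lock);     /* other cores spin here            */
        shared_counter++;                 /* operate on the critical resource */
        pthread_spin_unlock(&res_lock);
    }
    printf("core %ld done\n", core_id);
    return NULL;
}

int main(void)
{
    pthread_t cores[4];
    pthread_spin_init(&res_lock, PTHREAD_PROCESS_PRIVATE);
    for (long i = 0; i < 4; i++)
        pthread_create(&cores[i], NULL, node_stage, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(cores[i], NULL);
    printf("counter = %ld\n", shared_counter);
    return 0;
}
```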
In a multi-core processor, the sequence of actions that takes data processing from start to finish can be called a pipeline. A pipeline can be divided into multiple processing stages; each processing stage is a node scheduled by the cores (a node is a stage execution point) and is also the smallest atomic unit that the cores schedule and enter in parallel.
Some multi-core processors judge a node's scheduling priority by the depth of its task queue. When a core finishes the processing task of one node, it needs to enter another node to fetch a new task to process; at that moment the node is selected according to the scheduling priorities of the nodes currently waiting for multi-core scheduling, and the node with the higher scheduling priority is chosen. If node n operates on a critical resource while node n-1 (the node immediately before node n) keeps streaming data to be processed into node n, the scheduling priority of node n keeps increasing. The system scheduling algorithm then allocates more cores to node n; when these cores reach the critical resource of node n they contend with one another, mutual exclusion occurs, and performance degrades.
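As a minimal sketch of the queue-depth-based selection just described (the node structure and the pick_node() helper are illustrative, not the API of any particular scheduler):
```c
#include <stdio.h>
#include <stddef.h>

struct node {
    int      id;
    unsigned queue_depth;   /* tasks waiting at this node                 */
    unsigned priority;      /* here derived directly from the queue depth */
};

/* An idle core picks the waiting node with the highest scheduling priority. */
static struct node *pick_node(struct node *nodes, size_t count)
{
    struct node *best = NULL;
    for (size_t i = 0; i < count; i++) {
        nodes[i].priority = nodes[i].queue_depth;   /* depth == priority */
        if (best == NULL || nodes[i].priority > best->priority)
            best = &nodes[i];
    }
    return best;   /* a bottlenecked node with a deep queue attracts more cores */
}

int main(void)
{
    struct node pipeline[] = { {1, 3, 0}, {2, 17, 0}, {3, 5, 0} };
    printf("core enters node %d\n", pick_node(pipeline, 3)->id);
    return 0;
}
```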
Summary of the Invention
Embodiments of the present invention provide a method and device for determining node scheduling priority, which are used to reduce the multi-core contention that occurs at nodes and to improve the performance of a multi-core processor.
A method for determining node scheduling priority, applied in a multi-core processor that determines node scheduling priorities according to task queue depth, the method comprising:
determining a node n at which a critical resource interlock occurs;
on each pipeline on which the critical resource interlock occurs at node n, lowering the scheduling priority of node n-1, the node that is closest to and located before node n, where n is a positive integer greater than 1;
wherein lowering, on pipeline A, the scheduling priority of node n-1, the node that is closest to and located before node n, specifically comprises:
lowering the scheduling priority of node n-1 according to a direct proportionality between the scheduling priority of node n-1 and the fixed execution duration of node n on each pipeline on which the critical resource interlock occurs at node n, and/or an inverse proportionality between the scheduling priority of node n-1 and the length of each locking of node n on each pipeline on which the critical resource interlock occurs at node n.
A device for determining node scheduling priority, applied in a multi-core processor that determines node scheduling priorities according to task queue depth, the device comprising:
a determining module, configured to determine a node n at which a critical resource interlock occurs;
an adjusting module, configured to lower the scheduling priority of node n-1 according to a direct proportionality between the scheduling priority of node n-1 and the fixed execution duration of node n on each pipeline on which the critical resource interlock occurs at node n, and/or an inverse proportionality between the scheduling priority of node n-1 and the length of each locking of node n on each pipeline on which the critical resource interlock occurs at node n, where n is a positive integer greater than 1.
According to the solution provided by the embodiments of the present invention, in a multi-core processor that determines node scheduling priorities according to task queue depth, the scheduling priority of node n-1, the node located before the node n at which a critical resource interlock occurs, is lowered on each pipeline. Because a core selects, among the nodes currently waiting for multi-core scheduling, the node with the higher scheduling priority, lowering the priority of node n-1 reduces the probability that node n-1 is scheduled. This prevents the upstream node from continuously feeding pending data tasks into the bottlenecked downstream node and keeps the task queue depth of node n from growing, so that no additional cores are scheduled onto node n, the possibility of multi-core contention at node n is reduced, and the performance of the multi-core processor can be brought into full play.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the relationship among time, stage execution points and cores in the prior art;
FIG. 2 is a flow chart of the steps of the method for determining node scheduling priority provided by Embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of pipelines provided by Embodiment 1 of the present invention;
FIG. 4 is a schematic structural diagram of the device for determining node scheduling priority provided by Embodiment 2 of the present invention.
Detailed Description of the Embodiments
In the solution provided by the embodiments of the present invention, a node of a multi-core data processing system at which a critical resource interlock occurs feeds the information forward so as to lower the multi-core scheduling priority of the upstream node. This reduces the contention and collisions among cores on critical resource operations, makes multi-core allocation and scheduling predictive, and prevents in advance the situation in which multiple cores compete for the same critical resource.
The solution of the present invention is described below with reference to the accompanying drawings and the embodiments.
Embodiment 1
Embodiment 1 of the present invention provides a method for determining node scheduling priority, applied in a multi-core processor that determines node scheduling priorities according to task queue depth. The steps of the method, shown in FIG. 2, include:
Step 101: Node n obtains core scheduling, i.e. a core is dispatched to execute node n.
Here, n is a positive integer greater than 1.
Step 102: Node n determines whether a critical resource interlock occurs on itself.
In this embodiment, a means for determining node scheduling priority, such as the node scheduling priority determining device, needs to determine the node n at which a critical resource interlock occurs and lower the scheduling priority of node n-1. Specifically, it can determine that a critical resource interlock occurs at a node according to the feedback information received from that node.
Therefore, in this step, after node n has obtained core scheduling and execution reaches the critical resource, node n can determine whether a critical resource interlock occurs on itself. If it determines that an interlock occurs, it can calculate the execution time length of the critical resource and perform step 103 to feed this information back. If node n determines, after obtaining core scheduling, that no critical resource interlock occurs, it does not need to feed back any information, i.e. step 103 need not be triggered, and node n-1 keeps the scheduling priority determined by the original method (i.e. the method of determining node scheduling priority according to task queue depth).
Step 103: Node n feeds back the execution time length of the critical resource.
Specifically, the execution time length of the critical resource may be fed back to a node scheduling priority determining device. (This device may be integrated in the preceding node n-1; that is, a node scheduling priority determining device may be integrated in every node. In this embodiment, a feed-forward approach can therefore be used to inform the preceding node of the bottleneck at the current node and lower the preceding node's multi-core scheduling priority, which prevents the upstream node from pushing large amounts of data to the downstream node and keeps tasks from piling up at the bottlenecked downstream node. Of course, the device may also be independent of the nodes.) When the node scheduling priority determining device receives the information fed back by node n, it can determine that a critical resource interlock occurs at node n.
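A minimal C sketch of steps 102 and 103, offered as an illustration only: report_lock_time() is a hypothetical stand-in for the feedback path to the node scheduling priority determining device, and timing the spin wait with clock_gettime() is one possible interpretation of the "execution time length of the critical resource".
```c
#include <stdio.h>
#include <time.h>
#include <pthread.h>

static pthread_spinlock_t res_lock;   /* lock guarding the critical resource of node n */

/* Hypothetical feed-forward hook toward the priority determining device of node n-1. */
static void report_lock_time(int node_id, double seconds)
{
    printf("node %d: waited %.6f s for the critical resource\n", node_id, seconds);
}

/* Executed by a core inside node n when it reaches the critical resource (steps 102-103). */
static void enter_critical_resource(int node_id)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (pthread_spin_trylock(&res_lock) != 0) {   /* interlock: another core holds the lock */
        pthread_spin_lock(&res_lock);             /* spin until the resource is released    */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double w = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        report_lock_time(node_id, w);             /* step 103: feed the time length back    */
    }
    /* ... operate on the critical resource ... */
    pthread_spin_unlock(&res_lock);
}

int main(void)
{
    pthread_spin_init(&res_lock, PTHREAD_PROCESS_PRIVATE);
    enter_critical_resource(2);   /* single-threaded demo call; no interlock is reported */
    return 0;
}
```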
Step 104: Lower the scheduling priority of node n-1.
This step includes: on each pipeline on which the critical resource interlock occurs at node n, lowering the scheduling priority of node n-1, the node that is closest to and located before node n.
A multi-core processor has multiple data processing pipelines, each pipeline contains multiple nodes, and the nodes on every pipeline go through the same processing procedure. The pipelines can be illustrated as in FIG. 3. When the node scheduling priority determining device is integrated in the nodes and a critical resource interlock occurs at node n, then for each pipeline on which the interlock occurs at node n, node n can feed information back to the node scheduling priority determining device in node n-1 to notify node n-1 that a critical resource interlock has occurred at node n. The node scheduling priority determining device can then re-determine the scheduling priority of node n-1.
In this embodiment, the scheduling priority of node n-1 can be lowered according to a direct proportionality between the scheduling priority of node n-1 and the fixed execution duration of node n on each pipeline on which the critical resource interlock occurs at node n, and/or an inverse proportionality between the scheduling priority of node n-1 and the length of each locking of node n on each pipeline on which the critical resource interlock occurs at node n. Specifically, the scheduling priority of node n-1 on pipeline A can be determined according to the number of pipelines on which the critical resource interlock occurs at node n, the length of each locking of node n on each such pipeline, the fixed execution duration of node n on each such pipeline, the fixed execution duration of node n-1 on pipeline A, and the task queue depth of node n-1 on pipeline A (which can be understood as the number of tasks).
Preferably, the scheduling priority HA,n-1 of node n-1 on pipeline A can be determined by the following formula:
wherein:
K is the number of pipelines on which the critical resource interlock occurs at node n;
Wi,n is the length of each locking of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
Ti,n is the fixed execution duration of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
QA,n-1 is the task queue depth of node n-1 on pipeline A;
TA,n-1 is the fixed execution duration of node n-1 on pipeline A.
One term of the formula can be regarded as the scheduling priority H'A,n-1 of node n-1 on pipeline A before adjustment. Since Wi,n is necessarily smaller than Ti,n, the factor multiplying that term is necessarily greater than 0 and less than 1, and therefore the determined scheduling priority HA,n-1 is necessarily lower than H'A,n-1.
Of course, since the scheduling priority HA,n-1 is directly proportional to Ti,n and inversely proportional to Wi,n, the scheduling priority HA,n-1 of node n-1 on pipeline A can also be determined by the following formula:
where M is a constant greater than 0 and less than 1, and the corresponding factor in the formula is also greater than 0 and less than 1.
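The exact expressions of the two formulas are not reproduced in this text. Purely as a hedged sketch, one pair of formulas consistent with the variable definitions, the stated proportionalities and the reasoning above (treating QA,n-1/TA,n-1 as the pre-adjustment priority H'A,n-1 and averaging over the K pipelines are assumptions, not statements of the embodiment) would be:
```latex
% Assumed form only: H'_{A,n-1} = Q_{A,n-1}/T_{A,n-1} and the averaging over K pipelines
% are reconstructions, not confirmed by the text.
H_{A,n-1} = \frac{Q_{A,n-1}}{T_{A,n-1}} \cdot \frac{1}{K}\sum_{i=1}^{K}\frac{T_{i,n}-W_{i,n}}{T_{i,n}}
\qquad\text{or}\qquad
H_{A,n-1} = \frac{Q_{A,n-1}}{T_{A,n-1}} \cdot M^{\frac{1}{K}\sum_{i=1}^{K} W_{i,n}/T_{i,n}}
```
In either form, the multiplying factor lies strictly between 0 and 1 whenever Wi,n is smaller than Ti,n, so the adjusted priority is always lower than the pre-adjustment value, which matches the behavior described above.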
Preferably, in this embodiment, each node can store and maintain a node state information table in which the node's scheduling priority, task queue depth and fixed execution duration are kept. After the scheduling priority has been re-determined, the scheduling priority stored in the node state information table can be updated.
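A minimal sketch of such a node state information table and its update, with hypothetical field and function names:
```c
#include <stdio.h>

/* Per-node state information table (one instance maintained in each node). */
struct node_state {
    double   priority;         /* current scheduling priority          */
    unsigned queue_depth;      /* depth of the node's task queue       */
    double   fixed_exec_time;  /* fixed execution duration of the node */
};

/* Called when the priority determining device re-determines the priority. */
static void update_priority(struct node_state *st, double new_priority)
{
    st->priority = new_priority;   /* refresh the value stored in the table */
}

int main(void)
{
    struct node_state n1 = { 17.0, 17, 2.5 };   /* illustrative values only */
    update_priority(&n1, 8.5);
    printf("new priority of node n-1: %.1f\n", n1.priority);
    return 0;
}
```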
Based on the same inventive concept as Embodiment 1 of the present invention, the following device is provided.
Embodiment 2
Embodiment 2 of the present invention provides a device for determining node scheduling priority, applied in a multi-core processor that determines node scheduling priorities according to task queue depth. The structure of the device, shown in FIG. 4, includes:
a determining module 11, configured to determine a node n at which a critical resource interlock occurs; and an adjusting module 12, configured to lower, on each pipeline on which the critical resource interlock occurs at node n, the scheduling priority of node n-1, the node that is closest to and located before node n, where n is a positive integer greater than 1.
The adjusting module 12 is specifically configured to lower the scheduling priority of node n-1 according to a direct proportionality between the scheduling priority of node n-1 and the fixed execution duration of node n on each pipeline on which the critical resource interlock occurs at node n, and/or an inverse proportionality between the scheduling priority of node n-1 and the length of each locking of node n on each pipeline on which the critical resource interlock occurs at node n.
The adjusting module 12 is specifically configured to determine the scheduling priority of node n-1 on pipeline A according to the number of pipelines on which the critical resource interlock occurs at node n, the length of each locking of node n on each such pipeline, the fixed execution duration of node n on each such pipeline, the fixed execution duration of node n-1 on pipeline A, and the task queue depth of node n-1 on pipeline A.
The adjusting module 12 is specifically configured to determine the scheduling priority HA,n-1 of node n-1 on pipeline A by the following formula:
wherein:
K is the number of pipelines on which the critical resource interlock occurs at node n;
Wi,n is the length of each locking of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
Ti,n is the fixed execution duration of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
QA,n-1 is the task queue depth of node n-1 on pipeline A;
TA,n-1 is the fixed execution duration of node n-1 on pipeline A.
The adjusting module 12 is specifically configured to determine the scheduling priority HA,n-1 of node n-1 on pipeline A by the following formula:
wherein:
K is the number of pipelines on which the critical resource interlock occurs at node n;
Wi,n is the length of each locking of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
Ti,n is the fixed execution duration of node n on the i-th pipeline on which the critical resource interlock occurs at node n;
QA,n-1 is the task queue depth of node n-1 on pipeline A;
TA,n-1 is the fixed execution duration of node n-1 on pipeline A;
M is a constant greater than 0 and less than 1, and the corresponding factor in the formula is greater than 0 and less than 1.
According to the solutions provided by Embodiments 1 and 2 of the present invention, the possibility of multi-core contention at a node can be reduced and the performance of the multi-core processor improved. Moreover, if node n operates on a critical resource while node n-1 (the node immediately before node n) keeps streaming data to be processed into node n, data congestion occurs, data processing is delayed, and the buffer queue at node n may overflow; the solutions provided by Embodiments 1 and 2 can further avoid such data congestion, reduce the data processing delay, and avoid buffer queue overflow.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210031763.3A CN102567120B (en) | 2012-02-13 | 2012-02-13 | Method and device for determining node scheduling priority |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210031763.3A CN102567120B (en) | 2012-02-13 | 2012-02-13 | Method and device for determining node scheduling priority |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102567120A CN102567120A (en) | 2012-07-11 |
CN102567120B true CN102567120B (en) | 2014-04-23 |
Family
ID=46412606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210031763.3A Expired - Fee Related CN102567120B (en) | 2012-02-13 | 2012-02-13 | Method and device for determining node scheduling priority |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102567120B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9355506B2 (en) | 2014-06-27 | 2016-05-31 | Continental Automotive France | Method for managing fault messages of a motor vehicle |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104969197A (en) * | 2013-02-04 | 2015-10-07 | 日本电气株式会社 | Data set multiplexing degree changing device, server and data set multiplexing degree changing method |
CN106547492B (en) * | 2016-12-08 | 2018-03-20 | 北京得瑞领新科技有限公司 | The operational order dispatching method and device of a kind of NAND flash memory equipment |
CN108307198B (en) * | 2018-03-08 | 2021-01-01 | 广州酷狗计算机科技有限公司 | Flow service node scheduling method and device and scheduling node |
CN114860406B (en) * | 2022-05-18 | 2024-02-20 | 安元科技股份有限公司 | A distributed compilation and packaging system and method based on Docker |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101055531A (en) * | 2006-04-14 | 2007-10-17 | 国际商业机器公司 | System and method for placing a processor into a gradual slow mode of operation |
CN101083655A (en) * | 2007-07-06 | 2007-12-05 | 中国人民解放军国防科学技术大学 | Antisymmetric process software production chain technique in P2P index service |
CN101661386A (en) * | 2009-09-24 | 2010-03-03 | 成都市华为赛门铁克科技有限公司 | Multi-hardware thread processor and business processing method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7657891B2 (en) * | 2005-02-04 | 2010-02-02 | Mips Technologies, Inc. | Multithreading microprocessor with optimized thread scheduler for increasing pipeline utilization efficiency |
- 2012-02-13: Application filed in China as CN201210031763.3A; granted as patent CN102567120B (status: not active, Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN102567120A (en) | 2012-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9798830B2 (en) | Stream data multiprocessing method | |
US9858115B2 (en) | Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core processor system and related non-transitory computer readable medium | |
CN102567120B (en) | Method and device for determining node scheduling priority | |
KR20080041047A (en) | Apparatus and Method for Load Balancing in Multi-Core Processor Systems | |
WO2014187412A1 (en) | Method and apparatus for controlling message processing thread | |
CN112114950A (en) | Task scheduling method and device and cluster management system | |
GB2503438A (en) | Method and system for pipelining out of order instructions by combining short latency instructions to match long latency instructions | |
CN107832143B (en) | Method and device for processing physical machine resources | |
US9104491B2 (en) | Batch scheduler management of speculative and non-speculative tasks based on conditions of tasks and compute resources | |
US20150121387A1 (en) | Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core system and related non-transitory computer readable medium | |
US9389923B2 (en) | Information processing device and method for controlling information processing device | |
CN109840149B (en) | Task scheduling method, device, equipment and storage medium | |
US9471387B2 (en) | Scheduling in job execution | |
JP4912927B2 (en) | Task allocation apparatus and task allocation method | |
CN111651864A (en) | Event centralized emission type multi-heterogeneous time queue optimization simulation execution method and system | |
WO2023274278A1 (en) | Resource scheduling method and device and computing node | |
CN112214299A (en) | Multi-core processor and task scheduling method and device thereof | |
JP7122299B2 (en) | Methods, apparatus, devices and storage media for performing processing tasks | |
CN107589993A (en) | A kind of dynamic priority scheduling algorithm based on linux real time operating systems | |
KR101377195B1 (en) | Computer micro-jobs | |
CN105117281A (en) | Task scheduling method based on task application signal and execution cost value of processor core | |
CN113806044B (en) | A method for eliminating task bottlenecks on heterogeneous platforms for computer vision applications | |
JP2008225641A (en) | Computer system, interrupt control method and program | |
JP2012203911A (en) | Improvement of scheduling of task to be executed by asynchronous device | |
US9921891B1 (en) | Low latency interconnect integrated event handling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 20140423 |