CN116302391A - Multithreading task processing method and related device
- Publication number
- CN116302391A (application CN202310076347.3A)
- Authority
- CN
- China
- Prior art keywords
- memory
- thread
- task
- queue
- circular queue
- Prior art date: 2023-01-30
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a multi-threaded task processing method, a task processing apparatus, a computing device, and a computer-readable storage medium.
Background Art
With the continuous development of information technology, servers and storage devices need to handle highly concurrent services accompanied by massive I/O. Servers and storage devices therefore have high requirements for performance and stability, and most scenarios are handled with multiple threads.
In the related art, locking is required when multiple threads access shared resources, but locking introduces two problems: lock waiting and thread switching. Lock waiting increases latency, and thread switching adds extra scheduling overhead; both have a significant impact on performance. Omitting the lock, however, may corrupt data, and thread safety cannot be guaranteed.
Therefore, how to implement resource access without a locking mechanism is a key concern for those skilled in the art.
Summary of the Invention
The purpose of the present application is to provide a multi-threaded task processing method, a task processing apparatus, a computing device, and a computer-readable storage medium, so as to implement multi-threaded task processing without a lock mechanism and improve multi-task processing efficiency.
To solve the above technical problem, the present application provides a multi-threaded task processing method, including:
storing a received task in a circular queue corresponding to a thread;
dequeuing the tasks in the circular queue in first-in-first-out order;
executing, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
Optionally, the process of requesting the memory resource includes:
sending a memory request to a memory management module, wherein the memory management module is used to manage an original memory pool;
allocating, by the memory management module, memory space from the original memory pool, and returning memory information of the memory space;
determining the memory resource based on the memory information.
Optionally, the allocating, by the memory management module, memory space from the original memory pool and returning memory information of the memory space includes:
allocating, by the memory management module, contiguous memory space from the original memory pool;
returning the memory information of the memory space.
Optionally, the process of creating the circular queue includes:
creating a corresponding circular queue based on multi-thread requirement information, and setting the enqueue and dequeue operations of the circular queue as a public interface.
Optionally, the process of creating the thread includes:
creating corresponding threads based on the number of circular queues, and setting the state of each thread as a resident thread.
Optionally, the method further includes:
when the circular queue is empty, controlling the thread corresponding to the circular queue to idle.
Optionally, storing the received task in the circular queue corresponding to the thread includes:
storing the sent task in the circular queue through the enqueue operation interface of the circular queue.
The present application further provides a multi-threaded task processing apparatus, including:
a task storage module, configured to store a received task in a circular queue corresponding to a thread;
a circular queue processing module, configured to dequeue the tasks in the circular queue in first-in-first-out order;
a task execution module, configured to execute, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
The present application further provides a computing device, including:
a memory for storing a computer program; and
a processor, configured to implement the steps of the task processing method described above when executing the computer program.
The present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the task processing method described above are implemented.
The multi-threaded task processing method provided by the present application includes: storing a received task in a circular queue corresponding to a thread; dequeuing the tasks in the circular queue in first-in-first-out order; and executing, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
By storing received tasks in the circular queue, dequeuing them in order, and having the corresponding thread execute each dequeued task with the memory resource dedicated to that thread, each thread executes tasks in order using its own memory resource. Tasks therefore do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The present application further provides a multi-threaded task processing apparatus, a computing device, and a computer-readable storage medium, which have the above beneficial effects and are not repeated here.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of a multi-threaded task processing method provided by an embodiment of the present application;
FIG. 2 is a data flow diagram of a multi-threaded task processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of memory management in a multi-threaded task processing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of memory division in a multi-threaded task processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the thread structure in a multi-threaded task processing method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a multi-threaded task processing apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
Detailed Description of the Embodiments
The core of the present application is to provide a multi-threaded task processing method, a task processing apparatus, a computing device, and a computer-readable storage medium, so as to implement multi-threaded task processing without a lock mechanism and improve multi-task processing efficiency.
In order to make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the related art, locking is required when multiple threads access shared resources, but locking introduces two problems: lock waiting and thread switching. Lock waiting increases latency, and thread switching adds extra scheduling overhead; both have a significant impact on performance. Omitting the lock, however, may corrupt data, and thread safety cannot be guaranteed.
Therefore, the present application provides a multi-threaded task processing method: received tasks are stored in a circular queue, the circular queue dequeues the tasks in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread thus executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
A multi-threaded task processing method provided by the present application is described below through an embodiment.
Please refer to FIG. 1, which is a flowchart of a multi-threaded task processing method provided by an embodiment of the present application.
In this embodiment, the method may include:
S101: storing a received task in a circular queue corresponding to a thread.
This step aims to store the received task in the circular queue corresponding to the thread.
The circular queue is mainly used to store tasks, and it is private to the thread. Therefore, the tasks that the thread needs to process are only the tasks dequeued from its own queue, which avoids multiple tasks contending for the same thread.
Threads and circular queues correspond one-to-one, ensuring that there is no data exchange between threads.
Further, this step may include:
storing the sent task in the circular queue through the enqueue operation interface of the circular queue.
It can be seen that this optional solution mainly explains how tasks are stored in the circular queue: the sent task is stored in the circular queue through the enqueue operation interface of the circular queue. That is, the circular queue provides an enqueue operation interface through which tasks can be stored in the queue; correspondingly, it also provides a dequeue operation interface. Both the enqueue and dequeue interfaces are public operation interfaces of the circular queue.
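By way of illustration only, a minimal single-producer, single-consumer circular queue of this kind might look as follows in C++. The names `Task` and `RingQueue`, the fixed capacity, and the atomics-based implementation are assumptions made for this sketch and are not prescribed by the embodiment.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

struct Task {
    int id;            // placeholder payload; real tasks would carry more fields
};

// Fixed-capacity circular queue. Enqueue (producer side) and dequeue (consumer
// side) form the public interface; one producing module and the queue's owning
// thread can use it concurrently without any lock.
template <std::size_t Capacity>
class RingQueue {
public:
    bool enqueue(const Task& t) {                     // called by other modules
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        std::size_t next = (tail + 1) % Capacity;
        if (next == head_.load(std::memory_order_acquire))
            return false;                             // queue is full
        slots_[tail] = t;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<Task> dequeue() {                   // called by the owning thread
        std::size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return std::nullopt;                      // queue is empty
        Task t = slots_[head];
        head_.store((head + 1) % Capacity, std::memory_order_release);
        return t;
    }

private:
    std::array<Task, Capacity> slots_{};
    std::atomic<std::size_t> head_{0};                // next slot to read
    std::atomic<std::size_t> tail_{0};                // next slot to write
};
```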
Further, in this embodiment, the process of creating the circular queue may include:
creating a corresponding circular queue based on multi-thread requirement information, and setting the enqueue and dequeue operations of the circular queue as a public interface.
It can be seen that this optional solution mainly explains how the circular queue is created: a corresponding circular queue is created based on the multi-thread requirement information, and the enqueue and dequeue operations of the circular queue are set as a public interface. Since each circular queue corresponds to one thread, the number of circular queues to create can be determined from the multi-thread requirement information, which indicates the required number of threads.
Further, in this embodiment, the process of creating the threads may include:
creating corresponding threads based on the number of circular queues, and setting the state of each thread as a resident thread.
It can be seen that this optional solution mainly explains how the corresponding threads are created: threads can be created according to the number of circular queues, and the state of each thread is set as a resident thread. A resident thread is kept in the system whether or not it has tasks to execute, and does not need to be released simply because there are no tasks.
Further, on the basis of the previous optional solution, this embodiment may also include:
when the circular queue is empty, controlling the thread corresponding to the circular queue to idle.
It can be seen that this optional solution mainly explains what happens when there is no task in the queue: when the circular queue is empty, the thread corresponding to the circular queue is controlled to idle.
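Continuing the illustrative sketch, and reusing the assumed `RingQueue` and `Task` types from above, a resident thread bound one-to-one to its private queue, which idles when the queue is empty, could be structured like this:

```cpp
#include <atomic>
#include <thread>

// One resident worker per circular queue. The thread is created once, is never
// recycled or destroyed, and simply idles when its private queue is empty.
class Worker {
public:
    explicit Worker(RingQueue<1024>& queue)
        : queue_(queue), thread_([this] { run(); }) {}

    ~Worker() {
        running_.store(false);
        thread_.join();
    }

private:
    void run() {
        while (running_.load()) {
            if (auto task = queue_.dequeue()) {
                execute(*task);               // uses only this thread's own memory
            } else {
                std::this_thread::yield();    // empty queue: idle, never block on a lock
            }
        }
    }

    void execute(const Task& task) { (void)task; /* task-specific processing */ }

    RingQueue<1024>& queue_;
    std::atomic<bool> running_{true};
    std::thread thread_;                      // declared last so the thread starts after the flag
};

// Usage: create exactly one worker per queue (one-to-one correspondence), e.g.
//   std::deque<Worker> workers;
//   for (auto& q : queues) workers.emplace_back(q);
```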
S102: dequeuing the tasks in the circular queue in first-in-first-out order.
On the basis of S101, this step aims to dequeue the tasks in the circular queue in first-in-first-out order.
That is, the tasks stored in the circular queue are dequeued in first-in-first-out order. At the same time, the enqueue and dequeue operations can be set as a public interface, which allows the thread to operate on the circular queue. Further, the enqueue operation can be performed in other modules, while the current module polls the circular queue.
Further, the threads, the queues, and the other modules conform to the producer-consumer model: a thread acts as the consumer and dequeues an element after executing the task, while the other modules act as producers and enqueue the tasks to be processed.
S103: executing, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
On the basis of S102, this step aims to execute, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
The memory resource is the portion of memory dedicated to the thread. Other threads therefore never contend for it, which enables multi-threaded access.
Further, in this embodiment, the process of requesting memory resources may include:
Step 1: sending a memory request to the memory management module, wherein the memory management module is used to manage the original memory pool;
Step 2: allocating, by the memory management module, memory space from the original memory pool, and returning memory information of the memory space;
Step 3: determining the memory resource based on the memory information.
It can be seen that this optional solution mainly explains how memory resources are requested: a memory request is sent to the memory management module, which manages the original memory pool; the memory management module allocates memory space from the original memory pool and returns the memory information of that space; and the memory resource is determined based on the memory information.
Further, Step 2 of the previous optional solution may include:
Step 1: allocating, by the memory management module, contiguous memory space from the original memory pool;
Step 2: returning the memory information of the memory space.
It can be seen that this optional solution mainly explains how the memory information is returned: the memory management module allocates contiguous memory space from the original memory pool and returns the memory information of that space.
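As an illustrative sketch under assumed names (`MemoryManagement`, `MemoryInfo`, and `MemoryType` are not identifiers from the embodiment), the request-allocate-return flow described above might look like this:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>

enum class MemoryType { Normal, DirectAccess };    // ordinary memory vs. DMA-style memory

struct MemoryInfo {                                // the returned "memory information"
    std::uint8_t* base;                            // start of the contiguous block
    std::size_t   size;                            // size of the block in bytes
};

class MemoryManagement {
public:
    // The original pool is locked once, at system initialization.
    MemoryManagement(std::uint8_t* pool, std::size_t pool_size)
        : pool_(pool), pool_size_(pool_size) {}

    // Carve a contiguous block out of the original pool. The minimum allocation
    // unit is 4 KB, so requests are rounded up to a multiple of 4 KB.
    MemoryInfo request(std::size_t bytes, MemoryType /*type*/) {
        constexpr std::size_t kUnit = 4 * 1024;
        std::size_t granted = ((bytes + kUnit - 1) / kUnit) * kUnit;
        if (offset_ + granted > pool_size_)
            throw std::runtime_error("original memory pool exhausted");
        MemoryInfo info{pool_ + offset_, granted};
        offset_ += granted;                        // nothing is released at run time
        return info;
    }

private:
    std::uint8_t* pool_;
    std::size_t   pool_size_;
    std::size_t   offset_ = 0;
};

// A module determines its memory resource from the returned information, e.g.:
//   MemoryInfo info = memory_management.request(64 * 1024, MemoryType::Normal);
```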
In summary, in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues them in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The multi-threaded task processing method provided by the present application is further described below through another specific embodiment.
Please refer to FIG. 2, which is a data flow diagram of a multi-threaded task processing method provided by an embodiment of the present application.
In this embodiment, the method may include:
Step 1: storing a received task in the circular queue corresponding to a thread;
Step 2: dequeuing the tasks in the circular queue in first-in-first-out order;
Step 3: executing, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
Obviously, the above process stores received tasks in the circular queue, dequeues them in order, and has the corresponding thread execute each dequeued task with the memory resource dedicated to that thread. Threads therefore execute tasks in order using their own memory resources, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
Specifically, this embodiment divides memory in advance so that a dedicated thread accesses dedicated resources, and has each thread poll a circular queue: the circular queue serves as the thread's only task queue, each thread polls its own private circular queue, shared resources are allocated per private queue, data structures are planned as a whole, thread-private data is set up, and no thread communication is performed. This avoids frequent thread switching and lock waiting, so that multiple threads remain thread-safe without locking and without degrading performance. The method is suitable for highly concurrent scenarios in which threads do not communicate with one another and tasks are relatively independent.
Please refer to FIG. 3, which is a schematic diagram of memory management in a multi-threaded task processing method provided by an embodiment of the present application.
In the memory request process, a memory request module, Memory Management (the memory management module), is first created to provide memory request interfaces for both kernel mode and user mode.
Memory requests are divided into two broad categories: direct memory access and ordinary memory. Direct memory access is used for I/O and does not require CPU involvement, which reduces CPU usage; ordinary memory requires CPU involvement.
Memory is requested during the system initialization phase; no memory is requested during the running phase. The memory size required by each module is calculated in advance. At initialization, the Memory Management module locks a large block of memory as the original memory pool. When another module needs memory, it sends a memory request to the Memory Management module and indicates the required memory type through the carried parameters; the Memory Management module carves a block out of the original memory pool (the minimum allocation unit is 4 KB) and returns the memory information.
Further, regarding memory request and release: once a module has requested memory it holds it for the long term, returning it only when the module exits. The Memory Management module does not release memory during the running phase.
Regarding the size of memory requests, 4 KB memory pages can be requested from the kernel, which keeps the memory contiguous and facilitates memory offset operations.
Please refer to FIG. 4, which is a schematic diagram of memory division in a multi-threaded task processing method provided by an embodiment of the present application.
Finally, regarding memory division between modules: as shown in FIG. 4, after obtaining its memory, each module divides the memory into several regions, each region is placed in one queue, and the same region is not allowed to be placed in two queues.
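A possible sketch of this division, again with assumed names (`Region`, `divide`) rather than identifiers from the embodiment:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A region is a slice of a module's memory that belongs to exactly one queue
// (and therefore to exactly one thread); it is never placed in two queues.
struct Region {
    std::uint8_t* base;
    std::size_t   size;
    std::size_t   used = 0;

    // Simple bump allocation inside the region; only the owning thread calls this.
    void* take(std::size_t bytes) {
        if (used + bytes > size) return nullptr;
        void* p = base + used;
        used += bytes;
        return p;
    }
};

// Split the memory granted to a module into one region per queue.
std::vector<Region> divide(std::uint8_t* base, std::size_t size, std::size_t queue_count) {
    std::vector<Region> regions;
    const std::size_t per_queue = size / queue_count;
    for (std::size_t i = 0; i < queue_count; ++i)
        regions.push_back(Region{base + i * per_queue, per_queue});
    return regions;
}
```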
Thread and queue design:
In the process of creating the circular queue, the enqueue and dequeue operations of the queue are implemented and set as a public interface; the enqueue operation is performed in other modules, and this module polls the queue. When the circular queue is empty, it is considered that there is no task; when it is not empty, the elements in the queue are processed in order, and each element is dequeued once it has been processed.
Please refer to FIG. 5, which is a schematic diagram of the thread structure in a multi-threaded task processing method provided by an embodiment of the present application.
In the process of creating the thread queue, a corresponding number of threads can be created according to the number of queues. All threads are resident and are not recycled or destroyed; threads are allowed to idle when there are no tasks. Threads and queues correspond one-to-one, as shown in FIG. 5, ensuring that there is no data exchange between threads.
The threads, the queues, and the other modules conform to the producer-consumer model: a thread acts as the consumer and dequeues an element after executing the task, while the other modules act as producers and enqueue the tasks to be processed.
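On the producer side, another module might hand a task to the owning thread's queue as in the following sketch, which reuses the assumed `RingQueue` and `Task` types; the retry-on-full policy is an assumption of this sketch, not a requirement of the embodiment:

```cpp
#include <thread>

// Another module acting as the producer: it selects the queue of the thread that
// owns the data the task refers to and enqueues the task there. If the queue is
// momentarily full it backs off and retries; no lock is taken on either side.
void submit(RingQueue<1024>& owner_queue, const Task& task) {
    while (!owner_queue.enqueue(task)) {
        std::this_thread::yield();
    }
}
```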
Finally, data structure and function design:
The definition of the data structures is unified with the thread identifier; when other modules add tasks to a queue, they ensure that the same block of memory is processed in only one queue.
If the number of data structures in a module is less than the number of threads, specific threads are designated to process them; if it is greater than the number of threads, they can be distributed evenly across all threads as circumstances and requirements dictate; if it is equal to the number of threads, they can correspond one-to-one.
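One possible (assumed, not prescribed) mapping from data-structure index to owning thread that covers all three cases is a simple modulo:

```cpp
#include <cstddef>

// Index of the thread that owns data structure `i` (thread_count is assumed > 0):
// fewer structures than threads -> they land on the first few threads;
// more structures than threads  -> they are spread evenly by modulo;
// equal counts                  -> the mapping is one-to-one.
std::size_t owner_thread(std::size_t i, std::size_t thread_count) {
    return i % thread_count;
}
```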
A memory check is added to the functions. As shown in FIG. 2, function 0 may be executed on any thread; when the function is executed, the legality of the memory is checked first, and execution is allowed only if the memory is legal.
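A sketch of such a check, reusing the assumed `Region` and `Task` types from the earlier sketches; the name `function0` mirrors the "function 0" of FIG. 2 but is otherwise hypothetical:

```cpp
#include <cstdint>

// True if `ptr` lies inside the region owned by the calling thread.
bool memory_is_legal(const Region& my_region, const void* ptr) {
    auto p = reinterpret_cast<const std::uint8_t*>(ptr);
    return p >= my_region.base && p < my_region.base + my_region.size;
}

// A "function 0"-style entry point: it may run on whichever thread dequeued the
// task, but it proceeds only when the task's memory is legal for that thread.
void function0(const Region& my_region, Task& task, void* task_memory) {
    if (!memory_is_legal(my_region, task_memory))
        return;                                  // illegal memory: refuse to execute
    (void)task;
    // ... actual processing of the task using `task_memory` ...
}
```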
It can be seen that this embodiment, by allocating memory in advance and having each thread access dedicated resources, realizes a lock-free multi-threaded design that still guarantees thread safety. It avoids problems such as frequent thread switching and lock waiting that affect performance and stability, and thereby improves product competitiveness. Shared resources are divided in advance and each thread accesses its own private data, so lock operations are avoided and thread safety is guaranteed without locking.
It can be seen that in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues them in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The multi-threaded task processing apparatus provided by the embodiment of the present application is introduced below. The multi-threaded task processing apparatus described below and the multi-threaded task processing method described above may be referred to in correspondence with each other.
Please refer to FIG. 6, which is a schematic structural diagram of a multi-threaded task processing apparatus provided by an embodiment of the present application.
In this embodiment, the apparatus may include:
a task storage module 100, configured to store a received task in the circular queue corresponding to a thread;
a circular queue processing module 200, configured to dequeue the tasks in the circular queue in first-in-first-out order;
a task execution module 300, configured to execute, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
Optionally, this embodiment may further include a memory request module, configured to send a memory request to the memory management module, wherein the memory management module is used to manage the original memory pool; the memory management module allocates memory space from the original memory pool and returns memory information of the memory space; and the memory resource is determined based on the memory information.
Optionally, the memory management module allocating memory space from the original memory pool and returning the memory information of the memory space includes:
allocating, by the memory management module, contiguous memory space from the original memory pool, and returning the memory information of the memory space.
Optionally, this embodiment may further include a queue request module, configured to create a corresponding circular queue based on multi-thread requirement information, and set the enqueue and dequeue operations of the circular queue as a public interface.
Optionally, this embodiment may further include a thread request module, configured to create corresponding threads based on the number of circular queues, and set the state of each thread as a resident thread.
Optionally, this embodiment may further include an empty queue processing module, configured to control, when the circular queue is empty, the thread corresponding to the circular queue to idle.
Optionally, the task storage module 100 is specifically configured to store the sent task in the circular queue through the enqueue operation interface of the circular queue.
It can be seen that in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues them in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The present application further provides a computing device. Please refer to FIG. 7, which is a schematic structural diagram of a computing device provided by an embodiment of the present application. The computing device may include:
a memory for storing a computer program; and
a processor, configured to implement the steps of any one of the multi-threaded task processing methods described above when executing the computer program.
As shown in FIG. 7, the computing device may include a processor 10, a memory 11, a communication interface 12, and a communication bus 13. The processor 10, the memory 11, and the communication interface 12 communicate with one another through the communication bus 13.
In the embodiment of the present application, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit, a digital signal processor, a field-programmable gate array, another programmable logic device, or the like.
The processor 10 may invoke a program stored in the memory 11; specifically, the processor 10 may perform the operations in the embodiments of the task processing method.
The memory 11 is used to store one or more programs; a program may include program code, and the program code includes computer operation instructions. In the embodiment of the present application, the memory 11 stores at least a program for implementing the following functions:
storing a received task in the circular queue corresponding to a thread;
dequeuing the tasks in the circular queue in first-in-first-out order;
executing, by the thread and the memory resource corresponding to the thread, the task dequeued from the circular queue, to obtain an execution result; wherein the memory resource is a memory resource requested for the thread.
In a possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created during use.
In addition, the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module, used to connect to other devices or systems.
Of course, it should be noted that the structure shown in FIG. 7 does not constitute a limitation on the computing device in the embodiment of the present application; in practical applications the computing device may include more or fewer components than shown in FIG. 7, or combine certain components.
It can be seen that in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues them in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the multi-threaded task processing methods described above can be implemented.
The computer-readable storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
For an introduction to the computer-readable storage medium provided by the present application, please refer to the above method embodiments; details are not repeated here.
It can be seen that in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues them in order, and each time a task is dequeued, the corresponding thread executes it with the memory resource dedicated to that thread to obtain an execution result. Each thread executes tasks in order using its own memory resource, tasks do not contend for resources or threads, multi-threaded access is achieved without a lock mechanism, and multi-task processing efficiency is improved.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. As for the apparatus disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively brief; for relevant details, refer to the description of the method.
Those skilled in the art may further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The multi-threaded task processing method, task processing apparatus, computing device, and computer-readable storage medium provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present application without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310076347.3A CN116302391A (en) | 2023-01-30 | 2023-01-30 | Multithreading task processing method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310076347.3A CN116302391A (en) | 2023-01-30 | 2023-01-30 | Multithreading task processing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116302391A true CN116302391A (en) | 2023-06-23 |
Family
ID=86778830
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN202310076347.3A Pending CN116302391A (en) | 2023-01-30 | 2023-01-30 | Multithreading task processing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116302391A (en) |
- 2023-01-30: Application CN202310076347.3A filed in CN; published as CN116302391A; legal status: active, pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117614906A (en) * | 2024-01-23 | 2024-02-27 | 珠海星云智联科技有限公司 | Method, computer device and medium for multi-thread multi-representation oral package |
CN117614906B (en) * | 2024-01-23 | 2024-04-19 | 珠海星云智联科技有限公司 | Method, computer device and medium for multi-thread multi-representation oral package |
Similar Documents
Publication | Title
---|---
EP3425502B1 (en) | Task scheduling method and device
CN109564528B (en) | System and method for computing resource allocation in distributed computing
US10262390B1 (en) | Managing access to a resource pool of graphics processing units under fine grain control
US10223165B2 (en) | Scheduling homogeneous and heterogeneous workloads with runtime elasticity in a parallel processing environment
CN105579961B (en) | Data processing system, operating method and hardware unit for data processing system
EP3073374B1 (en) | Thread creation method, service request processing method and related device
CN105045658B (en) | A method of realizing that dynamic task scheduling is distributed using multinuclear DSP embedded
JP6294586B2 (en) | Execution management system combining instruction threads and management method
CN109697122B (en) | Task processing methods, equipment and computer storage media
US9858241B2 (en) | System and method for supporting optimized buffer utilization for packet processing in a networking device
WO2019223596A1 (en) | Method, device, and apparatus for event processing, and storage medium
US8881161B1 (en) | Operating system with hardware-enabled task manager for offloading CPU task scheduling
CN115605846A (en) | Apparatus and method for managing shareable resources in a multi-core processor
CN115480904B (en) | Concurrent calling method for system service in microkernel
Bernat et al. | Multiple servers and capacity sharing for implementing flexible scheduling
Reano et al. | Intra-node memory safe gpu co-scheduling
CN114816709A (en) | Task scheduling method, device, server and readable storage medium
CN116233022A (en) | Job scheduling method, server and server cluster
CN113806049A (en) | Task queuing method and device, computer equipment and storage medium
CN116302391A (en) | Multithreading task processing method and related device
CN111597044A (en) | Task scheduling method and device, storage medium and electronic equipment
EP2951691B1 (en) | System and method for supporting work sharing muxing in a cluster
CN112749020A (en) | Microkernel optimization method of Internet of things operating system
CN115934385B (en) | A multi-core inter-core communication method, system, device and storage medium
Peng et al. | bqueue: A coarse-grained bucket qos scheduler
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |