CN114138480A - Queue task classification hybrid processing method, device, system and storage medium - Google Patents
- Publication number: CN114138480A (application CN202111422158.4A)
- Authority
- CN
- China
- Prior art keywords
- cache
- task
- interval
- core
- buffer
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Abstract
The embodiments of the invention disclose a queue task classification hybrid processing method, a queue task classification hybrid processing device, an Internet of Things system, and a storage medium. The method comprises the following steps: monitoring task states in a multi-core Internet of Things system, wherein each execution core of the multi-core Internet of Things system corresponds to one section of a first-in first-out cache queue, and one of the execution cores serves as a priority core dedicated to processing priority tasks; when input of a latest task is monitored, caching the latest task to a first cache interval or a priority cache interval according to its task priority, so that it is correspondingly allocated to the execution core corresponding to that cache interval, wherein the first cache interval is the cache interval with the longest current remaining cache queue among the common cache intervals corresponding to the common cores; and, when a common core fails to acquire a new task from the priority cache interval, acquiring a new task from its corresponding common cache interval for processing. The scheme improves the data processing efficiency of multi-core task scheduling in the Internet of Things system while guaranteeing the processing requirements of priority tasks.
Description
Technical Field
The embodiments of the invention relate to the technical field of networks, and in particular to a queue task classification hybrid processing method, device, and system and a storage medium.
Background
The Internet of Things is widely regarded as a major development opportunity in the information field and is expected to bring revolutionary change, with broad influence on fields such as industry, agriculture, property management, city management, and fire safety. Technically, however, the Internet of Things differs markedly from traditional communication, and not merely in how data is transmitted. For example, a characteristic of the large-scale Internet of Things is that a large number of users sporadically transmit very small packets, unlike conventional cellular communication.
To meet task scheduling requirements in the Internet of Things, a high-performance embedded node is usually designed for the large-scale Internet of Things to process the collected data in parallel, and a multi-core processing mode may even be adopted to perform task scheduling.
The inventor has found that, when task scheduling is performed in a multi-core processing mode in a large-scale Internet of Things, one task may be scheduled multiple times among the execution cores, producing a large amount of useless scheduling; scheduling efficiency is therefore low, and processing different types of tasks is complicated.
Disclosure of Invention
The invention provides a queue task classification hybrid processing method, device, and system and a storage medium, aiming to solve the technical problems in the prior art that multi-core task scheduling in the Internet of Things is inefficient and that processing different types of tasks is complicated.
In a first aspect, an embodiment of the present invention provides a queue task classification hybrid processing method, which is used for a multi-core internet of things system, and includes:
monitoring a task state in the multi-core Internet of things system, wherein an execution core of the multi-core Internet of things system comprises a priority core and a plurality of common cores, each execution core is correspondingly allocated with a cache interval, and the cache interval is one section of a first-in first-out cache queue in the multi-core Internet of things system;
when the input of a latest task is monitored, caching the latest task to a first cache interval or a priority cache interval according to task priority so as to be correspondingly allocated to an execution core corresponding to the cache interval, wherein the first cache interval is a cache interval with the longest current residual cache queue in a common cache interval corresponding to a common core, and the priority cache interval is a cache interval corresponding to the priority core;
and the priority core acquires a new task from the priority cache interval for processing, and the common core acquires the new task from the corresponding common cache interval for processing when the common core fails to acquire the new task from the priority cache interval.
Further, the method further comprises:
and when monitoring that an idle cache interval appears in the common cache interval, migrating at least one task cache to the idle cache interval from a second cache interval so as to be correspondingly distributed to the execution cores corresponding to the idle cache interval, wherein the idle cache interval is a cache interval with empty tasks, and the second cache interval is a common cache interval with the most remaining tasks.
Further, when it is monitored that an idle buffer interval occurs in the normal buffer interval, migrating at least one task buffer from a second buffer interval to the idle buffer interval, including:
when monitoring that idle cache intervals appear in the common cache intervals, successively confirming second cache intervals, and migrating task caches to the idle cache intervals one by one from the second cache intervals until the number of tasks in the idle cache intervals reaches a preset threshold value or the number of tasks in all the common cache intervals is not higher than the preset threshold value.
Further, when there are a plurality of second buffer intervals, a task buffer is randomly migrated from one second buffer interval to the idle buffer interval.
Further, when a plurality of first buffer intervals exist, the latest task is randomly buffered to one of the first buffer intervals.
Further, the length of the cache queue corresponding to each execution core is the same.
In a second aspect, an embodiment of the present invention further provides a queue task classification hybrid processing apparatus, which is used in a multi-core internet of things system, and includes:
the state monitoring unit is used for monitoring the task state in the multi-core Internet of things system, the execution core of the multi-core Internet of things system comprises a priority core and a plurality of common cores, each execution core is correspondingly allocated with a cache interval, and the cache interval is one section of a first-in first-out cache queue in the multi-core Internet of things system;
the task caching unit is used for caching the latest task to a first caching interval or a priority caching interval according to task priority when the input of the latest task is monitored, so as to correspondingly allocate the latest task to an execution core corresponding to the caching interval, wherein the first caching interval is a caching interval with the longest current residual caching queue in common caching intervals corresponding to the common cores, and the priority caching interval is a caching interval corresponding to the priority core;
and the task obtaining unit is used for the priority core to obtain a new task from the priority cache interval for processing, and for a common core to obtain a new task from its corresponding common cache interval for processing when the common core fails to obtain a new task from the priority cache interval.
Further, the apparatus further includes:
and the task migration unit is used for migrating at least one task cache to the idle cache interval from a second cache interval when monitoring that the idle cache interval appears in the common cache interval so as to be correspondingly distributed to the execution core corresponding to the idle cache interval, wherein the idle cache interval is a cache interval with empty tasks, and the second cache interval is a common cache interval with the most remaining tasks.
Further, when it is monitored that an idle buffer interval occurs in the normal buffer interval, migrating at least one task buffer from a second buffer interval to the idle buffer interval, including:
when monitoring that idle cache intervals appear in the common cache intervals, successively confirming second cache intervals, and migrating task caches to the idle cache intervals one by one from the second cache intervals until the number of tasks in the idle cache intervals reaches a preset threshold value or the number of tasks in all the common cache intervals is not higher than the preset threshold value.
Further, when there are a plurality of second buffer intervals, a task buffer is randomly migrated from one second buffer interval to the idle buffer interval.
Further, when a plurality of first buffer intervals exist, the latest task is randomly buffered to one of the first buffer intervals.
Further, the length of the cache queue corresponding to each execution core is the same.
In a third aspect, an embodiment of the present invention further provides an internet of things system, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the queue task classification hybrid processing method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the queue task classification hybrid processing method according to the first aspect.
The queue task classification hybrid processing method, device, Internet of Things system, and storage medium monitor the task state in the multi-core Internet of Things system, wherein the execution cores of the multi-core Internet of Things system comprise a priority core and a plurality of common cores, each execution core is correspondingly allocated a cache interval, and the cache interval is one section of a first-in first-out cache queue in the multi-core Internet of Things system; when input of a latest task is monitored, the latest task is cached to a first cache interval or a priority cache interval according to task priority so as to be correspondingly allocated to the execution core corresponding to that cache interval, wherein the first cache interval is the cache interval with the longest current remaining cache queue among the common cache intervals corresponding to the common cores, and the priority cache interval is the cache interval corresponding to the priority core; the priority core acquires new tasks from the priority cache interval for processing, and a common core acquires a new task from its corresponding common cache interval for processing when it fails to acquire a new task from the priority cache interval.
According to the scheme, the corresponding cache intervals are distributed to the execution cores, when the latest tasks are received, the latest tasks are distributed to the corresponding cache intervals according to the priorities of the tasks and the number of the tasks in the cache intervals, the switching process of task distribution is reduced, the data processing efficiency of multi-core processing task scheduling in the Internet of things system is improved, the priority tasks are processed by the execution cores, the scheduling process of the tasks of different types is simplified, and meanwhile the requirement of priority processing is met.
Drawings
Fig. 1 is a flowchart of a queue task classification hybrid processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a queue task classification hybrid processing apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an internet of things system according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that, for the sake of brevity, this description does not exhaust all alternative embodiments, and it should be understood by those skilled in the art after reading this description that any combination of features may constitute an alternative embodiment as long as the features are not mutually inconsistent.
The following examples are described in detail.
Example one
Fig. 1 is a flowchart of a queue task classification hybrid processing method according to an embodiment of the present invention. The method provided in this embodiment may be executed by various operating devices for queue task classification hybrid processing; such an operating device may be implemented by software and/or hardware and may be formed by two or more physical entities or by a single physical entity.
Specifically, referring to fig. 1, the queue task classification hybrid processing method includes the following steps:
step S101: and monitoring the task state in the multi-core Internet of things system, wherein the execution core of the multi-core Internet of things system comprises a priority core and a plurality of common cores, each execution core is correspondingly allocated with a buffer interval, and the buffer interval is one section of a first-in first-out buffer queue in the multi-core Internet of things system.
In the architecture of the internet of things system, a sink node is a key component of the architecture, in the specific implementation process, the multi-core internet of things system is designed based on an embedded multi-core processor, and a plurality of execution cores in the embedded multi-core processor can perform operation simultaneously, so that higher processing efficiency is brought to data collection in the multi-core internet of things system under the condition of limited resource configuration.
For an embedded multi-core processor, each processing core cannot simultaneously process all tasks allocated to it; that is, tasks allocated to an Internet of Things node may need to queue, and queued tasks are temporarily cached in a first-in first-out cache queue. In the prior art, during queuing a task may be repeatedly scheduled and switched to different execution cores according to the actual processing progress of those cores, which amounts to performing invalid scheduling work in the task scheduling process.
In this scheme, to improve scheduling efficiency, the first-in first-out cache queue is divided into sections, each corresponding to one execution core. A task allocated to an execution core is first cached in the corresponding cache interval. By pairing execution cores with cache intervals in this way, the association between a task and the execution core that processes it is fixed in a relatively static manner, reducing invalid task allocation and scheduling as much as possible. When the first-in first-out cache queue is divided, the cache intervals allocated to the execution cores may be set to be the same, that is, the cache queue length corresponding to each execution core is the same, ensuring that the upper limit of tasks allocated to each execution core is identical and that the amount of tasks allocated and scheduled to each execution core is relatively balanced.
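As an illustrative sketch only (not the patent's implementation; the names `BufferInterval`, `QUEUE_LEN`, and `NUM_COMMON` are assumptions), the segmented first-in first-out cache queue can be modeled as equal-length intervals, one statically bound to each execution core:

```python
from collections import deque

QUEUE_LEN = 8    # assumed equal cache-queue length for every execution core
NUM_COMMON = 3   # assumed number of common cores; one extra core is the priority core

class BufferInterval:
    """One section of the FIFO cache queue, statically bound to one execution core."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tasks = deque()

    def remaining(self):
        # length of the remaining (free) cache queue in this interval
        return self.capacity - len(self.tasks)

    def push(self, task):
        if self.remaining() == 0:
            raise OverflowError("buffer interval is full")
        self.tasks.append(task)

    def pop(self):
        # FIFO: the earliest cached task is taken first
        return self.tasks.popleft() if self.tasks else None

priority_interval = BufferInterval(QUEUE_LEN)
common_intervals = [BufferInterval(QUEUE_LEN) for _ in range(NUM_COMMON)]
```

Because each interval has a fixed capacity, the per-core task upper limit is identical, matching the balanced-allocation property described above.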
In actual processing, to meet the requirement of processing priority tasks first, all priority tasks are cached centrally in the corresponding priority cache interval and handled by a dedicated execution core (namely, the priority core). The other execution cores (namely, the common cores) also distinguish priority when acquiring a new task: if the priority cache interval currently holds tasks, a common core must also process those first.
Step S102: when the input of the latest task is monitored, caching the latest task to a first cache interval or a priority cache interval according to the task priority so as to be correspondingly distributed to the execution core corresponding to the cache interval, wherein the first cache interval is the cache interval with the longest current residual cache queue in the common cache interval corresponding to the common core, and the priority cache interval is the cache interval corresponding to the priority core.
For an Internet of Things node, when input of a latest task is received, the latest task needs to be allocated to some execution core in the node's embedded multi-core processor. In the existing processing mode, the first-in first-out cache queue is managed as a whole while the tasks of all execution cores wait to be executed, and the allocation of a task to a specific execution core may be continually adjusted as task processing progresses; consequently, the task processing states of all execution cores must be continuously monitored while tasks sit in the first-in first-out cache queue, and task allocation must be continuously readjusted. In this scheme, a common latest task is directly allocated to the common core currently processing the fewest tasks, the tasks allocated to each execution core are relatively fixed, and the basic processing principle is that the initially allocated execution core performs the task, which reduces allocation changes while tasks wait to be processed. A priority latest task is cached directly to the priority cache interval and handled by the dedicated execution core (namely, the priority core) by default; when the processing capability of the priority core is insufficient, the other common cores also process priority tasks first, so that the processing of priority tasks is not delayed.
During specific allocation, to reduce each task's queuing time as much as possible, the current task state of each execution core is first judged for the detected latest task. The current task state is confirmed from the cache interval corresponding to each execution core: the fewer tasks currently cached in an interval, the shorter the occupied cache queue and the longer the remaining cache queue. The cache interval with the longest remaining cache queue is taken as the first cache interval, and the latest task is allocated to it. After the latest task is cached in the first cache interval, its execution core is determined by the correspondence between cache intervals and execution cores, and the task waits in the cache interval for that execution core to process it.
In actual processing, several cache intervals may have the same number of remaining tasks, that is, several cache intervals may all qualify as first cache intervals; when there are a plurality of first cache intervals, the latest task is randomly cached to one of them.
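The allocation rule of step S102 can be sketched as below. This is a simplified, assumed model: intervals are represented as plain Python lists, and `CAPACITY` and `dispatch` are illustrative names, not from the patent.

```python
import random

CAPACITY = 8  # assumed equal cache-queue length for each common core

def dispatch(task, is_priority, priority_tasks, common_task_lists):
    """Cache the latest task: a priority task goes to the priority cache
    interval; a common task goes to a first cache interval, i.e. a common
    interval with the longest remaining cache queue, ties broken at random."""
    if is_priority:
        priority_tasks.append(task)
        return "priority"
    remaining = [CAPACITY - len(q) for q in common_task_lists]
    longest = max(remaining)
    # several intervals may tie for longest remaining queue; pick one at random
    idx = random.choice([i for i, r in enumerate(remaining) if r == longest])
    common_task_lists[idx].append(task)
    return idx
```

Note that the choice depends only on queue lengths at dispatch time; once cached, the task stays bound to that interval's core, which is what eliminates the repeated rescheduling described above.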
Step S103: and the priority core acquires a new task from the priority cache interval for processing, and the common core acquires the new task from the corresponding common cache interval for processing when the common core fails to acquire the new task from the priority cache interval.
In the specific task processing process, the priority core acquires new tasks from the priority cache interval for processing. A common core also preferentially acquires new tasks from the priority cache interval, and only when the priority cache interval holds no pending task does it acquire a new task from its corresponding common cache interval. This ensures that common tasks are processed in order while priority tasks are processed first.
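The acquisition order of step S103 can be sketched as follows. This is an assumed single-threaded model for clarity; real cores would run such a loop concurrently, with locking around the shared priority interval (function names are illustrative).

```python
def fetch_for_common_core(priority_tasks, own_tasks):
    """A common core first tries the priority cache interval; only when that
    acquisition fails does it take a task from its own common cache interval."""
    if priority_tasks:
        return priority_tasks.pop(0)   # priority tasks are drained first
    if own_tasks:
        return own_tasks.pop(0)        # fall back to the core's own interval
    return None                        # nothing to process

def fetch_for_priority_core(priority_tasks):
    """The priority core serves only the priority cache interval."""
    return priority_tasks.pop(0) if priority_tasks else None
```

With this order, a burst of priority tasks that exceeds the priority core's capability is automatically absorbed by the common cores before they return to their own queues.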
Because differences in specific task processing may lead to differences in common task progress, the scheme further includes step S104 to fine-tune task allocation.
Step S104: and when monitoring that an idle cache interval appears in the common cache interval, migrating at least one task cache to the idle cache interval from a second cache interval so as to be correspondingly distributed to the execution cores corresponding to the idle cache interval, wherein the idle cache interval is a cache interval with empty tasks, and the second cache interval is a common cache interval with the most remaining tasks.
While each execution core processes tasks, processing speeds may differ due to factors such as task complexity, data transmission speed, and bandwidth allocation, so the task queuing conditions in the cache intervals end up differing. For example, some execution cores may have finished all their tasks, emptying their corresponding cache intervals, while several tasks still queue in the cache intervals of other execution cores. At this point, one or more tasks can be migrated from a queued cache interval to the idle cache interval, improving overall processing speed and preventing any execution core from sitting idle.
When migrating tasks, rather than moving several tasks from other cache intervals to the idle cache interval at once, tasks are migrated one by one while the task counts of all common cache intervals are checked. Overall, when an idle cache interval is detected among the common cache intervals, a second cache interval is confirmed successively, and task caches are migrated one by one from the second cache interval to the idle cache interval until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every common cache interval is not higher than the preset threshold. While migrating tasks one by one, the scheme checks whether the formerly idle cache interval has reached the preset threshold; if so, it already holds a certain number of pending tasks, no more tasks are migrated to it, and it simply becomes eligible again when latest tasks are allocated to cache intervals. Meanwhile, to avoid leaving too few tasks in the other cache intervals, migration stops once outward migration has reduced their remaining task counts to no higher than the preset threshold.
In a specific processing process, several cache intervals may tie for the most remaining tasks, that is, there may be a plurality of second cache intervals. In that case, rather than migrating one task from every second cache interval to the idle cache interval, one task cache is randomly migrated from a single second cache interval, still in the successive confirm-and-migrate manner: after each task is migrated, the tasks now in the idle cache interval and in the other cache intervals are re-evaluated, and migration stops once the set number of tasks is reached. The task migrated to the idle cache interval may be either the most recently cached task or the earliest cached task.
Besides the preset threshold, the decision to stop migrating from the other cache intervals can also be based on a comparison with the tasks in the idle cache interval: if the number of remaining tasks in another cache interval exceeds the number of tasks in the idle cache interval by no more than one, no task is migrated.
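The one-by-one migration of step S104 can be sketched as below. This is an assumed model (`threshold` and the list representation are illustrative); the earliest cached task is migrated here, though as noted above the latest cached task could be chosen instead.

```python
import random

def migrate_to_idle(task_lists, idle_idx, threshold):
    """Move tasks one by one from a second cache interval (a common interval
    with the most remaining tasks) into the idle interval, stopping when the
    idle interval holds `threshold` tasks or no other interval holds more
    than `threshold` tasks."""
    while len(task_lists[idle_idx]) < threshold:
        counts = {i: len(q) for i, q in enumerate(task_lists) if i != idle_idx}
        most = max(counts.values())
        if most <= threshold:
            break  # every common interval is already at or below the threshold
        # several intervals may tie for the most tasks; pick one donor at random
        donor = random.choice([i for i, c in counts.items() if c == most])
        task_lists[idle_idx].append(task_lists[donor].pop(0))
```

Re-evaluating the counts after every single move is what keeps the successive confirm-and-migrate behavior: the donor can change between moves as queue lengths shift.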
It should be noted that in this embodiment the first cache interval and the second cache interval are not fixed cache intervals; they are merely labels defined by the state of the cache intervals at a given moment, introduced for convenience of description, and their function does not differ from that of the other cache intervals. After the current latest task is cached in a certain first cache interval, that interval may no longer be the first cache interval when the next latest task is cached. Likewise, the idle cache interval keeps that designation throughout a migration process rather than only while it contains no tasks; in terms of task queuing state, its designation as idle ends once task migration into it is finished.
Meanwhile, it should be understood that steps S101 to S104 exist as a whole and are not necessarily executed in the strict order described above. When the multi-core Internet of Things system processes tasks, the allocation of latest tasks and the migration of tasks are performed according to the actual monitoring results: when a latest task is monitored, it is cached; when an idle cache interval is monitored, tasks are migrated to it. If latest tasks continue to be monitored, steps S102 and S103 continue to cache and process them; if idle cache intervals continue to be monitored, step S104 continues to fine-tune task allocation.
Monitoring a task state in the multi-core internet of things system, wherein an execution core of the multi-core internet of things system comprises a priority core and a plurality of common cores, each execution core is correspondingly allocated with a cache interval, and the cache interval is one section of a first-in first-out cache queue in the multi-core internet of things system; when the input of a latest task is monitored, caching the latest task to a first cache interval or a priority cache interval according to task priority so as to be correspondingly allocated to an execution core corresponding to the cache interval, wherein the first cache interval is a cache interval with the longest current residual cache queue in a common cache interval corresponding to a common core, and the priority cache interval is a cache interval corresponding to the priority core; and the priority core acquires a new task from the priority cache interval for processing, and the common core acquires the new task from the corresponding common cache interval for processing when the common core fails to acquire the new task from the priority cache interval. According to the scheme, the corresponding cache intervals are distributed to the execution cores, when the latest tasks are received, the latest tasks are distributed to the corresponding cache intervals according to the priorities of the tasks and the number of the tasks in the cache intervals, the switching process of task distribution is reduced, the data processing efficiency of multi-core processing task scheduling in the Internet of things system is improved, the priority tasks are processed by the execution cores, the scheduling process of the tasks of different types is simplified, and meanwhile the requirement of priority processing is met.
Example two
Fig. 2 is a schematic structural diagram of a queue task classification hybrid processing apparatus according to the second embodiment of the present invention. Referring to fig. 2, the queue task classification hybrid processing apparatus includes a state monitoring unit 210, a task caching unit 220, and a task obtaining unit 230.
The state monitoring unit 210 is configured to monitor the task state in the multi-core Internet of Things system, where the execution cores of the system comprise a priority core and a plurality of ordinary cores, each execution core is allocated a corresponding cache interval, and each cache interval is a segment of a first-in first-out cache queue in the system. The task caching unit 220 is configured to, when the input of a latest task is detected, cache the latest task to a first cache interval or a priority cache interval according to its priority, so that it is allocated to the execution core corresponding to that cache interval, where the first cache interval is the cache interval with the longest remaining cache queue among the ordinary cache intervals corresponding to the ordinary cores, and the priority cache interval is the cache interval corresponding to the priority core. The task obtaining unit 230 is configured such that the priority core obtains new tasks from the priority cache interval for processing, and an ordinary core obtains new tasks from its corresponding ordinary cache interval for processing when it fails to obtain a new task from the priority cache interval.
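The fetch rule implemented by the task obtaining unit — the priority core reads only the priority interval, while an ordinary core tries the priority interval first and falls back to its own interval — can be sketched as follows. All names and the `"priority"` core convention are illustrative assumptions:

```python
from collections import deque

# Minimal sketch of the fetch rule; the data layout and the use of the
# string "priority" to denote the priority core are assumptions.
priority_interval = deque(["p1"])
ordinary_intervals = {0: deque(["a1"]), 1: deque()}

def fetch(core):
    """core is 'priority' or an ordinary core index."""
    if priority_interval:                 # both core types try here first
        return priority_interval.popleft()
    if core != "priority" and ordinary_intervals[core]:
        return ordinary_intervals[core].popleft()
    return None                           # nothing available to process

print(fetch(0))           # ordinary core drains the priority interval first
print(fetch(0))           # then falls back to its own interval
print(fetch("priority"))  # the priority core never touches ordinary intervals
```

This ordering is what lets ordinary cores help with priority tasks when their own intervals would otherwise make them wait, while the priority core is never diverted to ordinary work.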
On the basis of the above embodiment, the apparatus further includes:
a task migration unit, configured to migrate at least one cached task from a second cache interval to an idle cache interval when an idle cache interval is detected among the ordinary cache intervals, so that it is allocated to the execution core corresponding to the idle cache interval, where the idle cache interval is a cache interval containing no tasks and the second cache interval is the ordinary cache interval holding the most remaining tasks.
On the basis of the foregoing embodiment, migrating at least one cached task from a second cache interval to the idle cache interval when an idle cache interval is detected among the ordinary cache intervals includes:

when an idle cache interval is detected among the ordinary cache intervals, repeatedly identifying the second cache interval and migrating cached tasks from it to the idle cache interval one by one, until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every ordinary cache interval is no higher than the preset threshold.
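The migration loop above — re-confirm the fullest interval at each step, move one task, and stop on either termination condition — can be sketched as follows. The `THRESHOLD` value and function names are assumptions for illustration:

```python
from collections import deque

# Illustrative sketch of the migration rule: tasks move one at a time from
# the fullest ordinary interval (the "second cache interval") into an idle
# interval. THRESHOLD and all names are assumptions, not from the patent.
THRESHOLD = 2

def rebalance(intervals, idle_idx):
    while len(intervals[idle_idx]) < THRESHOLD:
        # re-confirm the second cache interval on every step, as described
        src = max((i for i in range(len(intervals)) if i != idle_idx),
                  key=lambda i: len(intervals[i]))
        if len(intervals[src]) <= THRESHOLD:
            break  # every ordinary interval is at or below the threshold
        intervals[idle_idx].append(intervals[src].popleft())

queues = [deque(["t1", "t2", "t3", "t4"]), deque(), deque(["t5"])]
rebalance(queues, idle_idx=1)
print([list(q) for q in queues])  # work spread from the fullest interval
```

Re-selecting the source each iteration keeps the migration fair when several intervals are heavily loaded, at the cost of one extra scan per moved task.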
On the basis of the above embodiment, when there are multiple second cache intervals, a cached task is migrated from a randomly selected second cache interval to the idle cache interval.

On the basis of the above embodiment, when there are multiple first cache intervals, the latest task is cached to a randomly selected first cache interval.
On the basis of the above embodiment, the length of the cache interval corresponding to each execution core is the same.
The queue task classification hybrid processing apparatus provided by this embodiment of the present invention is included in the queue task classification hybrid processing device, can be used to execute any queue task classification hybrid processing method provided in the first embodiment, and has the corresponding functions and beneficial effects.
Example three
Fig. 3 is a schematic structural diagram of an Internet of Things node device according to the third embodiment of the present invention; such node devices form an Internet of Things system and thereby fully implement the task scheduling of this scheme. As shown in fig. 3, the Internet of Things node device includes a processor 310, a memory 320, an input device 330, an output device 340, and a communication device 350. The number of processors 310 in the node device may be one or more; one processor 310 is taken as an example in fig. 3. The processor 310, the memory 320, the input device 330, the output device 340, and the communication device 350 in the node device may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 3.
The memory 320, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the queue task classification hybrid processing method in the embodiment of the present invention (for example, the state monitoring unit 210, the task caching unit 220, and the task obtaining unit 230 in the queue task classification hybrid processing apparatus). The processor 310 executes the various functional applications and data processing of the Internet of Things node device by running the software programs, instructions, and modules stored in the memory 320, thereby implementing the queue task classification hybrid processing method.
The memory 320 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required by a function, and the data storage area may store data created through use of the Internet of Things node device, and the like. Furthermore, the memory 320 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, which may be connected to the Internet of Things node device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the internet of things node device. The output device 340 may include a display device such as a display screen.
The Internet of Things node device includes the above queue task classification hybrid processing apparatus, can be used to execute any queue task classification hybrid processing method, and has the corresponding functions and beneficial effects.
Example four
Embodiments of the present invention further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform operations related to the queue task classification hybrid processing method provided in any embodiment of the present application, with the corresponding functions and advantages.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product.
Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.

These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011641487 | 2020-12-31 | ||
CN2020116414873 | 2020-12-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114138480A true CN114138480A (en) | 2022-03-04 |
Family
ID=80388347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111422158.4A Pending CN114138480A (en) | 2020-12-31 | 2021-11-26 | Queue task classification hybrid processing method, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114138480A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104995603A (en) * | 2013-11-14 | 2015-10-21 | 联发科技股份有限公司 | Task scheduling method based at least in part on distribution of tasks sharing the same data and/or accessing the same memory address and related non-transitory computer readable medium for distributing tasks in a multi-core processor system |
CN108200134A (en) * | 2017-12-25 | 2018-06-22 | 腾讯科技(深圳)有限公司 | Request message management method and device, storage medium |
CN110493145A (en) * | 2019-08-01 | 2019-11-22 | 新华三大数据技术有限公司 | A kind of caching method and device |
CN111708639A (en) * | 2020-06-22 | 2020-09-25 | 中国科学技术大学 | Task scheduling system and method, storage medium and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7060724B2 (en) | Task scheduling methods, resource sharing usage, schedulers, computer-readable storage media and equipment | |
CN113934530A (en) | Multi-core multi-queue task cross processing method, device, system and storage medium | |
CN113934529A (en) | Task scheduling method, device and system of multi-level core and storage medium | |
CN109564528B (en) | System and method for computing resource allocation in distributed computing | |
US8615765B2 (en) | Dividing a computer job into micro-jobs | |
WO2023082560A1 (en) | Task processing method and apparatus, device, and medium | |
KR102719059B1 (en) | Multi-stream ssd qos management | |
CN107122233B (en) | A Multi-VCPU Adaptive Real-time Scheduling Method for TSN Services | |
WO2014193438A1 (en) | Efficient priority-aware thread scheduling | |
CN106330760A (en) | Method and device for cache management | |
US20150100964A1 (en) | Apparatus and method for managing migration of tasks between cores based on scheduling policy | |
CN105677744A (en) | Method and apparatus for increasing service quality in file system | |
CN114020440A (en) | Multi-stage task classification processing method, device and system and storage medium | |
US20170344266A1 (en) | Methods for dynamic resource reservation based on classified i/o requests and devices thereof | |
CN113971085A (en) | Method, device, system and storage medium for distinguishing processing tasks by master core and slave core | |
CN109766168B (en) | Task scheduling method and device, storage medium and computing equipment | |
CN110928649A (en) | Resource scheduling method and device | |
CN114138480A (en) | Queue task classification hybrid processing method, device, system and storage medium | |
EP2413240A1 (en) | Computer micro-jobs | |
EP3816801A1 (en) | Method and apparatus for orchestrating resources in multi-access edge computing (mec) network | |
CN114035929B (en) | Multi-sequence mode task execution method, device, system and storage medium | |
WO2017070869A1 (en) | Memory configuration method, apparatus and system | |
CN112764895A (en) | Task scheduling method, device and system of multi-core Internet of things chip and storage medium | |
CN112764896A (en) | Task scheduling method, device and system based on standby queue and storage medium | |
CN113971086A (en) | Task scheduling method, device and system based on task relevance and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||