CN113238843B - Task execution method, device, equipment and storage medium - Google Patents
- Publication number
- CN113238843B (application CN202110524439.4A)
- Authority
- CN
- China
- Prior art keywords
- executed
- task
- tasks
- queue
- annotation information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications (all within G—Physics; G06—Computing; G06F—Electric digital data processing; G06F9/00—Arrangements for program control)
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU], to service a request
- G06F9/5027—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06F9/5038—Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
- G06F9/524—Deadlock detection or avoidance
- G06F2209/484—Precedence
- G06F2209/5011—Pool
- G06F2209/5018—Thread allocation
- G06F2209/5021—Priority
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a task execution method, device, equipment, and storage medium. The method comprises the following steps: acquiring at least two tasks to be executed based on a distributed timing task framework, where the at least two tasks to be executed carry annotation information and the annotation information comprises a task identifier; querying a database according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed, where the database comprises task identifiers of at least one type of task and the thread pools corresponding to those task identifiers; and executing the at least two tasks to be executed in parallel based on the thread pools. The technical scheme of the invention thereby improves task execution efficiency.
Description
Technical Field
The embodiment of the invention relates to the field of computer technology, and in particular to a task execution method, device, equipment, and storage medium.
Background
In the streaming mode of a distributed timing task framework, each scheduling trigger first fetches tasks; the fetched tasks are then executed serially, one by one, so the framework's task-fetching operation and task-execution operation are themselves serialized. Only after all fetched tasks have finished executing does the framework fetch, and again serially execute, the next batch.
In the course of implementing the invention, the inventors found that the prior art has at least the following technical problem:
when the logic of task execution is complex, or network delay or other time-consuming conditions occur, a given task blocks, which in turn blocks the execution of subsequent tasks and the next round of data fetching. Task processing becomes slow, and business operation efficiency suffers.
Disclosure of Invention
The embodiment of the invention provides a task execution method, device, equipment and storage medium, which are used for realizing parallel task execution and improving task execution efficiency.
In a first aspect, an embodiment of the present invention provides a task execution method, including:
acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier;
Inquiring a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and thread pools corresponding to the task identifiers;
And executing the at least two tasks to be executed in parallel based on the thread pool.
In a second aspect, an embodiment of the present invention further provides a task execution device, where the task execution device includes:
the task acquisition module is used for acquiring at least two tasks to be executed based on the distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier;
The query module is used for querying a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and thread pools corresponding to the task identifiers;
And the execution module is used for executing the at least two tasks to be executed in parallel based on the thread pool.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the task execution method according to any one of the embodiments of the present invention when executing the program.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a task execution method according to any of the embodiments of the present invention.
In the embodiment of the invention, at least two tasks to be executed carrying annotation information are acquired based on the distributed timing task framework; a database is queried according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed; and the at least two tasks to be executed are executed in parallel based on the thread pools. This solves the prior-art problem that, after the distributed timing task framework acquires at least two tasks to be executed, it executes them serially, so that if the processing logic of any task is complex, or network delay or other time-consuming conditions occur, that task blocks, the execution of subsequent tasks and the next round of data fetching are blocked in turn, task processing becomes slow, and business operation efficiency suffers. Because the framework executes the acquired tasks in parallel based on the thread pools, task execution efficiency is improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below, it being understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of its scope; for a person skilled in the art, other related drawings may be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a task execution method in accordance with a first embodiment of the present invention;
FIG. 2 is a flow chart of a task execution method in a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a task execution device according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a computer device in a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1
Fig. 1 is a flowchart of a task execution method according to the first embodiment of the present invention. The method is applicable to task execution based on a distributed timing task framework and may be performed by the task execution device provided by the embodiment of the present invention, which may be implemented in software and/or hardware. As shown in Fig. 1, the method specifically comprises the following steps:
s110, acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: and (5) task identification.
The distributed timing task framework may be the elastic-job framework, or any other distributed timing task framework; the embodiment of the present invention does not limit this.
The task identifier may be composed of at least one of numbers, letters, and symbols, or may take other forms; the embodiment of the present invention does not limit this.
The same task identifier is set for all tasks of one type. For example, the task identifier corresponding to the first type of task may be task identifier A, that of the second type may be task identifier B, and that of the third type may be task identifier C.
The annotation information may be annotation information added to a task when the task is created. The annotation information may comprise the task identifier, and may further comprise the maximum number of threads and a parallel execution type identifier; the embodiment of the present invention does not limit this.
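The annotation information described above could be sketched as a Java annotation. This is purely illustrative — the patent gives no source code, and the names `TimedTask`, `taskId`, `maxThreads`, and `parallelType` are assumptions:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation carrying the metadata the patent describes:
// a task identifier, an optional maximum thread count, and an optional
// parallel-execution-type identifier.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TimedTask {
    String taskId();                    // e.g. "A", "B", "C"
    int maxThreads() default 1;         // maximum number of threads for this task type
    String parallelType() default "R";  // parallel execution type identifier
}
```

A task class would then declare, e.g., `@TimedTask(taskId = "A", maxThreads = 4)`, and the framework would read these values reflectively at runtime.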
For example, the manner of acquiring at least two tasks to be executed based on the distributed timing task framework may be: acquiring task-grabbing logic based on the distributed timing task framework, and grabbing at least two tasks to be executed through that logic. The number of tasks grabbed is determined by the grabbing logic.
S120, querying a database according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed, where the database comprises: task identifiers of at least one type of task and the thread pools corresponding to the task identifiers.
The database comprises task identifiers of at least one type of task and the thread pools corresponding to those identifiers. For example, the database may record: task identifier A corresponding to thread pool P, task identifier B to thread pool O, task identifier C to thread pool Q, and so on.
For example, the manner of querying the database according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed may be: a database of correspondences between task identifiers and thread pools is established in advance; since the annotation information comprises the task identifier, the database is queried by task identifier to obtain the corresponding thread pool, i.e. the thread pools corresponding to the at least two tasks to be executed. For instance: a database is pre-established in which task identifier A corresponds to thread pool P, task identifier B to thread pool O, and task identifier C to thread pool Q; a first task to be executed and a second task to be executed are acquired based on the distributed timing task framework; if the annotation information carried by both tasks contains task identifier A, the database is queried by task identifier A to obtain thread pool P.
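The "database" of identifier-to-pool correspondences described above can be as simple as an in-memory map. A minimal sketch (class and method names are assumptions, not from the patent):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal registry mirroring the database of step S120:
// task identifier A -> thread pool P, B -> pool O, C -> pool Q, etc.
class ThreadPoolRegistry {
    private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    // Store the pool created for a task type under its identifier.
    void register(String taskId, ExecutorService pool) {
        pools.put(taskId, pool);
    }

    // Query step: look up the pool for the identifier carried in the annotation.
    ExecutorService lookup(String taskId) {
        return pools.get(taskId);
    }
}
```

At dispatch time the framework would read the task identifier from the annotation information and call `lookup` to obtain the pool on which to execute the task.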
S130, executing the at least two tasks to be executed in parallel based on the thread pool.
By way of example, the manner of executing at least two tasks to be executed in parallel based on the thread pool may be: executing the acquired at least two tasks to be executed in parallel based on the thread pool, and acquiring the next at least two tasks to be executed based on the distributed timing task framework after all of them have finished. For instance, a first task to be executed and a second task to be executed may be acquired and then executed in parallel based on the thread pool. In the prior art, by contrast, after a first and a second task to be executed are obtained, the second task cannot be executed until the first has completed; if the first task has complex logic, or network delay or other time-consuming conditions occur, the second task is blocked, the execution of subsequent tasks and the next round of data fetching are blocked in turn, task processing becomes slow, and business operation efficiency suffers.
By way of example, the manner of executing at least two tasks to be executed in parallel based on the thread pool may also be: determining a parallel execution type according to the annotation information, and executing the at least two tasks to be executed in parallel according to the thread pool and the parallel execution type. For instance, if the parallel execution type is A, the at least two tasks to be executed are written into a queue, the tasks in the queue are executed in parallel based on the thread pool, and once all tasks in the queue have finished, the step of grabbing at least two tasks to be executed is performed again. If the parallel execution type is B, the at least two tasks acquired the first time are written into the queue and executed based on the thread pool until all of them have finished, after which the grabbing step is performed again. The maximum queue execution count is determined from the annotation information, and the queue's idle count is derived from it. For a batch that is not the first acquisition: if the number of tasks in the batch is less than or equal to the maximum queue execution count, the number of tasks to write into the queue is determined from the queue's idle count, the target tasks to be executed are selected accordingly, the target tasks are written into the queue and executed based on the thread pool until all tasks in the queue have finished, the non-target tasks in the batch are ignored, and the step of acquiring at least two tasks to be executed based on the distributed timing task framework is performed again; if the number of tasks in the batch is greater than the maximum queue execution count, the whole batch is ignored and that acquisition step is performed again. For example, with parallel execution type B, the 8 tasks acquired the first time are written into the queue and executed based on the thread pool until all of them have finished. Suppose the maximum queue execution count determined from the annotation information is 5. If 6 tasks are acquired the second time, all 6 are ignored and the acquisition step is repeated. If 4 tasks are acquired the third time and the queue's idle count is 2 (given the maximum of 5), then 2 tasks are to be written into the queue; the target tasks are determined to be tasks X and Y, which are written into the queue and executed based on the thread pool until all tasks in the queue have finished; the remaining tasks of the third batch are ignored, tasks are acquired a fourth time based on the distributed timing task framework, and so on.
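The bounded-admission behaviour just described (a queue with a maximum execution count, where a fresh batch is admitted only up to the queue's idle capacity and the remainder, or the whole oversized batch, is ignored) can be sketched with a semaphore tracking free slots. This is one illustrative reading of the patent's example; all names are assumptions:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Semaphore;

// Sketch of parallel execution type B: at most maxQueue tasks may be in
// flight. A batch larger than maxQueue is ignored entirely (as in the
// patent's example of max 5, 6 fetched -> all ignored); otherwise only as
// many tasks as there are free slots are admitted (4 fetched, 2 free -> 2).
class BoundedBatchExecutor {
    private final ExecutorService pool;
    private final Semaphore freeSlots;   // tracks idle capacity of the queue
    private final int maxQueue;

    BoundedBatchExecutor(ExecutorService pool, int maxQueue) {
        this.pool = pool;
        this.maxQueue = maxQueue;
        this.freeSlots = new Semaphore(maxQueue);
    }

    // Returns how many tasks of the batch were actually admitted.
    int submitBatch(List<Runnable> batch) {
        if (batch.size() > maxQueue) {
            return 0;                    // whole batch ignored; refetch later
        }
        int admitted = 0;
        for (Runnable task : batch) {
            if (!freeSlots.tryAcquire()) {
                break;                   // queue full: ignore remaining tasks
            }
            admitted++;
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    freeSlots.release(); // slot becomes idle again
                }
            });
        }
        return admitted;
    }
}
```

The caller would invoke `submitBatch` after each grab and, regardless of how many tasks were admitted, return to the acquisition step, matching the "ignore and refetch" loop in the text.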
Optionally, before querying the database according to the annotation information, the method further comprises:
obtaining annotation information of at least one type of task, wherein the annotation information further comprises: a maximum number of threads;
creating a thread pool for the at least one type of task according to the maximum number of threads;
and correspondingly storing the thread pool and the task identifier in a database.
The maximum number of threads may be preset according to the task type, or may be set by the system; the embodiment of the present invention does not limit this.
Illustratively, the manner of obtaining the annotation information of at least one type of task may be: a first table of correspondences between tasks' characteristic information and annotation information is established in advance, and the first table is looked up by a task's characteristic information to obtain the corresponding annotation information. For example, the first table may store: characteristic information a corresponding to annotation information P, characteristic information b to annotation information O, and characteristic information c to annotation information Q; if the characteristic information of the first type of task is a, the first table is looked up to obtain annotation information P. Alternatively, the maximum number of threads may be obtained from a second table of correspondences between task identifiers and maximum thread counts, looked up by task identifier. For example, the second table may store: task identifier A corresponding to maximum thread count M, task identifier B to N, and task identifier C to W; if the task identifier of the first type of task is A, the second table is looked up to obtain maximum thread count M.
For example, a thread pool may be created for each type of task according to its maximum number of threads: thread pool P may be created for the first type of task, thread pool O for the second type, and thread pool Q for the third type, each sized by the maximum thread count of the corresponding task type.
For example, the thread pools and task identifiers are stored correspondingly in the database, which may then record: task identifier A corresponding to thread pool P, task identifier B to thread pool O, and task identifier C to thread pool Q.
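The preparation steps above (for each task type, read its maximum thread count, create a pool of that size, and store it under the task identifier) can be sketched in one helper. The class and method names are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical setup for the patent's preparation steps: for each task type,
// take the maximum thread count from its annotation metadata, create a fixed
// thread pool of that size, and store it in the database keyed by the task
// identifier (A -> pool of size M, B -> pool of size N, ...).
class PoolSetup {
    static Map<String, ExecutorService> buildPools(Map<String, Integer> maxThreadsByTaskId) {
        Map<String, ExecutorService> db = new ConcurrentHashMap<>();
        maxThreadsByTaskId.forEach((taskId, maxThreads) ->
            db.put(taskId, Executors.newFixedThreadPool(maxThreads)));
        return db;
    }
}
```

Running this once at startup yields exactly the identifier-to-pool database that step S120 later queries.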
Optionally, acquiring at least two tasks to be performed based on the distributed timing task framework includes:
acquiring task grabbing logic based on a distributed timing task framework;
and grabbing at least two tasks to be executed through grabbing logic.
The number of tasks grabbed is determined by the grabbing logic.
In the technical scheme of this embodiment, at least two tasks to be executed carrying annotation information are acquired based on the distributed timing task framework; a database is queried according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed; and the at least two tasks are executed in parallel based on the thread pools. This solves the prior-art problem that, after acquiring at least two tasks to be executed, the framework executes them serially, so that if any task's processing logic is complex, or network delay or other time-consuming conditions occur, that task blocks, the execution of subsequent tasks and the next round of data fetching are blocked in turn, task processing becomes slow, and business operation efficiency suffers. Because the framework executes the acquired tasks in parallel based on the thread pools, task execution efficiency is improved.
Example two
Fig. 2 is a flowchart of a task execution method according to a second embodiment of the present invention, where the second embodiment is further optimized based on the first embodiment. The annotation information further includes: parallel execution type identification; correspondingly, executing the at least two tasks to be executed in parallel based on the thread pool comprises the following steps: determining a parallel execution type according to the parallel execution type identifier; and executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type.
As shown in fig. 2, the method includes:
s210, acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: task identification and parallel execution type identification.
The parallel execution type identifier may be composed of at least one of numbers, letters, and symbols, or may take other forms; it is distinct from the task identifier.
The parallel execution type identifier may be a parallel execution type identifier manually added when the task is established, or may be a parallel execution type identifier obtained by the system by querying a corresponding relation table according to the characteristic information of the task, which is not limited in the embodiment of the present invention.
S220, querying a database according to the annotation information to obtain the thread pools corresponding to the at least two tasks to be executed, where the database comprises: task identifiers of at least one type of task and the thread pools corresponding to the task identifiers.
S230, determining the parallel execution type according to the parallel execution type identifier.
Illustratively, determining the parallel execution type according to the parallel execution type identifier may proceed as follows: a correspondence table between parallel execution type identifiers and parallel execution types is established in advance; the parallel execution type identifier is read from the annotation information, and the table is queried by that identifier to obtain the parallel execution type. For example, the table may record: identifier R corresponding to the first parallel execution type, identifier S to the second, and identifier T to the third; if identifier R is read from the annotation information, querying the table yields the first parallel execution type.
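The correspondence table just described can be as simple as a map keyed by the identifier. A sketch using the example identifiers R, S, and T from the text (the enum and class names are assumptions):

```java
import java.util.Map;

// The example correspondence table: identifier R -> first parallel execution
// type, S -> second, T -> third.
enum ParallelType { FIRST, SECOND, THIRD }

class ParallelTypeTable {
    private static final Map<String, ParallelType> TABLE =
        Map.of("R", ParallelType.FIRST, "S", ParallelType.SECOND, "T", ParallelType.THIRD);

    static ParallelType resolve(String identifier) {
        ParallelType type = TABLE.get(identifier);
        if (type == null) {
            throw new IllegalArgumentException(
                "Unknown parallel execution type identifier: " + identifier);
        }
        return type;
    }
}
```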
S240, executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type.
By way of example, the manner of executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type may be: if the parallel execution type is the first type, writing the at least two tasks to be executed into a queue, executing the tasks in the queue in batches based on the thread pool, and, once all tasks in the queue have been executed, returning to the step of grabbing at least two tasks to be executed. Alternatively: if the parallel execution type is the second type, the maximum queue execution count is determined from the annotation information; the at least two tasks acquired the first time are written into the queue and executed based on the thread pool until all of them have finished, after which the grabbing step is performed again. For a batch that is not the first acquisition: if its size is less than or equal to the maximum queue execution count, the number of tasks to write into the queue is determined from the queue's idle count, the target tasks are selected accordingly, written into the queue, and executed based on the thread pool until all tasks in the queue have finished, while the non-target tasks of the batch are ignored and the step of acquiring at least two tasks to be executed based on the distributed timing task framework is performed again; if its size is greater than the maximum queue execution count, the whole batch is ignored and that acquisition step is performed again.
Optionally, executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type includes:
If the parallel execution type is determined to be the first type according to the parallel execution type identifier, writing the at least two tasks to be executed into a queue;
executing tasks to be executed in the queue in batches based on the thread pool;
and returning to the step of fetching at least two tasks to be executed once all tasks in the queue have been executed.
If the parallel execution type is the first type, the at least two fetched tasks are written directly into the queue, the tasks in the queue are executed in batches based on the thread pool, and the flow returns to the step of fetching at least two tasks to be executed once all tasks in the queue have been executed. For example, 8 tasks to be executed are acquired based on the distributed timing task framework and the parallel execution type identifier is read from the annotation information; if the identifier is R, the correspondence table is queried and the type corresponding to R is found to be the first type, so the 8 tasks are written into the queue, executed in parallel based on the thread pool, and the flow returns to the fetching step once all 8 tasks have been executed. In the prior art, by contrast, the 8 acquired tasks would be executed serially on the thread pool: the second task starts only after the first finishes, the third only after the second, and so on. If the first task has complex logic, suffers network delay, or is otherwise time-consuming, the remaining tasks are blocked, the execution of subsequent tasks and the next data fetch are blocked in turn, task processing slows, and business operation efficiency suffers. Executing the tasks in parallel avoids this blocking and improves task execution efficiency.
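The patent does not tie the first-type dispatch to any particular language or API; the following is a rough Python sketch of the loop just described (the fetch callback, task shapes, and names are illustrative assumptions, not the patented implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_first_type(fetch_batch, pool, rounds):
    """First-type dispatch: each fetched batch is written whole into the
    queue and executed in parallel on the thread pool; fetching resumes
    once every task in the batch has been executed."""
    results = []
    for _ in range(rounds):
        task_queue = list(fetch_batch())                      # write the fetched tasks into the queue
        futures = [pool.submit(task) for task in task_queue]  # execute them in parallel
        results.extend(f.result() for f in futures)           # wait until all are executed
    return results

# Hypothetical stand-ins for the 8 fetched tasks in the example above.
tasks = [lambda i=i: i * i for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    out = run_first_type(lambda: tasks, pool, rounds=1)
```

Because all 8 tasks are submitted before any result is awaited, a slow first task no longer delays the other seven.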
Optionally, executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type includes:
If the parallel execution type is determined to be the second type according to the parallel execution type identifier, determining the maximum number of queue execution according to annotation information;
Writing the at least two tasks to be executed, which are acquired for the first time, into a queue, and executing the tasks to be executed in the queue based on the thread pool until the tasks to be executed in the queue are all executed;
and returning to the step of fetching at least two tasks to be executed after all tasks in the queue have been executed.
Illustratively, if the parallel execution type is determined to be the second type according to the parallel execution type identifier, the maximum queue execution number is determined from the annotation information; the at least two tasks acquired in the first fetch are written into the queue and executed based on the thread pool until they are all executed; the flow then returns to the step of fetching at least two tasks to be executed. For example, if 8 tasks to be executed are first acquired based on the distributed timing task framework and the parallel execution type is the second type, those 8 tasks are written into the queue and executed based on the thread pool until all are executed. Likewise, if 20 tasks to be executed are acquired in the first fetch, all 20 are written into the queue and executed based on the thread pool, and only after every task in the queue has been executed does the flow return to the step of fetching at least two tasks to be executed.
Optionally, executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type includes:
If the parallel execution type is determined to be the second type according to the parallel execution type identifier, determining the maximum number of queue execution according to annotation information;
Determining the idle number of the queue according to the maximum number of queue execution;
If the number of tasks to be executed, which is not acquired for the first time, is smaller than or equal to the maximum number of queue execution, determining the number of tasks to be executed, which are to be written into the queue, according to the idle number of the queue;
Determining target tasks to be executed according to the number of the tasks to be executed of the queues to be written;
And writing the target task to be executed into a queue, and executing the task to be executed in the queue based on the thread pool until the task to be executed in the queue is executed.
The maximum queue execution number may be determined from the annotation information in either of two ways: the annotation information itself contains the maximum queue execution number, which is read directly; or a correspondence list between task identifiers and maximum queue execution numbers is established in advance and queried by task identifier to obtain the maximum corresponding to that identifier. The embodiment of the present invention is not limited in this respect.
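The two ways just listed can be sketched as follows; the field names (`queue_max`, `task_id`) and the table contents are illustrative assumptions, not fields defined by the patent:

```python
# Way 2's pre-established correspondence list: task identifier -> maximum.
QUEUE_MAX_TABLE = {"task-A": 5, "task-B": 10}

def queue_execution_max(annotation):
    """Determine the maximum queue execution number from annotation info."""
    if "queue_max" in annotation:                  # way 1: the annotation carries it directly
        return annotation["queue_max"]
    return QUEUE_MAX_TABLE[annotation["task_id"]]  # way 2: query the correspondence list
```

Way 1 keeps the limit next to the task definition; way 2 lets operators retune limits centrally without touching task code.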
The queue idle number is determined from the maximum queue execution number; for example, if the maximum queue execution number is 5 and 3 tasks in the queue are still executing, the queue idle number is determined to be 2.
If the number of tasks acquired in a non-first fetch is less than or equal to the maximum queue execution number, the number of tasks to be written into the queue is determined from the queue idle number. For example, if 4 tasks to be executed are acquired in the second fetch and the maximum queue execution number is 5, then since 4 is less than 5, the number of tasks to be written into the queue is determined from the queue idle number to be 2, and 2 of the 4 tasks are selected as the target tasks to be executed.
Illustratively, if the parallel execution type is determined to be the second type according to the parallel execution type identifier, the maximum queue execution number is determined from the annotation information and the queue idle number is determined from it; if the number of tasks acquired in a non-first fetch is less than or equal to the maximum queue execution number, the number of tasks to be written into the queue is determined from the queue idle number, the target tasks to be executed are selected according to that number, written into the queue, and executed based on the thread pool until all tasks in the queue have been executed. For example, at least two tasks to be executed carrying annotation information (a task identifier and a parallel execution type identifier) are acquired based on the distributed timing task framework; the parallel execution type is determined to be the second type from the identifier, the maximum queue execution number is determined to be 5 from the annotation information, and the queue idle number is determined to be 2. If 4 tasks are acquired in the second fetch, then since 4 is less than 5, the number of tasks to be written into the queue is 2, so 2 of the 4 tasks are selected as target tasks, written into the queue, and executed based on the thread pool until all tasks in the queue have been executed.
Optionally, the method further comprises:
and ignoring the non-target task to be executed in the non-first acquired tasks to be executed, and returning to execute the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
Illustratively, if the parallel execution type is determined to be the second type according to the parallel execution type identifier, the maximum queue execution number is determined from the annotation information and the queue idle number is determined from it; if the number of tasks acquired in a non-first fetch is less than or equal to the maximum queue execution number, the number of tasks to be written into the queue is determined from the queue idle number, the target tasks are selected accordingly, and the non-target tasks of that fetch are ignored while the flow returns to the step of acquiring at least two tasks to be executed based on the distributed timing task framework. For example, at least two tasks to be executed carrying annotation information (a task identifier and a parallel execution type identifier) are acquired based on the distributed timing task framework; the parallel execution type is determined to be the second type, the maximum queue execution number is determined to be 5, and the queue idle number is determined to be 2. If 4 tasks are acquired in the second fetch, then since 4 is less than 5, the number of tasks to be written into the queue is 2, so 2 of the 4 tasks are selected as target tasks and written into the queue; the other tasks besides the 2 target tasks are ignored, the flow returns to the step of acquiring at least two tasks to be executed based on the distributed timing task framework, and the tasks in the queue are executed based on the thread pool until all have been executed, with fetching and execution proceeding in parallel.
In the embodiment of the invention, the step of acquiring at least two tasks to be executed based on the distributed timing task framework and the step of executing the tasks in the queue based on the thread pool are performed in parallel. In the prior art, the tasks to be executed are first acquired and only then executed; that is, the acquisition step and the execution step are performed serially.
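The fetch/execute overlap described above can be sketched with a producer thread feeding a bounded buffer; the batch count, buffer size, and callback shapes are illustrative assumptions rather than details fixed by the patent:

```python
import queue
import threading

def pipeline(fetch, execute, batches):
    """Run task fetching and task execution in parallel: a producer thread
    keeps fetching batches while the consumer executes earlier ones, so a
    slow batch does not block the next fetch (unlike serial prior art)."""
    buf = queue.Queue(maxsize=2)

    def producer():
        for _ in range(batches):
            buf.put(fetch())             # fetch while earlier batches execute
        buf.put(None)                    # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    done = []
    while (batch := buf.get()) is not None:
        done.extend(execute(task) for task in batch)
    return done

out = pipeline(lambda: [1, 2], lambda x: x * 2, batches=3)
```

The bounded buffer provides backpressure: fetching runs ahead of execution by at most two batches instead of unboundedly.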
Optionally, the method further comprises:
And if the number of the tasks to be executed which are not acquired for the first time is larger than the maximum number of the queue execution, ignoring the tasks to be executed which are not acquired for the first time, and returning to execute the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
For example, if the number of tasks acquired in a non-first fetch is greater than the maximum queue execution number, that fetch is ignored and the flow returns to the step of acquiring at least two tasks to be executed based on the distributed timing task framework. Concretely, at least two tasks to be executed carrying annotation information (a task identifier and a parallel execution type identifier) are acquired based on the distributed timing task framework; the parallel execution type is determined to be the second type according to the identifier, and the maximum queue execution number is determined to be 5 from the annotation information. If 8 tasks to be executed are acquired in the second fetch, then since 8 is greater than 5, the 8 tasks acquired in the second fetch are ignored and the flow returns to the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
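The complete second-type rule for a non-first fetch can be condensed into one function; the assumption that 3 tasks are still executing (giving the idle number 2 used in the document's examples) is ours for illustration:

```python
def dispatch_refetch(tasks, queue_max, executing):
    """Second-type rule for a batch fetched after the first one:
    - a batch larger than the maximum queue execution number is ignored
      entirely and the caller returns to the fetch step;
    - otherwise, only as many tasks as there are idle queue slots are
      selected as target tasks to write into the queue."""
    if len(tasks) > queue_max:
        return []                        # ignore the whole batch; refetch
    idle = queue_max - executing         # queue idle number
    return tasks[:idle]                  # target tasks to write into the queue
```

With maximum 5 and 3 tasks executing, a second fetch of 4 tasks yields 2 targets, while a second fetch of 8 tasks is dropped, matching the two worked examples above.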
According to the technical scheme of this embodiment, at least two tasks to be executed carrying annotation information are acquired based on the distributed timing task framework; a database is queried according to the annotation information to obtain the thread pool corresponding to the tasks; the parallel execution type is determined from the parallel execution type identifier in the annotation information; and the tasks are executed in parallel based on the thread pool and the parallel execution type. This solves the prior-art problem in which, after the distributed timing task framework acquires at least two tasks, they are executed serially, so that complex processing logic, network delay, or other time-consuming conditions in any one task block that task, block the execution of subsequent tasks and the next data fetch, slow task processing, and impair business operation efficiency. Because the acquired tasks are executed in parallel according to the parallel execution type and the thread pool, or because the task-acquisition and task-execution steps are themselves performed in parallel, task execution efficiency is improved.
Example III
Fig. 3 is a schematic structural diagram of a task execution device according to a third embodiment of the present invention. The present embodiment may be applied to a case of task execution based on a distributed timing task framework, where the apparatus may be implemented in a software and/or hardware manner, and the apparatus may be integrated in any device that provides a task execution function, as shown in fig. 3, where the task execution apparatus specifically includes: a task acquisition module 310, a query module 320, and an execution module 330.
The task acquisition module is configured to acquire at least two tasks to be executed based on a distributed timing task framework, where the at least two tasks to be executed carry annotation information, and the annotation information includes: a task identifier;
The query module is used for querying a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and thread pools corresponding to the task identifiers;
And the execution module is used for executing the at least two tasks to be executed in parallel based on the thread pool.
Optionally, the method further comprises:
The information acquisition module is used for acquiring annotation information of at least one type of task before inquiring the database according to the annotation information, wherein the annotation information further comprises: a maximum number of threads;
The thread pool creation module is used for creating a thread pool for the at least one type of task according to the maximum thread number;
and the storage module is used for correspondingly storing the thread pool and the task identifier into a database.
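The information acquisition, thread pool creation, and storage modules above can be sketched together; the in-memory mapping standing in for the database, and the `task_id`/`max_threads` names, are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

POOL_DB = {}   # stands in for the database mapping task identifier -> thread pool

def register_task_type(task_id, max_threads):
    """Thread pool creation + storage modules: size the pool from the
    annotation's maximum thread number and store it under the task identifier."""
    POOL_DB[task_id] = ThreadPoolExecutor(max_workers=max_threads)

def pool_for(annotation):
    """Query module: look the pool up by the annotation's task identifier."""
    return POOL_DB[annotation["task_id"]]

register_task_type("order-sync", 4)      # hypothetical task type and size
result = pool_for({"task_id": "order-sync"}).submit(lambda: 3).result()
```

Keying pools by task identifier isolates task types from one another: a slow task type can exhaust only its own pool, not the pools of other types.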
Optionally, the annotation information further includes: parallel execution type identification;
correspondingly, the execution module is specifically configured to:
determining a parallel execution type according to the parallel execution type identifier;
and executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type.
Optionally, the execution module is specifically configured to:
If the parallel execution type is determined to be the first type according to the parallel execution type identifier, writing the at least two tasks to be executed into a queue;
executing tasks to be executed in the queue in batches based on the thread pool;
and returning to the step of fetching at least two tasks to be executed once all tasks in the queue have been executed.
Optionally, the execution module is specifically configured to:
If the parallel execution type is determined to be the second type according to the parallel execution type identifier, determining the maximum number of queue execution according to annotation information;
Writing the at least two tasks to be executed, which are acquired for the first time, into a queue, and executing the tasks to be executed in the queue based on the thread pool until the tasks to be executed in the queue are all executed;
and returning to the step of fetching at least two tasks to be executed after all tasks in the queue have been executed.
Optionally, the execution module is specifically configured to:
If the parallel execution type is determined to be the second type according to the parallel execution type identifier, determining the maximum number of queue execution according to annotation information;
Determining the idle number of the queue according to the maximum number of queue execution;
If the number of tasks to be executed, which is not acquired for the first time, is smaller than or equal to the maximum number of queue execution, determining the number of tasks to be executed, which are to be written into the queue, according to the idle number of the queue;
Determining target tasks to be executed according to the number of the tasks to be executed of the queues to be written;
And writing the target task to be executed into a queue, and executing the task to be executed in the queue based on the thread pool until the task to be executed in the queue is executed.
Optionally, the execution module is specifically configured to:
and ignoring the non-target task to be executed in the non-first acquired tasks to be executed, and returning to execute the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
Optionally, the execution module is specifically configured to:
And if the number of the tasks to be executed which are not acquired for the first time is larger than the maximum number of the queue execution, ignoring the tasks to be executed which are not acquired for the first time, and returning to execute the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
Optionally, the task obtaining module is specifically configured to:
acquiring task grabbing logic based on a distributed timing task framework;
and grabbing at least two tasks to be executed through grabbing logic.
The product can execute the method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
According to the technical scheme of this embodiment, at least two tasks to be executed carrying annotation information are acquired based on the distributed timing task framework; a database is queried according to the annotation information to obtain the thread pool corresponding to the tasks; and the tasks are executed in parallel based on the thread pool. This solves the prior-art problem in which, after the distributed timing task framework acquires at least two tasks, they are executed serially, so that complex processing logic, network delay, or other time-consuming conditions in any one task block that task, block the execution of subsequent tasks and the next data fetch, slow task processing, and impair business operation efficiency. Executing the acquired tasks in parallel based on the thread pool improves task execution efficiency.
Example IV
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. Fig. 4 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in fig. 4 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in FIG. 4, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include industry standard architecture (Industry Standard Architecture, ISA) bus, micro channel architecture (Micro Channel Architecture, MCA) bus, enhanced ISA bus, video electronics standards association (Video Electronics Standards Association, VESA) local bus, and peripheral component interconnect (Peripheral Component Interconnect, PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory, RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard disk drive"). Although not shown in fig. 4, a disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk such as a compact disc read-only memory (Compact Disc-Read Only Memory, CD-ROM), a digital versatile disc read-only memory (Digital Versatile Disc-Read Only Memory, DVD-ROM), or other optical media, may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any devices (e.g., a network card, a modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. In addition, in the computer device 12 of the present embodiment, the display 24 is not a separate body but is embedded in a mirror surface; when the display surface of the display 24 is not displaying, it is visually integrated with the mirror surface. Moreover, the computer device 12 may also communicate with one or more networks, such as a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), and/or a public network such as the Internet, via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant arrays of independent disks (Redundant Arrays of Independent Disks, RAID) systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing a task execution method provided by an embodiment of the present invention:
acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier;
Inquiring a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and thread pools corresponding to the task identifiers;
And executing the at least two tasks to be executed in parallel based on the thread pool.
Example five
A fifth embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the task execution method as provided by all the embodiments of the present application:
acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier;
Inquiring a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and thread pools corresponding to the task identifiers;
And executing the at least two tasks to be executed in parallel based on the thread pool.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (Hyper Text Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is noted that the above describes only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to those embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (11)
1. A method of performing a task, comprising:
acquiring at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier, which identifies the annotation information added for a task when the task is created;
querying a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and the thread pools corresponding to the task identifiers; and
executing the at least two tasks to be executed in parallel based on the thread pool;
wherein the annotation information further comprises: a parallel execution type identifier;
correspondingly, executing the at least two tasks to be executed in parallel based on the thread pool comprises:
determining a parallel execution type according to the parallel execution type identifier; and
executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type.
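Claim 1 can be read as: each task type carries annotation information (a task identifier plus a parallel execution type identifier), the identifier keys a lookup that resolves the thread pool registered for that task type, and the grabbed tasks are then run in parallel on that pool. A minimal sketch of that flow follows; all class, record, and field names here are illustrative assumptions, not taken from the patent, and a plain map stands in for the database:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: annotation information keyed by task identifier
// resolves a thread pool, and the grabbed tasks run in parallel on it.
public class AnnotatedTaskDispatcher {

    // Stands in for the annotation information carried by each task:
    // a task identifier and a parallel execution type identifier.
    record TaskAnnotation(String taskId, String parallelType) {}

    // Stands in for the database mapping task identifiers to thread pools.
    private final Map<String, ExecutorService> poolsByTaskId;

    AnnotatedTaskDispatcher(Map<String, ExecutorService> poolsByTaskId) {
        this.poolsByTaskId = poolsByTaskId;
    }

    // "Query the database according to the annotation information", then
    // execute the tasks in parallel on the resolved pool and wait for them.
    void dispatch(TaskAnnotation ann, List<Runnable> tasks) throws InterruptedException {
        ExecutorService pool = poolsByTaskId.get(ann.taskId());
        for (Runnable t : tasks) {
            pool.submit(t);
        }
        pool.shutdown(); // simplification: a real dispatcher would keep the pool alive for reuse
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Shutting the pool down after one dispatch is a simplification so the sketch is self-contained; the claimed scheme keeps pools registered per task type across grab cycles.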
2. The method of claim 1, further comprising, prior to querying the database according to the annotation information:
obtaining annotation information of at least one type of task, wherein the annotation information further comprises: a maximum number of threads;
creating a thread pool for the at least one type of task according to the maximum number of threads; and
storing the thread pool and the corresponding task identifier in the database.
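Claim 2's preparation step amounts to a registry: read each task type's maximum thread count from its annotation information, create a pool of that size, and store the (task identifier, thread pool) pair. A hedged sketch under assumed names, with a concurrent map standing in for the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of claim 2: one thread pool per task type, sized by
// the annotated maximum number of threads, stored under the task identifier.
public class ThreadPoolRegistry {

    // Stands in for the database of (task identifier -> thread pool) pairs.
    private final Map<String, ExecutorService> db = new ConcurrentHashMap<>();

    // Create the pool on first registration; later calls for the same
    // task identifier return the already-stored pool.
    ExecutorService register(String taskId, int maxThreads) {
        return db.computeIfAbsent(taskId, id -> Executors.newFixedThreadPool(maxThreads));
    }

    ExecutorService lookup(String taskId) {
        return db.get(taskId);
    }
}
```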
3. The method of claim 1, wherein executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type comprises:
if the parallel execution type is determined to be a first type according to the parallel execution type identifier, writing the at least two tasks to be executed into a queue;
executing the tasks to be executed in the queue in batches based on the thread pool; and
returning to the step of grabbing at least two tasks to be executed until all tasks to be executed in the queue have been executed.
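The first parallel execution type in claim 3 is a drain loop: write the grabbed tasks into a queue, run them batch by batch on the thread pool, and only go back to the grab step once the queue is empty. A minimal sketch, with illustrative names and batch size not specified by the patent:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;

// Illustrative sketch of the first parallel execution type: drain the task
// queue in fixed-size batches, each batch running in parallel on the pool.
public class BatchQueueExecutor {

    private final ExecutorService pool;
    private final int batchSize;

    BatchQueueExecutor(ExecutorService pool, int batchSize) {
        this.pool = pool;
        this.batchSize = batchSize;
    }

    // Each batch runs in parallel; the next batch starts only after the
    // previous one finishes, until the queue is fully drained.
    void drain(Queue<Callable<Void>> queue) throws InterruptedException {
        while (!queue.isEmpty()) {
            List<Callable<Void>> batch = new ArrayList<>();
            while (batch.size() < batchSize && !queue.isEmpty()) {
                batch.add(queue.poll());
            }
            pool.invokeAll(batch); // blocks until the whole batch completes
        }
    }
}
```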
4. The method of claim 1, wherein executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type comprises:
if the parallel execution type is determined to be a second type according to the parallel execution type identifier, determining a maximum queue execution number according to the annotation information;
writing the at least two tasks to be executed that are acquired for the first time into a queue, and executing the tasks to be executed in the queue based on the thread pool until all tasks to be executed in the queue have been executed; and
returning to the step of grabbing at least two tasks to be executed after all tasks to be executed in the queue have been executed.
5. The method of claim 1, wherein executing the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type comprises:
if the parallel execution type is determined to be a second type according to the parallel execution type identifier, determining a maximum queue execution number according to the annotation information;
determining an idle number of the queue according to the maximum queue execution number;
if the number of tasks to be executed that are not acquired for the first time is less than or equal to the maximum queue execution number, determining, according to the idle number of the queue, the number of tasks to be executed to be written into the queue;
determining target tasks to be executed according to the number of tasks to be executed to be written into the queue; and
writing the target tasks to be executed into the queue, and executing the tasks to be executed in the queue based on the thread pool until all tasks to be executed in the queue have been executed.
6. The method as recited in claim 5, further comprising:
ignoring the non-target tasks to be executed among the tasks to be executed that are not acquired for the first time, and returning to the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
7. The method as recited in claim 5, further comprising:
if the number of tasks to be executed that are not acquired for the first time is greater than the maximum queue execution number, ignoring the tasks to be executed that are not acquired for the first time, and returning to the step of acquiring at least two tasks to be executed based on the distributed timing task framework.
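The second parallel execution type (claims 4 through 7) is admission control on a bounded queue: the annotation information fixes a maximum queue execution number, the first grab fills the queue, later grabs are admitted only up to the queue's idle capacity, and a grab larger than the maximum is ignored entirely. The following is a sketch of that admission logic only (no actual execution), with all names assumed rather than taken from the patent:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of claims 4-7: admit newly grabbed tasks into a
// bounded queue according to its idle capacity; ignore the overflow.
public class BoundedQueueAdmission {

    private final int maxQueueExecution; // from annotation information (claims 4-5)
    private int occupied;                // tasks currently held by the queue

    BoundedQueueAdmission(int maxQueueExecution) {
        this.maxQueueExecution = maxQueueExecution;
    }

    // Returns the target tasks actually written to the queue; the caller
    // ignores the rest and returns to the grab step (claims 6 and 7).
    <T> List<T> admit(List<T> grabbed, boolean firstGrab) {
        if (firstGrab) {
            // The first grab fills the queue directly (claim 4).
            occupied = Math.min(grabbed.size(), maxQueueExecution);
            return new ArrayList<>(grabbed.subList(0, occupied));
        }
        if (grabbed.size() > maxQueueExecution) {
            return List.of(); // claim 7: ignore the whole non-first grab
        }
        int idle = maxQueueExecution - occupied;      // claim 5: idle number
        int toWrite = Math.min(idle, grabbed.size()); // tasks to be written
        occupied += toWrite;
        return new ArrayList<>(grabbed.subList(0, toWrite));
    }

    // Called as queued tasks finish, freeing capacity for the next grab.
    void release(int finished) {
        occupied = Math.max(0, occupied - finished);
    }
}
```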
8. The method of claim 1, wherein acquiring at least two tasks to be performed based on a distributed timed task framework comprises:
acquiring task grabbing logic based on the distributed timing task framework; and
grabbing the at least two tasks to be executed through the task grabbing logic.
9. A task execution device, characterized by comprising:
a task acquisition module, configured to acquire at least two tasks to be executed based on a distributed timing task framework, wherein the at least two tasks to be executed carry annotation information, and the annotation information comprises: a task identifier, which identifies the annotation information added for a task when the task is created;
a query module, configured to query a database according to the annotation information to obtain thread pools corresponding to the at least two tasks to be executed, wherein the database comprises: task identifiers of at least one type of task and the thread pools corresponding to the task identifiers; and
an execution module, configured to execute the at least two tasks to be executed in parallel based on the thread pool;
wherein the annotation information further comprises: a parallel execution type identifier;
correspondingly, the execution module is specifically configured to: determine a parallel execution type according to the parallel execution type identifier; and execute the at least two tasks to be executed in parallel based on the thread pool and the parallel execution type.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1-8.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110524439.4A CN113238843B (en) | 2021-05-13 | 2021-05-13 | Task execution method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113238843A CN113238843A (en) | 2021-08-10 |
CN113238843B true CN113238843B (en) | 2024-07-16 |
Family
ID=77134144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110524439.4A Active CN113238843B (en) | 2021-05-13 | 2021-05-13 | Task execution method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113238843B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113849292A (en) * | 2021-11-30 | 2021-12-28 | 天聚地合(苏州)数据股份有限公司 | Timed task execution method and device, storage medium and equipment |
CN114327872B (en) * | 2021-12-14 | 2024-05-31 | 特赞(上海)信息科技有限公司 | Multimedia asynchronous processing method and device |
CN114721791A (en) * | 2022-03-10 | 2022-07-08 | 浙江大华技术股份有限公司 | Task scheduling method, electronic device, and computer-readable storage medium |
CN115601195B (en) * | 2022-10-17 | 2023-09-08 | 桂林电子科技大学 | Transaction bidirectional recommendation system and method based on real-time label of power user |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625329A (en) * | 2020-05-18 | 2020-09-04 | 北京达佳互联信息技术有限公司 | Task allocation method and device, electronic equipment, server and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8209702B1 (en) * | 2007-09-27 | 2012-06-26 | Emc Corporation | Task execution using multiple pools of processing threads, each pool dedicated to execute different types of sub-tasks |
US8201176B2 (en) * | 2008-08-06 | 2012-06-12 | International Business Machines Corporation | Detecting the starting and ending of a task when thread pooling is employed |
CN102360310B (en) * | 2011-09-28 | 2014-03-26 | 中国电子科技集团公司第二十八研究所 | Multitask process monitoring method in distributed system environment |
US9250953B2 (en) * | 2013-11-12 | 2016-02-02 | Oxide Interactive Llc | Organizing tasks by a hierarchical task scheduler for execution in a multi-threaded processing system |
CN106095585B (en) * | 2016-06-22 | 2019-08-30 | 中国建设银行股份有限公司 | Task requests processing method, device and enterprise information system |
CN107341054B (en) * | 2017-06-29 | 2020-06-16 | 广州市百果园信息技术有限公司 | Task execution method and device and computer readable storage medium |
CN109753354B (en) * | 2018-11-26 | 2024-07-05 | 平安科技(深圳)有限公司 | Processing method and device for streaming media task based on multithreading and computer equipment |
CN110806933B (en) * | 2019-11-05 | 2022-06-10 | 中国建设银行股份有限公司 | Batch task processing method, device, equipment and storage medium |
CN112148493A (en) * | 2020-09-30 | 2020-12-29 | 武汉中科通达高新技术股份有限公司 | Streaming media task management method and device and data server |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625329A (en) * | 2020-05-18 | 2020-09-04 | 北京达佳互联信息技术有限公司 | Task allocation method and device, electronic equipment, server and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113238843A (en) | 2021-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113238843B (en) | Task execution method, device, equipment and storage medium | |
CN105787077B (en) | Data synchronization method and device | |
US10423442B2 (en) | Processing jobs using task dependencies | |
US10175954B2 (en) | Method of processing big data, including arranging icons in a workflow GUI by a user, checking process availability and syntax, converting the workflow into execution code, monitoring the workflow, and displaying associated information | |
US10324931B2 (en) | Dynamic combination of processes for sub-queries | |
US10496659B2 (en) | Database grouping set query | |
CN110287146B (en) | Method, device and computer storage medium for application download | |
CN110673959A (en) | System, method and apparatus for processing tasks | |
CN112613964A (en) | Account checking method, account checking device, account checking equipment and storage medium | |
CN113760242B (en) | Data processing method, device, server and medium | |
CN112307065B (en) | Data processing method, device and server | |
CN113760920B (en) | Data synchronization method and device, electronic equipment and storage medium | |
CN109582445A (en) | Message treatment method, device, electronic equipment and computer readable storage medium | |
CN105786917B (en) | Method and device for concurrent warehousing of time series data | |
CN112131248B (en) | Data analysis method, device, equipment and storage medium | |
CN112818204B (en) | Service processing method, device, equipment and storage medium | |
CN112527527B (en) | Message queue consumption speed control method, device, electronic device and medium | |
CN110837412B (en) | Method, device, equipment and storage medium for judging operation ready state | |
CN114238391A (en) | Data paging query method and device, electronic equipment and storage medium | |
CN112364268A (en) | Resource acquisition method and device, electronic equipment and storage medium | |
CN112286922A (en) | Data cleaning method, device, equipment and storage medium | |
CN113407331B (en) | A task processing method, device and storage medium | |
CN116610725A (en) | Entity enhancement rule mining method and device applied to big data | |
CN111262727B (en) | Service capacity expansion method, device, equipment and storage medium | |
CN111405015B (en) | Data processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||