CN110442434B - Task scheduling method and device, storage medium and server - Google Patents
Task scheduling method and device, storage medium and server
- Publication number
- CN110442434B (application CN201910603430.5A)
- Authority
- CN
- China
- Prior art keywords
- task
- request
- contained
- target request
- sheet
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
Abstract
The invention relates to the technical field of computers and provides a task scheduling method, a task scheduling device, a storage medium, and a server. The method comprises the following steps: acquiring each request sent by a client and recording the time at which each request is acquired; obtaining the tasks contained in each request by using a monad based on functional programming; assigning, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request; and setting a task scheduler that executes the tasks contained in the requests sequentially in ascending order of timestamp. With this arrangement, because the timestamp of the tasks in an earlier request is smaller than the timestamp of the tasks in a later request, the earlier request is processed preferentially after its interrupt returns, without needing to be reassigned to the thread pool for queuing, thereby avoiding delay.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a task scheduling method, a task scheduling device, a storage medium, and a server.
Background
Currently, in a micro-service environment, an asynchronous programming style combined with a thread pool is generally used in place of the traditional request-per-thread model in order to improve CPU utilization. However, this approach has the following problem: if an interrupt occurs in a request that started executing earlier, then after the interrupt returns, the request must be reassigned to the thread pool and queued again, so its processing is delayed.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a task scheduling method, apparatus, storage medium, and server, which allow a request that started executing first to be processed preferentially after its interrupt returns, without being reassigned to a thread pool for queuing, thereby avoiding delay.
In a first aspect of an embodiment of the present invention, a task scheduling method is provided, including:
acquiring each request sent by a client, and recording the time for acquiring each request;
obtaining the tasks contained in each request by using a monad based on functional programming;
assigning, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request;
and setting a task scheduler, and executing the tasks contained in each request sequentially in ascending order of timestamp.
In a second aspect of the embodiment of the present invention, there is provided a task scheduling device, including:
the request acquisition module is used for acquiring each request sent by the client and recording the time for acquiring each request;
the task acquisition module is used for obtaining the tasks contained in each request by using a monad based on functional programming;
the timestamp assignment module is used for assigning, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request;
and the task scheduling module is used for setting a task scheduler and executing the tasks contained in each request sequentially in ascending order of timestamp.
In a third aspect of the embodiments of the present invention, there is provided a computer readable storage medium storing computer readable instructions which, when executed by a processor, implement the steps of the task scheduling method as set forth in the first aspect of the embodiments of the present invention.
In a fourth aspect of the embodiments of the present invention, there is provided a server comprising a memory, a processor and computer readable instructions stored in the memory and executable on the processor, the processor implementing the steps of the task scheduling method as set forth in the first aspect of the embodiments of the present invention when executing the computer readable instructions.
The task scheduling method provided by the embodiment of the invention comprises the following steps: acquiring each request sent by a client and recording the time at which each request is acquired; obtaining the tasks contained in each request by using a monad based on functional programming; assigning, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request; and setting a task scheduler that executes the tasks contained in the requests sequentially in ascending order of timestamp. With this arrangement, because the timestamp of the tasks in an earlier request is smaller than the timestamp of the tasks in a later request, the earlier request is processed preferentially after its interrupt returns, without needing to be reassigned to the thread pool for queuing, thereby avoiding delay.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of one embodiment of a task scheduling method provided by an embodiment of the present invention;
FIG. 2 is a block diagram of one embodiment of a task scheduler provided in accordance with an embodiment of the present invention;
Fig. 3 is a schematic diagram of a server according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention provide a task scheduling method, a task scheduling device, a storage medium, and a server, which ensure that a request executed first is processed preferentially after its interrupt returns, without being reassigned to the thread pool for queuing, thereby avoiding delay.
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an embodiment of a task scheduling method in an embodiment of the present invention includes:
101. Acquiring each request sent by a client, and recording the time for acquiring each request;
After acquiring the requests sent by the client, the server records the time at which each request is acquired, then parses and executes the tasks contained in each request to complete the request response.
102. Obtain the tasks contained in each request by using a monad based on functional programming.
Each time a request is obtained, a monad based on functional programming is used to determine the tasks that the request contains. A monad is an abstract data type in functional programming used to represent a computation rather than data. In programs written in a functional style, monads can be used to organize processes that involve ordered operations, or to define arbitrary control flows (for example, handling concurrency, exceptions, or continuations). Here the monad serves as a container: an item is loaded into the container, its data is processed, and the result is placed into a new monad and returned.
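By way of illustration only (this sketch is not part of the patent disclosure, and the class and method names are hypothetical), the container behaviour described above can be expressed as a minimal monad-like class:

```python
class TaskMonad:
    """Minimal monad-like container: it wraps a value; map and flat_map
    apply a function to the wrapped value and return a new container."""

    def __init__(self, value):
        self.value = value

    def map(self, fn):
        # Apply fn to the wrapped value and re-wrap the result.
        return TaskMonad(fn(self.value))

    def flat_map(self, fn):
        # fn itself returns a TaskMonad, so its result is passed through.
        return fn(self.value)

# Load a value into the container, process it, and get a new container back.
result = TaskMonad(3).map(lambda x: x + 1).flat_map(lambda x: TaskMonad(x * 2))
print(result.value)  # 8
```

Each call produces a fresh container rather than mutating the old one, which is the property the description relies on when it speaks of putting results "into a new monad".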
Specifically, for any target request, the step of obtaining the tasks contained in the target request using a functional-programming monad may include:
(1) wrapping the first task contained in the target request into a monad as a parameter, and passing it to a second task that has a dependency relationship with the first task;
(2) wrapping the second task into the next monad as a parameter, and passing it to a third task that has a dependency relationship with the second task;
(3) continuing in the same way, wrapping each task into a monad as a parameter and passing it to the next task that depends on it, until the last task contained in the target request is wrapped into the final monad as a parameter;
(4) parsing the final monad to obtain the tasks contained in the target request.
Specifically, the map and flatMap functions may be used to store a task (a function) contained in the request into a monad as a parameter; the monad passes the task on to the next function, and map or flatMap is then used again to store the next task as a parameter into another monad. By repeating this process, all tasks contained in the request, together with the dependency relationships between them (that is, their calling and association relationships), can be obtained from the final monad.
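The repeated wrapping described above can be sketched as follows (an illustrative sketch only, not the patent's implementation; all names are hypothetical). Each map/flat_map call creates a new task object that keeps a reference to its predecessor, so the final object is enough to recover the whole request:

```python
class Task:
    """Each map/flat_map call creates a new Task whose `parent` points at
    the task it was derived from, recording the dependency implicitly."""

    def __init__(self, fn, parent=None):
        self.fn = fn          # the work this task performs
        self.parent = parent  # the task this one depends on

    def map(self, fn):
        return Task(fn, parent=self)

    flat_map = map  # in this sketch both record the same dependency link

    def chain(self):
        """Walk the parent pointers back from the final task to recover
        every task of the request, in execution order."""
        node, out = self, []
        while node is not None:
            out.append(node)
            node = node.parent
        return list(reversed(out))

# A request of three dependent tasks; only the final Task is kept,
# yet all three tasks and their order are recoverable from it.
first = Task(lambda: "read input")
last = first.map(lambda: "validate").flat_map(lambda: "write output")
print(len(last.chain()))  # 3
```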
Further, step (4) may include:
(4.1) parsing the final monad to obtain the tasks contained in the target request and the task dependency relationships between them;
(4.2) constructing a task dependency graph of the target request according to those tasks and task dependency relationships.
A task dependency graph corresponds to the traditional graph data structure: each node is associated with other nodes. In the semantics of the task dependency graph, a node corresponds to a task; tasks hold references to other tasks, and the mutual references among multiple tasks represent the dependencies between them, finally forming a graph data structure. For example, under monad semantics, when a task (denoted TaskA) executes map or flatMap, a new task (denoted TaskB) is generated whose parent variable points to TaskA, indicating the dependency.
103. In combination with the time at which each request was acquired, assign the same timestamp to every task contained in a given request.
Next, in combination with the time at which each request was obtained, all tasks contained in the same request are assigned an identical timestamp; the principle is that the earlier a request was obtained, the smaller the timestamp assigned to its tasks.
Specifically, the time at which a request was acquired can be used directly as the timestamp of every task contained in that request, which directly achieves the goal that tasks of earlier-acquired requests receive smaller timestamps.
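A minimal sketch of this direct assignment (names and data shapes are illustrative, not from the patent): the request's acquisition time becomes the timestamp of every one of its tasks.

```python
def assign_timestamps(requests):
    """requests: list of (acquire_time, [task, ...]) pairs in arrival order.
    Every task of a request receives that request's acquisition time as
    its timestamp, so earlier requests get smaller timestamps."""
    stamped = []
    for acquire_time, tasks in requests:
        for task in tasks:
            stamped.append((acquire_time, task))
    return stamped

reqs = [(100.0, ["a-step1", "a-step2"]), (101.5, ["b-step1"])]
print(assign_timestamps(reqs))
# [(100.0, 'a-step1'), (100.0, 'a-step2'), (101.5, 'b-step1')]
```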
Further, if a task dependency graph has been constructed for each request, then after the same timestamp has been assigned to every task contained in each request, the method may further include:
(1) counting, for each request, the number of nodes in its task dependency graph and the number of levels of task dependencies;
(2) adjusting the timestamps of the tasks contained in each request in combination with the number of nodes and the number of dependency levels in its task dependency graph.
After the task dependency graph of each request has been obtained, the number of nodes and the number of dependency levels in each graph can be counted, and the timestamps assigned to the tasks of each request adjusted accordingly. The number of nodes in the task dependency graph is the number of tasks, and the number of levels is the length of the longest chain of mutual references between tasks; for example, if the longest reference chain is TaskA-TaskB-TaskC-TaskD, the corresponding number of levels is 4.
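The two statistics can be computed directly from the parent pointers described earlier (an illustrative sketch with hypothetical names, not the patent's implementation):

```python
class Node:
    """A task node holding a reference to the task it depends on."""
    def __init__(self, parent=None):
        self.parent = parent

def graph_stats(tasks):
    """Return (node count, level count) for one request's dependency graph.
    The node count is simply the number of tasks; the level count is the
    length of the longest parent chain, e.g. TaskA<-TaskB<-TaskC<-TaskD
    gives 4."""
    def depth(t):
        d = 0
        while t is not None:
            d += 1
            t = t.parent
        return d
    return len(tasks), max(depth(t) for t in tasks)

# TaskA <- TaskB <- TaskC <- TaskD: 4 nodes, 4 levels.
a = Node(); b = Node(a); c = Node(b); d = Node(c)
print(graph_stats([a, b, c, d]))  # (4, 4)
```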
Optionally, the adjusting of the timestamps of the tasks contained in each request in combination with the number of nodes and the number of dependency levels in its task dependency graph may include:
(1) if the number of nodes in the task dependency graph of any request exceeds a first threshold and/or the number of dependency levels exceeds a second threshold, adjusting the timestamps of the tasks contained in that request backwards;
(2) if the number of nodes in the task dependency graph of any request is smaller than a third threshold and/or the number of dependency levels is smaller than a fourth threshold, adjusting the timestamps of the tasks contained in that request forwards, where the third threshold is smaller than the first threshold and the fourth threshold is smaller than the second threshold.
The essence of this adjustment is that simpler requests (those with fewer tasks and fewer dependencies among them) are executed preferentially: such requests need less processing time and fewer system resources, so executing them first reduces overall waiting time.
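The threshold rule can be sketched as below. The threshold values and the adjustment step `delta` are illustrative parameters chosen for the example, not values specified by the patent:

```python
def adjust_timestamp(timestamp, node_count, level_count,
                     first, second, third, fourth, delta):
    """Push complex requests later and simple ones earlier.
    Matches the and/or wording loosely by using `or` for both tests."""
    if node_count > first or level_count > second:
        return timestamp + delta   # adjust backwards (later)
    if node_count < third or level_count < fourth:
        return timestamp - delta   # adjust forwards (earlier)
    return timestamp

# A large request (50 tasks, 10 levels) is pushed 5 units later.
print(adjust_timestamp(100.0, 50, 10, 40, 8, 5, 3, 5.0))  # 105.0
# A tiny request (2 tasks, 2 levels) is pulled 5 units earlier.
print(adjust_timestamp(100.0, 2, 2, 40, 8, 5, 3, 5.0))    # 95.0
```

Note that the third and fourth thresholds (5 and 3 here) are smaller than the first and second (40 and 8), as the method requires.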
Optionally, the adjusting of the timestamps of the tasks contained in each request in combination with the number of nodes and the number of dependency levels in its task dependency graph may also include:
(1) for any request, constructing a corresponding request-processing deadline according to the number of nodes and the number of dependency levels in its task dependency graph;
(2) if the current time exceeds the request-processing deadline, adjusting the timestamps of the tasks contained in that request backwards.
For example, the more nodes and/or the more dependency levels a request's task dependency graph has, the longer its processing deadline may be set; in practice, the deadline can be set according to the time typically required to finish processing such a request. Normally a request completes within its deadline and the timestamps of its tasks are unchanged. In some special cases, however, such as when a task errors out, the request cannot finish within its deadline; to prevent other requests from waiting indefinitely for it to complete, the timestamps of its tasks can be adjusted backwards, so that requests that can execute normally are processed first, improving the rationality of the task scheduling.
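A sketch of the deadline mechanism, under the stated assumption that the deadline grows with graph size and depth. The per-node and per-level coefficients and the push-back step are illustrative, not from the patent:

```python
def processing_deadline(acquire_time, node_count, level_count,
                        per_node=0.01, per_level=0.05):
    """Hypothetical deadline: larger and deeper dependency graphs are
    allowed more processing time (coefficients are illustrative)."""
    return acquire_time + node_count * per_node + level_count * per_level

def maybe_push_back(timestamp, deadline, now, delta=1.0):
    """If the request has overrun its deadline (e.g. a task errored out),
    move its tasks' timestamps later so healthy requests run first."""
    return timestamp + delta if now > deadline else timestamp

deadline = processing_deadline(100.0, 10, 4)        # 100.0 + 0.1 + 0.2
print(maybe_push_back(100.0, deadline, now=100.2))  # 100.0 (within deadline)
print(maybe_push_back(100.0, deadline, now=101.0))  # 101.0 (pushed back)
```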
104. Set a task scheduler, and execute the tasks contained in each request sequentially in ascending order of timestamp.
A task scheduler is set up on the server. When multiple requests are received, the tasks triggered by them are executed in ascending order of timestamp: the smaller the timestamp, the higher the execution priority. If there are multiple requests of the same type, the scheduler still follows a first-received, first-processed principle. For example, suppose a type-A request is acquired 3000 times within one minute, and each A request contains five task steps, 1 through 5. The first A request acquired executes its five steps preferentially. Denote the first A request a0001, the second a0002, and the thousandth a1000: when step 4 of a0001 and step 1 of a1000 both need to execute, the steps of a0001 are executed before those of a1000 (because the timestamp of step 4 of a0001 is smaller than the timestamp of step 1 of a1000).
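One common way to realize "smallest timestamp first" is a min-heap priority queue, sketched below (illustrative only; the patent does not prescribe a heap). The a0001/a1000 example above maps directly onto it:

```python
import heapq

def run_scheduler(stamped_tasks):
    """stamped_tasks: iterable of (timestamp, seq, task) triples.
    A min-heap pops the smallest timestamp first; `seq` breaks ties so
    tasks with equal timestamps keep their arrival order."""
    heap = list(stamped_tasks)
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, task = heapq.heappop(heap)
        order.append(task)
    return order

# a0001's steps carry a smaller timestamp than a1000's, so both run first
# even though a1000-step1 was enqueued between them.
pending = [(1.0, 0, "a0001-step4"), (2.0, 1, "a1000-step1"), (1.0, 2, "a0001-step5")]
print(run_scheduler(pending))  # ['a0001-step4', 'a0001-step5', 'a1000-step1']
```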
The task scheduling method provided by the embodiment of the invention comprises the following steps: acquiring each request sent by a client and recording the time at which each request is acquired; obtaining the tasks contained in each request by using a monad based on functional programming; assigning, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request; and setting a task scheduler that executes the tasks contained in the requests sequentially in ascending order of timestamp. With this arrangement, because the timestamp of the tasks in an earlier request is smaller than the timestamp of the tasks in a later request, the earlier request is processed preferentially after its interrupt returns, without needing to be reassigned to the thread pool for queuing, thereby avoiding delay.
It should be understood that the sequence numbers of the steps in the foregoing embodiment do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not limit the implementation of the embodiments of the present invention.
A task scheduling method is mainly described above, and a task scheduling device will be described in detail below.
Referring to fig. 2, an embodiment of a task scheduling device according to an embodiment of the present invention includes:
A request acquisition module 201, configured to acquire each request sent by a client, and record a time for acquiring each request;
a task acquisition module 202, configured to obtain the tasks contained in each request using a monad based on functional programming;
a timestamp assignment module 203, configured to assign, in combination with the time at which each request was acquired, the same timestamp to every task contained in a given request;
and a task scheduling module 204, configured to set a task scheduler and execute the tasks contained in each request sequentially in ascending order of timestamp.
Further, the task acquisition module may include:
a first wrapping unit, configured to wrap the first task contained in any target request into a monad as a parameter and pass it to a second task that has a dependency relationship with the first task;
a second wrapping unit, configured to wrap the second task into the next monad as a parameter and pass it to a third task that has a dependency relationship with the second task;
a third wrapping unit, configured to continue in the same way, wrapping each task into a monad as a parameter and passing it to the next task that depends on it, until the last task contained in the target request is wrapped into the final monad as a parameter;
and a monad parsing unit, configured to parse the final monad and obtain the tasks contained in the target request.
Further, the monad parsing unit may include:
a monad parsing subunit, configured to parse the final monad and obtain the tasks and task dependency relationships contained in the target request;
and a task dependency graph construction subunit, configured to construct the task dependency graph of the target request according to those tasks and task dependency relationships;
The timestamp assignment module may further include:
a statistics unit, configured to count the number of nodes and the number of task-dependency levels in the task dependency graph of the target request;
and a timestamp adjustment unit, configured to adjust the timestamps of the tasks contained in the target request in combination with the number of nodes and the number of task-dependency levels in the task dependency graph of the target request.
Optionally, the timestamp adjustment unit may include:
a first adjustment subunit, configured to adjust the timestamps of the tasks contained in the target request backwards if the number of nodes in the task dependency graph of the target request exceeds a first threshold and/or the number of task-dependency levels exceeds a second threshold;
and a second adjustment subunit, configured to adjust the timestamps of the tasks contained in the target request forwards if the number of nodes in the task dependency graph of the target request is smaller than a third threshold and/or the number of task-dependency levels is smaller than a fourth threshold, where the third threshold is smaller than the first threshold and the fourth threshold is smaller than the second threshold.
Further, the timestamp adjustment unit may include:
a request-processing-deadline construction subunit, configured to construct a corresponding request-processing deadline according to the number of nodes and the number of task-dependency levels in the task dependency graph of the target request;
and a third adjustment subunit, configured to adjust the timestamps of the tasks contained in the target request backwards if the current time exceeds the request-processing deadline.
Embodiments of the present invention also provide a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the steps of any one of the task scheduling methods as represented in fig. 1.
The embodiment of the invention also provides a server, which comprises a memory, a processor and computer readable instructions stored in the memory and capable of running on the processor, wherein the steps of any one of the task scheduling methods shown in fig. 1 are realized when the processor executes the computer readable instructions.
Fig. 3 is a schematic diagram of a server according to an embodiment of the present invention. As shown in fig. 3, the server 3 of this embodiment includes: a processor 30, a memory 31, and computer-readable instructions 32 stored in the memory 31 and executable on the processor 30. When executing the computer-readable instructions 32, the processor 30 implements the steps of the task scheduling method embodiments described above, such as steps 101 to 104 shown in fig. 1. Alternatively, when executing the computer-readable instructions 32, the processor 30 performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of modules 201 to 204 shown in fig. 2.
Illustratively, the computer readable instructions 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to complete the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing a specific function describing the execution of the computer readable instructions 32 in the server 3.
The server 3 may be a computing device such as a smartphone, a notebook computer, a palmtop computer, or a cloud server. The server 3 may include, but is not limited to, the processor 30 and the memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the server 3 and does not constitute a limitation of it; the server may include more or fewer components than illustrated, combine certain components, or use different components. For example, the server 3 may further include input and output devices, network access devices, buses, and so on.
The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 31 may be an internal storage unit of the server 3, such as a hard disk or memory of the server 3. The memory 31 may also be an external storage device of the server 3, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the server 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the server 3. The memory 31 is used to store the computer-readable instructions and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A method for task scheduling, comprising:
acquiring each request sent by a client, and recording the time at which each request is acquired;
obtaining the tasks contained in each request by means of a monad based on functional programming;
allocating, in combination with the time at which each request was acquired, an identical timestamp to each task contained in the same request;
setting a task scheduler, and executing the tasks contained in each request in ascending order of their timestamps;
wherein said obtaining the tasks contained in each request by means of the monad based on functional programming comprises:
for any target request among the requests, packaging a first task contained in the target request as a parameter into a monad, and passing it to a second task that has a dependency relationship with the first task;
packaging the second task as a parameter into a next monad, and passing it to a third task that has a dependency relationship with the second task;
continuing in the same way, packaging each task as a parameter into a monad and passing it to the next task that has a dependency relationship with it, until the last task contained in the target request is packaged as a parameter into a last monad;
and parsing the last monad to obtain each task contained in the target request.
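The chaining in claim 1, where each task is packaged as a parameter into a unit that is then passed to the task depending on it, can be sketched in Python as follows. This is an illustrative reading of the claim, not the patent's implementation; all class and task names are hypothetical.

```python
class TaskMonad:
    """Minimal wrapper: each task is packaged together with the unit
    wrapping its predecessor, so the chain mirrors the dependencies."""
    def __init__(self, task, prev=None):
        self.task = task
        self.prev = prev

    def bind(self, next_task):
        # Package this unit as the parameter of the task that depends on it.
        return TaskMonad(next_task, prev=self)

def parse(last):
    """Unwind the last unit to recover every task in dependency order."""
    tasks = []
    while last is not None:
        tasks.append(last.task)
        last = last.prev
    return list(reversed(tasks))

# task_b depends on task_a, task_c depends on task_b
chain = TaskMonad("task_a").bind("task_b").bind("task_c")
print(parse(chain))  # ['task_a', 'task_b', 'task_c']
```

Parsing the final unit thus yields every task of the target request in dependency order, as the last step of the claim describes.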
2. The task scheduling method according to claim 1, wherein said parsing the last monad to obtain each task contained in the target request comprises:
parsing the last monad to obtain each task contained in the target request and the task dependency relationships;
constructing a task dependency graph of the target request according to the tasks and task dependency relationships contained in the target request;
wherein, after allocating an identical timestamp to each task contained in each request in combination with the time at which each request was acquired, the method further comprises:
counting the number of nodes of the task dependency graph of the target request and the number of levels of the task dependency relationships;
and adjusting the timestamps of the tasks contained in the target request in combination with the number of nodes and the number of dependency levels of the task dependency graph of the target request.
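The two statistics used in claim 2, the node count and the number of dependency levels of the task dependency graph, can be computed as sketched below. This is an assumed representation (edge pairs, longest-chain depth); the patent does not fix a data structure.

```python
from collections import defaultdict
from functools import lru_cache

def graph_stats(edges):
    """Return (node_count, level_count) for a task dependency DAG,
    where edges is a list of (task, dependent_task) pairs."""
    nodes = set()
    children = defaultdict(list)
    indegree = defaultdict(int)
    for a, b in edges:
        nodes.update((a, b))
        children[a].append(b)
        indegree[b] += 1

    @lru_cache(maxsize=None)
    def depth(n):
        # Longest dependency chain starting at n (assumes an acyclic graph).
        return 1 + max((depth(c) for c in children[n]), default=0)

    roots = [n for n in nodes if indegree[n] == 0]
    return len(nodes), max(depth(r) for r in roots)

# a -> b -> d and a -> c: 4 nodes, 3 dependency levels
print(graph_stats([("a", "b"), ("a", "c"), ("b", "d")]))  # (4, 3)
```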
3. The task scheduling method according to claim 2, wherein said adjusting the timestamps of the tasks contained in the target request in combination with the number of nodes and the number of dependency levels of the task dependency graph of the target request comprises:
if the number of nodes of the task dependency graph of the target request exceeds a first threshold and/or the number of dependency levels exceeds a second threshold, adjusting the timestamps of the tasks contained in the target request backwards;
and if the number of nodes of the task dependency graph of the target request is smaller than a third threshold and/or the number of dependency levels is smaller than a fourth threshold, adjusting the timestamps of the tasks contained in the target request forwards, wherein the third threshold is smaller than the first threshold and the fourth threshold is smaller than the second threshold.
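The threshold rule of claim 3 can be sketched as a single adjustment function. The threshold values and the adjustment step are illustrative assumptions; the claim leaves them unspecified.

```python
def adjust_timestamp(ts, node_count, level_count,
                     first=50, second=10, third=5, fourth=2, delta=1.0):
    """Defer tasks of large/deep dependency graphs, advance tasks of
    small/shallow ones. All numeric parameters are assumed values."""
    if node_count > first or level_count > second:
        return ts + delta   # adjust backwards: executed later
    if node_count < third or level_count < fourth:
        return ts - delta   # adjust forwards: executed earlier
    return ts

print(adjust_timestamp(100.0, 80, 3))  # 101.0 (large graph, deferred)
print(adjust_timestamp(100.0, 2, 1))   # 99.0  (small graph, advanced)
```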
4. The task scheduling method according to claim 2, wherein said adjusting the timestamps of the tasks contained in the target request in combination with the number of nodes and the number of dependency levels of the task dependency graph of the target request comprises:
constructing a corresponding request processing deadline according to the number of nodes and the number of dependency levels of the task dependency graph of the target request;
and if the current time exceeds the request processing deadline, adjusting the timestamps of the tasks contained in the target request backwards.
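Claim 4's deadline rule can be sketched as below. How the deadline is derived from the graph size is not specified in the claim, so the per-node and per-level coefficients here are pure assumptions.

```python
import time

def processing_deadline(node_count, level_count,
                        per_node=0.01, per_level=0.05):
    """Derive a request processing deadline from the size of the
    dependency graph; the coefficients are illustrative assumptions."""
    return time.monotonic() + node_count * per_node + level_count * per_level

def maybe_defer(ts, deadline, delta=1.0):
    """If the deadline has already passed, adjust the timestamp backwards."""
    return ts + delta if time.monotonic() > deadline else ts

# A deadline already in the past pushes the task later in the queue.
print(maybe_defer(100.0, time.monotonic() - 1.0))  # 101.0
```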
5. A task scheduling device, comprising:
a request acquisition module, configured to acquire each request sent by a client and record the time at which each request is acquired;
a task acquisition module, configured to obtain the tasks contained in each request by means of a monad based on functional programming;
a timestamp allocation module, configured to allocate, in combination with the time at which each request was acquired, an identical timestamp to each task contained in the same request;
a task scheduling module, configured to set a task scheduler and execute the tasks contained in each request in ascending order of their timestamps;
wherein the task acquisition module comprises:
a first packaging unit, configured to package, for any target request, a first task contained in the target request as a parameter into a monad and pass it to a second task that has a dependency relationship with the first task;
a second packaging unit, configured to package the second task as a parameter into a next monad and pass it to a third task that has a dependency relationship with the second task;
a third packaging unit, configured to continue in the same way, packaging each task as a parameter into a monad and passing it to the next task that has a dependency relationship with it, until the last task contained in the target request is packaged as a parameter into a last monad;
and a monad parsing unit, configured to parse the last monad to obtain each task contained in the target request.
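The task scheduling module's "ascending order of timestamps" behavior can be sketched with a min-heap keyed on the timestamp. This is one plausible realization, not the patented scheduler; the task names are illustrative.

```python
import heapq

def run_in_timestamp_order(task_timestamps):
    """Pop tasks from a min-heap so they execute in ascending
    (small-to-large) timestamp order."""
    heap = [(ts, name) for name, ts in task_timestamps.items()]
    heapq.heapify(heap)
    executed = []
    while heap:
        _, name = heapq.heappop(heap)
        executed.append(name)  # a real scheduler would invoke the task here
    return executed

print(run_in_timestamp_order({"parse": 3.0, "fetch": 1.0, "render": 2.0}))
# ['fetch', 'render', 'parse']
```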
6. A computer readable storage medium storing computer readable instructions which, when executed by a processor, implement the steps of the task scheduling method of any one of claims 1 to 4.
7. A server comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, performs the steps of:
acquiring each request sent by a client, and recording the time at which each request is acquired;
obtaining the tasks contained in each request by means of a monad based on functional programming;
allocating, in combination with the time at which each request was acquired, an identical timestamp to each task contained in the same request;
setting a task scheduler, and executing the tasks contained in each request in ascending order of their timestamps;
wherein said obtaining the tasks contained in each request by means of the monad based on functional programming comprises:
for any target request among the requests, packaging a first task contained in the target request as a parameter into a monad, and passing it to a second task that has a dependency relationship with the first task;
packaging the second task as a parameter into a next monad, and passing it to a third task that has a dependency relationship with the second task;
continuing in the same way, packaging each task as a parameter into a monad and passing it to the next task that has a dependency relationship with it, until the last task contained in the target request is packaged as a parameter into a last monad;
and parsing the last monad to obtain each task contained in the target request.
8. The server according to claim 7, wherein said parsing the last monad to obtain each task contained in the target request comprises:
parsing the last monad to obtain each task contained in the target request and the task dependency relationships;
constructing a task dependency graph of the target request according to the tasks and task dependency relationships contained in the target request;
wherein, after allocating an identical timestamp to each task contained in each request in combination with the time at which each request was acquired, the processor further performs:
counting the number of nodes of the task dependency graph of the target request and the number of levels of the task dependency relationships;
and adjusting the timestamps of the tasks contained in the target request in combination with the number of nodes and the number of dependency levels of the task dependency graph of the target request.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910603430.5A CN110442434B (en) | 2019-07-05 | 2019-07-05 | Task scheduling method and device, storage medium and server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110442434A CN110442434A (en) | 2019-11-12 |
CN110442434B true CN110442434B (en) | 2024-09-13 |
Family
ID=68429381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910603430.5A Active CN110442434B (en) | 2019-07-05 | 2019-07-05 | Task scheduling method and device, storage medium and server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110442434B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108509284A (* | 2018-03-08 | 2018-09-07 | South China University of Technology | A tree-model task management system for functional programming
CN109309712A (* | 2018-09-07 | 2019-02-05 | Ping An Technology (Shenzhen) Co., Ltd. | Data transmission method based on asynchronous interface calls, server, and storage medium
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2387683B (en) * | 2002-04-19 | 2007-03-28 | Hewlett Packard Co | Workflow processing scheduler |
- 2019-07-05: Application CN201910603430.5A filed in China (CN); granted as patent CN110442434B, status Active
Non-Patent Citations (1)
Title |
---|
Petri-net-based resource modeling and simulation for the implementation phase of construction projects; Li Hailing, et al.; Application Research of Computers; Vol. 28, No. 12; pp. 4593-4596 *
Also Published As
Publication number | Publication date |
---|---|
CN110442434A (en) | 2019-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10089142B2 (en) | Dynamic task prioritization for in-memory databases | |
US9875139B2 (en) | Graphics processing unit controller, host system, and methods | |
US8713571B2 (en) | Asynchronous task execution | |
US20180150326A1 (en) | Method and apparatus for executing task in cluster | |
CN106569891B (en) | Method and device for scheduling and executing tasks in storage system | |
CN110308982B (en) | Shared memory multiplexing method and device | |
JP2012511204A (en) | How to reorganize tasks to optimize resources | |
US20150205633A1 (en) | Task management in single-threaded environments | |
CN111352736A (en) | Method and device for scheduling big data resources, server and storage medium | |
JP2013506179A (en) | Execution management system combining instruction threads and management method | |
US8484649B2 (en) | Amortizing costs of shared scans | |
US8458136B2 (en) | Scheduling highly parallel jobs having global interdependencies | |
Becker et al. | Scheduling multi-rate real-time applications on clustered many-core architectures with memory constraints | |
CN112925616A (en) | Task allocation method and device, storage medium and electronic equipment | |
CN111597044A (en) | Task scheduling method and device, storage medium and electronic equipment | |
EP2840513B1 (en) | Dynamic task prioritization for in-memory databases | |
Wang et al. | DDS: A deadlock detection-based scheduling algorithm for workflow computations in HPC systems with storage constraints | |
CN110442434B (en) | Task scheduling method and device, storage medium and server | |
US11360702B2 (en) | Controller event queues | |
CN118819748A (en) | A task scheduling method, scheduling management system and multi-core processor | |
Jungklass et al. | Memopt: Automated memory distribution for multicore microcontrollers with hard real-time requirements | |
Busch et al. | Stable scheduling in transactional memory | |
US9152451B2 (en) | Method of distributing processor loading between real-time processor threads | |
KR20180082560A (en) | Method and apparatus for time-based scheduling of tasks | |
US12045671B2 (en) | Time-division multiplexing method and circuit for arbitrating concurrent access to a computer resource based on a processing slack associated with a critical program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||