
CN110018893B - Task scheduling method based on data processing and related equipment - Google Patents


Info

Publication number
CN110018893B
CN110018893B (application CN201910187622.2A)
Authority
CN
China
Prior art keywords
target
task
timing task
scheduling
execution server
Prior art date
Legal status
Active
Application number
CN201910187622.2A
Other languages
Chinese (zh)
Other versions
CN110018893A (en)
Inventor
邓彪 (Deng Biao)
Current Assignee
Hebei Hexi Network Technology Co ltd
Original Assignee
Hebei Hexi Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hebei Hexi Network Technology Co ltd filed Critical Hebei Hexi Network Technology Co ltd
Priority to CN201910187622.2A priority Critical patent/CN110018893B/en
Publication of CN110018893A publication Critical patent/CN110018893A/en
Priority to PCT/CN2019/117868 priority patent/WO2020181813A1/en
Application granted granted Critical
Publication of CN110018893B publication Critical patent/CN110018893B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention discloses a task scheduling method based on data processing and related equipment. The method, applied in the technical field of data processing, comprises the following steps: when the system time is detected to be the execution time corresponding to a target timing task, the load ordering result of each execution server is obtained, and the task type of the target timing task is determined. If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server; if the task type is a distributed task, the number m of instances corresponding to the target timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. The invention is beneficial to improving the scheduling efficiency of tasks.

Description

Task scheduling method based on data processing and related equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a task scheduling method and related devices based on data processing.
Background
A computer cluster is a computer system in which multiple connected computers (also known as nodes) cooperate to complete a computing job. The nodes are located in the same administrative domain, have a unified administrative policy, and provide services to the user as a whole. The process of distributing job tasks on a computer cluster to multiple nodes may be referred to as scheduling of tasks. At present, a traditional task scheduling engine generally executes tasks in their arrival order and simply adopts first-come-first-served resource preemption, so task scheduling efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a task scheduling method based on data processing and related equipment, which are beneficial to improving the task scheduling efficiency.
In a first aspect, an embodiment of the present invention provides a task scheduling method based on data processing, where the method is applied to a scheduling server cluster, and the method includes:
When the system time is detected to be the execution time corresponding to a target timing task, acquiring a pre-stored load ordering result of each execution server from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers by load rate in ascending order, and each load rate is obtained from the resource occupancy information of the corresponding execution server;
Determining a task type of the target timing task based on task information of the target timing task;
If the task type is a single-machine task, determining the first-ranked first target execution server from the execution servers according to the load ordering result, and scheduling the target timing task to the first target execution server;
And if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining the top-m second target execution servers from the execution servers according to the load ordering result, and scheduling the target timing task to the second target execution servers, wherein m is an integer greater than 0.
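The branching flow above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the `Server`/`Task` records and the `dispatch_task` name are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    load_rate: float  # assumed precomputed from resource occupancy information

@dataclass
class Task:
    task_id: str
    task_type: str       # "standalone" (single-machine) or "distributed"
    instance_count: int  # m; only meaningful for distributed tasks

def dispatch_task(task: Task, servers: list[Server]) -> list[Server]:
    """Pick target server(s) from the load ordering (ascending load rate)."""
    ordering = sorted(servers, key=lambda s: s.load_rate)
    if task.task_type == "standalone":
        return ordering[:1]                   # first-ranked server only
    return ordering[:task.instance_count]     # top-m servers

servers = [Server("L3", 0.7), Server("L1", 0.2), Server("L2", 0.5)]
print([s.name for s in dispatch_task(Task("t1", "standalone", 1), servers)])   # ['L1']
print([s.name for s in dispatch_task(Task("t2", "distributed", 2), servers)])  # ['L1', 'L2']
```

Note that both branches share the same ascending ordering; only the number of servers taken from its head differs.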
In one embodiment, the resource occupancy information reported by each execution server through heartbeat information may further be obtained at a preset time interval, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output rate. The load rate of each execution server is determined according to a pre-configured load rate algorithm and the resource occupancy information, and the execution servers are ordered by load rate in ascending order to obtain the load ordering result, which is stored in the storage device.
In one embodiment, before the target timing task is scheduled to the first target execution server, query information may also be sent to the first target execution server; and if the response information of the first target execution server for the inquiry information is received within the preset time, triggering the step of scheduling the target timing task to the first target execution server.
In one embodiment, before the target timing task is scheduled to the first target execution server, the resource occupancy information of the first target execution server may further be obtained, and its load rate at the current system time determined according to the load rate algorithm and that resource occupancy information; when the load rate at the system time is smaller than a preset load rate threshold, the step of scheduling the target timing task to the first target execution server is triggered.
In one embodiment, after the first-ranked first target execution server is determined from the execution servers according to the load ordering result, it may further be detected whether the first target execution server is running a timing task identical to the target timing task. In that case, scheduling the target timing task to the first target execution server proceeds as follows: if the first target execution server is detected to be running a timing task identical to the target timing task, the preset scheduling policy identifier corresponding to the target timing task is identified; the target preset scheduling policy corresponding to that identifier is then found from at least one preset scheduling policy, and the target timing task is scheduled according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is an overlay policy, and scheduling the target timing task according to it is implemented as follows: the timing task running in the first target execution server that is identical to the target timing task is deleted, and the target timing task is scheduled to the first target execution server so that the first target execution server executes it.
In one embodiment, the target preset scheduling policy is a queuing waiting policy, and scheduling the target timing task according to it is implemented as follows: detecting whether the timing task identical to the target timing task has finished executing; and if so, scheduling the target timing task to the first target execution server.
In a second aspect, an embodiment of the present invention provides a task scheduling device based on data processing, where the task scheduling device based on data processing includes a module for executing the method of the first aspect.
In a third aspect, an embodiment of the present invention provides a server, including a processor, a network interface, and a memory, where the processor, the network interface, and the memory are connected to each other, and the network interface is controlled by the processor to send and receive messages, and the memory is used to store a computer program that supports the server to perform the method described above, where the computer program includes program instructions, and where the processor is configured to invoke the program instructions to perform the method of the first aspect described above.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect described above.
In the embodiment of the invention, when the system time is detected to be the execution time corresponding to the target timing task, the load ordering result of each execution server is obtained, and the task type of the target timing task is determined. If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server; if the task type is a distributed task, the number m of instances corresponding to the target timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. The invention is beneficial to improving the scheduling efficiency of tasks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a task scheduling system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a task scheduling method based on data processing according to an embodiment of the present invention;
FIG. 3 is a flow chart of another task scheduling method based on data processing according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a task scheduling device based on data processing according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a server according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic structural diagram of a task scheduling system based on data processing according to an embodiment of the present invention, where the task scheduling system is used for scheduling tasks. As shown in fig. 1, the task scheduling system may include: a cluster of dispatch servers 10 and at least one execution server 11. Wherein the scheduling server cluster 10 comprises at least one scheduling server, which can be used for scheduling the timing tasks to each execution server 11 for execution; the execution server 11 may be configured to execute a timing task scheduled by the scheduling server cluster, and report, to the scheduling server cluster 10, its own resource occupancy information through heartbeat information at a preset time interval, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk I/O rate.
In one embodiment, when the scheduling server cluster 10 detects that the system time is the execution time corresponding to the target timing task, the pre-stored load ordering result of each execution server 11 is obtained from the storage device, where the load ordering result is obtained by ordering the execution servers 11 by load rate in ascending order, and each load rate is obtained from the resource occupancy information of the corresponding execution server. Further, the scheduling server cluster 10 may determine the task type of the timing task based on its task information; if the task type is a stand-alone task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to it. If the task type is a distributed task, the number m (an integer greater than 0) of instances corresponding to the timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to them. In this way, timing tasks can be scheduled according to the load ordering result of each execution server and the type of each timing task, improving the efficiency of timing task scheduling.
Referring to fig. 2, fig. 2 is a flow chart of a task scheduling method based on data processing, which is applied to a scheduling server cluster and can be executed by the scheduling server cluster, and as shown in the drawing, the task scheduling method based on data processing may include:
201: when the system time is detected to be the execution time corresponding to the target timing task, the load ordering result of each execution server stored in advance is obtained from the storage device. The target timing task is any one of at least one timing task which is configured in advance, the load sequencing result is obtained by sequencing the load rates of all the execution servers from small to large, and the load rates are obtained according to the resource occupancy information of all the execution servers.
202: And determining the task type of the target timing task based on the task information of the target timing task.
In one embodiment, a developer may pre-configure at least one timed task, and after the configuration is completed, the scheduling server cluster may store the execution time of each timed task and the task information corresponding to each timed task in a storage medium, where the storage medium includes, but is not limited to, a database, a disk cache, a file, and the like, and the timed task may include a stand-alone task and a distributed task, and the task information includes a task identifier (such as a task number), and a task type (such as a stand-alone task or a distributed task).
Further, the scheduling server cluster may detect the execution time of each preset timing task, and when the execution time of any timing task is the same as the current system time, determine the any timing task as a target timing task, and parse task information corresponding to the target timing task stored in the storage medium to determine a task type of the target timing task.
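The execution-time check described above can be sketched as follows. This is a simplified illustration at minute granularity; the `timed_tasks` registry and `due_tasks` name are assumptions, and a real scheduler would read the configured execution times from the storage medium.

```python
import datetime

# Assumed registry: task identifier -> configured execution time.
timed_tasks = {
    "task-1": datetime.time(hour=2, minute=30),
    "task-2": datetime.time(hour=3, minute=0),
}

def due_tasks(now: datetime.datetime) -> list[str]:
    """Return ids of timing tasks whose execution time equals the system time."""
    current = now.time().replace(second=0, microsecond=0)
    return [tid for tid, when in timed_tasks.items() if when == current]

print(due_tasks(datetime.datetime(2019, 3, 12, 2, 30)))  # ['task-1']
```

Each id returned here would become a "target timing task" whose task information is then parsed to determine its task type.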
203: If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server.
The load ordering result is obtained by ordering the execution servers by load rate in ascending order. In one embodiment, each execution server may report its resource occupancy information to the scheduling server cluster through heartbeat information, where the resource occupancy information includes at least one of: memory information, CPU utilization, and disk input/output rate. Further, after receiving the resource occupancy information of each execution server, the scheduling server cluster can calculate the load rate of each execution server from that information, order the execution servers by load rate in ascending order to obtain the load ordering result, and store the result in a storage medium.
Illustratively, assuming that the number of execution servers is n (an integer greater than 1), the n execution servers are ordered by load rate in ascending order, and the load ordering result may be as shown in Table 2. Further, if the scheduling server cluster determines, based on the task information of the target timing task, that its task type is a single-machine task, the first-ranked first target execution server, namely execution server L1, may be determined from the n execution servers according to the load ordering result shown in Table 2, and the target timing task is scheduled to execution server L1 so that it executes the target timing task.
TABLE 2
Load ordering    Execution server number
1                L1
2                L2
3                L3
…                …
n                Ln
204: If the task type is a distributed task, the number m of instances corresponding to the target timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. Here m is an integer greater than 0.
In one embodiment, if the task type of the target timing task is a distributed task, the task information of the target timing task may be parsed to determine its number of instances m, and the number n of execution servers is checked. If n is greater than or equal to m, the top-m second target execution servers may be determined from the n execution servers according to the load ordering result. Further, the target timing task is scheduled to the m second target execution servers according to the principle that one instance corresponds to one second target execution server.
Otherwise, if n is smaller than m, all n execution servers may be determined to be second target execution servers. n of the m instances of the target timing task are scheduled to the n execution servers for execution, and the remaining (m - n) instances are scheduled after the first n instances are detected to have finished executing.
Illustratively, assume that n is an integer greater than 5, the load ordering results of the n execution servers are as shown in Table 2, and the task type of the target timing task is a distributed task. In this case, if the scheduling server cluster parses the task information of the target timing task and determines that its number of instances is m = 3, the top-3 second target execution servers can be determined from the n execution servers according to the load ordering result: execution server L1, execution server L2, and execution server L3. Further, the scheduling server cluster may split the target timing task into 3 subtasks according to the principle that one instance corresponds to one second target execution server, each subtask corresponding to one instance, and schedule the 3 subtasks to execution servers L1, L2, and L3 respectively.
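The instance-to-server assignment, including the n < m case described above, can be sketched as follows. This is an illustrative reading of the description, not the patented implementation; `assign_instances` is an assumed name, and deferred instances are simply returned rather than re-scheduled.

```python
def assign_instances(m: int, servers: list[str]) -> tuple[list[tuple[int, str]], list[int]]:
    """Map m task instances onto the top servers of the load ordering.

    Returns (assignments, deferred): each assignment pairs an instance
    index with a server; if there are fewer servers than instances
    (n < m), the remaining m - n instance indices are deferred until
    the first n instances finish.
    """
    n = len(servers)
    if n >= m:
        return [(i, servers[i]) for i in range(m)], []
    assigned = [(i, servers[i]) for i in range(n)]
    deferred = list(range(n, m))  # scheduled after the first n complete
    return assigned, deferred

# m = 3 instances onto the top of a 5-server ordering: one server each.
print(assign_instances(3, ["L1", "L2", "L3", "L4", "L5"])[0])  # [(0, 'L1'), (1, 'L2'), (2, 'L3')]
# m = 3 instances but only n = 2 servers: instance 2 waits.
print(assign_instances(3, ["L1", "L2"]))  # ([(0, 'L1'), (1, 'L2')], [2])
```

Keeping the assignment as explicit (instance, server) pairs preserves the "one instance corresponds to one second target execution server" principle from the text.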
In the embodiment of the invention, when the scheduling server cluster detects that the system time is the execution time corresponding to the target timing task, the load ordering result of each execution server is obtained, and the task type of the target timing task is determined. If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server; if the task type is a distributed task, the number m of instances corresponding to the target timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. The invention is beneficial to improving the scheduling efficiency of tasks.
Referring to fig. 3, fig. 3 is a flow chart of another task scheduling method based on data processing according to an embodiment of the present invention, where the method is applied to a server, and as shown in the drawing, the task scheduling method based on data processing may include:
301: and acquiring the resource occupancy rate information of each execution server reported by each execution server through the heartbeat information according to a preset time interval. The resource occupancy information includes at least one of: memory information, CPU utilization, and disk input/output rate.
302: And determining the load rate corresponding to each execution server according to a pre-configured load rate algorithm and resource occupancy information.
303: Sequencing each execution server according to the sequence of the load rate from small to large to obtain the load sequencing result of each execution server, and storing the load sequencing result into a storage device.
In one embodiment, the resource occupancy information may include memory information, a CPU utilization, and a disk input/output rate, where the memory information may include a memory occupancy of the execution server. Assuming that the load rate of the execution server is denoted as p, the memory occupancy rate is denoted as a, the CPU utilization rate is denoted as b, and the disk input/output rate is denoted as c, the weight indexes among the three components a, b, and c may be preconfigured as k1, k2, and k3, respectively, and a load rate algorithm is configured, where the load rate algorithm has the following formula:
p = k1*a + k2*b + k3*c
If there are n execution servers, the load rate of each can be calculated in turn from its resource occupancy information using the load rate algorithm. The execution servers are then ordered according to the principle that the smaller the load rate, the earlier the position (and the larger the load rate, the later the position), yielding the load ordering result of the execution servers, which is stored in the storage device.
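The load-rate computation and ordering just described can be sketched as follows. The weighted-sum form follows the formula above; the concrete weight values k1, k2, k3 are assumptions, since the patent leaves them to configuration.

```python
def load_rate(mem: float, cpu: float, disk_io: float,
              k1: float = 0.4, k2: float = 0.4, k3: float = 0.2) -> float:
    """Load rate p = k1*a + k2*b + k3*c, where a is memory occupancy,
    b is CPU utilization, c is disk input/output rate (all in [0, 1]
    here for illustration; the weights are illustrative defaults)."""
    return k1 * mem + k2 * cpu + k3 * disk_io

# server name -> (memory occupancy, CPU utilization, disk I/O rate)
servers = {"L1": (0.2, 0.1, 0.1), "L2": (0.5, 0.6, 0.3), "L3": (0.9, 0.8, 0.7)}
ordering = sorted(servers, key=lambda s: load_rate(*servers[s]))
print(ordering)  # ['L1', 'L2', 'L3']
```

The resulting `ordering` list corresponds to the load ordering result persisted to the storage device, smallest load rate first.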
304: When the system time is detected to be the execution time corresponding to the target timing task, the load ordering results of all the execution servers stored in advance are obtained from the storage device.
305: And determining the task type of the target timing task based on the task information of the target timing task.
306: If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server. For the specific implementation of steps 304 to 306, reference may be made to the description of steps 201 to 203 in the above embodiment, which is not repeated here.
In one embodiment, after determining the first-ranked first target execution server from the execution servers according to the load ordering result, the scheduling server cluster may further detect whether a timing task identical to the target timing task is running on the first target execution server. If so, it identifies the preset scheduling policy identifier corresponding to the target timing task, finds the target preset scheduling policy corresponding to that identifier from at least one preset scheduling policy, and schedules the target timing task according to the target preset scheduling policy.
The target preset scheduling policies may include a queuing waiting policy, an overlay policy, and a discard-subsequent-task policy. Under the queuing waiting policy, when a task identical to the target timing task is running on the first target execution server, the target timing task queues and waits, and is scheduled to the first target execution server once the identical task is detected to have finished. Under the overlay policy, the target timing task is scheduled to the first target execution server directly, replacing the identical running task. Under the discard-subsequent-task policy, the unexecuted portion of the identical running task (hereinafter, the remaining task) is discarded, and the remaining portion of the target timing task is scheduled to the first target execution server for execution.
In one embodiment, each timing task corresponds to a service, and a developer may configure different scheduling policies in advance according to the service requirement of each timing task. For example, for timing task 1, which performs a business operation on the full volume of traffic data, the developer may choose based on the task's execution time: if the execution time is long, the scheduling policy may be configured as the overlay policy; if the execution time is short, the scheduling policy may be configured as the discard-subsequent-task policy. If timing task 1 instead performs a business operation on a non-full volume of data, the scheduling policy may be configured as the queuing waiting policy.
Further, after the scheduling policy configuration for each timing task is completed, the scheduling server cluster may store the task information of each timing task, the corresponding scheduling policy identifier (i.e. the preset scheduling policy identifier), and the corresponding scheduling policy in association with each other in the storage medium.
Illustratively, the correspondence among timing tasks, preset scheduling policy identifiers, and preset scheduling policies is shown in Table 3, and the target timing task is timing task 1. In this case, when the scheduling server cluster detects that the first target execution server is running the same timing task as timing task 1 and determines, from the correspondence shown in Table 3, that the preset scheduling policy identifier corresponding to timing task 1 is preset scheduling policy identifier 1, it can find that the target preset scheduling policy among the at least one preset scheduling policy is preset scheduling policy 1. Further, the scheduling server cluster may schedule timing task 1 according to preset scheduling policy 1.
TABLE 3

Timing task      Preset scheduling policy identifier     Preset scheduling policy
Timing task 1    Preset scheduling policy identifier 1   Preset scheduling policy 1
Timing task 2    Preset scheduling policy identifier 2   Preset scheduling policy 2
In one embodiment, the target preset scheduling policy is an overlay policy, and the scheduling server cluster may delete a timing task running in the first target execution server and identical to the target timing task, and schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task and overlays the running task identical to the target timing task.
In one embodiment, the target preset scheduling policy is a queuing policy, and the scheduling server cluster may detect whether the timing task identical to the target timing task is executed, and if it is detected that the timing task identical to the target timing task is executed, may schedule the target timing task to the first target execution server.
In one embodiment, the target preset scheduling policy is a discard-subsequent-task policy, and the scheduling server cluster may discard the unexecuted subtasks of the task identical to the target timing task (i.e., discard the remainder) and schedule the remaining part of the target timing task to the first target execution server for execution.
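The three policy behaviors just described (overlay, queuing, discard-subsequent-task) can be contrasted in a toy model. Everything here — the `ExecutionServer` class and the policy names — is illustrative, not an API defined by the patent.

```python
from collections import deque

class ExecutionServer:
    """Toy model of an execution server: a set of running tasks plus a wait queue."""
    def __init__(self):
        self.running = set()
        self.waiting = deque()

def apply_policy(policy, server, task):
    """Apply one of the three preset policies when `task` duplicates a running task."""
    if policy == "overlay":
        server.running.discard(task)   # delete the identical running task...
        server.running.add(task)       # ...then schedule the new instance in its place
    elif policy == "queue":
        server.waiting.append(task)    # wait until the running instance finishes
    elif policy == "discard":
        pass                           # drop the later duplicate entirely

srv = ExecutionServer()
srv.running.add("task-1")
apply_policy("queue", srv, "task-1")
print(len(srv.waiting))  # 1
apply_policy("discard", srv, "task-1")
print(len(srv.waiting))  # still 1: the discarded duplicate was never enqueued
```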
307: If the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining a second target execution server m before sequencing from all execution servers according to a load sequencing result, and scheduling the target timing task to the second target execution server. Wherein m is an integer greater than 0. The specific implementation of step 307 may be referred to the description related to step 204 in the above embodiment, which is not repeated here.
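The single-machine versus distributed branching of steps 306–307 amounts to taking either the first-ranked server or the top-m servers from the ascending load ranking. A minimal sketch, with illustrative server names:

```python
def pick_targets(load_ranking, task_type, m=1):
    """Choose execution servers from the ascending load ranking: the first
    server for a single-machine task, the top m for a distributed task.
    Names and types here are assumptions, not taken from the patent."""
    if task_type == "single":
        return load_ranking[:1]
    return load_ranking[:m]

ranking = ["srv-a", "srv-b", "srv-c", "srv-d"]
print(pick_targets(ranking, "single"))            # ['srv-a']
print(pick_targets(ranking, "distributed", m=3))  # ['srv-a', 'srv-b', 'srv-c']
```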
In one embodiment, before the scheduling server cluster schedules the target timing task to the first target execution server, the scheduling server cluster may further send query information to the first target execution server, and if response information of the first target execution server to the query information is received within a preset time, trigger to execute the step of scheduling the target timing task to the first target execution server.
After the scheduling server cluster allocates the first target execution server to the target timing task, it may first judge whether the first target execution server is active. Specifically, query information may be sent to the first target execution server; if the first target execution server returns the corresponding bytecode (i.e., the response information for the query) within a preset time, it may be determined that the first target execution server is available, and the step of scheduling the target timing task to the first target execution server may be triggered. If the first target execution server does not return the corresponding bytecode within the preset time, it is determined that the first target execution server is unavailable; at this point, the scheduling server cluster may reallocate another execution server with a low load rate for the target timing task, for example the second-ranked execution server in the load ordering result, and so on, until an available execution server is found to execute the target timing task.
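The probe-and-fall-back loop just described can be sketched as follows. The `probe` callable stands in for the query/response exchange (it is assumed to return `True` when the server replies within the preset time); the server names are illustrative.

```python
def schedule_with_liveness_check(load_ranking, probe):
    """Walk the ascending load ranking and schedule on the first server
    that answers the query within the preset time."""
    for server in load_ranking:
        if probe(server):
            return server
    return None  # no available execution server was found

# Hypothetical probe results: srv-a misses the deadline, srv-b responds.
alive = {"srv-a": False, "srv-b": True, "srv-c": True}
chosen = schedule_with_liveness_check(["srv-a", "srv-b", "srv-c"],
                                      lambda s: alive.get(s, False))
print(chosen)  # srv-b: srv-a failed the check, so the next-ranked server is used
```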
It can be understood that, after the scheduling server cluster allocates the second target execution server to the target timing task, query information can still be sent to the second target execution server in a similar manner, and whether the second target execution server returns response information for the query within the preset time determines its active state. This is not described in detail here.
In one embodiment, since the load ranking result is calculated in advance, the actual load rate of each execution server may have changed from the time point when the load ranking result is calculated to the time period when the target timing task is scheduled. For this situation, before the scheduling server cluster schedules the target timing task to the first target execution server, the scheduling server cluster may acquire the resource occupancy information of the first target execution server, determine, according to the load factor algorithm and the resource occupancy information of the first target execution server, the load factor (i.e., the current actual load factor) of the first target execution server under the system time, and when the load factor under the system time is less than the preset load factor threshold, trigger to execute the step of scheduling the target timing task to the first target execution server.
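The stale-ranking re-check above reduces to recomputing the load rate from fresh resource occupancy figures and comparing it against the threshold. The patent leaves the load rate algorithm configurable, so the weighted average below is an assumption for illustration:

```python
def current_load_rate(occupancy, weights=None):
    """Load factor at scheduling time from fresh resource occupancy figures.
    Equal weights are assumed when none are configured."""
    weights = weights or {k: 1.0 / len(occupancy) for k in occupancy}
    return sum(occupancy[k] * weights[k] for k in occupancy)

def may_schedule(occupancy, threshold=0.8):
    """Schedule only if the recomputed load rate is below the preset threshold."""
    return current_load_rate(occupancy) < threshold

fresh = {"cpu": 0.30, "memory": 0.51, "disk_io": 0.12}
print(may_schedule(fresh))  # True: (0.30 + 0.51 + 0.12) / 3 = 0.31 < 0.8
```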
Similarly, before the scheduling server cluster schedules the target timing task to the second target execution server, the load rate of the second target execution server at the current system time may likewise be calculated; when this load rate is smaller than the preset load rate threshold corresponding to the second target execution server, the step of scheduling the target timing task to the second target execution server is triggered.
In one embodiment, after determining the second target execution server m before the ordering from the execution servers according to the load ordering result, the scheduling server cluster may further detect whether a timing task identical to the target timing task is running in the second target execution server, if it is detected that the timing task identical to the target timing task is running in the second target execution server, identify a preset scheduling policy identifier corresponding to the target timing task, find a target preset scheduling policy corresponding to the preset scheduling policy identifier from at least one preset scheduling policy, and schedule the target timing task according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is an overlay policy, and the scheduling server cluster may delete a timing task that is running in the second target execution server and is identical to the target timing task, and schedule the target timing task to the second target execution server, so that the second target execution server executes the target timing task and overlays the running task that is identical to the target timing task.
In one embodiment, the target preset scheduling policy is a queuing policy, and the scheduling server cluster may detect whether the timing task identical to the target timing task is executed, and if it is detected that the timing task identical to the target timing task is executed, may schedule the target timing task to the second target execution server.
In one embodiment, the target preset scheduling policy is a discard-subsequent-task policy, and the scheduling server cluster may discard the unexecuted subtasks of the task identical to the target timing task (i.e., discard the remainder) and schedule the remaining part of the target timing task to the second target execution server for execution.
In one embodiment, to ensure that the same timing task can only be executed by one thread (i.e., one execution server) at a time, as a possible implementation, the scheduling server cluster may include a storage server, or the scheduling server cluster may have a communication connection established with a storage server. The storage server, which may be called a lock release server, is configured to store the lock information of each distributed lock. The lock information includes the Key corresponding to each distributed lock, where the Key is the unique identifier of the lock and may be named after the service.
Further, each timing task corresponds to one distributed lock, and the lock information of the distributed lock corresponding to each timing task (hereinafter referred to as preset lock information) may be preconfigured; that is, which distributed lock each timing task acquires is determined in advance.
In this case, when the target timing task needs to be executed, the first target execution server or the second target execution server is determined through steps 301 to 302. If the scheduling server cluster schedules the target timing task to the first target execution server, the first target execution server needs to obtain the preconfigured lock information of the target timing task and perform a locking operation on the corresponding Key in the lock release server before executing the target timing task. If locking succeeds, the first target execution server is qualified to execute the target timing task at this time and starts to execute it. Other execution servers cannot execute the target timing task at the same time because their locking attempts fail.
Further, when the first target execution server completes the target timing task, the information under the corresponding Key in the lock release server can be deleted and the locked state released, so that other execution servers can execute the target timing task again later.
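The Key-based lock/unlock cycle described above can be modeled in miniature. This is an in-process stand-in for the lock release server — a real deployment would use a networked store (e.g., a Redis `SET NX`-style lock); the class and method names are assumptions:

```python
import threading

class LockServer:
    """Minimal in-process stand-in for the lock release server: one Key per
    timed task, held by at most one execution server at a time."""
    def __init__(self):
        self._keys = {}
        self._mutex = threading.Lock()

    def try_lock(self, key, owner):
        with self._mutex:
            if key in self._keys:
                return False          # another execution server holds the lock
            self._keys[key] = owner
            return True

    def unlock(self, key, owner):
        with self._mutex:
            if self._keys.get(key) == owner:
                del self._keys[key]   # release so the task can run again later

locks = LockServer()
print(locks.try_lock("task1-key", "server-1"))  # True: first locker wins
print(locks.try_lock("task1-key", "server-2"))  # False: locking fails
locks.unlock("task1-key", "server-1")
print(locks.try_lock("task1-key", "server-2"))  # True: lock was released
```

Note that a production distributed lock also needs an expiry/lease so a crashed holder cannot block the task forever; that concern is outside this sketch.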
In the embodiment of the invention, when the scheduling server cluster detects that the system time is the execution time corresponding to the target timing task, the load ordering result of each execution server is obtained and the task type of the target timing task is determined. If the task type is a single-machine task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to it; if the task type is a distributed task, the number m of instances corresponding to the target timing task is determined, the top-m second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to them. The invention is beneficial to improving task scheduling efficiency.
The embodiment of the invention also provides a task scheduling device based on data processing. The apparatus includes means for performing the method described in fig. 2 or fig. 3, configured to schedule a server cluster. In particular, referring to fig. 4, a schematic block diagram of a task scheduling device based on data processing according to an embodiment of the present invention is provided. The task scheduling device based on data processing in this embodiment includes:
The processing module 40 is configured to obtain, when detecting that the system time is the execution time corresponding to a target timing task, a pre-stored load ordering result of each execution server from a storage device, where the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers in ascending order of load factor, and the load factor is obtained from the resource occupancy information of each execution server;
A processing module 40, configured to determine a task type of the target timing task based on task information of the target timing task;
the processing module 40 is further configured to determine a first target execution server with a first order from the execution servers according to the load ordering result if the task type is a single task;
A communication module 41 for scheduling the target timing task to the first target execution server;
The processing module 40 is further configured to determine, if the task type is a distributed task, the number m of instances corresponding to the target timing task, and determine, according to the load sequencing result, a second target execution server m before sequencing from the execution servers;
The communication module 41 is further configured to schedule the target timing task to the second target execution server, where m is an integer greater than 0.
In one embodiment, the processing module 40 is further configured to obtain, at a preset time interval, resource occupancy information of each execution server reported by each execution server through heartbeat information, where the resource occupancy information includes at least one of the following: memory information, CPU utilization and disk input/output rate; according to a pre-configured load rate algorithm and the resource occupancy information, determining the load rates corresponding to the execution servers respectively; and sequencing the execution servers according to the sequence of the load rate from small to large to obtain a load sequencing result of the execution servers, and storing the load sequencing result into a storage device.
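The heartbeat-driven ranking step — compute a load rate per server from its reported occupancy, then sort ascending — can be sketched as follows. The unweighted mean of memory, CPU and disk I/O occupancy is an assumption; the patent only requires a pre-configured load rate algorithm.

```python
def rank_servers(heartbeats):
    """Order execution servers by load rate, ascending (lowest load first)."""
    def load_rate(info):
        # Assumed formula: unweighted mean of the three reported metrics.
        return (info["memory"] + info["cpu"] + info["disk_io"]) / 3.0
    return sorted(heartbeats, key=lambda name: load_rate(heartbeats[name]))

# Hypothetical heartbeat-reported occupancy per server (fractions of capacity).
reported = {
    "srv-a": {"memory": 0.70, "cpu": 0.60, "disk_io": 0.20},
    "srv-b": {"memory": 0.20, "cpu": 0.30, "disk_io": 0.10},
    "srv-c": {"memory": 0.40, "cpu": 0.40, "disk_io": 0.40},
}
print(rank_servers(reported))  # ['srv-b', 'srv-c', 'srv-a']
```

The resulting list is what the scheduling server cluster would persist to the storage device as the load ordering result.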
In one embodiment, the communication module 41 is further configured to send query information to the first target execution server; if the response information of the first target execution server to the query information is received within the preset time, the trigger processing module 40 schedules the target timing task to the first target execution server.
In one embodiment, the processing module 40 is further configured to obtain resource occupancy information of the first target execution server, and determine, according to the load factor algorithm and the resource occupancy information of the first target execution server, a load factor of the first target execution server at system time; when the load rate at system time is smaller than a preset load rate threshold, triggering a communication module 41 to schedule the target timing task to the first target execution server.
In one embodiment, the processing module 40 is further configured to detect whether the first target execution server runs the same timing task as the target timing task; if the first target execution server is detected to run the timing task which is the same as the target timing task, a preset scheduling strategy identifier corresponding to the target timing task is identified; the target preset scheduling policy corresponding to the preset scheduling policy identifier is found out from at least one preset scheduling policy, and the communication module 41 is triggered to schedule the target timing task according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is an overlay policy, and the communication module 41 is specifically configured to delete the timing task running in the first target execution server and that is the same as the target timing task, and schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task.
In one embodiment, the target preset scheduling policy is a queuing policy, and the communication module 41 is specifically configured to detect whether the same timing task as the target timing task is executed to complete; and if the timing task which is the same as the target timing task is detected to be executed, the target timing task is scheduled to the first target execution server.
It should be noted that, the functions of each functional module of the task scheduling device based on data processing described in the embodiments of the present invention may be specifically implemented according to the method in the method embodiment described in fig. 2 or fig. 3, and the specific implementation process may refer to the related description of the method embodiment in fig. 2 or fig. 3, which is not repeated herein.
Referring to fig. 5, fig. 5 is a schematic block diagram of a server provided in an embodiment of the present invention, and as shown in fig. 5, the server is a server in a scheduling server cluster, and the server includes a processor 501, a memory 502, and a network interface 503. The processor 501, memory 502, and network interface 503 may be connected by a bus or otherwise, as illustrated in fig. 5 in an embodiment of the present invention. Wherein the network interface 503 is controlled by the processor for sending and receiving messages, the memory 502 is used for storing a computer program, the computer program comprises program instructions, and the processor 501 is used for executing the program instructions stored in the memory 502. Wherein the processor 501 is configured to invoke the program instruction execution: when the system time is detected to be the execution time corresponding to the target timing task, acquiring a pre-stored load ordering result of each execution server from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers according to the order of the load rate from small to large, and the load rate is obtained according to the resource occupancy information of each execution server; determining a task type of the target timing task based on task information of the target timing task; if the task type is a single-machine task, determining a first target execution server with a first ordering from the execution servers according to the load ordering result, and scheduling the target timing task to the first target execution server; and if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining a second target execution server of m before sequencing from the execution servers according to the load sequencing result, and scheduling the target timing 
task to the second target execution server, wherein m is an integer greater than 0.
In one embodiment, the processor 501 is further configured to obtain, at a preset time interval, resource occupancy information of each execution server reported by each execution server through heartbeat information, where the resource occupancy information includes at least one of the following: memory information, CPU utilization and disk input/output rate; according to a pre-configured load rate algorithm and the resource occupancy information, determining the load rates corresponding to the execution servers respectively; and sequencing the execution servers according to the sequence of the load rate from small to large to obtain a load sequencing result of the execution servers, and storing the load sequencing result into a storage device.
In one embodiment, the network interface 503 is further configured to send query information to the first target execution server; if the response information of the first target execution server to the query information is received within the preset time, the triggering processor 501 schedules the target timing task to the first target execution server.
In one embodiment, the processor 501 is further configured to obtain resource occupancy information of the first target execution server, and determine, according to the load factor algorithm and the resource occupancy information of the first target execution server, a load factor of the first target execution server at system time; when the load rate at system time is less than a preset load rate threshold, the network interface 503 is triggered to schedule the target timing task to the first target execution server.
In one embodiment, the processor 501 is further configured to detect whether the first target execution server runs the same timing task as the target timing task; if the first target execution server is detected to run the timing task which is the same as the target timing task, a preset scheduling strategy identifier corresponding to the target timing task is identified; the target preset scheduling policy corresponding to the preset scheduling policy identifier is found from at least one preset scheduling policy, and the network interface 503 is triggered to schedule the target timing task according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is an overlay policy, and the network interface 503 is specifically configured to delete the timing task running in the first target execution server and that is the same as the target timing task, and schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task.
In one embodiment, the target preset scheduling policy is a queuing policy, and the network interface 503 is specifically configured to detect whether the same timing task as the target timing task is executed to complete; and if the timing task which is the same as the target timing task is detected to be executed, the target timing task is scheduled to the first target execution server.
It should be appreciated that, in embodiments of the present invention, the processor 501 may be a central processing unit (Central Processing Unit, CPU); the processor 501 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 502 may include read only memory and random access memory and provide instructions and data to the processor 501. A portion of memory 502 may also include non-volatile random access memory. For example, the memory 502 may also store information of device type.
In a specific implementation, the processor 501, the memory 502 and the network interface 503 described in the embodiments of the present invention may execute the implementation described in the embodiment of the method described in fig. 2 or fig. 3 provided in the embodiments of the present invention, and may also execute the implementation of the task scheduling device based on data processing described in the embodiments of the present invention, which is not described herein again.
In another embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program comprising program instructions that when executed by a processor implement: when the system time is detected to be the execution time corresponding to the target timing task, acquiring a pre-stored load ordering result of each execution server from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers according to the order of the load rate from small to large, and the load rate is obtained according to the resource occupancy information of each execution server; determining a task type of the target timing task based on task information of the target timing task; if the task type is a single-machine task, determining a first target execution server with a first ordering from the execution servers according to the load ordering result, and scheduling the target timing task to the first target execution server; and if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining a second target execution server of m before sequencing from the execution servers according to the load sequencing result, and scheduling the target timing task to the second target execution server, wherein m is an integer greater than 0.
The computer readable storage medium may be an internal storage unit of the server according to any of the foregoing embodiments, for example, a hard disk or a memory of the server. The computer readable storage medium may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), or the like, provided on the server. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the server. The computer-readable storage medium is used to store the computer program and other programs and data required by the server. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random-access Memory (Random Access Memory, RAM), or the like.
The above disclosure presents only some examples of the present invention and is of course not intended to limit its scope; those skilled in the art will understand that all or part of the processes implementing the above embodiments, and equivalent modifications made according to the claims of the present invention, still fall within the scope of the invention.

Claims (10)

1. A task scheduling method based on data processing, wherein the method is applied to a scheduling server cluster, and the method comprises:
When the system time is detected to be the execution time corresponding to the target timing task, acquiring a pre-stored load ordering result of each execution server from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers according to the order of the load rate from small to large, and the load rate is obtained according to the resource occupancy information of each execution server;
Determining a task type of the target timing task based on task information of the target timing task;
If the task type is a single-machine task, determining a first target execution server with a first ordering from the execution servers according to the load ordering result, and scheduling the target timing task to the first target execution server; if the first target execution server runs the timing task which is the same as the target timing task, the scheduling strategy corresponding to the target timing task is a target preset scheduling strategy, and the target preset scheduling strategy is determined based on whether the target timing task performs service operation on the total service data or not; the method comprises the steps that timing tasks of business operation are carried out on the whole business data, a scheduling policy corresponding to the timing tasks with long execution time is an overlay policy, and a scheduling policy corresponding to the timing tasks with short execution time is a follow-up task discarding policy; timing tasks of business operation are carried out aiming at non-full business data, and the corresponding scheduling strategy is a queuing waiting strategy;
And if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining a second target execution server of m before sequencing from the execution servers according to the load sequencing result, and scheduling the target timing task to the second target execution server, wherein m is an integer greater than 0, and one instance corresponds to one second target execution server.
2. The method according to claim 1, wherein the method further comprises:
Acquiring resource occupancy rate information of each execution server reported by each execution server through heartbeat information according to a preset time interval, wherein the resource occupancy rate information comprises at least one of the following: memory information, CPU utilization and disk input/output rate;
according to a pre-configured load rate algorithm and the resource occupancy information, determining the load rates corresponding to the execution servers respectively;
And sequencing the execution servers according to the sequence of the load rate from small to large to obtain a load sequencing result of the execution servers, and storing the load sequencing result into a storage device.
3. The method according to claim 1 or 2, wherein before said scheduling of said target timing task to said first target execution server, the method further comprises:
sending inquiry information to the first target execution server;
And if the response information of the first target execution server for the inquiry information is received within the preset time, triggering the step of scheduling the target timing task to the first target execution server.
4. The method of claim 2, wherein prior to scheduling the target timing task to the first target execution server, the method further comprises:
acquiring the resource occupancy information of the first target execution server, and determining the load rate of the first target execution server in system time according to the load rate algorithm and the resource occupancy information of the first target execution server;
And when the load rate under the system time is smaller than a preset load rate threshold value, triggering and executing the step of scheduling the target timing task to the first target execution server.
5. The method of claim 1, wherein after determining a first target execution server of the first order from the execution servers according to the load order result, the method further comprises:
detecting whether the first target execution server runs a timing task identical to the target timing task or not;
wherein said scheduling said target timing task to said first target execution server comprises:
If the first target execution server is detected to run the timing task which is the same as the target timing task, a preset scheduling strategy identifier corresponding to the target timing task is identified;
Searching a target preset scheduling strategy corresponding to the preset scheduling strategy identification from at least one preset scheduling strategy, and scheduling the target timing task according to the target preset scheduling strategy.
6. The method of claim 5, wherein the target preset scheduling policy is an overlay policy, and wherein scheduling the target timed task according to the target preset scheduling policy comprises:
Deleting the timing task which is running in the first target execution server and is the same as the target timing task, and scheduling the target timing task to the first target execution server so as to facilitate the first target execution server to execute the target timing task.
7. The method of claim 5, wherein the target preset scheduling policy is a queuing policy, and wherein scheduling the target timed task according to the target preset scheduling policy comprises:
detecting whether the timing task identical to the target timing task is executed to finish;
And if the timing task which is the same as the target timing task is detected to be executed, the target timing task is scheduled to the first target execution server.
8. A task scheduling device based on data processing, wherein the device is configured in a scheduling server cluster, the task scheduling device comprising:
the processing module is used for acquiring a pre-stored load sequencing result of each execution server from the storage device when the system time is detected to be the execution time corresponding to the target timing task, wherein the target timing task is any one of at least one pre-configured timing task, the load sequencing result is obtained by sequencing the load rates of each execution server from small to large, and the load rates are obtained according to the resource occupancy information of each execution server;
The processing module is further used for determining the task type of the target timing task based on the task information of the target timing task;
The processing module is further configured to determine a first target execution server with a first order from the execution servers according to the load ordering result if the task type is a single task; if the first target execution server runs the timing task which is the same as the target timing task, the scheduling strategy corresponding to the target timing task is a target preset scheduling strategy, and the target preset scheduling strategy is determined based on whether the target timing task performs service operation on the total service data or not; the method comprises the steps that timing tasks of business operation are carried out on the whole business data, a scheduling policy corresponding to the timing tasks with long execution time is an overlay policy, and a scheduling policy corresponding to the timing tasks with short execution time is a follow-up task discarding policy; timing tasks of business operation are carried out aiming at non-full business data, and the corresponding scheduling strategy is a queuing waiting strategy;
the communication module is used for scheduling the target timing task to the first target execution server;
the processing module is further configured to: if the task type is a distributed task, determine the number m of instances corresponding to the target timing task, and determine, from the execution servers according to the load sequencing result, the top m second target execution servers in the ranking, wherein each instance corresponds to one second target execution server;
the communication module is further configured to schedule the target timing task to the second target execution server, where m is an integer greater than 0.
9. A server comprising a processor and a memory, the processor and the memory being interconnected, wherein the memory is adapted to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
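The scheduling flow of claim 8 can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; all names here (`ExecutionServer`, `TimedTask`, `pick_strategy`, `schedule`) are hypothetical.

```python
# Illustrative sketch of the claim-8 scheduling flow. All identifiers
# are hypothetical and not taken from the patent.
from dataclasses import dataclass
from enum import Enum
from typing import List

class Strategy(Enum):
    OVERWRITE = "overwrite"        # full-data task with a long execution time
    DISCARD_FOLLOWUP = "discard"   # full-data task with a short execution time
    QUEUE_AND_WAIT = "queue"       # task operating on non-full service data

@dataclass
class ExecutionServer:
    name: str
    load_rate: float               # derived from resource occupancy information

@dataclass
class TimedTask:
    task_id: str
    distributed: bool = False      # single task vs. distributed task
    instances: int = 1             # m; only meaningful for distributed tasks
    full_data: bool = False       # operates on the full service data?
    long_running: bool = False    # long execution time?

def pick_strategy(task: TimedTask) -> Strategy:
    """Choose the preset scheduling policy used when the first target
    server is already running an identical timing task."""
    if not task.full_data:
        return Strategy.QUEUE_AND_WAIT
    return Strategy.OVERWRITE if task.long_running else Strategy.DISCARD_FOLLOWUP

def schedule(task: TimedTask, servers: List[ExecutionServer]) -> List[str]:
    """Pick target servers from the load sequencing result
    (load rates sorted in ascending order)."""
    ranked = sorted(servers, key=lambda s: s.load_rate)
    count = task.instances if task.distributed else 1
    return [s.name for s in ranked[:count]]
```

A single task is scheduled to the lowest-load server; a distributed task with m instances is scheduled to the m lowest-load servers, one instance per server.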
CN201910187622.2A 2019-03-12 2019-03-12 Task scheduling method based on data processing and related equipment Active CN110018893B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910187622.2A CN110018893B (en) 2019-03-12 2019-03-12 Task scheduling method based on data processing and related equipment
PCT/CN2019/117868 WO2020181813A1 (en) 2019-03-12 2019-11-13 Task scheduling method based on data processing and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910187622.2A CN110018893B (en) 2019-03-12 2019-03-12 Task scheduling method based on data processing and related equipment

Publications (2)

Publication Number Publication Date
CN110018893A CN110018893A (en) 2019-07-16
CN110018893B true CN110018893B (en) 2024-08-16

Family

ID=67189442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910187622.2A Active CN110018893B (en) 2019-03-12 2019-03-12 Task scheduling method based on data processing and related equipment

Country Status (2)

Country Link
CN (1) CN110018893B (en)
WO (1) WO2020181813A1 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018893B (en) * 2019-03-12 2024-08-16 河北和熙网络科技有限公司 Task scheduling method based on data processing and related equipment
CN110428453B (en) * 2019-07-30 2020-12-15 深圳云天励飞技术有限公司 Data processing method, data processing device, data processing equipment and storage medium
CN110515737A (en) * 2019-09-02 2019-11-29 北京明略软件系统有限公司 Data management task operation method and device
CN110659134A (en) * 2019-09-04 2020-01-07 腾讯云计算(北京)有限责任公司 A data processing method and device applied to an artificial intelligence platform
CN110569120B (en) * 2019-09-11 2022-03-04 华云数据控股集团有限公司 Processing method and device for timing task
CN110704172B (en) * 2019-09-20 2024-03-12 深圳市递四方信息科技有限公司 Cluster system timing task scheduling method and cluster system
CN110704185B (en) * 2019-09-20 2024-03-22 深圳市递四方信息科技有限公司 Cluster system sharding scheduled task scheduling method and cluster system
CN111143053A (en) * 2019-11-15 2020-05-12 杭州涂鸦信息技术有限公司 Scheduling method of timing task, server and storage device
CN111309372A (en) * 2020-01-15 2020-06-19 中国平安财产保险股份有限公司 Timed task execution method and device, computer equipment and storage medium
CN111988429A (en) * 2020-09-01 2020-11-24 深圳壹账通智能科技有限公司 Algorithm scheduling method and system
CN112148458A (en) * 2020-10-10 2020-12-29 腾讯科技(深圳)有限公司 Task scheduling method and device
CN112445577B (en) * 2020-11-30 2023-11-24 广州文远知行科技有限公司 Container adding method, device, terminal equipment and storage medium
CN112346845B (en) * 2021-01-08 2021-04-16 腾讯科技(深圳)有限公司 Method, device and equipment for scheduling coding tasks and storage medium
CN112860394B (en) * 2021-01-21 2024-11-22 平安科技(深圳)有限公司 Scheduled task scheduling execution method, device, electronic device and storage medium
CN112948084B (en) * 2021-03-03 2024-05-10 上海御微半导体技术有限公司 Task scheduling method and system
CN114968508A (en) * 2021-05-06 2022-08-30 中移互联网有限公司 Task processing method, device, equipment and storage medium
CN113608852B (en) * 2021-08-03 2024-07-16 中国科学技术大学 Task scheduling method, scheduling module, reasoning node and collaborative operation system
CN113835852B (en) * 2021-08-26 2024-04-12 东软医疗系统股份有限公司 Task data scheduling method and device
CN113778652A (en) * 2021-09-22 2021-12-10 武汉悦学帮网络技术有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113986534A (en) * 2021-10-15 2022-01-28 腾讯科技(深圳)有限公司 Task scheduling method and device, computer equipment and computer readable storage medium
CN114003316A (en) * 2021-10-29 2022-02-01 平安壹账通云科技(深圳)有限公司 Cluster timing task execution method and device, electronic equipment and storage medium
CN114048033B (en) * 2021-11-15 2025-03-28 中国平安财产保险股份有限公司 Load balancing method, device and computer equipment for batch tasks
CN114265681A (en) * 2021-12-28 2022-04-01 苏州小棉袄信息技术股份有限公司 Routing Strategy Based on XXL-JOB Distributed Task Scheduling System
CN114827157B (en) * 2022-04-12 2024-08-16 北京云思智学科技有限公司 Cluster task processing method, device and system, electronic equipment and readable medium
CN114942838B (en) * 2022-05-26 2025-04-08 中信建投证券股份有限公司 Data access method, device, equipment and storage medium
CN117421107B (en) * 2023-12-14 2024-03-08 江西飞尚科技有限公司 Monitoring platform scheduling method, monitoring platform scheduling system, readable storage medium and computer
CN118502693B (en) * 2024-07-17 2024-12-13 深圳市一恒科电子科技有限公司 A printer document cloud conversion method, system, medium and program product

Citations (1)

Publication number Priority date Publication date Assignee Title
CN108984290A (en) * 2018-08-02 2018-12-11 北京京东金融科技控股有限公司 Method for scheduling task and system

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US20120297216A1 (en) * 2011-05-19 2012-11-22 International Business Machines Corporation Dynamically selecting active polling or timed waits
CN103226467B (en) * 2013-05-23 2015-09-30 中国人民解放军国防科学技术大学 Data parallel processing method, system and load balance scheduler
US10055170B2 (en) * 2015-04-30 2018-08-21 International Business Machines Corporation Scheduling storage unit maintenance tasks in a dispersed storage network
US10042886B2 (en) * 2015-08-03 2018-08-07 Sap Se Distributed resource-aware task scheduling with replicated data placement in parallel database clusters
CN109117259B (en) * 2018-07-25 2021-05-25 北京京东尚科信息技术有限公司 Task scheduling method, platform, device and computer readable storage medium
CN109144699A (en) * 2018-08-31 2019-01-04 阿里巴巴集团控股有限公司 Distributed task dispatching method, apparatus and system
CN110018893B (en) * 2019-03-12 2024-08-16 河北和熙网络科技有限公司 Task scheduling method based on data processing and related equipment

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN108984290A (en) * 2018-08-02 2018-12-11 北京京东金融科技控股有限公司 Method for scheduling task and system

Also Published As

Publication number Publication date
CN110018893A (en) 2019-07-16
WO2020181813A1 (en) 2020-09-17

Similar Documents

Publication Publication Date Title
CN110018893B (en) Task scheduling method based on data processing and related equipment
US8566641B2 (en) Fault tolerant batch processing
CN101882089B (en) Method for processing business conversational application with multi-thread and device thereof
US20170093988A1 (en) Workflow service using state transfer
US9218210B2 (en) Distributed processing system
CN110633135A (en) Asynchronous task allocation method and device, computer equipment and storage medium
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
CN112529711B (en) Transaction processing method and device based on block chain virtual machine multiplexing
US20170078049A1 (en) Freshness-sensitive message delivery
CN110955506A (en) Distributed job scheduling processing method
US20200310828A1 (en) Method, function manager and arrangement for handling function calls
CN111767125B (en) Task execution method, device, electronic equipment and storage medium
CN116302420A (en) Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium
CN111124631A (en) Task processing method and device based on block chain network
CN113242149B (en) Long connection configuration method, apparatus, device, storage medium, and program product
CN108521524B (en) Agent collaborative task management method and device, computer equipment and storage medium
CN114217875B (en) Method, device, equipment and storage medium for processing order
CN110781452A (en) Statistical task processing method and device, computer equipment and storage medium
CN111061548B (en) Safe scanning task scheduling method and scheduler
CN114928636A (en) Interface call request processing method, device, equipment, storage medium and product
CN108920683A (en) A method, device and storage medium for downloading external resources on a cloud computing platform
CN113448710A (en) Distributed application system based on business resources
CN114780217A (en) Task scheduling method and device, computer equipment and medium
CN113806056A (en) Timed task processing method and device, computer equipment and storage medium
CN113760485A (en) Scheduling method, device and equipment of timing task and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240715

Address after: 067300 Yangguang Sijicheng A-4 plot, Chengde Development Zone, Hebei Province, Building 2, 2-402 (office only)

Applicant after: Hebei Hexi Network Technology Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, A building, 1 front Bay Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretarial Co., Ltd.)

Applicant before: PING AN PUHUI ENTERPRISE MANAGEMENT Co.,Ltd.

Country or region before: China

GR01 Patent grant