Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to fall within the scope of the invention.
Fig. 1 is a schematic structural diagram of a task scheduling system based on data processing according to an embodiment of the present invention, where the task scheduling system is used for scheduling tasks. As shown in fig. 1, the task scheduling system may include a scheduling server cluster 10 and at least one execution server 11. The scheduling server cluster 10 comprises at least one scheduling server and may be used to schedule timing tasks to each execution server 11 for execution. The execution server 11 may be configured to execute a timing task scheduled by the scheduling server cluster and to report its own resource occupancy information to the scheduling server cluster 10 through heartbeat information at a preset time interval, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output (I/O) rate.
In one embodiment, when the scheduling server cluster 10 detects that the system time is the execution time corresponding to a target timing task, it obtains the pre-stored load ordering result of each execution server 11 from a storage device, where the load ordering result is obtained by ordering the execution servers 11 by load rate from smallest to largest, and the load rates are obtained from the resource occupancy information of each execution server. Further, the scheduling server cluster 10 may determine the task type of the timing task based on its task information. If the task type is a stand-alone task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server. If the task type is a distributed task, the number of instances m (m is an integer greater than 0) corresponding to the timing task is determined, the m top-ranked second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. In this way, timing tasks can be scheduled according to the load ordering result of the execution servers and the type of each timing task, which improves the efficiency of timing task scheduling.
Referring to fig. 2, fig. 2 is a flow chart of a task scheduling method based on data processing, which is applied to and executed by a scheduling server cluster. As shown in the drawing, the task scheduling method based on data processing may include the following steps:
201: When it is detected that the system time is the execution time corresponding to a target timing task, the pre-stored load ordering result of each execution server is obtained from a storage device. The target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers by load rate from smallest to largest, and the load rates are obtained from the resource occupancy information of each execution server.
202: Determine the task type of the target timing task based on the task information of the target timing task.
In one embodiment, a developer may pre-configure at least one timing task. After the configuration is completed, the scheduling server cluster may store the execution time of each timing task and the task information corresponding to each timing task in a storage medium, where the storage medium includes, but is not limited to, a database, a disk cache, a file, and the like. A timing task may be a stand-alone task or a distributed task, and the task information includes a task identifier (such as a task number) and a task type (such as stand-alone task or distributed task).
Further, the scheduling server cluster may monitor the execution time of each pre-configured timing task; when the execution time of any timing task is the same as the current system time, that timing task is determined to be the target timing task, and the task information corresponding to the target timing task stored in the storage medium is parsed to determine the task type of the target timing task.
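The detection described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the task identifiers, field names, and timestamp format are hypothetical assumptions.

```python
# Pre-configured timing tasks stored by the scheduling server cluster:
# task identifier -> execution time and task information (illustrative fields).
TIMED_TASKS = {
    "task-1": {"exec_time": "2024-01-01 03:00:00", "type": "stand-alone"},
    "task-2": {"exec_time": "2024-01-01 04:00:00", "type": "distributed"},
}

def find_target_tasks(system_time):
    """Return every timing task whose execution time equals the system time."""
    return [task_id for task_id, info in TIMED_TASKS.items()
            if info["exec_time"] == system_time]
```

Each task returned by such a check would then become a target timing task whose task type is parsed from the stored task information.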
203: If the task type is a stand-alone task, determine the first-ranked first target execution server from the execution servers according to the load ordering result, and schedule the target timing task to the first target execution server.
The load ordering result is obtained by ordering the execution servers by load rate from smallest to largest. In one embodiment, each execution server may report its own resource occupancy information to the scheduling server cluster through heartbeat information, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output rate. Further, after the scheduling server cluster receives the resource occupancy information of each execution server, it may calculate the load rate of each execution server from that information, order the execution servers by load rate from smallest to largest to obtain the load ordering result, and store the load ordering result in a storage medium.
Illustratively, assume that the number of execution servers is n (an integer greater than 1) and that the n execution servers are ordered by load rate from smallest to largest; the load ordering result of the execution servers may then be as shown in table 2. Further, if the scheduling server cluster determines, based on the task information of the target timing task, that the task type of the target timing task is a stand-alone task, the first-ranked first target execution server, that is, the execution server L1, may be determined from the n execution servers according to the load ordering result shown in table 2, and the target timing task is scheduled to the execution server L1 so that the execution server L1 executes it.
TABLE 2

Load ordering | Execution server number
1 | L1
2 | L2
3 | L3
… | …
n | Ln
204: If the task type is a distributed task, determine the number of instances m corresponding to the target timing task, determine the m top-ranked second target execution servers from the execution servers according to the load ordering result, and schedule the target timing task to the second target execution servers. Here, m is an integer greater than 0.
In one embodiment, if the task type of the target timing task is a distributed task, the task information of the target timing task may be parsed to determine the number of instances m of the target timing task, and the number of execution servers n is checked. If n is greater than or equal to m, the m top-ranked second target execution servers may be determined from the n execution servers according to the load ordering result. Further, on the principle that one instance corresponds to one second target execution server, the target timing task is scheduled to the m second target execution servers.
Alternatively, if n is smaller than m, all n execution servers may be determined to be second target execution servers. In that case, n of the m instances of the target timing task are scheduled to the n execution servers for execution, and the remaining (m-n) instances are scheduled after the first n instances are detected to have finished executing.
Illustratively, assume that n is an integer greater than 5, that the load ordering result of the n execution servers is as shown in table 2, and that the task type of the target timing task is a distributed task. In this case, if the scheduling server cluster parses the task information of the target timing task and determines that the number of instances of the target timing task is m=3, the 3 top-ranked second target execution servers can be determined from the n execution servers according to the load ordering result, namely the execution server L1, the execution server L2, and the execution server L3. Further, on the principle that one instance corresponds to one second target execution server, the scheduling server cluster may split the target timing task into 3 subtasks, each corresponding to one instance, and schedule the 3 subtasks to the execution servers L1, L2, and L3, respectively.
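The selection logic of steps 203 and 204 can be sketched as follows, assuming that the load ordering result is already sorted by load rate from smallest to largest (as in table 2). The function name and the string task types are illustrative assumptions, not the claimed implementation.

```python
def select_target_servers(load_ordering, task_type, m=1):
    """Pick target execution servers from a load ordering result."""
    if task_type == "stand-alone":
        # Stand-alone task: only the first-ranked first target execution server.
        return load_ordering[:1]
    # Distributed task: the m top-ranked second target execution servers.
    # If n < m, this returns all n servers; the remaining (m - n) instances
    # would then be scheduled after these instances finish, as described above.
    return load_ordering[:m]
```

For the table 2 ordering, a stand-alone task lands on L1, while a distributed task with m=3 is split across L1, L2, and L3.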
In the embodiment of the invention, when the scheduling server cluster detects that the system time is the execution time corresponding to the target timing task, the load ordering result of each execution server is obtained, and the task type of the target timing task is determined. If the task type is a stand-alone task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server; if the task type is a distributed task, the number of instances m corresponding to the target timing task is determined, the m top-ranked second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. The invention is thus beneficial to improving task scheduling efficiency.
Referring to fig. 3, fig. 3 is a flow chart of another task scheduling method based on data processing according to an embodiment of the present invention, where the method is applied to a server. As shown in the drawing, the task scheduling method based on data processing may include the following steps:
301: Acquire the resource occupancy information reported by each execution server through heartbeat information at a preset time interval. The resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output rate.
302: Determine the load rate corresponding to each execution server according to a pre-configured load rate algorithm and the resource occupancy information.
303: Order the execution servers by load rate from smallest to largest to obtain the load ordering result of the execution servers, and store the load ordering result in the storage device.
In one embodiment, the resource occupancy information may include memory information, CPU utilization, and disk input/output rate, where the memory information may include the memory occupancy of the execution server. Denote the load rate of an execution server as p, its memory occupancy as a, its CPU utilization as b, and its disk input/output rate as c, and preconfigure the weights of the three components a, b, and c as k1, k2, and k3, respectively. A load rate algorithm may then be configured as the weighted sum:
p = k1*a + k2*b + k3*c
If there are n execution servers, the load rate of each of the n execution servers can be calculated in turn from its resource occupancy information and the load rate algorithm. The execution servers are then ordered on the principle that the smaller the load rate, the earlier the rank (and the larger the load rate, the later the rank), yielding the load ordering result of the execution servers, which is stored in the storage device.
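Steps 301 to 303 can be sketched as follows, under the assumption that the load rate is the weighted sum p = k1*a + k2*b + k3*c of memory occupancy (a), CPU utilization (b), and disk I/O rate (c); the concrete weight values below are illustrative only.

```python
K1, K2, K3 = 0.4, 0.4, 0.2  # assumed weights for a, b, c

def load_rate(occupancy):
    """Weighted-sum load rate from one server's resource occupancy info."""
    return (K1 * occupancy["memory"]
            + K2 * occupancy["cpu"]
            + K3 * occupancy["disk_io"])

def load_ordering(servers):
    """Order execution servers by load rate from smallest to largest (table 2)."""
    return sorted(servers, key=lambda s: load_rate(s["occupancy"]))
```

Because `sorted` is ascending by default, the least-loaded server ranks first, matching the ordering principle above.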
304: When it is detected that the system time is the execution time corresponding to the target timing task, the pre-stored load ordering result of each execution server is obtained from the storage device.
305: Determine the task type of the target timing task based on the task information of the target timing task.
306: If the task type is a stand-alone task, determine the first-ranked first target execution server from the execution servers according to the load ordering result, and schedule the target timing task to the first target execution server. For the specific implementation of steps 304 to 306, reference may be made to the descriptions of steps 201 to 203 in the above embodiments, which are not repeated here.
In one embodiment, after determining the first-ranked first target execution server from the execution servers according to the load ordering result, the scheduling server cluster may further detect whether a timing task identical to the target timing task is running on the first target execution server. If so, it identifies the preset scheduling policy identifier corresponding to the target timing task, finds the target preset scheduling policy corresponding to that identifier from at least one preset scheduling policy, and schedules the target timing task according to the target preset scheduling policy.
The target preset scheduling policy may be a queuing policy, an overlay policy, or a discard-subsequent-task policy. Under the queuing policy, when a task identical to the target timing task is running on the first target execution server, the target timing task waits in a queue and is scheduled to the first target execution server once the identical task is detected to have finished executing. Under the overlay policy, the target timing task is scheduled directly to the first target execution server for execution, overriding the running identical task. Under the discard-subsequent-task policy, the unexecuted portion of the running identical task (hereinafter the remaining portion) is discarded, and the remaining portion of the target timing task is scheduled to the first target execution server for execution.
In one embodiment, each timing task corresponds to a service, and a developer may configure different scheduling policies in advance according to the service requirement of each timing task. For example, for a timing task 1 that performs a business operation on the full volume of data, the developer may choose according to the task's execution time: if the execution time is long, the scheduling policy may be configured as the overlay policy; if the execution time is short, the scheduling policy may be configured as the discard-subsequent-task policy. If timing task 1 performs a business operation on a non-full volume of data, the scheduling policy may be configured as the queuing policy.
Further, after the scheduling policy of each timing task has been configured, the scheduling server cluster may store the task information of each timing task, the corresponding scheduling policy identifier (i.e., the preset scheduling policy identifier), and the corresponding scheduling policy in association with each other in the storage medium.
Illustratively, suppose the correspondence among timing tasks, preset scheduling policy identifiers, and preset scheduling policies is as shown in table 3, and the target timing task is timing task 1. In this case, when the scheduling server cluster detects that the first target execution server is running a timing task identical to timing task 1, it determines from the correspondence shown in table 3 that the preset scheduling policy identifier corresponding to timing task 1 is preset scheduling policy identifier 1, and thus finds that the target preset scheduling policy among the at least one preset scheduling policy is preset scheduling policy 1. Further, the scheduling server cluster may schedule timing task 1 according to preset scheduling policy 1.
TABLE 3

Timing task | Preset scheduling policy identifier | Preset scheduling policy
Timing task 1 | Preset scheduling policy identifier 1 | Preset scheduling policy 1
Timing task 2 | Preset scheduling policy identifier 2 | Preset scheduling policy 2
In one embodiment, if the target preset scheduling policy is the overlay policy, the scheduling server cluster may delete the timing task running on the first target execution server that is identical to the target timing task, and schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task in place of the running identical task.
In one embodiment, if the target preset scheduling policy is the queuing policy, the scheduling server cluster may detect whether the timing task identical to the target timing task has finished executing, and if so, schedule the target timing task to the first target execution server.
In one embodiment, if the target preset scheduling policy is the discard-subsequent-task policy, the scheduling server cluster may discard the unexecuted portion of the identical running task (i.e., discard its remainder) and schedule the remaining portion of the target timing task to the first target execution server for execution.
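The three preset scheduling policies can be sketched as follows. The server model (a dict holding one running task and a waiting queue) and the policy names are illustrative assumptions; in this simplified model, both the overlay and discard policies replace the running task, and they differ only in how the in-progress work is treated, as described above.

```python
def apply_policy(policy, server, task):
    """Apply a preset scheduling policy when an identical task is running."""
    if policy == "overlay":
        # Delete the running identical task and schedule the target task.
        server["running"] = task
    elif policy == "queuing":
        # Wait until the identical task finishes, then execute the target task.
        server["queue"].append(task)
    elif policy == "discard":
        # Drop the unexecuted remainder of the identical task and schedule
        # the remaining portion of the target task in its place.
        server["running"] = task
    return server
```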
307: If the task type is a distributed task, determine the number of instances m corresponding to the target timing task, determine the m top-ranked second target execution servers from the execution servers according to the load ordering result, and schedule the target timing task to the second target execution servers, where m is an integer greater than 0. For the specific implementation of step 307, reference may be made to the description of step 204 in the above embodiment, which is not repeated here.
In one embodiment, before scheduling the target timing task to the first target execution server, the scheduling server cluster may further send query information to the first target execution server, and if response information from the first target execution server to the query information is received within a preset time, trigger the step of scheduling the target timing task to the first target execution server.
After allocating the first target execution server to the target timing task, the scheduling server cluster may check whether the first target execution server is active. Specifically, query information may be sent to the first target execution server; if the first target execution server returns the corresponding bytecode (i.e., the response information for the query information) within a preset time, it is determined to be available, and the step of scheduling the target timing task to it is triggered. If the first target execution server does not return the corresponding bytecode within the preset time, it is determined to be unavailable; in that case, the scheduling server cluster may allocate another execution server with a small load rate to the target timing task, for example the second-ranked execution server in the load ordering result, and so on, until an available execution server is found to execute the target timing task.
It can be understood that after the scheduling server cluster allocates the second target execution servers to the target timing task, query information may likewise be sent to each second target execution server in the manner described above, and whether the second target execution server returns response information for the query information within the preset time determines its active state. This is not described in detail again here.
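The active-state check with fallback can be sketched as follows. Here `probe` is a hypothetical callable standing in for sending query information and waiting for the response within the preset time; it is an assumption for illustration, not part of the embodiment.

```python
def pick_live_server(load_ordering, probe):
    """Probe candidates in load order; fall back to the next-ranked server
    until one responds within the preset time."""
    for server in load_ordering:
        if probe(server):  # response information arrived in time: available
            return server
    return None  # no available execution server was found
```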
In one embodiment, since the load ordering result is calculated in advance, the actual load rate of each execution server may have changed between the time the load ordering result was calculated and the time the target timing task is scheduled. To handle this, before scheduling the target timing task to the first target execution server, the scheduling server cluster may acquire the resource occupancy information of the first target execution server, determine its load rate at the current system time (i.e., its current actual load rate) according to the load rate algorithm and that resource occupancy information, and trigger the step of scheduling the target timing task to the first target execution server only when the load rate at the current system time is less than a preset load rate threshold.
Similarly, before scheduling the target timing task to a second target execution server, the scheduling server cluster may calculate the load rate of the second target execution server at the current system time, and trigger the step of scheduling the target timing task to the second target execution server when that load rate is less than the preset load rate threshold corresponding to the second target execution server.
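This pre-scheduling freshness check can be sketched as follows, reusing the assumed weighted-sum load rate p = k1*a + k2*b + k3*c; the weight values and the threshold are illustrative assumptions.

```python
K1, K2, K3 = 0.4, 0.4, 0.2  # assumed weights for memory, CPU, disk I/O

def may_schedule(occupancy, threshold=0.8):
    """Recompute the candidate's load rate at the current system time and
    allow scheduling only when it is below the preset threshold."""
    p = (K1 * occupancy["memory"]
         + K2 * occupancy["cpu"]
         + K3 * occupancy["disk_io"])
    return p < threshold
```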
In one embodiment, after determining the m top-ranked second target execution servers from the execution servers according to the load ordering result, the scheduling server cluster may further detect whether a timing task identical to the target timing task is running on a second target execution server. If so, it identifies the preset scheduling policy identifier corresponding to the target timing task, finds the target preset scheduling policy corresponding to that identifier from at least one preset scheduling policy, and schedules the target timing task according to the target preset scheduling policy.
In one embodiment, if the target preset scheduling policy is the overlay policy, the scheduling server cluster may delete the timing task running on the second target execution server that is identical to the target timing task, and schedule the target timing task to the second target execution server, so that the second target execution server executes the target timing task in place of the running identical task.
In one embodiment, if the target preset scheduling policy is the queuing policy, the scheduling server cluster may detect whether the timing task identical to the target timing task has finished executing, and if so, schedule the target timing task to the second target execution server.
In one embodiment, if the target preset scheduling policy is the discard-subsequent-task policy, the scheduling server cluster may discard the unexecuted portion of the identical running task (i.e., discard its remainder) and schedule the remaining portion of the target timing task to the second target execution server for execution.
In one embodiment, to ensure that the same timing task is executed by only one thread (i.e., one execution server) at a time, as a possible implementation, the scheduling server cluster may include a storage server, or have a communication connection established with a storage server. The storage server may be called a lock server, and is configured to store the lock information of each distributed lock, where the lock information includes the Key corresponding to each distributed lock; the Key is the unique identifier of the lock and may be named after the service.
Further, each timing task corresponds to one distributed lock, and the lock information of the distributed lock corresponding to each timing task (hereinafter referred to as preset lock information) may be preconfigured; that is, which distributed lock each timing task acquires is preconfigured.
In this case, when the target timing task needs to be executed, the first target execution server or the second target execution servers are determined through the foregoing steps. If the scheduling server cluster schedules the target timing task to the first target execution server, then before executing the target timing task, the first target execution server needs to obtain the preconfigured lock information of the target timing task and perform a locking operation on the corresponding Key in the lock server. If the locking succeeds, the first target execution server is qualified to execute the target timing task this time and starts executing it; other execution servers at the same point in time cannot execute the target timing task because their locking fails.
Further, when the first target execution server completes the target timing task, it can delete the information in the corresponding Key on the lock server, releasing the locked state so that other execution servers can execute the target timing task again later.
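The locking and release operations can be sketched as follows. A dict stands in for the lock server for illustration only; a real deployment would use a shared store (for example a Redis or ZooKeeper based lock), and the function names here are hypothetical.

```python
LOCKS = {}  # Key -> owner, held by the lock server

def try_lock(key, owner):
    """Acquire the distributed lock for a Key; fails if another execution
    server already holds it, so only one server runs the task at a time."""
    if key in LOCKS:
        return False
    LOCKS[key] = owner
    return True

def unlock(key, owner):
    """Delete the Key's information after the task completes, releasing the
    locked state so the task can be executed again later."""
    if LOCKS.get(key) == owner:
        del LOCKS[key]
```

Note that in a real shared store the check-and-set in `try_lock` must be a single atomic operation (e.g. a set-if-absent primitive), otherwise two servers could both acquire the lock.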
In the embodiment of the invention, when the scheduling server cluster detects that the system time is the execution time corresponding to the target timing task, the load ordering result of each execution server is obtained, and the task type of the target timing task is determined. If the task type is a stand-alone task, the first-ranked first target execution server is determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the first target execution server; if the task type is a distributed task, the number of instances m corresponding to the target timing task is determined, the m top-ranked second target execution servers are determined from the execution servers according to the load ordering result, and the target timing task is scheduled to the second target execution servers. The invention is thus beneficial to improving task scheduling efficiency.
The embodiment of the invention also provides a task scheduling device based on data processing. The device is configured in a scheduling server cluster and includes units for performing the method described in fig. 2 or fig. 3. Specifically, referring to fig. 4, fig. 4 is a schematic block diagram of a task scheduling device based on data processing according to an embodiment of the present invention. The task scheduling device based on data processing in this embodiment includes:
a processing module 40, configured to obtain, when it is detected that the system time is the execution time corresponding to a target timing task, the pre-stored load ordering result of each execution server from a storage device, where the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers by load rate from smallest to largest, and the load rates are obtained from the resource occupancy information of each execution server;
the processing module 40 is further configured to determine the task type of the target timing task based on the task information of the target timing task;
the processing module 40 is further configured to determine the first-ranked first target execution server from the execution servers according to the load ordering result if the task type is a stand-alone task;
a communication module 41, configured to schedule the target timing task to the first target execution server;
the processing module 40 is further configured to determine, if the task type is a distributed task, the number of instances m corresponding to the target timing task, and determine the m top-ranked second target execution servers from the execution servers according to the load ordering result;
the communication module 41 is further configured to schedule the target timing task to the second target execution servers, where m is an integer greater than 0.
In one embodiment, the processing module 40 is further configured to obtain, at a preset time interval, the resource occupancy information reported by each execution server through heartbeat information, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output rate; determine the load rate corresponding to each execution server according to a pre-configured load rate algorithm and the resource occupancy information; and order the execution servers by load rate from smallest to largest to obtain the load ordering result of the execution servers and store it in a storage device.
In one embodiment, the communication module 41 is further configured to send query information to the first target execution server, and if response information from the first target execution server to the query information is received within the preset time, to schedule the target timing task to the first target execution server.
In one embodiment, the processing module 40 is further configured to obtain the resource occupancy information of the first target execution server, and determine, according to the load rate algorithm and that resource occupancy information, the load rate of the first target execution server at the current system time; when that load rate is smaller than the preset load rate threshold, the communication module 41 is triggered to schedule the target timing task to the first target execution server.
In one embodiment, the processing module 40 is further configured to detect whether the first target execution server is running a timing task identical to the target timing task; if so, to identify the preset scheduling policy identifier corresponding to the target timing task, find the target preset scheduling policy corresponding to that identifier from at least one preset scheduling policy, and trigger the communication module 41 to schedule the target timing task according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is the overlay policy, and the communication module 41 is specifically configured to delete the timing task running on the first target execution server that is identical to the target timing task, and schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task.
In one embodiment, the target preset scheduling policy is the queuing policy, and the communication module 41 is specifically configured to detect whether the timing task identical to the target timing task has finished executing; if so, the target timing task is scheduled to the first target execution server.
It should be noted that the functions of each functional module of the data-processing-based task scheduling device described in the embodiments of the present invention may be implemented according to the method in the method embodiment described in fig. 2 or fig. 3; for the specific implementation process, reference may be made to the related description of the method embodiment in fig. 2 or fig. 3, which is not repeated herein.
Referring to fig. 5, fig. 5 is a schematic block diagram of a server provided in an embodiment of the present invention. As shown in fig. 5, the server is a server in the scheduling server cluster and includes a processor 501, a memory 502, and a network interface 503. The processor 501, the memory 502, and the network interface 503 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 5 of this embodiment of the present invention. The network interface 503 is controlled by the processor 501 for sending and receiving messages; the memory 502 is used for storing a computer program, the computer program comprising program instructions; and the processor 501 is used for executing the program instructions stored in the memory 502. Specifically, the processor 501 is configured to invoke the program instructions to execute the following: when the system time is detected to be the execution time corresponding to the target timing task, acquiring a pre-stored load ordering result of the execution servers from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers in ascending order of load rate, and the load rate is obtained according to the resource occupancy information of each execution server; determining a task type of the target timing task based on task information of the target timing task; if the task type is a stand-alone task, determining the first-ranked execution server in the load ordering result as a first target execution server, and scheduling the target timing task to the first target execution server; and if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining the top m execution servers in the load ordering result as second target execution servers, and scheduling the target timing task to the second target execution servers, wherein m is an integer greater than 0.
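The dispatch decision described above can be sketched as a small function. This is an illustrative sketch only: the load ordering is assumed to be ascending (least-loaded first), and the function name, task-type strings, and dictionary keys are assumptions, not part of the disclosure.

```python
def pick_targets(load_ordering, task):
    """Return the execution server(s) a timing task should be scheduled to.

    load_ordering -- server ids sorted in ascending order of load rate.
    task          -- dict with a "type" key and, for distributed tasks,
                     an "instances" count m (assumed structure).
    """
    if task["type"] == "stand-alone":
        # Stand-alone task: only the least-loaded server is chosen.
        return load_ordering[:1]
    if task["type"] == "distributed":
        # Distributed task: the m least-loaded servers, one per instance.
        m = task["instances"]  # m is an integer greater than 0
        return load_ordering[:m]
    raise ValueError(f"unknown task type: {task['type']}")
```

Because the ordering is precomputed and stored, the per-task decision reduces to a prefix slice of the sorted list, which is what makes the scheduling step cheap.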
In one embodiment, the processor 501 is further configured to obtain, at a preset time interval, the resource occupancy information reported by each execution server through heartbeat information, where the resource occupancy information includes at least one of the following: memory information, CPU utilization, and disk input/output rate; to determine, according to a pre-configured load rate algorithm and the resource occupancy information, the load rate corresponding to each execution server; and to sort the execution servers in ascending order of load rate to obtain a load ordering result of the execution servers, and store the load ordering result in the storage device.
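The load-ordering step above can be sketched as follows. The disclosure leaves the load rate algorithm unspecified, so the weighted sum of memory usage, CPU utilization, and disk I/O rate below, and all field names, are assumptions for illustration.

```python
def load_rate(info, weights=(0.4, 0.4, 0.2)):
    """Combine one server's heartbeat-reported resource occupancy into a
    single load rate (illustrative weighting, not from the disclosure)."""
    w_mem, w_cpu, w_io = weights
    return w_mem * info["mem"] + w_cpu * info["cpu"] + w_io * info["disk_io"]

def order_by_load(servers):
    """Return server ids sorted in ascending order of load rate.

    servers -- mapping of server id -> resource occupancy dict, e.g.
               {"s1": {"mem": 0.9, "cpu": 0.9, "disk_io": 0.9}, ...}
    """
    return sorted(servers, key=lambda sid: load_rate(servers[sid]))
```

The result of `order_by_load` is what would be written to the storage device so the scheduling path can read it without recomputing.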
In one embodiment, the network interface 503 is further configured to send query information to the first target execution server; if response information from the first target execution server to the query information is received within a preset time, the processor 501 is triggered to schedule the target timing task to the first target execution server.
In one embodiment, the processor 501 is further configured to obtain resource occupancy information of the first target execution server, and to determine, according to the load rate algorithm and the resource occupancy information of the first target execution server, the load rate of the first target execution server at the system time; when the load rate at the system time is less than a preset load rate threshold, the network interface 503 is triggered to schedule the target timing task to the first target execution server.
In one embodiment, the processor 501 is further configured to detect whether the first target execution server is running a timing task identical to the target timing task; if it is detected that the first target execution server is running a timing task identical to the target timing task, a preset scheduling policy identifier corresponding to the target timing task is identified; the target preset scheduling policy corresponding to the preset scheduling policy identifier is found among at least one preset scheduling policy, and the network interface 503 is triggered to schedule the target timing task according to the target preset scheduling policy.
In one embodiment, the target preset scheduling policy is an overlay policy, and the network interface 503 is specifically configured to delete the timing task identical to the target timing task that is running in the first target execution server, and to schedule the target timing task to the first target execution server, so that the first target execution server executes the target timing task.
In one embodiment, the target preset scheduling policy is a queuing policy, and the network interface 503 is specifically configured to detect whether the timing task identical to the target timing task has finished executing; if it is detected that the identical timing task has finished executing, the target timing task is scheduled to the first target execution server.
It should be appreciated that in embodiments of the present invention, the processor 501 may be a central processing unit (Central Processing Unit, CPU); the processor 501 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 502 may include read-only memory and random access memory, and provides instructions and data to the processor 501. A portion of the memory 502 may also include non-volatile random access memory. For example, the memory 502 may also store device type information.
In a specific implementation, the processor 501, the memory 502, and the network interface 503 described in the embodiments of the present invention may perform the implementation described in the method embodiment of fig. 2 or fig. 3 provided in the embodiments of the present invention, and may also perform the implementation of the data-processing-based task scheduling device described in the embodiments of the present invention, which is not described herein again.
In another embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, implement the following: when the system time is detected to be the execution time corresponding to the target timing task, acquiring a pre-stored load ordering result of the execution servers from a storage device, wherein the target timing task is any one of at least one pre-configured timing task, the load ordering result is obtained by ordering the execution servers in ascending order of load rate, and the load rate is obtained according to the resource occupancy information of each execution server; determining a task type of the target timing task based on task information of the target timing task; if the task type is a stand-alone task, determining the first-ranked execution server in the load ordering result as a first target execution server, and scheduling the target timing task to the first target execution server; and if the task type is a distributed task, determining the number m of instances corresponding to the target timing task, determining the top m execution servers in the load ordering result as second target execution servers, and scheduling the target timing task to the second target execution servers, wherein m is an integer greater than 0.
The computer-readable storage medium may be an internal storage unit of the server according to any of the foregoing embodiments, for example, a hard disk or a memory of the server. The computer-readable storage medium may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), or the like, provided on the server. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the server. The computer-readable storage medium is used to store the computer program and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art will appreciate that all or part of the processes of the methods in the above embodiments may be implemented by a computer program stored on a computer-readable storage medium; when executed, the program may comprise the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure describes only some embodiments of the present invention and is of course not intended to limit the scope of the claims of the present invention; those skilled in the art will understand that all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present invention, still fall within the scope covered by the invention.