
CN102763086A - Distributed computing task processing system and task processing method - Google Patents


Info

Publication number: CN102763086A
Application number: CN2012800001658A
Authority: CN
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 靳变变, 刘文宇, 严军
Original and current assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues


Abstract

Embodiments of the present invention provide a distributed computing task processing system and a task processing method. The system includes: a first-layer scheduler, configured to receive a request to execute a task, start or select a second-layer scheduler corresponding to the task, and forward the request to the second-layer scheduler; and the second-layer scheduler, configured to decompose the task into multiple subtasks according to the logical relationships of the task upon receiving the request forwarded by the first-layer scheduler. The embodiments adopt a two-layer scheduling architecture in which second-layer schedulers correspond to tasks and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a given task, so the system can adapt to different tasks, improving processing efficiency and scheduling flexibility.

Description

Distributed Computing Task Processing System and Task Processing Method

Technical Field

Embodiments of the present invention relate to the field of network communication, and more specifically, to a distributed computing task processing system and a task processing method.

Background

With the development of the Internet, the need to process large amounts of information quickly has become pressing, so parallel processing of data has become very important. A distributed computing environment provides an effective means of resource sharing and interoperation among different software and hardware platforms in a network environment, and has become a common architecture for parallel processing. A well-known parallel processing system in the industry adopts the MapReduce architecture. MapReduce is a distributed computing software framework that supports distributed processing of large volumes of data. The architecture originated from the map and reduce functions of functional programming: map processes the original documents according to user-defined mapping rules and outputs intermediate results, and reduce merges the intermediate results according to user-defined reduction rules.

In a distributed computing environment, the general MapReduce architecture includes a scheduling node and multiple worker nodes. The scheduling node is responsible for task scheduling and resource management: according to the user configuration, it decomposes a task submitted by the user into map and reduce subtasks and assigns them to the worker nodes. The worker nodes run the map and reduce subtasks and maintain communication with the scheduling node.

In this parallel processing architecture, a single scheduling node is responsible for both task and resource management, and tasks must be processed strictly in the two-step order of map followed by reduce. If a job involves many processing steps, it must be completed by submitting many task requests, so processing efficiency is low and scheduling is inflexible.

Summary of the Invention

Embodiments of the present invention provide a task processing system and a task processing method that can solve the processing efficiency problem of the existing parallel processing architecture.

In one aspect, a distributed computing task processing system is provided, including: a first-layer scheduler, configured to receive a request to execute a task, start or select a second-layer scheduler corresponding to the task, and forward the request to the second-layer scheduler; and the second-layer scheduler, configured to decompose the task into multiple subtasks according to the logical relationships of the task upon receiving the request forwarded by the first-layer scheduler.
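The start-or-select dispatch described in this aspect can be sketched as follows. This is an illustrative Python sketch only; all class and method names (`FirstLayerScheduler`, `handle_request`, and so on) are assumptions, not part of the patent.

```python
# Sketch of the first-layer scheduler's start-or-select behavior:
# reuse an existing second-layer scheduler for this task type if one
# exists, otherwise start a new one, then forward the request.

class SecondLayerScheduler:
    def __init__(self, task_type):
        self.task_type = task_type

    def submit(self, request):
        # decomposition into subtasks would happen here
        return f"{self.task_type}:{request}"

class FirstLayerScheduler:
    def __init__(self):
        # task type -> already-running second-layer scheduler
        self.second_layer = {}

    def handle_request(self, task_type, request):
        scheduler = self.second_layer.get(task_type)
        if scheduler is None:
            # no suitable second-layer scheduler exists: start one
            scheduler = SecondLayerScheduler(task_type)
            self.second_layer[task_type] = scheduler
        return scheduler.submit(request)
```

A second request of the same task type is routed to the scheduler started for the first one, matching the "select" branch of the description.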

In another aspect, a distributed computing task processing method is provided. The method includes: upon receiving a request to execute a task, a first-layer scheduler starts or selects a second-layer scheduler corresponding to the task; the first-layer scheduler forwards the request to the second-layer scheduler; and upon receiving the forwarded request, the second-layer scheduler decomposes the task into multiple subtasks according to the logical relationships of the task.

Embodiments of the present invention adopt a two-layer scheduling architecture: second-layer schedulers correspond to tasks, and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a given task. The architecture can therefore adapt to different tasks, improving processing efficiency and scheduling flexibility.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below show merely some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a block diagram of a task processing system according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of a processing architecture according to an embodiment of the present invention.

FIG. 3 is a flowchart of a task processing method according to an embodiment of the present invention.

FIG. 4 is a schematic flowchart of a task processing procedure according to an embodiment of the present invention.

Detailed Description of Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

FIG. 1 is a block diagram of a distributed computing task processing system according to an embodiment of the present invention. The task processing system 10 in FIG. 1 includes two layers of schedulers, namely a first-layer scheduler 11 and a second-layer scheduler 12.

The first-layer scheduler 11 receives a request to execute a task, starts or selects the second-layer scheduler 12 corresponding to the task, and forwards the request to the second-layer scheduler 12.

For example, when no suitable second-layer scheduler exists in the system, the first-layer scheduler 11 may start the second-layer scheduler 12 corresponding to the task. When suitable second-layer schedulers already exist in the system, the first-layer scheduler 11 may select the second-layer scheduler 12 corresponding to the task from among them.

Optionally, the first-layer scheduler is further configured to perform priority management on tasks, and to start or select the second-layer scheduler to process a task according to its priority.

Upon receiving the request forwarded by the first-layer scheduler 11, the second-layer scheduler 12 decomposes the task into multiple subtasks according to the logical relationships of the task.

Embodiments of the present invention adopt a two-layer scheduling architecture: second-layer schedulers correspond to tasks, and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a given task. The architecture can therefore adapt to different tasks, improving processing efficiency and scheduling flexibility.

The existing parallel processing architecture has only one layer of scheduling, so tasks must be processed strictly in the two-step order of map followed by reduce; embodiments of the present invention have no such limitation. The first-layer scheduler 11 can accept tasks in various forms, which need not be limited to the strict map and reduce steps of the prior art. Second-layer schedulers 12 correspond to tasks, so the first-layer scheduler 11 can send different tasks to the corresponding second-layer schedulers 12 for scheduling. The second-layer scheduler 12 decomposes a task into subtasks in order to process it, for example by scheduling the execution of each subtask. Such scheduling is more flexible.

In addition, in the prior art, tasks must be processed strictly in the order of map followed by reduce. If many processing steps are involved, many task requests must be submitted, and processing efficiency is low. Embodiments of the present invention have no such limitation and do not restrict the task itself or the manner of executing the task or its subtasks. For example, a task may contain more subtasks than the two (map and reduce) of the prior art, such as three or more, and the subtasks are not limited to the map and reduce forms. Moreover, the subtasks need not follow a strict sequence: they may be executed in parallel, serially, or partly in parallel and partly serially. Thus, even processing that involves many steps requires only a small number of task requests, which improves processing efficiency.

The number of subtasks depends on the specific task, such as transcoding or a face recognition service. Depending on their logical relationships, tasks may have the same or different numbers of subtasks. Optionally, as an embodiment, the logical relationships of a task may be carried in a description file of the task. For example, the task processing system 10 (specifically, the second-layer scheduler 12, for example) may receive a description file uploaded by the user, such as one in XML (Extensible Markup Language) format, that carries the logical relationships of the task.

Optionally, upon receiving the request forwarded by the first-layer scheduler 11, the second-layer scheduler 12 obtains the XML description file corresponding to the task and decomposes the task into multiple subtasks according to the logical relationships of the task carried in that file.
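Assuming a minimal XML schema of the kind such a description file might use (the patent does not specify a schema, so the `<task>`/`<subtask>` element and attribute names below are purely hypothetical), the decomposition step can be sketched as:

```python
# Sketch: parsing a hypothetical XML task-description file into subtasks.
import xml.etree.ElementTree as ET

DESCRIPTION = """
<task name="transcode">
  <subtask id="1"/>
  <subtask id="2" depends="1"/>
  <subtask id="3" depends="2"/>
</task>
"""

def parse_description(xml_text):
    root = ET.fromstring(xml_text)
    subtasks = []
    for node in root.findall("subtask"):
        subtasks.append({
            "id": node.get("id"),
            "depends": node.get("depends"),  # None means no dependency
        })
    return root.get("name"), subtasks

name, subtasks = parse_description(DESCRIPTION)
```

Here the `depends` attribute encodes the logical relationships the paragraph describes; a real description file could carry richer structure (nested subtasks, resource hints, and so on).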

In addition, each subtask may be further decomposed into finer-grained subtasks. That is, the subtasks in embodiments of the present invention may form multiple layers, and the way each layer is further decomposed can be determined by the logical relationships carried in the description file. For example, subtask 1 may be decomposed into multiple subtasks 2, each subtask 2 may be further decomposed into multiple subtasks 3, and so on.

Optionally, as an embodiment, the logical relationships of a task may indicate the execution dependencies of its subtasks, that is, whether the execution of the subtasks depends on one another.

For example, if subtask 2 must depend on the execution result of subtask 1, subtask 2 should be executed after subtask 1 (that is, subtask 1 and subtask 2 must be executed serially). On the other hand, if subtask 2 does not depend on all execution results of subtask 1, then subtask 1 and subtask 2 may be executed either in parallel or serially.

A non-limiting example of an execution dependency is that two or more of the subtasks are executed serially, in parallel, or partly in parallel and partly serially, without being limited to the two prior-art steps of map and reduce. Thus, if a task involves many processing steps, there is no need to submit many task requests as in the MapReduce architecture; embodiments of the present invention may need only one or a few task requests, which improves task processing efficiency.

The logical relationships of a task may indicate the execution dependencies among subtasks explicitly, for example by stating that the task consists of subtasks 1-3 executed serially in order. Alternatively, they may indicate the dependencies implicitly; for example, for a particular task the system may know in advance that it consists of subtasks 1-3 executed serially.
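One way to turn such declared dependencies into an execution order — subtasks whose dependencies are all satisfied may run in parallel, the rest wait — can be sketched as follows. The dependency table in the test is a hypothetical example, not from the patent.

```python
# Sketch: group subtasks into "waves" by dependency. Subtasks in the
# same wave have all their dependencies satisfied and may run in
# parallel; waves themselves run serially.

def execution_waves(deps):
    """deps: subtask -> set of subtasks it depends on.
    Returns a list of waves; subtasks within one wave may run in parallel."""
    done, waves = set(), []
    pending = dict(deps)
    while pending:
        # every subtask whose dependencies are already done is ready
        ready = sorted(t for t, d in pending.items() if d <= done)
        if not ready:
            raise ValueError("cyclic dependency among subtasks")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del pending[t]
    return waves
```

For a purely serial task (1 -> 2 -> 3) every wave has a single subtask; independent subtasks fall into the same wave, recovering the serial/parallel/mixed orders the paragraph above describes.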

Optionally, as another embodiment, the second-layer scheduler 12 is further configured to create corresponding queues for the multiple subtasks to store the tasks they contain. When the tasks contained in a subtask have been stored in its queue, the second-layer scheduler 12 may further apply for resources for the subtask and instruct the work unit manager of the requested resources to start a work unit, so that the work unit fetches the tasks contained in the subtask from the queue and executes them. Optionally, as another embodiment, the second-layer scheduler 12 may further instruct the work unit to put the execution result into another queue or to output the execution result.

Further, as another embodiment, the second-layer scheduler 12 may also obtain progress information from the queues and work units in order to determine the execution progress of the task.

In short, embodiments of the present invention do not limit the specific form of a task. Optionally, the setting or selection of a task's logical relationships may support user customization, for example by receiving the user's settings or selections through a plug-in mechanism.

The task processing system 10 of embodiments of the present invention can be applied to a cloud computing architecture. Cloud computing offers a highly reliable, low-cost, on-demand, elastic business model, and many systems achieve high reliability, elasticity, and low cost by using cloud services.

FIG. 2 is a schematic diagram of a processing architecture according to an embodiment of the present invention. The processing architecture 20 in FIG. 2 is a cloud computing architecture that includes the task processing system 10 of FIG. 1. The difference from FIG. 1 is that the task processing system 10 in FIG. 2 may include multiple second-layer schedulers 12. For brevity, only two second-layer schedulers 12 are depicted in FIG. 2, but their number is not limited by this example and may be larger or smaller. Each second-layer scheduler 12 corresponds to one type of task, so as to adapt to or support different computing models. Optionally, multiple second-layer schedulers 12 may also correspond to one type of task, to achieve high concurrency in system scheduling. If a suitable second-layer scheduler 12 corresponding to the task exists among the current second-layer schedulers 12, the first-layer scheduler 11 may select it to process the task; if not, the first-layer scheduler 11 may start a new suitable second-layer scheduler 12 to process the task.

In the processing architecture 20, the first-layer scheduler 11 may be distributed to support high concurrency. The first-layer scheduler 11 may receive task requests sent by a web service 21. The web service 21 is responsible for receiving and forwarding users' web requests; its specific implementation may follow the prior art and is not described again here.

Optionally, as an embodiment, when the first-layer scheduler 11 receives multiple task requests, it may also perform priority management on the tasks (for example, sorting them by priority) and start or select second-layer schedulers 12 to process them according to priority. For example, the first-layer scheduler 11 may preferentially start or select the second-layer scheduler 12 corresponding to a higher-priority task.
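The priority sorting just described can be sketched with a binary heap. Lower numbers mean higher priority here, and the class and method names are illustrative assumptions, not from the patent:

```python
# Sketch: dispatch pending task requests in priority order, with a
# monotonic counter as tie-breaker so equal-priority requests keep
# their arrival order.
import heapq
import itertools

class PriorityDispatcher:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()

    def submit(self, priority, task):
        heapq.heappush(self._heap, (priority, next(self._order), task))

    def next_task(self):
        # the highest-priority request is handed to its
        # second-layer scheduler first
        return heapq.heappop(self._heap)[2]
```

A plug-in mechanism, as the patent suggests, could swap in a different comparison key without changing the dispatch loop.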

Optionally, as another embodiment, the first-layer scheduler 11 may implement additional functions such as adjusting task priorities. The method of priority sorting or adjustment may support user customization, for example by receiving the user's settings through a plug-in mechanism.

After starting or selecting the second-layer scheduler 12 corresponding to a task, the first-layer scheduler 11 forwards the task request to that second-layer scheduler 12. The second-layer scheduler 12 decomposes the task into multiple subtasks according to the logical relationships of the task and manages their execution. Optionally, as an embodiment, the execution of subtasks may be managed through queues (the distributed queue 22 shown in FIG. 2). The distributed queue 22 may include multiple queues, each storing the tasks contained in the corresponding subtask.

Specifically, the second-layer scheduler 12 may create corresponding queues for the multiple subtasks to store the tasks they contain, and may arrange the order of the queues according to the logical relationships of the task. For example, suppose a task consists of subtasks 1-3 executed serially (subtask 1 -> subtask 2 -> subtask 3). The second-layer scheduler 12 may create queues 1-3 to store the tasks contained in subtasks 1-3 respectively, and determine the order of the queues, that is, execute the tasks contained in the corresponding subtasks in the order queue 1 -> queue 2 -> queue 3. The execution results of subtask 1 are put into queue 2, the execution results of subtask 2 are put into queue 3, and the execution results of subtask 3 are output to a suitable destination, for example the distributed storage device 24 shown in FIG. 2, or returned to the user.
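The chained-queue arrangement above (queue 1 -> queue 2 -> queue 3, with each stage's results feeding the next queue) can be sketched as follows; the stage functions are hypothetical stand-ins for real subtask logic:

```python
# Sketch: three serially ordered subtasks, each draining its own queue
# and feeding the next, as in the queue 1 -> queue 2 -> queue 3 example.
from collections import deque

def run_stage(in_queue, out_queue, fn):
    # drain the input queue, putting each result into the next queue
    while in_queue:
        out_queue.append(fn(in_queue.popleft()))

queue1, queue2, queue3 = deque([1, 2, 3]), deque(), deque()
results = deque()  # stand-in for the output destination (storage / user)
run_stage(queue1, queue2, lambda x: x * 10)   # subtask 1
run_stage(queue2, queue3, lambda x: x + 1)    # subtask 2
run_stage(queue3, results, str)               # subtask 3, final output
```

In the architecture itself the queues would be distributed and the stages executed by workers on separate nodes; the sketch only shows the data flow the scheduler arranges.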

Optionally, as another embodiment, when the tasks contained in a subtask are stored in the distributed queue 22, the second-layer scheduler 12 may also apply for resources for that subtask, for example from the resource manager 25. The resource manager 25 is responsible for satisfying the resource applications and releases of the schedulers 11 and 12. Its main functions include resource management, resource matching, and automatic resource scaling. The resource matching method may adopt a plug-in mechanism to support user customization. Automatic resource scaling means that when the user configures the cluster size within a range, the cluster can be automatically expanded or shrunk according to its load. Other implementation aspects of the resource manager 25 may follow the prior art and are not described again here. For example, the resource manager 25 may also adopt a distributed solution to achieve high concurrency.

After creating queues and applying for resources for the subtasks, the second-layer scheduler 12 may instruct the work unit (worker) manager 26 of the requested resources to start a work unit 27, so that the work unit 27 fetches the tasks contained in a subtask from the queue and executes them. The worker manager 26 is responsible for the creation, deletion, and monitoring of workers 27. Each node (physical or virtual machine) in the cloud computing architecture has a worker manager 26. Other implementation aspects of the worker manager 26 may follow the prior art and are not described again here.

A worker 27 is responsible for fetching the tasks contained in the user's subtask from the corresponding queue in the distributed queue 22, preprocessing them, and then invoking the handler developed by the user. After processing completes, the worker puts the execution results into another queue, or outputs them, according to the queue order determined by the second-layer scheduler 12. Other implementation aspects of the worker 27 may follow the prior art and are not described again here.
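The worker loop just described — fetch a task from the queue, preprocess it, invoke the user-developed handler, forward the result — can be sketched as follows. The sentinel value, the preprocessing step, and the handler are hypothetical placeholders:

```python
# Sketch of a worker's main loop: fetch, preprocess, call the
# user-developed handler, and put the result into the next queue.
import queue

def worker(in_q, out_q, handler):
    while True:
        task = in_q.get()
        if task is None:          # sentinel: no more tasks for this subtask
            break
        prepared = task.strip()   # stand-in for preprocessing
        out_q.put(handler(prepared))

in_q, out_q = queue.Queue(), queue.Queue()
for t in [" a ", " b ", None]:
    in_q.put(t)
worker(in_q, out_q, str.upper)
```

In the real architecture `in_q` and `out_q` would be queues in the distributed queue 22 and the loop would run on a node started by the worker manager 26; the sketch only shows the control flow.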

In addition, the second-layer scheduler 12 may implement other scheduling functions, such as task exception handling or task progress statistics. For example, the second-layer scheduler 12 may obtain progress information from the queues and workers (such as whether and how far each subtask has completed, and whether and how far the tasks in each queue have completed) to determine the execution progress of the task. This enables real-time querying of task progress: a user may query the second-layer scheduler 12 for the execution progress of the corresponding task. Alternatively, the second-layer scheduler 12 may report the progress information to the first-layer scheduler 11 so that the user can query the execution progress there, which facilitates monitoring.
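As a toy illustration of the progress statistics mentioned above (the field names and the formula are assumptions; the patent only says progress is derived from queue and worker information), a task-level progress figure could be aggregated like this:

```python
# Sketch: combine per-queue backlog and in-flight worker items into a
# single completion fraction for one task.

def task_progress(total_items, queue_backlogs, in_flight):
    """Fraction of this task's items that are fully processed."""
    remaining = sum(queue_backlogs) + in_flight
    done = total_items - remaining
    return done / total_items

# e.g. 100 items total, 15 still queued across two queues, 5 being worked on
progress = task_progress(total_items=100, queue_backlogs=[10, 5], in_flight=5)
```

A real scheduler would refresh these counts from the distributed queue and the worker managers rather than take them as arguments.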

For brevity, three worker managers 26 and three corresponding workers 27 are illustrated in FIG. 2, but embodiments of the present invention are not limited to this specific example; there may be more or fewer worker managers 26 and workers 27.

The cluster management software 28 is responsible for the automated deployment and basic monitoring of the cluster that processes parallel tasks; its implementation may follow the prior art and is not described again here.

The distributed queue 22, the database 23 (such as a NoSQL database), and the distributed storage device 24 provide the task storage, database, and file storage required by the processing architecture 20; their specific implementations may also follow the prior art and are not described again here. For example, the database 23 may be used for persistent storage of information to meet the system's operating needs or to implement fault tolerance.

The bottom layer of the processing architecture 20 supports various heterogeneous hardware such as physical machines and virtual machines 29, with which user applications need not concern themselves. The implementation of the physical or virtual machines 29 may follow the prior art and is not described again here.

The processing architecture 20 adopts a "queue-worker" computing model, but embodiments of the present invention are not limited to this. The processing architecture 20 may also adopt other computing models; for example, some of the second-layer schedulers 12 in the processing architecture 20 may adopt the MapReduce approach described above, without queues.

Therefore, the processing architecture 20 of the embodiments of the present invention adopts a two-layer scheduling architecture: second-layer schedulers correspond to tasks, and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a given task, so the architecture can adapt to different tasks, improving processing efficiency and scheduling flexibility. Moreover, with the "queue-worker" computing model described above, multiple second-layer schedulers for different tasks can be started simultaneously, further improving concurrency.

另外,本发明实施例给出高性能、灵活的并行处理架构20可以支持物理机器以及目前较流行的云计算平台、支持大规模集群、支持用户调度策略配置以及自定义、支持不同的计算模型。In addition, the embodiment of the present invention provides a high-performance, flexible parallel processing architecture 20 that can support physical machines and currently popular cloud computing platforms, support large-scale clusters, support user scheduling policy configuration and customization, and support different computing models.

图3是本发明一个实施例的分布式计算任务处理方法的流程图。图3的方法可由图1和图2的任务处理系统10执行,因此下面结合图1和图2来描述图3的方法,并适当省略重复的描述。Fig. 3 is a flowchart of a distributed computing task processing method according to an embodiment of the present invention. The method in FIG. 3 can be executed by the task processing system 10 in FIG. 1 and FIG. 2 , so the method in FIG. 3 will be described below in conjunction with FIG. 1 and FIG. 2 , and repeated descriptions will be appropriately omitted.

301,第一层调度器11在接收到执行任务的请求时,启动或选择任务对应的第二层调度器12。301. When receiving a request to execute a task, the first-level scheduler 11 starts or selects a second-level scheduler 12 corresponding to the task.

例如,在系统中没有合适的第二层调度器时,第一层调度器11可启动该任务对应的第二层调度器12。在系统中已经存在合适的第二层调度器时,第一层调度器11可从这些合适的第二层调度器中选择该任务对应的第二层调度器12。For example, when there is no suitable second-level scheduler in the system, the first-level scheduler 11 can start the second-level scheduler 12 corresponding to the task. When there are suitable second-level schedulers in the system, the first-level scheduler 11 can select the second-level scheduler 12 corresponding to the task from these suitable second-level schedulers.
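The start-or-select decision above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and method names (`FirstLevelScheduler`, `dispatch`, and so on) are hypothetical:

```python
class SecondLevelScheduler:
    """A scheduler instance dedicated to one type of task."""
    def __init__(self, task_type):
        self.task_type = task_type
        self.requests = []

    def handle(self, request):
        self.requests.append(request)


class FirstLevelScheduler:
    """Starts a second-level scheduler when none exists for the task
    type, otherwise selects (reuses) the existing one, then forwards
    the request to it."""
    def __init__(self):
        self.schedulers = {}  # task_type -> SecondLevelScheduler

    def dispatch(self, request):
        task_type = request["type"]
        scheduler = self.schedulers.get(task_type)
        if scheduler is None:            # no suitable scheduler: start one
            scheduler = SecondLevelScheduler(task_type)
            self.schedulers[task_type] = scheduler
        scheduler.handle(request)        # forward the request (step 302)
        return scheduler
```

Dispatching two requests of the same type reuses one second-level scheduler, while a request of a new type starts a fresh one.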

302. The first-layer scheduler 11 forwards the request to the second-layer scheduler 12.

303. Upon receiving the request forwarded by the first-layer scheduler, the second-layer scheduler 12 decomposes the task into multiple subtasks according to the logical relationship of the task.

This embodiment of the present invention adopts a two-layer scheduling architecture: each second-layer scheduler corresponds to a task, and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a task. The architecture is thus applicable to different tasks, improving processing efficiency and scheduling flexibility.

In an existing parallel processing architecture there is only one layer of scheduling, so task processing must strictly follow the two successive steps of map and reduce; embodiments of the present invention have no such limitation. The first-layer scheduler 11 of this embodiment can accept tasks in various forms, which need not be limited to the strict map and reduce steps of the prior art. Each second-layer scheduler 12 corresponds to a task, so the first-layer scheduler 11 can send different tasks to the corresponding second-layer schedulers 12 for scheduling. A second-layer scheduler 12 decomposes its task into subtasks in order to process it, for example by scheduling the execution of each subtask. Such scheduling offers greater flexibility.

In addition, the prior art requires task processing to strictly follow the map-then-reduce order. If the processing involves many steps, it must be completed by submitting many task requests, which is inefficient. Embodiments of the present invention have no such limitation, and place no restriction on the task itself or on the manner of executing the task or its subtasks. For example, a task may contain more subtasks than the two (map and reduce) of the prior art, such as three or more, and the subtasks are not limited to the map and reduce forms. Moreover, the subtasks need not follow a strict order: they may be executed in parallel, serially, or partly in parallel and partly serially. In this way, even processing with many steps requires only a small number of task requests, which improves processing efficiency.

Optionally, in an embodiment, the logical relationship of the task may indicate the execution dependencies of the multiple subtasks. An execution dependency indicates whether the execution of one subtask depends on that of another.

For example, if subtask 2 must depend on the execution result of subtask 1, then subtask 2 should be executed after subtask 1 (that is, subtask 1 and subtask 2 need to be executed serially). On the other hand, if subtask 2 does not depend on all of the execution results of subtask 1, then subtask 1 and subtask 2 may be executed either in parallel or serially.

A non-limiting example of an execution dependency is that two or more of the multiple subtasks are executed serially, in parallel, or partly in parallel and partly serially, without being limited to the two steps of map and reduce in the prior art. Thus, if a task involves many processing steps, there is no need to submit many task requests as in the MapReduce architecture; an embodiment of the present invention may require only one or a few task requests, thereby improving task processing efficiency.
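One way to realize such an execution dependency is to topologically layer the subtask graph: subtasks in the same layer have no dependency between them and may run in parallel, while the layers themselves run serially. Below is a minimal sketch; the dependency encoding (a mapping from each subtask to the subtasks it depends on) is an illustrative assumption, not taken from the patent:

```python
def execution_layers(deps):
    """deps maps each subtask to the collection of subtasks it depends on.
    Returns a list of layers; each layer is a set of subtasks that may
    execute in parallel once all earlier layers have finished."""
    remaining = {task: set(d) for task, d in deps.items()}
    layers = []
    done = set()
    while remaining:
        # ready = subtasks whose dependencies are all satisfied
        ready = {t for t, d in remaining.items() if d <= done}
        if not ready:
            raise ValueError("cyclic dependency among subtasks")
        layers.append(ready)
        done |= ready
        for t in ready:
            del remaining[t]
    return layers
```

A strictly serial chain yields one subtask per layer; independent subtasks share a layer and may be scheduled concurrently.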

Optionally, in another embodiment, the second-layer scheduler 12 may further create corresponding queues for the multiple subtasks to store the tasks they contain, and arrange the order of the queues according to the logical relationship of the task.

Optionally, in another embodiment, when the tasks contained in a subtask are stored in its queue, the second-layer scheduler 12 may further apply for resources for the subtask, and instruct the work unit (worker) manager of the requested resources to start a work unit, so that the work unit fetches the tasks contained in the subtask from the queue and executes them.

Optionally, in another embodiment, the second-layer scheduler 12 may further instruct the work unit to put the result of executing a task into another queue, or to output the result.

Optionally, in another embodiment, the second-layer scheduler 12 may further obtain progress information of the queues and work units to determine the execution progress of the task. This enables real-time querying of task progress, which is convenient for user monitoring.
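Determining progress from queue and worker state can be sketched as below. The metric (finished units over total units) is an illustrative assumption, since the patent does not fix a particular progress formula:

```python
def task_progress(queue_depths, workers_busy, total_units):
    """Estimate the execution progress of a task as a fraction in [0, 1].

    queue_depths: units still waiting in each subtask queue
    workers_busy: units currently being executed by workers
    total_units:  total units the task was decomposed into
    """
    pending = sum(queue_depths) + workers_busy
    finished = total_units - pending
    return finished / total_units
```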

Optionally, in another embodiment, in step 301 the first-layer scheduler 11 may perform priority management on tasks and, according to priority, start or select second-layer schedulers 12 to process the tasks.

Embodiments of the present invention are described in more detail below with reference to specific examples. Fig. 4 is a schematic flowchart of a task processing procedure according to an embodiment of the present invention. For example, the procedure in Fig. 4 may be executed by the processing architecture 20 in Fig. 2, so repeated descriptions are appropriately omitted.

In the example of Fig. 4, it is assumed that the task consists of subtasks 1-3 executed serially (subtask 1 -> subtask 2 -> subtask 3). However, embodiments of the present invention are not limited to this specific example; the processing procedure of the embodiments can be applied similarly to any other type of task, and all such applications fall within the scope of the embodiments of the present invention.

401. A web service receives a request, submitted by a user, to execute a task. The logical relationship of the task may be defined by the user.

402. The web service forwards the request to the first-layer scheduler.

403. The first-layer scheduler returns to the web service a response indicating that the request was submitted successfully. Step 403 is optional.

404. The first-layer scheduler computes priorities for the received tasks according to a priority calculation method, sorts the tasks, and selects those with higher priority.
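Step 404's compute-sort-select cycle can be sketched with a priority heap. This is a minimal illustration; the patent does not specify the priority calculation method, so the use of a user-assigned base priority with arrival order as the tie-breaker is an assumption:

```python
import heapq
import itertools


class PriorityTaskPool:
    """Orders received tasks so the first-layer scheduler can pick the
    highest-priority one first (step 404). A lower base value wins;
    arrival order breaks ties to avoid starving equal-priority tasks."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, name, base_priority):
        # heapq is a min-heap: smaller tuples pop first
        heapq.heappush(self._heap, (base_priority, next(self._seq), name))

    def select(self):
        """Return the name of the highest-priority pending task."""
        return heapq.heappop(self._heap)[2]
```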

405. According to the task selected in step 404, the first-layer scheduler starts (if no second-layer scheduler of the corresponding type exists in the system at that time) or selects (if a second-layer scheduler of the corresponding type is available in the system) a suitable second-layer scheduler.

406. The first-layer scheduler forwards the task request to the second-layer scheduler started or selected in step 405.

407. After receiving the task request, the second-layer scheduler preprocesses the task according to its logical relationship. Specifically, as a non-limiting example, the second-layer scheduler may decompose the task into multiple subtasks (subtask 1, subtask 2, subtask 3).

408. The second-layer scheduler creates queues 1-3 for subtasks 1-3 according to the "queue-worker" computing model. Optionally, the second-layer scheduler may generate an initial subtask (corresponding to subtask 1) and put it into queue 1. At this point, the second-layer scheduler may arrange the execution order of queues 1-3 as "queue 1 -> queue 2 -> queue 3", according to the execution dependency "subtask 1 -> subtask 2 -> subtask 3" among subtasks 1-3.

409. The second-layer scheduler finds that there is subtask 1 in queue 1. For example, the second-layer scheduler may periodically check the queues to see whether they contain tasks. However, embodiments of the present invention are not limited in this regard; the second-layer scheduler may discover subtasks in the queues in other ways.

410. The second-layer scheduler applies to the resource manager for resources for subtask 1.

411. The second-layer scheduler instructs the work unit (worker) manager of the requested resources to start a worker to process subtask 1 in queue 1.

412. The worker manager starts the worker and tells it to put the result of processing subtask 1 (corresponding to subtask 2) into queue 2.

413. After starting, the worker automatically goes to queue 1 to fetch and execute the tasks contained in subtask 1; after execution is complete, it puts the execution result (corresponding to subtask 2) into queue 2.
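Steps 409-413, in which work appears in queue 1 and a worker drains it into queue 2, can be sketched with plain in-memory queues. The `process` function standing in for subtask 1's computation is hypothetical:

```python
from collections import deque


def run_worker(in_queue, out_queue, process):
    """Worker for one subtask stage: fetch every task from in_queue,
    execute it, and put the result into out_queue (steps 412-413).
    Returns the number of tasks handled."""
    handled = 0
    while in_queue:
        task = in_queue.popleft()      # fetch from queue N
        result = process(task)         # execute the subtask's work
        out_queue.append(result)       # deposit into queue N+1
        handled += 1
    return handled
```

Chaining `run_worker` over queue 1 -> queue 2 -> queue 3 reproduces the serial flow of Fig. 4.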

414. The second-layer scheduler finds that there is subtask 2 in queue 2.

415. The second-layer scheduler applies to the resource manager for resources for subtask 2.

416. The second-layer scheduler instructs the work unit (worker) manager to start a worker to process subtask 2 in queue 2.

417. The worker manager starts the worker and tells it to put the result of processing subtask 2 (corresponding to subtask 3) into queue 3.

418. After starting, the worker automatically goes to queue 2 to fetch and execute the tasks contained in subtask 2; after execution is complete, it puts the execution result (corresponding to subtask 3) into queue 3.

419. The second-layer scheduler finds that there is subtask 3 in queue 3.

420. The second-layer scheduler applies to the resource manager for resources for subtask 3.

421. The second-layer scheduler instructs the work unit (worker) manager to start a worker to process subtask 3 in queue 3.

422. The worker manager starts the worker and tells it to put the result of processing subtask 3 into an appropriate location (for example, into the distributed storage device, or returned to the user).

423. After starting, the worker automatically goes to queue 3 to fetch and execute the tasks contained in subtask 3; after execution is complete, it puts the execution result into the appropriate location.

Through steps 409-423, the workers automatically fetch subtasks and deposit subtasks, so the entire task can be processed according to the logical relationship defined for it. In addition, although steps 409-423 are depicted as executing serially in the embodiment of Fig. 4, embodiments of the present invention are not limited thereto. In other embodiments, when queues 1-3 need not be processed in order, the execution order of steps 409-413, steps 414-418, and steps 419-423 may be interchanged or overlapped. For example, if subtask 2 in queue 2 does not depend on the execution results of all of the subtasks 1 in queue 1, the workers of queue 2 may work while the workers of queue 1 are working; there is no need to wait until the workers of queue 1 have finished all of the subtasks 1 before starting the workers of queue 2.
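The overlapping described above can be sketched with one worker thread per queue, connected by `None` sentinels. This is a minimal pipeline illustration under the assumption that subtask 2 may start on each item as soon as it reaches queue 2; the two `lambda` stage functions are placeholders for the subtasks' real work:

```python
import queue
import threading


def stage_worker(in_q, out_q, process):
    """Consume items from in_q until a None sentinel arrives, forwarding
    processed results to out_q. Because each stage runs in its own
    thread, stage N+1 starts on an item as soon as it appears, without
    waiting for stage N to drain completely."""
    while True:
        item = in_q.get()
        if item is None:           # sentinel: upstream stage finished
            out_q.put(None)
            break
        out_q.put(process(item))


q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
t1 = threading.Thread(target=stage_worker, args=(q1, q2, lambda x: x + 1))
t2 = threading.Thread(target=stage_worker, args=(q2, q3, lambda x: x * 2))
t1.start(); t2.start()
for item in [1, 2, 3]:
    q1.put(item)
q1.put(None)                       # signal end of input
t1.join(); t2.join()
results = []
while True:
    item = q3.get()
    if item is None:
        break
    results.append(item)
```

With a single worker per stage, the queues preserve order, so the pipeline behaves like the serial flow of Fig. 4 while the two stages run concurrently.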

424. The second-layer scheduler may obtain progress information of the queues or workers and determine the execution progress of the entire task.

425. If execution of the task is complete, the second-layer scheduler reports this to the first-layer scheduler. The second-layer scheduler may also directly allow the user to query the progress of the task in real time.

In this way, this embodiment of the present invention adopts a two-layer scheduling architecture: each second-layer scheduler corresponds to a task, and the first-layer scheduler starts or selects the second-layer scheduler corresponding to a task. The architecture is thus applicable to different tasks, improves processing efficiency and scheduling flexibility, and can satisfy the requirements of a variety of parallel processing services.

Those of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.

A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.

In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; in actual implementation there may be other ways of division. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A distributed computing task processing system, comprising: a first-layer scheduler, configured to receive a request to execute a task, start or select a second-layer scheduler corresponding to the task, and forward the request to the second-layer scheduler; and the second-layer scheduler, configured to decompose, upon receiving the request forwarded by the first-layer scheduler, the task into multiple subtasks according to a logical relationship of the task.

2. The system according to claim 1, wherein the logical relationship of the task indicates execution dependencies of the multiple subtasks.

3. The system according to claim 1 or 2, wherein the second-layer scheduler is further configured to create corresponding queues for the multiple subtasks to store the tasks contained in the subtasks.

4. The system according to claim 3, wherein, when the tasks contained in a subtask are stored in its queue, the second-layer scheduler is further configured to apply for resources for the subtask, and to instruct the work unit manager of the requested resources to start a work unit, so that the work unit fetches the tasks contained in the subtask from the queue and executes them.

5. The system according to claim 4, wherein the second-layer scheduler is further configured to instruct the work unit to put the result of executing a task into another queue or to output the result.

6. The system according to claim 4 or 5, wherein the second-layer scheduler is further configured to obtain progress information of the queue and the work unit to determine the execution progress of the task.

7. The system according to any one of claims 2-6, wherein the execution dependencies of the multiple subtasks comprise: two or more of the multiple subtasks being executed serially or in parallel.

8. The system according to any one of claims 1-7, wherein the first-layer scheduler is further configured to perform priority management on the task and, according to the priority, start or select the second-layer scheduler to process the task.

9. A distributed computing task processing method, comprising: starting or selecting, by a first-layer scheduler upon receiving a request to execute a task, a second-layer scheduler corresponding to the task; forwarding, by the first-layer scheduler, the request to the second-layer scheduler; and decomposing, by the second-layer scheduler upon receiving the request forwarded by the first-layer scheduler, the task into multiple subtasks according to a logical relationship of the task.

10. The method according to claim 9, wherein the logical relationship of the task indicates execution dependencies of the multiple subtasks.

11. The method according to claim 9 or 10, further comprising: creating, by the second-layer scheduler, corresponding queues for the multiple subtasks to store the tasks contained in the subtasks.

12. The method according to claim 11, further comprising: when the tasks contained in a subtask are stored in its queue, applying, by the second-layer scheduler, for resources for the subtask, and instructing the work unit manager of the requested resources to start a work unit, so that the work unit fetches the tasks contained in the subtask from the queue and executes them.

13. The method according to claim 12, further comprising: instructing, by the second-layer scheduler, the work unit to put the result of executing a task into another queue or to output the result.

14. The method according to claim 12 or 13, further comprising: obtaining, by the second-layer scheduler, progress information of the queue and the work unit to determine the execution progress of the task.

15. The method according to any one of claims 9-14, wherein the first-layer scheduler further performs priority management on the task and, according to the priority, starts or selects the second-layer scheduler to process the task.
CN2012800001658A 2012-01-18 2012-01-18 Distributed computing task processing system and task processing method Pending CN102763086A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/070551 WO2013107012A1 (en) 2012-01-18 2012-01-18 Task processing system and task processing method for distributed computation

Publications (1)

Publication Number Publication Date
CN102763086A true CN102763086A (en) 2012-10-31

Family

ID=47056377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012800001658A Pending CN102763086A (en) 2012-01-18 2012-01-18 Distributed computing task processing system and task processing method

Country Status (2)

Country Link
CN (1) CN102763086A (en)
WO (1) WO2013107012A1 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064736A (en) * 2012-12-06 2013-04-24 华为技术有限公司 Task processing device and method
CN104035817A (en) * 2014-07-08 2014-09-10 领佰思自动化科技(上海)有限公司 Distributed parallel computing method and system for physical implementation of large scale integrated circuit
CN104102949A (en) * 2014-06-27 2014-10-15 北京奇艺世纪科技有限公司 Distributed workflow device and method for processing workflow by distributed workflow device
CN104123182A (en) * 2014-07-18 2014-10-29 西安交通大学 Map Reduce task data-center-across scheduling system and method based on master-slave framework
CN104778074A (en) * 2014-01-14 2015-07-15 腾讯科技(深圳)有限公司 Calculation task processing method and device
CN105653365A (en) * 2016-02-22 2016-06-08 青岛海尔智能家电科技有限公司 Task processing method and device
CN106445681A (en) * 2016-08-31 2017-02-22 东方网力科技股份有限公司 Distributed task scheduling system and method
CN106547523A (en) * 2015-09-17 2017-03-29 北大方正集团有限公司 Progress bar progress display packing, apparatus and system
CN103870334B (en) * 2012-12-18 2017-05-31 中国移动通信集团公司 A kind of method for allocating tasks and device of extensive vulnerability scanning
US9886310B2 (en) 2014-02-10 2018-02-06 International Business Machines Corporation Dynamic resource allocation in MapReduce
CN107710173A (en) * 2015-05-29 2018-02-16 高通股份有限公司 Multithreading conversion and affairs rearrangement for MMU
CN107818016A (en) * 2017-11-22 2018-03-20 苏州麦迪斯顿医疗科技股份有限公司 Server application design method, request event processing method and processing device
WO2019051942A1 (en) * 2017-09-15 2019-03-21 平安科技(深圳)有限公司 Task allocation method, terminal, and computer readable storage medium
CN109885388A (en) * 2019-01-31 2019-06-14 上海赜睿信息科技有限公司 A kind of data processing method and device suitable for heterogeneous system
CN110569252A (en) * 2018-05-16 2019-12-13 杭州海康威视数字技术股份有限公司 A data processing system and method
CN110597613A (en) * 2018-06-12 2019-12-20 成都鼎桥通信技术有限公司 Task processing method, device, equipment and computer readable storage medium
CN110750371A (en) * 2019-10-17 2020-02-04 北京创鑫旅程网络技术有限公司 Flow execution method, device, equipment and storage medium
WO2020108303A1 (en) * 2018-11-30 2020-06-04 中兴通讯股份有限公司 Heterogeneous computing-based task processing method and software-hardware framework system
CN111506409A (en) * 2020-04-20 2020-08-07 南方电网科学研究院有限责任公司 Data processing method and system
CN111708643A (en) * 2020-06-11 2020-09-25 中国工商银行股份有限公司 Batch operation method and device for distributed streaming media platform
CN112130966A (en) * 2019-06-24 2020-12-25 北京京东尚科信息技术有限公司 Task scheduling method and system
CN112596871A (en) * 2020-12-16 2021-04-02 中国建设银行股份有限公司 Service processing method and device
CN113448692A (en) * 2020-03-25 2021-09-28 杭州海康威视数字技术股份有限公司 Distributed graph computing method, device, equipment and storage medium
CN113946417A (en) * 2021-09-18 2022-01-18 广州虎牙科技有限公司 Distributed task execution method, related device and equipment
CN114647491A (en) * 2020-12-17 2022-06-21 中移(苏州)软件技术有限公司 A task scheduling method, device, equipment and storage medium
CN114691311A (en) * 2020-12-30 2022-07-01 安徽寒武纪信息科技有限公司 Method, device and computer program product for executing asynchronous task
CN115268800A (en) * 2022-09-29 2022-11-01 四川汉唐云分布式存储技术有限公司 Data processing method and data storage system based on calculation route redirection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1791025A (en) * 2005-12-26 2006-06-21 北京航空航天大学 Service gridding system and method for processing operation
CN101169743A (en) * 2007-11-27 2008-04-30 南京大学 A method for parallel power flow calculation based on multi-core computer in power grid
CN101957780A (en) * 2010-08-17 2011-01-26 中国电子科技集团公司第二十八研究所 Resource state information-based grid task scheduling processor and grid task scheduling processing method
CN102110022A (en) * 2011-03-22 2011-06-29 上海交通大学 Sensor network embedded operation system based on priority scheduling

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876763B2 (en) * 2004-08-05 2011-01-25 Cisco Technology, Inc. Pipeline scheduler including a hierarchy of schedulers and multiple scheduling lanes
CN101621460B (en) * 2008-06-30 2011-11-30 中兴通讯股份有限公司 Packet scheduling method and device
CN102185761B (en) * 2011-04-13 2013-08-07 中国人民解放军国防科学技术大学 Two-layer dynamic scheduling method facing to ensemble prediction applications

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064736A (en) * 2012-12-06 2013-04-24 华为技术有限公司 Task processing device and method
US9519338B2 (en) 2012-12-06 2016-12-13 Huawei Technologies Co., Ltd. Task processing apparatus and method including scheduling current and next-level task processing apparatus
CN103870334B (en) * 2012-12-18 2017-05-31 中国移动通信集团公司 Task allocation method and device for large-scale vulnerability scanning
CN104778074A (en) * 2014-01-14 2015-07-15 腾讯科技(深圳)有限公司 Calculation task processing method and device
WO2015106687A1 (en) * 2014-01-14 2015-07-23 Tencent Technology (Shenzhen) Company Limited Method and apparatus for processing computational task
CN104778074B (en) * 2014-01-14 2019-02-26 腾讯科技(深圳)有限公司 Computational task processing method and device
US10146588B2 (en) 2014-01-14 2018-12-04 Tencent Technology (Shenzhen) Company Limited Method and apparatus for processing computational task having multiple subflows
US9886310B2 (en) 2014-02-10 2018-02-06 International Business Machines Corporation Dynamic resource allocation in MapReduce
CN104102949A (en) * 2014-06-27 2014-10-15 北京奇艺世纪科技有限公司 Distributed workflow device and method for processing workflow by distributed workflow device
CN104102949B (en) * 2014-06-27 2018-01-26 北京奇艺世纪科技有限公司 Distributed workflow device and workflow processing method thereof
CN104035817A (en) * 2014-07-08 2014-09-10 领佰思自动化科技(上海)有限公司 Distributed parallel computing method and system for physical implementation of large scale integrated circuit
CN104123182A (en) * 2014-07-18 2014-10-29 西安交通大学 Map Reduce task data-center-across scheduling system and method based on master-slave framework
CN104123182B (en) * 2014-07-18 2015-09-30 西安交通大学 Client/server-based MapReduce task scheduling system and method across data centers
CN107710173A (en) * 2015-05-29 2018-02-16 高通股份有限公司 Multithreading conversion and affairs rearrangement for MMU
CN106547523B (en) * 2015-09-17 2019-08-06 北大方正集团有限公司 Progress bar progress display method, device and system
CN106547523A (en) * 2015-09-17 2017-03-29 北大方正集团有限公司 Progress bar progress display method, device and system
CN105653365A (en) * 2016-02-22 2016-06-08 青岛海尔智能家电科技有限公司 Task processing method and device
CN106445681A (en) * 2016-08-31 2017-02-22 东方网力科技股份有限公司 Distributed task scheduling system and method
CN106445681B (en) * 2016-08-31 2019-11-29 东方网力科技股份有限公司 Distributed task dispatching system and method
WO2019051942A1 (en) * 2017-09-15 2019-03-21 平安科技(深圳)有限公司 Task allocation method, terminal, and computer readable storage medium
CN107818016A (en) * 2017-11-22 2018-03-20 苏州麦迪斯顿医疗科技股份有限公司 Server application design method, request event processing method and processing device
CN110569252A (en) * 2018-05-16 2019-12-13 杭州海康威视数字技术股份有限公司 A data processing system and method
CN110597613A (en) * 2018-06-12 2019-12-20 成都鼎桥通信技术有限公司 Task processing method, device, equipment and computer readable storage medium
EP3889774A4 (en) * 2018-11-30 2022-08-03 ZTE Corporation HETEROGENEOUS COMPUTATION-BASED TASK PROCESSING METHOD AND SOFTWARE-HARDWARE FRAMEWORK SYSTEM
WO2020108303A1 (en) * 2018-11-30 2020-06-04 中兴通讯股份有限公司 Heterogeneous computing-based task processing method and software-hardware framework system
CN111258744A (en) * 2018-11-30 2020-06-09 中兴通讯股份有限公司 Task processing method based on heterogeneous computation and software and hardware framework system
CN111258744B (en) * 2018-11-30 2024-08-06 中兴通讯股份有限公司 Task processing method based on heterogeneous computation and software and hardware framework system
US11681564B2 (en) 2018-11-30 2023-06-20 Zte Corporation Heterogeneous computing-based task processing method and software and hardware framework system
CN109885388A (en) * 2019-01-31 2019-06-14 上海赜睿信息科技有限公司 Data processing method and device suitable for heterogeneous systems
CN112130966A (en) * 2019-06-24 2020-12-25 北京京东尚科信息技术有限公司 Task scheduling method and system
CN110750371A (en) * 2019-10-17 2020-02-04 北京创鑫旅程网络技术有限公司 Flow execution method, device, equipment and storage medium
CN113448692A (en) * 2020-03-25 2021-09-28 杭州海康威视数字技术股份有限公司 Distributed graph computing method, device, equipment and storage medium
CN111506409A (en) * 2020-04-20 2020-08-07 南方电网科学研究院有限责任公司 Data processing method and system
CN111708643A (en) * 2020-06-11 2020-09-25 中国工商银行股份有限公司 Batch operation method and device for distributed streaming media platform
CN112596871A (en) * 2020-12-16 2021-04-02 中国建设银行股份有限公司 Service processing method and device
CN114647491A (en) * 2020-12-17 2022-06-21 中移(苏州)软件技术有限公司 A task scheduling method, device, equipment and storage medium
CN114691311A (en) * 2020-12-30 2022-07-01 安徽寒武纪信息科技有限公司 Method, device and computer program product for executing asynchronous task
CN113946417A (en) * 2021-09-18 2022-01-18 广州虎牙科技有限公司 Distributed task execution method, related device and equipment
CN113946417B (en) * 2021-09-18 2026-01-06 广州虎牙科技有限公司 Distributed task execution method, related device and equipment
CN115268800A (en) * 2022-09-29 2022-11-01 四川汉唐云分布式存储技术有限公司 Data processing method and data storage system based on calculation route redirection
CN115268800B (en) * 2022-09-29 2022-12-20 四川汉唐云分布式存储技术有限公司 Data processing method and data storage system based on calculation route redirection

Also Published As

Publication number Publication date
WO2013107012A1 (en) 2013-07-25

Similar Documents

Publication Publication Date Title
CN102763086A (en) Distributed computing task processing system and task processing method
US11709704B2 (en) FPGA acceleration for serverless computing
Singh et al. Scheduling real-time security aware tasks in fog networks
Sun et al. IaaS public cloud computing platform scheduling model and optimization analysis
CN103036946B (en) Method and system for processing file backup tasks for a cloud platform
CN113243005A (en) Performance-based hardware emulation in on-demand network code execution systems
US20140007121A1 (en) Light weight workload management server integration
US9733984B2 (en) Multiple stage workload management system
CN108737560A (en) Cloud computing task intelligent dispatching method and system, readable storage medium storing program for executing, terminal
KR102052964B1 (en) Method and system for scheduling computing
US10803413B1 (en) Workflow service with translator
Mahmud et al. Edge affinity-based management of applications in fog computing environments
US20160203026A1 (en) Processing a hybrid flow associated with a service class
US20230333880A1 (en) Method and system for dynamic selection of policy priorities for provisioning an application in a distributed multi-tiered computing environment
CN113225269B (en) Container-based workflow scheduling method, device and system and storage medium
US12236267B2 (en) Method and system for performing domain level scheduling of an application in a distributed multi-tiered computing environment using reinforcement learning
Mershad et al. A study of the performance of a cloud datacenter server
US20250342052A1 (en) Application gateways in an on-demand network code execution system
US9246778B2 (en) System to enhance performance, throughput and reliability of an existing cloud offering
CN113254143A (en) Virtual network function network element arranging and scheduling method, device and system
US20230333897A1 (en) Method and system for performing device level management in a distributed multi-tiered computing environment
Yu et al. Towards dynamic resource provisioning for traffic mining service cloud
Kaur et al. A task scheduling and resource allocation algorithm for cloud using live migration and priorities
You et al. Hierarchical Queue-Based Task Scheduling
CN114721815B (en) Method, device, program, equipment and medium for determining the maximum number of available copies

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 2012-10-31