
CN110717992B - Method, apparatus, computer system and readable storage medium for scheduling model - Google Patents


Info

Publication number
CN110717992B
CN110717992B
Authority
CN
China
Prior art keywords
model
models
scheduled
scheduling
platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910947251.3A
Other languages
Chinese (zh)
Other versions
CN110717992A (en)
Inventor
李培道
吴勇义
刘彬彬
刘志宛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Secworld Information Technology Beijing Co Ltd
Qax Technology Group Inc
Original Assignee
Secworld Information Technology Beijing Co Ltd
Qax Technology Group Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Secworld Information Technology Beijing Co Ltd and Qax Technology Group Inc
Priority to CN201910947251.3A
Publication of CN110717992A
Application granted
Publication of CN110717992B
Legal status: Active (Current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stored Programmes (AREA)

Abstract

The present disclosure provides a method for scheduling models, applied to a scheduling platform, comprising: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relations among the plurality of models to be scheduled; the plurality of models to be scheduled comprise at least two models written in different languages, or at least two models written in the same language, and models written in different languages have different configuration information; selecting, according to scheduling logic, a first model that meets running conditions from the plurality of models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model. The present disclosure also provides an apparatus for scheduling models applied to a scheduling platform, a computer system, and a computer-readable storage medium.

Description

Method, apparatus, computer system and readable storage medium for scheduling models

Technical Field

The present disclosure relates to the field of computer technology, and more specifically, to a method for scheduling models applied to a scheduling platform, an apparatus for scheduling models applied to a scheduling platform, a computer system, and a computer-readable storage medium.

Background

In the related art, a modeling platform can provide functions such as project management, data management, data processing, and model management. Different customers can build models that meet their own business needs on the application side of the modeling platform, and the built models can be used to achieve business goals. For example, a customer may build a prediction model based on big data to predict data trends. However, as business requirements increasingly cut across multiple customers, the models built by different customers inevitably need to interact in order to achieve certain business goals.

In the process of realizing the concept of the present disclosure, the inventors found at least the following problem in the related art: current modeling platforms can generally only schedule a single model and lack the ability to schedule multiple models cooperatively, so that the business cannot be carried out.

Summary of the Invention

In view of this, the present disclosure provides a method for scheduling models applied to a scheduling platform, an apparatus for scheduling models applied to a scheduling platform, a computer system, and a computer-readable storage medium.

One aspect of the present disclosure provides a method for scheduling models applied to a scheduling platform, comprising: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relations among the plurality of models to be scheduled, and the plurality of models to be scheduled comprise at least two models written in different languages, models written in different languages having different configuration information, or at least two models written in the same language; selecting, according to scheduling logic, a first model that meets running conditions from the plurality of models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.

According to an embodiment of the present disclosure, the method further comprises: receiving, from the first running platform, status information on running the first model; in a case where the status information indicates that the first model has finished running, selecting, according to the dependency relations among the plurality of models to be scheduled, a second model that meets the running conditions from the plurality of models to be scheduled; determining, according to the configuration information of the second model, a second running platform for running the second model; and sending an execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.

According to an embodiment of the present disclosure, the method further comprises: receiving, from the first running platform, a first output result and a first log file produced by executing the execution file of the first model; receiving, from the second running platform, a second output result and a second log file produced by executing the execution file of the second model; and storing the first output result, the first log file, the second output result, and the second log file.

According to an embodiment of the present disclosure, the method further comprises: while the second running platform executes the execution file of the second model, providing the second running platform with data stored by the scheduling platform, so as to realize data sharing among different running platforms in the process of running the plurality of models to be scheduled.

According to an embodiment of the present disclosure, selecting, according to the dependency relations among the plurality of models to be scheduled, a second model that meets the running conditions from the plurality of models to be scheduled comprises: determining one or more unrun models among the plurality of models to be scheduled; determining, according to the dependency relations among the plurality of models to be scheduled, whether the predecessor models on which each of the one or more unrun models depends have finished running; and determining an unrun model whose predecessor models have all finished running as the second model that meets the running conditions.

According to an embodiment of the present disclosure, the method further comprises: before acquiring the scheduling task, acquiring a registration request for registering the plurality of models to be scheduled; and in response to the registration request, storing the execution files corresponding to the plurality of models to be scheduled in a model library.

Another aspect of the present disclosure provides an apparatus for scheduling models applied to a scheduling platform, comprising: a first acquisition module, configured to acquire a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relations among the plurality of models to be scheduled, the plurality of models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and models written in different languages have different configuration information; a first selection module, configured to select, according to scheduling logic, a first model that meets running conditions from the plurality of models to be scheduled; a first determination module, configured to determine, according to the configuration information of the first model, a first running platform for running the first model; and a first sending module, configured to send an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.

According to an embodiment of the present disclosure, the apparatus further comprises: a first receiving module, configured to receive, from the first running platform, status information on running the first model; a second selection module, configured to select, in a case where the status information indicates that the first model has finished running, a second model that meets the running conditions from the plurality of models to be scheduled according to the dependency relations among the plurality of models to be scheduled; a second determination module, configured to determine, according to the configuration information of the second model, a second running platform for running the second model; and a second sending module, configured to send an execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.

According to an embodiment of the present disclosure, the apparatus further comprises: a second receiving module, configured to receive, from the first running platform, a first output result and a first log file produced by executing the execution file of the first model; a third receiving module, configured to receive, from the second running platform, a second output result and a second log file produced by executing the execution file of the second model; and a first storage module, configured to store the first output result, the first log file, the second output result, and the second log file.

According to an embodiment of the present disclosure, the apparatus further comprises: a sharing module, configured to provide the second running platform with data stored by the scheduling platform while the second running platform executes the execution file of the second model, so as to realize data sharing among different running platforms in the process of running the plurality of models to be scheduled.

According to an embodiment of the present disclosure, the second selection module is configured to: determine one or more unrun models among the plurality of models to be scheduled; determine, according to the dependency relations among the plurality of models to be scheduled, whether the predecessor models on which each of the one or more unrun models depends have finished running; and determine an unrun model whose predecessor models have all finished running as the second model that meets the running conditions.

According to an embodiment of the present disclosure, the apparatus further comprises: a second acquisition module, configured to acquire, before the scheduling task is acquired, a registration request for registering the plurality of models to be scheduled; and a second storage module, configured to store, in response to the registration request, the execution files corresponding to the plurality of models to be scheduled in a model library.

Another aspect of the present disclosure provides a computer system, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.

Another aspect of the present disclosure provides a computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to implement the method described above.

Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, are used to implement the method described above.

Brief Description of the Drawings

The above and other objects, features, and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:

Figure 1 schematically illustrates an exemplary system architecture to which the method and apparatus for scheduling models applied to a scheduling platform may be applied according to embodiments of the present disclosure;

Figure 2 schematically illustrates a flowchart of a method for scheduling models applied to a scheduling platform according to an embodiment of the present disclosure;

Figure 3 schematically illustrates a flowchart of a method for scheduling models applied to a scheduling platform according to another embodiment of the present disclosure;

Figure 4 schematically illustrates a schematic diagram of a method for scheduling models applied to a scheduling platform according to another embodiment of the present disclosure;

Figure 5 schematically illustrates a flowchart of a method for selecting, according to the dependency relations among a plurality of models to be scheduled, a second model that meets running conditions from the plurality of models to be scheduled, according to an embodiment of the present disclosure;

Figure 6 schematically illustrates a flowchart of a method for scheduling models applied to a scheduling platform according to another embodiment of the present disclosure;

Figure 7 schematically illustrates a block diagram of an apparatus for scheduling models applied to a scheduling platform according to an embodiment of the present disclosure; and

Figure 8 schematically illustrates a block diagram of a computer system suitable for implementing the method described above according to an embodiment of the present disclosure.

Detailed Description

Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are exemplary only and are not intended to limit the scope of the present disclosure. In the following detailed description, for convenience of explanation, numerous specific details are set forth to provide a comprehensive understanding of the embodiments of the present disclosure. It will be apparent, however, that one or more embodiments may be practiced without these specific details. Furthermore, in the following description, descriptions of well-known structures and techniques are omitted to avoid unnecessarily obscuring the concepts of the present disclosure.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The terms "comprise", "include", and the like, as used herein, indicate the presence of the stated features, steps, operations, and/or components, but do not exclude the presence or addition of one or more other features, steps, operations, or components.

All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.

Where an expression similar to "at least one of A, B, and C" is used, it should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B, and C" shall include, but is not limited to, a system having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Where an expression similar to "at least one of A, B, or C" is used, it should likewise be interpreted in accordance with the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B, or C" shall include, but is not limited to, a system having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.).

Embodiments of the present disclosure provide a method for scheduling models applied to a scheduling platform, an apparatus for scheduling models applied to a scheduling platform, a computer system, and a computer-readable storage medium. The method comprises: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relations among the plurality of models to be scheduled, the plurality of models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and models written in different languages have different configuration information; selecting, according to scheduling logic, a first model that meets running conditions from the plurality of models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.

Figure 1 schematically illustrates an exemplary system architecture to which the method and apparatus for scheduling models applied to a scheduling platform may be applied according to embodiments of the present disclosure. It should be noted that Figure 1 is only an example of a system architecture to which embodiments of the present disclosure can be applied, provided to help those skilled in the art understand the technical content of the present disclosure; it does not mean that embodiments of the present disclosure cannot be used in other devices, systems, environments, or scenarios.

As shown in Figure 1, the system architecture 100 according to this embodiment may include a terminal device 101, a network 102, a network 104, a scheduling platform 103, a running platform 105, and a running platform 106. The networks 102 and 104 serve as media for providing communication links between the terminal device 101, the scheduling platform 103, and the running platforms 105 and 106. The networks 102 and 104 may include various connection types, such as wired and/or wireless communication links.

The terminal device 101 may be any of various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.

A user can use the terminal device 101 to interact with the scheduling platform 103 through the network 102, so as to receive or send messages, etc. Various communication client applications may be installed on the terminal device 101, such as web browser applications, search applications, instant messaging tools, email clients, and/or social platform software (these are merely examples). Through a browser application, the user can register the models that need to be run; during registration, information such as the model's code and description file can be uploaded to the scheduling platform 103 through the network 102.

The scheduling platform 103 may be composed of one or more servers, and may be a background management server that supports websites browsed by users via the terminal device 101 (merely an example). The background management server can analyze and otherwise process received data such as user requests, and feed the processing results (for example, web pages, information, or data acquired or generated according to the user requests) back to the terminal device.

The scheduling platform 103 can send a model's execution file to the running platform 105 and/or 106 through the network 104, so that the running platform 105 and/or 106 can execute the model's execution file. The execution file can include the model's code, thereby achieving the effect of running the model.

The running platforms 105 and/or 106 may be composed of one or more servers, and can support the running of scripts in multiple languages.

It should be noted that the method for scheduling models provided by the embodiments of the present disclosure can generally be executed by the scheduling platform 103. Correspondingly, the apparatus for scheduling models provided by the embodiments of the present disclosure can generally be provided in the scheduling platform 103.

It should be understood that the numbers of terminal devices, networks, and servers in Figure 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.

Figure 2 schematically illustrates a flowchart of a method for scheduling models applied to a scheduling platform according to an embodiment of the present disclosure.

As shown in Figure 2, the method includes operations S210 to S240.

In operation S210, a scheduling task is acquired, wherein the scheduling task includes a plurality of models to be scheduled and dependency relations among the plurality of models to be scheduled; the plurality of models to be scheduled include at least two models written in different languages or at least two models written in the same language, and models written in different languages have different configuration information.
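A scheduling task of this kind can be thought of as a small directed acyclic graph: models as nodes, dependency relations as edges. The following sketch illustrates one possible representation; the class and field names are illustrative assumptions, not structures defined by this disclosure.

```python
# Hypothetical sketch of a scheduling task: models plus the dependency
# relations among them, forming a small directed acyclic graph.

class Model:
    def __init__(self, model_id, language):
        self.model_id = model_id
        self.language = language  # e.g. "sql", "javascript", "python"

class SchedulingTask:
    def __init__(self):
        self.models = {}       # model_id -> Model
        self.depends_on = {}   # model_id -> set of prerequisite model_ids

    def add_model(self, model):
        self.models[model.model_id] = model
        self.depends_on.setdefault(model.model_id, set())

    def add_dependency(self, model_id, prerequisite_id):
        # `model_id` consumes the output of `prerequisite_id`
        self.depends_on[model_id].add(prerequisite_id)

    def initial_models(self):
        # Models with no prerequisites can be dispatched first
        # (candidates for the "first model" of operation S220).
        return [m for m, deps in self.depends_on.items() if not deps]

# Example: a SQL preprocessing model feeding a Python prediction model.
task = SchedulingTask()
task.add_model(Model("clean", "sql"))
task.add_model(Model("predict", "python"))
task.add_dependency("predict", "clean")
```

With this shape, `task.initial_models()` yields `["clean"]`, matching the idea that a model depending on no other output is eligible to run first.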

According to an embodiment of the present disclosure, the scheduling task may be generated based on the user's operations on a client, and the client may send the result obtained in response to the user's operations to the scheduling platform.

For example, on the client's visual interface, a user can connect multiple models to be scheduled that are ready for execution by dragging them and drawing connections between them, generate a scheduling task, and then send it to the scheduling platform.

According to an embodiment of the present disclosure, before the scheduling task is acquired, a registration request for registering the plurality of models to be scheduled may first be acquired, and in response to the registration request, the execution files corresponding to the plurality of models to be scheduled are stored in a model library.

According to an embodiment of the present disclosure, the execution file corresponding to a model to be scheduled may include information such as the model's code file and description file, and the models can be managed in the model library.
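A minimal model-library sketch of this registration step might look as follows; the class name, method names, and stored fields are assumptions made for illustration.

```python
# Hypothetical model library: a registration request stores each model's
# execution file (here simplified to code plus a description) keyed by
# model ID, so the scheduling platform can later fetch and dispatch it.

class ModelLibrary:
    def __init__(self):
        self._store = {}  # model_id -> {"code": ..., "description": ...}

    def register(self, model_id, code, description):
        # Handle a registration request by persisting the execution file.
        self._store[model_id] = {"code": code, "description": description}

    def execution_file(self, model_id):
        # Retrieve the stored execution file for dispatching.
        return self._store[model_id]

library = ModelLibrary()
library.register("predict", code="print('running model')",
                 description="demo prediction model")
```

In a real system the stored payload would be the actual code and description files uploaded from the client rather than inline strings.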

According to an embodiment of the present disclosure, users can write the models to be scheduled on their own modeling platforms. Depending on the modeling platform, the available programming languages may differ, and the resulting model script types also differ. For example, writing in SQL yields an SQL script, writing in JavaScript yields a JavaScript script, and writing in Python yields a Python script.

According to an embodiment of the present disclosure, each model has corresponding configuration information; for example, the configuration information of each model includes, but is not limited to, the model ID, name, description, parameters, and running environment information.
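An illustrative configuration record covering those fields might look like the following; the exact schema, key names, and values are assumptions, since the disclosure only lists the kinds of information involved.

```python
# Hypothetical configuration record for one model. The top-level fields
# mirror the examples in the text (ID, name, description, parameters,
# running environment information); the nested structure is assumed.
model_config = {
    "model_id": "m-001",
    "name": "trend_predictor",
    "description": "predicts data trends from big data",
    "parameters": {"window": 30},
    "runtime": {"script_type": "python", "version": "3.8"},
}
```

The `runtime` entry carries the script-type information that operation S230 later uses to pick a running platform.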

In operation S220, a first model that meets running conditions is selected from the plurality of models to be scheduled according to scheduling logic.

According to an embodiment of the present disclosure, the first model may be the first model that needs to be executed after the scheduling task is submitted, or it may be a model in the scheduling task that does not depend on the output of other models as its input.

According to an embodiment of the present disclosure, there may be more than one first model. In other words, the scheduling platform can dispatch multiple models simultaneously to the same or different running platforms to run in parallel. When the models' scripts differ and require different running environments, multiple models can be dispatched simultaneously to different running platforms. For example, a model with an SQL script is dispatched to a running platform that provides a SparkSQL running environment, a model with a JavaScript script is dispatched to a running platform that provides a JavaScript engine, and a model with a Python script is dispatched to a running platform that provides a Python container.
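The script-type-to-platform dispatch described above can be sketched as a simple lookup; the platform identifiers here are placeholders, following the SQL/JavaScript/Python examples in the text.

```python
# Hypothetical mapping from a model's script type to the running platform
# that provides the matching environment (SparkSQL, a JavaScript engine,
# or a Python container, per the examples in the text).
PLATFORM_BY_SCRIPT = {
    "sql": "sparksql-platform",
    "javascript": "js-engine-platform",
    "python": "python-container-platform",
}

def pick_platform(config):
    """Determine the running platform from a model's configuration
    information (here, its script type)."""
    script_type = config["runtime"]["script_type"]
    return PLATFORM_BY_SCRIPT[script_type]

example_config = {"runtime": {"script_type": "python"}}
```

Given `example_config`, `pick_platform` would select the Python-container platform, which is the essence of operation S230 below.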

In operation S230, a first running platform for running the first model is determined according to the configuration information of the first model.

According to an embodiment of the present disclosure, the configuration information of the first model may be the model's script type information, and the running platform capable of running the model is determined according to the model's script type.

根据本公开的实施例,不同语言编写的模型可以由不同的运行平台运行。According to embodiments of the present disclosure, models written in different languages can be run by different running platforms.
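Operation S230 can be sketched as a lookup from script type to running platform, following the sparkSQL / JavaScript engine / python container examples above. The platform names and the `pick_platform` helper are illustrative assumptions, not identifiers from the disclosure:

```python
# Illustrative mapping from a model's script type to a running platform.
# The platform names are placeholders for this sketch.
PLATFORMS = {
    "sql": "sparkSQL-platform",
    "javascript": "js-engine-platform",
    "python": "python-container-platform",
}

def pick_platform(script_type: str) -> str:
    """Return a running platform able to execute the given script type."""
    try:
        return PLATFORMS[script_type]
    except KeyError:
        raise ValueError(f"no running platform registered for {script_type!r}")

print(pick_platform("sql"))  # sparkSQL-platform
```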

在操作S240,向第一运行平台发送第一模型的执行文件,以使得第一运行平台执行第一模型的执行文件。In operation S240, the execution file of the first model is sent to the first running platform, so that the first running platform executes the execution file of the first model.

通过本公开的实施例,通过调度平台调度多个模型可以完成业务目标,多个模型可以是不同语言编写的模型,将不同模型调度给不同的运行平台运行,可以实现跨进程、跨平台模型的调度和协同,支持不同语言的模型运行,使多个模型能够协同完成业务数据的处理。因此,能够解决相关技术中建模平台一般只能进行单个模型的调度,缺乏对多个模型的协同调度能力,导致业务无法开展的问题。Through the embodiments of the present disclosure, business goals can be achieved by scheduling multiple models through the scheduling platform. The multiple models may be written in different languages, and scheduling different models to different running platforms enables cross-process, cross-platform model scheduling and collaboration, supports running models written in different languages, and allows multiple models to cooperatively process business data. This solves the problem in the related art that a modeling platform can generally only schedule a single model and lacks the capability to collaboratively schedule multiple models, leaving the business unable to proceed.

根据本公开的实施例,调度平台可以从第一运行平台接收运行第一模型的状态信息,在状态信息表征第一模型运行完成的情况下,根据多个待调度模型之间的依赖关系从多个待调度模型中选择符合运行条件的第二模型,根据第二模型的配置信息确定用于运行第二模型的第二运行平台,向第二运行平台发送第二模型的执行文件,以使得第二运行平台执行第二模型的执行文件。According to embodiments of the present disclosure, the scheduling platform may receive, from the first running platform, status information on running the first model. When the status information indicates that the first model has finished running, the scheduling platform selects, based on the dependencies among the multiple to-be-scheduled models, a second model that meets the running conditions, determines a second running platform for running the second model according to the second model's configuration information, and sends the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.

根据本公开的实施例,调度平台可以对模型的输入、输出、状态更新等行为进行抽象,并提供规范化的接口供运行平台运行模型时调用,同时提供模型依赖链的描述规范。According to embodiments of the present disclosure, the scheduling platform can abstract the input, output, status update and other behaviors of the model, and provide standardized interfaces for the running platform to call when running the model, and also provide description specifications of the model dependency chain.

下面参考图3~图5,结合具体实施例对本公开所提供的方法做进一步说明。The method provided by the present disclosure will be further described below with reference to Figures 3 to 5 in conjunction with specific embodiments.

图3示意性示出了根据本公开另一实施例的应用于调度平台的调度模型的方法的流程图。FIG. 3 schematically shows a flowchart of a method applied to a scheduling model of a scheduling platform according to another embodiment of the present disclosure.

如图3所示,该方法包括操作S310~S330。As shown in Figure 3, the method includes operations S310 to S330.

在操作S310,从第一运行平台接收执行第一模型的执行文件的第一输出结果和第一日志文件。In operation S310, a first output result of executing the execution file of the first model and a first log file are received from the first running platform.

在操作S320,从第二运行平台接收执行第二模型的执行文件的第二输出结果和第二日志文件。In operation S320, a second output result of executing the execution file of the second model and a second log file are received from the second running platform.

在操作S330,存储第一输出结果、第一日志文件、第二输出结果和第二日志文件。In operation S330, the first output result, the first log file, the second output result and the second log file are stored.

根据本公开的实施例,在第二运行平台执行第二模型的执行文件的过程中,可以向第二运行平台提供调度平台存储的数据,以实现不同运行平台运行多个待调度模型的过程中数据共享。According to embodiments of the present disclosure, while the second running platform executes the execution file of the second model, data stored by the scheduling platform can be provided to the second running platform, so as to achieve data sharing among the multiple to-be-scheduled models running on different running platforms.

图4示意性示出了根据本公开另一实施例的应用于调度平台的调度模型的方法的示意图。Figure 4 schematically shows a schematic diagram of a method applied to a scheduling model of a scheduling platform according to another embodiment of the present disclosure.

如图4所示,调度平台402具有依赖的数据库401,模型运行时,可以借助基础能力从数据库读取数据。调度平台402可以提供多种服务,例如,基础服务、数据读写服务、日志服务、状态服务和任务调度服务等。不同的运行平台403可以从调度平台402获取信息,例如,第二运行平台可以从调度平台402获取第一运行平台的状态信息,获取第一运行平台的输出结果等等。As shown in Figure 4, the scheduling platform 402 depends on a database 401; when a model runs, it can read data from the database through the platform's basic capabilities. The scheduling platform 402 can provide a variety of services, for example, a basic service, a data read/write service, a log service, a status service and a task scheduling service. Different running platforms 403 can obtain information from the scheduling platform 402; for example, the second running platform can obtain from the scheduling platform 402 the status information of the first running platform, the output results of the first running platform, and so on.

根据本公开的实施例,基础服务可以是用于提供多个待调度模型之间的数据共享服务。According to embodiments of the present disclosure, the basic service may be used to provide a data sharing service between multiple to-be-scheduled models.

根据本公开的实施例,第一运行平台~第三运行平台可以将输出结果、日志文件、状态信息等等信息反馈给调度平台402。调度平台402可以给运行平台提供信息共享的功能。调度平台402可以将不同的模型调度给不同的运行平台执行。According to embodiments of the present disclosure, the first to third running platforms can feed information such as output results, log files and status information back to the scheduling platform 402. The scheduling platform 402 can provide the running platforms with an information sharing function, and can schedule different models to different running platforms for execution.

图5示意性示出了根据本公开实施例的根据多个待调度模型之间的依赖关系从多个待调度模型中选择符合运行条件的第二模型的方法的流程图。FIG. 5 schematically shows a flowchart of a method for selecting a second model that meets operating conditions from multiple to-be-scheduled models based on dependencies between the multiple to-be-scheduled models according to an embodiment of the present disclosure.

如图5所示,该方法包括操作S510~S530。As shown in Figure 5, the method includes operations S510 to S530.

在操作S510,确定多个待调度模型中的一个或多个未运行模型。In operation S510, one or more unrun models among the plurality of models to be scheduled are determined.

根据本公开的实施例,可以在第一模型已经运行完成的情况下,再确定多个待调度模型中的一个或多个未运行模型。According to embodiments of the present disclosure, one or more unrun models among the multiple to-be-scheduled models may be determined after the first model has been run.

在操作S520,根据多个待调度模型之间的依赖关系确定一个或多个未运行模型各自依赖的前置模型是否运行完成。In operation S520, it is determined, based on the dependencies among the multiple to-be-scheduled models, whether the pre-models on which each of the one or more unrun models depends have finished running.

根据本公开的实施例,多个待调度模型在逻辑上可以构成有向无环图,多个待调度模型之间的依赖关系例如可以是在后模型的输入是在前模型的输出,只有在前模型已经运行完成,在后模型才能开始运行。According to embodiments of the present disclosure, the multiple to-be-scheduled models may logically form a directed acyclic graph. The dependency between models may be, for example, that the input of a later model is the output of an earlier model, so the later model can start running only after the earlier model has finished running.

在操作S530,将依赖的前置模型运行完成的未运行模型确定为符合运行条件的第二模型。In operation S530, an unrun model whose dependent pre-models have all finished running is determined as a second model that meets the running conditions.

根据本公开的实施例,第二模型的个数可以包括多个,可以将多个第二模型同时发送给不同的运行平台运行。According to embodiments of the present disclosure, there may be multiple second models, and the multiple second models may be sent simultaneously to different running platforms to run.
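Operations S510–S530 can be condensed into a short sketch: scan the unrun models and keep those whose pre-models have all completed. The function name and data shapes are assumptions made for illustration only:

```python
def ready_models(dependencies: dict, completed: set) -> list:
    """Select models that meet the running condition (sketch of S510-S530).

    dependencies maps each to-be-scheduled model to the set of pre-models
    it depends on (logically a directed acyclic graph); completed holds the
    models that have finished running. A model is ready when it has not run
    yet and every pre-model it depends on has finished.
    """
    return [m for m, deps in dependencies.items()
            if m not in completed and deps <= completed]

# Example: B and C both depend on A; D depends on both B and C.
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(ready_models(deps, completed={"A"}))  # B and C both become runnable
```

Note that more than one model can become runnable at once, matching the point above that multiple second models may be dispatched to different running platforms simultaneously.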

图6示意性示出了根据本公开实施例的应用于调度平台的调度模型的示例方法的流程图。FIG. 6 schematically illustrates a flowchart of an example method applied to a scheduling model of a scheduling platform according to an embodiment of the present disclosure.

如图6所示,该方法包括操作S610~S670。As shown in Figure 6, the method includes operations S610 to S670.

在操作S610,调度平台可以获取来自客户端的调度任务。In operation S610, the scheduling platform may obtain the scheduling task from the client.

在操作S620,调度平台通过调度逻辑选择符合运行条件的模型,判断当前任务中是否存在未运行的模型,如果不存在,则执行操作S630,标记当前任务运行成功,结束任务。否则进入下一步操作S640。In operation S620, the scheduling platform selects a model that meets the running conditions through scheduling logic, and determines whether there is an unrun model in the current task. If it does not exist, operation S630 is performed to mark the current task as running successfully and end the task. Otherwise, proceed to the next step S640.

在操作S640,调度平台判断未运行的模型是否不存在前置模型、或者其前置模型是否都已运行成功,如果是,则执行操作S650,提交当前模型。In operation S640, the scheduling platform determines whether an unrun model has no pre-model, or whether all of its pre-models have run successfully. If so, operation S650 is performed to submit the current model.

在操作S650,模型提交后,运行平台可以以异步方式运行,首先从调度平台获取模型参数和输入数据。In operation S650, after the model is submitted, the running platform can run in an asynchronous manner, and first obtains model parameters and input data from the scheduling platform.

在操作S660,调度平台可以把调度平台的基础能力、模型参数等文件一起封装成运行平台能够识别的任务包,调度平台的基础能力通过接口方式提供给其他运行平台,这些接口被封装到SDK中,模型通过SDK中提供的接口来调用调度平台的基础能力。根据本公开的实施例,在将任务提交到运行平台运行之后,可以跳转至操作S620,进行后续调度。In operation S660, the scheduling platform can package files such as its basic capabilities and the model parameters into a task package that the running platform can recognize. The basic capabilities of the scheduling platform are provided to the running platforms through interfaces, and these interfaces are encapsulated in an SDK; a model invokes the scheduling platform's basic capabilities through the interfaces provided in the SDK. According to embodiments of the present disclosure, after the task is submitted to the running platform for execution, the flow can jump back to operation S620 for subsequent scheduling.
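The loop formed by operations S620–S660 can be sketched as follows. This is a simplified, synchronous illustration only: `submit` stands in for packaging a model with the platform's basic capabilities and handing it to a running platform, which per the description above actually happens asynchronously:

```python
def run_task(dependencies: dict, submit) -> str:
    """Sketch of the S620-S660 scheduling loop (synchronous for clarity).

    Repeatedly pick unrun models whose pre-models have all run successfully
    (S640), submit them (S650/S660), and mark the task successful once no
    unrun model remains (S620/S630).
    """
    completed = set()
    while len(completed) < len(dependencies):     # S620: any unrun model left?
        ready = [m for m, deps in dependencies.items()
                 if m not in completed and deps <= completed]   # S640
        if not ready:
            raise RuntimeError("unsatisfiable or cyclic dependencies")
        for model in ready:
            submit(model)                          # S650/S660: hand off to a platform
            completed.add(model)
    return "success"                               # S630: mark task successful
```

For a task where B depends on A, `run_task({"A": set(), "B": {"A"}}, print)` submits A before B and then reports success.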

在操作S670,运行平台执行封装好的任务包。运行平台可以实时反馈状态和记录日志,最后存储模型生成的数据。具体地,运行平台可以通过SDK调用调度平台的状态服务上报运行状态,调用调度平台的日志服务存储运行日志,调用调度平台的数据服务获取输入数据和保存输出数据。模型运行过程中可以通过调度平台的基础服务实现与其他模型的信息共享和通信。In operation S670, the running platform executes the packaged task package. The running platform can feed back its status and record logs in real time, and finally store the data generated by the model. Specifically, through the SDK the running platform can call the scheduling platform's status service to report the running status, call its log service to store the running logs, and call its data service to obtain input data and save output data. While a model is running, it can share information and communicate with other models through the basic services of the scheduling platform.
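The service calls described in operation S670 could be wrapped by an SDK facade along these lines. Every class, method, and service name here is an assumption made for illustration; the disclosure does not specify the SDK's actual API:

```python
class SchedulerSDK:
    """Illustrative facade over the scheduling platform's services
    (status, log, data); all names are assumptions for this sketch."""

    def __init__(self, task_id: str, transport):
        self.task_id = task_id
        self.transport = transport  # in practice e.g. an RPC/HTTP client

    def report_status(self, model_id: str, status: str):
        # Status service: report the model's running status.
        self.transport.send("status", {"task": self.task_id,
                                       "model": model_id, "status": status})

    def write_log(self, model_id: str, line: str):
        # Log service: store a running-log line.
        self.transport.send("log", {"model": model_id, "line": line})

    def read_input(self, key: str):
        # Data service: obtain input data (possibly another model's output).
        return self.transport.fetch("data", key)

    def save_output(self, key: str, value):
        # Data service: persist output data for downstream models.
        self.transport.send("data", {"key": key, "value": value})
```

A model script would only see these interfaces, which is what lets models written in different languages share state through the scheduling platform rather than talking to each other directly.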

根据本公开的实施例,调度平台可以负责模型交互、状态收集、日志收集、模型中间结果的持久化等,根据规范把多个模型打包成任务分配给运行平台执行。通过本公开的实施例,通过调度平台调度多个模型可以完成业务目标,多个模型可以是不同语言编写的模型,将不同模型调度给不同的运行平台运行,可以实现跨进程、跨平台模型的调度和协同,支持不同语言的模型运行,使多个模型能够协同完成业务数据的处理。解决了相关技术中建模平台一般只能进行单个模型的调度,缺乏对多个模型的协同调度能力,导致业务无法开展的问题。According to embodiments of the present disclosure, the scheduling platform can be responsible for model interaction, status collection, log collection, persistence of intermediate model results, and so on, packaging multiple models into tasks according to the specification and assigning them to running platforms for execution. Through the embodiments of the present disclosure, business goals can be achieved by scheduling multiple models through the scheduling platform. The multiple models may be written in different languages, and scheduling different models to different running platforms enables cross-process, cross-platform model scheduling and collaboration, supports running models written in different languages, and allows multiple models to cooperatively process business data. This solves the problem in the related art that a modeling platform can generally only schedule a single model and lacks the capability to collaboratively schedule multiple models, leaving the business unable to proceed.

图7示意性示出了根据本公开实施例的应用于调度平台的调度模型的装置的框图。Figure 7 schematically shows a block diagram of a device for a scheduling model applied to a scheduling platform according to an embodiment of the present disclosure.

如图7所示,应用于调度平台的调度模型的装置700包括第一获取模块710、第一选择模块720、第一确定模块730和第一发送模块740。As shown in FIG. 7 , the device 700 applied to the scheduling model of the scheduling platform includes a first acquisition module 710 , a first selection module 720 , a first determination module 730 and a first sending module 740 .

第一获取模块710用于获取调度任务,其中,调度任务包括多个待调度模型和多个待调度模型之间的依赖关系,多个待调度模型中至少包括两个由不同语言编写的模型或者至少包括两个由同一种语言编写的模型,由不同语言编写的模型的配置信息不同。The first acquisition module 710 is used to acquire a scheduling task, where the scheduling task includes multiple to-be-scheduled models and the dependencies among them; the multiple to-be-scheduled models include at least two models written in different languages, or at least two models written in the same language, and models written in different languages have different configuration information.

第一选择模块720用于根据调度逻辑从多个待调度模型中选择符合运行条件的第一模型。The first selection module 720 is used to select a first model that meets the operating conditions from multiple to-be-scheduled models according to the scheduling logic.

第一确定模块730用于根据第一模型的配置信息确定用于运行第一模型的第一运行平台。The first determining module 730 is configured to determine a first running platform for running the first model according to the configuration information of the first model.

第一发送模块740用于向第一运行平台发送第一模型的执行文件,以使得第一运行平台执行第一模型的执行文件。The first sending module 740 is used to send the execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.

通过本公开的实施例,通过调度平台调度多个模型可以完成业务目标,多个模型可以是不同语言编写的模型,将不同模型调度给不同的运行平台运行,可以实现跨进程、跨平台模型的调度和协同,支持不同语言的模型运行,使多个模型能够协同完成业务数据的处理。解决了相关技术中建模平台一般只能进行单个模型的调度,缺乏对多个模型的协同调度能力,导致业务无法开展的问题。Through the embodiments of the present disclosure, business goals can be achieved by scheduling multiple models through the scheduling platform. The multiple models may be written in different languages, and scheduling different models to different running platforms enables cross-process, cross-platform model scheduling and collaboration, supports running models written in different languages, and allows multiple models to cooperatively process business data. This solves the problem in the related art that a modeling platform can generally only schedule a single model and lacks the capability to collaboratively schedule multiple models, leaving the business unable to proceed.

应用于调度平台的调度模型的装置700还包括第一接收模块、第二选择模块、第二确定模块和第二发送模块。The device 700 applied to the scheduling model of the scheduling platform also includes a first receiving module, a second selecting module, a second determining module and a second sending module.

第一接收模块用于接收来自第一运行平台运行第一模型的状态信息。The first receiving module is used to receive status information from the first running platform running the first model.

第二选择模块用于在状态信息表征第一模型运行完成的情况下,根据多个待调度模型之间的依赖关系从多个待调度模型中选择符合运行条件的第二模型。The second selection module is configured to select a second model that meets the operating conditions from multiple to-be-scheduled models based on dependencies between the multiple to-be-scheduled models when the status information indicates that the first model is completed.

第二确定模块用于根据第二模型的配置信息确定用于运行第二模型的第二运行平台。The second determination module is configured to determine a second running platform for running the second model according to the configuration information of the second model.

第二发送模块用于向第二运行平台发送第二模型的执行文件,以使得第二运行平台执行第二模型的执行文件。The second sending module is used to send the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.

根据本公开的实施例,应用于调度平台的调度模型的装置700还包括第二接收模块、第三接收模块和第一存储模块。According to an embodiment of the present disclosure, the apparatus 700 applied to the scheduling model of the scheduling platform further includes a second receiving module, a third receiving module and a first storage module.

第二接收模块用于接收来自第一运行平台执行第一模型的执行文件的第一输出结果和第一日志文件。The second receiving module is configured to receive the first output result and the first log file from the execution file of the first running platform executing the first model.

第三接收模块用于接收来自第二运行平台执行第二模型的执行文件的第二输出结果和第二日志文件。The third receiving module is configured to receive the second output result and the second log file from the execution file of the second model executed by the second running platform.

第一存储模块用于存储第一输出结果、第一日志文件、第二输出结果和第二日志文件。The first storage module is used to store the first output result, the first log file, the second output result and the second log file.

根据本公开的实施例,应用于调度平台的调度模型的装置700还包括共享模块,用于在第二运行平台执行第二模型的执行文件的过程中,向第二运行平台提供调度平台存储的数据,以实现不同运行平台运行多个待调度模型的过程中数据共享。According to an embodiment of the present disclosure, the device 700 applied to the scheduling model of the scheduling platform further includes a sharing module, configured to provide data stored by the scheduling platform to the second running platform while the second running platform executes the execution file of the second model, so as to achieve data sharing among the multiple to-be-scheduled models running on different running platforms.

根据本公开的实施例,第二选择模块用于确定多个待调度模型中的一个或多个未运行模型;根据多个待调度模型之间的依赖关系确定一个或多个未运行模型各自依赖的前置模型是否运行完成;以及将依赖的前置模型运行完成的未运行模型确定为符合运行条件的第二模型。According to an embodiment of the present disclosure, the second selection module is used to: determine one or more unrun models among the multiple to-be-scheduled models; determine, based on the dependencies among the multiple to-be-scheduled models, whether the pre-models on which each of the one or more unrun models depends have finished running; and determine an unrun model whose dependent pre-models have all finished running as a second model that meets the running conditions.

根据本公开的实施例,应用于调度平台的调度模型的装置700还包括第二获取模块和第二存储模块。According to an embodiment of the present disclosure, the device 700 applied to the scheduling model of the scheduling platform further includes a second acquisition module and a second storage module.

第二获取模块用于在获取调度任务之前,获取用于注册多个待调度模型的注册请求。The second acquisition module is used to acquire registration requests for registering multiple models to be scheduled before acquiring the scheduling task.

第二存储模块用于响应于注册请求,将多个待调度模型对应的执行文件存储在模型库中。The second storage module is configured to respond to the registration request and store execution files corresponding to multiple to-be-scheduled models in the model library.

根据本公开的实施例的模块、子模块、单元、子单元中的任意多个、或其中任意多个的至少部分功能可以在一个模块中实现。根据本公开实施例的模块、子模块、单元、子单元中的任意一个或多个可以被拆分成多个模块来实现。根据本公开实施例的模块、子模块、单元、子单元中的任意一个或多个可以至少被部分地实现为硬件电路,例如现场可编程门阵列(FPGA)、可编程逻辑阵列(PLA)、片上系统、基板上的系统、封装上的系统、专用集成电路(ASIC),或可以通过对电路进行集成或封装的任何其他的合理方式的硬件或固件来实现,或以软件、硬件以及固件三种实现方式中任意一种或以其中任意几种的适当组合来实现。或者,根据本公开实施例的模块、子模块、单元、子单元中的一个或多个可以至少被部分地实现为计算机程序模块,当该计算机程序模块被运行时,可以执行相应的功能。Any number of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of them may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, an application specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging circuits, or by any one of, or an appropriate combination of, the three implementation manners of software, hardware and firmware. Alternatively, one or more of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure may be at least partially implemented as a computer program module which, when executed, can perform the corresponding functions.

例如,第一获取模块710、第一选择模块720、第一确定模块730和第一发送模块740中的任意多个可以合并在一个模块/单元/子单元中实现,或者其中的任意一个模块/单元/子单元可以被拆分成多个模块/单元/子单元。或者,这些模块/单元/子单元中的一个或多个模块/单元/子单元的至少部分功能可以与其他模块/单元/子单元的至少部分功能相结合,并在一个模块/单元/子单元中实现。根据本公开的实施例,第一获取模块710、第一选择模块720、第一确定模块730和第一发送模块740中的至少一个可以至少被部分地实现为硬件电路,例如现场可编程门阵列(FPGA)、可编程逻辑阵列(PLA)、片上系统、基板上的系统、封装上的系统、专用集成电路(ASIC),或可以通过对电路进行集成或封装的任何其他的合理方式等硬件或固件来实现,或以软件、硬件以及固件三种实现方式中任意一种或以其中任意几种的适当组合来实现。或者,第一获取模块710、第一选择模块720、第一确定模块730和第一发送模块740中的至少一个可以至少被部分地实现为计算机程序模块,当该计算机程序模块被运行时,可以执行相应的功能。For example, any number of the first acquisition module 710, the first selection module 720, the first determination module 730 and the first sending module 740 may be combined and implemented in one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functions of one or more of these modules/units/sub-units may be combined with at least part of the functions of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the first acquisition module 710, the first selection module 720, the first determination module 730 and the first sending module 740 may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, an application specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging circuits, or by any one of, or an appropriate combination of, the three implementation manners of software, hardware and firmware. Alternatively, at least one of the first acquisition module 710, the first selection module 720, the first determination module 730 and the first sending module 740 may be at least partially implemented as a computer program module which, when executed, can perform the corresponding functions.

需要说明的是,本公开的实施例中利用调度平台调度模型的装置部分与本公开的实施例中利用调度平台调度模型的方法部分是相对应的,利用调度平台调度模型的装置部分的描述具体参考利用调度平台调度模型的方法部分,在此不再赘述。It should be noted that the device part that uses the scheduling platform to schedule models in the embodiments of the present disclosure corresponds to the method part that uses the scheduling platform to schedule models; for the description of the device part, refer to the corresponding method part, which will not be repeated here.

根据本公开的实施例,还提供了一种计算机系统,包括:一个或多个处理器;存储器,用于存储一个或多个程序,其中,当所述一个或多个程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如上所述的方法。According to an embodiment of the present disclosure, a computer system is also provided, including: one or more processors; and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.

根据本公开的实施例,还提供了一种计算机可读存储介质,其上存储有可执行指令,该指令被处理器执行时使处理器实现如上所述的方法。According to an embodiment of the present disclosure, a computer-readable storage medium is also provided, on which executable instructions are stored. When the instructions are executed by a processor, they cause the processor to implement the method as described above.

图8示意性示出了根据本公开实施例的适于实现上文描述的方法的计算机系统的框图。图8示出的计算机系统仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。Figure 8 schematically illustrates a block diagram of a computer system suitable for implementing the method described above, according to an embodiment of the present disclosure. The computer system shown in FIG. 8 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.

如图8所示,根据本公开实施例的计算机系统800包括处理器801,其可以根据存储在只读存储器(ROM)802中的程序或者从存储部分808加载到随机访问存储器(RAM)803中的程序而执行各种适当的动作和处理。处理器801例如可以包括通用微处理器(例如CPU)、指令集处理器和/或相关芯片组和/或专用微处理器(例如,专用集成电路(ASIC)),等等。处理器801还可以包括用于缓存用途的板载存储器。处理器801可以包括用于执行根据本公开实施例的方法流程的不同动作的单一处理单元或者是多个处理单元。As shown in FIG. 8, a computer system 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage portion 808 into a random access memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or a related chipset, and/or a special purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), and so on. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit, or multiple processing units for performing the different actions of the method flow according to embodiments of the present disclosure.

在RAM 803中,存储有系统800操作所需的各种程序和数据。处理器801、ROM 802以及RAM 803通过总线804彼此相连。处理器801通过执行ROM 802和/或RAM 803中的程序来执行根据本公开实施例的方法流程的各种操作。需要注意,所述程序也可以存储在除ROM 802和RAM 803以外的一个或多个存储器中。处理器801也可以通过执行存储在所述一个或多个存储器中的程序来执行根据本公开实施例的方法流程的各种操作。In the RAM 803, various programs and data required for the operation of the system 800 are stored. The processor 801, ROM 802, and RAM 803 are connected to each other through a bus 804. The processor 801 performs various operations according to the method flow of the embodiment of the present disclosure by executing programs in the ROM 802 and/or RAM 803. It should be noted that the program may also be stored in one or more memories other than ROM 802 and RAM 803. The processor 801 may also perform various operations according to the method flow of embodiments of the present disclosure by executing programs stored in the one or more memories.

根据本公开的实施例,系统800还可以包括输入/输出(I/O)接口805,输入/输出(I/O)接口805也连接至总线804。系统800还可以包括连接至I/O接口805的以下部件中的一项或多项:包括键盘、鼠标等的输入部分806;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分807;包括硬盘等的存储部分808;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分809。通信部分809经由诸如因特网的网络执行通信处理。驱动器810也根据需要连接至I/O接口805。可拆卸介质811,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器810上,以便于从其上读出的计算机程序根据需要被安装入存储部分808。According to embodiments of the present disclosure, the system 800 may also include an input/output (I/O) interface 805, which is also connected to the bus 804. The system 800 may further include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, etc.; an output portion 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., as well as a speaker, etc.; a storage portion 808 including a hard disk, etc.; and a communication portion 809 including a network interface card such as a LAN card, a modem, etc. The communication portion 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 810 as needed, so that a computer program read therefrom can be installed into the storage portion 808 as needed.

根据本公开的实施例,根据本公开实施例的方法流程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读存储介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分809从网络上被下载和安装,和/或从可拆卸介质811被安装。在该计算机程序被处理器801执行时,执行本公开实施例的系统中限定的上述功能。根据本公开的实施例,上文描述的系统、设备、装置、模块、单元等可以通过计算机程序模块来实现。According to embodiments of the present disclosure, the method flow according to the embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from the network via the communication portion 809 and/or installed from the removable medium 811. When the computer program is executed by the processor 801, the above-described functions defined in the system of the embodiment of the present disclosure are performed. According to embodiments of the present disclosure, the systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules.

本公开还提供了一种计算机可读存储介质,该计算机可读存储介质可以是上述实施例中描述的设备/装置/系统中所包含的;也可以是单独存在,而未装配入该设备/装置/系统中。上述计算机可读存储介质承载有一个或者多个程序,当上述一个或者多个程序被执行时,实现根据本公开实施例的方法。The present disclosure also provides a computer-readable storage medium. The computer-readable storage medium may be included in the device/apparatus/system described in the above embodiments, or it may exist separately without being assembled into that device/apparatus/system. The above computer-readable storage medium carries one or more programs which, when executed, implement the method according to the embodiments of the present disclosure.

根据本公开的实施例,计算机可读存储介质可以是非易失性的计算机可读存储介质。例如可以包括但不限于:便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, for example, including but not limited to: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in combination with, an instruction execution system, apparatus or device.

例如,根据本公开的实施例,计算机可读存储介质可以包括上文描述的ROM 802和/或RAM 803和/或ROM 802和RAM 803以外的一个或多个存储器。For example, according to embodiments of the present disclosure, the computer-readable storage medium may include one or more memories other than ROM 802 and/or RAM 803 and/or ROM 802 and RAM 803 described above.

附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。本领域技术人员可以理解,本公开的各个实施例和/或权利要求中记载的特征可以进行多种组合和/或结合,即使这样的组合或结合没有明确记载于本公开中。特别地,在不脱离本公开精神和教导的情况下,本公开的各个实施例和/或权利要求中记载的特征可以进行多种组合和/或结合。所有这些组合和/或结合均落入本公开的范围。The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logic functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions. Those skilled in the art will understand that the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, without departing from the spirit and teachings of the present disclosure, the features recited in the various embodiments and/or claims may be combined in various ways, and all such combinations fall within the scope of the present disclosure.

The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that measures in different embodiments cannot be used advantageously in combination. The scope of the present disclosure is defined by the appended claims and their equivalents. Those skilled in the art may make various substitutions and modifications without departing from the scope of the present disclosure, and all such substitutions and modifications shall fall within the scope of the present disclosure.

Claims (10)

1. A method of scheduling models for a scheduling platform, comprising:
acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relationships among the models to be scheduled, the models to be scheduled comprising at least two models written in different languages or at least two models written in the same language, wherein models written in different languages have different configuration information;
selecting, according to scheduling logic, a first model satisfying a running condition from the plurality of models to be scheduled;
determining, according to configuration information of the first model, a first running platform for running the first model; and
sending an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model;
wherein the first model comprises: a model to be executed first among the plurality of models to be scheduled and/or a model in the scheduling task that does not take the output of any other model as its input.
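The scheduling flow recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation; every name (`Model`, `select_first_models`, the dict-based platform registry) is an assumption introduced for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    # All names here are illustrative, not taken from the patent.
    name: str
    language: str    # e.g. "python" or "java"; models written in different
    config: dict     # languages carry different configuration information
    depends_on: list = field(default_factory=list)  # prerequisite model names

def select_first_models(models):
    # Per the last clause of claim 1, the "first" models are those that do
    # not take any other model's output as input (no dependencies).
    return [m for m in models if not m.depends_on]

def determine_platform(model, platforms):
    # The running platform is determined from the model's configuration.
    return platforms[model.config["platform"]]

# A scheduling task: two models in different languages, one dependency.
platforms = {"py-runtime": "PythonPlatform", "jvm": "JavaPlatform"}
task = [
    Model("extract", "python", {"platform": "py-runtime"}),
    Model("score", "java", {"platform": "jvm"}, depends_on=["extract"]),
]
first = select_first_models(task)
# "extract" has no dependencies, so it is dispatched first to its platform.
```

The per-model `config` dict stands in for the language-specific configuration information the claim mentions; the claim itself does not specify its contents.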
2. The method of claim 1, further comprising:
receiving, from the first running platform, state information on running the first model;
selecting, when the state information indicates that the first model has finished running, a second model satisfying the running condition from the plurality of models to be scheduled according to the dependency relationships among the plurality of models to be scheduled;
determining, according to configuration information of the second model, a second running platform for running the second model; and
sending the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.
3. The method of claim 2, further comprising:
receiving, from the first running platform, a first output result and a first log file produced by executing the execution file of the first model;
receiving, from the second running platform, a second output result and a second log file produced by executing the execution file of the second model; and
storing the first output result, the first log file, the second output result, and the second log file.
4. The method of claim 3, further comprising:
providing, while the second running platform executes the execution file of the second model, the data stored by the scheduling platform to the second running platform, so as to achieve data sharing while different running platforms run the plurality of models to be scheduled.
5. The method of claim 2, wherein selecting the second model satisfying the running condition from the plurality of models to be scheduled according to the dependency relationships among the plurality of models to be scheduled comprises:
determining one or more non-running models among the plurality of models to be scheduled;
determining, according to the dependency relationships among the plurality of models to be scheduled, whether the preceding models on which each of the one or more non-running models depends have finished running; and
determining a non-running model whose depended-upon preceding models have all finished running as the second model satisfying the running condition.
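The selection recited in claim 5 amounts to computing the ready set over the dependency graph. A minimal sketch, with function and variable names that are assumptions of this example:

```python
def select_ready(dependencies, finished, running):
    # dependencies: model name -> list of preceding models it depends on.
    # A model satisfies the running condition when it has not run, is not
    # currently running, and all of its preceding models have finished.
    ready = []
    for name, preceding in dependencies.items():
        if name in finished or name in running:
            continue  # already ran, or is running now
        if all(p in finished for p in preceding):
            ready.append(name)
    return ready

# A three-model chain: a -> b -> c (c also depends directly on a).
deps = {"a": [], "b": ["a"], "c": ["a", "b"]}
```

Initially only "a" is ready; once "a" finishes, "b" becomes ready while "c" still waits on "b".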
6. The method of claim 1, further comprising:
acquiring, before acquiring the scheduling task, a registration request for registering the plurality of models to be scheduled; and
storing, in response to the registration request, the execution files corresponding to the plurality of models to be scheduled in a model library.
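The registration step of claim 6 reduces to populating a model library keyed by model name. In this sketch the library is an in-memory dict; persisting to disk or object storage is an assumption the claim leaves open, and all names are illustrative:

```python
def register_models(registration_request, model_library):
    # registration_request: model name -> execution file contents (bytes).
    # In response to the request, each execution file is stored in the
    # model library so the scheduler can later dispatch it to a platform.
    for model_name, execution_file in registration_request.items():
        model_library[model_name] = execution_file
    return model_library

library = register_models(
    {"extract": b"# python source", "score": b"\xca\xfe\xba\xbe"},
    {},
)
```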
7. An apparatus for scheduling models for a scheduling platform, comprising:
a first acquisition module configured to acquire a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and dependency relationships among the models to be scheduled, the models to be scheduled comprising at least two models written in different languages or at least two models written in the same language, wherein models written in different languages have different configuration information;
a first selection module configured to select, according to scheduling logic, a first model satisfying a running condition from the plurality of models to be scheduled;
a first determining module configured to determine, according to configuration information of the first model, a first running platform for running the first model; and
a first sending module configured to send an execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model;
wherein the first model comprises: a model to be executed first among the plurality of models to be scheduled and/or a model in the scheduling task that does not take the output of any other model as its input.
8. The apparatus of claim 7, further comprising:
a first receiving module configured to receive, from the first running platform, state information on running the first model;
a second selection module configured to select, when the state information indicates that the first model has finished running, a second model satisfying the running condition from the plurality of models to be scheduled according to the dependency relationships among the plurality of models to be scheduled;
a second determining module configured to determine, according to configuration information of the second model, a second running platform for running the second model; and
a second sending module configured to send the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.
9. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 6.
10. A computer readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to implement the method of any of claims 1 to 6.
CN201910947251.3A 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model Active CN110717992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910947251.3A CN110717992B (en) 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model


Publications (2)

Publication Number Publication Date
CN110717992A CN110717992A (en) 2020-01-21
CN110717992B true CN110717992B (en) 2023-10-20

Family

ID=69212195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910947251.3A Active CN110717992B (en) 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model

Country Status (1)

Country Link
CN (1) CN110717992B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI825317B (en) * 2020-05-13 2023-12-11 日商Spp科技股份有限公司 Manufacturing process determination device for substrate processing apparatus, substrate processing system, manufacturing process determination method for substrate processing apparatus, computer program, method and program for generating learning model group
CN113741912A (en) * 2020-05-29 2021-12-03 阿里巴巴集团控股有限公司 Model management system, method, device and equipment
CN112685150A (en) * 2020-12-21 2021-04-20 联想(北京)有限公司 Multi-language program execution method, device and storage medium
CN113821314B (en) * 2021-02-24 2025-04-15 北京沃东天骏信息技术有限公司 Dependent object scheduling method, device, computer system and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109271238A (en) * 2017-07-12 2019-01-25 北京京东尚科信息技术有限公司 Support the task scheduling apparatus and method of a variety of programming languages

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8887163B2 (en) * 2010-06-25 2014-11-11 Ebay Inc. Task scheduling based on dependencies and resources

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109271238A (en) * 2017-07-12 2019-01-25 北京京东尚科信息技术有限公司 Support the task scheduling apparatus and method of a variety of programming languages

Non-Patent Citations (1)

Title
Guo Hui; Chen Songqiao. Design and Implementation of a Java Language Learning Platform Based on J2EE Architecture. Computer and Information Technology. 2008, (07), full text. *


Similar Documents

Publication Publication Date Title
CN110717992B (en) Method, apparatus, computer system and readable storage medium for scheduling model
US20210149668A1 (en) System and method for generating documentation for microservice based applications
US10185558B2 (en) Language-independent program composition using containers
CN113515271B (en) Service code generation method and device, electronic equipment and readable storage medium
CN110377429A (en) A kind of control method, device, server and storage medium that real-time task calculates
CN113094081B (en) Software release method, device, computer system and computer readable storage medium
CN113760252B (en) Data visualization method, device, computer system and readable storage medium
CN113191889B (en) Wind control configuration method, configuration system, electronic equipment and readable storage medium
CN113127361B (en) Application development method and device, electronic equipment and storage medium
US20200320383A1 (en) Complete process trace prediction using multimodal attributes
CN113986258A (en) Service distribution method, device, device and storage medium
CN113392002A (en) Test system construction method, device, equipment and storage medium
CN112860344A (en) Component processing method and device, electronic equipment and storage medium
CN111611086A (en) Information processing method, apparatus, electronic device and medium
CN115543543A (en) Application service processing method, device, equipment and medium
CN110413675A (en) A control method, device, server and storage medium for real-time task computing
CN112395194A (en) Method and device for accessing test platform
CN109840073B (en) Method and device for realizing business process
CN113535590B (en) Program testing method and device
CN112817573A (en) Method, apparatus, computer system, and medium for building streaming computing applications
CN114371839A (en) Interface arrangement method and device based on graphic data
CN114677114A (en) Approval process generation method and device based on graph dragging
CN113568838A (en) Test data generation method, apparatus, device, storage medium and program product
CN113781154A (en) Information rollback method, system, electronic equipment and storage medium
CN112506781A (en) Test monitoring method, test monitoring device, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant after: QAX Technology Group Inc.

Applicant after: Qianxin Wangshen information technology (Beijing) Co.,Ltd.

Address before: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant before: QAX Technology Group Inc.

Applicant before: LEGENDSEC INFORMATION TECHNOLOGY (BEIJING) Inc.

GR01 Patent grant