
CN113656046B - Application deployment method and device - Google Patents


Info

Publication number
CN113656046B
CN113656046B · CN202111011306.3A
Authority
CN
China
Prior art keywords
application
resource
deployment
applications
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111011306.3A
Other languages
Chinese (zh)
Other versions
CN113656046A (en)
Inventor
贾宁
韩金魁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202111011306.3A priority Critical patent/CN113656046B/en
Publication of CN113656046A publication Critical patent/CN113656046A/en
Application granted granted Critical
Publication of CN113656046B publication Critical patent/CN113656046B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)

Abstract


The present invention discloses an application deployment method and device, relating to the field of computer technology. A specific implementation of the method includes: obtaining resource usage feature data of applications deployed on multiple host nodes, inputting the resource usage feature data into a pre-trained orchestration classification model, and outputting the resource usage categories of the applications, wherein a resource usage category includes index levels of set resource indicators; orchestrating the multiple applications according to the orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file; and deploying the application to be deployed onto the corresponding target host node according to the orchestration deployment file to complete the application deployment. This implementation classifies the resource usage of applications with a pre-trained orchestration classification model and then orchestrates the applications by combining the set orchestration rules with the resource usage categories output by the model, thereby achieving automatic and reasonable application deployment and improving resource utilization and overall system performance.

Description

Application deployment method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an application deployment method and apparatus.
Background
Currently, application deployment is typically performed through application orchestration, which deploys applications onto common physical host nodes or cloud host nodes. The existing orchestration approach distributes applications across the existing host nodes in a random, balanced manner. For example, Kubernetes deploys applications to host nodes in this way; Kubernetes is a system for managing containerized applications across multiple hosts in a cloud platform.
In the course of making the present invention, the inventors found that the prior art has at least the following problem:
The existing application deployment approach yields low utilization of system resources and poor system performance.
Disclosure of Invention
In view of the above, embodiments of the present invention provide an application deployment method and apparatus that classify the resource usage of applications with a pre-trained orchestration classification model and then orchestrate the applications by combining set orchestration rules with the resource usage categories output by the model, thereby achieving automatic and reasonable application deployment and improving resource utilization and overall system performance.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided an application deployment method.
The application deployment method of an embodiment of the present invention comprises: obtaining resource usage feature data of applications deployed on a plurality of host nodes, inputting the resource usage feature data into a pre-trained orchestration classification model, and outputting the resource usage categories of the applications, wherein a resource usage category comprises index levels of set resource indicators; orchestrating the plurality of applications according to the orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file, wherein an orchestration rule is used to combine a plurality of resource indicators; and deploying the application to be deployed onto the corresponding target host node according to the orchestration deployment file to complete the application deployment.
Optionally, orchestrating the plurality of applications comprises: comparing the index level of each resource indicator with the corresponding expected level to select, from the plurality of applications, the applications conforming to the orchestration rule of a host node and construct a corresponding application group; taking the host node as the target host node on which the applications of the application group are to be deployed; and writing the application information of the application group and the address of the target host node into the orchestration deployment file.
Optionally, the method further comprises: collecting resource usage data of a plurality of applications on the plurality of host nodes; performing feature extraction on the resource usage data to obtain corresponding resource usage feature data as training data; and training on the training data with a machine learning algorithm to obtain the orchestration classification model.
Optionally, collecting the resource usage data of the plurality of applications on the plurality of host nodes comprises: counting the request volume received by each application within a set time period; collecting index values of performance indicators of the applications while running on their respective host nodes, wherein the performance indicators include any one or more of CPU (Central Processing Unit) usage, memory usage, and disk usage; and generating the resource usage data from the request volumes and the index values of the performance indicators.
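The collection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record fields and function names are assumptions chosen to mirror the described data (request volume plus CPU/memory/disk metric values sampled while the application runs).

```python
from dataclasses import dataclass

@dataclass
class ResourceUsageRecord:
    """One aggregated usage sample for an application on a host node.

    Field names are illustrative; the patent only specifies that the
    record combines request volume with performance-indicator values
    (CPU, memory, disk usage) over a set time period."""
    app: str
    period: str            # e.g. "day" or "night"
    requests_in: int       # requests received in the period
    requests_out: int      # requests issued in the period
    cpu_usage: float       # fraction of capacity, 0.0-1.0
    mem_usage: float
    disk_usage: float

def build_usage_record(app, period, req_in, req_out, samples):
    """Aggregate (cpu, mem, disk) metric samples into one record by averaging."""
    n = len(samples)
    return ResourceUsageRecord(
        app=app, period=period,
        requests_in=req_in, requests_out=req_out,
        cpu_usage=sum(s[0] for s in samples) / n,
        mem_usage=sum(s[1] for s in samples) / n,
        disk_usage=sum(s[2] for s in samples) / n,
    )
```

In practice the samples would come from a monitoring agent on each host node; here they are passed in directly.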
Optionally, performing feature extraction on the resource usage data to obtain the corresponding resource usage feature data comprises processing the resource usage data according to pre-selected feature parameters, wherein the feature parameters include the request-volume ratio between different time periods and the performance indicators.
Optionally, a resource indicator combination includes any plurality of: usage rate within a set time period, CPU usage, memory usage, and disk usage.
Optionally, the method further comprises: obtaining resource usage feature data of the applications deployed on the target host nodes; outputting new resource usage categories of the applications with the orchestration classification model; executing the orchestration step again to generate a new orchestration deployment file; and redeploying the applications according to the new orchestration deployment file.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an application deployment apparatus.
The application deployment apparatus of an embodiment of the present invention comprises an application classification module, an application orchestration module, and an application deployment module. The application classification module is configured to obtain resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data into a pre-trained orchestration classification model, and output the resource usage categories of the applications, wherein a resource usage category comprises index levels of set resource indicators. The application orchestration module is configured to orchestrate the plurality of applications according to the orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file, wherein an orchestration rule is used to combine a plurality of resource indicators. The application deployment module is configured to deploy the application to be deployed onto the corresponding target host node according to the orchestration deployment file to complete the application deployment.
The application orchestration module is further configured to compare the index level of each resource indicator with the corresponding expected level to select, from the applications, those conforming to the orchestration rule of a host node and construct a corresponding application group, take the host node as the target host node on which the applications of the application group are to be deployed, and write the application information of the application group and the address of the target host node into the orchestration deployment file.
Optionally, the apparatus further comprises a model training module configured to collect resource usage data of a plurality of applications on the plurality of host nodes, perform feature extraction on the resource usage data to obtain corresponding resource usage feature data as training data, and train on the training data with a machine learning algorithm to obtain the orchestration classification model.
Optionally, the model training module is further configured to count the request volume received by each application within a set time period, collect index values of performance indicators of the applications while running on their respective host nodes, wherein the performance indicators include any one or more of CPU usage, memory usage, and disk usage, and generate the resource usage data from the request volumes and the index values of the performance indicators.
Optionally, the model training module is further configured to process the resource usage data according to pre-selected feature parameters to obtain the corresponding resource usage feature data, wherein the feature parameters include the request-volume ratio between different time periods and the performance indicators.
Optionally, a resource indicator combination includes any plurality of: usage rate within a set time period, CPU usage, memory usage, and disk usage.
Optionally, the apparatus further comprises a redeployment module configured to obtain resource usage feature data of the applications deployed on the target host nodes, output new resource usage categories of the applications with the orchestration classification model, execute the orchestration step again to generate a new orchestration deployment file, and redeploy the applications according to the new orchestration deployment file.
To achieve the above object, according to still another aspect of the embodiments of the present invention, there is provided an electronic device.
The electronic device comprises one or more processors and a storage device, wherein the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the application deployment method of the embodiment of the invention.
To achieve the above object, according to still another aspect of the embodiments of the present invention, there is provided a computer-readable medium.
A computer readable medium of an embodiment of the present invention has stored thereon a computer program which, when executed by a processor, implements an application deployment method of an embodiment of the present invention.
An embodiment of the above invention has the following advantage: the resource usage of applications is classified with a pre-trained orchestration classification model, and the applications are then orchestrated by combining the set orchestration rules with the resource usage categories output by the model, thereby achieving automatic and reasonable application deployment and improving resource utilization and overall system performance.
By setting, in the orchestration rule of a host node, a resource indicator combination and the expected level of each resource indicator in that combination, the index level of each resource indicator in a resource usage category can be compared with the corresponding expected level to select the applications conforming to the orchestration rule, so that the deployment location of the selected applications is determined accurately. Training the orchestration classification model with a machine learning algorithm makes it convenient to predict the resource usage category an application belongs to while also ensuring the accuracy of the prediction.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a schematic diagram of the main steps of an application deployment method according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram of an application deployment method according to yet another embodiment of the present invention;
FIG. 3 is a schematic flow diagram of an application deployment method according to yet another embodiment of the present invention;
FIG. 4 is a schematic diagram of the major modules of an application deployment apparatus according to an embodiment of the present invention;
FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
Fig. 6 is a schematic structural diagram of a computer device suitable for use in an electronic apparatus to implement an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 is a schematic diagram of the main steps of an application deployment method according to an embodiment of the present invention. As shown in fig. 1, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S101: obtain resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data into a pre-trained orchestration classification model, and output the resource usage categories of the applications. This step first collects resource usage data of the applications deployed on the host nodes and performs feature extraction on it to obtain the corresponding resource usage feature data. A host node may be a physical host node or a cloud host node.
The resource usage feature data is then input into the pre-trained orchestration classification model, which predicts the resource usage category of each application. A resource usage category comprises index levels of set resource indicators. A resource indicator measures resource utilization or system performance, such as the usage rate within a set time period, CPU (Central Processing Unit) usage, memory usage, or disk usage; an index level is, for example, high, medium, or low. The orchestration classification model thus predicts the index level of each application under each resource indicator.
Step S102: orchestrate the plurality of applications according to the orchestration rules set for the host nodes and the index levels, and generate an orchestration deployment file. An orchestration rule is used to combine multiple resource indicators, for example CPU usage with disk usage (one resource indicator combination), or memory usage with disk usage (another combination). In an embodiment, the orchestration rule may indicate the expected level of each of the combined resource indicators, so an orchestration rule may include a resource indicator combination together with the expected level of each resource indicator in that combination.
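One way to represent such a rule is as a mapping from resource indicators to expected levels. This is a sketch under assumed names; the patent does not prescribe a concrete rule format.

```python
# An orchestration rule pairs a resource-indicator combination with the
# expected level of each indicator in it. The dict shape and the key
# names are assumptions for illustration.
rule_node1 = {
    "cpu_usage": "high",
    "disk_usage": "low",
}

def matches_rule(app_levels, rule):
    """True if the application's predicted index levels equal the
    expected level for every indicator named in the rule; indicators
    not mentioned by the rule are unconstrained."""
    return all(app_levels.get(k) == v for k, v in rule.items())
```

An application whose predicted levels are `{"cpu_usage": "high", "disk_usage": "low", "mem_usage": "medium"}` would match `rule_node1`, since only the two indicators in the combination are checked.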
In an embodiment, a respective orchestration rule may be set for each of the plurality of host nodes. According to the orchestration rule set for a host node and the resource usage categories predicted by the orchestration classification model, the applications conforming to the rule are selected from the plurality of applications and added to an application group, and the host node is taken as the deployment location of that application group to generate the orchestration deployment file, which records the deployment location of each application in each application group.
Step S103: deploy the application to be deployed onto the corresponding target host node according to the orchestration deployment file, completing the application deployment. The application to be deployed may already be deployed on a host node, or not yet. For an application already deployed on a host node, its deployment location is recorded in the orchestration deployment file, so it can be deployed directly to that location (i.e., the target host node); for an application not yet deployed, the target host node can be selected from the plurality of host nodes according to the application's information and the orchestration deployment file.
In this embodiment, the resource usage of applications is classified with the pre-trained orchestration classification model, and the applications are then orchestrated by combining the set orchestration rules with the resource usage categories output by the model, thereby achieving automatic and reasonable application deployment and improving resource utilization and overall system performance.
Fig. 2 is a schematic flow chart of an application deployment method according to still another embodiment of the present invention. As shown in fig. 2, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S201: acquire training data and train on it with a machine learning algorithm to obtain the orchestration classification model. This step trains the orchestration classification model: resource usage data of a plurality of applications on a plurality of host nodes is collected, feature extraction is performed on it to obtain the corresponding resource usage feature data as training data, and the training data is used to train the model with a machine learning algorithm.
It will be appreciated that the resource usage data collected here is generated while applications already deployed on host nodes are running. In an embodiment, it can be acquired by counting the request volume received by each application within a set time period, collecting index values of performance indicators of the applications while running on their respective host nodes, and generating the resource usage data from the request volumes and the index values.
A performance indicator measures system performance and includes any one or more of CPU usage, memory usage, and disk usage. In an embodiment, the resource usage data may include the request volume of an application in one or more time periods (both incoming and outgoing requests) as well as its CPU usage, CPU load, memory usage, and disk usage in those periods; these data can be obtained from a project deployment test process, and example contents are shown in Table 1. Such resource usage data weighs the impact of the current deployment on system resources and system performance from multiple dimensions.
TABLE 1
In an embodiment, feature extraction on the resource usage data means processing it according to pre-selected feature parameters to obtain the corresponding resource usage feature data. The feature parameters include the request-volume ratio between different time periods and the performance indicators; selecting feature parameters removes redundant data from the resource usage data, reduces the data dimensionality, and improves the efficiency and effect of machine learning. The request-volume ratio between different time periods is the ratio of the request volumes of two periods, such as the ratio of daytime to nighttime request volume. Table 2 shows example contents of the resource usage feature data.
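The feature-extraction step can be sketched as below, assuming the day/night request ratio plus the three performance indicators as the selected feature parameters. Function and key names are illustrative, not from the patent.

```python
def extract_features(day_requests, night_requests, perf):
    """Reduce raw resource usage data to the pre-selected feature
    parameters: the day/night request-volume ratio plus the performance
    indicators. Other raw fields are dropped as redundant.

    Guards against division by zero when no night traffic was observed."""
    ratio = day_requests / night_requests if night_requests else float("inf")
    return {
        "day_night_ratio": ratio,
        "cpu_usage": perf["cpu_usage"],
        "mem_usage": perf["mem_usage"],
        "disk_usage": perf["disk_usage"],
    }
```

For example, 200 daytime requests against 100 nighttime requests yields a ratio of 2.0, indicating the application is busier by day.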
TABLE 2
After the resource usage feature data is extracted, a machine learning algorithm can be used for training to obtain the orchestration classification model. In an embodiment, the algorithm may be a decision tree, logistic regression, a support vector machine, and so on. Taking a decision tree as an example, a feature parameter can be compared with a set threshold to obtain the index level of a set resource indicator.
A resource indicator combination may be any plurality of: usage rate within a set time period (such as daytime usage and nighttime usage), CPU usage, memory usage, and disk usage. Setting the resource indicators this way serves the goal of improving resource utilization and overall system performance from multiple dimensions.
For example, the day/night request-volume ratio is compared with the threshold 1: a ratio greater than 1 indicates that the index level of the daytime-usage indicator is high, a ratio equal to 1 indicates medium, and a ratio less than 1 indicates low.
As another example, a resource indicator value (CPU usage, memory usage, or disk usage) below the 30% threshold indicates that the index level of that indicator is low, a value between 30% and 60% indicates medium, and a value above 60% indicates high. Table 3 shows the results obtained after processing the data of Table 2 with the machine learning algorithm.
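The two threshold examples above translate directly into code. The thresholds (1 for the request ratio; 30% and 60% for usage fractions) come from the examples in the text; the function names are assumptions.

```python
def ratio_to_level(day_night_ratio):
    """Map the day/night request-volume ratio to a daytime-usage level:
    > 1 is high, == 1 is medium, < 1 is low (example thresholds above)."""
    if day_night_ratio > 1:
        return "high"
    if day_night_ratio < 1:
        return "low"
    return "medium"

def usage_to_level(usage, low=0.30, high=0.60):
    """Map a usage fraction (CPU, memory, or disk) to an index level
    using the 30% / 60% example thresholds."""
    if usage < low:
        return "low"
    if usage > high:
        return "high"
    return "medium"
```

A full decision tree learned from training data would derive such split points automatically; these hand-set thresholds only illustrate the single-split case described in the text.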
TABLE 3

Application  Daytime usage  Night usage  CPU usage  Memory usage  Disk usage
A            Low            High         High       High          High
B            Low            High         Medium     Medium        Medium
E            High           Low          High       High          Low
F            Medium         Medium       Medium     Medium        Medium
Step S202: obtain resource usage feature data of the applications deployed on the plurality of host nodes, input it into the orchestration classification model, and output the resource usage categories of the applications. In an embodiment, resource usage data of the deployed applications is collected, including request volumes, CPU usage, memory usage, and disk usage over several time periods, and feature extraction is then performed on it following the flow of step S201 to obtain the corresponding resource usage feature data. The host nodes here may be the same as or different from those used to acquire the training data in step S201.
In this embodiment, the resource usage category is the classification result for an application; for example, the daytime usage, CPU usage, memory usage, and disk usage of application 1 are all high, while the nighttime usage, CPU usage, memory usage, and disk usage of application 2 are all high.
Step S203: orchestrate the plurality of applications according to the orchestration rules set for the host nodes and the index levels of the resource usage categories, and generate an orchestration deployment file. An orchestration rule may include a resource indicator combination and the expected level of each resource indicator in the combination. This step compares the index level of each resource indicator with the corresponding expected level to select, from the plurality of applications, those conforming to the orchestration rule of a host node and construct an application group; the host node is then taken as the target host node on which the applications of the group are to be deployed, and the application information of the group and the address of the target host node are written into the orchestration deployment file.
In an embodiment, the resource indicator combination may be daytime usage with nighttime usage, CPU usage with disk usage, or memory usage with disk usage. Taking host node 1 as an example, its orchestration rule may be that some applications have high daytime usage and others have high nighttime usage. Suppose application 1 runs scheduled synchronization tasks and has high nighttime usage, while application 2 serves a specific business and has high daytime usage; deploying both onto host node 1 lets the applications make maximal use of the node's CPU, memory, disk, and other resources.
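The host-node-1 example above — grouping complementary day-busy and night-busy applications onto one node — can be sketched as follows. The rule and deployment-file structures are assumptions; the patent does not fix a file format.

```python
def orchestrate(host, rule_alternatives, app_levels):
    """Build an application group for `host` from the applications whose
    predicted index levels satisfy any of the host's rule alternatives
    (e.g. 'daytime usage high' OR 'night usage high'), then emit
    deployment-file entries mapping each grouped application to the
    host's address."""
    group = [
        app for app, levels in app_levels.items()
        if any(all(levels.get(k) == v for k, v in rule.items())
               for rule in rule_alternatives)
    ]
    return [{"app": app, "target": host} for app in group]
```

With alternatives `[{"daytime_usage": "high"}, {"night_usage": "high"}]`, a night-busy synchronization service and a day-busy business service both land on the same node, while an application that is idle in both periods is left for another node's rule.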
Step S204: deploy the applications to be deployed onto their corresponding target host nodes according to the orchestration deployment file, completing the application deployment. When deploying, the user can read from the orchestration deployment file the address of the target host node for each application and plan and deploy the applications accordingly.
Fig. 3 is a schematic flow chart of an application deployment method according to still another embodiment of the present invention. As shown in fig. 3, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S301: acquire training data and train on it with a machine learning algorithm to obtain the orchestration classification model.
Step S302: obtain resource usage feature data of the applications deployed on the plurality of host nodes, input it into the orchestration classification model, and output the resource usage categories of the applications.
Step S303: orchestrate the plurality of applications according to the orchestration rules set for the host nodes and the index levels of the resource usage categories, and generate an orchestration deployment file.
Step S304: deploy the applications to be deployed onto their corresponding target host nodes according to the orchestration deployment file, completing the application deployment.
Step S305: judge whether the current deployment is the first deployment; if so, execute step S302 again, otherwise end the flow. It will be appreciated that the judgment may instead be whether the current deployment has reached a specified count, and if not, steps S302 to S305 are looped again; the specified count is two or more.
The specific implementations of steps S301 to S304 correspond to those of steps S201 to S204 and are not repeated here. Through step S305, after the first deployment ends, resource usage feature data of the applications deployed on the target host nodes is collected via step S302, the orchestration classification model outputs new resource usage categories, and step S303 is executed again to generate a new orchestration deployment file, so that the applications are redeployed according to it; this fine-tunes the application deployment and further ensures its reasonableness.
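The classify → orchestrate → deploy → re-collect cycle described above can be expressed as a simple loop. All callables and the `rounds` parameter are placeholders; the flow, not the interfaces, is what the embodiment specifies.

```python
def deploy_with_refinement(classify, orchestrate_fn, deploy, collect, rounds=2):
    """Run the classify -> orchestrate -> deploy cycle `rounds` times
    (rounds >= 1), feeding usage data observed after each deployment
    back into the next classification, as in steps S302-S305.

    classify       maps feature data to resource usage categories
    orchestrate_fn maps categories to an orchestration deployment plan
    deploy         applies a plan to the host nodes
    collect        gathers fresh resource usage feature data"""
    features = collect()
    for _ in range(rounds):
        categories = classify(features)
        plan = orchestrate_fn(categories)
        deploy(plan)
        features = collect()   # data generated by the new deployment
    return plan
```

With `rounds=2` this performs the initial deployment plus one refinement pass, matching the first-deployment check in step S305.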
In this embodiment, after one deployment ends, the resource usage data generated by that deployment is collected again and classified with the orchestration classification model, so the applications are orchestrated anew; this fine-tunes the application deployment, further ensures its reasonableness, and further improves resource utilization and overall system performance.
In a preferred embodiment, step S305 may be executed during idle time. For a deployment upgrade, a rolling upgrade can be adopted: part of the applications is upgraded first and the rest afterwards, which avoids the large resource consumption of upgrading everything at once and its impact on normal service.
Fig. 4 is a schematic diagram of the main modules of an application deployment apparatus according to an embodiment of the present invention.
As shown in fig. 4, an application deployment apparatus 400 according to an embodiment of the present invention mainly includes:
The application classification module 401 is configured to obtain resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data to a pre-trained orchestration classification model, and output a resource usage class of the applications. The module firstly collects the resource usage data of the application deployed on a plurality of host nodes, and performs feature extraction on the resource usage data to obtain corresponding resource usage feature data. The host nodes may be physical host nodes or cloud host nodes.
The resource usage feature data is then input into the pre-trained orchestration classification model, which predicts the resource usage class of each application. The resource usage class includes the index level of each set resource indicator. A resource indicator measures resource utilization and system performance, such as usage rate over a set time period, CPU usage, memory usage, or disk usage; the index levels are high, medium, and low. Thus, the orchestration classification model predicts the index level of each application under each resource indicator.
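The classification step above can be sketched as follows. This is a simple threshold-based stand-in for the trained orchestration classification model, with illustrative thresholds and feature names; the patent does not fix the model or its parameters.

```python
# Minimal stand-in for the orchestration classification model: map each
# application's resource-usage features to an index level per indicator.
LEVELS = [(0.7, "high"), (0.3, "medium"), (0.0, "low")]  # assumed thresholds

def classify_level(utilization: float) -> str:
    """Return the index level for a single utilization value in [0, 1]."""
    for threshold, level in LEVELS:
        if utilization >= threshold:
            return level
    return "low"

def predict_resource_usage_class(features: dict) -> dict:
    """Predict an index level for every resource indicator of one application."""
    return {indicator: classify_level(value) for indicator, value in features.items()}

app_features = {"cpu_usage": 0.85, "memory_usage": 0.40, "disk_usage": 0.10}
print(predict_resource_usage_class(app_features))
# -> {'cpu_usage': 'high', 'memory_usage': 'medium', 'disk_usage': 'low'}
```

The output, one level per resource indicator, is exactly the "resource usage class" consumed by the orchestration module below.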
The application orchestration module 402 is configured to orchestrate a plurality of the applications according to the orchestration rules set for the host nodes and the index levels, generating an orchestration deployment file. An orchestration rule combines a plurality of resource indicators, such as CPU usage with disk usage (one resource indicator combination), or memory usage with disk usage (another combination). In an embodiment, the orchestration rule may indicate the desired level of each of the combined resource indicators; the rule thus includes a resource indicator combination together with the desired level of each indicator in that combination.
In an embodiment, a respective orchestration rule may be set for each of the plurality of host nodes. According to the orchestration rule set for a host node and the resource usage class of each application predicted by the orchestration classification model, the module screens out from the applications those that satisfy the host node's orchestration rule, adds them to an application group, and takes the host node as the deployment position of that group, generating an orchestration deployment file that records the deployment position of the applications in each group.
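The screening and grouping step above can be sketched as follows. The rule contents and data shapes are assumptions for illustration: a rule maps each indicator in its combination to a desired level, and an application joins a host node's group only if its predicted levels match every indicator in the rule.

```python
# Sketch of orchestration: match each application's predicted index levels
# against each host node's orchestration rule, grouping matches per node.
def matches_rule(app_levels: dict, rule: dict) -> bool:
    """True if the app's predicted levels satisfy every indicator in the rule."""
    return all(app_levels.get(ind) == level for ind, level in rule.items())

def build_deployment_file(host_rules: dict, predictions: dict) -> dict:
    """Map each host node to the application group matching its rule."""
    return {
        host: [app for app, levels in predictions.items() if matches_rule(levels, rule)]
        for host, rule in host_rules.items()
    }

host_rules = {"node-1": {"cpu_usage": "high", "disk_usage": "low"},
              "node-2": {"memory_usage": "low"}}
predictions = {"app-a": {"cpu_usage": "high", "disk_usage": "low", "memory_usage": "medium"},
               "app-b": {"cpu_usage": "low", "disk_usage": "low", "memory_usage": "low"}}
print(build_deployment_file(host_rules, predictions))
# -> {'node-1': ['app-a'], 'node-2': ['app-b']}
```

The resulting mapping plays the role of the orchestration deployment file: each host node paired with the group of applications to deploy on it.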
The application deployment module 403 is configured to deploy the applications to be deployed to their corresponding target host nodes according to the orchestration deployment file, completing the application deployment. An application to be deployed may already be deployed on a host node, or not yet. For an application already deployed, the deployment file records its deployment position, so it can be deployed directly to the corresponding deployment position (i.e. the target host node); for an application not yet deployed, a target host node can be selected from the plurality of host nodes according to the application's information and the deployment file.
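The two deployment cases above can be sketched as follows. The fallback policy for applications absent from the deployment file (picking the least-loaded node) is an assumption for illustration; the patent only says a target node is selected from the host nodes.

```python
# Sketch of target-node resolution: apps recorded in the orchestration
# deployment file go to their recorded position; unrecorded apps fall back
# to node selection (here, a naive least-loaded pick).
def select_target(deployment_file: dict, app: str, node_load: dict) -> str:
    for node, apps in deployment_file.items():
        if app in apps:
            return node                       # recorded deployment position
    return min(node_load, key=node_load.get)  # fallback: least-loaded node

deployment_file = {"node-1": ["app-a"]}
node_load = {"node-1": 3, "node-2": 1}
print(select_target(deployment_file, "app-a", node_load))    # -> node-1
print(select_target(deployment_file, "app-new", node_load))  # -> node-2
```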
In addition, the application deployment apparatus 400 of the embodiment of the present invention may further include a model training module and a redeployment module (not shown in fig. 4). The model training module collects the resource usage data of a plurality of host nodes, performs feature extraction on it to obtain corresponding resource usage feature data as training data, and trains on the training data with a machine learning algorithm to obtain the orchestration classification model.
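The model training module described above can be sketched as follows. Feature extraction here simply averages each indicator over the collected samples, and a 1-nearest-neighbour rule stands in for the machine learning algorithm, which the patent does not fix; both choices are illustrative assumptions.

```python
from statistics import mean

def extract_features(samples: list) -> dict:
    """Feature extraction: average each indicator over the usage samples."""
    keys = samples[0].keys()
    return {k: mean(s[k] for s in samples) for k in keys}

def train_1nn(training_data: list):
    """Train a 1-nearest-neighbour classifier on (features, label) pairs."""
    def predict(features: dict) -> str:
        def dist(ref: dict) -> float:
            return sum((features[k] - ref[k]) ** 2 for k in features)
        _, label = min(training_data, key=lambda pair: dist(pair[0]))
        return label
    return predict

data = [({"cpu": 0.9, "mem": 0.2}, "cpu_heavy"),
        ({"cpu": 0.1, "mem": 0.8}, "mem_heavy")]
model = train_1nn(data)
print(model({"cpu": 0.85, "mem": 0.3}))  # -> cpu_heavy
```

In practice any supervised classifier could fill this role; the essential contract is that it maps resource usage feature data to a resource usage class.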
The redeployment module is configured to acquire resource usage feature data of the applications deployed on the target host nodes, output new resource usage classes of the applications using the orchestration classification model, perform the orchestration step again to generate a new orchestration deployment file, and redeploy the applications according to the new orchestration deployment file.
As described above, the application resource usage is classified by the pre-trained orchestration classification model, and the applications are then orchestrated by combining the set orchestration rules with the resource usage classes output by the model, achieving reasonable application deployment and improving resource utilization and overall system performance.
Fig. 5 illustrates an exemplary system architecture 500 to which the application deployment method or application deployment apparatus of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 is used as a medium to provide communication links between the terminal devices 501, 502, 503 and the server 505. The network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 505 via the network 504 using the terminal devices 501, 502, 503 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 501, 502, 503.
The terminal devices 501, 502, 503 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server processing an application deployment request transmitted by an administrator using the terminal devices 501, 502, 503. The background management server can acquire the resource characteristic data, determine the resource use category of the application, generate the arrangement and deployment file, complete the application deployment, and feed back the processing result (such as the application deployment result) to the terminal equipment.
It should be noted that, the application deployment method provided in the embodiment of the present application is generally executed by the server 505, and accordingly, the application deployment device is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
According to an embodiment of the invention, the invention further provides an electronic device and a computer readable medium.
The electronic device comprises one or more processors and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the application deployment method of the embodiments of the present invention.
The computer readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements an application deployment method of an embodiment of the present invention.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use in implementing an electronic device of an embodiment of the present invention. The electronic device shown in fig. 6 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the computer system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Connected to the I/O interface 605 are an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed so that a computer program read from it is installed into the storage section 608 as needed.
In particular, the processes described above in the main step diagrams may be implemented as computer software programs according to the disclosed embodiments of the invention. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the main step diagrams. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 601.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as a processor including an application classification module, an application orchestration module, and an application deployment module. The names of these modules do not in some cases limit the modules themselves; for example, the application classification module may also be described as "a module that obtains resource usage feature data of applications deployed on multiple host nodes, inputs the resource usage feature data into a pre-trained orchestration classification model, and outputs the resource usage classes of the applications".
As a further aspect, the invention also provides a computer readable medium, which may be included in the device described in the above embodiments or may exist alone without being assembled into the device. The computer readable medium carries one or more programs which, when executed by the device, cause the device to: acquire resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data into a pre-trained orchestration classification model, and output resource usage classes of the applications, the resource usage classes including index levels of set resource indicators; orchestrate the plurality of applications according to the orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file, the orchestration rules being used to combine a plurality of resource indicators; and deploy the applications to be deployed to the corresponding target host nodes according to the orchestration deployment file to complete the application deployment.
According to the technical scheme provided by the embodiment of the invention, the application resource use condition is classified through the pre-trained arrangement classification model, and then the application is arranged by combining the set arrangement rule and the resource use category output by the model, so that reasonable deployment of the application is realized, and the resource utilization rate and the overall performance of the system are improved.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. An application deployment method, comprising:
Acquiring resource use characteristic data of applications deployed on a plurality of host nodes, inputting the resource use characteristic data into a pre-trained arrangement classification model, and outputting a resource use class of the applications, wherein the resource use class comprises index levels for setting resource indexes;
According to the arrangement rules and the index levels set for the host nodes, arranging a plurality of applications to generate arrangement deployment files, wherein the arrangement rules are used for combining a plurality of resource indexes;
According to the arranging and deploying file, deploying the application to be deployed to the corresponding target host node to complete application deployment;
the orchestration rule indicates a desired level of a plurality of the resource indicators combined;
the programming of the plurality of applications generates programming deployment files, which comprises the following steps:
comparing the index level of the same resource index with the expected level to screen out the application conforming to the arrangement rule of the host node from a plurality of applications, and constructing a corresponding application group;
and taking the host node as a target host node to be deployed by the application of the application group, and writing the application information of the application group and the address of the target host node into a deployment file.
2. The method according to claim 1, wherein the method further comprises:
collecting resource usage data of a plurality of applications at a plurality of host nodes, and extracting features of the resource usage data to obtain corresponding resource usage feature data as training data;
And training the training data by using a machine learning algorithm to obtain the arranging classification model.
3. The method of claim 2, wherein collecting resource usage data for a plurality of applications at a plurality of host nodes comprises:
Counting the request quantity received by a plurality of applications in a set time period, and collecting index values of performance indexes of the plurality of applications when the respective host nodes run, wherein the performance indexes comprise any one or more of CPU (Central processing Unit) utilization rate, memory utilization rate and disk utilization rate;
and generating resource use data according to the request quantity and the index value of the performance index.
4. A method according to claim 3, wherein the feature extracting the resource usage data to obtain corresponding resource usage feature data includes:
and carrying out data processing on the resource use data according to the pre-selected characteristic parameters to obtain corresponding resource use characteristic data, wherein the characteristic parameters comprise request quantity duty ratios and the performance indexes in different time periods.
5. The method of any of claims 1 to 4, wherein the resource indicator comprises a combination of any of a set period of time usage, CPU usage, memory usage, and disk usage.
6. The method of claim 5, wherein the method further comprises:
Acquiring resource usage feature data of an application deployed on the target host node, so as to output a new resource usage category of the application by using the arrangement classification model;
And executing the step of arranging the application again, generating a new arranging and deploying file, and redeploying the application according to the new arranging and deploying file.
7. An application deployment apparatus, comprising:
The application classification module is used for acquiring resource use characteristic data of applications deployed on a plurality of host nodes, inputting the resource use characteristic data into a pre-trained arrangement classification model and outputting the resource use class of the applications, wherein the resource use class comprises index levels for setting resource indexes;
the application arrangement module is used for arranging a plurality of applications according to arrangement rules and the index levels set for the host nodes to generate arrangement deployment files, wherein the arrangement rules are used for combining a plurality of resource indexes;
the application deployment module is used for deploying the application to be deployed to the corresponding target host node according to the arranging deployment file to complete application deployment;
The application programming module is further used for comparing the index level of the same resource index with the expected level to screen out applications conforming to the programming rule of the host node from the applications to construct a corresponding application group, and writing application information of the application group and addresses of the target host nodes into a programming deployment file by taking the host node as a target host node to be deployed by the application of the application group.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
When executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-6.
9. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN202111011306.3A 2021-08-31 2021-08-31 Application deployment method and device Active CN113656046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011306.3A CN113656046B (en) 2021-08-31 2021-08-31 Application deployment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111011306.3A CN113656046B (en) 2021-08-31 2021-08-31 Application deployment method and device

Publications (2)

Publication Number Publication Date
CN113656046A CN113656046A (en) 2021-11-16
CN113656046B true CN113656046B (en) 2025-02-21

Family

ID=78493324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011306.3A Active CN113656046B (en) 2021-08-31 2021-08-31 Application deployment method and device

Country Status (1)

Country Link
CN (1) CN113656046B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034030A (en) * 2022-03-25 2022-09-09 北京商询科技有限公司 Deployment method and system of digital twin scene
CN115033718B (en) * 2022-08-15 2022-10-25 浙江大学 Service application deployment method, device and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108809694A (en) * 2018-04-27 2018-11-13 广州西麦科技股份有限公司 Arranging service method, system, device and computer readable storage medium
CN112363813A (en) * 2020-11-20 2021-02-12 上海连尚网络科技有限公司 Resource scheduling method and device, electronic equipment and computer readable medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102958166B (en) * 2011-08-29 2017-07-21 华为技术有限公司 A kind of resource allocation methods and resource management platform
CN103873569B (en) * 2014-03-05 2017-04-19 兰雨晴 Resource optimized deployment method based on IaaS (infrastructure as a service) cloud platform
CN105808341B (en) * 2014-12-29 2019-05-28 中国移动通信集团公司 A kind of methods, devices and systems of scheduling of resource
US10719423B2 (en) * 2017-07-12 2020-07-21 Futurewei Technologies, Inc. Apparatus and method for application deployment assessment
CN110879750B (en) * 2017-10-13 2025-05-02 华为技术有限公司 Resource management method and terminal device
CN109167835B (en) * 2018-09-13 2021-11-26 重庆邮电大学 Physical resource scheduling method and system based on kubernets
US10708135B1 (en) * 2019-01-31 2020-07-07 EMC IP Holding Company LLC Unified and automated installation, deployment, configuration, and management of software-defined storage assets
CN110990024B (en) * 2019-11-28 2024-02-09 合肥讯飞数码科技有限公司 Application deployment method, device, equipment and storage medium
CN111966453B (en) * 2020-07-29 2022-12-16 苏州浪潮智能科技有限公司 A load balancing method, system, device and storage medium
CN113220452B (en) * 2021-05-10 2025-05-27 北京百度网讯科技有限公司 Resource allocation method, model training method, device and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108809694A (en) * 2018-04-27 2018-11-13 广州西麦科技股份有限公司 Arranging service method, system, device and computer readable storage medium
CN112363813A (en) * 2020-11-20 2021-02-12 上海连尚网络科技有限公司 Resource scheduling method and device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN113656046A (en) 2021-11-16

Similar Documents

Publication Publication Date Title
US11442764B2 (en) Optimizing the deployment of virtual resources and automating post-deployment actions in a cloud environment
US20200259715A1 (en) Topology-Aware Continuous Evaluation of Microservice-based Applications
CN106201661B (en) Method and device for elastic scaling virtual machine cluster
WO2020258290A1 (en) Log data collection method, log data collection apparatus, storage medium and log data collection system
US20150302440A1 (en) Cloud computing solution generation systems and methods
CN113515672B (en) Data processing method, device, computer readable medium and electronic device
CN113760521B (en) A method and device for allocating virtual resources
US8606905B1 (en) Automated determination of system scalability and scalability constraint factors
WO2017166643A1 (en) Method and device for quantifying task resources
JP2018515844A (en) Data processing method and system
CN103763346A (en) Distributed resource scheduling method and device
CN111143039A (en) Virtual machine scheduling method and device and computer storage medium
CN113656046B (en) Application deployment method and device
WO2024160273A1 (en) Data processing method and apparatus, device, and storage medium
CN115237804A (en) Performance bottleneck assessment method, performance bottleneck assessment device, electronic equipment, medium and program product
US10313457B2 (en) Collaborative filtering in directed graph
CN117992295A (en) Service test data construction method and device
CN114253813B (en) Computing power optimization method, device, electronic device and storage medium
CN107045452B (en) Virtual machine scheduling method and device
US11888930B1 (en) System and method for management of workload distribution for transitory disruption
US11811862B1 (en) System and method for management of workload distribution
CN110119300A (en) The load-balancing method and device of dummy unit cluster
CN113760484A (en) Method and apparatus for data processing
US20230325871A1 (en) Subgroup analysis in a/b testing
CN116225690A (en) Memory multidimensional database calculation load balancing method and system based on docker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant