Disclosure of Invention
In view of the above, embodiments of the present invention provide an application deployment method and apparatus that classify the resource usage of applications with a pre-trained orchestration classification model and then orchestrate the applications by combining set orchestration rules with the resource usage categories output by the model, thereby deploying applications automatically and reasonably and improving the resource utilization and overall performance of a system.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided an application deployment method.
The application deployment method comprises the steps of obtaining resource usage feature data of applications deployed on a plurality of host nodes; inputting the resource usage feature data into a pre-trained orchestration classification model and outputting resource usage categories of the applications, wherein the resource usage categories comprise index levels of set resource indexes; orchestrating the plurality of applications according to orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file, wherein the orchestration rules are used for combining a plurality of resource indexes; and deploying applications to be deployed onto corresponding target host nodes according to the orchestration deployment file to complete application deployment.
Optionally, the orchestrating comprises the steps of comparing index levels of the same resource index with corresponding expected levels to select, from the plurality of applications, applications conforming to the orchestration rule of a host node and construct a corresponding application group; taking the host node as the target host node on which the applications of the application group are to be deployed; and writing application information of the application group and the address of the target host node into the orchestration deployment file.
Optionally, the method further comprises the steps of collecting resource usage data of a plurality of applications deployed on a plurality of host nodes, performing feature extraction on the resource usage data to obtain corresponding resource usage feature data as training data, and training on the training data with a machine learning algorithm to obtain the orchestration classification model.
Optionally, the collecting of the resource usage data of the plurality of applications on the plurality of host nodes comprises counting the request amounts received by the plurality of applications in a set time period, collecting index values of performance indexes of the plurality of applications while running on their respective host nodes, wherein the performance indexes comprise any one or more of CPU (Central Processing Unit) usage rate, memory usage rate and disk usage rate, and generating the resource usage data from the request amounts and the index values of the performance indexes.
Optionally, the performing of feature extraction on the resource usage data to obtain corresponding resource usage feature data comprises processing the resource usage data according to pre-selected feature parameters to obtain the corresponding resource usage feature data, wherein the feature parameters include the request-volume ratios between different time periods and the performance indexes.
Optionally, the resource indexes include any combination of two or more of a set-time-period usage rate, the CPU usage rate, the memory usage rate and the disk usage rate.
Optionally, the method further comprises the steps of obtaining resource usage feature data of the applications deployed on the target host nodes, outputting new resource usage categories of the applications by using the orchestration classification model, re-executing the step of orchestrating the applications to generate a new orchestration deployment file, and redeploying the applications according to the new orchestration deployment file.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an application deployment apparatus.
The application deployment apparatus comprises an application classification module, an application orchestration module and an application deployment module. The application classification module is configured to obtain resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data into a pre-trained orchestration classification model and output resource usage categories of the applications, the resource usage categories comprising index levels of set resource indexes. The application orchestration module is configured to orchestrate the plurality of applications according to orchestration rules set for the host nodes and the index levels to generate an orchestration deployment file, the orchestration rules being used for combining a plurality of resource indexes. The application deployment module is configured to deploy applications to be deployed onto corresponding target host nodes according to the orchestration deployment file to complete application deployment.
Optionally, the application orchestration module is further configured to compare index levels of the same resource index with corresponding expected levels, screen applications conforming to the orchestration rules of the host nodes from the applications to construct corresponding application groups, take the host nodes as the target host nodes on which the applications of the application groups are to be deployed, and write application information of the application groups and the addresses of the target host nodes into the orchestration deployment file.
Optionally, the apparatus further comprises a model training module configured to collect resource usage data of a plurality of applications on a plurality of host nodes, perform feature extraction on the resource usage data to obtain corresponding resource usage feature data as training data, and train on the training data with a machine learning algorithm to obtain the orchestration classification model.
Optionally, the model training module is further configured to count the request amounts received by the applications in a set period of time, and collect index values of performance indexes of the applications when the respective host nodes run, where the performance indexes include any one or more of CPU usage, memory usage, and disk usage, and generate resource usage data according to the request amounts and the index values of the performance indexes.
Optionally, the model training module is further configured to process the resource usage data according to pre-selected feature parameters to obtain the corresponding resource usage feature data, where the feature parameters include the request-volume ratios between different time periods and the performance indexes.
Optionally, the resource indexes include any combination of two or more of a set-time-period usage rate, the CPU usage rate, the memory usage rate and the disk usage rate.
Optionally, the apparatus further comprises a redeployment module configured to obtain resource usage feature data of the applications deployed on the target host nodes, output new resource usage categories of the applications by using the orchestration classification model, re-execute the step of orchestrating the applications to generate a new orchestration deployment file, and redeploy the applications according to the new orchestration deployment file.
To achieve the above object, according to still another aspect of the embodiments of the present invention, there is provided an electronic device.
The electronic device comprises one or more processors and a storage device, wherein the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the application deployment method of the embodiment of the invention.
To achieve the above object, according to still another aspect of the embodiments of the present invention, there is provided a computer-readable medium.
A computer readable medium of an embodiment of the present invention has stored thereon a computer program which, when executed by a processor, implements an application deployment method of an embodiment of the present invention.
Advantageously, the resource usage of applications is classified by the pre-trained orchestration classification model, and the applications are then orchestrated by combining the set orchestration rules with the resource usage categories output by the model, so that applications are deployed automatically and reasonably and the resource utilization and overall performance of the system are improved.
By setting, in the orchestration rule of a host node, a resource index combination and the expected level of each resource index in the combination, the index level of each resource index in a resource usage category can be compared with the corresponding expected level to screen out the applications conforming to the orchestration rule, so that the deployment positions of the screened applications are determined accurately. Training the orchestration classification model with a machine learning algorithm facilitates predicting the resource usage category to which an application belongs while ensuring the accuracy of the prediction result.
Further effects of the above optional implementations are described below in connection with the embodiments.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 is a schematic diagram of the main steps of an application deployment method according to an embodiment of the present invention. As shown in fig. 1, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S101: acquiring resource usage feature data of applications deployed on a plurality of host nodes, inputting the resource usage feature data into a pre-trained orchestration classification model, and outputting the resource usage categories of the applications. In this step, resource usage data of the applications deployed on the plurality of host nodes is first collected, and feature extraction is performed on it to obtain the corresponding resource usage feature data. The host nodes may be physical host nodes or cloud host nodes.
The resource usage feature data is then input into the pre-trained orchestration classification model, which is used to predict the resource usage class of each application. The resource usage class includes the index level of a set resource index. A resource index is an index for measuring resource utilization and system performance, such as the usage rate in a set time period, the CPU (Central Processing Unit) usage rate, the memory usage rate or the disk usage rate, and the index levels include high, medium and low. Thus, the orchestration classification model is used to predict the index level of each application under each resource index.
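As a minimal illustration of the data shape just described (the class and field names are assumptions for this sketch, not part of the invention), the model's output for one application can be represented as a mapping from resource indexes to index levels:

```python
# Hypothetical sketch: a resource usage category as index levels
# ("high" / "medium" / "low") predicted for each set resource index.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ResourceUsageCategory:
    app: str
    # resource index name -> predicted index level, e.g. "cpu_usage" -> "high"
    levels: Dict[str, str] = field(default_factory=dict)

category = ResourceUsageCategory(
    app="app-1",
    levels={"daytime_usage": "low", "night_usage": "high",
            "cpu_usage": "high", "memory_usage": "high"},
)
print(category.levels["cpu_usage"])  # high
```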
Step S102: orchestrating a plurality of the applications according to the orchestration rules set for the host nodes and the index levels, and generating an orchestration deployment file. An orchestration rule is used for combining a plurality of resource indexes, for example the CPU usage rate with the disk usage rate (one resource index combination), or the memory usage rate with the disk usage rate (another resource index combination). In an embodiment, the orchestration rule may indicate the expected level of each of the combined resource indexes, so the orchestration rule may include a resource index combination and the expected level of each resource index in that combination.
In an embodiment, respective orchestration rules may be set for each of the plurality of host nodes. The method comprises the steps of screening out applications meeting the arrangement rules of a host node from a plurality of applications according to arrangement rules set for the host node and resource use categories of the applications predicted by an arrangement classification model, adding the screened applications to an application group, and using the host node as a deployment position of the application group to generate an arrangement deployment file, wherein the arrangement deployment file records deployment positions corresponding to the applications of the application group.
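The screening and grouping just described can be sketched as follows, under assumed data shapes (a rule maps each resource index in the combination to its expected level, and an application conforms when every index in the rule meets its expected level; all names here are illustrative):

```python
from typing import Dict, List

def matches_rule(levels: Dict[str, str], rule: Dict[str, str]) -> bool:
    """An application conforms to an orchestration rule when its predicted
    index level equals the expected level for every index in the rule."""
    return all(levels.get(idx) == expected for idx, expected in rule.items())

def build_deployment_file(apps: Dict[str, Dict[str, str]],
                          node_rules: Dict[str, Dict[str, str]]) -> List[dict]:
    """Group conforming applications per host node; each entry records the
    application group and its target node (the orchestration deployment file)."""
    entries = []
    for node, rule in node_rules.items():
        group = [a for a, levels in apps.items() if matches_rule(levels, rule)]
        if group:
            entries.append({"target_node": node, "applications": group})
    return entries

apps = {"A": {"cpu_usage": "high", "disk_usage": "high"},
        "B": {"cpu_usage": "medium", "disk_usage": "medium"}}
rules = {"10.0.0.1": {"cpu_usage": "high", "disk_usage": "high"}}
print(build_deployment_file(apps, rules))
# [{'target_node': '10.0.0.1', 'applications': ['A']}]
```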
Step S103: deploying the applications to be deployed onto the corresponding target host nodes according to the orchestration deployment file to complete application deployment. An application to be deployed may be one already deployed on a host node or one not yet deployed. For an application already deployed on a host node, its deployment position is recorded in the orchestration deployment file, so it can be deployed directly to the corresponding deployment position (that is, the target host node); for an application not yet deployed on a host node, the target host node can be selected from the plurality of host nodes according to the application information and the orchestration deployment file, and the application deployed accordingly.
In this way, the resource usage of applications is classified by the pre-trained orchestration classification model, and the applications are then orchestrated by combining the set orchestration rules with the resource usage categories output by the model, so that applications are deployed automatically and reasonably and the resource utilization and overall performance of the system are improved.
Fig. 2 is a schematic flow chart of an application deployment method according to still another embodiment of the present invention. As shown in fig. 2, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S201: acquiring training data, and training on the training data with a machine learning algorithm to obtain the orchestration classification model. This step trains the orchestration classification model: resource usage data of a plurality of applications deployed on a plurality of host nodes is collected, feature extraction is performed on the resource usage data to obtain corresponding resource usage feature data as the training data, and the training data is then used to train the orchestration classification model with a machine learning algorithm.
It will be appreciated that the resource usage data collected herein is data formed during the running of an application that has been deployed on a host node. In the embodiment, the resource usage data can be acquired by counting the request quantity received by a plurality of applications in a set time period, acquiring index values of performance indexes of the plurality of applications when the respective host nodes run, and generating the resource usage data according to the request quantity and the index values of the performance indexes.
The performance indexes are used for measuring system performance and include any one or more of CPU usage rate, memory usage rate and disk usage rate. In an embodiment, the resource usage data may include the request amounts (both incoming and outgoing requests) of an application in one or more time periods, and may further include the application's CPU usage rate, CPU load, memory usage rate and disk usage rate in those time periods; these data may be obtained from a project deployment test process, and example data content is shown in Table 1. Such resource usage data can measure, from multiple dimensions, the impact of the current deployment on system resources and system performance.
TABLE 1
In an embodiment, feature extraction on the resource usage data means processing the resource usage data according to the pre-selected feature parameters to obtain the corresponding resource usage feature data. The feature parameters include the request-volume ratios between different time periods and the performance indexes; selecting feature parameters removes redundant data from the resource usage data, reduces the data dimensionality, and improves the efficiency and effect of machine learning. The request-volume ratio between different time periods is the ratio of the request amounts of two time periods, such as the ratio of the daytime request amount to the nighttime request amount. Table 2 is an example of the data content of the resource usage feature data.
TABLE 2
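The feature-extraction step described above can be sketched as follows; the field names and raw-data shape are assumptions for illustration, not taken from Tables 1 and 2:

```python
def extract_features(usage: dict) -> dict:
    """Reduce raw resource usage data to the selected feature parameters:
    the day/night request-volume ratio plus the performance indexes."""
    day, night = usage["day_requests"], usage["night_requests"]
    return {
        # ratio of daytime to nighttime request volume (guard against /0)
        "day_night_ratio": day / night if night else float("inf"),
        "cpu_usage": usage["cpu_usage"],
        "memory_usage": usage["memory_usage"],
        "disk_usage": usage["disk_usage"],
    }

raw = {"day_requests": 8000, "night_requests": 2000,
       "cpu_usage": 0.72, "memory_usage": 0.55, "disk_usage": 0.31}
print(extract_features(raw)["day_night_ratio"])  # 4.0
```

Dropping the raw request counts in favor of their ratio is one way to realize the dimensionality reduction the text describes.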
After the resource usage feature data is extracted, training can be performed with a machine learning algorithm to obtain the orchestration classification model. In an embodiment, the machine learning algorithm may be a decision tree, a logistic regression algorithm, a support vector machine, or the like. Taking the decision tree as an example, a feature parameter can be compared with a set threshold to obtain the index level of a set resource index.
The resource indexes may be any combination of two or more of a time-period usage rate (such as the daytime usage rate and the night usage rate), the CPU usage rate, the memory usage rate and the disk usage rate; setting the resource indexes in this way serves the goal of improving resource utilization and overall system performance from multiple dimensions.
For example, the ratio of the daytime request amount to the nighttime request amount is compared with a threshold of 1: if the ratio is greater than 1, the index level of the daytime usage rate is high; if it is equal to 1, the level is medium; and if it is less than 1, the level is low.
For another example, if a resource index value (CPU usage rate, memory usage rate or disk usage rate) is less than the threshold of 30%, the index level of the corresponding resource index is low; between 30% and 60%, medium; and greater than 60%, high. Table 3 shows an example of the results obtained after processing the data of Table 2 with the machine learning algorithm.
TABLE 3

| Application | Daytime usage rate | Night usage rate | CPU usage rate | Memory usage rate | Disk usage rate |
| A | Low | High | High | High | High |
| B | Low | High | Medium | Medium | Medium |
| E | High | Low | High | High | Low |
| F | Medium | Medium | Medium | Medium | Medium |
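The two threshold rules above can be sketched directly (the thresholds of 1, 30% and 60% are taken from the text; the function names are illustrative):

```python
def level_from_ratio(day_night_ratio: float) -> str:
    """Map the day/night request-volume ratio to the daytime-usage index
    level, using the threshold of 1 described in the text."""
    if day_night_ratio > 1:
        return "high"
    if day_night_ratio == 1:
        return "medium"
    return "low"

def level_from_usage(usage: float) -> str:
    """Map a usage rate (CPU, memory or disk) to an index level using the
    30% / 60% thresholds described in the text."""
    if usage < 0.30:
        return "low"
    if usage <= 0.60:
        return "medium"
    return "high"

print(level_from_ratio(4.0), level_from_usage(0.45))  # high medium
```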
Step S202: acquiring resource usage feature data of the applications deployed on a plurality of host nodes, inputting the resource usage feature data into the orchestration classification model, and outputting the resource usage categories of the applications. In an embodiment, resource usage data of the applications deployed on the plurality of host nodes is collected, including the request amounts, CPU usage rates, memory usage rates and disk usage rates over a plurality of time periods, and feature extraction is then performed on the resource usage data according to the feature-extraction flow of step S201 to obtain the corresponding resource usage feature data. The host nodes here may be the same as or different from the host nodes used to acquire the training data in step S201.
In this embodiment, the resource usage category output by the model is the classification result of an application; for example, the daytime usage rate, CPU usage rate, memory usage rate and disk usage rate of application 1 are high, and the night usage rate, CPU usage rate, memory usage rate and disk usage rate of application 2 are high.
Step S203: orchestrating a plurality of the applications according to the orchestration rules set for the host nodes and the index levels of the resource usage categories, to generate an orchestration deployment file. An orchestration rule may include a resource index combination and the expected level of each resource index in the combination. In this step, the index level of each resource index can be compared with the expected level of the same resource index to screen out, from the plurality of applications, the applications conforming to the orchestration rule of a host node and construct an application group; the host node is then taken as the target host node on which the applications of the application group are to be deployed, and the application information of the application group and the address of the target host node are written into the orchestration deployment file.
In the embodiment, a resource index combination may be the daytime usage rate combined with the night usage rate, the CPU usage rate combined with the disk usage rate, or the memory usage rate combined with the disk usage rate. Taking deployment onto host node 1 as an example, the orchestration rule is that some of its applications have a high daytime usage rate and others have a high night usage rate. Assuming application 1 runs scheduled synchronization tasks and has a high night usage rate, while application 2 serves a specific business and has a high daytime usage rate, applications 1 and 2 can both be deployed onto host node 1, so that the CPU, memory, disk and other resources of host node 1 are used to the greatest extent.
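The host node 1 example can be sketched as a rule that pairs one day-heavy application with one night-heavy application so that the node's resources are used around the clock (application names and levels below are illustrative):

```python
def pair_for_node(apps: dict) -> list:
    """Select one application with a high daytime usage rate and one with a
    high night usage rate so they share a node's CPU/memory/disk across the
    full day."""
    day_heavy = [a for a, lv in apps.items() if lv.get("daytime_usage") == "high"]
    night_heavy = [a for a, lv in apps.items() if lv.get("night_usage") == "high"]
    if day_heavy and night_heavy:
        return [day_heavy[0], night_heavy[0]]
    return []

apps = {"app-2": {"daytime_usage": "high", "night_usage": "low"},
        "app-1": {"daytime_usage": "low", "night_usage": "high"}}
print(pair_for_node(apps))  # ['app-2', 'app-1']
```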
Step S204: deploying the applications to be deployed onto the corresponding target host nodes according to the orchestration deployment file to complete application deployment. When deploying applications, the address of the target host node of each application can be read from the orchestration deployment file, so that the applications to be deployed are planned and deployed reasonably.
Fig. 3 is a schematic flow chart of an application deployment method according to still another embodiment of the present invention. As shown in fig. 3, the application deployment method in the embodiment of the present invention mainly includes the following steps:
Step S301: acquiring training data, and training on the training data with a machine learning algorithm to obtain the orchestration classification model.
Step S302: acquiring the resource usage feature data of the applications deployed on the plurality of host nodes, inputting it into the orchestration classification model, and outputting the resource usage categories of the applications.
Step S303: orchestrating a plurality of the applications according to the orchestration rules set for the host nodes and the index levels of the resource usage categories, to generate an orchestration deployment file.
Step S304: deploying the applications to be deployed onto the corresponding target host nodes according to the orchestration deployment file to complete application deployment.
Step S305: judging whether the current deployment is the first deployment; if so, returning to step S302, and if not, ending the flow. It will be appreciated that it may instead be determined whether the current deployment has reached a specified number of deployments, and if not, steps S302 to S305 may be looped again; here, the specified number is 2 or more.
The specific implementations of steps S301 to S304 correspond to steps S201 to S204 and are not repeated here. Step S305 serves, after the first deployment is finished, to collect the resource usage feature data of the applications deployed on the target host nodes through step S302, output new resource usage categories of the applications with the orchestration classification model, and execute step S303 again to generate a new orchestration deployment file, so that the applications are redeployed according to the new orchestration deployment file, fine-tuning the application deployment and further ensuring its rationality.
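The loop over steps S302 to S305 can be sketched as follows; the collect, classify, orchestrate and deploy callables stand in for the steps described above, and the stopping condition is the specified number of deployments (all names here are illustrative):

```python
def deploy_with_refinement(collect, classify, orchestrate, deploy,
                           max_rounds: int = 2) -> int:
    """Run the collect -> classify -> orchestrate -> deploy cycle, then
    repeat it with freshly collected data to fine-tune the deployment
    (steps S302-S305). Returns the number of rounds executed."""
    rounds = 0
    while rounds < max_rounds:
        features = collect()          # step S302: gather feature data
        categories = classify(features)  # step S302: model output
        plan = orchestrate(categories)   # step S303: deployment file
        deploy(plan)                     # step S304: deploy
        rounds += 1
    return rounds

# Stub callables standing in for the real steps.
rounds = deploy_with_refinement(lambda: [], lambda f: f, lambda c: c,
                                lambda p: None, max_rounds=2)
print(rounds)  # 2
```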
According to this embodiment, after one deployment is finished, the resource usage data generated by that deployment is collected again and classified with the orchestration classification model, so that the applications are orchestrated again; this fine-tunes the application deployment, further ensures its rationality, and further improves the resource utilization and overall performance of the system.
In a preferred embodiment, step S305 described above may be performed during idle time. When redeploying, a rolling-upgrade mode can be adopted: part of the applications are upgraded first, and then the remaining applications are upgraded, which avoids the large resource occupation that a whole-system upgrade deployment would cause and its impact on normal service.
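The rolling-upgrade mode can be sketched as upgrading applications in small batches rather than all at once (the batch size is an assumption for illustration):

```python
from typing import Iterable, List

def rolling_batches(apps: List[str], batch_size: int = 2) -> Iterable[List[str]]:
    """Yield applications in small batches so only part of the system is
    redeployed at a time, keeping normal service available."""
    for i in range(0, len(apps), batch_size):
        yield apps[i:i + batch_size]

batches = list(rolling_batches(["A", "B", "E", "F"], batch_size=2))
print(batches)  # [['A', 'B'], ['E', 'F']]
```

Each batch would be upgraded and verified before the next batch starts, bounding the resources consumed by the upgrade at any moment.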
Fig. 4 is a schematic diagram of the main modules of an application deployment apparatus according to an embodiment of the present invention.
As shown in fig. 4, an application deployment apparatus 400 according to an embodiment of the present invention mainly includes:
The application classification module 401 is configured to obtain resource usage feature data of applications deployed on a plurality of host nodes, input the resource usage feature data to a pre-trained orchestration classification model, and output a resource usage class of the applications. The module firstly collects the resource usage data of the application deployed on a plurality of host nodes, and performs feature extraction on the resource usage data to obtain corresponding resource usage feature data. The host nodes may be physical host nodes or cloud host nodes.
The resource usage feature data is then input into the pre-trained orchestration classification model, which is used to predict the resource usage class of each application. The resource usage class includes the index level of a set resource index. A resource index is an index for measuring resource utilization and system performance, such as the usage rate in a set time period, the CPU usage rate, the memory usage rate or the disk usage rate, and the index levels include high, medium and low. Thus, the orchestration classification model is used to predict the index level of each application under each resource index.
An application orchestration module 402, configured to orchestrate a plurality of the applications according to the orchestration rules set for the host nodes and the index levels, and generate an orchestration deployment file. An orchestration rule is used for combining a plurality of resource indexes, for example the CPU usage rate with the disk usage rate (one resource index combination), or the memory usage rate with the disk usage rate (another resource index combination). In an embodiment, the orchestration rule may indicate the expected level of each of the combined resource indexes, so the orchestration rule may include a resource index combination and the expected level of each resource index in that combination.
In an embodiment, respective orchestration rules may be set for each of the plurality of host nodes. The module screens out the application meeting the arrangement rule of the host node from a plurality of applications according to the arrangement rule set for the host node and the resource use category of each application predicted by the arrangement classification model, adds the screened application to the application group, and then takes the host node as the deployment position of the application group to generate an arrangement deployment file, wherein the arrangement deployment file records the deployment position corresponding to the application of each application group.
An application deployment module 403, configured to deploy the applications to be deployed onto the corresponding target host nodes according to the orchestration deployment file to complete application deployment. An application to be deployed may be one already deployed on a host node or one not yet deployed. For an application already deployed on a host node, its deployment position is recorded in the orchestration deployment file, so it can be deployed directly to the corresponding deployment position (that is, the target host node); for an application not yet deployed, the target host node can be selected from the plurality of host nodes according to the application information and the orchestration deployment file, and the application deployed accordingly.
In addition, the application deployment apparatus 400 of the embodiment of the present invention may further include a model training module and a redeployment module (not shown in Fig. 4). The model training module is configured to collect resource usage data of a plurality of applications on a plurality of host nodes, perform feature extraction on the resource usage data to obtain corresponding resource usage feature data as training data, and train on the training data with a machine learning algorithm to obtain the orchestration classification model.
The redeployment module is configured to obtain resource usage feature data of the applications deployed on the target host nodes, output new resource usage categories of the applications by using the orchestration classification model, re-execute the step of orchestrating the applications to generate a new orchestration deployment file, and redeploy the applications according to the new orchestration deployment file.
From the above description, it can be seen that classifying the resource usage of applications with the pre-trained orchestration classification model and then orchestrating the applications by combining the set orchestration rules with the resource usage categories output by the model achieves reasonable deployment of applications and improves the resource utilization and overall performance of the system.
Fig. 5 illustrates an exemplary system architecture 500 to which the application deployment method or application deployment apparatus of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 is used as a medium to provide communication links between the terminal devices 501, 502, 503 and the server 505. The network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 505 via the network 504 using the terminal devices 501, 502, 503 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 501, 502, 503.
The terminal devices 501, 502, 503 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server that processes an application deployment request sent by an administrator using the terminal devices 501, 502, 503. The background management server can acquire the resource usage feature data, determine the resource usage category of each application, generate the arrangement deployment file, complete the application deployment, and feed back the processing result (such as the application deployment result) to the terminal devices.
It should be noted that the application deployment method provided in the embodiment of the present invention is generally executed by the server 505, and accordingly, the application deployment apparatus is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
According to an embodiment of the invention, the invention further provides an electronic device and a computer readable medium.
The electronic device comprises one or more processors and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the application deployment method of the embodiment of the present invention.
The computer readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements an application deployment method of an embodiment of the present invention.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use in implementing an electronic device of an embodiment of the present invention. The electronic device shown in fig. 6 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the computer system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) screen, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, the processes described above in the main step diagrams may be implemented as computer software programs according to the disclosed embodiments of the invention. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the main step diagrams. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 601.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as a processor comprising an application classification module, an application arrangement module, and an application deployment module. The names of these modules do not, in some cases, limit the modules themselves; for example, the application classification module may also be described as a module that "obtains resource usage feature data of applications deployed on a plurality of host nodes, inputs the resource usage feature data into a pre-trained arrangement classification model, and outputs resource usage categories of the applications".
As a further aspect, the present invention also provides a computer readable medium, which may be included in the device described in the above embodiments or may exist alone without being assembled into the device. The computer readable medium carries one or more programs which, when executed by the device, cause the device to: obtain resource usage feature data of applications deployed on a plurality of host nodes; input the resource usage feature data into a pre-trained arrangement classification model and output resource usage categories of the applications, wherein the resource usage categories comprise index levels of set resource indexes; arrange the plurality of applications according to the arrangement rules set for the host nodes and the index levels to generate an arrangement deployment file, wherein the arrangement rules are used for combining a plurality of resource indexes; and deploy the applications to be deployed on corresponding target host nodes according to the arrangement deployment file to complete the application deployment.
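The arrangement step carried by these programs could be sketched as follows. This is an illustrative sketch only: the indicator names (`cpu`, `memory`), the node addresses, and the exact-match rule semantics are assumptions introduced here, while an actual embodiment may combine index levels of the resource indexes in any suitable way.

```python
# Per-application index levels output by the arrangement classification model.
apps = {
    "app-a": {"cpu": "high", "memory": "low"},
    "app-b": {"cpu": "low",  "memory": "high"},
    "app-c": {"cpu": "high", "memory": "low"},
}
# Arrangement rules set for each host node: the index-level combination it accepts.
host_rules = {
    "10.0.0.1": {"cpu": "high", "memory": "low"},
    "10.0.0.2": {"cpu": "low",  "memory": "high"},
}

def orchestrate(apps, host_rules):
    """Build the arrangement deployment file: one application group per node."""
    deployment = []
    for address, rule in host_rules.items():
        # Select the applications conforming to this node's arrangement rule.
        group = [name for name, levels in apps.items() if levels == rule]
        if group:
            # Write the application group and target node address to the file.
            deployment.append({"target_node": address, "applications": group})
    return deployment

for entry in orchestrate(apps, host_rules):
    print(entry["target_node"], ",".join(sorted(entry["applications"])))
# 10.0.0.1 app-a,app-c
# 10.0.0.2 app-b
```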
According to the technical scheme provided by the embodiment of the present invention, the application resource usage is classified by the pre-trained arrangement classification model, and the applications are then arranged by combining the set arrangement rules with the resource usage categories output by the model, so that reasonable deployment of the applications is realized, and the resource utilization rate and the overall performance of the system are improved.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.