Disclosure of Invention
In order to solve the defects in the prior art and achieve the purposes of interactive parameter configuration and convenient deployment of optional components, the invention adopts the following technical scheme:
The DevOps-based convenient deployment method for optional components comprises the following specific steps:
Cluster configuration items are interactively obtained from a user, wherein the cluster configuration items comprise the basic parameter configuration of the cluster and the enabling parameter configuration of the cluster's optional components; a main configuration file is generated accordingly, the main configuration file is parsed to obtain the cluster deployment installation files, and the installation files are executed so that the cluster environment and the optional components are deployed in an integrated and convenient manner. Interactive selection of the cluster configuration items lowers the user's usage threshold, greatly reduces working time, and reduces the error rate of manual input;
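The interactive configuration items described above might be collected into a main configuration file along the following lines. This is a hypothetical sketch only: the field names, addresses and layout are illustrative assumptions, not the schema actually used by the method.

```yaml
# Hypothetical main configuration file (cluster.yaml) sketch.
# All field names and values are illustrative assumptions.
cluster:
  type: 3-masters            # or single-master
  masters:
    - ip: 192.168.1.11
      role: master-init
    - ip: 192.168.1.12
      role: master-join
    - ip: 192.168.1.13
      role: master-join
  workers:
    - ip: 192.168.1.21
      hostname: node-1
  keepalived:
    vip: 192.168.1.100       # virtual IP address
    interface: eth0          # network card bound to the VIP
  harbor:
    ip: 192.168.1.50         # mirror image warehouse node
optional_components:
  cluster_manager: true      # cluster management visualization component
  monitor: true              # cluster monitoring visualization component
  log_collect: false         # cluster log visualization component
```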
For modification of an optional component by a developer, a trigger mechanism is configured for the code warehouse: when the developer pushes updated code to the code warehouse, the pipeline task of an automation tool is triggered, which automatically compiles the optional component's code, constructs a mirror image, and pushes the mirror image to the mirror image warehouse; the automation tool then updates the optional component's mirror image to complete automatic deployment of the optional component. Because the entire flow of code compilation, mirror image construction and component deployment is carried out automatically, misoperation caused by manual execution when upgrading or updating the optional component is eliminated, development efficiency is improved, deployment feedback is obtained in a short time, problems are found and repaired in time, the development cycle is shortened, and continuous delivery and rapid iteration of the application program are realized.
Further, the basic parameter configuration comprises host information and cluster configuration information. The host information is used to confirm the cluster type, the roles of the cluster nodes and the mirror image warehouse node information; the master control node and working node information are confirmed based on the cluster type and node roles and are used to configure address information, passwords and host names. This configuration is critical to the normal operation and management of the cluster, without which the cluster cannot be properly deployed or operated; based on the mirror image warehouse node information, it is confirmed that the cluster acquires the corresponding container mirror images, ensuring that the K8s cluster can properly pull the required mirror images;
The main configuration file is parsed, configuration information related to the container is extracted, and the configuration information is parsed and applied to the container daemon; the read configuration items are written into the daemon file for installing the container engine Docker. The automated operation and maintenance Ansible configuration file is generated based on the host information and is used to set load balancing and proxy services, improving the performance and reliability of applications and services. The KEEPALIVED file for managing virtual address allocation is configured to set the virtual IP address and network interface, realizing high availability; the cluster configuration process is thus automated, and different cluster configuration requirements are adapted to by dynamically modifying the configuration files. The K8s cluster network and storage configuration is processed based on the cluster configuration information and must be configured correctly, otherwise the cluster may lack necessary functions or services, affecting the integrity and functionality of the system. The host IP address of the container is extracted from the configuration dictionary, and the host name Hostname in the configuration dictionary is updated to the assigned host IP address, ensuring that the mirror image warehouse Harbor is configured using the IP address;
Further, based on the parsed main configuration file and the default configuration files, the installation files extract the initialization configuration, cluster configuration, proxy configuration and the Kubelet configuration for node management and operation from the Kubeadm configuration files for cluster deployment management, wherein the most basic scheduling unit of the cluster represents a group of one or more containers which share storage, network and runtime environment; the corresponding cluster deployment management configuration YAML files are generated for K8s cluster initialization;
And in the cluster installation stage, cluster initialization, installation of the mirror image warehouse and the container engine, and creation of the cluster network and cluster storage are carried out based on the cluster deployment management configuration YAML files, and the corresponding components are installed and configured on each node according to the cluster type and the roles of the cluster nodes.
Further, the initialization configuration is used for setting the basic parameters of the K8s cluster; the initialization configuration is extracted from the main configuration file and the default configuration file to generate the corresponding Kubeadm initialization configuration file for cluster deployment management, including the cluster's control plane settings, authentication, encryption configuration and the like, ensuring that the cluster can start normally with a preliminary configuration;
The cluster configuration is used for defining the network and storage settings of the cluster; the cluster configuration and the default cluster configuration are extracted to generate the Kubeadm cluster configuration file for cluster deployment management;
The proxy configuration handles network traffic routing in the cluster by configuring the proxy service, ensuring correct forwarding of network traffic, correct operation of services, and load balancing; the proxy configuration and the default proxy configuration are extracted to generate the Kubeadm proxy configuration file for cluster deployment management;
The Kubelet configuration for node management and operation is used for managing container operation on the node; it is extracted to generate the Kubelet configuration file for node management and operation.
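The four extracted configurations correspond to the standard kubeadm and kubelet configuration kinds. A minimal sketch of the generated YAML might look as follows; the API versions, addresses and parameter values are illustrative assumptions, not values prescribed by this method:

```yaml
# Hypothetical kubeadm configuration YAML sketch; values are illustrative.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration            # initialization configuration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration         # cluster configuration (network, control plane)
controlPlaneEndpoint: "192.168.1.100:6443"   # e.g. the KEEPALIVED VIP
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration       # proxy configuration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration         # node management and operation
cgroupDriver: systemd
```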
Further, the automated operation and maintenance Ansible configuration file comprises the global variable configuration: the IP address of the mirror image warehouse Harbor is acquired, the setting of whether the Master node also serves as a working node is read and checked, the global variable configuration file is updated to include the mirror image warehouse Harbor IP address and the working state of the Master node, and the total variable configuration file is regenerated;
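As a sketch, the regenerated global variable file could take a form like the following; the variable names and values are hypothetical assumptions, not the file actually produced by the method:

```yaml
# Hypothetical Ansible global variables (e.g. group_vars/all.yml) sketch.
harbor_ip: 192.168.1.50     # mirror image warehouse Harbor IP address
master_as_worker: true      # whether the Master node also serves as a working node
```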
And in the cluster installation stage, a network connection check is carried out on the specified IP address of the mirror image warehouse Harbor and the IP address of the master control node to ensure that network communication is problem-free, which is an important step in ensuring cluster stability and normal service operation.
Further, the cluster type comprises a three-master-control cluster, which comprises a master control initialization node, master control joining nodes and working nodes, and the acquired user interaction information covers the master control initialization node, the working nodes and the master control joining nodes; the master control initialization node is used for initializing the cluster, for which the IP addresses and SSH root user passwords of the three master control nodes are input, and the virtual IP address and the name of the network card bound to the virtual IP address are configured for the KEEPALIVED configuration; for a working node, its IP address and host name are input;
The automated operation and maintenance Ansible configuration file comprises the Ansible host file; for the three master control nodes, their IP addresses and states are updated so that the master control nodes in the cluster can be correctly identified and managed, including each node's IP address and role, so that Ansible is aware of the position and role of each master control node;
the automated operation and maintenance Ansible configuration file comprises the load balancing and proxy service HAProxy file, to which the new master control nodes' IP addresses and port information are added to ensure that the HAProxy load balancer can correctly distribute traffic to the three master control nodes;
The master control initialization node executes the cluster initialization operations, including configuration of the highly available control plane, startup of the cluster, configuration of load balancing and the like; the acquired information includes the IP addresses of the three master control nodes and the root user passwords of the servers where they are located, and the KEEPALIVED VIP address and the name of the network card bound to the VIP must be configured to realize high availability of the master control nodes;
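The KEEPALIVED configuration described above can be sketched as follows; the interface name, VIP, router id and priority are illustrative assumptions:

```
# Hypothetical keepalived.conf sketch; values are illustrative.
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other two master control nodes
    interface eth0          # network card bound to the VIP
    virtual_router_id 51
    priority 100            # lower on the backup nodes
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # the VIP shared by the three master control nodes
    }
}
```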
The master control joining node is added into the already-configured three-master-control cluster to become a new master control node of the cluster; the information required to be acquired is the IP address of the master control initialization node, which is used for connecting to the existing cluster;
The working node is applicable to both the single-master-control cluster and the three-master-control cluster; it does not bear control plane responsibilities and is only used for running the application's containers, being responsible for running service loads. The information required to be acquired is the IP address of the working node and, optionally, its host name; the subsequent operation is that the working node joins the cluster and runs services as a working node. In the single-master-control cluster, the master control node may also be configured to run as a working node; in the three-master-control cluster, ordinary working nodes are configured to join the cluster, ensuring that they can accept workloads;
In the cluster installation stage, for the three-master-control-node mode, the master control initialization node installs the automated management tool Ansible to deploy HAProxy load balancing and KEEPALIVED according to the operating system type; the load balancing and high-availability configuration reduces the complexity and error risk of manual operation, and the automation scripts ensure that each node is configured as expected, facilitating management and maintenance. The automated management tool Ansible needs to be able to execute commands or scripts between different nodes; requiring a password to be input each time would be cumbersome and unfavorable for automated operation, so these operations are made password-free by configuring SSH mutual trust. To ensure password-free SSH connections between the different hosts in the cluster environment, SSH mutual trust is configured as follows: SSH mutual trust is established with the mirror image warehouse Harbor server by sending the local machine's SSH public key to the Harbor server; SSH mutual trust is established between all master control nodes; and if working nodes are included, the working nodes also establish SSH mutual trust with all master control nodes. For the master control initialization node, installation and configuration are performed by installing the automated management tool Ansible; for a master control joining node, the certificate key for cluster authentication is acquired from the master control node so that correct security verification can be performed for the new master control node, and the command to join the cluster is generated on the new master control node so that it joins the cluster as a correctly configured master control node; for a working node, the command for joining the cluster is acquired from the master control node and executed on the new working node to join the cluster.
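The SSH mutual-trust setup described above can be sketched as a dry-run script that only prints the key-distribution commands to be executed; the key path, user and addresses are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical dry-run sketch: print the ssh-copy-id commands that would
# establish SSH mutual trust from the local machine to the mirror image
# warehouse Harbor server and to every master control node.
print_trust_cmds() {
  harbor_ip="$1"; shift
  # trust with the mirror image warehouse Harbor server
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@${harbor_ip}"
  # trust with each master control node
  for master_ip in "$@"; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@${master_ip}"
  done
}

print_trust_cmds 192.168.1.50 192.168.1.11 192.168.1.12 192.168.1.13
```

In a real deployment these commands would be executed (not echoed) once per node pair, after which Ansible can run tasks on every host without password prompts.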
Further, the cluster type includes a single master control cluster, only one master control node is provided in the cluster, and is used for all control and management tasks of the cluster, the single master control cluster includes a master control node and a working node, and the acquired user interaction information includes:
The master control node is responsible for managing the control plane of the whole cluster; the information required to be acquired is the IP address of the master control node and its SSH root user password, and whether the master control node also serves as a working node is selected;
The working node is applicable to both the single-master-control cluster and the three-master-control cluster; it does not bear control plane responsibilities and is only used for running the application's containers, being responsible for running service loads. The information required to be acquired is the IP address of the working node and, optionally, its host name; the subsequent operation is that the working node joins the cluster and runs services as a working node. In the single-master-control cluster, the master control node may also be configured to run as a working node; in the three-master-control cluster, ordinary working nodes are configured to join the cluster, ensuring that they can accept workloads;
In the cluster installation stage, the automated management tool Ansible needs to be able to execute commands or scripts between different nodes; requiring a password to be input each time would be cumbersome and unfavorable for automated operation, so these operations are made password-free by configuring SSH mutual trust. To ensure password-free SSH connections between the different hosts in the cluster environment, SSH mutual trust is configured: SSH mutual trust is established with the mirror image warehouse Harbor server by sending the local machine's SSH public key to the Harbor server, and for the single-master-control-node mode SSH mutual trust only needs to be established between the working nodes and the single master control node. For the master control node, cluster initialization and master control node setup are performed based on the cluster deployment management configuration YAML files; for a working node, the command for joining the cluster is acquired from the master control node and executed on the new working node to join the cluster.
Further, the NTP node information is obtained from the host information to confirm the time source for time synchronization; because time differences between nodes may cause various problems, the time of all nodes in a multi-node K8s cluster needs to be kept consistent, and if the time difference between nodes is too large, problems may occur in task scheduling, log recording, data synchronization and other operations in the cluster;
The automated operation and maintenance Ansible configuration file comprises the Chrony configuration file, and the NTP server information in the Chrony configuration file is updated to ensure system time synchronization;
In the cluster installation stage, NTP services are installed so that time is synchronized between all working nodes and master control nodes, which is critical to the normal operation and scheduling of the distributed system; the master control node is usually configured with the NTP service in the early stage of cluster deployment, so no separate handling is needed in subsequent scripts, ensuring that the time of the whole cluster is consistent and avoiding the various potential problems caused by unsynchronized time.
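A minimal sketch of the updated Chrony configuration, assuming (as an illustration only) that the master control node serves as the NTP time source for the cluster:

```
# Hypothetical /etc/chrony.conf sketch; addresses are illustrative.
server 192.168.1.11 iburst   # NTP time source, e.g. the master control node
makestep 1.0 3               # step the clock at startup if the offset is large
# on the time-source side, permit the cluster subnet to synchronize:
allow 192.168.1.0/24
```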
Further, the enabling parameter configuration is the related information configuration of the optional components, including the configuration information of the cluster management visualization component, the cluster monitoring visualization component and the cluster log visualization component;
Corresponding configuration files are generated from the enabling parameters of the optional components (the cluster management platform, the monitoring visualization platform and the log visualization platform): the IP address for exposing the component services and the address of the harbor mirror image warehouse are extracted from the host configuration file and passed as parameters to the installation processing function of the optional components, generating the monitor.yaml file for installing the cluster monitoring visualization component, the cluster_manager.yaml file of the cluster management visualization component, and the log_collect.yaml file of the cluster log visualization interface, for subsequent installation and deployment of the optional components;
And in the cluster installation stage, according to the user's configuration parameters, the optional-components_install.sh script is executed to install the corresponding optional components: the switches for installing the optional components are read from the main configuration file to determine whether to execute the optional components' installation script; if so, after verifying the health state of the cluster and the installation-check conditions of the optional components, the installation script optional-components_install.sh is executed, the installation yaml files of the cluster management visualization component, the cluster monitoring visualization component and the cluster log visualization component are read, the installation of the optional components is completed, and finally the installation result information is output.
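The switch-reading logic can be sketched as follows; the configuration key names and the flat key layout are hypothetical assumptions about the main configuration file, not its actual format:

```shell
#!/bin/sh
# Hypothetical sketch: read the optional-component install switches from the
# main configuration file and report which installation yaml files would be
# applied. Key names (monitor_enabled etc.) are illustrative assumptions.
component_enabled() {
  conf_file="$1"; key="$2"
  grep -q "^${key}: *true" "$conf_file"
}

plan_install() {
  conf_file="$1"
  component_enabled "$conf_file" monitor_enabled         && echo "kubectl apply -f monitor.yaml"
  component_enabled "$conf_file" cluster_manager_enabled && echo "kubectl apply -f cluster_manager.yaml"
  component_enabled "$conf_file" log_collect_enabled     && echo "kubectl apply -f log_collect.yaml"
  true  # do not fail when the last component is disabled
}
```

In the real script the printed commands would be executed against the cluster once its health state has been verified.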
Further, the automatic deployment of the optional components is as follows: in the code warehouse, a positioning identifier pointing to the code warehouse plug-in of the automation tool is configured, and a pipeline is built in the automation tool to set a series of tasks of different stages in the workflow for continuous integration and continuous delivery/deployment; the different stages comprise pulling the code from the code warehouse, building the npm project, building the mirror image through the container engine Docker, pushing the mirror image to the mirror image warehouse harbor, and executing the optional components' yaml files to complete the deployment of the components;
When the automation tool detects that the code in the code warehouse has changed, the build process is automatically triggered and executed; for this, the locator of the code warehouse is configured, the code branch to build is specified, a build trigger is configured (triggered when the code changes), the build environment and build steps are configured, the user name and password of the mirror image warehouse Harbor are configured, and the K8s cluster credential is configured and added in the automation tool so that Kubectl commands can be executed on the optional components' files in the automation tool's pipeline;
When a change in the code warehouse is detected, a request is sent to the positioning identifier, and the code warehouse plug-in of the automation tool receives and processes the request to trigger execution of the configured tasks: the automation tool pulls the latest code and compiles it to construct a container mirror image, pushes the container mirror image to the mirror image warehouse, cleans up the local container mirror image after the build completes to release space, uses the newly constructed container mirror image in the cluster and updates the corresponding mirror image configuration, and replaces the corresponding scheduling unit of the cluster so that the state of the deployed optional component is updated automatically.
Through this process, after detecting a code change the automation tool Jenkins can automatically build the component's Docker mirror image, push it to the mirror image warehouse, and update the Deployment in the K8s cluster to use the new mirror image, realizing automatic deployment; the whole process ensures that when the component's function code changes, the Pods automatically update the state of the deployed optional component, keeping the system in its latest state.
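The pipeline stages just described (pull code, build the npm project, construct and push the mirror image, update the Deployment, clean up) would correspond to a Jenkins pipeline definition roughly like the following sketch; the registry address, credential IDs, and resource names are illustrative assumptions, not the patent's actual pipeline:

```groovy
// Hypothetical Jenkins declarative pipeline sketch; registry address,
// credential IDs and resource names are illustrative assumptions.
pipeline {
    agent any
    environment {
        IMAGE = "192.168.1.50/library/optional-component:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Pull code')   { steps { checkout scm } }
        stage('Build npm')   { steps { sh 'npm install && npm run build' } }
        stage('Build image') { steps { sh 'docker build -t $IMAGE .' } }
        stage('Push image')  {
            steps {
                withCredentials([usernamePassword(credentialsId: 'harbor-cred',
                        usernameVariable: 'U', passwordVariable: 'P')]) {
                    sh 'docker login -u $U -p $P 192.168.1.50 && docker push $IMAGE'
                }
            }
        }
        stage('Deploy') {
            steps {
                // update the Deployment so the Pods roll to the new mirror image
                sh 'kubectl set image deployment/optional-component app=$IMAGE'
            }
        }
        stage('Clean up') { steps { sh 'docker rmi $IMAGE || true' } }
    }
}
```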
The invention has the advantages that:
The invention provides a DevOps-based convenient deployment method for optional components, which realizes the K8s cluster basic environment through interactively input user configuration parameters and integrates convenient deployment of the cluster's related visualization components; an automated DevOps flow is adopted to realize automatic compilation, automatic mirror image construction and automatic deployment of the user-selected cluster visualization components (the cluster management visualization platform component, the cluster monitoring visualization platform component and the cluster log visualization platform component), thereby improving the efficiency of cluster management and of component update deployment, and improving the delivery efficiency and quality of the software system.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
The DevOps-based convenient deployment method for optional components comprises convenient deployment of a K8s cluster with optional components via interactively input parameters, and DevOps-based automatic deployment of the optional components.
1. The overall design is as shown in fig. 1:
in the process of installing the K8s environment, a user can input configuration items of the cluster through interaction, wherein the configuration items comprise basic parameter configuration of the K8s cluster and enabling parameter configuration of user selectable components related to the cluster, yaml (a markup language for data serialization) configuration files required by cluster installation are generated through conveniently deployed parameter configuration and analysis functions, and the generated yaml files are executed through kubectl apply commands (Kubectl is a command line tool of the K8s and used for interacting and managing the K8s cluster) so as to achieve installation and deployment of the K8s environment and the selectable components, and a unified flow of convenient cluster deployment is achieved.
Meanwhile, a process by which the developer achieves automatic deployment of the installed optional components is supported: when the function code of an optional component changes, the configured webhook monitoring code changes (a user-defined HTTP callback mechanism that allows one application to send real-time data or notifications to another application when a specific event occurs) triggers Jenkins (an automation tool for continuously building software projects, making it easier for developers to integrate changes into the project and obtain new builds), which sequentially executes the code compilation, mirror image construction, mirror image pushing and component update deployment of the visualization component, thereby achieving update and upgrade of the optional components.
2. Selectable component K8s cluster convenient deployment of interactive input parameters:
In the prior art, although one-key deployment of the basic cluster environment can be realized, a host list still needs to be manually written: the host list containing the cluster parameters is received, and the parameters in the host list are sent to an existing automated operation and maintenance tool (such as Ansible, used for configuration management, application deployment, task execution and coordination of multiple nodes), which sends a shell script with the K8s component information to the target host; the target host then installs and deploys the K8s components according to the K8s component information in the shell script. Moreover, the related visualization components of the cluster, such as the cluster management visualization interface, the cluster monitoring visualization interface and the cluster log visualization interface, cannot be integrated.
In order to solve the above problems, the invention supports interactive input of cluster deployment configuration parameters by the user, solving the problem of manually writing cluster parameter files, and adds user-selectable third-party components to be installed (the cluster management visualization platform component, the cluster monitoring visualization platform component and the cluster log visualization platform component) in the process. The function of these third-party components is mainly parameter visualization; without them, the user would have to manually input complicated command lines at the server to check data changes, reducing the user experience. The third-party components are deployed as needed by the user during cluster installation and deployment, so manual deployment by the user is not required and the user experience is improved. The specific flow is shown in fig. 2:
Information required from the user is input (the user is prompted to input the information on a page) and written into the main configuration file cluster.yaml through the ConfigHost parameter configuration function; this file mainly collects the host configuration information, the K8s configuration information and the configuration information of the optional components. From the main configuration file, the ParseConfFile parameter parsing function writes the user-input information into the harbor.yaml file, the Ansible host file, the kubeadm file, the K8s network and storage files and the configuration files of the optional components; finally, the install.sh file executes the installation via kubectl apply commands, completing the integrated convenient deployment of the basic K8s cluster environment, the cluster management visualization component, the cluster monitoring visualization component and the cluster log visualization component. Specifically, the code design flow includes the following steps:
Step S1.1 environmental pre-inspection
The system environment, including the operating system version, kernel version, configured DNS server addresses, etc., is checked to ensure that it meets the installation requirements. The purpose of the pre-check is to ensure that the system environment and configuration meet the requirements for software installation and operation, reducing problems and failure risk during subsequent installation.
Step S1.2 Interactive input configuration items
The interactive input step allows the user to interactively configure input parameters according to specific requirements, avoiding the need to manually write yaml files and improving flexibility: the user can choose to install optional components, set the network mode, input server addresses and the like, while the error rate of human input is reduced. The related optional-component parameters of the integrated cluster (cluster management visualization interface, cluster monitoring visualization interface, cluster log visualization interface) can be selected and installed by the user during interactive parameter input, providing integrated convenient cluster deployment, lowering the user's usage threshold and greatly reducing working time.
At this stage, an input_config.sh script is run, which prompts the user on the page for the required information in turn, and the information is written into the main configuration file cluster.yaml by means of the ConfigHost function. The information required from the user includes the host information IP (Internet Protocol, responsible for transmitting data packets from a source address to a destination address; a core protocol of the internet and local area networks), the K8s-related configuration information and the related information configuration of the optional components; specifically: the confirmed cluster type (single-master, three-master); the confirmed mode (master-init, node, master-join); the IP address and root user password of each master control node; for KEEPALIVED (a Linux software tool for high availability and load balancing, mainly used for managing the VIP and providing high availability of services), the VIP (Virtual IP address, bound to a network card name rather than to any single physical network interface) and the name of the network card bound to the VIP; the IP address of the harbor host; the NTP (Network Time Protocol, a protocol for synchronizing time in computer networks) node information; the network segment and subnet segment of the K8s cluster network mode; and the enabling parameters and service port numbers of the visualization components (the cluster management visualization component, the cluster monitoring visualization component and the cluster log visualization component). Wherein harbor is an enterprise-level Docker mirror image warehouse that provides storage, management and distribution functions for Docker mirror images, while Docker is a platform for developing, shipping and running applications, allowing developers to package an application and all its dependencies (including libraries and configuration files) into a lightweight, portable virtualized environment called a "container".
The interactive input configuration item specifically comprises a host information configuration, a K8s related information configuration and an optional component configuration:
1. Host information configuration
In the interactive input configuration items, the host-related information configuration is input first. The main purpose of this step is to set the network and identity information of each host node in the cluster. The ConfigHost function is used to interactively configure the host-related information for cluster deployment, and its implementation includes:
1) Configuring the cluster type: the cluster type is determined by the user's selection, where single-master represents a single-master cluster and 3-masters represents a three-master cluster. A single-master cluster has only one master node, which is responsible for all control and management tasks of the cluster, including scheduling, cluster state management, API service and the like; a three-master cluster has three master nodes. In this architecture the master node is the core component of the K8s Control Plane, responsible for managing and scheduling all activities in the cluster and for providing high availability and fault tolerance.
2) Configuring node roles, namely selecting the role of a host (such as master-init, node, master-join) according to the cluster type, and carrying out corresponding configuration according to the selection.
3) According to the selected cluster type and node role, detailed information of the different nodes is configured, and the ConfigHost function writes this information into the corresponding configuration files. The specific classification of the nodes is shown in figs. 3a and 3b.
For a single master control cluster, which comprises a master control node and a working node, information required to be input by a user is as follows:
Master node (master): the user inputs the IP address of the master node and its SSH root user password. Optionally, the master node may also be selected as a working node. SSH (Secure Shell, an encrypted remote-login protocol) is commonly used for system management, file transfer, and remote execution of commands.
The working node (node) inputs the IP address and host name of the working node.
For a three-master control cluster, the three-master control cluster comprises a master control initialization node, a master control joining node and a working node, and the information required to be input by a user is as follows:
Master initialization node (master-init), used for cluster initialization: the user inputs the IP addresses of the three master nodes (master-1, master-2, master-3) and their SSH root user passwords, and configures the virtual IP and the name of the network card to which the VIP is bound for the KEEPALIVED configuration.
The working node (node) inputs the IP address and host name of the working node.
Master joining node (master-join), used for joining an existing cluster: the user inputs the IP address of the master initialization node.
4) Configuring the NTP node: NTP ensures that the time of all computer systems and servers is synchronized to one accurate time source. This is particularly important in a cluster environment, because time differences between nodes can lead to various problems. The time of all nodes in a multi-node K8s cluster must be kept consistent; if the time difference between nodes is too large, task scheduling, logging, data synchronization and other operations in the cluster may fail.
By default an NTP server is deployed on the node; if a node is instead to be deployed as an NTP client, the IP address of the NTP server must be input so that the node can acquire time from that server.
The relevant host information is configured through the interactive prompts, ensuring that all necessary parameters are set correctly before cluster deployment. The function selects the cluster type and node role according to the user's input, and configures the corresponding IP addresses, SSH passwords, host names, and NTP settings. These configurations are critical to the proper operation and management of the cluster; omitting them would result in incorrect deployment or malfunction.
5) Deploying the Harbor node: the user is required to input the address of the image repository needed to deploy the cluster. The components involved in K8s deployment and management (e.g., kube-apiserver, kube-controller-manager, kube-scheduler) typically run from Docker images, and these images are obtained from a private image repository. The image repository address must therefore be configured correctly so that the K8s cluster can pull the required images; the user inputs the Harbor host IP and its SSH root password.
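The host-information stage above can be sketched as a small shell script. This is an illustrative sketch, not the patented input_config.sh: the "user answers" are preset variables rather than interactive prompts, and all YAML key names are assumptions for this demo.

```shell
#!/bin/sh
# Sketch of the ConfigHost-style write-out for a single-master cluster.
# A real script would prompt with `read`; here the answers are preset
# so the sketch is non-interactive. Key names are assumptions.
CLUSTER_TYPE="single-master"   # answer to "cluster type (single-master/3-masters)"
MASTER_IP="192.168.1.10"       # answer to "master node IP"
MASTER_AS_NODE="yes"           # answer to "use master as working node?"
NTP_SERVER="$MASTER_IP"        # answer to "NTP server IP"

config_host() {  # write the collected answers into the master profile
  cat > cloud.yaml <<EOF
cluster_type: $CLUSTER_TYPE
master:
  ip: $MASTER_IP
  as_worker: $MASTER_AS_NODE
ntp:
  server: $NTP_SERVER
EOF
}
config_host
echo "wrote cloud.yaml"
```

Later stages (parsing, installation) would read this file back rather than re-prompting the user.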
2. K8s related information configuration
The network mode of the K8s cluster is configured using the ConfigK8sNetWork() function, including selecting the network plug-in (e.g., Flannel or Multus) and its associated settings. The user selects the network mode of the K8s cluster; the default selection is Multus, for which the user must enter a subnet segment. This configuration is used to avoid network conflicts and to set up the network correctly.
Proper network configuration is an important part of cluster deployment; correct network mode selection and parameter configuration are key to stable operation of the cluster. If these configurations are wrong, the result may be network conflicts, performance problems, or failure of the cluster to function properly.
3. Configuration enablement selectable component
As shown in FIG. 4, the convenient deployment flow with interactively input parameters adds a parameter configuration part for the optional components selected and installed by the user. In the interactive parameter input interface, a component selection list allows the user to choose the optional components to be installed. These components include a cluster management platform (to simplify cluster management), a monitoring visualization platform (to expose the various monitoring data of the cluster), and a log visualization platform (to process and expose the cluster's log data). The configuration-parameter processing script writes the components selected by the user and their related configuration information into the master configuration file cloud.yaml for later parsing of the configuration file and installation of the cluster. This information includes the installation paths, port numbers, and dependencies of the components, so that subsequent deployment scripts or management tools can correctly install and configure the components according to these configurations.
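The component-selection write-out can be sketched as follows. This is a hedged illustration: the switch names, key layout, and the port value are assumptions, not the patent's actual cloud.yaml schema.

```shell
#!/bin/sh
# Sketch: record the three optional-component switches and their
# parameters so later parsing/installation stages can read them.
# Key names and the NodePort value are assumptions for this demo.
CONF=cloud_components.yaml

enable_manager=true    # cluster management platform selected
enable_monitor=true    # monitoring visualization platform selected
enable_log=false       # log visualization platform not selected
monitor_port=30090     # assumed service port for the monitoring UI

cat > "$CONF" <<EOF
optional_components:
  cluster_manager:
    enabled: $enable_manager
  monitor:
    enabled: $enable_monitor
    port: $monitor_port
  log_visual:
    enabled: $enable_log
EOF
echo "components recorded in $CONF"
```

The installation stage would read these switches back (see step 1.9) to decide which install scripts to run.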
By the method, the deployment efficiency, flexibility and maintainability of the third party component in the K8s cluster can be remarkably improved, and the method specifically comprises the following advantages:
1) Simplifying deployment flow
The user can input all necessary parameters through a unified interface and select the components to be installed from the component selection list. The parameter-parsing script automatically completes the configuration and installation of the components according to the user's selection, greatly reducing the manual steps. Through the interactive input interface, the user only needs to pay attention to the necessary parameters without delving into the underlying details, reducing the risk of configuration errors.
2) Enhancing user experience
The interactive parameter input interface is provided, and a user can easily select the components to be installed through the graphical or command line interface, so that the complexity of command line input is reduced, and the user experience is improved.
3) Improving deployment efficiency
Automatic installation and configuration: the system automatically calls the corresponding configuration function according to the user's selection to complete the installation and configuration of the components, avoiding tedious manual operation. The same component parameters need not be configured repeatedly; the system automatically applies the optimal configuration according to the input parameters, reducing the cost of repeated labor.
4) Flexibility and extensibility
The user can flexibly choose whether to install components such as the cluster management platform, the monitoring visualization platform, and the log visualization platform according to actual demand, rather than installing all components by default, thereby saving resources.
5) Lowering the technical threshold
The method reduces the technical threshold, and even for users unfamiliar with K8s, complex cluster deployment tasks can be completed through a simple interactive interface, so that dependence on high-level technicians is reduced. All bottom layer operations are processed through the automatic configuration function, a user can complete deployment without understanding the complexity of K8s, and learning and using processes are simplified.
Step S1.3, parsing the configuration file
Based on the configuration parameters input by the user through the interactive interface, the configuration file is parsed by the ParseConfFile function, the various configuration and generation tasks are executed, and logs and related error information are recorded according to the results. The procedure for parsing the configuration file comprises the following steps:
step S1.3.1, reading a default Kubeadm configuration file from a specified path, and analyzing the content of the configuration file into a list to provide basic data for subsequent processing steps;
Kubeadm is a tool provided by K8s for operations such as initializing a cluster, joining a node, upgrading, etc., in order to simplify deployment and management of the K8s cluster, and Kubeadm configuration file generally contains initializing parameters of the K8s cluster.
Step S1.3.2 configuration Kubeadm File
As shown in fig. 5, the Kubeadm file contains the initialization configuration, the cluster configuration, the proxy configuration, and the Kubelet configuration on each node. Kubelet is one of the key components in K8s, responsible for managing and running Pods; a Pod is the most basic scheduling unit in K8s, representing a group of one or more containers that share storage, network, and running environment.
Initializing a configuration InitConfiguration for setting basic parameters of the K8s cluster. The initialization configuration is extracted from the master configuration file and the default configuration file, and a corresponding Kubeadm initialization configuration file is generated, including control plane settings, authentication, encryption configuration, and the like of the cluster. These parameters are necessary at cluster initialization to ensure that the cluster can be started up normally and initially configured.
Cluster configuration ClusterConfiguration is used to define network, storage, etc. settings for the clusters. The cluster configuration and the default cluster configuration are extracted, and Kubeadm cluster configuration files are generated.
The proxy service KubeProxyConfiguration is configured to ensure proper forwarding of network traffic by configuring Kube-proxy services. Kube-proxy is responsible for handling network traffic routing within the cluster to ensure proper operation of service and load balancing. The proxy configuration and default proxy configuration are extracted, and a Kubeadm proxy configuration file is generated.
Kubelet configuration KubeletConfiguration: the Kubelet service is configured to manage the container operations of the nodes. The Kubelet configuration is extracted and a Kubelet configuration file is generated.
All the configurations are combined into a complete Kubeadm configuration YAML file for use in initializing the K8s cluster.
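The assembly of the four sections into one file can be sketched as a shell script emitting a multi-document YAML. All concrete values (the control plane endpoint, subnets, driver choices) are placeholders for illustration, not values taken from the patent.

```shell
#!/bin/sh
# Sketch of step S1.3.2: merge the four Kubeadm sections into a single
# multi-document configuration file. Values are illustrative placeholders.
OUT=kubeadm-config.yaml
CP_ENDPOINT="192.168.1.100:6443"   # assumed VIP:port of the control plane

cat > "$OUT" <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "$CP_ENDPOINT"
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
echo "generated $OUT"
```

A file of this shape would then be consumed by `kubeadm init --config kubeadm-config.yaml` during cluster initialization.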
Step S1.3.3 parse the master configuration file
Host information is extracted from the configuration file and its content parsed into a list containing different types of configuration, typically including the host's IP address, network settings, and Docker and NTP configuration; these are written into the Docker, NTP, and Ansible files respectively.
Step S1.3.4 processing Docker configuration File
So as not to affect the management and deployment of containers, the Docker configuration information is extracted from the master configuration file and parsed, and the resulting configurations are applied to the Docker daemon. The read configuration items are written into the daemon.json file used when installing Docker.
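A minimal sketch of the daemon.json rendering, assuming the Harbor address has already been read from the master profile (the address and the extra daemon options below are illustrative assumptions, not the patent's values):

```shell
#!/bin/sh
# Sketch of step S1.3.4: render daemon.json from the Harbor address.
# A real script would also validate the JSON before restarting Docker.
HARBOR_IP="192.168.1.50"   # assumed, normally read from cloud.yaml
OUT=daemon.json

cat > "$OUT" <<EOF
{
  "insecure-registries": ["$HARBOR_IP"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
echo "wrote $OUT"
```

On a real node this file would be placed at /etc/docker/daemon.json and the Docker daemon restarted to apply it.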
Step S1.3.5 configuration Ansible File
To achieve load balancing and high availability and to avoid single points of failure and performance bottlenecks, the processed host configuration information is used to generate Ansible configuration files for setting up HAProxy (High Availability Proxy, a high-availability load balancer and proxy server, mainly used to improve the performance and reliability of applications and services) and KEEPALIVED. This automates the cluster configuration process and adapts to different cluster setup requirements by dynamically modifying the configuration files.
A plurality of Ansible related files are configured to prepare for cluster management, load balancing, VIP configuration, time synchronization, and global variable setting. Specifically, the following operations are performed based on the code:
1. configuring Ansible host files
The IP addresses and status of the three master nodes in the Ansible host file are updated to ensure that Ansible can correctly identify and manage the master nodes in the cluster, i.e., that Ansible knows the location and role of each master node.
2. Configuration HAProxy file
The existing server entry is deleted from HAProxy configuration files and new master node IP address and port information is added. This ensures that the HAProxy load balancer is able to properly distribute traffic to the three master nodes.
3. Configuration KEEPALIVED file
The variable profile of KEEPALIVED is updated to set the virtual IP address and network interface. This ensures that KEEPALIVED can properly manage the allocation of the virtual IP address, thereby achieving high availability.
4. Configuration Chrony file
NTP server information in the Chrony configuration file is updated to ensure system time synchronization.
5. Configuring global variables
The IP address of Harbor is obtained, and whether the master node is also set as a working node is read and checked. The global variable configuration file, including the Harbor IP and the master node's working state, is updated and regenerated.
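The Ansible/HAProxy file generation in step S1.3.5 can be sketched as follows. The file names, section headers, and IP addresses are illustrative assumptions; the point is only the shape of the generated inventory and backend list.

```shell
#!/bin/sh
# Sketch of step S1.3.5: fill an Ansible host file and the HAProxy
# backend list from the three master IPs. Names/addresses are placeholders.
M1=192.168.1.11; M2=192.168.1.12; M3=192.168.1.13
VIP=192.168.1.100   # the KEEPALIVED virtual IP

# Ansible inventory: one [masters] group holding the three nodes.
cat > hosts.ini <<EOF
[masters]
$M1
$M2
$M3
EOF

# HAProxy backend: one `server` entry per master API server.
{
  echo "backend k8s-apiserver"
  i=1
  for ip in "$M1" "$M2" "$M3"; do
    echo "    server master-$i $ip:6443 check"
    i=$((i + 1))
  done
} > haproxy_backend.cfg
echo "VIP for KEEPALIVED: $VIP"
```

In the real flow, Ansible would then push the rendered HAProxy and KEEPALIVED configurations to the nodes listed in the inventory.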
Step S1.3.6 processing K8s network storage configuration
The K8s network and default-storage configuration parameters, such as the Flannel network plug-in and Rook-Ceph storage, are written into the cluster-config yaml file. These components provide additional functionality such as network plug-ins and storage solutions; configuring them properly extends the functionality of the K8s cluster. Otherwise, the cluster may lack necessary functions or services, affecting the integrity and functionality of the system.
Step S1.3.7 processing Harbor configuration
The host IP address of the Harbor container image repository is extracted from the master configuration file, and the Hostname in the configuration dictionary is updated to the specified host IP address, ensuring that Harbor is configured with the correct IP address. A harbor.yaml profile is generated for the image repository settings. Otherwise, storage and management of container images would be affected, possibly making images unavailable or difficult to manage.
Step S1.3.8 obtaining node type and Cluster mode
Extracting configuration information, namely extracting the K8s node type, whether the node is a working node, the cluster mode and other information from a master configuration file cloud.yaml. The obtained node types can be divided into a master-initialization node (master-init), a common node (node), a master joining node (master-join), and a master node (master).
1. The master control initialization node is only applicable to three master control clusters:
The node performs cluster initialization operations, including configuring a highly available control plane, starting the cluster, configuring load balancing, etc. The information to be acquired comprises the IP addresses of the three master nodes and the root user passwords of the servers on which they reside; in addition, the KEEPALIVED VIP address and the name of the network card bound to the VIP must be configured to achieve high availability of the master nodes. The initial node is set as the first master node in the cluster and initializes the cluster, after which the other master nodes join the initialized cluster.
2. The master control joining node is only applicable to three master control clusters:
The node joins the configured three-master cluster to become a new master node of the cluster. The information to be acquired is the IP address of the master initialization node, used to connect to the existing cluster. The subsequent operation connects the new master node to the existing cluster through a cluster-join command; during joining, the node is configured to cooperate with the other master nodes and synchronizes the configuration and state of the cluster.
3. The master control node is the only master control node in the single master control cluster:
The node is responsible for managing the control plane of the entire cluster. The information to be acquired includes the IP address of the master node and the root user password of its server, plus whether the master node also serves as a working node. Subsequent operations configure the single master node so that it can handle all control-plane tasks in the cluster, and configure whether it runs service loads as a working node.
4. The working node is suitable for a single master control cluster and three master control clusters:
Nodes of this type are ordinary working nodes that do not assume the control-plane role and serve only as hosts for running application containers. In a single-master cluster, the master node and the working node are typically the same machine; in a three-master cluster, the node nodes act as working nodes responsible for running service loads. The information to be obtained is the IP address of the working node and, optionally, its hostname. The subsequent operation joins the working node to the cluster and runs services on it. In a single-master cluster, the master node is typically configured to also operate as a working node; in a three-master cluster, the ordinary working node is joined to the cluster and confirmed able to accept workloads.
The above steps ensure that each node of the cluster is correctly configured and works normally, providing stable control-plane and service-load support for the cluster.
Step S1.4 processing optional component configuration
Generating a configuration file according to the enabling parameters of selectable components (cluster management platform, monitoring visualization platform, log visualization platform) input by a user, as shown in fig. 6, comprising the following procedures:
Firstly, determining the requirement of a component to be installed according to the selection of a user, and confirming the installation of the component;
Then, the IP at which the component services are exposed and the address of the Harbor repository are extracted from the host configuration file cloud.yaml input by the user;
Finally, the IP and the image repository address are passed as parameters to the installation processing function of the optional components, which generates monitor.yaml for the cluster monitoring visualization component, cluster_manager.yaml for the cluster management visualization component, and log_collect.yaml for the cluster log visualization interface, for subsequent installation and deployment of the optional components.
Step S1.5, installing the basic environment, namely installing the K8s environment, comprising the following steps:
Step S1.5.1 network connection check
Checking the network connection: the CheckNetworkConnect function is called to check the network connection between the specified Harbor IP address and the master node IP, ensuring that network communication is problem-free; this is an important step for cluster stability and the normal operation of services.
Step S1.5.2 configuring SSH mutual trust
To ensure that password-free SSH connections can be made between the different hosts in the cluster environment, SSH mutual trust must be configured. In a cluster environment, the automation management tool Ansible needs to execute commands or scripts across different nodes; requiring a password for each operation would be cumbersome and detrimental to automation, whereas with SSH mutual trust these operations can be completed without a password.
SSH mutual trust is established with the Harbor server by sending the local machine's SSH public key to the Harbor server.
SSH mutual trust is established among all the master nodes; if node nodes are included, each node node also needs to establish SSH mutual trust with all the master nodes.
In single-master mode, SSH mutual trust only needs to be established between each node and the single master node.
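The trust-establishment step can be sketched in dry-run form. No real hosts are contacted here; the script only prints the commands a real run would execute, and the addresses are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of step S1.5.2: print the ssh-copy-id commands that
# would distribute the local public key. Addresses are placeholders.
HARBOR_IP=192.168.1.50
MASTERS="192.168.1.11 192.168.1.12 192.168.1.13"

# Generate a key pair first if none exists (printed, not executed).
[ -f "$HOME/.ssh/id_rsa.pub" ] || echo "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"

for host in $HARBOR_IP $MASTERS; do
  echo "ssh-copy-id root@$host"   # would push the public key to $host
done > ssh_trust_plan.txt
cat ssh_trust_plan.txt
```

Replacing `echo` with direct execution (and adding the node-to-master pairs) would perform the actual key distribution.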
Step S1.5.3 installation of NTP
To ensure time synchronization between all working nodes and the master nodes, which is critical to the proper operation and scheduling of a distributed system, NTP is installed on the working nodes. The master nodes are typically already configured with the NTP service in the early stages of cluster deployment and therefore need not be handled separately in subsequent scripts. This keeps the time of the entire cluster consistent and avoids potential problems caused by time desynchronization.
Step S1.6 installing the cluster and components
Step S1.6.1 installing Harbor
For both the master-init node in three-master mode and the master node in single-master mode, Harbor needs to be installed to support the storage and management of container images; Harbor is the enterprise-level image repository used to manage Docker images.
Step S1.6.2 install Docker
For node nodes, Docker must be installed to provide the container runtime environment and ensure that containers can start and run normally. The Docker installation script and commands are executed to install the Docker engine and related components.
Installing Ansible and deploying HAProxy/KEEPALIVED (three-master mode only)
In three-master mode, the master-init node installs the Ansible tool and, depending on the operating-system type, uses Ansible to deploy HAProxy and KEEPALIVED for load balancing and high availability. Using Ansible in a cluster environment automates the deployment and configuration of the individual components (e.g., HAProxy and KEEPALIVED), reducing the complexity and error risk of manual operation. The automation scripts ensure that each node is configured as intended and remains convenient to manage and maintain.
In the K8s cluster, HAProxy is generally used for load balancing the requests of different main control nodes, ensuring uniform distribution of the requests and improving the overall performance and reliability of the system. KEEPALIVED is generally used for managing the high availability of the master control node, so that when the master control node fails, the system can be automatically switched to other healthy nodes, and the continuous operation of the cluster and the high availability of the service are ensured.
Step S1.6.3 installing K8s
The K8s cluster needs to be installed on each node, where the corresponding components need to be installed and configured in the K8s cluster according to different node types (master-init, master-join, node).
1. Three master control mode master-init node initialization K8s
Cluster initialization in three-master mode mainly uses Ansible to perform installation and configuration.
Environment preparation: the Ansible script preparation.yaml is executed to prepare the installation environment of the K8s cluster. This step typically includes installing the necessary software packages, configuring system parameters, disabling the swap area, and so on.
Building the cluster: the Ansible script build_cluster.yaml is executed to initialize the K8s cluster using Kubeadm. This step invokes the kubeadm init command to initialize the cluster and sets the cluster parameters according to the contents of the configuration file.
Completing subsequent configuration: the Ansible script post_cluster.yaml is executed to complete the follow-up configuration of the K8s cluster, including configuring kubectl permissions and auto-completion, and setting the master nodes to also act as working nodes where so configured.
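The three-playbook order above can be sketched in dry-run form; the playbook names follow the text, while the inventory file name and flags are assumptions.

```shell
#!/bin/sh
# Dry-run sketch of the master-init initialization order: print the
# ansible-playbook invocations in sequence instead of executing them.
for playbook in preparation.yaml build_cluster.yaml post_cluster.yaml; do
  echo "ansible-playbook -i hosts.ini $playbook"
done > init_plan.txt
cat init_plan.txt
```

Removing the `echo` would run the playbooks for real, in the same order: prepare the environment, build the cluster, then finish the post-configuration.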
2. Three master mode master-join node joining cluster
Under the condition that the environment of the cluster is normal, the node acquires a certificate key for cluster authentication from the main control node, so that the new main control node can perform correct security verification, and a command for joining the cluster is generated on the new main control node, so that the cluster can be joined. Through this procedure, it can be ensured that the new master node is properly joined to the K8s cluster and configured to function properly.
3. Master node initialization K8s in single master mode
Preparation before installing K8s: the packages needed to install K8s are prepared according to the operating-system type and architecture, the swap partition is disabled, and the availability of the key component Kubeadm is ensured.
Initializing the K8s cluster: the kubeadm init command is executed to perform cluster initialization based on the previously generated K8s configuration file; finally, the relevant settings of the master node are configured.
4. Work node joining cluster
Under the condition that the environment of the cluster is normal, the node obtains a command for joining the cluster from the main control node, and generates a command for joining the cluster on a new working node, so that the cluster can be joined.
Step 1.7 K8s cluster network creation
The initialized K8s cluster needs to create a network. Network type parameters (flannel or multus) are obtained from the main configuration file input by the user to determine the K8s network type to be created.
Step 1.8 installing the storage
Storage services are typically used to provide persistent storage solutions to support the storage requirements of applications and services in a K8s cluster. By automatically executing the storage_install.sh script, manual intervention is not needed in the process, complexity of cluster deployment and configuration is simplified, and efficiency and consistency are improved.
Step 1.9 installing the optional components
As shown in fig. 7, according to the user's configuration parameters, the optional-components_install.sh script is executed to install the corresponding optional components. The switch for installing the optional components is read from the master configuration file to determine whether to execute the installation script. If the switch is true, then after verifying the health state of the cluster and the installation status of the optional components, the installation script optional-components_install.sh is executed; it reads the installation yaml files of the cluster management visualization component, the cluster monitoring visualization component, and the cluster log visualization component, completes the installation of the optional components, and finally outputs the installation result information.
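The switch check can be sketched as below. The key layout of the demo profile is an assumption, and the real script would likely use a proper YAML parser rather than grep.

```shell
#!/bin/sh
# Sketch of the step-1.9 switch check: read each component's enable flag
# from a demo master profile and record the install decision.
cat > cloud_demo.yaml <<EOF
optional_components:
  monitor:
    enabled: true
  log_visual:
    enabled: false
EOF

check() {  # check <component>: print whether its install script would run
  if grep -A1 "$1:" cloud_demo.yaml | grep -q 'enabled: true'; then
    echo "$1: would run optional-components_install.sh"
  else
    echo "$1: skipped"
  fi
}
{ check monitor; check log_visual; } > decision.txt
cat decision.txt
```

In the real flow, a "would run" decision would be followed by the cluster health check and then execution of the component's installation yaml.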
On the other hand, the user selects the third party components to be deployed and performs automatic deployment based on the DevOps selectable components.
When development or operations staff need to upgrade or update deployed optional components, they often must compile code, build and push images, pull images, and manually update the components' yaml files. This process depends on manual execution and is prone to operational errors; because each step is performed by hand, the whole update takes a long time, lacks the support of automation tools, cannot achieve continuous integration and delivery, and limits development efficiency and delivery frequency.
Therefore, to solve the above problems, the invention adopts Jenkins to automatically compile the components the user has selected and deployed, and to deploy them to the K8s cluster; the specific flow is shown in fig. 8. The method automatically executes the entire code-compiling, image-building, and component-deployment flow, eliminating manual intervention and reducing human error. The developer only needs to submit code; the system automatically triggers the subsequent build and deployment operations, realizing continuous delivery so that the application can iterate quickly. Through automated deployment, CI/CD quickly feeds back the build result after code submission; developers obtain deployment feedback in a short time and can discover and repair problems promptly, shortening the development cycle. Here CI denotes continuous integration and CD denotes continuous delivery/deployment. The CI/CD flow ensures that every deployment goes through the same automated steps, avoiding the inconsistencies in environment configuration or deployment steps caused by manual operation and reducing the probability of error.
The user completes the installation and deployment of the optional components through the interactive parameter-input process. When a developer needs to modify the functional code of an optional component to upgrade or update it, the update code is first pushed to the code repository GitLab (a Web-based Git repository management tool); the pre-configured GitLab webhook automatically triggers the pre-written pipeline task of the automation tool, which completes the automatic compiling and image building of the optional component's code and automatically pushes the image to the Harbor image repository; finally, the automation tool performs the pod image update of the optional component to complete its automated deployment. The detailed flow of automated deployment for the optional components, as shown in fig. 9, includes the following steps:
Step 2.1 environmental preparation
1. Configuration GitLab Webhook
The automation of code building is mainly implemented by Jenkins, so a Webhook whose URL points to Jenkins needs to be added in the GitLab project settings.
2. Configuration Jenkins construction pipeline
A new Pipeline Job is created in Jenkins and the build and deploy flows are configured. A Pipeline is used to define and automate a series of tasks or steps in the workflow of continuous integration (CI) and continuous delivery/deployment (CD), ensuring a smooth transition of code from development to production. The stages of the Pipeline can be divided into pulling the code from the Git repository, building the npm project, building the image with Docker, pushing the image to Harbor, and executing the component's yaml file to complete the deployment of the component, as shown in fig. 10.
When Jenkins detects Gitlab a code change, it automatically triggers and performs the build process. The method comprises the steps of configuring a URL of GitLab code warehouse, designating a constructed branch, configuring a construction trigger (when the code is changed or triggered), configuring construction environment, configuring construction, configuring a user name and a password of a Harbor, configuring a K8s credential, adding the K8s credential in Jenkins, and executing Kubectl command in Jenkins Pipeline.
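The Pipeline stages described above can be illustrated with a minimal sketch; the repository URL, image name and YAML file name below are hypothetical placeholders, and a real Jenkins Pipeline would run these commands in its stages rather than through Python.

```python
# Illustrative sketch only: each tuple is (stage name, shell command the stage
# would run). All names and URLs are assumed placeholders, not from the source.
def pipeline_commands(repo_url, image, yaml_file):
    """Return the shell command for each Pipeline stage, in order."""
    return [
        ("pull code",   "git clone %s ." % repo_url),
        ("build npm",   "npm install && npm run build"),
        ("build image", "docker build -t %s ." % image),
        ("push image",  "docker push %s" % image),
        ("deploy",      "kubectl apply -f %s" % yaml_file),
    ]

for stage, cmd in pipeline_commands(
        "http://gitlab.example.com/devops/monitor-ui.git",
        "harbor.example.com/devops/monitor-ui:1",
        "monitor-ui.yaml"):
    print(stage, "->", cmd)
```

The fixed stage order mirrors fig. 10: source checkout, npm build, Docker image build, push to Harbor, then deployment via the component's YAML file.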
Step 2.2 Trigger the build
GitLab listens for changes to the code repository (code commits) and sends a POST request to the configured Webhook URL. The GitLab plug-in of Jenkins receives and processes this Webhook request, then begins executing the corresponding build task according to the configured triggers.
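The trigger decision can be sketched as follows; the payload fields mirror the general shape of a GitLab push-event Webhook, and the watched branch name is an assumed configuration, not something fixed by the source.

```python
import json

# Illustrative sketch: decide whether a GitLab Webhook payload should start a
# Jenkins build. Field names follow GitLab's push-event payload; "main" is an
# assumed branch configuration.
def should_trigger_build(payload_json, watched_branch="main"):
    """Return True for a push event on the configured branch."""
    payload = json.loads(payload_json)
    is_push = payload.get("object_kind") == "push"
    on_branch = payload.get("ref") == "refs/heads/" + watched_branch
    return is_push and on_branch

event = json.dumps({"object_kind": "push", "ref": "refs/heads/main"})
print(should_trigger_build(event))  # True: a push to the watched branch triggers the build
```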
After Jenkins receives the notification, it starts executing the build task according to the configured Pipeline script. To complete the build, the latest code is pulled from GitLab together with the Dockerfile, and the build script is run (compiling the code and generating the binary files).
Step 2.3 Push the image
Jenkins executes the build of the Docker image, tags the image with the build number, and pushes the image to the Harbor image repository using the docker push command; after the build completes, the local Docker image is cleaned up to free space.
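The build–tag–push–clean sequence of this step can be sketched as below; the registry, project and component names are assumed placeholders, and `run=print` shows the docker commands instead of executing them.

```python
# Sketch of Step 2.3: tag the image with the Jenkins build number, push it to
# Harbor, then remove the local copy to free space. Registry/project/component
# names are illustrative assumptions.
def push_image(build_number, registry="harbor.example.com/devops",
               name="optional-component", run=print):
    image = "%s/%s:%s" % (registry, name, build_number)
    run("docker build -t %s ." % image)   # build, tagged with the build number
    run("docker push %s" % image)         # store the image in the Harbor repository
    run("docker rmi %s" % image)          # clean the local image to release space
    return image

push_image(17)
```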
Step 2.4 Update the optional component
After the new image is built and pushed, Jenkins updates the image information in the Deployment configuration in K8s (a Deployment is an object in K8s for managing an application program, mainly responsible for its deployment, upgrade, rollback and scaling), updating the Deployment's image with the kubectl set image command so that the old Pods are replaced by new Pods running the Docker image newly built by the above procedure.
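The kubectl set image invocation for this step can be sketched as follows; the Deployment name, container name and image tag are hypothetical examples.

```python
# Sketch of the Step 2.4 update: kubectl set image rewrites the container image
# in the Deployment, after which K8s replaces the old Pods with new ones.
# Deployment/container/image names below are assumed placeholders.
def set_image_command(deployment, container, image):
    """Build the kubectl command that points the Deployment at a new image."""
    return ["kubectl", "set", "image",
            "deployment/%s" % deployment,
            "%s=%s" % (container, image)]

cmd = set_image_command("monitor-ui", "monitor-ui",
                        "harbor.example.com/devops/monitor-ui:17")
print(" ".join(cmd))
```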
Through this process, after detecting a code change Jenkins can automatically build the component's Docker image, push it to the image repository, and update the Deployment in K8s to use the new image, thereby realizing automated deployment. The whole flow ensures that when the functional code of a component changes, the Pods are automatically updated so that the deployed optional component is kept in its latest state.
In one embodiment of the invention:
Taking a change to the functional code of the cluster-monitoring visualization component as an example, the automated component flow is realized as follows. After interactively inputting the parameters, the user confirms installation of the optional cluster-monitoring component, which lets the user intuitively observe on the client how each cluster resource changes. Now a developer of the component needs to modify its functional code to add a resource-monitoring page for the rail traffic service while the component is running normally in the cluster environment; the component is automatically compiled, built and deployed, and the specific flow is shown in fig. 11.
1. Code submission (GitLab)
The developer modifies the code: the developer of the cluster-monitoring visualization component completes the code modification locally, adding the resource-monitoring page function for the rail traffic service.
The code is submitted to GitLab: the developer pushes the modified component function code to a branch of the GitLab repository using the git push command.
2. Continuous integration and automated build (Jenkins)
The Jenkins build is triggered: after GitLab receives the code submission, it triggers an automatic Jenkins build through the Webhook. Jenkins performs the following steps according to the Pipeline configuration:
pulling the code: the latest monitoring-component code is pulled from the GitLab repository using the git pull command;
compiling the code: the front-end code of the monitoring component is compiled using the npm build command;
building the image: a new image is made using the docker build command;
pushing the image to Harbor: the new image is pushed to the Harbor image repository using the docker push command, with different image versions typically distinguished by a version number or tag. At this point the new component image containing the rail traffic monitoring page is stored in the Harbor repository.
3. Deployment to K8s clusters
Jenkins implements cluster deployment of the monitoring component image by executing the kubectl apply command on the component's YAML file. K8s performs a rolling update, gradually replacing the running old-version components with components containing the new functions, so the service is not interrupted.
4. Functional verification
After deployment completes, the developer can access the service address of the monitoring component to confirm that the new function has been successfully deployed and is running, verifying whether the rail traffic service resource-monitoring page works normally by accessing the page and testing its functions.
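This verification step amounts to probing the component's service address; a minimal sketch follows, in which the service URL is an assumed placeholder for the monitoring component's address.

```python
import urllib.request

# Sketch of the functional-verification step: probe the deployed component's
# service address and report whether it answers. The URL passed in is an
# assumed placeholder, not an address from the source.
def service_ready(url, timeout=5):
    """Return True if the deployed component answers HTTP 200 at `url`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A developer could call, for example, `service_ready("http://cluster.example.com:30080/")` after the rolling update to confirm the new monitoring page is being served before testing it manually.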
Based on the above, the invention provides a convenient deployment method of selectable components based on DevOps. To simplify the deployment flow of K8s and its components, an interactive parameter-configuration method is provided, comprising parameter configuration of user-selectable components that meet the different requirements of users; and aiming at the problem that updating and upgrading the user-selectable components otherwise requires manually compiling code, a DevOps flow of automatic code compiling, image building and deployment is adopted, providing an efficient and reliable method. The main advantages of this method include:
1. Simplified configuration and operation
By interactively inputting configuration parameters and automatically generating and installing YAML files, the complexity of manual modification and configuration is significantly reduced, deployment errors are reduced, and deployment efficiency and accuracy are improved.
2. Optional components integrated with a human-machine interaction interface
In the interactive parameter-configuration process, a cluster management platform (for simplifying cluster management), a monitoring visualization platform (for displaying various monitoring data of the cluster) and a log visualization platform (for processing and displaying the cluster's log data) are integrated as optional components for the user to choose from. The user only needs to input the corresponding parameters to install an optional component: the parameter-configuration parsing script automatically parses the input and installs the optional components the user selected, sparing the user processes such as self-installation, environment configuration and image pulling, and greatly improving the user experience.
The K8s components are flexibly selected and deployed by the user according to actual demands, avoiding resource waste, meeting personalized demands, and improving the expandability and flexibility of the system.
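The interactive configuration step can be sketched as below; the `answers` dict stands in for the user's interactive input, and the rendered text stands in for the main configuration file, with all component names and keys being illustrative assumptions rather than the actual file format.

```python
# Sketch of the interactive parameter-configuration step: basic cluster
# parameters plus per-component enable flags are rendered into a main
# configuration file. Component names and keys are assumed placeholders.
OPTIONAL_COMPONENTS = ["cluster-management",
                       "monitoring-visualization",
                       "log-visualization"]

def build_main_config(cluster_name, answers):
    """Render a main configuration from cluster parameters and the user's
    yes/no choice for each optional component (unanswered means disabled)."""
    lines = ["cluster_name: %s" % cluster_name, "components:"]
    for comp in OPTIONAL_COMPONENTS:
        enabled = "true" if answers.get(comp) else "false"
        lines.append("  %s: %s" % (comp, enabled))
    return "\n".join(lines)

print(build_main_config("demo", {"monitoring-visualization": True}))
```

A parsing script would then read such a file and install only the components marked enabled, which is how unselected components avoid consuming cluster resources.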
3. DevOps-based automation of the optional components
For a component upgrade, when a user needs to upgrade and update a component selected for installation, a developer would otherwise have to manually update the component's image and then manually install and deploy the new component; this process not only consumes a large amount of manpower and material resources but can also cause problems such as data loss. By combining Jenkins-automated code compiling, image building and deployment flows, continuous integration and continuous delivery are realized, manual operation and configuration are reduced, and the delivery efficiency and quality of the system are improved.
4. Improving user experience
A unified interface for interactively inputting parameters is provided, so that the user can configure and operate more intuitively, reducing the user's technical threshold and optimizing the user experience.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some or all of the features thereof, and that the modifications or substitutions do not depart from the scope of the embodiments of the present invention.