CN117851269A - Cloud-based automatic test environment management method and system - Google Patents
- Publication number: CN117851269A
- Application number: CN202410259407.XA
- Authority
- CN
- China
- Prior art keywords: test, service, micro, performance, automatically
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3698—Environments for analysis, debugging or testing of software
- G06F11/3668—Testing of software
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
- G06F11/3692—Test management for test results analysis
Abstract
Description
Technical Field
The present invention relates to the field of cloud automation technology, and specifically to a cloud-based automated test environment management method and system.
Background
With the rapid evolution of software development processes, driven in particular by agile development and DevOps culture, the demand for more efficient and flexible test environment management methods keeps growing.
Cloud computing, with its high elasticity, scalability, and on-demand service model, provides an ideal platform for software testing. It allows test environments to be deployed and adjusted rapidly to accommodate testing needs of varying scale and complexity. Cloud platforms can provide a variety of computing resources, including CPU, memory, storage, and network resources, which can be adjusted dynamically on demand to optimize cost and performance. Test automation is key to improving the efficiency and quality of the software development cycle: it reduces the need for manual testing, enables continuous integration/continuous deployment (CI/CD) practices, and ensures software quality and rapid iteration. Automated testing spans multiple types, such as unit testing, integration testing, and performance testing, and is essential for ensuring the reliability and stability of software across environments.
The microservices architecture improves flexibility and maintainability by decomposing an application into a set of small, loosely coupled services. However, this architecture also complicates testing: each microservice unit must be tested independently and then verified to work correctly when integrated. Testing microservices in a cloud environment requires effective service discovery, load balancing, and network configuration.
In traditional test environment management, manually configuring and maintaining test environments is time-consuming, labor-intensive, and error-prone. In cloud environments and under microservice architectures in particular, this approach cannot keep up with the need for rapid deployment and flexible configuration. The lack of effective performance monitoring and resource optimization mechanisms is a further shortcoming of traditional methods.
The present invention provides a new test environment management method by integrating cloud computing and automated testing technologies. It can automatically deploy and configure containerized test environments, supports both independent and integration testing of microservice units, and automates the test process by integrating CI/CD tools. In addition, the invention includes performance monitoring and log analysis functions, as well as an intelligent resource adjustment mechanism based on monitoring data, thereby ensuring efficient operation of the test environment and optimal utilization of resources.
Summary of the Invention
In view of the above problems, the present invention is proposed.
The technical problem solved by the present invention is therefore: how to manage software test environments efficiently, flexibly, and automatically in a cloud computing environment.
To solve this problem, the present invention provides the following technical solution: a cloud-based automated test environment management method, comprising: automatically deploying, on a cloud computing platform, a containerized test environment containing the dependencies required for testing;
splitting the application under test into independent microservice units and testing each unit independently;
automatically triggering the test process for each microservice through CI/CD tools;
monitoring the performance of containers and microservices, and collecting and analyzing logs;
automatically adjusting the number of microservice instances based on the performance monitoring results, and implementing the service deployment strategy.
As a preferred solution of the cloud-based automated test environment management method of the present invention, the containerized test environment includes:
when a new test environment needs to be created, automatically executing the deployment script of the containerized environment, where the script builds a container image containing all necessary dependencies and configurations from a pre-defined Dockerfile;
when the container image has been built, uploading the image to the container registry of the cloud platform so that it can be pulled and deployed at any time in each test environment;
when a test task is triggered, automatically pulling the corresponding container image through the CI/CD tool and starting a container instance in the specified environment of the cloud computing platform for testing;
when the container instance has started, automatically configuring environment variables and dependent services, ensuring that the test environment matches the expected settings and can be used for testing immediately;
when the test task is completed or updated, automatically stopping and destroying the related container instances to release resources, and updating or redeploying the container image as needed.
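The container lifecycle above (build, push, start and configure, tear down) can be sketched as a small state machine. This is an illustrative sketch only: the class, the registry name, and the environment variables are hypothetical and not part of the claimed method.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class EnvState(Enum):
    BUILT = auto()       # image built from the Dockerfile
    PUSHED = auto()      # image uploaded to the registry
    CONFIGURED = auto()  # instance started, env vars and services wired up
    DESTROYED = auto()   # instance stopped, resources released

@dataclass
class TestEnvironment:
    image: str
    state: EnvState = EnvState.BUILT
    env_vars: dict = field(default_factory=dict)

    def push(self, registry: str) -> str:
        """Upload the built image so any test environment can pull it."""
        self.state = EnvState.PUSHED
        return f"{registry}/{self.image}"

    def start(self, env_vars: dict) -> None:
        """Start an instance and auto-configure variables and dependent services."""
        self.env_vars = dict(env_vars)
        self.state = EnvState.CONFIGURED

    def teardown(self) -> None:
        """Stop and destroy the instance when the test task completes or updates."""
        self.env_vars.clear()
        self.state = EnvState.DESTROYED
```

In practice, a CI job would drive one such object per test task and call `teardown()` as soon as the task finishes, so resources are released immediately.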
As a preferred solution of the cloud-based automated test environment management method of the present invention, splitting the application under test into independent microservice units includes:
when a functional module of the application is identified as a unit that can run and be tested independently, splitting it out as a separate microservice unit and defining an API interface and service contract for each unit;
once the microservice units are defined, writing automated test scripts for each unit, covering its key functions and interaction interfaces;
when the automated test scripts are ready, integrating them into the CI/CD process so that they execute automatically whenever the microservice code is updated;
when unit tests are triggered in the CI/CD process, running them in a separate, isolated test environment to ensure accurate test results and independence between microservices;
when the unit tests are completed, collecting the test results and performance data to evaluate the functionality and stability of each microservice and to provide a basis for subsequent optimization.
As a preferred solution of the cloud-based automated test environment management method of the present invention, automatically triggering the test process for each microservice through CI/CD tools includes: when the microservice code in the source code repository is updated, automatically triggering the test process configured in the CI/CD tool, which includes pulling the latest code from the repository, executing the build process, and running the automated test scripts;
when the automated test scripts start executing, deploying the corresponding microservice instances in an isolated test environment;
when the automated test execution is completed, collecting the test results and performance indicators and generating a test report;
when testing uncovers errors, sending the error report and related logs to the development team for timely fixes, and re-triggering the test process as needed;
when all tests pass and the preset quality standards are met, automatically merging the code changes into the main branch or marking them as deployable, in preparation for the subsequent release steps.
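The final gating step can be sketched as follows. The function name and the representation of test results are assumptions made for illustration, with the preset quality standard expressed as a minimum pass rate.

```python
def gate_merge(test_results: list, quality_bar: float = 1.0) -> str:
    """Decide the pipeline outcome from collected test results.

    Each result is a dict {"name": ..., "passed": bool}. Returns the next
    action: merge when the pass rate meets the preset quality bar,
    otherwise hand the failures to the development team.
    """
    total = len(test_results)
    passed = sum(1 for r in test_results if r["passed"])
    pass_rate = passed / total if total else 0.0
    if total and pass_rate >= quality_bar:
        return "merge-to-main"
    return "notify-dev-team"
```

With the default `quality_bar=1.0`, a single failing test blocks the merge, matching the "all tests pass" condition in the text.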
As a preferred solution of the cloud-based automated test environment management method of the present invention, monitoring the performance of containers and microservices includes: configuring monitoring tools in the cloud environment to track the performance indicators of microservices and containers in real time;
deploying a log collection system to automatically collect, store, and index log information from all containers and microservices;
implementing an intelligent alarm mechanism: when a monitored performance indicator exceeds a preset threshold or a predefined error pattern appears in the logs, the alarm system is triggered automatically and alerts are sent through the integrated notification channels;
when the performance indicators are within the normal range and the logs contain no critical errors or abnormal behavior, no alarm is triggered;
using log analysis tools to analyze the collected logs so that performance problems or system anomalies can be located and resolved quickly;
displaying the collected performance data graphically, providing an intuitive view and in-depth analysis of system performance.
As a preferred solution of the cloud-based automated test environment management method of the present invention, analyzing the collected logs includes: extracting time-series data of the performance indicators from the logs, cleaning the data, and handling missing values and outliers;
applying an ARIMA(p, d, q) model to the processed time series;
where p is the order of the autoregressive term, which allows the values at the past p time points to be used as predictor variables; d is the number of differencing operations, used to make the time series stationary; and q is the order of the moving-average term, which allows the past q forecast errors to be used as predictor variables; historical data are used to estimate the model parameters and fit the ARIMA model;
using the fitted ARIMA model to forecast, generating predicted values of the performance indicators for a future period;
computing the difference between the predicted value x̂_t and the actual observed value x_t at time point t:
e_t = |x̂_t − x_t|;
where e_t is the difference value at time point t;
setting a threshold T to judge anomalies: if e_t exceeds the threshold T, an anomaly is considered to have occurred at time point t;
the threshold T is obtained from the mean of the historical difference data plus twice their standard deviation:
T = μ + 2σ;
where μ denotes the mean of the historical difference data and σ denotes the standard deviation.
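A minimal sketch of the thresholding step, assuming the series of differences e_t has already been produced (a full implementation would first fit the ARIMA model, e.g. with a time-series library, to obtain the predictions; only the μ + 2σ rule is shown here):

```python
import statistics

def anomaly_threshold(historical_diffs: list) -> float:
    """T = mean of the historical differences plus twice their standard deviation."""
    mu = statistics.fmean(historical_diffs)
    sigma = statistics.stdev(historical_diffs)
    return mu + 2 * sigma

def detect_anomalies(diffs: list, threshold: float) -> list:
    """Return the time indices t whose difference e_t exceeds T."""
    return [t for t, e in enumerate(diffs) if e > threshold]
```

For a history of differences [1.0, 1.2, 0.8, 1.0] the threshold is about 1.33, so a new difference of 1.5 is flagged while 0.9 and 1.0 are not.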
As a preferred solution of the cloud-based automated test environment management method of the present invention, automatically adjusting the number of microservice instances according to the performance monitoring results includes:
determining the CPU usage U_cpu, memory usage U_mem, disk I/O usage U_io, and network bandwidth usage U_net as monitoring indicators;
collecting performance data from each microservice instance in real time and judging whether the scale-up threshold or the scale-down threshold has been reached;
when any monitoring indicator U_i exceeds the upper threshold θ_high, indicating that the current resources can no longer meet the performance requirements, increasing the number of microservice instances;
when any monitoring indicator U_i falls below the lower threshold θ_low, indicating that resources are under-utilized, reducing the number of microservice instances to optimize resource usage;
setting a decision algorithm that automatically adjusts the instance count and computes the adjusted number of instances:
N_new = ⌈N_cur × (1 + α × (P_cur − P_target) / P_target)⌉;
where N_new denotes the adjusted number of instances, N_cur the current number of instances, α the adjustment sensitivity coefficient, P_cur the current composite performance indicator, and P_target the target performance indicator;
dynamically adding or removing microservice instances according to the calculated new instance count N_new;
adjusting the service deployment strategy, including network configuration, load balancing, and resource allocation, to accommodate the new instance count and optimize overall performance.
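A sketch of the scaling decision under the stated definitions. Since the original formula is given only by its variables, a proportional adjustment rule of the form N_new = ⌈N_cur · (1 + α · (P_cur − P_target) / P_target)⌉ is assumed here, with clamping bounds added for safety; the bound values are illustrative.

```python
import math

def adjusted_instances(n_cur: int, p_cur: float, p_target: float,
                       alpha: float = 0.5, n_min: int = 1, n_max: int = 50) -> int:
    """Compute the adjusted instance count from the composite performance
    indicator: overshoot of the target scales the count up, undershoot
    scales it down, with sensitivity alpha and [n_min, n_max] clamping."""
    n_new = math.ceil(n_cur * (1 + alpha * (p_cur - p_target) / p_target))
    return max(n_min, min(n_max, n_new))
```

For example, with 4 instances, a target indicator of 60 and a current value of 90, the rule yields 5 instances; with a current value of 30 it yields 3.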
A cloud-based automated test environment management system, characterized by comprising:
an automated deployment module, which automatically deploys, on a cloud computing platform, a containerized test environment containing the dependencies required for testing;
a microservice separation module, which splits the application under test into independent microservice units and tests each unit independently;
a CI/CD integration module, which automatically triggers the test process for each microservice through CI/CD tools;
a performance monitoring module, which monitors the performance of containers and microservices and collects and analyzes logs;
an automatic optimization module, which automatically adjusts the number of microservice instances based on the performance monitoring results and implements the service deployment strategy.
A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program implements the steps of the above method when executed by a processor.
Beneficial effects of the present invention: the automation level and efficiency of the test process are improved significantly. By automating the deployment and configuration of containerized test environments, the method reduces the need for manual configuration and accelerates the software development and testing cycle. Particularly suitable for applications with a microservice architecture, the invention supports independent service unit testing and integration testing, ensuring a high degree of test accuracy and reliability. Combined with real-time performance monitoring and log analysis, it makes problem diagnosis faster and more accurate. By adjusting resource usage dynamically, it optimizes cost efficiency while improving the flexibility and scalability of the test environment. It provides an efficient, economical, and reliable test environment management solution for modern software development.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is an overall flow chart of a cloud-based automated test environment management method provided by the first embodiment of the present invention.
Detailed Description of the Embodiments
To make the above objects, features, and advantages of the present invention more apparent and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
Embodiment 1
Referring to FIG. 1, an embodiment of the present invention provides a cloud-based automated test environment management method, comprising:
S1: Automatically deploy, on a cloud computing platform, a containerized test environment containing the dependencies required for testing.
When a new test environment needs to be created, the deployment script of the containerized environment is executed automatically; the script builds a container image containing all necessary dependencies and configurations from a pre-defined Dockerfile.
When the container image has been built, it is uploaded to the container registry of the cloud platform so that it can be pulled and deployed at any time in each test environment.
When a test task is triggered, the corresponding container image is pulled automatically through the CI/CD tool, and a container instance is started in the specified environment of the cloud computing platform for testing.
When the container instance has started, environment variables and dependent services are configured automatically, ensuring that the test environment matches the expected settings and can be used for testing immediately.
When the test task is completed or updated, the related container instances are stopped and destroyed automatically to release resources, and the container image is updated or redeployed as needed.
Further, the deployment script uses an efficient layer caching mechanism so that unchanged layers are reused when building container images, reducing build time and network bandwidth consumption. After the container image is uploaded to the cloud platform, image tags are managed to facilitate version tracking and rapid lookup. An image rollback mechanism is designed so that when a problem occurs in a new version, the environment can quickly switch back to a stable older version.
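The tag management and rollback mechanism described above might be sketched as follows; the class and tag names are illustrative, and a real registry would be driven through its API rather than an in-memory list.

```python
from typing import Optional

class ImageRegistry:
    """Minimal tag bookkeeping: tracks pushed versions and supports
    rolling back to the last tag that was marked stable."""

    def __init__(self) -> None:
        self._tags: list = []               # push order, newest last
        self._stable: Optional[str] = None  # last known-good tag

    def push(self, tag: str, stable: bool = False) -> None:
        """Record a pushed image tag; optionally mark it as the stable version."""
        self._tags.append(tag)
        if stable:
            self._stable = tag

    def current(self) -> str:
        """The tag currently deployed (the most recently activated one)."""
        return self._tags[-1]

    def rollback(self) -> str:
        """Switch back to the last stable tag when the newest version misbehaves."""
        if self._stable is None:
            raise RuntimeError("no stable tag recorded")
        self._tags.append(self._stable)
        return self._stable
```

The key design point is that rollback is just re-activating a previously recorded tag, so it is fast and does not rebuild anything.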
Still further, a feedback mechanism for automated test results is integrated into the CI/CD process, so that test failures can be responded to quickly and acted upon.
S2: Split the application under test into independent microservice units and test each unit independently.
When a functional module of the application is identified as a unit that can run and be tested independently, it is split out as a separate microservice unit, and an API interface and service contract are defined for each unit.
Once the microservice units are defined, automated test scripts are written for each unit, covering its key functions and interaction interfaces.
When the automated test scripts are ready, they are integrated into the CI/CD process so that they execute automatically whenever the microservice code is updated.
When unit tests are triggered in the CI/CD process, they are run in a separate, isolated test environment to ensure accurate test results and independence between microservices.
When the unit tests are completed, the test results and performance data are collected to evaluate the functionality and stability of each microservice and to provide a basis for subsequent optimization.
Further, when splitting the application into microservice units, the responsibilities and boundaries of each unit must be defined precisely. For each microservice unit, a strict service contract must be formulated, covering the definition of the API interface, data formats, error handling mechanisms, and so on.
It should be noted that unit tests should run in an environment as consistent with production as possible, including the same service configuration, similar network conditions, and similar data sets, to ensure the validity and reliability of the test results. Microservice unit test scripts are integrated into the continuous integration (CI) process so that the tests execute automatically after every code update; this requires the CI process to handle the testing needs of multiple service units efficiently and to feed back test results promptly.
S3:通过CI/CD工具自动触发针对每个微服务的测试流程。S3: Automatically trigger the testing process for each microservice through CI/CD tools.
当源代码仓库中的微服务代码发生更新时,则自动触发CI/CD工具中配置的测试流程,这包括从代码仓库拉取最新代码、执行构建过程以及运行自动化测试脚本。When the microservice code in the source code repository is updated, the test process configured in the CI/CD tool is automatically triggered, which includes pulling the latest code from the code repository, executing the build process, and running automated test scripts.
当自动化测试脚本开始执行时,则在隔离的测试环境中部署相应的微服务实例,确保测试在尽可能接近生产环境的条件下进行。When the automated test script starts executing, the corresponding microservice instance is deployed in an isolated test environment to ensure that the test is performed under conditions as close to the production environment as possible.
当自动化测试执行完成时,则收集测试结果和性能指标,并生成测试报告。When the automated test execution is completed, the test results and performance indicators are collected and a test report is generated.
当测试发现错误或失败时,则将错误报告和相关日志发送给开发团队,及时修复,并根据需要重新触发测试流程。When the test finds errors or fails, the error report and related logs are sent to the development team for timely repair and re-triggering of the test process as needed.
当所有测试通过且满足预设的质量标准时,则自动将代码变更合并到主分支或标记为可部署状态,为后续的发布步骤做准备。When all tests pass and meet the preset quality standards, the code changes are automatically merged into the main branch or marked as deployable in preparation for subsequent release steps.
应说明的是,CI/CD工具是现代软件开发流程的核心,它们自动化了代码从开发到部署的整个生命周期,确保了软件开发的快速迭代和高质量。It should be noted that CI/CD tools are the core of the modern software development process. They automate the entire life cycle of code from development to deployment, ensuring rapid iteration and high quality of software development.
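上述S3的触发与门禁逻辑可以概括为如下示意代码，其中各阶段函数是CI/CD工具具体步骤的占位。The trigger-and-gate logic of S3 above can be sketched as follows; the stage callables are stand-ins for concrete CI/CD tool steps.

```python
# Hedged sketch of the S3 gating logic: build, run tests, then either report
# the failed stage or mark the change deployable when the quality gate passes.
# Stage callables are stand-ins for real CI/CD tool steps.

def run_pipeline(stages, quality_gate):
    """stages: list of (name, callable); each callable returns (passed, details)."""
    report = {}
    for name, step in stages:
        passed, details = step()
        report[name] = details
        if not passed:
            # In S3 this is where the error report and logs go to the dev team.
            return {"status": "failed", "failed_stage": name, "report": report}
    if quality_gate(report):
        # All tests passed and preset quality standards are met: deployable.
        return {"status": "deployable", "report": report}
    return {"status": "blocked", "report": report}

if __name__ == "__main__":
    stages = [
        ("build", lambda: (True, "image built")),
        ("unit_tests", lambda: (True, {"passed": 48, "failed": 0})),
        ("integration_tests", lambda: (True, {"coverage": 0.92})),
    ]
    gate = lambda r: r["integration_tests"]["coverage"] >= 0.90  # preset standard
    print(run_pipeline(stages, gate)["status"])  # deployable
```

实际流水线中，失败分支还应将错误报告与相关日志发送给开发团队，并按需重新触发测试流程。In a real pipeline the failure branch would also send the error report and related logs to the development team and re-trigger the test process as needed.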
S4:对容器和微服务的性能进行监控以及日志收集与分析。S4: Monitor the performance of containers and microservices and collect and analyze logs.
在云环境中配置监控工具,实时跟踪微服务和容器的性能指标。Configure monitoring tools in cloud environments to track performance metrics of microservices and containers in real time.
部署日志收集系统,自动从所有容器和微服务中收集、存储、索引日志信息。Deploy a log collection system to automatically collect, store, and index log information from all containers and microservices.
实施智能报警机制,当监控到的性能指标超出预设阈值或日志中出现预定义的错误模式时,则自动触发报警系统,并通过集成的通知方式发送警报。Implement an intelligent alarm mechanism. When the monitored performance indicators exceed the preset thresholds or predefined error patterns appear in the logs, the alarm system is automatically triggered and an alarm is sent through integrated notifications.
当性能指标处于正常范围内且日志中无关键错误或异常行为时,则不触发报警。When the performance indicators are within the normal range and there are no critical errors or abnormal behaviors in the logs, no alarm is triggered.
使用日志分析工具,对收集的日志进行分析,以便快速定位和解决性能问题或系统异常。Use log analysis tools to analyze collected logs to quickly locate and resolve performance issues or system anomalies.
将收集的性能数据以图形化的方式展示,提供系统性能的直观视图和深入分析。The collected performance data is displayed in a graphical form, providing an intuitive view and in-depth analysis of system performance.
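上述报警机制——指标超出预设阈值或日志命中预定义错误模式时触发——可以概括为如下示意代码，阈值与模式均为示例值。The alarm mechanism above, triggered when a metric exceeds its preset threshold or a predefined error pattern appears in the logs, can be sketched as follows; the thresholds and patterns are illustrative values.

```python
import re

# Sketch of the S4 alarm rule: alert when any metric exceeds its preset
# threshold, or when a predefined error pattern appears in the logs.
# The thresholds and patterns below are illustrative values.

THRESHOLDS = {"cpu": 0.85, "memory": 0.90}
ERROR_PATTERNS = [re.compile(p) for p in (r"OutOfMemoryError", r"connection refused")]

def check_alarms(metrics, log_lines):
    alerts = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            alerts.append(f"metric {name}={value:.2f} exceeds threshold {limit:.2f}")
    for line in log_lines:
        if any(p.search(line) for p in ERROR_PATTERNS):
            alerts.append(f"error pattern in log: {line.strip()}")
    return alerts

if __name__ == "__main__":
    print(check_alarms({"cpu": 0.91, "memory": 0.40},
                       ["GET /orders 200", "java.lang.OutOfMemoryError: heap"]))
```

当指标处于正常范围且日志无命中时，该函数返回空列表，即不触发报警。When metrics are in range and no pattern matches, the function returns an empty list, i.e. no alarm is raised.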
从日志中提取性能指标的时间序列数据,对数据进行清洗,处理缺失值和异常点。Extract time series data of performance indicators from logs, clean the data, and process missing values and outliers.
应用ARIMA(p, d, q)模型对处理后的时间序列数据进行建模。Apply an ARIMA(p, d, q) model to the processed time series data.
其中，p是自回归项的阶数，它允许我们将过去p个时间点的值用作预测变量；d表示差分次数，用于使时间序列数据平稳；q是移动平均项的阶数，它允许我们将过去q个预测误差用作预测变量。使用历史数据来估计模型参数(p, d, q)，并拟合ARIMA模型。Here, p is the order of the autoregressive term, which allows the values at the past p time points to be used as predictors; d is the number of differencing steps, used to make the time series stationary; q is the order of the moving-average term, which allows the past q forecast errors to be used as predictors. Historical data are used to estimate the model parameters (p, d, q) and to fit the ARIMA model.
利用拟合好的ARIMA模型进行预测,生成未来一段时间的性能指标预测值;Use the fitted ARIMA model to make predictions and generate performance indicator forecasts for a period of time in the future;
进一步的,ARIMA模型拟合过程包括:Furthermore, the ARIMA model fitting process includes:
收集数据:首先收集时间序列数据,这里是关于微服务性能指标的历史数据,如CPU使用率、内存使用率等。Collect data: First, collect time series data, which is historical data about microservice performance indicators, such as CPU usage, memory usage, etc.
数据清洗:处理缺失值和异常点,确保数据的完整性和准确性。Data cleaning: Process missing values and outliers to ensure data integrity and accuracy.
平稳性检验:检查时间序列数据的平稳性。如果数据非平稳,进行差分处理直至数据平稳。Stationarity test: Check the stationarity of the time series data. If the data is non-stationary, perform difference processing until the data is stationary.
参数选择：确定自回归项(AR)的阶数p，可通过观察偏自相关函数(PACF)图来辅助确定；确定差分次数d，基于平稳性检验的结果确定；确定移动平均项(MA)的阶数q，可通过观察自相关函数(ACF)图来辅助确定。Parameter selection: determine the order p of the autoregressive (AR) term, aided by inspecting the partial autocorrelation function (PACF) plot; determine the number of differencing steps d based on the results of the stationarity test; determine the order q of the moving-average (MA) term, aided by inspecting the autocorrelation function (ACF) plot.
估计模型参数:使用统计软件(如R或Python的statsmodels库)来估计ARIMA模型的参数。最小化误差,以找到最佳拟合的模型参数。Estimate model parameters: Use statistical software (such as R or Python's statsmodels library) to estimate the parameters of the ARIMA model. Minimize the error to find the best-fitting model parameters.
模型诊断:检查残差序列以确保模型拟合的适当性。Model diagnostics: Examine the residual series to ensure the adequacy of the model fit.
使用各种统计检验(如Ljung-Box检验)和图形分析(如残差的ACF图)来评估模型的有效性。Use various statistical tests (such as the Ljung-Box test) and graphical analysis (such as the ACF plot of the residuals) to assess the validity of the model.
模型预测:使用拟合好的ARIMA模型对未来的性能指标进行预测。生成未来一段时间的性能指标预测值,并据此做出相应的资源调整决策。Model prediction: Use the fitted ARIMA model to predict future performance indicators. Generate performance indicator forecasts for a period of time in the future and make corresponding resource adjustment decisions based on them.
计算预测误差和异常检测:计算实际观测值与预测值之间的差异。设定阈值,基于预测误差判断是否存在异常。Calculate prediction error and anomaly detection: Calculate the difference between the actual observed value and the predicted value. Set a threshold to determine whether there is an anomaly based on the prediction error.
应说明的是,ARIMA(自回归积分移动平均)模型是一种常用的时间序列预测方法,它结合了自回归(AR)、差分(I)和移动平均(MA)三种方法,可以有效处理非平稳时间序列数据。在本发明中,ARIMA模型用于预测微服务的性能指标,帮助预测未来的资源需求和性能趋势。阈值的设定是基于历史数据的统计分析,通常采用历史差异数据的平均值加减两倍标准差来确定。这种方法能够在保持灵敏度的同时减少误报,确保异常检测的准确性。It should be noted that the ARIMA (Autoregressive Integrated Moving Average) model is a commonly used time series prediction method, which combines the three methods of autoregression (AR), difference (I) and moving average (MA), and can effectively process non-stationary time series data. In the present invention, the ARIMA model is used to predict the performance indicators of microservices and help predict future resource requirements and performance trends. The setting of the threshold is based on the statistical analysis of historical data, and is usually determined by adding or subtracting two standard deviations from the mean value of historical difference data. This method can reduce false alarms while maintaining sensitivity and ensure the accuracy of anomaly detection.
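为说明上述拟合流程的机理，下面给出一个不依赖外部库的ARIMA(1,1,0)数值示意：差分一次、用最小二乘估计AR系数、递推预测未来差分并还原差分。实际建模应按文中建议使用statsmodels等统计工具处理完整的(p, d, q)选择与MA项。To illustrate the mechanics of the fitting process above, here is a dependency-free numeric sketch of ARIMA(1,1,0): difference once, estimate the AR coefficient by least squares, forecast the differences recursively, and undo the differencing. Real modeling would use a statistical tool such as statsmodels, as the text suggests, to handle full (p, d, q) selection and MA terms.

```python
# Dependency-free numeric sketch of ARIMA(p=1, d=1, q=0) on a performance
# series: difference once (d=1), estimate the AR(1) coefficient phi by least
# squares (p=1), forecast the differences recursively, and integrate back.
# Real use would rely on a library such as statsmodels, as noted above.

def fit_forecast_arima_110(series, horizon=3):
    # d = 1: first-difference the series to make it (approximately) stationary
    diff = [b - a for a, b in zip(series, series[1:])]
    # p = 1: least-squares estimate of phi in diff[t] ~ phi * diff[t-1]
    num = sum(x * y for x, y in zip(diff, diff[1:]))
    den = sum(x * x for x in diff[:-1])
    phi = num / den if den else 0.0
    # Forecast future differences, then undo the differencing step by step.
    forecasts, last_level, last_diff = [], series[-1], diff[-1]
    for _ in range(horizon):
        last_diff = phi * last_diff
        last_level += last_diff
        forecasts.append(last_level)
    return phi, forecasts

if __name__ == "__main__":
    cpu_history = [50, 55, 58, 60, 61, 62]  # synthetic CPU-usage percentages
    phi, preds = fit_forecast_arima_110(cpu_history)
    print(round(phi, 3), [round(p, 1) for p in preds])
```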
计算时间点t的预测值x̂_t与时间点t的实际观测值x_t之间的差异：e_t = x̂_t − x_t；Compute the difference between the predicted value x̂_t and the actual observed value x_t at time point t: e_t = x̂_t − x_t;
其中，e_t是时间点t的差异值；where e_t is the difference value at time point t;
设定一个阈值T来判断异常，若e_t超过阈值T（即落在阈值区间之外），则认为在时间点t发生异常；Set a threshold T to judge anomalies: if e_t exceeds the threshold T (i.e., falls outside the threshold band), an anomaly is considered to have occurred at time point t;
所述阈值T通过历史差异数据的平均值加减两倍的标准差确定，公式为：The threshold T is determined from the mean of the historical difference data plus or minus twice its standard deviation:
T = μ ± 2σ
其中，μ表示历史差异数据的平均值，σ表示标准差。where μ is the mean of the historical difference data and σ is the standard deviation.
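上述基于预测误差的异常判定可以实现为如下示意代码，样例数值仅作说明。The prediction-error anomaly rule above can be sketched as follows; the sample values are for illustration only.

```python
from statistics import mean, stdev

# Sketch of the anomaly rule above: a deviation e_t between prediction and
# observation is flagged when it falls outside the band mean +/- 2 * stdev
# computed over historical deviations. The sample values are illustrative.

def anomaly_flags(predicted, actual, history):
    """history: past prediction errors used to set the threshold band."""
    mu, sigma = mean(history), stdev(history)
    low, high = mu - 2 * sigma, mu + 2 * sigma
    errors = [p - a for p, a in zip(predicted, actual)]
    return [(e, not (low <= e <= high)) for e in errors]

if __name__ == "__main__":
    past_errors = [0.5, -0.3, 0.2, -0.4, 0.1]
    print(anomaly_flags([69.5, 72], [70, 95], past_errors))
```

两倍标准差的带宽对应文中“保持灵敏度的同时减少误报”的折中。The two-standard-deviation band corresponds to the text's trade-off of reducing false alarms while maintaining sensitivity.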
S5:根据性能监控结果自动调整微服务实例数量,并实施服务部署策略。S5: Automatically adjust the number of microservice instances based on performance monitoring results and implement service deployment strategies.
确定CPU使用率、内存使用率、磁盘I/O使用率和网络带宽使用率为监控指标。Designate CPU usage, memory usage, disk I/O usage, and network bandwidth usage as the monitoring metrics.
实时收集每个微服务实例的性能数据,判断是否达到自动扩展阈值和自动缩减阈值。Collect performance data of each microservice instance in real time to determine whether the automatic expansion threshold and automatic reduction threshold are reached.
当任一所述监控指标超过上限阈值时，表示当前的资源已不足以满足性能需求，则增加微服务实例数量；When any of the monitoring metrics exceeds its upper threshold, the current resources are insufficient to meet the performance requirements, so the number of microservice instances is increased;
当任一所述监控指标低于下限阈值时，表示当前的资源使用不足，则减少微服务实例数量，优化资源使用；When any of the monitoring metrics falls below its lower threshold, the current resources are under-utilized, so the number of microservice instances is reduced to optimize resource usage;
设定自动调整实例数量的决策算法，来计算调整后的实例数量，公式表示为：Define a decision algorithm that automatically computes the adjusted number of instances, expressed as:
N_new = N_current × (1 + α × (P_current − P_target) / P_target)
其中，N_new表示调整后的实例数量，N_current表示当前的实例数量，α表示调整敏感度系数，P_current表示当前的综合性能指标，P_target表示目标性能指标。where N_new is the adjusted number of instances, N_current is the current number of instances, α is the adjustment sensitivity coefficient, P_current is the current composite performance indicator, and P_target is the target performance indicator.
根据计算出的新实例数量N_new，动态地增加或减少微服务实例；Based on the computed new instance count N_new, microservice instances are dynamically added or removed;
调整服务部署策略,包括网络配置、负载均衡和资源分配,以适应新的实例数量并优化整体性能。Adjust service deployment strategies, including network configuration, load balancing, and resource allocation, to accommodate the new number of instances and optimize overall performance.
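上述扩缩容决策可以概括为如下示意代码：按当前综合性能与目标性能的偏差比例调整实例数，再取整并限定上下界。比例式的具体形状以及敏感度系数α、实例上下限等取值均为假设。The scaling decision above can be sketched as follows: adjust the instance count in proportion to the gap between current and target performance, then round and clamp to bounds. The exact proportional form and the values of the sensitivity coefficient alpha and the instance bounds are assumptions.

```python
# Sketch of the S5 scaling decision: scale the instance count in proportion to
# the gap between current and target performance, then clamp to bounds.
# The proportional form and the values of alpha, n_min, n_max are assumptions.

def adjusted_instances(n_current, p_current, p_target, alpha=0.5,
                       n_min=1, n_max=20):
    raw = n_current * (1 + alpha * (p_current - p_target) / p_target)
    # Round to a whole number of instances and keep it within configured bounds.
    return max(n_min, min(n_max, round(raw)))

if __name__ == "__main__":
    print(adjusted_instances(4, p_current=0.90, p_target=0.60))  # scale out
    print(adjusted_instances(4, p_current=0.30, p_target=0.60))  # scale in
```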
以上实施例中，还包括一种基于云端的自动化测试环境管理系统，具体为：The above embodiments also include a cloud-based automated test environment management system, specifically:
自动化部署模块:在云计算平台上自动部署包含测试所需依赖的容器化测试环境。Automated deployment module: Automatically deploys a containerized test environment containing the dependencies required for testing on the cloud computing platform.
微服务分离模块：将待测试应用拆分为独立的微服务单元，并为每个单元实施独立测试。Microservice separation module: splits the application under test into independent microservice units and tests each unit independently.
CI/CD集成模块:通过CI/CD工具自动触发针对每个微服务的测试流程。CI/CD integration module: automatically triggers the testing process for each microservice through CI/CD tools.
性能监控模块:对容器和微服务的性能进行监控以及日志收集与分析。Performance monitoring module: monitors the performance of containers and microservices, and collects and analyzes logs.
自动优化模块：根据性能监控结果自动调整微服务实例数量，并实施服务部署策略。Automatic optimization module: automatically adjusts the number of microservice instances based on performance monitoring results and implements service deployment strategies.
计算机设备可以是服务器。该计算机设备包括处理器、存储器、输入/输出接口(Input/Output，简称I/O)和通信接口。其中，处理器、存储器和输入/输出接口通过系统总线连接，通信接口通过输入/输出接口连接到系统总线。其中，该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储测试环境管理系统的相关数据。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种基于云端的自动化测试环境管理方法。The computer device may be a server. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected via a system bus, and the communication interface is connected to the system bus via the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data for the test environment management system. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with external terminals through a network connection. When the computer program is executed by the processor, a cloud-based automated test environment management method is implemented.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-OnlyMemory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(MagnetoresistiveRandomAccessMemory,MRAM)、铁电存储器(FerroelectricRandomAccessMemory,FRAM)、相变存储器(PhaseChangeMemory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(RandomAccessMemory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(StaticRandomAccessMemory,SRAM)或动态随机存取存储器(DynamicRandomAccessMemory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。A person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned embodiment method can be completed by instructing the relevant hardware through a computer program, and the computer program can be stored in a non-volatile computer-readable storage medium. When the computer program is executed, it can include the processes of the embodiments of the above-mentioned methods. Among them, any reference to the memory, database or other medium used in the embodiments provided in this application can include at least one of non-volatile and volatile memory. Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc. Volatile memory can include random access memory (RAM) or external cache memory, etc. As an illustration and not limitation, RAM can be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The database involved in each embodiment provided in this application may include at least one of a relational database and a non-relational database. 
Non-relational databases may include distributed databases based on blockchains, etc., but are not limited to this. The processor involved in each embodiment provided in this application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic device, a data processing logic device based on quantum computing, etc., but are not limited to this.
实施例2Example 2
为本发明的一个实施例,提供了一种基于云端自动化测试环境管理方法,为了验证本发明的有益效果,通过经济效益计算和仿真/对比实验进行科学论证。As an embodiment of the present invention, a cloud-based automated test environment management method is provided. In order to verify the beneficial effects of the present invention, scientific demonstration is carried out through economic benefit calculation and simulation/comparative experiments.
比较基于云端自动化测试环境管理方法(以下简称“新方法”)与传统的测试环境管理方法(以下简称“传统方法”)在部署效率、资源利用率、系统稳定性和测试覆盖率等方面的差异。Compare the differences between the cloud-based automated test environment management method (hereinafter referred to as the "new method") and the traditional test environment management method (hereinafter referred to as the "traditional method") in terms of deployment efficiency, resource utilization, system stability, and test coverage.
测试对象:10个微服务应用,每个应用都有相同的规模、复杂性和历史记录。Test subjects: 10 microservice applications, each with the same size, complexity, and history.
测试周期:30天。Testing period: 30 days.
性能参数:在测试周期中,记录并分析各项性能参数,包括部署时间、CPU和内存利用率、系统故障次数等。Performance parameters: During the test cycle, various performance parameters are recorded and analyzed, including deployment time, CPU and memory utilization, number of system failures, etc.
部署测试:在新方法和传统方法下,分别记录从代码提交到应用部署完全就绪的时间。Deployment testing: Record the time from code submission to application deployment being fully ready under both the new and traditional methods.
资源利用测试:监控和记录两种方法下的CPU和内存利用率。Resource Utilization Test: Monitor and record CPU and memory utilization under two methods.
稳定性测试:记录测试周期内各应用的故障次数和系统响应时间。Stability test: record the number of failures and system response time of each application during the test cycle.
测试覆盖率评估:比较两种方法下的测试覆盖率,包括自动化测试覆盖的应用功能点和代码覆盖率。Test coverage evaluation: Compare the test coverage of the two methods, including the application function points and code coverage covered by automated tests.
数据收集:使用自动化工具和脚本收集测试过程中的数据。Data Collection: Use automated tools and scripts to collect data during testing.
分析方法:对收集到的数据进行统计分析,评估新方法与传统方法在各项指标上的表现,并计算改进百分比。Analytical methods: Statistical analysis was performed on the collected data to evaluate the performance of the new method compared with the traditional method on various indicators and calculate the improvement percentage.
系统性能与资源利用率对比可参考表1:For a comparison of system performance and resource utilization, please refer to Table 1:
表1 系统性能与资源利用率对比表Table 1 Comparison of system performance and resource utilization

指标 Metric | 传统方法 Traditional method | 新方法 New method | 改进 Improvement |
---|---|---|---|
部署时间 Deployment time | 18分钟 18 min | 8分钟 8 min | −55.6% |
CPU利用率 CPU utilization | 65% | 78% | +20.0% |
内存利用率 Memory utilization | 72% | 83% | +15.3% |
网络带宽利用率 Network bandwidth utilization | 55% | 68% | +23.6% |
系统响应时间 System response time | 1.1秒 1.1 s | 0.7秒 0.7 s | −36.4% |
本发明将部署时间减少了55.6%,从18分钟缩短到8分钟。这一显著改进表明了本发明在提高部署流程的效率方面的优势,这对于需要快速迭代和部署的现代软件开发尤为重要。本发明使CPU利用率提高了20%,内存利用率提高了15.3%,分别从65%和72%提升到了78%和83%。这表明本发明能够更有效地利用硬件资源,提升系统的整体性能和效率。网络带宽利用率的提高(从55%提升到68%,增加了23.6%)反映了本发明在网络资源管理方面的优化,这对于数据密集型应用尤其重要。系统响应时间从1.1秒减少到0.7秒,降低了36.4%。这一改进直接影响用户体验,快速响应时间对于用户满意度至关重要。The present invention reduces deployment time by 55.6%, from 18 minutes to 8 minutes. This significant improvement demonstrates the advantages of the present invention in improving the efficiency of the deployment process, which is particularly important for modern software development that requires rapid iteration and deployment. The present invention increases CPU utilization by 20% and memory utilization by 15.3%, from 65% and 72% to 78% and 83%, respectively. This shows that the present invention can more effectively utilize hardware resources and improve the overall performance and efficiency of the system. The improvement in network bandwidth utilization (from 55% to 68%, an increase of 23.6%) reflects the optimization of the present invention in network resource management, which is particularly important for data-intensive applications. The system response time was reduced from 1.1 seconds to 0.7 seconds, a reduction of 36.4%. This improvement directly affects the user experience, and fast response time is critical to user satisfaction.
再从测试效率与系统稳定性方面进行对比,参考表2:Let’s compare the test efficiency and system stability, see Table 2:
表2 测试效率与系统稳定性对比表Table 2 Comparison of test efficiency and system stability

指标 Metric | 传统方法 Traditional method | 新方法 New method | 改进 Improvement |
---|---|---|---|
测试覆盖率 Test coverage | 75% | 92% | +22.7% |
故障率 Failure rate | 每月2次 2/month | 每月0.5次 0.5/month | −75% |
自动化测试效率 Automated testing efficiency | 65% | 88% | +35.4% |
故障检测时间 Fault detection time | 3小时 3 h | 45分钟 45 min | −75% |
用户满意度 User satisfaction | 80% | 95% | +18.8% |
本发明将测试覆盖率从75%提高到92%,增加了22.7%。这表明本发明能够更全面地检测和验证软件功能,降低因未测试代码造成的风险。故障率的显著降低(从每月2次故障减少到每月0.5次故障,降低了75%)显示了本发明在提升系统稳定性方面的显著效果。自动化测试效率的提升(从65%增加到88%,提高了35.4%)反映了本发明在测试流程自动化和效率提升方面的优势。故障检测时间的大幅缩短(从3小时减少到45分钟,减少了75%)表明了本发明在快速识别和响应系统问题方面的有效性。用户满意度的提高(从80%提升到95%,增加了18.8%)显示了本发明在提升用户体验和满意度方面的显著效果。The present invention increases the test coverage from 75% to 92%, an increase of 22.7%. This shows that the present invention can detect and verify software functions more comprehensively and reduce the risks caused by untested code. The significant reduction in the failure rate (from 2 failures per month to 0.5 failures per month, a reduction of 75%) shows the significant effect of the present invention in improving system stability. The improvement in automated testing efficiency (from 65% to 88%, an increase of 35.4%) reflects the advantages of the present invention in automating the test process and improving efficiency. The significant reduction in fault detection time (from 3 hours to 45 minutes, a reduction of 75%) shows the effectiveness of the present invention in quickly identifying and responding to system problems. The improvement in user satisfaction (from 80% to 95%, an increase of 18.8%) shows the significant effect of the present invention in improving user experience and satisfaction.
综上所述,这些数据表明本发明在提升测试的全面性、减少系统故障、提高测试效率、加快故障检测和处理速度以及提升用户满意度方面具有显著的优势。这些改进对于提高软件开发和维护的整体质量和效率具有重要意义。In summary, these data show that the present invention has significant advantages in improving the comprehensiveness of testing, reducing system failures, improving test efficiency, accelerating fault detection and processing speed, and improving user satisfaction. These improvements are of great significance to improving the overall quality and efficiency of software development and maintenance.
应说明的是,以上实施例仅用以说明本发明的技术方案而非限制,尽管参照较佳实施例对本发明进行了详细说明,本领域的普通技术人员应当理解,可以对本发明的技术方案进行修改或者等同替换,而不脱离本发明技术方案的精神和范围,其均应涵盖在本发明的权利要求范围当中。It should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention rather than to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solutions of the present invention may be modified or replaced by equivalents without departing from the spirit and scope of the technical solutions of the present invention, which should all be included in the scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410259407.XA CN117851269B (en) | 2024-03-07 | 2024-03-07 | A cloud-based automated test environment management method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117851269A true CN117851269A (en) | 2024-04-09 |
CN117851269B CN117851269B (en) | 2024-05-28 |
Family
ID=90542135
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||