
CN119415013A - Resource information processing method, apparatus, device, medium, and program product - Google Patents


Info

Publication number
CN119415013A
CN119415013A
Authority
CN
China
Prior art keywords
information
path
resource
processing
policy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411286040.7A
Other languages
Chinese (zh)
Inventor
李筱桐
谢兴山
程志雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCB Finetech Co Ltd filed Critical CCB Finetech Co Ltd
Priority to CN202411286040.7A
Publication of CN119415013A


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a resource information processing method, which can be applied to the technical fields of software development and financial technology. The resource information processing method comprises: obtaining resource information of a storage resource, wherein the resource information comprises configuration information and usage information, the configuration information comprises hardware configuration information and logic configuration information, the storage resource comprises a block type resource, and the usage information comprises path usage information; determining a path processing policy based on the hardware configuration information, the logic configuration information and the path usage information, wherein the path processing policy comprises at least one of a path update policy, a fault processing policy and a path load policy; and processing the block resource information using the path update policy, the fault processing policy and the path load policy to obtain a path processing result. The present disclosure also provides a resource information processing apparatus, device, storage medium, and program product.

Description

Resource information processing method, apparatus, device, medium, and program product
Technical Field
The present disclosure relates to the technical fields of software development and financial technology, and in particular to a resource information processing method, apparatus, device, medium and program product.
Background
The operation and maintenance of storage device resources in a data center involves a variety of techniques and strategies, and these methods and practices are critical to ensuring the reliability, performance, and efficiency of the system. The operation and maintenance mode of storage resources in the related art mainly comprises monitoring performance indicators of various storage devices with a monitoring tool and performing optimization adjustments, or regularly backing up data and formulating detailed recovery plans or tests.
The inventors found that the operation and maintenance of storage resources in the related art has the following drawbacks: a unified management mode for comprehensive operation and maintenance of heterogeneous storage resources is lacking; performance differences between different storage resources cause performance bottlenecks in the overall system; storage devices of different architectures require different management tools and policies, making operation and maintenance work complex; fault diagnosis and troubleshooting in heterogeneous storage devices is complicated; and overall operation and maintenance efficiency is low.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a resource information processing method, apparatus, device, medium, and program product.
According to a first aspect of the present disclosure, there is provided a resource information processing method including: obtaining resource information of a storage resource, wherein the resource information includes configuration information and usage information, the configuration information includes hardware configuration information and logic configuration information, the storage resource includes a plurality of types of block type resources, and the usage information includes path usage information; determining a path processing policy based on the hardware configuration information, the logic configuration information, and the path usage information, wherein the path processing policy includes at least one of a path update policy, a fault processing policy, and a path load policy; and processing the block resource information using the path update policy, the fault processing policy, and the path load policy to obtain a path processing result.
According to the embodiment of the present disclosure, the hardware configuration information comprises hardware device information, the logic configuration information comprises path information, and there are a plurality of pieces of hardware device information and path information. Determining the path processing policy based on the hardware configuration information, the logic configuration information and the path usage information of the block type resource comprises: determining a plurality of initial path processing policies corresponding to the hardware device information and the path information; and updating the initial path processing policies based on a preset update policy and the path usage information to obtain the path processing policy, so that the path processing policy matches the plurality of pieces of hardware device information.
According to the embodiment of the present disclosure, processing the resource information using the path update policy, the fault processing policy and the path load policy to obtain the path processing result comprises: determining a plurality of pieces of storage path information in the block resource information; and when at least one of the plurality of pieces of storage path information is detected to be abnormal information, processing the storage path information based on the fault processing policy, the path update policy and the path load policy to obtain the path processing result.
According to the embodiment of the present disclosure, the usage information further comprises resource usage information and the storage resource further comprises a file type resource, and the method further comprises: processing the hardware configuration information, the logic configuration information and the resource usage information in a historical time period to obtain a resource usage result; predicting the resource usage information in a future time period using a preset prediction rule and the resource usage result to obtain a prediction result; determining a resource allocation policy based on the prediction result; and processing the resource information using the resource allocation policy to obtain a resource allocation result.
According to the embodiment of the disclosure, the method for predicting the resource use information in the future time period by using the preset prediction rule and the resource use result to obtain the prediction result comprises the steps of extracting characteristic sequence information and load peaks from the resource use result, inputting the characteristic sequence information and the load peaks into a time sequence model, and outputting the prediction result.
According to the embodiment of the disclosure, the hardware configuration information further comprises hardware management information and resource pool information, the logic configuration information further comprises port information and a security policy, the method further comprises the steps of determining a mapping relation between the resource information and target equipment based on the hardware equipment information, the hardware management information, the resource pool information, the port information and the security policy, and generating resource architecture information based on the mapping relation, so that a user can correspondingly process the resource information according to configuration requirements.
According to the embodiment of the disclosure, the method further comprises detecting the storage resource by using a detection tool to obtain type information of the storage resource, and determining the configuration information and the use information based on the type information.
According to the embodiment of the disclosure, the method further comprises the step of processing the resource information and the processing result by using a graph generating tool to generate chart information.
The second aspect of the present disclosure provides a resource information processing apparatus, which includes a resource information obtaining module configured to obtain resource information of a storage resource, where the resource information includes configuration information and usage information, the configuration information includes hardware configuration information and logic configuration information, the storage resource includes a plurality of types of block type resources, and the usage information includes path usage information, a path processing policy determining module configured to determine a path processing policy based on the hardware configuration information, the logic configuration information, and the path usage information, where the path processing policy includes at least one of a path update policy, a failure processing policy, and a path load policy, and a resource information processing module configured to process the block resource information using the path update policy, the failure processing policy, and the path load policy to obtain a path processing result.
A third aspect of the present disclosure provides an electronic device comprising one or more processors and a memory for storing one or more computer programs, wherein the one or more processors execute the one or more computer programs to implement the steps of the method.
A fourth aspect of the present disclosure also provides a computer readable storage medium having stored thereon a computer program or instructions which, when executed by a processor, implement the steps of the above method.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program or instructions which, when executed by a processor, performs the steps of the method described above.
According to the resource information processing method, apparatus, device, medium and program product provided by the present disclosure, the path update policy, the fault processing policy and the path load policy can be determined based on the hardware configuration information, the logic configuration information and the path usage information of the block type storage resource, so as to process different types of block resource information and obtain the path processing result.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a resource information processing method, apparatus, device, medium and program product according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a resource information processing method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method for implementing path failover, load balancing, and monitoring functions in SAN storage using dynamic multipath policies according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of another resource information processing method according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a system architecture diagram of a resource information processing method according to an embodiment of the present disclosure;
Fig. 6 schematically illustrates a block diagram of a resource information processing apparatus according to an embodiment of the present disclosure;
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a resource information processing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a convention should be interpreted in the sense in which one of skill in the art would generally understand it (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
It should be noted that, in the embodiments of the present application, some existing solutions in the industry such as software, components, models, etc. may be mentioned, and they should be regarded as exemplary, only for illustrating the feasibility of implementing the technical solution of the present application, but it does not mean that the applicant has or must not use the solution.
In the technical solution of the present disclosure, the user information involved (including, but not limited to, user personal information, user image information, and user equipment information such as location information) and data (including, but not limited to, data used for analysis, stored data, and displayed data) are information and data authorized by the user or sufficiently authorized by all parties. The collection, storage, use, processing, transmission, provision, disclosure, and application of the related data comply with relevant laws, regulations, and standards, necessary security measures are taken, public order and good customs are not violated, and corresponding operation entries are provided for the user to choose to authorize or refuse.
In scenarios where personal information is used to make automated decisions, the method, apparatus, and system provided by the embodiments of the present disclosure provide corresponding operation entries for users to choose to accept or reject the automated decision, and an expert decision flow is entered if the user chooses to reject it. The expression "automated decision" here refers to the activity of automatically analyzing and assessing an individual's behavioral habits, hobbies, or economic, health, or credit status by means of a computer program and making a decision. The expression "expert decision" here refers to the activity of making a decision by a person who specializes in a certain field of work and has specialized experience, knowledge, and skills at a certain level of expertise.
In the process of conceiving the present disclosure, the inventors found that, in the related art, the operation and maintenance management of storage device resources lacks a unified management mode for comprehensive operation and maintenance of heterogeneous storage resources; performance differences between different storage resources cause performance bottlenecks in the overall system; storage devices of different architectures require different management tools and policies, making operation and maintenance work complex; fault diagnosis and troubleshooting in heterogeneous storage devices is complicated; and overall operation and maintenance efficiency is low.
In view of this, the present disclosure determines a path update policy, a fault processing policy and a path load policy based on the hardware configuration information, the logic configuration information and the path usage information of the block type storage resource, so as to process different types of block resource information and obtain a path processing result. The path processing result is an integrated result obtained by comprehensively applying different policies to block resource information of different architectures, so that unified management of storage resources of different architectures and types is realized, management complexity is reduced, the complexity of fault investigation for heterogeneous storage devices is reduced, and the overall operation and maintenance efficiency of the storage resources is further improved.
The embodiments of the present disclosure provide a resource information processing method, apparatus, device, medium and program product. The method comprises: obtaining resource information of a storage resource, wherein the resource information comprises configuration information and usage information, the configuration information comprises hardware configuration information and logic configuration information, the storage resource comprises a block type resource, and the usage information comprises path usage information; determining a path processing policy based on the hardware configuration information, the logic configuration information and the path usage information, wherein the path processing policy comprises at least one of a path update policy, a fault processing policy and a path load policy; and processing the block resource information using the path update policy, the fault processing policy and the path load policy to obtain a path processing result.
Fig. 1 schematically illustrates an application scenario diagram of a resource information processing method, apparatus, device, medium and program product according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 is a medium used to provide a communication link between the first terminal device 101, the second terminal device 102, the third terminal device 103, and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the first terminal device 101, the second terminal device 102, and the third terminal device 103, to receive or send messages and the like. Various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only), may be installed on the first terminal device 101, the second terminal device 102, and the third terminal device 103.
The first terminal device 101, the second terminal device 102, the third terminal device 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by the user using the first terminal device 101, the second terminal device 102, and the third terminal device 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the resource information processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the resource information processing apparatus provided by the embodiments of the present disclosure may be generally provided in the server 105. The resource information processing method provided by the embodiment of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server 105. Accordingly, the resource information processing apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flowchart of a resource information processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the resource information processing method of this embodiment includes operations S210 to S230.
In operation S210, resource information of a storage resource is acquired, wherein the resource information includes configuration information and usage information, the configuration information includes hardware configuration information and logic configuration information, the storage resource includes a plurality of types of block type resources, and the usage information includes path usage information.
According to embodiments of the present disclosure, the storage resources may include storage resources of different architectures, different vendors, and different types, including, but not limited to, direct-attached storage (DAS), network attached storage (NAS), storage area network (SAN), and cloud storage. The hardware configuration information may include hardware device information, hardware management information, and resource pool information, and the logic configuration information may include path information, port information, and security policies. The block type resources may include SAN storage, and the file type resources may include NAS storage. It is understood that the resource information processing method in the present disclosure may be implemented in a programming language (e.g., Python).
In operation S220, a path processing policy is determined based on the hardware configuration information, the logic configuration information, and the path usage information, wherein the path processing policy includes at least one of a path update policy, a fault processing policy, and a path load policy.
According to embodiments of the present disclosure, a path may characterize a path through which a server accesses a storage device in a block type resource, and the path usage information may characterize, among a plurality of access paths, the path information that has been used or occupied as well as the remaining available path information. The fault processing policy may characterize path switching performed under the multipath policy when a path fails, the path load policy may characterize reasonably distributing data/information flows to different paths by means of a load balancing algorithm, and the path update policy may characterize updating the initial path information according to the path load policy and the fault processing policy.
In operation S230, the block resource information is processed using the path update policy, the fault handling policy, and the path loading policy, to obtain a path handling result.
According to an embodiment of the present disclosure, the path processing result may be a result of monitoring, processing, and updating the block resource information based on the path update policy, the fault processing policy, and the path load policy, and may include a path switching result, a path load result, and a path update result. The block resource information may include hardware configuration information, logic configuration information, and path information of the block type resource.
According to the embodiment of the present disclosure, based on the hardware configuration information, the logic configuration information and the path usage information of the block type storage resource, the path update policy, the fault processing policy and the path load policy can be determined to process different types of block resource information, thereby obtaining the path processing result.
According to the embodiment of the present disclosure, the hardware configuration information comprises hardware device information, the logic configuration information comprises path information, and there are a plurality of pieces of hardware device information and path information. Determining the path processing policy based on the hardware configuration information, the logic configuration information and the path usage information of the block type resource comprises: determining a plurality of initial path processing policies corresponding to the plurality of pieces of hardware device information and path information; and updating the plurality of initial path processing policies based on a preset update policy and the path usage information to obtain the path processing policy, so that the path processing policy matches the plurality of pieces of hardware device information.
According to embodiments of the present disclosure, there may be multiple types of hardware device information and paths because the block storage resources (e.g., SANs) come from different vendors. The initial path processing policy may be the respective path processing policy determined for storage resources of different vendors/types, e.g., an A-path processing policy for the SAN storage devices of vendor A and a B-path processing policy for the SAN storage devices of vendor B. By aggregating the initial path processing policies of the different block storage resources, the path processing policy can be obtained.
According to the embodiment of the present disclosure, updating the initial path processing policies based on the preset update policy for different types of storage resources may comprise: defining processing policies for different paths, including load balancing and failover policies; selecting supported protocols such as iSCSI or Fibre Channel to achieve compatibility between devices of different architectures; selecting a multipath tool that supports heterogeneous environments to ensure compatibility and configuration flexibility; keeping the path policies consistent when configuring and managing them, to ensure policy uniformity among different devices; and comprehensively testing the different devices before deployment to ensure the validity and reliability of the path processing policy.
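As a rough illustration of how per-vendor initial path policies might be collected and then unified under a preset update rule, consider the following Python sketch; the vendor names, policy fields and the update rule itself are illustrative assumptions and not part of the disclosed method.

```python
# Illustrative sketch only: the vendor names, policy fields and the update
# rule are hypothetical and not taken from the disclosure.
from dataclasses import dataclass, replace

@dataclass
class PathPolicy:
    load_balancing: str   # e.g. "round-robin" or "least-queue-depth"
    failover: str         # e.g. "auto"
    protocol: str         # e.g. "FC" or "iSCSI"

# Initial per-vendor policies for heterogeneous block storage devices.
initial_policies = {
    "vendor_a_san": PathPolicy("round-robin", "auto", "FC"),
    "vendor_b_san": PathPolicy("least-queue-depth", "auto", "iSCSI"),
}

def unify_policies(policies, path_usage):
    """Apply a preset update rule so that all devices end up with consistent
    settings, e.g. prefer least-queue-depth when most paths are occupied."""
    busy = path_usage.get("used_paths", 0) > path_usage.get("free_paths", 0)
    target = "least-queue-depth" if busy else "round-robin"
    return {dev: replace(p, load_balancing=target) for dev, p in policies.items()}

unified = unify_policies(initial_policies, {"used_paths": 6, "free_paths": 2})
print(unified)
```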
According to the embodiment of the disclosure, the path processing strategy is obtained by updating the initial path processing strategy, so that the consistent path processing strategy can be realized in the heterogeneous storage resource environment, the complexity of storage equipment and path processing is reduced, and the performance and reliability of the system are optimized.
According to the embodiment of the disclosure, the path processing result is obtained by processing the block resource information by using the path updating policy, the fault processing policy and the path loading policy, wherein the path processing result comprises the steps of determining a plurality of pieces of storage path information in the block resource information, and processing the storage path information based on the fault processing policy, the path updating policy and the path loading policy under the condition that at least one piece of storage path information in the plurality of pieces of storage path information is detected to be abnormal information, so as to obtain the path processing result.
According to the embodiment of the disclosure, the anomaly information characterizes that the current path is a fault path or that the path performance does not meet the preset performance requirement. By processing the abnormal path according to the fault processing strategy, the path updating strategy and the path loading strategy, the path switching result, the path loading result and the path updating result can be obtained.
In one possible embodiment, it is considered that in a heterogeneous storage environment the storage devices may come from different vendors and use different technologies or protocols, so the path usage environment is complex, leading to compatibility issues, system performance bottlenecks, and stability issues. The present disclosure manages and optimizes data access paths in a heterogeneous SAN storage environment using a dynamic multipath policy, which may include: detecting and storing all path information in real time and configuring it; dynamically selecting an optimal path based on real-time performance information and path health status information; and automatically switching the data/information flow to a backup path when a path failure or performance degradation is detected.
Figure 3 schematically illustrates a flow chart of a method for implementing path failover, load balancing and monitoring functions in SAN storage using dynamic multipath policies according to an embodiment of the present disclosure.
As shown in fig. 3, a method for implementing path failover, load balancing, and monitoring functions in SAN storage using a dynamic multipath strategy may include operations S310-S330.
In operation S310, path failover is performed. Path faults are simulated to verify whether the software can automatically switch to a standby path, so as to ensure that data transmission is not interrupted.
In operation S320, a load balancing policy is set. A load balancing algorithm, such as round-robin or least connections, may be selected in the multipath software to distribute data traffic to the various paths, and a monitoring tool is used to monitor the load balancing effect and adjust the policy as needed.
In operation S330, the monitoring function is configured. By configuring the monitoring function of the multipath software, the health condition and performance of each path can be tracked in real time; alarms and reports generated by the software can be checked regularly so that potential hidden problems are handled in time; and the compatibility of the multipath software with the storage devices is ensured, with the software updated and maintained regularly to cope with new challenges.
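The following Python sketch illustrates, in simplified form, the failover, load balancing and monitoring behaviour described in operations S310 to S330; the path names, health probes, latency threshold and selection rule are assumptions made for illustration only.

```python
# Hypothetical sketch of the failover / load balancing / monitoring behaviour
# above; path names, probe values and thresholds are illustrative assumptions.
import random

paths = {
    "path_0": {"healthy": True, "latency_ms": 2.1, "inflight": 3},
    "path_1": {"healthy": True, "latency_ms": 1.4, "inflight": 7},
    "path_2": {"healthy": False, "latency_ms": None, "inflight": 0},
}

def monitor(paths, latency_threshold_ms=10.0):
    """Monitoring: mark a path unhealthy when its probe fails or latency degrades."""
    for p in paths.values():
        if p["latency_ms"] is None or p["latency_ms"] > latency_threshold_ms:
            p["healthy"] = False

def pick_path(paths, policy="least-inflight"):
    """Load balancing over healthy paths; failover is implicit because
    unhealthy paths are filtered out before selection."""
    healthy = {n: p for n, p in paths.items() if p["healthy"]}
    if not healthy:
        raise RuntimeError("no usable path: raise an alarm / switch fabric")
    if policy == "least-inflight":
        return min(healthy, key=lambda n: healthy[n]["inflight"])
    return random.choice(list(healthy))  # simple fallback: random spread

monitor(paths)
print("I/O routed via", pick_path(paths))
```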
According to the embodiment of the disclosure, the use information further comprises resource use information, the storage resource further comprises file type resources, the method further comprises the steps of processing hardware configuration information, logic configuration information and resource use information in a historical time period to obtain a resource use result, predicting the resource use information in a future time period by using a preset prediction rule and the resource use result to obtain a prediction result, determining a resource allocation strategy based on the prediction result, and processing the resource information by using the resource allocation strategy to obtain the resource allocation result.
According to an embodiment of the present disclosure, the resource usage information may include block type resource usage information, file type resource usage information. The historical time period may be selected according to the actual scene requirement, for example, a peak time period with the largest usage and demand of the storage resource in the historical time period may be selected, or a valley time period with the smallest usage and demand of the storage resource may be selected, and the specific time period is not limited herein. The file type resource usage information may include node usage information such as a disk, a hard disk domain, a storage pool, a tenant, a file system, a controller, a physical port group, a logical port group, and the like, and the block type resource usage information may include node usage information such as a disk, a Logical Unit (LUN), an Initiator Device (IDEV), a storage pool, a Host (Host), a Host group (HostGroup), and the like. The preset prediction rules may be a prediction algorithm, a prediction model, or based on empirical predictions.
In one possible embodiment, a random forest algorithm may be used to predict the resource usage for a future time period. This may include: collecting and organizing storage resource usage data over a historical time period, including timestamps and related features; extracting features, such as time features (hour, day, week, etc.) and load patterns, from the historical information; dividing the data set into a training set and a test set; training a random forest model with the training set data and adjusting hyper-parameters (e.g., number of trees, maximum depth) to optimize performance; evaluating the prediction accuracy of the model with the test set data and calculating error indicators; and predicting the resource usage for the future time period with the trained model to obtain the prediction result.
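A minimal sketch of such a random forest forecast, using scikit-learn, is shown below; the synthetic hourly usage series, the chosen features and the hyper-parameters are assumptions for illustration and not values taken from the disclosure.

```python
# A minimal scikit-learn sketch of the random-forest forecast described above;
# the synthetic hourly series, features and hyper-parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly usage samples
usage = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Time features: hour of day, day of week, previous-hour load.
X = np.column_stack([hours % 24, (hours // 24) % 7, np.roll(usage, 1)])[1:]
y = usage[1:]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
model = RandomForestRegressor(n_estimators=200, max_depth=8, random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out hours:", mean_absolute_error(y_test, model.predict(X_test)))
```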
In one possible embodiment, determining the resource allocation policy based on the prediction result may include: analyzing the prediction result and evaluating the peaks and valleys of the storage resource demand in the future time period; formulating the resource allocation policy based on the predicted demand, for example, reserving resources during high demand and releasing resources during low demand; setting priorities for different types of storage tasks based on actual business demand, to ensure that critical tasks are allocated resources first when resources are strained; implementing an automated adjustment mechanism to dynamically adjust the resource allocation based on actual conditions and the prediction result; and monitoring the resource usage and policy effects and optimizing the allocation policy based on actual data feedback.
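A simple threshold-based sketch of how a resource allocation plan might be derived from a predicted usage level is given below; the thresholds, reserve sizes and the notion of a priority reserve are illustrative assumptions rather than values from the disclosure.

```python
# Illustrative only: the thresholds, reserve sizes and "priority reserve"
# are assumptions, not values from the disclosure.
def allocation_plan(predicted_usage_pct, capacity_tb, high=80, low=30):
    """Reserve extra capacity before predicted peaks, release it in valleys,
    and always keep a fixed share for high-priority workloads."""
    plan = {"priority_reserve_tb": 0.1 * capacity_tb}
    if predicted_usage_pct >= high:
        plan.update(action="pre-allocate", extra_tb=0.2 * capacity_tb)
    elif predicted_usage_pct <= low:
        plan.update(action="release", extra_tb=-0.1 * capacity_tb)
    else:
        plan.update(action="hold", extra_tb=0.0)
    return plan

print(allocation_plan(predicted_usage_pct=87, capacity_tb=500))
```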
According to the embodiment of the present disclosure, using the prediction result to formulate the storage resource allocation policy for a future time period makes it possible to accurately predict resource demand, ensures that storage resources are used efficiently, and avoids waste or shortage of resources; allocating resources in advance reduces performance bottlenecks caused by resource shortage and improves the stability and response speed of the system; at the same time, optimizing the resource allocation ensures that important services and high-priority tasks obtain sufficient resources, improving service quality and user experience.
According to the embodiment of the disclosure, the resource use information in a future time period is predicted by using a preset prediction rule and a resource use result to obtain a prediction result, wherein the prediction result comprises the steps of extracting characteristic sequence information and a load peak value from the resource use result, inputting the characteristic sequence information and the load peak value into a time sequence model, and outputting the prediction result.
According to embodiments of the present disclosure, the signature sequence information may characterize trends, seasonal patterns, periodicity, outliers, and statistical properties (e.g., mean and variance) of the data in the historical resource usage information. The peak load may characterize the maximum usage of the resource at a certain point in time/period.
In one possible embodiment, a method for predicting the storage resource usage information in a future time period using a time series model may include: obtaining historical usage data of the storage resource, including timestamps and usage, where the historical data source may be a monitoring tool, log files, or a database; visualizing the data via graphs (e.g., time series plots) to observe resource usage trends, seasonality, and periodicity; computing basic statistics of the data (e.g., mean, standard deviation) and performing correlation analysis; checking the stationarity of the time series using an ADF test or a KPSS test and, if the data is not stationary, differencing or transforming it to make it stationary; selecting a suitable time series model, e.g., ARIMA (AutoRegressive Integrated Moving Average); splitting the historical (sample) data into a training set and a test set in time order; fitting the ARIMA model on the training set and evaluating its performance on the test set using indicators such as mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE); adjusting the parameters (p, d, q) according to the model performance or using rolling cross-validation; and predicting the resource usage in the future time period with the trained model to obtain the prediction result, where the prediction result may be further validated and a prediction interval for the future time period may also be provided.
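The following sketch walks through that ARIMA workflow with statsmodels on a synthetic series; the series itself, the (p, d, q) order and the 24-step forecast horizon are assumptions chosen only to make the example runnable.

```python
# Sketch of the ARIMA workflow with statsmodels; the synthetic series, the
# (p, d, q) order and the 24-step horizon are assumptions for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

idx = pd.date_range("2024-01-01", periods=30 * 24, freq="h")
usage = pd.Series(60 + 15 * np.sin(np.arange(idx.size) * 2 * np.pi / 24), index=idx)

print("ADF p-value:", adfuller(usage)[1])   # stationarity check

train, test = usage[:-24], usage[-24:]      # hold out the last day
fit = ARIMA(train, order=(2, 1, 2)).fit()   # (p, d, q) chosen for the sketch
forecast = fit.forecast(steps=24)
rmse = float(np.sqrt(((forecast.values - test.values) ** 2).mean()))
print("RMSE over the held-out day:", rmse)
```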
According to the embodiment of the disclosure, the hardware configuration information further comprises hardware management information and resource pool information, the logic configuration information further comprises port information and a security policy, the method further comprises the steps of determining a mapping relation between the resource information and the target device based on the hardware device information, the hardware management information, the resource pool information, the port information and the security policy, and generating resource architecture information based on the mapping relation, so that a user can correspondingly process the resource information according to configuration requirements.
According to embodiments of the present disclosure, the mapping relationship may characterize a mapping relationship between storage resource information to a target device (server), including, but not limited to, mount information, data storage location information, access rights information, and the like.
In a feasible embodiment, constructing the mapping relationship between the resource information and the target device may include: establishing a mapping table that records the relationship between each host and the storage devices connected to it, or records which hosts use each storage device; acquiring the resource usage and the mapping relationship in real time with a monitoring tool; and, further, periodically checking and updating the mapping relationship through an automation script so as to keep the data accurate and up to date.
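A hypothetical mapping table of this kind, together with its inversion ("which hosts use this storage device?"), might look as follows; the host names, LUN identifiers and mount points are invented for illustration.

```python
# A hypothetical mapping table: host names, LUN identifiers and mount points
# are invented for illustration.
host_to_storage = {
    "host-app-01": [
        {"device": "san-a", "lun": "LUN0012", "mount": "/data/app", "access": "rw"},
    ],
    "host-db-01": [
        {"device": "san-b", "lun": "LUN0207", "mount": "/data/db", "access": "rw"},
        {"device": "nas-1", "share": "/export/backup", "mount": "/backup", "access": "ro"},
    ],
}

def storage_to_hosts(mapping):
    """Invert the table to answer 'which hosts use this storage device?'."""
    inverted = {}
    for host, entries in mapping.items():
        for entry in entries:
            inverted.setdefault(entry["device"], set()).add(host)
    return inverted

print(storage_to_hosts(host_to_storage))
```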
According to the embodiment of the present disclosure, determining the mapping relationship between the resource information and the target device makes it possible to identify resource usage and allocation bottlenecks and adjust the resource allocation, improving overall system performance, ensuring load balance, and reducing overload of individual resources; at the same time, accurate mapping allows resources to be used more effectively, reducing resource idling and waste, allows resource problems to be located quickly, and improves the stability and reliability of the system.
According to the embodiment of the disclosure, the method further comprises the steps of detecting the storage resources by using a detection tool to obtain type information of the storage resources, and determining configuration information and use information based on the type information.
According to the embodiment of the present disclosure, when implementing the resource information processing method of the present disclosure in Python, a number of tool libraries can be used, including the pexpect library. The pexpect library can automate interaction with various programs (such as ssh, ftp, passwd and telnet), log in to devices, determine whether a device can be logged in, and check the device vendor and model to determine whether it is a storage device; storage configuration information of different architectures can then be collected according to the storage devices of different vendors, and the storage configuration information may include storage hardware resources, hard disk domains, storage pools, file systems, logical port groups, data protection, mapping relationships, host units, multipath links, and the like.
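A minimal pexpect sketch of this login-and-identify step is shown below; the SSH prompt patterns, credential handling and the vendor-query command are assumptions that would differ per storage device and are not taken from the disclosure.

```python
# Minimal pexpect sketch; prompt patterns, credential handling and the
# "show system" vendor query are assumptions that vary per device.
import pexpect

def identify_storage(host, user, password):
    child = pexpect.spawn(f"ssh {user}@{host}", timeout=15, encoding="utf-8")
    i = child.expect(["[Pp]assword:", pexpect.TIMEOUT, pexpect.EOF])
    if i != 0:
        return None                    # cannot log in: report a connection anomaly
    child.sendline(password)
    child.expect(r"[#>$]")             # wait for a shell prompt (device-dependent)
    child.sendline("show system")      # hypothetical vendor/model query
    child.expect(r"[#>$]")
    banner = child.before              # parse vendor and model out of this text
    child.sendline("exit")
    return banner
```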
In one possible embodiment, in addition to the pexpect library, the tool libraries may include the time library, the operating system (os) library, the diagrams library, the PyQt library, and the NumPy and SciPy libraries. The time library can access various types of clocks to record the current time; the os library can read and write files and directories to generate and access files; the diagrams library can draw a cloud system architecture with simple Python code to prototype a new system architecture; the PyQt library can be used to create graphical user interface (GUI) applications and build an operation and maintenance platform for heterogeneous storage devices; and the NumPy and SciPy libraries can be used for numerical and scientific computation, including scientific computing, linear algebra, signal processing, and optimization problems.
According to the embodiment of the disclosure, the method further comprises the step of processing the resource information and the processing result by using a graph generating tool to generate chart information.
According to embodiments of the present disclosure, the graph generation tool may be a tool in the tool libraries for visualizing the resource information, including the Matplotlib library and the Networkx library. The Matplotlib library is a low-level library for creating two-dimensional charts and graphics and can build charts for different network models, and the Networkx library can be used to create and process complex graph/network structures and generate network topology diagrams.
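For example, a small topology graph could be drawn with the Networkx and Matplotlib libraries roughly as follows; the host, switch and array names are placeholders for the nodes discovered earlier, not data from the disclosure.

```python
# Topology sketch with Networkx and Matplotlib; node names are placeholders
# for the hosts, switches and arrays discovered earlier.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("host-app-01", "fc-switch-1"), ("host-db-01", "fc-switch-1"),
    ("fc-switch-1", "san-a"), ("fc-switch-1", "san-b"),
    ("host-db-01", "nas-1"),
])
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_size=1800, font_size=8)
plt.savefig("storage_topology.png", dpi=150)
```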
Fig. 4 schematically illustrates a flowchart of another resource information processing method according to an embodiment of the present disclosure.
As shown in fig. 4, the resource information processing method may include operations S410 to S490.
In operation S410, a storage management address is acquired.
In operation S420, device login may be implemented through Pexpect, and it may be determined whether the device can be logged in.
In operation S421, in the case where the device can be logged in, the device is logged in.
In operation S422, the device type is judged. Checking the manufacturer and model of the device, and judging whether the device is a storage device.
In operation S423, if the device cannot be logged in, a connection abnormality is presented.
In operation S430, in the case where the login device is a storage device, storage configuration information of different architectures may be collected according to different types of storage devices, including storage hardware resources, hard disk domains, storage pools, file systems, logical port groups, data protection, mapping relationships, host groups, multi-path links, and the like.
In operation S440, based on the collected storage configuration information, storage resource usage may be analyzed, including the disk, network, and processor usage of each business system.
In operation S450, a future resource usage trend may be predicted according to the resource usage situation.
In operation S460, node information of different types of resources is acquired. For block type storage resources, node information such as disks, logical units, initiator devices, storage pools, hosts, and host groups is collected.
In operation S470, the mapping relationship between the storage resources and the target devices (hosts) may be confirmed according to the acquired node information. A storage network architecture diagram can be constructed using the diagrams library, and information query, configuration, monitoring, and access functions are implemented for each node.
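A sketch of such an architecture diagram using the diagrams library (which requires Graphviz to be installed) might look as follows; the node classes and names shown are illustrative assumptions.

```python
# Sketch with the diagrams library (requires Graphviz); node classes and
# names are illustrative assumptions.
from diagrams import Cluster, Diagram
from diagrams.generic.storage import Storage
from diagrams.onprem.compute import Server

with Diagram("Heterogeneous storage architecture", show=False, filename="storage_arch"):
    with Cluster("Hosts"):
        hosts = [Server("host-app-01"), Server("host-db-01")]
    with Cluster("Block storage (SAN)"):
        sans = [Storage("san-a"), Storage("san-b")]
    nas = Storage("nas-1 (NAS)")

    for h in hosts:
        h >> sans[0]
    hosts[1] >> sans[1]
    hosts[1] >> nas
```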
In operation S480, a storage resource data visualization graph may be constructed using the Matplotlib library to analyze the storage of each node resource.
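A per-node usage chart of this kind could be produced with Matplotlib roughly as follows; the node names and capacity figures are invented for illustration.

```python
# Per-node usage chart with Matplotlib; node names and capacity figures are
# invented for illustration.
import matplotlib.pyplot as plt

nodes = ["san-a", "san-b", "nas-1"]
used_tb = [310, 120, 45]
total_tb = [500, 300, 100]

fig, ax = plt.subplots()
ax.bar(nodes, total_tb, color="lightgray", label="capacity (TB)")
ax.bar(nodes, used_tb, label="used (TB)")
ax.set_ylabel("TB")
ax.set_title("Storage usage per node")
ax.legend()
fig.savefig("storage_usage.png", dpi=150)
```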
In operation S490, a storage capacity and performance plan analysis report is generated from the architecture diagram and the data analysis diagram.
According to the embodiment of the present disclosure, processing the resource information and generating charts with the graph generation tool can greatly improve the visualization of the data, making complex data easier to understand and analyze, facilitating real-time monitoring and performance optimization, and playing an important role in fault diagnosis and troubleshooting: fault locations can be identified quickly and overall operation and maintenance efficiency is improved.
Fig. 5 schematically illustrates a system architecture diagram of a resource information processing method according to an embodiment of the present disclosure.
As shown in fig. 5, resource information 510 of a storage resource may be acquired, where the resource information may include configuration information 511 and usage information 512, the configuration information includes hardware configuration information 5111 and logic configuration information 5112, the storage resource includes a block type resource, and the usage information 512 may include path usage information 5121. A path processing policy 520 is then determined based on the hardware configuration information 5111, the logic configuration information 5112 and the path usage information 5121, the path processing policy 520 including at least one of a path update policy 521, a fault processing policy 522 and a path load policy 523, and the block resource information is processed using the path update policy 521, the fault processing policy 522 and the path load policy 523 to obtain a path processing result 530.
Based on the resource information processing method, the disclosure also provides a resource information processing device. The device will be described in detail below in connection with fig. 6.
Fig. 6 schematically shows a block diagram of a resource information processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the resource information processing apparatus of this embodiment includes a resource information acquisition module 610, a path processing policy determination module 620, and a resource information processing module 630.
A resource information obtaining module 610, configured to obtain resource information of a storage resource, where the resource information includes configuration information and usage information, the configuration information includes hardware configuration information and logic configuration information, the storage resource includes a plurality of types of block type resources, and the usage information includes path usage information. In an embodiment, the resource information obtaining module 610 may be configured to perform the operation S210 described above, which is not described herein.
The path processing policy determining module 620 is configured to determine a path processing policy based on the hardware configuration information, the logic configuration information, and the path usage information, where the path processing policy includes at least one of a path update policy, a fault processing policy, and a path load policy. In an embodiment, the path processing policy determining module 620 may be configured to perform the operation S220 described above, which is not described herein.
The resource information processing module 630 is configured to process the block resource information by using a path update policy, a fault processing policy, and a path load policy, so as to obtain a path processing result. In an embodiment, the resource information processing module 630 may be configured to perform the operation S230 described above, which is not described herein.
According to the embodiment of the present disclosure, through the resource information acquisition module 610, the path processing policy determination module 620 and the resource information processing module 630 in the resource information processing apparatus, the path update policy, the fault processing policy and the path load policy can be determined based on the hardware configuration information, the logic configuration information and the path usage information of the block type storage resource, so as to process different types of block resource information and obtain the path processing result.
According to the embodiment of the disclosure, the hardware configuration information comprises hardware equipment information, the logic configuration information comprises path information, the hardware equipment information and the path information are multiple, and the path processing strategy determining module comprises an initial strategy determining sub-module and a strategy updating module.
An initial policy determination sub-module for determining a plurality of initial path processing policies corresponding to the plurality of hardware device information and the plurality of path information.
And the policy updating module is used for updating the plurality of initial path processing policies based on a preset updating policy and path use information to obtain a path processing policy, so that the path processing policy is matched with the plurality of hardware equipment information.
According to an embodiment of the present disclosure, the resource information processing module includes a path information determination sub-module and a path information processing sub-module.
And the path information determination submodule is used for determining a plurality of storage path information in the block resource information.
And the path information processing sub-module is used for processing the stored path information based on the fault processing strategy, the path updating strategy and the path loading strategy to obtain a path processing result when at least one stored path information in the plurality of stored path information is detected to be abnormal information.
According to the embodiment of the disclosure, the usage information further comprises resource usage information, the storage resource further comprises file type resources, and the device further comprises a first information processing module, an information prediction module, an allocation strategy determination module and a second information processing module.
The first information processing module is used for processing the hardware configuration information, the logic configuration information and the resource use information in the historical time period to obtain a resource use result.
And the information prediction module is used for predicting the resource use information in the future time period by utilizing a preset prediction rule and a resource use result to obtain a prediction result.
And the allocation strategy determining module is used for determining a resource allocation strategy based on the prediction result.
And the second information processing module is used for processing the resource information by utilizing the resource allocation strategy to obtain a resource allocation result.
According to the embodiment of the disclosure, the information prediction module comprises an extraction sub-module and a prediction result output sub-module.
And the extraction submodule is used for extracting the characteristic sequence information and the load peak value from the resource use result.
And the prediction result output sub-module is used for inputting the characteristic sequence information and the load peak value into the time sequence model and outputting a prediction result.
According to the embodiment of the disclosure, the hardware configuration information further comprises hardware management information and resource pool information, the logic configuration information further comprises port information and a security policy, and the device further comprises a mapping relation determining module and an architecture information generating module.
The mapping relation determining module is used for determining the mapping relation between the resource information and the target device based on the hardware device information, the hardware management information, the resource pool information, the port information and the security policy.
And the architecture information generation module is used for generating resource architecture information based on the mapping relation, so that a user carries out corresponding processing on the resource information according to configuration requirements.
The device further comprises a resource detection module and a configuration information determination module.
And the resource detection module is used for detecting the storage resources by using a detection tool to obtain the type information of the storage resources.
And the configuration information determining module is used for determining configuration information and use information based on the type information.
The device further comprises a chart information generating module, wherein the chart information generating module is used for processing the resource information and the processing result by using a graph generating tool to generate chart information.
Any of the resource information acquisition module 610, the path processing policy determination module 620, and the resource information processing module 630 may be combined in one module to be implemented, or any of them may be split into a plurality of modules, according to an embodiment of the present disclosure. Or at least some of the functionality of one or more of the modules may be combined with, and implemented in, at least some of the functionality of other modules. According to embodiments of the present disclosure, at least one of the resource information acquisition module 610, the path processing policy determination module 620, and the resource information processing module 630 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable way of integrating or packaging circuitry, or in any one of or a suitable combination of three of software, hardware, and firmware. Or at least one of the resource information acquisition module 610, the path processing policy determination module 620 and the resource information processing module 630 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a resource information processing method according to an embodiment of the disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The electronic device 700 may also include one or more of the following components connected to the input/output (I/O) interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the input/output (I/O) interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.
The present disclosure also provides a computer-readable storage medium that may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 702 and/or RAM 703 and/or one or more memories other than ROM 702 and RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product runs on a computer system, the program code causes the computer system to implement the resource information processing method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program may comprise program code that is transmitted using any appropriate network medium, including but not limited to wireless and wireline media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure may be combined and/or recombined in various ways, even if such combinations or recombinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure may be combined and/or recombined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or recombinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (12)

1. A resource information processing method, the method comprising:
Acquiring resource information of a storage resource, wherein the resource information comprises configuration information and use information, the configuration information comprises hardware configuration information and logic configuration information, the storage resource comprises a plurality of types of block type resources, and the use information comprises path use information;
determining a path processing policy based on the hardware configuration information, the logic configuration information, and the path usage information, wherein the path processing policy includes at least one of a path update policy, a fault processing policy, and a path load policy, and
And processing the block resource information by using the path updating strategy, the fault processing strategy and the path loading strategy to obtain a path processing result.
2. The method of claim 1, wherein the hardware configuration information comprises hardware device information, the logic configuration information comprises path information, and there are a plurality of pieces of the hardware device information and a plurality of pieces of the path information;
Determining a path processing policy based on hardware configuration information of the block type resource, the logic configuration information, and the path usage information, comprising:
determining a plurality of initial path processing policies corresponding to a plurality of the hardware device information and a plurality of the path information, and
And updating a plurality of initial path processing strategies based on a preset updating strategy and the path using information to obtain the path processing strategy, so that the path processing strategy is matched with a plurality of pieces of hardware equipment information.
3. The method of claim 2, wherein processing the block resource information using the path update policy, the fault handling policy, and the path load policy to obtain a path handling result comprises:
Determining a plurality of storage path information in the block resource information, and
And under the condition that at least one storage path information in the plurality of storage path information is detected to be abnormal information, processing the storage path information based on the fault processing strategy, the path updating strategy and the path loading strategy to obtain the path processing result.
4. The method of claim 2, wherein the usage information further comprises resource usage information, the storage resources further comprise file type resources, the method further comprising:
processing the hardware configuration information, the logic configuration information and the resource use information in the historical time period to obtain a resource use result;
predicting the resource use information in the future time period by using a preset prediction rule and the resource use result to obtain a prediction result;
determining a resource allocation policy based on the prediction result, and
And processing the resource information by utilizing the resource allocation strategy to obtain a resource allocation result.
5. The method of claim 4, wherein predicting the resource usage information for the future time period using the preset prediction rule and the resource usage result to obtain the prediction result, comprises:
extracting characteristic sequence information and load peaks from the resource usage results, and
And inputting the characteristic sequence information and the load peak value into a time sequence model, and outputting the prediction result.
6. The method of claim 4, wherein the hardware configuration information further comprises hardware management information and resource pool information, and the logic configuration information further comprises port information and security policies;
the method further comprises the steps of:
Determining a mapping relationship between the resource information and a target device based on the hardware device information, the hardware management information, the resource pool information, the port information, and a security policy, and
And generating resource architecture information based on the mapping relation, so that a user carries out corresponding processing on the resource information according to configuration requirements.
7. The method according to claim 1, wherein the method further comprises:
detecting the storage resource by using a detection tool to obtain the type information of the storage resource, and
The configuration information and the usage information are determined based on the type information.
8. The method according to claim 1, wherein the method further comprises:
and processing the resource information and the processing result by using a graph generating tool to generate chart information.
9. A resource information processing apparatus, characterized in that the apparatus comprises:
A resource information acquisition module, configured to acquire resource information of a storage resource, where the resource information includes configuration information and usage information, the configuration information includes hardware configuration information and logic configuration information, the storage resource includes a plurality of types of block type resources, and the usage information includes path usage information;
A path processing policy determining module for determining a path processing policy based on hardware configuration information, the logic configuration information and the path usage information, wherein the path processing policy includes at least one of a path update policy, a fault processing policy and a path load policy, and
And the resource information processing module is used for processing the block resource information by utilizing the path updating strategy, the fault processing strategy and the path loading strategy to obtain a path processing result.
10. An electronic device, comprising:
One or more processors;
A memory for storing one or more computer programs,
Characterized in that the one or more processors execute the one or more computer programs to implement the steps of the method according to any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program or instructions is stored, which, when executed by a processor, carries out the steps of the method according to any one of claims 1-8.
12. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 8.
CN202411286040.7A (filed 2024-09-13, priority 2024-09-13): Resource information processing method, apparatus, device, medium, and program product. Status: Pending. Published as CN119415013A (en).

Application and family data

Application Number: CN202411286040.7A
Priority Date / Filing Date: 2024-09-13
Publication Number: CN119415013A
Publication Date: 2025-02-11
Family ID: 94458709
Country Status: CN (1), CN119415013A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination