Disclosure of Invention
In order to solve the above problems, the invention provides a slow fault detection method, device, equipment and medium for a distributed storage system, which can improve the availability of a Ceph system.
The technical solution adopted by the invention to solve the technical problem is as follows:
In a first aspect, a slow fault detection method in a distributed storage system according to an embodiment of the present invention includes the following steps:
Step S1: acquiring, every interval t, the data exchanged between the OC module and all OSDs, calculating the latency and throughput within the interval t, and constructing a data set;
Step S2: preprocessing the data set using the PCA algorithm and the DBSCAN algorithm;
Step S3: establishing a linear regression model using polynomial regression;
Step S4: predicting on the preprocessed data set using the linear regression model, identifying out-of-bound entries according to a slow fault detection threshold, and marking them as slow entries;
Step S5: counting all slow fault events using a sliding window and a scoreboard mechanism, and calculating scores.
As a possible implementation of this embodiment, in step S1, the collected data includes a timestamp, a host, a disk ID, a latency, and a throughput.
As a possible implementation of this embodiment, preprocessing the data set using the PCA algorithm and the DBSCAN algorithm in step S2 includes:
standardizing the data set;
identifying abnormal values in the data set using the DBSCAN algorithm;
discarding the outlier entries from the data set with the aid of the PCA algorithm.
As a possible implementation of this embodiment, establishing a linear regression model using polynomial regression in step S3 includes:
constructing and transforming polynomial features to establish a binomial (quadratic) relation between latency and throughput;
fitting the binomial relation between latency and throughput to build a linear regression model, the model outputting predicted latency values.
As a possible implementation of this embodiment, predicting on the preprocessed data set using the linear regression model and identifying and marking out-of-bound entries as slow fault entries according to the slow fault detection threshold in step S4 includes:
calculating the standard deviation of the residuals between the linear regression model's predictions and the actual values; determining, from the properties of the standard normal distribution in statistics, the z quantile corresponding to a given confidence level; multiplying the standard deviation by the z quantile to obtain the upper limit of the confidence interval; and taking this upper limit as the slow fault detection threshold;
identifying abnormal entries: after prediction for each node, a sliding window is defined, and if the latency values within the sliding window exceed the upper confidence limit, a slow fault event is considered to have occurred.
As a possible implementation of this embodiment, counting all slow fault events and calculating scores using a sliding window and a scoreboard mechanism in step S5 includes:
judging whether a slow fault event has occurred;
if a slow fault event occurs and no slow fault event occurred in the preceding period, the Score decays by a configured factor (the initial Score is 10); if the Score keeps decaying and falls below 0, it is set to 0;
if a slow fault event occurs and a slow fault event also occurred in the preceding period, the Score grows by a configured factor; once the Score exceeds a threshold, the calculation stops.
As a possible implementation of this embodiment, judging whether a slow fault event has occurred includes:
computing the ratio slowRatio of slow fault entries within a given time window as:
slowRatio = slowCount / totalCount,
where slowCount is the number of slow fault entries and totalCount is the total number of entries;
if slowRatio is greater than the slow fault event proportion threshold, a slow fault event is deemed to have occurred.
In a second aspect, an embodiment of the present invention provides a slow fault detection device in a distributed storage system, including:
a data set construction module, configured to acquire, every interval t, the data exchanged between the OC module and all OSDs, calculate the latency and throughput within the interval t, and construct a data set;
a data set preprocessing module, configured to preprocess the data set using the PCA algorithm and the DBSCAN algorithm;
a model building module, configured to establish a linear regression model using polynomial regression;
a slow fault detection module, configured to predict on the preprocessed data set using the linear regression model, identify out-of-bound entries according to the slow fault detection threshold, and mark them as slow entries;
a slow fault statistics module, configured to count all slow fault events and calculate scores using a sliding window and a scoreboard mechanism.
In a third aspect, an embodiment of the present invention provides a computer device, including a processor, a memory and a bus, where the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the slow fault detection method in a distributed storage system according to any of the above.
In a fourth aspect, an embodiment of the present invention provides a storage medium having a computer program stored thereon, which, when executed by a processor, performs the steps of the slow fault detection method in a distributed storage system according to any of the above.
The technical solution of the embodiment of the invention has the following beneficial effects:
Using machine learning techniques, the invention builds a regression model from the characteristics of the captured disk-level latency and throughput data, evaluates the severity of slow faults with a scoreboard mechanism, isolates problematic drives in time, and improves the availability of the Ceph system. The invention introduces a new node-level, lightweight, small-data-set regression-based detection of drive-level slow fault events, discovering such problems in advance and effectively improving system availability and the long-tail latency problem.
The slow fault detection device in a distributed storage system according to the technical solution of the embodiment of the invention has the same beneficial effects as the corresponding slow fault detection method described above.
Detailed Description
In order to more clearly illustrate the technical features of the solution of the present invention, the present invention will be described in detail below with reference to the following detailed description and the accompanying drawings.
In order to detect drive fail-slow states in a Ceph system, the invention uses classical machine learning techniques (PCA, DBSCAN and polynomial regression) to establish a mapping between latency variation and workload pressure, through which an accurate adaptive threshold can be determined automatically for each node to identify tracked slow requests. Furthermore, based on the slow requests, the invention constructs corresponding slow fault events and uses a scoreboard mechanism to evaluate the severity of such events.
As shown in fig. 1, the method for detecting a slow failure in a distributed storage system according to the embodiment of the present invention includes the following steps:
Step S1: acquiring, every interval t, the data exchanged between the OC module and all OSDs, calculating the latency and throughput within the interval t, and constructing a data set;
Step S2: preprocessing the data set using the PCA algorithm and the DBSCAN algorithm;
Step S3: establishing a linear regression model using polynomial regression;
Step S4: predicting on the preprocessed data set using the linear regression model, identifying out-of-bound entries according to a slow fault detection threshold, and marking them as slow entries;
Step S5: counting all slow fault events using a sliding window and a scoreboard mechanism, and calculating scores.
As a possible implementation of this embodiment, in step S1, the collected data includes a timestamp, a host, a disk ID, a latency, and a throughput.
As a possible implementation of this embodiment, preprocessing the data set using the PCA algorithm and the DBSCAN algorithm in step S2 includes:
standardizing the data set;
identifying abnormal values in the data set using the DBSCAN algorithm;
discarding the outlier entries from the data set with the aid of the PCA algorithm.
As a possible implementation of this embodiment, establishing a linear regression model using polynomial regression in step S3 includes:
constructing and transforming polynomial features to establish a binomial (quadratic) relation between latency and throughput;
fitting the binomial relation between latency and throughput to build a linear regression model, the model outputting predicted latency values.
As a possible implementation of this embodiment, predicting on the preprocessed data set using the linear regression model and identifying and marking out-of-bound entries as slow fault entries according to the slow fault detection threshold in step S4 includes:
calculating the standard deviation of the residuals between the linear regression model's predictions and the actual values; determining, from the properties of the standard normal distribution in statistics, the z quantile corresponding to a given confidence level; multiplying the standard deviation by the z quantile to obtain the upper limit of the confidence interval; and taking this upper limit as the slow fault detection threshold;
identifying abnormal entries: after prediction for each node, a sliding window is defined, and if the latency values within the sliding window exceed the upper confidence limit, a slow fault event is considered to have occurred.
As a possible implementation of this embodiment, counting all slow fault events and calculating scores using a sliding window and a scoreboard mechanism in step S5 includes:
judging whether a slow fault event has occurred;
if a slow fault event occurs and no slow fault event occurred in the preceding period, the Score decays by a configured factor (the initial Score is 10); if the Score keeps decaying and falls below 0, it is set to 0;
if a slow fault event occurs and a slow fault event also occurred in the preceding period, the Score grows by a configured factor; once the Score exceeds a threshold, the calculation stops.
As a possible implementation of this embodiment, judging whether a slow fault event has occurred includes:
computing the ratio slowRatio of slow fault entries within a given time window as:
slowRatio = slowCount / totalCount,
where slowCount is the number of slow fault entries and totalCount is the total number of entries;
if slowRatio is greater than the slow fault event proportion threshold, a slow fault event is deemed to have occurred, namely:
curSlow = 1 if slowRatio > slowThreshold, otherwise curSlow = 0;
where slowThreshold is the slow fault event proportion threshold; if curSlow = 1, a slow fault event is considered to have occurred.
As shown in Fig. 2, a slow fault detection device in a distributed storage system according to an embodiment of the present invention includes:
a data set construction module, configured to acquire, every interval t, the data exchanged between the OC module and all OSDs, calculate the latency and throughput within the interval t, and construct a data set;
a data set preprocessing module, configured to preprocess the data set using the PCA algorithm and the DBSCAN algorithm;
a model building module, configured to establish a linear regression model using polynomial regression;
a slow fault detection module, configured to predict on the preprocessed data set using the linear regression model, identify out-of-bound entries according to the slow fault detection threshold, and mark them as slow entries;
a slow fault statistics module, configured to count all slow fault events and calculate scores using a sliding window and a scoreboard mechanism.
As shown in Fig. 3, the components involved in the implementation of the present invention are the OSD (Object Storage Device), the MGR (Manager) module, and the OC module (the on-node data collection module). The specific slow fault detection process of the invention is as follows.
1. Data collection.
Every 15 s, the OC module collects op_w_latency and op_w_in_bytes from all OSDs it interacts with on the node and calculates the throughput and latency for the period. Collection runs for 1 hour per day, during idle-load periods, and then stops. The format of the collected data is as follows:
The collected data includes the timestamp (Timestamp), host (Host), disk ID (DiskID), latency (Latency), and throughput (Throughput).
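For illustration, the collection loop can be sketched in Python as follows. This is a minimal sketch under stated assumptions: `query_osd_perf` is a hypothetical stand-in for whatever interface reads the op_w_latency / op_w_in_bytes counters of an OSD, and the column names follow the format above.

```python
import socket
import time

import pandas as pd

INTERVAL_S = 15  # sampling period t

def collect_samples(osd_ids, query_osd_perf, rounds):
    """Build the data set row by row; one row per OSD per interval."""
    rows = []
    for _ in range(rounds):
        ts = time.time()
        for osd_id in osd_ids:
            # query_osd_perf is assumed to return the interval's
            # op_w_latency value and op_w_in_bytes byte count.
            latency, wr_bytes = query_osd_perf(osd_id)
            rows.append({
                "Timestamp": ts,
                "Host": socket.gethostname(),
                "DiskID": osd_id,
                "Latency": latency,
                "Throughput": wr_bytes / INTERVAL_S,  # bytes per second
            })
        time.sleep(INTERVAL_S)
    return pd.DataFrame(rows)
```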
The following steps are then performed for the collected data set.
2. Outlier entries are identified and discarded using PCA and the density-based spatial clustering algorithm DBSCAN.
2.1 DBSCAN algorithm.
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm that defines clusters as maximal sets of density-connected points; it can divide regions of sufficiently high density into clusters and can find clusters of arbitrary shape in noisy spatial databases.
A DBSCAN cluster is a maximal set of density-connected samples derived from the density-reachability relation, i.e., one class (cluster) of the final clustering result. A cluster contains one or more core objects. If there is only one core object, then all other non-core samples in the cluster lie within the ε-neighborhood of that core object; if there are multiple core objects, then the ε-neighborhood of any core object in the cluster must contain at least one other core object, otherwise the two core objects could not be density-reachable. The union of all samples in the ε-neighborhoods of these core objects forms one DBSCAN cluster.
How is such a cluster sample set found? An unclassified core object is arbitrarily chosen as a seed, and the set of all samples density-reachable from it is found, forming one cluster. Another unclassified core object is then selected and its density-reachable sample set is found, yielding another cluster (every cluster obtained this way is necessarily density-connected). This repeats until every core object has been assigned to a class.
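As a toy illustration of this behavior (synthetic points, not the patent's data), the following snippet shows scikit-learn's DBSCAN assigning dense groups to clusters and labelling the isolated point as noise (-1):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.1],   # dense group -> cluster 0
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],   # dense group -> cluster 1
              [9.0, 0.0]])                           # isolated point -> noise
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(labels)  # [0 0 0 1 1 1 -1]
```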
2.2 PCA (principal component analysis).
Dimension reduction is a preprocessing method for high-dimensional feature data. It retains the important characteristics of the high-dimensional data while removing noise and unimportant features, thereby speeding up data processing. In practice, within an acceptable range of information loss, dimension reduction can save a great deal of time and cost, and it is a very widely applied data preprocessing method.
PCA (Principal Component Analysis) is one of the most widely used dimension reduction algorithms. The main idea of PCA is to map n-dimensional features onto k dimensions, which are completely new orthogonal features, also called principal components: k-dimensional features reconstructed on the basis of the original n-dimensional ones. PCA works by sequentially finding a set of mutually orthogonal axes in the original space, a choice closely tied to the data itself. The first new axis is the direction of maximum variance in the original data; the second axis is the direction, among those orthogonal to the first, that maximizes the remaining variance; the third maximizes variance among directions orthogonal to the first two; and so on, until n such axes are obtained. Most of the variance turns out to be concentrated in the first k axes, while the later axes carry almost none. The remaining axes can therefore be ignored, keeping only the first k axes that hold the vast majority of the variance. In effect, this retains only the feature dimensions containing most of the variance while discarding those with near-zero variance, achieving dimension reduction of the feature data.
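A small illustrative example of this axis-finding (synthetic data, not from the invention): for strongly correlated 2-D data, nearly all of the variance falls on the first principal axis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=500)
# Two strongly correlated features: variance concentrates along one axis.
X = np.column_stack([x, 2.0 * x + rng.normal(scale=0.1, size=500)])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # first component carries nearly all the variance
```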
2.3 Data standardization.
StandardScaler is a common data preprocessing technique that standardizes feature data to zero mean and unit variance. Its main purposes and benefits include:
Eliminating dimensional differences between features: different features often have different units and scales; for example, one feature may range in the hundreds or thousands while another lies between 0 and 1. Such scale differences can distort training, possibly causing slow convergence or poor results. Standardization removes these differences, making it easier for the model to learn the relationships between features.
In many machine learning algorithms, such as gradient descent, feature scale directly affects the convergence speed and the final model accuracy. Standardization lets the algorithm converge faster and yields a more accurate model.
Reducing the influence of outliers: standardization transforms the data to zero mean and unit variance, giving a tighter distribution, so the influence of outliers on the mean and variance, and hence on the model, is reduced.
Improving interpretability: when features differ greatly in scale, the model parameters can be hard to interpret. After standardization the parameter magnitudes become comparable, improving the interpretability of the model.
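A brief illustration of the standardization step (toy numbers): StandardScaler rescales each column to mean 0 and variance 1, removing the scale gap between a large-valued feature and a small-valued one.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# A large-scale feature (e.g., throughput) next to a small-scale one (latency).
X = np.array([[1000.0, 0.2],
              [2000.0, 0.4],
              [3000.0, 0.9]])
Xs = StandardScaler().fit_transform(X)
print(Xs.mean(axis=0))  # ~[0. 0.]
print(Xs.std(axis=0))   # ~[1. 1.]
```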
2.4 Screening outliers using DBSCAN and PCA.
A necessary preprocessing step is to remove outlier samples before applying the regression model. Although <throughput, latency> sample pairs within one node are typically clustered together, entries from slow-failing drives or from normal performance variation (e.g., internal GC) may still deviate. Therefore, the invention first screens out outliers before building the polynomial regression model. A DBSCAN density-based clustering algorithm (measuring spatial distance) is an effective way to separate likely-normal points from outliers.
In short, DBSCAN groups points that are sufficiently close in space, i.e., whose mutual distance falls below a threshold. Note that <throughput, latency> pairs from long-term or permanently slow-failing drives may cluster together, yet far from the primary cluster. The invention therefore retains only the largest cluster for further modeling.
Unfortunately, screening the raw data set with DBSCAN alone proves to be of limited effectiveness. The root cause is that throughput and latency are positively correlated, so the samples (i.e., <throughput, latency> pairs) stretch along a particular direction, and outliers (samples from slow-failing drives) may be falsely labelled as inliers. The invention therefore transforms the coordinates with principal component analysis (PCA) and penalizes deviation perpendicular to the tilt direction to reduce false labels.
2.5 Based on the above, the engineering implementation performs the following steps:
A. The data is first read from a CSV file using the pandas library, as shown in Table 1 (Table 1 shows a partial sample of the data).
Table 1: Data read from the CSV file
B. Take log10 of the throughput to obtain log10_throughput, construct a StandardScaler to standardize the data to mean 0 and variance 1, construct a two-dimensional PCA object, and apply the PCA transform to the standardized data.
C. Construct a DBSCAN object with neighborhood radius 0.5 and minimum sample count 5, and run prediction screening on the PCA-transformed data.
D. Delete the abnormal entries whose predicted label is -1.
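Steps A-D can be sketched as the following pipeline; the file name `osd_samples.csv` and the column names are assumptions matching the Table 1 fields, while the eps/min_samples values are the ones given above.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("osd_samples.csv")                  # step A: load the raw samples
df["log10_throughput"] = np.log10(df["Throughput"])  # step B: log-scale the throughput

features = df[["log10_throughput", "Latency"]].to_numpy()
scaled = StandardScaler().fit_transform(features)    # step B: mean 0, variance 1
projected = PCA(n_components=2).fit_transform(scaled)  # step B: rotate to principal axes

# step C: density clustering in the PCA space (radius 0.5, min samples 5)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(projected)

clean = df[labels != -1].reset_index(drop=True)      # step D: drop noise entries (-1)
```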
3. A regression model is built by performing polynomial regression on the cleaned data set.
3.1 Model selection.
Since normal drives within a node have similar latency-to-throughput relationships (i.e., they cluster well together), the invention can use a regression model to describe the behavior of "normal" drives and to delineate the range of variation for slow fault detection. Classical regression models include linear regression, polynomial regression, and advanced methods such as kernel regression. The invention does not use plain linear regression, because the dependency of latency on throughput is clearly nonlinear. Advanced models (e.g., kernel regression) are also unnecessary, because the mapping from throughput to latency is mostly monotonic (i.e., latency increases with throughput). Polynomial regression is preferable because it handles the nonlinearity while keeping the model simple (i.e., achieving the required goodness of fit with a modest number of parameters).
3.2 Polynomial regression.
Polynomial regression is a regression analysis method that establishes the relationship between an independent variable (feature) and a dependent variable (target) by fitting the data with a polynomial function. Unlike a linear regression model, a polynomial regression model allows higher-order terms of the independent variables, making the model more flexible and better able to fit nonlinear relationships.
The principle of the polynomial regression model is as follows:
A. model expression polynomial regression model expression can be written as:
,
where y is the dependent variable (target), x is the independent variable (feature), Is a coefficient of the model, n is an order of the polynomial,Is an error term.
B. Fitting procedure. Polynomial regression is fitted by minimizing the sum of squared residuals, i.e., finding, via an optimization algorithm, the coefficients that minimize the error between the model's predictions and the true values. This can be solved with least squares or similar methods.
C. Feature transformation. In polynomial regression, if there is only one feature x, the polynomial terms (x², x³, and so on) in the model are obtained by polynomial transformation of the original feature. For example, for quadratic polynomial regression, the feature x is transformed into [x, x²], after which an ordinary linear regression model is fitted (see the sketch after this list).
D. Order selection. In practical applications it is important to choose a suitable polynomial order n: too low an order can cause underfitting, where the model fails to capture the complexity of the data, while too high an order can cause overfitting, where the model becomes too sensitive to noise and generalizes poorly to new data.
E. Evaluation. The performance of a polynomial regression model can be assessed with the usual regression metrics, such as mean squared error (MSE), root mean squared error (RMSE), and the coefficient of determination (R-squared).
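As referenced in item C above, a minimal sketch of the feature transformation using scikit-learn: a single feature x is expanded into [x, x²] so that an ordinary linear model can fit a quadratic relation.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x = np.array([[1.0], [2.0], [3.0]])
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)
print(X_poly)
# [[1. 1.]
#  [2. 4.]
#  [3. 9.]]
```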
3.3 Polynomial regression is applied to establish the binomial relation between latency and log10_throughput.
First, a PolynomialFeatures object is constructed to perform the binomial feature transformation; then a LinearRegression model is constructed and fitted; finally, latency_pred is predicted with the fitted model, as sketched below.
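A minimal sketch of this fit, assuming the cleaned DataFrame `clean` from the preprocessing sketch above (the column names are illustrative):

```python
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

X = clean[["log10_throughput"]].to_numpy()  # from the preprocessing sketch
y = clean["Latency"].to_numpy()

poly = PolynomialFeatures(degree=2, include_bias=False)  # binomial features [x, x^2]
X_poly = poly.fit_transform(X)

model = LinearRegression().fit(X_poly, y)
latency_pred = model.predict(X_poly)
```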
4. The upper prediction limit is used as the slow fault detection threshold. The model is then applied to the original data set to identify out-of-bound entries and mark them as slow entries.
4.1 Constructing the threshold from the model.
A. Calculate the standard deviation of the residuals (prediction errors) between the model's predictions 'latency_pred' and the actual values 'latency'. This standard deviation measures the dispersion between predicted and actual values, i.e., the model's prediction accuracy.
B. Calculate the upper confidence limit: using the properties of the standard normal distribution in statistics (the z distribution), determine the z quantile corresponding to the given confidence level (99.9%); 3.291 is the z quantile at the 99.9% confidence level. The standard deviation is then multiplied by this z quantile to obtain the upper bound of the confidence interval, which represents the upper range of model predictions at 99.9% confidence (see the sketch after this list).
C. The model is visualized as shown in Fig. 4.
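Continuing the sketch, steps A-B translate directly into a few lines; 3.291 is the z quantile stated above, and `y` / `latency_pred` come from the previous sketch:

```python
residuals = y - latency_pred        # prediction errors (step A)
sigma = residuals.std()             # their standard deviation

Z_999 = 3.291                       # z quantile at 99.9% confidence (step B)
upper_bound = latency_pred + Z_999 * sigma  # upper-limit curve as drawn in Fig. 4
```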
4.2 Identifying abnormal entries.
Fig. 4 shows the fitted drive latency curve (green line) and the 99.9% upper limit (red line). The invention collects 1 hour of data per node per day; after per-node regression, with samples arriving every 15 s, a sliding window of 5 minutes is defined, and every 5 minutes the proportion of each drive's latency samples exceeding the upper limit is checked. If the proportion exceeds a certain threshold (configurable here), a slow fault event is considered to have occurred.
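A sketch of the windowed check under stated assumptions: with 15 s samples a 5-minute window holds 20 entries, the window is stepped rather than fully sliding for simplicity, and the ratio threshold is an illustrative placeholder for the configurable value.

```python
import numpy as np

WINDOW = 20            # 5 min / 15 s samples per window
RATIO_THRESHOLD = 0.5  # illustrative placeholder for the configurable value

is_slow = y > upper_bound  # per-entry slow flags from step 4

def window_events(flags, window=WINDOW, threshold=RATIO_THRESHOLD):
    """Return one slow-fault-event flag per (stepped) 5-minute window."""
    events = []
    for start in range(0, len(flags) - window + 1, window):
        ratio = flags[start:start + window].mean()  # slowRatio for this window
        events.append(ratio > threshold)            # curSlow for this window
    return events
```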
5. Slow fault events are identified. The daily slow fault events of each drive are counted using a sliding window and a scoreboard mechanism, scores are calculated, and the scores are reported to the Mgr.
The specific score calculation process is as follows:
Determine whether a slow fault event occurred within the last 2 minutes:
slowRatio = slowCount / totalCount,
where slowCount is the number of slow fault entries analyzed and totalCount is the total number of entries;
curSlow = 1 if slowRatio > slowThreshold, otherwise curSlow = 0;
where slowThreshold is the slow fault event proportion threshold; if curSlow = 1, a slow fault event is considered to have occurred.
If a slow fault event occurs now but no slow fault event occurred in the last 2 minutes, the Score decays by a configured decay factor (the initial Score is 10); if repeated decay drives the result below 0, the Score is set to 0.
If a slow fault event also occurred within the last 2 minutes, the Score grows by a configured growth factor; once the Score exceeds the threshold (default 100), the calculation stops.
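A sketch of the scoreboard update; since the exact decay and growth factors are not given in the text, `decay_factor` and `grow_factor` below are illustrative placeholders, and the multiplicative form is an assumption. The branching follows the description above: a slow event after a quiet window decays the Score, consecutive slow events grow it, and the Score is clamped at 0 and frozen once it exceeds the threshold (default 100).

```python
def update_score(score, cur_slow, prev_slow,
                 decay_factor=0.9, grow_factor=1.5,
                 floor=0.0, ceiling=100.0):
    """One scoreboard update per 2-minute window; the initial score is 10."""
    if score > ceiling:
        return score                  # threshold exceeded: stop calculating
    if cur_slow and prev_slow:
        score *= grow_factor          # slow event following a slow window: grow
    elif cur_slow and not prev_slow:
        score *= decay_factor         # slow event after a quiet window: decay
        score = max(score, floor)     # clamp at 0
    return score
```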
After the calculation is completed, the node's drive score pairs <OSDID, Score> are sent to the Mgr; the Mgr module ranks all drives and forwards the result to second-line engineers for intervention.
Each day the Mgr module collects the drive scores reported by the OC module on each node, produces the overall ranking, derives a slow-fault disk list of the corresponding proportion according to the configured threshold, and dispatches it to second-line engineers for intervention.
The emerging fail-slow fault plagues both software and hardware: the affected component keeps running, but its performance is degraded. To address this problem, the invention introduces a new node-level, lightweight, small-data-set regression-based detection of drive-level slow fault events, discovers such problems in advance, and effectively improves system availability and the long-tail latency problem.
Fig. 5 is a schematic diagram of the latency variation of disk1 over 1 h on a host, and Fig. 6 is a schematic diagram of the latency variation of disk4 over 1 h on the same host. In Figs. 5 and 6, the yellow curve is the raw drive data, the red curve is the polynomial regression curve fitted on the cleaned data of the host's drives, and the green curve is the 99.9% upper limit of the polynomial regression. Fig. 6 shows that no slow fault event occurs on disk4, while Fig. 5 suggests that a slow fault event may exist on disk1.
The invention can identify slow fault events and discover such problems in advance, thereby effectively mitigating the long-tail problem and improving the availability of the Ceph system.
The embodiment of the invention provides a computer device, including a processor, a memory and a bus, where the memory stores machine-readable instructions executable by the processor; when the device runs, the processor and the memory communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the slow fault detection method in a distributed storage system according to any of the above.
In particular, the above-mentioned memory and processor can be general-purpose memory and processor, and are not particularly limited herein, and the slow failure detection method in the above-mentioned distributed storage system can be performed when the processor runs a computer program stored in the memory.
It will be appreciated by those skilled in the art that the structure of the computer device is not limiting of the computer device and may include more or fewer components than shown, or may be combined with or separated from certain components, or may be arranged in a different arrangement of components.
In some embodiments, the computer device may further include a touch screen operable to display a graphical user interface (e.g., a launch interface of an application) and to receive user operations on the graphical user interface (e.g., launch operations on the application). Specifically, the touch screen may include a display panel and a touch panel. The display panel may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. The touch panel may collect touch or non-touch operations on or near it and generate preset operation instructions, for example, operations by the user on or near the touch panel using any suitable object such as a finger or a stylus. In addition, the touch panel may include two parts: a touch detection device and a touch controller. The touch controller receives touch information from the touch detection device, converts it into information the processor can handle, and sends it to the processor; it can also receive commands sent by the processor and execute them. The touch panel may be implemented with various types such as resistive, capacitive, infrared and surface acoustic wave, or with any technology developed in the future. Further, the touch panel may overlay the display panel; the user may operate on or near the touch panel overlaying the display panel according to the graphical user interface displayed by the display panel, and upon detecting an operation on or near it, the touch panel passes it to the processor to determine the user input, after which the processor provides a corresponding visual output on the display panel in response. The touch panel and the display panel may be implemented as two independent components or may be integrated.
Corresponding to the above method, an embodiment of the present invention further provides a storage medium having a computer program stored thereon; when the computer program is executed by a processor, the steps of the slow fault detection method in a distributed storage system according to any of the above are performed.
The device provided by the embodiment of the present application may be specific hardware on the equipment, or software or firmware installed on the equipment. The implementation principle and technical effects of the provided device are the same as those of the foregoing method embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content in the foregoing method embodiments. Those skilled in the art will appreciate that, for convenience and brevity, the specific working processes of the system, apparatus and units described above may refer to the corresponding processes in the above method embodiments, and are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of modules is merely a logical function division, and there may be additional divisions in actual implementation, and for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with respect to each other may be through some communication interface, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other form.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiment provided by the application may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the specific embodiments of the present invention without departing from the spirit and scope of the present invention, and any modifications and equivalents are intended to be included in the scope of the claims of the present invention.