CN114398954A - Cloud server load prediction method based on hybrid optimization strategy extreme learning machine - Google Patents
Cloud server load prediction method based on hybrid optimization strategy extreme learning machine
- Publication number
- CN114398954A (application number CN202111545844.0A)
- Authority
- CN
- China
- Prior art keywords
- whale
- cloud server
- optimizer
- strategy
- server load
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3447—Performance evaluation by modeling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Technical Field
The invention relates to the field of cloud computing, and in particular to a cloud server load prediction method based on an extreme learning machine with a hybrid optimization strategy.
Background Art
With the rapid development of communication technology, the number of mobile terminal devices keeps growing and a wide variety of software application services have emerged, which has led to explosive growth of data on the Internet and pushed traditional data processing to a bottleneck. Cloud computing offers powerful computing resources and can process massive data in a short time through distributed parallel computing. However, achieving efficient resource management has always been a primary concern of cloud service operators and providers. Cloud load prediction, as one of the necessary measures for effectively monitoring the state of a cloud network, has become a hot topic in the field: the more accurate the prediction, the higher the resource utilization and the faster the response, which ultimately yields better quality of service and higher user satisfaction. Accurate load prediction also lays a solid foundation for cloud resource management and brings considerable economic benefits to cloud service operators and providers.
A large number of cloud server load prediction methods have been proposed in previous studies, and some practical ones have been successfully applied in industry. Broadly, these methods fall into two categories: those based on time series models and those based on machine learning models. Time-series-based methods convert non-stationary series into stationary series, but require the time series data to be strictly continuous. They also require data preprocessing before computation, which makes the process relatively complicated and poorly suited to situations such as missing data. Machine-learning-based methods include support vector machines, neural networks and Gaussian process regression; in practice, although cloud load data are highly complex, machine learning models can effectively predict cloud load resources with nonlinear variations. In addition, compared with time-series-based prediction methods, machine-learning-based methods have better generalization and mapping ability.
The extreme learning machine is a class of machine learning models built on feedforward neural networks; its hidden-layer weights are set randomly or manually and are never updated, so the learning process only computes the output weights. Compared with other shallow machine learning models such as the single-layer perceptron and the support vector machine, the extreme learning machine has advantages in learning speed and generalization ability. However, the parameter settings of the extreme learning machine model strongly affect the cloud load prediction results, and how to choose the optimal parameters remains a difficult problem.
Summary of the Invention
The object of the present invention is to provide a cloud server load prediction method based on an extreme learning machine with a hybrid optimization strategy, comprising the following steps:

1) Establish an extreme learning machine model.
The extreme learning machine model comprises an input layer, a hidden layer and an output layer, which are connected by neurons; the numbers of neurons in the three layers are u, r and e, respectively. The weight matrix between the input layer and the hidden layer is S, the weight matrix between the hidden layer and the output layer is P, and θ is the set of neuron thresholds of the hidden layer.
The weight matrix S, the weight matrix P and the set θ are respectively as follows:
where s_ru is the weight between the r-th hidden-layer neuron and the u-th input-layer neuron, p_re is the weight between the r-th hidden-layer neuron and the e-th output-layer neuron, and θ_r is the threshold of the r-th hidden-layer neuron.
The activation function of the extreme learning machine model is y(·) and the output is F, i.e.:
F = [f_1, f_2, ..., f_M]    (4)
where s_i = [s_i1, s_i2, ..., s_iu], a_i = [a_1j, a_2j, ..., a_uj]^T is the i-th column of the input matrix A, and the transpose of the output satisfies F^T = LP.
The hidden-layer output matrix L is as follows:
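To make the computation concrete, a minimal sketch of the extreme learning machine described above is given below; the sigmoid activation, the uniform random initialization range and the use of NumPy's pseudo-inverse are illustrative assumptions rather than choices fixed by the text.

```python
import numpy as np

def _hidden_output(A, S, theta):
    """Hidden-layer output matrix L (M x r) for inputs A (u x M)."""
    H = 1.0 / (1.0 + np.exp(-(S @ A + theta)))   # sigmoid activation y(.)
    return H.T

def train_elm(A, B, r, seed=0):
    """A: u x M input matrix, B: e x M target matrix, r: number of hidden neurons."""
    rng = np.random.default_rng(seed)
    u, _ = A.shape
    S = rng.uniform(-1.0, 1.0, size=(r, u))      # random input weights, never updated
    theta = rng.uniform(-1.0, 1.0, size=(r, 1))  # random hidden-layer thresholds
    L = _hidden_output(A, S, theta)
    P = np.linalg.pinv(L) @ B.T                  # least-squares solution of L P = F^T
    return S, theta, P

def predict_elm(A_new, S, theta, P):
    """Return e x M_new predictions for new inputs A_new (u x M_new)."""
    return (_hidden_output(A_new, S, theta) @ P).T
```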
2) Construct the whale optimizer.
The whale optimizer comprises a prey-encircling (foraging) phase, a random search phase and a bubble-net attack phase.
In the prey-encircling phase, the whale position is updated as follows:

pos(t+1) = pos*(t) - K·Q    (7)

where pos(t+1) is the predicted position of the whale individual, t is the current iteration number, pos*(t) is the position of the best whale individual so far, and K and Q are coefficient vectors with K = 2k·n - k, k = 2 - (2t/Max_iter)·β, Q = |J·pos*(t) - pos(t)|, J = 2n, and n a random vector in [0, 1].
In the random search phase, the whale position is updated as follows:

pos(t+1) = pos_rand(t) - K·Q    (8)

Q = |J·pos_rand(t) - pos(t)|    (9)
where pos_rand(t) is the position of a randomly selected whale. When |K| ≤ 1, the best individual is chosen as the solution, realising the local (exploitation) search of the whale optimizer; when |K| > 1, an individual is chosen at random, realising the global (exploration) search of the whale optimizer.
In the bubble-net attack phase, the whale position is updated as follows:

pos(t+1) = Q'·e^(o·l)·cos(2πl) + pos*(t)    (10)

where Q' = |pos*(t) - pos(t)| is the distance between the whale individual and the best individual obtained so far, o is a constant defining the spiral shape, and l is a number between -1 and 1. The random number v decides whether the shrinking-encirclement update or the spiral update is applied.
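The three phases can be combined into a single position-update routine; the sketch below follows the standard whale optimization algorithm structure, and the threshold v < 0.5 for choosing between encircling/searching and the spiral, as well as o = 1, are assumptions not fixed by the text.

```python
import numpy as np

def whale_step(pos, best_pos, pos_rand, k, o=1.0, rng=np.random.default_rng()):
    """One position update for a single whale (1-D array `pos`)."""
    n = rng.random(pos.shape)
    K = 2.0 * k * n - k                      # coefficient vector K = 2k·n - k
    J = 2.0 * rng.random(pos.shape)          # coefficient vector J = 2n
    v = rng.random()                         # random number selecting the behaviour
    if v < 0.5:
        if np.all(np.abs(K) <= 1.0):         # exploitation: encircle the best whale
            Q = np.abs(J * best_pos - pos)
            return best_pos - K * Q
        Q = np.abs(J * pos_rand - pos)       # exploration: follow a random whale
        return pos_rand - K * Q
    # bubble-net attack: spiral update around the best whale
    l = rng.uniform(-1.0, 1.0)
    Q_prime = np.abs(best_pos - pos)
    return Q_prime * np.exp(o * l) * np.cos(2.0 * np.pi * l) + best_pos
```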
3) Update the whale optimizer using a diverse hybrid optimization strategy.
The steps of updating the whale optimizer with the diverse hybrid optimization strategy are as follows:
3.1) Initialize the whale population using the Levy strategy. The mathematical model of the Levy strategy is as follows:
where λ is the random step size.
The parameters μ and η follow normal distributions, i.e.:
The parameters δ_μ and δ_η are respectively given by:
where Γ(·) is the standard gamma function.
3.2) Based on the Levy strategy, compute the initial positions of the individuals, i.e.:
where L_b is the lower bound, U_b is the upper bound, and Levy(v) is a random vector whose step size obeys the Levy distribution.
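One common way to realise a Levy-distributed step is Mantegna's algorithm; since the equation images are not reproduced above, the exponent value 1.5 and the Mantegna formulation used below are assumptions.

```python
import math
import numpy as np

def levy_step(size, gamma=1.5, rng=np.random.default_rng()):
    """Levy-distributed random step via Mantegna's algorithm (assumed form)."""
    sigma_mu = (math.gamma(1 + gamma) * math.sin(math.pi * gamma / 2)
                / (math.gamma((1 + gamma) / 2) * gamma * 2 ** ((gamma - 1) / 2))) ** (1 / gamma)
    mu = rng.normal(0.0, sigma_mu, size)       # mu ~ N(0, sigma_mu^2)
    eta = rng.normal(0.0, 1.0, size)           # eta ~ N(0, 1)
    return mu / np.abs(eta) ** (1 / gamma)     # lambda = mu / |eta|^(1/gamma)

def init_population(pop_size, dim, lb, ub, rng=np.random.default_rng()):
    """Initial whale positions spread by Levy steps and clipped to [lb, ub]."""
    base = lb + (ub - lb) * rng.random((pop_size, dim))
    pos = base + 0.01 * levy_step((pop_size, dim), rng=rng) * (ub - lb)
    return np.clip(pos, lb, ub)
```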
3.3) Introduce a nonlinear convergence factor β into the whale optimizer, i.e.:
where Max_iter is the maximum number of iterations.
4) Introduce the migration strategy into the whale optimizer to obtain the hybrid-strategy whale optimizer.
After the migration strategy is introduced into the whale optimizer, the whale swarm positions are updated as follows:
pos(t+1) = B·pos(t) + pos(t)·randn(0, σ²)    (16)
where B is the convergence coefficient, randn(0, σ²) follows a Gaussian distribution, and B_max and B_min are the upper and lower bounds of B.
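A minimal sketch of the migration update of equation (16) follows; the linear decay of B from B_max to B_min over the iterations and σ = 1 are assumptions (only the bounds are stated above, with the values 0.9 and 0.4 given in Embodiment 2).

```python
import numpy as np

def migrate(pos, t, max_iter, b_max=0.9, b_min=0.4, sigma=1.0,
            rng=np.random.default_rng()):
    """Migration update pos(t+1) = B*pos(t) + pos(t)*randn(0, sigma^2)."""
    B = b_max - (b_max - b_min) * t / max_iter   # assumed linear decay of B
    return B * pos + pos * rng.normal(0.0, sigma, size=pos.shape)
```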
5) Obtain historical cloud server load data and train the hybrid-strategy whale optimizer to obtain the optimal hyperparameters.
The cloud server load data are normalized.
The cloud server load data are divided into a training set and a test set; the training set is used to train the hybrid-strategy whale optimizer, and the test set is used to test it.
The training set of the extreme learning machine model comprises an input matrix A and an output matrix B, i.e.:
where M is the number of training samples of cloud server load data.
During training of the hybrid-strategy whale optimizer, the weight matrix S and the neuron thresholds θ are constants, the weight matrix P is the unknown parameter, and P is solved by the least squares method.
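How a candidate whale position is scored is not spelled out above; one natural choice, sketched below, is to decode the position into ELM hyperparameters (for example the number of hidden neurons and the seed of the random weights), train the ELM on the training set, and use the prediction error as the fitness. The decoding scheme and the RMSE criterion are assumptions; train_elm and predict_elm refer to the illustrative helpers sketched after step 1).

```python
import numpy as np

def fitness(position, A_train, B_train, A_val, B_val):
    """Score one whale: decode hyperparameters, train the ELM, return RMSE."""
    r = int(round(position[0]))      # assumed: first dimension = number of hidden neurons
    seed = int(round(position[1]))   # assumed: second dimension = seed for S and theta
    S, theta, P = train_elm(A_train, B_train, r=max(r, 1), seed=seed)
    pred = predict_elm(A_val, S, theta, P)
    return float(np.sqrt(np.mean((pred - B_val) ** 2)))
```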
6) Input the optimal hyperparameters into the extreme learning machine model to establish the cloud server load prediction model.
7) Obtain the current cloud server load data, input them into the cloud server load prediction model, and predict the future load of the cloud server.
It is worth noting that cloud server load data exhibit obvious nonlinear and non-stationary characteristics, for which methods based on time series models are relatively poorly suited; the present invention therefore proposes an extreme learning machine with a hybrid optimization strategy to predict the cloud server load. The extreme learning machine model used here has strong nonlinear mapping and generalization abilities, and because the invention does not use gradient descent to update parameters in cloud server load prediction, it greatly reduces model complexity and training time compared with conventional neural network models.
In addition, because the parameter settings of the extreme learning machine model affect the final prediction accuracy, the present invention proposes a hybrid optimization strategy based on the whale optimizer to determine the hyperparameters of the extreme learning machine model. Specifically, the invention uses the Levy strategy to initialize the population; the random-walk behaviour of the Levy strategy extends the search range of the whale swarm and effectively prevents individual whales from becoming trapped in local extrema, which helps the whale optimizer achieve better convergence. Meanwhile, in the foraging and encircling phases the whale optimizer converges at the same speed in the early and late stages, which greatly reduces the overall convergence speed of the optimization; a nonlinear convergence factor is therefore introduced into the whale optimizer to address this problem.
The technical effect of the present invention is beyond doubt. The invention provides a cloud server load prediction method based on an extreme learning machine with a hybrid optimization strategy, which can greatly improve the prediction accuracy of the cloud server load.
The invention comprehensively considers the nonlinear and non-stationary characteristics of cloud server load data, proposes a hybrid-strategy whale optimizer with better optimization performance, and combines the proposed hybrid-strategy whale optimizer with the extreme learning machine model to provide effective and accurate predictions, greatly improving the usage efficiency of cloud server computing resources.
Brief Description of the Drawings
Figure 1 is a framework diagram of the cloud server load prediction algorithm;
Figure 2 is a workflow diagram of the cloud server load prediction algorithm.
Detailed Description of the Embodiments
The present invention is further described below with reference to the embodiments, but this should not be understood as limiting the scope of the above subject matter of the present invention to the following embodiments. Various substitutions and changes made according to common technical knowledge and customary means in the field, without departing from the above technical idea of the present invention, shall all fall within the protection scope of the present invention.
Embodiment 1:
Referring to Figures 1 and 2, a cloud server load prediction method based on an extreme learning machine with a hybrid optimization strategy comprises the following steps:
1) Establish an extreme learning machine model.

The extreme learning machine model comprises an input layer, a hidden layer and an output layer, which are connected by neurons; the numbers of neurons in the three layers are u, r and e, respectively. The weight matrix between the input layer and the hidden layer is S, the weight matrix between the hidden layer and the output layer is P, and θ is the set of neuron thresholds of the hidden layer.

The weight matrix S, the weight matrix P and the set θ are respectively as follows:

where s_ru is the weight between the r-th hidden-layer neuron and the u-th input-layer neuron, p_re is the weight between the r-th hidden-layer neuron and the e-th output-layer neuron, and θ_r is the threshold of the r-th hidden-layer neuron.

The activation function of the extreme learning machine model is y(·) and the output is F, i.e.:

F = [f_1, f_2, ..., f_M]    (4)

where s_i = [s_i1, s_i2, ..., s_iu], a_i = [a_1j, a_2j, ..., a_uj]^T is the i-th column of the input matrix A, and the transpose of the output satisfies F^T = LP.

The hidden-layer output matrix L is as follows:
2) Construct the whale optimizer.

The whale optimizer comprises a prey-encircling (foraging) phase, a random search phase and a bubble-net attack phase.

In the prey-encircling phase, the whale position is updated as follows:

pos(t+1) = pos*(t) - K·Q    (7)

where pos(t+1) is the predicted position of the whale individual, t is the current iteration number, pos*(t) is the position of the best whale individual so far, and K and Q are coefficient vectors with K = 2k·n - k, k = 2 - (2t/Max_iter)·β, Q = |J·pos*(t) - pos(t)|, J = 2n, and n a random vector in [0, 1].

In the random search phase, the whale position is updated as follows:

pos(t+1) = pos_rand(t) - K·Q    (8)

Q = |J·pos_rand(t) - pos(t)|    (9)

where pos_rand(t) is the position of a randomly selected whale. When |K| ≤ 1, the best individual is chosen as the solution, realising the local (exploitation) search of the whale optimizer; when |K| > 1, an individual is chosen at random, realising the global (exploration) search of the whale optimizer.

In the bubble-net attack phase, the whale position is updated as follows:

pos(t+1) = Q'·e^(o·l)·cos(2πl) + pos*(t)    (10)

where Q' = |pos*(t) - pos(t)| is the distance between the whale individual and the best individual obtained so far, o is a constant defining the spiral shape, and l is a number between -1 and 1. The random number v decides whether the shrinking-encirclement update or the spiral update is applied.
3) Update the whale optimizer using a diverse hybrid optimization strategy.

The steps of updating the whale optimizer with the diverse hybrid optimization strategy are as follows:

3.1) Initialize the whale population using the Levy strategy. The mathematical model of the Levy strategy is as follows:

where λ is the random step size.

The parameters μ and η follow normal distributions, i.e.:

The parameters δ_μ and δ_η are respectively given by:

where Γ(·) is the standard gamma function.

3.2) Based on the Levy strategy, compute the initial positions of the individuals, i.e.:

where L_b is the lower bound, U_b is the upper bound, and Levy(v) is a random vector whose step size obeys the Levy distribution.

3.3) Introduce a nonlinear convergence factor β into the whale optimizer, i.e.:

where Max_iter is the maximum number of iterations.
4) Introduce the migration strategy into the whale optimizer to obtain the hybrid-strategy whale optimizer.

After the migration strategy is introduced into the whale optimizer, the whale swarm positions are updated as follows:

pos(t+1) = B·pos(t) + pos(t)·randn(0, σ²)    (16)

where B is the convergence coefficient, randn(0, σ²) follows a Gaussian distribution, and B_max and B_min are the upper and lower bounds of B.

5) Obtain historical cloud server load data and train the hybrid-strategy whale optimizer to obtain the optimal hyperparameters.

The cloud server load data are normalized.

The cloud server load data are divided into a training set and a test set; the training set is used to train the hybrid-strategy whale optimizer, and the test set is used to test it.

The training set of the extreme learning machine model comprises an input matrix A and an output matrix B, i.e.:

where M is the number of training samples of cloud server load data.

During training of the hybrid-strategy whale optimizer, the weight matrix S and the neuron thresholds θ are constants, the weight matrix P is the unknown parameter, and P is solved by the least squares method.

6) Input the optimal hyperparameters into the extreme learning machine model to establish the cloud server load prediction model.

7) Obtain the current cloud server load data, input them into the cloud server load prediction model, and predict the future load of the cloud server.
Embodiment 2:
A cloud server load prediction method based on an extreme learning machine with a hybrid optimization strategy comprises the following steps:
1) Divide the cloud server load data, determine the training set and test set of the cloud server load prediction model, and apply unified normalization to the data sets;
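Min-max scaling is one standard way to realise the unified normalization of the load data; the [0, 1] target range below is an assumption.

```python
import numpy as np

def normalize(series):
    """Scale a 1-D load series into [0, 1]; keep the bounds to invert later."""
    lo, hi = float(series.min()), float(series.max())
    return (series - lo) / (hi - lo), (lo, hi)

def denormalize(scaled, bounds):
    """Map normalized predictions back to the original load scale."""
    lo, hi = bounds
    return scaled * (hi - lo) + lo
```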
2) Build the extreme learning machine model.
2.1) The whole model consists of a three-layer network structure, namely an input layer, a hidden layer and an output layer, connected by neurons. For the extreme learning machine model, the numbers of neurons in the three layers are u, r and e, respectively. The weight matrix between the input layer and the hidden layer is S, the weight matrix between the hidden layer and the output layer is P, and θ is the set of neuron thresholds of the hidden layer, where:
2.2) The extreme learning machine model has M training sample sets of cloud server load data. A is the input matrix and B is the output matrix, where:
The activation function of the extreme learning machine model is y(·) and the output is F; the two are expressed as:
F = [f_1, f_2, ..., f_M],    (6)
where s_i = [s_i1, s_i2, ..., s_iu] and a_i = [a_1j, a_2j, ..., a_uj]^T. Furthermore F^T = LP, where:
where L is the hidden-layer output matrix and F^T is the transpose of F. Specifically, when y is infinitely differentiable, S and θ remain constant during training. The only unknown parameter is then P, which can be solved by the least squares method, i.e. min_P ||LP - F^T||.
3) Construct the whale optimizer, which simulates the hunting strategy of humpback whales.
3.1) Prey-encircling (foraging) phase. In this process, the whale swarm shares the position information of the prey and finally surrounds it. Assuming that the current best individual of the whale swarm marks the target position, the other individuals update their positions and try to approach the best individual's position. The whale position update formula is as follows:

pos(t+1) = pos*(t) - K·Q

where pos(t) is the current position of the whale individual, t is the current iteration number, pos*(t) is the position of the best whale individual, and K and Q are coefficient vectors computed as K = 2k·n - k, J = 2n, k = 2 - (2t/Max_iter)·β and Q = |J·pos*(t) - pos(t)|, where n is a random vector in [0, 1] and k decreases from 2 to 0 during the iterations.
3.2) Random search phase. In this phase, the current position of a single randomly selected whale is taken as the reference solution. In contrast to the previous process, the whale positions are updated according to a randomly selected whale rather than the best individual found so far. Adjusting the value of K drives the other individuals away from the selected whale; the update formulas are as follows:
pos(t+1) = pos_rand(t) - K·Q,    (10)
Q = |J·pos_rand(t) - pos(t)|,    (11)
where pos_rand(t) is the position of a randomly selected whale. When |K| ≤ 1, the best individual is chosen as the solution, realising the local (exploitation) search of the whale optimizer; when |K| > 1, an individual is chosen at random, realising the global (exploration) search of the whale optimizer.
3.3) Bubble-net attack phase
In this phase, the whale exhibits two behaviours: shrinking encirclement and spiralling around the prey. The encircling motion is realised in step 3.1 by decreasing k, and a spiral function is used to simulate the whale's spiral behaviour. The choice between the two behaviours is determined by a random number v, so that the whale shrinks the encirclement while moving along a spiral path. The position of an individual whale is updated as follows:

pos(t+1) = Q'·e^(o·l)·cos(2πl) + pos*(t)

where Q' = |pos*(t) - pos(t)| is the distance between the whale individual and the best individual obtained so far, o is a constant defining the spiral shape, and l is a number between -1 and 1.
4) Update the whale optimizer using a diverse hybrid optimization strategy to further enhance its local and global optimization capabilities.
4.1) Initialize the whale population using the Levy strategy. Population initialization has an important influence on the optimizer's optimization process, and a high-quality initial state can speed up the optimizer's convergence. The Levy strategy is used to initialize the population: thanks to its random-walk behaviour, it extends the search range of the whale swarm and effectively prevents individual whales from becoming trapped in local extrema. The mathematical model of the Levy strategy is as follows:
where λ is the random step size, and μ and η follow normal distributions, i.e.:
The values of δ_μ and δ_η can be computed from:
where Γ(·) is the standard gamma function. The initial positions based on the Levy strategy can then be calculated by the following formula:
where L_b is the lower bound, U_b is the upper bound, and Levy(v) is a random vector whose step size obeys the Levy distribution.
4.2) Nonlinear convergence factor. In the foraging and encircling phases, a nonlinear convergence factor is introduced into the whale optimizer to improve the convergence speed. The nonlinear convergence factor is:
where Max_iter is the maximum number of iterations.
5) Introduce the migration strategy into the whale optimizer to increase the diversity of the whale swarm and enhance the whales' ability to escape local minima. The updated position of the whale swarm after migration is given by:
pos(t+1) = B·pos(t) + pos(t)·randn(0, σ²),    (18)
where B (B_max = 0.9, B_min = 0.4) is the convergence coefficient and randn(0, σ²) follows a Gaussian distribution.
6) Initialize the parameters of the hybrid-strategy whale optimizer, input the cloud server load training set into the optimizer for model training, and feed the resulting optimal hyperparameters into the extreme learning machine model. Finally, input the cloud server load test set into the extreme learning machine model to predict the cloud server load.
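Putting the pieces together, one possible realisation of steps 1) to 6) is sketched below; the sliding-window construction of the samples, the population size, the iteration count and the search bounds are assumptions, and init_population, fitness, whale_step, migrate, train_elm and predict_elm refer to the illustrative helpers sketched earlier.

```python
import numpy as np

def optimize_and_predict(load, window=12, pop=20, max_iter=50,
                         lb=np.array([5.0, 0.0]), ub=np.array([100.0, 1000.0])):
    """End-to-end sketch: build samples, tune ELM hyperparameters with the
    hybrid-strategy whale optimizer, then predict on the held-out set."""
    # Sliding-window samples: `window` past points predict the next point.
    X = np.array([load[i:i + window] for i in range(len(load) - window)]).T
    Y = load[window:].reshape(1, -1)
    split = int(0.8 * X.shape[1])
    A_tr, B_tr, A_te, B_te = X[:, :split], Y[:, :split], X[:, split:], Y[:, split:]

    rng = np.random.default_rng(0)
    swarm = init_population(pop, 2, lb, ub, rng)
    scores = np.array([fitness(w, A_tr, B_tr, A_te, B_te) for w in swarm])
    best = swarm[scores.argmin()].copy()

    for t in range(max_iter):
        k = 2.0 - 2.0 * t / max_iter                  # nonlinear factor beta omitted here
        for i in range(pop):
            cand = whale_step(swarm[i], best, swarm[rng.integers(pop)], k, rng=rng)
            cand = migrate(cand, t, max_iter, rng=rng)
            cand = np.clip(cand, lb, ub)
            if (s := fitness(cand, A_tr, B_tr, A_te, B_te)) < scores[i]:
                swarm[i], scores[i] = cand, s
        best = swarm[scores.argmin()].copy()

    # Build the final ELM with the optimal hyperparameters and predict.
    S, theta, P = train_elm(A_tr, B_tr, r=int(best[0]), seed=int(best[1]))
    return predict_elm(A_te, S, theta, P), B_te
```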
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111545844.0A CN114398954B (en) | 2021-12-16 | 2021-12-16 | Cloud server load prediction method based on hybrid optimization strategy extreme learning machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114398954A true CN114398954A (en) | 2022-04-26 |
CN114398954B CN114398954B (en) | 2025-07-08 |
Family
ID=81227864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111545844.0A Active CN114398954B (en) | 2021-12-16 | 2021-12-16 | Cloud server load prediction method based on hybrid optimization strategy extreme learning machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114398954B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0428466D0 (en) * | 2000-12-22 | 2005-02-02 | Epic Systems Corp | System and method for a seamless user interface for an integrated electronic health care information system |
EP1659520A1 (en) * | 2004-11-19 | 2006-05-24 | Siemens Aktiengesellschaft | An agent-based architecture for manufacturing scheduling, related negotiation protocol and computer program product |
CN109886589A (en) * | 2019-02-28 | 2019-06-14 | 长安大学 | A method for low-carbon workshop scheduling based on improved whale optimization algorithm |
CN110378490A (en) * | 2019-07-24 | 2019-10-25 | 江苏壹度科技股份有限公司 | Based on the semiconductor yields prediction technique for improving whale algorithm optimization support vector machines |
CN112836846A (en) * | 2020-12-02 | 2021-05-25 | 红云红河烟草(集团)有限责任公司 | A double-layer optimization algorithm for multi-direction intermodal transportation scheduling for cigarette delivery |
CN112783172A (en) * | 2020-12-31 | 2021-05-11 | 重庆大学 | AGV and machine integrated scheduling method based on discrete whale optimization algorithm |
CN113326969A (en) * | 2021-04-29 | 2021-08-31 | 淮阴工学院 | Short-term wind speed prediction method and system based on improved whale algorithm for optimizing ELM |
CN113361785A (en) * | 2021-06-10 | 2021-09-07 | 国网河北省电力有限公司经济技术研究院 | Power distribution network short-term load prediction method and device, terminal and storage medium |
Non-Patent Citations (2)
Title |
---|
XILIANG ZHANG, HOANG NGUYEN, ... JIAN ZHOU: "Novel Extreme Learning Machine-Multi-Verse Optimization Model for Predicting Peak Particle Velocity Induced by Mine Blasting", 《NATURAL RESOURCES RESEARCH》, 13 October 2021 (2021-10-13), pages 4735 - 4751, XP037613431, DOI: 10.1007/s11053-021-09960-z * |
杨博: "改进鲸鱼算法及其在路径规划的应用", 《计算机测量与控制》, vol. 29, no. 2, 25 February 2021 (2021-02-25), pages 187 - 193 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115599763A (en) * | 2022-09-22 | 2023-01-13 | 西安电子科技大学(Cn) | A Workload Prediction Method for Cloud Native Database |
CN115844419A (en) * | 2022-11-11 | 2023-03-28 | 广东省大湾区集成电路与系统应用研究院 | Electrocardiosignal quality detection method and device and computer readable medium |
TWI856829B (en) * | 2023-09-26 | 2024-09-21 | 台灣大哥大股份有限公司 | Load distribution device and method |
Also Published As
Publication number | Publication date |
---|---|
CN114398954B (en) | 2025-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
- PB01 | Publication | |
- SE01 | Entry into force of request for substantive examination | |
- GR01 | Patent grant | |