
CN107330294A - Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor - Google Patents


Info

Publication number
CN107330294A
CN107330294A (application CN201710577695.3A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710577695.3A
Other languages
Chinese (zh)
Inventor
肖冬
李北京
毛亚纯
柳小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201710577695.3A
Publication of CN107330294A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00: Subject matter not provided for in other main groups of this subclass

Landscapes

  • External Artificial Organs (AREA)

Abstract

The present invention relates to an application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor, comprising the following steps: 1) build an extreme learning machine model with multiple hidden layers and obtain the output expression of the multi-hidden-layer extreme learning machine model; 2) update the above multi-hidden-layer extreme learning machine model in real time and output the expression of the updated model. The present invention handles the data variation of batch processes with an online sequential multi-hidden-layer extreme learning machine with forgetting factor; the method can adjust the model according to changes in the data structure and can also deeply optimize the model parameters, achieving a better effect and ensuring that the final hidden output is closer to the expected hidden output.

Description

Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor

Technical Field

The invention relates to a learning machine, and in particular to an application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor.

Background

The variables of a batch (intermittent) process are strongly nonlinear and coupled, and much of the real-time data changes over time, so the data are strongly time-sensitive. A traditional single-hidden-layer or multi-hidden-layer extreme learning machine outputs predictions from a model established in advance and cannot adjust the model to changes in the data structure; the model is too rigid. A single-hidden-layer extreme learning machine also does not optimize the model structure parameters thoroughly enough, cannot effectively reduce noise interference, and cannot guarantee that the final hidden output is close to the expected hidden output. An ensemble online sequential single-hidden-layer extreme learning machine (EOS-ELM), or one with a forgetting mechanism (FOS-ELM), is therefore needed: such machines can adjust the model to changes in the data structure and adapt to different time periods for a better effect, which matters because real-time data carry a series of unavoidable noise disturbances.

To date, no online sequential learning machine that satisfies the above requirements has been reported.

Summary of the Invention

Aiming at the shortcoming of prior-art single-hidden-layer and multi-hidden-layer extreme learning machines that the model cannot be adjusted to changes in the data structure, the problem to be solved by the present invention is to provide an application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor that can both adjust the model according to changes in the data structure and deeply optimize the model parameters.

To solve the above technical problem, the present invention adopts the following technical solution:

The application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor of the present invention comprises the following steps:

1) Build an extreme learning machine model with multiple hidden layers and obtain the output expression of the multi-hidden-layer extreme learning machine model;

2) Update the above multi-hidden-layer extreme learning machine model in real time and output the expression of the updated model.

In step 1), an extreme learning machine model with multiple hidden layers is built and its output expression obtained, as follows:

11) Given the samples and a network structure with multiple hidden layers, the activation function of the hidden layers is g and the network output is g(a, b, X), where a is the weight between the input layer and the first hidden layer, b is the bias of the first hidden layer, and X is the input matrix;

12) Assume the data change batch by batch and each batch remains valid for s units of time. The data of the k-th unit time are denoted $\chi_k = \{(x_i, t_i)\}_{i=1}^{N_k}$, where $N_j$ is the number of samples in batch j and $\chi_k$ is valid within [k, k+s], j = 0, 1, ..., k. The data of the (k+1)-th unit time are denoted $\chi_{k+1} = \{(x_i, t_i)\}_{i=1}^{N_{k+1}}$, where k is an arbitrarily large positive integer, $x_i$ is an input sample and $t_i$ its target (label) value.

13) Assume k ≥ s-1 and that the number of training samples is much larger than the number of hidden-layer nodes; $Z_{k+1}$ is the result predicted for the (k+1)-th unit time. For l = k-s+1, k-s+2, ..., k, the output of the first hidden layer of the network for the batch at time l is:

$$H_l = \bigl[\,G(a_i, b_i, x_j)\,\bigr]_{N_l \times L}, \qquad j = 1, \ldots, N_l, \; i = 1, \ldots, L.$$

$(a_i, b_i)$, i = 1, ..., L, are the randomly initialized weights and thresholds between the input layer and the first hidden layer; G is the hidden-layer activation function; T denotes the stacked targets of the batches within [k-s+1, k], with $T_l$ the target matrix of the l-th batch, l a positive integer in [k-s+1, k];

The output weight β of the final hidden layer is obtained as:

$$\beta(k)=\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}=P_{k}\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix},\qquad P_{k}=\Bigl(\sum_{l=k-s+1}^{k}H_{l}^{T}H_{l}\Bigr)^{-1}.$$

14) Let the weights and bias of the second hidden layer be $W_1$, $B_1$; the expected output of the second hidden layer is then:

$$H_1=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta(k)^{+};$$

15) Let $W_{HE}=[B_1\ W_1]$; the weights and bias of the second hidden layer are then computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$, 1 is a one-dimensional row vector whose elements are all 1, $g^{-1}(x)$ is the inverse of the activation function g(x), and $W_{HE}$ and $H_E$ are auxiliary variables;

16) Update the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

17) Let the weights and bias of the third hidden layer be $W_2$, $B_2$; the expected output of the third hidden layer is then

$$H_3=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta_{new}^{+}\quad(1)$$

18) Let $W_{HE1}=[B_2\ W_2]$; the weights and bias of the third hidden layer are then computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$, 1 is a one-dimensional row vector whose elements are all 1, $g^{-1}(x)$ is the inverse of the activation function g(x), and $W_{HE1}$ and $H_{E1}$ are auxiliary variables;

The updated output of the third hidden layer is:

$$H_4=g(W_{HE1}H_{E1})\quad(2)$$

19) Update the output weight β of the final hidden layer to:

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\quad(3)$$

The final output is then $f=\beta_{new1}H_4$.

For a network with more hidden layers, iterate formulas (1), (2), (3) in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations; at the end of each iteration set $\beta_{new}=\beta_{new1}$ and $H_2=H_4$. A code sketch of steps 11) to 19) is given below.
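As a concreteness aid only, the following NumPy sketch walks through steps 11) to 19). It is not the patent's reference implementation: it assumes a row-major convention (samples as rows, so $H_E=[\mathbf{1}\ H]$ and the recovery reads $W_{HE}=H_E^{+}g^{-1}(H_1)$, the transpose of the column notation above), uses the sigmoid activation chosen in the embodiment, and all function and variable names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_inv(y, eps=1e-6):
    # g^{-1}(x) = ln(x / (1 - x)), clipped away from 0 and 1 for stability
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def train_melm(X, T, L, seed=0):
    """Initial three-hidden-layer fit on the window of batches [k-s+1, k].

    X: (N, d) inputs of all window batches, stacked row-wise
    T: (N, m) corresponding targets; L: nodes per hidden layer.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    # steps 11)/13): random (a_i, b_i), first-hidden-layer output H
    a = rng.uniform(-1.0, 1.0, (d, L))
    b = rng.uniform(-1.0, 1.0, (1, L))
    H = sigmoid(X @ a + b)
    # step 13): beta(k) = H^+ T, keeping P_k = (H^T H)^{-1} for step 2);
    # the inverse assumes N >> L, as stated in step 13)
    P = np.linalg.inv(H.T @ H)
    beta = P @ (H.T @ T)
    # steps 14)-16): expected second-layer output H1 = T beta^+, parameter
    # recovery from g(H_E W_HE) = H1, then refit of the output weights
    H1 = T @ np.linalg.pinv(beta)
    HE = np.hstack([np.ones((N, 1)), H])
    W_HE = np.linalg.pinv(HE) @ sigmoid_inv(H1)
    H2 = sigmoid(HE @ W_HE)
    beta_new = np.linalg.pinv(H2) @ T
    # steps 17)-19): the same recovery for the third layer, equations (1)-(3)
    H3 = T @ np.linalg.pinv(beta_new)
    HE1 = np.hstack([np.ones((N, 1)), H2])
    W_HE1 = np.linalg.pinv(HE1) @ sigmoid_inv(H3)
    H4 = sigmoid(HE1 @ W_HE1)
    beta_new1 = np.linalg.pinv(H4) @ T
    return {"a": a, "b": b, "P": P, "beta": beta,
            "W_HE": W_HE, "W_HE1": W_HE1, "beta_new1": beta_new1}

def melm_predict(model, X):
    """Final output f (the text's f = beta_new1 H4, in row convention)."""
    H = sigmoid(X @ model["a"] + model["b"])
    ones = np.ones((len(X), 1))
    H2 = sigmoid(np.hstack([ones, H]) @ model["W_HE"])
    H4 = sigmoid(np.hstack([ones, H2]) @ model["W_HE1"])
    return H4 @ model["beta_new1"]
```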

In step 2), the above multi-hidden-layer extreme learning machine model is updated in real time and the expression of the updated model is output, as follows:

21) Let $Z_{k+2}$ be the result predicted for the (k+2)-th unit time. With the result at time (k+1) known and the same $(a_i, b_i)$, i = 1, ..., L, kept as before, the output weight is expressed as:

$$\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}\beta=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad\text{and}\quad\beta(k+1)=P_{k+1}\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}.$$

$P_{k+1}$ is then expressed in terms of $P_k$, so that the oldest batch leaves the window as the newest enters without re-inverting from scratch:

$$P_{k+1}=\Bigl(\sum_{l=k-s+2}^{k+1}H_{l}^{T}H_{l}\Bigr)^{-1}=P_k-P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigl(I+\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigr)^{-1}\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k,$$

and the output weight is updated as

$$\beta(k+1)=\beta(k)+P_{k+1}\Bigl(H_{k+1}^{T}\bigl(T_{k+1}-H_{k+1}\beta(k)\bigr)-H_{k-s+1}^{T}\bigl(T_{k-s+1}-H_{k-s+1}\beta(k)\bigr)\Bigr).$$

22) Let the weights and bias of the second hidden layer be $W_1$, $B_1$; the expected output of the second hidden layer is then

$$H_1=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta(k+1)^{+};$$

23) Let $W_{HE}=[B_1\ W_1]$; the weights and bias of the second hidden layer are then computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x);

24) Update the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

25) Let the weights and bias of the third hidden layer be $W_2$, $B_2$; the expected output of the third hidden layer is then

$$H_3=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta_{new}^{+}\quad(4)$$

26) Let $W_{HE1}=[B_2\ W_2]$; the weights and bias of the third hidden layer are then computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x);

27) Update the output of the third hidden layer to: $H_4=g(W_{HE1}H_{E1})$ (5)

28) Update the output weight β of the final hidden layer to:

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad(6)$$

The final output is then $f=\beta_{new1}H_4$.

The present invention further comprises the following step:

For a network with more hidden layers, iterate formulas (4), (5), (6) in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations; at the end of each iteration set $\beta_{new}=\beta_{new1}$ and $H_2=H_4$. A code sketch of this sliding-window update is given below.
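The following sketch, in the same row-major NumPy convention as the block above, is one way to realize the $P_{k+1}$ recursion and the $\beta(k+1)$ update of step 21); the sign matrix D and the Woodbury rearrangement are implementation choices, not taken from the text, and after each call the deeper layers would be refit exactly as in steps 22) to 28), i.e. by rerunning the second half of train_melm on the shifted window.

```python
import numpy as np

def fos_update(P, beta, H_old, T_old, H_new, T_new):
    """One sliding-window update: drop the oldest batch (H_old, T_old)
    from the window and add the newest (H_new, T_new).

    P    : P_k = (sum over the window of H_l^T H_l)^{-1}
    beta : current first-stage output weights beta(k)
    H_*  : (n, L) first-hidden-layer outputs; T_*: (n, m) targets.
    """
    # P_{k+1}^{-1} = P_k^{-1} - H_old^T H_old + H_new^T H_new; the -/+ signs
    # go into D, and the Woodbury identity avoids re-inverting from scratch
    # (D is its own inverse, since its diagonal entries are +-1).
    V = np.vstack([H_old, H_new])
    D = np.diag(np.concatenate([-np.ones(len(H_old)), np.ones(len(H_new))]))
    M = np.linalg.inv(D + V @ P @ V.T)
    P_next = P - P @ V.T @ M @ V @ P
    # beta(k+1) = beta(k) + P_{k+1} (H_new^T (T_new - H_new beta(k))
    #                               - H_old^T (T_old - H_old beta(k)))
    beta_next = beta + P_next @ (H_new.T @ (T_new - H_new @ beta)
                                 - H_old.T @ (T_old - H_old @ beta))
    return P_next, beta_next
```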

The present invention has the following beneficial effects and advantages:

1. The present invention handles the data variation of batch processes with an online sequential multi-hidden-layer extreme learning machine with forgetting factor. The method can both adjust the model according to changes in the data structure and deeply optimize the model parameters, achieving a better effect and ensuring that the final hidden output is closer to the expected hidden output.

Brief Description of the Drawings

Figure 1 shows the root-mean-square error of the training set under the FOS-ELM and FOS-MELM models when each batch of data has only 2 valid time units;

Figure 2 shows the root-mean-square error of the test set under the FOS-ELM and FOS-MELM models when each batch of data has only 2 valid time units;

Figure 3 shows the root-mean-square error of the training set under the FOS-ELM and FOS-MELM models when each batch of data has only 3 valid time units;

Figure 4 shows the root-mean-square error of the test set under the FOS-ELM and FOS-MELM models when each batch of data has only 3 valid time units.

Detailed Description

The present invention is further described below with reference to the accompanying drawings.

To bring the training output closer to the actual output, and considering that a single-hidden-layer extreme learning machine does not optimize the model structure parameters thoroughly enough and cannot effectively reduce noise interference, the present invention, building on earlier improvements to the extreme learning machine, processes the data variation of batch processes with an online sequential multi-hidden-layer extreme learning machine with forgetting factor, so that the final hidden output is closer to the expected hidden output. This method can both adjust the model according to changes in the data structure and deeply optimize the model parameters to achieve a better effect.

The application method of the online sequential multi-hidden-layer extreme learning machine with forgetting factor of the present invention comprises the following steps:

1) Build an extreme learning machine model with multiple hidden layers and obtain the output expression of the multi-hidden-layer extreme learning machine model;

2) Update the above multi-hidden-layer extreme learning machine model in real time and output the expression of the updated model.

This embodiment uses an ELM network structure with three hidden layers as an example to analyze the algorithm steps of the online sequential multi-hidden-layer extreme learning machine with forgetting mechanism (FOS-MELM). In step 1), an extreme learning machine model with multiple hidden layers is built and its output expression obtained, as follows:

11) First, given the samples and a network structure with three hidden layers (each hidden layer containing L nodes), the activation function of the hidden layers is chosen as g, so the network output is g(a, b, X), where a is the weight between the input layer and the first hidden layer, b is the bias of the first hidden layer, and X is the input matrix;

12) Assume the data change batch by batch and each batch lasts s units of time. The data of the k-th unit time can therefore be denoted $\chi_k = \{(x_i, t_i)\}_{i=1}^{N_k}$, where $N_j$ is the number of samples in batch j, $\chi_k$ is valid within [k, k+s], j = 0, 1, ..., k, and $t_i$ is the target value. The data of the (k+1)-th unit time can thus be denoted $\chi_{k+1} = \{(x_i, t_i)\}_{i=1}^{N_{k+1}}$.

13) Assume k ≥ s-1 and that the number of training samples is much larger than the number of hidden-layer nodes; $Z_{k+1}$ is the result predicted for the (k+1)-th unit time. For l = k-s+1, k-s+2, ..., k, the output of the first hidden layer of the network for the batch at time l is:

$$H_l = \bigl[\,G(a_i, b_i, x_j)\,\bigr]_{N_l \times L}, \qquad j = 1, \ldots, N_l, \; i = 1, \ldots, L.$$

$(a_i, b_i)$, i = 1, ..., L, are the randomly initialized weights and thresholds between the input layer and the first hidden layer. The output weight β of the final hidden layer is obtained as:

$$\beta(k)=\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}=P_{k}\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix},\qquad P_{k}=\Bigl(\sum_{l=k-s+1}^{k}H_{l}^{T}H_{l}\Bigr)^{-1}.$$

14) Let the weights and bias of the second hidden layer be $W_1$, $B_1$; the expected output of the second hidden layer is then:

$$H_1=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta(k)^{+};$$

15) Let $W_{HE}=[B_1\ W_1]$; the weights and bias of the second hidden layer are then computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x);

16) Update the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

17) Now let the weights and bias of the third hidden layer be $W_2$, $B_2$; the expected output of the third hidden layer is then

$$H_3=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta_{new}^{+}\quad(1)$$

18) Let $W_{HE1}=[B_2\ W_2]$; the weights and bias of the third hidden layer are then computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x); the updated output of the third hidden layer is $H_4=g(W_{HE1}H_{E1})$ (2)

19) Update the output weight β of the final hidden layer to

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\quad(3)$$

The final output is then $f=\beta_{new1}H_4$.

For a network with more hidden layers, iterate formulas (1), (2), (3) in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations; at the end of each iteration set $\beta_{new}=\beta_{new1}$ and $H_2=H_4$. A short derivation of the parameter-recovery formula used in steps 15) and 18) is given below.
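The recovery formula $W_{HE}=g^{-1}(H_1)H_E^{+}$ is stated without proof above; it follows in one line from requiring the layer to reproduce its expected output, with the pseudoinverse supplying the least-squares solution of the resulting linear system:

$$g(W_{HE}H_E)=H_1 \;\Longleftrightarrow\; W_{HE}H_E=g^{-1}(H_1) \;\Longrightarrow\; W_{HE}=g^{-1}(H_1)\,H_E^{+},$$

and for the sigmoid $g(x)=1/(1+e^{-x})$ chosen in this embodiment, the inverse $g^{-1}(x)=\ln\bigl(x/(1-x)\bigr)$ is applied element-wise.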

In step 2), the above multi-hidden-layer extreme learning machine model is updated in real time and the expression of the updated model is output, as follows:

21) Let $Z_{k+2}$ be the result predicted for the (k+2)-th unit time. With the result at time (k+1) known and the same $(a_i, b_i)$, i = 1, ..., L, kept as before, the output weight can be expressed as:

$$\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}\beta=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad\text{and}\quad\beta(k+1)=P_{k+1}\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}.$$

$P_{k+1}$ can then be expressed in terms of $P_k$, so that the oldest batch leaves the window as the newest enters without re-inverting from scratch:

$$P_{k+1}=\Bigl(\sum_{l=k-s+2}^{k+1}H_{l}^{T}H_{l}\Bigr)^{-1}=P_k-P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigl(I+\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigr)^{-1}\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k,$$

so that

$$\beta(k+1)=\beta(k)+P_{k+1}\Bigl(H_{k+1}^{T}\bigl(T_{k+1}-H_{k+1}\beta(k)\bigr)-H_{k-s+1}^{T}\bigl(T_{k-s+1}-H_{k-s+1}\beta(k)\bigr)\Bigr)$$

(a one-line derivation of this update is given after this listing);

22) Now let the weights and bias of the second hidden layer be $W_1$, $B_1$; the expected output of the second hidden layer is then

$$H_1=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta(k+1)^{+};$$

23) Now let $W_{HE}=[B_1\ W_1]$; the weights and bias of the second hidden layer are then computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x);

24) Update the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

25) Now let the weights and bias of the third hidden layer be $W_2$, $B_2$; the expected output of the third hidden layer is then

$$H_3=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta_{new}^{+}\quad(4)$$

26) Now let $W_{HE1}=[B_2\ W_2]$; the weights and bias of the third hidden layer are then computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$, 1 is a one-dimensional row vector whose elements are all 1 and $g^{-1}(x)$ is the inverse of the activation function g(x);

27) Update the output of the third hidden layer to $H_4=g(W_{HE1}H_{E1})$ (5)

28) Update the output weight β of the final hidden layer to

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad(6)$$

The final output is then $f=\beta_{new1}H_4$.

For a network with more hidden layers, iterate formulas (4), (5), (6) in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations; at the end of each iteration set $\beta_{new}=\beta_{new1}$ and $H_2=H_4$.
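The $\beta(k+1)$ update quoted in step 21) follows directly from the definitions, a step worth making explicit. Since $\beta(k+1)=P_{k+1}\sum_{l=k-s+2}^{k+1}H_l^T T_l$ and $P_k^{-1}=P_{k+1}^{-1}+H_{k-s+1}^T H_{k-s+1}-H_{k+1}^T H_{k+1}$:

$$\begin{aligned}\beta(k+1)&=P_{k+1}\Bigl(P_k^{-1}\beta(k)-H_{k-s+1}^{T}T_{k-s+1}+H_{k+1}^{T}T_{k+1}\Bigr)\\&=\beta(k)+P_{k+1}\Bigl(H_{k+1}^{T}\bigl(T_{k+1}-H_{k+1}\beta(k)\bigr)-H_{k-s+1}^{T}\bigl(T_{k-s+1}-H_{k-s+1}\beta(k)\bigr)\Bigr),\end{aligned}$$

using $\sum_{l=k-s+1}^{k}H_l^T T_l=P_k^{-1}\beta(k)$.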

This embodiment takes a styrene polymerization reactor as the research object. The production goal of this reactor is to adjust the reactor temperature so that the conversion rate of the reaction and the number-average and weight-average chain lengths of the reaction product are close to optimal at the end of the reaction. The mechanism model of the reaction is as follows:

Table 3.2 Nominal parameter values

In these equations, T is the absolute temperature of the reactor and Tc the temperature in degrees Celsius; Aw and B are the correlation coefficients between weight-average chain length and temperature; Am and Em are the frequency factor and activation energy of the monomer polymerization; r1 to r4 are density-temperature correction values; Mm and χ are the monomer molecular weight and the polymer interaction parameter, respectively. The product quality indices are the conversion rate $y_1$, the dimensionless number-average chain length $y_2$ and the dimensionless weight-average chain length $y_3$, so the end-point quality index is $y=[y_1(t_f)\ y_2(t_f)\ y_3(t_f)]$, where $t_f$ denotes the end time. The total reaction time is set to 400 minutes, and the control variable is the reactor temperature T, divided evenly into 20 time intervals with the temperature held constant within each interval, so the process variable is $X=[T_1\ T_2\ T_3\ \ldots\ T_{20}]$, with the equation parameters given in Table 3.2. Sixty batches of data X (60×20) are generated and run on the reactor to obtain the quality-variable matrix Y (60×3). The first 20 samples serve as training samples to establish the initial model, the middle 20 as update samples, and the last 20 as test samples to check the prediction performance of the model. The sigmoid function is chosen as the activation function before modeling. An illustrative sketch of this protocol is given below.
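Assuming the sketches above (sigmoid, train_melm, fos_update) are in scope, an end-to-end run of this protocol might look as follows. The reactor data themselves are replaced by synthetic placeholders with the stated shapes, because the mechanism model and the Table 3.2 parameters are not reproduced in this text, and only the first-stage weights are scored for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the 60 reactor batches: X (60, 20) are 20-segment
# temperature profiles, Y (60, 3) the end-point quality indices y1..y3.
X = rng.uniform(300.0, 400.0, (60, 20))
Y = np.column_stack([
    np.tanh(X.mean(axis=1) / 350.0),     # placeholder for conversion y1
    X[:, :10].mean(axis=1) / 400.0,      # placeholder for y2
    X[:, 10:].mean(axis=1) / 400.0,      # placeholder for y3
])

X_tr, Y_tr = X[:20], Y[:20]          # first 20 batches: initial model
X_te, Y_te = X[40:], Y[40:]          # last 20 batches: test samples

model = train_melm(X_tr, Y_tr, L=15)             # sketch from step 1)
hidden = lambda Xb: sigmoid(Xb @ model["a"] + model["b"])

window = list(range(20))             # batch indices currently in the window
for j in range(20, 40):              # middle 20 batches: newest in, oldest out
    old = window.pop(0)
    model["P"], model["beta"] = fos_update(
        model["P"], model["beta"],
        hidden(X[old:old + 1]), Y[old:old + 1],
        hidden(X[j:j + 1]), Y[j:j + 1])
    window.append(j)

# The full method would also refit W_HE, W_HE1 and beta_new1 (steps 22)-28))
# after each update; here only the first-stage weights are evaluated.
rmse = np.sqrt(np.mean((hidden(X_te) @ model["beta"] - Y_te) ** 2))
print(f"test RMSE (first-stage weights): {rmse:.4f}")
```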

As shown in Figures 1 to 4: Figures 1 and 2 show the root-mean-square errors of the training set and the test set, respectively, under the two models (FOS-ELM and FOS-MELM) and different numbers of hidden-layer nodes when each batch of data has only 2 valid time units; Figures 3 and 4 show the corresponding errors when each batch has only 3 valid time units.

The figures show that, because the parameter changes of the polymerization reactor are time-sensitive, the present method updates the system model online and sequentially in order to predict the temperature value effectively, reducing or even avoiding the impact of drastic data changes on the model and its predicted output.

Claims (5)

1. An application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor, characterized by comprising the following steps: 1) building an extreme learning machine model with multiple hidden layers and obtaining the output expression of the multi-hidden-layer extreme learning machine model; 2) updating the above multi-hidden-layer extreme learning machine model in real time and outputting the expression of the updated model.

2. The application method of the online sequential multi-hidden-layer extreme learning machine with forgetting factor according to claim 1, characterized in that in step 1), the extreme learning machine model with multiple hidden layers is built and its output expression obtained as follows:

11) given the samples and a network structure with multiple hidden layers, the activation function of the hidden layers being g and the network output g(a, b, X), where a is the weight between the input layer and the first hidden layer, b is the bias of the first hidden layer, and X is the input matrix;

12) assuming the data change batch by batch and each batch lasts s units of time, the data of the k-th unit time being denoted $\chi_k=\{(x_i,t_i)\}_{i=1}^{N_k}$, where $N_j$ is the number of samples in batch j and $\chi_k$ is valid within [k, k+s], j = 0, 1, ..., k; the data of the (k+1)-th unit time being denoted $\chi_{k+1}=\{(x_i,t_i)\}_{i=1}^{N_{k+1}}$, with k an arbitrarily large positive integer, $x_i$ an input sample and $t_i$ its target value;

13) assuming k ≥ s-1 and the number of training samples much larger than the number of hidden-layer nodes, $Z_{k+1}$ being the result predicted for the (k+1)-th unit time and l = k-s+1, k-s+2, ..., k, the output of the first hidden layer for the batch at time l being $H_l=[G(a_i,b_i,x_j)]$, j = 1, ..., $N_l$, i = 1, ..., L, where $(a_i,b_i)$, i = 1, ..., L, are the randomly initialized weights and thresholds between the input layer and the first hidden layer, G is the hidden-layer activation function, T is the stack of targets of the batches within [k-s+1, k] and $T_l$ is the target matrix of the l-th batch, l a positive integer in [k-s+1, k]; the output weight β of the final hidden layer being obtained as

$$\beta(k)=\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}=P_{k}\begin{bmatrix}H_{k-s+1}\\\vdots\\H_{k}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix},\qquad P_{k}=\Bigl(\sum_{l=k-s+1}^{k}H_{l}^{T}H_{l}\Bigr)^{-1};$$

14) the weights and bias of the second hidden layer being $W_1$, $B_1$, the expected output of the second hidden layer being

$$H_1=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta(k)^{+};$$

15) letting $W_{HE}=[B_1\ W_1]$, the weights and bias of the second hidden layer being computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$, 1 is a one-dimensional row vector whose elements are all 1, $g^{-1}(x)$ is the inverse of the activation function g(x), and $W_{HE}$ and $H_E$ are auxiliary variables;

16) updating the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

17) the weights and bias of the third hidden layer being $W_2$, $B_2$, the expected output of the third hidden layer being

$$H_3=\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\beta_{new}^{+}\quad(1)$$

18) letting $W_{HE1}=[B_2\ W_2]$, the weights and bias of the third hidden layer being computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$ and $W_{HE1}$, $H_{E1}$ are auxiliary variables; the updated output of the third hidden layer being

$$H_4=g(W_{HE1}H_{E1})\quad(2)$$

19) updating the output weight β of the final hidden layer to

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+1}\\\vdots\\T_{k}\end{bmatrix}\quad(3)$$

the final output being $f=\beta_{new1}H_4$.

3. The application method of the online sequential multi-hidden-layer extreme learning machine with forgetting factor according to claim 2, characterized in that for a network with more hidden layers, formulas (1), (2), (3) are iterated in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations, with $\beta_{new}=\beta_{new1}$ and $H_2=H_4$ set at the end of each iteration.

4. The application method of the online sequential multi-hidden-layer extreme learning machine with forgetting factor according to claim 1, characterized in that in step 2), the model is updated in real time and the expression of the updated model output as follows:

21) $Z_{k+2}$ being the result predicted for the (k+2)-th unit time, and with the result at time (k+1) known and the same $(a_i,b_i)$, i = 1, ..., L, as before, the output weight being expressed as

$$\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}\beta=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad\text{and}\quad\beta(k+1)=\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}^{+}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}=P_{k+1}\begin{bmatrix}H_{k-s+2}\\\vdots\\H_{k+1}\end{bmatrix}^{T}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix};$$

$P_{k+1}$ being expressed in terms of $P_k$ as

$$P_{k+1}=\Bigl(\sum_{l=k-s+2}^{k+1}H_{l}^{T}H_{l}\Bigr)^{-1}=P_k-P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigl(I+\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k\begin{bmatrix}-H_{k-s+1}\\H_{k+1}\end{bmatrix}^{T}\Bigr)^{-1}\begin{bmatrix}H_{k-s+1}\\H_{k+1}\end{bmatrix}P_k,$$

and the output weight then being updated as

$$\beta(k+1)=\beta(k)+P_{k+1}\Bigl(H_{k+1}^{T}\bigl(T_{k+1}-H_{k+1}\beta(k)\bigr)-H_{k-s+1}^{T}\bigl(T_{k-s+1}-H_{k-s+1}\beta(k)\bigr)\Bigr);$$

22) the weights and bias of the second hidden layer being $W_1$, $B_1$, the expected output of the second hidden layer being

$$H_1=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta(k+1)^{+};$$

23) letting $W_{HE}=[B_1\ W_1]$, the weights and bias of the second hidden layer being computed as $W_{HE}=g^{-1}(H_1)H_E^{+}$, where $H_E=[\mathbf{1}\ H]^T$ and $g^{-1}(x)$ is the inverse of the activation function g(x);

24) updating the output of the second hidden layer to $H_2=g(W_{HE}H_E)$ and the output weight β of the final hidden layer to $\beta_{new}=H_2^{+}T$;

25) the weights and bias of the third hidden layer being $W_2$, $B_2$, the expected output of the third hidden layer being

$$H_3=\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\beta_{new}^{+}\quad(4)$$

26) letting $W_{HE1}=[B_2\ W_2]$, the weights and bias of the third hidden layer being computed as $W_{HE1}=g^{-1}(H_3)H_{E1}^{+}$, where $H_{E1}=[\mathbf{1}\ H_2]^T$;

27) updating the output of the third hidden layer to $H_4=g(W_{HE1}H_{E1})$ (5);

28) updating the output weight β of the final hidden layer to

$$\beta_{new1}=H_4^{+}\begin{bmatrix}T_{k-s+2}\\\vdots\\T_{k+1}\end{bmatrix}\quad(6)$$

the final output being $f=\beta_{new1}H_4$.

5. The application method of the online sequential multi-hidden-layer extreme learning machine with forgetting factor according to claim 4, characterized by further comprising: for a network with more hidden layers, iterating formulas (4), (5), (6) in a loop: three hidden layers require one iteration, four require two, and N hidden layers require N-2 iterations, with $\beta_{new}=\beta_{new1}$ and $H_2=H_4$ set at the end of each iteration.
CN201710577695.3A 2017-07-15 2017-07-15 Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor Pending CN107330294A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710577695.3A CN107330294A (en) Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710577695.3A CN107330294A (en) Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor

Publications (1)

Publication Number Publication Date
CN107330294A true CN107330294A (en) 2017-11-07

Family

ID=60226989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710577695.3A Pending CN107330294A (en) Application method of an online sequential multi-hidden-layer extreme learning machine with forgetting factor

Country Status (1)

Country Link
CN (1) CN107330294A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255477A (en) * 2018-08-24 2019-01-22 国电联合动力技术有限公司 A kind of wind speed forecasting method and its system and unit based on depth limit learning machine
CN109377752A (en) * 2018-10-19 2019-02-22 桂林电子科技大学 Short-term traffic flow change prediction method, device, computer equipment and storage medium
CN112381139A (en) * 2020-11-13 2021-02-19 长春工业大学 Complex separation process optimization method based on ELM-ADHDP

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012177792A9 (en) * 2011-06-24 2013-06-06 Sequenom, Inc. Methods and processes for non-invasive assessment of a genetic variation
CN103593550A (en) * 2013-08-12 2014-02-19 东北大学 Pierced billet quality modeling and prediction method based on integrated mean value staged RPLS-OS-ELM
US20140187988A1 (en) * 2010-03-15 2014-07-03 Nanyang Technological University Method of predicting acute cardiopulmonary events and survivability of a patient
CN104182622A (en) * 2014-08-12 2014-12-03 大连海事大学 Feedback analysis method and device in tunnel construction based on extreme learning machine
CN106845144A (en) * 2017-03-17 2017-06-13 泉州装备制造研究所 A kind of trend prediction method excavated based on industrial big data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140187988A1 (en) * 2010-03-15 2014-07-03 Nanyang Technological University Method of predicting acute cardiopulmonary events and survivability of a patient
WO2012177792A9 (en) * 2011-06-24 2013-06-06 Sequenom, Inc. Methods and processes for non-invasive assessment of a genetic variation
CN103593550A (en) * 2013-08-12 2014-02-19 东北大学 Pierced billet quality modeling and prediction method based on integrated mean value staged RPLS-OS-ELM
CN104182622A (en) * 2014-08-12 2014-12-03 大连海事大学 Feedback analysis method and device in tunnel construction based on extreme learning machine
CN106845144A (en) * 2017-03-17 2017-06-13 泉州装备制造研究所 A kind of trend prediction method excavated based on industrial big data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
B.Y. QU et al.: "Two-hidden-layer extreme learning machine for regression and classification", NEUROCOMPUTING *
DONG XIAO et al.: "The research on the modeling method of batch process based on OS-ELM-RMPLS", CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS *
JIANWEI ZHAO et al.: "Online sequential extreme learning machine with forgetting mechanism", NEUROCOMPUTING *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255477A (en) * 2018-08-24 2019-01-22 国电联合动力技术有限公司 A kind of wind speed forecasting method and its system and unit based on depth limit learning machine
CN109377752A (en) * 2018-10-19 2019-02-22 桂林电子科技大学 Short-term traffic flow change prediction method, device, computer equipment and storage medium
CN112381139A (en) * 2020-11-13 2021-02-19 长春工业大学 Complex separation process optimization method based on ELM-ADHDP
CN112381139B (en) * 2020-11-13 2023-07-25 长春工业大学 Optimization Method of Complex Separation Process Based on ELM-ADHDP

Similar Documents

Publication Publication Date Title
CN113642653B (en) Complex value neural network signal modulation identification method based on structure optimization algorithm
CN103413174B (en) Based on the short-term wind speed multistep forecasting method of degree of depth learning method
Saadat et al. Training echo state neural network using harmony search algorithm
CN109299430A (en) Short-term wind speed prediction method based on two-stage decomposition and extreme learning machine
CN107330294A (en) The application process of many hidden layer extreme learning machines of online sequential with forgetting factor
CN111045326A (en) Tobacco shred drying process moisture prediction control method and system based on recurrent neural network
Gan et al. Seasonal and trend time series forecasting based on a quasi-linear autoregressive model
CN101872444A (en) A Batch-to-Batch Optimization Method for Batch Processes Combined with Mid-Term Correction Strategies
CN107103397A (en) A kind of traffic flow forecasting method based on bat algorithm, apparatus and system
CN101819782A (en) Variable-step self-adaptive blind source separation method and blind source separation system
CN111950711A (en) A Second-Order Hybrid Construction Method and System for Complex-valued Feedforward Neural Networks
CN106600001B (en) Glass furnace Study of Temperature Forecasting method based on Gaussian mixtures relational learning machine
CN110018675A (en) Nonlinear system modeling method based on LWDNN-ARX model
CN114897144A (en) Complex value time sequence signal prediction method based on complex value neural network
CN103226728B (en) High density polyethylene polymerization cascade course of reaction Intelligent Measurement and yield optimization method
Zhang et al. Energy-dissipative evolutionary deep operator neural networks
CN108142976B (en) Cut tobacco drying process parameter optimization method
CN104330972A (en) Comprehensive prediction iterative learning control method based on model adaptation
CN110807510B (en) Parallel learning soft measurement modeling method for industrial big data
CN115167150B (en) Batch process two-dimensional off-orbit strategy staggered Q learning optimal tracking control method with unknown system dynamics
CN110085255B (en) A Gaussian Process Regression Modeling Method for Speech Conversion Based on Deep Kernel Learning
CN108829058A (en) A kind of fuzzy iterative learning control method of chemical industry batch process
CN107168066A (en) A kind of greenhouse self-adaptation control method
Nagata et al. Exchange Monte Carlo sampling from Bayesian posterior for singular learning machines
Subbotin et al. Entropy based evolutionary search for feature selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171107)