
CN112613603B - Neural network training method based on amplitude limiter and application thereof - Google Patents


Info

Publication number
CN112613603B
CN112613603B
Authority
CN
China
Prior art keywords
neural network
limiter
layer
prediction
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011567835.7A
Other languages
Chinese (zh)
Other versions
CN112613603A (en)
Inventor
Liu Xiangyang (刘向阳)
Chu Jianchun (褚健淳)
He Maogang (何茂刚)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202011567835.7A priority Critical patent/CN112613603B/en
Publication of CN112613603A publication Critical patent/CN112613603A/en
Application granted granted Critical
Publication of CN112613603B publication Critical patent/CN112613603B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C: COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C10/00: Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a limiter-based neural network training method and its application. The neural network training method includes: using experimental data as the training set to train a neural network; using the trained network to predict the value of the quantity to be predicted under unknown conditions; setting the upper and lower limits of a limiter, changing prediction results that exceed the upper limit to the upper limit and those below the lower limit to the lower limit, and incorporating the modified results into the training set; retraining the neural network with the new training set; ending training when all of the network's predictions fall within the limiter's range, and otherwise building a new training set and retraining the network, repeating until all predictions fall within the limiter's range. The invention has the advantage of building a neural network with higher prediction accuracy when experimental data are scarce.

Description

Neural Network Training Method Based on a Limiter and Its Application

Technical Field

The invention belongs to the technical field of artificial neural network prediction, relates to neural network training methods, and in particular to a limiter-based neural network training method and its application.

Background Art

At present, many physical and chemical properties are difficult and costly to measure, and the experimentally obtained data are discrete points, which makes it hard to meet industrial needs. Research on physical and chemical properties generally proceeds by building theoretical calculation models from experimental data. Compared with traditional prediction models, artificial neural networks offer strong nonlinear processing ability and high prediction accuracy, and have been widely applied to the prediction of physical and chemical properties. Applying a neural network requires a large amount of experimental data to train the network, learn the characteristics of different states, and predict the required physical and chemical properties.

However, for many substances only limited physical and chemical property data are available, and a neural network trained on a small data set suffers from low prediction accuracy. How to train a neural network with high prediction accuracy from limited experimental data is a technical problem that urgently needs to be solved.

Summary of the Invention

The object of the present invention is to provide a limiter-based neural network training method and its application, so as to solve one or more of the technical problems above. The method of the invention alleviates the low prediction accuracy of neural networks trained on small data sets and improves the prediction accuracy of the trained network.

To achieve the above object, the present invention adopts the following technical solution:

The limiter-based neural network training method of the present invention comprises the following steps:

Step 1: use pre-acquired experimental data as the training set to train the neural network to be trained, obtaining a trained neural network; the experimental data include the feature quantities required for prediction and the quantity to be predicted;

Step 2: use the trained neural network to predict the value of the quantity to be predicted under unknown conditions, obtaining prediction results;

Step 3: set the upper and lower limits of the limiter and use the limiter to check the plausibility of the prediction results obtained in Step 2; if all prediction results fall within the limiter's range, training ends; otherwise, modify the prediction results outside the limiter's range, obtain a new training set, and go to Step 4;

Step 4: retrain the neural network with the new training set, obtaining a newly trained neural network, and go to Step 5;

Step 5: based on the trained neural network obtained in Step 4, repeat Steps 2 and 3.
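The Steps 1 to 5 above can be sketched as the following loop. This is a minimal illustration under stated assumptions, not the patent's implementation: a linear least-squares model (`fit`/`predict`) stands in for the feedforward neural network, and the function names and the `max_iter` cap are hypothetical.

```python
import numpy as np

def fit(X, y):
    """Train the stand-in model (linear least squares with a bias column)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return theta

def predict(theta, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ theta

def limiter_train(X_exp, y_exp, X_unknown, clip_lo, clip_hi, max_iter=20):
    theta = fit(X_exp, y_exp)                        # Step 1: experimental data only
    for _ in range(max_iter):
        y_pred = predict(theta, X_unknown)           # Step 2: predict unknown points
        if np.all((y_pred >= clip_lo) & (y_pred <= clip_hi)):
            return theta                             # all within limits: training ends
        y_clip = np.clip(y_pred, clip_lo, clip_hi)   # Step 3: clamp out-of-range values
        X_tr = np.vstack([X_exp, X_unknown])         # Step 3: build the new training set
        y_tr = np.concatenate([y_exp, y_clip])
        theta = fit(X_tr, y_tr)                      # Step 4: retrain, then repeat (Step 5)
    return theta
```

With exactly linear data and wide limits the loop terminates on the first plausibility check; with tight limits it iterates until all predictions fall inside the limiter's range or `max_iter` is reached.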

In a further improvement of the present invention, in Step 3, the specific steps of modifying the prediction results outside the limiter's range and obtaining a new training set include:

changing prediction results that exceed the upper limit of the limiter to the upper limit, and changing prediction results below the lower limit to the lower limit;

incorporating the modified results into the training set to form a new training set.

In a further improvement of the present invention, the output of the neural network is a normalized value.

In a further improvement of the present invention, the inputs of the neural network are the feature quantities that influence the quantity to be predicted, and the output is the quantity to be predicted.

In a further improvement of the present invention, in Step 4, when retraining the neural network with the new training set, the influence weight of the non-experimental data on the structure and parameters of the neural network is multiplied by a coefficient less than 1.
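One way to realize this down-weighting is a per-sample-weighted mean square error, sketched below. The coefficient value 0.5 is an arbitrary illustrative choice, not a value from the patent.

```python
import numpy as np

def weighted_mse(y_pred, y_true, sample_weight):
    # Weighted mean square error: experimental points keep weight 1.0,
    # limiter-generated (non-experimental) points get a coefficient < 1.
    sample_weight = np.asarray(sample_weight, dtype=float)
    sq = (y_pred - y_true) ** 2
    return float(np.sum(sample_weight * sq) / np.sum(sample_weight))

# Example: two experimental points (weight 1.0) and two clipped
# pseudo-points (weight 0.5, an illustrative value).
y_true = np.array([1.0, 2.0, 1.66, 1.66])
y_pred = np.array([1.0, 2.0, 1.86, 1.46])
w = np.array([1.0, 1.0, 0.5, 0.5])
loss = weighted_mse(y_pred, y_true, w)
```

The clipped pseudo-points contribute to the loss, but each of their squared errors counts only half as much as an experimental point's.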

In a further improvement of the present invention, in Step 1 or Step 4, during training the internal parameters of the neural network are optimized with an optimization algorithm according to the selected loss function.

The limiter-based neural network training method of the present invention is applied to the prediction of the natural gas compressibility factor.

In a further improvement of the present invention, the feature quantities required for prediction include temperature, pressure, and the content of eight components in natural gas: methane, ethane, propane, butane, pentane, hexane, heptane, and octane; the quantity to be predicted is the compressibility factor of the natural gas.

The neural network is a feedforward neural network comprising an input layer, hidden layers, and an output layer. Each layer contains multiple neurons; each neuron has its own bias, weights, and activation function, and neurons within a layer are independent of one another. The feature values entered at the input layer pass through the computations of each hidden layer and finally reach the output layer.

The output of the feedforward neural network is normalized; the normalization expression is:

x = (X − X_min) / (X_max − X_min)

where X is the initial value of the input, X_min and X_max are respectively the minimum and maximum of all inputs, and x is the normalized value of the input.
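A sketch of this min-max normalization and its inverse (the inverse is needed to map normalized network outputs back to physical values); the function names are illustrative:

```python
import numpy as np

def normalize(X):
    """x = (X - X_min) / (X_max - X_min), applied column-wise."""
    X = np.asarray(X, dtype=float)
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    return (X - X_min) / (X_max - X_min), X_min, X_max

def denormalize(x, X_min, X_max):
    """Invert the mapping to recover physical values."""
    return x * (X_max - X_min) + X_min
```

`normalize` returns the stored minima and maxima so that the same scaling can be reapplied to new data and inverted on the outputs.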

The transfer formula between layers of the neural network is:

z_j^{k+1} = Σ_{i=1}^{n} w_{ji}^{k+1} · a_i^k + b_j^{k+1}

where w_{ji}^{k+1} is the weight of the j-th neuron in layer k+1 (which has m neurons) with respect to the i-th neuron a_i^k of the previous layer k (which has n neurons), b_j^{k+1} is the bias of the j-th neuron in layer k+1, and z_j^{k+1} is the output of the j-th neuron in layer k+1.

The activation function used is the ReLU or Tanh function, expressed as:

a_j^{k+1} = max(0, z_j^{k+1})

or

a_j^{k+1} = (e^{z_j^{k+1}} − e^{−z_j^{k+1}}) / (e^{z_j^{k+1}} + e^{−z_j^{k+1}})

where z_j^{k+1} is the output of the j-th neuron in layer k+1 and a_j^{k+1} is the value of the activation function.
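The layer transfer formula and the two activation choices can be sketched as a forward pass. The parameter layout (`params` as a list of (W, b, activation) triples) is an illustrative assumption, not the patent's data structure:

```python
import numpy as np

def layer_forward(a_prev, W, b, activation="relu"):
    """One layer: z_j = sum_i W[j, i] * a_prev[i] + b[j], then activation."""
    z = W @ a_prev + b
    if activation == "relu":
        return np.maximum(0.0, z)
    if activation == "tanh":
        return np.tanh(z)  # equals (e^z - e^-z) / (e^z + e^-z)
    return z               # linear output layer

def forward(x, params):
    """params: list of (W, b, activation) triples, input layer to output layer."""
    a = x
    for W, b, act in params:
        a = layer_forward(a, W, b, act)
    return a
```

Each weight matrix W has one row per neuron of the current layer and one column per neuron of the previous layer, matching the indices w_{ji} in the formula above.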

In a further improvement of the present invention, an error function is adopted as the optimization target; the error function is:

MSE = (1/M) Σ_{i=1}^{M} (f(x_i) − Y_i)²

where MSE is the mean square error, f(x) is the output value of the neural network, Y is the actual value, and M is the total number of data points.

After the error is calculated, the structure and weights of the neural network are adjusted by an optimization algorithm.
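As a minimal illustration of adjusting weights to reduce the MSE, a single gradient-descent update on one linear neuron is sketched below. The learning rate 0.1 is an arbitrary illustrative value, and the patent does not specify which optimization algorithm is used.

```python
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def gd_step(w, b, X, y, lr=0.1):
    """One gradient-descent step for a single linear neuron f(x) = X @ w + b."""
    pred = X @ w + b
    err = pred - y                        # residuals
    grad_w = 2.0 * X.T @ err / len(y)     # d(MSE)/dw
    grad_b = 2.0 * np.mean(err)           # d(MSE)/db
    return w - lr * grad_w, b - lr * grad_b
```

Repeating such steps (or using any other optimizer) drives the MSE down over the training set.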

In a further improvement of the present invention, the upper and lower limits of the limiter are 1.66 and 0.36, respectively.
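With these bounds, the limiter itself reduces to a clipping operation; a minimal sketch:

```python
import numpy as np

# Limiter bounds from the embodiment: the min/max of the existing
# compressibility-factor data.
Z_MIN, Z_MAX = 0.36, 1.66

def apply_limiter(z_pred):
    """Clamp predicted compressibility factors into the plausible range."""
    return np.clip(z_pred, Z_MIN, Z_MAX)
```

Predictions inside the range pass through unchanged; out-of-range values are replaced by the nearest bound before being merged into the new training set.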

Compared with the prior art, the present invention has the following beneficial effects:

Compared with existing methods, the present invention has the advantage of building a neural network with higher prediction accuracy when experimental data are scarce. Specifically, by presetting the theoretical range of the predicted value, the network obtains information that supports training beyond the small amount of experimental data. At the same time, the technical solution of the present invention corrects the network's outputs according to the theoretical range and places the corrected values into the training set together with the experimental data, repeating this until all outputs conform to the theoretical range of the predicted value. This fully exploits the value of unlabeled data, so that when the network predicts data points whose input features lie outside the range of the training set, the predicted value shows a relatively correct trend with respect to each input feature, especially when experimental data are scarce and the relationship between the predicted value and the input features can be regarded as an approximately increasing convex function or a decreasing concave function (similar to a logarithmic function). In addition, the present invention can greatly improve the prediction accuracy, to several times or even tens of times that of a traditional neural network.

Brief Description of the Drawings

In order to illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the structure of the feedforward neural network in an embodiment of the present invention;

Fig. 2 is a schematic flowchart of the limiter-based neural network training method in an embodiment of the present invention;

Fig. 3 is a schematic comparison of the prediction performance of the method of the present invention and a traditional feedforward neural network in an embodiment of the present invention.

Detailed Description of the Embodiments

In order to make the purpose, technical effects, and technical solutions of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention; based on the disclosed embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Referring to Fig. 1 and Fig. 2, an embodiment of the present invention provides a limiter-based neural network training method. Applying this method alleviates the low prediction accuracy of neural networks trained on small data sets and improves the prediction accuracy of the network. The neural network training method specifically comprises the following steps:

Step 1: use experimental data containing the feature quantities required for prediction and the quantity to be predicted as the training set to train the neural network;

Step 2: use the trained neural network to predict the value of the quantity to be predicted under unknown conditions;

Step 3: set the upper and lower limits of the limiter, use the limiter to check the plausibility of the prediction results, and modify the unreasonable ones; specifically, change prediction results exceeding the upper limit of the limiter to the upper limit, change prediction results below the lower limit to the lower limit, and incorporate the modified results into the training set;

Step 4: retrain the neural network with the new training set;

Step 5: repeat Step 2; if all of the network's prediction results fall within the limiter's range, end training; otherwise return to Step 3.

In another embodiment of the present invention, the output of the neural network is a normalized value.

In another embodiment of the present invention, the inputs of the neural network are the feature quantities affecting the quantity to be predicted, and the output is the quantity to be predicted.

In another embodiment of the present invention, when retraining the neural network with the new training set, the influence weight of the non-experimental data on the network's structure and parameters is multiplied by a coefficient less than 1.

In another embodiment of the present invention, during training the loss function serves as the evaluation criterion of model quality, and an optimization algorithm is used to optimize the internal parameters of the neural network until sufficiently high prediction accuracy is obtained.

In another embodiment of the present invention, the upper and lower limits of the limiter can be determined from the range of the experimental data.

In the application of the limiter-based neural network training method of this embodiment to the prediction of the natural gas compressibility factor, ten features are used, namely temperature, pressure, and the content of eight components in natural gas (methane, ethane, propane, butane, pentane, hexane, heptane, and octane), to predict the compressibility factor of the natural gas.

The feedforward neural network consists of an input layer, hidden layers, and an output layer; each layer contains multiple neurons, and each neuron has its own bias, weights, and activation function, as shown in Fig. 1. Feature values are entered at the input layer and, after the computations of each hidden layer, finally reach the output layer; neurons within a layer are independent of one another.

The output of the feedforward neural network is normalized so that the error does not grow with the magnitude of the output. The normalization formula is as follows:

x = (X − X_min) / (X_max − X_min)

where X is the initial value of the input, X_min and X_max are respectively the minimum and maximum of all inputs, and x is the normalized value of the input.

The transfer formula between layers of the neural network is:

z_j^{k+1} = Σ_{i=1}^{n} w_{ji}^{k+1} · a_i^k + b_j^{k+1}

where w_{ji}^{k+1} is the weight of the j-th neuron in layer k+1 (which has m neurons) with respect to the i-th neuron a_i^k of the previous layer k (which has n neurons), b_j^{k+1} is the bias of the j-th neuron in layer k+1, and z_j^{k+1} is the output of the j-th neuron in layer k+1.

The activation function used is the ReLU or Tanh function:

a_j^{k+1} = max(0, z_j^{k+1})

or

a_j^{k+1} = (e^{z_j^{k+1}} − e^{−z_j^{k+1}}) / (e^{z_j^{k+1}} + e^{−z_j^{k+1}})

where z_j^{k+1} is the output of the j-th neuron in layer k+1 and a_j^{k+1} is the value of the activation function.

After the neural network's parameters are randomly initialized, the forward computation is performed.

The embodiment of the present invention adopts the following error function as the optimization target:

MSE = (1/M) Σ_{i=1}^{M} (f(x_i) − Y_i)²

where MSE is the mean square error, f(x) is the output value of the neural network, Y is the actual value, and M is the total number of data points. After the error is calculated, the structure and weights of the neural network are adjusted by an optimization algorithm.

The application flow of the limiter-based neural network training method is shown in Fig. 2. For the compressibility factor of natural gas, the embodiment of the present invention takes the maximum and minimum of the existing experimental data as the upper and lower limits of the limiter: the maximum is 1.66 and the minimum is 0.36.

Referring to Fig. 3, which compares the prediction results of the neural networks obtained by the method of the present invention and by a traditional training method: the abscissa is the original data value, and the ordinate is the value predicted by the traditional neural network or by the present invention. For each data point, the closer it lies to the diagonal (i.e., predicted value equal to experimental value), the better the accuracy; it can be seen that the technical solution of the present invention effectively improves the prediction performance of the neural network.

In summary, the present invention discloses a limiter-based neural network training method comprising the following steps: use experimental data containing the feature quantities required for prediction and the quantity to be predicted as the training set to train the neural network; use the trained network to predict the value of the quantity to be predicted under unknown conditions; set the upper and lower limits of the limiter, check the plausibility of the prediction results with the limiter, change prediction results exceeding the upper limit to the upper limit and those below the lower limit to the lower limit, and incorporate the modified results into the training set; retrain the neural network with the new training set; if all of the network's prediction results fall within the limiter's range, end training, otherwise continue to modify the prediction results with the limiter, build a new training set, and retrain the network, repeating this process until all prediction results fall within the limiter's range. Compared with existing methods, the present invention has the advantage of building a neural network with higher prediction accuracy when experimental data are scarce.

The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art can still modify or equivalently replace its specific implementations; any such modification or equivalent replacement that does not depart from the spirit and scope of the present invention falls within the protection scope of the pending claims of the present invention.

Claims (5)

1.一种天然气压缩因子的预测方法,其特征在于,包括以下步骤:1. a method for predicting natural gas compressibility factor, is characterized in that, comprises the following steps: 预测所需特征量包括:温度、压力以及天然气中甲烷、乙烷、丙烷、丁烷、戊烷、己烷、庚烷、辛烷八种组分的含量;待预测量为天然气的压缩因子;The characteristic quantities required for prediction include: temperature, pressure, and the content of eight components in natural gas: methane, ethane, propane, butane, pentane, hexane, heptane, and octane; the compression factor of natural gas to be measured; 预测采用的神经网络为前向神经网络,包括输入层、隐藏层和输出层;其中,每个层内均有多个神经元,神经元具有各自的偏置、权重与激活函数,层内部各个神经元之间相互独立;从输入层输入的特征值,经过各个隐藏层的计算最终达到输出层;The neural network used for prediction is a feed-forward neural network, including an input layer, a hidden layer, and an output layer; each layer has multiple neurons, each neuron has its own bias, weight, and activation function. The neurons are independent of each other; the eigenvalues input from the input layer are calculated by each hidden layer and finally reach the output layer; 对前向神经网络的输出进行归一化处理,归一化处理的表达式为:The output of the forward neural network is normalized, and the expression of normalization is:
Figure FDA0004030820830000011
Figure FDA0004030820830000011
式中,X为输入的初始值,Xmin和Xmax分别为所有输入的最小值和最大值,x为输出的归一化值;In the formula, X is the initial value of the input, X min and X max are the minimum and maximum values of all inputs, respectively, and x is the normalized value of the output; 神经网络每层之间的传递公式为:The transfer formula between each layer of the neural network is:
Figure FDA0004030820830000012
Figure FDA0004030820830000012
式中,
Figure FDA0004030820830000013
为具有m个神经元的第k+1层中第j个神经元对于具有n个神经元的第k层第i个神经元ai k的权重,bj k+1为第k+1层中第j个神经元的偏置,
Figure FDA0004030820830000014
为第k+1层中第j个神经元的输出;
In the formula,
Figure FDA0004030820830000013
is the weight of the jth neuron in the k+1th layer with m neurons to the i kth neuron a i k in the kth layer with n neurons, and b j k + 1 is the k+1th layer The bias of the jth neuron in ,
Figure FDA0004030820830000014
is the output of the jth neuron in the k+1th layer;
采用的激活函数为ReLU或Tanh函数,表达式为:The activation function used is ReLU or Tanh function, and the expression is:
Figure FDA0004030820830000015
Figure FDA0004030820830000016
Figure FDA0004030820830000015
or
Figure FDA0004030820830000016
式中,
Figure FDA0004030820830000017
为第k+1层中第j个神经元的输出,aj k+1为激活函数的函数值;
In the formula,
Figure FDA0004030820830000017
is the output of the jth neuron in the k+1th layer, a j k+1 is the function value of the activation function;
神经网络训练方法包括以下步骤:The neural network training method includes the following steps: 步骤1,采用预获取的实验数据作为训练集,对待训练的神经网络进行训练,获得训练后的神经网络;其中,所述实验数据包括预测所需特征量及待预测量的实验数据;Step 1, using the pre-acquired experimental data as the training set, training the neural network to be trained, and obtaining the trained neural network; wherein, the experimental data includes experimental data for predicting required feature quantities and to be measured; 步骤2,通过训练后的神经网络预测未知状况下待预测量的值,获得预测结果;Step 2, predict the value to be measured under unknown conditions through the trained neural network, and obtain the prediction result; 步骤3,设定限幅器的上下限,通过限幅器确定步骤2获得的预测结果的合理性;其中,若预测结果均在限幅器上下限范围内,则训练结束;否则更改限幅器上下限范围外的预测结果,获得新的训练集并跳转执行步骤4;Step 3, set the upper and lower limits of the limiter, and determine the rationality of the prediction results obtained in step 2 through the limiter; if the prediction results are all within the upper and lower limits of the limiter, the training ends; otherwise, change the limit If the prediction result is outside the range of the upper and lower limits of the detector, obtain a new training set and jump to step 4; 步骤4,通过新的训练集重新训练神经网络,获得新的训练后的神经网络并跳转执行步骤5;Step 4, retrain the neural network with the new training set, obtain the new trained neural network and jump to step 5; 步骤5,基于步骤4获得的训练后的神经网络,重复步骤2和步骤3。Step 5, based on the trained neural network obtained in step 4, repeat step 2 and step 3.
2.根据权利要求1所述的一种天然气压缩因子的预测方法,其特征在于,步骤3中,所述否则更改限幅器上下限范围外的预测结果,获得新的训练集的具体步骤包括:2. the prediction method of a kind of natural gas compressibility factor according to claim 1, it is characterized in that, in step 3, described otherwise changes the prediction result outside limiter upper and lower limit scope, the specific step of obtaining new training set comprises : 将超过限幅器上限的预测结果改为限幅器的上限,将低于限幅器下限的预测结果改为限幅器的下限;Change predictions that exceed the upper limit of the limiter to the upper limit of the limiter, and change predictions that are lower than the lower limit of the limiter to the lower limit of the limiter; 将修改后的结果纳入训练集成为新的训练集。Incorporate the modified results into the training set as a new training set. 3.根据权利要求1所述的一种天然气压缩因子的预测方法, 其特征在于,步骤4中,在利用新的训练集重新训练神经网络时,非实验数据对所述神经网络结构和参数的影响权重乘以一个小于1的系数。3. the prediction method of a kind of natural gas compressibility factor according to claim 1, it is characterized in that, in step 4, when utilizing new training set to retrain neural network, non-experimental data is to described neural network structure and parameter The influence weight is multiplied by a factor less than 1. 4.根据权利要求1所述的一种天然气压缩因子的预测方法,其特征在于,采用损失函数作为优化目标,损失函数为:4. the prediction method of a kind of natural gas compressibility factor according to claim 1, is characterized in that, adopts loss function as optimization target, and loss function is:
$$\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M}\left(f(x_i) - Y_i\right)^2$$
where MSE is the mean squared error, f(x) is the output of the neural network, Y is the actual value, and M is the total number of data points. After the error is computed, the structure and weights of the neural network are adjusted by an optimization algorithm.
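As an illustration (not the patent's code), the loss above, combined with the down-weighting of non-experimental samples described in claim 3, might be computed as follows; the function name and the coefficient value 0.5 are assumptions:

```python
import numpy as np

def weighted_mse(y_pred, y_true, is_experimental, coeff=0.5):
    """MSE in which limiter-generated (non-experimental) samples are
    down-weighted by a coefficient < 1; with all-experimental data this
    reduces to the plain MSE of claim 4."""
    w = np.where(is_experimental, 1.0, coeff)
    return float(np.sum(w * (y_pred - y_true) ** 2) / np.sum(w))
```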
5. The method for predicting a natural gas compressibility factor according to claim 1, wherein the upper and lower limits of the limiter are 1.66 and 0.36, respectively.
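With the bounds fixed in this claim, the correction of claim 2 amounts to a single clip operation (illustrative only; the sample values are made up):

```python
import numpy as np

# Limiter bounds from claim 5: the plausible range of the compressibility factor.
LOWER, UPPER = 0.36, 1.66

preds = np.array([0.20, 0.95, 1.80])    # two values fall outside the limiter
clipped = np.clip(preds, LOWER, UPPER)  # -> [0.36, 0.95, 1.66]
```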
CN202011567835.7A 2020-12-25 2020-12-25 Neural network training method based on amplitude limiter and application thereof Active CN112613603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011567835.7A CN112613603B (en) 2020-12-25 2020-12-25 Neural network training method based on amplitude limiter and application thereof


Publications (2)

Publication Number Publication Date
CN112613603A CN112613603A (en) 2021-04-06
CN112613603B true CN112613603B (en) 2023-04-07

Family

ID=75247886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011567835.7A Active CN112613603B (en) 2020-12-25 2020-12-25 Neural network training method based on amplitude limiter and application thereof

Country Status (1)

Country Link
CN (1) CN112613603B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115081625B (en) * 2022-07-21 2022-11-11 常安集团有限公司 Intelligent control method and system for miniature circuit breaker

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129259A (en) * 2010-01-20 2011-07-20 北京航空航天大学 Neural network proportional-integral (PI)-based intelligent temperature control system and method for a sand-dust environment test wind tunnel
CN106168759A (en) * 2016-07-12 2016-11-30 武汉长江仪器自动化研究所有限公司 Coagulant dosage control method and system based on an artificial neural network algorithm
CN206440945U (en) * 2016-07-12 2017-08-25 武汉长江仪器自动化研究所有限公司 Coagulant dosage control system based on an artificial neural network algorithm
CN109063594A (en) * 2018-07-13 2018-12-21 吉林大学 Fast target detection method for remote sensing images based on YOLOv2
CN109543821A (en) * 2018-11-26 2019-03-29 济南浪潮高新科技投资发展有限公司 Convolutional neural network training method that limits the weight distribution to improve quantization performance
CN110378201A (en) * 2019-06-05 2019-10-25 浙江零跑科技有限公司 Articulation angle measurement method for multi-trailer vehicles based on surround-view fisheye camera input

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011006344A1 (en) * 2009-07-15 2011-01-20 北京航空航天大学 Temperature regulating device and intelligent temperature control method for sand dust environment test system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CamStyle: A Novel Data Augmentation Method for Person Re-Identification; Zhun Zhong et al.; IEEE Transactions on Image Processing; 2018-11-02; pp. 1176-1190 *
General Model Based on Artificial Neural Networks for Estimating the Viscosities of Oxygenated Fuels; Xiangyang Liu et al.; ACS AuthorChoice; 2019-09-25; pp. 16564-16571 *
Neural Network-Based Limiter with Transfer Learning; Rémi Abgral et al.; Communications on Applied Mathematics and Computation; 2020 *
Neural network prediction method for the limit parameters of coal spontaneous combustion; Xu Jingcai, Wang Hua; Journal of China Coal Society; 2002-08; Vol. 27, No. 4; pp. 366-370 *

Also Published As

Publication number Publication date
CN112613603A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN110650153B (en) Industrial control network intrusion detection method based on focus loss deep neural network
CN107679617B (en) Multi-iteration deep neural network compression method
WO2020238237A1 (en) Power exponent quantization-based neural network compression method
CN111242287A (en) Neural network compression method based on channel L1 norm pruning
CN110472778A (en) A kind of short-term load forecasting method based on Blending integrated study
CN110414788A (en) A Power Quality Prediction Method Based on Similar Days and Improved LSTM
CN109002686A (en) A kind of more trade mark chemical process soft-measuring modeling methods automatically generating sample
CN110390955A (en) A cross-database speech emotion recognition method based on deep domain adaptive convolutional neural network
CN106355330A (en) Multi-response parameter optimization method based on radial basis function neural network prediction model
CN112085254B (en) Prediction method and model based on multi-fractal cooperative measurement gating circulation unit
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN111832703B (en) A Dynamic Sequence Modeling Method for Irregular Sampling in Process Manufacturing Industry
CN111308896A (en) Nonlinear system self-adaptive optimal control method based on variable error
Qiao et al. A self-organizing RBF neural network based on distance concentration immune algorithm
CN107704920A (en) One kind is based on BP neural network roll alloy contact prediction of fatigue behaviour method
CN111898316A (en) A Construction Method of Metasurface Structure Design Model and Its Application
CN111506868B (en) An ultra-short-term wind speed prediction method based on HHT weight optimization
CN107273971B (en) Feedforward neural network structure self-organization method based on neuron saliency
CN108107716A (en) A kind of Parameter Measuring method based on improved BP neural network
CN112613603B (en) Neural network training method based on amplitude limiter and application thereof
CN113205182B (en) Real-time power load prediction system based on sparse pruning method
CN108090564A (en) Based on network weight is initial and the redundant weighting minimizing technology of end-state difference
Xiaoyuan et al. A new improved BP neural network algorithm
CN116978499A (en) GRA-WOA-GRU-based glass horseshoe kiln temperature prediction method
CN116681159A (en) Short-term power load prediction method based on whale optimization algorithm and DRESN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant