CN114897144A - Complex value time sequence signal prediction method based on complex value neural network - Google Patents
- Publication number: CN114897144A (application number CN202210520164.1A)
- Authority: CN (China)
- Prior art keywords: complex-valued, neural network, step size, iteration
- Prior art date: 2022-05-13
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
Description
Technical Field
The present invention relates to the technical field of time-series signal prediction, and in particular to a complex-valued time-series signal prediction method based on a complex-valued neural network.
Background Art
Time-series signal prediction is a technique for predicting future data from known historical data; it is widely used in wind-speed forecasting, speech analysis, fault diagnosis, and stock-market prediction. A complex-valued time-series signal essentially reflects the trend of one or more complex-valued random variables over time, and complex-valued time-series prediction mines the variation law of such a signal to estimate future data.
The classic time-series parameter models are the moving average (MA) model, the autoregressive (AR) model, and the autoregressive moving average (ARMA) model. The ARMA(p, q) model can be expressed as:

$$x_t = \sum_{i=1}^{p} \alpha_i\, x_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \beta_j\, \varepsilon_{t-j}$$

where α_i (i = 1, …, p) are the autoregressive coefficients, β_j (j = 1, …, q) are the moving-average coefficients, p is the autoregressive order, q is the moving-average order, and ε_t is the white noise at time t. In fact, both the AR model and the MA model are special ARMA models: when α_i = 0 for i = 1, 2, …, p, the model reduces to MA(q); when β_j = 0 for j = 1, 2, …, q, it reduces to AR(p). However, most traditional time-series forecasting methods are realized by building statistical models under restrictive assumptions, such as the moving-average and autoregressive methods. Because these methods must satisfy so many constraints, their performance in practical applications is often unsatisfactory. In recent years, with the rise of artificial intelligence, prediction methods based on machine learning have developed rapidly. Compared with traditional methods, machine-learning-based methods offer high accuracy and strong robustness, and they handle nonlinear data well. Support vector machines, Bayesian networks, matrix factorization, artificial neural networks, and other machine-learning methods are all good tools for time-series signal prediction.
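For illustration, a minimal NumPy sketch of the ARMA(p, q) recursion above; the function name, burn-in length, and random seed are illustrative assumptions, not part of the patent:

```python
import numpy as np

def simulate_arma(alpha, beta, n_samples, seed=None):
    """Simulate x_t = sum_i alpha[i]*x_{t-1-i} + eps_t + sum_j beta[j]*eps_{t-1-j}."""
    rng = np.random.default_rng(seed)
    p, q = len(alpha), len(beta)
    burn = 200                                   # discard the start-up transient
    eps = rng.standard_normal(n_samples + burn)  # white noise eps_t
    x = np.zeros(n_samples + burn)
    for t in range(max(p, q), n_samples + burn):
        ar = sum(alpha[i] * x[t - 1 - i] for i in range(p))
        ma = sum(beta[j] * eps[t - 1 - j] for j in range(q))
        x[t] = ar + eps[t] + ma
    return x[burn:]

# AR(p) is the special case beta = [], MA(q) the case alpha = []
series = simulate_arma([1.79, -1.85, 1.27, -0.41], [], 5000, seed=0)
```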
Among these methods, the artificial neural network is a commonly used one. Commonly used artificial neural networks include real-valued and complex-valued neural networks. A traditional real-valued neural network must preprocess a complex-valued signal into a real-valued one, whereas the decision boundary of a complex-valued neural network consists of two orthogonal hyperplanes, giving it higher generalization ability than a real-valued network; it can also process complex-valued signals directly. A complex-valued neural network is therefore a better choice for building a prediction model of complex-valued time-series signals.
According to the activation function used, complex-valued neural networks can be divided into split complex-valued neural networks and fully complex-valued neural networks. However, the split complex-valued neural networks in common use must split a complex-valued signal into real and imaginary parts or into amplitude and phase, which destroys the intrinsic structure of the complex-valued signal.
Meanwhile, the methods commonly used to train complex-valued neural networks include the complex gradient descent (CGD) algorithm, the complex least mean square (CLMS) algorithm, and complex-valued real-time recurrent learning (CRTRL). These methods all converge slowly, fall into local minima easily, and are easily affected by saddle points. To address these problems, adaptive complex-step-size algorithms have appeared in recent years, such as the complex Barzilai–Borwein method (CBBM), the complex adaptable learning-rate tree (CALRT), and the selectable direction tree algorithm (SDTA). Although these adaptive complex-step-size algorithms can greatly improve convergence speed, they are computationally expensive: the CALRT algorithm, for example, uses a two-level tree structure, so its training time increases dramatically. Consequently, even when the network structure is of moderate size, the extra time cost of these methods leaves them with little advantage over traditional methods.
Summary of the Invention
Therefore, the technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a complex-valued time-series signal prediction method based on a complex-valued neural network that avoids destroying the intrinsic structure of the complex-valued signal, reduces the computational cost of determining the complex step size, accelerates training convergence, and improves the accuracy of complex-valued time-series prediction.
To solve the above technical problem, the present invention provides a complex-valued time-series signal prediction method based on a complex-valued neural network, comprising the following steps:
S1: Construct a data set of complex-valued time-series signals and divide it into a training set and a test set.
S2: Construct a fully complex neural network and initialize its parameters.
S3: Train the fully complex neural network with the adaptive complex-valued orthogonal gradient descent method and the training set, stopping when a preset number of iterations is reached, to obtain the trained fully complex neural network.
S4: Feed the test set into the trained fully complex neural network to obtain the predicted values of the complex-valued time-series signal.
Preferably, the fully complex neural network in S2 is a complex-valued feedforward single-hidden-layer neural network.
Preferably, in the complex-valued feedforward single-hidden-layer neural network, the output o_l of the l-th layer is given by:

$$o_l = h_l(w_l\, o_{l-1} + b_l);$$

where l = 2, 3; h_l(·) is the activation function of the l-th layer; w_l is the weight between layer (l−1) and layer l; o_{l−1} is the output of layer (l−1); and b_l is the bias vector corresponding to o_{l−1}.
Preferably, the loss function E of the output layer of the complex-valued feedforward single-hidden-layer neural network is the mean square error:

$$E = \frac{1}{2J}\sum_{j=1}^{J}\left(y_j - o_j\right)^{H}\left(y_j - o_j\right);$$

where (·)^H denotes the conjugate transpose, o_j is the final output of the network, and J is the total number of training samples. The training samples are denoted $\{(x_j, y_j)\}_{j=1}^{J}$, where $x_j \in \mathbb{C}^{P}$ is the input vector and $y_j \in \mathbb{C}^{Q}$ is the desired output; that is, the sample input space is the P-dimensional complex space and the output space is the Q-dimensional complex space.
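A minimal NumPy sketch of this forward pass and loss (the 1/(2J) normalization and the use of tanh in both layers follow the embodiment described later; all names are illustrative):

```python
import numpy as np

def forward(x, w2, b2, w3, b3):
    """o_l = h_l(w_l o_{l-1} + b_l) for l = 2, 3; np.tanh acts as the
    fully complex tanh when its argument is complex."""
    o2 = np.tanh(w2 @ x + b2)   # hidden layer (l = 2)
    o3 = np.tanh(w3 @ o2 + b3)  # output layer (l = 3)
    return o3

def mse_loss(O, Y):
    """E = (1/2J) * sum_j (y_j - o_j)^H (y_j - o_j); the columns of O and Y
    hold the J network outputs o_j and desired outputs y_j."""
    D = Y - O
    return np.real(np.sum(np.conj(D) * D)) / (2 * Y.shape[1])
```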
Preferably, in S3 the fully complex neural network is trained with the adaptive complex-valued orthogonal gradient descent method and the training set. In the t-th iteration of the method, the parameters of the fully complex neural network to be adjusted are updated as:

$$w^{(t)} = w^{(t-1)} - \eta_t\, \nabla E\!\left(w^{(t-1)}\right);$$

where w^(t) comprises the weights and biases of every layer of neurons at the t-th iteration, ∇E is the complex gradient vector at the t-th iteration, and η_t is the adaptive complex step size at the t-th iteration.
Preferably, the complex gradient vector at the t-th iteration is obtained via the Wirtinger operator.
Preferably, the adaptive complex step size η_t at the t-th iteration is decoupled into a real part and an imaginary part, η_t = α_t + iβ_t, and the iteration direction of the t-th iteration is expressed as:

$$-\eta_t\, \nabla E = -\alpha_t\, \nabla E - i\beta_t\, \nabla E;$$

where α_t is the real step size of the adaptive complex step size η_t and β_t is its imaginary step size; the real step searches along the negative gradient direction, while the imaginary step searches along the direction orthogonal to it.
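A one-line sketch of the decoupled step (hypothetical names): multiplying the gradient by i rotates it by 90 degrees, so the β term moves orthogonally to the negative gradient direction:

```python
def decoupled_step(w, grad, alpha, beta):
    """w <- w - (alpha + i*beta) * grad: the alpha part steps along -grad,
    the beta part along -i*grad, which is orthogonal to -grad."""
    return w - (alpha + 1j * beta) * grad
```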
Preferably, the specific process of training the fully complex neural network with the adaptive complex-valued orthogonal gradient descent method and the training set in S3 is:
S31: Initialize the parameters w^(t) to be adjusted; the number of nodes saved per iteration is K; the scale-factor sequence of the real step size α of the adaptive complex step size is $\{a_n\}_{n=1}^{N}$ and that of the imaginary step size β is $\{b_m\}_{m=1}^{M}$, where N is the number of scale factors for α and M is the number for β. Traverse the K candidate nodes and multiply the real step size α of each node by each of the N factors of the sequence $\{a_n\}_{n=1}^{N}$.
S32: Find the parameters k_t and n_t that minimize the value of the loss function E, where k_t is the index of the first-level child node that minimizes E and n_t is the index of the corresponding scale factor of that node.
S33: Update the temporary weight $\hat{w}^{(t)}$ according to the current k_t and n_t.
S34: Sort the candidate nodes in ascending order of the loss function E and save the parameter pairs ranked 2 through K as the candidate nodes for the next iteration.
S35: Starting from the optimal node corresponding to the temporary weight $\hat{w}^{(t)}$ obtained in S33, multiply the imaginary step size β by each of the M factors of the scale-factor sequence $\{b_m\}_{m=1}^{M}$.
S36: Find the parameter m_t that minimizes the value of the loss function E, where m_t is the index of the second-level child node that minimizes E.
S37: Update the weight w^(t) according to the current m_t, k_t, and n_t.
S38: Save the resulting parameter pair and merge it with the K−1 parameter pairs from S34; the K pairs together serve as the candidate nodes for the next iteration.
S39: Repeat S31–S38.
Preferably, in S31 the real step size α of each node is multiplied by each of the N factors of the scale-factor sequence, namely:

$$\alpha_{k,n}^{(t)} = a_n\, \alpha_k^{(t-1)}, \qquad \hat{w}_{k,n}^{(t)} = w_k^{(t-1)} - \alpha_{k,n}^{(t)}\, \nabla E\!\left(w_k^{(t-1)}\right), \qquad E_{k,n} = E\!\left(\hat{w}_{k,n}^{(t)}, X, \bar{Y}\right);$$

where $\alpha_k^{(t-1)}$ is the real step size of the k-th root node; a_n is the n-th scale factor; $\alpha_{k,n}^{(t)}$ is the real step size of the first-level child node corresponding to a_n; $\hat{w}_{k,n}^{(t)}$ is the k-th root-node parameter $w_k^{(t-1)}$ after being updated with the real step size $\alpha_{k,n}^{(t)}$; $E_{k,n}$ is the abbreviated form of the loss value $E(\hat{w}_{k,n}^{(t)}, X, \bar{Y})$ obtained with parameter $\hat{w}_{k,n}^{(t)}$; X is the training-set data; and $\bar{Y}$ is the training-set labels.
Preferably, in S35 the imaginary step size β is multiplied by each of the M scale factors, namely:

$$\beta_m^{(t)} = b_m\, \beta^{(t-1)}, \qquad \tilde{w}_m^{(t)} = \hat{w}^{(t)} - i\,\beta_m^{(t)}\, \nabla E\!\left(w^{(t-1)}\right), \qquad E_m = E\!\left(\tilde{w}_m^{(t)}, X, \bar{Y}\right);$$

where $\beta^{(t-1)}$ is the imaginary step size determined at iteration t−1; b_m is the m-th scale factor; $\beta_m^{(t)}$ is the imaginary step size of the second-level child node corresponding to b_m; $\tilde{w}_m^{(t)}$ is the parameter of the second-level child node corresponding to b_m; and $E_m$ is the abbreviated form of the loss value $E(\tilde{w}_m^{(t)}, X, \bar{Y})$ obtained with parameter $\tilde{w}_m^{(t)}$.
Compared with the prior art, the above technical solution of the present invention has the following advantages:
1. A fully complex neural network directly predicts the complex-valued time-series signal, processing the complex-valued signal as a whole. Compared with traditional signal prediction methods, it has strong nonlinear fitting ability, can process complex-valued signals directly, and achieves high accuracy. It avoids splitting the complex-valued signal into real and imaginary parts or into amplitude and phase, which would destroy the signal's intrinsic structure.
2. The present invention decouples the complex step size into two orthogonal parts, a real step size and an imaginary step size: the real step searches along the negative gradient direction, while the imaginary step searches along the direction orthogonal to it. Adapting the two sizes independently along these orthogonal directions allows a suitable step to be found in each direction without mutual interference. Compared with traditional adaptive complex-step-size methods, this greatly reduces the computation required to determine the complex step size while preserving the convergence rate, thereby accelerating training.
3. The adaptive complex-valued orthogonal gradient descent method used in the present invention helps the training of the complex-valued neural network escape saddle points and local minima, improving the accuracy of complex-valued time-series prediction.
Description of the Drawings
To make the content of the present invention easier to understand, the present invention is described in further detail below according to specific embodiments and with reference to the accompanying drawings, in which:
Fig. 1 is a system block diagram of the present invention.
Fig. 2 is a flowchart of the present invention.
Fig. 3 is a schematic diagram of the process of training the fully complex neural network with the adaptive complex-valued orthogonal gradient descent method according to the present invention.
Fig. 4 is a flowchart of training the fully complex neural network with the adaptive complex-valued orthogonal gradient descent method according to the present invention.
Fig. 5 compares the training processes of the complex-valued neural network trained with different methods on the benchmark circular signal in an embodiment of the present invention.
Fig. 6 compares the training processes of the complex-valued neural network trained with different methods on the benchmark non-circular signal in an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can better understand and implement it; the embodiments given are not intended to limit the present invention.
As shown in the system block diagram of Fig. 1, the present invention uses a complex-valued neural network to predict complex-valued time-series signals and uses the adaptive complex-valued orthogonal gradient descent method for efficient training of the network. First, the signal values of the preceding P instants, [x(k), x(k−1), …, x(k−P+1)]^T, are collected from the signal generator as the input of the complex-valued neural network, and the signal value at the next instant, x(k+1), serves as the desired output. A loss function E(x) is constructed from the mean square error and optimized with the adaptive complex-valued orthogonal gradient descent method to obtain a suitable network model, which is finally used to realize signal prediction.
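The input/target construction just described can be sketched as follows (a hedged example; `make_windows` and the ordering of the window entries are illustrative assumptions):

```python
import numpy as np

def make_windows(signal, P):
    """Pair the P past values [x(k), x(k-1), ..., x(k-P+1)] with the
    next value x(k+1) as (input, desired output)."""
    X, y = [], []
    for k in range(P - 1, len(signal) - 1):
        X.append(signal[k - P + 1 : k + 1][::-1])  # newest sample first
        y.append(signal[k + 1])
    return np.array(X), np.array(y)
```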
Referring to the flowchart of Fig. 2, a complex-valued time-series signal prediction method based on a complex-valued neural network according to the present invention comprises the following steps:
S1: Construct a data set of complex-valued time-series signals and divide it into a training set and a test set for the complex-valued neural network. The data set is constructed by normalizing complex-valued time-series signals collected in practice or generated by simulation; in this embodiment they are produced by a signal generator. Benchmark complex-valued circular and non-circular time-series signals are generated by MATLAB simulation. The benchmark circular signal is produced by a stable autoregressive (AR) process, expressed as:
r(k) = 1.79r(k−1) − 1.85r(k−2) + 1.27r(k−3) − 0.41r(k−4) + n(k);
where n(k) is complex-valued circular Gaussian white noise with mean 0 and variance 1, and r(k−t), t = 0, 1, 2, 3, 4, denotes the signal value at instant k−t. The benchmark non-circular signal is produced by an autoregressive moving average (ARMA) process, expressed as:
r(k) = 1.79r(k−1) − 1.85r(k−2) + 1.27r(k−3) − 0.41r(k−4) + 0.1r(k−5) + 2n(k) + 0.5n*(k) + n(k−1) + 0.9n*(k−1);
where n(k) must satisfy:

E{n(k−i)n*(k−j)} = δ(i−j),
E{n(k−i)n(k−j)} = Cδ(i−j);

where n(k) is complex-valued non-circular Gaussian white noise, C = 0.95, and δ is the Dirac delta function. Three thousand consecutive samples of the complex-valued time-series signals generated by the above two processes are taken as the training set and 2000 consecutive samples as the test set; the samples are then normalized to form the required data set.
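A sketch of the circular benchmark in NumPy (the seed, series length, burn-in, and split indices are assumptions; the non-circular case additionally requires noise with the pseudo-covariance E{n(k)n(k)} = 0.95, which this circular generator does not provide):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5400  # enough for 3000 training + 2000 test samples plus a burn-in margin

# Circular complex Gaussian white noise: mean 0, variance 1
# (independent real/imaginary parts, each of variance 1/2).
n = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)

# Stable AR(4) process driving the benchmark circular signal.
r = np.zeros(T, dtype=complex)
for k in range(4, T):
    r[k] = (1.79 * r[k - 1] - 1.85 * r[k - 2]
            + 1.27 * r[k - 3] - 0.41 * r[k - 4] + n[k])

# Normalize, then split into 3000 training and 2000 test samples.
r = r / np.max(np.abs(r))
train, test = r[400:3400], r[3400:5400]
```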
S2: The complex-valued neural network used in this embodiment is a fully complex neural network. Construct the fully complex neural network and initialize its parameters; in this embodiment it is a complex-valued feedforward single-hidden-layer neural network. Building the prediction model on the strong function-approximation ability of neural networks overcomes the shortcomings of existing models in adaptability, data feature extraction, and so on.
In the complex-valued feedforward single-hidden-layer neural network, the output o_l of the l-th layer is:

o_l = h_l(w_l o_{l−1} + b_l);

where l = 2, 3; h_l(·) is the activation function of the l-th layer; and w_l is the weight between layer (l−1) and layer l. The weight w_l is complex-valued; its real and imaginary parts are randomly initialized, and the weights are adjusted by backpropagation during training. o_{l−1} is the output of layer (l−1), and b_l is the bias vector corresponding to o_{l−1}. In this embodiment the complex-valued feedforward single-hidden-layer neural network has 20 hidden-layer neurons, the connection weights and biases are randomly initialized, the activation function is the tanh function, and the loss function is the mean-square-error function.
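One possible random complex initialization consistent with this description (H = 20 follows the embodiment; P, Q, the scale 0.1, and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
P, H, Q = 4, 20, 1   # inputs, hidden neurons (20 per the embodiment), outputs

def init_complex(shape, scale=0.1):
    """Random complex parameters: independent real and imaginary parts."""
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

w2, b2 = init_complex((H, P)), init_complex((H, 1))   # hidden layer
w3, b3 = init_complex((Q, H)), init_complex((Q, 1))   # output layer
```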
The loss function E of the output layer of the complex-valued feedforward single-hidden-layer neural network is:

$$E = \frac{1}{2J}\sum_{j=1}^{J}\left(y_j - o_j\right)^{H}\left(y_j - o_j\right);$$

where (·)^H denotes the conjugate transpose, o_j is the final output of the network, and J is the total number of training samples; the training samples are denoted $\{(x_j, y_j)\}_{j=1}^{J}$, where $x_j \in \mathbb{C}^{P}$ is the input vector and $y_j \in \mathbb{C}^{Q}$ is the desired output.
S3: Use the training set as the input set of the fully complex neural network and train it with the adaptive complex-valued orthogonal gradient descent (AOCGD) method until a preset number of iterations (or another stopping condition) is reached, obtaining the trained fully complex neural network. The preset number of iterations in this embodiment is 200.
When training the fully complex neural network with the AOCGD method and the training set, the parameters to be adjusted are updated in the t-th iteration as $w^{(t)} = w^{(t-1)} - \eta_t\, \nabla E(w^{(t-1)})$, where w^(t) comprises the weights and biases of every layer of neurons, the complex gradient vector is obtained via the Wirtinger operator, and η_t is the adaptive complex step size, a complex number. The step size η_t is decoupled into a real part and an imaginary part, η_t = α_t + iβ_t, so the iteration direction of the t-th iteration is $-\eta_t\, \nabla E = -\alpha_t\, \nabla E - i\beta_t\, \nabla E$, where α_t is the real step size of η_t and β_t is its imaginary step size.
As shown in Fig. 3, the sizes of α and β are adjusted adaptively during training. For simplicity, the nodes in the first layer of Fig. 3 are called root nodes, those in the second layer first-level child nodes, and those in the third layer second-level child nodes. Solid arrows in the figure indicate computations to be performed; dashed arrows indicate that a first-level child node is saved directly as a second-level child node.
As shown in Fig. 4, the specific process of training the fully complex neural network with the training set and the AOCGD method is:
S31: Initialize the parameters to be adjusted, i.e., the weights w^(t); the number of nodes saved per iteration is K; the scale-factor sequence of the real step size α of the adaptive complex step size is $\{a_n\}_{n=1}^{N}$ and that of the imaginary step size β is $\{b_m\}_{m=1}^{M}$, where N is the number of scale factors for α and M is the number for β. In this embodiment the scale-factor sequence of α is [0.5, 1, 2], that of β is [−10, −0.1, 0, 0.1, 10], and the number of candidate nodes saved per iteration is 5. The initial value of α is 0.25 and that of β is 0.1.

Traverse the K candidate nodes and multiply the real step size α of each node by each of the N scale factors:

$$\alpha_{k,n}^{(t)} = a_n\, \alpha_k^{(t-1)}, \qquad \hat{w}_{k,n}^{(t)} = w_k^{(t-1)} - \alpha_{k,n}^{(t)}\, \nabla E\!\left(w_k^{(t-1)}\right), \qquad E_{k,n} = E\!\left(\hat{w}_{k,n}^{(t)}, X, \bar{Y}\right);$$

where $\alpha_k^{(t-1)}$ is the real step size of the k-th root node in Fig. 3; a_n is the n-th scale factor; $\alpha_{k,n}^{(t)}$ is the real step size of the first-level child node corresponding to a_n; $\hat{w}_{k,n}^{(t)}$ is the k-th root-node parameter $w_k^{(t-1)}$ after being updated with the real step size $\alpha_{k,n}^{(t)}$; $E_{k,n}$ is the abbreviated form of the loss value $E(\hat{w}_{k,n}^{(t)}, X, \bar{Y})$; X is the training-set data; and $\bar{Y}$ is the training-set labels.
S32: Find the parameters k_t and n_t that minimize the value of the loss function E:

$$(k_t, n_t) = \arg\min_{k,n} E_{k,n};$$

where k_t is the index of the first-level child node that minimizes E and n_t is the index of the corresponding scale factor; the real step size α is thus determined by the loss function.
S33: Update the temporary weight according to the current k_t and n_t: $\hat{w}^{(t)} = \hat{w}_{k_t, n_t}^{(t)}$.
S34: Sort the candidate nodes in ascending order of the loss function E and save the parameter pairs ranked 2 through K as the candidate nodes for the next iteration.
S35: Starting from the optimal node corresponding to the temporary weight $\hat{w}^{(t)}$ obtained in S33, multiply the imaginary step size β by each of the M scale factors:

$$\beta_m^{(t)} = b_m\, \beta^{(t-1)}, \qquad \tilde{w}_m^{(t)} = \hat{w}^{(t)} - i\,\beta_m^{(t)}\, \nabla E\!\left(w^{(t-1)}\right), \qquad E_m = E\!\left(\tilde{w}_m^{(t)}, X, \bar{Y}\right);$$

where $\beta^{(t-1)}$ is the imaginary step size determined at iteration t−1; b_m is the m-th scale factor; $\beta_m^{(t)}$ is the imaginary step size of the second-level child node corresponding to b_m; $\tilde{w}_m^{(t)}$ is the parameter of that second-level child node; and $E_m$ is the abbreviated form of the loss value $E(\tilde{w}_m^{(t)}, X, \bar{Y})$.
S36: Find the parameter m_t that minimizes the value of the loss function E:

$$m_t = \arg\min_{m} E_m;$$

where m_t is the index of the second-level child node that minimizes E.
S37: Update the weight according to the current m_t, k_t, and n_t: $w^{(t)} = \tilde{w}_{m_t}^{(t)}$; the imaginary step size β is thus determined by the loss function.
S38: Save the resulting parameter pair and merge it with the K−1 parameter pairs from S34; the K pairs together serve as the candidate nodes for the next iteration.
S39: Repeat S31–S38.
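A condensed sketch of one AOCGD iteration (S31–S38), assuming `grad_fn` returns the Wirtinger gradient and `loss_fn` evaluates E on the training set; the node bookkeeping is simplified relative to Figs. 3–4, and re-evaluating the gradient before the imaginary step is one possible reading of the description:

```python
def aocgd_step(candidates, beta_prev, grad_fn, loss_fn,
               a_seq=(0.5, 1, 2), b_seq=(-10, -0.1, 0, 0.1, 10), K=5):
    """One iteration of adaptive complex-valued orthogonal gradient descent.
    `candidates` is a list of (w, alpha) root nodes from the last iteration;
    the default scale factors and K follow the embodiment above."""
    # S31: real step along the negative gradient for every (root, a_n) pair.
    children = []
    for w, alpha in candidates:
        g = grad_fn(w)
        for a_n in a_seq:
            alpha_kn = a_n * alpha
            w_hat = w - alpha_kn * g
            children.append((loss_fn(w_hat), w_hat, alpha_kn))
    # S32-S34: pick the minimizer; keep the nodes ranked 2..K for next round.
    children.sort(key=lambda c: c[0])
    _, w_hat, alpha_t = children[0]
    carry = [(w, a) for _, w, a in children[1:K]]
    # S35: imaginary step orthogonal to the gradient, scaled by every b_m.
    g = grad_fn(w_hat)
    second = []
    for b_m in b_seq:
        beta_m = b_m * beta_prev
        w_tilde = w_hat - 1j * beta_m * g
        second.append((loss_fn(w_tilde), w_tilde, beta_m))
    # S36-S37: the best second-level child becomes the new weight.
    loss_t, w_t, beta_t = min(second, key=lambda c: c[0])
    # S38: merge the winner with the K-1 saved nodes.
    return [(w_t, alpha_t)] + carry, beta_t, loss_t
```

With the embodiment's initial values, the search would start from `candidates = [(w0, 0.25)]` and `beta_prev = 0.1`.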
S4: Feed the test set into the trained fully complex neural network to obtain the predicted values of the complex-valued time-series signal.
The present invention decouples the complex step size into two orthogonal parts, a real step size and an imaginary step size, and introduces the adaptive learning tree (ALRT) algorithm to adjust both adaptively, yielding the adaptive complex-valued orthogonal gradient descent method for efficient training of complex-valued neural networks. Applying a complex-valued neural network trained with this algorithm to the prediction of complex-valued time-series signals establishes an effective prediction model. Compared with the prior art, the above technical solution of the present invention has the following advantages:
1. A fully complex neural network directly predicts the complex-valued time-series signal, processing the complex-valued signal as a whole. Compared with traditional signal prediction methods, it has strong nonlinear fitting ability, can process complex-valued signals directly, and achieves high accuracy. It avoids splitting the complex-valued signal into real and imaginary parts or into amplitude and phase, which would destroy the signal's intrinsic structure.
2. The present invention decouples the complex step size into two orthogonal parts, a real step size and an imaginary step size: the real step searches along the negative gradient direction, while the imaginary step searches along the direction orthogonal to it. Adapting the two sizes independently along these orthogonal directions allows a suitable step to be found in each direction without mutual interference. Compared with traditional adaptive complex-step-size methods, this greatly reduces the computation required to determine the complex step size while preserving the convergence rate, thereby accelerating training.
3. The adaptive complex-valued orthogonal gradient descent method used in the present invention helps the training of the complex-valued neural network escape saddle points and local minima, improving the accuracy of complex-valued time-series prediction.
To further illustrate the beneficial effects of the present invention, this embodiment trains the complex-valued neural network with four training algorithms—the complex gradient descent (CGD) algorithm, the adaptive orthogonal complex-valued gradient descent (AOCGD) method, and the adaptive complex-step-size algorithms CRTRL (complex-valued real-time recurrent learning) and CBBM (complex Barzilai–Borwein method)—on the benchmark circular and non-circular signals, and uses the trained networks to predict both signals. The training results with the different methods on the benchmark circular signal are shown in Fig. 5 and those on the benchmark non-circular signal in Fig. 6; the prediction results for the benchmark circular signal are shown in Table 1 and those for the benchmark non-circular signal in Table 2.
Table 1: Prediction results for the benchmark circular signal under different training methods
Table 2: Prediction results for the benchmark non-circular signal under different training methods
In Figs. 5 and 6 the abscissa is the number of iterations and the ordinate is the value of the loss function E. As the figures show, the present invention, using the adaptive complex-valued orthogonal gradient descent method, exhibits the fastest decrease in loss and the fastest convergence. Tables 1 and 2 evaluate the prediction results with three metrics: mean absolute error (MAE), normalized root mean squared error (nRMSE), and the coefficient of determination R²; smaller MAE and nRMSE and larger R² indicate more accurate predictions. Tables 1 and 2 show that the predictions obtained with the present invention are the most accurate.
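For reference, the three metrics can be computed as below (a sketch; the nRMSE normalization convention varies in the literature, and dividing by the standard deviation of the target magnitudes is an assumption, not the patent's stated choice):

```python
import numpy as np

def mae(y, o):
    """Mean absolute error between targets y and predictions o."""
    return np.mean(np.abs(y - o))

def nrmse(y, o):
    """Root mean squared error normalized by the target std (one convention)."""
    return np.sqrt(np.mean(np.abs(y - o) ** 2)) / np.std(np.abs(y))

def r2(y, o):
    """Coefficient of determination for complex-valued series."""
    ss_res = np.sum(np.abs(y - o) ** 2)
    ss_tot = np.sum(np.abs(y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```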
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored therein produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on it to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Obviously, the above embodiments are merely examples given for clarity of description and do not limit the possible implementations. Those of ordinary skill in the art can make other changes or modifications on the basis of the above description; it is neither necessary nor possible to enumerate all implementations here, and obvious changes or modifications derived therefrom remain within the protection scope of the present invention.
Claims (10)
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210520164.1A | 2022-05-13 | 2022-05-13 | Complex value time sequence signal prediction method based on complex value neural network |
| PCT/CN2022/101004 | 2022-05-13 | 2022-06-24 | Complex-valued timing signal prediction method based on complex-valued neural network |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210520164.1A | 2022-05-13 | 2022-05-13 | Complex value time sequence signal prediction method based on complex value neural network |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114897144A | 2022-08-12 |
Family ID: 82722038

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210520164.1A | Complex value time sequence signal prediction method based on complex value neural network | 2022-05-13 | 2022-05-13 |

Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN114897144A (en) |
| WO (1) | WO2023216383A1 (en) |
Cited By (3)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN115856425A * | 2022-11-21 | 2023-03-28 | Spectrum anomaly detection method and device based on hidden space probability prediction |
| WO2024108723A1 * | 2022-11-23 | 2024-05-30 | Time series prediction method and system based on complex-value dendritic neural model |
| WO2024124618A1 * | 2022-12-13 | 2024-06-20 | Solar radiation prediction method and system using complex-valued neural network based on asm-cnag algorithm |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117454233B * | 2023-12-22 | 2024-03-22 | 厦门锋联信息技术有限公司 | Safety production management method and system based on positioning identification |
| CN117892117B * | 2024-03-13 | 2024-05-31 | 国网山东省电力公司邹城市供电公司 | Fault positioning method and system for power transmission line of power distribution network |
Family Cites Families (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106875002A * | 2017-02-20 | 2017-06-20 | 中国石油大学(华东) | Complex value neural network training method based on gradient descent method and generalized inverse |
| TWI651927B * | 2018-02-14 | 2019-02-21 | National Central University | Signal source separation method and signal source separation device |
| CN112182961B * | 2020-09-23 | 2023-01-31 | 中国南方电网有限责任公司超高压输电公司 | Converter station wireless network channel large-scale fading modeling prediction method |
| CN112861813B * | 2021-03-29 | 2022-07-22 | 电子科技大学 | Method for identifying human behavior behind wall based on complex value convolution neural network |
| CN113158582A * | 2021-05-24 | 2021-07-23 | 苏州大学 | Wind speed prediction method based on complex value forward neural network |
| CN113408726B * | 2021-06-30 | 2022-12-06 | 苏州大学 | Method for designing complex value neural network channel equalizer based on AP-NAG algorithm |
- 2022-05-13: CN application CN202210520164.1A filed; published as CN114897144A (status: active, Pending)
- 2022-06-24: WO application PCT/CN2022/101004 filed; published as WO2023216383A1 (status: unknown)
Cited By (4)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN115856425A * | 2022-11-21 | 2023-03-28 | Spectrum anomaly detection method and device based on hidden space probability prediction |
| CN115856425B * | 2022-11-21 | 2023-10-17 | Spectrum anomaly detection method and device based on hidden space probability prediction |
| WO2024108723A1 * | 2022-11-23 | 2024-05-30 | Time series prediction method and system based on complex-value dendritic neural model |
| WO2024124618A1 * | 2022-12-13 | 2024-06-20 | Solar radiation prediction method and system using complex-valued neural network based on asm-cnag algorithm |
Also Published As

| Publication number | Publication date |
|---|---|
| WO2023216383A1 (en) | 2023-11-16 |
Similar Documents

| Publication | Title |
|---|---|
| CN114897144A (en) | Complex value time sequence signal prediction method based on complex value neural network |
| CN113642653B (en) | Complex value neural network signal modulation identification method based on structure optimization algorithm |
| Liu et al. | A fault diagnosis intelligent algorithm based on improved BP neural network |
| CN101825871B (en) | Intelligent adaptive control method for heave and pitch device for oblique rudder ship |
| CN109670580A (en) | A kind of data recovery method based on time series |
| CN108062572A (en) | A method and system for fault diagnosis of hydropower units based on DdAE deep learning model |
| CN110232448A (en) | It improves gradient and promotes the method that the characteristic value of tree-model acts on and prevents over-fitting |
| CN111950711A (en) | A Second-Order Hybrid Construction Method and System for Complex-valued Feedforward Neural Networks |
| CN115081590A (en) | Short-term wind speed prediction method based on complex value acceleration algorithm |
| CN114166509A (en) | Motor bearing fault prediction method |
| CN108896330B (en) | A fault diagnosis method for a hydroelectric unit |
| CN113935489A (en) | Variational quantum model TFQ-VQA based on quantum neural network and two-stage optimization method thereof |
| CA3206072A1 (en) | Method and system for solving qubo problems with hybrid classical-quantum solvers |
| CN116303786B (en) | Block chain financial big data management system based on multidimensional data fusion algorithm |
| CN116881686A (en) | A nuclear pipeline fault diagnosis method using quantum BP neural network |
| CN110807510B (en) | Parallel learning soft measurement modeling method for industrial big data |
| CN118036809A (en) | Fault current prediction method and medium based on snow melting optimized recurrent neural network |
| Song et al. | Target trajectory prediction based on optimized neural network |
| Dong et al. | Neural networks and AdaBoost algorithm based ensemble models for enhanced forecasting of nonlinear time series |
| CN115577284A (en) | Power quality disturbance classification method and system based on deep convolutional neural network |
| CN105301961A (en) | Large-diameter turntable system model identification method based on RBF neural network |
| Er et al. | Progressive learning strategies for multi-class classification |
| CN115202339B (en) | DQN-based Adaptive Planning Method for Multiple Rover Sampling Fixed Targets |
| El-Rabbany et al. | A new approach to sequential tidal prediction |
| CN118566853B (en) | Signal level radar interference generation method and device in open environment |
Legal Events

| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220812 |