CN112072634B - Load flow calculation method based on load flow embedding technology - Google Patents
- Publication number: CN112072634B (application CN201910495388.XA)
- Authority: CN (China)
- Prior art keywords: neural network, power, power flow, training, node
- Prior art date: 2019-06-10
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for AC mains or AC distribution networks
- H02J3/04—Circuit arrangements for AC mains or AC distribution networks for connecting networks of the same frequency but supplied from different sources
- H02J3/06—Controlling transfer of power between connected networks; Controlling sharing of load between connected networks
Abstract
Description
Technical Field
The present invention relates to the technical field of power systems, and in particular to a power flow calculation method based on a power flow embedding technique.
Background
With the continuous development of new energy technologies, a new energy utilization system, the Energy Internet, has emerged. Supported by artificial intelligence technologies such as big data and machine learning, the Energy Internet integrates a range of grid operation data to predict various operating conditions, ultimately allowing all machines, devices, and systems to be adjusted dynamically in real time and improving the overall operating efficiency of the power grid.

With the arrival of the big-data era and advances in computing, neural networks, and deep learning in particular, have greatly outperformed other machine-learning models in the field of artificial intelligence and have achieved wide application and remarkable results in speech recognition, image classification, natural language processing, and other fields.

Applying deep learning to power flow calculation is a new exploration of the problem, aimed at extending the use of deep learning in power systems. Power flow calculation is one of the most important analytical computations in a power system: an accurate and fast estimate of the grid's power flow values is the basis of every downstream computation and the premise of any stability and reliability analysis.

Applying deep learning to power flow calculation complements the existing traditional methods. Under the new conditions of the Energy Internet, the grid structures involved in power flow calculation are becoming increasingly complex, and the requirements on the speed and convergence of the algorithms are more stringent. The Gauss-Seidel iteration is simple in principle and uses little computer memory, but when applied to large-scale power systems its iteration count grows and it may fail to converge; the Newton-Raphson method converges quickly and is highly accurate, but the Jacobian matrix must be recomputed in every iteration, which consumes excessive memory and slows the computation; the fast decoupled method improves on speed and memory consumption, but may fail to converge under certain ill-conditioned situations.

Power flow calculation is, in essence, the solution of a set of nonlinear equations, and deep learning is, to a large extent, a tool with strong nonlinear fitting capability. Using deep learning to solve the power flow is therefore feasible, and for this purpose we propose a power flow calculation method based on a power flow embedding technique.
Summary of the Invention
The main object of the present invention is to provide a power flow calculation method based on a power flow embedding technique. The method is simple and convenient, computes quickly, can be used for online power flow calculation, has no convergence problem, and can compute the power flow of a grid of any topology with at most N nodes.

To achieve the above object, the technical solution adopted by the present invention is as follows:

The invention discloses a power flow calculation method based on a power flow embedding technique, comprising the following steps:

1) Determine the maximum number of nodes N and the maximum number of PV nodes K, and construct a training set K, a validation set V, and a test set T;

2) For the training data of step 1), construct the corresponding positive samples K+ and negative samples K-;

3) Based on the training samples K, positive samples K+, and negative samples K- of step 2), train a triplet-based Siamese neural network; after sufficient training, obtain the power flow embedding layer P;

4) Use the power flow embedding layer of step 3) as the first hidden layer of a deep neural network, train the deep neural network on this basis, and retain the parameters of the trained deep neural network;

5) For any power grid whose power flow is to be calculated, use the deep neural network trained in step 4) to obtain the corresponding power flow solution through forward computation.
Preferably, step 1) further comprises:

1-1) Determine the size of the data set and, for each data sample, perform the following steps:

① Determine its number of nodes n and its number of PV nodes k as random numbers, and number the nodes randomly;

② Generate an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generate the impedance of every edge, and compute and save the admittance matrix;
③ Define node 1 as the slack node and nodes 2 to (k+1) as PV nodes, with the remaining nodes as PQ nodes, and randomly generate the voltage magnitude v and phase angle θ of every node;
④ Compute the active power p and reactive power q of every node from the power flow equations, which are:

pi = vi Σj∈i vj (Gij cos θij + Bij sin θij),
qi = vi Σj∈i vj (Gij sin θij − Bij cos θij),

where pi and qi are the active and reactive power of node i, vi is the voltage magnitude of node i, Gij and Bij are the real and imaginary parts of the element in row i, column j of the nodal admittance matrix, θij = θi − θj is the voltage phase difference between node i and node j, and the notation j∈i means that node j is connected to node i, including the case i = j;
⑤ Take the active power p and reactive power q of the PQ nodes, the active power p and voltage magnitude v of the PV nodes, the voltage magnitude v and phase angle θ of the slack node, and the upper-triangular parts of the real part G and the imaginary part B of the admittance matrix as the input part of the data set, forming one row of the input matrix;

⑥ Take the voltage magnitude v and phase angle θ of all nodes as the corresponding row of the label matrix;

1-2) Divide the generated data set into a training set K, a validation set V, and a test set T. (A minimal numerical sketch of steps ③–④ is given below.)
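The data generation of steps ③–④ amounts to evaluating the power flow equations for randomly drawn voltages. A minimal NumPy sketch of these two steps is given here; the function name node_powers, the sampling ranges, and the n = 5 example are illustrative assumptions, not values prescribed by the invention.

```python
import numpy as np

def node_powers(v, theta, Y):
    """Evaluate the power flow equations for every node.

    v     : (n,) voltage magnitudes
    theta : (n,) voltage angles in radians
    Y     : (n, n) complex nodal admittance matrix, Y = G + jB
    Returns the active power p and reactive power q of each node.
    """
    G, B = Y.real, Y.imag
    dtheta = theta[:, None] - theta[None, :]          # theta_ij = theta_i - theta_j
    p = v * ((G * np.cos(dtheta) + B * np.sin(dtheta)) @ v)
    q = v * ((G * np.sin(dtheta) - B * np.cos(dtheta)) @ v)
    return p, q

# Steps ③-④ for one random sample (node 1 is the slack node; ranges are assumed):
rng = np.random.default_rng(0)
n = 5
v = rng.uniform(0.95, 1.05, size=n)
theta = rng.uniform(-0.1, 0.1, size=n)
theta[0] = 0.0                                        # slack-node angle reference
Y = np.zeros((n, n), dtype=complex)                   # fill from the generated branch impedances
p, q = node_powers(v, theta, Y)
```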
Preferably, step 2) further comprises:

2-1) For each sample (xi, ti) in the training set, perform the following steps:

⑴ Apply a small disturbance to ti to obtain ti⁺; keep the original admittance matrix and compute from the power flow equations the active power and reactive power of every node after the small disturbance;

⑵ Apply a large disturbance to ti to obtain ti⁻; keep the original admittance matrix and compute from the power flow equations the active power and reactive power of every node after the large disturbance;

⑶ Output the corresponding positive sample (xi⁺, ti⁺) and negative sample (xi⁻, ti⁻), where xi⁺ and xi⁻ are the input rows rebuilt from the disturbed quantities;

2-2) Use (xi⁺, ti⁺) as a row of the input matrix and of the label matrix of the positive sample set, and (xi⁻, ti⁻) as a row of the input matrix and of the label matrix of the negative sample set, and output the positive sample set K+ and negative sample set K- of the training set, as sketched below.
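A minimal sketch of this positive/negative sample construction. The disturbance magnitudes and the simplified packing of the disturbed powers into an input row are assumptions (the full input row also carries the slack-node and PV-node quantities and the admittance triangles); node_powers is the helper from the sketch above.

```python
import numpy as np

def make_pos_neg(t_i, Y, small=0.01, large=0.3, rng=None):
    """Build one positive and one negative sample from a label t_i = [v, theta].

    The admittance matrix Y is kept unchanged; `small` and `large` are assumed
    disturbance sizes, not values given in the patent.
    """
    rng = rng or np.random.default_rng()
    n = len(t_i) // 2
    t_pos = t_i + small * rng.standard_normal(len(t_i))     # small disturbance -> ti+
    t_neg = t_i + large * rng.standard_normal(len(t_i))     # large disturbance -> ti-
    p_pos, q_pos = node_powers(t_pos[:n], t_pos[n:], Y)     # re-evaluate the flow equations
    p_neg, q_neg = node_powers(t_neg[:n], t_neg[n:], Y)
    x_pos = np.concatenate([p_pos, q_pos])                  # simplified input row xi+
    x_neg = np.concatenate([p_neg, q_neg])                  # simplified input row xi-
    return (x_pos, t_pos), (x_neg, t_neg)
```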
Preferably, step 3) further comprises:

3-1) Using the Siamese neural network model with triplet loss function shown in Fig. 1, take xi from the training set K, xi⁺ from the positive sample set K+, and xi⁻ from the negative sample set K- as the inputs of the Siamese neural network. The model of Fig. 1 can be described as:
y1 = Wxi + b,
y2 = Wxi⁺ + b,
y3 = Wxi⁻ + b,
d1 = ||y1 − y2||,
d2 = ||y1 − y3||,

where the embedding layer coefficients P consist of the final (W, b);
3-2) Train with the Adam algorithm and, after sufficient training, output the embedding layer parameters P. The Adam rule for updating a parameter x is as follows:
mn = β1·mn−1 + (1 − β1)·gn,
vn = β2·vn−1 + (1 − β2)·gn²,
m̂n = mn/(1 − β1^n),  v̂n = vn/(1 − β2^n),
xn = xn−1 − α·m̂n/(√v̂n + ε),

where the subscript n denotes the n-th iteration, gn is the gradient of f(x) at xn, mn and m̂n are the first-moment estimate of the gradient and its bias-corrected version, and vn and v̂n are the second-moment estimate of the gradient and its bias-corrected version. A training sketch of this embedding step is given below.
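A training sketch of the embedding step in TensorFlow, the platform named later in the embodiment. The embedding width, margin, and learning rate are assumed values, and the loss written here is the standard triplet loss max(d1 − d2 + margin, 0), one common realisation of the triplet loss function of Fig. 1.

```python
import tensorflow as tf

embed = tf.keras.layers.Dense(128)                      # shared linear layer y = Wx + b (width assumed)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
margin = 1.0                                            # assumed triplet margin

@tf.function
def train_step(x, x_pos, x_neg):
    with tf.GradientTape() as tape:
        y1, y2, y3 = embed(x), embed(x_pos), embed(x_neg)
        d1 = tf.norm(y1 - y2, axis=1)                   # distance to the positive sample
        d2 = tf.norm(y1 - y3, axis=1)                   # distance to the negative sample
        loss = tf.reduce_mean(tf.maximum(d1 - d2 + margin, 0.0))
    grads = tape.gradient(loss, embed.trainable_variables)
    optimizer.apply_gradients(zip(grads, embed.trainable_variables))
    return loss

# After sufficient training, embed.get_weights() gives the embedding layer P = (W, b).
```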
Preferably, step 4) further comprises:

4-1) Select a deep neural network, take xi from the training set K as its input and ti from K as its label, initialize the parameters of the first hidden layer of the network to the embedding layer parameters P, and initialize the parameters of the other layers according to the initialization scheme of the selected network. The forward computation of the network can be described as:

oi = f(P, W1, b1, W2, b2, …, Wn, bn, xi),

4-2) Use the Adam optimization method to minimize the loss function; after sufficient training, output and retain the parameters (W1, b1, W2, b2, …, Wn, bn) of every layer, as sketched below.
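A sketch of step 4) with tf.keras: the first hidden layer is initialised from the embedding parameters P = (W, b) and the whole network is trained with Adam on a mean-squared-error loss. The hidden-layer widths and the activations of the later layers are assumptions.

```python
import tensorflow as tf

def build_flow_network(input_dim, output_dim, P_kernel, P_bias, hidden=(512, 256)):
    inputs = tf.keras.Input(shape=(input_dim,))
    first = tf.keras.layers.Dense(P_kernel.shape[1])      # first hidden layer, linear as in y = Wx + b
    h = first(inputs)
    for width in hidden:                                  # remaining layers: default initialisation
        h = tf.keras.layers.Dense(width, activation="relu")(h)
    outputs = tf.keras.layers.Dense(output_dim)(h)        # voltage magnitudes and angles of all nodes
    model = tf.keras.Model(inputs, outputs)
    first.set_weights([P_kernel, P_bias])                 # initialise from the embedding layer P
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model

# model.fit(x_train, t_train, validation_data=(x_val, t_val)) then trains every layer.
```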
Preferably, step 5) further comprises:

5-1) For any power grid, arrange the active and reactive power of the PQ nodes, the active power and voltage magnitude of the PV nodes, the voltage magnitude and phase angle of the slack node, and the real and imaginary parts of the admittance matrix into the network input xi, substitute it into oi = f(P, W1, b1, W2, b2, …, Wn, bn, xi), and output oi, i.e. the voltage magnitudes and voltage angles of all nodes, as sketched below.
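At this point the power flow computation is a single forward pass. A usage sketch, assuming the model built in the previous sketch; grid and arrange_input are hypothetical placeholders standing for the grid data and the arrangement rules of step 5-1), and N is the maximum node number.

```python
x_i = arrange_input(grid)               # hypothetical helper: padded row of powers, voltages, G/B triangles
o_i = model.predict(x_i[None, :])[0]    # one forward computation, no iteration
v_pred, theta_pred = o_i[:N], o_i[N:]   # voltage magnitudes and angles of all nodes
```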
Compared with the prior art, the present invention has the following beneficial effects:

The calculation method of the present invention is a direct method: in the final use, the known parameters only need to be arranged according to the prescribed rules and fed to the deep neural network as its input, and the final power flow values are obtained through a few matrix multiplications and the nonlinear operations of the neurons. The method is therefore relatively simple, computes quickly, can be used for online power flow calculation, and has no convergence problem. The calculation method of the present invention can be used not only for grids of fixed topology but also for solving the power flow of grids of variable topology with at most N nodes.
Brief Description of the Drawings
Fig. 1 shows the Siamese neural network model based on the triplet loss function used in the power flow calculation method based on the power flow embedding technique of the present invention.

Fig. 2 shows the fourteen-node power grid involved in the power flow calculation method of the present invention.

Fig. 3 is the flow chart of grid topology generation in the power flow calculation method of the present invention.

Fig. 4 shows the five-node power grid involved in the power flow calculation method of the present invention.

Fig. 5 shows the residual network model with embedding layer used in the power flow calculation method of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

Taking N = 14 as an example and referring to the relevant drawings of the embodiments, the technical solutions in the embodiments of the present invention are described clearly and completely below. Obviously, the described embodiments are only some, and not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The present invention provides a deep-learning-based power flow calculation method for grids of variable topology with at most N nodes. The method is simple and convenient, computes quickly, can be used for online power flow calculation, and has no convergence problem. The method mainly comprises the following steps:

Determine the maximum number of nodes N = 14 and the maximum number of PV nodes K = 1, and construct the training set, validation set, and test set. For each data sample, perform the following steps:

1-1) Draw a random integer between 3 and N (here N is taken as 14) as the number of grid nodes n, draw a random integer between 0 and K (here K is taken as 1) as the number of PV nodes k of the grid, and number the nodes randomly. The following steps take n = 5, k = 1 as an example;
1-2) Generate an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generate the impedance of every edge, and save the result in the form of an admittance matrix. Taking the grid of step 1-1) as an example, the topology of the network can be obtained by the algorithm shown in Fig. 3, and its admittance matrix, whose elements satisfy Yij = Yji, is assembled from the generated branch admittances; a generation sketch is given below.
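A minimal sketch of the random-topology generation of step 1-2). Connectivity is guaranteed here with a random spanning tree rather than the depth-first-search construction of Fig. 3, and the impedance ranges and extra-edge probability are assumptions.

```python
import numpy as np

def random_connected_admittance(n, extra_edge_prob=0.2, rng=None):
    """Random connected n-node network and its (symmetric) admittance matrix."""
    rng = rng or np.random.default_rng()
    Y = np.zeros((n, n), dtype=complex)
    edges = [(int(rng.integers(0, i)), i) for i in range(1, n)]       # spanning tree -> connected
    edges += [(i, j) for i in range(n) for j in range(i + 1, n)
              if (i, j) not in edges and rng.random() < extra_edge_prob]
    for i, j in edges:
        z = rng.uniform(0.01, 0.1) + 1j * rng.uniform(0.01, 0.3)      # random branch impedance
        y = 1.0 / z
        Y[i, j] = Y[j, i] = -y                                        # off-diagonal: -y_ij = -y_ji
        Y[i, i] += y
        Y[j, j] += y
    return Y
```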
1-3) Define node 1 as the slack node, nodes 2 to (k+1) as PV nodes, and the remaining nodes as PQ nodes, and generate the voltage magnitude v and phase angle θ of every node. Taking the grid of steps 1-1)–1-2) as an example, node 1 of this network is defined as the slack node, node 2 as a PV node, and nodes 3–5 as PQ nodes; the voltage magnitude v and phase angle θ of every node are generated randomly, with

v = [v1 v2 v3 v4 v5],
θ = [θ1 θ2 θ3 θ4 θ5],
1-4) Compute the active power p and reactive power q of every node from the power flow equations given above. Taking the grid of steps 1-1)–1-3) as an example, substituting the v and θ of every node from step 1-3) into the power flow equations gives the active power p and reactive power q:

p = [p1 p2 p3 p4 p5],
q = [q1 q2 q3 q4 q5],
1-5) Take the active power p and reactive power q of the PQ nodes, the active power p and voltage magnitude v of the PV node, the voltage magnitude v and phase angle θ of the slack node, and the upper-triangular parts of the real part G and imaginary part B of the admittance matrix as the input part of the data set, forming one row of the input matrix. Two points should be noted:

The admittance matrix is symmetric, and its diagonal elements can be obtained from the other elements of the same row or column, so only the upper-triangular elements of the admittance matrix need to be taken;

Since the deep-learning model used here is fully connected, the input dimension must be the same for every sample, whereas the training data contain grids with different numbers of nodes, different topologies, and different numbers of PV nodes, which would otherwise make the dimension of each sample inconsistent. For grids with different numbers of nodes, the unknown power flow states are set to zero; for grids with different numbers of PV nodes, K triplets are used, each consisting of the three quantities (p, q, v): for a grid with k PV nodes, k of the triplets take the form (p, 0, v) and the remaining ones take the form (p, q, 0). A padding sketch is given after the label-matrix example below. Taking the grid of steps 1-1)–1-4) as an example, in which node 2 is a PV node, the corresponding row of the input matrix can be expressed as:
Input_14 = [v1 θ1 p2 0 v2 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
G12~G1(14) G23~G2(14) G34~G3(14) G45~G4(14) G56~G5(14) G67~G6(14) G78~G7(14) G89~G8(14) G9(10)~G9(14) G(10)(11)~G(10)(14) G(11)(12)~G(11)(14) G(12)(13)~G(12)(14) G(13)(14)
B12~B1(14) B23~B2(14) B34~B3(14) B45~B4(14) B56~B5(14) B67~B6(14) B78~B7(14) B89~B8(14) B9(10)~B9(14) B(10)(11)~B(10)(14) B(11)(12)~B(11)(14) B(12)(13)~B(12)(14) B(13)(14)],
When k = 0 is taken in step 1-1), i.e. when node 2 is a PQ node, this row of the input matrix can be expressed as:
Input_14 = [v1 θ1 p2 q2 0 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
G12~G1(14) G23~G2(14) G34~G3(14) G45~G4(14) G56~G5(14) G67~G6(14) G78~G7(14) G89~G8(14) G9(10)~G9(14) G(10)(11)~G(10)(14) G(11)(12)~G(11)(14) G(12)(13)~G(12)(14) G(13)(14)
B12~B1(14) B23~B2(14) B34~B3(14) B45~B4(14) B56~B5(14) B67~B6(14) B78~B7(14) B89~B8(14) B9(10)~B9(14) B(10)(11)~B(10)(14) B(11)(12)~B(11)(14) B(12)(13)~B(12)(14) B(13)(14)],
1-6) Take the voltage magnitude v and phase angle θ of all nodes as the corresponding row of the label matrix, which can be expressed as:
Target_14 = [v1 v2 v3 v4 v5 v6 v7 v8 v9 v(10) v(11) v(12) v(13) v(14) θ1 θ2 θ3 θ4 θ5 θ6 θ7 θ8 θ9 θ(10) θ(11) θ(12) θ(13) θ(14)],
Taking the grid of steps 1-1)–1-4) as an example, this row of the label matrix can be expressed as:
Target_5 = [v1 v2 v3 v4 v5 0 0 0 0 0 0 0 0 0 θ1 θ2 θ3 θ4 θ5 0 0 0 0 0 0 0 0 0],
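A sketch of the zero-padding and triplet encoding for N = 14, K = 1 described above; the helper names are illustrative, and the row layouts follow the Input_14 and Target_14 examples.

```python
import numpy as np

N = 14

def build_input_row(v1, th1, node2_triplet, p_pq, q_pq, Y):
    """Pad one n-node grid (n <= N) into the fixed-length input row.

    node2_triplet : (p2, 0, v2) if node 2 is a PV node, (p2, q2, 0) if it is PQ
    p_pq, q_pq    : active/reactive power of the PQ nodes 3..n
    Y             : (n, n) complex admittance matrix of the actual grid
    """
    n = Y.shape[0]
    pad = lambda a: np.concatenate([np.asarray(a, dtype=float), np.zeros(N - 2 - len(a))])
    Y_full = np.zeros((N, N), dtype=complex)
    Y_full[:n, :n] = Y                                   # unknown states stay zero
    iu = np.triu_indices(N, k=1)                         # strictly upper-triangular G and B
    return np.concatenate([[v1, th1], node2_triplet, pad(p_pq), pad(q_pq),
                           Y_full.real[iu], Y_full.imag[iu]])

def build_label_row(v, theta):
    """Pad the voltage magnitudes and angles of an n-node grid into the label row."""
    pad = lambda a: np.concatenate([np.asarray(a, dtype=float), np.zeros(N - len(a))])
    return np.concatenate([pad(v), pad(theta)])
```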
Divide the generated data set into a training set, a validation set, and a test set. In this embodiment, the training, validation, and test sets contain 400,000, 60,000, and 40,000 samples, respectively;

For each training data sample (xi, ti) of step 2), perform the following steps:
3-1) Apply a small disturbance to ti to obtain ti⁺; keep the original admittance matrix and compute from the power flow equations the active power and reactive power of every node after the small disturbance;

3-2) Apply a large disturbance to ti to obtain ti⁻; keep the original admittance matrix and compute from the power flow equations the active power and reactive power of every node after the large disturbance;

3-3) Use (xi⁺, ti⁺) as a row of the input matrix and of the label matrix of the positive sample set, and (xi⁻, ti⁻) as a row of the input matrix and of the label matrix of the negative sample set;

Train the Siamese neural network model based on the triplet loss function on the TensorFlow platform, with Python as the programming language and a stochastic gradient algorithm as the optimization method; after sufficient training, the embedding layer coefficients P are obtained. The model of Fig. 1 can be described as:
y1 = Wxi + b,
y2 = Wxi⁺ + b,
y3 = Wxi⁻ + b,
d1 = ||y1 − y2||,
d2 = ||y1 − y3||,

where the embedding layer coefficients P consist of the final (W, b);
Use the power flow embedding layer of step 2) as the first hidden layer of a deep neural network, and on this basis train a deep neural network for calculating grids of variable topology with no more than N nodes. Taking a residual network as an example, the final structure of the residual network with embedding layer is shown in Fig. 5, and a sketch is given below;
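A sketch of a residual network whose first layer is the power flow embedding layer, in the spirit of Fig. 5; the block width, the number of blocks, and the use of ReLU activations are assumptions.

```python
import tensorflow as tf

def residual_block(h, width):
    """One fully connected residual block (an assumed form of the blocks in Fig. 5)."""
    out = tf.keras.layers.Dense(width, activation="relu")(h)
    out = tf.keras.layers.Dense(width)(out)
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([h, out]))

def build_residual_flow_network(input_dim, output_dim, P_kernel, P_bias, n_blocks=4):
    inputs = tf.keras.Input(shape=(input_dim,))
    embed = tf.keras.layers.Dense(P_kernel.shape[1])      # power flow embedding layer as first layer
    h = embed(inputs)
    for _ in range(n_blocks):                             # stack of residual blocks
        h = residual_block(h, P_kernel.shape[1])
    outputs = tf.keras.layers.Dense(output_dim)(h)        # voltage magnitudes and angles of all nodes
    model = tf.keras.Model(inputs, outputs)
    embed.set_weights([P_kernel, P_bias])                 # initialise from the trained embedding P
    return model
```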
For any power grid whose power flow needs to be calculated, use the deep neural network trained in step 5) to obtain the corresponding power flow solution. Taking the five-node grid shown in Fig. 4 as an example, perform the following steps:

6-1) Sample the network to obtain the relevant parameters; the node parameters of the five-node grid are listed in Table 1 and its line parameters in Table 2;

Table 1: Node parameters of the five-node grid

Table 2: Line parameters of the five-node grid

6-2) From Tables 1 and 2, obtain the real part G and imaginary part B of its admittance matrix Y and the input Input of the deep neural network:
Input=[1.0501.050523.71.60000000001.01.30.8000000000
0000000000000000000000000-0.8299-0.6240000000000-0.75470000000000000000000000000000000000000000000000000000
0033.3333000000000066.6667000000000003.11203.90020000000002.6415000000000000000000000000000000000000000000000000000000],
6-3) Feed this vector to the deep neural network as its input and obtain the corresponding output through a forward computation.

The feasibility of the present invention is illustrated below with data. Table 3 shows the errors on the training set and on the test set of several stacked autoencoders without embedding layer, taking N = 50 and K = 3.

Table 3: Power flow calculation results based on stacked autoencoders

According to Table 3, networks with more hidden-layer neurons and more hidden layers tend to give better training and test results. For example, comparing the stacked autoencoder with network structure 2554-2000-1000-500-100 in Table 3 with the one with structure 2554-5000-1000-500-100, the latter performs better on both the training set and the test set; comparing the stacked autoencoder with structure 2554-5000-100 in Table 3 with the one with structure 2554-2000-1000-100, the latter has slightly lower training and test errors than the former, while requiring far fewer trainable parameters.
Table 4: Training and test times of the stacked autoencoders

Table 4 shows the training and test times of two different structures of the three-hidden-layer stacked autoencoder, where the test time is the time required to compute the power flow values of the fourteen-node grid shown in Fig. 2; the node parameters of this grid are listed in Table 5 and its line parameters in Table 6;

Table 5: Node parameters of the fourteen-node grid

Table 6: Line parameters of the fourteen-node grid

For the fourteen-node grid shown in Fig. 2, the Newton-Raphson method takes about 0.113314 s; according to the results in Table 4, the stacked autoencoder obtains the result faster. For the same fourteen-node grid with the resistance values increased 100-fold, the Jacobian matrix becomes singular when the Newton-Raphson method is used, so its error oscillates and does not converge. When the stacked autoencoder with network structure 2554-2000-1000-500-100 from Table 3 is used to compute the power flow of this network and the result is substituted back into the power flow equations, the maximum error of the active power is 0.03 kVA and the maximum error of the reactive power is 0.02 kVA, which shows that the present invention can, to a certain extent, provide reference values for ill-conditioned power flows.
This embodiment demonstrates, on the one hand, that deep neural networks are capable of solving the power flow calculation problem and, on the other hand, through the comparison with the Newton method, the advantages of the present invention in computation speed and convergence. To further illustrate the capability of the embedding layer proposed by the present invention, we take N = 14, K = 1 as an example and train four different deep neural networks with embedding layer, namely a shallow BP neural network, a deep BP neural network, a deep ReLU neural network, and a residual network, and apply them to the five-node grid shown in Fig. 4.

Tables 7–10 show the simulation results of the shallow BP neural network, the deep BP neural network, the deep ReLU neural network, and the residual network, each with embedding layer. In the tables, the voltage magnitudes and voltage magnitude errors are given in p.u., and the voltage angles and voltage angle errors in degrees.
Table 7: Power flow calculation results of the shallow BP neural network with embedding layer

Table 8: Power flow calculation results of the deep BP neural network with embedding layer

Table 9: Power flow calculation results of the deep ReLU neural network with embedding layer

Table 10: Power flow calculation results of the residual network with embedding layer
Table 7 shows the experimental results of applying a trained shallow BP neural network with embedding layer to the five-node grid of Fig. 4. Judging from the voltage magnitude and voltage angle errors, a shallow neural network can be used for grid power flow calculation to a certain extent, but its accuracy is poor;

Table 8 shows the experimental results of applying a trained deep BP neural network with embedding layer (ten layers) to the five-node grid of Fig. 4. Comparing the results of Tables 7 and 8, the simulation performance of the deep BP neural network is far worse than that of the shallow BP neural network, because the deep BP neural network suffers from the vanishing-gradient problem; this also shows that the power flow embedding technique cannot solve the vanishing-gradient problem;

Table 9 shows the experimental results of applying a trained deep ReLU neural network with embedding layer (ten layers) to the five-node grid of Fig. 4. Comparing Tables 8 and 9, the error of the deep ReLU neural network is much smaller than that of the deep BP neural network of the same structure;

Table 10 shows the experimental results of applying a trained deep residual network with embedding layer (ten layers) to the five-node grid of Fig. 4. Comparing Tables 7–10, the residual network performs best.
Table 11: Simulation results of the networks without embedding layer

Table 11 shows the simulation results of each network without embedding layer. Comparing Tables 7–11, with the support of the power flow embedding technique the performance of every network except the deep BP neural network improves to a certain extent; even the simplest shallow BP neural network achieves relatively good experimental results. This shows that the power flow embedding technique can, to a certain extent, mine the latent features of the grid input data and thereby further improve the expressive power of the model.

The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the concept of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (7)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910495388.XA (CN112072634B) | 2019-06-10 | 2019-06-10 | Load flow calculation method based on load flow embedding technology |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112072634A | 2020-12-11 |
| CN112072634B | 2022-06-24 |
Family
ID=73658132
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910495388.XA (CN112072634B, Active) | 2019-06-10 | 2019-06-10 | Load flow calculation method based on load flow embedding technology |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN112072634B (en) |
Family Cites Families (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10140705B2 | 2014-06-10 | 2018-11-27 | Siemens Healthcare Diagnostics Inc. | Drawer vision system |
| CN108846410A | 2018-05-02 | 2018-11-20 | 湘潭大学 | Power quality disturbance classification method based on sparse auto-encoding deep neural network |
| CN108898249A | 2018-06-28 | 2018-11-27 | 鹿寨知航科技信息服务有限公司 | Power grid fault prediction method |
| CN108879708B | 2018-08-28 | 2021-06-22 | 东北大学 | Reactive voltage partitioning method and system for active power distribution network |
2019-06-10: CN application CN201910495388.XA — patent CN112072634B (en), status Active
Also Published As

| Publication number | Publication date |
|---|---|
| CN112072634A (en) | 2020-12-11 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Address after: 310000 No. 51 Huzhou Street, Gongshu District, Hangzhou City, Zhejiang Province; Applicant after: HANGZHOU City University. Address before: 310000 No. 51 Huzhou Street, Gongshu District, Hangzhou City, Zhejiang Province; Applicant before: Zhejiang University City College |
| | TA01 | Transfer of patent application right | Effective date of registration: 2022-01-27. Address after: 310000 No. 51 Huzhou Street, Gongshu District, Hangzhou City, Zhejiang Province; Applicants after: HANGZHOU City University, ZHEJIANG TIANNENG POWER ENERGY Co., Ltd. Address before: 310000 No. 51 Huzhou Street, Gongshu District, Hangzhou City, Zhejiang Province; Applicant before: HANGZHOU City University |
| | GR01 | Patent grant | |