
CN118734708B - Flow tube approximate generation method, device, medium and equipment based on neural network - Google Patents

Flow tube approximate generation method, device, medium and equipment based on neural network

Info

Publication number
CN118734708B
CN118734708B (application CN202410919136.6A)
Authority
CN
China
Prior art keywords
neural network
sample set
simulation
time
positive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410919136.6A
Other languages
Chinese (zh)
Other versions
CN118734708A (en)
Inventor
陈鑫
谢宇轩
汤恩义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority claimed from application CN202410919136.6A
Publication of CN118734708A
Application granted
Publication of CN118734708B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 — Computer-aided design [CAD]
    • G06F 30/20 — Design optimisation, verification or simulation
    • G06F 30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F 30/28 — Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/0455 — Auto-encoder networks; Encoder-decoder networks
    • G06N 3/048 — Activation functions
    • G06F 2111/00 — Details relating to CAD techniques
    • G06F 2111/04 — Constraint-based CAD
    • G06F 2113/00 — Details relating to the application field
    • G06F 2113/08 — Fluids
    • G06F 2113/14 — Pipes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Molecular Biology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Fluid Mechanics (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract


This invention discloses a neural-network-based method, apparatus, medium, and device for approximate flow tube generation. The method first segments the running duration of the continuous system, then constructs a deep neural network for each period. Next, through simulation of the continuous system, it constructs a positive example sample set and a negative example sample set for each period, and uses these sets as training samples to train the deep neural networks of those periods. Based on the trained model parameters, it constructs the mixed integer programming constraint encoding for each period and transforms the reachability problem into a constraint solving problem; based on the solving results, it determines whether to train the deep neural networks further. Compared with traditional polyhedral approximation or Taylor model approximation methods, this deep-neural-network-based approximation offers higher accuracy and effectively avoids the problem of flow tube precision degrading as time increases over long durations.

Description

Flow tube approximate generation method, device, medium and equipment based on neural network
Technical Field
The invention relates to simulation approximation and safety verification techniques for continuous systems.
Background
Flow tube overapproximation generation for a continuous system means producing, for a given continuous system and a specified time horizon, a set that contains all reachable states of the system variables. Such a set covers every state the system may attain within the given time, and is important for verifying properties of the continuous system, e.g., determining whether the system may reach certain unsafe states, thereby ensuring the safety of the continuous system. Obviously, the complete state space is a valid flow tube overapproximation, but it is useless for verification and may even produce erroneous results. How to generate a high-precision flow tube overapproximation is therefore a critical issue.
Disclosure of Invention
The invention aims to generate a high-precision flow tube overapproximation for a continuous system, while preventing, as far as possible, the precision from deteriorating as the system time increases.
In order to solve the problems, the invention adopts the following scheme:
The neural-network-based flow tube approximation generation method comprises the following steps:
Step S1: acquiring information of a continuous system to be processed, wherein the information of the continuous system comprises at least the system quantities, the differential equations of the system quantities with respect to time, the initial value range of the system quantities, and the running duration of the continuous system;
Step S2: dividing the running duration of the continuous system into N periods of equal length, and constructing a simulation deep neural network of the continuous system for each of the N periods;
Step S3: performing simulation runs of the continuous system by sampling the system quantities inside and outside the initial value range, and constructing a positive example sample set and a negative example sample set for each of the N periods, wherein the samples in both sets are sampled values of the system quantities;
Step S4: taking the samples in the positive and negative example sample sets of the N periods as training samples of the simulation deep neural networks of the N periods, and inputting them into the corresponding simulation deep neural network for training;
Step S5: extracting the model parameter data of the N trained simulation deep neural networks, constructing from the extracted model parameter data the mixed integer constraint encodings, corresponding to the N periods, that describe the maximum-output problem of each simulation deep neural network, and then solving the maximum-output problem by applying a mixed integer programming constraint solver to the mixed integer constraint encoding, obtaining the corresponding extremum and extremum point;
the simulation deep neural network comprises an input layer, n hidden layers and an output layer, and is expressed as:
A_out = w_out · z_n + b_out;
z_k = δ(w_k · z_{k-1} + b_k), k = 1, 2, ..., n;
z_0 = x;
where A_out is the output of the simulation deep neural network;
w_out and b_out are the output layer parameters of the simulation deep neural network, w_out being a vector and b_out a scalar;
w_k and b_k are the k-th hidden layer parameters of the simulation deep neural network (w_k a weight matrix, b_k a bias vector), where k = 1, 2, ..., n;
z_k is the output vector of the k-th hidden layer of the simulation deep neural network, where k = 1, 2, ..., n;
z_0 is the output vector of the input layer of the simulation deep neural network;
x is the vector formed by the system quantities;
n is the number of hidden layers contained in the simulation deep neural network;
δ is the activation function.
Further, according to the flow tube approximation generation method of the present invention, in step S3, the positive example sample set of the N periods is constructed by:
Step S3A1: obtaining, by sampling inside the initial value domain, initial vectors composed of initial values of the system quantities;
Step S3A2: calculating, according to the simulation recurrence formula, the in-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then adding the in-domain instantaneous values at these time points as samples to the positive example sample set of the corresponding period.
The negative example sample set of the N periods is constructed by:
Step S3B1: obtaining, by sampling outside the initial value domain, initial vectors composed of initial values of the system quantities;
Step S3B2: calculating, according to the simulation recurrence formula, the out-of-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then adding the out-of-domain instantaneous values at these time points as samples to the negative example sample set of the corresponding period.
Here the time points corresponding to the j-th period are (j·M−M)·dT, (j·M−M+1)·dT, (j·M−M+2)·dT, ..., (j·M)·dT, with j = 1, 2, 3, ..., N;
the in-domain and out-of-domain instantaneous values are those at time point i·dT, where i = 1, 2, 3, ..., T/dT, and T is the running duration of the continuous system;
F is the differential equation of the system quantities with respect to time;
M is the number of fine time-point divisions within a period.
Further, according to the flow tube approximation generation method of the present invention, in step S4, training the simulation deep neural network with samples in the positive and negative example sample sets comprises the following steps:
Step S41: inputting the samples in the positive and negative example sample sets one by one into the simulation deep neural network to obtain the corresponding output for each sample;
Step S42: calculating the positive example loss function value and the negative example loss function value;
Step S43: optimizing the parameters of the simulation deep neural network with the sum of the positive and negative example loss function values as the loss value of the optimizer;
Step S44: repeating steps S41 to S43 until the sum of the positive and negative example loss function values is 0.
The positive and negative example loss function values are calculated by the corresponding loss formulas, in which:
the positive example loss function value of the t-th training round and the negative example loss function value of the t-th training round are computed from, respectively, the output obtained when the k-th sample of the positive example sample set is input into the simulation deep neural network and the output obtained when the k-th sample of the negative example sample set is input;
NP is the number of samples in the positive example sample set;
NN is the number of samples in the negative example sample set;
max and min denote the maximum and minimum values, respectively.
Further, according to the flow tube approximation generation method of the present invention, when generating the initial vectors, steps S3A1 and S3B1 subdivide the sampling finely around the boundary of the initial value range.
The neural-network-based flow tube approximation generation device comprises the following modules:
Module M1: for acquiring information of a continuous system to be processed, wherein the information of the continuous system comprises at least the system quantities, the differential equations of the system quantities with respect to time, the initial value range of the system quantities, and the running duration of the continuous system;
Module M2: for dividing the running duration of the continuous system into N periods of equal length and constructing a simulation deep neural network of the continuous system for each of the N periods;
Module M3: for performing simulation runs of the continuous system by sampling the system quantities, and constructing a positive example sample set and a negative example sample set for each of the N periods, wherein the samples in both sets are sampled values of the system quantities; the sampled values in the positive example sample set are obtained by simulation runs of the continuous system after sampling the system quantities inside the initial value domain, and the sampled values in the negative example sample set are obtained by simulation runs of the continuous system after sampling the system quantities outside the initial value domain;
Module M4: for taking the samples in the positive and negative example sample sets of the N periods as training samples of the simulation deep neural networks of the N periods and inputting them into the corresponding simulation deep neural network for training;
Module M5: for extracting the model parameter data of the N trained simulation deep neural networks, constructing from the extracted model parameter data the mixed integer constraint encodings, corresponding to the N periods, that describe the maximum-output problem of each simulation deep neural network, and then solving the maximum-output problem by applying a mixed integer programming constraint solver to the mixed integer constraint encoding, obtaining the corresponding extremum and extremum point;
the simulation deep neural network comprises an input layer, n hidden layers and an output layer, and is expressed as:
A_out = w_out · z_n + b_out;
z_k = δ(w_k · z_{k-1} + b_k), k = 1, 2, ..., n;
z_0 = x;
where A_out is the output of the simulation deep neural network and is greater than 0 for positive examples;
w_out and b_out are the output layer parameters of the simulation deep neural network, w_out being a vector and b_out a scalar;
w_k and b_k are the k-th hidden layer parameters of the simulation deep neural network (w_k a weight matrix, b_k a bias vector), where k = 1, 2, ..., n;
z_k is the output vector of the k-th hidden layer of the simulation deep neural network, where k = 1, 2, ..., n;
z_0 is the output vector of the input layer of the simulation deep neural network;
x is the vector formed by the system quantities;
n is the number of hidden layers contained in the simulation deep neural network;
δ is the activation function.
Further, according to the flow tube approximation generation device of the present invention, in module M3, the positive example sample set of the N periods is constructed by the following modules:
Module M3A1: for obtaining, by sampling inside the initial value domain, initial vectors composed of initial values of the system quantities;
Module M3A2: for calculating, according to the simulation recurrence formula, the in-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then adding the in-domain instantaneous values at these time points as samples to the positive example sample set of the corresponding period.
The negative example sample set of the N periods is constructed by the following modules:
Module M3B1: for obtaining, by sampling outside the initial value domain, initial vectors composed of initial values of the system quantities;
Module M3B2: for calculating, according to the simulation recurrence formula, the out-of-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then adding the out-of-domain instantaneous values at these time points as samples to the negative example sample set of the corresponding period.
Here the time points corresponding to the j-th period are (j·M−M)·dT, (j·M−M+1)·dT, (j·M−M+2)·dT, ..., (j·M)·dT, with j = 1, 2, 3, ..., N;
the in-domain and out-of-domain instantaneous values are those at time point i·dT, where i = 1, 2, 3, ..., T/dT, and T is the running duration of the continuous system;
F is the differential equation of the system quantities with respect to time;
M is the number of fine time-point divisions within a period.
Further, according to the flow tube approximation generation device of the present invention, in module M4, training the simulation deep neural network with samples in the positive and negative example sample sets involves the following modules:
Module M41: for inputting the samples in the positive and negative example sample sets one by one into the simulation deep neural network to obtain the corresponding output for each sample;
Module M42: for calculating the positive example loss function value and the negative example loss function value;
Module M43: for optimizing the parameters of the simulation deep neural network with the sum of the positive and negative example loss function values as the loss value of the optimizer;
Module M44: for repeatedly executing the functions of modules M41 to M43 until the sum of the positive and negative example loss function values is 0.
The positive and negative example loss function values are calculated by the corresponding loss formulas, in which:
the positive example loss function value of the t-th training round and the negative example loss function value of the t-th training round are computed from, respectively, the output obtained when the k-th sample of the positive example sample set is input into the simulation deep neural network and the output obtained when the k-th sample of the negative example sample set is input;
NP is the number of samples in the positive example sample set;
NN is the number of samples in the negative example sample set;
max and min denote the maximum and minimum values, respectively.
Further, according to the flow tube approximation generation device of the present invention, when generating the initial vectors, modules M3A1 and M3B1 subdivide the sampling finely around the boundary of the initial value range.
According to the medium of the present invention, a machine-readable set of program instructions is stored in the medium; when the set of program instructions stored in the medium is read and executed by a machine, the machine can implement the flow tube approximation generation method described above.
The equipment of the present invention comprises a processor and a memory connected to each other; a set of program instructions is stored in the memory, and when the set of program instructions stored in the memory is read and executed by the processor, the equipment can implement the flow tube approximation generation method described above.
The invention has the following technical effects:
1. Compared with the traditional polyhedral approximation method, the invention achieves higher precision over the entire system running duration.
2. Compared with the traditional Taylor model approximation method, the precision is similar over short durations, but as the system time runs on, the precision of the flow tube approximation degrades far less.
3. The invention can generate a flow tube overapproximation for an arbitrary two-dimensional continuous system as generally as possible, and the accuracy of the overapproximation can be effectively controlled by adjusting the sampling interval of the data sets.
Drawings
FIG. 1 is a flow diagram of the flow tube approximation generation method for a continuous system according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
Fig. 2 illustrates an electronic device, a general-purpose computer of von Neumann architecture, comprising at least a processor 201 and a memory 202 connected to each other. Memory 202 stores a set of computer program instructions and data. Memory 202 is the machine-readable medium referred to by the present invention, typically a persistent storage device, including but not limited to magnetic disks, magnetic tapes, solid state drives, and the like. Processor 201 performs its functions by loading and executing the set of computer program instructions stored in memory 202. In particular, in this embodiment, the electronic device implements the flow tube approximation generation method of the present invention through processor 201 executing the set of computer program instructions stored in memory 202.
Referring to Fig. 1, the flow tube approximation generation method of the present invention comprises a time-division neural network construction step, a training sample simulation generation step, a time-division neural network training step, and a constraint encoding and constraint solving step. The aforementioned step S1 acquires the information of the continuous system to be processed, which constitutes the input of the present invention. The output of the present invention is a set of deep neural networks that approximates the flow tube of the input continuous system. The information of the continuous system includes the system quantities, the differential equations of the system quantities with respect to time, the initial value range of the system quantities, and the running duration of the continuous system. A continuous system typically has a plurality of system quantities, which together form a system quantity set. The deep neural networks output by the invention are a series of time-sliced deep neural networks, each representing the flow tube approximation over one period of the continuous system.
For example, in one example continuous system, the system quantities are a and b, the differential equations of the system quantities with respect to time are ta = b and tb = 0.2(1 − a)b − a, the initial value range is {0.8 ≤ a ≤ 1, 0.4 ≤ b ≤ 0.6}, and the running duration is 10 s. Here ta denotes the derivative of system quantity a with respect to time and tb the derivative of system quantity b with respect to time.
The time-division neural network construction step is the aforementioned step S2: the running duration of the continuous system is divided into N periods of equal length, and a simulation deep neural network of the continuous system is constructed for each of the N periods. For example, the running duration of the continuous system in the previous example is 10 s; dividing it into periods of 0.5 s each yields 20 periods, so 20 corresponding simulation deep neural networks are constructed, each corresponding to one period. In this embodiment, N is determined by the duration of each period and the running duration of the continuous system. Specifically, N = T/ST, where T is the running duration of the continuous system and ST is the preset duration of each period, typically 0.5 s, 1 s, 2 s, etc. The simulation deep neural network consists of an input layer, several hidden layers and an output layer, and can be expressed as:
A_out = w_out · z_n + b_out;
z_k = δ(w_k · z_{k-1} + b_k), k = 1, 2, ..., n;
z_0 = x;
where A_out is the output of the simulation deep neural network;
w_out and b_out are the output layer parameters of the simulation deep neural network, w_out being a vector and b_out a scalar;
w_k and b_k are the k-th hidden layer parameters of the simulation deep neural network (w_k a weight matrix, b_k a bias vector), where k = 1, 2, ..., n;
z_k is the output vector of the k-th hidden layer of the simulation deep neural network, where k = 1, 2, ..., n;
n is the number of hidden layers contained in the simulation deep neural network;
z_0 is the output vector of the input layer of the simulation deep neural network;
x is the vector formed by the system quantities;
δ is the activation function.
The activation function δ typically employs the ReLU activation function.
In the initially constructed deep neural network, the hidden layer parameters w_k and b_k and the output layer parameters w_out and b_out are typically determined randomly.
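As a concrete illustration of the per-period network defined above, the following minimal NumPy sketch (hypothetical helper names, not the patent's implementation) evaluates A_out = w_out · z_n + b_out with ReLU hidden layers and randomly initialised parameters:

```python
import numpy as np

def relu(v):
    """ReLU activation, the typical choice for δ noted in the text."""
    return np.maximum(0.0, v)

def forward(x, hidden, w_out, b_out):
    """One per-period simulation network:
    z_0 = x; z_k = δ(w_k z_{k-1} + b_k); A_out = w_out · z_n + b_out."""
    z = np.asarray(x, dtype=float)      # z_0 = x, the system quantity vector
    for w_k, b_k in hidden:             # the n hidden layers
        z = relu(w_k @ z + b_k)
    return float(w_out @ z + b_out)     # scalar output A_out

# Randomly initialised network for a 2-quantity system, one hidden layer of 4.
rng = np.random.default_rng(0)
hidden = [(rng.standard_normal((4, 2)), rng.standard_normal(4))]
w_out, b_out = rng.standard_normal(4), 0.5
a_out = forward([0.9, 0.5], hidden, w_out, b_out)
```

The hidden-layer weights must be matrices (each layer maps a vector to a vector), which is why `w_k` has shape (4, 2) here.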
The training sample simulation generation step is the aforementioned step S3: simulation runs of the continuous system are performed by sampling the system quantities inside and outside the initial value range, and a positive example sample set and a negative example sample set are constructed for each of the N periods. The N periods correspond to the N simulation deep neural networks, i.e., each simulation deep neural network has its own positive example sample set and negative example sample set. The samples are sampled values of the system quantities. In the positive example sample set, the sampled values are obtained by simulation runs of the continuous system after sampling the system quantities inside the initial value domain; in the negative example sample set, they are obtained by simulation runs after sampling the system quantities outside the initial value domain. Specifically, in this step, the positive example sample set of the N periods is constructed by:
Step S3A1: obtain, by sampling inside the initial value domain, initial vectors composed of initial values of the system quantities.
Step S3A2: calculate, according to the simulation recurrence formula, the in-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then add the in-domain instantaneous values at these time points as samples to the positive example sample set of the corresponding period.
The negative example sample set of the N periods is constructed by:
Step S3B1: obtain, by sampling outside the initial value domain, initial vectors composed of initial values of the system quantities.
Step S3B2: calculate, according to the simulation recurrence formula, the out-of-domain instantaneous values of the system quantities at the successive time points within the running duration of the continuous system, then add the out-of-domain instantaneous values at these time points as samples to the negative example sample set of the corresponding period.
In the above steps, the in-domain and out-of-domain instantaneous values at time point i·dT are vectors formed by the values of the system quantities, where i = 1, 2, 3, ..., T/dT, T is the running duration of the continuous system, F is the differential equation of the system quantities with respect to time, and M is the number of fine time-point divisions within a period. The time points corresponding to the j-th period are (j·M−M)·dT, (j·M−M+1)·dT, (j·M−M+2)·dT, ..., (j·M)·dT, with j = 1, 2, 3, ..., N; that is, the positive and negative example sample sets of each period each contain the system quantity values at M+1 time points. In steps S3A1 and S3B1 there are multiple initial vectors, the in-domain initial vectors forming one set and, correspondingly, the out-of-domain initial vectors forming another. The numbers of samples in the positive and negative example sample sets are therefore (M+1)·K+ and (M+1)·K−, where K+ is the number of initial vectors generated in step S3A1 and K− the number generated in step S3B1.
The number M of fine time-point divisions within a period is generally determined by the fine division duration: M = ST/MT, where ST is the duration of the period and MT is the fine division duration. For example, with ST = 0.5 s and MT = 25 ms, M = 20.
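The simulation recurrence itself appears only as an image in the source, so the sketch below assumes explicit Euler integration with step dT (an assumption, not the patent's stated formula) and shows how a trajectory of the example system ta = b, tb = 0.2(1 − a)b − a yields the M+1 samples of each period:

```python
import numpy as np

def simulate(x0, f, dT, steps):
    """Simulate one trajectory. The patent's recurrence is an image lost in
    extraction; explicit Euler integration is assumed here."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + f(xs[-1]) * dT)   # x_i = x_{i-1} + F(x_{i-1})*dT (assumed)
    return xs

def period_samples(trajectory, j, M):
    """System quantity values at the M+1 time points (j*M-M)*dT ... (j*M)*dT
    belonging to the j-th period (j = 1, 2, ..., N), as in the text."""
    return trajectory[(j - 1) * M : j * M + 1]

# Example system from the description: ta = b, tb = 0.2*(1 - a)*b - a,
# simulated for 10 s with dT = 25 ms (so M = ST/MT = 0.5/0.025 = 20).
f = lambda x: np.array([x[1], 0.2 * (1.0 - x[0]) * x[1] - x[0]])
traj = simulate([0.9, 0.5], f, dT=0.025, steps=400)
period1 = period_samples(traj, 1, M=20)   # 21 samples for the first period
```

Running the same loop from an out-of-domain initial vector would populate the negative example sample sets in the same way.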
Steps S3A1 and S3B1 may generate the initial vectors by random sampling or by grid point sampling. This embodiment preferably employs grid point sampling. For example, in the continuous system of the previous example, the initial value range of the system quantities a and b is {0.8 ≤ a ≤ 1, 0.4 ≤ b ≤ 0.6}. Dividing at equal intervals of 0.1 yields grid lines 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2 for system quantity a and grid lines 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 for system quantity b. Combining the grid lines of a and b pairwise, sampling yields the following in-domain initial vectors:
{0.8,0.4},{0.8,0.5},{0.8,0.6},{0.9,0.4},{0.9,0.5},{0.9,0.6},{1.0,0.4},{1.0,0.5},{1.0,0.6};
and the following out-of-domain initial vectors:
{0.6,0.2},{0.6,0.3},{0.6,0.4},{0.6,0.5},{0.6,0.6},{0.6,0.7},{0.6,0.8},
{0.7,0.2},{0.7,0.3},{0.7,0.4},{0.7,0.5},{0.7,0.6},{0.7,0.7},{0.7,0.8},
{1.1,0.2},{1.1,0.3},{1.1,0.4},{1.1,0.5},{1.1,0.6},{1.1,0.7},{1.1,0.8},
{1.2,0.2},{1.2,0.3},{1.2,0.4},{1.2,0.5},{1.2,0.6},{1.2,0.7},{1.2,0.8},
{0.8,0.2},{0.9,0.2},{1.0,0.2},{0.8,0.3},{0.9,0.3},{1.0,0.3},
{0.8,0.7},{0.9,0.7},{1.0,0.7},{0.8,0.8},{0.9,0.8},{1.0,0.8}.
Further, to enable the deep neural network to approximate the continuous system more accurately, steps S3A1 and S3B1 may subdivide the sampling finely around the boundary of the initial value range when generating the initial vectors. Specifically, when grid points are used in this embodiment, the closer a grid line is to the boundary of the initial value range, the denser the grid lines become. For example, in the continuous system above, the grid lines of system quantity a are divided as 0.6, 0.7, 0.76, 0.79, 0.8, 0.81, 0.83, 0.9, 0.97, 0.99, 1.0, 1.01, 1.04, 1.1, 1.2, and the grid lines of system quantity b as 0.2, 0.3, 0.36, 0.39, 0.4, 0.41, 0.43, 0.5, 0.57, 0.59, 0.6, 0.61, 0.64, 0.7, 0.8. Combining the grid lines of a and b pairwise yields grid points that are denser, with finer resolution, near the boundary of the initial value range.
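A minimal sketch of the grid point sampling scheme described above (hypothetical helper names), splitting the uniform 0.1 grid of the example into in-domain and out-of-domain initial vectors:

```python
import itertools

def grid_vectors(lines_a, lines_b, box):
    """Split pairwise grid-line combinations into in-domain initial vectors
    (positive example seeds) and out-of-domain ones (negative example seeds)."""
    (a_lo, a_hi), (b_lo, b_hi) = box
    inside, outside = [], []
    for a, b in itertools.product(lines_a, lines_b):
        target = inside if (a_lo <= a <= a_hi and b_lo <= b <= b_hi) else outside
        target.append((a, b))
    return inside, outside

# Uniform 0.1 grid from the description, over the box {0.8<=a<=1, 0.4<=b<=0.6}.
lines_a = [round(0.6 + 0.1 * i, 1) for i in range(7)]   # 0.6 .. 1.2
lines_b = [round(0.2 + 0.1 * i, 1) for i in range(7)]   # 0.2 .. 0.8
inside, outside = grid_vectors(lines_a, lines_b, ((0.8, 1.0), (0.4, 0.6)))
```

This reproduces the 9 in-domain and 40 out-of-domain initial vectors listed in the text; the boundary-refined grid is obtained simply by passing denser `lines_a`/`lines_b` near the box edges.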
The time-division neural network training step, namely step S4, takes the samples in the positive example sample sets and counterexample sample sets of the N time periods as training samples for the simulation deep neural networks of the respective time periods and inputs them into the corresponding simulation deep neural network for training. Training a simulation deep neural network with the samples in its positive example sample set and counterexample sample set specifically comprises the following steps:
s41, inputting samples in the positive example sample set and the negative example sample set into a simulation depth neural network one by one to obtain corresponding outputs of the samples;
step S42, calculating a positive case loss function value and a negative case loss function value;
s43, optimizing parameters of the simulation depth neural network by taking the sum of the positive loss function value and the negative loss function value as a loss function value of an optimizer;
step S44, repeating the steps S41 to S43 until the sum of the positive example loss function value and the counterexample loss function value is 0.
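The S41 to S44 loop can be sketched as follows. To keep the subgradients hand-computable, a one-parameter linear model stands in for the simulation deep neural network, and the hinge-style losses below are an assumed reading of the positive example and counterexample losses (positive examples should map to outputs at most zero, counterexamples to outputs at least zero); none of the concrete numbers come from the patent:

```python
# Sketch of the S41-S44 training loop with a linear stand-in model
# y = w*x + b. POS holds samples that should yield output <= 0,
# NEG holds samples that should yield output >= 0.
POS = [0.2, 0.5, 1.0]
NEG = [2.0, 3.0]

def losses(w, b):
    # Assumed hinge-style losses: penalize positive examples with y > 0
    # and counterexamples with y < 0.
    lp = sum(max(w * x + b, 0.0) for x in POS)       # positive example loss
    ln = sum(max(-(w * x + b), 0.0) for x in NEG)    # counterexample loss
    return lp, ln

def train(w=-1.0, b=0.0, lr=0.05, steps=2000):
    for _ in range(steps):                # S44: repeat S41-S43
        lp, ln = losses(w, b)             # S41 (forward pass) + S42 (losses)
        if lp + ln == 0.0:                # stop once the summed loss is 0
            break
        gw = gb = 0.0                     # S43: subgradient step on lp + ln
        for x in POS:
            if w * x + b > 0:
                gw += x
                gb += 1.0
        for x in NEG:
            if w * x + b < 0:
                gw -= x
                gb -= 1.0
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = train()
```

Reaching exactly zero loss may require tuning the step size and iteration count; the sketch only illustrates the control flow of steps S41 to S44, not the patent's optimizer.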
The positive example loss function value and the counterexample loss function value may be calculated with formulas of the following hinge-style form:
loss_P(t) = Σ_{k=1}^{NP} max(y_{P,k}, 0);
loss_N(t) = Σ_{k=1}^{NN} (-min(y_{N,k}, 0));
wherein,
loss_P(t) represents the positive example loss function value of the t-th training;
loss_N(t) represents the counterexample loss function value of the t-th training;
y_{P,k} represents the output obtained when the k-th sample in the positive example sample set is input to the simulation deep neural network;
y_{N,k} represents the output obtained when the k-th sample in the counterexample sample set is input to the simulation deep neural network;
NP is the number of samples in the positive example sample set;
NN is the number of samples in the counterexample sample set;
max and min represent taking the maximum value and the minimum value, respectively.
The constraint coding and constraint solving step, namely step S5, extracts the model parameter data of the N trained simulation deep neural networks, constructs, from the extracted model parameter data, the mixed integer constraint codes corresponding to the N time periods that describe the maximum output problem of the simulation deep neural networks, and then performs constraint solving on the maximum output problem through a mixed integer programming constraint solver to obtain the corresponding extremum and extremum point. If an extremum is greater than zero, the corresponding extremum point is added to the corresponding counterexample sample set and further training of that simulation deep neural network continues until the corresponding extremum is less than or equal to zero.
In the above step, the mixed integer constraint code for the j-th time period is built from that period's extracted model parameter data; in this mixed integer constraint code:
x_out represents the output of the simulation deep neural network;
j represents a simulation deep neural network corresponding to the j-th period, and j takes values of 1 to N;
w out and b out are parameters of an output layer of the simulation deep neural network;
w k and b k are parameters of the kth hidden layer of the simulation deep neural network;
n represents the number of hidden layers contained in the simulation deep neural network;
z k is the output vector of the kth hidden layer of the simulation depth neural network;
x 0 is the input of the simulation depth neural network, and is a vector composed of system quantities;
δ is the activation function, and the ReLU activation function is adopted;
M is the fine division number of time points in the time period;
f is a differential equation of system quantity with respect to time;
maximize denotes that the mixed integer constraint code describes the maximum output problem of the simulation deep neural network.
In the above step, the extremum is the extreme value of the simulation deep neural network's output, namely the extremum of x_out in the mixed integer constraint code; the extremum point is the corresponding value of the system quantity, i.e. the vector composed of the system quantities.
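To make the role of the maximum output problem concrete, the sketch below computes the exact maximum of a toy one-input ReLU network over an input interval. Because such a network is piecewise linear in one dimension, the maximum is attained at an interval endpoint or at a ReLU breakpoint, so exhaustive checking reproduces the extremum a mixed integer programming solver would return for this network; all weights and the interval are invented for illustration and are not the patent's encoding:

```python
# Toy one-hidden-layer ReLU network: net(x) = sum_i C[i]*relu(W[i]*x + B[i]) + D.
# Its exact maximum over [lo, hi] is what the mixed integer constraint code
# asks the solver for; in 1-D we can enumerate endpoints and ReLU breakpoints.
W = [1.0, -1.0]          # hidden-layer weights (illustrative)
B = [-0.5, 0.3]          # hidden-layer biases (illustrative)
C = [1.0, 1.0]           # output-layer weights (illustrative)
D = -0.4                 # output-layer bias (illustrative)

def net(x):
    return sum(c * max(w * x + b, 0.0) for w, b, c in zip(W, B, C)) + D

def max_output(lo, hi):
    # Candidate maximizers: interval endpoints plus interior ReLU breakpoints.
    candidates = [lo, hi] + [-b / w for w, b in zip(W, B)
                             if w != 0 and lo < -b / w < hi]
    best_x = max(candidates, key=net)
    return net(best_x), best_x

extremum, point = max_output(0.0, 1.0)
```

Here the extremum is positive, so in the flow of step S5 the returned extremum point would be added to the corresponding counterexample sample set and training would continue.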
It should be noted that, in the above steps, step S4 trains the simulation deep neural networks of all N time periods and step S5 then solves the constraint codes of all N time periods. Those skilled in the art will understand that, in a preferred embodiment, training and constraint code solving may also be performed network by network: the N simulation deep neural networks are traversed one by one; the traversed network is trained and its constraint code is then solved; if the extremum obtained by the solving is greater than zero, the corresponding extremum point is added to the corresponding counterexample sample set and training of that network continues until the corresponding extremum is less than or equal to zero; only then is the next simulation deep neural network traversed and the above training and solving process repeated.
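The traverse-train-solve-refine variant can be sketched structurally as follows; `refine_networks` and the stubbed training and solving callables are placeholders invented for illustration (a real implementation would plug in the S4 training routine and the S5 mixed integer solve):

```python
# Structural sketch of the per-period traverse-train-solve-refine loop.
def refine_networks(networks, neg_sets, train_one_round, solve_max_output,
                    max_rounds=100):
    for j, net in enumerate(networks):        # traverse the N period networks
        for _ in range(max_rounds):
            train_one_round(net, neg_sets[j])           # step S4 (one round)
            extremum, point = solve_max_output(net)     # step S5
            if extremum <= 0:                           # network certified
                break
            neg_sets[j].append(point)  # counterexample-guided refinement
    return neg_sets

# Exercise the control flow with stubs: the solver reports one positive
# extremum for the first network, then certifies everything.
calls = {"n": 0}

def fake_train(net, neg):
    pass

def fake_solve(net):
    calls["n"] += 1
    return (0.2, (1.1, 0.7)) if calls["n"] == 1 else (-0.1, None)

neg_sets = refine_networks([0, 1], [[], []], fake_train, fake_solve)
```

After the run, the first network's counterexample sample set has gained one extremum point and both networks have been certified (extremum at most zero), which is exactly the stopping condition stated above.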
In addition, the neural-network-based flow tube approximation generation apparatus is a virtual apparatus corresponding to the neural-network-based flow tube approximation generation method; its modules correspond one-to-one to the steps of the method and are therefore not described again here.

Claims (10)

1. A neural-network-based flow tube approximation generation method, characterized by comprising the following steps:
Step S1, acquiring information of a continuous system to be processed, wherein the information of the continuous system at least comprises system quantity, a differential equation of the system quantity relative to time, an initial value range of the system quantity and operation time of the continuous system;
Step S2, dividing the running time of the continuous system into N time periods of equal duration, and respectively constructing a simulation deep neural network of the continuous system for each of the N time periods;
Step S3, performing simulation operation on the continuous system through sampling of the system quantity in the initial value range, and respectively constructing a positive example sample set and a counterexample sample set for the N time periods, wherein the positive example sample set and the counterexample sample set are sample sets whose samples are sampling values of the system quantity;
Step S4, respectively taking the samples in the positive example sample sets and counterexample sample sets of the N time periods as training samples of the simulation deep neural networks of the N time periods, and inputting them into the corresponding simulation deep neural network for training;
Step S5, extracting model parameter data of the N trained simulation deep neural networks, constructing, according to the extracted model parameter data, the mixed integer constraint codes corresponding to the N time periods that describe the maximum output problem of the simulation deep neural networks, and then performing constraint solving on the maximum output problem through a mixed integer programming constraint solver to obtain the corresponding extremum and extremum points;
the simulation deep neural network comprises an input layer, n hidden layers and an output layer, and is expressed as:
A_out = w_out z_n + b_out;
z_k = δ(w_k z_{k-1} + b_k), k = 1, 2, ..., n;
z_0 = x;
wherein A_out is the output of the simulation deep neural network;
w_out and b_out are parameters of the output layer of the simulation deep neural network, w_out being a vector and b_out a numerical value;
w_k and b_k are the parameters of the k-th hidden layer of the simulation deep neural network and are vectors, where k = 1, 2, ..., n;
z_k is the output vector of the k-th hidden layer of the simulation deep neural network, where k = 1, 2, ..., n;
z_0 is the output vector of the input layer of the simulation deep neural network;
x is a vector composed of the system quantities;
n represents the number of hidden layers contained in the simulation deep neural network;
δ is the activation function.
2. The flow tube approximation generation method according to claim 1, wherein in the step S3, the positive sample set of N periods is constructed by:
Step S3A1, obtaining, by sampling within the initial value domain, an initial vector composed of initial values of the system quantity;
Step S3A2, calculating, according to the formula x(i×dT) = x((i-1)×dT) + f(x((i-1)×dT))×dT, the instantaneous values within the system magnitude domain at the various time points within the continuous system operating duration, and then adding the instantaneous values within the system magnitude domain at each time point as samples to the positive example sample set of the corresponding time period;
The counterexample sample set for the N time periods is constructed by:
Step S3B1, obtaining, by sampling outside the initial value domain, an initial vector composed of initial values of the system quantity;
Step S3B2, calculating, according to the formula x(i×dT) = x((i-1)×dT) + f(x((i-1)×dT))×dT, the instantaneous values outside the system magnitude domain at the various time points within the continuous system operating duration, and then adding the instantaneous values outside the system magnitude domain at each time point as samples to the counterexample sample set of the corresponding time period;
wherein:
the time points corresponding to the j-th time period are (j×M-M)×dT, (j×M-M+1)×dT, (j×M-M+2)×dT, ..., (j×M)×dT, j = 1, 2, 3, ..., N;
the values calculated in steps S3A2 and S3B2 respectively represent the instantaneous value within the system magnitude domain and the instantaneous value outside the magnitude domain at time point i×dT;
wherein i = 1, 2, 3, ..., N×M, dT = T/(N×M), and T is the running time of the continuous system;
f is a differential equation of system quantity with respect to time;
M is the number of fine divisions of time points within a time period.
3. The flow tube approximation generation method according to claim 1, wherein in the step S4, training the simulated deep neural network with samples in the positive example sample set and the negative example sample set comprises the following steps:
s41, inputting samples in the positive example sample set and the negative example sample set into a simulation depth neural network one by one to obtain corresponding outputs of the samples;
step S42, calculating a positive case loss function value and a negative case loss function value;
s43, optimizing parameters of the simulation depth neural network by taking the sum of the positive loss function value and the negative loss function value as a loss function value of an optimizer;
step S44, repeating the steps S41 to S43 until the sum of the positive example loss function value and the counterexample loss function value is 0;
the positive example loss function value and the counterexample loss function value are calculated by adopting the following formulas:
loss_P(t) = Σ_{k=1}^{NP} max(y_{P,k}, 0);
loss_N(t) = Σ_{k=1}^{NN} (-min(y_{N,k}, 0));
wherein,
loss_P(t) represents the positive example loss function value of the t-th training;
loss_N(t) represents the counterexample loss function value of the t-th training;
y_{P,k} represents the output obtained when the k-th sample in the positive example sample set is input to the simulation deep neural network;
y_{N,k} represents the output obtained when the k-th sample in the counterexample sample set is input to the simulation deep neural network;
NP is the number of samples in the positive example sample set;
NN is the number of samples in the counterexample sample set;
max and min represent taking the maximum value and the minimum value, respectively.
4. The flow tube approximation generation method of claim 2, wherein the steps S3A1 and S3B1, when generating the initial vectors by sampling, finely divide the sampling around the boundary of the initial value range.
5. A neural network-based flow tube approximation generation apparatus, comprising the following modules:
The module M1 is used for acquiring information of a continuous system to be processed, wherein the information of the continuous system at least comprises a system quantity, a differential equation of the system quantity relative to time, an initial value range of the system quantity and the running time of the continuous system;
The module M2 is used for dividing the running time of the continuous system into N time periods of equal duration and respectively constructing a simulation deep neural network of the continuous system for each of the N time periods;
The module M3 is used for performing simulation operation on the continuous system through sampling of the system quantity and respectively constructing a positive example sample set and a counterexample sample set for the N time periods, wherein the positive example sample set and the counterexample sample set are sample sets whose samples are sampling values of the system quantity, the sampling values of the system quantity in the positive example sample set being obtained through simulation operation of the continuous system after sampling the system quantity within the initial value range, and the sampling values of the system quantity in the counterexample sample set being obtained through simulation operation of the continuous system after sampling the system quantity outside the initial value range; and the module M4 is used for respectively taking the samples in the positive example sample sets and counterexample sample sets of the N time periods as training samples of the simulation deep neural networks of the N time periods and inputting them into the corresponding simulation deep neural network for training;
The module M5 is used for extracting model parameter data of N trained simulation depth neural networks, constructing a mixed integer constraint code which is corresponding to N time periods and used for describing the maximum output problem of the simulation depth neural network according to the extracted model parameter data, and then carrying out constraint solving on the maximum output problem of the simulation depth neural network on the mixed integer constraint code through a mixed integer programming constraint solver to obtain a corresponding extreme value and an extreme value point;
the simulation deep neural network comprises an input layer, n hidden layers and an output layer, and is expressed as:
A_out = w_out z_n + b_out;
z_k = δ(w_k z_{k-1} + b_k), k = 1, 2, ..., n;
z_0 = x;
wherein A_out is the output of the simulation deep neural network;
w_out and b_out are parameters of the output layer of the simulation deep neural network, w_out being a vector and b_out a numerical value;
w_k and b_k are the parameters of the k-th hidden layer of the simulation deep neural network and are vectors, where k = 1, 2, ..., n;
z_k is the output vector of the k-th hidden layer of the simulation deep neural network, where k = 1, 2, ..., n;
z_0 is the output vector of the input layer of the simulation deep neural network;
x is a vector composed of the system quantities;
n represents the number of hidden layers contained in the simulation deep neural network;
δ is the activation function.
6. The flow tube approximation generation apparatus of claim 5, wherein in said module M3, the positive sample set of N time periods is constructed by:
A module M3A1 for obtaining, by sampling within the initial value domain, an initial vector composed of initial values of the system quantity;
A module M3A2 for calculating, according to the formula x(i×dT) = x((i-1)×dT) + f(x((i-1)×dT))×dT, the instantaneous values within the system magnitude domain at the various time points within the continuous system operating duration, and then adding the instantaneous values within the system magnitude domain at each time point as samples to the positive example sample set of the corresponding time period;
the counterexample sample set for the N time periods is constructed by the following modules:
A module M3B1 for obtaining, by sampling outside the initial value domain, an initial vector composed of initial values of the system quantity;
A module M3B2 for calculating, according to the formula x(i×dT) = x((i-1)×dT) + f(x((i-1)×dT))×dT, the instantaneous values outside the system magnitude domain at the various time points within the continuous system operating duration, and then adding the instantaneous values outside the system magnitude domain at each time point as samples to the counterexample sample set of the corresponding time period;
wherein:
the time points corresponding to the j-th time period are (j×M-M)×dT, (j×M-M+1)×dT, (j×M-M+2)×dT, ..., (j×M)×dT, j = 1, 2, 3, ..., N;
the values calculated by the modules M3A2 and M3B2 respectively represent the instantaneous value within the system magnitude domain and the instantaneous value outside the magnitude domain at time point i×dT;
wherein i = 1, 2, 3, ..., N×M, dT = T/(N×M), and T is the running time of the continuous system;
f is a differential equation of system quantity with respect to time;
M is the number of fine divisions of time points within a time period.
7. The flow tube approximation generation apparatus of claim 5, wherein training the simulated deep neural network with samples in the positive example sample set and the negative example sample set in the module M4 comprises the following modules:
The module M41 is used for inputting samples in the positive example sample set and the negative example sample set into the simulation depth neural network one by one to obtain corresponding outputs of the samples;
A module M42 for calculating a positive case loss function value and a negative case loss function value;
A module M43 for optimizing parameters of the simulation depth neural network by taking the sum of the positive loss function value and the negative loss function value as the loss function value of the optimizer;
A module M44, configured to repeatedly execute the functions of the modules M41 to M43 until the sum of the positive example loss function value and the counterexample loss function value is 0, wherein the positive example loss function value and the counterexample loss function value are calculated by adopting the following formulas:
loss_P(t) = Σ_{k=1}^{NP} max(y_{P,k}, 0);
loss_N(t) = Σ_{k=1}^{NN} (-min(y_{N,k}, 0));
wherein,
loss_P(t) represents the positive example loss function value of the t-th training;
loss_N(t) represents the counterexample loss function value of the t-th training;
y_{P,k} represents the output obtained when the k-th sample in the positive example sample set is input to the simulation deep neural network;
y_{N,k} represents the output obtained when the k-th sample in the counterexample sample set is input to the simulation deep neural network;
NP is the number of samples in the positive example sample set;
NN is the number of samples in the counterexample sample set;
max and min represent taking the maximum value and the minimum value, respectively.
8. The flow tube approximation generation apparatus of claim 6, wherein the modules M3A1 and M3B1, when generating the initial vectors by sampling, finely divide the sampling around the boundary of the initial value range.
9. A medium having stored therein a set of program instructions readable by a machine, characterized in that the machine is capable of implementing a flow tube approximation generation method according to any one of claims 1 to 4 when the set of program instructions stored in the medium is executed after being read by the machine.
10. An apparatus comprising a processor and a memory coupled to each other, the memory having a set of program instructions stored therein, the apparatus being capable of implementing the flow tube approximation generation method of any one of claims 1 to 4 when the set of program instructions stored in the memory is executed after being read by the processor.
CN202410919136.6A 2024-07-08 2024-07-08 Flow tube approximate generation method, device, medium and equipment based on neural network Active CN118734708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410919136.6A CN118734708B (en) 2024-07-08 2024-07-08 Flow tube approximate generation method, device, medium and equipment based on neural network

Publications (2)

Publication Number Publication Date
CN118734708A CN118734708A (en) 2024-10-01
CN118734708B true CN118734708B (en) 2026-01-16

Family

ID=92845417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410919136.6A Active CN118734708B (en) 2024-07-08 2024-07-08 Flow tube approximate generation method, device, medium and equipment based on neural network

Country Status (1)

Country Link
CN (1) CN118734708B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114839966A (en) * 2022-03-17 2022-08-02 武汉理工大学 Unmanned ship optimal path planning method based on reachable set
CN116127844A (en) * 2023-02-08 2023-05-16 大连海事大学 A Deep Learning Prediction Method of Flow Field Time History Considering the Constraints of Flow Control Equations

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
EP2599029A4 (en) * 2010-07-29 2014-01-08 Exxonmobil Upstream Res Co Methods and systems for machine-learning based simulation of flow
CN107963093B (en) * 2017-11-30 2019-07-02 北京交通大学 Hybrid monitoring method for train overspeed protection
US20220391563A1 (en) * 2018-11-21 2022-12-08 Kontrol Gmbh Computer-Assisted Design Method for Mechatronic Systems
CN114547956B (en) * 2022-02-16 2025-09-19 中山大学 Flow field representation method, system and medium based on high-order network


Similar Documents

Publication Publication Date Title
Dumbser et al. Building blocks for arbitrary high order discontinuous Galerkin schemes
CN104346629B (en) A kind of model parameter training method, apparatus and system
Ibragimov et al. Numerical solution of the Boltzmann equation on the uniform grid
White et al. Neural networks predict fluid dynamics solutions from tiny datasets
Farago et al. On the convergence and local splitting error of different splitting schemes
Kochdumper et al. Conformant synthesis for Koopman operator linearized control systems
CN118734708B (en) Flow tube approximate generation method, device, medium and equipment based on neural network
Wang et al. An Improved Grey Prediction Model Based on Matrix Representations of the Optimized Initial Value.
CN115577573B (en) Method, device, equipment and storage medium for predicting output current of synchronous generator
Sheng et al. An implicit-explicit Monte Carlo method for semi-linear PDEs driven by 𝛼-stable Lévy process and its error estimates
Lau et al. Numerical tests of rotational mixing in massive stars with the new population synthesis code BONNFIRES
CN116151092A (en) Method and system for measuring loss of UHVDC transmission system
Angel et al. Hardware in the loop experimental validation of PID controllers tuned by genetic algorithms
Barros Composition of numerical integrators in the hyflow formalism
Wei et al. Solving second-order cone programs deterministically in matrix multiplication time
CN114611421B (en) Method and system for artificial viscosity based on modal decay
Koopman Relaxed motion in irreversible molecular statistics
CN117910367B (en) A method for predicting power system disturbance trajectory based on physical information neural network
Mickel Weak and strong approximation of the log-Heston Model by Euler-type methods and related topics
CN104376158A (en) A Multi-time-Scale Output Method for Transient Simulation for Matrix Index
CN119903747B (en) Fluid simulation method and system based on Taichi and Pytorch
CN119720841B (en) Method for determining steady-state characteristics of fluid and related device
CN117371291B (en) A method, device, equipment and medium for calculating temperature field distribution of electric power equipment
CN116825226B (en) Method, device, electronic device and storage medium for constructing coal molecular structure
Shornikov et al. Computer simulation of hybrid systems by ISMA instrumental facilities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant