CN114282578B - Switch machine running state detection method and electronic equipment - Google Patents
- Publication number
- CN114282578B CN202111657343.1A CN202111657343A
- Authority
- CN
- China
- Prior art keywords
- rbm
- sub
- hidden layer
- rbm network
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Train Traffic Observation, Control, And Security (AREA)
Abstract
The invention is applicable to the technical field of switch machine detection, and provides a switch machine running state detection method and electronic equipment. The method comprises the following steps: acquiring turnout action current training data and a preset number of RBM network hidden layer nodes; constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of RBM network hidden layer nodes; training the weight parameters of each sub RBM network and the weight parameters among the sub RBM networks according to the turnout action current training data to obtain an RBM network model; and performing feature extraction on the turnout action current data generated when the target switch machine operates the turnout based on the RBM network model, and detecting the running state of the target switch machine according to the extracted features. The invention solves the problem of the poor feature extraction capability of the RBM for high-dimensional turnout action current data in the prior art, thereby improving the accuracy of detecting the running state of the switch machine.
Description
Technical Field
The invention belongs to the technical field of switch machine detection, and particularly relates to a switch machine running state detection method and electronic equipment.
Background
Because a high-speed railway switch machine is affected by the physical properties and health condition of the equipment, the operating environment, and other factors, the switch action current curve generated when the switch machine drives the turnout carries a large amount of uncorrelated and redundant noise features along with the operating-condition information it provides to technicians, which degrades the learning effect of subsequent intelligent algorithm models. Therefore, more representative features need to be extracted from the noise-contaminated features to replace the original switch current data features, so as to better support subsequent model algorithms.
An RBM (Restricted Boltzmann Machine) is a stochastic neural network for learning the probability distribution of input data, and has been widely applied to data dimensionality reduction, classification, collaborative filtering and other fields because of its strong feature extraction and expression capability. When an RBM faces high-dimensional data, more hidden layer nodes can be used, within a certain range, to improve its abstract learning capability. However, once the number of hidden layer nodes exceeds a certain threshold, the generalization ability of the RBM drops significantly, a phenomenon known as overfitting. To cope with the increasingly serious curse of dimensionality and to overcome overfitting, the RBM needs to break through the upper limit on the number of hidden layer nodes, so that its stronger feature extraction capability can better support the learning and generalization of subsequent models.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a method and electronic equipment for detecting the running state of a switch machine, so as to solve the problem of poor feature extraction capability of RBM on high-dimensional switch action current data in the prior art, and further improve the accuracy of detecting the running state of the switch machine.
A first aspect of an embodiment of the present invention provides a method for detecting an operating state of a switch machine, including:
acquiring turnout action current training data and preset RBM network hidden layer node number;
constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of nodes of an RBM network hidden layer, wherein Q is a preset positive integer value;
Training weight parameters of each sub RBM network and weight parameters among the sub RBM networks according to turnout action current training data to obtain an RBM network model;
Performing feature extraction on turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted features; the switch action current data and the switch action current training data have the same dimension.
Optionally, constructing Q identical sub RBM networks according to the dimension of the switch action current training data and the number of nodes of the hidden layer of the RBM network, including:
according to numV_i = ⌊N/Q⌋, determining the number of visible layer nodes of each sub RBM network;
according to numH_i = ⌊K/Q⌋, determining the number of hidden layer nodes of each sub RBM network;
constructing each sub RBM network according to the number of visible layer nodes and the number of hidden layer nodes;
Wherein numV_i is the number of visible layer nodes, numH_i is the number of hidden layer nodes, N is the dimension of the turnout action current training data, K is the number of RBM network hidden layer nodes, and ⌊·⌋ denotes the rounding operation.
Optionally, training the weight parameters of each sub RBM network according to the switch action current training data includes:
For the i-th sub RBM network, extracting the i·numV_i-th to (i+1)·numV_i-th columns of data from the turnout action current training data, and training the connection weights between the visible layer nodes and the hidden layer nodes in the sub RBM network to obtain the weight parameters of the sub RBM network.
Optionally, training weight parameters between each sub-RBM network according to the switch action current training data includes:
After the weight parameters of each sub RBM network are obtained, the connection weights of the visible layer nodes of each sub RBM network and the hidden layer nodes of other sub RBM networks are trained according to the turnout action current training data, so that the weight parameters among the sub RBM networks are obtained.
Optionally, if the dimension of the switch action current data generated when the target switch machine operates the turnout changes and/or the number of RBM network hidden layer nodes changes, then:
And performing secondary training on the RBM network model according to the variable quantity, and performing feature extraction on turnout action current data based on the RBM network model after the secondary training.
Optionally, performing secondary training on the RBM network model according to the variable quantity includes:
If the dimension of the turnout action current data is increased and the number of nodes of an RBM network hidden layer is unchanged, P 1 first sub-RBM networks are newly added, the increased dimension is distributed to each first sub-RBM network, and the number P 1 of the first sub-RBM networks is determined by the increased dimension;
training the connection weight of the visible layer node and the hidden layer node in each first sub RBM network;
and training the connection weight between the visible layer node of each first sub RBM network and the hidden layer node of each original sub RBM network to obtain a RBM network model after secondary training.
Optionally, performing secondary training on the RBM network model according to the variable quantity includes:
If the dimension of the turnout action current data is unchanged and the number of hidden layer nodes of the RBM network is increased, P 2 second sub-RBM networks are newly added, the increased number of hidden layer nodes is distributed to each second sub-RBM network, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
training the connection weight of the visible layer node and the hidden layer node in each second sub RBM network;
And training the connection weight between each hidden layer node of the second sub RBM network and the visible layer node of the original sub RBM network to obtain a RBM network model after secondary training.
Optionally, performing secondary training on the RBM network model according to the variable quantity includes:
If the dimension of the turnout action current data and the number of the hidden layer nodes of the RBM network are increased, adding P 1 first sub-RBM networks and P 2 second sub-RBM networks, distributing the increased dimension to each first sub-RBM network, and distributing the increased number of the hidden layer nodes to each second sub-RBM network; wherein the number P 1 of the first sub-RBM networks is determined by the increased dimension, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
respectively training the connection weights of the visible layer nodes and the hidden layer nodes in each first sub RBM network and the connection weights of the visible layer nodes and the hidden layer nodes in each second sub RBM network;
And training the connection weight between each first sub RBM network visible layer node and each original sub RBM network hidden layer node, the connection weight between each second sub RBM network hidden layer node and each original sub RBM network visible layer node, and the connection weight between each first sub RBM network visible layer node and each second sub RBM network hidden layer node, thereby obtaining the RBM network model after secondary training.
A second aspect of the embodiment of the present invention provides a switch machine operation state detection device, including:
The training module is used for acquiring turnout action current training data and the preset node number of the hidden layer of the RBM network; constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of nodes of an RBM network hidden layer, wherein Q is a preset positive integer value; training weight parameters of each sub RBM network and weight parameters among the sub RBM networks according to turnout action current training data to obtain an RBM network model;
the detection module is used for extracting characteristics of turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted characteristics; the switch action current data and the switch action current training data have the same dimension.
A third aspect of the embodiments of the present invention provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the switch machine operation state detection method of the first aspect as described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the switch machine operation state detection method of the first aspect described above.
Compared with the prior art, the embodiment of the invention has the beneficial effects that:
According to the embodiment of the invention, a plurality of identical sub RBM networks are constructed and trained according to the dimension of turnout action current training data and the preset number of RBM network hidden layer nodes, and then all the sub RBM networks are spliced by training weight parameters among all the sub RBM networks, so that a final RBM network characteristic extraction model is obtained. The embodiment of the invention can lead the RBM to break through the upper limit of the number of nodes of the hidden layer of the network, overcome the over-fitting phenomenon, strengthen the characteristic extraction capability of the RBM network on the turnout action current data, and further improve the accuracy of detecting the running state of the switch machine.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for detecting an operating state of a switch machine according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a sub RBM network according to an embodiment of the present invention;
Fig. 3 is a detailed flow chart of a method for detecting an operating state of a switch machine according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a switch machine running state detecting device according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Referring to fig. 1, an embodiment of the present invention provides a method for detecting an operating state of a switch machine, including the following steps:
Step S101, acquiring turnout action current training data and the preset number of RBM network hidden layer nodes.
In the embodiment of the invention, the switch action current training data with corresponding dimensions can be selected according to the dimensions of the switch action current data generated when the target switch machine moves the switch. For example, switch action current training dataset destDSet contains M pieces of data, with data dimension N, and dataset destDSet has undergone the required data preprocessing procedure. The number K of the hidden layer nodes of the RBM network is the number of the hidden layer nodes of the RBM network to be finally obtained, and K is a preset value and is larger than the number Q of the sub RBMs.
Step S102, constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of nodes of an RBM network hidden layer, wherein Q is a preset positive integer value.
Optionally, constructing Q identical sub RBM networks according to the dimension of the switch action current training data and the number of nodes of the hidden layer of the RBM network, including:
according to numV_i = ⌊N/Q⌋, determining the number of visible layer nodes of each sub RBM network;
according to numH_i = ⌊K/Q⌋, determining the number of hidden layer nodes of each sub RBM network;
constructing each sub RBM network according to the number of visible layer nodes and the number of hidden layer nodes;
Wherein numV_i is the number of visible layer nodes, numH_i is the number of hidden layer nodes, N is the dimension of the turnout action current training data, K is the number of RBM network hidden layer nodes, and ⌊·⌋ is the rounding operation.
In the embodiment of the present invention, referring to fig. 2, it may be regarded as performing longitudinal cutting on the whole RBM network, and equally dividing the dimension N of the switch action current training data and the number K of nodes in the hidden layer of the RBM network into Q identical sub RBM networks, i.e. constructing each sub RBM network.
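As an illustration of this construction step, the following minimal sketch (not part of the patent; all names such as build_sub_rbms, numV and numH are hypothetical, and NumPy is assumed) builds Q equally sized sub RBM weight matrices from N, K and Q:

```python
import numpy as np

def build_sub_rbms(N, K, Q, seed=0):
    """Split an RBM with N visible and K hidden nodes into Q equal sub-RBMs."""
    rng = np.random.default_rng(seed)
    numV = N // Q  # visible layer nodes per sub-RBM: numV_i = floor(N / Q)
    numH = K // Q  # hidden layer nodes per sub-RBM:  numH_i = floor(K / Q)
    sub_rbms = [{
        "W": 0.01 * rng.standard_normal((numV, numH)),  # small random init
        "b_v": np.zeros(numV),                          # visible bias
        "b_h": np.zeros(numH),                          # hidden bias
    } for _ in range(Q)]
    return sub_rbms, numV, numH
```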
Step S103, training the weight parameters of each sub RBM network and the weight parameters among the sub RBM networks according to the turnout action current training data to obtain an RBM network model.
In the embodiment of the invention, each sub RBM network is trained separately, and the weights of the sub RBM networks are then spliced to form the weight parameter matrix of the whole RBM. Finally, this weight parameter matrix is used as the initial matrix to fine-tune the RBM network, so as to obtain the final RBM network.
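The patent does not prescribe a particular RBM training rule for this step; the sketch below assumes one-step contrastive divergence (CD-1) with sigmoid units purely for illustration, and trains the i-th sub RBM on its own column slice of the training data (function names and parameters are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sub_rbm(rbm, data, i, numV, epochs=10, lr=0.01, seed=0):
    """CD-1 sketch: fit sub-RBM i on columns i*numV .. (i+1)*numV of the data."""
    rng = np.random.default_rng(seed)
    v0 = data[:, i * numV:(i + 1) * numV]                 # column slice for sub-RBM i
    for _ in range(epochs):
        h0 = sigmoid(v0 @ rbm["W"] + rbm["b_h"])          # hidden probabilities
        h0_s = (rng.random(h0.shape) < h0).astype(float)  # sampled hidden states
        v1 = sigmoid(h0_s @ rbm["W"].T + rbm["b_v"])      # reconstruction
        h1 = sigmoid(v1 @ rbm["W"] + rbm["b_h"])
        rbm["W"] += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)  # CD-1 update
        rbm["b_v"] += lr * (v0 - v1).mean(axis=0)
        rbm["b_h"] += lr * (h0 - h1).mean(axis=0)
    return rbm
```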
Step S104, performing feature extraction on turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted features; the switch action current data and the switch action current training data have the same dimension.
In the embodiment of the invention, the RBM network model has better feature extraction capability for the same high-dimensional turnout action current data, and provides better support for the learning/generalization of the follow-up model.
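As a sketch of how the extracted features could be read out (an illustrative assumption, not the patent's implementation): the hidden-layer activations of the assembled RBM, computed from an assumed spliced weight matrix W_full and hidden bias b_h_full, serve as the feature vector passed to the downstream state detector.

```python
import numpy as np

def extract_features(current_data, W_full, b_h_full):
    # current_data: (num_samples, N) switch action current curves
    # returns a (num_samples, K) matrix of hidden-layer activations
    # used as the extracted features for running-state detection
    return 1.0 / (1.0 + np.exp(-(current_data @ W_full + b_h_full)))
```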
It can be seen that, in the embodiment of the present invention, a plurality of identical sub RBM networks are constructed and trained according to the dimension of the switch action current training data and the preset number of nodes of the hidden layer of the RBM network, and then, each sub RBM network is spliced by training weight parameters among each sub RBM network, so as to obtain a final RBM network feature extraction model. The embodiment of the invention can lead the RBM to break through the upper limit of the number of nodes of the hidden layer of the network, overcome the over-fitting phenomenon, strengthen the characteristic extraction capability of the RBM network on the turnout action current data, and further improve the accuracy of detecting the running state of the switch machine.
Optionally, in step S103, training the weight parameters of each sub RBM network according to the switch action current training data includes:
For the i-th sub RBM network, extracting the i·numV_i-th to (i+1)·numV_i-th columns of data from the turnout action current training data, and training the connection weights between the visible layer nodes and the hidden layer nodes in the sub RBM network to obtain the weight parameters of the sub RBM network.
Optionally, in step S103, training weight parameters between each sub RBM network according to the switch action current training data includes:
After the weight parameters of each sub RBM network are obtained, the connection weights of the visible layer nodes of each sub RBM network and the hidden layer nodes of other sub RBM networks are trained according to the turnout action current training data, so that the weight parameters among the sub RBM networks are obtained.
In the embodiment of the invention, after the weight parameter training of each sub RBM network is completed, the weight parameters among the sub RBM networks are expressed in the following block matrix form:
W = [W(i,j)], i, j = 1, 2, …, Q
wherein W is the weight parameter matrix, and the submatrix W(i,j) represents the connection weights between the visible layer nodes of the i-th sub RBM and the hidden layer nodes of the j-th sub RBM. Training is then performed with the weight parameter matrix W as the initial weight matrix of the RBM to obtain the final overall RBM network feature extraction model.
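A minimal sketch of this splicing step (illustrative only; blocks[i][j] is assumed to hold the trained connection weights between the visible layer of sub RBM i and the hidden layer of sub RBM j):

```python
import numpy as np

def assemble_weight_matrix(blocks):
    """Splice the Q x Q grid of sub-RBM weight blocks into the full matrix W."""
    return np.block([[blocks[i][j] for j in range(len(blocks[0]))]
                     for i in range(len(blocks))])
```

The resulting matrix would then serve as the initial weight matrix for fine-tuning the whole RBM, as described above.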
Optionally, if the dimension of the switch action current data generated when the target switch machine operates the turnout changes and/or the number of RBM network hidden layer nodes changes, then:
And performing secondary training on the RBM network model according to the variable quantity, and performing feature extraction on turnout action current data based on the RBM network model after the secondary training.
Considering that, in practical applications, the dimension N of the switch action current data and the number K of RBM network hidden layer nodes may change, the RBM needs to be retrained so that it can learn the information carried by the newly added data (dimensions). Therefore, instead of completely retraining, the embodiment of the invention generates a new weight parameter matrix by a heuristic method on the basis of the already generated weight parameter matrix, thereby accelerating the retraining of the RBM.
Specifically, a new weight parameter matrix is constructed under the following three conditions:
in the first case, performing secondary training on the RBM network model according to the variable quantity may include:
If the dimension of the turnout action current data is increased and the number of nodes of an RBM network hidden layer is unchanged, P 1 first sub-RBM networks are newly added, the increased dimension is distributed to each first sub-RBM network, and the number P 1 of the first sub-RBM networks is determined by the increased dimension;
training the connection weight of the visible layer node and the hidden layer node in each first sub RBM network;
and training the connection weight between the visible layer node of each first sub RBM network and the hidden layer node of each original sub RBM network to obtain a RBM network model after secondary training.
In the embodiment of the present invention, if the switch action current data dimension increases by a large amount (for example, by more than numV_i), the newly added data dimensions may be divided, according to the formula in step S102, among the input layers of several new sub RBMs. If the increase is small (for example, numV_i or less), the added dimensions can be treated as the input data of a single new sub RBM. The idea of updating the weight parameter matrix is the same whether several sub RBMs or a single sub RBM is added; only the update after adding a single sub RBM is shown below. The new weight parameter matrix W′ is formed by appending a (Q+1)-th block row to W, where each submatrix in the (Q+1)-th row of W′ is the weight parameter matrix obtained by taking the newly added data columns as input and training the new sub RBM against the hidden layer nodes of the corresponding original sub RBM.
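In matrix terms, this corresponds to appending a block row to W. A small sketch under that assumption (new_row_blocks is a hypothetical list of matrices, one per original sub RBM, trained between the newly added data columns and each original hidden layer):

```python
import numpy as np

def extend_for_new_dimensions(W, new_row_blocks):
    """Case 1 sketch: new visible dimensions add a block row below W."""
    return np.vstack([W, np.hstack(new_row_blocks)])
```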
And secondly, performing secondary training on the RBM network model according to the variable quantity, wherein the secondary training comprises the following steps:
If the dimension of the turnout action current data is unchanged and the number of hidden layer nodes of the RBM network is increased, P 2 second sub-RBM networks are newly added, the increased number of hidden layer nodes is distributed to each second sub-RBM network, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
training the connection weight of the visible layer node and the hidden layer node in each second sub RBM network;
And training the connection weight between each hidden layer node of the second sub RBM network and the visible layer node of the original sub RBM network to obtain a RBM network model after secondary training.
In the embodiment of the present invention, if the number of hidden layer nodes increases by a large amount (for example, by more than numH_i), the newly added hidden layer nodes may be divided, according to the method in step S102, among the hidden layers of several new sub RBMs. If the increase is small (for example, numH_i or less), the added nodes are treated as the hidden layer of a single new sub RBM. The idea of updating the weight parameter matrix is the same whether several sub RBMs or a single sub RBM is added; only the update after adding a single sub RBM is shown below. The new weight parameter matrix W′ is formed by appending a (Q+1)-th block column to W, where each submatrix in the (Q+1)-th column of W′ is the weight parameter matrix obtained by training the new sub RBM formed by the newly added hidden layer nodes together with the visible layer nodes of the corresponding original sub RBM.
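Analogously, this case appends a block column to W. A small sketch under that assumption (new_col_blocks is a hypothetical list of matrices, one per original sub RBM, trained between each original visible layer and the newly added hidden nodes):

```python
import numpy as np

def extend_for_new_hidden_nodes(W, new_col_blocks):
    """Case 2 sketch: new hidden nodes add a block column to the right of W."""
    return np.hstack([W, np.vstack(new_col_blocks)])
```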
And thirdly, performing secondary training on the RBM network model according to the variable quantity, wherein the method comprises the following steps:
If the dimension of the turnout action current data and the number of the hidden layer nodes of the RBM network are increased, adding P 1 first sub-RBM networks and P 2 second sub-RBM networks, distributing the increased dimension to each first sub-RBM network, and distributing the increased number of the hidden layer nodes to each second sub-RBM network; wherein the number P 1 of the first sub-RBM networks is determined by the increased dimension, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
respectively training the connection weights of the visible layer nodes and the hidden layer nodes in each first sub RBM network and the connection weights of the visible layer nodes and the hidden layer nodes in each second sub RBM network;
And training the connection weight between each first sub RBM network visible layer node and each original sub RBM network hidden layer node, the connection weight between each second sub RBM network hidden layer node and each original sub RBM network visible layer node, and the connection weight between each first sub RBM network visible layer node and each second sub RBM network hidden layer node, thereby obtaining the RBM network model after secondary training.
In the embodiment of the present invention, in the same manner as described above, after adding a single new sub RBM in each direction, the new weight parameter matrix W′ is formed by appending both a (Q+1)-th block row and a (Q+1)-th block column to W, together with a corner block linking the newly added visible and hidden nodes. Each submatrix in the (Q+1)-th row of W′ is the weight parameter matrix obtained by taking the newly added data columns as input and training against the hidden layer nodes of the corresponding original sub RBM; each submatrix in the (Q+1)-th column of W′ is the weight parameter matrix obtained by training the new sub RBM formed by the newly added hidden layer nodes together with the visible layer nodes of the corresponding original sub RBM.
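In this combined case the updated matrix gains both a block row and a block column, plus a corner block linking the new visible and new hidden nodes. A sketch under that assumption (new_row, new_col and corner are hypothetical trained blocks):

```python
import numpy as np

def extend_for_both(W, new_row, new_col, corner):
    """Case 3 sketch: extend W with a new block row, block column and corner."""
    return np.block([[W,       new_col],
                     [new_row, corner]])
```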
Based on the above, referring to fig. 3, the embodiment of the present invention provides a detailed implementation flow of feature extraction, as follows:
(1) Judging whether the RBM needs to be retrained;
(2) If the RBM needs to be retrained, preset the parameters: the number Q of sub RBMs into which the RBM is to be split and trained separately, and the number K of RBM network hidden layer nodes. Then, calculate the numbers of visible layer nodes and hidden layer nodes of each sub RBM, and train the weight parameters of each sub RBM network and the weight parameter matrix among the sub RBM networks with the training data to obtain the RBM feature extraction model;
(3) If the RBM only needs to be trained a second time, update the original weight parameter matrix according to the changes in the dimension of the turnout action current data and in the number of RBM network hidden layer nodes;
(4) And performing feature extraction based on the RBM feature extraction model or the RBM feature extraction model trained secondarily, wherein the output of the RBM hidden layer node is the feature extraction result of the turnout action current data.
It can be seen that, in the embodiment of the present invention, the parameters of the RBM network (the number of visible layer nodes, the number of hidden layer nodes, and the hidden layer activation function) and the number of sub RBM networks into which the network is to be split longitudinally are first set; the RBM is then cut longitudinally into several sub RBM networks of uniform size, which are trained separately; next, the weight parameters of all sub RBM networks are spliced together according to a fixed rule to form the weight parameter matrix of the whole RBM; finally, the RBM network is fine-tuned with this weight parameter matrix as the initial matrix to obtain the final RBM network, and the output of the RBM hidden layer nodes is the final feature extraction result for the turnout action current data.
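For orientation only, the illustrative helpers sketched above could be strung together roughly as follows (all names and the dataset shape are assumptions, not part of the patent):

```python
import numpy as np

data = np.random.rand(500, 160)              # stand-in for destDSet: M=500, N=160
Q, K = 8, 64
sub_rbms, numV, numH = build_sub_rbms(N=160, K=K, Q=Q)
for i, rbm in enumerate(sub_rbms):
    train_sub_rbm(rbm, data, i, numV)        # per-sub-RBM training (step S103)
# cross-weight training, splicing with assemble_weight_matrix and fine-tuning
# would follow, yielding W_full and b_h_full for extract_features (step S104)
```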
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Referring to fig. 4, an embodiment of the present invention provides a switch machine operation state detecting device, the device 40 includes:
The training module 41 is used for acquiring turnout action current training data and the preset number of nodes of the hidden layer of the RBM network; constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of nodes of an RBM network hidden layer, wherein Q is a preset positive integer value; and training the weight parameters of each sub RBM network and the weight parameters among the sub RBM networks according to the turnout action current training data to obtain an RBM network model.
The detection module 42 is used for extracting characteristics of turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted characteristics; the switch action current data and the switch action current training data have the same dimension.
Optionally, the training module 41 is specifically configured to:
according to numV_i = ⌊N/Q⌋, determining the number of visible layer nodes of each sub RBM network;
according to numH_i = ⌊K/Q⌋, determining the number of hidden layer nodes of each sub RBM network;
constructing each sub RBM network according to the number of visible layer nodes and the number of hidden layer nodes;
Wherein numV_i is the number of visible layer nodes, numH_i is the number of hidden layer nodes, N is the dimension of the turnout action current training data, K is the number of RBM network hidden layer nodes, and ⌊·⌋ is the rounding operation.
Optionally, the training module 41 is specifically configured to:
For the i-th sub RBM network, extracting the i·numV_i-th to (i+1)·numV_i-th columns of data from the turnout action current training data, and training the connection weights between the visible layer nodes and the hidden layer nodes in the sub RBM network to obtain the weight parameters of the sub RBM network.
Optionally, the training module 41 is specifically configured to:
After the weight parameters of each sub RBM network are obtained, the connection weights of the visible layer nodes of each sub RBM network and the hidden layer nodes of other sub RBM networks are trained according to the turnout action current training data, so that the weight parameters among the sub RBM networks are obtained.
Optionally, if the dimension of the switch action current data generated when the target switch machine operates the turnout changes and/or the number of RBM network hidden layer nodes changes, the training module 41 is further configured to:
And performing secondary training on the RBM network model according to the variable quantity, and performing feature extraction on turnout action current data based on the RBM network model after the secondary training.
Optionally, the training module 41 is specifically further configured to:
If the dimension of the turnout action current data is increased and the number of nodes of an RBM network hidden layer is unchanged, P 1 first sub-RBM networks are newly added, the increased dimension is distributed to each first sub-RBM network, and the number P 1 of the first sub-RBM networks is determined by the increased dimension;
training the connection weight of the visible layer node and the hidden layer node in each first sub RBM network;
and training the connection weight between the visible layer node of each first sub RBM network and the hidden layer node of each original sub RBM network to obtain a RBM network model after secondary training.
Optionally, the training module 41 is specifically further configured to:
If the dimension of the turnout action current data is unchanged and the number of hidden layer nodes of the RBM network is increased, P 2 second sub-RBM networks are newly added, the increased number of hidden layer nodes is distributed to each second sub-RBM network, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
training the connection weight of the visible layer node and the hidden layer node in each second sub RBM network;
And training the connection weight between each hidden layer node of the second sub RBM network and the visible layer node of the original sub RBM network to obtain a RBM network model after secondary training.
Optionally, the training module 41 is specifically further configured to:
If the dimension of the turnout action current data and the number of the hidden layer nodes of the RBM network are increased, adding P 1 first sub-RBM networks and P 2 second sub-RBM networks, distributing the increased dimension to each first sub-RBM network, and distributing the increased number of the hidden layer nodes to each second sub-RBM network; wherein the number P 1 of the first sub-RBM networks is determined by the increased dimension, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
respectively training the connection weights of the visible layer nodes and the hidden layer nodes in each first sub RBM network and the connection weights of the visible layer nodes and the hidden layer nodes in each second sub RBM network;
And training the connection weight between each first sub RBM network visible layer node and each original sub RBM network hidden layer node, the connection weight between each second sub RBM network hidden layer node and each original sub RBM network visible layer node, and the connection weight between each first sub RBM network visible layer node and each second sub RBM network hidden layer node, thereby obtaining the RBM network model after secondary training.
Fig. 5 is a schematic diagram of an electronic device 50 according to an embodiment of the present invention. As shown in fig. 5, the electronic device 50 of this embodiment includes: a processor 51, a memory 52, and a computer program 53 stored in the memory 52 and executable on the processor 51, such as a switch machine operating state detection program. The processor 51, when executing the computer program 53, implements the steps of the above-described switch machine operating state detection method embodiments, such as steps 101 to 104 shown in fig. 1. Alternatively, the processor 51, when executing the computer program 53, implements the functions of the modules in the above-described apparatus embodiments, such as the functions of the modules 41 to 42 shown in fig. 4.
By way of example, the computer program 53 may be divided into one or more modules/units, which are stored in the memory 52 and executed by the processor 51 to complete the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing particular functions for describing the execution of the computer program 53 in the electronic device 50. For example, the computer program 53 may be divided into a training module 41 and a detection module 42 (modules in the virtual device), each of which functions specifically as follows:
The training module 41 is used for acquiring turnout action current training data and the preset number of nodes of the hidden layer of the RBM network; constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the number of nodes of an RBM network hidden layer, wherein Q is a preset positive integer value; and training the weight parameters of each sub RBM network and the weight parameters among the sub RBM networks according to the turnout action current training data to obtain an RBM network model.
The detection module 42 is used for extracting characteristics of turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted characteristics; the switch action current data and the switch action current training data have the same dimension.
The electronic device 50 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 50 may include, but is not limited to, a processor 51, a memory 52. It will be appreciated by those skilled in the art that fig. 5 is merely an example of electronic device 50 and is not intended to limit electronic device 50, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., electronic device 50 may also include input-output devices, network access devices, buses, etc.
The processor 51 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 52 may be an internal storage unit of the electronic device 50, such as a hard disk or a memory of the electronic device 50. The memory 52 may also be an external storage device of the electronic device 50, such as a plug-in hard disk provided on the electronic device 50, a smart media card (SMC), a secure digital (SD) card, a flash card, or the like. Further, the memory 52 may also include both an internal storage unit and an external storage device of the electronic device 50. The memory 52 is used to store the computer program and other programs and data required by the electronic device 50. The memory 52 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program, and the computer program may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (7)
1. A method for detecting an operating state of a switch machine, comprising:
acquiring turnout action current training data and preset RBM network hidden layer node number;
constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the node number of the hidden layer of the RBM network, wherein Q is a preset positive integer value;
training weight parameters of each sub RBM network and weight parameters among the sub RBM networks according to the turnout action current training data to obtain an RBM network model;
Performing feature extraction on turnout action current data generated when the target point machine moves turnout based on the RBM network model, and detecting the running state of the target point machine according to the extracted features; the turnout action current data and the turnout action current training data have the same dimension;
constructing Q identical sub RBM networks according to the dimension of the turnout action current training data and the node number of the RBM network hidden layer, wherein the Q identical sub RBM networks comprise:
according to numV_i = ⌊N/Q⌋, determining the number of visible layer nodes of each sub RBM network;
according to numH_i = ⌊K/Q⌋, determining the number of hidden layer nodes of each sub RBM network;
constructing each sub RBM network according to the number of the visible layer nodes and the number of the hidden layer nodes;
wherein numV_i is the number of visible layer nodes, numH_i is the number of hidden layer nodes, N is the dimension of the turnout action current training data, K is the number of RBM network hidden layer nodes, and ⌊·⌋ denotes the rounding operation;
Training the weight parameters of each sub RBM network according to the turnout action current training data, wherein the weight parameters comprise:
For the i-th sub RBM network, extracting the i·numV_i-th to (i+1)·numV_i-th columns of data from the turnout action current training data, and training the connection weights between the visible layer nodes and the hidden layer nodes in the sub RBM network to obtain the weight parameters of the sub RBM network;
Training weight parameters among all sub RBM networks according to the turnout action current training data, wherein the weight parameters comprise:
After the weight parameters of each sub RBM network are obtained, the connection weights of the visible layer nodes of each sub RBM network and the hidden layer nodes of other sub RBM networks are trained according to the turnout action current training data, so that the weight parameters among the sub RBM networks are obtained.
2. The method of claim 1, wherein if the dimension of the switch action current data generated when the target switch machine operates the turnout changes and/or the number of RBM network hidden layer nodes changes, the method comprises:
and performing secondary training on the RBM network model according to the variable quantity, and performing feature extraction on turnout action current data based on the RBM network model after the secondary training.
3. The switch machine operating state detection method of claim 2, wherein the secondary training of the RBM network model according to the amount of change comprises:
If the dimension of the turnout action current data is increased and the number of nodes of an RBM network hidden layer is unchanged, P 1 first sub-RBM networks are newly added, the increased dimension is distributed to each first sub-RBM network, and the number P 1 of the first sub-RBM networks is determined by the increased dimension;
training the connection weight of the visible layer node and the hidden layer node in each first sub RBM network;
and training the connection weight between the visible layer node of each first sub RBM network and the hidden layer node of each original sub RBM network to obtain a RBM network model after secondary training.
4. The switch machine operating state detection method of claim 2, wherein the secondary training of the RBM network model according to the amount of change comprises:
If the dimension of the turnout action current data is unchanged and the number of hidden layer nodes of the RBM network is increased, P 2 second sub-RBM networks are newly added, the increased number of hidden layer nodes is distributed to each second sub-RBM network, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
training the connection weight of the visible layer node and the hidden layer node in each second sub RBM network;
And training the connection weight between each hidden layer node of the second sub RBM network and the visible layer node of the original sub RBM network to obtain a RBM network model after secondary training.
5. The switch machine operating state detection method of claim 2, wherein the secondary training of the RBM network model according to the amount of change comprises:
If the dimension of the turnout action current data and the number of the hidden layer nodes of the RBM network are increased, adding P 1 first sub-RBM networks and P 2 second sub-RBM networks, distributing the increased dimension to each first sub-RBM network, and distributing the increased number of the hidden layer nodes to each second sub-RBM network; wherein the number P 1 of the first sub-RBM networks is determined by the increased dimension, and the number P 2 of the second sub-RBM networks is determined by the increased number of hidden layer nodes;
respectively training the connection weights of the visible layer nodes and the hidden layer nodes in each first sub RBM network and the connection weights of the visible layer nodes and the hidden layer nodes in each second sub RBM network;
And training the connection weight between each first sub RBM network visible layer node and each original sub RBM network hidden layer node, the connection weight between each second sub RBM network hidden layer node and each original sub RBM network visible layer node, and the connection weight between each first sub RBM network visible layer node and each second sub RBM network hidden layer node, thereby obtaining the RBM network model after secondary training.
6. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 5 when the computer program is executed.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111657343.1A CN114282578B (en) | 2021-12-30 | 2021-12-30 | Switch machine running state detection method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114282578A CN114282578A (en) | 2022-04-05 |
CN114282578B (en) | 2024-08-09 |
Family
ID=80878871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111657343.1A Active CN114282578B (en) | 2021-12-30 | 2021-12-30 | Switch machine running state detection method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114282578B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348170A (en) * | 2020-11-10 | 2021-02-09 | 交控科技股份有限公司 | Fault diagnosis method and system for turnout switch machine |
CN113460122A (en) * | 2021-07-09 | 2021-10-01 | 北京昊鹏智能技术有限公司 | State detection method, device, equipment and medium for electric turnout switch machine system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097755B (en) * | 2019-04-29 | 2021-08-17 | 东北大学 | State recognition method of expressway traffic flow based on deep neural network |
US11544422B2 (en) * | 2019-09-16 | 2023-01-03 | Palo Alto Research Center Incorporated | Machine learning based systems and methods for real time, model based diagnosis |
- 2021-12-30 CN CN202111657343.1A patent/CN114282578B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||