
CN111046908A - Emulsion explosive package fault real-time monitoring model based on convolutional neural network - Google Patents

Emulsion explosive package fault real-time monitoring model based on convolutional neural network

Info

Publication number
CN111046908A
CN111046908A
Authority
CN
China
Prior art keywords: layer, output, size, previous, node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911071337.0A
Other languages
Chinese (zh)
Inventor
王越胜
钱卓涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201911071337.0A priority Critical patent/CN111046908A/en
Publication of CN111046908A publication Critical patent/CN111046908A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time monitoring model for emulsion explosive packaging faults based on a convolutional neural network. The constructed convolutional neural network is trained on a set of training pictures; a GUI is then developed with PyQt5 on an Anaconda + Python3 + Opencv platform, and real-time detection is carried out with the trained neural network model, so that detection can be started and stopped and recognition results obtained in real time with simple mouse clicks on the user interface. Experiments show that the method has high recognition accuracy and good real-time performance, and meets the requirements of actual emulsion explosive packaging inspection on the production line. The patent also provides complete code and packages it into a GUI, so that users can use it directly, read the code, and improve the program on that basis.

Description

Emulsion explosive package fault real-time monitoring model based on convolutional neural network
Technical Field
The invention belongs to the technical field of fault detection, and relates to an emulsion explosive package fault real-time monitoring model based on a convolutional neural network.
Background
Currently, monitoring of packaging faults of emulsion explosive in industry mainly relies on the following two approaches:
First, manual observation. This method relies on workers in the explosive plant observing in real time whether the packaged emulsion explosive is abnormal; if an abnormality is found, the conveyor belt is stopped, the cause of the abnormal packaging is identified, and the belt is then restarted so the production line can continue running. Manual inspection is highly subjective; long working hours easily cause worker fatigue, faults are often overlooked or misjudged, the operating efficiency of the explosive plant's production line suffers, and in severe cases irreparable consequences such as an explosion can result.
Second, bounding-rectangle detection. Research on online detection technology for the industrial explosive packaging process obtains the bounding rectangle of the emulsion explosive package and distinguishes and detects packaging faults from the length, width, and aspect ratio of that rectangle. The emulsion explosive cartridge is cylindrical; on the industrial site the cartridge wrapper is orange, the conveyor belt is green, and the emulsion explosive itself is bright brown, so the cartridge, the explosive, and the belt show clear brightness differences in images acquired by a black-and-white area-array camera. When the packaging machine wraps the cartridges, external factors such as machine start-up and shutdown can cause too much or too little explosive to be loaded; overfilling expands the bag and can damage the packaging so that emulsion explosive leaks and contaminates the production line. The two ends of the cartridge may also be crimped insufficiently, causing leakage at the ends. Typical emulsion explosive packaging inspection distinguishes normal cartridges, cartridges leaking at the side, cartridges leaking at the port, cartridges with sunken packaging, leaked emulsion explosive, and similar conditions. External shape information of the cartridge, such as length, width, perimeter, area, and spatial coordinates, can be obtained from the image, and by analyzing and summarizing a large number of on-site pictures, defective cartridges can be identified from the contour size.
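To make the bounding-rectangle idea concrete, the following sketch shows one way such a check could be written with OpenCV; the Otsu threshold, the size and aspect-ratio limits, and the OpenCV 4 contour API are illustrative assumptions and are not taken from the system described above.
import cv2

def check_cartridge(gray_image, min_width=180, max_width=220, min_ratio=3.0, max_ratio=4.0):
    """Classify a cartridge as normal or defective from the bounding rectangle of its contour."""
    # Binarize: the cartridge, the emulsion explosive, and the belt differ clearly in brightness.
    _, binary = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "defect"
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    length, width = max(w, h), min(w, h)
    if min_width <= width <= max_width and min_ratio <= length / width <= max_ratio:
        return "normal"
    return "defect"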
In summary, the problems of the prior art are as follows:
(1) Manual observation is highly subjective; long working hours easily cause worker fatigue, and faults are often overlooked or misjudged, which reduces the operating efficiency of the explosive plant's production line and, in severe cases, can lead to irreparable consequences such as an explosion.
(2) The method of distinguishing and detecting emulsion explosive packaging faults from the length, width, and aspect ratio of the bounding rectangle mainly uses a BP neural network, but its accuracy is not high, generally between 80% and 90%. The algorithm also runs slowly and is not suited to processing video captured in real time; an industrial camera can only take pictures continuously and pass them to the BP neural network for recognition, so real-time performance is very poor.
(3) Without complete program code, users cannot directly run the detection and obtain the final recognition result.
(4) No complete stand-alone user interface has yet been developed; research has focused on recognizing photographs of emulsion explosive packages and has not addressed practical applicability.
The difficulties in solving these technical problems are as follows:
1: The accuracy of identifying emulsion explosive packaging faults is not high; in particular, the bounding rectangle of an expanded bag differs little from that of a normal cartridge, so the two cannot be reliably distinguished.
2: Achieving high real-time performance under given hardware conditions is difficult; the convolutional neural network must be simplified and improved so that the computation rate rises while accuracy is preserved, otherwise the model can only be used for research and cannot be applied to an actual production line.
3: In the prior art, the emulsion explosive on the production line can only be photographed first and then recognized, and the resulting delay means the specific position of a faulty package cannot be accurately located; this problem urgently needs to be solved.
Disclosure of Invention
To address the problems in the prior art, the invention provides a real-time monitoring model for emulsion explosive packaging faults based on a convolutional neural network.
The technical scheme adopted by the invention for solving the technical problems is as follows:
Step 1, reading the picture pixels: the picture read in is a 28×28×1 grayscale picture; the first layer is a convolutional layer with a convolution kernel of size 5×5, depth 6, and stride 1;
This convolutional layer is directly connected to the input layer, and the picture type readable by the improved convolutional neural network model is a grayscale picture of size 28×28×1. The kernel of the first convolutional layer is 5×5 with depth 6, no all-zero padding is used, and the stride is 1. Because no all-zero padding is used, the node matrix output by this convolutional layer has side length 28-5+1=24 and depth 6, i.e. a 24×24×6 node matrix I.
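To check that the layer sizes quoted in these steps are consistent, a short calculation (an illustrative sketch, not part of the patent) using the usual VALID-padding formula output = (input - kernel) / stride + 1 reproduces the 24, 12, 8, and 4 values that appear below:
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

size = 28                      # 28x28x1 grayscale input
size = conv_out(size, 5)       # conv layer 1: 5x5 kernel, depth 6 -> 24x24x6
print(size)                    # 24
size //= 2                     # 2x2 max pooling, stride 2 -> 12x12x6
size = conv_out(size, 5)       # conv layer 2: 5x5 kernel, depth 16 -> 8x8x16
print(size)                    # 8
size //= 2                     # 2x2 max pooling, stride 2 -> 4x4x16
print(size, 4 * 4 * 16)        # 4 256 (the 256-dimensional vector fed to the fully connected layer)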
Step 2, the second layer is a max-pooling layer with a 2×2 filter; the filter's length, width, and stride are all 2, so the node matrix II output by the second layer has size 12×12×6;
The pooling layer is directly connected to the output of the previous convolutional layer, which after convolution is a 24×24×6 = 3456-node matrix. Using a filter whose length, width, and movement stride are all 2, the pooling layer turns the 24×24×6 node matrix I output by the previous layer into a 12×12×6 node matrix II and passes it to the next layer. The pooling layer uses max pooling, i.e. it keeps in turn the maximum of the four pixels in each 2×2 block, so the pooling operation reduces the amount of information carried by the matrix to 1/4 of the original.
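As a small illustration of the max-pooling step just described (a sketch, not part of the patent), the following reduces a 4×4 matrix to 2×2 by keeping the maximum of each 2×2 block, i.e. one quarter of the original values:
import numpy as np

a = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 5, 6]], dtype=np.float32)

# Split the 4x4 matrix into 2x2 blocks and keep the maximum of each block.
pooled = a.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)                  # [[6. 4.]
                               #  [7. 9.]]
print(pooled.size / a.size)    # 0.25 -> information reduced to 1/4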
Step 3, the third layer is a convolutional layer with a convolution kernel of size 5×5, depth 16, and stride 1; likewise this convolutional layer uses no all-zero padding and turns the 12×12×6 node matrix II of the previous layer into an 8×8×16 node matrix III;
This convolutional layer is connected to the output of the second layer (the pooling layer); its kernel is 5×5 with depth 16, no all-zero padding is used, and the stride is 1. Because no all-zero padding is used, the node matrix III output by this convolutional layer has side length 12-5+1=8 and depth 16, i.e. an 8×8×16 node matrix III.
Step 4, the fourth layer is a max-pooling layer with a 2×2 filter; the filter's length, width, and stride are all 2, so the output matrix IV of this max-pooling layer has size 4×4×16;
The max-pooling layer is directly connected to the output of the previous convolutional layer, which is an 8×8×16 = 1024-node matrix III. Using a filter whose length, width, and movement stride are all 2, the max-pooling layer turns the 8×8×16 node matrix III output by the previous layer into a 4×4×16 node matrix IV and passes it to the next layer. The pooling layer again uses max pooling, keeping the maximum of the four pixels in each 2×2 block, which reduces the amount of information carried by the matrix to 1/4 of the original.
Step 5, the fifth layer is an improved fully connected layer; compared with the LeNet-5 convolutional neural network model this is a substantial change, simplifying the original three fully connected layers into a single fully connected layer;
The fully connected layer is connected to the output of the previous max-pooling layer, which after max pooling is a 4×4×16 = 256-node matrix IV. The role of the fully connected layer is to flatten the nodes of node matrix IV into a vector, so the fully connected layer receives a 256-dimensional input vector. Each node of the fully connected layer is connected to the nodes of the previous layer and the next layer.
Step 6, the sixth layer is the output layer; because the experiment distinguishes 4 kinds of emulsion explosive packaging defects, the output layer has 4 nodes;
The output layer is connected to the preceding fully connected layer, which has 256 output nodes; each of these nodes is connected to the four nodes of the output layer. Each node of the output layer represents one of the 4 experimentally distinguished emulsion explosive packaging defects.
Step 7, model testing
Pictures of 4 package classes are selected: normally packaged emulsion explosive, overfilled (bag expansion), underfilled (bent head), and explosive leakage. They are recognized and classified with the trained improved LeNet-5 convolutional neural network. The experiment comprises 10 tests; the classification accuracy and loss function value of each test on the 100 pictures are shown in Table 1 below:
TABLE 1 recognition accuracy statistics table
(Table 1 appears as an image in the original publication; it lists the classification accuracy and loss function value for each of the 10 tests.)
Result analysis: across the 10 experiments the recognition accuracy exceeds 90% in every run and exceeds 97% in some runs. The loss function value of the improved LeNet-5 convolutional neural network model is recorded after each training run, and the loss values of all 10 experiments are very small, indicating that the convolutional neural network model is well suited to processing and recognizing emulsion explosive packaging faults. The training set currently contains 1100 pictures and the model is trained for 1000 iterations; the accuracy can be raised further by continuing to add training pictures and increasing the number of training iterations appropriately.
In summary, the advantages and positive effects of the invention are as follows:
The method uses an improved LeNet-5-based convolutional neural network for recognition and classification, and convolutional neural networks perform well at image recognition and classification. In the experiment, the 4 situations of normally packaged emulsion explosive, overfilled (bag expansion), underfilled (bent head), and explosive leakage can be clearly distinguished, and the recognized images are intuitive and clear with high accuracy. The accuracy of identifying emulsion explosive packaging faults is improved, as is the real-time performance of recognition: as soon as the emulsion explosive on the conveyor belt enters the camera's field of view, the model can determine whether the package is normal and, if not, the type of fault. Based on the faulty packages detected in the video, workers can quickly and accurately locate the specific position of the emulsion explosive concerned. The fault detection therefore has practical significance and can be applied to actual explosive production rather than remaining at the research level.
Drawings
Fig. 1 is a flow chart provided by an embodiment of the present invention.
Fig. 2 is the training process of the improved LeNet-5 convolutional neural network model of the present invention.
Fig. 3 is a schematic diagram of an improved LeNet-5 convolutional neural network model provided by an embodiment of the present invention.
Fig. 4 is a grayscale image of an emulsion explosive provided by an embodiment of the invention.
Fig. 5 is a binary image of an emulsion explosive provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The constructed convolutional neural network is trained on a set of training pictures; a GUI is then developed with PyQt5 on an Anaconda + Python3 + Opencv platform, and real-time detection is carried out with the trained neural network model, so that detection can be started and stopped and recognition results obtained in real time with simple mouse clicks on the user interface. Experiments show that the method has high recognition accuracy and good real-time performance, and meets the requirements of actual emulsion explosive packaging inspection on the production line. The patent also provides complete code and packages it into a GUI, so that users can use it directly, read the code, and improve the program on that basis.
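As a hedged illustration of what real-time detection with the trained neural network model can look like in code, the sketch below converts a camera frame to the 28×28 grayscale input expected by the network and reads off the predicted class. The function name, preprocessing choices, and class-name order are assumptions; sess, x_image, and y_conv refer to the TensorFlow session, input placeholder, and softmax output of the model code given below (S101-S106).
import cv2
import numpy as np

CLASS_NAMES = ["normal", "overfilled (bag expansion)",
               "underfilled (bent head)", "explosive leakage"]   # assumed label order

def classify_frame(sess, frame, x_image, y_conv):
    """Preprocess one BGR camera frame to a 28x28x1 grayscale input and return the class name."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (28, 28)).astype(np.float32) / 255.0
    probs = sess.run(y_conv, feed_dict={x_image: small.reshape(1, 28, 28, 1)})
    return CLASS_NAMES[int(np.argmax(probs))]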
The invention will be further described in detail with reference to the accompanying drawings.
As shown in Fig. 1, the invention provides a real-time monitoring model for emulsion explosive packaging faults based on a convolutional neural network. The specific implementation comprises the following steps:
S101: read the picture pixels; the readable picture is a 28×28×1 grayscale picture, and the first layer is a convolutional layer with a convolution kernel of size 5×5, depth 6, and stride 1;
the specific codes are as follows:
# First convolutional layer: 5x5 kernel, input depth 1, output depth 6, VALID padding, stride 1
conv1_weights=tf.get_variable("conv1_weights",[5,5,1,6],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv1_biases=tf.get_variable("conv1_biases",[6],initializer=tf.constant_initializer(0.0))
conv1=tf.nn.conv2d(x_image,conv1_weights,strides=[1,1,1,1],padding='VALID')
relu1=tf.nn.relu(tf.nn.bias_add(conv1,conv1_biases))
S102: the second layer is a max-pooling layer with a 2×2 filter; the filter's length, width, and stride are all 2, so the output matrix of this layer has size 12×12×6;
the specific codes are as follows:
# 2x2 max pooling with stride 2: 24x24x6 -> 12x12x6
pool1=tf.nn.max_pool(relu1,ksize=[1,2,2,1],strides=[1,2,2,1],padding='VALID')
S103: the third layer is a convolutional layer with a 5×5 kernel, depth 16, and stride 1; likewise this layer uses no all-zero padding and turns the 12×12×6 node matrix of the previous layer into an 8×8×16 node matrix;
the specific codes are as follows:
# Second convolutional layer: 5x5 kernel, input depth 6, output depth 16, VALID padding, stride 1
conv2_weights=tf.get_variable("conv2_weights",[5,5,6,16],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv2_biases=tf.get_variable("conv2_biases",[16],initializer=tf.constant_initializer(0.0))
conv2=tf.nn.conv2d(pool1,conv2_weights,strides=[1,1,1,1],padding='VALID')
relu2=tf.nn.relu(tf.nn.bias_add(conv2,conv2_biases))
S104: the fourth layer is a max-pooling layer with a 2×2 filter; the filter's length, width, and stride are all 2, so the output matrix of this layer has size 4×4×16;
the specific codes are as follows:
# 2x2 max pooling with stride 2: 8x8x16 -> 4x4x16
pool2=tf.nn.max_pool(relu2,ksize=[1,2,2,1],strides=[1,2,2,1],padding='VALID')
S105: the fifth layer is a fully connected layer; compared with the LeNet-5 convolutional neural network model this is a substantial change, simplifying the original three fully connected layers into a single fully connected layer;
the specific codes are as follows:
# Flatten the 4x4x16 output of the previous layer into a 256-dimensional feature vector
fc1_weights=tf.get_variable("fc1_weights",[4*4*16,256],initializer=tf.truncated_normal_initializer(stddev=0.1))
fc1_baises=tf.get_variable("fc1_baises",[256],initializer=tf.constant_initializer(0.1))
pool2_vector=tf.reshape(pool2,[-1,4*4*16])
fc1=tf.nn.relu(tf.matmul(pool2_vector,fc1_weights)+fc1_baises)
S106: the sixth layer is the output layer; because the experiment distinguishes 4 kinds of emulsion explosive packaging defects, the output layer has 4 nodes;
the specific codes are as follows:
out_weights=tf.get_variable("out_weights",[256,4],initializer=tf.truncated_normal_initializer(stddev=0.1))
y_conv=tf.nn.softmax(tf.matmul(fc1,out_weights))  # 4 output nodes, one per packaging-defect class
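The patent does not reproduce its training code. As a hedged sketch of how the network defined in S101-S106 might be trained and evaluated, the following uses softmax cross-entropy and 1000 training iterations as stated in the description, while the optimizer, learning rate, batch size, the input placeholder x_image, and the helpers next_batch, test_images, and test_labels are assumptions for illustration only.
import tensorflow as tf   # TensorFlow 1.x style, matching the code above

# Assumed input placeholder: x_image = tf.placeholder(tf.float32, [None, 28, 28, 1])
y_ = tf.placeholder(tf.float32, [None, 4])     # one-hot labels for the 4 packaging-defect classes
cross_entropy = -tf.reduce_mean(tf.reduce_sum(y_ * tf.log(y_conv + 1e-10), axis=1))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)   # optimizer is an assumption
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1)), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):                   # 1000 training iterations, as in the description
        batch_x, batch_y = next_batch(50)      # hypothetical helper over the 1100 training pictures
        sess.run(train_step, feed_dict={x_image: batch_x, y_: batch_y})
    loss_val, acc_val = sess.run([cross_entropy, accuracy],
                                 feed_dict={x_image: test_images, y_: test_labels})  # hypothetical test set
    print("loss:", loss_val, "accuracy:", acc_val)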
the invention is further described with reference to specific examples.
The real-time monitoring technology and system for emulsion explosive packaging faults based on the improved LeNet-5 convolutional neural network model provided by the embodiment of the invention comprise:
reading the picture pixels, where the readable picture is a 28×28×1 grayscale picture, and the first layer is a convolutional layer with a 5×5 kernel, depth 6, and stride 1;
a second layer that is a max-pooling layer with a 2×2 filter, whose length, width, and stride are all 2, so that the output matrix of this layer has size 12×12×6;
a third layer that is a convolutional layer with a 5×5 kernel, depth 16, and stride 1, again without all-zero padding, which turns the 12×12×6 node matrix of the previous layer into an 8×8×16 node matrix;
a fourth layer that is a max-pooling layer with a 2×2 filter, whose length, width, and stride are all 2, so that the output matrix of this layer has size 4×4×16;
a fifth layer that is a fully connected layer, a substantial change from the LeNet-5 convolutional neural network model, simplifying the original three fully connected layers into a single fully connected layer;
a sixth layer that is the output layer, which has 4 nodes because the experiment distinguishes 4 kinds of emulsion explosive packaging defects.
The development and implementation of the invention are based on Anaconda + Python3 + Opencv, and a GUI is developed with PyQt5, so that the user can easily control the operation of the detection program on the interface and the fault detection result is displayed in real time.
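A minimal sketch of such a PyQt5 interface is given below, assuming the widget names, button labels, camera index, and a classification callback such as the classify_frame sketch shown earlier; it is meant only to illustrate the start/stop workflow described above, not to reproduce the patent's actual GUI code.
import sys
import cv2
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget

class MonitorWindow(QWidget):
    """Start/stop real-time detection with two buttons and show the annotated camera frame."""
    def __init__(self, classify):
        super().__init__()
        self.classify = classify                   # callback: BGR frame -> class-name string
        self.setWindowTitle("Emulsion explosive packaging monitor")
        self.video = QLabel("detection stopped")
        start = QPushButton("Start detection")
        stop = QPushButton("Stop detection")
        layout = QVBoxLayout(self)
        for widget in (self.video, start, stop):
            layout.addWidget(widget)
        self.cap = None
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.update_frame)
        start.clicked.connect(self.start_detection)
        stop.clicked.connect(self.stop_detection)

    def start_detection(self):
        self.cap = cv2.VideoCapture(0)             # camera index is an assumption
        self.timer.start(30)                       # refresh roughly every 30 ms

    def stop_detection(self):
        self.timer.stop()
        if self.cap is not None:
            self.cap.release()
        self.video.setText("detection stopped")

    def update_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            return
        label = self.classify(frame)               # run the CNN on the current frame
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        h, w, _ = rgb.shape
        self.video.setPixmap(QPixmap.fromImage(QImage(rgb.data, w, h, 3 * w, QImage.Format_RGB888)))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MonitorWindow(lambda frame: "normal")  # placeholder classifier for illustration
    window.show()
    sys.exit(app.exec_())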
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (1)

1. A real-time monitoring model for emulsion explosive packaging faults based on a convolutional neural network, characterized by comprising the following steps:
Step 1. Read the picture pixels; the readable picture is a 28*28*1 grayscale picture. The first layer is a convolutional layer with a convolution kernel of size 5*5, depth 6, and stride 1.
This convolutional layer is directly connected to the input layer; the picture type readable by the improved convolutional neural network model is a 28*28*1 grayscale picture. The kernel of the first convolutional layer is 5x5 with depth 6, no all-zero padding is used, and the stride is 1. Because no all-zero padding is used, the node matrix output by this convolutional layer has side length 28-5+1=24 and depth 6, i.e. a 24*24*6 node matrix I.
Step 2. The second layer is a max-pooling layer with a 2x2 filter; the filter's length, width, and stride are all 2, so the node matrix II output by the second layer has size 12*12*6.
This pooling layer is directly connected to the output of the previous convolutional layer, which after convolution is a 24*24*6=3456 node matrix. Using a filter whose length, width, and movement stride are all 2, the pooling layer turns the 24*24*6 node matrix I output by the previous layer into a 12*12*6 node matrix II and outputs it to the next layer. The pooling layer uses max pooling, i.e. it keeps in turn the maximum of the four pixels in each 2*2 block, and this pooling operation reduces the amount of information carried by the matrix to 1/4 of the original.
Step 3. The third layer is a convolutional layer with a convolution kernel of size 5*5, depth 16, and stride 1; likewise this convolutional layer uses no all-zero padding and turns the 12*12*6 node matrix II of the previous layer into an 8*8*16 node matrix III.
This convolutional layer is connected to the output of the second layer (the pooling layer); its kernel is 5x5 with depth 16, no all-zero padding is used, and the stride is 1. Because no all-zero padding is used, the node matrix III output by this convolutional layer has side length 12-5+1=8 and depth 16, i.e. an 8*8*16 node matrix III.
Step 4. The fourth layer is a max-pooling layer with a 2x2 filter; the filter's length, width, and stride are all 2, so the output matrix IV of this max-pooling layer has size 4*4*16.
This max-pooling layer is directly connected to the output of the previous convolutional layer, which after convolution is an 8*8*16=1024 node matrix III. Using a filter whose length, width, and movement stride are all 2, the max-pooling layer turns the 8*8*16 node matrix III output by the previous layer into a 4*4*16 node matrix IV and outputs it to the next layer. The pooling layer uses max pooling, i.e. it keeps in turn the maximum of the four pixels in each 2*2 block, and this pooling operation reduces the amount of information carried by the matrix to 1/4 of the original.
Step 5. The fifth layer is an improved fully connected layer; compared with the LeNet-5 convolutional neural network model this is a substantial change, simplifying the original three fully connected layers into a single fully connected layer.
This fully connected layer is connected to the output of the previous max-pooling layer, which after max pooling is a 4*4*16=256 node matrix IV. The role of this fully connected layer is to flatten the nodes of node matrix IV into a vector, so the fully connected layer receives a 256-dimensional input vector. Each node of the fully connected layer is connected to the nodes of the previous layer and the next layer.
Step 6. The sixth layer is the output layer; because the experiment distinguishes 4 kinds of emulsion explosive packaging defects, the output layer has 4 nodes. This output layer is connected to the preceding fully connected layer, which has 256 output nodes in total, each connected to the four nodes of the output layer; each node of the output layer represents one of the 4 experimentally distinguished emulsion explosive packaging defects.
CN201911071337.0A 2019-11-05 2019-11-05 Emulsion explosive package fault real-time monitoring model based on convolutional neural network Pending CN111046908A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911071337.0A CN111046908A (en) 2019-11-05 2019-11-05 Emulsion explosive package fault real-time monitoring model based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911071337.0A CN111046908A (en) 2019-11-05 2019-11-05 Emulsion explosive package fault real-time monitoring model based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN111046908A true CN111046908A (en) 2020-04-21

Family

ID=70231906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911071337.0A Pending CN111046908A (en) 2019-11-05 2019-11-05 Emulsion explosive package fault real-time monitoring model based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111046908A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10072919B1 (en) * 2017-08-10 2018-09-11 Datacloud International, Inc. Efficient blast design facilitation systems and methods
CN108985163A (en) * 2018-06-11 2018-12-11 视海博(中山)科技股份有限公司 Confined Space Safety Detection Method Based on Unmanned Aerial Vehicle
CN109946746A (en) * 2019-03-21 2019-06-28 长安大学 A security inspection system and method based on deep neural network
CN110378383A (en) * 2019-06-19 2019-10-25 江苏大学 A kind of picture classification method based on Keras frame and deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
徐海波: "Online visual inspection system for packaging defects of industrial explosive cartridges", China Master's Theses Full-text Database, Engineering Science and Technology, no. 10, pages 1-62
王越胜 et al.: "Emulsifier fault diagnosis system based on neural network information fusion", Journal of Hangzhou Dianzi University, 15 March 2014, pages 96-99

Similar Documents

Publication Publication Date Title
CN111325713B (en) Neural network-based wood defect detection method, system and storage medium
CN107392896B (en) A wood defect detection method and system based on deep learning
CN106530284A (en) Welding spot type detection and device based on image recognition
CN108154504A (en) Method for detecting surface defects of steel plate based on convolutional neural network
CN115205274A (en) A Fabric Defect Detection Method Based on Lightweight Cascade Network
US20220092765A1 (en) Defect inspection device
Hansen et al. Concept of easy-to-use versatile artificial intelligence in industrial small & medium-sized enterprises
CN116152730A (en) Method and device for detecting multiple running states of belt conveyor based on deep learning
CN115797357B (en) Power transmission channel hidden danger detection method based on improved YOLOv7
CN112150460A (en) Detection method, detection system, device, and medium
CN114741942A (en) Fault diagnosis device and diagnosis method for reciprocating compressor of offshore platform based on machine learning
CN114429445A (en) A Method of PCB Defect Detection and Recognition Based on MAIRNet
CN107194380A (en) The depth convolutional network and learning method of a kind of complex scene human face identification
CN111046908A (en) Emulsion explosive package fault real-time monitoring model based on convolutional neural network
CN118674685A (en) Movable pin hole plugging defect detection system and method
CN118587146A (en) A defect detection method based on improved YOLOV8
CN117974572A (en) A three-stage cascade detection method for glass fiber cloth defects
CN115272814B (en) Long-distance space self-adaptive multi-scale small target detection method
CN111738991A (en) A method of creating a digital radiographic inspection model for weld defects
CN116912173A (en) Product apparent defect detection method based on unsupervised anomaly detection
CN116912601A (en) State monitoring and preventive protection method, system and equipment for collection cultural relics
CN116342502A (en) A method of industrial vision detection based on deep learning
CN114663353A (en) Neural network training method, weld joint crack detection method, device and medium
Ghoben et al. Exploring the Impact of Image Quality on Convolutional Neural Networks: A Study on Noise, Blur, and Contrast
Wiangtong et al. Deployment of machine vision platform for checking spot welds on metal strap belts

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200421

RJ01 Rejection of invention patent application after publication