CN113313000B - An intelligent identification method of gas-liquid two-phase flow based on optical image - Google Patents
- Publication number
- CN113313000B (Application CN202110546145.1A)
- Authority
- CN
- China
- Prior art keywords
- gas
- phase flow
- liquid
- fcn
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an intelligent identification method for gas-liquid two-phase flow based on optical images: prepare training and test data sets; construct a fully convolutional network model FCN; and use the trained FCN model to identify the bubbles in the gas-liquid two-phase flow. After training, the model receives an image of the two-phase flow to be identified, the network identifies the bubbles in the image almost exactly, and the precision of the bubble identification is computed. The FCN method, based on deeply supervised learning and data extraction, is introduced into gas-liquid two-phase flow identification; it automatically extracts pixel-level information through multi-layer convolution operations to capture abstract semantic concepts, and uses upsampling layers and multi-scale fusion so that high-level sub-networks repeatedly fuse the features of low-level sub-networks while maintaining very high resolution, thereby improving the accuracy of bubble identification.
Description
Technical Field
The invention relates to an intelligent identification method for gas-liquid two-phase flow based on optical images, and in particular to a two-phase flow identification method based on an improved fully convolutional network model FCN, aimed especially at the problem of identifying bubbles in liquid. It belongs to the field of gas-liquid two-phase flow identification and classification.
Background Art
Multiphase flow problems arise throughout nature and industrial processes, and gas-liquid two-phase flow is the most common case. Because bubbles exhibit complex hydrodynamics in fields as diverse as chemical engineering, biopharmaceuticals, geophysics, and wastewater management, experimental study is necessary to better understand the interactions within two-phase flows. Basic experimental research supports development in these engineering fields by establishing closed experimental models, and detailed experimental results can be used to compare and validate the accuracy of those models. In such experiments it is paramount to measure parameters such as bubble size and shape, velocity, interfacial area concentration, and void fraction, both to assess accuracy and to develop models.
Experimental techniques for characterizing bubble parameters fall into two broad categories: probe-based intrusive (contact) methods and non-intrusive (non-contact) methods. Non-intrusive methods differ fundamentally from probe-based ones in that they do not disturb the flow under study; they avoid most of the drawbacks of intrusive methods and therefore generally offer higher spatial resolution. Typical non-intrusive methods are laser Doppler anemometry and image processing. To determine the parameters of a gas-liquid two-phase flow, such as the mean velocity of the continuous phase, the local gas concentration, the flow characteristics, and their fluctuations in the dispersed phase, model-based feature extraction methods are needed to recover the exact position and size of the bubbles in space. These features not only capture the bubble characteristics of the two-phase flow accurately but also play an important role in follow-on research such as bubble tracking. Concretely, image processing methods are used to identify the individual bubbles in the two-phase flow and to compute the relevant parameters. Traditional image processing identifies bubbles through a sequence of filtering and manipulation steps, for example image type conversion, image filtering, denoising and enhancement, and hole filling. After these steps the geometric features become visible in the final image; a discriminator algorithm then performs edge detection and any further required parameter operations and outputs the specific geometric features of the image. In traditional image processing, however, feature selection is subjective and manual, and feature extraction is incomplete, so the intelligent selection of image features remains an open problem in this field.
In recent years, deep-learning-based methods have enabled intelligent image processing and have been widely applied to image recognition, classification, and processing. For example, image processing methods based on convolutional neural networks (CNN) are favored by developers for their robustness and versatility. Some authors have therefore proposed using deep learning for bubble identification in gas-liquid two-phase flows, with results comparable to or better than classical image processing. Because underwater bubbly-flow data are highly complex, model robustness tends to be poor, while most industrial experiments demand very high bubble-identification accuracy. Introducing an improved deep learning model focused on precise bubble identification therefore has high application value in the field of gas-liquid two-phase flow identification.
Summary of the Invention
In view of the prior art described above, the technical problem to be solved by the invention is to provide an optical-image-based intelligent identification method for gas-liquid two-phase flow aimed at bubble identification: by training an improved fully convolutional FCN network model, a model is established that can identify bubbles in gas-liquid two-phase flow with high accuracy.
The object of the invention is achieved as follows:
Step 1: Prepare the training and test data sets. From the available gas-liquid two-phase flow video, use Python to extract every frame and annotate each frame with Labelme. The label of each image X contains two pixel classes: background pixels are 0 and bubble pixels are 1; the ratio of background to bubble pixels may be unbalanced. Each image and its label are then assembled into a Dataset, preprocessed by reading, decoding, normalization, and standardization, and 80% of the image-label pairs are randomly assigned to the training set, with the remaining 20% forming the test set.
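The random 80/20 split in Step 1 can be sketched as follows. This is a minimal sketch: the frame and label file names, the fixed seed, and the in-memory pair list are illustrative assumptions (in practice the frames would come from the video and the masks from Labelme). With 9351 pairs it reproduces the 7481/1870 split reported in the embodiment below.

```python
import random

def split_dataset(pairs, train_frac=0.8, seed=42):
    """Randomly split (image, label) path pairs into training and test sets."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)   # deterministic shuffle for reproducibility
    cut = round(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

# Hypothetical frame/label path pairs, one per extracted video frame.
pairs = [(f"frames/{i:05d}.png", f"labels/{i:05d}.png") for i in range(9351)]
train, test = split_dataset(pairs)
print(len(train), len(test))  # 7481 1870
```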
Step 2: Construct the fully convolutional network model FCN. First apply transfer learning with the VGG16 network module: keep the convolutional base of VGG16, remove the fully connected layers, and train with weights pre-trained on the ImageNet data set. Because semantic segmentation must classify every pixel of the image, while convolution and pooling form a "downsampling" process that makes the image's height and width ever smaller, deconvolution upsampling is needed to restore the final output to the size of the original image; a Sigmoid activation then outputs a value, realizing the classification. In the FCN parameter update, x_j is each output feature map of a convolutional layer and E is the loss function Dice_loss. The output feature maps are computed as

$$x_j = f\Big(\sum_{i \in M_j} x_i * k_{ij} + b_j\Big)$$
where M_j denotes the selected combination of input feature maps, k_{ij} is the convolution kernel connecting the i-th input feature map to the j-th output feature map, and b_j is the bias of the j-th output feature map. The gradients used in the parameter update are accumulated through the sensitivity map \delta_j:

$$\frac{\partial E}{\partial b_j} = \sum_{u,v} (\delta_j)_{uv}, \qquad \frac{\partial E}{\partial k_{ij}} = \sum_{u,v} (\delta_j)_{uv}\,(p_i)_{uv}$$

where (u, v) indexes the elements of the sensitivity matrix and (p_i)_{uv} is each patch of x_i that is convolved with k_{ij} during the convolution.
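The "deconvolution upsampling" of Step 2 restores a downsampled feature map toward the input resolution. A minimal NumPy sketch of a stride-2 transposed convolution follows; the all-ones 2×2 kernel is an illustrative choice (it yields nearest-neighbor upsampling), not the patent's trained weights.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Stride-s transposed convolution of a 2-D feature map.

    Each input element scatters a copy of the kernel, scaled by its
    value, onto the output grid: the inverse of strided convolution,
    which is how FCNs upsample coarse feature maps back toward the
    original image size.
    """
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

x = np.array([[1., 2.], [3., 4.]])   # coarse 2x2 feature map
k = np.ones((2, 2))                  # illustrative upsampling kernel
y = transposed_conv2d(x, k, stride=2)
print(y.shape)  # (4, 4)
```

With the ones kernel, each input pixel expands into a 2×2 block of its own value, doubling the spatial resolution.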
Step 3: Train the constructed FCN network model, specifically including:
Step 3.1: Initialize the parameters: the batch size batch_size of the input training data, the number of training iterations epoch, and the hyperparameters γ (learning rate) and buffer_size (buffer size);
Step 3.2: Set the loss function Dice_loss. Considering a single sample, the loss for the n-th sample is

$$E_n = 1 - \frac{2\sum_{k=1}^{c} t_n^{k}\, y_n^{k}}{\sum_{k=1}^{c} t_n^{k} + \sum_{k=1}^{c} y_n^{k}}$$
where c is the dimension of the label; for a classification problem this means the samples can be divided into c classes; t_n^k is the k-th dimension of the label t_n of the n-th sample, and y_n^k is the k-th dimension of the network output (predicted label) for the n-th sample.
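A minimal NumPy sketch of the Dice loss above, using the sum-based denominator; the small ε added for numerical stability is an assumption beyond the formula in the text.

```python
import numpy as np

def dice_loss(y_pred, y_true, eps=1e-7):
    """Dice loss: 1 - 2*|intersection| / (|pred| + |true|).

    Unlike binary cross-entropy, the value depends on the overlap
    between prediction and label, so a dominant background class does
    not drown out the minority bubble pixels.
    """
    y_pred = y_pred.ravel()
    y_true = y_true.ravel()
    inter = np.sum(y_pred * y_true)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_pred) + np.sum(y_true) + eps)

mask = np.array([0., 1., 1., 0.])           # toy 1-D "bubble" mask
print(round(float(dice_loss(mask, mask)), 6))  # 0.0 for a perfect prediction
```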
The optimizer uses the Adam algorithm to update the network's weight parameters, where m_t and n_t are the first- and second-moment estimates of the gradient, \hat{m}_t and \hat{n}_t are their bias-corrected versions, and θ_{t+1} is the updated parameter. The update proceeds as

$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$$
$$n_t = \beta_2 n_{t-1} + (1-\beta_2) g_t^2$$
$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{n}_t = \frac{n_t}{1-\beta_2^t}$$
$$\theta_{t+1} = \theta_t - \frac{\eta\, \hat{m}_t}{\sqrt{\hat{n}_t} + \epsilon}$$

with defaults η = 0.001, β_1 = 0.9, β_2 = 0.999, and ε = 10^{-8}; β_1 and β_2 are both close to 1, ε prevents division by zero, and g_t denotes the gradient.
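The Adam update above can be written out for a single parameter vector. This is a sketch with the stated default hyperparameters, applied to a toy gradient rather than the FCN's weights.

```python
import numpy as np

def adam_step(theta, g, m, n, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * g        # first-moment estimate m_t
    n = beta2 * n + (1 - beta2) * g ** 2   # second-moment estimate n_t
    m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
    n_hat = n / (1 - beta2 ** t)           # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(n_hat) + eps)
    return theta, m, n

# First step from theta = 1 with gradient g = 2*theta of f(theta) = theta^2:
theta1, _, _ = adam_step(np.array([1.0]), np.array([2.0]),
                         np.zeros(1), np.zeros(1), t=1)
print(theta1)  # moves by about the learning rate: ~[0.999]
```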
Step 3.3: Compute and output the network's precision and recall, and separately compute the bubble-identification accuracy. Precision is defined as

$$\text{Precision} = \frac{TP}{TP + FP}$$

and recall as

$$\text{Recall} = \frac{TP}{TP + FN}$$
where TP counts examples the classifier labels positive that are indeed positive, FP counts examples it labels positive that are actually not positive, and FN counts examples it labels negative that are actually not negative (i.e., are positive).
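The precision and recall of Step 3.3 can be computed directly from TP/FP/FN pixel counts on binary masks; the toy 1-D masks below are illustrative.

```python
import numpy as np

def precision_recall(pred, label):
    """Pixel-level precision and recall for binary masks (1 = bubble)."""
    pred = pred.astype(bool).ravel()
    label = label.astype(bool).ravel()
    tp = np.sum(pred & label)     # predicted bubble, truly bubble
    fp = np.sum(pred & ~label)    # predicted bubble, actually background
    fn = np.sum(~pred & label)    # predicted background, actually bubble
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred  = np.array([1, 1, 0, 0, 1])
label = np.array([1, 0, 0, 1, 1])
p, r = precision_recall(pred, label)
print(round(float(p), 3), round(float(r), 3))  # 0.667 0.667
```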
Step 4: Identify the bubbles in the gas-liquid two-phase flow with the trained FCN fully convolutional network model: after training is complete, feed the model an image of the gas-liquid two-phase flow to be identified; the network identifies the bubbles in the image almost exactly, and the precision of the bubble identification is computed and reported.
Compared with the prior art, the beneficial effects of the invention are:
Aimed at the problem of identifying bubbles in liquid from optical images, the invention introduces an improved FCN fully convolutional network and establishes an intelligent identification model for gas-liquid two-phase flow based on optical images. The advantages of the method are: (1) To address the difficulty of labeling large numbers of bubble training samples, Labelme is used for intelligent data annotation, which removes noise from the original images and separates background and bubbles into two classes, facilitating network training. (2) VGG16 is used for transfer learning, making full use of the similarity between models and improving the model's stability, generalizability, and learning efficiency. (3) Because the pixel distribution between bubbles and background in the acquired images is unbalanced, training with a binary cross-entropy loss would bias the model toward the class with more pixels in the image, so Dice_loss is used as the loss function instead. (4) The FCN method, based on deeply supervised learning and data extraction, is introduced into gas-liquid two-phase flow identification; it automatically extracts pixel-level information through multi-layer convolution operations to capture abstract semantic concepts. (5) Upsampling layers and multi-scale fusion are used to further refine the results, so that high-level sub-networks repeatedly fuse the features of low-level sub-networks while maintaining very high resolution, thereby improving the accuracy of bubble identification.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the FCN fully convolutional network framework improved by the invention using VGG16 for transfer learning;
Fig. 2 shows the data annotation performed with Labelme on the gas-liquid two-phase flow data;
Figs. 3(a) to 3(c) show the convergence, precision, and recall results of the improved FCN model on the gas-liquid two-phase flow data;
Fig. 4 shows the stand-alone bubble-identification accuracy of the improved FCN model on the gas-liquid two-phase flow data;
Fig. 5 shows the identification results of the improved FCN model on the gas-liquid two-phase flow data.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Based on the fully convolutional network model FCN from semantic segmentation, the invention proposes an optical-image-based intelligent identification technique for gas-liquid two-phase flow applied to image sequence data. In the model structure, the classic VGG16 network is first introduced for transfer learning: the fully connected layers are removed, the convolutional base is retained, and training starts from weights pre-trained on the ImageNet data set. Next, the core ideas of semantic segmentation are introduced: deconvolution upsampling and skip connections. The network must classify every pixel of the image, but convolution and pooling form a "downsampling" process that progressively shrinks the image's height and width; "deconvolution" is the inverse of convolution, restoring the image's dimensions and preserving a high-resolution result. A skip-fusion structure is then introduced to compensate for the features lost in the earlier convolution and pooling layers: the outputs of pooling layers at different depths are upsampled and combined to refine the final output. This ensures robustness and accuracy and repairs the restored image. For the loss function, binary cross-entropy would bias the training result toward the class with more pixels in the image, so Dice_loss is used for training instead. These improvements in model structure and loss function make the model converge faster and yield higher-quality results. Training the improved FCN model on the gas-liquid two-phase flow image sequence data set yields precise identification of the bubbles and improves the accuracy of the two-phase flow identification model.
Embodiment
The intelligent identification method for gas-liquid two-phase flow based on an improved fully convolutional network FCN and optical images proposed by the invention comprises the following steps:
Step 1: Take the gas-liquid two-phase flow video source as the raw data, use Python to extract every frame of the video, and annotate each frame with Labelme. The label of each image X contains two pixel classes: background pixels are 0 and bubble pixels are 1, and the ratio of background to bubble pixels is unbalanced. Each image and its label are then assembled into a Dataset, preprocessed by reading, decoding, normalization, and standardization, and 80% of the image-label pairs are used as the training set, with the remaining 20% as the test set.
Step 2: Starting from the fully convolutional network FCN model, use the fully convolutional part of the VGG16 network module as the convolutional base for transfer learning, remove the fully connected layers, and train with weights pre-trained on the ImageNet data set. Because semantic segmentation must classify every pixel while convolution and pooling downsample the image, deconvolution upsampling is used to restore the final output to the size of the original image, after which a Sigmoid activation outputs a value, realizing the classification. In the parameter update of the improved FCN model, x_j is each output feature map of a convolutional layer and E is the loss function Dice_loss. The output feature maps are computed as

$$x_j = f\Big(\sum_{i \in M_j} x_i * k_{ij} + b_j\Big)$$

where M_j denotes the selected combination of input feature maps, k_{ij} is the convolution kernel connecting the i-th input feature map to the j-th output feature map, and b_j is the bias of the j-th output feature map; the gradients are accumulated through the sensitivity map \delta_j, where (u, v) indexes the elements of the sensitivity matrix and (p_i)_{uv} is each patch of x_i that is convolved with k_{ij} during the convolution.
Step 3: Compute the loss function. If the binary cross-entropy loss BinaryCrossentropy were used, then whenever an image in the data set has too few bubble pixels and too many background pixels, training would favor the background class and the network would identify the bubbles poorly. Therefore the Dice Loss used in medical image segmentation is adopted, with the expression

$$E_n = 1 - \frac{2\sum_{k=1}^{c} t_n^{k}\, y_n^{k}}{\sum_{k=1}^{c} t_n^{k} + \sum_{k=1}^{c} y_n^{k}}$$

where c is the dimension of the label; for a classification problem this means the samples can be divided into c classes; t_n^k is the k-th dimension of the label t_n of the n-th sample, and y_n^k is the k-th dimension of the network output (predicted label) for the n-th sample. From this expression, if the model fits perfectly the loss approaches 0, so convergence can be judged from the loss value during training: when the Dice Loss no longer decreases, the network model has converged and training is complete.
Step 4: Train the improved FCN fully convolutional network model. As shown in Fig. 1, the gas-liquid two-phase flow images and their Labelme-annotated counterparts are fed into the network in pairs; convolution extracts features while pooling changes the height, width, and channels of the image. Upsampling then fuses the features of the high-level sub-network with the preceding low-level sub-networks and restores the final output to the size of the original image, combining these results to refine the output while ensuring robustness and accuracy. The network's loss is computed and the weights are updated with the Adam optimization algorithm; the parameters are trained iteratively in this way until the loss value no longer decreases or remains stable, at which point the network model has converged and training is complete. An image of the gas-liquid two-phase flow to be identified is then input, and the network model generates the semantic segmentation result.
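The stopping rule in Step 4, iterating until the Dice loss no longer decreases or stabilizes, can be sketched as an early-stopping check. The per-epoch loss values and the `min_delta`/`patience` thresholds below are illustrative assumptions, not values from the patent.

```python
def converged_epoch(losses, min_delta=1e-3, patience=3):
    """Return the epoch index at which training would stop: the first
    epoch after the loss has failed to improve by min_delta for
    `patience` consecutive epochs, or None if it keeps improving."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(losses):
        if best - loss > min_delta:   # meaningful improvement
            best = loss
            stale = 0
        else:                         # loss plateaued this epoch
            stale += 1
            if stale >= patience:
                return epoch
    return None

# Hypothetical per-epoch Dice losses: falling, then flat.
losses = [0.9, 0.5, 0.3, 0.21, 0.2099, 0.2098, 0.2097]
print(converged_epoch(losses))  # 6
```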
Step 5: After training, the model receives an image of the gas-liquid two-phase flow to be identified, in which each image X contains two pixel classes, background pixels 0 and bubble pixels 1, with an unbalanced ratio between them. The network identifies the bubbles in the image almost exactly, automatically generates the semantic segmentation result, and separately computes the final bubble-identification accuracy.
Turning to a concrete parameterized embodiment: the data come from a two-phase flow experiment on a simulated closed loop, in which adjusting the pipeline valves controls whether the gas-liquid two-phase flow rises or descends vertically. For vertically descending two-phase flow, a pump drives deionized water from the tank through a filter and then along two branches into the bubble generator and the vertical test section respectively; after gas-water separation in the gas-water separator, the water returns to the tank. An integrated air compressor produces compressed air stored in a 300 L compressed-air tank; the compressed air passes through a solenoid valve into the bubble generator, then into the test section and the gas-water separator, and is finally vented to the atmosphere. The test section is about 3.7 m long in total and consists of plexiglass pipe segments of various lengths with an inner diameter of 50.8 mm connected to an acquisition window. Its flat design permits high-speed photography, and by adjusting the relative positions of the pipe segments and the acquisition window, two-phase flow parameters can be measured at different positions along the flow axis. The two-phase flow image data set is obtained by capturing every frame of the video; it serves as the training set for the network model, and the data are normalized and standardized before training.
Analysis of the bubble-identification results of the closed-loop two-phase flow experiment:
The data set for this experiment consists of 9351 gas-liquid two-phase flow images captured from the video recorded in the simulated closed-loop two-phase flow experiment; 80% of them were randomly selected as the training set and the remaining 20% as the test set, and training followed the improved FCN network model and training procedure described above. Table 1 lists, for the configured network parameters, the metrics obtained with the model: the sizes of the training and test sets, the loss value, the precision, the recall, and the bubble-identification accuracy tested on all 9351 images, four indicators of how well the network fits. A lower loss value indicates better classification. Higher precision means that a larger share of the examples the classifier labels positive are indeed positive; higher recall means that a larger share of the truly positive examples are labeled positive by the classifier. Figs. 3(a) to 3(c) show the model's convergence, precision, and recall curves, from which it can be seen that convergence and stability improve markedly as the number of iterations increases.
Based on the improved fully convolutional network (FCN) model, 7481 gas-liquid two-phase flow images form the training dataset and 1870 images form the test dataset; both are fed into the network for training. Figure 4 shows the resulting accuracy curve for individual bubble identification: the accuracy reaches 98.37%, with low computational complexity and a modest amount of computation. Figure 5 shows the network's results for identifying bubbles in gas-liquid two-phase flow images. From both the test metrics and the predicted images we conclude that the network identifies gas-liquid two-phase flow, and bubbles in particular, with very high accuracy.
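The random 80/20 split that yields the 7481/1870 partition can be sketched as follows. The fixed seed is an assumption added for reproducibility; the source does not specify one.

```python
import numpy as np

# Randomly partition the 9351 images into 80% train / 20% test
# (7481 / 1870); a permutation keeps the two subsets disjoint.
rng = np.random.default_rng(0)
n_total = 9351
idx = rng.permutation(n_total)
n_train = round(0.8 * n_total)  # 7481 training images
train_idx, test_idx = idx[:n_train], idx[n_train:]
```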
Table 1 Final output results for the training-set and test-set data of the present invention
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110546145.1A CN113313000B (en) | 2021-05-19 | 2021-05-19 | An intelligent identification method of gas-liquid two-phase flow based on optical image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113313000A CN113313000A (en) | 2021-08-27 |
CN113313000B true CN113313000B (en) | 2022-04-29 |
Family
ID=77373867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110546145.1A Active CN113313000B (en) | 2021-05-19 | 2021-05-19 | An intelligent identification method of gas-liquid two-phase flow based on optical image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313000B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114821072B (en) * | 2022-06-08 | 2023-04-18 | 四川大学 | Method, device, equipment and medium for extracting bubbles from dynamic ice image |
CN115861751B (en) * | 2022-12-06 | 2024-04-16 | 常熟理工学院 | Oil-water two-phase flow multi-oil drop identification method based on integrated characteristics |
CN116468891B (en) * | 2023-04-17 | 2025-06-27 | 台州学院 | Gas-liquid two-phase flow independent bubble segmentation method, electronic equipment and storage medium |
CN116452823A (en) * | 2023-04-18 | 2023-07-18 | 西南石油大学 | Drilling fluid bubble identification method based on image |
CN116935317A (en) * | 2023-07-26 | 2023-10-24 | 西南石油大学 | On-line intelligent monitoring method and system for oil content and gas content of drilling fluid |
CN117611844B (en) * | 2023-11-08 | 2024-09-27 | 中移互联网有限公司 | Image similarity recognition model training method, device and image similarity recognition method |
CN118172710B (en) * | 2024-04-12 | 2024-12-06 | 云南大学 | A flotation process condition recognition method based on hybrid learning of manual features and deep learning features |
CN119125326A (en) * | 2024-09-24 | 2024-12-13 | 哈尔滨工程大学 | Ultrasonic detection method and storage medium for composite material structure damage based on deep learning |
CN119274043B (en) * | 2024-12-09 | 2025-02-28 | 广东工业大学 | Multiphase flow bubble identification and counting method based on improvement YOLOv8 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073037A (en) * | 2011-01-05 | 2011-05-25 | 哈尔滨工程大学 | Iterative current inversion method based on adaptive threshold selection technique |
CN105426889A (en) * | 2015-11-13 | 2016-03-23 | 浙江大学 | Flow pattern recognition method of gas-liquid two-phase flow based on PCA mixture feature fusion |
CN108304770A (en) * | 2017-12-18 | 2018-07-20 | 中国计量大学 | A method of the flow pattern of gas-liquid two-phase flow based on time frequency analysis algorithm combination deep learning theory |
CN111028217A (en) * | 2019-12-10 | 2020-04-17 | 南京航空航天大学 | Image crack segmentation method based on full convolution neural network |
CN111553373A (en) * | 2020-04-30 | 2020-08-18 | 上海理工大学 | CNN + SVM-based pressure bubble image recognition algorithm |
CN111882579A (en) * | 2020-07-03 | 2020-11-03 | 湖南爱米家智能科技有限公司 | Large infusion foreign matter detection method, system, medium and equipment based on deep learning and target tracking |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10713540B2 (en) * | 2017-03-07 | 2020-07-14 | Board Of Trustees Of Michigan State University | Deep learning system for recognizing pills in images |
- 2021-05-19 CN CN202110546145.1A patent/CN113313000B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073037A (en) * | 2011-01-05 | 2011-05-25 | 哈尔滨工程大学 | Iterative current inversion method based on adaptive threshold selection technique |
CN105426889A (en) * | 2015-11-13 | 2016-03-23 | 浙江大学 | Flow pattern recognition method of gas-liquid two-phase flow based on PCA mixture feature fusion |
CN108304770A (en) * | 2017-12-18 | 2018-07-20 | 中国计量大学 | A method of the flow pattern of gas-liquid two-phase flow based on time frequency analysis algorithm combination deep learning theory |
CN111028217A (en) * | 2019-12-10 | 2020-04-17 | 南京航空航天大学 | Image crack segmentation method based on full convolution neural network |
CN111553373A (en) * | 2020-04-30 | 2020-08-18 | 上海理工大学 | CNN + SVM-based pressure bubble image recognition algorithm |
CN111882579A (en) * | 2020-07-03 | 2020-11-03 | 湖南爱米家智能科技有限公司 | Large infusion foreign matter detection method, system, medium and equipment based on deep learning and target tracking |
Non-Patent Citations (3)
Title |
---|
Multi-Function Radar Signal Sorting Based on Complex Network;Kun Chi等;《网页在线公开:https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9292936》;20201214;第1-5页 * |
Hierarchical fuzzy identification of gas-liquid two-phase flow patterns based on a high-speed photography sensor; Chang Diankang et al.; Transducer and Microsystem Technologies (《传感器与微系统》); 20161231; Vol. 35, No. 11; pp. 58-60 *
Flow pattern identification of nanofluid gas-liquid two-phase flow in microchannels based on K-means clustering; Xiao Jian et al.; Transactions of the Chinese Society for Agricultural Machinery (《农业机械学报》); 20170117; Vol. 47, No. 12; pp. 385-390 *
Also Published As
Publication number | Publication date |
---|---|
CN113313000A (en) | 2021-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113313000B (en) | An intelligent identification method of gas-liquid two-phase flow based on optical image | |
CN107506761B (en) | Brain image segmentation method and system based on saliency learning convolutional neural network | |
CN107292333B (en) | A kind of rapid image categorization method based on deep learning | |
CN108647718B (en) | A method for automatic classification and rating of metallographic structure of different materials | |
CN109345508B (en) | A Bone Age Evaluation Method Based on Two-Stage Neural Network | |
CN106447626B (en) | A deep learning-based fuzzy kernel size estimation method and system | |
CN110276402B (en) | Salt body identification method based on deep learning semantic boundary enhancement | |
CN108985250A (en) | Traffic scene analysis method based on multitask network | |
CN107644415A (en) | A kind of text image method for evaluating quality and equipment | |
CN112434672A (en) | Offshore human body target detection method based on improved YOLOv3 | |
CN107480726A (en) | A kind of Scene Semantics dividing method based on full convolution and shot and long term mnemon | |
CN108171318B (en) | Convolution neural network integration method based on simulated annealing-Gaussian function | |
CN112529005B (en) | Target detection method based on semantic feature consistency supervision pyramid network | |
CN114048822A (en) | An Image Attention Mechanism Feature Fusion Segmentation Method | |
CN114863348A (en) | Video target segmentation method based on self-supervision | |
CN111597907B (en) | Anti-noise meta-learning-based face recognition method and system | |
CN105657402A (en) | Depth map recovery method | |
CN106991411B (en) | Remote Sensing Target based on depth shape priori refines extracting method | |
CN103268607B (en) | A kind of common object detection method under weak supervision condition | |
CN108629369A (en) | A kind of Visible Urine Sediment Components automatic identifying method based on Trimmed SSD | |
CN113705655A (en) | Full-automatic classification method for three-dimensional point cloud and deep neural network model | |
CN114972753A (en) | A lightweight semantic segmentation method and system based on contextual information aggregation and assisted learning | |
CN110659601A (en) | Depth full convolution network remote sensing image dense vehicle detection method based on central point | |
CN106991666A (en) | A kind of disease geo-radar image recognition methods suitable for many size pictorial informations | |
CN109461177A (en) | A kind of monocular image depth prediction approach neural network based |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||