CN118799640B - A method and device for identifying a dynamometer diagram - Google Patents


Info

Publication number
CN118799640B
CN118799640B (application number CN202410945262.9A)
Authority
CN
China
Prior art keywords
indicator diagram
module
classification
tested
state space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410945262.9A
Other languages
Chinese (zh)
Other versions
CN118799640A (en)
Inventor
马超
任森浩
唐闻强
钟瀚霆
伍坤宇
吴松涛
杨芸
侯明才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Technology
Original Assignee
Chengdu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Chengdu University of Technology
Priority to CN202410945262.9A
Publication of CN118799640A
Application granted
Publication of CN118799640B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/771 Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention relates to the technical field of oil and gas exploration and discloses an indicator diagram (dynamometer diagram) recognition method and device. The method comprises acquiring an indicator diagram to be identified, inputting it into a target indicator diagram classification model, performing test classification on it, and outputting a target prediction result. The target indicator diagram classification model comprises an adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module. The adaptive state space module obtains data-dependent global visual context features from the indicator diagram to be identified; the depth separable convolution module performs feature extraction on the indicator diagram while guaranteeing sufficient local representation; the channel attention mechanism module fuses the output features of the adaptive state space module and the depth separable convolution module so as to balance global and local features; and the classification module identifies and judges the output of the channel attention mechanism module to produce the target prediction result, so that the category of an indicator diagram can be recognized and judged automatically.

Description

Indicator diagram identification method and device
Technical Field
The invention relates to the technical field of oil and gas exploration, in particular to a method and a device for identifying an indicator diagram.
Background
Petroleum, a non-renewable energy source, is the primary energy element of modern industry and a strategic resource of great significance. It underpins daily life and rapid social development and is vital to economic growth. Against a background of severe under-investment in international petroleum supply and rapidly growing demand, the global need for new petroleum supplies keeps increasing, which calls for greater development effort, adjusted exploitation plans and optimal allocation of petroleum resources. Most oil fields in China use rod pumping units as oil extraction equipment, but these fields are deeply buried, geologically complex and located in harsh natural conditions; the pumps run unattended and often operate in abnormal states, so pump faults waste labour, material resources and time.
To improve oil production efficiency, pumping-unit condition monitoring is widely applied to fault diagnosis of pumping-unit systems. The indicator diagram is a widely used tool in this diagnosis: it is a closed curve obtained from the relationship between load and displacement. Indicator diagrams of different shapes correspond to different faults and degrees of severity, which in turn cause problems such as reduced yield, higher energy cost, increased well maintenance workload and even well shutdown. Manual identification of indicator diagrams requires extensive prior knowledge, is constrained by the heavy workload of keeping the knowledge base up to date, and is prone to misjudgement caused by subjective judgement during diagnosis. Traditional indicator diagram classification methods are time-consuming and cannot keep up with modern automatic data generation and collection techniques.
Disclosure of Invention
Based on the above, the present invention aims to provide a method and a device for identifying an indicator diagram, so as to realize automatic identification and judgment of the category of the indicator diagram.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The first aspect of the application provides an indicator diagram identification method, which comprises the following steps:
acquiring an indicator diagram to be identified;
inputting the indicator diagram to be identified into a target indicator diagram classification model, performing test classification on the indicator diagram to be identified, and outputting a target prediction result;
The target indicator diagram classification model comprises an adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module: the adaptive state space module obtains data-dependent global visual context features from the indicator diagram to be identified; the depth separable convolution module performs feature extraction on the indicator diagram to be identified while guaranteeing sufficient local representation; the channel attention mechanism module fuses the output features of the adaptive state space module and the depth separable convolution module so as to balance global and local features; and the classification module identifies and judges the output of the channel attention mechanism module so as to output the target prediction result.
In some embodiments, before the step of inputting the indicator diagram to be identified into the target indicator diagram classification model, performing test classification on it and outputting the target prediction result, the method further includes training and obtaining the target indicator diagram classification model, which comprises the following steps:
collecting and sorting the to-be-tested indicator diagram and the image characteristic classification description of the to-be-tested indicator diagram;
carrying out standardized treatment on the indicator diagram to be tested to obtain a standardized indicator diagram to be tested;
Utilizing the standardized indicator diagram to be tested to manufacture a training data set, a verification data set and a test data set, and establishing an Ada-VisionMamba deep learning model as an initial indicator diagram identification model;
Optimizing an initial indicator diagram recognition model by using a stochastic gradient descent method based on the training data set and the verification data set, and iterating until a stable and accurate optimized indicator diagram classification model is obtained;
Performing test classification on the to-be-tested indicator diagram in the test data set by using an optimized indicator diagram classification model to obtain a test prediction result;
And comparing the test prediction result with a standard label corresponding to the test data set, and analyzing the test prediction result to obtain a target indicator diagram classification model.
In some embodiments, an indicator diagram generated by a downhole dynamometer under the actual working conditions of an oil well in the Fengxi area of the Qaidam Basin is used as the indicator diagram to be tested, and/or the image characteristic classification description comprises basic category characteristics and composite category characteristics composed of a plurality of basic category characteristics, so that the classification description of the indicator diagram to be tested is completed through the image characteristic classification description, wherein the basic category characteristics comprise normal, insufficient filling, gas influence, sand discharge, wax precipitation, sucker rod disconnection, oil pipe leakage, travelling valve leakage, fixed valve leakage, double valve leakage, upstroke pump bump, downstroke pump bump, thick oil, and out of the working cylinder.
In some embodiments, the creating training data sets, validation data sets, and test data sets using the normalized indicator diagram to be tested includes:
and scaling the standardized indicator diagram to be tested to the size of a first preset pixel to obtain a scaled indicator diagram to be tested, and dividing the scaled indicator diagram to be tested into the training data set, the verification data set and the test data set according to a first preset proportion.
In some embodiments, the first preset pixel is 224x224 and the first preset ratio is 6:2:2.
In some embodiments, the normalizing the indicator diagram to be tested, and obtaining the normalized indicator diagram to be tested includes:
and adopting a standardized formula to realize standardized processing of the indicator diagram to be tested:
norm = (x_i - min(x)) / (max(x) - min(x))
where norm is the normalized value, x_i is the image pixel value, max(x) is the maximum value of the image pixels, and min(x) is the minimum value of the image pixels.
In some embodiments, in the step of optimizing the initial indicator diagram recognition model using a stochastic gradient descent method based on the training dataset and the validation dataset, AdamW is used as the optimizer to optimize the initial indicator diagram recognition model, and the loss function used is the Focal loss function.
In some implementations, the obtaining, by the adaptive state space module, the data-dependent global visual context feature from the indicator diagram to be identified includes:
Dividing an input indicator diagram to be identified into image blocks, then expanding the image blocks into one-dimensional vectors, and adding a classification vector token and a position coding token to obtain an embedded matrix added with position codes;
introducing the embedded matrix added with the position codes into an adaptive state space module, wherein the attention layer of the adaptive state space module learns the importance of each image block relative to other image blocks, and a multi-layer perceptron mixer of the adaptive state space module integrates the image block information in space and feature dimensions so as to improve the capability of the model to process different types of features and realize adaptive weight distribution on different image blocks;
The method comprises the steps of carrying out standardization processing on a processed embedded matrix to obtain a standardized sequence, carrying out two different linear transformations on the standardized sequence to obtain a bidirectional cyclic scanning input and a gating mechanism sequence, carrying out self-adaptive state space scanning on the bidirectional cyclic scanning input through a self-adaptive state space scanning block, multiplying the bidirectional cyclic scanning input by the gating mechanism sequence, and respectively calculating a forward cyclic scanning sequence and a backward cyclic scanning sequence;
After the forward cyclic scanning sequence and the backward cyclic scanning sequence are combined, the forward cyclic scanning sequence and the backward cyclic scanning sequence are added to an original input sequence through residual connection, and a characteristic sequence obtained through cyclic characteristic extraction at the moment is obtained through linear transformation.
In some embodiments, the feature extraction of the indicator diagram to be identified by the depth separable convolution module under the condition of ensuring that the local representation is enough, the fusion of the output features of the two modules of the adaptive state space module and the depth separable convolution module by the channel attention mechanism module to balance the global and local features, and the identification and judgment of the output by the channel attention mechanism module by the classification module to output the target prediction result comprise:
Each channel of the input indicator diagram to be identified is independently trained by using an independent convolution kernel;
combining and transforming the output channels of the depth convolution, and adjusting the channel number of the feature map after the depth convolution and integrating the information of different channels through point-by-point convolution;
The channel attention mechanism module respectively inputs the feature sequences extracted by the self-adaptive state space module and the features extracted by the depth separable convolution module into the channel attention mechanism module for feature selection and filtering by extracting important features of different channels;
And fusing the time sequence characteristics and the space characteristics output by the channel attention mechanism module, and inputting the time sequence characteristics and the space characteristics into the classification module for identifying and judging the composite indicator diagram so as to output a target prediction result.
A second aspect of the present application provides an indicator diagram recognition apparatus, comprising:
at least one processor, and
A memory communicatively coupled to the at least one processor;
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the indicator diagram recognition method of any one of the embodiments.
The beneficial effects of the invention are as follows:
Considering the linear-variation characteristic of the indicator diagram, the invention creatively designs a target indicator diagram recognition model comprising an adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module. The adaptive state space module avoids image-specific inductive bias and obtains a data-dependent global visual context; the depth separable convolution module achieves more effective feature extraction while guaranteeing sufficient local representation; and the channel attention mechanism module fuses the output features of the adaptive state space module and the depth separable convolution module so as to balance global and local features. Through this model design, accurate and automatic recognition of pumping-unit indicator diagrams is achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly explain the drawings needed in the description of the embodiments of the present invention, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the contents of the embodiments of the present invention and these drawings without inventive effort for those skilled in the art.
FIG. 1 is a schematic diagram of an indicator diagram recognition method according to some embodiments of the invention;
FIG. 2 is a schematic diagram of a training flow of a target indicator diagram recognition model according to some embodiments of the invention;
FIG. 3 is a schematic diagram of the overall architecture of a target indicator diagram recognition model employed in some embodiments of the invention.
FIG. 4 is a schematic diagram of an architecture of an adaptive state space module in some embodiments of the invention, where (b) is an adaptive state space scan block architecture diagram;
FIG. 5 is a diagram of an adaptive state space architecture in some embodiments of the invention;
FIG. 6 is a visual result of a randomly extracted partial category classification on a test dataset using an optimized indicator diagram recognition model in some embodiments of the invention.
Detailed Description
In order to make the technical problems solved by the present invention, the technical solutions adopted and the technical effects achieved more clear, the technical solutions of the embodiments of the present invention will be described in further detail below with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
In the description of the present invention, unless explicitly stated or limited otherwise, the terms "connected" and "fixed" are to be construed broadly, and may, for example, denote fixedly connected, detachably connected, or integrally formed, mechanically connected, electrically connected, directly connected, indirectly connected through an intervening medium, or communication between two elements or an interaction relationship between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
In the description of the present embodiment, the terms "upper", "lower", "left", "right", etc., azimuth or positional relationship are based on the azimuth or positional relationship shown in the drawings, and are merely for convenience of description and simplification of operations, and do not indicate or imply that the apparatus or element referred to must have a specific azimuth, be constructed and operated in a specific azimuth, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used merely for distinguishing between descriptions and not for distinguishing between them.
In recent years, with the development of artificial intelligence technology, various fields begin to apply AI technology to realize intellectualization, such as intelligent home, automatic driving, and the like. In terms of fault diagnosis of the oil pumping unit, for the complicated problems of classifying and identifying the indicator diagram, the solution mode of the problems is changed from the traditional manual-based method to the deep learning-based method.
In view of this situation, this work builds a target indicator diagram classification model on the basis of an independently developed Ada-VisionMamba deep learning model and realizes automatic classification of indicator diagrams in pumping-unit fault diagnosis. Unlike a single-label classification task, a multi-feature label learning scheme is adopted to handle the complex situations in which several basic fault types may occur together in actual operation, so that pumping faults are judged more accurately. In addition, considering the characteristics of the indicator diagram, an adaptive state space model is used for feature extraction, making the model more stable and reasonable. This work provides an effective way for intelligent fault diagnosis of pumping units.
The indicator diagram identification method and device provided by the present application are described in detail below.
Referring to fig. 1-2, an embodiment of the present application provides an indicator diagram recognition method, including:
Step 100, acquiring an indicator diagram to be identified, wherein all the indicator diagrams of the application mainly refer to the indicator diagrams of the pumping unit.
Step 200, inputting the indicator diagram to be identified into a target indicator diagram classification model, performing test classification on the indicator diagram to be identified, and outputting a target prediction result. Referring to figs. 3-5, the target indicator diagram classification model comprises an adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module: the adaptive state space module obtains data-dependent global visual context features from the indicator diagram to be identified; the depth separable convolution module performs feature extraction on the indicator diagram to be identified while guaranteeing sufficient local representation; the channel attention mechanism module fuses the output features of the adaptive state space module and the depth separable convolution module so as to balance global and local features; and the classification module identifies and judges the output of the channel attention mechanism module so as to output the target prediction result.
Considering the linear-variation characteristic of the indicator diagram, the embodiments of the present application creatively design a target indicator diagram recognition model comprising an adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module. The adaptive state space module avoids image-specific inductive bias and obtains a data-dependent global visual context; the depth separable convolution module achieves more effective feature extraction while guaranteeing sufficient local representation; and the channel attention mechanism module fuses the output features of the two modules so as to balance global and local features. Through this model design, accurate and automatic recognition of pumping-unit indicator diagrams is achieved.
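As an illustration of how such a trained model might be applied in step 200, a minimal PyTorch-style inference sketch is given below; the function name recognize, the preprocessing pipeline, the sigmoid-probability output and the decision threshold are assumptions introduced here for demonstration and are not prescribed by the patent text.

import torch
from PIL import Image
from torchvision import transforms

# Preprocessing: the 224x224 size follows the embodiment described later; the rest is assumed.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def recognize(model: torch.nn.Module, image_path: str, threshold: float = 0.5):
    """Run a trained indicator-diagram classification model on a single image."""
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)   # shape [1, 3, 224, 224]
    with torch.no_grad():
        probs = model(x)                                   # assumed to return per-class probabilities
    active = (probs.squeeze(0) > threshold).nonzero(as_tuple=True)[0]    # fault classes above threshold
    return active.tolist()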
In some embodiments, referring to fig. 2, before step 200 (inputting the indicator diagram to be identified into the target indicator diagram classification model, performing test classification on it and outputting the target prediction result), the method further includes step 000, training and obtaining the target indicator diagram classification model, which comprises the following steps:
Step 010, collecting and sorting the to-be-tested indicator diagram and the image characteristic classification description of the to-be-tested indicator diagram;
Step 020, carrying out standardization treatment on the indicator diagram to be tested to obtain a standardized indicator diagram to be tested;
Step 030, making a training data set, a verification data set and a test data set from the standardized indicator diagram to be tested, and establishing an Ada-VisionMamba deep learning model as the initial indicator diagram identification model; specifically, the data sets and the model can be implemented with the PyTorch framework (version 2.1.0).
Step 040, optimizing the initial indicator diagram recognition model by using a stochastic gradient descent method based on the training data set and the verification data set, and iterating until a stable and accurate optimized indicator diagram classification model is obtained;
Step 050, performing test classification on the indicator diagrams to be tested in the test data set by using the optimized indicator diagram classification model to obtain a test prediction result. Referring to fig. 6, fig. 6 shows visualization results for a randomly selected subset of categories on the test data set, obtained with the optimized indicator diagram recognition model in some embodiments of the invention, where the shade of colour represents the importance of a region to the network when making the final classification decision: darker regions contribute more to the prediction and receive more of the model's attention, whereas lighter regions have less impact on the prediction.
And step 060, comparing the test prediction result with a standard label corresponding to the test data set, and analyzing the test prediction result to obtain a target indicator diagram classification model. Through analysis, the accuracy of the target indicator diagram classification model can be determined.
In some embodiments, the specific implementation of step 010 (the indicator diagram to be tested and its image characteristic classification description) uses indicator diagrams generated by downhole dynamometers under the actual working conditions of oil wells in the Fengxi area of the Qaidam Basin as the indicator diagrams to be tested. Embodiments of the present application were trained and tested on an indicator diagram data set from oil wells in this area; test verification shows that a composite indicator diagram recognition accuracy of 94.3% can be obtained on the test data set. In addition, the embodiments of the present application emphasize adaptive state space features and depth separable convolution features for classifying indicator diagrams, which better matches the situations actually encountered during operation and further demonstrates the scientific soundness and rationality of the embodiments compared with other single-label indicator diagram classification models; in particular, they show great potential in the identification of composite indicator diagrams. In some test embodiments the number of composite indicator diagram categories can be 58, and these 58 composite categories can be identified accurately, efficiently and generally.
In some embodiments, the image feature classification description in step 010 includes basic category features and composite category features composed of a plurality of basic category features, so that the classification description of the indicator diagram to be tested is completed through the image feature classification description, wherein the basic category features include normal, insufficient filling, gas influence, sand discharge, wax precipitation, sucker rod disconnection, oil pipe leakage, travelling valve leakage, fixed valve leakage, double valve leakage, upstroke pump bump, downstroke pump bump, thick oil, and out of the working cylinder; more categories may also be included. In the experimental training of some embodiments, there can be 58 kinds of indicator diagrams with composite-type features, and the recognition accuracy of composite indicator diagrams can thus be improved.
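As an illustration of how composite categories can be represented on top of the basic category features, the sketch below encodes each label as a multi-hot vector over the basic features; the English identifiers are paraphrases introduced here for readability, not the patent's own wording.

# The 14 basic category features named above (identifiers paraphrased for illustration).
BASIC_FEATURES = [
    "normal", "insufficient_filling", "gas_influence", "sand_discharge",
    "wax_precipitation", "sucker_rod_disconnection", "oil_pipe_leakage",
    "travelling_valve_leakage", "fixed_valve_leakage", "double_valve_leakage",
    "upstroke_pump_bump", "downstroke_pump_bump", "thick_oil", "out_of_working_cylinder",
]

def encode_label(active_features):
    """Multi-hot encoding: a composite category is the union of several basic features."""
    vec = [0] * len(BASIC_FEATURES)
    for name in active_features:
        vec[BASIC_FEATURES.index(name)] = 1
    return vec

# Example: a composite indicator diagram showing gas influence together with insufficient filling.
print(encode_label(["gas_influence", "insufficient_filling"]))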
In some embodiments, the step 020 of performing standardization processing on the indicator diagram to be tested, where obtaining the standardized indicator diagram to be tested includes:
the standardized processing of the indicator diagram to be tested is realized by adopting a standardized formula, wherein the standardized formula is as follows:
norm = (x_i - min(x)) / (max(x) - min(x))
where norm is the normalized value, x_i is the image pixel value, max(x) is the maximum value of the image pixels, and min(x) is the minimum value of the image pixels.
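For illustration, this min-max normalization can be written as a short NumPy routine; the small epsilon term is an assumption added here to avoid division by zero on constant images.

import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """norm = (x_i - min(x)) / (max(x) - min(x)), applied over the pixels of one image."""
    x_min, x_max = image.min(), image.max()
    return (image - x_min) / (x_max - x_min + 1e-8)   # epsilon guards against constant images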
In some embodiments, the step 030, using the normalized indicator diagram to be tested, creates a training dataset, a validation dataset, and a test dataset, comprising:
And scaling the standardized indicator diagram to be tested to the size of a first preset pixel to obtain a scaled indicator diagram to be tested, and dividing the scaled indicator diagram to be tested into the training data set, the verification data set and the test data set according to a first preset proportion. Specifically, in some embodiments, the first preset pixel is 224x224 and the first preset ratio is 6:2:2.
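A possible data-preparation sketch with torchvision is shown below; the directory layout, the random seed and the use of ImageFolder and random_split are assumptions for illustration rather than part of the patented method.

import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # first preset pixel size
    transforms.ToTensor(),
])

# Hypothetical directory of labelled indicator diagrams, one subfolder per class.
full_set = datasets.ImageFolder("indicator_diagrams/", transform=transform)

n = len(full_set)
n_train, n_val = int(0.6 * n), int(0.2 * n)            # first preset ratio 6:2:2
n_test = n - n_train - n_val
train_set, val_set, test_set = torch.utils.data.random_split(
    full_set, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(42),        # fixed seed for a reproducible split
)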
In some embodiments, in step 040 of optimizing the initial indicator diagram recognition model by a stochastic gradient descent method based on the training data set and the verification data set, AdamW is specifically adopted as the optimizer to optimize the initial indicator diagram recognition model, and the loss function used is the Focal loss function. In some embodiments the model can be trained for 80 iterations; in each iteration the loss is computed from the predicted values and the true values and used to optimize the model, so that the predictions continuously approach the true values, and a trained, optimized indicator diagram classification model is obtained once the loss becomes stable.
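The sketch below illustrates one possible training loop with AdamW and a binary Focal loss for the multi-label setting; the focal-loss hyperparameters, the learning rate and the weight decay are assumptions not fixed by the text.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary Focal loss for multi-label fault classification (gamma and alpha are assumed values)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = targets * p + (1 - targets) * (1 - p)
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def train(model, train_loader, val_loader, epochs=80, lr=1e-4, weight_decay=0.05):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    for epoch in range(epochs):                              # 80 iterations as in the embodiment
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = focal_loss(model(images), labels.float()) # loss between predictions and true labels
            loss.backward()                                  # gradient-based optimization step
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(focal_loss(model(x), y.float()).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch + 1}: validation loss {val_loss:.4f}")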
In some implementations, referring to fig. 3-5, the step 200 of obtaining data-dependent global visual context features from the indicator diagram to be identified by the adaptive state space module includes:
Step 210, dividing an input indicator diagram to be identified into image blocks, then expanding the image blocks into one-dimensional vectors, and adding a classification vector token and a position coding token to obtain an embedded matrix added with position codes;
Illustratively, in the adaptive state space module, the input indicator diagram to be identified (assumed to be X ∈ R^(H×W×C)) is first partitioned into N image blocks (patches) of size p×p, which are then flattened into one-dimensional vectors, and a classification vector token and a position encoding token are added:
x_i = E · Flatten(patch_i)
X' = Concat(x_cls, x_1, x_2, …, x_N)
X'' = X' + E_pos
where patch_i is the i-th image block; x_i is the flattened and projected embedding of the i-th image block; E is a linear projection matrix that maps image patches from the original pixel space to a high-dimensional feature space; Flatten flattens each image block into a one-dimensional vector; X' is the feature sequence after the classification vector is added; Concat concatenates all image blocks along the specified dimension; x_cls is the classification vector token responsible for the final indicator diagram classification; X'' denotes the embedding matrix after the position encoding is added; and E_pos denotes the position encoding, which embeds position information into the feature vectors of the sequence, with its embedding parameters learned adaptively during training.
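A minimal PyTorch sketch of this patch-embedding step is given below; the patch size of 16 and the embedding dimension of 192 are assumed values chosen only for illustration.

import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split the image into p x p patches, flatten, project with E, add the cls token and E_pos."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=192):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2
        self.proj = nn.Linear(patch_size * patch_size * in_chans, embed_dim)            # matrix E
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))                     # x_cls
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))  # E_pos

    def forward(self, x):                                    # x: [B, C, H, W]
        B = x.shape[0]
        p = self.patch_size
        # cut the image into non-overlapping p x p blocks and flatten each block
        patches = x.unfold(2, p, p).unfold(3, p, p)                       # [B, C, H/p, W/p, p, p]
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, self.num_patches, -1)
        tokens = self.proj(patches)                                       # x_i = E * Flatten(patch_i)
        cls = self.cls_token.expand(B, -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)                          # X' = Concat(x_cls, x_1, ..., x_N)
        return tokens + self.pos_embed                                    # X'' = X' + E_pos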
Step 220, introducing the embedded matrix added with the position codes into the adaptive state space module, wherein the attention layer of the adaptive state space module learns the importance of each image block relative to the other image blocks, and the multi-layer perceptron mixer of the adaptive state space module integrates the image block information in the spatial and feature dimensions, so as to improve the capability of the model to process different types of features and realize adaptive weight distribution over different image blocks.
For the image blocks in the adaptive state space module, weights are first assigned dynamically to the different image sequences: the attention score Output_attention of the image sequence is obtained through a self-attention mechanism, and the multi-layer perceptron mixer then learns from two different perspectives with two multi-layer perceptrons, so that the contribution of each image sequence is adjusted adaptively and training better adapts to image variation:
Q, K, V = X''W_q, X''W_k, X''W_v
Output_attention = Softmax(QK^T) · V
Output_token = MLP_token((Output_attention)^T)
Output_channel = MLP_channel(Output_token)
where the Q matrix queries the relationship between the current image block and the other image blocks, the K matrix is used to compute the similarity between the current image block and the others, and the V matrix carries the concrete information used to compute the final self-attention output; Q, K and V are obtained by multiplying the image blocks by the learnable matrices W_q, W_k and W_v respectively; QK^T computes the similarity between each query and all keys, so that each element of the resulting matrix represents how well one Q matches one K; the result is normalized by Softmax and multiplied by V to obtain the attention score Output_attention; the transpose of Output_attention is then processed by a token multi-layer perceptron, and another channel multi-layer perceptron is applied in the channel dimension of the image blocks to help transfer information between the different channels of the image blocks; finally, the score of each image block is obtained from Output_channel through a fully connected layer and a Sigmoid activation function.
Referring to fig. 4, fig. 4 is a schematic diagram of the architecture of the adaptive state space module in some embodiments of the invention, where (b) is a composition diagram of the adaptive state space scan block; the self-attention mechanism and the multi-layer perceptron mixer together form the patch adaptive weighting block in the adaptive state space module.
This scheme realizes adaptive weight distribution over the different image patches: the attention layer of the adaptive state space module learns the importance of each patch relative to the other patches so as to dynamically assign more weight to informative patches, and the multi-layer perceptron mixer of the adaptive state space module weights and redistributes the features of each patch, so that the output of each patch is a result adjusted by the information shared by all patches. Second, the features within each patch are fused, so information can be transferred between the different channels of a patch, further strengthening the model's learning of the features.
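The sketch below gives one possible PyTorch reading of this patch adaptive weighting block (self-attention followed by token and channel multi-layer perceptrons and a Sigmoid scoring head); the dimensions, the MLP expansion ratio and the way the learned scores are applied back to the patches are assumptions for illustration, not the patent's exact formulation.

import torch
import torch.nn as nn

class AdaptiveWeightingBlock(nn.Module):
    """Self-attention plus token/channel MLP mixing with a Sigmoid patch score (one possible reading)."""
    def __init__(self, dim=192, num_tokens=197, mlp_ratio=2):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)                   # W_q, W_k, W_v in a single projection
        self.scale = dim ** -0.5
        self.mlp_token = nn.Sequential(                      # mixes information across image blocks
            nn.Linear(num_tokens, num_tokens * mlp_ratio), nn.GELU(),
            nn.Linear(num_tokens * mlp_ratio, num_tokens))
        self.mlp_channel = nn.Sequential(                    # mixes information across feature channels
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))
        self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())   # per-patch importance score

    def forward(self, x):                                    # x: [B, N, dim], N == num_tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out_attn = attn @ v                                   # Output_attention
        out_token = self.mlp_token(out_attn.transpose(1, 2)).transpose(1, 2)   # token mixing
        out_channel = self.mlp_channel(out_token)             # channel mixing
        weights = self.score(out_channel)                     # adaptive weight for each image block
        return x * weights + x                                # reweighted patches with a residual path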
Step 230, the standardized sequence is obtained by the standardized processing of the processed embedded matrix, and the two-way cyclic scanning input and the gating mechanism sequence are obtained by carrying out two different linear transformations on the standardized sequence;
Illustratively, referring to fig. 4(b), two different linear transformations are applied to the normalized sequence to obtain the bidirectional cyclic scan input z_1 and the gating mechanism sequence z_2; z_1 is then processed by the adaptive state space scan block and multiplied by the previously computed gating sequence z_2 to obtain the forward cyclic scan sequence y'_forward and the backward cyclic scan sequence y'_backward respectively:
z_1 = Linear_1(Norm(X''))
z_2 = Linear_2(Norm(X''))
y_1 = DWConv(z_1)
SS2D = SSM_σ(y_1), σ ∈ {forward, backward}
y_forward, y_backward = Norm(SS2D)
y'_forward = y_forward ⊙ SiLU(z_2)
y'_backward = y_backward ⊙ SiLU(z_2)
where Linear is a linear transformation producing the two sequences z_1 and z_2: z_1 serves as the input of the subsequent bidirectional cyclic scan, while the gating mechanism sequence z_2 is used later for the gating mechanism, whose gating operation helps the network model adjust its behaviour according to the current input or past information and improves the flexibility and effectiveness of the model in processing information; DWConv is a depthwise convolution that captures the important features of the input data through its weights and thereby enhances the representation capability of the model; SS2D denotes feeding the vectors into the forward and backward state space models respectively through the adaptive state space scan block, yielding two results, i.e. a bidirectional state space scan, where SSM is the state space processing and Norm denotes the normalization of the features.
Step 240, after combining the forward cyclic scan sequence and the backward cyclic scan sequence, adding the combined forward cyclic scan sequence and the backward cyclic scan sequence to the original input sequence through residual connection, and obtaining a feature sequence obtained through cyclic feature extraction at the moment through linear transformation.
Illustratively, after the forward cyclic scan sequence y'_forward and the backward cyclic scan sequence y'_backward are combined, they are added to the original input sequence X''_(l-1) through a residual connection, and X''_l is finally obtained through a linear transformation:
X''_l = Linear(y'_forward + y'_backward) + X''_(l-1)
where X''_l denotes the feature sequence obtained by the cyclic feature extraction at this stage.
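The full selective state-space scan is a Mamba-style component; the sketch below replaces it with a simplified per-channel linear recurrence so that the data flow described above (normalization, two linear transformations, depthwise convolution, forward and backward scans, SiLU gating, residual connection) can be followed end to end. It is a didactic approximation under assumed dimensions, not the patented scan itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleBiDirScanBlock(nn.Module):
    """Simplified bidirectional state-space scan with gating (illustrative approximation)."""
    def __init__(self, dim=192):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.linear1 = nn.Linear(dim, dim)                   # produces z1, the scan input
        self.linear2 = nn.Linear(dim, dim)                   # produces z2, the gating sequence
        self.dwconv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)   # DWConv
        self.A = nn.Parameter(torch.full((dim,), 0.9))       # per-channel decay, stand-in for the SSM A
        self.B = nn.Parameter(torch.ones(dim))                # per-channel input gain, stand-in for B
        self.out_proj = nn.Linear(dim, dim)

    def scan(self, y, reverse=False):
        """Simple linear recurrence h_t = A*h_(t-1) + B*y_t along the token dimension."""
        if reverse:
            y = y.flip(1)
        h = torch.zeros_like(y[:, 0])
        outs = []
        for t in range(y.shape[1]):
            h = self.A * h + self.B * y[:, t]
            outs.append(h)
        out = torch.stack(outs, dim=1)
        return out.flip(1) if reverse else out

    def forward(self, x):                                     # x: [B, N, dim] = X''_(l-1)
        z1, z2 = self.linear1(self.norm(x)), self.linear2(self.norm(x))
        y1 = self.dwconv(z1.transpose(1, 2)).transpose(1, 2)              # y1 = DWConv(z1)
        y_f = F.layer_norm(self.scan(y1), (y1.shape[-1],))                # forward scan, normalized
        y_b = F.layer_norm(self.scan(y1, reverse=True), (y1.shape[-1],))  # backward scan, normalized
        y_f = y_f * F.silu(z2)                                            # gating: y'_forward
        y_b = y_b * F.silu(z2)                                            # gating: y'_backward
        return self.out_proj(y_f + y_b) + x                               # X''_l = Linear(...) + X''_(l-1)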
In some embodiments, referring to fig. 3, in step 200, feature extraction is performed on the indicator diagram to be identified by the depth separable convolution module under the condition of ensuring that the local representation is sufficient, output features of the two modules of the adaptive state space module and the depth separable convolution module are fused by the channel attention mechanism module so as to balance global and local features, and the output of the channel attention mechanism module is identified and judged by the classification module so as to output a target prediction result, including:
Step 250, each channel of the input indicator diagram to be identified is independently trained by using an independent convolution kernel;
Illustratively, each channel of the input indicator diagram to be identified (assumed to be X ∈ R^(H×W×C)) is trained separately using a separate convolution kernel:
Y_c(i, j) = Σ_(m,n) X_c(i+m, j+n) × K_c(m, n)
where Y_c is the feature information obtained by applying one depthwise convolution to the c-th channel, K_c is the convolution kernel of the c-th channel, the indices i, j denote the position of the output element currently being computed, and the indices m, n denote the row and column indices inside the convolution kernel, used to access specific elements of the kernel.
Step 260, combining and transforming the output channels of the depth convolution, and adjusting the channel number of the feature map after the depth convolution and integrating the information of different channels through point-by-point convolution;
Illustratively, the output channels of the depth convolution are combined and transformed: a pointwise (1×1) convolution adjusts the number of channels of the feature map after the depth convolution and integrates the information of the different channels:
Z_c'(i, j) = Σ_c Y_c(i, j) × W_(c, c')
where Z_c'(i, j) is the value of the c'-th channel of the output feature map at position (i, j), and W_(c, c') is the specific weight in the 1×1 convolution kernel used to transfer information from the c-th channel of the input feature map to the c'-th channel of the output feature map.
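A minimal PyTorch sketch of the depthwise and pointwise convolutions of steps 250 and 260 is given below; the kernel size is an assumed value.

import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution (one kernel per channel) followed by a pointwise 1x1 convolution."""
    def __init__(self, in_chans, out_chans, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_chans, in_chans, kernel_size,
                                   padding=kernel_size // 2, groups=in_chans)   # Y_c = X_c conv K_c
        self.pointwise = nn.Conv2d(in_chans, out_chans, kernel_size=1)           # Z_c' = sum_c Y_c * W_(c,c')

    def forward(self, x):                                     # x: [B, C, H, W]
        return self.pointwise(self.depthwise(x))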
Step 270, the feature sequence extracted by the adaptive state space module and the features extracted by the depth separable convolution module are respectively input into the channel attention mechanism module, which performs feature selection and filtering by extracting the important features of the different channels;
Step 280, fusing the time sequence features and the spatial features output by the channel attention mechanism module, and inputting them into the classification module for identification and judgment of the composite indicator diagram, so as to output the target prediction result.
Illustratively:
Feature = SE(X'') + SE(Z)
Diram = Sigmoid(AveragePool(Feature))
where Feature denotes the result of fusing the features of the adaptive state space module and the depth separable convolution module through the channel attention (SE) operation; AveragePool denotes average pooling, which reduces the spatial dimension of the feature map and the amount of data while retaining the important feature information; the Sigmoid activation function maps the learned category features to probability values; and Diram denotes the final indicator diagram classification result, i.e. the target prediction result.
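The sketch below illustrates one possible implementation of the channel attention (SE) fusion and the classification head. It assumes that both branch outputs have already been reshaped to feature maps of the same shape; the reduction ratio, the default class count of 14 and the added linear classification head before the Sigmoid are assumptions introduced here for illustration.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation style channel attention over one feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                     # x: [B, C, H, W]
        w = self.fc(x.mean(dim=(2, 3)))                       # squeeze: global average per channel
        return x * w[:, :, None, None]                        # excite: reweight the channels

class FusionClassifier(nn.Module):
    """Feature = SE(X'') + SE(Z); Diram = Sigmoid(Head(AveragePool(Feature)))."""
    def __init__(self, channels, num_classes=14):
        super().__init__()
        self.se_global = SEBlock(channels)                    # applied to the adaptive state space features
        self.se_local = SEBlock(channels)                     # applied to the depth separable convolution features
        self.head = nn.Linear(channels, num_classes)          # assumed linear classification head

    def forward(self, feat_global, feat_local):               # both assumed to be [B, C, H, W]
        feature = self.se_global(feat_global) + self.se_local(feat_local)
        pooled = feature.mean(dim=(2, 3))                     # AveragePool over the spatial dimensions
        return torch.sigmoid(self.head(pooled))               # per-class probabilities (Diram)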
The target indicator diagram recognition model of the embodiments of the present application comprises an adaptive state space module, a depth separable convolution module and a channel attention mechanism module. Exploiting the imaging characteristic that an indicator diagram is formed by the combination of an upstroke curve and a downstroke curve, the adaptive state space module learns position encodings that respect the continuity of the curve, so single fault types can be fully recognized, and the learned single-fault features can be fused to distinguish composite fault types, matching the practical situation in oil extraction where several basic fault types occur together. The indicator diagram identification method of the embodiments of the present application achieves excellent recognition accuracy, improves the rationality and stability of the model, and provides an effective way for intelligent fault diagnosis of pumping units.
In addition, the embodiment of the application also provides an indicator diagram recognition device, which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the indicator diagram recognition method in any embodiment, and the details are not repeated.
In conclusion, the method and the device for identifying the indicator diagram provided by the invention have high calculation efficiency, and can realize good state identification on the composite indicator diagram.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (9)

1. The indicator diagram recognition method is characterized by comprising the following steps of:
acquiring an indicator diagram to be identified;
inputting the indicator diagram to be identified into a target indicator diagram classification model, performing test classification on the indicator diagram to be identified, and outputting a target prediction result;
The target indicator diagram classification model comprises a self-adaptive state space module, a depth separable convolution module, a channel attention mechanism module and a classification module, so that global visual context characteristics of data dependence are obtained from an indicator diagram to be identified through the self-adaptive state space module, the characteristic extraction is carried out on the indicator diagram to be identified under the condition that the local representation is enough through the depth separable convolution module, the output characteristics of the self-adaptive state space module and the output characteristics of the depth separable convolution module are fused through the channel attention mechanism module, the global characteristics and the local characteristics are balanced, and the output of the channel attention mechanism module is identified and judged through the classification module so as to output a target prediction result;
Wherein the obtaining, by the adaptive state space module, the data-dependent global visual context feature from the indicator diagram to be identified comprises:
Dividing an input indicator diagram to be identified into image blocks, then expanding the image blocks into one-dimensional vectors, and adding a classification vector token and a position coding token to obtain an embedded matrix added with position codes;
Introducing the embedded matrix added with the position codes into a self-adaptive state space module to obtain a processed embedded matrix, wherein the attention layer of the self-adaptive state space module learns the importance of each image block relative to other image blocks, and a multi-layer perceptron mixer of the self-adaptive state space module integrates the image block information in space and feature dimensions so as to improve the capability of the model in processing different types of features and realize self-adaptive weight distribution on different image blocks;
The method comprises the steps of carrying out standardization processing on a processed embedded matrix to obtain a standardized sequence, carrying out two different linear transformations on the standardized sequence to obtain a bidirectional cyclic scanning input and a gating mechanism sequence, carrying out self-adaptive state space scanning on the bidirectional cyclic scanning input through a self-adaptive state space scanning block, multiplying the bidirectional cyclic scanning input by the gating mechanism sequence, and respectively calculating a forward cyclic scanning sequence and a backward cyclic scanning sequence;
After the forward cyclic scanning sequence and the backward cyclic scanning sequence are combined, the forward cyclic scanning sequence and the backward cyclic scanning sequence are added to an original input sequence through residual connection, and a characteristic sequence obtained through cyclic characteristic extraction at the moment is obtained through linear transformation.
2. The method for identifying an indicator diagram according to claim 1, wherein before the step of inputting the indicator diagram to be identified into the target indicator diagram classification model, the method further comprises training and obtaining the target indicator diagram classification model before the step of testing and classifying the indicator diagram to be identified and outputting the target prediction result, the training and obtaining the target indicator diagram classification model comprises the following steps:
collecting and sorting the to-be-tested indicator diagram and the image characteristic classification description of the to-be-tested indicator diagram;
carrying out standardized treatment on the indicator diagram to be tested to obtain a standardized indicator diagram to be tested;
Utilizing the standardized indicator diagram to be tested to manufacture a training data set, a verification data set and a test data set, and establishing an Ada-VisionMamba deep learning model as an initial indicator diagram identification model;
Optimizing an initial indicator diagram recognition model by using a stochastic gradient descent method based on the training data set and the verification data set, and iterating until a stable and accurate optimized indicator diagram classification model is obtained;
Performing test classification on the to-be-tested indicator diagram in the test data set by using an optimized indicator diagram classification model to obtain a test prediction result;
And comparing the test prediction result with a standard label corresponding to the test data set, and analyzing the test prediction result to obtain a target indicator diagram classification model.
3. The indicator diagram recognition method according to claim 2, wherein an indicator diagram generated by a downhole indicator in the actual working condition of an oil well in the Fengxi area of the Qaidam Basin is adopted as the indicator diagram to be tested, and/or the image characteristic classification description comprises basic category characteristics and composite category characteristics composed of a plurality of basic category characteristics, so that the classification description of the indicator diagram to be tested is completed through the image characteristic classification description, wherein the basic category characteristics comprise normal, insufficient filling, gas influence, sand discharge, wax precipitation, sucker rod disconnection, oil pipe leakage, travelling valve leakage, fixed valve leakage, double valve leakage, up-stroke pump bump, down-stroke pump bump, thick oil and cylinder leakage.
4. The indicator diagram recognition method of claim 2, wherein the creating training data sets, verification data sets and test data sets using the standardized indicator diagram to be tested comprises:
and scaling the standardized indicator diagram to be tested to the size of a first preset pixel to obtain a scaled indicator diagram to be tested, and dividing the scaled indicator diagram to be tested into the training data set, the verification data set and the test data set according to a first preset proportion.
5. The indicator diagram recognition method according to claim 4, wherein the first preset pixel is 224x224, and the first preset ratio is 6:2:2.
6. The indicator diagram recognition method according to claim 2, wherein performing standardization processing on the indicator diagram to be tested to obtain the standardized indicator diagram to be tested comprises:
applying the following standardization (min-max normalization) formula to the indicator diagram to be tested:
norm = (x_i - min(x)) / (max(x) - min(x))
where norm is the standardized value, x_i is an image pixel value, max(x) is the maximum image pixel value, and min(x) is the minimum image pixel value.
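A one-function sketch of the min-max standardization in claim 6, assuming images are supplied as NumPy arrays; the constant-image guard is an added safeguard not stated in the claim.

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Map pixel values to [0, 1]: norm = (x_i - min(x)) / (max(x) - min(x))."""
    x_min, x_max = image.min(), image.max()
    if x_max == x_min:                       # guard against a constant image
        return np.zeros_like(image, dtype=np.float32)
    return ((image - x_min) / (x_max - x_min)).astype(np.float32)
```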
7. The indicator diagram recognition method according to claim 2, wherein, in the step of optimizing the initial indicator diagram recognition model by stochastic gradient descent on the training data set and the validation data set, AdamW is used as the optimizer and Focal Loss is used as the loss function.
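Claim 7 pairs an AdamW optimizer with a Focal Loss. The sketch below uses a common multi-class Focal Loss formulation; the patent does not give its alpha/gamma values or learning-rate settings, so those defaults are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t)."""

    def __init__(self, alpha: float = 1.0, gamma: float = 2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # equals -log(p_t)
        p_t = torch.exp(-ce)                                     # recover p_t
        return (self.alpha * (1.0 - p_t) ** self.gamma * ce).mean()

# Example wiring (model is assumed to exist):
# criterion = FocalLoss(gamma=2.0)
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
```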
8. The indicator diagram recognition method according to claim 1, wherein extracting features of the indicator diagram to be identified with the depthwise separable convolution module while ensuring a sufficient local representation, fusing the output features of the adaptive state space module and the depthwise separable convolution module with the channel attention mechanism module so as to balance global and local features, and performing recognition and judgement on the output of the channel attention mechanism module with the classification module so as to output the target prediction result, comprises:
convolving each channel of the input indicator diagram to be identified independently with its own convolution kernel;
combining and transforming the output channels of the depthwise convolution through a pointwise convolution, which adjusts the number of channels of the feature map after the depthwise convolution and integrates information from the different channels;
feeding the feature sequence extracted by the adaptive state space module and the features extracted by the depthwise separable convolution module into the channel attention mechanism module, which performs feature selection and filtering by extracting the important features of the different channels;
and fusing the temporal features and the spatial features output by the channel attention mechanism module and inputting them into the classification module for recognition and judgement of the composite indicator diagram, so as to output the target prediction result.
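The sketch below illustrates the local branch and fusion described in claim 8: a depthwise separable convolution (per-channel kernels followed by a pointwise 1x1 convolution), a squeeze-and-excitation style channel attention applied to the concatenated global and local features, and a small classification head. It is a schematic stand-in, not the patented module; tensor shapes, layer sizes, and the assumption that the state space features have been reshaped to a feature map are all illustrative.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Each input channel is convolved with its own kernel (groups=channels),
        # then a pointwise 1x1 convolution mixes information across channels.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)   # per-channel importance weights
        return x * w                                 # feature selection and filtering

class FusionClassifier(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.local = DepthwiseSeparableConv(channels)
        self.attn = ChannelAttention(2 * channels)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(2 * channels, num_classes))

    def forward(self, global_feats, x):
        # global_feats: features from the adaptive state space module, (B, C, H, W)
        # x: input feature map for the depthwise separable branch,     (B, C, H, W)
        fused = torch.cat([global_feats, self.local(x)], dim=1)
        return self.head(self.attn(fused))           # target prediction logits
```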
9. An indicator diagram recognition device, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the indicator diagram recognition method of any one of claims 1 to 8.
CN202410945262.9A 2024-07-15 2024-07-15 A method and device for identifying a dynamometer diagram Active CN118799640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410945262.9A CN118799640B (en) 2024-07-15 2024-07-15 A method and device for identifying a dynamometer diagram

Publications (2)

Publication Number	Publication Date
CN118799640A	2024-10-18
CN118799640B	2025-01-28

Family

ID=93034646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410945262.9A Active CN118799640B (en) 2024-07-15 2024-07-15 A method and device for identifying a dynamometer diagram

Country Status (1)

Country Link
CN (1) CN118799640B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117094370A (en) * 2023-08-04 2023-11-21 西安电子科技大学 Deep learning-based fault diagnosis method for indicator diagram of oil pumping unit

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11961298B2 (en) * 2019-02-22 2024-04-16 Google Llc Memory-guided video object detection
CN116524226A (en) * 2023-02-22 2023-08-01 太原理工大学 A device and method for breast cancer pathological image classification based on deep learning
CN117333663A (en) * 2023-10-25 2024-01-02 青岛九维华盾科技研究院有限公司 Camouflage target detection and identification method based on pixel level fusion
CN118279679B (en) * 2024-06-04 2024-08-02 深圳大学 Image classification method, image classification device and medium based on deep learning model

Similar Documents

Publication Publication Date Title
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN111368896B (en) Hyperspectral Remote Sensing Image Classification Method Based on Dense Residual 3D Convolutional Neural Network
CN110163302A (en) Indicator card recognition methods based on regularization attention convolutional neural networks
CN114723957A (en) Multi-class pipeline defect detection, tracking and counting method based on self-attention mechanism
CN115457006B (en) Unmanned aerial vehicle inspection defect classification method and device based on similarity consistency self-distillation
CN111027631B (en) X-ray image classification and identification method for judging crimping defects of high-voltage strain clamp
CN114092697B (en) Building facade semantic segmentation method with attention fused with global and local depth features
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN103714148A (en) SAR image search method based on sparse coding classification
CN116188880A (en) Cultivated land classification method and system based on remote sensing image and fuzzy recognition
CN116682068B (en) Oil well sand prevention operation construction monitoring method and system thereof
CN112818818B (en) Novel ultra-high-definition remote sensing image change detection method based on AFFPN
CN117828423A (en) Photovoltaic module abnormality identification method and system based on statistical characteristics
Liu et al. Channel-Spatial attention convolutional neural networks trained with adaptive learning rates for surface damage detection of wind turbine blades
CN118820862B (en) Knowledge graph-based fault diagnosis method for indicator diagram
CN115497006A (en) Urban remote sensing image change depth monitoring method and system based on dynamic hybrid strategy
CN118799640B (en) A method and device for identifying a dynamometer diagram
CN117152548B (en) A method and system for identifying operating conditions of measured electrical power diagrams in pumping unit wells
CN108915668A (en) A kind of Diagnosing The Faults of Sucker Rod Pumping System method based on gray level co-occurrence matrixes
CN114332536A (en) A posteriori probability-based forgery image detection method, system and storage medium
CN114492216A (en) Pumping unit operation track simulation method based on high-resolution representation learning
CN118097662B (en) CNN-SPPF and ViT-based pap smear cervical cell image classification method
CN116467923A (en) Beam pumping unit indicator diagram self-diagnosis and multi-objective optimization method
CN114897909B (en) Crankshaft surface crack monitoring method and system based on unsupervised learning
CN118397247A (en) A waste drilling fluid flocculation identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant