
CN111984928A - Method for calculating organic carbon content of shale oil reservoir by logging information - Google Patents


Info

Publication number
CN111984928A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN202010832175.4A
Other languages
Chinese (zh)
Other versions
CN111984928B (en)
Inventor
叶青竹
郑有恒
范传军
吴世强
杜小娟
管文静
张亮
贺钦
殷文洁
杨薇
Current Assignee
China Petroleum and Chemical Corp
Exploration and Development Research Institute of Sinopec Jianghan Oilfield Co
Original Assignee
China Petroleum and Chemical Corp
Exploration and Development Research Institute of Sinopec Jianghan Oilfield Co
Priority date
Filing date
Publication date
Application filed by China Petroleum and Chemical Corp and Exploration and Development Research Institute of Sinopec Jianghan Oilfield Co
Priority to CN202010832175.4A
Publication of CN111984928A
Application granted
Publication of CN111984928B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The application provides a method for calculating the organic carbon content of a shale oil reservoir by logging information, which comprises the following steps: s1, carrying out depth homing processing on the core drilling depth and the logging depth of a coring well section of a research block; s2, carrying out standardized processing on the logging data of each well of the research block, and eliminating the system error of the logging data; s3, establishing a nonlinear relation between the organic carbon content of the shale oil reservoir of the research block and logging information by using laboratory measurement information of the organic carbon content of the core sample of the core well section of the research block and adopting an artificial neural network analysis method, and establishing a neural network calculation model between the organic carbon content measurement value and the logging value of the core sample of the core well section; and S4, applying the established neural network calculation model to the non-coring wells of the research block, and calculating the organic carbon content of the shale oil reservoir of the non-coring wells of the research block. The method can accurately obtain the organic carbon content of the shale reservoir.

Description

Method for calculating organic carbon content of shale oil reservoir by logging information
Technical Field
The application relates to the field of reservoirs, in particular to a method for calculating the organic carbon content of a shale oil reservoir by using logging information.
Background
The organic carbon content (TOC) is the carbon in a rock excluding carbonate carbon and inorganic carbon such as graphite; that is, it expresses the content of organic matter in the rock in terms of the carbon element. In source rock evaluation, the organic carbon content is generally adopted as the index of organic matter abundance in the source rock.
Methods for calculating the organic carbon content from logging data mainly include the ΔlogR method, the density method, the natural gamma indication method, the element logging indication method and the like, among which the ΔlogR method is currently one of the most widely accepted. These methods use data from a single logging curve (resistivity, density, natural gamma and the like) and linear fitting to establish a linear relation between the organic carbon content of the reservoir and the logging curve. In practice, however, shale oil reservoir rocks have diverse components and complex pore structures, the correlation between a single logging curve and the reservoir organic carbon content (TOC) is poor, and the linear fitting results deviate considerably from laboratory measurements.
Disclosure of Invention
The application provides a method for calculating the organic carbon content of a shale oil reservoir from logging data, and aims to solve the problem in the prior art that the calculated organic carbon content of a shale oil reservoir has a large error.
The technical scheme of the application is as follows:
a method for calculating the organic carbon content of a shale oil reservoir by logging information comprises the following steps:
s1, carrying out depth homing processing on the core drilling depth and the logging depth of a coring well section of a research block;
s2, carrying out standardized processing on the logging data of each well of the research block, and eliminating the system error of the logging data;
s3, establishing a nonlinear relation between the organic carbon content of the shale oil reservoir of the research block and logging information by using laboratory measurement information of the organic carbon content of the core sample of the coring well section of the research block and adopting an artificial neural network analysis method, and establishing a neural network calculation model between the organic carbon content measurement value and the logging value of the core sample of the coring well section;
and S4, applying the established neural network calculation model to the non-coring wells of the research block, and calculating the organic carbon content of the shale oil reservoir of the non-coring wells of the research block.
As an aspect of the present application, in step S1, the drilling depth D1 of the top boundary or the bottom boundary of a marker layer of the coring well section is determined according to the lithological characteristics of the coring well section of the research block, and the logging depth D2 corresponding to that boundary is then found on the log of the coring well section. The difference between the logging depth D2 and the drilling depth D1 is the correction value between the logging depth and the drilling depth of the coring well section, that is:
ΔD = D2 - D1;
the homing depth D2' of a core sample of the cored interval on the logging curve is then related to the drilling depth D1' of that core sample by:
D2' = D1' + ΔD.
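The two relations above amount to a constant depth shift; a minimal sketch, with hypothetical marker-layer and core depths:

```python
def depth_correction(marker_log_depth_d2, marker_drill_depth_d1):
    """Correction value between logging depth and drilling depth,
    taken at the top or bottom boundary of the marker layer: ΔD = D2 - D1."""
    return marker_log_depth_d2 - marker_drill_depth_d1

def home_core_depth(core_drill_depth_d1, delta_d):
    """Home a core sample onto the logging curve: D2' = D1' + ΔD."""
    return core_drill_depth_d1 + delta_d

# Hypothetical marker-layer depths for one coring well section (metres):
delta_d = depth_correction(3052.0, 3050.5)    # ΔD = 1.5 m
homed = home_core_depth(3060.0, delta_d)      # homed depth of one core sample
```

Once ΔD is fixed for a coring well section, the same shift is applied to every core sample of that section.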
As one technical solution of the present application, in step S2, a standard layer of the research block is determined; the first logging characteristic values of the standard layer are counted for all wells that penetrate the standard layer, and the second logging characteristic value of the standard layer of the research block is determined by a histogram statistical method; the correction amount for the logging data of each well in the research block is then determined from the difference between the first logging characteristic value of that well and the second logging characteristic value of the research block.
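One plausible reading of this standardization step can be sketched as follows; the histogram statistic is reduced to the most populated bin centre, and all readings are hypothetical:

```python
def characteristic_value(std_layer_readings, bin_width=2.0):
    """Logging characteristic value of the standard layer for one well:
    the centre of the most populated histogram bin of its readings."""
    counts = {}
    for v in std_layer_readings:
        k = round(v / bin_width)
        counts[k] = counts.get(k, 0) + 1
    return max(counts, key=counts.get) * bin_width

def log_correction(well_readings, block_value, bin_width=2.0):
    """Correction added to this well's log so that its standard-layer
    characteristic value matches the block-wide value."""
    return block_value - characteristic_value(well_readings, bin_width)

# Hypothetical sonic readings (us/m) of the standard layer in one well:
well = [228.1, 230.3, 229.8, 230.4, 231.9, 230.0]
block_value = 232.0                       # block-wide characteristic value
corr = log_correction(well, block_value)  # systematic shift for this well
standardized = [v + corr for v in well]
```

The block-wide (second) characteristic value would be obtained the same way from the pooled standard-layer readings of all wells.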
In one aspect of the present application, in step S2, the logging data includes acoustic transit time (sonic) logs and compensated density logs.
As one technical solution of the present application, in step S2, the standard layer includes mudstone or gypsiferous mudstone having a stable thickness distribution and a stable physical property distribution.
As an embodiment of the present application, in step S3, training samples are first selected; the logging response values of a training sample on the logging data are represented by a vector X, which serves as the input layer vector of the neural network analysis:
X=(Xi1,Xi2,…,Xij,…,Xin);
i=1,2,…,m;j=1,2,…,n;
wherein m represents the number of samples, n represents the number of logging response values, and Xij is the jth logging response value of the ith sample;
the laboratory measurement organic carbon content measurement value of the training sample is represented by a vector Y, and the laboratory measurement organic carbon content measurement value of the training sample is used as an output layer vector Y of the neural network analysis:
Y=(Yi),i=1,2,…,m;
wherein m represents the number of training samples, Yi represents the laboratory measured organic carbon content measurement value of the ith training sample;
After the training samples are selected, the input and output values of the training samples are normalized by the maximum-minimum standardization method, and ANN (artificial neural network) training is carried out on the training samples, with the number of hidden layers, the confidence and the number of training iterations set in advance. Each neuron of the Nth layer is connected with all neurons of the (N-1)th layer, and the output of the (N-1)th layer neurons is the input of the Nth layer neurons. The connection weights and offsets of the network are first randomly initialized in the range (0, 1); the training output value of the neural network under the current parameters is calculated, together with the mean square error between the training output value and the training sample output value. If the mean square error does not meet the given standard, the gradients of the output neurons and the hidden neurons are calculated from the mean square error, and the connection weights and offsets of the neural network are updated in the reverse direction. The output value is then recalculated with the updated connection weights and offsets, and the mean square error is computed again. This calculation is repeated until the error or the number of learning iterations reaches the stopping condition, at which point learning stops and the connection weights and offsets of the neural network are determined.
As an embodiment of the present application, in step S3, the maximum-minimum normalization is a linear transformation of the raw data: the minimum value and the maximum value of an attribute A being minA and maxA respectively, an original value of A is mapped onto the interval (0, 1) by:
A' = (A - minA) / (maxA - minA),
wherein A is an input variable or an output variable, and A' is the value obtained by normalizing the variable A.
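A minimal sketch of this max-min normalization and its inverse (the inverse is what maps a network output back to a TOC value; the sample values are hypothetical):

```python
def normalize(a, min_a, max_a):
    """Map a raw value of attribute A onto (0, 1): A' = (A - minA)/(maxA - minA)."""
    return (a - min_a) / (max_a - min_a)

def denormalize(a_norm, min_a, max_a):
    """Inverse transform: recover A from its normalized value A'."""
    return a_norm * (max_a - min_a) + min_a

toc = [1.0, 2.0, 3.0]   # hypothetical laboratory TOC measurements (%)
scaled = [normalize(v, min(toc), max(toc)) for v in toc]  # -> [0.0, 0.5, 1.0]
```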
As an embodiment of the present application, in step S3, the algorithm for calculating the neural network training output value, the updated connection weight of the neural network, and the updated offset of the neural network includes:
Assume the hidden layer of the neural network has q nodes; the input weight vector of the hth hidden node is W = (W1h, …, Wjh, …, Wnh), where Wjh is the input weight from the jth input node to the hth hidden node; the input weight vector of the output layer node is V = (V1, …, Vh, …, Vq), where Vh is the input weight from the hth hidden node to the output layer node;
the input αih of the hth hidden neuron for the ith sample is:
αih = Σj Wjh·Xij, j = 1, …, n;
wherein Wjh is the input weight from the jth input node to the hth hidden node; Xij is the jth logging response value of the ith sample; i = 1, …, m, m being the number of samples; j = 1, …, n, n being the number of input layer neurons; h = 1, …, q, q being the number of hidden layer neurons;
let the function
f(x) = 1/(1 + e^(-x))
be the excitation function;
the output of the hth hidden neuron for the ith sample is:
bih = f(αih - θh),
wherein αih is the input of the hth hidden neuron for the ith sample, and θh is the input offset of the hth hidden node;
the input of the output layer neuron for the ith sample is:
βi = Σh Vh·bih, h = 1, …, q;
wherein Vh is the input weight from the hth hidden node to the output layer node, and bih is the output of the hth hidden neuron for the ith sample;
the output of the output layer neuron for the ith sample is:
Y'i = f(βi - θ),
wherein βi is the input of the output layer neuron for the ith sample, and θ is the input offset of the output layer node;
then the error function of the ith sample at the neural network output node is:
Ei = (1/2)·(Yi - Y'i)^2,
wherein Yi is the expected output value of the ith sample, and Y'i is the output of the output layer neuron of the neural network for the ith sample;
the total mean square error of all samples at the output layer of the neural network is then:
E = (1/m)·Σi Ei = (1/(2m))·Σi (Yi - Y'i)^2, i = 1, …, m;
wherein m is the number of samples, Yi is the expected output value of the ith sample, and Y'i is the output of the output layer neuron of the neural network for the ith sample;
when training the neural network, the iterative update formula of any parameter is as follows:
γ′=γ+Δγ,
wherein γ is the Nth iteration value of any parameter to be solved, γ' is the (N+1)th iteration value of that parameter, and Δγ is the iteration increment;
the weight Vh from the hth hidden neuron to the output layer is updated as follows:
ΔVh = -η·∂E/∂Vh = (η/m)·Σi (Yi - Y'i)·Y'i·(1 - Y'i)·bih,
Vh' = Vh + ΔVh,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; Yi is the expected output value of the ith sample; Y'i is the output of the neural network for the ith sample; and bih is the output of the hth hidden neuron for the ith sample;
the offset θ of the output layer is updated as follows:
Δθ = -η·∂E/∂θ = -(η/m)·Σi (Yi - Y'i)·Y'i·(1 - Y'i),
θ' = θ + Δθ,
wherein i = 1, …, m, m being the number of samples; Yi is the expected output value of the ith sample; Y'i is the output of the neural network for the ith sample; and θ is the input offset of the output layer node;
the weight Wjh from the jth input neuron to the hth hidden node is updated as follows:
ΔWjh = -η·∂E/∂Wjh = (η/m)·Σi (Yi - Y'i)·Y'i·(1 - Y'i)·Vh·bih·(1 - bih)·Xij,
Wjh' = Wjh + ΔWjh,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; Y'i is the output of the neural network for the ith sample; bih is the output of the hth hidden neuron for the ith sample; Vh is the input weight from the hth hidden node to the output layer node; and Xij is the jth logging response value of the ith sample;
the offset θh from the input layer to the hth hidden node is updated as follows:
Δθh = -η·∂E/∂θh = -(η/m)·Σi (Yi - Y'i)·Y'i·(1 - Y'i)·Vh·bih·(1 - bih),
θh' = θh + Δθh,
wherein i = 1, …, m, m being the number of samples; Y'i is the output of the neural network for the ith sample; bih is the output of the hth hidden neuron for the ith sample; θh is the input offset of the hth hidden node; Yi is the expected output value of the ith sample; and Vh is the input weight from the hth hidden node to the output layer node.
At this point, the mean square error has been propagated back to the hidden layer.
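The forward pass and the four update rules above can be exercised with a small from-scratch sketch. This is a sketch under stated assumptions, not the patented implementation: updates are applied per sample rather than averaged over the batch, and the training data are hypothetical normalized values:

```python
import math
import random

def sigmoid(x):
    """Excitation function f(x) = 1/(1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W, theta_h, V, theta_o):
    """Hidden outputs bih = f(alpha_ih - theta_h); output Y' = f(beta_i - theta)."""
    b = [sigmoid(sum(W[j][h] * x[j] for j in range(len(x))) - theta_h[h])
         for h in range(len(theta_h))]
    y = sigmoid(sum(V[h] * b[h] for h in range(len(b))) - theta_o)
    return b, y

def train(X, Y, q=4, eta=0.5, epochs=3000, seed=1):
    """Backpropagation with the update rules derived in the text,
    applied sample by sample."""
    rng = random.Random(seed)
    n = len(X[0])
    W = [[rng.random() for _ in range(q)] for _ in range(n)]  # Wjh in (0, 1)
    theta_h = [rng.random() for _ in range(q)]
    V = [rng.random() for _ in range(q)]
    theta_o = rng.random()
    for _ in range(epochs):
        for x, t in zip(X, Y):
            b, y = forward(x, W, theta_h, V, theta_o)
            g = y * (1.0 - y) * (t - y)                  # output-layer gradient term
            e = [b[h] * (1.0 - b[h]) * V[h] * g for h in range(q)]  # hidden terms
            for h in range(q):
                V[h] += eta * g * b[h]                   # ΔVh
                theta_h[h] -= eta * e[h]                 # Δθh
                for j in range(n):
                    W[j][h] += eta * e[h] * x[j]         # ΔWjh
            theta_o -= eta * g                           # Δθ
    return W, theta_h, V, theta_o

# Hypothetical normalized training set: 2 logging responses -> TOC, all in (0, 1):
X = [[0.1, 0.2], [0.4, 0.1], [0.8, 0.9], [0.3, 0.7]]
Y = [0.15, 0.25, 0.85, 0.50]
params = train(X, Y)
preds = [forward(x, *params)[1] for x in X]
mse = sum((t - p) ** 2 for t, p in zip(Y, preds)) / len(Y)
```

With the sigmoid excitation, the factors Y'i·(1 - Y'i) and bih·(1 - bih) in the code are exactly the derivative terms appearing in the update formulas above.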
Beneficial effects of the present application:
The method uses laboratory measurements of the organic carbon content of core samples from the coring well sections of a research block and an artificial neural network analysis method to establish a nonlinear relation between the organic carbon content of the shale oil reservoir of the research block and the logging data, i.e. a neural network calculation model between the measured organic carbon content of the core samples and the logging values. The established neural network calculation model is then applied to the non-coring wells of the research block to calculate the organic carbon content of their shale oil reservoirs. By analyzing the data of multiple logging curves of the research block with an artificial neural network, the method greatly improves the fitting precision, achieves a good application effect, calculates the organic carbon content of the shale oil reservoir of the region quickly and accurately, and greatly improves the efficiency of organic carbon content calculation across a work area, saving time and labor.
Drawings
In order to more clearly explain the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating a normalization process performed on input values and output values of training samples according to an embodiment of the present application;
FIG. 2 is a comparison, provided by the embodiment of the application, between the organic carbon content of a cored interval of an example well of the research block calculated by the neural network model and the organic carbon content measured on laboratory cores.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that the terms "upper", "lower", and the like refer to orientations or positional relationships based on the orientations or positional relationships shown in the drawings or orientations or positional relationships that the products of the present invention are conventionally placed in use, and are used for convenience in describing the present application and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
Further, in the present application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may include the two features being in direct contact, or being in contact through another feature between them rather than in direct contact. Also, the first feature being above, over or on the second feature includes the first feature being directly above or obliquely above the second feature, or merely means that the first feature is at a higher level than the second feature; the first feature being below, beneath or under the second feature includes the first feature being directly below or obliquely below the second feature, or merely means that the first feature is at a lower level than the second feature.
Furthermore, the terms "horizontal", "vertical" and the like do not require that components be absolutely horizontal or vertical; they may be slightly inclined. For example, "horizontal" merely means that a direction is closer to horizontal than to vertical; it does not mean that the structure must be perfectly horizontal, and it may be slightly inclined.
In the description of the present application, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Embodiment:
referring to fig. 1 and fig. 2, an embodiment of the present application provides a method for calculating an organic carbon content of a shale oil reservoir from logging data, which mainly includes the following steps:
s1, carrying out depth homing processing on the core drilling depth and the logging depth of a coring well section of a research block;
s2, carrying out standardized processing on the logging data of each well of the research block, and eliminating the system error of the logging data;
and S3, on the basis of core logging depth homing and regional logging data standardization, establishing a nonlinear relation between the shale oil reservoir organic carbon content of the research block and logging data by utilizing laboratory measurement data of the core sample organic carbon content of the coring well section of the research block and adopting an artificial neural network analysis method, and establishing a neural network calculation model between the core sample organic carbon content measurement value and the logging value of the coring well section, so that the aim of calculating the reservoir organic carbon content from logging data is fulfilled;
And S4, applying the established neural network calculation model to the non-coring wells of the research block, and calculating the organic carbon content of the shale oil reservoir of the non-coring wells of the research block.
It should be noted that, in this embodiment, in step S1, the drilling depth D1 of the top boundary or the bottom boundary of the marker layer of the coring well section is determined according to the lithological characteristics of the coring well section of the research block, and the logging depth D2 corresponding to that boundary is then found on the log curve of the coring well section. The difference between the logging depth D2 and the drilling depth D1 is the correction value between the logging depth and the drilling depth of the coring well section, that is: ΔD = D2 - D1. Thus, the homing depth D2' of a core sample of the cored interval on the log is related to the drilling depth D1' of that core sample by: D2' = D1' + ΔD.
The purpose of the logging depth homing of the core sample is to ensure that the logging characteristic value extracted according to the core depth and the core measured value reflect the characteristics of the same reservoir.
It should be noted that, in an oil field, sandstone bodies or other lithologies belonging to the same layer generally have the same depositional environment and similar parameter distribution characteristics. The standardization of logging data uses this characteristic to eliminate, through standardization processing, the systematic errors between logging data measured by different instruments at different times in an area, so that research results developed from the standardized logging data are applicable across the region. The specific method is as follows: first, a standard layer of the region is determined, choosing a rock stratum of a certain thickness with stable distribution and similar or regularly varying physical properties, such as mudstone, claystone, or sandstone with a stable porosity distribution.
Further, in step S2, determining a standard layer of the research block, counting first logging feature values of the standard layers of all wells drilled in the standard layer, and determining a second logging feature value of the standard layer of the research block by using a histogram statistical method; and determining the correction value of the logging information of each well in the research block according to the difference value of the first logging characteristic value of each well and the second logging characteristic value of the research block.
It should be noted that, in this embodiment, in step S2, the logging data standardized are mainly the acoustic transit time (sonic) log and the compensated density log data; in other embodiments, other well log data may be standardized, without being limited to the data in this embodiment.
In this embodiment, in step S2, mudstone or gypsiferous mudstone having a stable thickness distribution and a stable physical property distribution may be used as the standard layer.
In this embodiment, in step S3, a training sample is first selected, a logging response value of the training sample on the logging data is represented by a vector X, and the logging response value of the training sample is used as an input layer vector X for neural network analysis:
X=(Xi1,Xi2,…,Xij,…,Xin);
i=1,2,…,m;j=1,2,…,n;
wherein m represents the number of samples, n represents the number of logging response values, and Xij is the jth logging response value of the ith sample;
the laboratory measured organic carbon content measurement value of the training sample is represented by a vector Y, and the laboratory measured organic carbon content measurement value of the training sample is used as an output layer vector Y of the neural network analysis:
Y=(Yi),i=1,2,…,m;
wherein m represents the number of training samples, and Yi represents the laboratory-measured organic carbon content of the ith training sample;
after the training sample is selected, carrying out normalization processing on the input value and the output value of the training sample by adopting a maximum-minimum standardization method; carrying out ANN neural network training on the training samples, and setting the number of hidden layers of the neural network, the confidence coefficient and the training learning times; wherein each neuron of the Nth layer is connected with all neurons of the N-1 th layer, and the output of the neuron of the N-1 th layer is the input of the neuron of the Nth layer; the connection of each neuron has a connection weight, firstly, randomly initializing the connection weight and the offset of the network in the range of (0, 1), calculating the training output value of the neural network under the current parameter condition, and calculating the mean square error of the training output value of the neural network and the training sample output value; if the mean square error does not meet the given standard, calculating the gradients of the output neurons and the hidden neurons according to the mean square error, and reversely updating the connection weight and the offset of the neural network; retraining the calculated output value according to the updated connection weight and offset of the neural network, and then calculating the mean square error of the training output value of the neural network and the training sample output value; and repeating the calculation until the error or the learning iteration number reaches the condition, stopping learning, and determining the connection weight and the offset of the neural network.
In step S3, the max-min normalization performs a linear transformation on the raw data. Assuming the minimum and maximum values of an attribute A are minA and maxA, respectively, max-min normalization maps a raw value of A onto the interval (0, 1) by the formula:
A' = (A - minA) / (maxA - minA)
wherein A is an input variable or an output variable, and A' is the value obtained by normalizing the variable A.
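As an illustration (not part of the patent text), the max-min normalization can be sketched in Python with NumPy; the gamma-ray values below are hypothetical:

```python
import numpy as np

def min_max_normalize(a):
    """Linearly map each column of raw data onto (0, 1): A' = (A - minA) / (maxA - minA)."""
    a = np.asarray(a, dtype=float)
    min_a = a.min(axis=0)
    max_a = a.max(axis=0)
    return (a - min_a) / (max_a - min_a)

# hypothetical gamma-ray readings (API units) from one well
gr = np.array([[60.0], [90.0], [120.0]])
print(min_max_normalize(gr).ravel())  # values 0.0, 0.5, 1.0
```

Note that the transform reaches the endpoints 0 and 1 at the extreme samples; applying the training set's min/max to later wells keeps the transform consistent.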
In step S3, the algorithms for calculating the training output value of the neural network and for updating the connection weights and offsets of the neural network are as follows:
Assuming the hidden layer of the neural network has q nodes, the input weights of the h-th hidden node are W = (W_1h, …, W_jh, …, W_nh), where W_jh is the input weight from the jth input node to the h-th hidden node; the input weights of the output layer node are V = (V_1, …, V_h, …, V_q), where V_h is the input weight from the h-th hidden layer node to the output layer node.
The input α_ih of the h-th hidden layer neuron for the ith sample is:
α_ih = ∑_{j=1}^{n} W_jh X_ij
wherein W_jh is the input weight from the jth input node to the h-th hidden node; X_ij is the jth log response value of the ith sample; i = 1, …, m, m being the number of samples; j = 1, …, n, n being the number of input layer neurons; h = 1, …, q, q being the number of hidden layer neurons;
let the function
f(x) = 1 / (1 + e^(-x))
be the excitation function;
the output of the h-th hidden layer neuron for the ith sample is:
b_ih = f(α_ih - θ_h),
wherein α_ih is the input of the h-th hidden layer neuron for the ith sample, and θ_h is the input offset of the h-th hidden node;
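The hidden-layer forward step just described (α_ih = ∑_j W_jh X_ij, then b_ih = f(α_ih - θ_h)) can be sketched with NumPy; the shapes and random values below are illustrative assumptions, not the patent's data:

```python
import numpy as np

def sigmoid(x):
    """Excitation function f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def hidden_output(X, W, theta_h):
    """b[i, h] = f(alpha_ih - theta_h) with alpha_ih = sum_j W[j, h] * X[i, j].

    X: (m, n) normalized log responses; W: (n, q) input-to-hidden weights;
    theta_h: (q,) hidden-node input offsets.
    """
    alpha = X @ W                  # alpha[i, h], shape (m, q)
    return sigmoid(alpha - theta_h)

rng = np.random.default_rng(0)
X = rng.random((4, 5))             # 4 samples, 5 log curves (e.g. GR, AC, DEN, CNL, LLD)
W = rng.random((5, 3))             # q = 3 hidden nodes, weights initialized in (0, 1)
theta_h = rng.random(3)
b = hidden_output(X, W, theta_h)   # hidden outputs, each strictly in (0, 1)
```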
the input of the output layer neuron for the ith sample is:
β_i = ∑_{h=1}^{q} V_h b_ih
wherein V_h is the input weight from the h-th hidden layer node to the output layer node, and b_ih is the output of the h-th hidden layer neuron for the ith sample;
the output of the output layer neuron for the ith sample is:
Y'_i = f(β_i - θ_y),
wherein β_i is the input value of the output layer neuron for the ith sample, and θ_y is the input offset of the output layer node;
then the error function of the ith sample at the output node of the neural network is:
E_i = (1/2)(Y'_i - Y_i)²
wherein Y_i is the expected output value of the ith sample, and Y'_i is the output value of the output layer neuron of the neural network for the ith sample;
the total mean square error of all samples at the output layer of the neural network is:
E = (1/m) ∑_{i=1}^{m} E_i = (1/(2m)) ∑_{i=1}^{m} (Y'_i - Y_i)²
wherein i = 1, …, m, m being the number of samples; Y_i is the expected output value of the ith sample, and Y'_i is the output value of the output layer neuron of the neural network for the ith sample;
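Continuing the sketch, the output-layer forward pass and the error terms (β_i, Y'_i, E_i, with the total error taken here as the mean of the per-sample errors E_i) might look like this; all numbers are made up:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_and_error(b, V, theta_y, Y):
    """beta_i = sum_h V[h] * b[i, h]; Y'_i = f(beta_i - theta_y);
    E_i = 0.5 * (Y'_i - Y_i)^2; total error E = mean of the E_i."""
    beta = b @ V                      # (m,) inputs to the output neuron
    y_pred = sigmoid(beta - theta_y)  # network outputs Y'_i
    e_i = 0.5 * (y_pred - Y) ** 2     # per-sample errors E_i
    return y_pred, e_i, e_i.mean()

b = np.array([[0.2, 0.8], [0.6, 0.4]])  # hidden outputs for 2 samples
V = np.array([0.5, 0.5])                # hidden-to-output weights
theta_y = 0.0                           # output-node offset
Y = np.array([0.3, 0.7])                # normalized lab-measured TOC values
y_pred, e_i, E = forward_and_error(b, V, theta_y, Y)
```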
When training the neural network, the iterative update formula of any parameter is:
γ' = γ + Δγ,
wherein γ is the Nth iteration value of the parameter to be solved, γ' is the (N+1)th iteration value of that parameter, and Δγ is the iteration increment;
then the weight V_h from the h-th hidden layer neuron to the output layer is updated as:
ΔV_h = -η ∂E/∂V_h = (η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) b_ih,
V'_h = V_h + ΔV_h,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the output value of the output layer neuron of the neural network for the ith sample; β_i is the input value of the output layer neuron for the ith sample; V_h is the input weight from the h-th hidden layer node to the output layer node; Y_i is the expected output value of the ith sample; and b_ih is the output of the h-th hidden layer neuron for the ith sample;
the offset θ_y of the output layer is updated as:
Δθ_y = -η ∂E/∂θ_y = -(η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i),
θ'_y = θ_y + Δθ_y,
wherein i = 1, …, m, m being the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; θ_y is the input offset of the output layer node; and Y_i is the expected value of the ith sample;
the weight W_jh from the jth neuron of the input layer to the h-th node of the hidden layer is updated as:
ΔW_jh = -η ∂E/∂W_jh = (η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) V_h b_ih (1 - b_ih) X_ij,
W'_jh = W_jh + ΔW_jh,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; β_i is the input value of the output layer neuron for the ith sample; b_ih is the output of the h-th hidden layer neuron for the ith sample; W_jh is the input weight from the jth input node to the h-th hidden node; Y_i is the expected value of the ith sample; V_h is the input weight from the h-th hidden layer node to the output layer node; and X_ij is the jth log response value of the ith sample;
the offset θ_h from the input layer to the h-th node of the hidden layer is updated as:
Δθ_h = -η ∂E/∂θ_h = -(η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) V_h b_ih (1 - b_ih),
θ'_h = θ_h + Δθ_h,
wherein i = 1, …, m, m being the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; β_i is the input value of the output layer neuron for the ith sample; b_ih is the output of the h-th hidden layer neuron for the ith sample; θ_h is the input offset of the h-th hidden node; Y_i is the expected value of the ith sample; and V_h is the input weight from the h-th hidden layer node to the output layer node;
at this point, the mean square error has been back-propagated to the hidden layer.
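Putting the forward pass and the four update rules together, one possible NumPy sketch of the whole training loop follows. It is a batch-gradient version under stated assumptions: a single hidden layer, sigmoid activations throughout, parameters initialized in (0, 1), and an η/m factor carrying the averaging in the total error E; function and variable names are mine, not the patent's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, Y, q=6, eta=0.5, epochs=5000, tol=1e-4, seed=0):
    """Single-hidden-layer BP network following the update rules above.

    X: (m, n) normalized log responses; Y: (m,) normalized targets in (0, 1).
    Returns the learned parameters (W, theta_h, V, theta_y).
    """
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n, q))       # input-to-hidden weights W_jh
    theta_h = rng.random(q)      # hidden-node offsets
    V = rng.random(q)            # hidden-to-output weights V_h
    theta_y = rng.random()       # output-node offset

    for _ in range(epochs):
        # forward pass
        b = sigmoid(X @ W - theta_h)   # (m, q) hidden outputs b_ih
        y = sigmoid(b @ V - theta_y)   # (m,)  network outputs Y'_i
        E = 0.5 * np.mean((y - Y) ** 2)
        if E < tol:
            break
        # output-layer gradient term g_i = (Y_i - Y'_i) Y'_i (1 - Y'_i)
        g = (Y - y) * y * (1.0 - y)
        # hidden-layer gradient term e_ih = g_i V_h b_ih (1 - b_ih)
        e = g[:, None] * V[None, :] * b * (1.0 - b)
        # parameter updates (batch form of the four formulas above)
        V += eta / m * (b.T @ g)
        theta_y += -eta / m * g.sum()
        W += eta / m * (X.T @ e)
        theta_h += -eta / m * e.sum(axis=0)
    return W, theta_h, V, theta_y

def predict(X, W, theta_h, V, theta_y):
    """Apply the trained network to the (normalized) logs of a non-cored well."""
    b = sigmoid(X @ W - theta_h)
    return sigmoid(b @ V - theta_y)
```

On a toy target the loop drives the error down; in actual use the inputs would be the normalized GR, AC, DEN, CNL and LLD curves and the target the normalized laboratory TOC.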
All connection weights and offsets of the neural network are determined through ANN training, thereby establishing a neural network calculation model between the measured TOC values of the core samples and the logging values and providing a basis for the resource evaluation of shale oil source rocks.
The method has been applied to the calculation of the organic carbon content of the inter-salt shale oil reservoir of the Jianghan oilfield, and good results were obtained in its application to the mussel leaf oil 1 well and the mussel leaf oil 2 well; the correlation coefficient between the calculated TOC and the core-analysis TOC of the two wells reaches 0.67.
In this embodiment, the mussel leaf oil 1 well and the mussel leaf oil 2 well are two coring wells; on the basis of the standardization of the regional logging data, the laboratory-measured TOC samples of the mussel leaf oil 1 well are used as the neural network training samples, and the laboratory-measured TOC samples of the mussel leaf oil 2 well are used as the test samples.
In this embodiment, five conventional logging curves, namely natural gamma ray (GR), acoustic transit time (AC), compensated density (DEN), compensated neutron (CNL) and deep lateral resistivity (LLD), are selected as the sample input variables, and the laboratory-measured TOC of the samples is used as the output variable.
First, the input and output values of the training samples are normalized, i.e., the input variables natural gamma ray (GR), acoustic transit time (AC), compensated density (DEN), compensated neutron (CNL) and deep lateral resistivity (LLD) and the output variable laboratory-measured TOC are normalized (see fig. 1, which shows the frequency distributions of the normalized values of these input variables and of the laboratory-measured TOC). Neural network learning and training are then performed with the normalized training samples, the connection weights and offsets of each neuron in the network are determined, and a neural network training model is established; the model is then extended to the mussel leaf oil 2 well for verification (see fig. 2, in which the rightmost solid curve TOC_PRED is the TOC value calculated from the logging curves with the neural network model, and the bar graph TOC_1 is the laboratory-measured TOC of the samples); the neural network model gives a good calculation result.
Therefore, by studying multiple logging curves of the block and applying artificial neural network analysis, the method calculates the organic carbon content of the shale oil reservoir with greatly improved fitting accuracy and good application results; it can quickly and accurately calculate the organic carbon content of the shale oil reservoir in the region, substantially improving the calculation efficiency across the work area while saving time and labor.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method for calculating the organic carbon content of a shale oil reservoir by logging information is characterized by comprising the following steps:
S1, carrying out depth homing processing on the core drilling depth and the logging depth of a coring well section of a research block;
S2, carrying out standardized processing on the logging data of each well of the research block to eliminate systematic errors in the logging data;
s3, establishing a nonlinear relation between the organic carbon content of the shale oil reservoir of the research block and logging information by using laboratory measurement information of the organic carbon content of the core sample of the coring well section of the research block and adopting an artificial neural network analysis method, and establishing a neural network calculation model between the organic carbon content measurement value and the logging value of the core sample of the coring well section;
and S4, applying the established neural network calculation model to the non-coring wells of the research block, and calculating the organic carbon content of the shale oil reservoir of the non-coring wells of the research block.
2. The method for calculating the organic carbon content of a shale oil reservoir from logging information according to claim 1, wherein in step S1, the drilling depth D1 of the top or bottom boundary of a marker layer of the coring section is determined according to the lithology characteristics of the coring well section of the research block; the logging depth D2 corresponding to the same top or bottom boundary of the marker layer is then found on the logging curve of the coring well section of the research block; and the difference between the logging depth D2 and the drilling depth D1 is the correction value between the logging depth and the drilling depth of the coring well section, namely:
ΔD=D2-D1;
the relation between the homing depth D2' of a core sample on the logging curve of the coring well section and the drilling depth D1' of the core sample is:
D2' = D1' + ΔD.
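As a toy illustration (outside the claim language, with made-up depths), the depth homing of step S1 is a constant shift:

```python
def homing_depth(d1_core, delta_d):
    """Core-to-log depth homing: D2' = D1' + ΔD, where ΔD = D2 - D1 is read
    off a marker bed identified on both the core and the logging curve."""
    return d1_core + delta_d

# hypothetical marker-bed top: drilled at 3120.0 m, seen on the log at 3121.5 m
delta_d = 3121.5 - 3120.0             # ΔD = D2 - D1 = 1.5 m
print(homing_depth(3150.3, delta_d))  # a core sample at 3150.3 m homes to ~3151.8 m
```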
3. The method for calculating the organic carbon content of a shale oil reservoir from logging information according to claim 1, wherein in step S2, a standard layer of the research block is determined; first logging characteristic values of the standard layer are counted for all the wells that encounter the standard layer, and a second logging characteristic value of the standard layer of the research block is determined by a histogram statistics method; the correction amount for the logging data of each of the wells of the research block is determined from the difference between the first logging characteristic value of that well and the second logging characteristic value of the research block.
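A possible sketch of the step S2 standardization described in this claim, with hypothetical curve values and the characteristic value taken here as the histogram mode:

```python
import numpy as np

def histogram_peak(values, bins=32):
    """Characteristic value of the standard layer: midpoint of the fullest histogram bin."""
    counts, edges = np.histogram(values, bins=bins)
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

def standardize_log(log_values, well_peak, block_peak):
    """Shift one well's curve so its standard-layer peak matches the block-wide peak."""
    return log_values + (block_peak - well_peak)

# hypothetical acoustic-transit-time readings in the standard layer of one well
ac = np.random.default_rng(0).normal(240.0, 2.0, 500)
well_peak = histogram_peak(ac)
block_peak = 236.0                     # block-wide characteristic value (assumed)
ac_std = standardize_log(ac, well_peak, block_peak)
```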
4. The method for calculating the organic carbon content of a shale oil reservoir from logging information according to claim 3, wherein in step S2, the logging information comprises acoustic transit time logs and compensated density logs.
5. The method for calculating the organic carbon content of a shale oil reservoir from logging information according to claim 3, wherein in step S2, the standard layer comprises mudstone or gypsiferous mudstone with uniformly distributed thickness and uniformly distributed physical properties.
6. The method for calculating the organic carbon content of the shale oil reservoir according to the logging information of claim 1, wherein in step S3, a training sample is selected first, the logging response value of the training sample on the logging information is represented by a vector X, and the logging response value of the training sample is used as an input layer vector X of neural network analysis:
X=(Xi1,Xi2,…,Xij,…,Xin);
i=1,2,…,m;j=1,2,…,n;
wherein m represents the number of samples, n represents the number of log response values, and X_ij is the jth log response value of the ith sample;
the laboratory-measured organic carbon content of the training samples is represented by a vector Y, which is used as the output layer vector of the neural network analysis:
Y = (Y_i), i = 1, 2, …, m;
wherein m represents the number of training samples, and Y_i represents the laboratory-measured organic carbon content of the ith training sample;
after the training samples are selected, the input values and output values of the training samples are normalized using the max-min normalization method; neural network training is carried out on the training samples, with the number of hidden layers of the neural network, the confidence level and the number of training iterations set; each neuron of the Nth layer is connected to all neurons of the (N-1)th layer, and the outputs of the (N-1)th layer neurons are the inputs of the Nth layer neurons; the connection weights and offsets of the network are randomly initialized in the range (0, 1), the training output value of the neural network is calculated under the current parameters, and the mean square error between the training output value of the neural network and the output value of the training samples is calculated; if the mean square error does not meet the given criterion, the gradients of the output neurons and the hidden neurons are calculated from the mean square error, and the connection weights and offsets of the neural network are updated by back-propagation; the output value is recalculated with the updated connection weights and offsets, and the mean square error between the training output value and the training sample output value is calculated again; the calculation is repeated until the error criterion is met or the number of learning iterations is reached, learning stops, and the connection weights and offsets of the neural network are determined.
7. The method of claim 6, wherein in step S3, the max-min normalization performs a linear transformation on the raw data; assuming the minimum and maximum values of an attribute A are minA and maxA, respectively, max-min normalization maps a raw value of A onto the interval (0, 1) by the formula:
A' = (A - minA) / (maxA - minA)
wherein A is an input variable or an output variable, and A' is the value obtained by normalizing the variable A.
8. The method for calculating the organic carbon content of a shale oil reservoir from logging information according to claim 6, wherein in step S3, the algorithms for calculating the training output value of the neural network and for updating the connection weights and offsets of the neural network are as follows:
assuming the hidden layer of the neural network has q nodes, the input weights of the h-th hidden node are W = (W_1h, …, W_jh, …, W_nh), wherein W_jh is the input weight from the jth input node to the h-th hidden node; the input weights of the output layer node are V = (V_1, …, V_h, …, V_q), wherein V_h is the input weight from the h-th hidden layer node to the output layer node;
the input α_ih of the h-th hidden layer neuron for the ith sample is:
α_ih = ∑_{j=1}^{n} W_jh X_ij
wherein i = 1, …, m, m being the number of samples; j = 1, …, n, n being the number of input layer neurons; h = 1, …, q, q being the number of hidden layer neurons; W_jh is the input weight from the jth input node to the h-th hidden node; and X_ij is the jth log response value of the ith sample;
let the function
f(x) = 1 / (1 + e^(-x))
be the excitation function,
the output b_ih of the h-th hidden layer neuron for the ith sample is:
b_ih = f(α_ih - θ_h),
wherein α_ih is the input of the h-th hidden layer neuron for the ith sample, and θ_h is the input offset of the h-th hidden node;
the input β_i of the output layer neuron for the ith sample is:
β_i = ∑_{h=1}^{q} V_h b_ih
wherein V_h is the input weight from the h-th hidden layer node to the output layer node, and b_ih is the output of the h-th hidden layer neuron for the ith sample;
the output of the output layer neuron for the ith sample is:
Y'_i = f(β_i - θ_y),
wherein β_i is the input value of the output layer neuron for the ith sample, and θ_y is the input offset of the output layer node;
then the error function E_i of the ith sample at the output node of the neural network is:
E_i = (1/2)(Y'_i - Y_i)²
wherein Y_i is the expected output value of the ith sample, and Y'_i is the output value of the output layer neuron of the neural network for the ith sample;
the total mean square error E of all samples at the output layer of the neural network is:
E = (1/m) ∑_{i=1}^{m} E_i = (1/(2m)) ∑_{i=1}^{m} (Y'_i - Y_i)²
wherein i = 1, …, m, m being the number of samples; Y_i is the expected output value of the ith sample, and Y'_i is the output value of the output layer neuron of the neural network for the ith sample;
when training the neural network, the iterative update formula of any parameter is:
γ' = γ + Δγ,
wherein γ is the Nth iteration value of the parameter to be solved, γ' is the (N+1)th iteration value of that parameter, and Δγ is the iteration increment;
the weight V_h from the h-th hidden layer neuron to the output layer is updated as:
ΔV_h = -η ∂E/∂V_h = (η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) b_ih,
V'_h = V_h + ΔV_h,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the output value of the output layer neuron of the neural network for the ith sample; β_i is the input value of the output layer neuron for the ith sample; V_h is the input weight from the h-th hidden layer node to the output layer node; Y_i is the expected output value of the ith sample; and b_ih is the output of the h-th hidden layer neuron for the ith sample;
the offset θ_y of the output layer is updated as:
Δθ_y = -η ∂E/∂θ_y = -(η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i),
θ'_y = θ_y + Δθ_y,
wherein i = 1, …, m, m being the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; θ_y is the input offset of the output layer node; and Y_i is the expected value of the ith sample;
the weight W_jh from the jth neuron of the input layer to the h-th node of the hidden layer is updated as:
ΔW_jh = -η ∂E/∂W_jh = (η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) V_h b_ih (1 - b_ih) X_ij,
W'_jh = W_jh + ΔW_jh,
wherein η is the learning step length, in the range (0, 1); m is the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; β_i is the input value of the output layer neuron for the ith sample; b_ih is the output of the h-th hidden layer neuron for the ith sample; W_jh is the input weight from the jth input node to the h-th hidden node; Y_i is the expected value of the ith sample; V_h is the input weight from the h-th hidden layer node to the output layer node; and X_ij is the jth log response value of the ith sample;
the offset θ_h from the input layer to the h-th node of the hidden layer is updated as:
Δθ_h = -η ∂E/∂θ_h = -(η/m) ∑_{i=1}^{m} (Y_i - Y'_i) Y'_i (1 - Y'_i) V_h b_ih (1 - b_ih),
θ'_h = θ_h + Δθ_h,
wherein i = 1, …, m, m being the number of samples; E_i is the error function of the ith sample at the output node of the neural network; Y'_i is the neural network output value for the ith sample; β_i is the input value of the output layer neuron for the ith sample; b_ih is the output of the h-th hidden layer neuron for the ith sample; θ_h is the input offset of the h-th hidden node; Y_i is the expected value of the ith sample; and V_h is the input weight from the h-th hidden layer node to the output layer node;
at this point, the mean square error has been back-propagated to the hidden layer.
Publications (2)

Publication Number Publication Date
CN111984928A true CN111984928A (en) 2020-11-24
CN111984928B CN111984928B (en) 2024-02-20


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115032361A (en) * 2021-03-03 2022-09-09 中国石油化工股份有限公司 A method for evaluating organic carbon content in shale oil reservoirs based on genetic optimization neural network algorithm
WO2024077538A1 (en) * 2022-10-13 2024-04-18 Saudi Arabian Oil Company Methods and systems for predicting lithology and formation boundary ahead of the bit

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5251286A (en) * 1992-03-16 1993-10-05 Texaco, Inc. Method for estimating formation permeability from wireline logs using neural networks
CN103670388A (en) * 2013-12-12 2014-03-26 中国石油天然气股份有限公司 Method for evaluating organic carbon content of shale
CN111048163A (en) * 2019-12-18 2020-04-21 延安大学 A high-order neural network-based evaluation method for hydrocarbon retention (S1) in shale oil


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Jianxing; LIU Zhidi; XU Defeng: "Prediction of the oil content of oil shale in the Chang 7 Formation, Ordos Basin", Journal of Yan'an University (Natural Science Edition), No. 03 *



