CN113255729A - Epitaxial layer growth state judgment method and device based on convolutional neural network - Google Patents
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The invention discloses a method and a device for judging the growth state of an epitaxial layer based on a convolutional neural network. The judging method comprises the following steps: acquiring real-time two-dimensional images of the epitaxial layer in different growth states; obtaining training samples based on the two-dimensional images and training a pre-established initial convolutional neural network model to obtain a convolutional neural network model; and obtaining the output probability vector corresponding to a two-dimensional image from the convolutional neural network model, and judging the growth state of the epitaxial layer according to that probability vector. The convolutional neural network model replaces manual work, carrying out quantitative and qualitative analysis of the two-dimensional images of the molecular beam epitaxy growth process, so that the growth state of the epitaxial layer is judged quickly and accurately, the efficiency and accuracy of growth-state judgment are improved, and real-time analysis is realized.
Description
Technical Field
The invention relates to the technical field of semiconductors, in particular to a method and a device for judging the growth state of an epitaxial layer based on a convolutional neural network.
Background
Molecular beam epitaxy (MBE) is a method of physically depositing single-crystal thin films. Because MBE offers precise control over chemical composition, growth rate and other parameters, it is well suited to homojunction and heterojunction epitaxial growth of a wide range of compound semiconductors and their alloys. The semiconductor industry's requirements on device performance continue to rise, and device design is moving toward smaller sizes, novel structures and low-dimensional geometries. Growing high-quality thin-film epitaxial layers by MBE is therefore an indispensable part of the semiconductor industry.
Conventionally, however, the growth state of the epitaxial layer is judged by manual analysis. In the actual epitaxial growth process, interpreting the patterns depends heavily on the analyst's experience, different epitaxial structures require thorough familiarity with their characteristic patterns, and whole-process real-time monitoring is difficult to achieve. In addition, manual analysis consumes substantial human resources, and its throughput can hardly meet the requirement of real-time analysis.
Disclosure of Invention
Therefore, it is necessary to provide a method and an apparatus for judging the epitaxial layer growth state based on a convolutional neural network, which use a convolutional neural network model in place of manual work to perform quantitative and qualitative analysis of two-dimensional images of the MBE growth process, improving the efficiency and accuracy of growth-state judgment.
In order to solve the above technical problem, a first aspect of the present application provides a method for determining an epitaxial layer growth state based on a convolutional neural network, including:
acquiring real-time two-dimensional images of the epitaxial layer in different growth states;
obtaining training samples based on the two-dimensional images, and training a pre-established initial convolutional neural network model to obtain a convolutional neural network model;
and acquiring an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model so as to judge the growth state of the epitaxial layer according to the probability vector.
In the method for judging the growth state of the epitaxial layer based on the convolutional neural network provided by this embodiment, real-time two-dimensional images of the epitaxial layer in different growth states are acquired; training samples obtained from these images are used to train a pre-established initial convolutional neural network model; and the trained model outputs a probability vector for each two-dimensional image, from which the growth state of the epitaxial layer is judged. Because the initial convolutional neural network model takes the acquired two-dimensional images as its input and is trained on a large number of them, the resulting model can replace manual work, carrying out quantitative and qualitative analysis of the two-dimensional images of the MBE growth process, so that the growth state of the epitaxial layer is judged quickly and accurately, judgment efficiency and accuracy are improved, and real-time analysis is realized.
In one embodiment, the acquiring real-time two-dimensional images of the epitaxial layer in different growth states includes:
acquiring real-time diffraction images of the epitaxial layer in different growth states;
preprocessing the diffraction image to obtain a preprocessed diffraction image so as to obtain the training sample based on the preprocessed diffraction image;
wherein the preprocessing comprises at least one of image denoising, normalization, effective region cropping and resampling.
In one embodiment, the initial convolutional neural network model includes convolutional layers, pooling layers, a fully connected layer and an output layer connected in sequence, a convolutional layer followed by a pooling layer forming an iteration module.
In one embodiment, the obtaining of training samples based on the two-dimensional images and training of the pre-established initial convolutional neural network model to obtain the convolutional neural network model includes:
acquiring an annotation data set, wherein the annotation data set comprises the growth state categories of the epitaxial layers and the two-dimensional images corresponding to the growth state categories of the epitaxial layers;
dividing the labeling data group into a training data set and a verification data set;
acquiring training configuration parameters of the initial convolutional neural network model;
inputting the training data set into the initial convolutional neural network model, and calculating a loss function of the training configuration parameters through forward propagation;
judging whether the loss function is smaller than a preset threshold value or not and whether the training iteration times are larger than or equal to the preset iteration times or not;
if yes, determining the convolutional neural network model;
otherwise, performing inverse gradient propagation calculation according to the optimizer of the training configuration parameters, and reacquiring the training configuration parameters.
In one embodiment, the inputting of the training data set into the initial convolutional neural network model and the calculating of the loss function by forward propagation include:
obtaining a first classification predicted value of the two-dimensional image through forward propagation calculation;
comparing the first classification predicted value of the two-dimensional image with the actual value of the two-dimensional image, the loss function being calculated by the formula:
L(ŷ, y) = −Σ_{k=1}^{K} y^(k) · log ŷ^(k)
wherein ŷ is the first classification predicted value of the two-dimensional image and y is the actual value of the two-dimensional image.
In one embodiment, after the dividing of the labeled data set into the training data set and the verification data set, and before the obtaining of the training configuration parameters of the initial convolutional neural network model, the method further includes:
and expanding the capacity of the training data set.
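As a hedged sketch of this capacity expansion, the snippet below (NumPy) adds flipped and brightness-jittered copies of each training image; which augmentations preserve the physical meaning of a RHEED pattern is an assumption here, not something the text specifies:

```python
import numpy as np

def expand_training_set(images, labels):
    """Illustrative capacity expansion: add a horizontally flipped and a
    slightly brightness-shifted copy of each training image (assumed to be
    valid transforms; images are floats in [0, 1])."""
    out_imgs, out_labels = [], []
    for img, lab in zip(images, labels):
        out_imgs.extend([img,
                         img[:, ::-1],                    # horizontal flip
                         np.clip(img * 1.1, 0.0, 1.0)])   # brightness jitter
        out_labels.extend([lab, lab, lab])
    return out_imgs, out_labels

imgs = [np.random.rand(8, 8) for _ in range(5)]
labs = ["stripe"] * 5
aug_imgs, aug_labs = expand_training_set(imgs, labs)
print(len(aug_imgs))  # 15
```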
In one embodiment, after determining the convolutional neural network model, the method further includes:
evaluating the convolutional neural network model.
In one embodiment, the performing verification evaluation on the convolutional neural network model includes:
inputting the verification data set into the convolutional neural network model, and obtaining a second classification predicted value of the two-dimensional image through forward propagation calculation;
determining a model evaluation function and a preset evaluation value according to the second classification predicted value of the two-dimensional image and the actual value corresponding to the two-dimensional image;
judging whether the model evaluation function meets the preset evaluation value or not;
if yes, setting the training configuration parameters as constants to complete verification and evaluation;
otherwise, adjusting the training configuration parameters or expanding the labeled data set, and re-training.
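The verification-evaluation loop above can be sketched as follows; accuracy is used as an assumed stand-in for the model evaluation function (the text leaves the metric open), and `model_fn` is a hypothetical trained-model callable:

```python
import numpy as np

def evaluate(model_fn, val_images, val_labels, preset_threshold=0.9):
    """Forward-propagate the verification set, compare predicted classes with
    actual ones, and check the (assumed) accuracy metric against the preset
    evaluation value. True -> freeze the trained parameters; False -> adjust
    the training configuration or expand the labeled data and retrain."""
    preds = [int(np.argmax(model_fn(img))) for img in val_images]
    actual = [int(np.argmax(lab)) for lab in val_labels]
    accuracy = float(np.mean([p == a for p, a in zip(preds, actual)]))
    return accuracy, accuracy >= preset_threshold

# Dummy "model" that always predicts class 0 with high confidence.
model = lambda img: np.array([0.9, 0.05, 0.03, 0.02])
labels = [np.array([1, 0, 0, 0])] * 8 + [np.array([0, 1, 0, 0])] * 2
acc, passed = evaluate(model, [None] * 10, labels, preset_threshold=0.9)
print(acc, passed)  # 0.8 False
```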
A second aspect of the present application provides an epitaxial layer growth state determination apparatus based on a convolutional neural network, including:
the image acquisition module is used for acquiring real-time two-dimensional images of the epitaxial layer in different growth states;
the convolutional neural network model acquisition module is used for obtaining training samples based on the two-dimensional images and training a pre-established initial convolutional neural network model to obtain a convolutional neural network model;
and the epitaxial layer growth state judgment module is used for acquiring an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model so as to judge the growth state of the epitaxial layer according to the probability vector.
A third aspect of the application proposes a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
A fourth aspect of the present application proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method as described above.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; drawings of other embodiments can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a method for determining an epitaxial layer growth state based on a convolutional neural network according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of acquiring a two-dimensional image according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a process for obtaining a convolutional neural network model according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a two-dimensional image corresponding to a growth state of each epitaxial layer provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating evaluation of a convolutional neural network model provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an epitaxial layer growth state determination device based on a convolutional neural network according to an embodiment of the present application.
Description of reference numerals: 10-an image acquisition module, 20-a convolutional neural network model acquisition module and 30-an epitaxial layer growth state judgment module.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are illustrated in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Where the terms "comprising," "having," and "including" are used herein, another element may be added unless an explicit limitation such as "only" or "consisting of" is used. Unless mentioned to the contrary, terms in the singular may include the plural and are not to be construed as limited to one in number.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present application.
In this application, unless otherwise expressly stated or limited, the terms "connected" and "connecting" are used broadly and encompass, for example, direct connection, indirect connection via an intermediary, communication between two elements, or interaction between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
To verify the quality of a thin-film epitaxial structure grown by MBE, different test methods are needed to calibrate the quality of the epitaxial layer. Methods for measuring epitaxial layer quality fall into two categories: one requires taking the epitaxial wafer out of the growth chamber and measuring it with dedicated instruments, while the other is mounted directly on the MBE growth chamber so that the quality of the epitaxial layer can be monitored in real time during growth. Reflection high-energy electron diffraction (RHEED) is one of the most important means of the latter kind. Different epitaxial growth states correspond to different characteristic patterns in the measured RHEED spectrum, so by analyzing the two-dimensional image of the RHEED spectrum, the growth state of the epitaxial layer can be known in real time; this is an important reference for epitaxial growth quality. Analysis of RHEED spectra can be divided into two categories, qualitative and quantitative: 1. qualitative analysis mainly infers the growth state of the epitaxial layer from experience by directly observing the RHEED diffraction pattern; 2. quantitative analysis calculates the state parameters of epitaxial growth (such as growth rate and surface reconstruction) from diffraction-spot data (such as the oscillation period of the spot intensity and the spacing between spots). From the standpoint of monitoring efficiency and accuracy, manual monitoring of the epitaxial layer growth state can hardly meet the requirement of real-time monitoring. The present application therefore provides an epitaxial layer growth state judgment method based on a convolutional neural network, which replaces manual monitoring and realizes real-time monitoring of the MBE epitaxial layer growth state.
In an embodiment of the present application, a method for determining an epitaxial layer growth state based on a convolutional neural network is provided, as shown in fig. 1, including the following steps:
step S10: acquiring real-time two-dimensional images of the epitaxial layer in different growth states;
step S20: obtaining training samples based on the two-dimensional images, and training a pre-established initial convolutional neural network model to obtain a convolutional neural network model;
step S30: and acquiring an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model so as to judge the growth state of the epitaxial layer according to the probability vector.
In the method for judging the growth state of the epitaxial layer based on the convolutional neural network provided by this embodiment, real-time two-dimensional images of the epitaxial layer in different growth states are acquired; training samples obtained from these images are used to train a pre-established initial convolutional neural network model; and the trained model outputs a probability vector for each two-dimensional image, from which the growth state of the epitaxial layer is judged. Because the initial convolutional neural network model takes the acquired two-dimensional images as its input and is trained on a large number of them, the resulting model can replace manual work, carrying out quantitative and qualitative analysis of the two-dimensional images of the MBE growth process, so that the growth state of the epitaxial layer is judged quickly and accurately, judgment efficiency and accuracy are improved, and real-time analysis is realized.
As an example, the two-dimensional image on the phosphor screen may be acquired by a camera (an image sensor or a general-purpose camera). The two-dimensional image is a two-dimensional grayscale image of the RHEED spectrum, and the initial convolutional neural network model is a network model pre-established according to actual requirements. The model serves as an image feature extractor with the many two-dimensional images as its input, and the trained convolutional neural network model can be integrated into a RHEED spectrum acquisition and analysis system.
As an example, the output probability vector P_i can be used directly as the basis for judging the epitaxial growth state. For a single RHEED image, the class with the maximum probability in the output vector can be taken as the final judged class. Alternatively, RHEED images can be sampled continuously and passed through the convolutional neural network model to output probability vectors, and the trend of the epitaxial growth state can be monitored and analyzed in real time from the continuous variation of P_i over time.
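A minimal sketch of both judgment modes (single-image argmax and continuous trend monitoring); the class names and their order are illustrative assumptions:

```python
import numpy as np

# Hypothetical class order, matching the four growth-state categories
# named in the text: {stripe, scatter, ring, other}.
CLASSES = ["stripe", "scatter", "ring", "other"]

def judge_single(p):
    """Single RHEED image: take the class with maximum probability."""
    return CLASSES[int(np.argmax(p))]

def monitor_trend(prob_sequence, window=3):
    """Continuously sampled images: average P_i over a sliding window so the
    judged state follows the trend rather than one noisy frame."""
    seq = np.asarray(prob_sequence)
    judged = []
    for i in range(len(seq)):
        lo = max(0, i - window + 1)
        judged.append(judge_single(seq[lo:i + 1].mean(axis=0)))
    return judged

p = np.array([0.1, 0.7, 0.15, 0.05])
print(judge_single(p))  # scatter
```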
In one embodiment, the initial convolutional neural network model comprises convolutional layers, pooling layers, fully connected layers and an output layer connected in sequence; a convolutional layer followed by a pooling layer forms an iteration module, and the two-dimensional image is the input of the initial convolutional neural network model. Specifically, the two-dimensional image undergoes one convolution and one pooling per iteration; after N_conv such rounds, the result is flattened into a vector and fed to the fully connected layers, and after N_dense rounds of fully connected operations a lower-dimensional vector of dimension R_N is output. Finally, a K-dimensional vector is output, where K is the number of epitaxial growth state classes; in a typical growth scenario these can be divided into stripes, scattered spots, rings and others. Each component of this vector is the probability of the corresponding class, and the components sum to 1. The R_N-dimensional output of the fully connected layers is the input of the output layer, which outputs the K-dimensional probability vector through a softmax classifier. Concretely, the softmax probability of the k-th class is:
p^(k) = exp(x^T v^(k)) / Σ_{j=1}^{K} exp(x^T v^(j))
wherein x is the R_N-dimensional input vector; K is the number of output classes; and v^(k) is the R_N-dimensional parameter vector of the k-th class of the output layer, all K such vectors forming the parameter matrix V.
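The softmax calculation of the output layer can be written directly in NumPy; the toy values of x and V below are illustrative only:

```python
import numpy as np

def softmax_output(x, V):
    """Compute the K-dimensional probability vector from an R_N-dimensional
    feature vector x and a (K, R_N) parameter matrix V:
    p_k = exp(v_k . x) / sum_j exp(v_j . x)."""
    logits = V @ x            # one score per class
    logits -= logits.max()    # numerical stability; leaves probabilities unchanged
    exp = np.exp(logits)
    return exp / exp.sum()

# Toy example: K = 3 classes, R_N = 4 fully-connected output features.
x = np.array([0.5, -1.0, 2.0, 0.0])
V = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
p = softmax_output(x, V)
print(p)  # components sum to 1; largest where the score is largest
```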
In one embodiment, as shown in FIG. 2, step S10, acquiring real-time two-dimensional images of the epitaxial layer in different growth states, includes the following steps:
step S11: acquiring real-time diffraction images of epitaxial layers in different growth states;
step S12: preprocessing the diffraction image to obtain a preprocessed diffraction image so as to obtain a training sample based on the preprocessed diffraction image;
wherein the preprocessing comprises at least one of image denoising, normalization, effective region clipping and resampling.
As an example, during the growth process, a beam of high-energy electrons is incident on the growth surface at a glancing angle; the electrons are diffracted at the surface of the epitaxial layer according to its growth state, and the outgoing reflected electrons form a diffraction image on the fluorescent screen, namely the corresponding RHEED spectrum. The preprocessed diffraction images are the two-dimensional images, and a variety of two-dimensional images corresponding to the various epitaxial layer growth states are collected to form the training samples.
As an example, the preprocessing of diffraction images needs to be consistent within the same epitaxial growth process. The data of a preprocessed diffraction image may have the shape (1, N_input, N_input), with each value being either an integer in 0–255 or a floating-point number in 0–1; that is, it is a two-dimensional grayscale image of size (N_input, N_input) in which each pixel is an integer in 0–255 or, as the result of normalization, a floating-point number in 0–1. Once the value of N_input is determined, it remains unchanged throughout preprocessing, model training and evaluation.
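A possible preprocessing chain under the constraints above (fixed output size N_input × N_input, grey levels normalized to 0–1); the filter size, crop box and resampling method are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def preprocess(raw, n_input=64):
    """Sketch of the preprocessing steps (assumed order: denoise -> crop the
    effective region -> resample -> normalize)."""
    img = raw.astype(np.float64)
    h0, w0 = img.shape
    # 1. Denoise: simple 3x3 mean filter built from shifted sums.
    padded = np.pad(img, 1, mode="edge")
    img = sum(padded[dy:dy + h0, dx:dx + w0]
              for dy in range(3) for dx in range(3)) / 9.0
    # 2. Crop the effective (central) region -- placeholder box.
    h, w = img.shape
    img = img[h // 8: h - h // 8, w // 8: w - w // 8]
    # 3. Resample to (n_input, n_input) by nearest neighbour.
    ys = np.arange(n_input) * img.shape[0] // n_input
    xs = np.arange(n_input) * img.shape[1] // n_input
    img = img[np.ix_(ys, xs)]
    # 4. Normalize 0-255 grey levels to floats in [0, 1].
    return img / 255.0

frame = np.random.randint(0, 256, size=(480, 640))
x = preprocess(frame, n_input=64)
```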
In one embodiment, as shown in FIG. 3, step S20, obtaining training samples based on the two-dimensional images and training the pre-established initial convolutional neural network model to obtain the convolutional neural network model, includes the following steps:
step S21: acquiring an annotation data set, wherein the annotation data set comprises growth state categories of epitaxial layers and two-dimensional images corresponding to the growth state categories of the epitaxial layers;
step S22: dividing the labeling data group into a training data set and a verification data set;
step S23: acquiring training configuration parameters of an initial convolutional neural network model;
step S24: inputting the training data set into an initial convolutional neural network model, and calculating a loss function of a training configuration parameter by forward propagation;
step S25: judging whether the loss function is smaller than a preset threshold value or not and whether the training iteration times are larger than or equal to the preset iteration times or not;
step S26: if so, determining a convolutional neural network model;
step S27: otherwise, performing inverse gradient propagation calculation according to the optimizer of the training configuration parameters, and reacquiring the training configuration parameters.
As an example, the growth state categories of the epitaxial layer correspond to the two-dimensional images one-to-one: two-dimensional single-crystal thin-film growth corresponds to stripes, three-dimensional island single-crystal growth corresponds to scattered spots, and polycrystalline growth corresponds to rings, as shown in FIG. 4.
As an example, in step S21, after data of a certain scale are acquired and preprocessed, the two-dimensional images are labeled manually with their growth state categories. Each group of data is labeled with a K-dimensional vector in which the component of the corresponding category is 1 and all other components are 0. For instance, if a RHEED spectrum image belongs to the stripe class among the four classes {stripe, scattered spot, ring, other}, its label vector is (1, 0, 0, 0). Each finally labeled group of data is thus a two-dimensional grayscale image of (N_input, N_input) pixels together with its labeled K-dimensional vector.
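The K-dimensional labelling rule can be sketched as follows, with the four class names assumed for illustration:

```python
# Minimal one-hot labelling helper for the K = 4 assumed categories.
CATEGORIES = ["stripe", "scatter", "ring", "other"]

def label_vector(category):
    """Return the K-dimensional label: 1 at the category's index, 0 elsewhere."""
    vec = [0] * len(CATEGORIES)
    vec[CATEGORIES.index(category)] = 1
    return vec

print(label_vector("stripe"))  # [1, 0, 0, 0]
```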
As an example, in step S22, the annotation data set is divided into the training data set and the verification data set according to a preset ratio, such that the number of samples of each growth state category is distributed substantially consistently between the two sets. The preset ratio is, for example, 7:3 or 8:2 and can be adjusted as needed, but is not limited thereto.
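A stratified split of this kind can be sketched in a few lines; grouping by category before dividing keeps the per-category proportions consistent between the two sets, as step S22 requires. The function and variable names are illustrative:

```python
import random
from collections import defaultdict

# Sketch of the stratified split in step S22 (an assumption about the exact
# procedure): samples are grouped by growth state category, each group is
# shuffled and divided by the preset ratio (7:3 here), so the category
# distribution stays substantially consistent across both sets.

def stratified_split(samples, labels, train_ratio=0.7, seed=0):
    """Split (sample, label) pairs so each category keeps the preset ratio."""
    by_category = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_category[label].append(sample)
    rng = random.Random(seed)
    train, verify = [], []
    for label, group in by_category.items():
        rng.shuffle(group)
        cut = round(len(group) * train_ratio)
        train += [(s, label) for s in group[:cut]]
        verify += [(s, label) for s in group[cut:]]
    return train, verify

# 100 toy samples: 60 "stripe", 40 "scatter"; both categories split 7:3.
train, verify = stratified_split(list(range(100)),
                                 ["stripe"] * 60 + ["scatter"] * 40)
```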
As an example, in steps S23 and S24, the training configuration parameters include a loss function and an optimizer, i.e., initialization parameters of the initial convolutional neural network model. Of course, the initialization parameters further include model meta-parameters and neural network parameters, where the model meta-parameters refer to model definition parameters that remain unchanged during the training process, and the neural network parameters refer to parameters that are continuously adjusted and optimized during the training process. The input data of the training data set are fed into the initial convolutional neural network model in batches and processed through N_convolution rounds of convolution and pooling operations and N_dense rounds of fully connected operations; a predicted value is finally obtained through a softmax calculation and compared with the true value to calculate the loss function. This process is called one forward propagation.
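One forward propagation of this shape can be sketched with plain numpy. The image size, kernel size, round counts (N_convolution = 2, N_dense = 1), and random weights below are illustrative assumptions, not the patent's actual architecture:

```python
import numpy as np

# Illustrative numpy sketch of one forward propagation: N_convolution rounds
# of convolution + pooling, N_dense fully connected layers, then softmax
# producing a K-class probability vector. All shapes are assumptions.

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive valid 2-D convolution for a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling; odd trailing rows/columns are trimmed."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = rng.standard_normal((32, 32))          # (N_input, N_input) grayscale
x = image
for _ in range(2):                             # N_convolution = 2 rounds
    kernel = rng.standard_normal((3, 3))
    x = np.maximum(conv2d_valid(x, kernel), 0) # convolution + ReLU
    x = max_pool2(x)                           # pooling
flat = x.ravel()
weights = rng.standard_normal((4, flat.size))  # N_dense = 1 fully connected layer
probs = softmax(weights @ flat)                # K = 4 class probabilities
```

The final `probs` vector is non-negative and sums to 1, which is what allows the largest component to be read off as the predicted growth state category.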
Specifically, step S24: inputting a training data set into an initial convolutional neural network model, and calculating a loss function of a training configuration parameter by forward propagation, wherein the method comprises the following steps:
step S241: calculating according to forward propagation to obtain a first classification predicted value of the two-dimensional image;
step S242: comparing the first classification predicted value of the two-dimensional image with the actual value of the two-dimensional image, and calculating the loss function, for a softmax classifier typically the cross-entropy:

L(ŷ, y) = -Σ_k y_k log(ŷ_k)

wherein ŷ is the first classification predicted value of the two-dimensional image and y is the actual value of the two-dimensional image.
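Assuming the loss in step S242 is the standard cross-entropy used with softmax outputs (an assumption, since the original text does not reproduce the formula), the computation can be sketched as:

```python
import math

# Sketch of step S242 under the assumption of a cross-entropy loss:
# L(y_hat, y) = -sum_k y_k * log(y_hat_k), where y_hat is the predicted
# probability vector and y the one-hot actual value.

def cross_entropy(y_hat, y, eps=1e-12):
    """Cross-entropy between predicted probabilities and a one-hot label."""
    return -sum(yk * math.log(max(pk, eps)) for pk, yk in zip(y_hat, y))

# A confident correct prediction yields a small loss.
loss = cross_entropy([0.9, 0.05, 0.03, 0.02], [1, 0, 0, 0])
```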
In one embodiment, step S22: after the annotation data set is divided into the training data set and the verification data set, step S23: before obtaining the training configuration parameters of the initial convolutional neural network model, the method further comprises the following steps:
step S220: and expanding the training data set.
As an example, to avoid model under-fitting caused by a training data set that is too small, the training data set may be expanded by data enhancement before the initial convolutional neural network model is trained. Data enhancement includes, but is not limited to, horizontal image flipping, fine adjustment of image brightness and contrast, and image rotation. Within the same group of data, the label after enhancement remains consistent with the label before enhancement. The enhancement parameter configuration, such as the brightness and contrast adjustment matrix or the rotation angle array, is adjusted according to the actual situation.
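The data enhancement in step S220 can be sketched as follows; the contrast and brightness values are illustrative, and every augmented copy keeps the original sample's label, as the text requires:

```python
# Sketch of step S220 data enhancement on a nested-list grayscale "image":
# horizontal flip, brightness/contrast adjustment, and 90-degree rotation.
# The parameter values (contrast 1.1, brightness 0.05) are assumptions.

def hflip(img):
    return [row[::-1] for row in img]

def adjust(img, contrast=1.0, brightness=0.0):
    return [[p * contrast + brightness for p in row] for row in img]

def rotate90(img):
    return [list(row) for row in zip(*img[::-1])]

def augment(image, label):
    """Expand one labeled sample into several; every copy keeps the label."""
    variants = [image, hflip(image),
                adjust(image, contrast=1.1, brightness=0.05), rotate90(image)]
    return [(v, label) for v in variants]

samples = augment([[0.0, 0.5], [1.0, 0.2]], "stripe")
```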
As an example, in step S25, the setting of the preset threshold and the setting of the preset iteration number are determined according to actual conditions, and are not unique.
As an example, in step S27, the optimizer, i.e., the optimization algorithm, optimizes the loss function by propagating its gradient with respect to the training parameters backward and fine-tuning those parameters accordingly. Computing the gradient of the loss function obtained by forward propagation with respect to the training parameters is called backward gradient propagation; the dimension of the gradient is consistent with the dimension of the training parameters.
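The point that the gradient has the same dimension as the training parameters can be illustrated on a toy loss; here the gradient is approximated numerically, whereas a real optimizer obtains it analytically through backpropagation:

```python
# Sketch of "backward gradient propagation" on a toy loss: the gradient of
# the loss with respect to the training parameters has one component per
# parameter, i.e. the same dimension. A forward finite difference stands in
# for analytic backpropagation purely for illustration.

def numerical_gradient(loss_fn, params, h=1e-6):
    grad = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += h
        grad.append((loss_fn(bumped) - loss_fn(params)) / h)
    return grad

loss_fn = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
params = [0.0, 0.0]
grad = numerical_gradient(loss_fn, params)  # same dimension as params
```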
In one embodiment, step S26: after determining the convolutional neural network model, further comprising:
step S260: and evaluating the convolutional neural network model.
Specifically, as shown in fig. 5, step S260: evaluating the convolutional neural network model, comprising the steps of:
step S261: inputting the verification data set into a convolutional neural network model, and obtaining a second classification predicted value of the two-dimensional image through forward propagation calculation;
step S262: determining a model evaluation function and a preset evaluation value according to a second classification predicted value of the two-dimensional image and an actual value corresponding to the two-dimensional image;
step S263: judging whether the model evaluation function meets a preset evaluation value or not;
step S264: if so, setting the training configuration parameters as constants to complete verification and evaluation;
step S265: otherwise, adjusting the training configuration parameters or expanding the labeled data set, and re-training.
As an example, commonly used evaluation functions include precision TP/(TP + FP), recall TP/(TP + FN), and the F1 value (the harmonic mean of precision and recall). If the evaluation function does not meet the preset evaluation value, the initial convolutional neural network model may be retrained by adjusting the model meta-parameters, re-collecting diffraction samples, re-labeling the annotation data set, or expanding the data set, according to the actual situation.
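The evaluation functions named above can be computed directly from the true-positive, false-positive, and false-negative counts of one class; the counts in this sketch are illustrative:

```python
# Sketch of the evaluation functions for one class: precision TP/(TP+FP),
# recall TP/(TP+FN), and F1, the harmonic mean of precision and recall.
# The confusion counts below are illustrative assumptions.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

precision, recall, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
```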
In an embodiment of the present application, a device for determining an epitaxial layer growth state based on a convolutional neural network is further provided, as shown in fig. 6, including an image obtaining module 10, a convolutional neural network model obtaining module 20, and an epitaxial layer growth state determining module 30. The image acquisition module 10 is used for acquiring real-time two-dimensional images of the epitaxial layer in different growth states; the convolutional neural network model obtaining module 20 is configured to obtain a training sample based on the two-dimensional image to train a pre-established initial convolutional neural network model so as to obtain a convolutional neural network model; the epitaxial layer growth state judgment module 30 is configured to obtain an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model, so as to judge the growth state of the epitaxial layer according to the probability vector.
In an embodiment of the present application, a computer device is also proposed, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
In an embodiment of the present application, a computer-readable storage medium is also proposed, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method.
For specific limitations of the device for determining the epitaxial layer growth state based on the convolutional neural network in the above embodiments, reference may be made to the above limitations of the method for determining the epitaxial layer growth state based on the convolutional neural network, which are not repeated here.
It should be understood that the steps described are not necessarily performed in the exact order recited; unless explicitly stated otherwise herein, they may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others.
It should be noted that the above-mentioned embodiments are only for illustrative purposes and are not meant to limit the present invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (11)
1. An epitaxial layer growth state judgment method based on a convolutional neural network is characterized by comprising the following steps:
acquiring real-time two-dimensional images of the epitaxial layer in different growth states;
acquiring a training sample based on the two-dimensional image, and training a pre-established initial convolutional neural network model to obtain a convolutional neural network model;
and acquiring an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model so as to judge the growth state of the epitaxial layer according to the probability vector.
2. The method for judging the growth state of the epitaxial layer based on the convolutional neural network as claimed in claim 1, wherein the step of obtaining real-time two-dimensional images of the epitaxial layer under different growth states comprises:
acquiring real-time diffraction images of the epitaxial layer in different growth states;
preprocessing the diffraction image to obtain a preprocessed diffraction image so as to obtain the training sample based on the preprocessed diffraction image;
wherein the preprocessing comprises at least one of image denoising, normalization, effective region cropping and resampling.
3. The epitaxial layer growth state judgment method based on the convolutional neural network of claim 1, wherein the initial convolutional neural network model comprises a convolutional layer, a pooling layer, a fully-connected layer and an output layer which are connected in sequence, and the convolutional layer and the pooling layer are iterative modules.
4. The method for judging the growth state of the epitaxial layer based on the convolutional neural network as claimed in any one of claims 1 to 3, wherein the training of the pre-established initial convolutional neural network model based on the two-dimensional image acquisition training samples to obtain the convolutional neural network model comprises:
acquiring an annotation data set, wherein the annotation data set comprises the growth state categories of the epitaxial layers and the two-dimensional images corresponding to the growth state categories of the epitaxial layers;
dividing the labeling data group into a training data set and a verification data set;
acquiring training configuration parameters of the initial convolutional neural network model;
inputting the training data set into the initial convolutional neural network model, and calculating a loss function of the training configuration parameters through forward propagation;
judging whether the loss function is smaller than a preset threshold value or not and whether the training iteration times are larger than or equal to the preset iteration times or not;
if yes, determining the convolutional neural network model;
otherwise, performing inverse gradient propagation calculation according to the optimizer of the training configuration parameters, and reacquiring the training configuration parameters.
5. The method for judging the epitaxial growth state based on the convolutional neural network of claim 4, wherein the inputting the training data set into the initial convolutional neural network model and the forward propagation calculating the loss function of the training configuration parameters comprises:
calculating according to the forward propagation to obtain a first classification predicted value of the two-dimensional image;
comparing the first classification predicted value of the two-dimensional image with the actual value of the two-dimensional image, and calculating the loss function according to the following formula: L(ŷ, y) = -Σ_k y_k log(ŷ_k), wherein ŷ is the first classification predicted value of the two-dimensional image and y is the actual value of the two-dimensional image.
6. The method for judging the growth state of an epitaxial layer based on a convolutional neural network as claimed in claim 4, wherein after the obtaining of the labeled data group divided into a training data set and a verification data set and before the obtaining of the training configuration parameters of the initial convolutional neural network model, the method further comprises:
and expanding the capacity of the training data set.
7. The method for judging the growth state of the epitaxial layer based on the convolutional neural network as claimed in claim 4, further comprising, after determining the convolutional neural network model:
evaluating the convolutional neural network model.
8. The method for judging the growth state of the epitaxial layer based on the convolutional neural network as claimed in claim 7, wherein the verifying and evaluating the convolutional neural network model comprises:
inputting the verification data set into the convolutional neural network model, and obtaining a second classification predicted value of the two-dimensional image through forward propagation calculation;
determining a model evaluation function and a preset evaluation value according to the second classification predicted value of the two-dimensional image and the actual value corresponding to the two-dimensional image;
judging whether the model evaluation function meets the preset evaluation value or not;
if yes, setting the training configuration parameters as constants to complete verification and evaluation;
otherwise, adjusting the training configuration parameters or expanding the labeled data set, and re-training.
9. An epitaxial layer growth state judgment device based on a convolutional neural network, comprising:
the image acquisition module is used for acquiring real-time two-dimensional images of the epitaxial layer in different growth states;
the convolutional neural network model acquisition module is used for acquiring a training sample based on the two-dimensional image to train a pre-established initial convolutional neural network model so as to acquire a convolutional neural network model;
and the epitaxial layer growth state judgment module is used for acquiring an output probability vector corresponding to the two-dimensional image according to the convolutional neural network model so as to judge the growth state of the epitaxial layer according to the probability vector.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110461440.7A CN113255729A (en) | 2021-04-27 | 2021-04-27 | Epitaxial layer growth state judgment method and device based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113255729A (en) | 2021-08-13 |
Family
ID=77221876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110461440.7A Pending CN113255729A (en) | 2021-04-27 | 2021-04-27 | Epitaxial layer growth state judgment method and device based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113255729A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334843A (en) * | 2018-02-02 | 2018-07-27 | 成都国铁电气设备有限公司 | A kind of arcing recognition methods based on improvement AlexNet |
Non-Patent Citations (4)
Title |
---|
CHAO SHEN et al.: "Machine-Learning-Assisted and Real-Time-Feedback-Controlled Growth of InAs/GaAs Quantum Dots", MDPI, pages 1-31 |
JINKWAN KWOEN et al.: "Classification of Reflection High-Energy Electron Diffraction Pattern Using Machine Learning", Crystal Growth & Design, pages 4-5 |
LI Yong: "Study on Molecular Beam Epitaxy Growth of GaAs-based InSb and InAsSb Thin Film Materials", China Master's Theses Full-text Database (Information Science and Technology), pages 135-123 |
LUO Zijiang et al.: "MBE Growth of InGaAs Thin Films with Different In Compositions under Real-Time RHEED Monitoring", Functional Materials, pages 1-5 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114107935A (en) * | 2021-11-29 | 2022-03-01 | 重庆忽米网络科技有限公司 | Automatic PVD (physical vapor deposition) coating thickness adjusting method based on AI (Artificial Intelligence) algorithm |
CN114242335A (en) * | 2021-12-31 | 2022-03-25 | 苏州新材料研究所有限公司 | Production process for kilometre-level IBAD-MgO long strip |
CN114242335B (en) * | 2021-12-31 | 2023-12-05 | 苏州新材料研究所有限公司 | Production process for kilometer-level IBAD-MgO long belt |
CN118136173A (en) * | 2024-03-06 | 2024-06-04 | 中国科学院半导体研究所 | Material growth control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||