CN119091441A - Distortion correction matching method and system for unsupervised scanning electron microscope images - Google Patents
Distortion correction matching method and system for unsupervised scanning electron microscope images
- Publication number: CN119091441A (application CN202411562064.0A)
- Authority: CN (China)
- Prior art keywords: layout, SEM image, SEM, unsupervised, matching
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V20/698 — Microscopic objects (e.g. biological cells): matching; classification
- G06N3/088 — Neural networks: non-supervised learning, e.g. competitive learning
- G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches
- G06V10/764 — Recognition using pattern recognition or machine learning: classification, e.g. of video objects
- G06V10/82 — Recognition using pattern recognition or machine learning: neural networks
Abstract
The invention discloses a distortion correction matching method and system for unsupervised scanning electron microscope (SEM) images, belonging to the field of image processing. The method comprises the following steps: obtaining an SEM image of a wafer and a design layout file, wherein the design layout file contains the region corresponding to the SEM image; converting the SEM image into a reference-layout-style image using an unsupervised SEM pattern position extraction model, thereby obtaining a false reference layout carrying the pattern position information of the SEM image; matching the false reference layout against the design layout file, and generating a matching layout from the real layout region of the design layout file that matches the SEM image; calculating a deformation map between the false reference layout and the matching layout using an optical flow method, and correcting the distortion in the SEM image with the deformation map to obtain a corrected SEM image. The corrected SEM image and the matching layout are used for hot spot detection and contour analysis of the wafer. The invention solves the field-of-view distortion problem of wafer SEM images and reduces the matching error.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a distortion correction matching method and system for an unsupervised scanning electron microscope image.
Background
Semiconductor chip processes are evolving toward ever smaller dimensions, and lithography machines are iterating alongside them. Nevertheless, the gap between the lithography light-source wavelength and the manufacturing node size can still cause manufacturing defects, also known as lithography hotspots. These hotspots may degrade circuit performance and yield, so all of them must be comprehensively detected and identified by dedicated equipment. The Die-to-Database (D2DB) technique detects defects by comparing a chip image with a design file; compared with Die-to-Die (D2D) hotspot detection, which compares a chip image with a reference image, D2DB can effectively shorten detection time and improve sensitivity. D2DB is also widely used for mask inspection, lithography proximity effect correction, process optimization, and the like. D2DB detection relies on precise alignment between the chip images and the design layout files, and electron-beam inspection systems can in theory inspect sites with the same (x, y) coordinates on every chip thanks to their high-precision alignment capability. In practice, however, the positional error of the inspected coordinates can reach up to 1 µm, which is critical for high-density, small-feature chips. Worse, the positional offset may grow over time as massive numbers of scanning electron microscope (SEM) images are collected. Therefore, further distortion correction of SEM images is required on top of the alignment provided by the mechanical positioning system.
Existing schemes mainly rely on extracting contours from SEM images and design files, achieving high-precision alignment by minimizing contour errors or the centroid-position errors of the closed shapes obtained after contour extraction. Contour-based methods, however, are affected by photoresist variation, image distortion, and local pattern changes caused by repeated measurements. Meanwhile, as nodes shrink, pattern diversity and complexity keep increasing, pattern edges become rounded, and line-edge roughness grows, making contour extraction more difficult. Although researchers have proposed using averaged contours, this greatly increases inspection time and computation and is hard to reconcile with today's demand for rapid wafer inspection. With the rapid development of deep learning, large-scale neural network models are widely used in computer vision, and many deep-learning-based methods have been proposed for the D2DB task. Examples include building a CNN model that converts paired layout and SEM images into plausibly deformed SEM images for comparison against real SEM images in D2DB inspection, or converting SEM images into layout-style images with a modified pix2pix model for matching. While these methods work well, they rely on large numbers of paired images, which are very difficult to obtain given the confidentiality of the semiconductor industry; in addition, data annotation requires expert knowledge and extensive verification work by engineers. To address this, researchers have proposed using CycleGAN for unpaired SEM image style transfer. With the help of an adversarial loss, one generator learns the mapping from the SEM image to the layout image style, so that the generated layout image looks more realistic.
Another generator maps the synthesized layout image back to the SEM domain, and a cycle-consistency loss encourages the reconstructed image to match the input image. However, because it lacks direct constraints between the synthesized image and the input image, CycleGAN can guarantee neither structural nor positional consistency, nor can it solve the distortion problem in SEM images.
Disclosure of Invention
The invention provides a distortion correction matching method and system for unsupervised scanning electron microscope images, aiming to solve the problems that D2DB on high-density chips faces inaccurate mechanical positioning and requires complex manual parameter setting, and that the field-of-view distortion of SEM images leads to low detection efficiency and large errors.
The technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a distortion correction matching method for an unsupervised scanning electron microscope image, including the steps of:
Obtaining an SEM image of a wafer and a design layout file, wherein the design layout file comprises corresponding areas in the SEM image;
Converting the SEM image into a reference-layout-style image using an unsupervised SEM pattern position extraction model to obtain a false reference layout carrying the pattern position information of the SEM image; matching the false reference layout with the design layout file, and generating a matching layout from the real layout region of the design layout file that matches the SEM image; wherein the unsupervised SEM pattern position extraction model is a dual-generator, dual-discriminator network structure based on the CycleGAN model, and an edge contrast learning loss and an HV (horizontal-vertical) flip invariance loss are introduced during the training of the unsupervised SEM pattern position extraction model;
Calculating a deformation map between the false reference layout and the matching layout using an optical flow method, and correcting the distortion in the SEM image with the deformation map to obtain a corrected SEM image;
And the corrected SEM image and the matched layout are used for hot spot detection and contour analysis of the wafer.
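As a rough illustration of the correction step, the following numpy sketch (an assumption about one possible implementation, not the patent's own code) applies a per-pixel deformation map, such as one estimated by an optical flow method, to a distorted SEM image by backward bilinear warping:

```python
import numpy as np

def warp_with_deformation_map(image, flow):
    """Backward-warp `image` with a per-pixel deformation map.

    image: (H, W) float array, e.g. the distorted SEM image.
    flow:  (H, W, 2) array; flow[y, x] = (dx, dy) points from each
           corrected pixel back to its source location in `image`.
    Returns the corrected image via bilinear interpolation.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)

    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = sx - x0, sy - y0

    # Blend the four neighbouring source pixels.
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bottom * fy
```

A zero deformation map leaves the image unchanged; in the method above, the map would come from the optical flow computed between the false reference layout and the matching layout.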
Further, the generator of the unsupervised SEM pattern position extraction model comprises a downsampling layer, 2m residual blocks, and an upsampling layer, wherein the residual blocks are connected in series between the downsampling and upsampling layers; the downsampling layer together with the first m residual blocks serves as the encoder, the upsampling layer together with the last m residual blocks serves as the decoder, and the downsampling and upsampling layers consist of the same number of convolution layers.
Further, a global attention module is inserted after the last layer of each residual block.
Further, the data set for training the unsupervised SEM pattern position extraction model comprises SEM images and pseudo-reference layouts, wherein the pseudo-reference layouts participate in the training process as weak labels of the SEM images.
Further, the edge contrast learning loss is calculated as follows:

Obtain the output of the SEM image from the k-th encoder layer of the first generator and project it into a linear space to obtain the encoded feature vector $v^x = (v^x_1, \dots, v^x_N)$, where $v^x_i$ denotes the i-th dimensional feature of the SEM image's encoded feature vector and N is the dimension of the encoded feature vector;

Obtain the output of the pseudo-reference layout from the k-th encoder layer of the second generator and project it into a linear space to obtain the encoded feature vector $v^y = (v^y_1, \dots, v^y_N)$, where $v^y_j$ denotes the j-th dimensional feature of the pseudo-reference layout's encoded feature vector;

Construct positive and negative sample pairs from $(v^x_i, v^y_j)$, where $i = j$ gives a positive pair and $i \neq j$ a negative pair, and compute the edge contrast learning loss:

$$\mathcal{L}_{edge} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp\big(\cos(\theta_{ii}+m)/\tau\big)}{\exp\big(\cos(\theta_{ii}+m)/\tau\big)+\sum_{j\neq i}\exp\big(\cos(\theta_{ij})/\tau\big)}$$

where $\theta_{ij}$ denotes the angle between $v^x_i$ and $v^y_j$, $|\cdot|$ denotes the vector norm, $\tau$ is the temperature parameter, and $m$ is the angular interval penalty factor.
Further, the edge contrast learning loss corresponding to the different k values is averaged to be used as the final edge contrast learning loss.
Further, the HV flip invariance loss is calculated as follows:

Flip the SEM image horizontally and vertically, convert the SEM images before and after flipping into false reference layouts through the first generator, flip one of the pair of false reference layouts so that their orientations agree, and compute the loss between the orientation-aligned pair of false reference layouts as the HV flip invariance loss of the SEM image;

Flip the pseudo-reference layout horizontally and vertically, convert the pseudo-reference layouts before and after flipping into false SEM images through the second generator, flip one of the pair of false SEM images so that their orientations agree, and compute the loss between the orientation-aligned pair of false SEM images as the HV flip invariance loss of the pseudo-reference layout;

Take the sum of the HV flip invariance loss of the SEM image and the HV flip invariance loss of the pseudo-reference layout as the final HV flip invariance loss.
Further, an adversarial loss, a cycle consistency loss, and an identity mapping loss are also introduced during training of the unsupervised SEM pattern position extraction model.
Further, before training the unsupervised SEM pattern position extraction model, the method also comprises a preprocessing step of denoising and contour enhancement of the images in the training data set.
In a second aspect, the present invention provides a distortion correction matching system for unsupervised SEM images, configured to implement the above distortion correction matching method for unsupervised scanning electron microscope images.
The invention has the beneficial effects that:
(1) The invention proposes an unsupervised SEM pattern position extraction model (SPPE-GAN) that introduces a new edge contrast loss, an HV flip invariance loss, and a global context attention mechanism to constrain local, global, and key-point information respectively, so that the pattern positions in an SEM image can be accurately extracted and migrated into a reference-layout-style picture.
(2) On the basis of the reference-layout-style picture produced by the SPPE-GAN model, the invention matches the generated reference layout against the design layout file with an image matching algorithm, and then uses an optical flow method to compute and correct per-pixel distortion in the SEM image, effectively handling the distortion problem in SEM images.
(3) Compared with manual matching by senior engineers, the proposed distortion correction matching method improves the contour intersection-over-union by more than 10%; under the same matching algorithm, the proposed SPPE-GAN model surpasses general fully supervised methods in this field such as pix2pix, as well as the industry's current state-of-the-art unsupervised style transfer techniques, on every metric.
Drawings
FIG. 1 is a flow chart of the distortion correction matching method for unsupervised scanning electron microscope images;
FIG. 2 is a training schematic of SPPE-GAN model;
FIG. 3 is a schematic definition of the contour IoU;
FIG. 4 is a schematic diagram of a generator architecture in the SPPE-GAN model;
FIG. 5 is a visual comparison of the results obtained by the present invention and the comparison methods.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention.
The drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The invention provides a distortion correction matching method for unsupervised scanning electron microscope images, which aims to accurately align an SEM image with a reference layout. First, an unsupervised SEM pattern position extraction model (SPPE-GAN) is introduced. Whereas the traditional CycleGAN lacks direct constraints between input and generated images, the proposed SPPE-GAN can accurately extract the pattern positions in the SEM image and migrate them into a reference-layout-style image through combined constraints on local, global, and key information. Specifically, the SPPE-GAN model constrains local information consistency by computing a contrastive learning loss between feature blocks of the input SEM image and of the generated reference layout. In addition, to transfer global position information, the model flips the input image horizontally and vertically, thereby enhancing generalization performance. A global context attention mechanism (GcNet) is also introduced into the generator so that the model focuses on the pattern regions of the input SEM images. On top of the SPPE-GAN model, an image matching algorithm finds the real layout region corresponding to the SEM image. Because SEM images exhibit non-negligible distortion, an optical flow method performs per-pixel distortion correction on the SEM image, reducing the inherent distortion; the matched layout and corrected SEM image pair are then used for subsequent hot spot detection and contour analysis.
As shown in fig. 1, the distortion correction matching method of the unsupervised scanning electron microscope image mainly comprises the following steps:
S1, collecting a data set comprising an SEM image, a design layout file and a pseudo-reference layout.
Specifically, after optical proximity correction, the GDSII file of the active area (AA) layer is made into a mask and patterned by lithography on a 55 nm wafer production line, after which SEM images are collected manually with a Review-SEM machine. The design layout file and the pseudo-reference layout are obtained by cropping the original AA-layer GDSII file. In this example, the SEM image size is 480×480×3. The pseudo-reference layout, manually aligned by engineers, is also 480×480×3 and closely matches the SEM image; since SEM distortion cannot be completely eliminated, these images serve as weak labels, which facilitates analysis of the matching and distortion correction processes. The design layout file is used to verify the matching precision between the SEM image and the pseudo-reference layout; its size is 960×960×3, so it contains the region corresponding to the SEM image.
S2, preprocessing a data set.
In this embodiment, to meet the model's input requirements, the SEM images and pseudo-reference layouts are resized to 256×256×3. A training set of 800 SEM image / pseudo-reference layout pairs is selected, with the remaining 200 pairs forming the test set. Because the SEM images and pseudo-reference layouts may contain complex noise of various forms, illumination variation, and interference that conventional contour extraction and pattern matching methods often handle poorly, this embodiment applies the BM3D algorithm to denoise and contour-enhance the training-set images; those skilled in the art may also adopt other existing denoising and contour enhancement algorithms.
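BM3D itself requires a third-party implementation; as a hedged, self-contained stand-in, the sketch below illustrates the same preprocessing idea (denoise, then enhance contours) with a separable Gaussian blur plus unsharp masking in plain numpy. The function names and parameters are illustrative assumptions, not the patent's code:

```python
import numpy as np

def _gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur with reflect padding (keeps the image size).
    k = _gaussian_kernel(sigma)
    pad = len(k) // 2
    p = np.pad(img, pad, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def preprocess(img, sigma=1.0, amount=1.0):
    """Denoise (stand-in for BM3D), then contour-enhance by unsharp masking."""
    denoised = _blur(img, sigma)
    # Unsharp mask: boost the high-frequency (edge/contour) residual.
    enhanced = denoised + amount * (denoised - _blur(denoised, sigma))
    return np.clip(enhanced, 0.0, 1.0)
```

A constant image passes through unchanged, while edges in a real SEM crop are sharpened after smoothing.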
S3, an unsupervised SEM pattern position extraction model (SPPE-GAN) is constructed and trained.
The SPPE-GAN model is characterized by its dual generator and dual discriminator networks, which facilitate the transition between the SEM image domain and the pseudo-reference layout domain. The training strategy of the SPPE-GAN model involves learning two generator networks forming a bidirectional mapping: X -> Y performs the forward conversion, turning an SEM image into a reference layout (the reference layout produced by the generator is called a false reference layout to distinguish it); Y -> X performs the reverse conversion, turning a pseudo-reference layout or false reference layout into an SEM image (likewise, the SEM image produced by the generator is called a false SEM image). Each generator network includes an encoder and a decoder. The discriminators ensure that the target-domain images generated by the generators (false reference layouts and false SEM images) retain the inherent features of their domains.
In one implementation of the present invention, as shown in FIG. 4, the generator $G_{X\to Y}$ consists of downsampling, residual blocks, and upsampling; the generator $G_{Y\to X}$ has the same structure. First, downsampling is performed by three convolution layers with kernels of 7×7, 3×3, and 3×3, progressively reducing the spatial dimensions and increasing the feature channels. Next, six residual blocks with residual connections enhance the feature learning ability. Finally, three deconvolution layers perform upsampling, gradually restoring the spatial dimensions and generating the target-domain image. The invention also inserts a global attention (GcNet) module after each residual block to enhance the aggregation of global context information.
The GcNet module operates as follows. Given an input feature map of size H×W×C, a 1×1 convolution layer first reduces the channel number from C to 1, yielding an H×W feature map, which is reshaped to HW×1×1. A softmax over this HW×1×1 map produces a normalized weight matrix, which is multiplied with the original feature map reshaped to C×HW×1 to obtain a 1×1×C global context feature. This global context feature then passes through two consecutive 1×1 convolution layers: the first reduces the channel number from C to C/r, and the second restores it from C/r back to C. Finally, the processed global context feature is added to the original H×W×C input feature map, producing an enhanced output feature map that is still H×W×C. In this way, GcNet captures long-range dependencies more effectively and improves the richness and discriminability of the feature representation.
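A minimal numpy forward pass of a GcNet-style global context block, following the steps just described (context modelling, 1×1 bottleneck transform, additive fusion). The weight shapes are illustrative assumptions, and normalization/activation details of the published GCNet design are omitted for brevity:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gcnet_block(x, w_mask, w_down, w_up):
    """GcNet-style global context block on an (H, W, C) feature map.

    w_mask: (C,)      1x1 conv producing the HxW attention map.
    w_down: (C, C_r)  1x1 conv reducing channels to C/r.
    w_up:   (C_r, C)  1x1 conv restoring channels to C.
    """
    h, w, c = x.shape
    flat = x.reshape(h * w, c)                      # HW x C
    attn = softmax(flat @ w_mask)                   # normalized weights over positions
    context = flat.T @ attn                         # 1x1xC global context feature
    bottleneck = np.maximum(context @ w_down, 0.0)  # reduce to C/r (with ReLU)
    delta = bottleneck @ w_up                       # restore to C channels
    return x + delta[None, None, :]                 # broadcast-add fusion

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 4, 8))
out = gcnet_block(feat,
                  rng.normal(size=8),
                  rng.normal(size=(8, 2)),
                  rng.normal(size=(2, 8)))
```

Note the fusion step adds the same per-channel vector at every spatial position, which is what lets the block inject global context without changing the feature-map size.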
As shown in fig. 2, the training of the proposed SPPE-GAN model fuses the adversarial loss, cycle consistency loss, identity mapping loss, edge contrast learning loss, and HV flip invariance loss. The total training objective is:

$$\mathcal{L}_{total} = \lambda_{adv}\mathcal{L}_{adv} + \lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{idt}\mathcal{L}_{idt} + \lambda_{edge}\mathcal{L}_{edge} + \lambda_{HV}\mathcal{L}_{HV}$$

where $\mathcal{L}_{total}$ is the total loss; $\mathcal{L}_{adv}$, $\mathcal{L}_{cyc}$, $\mathcal{L}_{idt}$, $\mathcal{L}_{edge}$, and $\mathcal{L}_{HV}$ are the adversarial loss, cycle consistency loss, identity mapping loss, edge contrast learning loss, and HV flip invariance loss; and $\lambda_{adv}$, $\lambda_{cyc}$, $\lambda_{idt}$, $\lambda_{edge}$, $\lambda_{HV}$ are their weight hyperparameters, set in this embodiment to 1, 2, 1, 3, and 4 respectively.
Details of each loss employed by the SPPE-GAN model are presented below.
(1) Adversarial loss

The adversarial loss applies to both mapping directions: for the generator $G_{X\to Y}$ and its discriminator $D_Y$, and for the generator $G_{Y\to X}$ and its discriminator $D_X$, the adversarial losses are expressed as:

$$\mathcal{L}_{GAN}(G_{X\to Y}, D_Y) = \mathbb{E}_{y\sim Y}[\log D_Y(y)] + \mathbb{E}_{x\sim X}[\log(1 - D_Y(G_{X\to Y}(x)))]$$
$$\mathcal{L}_{GAN}(G_{Y\to X}, D_X) = \mathbb{E}_{x\sim X}[\log D_X(x)] + \mathbb{E}_{y\sim Y}[\log(1 - D_X(G_{Y\to X}(y)))]$$

where $\mathbb{E}_{x\sim X}$ denotes the expectation over SEM images and $\mathbb{E}_{y\sim Y}$ the expectation over pseudo-reference layouts. For the generator $G_{X\to Y}$ the source domain is the SEM image domain and the target domain is the reference layout domain; for $G_{Y\to X}$ the roles are reversed. Both generators aim to produce false images indistinguishable from images in the target domain and thus minimize the loss, while the discriminators aim to distinguish source-domain from target-domain images and thus maximize it. The adversarial loss can therefore be expressed as:

$$\mathcal{L}_{adv} = \mathcal{L}_{GAN}(G_{X\to Y}, D_Y) + \mathcal{L}_{GAN}(G_{Y\to X}, D_X)$$
(2) Cycle consistency loss

This loss enforces a bidirectional mapping between unpaired images, thereby preserving the semantic information of the image during translation. The cycle consistency loss can be expressed as:

$$\mathcal{L}_{cyc} = \mathbb{E}_{x\sim X}\big[\|G_{Y\to X}(G_{X\to Y}(x)) - x\|_1\big] + \mathbb{E}_{y\sim Y}\big[\|G_{X\to Y}(G_{Y\to X}(y)) - y\|_1\big]$$

where $\|\cdot\|_1$ denotes the L1 norm.
(3) Identity mapping loss

When an image from the target domain is provided as input, the generator mapping into that domain is constrained to approximate an identity map, introducing pixel-level consistency between the input image and the generated image. The identity mapping loss can be expressed as:

$$\mathcal{L}_{idt} = \mathbb{E}_{y\sim Y}\big[\|G_{X\to Y}(y) - y\|_1\big] + \mathbb{E}_{x\sim X}\big[\|G_{Y\to X}(x) - x\|_1\big]$$
(4) Edge contrast learning penalty
In order to enhance feature extraction, the invention extracts feature vectors from the encoder layer of the generator through two projectors, in this embodiment, the projectors use a multi-layer perceptron (MLP), the feature vectors are regarded as anchor points, the feature vectors at the same spatial position are regarded as positive samples, and the rest of the feature vectors are regarded as negative samples. Contrast learning becomes a powerful tool for unsupervised representation learning by pulling the positive sample closer and pushing the negative sample away.
In order to construct positive and negative pairs for contrast learning, SEM images are passed through a generatorIn (a) encoderThe output of the nth convolutional layer of the encoder is encoded and selected by the projectorExtracting features to obtain coded feature vectorsWill generatorThe generated false reference layout is passed through a generatorIn (a) encoderCoding and likewise selecting the nth convolutional layer via a projectorExtracting features to obtain coded feature vectorsTo achieve%) The sample pair of contrast learning is constructed, i=j is a positive sample pair, i=j is a negative sample pair, and the contrast learning loss can be expressed as:
where $N$ is the number of encoded feature vectors, $\tau$ is the temperature parameter, and $n$ is the index of the encoder layer from which the feature vectors are taken; in this embodiment $n$ is taken as 6, 8, 12 and 16 respectively, the four resulting loss values are calculated and their average is used. Contrastive learning encodes domain-invariant features, but establishing an accurate correspondence between the two sets of features requires a high degree of distinguishability between the features.
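The patch-wise contrastive loss above can be sketched in NumPy as follows (shapes and the temperature value are illustrative, and `info_nce` is an assumed name):

```python
import numpy as np

def info_nce(z, z_hat, tau=0.07):
    """Patch-wise InfoNCE between two sets of feature vectors.

    z, z_hat: (N, d) arrays; row i of z_hat is the positive for row i of z,
    all other rows serve as negatives. Features are L2-normalized so that
    the dot product equals the cosine similarity.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_hat = z_hat / np.linalg.norm(z_hat, axis=1, keepdims=True)
    logits = z @ z_hat.T / tau                      # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = info_nce(z, z)                   # identical features: low loss
loss_random = info_nce(z, rng.normal(size=(8, 16)))
```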
However, the original contrastive loss tends to produce smooth transitions between different feature clusters, which may lead to smooth and therefore inaccurate correspondences; an edge contrastive loss is thus introduced. Since $z_i$ and $\hat{z}_j$ are vectors, $\theta_{i,j}$ is the angle between them. The edge contrastive loss adds an additional angular margin penalty $m$ ($m = 0.3$) to the positive samples, which enlarges the separability of the features and thereby produces a more definite and accurate correspondence, so the above formula can be rewritten as:

$$\mathcal{L}_{edge} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp\left(\cos(\theta_{i,i} + m) / \tau\right)}{\exp\left(\cos(\theta_{i,i} + m) / \tau\right) + \sum_{j\neq i}\exp\left(\cos\theta_{i,j} / \tau\right)}$$
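The angular-margin modification amounts to replacing the positive logit cos(θ_ii)/τ with cos(θ_ii + m)/τ. A NumPy sketch, assuming L2-normalized features and the embodiment's m = 0.3 (function name illustrative):

```python
import numpy as np

def edge_contrastive_loss(z, z_hat, m=0.3, tau=0.07):
    """InfoNCE with an additive angular margin m on the positive pairs.

    Penalizing the positive angle theta_ii by m forces positives to win
    by more than the margin, sharpening the learned correspondences.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_hat = z_hat / np.linalg.norm(z_hat, axis=1, keepdims=True)
    cos = np.clip(z @ z_hat.T, -1.0, 1.0)
    theta = np.arccos(cos)
    logits = cos / tau
    # Replace the diagonal (positive) logits with cos(theta + m) / tau:
    idx = np.arange(len(z))
    logits[idx, idx] = np.cos(theta[idx, idx] + m) / tau
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[idx, idx])

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
```

With the margin active, the same positive pair incurs a strictly higher loss than under the plain contrastive loss, which is exactly the intended harder objective.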
(5) HV flip invariance loss
For the mapping $G: X \rightarrow Y$, let $T_{HV}$ denote the horizontal-vertical flip transform. An image $x$ of the $X$ domain and its horizontally-vertically flipped counterpart $T_{HV}(x)$ are each converted by the generator $G$, and the L1 loss between the generator output $G(T_{HV}(x))$ and the flipped output $T_{HV}(G(x))$ is calculated. The same is done for the mapping $F: Y \rightarrow X$. The HV flip invariance loss can be expressed as:

$$\mathcal{L}_{HV}(G, F) = \mathbb{E}_{x\sim p_{\text{data}}(x)}\left[\left\|G(T_{HV}(x)) - T_{HV}(G(x))\right\|_1\right] + \mathbb{E}_{y\sim p_{\text{data}}(y)}\left[\left\|F(T_{HV}(y)) - T_{HV}(F(y))\right\|_1\right]$$
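The HV flip invariance term reduces to an equivariance check: flipping then translating should equal translating then flipping. A NumPy sketch (generator stand-ins, not the patent's networks):

```python
import numpy as np

def hv_flip_loss(x, G):
    """L1 equivariance penalty between G(T(x)) and T(G(x)),
    where T is the combined horizontal + vertical flip."""
    T = lambda img: img[::-1, ::-1]
    return np.mean(np.abs(G(T(x)) - T(G(x))))

# Any pointwise generator commutes with flips, so the penalty vanishes;
# a spatially asymmetric one (cumulative sum along rows) does not:
x = np.arange(64.0).reshape(8, 8)
loss_pointwise = hv_flip_loss(x, lambda a: a * 2.0)
loss_cumsum = hv_flip_loss(x, lambda a: np.cumsum(a, axis=0))
```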
S4, inputting the SEM image preprocessed in step S2 into the generator $G$ of the trained SPPE-GAN model to generate a false reference layout carrying the pattern position information in the SEM image, and precisely matching the generated false reference layout with the design layout file to obtain the real layout area corresponding to the SEM image, thereby obtaining the matched layout.
In one embodiment of the invention, after the SPPE-GAN model is used to generate a false reference layout carrying the pattern information in an SEM image, the SIFT algorithm is used to extract features from the generated false reference layout and the design layout file, and matching is realized in combination with the FLANN algorithm, so that the corresponding real layout area of the SEM image in the design layout file is accurately located and the matched layout is obtained.
S5, correcting the SEM image by using an optical flow method based on the results of S3 and S4, alleviating the inherent distortion of the SEM image; the matched layout and the corrected SEM image pair are then used for subsequent hot spot detection and contour analysis.
After the matched layout is obtained in step S4, because of the non-negligible distortion in the original SEM image, the matched layout and the SEM image cannot be directly used for subsequent hot spot detection and contour analysis; distortion correction of the SEM image is needed first. However, for SEM images collected from the production line, no ground-truth baseline of the distortion is available. Fortunately, the false reference layout generated by the SPPE-GAN model carries the pattern position and distortion information of the SEM image, so the distortion in the SEM image is corrected by calculating the deformation map between the generated false reference layout and the matched layout.
Specifically, the process of registering these two images is treated as an optimization problem whose goal is to find a transformation $T$ that maximizes, under some similarity criterion, the similarity between the matched layout after transformation $T$ and the false reference layout. This optimization can be solved by gradient descent and terminates when the maximum similarity or the maximum number of iterations is reached. Optical flow is a computer vision technique for estimating the motion of pixels in a sequence of images. It rests on the assumption that image brightness remains constant over a short period of time while the movement of an object changes pixel positions in the image; by analyzing the brightness variations between images, it infers the motion vector field of the pixels, i.e. the direction and speed of movement of each pixel at different points in time.
In one implementation of the invention, the optical flow from the false reference layout to the matched layout is computed with the Farneback dense optical flow method to obtain the deformation map, and the deformation map is applied to the original SEM image, thereby realizing distortion correction of the SEM image.
To quantitatively evaluate SPPE-GAN performance, the invention uses three metrics: FID, area IOU and contour IOU. In evaluating the quality of the generated pseudo-reference layout, the FID metric is used to evaluate the difference between the generated false reference layout and the engineer-matched pseudo-reference layout; the area IOU metric is used to evaluate the difference between the matched layout and the pseudo-reference layout; and the contour IOU metric is used to evaluate the alignment between the matched layout and the rectified SEM image, as well as the alignment between the pseudo-reference layout and the rectified SEM image.
FID: this index calculates the distance between two multivariate Gaussians, whose means and covariances are extracted from the translated data and the real data by an Inception network; its behavior is consistent with human judgment, and a more faithful translation yields a lower FID value. Therefore, the FID between the generated false reference layout and the engineer-matched pseudo-reference layout is calculated; because the image matching and distortion correction processes are both based on the generated reference layout, the FID measurement not only reflects the success of the image translation but also underpins the realism and reliability of the experimental results.
Area IOU: an indicator used to measure the degree of overlap between two regions or images, defined as the ratio of the intersection area to the union area.
Contour IOU (figure 3): edge enhancement is performed on the matched layout or the pseudo-reference layout, the contour of the binarized SEM image is obtained, and the intersection-over-union between the contour $C_{layout}$ of the matched layout or pseudo-reference layout and the contour $C_{SEM}$ of the SEM image is calculated, i.e. $IOU = \left|C_{layout} \cap C_{SEM}\right| / \left|C_{layout} \cup C_{SEM}\right|$. The larger the contour IOU, the higher the degree of matching between the two graphs. Meanwhile, to ensure the practical significance of the contour IOU index, this embodiment averages the results whose FID scores lie in the stable interval, namely training epochs 175 to 200.
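The two overlap metrics can be sketched in NumPy; the erosion-based contour extraction below is a stand-in for the edge-enhancement step, which the patent does not specify:

```python
import numpy as np

def area_iou(a, b):
    """IOU of two binary masks: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def contour_iou(a, b):
    """IOU of one-pixel-wide contours of two binary masks."""
    def edges(m):
        # A pixel is interior if it and its 4 neighbors are all set;
        # the contour is the mask minus its interior.
        m = m.astype(bool)
        interior = np.zeros_like(m)
        interior[1:-1, 1:-1] = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                                & m[1:-1, :-2] & m[1:-1, 2:])
        return m & ~interior
    return area_iou(edges(a), edges(b))

# Two 6x6 squares offset by (2, 2): intersection 16, union 56.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[4:10, 4:10] = True
```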
Table 1 experimental results
As shown in Table 1, the experiment compares SPPE-GAN with the Pix2Pix and TSM models commonly used in the field. SPPE-GAN achieves the lowest FID, indicating better translation performance. The lower area IOU of the fully supervised models may be because their pixel-level loss computation lacks special attention to pattern corner edges. Because the fully supervised models are trained with pseudo-reference layouts matched to the SEM images by experienced engineers, their contour IOU indices all exceed 40%, showing good performance; however, the contour IOU of the matched layout obtained with the SIFT and FLANN algorithms is lower. This may be due to insufficient quality of the generated false reference layout, resulting in large scaling and positional misalignment of the matched layout.
In addition, the model is compared with advanced unsupervised style transfer models in the current industry. It should be noted that CUT is a one-sided GAN model that omits the dual structure of CycleGAN, and that DistanceGAN is used in the best-performing version with CycleGAN as the backbone network. In fig. 5, the four rows of data compare the present invention with Pix2Pix, TSM, CycleGAN, CUT, DistanceGAN and DCLGAN: the first row shows the generated false reference layouts, the second row shows the SEM image and the false reference layout overlaid on each other at 50% transparency, and the third and fourth rows are enlarged views of the two framed areas in the second row. The model of the invention also achieves the highest score on the area IOU. When the contour IOU is calculated with the pseudo-reference layout and with the matched layout respectively, the model of the invention again obtains the best results. Furthermore, when the engineer-matched pseudo-reference layout dataset is used, the contour IOU between the pseudo-reference layout and the corrected SEM images generated by CycleGAN, CUT and DistanceGAN is lower than that of the SEM images before correction. This may be because these models fail to extract sufficient position information from the patterns in the SEM image, thereby degrading the result.
This embodiment also provides a distortion correction matching system for unsupervised scanning electron microscope images, which is used to implement the method of the above-described embodiments. The terms "module", "unit" and the like as used below may be a combination of software and/or hardware that performs a predetermined function. Although the system described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible.
The distortion correction matching system for an unsupervised scanning electron microscope image provided in this embodiment includes:
The wafer data acquisition module is used for acquiring an SEM image and a design layout file of the wafer, wherein the design layout file contains the area corresponding to the SEM image;
The reference layout matching module is used for converting an SEM image into a reference-layout-style image by using an unsupervised SEM pattern position extraction model, so as to obtain a false reference layout carrying the pattern position information in the SEM image, matching the false reference layout with the design layout file, and generating a matched layout from the real layout area in the design layout file that matches the SEM image, wherein the unsupervised SEM pattern position extraction model is a dual-generator dual-discriminator network structure based on the CycleGAN model, and edge contrastive learning loss and HV flip invariance loss are introduced in its training process;
the SEM image correction module is used for calculating a deformation graph between the false reference layout and the matching layout by adopting an optical flow method, correcting distortion in an SEM image by utilizing the deformation graph, and obtaining a corrected SEM image;
And the application module is used for realizing hot spot detection and contour analysis of the wafer by utilizing the corrected SEM image and the matched layout.
For the system embodiment, since it basically corresponds to the method embodiment, the relevant parts can be referred to in the description of the method embodiment, and the implementation of the remaining modules is not repeated here. The system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art can understand and implement the invention without undue burden.
Embodiments of the system of the present invention may be applied to any device having data processing capabilities, such as a computer. The system embodiment may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the system in a logical sense is formed by the processor of the device reading the corresponding computer program instructions from non-volatile memory into memory and running them.
It is obvious that the above-described embodiments and the drawings are only examples of the present application, and a person skilled in the art can apply the present application to other similar situations based on these drawings without inventive work. In addition, although the development effort might be complex and lengthy, for those of ordinary skill having the benefit of this disclosure it would nevertheless be a routine undertaking of design, fabrication, or manufacture, and should not be construed as a departure from the disclosure. Several variations and modifications may be made without departing from the spirit of the application, and these fall within the scope of the application. Accordingly, the scope of protection should be assessed as that of the appended claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411562064.0A CN119091441A (en) | 2024-11-05 | 2024-11-05 | Distortion correction matching method and system for unsupervised scanning electron microscope images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119091441A true CN119091441A (en) | 2024-12-06 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5790417A (en) * | 1996-09-25 | 1998-08-04 | Taiwan Semiconductor Manufacturing Company Ltd. | Method of automatic dummy layout generation |
CN106033171A (en) * | 2015-03-11 | 2016-10-19 | 中芯国际集成电路制造(上海)有限公司 | A failure analysis method for a bad point on a wafer |
CN115760592A (en) * | 2022-10-16 | 2023-03-07 | 哈尔滨工程大学 | A network video restoration method for color distortion on social network platforms |
WO2023083559A1 (en) * | 2021-11-12 | 2023-05-19 | Asml Netherlands B.V. | Method and system of image analysis and critical dimension matching for charged-particle inspection apparatus |
WO2023142384A1 (en) * | 2022-01-25 | 2023-08-03 | 深圳晶源信息技术有限公司 | Design layout defect repair method, storage medium and device |
CN118096799A (en) * | 2024-04-29 | 2024-05-28 | 浙江大学 | Hybrid weakly-supervised wafer SEM defect segmentation method and system |
CN118314086A (en) * | 2024-03-22 | 2024-07-09 | 浙江大学 | A method and system for automatically matching wafer reference layout and SEM image |
Non-Patent Citations (2)
Title |
---|
MILLER, K.K., WANG, P. & GRILLET, N.: "SUB-immunogold-SEM reveals nanoscale distribution of submembranous epitopes", Nat Commun 15, 10 September 2024 (2024-09-10)
XU, WEI: "Research on SEM three-dimensional measurement technology based on image correction and its application", Soochow University, 16 March 2020 (2020-03-16)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||