
CN119091441A - Distortion correction matching method and system for unsupervised scanning electron microscope images - Google Patents


Info

Publication number
CN119091441A
CN119091441A (application CN202411562064.0A)
Authority
CN
China
Prior art keywords
layout
sem image
sem
unsupervised
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411562064.0A
Other languages
Chinese (zh)
Inventor
陈一宁
汪玉萍
高大为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Chuangxin Integrated Circuit Co ltd
Zhejiang University ZJU
Original Assignee
Zhejiang Chuangxin Integrated Circuit Co ltd
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Chuangxin Integrated Circuit Co ltd, Zhejiang University ZJU filed Critical Zhejiang Chuangxin Integrated Circuit Co ltd
Priority to CN202411562064.0A priority Critical patent/CN119091441A/en
Publication of CN119091441A publication Critical patent/CN119091441A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a distortion correction matching method and system for unsupervised scanning electron microscope (SEM) images, belonging to the field of image processing. The method comprises: obtaining an SEM image of a wafer and a design layout file, wherein the design layout file contains the region corresponding to the SEM image; converting the SEM image into a reference-layout-style image using an unsupervised SEM pattern position extraction model to obtain a false reference layout carrying the pattern position information of the SEM image; matching the false reference layout with the design layout file, and generating a matching layout from the real layout region of the design layout file that matches the SEM image; calculating a deformation map between the false reference layout and the matching layout by an optical flow method; and correcting the distortion in the SEM image with the deformation map to obtain a corrected SEM image. The corrected SEM image and the matching layout are used for hot spot detection and contour analysis of the wafer. The invention solves the field-of-view distortion problem of wafer SEM images and reduces matching error.

Description

Distortion correction matching method and system for unsupervised scanning electron microscope image
Technical Field
The invention belongs to the field of image processing, and particularly relates to a distortion correction matching method and system for an unsupervised scanning electron microscope image.
Background
Semiconductor chip processes are evolving toward smaller dimensions, and lithography machines are iterating accordingly. Nevertheless, the gap between the lithography light source wavelength and the manufacturing node size can still cause manufacturing defects, also known as lithography hotspots. These hotspots may degrade circuit performance and yield, so all of them must be comprehensively detected and identified by dedicated equipment. The Die-to-Database (D2DB) technique compares a chip image with a design file to detect defects; compared with Die-to-Die (D2D) hotspot detection, which compares a chip image with a reference image, D2DB can effectively shorten detection time and improve sensitivity. D2DB is also widely used in mask inspection, optical proximity correction, and process optimization. D2DB detection relies on precise alignment between chip images and design layout files; in theory, an electron beam inspection system can inspect the site with the same (x, y) coordinates on each chip by virtue of its high-precision positioning capability. In practice, however, the inspected coordinates can deviate by as much as 1 µm, which is critical for high-density, small-dimension chips. Worse, the positional offset may accumulate over time as massive numbers of scanning electron microscope (SEM) images are collected. Therefore, further distortion correction of SEM images is required on top of the alignment provided by the mechanical positioning system.
Existing schemes mainly rely on extracting contours from SEM images and design files and achieve high-precision alignment by minimizing contour errors or the centroid-position errors of the closed shapes obtained after contour extraction. However, contour-based methods are affected by photoresist variation, image distortion, and local pattern variation caused by repeated measurements. Meanwhile, as nodes shrink, pattern diversity and complexity keep increasing, pattern edges become rounded, and line edge roughness grows, making contour extraction more difficult. Although some researchers have proposed using an averaged contour, this greatly increases inspection time and computation and can hardly meet current requirements for rapid wafer inspection. With the rapid development of deep learning, large-scale neural network models are widely used in computer vision, and many deep-learning-based methods have been proposed for the D2DB task: for example, building a CNN model that converts paired layout and SEM images into acceptable deformed SEM images for comparison with real SEM images in D2DB inspection, or converting SEM images into layout-style images with a modified pix2pix model for matching. While these methods work well, they rely on large numbers of paired images, and such data are very difficult to obtain due to the confidentiality of the semiconductor industry. In addition, data annotation requires expertise and extensive verification work by engineers. To address this, researchers have proposed CycleGAN for unpaired SEM image style migration: one generator learns, with the help of an adversarial loss, the mapping from the SEM image to the layout image style, so that the generated layout image looks more realistic.
Another generator maps the synthesized layout image back to the SEM domain, and a cycle-consistency loss encourages the reconstructed image to match the input image. However, because there is no direct constraint between the synthesized image and the input image, CycleGAN can neither guarantee their structural and positional consistency nor solve the distortion problem in SEM images.
Disclosure of Invention
The invention provides a distortion correction matching method and system for unsupervised scanning electron microscope images, aiming to solve the problems that D2DB on high-density chips faces inaccurate mechanical positioning, requires complex manual parameter setting, and suffers low detection efficiency and large errors caused by field-of-view distortion of SEM images.
The technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a distortion correction matching method for an unsupervised scanning electron microscope image, including the steps of:
Obtaining an SEM image of a wafer and a design layout file, wherein the design layout file comprises corresponding areas in the SEM image;
Converting the SEM image into a reference-layout-style image by using an unsupervised SEM pattern position extraction model to obtain a false reference layout carrying the pattern position information in the SEM image; matching the false reference layout with the design layout file, and generating a matching layout from the real layout region of the design layout file that matches the SEM image; wherein the unsupervised SEM pattern position extraction model is a dual-generator, dual-discriminator network structure based on the CycleGAN model, and an edge contrast learning loss and an HV flip invariance loss are introduced during its training;
Calculating a deformation map between the false reference layout and the matching layout by an optical flow method, and correcting the distortion in the SEM image with the deformation map to obtain a corrected SEM image;
And the corrected SEM image and the matched layout are used for hot spot detection and contour analysis of the wafer.
Further, the generator of the unsupervised SEM pattern position extraction model comprises a downsampling layer, 2m residual blocks and an upsampling layer, wherein the residual blocks are connected in series between the downsampling layer and the upsampling layer, the downsampling layer and the first m residual blocks are used as encoders, the upsampling layer and the last m residual blocks are used as decoders, and the downsampling layer and the upsampling layer are composed of the same number of convolution layers.
Further, a global attention module is inserted after the last layer of each residual block.
Further, the data set for training the unsupervised SEM pattern position extraction model comprises SEM images and pseudo-reference layouts, wherein the pseudo-reference layout participates in the training process as a weak label of the SEM image.
Further, the calculation process of the edge contrast learning loss is as follows:
obtaining the output of the SEM image through the k-th encoder layer of the first generator and projecting it into a linear space to obtain the encoded feature vector $v = (v_1, v_2, \dots, v_N)$, where $v_i$ denotes the $i$-th dimensional feature of the SEM image's encoded feature vector and $N$ is the dimension of the encoded feature vector;

obtaining the output of the pseudo-reference layout through the k-th encoder layer of the second generator and projecting it into a linear space to obtain the encoded feature vector $\hat{v} = (\hat{v}_1, \hat{v}_2, \dots, \hat{v}_N)$, where $\hat{v}_i$ denotes the $i$-th dimensional feature of the pseudo-reference layout's encoded feature vector;

constructing positive and negative sample pairs $(v_i, \hat{v}_j)$, where $i = j$ gives a positive pair and $i \neq j$ gives a negative pair, and calculating the edge contrast learning loss:

$$L_{edge} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp\big(\cos(\theta_{i,i} + m)/\tau\big)}{\exp\big(\cos(\theta_{i,i} + m)/\tau\big) + \sum_{j \neq i} \exp\big(\cos(\theta_{i,j})/\tau\big)}$$

where $\theta_{i,j}$ denotes the angle between $v_i$ and $\hat{v}_j$, $\|\cdot\|$ denotes taking the norm, $\tau$ is the temperature parameter, and $m$ denotes the angular separation penalty factor.
Further, the edge contrast learning loss corresponding to the different k values is averaged to be used as the final edge contrast learning loss.
Further, the calculation process of the HV flip invariance loss is as follows:

performing a horizontal-vertical flip on the SEM image, converting the SEM images before and after flipping into false reference layouts through the first generator, flipping one of the pair of false reference layouts so that their orientations are consistent, and calculating the loss of the orientation-consistent pair as the HV flip invariance loss of the SEM image;

performing a horizontal-vertical flip on the pseudo-reference layout, converting the pseudo-reference layouts before and after flipping into pseudo-SEM images through the second generator, flipping one of the pair of pseudo-SEM images so that their orientations are consistent, and calculating the loss of the orientation-consistent pair as the HV flip invariance loss of the pseudo-reference layout;

and taking the sum of the HV flip invariance loss of the SEM image and the HV flip invariance loss of the pseudo-reference layout as the final HV flip invariance loss.
Further, an adversarial loss, a cycle consistency loss, and an identity mapping loss are also introduced during training of the unsupervised SEM pattern position extraction model.
Further, the method also comprises a preprocessing step of denoising and contour enhancement on the images in the data set for training before training the unsupervised SEM pattern position extraction model.
In a second aspect, the present invention provides a distortion correction matching system for unsupervised SEM images, configured to implement the above distortion correction matching method for unsupervised SEM images.
The invention has the beneficial effects that:
(1) The invention provides an unsupervised SEM pattern position extraction model (SPPE-GAN), which introduces new edge contrast loss, HV flip invariance loss and global context attention mechanism to respectively realize the constraint on comprehensive local, global and key point information, and can accurately extract the pattern position in an SEM image and migrate the pattern position into a reference layout style picture.
(2) According to the invention, on the basis of the reference-layout-style image produced by the SPPE-GAN model, an image matching algorithm matches the generated reference layout with the design layout file, and an optical flow method is then combined to calculate and correct the per-pixel distortion in the SEM image, so that the distortion problem in the SEM image can be handled effectively.
(3) Compared with the empirical matching of senior engineers, the distortion correction matching method provided by the invention achieves an improvement of more than 10% in contour intersection-over-union (IoU); under the same matching algorithm, the proposed SPPE-GAN model outperforms general fully supervised methods in the field, such as pix2pix, as well as the current advanced unsupervised style transfer techniques in industry, on every metric.
Drawings
FIG. 1 is a flow chart of the distortion correction matching method for an unsupervised scanning electron microscope image;
FIG. 2 is a training schematic of SPPE-GAN model;
FIG. 3 is a schematic definition of an outline IOU;
FIG. 4 is a schematic diagram of a generator architecture in the SPPE-GAN model;
FIG. 5 is a visual comparison of the results obtained by the present invention and the comparative method.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention.
The drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The invention provides a distortion correction matching method for unsupervised scanning electron microscope images, which aims to accurately align an SEM image with a reference layout. First, an unsupervised SEM pattern position extraction model (SPPE-GAN) is introduced. Because the traditional CycleGAN lacks direct constraints between the input and generated images, the proposed SPPE-GAN accurately extracts the pattern positions in the SEM image and migrates them into a reference-layout-style image through comprehensive constraints on local, global, and key information. Specifically, the SPPE-GAN model constrains local information consistency by computing a contrastive learning loss between feature blocks of the input SEM image and of the generated reference layout. In addition, to transfer global position information, the model horizontally and vertically flips the input image, enhancing generalization performance, while a global context attention mechanism (GCNet) is introduced in the generator so that the model focuses on the pattern regions of the input SEM image. On the basis of the SPPE-GAN model, an image matching algorithm finds the real layout region corresponding to the SEM image. Because the SEM image carries non-negligible distortion, an optical flow method performs per-pixel distortion correction on the SEM image, reducing the inherent distortion; the matching layout and the corrected SEM image pair are then used for subsequent hot spot detection and contour analysis.
As shown in fig. 1, the distortion correction matching method of the unsupervised scanning electron microscope image mainly comprises the following steps:
S1, collecting a data set comprising an SEM image, a design layout file and a pseudo-reference layout.
Specifically, after optical proximity correction, the GDSII file of the Active Area (AA) layer is made into a mask and exposed on a 55 nm wafer production line; SEM images are then manually collected with a review SEM tool. The design layout file and the pseudo-reference layout are obtained by clipping the original AA-layer GDSII file. In this example, the SEM image size is 480×480×3. The pseudo-reference layout, manually aligned by engineers, is also 480×480×3 and closely matches the SEM image; however, SEM distortion cannot be completely eliminated, so these images serve as weak labels that facilitate analysis of the matching and distortion correction processes. The design layout file, used to verify the matching accuracy of the SEM image and the pseudo-reference layout, has a size of 960×960×3 so that it contains the region corresponding to the SEM image.
S2, preprocessing a data set.
In this embodiment, to meet the input requirements of the model, the SEM images and pseudo-reference layouts are resized to 256×256×3. A training set of 800 SEM image/pseudo-reference layout pairs is selected, and the remaining 200 pairs form the test set. Because the SEM images and pseudo-reference layouts may contain complex noise, illumination variation, and interference in various forms that conventional contour extraction and pattern matching methods often handle poorly, this embodiment applies the BM3D algorithm to denoise and enhance the contours of the training-set images; those skilled in the art may also adopt other existing denoising and contour enhancement algorithms.
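The denoising half of this preprocessing step can be illustrated with a lightweight stand-in: the embodiment uses BM3D, but a plain 3×3 median filter (not part of the patent, shown only to make the impulse-noise-removal idea concrete) fits in a few lines of NumPy:

```python
import numpy as np

def median_denoise_3x3(img):
    """3x3 median filter -- a simple stand-in for the BM3D denoising
    used in preprocessing. Borders are handled by reflection padding."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    # Stack the nine shifted views of the padded image, then take the
    # per-pixel median across the neighborhood axis.
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

A real pipeline would follow this with a contour-enhancement pass (e.g. unsharp masking) before feeding the images to the model.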
S3, an unsupervised SEM pattern position extraction model (SPPE-GAN) is constructed and trained.
The SPPE-GAN model features dual generator and discriminator networks, aimed at facilitating the transition between the SEM image domain and the pseudo-reference layout domain. The training strategy involves learning two bidirectionally mapped generator networks: G: X → Y performs the forward conversion, converting an SEM image into a reference layout; to distinguish it, the layout produced by the generator is called the false reference layout. F: Y → X performs the reverse adaptation, converting the pseudo-reference layout or the false reference layout back into an SEM image; similarly, the SEM image produced by the generator is called the pseudo-SEM image. Each generator network includes an encoder and a decoder. The discriminators ensure that the target-domain images (false reference layouts and pseudo-SEM images) generated by the generators retain the inherent features of their domains.
In one implementation of the present invention, as shown in FIG. 4, each generator comprises downsampling layers, residual blocks, and upsampling layers, and the two generators share the same structure. First, downsampling is performed by three convolution layers with kernels of 7×7, 3×3, and 3×3, progressively reducing the spatial dimensions and increasing the number of feature channels. Next, six residual blocks with residual connections enhance the feature-learning capability. Finally, upsampling through three deconvolution layers progressively restores the spatial dimensions and generates the target-domain image. The invention also inserts a global context attention (GCNet) module after each residual block to enhance the aggregation of global context information.
The GCNet module operates as follows. Given an input feature map of size H×W×C, a 1×1 convolution layer reduces the number of channels from C to 1, giving an H×W feature map, which is reshaped to HW×1×1. A softmax over this HW×1×1 map produces a normalized weight matrix, which is multiplied with the original feature map reshaped to C×HW×1 to obtain a global context feature of size 1×1×C. This global context feature then passes through two consecutive 1×1 convolution layers: the first reduces the number of channels from C to C/r, and the second restores them from C/r back to C. Finally, the processed global context feature is added to the original H×W×C input feature map to form the enhanced output feature map, still of size H×W×C. In this way, GCNet captures long-range dependencies more effectively and improves the richness and discriminability of the feature representation.
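The steps above can be sketched in plain NumPy. This is a minimal illustration, not the patent's implementation: biases and the layer normalization used inside the original GCNet bottleneck are omitted, and the weight shapes are assumptions standing in for the 1×1 convolutions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_block(x, w_k, w1, w2):
    """Simplified GCNet-style global context block.

    x   : (H, W, C) input feature map
    w_k : (C,)      weights of the 1x1 conv producing the HW attention logits
    w1  : (C, C//r) first 1x1 conv (channel reduction)
    w2  : (C//r, C) second 1x1 conv (channel restoration)
    """
    H, W, C = x.shape
    flat = x.reshape(H * W, C)                      # HW x C
    attn = softmax(flat @ w_k, axis=0)              # HW normalized weights
    context = attn @ flat                           # C-dim global context vector
    transformed = np.maximum(context @ w1, 0) @ w2  # bottleneck transform (ReLU)
    return x + transformed.reshape(1, 1, C)         # broadcast add, still H x W x C
```

Because the context vector is shared across all positions, the block adds a global signal at O(HW·C) cost rather than the O((HW)²) of full self-attention, which is the design motivation cited for GCNet-style modules.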
As shown in FIG. 2, the training of the SPPE-GAN model proposed by the invention fuses the adversarial loss, the cycle consistency loss, the identity mapping loss, the edge contrast learning loss, and the HV flip invariance loss. The total training objective is:

$$L_{total} = \lambda_1 L_{adv} + \lambda_2 L_{cyc} + \lambda_3 L_{idt} + \lambda_4 L_{edge} + \lambda_5 L_{HV}$$

where $L_{total}$ is the total loss; $L_{adv}$, $L_{cyc}$, $L_{idt}$, $L_{edge}$, and $L_{HV}$ are the adversarial loss, the cycle consistency loss, the identity mapping loss, the edge contrast learning loss, and the HV flip invariance loss, respectively; and $\lambda_1, \dots, \lambda_5$ are their weight hyperparameters, set to 1, 2, 1, 3, and 4, respectively, in this embodiment.
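Combining the five terms under the embodiment's weights is a plain weighted sum; the sketch below (function and argument names are illustrative, not from the patent) mirrors the stated settings λ = 1, 2, 1, 3, 4:

```python
def total_loss(l_adv, l_cyc, l_idt, l_edge, l_hv,
               w=(1.0, 2.0, 1.0, 3.0, 4.0)):
    """Weighted sum of the five SPPE-GAN training losses; default weights
    follow the embodiment's lambda settings (1, 2, 1, 3, 4)."""
    return (w[0] * l_adv + w[1] * l_cyc + w[2] * l_idt
            + w[3] * l_edge + w[4] * l_hv)
```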
Details of each loss employed by the SPPE-GAN model are presented below.
(1) Adversarial loss

The adversarial loss applies to both mapping directions. For generator G: X → Y with discriminator $D_Y$, and generator F: Y → X with discriminator $D_X$, the adversarial losses are expressed as:

$$L_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim Y}[\log D_Y(y)] + \mathbb{E}_{x \sim X}[\log(1 - D_Y(G(x)))]$$

$$L_{GAN}(F, D_X, Y, X) = \mathbb{E}_{x \sim X}[\log D_X(x)] + \mathbb{E}_{y \sim Y}[\log(1 - D_X(F(y)))]$$

where $\mathbb{E}_{x \sim X}$ denotes the expectation over SEM images and $\mathbb{E}_{y \sim Y}$ the expectation over pseudo-reference layouts. For generator G the source domain is the SEM image domain and the target domain is the layout domain; for generator F the converse holds. Both generators aim to produce fake images indistinguishable from images in the target domain, while the discriminators aim to distinguish source-domain from target-domain images; the generators minimize this loss and the discriminators maximize it, so the adversarial loss can be expressed as:

$$L_{adv} = L_{GAN}(G, D_Y, X, Y) + L_{GAN}(F, D_X, Y, X)$$
(2) Cycle consistency loss
The cycle consistency loss enforces the bidirectional mapping between unpaired images, thereby preserving image semantic information during translation. It can be expressed as:

$$L_{cyc} = \mathbb{E}_{x \sim X}[\| F(G(x)) - x \|_1] + \mathbb{E}_{y \sim Y}[\| G(F(y)) - y \|_1]$$

where $\|\cdot\|_1$ denotes the L1 norm.
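The cycle term can be illustrated with a tiny NumPy sketch in which `G` and `F` are arbitrary callables standing in for the two generators (a toy, not the patent's training code):

```python
import numpy as np

def cycle_consistency_loss(G, F, x_batch, y_batch):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should return to x, and
    y -> F(y) -> G(F(y)) should return to y.

    G, F             : callables mapping an array to an array
    x_batch, y_batch : lists of images from the two domains
    """
    lx = np.mean([np.abs(F(G(x)) - x).mean() for x in x_batch])
    ly = np.mean([np.abs(G(F(y)) - y).mean() for y in y_batch])
    return lx + ly
```

When `F` exactly inverts `G`, the loss is zero; any failure of the round trip contributes a per-pixel L1 penalty.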
(3) Identity mapping loss
When an image from the target domain is provided as input, the generator is constrained to approximate an identity mapping, introducing pixel-level consistency between the input image and the generated image. The identity mapping loss can be expressed as:

$$L_{idt} = \mathbb{E}_{y \sim Y}[\| G(y) - y \|_1] + \mathbb{E}_{x \sim X}[\| F(x) - x \|_1]$$
(4) Edge contrast learning penalty
To enhance feature extraction, the invention extracts feature vectors from the encoder layers of the generators through two projectors; in this embodiment the projectors are multi-layer perceptrons (MLPs). A feature vector is regarded as an anchor, the feature vector at the same spatial position is regarded as the positive sample, and the remaining feature vectors are regarded as negative samples. By pulling positive samples closer and pushing negative samples apart, contrastive learning has become a powerful tool for unsupervised representation learning.
To construct positive and negative pairs for contrastive learning, the SEM image is encoded by the encoder of generator G, the output of the n-th convolution layer of the encoder is selected, and features are extracted by the first projector to obtain the encoded feature vector $v$. The false reference layout generated by G is encoded by the encoder of generator F; likewise, the output of the n-th convolution layer is selected and features are extracted by the second projector to obtain the encoded feature vector $\hat{v}$. Sample pairs $(v_i, \hat{v}_j)$ are constructed for contrastive learning, with $i = j$ giving a positive pair and $i \neq j$ giving a negative pair, and the contrastive learning loss can be expressed as:

$$L_{CL} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp(v_i \cdot \hat{v}_i / \tau)}{\sum_{j=1}^{N} \exp(v_i \cdot \hat{v}_j / \tau)}$$

where $N$ is the dimension of the encoded feature vector, $\tau$ is the temperature parameter, and $n$ is the index of the encoder layer from which the feature vector is taken; in this embodiment $n$ is taken as 6, 8, 12, and 16, the four resulting loss values are computed, and their average is taken. Contrastive learning encodes domain-invariant features, so establishing an accurate correspondence between the two sets of features requires a high degree of distinguishability between them.
However, since the original contrastive loss tends to produce smooth transitions between different feature clusters, it may lead to smooth and inaccurate correspondences, so the edge contrast loss is introduced. Because $v_i$ and $\hat{v}_j$ are vectors, $\theta_{i,j}$ is the angle between them. The edge contrast loss adds an additional angular-interval penalty $m$ ($m = 0.3$) on the positive sample, expanding the separability of the features and yielding a more definite and accurate correspondence, so the above formula can be rewritten as:

$$L_{edge} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp\big(\cos(\theta_{i,i} + m)/\tau\big)}{\exp\big(\cos(\theta_{i,i} + m)/\tau\big) + \sum_{j \neq i} \exp\big(\cos(\theta_{i,j})/\tau\big)}$$
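The edge contrast loss described here — cosine logits with an additive angular margin applied only to the positive pair — can be sketched in NumPy as follows. The feature layout (one row per paired position) and the parameter defaults are assumptions for illustration; the patent's version operates on projected encoder features inside the training loop.

```python
import numpy as np

def edge_contrast_loss(v, v_hat, m=0.3, tau=0.07):
    """Angular-margin contrastive loss between two feature sets.

    v, v_hat : (N, D) features from the two encoders; row i of v and row i
               of v_hat form the positive pair, all other rows are negatives.
    m        : additive angular margin on the positive pair (harder positives)
    tau      : temperature
    """
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    v_hat = v_hat / np.linalg.norm(v_hat, axis=1, keepdims=True)
    cos = np.clip(v @ v_hat.T, -1.0, 1.0)       # pairwise cosine similarities
    theta = np.arccos(cos)                       # pairwise angles
    logits = cos / tau
    idx = np.arange(len(v))
    # Penalize the positive pair by pushing its angle out by m before cos.
    logits[idx, idx] = np.cos(theta[idx, idx] + m) / tau
    # Cross-entropy with the diagonal (matched pair) as the target class.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, idx].mean()
```

Because the margin shrinks the positive logit, perfectly aligned features no longer achieve zero loss, which forces a wider angular gap between positives and negatives than plain InfoNCE would.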
(5) HV rollover invariance loss
For the mapping G: X → Y, let $T(\cdot)$ denote the horizontal-vertical flip transform. An image $x$ of the X domain and its horizontally-vertically flipped version $T(x)$ are each converted by generator G, and the L1 loss between the generator output $T(G(x))$ and $G(T(x))$ is computed; the same is done for the mapping F: Y → X. The HV flip invariance loss can be expressed as:

$$L_{HV} = \mathbb{E}_{x \sim X}[\| T(G(x)) - G(T(x)) \|_1] + \mathbb{E}_{y \sim Y}[\| T(F(y)) - F(T(y)) \|_1]$$
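A toy NumPy sketch of this flip-equivariance penalty: translate the original and the flipped input, flip one output back to a consistent orientation, and take the L1 difference. The `generator` callable is a stand-in for G or F; this is an illustration of the loss term, not the patent's implementation.

```python
import numpy as np

def hv_flip(img):
    """Horizontal-vertical flip: reverse both spatial axes."""
    return img[::-1, ::-1]

def hv_flip_invariance_loss(generator, batch):
    """Mean L1 discrepancy between 'translate then flip' and
    'flip then translate' over a batch of (H, W) or (H, W, C) arrays."""
    total = 0.0
    for x in batch:
        y = generator(x)               # translate the original image
        y_f = generator(hv_flip(x))    # translate the flipped image
        # Flip one output back so the pair shares the same orientation.
        total += np.abs(hv_flip(y_f) - y).mean()
    return total / len(batch)
```

A generator that treats a pattern and its flipped copy consistently scores zero; any position-dependent bias in the translation shows up directly in the loss.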
S4, inputting the SEM image preprocessed in step S2 into the generator G of the trained SPPE-GAN model to generate a false reference layout carrying the pattern position information in the SEM image, and precisely matching the generated false reference layout with the design layout file to obtain the real layout region corresponding to the SEM image, thereby obtaining the matching layout.
In one embodiment of the invention, after the SPPE-GAN model generates a false reference layout carrying the pattern information in the SEM image, SIFT features are extracted from the generated false reference layout and the design layout file, and matching is realized with the FLANN algorithm, accurately locating the real layout region of the SEM image within the design layout file to obtain the matching layout.
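The embodiment's matching uses SIFT features with FLANN (typically via OpenCV). Purely to illustrate the locate-the-region step in a self-contained way, the sketch below swaps in exhaustive normalized cross-correlation on small grayscale arrays — a different, much slower technique than SIFT+FLANN, named here plainly as a stand-in:

```python
import numpy as np

def locate_region(layout, fake_ref):
    """Find the offset of the (smaller) generated false reference layout
    inside the larger design layout by exhaustive normalized
    cross-correlation. Stand-in for SIFT + FLANN matching."""
    h, w = fake_ref.shape
    H, W = layout.shape
    t = fake_ref - fake_ref.mean()
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = layout[i:i + h, j:j + w]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12
            score = (p * t).sum() / denom   # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

Feature-based matching (SIFT+FLANN) scales far better and tolerates rotation and scale changes, which is why the embodiment uses it; NCC only handles pure translation.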
S5, correcting the SEM image by using a light flow method based on the results of S3 and S4, relieving inherent distortion of the SEM image, and using the matched layout and the corrected SEM image pair for subsequent hot spot detection and contour analysis.
After the matching layout is obtained in step S4, because of the non-negligible distortion in the original SEM image, the matching layout and the SEM image cannot be used directly for subsequent hot spot detection and contour analysis; distortion correction of the SEM image is required first. However, for SEM images collected from the production line, no ground-truth distortion baseline is available. Fortunately, the false reference layout generated by the SPPE-GAN model carries the pattern position and distortion information of the SEM image, so the distortion in the SEM image is corrected by calculating the deformation map between the generated false reference layout and the matching layout.
Specifically, registering these two images is treated as an optimization problem whose goal is to find a transformation T that maximizes, under some similarity criterion, the similarity between the transformed matching layout and the false reference layout. This optimization can be carried out by gradient descent and terminates when the maximum similarity or the maximum number of iterations is reached. Optical flow is a computer vision technique for estimating the motion of pixels in an image sequence. It rests on the assumption that image brightness remains constant over a short period of time while object motion changes pixel positions; by analyzing brightness variations between images, it infers the motion vector field of the pixels, i.e., the direction and speed of each pixel's movement at different points in time.
In one implementation of the invention, the optical flow from the generated false reference layout to the matching layout, computed with the Farneback dense optical flow method, serves as the deformation map; applying this deformation map to the original SEM image realizes the distortion correction of the SEM image.
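In OpenCV the flow field itself would come from `cv2.calcOpticalFlowFarneback`; applying the resulting deformation map to the SEM image amounts to backward warping, sketched here with nearest-neighbor sampling in NumPy (a real pipeline would interpolate, e.g. with `cv2.remap`):

```python
import numpy as np

def warp_with_flow(img, flow):
    """Backward-warp img with a dense flow field of shape (H, W, 2),
    where flow[y, x] points from the target pixel to its source."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]
```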
To quantitatively evaluate SPPE-GAN performance, the present invention uses three metrics: FID, area IOU, and contour IOU. When evaluating the quality of the generated false reference layout, the FID metric measures the difference between the generated false reference layout and the pseudo-reference layout, the area IOU metric measures the difference between the matching layout and the pseudo-reference layout, and the contour IOU metric measures the alignment between the matching layout and the rectified SEM image, as well as between the pseudo-reference layout and the rectified SEM image.
FID: this index computes the distance between two multivariate Gaussians. The means and covariances are extracted from the translated data and the real data using an Inception network, and the resulting ranking is consistent with human judgment; a correct translation yields a lower FID. The FID between the generated false reference layout and the pseudo-reference layout is therefore computed. Because the image matching and distortion correction processes are both based on the generated reference layout, the FID measurement not only reflects the success of the image translation but also underpins the realism and reliability of the experimental results.
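Given the means and covariances extracted by the Inception network, the FID is the Fréchet distance between the two Gaussians. A minimal sketch of that closed form (the function name and statistics here are placeholders, not the invention's code):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_gaussian(mu1, cov1, mu2, cov2):
    """Frechet distance ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 C2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # discard tiny numerical imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

Identical distributions give an FID of zero; the score grows as the translated and real feature statistics drift apart.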
Area IOU: an indicator that measures the degree of overlap between two regions or images. Specifically, the area IOU is defined as the ratio of the intersection area to the union area.
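For binary layout masks the definition above reduces to a few lines; an illustrative sketch:

```python
import numpy as np

def area_iou(a, b):
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```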
Contour IOU (figure 3): edge enhancement is applied to the matched layout or the pseudo-reference layout, the contour of the binarized SEM image is obtained, and the intersection-over-union between the layout contour C_layout and the SEM image contour C_SEM, i.e. |C_layout ∩ C_SEM| / |C_layout ∪ C_SEM|, is computed. The larger the contour IOU, the better the two figures match. Meanwhile, to ensure that the contour IOU index is meaningful, this embodiment averages the results whose FID scores lie in the stable interval, namely training epochs 175 to 200.
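One simple way to realize this metric is to extract one-pixel-wide contours by morphological erosion and then take the IOU of the contour pixel sets. An illustrative sketch (the embodiment's edge-enhancement step is abstracted away here):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def contour(mask):
    """One-pixel boundary of a binary mask: the mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def contour_iou(a, b):
    ca, cb = contour(a), contour(b)
    inter = (ca & cb).sum()
    union = (ca | cb).sum()
    return inter / union if union else 1.0
```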
Table 1 experimental results
As shown in Table 1, the experiment compares SPPE-GAN with the Pix2Pix and TSM models commonly used in the field. SPPE-GAN attains the lowest FID, indicating better translation performance. The fully supervised models may yield a lower area IOU because their pixel-level loss computation pays no special attention to pattern corners and edges. Since the fully supervised models are trained with pseudo-reference layouts matched to the SEM images by experienced engineers, their contour IOU indices all exceed 40%, which is good performance; however, the contour IOU of the matched layout obtained with the SIFT and FLANN algorithms is lower. This may be due to insufficient quality of the generated false reference layout, resulting in large scaling and position misalignment in the matched layout.
In addition, the model is compared with advanced unsupervised style-transfer models in the current industry. It should be noted that CUT is a one-sided GAN model that omits the dual structure of CycleGAN, and DistanceGAN is used in its best-performing version with CycleGAN as the backbone network. In Fig. 5, the columns compare the present invention with Pix2Pix, TSM, CycleGAN, CUT, DistanceGAN and DCLGAN: the first row shows the generated false reference layouts, the second row overlays the SEM image and the false reference layout at 50% transparency, and the third and fourth rows are enlargements of the two framed areas in the second row. The model of the present invention also achieves the highest area IOU score, and obtains the best results when the contour IOU is computed with the pseudo-reference layout and with the matching layout respectively. Furthermore, when the engineer-matched pseudo-reference layout dataset is used, the contour IOU between the corrected SEM images generated by CycleGAN, CUT and DistanceGAN and the pseudo-reference layout is lower than that of the SEM images before correction. This may be because these models fail to extract sufficient position information from the patterns in the SEM image, thereby degrading the result.
There is also provided in this embodiment a distortion correcting matching system for an unsupervised scanning electron microscope image, which is used to implement the above-described embodiment. The terms "module," "unit," and the like, as used below, may be a combination of software and/or hardware that performs a predetermined function. Although the system described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible.
The distortion correction matching system for an unsupervised scanning electron microscope image provided in this embodiment includes:
The wafer data acquisition module is used for acquiring an SEM image and a design layout file of the wafer, wherein the design layout file comprises corresponding areas in the SEM image;
The reference layout matching module is used for converting an SEM image into a reference layout style image by using an unsupervised SEM pattern position extraction model, so as to obtain a false reference layout carrying pattern position information in the SEM image, matching the false reference layout with a design layout file, and generating a matching layout according to a real layout area matched with the SEM image in the design layout file, wherein the unsupervised SEM pattern position extraction model is a double-generator double-discriminator network structure based on a CycleGAN model, and edge contrast learning loss and HV overturning invariance loss are introduced in the training process of the unsupervised SEM pattern position extraction model;
the SEM image correction module is used for calculating a deformation graph between the false reference layout and the matching layout by adopting an optical flow method, correcting distortion in an SEM image by utilizing the deformation graph, and obtaining a corrected SEM image;
And the application module is used for realizing hot spot detection and contour analysis of the wafer by utilizing the corrected SEM image and the matched layout.
For the system embodiment, since it basically corresponds to the method embodiment, the relevant parts may refer to the description of the method embodiment, and the implementations of the remaining modules are not repeated here. The system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
Embodiments of the system of the present invention may be applied to any device having data processing capability, such as a computer. The system embodiment may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the device in the logical sense is formed by the processor of the data-processing device reading the corresponding computer program instructions from non-volatile memory into memory and executing them.
Obviously, the above-described embodiments and drawings are only examples of the present application, and a person skilled in the art can apply the present application to other similar situations without inventive effort. In addition, it should be appreciated that, although the development effort might be complex and lengthy, for those of ordinary skill having the benefit of this disclosure it remains a routine undertaking of design, fabrication or manufacture, and should not be construed as insufficient disclosure. Several variations and modifications may be made without departing from the spirit of the application, and these fall within its scope of protection. Accordingly, the scope of protection of the application shall be determined by the appended claims.

Claims (10)

1. A distortion correction and matching method for unsupervised scanning electron microscope images, characterized by comprising the following steps:
acquiring an SEM image and a design layout file of a wafer, the design layout file containing the region corresponding to the SEM image;
converting the SEM image into a reference-layout-style image using an unsupervised SEM pattern position extraction model to obtain a false reference layout carrying the pattern position information in the SEM image, matching the false reference layout with the design layout file, and generating a matching layout from the real layout region in the design layout file that matches the SEM image; the unsupervised SEM pattern position extraction model being a dual-generator dual-discriminator network structure based on the CycleGAN model, with an edge contrastive learning loss and an HV flip-invariance loss introduced during its training;
calculating a deformation map between the false reference layout and the matching layout by an optical flow method, and correcting the distortion in the SEM image using the deformation map to obtain a corrected SEM image;
the corrected SEM image and the matching layout being used for hot-spot detection and contour analysis of the wafer.
2. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 1, characterized in that the generator of the unsupervised SEM pattern position extraction model comprises a downsampling layer, 2m residual blocks and an upsampling layer, the residual blocks being connected in series between the downsampling layer and the upsampling layer; the downsampling layer and the first m residual blocks serve as the encoder, and the upsampling layer and the last m residual blocks serve as the decoder; the downsampling layer and the upsampling layer are composed of the same number of convolutional layers.
3. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 2, characterized in that a global attention module is inserted at the last layer of each residual block.
4. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 2, characterized in that the dataset used for training the unsupervised SEM pattern position extraction model contains SEM images and pseudo-reference layouts, the pseudo-reference layouts participating in the training process as weak labels of the SEM images.
5. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 4, characterized in that the edge contrastive learning loss is calculated as follows:
obtaining the output of the SEM image after the k-th encoder layer of the first generator, and projecting the output into a linear space to obtain an encoded feature vector of dimension N, whose i-th dimension represents the i-th feature of the SEM image;
obtaining the output of the pseudo-reference layout after the k-th encoder layer of the second generator, and projecting the output into a linear space to obtain an encoded feature vector, whose i-th dimension represents the i-th feature of the pseudo-reference layout;
constructing positive and negative sample pairs from the two encoded feature vectors, a pair being positive when i = j and negative when i ≠ j, and calculating the edge contrastive learning loss from the angle between the paired features, their moduli, and an angular-margin penalty factor.
6. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 5, characterized in that the edge contrastive learning losses corresponding to different values of k are averaged as the final edge contrastive learning loss.
7. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 4, characterized in that the HV flip-invariance loss is calculated as follows:
flipping the SEM image horizontally and vertically, converting the SEM images before and after flipping into false reference layouts through the first generator, flipping the pair of false reference layouts to the same orientation, and taking the loss between the orientation-aligned pair of false reference layouts as the HV flip-invariance loss of the SEM image;
flipping the pseudo-reference layout horizontally and vertically, converting the pseudo-reference layouts before and after flipping into false SEM images through the second generator, flipping the pair of false SEM images to the same orientation, and taking the loss between the orientation-aligned pair of false SEM images as the HV flip-invariance loss of the pseudo-reference layout;
taking the sum of the HV flip-invariance loss of the SEM image and the HV flip-invariance loss of the pseudo-reference layout as the final HV flip-invariance loss.
8. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 1, characterized in that an adversarial loss, a cycle-consistency loss and an identity mapping loss are also introduced during the training of the unsupervised SEM pattern position extraction model.
9. The distortion correction and matching method for unsupervised scanning electron microscope images according to claim 1, characterized by further comprising, before training the unsupervised SEM pattern position extraction model, a preprocessing step of denoising and contour enhancement of the images in the training dataset.
10. A distortion correction and matching system for unsupervised scanning electron microscope images, characterized by comprising:
a wafer data acquisition module for acquiring an SEM image and a design layout file of a wafer, the design layout file containing the region corresponding to the SEM image;
a reference layout matching module for converting the SEM image into a reference-layout-style image using an unsupervised SEM pattern position extraction model to obtain a false reference layout carrying the pattern position information in the SEM image, matching the false reference layout with the design layout file, and generating a matching layout from the real layout region in the design layout file that matches the SEM image, wherein the unsupervised SEM pattern position extraction model is a dual-generator dual-discriminator network structure based on the CycleGAN model, with an edge contrastive learning loss and an HV flip-invariance loss introduced during training;
an SEM image correction module for calculating a deformation map between the false reference layout and the matching layout by an optical flow method and correcting the distortion in the SEM image using the deformation map to obtain a corrected SEM image;
an application module for realizing hot-spot detection and contour analysis of the wafer using the corrected SEM image and the matching layout.
CN202411562064.0A 2024-11-05 2024-11-05 Distortion correction matching method and system for unsupervised scanning electron microscope images Pending CN119091441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411562064.0A CN119091441A (en) 2024-11-05 2024-11-05 Distortion correction matching method and system for unsupervised scanning electron microscope images

Publications (1)

Publication Number Publication Date
CN119091441A true CN119091441A (en) 2024-12-06

Family

ID=93669843


Country Status (1)

Country Link
CN (1) CN119091441A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5790417A (en) * 1996-09-25 1998-08-04 Taiwan Semiconductor Manufacturing Company Ltd. Method of automatic dummy layout generation
CN106033171A (en) * 2015-03-11 2016-10-19 中芯国际集成电路制造(上海)有限公司 A failure analysis method for a bad point on a wafer
CN115760592A (en) * 2022-10-16 2023-03-07 哈尔滨工程大学 A network video restoration method for color distortion on social network platforms
WO2023083559A1 (en) * 2021-11-12 2023-05-19 Asml Netherlands B.V. Method and system of image analysis and critical dimension matching for charged-particle inspection apparatus
WO2023142384A1 (en) * 2022-01-25 2023-08-03 深圳晶源信息技术有限公司 Design layout defect repair method, storage medium and device
CN118096799A (en) * 2024-04-29 2024-05-28 浙江大学 Hybrid weakly-supervised wafer SEM defect segmentation method and system
CN118314086A (en) * 2024-03-22 2024-07-09 浙江大学 A method and system for automatically matching wafer reference layout and SEM image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MILLER, K.K., WANG, P. & GRILLET, N: "SUB-immunogold-SEM reveals nanoscale distribution of submembranous epitopes", NAT COMMUN 15, 10 September 2024 (2024-09-10) *
XU WEI: "Research on SEM 3D Measurement Technology Based on Image Correction and Its Application", Soochow University, 16 March 2020 (2020-03-16) *

Similar Documents

Publication Publication Date Title
Margffoy-Tuay et al. Dynamic multimodal instance segmentation guided by natural language queries
Yang et al. Directional connectivity-based segmentation of medical images
US20240289945A1 (en) Method and system for classifying defects in wafer using wafer-defect images, based on deep learning
CN115063573B (en) A multi-scale object detection method based on attention mechanism
US11410300B2 (en) Defect inspection device, defect inspection method, and storage medium
CN114820579A (en) Semantic segmentation based image composite defect detection method and system
CN118096799B (en) A hybrid weakly supervised wafer SEM defect segmentation method and system
CN115205672A (en) A method and system for semantic segmentation of remote sensing buildings based on multi-scale regional attention
Gai et al. Flexible hotspot detection based on fully convolutional network with transfer learning
WO2022082692A1 (en) Lithography hotspot detection method and apparatus, and storage medium and device
Han et al. Progressive feature interleaved fusion network for remote-sensing image salient object detection
Zha et al. Weakly-supervised mirror detection via scribble annotations
Jayasekara et al. Detecting anomalous solder joints in multi-sliced PCB X-ray images: a deep learning based approach
CN115049833A (en) Point cloud component segmentation method based on local feature enhancement and similarity measurement
CN114820541A (en) Defect detection method based on reconstructed network
CN118314086A (en) A method and system for automatically matching wafer reference layout and SEM image
Wang et al. IH-ViT: Vision transformer-based integrated circuit appear-ance defect detection
CN117078608B (en) A method for detecting highly reflective leather surface defects based on double mask guidance
CN119091441A (en) Distortion correction matching method and system for unsupervised scanning electron microscope images
CN117853778A (en) Improved HTC casting DR image defect identification method
CN114066825B (en) Improved complex texture image flaw detection method based on deep learning
CN116664494A (en) Surface defect detection method based on template comparison
CN113065547A (en) A Weakly Supervised Text Detection Method Based on Character Supervision Information
CN114399628A (en) Efficient detection system for insulators in complex space environment
CN115170457A (en) BGA solder ball region segmentation method based on improved full convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination