CN112561873B - A Virtual Measurement Method of CDSEM Image Based on Machine Learning - Google Patents
- Publication number
- CN112561873B (grant of application CN202011459003.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- cdsem
- neural network
- photoetching
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0004—Industrial image inspection (under G06T7/0002—Inspection of images, e.g. flaw detection)
- G06N3/08—Learning methods (under G06N3/02—Neural networks)
- G06T7/13—Edge detection
- G06T7/60—Analysis of geometric attributes
- G06T2207/10061—Microscopic image from scanning electron microscope
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30148—Semiconductor; IC; Wafer (under G06T2207/30108—Industrial image inspection)
Abstract
Description
Technical Field

The invention belongs to the field of semiconductor integrated circuit manufacturing, and relates to a machine-learning-based virtual measurement method for CDSEM images.
Background Art

In the photolithography step of semiconductor integrated circuit manufacturing, for a given pattern, once the focus and dose of the lithography tool are fixed, the aerial image in the photoresist on the wafer is determined. Once the photoresist is fixed, the three-dimensional structure of the developed photoresist is determined, and the CDSEM image captured with a scanning electron microscope (SEM) is therefore also determined. This CDSEM image is typically used to confirm the quality of the final lithographic pattern.

Usually, to improve pattern quality in the photolithography process, optical proximity correction (OPC) must first be applied to the pattern used to manufacture the photomask. For model-based OPC, the key is to build an accurate OPC model. The model contains a series of parameters that must be calibrated with experimental data from test patterns (for example, CDSEM images); the more experimental data are available, the more accurate the model becomes.

However, test patterns cannot cover all pattern types, and the OPC model is not based entirely on physical principles but contains empirical components, so its accuracy cannot be guaranteed. At present, pattern quality after OPC is verified using simulation results from the same OPC model, so the quality of the verification depends on the accuracy of the OPC model itself for all patterns.

It is clear to those skilled in the art that the accuracy of the OPC model for all patterns cannot be guaranteed. The industry therefore has a strong need for models independent of the OPC model to serve as a new verification means to ensure the quality of post-OPC pattern data.

In addition, lithography tools are classified by light source into ultraviolet (UV), deep ultraviolet (DUV) and extreme ultraviolet (EUV) tools. In lithographic defect inspection, random defects caused by stochastic effects are a major problem, especially in EUV lithography. Moreover, because of resolution limits, patterns printed by EUV lithography must be scanned with an electron-beam (E-beam) tool to check whether the post-lithography pattern on the wafer contains defects. Owing to design constraints of E-beam tools, defects cannot be detected in die-to-die mode but only in die-to-database mode. In other words, current EUV defect inspection is performed by comparing the CDSEM image with the OPC target geometry, which is not the best approach.

In the photolithography process, the stability of the lithography conditions plays a key role in the stability of the post-lithography image quality on the wafer. Specifically, for a given pattern, once the focus and dose of the lithography tool are fixed, the aerial image in the photoresist on the wafer is determined; once the photoresist is fixed, the three-dimensional structure of the developed photoresist is determined, and the CDSEM image captured with the scanning electron microscope is therefore also determined.
Summary of the Invention

In view of the above problems in monitoring the stability of lithography conditions, the present invention proposes a machine-learning-based virtual measurement method for CDSEM images. By establishing a mapping between the post-OPC mask pattern and the CDSEM image obtained after the lithography process, machine learning is used to generate CDSEM images, yielding a verification model independent of the OPC model that can be used to confirm the quality of the pattern after lithography.

To achieve the above object, the technical solution of the present invention is as follows:
A machine-learning-based virtual measurement method for CDSEM images, comprising the following steps:

Step S1: generating a training set and a verification set, which includes:

Step S11: providing a wafer and presetting the number of lithography runs to K, where K is a positive integer greater than or equal to 1;

Step S12: performing one lithography process flow on the wafer, and scanning the post-lithography wafer at Mi different coordinates with a scanning electron microscope to obtain Mi CDSEM images, where Mi is a positive integer greater than or equal to 10 and i is one of the values 1, 2, 3, ..., K;

Step S13: calculating the lithography aerial image at the same coordinates as each CDSEM image, and combining each CDSEM image with its corresponding aerial image into an aerial-image/CDSEM-image data pair, finally obtaining Mi such pairs, where the aerial image comprises one or more two-dimensional images at different depths within the photoresist;

Step S14: determining whether the number of aerial-image/CDSEM-image data pairs equals N; if not, returning to step S12; if yes, proceeding to step S15; wherein:

Step S15: dividing the N aerial-image/CDSEM-image data pairs proportionally into a training set for model training and a verification set for verifying the model, where the ratio of the numbers of pairs in the training set and the verification set is N1:N2 and N = N1 + N2;

Step S2: aligning the lithography aerial images with the CDSEM images;

Step S3: using a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as the target output, traversing the N1 aerial-image/CDSEM-image data pairs in the training set to complete the training of the neural network model, and traversing the N2 aerial-image/CDSEM-image data pairs in the verification set to complete the verification of the neural network model.
Further, in the machine-learning-based CDSEM image virtual measurement method, step S3 comprises:

Step S31: providing the neural network model;

Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as the target output, and traversing the aerial-image/CDSEM-image data pairs in the training set to train the neural network model;

Step S33: traversing the aerial-image/CDSEM-image data pairs in the verification set to verify the neural network model, and computing the loss function on the verification set;

Step S34: determining whether the loss function is smaller than a set value; if yes, stopping the training of the neural network model to obtain the final neural network model; if not, repeating steps S15 to S34; wherein the neural network model embodies the mapping between the lithography aerial image and the CDSEM image.

Further, the neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function. If the neural network model is the DCNN model, the loss function is the mean squared error loss; if it is the GAN model, the loss function is the cross-entropy loss.

Further, the number N1 of aerial-image/CDSEM-image data pairs in the training set is a multiple of 7, and the number N2 of aerial-image/CDSEM-image data pairs in the verification set is a multiple of 3.
Further, the machine-learning-based CDSEM image virtual measurement method also comprises:

Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates the corresponding virtual CDSEM image.

Further, the machine-learning-based CDSEM image virtual measurement method also comprises step S5: measuring the critical dimensions of the virtual CDSEM image generated by the final neural network model, and determining from the critical dimensions whether the OPC optical model needs calibration correction.

Further, in the machine-learning-based CDSEM image virtual measurement method, step S5 comprises:

S51: obtaining the contour of the virtual CDSEM image, and finding the critical dimensions from the contour;

S52: determining whether the critical dimensions meet the process requirements, and if not, performing calibration correction on the OPC optical model.
Further, the machine-learning-based CDSEM image virtual measurement method also comprises step S5': judging whether the stochastic effects of one lithography run are acceptable.

Further, step S5' in the machine-learning-based CDSEM image virtual measurement method comprises:

S51': performing one lithography process flow under the lithography process conditions corresponding to the new aerial image, and measuring an actual CDSEM image;

S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean squared error of their pixel values meets the accuracy requirement, judging that the stochastic effects of this lithography run are acceptable.

Further, the lithography aerial image and the CDSEM image have the same image size and resolution.

It can be seen from the above technical solution that the beneficial effect of the machine-learning-based CDSEM image virtual measurement method of the present invention is that it establishes a mapping between the post-OPC mask pattern and the SEM image obtained after the lithography process and generates SEM images by machine learning, thereby confirming the quality of the pattern after lithography. This is equivalent to establishing a verification model independent of the OPC model, that is, a model that can generate a virtual reference SEM image from the post-OPC geometric data.
Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the machine-learning-based CDSEM image virtual measurement method in an embodiment of the present invention.

Fig. 2 is a block diagram of the architecture for building post-lithography CDSEM images based on machine learning in an embodiment of the present invention.

Fig. 3 is a schematic diagram of a lithography aerial image obtained by computing the post-OPC mask pattern with a rigorous optical model in an embodiment of the present invention.

Fig. 4 is a schematic diagram of an experimental CDSEM image of the post-OPC mask pattern after lithography in an embodiment of the present invention.

Fig. 5 is a schematic diagram of a virtual CDSEM image generated by the trained deep learning model in an embodiment of the present invention.
Detailed Description of the Embodiments

The specific embodiments of the present invention are described in further detail below with reference to Figs. 1-5.

It should be noted that, in the machine-learning-based CDSEM image virtual measurement method disclosed by the present invention, the lithography tool maps the pattern on the mask onto a photoresist-coated wafer using, for example, EUV or UV light. In the actual lithography process flow, once the lithography process parameters (focus and dose) of the lithography tool are determined, the pattern on the wafer is determined accordingly, and the CDSEM image captured by the scanning electron microscope is also determined. Therefore, when the process flow is fixed, there is a definite correspondence between the CDSEM image and the lithography process parameters.

Please refer to Fig. 1, which is a schematic flowchart of the machine-learning-based CDSEM image virtual measurement method in an embodiment of the present invention. As shown in Fig. 1, the method may include the following steps:
Step S1: generating a training set and a verification set, which includes:

Step S11: providing a wafer and presetting the number of lithography runs to K, where K is a positive integer greater than or equal to 1;

Step S12: performing one lithography process flow on the wafer, and scanning the post-lithography wafer at Mi different coordinates with a scanning electron microscope to obtain Mi CDSEM images, where Mi is a positive integer greater than or equal to 10 and i is one of the values 1, 2, 3, ..., K;

Step S13: calculating the lithography aerial image at the same coordinates as each CDSEM image, and combining each CDSEM image with its corresponding aerial image into an aerial-image/CDSEM-image data pair, finally obtaining Mi such pairs, where the aerial image comprises one or more two-dimensional images at different depths within the photoresist;

Step S14: determining whether the number of aerial-image/CDSEM-image data pairs equals N; if not, returning to step S12; if yes, proceeding to step S15; wherein:

Step S15: dividing the N aerial-image/CDSEM-image data pairs proportionally into a training set for model training and a verification set for verifying the model, where the ratio of the numbers of pairs in the training set and the verification set is N1:N2 and N = N1 + N2.

Step S2: before model training, aligning the lithography aerial images with the CDSEM images;

Step S3: using a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as the target output, traversing the N1 aerial-image/CDSEM-image data pairs in the training set to complete the training of the neural network model, and traversing the N2 aerial-image/CDSEM-image data pairs in the verification set to complete the verification of the neural network model.

That is to say, please refer to Fig. 2, which is a functional block diagram of image-based off-line lithography process stability control in an embodiment of the present invention. As shown in Fig. 2, both the training set used for model training and the verification set used for verifying the model are obtained from multiple actual lithography runs. For example, if lithography is performed 5 times and the wafers are scanned at 200, 300, 50, 150 and 300 coordinates respectively, 1000 CDSEM images are finally obtained, that is, N = 1000. The N aerial-image/CDSEM-image data pairs are divided proportionally into a training set for model training and a verification set for verifying the model, with the ratio N1:N2 and N = N1 + N2. Preferably, the training set and the verification set are split in a 7:3 ratio, in which case the training set contains 700 aerial-image/CDSEM-image data pairs and the verification set contains 300 pairs.
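The following Python sketch illustrates one way the pairing and the 7:3 split could be carried out; the list inputs and the fixed random seed are illustrative assumptions rather than part of the patented method.

```python
import random

def build_dataset(aerial_paths, cdsem_paths, train_ratio=0.7, seed=0):
    """Pair aerial images with CDSEM images and split them into training and verification sets.

    aerial_paths and cdsem_paths are assumed to be index-aligned, i.e. the k-th
    aerial image was computed at the same wafer coordinates as the k-th CDSEM image.
    """
    assert len(aerial_paths) == len(cdsem_paths)
    pairs = list(zip(aerial_paths, cdsem_paths))      # N aerial-image/CDSEM-image pairs
    random.Random(seed).shuffle(pairs)                # shuffle before splitting
    n_train = int(round(train_ratio * len(pairs)))    # e.g. 700 out of 1000 pairs
    return pairs[:n_train], pairs[n_train:]           # training set, verification set

# Example with N = 1000 pairs collected over 5 lithography runs:
# train_set, val_set = build_dataset(aerial_list, cdsem_list)   # 700 / 300 pairs
```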
In an embodiment of the present invention, once the aerial-image/SEM-image data pairs are available, the mapping between the two can be derived using, for example, deep convolutional neural networks (DCNN) or generative adversarial networks (GAN).

Please refer to Fig. 3, Fig. 4 and Fig. 5. Fig. 3 is a schematic diagram of a lithography aerial image obtained by computing the post-OPC mask pattern with a rigorous optical model in an embodiment of the present invention, Fig. 4 is a schematic diagram of an experimental CDSEM image of the post-OPC mask pattern after lithography, and Fig. 5 is a schematic diagram of a virtual CDSEM image generated by the trained deep learning model.

In an embodiment of the present invention, because the coordinates of the actually printed pattern may deviate from the coordinates of the corresponding pattern on the mask, step S2 must be carried out before model training: aligning the lithography aerial image with the CDSEM image. Preferably, the aerial image and the CDSEM image have the same image size and resolution. The image size depends on the specific case; in this example it may be 512*512.
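A minimal alignment sketch is shown below, assuming only a translational offset between the two same-sized images; the scikit-image and SciPy calls are used purely for illustration, and the patent does not prescribe a particular alignment algorithm.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_pair(aerial, cdsem):
    """Shift the CDSEM image so that it overlays the aerial image.

    Both inputs are 2-D arrays of identical size and pixel pitch (e.g. 512 x 512).
    Only a translational offset is corrected here.
    """
    offset, _, _ = phase_cross_correlation(aerial, cdsem, upsample_factor=10)
    aligned_cdsem = nd_shift(cdsem, offset, order=1, mode="nearest")
    return aligned_cdsem
```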
Step S3: using a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as the target output, traversing the N1 aerial-image/CDSEM-image data pairs in the training set to complete the training of the neural network model, and traversing the N2 aerial-image/CDSEM-image data pairs in the verification set to complete the verification of the neural network model.

Specifically, an image-to-image approach is used: the corresponding CDSEM image is generated from the aerial image in the exposed photoresist. The aerial image is taken as the input of the neural network model and the corresponding CDSEM image as its target output; through continuous training and verification of the neural network model and adjustment of its parameters, the mapping from the lithography aerial image to the CDSEM image is finally established.
In an embodiment of the present invention, step S3 of the machine-learning-based CDSEM image virtual measurement method specifically includes:

Step S31: providing the neural network model;

Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as the target output, and traversing the aerial-image/CDSEM-image data pairs in the training set to train the neural network model;

Step S33: traversing the aerial-image/CDSEM-image data pairs in the verification set to verify the neural network model, and computing the loss function on the verification set;

Step S34: determining whether the loss function is smaller than a set value; if yes, stopping the training of the neural network model to obtain the final neural network model; if not, repeating steps S15 to S34; wherein the neural network model embodies the mapping between the lithography aerial image and the CDSEM image.
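A minimal PyTorch-style training loop corresponding to steps S31-S34 is sketched below. The loader objects, learning rate, loss threshold and maximum number of rounds are illustrative assumptions; the mean squared error loss matches the DCNN case described in the text, while a GAN would instead use cross-entropy-based losses.

```python
import torch
import torch.nn as nn

def train_until_converged(model, train_loader, val_loader, loss_threshold=1e-3,
                          max_rounds=100, lr=1e-4, device="cpu"):
    """Train the aerial-image -> CDSEM model until the verification loss drops below a set value."""
    model = model.to(device)
    criterion = nn.MSELoss()                      # MSE for the DCNN case; a GAN would use cross-entropy losses
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_rounds):
        model.train()
        for aerial, cdsem in train_loader:        # S32: traverse the training set
            optimizer.zero_grad()
            pred = model(aerial.to(device))
            loss = criterion(pred, cdsem.to(device))
            loss.backward()
            optimizer.step()
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for aerial, cdsem in val_loader:      # S33: traverse the verification set
                pred = model(aerial.to(device))
                val_loss += criterion(pred, cdsem.to(device)).item() * aerial.size(0)
                n += aerial.size(0)
        if val_loss / n < loss_threshold:         # S34: stop once the loss is below the set value
            break
    return model
```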
Further, the neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function. If the neural network model is the DCNN model, the loss function is the mean squared error loss; if it is the GAN model, the loss function is the cross-entropy loss.

Further, the DCNN model may include one input layer, 13 middle layers and one output layer. The middle layers have identical structure: the convolution kernel size is 3*3, the width of each layer is 64 or 128 feature maps, and each convolutional layer is followed by batch normalization. The input layer performs only convolution and activation operations, and the output layer performs only a convolution operation.
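A PyTorch sketch of such a network is given below, assuming a width of 64 feature maps (the text allows 64 or 128) and a single-channel aerial image input; these choices, and the class name, are illustrative assumptions.

```python
import torch.nn as nn

class AerialToCDSEM(nn.Module):
    """DCNN sketch matching the description: one input layer (conv + ReLU only),
    13 identical middle layers (3x3 conv, batch norm, ReLU) and one output layer (conv only)."""

    def __init__(self, in_channels=1, width=64, middle_layers=13):
        super().__init__()
        layers = [nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]                           # input layer: convolution and activation only
        for _ in range(middle_layers):                             # 13 middle layers of identical structure
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1),
                       nn.BatchNorm2d(width),                      # batch normalization after each convolution
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, 1, kernel_size=3, padding=1))  # output layer: convolution only
        self.net = nn.Sequential(*layers)

    def forward(self, aerial):
        # aerial: (batch, in_channels, H, W), e.g. one or several photoresist depth slices
        return self.net(aerial)                                    # predicted CDSEM image
```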
With the above neural network model, step S4 can be carried out: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates the corresponding virtual CDSEM image.

Step S4: based on the trained neural network model, when a new lithography aerial image is input, the neural network model generates the corresponding virtual CDSEM image; wherein the aerial image is either the lithography aerial image calculated from the EUV mask pattern and the process parameters at the same coordinates as the CDSEM image, or the lithography aerial image of the post-OPC mask pattern at the same coordinates as the CDSEM image, calculated for the wafer with the OPC optical model.

In an embodiment of the present invention, when the new aerial image is the aerial image of the post-OPC mask pattern at the same coordinates as the CDSEM image, calculated with the optical model, the method also includes step S5: measuring the critical dimensions of the virtual CDSEM image generated by the final neural network model, and determining from the critical dimensions whether the OPC optical model needs calibration correction.
Further, step S5 may include:

measuring the critical dimensions of the virtual CDSEM image generated by the final neural network model, and determining from the critical dimensions whether the OPC optical model needs calibration correction.
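As an illustration, a critical dimension can be read from the virtual CDSEM image by extracting the feature contour and measuring its width, as in the OpenCV sketch below; the binarization threshold, the bounding-box measurement, the pixel size and the assumption of an 8-bit grayscale input are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def measure_cd(virtual_cdsem, threshold=128, pixel_size_nm=1.0):
    """Estimate a critical dimension from a virtual CDSEM image.

    The image is binarized, the main feature contour is extracted, and the width of
    its bounding box is converted to a critical dimension in nanometres.
    """
    img = np.asarray(virtual_cdsem, dtype=np.uint8)
    _, binary = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # main feature contour
    _, _, w, _ = cv2.boundingRect(largest)         # feature width in pixels
    return w * pixel_size_nm                       # critical dimension in nanometres
```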
In another embodiment of the present invention, when the new aerial image is the aerial image calculated from the EUV mask pattern and the process parameters at the same coordinates as the CDSEM image, the method also includes step S5': using the neural network model generated above to produce a virtual CDSEM image from the aerial image of the EUV mask, which serves as the reference CDSEM image for electron-beam defect scanning; the reference CDSEM image is then compared with the CDSEM image obtained after EUV lithography. When the mean squared error of the pixel values between the post-lithography CDSEM image and the reference CDSEM image meets the accuracy requirement, it is judged that there is no problem with the geometric data of the pattern printed by EUV lithography; otherwise, it is judged that the printed pattern contains random defects.
Specifically, step S5' may include:

S51': performing one lithography process flow under the lithography process conditions corresponding to the new aerial image, and measuring an actual CDSEM image;

S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean squared error of their pixel values meets the accuracy requirement, judging that the stochastic effects of this lithography run are acceptable.

That is to say, in the model application stage, a lithography aerial image is input and the model outputs the corresponding virtual post-lithography CDSEM image. This virtual CDSEM image can serve as a standard independent of the OPC model and be compared with the post-lithography image simulated by the OPC model: an OPC model whose comparison gives a mean squared error smaller than the preset accuracy is acceptable; otherwise, the OPC model is recalibrated with more experimental data.
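A minimal sketch of this pixel-level comparison is given below; the acceptance threshold mse_limit stands in for whatever preset accuracy requirement is used in practice, and the images are assumed to be aligned and of equal size.

```python
import numpy as np

def mse_acceptable(virtual_cdsem, reference_cdsem, mse_limit):
    """Compare two images by the mean squared error of their pixel values.

    Can be used for step S52' (virtual vs. measured CDSEM image, to judge whether the
    stochastic effects of a lithography run are acceptable) or to check an OPC model
    simulation against the virtual CDSEM reference.
    """
    a = np.asarray(virtual_cdsem, dtype=np.float64)
    b = np.asarray(reference_cdsem, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return mse, mse <= mse_limit
```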
What has been described above is only a preferred embodiment of the present invention, and the embodiment is not intended to limit the scope of patent protection of the present invention. Therefore, all equivalent structural changes made using the contents of the description and drawings of the present invention shall likewise fall within the protection scope of the present invention.
Claims (7)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011459003.3A CN112561873B (en) | 2020-12-11 | 2020-12-11 | A Virtual Measurement Method of CDSEM Image Based on Machine Learning |
PCT/CN2021/134560 WO2022121736A1 (en) | 2020-12-11 | 2021-11-30 | Cdsem image virtual measurement method based on machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011459003.3A CN112561873B (en) | 2020-12-11 | 2020-12-11 | A Virtual Measurement Method of CDSEM Image Based on Machine Learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112561873A CN112561873A (en) | 2021-03-26 |
CN112561873B true CN112561873B (en) | 2022-11-25 |
Family
ID=75062324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011459003.3A Active CN112561873B (en) | 2020-12-11 | 2020-12-11 | A Virtual Measurement Method of CDSEM Image Based on Machine Learning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112561873B (en) |
WO (1) | WO2022121736A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112561873B (en) * | 2020-12-11 | 2022-11-25 | 上海集成电路装备材料产业创新中心有限公司 | A Virtual Measurement Method of CDSEM Image Based on Machine Learning |
CN114049342A (en) * | 2021-11-19 | 2022-02-15 | 上海集成电路装备材料产业创新中心有限公司 | A method, system, device and medium for generating a denoising model |
CN114627095A (en) * | 2022-03-25 | 2022-06-14 | 上海集成电路装备材料产业创新中心有限公司 | Detection method and device for photoetching process window |
CN114841378B (en) * | 2022-07-04 | 2022-10-11 | 埃克斯工业(广东)有限公司 | Wafer feature parameter prediction method, device, electronic device and readable storage medium |
CN117669473B (en) * | 2024-01-29 | 2024-04-19 | 全智芯(上海)技术有限公司 | Method for model calibration, electronic device and storage medium |
CN119668055A (en) * | 2025-02-20 | 2025-03-21 | 弈芯科技(杭州)有限公司 | A method and device for generating SEM images |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017171891A1 (en) * | 2016-04-02 | 2017-10-05 | Intel Corporation | Systems, methods, and apparatuses for modeling reticle compensation for post lithography processing using machine learning algorithms |
US11222415B2 (en) * | 2018-04-26 | 2022-01-11 | The Regents Of The University Of California | Systems and methods for deep learning microscopy |
DE102018207880A1 (en) * | 2018-05-18 | 2019-11-21 | Carl Zeiss Smt Gmbh | Method and apparatus for evaluating an unknown effect of defects of an element of a photolithography process |
WO2019238372A1 (en) * | 2018-06-15 | 2019-12-19 | Asml Netherlands B.V. | Machine learning based inverse optical proximity correction and process model calibration |
US11635699B2 (en) * | 2018-12-28 | 2023-04-25 | Asml Netherlands B.V. | Determining pattern ranking based on measurement feedback from printed substrate |
US11061318B2 (en) * | 2019-02-28 | 2021-07-13 | Taiwan Semiconductor Manufacturing Co., Ltd. | Lithography model calibration |
CN112561873B (en) * | 2020-12-11 | 2022-11-25 | 上海集成电路装备材料产业创新中心有限公司 | A Virtual Measurement Method of CDSEM Image Based on Machine Learning |
- 2020-12-11: CN application CN202011459003.3A, patent CN112561873B (en), status: Active
- 2021-11-30: WO application PCT/CN2021/134560, publication WO2022121736A1 (en), status: Application Filing
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101042526A (en) * | 2004-02-23 | 2007-09-26 | 株式会社东芝 | Mask data correction method, photomask and optical image prediction method |
CN1862383A (en) * | 2006-04-04 | 2006-11-15 | 上海微电子装备有限公司 | Aberration field measuring method for imaging optical system of photoetching apparatus |
CN101877016A (en) * | 2009-04-30 | 2010-11-03 | 新思科技有限公司 | Modeling critical-dimension (CD) scanning-electron-microscopy (CD-sem) CD extraction |
CN107004060A (en) * | 2014-11-25 | 2017-08-01 | 流动马赛克公司 | Improved process control technology for semiconductor fabrication process |
CN108475417A (en) * | 2016-01-04 | 2018-08-31 | 科磊股份有限公司 | It is applied for semiconductor and high-definition picture is generated by low-resolution image |
CN111386500A (en) * | 2017-11-22 | 2020-07-07 | 卡尔蔡司Smt有限责任公司 | Method for authenticating a mask for microlithography |
CN108228981A (en) * | 2017-12-19 | 2018-06-29 | 上海集成电路研发中心有限公司 | The Forecasting Methodology of OPC model generation method and experimental pattern based on neural network |
CN111310407A (en) * | 2020-02-10 | 2020-06-19 | 上海集成电路研发中心有限公司 | Method for designing optimal feature vector of reverse photoetching based on machine learning |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Calculation method of reverse lithography solution based on physical feature map and DCNN machine learning |
Non-Patent Citations (3)
Title |
---|
Sem-GAN: Semantically-Consistent Image-to-Image Translation; Anoop Cherian; arXiv; 2018-07-12; full text * |
Recognition of hole defects in radiographic images of composite materials based on a convolutional neural network; Zhang Yi; Optics & Optoelectronic Technology; 2020-06-30; Vol. 18, No. 3; full text * |
Wafer defect detection and classification algorithm based on a convolutional neural network; Fang Xin, Shi Zheng; Computer Engineering; 2018-08-31; Vol. 44, No. 8; full text * |
Also Published As
Publication number | Publication date |
---|---|
WO2022121736A1 (en) | 2022-06-16 |
CN112561873A (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112561873B (en) | A Virtual Measurement Method of CDSEM Image Based on Machine Learning | |
US10223494B2 (en) | Semiconductor device manufacturing method and mask manufacturing method | |
US20220291593A1 (en) | Method and apparatus for lithographic process performance determination | |
JP5289343B2 (en) | Exposure amount determination method, semiconductor device manufacturing method, exposure amount determination program, and exposure amount determination apparatus | |
US20200064728A1 (en) | Methods of manufacturing semiconductor devices, method sof performing extreme ultraviolet ray exposure, and methods of performing optical proximity correction | |
CN111430261B (en) | Method and device for detecting process stability of photoetching machine | |
TW201732450A (en) | Improvements in gauge pattern selection | |
TWI795566B (en) | Method for performing optical proximity correction and method of manufacturing mask using optical proximity correction | |
US20080304029A1 (en) | Method and System for Adjusting an Optical Model | |
CN113109991B (en) | Target layout correction method and mask layout forming method | |
JP7344203B2 (en) | Method for qualification of masks for microlithography | |
JP5224853B2 (en) | Pattern prediction method, pattern correction method, semiconductor device manufacturing method, and program | |
US7930654B2 (en) | System and method of correcting errors in SEM-measurements | |
CN111771167B (en) | Alignment mark positioning in lithographic processes | |
CN112541545B (en) | Method for predicting CDSEM image after etching process based on machine learning | |
CN106033171A (en) | Method for Failure Analysis of Dead Pixels on Wafer | |
KR20230064407A (en) | Mask layout correction method based on machine learning, and mask manufacturing method comprising the correction method | |
US7222327B2 (en) | Photo mask, method of manufacturing photo mask, and method of generating mask data | |
US20220283496A1 (en) | Photomask and method for inspecting photomask | |
CN115469514A (en) | Graphic correction method | |
JP6155848B2 (en) | Defect correction method, semiconductor manufacturing apparatus, semiconductor manufacturing method, and defect correction program | |
TWI794312B (en) | Set of microlithographic masks, method for determining edge positions of the images of the structures of a mask, system for carrying out such a method, use of such a method, method for determining an overlay error or a relative edge placement error of a plurality of microlithographic masks, method for producing a microstructured or nanostructured component and component produced according to such a method | |
Weisbuch et al. | Improving ORC methods and hotspot detection with the usage of aerial images metrology | |
TWI886014B (en) | Light mask and inspection method thereof, and method for manufacturing light mask | |
CN100416761C (en) | Method for verifying photomask |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |