WO2025040361A1 - Learning-based local alignment for edge placement metrology
- Publication number: WO2025040361A1 (PCT/EP2024/071028)
- Authority: WIPO (PCT)
Classifications
- G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T7/001: Image analysis; industrial image inspection using an image reference approach
- G06T7/337: Image analysis; image registration using feature-based methods involving reference images or patches
Definitions
- the description herein relates to measurement schemes that may be useful in the field of charged particle beam systems, and more particularly, to systems and methods that may be applicable to charged particle inspection systems such as scanning electron microscope (SEM) tools.
- Inspection and metrology systems may be used for sensing physically observable phenomena.
- charged particle beam tools, such as electron microscopes, may comprise detectors that receive charged particles projected from a sample and that output detection signals.
- Detection signals may be used to reconstruct images of sample structures under inspection and may be used for, e.g., metrology, overlay, or defect inspection.
- Some embodiments of the present disclosure provide a non-transitory computer-readable medium.
- the non-transitory computer-readable medium may store a set of instructions that is executable by at least one processor of an apparatus.
- the instructions may cause the apparatus to perform operations comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
- Some embodiments of the present disclosure provide an inspection method comprising the operations discussed above.
- Some embodiments of the present disclosure provide a charged particle beam apparatus. The charged particle beam apparatus may comprise: a charged particle beam source configured to generate a beam of primary charged particles; a charged particle optical system configured to direct the beam of primary charged particles at a pattern region on a wafer; a controller comprising one or more processors, and configured to cause the charged particle beam apparatus to perform operations comprising: irradiating a surface of the wafer with the beam to cause charged particles to be emitted from the surface; detecting the charged particles on a charged particle detector of the charged particle beam apparatus to produce a charged particle beam inspection image of the pattern region on the wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
- Fig. 1 is a diagrammatic representation of an exemplary electron beam inspection (EBI) system, consistent with embodiments of the present disclosure.
- Figs. 2A-B are diagrams illustrating a charged particle beam apparatus that may be an example of an electron beam tool, consistent with embodiments of the present disclosure.
- FIGs. 3A-B are diagrammatic representations of an example contour extraction, according to a comparative embodiment.
- Fig. 4 is a diagrammatic representation of an example contour extraction, consistent with embodiments of the present disclosure.
- Fig. 5 is a diagrammatic representation of an example transformation model, consistent with embodiments of the present disclosure.
- Fig. 6A is a diagrammatic representation of an example inspection image, consistent with embodiments of the present disclosure.
- Figs. 6B-D are diagrammatic representations of an example contour extraction of the inspection image of Fig. 6A, according to a comparative embodiment.
- Fig. 6E is a diagrammatic representation of an example contour extraction of the inspection image of Fig. 6A, consistent with embodiments of the present disclosure.
- Fig. 7 is a diagrammatic representation of example use cases of contour extraction, consistent with embodiments of the present disclosure.
- Fig. 8 is a flowchart illustrating an example method that may be useful for contour extraction, consistent with embodiments of the present disclosure.
- Electronic devices are constructed of circuits formed on a piece of silicon called a substrate. Many circuits may be formed together on the same piece of silicon and are called integrated circuits or ICs. With advancements in technology, the size of these circuits has decreased dramatically so that many more of them can fit on the substrate. For example, an IC chip in a smart phone can be as small as a fingernail and yet may include over 2 billion transistors, the size of each transistor being less than 1/1,000th the width of a human hair.
- One component of improving yield is monitoring the chip making process to ensure that it is producing a sufficient number of functional integrated circuits.
- One way to monitor the process is to inspect the chip circuit structures at various stages of their formation. Inspection can be carried out using a scanning charged-particle microscope (“SCPM”).
- an SCPM may be a scanning electron microscope (SEM).
- SEM can be used to image these extremely small structures, in effect, taking a “picture” of the structures. The image can be used to determine if the structure was formed properly and also if it was formed in the proper location. If the structure is defective, then the process can be adjusted so the defect is less likely to recur. To enhance throughput (e.g., the number of samples processed per hour), it is desirable to conduct inspection as quickly as possible.
- unlike a camera, which takes a picture by receiving and recording the intensity of light reflected or emitted from people or objects,
- a SEM takes a “picture” by receiving and recording energies or quantities of electrons reflected or emitted from the structures of the wafer.
- an electron beam may be projected onto the structures, and when the electrons are reflected or emitted (“exiting”) from the structures (e.g., from the wafer surface, from the structures underneath the wafer surface, or both), a detector of the SEM may receive and record the energies or quantities of those electrons to generate an inspection image.
- the electron beam may scan through the wafer (e.g., in a line-by-line or zig-zag manner), and the detector may receive exiting electrons coming from a region under electron-beam projection (referred to as a “beam spot”).
- the detector may receive and record exiting electrons from each beam spot one at a time and join the information recorded for all the beam spots to generate the inspection image.
- some SEMs use a single electron beam (referred to as a “single-beam SEM”) to take a single “picture” to generate the inspection image.
- some SEMs use multiple electron beams (referred to as a “multi-beam SEM”) to take in parallel multiple “pictures” of the wafer, which can be used separately or be stitched together to generate the inspection image.
- the SEM may provide more electron beams onto the structures for obtaining these multiple “pictures,” resulting in more electrons exiting from the structures. Accordingly, the detector may receive more exiting electrons simultaneously and generate inspection images of the structures of the wafer with higher efficiency and faster speed.
- the detection process involves measuring the magnitude of an electrical signal generated when electrons land on the detector.
- electron counting may be used, in which a detector may count individual electron arrival events as they occur.
- intensity of the secondary beam may be determined based on electrical signals generated in the detector that vary in proportion to the change in intensity of the secondary beam.
- the detection process may be used for metrology measurements, e.g., in circuit pattern features printed on a semiconductor wafer.
- a metrology process may identify the locations of feature edges within the circuit pattern. Such measurements may be used to determine an edge placement error (EPE), or an unwanted offset of a feature edge from its designed location.
- One conventional method for determining EPE may involve a contour identification technique for the feature.
- the contour identification technique may involve a first step of aligning a grayscale SEM image of the pattern area with the binary outlines of a reference contour image.
- the reference contour image may comprise, e.g., a post-processed version of the design file that was used to create the pattern area.
- Such files may take the form of, e.g., GDS, GDSII, OASIS or other circuit pattern file formats (referred to collectively as “GDS”).
- Post-processing may be used to render the GDS file image as a set of feature outlines, as well as to more accurately represent the contours of a true printed pattern (such as by rounding off the feature corners, creating irregular line widths, etc.). By aligning the reference contour, a coarse location of edge features in the grayscale SEM image may be identified.
- a grayscale thresholding operation may be used to perform a fine contour extraction.
- the thresholding operation may include evaluating a series of pixels in the SEM image along a line that runs perpendicular (or “normal”) to the identified coarse edge. When a pixel is reached that has a grayscale intensity value above a certain threshold, the pixel may be considered a fine edge location of the feature. This process may be repeated at multiple lines running perpendicular to the coarse edge to identify a series of edge pixels in the SEM image, thus tracing out a fine contour of the pattern feature edges. Such a technique may be referred to as Contour Extraction via Normal Thresholding (CENT).
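- for illustration only (the disclosure does not provide code), a minimal sketch of this normal-thresholding walk is given below; the helper names, the sampling scheme, and the fixed 8-bit threshold value are assumptions rather than disclosed details:

```python
import numpy as np

def sample_along_normal(image, point, normal, half_len=10.0, step=0.5):
    """Sample grayscale intensities along a line normal to the coarse edge.
    `point` is an (x, y) location on the coarse contour; `normal` is a unit
    vector perpendicular to the contour at that point."""
    ts = np.arange(-half_len, half_len, step)
    xs = np.clip(np.round(point[0] + ts * normal[0]).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(point[1] + ts * normal[1]).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs], np.stack([xs, ys], axis=1)

def cent_edge_point(image, point, normal, threshold=128):
    """Walk along the normal from outside the feature toward the inside and
    return the first pixel whose intensity reaches the threshold; repeated at
    many contour points, this traces out the fine contour."""
    intensities, coords = sample_along_normal(image, point, normal)
    hits = np.nonzero(intensities >= threshold)[0]
    return coords[hits[0]] if hits.size else None
```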
- the CENT technique may offer a fast processing time in use.
- designing a CENT algorithm may be labor-intensive.
- the design may require careful tuning of algorithm parameters, such as threshold value, sensitivity, etc.
- it may be difficult to achieve a satisfactory EPE measurement under this technique, especially when the measured pattern is not highly regular or repetitive.
- random logic layers may be very challenging under conventional CENT techniques because there may be little repetition, and every edge profile within the pattern may include unique shapes or other characteristics. This can result in poor alignment of a GDS file to the pattern, and subsequently a poor thresholding operation during the fine contour extraction step.
- Embodiments of the present disclosure may provide an inspection apparatus and inspection method for accurately extracting an edge contour, even from highly irregular SEM images.
- the method may comprise a pixel-level extraction of edge features in the grayscale SEM image by reference to a binary customized contour image.
- the customized contour image may be generated based on the SEM image and a reference image, such as a GDS or other original design file, using a neural network or other machine learning transformation model.
- the customized contour image may be generated by a series of transformations using the grayscale SEM image and the binary reference image.
- the series of transformations may be designed to calculate an appropriate deformation map, e.g., a mathematical deformation model, that may be applied to the original binary design file to generate the customized contour image.
- the result is a smooth, sharp-edged, binary outline that very closely matches the actual printed patterns in the grayscale SEM image.
- edge locations may be extracted directly from it, or it may be aligned to the original SEM image with pixel-level accuracy to identify fine contour edges. This process may be referred to as Local Alignment for Contour Extraction (LACE).
- LACE Local Alignment for Contour Extraction
- the LACE process above may be used as a final step to directly determine the edge contours in an image.
- the process may comprise an initial step to replace the coarse edge identification step in the CENT process. This is because performance of the CENT process depends heavily on the accuracy of the initial coarse contour. Therefore, the customized contour image of the present disclosure may be used in lieu of the simple corner-rounded post-processed image to achieve an accurate starting point for edge location. Then a thresholding operation may be applied as discussed above to perform the fine edge contour extraction. For example, using the hybrid LACE-CENT process, a customized contour image from one printed pattern may be derived and applied to images of other patterns that were printed from the same file. This hybrid process may combine the pixel-level accuracy of the present disclosure with the speed of thresholding, resulting in a rapid, high-accuracy EPE measurement across a wafer or across multiple wafers.
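- to make the data flow of the LACE process concrete, the following is a schematic sketch only; `model(moving, fixed)` stands in for the machine learning transformation model (assumed here to return a dense per-pixel displacement field), and the backward-warping convention is an implementation assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, deformation):
    """Resample `image` through a dense deformation map of shape (2, H, W),
    holding per-pixel (dy, dx) displacements."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + deformation[0], xx + deformation[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def lace(inspection_img, reference_img, model):
    """Local Alignment for Contour Extraction, schematically."""
    d1 = model(inspection_img, reference_img)  # first alignment: S onto R
    s2r = warp(inspection_img, d1)             # aligned grayscale image "S2R"
    d2 = model(s2r, inspection_img)            # second alignment: S2R onto S
    return warp(reference_img, d2)             # binary customized contour image
```

- in the hybrid LACE-CENT flow described above, the returned customized contour image would then serve as the coarse contour for the fast thresholding pass on sibling images printed from the same file.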
- some embodiments may be described in the context of providing detection systems and detection methods in systems utilizing electron beams (“e-beams”).
- systems and methods for detection may be used in other imaging systems, such as optical imaging, photon detection, proton detection, x-ray detection, ion detection, or the like.
- Photon detection may comprise light in the infrared, visible, UV, DUV, EUV, x-ray, or any other wavelength range. Therefore, while detectors in the present disclosure may be disclosed with respect to electron detection, some embodiments of the present disclosure may be directed to detecting other charged particles or photons.
- the process may be applied to an image obtained by, e.g., scatterometry or other optical metrology processes.
- the process may be used to determine other metrology parameters such as critical dimension (CD), overlay (OVL), line width roughness (LWR), line edge roughness (LER), line end shortening (LES), or the like.
- the process may be used to perform a defect inspection.
- the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a component includes A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component includes A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
- Reference is now made to Fig. 1, which illustrates an exemplary electron beam inspection (EBI) system 10 that may be used for wafer inspection, consistent with embodiments of the present disclosure.
- EBI system 10 includes a main chamber 11, a load/lock chamber 20, an electron beam tool 100 (e.g., a scanning electron microscope (SEM)), and an equipment front end module (EFEM) 30.
- Electron beam tool 100 is located within main chamber 11 and may be used for imaging.
- EFEM 30 includes a first loading port 30a and a second loading port 30b.
- EFEM 30 may include additional loading ports.
- First loading port 30a and second loading port 30b receive wafer front opening unified pods (FOUPs) that contain wafers (e.g., semiconductor wafers or wafers made of other materials) or samples to be inspected (wafers and samples may be collectively referred to as “wafers” herein).
- One or more robotic arms (not shown) in EFEM 30 may transport the wafers to load/lock chamber 20.
- Load/lock chamber 20 is connected to a load/lock vacuum pump system (not shown) which removes gas molecules in load/lock chamber 20 to reach a first pressure below the atmospheric pressure.
- main chamber 11 is connected to a main chamber vacuum pump system (not shown) which removes gas molecules in main chamber 11 to reach a second pressure below the first pressure.
- the wafer is subject to inspection by electron beam tool 100.
- Electron beam tool 100 may be a single-beam system or a multi-beam system.
- a controller 109 is electronically connected to electron beam tool 100, and may be electronically connected to other components as well. Controller 109 may be a computer configured to execute various controls of EBI system 10. While controller 109 is shown in Fig. 1 as being outside of the structure that includes main chamber 11, load/lock chamber 20, and EFEM 30, it is appreciated that controller 109 can be part of the structure.
- controller 109 may include one or more processors (not shown).
- a processor may be a generic or specific electronic device capable of manipulating or processing information.
- the processor may include any combination of any number of a central processing unit (or “CPU”), a graphics processing unit (or “GPU”), an optical processor, a programmable logic controller, a microcontroller, a microprocessor, a digital signal processor, an intellectual property (IP) core, a Programmable Logic Array (PLA), a Programmable Array Logic (PAL), a Generic Array Logic (GAL), a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), a System On Chip (SoC), an Application-Specific Integrated Circuit (ASIC), and any other type of circuit capable of data processing.
- the processor may also be a virtual processor that includes one or more processors distributed across multiple machines or devices coupled via a network.
- controller 109 may further include one or more memories (not shown).
- a memory may be a generic or specific electronic device capable of storing codes and data accessible by the processor (e.g., via a bus).
- the memory may include any combination of any number of a random-access memory (RAM), a read-only memory (ROM), an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a secure digital (SD) card, a memory stick, a compact flash (CF) card, or any type of storage device.
- the codes and data may include an operating system (OS) and one or more application programs (or “apps”) for specific tasks.
- the memory may also be a virtual memory that includes one or more memories distributed across multiple machines or devices coupled via a network.
- the intensity of the secondary particle beams may be determined using a detector.
- the secondary particle beams may form beam spots on a surface of the detector.
- the detector may generate electrical signals (e.g., a current, a charge, a voltage, etc.) that represent intensity of the detected secondary particle beams.
- the electrical signals may be measured with measurement circuitries which may include further components (e.g., analog-to-digital converters) to obtain a distribution of the detected electrons.
- the electron distribution data collected during a detection time window, in combination with corresponding scan path data of the primary electron beam incident on the wafer surface may be used to reconstruct images of the wafer structures or materials under inspection.
- the reconstructed images may be used to reveal various features of the internal or external structures or materials of the wafer and may be used to reveal defects that may exist in the wafer.
- Fig. 2A illustrates a charged particle beam apparatus that may be an example of electron beam tool 100, consistent with embodiments of the present disclosure.
- Fig. 2A shows an apparatus that uses a plurality of beamlets formed from a primary electron beam to simultaneously scan multiple locations on a wafer.
- electron beam tool 100A may comprise an electron source 202, a gun aperture 204, a condenser lens 206, a primary electron beam 210 emitted from electron source 202, a source conversion unit 212, a plurality of beamlets 214, 216, and 218 of primary electron beam 210, a primary projection optical system 220, a wafer stage (not shown in Fig. 2A), multiple secondary electron beams 236, 238, and 240, a secondary optical system 242, and electron detection device 244.
- Electron source 202 may generate primary particles, such as electrons of primary electron beam 210.
- a controller, image processing system, and the like may be coupled to electron detection device 244.
- Primary projection optical system 220 may comprise beam separator 222, deflection scanning unit 226, and objective lens 228.
- Electron detection device 244 may comprise detection sub-regions 246, 248, and 250.
- Electron source 202, gun aperture 204, condenser lens 206, source conversion unit 212, beam separator 222, deflection scanning unit 226, and objective lens 228 may be aligned with a primary optical axis 260 of apparatus 100A.
- Secondary optical system 242 and electron detection device 244 may be aligned with a secondary optical axis 215 of apparatus 100A.
- Electron source 202 may comprise a cathode, an extractor or an anode, wherein primary electrons can be emitted from the cathode and extracted or accelerated to form a primary electron beam 210 with a crossover (virtual or real) 208.
- Primary electron beam 210 can be visualized as being emitted from crossover 208.
- Gun aperture 204 may block off peripheral electrons of primary electron beam 210 to reduce size of probe spots 270, 272, and 274.
- Source conversion unit 212 may comprise an array of image-forming elements (not shown in Fig. 2A) and an array of beam-limit apertures (not shown in Fig. 2A).
- An example of source conversion unit 212 may be found in U.S. Patent No. 9,691,586; U.S. Publication No. 2017/0021543; and International Application No. PCT/EP2017/084429, all of which are incorporated by reference in their entireties.
- the array of image-forming elements may comprise an array of micro-deflectors or micro-lenses.
- the array of image-forming elements may form a plurality of parallel images (virtual or real) of crossover 208 with a plurality of beamlets 214, 216, and 218 of primary electron beam 210.
- the array of beam-limit apertures may limit the plurality of beamlets 214, 216, and 218.
- the adjustable condenser lens may be an adjustable anti-rotation condenser lens, which involves an anti-rotation lens with a movable first principal plane.
- an adjustable condenser lens is further described in U.S. Publication No. 2017/0021541, which is incorporated by reference in its entirety.
- the generated signals may represent intensities of secondary electron beams 236, 238, and 240 and may be provided to an image processing system (e.g., image processing system 199 provided in Fig. 2B below) that is in communication with detection device 244, primary projection optical system 220, and the motorized wafer stage.
- the movement speed of the motorized wafer stage may be synchronized and coordinated with the beam deflections controlled by deflection scanning unit 226, such that the movement of the scan probe spots (e.g., scan probe spots 270, 272, and 274) may orderly cover regions of interest on wafer 230.
- the parameters of such synchronization and coordination may be adjusted to adapt to different materials of wafer 230. For example, different materials of wafer 230 may have different resistance-capacitance characteristics that may cause different signal sensitivities to the movement of the scan probe spots.
- the intensity of secondary electron beams 236, 238, and 240 may vary according to the external or internal structure of wafer 230, and thus may indicate whether wafer 230 includes defects. Moreover, as discussed above, beamlets 214, 216, and 218 may be projected onto different locations of the top surface of wafer 230, or different sides of local structures of wafer 230, to generate secondary electron beams 236, 238, and 240 that may have different intensities. Therefore, by mapping the intensity of secondary electron beams 236, 238, and 240 with the areas of wafer 230, the image processing system may reconstruct an image that reflects the characteristics of internal or external structures of wafer 230.
- apparatus 100B includes a wafer holder 136 supported by motorized stage 134 to hold a wafer 150 to be inspected.
- Electron beam tool 100B includes an electron emitter, which may comprise a cathode 103, an anode 121, and a gun aperture 122.
- Electron beam tool 100B further includes a beam limit aperture 125, a condenser lens 126, a column aperture 135, an objective lens assembly 132, and a detector 144.
- Objective lens assembly 132, in some embodiments, may be a modified SORIL lens, which includes a pole piece 132a, a control electrode 132b, a deflector 132c, and an exciting coil 132d.
- an electron beam 161 emanating from the tip of cathode 103 may be accelerated by the voltage of anode 121, pass through gun aperture 122, beam limit aperture 125, and condenser lens 126, and be focused into a probe spot 170 by the modified SORIL lens to impinge onto the surface of wafer 150.
- Probe spot 170 may be scanned across the surface of wafer 150 by a deflector, such as deflector 132c or other deflectors in the SORIL lens.
- Secondary or scattered particles, such as secondary electrons or scattered primary electrons emanating from the wafer surface, may be collected by detector 144 to determine the intensity of the beam so that an image of an area of interest on wafer 150 may be reconstructed.
- Image acquirer 120 may comprise one or more processors.
- image acquirer 120 may comprise a computer, server, mainframe host, terminals, personal computer, any kind of mobile computing devices, and the like, or a combination thereof.
- Image acquirer 120 may be communicatively coupled with detector 144 of electron beam tool 100B through a medium such as an electrical conductor, optical fiber cable, portable storage media, IR, Bluetooth, internet, wireless network, wireless radio, or a combination thereof.
- Image acquirer 120 may receive a signal from detector 144 and may construct an image. Image acquirer 120 may thus acquire images of wafer 150.
- Image acquirer 120 may also perform various post-processing functions, such as image averaging, generating contours, superimposing indicators on an acquired image, and the like. Image acquirer 120 may be configured to perform adjustments of brightness and contrast, etc. of acquired images.
- Storage 130 may be a storage medium such as a hard disk, random access memory (RAM), cloud storage, other types of computer readable memory, and the like. Storage 130 may be coupled with image acquirer 120 and may be used for saving scanned raw image data as original images, and post-processed images.
- Image acquirer 120 and storage 130 may be connected to controller 109. In some embodiments, image acquirer 120, storage 130, and controller 109 may be integrated together as one electronic control unit.
- image acquirer 120 may acquire one or more images of a sample based on an imaging signal received from detector 144.
- An imaging signal may correspond to a scanning operation for conducting charged particle imaging.
- An acquired image may be a single image comprising a plurality of imaging areas that may contain various features of wafer 150.
- the single image may be stored in storage 130. Imaging may be performed on the basis of imaging frames.
- the condenser and illumination optics of the electron beam tool may comprise or be supplemented by electromagnetic quadrupole electron lenses.
- electron beam tool 100B may comprise a first quadrupole lens 148 and a second quadrupole lens 149.
- the quadrupole lenses may be used for controlling the electron beam.
- first quadrupole lens 148 may be controlled to adjust the beam current
- second quadrupole lens 149 may be controlled to adjust the beam spot size and beam shape.
- Fig. 2B illustrates a charged particle beam apparatus that may use a single primary beam configured to generate secondary electrons by interacting with wafer 150.
- Detector 144 may be placed along optical axis 105, as in the embodiment shown in Fig. 2B.
- the primary electron beam may be configured to travel along optical axis 105.
- detector 144 may include a hole at its center so that the primary electron beam may pass through to reach wafer 150.
- Fig. 2B shows an example of detector 144 having an opening at its center.
- some embodiments may use a detector placed off-axis relative to the optical axis along which the primary electron beam travels. For example, as in the embodiment shown in Fig. 2A, a beam separator 222 may be provided to direct secondary electron beams toward a detector placed off-axis. Beam separator 222 may be configured to divert secondary electron beams by an angle α toward electron detection device 244.
- a detector in a charged particle beam system may include one or more sensing elements.
- the detector may comprise a single-element detector or an array with multiple sensing elements.
- the sensing elements may be configured for charged particle counting. Sensing elements of a detector that may be useful for charged particle counting are discussed in U.S. Publication No. 2019/0379682, which is incorporated by reference in its entirety.
- Sensing elements may include a diode or an element similar to a diode that may convert incident energy into a measurable signal.
- sensing elements in a detector may include a PIN diode.
- sensing elements may be represented as a diode, for example in the figures, although sensing elements or other components may deviate from ideal circuit behavior of electrical elements such as diodes, resistors, capacitors, etc.
- machine learning may be employed in the generation of inspection images, reference images or other images associated with apparatus 100, 100A or 100B.
- a machine learning system may operate in association with, e.g., controller 109, image acquisition unit 199, image acquirer 120, or storage unit 130 of Figs. 1-2B.
- a machine learning system may comprise a discriminative model.
- a machine learning system may include a generative model.
- learning can feature two types of mechanisms: discriminative learning that may be used to create classification and detection algorithms, and generative learning that may be used to actually create models that, in the extreme, can render images.
- a generative model may be configured for generating, from a design clip, an image that resembles a corresponding location on a wafer in a SEM image. This may be performed by 1) training the generative model with design clips and the associated actual SEM images from those locations on the wafer; and 2) using the model in inference mode, feeding it design clips for locations at which simulated SEM images are desired. Such simulated images can be used as reference images in, e.g., die-to-database inspection.
- If the model(s) include one or more discriminative models, the discriminative model(s) may have any suitable architecture and/or configuration known in the art.
- Discriminative models, also called conditional models, are a class of models used in machine learning for modeling the dependence of an unobserved variable “y” on an observed variable “x.” Within a probabilistic framework, this may be done by modeling a conditional probability distribution P(y|x), which can be used for predicting y based on x. Discriminative models, as opposed to generative models, may not allow one to generate samples from the joint distribution of x and y. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models may yield superior performance. On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily be extended to unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model.
- a generative model can be generally defined as a model that is probabilistic in nature.
- a “generative” model is not one that performs forward simulation or rule-based approaches and, as such, it may not be necessary to model the physics of the processes involved in generating an actual image or output (for which a simulated image or output is being generated). Instead, the generative model can be learned (in that its parameters can be learned) based on a suitable training set of data.
- Such generative models may have a number of advantages for the embodiments described herein.
- the generative model may be configured to have a deep learning architecture in that the generative model may include multiple layers, which may perform a number of algorithms or transformations. The number of layers included in the generative model may depend on the particular use case. For practical purposes, a suitable range of layers is from 2 layers to a few tens of layers.
- Deep learning is a type of machine learning.
- Machine learning can be generally defined as a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.
- Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.
- machine learning can be defined as the subfield of computer science that “gives computers the ability to learn without being explicitly programmed.”
- Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs.
- the machine learning described herein may be further performed as described in “Introduction to Statistical Machine Learning,” by Sugiyama, Morgan Kaufmann, 2016, 534 pages; “Discriminative, Generative, and Imitative Learning,” Jebara, MIT Thesis, 2002, 212 pages; and “Principles of Data Mining (Adaptive Computation and Machine Learning)” Hand et al., MIT Press, 2001, 578 pages; which are incorporated by reference as if fully set forth herein.
- the embodiments described herein may be further configured as described in these references.
- a machine learning system may comprise a neural network.
- a model may be a deep neural network with a set of weights that model the world according to the data that it has been fed to train it.
- Neural networks can be generally defined as a computational approach which is based on a relatively large collection of neural units loosely modeling the way a biological brain solves problems with relatively large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be enforcing or inhibitory in their effect on the activation state of connected neural units.
- a model may comprise a convolutional and deconvolutional neural network.
- the embodiments described herein can take advantage of learning concepts such as convolutional and deconvolutional neural networks to solve the normally intractable representation-conversion problem (e.g., rendering).
- the model may have any convolutional and deconvolutional neural network configuration or architecture known in the art.
- Figs. 3A-B schematically illustrate contour extraction techniques 300 and 301 in a metrology process, such as an edge placement error (EPE) measurement, according to a comparative embodiment.
- an inspection image 351 of a feature area on a sample may be acquired.
- the image 351 may comprise, e.g., a charged particle image such as a SEM image.
- the feature area may correspond to, e.g., a field of view of the SEM, a portion of a field of view, a plurality of fields of view, etc.
- the sample may comprise, e.g., a semiconductor wafer comprising a printed circuit pattern.
- a further step 301 of normal thresholding may be performed to determine a fine contour extraction.
- the small sample region 383 of Fig. 3A is enlarged at the left side of Fig. 3B to illustrate the fine contour extraction process.
- a further enlarged section of sample region 383 is shown on the right.
- Individual pixel intensities may be evaluated along a series of normal lines 384 running perpendicular to the coarse edge lines formed by reference image 381.
- as illustrated in chart 386, when running along a normal line starting from the outside of a feature and moving toward the inside, a pixel is selected at the point where the intensity value reaches a prescribed threshold. This selected pixel may be set as one point along an edge of the feature.
- a fine edge 385 may be extracted as seen on the right in Fig. 3B.
- this contour-extraction process 301 may be iterative. For example, a first series of lines 384 running normal to the outlines of reference image 381 may be evaluated to determine an intermediate contour 385a. Then, a second series of lines 384 running normal to the intermediate contour 385a may be evaluated to extract a fine contour 385b.
- the thresholds used to extract intermediate contour 385a and fine contour 385b may be the same or may be different. In general, the process may be iterated as many times as is desired to optimize the extraction process.
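- as a usage sketch building on the hypothetical `cent_edge_point` helper above, the iteration might be expressed as follows; the contour representation and the choice to reuse prior normals are simplifying assumptions:

```python
def iterative_extraction(image, coarse_pts_and_normals, thresholds=(128, 128)):
    """Each round re-extracts edge points normal to the previous round's
    contour; per-round thresholds may be the same or may differ.
    `coarse_pts_and_normals` is a list of ((x, y), (nx, ny)) pairs, and
    `cent_edge_point` is the per-point walk sketched earlier; recomputing
    normals from the refined points is elided here for brevity."""
    pts_and_normals = coarse_pts_and_normals
    for t in thresholds:  # e.g., one intermediate pass, then one fine pass
        new_pts = [cent_edge_point(image, p, n, threshold=t)
                   for p, n in pts_and_normals]
        # Keep each prior normal with its refined point (a simplification).
        pts_and_normals = [(p, n) for p, (_, n) in zip(new_pts, pts_and_normals)
                           if p is not None]
    return [p for p, _ in pts_and_normals]
```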
- Embodiments of the present disclosure provide systems and methods for producing a customized contour image.
- the customized contour image may be used to directly extract an edge location in a metrology process, or may be used as an initial coarse contour in a normal thresholding operation.
- the customized contour image may replace the reference image 381 in contour-extraction process 301 above.
- Fig. 4 illustrates an edge extraction process 400, consistent with embodiments of the present disclosure.
- the edge extraction process may be used in, e.g., a metrology process such as, e.g., EPE measurement.
- the process may be applied to other measurements, such as critical dimension (CD), overlay (OVL), linewidth roughness (LWR), line edge roughness (LER), line end shortening (LES), or other parameters in semiconductor manufacturing.
- Defects generated in an etching process may include, e.g., etching residue defects, over-etching defects, and open-circuit defects.
- Defects generated in a CMP process may include, e.g., slurry residue defects, dishing defects, erosion defects due to variance in polishing rates, and scratches due to polishing.
- Defects generated in an interconnection forming process may include, e.g., broken line defects, void defects, extrusion defects, and bridge defects.
- Defect inspection parameters may include any measured features that may be used to identify or otherwise characterize defects such as those discussed above. For example, in some embodiments a value of a defect inspection parameter may include a value of the size, depth, height, surface roughness, edge roughness, or other characteristic of the defect.
- an inspection image 451 of a feature area on a sample may be acquired in a similar manner to inspection image 351 of Fig. 3A.
- Image 451 may be a grayscale image.
- Image 451 may comprise, e.g., a charged particle image such as a SEM image.
- a reference image 481 may also be acquired.
- Reference image 481 may comprise, e.g., a design file used to create the printed patterns or another suitable reference image.
- the design file may be subjected to post processing as discussed above with respect to Figs. 3A-B.
- the reference image may correspond to an original design file.
- Inspection image 451 may be aligned to reference image 481 by a transformation model in a first alignment process 401.
- the transformation model may comprise a neural network or other machine learning model as further discussed with respect to Fig. 5 below.
- the transformation model may be configured to calculate a first deformation map in the first alignment process 401.
- the first deformation map may be used to locally deform the shapes and locations of pattern elements in inspection image 451 so that they are mapped precisely onto the corresponding pattern elements in reference image 481.
- the pattern elements of inspection image 451 may be deformed to align with their corresponding elements in reference image 481 substantially at or near the pixel level.
- the pattern elements of inspection image 451 may be deformed to align with their corresponding elements in reference image 481 substantially at or near the resolution limit of the imaging system used to acquire inspection image 451.
- First alignment process 401 may generate an aligned image 491, as seen at the upper right corner in Fig. 4.
- Inspection image 451 (such as a SEM image) may be alternatively denoted “S,” and reference image 481 may be alternatively denoted “R,” for the purpose of establishing a naming convention for derivative images that are generated based on these transformations.
- the alignment of grayscale inspection image S onto binary reference image R may generate an aligned grayscale inspection image S2R.
- the reverse operation of aligning the binary reference image R onto the grayscale inspection image S would generate an aligned binary reference image R2S.
- the aligned grayscale image 491 may then be re-aligned to the original inspection image 451 in a second alignment process 402.
- second alignment process 402 may use the same transformation model to calculate a second deformation map.
- the second deformation map may be used to locally deform the shapes and locations of pattern elements in aligned grayscale image 491 so that they are mapped precisely onto the corresponding pattern elements in inspection image 451.
- the resulting re-aligned grayscale image 492 may be referred to as [S2R]2S.
- the key element of second alignment process 402 lies in deriving the second deformation map rather than actually generating the [S2R]2S image. Therefore, in some embodiments, the second deformation map may be calculated by other means.
- the second deformation map may be generated based on the first deformation map, such as by taking an inverse of the first deformation map, or by performing another transformation on the first deformation map.
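- purely as an illustration of “taking an inverse” of a dense displacement field, a standard fixed-point iteration from the image-registration literature could be used; this is one known technique, not one prescribed by the disclosure:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_deformation(u, iterations=20):
    """Approximately invert a dense displacement field u of shape (2, H, W)
    by the fixed-point iteration v(x) = -u(x + v(x))."""
    _, h, w = u.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    v = np.zeros_like(u)
    for _ in range(iterations):
        coords = grid + v  # evaluate u at the displaced positions x + v(x)
        v = -np.stack([map_coordinates(u[i], coords, order=1, mode="nearest")
                       for i in range(2)])
    return v
```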
- a third alignment process 403 may be performed. Unlike the first and second alignment processes 401 and 402, third alignment process 403 may not comprise feeding two images into the transformation model. Instead, third alignment process 403 may comprise applying the second deformation map to the original reference image 481. In this way, a reference binary image may be transformed into an aligned binary image 493a using a deformation map that was derived by aligning two grayscale images to each other. This operation may achieve a smooth, binary contour that very closely matches the actual edges of the features captured in inspection image 451.
- the aligned binary image 493a may be represented in outline form as customized contour image 493b.
- Customized contour image 493b may be overlaid on original inspection image 451 for performing edge contour extraction.
- all the information needed for edge extraction or other metrology processes may be present in the solid image 493a, without a need for further comparison to the original inspection image 451.
- edge extraction may be performed directly from the information contained in aligned binary image 493a, the image 493a may also be referred to as a customized contour image.
- customized contour images 493a/b may be used as an initial coarse alignment in a normal thresholding process as discussed above with respect to Fig. 3B.
- a single customized contour image may be applied to a plurality of similarly patterned samples, such as a plurality of identically patterned die regions on a wafer.
- a single transformation process according to Fig. 4 may be performed on a representative inspection image to derive a strong initial coarse contour.
- minor die-to-die variations may be corrected by a rapid normal thresholding process, such as that disclosed in Fig. 3B.
- the final customized contour image, which requires multiple transformations, may appear similar to an aligned binary reference image R2S that could be achieved with a single transformation. However, for reasons discussed below with respect to Fig. 5, it may be more desirable to perform the series of alignment transformations of process 400 rather than performing a direct alignment transformation R2S.
- Fig. 5 schematically illustrates a machine learning system 500 for performing the alignment transformations discussed above, consistent with embodiments of the present disclosure.
- An alignment process may be performed using an encoder-decoder network, such as a deep neural network (DNN) 595 comprising a set of weights w, or another machine learning model.
- the encoder-decoder network 595 may be configured to encode into, and decode out of, a latent space.
- the alignment process may be iterated to find an optimized test weighting for the encoder-decoder network 595.
- the encoder and decoder of the network may be set to operate with an initial test weighting.
- the initial test weighting can be selected using a variety of methods. For example, all values may be set to maximum, minimum or mid-range values, to random values or to values obtained from a previous use of the method.
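- the disclosure does not fix an architecture; as one hypothetical PyTorch sketch, an encoder-decoder that maps an image pair to a two-channel deformation map (with arbitrary layer sizes, and framework-default initialization standing in for the initial test weighting) might look like:

```python
import torch
import torch.nn as nn

class AlignNet(nn.Module):
    """Toy encoder-decoder: a concatenated (moving, fixed) image pair goes in,
    and a dense two-channel deformation map (per-pixel dy, dx) comes out."""

    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(  # encode into a latent space
            nn.Conv2d(2, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # decode out of the latent space
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 2, 4, stride=2, padding=1),
        )

    def forward(self, moving, fixed):
        latent = self.encoder(torch.cat([moving, fixed], dim=1))
        return self.decoder(latent)  # deformation map of shape (N, 2, H, W)
```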
- an inspection image 551 and a reference image 581 may be input for alignment to generate an aligned image 599.
- a binary reference image 581 may be aligned directly to a grayscale inspection image 551 to generate an “R2S”-type aligned image 599.
- a grayscale inspection image 551 may be aligned to a binary reference image 581 to generate an “S2R”-type aligned image 599.
- the inspection and reference images 551/581 may be encoded, using the encoder, into a latent space to form an encoding.
- the encoding may be decoded to form a deformation map 596 indicative of a difference between inspection image 551 and reference image 581.
- the inspection image is spatially transformed by deformation map 596 to obtain an aligned image 599.
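- a minimal sketch of the spatial-transform step, assuming the deformation map holds per-pixel (dy, dx) offsets in an (N, 2, H, W) tensor and using bilinear grid sampling:

```python
import torch
import torch.nn.functional as F

def spatial_transform(image, deformation):
    """Warp `image` (N, 1, H, W) by `deformation` (N, 2, H, W), interpreted as
    per-pixel (dy, dx) sampling offsets, using bilinear grid sampling."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    new_x = xs + deformation[:, 1]  # where each output pixel samples from
    new_y = ys + deformation[:, 0]
    grid = torch.stack([2 * new_x / (w - 1) - 1,  # normalize to [-1, 1]
                        2 * new_y / (h - 1) - 1], dim=-1)
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)
```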
- a loss function 597 may be determined.
- the loss function 597 may be at least partially defined by a similarity metric which is obtained by comparing the aligned image to the reference image.
- the loss metric may be obtained by inputting the reference image 581 and the aligned image 599 into a discriminator network that outputs values depending on the similarity of the images. For example, the network may output values close to 0 for similar inputs and close to 1 for inputs that are significantly different. Of course, any suitable similarity metric may be used.
- the loss function may take the form of the following equation: $\mathcal{L}(w) = -\,\mathrm{CC}\bigl(f,\ m \circ \Phi(x, w)\bigr) + \lambda\,\lVert \nabla \Phi(x, w) \rVert^{2}$, in which w is a particular weighting, λ represents the strength of a smoothness prior, Φ(x, w) represents the deformation map, CC represents a cross-correlation calculation, f is the reference image, and m is the inspection image.
- the loss function may also be at least partially defined by a smoothness metric, which is defined by the smoothness of the deformation map 596. Accordingly, the step of determining the loss function 597 may further comprise determining a smoothness metric of the deformation map 596.
- the smoothness metric may be defined by any suitable measurement of the deformation map 596, which is representative of smoothness. In an example, the smoothness metric is at least partially defined by the spatial gradients of the deformation map 596. Images of semiconductor substrates that are obtained using a SEM are known to display distortions of the first, second, and sometimes third order.
- the smoothness metric may be used, together with the similarity metric, to optimize the weighting of the encoder-decoder network such that an appropriate deformation map 596 can be generated.
- Higher-frequency distortions can be due to actual differences in the measured geometry of the inspection image 551 when compared to the reference image (for example, if the inspection image 551 and reference image 581 are obtained from different locations on a substrate), or due to noise, and so it may not be desirable to form a deformation map 596 that corrects for these differences. This ensures that the deformation map 596 is indicative of the distortions in the image, rather than other differences between the reference image 581 and the inspection image 551.
- the aligned image may have some differences from the reference image, for example if the aligned and reference images are obtained from different places on a substrate or are derived from different modalities (e.g., comparing an SEM image to a mask image, GDSII, or a simulated image).
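- combining the similarity and smoothness terms, a sketch of the loss function is shown below; a global normalized cross-correlation stands in for CC, and the value of λ is an arbitrary illustrative choice:

```python
import torch

def alignment_loss(fixed, warped, deformation, lam=0.01):
    """-CC(f, m ∘ Φ) + λ‖∇Φ‖²: a normalized cross-correlation similarity term
    plus a smoothness penalty on the spatial gradients of the deformation map.
    `lam` (the smoothness strength λ) is an arbitrary illustrative value."""
    f = fixed - fixed.mean()
    m = warped - warped.mean()
    cc = (f * m).sum() / (f.norm() * m.norm() + 1e-8)  # similarity metric
    dy = deformation[:, :, 1:, :] - deformation[:, :, :-1, :]
    dx = deformation[:, :, :, 1:] - deformation[:, :, :, :-1]
    smooth = (dy ** 2).mean() + (dx ** 2).mean()       # smoothness metric
    return -cc + lam * smooth
```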
- a termination condition can be one or more of the following conditions: a predetermined value for the loss function 597 has been achieved; the improvement in the loss function compared to previous iterations is below a predetermined value; a local minimum in the loss function has been found; and a predetermined number of iterations has been performed.
- test weighting may be adjusted, and the process described above may be repeated with a different test weighting.
- the values of the test weighting may be adjusted in a manner that is predicted to minimize the loss function 597.
- a random component may also be added to prevent the optimization routine becoming trapped in a local minimum.
- an optimized weighting may be determined as the test weighting having an optimized loss function.
- the weighting of the encoder-decoder network is then set as the optimized weighting and the alignment process may terminate. Further discussion of transformation methods that are applicable to embodiments of the present disclosure may be found in U.S. Patent Publication No. 2023/0036630, the entirety of which is incorporated by reference.
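- putting the pieces together, and reusing the hypothetical `spatial_transform` and `alignment_loss` sketches above, the iterated weighting adjustment with a simple termination condition might look like the following (the random component mentioned above is omitted for brevity, and both thresholds are illustrative):

```python
import torch

def optimize_alignment(model, moving, fixed, max_iters=500, tol=1e-5):
    """Iteratively adjust the test weighting to reduce the loss, stopping when
    the improvement falls below `tol` or `max_iters` is reached."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    prev_loss = float("inf")
    for _ in range(max_iters):
        deformation = model(moving, fixed)
        warped = spatial_transform(moving, deformation)
        loss = alignment_loss(fixed, warped, deformation)
        if prev_loss - loss.item() < tol:  # improvement below predetermined value
            break
        prev_loss = loss.item()
        opt.zero_grad()
        loss.backward()
        opt.step()                         # adjust weighting to reduce the loss
    return model
```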
- a series of transformations may be used to calculate a second deformation map for generating the customized contour image 493, rather than generating a direct “S2R”-type aligned image as shown at Fig. 5.
- the close-up of aligned image 599 in Fig. 5 illustrates why.
- while the global contours may appear acceptable when performing a direct “S2R”-type alignment, a pixel-level view reveals local noise introduced when the model attempts to map the binary form onto gradient images. These noisy edges may defeat the purpose of performing a pixel-level alignment operation.
- the first and second alignment processes 401 and 402 of Fig. 4 may be performed. This may be used to determine the appropriate deformation map 596 that will transform a reference image to generate smooth, binary contours that very closely match the features of the inspection image.
- a machine learning system may include discriminative or generative models as described above.
- Figs. 6A-E schematically illustrate an example of a challenging edge placement measurement, as performed using techniques according to a comparative embodiment and embodiments of the present disclosure.
- an inspection image 651 of a portion of circuit pattern 654 comprises an intricate series of horizontal and vertical lines.
- the circuit 654 may form a part of, e.g., a memory structure or other feature having intricate patterns.
- Figs. 6B-D illustrate a contour extraction 685 as achieved in a comparative embodiment.
- the contour extraction of Figs. 6B-D may be performed using the processes 300-301 described with respect to Figs. 3A-B above.
- Contour extraction according to a comparative embodiment may be performed in steps to capture the vertical and horizontal patterns.
- a first intermediate contour 685v may be extracted for the vertical lines by applying a vertical line pattern as a coarse contour and performing a first normal thresholding operation. The result may be a poor initial matching with further interference from the horizontal components of the pattern.
- a second intermediate contour 685h may be extracted for the horizontal lines by applying a horizontal line pattern as a coarse contour and performing a second normal thresholding operation, again with similar results.
- the two contours may be further processed to achieve a combined extracted contour 685h-v (as shown in Fig. 6D) that fails to adequately capture the actual edge contours of the printed pattern.
- Fig. 6E schematically illustrates a contour extraction of the same inspection image 651 using a customized contour image 693, consistent with embodiments of the present disclosure.
- a single set of transformations (such as using process 400 of Fig. 4) may generate the customized contour image 693 for both horizontal and vertical lines with pixel-level accuracy.
- a customized contour may be used as an initial coarse contour in a thresholding operation, thus eliminating the distorted edges introduced by poor initial contours and horizontal-vertical interference.
- Fig. 7 schematically illustrates example use cases of contour extraction, consistent with embodiments of the present disclosure.
- the contour extraction techniques may be applied to, e.g., metrology measurements such as critical dimension (CD), overlay (OVL), linewidth roughness (LWR), line edge roughness (LER), line end shortening (LES), or other parameters in semiconductor manufacturing.
- a critical pattern monitoring operation 710 may be used for, e.g., evaluating the overlay of repeated pattern images 751a, 751b, etc.
- the contour extraction may be applied in a dual layer contour stacking operation 720 to identify minimum overlay margins between neighboring pattern pairs.
- the contour extraction may be applied in a single layer contour stacking operation 730. Single layer stacking may be used to identify, e.g., a maximum LEPE per pattern type.
- embodiments of the present disclosure may be applicable to any type of metrology or other operation as understood by a person of ordinary skill in the art.
- Fig. 8 schematically illustrates a flowchart of an example method 800 of contour extraction, consistent with embodiments of the present disclosure.
- the method may be performed according to embodiments disclosed in, e.g., Figs. 1-2B and 3A-7.
- method 800 may be performed using a controller, such as controller 109 in Fig. 1, or image acquisition unit 199 in Fig. 2B.
- an inspection image of a feature area on a sample may be acquired.
- the inspection image may comprise, e.g., a charged particle image.
- the inspection image may comprise an electron beam image, such as a SEM image.
- the inspection image may comprise an optical inspection image such as an image acquired through an optical inspection tool.
- the inspection image may comprise, e.g., inspection image 451 of Fig. 4.
- the inspection image may comprise a grayscale image comprising pixels of varying intensity.
- the feature area may correspond to, e.g., a field of view of a SEM or other inspection apparatus, a portion of a field of view, a plurality of fields of view, etc.
- the sample may comprise, e.g., a semiconductor wafer comprising a printed circuit pattern.
- the grayscale inspection image may be aligned to a reference image in a first alignment process.
- the reference image may be a binary image based on, e.g., a design file used to create the printed patterns in the inspection image.
- the reference image may comprise, e.g., reference image 481 of Fig. 4.
- the first alignment process may comprise deforming pattern elements in the inspection image to conform to the corresponding elements in the reference image.
- the alignment may be performed using a transformation model comprising a neural network or other machine learning model.
- the first alignment process may comprise applying local as well as global deformations according to a first deformation map calculated during the first alignment process.
- the first deformation map may correspond in form to deformation map 596 of Fig. 5.
- the first alignment process may generate a first grayscale aligned image.
- the first grayscale aligned image may comprise aligned image 491 of Fig. 4.
- the first grayscale aligned image may be re-aligned to the original inspection image in a second alignment process.
- the second alignment process may utilize the same transformation model as was used in the first alignment process.
- the second alignment process may generate a second grayscale aligned image by deforming pattern elements in the first grayscale aligned image to conform to the corresponding elements in the inspection image.
- the second grayscale aligned image may comprise re-aligned grayscale image 492 of Fig. 4.
- a second deformation map may be calculated based on the second alignment process.
- the second deformation map may also correspond in form to deformation map 596 of Fig. 5.
- the second deformation map may be applied to the reference image to generate a customized contour image.
- the reference image may be transformed into a customized contour image containing pixel-level information about edge features in the patterns of the inspection image.
- the customized contour image may comprise customized contour image 493a or 493b of Fig. 4.
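- for illustration only, a minimal sketch of the two-pass alignment flow described above follows; `align` (a hypothetical learned registration model returning an aligned image and a deformation map) and `warp` (a hypothetical dense resampler) are invented names, not part of the disclosure.

```python
def lace_customized_contour(inspection_img, reference_img, align, warp):
    """Sketch of the two-pass flow: align(moving, fixed) is assumed to
    return (aligned_moving, deformation_map); warp(image, dmap) is assumed
    to apply a dense per-pixel displacement field."""
    # First alignment: deform the grayscale inspection image to conform to
    # the binary reference image (the first deformation map is a by-product).
    aligned_img, _first_dmap = align(inspection_img, reference_img)

    # Second alignment: deform the aligned image back onto the original
    # inspection image; this pass yields the second deformation map.
    _realigned_img, second_dmap = align(aligned_img, inspection_img)

    # Apply the second deformation map to the binary reference image to
    # produce a sharp, binary customized contour image whose outlines
    # follow the printed pattern edges.
    return warp(reference_img, second_dmap)
```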
- contour extraction may be performed using the customized contour image.
- contour information may be directly extracted from the customized contour image.
- the contour information may be contained entirely in customized contour image 493a or 493b of Fig. 4.
- contour extraction may be performed by comparing the customized contour image to the inspection image.
- the customized contour image may be used as an initial coarse contour. Then a fine contour extraction may be performed using a thresholding operation as discussed with respect to Fig. 3B above.
- a parameter value may be measured based on the extracted contour.
- parameter values may comprise, e.g., an edge placement error value, an overlay value, a critical dimension value, a linewidth value, a linewidth roughness value, a line edge roughness value, a line end shortening value, etc.
- the parameter value may comprise any observable dimensional parameter on a sample that may be determined using contour extraction or other edge determination.
- measuring a parameter value may comprise performing one or more of the operations depicted at Fig. 7.
- an adjustment may be performed based on the measured parameter value.
- the adjustment may comprise a tuning or correction to a process or apparatus involved in manufacturing the measured sample.
- the adjustment may be performed to a lithography or other semiconductor manufacturing apparatus or process based on the measured parameter value, such as by comparison to a target, threshold, or previously measured parameter value.
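- for illustration only, a toy feedback rule of this kind might compare a measured critical dimension to its target and propose an exposure-dose correction; the tolerance and linear gain below are invented placeholders, as real dose-to-CD models are tool- and process-specific.

```python
def propose_adjustment(measured_cd_nm, target_cd_nm, tolerance_nm=0.5, gain=0.1):
    """Toy feedback rule: flag a dose correction when the measured critical
    dimension drifts beyond tolerance. The linear gain is purely
    illustrative."""
    error = measured_cd_nm - target_cd_nm
    if abs(error) <= tolerance_nm:
        return None  # within tolerance; no adjustment needed
    return {"dose_correction_pct": -gain * error}
```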
- a non-transitory computer-readable medium may be provided that stores instructions for a processor of a controller (e.g., controller 109 in Fig. 1, or image acquisition unit 199 in Fig. 2B) for detecting charged particles according to, e.g., systems 301-500 of Figs. 3B-5, or the example method 800 of Fig. 8, consistent with embodiments of the present disclosure.
- the instructions stored in the non-transitory computer-readable medium may be executed by the circuitry of the controller for performing measurements according to systems 301-500 or method 800 in part or in entirety.
- non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape or any other magnetic data storage medium, a Compact Disc Read-Only Memory (CD-ROM) or any other optical data storage medium, any physical medium with patterns of holes, a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a FLASH-EPROM or any other flash memory, a Non-Volatile Random Access Memory (NVRAM), a cache, a register, any other memory chip or cartridge, and networked versions of the same.
- a non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of an apparatus to cause the apparatus to perform operations comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
- the deformation map comprises a second deformation map
- the encoder-decoder network decodes the first encoding to generate a first deformation map.
- the encoder-decoder network encodes the aligned image and the inspection image into a latent space to form a second encoding.
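- for illustration only, one way such an encoder-decoder could be organized is sketched below in PyTorch: the image pair is concatenated, encoded into a latent representation, and decoded into a two-channel (x/y displacement) deformation map. All layer sizes are illustrative assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Minimal encoder-decoder sketch. A stacked (moving, fixed) image pair
    is encoded into a latent representation and decoded into a 2-channel
    deformation map of per-pixel x/y displacements. Input height/width are
    assumed divisible by 4."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, moving, fixed):
        # Concatenate the pair along the channel axis, encode to a latent
        # space, then decode into a dense deformation map.
        latent = self.encoder(torch.cat([moving, fixed], dim=1))
        return self.decoder(latent)
```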
- the loss function comprises the form:

  L(w) = −CC(f, m ∘ D(x, w)) + λ Σ_x ‖∇D(x, w)‖²

  wherein w represents a weighting value, λ represents a strength of a smoothness prior, D(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
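- for illustration only, a loss of the stated form might be coded as follows; a global normalized cross-correlation is used for brevity (a windowed, local cross-correlation is common in registration practice), and the warped image m ∘ D(x, w) is assumed to be computed upstream. Names and the default λ are assumptions.

```python
import torch

def registration_loss(fixed, warped_moving, deformation, lam=0.01):
    """Sketch of the stated loss: negative cross-correlation between the
    reference image f (fixed) and the deformed inspection image m∘D
    (warped_moving), plus a lambda-weighted smoothness prior on the
    gradients of the deformation map."""
    f = fixed - fixed.mean()
    m = warped_moving - warped_moving.mean()
    cc = (f * m).sum() / (f.norm() * m.norm() + 1e-8)  # CC(f, m∘D)

    # Smoothness prior: squared finite differences of the deformation map
    # approximate the squared norm of its spatial gradient.
    dx = deformation[..., :, 1:] - deformation[..., :, :-1]
    dy = deformation[..., 1:, :] - deformation[..., :-1, :]
    smoothness = (dx ** 2).mean() + (dy ** 2).mean()

    return -cc + lam * smoothness
```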
- extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
- An inspection method comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
- the loss function comprises the form:

  L(w) = −CC(f, m ∘ D(x, w)) + λ Σ_x ‖∇D(x, w)‖²

  wherein w represents a weighting value, λ represents a strength of a smoothness prior, D(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
- extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
- the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
- a charged particle beam apparatus comprising: a charged particle beam source configured to generate a beam of primary charged particles; a charged particle optical system configured to direct the beam of primary charged particles at a pattern region on a wafer; a controller comprising one or more processors and configured to cause the charged particle beam apparatus to perform operations comprising: irradiating a surface of the wafer with the beam to cause charged particles to be emitted from the surface; detecting the charged particles on a charged particle detector of the charged particle beam apparatus to produce a charged particle beam inspection image of the pattern region on the wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
- the loss function comprises the form:

  L(w) = −CC(f, m ∘ D(x, w)) + λ Σ_x ‖∇D(x, w)‖²

  wherein w represents a weighting value, λ represents a strength of a smoothness prior, D(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
- the deformation map comprises a second deformation map
- aligning the inspection image to the reference image in the first alignment process comprises generating a first deformation map
- extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
- the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
- Block diagrams in the figures may illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer hardware or software products according to various exemplary embodiments of the present disclosure.
- each block in a schematic diagram may represent certain arithmetical or logical operation processing that may be implemented using hardware such as an electronic circuit.
- Blocks may also represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical functions. It should be understood that in some alternative implementations, functions indicated in a block may occur out of the order noted in the figures.
- a charged particle inspection system may be but one example of a charged particle beam system consistent with embodiments of the present disclosure.
Abstract
A charged particle beam inspection method for edge placement error detection includes acquiring a grayscale inspection image and performing a series of transformations between the inspection image and a binary reference image to calculate a deformation map. The deformation map may then be applied to the binary reference image to generate a binary customized contour image that matches the edge locations of patterns in the inspection image.
Description
LEARNING-BASED LOCAL ALIGNMENT FOR EDGE PLACEMENT METROLOGY
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Application No. 63/534,114, filed August 22, 2023, which is incorporated herein by reference in its entirety.
FIELD
[0002] The description herein relates to measurement schemes that may be useful in the field of charged particle beam systems, and more particularly, to systems and methods that may be applicable to charged particle inspection systems such as scanning electron microscope (SEM) tools.
BACKGROUND
[0003] Inspection and metrology systems may be used for sensing physically observable phenomena. For example, charged particle beam tools, such as electron microscopes, may comprise detectors that receive charged particles projected from a sample and that output detection signals. Detection signals may be used to reconstruct images of sample structures under inspection and may be used for, e.g., metrology, overlay, or defect inspection.
[0004] For example, most semiconductor devices require a plurality of pattern layers to be formed and transferred onto the substrate. For proper functioning of the device, there is usually a limit on the tolerable error in the positioning of the edges of features. This parameter is typically quantified as an edge placement error (EPE). EPE can arise because of errors in the relative positioning of successive layers, known as overlay, or due to errors in the dimensions (specifically the critical dimension or CD) of features. With the continual desire in the lithographic art to reduce the size of features to be formed on a semiconductor wafer or other sample, the limits on allowable EPE are becoming stricter, and the requirements on the precision, accuracy and speed of measuring quantities such as EPE are increasing.
SUMMARY
[0005] Some embodiments of the present disclosure provide a non-transitory computer-readable medium. The non-transitory computer-readable medium may store a set of instructions that is executable by at least one processor of an apparatus. The instructions may cause the apparatus to perform operations comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor
manufacturing apparatus based on the measured parameter value. Some embodiments of the present disclosure provide an inspection method comprising the operations discussed above.
[0006] Some embodiments of the present disclosure provide a charged particle beam apparatus. The charged particle beam apparatus may comprise: a charged particle beam source configured to generate a beam of primary charged particles; a charged particle optical system configured to direct the beam of primary charged particles at a pattern region on a wafer; a controller comprising one or more processors, and configured to cause the charged particle beam apparatus to perform operations comprising: irradiating a surface of the wafer with the beam to cause charged particles to be emitted from the surface; detecting the charged particles on a charged particle detector of the charged particle beam apparatus to produce a charged particle beam inspection image of the pattern region on the wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The above and other aspects of the present disclosure will become more apparent from the description of exemplary embodiments, taken in conjunction with the accompanying drawings.
[0008] Fig. 1 is a diagrammatic representation of an exemplary electron beam inspection (EBI) system, consistent with embodiments of the present disclosure.
[0009] Figs. 2A-B are diagrams illustrating a charged particle beam apparatus that may be an example of an electron beam tool, consistent with embodiments of the present disclosure.
[0010] Figs. 3A-B are diagrammatic representations of an example contour extraction, according to a comparative embodiment.
[0011] Fig. 4 is a diagrammatic representation of an example contour extraction, consistent with embodiments of the present disclosure.
[0012] Fig. 5 is a diagrammatic representation of an example transformation model, consistent with embodiments of the present disclosure.
[0013] Fig. 6A is a diagrammatic representation of an example inspection image, consistent with embodiments of the present disclosure.
[0014] Figs. 6B-D are diagrammatic representations of an example contour extraction of the inspection image of Fig. 6A, according to a comparative embodiment.
[0015] Fig. 6E is a diagrammatic representation of an example contour extraction of the inspection image of Fig. 6A, consistent with embodiments of the present disclosure.
[0016] Fig. 7 is a diagrammatic representation of example use cases of contour extraction, consistent with embodiments of the present disclosure.
[0017] Fig. 8 is a flowchart illustrating an example method that may be useful for contour extraction, consistent with embodiments of the present disclosure.
DETAILED DESCRIPTION
[0018] Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses, systems, and methods consistent with aspects related to subject matter that may be recited in the appended claims. For example, although some embodiments are described in the context of utilizing charged-particle beams (e.g., electron beams), the disclosure is not so limited. Other types of beams (e.g., photon beams) may be similarly applied. Furthermore, other imaging systems may be used, such as optical imaging, photodetection, x-ray detection, or the like.
[0019] Electronic devices are constructed of circuits formed on a piece of silicon called a substrate. Many circuits may be formed together on the same piece of silicon and are called integrated circuits or ICs. With advancements in technology, the size of these circuits has decreased dramatically so that many more of them can fit on the substrate. For example, an IC chip in a smart phone can be as small as a fingernail and yet may include over 2 billion transistors, the size of each transistor being less than 1/1,000th the width of a human hair.
[0020] Making these ICs with extremely small structures or components is a complex, time-consuming, and expensive process, often involving hundreds of individual steps. Errors in even one step have the potential to result in defects in the finished IC, rendering it useless. Thus, one goal of the manufacturing process is to avoid such defects to maximize the number of functional ICs made in the process, that is, to improve the overall yield of the process.
[0021] One component of improving yield is monitoring the chip making process to ensure that it is producing a sufficient number of functional integrated circuits. One way to monitor the process is to inspect the chip circuit structures at various stages of their formation. Inspection can be carried out using a scanning charged-particle microscope (“SCPM”). For example, an SCPM may be a scanning electron microscope (SEM). A SEM can be used to image these extremely small structures, in effect, taking a “picture” of the structures. The image can be used to determine if the structure was formed properly and also if it was formed in the proper location. If the structure is defective, then the process can be adjusted so the defect is less likely to recur. To enhance throughput (e.g., the number of samples processed per hour), it is desirable to conduct inspection as quickly as possible.
[0022] The working principle of a SEM is similar to a camera. A camera takes a picture by receiving and recording intensity of light reflected or emitted from people or objects. A SEM takes a “picture” by receiving and recording energies or quantities of electrons reflected or emitted from the structures of the wafer. Before taking such a “picture,” an electron beam may be projected onto the structures, and when the electrons are reflected or emitted (“exiting”) from the structures (e.g., from the wafer surface, from the structures underneath the wafer surface, or both), a detector of the SEM may receive and record the energies or quantities of those electrons to generate an inspection image. To take such a “picture,” the electron beam may scan through the wafer (e.g., in a line-by-line or zig-zag manner), and the detector may receive exiting electrons coming from a region under electron-beam projection (referred to as a “beam spot”). The detector may receive and record exiting electrons from each beam spot one at a time and join the information recorded for all the beam spots to generate the inspection image. Some SEMs use a single electron beam (referred to as a “single-beam SEM”) to take a single “picture” to generate the inspection image, while some SEMs use multiple electron beams (referred to as a “multi-beam SEM”) to take in parallel multiple “pictures” of the wafer, which can be used separately or be stitched together to generate the inspection image. By using multiple electron beams, the SEM may provide more electron beams onto the structures for obtaining these multiple “pictures,” resulting in more electrons exiting from the structures. Accordingly, the detector may receive more exiting electrons simultaneously and generate inspection images of the structures of the wafer with higher efficiency and faster speed.
[0023] Typically, the detection process involves measuring the magnitude of an electrical signal generated when electrons land on the detector. In another approach, electron counting may be used, in which a detector may count individual electron arrival events as they occur. In either approach, intensity of the secondary beam may be determined based on electrical signals generated in the detector that vary in proportion to the change in intensity of the secondary beam.
[0024] The detection process may be used for metrology measurements, e.g., in circuit pattern features printed on a semiconductor wafer. For example, a metrology process may identify the locations of feature edges within the circuit pattern. Such measurements may be used to determine an edge placement error (EPE), or an unwanted offset of a feature edge from its designed location.
[0025] One conventional method for determining EPE may involve a contour identification technique for the feature. For example, the contour identification technique may involve a first step of aligning a grayscale SEM image of the pattern area with the binary outlines of a reference contour image. The reference contour image may comprise, e.g., a post-processed version of the design file that was used to create the pattern area. Such files may take the form of, e.g., GDS, GDSII, OASIS or other circuit pattern file formats (referred to collectively as “GDS”). Post-processing may be used to render the GDS file image as a set of feature outlines, as well as to more accurately represent the contours of a true printed pattern (such as by rounding off the feature corners, creating irregular line
widths, etc.). By aligning the reference contour, a coarse location of edge features in the grayscale SEM image may be identified.
[0026] Next, a grayscale thresholding operation may be used to perform a fine contour extraction. For example, the thresholding operation may include evaluating a series of pixels in the SEM image along a line that runs perpendicular (or “normal”) to the identified coarse edge. When a pixel is reached that has a grayscale intensity value above a certain threshold, the pixel may be considered a fine edge location of the feature. This process may be repeated at multiple lines running perpendicular to the coarse edge to identify a series of edge pixels in the SEM image, thus tracing out a fine contour of the pattern feature edges. Such a technique may be referred to as Contour Extraction via Normal Thresholding (CENT).
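For illustration only, a minimal sketch of the normal-thresholding step under the description above follows; the function name, the subpixel bilinear sampling, and the default search parameters are assumptions rather than a definitive implementation.

```python
import numpy as np

def fine_edge_along_normal(image, point, normal, threshold,
                           max_dist=10.0, step=0.5):
    """Illustrative CENT step: starting from a coarse edge point, sample
    grayscale intensities along the contour normal and return the first
    sample whose intensity reaches the threshold, taken as the fine edge
    location. Bilinear interpolation provides subpixel samples."""
    y0, x0 = point
    ny, nx = normal  # unit normal of the coarse contour at this point
    for t in np.arange(-max_dist, max_dist, step):
        y, x = y0 + t * ny, x0 + t * nx
        iy, ix = int(np.floor(y)), int(np.floor(x))
        if not (0 <= iy < image.shape[0] - 1 and 0 <= ix < image.shape[1] - 1):
            continue  # sample falls outside the image
        fy, fx = y - iy, x - ix
        # Bilinear interpolation of the four neighboring pixels.
        val = ((1 - fy) * (1 - fx) * image[iy, ix]
               + (1 - fy) * fx * image[iy, ix + 1]
               + fy * (1 - fx) * image[iy + 1, ix]
               + fy * fx * image[iy + 1, ix + 1])
        if val >= threshold:
            return (y, x)  # fine edge location for this normal line
    return None  # no threshold crossing found within the search range
```

Repeating this for many normal lines along the coarse contour traces out the fine edge contour described above.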
[0027] While the CENT technique may offer a fast processing time in use, designing a CENT algorithm may be labor-intensive. For example, the design may require careful tuning of algorithm parameters, such as threshold value, sensitivity, etc. Furthermore, it may be difficult to achieve a satisfactory EPE measurement under this technique, especially when the measured pattern is not highly regular or repetitive. For example, random logic layers may be very challenging under conventional CENT techniques because there may be little repetition, and every edge profile within the pattern may include unique shapes or other characteristics. This can result in poor alignment of a GDS file to the pattern, and subsequently a poor thresholding operation during the fine contour extraction step.
[0028] Embodiments of the present disclosure may provide an inspection apparatus and inspection method for accurately extracting an edge contour, even from highly irregular SEM images. The method may comprise a pixel-level extraction of edge features in the grayscale SEM image by reference to a binary customized contour image. The customized contour image may be generated based on the SEM image and a reference image, such as a GDS or other original design file, using a neural network or other machine learning transformation model.
[0029] For example, the customized contour image may be generated by a series of transformations using the grayscale SEM image and the binary reference image. The series of transformations may be designed to calculate an appropriate deformation map, e.g., a mathematical deformation model, that may be applied to the original binary design file to generate the customized contour image. The result is a smooth, sharp-edged, binary outline that very closely matches the actual printed patterns in the grayscale SEM image. After the customized contour image is generated, edge locations may be extracted directly from it, or it may be aligned to the original SEM image with pixel-level accuracy to identify fine contour edges. This process may be referred to as Local Alignment for Contour Extraction (LACE).
[0030] In some embodiments, the LACE process above may be used as a final step to directly determine the edge contours in an image. In some embodiments, the process may comprise an initial step that replaces the coarse edge identification step in the CENT process. This is because performance of
the CENT process depends heavily on the accuracy of the initial coarse contour. Therefore, the customized contour image of the present disclosure may be used in lieu of the simple corner-rounded post-processed image to achieve an accurate starting point for edge location. Then a thresholding operation may be applied as discussed above to perform the fine edge contour extraction. For example, using the hybrid LACE-CENT process, a customized contour image from one printed pattern may be derived and applied to images of other patterns that were printed from the same file. This hybrid process may combine the pixel-level accuracy of the present disclosure with the speed of thresholding, resulting in a rapid, high-accuracy EPE measurement across a wafer or across multiple wafers.
[0031] Objects and advantages of the disclosure may be realized by the elements and combinations as set forth in the embodiments discussed herein. However, embodiments of the present disclosure are not necessarily required to achieve such exemplary objects or advantages, and some embodiments may not achieve any of the stated objects or advantages.
[0032] Without limiting the scope of the present disclosure, some embodiments may be described in the context of providing detection systems and detection methods in systems utilizing electron beams (“e-beams”). However, the disclosure is not so limited. Other types of charged particle beams (such as proton beams) may be similarly applied. Furthermore, systems and methods for detection may be used in other imaging systems, such as optical imaging, photon detection, proton detection, x-ray detection, ion detection, or the like. Photon detection may comprise light in the infrared, visible, UV, DUV, EUV, x-ray, or any other wavelength range. Therefore, while detectors in the present disclosure may be disclosed with respect to electron detection, some embodiments of the present disclosure may be directed to detecting other charged particles or photons. For example, in some embodiments the process may be applied to an image obtained by, e.g., scatterometry or other optical metrology processes.
[0033] Furthermore, while some embodiments may be described in the context of providing EPE measurements, embodiments of the present disclosure are not limited to this. For example, in some embodiments the process may be used to determine other metrology parameters such as critical dimension (CD), overlay (OVL), line width roughness (LWR), line edge roughness (LER), line end shortening (LES), or the like. In some embodiments the process may be used to perform a defect inspection.
[0034] As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a component includes A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component includes A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
[0035] Reference is now made to Fig. 1, which illustrates an exemplary electron beam inspection (EBI) system 10 that may be used for wafer inspection, consistent with embodiments of the present disclosure. As shown in Fig. 1, EBI system 10 includes a main chamber 11, a load/lock chamber 20, an electron beam tool 100 (e.g., a scanning electron microscope (SEM)), and an equipment front end module (EFEM) 30. Electron beam tool 100 is located within main chamber 11 and may be used for imaging. EFEM 30 includes a first loading port 30a and a second loading port 30b. EFEM 30 may include additional loading ports. First loading port 30a and second loading port 30b receive wafer front opening unified pods (FOUPs) that contain wafers (e.g., semiconductor wafers or wafers made of other materials) or samples to be inspected (wafers and samples may be collectively referred to as “wafers” herein).
[0036] One or more robotic arms (not shown) in EFEM 30 may transport the wafers to load/lock chamber 20. Load/lock chamber 20 is connected to a load/lock vacuum pump system (not shown) which removes gas molecules in load/lock chamber 20 to reach a first pressure below the atmospheric pressure. After reaching the first pressure, one or more robotic arms (not shown) may transport the wafer from load/lock chamber 20 to main chamber 11. Main chamber 11 is connected to a main chamber vacuum pump system (not shown) which removes gas molecules in main chamber 11 to reach a second pressure below the first pressure. After reaching the second pressure, the wafer is subject to inspection by electron beam tool 100. Electron beam tool 100 may be a single-beam system or a multi-beam system. A controller 109 is electronically connected to electron beam tool 100, and may be electronically connected to other components as well. Controller 109 may be a computer configured to execute various controls of EBI system 10. While controller 109 is shown in Fig. 1 as being outside of the structure that includes main chamber 11, load/lock chamber 20, and EFEM 30, it is appreciated that controller 109 can be part of the structure.
[0037] In some embodiments, controller 109 may include one or more processors (not shown). A processor may be a generic or specific electronic device capable of manipulating or processing information. For example, the processor may include any combination of any number of a central processing unit (or “CPU”), a graphics processing unit (or “GPU”), an optical processor, a programmable logic controller, a microcontroller, a microprocessor, a digital signal processor, an intellectual property (IP) core, a Programmable Logic Array (PLA), a Programmable Array Logic (PAL), a Generic Array Logic (GAL), a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), a System On Chip (SoC), an Application-Specific Integrated Circuit (ASIC), and any other type of circuit capable of data processing. The processor may also be a virtual processor that includes one or more processors distributed across multiple machines or devices coupled via a network.
[0038] In some embodiments, controller 109 may further include one or more memories (not shown). A memory may be a generic or specific electronic device capable of storing codes and data accessible by the processor (e.g., via a bus). For example, the memory may include any combination of any
number of a random-access memory (RAM), a read-only memory (ROM), an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a secure digital (SD) card, a memory stick, a compact flash (CF) card, or any type of storage device. The codes and data may include an operating system (OS) and one or more application programs (or “apps”) for specific tasks. The memory may also be a virtual memory that includes one or more memories distributed across multiple machines or devices coupled via a network.
[0039] A charged particle beam microscope, such as that formed by or which may be included in EBI system 10, may be capable of resolution down to, e.g., the nanometer scale, and may serve as a practical tool for inspecting IC components on wafers. With an e-beam system, electrons of a primary electron beam may be focused at probe spots on a wafer under inspection. The interactions of the primary electrons with the wafer may result in secondary particle beams being formed. The secondary particle beams may comprise backscattered electrons, secondary electrons, or Auger electrons, etc. resulting from the interactions of the primary electrons with the wafer. Characteristics of the secondary particle beams (e.g., intensity) may vary based on the properties of the internal or external structures or materials of the wafer, and thus may indicate whether the wafer includes defects.
[0040] The intensity of the secondary particle beams may be determined using a detector. The secondary particle beams may form beam spots on a surface of the detector. The detector may generate electrical signals (e.g., a current, a charge, a voltage, etc.) that represent intensity of the detected secondary particle beams. The electrical signals may be measured with measurement circuitries which may include further components (e.g., analog-to-digital converters) to obtain a distribution of the detected electrons. The electron distribution data collected during a detection time window, in combination with corresponding scan path data of the primary electron beam incident on the wafer surface, may be used to reconstruct images of the wafer structures or materials under inspection. The reconstructed images may be used to reveal various features of the internal or external structures or materials of the wafer and may be used to reveal defects that may exist in the wafer.
[0041] Fig. 2A illustrates a charged particle beam apparatus that may be an example of electron beam tool 100, consistent with embodiments of the present disclosure. Fig. 2A shows an apparatus that uses a plurality of beamlets formed from a primary electron beam to simultaneously scan multiple locations on a wafer.
[0042] As shown in Fig. 2A, electron beam tool 100A may comprise an electron source 202, a gun aperture 204, a condenser lens 206, a primary electron beam 210 emitted from electron source 202, a source conversion unit 212, a plurality of beamlets 214, 216, and 218 of primary electron beam 210, a primary projection optical system 220, a wafer stage (not shown in Fig. 2A), multiple secondary electron beams 236, 238, and 240, a secondary optical system 242, and electron detection device 244. Electron source 202 may generate primary particles, such as electrons of primary electron beam 210. A controller, image processing system, and the like may be coupled to electron detection device 244. Primary projection optical system 220 may comprise beam separator 222, deflection scanning unit
226, and objective lens 228. Electron detection device 244 may comprise detection sub-regions 246, 248, and 250.
[0043] Electron source 202, gun aperture 204, condenser lens 206, source conversion unit 212, beam separator 222, deflection scanning unit 226, and objective lens 228 may be aligned with a primary optical axis 260 of apparatus 100A. Secondary optical system 242 and electron detection device 244 may be aligned with a secondary optical axis 215 of apparatus 100A.
[0044] Electron source 202 may comprise a cathode, an extractor or an anode, wherein primary electrons can be emitted from the cathode and extracted or accelerated to form a primary electron beam 210 with a crossover (virtual or real) 208. Primary electron beam 210 can be visualized as being emitted from crossover 208. Gun aperture 204 may block off peripheral electrons of primary electron beam 210 to reduce size of probe spots 270, 272, and 274.
[0045] Source conversion unit 212 may comprise an array of image-forming elements (not shown in Fig. 2A) and an array of beam-limit apertures (not shown in Fig. 2A). An example of source conversion unit 212 may be found in U.S. Patent No. 9,691,586; U.S. Publication No. 2017/0021543; and International Application No. PCT/EP2017/084429, all of which are incorporated by reference in their entireties. The array of image-forming elements may comprise an array of micro-deflectors or micro-lenses. The array of image-forming elements may form a plurality of parallel images (virtual or real) of crossover 208 with a plurality of beamlets 214, 216, and 218 of primary electron beam 210. The array of beam-limit apertures may limit the plurality of beamlets 214, 216, and 218.
[0046] Condenser lens 206 may focus primary electron beam 210. The electric currents of beamlets 214, 216, and 218 downstream of source conversion unit 212 may be varied by adjusting the focusing power of condenser lens 206 or by changing the radial sizes of the corresponding beam-limit apertures within the array of beam-limit apertures. Condenser lens 206 may be an adjustable condenser lens that may be configured so that the position of its first principal plane is movable. The adjustable condenser lens may be configured to be magnetic, which may result in off-axis beamlets 216 and 218 landing on the beamlet-limit apertures with rotation angles. The rotation angles change with the focusing power and the position of the first principal plane of the adjustable condenser lens. In some embodiments, the adjustable condenser lens may be an adjustable anti-rotation condenser lens, which involves an anti-rotation lens with a movable first principal plane. An example of an adjustable condenser lens is further described in U.S. Publication No. 2017/0021541, which is incorporated by reference in its entirety.
[0047] Objective lens 228 may focus beamlets 214, 216, and 218 onto a wafer 230 for inspection and may form a plurality of probe spots 270, 272, and 274 on the surface of wafer 230. Secondary electron beamlets 236, 238, and 240 may be formed that are emitted from wafer 230 and travel back toward beam separator 222.
[0048] Beam separator 222 may be a beam separator of Wien filter type generating an electrostatic dipole field and a magnetic dipole field. In some embodiments, if they are applied, the force exerted
by the electrostatic dipole field on an electron of beamlets 214, 216, and 218 may be equal in magnitude and opposite in direction to the force exerted on the electron by the magnetic dipole field. Beamlets 214, 216, and 218 can therefore pass straight through beam separator 222 with zero deflection angle. However, the total dispersion of beamlets 214, 216, and 218 generated by beam separator 222 may be non-zero. Beam separator 222 may separate secondary electron beams 236, 238, and 240 from beamlets 214, 216, and 218 and direct secondary electron beams 236, 238, and 240 towards secondary optical system 242.
[0049] Deflection scanning unit 226 may deflect beamlets 214, 216, and 218 to scan probe spots 270, 272, and 274 over an area on a surface of wafer 230. In response to incidence of beamlets 214, 216, and 218 at probe spots 270, 272, and 274, secondary electron beams 236, 238, and 240 may be emitted from wafer 230. Secondary electron beams 236, 238, and 240 may comprise electrons with a distribution of energies including secondary electrons and backscattered electrons. Secondary optical system 242 may focus secondary electron beams 236, 238, and 240 onto detection sub-regions 246, 248, and 250 of electron detection device 244. Detection sub-regions 246, 248, and 250 may be configured to detect corresponding secondary electron beams 236, 238, and 240 and generate corresponding signals used to reconstruct an image of the surface of wafer 230.
[0050] The generated signals may represent intensities of secondary electron beams 236, 238, and 240 and may be provided to an image processing system (e.g., image processing system 199 shown in Fig. 2B below) that is in communication with detection device 244, primary projection optical system 220, and the motorized wafer stage. The movement speed of the motorized wafer stage may be synchronized and coordinated with the beam deflections controlled by deflection scanning unit 226, such that the movement of the scan probe spots (e.g., scan probe spots 270, 272, and 274) may orderly cover regions of interest on wafer 230. The parameters of such synchronization and coordination may be adjusted to adapt to different materials of wafer 230. For example, different materials of wafer 230 may have different resistance-capacitance characteristics that may cause different signal sensitivities to the movement of the scan probe spots.
[0051] The intensity of secondary electron beams 236, 238, and 240 may vary according to the external or internal structure of wafer 230, and thus may indicate whether wafer 230 includes defects. Moreover, as discussed above, beamlets 214, 216, and 218 may be projected onto different locations of the top surface of wafer 230, or different sides of local structures of wafer 230, to generate secondary electron beams 236, 238, and 240 that may have different intensities. Therefore, by mapping the intensity of secondary electron beams 236, 238, and 240 with the areas of wafer 230, the image processing system may reconstruct an image that reflects the characteristics of internal or external structures of wafer 230.
[0052] Detection sub-regions 246, 248, and 250 may include separate detector packages, separate sensing elements, or separate regions of an array detector. In some embodiments, each detection subregion may include a single sensing element.
[0053] Another example of a charged particle beam apparatus will now be discussed with reference to Fig. 2B. An electron beam tool 100B (also referred to herein as apparatus 100B) may be an example of electron beam tool 100 and may be similar to electron beam tool 100A shown in Fig. 2A. However, different from apparatus 100A, apparatus 100B may be a single-beam tool that uses only one primary electron beam to scan one location on the wafer at a time.
[0054] As shown in Fig. 2B, apparatus 100B includes a wafer holder 136 supported by motorized stage 134 to hold a wafer 150 to be inspected. Electron beam tool 100B includes an electron emitter, which may comprise a cathode 103, an anode 121, and a gun aperture 122. Electron beam tool 100B further includes a beam limit aperture 125, a condenser lens 126, a column aperture 135, an objective lens assembly 132, and a detector 144. Objective lens assembly 132, in some embodiments, may be a modified SORIL lens, which includes a pole piece 132a, a control electrode 132b, a deflector 132c, and an exciting coil 132d. In a detection or imaging process, an electron beam 161 emanating from the tip of cathode 103 may be accelerated by anode 121 voltage, pass through gun aperture 122, beam limit aperture 125, condenser lens 126, and be focused into a probe spot 170 by the modified SORIL lens and impinge onto the surface of wafer 150. Probe spot 170 may be scanned across the surface of wafer 150 by a deflector, such as deflector 132c or other deflectors in the SORIL lens. Secondary or scattered particles, such as secondary electrons or scattered primary electrons emanating from the wafer surface, may be collected by detector 144 to determine the intensity of the beam so that an image of an area of interest on wafer 150 may be reconstructed.
[0055] There may also be provided an image processing system 199 that includes an image acquirer 120, a storage 130, and controller 109. Image acquirer 120 may comprise one or more processors. For example, image acquirer 120 may comprise a computer, server, mainframe host, terminals, personal computer, any kind of mobile computing devices, and the like, or a combination thereof. Image acquirer 120 may be communicatively coupled with detector 144 of electron beam tool 100B through a medium such as an electrical conductor, optical fiber cable, portable storage media, IR, Bluetooth, internet, wireless network, wireless radio, or a combination thereof. Image acquirer 120 may receive a signal from detector 144 and may construct an image. Image acquirer 120 may thus acquire images of wafer 150. Image acquirer 120 may also perform various post-processing functions, such as image averaging, generating contours, superimposing indicators on an acquired image, and the like. Image acquirer 120 may be configured to perform adjustments of brightness and contrast, etc. of acquired images. Storage 130 may be a storage medium such as a hard disk, random access memory (RAM), cloud storage, other types of computer readable memory, and the like. Storage 130 may be coupled with image acquirer 120 and may be used for saving scanned raw image data as original images, and post-processed images. Image acquirer 120 and storage 130 may be connected to controller 109. In some embodiments, image acquirer 120, storage 130, and controller 109 may be integrated together as one electronic control unit.
[0056] In some embodiments, image acquirer 120 may acquire one or more images of a sample based on an imaging signal received from detector 144. An imaging signal may correspond to a scanning operation for conducting charged particle imaging. An acquired image may be a single image comprising a plurality of imaging areas that may contain various features of wafer 150. The single image may be stored in storage 130. Imaging may be performed on the basis of imaging frames.
[0057] The condenser and illumination optics of the electron beam tool may comprise or be supplemented by electromagnetic quadrupole electron lenses. For example, as shown in Fig. 2B, electron beam tool 100B may comprise a first quadrupole lens 148 and a second quadrupole lens 149. In some embodiments, the quadrupole lenses may be used for controlling the electron beam. For example, first quadrupole lens 148 may be controlled to adjust the beam current and second quadrupole lens 149 may be controlled to adjust the beam spot size and beam shape.
[0058] Fig. 2B illustrates a charged particle beam apparatus that may use a single primary beam configured to generate secondary electrons by interacting with wafer 150. Detector 144 may be placed along optical axis 105, as in the embodiment shown in Fig. 2B. The primary electron beam may be configured to travel along optical axis 105. Accordingly, detector 144 may include a hole at its center so that the primary electron beam may pass through to reach wafer 150. Fig. 2B shows an example of detector 144 having an opening at its center. However, some embodiments may use a detector placed off-axis relative to the optical axis along which the primary electron beam travels. For example, as in the embodiment shown in Fig. 2A, discussed above, a beam separator 222 may be provided to direct secondary electron beams toward a detector placed off-axis. Beam separator 222 may be configured to divert secondary electron beams by an angle α toward an electron detection device 244, as shown in Fig. 2A.
[0059] A detector in a charged particle beam system may include one or more sensing elements. The detector may comprise a single-element detector or an array with multiple sensing elements. The sensing elements may be configured for charged particle counting. Sensing elements of a detector that may be useful for charged particle counting are discussed in U.S. Publication No. 2019/0379682, which is incorporated by reference in its entirety.
[0060] Sensing elements may include a diode or an element similar to a diode that may convert incident energy into a measurable signal. For example, sensing elements in a detector may include a PIN diode. Throughout this disclosure, sensing elements may be represented as a diode, for example in the figures, although sensing elements or other components may deviate from ideal circuit behavior of electrical elements such as diodes, resistors, capacitors, etc.
[0061] In some embodiments, machine learning may be employed in the generation of inspection images, reference images or other images associated with apparatus 100, 100A or 100B. For example, in some embodiments a machine learning system may operate in association with, e.g., controller 109, image acquisition unit 199, image acquirer 120, or storage unit 130 of Figs. 1-2B. In some embodiments, a machine learning system may comprise a discriminative model. In some
embodiments, a machine learning system may include a generative model. For example, learning can feature two types of mechanisms: discriminative learning that may be used to create classification and detection algorithms, and generative learning that may be used to actually create models that, in the extreme, can render images. For example, as described further below, a generative model may be configured for generating an image from a design clip that resembles a corresponding location on a wafer in a SEM image. This may be performed by 1) training the generative model with design clips and the associated actual SEM images from those locations on the wafer; and 2) using the model in inference mode to feed the model design clips in locations for which simulated SEM images are desired. Such simulated images can be used as reference images in, e.g., die-to-database inspection.
[0062] If the model(s) include one or more discriminative models, the discriminative model(s) may have any suitable architecture and/or configuration known in the art. Discriminative models, also called conditional models, are a class of models used in machine learning for modeling the dependence of an unobserved variable “y” on an observed variable “x.” Within a probabilistic framework, this may be done by modeling a conditional probability distribution P(y|x), which can be used for predicting y based on x. Discriminative models, as opposed to generative models, may not allow one to generate samples from the joint distribution of x and y. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models may yield superior performance. On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily be extended to unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model.
[0063] A generative model can be generally defined as a model that is probabilistic in nature. In other words, a “generative” model is not one that performs forward simulation or rule-based approaches and, as such, it may not be necessary to model the physics of the processes involved in generating an actual image or output (for which a simulated image or output is being generated). Instead, the generative model can be learned (in that its parameters can be learned) based on a suitable training set of data. Such generative models may have a number of advantages for the embodiments described herein. In addition, the generative model may be configured to have a deep learning architecture in that the generative model may include multiple layers, which may perform a number of algorithms or transformations. The number of layers included in the generative model may depend on the particular use case. For practical purposes, a suitable range of layers is from 2 layers to a few tens of layers.
[0064] Deep learning is a type of machine learning. Machine learning can be generally defined as a type of artificial intelligence (Al) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. In other words, machine learning
can be defined as the subfield of computer science that “gives computers the ability to learn without being explicitly programmed.” Machine learning explores the study and construction of algorithms that can learn from and make predictions on data — such algorithms overcome following strictly static program instructions by making data driven predictions or decisions, through building a model from sample inputs.
[0065] The machine learning described herein may be further performed as described in “Introduction to Statistical Machine Learning,” by Sugiyama, Morgan Kaufmann, 2016, 534 pages; “Discriminative, Generative, and Imitative Learning,” Jebara, MIT Thesis, 2002, 212 pages; and “Principles of Data Mining (Adaptive Computation and Machine Learning)” Hand et al., MIT Press, 2001, 578 pages; which are incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in these references.
[0066] In some embodiments, a machine learning system may comprise a neural network. For example, a model may be a deep neural network with a set of weights that model the world according to the data it has been fed during training. Neural networks can be generally defined as a computational approach based on a relatively large collection of neural units, loosely modeling the way a biological brain solves problems with relatively large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be enforcing or inhibitory in their effect on the activation state of connected neural units. These systems are self-learning and trained rather than explicitly programmed, and they excel in areas where the solution or feature detection is difficult to express in a traditional computer program.
[0067] Neural networks typically consist of multiple layers, and the signal path traverses from front to back. The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are much more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections. The neural network may have any suitable architecture and/or configuration known in the art.
[0068] In a further embodiment, a model may comprise a convolutional and deconvolutional neural network. For example, the embodiments described herein can take advantage of learning concepts such as convolutional and deconvolutional neural networks to solve the normally intractable representation-conversion problem (e.g., rendering). The model may have any convolutional and deconvolutional neural network configuration or architecture known in the art.
[0069] Figs. 3A-B schematically illustrate contour extraction techniques 300 and 301 in a metrology process, such as an edge placement error (EPE) measurement, according to a comparative embodiment. First, as seen in Fig. 3A, an inspection image 351 of a feature area on a sample may be acquired. The image 351 may comprise, e.g., a charged particle image such as a SEM image. The feature area may correspond to, e.g., a field of view of the SEM, a portion of a field of view, a plurality of fields of view, etc. The sample may comprise, e.g., a semiconductor wafer comprising a printed circuit pattern. Inspection image 351 may comprise a pixelated grayscale intensity map of the
features for which contours are to be extracted. In a semiconductor wafer, for example, the edges of the features may comprise a transition region having a sharp change in topography. In a SEM image, for example, this transition may appear as bright, high intensity regions, in contrast to the dark, relatively flat areas both inside and outside the feature.
[0070] The grayscale inspection image 351 may be compared to a reference image 381. Reference image 381 may be a binary image, such that no grayscale or other gradient exists in the image.
Reference image 381 may be based on, e.g., a design file used to create the printed patterns. Therefore, in some embodiments, the alignment between grayscale inspection image 351 and binary reference image 381 may comprise a die-to-database (D2DB) type alignment. For example, reference image 381 may be a set of contour outlines of pattern features derived from the original design file. The contour outlines may be subjected to further post-processing to yield outlines that more closely resemble the patterns as they are expected to form on the wafer. For instance, the reference image 381 may be spatial frequency-filtered to generate a lower spatial-frequency version of the design file, or it may have optical proximity corrections applied to it. This may result in pattern outlines that have, e.g., rounded corners, line-end shortening, or linewidth variations, etc., depending on their spatial relationship to other features on the wafer, as well as on modeled parameters of the printing process.

[0071] Reference image 381 may be aligned to the SEM image to determine a coarse edge location, as seen in aligned image 353. For example, reference image 381 may be mapped directly onto inspection image 351 by a least squares or other fitting algorithm, or it may be subjected to further global deformations such as magnification, rotation, skew, pincushion or barrel distortion, to achieve an approximate matching of reference image 381 to inspection image 351.
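As one illustration of such a fitting step, the sketch below estimates a global affine transform by least squares from matched point pairs; an affine transform captures magnification, rotation, and skew, while pincushion or barrel distortion would require higher-order polynomial terms not shown here. This is a minimal sketch assuming NumPy; the function names and the point-correspondence input are illustrative, not part of the disclosure.

```python
import numpy as np

def fit_affine(ref_pts: np.ndarray, insp_pts: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping reference points onto
    inspection points; each input is an (N, 2) array of (x, y) pairs."""
    n = len(ref_pts)
    a = np.hstack([ref_pts, np.ones((n, 1))])        # rows are [x, y, 1]
    params, *_ = np.linalg.lstsq(a, insp_pts, rcond=None)
    return params                                    # (3, 2) coefficient matrix

def apply_affine(params: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map (N, 2) points through the fitted affine transform."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params
```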
[0072] Next, a further step 301 of normal thresholding may be performed to determine a fine contour extraction. The small sample region 383 of Fig. 3A is enlarged at the left side of Fig. 3B to illustrate the fine contour extraction process. A further enlarged section of sample region 383 is shown on the right. Individual pixel intensities may be evaluated along a series of normal lines 384 running perpendicular to the coarse edge lines formed by reference image 381. As illustrated by chart 386, when running along a normal line starting from the outside of a feature and moving toward the inside, for example, a pixel is selected at the point where an intensity value reaches a prescribed threshold. This selected pixel may be set as one point along an edge of the feature. By repeating the process many times along the outlines of reference image 381, a fine edge 385 may be extracted as seen on the right in Fig. 3B.
[0073] Further, as illustrated on the left side of Fig. 3B, this contour-extraction process 301 may be iterative. For example, a first series of lines 384 running normal to the outlines of reference image 381 may be evaluated to determine an intermediate contour 385a. Then, a second series of lines 384 running normal to the intermediate contour 385a may be evaluated to extract a fine contour 385b. The thresholds used to extract intermediate contour 385a and fine contour 385b may be the same or may
be different. In general, the process may be iterated as many times as is desired to optimize the extraction process.
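In code, the per-normal threshold search can be summarized as follows. This is a minimal sketch assuming NumPy and SciPy, not the disclosed implementation; the search distance, step size, and threshold are hypothetical parameters, and bilinear interpolation provides sub-pixel sampling.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def edge_point_along_normal(image, start_xy, normal_xy, threshold,
                            max_dist=10.0, step=0.25):
    """Walk from `start_xy` along the unit normal `normal_xy` (from outside
    the feature toward the inside) and return the first (x, y) position whose
    interpolated intensity reaches `threshold`, or None if none is found."""
    n = np.asarray(normal_xy, dtype=float)
    n = n / np.linalg.norm(n)
    p0 = np.asarray(start_xy, dtype=float)
    for d in np.arange(0.0, max_dist, step):
        x, y = p0 + d * n
        # map_coordinates expects (row, col) = (y, x); order=1 is bilinear.
        if map_coordinates(image, [[y], [x]], order=1)[0] >= threshold:
            return (x, y)
    return None
```

Repeating this search along every normal of the coarse outline yields the fine edge; for the iterative variant, the normals of the extracted intermediate contour are used for the next pass.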
[0074] As noted above, this technique suffers from drawbacks and may not be suitable for modern inspection processes. In particular, the step of achieving an initial coarse outline has been identified as a limiting factor on the efficiency and accuracy of the edge contour extraction. This may be especially true when dealing with highly irregular patterns such as random logic layers, in which complex logic is hard-wired into the circuit design. Embodiments of the present disclosure provide systems and methods for producing a customized contour image. The customized contour image may be used to directly extract an edge location in a metrology process, or may be used as an initial coarse contour in a normal thresholding operation. For example, the customized contour image may replace the reference image 381 in contour-extraction process 301 above.
[0075] Fig. 4 illustrates an edge extraction process 400, consistent with embodiments of the present disclosure. The edge extraction process may be used in, e.g., a metrology process such as, e.g., EPE measurement. In some embodiments, the process may be applied to other measurements, such as critical dimension (CD), overlay (OVL), linewidth roughness (LWR), line edge roughness (LER), line end shortening (LES), or other parameters in semiconductor manufacturing.
[0076] Alternatively, the edge extraction process may be used in, e.g., defect inspection. For example, various types of defects on a semiconductor wafer may be generated during different stages of a wafer manufacturing process. The stages may include, e.g., a lithography process, an etching process, a chemical mechanical polishing (CMP) process, and an interconnection forming process. Defects generated in the lithographic process may include, e.g., photoresist (PR) residue defects due to PR deterioration or impurity, peeling defects, bridge defects, bubble defects, and dummy pattern missing defects due to pattern shift. Defects generated in an etching process may include, e.g., etching residue defects, over-etching defects, and open-circuit defects. Defects generated in a CMP process may include, e.g., slurry residue defects, dishing defects, erosion defects due to variance in polishing rates, and scratches due to polishing. Defects generated in an interconnection forming process may include, e.g., broken line defects, void defects, extrusion defects, and bridge defects. Defect inspection parameters may include any measured features that may be used to identify or otherwise characterize defects such as those discussed above. For example, in some embodiments a value of a defect inspection parameter may include a value of the size, depth, height, surface roughness, edge roughness, or other characteristic of the defect.
[0077] As seen at the top left in Fig. 4, an inspection image 451 of a feature area on a sample may be acquired in a similar manner to inspection image 351 of Fig. 3A. Image 451 may be a grayscale image. Image 451 may comprise, e.g., a charged particle image such as a SEM image. As seen in the top center of Fig. 4, a reference image 481 may also be acquired. Reference image 481 may comprise, e.g., a design file used to create the printed patterns or another suitable reference image. In some embodiments, the design file may be subjected to post processing as discussed above with respect to
Figs. 3A-B. In some embodiments, the reference image may correspond to an original design file.

[0078] Inspection image 451 may be aligned to reference image 481 by a transformation model in a first alignment process 401. For instance, the transformation model may comprise a neural network or other machine learning model as further discussed with respect to Fig. 5 below. The transformation model may be configured to calculate a first deformation map in the first alignment process 401. The first deformation map may be used to locally deform the shapes and locations of pattern elements in inspection image 451 so that they are mapped precisely onto the corresponding pattern elements in reference image 481. For example, in some embodiments, the pattern elements of inspection image 451 may be deformed to align with their corresponding elements in reference image 481 substantially at or near the pixel level. Alternatively or additionally, in some embodiments the pattern elements of inspection image 451 may be deformed to align with their corresponding elements in reference image 481 substantially at or near the resolution limit of the imaging system used to acquire inspection image 451. First alignment process 401 may generate an aligned image 491, as seen at the upper right corner in Fig. 4.
[0079] Inspection image 451 (such as a SEM image) may be alternatively denoted “S,” and reference image 481 may be alternatively denoted “R,” for the purpose of establishing a naming convention for derivative images that are generated based on these transformations. For example, the alignment of grayscale inspection image S onto binary reference image R may generate an aligned grayscale inspection image S2R. The reverse operation of aligning the binary reference image R onto the grayscale inspection image S would generate an aligned binary reference image R2S.
[0080] The aligned grayscale image 491 may then be re-aligned to the original inspection image 451 in a second alignment process 402. For example, second alignment process 402 may use the same transformation model to calculate a second deformation map. The second deformation map may be used to locally deform the shapes and locations of pattern elements in aligned grayscale image 491 so that they are mapped precisely onto the corresponding pattern elements in inspection image 451. The resulting re-aligned grayscale image 492 may be referred to as [S2R]2S. However, the key element of second alignment process 402 lies in deriving the second deformation map rather than actually generating the [S2R]2S image. Therefore, in some embodiments, the second deformation map may be calculated by other means. For example, in some embodiments the second deformation map may be generated based on the first deformation map, such as by taking an inverse of the first deformation map, or by performing another transformation on the first deformation map.
[0081] Finally, after calculating the second deformation map, a third alignment process 403 may be performed. Unlike the first and second alignment processes 401 and 402, third alignment process 403 may not comprise feeding two images into the transformation model. Instead, third alignment process 403 may comprise applying the second deformation map to the original reference image 481. In this way, a reference binary image may be transformed into an aligned binary image 493a using a deformation map that was derived by aligning two grayscale images to each other. This operation may
achieve a smooth, binary contour that very closely matches the actual edges of the features captured in inspection image 451.
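The third alignment process, applying a dense deformation map to the binary reference, amounts to resampling the reference at displaced coordinates. The sketch below assumes NumPy/SciPy and a (2, H, W) displacement field holding per-pixel row and column offsets; that layout and the nearest-neighbor sampling (which keeps a binary image binary) are illustrative choices, not requirements of the disclosure.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(reference: np.ndarray, disp: np.ndarray) -> np.ndarray:
    """Warp an (H, W) `reference` by a dense displacement field `disp`
    of shape (2, H, W): disp[0] holds row offsets, disp[1] column offsets.
    Nearest-neighbor sampling (order=0) preserves binary pixel values."""
    h, w = reference.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + disp[0], cols + disp[1]])
    return map_coordinates(reference, coords, order=0, mode="nearest")
```

Where the second deformation map is instead derived by inverting the first, as mentioned above, an explicit inversion of the displacement field (e.g., by fixed-point iteration) would precede this call; that step is omitted here.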
[0082] For example, the aligned binary image 493a may be represented in outline form as customized contour image 493b. Customized contour image 493b may be overlaid on original inspection image 451 for performing edge contour extraction. However, all the information needed for edge extraction or other metrology processes may be present in the solid image 493a, without a need for further comparison to the original inspection image 451. Because edge extraction may be performed directly from the information contained in aligned binary image 493a, the image 493a may also be referred to as a customized contour image.
[0083] Alternatively, customized contour images 493a/b may be used as an initial coarse alignment in a normal thresholding process as discussed above with respect to Fig. 3B. For example, a single customized contour image may be applied to a plurality of similarly patterned samples, such as a plurality of identically patterned die regions on a wafer. In this way, a single transformation process according to Fig. 4 may be performed on a representative inspection image to derive a strong initial coarse contour. Then, minor die-to-die variations may be corrected by a rapid normal thresholding process, such as that disclosed in Fig. 3B.
[0084] It is noted that the final customized contour image, which requires multiple transformations, may appear similar to an aligned binary reference image R2S that could be achieved with a single transformation. However, for reasons discussed below with respect to Fig. 5, it may be more desirable to perform the series of alignment transformations of process 400 rather than performing a direct alignment transformation R2S.
[0085] Fig. 5 schematically illustrates a machine learning system 500 for performing the alignment transformations discussed above, consistent with embodiments of the present disclosure. An alignment process may be performed using an encoder-decoder network, such as a deep neural network (DNN) 595 comprising a set of weights w, or another machine learning model. The encoder-decoder network 595 may be configured to encode into, and decode out of, a latent space. The alignment process may be iterated to find an optimized test weighting for the encoder-decoder network 595. In some embodiments the encoder and decoder of the network may be set to operate with an initial test weighting. The initial test weighting can be selected using a variety of methods. For example, all values may be set to maximum, minimum or mid-range values, to random values, or to values obtained from a previous use of the method.
[0086] Initially, an inspection image 551 and a reference image 581 may be input for alignment to generate an aligned image 599. For example, as illustrated, a binary reference image 581 may be aligned directly to a grayscale inspection image 551 to generate an “R2S”-type aligned image 599. Alternatively, as discussed at alignment process 401 of Fig. 4, a grayscale inspection image 551 may be aligned to a binary reference image 581 to generate an “S2R”-type aligned image 599. The inspection and reference images 551/581 may be encoded, using the encoder, into a latent space to
form an encoding. Following this, the encoding may be decoded to form a deformation map 596 indicative of a difference between inspection image 551 and reference image 581. After deformation map 596 has been formed, the inspection image is spatially transformed by deformation map 596 to obtain an aligned image 599.
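A toy stand-in for such an encoder-decoder, assuming PyTorch, is sketched below: the two images are stacked as input channels, encoded to a latent feature map, and decoded to a two-channel displacement field that is then used to spatially transform the moving image. The layer counts, channel widths, and names are arbitrary placeholders; the network of this disclosure may be much deeper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAlignNet(nn.Module):
    """Map a (moving, fixed) image pair to a dense pixel-displacement
    field, then warp the moving image with that field."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(  # encode into a latent feature map
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(  # decode a 2-channel (row, col) field
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1))

    def forward(self, moving, fixed):
        disp = self.dec(self.enc(torch.cat([moving, fixed], dim=1)))
        n, _, h, w = moving.shape
        # Identity grid in grid_sample's normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=moving.device),
            torch.linspace(-1, 1, w, device=moving.device), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
        # Convert pixel offsets (row, col) to normalized (x, y) offsets.
        offs = torch.stack([disp[:, 1] * 2 / (w - 1),
                            disp[:, 0] * 2 / (h - 1)], dim=-1)
        warped = F.grid_sample(moving, grid + offs, align_corners=True)
        return warped, disp
```

Calling `ToyAlignNet()(s_img, r_img)` on (N, 1, H, W) tensors (with H and W divisible by 4) returns the aligned image together with the deformation map used to produce it.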
[0087] With the aligned image 599 obtained, a loss function 597 may be determined. The loss function 597 may be at least partially defined by a similarity metric, which is obtained by comparing the aligned image to the reference image. The loss metric may be obtained by inputting the reference image 581 and the aligned image 599 into a discriminator network that outputs values depending on the similarity of the images. For example, the network may output values close to 0 for similar inputs and close to 1 for inputs that are significantly different. Of course, any suitable metric may be used. In some embodiments, the loss function may take the form of the following equation:
$$\mathcal{L}(w) = -\,CC\big(f,\; m \circ \Phi(x, w)\big) + \lambda\,\big\lVert \nabla \Phi(x, w) \big\rVert^{2}$$

in which w is a particular weighting, λ represents a strength of a smoothness prior, Φ(x, w) represents the deformation map, CC represents a cross-correlation calculation, f is the reference image, and m is the inspection image.
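A minimal numerical sketch of a loss with this structure, assuming NumPy, is shown below. Zero-mean normalized cross-correlation stands in for the CC term and a squared-gradient penalty for the smoothness term; `lam` plays the role of the smoothness-prior strength λ, and the warped inspection image m∘Φ is passed in precomputed.

```python
import numpy as np

def alignment_loss(f, m_warped, disp, lam=0.1):
    """-CC(f, m∘Φ) + λ·smoothness(Φ): negative normalized cross-correlation
    between reference `f` and warped inspection image `m_warped`, plus a
    penalty on the spatial gradients of the (2, H, W) displacement `disp`."""
    fz = (f - f.mean()) / (f.std() + 1e-8)
    mz = (m_warped - m_warped.mean()) / (m_warped.std() + 1e-8)
    cc = (fz * mz).mean()
    grads = np.gradient(disp, axis=(1, 2))  # d/d(row), d/d(col)
    smooth = sum((g ** 2).mean() for g in grads)
    return -cc + lam * smooth
```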
[0088] The loss function may also be at least partially defined by a smoothness metric, which is defined by the smoothness of the deformation map 596. Accordingly, the step of determining the loss function 597 may further comprise determining a smoothness metric of the deformation map 596. The smoothness metric may be defined by any suitable measurement of the deformation map 596 that is representative of smoothness. In an example, the smoothness metric is at least partially defined by the spatial gradients of the deformation map 596. Images of semiconductor substrates that are obtained using a SEM are known to display distortions of the first, second, and sometimes third order. Accordingly, by optimizing the smoothness of the deformation map 596, i.e., maximizing its smoothness, it may be possible to set the weighting of the encoder-decoder network such that an appropriate deformation map 596 can be generated. Higher-frequency distortions can be due to actual differences in the measured geometry of the inspection image 551 relative to the reference image (for example, if the inspection image 551 and reference image 581 are obtained from different locations on a substrate) or to noise, and so it may not be desirable to form a deformation map 596 that corrects for these differences. This ensures that the deformation map 596 is indicative of the distortions in the image, rather than of other differences between the reference image 581 and the inspection image 551. In some cases, it is expected that the aligned image may have some differences from the reference image, for example if the aligned and reference images are obtained from different places on a substrate or are derived from different modalities (e.g., comparing a SEM image to a mask image, GDSII, or a simulated image).
[0089] Having carried out the above process for a particular test weighting, it may be determined whether a termination condition has been met. The termination condition can be one or more of the following conditions: a predetermined value for the loss function 597 has been achieved; the improvement in the loss function compared to previous iterations is below a predetermined value; a local minimum in the loss function has been found; and a predetermined number of iterations has been performed. If the termination condition is not met, the test weighting may be adjusted, and the process described above may be repeated with a different test weighting. For example, the values of the test weighting may be adjusted in a manner that is predicted to minimize the loss function 597. In some embodiments a random component may also be added to prevent the optimization routine from becoming trapped in a local minimum.
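The iteration logic above reduces to a loop with the listed stopping rules. A schematic sketch follows; `step_fn`, the target loss, and the improvement tolerance are hypothetical placeholders (in practice `step_fn` would perform one optimizer update of the network weights and return the new weights and loss).

```python
def optimize_weighting(step_fn, w0, max_iters=1000,
                       target_loss=-0.95, min_improve=1e-5):
    """Iterate step_fn(w) -> (w_next, loss) until a termination condition
    is met: target loss reached, improvement below tolerance, or the
    iteration budget exhausted."""
    w, prev_loss = w0, float("inf")
    for _ in range(max_iters):
        w, loss = step_fn(w)
        if loss <= target_loss or prev_loss - loss < min_improve:
            break
        prev_loss = loss
    return w
```

A small random perturbation can be mixed into `step_fn`'s update, as noted above, to reduce the chance of stalling in a local minimum.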
[0090] After all the necessary test weightings have been iterated over, an optimized weighting may be determined as the test weighting having an optimized loss function. The weighting of the encoder-decoder network is then set as the optimized weighting and the alignment process may terminate. Further discussion of transformation methods that are applicable to embodiments of the present disclosure may be found in U.S. Patent Publication No. 2023/0036630, the entirety of which is incorporated by reference.
[0091] As discussed above with respect to Fig. 4, a series of transformations may be used to calculate a second deformation map for generating the customized contour image 493a/b, rather than generating a direct “R2S”-type aligned image as shown at Fig. 5. The close-up of aligned image 599 in Fig. 5 illustrates why. Although the global contours may appear acceptable when performing a direct “R2S”-type alignment, a pixel-level view reveals local noise introduced when the model attempts to map the binary form onto gradient images. These noisy edges may defeat the purpose of performing a pixel-level alignment operation. Thus, the first and second alignment processes 401 and 402 of Fig. 4 may be performed. This may be used to determine the appropriate deformation map 596 that will transform a reference image to generate smooth, binary contours that very closely match the features of the inspection image.
[0092] Although machine learning techniques for performing the alignment transformations have been described with respect to machine learning system 500 above, embodiments of the present disclosure are not limited to this. For example, in some embodiments, a machine learning system may include discriminative or generative models as described above.
[0093] Figs. 6A-E schematically illustrate an example of a challenging edge placement measurement, as performed using techniques according to a comparative embodiment and embodiments of the present disclosure. In Fig. 6A, an inspection image 651 shows a portion of a circuit pattern 654 comprising an intricate series of horizontal and vertical lines. The circuit 654 may form a part of, e.g., a memory structure or other feature having intricate patterns.
[0094] Figs. 6B-D illustrate a contour extraction 685 as achieved in a comparative embodiment. For example, the contour extraction of Figs. 6B-D may be performed using the processes 300-301
described with respect to Figs. 3A-B above. Contour extraction according to a comparative embodiment may be performed in steps to capture the vertical and horizontal patterns. For example, as seen in Fig. 6B, a first intermediate contour 685v may be extracted for the vertical lines by applying a vertical line pattern as a coarse contour and performing a first normal thresholding operation. The result may be a poor initial matching with further interference from the horizontal components of the pattern. Similarly, as shown in Fig. 6C, a second intermediate contour 685h may be extracted for the horizontal lines by applying a horizontal line pattern as a coarse contour and performing a second normal thresholding operation, again with similar results. The two contours may be further processed to achieve a combined extracted contour 685h-v (as shown in Fig. 6D) that fails to adequately capture the actual edge contours of the printed pattern.
[0095] Fig. 6E schematically illustrates a contour extraction of the same inspection image 651 using a customized contour image 693, consistent with embodiments of the present disclosure. A single set of transformations (such as using process 400 of Fig. 4) may generate the customized contour image 693 for both horizontal and vertical lines with pixel-level accuracy. Alternatively, a customized contour may be used as an initial coarse contour in a thresholding operation, thus eliminating the distorted edges introduced by poor initial contours and horizontal-vertical interference.
[0096] While certain features have been presented in the context of EPE measurement, embodiments of the present disclosure are not limited to this. For example, Fig. 7 schematically illustrates example use cases of contour extraction, consistent with embodiments of the present disclosure. In addition to edge placement error, the contour extraction techniques may be applied to, e.g., metrology measurements such as critical dimension (CD), overlay (OVL), linewidth roughness (LWR), line edge roughness (LER), line end shortening (LES), or other parameters in semiconductor manufacturing. For example, a critical pattern monitoring operation 710 may be used for, e.g., evaluating the overlay of repeated pattern images 751a, 751b, ..., 751x at various locations across a wafer to identify patterns or dies having a large local edge placement error (LEPE). Alternatively, the contour extraction may be applied in a dual layer contour stacking operation 720 to identify minimum overlay margins between neighboring pattern pairs. Alternatively or additionally, the contour extraction may be applied in a single layer contour stacking operation 730. Single layer stacking may be used to identify, e.g., a maximum LEPE per pattern type. In general, embodiments of the present disclosure may be applicable to any type of metrology or other operation as understood by a person of ordinary skill in the art.
[0097] Fig. 8 schematically illustrates a flowchart of an example method 800 of contour extraction, consistent with embodiments of the present disclosure. The method may be performed according to embodiments disclosed in, e.g., Figs. 1-2B and 3A-7. For example, method 800 may be performed using a controller, such as controller 109 in Fig. 1, or image acquisition unit 199 in Fig. 2B.

[0098] At step 801, an inspection image of a feature area on a sample may be acquired. The inspection image may comprise, e.g., a charged particle image. For example, the inspection image
may comprise an electron beam image, such as a SEM image. Alternatively, the inspection image may comprise an optical inspection image such as an image acquired through an optical inspection tool. In some embodiments, the inspection image may comprise, e.g., inspection image 451 of Fig. 4. In general, the inspection image may comprise a grayscale image comprising pixels of varying intensity. The feature area may correspond to, e.g., a field of view of a SEM or other inspection apparatus, a portion of a field of view, a plurality of fields of view, etc. In some embodiments, the sample may comprise, e.g., a semiconductor wafer comprising a printed circuit pattern.
[0099] At step 802, the grayscale inspection image may be aligned to a reference image in a first alignment process. The reference image may be a binary image based on, e.g., a design file used to create the printed patterns in the inspection image. In some embodiments, the reference image may comprise, e.g., reference image 481 of Fig. 4. The first alignment process may comprise deforming pattern elements in the inspection image to conform to the corresponding elements in the reference image. For example, the alignment may be performed using a transformation model comprising a neural network or other machine learning model. The first alignment process may comprise applying local as well as global deformations according to a first deformation map calculated during the first alignment process. For example, the first deformation map may correspond in form to deformation map 596 of Fig. 5. The first alignment process may generate a first grayscale aligned image. For example, the first grayscale aligned image may comprise aligned image 491 of Fig. 4.
[0100] At step 803, the first grayscale aligned image may be re-aligned to the original inspection image in a second alignment process. For example, the second alignment process may utilize the same transformation model as was used in the first alignment process. The second alignment process may generate a second grayscale aligned image by deforming pattern elements in the first grayscale aligned image to conform to the corresponding elements in the inspection image. For example, the second grayscale aligned image may comprise re-aligned grayscale image 492 of Fig. 4. A second deformation map may be calculated based on the second alignment process. For example, the second deformation map may also correspond in form to deformation map 596 of Fig. 5.
[0101] At step 804, the second deformation map may be applied to the reference image to generate a customized contour image. For example, using substantially the same transformation that was applied to transform the first grayscale image into the second grayscale image, the reference image may be transformed into a customized contour image containing pixel-level information about edge features in the patterns of the inspection image. In some embodiments, the customized contour image may comprise customized contour image 493a or 493b of Fig. 4.
[0102] At step 805, contour extraction may be performed using the customized contour image. For example, in some embodiments, contour information may be directly extracted from the customized contour image. For example, the contour information may be contained entirely in customized contour image 493a or 493b of Fig. 4. Alternatively, contour extraction may be performed by comparing the customized contour image to the inspection image. In some embodiments, the customized contour
image may be used as an initial coarse contour. Then a fine contour extraction may be performed using a thresholding operation as discussed with respect to Fig. 3B above.
[0103] At step 806, a parameter value may be measured based on the extracted contour. For example, parameter values may comprise an edge placement error value, an overlay value, a critical dimension value, a linewidth value, a linewidth roughness value, a line edge roughness value, a line end shortening value, etc. In general, the parameter value may comprise any observable dimensional parameter on a sample that may be determined using contour extraction or other edge determination. In some embodiments, measuring a parameter value may comprise performing one or more of the operations depicted at Fig. 7.
[0104] At step 807, an adjustment may be performed based on the measured parameter value. For example, the adjustment may comprise a tuning or correction to a process or apparatus involved in manufacturing the measured sample. For instance, the adjustment may be performed to a lithography or other semiconductor manufacturing apparatus or process based on the measured parameter value, such as by comparison to a target, threshold, or previously measured parameter value.
[0105] A non-transitory computer-readable medium may be provided that stores instructions for a processor of a controller (e.g., controller 109 in Fig. 1, or image acquisition unit 199 in Fig. 2B) for detecting charged particles according to, e.g., systems 301-500 of Figs. 3B-5, or the example method 800 of Fig. 8, consistent with embodiments of the present disclosure. For example, the instructions stored in the non-transitory computer-readable medium may be executed by the circuitry of the controller for performing measurements according to systems 301-500 or method 800 in part or in entirety. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a Compact Disc Read-Only Memory (CD-ROM), any other optical data storage medium, any physical medium with patterns of holes, a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a FLASH-EPROM or any other flash memory, Non-Volatile Random Access Memory (NVRAM), a cache, a register, any other memory chip or cartridge, and networked versions of the same.
[0106] Embodiments of the present disclosure may further be described by the following clauses:

1. A non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of an apparatus to cause the apparatus to perform operations comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image;
and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
2. The non-transitory computer-readable medium of clause 1, wherein the inspection image is a charged-particle beam image.
3. The non-transitory computer-readable medium of clause 1, wherein acquiring the inspection image comprises utilizing a scanning electron microscope.
4. The non-transitory computer-readable medium of clause 1, wherein the pattern region corresponds to a field of view of an inspection apparatus used to acquire the inspection image.
5. The non-transitory computer-readable medium of clause 1, wherein: the inspection image is a grayscale image; and the reference image is a binary image.
6. The non-transitory computer-readable medium of clause 5, wherein: the aligned image is a grayscale image; and the customized contour image is a binary image.
7. The non-transitory computer-readable medium of clause 1, wherein the reference image is based on a pattern file format corresponding to the pattern region.
8. The non-transitory computer-readable medium of clause 1, wherein one of the first alignment process or the second alignment process comprises a machine learning transformation model.
9. The non-transitory computer-readable medium of clause 8, wherein the machine learning transformation model comprises the inspection image and the reference image as inputs.
10. The non-transitory computer-readable medium of clause 8, wherein the machine learning transformation model comprises the aligned image and the inspection image as inputs.
11. The non-transitory computer-readable medium of clause 8, wherein the machine learning transformation model comprises an encoder-decoder network.
12. The non-transitory computer-readable medium of clause 11, wherein the encoder-decoder network comprises a deep neural network.
13. The non-transitory computer-readable medium of clause 11, wherein the encoder-decoder network comprises a plurality of initial test weightings.
14. The non-transitory computer-readable medium of clause 13, wherein the machine learning transformation model iterates one of the first alignment process or the second alignment process to optimize the plurality of initial test weightings for the encoder-decoder network.
15. The non-transitory computer-readable medium of clause 11, wherein the encoder-decoder network encodes the inspection image and the reference image into a latent space to form a first encoding.
16. The non-transitory computer-readable medium of clause 15, wherein: the deformation map comprises a second deformation map, and the encoder-decoder network decodes the first encoding to generate a first deformation map.
17. The non-transitory computer-readable medium of clause 11, wherein the encoder-decoder network encodes the aligned image and the inspection image into a latent space to form a second encoding.
18. The non-transitory computer-readable medium of clause 17, wherein the encoder-decoder network decodes the second encoding to generate the deformation map.
19. The non-transitory computer-readable medium of clause 8, wherein the machine learning transformation model comprises a loss function.
20. The non-transitory computer-readable medium of clause 19, wherein the loss function comprises the form:
$\mathcal{L}(w) = -\,CC\big(f,\; m \circ \Phi(x, w)\big) + \lambda\,\lVert \nabla \Phi(x, w) \rVert^{2}$,
wherein w represents a weighting value, λ represents a strength of a smoothness prior, Φ(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
21. The non-transitory computer-readable medium of clause 8, wherein the other of the first alignment process or the second alignment process comprises the machine learning transformation model.
22. The non-transitory computer-readable medium of clause 1, wherein: the deformation map comprises a second deformation map, and aligning the inspection image to the reference image in the first alignment process comprises generating a first deformation map.
23. The non-transitory computer-readable medium of clause 1, wherein the operations further comprise: extracting an edge location based on the customized contour image.
24. The non-transitory computer-readable medium of clause 23, wherein extracting the edge location based on the customized contour image comprises extracting the edge location directly from the customized contour image.
25. The non-transitory computer-readable medium of clause 23, wherein extracting the edge location based on the customized contour image comprises comparing the customized contour image to the inspection image.
26. The non-transitory computer-readable medium of clause 23, wherein extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and
selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
27. The non-transitory computer-readable medium of clause 1, wherein the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
28. The non-transitory computer-readable medium of clause 1, wherein the parameter value comprises a defect inspection value.
29. An inspection method, comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
30. The inspection method of clause 29, wherein the inspection image is a charged-particle beam image.
31. The inspection method of clause 29, wherein acquiring the inspection image comprises utilizing a scanning electron microscope.
32. The inspection method of clause 29, wherein the pattern region corresponds to a field of view of an inspection apparatus used to acquire the inspection image.
33. The inspection method of clause 29, wherein: the inspection image is a grayscale image; and the reference image is a binary image.
34. The inspection method of clause 29, wherein: the aligned image is a grayscale image; and the customized contour image is a binary image.
35. The inspection method of clause 29, wherein the reference image is based on a pattern file format corresponding to the pattern region.
36. The inspection method of clause 29, wherein one of the first alignment process or the second alignment process comprises a machine learning transformation model.
37. The inspection method of clause 36, wherein the machine learning transformation model comprises the inspection image and the reference image as inputs.
38. The inspection method of clause 36, wherein the machine learning transformation model comprises the aligned image and the inspection image as inputs.
39. The inspection method of clause 36, wherein the machine learning transformation model comprises an encoder-decoder network.
40. The inspection method of clause 39, wherein the encoder-decoder network comprises a deep neural network.
41. The inspection method of clause 39, wherein the encoder-decoder network comprises a plurality of initial test weightings.
42. The inspection method of clause 41, wherein the machine learning transformation model iterates one of the first alignment process or the second alignment process to optimize the plurality of initial test weightings for the encoder-decoder network.
43. The inspection method of clause 39, wherein the encoder-decoder network encodes the inspection image and the reference image into a latent space to form a first encoding.
44. The inspection method of clause 43, wherein: the deformation map comprises a second deformation map, and the encoder-decoder network decodes the first encoding to generate a first deformation map.
45. The inspection method of clause 39, wherein the encoder-decoder network encodes the aligned image and the inspection image into a latent space to form a second encoding.
46. The inspection method of clause 45, wherein the encoder-decoder network decodes the second encoding to generate the deformation map.
47. The inspection method of clause 36, wherein the machine learning transformation model comprises a loss function.
48. The inspection method of clause 47, wherein the loss function comprises the form:
$\mathcal{L}(w) = -\,CC\big(f,\; m \circ \Phi(x, w)\big) + \lambda\,\lVert \nabla \Phi(x, w) \rVert^{2}$,
wherein w represents a weighting value, λ represents a strength of a smoothness prior, Φ(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
49. The inspection method of clause 36, wherein the other of the first alignment process or the second alignment process comprises the machine learning transformation model.
50. The inspection method of clause 29, wherein: the deformation map comprises a second deformation map, and aligning the inspection image to the reference image in the first alignment process comprises generating a first deformation map.
51. The inspection method of clause 29, further comprising: extracting an edge location based on the customized contour image.
52. The inspection method of clause 51, wherein extracting the edge location based on the customized contour image comprises extracting the edge location directly from the customized contour image.
53. The inspection method of clause 51, wherein extracting the edge location based on the customized contour image comprises comparing the customized contour image to the inspection image.
54. The inspection method of clause 51, wherein extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
55. The inspection method of clause 29, wherein the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
56. The inspection method of clause 29, wherein the parameter value comprises a defect inspection value.
57. A charged particle beam apparatus, comprising: a charged particle beam source configured to generate a beam of primary charged particles; a charged particle optical system configured to direct the beam of primary charged particles at a pattern region on a wafer; a controller comprising one or more processors and configured to cause the charged particle beam apparatus to perform operations comprising: irradiating a surface of the wafer with the beam to cause charged particles to be emitted from the surface; detecting the charged particles on a charged particle detector of the charged particle beam apparatus to produce a charged particle beam inspection image of the pattern region on the wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
58. The charged particle beam apparatus of clause 57, wherein the charged particle beam apparatus comprises a scanning electron microscope.
59. The charged particle beam apparatus of clause 57, wherein the pattern region corresponds to a field of view of the charged particle beam apparatus.
60. The charged particle beam apparatus of clause 57, wherein: the inspection image is a grayscale image; and the reference image is a binary image.
61. The charged particle beam apparatus of clause 60, wherein: the aligned image is a grayscale image; and the customized contour image is a binary image.
62. The charged particle beam apparatus of clause 57, wherein the reference image is based on a pattern file format corresponding to the pattern region.
63. The charged particle beam apparatus of clause 57, wherein one of the first alignment process or the second alignment process comprises a machine learning transformation model.
64. The charged particle beam apparatus of clause 63, wherein the machine learning transformation model comprises the inspection image and the reference image as inputs.
65. The charged particle beam apparatus of clause 63, wherein the machine learning transformation model comprises the aligned image and the inspection image as inputs.
66. The charged particle beam apparatus of clause 63, wherein the machine learning transformation model comprises an encoder-decoder network.
67. The charged particle beam apparatus of clause 66, wherein the encoder-decoder network comprises a deep neural network.
68. The charged particle beam apparatus of clause 66, wherein the encoder-decoder network comprises a plurality of initial test weightings.
69. The charged particle beam apparatus of clause 68, wherein the machine learning transformation model iterates one of the first alignment process or the second alignment process to optimize the plurality of initial test weightings for the encoder-decoder network.
70. The charged particle beam apparatus of clause 66, wherein the encoder-decoder network encodes the inspection image and the reference image into a latent space to form a first encoding.
71. The charged particle beam apparatus of clause 70, wherein: the deformation map comprises a second deformation map, and the encoder-decoder network decodes the first encoding to generate a first deformation map.
72. The charged particle beam apparatus of clause 66, wherein the encoder-decoder network encodes the aligned image and the inspection image into a latent space to form a second encoding.
73. The charged particle beam apparatus of clause 72, wherein the encoder-decoder network decodes the second encoding to generate the deformation map.
74. The charged particle beam apparatus of clause 63, wherein the machine learning transformation model comprises a loss function.
75. The charged particle beam apparatus of clause 74, wherein the loss function comprises the form:
$\mathcal{L}(w) = -\,CC\big(f,\; m \circ \Phi(x, w)\big) + \lambda\,\lVert \nabla \Phi(x, w) \rVert^{2}$,
wherein w represents a weighting value, λ represents a strength of a smoothness prior, Φ(x, w) represents the deformation map, CC represents a cross-correlation calculation, f represents the reference image, and m represents the inspection image.
76. The charged particle beam apparatus of clause 63, wherein the other of the first alignment process or the second alignment process comprises the machine learning transformation model.
77. The charged particle beam apparatus of clause 57, wherein: the deformation map comprises a second deformation map, and aligning the inspection image to the reference image in the first alignment process comprises generating a first deformation map.
78. The charged particle beam apparatus of clause 57, wherein the operations further comprise: extracting an edge location based on the customized contour image.
79. The charged particle beam apparatus of clause 78, wherein extracting the edge location based on the customized contour image comprises extracting the edge location directly from the customized contour image.
80. The charged particle beam apparatus of clause 78, wherein extracting the edge location based on the customized contour image comprises comparing the customized contour image to the inspection image.
81. The charged particle beam apparatus of clause 78, wherein extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
82. The charged particle beam apparatus of clause 57, wherein the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
83. The charged particle beam apparatus of clause 57, wherein the parameter value comprises a defect inspection value.
[0107] Block diagrams in the figures may illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer hardware or software products according to various exemplary embodiments of the present disclosure. In this regard, each block in a schematic diagram may represent certain arithmetical or logical operation processing that may be implemented using hardware such as an electronic circuit. Blocks may also represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical functions. It should be understood that in some alternative implementations, functions indicated in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed or implemented substantially concurrently, or two blocks may sometimes be executed in reverse order, depending upon the functionality involved. Some blocks may also be omitted. It should also be understood that each block of the block diagrams, and combination of the blocks, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.

[0108] It will be appreciated that the embodiments of the present disclosure are not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. For example, a charged particle inspection system may be but one example of a charged particle beam system consistent with embodiments of the present disclosure.
Claims
1. A non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of an apparatus to cause the apparatus to perform operations comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
2. The non-transitory computer-readable medium of claim 1, wherein the inspection image is a charged-particle beam image.
3. The non-transitory computer-readable medium of claim 1, wherein: the inspection image is a grayscale image; and the reference image is a binary image.
4. The non-transitory computer-readable medium of claim 3, wherein: the aligned image is a grayscale image; and the customized contour image is a binary image.
5. The non-transitory computer-readable medium of claim 1, wherein the reference image is based on a pattern file format corresponding to the pattern region.
6. The non-transitory computer-readable medium of claim 1, wherein one of the first alignment process or the second alignment process comprises a machine learning transformation model.
7. The non-transitory computer-readable medium of claim 6, wherein the other of the first alignment process or the second alignment process comprises the machine learning transformation model.
8. The non-transitory computer-readable medium of claim 1, wherein: the deformation map comprises a second deformation map, and
aligning the inspection image to the reference image in the first alignment process comprises generating a first deformation map.
9. The non-transitory computer-readable medium of claim 1, wherein the operations further comprise: extracting an edge location based on the customized contour image.
10. The non-transitory computer-readable medium of claim 9, wherein extracting the edge location based on the customized contour image comprises extracting the edge location directly from the customized contour image.
11. The non-transitory computer-readable medium of claim 9, wherein extracting the edge location based on the customized contour image comprises comparing the customized contour image to the inspection image.
12. The non-transitory computer-readable medium of claim 9, wherein extracting the edge location based on the customized contour image comprises: matching the customized contour image to the inspection image to determine a coarse edge contour; acquiring a plurality of pixel intensity values from the inspection image along a line crossing the coarse edge contour; and selecting a pixel value above a predetermined threshold intensity as an edge pixel to perform a fine edge contour extraction.
13. The non-transitory computer-readable medium of claim 1, wherein the parameter value comprises one of edge placement error, critical dimension, overlay, linewidth, linewidth roughness, or line edge roughness.
14. An inspection method, comprising: acquiring an inspection image of a pattern region on a wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and
performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
15. A charged particle beam apparatus, comprising: a charged particle beam source configured to generate a beam of primary charged particles; a charged particle optical system configured to direct the beam of primary charged particles at a pattern region on a wafer; a controller comprising one or more processors and configured to cause the charged particle beam apparatus to perform operations comprising: irradiating a surface of the wafer with the beam to cause charged particles to be emitted from the surface; detecting the charged particles on a charged particle detector of the charged particle beam apparatus to produce a charged particle beam inspection image of the pattern region on the wafer; aligning the inspection image to a reference image in a first alignment process to generate an aligned image; aligning the aligned image to the inspection image in a second alignment process to generate a deformation map; applying the deformation map to the reference image to generate a customized contour image; measuring a parameter value of the pattern region based on the customized contour image; and performing an adjustment to a semiconductor manufacturing process or a semiconductor manufacturing apparatus based on the measured parameter value.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202363534114P | 2023-08-22 | 2023-08-22 | |
US63/534,114 | 2023-08-22 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2025040361A1 | 2025-02-27 |
Family
ID=92106522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2024/071028 (WO2025040361A1) | Learning-based local alignment for edge placement metrology | 2023-08-22 | 2024-07-24 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2025040361A1 |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9691586B2 (en) | 2015-03-10 | 2017-06-27 | Hermes Microvision, Inc. | Apparatus of plural charged-particle beams |
US20170021541A1 (en) | 2015-03-17 | 2017-01-26 | Edward Smith | Methods for cooling molds |
US20170021543A1 (en) | 2015-07-22 | 2017-01-26 | iMFLUX Inc. | Method of Injection Molding USing One or More Strain Gauges as a Virtual Sensor |
US20190379682A1 (en) | 2018-06-08 | 2019-12-12 | Nvidia Corporation | Protecting vehicle buses from cyber-attacks |
US20230036630A1 (en) | 2020-04-10 | 2023-02-02 | Asml Netherlands B.V. | Aligning a distorted image |
Non-Patent Citations (4)
Title |
---|
Fukuda, Kosuke, et al., "Trainable die-to-database for large field of view e-beam inspection", Journal of Micro/Nanolithography, MEMS, and MOEMS, vol. 22, no. 2, April 2023, 021004, ISSN 1932-5150, DOI: 10.1117/1.JMM.22.2.021004 * |
Hand et al., "Principles of Data Mining" (Adaptive Computation and Machine Learning), MIT Press, 2001, 578 pages |
Jebara, "Discriminative, Generative, and Imitative Learning", MIT thesis, 2002, 212 pages |
Sugiyama, "Introduction to Statistical Machine Learning", Morgan Kaufmann, 2016, 534 pages |
Similar Documents
Publication | Title |
---|---|
US9965901B2 | Generating simulated images from design information |
US20200018944A1 | SEM image enhancement methods and systems |
JP2019537839A | Diagnostic system and method for deep learning models configured for semiconductor applications |
TW202105264A | Learnable defect detection for semiconductor applications |
US7558419B1 | System and method for detecting integrated circuit pattern defects |
US20220375063A1 | System and method for generating predictive images for wafer inspection using machine learning |
KR20230007431A | Train a machine learning model to generate high-resolution images from inspection images |
US20240212131A1 | Improved charged particle image inspection |
US20250104210A1 | Method and system of defect detection for inspection sample based on machine learning model |
JP2024528451A | Method and system for anomaly-based defect inspection |
US20240331115A1 | Image distortion correction in charged particle inspection |
US20250095116A1 | Image enhancement in charged particle inspection |
WO2025040361A1 | Learning-based local alignment for edge placement metrology |
TWI876176B | Methods and apparatus for correcting distortion of an inspection image and associated non-transitory computer readable medium |
JP7459007B2 | Defect inspection equipment and defect inspection method |
WO2024165248A1 | Diversifying SEM measurement scheme for improved accuracy |
TW202425040A | Region-density based misalignment index for image alignment |
US20240062362A1 | Machine learning-based systems and methods for generating synthetic defect images for wafer inspection |
KR20250048187A | Area density-based misalignment index for image alignment |
WO2025011912A1 | Systems and methods for defect inspection in charged-particle systems |
WO2024213339A1 | Method for efficient dynamic sampling plan generation and accurate probe die loss projection |
CN119325622A | Method and system for reducing charging artifacts in inspection images |
JP2023030539A | Inspection device and inspection method |
WO2024199881A2 | A method to monitor the CGI model performance without ground truth information |
TW202509987A | Precise and accurate critical dimension measurement by modeling local charging distortion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24748395; Country of ref document: EP; Kind code of ref document: A1 |